Source: pes2o/s2orc, record 237406443 (2021), open access under CC BY: https://www.frontiersin.org/articles/10.3389/fchem.2021.720555/pdf
Green Synthesis of a New Category of Pyrano[3,2-c]Chromene-Diones Catalyzed by a Fe3O4@SiO2-Propyl Covalently Bound Dapsone-Copper Complex Nanocomposite

A nanomagnetic dapsone-Cu complex supported on silica-coated Fe3O4 (Fe3O4@SiO2-pr@dapsone-Cu) was synthesized and characterized by Fourier transform infrared spectroscopy (FT-IR), energy-dispersive X-ray analysis (EDX), X-ray diffraction (XRD), field emission scanning electron microscopy (FE-SEM), transmission electron microscopy (TEM), zeta potential measurements, vibrating sample magnetometry (VSM), and thermogravimetric analysis (TGA). This newly synthesized nanocomposite acts as a green, efficient, and recyclable Lewis acid for the multicomponent synthesis of new pyrano[3,2-c]chromene-dione derivatives through the reaction of aromatic aldehydes, indandione, and 4-hydroxycoumarin in water. All of the synthesized compounds are new and were characterized by FT-IR, NMR, and elemental analysis. This route offers advantages such as short reaction times, high productivity, economical synthesis, and the use of the green solvent H2O as the reaction medium. The catalyst is magnetically recoverable and can be reused for six runs without a decrease in efficiency.

INTRODUCTION
Multicomponent reactions (MCRs) are a useful route for the synthesis of organic compounds, combining at least three components in a one-pot domino process (Li and Chan, 1997; Grieco, 1998; Dömling and Ugi, 2000; Dömling, 2002; Hosseini-Zare et al., 2012). MCRs offer benefits such as higher atom economy and selectivity and access to complex molecules with few by-products, and they have attracted considerable interest in organic transformations (Bienaymé et al., 2000; Kandhasamy and Gnanasambandam, 2009; Müller, 2014). To the best of our knowledge, there are no reports on the use of Fe3O4@SiO2-propyl-loaded dapsone-copper as a catalyst for the synthesis of pyrano[3,2-c]chromenes via multicomponent reactions of aldehydes, indandione, and 4-hydroxycoumarin.

Synthesis of Fe3O4@SiO2-Cl
Fe3O4@SiO2 NPs were prepared as reported by Zare Fekri (Nikpassand et al., 2017).

Synthesis of Fe3O4@SiO2@dapsone
Fe3O4@SiO2-Cl MNPs (500 mg) in 50 ml distilled water were irradiated in an ultrasonic bath for 30 min. Dapsone (0.5 g) was then added and the mixture was refluxed at 110 °C for 14 h. The Fe3O4@SiO2@dapsone was collected with an external magnet, washed several times with chloroform, and dried at 80 °C for 4 h.

Synthesis of Fe3O4@SiO2@dapsone-Cu
Fe3O4@SiO2@dapsone MNPs (500 mg) in 50 ml EtOH-H2O (1:1) were irradiated in an ultrasonic bath for 30 min. Then, 20 ml of an aqueous solution of copper(I) chloride (0.1 g; 0.001 mol) was added to the Fe3O4@SiO2@dapsone and the mixture was stirred for 48 h. The Fe3O4@SiO2@dapsone-Cu MNPs were collected with a magnetic bar and washed with ethanol and then water to isolate the nanoparticles.

General Procedure for the Synthesis of Pyrano[3,2-c]chromene-Diones
A mixture of aldehyde (1.0 mmol), indan-1,3-dione (2.0 mmol), 4-hydroxycoumarin (1.0 mmol), and 0.05 g of Fe3O4@SiO2@dapsone-Cu MNPs was stirred at room temperature in 10 ml distilled water for the required reaction time, as monitored by TLC (silica gel 60 F254, ethyl acetate : n-hexane 1:4). After completion of the reaction, the catalyst was separated from the resulting mixture with a magnetic bar, washed with 10 ml ethanol, and reused. The crude products were collected and dried.
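As a quick arithmetic aid to the general procedure above, the short sketch below converts the stated millimole quantities into masses to be weighed out. The molar masses are standard literature values and the aldehyde chosen (4-nitrobenzaldehyde, the model substrate used later in the text) is only an example; none of these numbers are taken from the paper's experimental data.

```python
# Illustrative reagent-mass calculation for the general procedure
# (molar masses are standard literature values, not from the paper).

MOLAR_MASS = {                      # g/mol
    "4-nitrobenzaldehyde": 151.12,
    "indan-1,3-dione": 146.14,
    "4-hydroxycoumarin": 162.14,
}

def mass_mg(compound: str, mmol: float) -> float:
    """Mass (mg) corresponding to the given amount in mmol (g/mol * mmol = mg)."""
    return MOLAR_MASS[compound] * mmol

recipe = {                          # amounts from the general procedure, in mmol
    "4-nitrobenzaldehyde": 1.0,
    "indan-1,3-dione": 2.0,
    "4-hydroxycoumarin": 1.0,
}

for compound, mmol in recipe.items():
    print(f"{compound}: {mass_mg(compound, mmol):.1f} mg for {mmol} mmol")
# e.g. 4-nitrobenzaldehyde: 151.1 mg for 1.0 mmol
```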
Synthesis and Characterization
To prepare the nanocatalyst, Fe3O4 MNPs were first coated with silica and then modified with chloropropylsilane via chemical bonds to obtain Fe3O4@SiO2-pr. In the next step, Fe3O4@SiO2-propyl was covalently functionalized with dapsone through a substitution reaction to give Fe3O4@SiO2-propyl-loaded dapsone, which was then treated with copper chloride to produce Fe3O4@SiO2-propyl@dapsone-Cu (Scheme 1). The structure of the prepared nanocatalyst was studied and fully characterized using FT-IR, energy-dispersive X-ray analysis (EDX), XRD, zeta potential, TEM, and field emission scanning electron microscopy (FE-SEM). As shown in Figure 1 (FE-SEM and TEM), the magnetic nanoparticles are spherical, with an average diameter of 14-38 nm, and show noticeable aggregation.

The XRD pattern of the nanocomposite (Figure 3A) displays the reflections of a highly crystalline cubic spinel magnetite structure, matching the diffraction pattern of standard Fe3O4 (JCPDS 19-0629). This confirms that the crystalline phase of the magnetite core is preserved after the silica coating, condensation, and complexation steps; the absence of an amorphous halo in the pattern further confirms the crystalline structure. Using the Debye-Scherrer equation (crystallite shape factor 0.9, λ(Cu Kα1) = 1.54060 Å), the mean crystallite size was calculated from the XRD pattern as 12.1 nm. This value is lower than the particle size obtained by FE-SEM and TEM because a single particle is composed of several crystallites. The d-spacings and full widths at half maximum were also determined from the pattern.

Figure 3B shows the TGA analysis of the synthesized nanoparticles. Two weight losses are observed: the first, below 333 °C, is due to the desorption of water, and the second, at 524 °C, is due to the decomposition of the organic component (dapsone). As shown in Figure 4A, the zeta potential was also measured; the large value obtained indicates a stable dispersion of the synthesized MNPs. The zeta potential of the synthesized nanoparticles dispersed in deionized water in the absence of any electrolyte was +25.1 mV. The presence of iron, oxygen, nitrogen, carbon, silicon, sulfur, and copper in the EDX spectrum confirms the successful synthesis of these nanoparticles. The magnetic properties of the synthesized nanoparticles are shown in Figure 5; the results confirm superparamagnetic behavior.

To complete our assessment, we examined the effect of different conditions on the model reaction. For example, we treated 4-nitrobenzaldehyde, indandione, and 4-hydroxycoumarin under stirring at room temperature and under reflux in EtOH. The most satisfactory results were obtained from the reaction of 4-nitrobenzaldehyde, indandione, and 4-hydroxycoumarin in the presence of 0.05 g of Fe3O4@SiO2-propyl@dapsone-Cu in aqueous media under stirring (Table 1). To demonstrate the generality and efficiency of this route, aldehydes bearing electron-donating or electron-withdrawing substituents were treated with indan-1,3-dione and 4-hydroxycoumarin; the results are summarized in Table 2.

As a proposed mechanistic pathway, the aldehyde is first activated by the nanocatalyst; nucleophilic attack by the C-H acid indan-1,3-dione, with loss of water, then gives the chalcone. Michael addition of 4-hydroxycoumarin to the chalcone, followed by intramolecular cyclization and elimination of water, leads to product 4 (Scheme 3).
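The Debye-Scherrer estimate quoted above (mean crystallite size of 12.1 nm with shape factor 0.9 and λ = 1.54060 Å) corresponds to the relation D = Kλ/(β cos θ). The short sketch below evaluates it; the peak position and FWHM are hypothetical placeholders, since those values are not given in the text.

```python
import math

def scherrer_size_nm(two_theta_deg: float, fwhm_deg: float,
                     wavelength_angstrom: float = 1.54060, k: float = 0.9) -> float:
    """Mean crystallite size (nm) from the Debye-Scherrer equation D = K*lambda/(beta*cos(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)      # FWHM must be converted to radians
    d_angstrom = k * wavelength_angstrom / (beta * math.cos(theta))
    return d_angstrom / 10.0           # 1 nm = 10 angstrom

# Hypothetical reflection: 2-theta = 35.5 deg with FWHM = 0.70 deg
print(f"{scherrer_size_nm(35.5, 0.70):.1f} nm")   # roughly 12 nm for these placeholder inputs
```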
Furthermore, the magnetic nanoparticles are magnetically recoverable and can be reused for six runs; the appearance of the catalyst did not change after several uses (Figure 6). To better understand the stability of the catalyst after five cycles under these reaction conditions, FE-SEM and TEM analyses were carried out; the results are summarized in Figure 7.
(Scheme 2 | Multicomponent synthesis of pyrano[3,2-c]chromene-diones. Figure 6 | The recyclability of the nanocatalyst. Scheme 3 | Proposed mechanism for the synthesis of pyranochromene-diones.)

CONCLUSION
In conclusion, a new catalytic method for the synthesis of pyrano[3,2-c]chromene-diones has been developed. This method offers several advantages, such as a simple workup and purification procedure without the use of any chromatographic method, mild reaction conditions, use of inexpensive and commercially available starting materials, recyclability and reusability of the catalyst, high product yields, and short reaction times. We therefore believe that this procedure can be considered a new and useful addition to the existing methodologies in this area.

DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary files; further inquiries can be directed to the corresponding author.

AUTHOR CONTRIBUTIONS
LZ carried out the experimental studies, wrote the original draft, analyzed the spectral characterization of the synthesized molecules, and contributed to project planning, proofreading, and editing.

ACKNOWLEDGMENTS
Financial support from the Research Council of Payame Noor University, Rasht branch, is sincerely acknowledged.
Source: pes2o/s2orc, record 221719671 (2020), open access under CC BY: https://www.mdpi.com/1422-0067/21/18/6607/pdf
The Role of Metabolomics in Current Concepts of Organ Preservation

In solid organ transplantation (Tx), both survival rates and quality of life have improved dramatically over the last few decades. Each year, the number of people on the wait list continues to increase, widening the gap between organ supply and demand. Therefore, the use of extended criteria donor grafts is growing, despite their higher susceptibility to ischemia-reperfusion injury (IRI) and consecutive inferior Tx outcomes. Thus, tools to characterize organ quality prior to Tx are crucial components of Tx success. Innovative techniques of metabolic profiling have revealed key pathways and mechanisms involved in IRI occurring during organ preservation. Although large-scale trials are needed, metabolomics appears to be a promising tool to characterize potential biomarkers for the assessment of graft quality before Tx and the evaluation of graft-related outcomes. In this comprehensive review, we summarize the currently available literature on the use of metabolomics in solid organ Tx, with a special focus on metabolic profiling during graft preservation to assess organ quality prior to Tx.

Introduction
Solid organ transplantation (Tx) is the only curative treatment option for patients suffering from end-stage organ failure. Liver, kidney, heart, lung and, to some extent, pancreas and intestine Tx are incorporated into routine clinical care worldwide, and both patient and allograft survival are continuing to improve [1]. However, the growing disparity between organ supply and demand has led to the increasing use of donation after circulatory death (DCD) and extended criteria donor (ECD; aged ≥60 years or aged 50-59 years with vascular comorbidities) allografts [2][3][4], despite their higher susceptibility to ischemia-reperfusion injury (IRI) and consecutive inferior outcomes, including mortality and morbidity after Tx [5,6]. The organ preservation process is a critical link in the chain of donation and Tx, and is therefore of major interest in research aimed at providing strategies to improve Tx outcome [7]. The allograft, metabolically impaired during warm and cold ischemia (WI and CI), is further damaged by a paradoxical reperfusion injury after revascularization and re-oxygenation. Short-term and long-term complications including post-reperfusion syndrome, delayed graft function (DGF) and even immune activation have been associated with IRI [5,8]. The implementation of new storage techniques, such as machine perfusion (MP), has paved the way for the continuous supply of oxygen and substrates for the synthesis of adenosine triphosphate (ATP) and other metabolites, enabling the continuous removal of end products and stimulation of the organ's metabolism [9]. Studies have demonstrated reduced rates of DGF and improved allograft survival in machine-perfused organs compared to static cold storage (SCS) [4,[10][11][12]. Moreover, MP provides a unique opportunity to collect graft tissue and perfusate samples, as well as information regarding functional activity and flow dynamics prior to Tx. The accurate evaluation of allograft quality is essential in order to prevent unjustified donor organ rejection [13,14], and to estimate Tx outcomes. Additionally, MP offers a unique platform to facilitate intervention and modification to further optimize ECD grafts [5,15]. However, it remains unclear which temperature setting is preferable for optimal organ preservation [16].
Hypothermia slows metabolism and oxygen consumption, so that organs can survive longer without nutrient supplements, while normothermia preserves the graft in a near-physiological condition [5]. Metabolomics, first introduced in the late 1990s, is a postgenomic, high-throughput systems biology approach to diagnostic innovation in clinical medicine [17,18]. In metabolomics, a large number of metabolites (sugars, amino acids, lipids, organic acids and nucleotides) can be measured using non-chemical, non-colorimetric methods, such as gas chromatography-mass spectrometry (GC-MS), liquid chromatography-MS (LC-MS) or nuclear magnetic resonance (NMR) spectroscopy [18,19]. The advantages of these analytical approaches are their accuracy, rapidity, the small sample volume, and the possibility of simultaneous detection and quantification in a single measurement, mostly without any preselection [20]. NMR spectroscopy relies on the signals from various nuclei, including 1H, 31P and 13C, while MS involves the ionization of metabolites present in samples and separation based on the mass/charge ratio [21,22]. Moreover, MS can also be combined with chromatographic methods (LC or GC) to improve metabolite separation. These technologies allow the investigation of metabolic changes in disease models and organ physiology, including Tx [23]. In general, metabolomic approaches have been performed to monitor two key aspects of organ physiology during preservation: (i) severity of organ IRI and (ii) organ function or dysfunction [19]. Several metabolomics studies revealed altered levels of metabolites originating from the urea cycle (urea and glutamate), energy metabolism (e.g., formate, orthophosphate, ATP, lactate, pyruvate, fatty acids and carbohydrates), oxidative phosphorylation (fumarate and succinate) and oxidative stress (increased levels of reduced glutathione) in IRI [19,22]. With a better understanding of the underlying harmful metabolic processes occurring during organ preservation, ameliorating organ quality, extending storage time and even improving allograft quality prior to Tx may become clinical routine. It seems that innovative techniques, such as MP, combined with metabolomics have significant potential as a clinical tool for the assessment of preserved organs before Tx, since many potential biomarkers could be identified as metabolomics evolves. However, the current level of evidence is scarce and further studies are needed to pave the way for clinical trials. The purpose of this comprehensive review is to give an overview of the literature on the use of metabolomics in solid organ Tx, with a special focus on metabolic profiling during graft preservation in order to compare preservation methods and assess the quality of grafts prior to Tx.

Materials and Methods
This comprehensive literature review was performed by selecting articles investigating different solid organ preservation methods for Tx in which analytical techniques of metabolomics were applied. A literature search was conducted in the MEDLINE and EMBASE databases using the Medical Subject Heading (MeSH) terms "metabolomics", "heart", "lung", "kidney", "liver", "intestine", "pancreas", and "transplantation", up to and including 10 May 2020 (English articles only). All hits were screened by title and abstract by two reviewers (MK and VZ) independently. Full-text articles were then reviewed for potential eligibility. Additional articles identified through reference list screening were included.
Database-specific search strategies and a flowchart of the literature search according to the PRISMA guideline are provided as Supplementary Materials [24]. In total, 38 publications were included in this study.

Metabolomics in Heart Preservation
The shortage of donor hearts relative to the demand is an ongoing challenge, given the increasing societal burden of heart failure [25]. A large number of potential donor hearts are discarded because the short safe preservation time of 4-6 h is exceeded due to logistical reasons [26,27]. The standard SCS method is simple and cheap, but suboptimal for preserving cardiac allografts, especially in the case of ECD hearts [26]. Research is currently focusing on the development of new preservation strategies to enhance the preservation of donor hearts, extend the maximum preservation duration, and evaluate the quality of donor hearts prior to Tx [27]. MP provides continuous perfusion with oxygenated blood or preservation solution, resulting in improved metabolism compared to SCS (Table 1). Previous experiments by Peltz et al. suggested significant advantages in rat hearts preserved by hypothermic MP (HMP; 4-10 °C) over SCS, due to improved cellular ATP and energy charge levels during the ischemic period [28]. In a canine study, NMR spectroscopy analysis revealed a dramatic decrease in tissue lactate in hearts preserved with continuous HMP, with similar levels of myocardial edema [29]. After SCS, a more than five-fold increase in the lactate-to-alanine ratio was observed when compared to preservation by HMP [29]. Comparable results have been described in large animal experiments performed on pig hearts [30]. Continuous perfusion reduces the functional impairment of the myocardium and tissue lactate accumulation without increasing edema; it therefore appears to be a promising tool to improve the results of heart Tx. In 2010, Cobert et al. compared oxidative metabolism during 10 h of canine heart HMP with two commonly used extracellular-type preservation solutions (UW and Celsior®) [31]. Despite increased edema development, no detriment to the metabolic profile, analyzed by 1H NMR spectroscopy of tissue samples, was observed in the Celsior® group. Lactate/alanine ratios remained low in both groups, denoting favorable metabolic profiles during HMP and indicating primarily aerobic metabolism. Moreover, lactate accumulation in the preservation solution was low and did not increase over time in either group [31,32]. Elevated lactate/alanine ratios in the right ventricle and left atrium stood in contrast to the low ratios found in the left ventricle. These data suggested that, despite excellent left ventricular perfusion, right ventricular perfusion is reduced, and oxidative metabolism may not be maintained by retrograde HMP [32]. A previously published study investigated myocardial metabolism in isolated rat hearts during HMP [33]. The authors found that glucose, even if provided at high concentrations, is minimally effective on oxidative pathways during MP. Pyruvate appears to be a more promising exogenous substrate, as the significantly increased incorporation of labeled carbon into Krebs cycle intermediates, in positions that exclusively occur through oxidative metabolism, could be demonstrated via a 1H and 13C NMR spectroscopy approach [33]. Experiments with discarded human hearts showed that HMP could support myocardial metabolism over long periods [34].
The lactate/alanine ratios determined by 1H NMR spectroscopy were lower in all perfused hearts when compared to the SCS group, indicating ongoing oxidative metabolism and reduced intracellular lactate accumulation in the MP groups. Moreover, 31P NMR spectroscopy demonstrated more stable high-energy phosphate to inorganic phosphate ratios in the perfusion groups, indicating that HMP preservation is effective in maintaining myocardial high-energy phosphates even over 12 h of perfusion [34]. This study demonstrated that the acceptable ischemic interval of donor hearts could be increased using MP techniques; therefore, improved donor-recipient matching and extension of the donor pool could be achieved by permitting long-distance procurements. Recently, Martin et al. compared the metabolic changes during WI and CI in mice, as well as in porcine and human hearts, using LC-MS [35]. They proposed that succinate accumulation is a major feature within ischemic hearts across species, and that CI slows succinate generation, thereby reducing the tissue damage upon reperfusion caused by the production of mitochondrial reactive oxygen species (ROS). Importantly, the inevitable periods of WI during organ procurement lead to the accumulation of damaging levels of succinate, despite cooling organs as rapidly as possible [35]. Moreover, the metabolism during WI and CI was similar in hearts of different species, encouraging the development of therapies using animal models. The data suggest that preventing the oxidation and accumulation of succinate during Tx might improve the outcomes of Tx. This could pave the way towards new treatment approaches. However, there is a lack of clinical data, and more trials are needed in order to evaluate the role of metabolomics and the possibility of ameliorating graft quality prior to heart Tx.

Metabolomics in Lung Preservation
The field of lung Tx has made significant advances over the last decades. Despite these advances, morbidity and mortality remain high when compared to other types of solid organ Tx [36]. Ex vivo lung perfusion is already well established in clinical routine, and allows explanted donor lungs to be perfused and ventilated while being evaluated and reconditioned prior to Tx [36,37]. Advanced knowledge about the metabolic profile during preservation can help in finding innovative biomarkers for early allograft dysfunction (EAD), enabling timely therapeutic intervention to prevent functional decline. Currently, there is only a limited number of available metabolomics studies in lung preservation for Tx (Table 2). Pillai et al. showed, as early as 1986, the feasibility of obtaining 31P NMR spectra of porcine lungs maintained in a viable state during normothermic MP (NMP; 37 °C) with oxygenated blood [38]. During anoxia or ischemia, ATP and intracellular pH declined and inorganic phosphate increased, but all returned to control levels during subsequent normoxia or reperfusion [38]. Deep knowledge of the recovery of the lungs from anoxia and ischemia is important in order to improve preservation protocols. In 2003, Jayle et al. presented the beneficial effects of polyethylene glycol (PEG) in lung cold preservation [39]. In their study, PEG preserved porcine lungs better than UW and Euro-Collins (EC) solution.
By means of 1H NMR spectroscopy, lactate, pyruvate, citrate and acetate were only detected after reperfusion, with a reduced production of acetate and pyruvate in PEG-preserved organs indicating better mitochondrial metabolism and integrity [39]. As a result, PEG solution was able to improve the pulmonary vascular resistance and reduce leukocyte infiltration (important factors related to IRI). Peltz et al. characterized lung metabolism in rats by 13C NMR spectroscopy and suggested that glucose added to lung preservation solutions plays a minor role as an energy source [40,41]. Instead, the lung prefers to catabolize endogenous fuels during the SCS period. Adding a substrate such as pyruvate leads to multiple metabolic alterations, including the following: (i) enhanced overall oxidative metabolism, (ii) reduction of the contribution of endogenous stores, and (iii) activation of glucose and glycogen synthesis [40]. To sum up, the addition of pyruvate to Perfadex® solution increased metabolism during SCS and improved lung function after reperfusion [40,41]. The development of metabolomic techniques has allowed exploration of the influence of the preservation solution's substrate composition on graft metabolism during storage. Further research could help to extend the ischemic interval of stored lungs and improve the results of lung Tx. In 2012, Benahmed et al. assessed the tissue quality of DCD pig lungs using 1H NMR spectroscopy [42]. They identified 35 mostly upregulated metabolites over the period of SCS, indicating cellular degradation, whereas levels of glutathione decreased. During HMP, the majority of the metabolites remained stable, including glutathione. In contrast, the levels of uracil showed a reverse profile, indicating cell damage followed by oxidative stress. These results demonstrated that HMP has a positive effect on lung quality by protecting cells against oxidative disorders. Moreover, glutathione and uracil were found to be promising biomarkers for the evaluation of lung quality prior to Tx [42]. The authors described NMR as a very reliable and rapid technique, which can be simply implemented in a hospital environment. More recently, Hsin et al. revealed a small panel of metabolites in the perfusate, such as N2-methylguanosine, 5-aminovalerate, oleamide and decanoylcarnitine, that correlated highly with primary graft dysfunction (PGD). These metabolites were identified as potential biomarkers for the selection of human ECD lungs after 4 h of ex vivo lung NMP [43]. However, further validation studies are needed to confirm these findings. By identifying high-risk lung grafts, it may be possible to develop ex vivo repair strategies, using the MP platform, to render these lungs suitable for Tx.

Metabolomics in Kidney Preservation
Kidney Tx indisputably confers a significant survival advantage and a better quality of life compared to dialysis [44]. Currently, there is increasing evidence supporting the use of pulsatile MP over SCS in kidney preservation [4], but more studies are needed to compare MP and SCS. Previous studies suggested that metabolomics might be a useful method of evaluating renal medullary damage ex vivo after CI and reperfusion from tissue, plasma, urine and perfusate samples, showing more efficient results than conventional histology and biochemical analysis (Table 3). Early experimental studies on porcine kidneys compared two standard preservation solutions, UW and EC, for kidney SCS (24 and 48 h) [20,45,46].
The most relevant metabolites for evaluating kidney function after autoTx, determined by 1H NMR spectroscopy, were citrate, dimethylamine (DMA), lactate and acetate in urine, and trimethylamine-N-oxide (TMAO) in urine and plasma. While the TMAO/creatinine, DMA/creatinine, lactate/creatinine and acetate/creatinine ratios were significantly higher in kidneys stored in EC solution compared to UW solution [20], the citrate/creatinine ratio was elevated in the UW group compared to the EC group during follow-up. These findings clearly demonstrated that retrieval conditions might influence renal medulla injury, and that UW solution is more efficient in reducing renal medullary damage than EC solution, even after prolonged CI [20,45,46]. Moreover, NMR spectroscopy was able to discriminate kidneys with significant renal damage more efficiently than conventional biochemical parameters and light microscopy [46]. Previously, 1H NMR-based metabolic profiling revealed mild and severe IRI in rat kidney grafts after 24 and 42 h of SCS, respectively [47]. Significantly decreased levels of polyunsaturated fatty acids and elevated levels of allantoin, a marker of oxidative stress, were found after 42 h of SCS. TMAO, a marker of renal medullary injury, and allantoin were significantly increased, correlating with the severity of histologic damage, while serum creatinine (a commonly used end point) values were not different between Tx groups [47]. In future clinical applications, quantitative metabolomics may help to distinguish between IRI and early and chronic rejection. In 2014, Bon et al. proposed a protocol for MP perfusate metabolomics analysis as a tool for the assessment of preserved kidney quality, to reduce the number of discarded organs and optimize patient management [49]. The potential of NMR to predict graft outcome by analyzing perfusates in a DCD pig model of kidney autoTx, over 22 h of HMP, was evaluated. Levels of several metabolites, including lactate, choline, and amino acids such as valine, glycine or glutamate, increased over time, whereas there was a reduction in total glutathione during this period. The changes in these biomarkers were less severe in grafts with better functional recovery, based on lower plasma creatinine levels determined after 3 months [49]. The authors concluded that the analysis of biomarkers during kidney HMP using NMR could be an interesting tool for assessing graft quality, and is compatible with clinical application. Another study characterized the metabolic profile of porcine DCD kidneys using 1H NMR spectroscopy over 24 h of HMP, compared to traditional SCS controls [11]. The total amount of central metabolites, such as lactate, glutamate, fumarate, aspartate and acetate, observed in the HMP kidney system suggests a greater degree of de novo metabolic activity than during SCS [11]. Whilst the majority of glucose is metabolized into glycolytic endpoint metabolites, such as lactate, the presence of non-glycolytic pathway derivatives suggests that the metabolism during HMP is more complex than previously thought [50]. The maintenance of central metabolic pathways may contribute to the clinical benefits of HMP. Supplemental oxygenation during HMP was proposed to restore cellular ATP levels and ensure metabolic activity in DCD kidney grafts [48]. More recently, using NMR combined with GC-MS, Patel et al. found that 18 h of HMP of porcine DCD kidneys with a high perfusate partial pressure of oxygen (PO2; 95%) results in a greater degree of aerobic metabolism at the end of MP, compared to active aeration (21%) [51]. Darius et al. [52] investigated the metabolic, functional, structural and flow dynamic effects of low and high perfusate PO2 (30% vs. 90%) during continuous 22 h HMP in a porcine DCD kidney IRI autoTx model, confirming those findings. 1H NMR analysis was used to determine the concentrations of metabolites within the circulating perfusate at the end of the perfusion. While this animal study did not yield any advantages for early graft function after high perfusate PO2, compared to low PO2, perfusate metabolic profile analysis suggested that high perfusate PO2 conditions supported aerobic metabolism [52]. More effective MP strategies could reduce the harmful effects of IRI, hence improving the outcomes of Tx. Subsequently, NMR spectroscopy was used to examine the metabolic profile of the HMP perfusate, at 45 min and 4 h, from human cadaveric kidneys awaiting Tx [10]. In this study, promising discriminators between kidneys with DGF and those with immediate graft function (IGF) were identified. Glucose, inosine and gluconate concentrations were lower in DGF kidneys compared to IGF kidneys at both time points, while leucine concentrations were higher [10]. During kidney HMP, a significant portion of the metabolic activity persists, a mechanism that is currently poorly understood. Therefore, further research on the modification of harmful metabolic processes may improve graft-related outcomes, and consequently has the potential to modify ECD organs. Furthermore, it remains unclear how accurately the levels of perfusate metabolites reflect intracellular activity. The same research group later compared the metabolic profiles of human and porcine kidneys with regard to HMP-derived perfusate to determine whether the porcine model is a valid surrogate for human studies [53]. Out of 30 metabolites analyzed, 16 were present in comparable concentrations in the pig and human kidney perfusates. Only 3-hydroxybutyrate showed significantly different rates of concentration change [53]. It seems that pig and human kidneys during HMP are metabolically similar, confirming the pig as a valuable model for further kidney-related studies.

Metabolomics in Liver Preservation
Hypothermic resuscitation perfusion of the preserved liver was capable of restoring high-energy nucleotides even after prolonged SCS time (48 h) [54]. The metabolic profiles of adenine nucleotides demonstrated a direct correlation between high ATP content prior to Tx and improved outcome in terms of liver function, as indicated by the normalization of serum enzyme levels and prothrombin time post Tx [61]. In porcine liver grafts, the levels of nucleotide triphosphates decreased to undetectable levels during 4 h of SCS, but regenerated after 2 h of oxygenated HMP, while glycolytic intermediates (3-phosphoglycerate and 2,3-diphosphoglycerate) increased significantly during SCS and subsequently declined following HMP [55]. Cellular damage, determined by the concentrations of glycerophosphorylcholine (GPC) and glycerophosphorylethanolamine (GPE), was minimal during SCS. However, upon HMP, the levels of GPC and GPE decreased, indicating a degree of cellular damage caused by reperfusion [55].
Interestingly, prolonged SCS (24 h) was associated with a significantly reduced (approximately 40%) capacity of the liver to regenerate ATP levels during hypothermic reperfusion when compared to a shorter SCS time (2 h) [56]. Gibelin et al. assessed liver graft function in isolated perfused rat livers after 24 h of preservation in EC vs. UW solution at 4 °C [57]. The transaminase levels were similar in these groups; however, the levels of lactate, pyruvate, succinate, citrate, acetoacetate and β-hydroxybutyrate detected by proton NMR were significantly higher in the UW than in the EC group [57]. The analysis of metabolic profiles allows an efficient evaluation of liver graft preservation quality and functional recovery during reperfusion. The first study applying 1H NMR analysis to bile produced during NMP was published by Habib et al. [58]. This study revealed several changes in biliary constituents between bile produced during retrieval and during perfusion, as follows: (i) the concentrations of bile acids, lactate, glucose and phosphatidylcholine increased, while (ii) the concentration of acetate decreased. These changes were more pronounced in DCD rabbit liver grafts compared to DBD grafts, although this did not reach statistical significance [58]. These metabolites may be potential markers of the extent of WI injury and the functional activity of machine-perfused liver grafts. Previously, Fontes et al. described a new preservation modality for the liver, combining subnormothermic MP (SNMP; ~21 °C) with a hemoglobin-based oxygen carrier (HBOC) solution in a porcine orthotopic Tx model [60], analyzing over 600 tissue, perfusate and bile metabolites by GC-MS. The results revealed sustained metabolic activity (gluconeogenesis, albumin secretion, branched-chain amino acid secretion, urea production and ROS scavenging) during MP. Bile analysis over a 5-day period suggested that hydrophilic bile was secreted in the SNMP group, in contrast to the hydrophobic bile documented in the SCS group. MP at 21 °C with the HBOC solution significantly improved liver preservation compared to SCS [60]. Later, Liu et al. proposed that alanine and histidine measured by 1H NMR in the HMP perfusate estimated WI injury in porcine liver grafts, and might be potential biomarkers of liver viability [59]. More recently, the metabolomic profiles, obtained by NMR, of back-table biopsies were found to be significantly different in liver grafts with EAD [67]. The best discriminative metabolites, lactate and phosphocholine, were significantly associated with graft dysfunction, with excellent accuracy. The authors proposed the possibility of assessing the efficiency of graft resuscitation on MP by using these two markers in future studies [67]. Identifying metabolic biomarkers may enable the use of older donors and donors with longer ischemic times. The first use of NMR spectroscopy on human liver samples was reported in 2005, determining metabolic profiles before organ retrieval, during HMP and after Tx [62]. The variations revealed in donor livers were consistent in most donors. First, GPC decreased in the majority of livers, suggesting increased cell turnover. Interestingly, in the graft that developed PGD, GPC remained stable, probably reflecting a lower degree of cellular activity, and therefore this substance might be a new biomarker for liver function [62]. Bruinsma et al. demonstrated the significant potential of MP combined with metabolomics as a clinical instrument for the assessment of preserved livers [63].
They applied SNMP (21 °C) to discarded human livers and determined changes by means of metabolic profiling with GC-MS and LC-MS, observing improvements in energetic cofactors and redox shifts, as well as the reversal of ischemia-induced alterations in specific pathways, including lactate metabolism and Krebs cycle intermediates. With this metabolomics approach, livers with similar metabolic patterns clustered based on the degree of injury [63]. This could help to identify organs that are suitable for Tx and those that should be discarded. Karimian et al. compared the metabolomics of discarded steatotic human livers during 3 h of SNMP and NMP [64]. They found that steatotic livers replenish ATP stores more efficiently during SNMP than NMP. However, there is a significant depletion of glutathione during SNMP, likely due to the inability to overcome the high energy threshold needed for glutathione synthesis, highlighting the increased levels of oxidative stress in steatotic livers [64]. This study demonstrated that SNMP and NMP produce significantly different metabolomic profiles in liver grafts. More knowledge is needed to maximize the potential of both organ resuscitation techniques. Raigani et al. recently analyzed the use of NMP in combination with metabolic profiling to elucidate the deficiencies in metabolic pathways in steatotic livers [65]. During NMP, energy cofactors increased in steatotic livers to a similar extent as in normal livers, but a significant lack of antioxidant capacity, efficient energy utilization and lipid metabolism was observed. Steatotic livers appeared to oxidize fatty acids at a higher rate, but favored ketone body production rather than energy regeneration via the Krebs cycle, leading to slower lactate clearance and therefore higher transaminase levels in steatotic livers [65]. Currently, the lack of standard criteria for determining graft suitability for Tx after MP remains a significant limiting factor as regards the clinical use of discarded human livers. In 2020, Xu et al. proposed a small panel of metabolites involved in the purine pathway as promising biomarkers for the determination of human liver tissue quality before liver Tx [66]. Higher ratios of adenosine monophosphate/urate, adenine/urate and hypoxanthine/urate, as well as alanine aminotransferase, were associated with inferior graft quality (DBD vs. DCD) and outcomes (early graft function vs. EAD) post Tx. Moreover, a superior prediction ability, as compared to a combination of conventional liver function and risk markers, was proposed [66].

Conclusions
Innovative techniques of metabolic profiling, including NMR, GC-MS and LC-MS, have identified key pathways and mechanisms involved in organ damage during WI and CI, as well as in IRI occurring during solid organ preservation. A growing number of experimental studies have described metabolomics as an original and reliable method for the assessment of graft quality, one which may easily be implemented in daily hospital routine. Although large-scale trials are needed, MP combined with metabolomics appears to be a potent tool for characterizing potential biomarkers to estimate graft-related outcomes prior to Tx. Moreover, biomarkers found in the perfusate, bile or urine are advantageous over organ biopsies, being non-invasive and thus enabling more frequent and objective sampling.
The currently available evidence on metabolic profiling during graft preservation suggests improved maintenance of graft quality by HMP compared to traditional SCS, since significant metabolic activity is absent during SCS but not during HMP. Indeed, the regeneration of important metabolites, such as high-energy phosphate nucleotides, following a period of hypothermic perfusion has been proven to be feasible in large, clinically relevant animal models. HMP is able to support organ metabolism, and seems promising especially for long-term preservation. Other MP techniques (NMP and SNMP) have revealed promising results too; however, further studies are necessary, since the debate over the optimal preservation temperature continues. Moreover, more studies should focus on metabolic changes over time. In the future, we should progress toward organ-tailored preservation, whereby high-risk grafts can undergo assessment by metabolic profiling and re-conditioning prior to Tx; the maintenance of metabolic activity and organ function during preservation is therefore an important factor. There is still a need for universal analytical techniques that are able to accurately, with appropriate sensitivity and specificity, identify and quantify the complete scope of metabolites in biological samples, thus enabling implementation in routine clinical practice.
Source: pes2o/s2orc, record 196814136 (2019), open access under CC BY: https://www.nature.com/articles/s41598-019-46679-7.pdf
Enhanced activity of highly conformal and layered tin sulfide (SnSx) prepared by atomic layer deposition (ALD) on a 3D metal scaffold towards a high-performance supercapacitor electrode

Layered Sn-based chalcogenides and heterostructures are widely used in batteries and photocatalysis, but their use in supercapacitors is limited by their structural instability and low conductivity. Here, SnSx thin films are deposited directly and conformally on a three-dimensional (3D) Ni-foam (NF) substrate by atomic layer deposition (ALD), using tetrakis(dimethylamino)tin [TDMASn, ((CH3)2N)4Sn] and H2S, and the resulting composite serves as a supercapacitor electrode without any additional treatment. Two kinds of ALD-SnSx films, grown at 160 °C and 180 °C, are investigated systematically by X-ray diffractometry (XRD), Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), and transmission electron microscopy (TEM). All of the characterization results indicate that the films deposited at 160 °C and 180 °C predominantly consist of hexagonal SnS2 and orthorhombic SnS phases, respectively. Moreover, high-resolution TEM (HRTEM) analyses reveal the (001)-oriented polycrystalline layered structure of hexagonal SnS2 for the films grown at 160 °C. The double-layer capacitance of the composite SnSx@NF electrode grown at 160 °C is higher than that of the SnSx@NF electrode grown at 180 °C, while pseudocapacitive Faradaic reactions are evident for both SnSx@NF electrodes. The superior performance as an electrode is directly linked to the layered structure of SnS2. Further, the optimal thickness of the ALD-SnSx thin film is found to be 60 nm for the composite SnSx@NF electrode grown at 160 °C, as determined by controlling the number of ALD cycles. The optimized SnSx@NF electrode delivers an areal capacitance of 805.5 mF/cm2 at a current density of 0.5 mA/cm2 and excellent cyclic stability over 5000 charge/discharge cycles.

Carbon-based electrode materials mainly provide electrical double-layer capacitance in supercapacitors, while transition metal-based electrodes typically show pseudocapacitance behavior [6][7][8][9]. Transition metal dichalcogenides (TMDCs) of the form MX2, where M is a transition metal and X can be sulfur (S), selenium (Se), or tellurium (Te), are extensively studied in energy storage applications due to their high capacity and high power density, which can be attributed to their layered structures, high surface areas, and electronic conductivities [10][11][12]. Among them, the transition metal sulfides, such as molybdenum disulfide (MoS2) and tungsten disulfide (WS2), have been investigated mostly for improving supercapacitor performance 7,13,14. Tin (Sn)-based chalcogenides, such as tin(II) sulfide (SnS), tin(IV) sulfide (SnS2), Sn2S3, Sn3S4, and tin selenide (SnSe), also have various energy-related applications because of their robust structure and outstanding electrical and optical characteristics 15,16. Among them, SnS is found in an orthorhombic structure with each Sn atom bonded to six S chalcogen atoms, forming a distorted octahedral geometry with an interlayer spacing of c = 0.433 nm (JCPDS No. 39-0354). In contrast, SnS2 occurs in a hexagonal unit cell where the central Sn atom is covalently bonded to six other S chalcogen atoms in the octahedral sites of individual layers. These layers consist of three atomic planes, like the prototype cadmium iodide (CdI2), with a larger interlayer spacing (c = 0.5899 nm, JCPDS No. 23-0677).
This distinct layered structure with a larger interlayer spacing makes it suitable for the insertion and extraction of guest species, provides swelling-tolerant hosting spaces, and increases the diffusion of guest ions, including lithium (Li+), sodium (Na+), potassium (K+), hydrogen (H+), and hydroxide (OH−) ions. Owing to these unique crystallographic features, Sn chalcogenides are widely explored as active materials for lithium- and sodium-ion batteries and electrocatalytic applications [17][18][19]. However, tin sulfides (SnSx) alone have not been widely explored as active materials for supercapacitor applications, since they suffer from relatively poor electrical conductivities and structural instabilities under electrochemical conditions, resulting in limited cycling ability 7. To solve the above problems, Sn-based composites or hetero-structured electrodes with complex preparation procedures have been suggested [20][21][22][23][24][25][26]. For example, Chauhan et al. and Wang et al. synthesized SnS2/reduced graphene oxide (RGO) nanosheets and SnS2/MoS2 composites by the hydrothermal method and presented enhanced specific capacitances of 500 F/g and 220 F/g, respectively, with cycle stability up to 1000 cycles 22,24. However, most of these studies involved complex, hybrid SnSx electrodes rather than pure SnSx, which suffers from low specific capacitance and a short cycle life [21][22][23][24]. To tackle these issues, one promising approach is the direct and conformal growth of nanostructured transition metal sulfides on a three-dimensional (3D) conductive substrate. With this option, a porous substrate can provide a larger surface-to-volume ratio, which enables greater electrode/electrolyte contact. Besides, the composite electrode is binder-free, which can also contribute to enhanced electrochemical performance and an overall higher energy density. In this study, we suggest a composite of SnSx@3D Ni-foam (NF) as a promising electrode for a supercapacitor. For the direct and conformal growth of SnSx films on 3D NF, atomic layer deposition (ALD) was adopted at relatively low temperatures below 200 °C. The major advantage of ALD over other deposition techniques, such as evaporation, radio frequency (RF) sputtering, chemical vapour deposition (CVD), and spray pyrolysis [27][28][29][30][31][32][33], is its ability to deposit highly uniform thin films over a large surface area. Furthermore, ALD allows for an extremely conformal coating of different materials with excellent thickness control, which is achieved by using sequential surface chemical processing and self-limiting reactions [34][35][36][37][38][39][40][41]. ALD has been extensively researched in several energy-related fields, primarily solar photovoltaics and secondary batteries (such as Li/Na-ion, Li-S and Li-air batteries). To date, unfortunately, very few studies have examined ALD-grown films as direct active electrode materials for supercapacitors. ALD-prepared VOx, TiN, TiO2, NiO, RuO2, Co9S8 and MoS2 are among the few that have been previously investigated for supercapacitor electrodes, owing to the high uniformity and conformality of ALD on any desired multifaceted substrate [42][43][44][45][46][47][48][49][50] (Table S1). For instance, Li et al. developed an ALD process to deposit Co9S8 on porous NF as a promising electrode for supercapacitors in terms of high specific capacitance, rate capability and long-term cyclability 47.
Similarly, our recent research work further established the potential of ALD by coating a uniform and conformal MoS2 nano-layer on 3D NF, which exhibited noteworthy performance as a supercapacitor electrode 13. These studies therefore suggest the potential of this method for developing metal sulfide thin films for the direct fabrication of supercapacitor electrodes without any additional treatment. In this study, we prepared SnS- or SnS2-predominant SnSx films by controlling the deposition temperature 35,39,40 in order to comparatively investigate their performance in supercapacitors. For the first time, two kinds of ALD-SnSx films were grown directly on NF, and the composites of ALD-SnSx@NF were then tested as active electrodes without any additional treatment. The coverage of the free-standing 3D NF surface by ALD-SnSx provides a superior electrical connection and enhances the overall capacitance as a result of an additional Faradaic reaction. The fundamental layer-by-layer features of ALD can protect the SnSx film from decomposition, deformation and depletion during long-term cycling performance tests.

Results and Discussion
Physical characterization of ALD-SnSx films with deposition temperature. Phases characterized by XRD, Raman spectroscopy, and XPS. The crystallographic structure and phase of the as-grown SnSx films deposited on Si/SiO2 substrates at two different temperatures of 180 °C and 160 °C (denoted as SnSx-180 and SnSx-160) were studied using grazing incidence angle XRD (GIAXRD). The XRD patterns of SnSx-180 showed sharp peaks consistent with the orthorhombic structure of SnS (JCPDS No. 39-0354) with the space group Pnma (space group 62), and the peak of the (111) crystallographic plane at 2θ = 31.8° was far more intense than the others (Fig. 1). Conversely, the XRD results for the SnSx-160 sample exhibited diffraction patterns conforming to the hexagonal SnS2 phase, and one clear peak was observed at approximately 2θ = 15.7°, corresponding to the (001) crystallographic plane (JCPDS No. 23-0677) with the space group P-3m1 (space group 164). However, a broad hump centred at 2θ around 32.7°, corresponding to the (111) crystallographic plane of SnS, was also observed. From the XRD patterns of SnSx-160, it can be inferred that the thin film is composed of crystalline SnS2 and amorphous SnS. The lattice parameters, calculated from least-squares fitting to the Bragg peaks, are a = 4.13, b = 11.41, and c = 4.90 Å for the orthorhombic SnS (SnSx-180) and a = 3.55, b = 3.51, and c = 5.82 Å for the hexagonal SnS2 (SnSx-160). Table S2 summarizes the average crystallite size and d-spacing determined from the XRD data using the Scherrer formula and the X'Pert HighScore software. As shown in Table S2, the crystallite size of SnS increased as the deposition temperature increased to 180 °C. The shape of the crystalline grains is not exactly spherical; therefore, the values calculated from the Scherrer formula only provide a rough estimate for comparison. The interlayer spacing (d-spacing) of the SnSx films grown at 180 and 160 °C, corresponding to the strongest XRD peaks at 2θ = 31.8° and 15.7°, is found to be 0.28 and 0.59 nm, respectively. Previous studies 35 have also demonstrated that SnS film is predominantly formed above 180 °C, whereas single-phase SnS2 film is formed at a slightly lower deposition temperature of 160 °C. These findings suggest that the formation of SnS is more favourable at high temperatures, as compared to relatively low temperatures.
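The d-spacings quoted above follow from Bragg's law, d = λ/(2 sin θ), evaluated at the quoted peak positions with the Cu Kα1 wavelength. The snippet below is purely an arithmetic illustration of that relation; it does not reproduce the software-refined values reported in Table S2.

```python
import math

WAVELENGTH_NM = 0.154060  # Cu K-alpha1 wavelength, as used for the XRD measurements

def d_spacing_nm(two_theta_deg: float) -> float:
    """Interplanar spacing from Bragg's law: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH_NM / (2.0 * math.sin(theta))

for phase, two_theta in [("SnS (111), film grown at 180 C", 31.8),
                         ("SnS2 (001), film grown at 160 C", 15.7)]:
    print(f"{phase}: 2theta = {two_theta} deg -> d = {d_spacing_nm(two_theta):.2f} nm")
# -> roughly 0.28 nm and 0.56 nm for the two peak positions listed above
```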
From these and other observations, it is clear that temperature plays a crucial role in determining the phase of ALD-grown thin films 12,[34][35][36]. Interestingly, for the XRD patterns of SnSx-180 deposited on NF, a diffuse peak from orthorhombic SnS (111) was observed (Fig. S1a); however, a diffraction peak from SnS2 was not detected for SnSx-160 deposited on NF. This is attributed to the presence of sharp Ni peaks and the low mass content of the active material 13. In addition, crystalline growth of the film on Ni-foam cannot be completely ruled out, as presented and discussed in the following sections of this article. In order to gain more insight into the phase of SnSx, Raman spectra, which deliver a structural fingerprint of the material, were recorded and are shown in Figs 2 and S1b. For comparison, we used reference data from single crystals of SnS2 and SnS, because the Raman-active modes of polycrystalline thin films can be complex owing to shifting and broadening of the peaks, which is attributed to grain boundaries, extended defects, and stresses in the polycrystalline films. The Raman spectra of all the SnSx samples showed distinguishable peaks (second to fifth) located at 102 ± 3 cm−1 (Ag), 169 ± 2 cm−1 (B3g), 196 ± 2 cm−1 (B2g), and 230 ± 2 cm−1 (Ag), which correspond to first-order single-phonon transverse optical (TO) and longitudinal optical (LO) vibrational modes of SnS. The first Raman peak at 56 ± 2 cm−1 [B2g(LO)-Ag(TO)] and the last peak at approximately 313.4 cm−1 [Ag(TO) + Ag(LO2)] can be assigned to second-order multi-phonon scattering processes. In addition, the observed Raman-active peak at 56 ± 2 cm−1 belongs to a vibrational mode of the SnS phase, whereas the specific Raman peak at 313.4 cm−1 (seen only in the SnSx-160 sample) is associated with an optical phonon mode of the 2H-SnS2 polytype with hexagonal symmetry and is related to Sn-S bonding in the a-c plane 18,35. Moreover, the Raman spectra obtained from SnSx@NF (Fig. S1b) further reflect the phase-dependent growth of the SnS- and SnS2-dominant phases at the different temperatures. The Raman spectrum of the SnSx film grown at 160 °C clearly exhibited a broad and strong peak at around 318 cm−1 that is completely absent when the film is grown at 180 °C and matches well with the Raman mode of the SnS2 phase. This observation is in agreement with the XRD results for the SnSx film grown at 160 °C, which showed the formation of a hexagonal SnS2 phase. Furthermore, compared to previous Raman studies, a significant red-shift in the positions of all the Raman peaks has been detected [51][52][53]. The slight shifts in the peaks are also consistent with those previously reported for SnSx thin films deposited at higher temperatures. From this study and others, it can be expected that the change in peak position in the Raman spectra depends mainly on the exterior surface texture or roughness of the films [51][52][53]. A previous study on electrodeposited SnS by Mathew et al. revealed traces of other phases by Raman spectroscopy 54. In order to investigate the oxidation state, chemical composition, and constituent elements of the as-grown SnSx films, X-ray photoelectron spectroscopy (XPS) measurements were carried out.
The contaminants and various oxidation states present on the active materials play a crucial role in determining their fundamental properties, which can affect the electrochemical reactions 17. It is well known that Sn chalcogenides have versatile oxidation characteristics, and examining their compositions can clarify their electrochemistry. Argon ion sputtering of the sample surfaces was performed for 60 s prior to the XPS measurements to remove unwanted surface species, such as oxides and other contaminants, from the SnSx films. Inevitably, both ALD-grown SnSx films showed surface oxidation, with oxygen contents of around 10.7% for the SnSx-180 film and 7.3% for the SnSx-160 film according to the full XPS survey analysis (Fig. S2 and Table S3). In this analysis, the Sn 3d and S 2p core-level XPS spectra of SnSx@NF-180 and SnSx@NF-160 displayed clear differences (Figs 3 and S3). The binding energy (BE) values were calibrated based on the adventitious carbon 1s peak (284.8 eV). The oxidation states of Sn in SnSx were established from the deconvoluted XPS spectra of SnSx@NF-180 (Fig. 3a). The 3d5/2 transition was deconvoluted into three peaks at BE values of 483.8 eV, 485.6 eV, and 486.5 eV, corresponding to Sn oxidation states of 0, +2 and +4, respectively 11,12. The XPS analysis suggests that the dominant phase is SnS for the thin film deposited at 180 °C; however, the formation of metallic Sn in notable quantities was also evident. Similarly, the SnSx@NF-160 sample also showed a pair of doublets at BEs (3d5/2) of approximately 485.2 eV and 486.4 eV for the Sn(II) and Sn(IV) states, respectively (Fig. 3b). The XPS analysis of the sample grown at 160 °C showed an increase in the peak intensity from the Sn oxidation state of +4 and the disappearance of the Sn oxidation state of 0, which suggests the formation of a mixed phase of SnS and SnS2. The sulfur 2p core-level XPS spectra of SnSx@NF-180 and SnSx@NF-160 are shown in Fig. 3c,d. The spectrum of the SnSx@NF-180 sample showed two peaks at BEs of approximately 160.8 eV and 162.1 eV, corresponding to the S 2p3/2 and S 2p1/2 electronic states, respectively, whereas for the SnSx@NF-160 sample these peaks appeared at slightly higher BEs of 161.7 eV (S 2p3/2) and 162.9 eV (S 2p1/2). The shift in peak position is attributed to the change in the stoichiometry of the film. Thus, the XPS results also confirmed that the dominant phase of the film deposited at 180 °C is SnS (Sn2+) and that of the film deposited at 160 °C is SnS2 (Sn4+) 17,40,41. Atomic percentages of the deconvoluted Sn states, obtained from the core-level spectra using the CasaXPS software, are recorded in Table S4. The atomic percentages of the various valence states in the SnSx-180 and SnSx-160 films show that Sn with valence states of +2 and +4 is the dominant species in the SnSx-180 and SnSx-160 films, respectively, which agrees with the phase analyses by XRD. Owing to the various oxidation states of Sn (Sn4+, Sn2+, and Sn0), thermodynamically stable Sn oxides such as SnO or SnO2 have often been found on the surfaces of SnS and SnS2 55.

Morphologies and microstructures characterized by SEM and TEM. Scanning electron microscopy (SEM) images were recorded to identify morphological changes in the ALD-SnSx thin films grown at the two different deposition temperatures.
Figure 4 displays the surface morphology of the SnSx films grown at deposition temperatures of 180 °C and 160 °C on the NF substrate. It was evident that the prepared SnSx films grew uniformly over the entire NF surface at both temperatures, which highlights the potential of ALD for conformal and uniform coating on a complex structure such as NF. The SnSx@NF-180 film showed a flake-type morphology (Fig. 4c), whereas the SnSx@NF-160 film exhibited a large granule-like morphology with relatively higher surface roughness, resulting in a higher interface area between the electrode and the electrolyte during charge storage (Fig. 4f). This indicates that the surface morphology can be modified by simply adjusting the deposition temperature. Sinsermsuksakul et al. and Ham et al. reported similar temperature-dependent behaviour for ALD-SnSx films 36,40 . Moreover, in digital photographs the SnS- and SnS2-predominant ALD-SnSx films were clearly distinguishable by their appearance, SnS being grey and SnS2 golden yellow 17 . Further, an elemental analysis of SnS and SnS2 was performed using EDS (Fig. S4), where the acquired chalcogen-to-metal ratio is ~0.87 for SnSx@NF-180 and ~1.7 for SnSx@NF-160 (Table S5). In both cases, the estimated ratios clearly indicate the predominant formation of the SnS and SnS2 phases at the two deposition temperatures. The chalcogen-to-metal ratios obtained from the XPS analysis were also in line with these EDS results. Therefore, ALD-grown SnSx films at 180 °C (SnS-rich SnSx) and 160 °C (SnS2-rich SnSx) have been successfully synthesized with a distinct difference in phase and stoichiometry by changing the deposition temperature. Figure S5 shows the corresponding energy-dispersive X-ray (EDS) elemental distribution mapping for the SnSx film on the 3D NF. It confirms a uniform distribution of Sn and S in the deposited thin film on the 3D NF. It is difficult to achieve such conformal coating on NF with extreme precision of the film thickness by any other material synthesis technique; ALD therefore provides a direct and easy fabrication route for an NF-supported composite electrode for efficient supercapacitor application. The TEM analysis of SnSx@NF-160 was carried out to characterize the phase and microstructure of the SnSx thin film in detail. The images (Fig. 5a-c), selected area electron diffraction (SAED) patterns (Fig. 5d), and the EDS elemental mapping (Fig. 5e-h) were obtained. The TEM images confirmed the conformal coating of the Ni-foam substrate with 60 nm of SnSx film (Fig. S6). The high-resolution TEM images (Fig. 5c) clearly demonstrate a polycrystalline layered structure of the SnS2 film with an inter-layer spacing of approximately 0.61 nm. The lattice fringes can be indexed to the (001) plane of hexagonal SnS2, where the Sn atoms are sandwiched between two layers of hexagonally close-packed S atoms, while the adjacent sulfur layers are connected by weak van der Waals interactions 12,40 . The layered film structure could provide an increased surface area and a greater number of accessible active sites, which would allow for improved electrode/electrolyte contact and enhanced charge storage capacity 14 . The layered nature could also provide effective channels for the proper mass transport of electrolyte ions within the electroactive material, thereby enabling fast redox reactions and charge adsorption on the electrode surface 14,56 .
The polycrystalline nature of the material was further confirmed by the SAED patterns (Fig. 5d). The observed diffraction rings are from the (002) and (001) planes of the hexagonal SnS2 structure. Hence, the TEM diffraction pattern results are well matched with the XRD results. Once again, the STEM-EDS elemental mapping (Fig. 5e,f) of the SnSx@NF-160 sample confirmed the uniform distribution of elemental Sn (Fig. 5f) and S (Fig. 5g) on the Ni-foam. Electrochemical characterizations of the supercapacitor. Phase-dependent studies. In order to investigate its performance as a supercapacitor electrode, the electrochemical characterization of the SnSx@NF electrodes was performed in a three-electrode system. A 2 M KOH aqueous solution was used as the electrolyte, while standard Pt and Ag/AgCl electrodes were applied as the counter and reference electrodes, respectively. The supercapacitor performance of SnSx@NF-180 and SnSx@NF-160 at a fixed thickness of 60 nm was thoroughly investigated with cyclic voltammetry (CV), galvanostatic charge/discharge (GCD), and electrochemical impedance spectroscopy (EIS) measurements. By adjusting the number of ALD cycles, the thickness of the active electrodes was fixed at 60 nm for both SnSx@NF-180 (670 ALD cycles) and SnSx@NF-160 (500 ALD cycles). Figure 6a shows the CV curves of the SnSx@NF-180 and SnSx@NF-160 electrodes at a 10 mV/s scan rate within a potential window of 0-0.6 V. As can be seen from the figure, the CV curves of these electrodes clearly depict both electrical double layer capacitance (EDLC) and Faradaic characteristics, which renders this composite an efficient electrode for a supercapacitor 13,57 . It was also evident that the CV curve of SnSx@NF-160 exhibited a larger area than that of SnSx@NF-180, presenting the former as the superior supercapacitor electrode material compared to the latter. This enhanced performance of the SnSx@NF-160 electrode is mainly attributed to the polycrystalline layered structure of SnS2, which facilitates the rapid transport of electrolyte ions and thus enhances both the EDLC and the Faradaic contribution for this electrode 14,[58][59][60][61][62] . Figure 6b presents the charge/discharge curves of SnSx@NF grown at 180 °C and 160 °C at a current density of 0.5 mA/cm2, within the same 0-0.6 V potential window used for the CV measurements. The charge/discharge times of the SnSx@NF-160 and SnSx@NF-180 electrodes were 1676 s and 717 s, respectively. The longer charge and discharge times of the SnSx@NF-160 electrode reconfirm its enhanced charge storage ability. The areal capacitance was calculated from the charge/discharge curves of the SnSx@NF-180 and SnSx@NF-160 electrodes at different current densities (Fig. 6c). The areal capacitance of the individual SnSx@NF composite electrodes decreased with an increase in operational current density. This is due to the lower absorption/desorption or intercalation of electrolyte ions into the electrode at higher current densities: the inner active sites may not take part in the redox reaction, possibly because of the lower diffusion of ions within the electrode, with the positive K+ ions only reaching the outer surface of the electrode material. With the increase in the current rate from 0.5 mA/cm2 to 5 mA/cm2, the areal capacitance decreased considerably from 805.55 mF/cm2 to 622.22 mF/cm2 in the case of SnSx@NF-160, and from 364.44 mF/cm2 to 166.66 mF/cm2 for SnSx@NF-180.
In particular, at a current density of 0.5 mA/cm2, the SnSx@NF-160 composite revealed a higher areal capacitance of 805.55 mF/cm2 than the SnSx@NF-180 electrode (364.44 mF/cm2), owing to its unique layered structure. The results obtained with the SnSx@NF-160 composite are also better than several previous reports for other electrode materials (Table S8), as well as comparable with the areal capacitances reported earlier for ALD-grown electrodes used in supercapacitors (Table S1). These enhanced performances can be attributed mainly to the uniform and conformal coverage of the SnSx film on the high-surface-area NF, which exhibits a higher concentration of active sites. The unique layered structure also enables better electrical contact and provides large interlayer spacing for the easy migration of electrolyte ions into the electrode. To further understand the dynamics of the interfacial charges and their transfer process between the electrode and the electrolyte, electrochemical impedance spectroscopy (EIS) measurements were performed over a frequency range of 1 Hz to 50 kHz. Figure 6d presents the Nyquist plots for the SnSx@NF-180 and SnSx@NF-160 electrodes. These Nyquist plots reveal the typical characteristics of a supercapacitor. The impedance spectra of the two electrodes have a similar shape, consisting of a quasi-semicircle in the high-frequency region and an oblique straight line in the low-frequency region. On closer inspection, the magnified region in the inset of Fig. 6d confirms that the two Nyquist plots have slightly different characteristics. In the high-frequency region, a tilted straight line (diffusive behavior) is observed instead of a semicircle, which is attributed to the transport limitation of K+ ions through the distributed resistance-capacitance network of the porous nickel foam substrate. Generally, the solution resistance of the electrolyte (Rs) represents the internal resistance, and its magnitude can be evaluated from the real-axis intercept in the high-frequency range. The diameter of the semicircle in the lower-frequency region corresponds to the charge-transfer impedance associated with the pseudocapacitive behavior, whereas the slope of the straight line represents the Warburg impedance, which can be attributed to the electrolyte-ion diffusive impedance and proton diffusion inside the SnSx electrode 13 . As observed in the figure, the high-frequency region reflects the combined charge transfer resistance (Rct) and double layer capacitance at the electrode-electrolyte interface, while in the low-frequency region the Warburg impedance can be seen from the oblique straight line, which indicates electrolyte-ion or proton diffusion within the electrode. The Warburg resistance (diffusion resistance) of the SnSx@NF-180 and SnSx@NF-160 electrodes is found to be 0.36 and 0.20 ohm/s, respectively. Lower resistance values are beneficial for improving the storage capacity of the electrode materials. These results indicate that the lower Warburg resistance and the larger interlayer spacing of the layered SnS2 in the SnSx@NF-160 composite might be responsible for improving the diffusion of ionic charge carriers.
Furthermore, the diffusion coefficient D0 of the prepared electrodes in the KOH medium was calculated by applying the Randles-Sevcik equation 63 ,

$I_p = 2.69 \times 10^5\, n^{3/2}\, A\, C\, D_0^{1/2}\, v^{1/2}$,

where I_p is the peak current, n is the number of electrons transferred in the redox reaction, A is the effective area of the working electrode in cm2, C is the concentration of the diffusing species in the bulk of the electrolyte, and v is the voltage scan rate. According to the above equation, the diffusion coefficient of the electrolyte ions at the interfacial region is calculated to be 5.44 × 10−11 and 2.27 × 10−10 cm2/s for the SnSx@NF-180 and SnSx@NF-160 electrodes, respectively. The enhanced value for the SnSx@NF-160 composite is believed to arise from its layered structure, whose larger interlayer spacing provides a larger surface area and more active sites for ion transport. The enhanced performance therefore further confirms SnSx@NF-160 as the optimum electrode in this study. Thickness-dependent studies. From these analyses, it is clear that SnSx@NF-160 is the better choice for this study because it contains a predominant Sn(IV) phase with some quantity of Sn(II), which confirms the formation of SnS2 with its well-known hexagonal layered structure, together with a higher intrinsic conductivity contribution from the SnS phase. For this particular study, it can be concluded that the SnS2 phase, which was predominant in SnSx@NF-160, performs better as a supercapacitor electrode. Furthermore, the thickness of the active electrode material used in an energy storage device also plays a critical role. The performance of such a storage device is subject to degradation beyond a certain thickness of the active electrode layer. Unnecessary mass loading beyond a critical thickness may severely affect the capacitance of the device. This is caused by two factors acting simultaneously: the extra thickness does not take part in the electrochemical charge-storage process, and at the same time it adds electrical resistance to the whole electrode. In this regard, ALD can be considered one of the best techniques owing to its precise control over film thickness. The following sections of this article present an optimized thickness of SnSx@NF-160 to maximize the performance of the supercapacitor, obtained by controlling the number of ALD cycles. The CVs of SnSx@NF-160 prepared with different ALD cycle numbers and thicknesses [150 (18 nm), 300 (36 nm), 500 (60 nm) and 700 (84 nm)] are shown in Fig. 7a. The CV results suggest the presence of pseudocapacitive behaviour with apparent redox peaks. Two types of pseudocapacitive behaviour were evident: (1) a broad peak in the central region, similar to that observed in a battery-like Faradaic reaction, and (2) broad, diffuse Faradaic processes, which likely arise from the composite materials. This complex behaviour results from the combined effect of double layer capacitance (non-Faradaic) and pseudocapacitance (Faradaic) processes, possibly due to the insertion/extraction of K+ from the interlayers of SnS2 7,13,57 . To obtain further information on the effect of the voltage scan rate on the capacitive response of the electrodes, the CV results were recorded at different scan rates (Fig. 7b) for the best-performing SnSx@NF-160 sample (500 ALD cycles).
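As a hedged illustration of how D0 is extracted in practice, the sketch below fits peak currents against the square root of the scan rate and inverts the Randles-Sevcik expression given above; all numerical inputs (scan rates, peak currents, n, A, C) are invented for illustration and are not the values used in this work.

```python
# Hypothetical sketch: estimate D0 from the slope of peak current vs. sqrt(scan rate),
# following the Randles-Sevcik form quoted above. All inputs are assumed values.
import numpy as np

v = np.array([0.005, 0.01, 0.02, 0.05, 0.1])                 # scan rates (V/s)
i_p = np.array([0.40e-3, 0.55e-3, 0.75e-3, 1.20e-3, 1.70e-3])  # peak currents (A), assumed

n, A, C = 1, 1.0, 2.0e-3                         # electrons, area (cm^2), concentration (mol/cm^3)
slope, _ = np.polyfit(np.sqrt(v), i_p, 1)        # i_p = slope * v^(1/2) + intercept
D0 = (slope / (2.69e5 * n**1.5 * A * C))**2      # cm^2/s
print(f"D0 ~ {D0:.1e} cm^2/s")                   # on the order of 1e-10 for these assumed inputs
```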
The area under the redox peaks and the EDLC region increased with the scan rate, while the anodic (discharge) peak shifted towards higher voltages and the cathodic (charging) peak shifted to lower values 57 . The increase in the current response is an indication of superior kinetics, better reversibility of the interfacial Faradaic redox reactions, and a fast rate of electronic or ionic transport. Similar features were also observed in the CVs and the charge/discharge curves of all the electrodes prepared with different ALD cycle numbers (Figs S7 & S8a). To further examine the electrochemical behaviour of the SnSx@NF-160 electrodes, GCD measurements were carried out for the different numbers of ALD cycles while maintaining a potential window of 0-0.45 V (Fig. 7c). The charge/discharge time period (the total duration of a complete charging and discharging cycle) was approximately 124 s for the sample prepared with 150 ALD cycles and extended to 1676 s for the sample prepared with 500 ALD cycles. This signifies an increase in capacitance with the thickness of the SnSx thin film. However, a further increase in the number of ALD cycles (700 cycles, i.e., a thicker film) led to a drastic drop in the charge/discharge time period (to approximately 832 s). These findings correspond with the results of the CV measurements. The lower capacitance at thicknesses away from the optimal value of approximately 500 ALD cycles (i.e., for 150, 300, and 700 cycles) may be due to the blocking of electrolyte ions (K+ and OH−) and higher electrode resistance (Fig. S9). Unlike the linear charge/discharge profiles generally reported for pure EDLC, the charge/discharge profile of the SnSx@NF composite electrode shows a plateau between 0.24 V and 0.34 V on discharge, which is attributed to the Faradaic reactions. The charge/discharge profiles were symmetrical except for a slight curvature, indicating the contribution of a pseudocapacitive process along with the electric double layer capacitance 10,20 . The charge/discharge profiles of SnSx@NF-160 grown with 500 ALD cycles were measured over a current range of 0.5 mA to 10 mA (Fig. 7d). These curves were also symmetrical except for a slight curvature, which points to pseudocapacitive contributions to the total capacitance. The areal capacitance estimated from the charge/discharge profiles of SnSx@NF-160 prepared with different ALD cycle numbers (Fig. 7c) is shown in Fig. 7e. The highest areal capacitance (805.55 mF/cm2 at a current density of 0.5 mA/cm2) was observed for the SnSx thin film grown with the optimal ALD cycle number of 500. A further increase in the number of SnSx ALD cycles (700 cycles) led to a drastic decline in the capacitance (402 mF/cm2 at 0.5 mA/cm2). Additionally, the areal capacitance gradually decreased with an increase in the operational current density, and the samples prepared with more ALD cycles demonstrated steeper drops. The charge/discharge curves at different current densities for the other SnSx electrodes (150, 300, and 700 ALD cycles) are shown in Figs S10 & S8b, and a gradual decrease in the discharge time with increasing current density is evident. The reason for this fall in capacitance can be explained by two factors: the first is the low penetration of the electrolyte ions into the bulk of the deposited SnSx film above a certain critical thickness, and the second is the increase in electrode resistivity with thickness.
A thicker SnSx film with a relatively high resistivity will impede the transport of both electrons from the NF and ions from the electrolyte 13 . The CE (coulombic efficiency) of all the SnSx@NF-160 composite electrodes at different current densities (0.5-5 mA/cm2) was calculated from the ratio of the discharge time to the corresponding charge time, and the values are shown in Fig. 7f. At higher current densities (1-5 mA/cm2), the optimized SnSx@NF electrode grown with 500 ALD cycles presented CE values close to 100%. However, a lower CE (slightly above 80%) was recorded for this electrode at the low current densities of 0.5 mA/cm2 and 0.7 mA/cm2, which can be attributed to the larger share of side reactions at low current density compared with high current density. A lower CE was obtained for the other three SnSx@NF-160 electrodes with 150, 300, and 700 ALD cycles, which is consistent with the other observations above. It is well known that the total electrochemical charge stored in the electrode can be separated into a diffusion-controlled Faradaic contribution and a capacitive contribution, and a kinetic analysis was performed to differentiate the two. Generally, the Faradaic process further includes diffusion-controlled behavior from the conversion/alloying reaction and redox pseudocapacitive effects from charge transfer with surface/subsurface atoms. The individual contributions from the diffusion-controlled and capacitive processes can be obtained using the following equation 64 ,

$i(V) = k_1 v + k_2 v^{1/2}$,    (2)

where i is the current in the CV curves at a fixed potential V, v is the scan rate, and the terms $k_1 v$ and $k_2 v^{1/2}$ represent the current responses from the surface-capacitive and diffusion-controlled Faradaic processes, respectively, which can be calculated by analyzing the CV current at various scan rates. To linearize, both sides of Eq. (2) are divided by the square root of the scan rate,

$i(V)/v^{1/2} = k_1 v^{1/2} + k_2$.    (3)

Now, the values of k1 and k2 can be determined from the slope and intercept of the linear fit of i/v^{1/2} versus v^{1/2}, as shown in Fig. 8a. Thus, the current contributions from the capacitive processes and from the diffusion-controlled processes (K+ intercalation/deintercalation) can easily be distinguished quantitatively at each potential for a fixed scan rate. The contributions of the capacitive effect and the diffusion-controlled Faradaic process to the total stored charge of the SnSx@NF-160 electrode at various potentials are presented in Figs 8b and S11a. It can be seen that the double-layer charging is only slightly larger than the diffusion-controlled contribution at a low scan rate (56.46% capacitive at 5 mV/s), meaning that a significant proportion of the capacity still results from the insertion/deinsertion mechanism. The contribution from the double-layer charging process increased to 64.71, 72.17, 76.05, 80.39, 82.91 and 85.29% as the scan rate increased from 10 to 100 mV/s, suggesting the significant role of capacitive charge storage in the total capacity of the electrode. On the other hand, as the scan rate increased from 5 to 100 mV/s, the capacitance contribution from the diffusion-controlled mechanism gradually decreased from 43.53% to 14.70%. Therefore, with increasing scan rate, the capacitive current increased gradually while the intercalation (diffusion-controlled) current decreased, and vice versa.
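The following is a minimal sketch of this k1/k2 separation: at one fixed potential, i/v^{1/2} is fitted linearly against v^{1/2}, the slope giving k1 (capacitive) and the intercept k2 (diffusion-controlled). The scan rates and currents are invented illustration values, not the data behind Fig. 8.

```python
# Hypothetical sketch of the k1/k2 separation (Eq. 2-3) at a single fixed potential.
# Scan rates and currents are assumed values, used only to illustrate the procedure.
import numpy as np

v = np.array([5, 10, 20, 30, 50, 70, 100]) * 1e-3          # scan rates (V/s)
i = np.array([0.9, 1.5, 2.6, 3.6, 5.5, 7.3, 10.0]) * 1e-3  # current at the fixed potential (A)

k1, k2 = np.polyfit(np.sqrt(v), i / np.sqrt(v), 1)  # slope = k1 (capacitive), intercept = k2
i_cap = k1 * v                                       # capacitive current
i_diff = k2 * np.sqrt(v)                             # diffusion-controlled current
for vj, c, d in zip(v, i_cap, i_diff):
    print(f"v = {vj*1e3:5.1f} mV/s  capacitive share = {100*c/(c+d):5.1f} %")
```

With positive k1 and k2, the capacitive share computed this way grows monotonically with scan rate, reproducing the trend described above.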
It should be noted that at high scan rates the electrolyte ions do not have enough time for insertion into, or extraction from, the SnSx layers. Thus, the capacitive behavior at the surface of the electrodes becomes dominant, which delivers fast charge/discharge characteristics compared with the slow ion intercalation/deintercalation kinetics of the diffusion-controlled process 65 . At higher scan rates, the electrolyte ions mainly undergo adsorption/desorption at the electrode/electrolyte interface. Furthermore, the areal capacity contributions from the inner and outer surfaces of the optimized electrode (SnSx@NF-160, 500 ALD cycles) were calculated by another method, the Trasatti analysis. According to the Trasatti theory, the maximum total capacitance can be determined as the sum of the capacitances provided by the inner and the outer surface of the electrode material, which can be expressed using the following equation 66 ,

$C_{total} = C_{inner} + C_{outer}$.

Areal capacitances at different scan rates were calculated and plotted versus the square root of the scan rate (v^{1/2}) as well as the inverse square root (v^{−1/2}) in order to determine these two contributions separately. The charge stored through the insertion process and at the outer surface of the electrode depends mainly on the scan rate (v). In the first plot (Fig. S11b), the maximum areal capacity of this electrode is obtained by extrapolating the capacitance to v = 0 (intercept of the fitted straight line) in the plot of areal capacitance (C_areal) versus the square root of the scan rate (v^{1/2}), since the diffusion of electrolyte ions into the electrode is maximal when the scan rate tends to zero. On the other hand, the areal capacitance contributed by the outer surface is found from the intercept (v → ∞) of C_areal versus the reciprocal of the square root of the scan rate (v^{−1/2}) in Fig. S11c, where the diffusion of electrolyte ions into the electrode is assumed to be absent or negligible. Once these two values are obtained, the capacitance contribution of the inner part of the electrode can be calculated. The SnSx electrode grown with 500 ALD cycles presented a total capacitance of ~682 mF/cm2, and the capacitance exhibited by the outer surface was ~304 mF/cm2. Thus, the capacitance contributed by the inner surface of the electrode is ~378 mF/cm2, indicating that most of the capacitance is contributed by the inner surface regions of the electrode. It can thus be said that a larger number of active sites is present in the inner surface region of the electrode, and that the smooth intercalation of ions during the electrochemical process is significantly facilitated by the layered structure of SnS2. Cyclic stability test. The stability of the electrode material is an essential factor to be considered in judging the performance of energy storage devices. Generally, SnSx-based electrodes suffer from a short cycle life, and in this context the long-term cycling stability of the present SnSx@NF-160 composite electrode needs to be tested. Therefore, the cycling life of the optimized SnSx@NF-160 (500 ALD cycles) was further examined by 5000 consecutive charge/discharge cycles at a fixed current density of 10 mA/cm2. Figure 9 displays the capacitance retention and the corresponding CE over the 5000 cycles.
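Below is a hedged sketch of the Trasatti extrapolation just described: the intercept of C versus v^{1/2} at v = 0 is taken as the total capacitance, the intercept of C versus v^{−1/2} as the outer-surface capacitance, and their difference as the inner-surface contribution. The capacitance-versus-scan-rate values are invented for illustration and are not the measured data behind Figs S11b,c.

```python
# Hypothetical sketch of the Trasatti extrapolation described above.
# Capacitance values versus scan rate are made-up numbers for illustration only.
import numpy as np

v = np.array([5, 10, 20, 30, 50, 70, 100], dtype=float)        # scan rates (mV/s)
C = np.array([620, 580, 540, 510, 470, 440, 400], dtype=float)  # areal capacitance (mF/cm^2), assumed

# C vs v^(1/2): intercept at v -> 0 gives the maximum (total) capacitance C_total.
_, C_total = np.polyfit(np.sqrt(v), C, 1)
# C vs v^(-1/2): intercept at v -> infinity gives the outer-surface capacitance C_outer.
_, C_outer = np.polyfit(1.0 / np.sqrt(v), C, 1)

C_inner = C_total - C_outer
print(f"C_total ~ {C_total:.0f}, C_outer ~ {C_outer:.0f}, C_inner ~ {C_inner:.0f} mF/cm^2")
```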
It shows that the capacitance retention initially decreased over the first few hundred cycles, which could be attributed to saturation of the active sites on the surface of the SnSx film during the charge/discharge process. Subsequently, the capacitance retention gradually increased, followed by another nominal decrease up to 5000 cycles. This slight decrease in capacitance is mainly due to structural breakdown and delamination of the SnSx film from the NF surface in the aqueous solution; electrolyte ions may also have become trapped in the SnS2 layers during repeated charging/discharging 67 . Interestingly, excellent cycling stability was maintained, with approximately 90% capacity retention even after 5000 cycles, and the CE was above 99% during the entire cycling process. Therefore, it was confirmed that ALD-grown SnSx@NF provides greater cycling stability than previously reported SnS composite structures prepared by wet chemical synthesis and other methods 7,10,20,22-24 . As shown in the inset of Fig. 9, the electrochemical stability of the electrode was also evident from the 1st and 5000th charge/discharge profiles. It is worth noting that the shapes of the charge/discharge curves remained almost identical and nearly overlapped for the 1st and 5000th cycles, demonstrating the superior electrochemical stability of the electrode. The capacitance retention obtained in this work is higher than in most of the earlier reports on both SnS- and other TMDC-based supercapacitors (Tables S7 & S8). Such outstanding stability reflects the homogeneous and conformal deposition of a layered structure such as SnS2 on 3D NF by ALD, with a large interlayer spacing and suitable porosity. To elucidate the effects of the consecutive charge/discharge processes on the film surface and morphology, if any, and to check the adhesion between SnSx and the 3D Ni-foam, an SEM analysis was performed on the samples after the 5000 charge/discharge cycles. SEM-EDS mapping confirmed that, even after the long cycling test, the SnSx film was still uniformly present throughout the NF, which proves the strong and robust bonding between the SnSx film and the NF (Fig. 10). Further, the post-cycling SEM images in Fig. S12 show that the overall structure was retained during cycling, in spite of some irregular surface agglomeration of the initial granule-like structure 13,68,69 . Therefore, outstanding cycling stability can be achieved through the strong physicochemical bonding of dense, layer-by-layer metal sulfide films grown by ALD on NF substrates, leading to stable mechanical and electronic contact during extensive charge/discharge processes. Conclusions. Unlike other TMDCs such as molybdenum disulfide (MoS2) and tungsten disulfide (WS2), tin sulfide (SnSx) has not been widely explored as an electrode material for supercapacitors because of its structural instability, poor conductivity, and limited redox activity, which lead to a short cycle life and a lower specific capacitance. To address these issues, in this study we proposed a composite of SnSx@3D Ni-foam (NF) as a promising electrode for a supercapacitor. ALD processes using TDMASn and H2S at 180 °C and 160 °C were successfully adopted for the direct and conformal growth of SnSx films on 3D NF. Electrochemical measurements systematically proved that the SnSx@NF-160 electrode performs better than SnSx@NF-180.
The SnSx@NF-160 composite electrode demonstrated a higher areal capacitance of 805.55 mF/cm2 than SnSx@NF-180 (364.44 mF/cm2) and several other reported electrode materials. The superior performance of SnSx@NF-160 is most likely due to the layered structure of the SnS2 grown at 160 °C with its large interlayer spacing, which is supported by the XRD, XPS, and HRTEM analyses. The precise growth per cycle, along with the self-limiting nature of ALD, permits the preparation of well-controlled SnSx@NF supercapacitor electrodes with optimum coating thicknesses simply by varying the number of ALD cycles. Among the four electrodes grown with different ALD cycle numbers (150, 300, 500, and 700 cycles), the SnSx@NF-160 electrode grown with 500 ALD cycles (a thickness of approximately 60 nm) performed best. Ultra-stable cycling up to 5000 cycles with high capacity retention (>90%) and excellent coulombic efficiency (approximately 99%) proved that this composite can be a suitable candidate for such applications. This study reveals the future promise of the ALD technique for the growth of other TMDCs to maximize the potential of composites for energy storage devices. Methods. Materials synthesis. ALD-SnSx films were grown on a thermally grown silicon dioxide (SiO2)-covered silicon (Si) wafer and on 3D NF, which were used for material characterization and as the supercapacitor electrodes, respectively, in a travelling-wave-type thermal ALD reactor (Lucida D-100, NCD Technology, Korea) at 0.35 Torr. For the SnSx film deposition, commercially available tetrakis(dimethylamino)tin [TDMASn, ((CH3)2N)4Sn] and hydrogen sulfide (H2S) were used as the Sn precursor and the co-reactant, respectively. The TDMASn container was heated to 40 °C to provide sufficient vapour pressure during the SnSx film deposition. High-purity argon (Ar) gas (99.999%) was supplied at a flow rate of 100 sccm as a carrier gas, which facilitated the appropriate transfer of the precursor to the chamber. The following experimental protocol, which guarantees self-limiting film growth according to a previous study 35, was applied for the SnSx film deposition: 1 s pulsing of the TDMASn precursor, 10 s of Ar purging, 1 s pulsing of the H2S reactant gas, and 10 s of Ar gas purging. One ALD cycle consisted of these four steps, and by repeating ALD cycles a film with the desired thickness could be prepared precisely. The SnSx films were deposited at two different process temperatures, 180 °C and 160 °C, in order to deposit SnS- and SnS2-predominant films, respectively. The NF (purity > 99.99%, with excellent anti-corrosion properties) is commercially available from MTI Korea and possesses distinct features, for example light weight, high uniformity, suitable porosity (more than 95%, ~100 pores per inch, average hole diameter about 0.25 mm), intrinsic strength (lengthwise ≥1.25 N/mm2; widthwise ≥1.00 N/mm2), and high thermal, electrical, and magnetic conductivity. The SnSx films deposited at 180 °C and 160 °C on the NF are henceforth abbreviated as SnSx@NF-180 and SnSx@NF-160 in this article unless stated otherwise. Further optimization of the SnSx@NF-160 electrode was performed by varying the SnSx film thickness with four different numbers of ALD cycles (150, 300, 500, and 700).
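As a quick consistency check of the thickness control quoted above, the snippet below computes the growth per cycle implied by 60 nm after 500 cycles at 160 °C and after 670 cycles at 180 °C; the input numbers are taken directly from the text.

```python
# Small worked check of the ALD growth per cycle (GPC) implied by the quoted numbers.
def growth_per_cycle(thickness_nm, cycles):
    return 10.0 * thickness_nm / cycles   # angstrom per cycle

print(f"160 C: {growth_per_cycle(60, 500):.2f} A/cycle")   # ~1.2 A/cycle
print(f"180 C: {growth_per_cycle(60, 670):.2f} A/cycle")   # ~0.9 A/cycle
# Consistent with the 160 C thickness series quoted earlier:
# 150 cycles -> 18 nm, 300 -> 36 nm, 700 -> 84 nm (all ~1.2 A/cycle).
```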
Materials characterizations. The selected samples were examined with a plan-view scanning electron microscope (SEM, Hitachi S-4800) to characterize the morphology of the film and to confirm the conformal growth of SnSx on the 3D NF. A cross-sectional transmission electron microscopy (TEM, Hitachi HF-3300, equipped with a 300 kV accelerating voltage and a field emission gun) analysis was performed to study the microstructure of the film and the conformal, uniform deposition of the SnSx films on the 3D NF. A focused ion beam (FIB, Hitachi NB 5000) technique was used to prepare the sample for the TEM analysis. X-ray photoelectron spectroscopy (XPS, ESCALAB 250 spectrometer with an Al Kα source, at KBSI) analysis was performed to identify the oxidation state, chemical composition, and constituent elements of the as-grown SnSx films. Energy-dispersive spectroscopy (EDS), in conjunction with SEM, was used to verify the uniform elemental distribution of Sn and S on the complex 3D NF substrate. Electrochemical measurements. The electrochemical studies were conducted in a conventional three-electrode system, where the ALD-grown SnSx films on NF (1 cm × 1 cm) directly served as the working electrodes without any additional treatment. Platinum (Pt) and silver/silver chloride (Ag/AgCl) electrodes were used as the counter and reference electrodes, respectively. The electrochemical measurements of the prepared electrodes were carried out using cyclic voltammetry (CV), galvanostatic charge/discharge investigations, and electrochemical impedance spectroscopy (EIS) on a potentiostat/galvanostat electrochemical workstation (VersaSTAT 3, Princeton Applied Research, USA) with an aqueous potassium hydroxide (KOH) solution (2 M) as the electrolyte. The areal capacitances of the prepared electrodes were calculated from the discharge region of the charge-discharge profiles using the following expression 7,13 ,

$C_{areal} = \frac{I \times t_d}{\Delta V \times A}$,

where I is the discharge current, t_d is the discharge time, ΔV is the potential window, and A is the area of the electrode.
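A minimal sketch of this areal-capacitance evaluation, assuming the expression above; the current, discharge time, window and electrode area in the example are illustrative placeholders rather than measured values from this work.

```python
# Minimal sketch of the areal-capacitance evaluation from a GCD discharge branch,
# assuming C_areal = I * t_d / (dV * A). All inputs below are illustrative assumptions.
def areal_capacitance(I_mA, t_d_s, dV_V, A_cm2):
    return (I_mA * t_d_s) / (dV_V * A_cm2)   # result in mF/cm^2

print(areal_capacitance(I_mA=1.0, t_d_s=300.0, dV_V=0.45, A_cm2=1.0))  # ~667 mF/cm^2 for these inputs
```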
Expression of biomarker genes of differentiation in D3 mouse embryonic stem cells after exposure to different embryotoxicant and non-embryotoxicant model chemicals

There is a need to develop in vitro methods for testing embryotoxicity (Romero et al., 2015) [1]. We studied the progress of differentiation of D3 mouse embryonic stem cells exposed to model embryotoxicant and non-embryotoxicant chemicals through the expression of biomarker genes. We studied a set of 16 different gene biomarkers of general cellular processes (Cdk1, Myc, Jun, Mixl, Cer and Wnt3), ectoderm formation (Nrcam, Nes, Shh and Pnpla6), mesoderm formation (Mesp1, Vegfa, Myo1e and Hdac7) and endoderm formation (Flk1 and Afp). We provide dose-response data in order to derive the concentration causing either 50% or 200% of the expression of each biomarker gene. These records proved to be a valuable end-point to predict in vitro the embryotoxicity of chemicals (Romero et al., 2015) [1]. We provide the primer sequences and their respective annealing temperatures used to assay, by quantitative real-time PCR with the Power SYBR Green methodology, the expression of 6 different genes (5 biomarkers of differentiation plus a house-keeping gene). We show that the treatments do not affect the expression of the house-keeping gene, which is an essential requirement for validating the quantification of gene expression. We show dose-responses of the model chemicals that allow deriving the concentrations causing either 50% or 200% of the expression of the biomarker genes.

Data
We needed to select model chemicals with different embryotoxicity in order to develop a cellular method for testing embryotoxicity based on alterations of the differentiation of D3 mouse embryonic stem cells. We finally selected our model chemicals (Table 1) among those previously used in the pre-validation or validation study of an embryonic stem cell method sponsored by the European Union Reference Laboratory for Alternatives to Animal Testing and in other papers dealing with the development of in vitro methods for testing embryotoxicity [2-4]. We needed to assay the effect of the selected chemicals (Table 1) on the alterations of D3 cells by monitoring changes in biomarker genes. For that, we used quantitative PCR with the Power SYBR Green methodology for 5 biomarker genes (plus a house-keeping gene). We designed for this purpose the primers shown in Table 2, which also displays the annealing temperatures of these primers. In order to check whether the chemicals alter the expression of the house-keeping gene (β-actin), we determined that there were no statistically significant differences between the number of thermal cycles of control samples and of samples exposed to all the tested concentrations of all model chemicals listed in Table 1 (Scheme 1). These findings are needed in order to validate further results with the biomarker genes. We determined the effect on the expression of the biomarker genes of all the selected model embryotoxicants (Figs. 1-7). The dose-response plots were used to derive ECD50 or ECD200, which were used as end-points for enhancing the performance of embryonic stem cell methods for testing embryotoxicity [1].

Experimental design, materials and methods
D3 cells cultured in monolayer under spontaneous differentiation were exposed for 5 days to several concentrations of the strong embryotoxicants 5-fluorouracil (Fig. 1) and retinoic acid (Fig. 2); of the weak embryotoxicants 5,5-diphenylhydantoin (Fig. 3), valproic acid (Fig. 4) and LiCl (Fig. 5); and of the non-embryotoxicants saccharin (Fig. 6) and penicillin G (Fig. 7). At the end of the exposure, cells were lysed, RNA was extracted and retrotranscribed to cDNA, and each gene was amplified and quantified by quantitative real-time PCR as previously described [1,5,6], using 2^−ΔΔCt calculations [7] and β-actin as the house-keeping control gene. The expression of each gene was normalized against the expression of this same gene in the control (non-exposed) cells. The mean ± s.d. of three independent biological replicates run in the experiment is shown. (* = statistically different from control for at least p < 0.05 in Dunnett's test; ** = statistically different from control for at least p < 0.01 in Dunnett's test.) Table 2. Primer sequences and annealing temperatures used in the quantitative real-time PCR experiments with the Power SYBR Green methodology.
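For readers who want to reproduce the normalization step, the sketch below implements the standard 2^−ΔΔCt fold-change calculation with β-actin as the house-keeping gene; the Ct values are invented for illustration and are not taken from the dataset.

```python
# Hypothetical sketch of the 2^-ddCt fold-change calculation described above,
# with beta-actin as the house-keeping gene. Ct values are invented for illustration.
def fold_change(ct_gene_treated, ct_actin_treated, ct_gene_control, ct_actin_control):
    d_ct_treated = ct_gene_treated - ct_actin_treated
    d_ct_control = ct_gene_control - ct_actin_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: a biomarker gene in exposed vs. control cells (assumed Ct values).
print(fold_change(ct_gene_treated=24.1, ct_actin_treated=17.0,
                  ct_gene_control=25.6, ct_actin_control=17.1))  # ~2.6-fold induction
```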
Improved representation of groundwater at a regional scale – coupling of the mesoscale Hydrologic Model (mHM) with OpenGeoSys (OGS)

Most of the current large-scale hydrological models do not contain a physically-based groundwater flow component. The main difficulties in large-scale groundwater modeling include the efficient representation of unsaturated zone flow, the characterization of dynamic groundwater-surface water interaction and the numerical stability while preserving complex physical processes and high resolution. To address these problems, we propose a highly-scalable coupled hydrologic and groundwater model (mHM#OGS) based on the integration of two open-source modeling codes: the mesoscale Hydrologic Model (mHM) and the finite element simulator OpenGeoSys (OGS). mHM#OGS is coupled using a boundary condition-based coupling scheme that dynamically links the surface and subsurface parts. Nested time stepping allows smaller time steps for the typically faster surface runoff routing in mHM and larger time steps for the slower subsurface flow in OGS. mHM#OGS features a coupling interface which can transfer the groundwater recharge and river baseflow rate between mHM and OpenGeoSys. Verification of the coupled model was conducted using time series of observed streamflow and groundwater levels. Moreover, we force the transient model using groundwater recharge in two scenarios: (1) spatially variable recharge based on the mHM simulations, and (2) spatially homogeneous groundwater recharge. The modeling result in the first scenario has a slightly higher correlation with the groundwater head time series, which further validates the plausibility of the spatial groundwater recharge distribution calculated by mHM at the mesoscale. The statistical analysis of the model predictions shows a promising prediction ability of the model. The offline coupling method implemented here can reproduce reasonable groundwater head time series while keeping a desired level of detail in the subsurface model structure with little surplus in computational cost. Our exemplary calculations show that the coupled model mHM#OGS can be a valuable tool to assess the effects of variability in land surface heterogeneity, meteorological and topographical forcings, and geological zonation on groundwater flow dynamics.

Introduction
The significance of depicting the terrestrial water cycle as an integrated system has been continuously recognized. Historically, hydrologic models and groundwater models have been isolated and developed in parallel, with either near-surface water flow or subsurface water flow being considered. Furthermore, these models use simple bucket-type expressions together with several vertical water storage layers to describe near-surface water flow (Refsgaard and Storm, 1995; Wood et al., 1997; Koren et al., 2004; Samaniego et al., 2010). Due to limitations in computational capability, all traditional hydrologic models simplify water flow processes by ignoring lateral water flow, and thus inevitably fall short of an explicit characterization of subsurface groundwater head dynamics (Beven et al., 1984; Liang et al., 1994; Clark et al., 2015).
The implicit groundwater representations in traditional hydrologic models are inadequate in many respects. Water table depth has a strong influence on near-surface water processes such as evapotranspiration (Chen and Hu, 2004; Yeh and Eltahir, 2005; Koirala et al., 2014). Moreover, water table fluctuation has been identified as a factor affecting runoff generation, and thus affects the prediction skill of catchment runoff (Liang et al., 2003; Chen and Hu, 2004; Koirala et al., 2014). Typical hydrologic models also show inadequacies in simulating solute transport and retention at the catchment scale. Van Meter et al. (2017) found that current nitrogen fluxes in rivers can be dominated by groundwater legacies. Moreover, stream-subsurface water interactions may be significant in modulating the human and environmental effects of nitrogen pollution (Azizian et al., 2017). To assess the response of groundwater to climate change, a physically based groundwater representation including lateral subsurface flow is urgently needed (Scibek and Allen, 2006; Green et al., 2011; Ferguson et al., 2016).

On the other hand, numerous groundwater models have been developed in parallel. Groundwater models allow for both steady-state and transient groundwater flow in three dimensions with complex boundaries and a complex representation of sources and sinks. A variety of numerical codes are available, such as MODFLOW (Harbaugh et al., 2000), FEFLOW (Diersch, 2013) and OpenGeoSys (Kolditz et al., 2012). A constant challenge in groundwater modeling is the reasonable characterization of heterogeneous and non-continuous geological properties (Dagan, 1986; Renard and de Marsily, 1997; Attinger, 2003; Heße et al., 2013; Zech et al., 2015). Groundwater models usually contain a physically-based representation of subsurface physics, but fall short in providing a good representation of surface and shallow soil processes. For example, models for predicting groundwater storage change under either climate change (e.g., global warming) or human-induced scenarios (e.g., agricultural pumping) often use a constant or linear expression to represent spatially distributed recharge (Danskin, 1999; Selle et al., 2013). Groundwater numerical models may contain some packages or interfaces to simulate surface water or unsaturated zone processes (Harbaugh et al., 2000; Kalbacher et al., 2012; Delfs et al., 2012). Those packages always need extra data and a correct characterization of many topographical and geological properties. Parameterization of topographical and geological parameters is a big challenge due to the strong spatial and temporal heterogeneity and the lack of data (Moore and Doherty, 2006; Arnold et al., 2009).

With the development of computational capability and the increasing importance of responding to climate change, coupled models that link two or more hydrological components together have been attracting more and more attention. A coupled hydrological model highlights the interactions between the shallow soil column and the deep groundwater aquifer. There exist many reviews of the approaches used in coupling surface water-groundwater processes (Ebel, 2010; Fleckenstein et al., 2010; Barthel and Banzhaf, 2015). In recent years, there have been some efforts towards coupling surface hydrological models with detailed groundwater models. Maxwell and Miller (2005) coupled an LSM (Common Land Model) with a variably saturated groundwater model, ParFlow, as an integrated model, and demonstrated the need for improved groundwater representation in near-surface water schemes. Sutanudjaja et al. (2011) coupled the land surface model PCR-GLOBWB with the groundwater modeling code MODFLOW using an offline coupling scheme, and tested the coupled model in a case study of the Rhine-Meuse basin. De Graaf et al. (2015) extended this approach to the global scale. Another highly developed coupled model is GSFLOW, which is based on the integration of the USGS Precipitation-Runoff Modeling System (PRMS) and the USGS Groundwater Flow Model (MODFLOW and MODFLOW-NWT). GSFLOW has been successfully applied to many case studies (Markstrom et al., 2008; Huntington and Niswonger, 2012; Hunt et al., 2013).

In this study, we document the development of a coupled surface hydrological and groundwater model (mHM#OGS) with an overall aim of bridging the gap between catchment hydrology and groundwater hydrology at a regional scale. We chose two highly-scalable, open-source codes with strong reputations and wide popularity in their respective fields: the mesoscale Hydrologic Model mHM (Samaniego et al., 2010; Kumar et al., 2013; Samaniego et al., 2013) and the THMC simulator OpenGeoSys (Kolditz et al., 2012; Wang et al., 2009; Kalbacher et al., 2012). The coupling is achieved by mechanistically accounting for the spatio-temporal dynamics of the mHM-generated groundwater recharge and baseflow as boundary conditions to the OGS model in an offline-coupled mode. A nested time-stepping approach is used to account for differences in the time scales of near-surface hydrological and groundwater processes, i.e., smaller time steps for the typically faster surface runoff routing in mHM and larger time steps for the slower subsurface flows in OGS. We applied the coupled model mHM#OGS to a central German mesoscale catchment (850 km2), and verified the model functioning using measurements of streamflow and groundwater heads from several wells located across the study area. The coupled (surface) hydrological and groundwater model (mHM#OGS) illustrated herein is our first attempt towards the development of a large-scale coupled modeling system to analyze the spatio-temporal variability of groundwater flow dynamics at a regional scale.

The paper is structured as follows. In the next section, we describe the model concept, model structure, coupling scheme, study area and model setup used in this study. In section 3, we present the simulation results of mHM#OGS for a catchment in central Germany, where the subsurface properties are well characterized and long-term monitoring of river stage and groundwater level exists. We also assess the effects of different spatial patterns of groundwater recharge on groundwater dynamics. In the last section, conclusions and future work are discussed.
mesoscale Hydrologic Model (mHM)
The mesoscale Hydrologic Model (mHM, www.ufz.de/mhm) is a spatially explicit distributed hydrologic model that uses grid cells as the primary modeling unit and accounts for the following processes: canopy interception, snow accumulation and melting, soil moisture dynamics, infiltration and surface runoff, evapotranspiration, subsurface storage and discharge generation. A unique feature of mHM is the application of Multiscale Parameter Regionalization (MPR). The MPR method accounts for sub-grid variability of catchment physical characteristics such as topography, terrain, soil and vegetation (Samaniego et al., 2010; Kumar et al., 2013). Thanks to the MPR methodology, the model is flexible for hydrological simulations at various spatial scales. Within mHM, three levels are differentiated to better represent the spatial variability of state and input variables. Detailed descriptions of MPR as well as the formulations governing the hydrological processes can be found in Samaniego et al. (2010) and Kumar et al. (2013).

OpenGeoSys (OGS)
OpenGeoSys (OGS) is an open-source project with the aim of developing robust numerical methods for the simulation of Thermo-Hydro-Mechanical-Chemical (THMC) processes in porous and fractured media. OGS is written in C++ with a focus on the finite element analysis of coupled multi-field problems. Parallel versions of OGS are available based on both MPI and OpenMP concepts (Wang et al., 2009; Kolditz et al., 2012; Wang et al., 2017). OGS has been successfully applied in different fields like water resources management, hydrology, geothermal energy, energy storage, CO2 storage, and waste deposition (Kolditz et al., 2012; Shao et al., 2013; Gräbe et al., 2013; Wang et al., 2017). In the field of hydrology/hydrogeology, OGS has been applied to various topics such as regional groundwater flow and transport (Sun et al., 2011; Selle et al., 2013), contaminant hydrology (Beyer et al., 2006; Walther et al., 2014), reactive transport (Shao et al., 2009; He et al., 2015), and sea water intrusion (Walther et al., 2012), etc.

Structure of the coupled model
The coupled model mHM#OGS was developed to simulate coupled surface-water and groundwater (SW/GW) flow in one or more catchments by simultaneously calculating flow across the land surface and within subsurface materials. mHM#OGS simulates flow within three hydrological regions. The first region is bounded above by the plant canopy and below by the bottom of the soil zone. The second region includes open-channel water, such as streams and lakes. The third region is the saturated groundwater aquifer. mHM is used to simulate the processes in the first and second regions, while OGS is used to simulate the hydrological processes in the third region. The model development is guided by the following principles:

- Solve the governing equations for surface water and groundwater flow using a sequential boundary condition switching technique.
- Calculate model-wide and detailed water balances in both time and space.
- Use a nested time stepping method to allow different time steps in the surface and subsurface regimes.
- Allow different cell sizes in the spatial discretization of the grid cell resolution used for mHM and the finite-element mesh used for OGS.
- Allow different model boundary definitions using standard specified-head, specified-flow, or head-dependent boundary conditions.
An integrated workflow for coupled modeling is illustrated in Figure 1c. The entire modeling workflow is separated into three independent parts. The first part is the data preparation and pre-processing part, marked by the blue background in Figure 1c. This part includes several codes for data preparation and model pre-processing developed by the mHM and OGS communities (Fischer et al., 2015; Kolditz et al., 2016). The second part is the model coupling, which is composed of the respective computations of mHM and OGS and the data communication between the two codes (see the components marked by the red background in Figure 1c). Technically, we use a self-developed data communication code, GIS2FEM, to convert data formats and exchange information. A detailed illustration of this part is given in the following subsection. The third part is the water budget, which is designed to calculate an overall water balance, as well as component-based water budgets for all storages simulated by mHM#OGS (marked by the light green background in Figure 1c).

Boundary condition-based coupling
The subsurface flow equation is solved in OGS. OGS applies a standard (centered) Galerkin finite element method to discretize the PDEs. Here we list the governing equations of groundwater flow in saturated zones used in this study. Saturated groundwater flow can be expressed as

$S_s \frac{\partial \psi_p}{\partial t} + \nabla \cdot \mathbf{q} = q_s + q_e, \qquad \mathbf{q} = -K_s \nabla (\psi_p + z)$,    (1)

where S_s is the specific storage coefficient [1/L], ψ_p is the pressure head in the porous media [L], t is time [T], q is the Darcy flux [LT−1], q_s is a specified source/sink rate [T−1], q_e is the exchange rate with the surface water [T−1], K_s is the saturated hydraulic conductivity [LT−1], and z is the elevation head [L]. A prescribed flux is imposed on the boundary Γ,

$\mathbf{q} \cdot \mathbf{n} = \bar{q}$ on Γ,    (2)

where n is the outer normal of the boundary surface.

The surface flow over the catchment, in either the hillslopes or the open channels, can be expressed using a kinematic-wave equation (an approximation of the Saint-Venant equations). The kinematic-wave equation for flow in streams can be expressed as

$\frac{\partial \psi_s}{\partial t} + \nabla \cdot (\mathbf{v}\, \psi_s) = q_e^{s}$,    (3)

where t is time [T], v is the averaged velocity vector [LT−1], ψ_s is the surface water depth [L], and q_e^s is the exchange rate with the subsurface water [LT−1]. In mHM, a Muskingum-Cunge method is used to solve Eq. (3) (Te Chow, 1988).

The kinematic-wave approximation assumes that the gravitational forces are balanced by the frictional forces, such that

$S_f = S_o$,    (4)

where S_o is the slope of the channel [-] and S_f is the friction slope of the channel [-]. This assumption is used because the potential areas of application of this model would hardly exhibit abrupt hydrographs with supercritical flows.

Manning's equation (Te Chow, 1988) is used to calculate the depth-averaged velocity,

$v = \frac{1}{m}\, \psi_s^{2/3} S_f^{1/2}$,    (5)

where m is the Manning roughness coefficient [L−1/3 T]. As an empirical formula, Manning's formula has been widely used in surface water flow models.

The source terms q_e and q_e^s in Eq. (1) and Eq. (3) explicitly represent the communication of water between the surface and subsurface flow compartments (Camporese et al., 2010). The surface-to-subsurface flux q_e is determined after solving the surface routing equation, Eq. (3), and is then fed to the subsurface flow equation, Eq. (1), while the subsurface-to-surface feed q_e^s is determined by solving the subsurface flow equation and is then fed to the surface water equation. Note that the time step of the subsurface flow is always larger than that of the surface flow, and that a poor water balance might occur if more than 30-50 surface time steps fall within one subsurface time step (Camporese et al., 2010). Here we use a nested time stepping method to avoid this water balance problem (see the following paragraph). In addition, we use the original linear groundwater storage of mHM to calculate the daily q_e.

Nested time stepping is adopted in this study in order to calculate fast surface and slow subsurface flow simultaneously. As reported by Cunge (1969), the Muskingum-Cunge scheme used to solve the surface flow equations is unconditionally stable provided some prerequisites are met, such as a proper grid size and time step size. The subsurface solver in OGS is, however, implicit in time and limited by less restrictive precision constraints. We use a nested time stepping method to calculate daily surface processes and monthly subsurface processes in a sequential manner. This strategy automatically adapts to any difference in stepping size and avoids water balance errors in the flux exchange between the two modules.

Conversions between volumetric flux [L3/T], specific flux [L/T], and water head [L] are performed by adjusting for the different time steps or cell sizes. Specifically, the time series of groundwater recharge obtained from mHM are fed to the model interface of mHM#OGS. After reading the raster file of groundwater recharge, the interface assigns the proper recharge value to the top surface elements of the OGS mesh by checking the coordinates of the centroid of each top surface element. For each surface element, if its centroid is within a grid cell of the raster file, the value of this grid cell is assigned to the surface element. After all top surface elements have been processed, the elements that have been assigned recharge values are involved in the face integration calculation, whereby the recharge data is converted into nodal source terms (see details in Figure 2). The parameter set used in this study is shown in Table 1.

Study area and model setup
We use a mesoscale catchment upstream of Naegelstedt, located in central Germany, with a drainage area of about 845 km2, to verify our model (see Figure 3). The Naegelstedt catchment comprises the headwaters of the Unstrut river basin. The Unstrut river basin is a sedimentary basin of the Unstrut river, a left tributary of the Saale. It was selected in this study because there are many groundwater monitoring wells operated by the Thuringian State Office for the Environment and Geology (TLUG) and the Collaborative Research Center AquaDiva (Küsel et al., 2016). Morphologically, the terrain elevation within the catchment ranges between 164 m and 516 m, whereby the higher regions are in the west and south as part of the forested hill chain of the Hainich (see Figure 3). The Naegelstedt catchment is one of the most intensively used agricultural regions in Germany. In terms of water supply, about 70% of the water requirement is satisfied by groundwater (Wechsung, 2005). About 17% of the land in this region is forested, 78% is covered by crop & grassland and 4% is housing and transport area. The mean annual precipitation in this area is about 660 mm.
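To make the nested time stepping concrete, the following is a minimal, hypothetical sketch of the coupling loop described above: many small (daily) surface steps are accumulated into one large (monthly) subsurface step. The step functions are stand-ins for illustration only and do not correspond to the actual mHM or OGS interfaces.

```python
# Minimal sketch of the nested time-stepping loop: daily surface steps feed a monthly
# subsurface step. The step functions below are hypothetical stand-ins, not real APIs.

def mhm_daily_step(day):
    """Stand-in for one daily mHM step; returns recharge and baseflow (mm/day)."""
    return {"recharge": 0.3, "baseflow": 0.25}   # assumed constant values for illustration

def ogs_monthly_step(recharge_mm, baseflow_mm):
    """Stand-in for one monthly OGS step forced by the accumulated exchange fluxes."""
    print(f"OGS step: recharge = {recharge_mm:.1f} mm, baseflow = {baseflow_mm:.1f} mm")

def couple_one_month(days_in_month=30):
    recharge, baseflow = 0.0, 0.0
    for day in range(days_in_month):            # fast surface processes, daily resolution
        fluxes = mhm_daily_step(day)
        recharge += fluxes["recharge"]
        baseflow += fluxes["baseflow"]
    ogs_monthly_step(recharge, baseflow)         # slow subsurface flow, monthly resolution

couple_one_month()
```

Because the exchange fluxes are accumulated over the whole subsurface step rather than sampled, this pattern conserves the exchanged water volume regardless of how many surface steps fall within one subsurface step.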
( 1), meanwhile the subsurface-tosurface feed q e is determined by solving the subsurface flow equation for the following feed to the surface water equation.Note that the time step of subsurface flow is always larger than that of surface flow.Note that poor water balance might occur if more than 30-50 surface time steps exist in one subsurface time step (Camporese et al., 2010).Here we use a nested time stepping method to avoid the water balance problem (see the following paragraph).Besides, we use the original linear groundwater storage in mHM to calculate daily q e . Nested time stepping is adopted in this study in order to calculate fast surface and slow subsurface flow simultaneously.As reported by Cunge (1969), the Muskingum-Cunge used to solve surface flow equations is unconditionally stable in case some perquisites being meet, such as proper grid size and time step size.The subsurface solver in OGS is however, implicit in time and limited by less restrictive precision constraints.We use a nested time stepping method to calculate daily surface processes and monthly subsurface processes in a sequential manner.This strategy automatically fits to any stepping size difference, and avoids water balance error in flux exchange between two modules. Conversions between volumetric flux ([L 3 /T]), specific flux ([L/T]), and water head ([L] ) are performed by adjusting different time steps or cell sizes.Specifically, the time series of groundwater recharge obtained from mHM were fed to the model interface of mHM#OGS.After reading raster file of groundwater recharge, the interface assign the proper recharge value to the top surface elements of OGS mesh by checking the coordinates of the centroid of each top surface element.For each surface element, if its centroid is within a grid cell of the raster file, the value of this grid cell is assigned to the surface element.After all top surface elements have been processed, the elements that have been assigned with the recharge values are involved in the face integration calculation, whereby the recharge data is converted into nodal source terms (see details in Figure 2).The parameter set used in this study is shown in Table 1. 12 End of simulation -Close all input files and processes. Study area and model setup We use a mesoscale catchment upstream of Naegelstedt catchment located in central Germany, with a drainage area of about 845 km 2 to verify our model(see Figure 3).The Naegelstedt catchment comprises the headwaters of the Unstrut river basin.The Unstrut river basin is a sedimentary basin of Unstrut river, a left tributary of the Saale.It was selected in this study because there are many groundwater monitoring wells operated by Thuringian State office for the Environment and Geology (TLUG) and the Collaborative Research Center AquaDiva (Küsel et al., 2016).Morphologically, the terrain elevation within the catchment is in a range of 164 m and 516 m, whereby the higher regions are in the west and south as part of the forested hill chain of the Hainich (see Figure 3).The Naegelstedt catchment is one of the most intensively used agricultural regions in Germany.In terms of water supply, about 70% of the water requirement is satisfied by groundwater (Wechsung, 2005).About 17% of the land in this region is forested area, 78% is covered by crop & grassland and 4% is housing and transport area.The mean annual precipitation in this area is about 660 mm. 
Meteorological forcings and morphological properties

We started the modeling by performing the daily simulation of mHM to calculate near-surface and soil-zone hydrological processes. Several resolutions ranging from 200 m to 2 km are applied in mHM to account for different scales of spatial heterogeneity. The main consolidated-rock aquifer units are the upper Muschelkalk (mo), middle Muschelkalk (mm) and lower Muschelkalk (mu). According to a previous geological survey (Seidel, 2004), even the same sub-unit of Muschelkalk can have diverse hydraulic properties depending on its position and depth. The units are therefore further divided into sub-units with higher permeability (see mo1, mm1 and mu1 in Figure 4) and sub-units with lower permeability (see mo2, mm2 and mu2 in Figure 4). The uppermost layer, with a depth of 10 m, is set as a soil layer, whereby the hydraulic properties are set the same as in the mHM setting (see soil in Figure 4). An alluvium layer is set along the mainstream and major tributaries, representing gravel and stream deposits.

Boundary conditions

Based on the steep topography along the watershed divides, groundwater is assumed to be naturally separated and not able to pass across the boundaries of the watershed. No-flow boundaries were imposed at the outer perimeters surrounding the basin as well as at the lower aquitard.

The stream network was delineated using a digital elevation model (DEM)-based pre-processor, and then clipped based on a threshold correlated with field observations. In general, all streams are regarded as perennial in this study, except for those in the mountainous region where flow is intermittent. The stream network is determined based on the long-term average of accumulated routed streamflow (see Figure 5). We cut off the small tributaries for which the long-term average of accumulated routed streamflow is below a threshold. In other words, we neglect the intermittent streamflow in the upper stream reaches (see the lower left graph in Figure 5).

Calibration

Calibration of the integrated model was conducted using two different conceptual scenarios in order to test the effect of the spatial heterogeneity evaluated by the Multiscale Parameter Regionalization (MPR) method in mHM. For the first conceptual scenario (SC1), the spatial variability of physiographic characteristics is characterized by the MPR method in mHM. The heterogeneous groundwater recharge distribution is determined within the MPR framework. For the second scenario (SC2), we kept the total amount of groundwater recharge of the catchment the same as in SC1 at every time step, but used a homogeneously distributed groundwater recharge. SC2 does not consider the complex spatial variability of groundwater recharge caused by variations in climatic conditions, land use, topography and geological heterogeneity.

The coupled model was calibrated following a two-step procedure. In the first step, mHM was calibrated independently of OGS for the period 1970-2005 by matching observed runoff at the outlet of the catchment. The first 5 years are used as a "spin-up" period to set up the initial conditions in the near-surface soil zone. The calibration is a consecutive workflow in which the parameters that affect potential evapotranspiration, soil moisture, runoff and shallow subsurface flow were calibrated first, until the convergence criterion was met. The calibrated mHM model is also verified against measurements from a single eddy-covariance station in the study area. The calibration goodness was assessed by means of the Nash-Sutcliffe efficiency (NSE).
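For reference, a minimal sketch of the Nash-Sutcliffe efficiency used as the calibration metric; the function name and the series are illustrative, not taken from the calibration scripts.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - sum of squared errors divided by the
    variance of the observations (1 is a perfect fit)."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# illustrative monthly discharge series [m^3/s]
obs = np.array([12.0, 18.0, 25.0, 16.0, 10.0, 8.0, 7.0, 9.0, 14.0, 20.0, 22.0, 15.0])
sim = np.array([11.0, 19.0, 23.0, 17.0, 11.0, 8.5, 7.5, 9.5, 13.0, 19.0, 23.0, 14.0])
print(f"NSE = {nse(sim, obs):.3f}")
```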
In the second step, OGS is run independently of mHM in steady state using long-term averaged outer forcings. The spatially distributed but long-term averaged recharge estimated by mHM was fed as the steady-state boundary condition. The long-term average baseflow rate estimated by the mHM simulations was also used as a boundary condition at the stream beds. The groundwater simulations are evaluated using the Pearson correlation coefficient R_cor and the inter-quantile range error, QRE = (IQ_md^75-25 - IQ_dt^75-25) / IQ_dt^75-25, where IQ_md^75-25 and IQ_dt^75-25 are the inter-quantile ranges of the time series of the modeling result and the observations, respectively.

Spatial-temporal dynamics of recharge and baseflow

Groundwater recharge behaves irregularly, depending on the sporadic, irregular, and complex features of storm rainfall occurrences, the geological structure, and morphological features. The temporal and spatial variability of groundwater recharge is estimated by the mHM calculation over a period of 30 years, from 1975 to 2005. The greatest point-wise monthly groundwater recharge varies from 26 mm in early spring, to 51 mm in late spring, to 14 mm in winter. We have also evaluated the plausibility of the groundwater recharge simulated by mHM against other reference datasets. On the large scale, the simulated groundwater recharge from mHM agrees quite well with the estimation from the Hydrological Atlas of Germany (please refer to Kumar et al. (2016)).

Figure 7 shows the boxplot and histogram of normalized monthly groundwater income and outcome over the whole catchment. We do not include human effects (e.g., pumping, abstractions and irrigation). Therefore, baseflow to streams is considered as the only source of groundwater discharge. The boxplot shows the degree of spread and skewness of the distribution of the monthly groundwater recharge and groundwater discharge. It shows that the long-term mean values of monthly groundwater recharge and discharge are balanced at approximately 8 mm/month. Due to numerical error, a tiny difference of 2% between groundwater recharge and baseflow is observed in the boxplot. This slight bias is within an acceptable interval. The figure also shows that the spread of groundwater recharge is wider than that of baseflow, which demonstrates the buffering effect of groundwater storage. The distribution pattern of monthly baseflow is unimodal, and the monthly groundwater recharge distribution has a higher deviation than the baseflow distribution.

Model evaluation using discharge & groundwater heads

We use the scenario with the distributed groundwater recharge calculated by mHM (SC1) as the default scenario, and the scenario with homogeneous groundwater recharge (SC2) as the reference scenario in this study. All calibrated parameter values are those of SC1 by default. For the mHM calibration, the calibration result is good, with R_cor > 0.9 for the monthly discharge simulation at the Naegelstedt station (see Figure 10). Other fluxes, like evapotranspiration measured at eddy-covariance stations inside this area, also show quite reasonable correspondence to the modeled estimation (please refer to Heße et al. (2017)).
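A small sketch of the two groundwater evaluation metrics, assuming QRE is the relative difference of the 75-25 inter-quantile ranges as defined above; the function names and example series are illustrative.

```python
import numpy as np

def rcor(sim, obs):
    """Pearson correlation coefficient between simulated and observed series."""
    return np.corrcoef(sim, obs)[0, 1]

def qre(sim, obs):
    """Inter-quantile range error: relative difference between the 75-25
    inter-quantile ranges of the modeled and observed time series."""
    iqr_md = np.percentile(sim, 75) - np.percentile(sim, 25)
    iqr_dt = np.percentile(obs, 75) - np.percentile(obs, 25)
    return (iqr_md - iqr_dt) / iqr_dt

# illustrative monthly groundwater-head anomalies [m]
obs = np.array([0.3, 0.1, -0.2, -0.4, -0.1, 0.2, 0.5, 0.4, 0.0, -0.3, -0.5, 0.0])
sim = obs * 0.8 + 0.05
print(rcor(sim, obs), qre(sim, obs))
```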
The steady-state groundwater model calibration result shows that the groundwater model can plausibly reproduce the finite number of observed groundwater heads within the catchment. Figure 8 shows the 1-to-1 plot of simulated and observed groundwater heads using the two recharge scenarios SC1 and SC2, respectively (the locations of those wells are shown in Figure 3). It can be observed that the model is capable of reproducing spatially-distributed groundwater heads over a wide range, with low RMSE values of 6.22 m in SC1 and 10.14 m in SC2, respectively. There are certain differences between simulated and observed heads. These differences can be attributed to several possible causes, such as the limited spatial resolution, the uniform meshing, or the over-simplified geological zonation. The smaller RMSE in SC1 indicates that mHM is able to capture spatial heterogeneity and produce a more realistic groundwater recharge distribution. The errors in simulated heads (lower two graphs in Figure 8) show that most of the simulated head errors are within an interval of ±6 m in SC1, while most errors are within a range of ±10 m in SC2. Nevertheless, there are still some flawed points where the prediction is biased. Adding more model complexity to improve the match between simulation and observation was avoided due to the large spatial scale, the limited spatial resolution of the mesh, and the noise in the groundwater head data (i.e., with time spans varying from 10 years to 30 years).

In general, the model is capable of capturing the historical trend of groundwater dynamics, even though the mean values of the simulations and observations may differ slightly. Given the limited spatial resolution and the homogeneous K in each geological unit, this difference is acceptable.

To compare the model results between the two scenarios, we drew bar charts of R_cor and |QRE| at each monitoring well for the two recharge scenarios, respectively (see Figure 12). The mean and median values of the basin-scale R_cor and QRE are also calculated and shown in Figure 12.
Figure 12a indicates that the correlation with observations of the simulations using SC1 is higher than that using SC2, with averaged R_cor values of 0.703 and 0.685, respectively. The standard deviation of R_cor in SC1 is 0.109, which is 13% smaller than the value of 0.125 in SC2. Considering that the only difference between SC1 and SC2 is the spatial distribution of recharge, the heterogeneous groundwater recharge estimated using mHM can be regarded as a better estimate than the spatially homogeneous recharge. The relative difference of R_cor between SC1 and SC2 is moderate, which indicates that the spatial characterization of the recharge distribution might have a less important influence on groundwater dynamics in the study area. This phenomenon is closely related to the coarse resolution used to describe the meteorological forcings (e.g., precipitation), the coarsest of the three spatial levels used in mHM. The coupled model also shows its potential in predicting groundwater flood and drought. Figure 13 displays the seasonality of groundwater heads over the whole catchment by means of calculating the long-term mean groundwater heads in spring, summer, autumn and winter, respectively. It indicates that, in general, the probability of groundwater flood in spring and summer is higher than in autumn and winter. However, the spatial variability within groundwater flood and drought events is significant. The groundwater head in the northern, eastern and southeastern mountainous areas tends to fluctuate more strongly than in the central plain areas. This phenomenon is consistent with the fact that the mountainous areas have a larger recharge rate than the plain areas.

Considering the need to predict groundwater flood and drought in extreme climate events, we select a wet month (August 2002) and a dry month (August 2003), and show the groundwater head variations in these months. Figure 13e and f show two scenes of groundwater head variation in the wet season and the dry season, respectively. The groundwater heads in the wet season are higher than the mean values (see Figure 13e). The variation of groundwater heads in the dry season, however, shows a strong spatial variability. This strong spatial variability of groundwater head variations has also been found in Kumar et al. (2016).
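The seasonal aggregation behind Figure 13 can be sketched as follows; this is a minimal example assuming a monthly head series, and the season coding and names are illustrative rather than the actual post-processing scripts.

```python
import numpy as np
import pandas as pd

def seasonal_head_anomalies(dates, heads):
    """Long-term seasonal means of groundwater head and their anomalies
    relative to the overall long-term mean (the quantity mapped in Figure 13)."""
    s = pd.Series(heads, index=pd.DatetimeIndex(dates))
    season = s.index.month % 12 // 3          # 0=DJF, 1=MAM, 2=JJA, 3=SON
    seasonal_mean = s.groupby(season).mean()
    return seasonal_mean - s.mean()           # anomaly per season [m]

dates = pd.date_range("1975-01-31", "2005-12-31", freq="M")
rng = np.random.default_rng(0)
heads = 200.0 + 0.5 * np.sin(2 * np.pi * dates.month / 12) + \
        0.1 * rng.standard_normal(len(dates))
print(seasonal_head_anomalies(dates, heads))
```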
Discussion and conclusion

A coupled hydrologic model, mHM#OGS, is proposed and applied in a mesoscale catchment in central Germany. A boundary condition-based off-line coupling method is applied to depict the dynamic flow exchanges between the surface and subsurface water regimes. This coupling method, together with nested time stepping, allows the surface and subsurface parts to be solved sequentially, keeps the computational overhead to a minimum, and avoids possible water balance problems in the flow exchange processes. The results show a promising prediction capability in surface and subsurface water modeling via calibration and comparison to groundwater time series. Scenario SC1, using a spatially heterogeneous groundwater recharge distribution, is more plausible than SC2, which uses a spatially homogeneous groundwater recharge. The results of this study highlight the successful application of the Multiscale Parameter Regionalization (MPR) method in characterizing spatially heterogeneous groundwater recharge. At the spatial scale of 10^3 km^2 (the scale of this study), the MPR method shows a moderate improvement in the groundwater recharge representation. Note that MPR has been successfully applied at larger scales over Europe (Thober et al., 2015; Kumar et al., 2013; Zink et al., 2016; Rakovec et al., 2016). The effectiveness of MPR in characterizing groundwater dynamics at larger scales (e.g., 10^4-10^6 km^2) or even the global scale is still unknown and needs to be explored. Moreover, MPR has proved its capability to produce better runoff predictions at cross-validated locations (e.g., ungauged basins) (Samaniego et al., 2010). To date, the effectiveness of MPR in ungauged basins has not been verified against groundwater head dynamics. In a next step, we may use groundwater time series to test the effectiveness of MPR in ungauged basins using the coupled model mHM#OGS.

The convincing results of this study provide a new possibility for improving classic large-scale distributed hydrologic models, such as the current unmodified version of mHM (Samaniego et al., 2010; Kumar et al., 2013), VIC (Liang et al., 1994), PCR-GLOBWB (Van Beek and Bierkens, 2009), and WASMOD-M (Widén-Nilsson et al., 2007).
We realize that there are several limitations in the current model. The first one is the fact that we use an off-line coupling scheme instead of a full coupling scheme between mHM and OGS. Consequently, we do not account for the explicit feedback from OGS to mHM, although this feedback might be less important given the large subsurface time step. This approach has the advantages of lower computational cost and better numerical stability, and fits well the long-term, large-scale groundwater modeling in this study. We may try to incorporate the full coupling scheme in the next version of the mHM#OGS model. With the full coupling scheme, the dynamic interactions between overland flow and groundwater flow, and between soil moisture dynamics and groundwater dynamics, would be explicitly accounted for. This approach is open to a broader spectrum of calibration options, such as calibration using remotely sensed soil moisture data. The second limitation is that we do not use parallel computing. Although the whole simulation is conducted on the EVE Linux cluster at UFZ, which is a high-performance computing platform, we do not use distributed computing to reduce the computational effort. In the future, a parallel version of mHM#OGS is needed to reduce the computation time of the computationally expensive full coupling procedure.

mHM routes runoff in upstream cells along the river network using the Muskingum-Cunge algorithm. The model is driven by daily meteorological forcings (e.g., precipitation, temperature), and it utilizes observable basin physical properties or signals (e.g., soil textural, vegetation, and geological properties) to infer the spatial variability of the required parameters. mHM is an open-source project written in FORTRAN 2008. Parallel versions of mHM are available based on OpenMP concepts.

Figure 1. The concept of the mHM#OGS model: a) the conceptual representation of hydrological processes in a catchment; b) the schematic used to couple mHM and OGS. The upper box depicts the canopy interception, atmospheric forcing, and the land surface processes represented by mHM. The lower box depicts the saturated zone represented by the OGS groundwater model; c) the complete workflow, including several interfaces with external software for data import, format conversion, model calibration and water balance checks.

Figure 2. Transfer of groundwater recharge from mHM grid cells to OGS nodes using the model interface GIS2FEM.

4 Transfer surface-to-subsurface exchange rates to OGS - Transfer the surface-to-subsurface volumetric flow rates needed for computing saturated flow as Neumann boundary conditions in OGS.
5 Steady-state calibration - Run an OGS-only steady-state simulation using the boundary conditions given by mHM in step 4. The calibrated K field is fed to the transient model. The steady-state groundwater head serves as the initial condition of the transient mHM#OGS modeling.
6 Start transient simulation of mHM#OGS - Sequence through the coupled mHM and OGS components.
7 Compute stepwise near-surface flow and storage in mHM - Compute spatially-distributed daily near-surface processes.
8 Transfer primary variables and volumetric flow rates to OGS - The same as step 4, except that this step generates time-dependent raster files of flow rates.
9 Solve the groundwater flow equation - Calculate the groundwater heads and the groundwater flow velocity field in the study region.
10 Compute budgets - Run the water budget package to check the overall water balance as well as the time-dependent water budgets in each storage.
11 Write results - Output the simulation results.
12 End of simulation - Close all input files and processes.

Heße et al. (2017) have already established the mHM simulation over the study area. The meteorological and morphological settings in this paper are the same as in that work. For the detailed settings of the meteorological forcings and morphological properties, please refer to Heße et al. (2017).

2.4.2 Aquifer properties and meshing

Typical distributed hydrological models use shallow or extended soil profiles to represent groundwater storage. Here, we use a spatially distributed aquifer model to explicitly represent groundwater storage. We set up this spatially distributed aquifer model through geological modeling based on well-log data and geophysical data from the Thuringian State Office for the Environment and Geology (TLUG). To convert the data format, we use the workflow developed by Fischer et al. (2015) to convert the complex 3D geological model into an open-source VTU-format file that can be used by OGS. The model elements of OGS were set to a 250 m × 250 m horizontal resolution and a 10 m vertical resolution over the whole model domain.

Figure 3. The Naegelstedt catchment used as the test catchment for this model. The left map shows the elevation and the locations of the monitoring wells used in this study. The lower right map shows the relative location of the Naegelstedt catchment in the Unstrut basin. The upper right map shows the location of the Unstrut basin in Germany.

Figure 4. Three-dimensional and cross-section views of the hydrogeologic zonation in the Naegelstedt catchment. The upper left figure shows the complete geological characterization and zonation, including the alluvium and soil zone. The upper right figure shows the geological characterization along two cross sections. The lower map shows the detailed zonation of the geological sub-units beneath the soil zone and alluvium.

Figure 5. Illustration of the stream network used in this study. a) Stream network based on the long-term average of accumulated routed streamflow; b) stream network whereby the long-term averaged accumulated monthly streamflow rate is above 1000 mm; c) stream network whereby the long-term averaged accumulated monthly streamflow rate is above 1500 mm, which is also the default setting in this study; d) stream network whereby the long-term averaged accumulated monthly streamflow rate is above 2000 mm.

Figure 6. Spatial distributions of groundwater recharge in the Naegelstedt catchment (unit: mm/month) during (a) early spring, (b) late spring, and (c) winter of the year 2005.

Figure 6 shows the spatial variability of groundwater recharge in three months: early spring (March) (Figure 6a), late spring (May) (Figure 6b), and winter (January) (Figure 6c). The results indicate that the largest groundwater recharge may occur in mountainous areas. The greatest recharge occurs in the upstream bedrock areas, where the dominant sedimentary rock is Muschelkalk, with a relatively low hydraulic conductivity.

Figure 7.
Mean monthly water balance of groundwater over the Naegelstedt catchment. a) Boxplot indicating the spread, skewness, and outliers of groundwater recharge and groundwater discharge. b) Histogram indicating the distribution of the groundwater balance. c) Monthly time series of groundwater recharge and baseflow.

Figure 7b shows the distribution of monthly groundwater recharge and monthly baseflow. The figure indicates that the distribution pattern of monthly groundwater recharge is skewed right, whereas the distribution of monthly baseflow is unimodal.

Figure 8. Illustration of the steady-state calibration results. (a) Observed and simulated groundwater head, including RMSE and Rcor; (b) difference between simulated and observed head relative to the observed head values.

Figure 10. Observed and modeled monthly streamflow at the outlet of the Naegelstedt catchment.

Figure 11 presents observed and simulated groundwater heads for the period 1975-2005 in SC1. Five out of 19 monitoring wells with different geological and morphological types were chosen as samples to test the effectiveness of our model. Well 4728230786 is located at the northern upland near the mainstream, whereas well 4828230754 is located at the southwestern lowland. Both of those two wells show promising simulation results, with R_cor of 0.81247 and 0.75279, and the QRE of -10

Figure 11. Comparison between the measurement data (green dashed line) and the model output of the groundwater head anomaly in SC1 (blue solid line). (a) Monitoring well 4728230786, located at the upland near a stream. (b) Monitoring well 4628230773, located in a mountainous area. (c) Monitoring well 4728230781, located on a hillslope in the northern upland. (d) Monitoring well 4828230754, located in the lowland. (e) Monitoring well 4728230783, located in the northern mountains.

Figure 12. Barplots of a) the Pearson correlation coefficient Rcor and b) the absolute inter-quantile range error |QRE| at all monitoring wells in the two scenarios. Each bar corresponds to an individual monitoring well in the following order: 0 - 4830230779, 1 - 4828230754, 2 -

Figure 12b shows the distribution of the absolute value of the inter-quantile range error |QRE| in SC1 and SC2. It can be seen in Figure 12 that the distribution pattern of |QRE| is more complicated than that of R_cor. The |QRE| at two wells is abnormally higher than at the other wells. This indicates that the accurate quantification of the amplitude at particular locations

Figure 13. Seasonal variation of the spatially-distributed groundwater heads shown as anomalies after removing the long-term mean groundwater heads (unit: m). a) Long-term mean groundwater head distribution in spring; b) long-term mean groundwater head distribution in summer; c) long-term mean groundwater head distribution in autumn; d) long-term mean groundwater head distribution in winter; e) monthly mean groundwater head distribution in the wet season (August 2002); f) monthly mean groundwater head distribution in the dry season (August 2003).
Those distributed hydrologic models do not include the capability of calculating spatio-temporal groundwater heads and are therefore not able to reasonably represent groundwater head and storage dynamics in their groundwater regime. This may be insignificant in global-scale hydrologic modeling, which typically has a coarse resolution of 25-50 km. The physical representation of groundwater flow is, however, needed in future global hydrologic models with finer spatial resolutions down to 1 km. Moreover, the inclusion of the groundwater model OGS in the coupled model is particularly significant for areas with large sedimentary basins or deltas (e.g., the sedimentary basins of the Mekong, Danube, Yangtze, Amazon, and Ganges-Brahmaputra Rivers). The coupled model mHM#OGS also provides potential for predicting groundwater floods and droughts in extreme climate events. Due to the prediction capability of mHM in ungauged basins, the coupled model is also capable of predicting groundwater floods and droughts in ungauged basins, which is quite valuable given the lack of comprehensive groundwater observations at regional scales. Building on the previous work by Heße et al. (2017) on Travel Time Distributions (TTDs) using mHM, we can expand the scope of that work to the complete hydrologic cycle beneath the atmosphere, which is important due to the pollutant legacy in groundwater storage. The coupled model is also able to evaluate surface water and groundwater storage changes under different meteorological forcings, which allows a sound study of the hydrologic response to climate change (e.g., global warming). Besides, the versatility of OGS also offers the possibility to address Thermo-Hydro-Mechanical-Chemical (THMC) coupling processes in large-scale hydrologic cycles, which is significant for a wide range of real-world applications, including land subsidence, agricultural irrigation, nutrient circulation, salt water intrusion, drought, and heavy metal transport.

Table 1. Table of the hydrological parameters used in this study.

The two models are coupled in a sequential manner by feeding fluxes and variables from one model to the other at every subsurface time step. Technically, the coupling interface converts time series of variables and fluxes into Neumann boundary conditions, which can be directly read by OGS. The modified OGS source code can produce raster files containing the time series of flow-dependent variables and volumetric flow rates with the same resolution as the mHM grid cells, which can be directly read by mHM. The detailed workflow of the coupling technique is shown in Table 2.

Table 2: Description of the computational sequence for mHM#OGS using a sequential coupling scheme.
1 Initialize, assign, and read - Run the mHM and OGS initialization procedures; OGS assigns, reads and prepares parameters and subroutines for the later simulation.
2 Compute near-surface hydrologic processes in mHM - Calculate near-surface processes such as snow melting, evapotranspiration, fast interflow, slow interflow, groundwater recharge, and surface runoff for each grid cell.
3 Compute the long-term mean of land-surface and soil-zone hydrologic processes - Compute the long-term mean over the entire simulation period and write it as a set of raster files.

Table 3. Estimates of the hydraulic properties for the calibrated steady-state groundwater model in the Naegelstedt catchment.
v3-fos-license
2021-02-11T06:19:42.625Z
2021-02-01T00:00:00.000
231866885
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.3390/e23020199", "pdf_hash": "db8d6b5ad63dbcc2873444f9a1013b6992bf7cc3", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41990", "s2fieldsofstudy": [ "Mathematics", "Physics", "Computer Science" ], "sha1": "bf26449c726150d836aa07df622bf39be7251e2b", "year": 2021 }
pes2o/s2orc
Error Exponents and α-Mutual Information

Over the last six decades, the representation of error exponent functions for data transmission through noisy channels at rates below capacity has seen three distinct approaches: (1) through Gallager's E0 functions (with and without cost constraints); (2) the large deviations form, in terms of conditional relative entropy and mutual information; (3) through the α-mutual information and the Augustin–Csiszár mutual information of order α derived from the Rényi divergence. While a fairly complete picture has emerged in the absence of cost constraints, there have remained gaps in the interrelationships between the three approaches in the general case of cost-constrained encoding. Furthermore, no systematic approach has been proposed to solve the attendant optimization problems by exploiting the specific structure of the information functions. This paper closes those gaps and proposes a simple method to maximize the Augustin–Csiszár mutual information of order α under cost constraints by means of the maximization of the α-mutual information subject to an exponential average constraint.

Phase 1: The MIT School

The capacity C of a stationary memoryless channel is equal to the maximal symbolwise input-output mutual information. Not long after Shannon [1] established this result, Rice [2] observed that, when operating at any encoding rate R < C, there exist codes whose error probability vanishes exponentially with blocklength, with a speed of decay that decreases as R approaches C. This early observation moved the center of gravity of information theory research towards the quest for the reliability function, a term coined by Shannon [3] to refer to the maximal achievable exponential decay as a function of R. The MIT information theory school, and most notably Elias [4], Feinstein [5], Shannon [3,6], Fano [7], Gallager [8,9], and Shannon, Gallager and Berlekamp [10,11], succeeded in upper/lower bounding the reliability function by the sphere-packing error exponent function and the random coding error exponent function, respectively. Fortunately, these functions coincide for rates between C and a certain value, called the critical rate, thereby determining the reliability function in that region. The influential 1968 textbook by Gallager [9] set down the major error exponent results obtained during Phase 1 of research on this topic, including the expurgation technique to improve upon the random coding error exponent lower bound. Two aspects of those early works (and of Dobrushin's contemporary papers [12,13] on the topic) stand out: (a) The error exponent functions were expressed as the result of the Karush-Kuhn-Tucker optimization of ad-hoc functions which, unlike mutual information, carried little insight. In particular, during the first phase, center stage is occupied by the parametrized function of the input distribution P_X and the random transformation (or "channel") P_{Y|X},

E_0(ρ, P_X) = −log Σ_{y∈B} ( Σ_{x∈A} P_X(x) P_{Y|X}^{1/(1+ρ)}(y|x) )^{1+ρ},   (1)

introduced by Gallager in [8]. (b) Despite the large-deviations nature of the setup, none of the tools from that then-nascent field (other than the Chernoff bound) found their way to the first phase of the work on error exponents; in particular, relative entropy, introduced by Kullback and Leibler [14], failed to put in an appearance.
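For a discrete memoryless channel, Eq. (1) can be evaluated numerically with a few lines of code; the following Python sketch (with assumed array names and an illustrative channel) is not part of the original papers.

```python
import numpy as np

def gallager_E0(rho, P_X, P_Y_given_X):
    """Gallager's E_0(rho, P_X) of Eq. (1) for a discrete memoryless channel.

    P_X         : input distribution, shape (|A|,)
    P_Y_given_X : channel matrix P_{Y|X}(y|x), shape (|A|, |B|)
    """
    inner = P_X @ P_Y_given_X ** (1.0 / (1.0 + rho))   # sum over x, for each y
    return -np.log(np.sum(inner ** (1.0 + rho)))

# illustrative example: binary symmetric channel with crossover probability 0.11
delta = 0.11
W = np.array([[1 - delta, delta],
              [delta, 1 - delta]])
P_X = np.array([0.5, 0.5])
print(gallager_E0(rho=1.0, P_X=P_X, P_Y_given_X=W))
```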
To this date, the reliability function remains open for low rates even for the binary symmetric channel, despite a number of refined converse and achievability results (e.g., [15][16][17][18][19][20][21]) obtained since [9]. Our focus in this paper is not on converse/achievability techniques but on the role played by various information measures in the formulation of error exponent results.

Phase 2: Relative Entropy

The second phase of the error exponent research was pioneered by Haroutunian [22] and Blahut [23], who infused the expressions for the error exponent functions with meaning by incorporating relative entropy. The sphere-packing error exponent function corresponding to a random transformation P_{Y|X} is given as

E_sp(R) = max_{P_X} min { D(Q_{Y|X} ‖ P_{Y|X} | P_X) : I(P_X, Q_{Y|X}) ≤ R }.   (2)

Roughly speaking, optimal codes of rate R < C incur errors due to atypical channel behavior, and large deviations establishes that the overwhelmingly most likely such behavior can be explained as if the channel were supplanted by the channel with mutual information bounded by R which is closest to the true channel in conditional relative entropy D(Q_{Y|X} ‖ P_{Y|X} | P_X). Within the confines of finite-alphabet memoryless channels, this direction opened the possibility of using the combinatorial method of types to obtain refined results robustifying the choice of the optimal code against incomplete knowledge of the channel. The 1981 textbook by Csiszár and Körner [24] summarizes the main results obtained during Phase 2.

Phase 3: Rényi Information Measures

Entropy and relative entropy were generalized by Rényi [25], who introduced the notions of Rényi entropy and Rényi divergence of order α. He arrived at Rényi entropy by relaxing the axioms that Shannon proposed in [1] and showed to be satisfied by no measure but entropy. Shortly after [25], Campbell [26] realized the operational role of Rényi entropy in variable-length data compression if the usual average encoding length criterion E[ℓ(c(X))] is replaced by an exponential average α^(-1) log E[exp(α ℓ(c(X)))]. Arimoto [27] put forward a generalized conditional entropy inspired by Rényi's measures (now known as the Arimoto-Rényi conditional entropy) and proposed a generalized mutual information by taking the difference between the Rényi entropy and the Arimoto-Rényi conditional entropy. The role of the Arimoto-Rényi conditional entropy in the analysis of the error probability of Bayesian M-ary hypothesis testing problems has been recently shown in [28], tightening and generalizing a number of results dating back to Fano's inequality [29]. Phase 3 of the error exponent research was pioneered by Csiszár [30], who established a connection between Gallager's E_0 function and Rényi divergence by means of a Bayesian measure of the discrepancy among a finite collection of distributions introduced by Sibson [31]. Although [31] failed to realize its connection to mutual information, Csiszár [30,32] noticed that it could be viewed as a natural generalization of mutual information. Arimoto [27] also observed that the unconstrained maximization of his generalized mutual information measure with respect to the input distribution coincides with a scaled version of the maximal E_0 function. This resulted in an extension of the Arimoto-Blahut algorithm useful for the computation of error exponent functions [33] (see also [34]) for finite-alphabet memoryless channels.
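As a numerical aside, the Rényi divergence of order α between two discrete distributions can be computed directly from its definition; the sketch below is illustrative (names and example distributions are assumptions).

```python
import numpy as np

def renyi_divergence(alpha, P, Q):
    """Rényi divergence D_alpha(P || Q) in nats between two discrete
    distributions with common support, for alpha in (0, 1) or alpha > 1;
    alpha = 1 falls back to relative entropy."""
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    if np.isclose(alpha, 1.0):
        mask = P > 0
        return np.sum(P[mask] * np.log(P[mask] / Q[mask]))
    return np.log(np.sum(P ** alpha * Q ** (1.0 - alpha))) / (alpha - 1.0)

P = np.array([0.7, 0.2, 0.1])
Q = np.array([0.4, 0.4, 0.2])
print(renyi_divergence(0.5, P, Q), renyi_divergence(1.0, P, Q))
```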
Within Haroutunian's framework [22], applied in the context of the method of types, Poltyrev [35] proposed an alternative to Gallager's E_0 function, defined by means of a cumbersome maximization over a reverse random transformation. This measure turned out to coincide (modulo different parametrizations) with another generalized mutual information introduced four years earlier by Augustin in his unpublished thesis [36] by means of a minimization with respect to an output probability measure. The key contribution in the development of this third phase is Csiszár's paper [32], where he makes a compelling case for the adoption of Rényi's information measures in the large deviations analysis of lossless data compression, hypothesis testing and data transmission. Recall that more than two decades earlier, Csiszár [30] had already established the connection between Gallager's E_0 function and the generalized mutual information inspired by Sibson [31], which, henceforth, we refer to as the α-mutual information. Therefore, its relevance to the error exponent analysis of error correcting codes had already been established. Incidentally, more recently, another operational role was found for α-mutual information in the context of the large deviations analysis of composite hypothesis testing [37]. In addition to α-mutual information, and always working with discrete alphabets, Csiszár [32] considers the generalized mutual informations due to Arimoto [27] and to Augustin [36], which we refer to as the Augustin-Csiszár mutual information of order α. Csiszár shows that all three of those generalizations of mutual information coincide upon their unconstrained maximization with respect to the input distribution. Further relationships among those Rényi-based generalized mutual informations have been obtained in recent years in [38][39][40][41][42][43][44][45]. In [32] the maximal α-mutual information, or generalized capacity of order α, finds an operational characterization as a generalized cutoff rate, an equivalent way to express the reliability function. This would have been the final word on the topic if it weren't for its limitation to discrete-alphabet channels and, more importantly, to encoding without cost constraints.

Cost Constraints

If the transmitted codebook is cost-constrained, i.e., every codeword (c_1, ..., c_n) is forced to satisfy Σ_{i=1}^n b(c_i) ≤ n θ for some nonnegative cost function b(·), then the channel capacity is equal to the input-output mutual information maximized over input probability measures restricted to satisfy E[b(X)] ≤ θ. Gallager [9] incorporated cost constraints in his treatment of error exponents by generalizing (1) to the function

E_0(ρ, P_X, r, θ) = −log Σ_{y∈B} ( Σ_{x∈A} P_X(x) exp(r b(x) − r θ) P_{Y|X}^{1/(1+ρ)}(y|x) )^{1+ρ},   (3)

with which he was able to prove an achievability result invoking Shannon's random coding technique [1]. Gallager also suggested in the footnote of page 329 of [9] that the converse technique of [10] is amenable to extension to prove a sphere-packing converse based on (3). However, an important limitation is that that technique only applies to constant-composition codes (all codewords have the same empirical distribution). A more powerful converse circumventing that limitation (at least for symmetric channels) was given by [46], also expressing the upper bound on the reliability function by optimizing (3) with respect to ρ, r and P_X.
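Eq. (3) is equally simple to evaluate numerically for a discrete memoryless channel with a cost function; the following sketch, with assumed names and illustrative values, mirrors the unconstrained version given earlier.

```python
import numpy as np

def gallager_E0_cost(rho, r, theta, P_X, P_Y_given_X, b):
    """Gallager's cost-constrained E_0(rho, P_X, r, theta) of Eq. (3)
    for a discrete memoryless channel with cost function b."""
    tilt = P_X * np.exp(r * (b - theta))               # P_X(x) exp(r b(x) - r theta)
    inner = tilt @ P_Y_given_X ** (1.0 / (1.0 + rho))  # sum over x, for each y
    return -np.log(np.sum(inner ** (1.0 + rho)))

# illustrative binary-input channel with cost b(0)=0, b(1)=1 and budget theta=0.4
W = np.array([[0.9, 0.1],
              [0.2, 0.8]])
P_X = np.array([0.6, 0.4])
b = np.array([0.0, 1.0])
print(gallager_E0_cost(rho=1.0, r=0.5, theta=0.4, P_X=P_X, P_Y_given_X=W, b=b))
```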
A notable success of the approach based on the optimization of (3) was the determination of the reliability function (for all rates below capacity) of the direct detection photon channel [47]. In contrast, the Phase 2 expression (2) for the sphere-packing error exponent for cost-constrained channels is much more natural and similar to the way the expression for channel capacity is impacted by cost constraints, namely, we simply constrain the maximization in (2) to satisfy E[b(X)] ≤ θ. Unfortunately, no general methods to solve the ensuing optimization have been reported. Once cost constraints are incorporated, the equivalence among the maximal α-mutual information, the maximal order-α Augustin-Csiszár mutual information, and the maximal Arimoto mutual information of order α breaks down. Of those three alternatives, it is the maximal Augustin-Csiszár mutual information under cost constraints that appears in the error exponent functions. The challenge is that the Augustin-Csiszár mutual information is much harder to evaluate, let alone maximize, than the α-mutual information. The Phase 3 effort to encompass cost constraints was started by Augustin [36] and was continued recently by Nakiboglu [43]. Their focus was to find a way to express (3) in terms of Rényi information measures. Although, as we explain in Item 62, they did not quite succeed, their efforts were instrumental in developing key properties of the Augustin-Csiszár mutual information.

Organization

To enhance readability and ease of reference, the rest of this work is organized in 81 items, grouped into Sections 2-13 and an appendix. Basic notions and notation (including the key concept of the α-response) are collected in Section 2. Unlike much of the literature on the topic, we do not restrict attention to discrete input/output alphabets, nor do we impose any topological structures on them. The paper is essentially self-contained. Section 3 covers the required background material on relative entropy, Rényi divergence of order α, and their conditional versions, including a key representation of Rényi divergence in terms of relative entropies and a tilted probability measure, and additive decompositions of Rényi divergence involving the α-response. Section 4 studies the basic properties of α-mutual information and the order-α Augustin-Csiszár mutual information. This includes their variational representations in terms of conventional (non-Rényi) information measures such as conditional relative entropy and mutual information, which are particularly simple to show in the main range of interest in applications to error exponents, namely, α ∈ (0, 1). The interrelationships between α-mutual information and the order-α Augustin-Csiszár mutual information are covered in Section 5, which introduces the dual notions of the α-adjunct and the ⟨α⟩-adjunct of an input probability measure. The maximizations with respect to the input distribution of α-mutual information and the order-α Augustin-Csiszár mutual information account for their role in the fundamental limits in data transmission through noisy channels. Section 6 gives a brief review of the results in [45] for the maximization of α-mutual information. For the Augustin-Csiszár mutual information, Section 7 covers its unconstrained maximization, which coincides with its α-mutual information counterpart. Section 8 proposes an approach to find C_α^c(θ), the maximal Augustin-Csiszár mutual information of order α ∈ (0, 1) subject to E[b(X)] ≤ θ.
Instead of trying to identify directly the input distribution that maximizes the Augustin-Csiszár mutual information, the method seeks its ⟨α⟩-adjunct. This is tantamount to maximizing α-mutual information over a larger set of distributions. Section 9 shows an identity in which the maximization on the right side is unconstrained. In other words, the minimax of Gallager's E_0 function (3) with cost constraints is shown to be equal to the maximal Augustin-Csiszár mutual information, thereby bridging the existing gap between the Phase 1 and Phase 3 representations alluded to earlier in this introduction. As in [48], Section 10 defines the sphere-packing and random-coding error exponent functions in the natural canonical form of Phase 2 (e.g., (2)), and gives a very simple proof of the nexus between the Phase 2 and Phase 3 representations, with or without cost constraints. In this regard, we note that, although all the ingredients required were already present at the time the revised version of [24] was published three decades after the original, [48] does not cover the role of Rényi's information measures in channel error exponents. Examples illustrating the proposed method are given in Sections 11 and 12 for the additive Gaussian noise channel under a quadratic cost function, and the additive exponential noise channel under a linear cost function, respectively. Simple parametric expressions are given for the error exponent functions, and the least favorable channels that account for the most likely error mechanism (Section 1.2) are identified in both cases.

Relative Information and Information Density

We begin with basic terminology and notation required for the subsequent development.

1. If (A, F, P) is a probability space, X ∼ P indicates P[X ∈ F] = P(F) for all F ∈ F.

2. If probability measures P and Q defined on the same measurable space (A, F) satisfy P(A) = 0 for all A ∈ F such that Q(A) = 0, we say that P is dominated by Q, denoted as P ≪ Q. If P and Q dominate each other, we write P ≪≫ Q. If there is an event such that P(A) = 0 and Q(A) = 1, we say that P and Q are mutually singular, and we write P ⊥ Q.

3. If P ≪ Q, then dP/dQ is the Radon-Nikodym derivative of the dominated measure P with respect to the reference measure Q. Its logarithm is known as the relative information, namely, the random variable

ı_{P‖Q}(a) = log (dP/dQ)(a) ∈ [−∞, +∞), a ∈ A.

As with the Radon-Nikodym derivative, any identity involving relative informations can be changed on a set of measure zero under the reference measure without incurring any contradiction. If P ≪ Q ≪ R, then the chain rule of Radon-Nikodym derivatives yields

ı_{P‖Q}(a) + ı_{Q‖R}(a) = ı_{P‖R}(a), a ∈ A.

Throughout the paper, the base of exp and log is the same and chosen by the reader unless explicitly indicated otherwise. We frequently define a probability measure P from the specification of ı_{P‖Q} and Q, since P(A) = ∫_A exp( ı_{P‖Q}(a) ) dQ(a) for all A ∈ F. If X ∼ P and Y ∼ Q, it is often convenient to write ı_{X‖Y}(x) instead of ı_{P‖Q}(x).

Example 1. If X ∼ N(µ_X, σ²_X) (Gaussian with mean µ_X and variance σ²_X) and Y ∼ N(µ_Y, σ²_Y), then

ı_{X‖Y}(a) = log (σ_Y / σ_X) + ( (a − µ_Y)² / (2σ²_Y) − (a − µ_X)² / (2σ²_X) ) log e.

4. Let (A, F) and (B, G) be measurable spaces, known as the input and output spaces, respectively. Likewise, A and B are referred to as the input and output alphabets, respectively. The simplified notation P_{Y|X}: A → B denotes a random transformation from (A, F) to (B, G), i.e., for any x ∈ A, P_{Y|X=x}(·) is a probability measure on (B, G), and for any B ∈ G, P_{Y|X=·}(B) is an F-measurable function.

5.
We abbreviate by P_A the set of probability measures on (A, F), and by P_{A×B} the set of probability measures on (A×B, F ⊗ G). If P ∈ P_A and P_{Y|X}: A → B is a random transformation, the corresponding joint probability measure is denoted by P P_{Y|X} ∈ P_{A×B} (or, interchangeably, P_{Y|X} P). The notation P → P_{Y|X} → Q simply indicates that the output marginal of the joint probability measure P P_{Y|X} is denoted by Q ∈ P_B, namely, Q(B) = ∫_A P_{Y|X=a}(B) dP(a) for all B ∈ G.

6. If P_X → P_{Y|X} → P_Y and P_{Y|X=a} ≪ P_Y, the information density ı_{X;Y}: A×B → [−∞, ∞) is defined as

ı_{X;Y}(a; b) = ı_{P_{Y|X=a}‖P_Y}(b), (a, b) ∈ A×B.

Following Rényi's terminology [49], if P_X P_{Y|X} ≪ P_X × P_Y, the dependence between X and Y is said to be regular, and the information density can be defined on (x, y) ∈ A×B. Henceforth, we assume that P_{Y|X} is such that the dependence between its input and output is regular regardless of the input probability measure. For example, if X = Y ∈ R, then P_{Y|X=a}(A) = 1{a ∈ A}, and their dependence is not regular, since for any P_X with non-discrete components P_{XY} is not dominated by P_X × P_Y.
7. Let α > 0, and P_X → P_{Y|X} → P_Y. The α-response to P_X ∈ P_A is the output probability measure P_{Y[α]} ≪ P_Y with relative information given by

ı_{Y[α]‖Y}(y) = (1/α) ( log E[ exp( α ı_{X;Y}(X; y) ) ] − κ_α ), y ∈ B,   (13)

where the expectation is with respect to X ∼ P_X and κ_α is a scalar that guarantees that P_{Y[α]} is a probability measure. Invoking (9), we obtain

κ_α = α log E[ ( E[ exp( α ı_{X;Y}(X; Ȳ) ) | Ȳ ] )^{1/α} ],   (14)

where X ∼ P_X and Ȳ ∼ P_Y are independent. For brevity, the dependence of κ_α on P_X and P_{Y|X} is omitted. Jensen's inequality applied to (·)^α results in κ_α ≤ 0 for α ∈ (0, 1) and κ_α ≥ 0 for α > 1. Although the α-response has a long record of services to information theory, this terminology and notation were introduced recently in [45]. Alternative terminology and notation were proposed in [42], which refers to the α-response as the order-α Rényi mean. Note that κ_1 = 0 and the 1-response to P_X is P_Y. If p_{Y[α]} and p_{Y|X} denote the densities of P_{Y[α]} and P_{Y|X} with respect to some common dominating measure, then (13) becomes

p_{Y[α]}(y) = exp(−κ_α/α) ( E[ p^α_{Y|X}(y|X) ] )^{1/α}.   (15)

For α > 1 (resp. α < 1) we can think of the normalized version of p^α_{Y|X} as a random transformation with less (resp. more) "noise" than p_{Y|X}.

We will have opportunity to apply the following examples.

Example 2. If Y = X + N, where X ∼ N(µ_X, σ²_X) is independent of N ∼ N(µ_N, σ²_N), then the α-response to P_X is N(µ_X + µ_N, α σ²_X + σ²_N).

Example 3. Suppose that Y = X + N, where N is exponential with mean ζ, independent of X, which is a mixed random variable with density p_X, where α µ ≥ ζ. Then, Y[α], the α-response to P_X, is exponential with mean α µ.

Relative Entropy and Rényi Divergence

Given a pair of probability measures (P, Q) ∈ P²_A, relative entropy and Rényi divergence gauge the distinctness between P and Q.

9. Provided P ≪ Q, the relative entropy is the expectation of the relative information with respect to the dominated measure,

D(P‖Q) = E[ ı_{P‖Q}(X) ] ≥ 0, X ∼ P,

with equality if and only if P = Q. If P is not dominated by Q, then D(P‖Q) = ∞. As in Item 3, if X ∼ P and Y ∼ Q, we may write D(X‖Y) instead of D(P‖Q), in the same spirit that the expectation and entropy of P are written as E[X] and H(X), respectively.

10. Arising in the sequel, a common optimization in information theory finds, among the probability measures satisfying an average cost constraint, the one which is closest to a given reference measure Q in the sense of D(·‖Q). For that purpose, the following result proves sufficient. Incidentally, we often refer to unconstrained maximizations over probability distributions. It should be understood that those optimizations are still constrained to the sets P_A or P_B. As customary in information theory, we will abbreviate max_{P_X ∈ P_A} by max_X or max_{P_X}.

Theorem 1. Let P_Z ∈ P_A and suppose that g: A → [0, ∞) is a Borel measurable mapping. Then,

min_{P_X ∈ P_A} { D(P_X‖P_Z) + E[g(X)] } = −log E[exp(−g(Z))],   (21)

where X ∼ P_X and Z ∼ P_Z, achieved uniquely by P_{X*} ≪≫ P_Z defined by

ı_{X*‖Z}(a) = −g(a) − log E[exp(−g(Z))], a ∈ A.   (22)

Proof. Note that since g is nonnegative, η = E[exp(−g(Z))] ∈ (0, 1]. Therefore, the subset of P_A for which the term in {·} in (21) is finite is nonempty. Fix any P_X from that subset (which therefore satisfies P_X ≪ P_Z ≪≫ P_{X*}) and invoke the chain rule (7) to write a decomposition which is uniquely minimized by letting P_X = P_{X*}. Note that for typographical convenience we have denoted X* ∼ P_{X*}.

11. Let p and q denote the Radon-Nikodym derivatives of the probability measures P and Q, respectively, with respect to a common dominating σ-finite measure µ. The Rényi divergence of order α ∈ (0, 1) ∪ (1, ∞) between P and Q is defined as [25,50]

D_α(P‖Q) = (1/(α−1)) log ∫ p^α q^{1−α} dµ;

equivalent expressions (27)-(29) in terms of relative informations are also used, where (28) and (29) hold if P ≪ Q, and in (27), R is a probability measure that dominates both P and Q.
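Returning to Item 7, for a finite output alphabet the α-response is a normalized power mean of the channel rows, consistent with (15). The Python sketch below (illustrative names and values, not code from the paper) computes it.

```python
import numpy as np

def alpha_response(alpha, P_X, P_Y_given_X):
    """alpha-response P_{Y[alpha]} to P_X for a discrete channel:
    proportional to ( sum_x P_X(x) P_{Y|X}(y|x)^alpha )^(1/alpha)."""
    w = (P_X @ P_Y_given_X ** alpha) ** (1.0 / alpha)
    return w / w.sum()

W = np.array([[0.9, 0.1],
              [0.2, 0.8]])
P_X = np.array([0.6, 0.4])
print(alpha_response(0.5, P_X, W))   # reduces to the output distribution at alpha = 1
```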
Note that (28) and (29) state that (t−1) D_t(X‖Y) and t D_{1+t}(X‖Y) are the cumulant generating functions of the random variables ı_{X‖Y}(Y) and ı_{X‖Y}(X), respectively. The relative entropy is the limit of D_α(P‖Q) as α ↑ 1, so it is customary to let D_1(P‖Q) = D(P‖Q). For any α > 0, D_α(P‖Q) ≥ 0 with equality if and only if P = Q. Furthermore, D_α(P‖Q) is non-decreasing in α and satisfies the skew-symmetric property

(1−α) D_α(P‖Q) = α D_{1−α}(Q‖P), α ∈ [0, 1].

12. The expressions in the following pair of examples will come in handy in Sections 11 and 12.

Example 4. Suppose that σ²_α = α σ²_1 + (1−α) σ²_0 > 0 and α ∈ (0, 1) ∪ (1, ∞). Then, the Rényi divergence of order α between Gaussian distributions with variances σ²_1 and σ²_0 can be expressed in closed form in terms of σ²_α.

Example 5. Suppose Z is exponentially distributed with unit mean, i.e., its probability density function is e^(−t) 1{t ≥ 0}. For d_0 ≥ d_1 and α such that (1−α) µ_0 + α µ_1 > 0, we obtain the corresponding Rényi divergence in closed form.

13. Intimately connected with the notion of Rényi divergence is the tilted probability measure P_α defined, if D_α(P_1‖P_0) < ∞, by

ı_{P_α‖Q}(a) = α ı_{P_1‖Q}(a) + (1−α) ı_{P_0‖Q}(a) + (1−α) D_α(P_1‖P_0),   (37)

where Q is any probability measure that dominates both P_0 and P_1.
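To make Item 13 concrete in the discrete case, the following sketch computes the tilted measure, whose normalization constant equals exp((α−1) D_α(P_1‖P_0)); names and distributions are illustrative.

```python
import numpy as np

def tilted_measure(alpha, P1, P0):
    """Tilted probability measure P_alpha, proportional to
    P1^alpha * P0^(1-alpha), for discrete P1, P0 with common support."""
    w = P1 ** alpha * P0 ** (1.0 - alpha)
    return w / w.sum()   # normalizer equals exp((alpha-1) D_alpha(P1||P0))

P1 = np.array([0.7, 0.2, 0.1])
P0 = np.array([0.4, 0.4, 0.2])
P_half = tilted_measure(0.5, P1, P0)
# consistency check with the Rényi divergence of order 1/2:
D_half = np.log(np.sum(P1 ** 0.5 * P0 ** 0.5)) / (0.5 - 1.0)
print(P_half, D_half)
```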
Although (37) is defined in general, our main emphasis is on the range α P p0, 1q, in which, as long as P 0 M P 1 , the tilted probability measure is defined and satisfies P α ! P 0 and P α ! P 1 , with corresponding relative informations where we have used the chain rule for P α ! P 0 ! Q and P α ! P 1 ! Q. Taking a linear combination of (38)-(41) we conclude that, for all a P A, Henceforth, we focus particular attention on the case α P p0, 1q since that is the region of interest in the application of Rényi information measures to the evaluation of error exponents in channel coding for codes whose rate is below capacity. In addition, often proofs simplify considerably for α P p0, 1q. 14. Much of the interplay between relative entropy and Rényi divergence hinges on the following identity, which appears, without proof, in (3) of [51]. Theorem 2. Let α P p0, 1q and assume that P 0 M P 1 are defined on the same measurable space. Then, for any P ! P 1 and P ! P 0 , where P α is the tilted probability measure in (37) and (43) holds regardless of whether the relative entropies are finite. In particular, 15. Relative entropy and Rényi divergence are related by the following fundamental variational representation. Theorem 3. Fix α P p0, 1q and pP 1 , P 0 q P P 2 A . Then, the Rényi divergence between P 1 and P 0 satisfies where the minimum is over P A . If P 0 M P 1 , then the right side of (47) is attained by the tilted measure P α , and the minimization can be restricted to the subset of probability measures which are dominated by both P 1 and P 0 . Proof. If P 0 K P 1 , then both sides of (47) are`8 since there is no probability measure that is dominated by both P 0 and P 1 . If P 0 M P 1 , then minimizing both sides of (43) with respect to P yields (47) and the fact that the tilted probability measure attains the minimum therein. The variational representation in (47) was observed in [39] in the finite-alphabet case, and, contemporaneously, in full generality in [50]. Unlike Theorem 3, both of those references also deal with α ą 1. The function dpαq " p1´αq D α pP 1 }P 0 q, with dp1q " lim αÒ1 dpαq, is concave in α because the right side of (47) is a minimum of affine functions of α. 16. Given random transformations P Y|X : A Ñ B, Q Y|X : A Ñ B, and a probability measure P X P P A on the input space, the conditional relative entropy is Analogously, the conditional Rényi divergence is defined as Entropy 2021, 23, 199 of 52 A word of caution: the notation in (50) conforms to that in [38,45] but it is not universally adopted, e.g., [43] uses the left side of (50) to denote the Rényi generalization of the right side of (49). We can express the conditional Rényi divergence as where (52) holds if P X P Y|X ! P X Q Y|X . Jensen's inequality applied to (51) results in Nevertheless, an immediate and crucial observation we can draw from (51) is that the unconstrained maximizations of the sides of (53) and of (54) over P X do coincide: for all α ą 0, 17. Conditional Rényi divergence satisfies the following additive decomposition, originally pointed out, without proof, by Sibson [31] in the setting of finite A. Theorem 4. Given P X P P A , Q Y P P B , P Y|X : A Ñ B, and α P p0, 1q Y p1, 8q, we have Furthermore, with κ α as in (14), Proof. Select an arbitrary probability measure R Y P P B that dominates both Q Y and P Y , and, therefore, P Yrαs too. Letting pX, Zq " P XˆRY , we have where (61) follows from (13), and (62) follows from the chain rule of Radon-Nikodym derivatives applied to P Yrαs ! P Y ! 
R Y . Then, (58) follows by specializing Q Y " P Yrαs , and the proof of (57) is complete, upon plugging (58) into the right side of (63). A proof of (57) in the discrete case can be found in Appendix A of [37]. 18. For all α ą 0, given two inputs pP X , Q X q P P 2 A and one random transformation P Y|X : A Ñ B, Rényi divergence (and, in particular, relative entropy) satisfies the data processing inequality, where P X Ñ P Y|X Ñ P Y , and Q X Ñ P Y|X Ñ Q Y . The data processing inequality for Rényi divergence was observed by Csiszár [52] in the more general context of f -divergences. More recently it was stated in [39,50]. Furthermore, given one input P X P P A and two transformations P Y|X : A Ñ B and Q Y|X : A Ñ B, conditioning cannot decrease Rényi divergence, Since D α pP Y|X } Q Y|X |P X q " D α pP X P Y|X } P X Q Y|X q, (65) follows by applying (64) to a deterministic transformation which takes an input pair and outputs the second component. Inequalities (53) and (65) imply the convexity of D α pP}Qq in pP, Qq for α P p0, 1s. Dependence Measures In this paper we are interested in three information measures that quantify the dependence between random variables X and Y, such that P X Ñ P Y|X Ñ P Y , namely, mutual information, and two of its generalizations, α-mutual information and Augustin-Csiszár mutual information of order α. Theorem 4 and (72) result in the additive decomposition for any Q Y with D α pP Yrαs } Q Y q ă 8, thereby generalizing the well-known decomposition for mutual information, which, in contrast to (77), is a simple consequence of the chain rule whenever the dependence between X and Y is regular, and of Lemma A1 in general. 22. 23. If α P p0, 1q, (47) and (69) result in For α ą 1 a proof of (81) is given in [39] for finite alphabets. 24. Unlike IpP X , P Y|X q, we can express I α pP X , P Y|X q directly in terms of its arguments without involving the corresponding output distribution or the α-response to P X . This is most evident in the case of discrete alphabets, in which (76) becomes For example, if X is discrete and H α pXq denotes the Rényi entropy of order α, then for all α ą 0, If X and Y are equiprobable with PrX ‰ Ys " δ, then, in bits, I α pX; Yq " 1´h α pδq, where h α pδq denotes the binary Rényi entropy. 25. In the main region of interest, namely, α P p0, 1q, frequently we use a different parametrization in terms of ρ ą 0, with α " 1 1`ρ . Theorem 5. For any ρ ą 0, we have the upper bound Just like (53), we will show in Section 7 that (86) becomes an equality upon the unconstrained maximization of both sides. 26. Before introducing the last dependence measure in this section, recall from Definition 7 and (58) that P Yrαs ! P Y , the α-response (of P Y|X ) to P X defined by where the expectation is with respect to X " P X . We proceed to define P Yxαy ! P Y , the xαy-response (of P Y|X ) to P X by means of with X " P X . Note that P Yx1y " P Yr1s " P Y . 27. In the case of discrete alphabets, (92) becomes the implicit equation which coincides with (9.24) in Fano's 1961 textbook [7], with s Ð 1´α, and is also given by Haroutunian in (19) of [22]. For example, if A " B is discrete and Y " X, then P Yxαy " P X , while P α Yrαs pyq " c P X pyq, y P A. 28. The xαy-response satisfies the following identity, which can be regarded as the counterpart of (57) satisfied by the α-response. Theorem 6. Fix P X P P A , P Y|X : A Ñ B and Q Y P P B . Then, Proof. For brevity we assume Q Y ! P Y . 
Otherwise, the proof is similar adopting a reference measure that dominates both Q Y and P Y . The definition of unconditional Rényi divergence in Item 11 implies that we can write pα´1q times the exponential of the left side of (94) as where pX, Yq " P XˆPY , (96) follows from (92), and (97) follows from the definition of unconditional Rényi divergence in (27). Taking expectation with respect to X " P X of (106)-(108) yields (99) because of Lemma A1 and (105). If α ě 1, then Jensen's inequality applied to the right side of (94) results in (98) but with the opposite inequality. Moreover, (107) is reversed and the remainder of the proof holds verbatim. In the case of finite input-alphabets, a different proof of (99) is given in Appendix B of [54]. 29. Introduced in the unpublished dissertation [36] and rescued from oblivion in [32], the Augustin-Csiszár mutual information of order α is defined for α ą 0 as where (111) follows from (98) if α P p0, 1s, and from the reverse of (99) if α ě 1. We conform to the notation in [40], where I a α was used to denote the difference between entropy and Arimoto-Rényi conditional entropy. In [32,39,43] the Augustin-Csiszár mutual information of order α is denoted by I α . In Augustin's original notation [36], I ρ pP X q means I c 1´ρ pP X , P Y|X q, ρ P p0, 1q. Independently of [36], Poltyrev [35] introduced a functional (expressed as a maximization over a reverse random transformation) which turns out to be ρI c 1 1`ρ pX; Yq and which he denoted by E 0 pρ, P X q, although in Gallager's notation that corresponds to ρI 1 1`ρ pX; Yq, as we will see in (233). I c 0 pX; Yq and I c 8 pX; Yq are defined by taking the corresponding limits. 30. In the discrete case, (110) boils down to which can be juxtaposed with the much easier expression in (82) for I α pX; Yq involving no further optimization. Minimizing the Lagrangian, we can verify that the minimizer in (112) satisfies (93). With pX, s Yq " P XˆQY , we have where the expectations are with respect to X. 31. The respective minimizers of (72) and (110), namely, the α-response and the xαyresponse, are quite different. Most notably, in contrast to Item 7, an explicit expression for P Yxαy is unknown. Instead of defining P Yxαy through (92), [36] defines it, equivalently, as the fixed point of the operator (dubbed the Augustin operator in [43]) which maps the set of probability measures on the output space to itself, where X " P X . Although we do not rely on them, Lemma 34.2 of (α P p0, 1q) and Lemma 13 of [43] (α ą 1) claim that the minimizer in (110), referred to in [43] as the Augustin mean of order α, is unique and is a fixed point of the operator T α regardless of P X . Moreover, Lemma 13(c) of [43] establishes that for α P p0, 1q and finite input alphabets, repeated iterations of the operator T α with initial argument P Yrαs converge to P Yxαy . 32. It is interesting to contrast the next example with the formulas in Examples 2 and 6. This result can be obtained by postulating a zero-mean Gaussian distribution with variance v 2 α as P Yxαy and verifying that (92) is indeed satisfied if v 2 α is chosen as in (116). 
The first step is to invoke (32), which yields where we have denoted s 2 Assembling (120) and (121), the right side of (92) becomes where (124) follows by Gaussian integration, and the marvelous simplification in (125) is satisfied provided that we choose Comparing (122) and (125), we see that (92) is indeed satisfied with Yxαy " N`0, v 2 α˘i f v 2 α satisfies the quadratic equation (126), whose solution is in (116)-(118). Invoking (32) and (116), we obtain Beyond its role in evaluating the Augustin-Csiszár mutual information for Gaussian inputs, the Gaussian distribution in (116) has found some utility in the analysis of finite blocklength fundamental limits for data transmission [55]. 33. This item gives a variational representation for the Augustin-Csiszár mutual information in terms of mutual information and conditional relative entropy (i.e., non-Rényi information measures). As we will see in Section 10, this representation accounts for the role played by Augustin-Csiszár mutual information in expressing error exponent functions. Theorem 8. For α P p0, 1q, the Augustin-Csiszár mutual information satisfies the variational representation in terms of conditional relative entropy and mutual information, where the minimum is over all the random transformations from the input to the output spaces. Proof. Invoking (47) with pP 1 , P 0 q Ð pP Y|X"x , Q Y q we obtain " min Averaging over x " P X , followed by minimization with respect to Q Y yields (128) upon recalling (67). In the finite-alphabet case with α P p0, 1q Y p1, 8q, the representation in (128) is implicit in the appendix of [32], and stated explicitly in [39], where it is shown by means of a minimax theorem. This is one of the instances in which the proof of the result is considerably easier for α P p0, 1q; we can take the following route to show (128) for α ą 1. Neglecting to emphasize its dependence on P X , denote Invoking (47) we obtain Averaging (132) with respect to P X followed by minimization over Q Y , results in which shows ě in (128). If a minimax theorem can be invoked to show equality in (134), then (128) is established for α ą 1. For that purpose, for fixed R Y|X , f p¨, R Y|X q is convex and lower semicontinuous in Q Y on the set where it is finite. Rewriting it can be seen that f pQ Y ,¨q is upper semicontinuous and concave (if α ą 1). A different, and considerably more intricate route is taken in Lemma 13(d) of [43], which also gives (128) for α ą 1 assuming finite input alphabets. 34. Unlike mutual information, neither I α pX; Yq " I α pY; Xq nor I c α pX; Yq " I c α pY; Xq hold in general. 35. It was shown in Theorem 5.2 of [38] that α-mutual information satisfies the data processing lemma, namely, if X and Z are conditionally independent given Y, then I α pZ; Xq ď mintI α pZ; Yq, I α pY; Xqu. 37. The convexity/concavity properties of the generalized mutual informations are summarized next. Ip¨, P Y|X q and I c α p¨, P Y|X q are concave functions. The same holds for I α p¨, P Y|X q if α ą 1. (c) If α P p0, 1q, then IpP X ,¨q, I α pP X ,¨q and I c α pP X ,¨q are convex functions. In general, it holds since (67) is the infimum of linear functions of P X . The same reasoning applies to Augustin-Csiszár mutual information in view of (110). For α-mutual information with α ą 1, notice from (51) that D α pP Y|X } Q Y |P X q is concave in P X if α ą 1. Therefore, (c) The convexity of IpP X ,¨q and I α pP X ,¨q follow from the convexity of D α pP}Qq in pP, Qq for α P p0, 1s as we saw in Item 18. 
To show convexity of I c α pP X ,¨q if α P p0, 1q, we apply (169) in Item 45 with P Y|X " λP 1 Y|X`p 1´λqP 0 Y|X , and invoke the convexity of I α pP X ,¨q: Although not used in the sequel, we note, for completeness, that if α P p0, 1q Y p1, 8q, [38] (see corrected version in [41]) shows that exp´´1´1 α¯I α p¨, P Y|X q¯{pα´1q is concave. 5. Interplay between I α pP X , P Y|X q and I c α pP X , P Y|X q In this section we study the interplay between both notions of mutual informations of order α, and, in particular, various variational representations of these information measures. 38. For given α P p0, 1q Y p1, 8q and P Y|X : A Ñ B, define Q Xrαs !" P X , the α-adjunct of P X by with κ α the constant in (14) and P Yrαs , the α-response to P X . 39. Example 9. Let Y " X`N with X " N`0, σ 2 X˘i ndependent of N " N`0, σ 2 N˘, and snr " 40. Theorem 10. The xαy-response to Q Xrαs is P Yrαs , the α-response to P X . Proof. We just need to verify that (92) is satisfied if we substitute Yxαy by Yrαs, and instead of taking the expectation in the right side with respect to X " P X we take it with respect to r X " Q Xrαs . Then, where (154) is by change of measure, (155) follows by substitution of (152), and (156) is the same as (13). 41. For given α P p0, 1q Y p1, 8q and P Y|X : A Ñ B, we define Q Xxαy !" P X , the xαyadjunct of an input probability measure P X through where P Yxαy is the xαy-response to P X and υ α is a normalizing constant so that Q Xxαy is a probability measure. According to (9), we must have Hence, 42. With the aid of the expression in Example 7, we obtain Example 10. Let Y " X`N with X " N`0, σ 2 X˘i ndependent of N " N`0, σ 2 N˘, and snr " Then, the xαy-adjunct of the input is which, in contrast to Q Xrαs , has larger variance than σ 2 X if α P p0, 1q. 43. The following result is the dual of Theorem 10. Theorem 11. The α-response to Q Xxαy is P Yxαy , the xαy-response to P X . Therefore, Proof. The proof is similar to that of Theorem 10. We just need to verify that we obtain the right side of (92) if on the right side of (91) we substitute P X by Q Xxαy and P Yrαs by P Yxαy . Let s X " Q Xxαy . Then, where (162) 44. By recourse to a minimax theorem, the following representation is given for α P p0, 1q Y p1, 8q in the case of finite alphabets in [39], and dropping the restriction on the finiteness of the output space in [43]. As we show, a very simple and general proof is possible for α P p0, 1q. Theorem 12. Fix α P p0, 1q, P X P P A and P Y|X : A Ñ B. Then, where the minimum is attained by Q Xrαs , the α-adjunct of P X defined in (152). Proof. The variational representations in (81) and (128) result in (165). To show that the minimum is indeed attained by Q Xrαs , recall from Theorem 10 that the xαyresponse to Q Xrαs is P Yrαs . Therefore, evaluating the term in tu in (165) for Q X Ð Q Xrαs yields, with r X " Q Xrαs , where (167) follows from (152) and (168) Theorem 13. Fix α P p0, 1q, P X P P A and P Y|X : A Ñ B. Then, The maximum is attained by Q Xxαy , the xαy-adjunct of P X defined by (157). Proof. First observe that (165) implies that ě holds in (169). Second, the term in tu on the right side of (169) evaluated at Q X Ð Q Xxαy becomes p1´αq I α pQ Xxαy , P Y|X q´DpP X } Q Xxαy q " p1´αq I α pQ Xxαy , P Y|X q`p1´αqI c α pP X , P Y|X q`υ α " p1´αqI c α pP X , P Y|X q, where (170) follows by taking the expectation of minus (157) with respect to P X . Therefore, ď also holds in (169) and the maximum is attained by Q Xxαy , as we wanted to show. 
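Before moving on, a small numerical sketch (not part of the original text; the input distribution and channel matrix below are arbitrary choices) may help to compare the two order-α dependence measures that Theorems 12 and 13 interrelate. For a finite-alphabet channel, I_α(P_X, P_Y|X) is available in closed form (the discrete expression mentioned in Item 24), whereas I_c_α(P_X, P_Y|X) is computed here by iterating a fixed-point map of the kind described in Item 31: each step averages, under P_X, the order-α tilted measures between P_Y|X=x and the current output law, starting from the α-response. The sketch also checks the ordering I_α ≤ I_c_α for α ∈ (0,1), which follows from Jensen's inequality (cf. Item 16), and that both quantities approach the mutual information as α ↑ 1.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

def alpha_mutual_info(PX, W, alpha):
    # discrete closed form: I_alpha = alpha/(alpha-1) * log sum_y ( sum_x P(x) W(y|x)^alpha )^(1/alpha)
    inner = np.sum(PX[:, None] * W**alpha, axis=0) ** (1.0 / alpha)
    return (alpha / (alpha - 1.0)) * np.log(np.sum(inner))

def alpha_response(PX, W, alpha):
    # alpha-response to PX (Item 7); in the discrete case proportional to (sum_x P(x) W(y|x)^alpha)^(1/alpha)
    s = np.sum(PX[:, None] * W**alpha, axis=0) ** (1.0 / alpha)
    return s / s.sum()

def augustin_mean(PX, W, alpha, iterations=2000):
    # fixed-point iteration: Q <- sum_x P(x) * (order-alpha tilted measure between W(.|x) and Q)
    Q = alpha_response(PX, W, alpha)
    for _ in range(iterations):
        tilt = W**alpha * Q[None, :] ** (1.0 - alpha)
        tilt /= tilt.sum(axis=1, keepdims=True)
        Q = np.sum(PX[:, None] * tilt, axis=0)
    return Q

def augustin_csiszar_mi(PX, W, alpha):
    Q = augustin_mean(PX, W, alpha)
    return sum(PX[x] * renyi_divergence(W[x], Q, alpha) for x in range(len(PX)))

# arbitrary binary-input, ternary-output example; rows of W are P_{Y|X=x}
PX = np.array([0.6, 0.4])
W = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.3, 0.5]])

for a in (0.3, 0.6, 0.9, 0.999):
    ia, ic = alpha_mutual_info(PX, W, a), augustin_csiszar_mi(PX, W, a)
    print(f"alpha={a}:  I_alpha={ia:.6f}  I_c_alpha={ic:.6f}  ordering holds: {ia <= ic + 1e-9}")
```

The gap between the two printed columns closes as α approaches 1, and the converged output law can be compared against a brute-force minimization of the average of D_α(P_Y|X=x‖Q) over a fine grid on the output simplex to confirm numerically that it is the xαy-response, i.e., the minimizer in the definition of I_c_α.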
Hinging on Theorem 8, Theorems 12 and 13 are given for α P p0, 1q which is the region of interest in the analysis of error exponents. Whenever, as in the finite-alphabet case, (128) holds for α ą 1, Theorems 12 and 13 also hold for α ą 1. Notice that since the definition of Q Xxαy involves P Yxαy , the fact that it attains the maximum in (169) does not bring us any closer to finding I c α pX; Yq for a specific input probability measure P X . Fortunately, as we will see in Section 8, (169) proves to be the gateway to the maximization of I c α pX; Yq in the presence of input-cost constraints. 46. Focusing on the main range of interest, α P p0, 1q, we can express (169) as where we have defined the function (dependent on α, P X , and P Y|X ) and ξ α is the solution to 9 Ipξ α q " 1 1´α . Recall that the maxima over the input distribution in (172) and (175) are attained by the xαy-adjunct Q Xxαy defined in Item 41. 47. At this point it is convenient to summarize the notions of input and output probability measures that we have defined for a given α, random transformation P Y|X , and input probability measure P X : • P Y : The familiar output probability measure P X Ñ P Y|X Ñ P Y , defined in Item 5. • P Yrαs : The α-response to P X , defined in Item 7. It is the unique achiever of the minimization in the definition of α-mutual information in (67). • P Yxαy : The xαy-response to P X defined in Item 26. It is the unique achiever of the minimization in the definition of Augustin-Csiszár α-mutual information in (110). • Q Xrαs : The α-adjunct of P X , defined in (152). The xαy-response to Q Xrαs is P Yrαs . Furthermore, Q Xrαs achieves the minimum in (165). • Q Xxαy : The xαy-adjunct of P X , defined in (157). The α-response to Q Xxαy is P Yxαy . Furthermore, Q Xxαy achieves the maximum in (169). Maximization of I α pX; Yq Just like the maximization of mutual information with respect to the input distribution yields the channel capacity (of course, subject to conditions [57]), the maximization of I α pX; Yq and of I c α pX; Yq arises in the analysis of error exponents, as we will see in Section 10. A recent in-depth treatment of the maximization of α-mutual information is given in [45]. As we see most clearly in (82) for the discrete case, when it comes to its optimization, one advantage of I α pX; Yq over IpX; Yq is that the input distribution does not affect the expression through its influence on the output distribution. 48. The maximization of α-mutual information is facilitated by the following result. Theorem 14 ([45]). Given α P p0, 1q Y p1, 8q; a random transformation P Y|X : A Ñ B; and, a convex set P Ă P A , the following are equivalent. (a) PX P P attains the maximal α-mutual information on P, I α pPX, P Y|X q " max PPP I α pP, P Y|X q ă 8. (b) For any P X P P, and any output distribution Q Y P P B , where PY rαs is the α-response to PX. Moreover, if P Yrαs denotes the α-response to P X , then D α pP Yrαs }PY rαs q ď I α pPX, P Y|X q´I α pP X , P Y|X q ă 8. Note that, while I α p¨, P Y|X q may not be maximized by a unique (or, in fact, by any) input distribution, the resulting α-response PY rαs is indeed unique. If P is such that none of its elements attain the maximal I α , it is known [42,45] that the α-response to any asymptotically optimal sequence of input distributions converges to PY rαs . This is the counterpart of a result by Kemperman [58] concerning mutual information. 49. The following example appears in [45]. Example 11. Let Y " X`N where N " N`0, σ 2 N˘i ndependent of X. 
Fix α P p0, 1q and P ą 0. Suppose that the set, P Ă P A , of allowable input probability measures consists of those that satisfy the constraint We can readily check that X˚" Np0, Pq satisfies (181) with equality, and as we saw in Example 2, its α-response is PY rαs " N p0, α P`σ 2 q. Theorem 14 establishes that PX does indeed maximize the α-mutual information among all the distributions in P, yielding (recall Example 6) max P X PP Curiously, if, instead of P defined by the constraint (181), we consider the more conventional P " tX : ErX 2 s ď Pu, then the left side of (182) is unknown at present. Numerical evidence shows that it can exceed the right side by employing non-Gaussian inputs. (56) and (178) implies that if PX attains the finite maximal unconstrained α-mutual information and its α-response is denoted by PY rαs , then, max X I α pX; Yq " max PPP I α pP, P Y|X q " max aPA D α pP Y|X"a }PY rαs q, Recalling which requires that PXpAα q " 1, with Aα " " x P A : D α pP Y|X"x }PY rαs q " max aPA D α pP Y|X"a }PY rαs q * . For discrete alphabets, this requires that if x R Aα , then PXpxq " 0, which is tantamount to with equality for all x P A such that PXpxq ą 0. For finite-alphabet random transformations this observation is equivalent to Theorem 5.6.5 in [9]. 51. Getting slightly ahead of ourselves, we note that, in view of (128), an important consequence of Theorem 15 below, is that, as anticipated in Item 25, the unconstrained maximization of I α pX; Yq for α P p0, 1q can be expressed in terms of the solution to an optimization problem involving only conventional mutual information and conditional relative entropy. For ρ ě 0, 7. Unconstrained Maximization of I c α pX; Yq 52. In view of the fact that it is much easier to determine the α-mutual information than the order-α Augustin-Csiszár information, it would be advantageous to show that the unconstrained maximum of I c α pX; Yq equals the unconstrained maximum of I α pX; Yq. In the finite-alphabet setting, in which it is possible to invoke a "minisup" theorem (e.g., see Section 7.1.7 of [59]), Csiszár [32] showed this result for α ą 0. The assumption of finite output alphabets was dropped in Theorem 1 of [42], and further generalized in Theorem 3 of the same reference. As we see next, for α P p0, 1q, it is possible to give an elementary proof without restrictions on the alphabets. (187) Proof. In view of (143), ě holds in (187). To show ď, we assume sup X I α pX; Yq ă 8 as, otherwise, there is nothing left to prove. The unconstrained maximization identity in (183) implies sup X I α pX; Yq " sup aPA D α pP Y|X"a }PY rαs q (188) where PY rαs is the unique α-response to any input that achieves the maximal α-mutual information, and if there is no such input, it is the limit of the α-responses to any asymptotically optimal input sequence (Item 48). Furthermore, if tX n u is asymptotically optimal for I α , i.e., lim nÑ8 I α pX n ; Y n q " sup X I α pX; Yq, then tX n u is also asymptotically optimal for I c α because for any δ ą 0, we can find N, such that for all n ą N, ě I α pX n ; Y n q. Maximization of I c α pX; Yq Subject to Average Cost Constraints This section is at the heart of the relevance of Rényi information measures to error exponent functions. 53. 
Given α P p0, 1q, P Y|X : A Ñ B, a cost function b : A Ñ r0, 8q and real scalar θ ě 0, the objective is to maximize the Augustin-Csiszár mutual information allowing only those probability measures that satisfy ErbpXqs ď θ, namely, Unfortunately, identity (187) no longer holds when the maximizations over the input probability measure are cost-constrained, and, in general, we can only claim C c α pθq ě sup P X : A conceptually simple approach to solve for C c α pθq is to (a) postulate an input probability measure PX that achieves the supremum in (197); (b) solve for its xαy-response PY using (92); (c) show that pPX, PY q is a saddle point for the game with payoff function where Q Y P P A and P X is chosen from the convex subset of P A of probability measures which satisfy ErbpXqs ď θ. Since PY is already known, by definition, to be the xαy-response to PX, verifying the saddle point is tantamount to showing that BpP X , PY q is maximized by PX among tP X P P A : ErbpXqs ď θu. Theorem 1 of [43] guarantees the existence of a saddle point in the case of finite input alphabets. In addition to the fact that it is not always easy to guess the optimum input PX (see e.g., Section 12), the main stumbling block is the difficulty in determining the xαy-response to any candidate input distribution, although sometimes this is indeed feasible as we saw in Example 7. 54. Naturally, Theorem 15 implies If the unconstrained maximization of I c α p¨, P Y|X q is achieved by an input distribution X ‹ that satisfies ErbpX ‹ qs ď θ, then equality holds in (200), which, in turn, is equal to I c α pP ‹ X , P Y|X q. In that case, the average cost constraint is said to be inactive. For most cost functions and random transformations of practical interest, the cost constraint is active for all θ ą 0. To ascertain whether it is, we simply verify whether there exists an input achieving the right side of (200), which happens to satisfy the constraint. If so, C c α pθq has been found. The same holds if we can find a sequence tX n u such that ErbpX n qs ď θ and I α pX n ; Y n q Ñ sup X I α pX; Yq. Otherwise, we proceed with the method described below. Thus, henceforth, we assume that the cost constraint is active. 55. The approach proposed in this paper to solve for C c α pθq for α P p0, 1q hinges on the variational representation in (172), which allows us to sidestep having to find any xαy-response. Note that once we set out to maximize I c α pP X , P Y|X q over P " tP X P P A : ErbpXqs ď θu, the allowable Q X in the maximization in (175) range over a ξ-blow-up of P defined by Γ ξ pPq " tQ X P P A : DP X P P, such that DpP X }Q X q ď ξu. (201) As we show in Item 56, we can accomplish such an optimization by solving an unconstrained maximization of the sum of α-mutual information and a term suitably derived from the cost function. 56. It will not be necessary to solve for (176), as our goal is to further maximize (172) over P X subject to an average cost constraint. The Lagrangian corresponding to the constrained optimization in (197) is where on the left side we have omitted, for brevity, the dependence on θ stemming from the last term on the right side. 
The Lagrange multiplier method (e.g., [60]) implies that if X˚achieves the supremum in (197), then there exists ν˚ě 0 such that for all P X on A and ν ě 0, Note from (202) that the right inequality in (203) can only be achieved if and, consequently, The pivotal result enabling us to obtain C c α pθq without the need to deal with Augustin-Csiszár mutual information is the following. Theorem 16. Given α P p0, 1q, ν ě 0, P Y|X : A Ñ B, and b : A Ñ r0, 8q, denote the function Then, and C c α pθq " min νě0 tν θ`A α pνqu. (208) In conclusion, we have shown that the maximization of Augustin-Csiszár mutual information of order α subject to ErbpXqs ď θ boils down to the unconstrained maximization of a Lagrangian consisting of the sum of α-mutual information and an exponential average of the cost function. Circumventing the need to deal with xαy-responses and with Augustin-Csiszár mutual information of order α leads to a particularly simple optimization, as illustrated in Sections 11 and 12. 57. Theorem 16 solves for the maximal Augustin-Csiszár mutual information of order α under an average cost constraint without having to find out the input probability measure PX that attains it nor its xαy-response PY (using the notation in Item 53). Instead, it gives the solution as Although we are not going to invoke a minimax theorem, with the aid of Theorem 9-(b) we can see that the functional within the inner brackets is concave in P X ; Furthermore, if V P p0, 1s, then log ErV ν s is easily seen to be convex in ν with the aid of the Cauchy-Schwarz inequality. Before we characterize the saddle point pν˚, QXq of the game in (215) we note that pPX, PY q can be readily obtained from pν˚, QXq. where τ α is a normalizing constant ensuring that PX is a probability measure. Proof. (a) We had already established in Theorem 13 that the maximum on the right side of (210) is achieved by the xαy-adjunct of P X . In the special case ν " ν˚, such P X is PX. Therefore, QX, the argument that achieves the maximum in (206) for ν " ν˚, is the xαy-adjunct of PX. According to Theorem 11, the α-response to QX is the xαy-response to PX, which is PY by definition. (c) For ν " ν˚, PX achieves the supremum in (209) and the infimum in (211). Therefore, (216) follows from Theorem 1 with Z " QX and gp¨q given by (214) particularized to ν " ν˚. The saddle point of (215) admits the following characterization. Proof. First, we show that the scalar ν˚ě 0 that minimizes satisfies (217). If we abbreviate V " exp`´p1´αqbpX˚q˘P p0, 1s, then the dominated convergence theorem results in d dν Therefore, (217) is equivalent to 9 f pν˚q " 0, which is all we need on account of the convexity of f p¨q. To show (218), notice that for all a P A, where (223) is (216) and (224) is (157) where (227) With the same approach, we can postulate, for every ν ě 0, an input distribution R ν X , whose α-response R ν Yrαs satisfies where the only condition we place on c α pνq is that it not depend on a P A. If this is indeed the case, then the same derivation in (226)-(229) results in and we determine ν˚as the solution to θ "´9 c α pν˚q, in lieu of (217). Sections 11 and 12 illustrate the effortless nature of this approach to solve for A α pνq. Incidentally, (230) can be seen as the α-generalization of the condition in Problem 8.2 of [48], elaborated later in [61]. Gallager's E 0 Functions and the Maximal Augustin-Csiszár Mutual Information In keeping with Gallager's setting [9], we stick to discrete alphabets throughout this section. 59. 
In his derivation of an achievability result for discrete memoryless channels, Gallager [8] introduced the function (1), which we repeat for convenience, Comparing (82) and (232), we obtain E 0 pρ, P X q " ρ I 1 which, as we mentioned in Section 1, is the observation by Csiszár in [30] that triggered the third phase in the representation of error exponents. Popularized in [9], the E 0 function was employed by Shannon, Gallager and Berlekamp [10] for ρ ě 0 and by Arimoto [62] for ρ P p´1, 0q in the derivation of converse results in data transmission, the latter of which considers rates above capacity, a region in which error probability increases with blocklength, approaching one at an exponential rate. For the achievability part, [8] showed upper bounds on the error probability involving E 0 pρ, P X q for ρ P r0, 1s. Therefore, for rates below capacity, the α-mutual information only enters the picture for α P p0, 1q. One exception in which Rényi divergence of order greater than 1 plays a role at rates below capacity was found by Sason [63], where a refined achievability result is shown for binary linear codes for output symmetric channels (a case in which equiprobable P X maximizes (233)), as a function of their Hamming weight distribution. Although Gallager did not have the benefit of the insight provided by the Rényi information measures, he did notice certain behaviors of E 0 reminiscent of mutual information. For example, the derivative of (233) with respect to ρ, at ρ Ð 0 is equal to IpX; Yq. As pointed out by Csiszár in [32], in the absence of cost constraints, Gallager's E 0 function in (232) satisfies max P X E 0 pρ, P X q " ρ max in view of (233) and (187). Recall that Gallager's modified E 0 function in the case of cost constraints is E 0 pρ, P X , r, θq "´log ÿ yPB˜ÿ xPA P X pxq exppr bpxq´r θqP which, like (232) he introduced in order to show an achievability result. Up until now, no counterpart to (234) has been found with cost constraints and (235). This is accomplished in the remainder of this section. 60. In the finite alphabet case the following result is useful to obtain a numerical solution for the functional in (206). More importantly, it is relevant to the discussion in Item 61. Theorem 19. In the special case of discrete alphabets, the function in (206) is equal to where the maximization is over all G : A Ñ r0, 8q such that ÿ aPA Gpaq expp´p1´αqνbpaqq " 1. (241) 61. We can now proceed to close the circle between the maximization of Augustin-Csiszár mutual information subject to average cost constraints (Phase 3 in Section 1) and Gallager's approach (Phase 1 in Section 1). Theorem 20. In the discrete alphabet case, recalling the definitions in (202) and (235) , for ρ ą 0, max P X E 0 pρ, P X , r, θq " ρ max where the maximizations are over P A . Proof. With the maximization of (235) with the respect to the input probability measure yields where • the maximization on the right side of (247) is over all G : A Ñ r0, 8q that satisfy (237), since that constraint is tantamount to enforcing the constraint that P X P P A on the left side of (247); • (248) ðù Theorem 19; • (249) ðù Theorem 16. The proof of (242) is complete once (244) is invoked to substitute α and ν from the right side of (249). If we now minimize the outer sides of (245)-(249) with respect to r we obtain, using (205) and (244), In p. 
329 of [9], Gallager poses the unconstrained maximization (i.e., over P X P P A ) of the Lagrangian Note the apparent discrepancy between the optimizations in (243) and (253): the latter is parametrized by r and γ (in addition to ρ and θ), while the maximization on the right side of (243) does not enforce any average cost constraint. In fact, there is no disparity since Gallager loc. cit. finds serendipitously that γ " 0 regardless of r and θ, and, therefore, just one parameter is enough. 62. The raison d'être for Augustin's introduction of I c α in [36] was his quest to view Gallager's approach with average cost constraints under the optic of Rényi information measures. Contrasting (232) and (235) and inspired by the fact that, in the absence of cost constraints, (232) satisfies a variational characterization in view of (69) and (233), Augustin [36] dealt, not with (235), but with min Q Y D α pP Y|X }Q Y |P X q, whereP Y|X"x " P Y|X"x exp`r 1 bpxq˘. Assuming finite alphabets, Augustin was able to connect this quantity with the maximal I c α pX; Yq under cost constraints in an arcane analysis that invokes a minimax theorem. This line of work was continued in Section 5 of [43], which refers to min Q Y D α pP Y|X }Q Y |P X q as the Rényi-Gallager information. Unfortunately, sincẽ P Y|X is not a random transformation, the conditional pseudo-Rényi divergence D α pP Y|X }Q Y |P X q need not satisfy the key additive decomposition in Theorem 4 so the approach of [36,43] fails to establish an identity equating the maximization of Gallager's function (235) with the maximization of Augustin-Csiszár mutual information, which is what we have accomplished through a crisp and elementary analysis. Error Exponent Functions The central objects of interest in the error exponent analysis of data transmission are the functions E sp pR, P X q and E r pR, P X q of a random transformation P Y|X : A Ñ B. Reflecting the three different phases referred to in Section 1, there is no unanimity in the definition of those functions. Following [48], we adopt the standard canonical Phase 2 (Section 1.2) definitions of those functions, which are given in Items 63 and 67. 63. If R ě 0 and P X P P A , the sphere-packing error exponent function is (e.g., (10.19) of [48]) E sp pR, P X q " min 64. As a function of R ě 0, the basic properties of (254) for fixed pP X , P Y|X q are as follows. Entropy 2021, 23,199 34 of 52 (a) If R ě IpP X , P Y|X q, then E sp pR, P X q " 0; If R ă IpP X , P Y|X q, then E sp pR, P X q ą 0; (c) The infimum of the arguments for which the sphere-packing error exponent function is finite is denoted by R 8 pP X q; (d) On the interval R P pR 8 pP X q, IpP X , P Y|X qq, E sp pR, P X q is convex, strictly decreasing, continuous, and equal to (254) where the constraint is satisfied with equality. This implies that for R belonging to that interval, we can find ρ R ě 0 so that for all r ě 0, 65. In view of Theorem 8 and its definition in (254), it is not surprising that E sp pR, P X q is intimately related to the Augustin-Csiszár mutual information, through the following key identity. Proof. First note that ě holds in (256) because from (128) we obtain, for all ρ ě 0, where (260) follows from the definition in (254). To show ď in (256) for those R such that 0 ă E sp pR, P X q ă 8, Property (d) in Item 64 allows us to write where (262) follows from (255). 
To determine the region where the sphere-packing error exponent is infinite and show (257), first note that if R ă I c 0 pX; Yq " lim αÓ0 I c α pX; Yq, then E sp pR, P X q " 8 because for any ρ ě 0, the function in tu on the right side of (256) satisfies where (264) follows from the monotonicity of I c α pX; Yq in α we saw in (143). Conversely, if I c 0 pX; Yq ă R ă 8, there exists P p0, 1q such that I c pX; Yq ă R, which implies that in the minimization we may restrict to those Q Y|X such that IpP X , Q Y|X q ď R, and consequently, I c pX; Yq ě 1´ E sp pR, P X q. Therefore, to avoid a contradiction, we must have E sp pR, P X q ă 8. The remaining case is I c 0 pX; Yq " 8. Again, the monotonicity of the Augustin-Csiszár mutual information implies that I c α pX; Yq " 8 for all α ą 0. So, (128) prescribes DpQ Y|X }P Y|X |P X q " 8 for any Q Y|X is such that IpP X , Q Y|X q ă 8. Therefore, E sp pR, P X q " 8 for all R ě 0, as we wanted to show. Augustin [36] provided lower bounds on error probability for codes of type P X as a function of I c α pX; Yq but did not state (256); neither did Csiszár in [32] as he was interested in a non-conventional parametrization (generalized cutoff rates) of the reliability function. As pointed out in p. 5605 of [64], the ingredients for the proof of (256) were already present in the hint of Problem 23 of Section II.5 of [24]. In the discrete case, an exponential lower bound on error probability for codes with constant composition P X is given as a function of I c 1 1`ρ pP X , P Y|X q in [44,64]. As in [64], Nakiboglu [65] gives (256) as the definition of the sphere-packing function and connects it with (254) in Lemma 3 therein, within the context of discrete input alphabets. In the discrete case, (257) is well-known (e.g., [66]), and given by (83). As pointed out in [40], max X I c 0 pX; Yq is the zero-error capacity with noiseless feedback found by Shannon [67], provided there is at least a pair pa 1 , a 2 q P A 2 such that P Y|X"a 1 K P Y|X"a 2 . Otherwise, the zero-error capacity with feedback is zero. 66. The critical rate, R c pP X q, is defined as the smallest abscissa at which the convex function E sp p¨, P X q meets its supporting line of slope´1. According to (256), 67. If R ě 0 and P X P P A , the random-coding exponent function is (e.g., (10.15) of [48]) with rts`" maxt0, tu. 68. The random-coding error exponent function is determined by the sphere-packing error exponent function through the following relation, illustrated in Figure 1. pX; Yq E sp pR,P X q E r pR,P X q R Figure 1. E sp p¨, P X q and E r p¨, P X q. E r pR, P X q " min rěR E sp pr, P X q`r´R ( E sp pR, P X q, R P rR c pP X q, IpP X , P Y|X qs; I c 1 2 pX; Yq´R, R P r0, R c pP X qs. Proof. Identities (268) and (269) are well-known (e.g. Lemma 10.4 and Corollary 10.4 in [48]). To show (270), note that (256) expresses E sp p¨, P X q as the supremum of supporting lines parametrized by their slope´ρ. By definition of critical rate (for brevity, we do not show explicitly its dependence on P X ), if R P rR c , IpP X , P Y|X qs, then E sp pR, P X q can be obtained by restricting the optimization in (256) to ρ P r0, 1s. In that segment of values of R, E sp pR, P X q " E r pR, P X q according to (269). Moreover, on the interval R P r0, R c s, we have max ρPr0,1s where we have used (266) and (269). The first explicit connection between E r pR, P X q and the Augustin-Csiszár mutual information was made by Poltyrev [35] although he used a different form for I c α pX; Yq, as we discussed in (29). 
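The relations just stated can be exercised numerically without ever touching the Augustin-Csiszár machinery if one picks a channel for which I_α and I_c_α coincide. The Python sketch below (illustrative only, not part of the original text) assumes a binary symmetric channel with crossover probability δ = 0.1 and equiprobable inputs, for which I_α(X;Y) = 1 − h_α(δ) bits as noted in Item 24, and, by the symmetry of this channel, I_c_α(X;Y) takes the same value. It evaluates E_sp(R, P_X) through the supremum over ρ ≥ 0 in (256) (approximated on a finite grid), obtains E_r(R, P_X) from E_sp via (268), and tabulates the two regimes of (270): E_r = E_sp above the critical rate and E_r(R) = I_c_{1/2}(X;Y) − R below it.

```python
import numpy as np

delta = 0.1  # crossover probability (arbitrary choice)

def h_alpha(d, alpha):
    """Binary Renyi entropy of order alpha, in bits."""
    return np.log2(d**alpha + (1.0 - d)**alpha) / (1.0 - alpha)

def I_alpha(alpha):
    """BSC with equiprobable inputs: I_alpha = I_c_alpha = 1 - h_alpha(delta) bits."""
    return 1.0 - h_alpha(delta, alpha)

rho = np.linspace(1e-6, 80.0, 100001)
E0 = rho * I_alpha(1.0 / (1.0 + rho))            # rho * I_c_{1/(1+rho)}(X;Y)

def E_sp(R):
    return np.max(E0 - rho * R)                  # (256): sup over rho >= 0 (grid approximation)

r_grid = np.linspace(0.005, 0.999, 400)
E_sp_grid = np.array([E_sp(r) for r in r_grid])

def E_r(R):
    # (268): E_r(R) = min_{r >= R} { E_sp(r) + r - R }; include r = R itself among the candidates
    m = r_grid >= R
    return min(E_sp(R), float(np.min(E_sp_grid[m] + r_grid[m] - R)))

C = 1.0 + delta * np.log2(delta) + (1 - delta) * np.log2(1 - delta)   # capacity, in bits
R0 = I_alpha(0.5)                                # cutoff rate I_c_{1/2}(X;Y)
eps = 1e-5                                       # critical rate (Item 66), here as d/drho [rho * I_{1/(1+rho)}] at rho = 1
Rc = ((1 + eps) * I_alpha(1 / (2 + eps)) - (1 - eps) * I_alpha(1 / (2 - eps))) / (2 * eps)
print(f"C={C:.4f}  R0={R0:.4f}  Rc={Rc:.4f}")

for R in (0.10, 0.15, Rc, 0.30, 0.45):
    print(f"R={R:.3f}  E_sp={E_sp(R):.4f}  E_r={E_r(R):.4f}  R0-R={R0 - R:.4f}")
```

For δ = 0.1 this prints a cutoff rate of roughly 0.32 bits and a critical rate of roughly 0.19 bits, with E_r agreeing with R_0 − R below R_c and with E_sp above it, in accordance with (270); the resulting curves have the shape sketched in Figure 1.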
69. The unconstrained maximizations over the input distribution of the sphere-packing and random coding error exponent functions are denoted, respectively, by Coding theorems [8][9][10]22,48] have shown that when these functions coincide they yield the reliability function (optimum speed at which the error probability vanishes with blocklength) as a function of the rate R ă max X IpX; Yq. The intuition is that, for the most favorable input distribution, errors occur when the channel behaves so atypically that codes of rate R are not reliable. There are many ways in which the channel may exhibit such behavior and they are all unlikely, but the most likely among them is the one that achieves (254). It follows from (187), (256) and (270) that (274) and (275) can be expressed as Therefore, we can sidestep working with the Augustin-Csiszár mutual information in the absence of cost constraints. 70. Shannon [1] showed that, operating at rates below maximal mutual information, it is possible to find codes whose error probability vanishes with blocklength; for the converse, instead of error probability, Shannon measured reliability by the conditional entropy of the message given the channel output. That alternative reliability measure, as well as its generalization to Arimoto-Rényi conditional entropy, is also useful analyzing the average performance over code ensembles. It turns out (see e.g., [28,68]) that, below capacity, those conditional entropies also vanish exponentially fast in much the same way as error probability with bounds that are governed by E sp pRq and E r pRq thereby lending additional operational significance to those functions. 71. We now introduce a cost function b : A Ñ r0, 8q and real scalar θ ě 0, and reexamine the optimizations in (274) and (275) allowing only those probability measures that satisfy ErbpXqs ď θ. With a patent, but unavoidable, abuse of notation we define E sp pR, θq " sup where (279) where (284) follows from (270). In particular, if we define the critical rate and the cutoff rate as respectively, then it follows from (270) that Summarizing, the evaluation of E sp pR, θq and E r pR, θq can be accomplished by the method proposed in Section 8, at the heart of which is the maximization in (206) involving α-mutual information instead of Augustin-Csiszár mutual information. In Sections 11 and 12, we illustrate the evaluation of the error exponent functions with two important additive-noise examples. Additive Independent Gaussian Noise; Input Power Constraint We illustrate the procedure in Item 58 by taking Example 6 considerably further. 73. Suppose A " B " R, bpxq " x 2 , and P Y|X"a " N`a, σ 2 N˘. We start by testing whether we can find R ν X P P A such that its α-response satisfies (230). Naturally, it makes sense to try R ν X " N`0, σ 2˘f or some yet to be determined σ 2 . As we saw in Example 6, this choice implies that its α-response is R ν Yrαs " N`0, α σ 2`σ2 N˘. Specializing Example 4, we obtain where (292) follows if we choose the variance of the auxiliary input as In (294) we have introduced an alternative, more convenient, parametrization for the Lagrange multiplier λ " 2 ν σ 2 N log e P p0, αq. 296) " where we denoted snr " θ Entropy 2021, 23, 199 39 of 52 In accordance with Theorem 16 all that remains is to minimize (297) with respect to ν, or equivalently, with respect to λ. Differentiating (297) with respect to λ, the minimum is achieved at λ˚satisfying whose only valid root (obtained by solving a quadratic equation) is with ∆ defined in (118). 
So, for α P p0, 1q, (208) becomes Letting α " 1 1`ρ , we obtain 74. Alternatively, it is instructive to apply Theorem 18 to the current Gaussian/quadratic cost setting. Suppose we let QX " N`0, σ˚2˘, where σ˚2 is to be determined. With the aid of the formulas where µ ě 0, and X " N`0, σ 2˘, (217) becomes upon substituting σ 2 Ð σ˚2 and Likewise (218) translates into (291) and (292) with pν, σ 2 q Ð pν˚, σ˚2q, namely, Entropy 2021, 23, 199 40 of 52 Eliminating σ˚2 from (305) by means of (308) results in (299) and the same derivation that led to (300) shows that it is equal to ν˚θ`c α pν˚q. 75. Applying Theorem 17, we can readily find the input distribution, PX, that attains C c α pθq as well as its xαy-response PY (recall the notation in Item 53). According to Example 2, PY , the α-response to QX is Gaussian with zero mean and variance where (309) follows from (308) and (310) follows by using the expression for ∆ in (118). Note from Example 7 that PY is nothing but the xαy-response to N`0, snr σ 2 N˘. We can easily verify from Theorem 17 that indeed PX " N`0, snr σ 2 N˘s ince in this case (216) becomes which can only be satisfied by PX " N`0, snr σ 2 N˘i n view of (305). As an independent confirmation, we can verify, after some algebra, that the right sides of (127) and (300) are identical. In fact, in the current Gaussian setting, we could start by postulating that the distribution that maximizes the Augustin-Csiszár mutual information under the second moment constraint does not depend on α and is given by PX " Np0, θq. Its xαyresponse PY xαy was already obtained in Example 7. Then, an alternative method to find C c α pθq, given in Section 6.2 of [43], is to follow the approach outlined in Item 53. To validate the choice of PX we must show that it maximizes BpP X , PY xαy q (in the notation introduced in (199)) among the subset of P A which satisfies ErX 2 s ď θ. This follows from the fact that D α´P Y|X"x }PY xαy¯i s an affine function of x 2 . 76. Let's now use the result in Item 73 to evaluate, with a novel parametrization, the error exponent functions for the Gaussian channel under an average power constraint. Theorem 23. Let A " B " R, bpxq " x 2 , and P Y|X"a " N`a, σ 2 N˘. Then, for β P r0, 1s, The critical rate and cutoff rate are, respectively, Note that the parametric expression in (312) and (313) (shown in Figure 2) is, in fact, a closed-form expression for E sp pR, snr σ 2 N q since we can invert (313) to obtain The random coding error exponent is with the critical rate R c and cutoff rate R 0 in (314) and (315), respectively. It can be checked that (326) coincides with the expression given by Gallager [9] (p. 340) where he optimizes (235) with respect to ρ and r, but not P X , which he just assumes to be P X " Np0, θq. The expression for R c in (314) can be found in (7.4.34) of [9]; R 0 in (314) is implicit in p. 340 of [9], and explicit in e.g., [69]. 77. The expression for E sp pR, θq in Theorem 23 has more structure than meets the eye. The analysis in Item 73 has shown that E sp pR, P X q is maximized over P X with second moment not exceeding θ by PX " Np0, θq regardless of R P´0, 1 2 logp1`snrq¯. The fact that we have found a closed-form expression for (254) when evaluated at such input probability measure and P Y|X"a " N`a, σ 2 N˘i s indicative that the minimum therein is attained by a Gaussian random transformation QY |X . 
This is indeed the case: define the random transformation In comparison with the nominal random transformation P Y|X"a " N`a, σ 2 N˘, this channel attenuates the input and contaminates it with a more powerful noise. Then, Furthermore, invoking (33), we get where (333) is (312). Therefore, QY |X does indeed achieve the minimum in (254) if P Y|X"a " N`a, σ 2 N˘a nd PX " Np0, θq. So, the most likely error mechanism is the result of atypically large noise strength and an attenuated received signal. Both effects cannot be combined into additional noise variance: there is no σ 2 ą 0 such that Q Y|X"a " N`a, σ 2˘a chieves the minimum in (254). Additive Independent Exponential Noise; Input-Mean Constraint This section finds the sphere-packing error exponent for the additive independent exponential noise channel under an input-mean constraint. It is shown in [70,71] that max X : ErXsďθ IpX; X`Nq " logp1`snrq, achieved by a mixed random variable with density To determine C c α psnr ζq, α P p0, 1q, we invoke Theorem 18. A sensible candidate for the auxiliary input distribution QX is a mixed random variable with density where Γ˚P p0, 1q is yet to be determined. This is an attractive choice because its α-response, QY rαs , is particularly simple: exponential with mean α µ " ζ Γ˚, as we can verify using Laplace transforms. Then, if Z is exponential with unit mean, with the aid of Example 5, we can write So, (218) is satisfied with To evaluate (217), it is useful to note that if γ ą´1, then Therefore, the left side of (217) specializes to, withX˚" QX, while the expectation on the right side of (217) is given by Therefore, (217) yields with ρ " 1´α α . So, finally, (220), (344) and (345) give the closed-form expression C c α pθq " snr Γ˚log e´log Γ˚`1 1´α logpα`p1´αqΓ˚q. As in Item 73, we can postulate an auxiliary distribution that satisfies (230) for every ν ě 0. This is identical to what we did in (341)-(343) except that now (344) and (345) hold for generic ν and Γ. Then, (351) is the result of solving θ "´9 c α pν˚q, which is, in fact, somewhat simpler than obtaining it through (217). 79. We proceed to get a very simple parametric expression for E sp pR, θq. Theorem 24. Let A " B " r0, 8q, bpxq " x, and Y " X`N, with N exponentially distributed, independent of X, and ErNs " ζ. Then, under the average cost constraint ErbpXqs ď ζ snr, where η P p0, 1s. Now we go ahead and express both ρ˚and Γ˚as functions of snr and R exclusively. We may rewrite (357)-(360) as which, when plugged in (361), results in ρ˚" p1`snrq expp´Rq´1 where we have introduced Evidently, the left identity in (372) is the same as (355). The critical rate and the cutoff rate are obtained by particularizing (360) and (356) to ρ˚" 1 and ρ " 1, respectively. This yields As in (326), the random coding error exponent is E r pR, ζ snrq " # E sp pR, ζ snrq, R P pR c , logp1`snrqq; with the critical rate R c and cutoff rate R 0 in (373) and (375), respectively. This function is shown along with E sp pR, ζ snrq in Figure 3 for snr " 3. 80. In parallel to Item 77, we find the random transformation that explains the most likely mechanism to produce errors at every rate R, namely the minimizer of (254) when P X " PX, the maximizer of the Augustin-Csiszár mutual information of order α. 
In this case, PX is not as trivial to guess as in Section 11, but since we already found QX in (339) with Γ " Γ˚, we can invoke Theorem 17 to show that the density of PX achieving the maximal order-α Augustin-Csiszár mutual information is pXptq " Γα`p 1´αqΓ˚δ ptq`1´Γα`p 1´αqΓ˚α Γζ e´t Γ˚{ζ 1tt ą 0u, whose mean is, as it should, α ζ Γ˚1´Γα`p1´αqΓ˚" ζ snr " θ. Let QY be exponential with mean θ`κ, and QY |X"a have density qY |X"a ptq " 1 κ e´t´a κ 1tt ě au, and η as defined in (372). Using Laplace transforms, we can verify that PX Ñ QY |X Ñ QY where PX is the probability measure with density in (377). Let Z be unit-mean exponentially distributed. Writing mutual information as the difference between the output differential entropy and the noise differential entropy we get IpPX, QY |X q " hppθ`κqZq´hpκZq (381) in view of (363). Furthermore, using (335) and (379), DpQY |X } P Y|X |PXq " log ζ κ`ˆκ ζ´1˙l og e (384) " log η`ˆ1 η´1˙l og e (385) where we have used (380) and (354). Therefore, we have shown that QY |X is indeed the minimizer of (254). In this case, the most likely mechanism for errors to happen is that the channel adds independent exponential noise with mean ζ{η, instead of the nominal mean ζ. In this respect, the behavior is reminiscent of that of the exponential timing channel for which the error exponent is dominated (at least above critical rate) by an exponential server which is slower than the nominal [72]. Recap 81. The analysis of the fundamental limits of noisy channels in the regime of vanishing error probability with blocklength growing without bound expresses channel capacity in terms of a basic information measure: the input-output mutual information maximized over the input distribution. In the regime of fixed nonzero error probability, the asymptotic fundamental limit is a function of not only capacity but channel dispersion [73], which is also expressible in terms of an information measure: the variance of the information density obtained with the capacity-achieving distribution. In the regime of exponentially decreasing error probability (at fixed rate below capacity) the analysis of the fundamental limits has gone through three distinct phases. No information measures were involved during the first phase and any optimization with respect to various auxiliary parameters and input distribution had to rely on standard convex optimization techniques, such as Karush-Kuhn-Tucker conditions, which not only are cumbersome to solve in this particular setting, but shed little light on the structure of the solution. The second phase firmly anchored the problem in a large deviations foundation, with the fundamental limits expressed in terms of conditional relative entropy as well as mutual information. Unfortunately, the associated maximinimization in (2) did not immediately lend itself to analytical progress. Thanks to Csiszár's realization of the relevance of Rényi's information measures to this problem, the third phase has found a way to, not only express the error exponent functions as a function of information measures, but to solve the associated optimization problems in a systematic way. While, in the absence of cost constraints, the problem reduces to finding the maximal α-mutual information, cost constraints make the problem much more challenging because of the difficulty in determining the order-α Augustin-Csiszár mutual information. 
Fortunately, thanks to the introduction of an auxiliary input distribution (the xαy-adjunct of the distribution that maximizes I c α ), we have shown that α-mutual information also comes to the rescue in the maximization of the order-α Augustin-Csiszár mutual information in the presence of average cost constraints. We have also finally ended the isolation of Gallager's E 0 function with cost constraints from the representations in Phases 2 and 3. The pursuit of such a link is what motivated Augustin in 1978 to define a generalized mutual information measure. Overall, the analysis has given yet another instance of the benefits of variational representations of information measures, leading to solutions based on saddle points. However, we have steered clear of off-the-shelf minimax theorems and their associated topological constraints. We have worked out two channels/cost constraints (additive Gaussian noise with quadratic cost, and additive exponential noise with a linear cost) that admit closedform error-exponent functions, most easily expressed in parametric form. Furthermore, in Items 77 and 80 we have illuminated the structure of those closed-form expressions by identifying the anomalous channel behavior responsible for most errors at every given rate. In the exponential noise case, the solution is simply a noisier exponential channel, while in the Gaussian case it is the result of both a noisier Gaussian channel and an attenuated input. These observations prompt the question of whether there might be an alternative general approach that eschews Rényi's information measures to arrive at not only the most likely anomalous channel behavior, but the error exponent functions themselves.
regardless of whether the right side is finite. Proof. If P ! Q ! R, we may invoke the chain rule (7) to decompose ı P}R paq´ı Q}R paq " ı P}Q paq.
Then, the result follows by taking expectations of (A2) when a Ð X " P. To show that (A1) also holds when P is not dominated by Q, i.e., that the expectation on the left side is 8, we invoke the Lebesgue decomposition theorem (e.g. p. 384 of [74]), which ensures that we can find α P r0, 1q, P 0 K Q and P 1 ! Q, such that
Extreme Weather and Ratings on Corporate Climate Mitigation Policies

This study examines whether the extreme weather events (EWEs) incurred at the headquarters of firms have an impact on their climate mitigation policies. I show that, controlling for county fixed effects, the annual number of EWEs at the headquarter counties of the largest public firms in the US significantly improves the subsequent ratings of their climate mitigation policies, with recent EWEs having a more pronounced impact. I also find that the EWEs at the neighboring counties do not have a similar effect, and provide some evidence that the impact of EWEs on climate ratings is stronger for weakly-governed firms, and that some EWEs positively affect the likelihood of utility firms' expressing a concern for climate risk through their SEC filings. These results support the idea that personal weather experiences can influence managerial belief in anthropogenic climate change, which in turn affects corporate climate mitigation policies.

Introduction

The purpose of this study is to examine whether the extreme weather events (EWEs) incurred at the headquarters of firms have an impact on their climate mitigation policies. Three empirical facts motivate the study. First, despite the overwhelming evidence in support of anthropogenic climate change (ACC) (Cook et al., 2013), there is divergent reception of this issue among the public (e.g., Leiserowitz, Maibach, Roser-Renouf, Rosenthal, & Cutler, 2017), resulting in a failure to enact federal legislation in the US to limit greenhouse gas (GHG) emissions (Arroyo, 2019;Wallach, 2012). Therefore, businesses are largely on their own when making decisions that can bear serious consequences for the future climate. Second, in addition to rising global average temperature, one primary potential consequence of ACC is the increasingly frequent incidence of EWEs such as heat waves, droughts, wildfires, and floods (IPCC, 2012;Melillo, Richmond, & Yohe, 2014), and individuals' perceptions of ACC are often shaped by their personal experiences with these EWEs (Bergquist, Nilsson, & Schultz, 2019;Demski, Capstick, Pidgeon, Sposato, & Spence, 2017;Rudman, McLean, & Bunzl, 2013). Third, firm policies often reflect managers' personal characteristics, values, and beliefs (Bansal & Roth, 2000;Cronqvist, Makhija, & Yonker, 2012;Cronqvist & Yu, 2017;Hambrick, 2007;Hambrick & Mason, 1984;Lawrence & Morell, 1995;Shahab et al., 2020;Sunder, Sunder, & Zhang, 2017;Walls & Hoffman, 2013). If it is reasonable to expect that managers reside where their firms are headquartered (Pirinsky & Wang, 2010) and, like lay people, become more concerned about ACC after personally experiencing more incidences of EWEs, then the potential "imprinting" of their personal beliefs on corporate policies suggests that the EWEs at the headquarters of the firms may be positively related to their climate mitigation policies. Prior studies of corporate climate strategies have focused on drivers such as institutional norms (Boiral et al., 2012;Damert & Baumgartner, 2018;Hoffman, 2005), technological innovation (Okereke, 2007;Pinkse & Kolk, 2010), corporate governance (Aggarwal & Dow, 2012;Galbreath, 2010), and litigation and reputational risk management (Hoffman, 2005;Wellington & Sauer, 2005). But the role of the personal attributes of managers in climate policies has received little attention.
However, both theory and evidence suggest that corporate policies often reflect the personal characteristics of managers, such as their educational background, tenure, age, experiences, and values (Bansal & Roth, 2000;Cronqvist et al., 2012;Cronqvist & Yu, 2017;Hambrick, 2007;Hambrick & Mason, 1984;Lawrence & Morell, 1995;Shahab et al., 2020;Sunder et al., 2017;Walls & Hoffman, 2013). For example, Cronqvist et al. (2012) find that managers are consistent in their borrowing patterns in both their house purchases and their corporate financing decisions. Walls and Hoffman (2013) document that corporate directors with past environmental experiences set greener environmental policies. Sunder et al. (2017) find that the "sensation seeking" tendency of CEOs leads to better corporate innovations. Cronqvist and Yu (2017) find that internalizing a daughter's other-regarding preferences motivates a CEO to engage more in corporate social responsibility (CSR), and Shahab et al. (2020) document that CEOs with research and financial expertise improve the sustainable performance and environmental reporting of publicly listed firms in China. Studies also suggest that managers' ecological values matter for corporate environmental policies (e.g., Bansal & Roth, 2000;Boiral et al., 2012;Okereke & Russell, 2010). In the context of ACC, theories in cognitive psychology and neuroscience suggest that individual perception of the risk of ACC is shaped by two fundamental information processing systems: analytical and experiential (Marx et al., 2007;Slovic, Finucane, Peters, & MacGregor, 2004). While the former uses algorithms and normative rules such as probability, statistics, formal logic and risk assessment, the latter operates mainly on personal memories and concrete images (Slovic et al., 2004). Because ACC is derived from more than 100 years' data and involves sophisticated statistics and modeling (National Research Council, 2010), an accurate understanding of this issue is only possible for a person with strong analytical capabilities. However, most people, including presumably most professional managers, lack these abilities. Thus, despite its scientific nature, analytical processing of statistical information is unlikely to be the only channel through which most of the lay public can fully digest and accept ACC. This gives rise to the importance of other channels such as popular media. But this channel is subject to issues of bias and trust (Weber, 2010). In this sense, a "seeing is believing" mentality based on personal experiences may play a significant role in individual acceptance of ACC (Galbreath, 2014). Indeed, despite the "unscientific" nature of this practice because of the difficulty of attributing individual EWEs to ACC or natural variability (National Academies of Science, 2016), experiencing these events may still influence people's perception of ACC. This is particularly true since, relative to analytical processing, which relies on statistical expression and hence is cognitively costly (Stanovich & West, 1998), experiential processing operates automatically through memories, often involves vivid images, and frequently elicits strong feelings that make the experiences memorable and dominant in information processing (Epstein, 1994;Loewenstein, Weber, Hsee, & Welch, 2001;Sloman, 1996).
Because emotions often play a role in managerial decision making (Hoffman & Bansal, 2012), the presumably intense feelings elicited by the experiences of EWEs have the potential to strengthen or change the beliefs of managers about ACC, and motivate them to take climate mitigative actions (Leiserowitz, 2006). To materialize this potential, however, managers need to be aware of the connection between EWEs and ACC. Absent this awareness, personal experiences may not result in a concern about ACC (Whitmarsh, 2008). An awareness of the connection between climate change and more frequent and intense incidences of EWEs is expected given the increasing popularity of this topic in the media (Boykoff, 2009). In fact, climate change has been a fact of organizational life since at least 1995 (Hoffman, 2006;Kolk & Pinkse, 2007;Okereke & Russell, 2010;Wilbanks et al., 2007). The same may not be true for the connection between EWEs and the anthropogenic nature of climate change, since this is influenced by social, institutional, cultural, and in particular partisan factors (Hulme, 2009). Nonetheless, empirically we do observe that individuals are more concerned about climate change and intend to take actions to reduce its impact after experiencing EWEs (Bergquist et al., 2019;Demski et al., 2017;Rudman et al., 2013). Therefore, it may also be reasonable to expect that managers will behave similarly on a personal level. If they further attempt to imprint a stronger belief in ACC into corporate policies, then a positive relationship between EWEs at headquarters and firm climate mitigation policies is expected. Some anecdotal evidence in the literature suggests the plausibility of the mechanisms described above. For example, based on detailed interviews with five house-builders in the UK, Hertin, Berkhout, Gann, and Barlow (2003) find that direct signals of climate change such as increased flood risk and hotter summers play a role in managerial perception of climate change. Similarly, Bleda and Shackley (2008) argue that experiences with anomalous (significant and frequent) weather events will increase managerial belief in ACC. Galbreath (2014) and Weber (2006) suggest the importance of personal experiences in driving climate actions. In the environmental psychology literature, there is also a significant number of studies demonstrating that individuals express a stronger concern for ACC after experiencing EWEs (e.g., Bergquist et al., 2019;Demski et al., 2017;Rudman et al., 2013). Though extant climate models predict more frequent incidences of many types of EWEs with ACC, the regional distribution of the EWEs is uneven (IPCC, 2012;Melillo et al., 2014). For example, California has historically suffered more from droughts and wildfires than many other states, while southern regions are more likely to experience heat waves. Therefore, the effect of the frequency of EWEs on climate policies is expected to depend on controlling for the regional differences in the exposure to EWEs. Indeed, Haigh and Griffiths (2012) find that "climate surprises" are important in changing business strategies. As mentioned above, Bleda and Shackley (2008) also argue that experiences with anomalous weather events will increase managerial belief in ACC. Galbreath (2014) documents that the regional impact of climate change is important to determine corporate climate strategies.
These arguments lead to my first hypothesis: Managerial Experiencing Hypothesis (MEH): Controlling for regional differences, the frequency of the EWEs as experienced by corporate managers is positively associated with corporate climate mitigation policies. In experiential processing, rare events such as EWEs tend to be underweighted relative to their probabilities of occurrence (Hertwig, Barron, Weber, & Erev, 2004;Hogarth & Einhorn, 1992). This is because by nature rare events happen infrequently, so the chance of their occurring in the recent past is small. Because experiential processing operates through working memories which typically include only the memories from the recent past, rare events are likely to be underweighted. On the other hand, when these events did occur, the same mechanism suggests that the experiential processing system tends to overweigh the probability of their reoccurring given the vivid and recent memories from the intense experiences of these events. This argument suggests that recent EWEs may have a stronger impact on managerial decision making on climate policies than distant ones. Similar "recency effects" have been documented in many studies (e.g., Bhootra & Hur, 2013;Dessaint & Matray, 2017;Trotman, Tan, & Ang, 2011). For example, Dessaint and Matray (2017) find that the longer in the past hurricanes had stricken the neighboring counties, the less salient they became, and hence the smaller an effect they had on firm precautionary cash holdings. These arguments lead to my second hypothesis: Recency Hypothesis (RH): More recent EWEs have a stronger effect on corporate climate mitigation policies than more distant ones. It is notable that a revised version of the attention-based view of the firm to explain corporate climate adaptation may also generate the MEH (Pinkse & Gasbarro, 2019). Specifically, Pinkse and Gasbarro (2019) argue that corporate adaptation to climate change depends on an organizational awareness of and a sense of vulnerability to climate stimuli such as abnormal weather. Because of the need for globally collective action to mitigate the impact of climate change, however, awareness may be more relevant for the formulation of mitigation policies than an assessment of vulnerability. As argued in Pinkse and Gasbarro (2019), there are three factors that determine corporate awareness of climate stimuli: risk perception, perceived uncertainty, and firm knowledge of local ecosystems. Direct experiences with climate stimuli especially in the form of an increasing incidence of EWEs as in this study are one of the main drivers of a high risk perception (Pinkse & Gasbarro, 2019). On the other hand, the uncertainty associated with the perception of the climate stimuli could manifest both through the ambiguity in the anthropogenic nature, and the severity and timing of climate change (Pinkse & Gasbarro, 2019). As argued previously, though ex-ante the acceptance of ACC is influenced by many factors, empirically we do observe that individuals exhibit a higher concern for ACC and more willingness to take mitigation actions after experiencing EWEs (Bergquist et al., 2019;Demski et al., 2017;Rudman et al., 2013). 
In addition, the fact that firms observe a more frequent incidence of EWEs that is the focus of the study also implies a decreased uncertainty with respect to the severity and timing of ACC, since an increased frequency of EWEs is consistent with the prediction of the dominant climate models (IPCC, 2012;Melillo, Richmond, & Yohe, 2014). These arguments suggest a decreased perceived uncertainty of climate change when firms experience more frequent incidences of EWEs. A third factor that can moderate risk perception and perceived uncertainty is the knowledge firms possess of local ecosystems (Pinkse & Gasbarro, 2019). However, this factor mainly affects firms' sense of vulnerability to climate change and, for the same reason as explained above, may be less relevant for the setting of climate mitigation policies. Therefore, a high risk perception and a decreased perceived uncertainty as a result of witnessing more frequent incidences of EWEs create an awareness of climate change, which may lead to corporate climate mitigation policies as the MEH predicts. It is useful to point out here the difference between experiential processing and a closely related concept in information processing, salience, which refers to the replacement of objective probabilities with subjective decision weights that are determined by the relative prominence of different situations (Bordalo, Gennaioli, & Shleifer, 2013). While experiential processing typically requires direct experience of an event, the salience theory only involves a recollection of a salient situation, which may be provoked by memories from a related experience. Nonetheless, witnessing EWEs may increase the salience of ACC and affect the decision weights similarly to what experiential processing of weather information does. Therefore, the two theories may generate similar predictions.

Sample

The sample used in the empirical study is an intersection of several databases. To test the two hypotheses, I use a third-party rating as an indicator of a firm's climate mitigation policies. The ratings data are from the KLD STATS database, which rates the CSR policies of the largest public firms by market capitalization in the US. The database started in 1991 with around 650 firms and expanded to about 3,100 firms in 2003. My KLD data end in 2012. The data cover more than 60 environmental, social, and governance (ESG) indicators in seven categories: environment, community, human rights, employee relations, diversity, customers, and governance. The ratings are reported at the end of a calendar year. As detailed in the Internet Supplementary, I focus on the years before 2009 because the definitions of the rating variables have changed significantly since the acquisition of KLD by MSCI in 2010. I also exclude the industries with zero or sparse incidences of climate ratings to account for the industry applicability and potential data collection errors of this variable.

Measures

I describe the major variables in this section. Appendix A provides the detailed definitions of all the variables.

Climate Policy and Other CSR Variables

As mentioned, I use the KLD ratings as a measure of firm climate policies. These ratings are a binary variable indicating either a strength or a concern. According to the data guide, the strength or concern is assigned a value of one if a firm meets the (proprietary) criteria established for a rating, and zero otherwise.
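Since the sample is described above as an intersection of several databases, a minimal sketch of how such a firm-year panel might be assembled is given below. All file and column names (kld_ratings.csv, hq_county_fips, and so on) are hypothetical placeholders rather than the paper's actual data layout.

```python
import pandas as pd

# Hypothetical inputs: one row per firm-year (KLD, Compustat) and per county-year (storms).
kld = pd.read_csv("kld_ratings.csv")        # firm_id, year, hq_county_fips, climate_rating, ...
financials = pd.read_csv("compustat.csv")   # firm_id, year, sale, at, ib, ...
storms = pd.read_csv("storm_counts.csv")    # county_fips, year, ewe_count

# The rating observed at year t is explained by firm/county variables at t-1,
# so shift the explanatory data forward by one year before merging.
financials["year"] = financials["year"] + 1
storms["year"] = storms["year"] + 1

panel = (kld
         .merge(financials, on=["firm_id", "year"], how="inner")
         .merge(storms, left_on=["hq_county_fips", "year"],
                right_on=["county_fips", "year"], how="left"))
panel["ewe_count"] = panel["ewe_count"].fillna(0)  # counties with no recorded events
```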
I focus on the strength rating for the climate policies because the concern rating is only applicable to the most carbon-intensive industries (petroleum, utility, and transportation), which implies that managers have little leeway to alter this rating, since doing so may mean exiting their industries altogether. In contrast, firms have more discretion to earn a strength rating by means such as generating or using more renewable energy while staying in the same industry. I term the climate strength rating Climate rating. I also create two additional variables from the KLD data to account for the fact that CSR investments are often clustered: one with all the ratings in the environment category other than Climate rating (Net Corporate Environmental Responsibility, or Net CER), and the other with all the CSR ratings in the categories other than environment (Net CSR). Because the availability of the KLD variables changes over time, studies have adopted different methods to define Net CER and Net CSR (e.g., Cai, Jo, & Pan, 2011). In my primary specification I follow prior work to define these two variables, but I show in the Internet Supplementary that the results are robust to other definitions.

Extreme Weather

I measure the strength of EWEs striking the headquarters of a firm as the annual number of EWEs incurred at the headquarter county of the firm (EWE). One concern with using the number of, rather than the economic damages caused by, EWEs is that some of the EWEs may not be severe enough to induce changes in beliefs. Although I lack the insurance data for the economic damages of all the EWEs, I use the NOAA Billion-Dollar Disasters Database, with estimated losses from the "mega-disasters" causing at least $1 billion in inflation-adjusted damages, for robustness checks and obtain similar results; these results are presented in the Internet Supplementary. The concern is further alleviated by the fact that the NOAA Storm Database records only exceptional meteorological events with the "intensity to cause loss of life, injuries, significant property damage, and/or disruption to commerce" (NWS, 2018). The Internet Supplementary lists the weather events that are covered by this database. The database is good at recording transient events such as storms but deficient in the coverage of long-duration events such as droughts. Therefore, I examine the robustness of the results by excluding droughts from the definition of EWE in the Internet Supplementary. The specific weather events that are used in the definition of EWE are listed in Appendix B, where the events are grouped into four categories: Heat event, Drought, Wildfire, and Flood. These events are predicted to increase with ACC with relatively low uncertainty (Melillo et al., 2014). The change in the frequency of other EWEs and the extent of their human influences are more uncertain, including hurricanes, tornadoes, hail, thunderstorms, winter storms, and cold spells; in the Internet Supplementary, I examine the impact of these other types of EWEs on climate ratings. There, I also show the robustness of the results using an alternative definition of EWE with a more comprehensive list of weather events. To facilitate interpretation, I standardize EWE to have a mean of zero and a standard deviation of one.
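To make the construction of the EWE variable concrete, the sketch below counts qualifying events per headquarter county and year and then standardizes the count, as described above. The event-type labels and column names are illustrative assumptions, not the paper's exact Appendix B list.

```python
import pandas as pd

# Hypothetical layout of NOAA Storm Events records: one row per event.
events = pd.read_csv("storm_events.csv", parse_dates=["begin_date"])

# Four broad categories used in the EWE definition (labels are illustrative).
qualifying_types = {"Excessive Heat", "Heat", "Drought", "Wildfire",
                    "Flood", "Flash Flood", "Coastal Flood"}
events = events[events["event_type"].isin(qualifying_types)]
events["year"] = events["begin_date"].dt.year

# Annual number of qualifying events per county (Raw EWE).
raw_ewe = (events.groupby(["county_fips", "year"])
                 .size()
                 .rename("raw_ewe")
                 .reset_index())

# Standardize to mean 0 and standard deviation 1 for interpretability.
raw_ewe["ewe"] = (raw_ewe["raw_ewe"] - raw_ewe["raw_ewe"].mean()) / raw_ewe["raw_ewe"].std()
```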
Control Variables

Because climate policies are part of CSR, I follow the literature on the determinants of CSR for the control variables in the regressions (Aggarwal & Dow, 2012;Baron, Harjoto, & Jo, 2011;Di Giuli & Kostovetsky, 2014;Jiraporn, Jiraporn, Boeprasert, & Chang, 2014), including firm size, sales growth, return on assets (ROA), leverage, dividend payout, capital expenditure, R&D and advertising expenditures, and cash balance. Most of the extant studies on the determinants of climate mitigation strategies in the management literature are based on survey data, which differ from the sample in this study; this is the primary reason why I follow the literature on the determinants of CSR for the controls in the determinants of Climate rating. All the control variables are winsorized at the 1st and 99th percentiles to account for outliers and, similar to EWE, lagged by one year to alleviate the concern for endogeneity.

Model

The two hypotheses developed in Section 2 require netting out the regional differences in the exposure to EWEs. There are two methods to do this: FEs and de-trending EWE. I employ the FE model for two reasons. First, the "trend" variable is typically calculated from the average value of the variable over the past 10 years or a longer period of time (e.g., Egan & Mullin, 2012). In my case this is infeasible because my sample starts in the second year after the beginning of the Storm Database. Second, controlling for county FEs also accounts, at least to some extent, for the influences of other regional unobservable characteristics such as local community engagement in climate change actions and regulatory pressures on firm climate policies. (In the Internet Supplementary, I show that the results are similar if I also include firm FEs.) The KLD data are typically very "sticky" in that the ratings rarely change over time, presumably reflecting the threshold that firms need to overcome to receive a rating. This data feature is likely to decrease the chance of finding a significant relation between EWEs and climate ratings, since the FE models rely on within variations. Therefore, the significant relation that I find may be a conservative estimate of the true relation between EWEs and firms' climate mitigation policies. Since the dependent variable, Climate rating, is a dummy, it would be most appropriate to use a probit or logit model for the empirical analysis. However, nonlinear models suffer from the "incidental parameter" problem with the inclusion of a large number of FEs, which can compromise the consistency of the estimates (Neyman & Scott, 1948). Because controlling for FEs is critical for this study, I use linear models as the primary specification, but use a probit model for robustness in the Internet Supplementary. My primary empirical specification is as follows:

Climate rating_{i,j,k,t} = β0 + β1 · EWE_{j,t−1} + Γ · Controls_{i,t−1} + μ_j + ν_k × τ_t + ε_{i,j,k,t}    (1)

In the above equation, Climate rating_{i,j,k,t} is the climate rating of firm i headquartered in county j and operating in industry k in year t, EWE_{j,t−1} is the number of EWEs that occurred in county j in year t−1, and Controls_{i,t−1} is the vector of lagged firm-level controls. The terms μ_j, ν_k, and τ_t are the county, industry, and year FEs, respectively, with the industry and year FEs entering as an interaction. These controls account for sector and regional affiliations as well as regulatory uncertainty, which have been shown to matter for corporate climate strategies (Boiral et al., 2012;Cadez et al., 2019;Damert & Baumgartner, 2018;Levy & Kolk, 2002).
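A minimal sketch of how Equation (1) might be estimated as a linear probability model is shown below; the clustering of standard errors follows the description in the next paragraph. Column names are hypothetical (continuing the panel sketched earlier), rows with missing regression variables are assumed to have been dropped beforehand, and the two-way clustered covariance assumes a statsmodels version that accepts a two-column group array (otherwise, one-way clustering by firm is a simple fallback).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# County-by-year identifier for the second clustering dimension.
panel["county_year"] = panel["hq_county_fips"].astype(str) + "_" + panel["year"].astype(str)

# Equation (1) as a linear probability model with explicit dummies
# (absorbing the fixed effects is preferable for speed on the real data).
formula = ("climate_rating ~ ewe + net_cer + net_csr + size + salesgrow + roa "
           "+ leverage + dividend + capexp + rd + adver + cash "
           "+ C(hq_county_fips) + C(sic3):C(year)")

groups = np.column_stack([pd.factorize(panel["firm_id"])[0],
                          pd.factorize(panel["county_year"])[0]])

res = smf.ols(formula, data=panel).fit(cov_type="cluster", cov_kwds={"groups": groups})
print(res.params["ewe"], res.bse["ewe"])  # beta_1 and its clustered standard error
```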
The interaction of industry and year FEs accounts for industry-specific shocks in a given year, such as the adjustment of rating criteria for some industries (though not as comprehensive as the wholesale adjustment in 2010). I classify industries based on the three-digit SIC code. In the Internet Supplementary, I show that the results are robust to alternative industry classifications. My primary coefficient of interest is β1. The standard errors are adjusted for heteroscedasticity and clustered at both the firm level (for autocorrelation) and the county-year level (for the possibility that contemporaneous climate policies of neighboring firms may be correlated).

Summary Statistics

Because of the sparse incidence of climate ratings (the mean climate rating is only 4.2% for the full sample without excluding any industries), the screening criteria described earlier resulted in about 75%/60% of the industries/firms being excluded from the sample. This obviously raises a concern about the representativeness of the study. I conduct two types of checks to help alleviate this concern. First, in the Internet Supplementary I entertain alternative industry exclusions, including using the full sample without any exclusions, and confirm the robustness of the results. Second, I list the average values of Climate rating by industry in Table 1 to gauge the representativeness of the sample. Insert Table 1 about here. The statistics in Table 1 show that despite the exclusions, the sample still covers a relatively wide range of industries. Specifically, except for service industries for which climate ratings may not be applicable (SIC1=8), all other industries as indicated by the one-digit SIC code are represented. It is also interesting to observe that more polluting industries, such as petroleum and utility firms, also have higher incidences of climate ratings. This is consistent with the idea that polluting industries also have more opportunities to adopt clean technologies or increase renewable energy to earn a strength rating (Jo & Na, 2012;Kotchen & Moon, 2012). Table 2 reports the summary statistics of the major variables. As shown, the incidence of Climate rating is sparse, with a mean of only 0.09 and a median of 0 even after the significant industry exclusions. The average annual number of EWEs in the sample is 4.95, with a standard deviation of 6.62. The statistics are similar when using a county-level sample that keeps one observation for all the firms in the same county in a given year, with a mean of 4.53 and a standard deviation of 6.63. This county-level sample is free of the bias caused by the uneven distribution of headquarters across counties. The statistics also show that out of the standard deviation of 6.63, 5.59/2.87 comes from cross-sectional/within-county variation. The within variation is critical for the implementation of FE models (Zhou, 2001). Insert Table 2 about here.

Hypotheses Tests

In this section I test the two hypotheses developed in Section 2. I first examine the MEH, which predicts that, controlling for regional differences, EWE is positively associated with climate ratings. I use two methods to highlight the importance of demeaning EWE: t-tests based on matched samples and regressions. I conduct two types of matching. First, I match firms experiencing more EWEs with those experiencing fewer EWEs (stratified by sample median) by industry and firm size. This matching generates 2,490 pairs. The second matching is similar except that the EWEs are county-demeaned.
This results in 2,116 matched pairs. The t-test results for the differences between the climate ratings of the two matched samples are presented in Table 3. Insert Table 3 about here. The table shows that while the difference between climate ratings is not significant for the sample based mainly on cross-sectional variation in EWEs, it is significant at the 5% level for the sample based on within-county variation. Therefore, netting out the regional difference in EWEs is important for the relation between EWEs and climate ratings. The regression results show that none of these variables is significant. Overall, the results in Table 5 provide support for the RH. Insert Table 5 about here. Collectively, the results in Tables 3-5 support the two hypotheses developed in the study. These results are consistent with the idea that managerial experiential processing of weather information is important in determining corporate climate mitigation policies. As stated in the introduction, the fact that I do not observe managerial belief directly means that these results are amenable to alternative explanations, with the LAH and DH as two prominent ones. The LAH states that the positive effect of EWEs on climate ratings is not driven by managers but by local stakeholders (employees, community residents, local NGOs, etc.) who become more concerned about ACC after experiencing more incidences of EWEs. In contrast, the DH argues that the physical damages caused by EWEs to firms' headquarter properties do not change managerial belief in ACC, but simply result in the damaged properties being replaced by new ones which coincidentally have a lower carbon footprint. However, the fact that the positive and significant relation between EWEs and climate ratings only holds when county FEs are controlled for is inconsistent with the prediction of the DH, because if it were true, one would expect this relation to also hold without including the county FEs, since it should be the damage itself rather than its region-demeaned value that matters for the replacement decision. Below I conduct several additional tests to further substantiate the MEH against the two alternative hypotheses.

EWEs at Neighboring Counties and Climate Ratings

Because the MEH is about experiential learning of ACC through EWEs, and if managers' chance of personally experiencing EWEs decreases with the distance from their firms' headquarters, it would be reasonable to expect that EWEs incurred at the counties neighboring a firm's headquarter county should not matter as much for its climate policy as the EWEs at the headquarter county. I examine this implication of the MEH in the Internet Supplementary, where I document that the EWEs at the neighboring counties do not have a significant effect on the climate rating of a firm. This result provides further evidence that is consistent with experiential learning but inconsistent with the expectation that, if the LAH holds, the community residents of the neighboring counties, after suffering from the EWEs, should also attempt to push the firms to engage more in climate mitigation actions. The evidence is also inconsistent with the DH in the sense that, since firms in my sample are large, they are likely to have properties in the neighboring counties as well. If the damage to a firm's facilities drives the positive relation between EWEs and climate ratings, then a similar relation should also exist between the EWEs at the neighboring counties and climate ratings.
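As an illustration of the neighboring-county test described above, the sketch below attaches each county's annual EWE count to its neighbors using a county adjacency table and averages across neighbors. The adjacency file, column names, and the choice of averaging (rather than summing) are assumptions for illustration, continuing the county-year counts from the earlier sketch; the paper's exact construction is not reproduced here.

```python
import pandas as pd

# Hypothetical adjacency table with one row per (county, neighbor) pair,
# e.g., derived from the Census county adjacency file.
adjacency = pd.read_csv("county_adjacency.csv")        # county_fips, neighbor_fips
ewe_counts = raw_ewe[["county_fips", "year", "raw_ewe"]]

# Attach each neighbor's annual count, then average within the focal county-year.
neighbor_ewe = (adjacency
                .merge(ewe_counts, left_on="neighbor_fips", right_on="county_fips",
                       suffixes=("", "_nbr"))
                .groupby(["county_fips", "year"])["raw_ewe"]
                .mean()
                .rename("neighbor_ewe")
                .reset_index())

# neighbor_ewe can then be merged into the firm-year panel and standardized,
# mirroring the treatment of the headquarter-county EWE variable.
```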
Corporate Governance and the Effect of EWEs on Climate Ratings

Since the MEH rests on managers imprinting their personal beliefs in ACC into corporate climate policies, the degree to which managers can do so may depend on the corporate governance of their firms. It is expected that powerful managers afforded by weak governance should have more discretion to influence policy setting and imprint their personal values (Cronqvist et al., 2012). In the Internet Supplementary I examine this implication of the MEH and obtain some results that are consistent with this expectation. To the extent that powerful managers are insensitive to the demands of the stakeholders of the firm, this evidence is not consistent with the prediction of the LAH. The replacement decision based on the DH should also not be related to the governance strength of a firm.

EWEs and Managerial Concern for Climate Risk

If it is the managers themselves rather than the local stakeholders of the firms who drive a change in climate policies after an abnormal number of EWEs has impacted their local areas, then they may have an incentive to express a concern for climate risk, either through the media or through their financial statements, to justify their actions. Similarly, if physical damages rather than managerial learning about ACC drive the relationship between EWEs and climate ratings, then there is no reason to expect managers to be concerned about climate risk. Therefore, examining the effect of EWEs on the managerial incentive to express a concern for ACC not only provides direct evidence for the mechanisms underlying the MEH, but also serves as a means to pit the MEH against both the LAH and DH. However, managerial concerns for climate risk may vary with industries. One dominant concern is for potential regulations to limit GHG emissions (Okereke & Russell, 2010). From this perspective, the utility industry may face the greatest risk and hence may have the strongest incentive to disclose climate risk in its financial statements (Brouhle & Harrington, 2009;Weinhofer & Hoffmann, 2010). In fact, President Barack Obama's Clean Power Plan of 2015, one of the first major federal initiatives to limit GHG emissions, was directed at the utility industry. For this reason, I focus on the utility industry to examine whether EWEs incurred at the headquarters can affect the likelihood that managers express a concern for climate risk in their financial statements. To do this, I follow Dessaint and Matray (2017) in identifying mentions of climate risk in firms' financial statements; the results are reported in Table 6. Insert Table 6 about here. I first run regressions on Climate rating for this subsample to examine whether the significant effect of EWEs on climate ratings also holds for utility firms. Indeed, Model 1 shows that EWE continues to be positive and significant. In Model 2, when I break up the EWEs into the four categories of extreme weather as defined earlier, I find that only floods are significantly related to climate ratings. This is in slight contrast to the results based on the full sample (Model 4 in Table 4), where wildfires are also weakly significant. In Model 3, I examine the effect of EWE on Climate risk, utilizing a specification that is similar to Equation (1). Model 4 shows that only Flood positively and significantly (at the 10% level) affects the likelihood of managers' expressing climate risk in their financial statements.
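The Climate risk variable used in these models is a dummy for climate-risk mentions in 10-K, 10-Q, or 8-K filings. A minimal keyword-search sketch of this kind of text measure is shown below; the keyword list is purely illustrative and does not reproduce the actual search terms used following Dessaint and Matray (2017).

```python
import re

# Purely illustrative terms -- the paper's actual search procedure is not shown.
CLIMATE_TERMS = [
    r"climate change", r"global warming", r"greenhouse gas",
    r"carbon regulation", r"extreme weather",
]
PATTERN = re.compile("|".join(CLIMATE_TERMS), flags=re.IGNORECASE)

def mentions_climate_risk(filing_text: str) -> int:
    """Return 1 if a 10-K/10-Q/8-K text mentions any climate-risk term, else 0."""
    return int(bool(PATTERN.search(filing_text)))

# Example usage with an in-line snippet standing in for a real filing.
sample = "Future greenhouse gas regulations could materially affect our generation costs."
print(mentions_climate_risk(sample))  # 1
```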
In unreported analysis I also find that the results are similar if the dependent variable is defined over regulatory risk mentions or managerial risk mentions, respectively, the two types of climate risk mentions defined above. A natural interpretation of these results is that managers of utility firms become concerned about climate risk, reflecting their belief in ACC after experiencing an abnormal number of floods, and take corporate mitigative actions to reduce the impact of ACC. It is notable that none of the control variables is significant in the two models on Climate risk. (The large coefficient on R&D is due to the very sparse incidence of positive R&D expenditures for utility firms: among the 734 firm-years, only two have positive R&D, and the two values are very small. It is also notable that advertising drops out in Table 6 because none of the firms in the sample had any advertising expenditure, which is typical for utility firms.) To demonstrate the importance of netting out the regional differences in the exposure to EWEs in the relation between EWEs and managers' mentioning of climate risk in their financial statements, I omit the county FEs in Model 5. Now, Flood loses significance, which is similar to the results with respect to climate ratings and consistent with managerial learning of climate change based on abnormal levels of EWEs. Collectively, the results in Table 6 provide some further evidence that is consistent with the MEH but hard to explain with either the LAH or the DH. In the Internet Supplementary, I conduct three additional tests to examine the implications of the DH and find evidence that is further inconsistent with this hypothesis. By and large, the empirical results established in the study provide the strongest support for the MEH, despite the plausibility of two alternative explanations based on local stakeholder activism and physical damages caused by EWEs.

Contributions of the Study

This study employs a fine level of geographic resolution (headquarter counties) in an attempt to identify managerial personal experiences of EWEs, and studies their impact on corporate climate mitigation policies as measured by KLD ratings. The results show that, controlling for county FEs, EWEs result in higher climate ratings, and more recent EWEs have a stronger impact. The best explanation for these results out of the three hypotheses considered is that personal experiences of EWEs change or enhance managerial belief in ACC and, in an attempt to imprint this belief into corporate actions, managers set a climate-friendly firm policy that is captured by a third-party rating. The primary contribution of this study is the documentation of a seemingly robust positive relationship between EWEs at the headquarters of the largest public firms and their climate ratings. Despite the anecdotal evidence discussed earlier suggesting the importance of personal experiences in the formation of managerial ecological values with respect to ACC (Bleda & Shackley, 2008;Hertin et al., 2003), large-scale empirical evidence is lacking to the author's best knowledge. This is particularly true for large multinational firms, as is the case for many firms in this study, since a general perception is that the influence of managers' personal characteristics may be diminished given the complex parameters through which these firms need to navigate and the myriad constraints to which they are subject.
However, recent evidence suggests that even for these firms, executives' personal traits may still matter for corporate actions (e.g., Cronqvist et al., 2012;Cronqvist & Yu, 2017;Sunder et al., 2017). This study provides the first large-scale empirical evidence suggesting that managerial belief in ACC, as presumably facilitated by directly experiencing EWEs, likely plays a role in the formation of corporate climate policies in large public firms, resembling a similar mechanism that has been documented in entrepreneurial firms (Kaesehage, Leyshon, Ferns, & Leyshon, 2019). Despite the fact that the lack of a direct measure of managerial belief, especially in connection with personal experiences of EWEs, makes the results and explanations provided in the study only suggestive, an uncontroversial message coming out of the study for practitioners is that proximity to key decision makers, even for natural forces, is critical to drive firm policies. From this perspective, this study provides some evidence to substantiate the claim in Galbreath (2014, p. 100) that "location (or proximity) may be a more important salience attribute than power, legitimacy or urgency when considering climate change". In terms of the salience theory, an additional contribution of this study is that direct experiences, rather than just awareness of EWEs, may be what renders ACC salient enough to shape managerial decisions.

Study Limitations

There are several limitations to this study. First, as stated above, although I have provided some evidence to suggest that the MEH seems to be the most reasonable explanation for the results documented in the study, the fact that managerial belief is not directly observable opens doors for other explanations, which is left for future studies. Second, ratings are an indirect measure of corporate policies, and stated policies also may not result in eventual actions (GS Sustain, 2009). A future study could examine the connection between EWEs and corporate climate mitigation actions such as GHG emissions. Third, the sample period in the study is relatively old, ending more than 10 years ago. This data limitation is partly due to the post-2009 changes in the definitions of the rating variables by MSCI, as mentioned above. However, as stated in the introduction, this period is characterized by businesses' increasing concerns about ACC, presumably facilitated by the adoption of the Kyoto Protocol. Although the US never ratified this treaty, businesses in the US may still have felt the pressure to act. On the other hand, basic probability theory suggests that "weather extremes may change much faster than weather means" (Fankhauser, Smith, & Tol, 1999, p. 71), hence these extreme events may be noticeable much sooner than a rising temperature. From this perspective, this time period may have provided a good setting to examine the relation between EWEs and corporate climate policies, given that recent years have witnessed an increasing role of other potential influencers of these policies, such as institutional investors (Kruger, Sautner, & Starks, 2020), which may complicate such a relation. Finally, while the current study is limited to climate mitigation policies, a potential future research topic is to examine the impact of EWEs on firms' climate adaptation policies and actions. While a significant strand of literature has already begun this inquiry, a large-scale study employing a comprehensive set of industries is lacking to the author's best knowledge, and may provide a fruitful area for future exploration.
Conclusion

The fundamental premise of the study is that managers, like lay people, may undergo changes in their beliefs in ACC after experiencing extreme weather events that are predicted to increase with climate change. The tendency for managers to imprint their personal values into corporate policies then suggests that firm climate policies will likely reflect their beliefs in ACC, which predicts a positive relationship between EWEs at headquarters, which are presumably experienced by managers, and the climate mitigation policies of the firm. The empirical evidence provides some support for these arguments. A key message emanating from the study is that the local impact of climate change is important in driving the global climate policies of a firm. This mismatch between the scope of the problem and a potential driver of the solution poses a significant challenge for humankind in solving the unprecedented issue of ACC, especially given its uneven regional impact. On the one hand, the positive effect of EWEs on climate ratings indicates that experiential learning through EWEs may be effective in motivating some professional managers to change climate policies. On the other hand, because climate change in the years before its most catastrophic impact is uncertain and can well be more pleasant for some time (e.g., Egan & Mullin, 2016), absent other forces, many firms may refrain from taking mitigative actions if their local areas have not experienced a negative change in weather patterns, based on the results in the study. Yet the window of opportunity for humankind to avoid or reduce the potentially devastating effects of ACC may well lie in those years. If simulated experiences have a similar effect to real experiences, then one way to encourage more managers to take climate actions is to design education programs that permit the simulated experiencing of the calamitous natural disasters that are predicted to unfold with continued climate change. The effectiveness of this type of program for corporate managers is unknown and is left for future exploration.

Table 5. Test of the Recency Hypothesis
This table examines the RH (Recency Hypothesis), which states that recent EWEs have a more pronounced impact on climate ratings than distant ones. EWEt-1, EWEt-2 and EWEt-3 are one-year lagged, two-year lagged, and three-year lagged EWE, respectively. The subscript "fhalf"/"shalf" indicates the first/second half of the corresponding year. Each of these variables is standardized to have a mean of 0 and standard deviation of 1. The dependent variable for each model is Climate rating. All models also include the control variables as in Table 4, the county FEs, and the interactions of year and three-digit SIC industry FEs. Standard errors are adjusted for heteroscedasticity and clustered at both the firm and county-year levels. t-statistics are reported in parentheses. *, **, and *** indicate significance at the 10%, 5%, and 1% levels, respectively.

Table 6. Extreme Weather and Managerial Concern for Climate Risk of Utility Firms
This table examines the effect of extreme weather on managerial concern for climate risk as reflected in the financial statements (10-K, 10-Q, or 8-K) of utility firms. Climate risk is a dummy variable that equals one if managers mentioned climate risk in their financial statements, and zero otherwise. See Appendix A for the definitions of all other variables.
All models also include the interactions of year and three-digit SIC industry FEs. Standard errors are adjusted for heteroscedasticity and clustered at both the firm and county-year levels. t-statistics are reported in parentheses. *, **, and *** indicate significance at the 10%, 5%, and 1% levels, respectively.

Appendix A. Variable Definitions (data source in parentheses)
Climate rating: Dummy variable that equals one if a firm has taken significant measures to reduce its impact on climate change and air pollution through use of renewable energy and clean fuels or through energy efficiency, or the firm has demonstrated a commitment to promoting climate-friendly policies and practices outside its own operations, and zero otherwise (env_str_d). (KLD STATS)
Net CER: Lagged value of the total strength count of corporate environmental responsibility (CER) ratings excluding Climate rating of a firm, scaled by the number of strength items excluding Climate rating in the CER category in a given year, minus the total concern count of CER ratings scaled by the number of concern items in the CER category in that year. (KLD STATS)
Net CSR: Lagged value of the sum of total strength counts of community, human rights, employee relations, diversity, product quality and safety, and governance ratings of a firm scaled by their respective numbers of strength items in a given year, minus the sum of total concern counts of community, human rights, employee relations, diversity, product quality and safety, and governance ratings of a firm scaled by their respective numbers of concern items in a given year. (KLD STATS)
Raw EWE: Lagged total number of severe meteorological EWEs incurred at the headquarter county of a firm in a given year, where the specific EWE types included in the calculation are listed in Appendix B. (NOAA Storm)
EWE: Lagged standardized value of Raw EWE with a mean of 0 and standard deviation of 1. (NOAA Storm)
Size: Lagged value of the log of total sales (log(sale)). (COMPUSTAT)
Salesgrow: Lagged value of the log of sales growth (log(sale/lagged sale)). (COMPUSTAT)
ROA: Lagged value of return on assets, defined as income before extraordinary items scaled by total assets (ib/at). (COMPUSTAT)
Dividend: Lagged value of cash dividends for common and preferred stock scaled by operating income ((dvc+dvp)/oibdp). (COMPUSTAT)
Capexp: Lagged value of capital expenditure scaled by total assets, missing values coded as zeros (capx/at). (COMPUSTAT)
R&D: Lagged value of R&D expenses scaled by total assets, missing values coded as zeros (xrd/at). (COMPUSTAT)
Adver: Lagged value of advertising expenses scaled by total assets, missing values coded as zeros (xad/at). (COMPUSTAT)
Cash: Lagged value of cash balance scaled by total assets (che/at). (COMPUSTAT)

Changing Definitions of Climate Change Related Ratings in KLD

There are two climate change related policy ratings in the KLD STATS database: climate strength (env_str_d) and climate concern (env_con_f). Table I1 lists the definitions of these two variables over time. It can be seen that climate concern is mainly an industry-based variable, especially before 2012. As such, firms have little leeway to change this rating unless they change their industries. Following the acquisition of KLD by MSCI in 2010, the definition of the strength rating underwent a dramatic change. In fact, climate strength was called "Clean Energy" before 2010. After 2010, the definition of this variable was expanded to apply to more industries/companies.
The change in the definition of the strength rating is apparent from Figure I1. Specifically, the 2011 user guide stated that one of the changes in the definitions of the KLD variables was the "introduction of industry specific ESG ratings templates for each of the seven ESG ratings categories." Starting from 2010, MSCI assigned a "NR (Not Rated)" marking if a specific rating is not relevant for an industry. The introduction of this marking implies that, prior to 2010, KLD did not consider the industry applicability of its ratings. Thus, a climate rating of "0" may mean either that the firm did not meet the criteria for friendly climate mitigation policies to qualify for the strength rating, or that the rating was not applicable to the firm's industry at all. For example, many service industries (SIC1=8) have "0" climate ratings throughout the sample period. These ratings may indicate that firms in these industries do not engage sufficiently in efforts to mitigate climate change to earn a strength rating. But another possibility is that mitigating climate change may not be relevant for many of these industries, such as health, legal, educational and management services. If this is true, including these industries in the sample is likely to introduce noise into the empirical analysis and decrease the power of the test. This issue would be solvable if the post-2009 data included an accurate classification of industries with consistent "NR" markings, so that I could backtrack the industries in the prior years to determine the applicability of the climate rating. Unfortunately, I find that MSCI did not issue "NR" until 2012 and, more importantly, the "NR" marking is not consistent across firms in the same industry in many cases. For example, while MSCI assigns a "NR" to hhgregg, Inc., a retail chain of consumer electronics, it gives a rating of "1" to Best Buy, another retailer of electronics products. This kind of inconsistency is prevalent: around one third of the industries covered by MSCI in 2012 have inconsistent classifications. To account for the industry applicability of climate ratings during my sample period, I exclude the industries with zero or very sparse incidences of climate ratings, or industries with very few firm-years. Industries with zero incidence of climate ratings during the entire sample period are excluded because the climate rating is not likely to be applicable to these industries. Industries with a sparse incidence of climate ratings are excluded to account for possible coding errors by KLD. For example, if over the entire sample period only one firm-year out of a 100-firm-year industry has ever received a climate strength rating, this is likely due to an error in data collection. Specifically, I exclude industries whose average Climate rating is less than 0.01. Finally, industries whose number of firm-years is fewer than five are excluded because the small sample makes it difficult to determine the true incidence of climate ratings for these industries. It turns out that, since the incidence of climate ratings is sparse (the mean climate rating is only 4.2% for the full sample without excluding any industries or firms), these screenings exclude a significant number of industries and firms: about 75%/60% of the industries/firms are excluded from the sample. Therefore, in the next section I examine alternative samples to check the robustness of the results.
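The screening rules just described (drop industries whose average Climate rating is below 0.01 or that have fewer than five firm-years) amount to a simple group-level filter, sketched below with the hypothetical column names used in the earlier panel sketch.

```python
import pandas as pd

# Industry-level incidence of climate strength ratings and industry size.
ind = panel.groupby("sic3")["climate_rating"].agg(mean_rating="mean", n_obs="size")

# Keep industries with a non-trivial incidence of ratings and enough firm-years.
keep = ind[(ind["mean_rating"] >= 0.01) & (ind["n_obs"] >= 5)].index
filtered = panel[panel["sic3"].isin(keep)]

print(len(panel), len(filtered))  # roughly 75% of industries drop out in the paper's sample
```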
Alternative Samples

In the main text I excluded around 75%/60% of the industries/firms from the sample to account for the industry applicability of the KLD ratings. Here I entertain alternative industry exclusions to examine the robustness of the results. The results are reported in Table I2. In Model 1 I use the full sample without any exclusions. As mentioned in the main text, the average value of Climate rating in this case is only 4.2%, which is consistent with a very sparse incidence of climate strength ratings in the KLD data. The results show that EWE continues to be positive and highly significant, though the coefficient becomes smaller. However, because the average value of Climate rating for the full sample decreases significantly relative to the primary sample (from 0.09 to 0.042), the economic significance of the results actually increases. Specifically, for a one standard deviation increase in the annual number of EWEs at the headquarter county of the average firm in the sample (2.65), the coefficient on EWE in Model 1 (0.007) suggests that the climate rating will be upgraded by 0.002889 (=0.007*2.65/6.42) notch, which represents a 6.88% improvement in its rating. The industry applicability of climate rating would not be an issue if MSCI had issued consistent "NR" (Not Rated) markings for all the industries after 2009, when it changed the definition of this variable significantly. If this were the case, I would have been able to backtrack all the industries in the pre-2009 period if they received "NR" after 2009. Unfortunately this is not the case. MSCI did not start to issue "NR" until 2012 and, more importantly, it did not issue consistent "NR" markings for all the firms in the same industry. For example, some firms in a given industry may receive a "NR" which, according to the data guide, should suggest that climate rating is not applicable to these firms' industry. However, the data suggest this is not the case: other firms in many of these industries actually receive either a "1" or "0" rating. This inconsistency introduces further noise into the analysis. In light of this, I entertain three types of industry exclusions to construct samples to check the robustness of the results. First, I ignore the consideration of inconsistency in "NR" markings, and exclude from the sample all the industries which received a "NR" for at least one of their firms in 2012. It turns out that this exclusion is very significant. The sample size after the exclusions is only 7,193, which is even smaller than that based on the exclusion criteria I applied for the primary sample (7,706). However, the regression results as reported in Model 2 of Table I2 show that EWE continues to be positive and significant. The sample for Model 3 is based on excluding all the industries which have a consistent "NR" marking in 2012. Because of the large number of inconsistent "NR" markings, this exclusion criterion is not restrictive at all, as is apparent from the sample size (17,276), which is only slightly smaller than the full sample (17,349). As expected, the regression results are also similar to those based on the full sample. Because some industries may have a strength rating (Climate rating = 1) during my sample period (from 1997 to 2009) even though they are consistently marked as "NR" by MSCI in 2012, I also consider this type of inconsistency in Model 4 and exclude an industry only if it is consistently marked as "NR" in 2012 and does not have a "1" rating for any of its firms during the sample period.
Naturally, because of the even smaller number of industries/firms being excluded, the sample size in this case is even closer to that of the full sample. The regression results as shown in Model 4 are also similar. As described in Section 3 of the main text, one drawback of the COMPUSTAT data is that it has only the most recent information on headquarter locations. To get around this issue, I manually collected the historical headquarter data for the firms in the S&P 500 as of 2006 over the years 1997 to 2009. The sample size drops dramatically to 2,160 firm-years. Model 5 in Table I2 reports the results. As shown, EWE continues to be positive and significant, though the significance level has decreased, presumably due to the smaller sample size. However, the coefficient on EWE is larger. In addition to the four categories of weather events included in the definition of EWE, the extant climate models also predict that other EWEs such as hurricanes, tornadoes, and winter storms may be affected by ACC, though with more uncertainty (Melillo, Richmond, & Yohe, 2014). Therefore, I examine the robustness of the results in Model 2 of Table I3 by including a more comprehensive list of weather events in the definition of the EWE variable (EWE1). The specific types of EWEs in the definition of EWE1 are listed in Table I4. These events largely fall into 8 categories: heat events (including droughts and wildfires), floods, winter weather, tropical storms (including hurricanes), wind events, hail, lightning, and tornadoes. As can be seen from Model 2, the coefficient on EWE1 is positive and highly significant, and much larger than what is reported in Model 4 of Table 4 in the main text. In untabulated analysis, however, I do not find the coefficients on EWE and on the difference between EWE1 and EWE to be statistically different when both are included in the regression.

Alternative Definitions of EWE

In Models 3 and 4 I examine the types of EWEs that are associated with ACC with even less certainty (IPCC, 2012a). Specifically, I consider cold events, defined as the sum of the annual numbers of cold/wind chill, extreme cold/wind chill, and frost/freeze events as in Table I4. At first glance this seems to be the opposite of the heat events studied in the main text; that is, while heat waves are expected to increase with ACC, cold weather must be expected to decrease with it. Closer examination of climate models, however, suggests this is not the case. The change in the incidence of extremely cold weather as a result of climate change is actually uncertain. This uncertainty is driven by the fact that while an increase in mean temperature implies a decrease in the incidence of cold weather, an increase in the variance and/or a shift in the shape of the probability distribution of temperatures, both of which are possible under ACC, can result in an increase in cold weather (IPCC, 2012a, p. 7 & 121). Therefore, the change in cold events with ACC is ambiguous. Complicating this ambiguity further is the uncertainty about managerial knowledge of the implications of ACC for the incidence of cold weather. If managers understand climate change as implying an increase in heat events and a decrease in cold spells, then experiencing more cold weather will lower their confidence in ACC and decrease their incentives to take mitigation actions.
On the other hand, if managers understand the inherent uncertainty associated with the change in cold weather as a result of ACC, as the discussion of climate models above suggests, experiencing more cold weather may enhance their belief in ACC and increase their incentives to adopt climate mitigation policies. Indeed, there is some evidence suggesting that individuals are more concerned about climate change after experiencing abnormally cold weather (e.g., Brooks, Oxley, Vedlitz, Zahran, & Lindsey, 2014; Capstick & Pidgeon, 2014; Lang, 2014). The results in Models 3 and 4 confirm the complexity of the relation between cold spells and climate ratings: while on average cold weather is positively associated with climate ratings, as Model 3 demonstrates, the effects of different types of cold weather are not consistent, as Model 4 shows. Specifically, both cold/wind chill and frost/freeze are significant, but in opposite directions. In untabulated analysis, I further consider other EWEs in Table I4 that are also cold-related, including blizzard, heavy snow, ice storm, lake-effect snow, sleet, winter storm, and winter weather. The results are similar. In particular, none of these winter-weather-related EWEs is individually significant, but the total sum of the three cold events as studied in Table I3 and these events is positive and significant. Note that this latter group of cold events is part of EWE1, and all of them are associated with water vapor in the atmosphere, which may increase as a result of ACC (IPCC, 2012b). By and large, the results with respect to cold EWEs as reported in Table I3 are harder to interpret than the results with respect to heat events in Table 4 of the main text. This provides another reason to focus on the types of EWEs that are associated with ACC with less uncertainty, as is currently done. This can help alleviate the ambiguity with respect to managers' knowledge base. Using Economic Damages to Measure EWE One drawback of using the frequency of EWEs to measure managerial experiential learning of ACC is that it ignores the severity of EWEs. Though I cannot fully account for this issue due to data limitations, I partially address it by employing the NOAA Billion-Dollar Disasters Database to estimate the economic damages to the headquarter states caused by "mega-disasters" incurring at least $1 billion in inflation-adjusted damages (Smith & Katz, 2013). I assume that the damage of a state is proportional to its GDP. 14 I then sum up the estimated damages of all the disasters affecting the state in the previous year, and "normalize" this variable using the 2009 state GDP. The normalization takes into account the different levels of wealth at stake at different points in time (e.g., Pielke Jr. et al., 2008; Simmons, Sutter, & Pielke Jr., 2013). One of the downsides of this disaster variable is its coarser geographic resolution. I examine the relationship between this economic damage variable and climate ratings in Model 5 of Table I3. To be consistent with the state-level disaster variable, I replace the county FEs with the state FEs, and cluster the standard errors at both the state-year and firm levels. The results show that Billion disaster loss is positive and significant, which is consistent with those based on the frequency of EWEs. Contemporaneous Values of EWE and Control Variables The primary specification employs one-year lagged values for all the independent variables including the primary variable of interest, EWE.
The rationale for using the lagged EWEs is to consider the possibility that climate policies may be determined in the middle of a year even though the climate rating is reported at the end of the year, so some EWEs that happened after the determination of the policy should not matter for managerial experiential learning of ACC. The inclusion of the lagged control variables is to alleviate the concern for endogeneity. Here I examine the robustness of the results by using the contemporaneous values of all these variables. This enables the sample to include the starting year of the Storm Database, 1996. Hence the final sample covers the years from 1996 to 2009 with 9,481 firm-year observations. The regression results are presented in Model 6 of Table I3. As shown, EWEt (the contemporaneous value of EWE) continues to be positive, though the significance is slightly weaker, at the 10% level. Alternative Definitions of Net CER/CSR The literature has employed different methods to define the net rating of a firm's CSR. In my primary analysis I follow to define Net CER/CSR. I examine the robustness of the results in this section using other definitions. I first follow to consider the industry-specificity of CSR, and define Net CER/CSR as the net environmental/social score (total strength count - total concern count) minus the minimum value of this score in the firm's industry, scaled by the industry range (maximum - minimum) of this score. Results using these measures of Net CER/CSR are reported in Model 1 of Table I5. The coefficient on EWE continues to be positive and significant. Interestingly, Net CER based on this definition loses significance. In untabulated analysis, I define other measures of Net CER/CSR by following , , and Cai, Jo, and Pan (2011), respectively. The results are also similar. Probit Model Because the dependent variable, Climate rating, is a dummy variable, it is more appropriate to use a nonlinear model than the linear model I employ in the paper. As discussed in the main text, however, the critical importance of controlling for a large number of FEs in the model jeopardizes the consistency of nonlinear models due to the "incidental parameter" problem (Neyman & Scott, 1948). Here I employ a probit model to examine the robustness of the results, but only control for county and year FEs in light of the incidental parameter problem. The results as reported in Model 2 of Table I5 show that EWE continues to be positive and highly significant, suggesting that the major findings in the paper are not sensitive to model specifications. Firm FEs In my primary specification I include county and industry-year FEs. But some unobserved firm-level characteristics may also help determine climate policies, such as corporate culture. If these firm-specific unobservable characteristics do not change over time, they could be captured by a firm FE. Therefore, I examine the robustness of the results by further including the firm FEs in the regression model. The results are reported in Model 3 of Table I5. As it turns out, controlling for these unobservable time-invariant firm attributes makes the coefficient on EWE even larger. Alternative Industry Classifications In my primary empirical analysis I classify industries based on the three-digit SIC code. I check the robustness of the results using different industry classifications in this section. I consider four classifications: four-digit SIC code, two-digit SIC code, Fama-French 48 industries, and Fama-French 12 industries (Fama & French, 1997).
15 I then interact each of the four types of industry dummies corresponding to these classifications with year dummies, respectively, in the regressions. The results are reported in Table I6, and demonstrate that the positive and significant impact of EWEs on climate ratings is robust to different industry classifications. EWEs at Neighboring Counties and Climate Ratings As a placebo test, in Table I7 I examine whether the EWEs at the counties neighboring the headquarter county of a firm have a different impact on its climate rating as compared with the EWEs at the headquarter county of the firm. Because experiential processing is often associated with strong affect, compared to the possibly vicarious nature of the EWE experience in the neighboring counties, the affect associated with the personal experiencing of EWEs at the headquarter county may be stronger. Indeed, there is some evidence from the environmental psychology literature that direct experience of natural hazards is more powerful than vicarious experience in increasing an individual's concern about climate change (Lujala, Lein, & Rod, 2015). As a result, it may be reasonable to expect that the EWEs at the neighboring counties do not generate as strong an impact on climate ratings as the EWEs presumably experienced personally by managers at their headquarter counties. I employ two methods to identify the neighboring counties. In Models 1 and 2 of Table I7, I identify a neighboring county based on whether its distance from the headquarter county is within a certain range. 16 If multiple counties meet this criterion I take the average of their annual frequencies of EWEs. A second method to identify a neighboring county, which is used in Model 3, is to rank the distances from all the neighboring counties to the headquarter county. I then include the annual number of EWEs at the four closest neighboring counties, respectively. Models 1 and 2 consider two sets of ranges: a coarser set with ranges between 0 and 200 km and between 200 km and 400 km, and a finer set that further divides each of these two ranges into two equal sub-ranges. The results show that none of the EWE variables corresponding to the neighboring counties is significant. One possibility is that managers may be more affected by the maximum number of EWEs across all the neighboring counties than by their average, because the maximum may be more salient. However, I find similar results if I define the EWE variable at the neighboring counties as the maximum value of EWEs among all the neighboring counties. The results in Table I7 demonstrate that personal rather than possibly vicarious experiences of EWEs are of paramount importance in determining climate ratings. 16 I use the Haversine formula to calculate the great-circle distance between two places on a sphere. The formula is given by d12 = r × 2 × arcsin(min(1, √(sin²((φ2 − φ1)/2) + cos(φ1) × cos(φ2) × sin²((λ2 − λ1)/2)))), where r is the radius of the Earth and (φ1, λ1) and (φ2, λ2) are the latitude and longitude of the two places. It is notable that the insignificant effects of the EWEs at the neighboring counties on climate ratings seem in spirit to be inconsistent with the results in Dessaint and Matray (2017), who show that managers of firms located in the neighborhood of hurricane-stricken areas nonetheless overreact to the hurricane risk by holding excessive amounts of cash. Though sorting out the reasons to fully explain the difference in results is beyond the scope of this paper, I note three possible explanations. First, unlike the liquidity decision that relates to daily operations, the decision on climate policies seems less urgent from a firm's perspective.
As a result, the hurdle above which managers are motivated to act may be higher for the latter type of decisions, and hence personal experiencing of EWEs may be indispensable to overcome this hurdle. Second, the results in Table 4 in the main text suggest that flooding is the most prominent type of EWEs to impact climate ratings. The statistics further show that most of the flood events is flash flood, which tends to be locally incurred. As a result, if these flood events are not reported by the media and if no one in the flood area describes these experiences to a manager, he/she may well be unaware of these events at all. Finally, though the EWEs in this study may be intense enough to alter managerial belief in ACC and hence change corporate climate policies, the intensity of these disasters may still not be comparable to hurricanes. Therefore, the same logic as in the first explanation applies while managers may act to the risk of hurricanes by simply observing their impact on neighbors, personal experiencing the types of EWEs as in this study may be indispensable for them to overcome the hurdle to take actions. Corporate Governance and Effect of EWEs on Climate Ratings Though I have argued that experiencing EWEs by managers is likely to change or enhance their beliefs in ACC and motivate them to adopt friendlier corporate mitigation policies, a positive and significant relation between the EWEs in the headquarter of a firm and its climate rating leaves space for alternative explanations of the result. To provide more direct evidence for the role managers may play in this relation, I examine whether the governance mechanisms of a firm may shape this relation. The rationale is that managers should have more leeway to imprint their personal beliefs on corporate policies if they are more powerful hence the governance of the firm is weaker (Cronqvist, Makhija, & Yonker, 2012). I consider three types of governance mechanisms, CEO tenure, board size, and antitakeover mechanisms. I obtain the data for these governance mechanisms from the Institutional Shareholder Service (formerly RiskMetrics) Director and Antitakeover databases, which cover the director profiles and antitakeover devices of the S&P 1,500 firms. First, longer tenure may indicate managerial power and entrenchment (Hermalin & Weisbach, 1998). An entrenched CEO should have a stronger influence on corporate decision-making. In Model 1 of Table I8, I interact EWE with a Longer tenure dummy, which equals one if the tenure of a CEO is longer than the sample median, and zero otherwise. I expect the interaction term to be positive and significant. The results show that although it is positive, it is not significant. Second, because of the coordination difficulty and free-rider problem, larger boards are generally associated with weaker governance (Eisenberg, Sundgren, & Wells, 1998;Yermack, 1996). Therefore, in Model 2 of Table I8 I interact EWE with a Larger board dummy, which equals one if the board size is larger than the sample median, and zero otherwise. As predicted, the interaction term EWE * Larger board is positive and significant. Third, firms with more antitakeover mechanisms in place are generally associated with a lower valuation because of the presumably stronger protection against external market discipline as afforded by these devices (L. Bebchuk, Cohen, & Farrell, 2009;Gompers, Ishii, & Metrick, 2003). L. A. 
Bebchuk and Cohen (2005) further show that among the 24 major antitakeover mechanisms (the components of the G-index as in Gompers et al. (2003)), the classified board is the most critical. Therefore, in Model 3 of Table I8 I interact EWE with a Classified board dummy, which equals one if the election of the board of directors of a firm is staggered, and zero otherwise. 17 The results show that the interaction term is positive but not significant. Altogether, the results in Table I8 provide some evidence that is consistent with the argument that stronger managerial power, as afforded by weaker corporate governance, makes it easier for managers to imprint their personal beliefs in ACC on corporate climate mitigation policies after experiencing EWEs. However, these results still cannot rule out the possibility that local stakeholders of the firm, after witnessing EWEs or suffering great losses from them, push managers to adopt friendlier climate policies. For example, the stronger effect of EWEs on climate ratings under a larger board may be driven by the increased chance of local stakeholders connecting with some board members, and hence shaping corporate policies, because of the simple fact that the board has more members. At the minimum, however, these results suggest that managers do play a role in the relation between EWEs in their headquarters and climate ratings. 17 Another reason for me to use the classified board instead of the G-index (or a refined E-index as in L. Bebchuk et al. (2009)) is that some components that are needed to calculate the G-index or the E-index are not available after 2006. Damage Hypothesis One alternative explanation for the significantly positive effect of EWEs on climate ratings is that, in response to the damages caused by EWEs to a firm's properties at the headquarter, the firm replaces the assets with presumably more energy-efficient ones, hence earning a climate strength rating. In addition to the arguments made in the main text, I conduct two additional tests in this section to further examine this "Damage Hypothesis". First, note that the arguments made in the previous section suggest that many of the EWEs used in the construction of the primary EWE variable tend to be local, such as flash floods. Actually, I find that the correlation between the EWEs at the headquarter county and those of the closest neighboring county is only 0.41. Though still high, this statistic is nonetheless more consistent with the "local EWEs" argument above. If this is the case, and since the firms in my sample are the largest public firms in the U.S., it may be reasonable to expect that the psychological impact of the EWEs incurred at the headquarters of these firms is more significant than their economic impact. Though I lack the data on the geographic distribution of these firms' assets to better test this argument, it might be reasonable to expect that the larger a firm is, the less economically important are the physical assets at the firm's headquarter, since they are expected to be a smaller fraction of the firm's total assets. If this is the case, then the Damage Hypothesis should be more applicable to smaller firms with fewer assets, because then the assets at their headquarters may be more significant. Therefore, in Model 1 of Table I9, I interact EWE with a Small size dummy, which equals one if the lagged value of the total assets of a firm is at or below the sample median, and zero otherwise.
If the Damage Hypothesis holds, I expect this interaction term to be positive and significant. However, as shown, the interaction term is negative and insignificant. Between the two major types of assets, tangible and intangible, tangible assets should be more subject to the arguments in the Damage Hypothesis; hence firms with more tangible assets/higher capital intensity should be a better candidate to be consistent with this hypothesis. Therefore, in the remaining models of Table I9 I interact EWE with the High capital intensity and Old PPE dummies defined in the note to that table. Although the Damage Hypothesis predicts a positive interaction term, the result in Model 3 is negative and insignificant. Changing Extreme Weather Type and Climate Ratings One prediction of the theory based on the two fundamental information processing systems and the attention-based view of the firm, as discussed in the main text, is that the occurrence of a new type of EWE may have a more pronounced impact on climate ratings as compared to merely more frequent incidences of EWEs of the same types. The fundamental reason for this is that both theories are based on direct experiences of climate stimuli, and a different kind of EWE experience may more easily generate awareness and/or a change in belief in ACC. A rigorous examination of this hypothesis requires an identification of the "normal" type(s) of EWE(s) at a specific location. However, the Storm Database that I employ for EWEs does not allow for a strict identification of the "normal" EWE(s). The reason is that the database starts in 1996 and my sample starts in 1997 (because I use the lagged values of EWEs), so there is no historical information to identify the trend of EWEs which might more plausibly serve as the "normal" EWEs at a location. Despite this difficulty, I use two methods to roughly identify the "normal" EWE(s) to examine the plausibility of this Changing Type Hypothesis (CTH). The results are presented in Table I10. In Models 1 and 2, the normal EWE(s) at a county are identified as the categories (types) of EWEs that first occurred in the county during the sample period, based on the four categories listed in Appendix B of the main text (Model 1) and on the larger set of event types listed in Table I4 (Model 2). This enlargement of the set of EWEs used to identify the normal EWE(s) does not change the results. The interaction term is still positive and highly significant. One obvious drawback of using the earliest EWE(s) during the sample period to identify the normal EWE(s), as in these two models, is that since there is no historical trend, the first appearance in the sample does not necessarily suggest that this (these) EWE(s) is (are) the most common type(s) of EWE(s) in the county. Model 3 attempts to alleviate this concern. Specifically, instead of using the earliest EWE(s), the normal EWE(s) are identified as the type(s) that occurred in the county over the largest number of years and, in the event of ties, for the largest number of times. Collectively, the results in Table I10 suggest the plausibility of the CTH despite the drawbacks, as discussed above, in identifying the normal type(s) of EWE(s) at a given location. Definitions of the climate ratings. Climate strength - Before 2010: The company has taken significant measures to reduce its impact on climate change and air pollution through use of renewable energy and clean fuels or through energy efficiency. The company has demonstrated a commitment to promoting climate-friendly policies and practices outside its own operations. On or after 2010: This indicator measures a firm's policies, programs, and initiatives regarding climate change. Factors affecting this evaluation include, but are not limited to, the following: • Companies that invest in renewable power generation and related services. • Companies that invest in efforts to reduce carbon exposure through comprehensive carbon policies and implementation mechanisms, including carbon reduction objectives, production process improvements, installation of emissions capture equipment, and/or switch to cleaner energy sources. • Companies that take proactive steps to manage and improve the energy efficiency of their operations.
• Companies that measure and reduce the carbon emissions of their products throughout the value chain and implement programs with their suppliers to reduce carbon footprint. • Insurance companies that have integrated climate change effects into their actuarial models while developing products to help customers manage climate change related risks. Climate concern (env_con_f) Before 2012 The company derives substantial revenues from the sale of coal or oil and its derivative fuel products, or the company derives substantial revenues indirectly from the combustion of coal or oil and its derivative fuel products. Such companies include electric utilities, transportation companies with fleets of vehicles, auto and truck manufacturers, and other transportation equipment companies. On or after 2012 This indicator measures the severity of controversies related to a firm's climate change and energy-related policies and initiatives. Factors affecting this evaluation include, but are not limited to, a history of involvement in GHG-related legal cases, widespread or egregious impacts due to corporate GHG emissions, resistance to improved practices, and criticism by NGOs and/or other third-party observers. Table I2. Alternative Samples This table reports the robustness of the results by using alternative samples. Sample 1 is the full sample without excluding any industries/firms. Sample 2 excludes all the industries with "NR" (not rated) markings by MSCI at 2012. Sample 3 excludes all the industries with "consistent" "NR" markings by MSCI at 2012, where consistency is defined as the situation when all the firms in the same industry are marked as "NR". Sample 4 excludes all the industries with consistent "NR" markings by MSCI at 2012 and consistent "0" Climate rating between 1997 and 2009. Sample 5 includes the S&P 500 firms at 2006 with the headquarter county data manually collected for the sample period. The dependent variable for each model is Climate rating. See Appendix A in the main text for the definitions of all the variables. All models also include the county FEs and the interactions of year and three-digit SIC industry FEs. Standard errors are adjusted for heteroscedasticity and clustered at both the firm and county-year levels. t-statistics are reported in parentheses. *, **, and *** indicate significance at the 10%, 5%, and 1% levels, respectively. (1) Table I4. Cold event is the lagged sum of the annual number of cold/wind chill, extreme cold/wind chill and frost/freeze events as listed in Table I4. Billion disaster loss is the estimated total normalized state loss by "megadisasters" causing at least $1 billion inflation-adjusted economic damages in the previous year. The loss of a state is assumed to be proportional to its GDP, and the normalization is based on the GDP at 2009. EWEt is the contemporaneous value of EWE. See Appendix A in the main text for the definitions of all other variables. The dependent variable for each model is Climate rating. The control variables for Model 6/the other models are at their contemporaneous/one-year lagged values. All models also include the interactions of year and three-digit SIC industry FEs. Standard errors are adjusted for heteroscedasticity and clustered at both the firm and county-year levels for all the models except for Model 5, and at both the firm and state-year levels for Model 5. t-statistics are reported in parentheses. *, **, and *** indicate significance at the 10%, 5%, and 1% levels, respectively. 
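The note above states that the Billion disaster loss variable assumes a state's damage to be proportional to its GDP and is normalized by the 2009 state GDP. A minimal sketch of that construction is given below; the disaster list, GDP figures, and the GDP-proportional apportionment of each disaster's total damage across the affected states are entirely hypothetical illustrations and interpretive assumptions, not the actual NOAA or BEA data.

```python
# Sketch of the Billion disaster loss construction; all numbers are made up.
state_gdp = {"TX": 1_100_000, "LA": 210_000}        # prior-year state GDP ($ millions)
state_gdp_2009 = {"TX": 1_150_000, "LA": 205_000}   # 2009 GDP used for normalization

# Each mega-disaster: total inflation-adjusted damage ($ millions) and affected states.
disasters_prev_year = [
    {"damage": 5_000, "states": ["TX", "LA"]},
    {"damage": 1_200, "states": ["TX"]},
]

def billion_disaster_loss(state):
    total = 0.0
    for d in disasters_prev_year:
        if state in d["states"]:
            gdp_affected = sum(state_gdp[s] for s in d["states"])
            total += d["damage"] * state_gdp[state] / gdp_affected  # GDP-proportional share
    return total / state_gdp_2009[state]                            # normalize by 2009 GDP

print(round(billion_disaster_loss("TX"), 5))
```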
Table I7. Extreme Weather at Neighboring Counties and Climate Ratings. This table examines the effect of EWEs at neighboring counties on climate ratings. EWEnbx-y is the average of the annual numbers of EWEs incurred at the counties which lie between x and y kilometers from the headquarter county of the firm. EWEnbi is the number of EWEs of the ith-closest county to the headquarter county of the firm. The dependent variable for each model is Climate rating. See Appendix A in the main text for the definitions of all other variables. All models also include the county FEs and the interactions of year and three-digit SIC industry FEs. Standard errors are adjusted for heteroscedasticity and clustered at both the firm and county-year levels. t-statistics are reported in parentheses. *, **, and *** indicate significance at the 10%, 5%, and 1% levels, respectively. Table I8. This table examines the impact of governance mechanisms on the effect of EWEs on climate ratings. Longer tenure is a dummy variable that equals one if the tenure of the CEO is longer than the median value in the sample, and zero otherwise. Larger board is a dummy variable that equals one if the board size is larger than the median value in the sample, and zero otherwise. Classified board is a dummy variable that equals one if the election of the board of directors of a firm is classified, and zero otherwise. The dependent variable for each model is Climate rating. See Appendix A in the main text for the definitions of all other variables. All models also include the county FEs and the interactions of year and three-digit SIC industry FEs. Standard errors are adjusted for heteroscedasticity and clustered at both the firm and county-year levels. t-statistics are reported in parentheses. *, **, and *** indicate significance at the 10%, 5%, and 1% levels, respectively. Table I9. Testing the Damage Hypothesis. This table examines the Damage Hypothesis, which states that the positive and significant impact of EWEs on climate ratings is driven by firms replacing the assets damaged by EWEs with new and less carbon-intensive ones. Small size is a dummy variable that equals one if the lagged total assets of the firm are at or below the sample median, and zero otherwise. High capital intensity is a dummy variable that equals one if the lagged capital intensity (gross PPE scaled by total assets) of the firm is above the sample median, and zero otherwise. Old PPE is a dummy variable that equals one if the lagged industry-adjusted percent used-up of PPE (accumulated depreciation/gross PPE) of the firm is above the sample median, and zero otherwise. The dependent variable for each model is Climate rating. See Appendix A in the main text for the definitions of all other variables. All models also include the county FEs and the interactions of year and three-digit SIC industry FEs. Standard errors are adjusted for heteroscedasticity and clustered at both the firm and county-year levels. t-statistics are reported in parentheses. *, **, and *** indicate significance at the 10%, 5%, and 1% levels, respectively. Table I10. Changing Extreme Weather Type and Climate Ratings. This table reports the results to examine the possibility that a changing extreme weather type may have a more pronounced impact on climate ratings than simply having more frequent but the same type(s) of extreme weather event(s).
New type based on time (1)/duration (1) is a dummy variable that equals one if in the previous year a new category of EWE among the four categories as listed in Appendix B of the main text occurred in the headquarter county of a firm, and zero otherwise, where the "normal" type of EWE at the county is based on the categories (types) of EWE(s), out of those listed in Appendix B of the main text (Table I4), that first occurred in the county in the previous year during the sample period/occurred in the county in the previous year over the largest number of years between 1997 and 2019 and, in the event of ties, for the largest number of times between 1997 and 2019. The dependent variable for each model is Climate rating. See Appendix A in the main text for the definitions of all other variables. All models also include the county FEs and the interactions of year and the corresponding industry FEs. Standard errors are adjusted for heteroscedasticity and clustered at both the firm and county-year levels. t-statistics are reported in parentheses. *, **, and *** indicate significance at the 10%, 5%, and 1% levels, respectively.
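For concreteness, the type of specification underlying the models in these tables (Climate rating regressed on the lagged EWE count with county fixed effects and industry-by-year fixed effects) can be sketched as follows. The data file, column names, and control set are hypothetical, and the sketch uses one-way clustering by firm rather than the two-way firm and county-year clustering described in the table notes; it is an illustration of the structure, not the actual estimation code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel with columns: climate_rating, ewe_lag,
# size_lag, roa_lag, county, sic3, year, firm_id.
df = pd.read_csv("climate_panel.csv")  # hypothetical input file

# County FEs plus three-digit-SIC-by-year FEs via dummy interactions.
# Note: with many counties/industries this brute-force dummy approach is slow;
# it is kept only to make the specification explicit.
model = smf.ols(
    "climate_rating ~ ewe_lag + size_lag + roa_lag"
    " + C(county) + C(sic3):C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})

print(model.params["ewe_lag"], model.bse["ewe_lag"])
```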
v3-fos-license
2017-10-22T01:32:39.604Z
2015-03-06T00:00:00.000
37290515
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=54524", "pdf_hash": "295824a427408bba04ad7555517690cc4d2c8b69", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41994", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "4f08d5c2ef49f24bd993cd0c2082118ad4245f45", "year": 2015 }
pes2o/s2orc
Activity of Secondary Bacterial Metabolites in the Control of Citrus Canker This study investigated the protective effects of secondary bacterial metabolites, produced by Pseudomonas sp. (bacterium strain LN), on citrus canker disease caused by Xanthomonas axonopodis pv. citri (Xac 306). The LN bacterial strain was cultured in liquid medium and the cell-free supernatant was extracted with methanol (AMF) and ethyl acetate (AEF), respectively; the extract was then concentrated, filtered, lyophilized and fractionated by vacuum liquid chromatography (VLC). After VLC, eight fractions were obtained. All fractions were tested for activity against Xac 306 by agar well diffusion assay and minimum inhibitory concentration, and the active fractions inhibited the pathogen at different concentrations. Cytotoxic effects were observed for all fractions at a concentration of 50 μg∙mL−1. The comet assay demonstrated that the fractions EAF, VLC2 and VLC3 presented no genotoxic effects at the tested concentrations. In planta, only VLC3 showed significant results (p < 0.05), reducing the incidence of citrus canker lesions. Introduction Citrus canker (CC), caused by the bacterium Xanthomonas axonopodis pv. citri (Xac), is a serious worldwide disease in citrus production [1]. The pathogen causes necrotic lesions on leaves, stems and fruits. Severe infections can cause defoliation, blemished fruits, premature fruit drop, twig dieback, general tree decline [2], and a consequent yield decrease. The losses can reach hundreds of millions of dollars per year. In Florida alone, US$200 million had been spent by 2001, not including the losses from the elimination of contaminated trees and plantlets [3]. The infective process begins with the entrance of Xac through natural openings (stomata) or wounds. Inside the plant, cells of Xac multiply in the intercellular space until the spaces become filled with bacteria or exopolysaccharide [4]. The earliest symptoms on leaves appear, under optimum conditions, as slightly raised, tiny blister-like lesions about 4 - 7 days after inoculation [5]. As the lesions age, they turn brown, and a water-soaked margin appears surrounded by a chlorotic halo. The methods used to control CC in areas of the world where it is endemic involve the use of resistant citrus varieties, windbreaks to hinder inoculum dispersal, and timely applications of copper-containing bactericides [2]. However, the bacterium can develop resistance to the products, thus demanding more frequent copper applications. In addition, there is an increasing concern about the accumulation of such substances in food and in the environment [6]. For these reasons, the use of antagonistic bacteria or their metabolites has been proposed as a promising strategy for plant protection [7]-[10]. The aim of this work was to evaluate the effect of secondary bacterial metabolites produced by Pseudomonas sp. strain LN on the severity of citrus canker lesions caused in Citrus sinensis cv. Valencia by isolate 306 of Xac. Bacterial Strains The pathogen Xanthomonas axonopodis pv. citri strain 306 (Xac 306), whose genome has been sequenced [11], was used in all experiments. This bacterium was stored in 30% (v/v) glycerol in liquid nitrogen. The cells were renewed every 12 months on nutrient agar at 28˚C for 48 h to maintain biological viability.
The non-pathogenic antagonist bacterium, Pseudomonas sp. strain LN, was isolated from leaves with citrus canker lesions collected in Astorga, PR, Brazil [12]. The procedure for storage and cell maintenance was the same as for Xac 306, except that the culture medium was tryptic soy agar (TSA) plus copper chloride (100 mg•L −1 ). Cain and collaborators [13] suggest that some bacteria isolated from culture media amended with copper might produce antagonistic substances. Inoculum The strain LN inoculum was prepared from cultures grown for 48 h at 28˚C on TSA plus copper chloride. Cells were removed from the surface of the growth medium with a sterile loop and suspended in sterile phosphate buffer (KH 2 PO 4 1 g; K 2 HPO 4 1.5 g; distilled water 1000 mL, pH 7.0). The inoculum concentration was determined spectrophotometrically (λ = 590 nm) and adjusted to a final density of 10 8 CFU•mL −1 using sterile phosphate buffer. The pathogen inoculum was prepared from cultures of Xac 306 grown on nutrient agar for 48 h at 28˚C. Xac 306 was prepared as described above, and the pathogen final concentration was adjusted to 10 8 CFU•mL −1 . Antagonistic Bacterial Extract Bacterial strain LN was grown in tryptic soy broth (TSB) plus copper (Cain et al., 2000) (1500 mL) and incubated on a shaker at 28˚C and 100 rpm for 15 days. After bacterial growth, the medium was centrifuged in a refrigerated centrifuge (9000 rpm at 4˚C) to obtain the secondary metabolites in the supernatant. The solid residue was discarded. Aliquots of 500 mL were extracted with ethyl acetate (500 mL) in a separation funnel. This process was repeated five times. The organic phase, called acetate fraction (AF, 0.031 g), was concentrated in a rotary evaporator and lyophilized. The lyophilized powder was suspended in methanol and filtered; the filtrate was lyophilized to yield the acetate methanol fraction (AMF, 0.022 g). The AMF was suspended in ethyl acetate and filtered, and the filtrate was lyophilized, originating the acetate-ethyl acetate fraction (AEF, 0.020 g). Vacuum Liquid Chromatography (VLC) VLC was performed using a glass column (20 mm diameter × 140 mm height), silica gel 60 (0.063 - 0.200 mm, Merck), vacuum at ~380 mm Hg, and the organic solvents hexane, dichloromethane, ethyl acetate, methanol, methanol plus distilled water 1:1 (v/v), and distilled water, in this order [14]. Nine grams of silica gel were packed in the glass column; afterwards, 1.55 g of the AEF fraction was added to the top of the column. Twenty milliliters of each solvent were added four times. The respective organic phases were collected and dried, originating the fractions hexane (VLC1, 0.0014 g), dichloromethane (VLC2, 0.1541 g), ethyl acetate (VLC3, 0.5659 g), methanol (VLC4, 0.7217 g), water:methanol 1:1 (v/v) (VLC5, 0.0197 g) and distilled water (VLC6, 0.0022 g). All organic phases were concentrated in a rotary evaporator and lyophilized. Agar Well Diffusion Assay The experiments were performed using four replications per treatment (fraction). Petri plates containing nutrient agar were inoculated with a suspension of 10 8 CFU of Xac 306 by the pour plate method. A 150 µL aliquot of AMF, EAF and all VLC fractions was tested at concentrations from 0.01 to 0.0001 mg•mL −1 in wells of 9 mm diameter. The plates were incubated at 28˚C for 48 h, and distilled water was used as control for each tested fraction.
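As a small illustration of the dilution schemes used in the in vitro assays (the ten-fold series from 0.01 to 0.0001 mg·mL−1 above and the two-fold series from 5000 down to about 9.76 µg·mL−1 described in the next section), the concentration lists can be generated as in the sketch below; the growth readings used to read off a MIC are hypothetical placeholders, not experimental data.

```python
# Ten-fold dilution series for the agar well diffusion assay (mg/mL).
tenfold_mg_ml = [0.01 / 10 ** i for i in range(3)]        # 0.01, 0.001, 0.0001

# Two-fold dilution series for the MIC test (ug/mL), 10 steps from 5000 ug/mL.
twofold_ug_ml = [5000 / 2 ** i for i in range(10)]        # 5000, 2500, ..., ~9.77

# Hypothetical growth readings (True = visible Xac 306 growth) for one fraction.
growth = dict(zip(twofold_ug_ml, [False] * 6 + [True] * 4))

# MIC = lowest concentration with no visible growth after incubation.
no_growth = [c for c, grew in growth.items() if not grew]
mic = min(no_growth) if no_growth else None
print(f"MIC = {mic} ug/mL")   # 156.25 ug/mL with these made-up readings
```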
Minimum Inhibitory Concentration In this step the experiments were performed in duplicate. The fractions that showed the highest antimicrobial activity against Xac 306 (EAF, VLC2, VLC3 and VLC4) in item 2.3 were selected for the next step. In this test, each well of a 24-well tissue culture plate containing 1.9 mL of nutrient broth was inoculated with Xac 306 (10 8 CFU•mL −1 ). Two-fold dilutions of the fractions (5000; 2500; 1250; 625; 312.5; 156.25; 78.12; 39.06; 19.53; and 9.76 µg•mL −1 ) were tested. Distilled water was used as control for each fraction tested. The minimal inhibitory concentration (MIC) was defined as the lowest concentration of the AMF, EAF and VLC fractions with no bacterial growth after incubation. To confirm the MIC, 50 µL from each well with no visible growth was inoculated on nutrient agar and incubated at 28˚C for 48 h to check for viable Xac 306 cells, observing the presence or absence of Xac 306 colonies in the Petri dishes. Cytotoxicity Assay The effects of the fractions on cell viability were analyzed using the colorimetric MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide) method (MTT-based assay kit, Sigma Chem. Co.) according to the manufacturer's instructions. Cells were grown in 96-well microplates for 48 h. After attachment to the plates, the supernatant of each well was carefully removed and replaced with 100 µL of fresh DMEM containing different concentrations of EAF, VLC2, and VLC3. All the fractions were tested at the same concentrations (50; 10; 5; 1; 0.5 and 0.05 µg•mL −1 ). The plates were incubated for 72 h at 37˚C in a 5% CO 2 chamber with a humidified atmosphere. The absorbance of each well was measured in a microplate reader at 490 nm and 630 nm. The experiment was performed with four replicates per fraction. Genotoxicity Test HEp-2 cells were grown as adherent monolayers in 25 cm 2 sterile disposable flasks using Dulbecco's Modified Eagle's Medium (DMEM F12) (Gibco BRL) supplemented with 10% fetal bovine serum (Gibco BRL) as a source of growth factors. The cultures were maintained in a BOD incubator at 37˚C. The cells were grown for two complete cell cycles (24 h) before being treated according to the protocol for each test. The alkylating agent ethyl methanesulfonate (EMS) (Acros Organics) is a well-known chemical mutagen. It was used as the positive control in the determination of genotoxicity and cytotoxicity and to cause mutations at the hgprt locus. EMS was dissolved in Ca 2+ - and Mg 2+ -free phosphate-buffered saline (PBS), pH 7.4, just before use. The final concentration of this agent in the cultures was 155 µg•mL −1 in the comet assay.
The general procedure for the genotoxicity test (comet assay) followed the method described by Speit and Hartmann [15], which is based on the original work of [16], with some modifications. Approximately 1.8 × 10 5 cells were grown in 2.5 mL culture tubes for 24 h. After incubation, the cells were washed with 5 mL PBS and treated in medium without serum at 37˚C for 3 h. At the end of treatment, the cells were washed, trypsinized and re-suspended in fresh medium. The cells were then centrifuged (900 rpm) for 5 min to remove the supernatant, leaving 300 µL for re-suspension of the pellet. Liquefied agarose, 120 µL, was then added to a 20 µL aliquot of cell suspension. The sample was applied to a microscope slide previously gelatinized and covered with a coverslip. After 20 min at 4˚C, the coverslip was removed and the slides were immersed for 60 min at 4˚C in lysis solution containing 89 mL of lysis stock solution (2.5 M NaCl, 100 mM EDTA, 10 mM Tris, ~8 g NaOH to obtain a pH of 10, 890 mL of deionized water, and 1% sodium lauryl sarcosinate), 1 mL Triton X-100 (Merck) and 10 mL DMSO. The slides were then placed in an electrophoresis chamber in a chilled bath and kept submersed in alkaline running buffer (pH > 13) (5 mL 200 mM EDTA, 30 mL 10 N NaOH and 965 mL of deionized water at 4˚C) for 20 min, to examine for single-strand breaks. The slides were electrophoresed for 20 min at 25 V and 300 mA. The slides were then covered with neutralizing solution (0.4 M Tris-HCl, pH 7.5) for 15 min at 22˚C. Finally, the slides were submersed in absolute ethanol for 10 min at 22˚C. All the procedures were carried out under low light to avoid extra DNA damage by the action of light. The material was stained with 80 µL of ethidium bromide (20 µg•mL −1 ) and examined by fluorescence microscopy using an excitation filter of 515 - 560 nm and an emission filter of 590 nm. The extent and distribution of DNA damage indicated by the comet assay were evaluated by examining at least 100 randomly selected and non-overlapping cells on the slides per treatment. These cells were scored visually according to the presence or absence of a nuclear tail. The results were evaluated by analysis of variance (ANOVA) and Tukey's test at p < 0.05, with the experimental criterion being the significance of the response in relation to the negative control in the genotoxicity assay, and in relation to the positive control. Antimicrobial Activity Test in Orange Leaves Plants of C. sinensis cv. Valencia were used in the greenhouse experiments under the following conditions: 28˚C/22˚C and 10 h/14 h day/night, respectively, and 80% relative humidity. Plants were watered three times every seven days and fertilized twice a month with Hewitt solution for non-legumes [17]. Fifteen-day-old orange leaves were treated with the EAF, VLC2, and VLC3 fractions, which had shown high antimicrobial activity against Xac 306 in the antimicrobial activity assay in 24-well tissue culture plates. All the fractions were applied at a concentration of 0.01 g•mL −1 , diluted in distilled water, with a hand-sprayer on the abaxial and adaxial leaf surfaces. The application was performed before or after leaf inoculation with Xac 306.
Plants were subjected to a moist chamber for 24 h before and after inoculation and kept in the greenhouse, in order to stimulate stomatal opening and improve the efficiency of bacterial infection and the action of the treatments. Control plants were sprayed with distilled water. The plants were watered three times a week. The number of citrus canker lesions per leaf was determined 21 days after inoculation by harvesting 18 leaves per plant. The average number of lesions was calculated by the equation: Σ number of lesions in the 18 leaves / Σ area of the 18 leaves. The treatments with the fractions were done before (pre-treatment) and after (post-treatment) the pathogen spraying, with five replications. Analysis of variance was performed on square root-transformed data and means were compared by Tukey's multiple range test at P < 0.05. Data were transformed by √(y + 0.5). Agar Well Diffusion Assay In this experiment the antimicrobial activity of all fractions obtained from the supernatant of the LN culture extracted with ethyl acetate (EAF and AMF), and of the fractions obtained from the fractionation of EAF by vacuum liquid chromatography with n-hexane (VLC1), dichloromethane (VLC2), ethyl acetate (VLC3), methanol (VLC4), water:methanol 1:1 (v/v) (VLC5), and distilled water (VLC6), was tested. Only EAF, VLC2, VLC3, and VLC4 showed antimicrobial activity against Xac 306 (Table 1). However, different effects were observed. EAF and VLC2 had the highest antimicrobial effect, with inhibition halos of 30 mm in diameter at 0.01 mg•mL −1 ; at the same concentration VLC3 presented a halo of 20 mm, and VLC4 of 10 mm. At 0.001 mg•mL −1 , EAF, VLC2 and VLC3 showed the same antimicrobial effect, and VLC4 did not have any effect (Table 1). Minimum Inhibitory Concentration In this experiment the bactericidal effect was tested only for the fractions which showed an antimicrobial effect against Xac 306. The EAF fraction showed a bactericidal effect at concentrations from 5000 down to 312.5 µg•mL −1 . For VLC2 and VLC3 the bactericidal effect was also observed at 156.25 µg•mL −1 . However, VLC4 showed a bactericidal effect only at 5000 and 2500 µg•mL −1 (Table 2). Cytotoxicity Assay The cytotoxicity of the EAF, VLC2 and VLC3 fractions was tested. In the ten-fold dilution curves the three fractions showed similar cytotoxic effects at the different concentrations evaluated (Figure 1). Similar results were obtained for CC50, which was calculated using the regression equation of each curve; the three fractions EAF, VLC2 and VLC3 showed CC50 values of approximately 36.98, 30.09 and 32.22 µg•mL −1 , respectively. Table 2. Minimum inhibitory concentration of the fraction derived from the second extraction with ethyl acetate (EAF) of the supernatant of the LN culture and of the fractions obtained from vacuum liquid chromatography extracted with dichloromethane (VLC2), ethyl acetate (VLC3), and methanol (VLC4), in two-fold dilution, on the growth of Xac 306. Genotoxicity Assay Figure 2 presents the data for genotoxicity for the concentrations 1, 0.1 and 0.05 µg•mL −1 of the fractions EAF, VLC2 and VLC3. No DNA migration was found for the EAF, VLC2 and VLC3 fractions at the concentrations tested, compared with the positive control. The total number of cells with damage in the comet assay did not differ from the negative control, except for VLC3 at the highest concentration (1 µg•mL −1 ).
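Before turning to the in planta results, the lesion-density measure and the square-root transformation described in the statistics paragraph above can be made concrete with a short sketch; the leaf counts and areas below are made-up numbers used only for illustration.

```python
import math

# Hypothetical data for one plant: lesions counted and leaf areas (cm^2) for 18 leaves.
lesions_per_leaf = [3, 0, 5, 2, 1, 0, 4, 2, 3, 1, 0, 2, 6, 1, 0, 3, 2, 1]
leaf_areas_cm2 = [28.1, 30.4, 25.7, 27.9, 31.2, 29.5, 26.8, 28.3, 30.0,
                  27.1, 29.9, 26.4, 28.8, 30.7, 25.9, 27.5, 29.1, 28.6]

# Average number of lesions = sum of lesions / sum of leaf areas.
lesion_density = sum(lesions_per_leaf) / sum(leaf_areas_cm2)

# Square-root transformation applied before the analysis of variance.
transformed = math.sqrt(lesion_density + 0.5)

print(f"lesion density = {lesion_density:.4f} lesions/cm^2")
print(f"transformed value = {transformed:.4f}")
```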
Antimicrobial Activity Test in Orange Leaves The fractions EAF, VLC2, and VLC3 were tested in the greenhouse experiments to check their capacity to reduce the area of citrus canker lesions formed on the leaves. The timing of the treatment of the leaves with the fractions, before or after inoculation with Xac 306, did not influence the inhibitory effect (data not shown). However, the leaves treated with the VLC3 fraction had 40% less lesion area when compared with the control. The fractions EAF and VLC2 did not differ significantly from the control (Figure 2). Discussion In previous studies, the effect of the secondary metabolites produced by Pseudomonas aeruginosa strain LV was tested on different phytopathogenic bacteria [8]-[10] and on other pathogenic bacteria [18] [19]. In the present study, the effect of the secondary metabolites produced by Pseudomonas sp. strain LN on the severity of citrus canker lesions caused in C. sinensis cv. Valencia by Xac 306 was evaluated. The results show the antibiosis of fractions of secondary bacterial metabolites extracted with different organic solvents against the Xac 306 population in vitro, as well as their ability to control the severity of citrus canker in planta. EAF, VLC2 and VLC3 had the highest antimicrobial efficiency for the control of Xac 306 in vitro when compared with the other fractions tested (AMF, VLC1, VLC4, VLC5 and VLC6). The highest effect was obtained from the compounds that were soluble, first of all, in ethyl acetate (EAF). When the EAF fraction was fractionated by vacuum liquid chromatography, a bactericidal effect was also observed in the fraction obtained with dichloromethane (VLC2). However, the highest bactericidal effect against Xac 306 after chromatography was still found in the ethyl acetate fraction. The concentrations of the EAF, VLC2, and VLC3 fractions considered non-cytotoxic were far below the concentrations at which bactericidal effects were found in the three assays carried out in this work. The maximum non-cytotoxic concentration observed for mammalian cells was 10 µg•mL −1 , whereas the bactericidal concentration for Xac 306 was 156.25 µg•mL −1 in vitro and, in planta, the amount at which an antimicrobial effect was observed was 10,000 µg•mL −1 . However, the treated leaves did not show any signs of toxicity, such as color changes or morphological alterations. In the comet assay, non-cytotoxic concentrations of the fractions EAF, VLC2 and VLC3 were tested to better evaluate induced DNA damage [20]. Only VLC3 at the concentration of 1 µg•mL −1 demonstrated cytotoxicity, and DNA migration could not be observed. Our results showed that these fractions, at the concentrations tested, had no genotoxic effects on HEp-2 cells. The timing of application of the fractions (before or after spraying Xac 306) did not affect the occurrence of citrus canker lesions. After one day on the leaf, Xac 306 is not fully established [21]. The fraction VLC3 significantly decreased the number of citrus canker lesions when compared with the control, showing a possibility of controlling the severity of citrus canker. However, the level of this control is not enough to consider the problem of citrus canker disease solved: a field experiment still needs to be carried out to verify whether the effects observed under these greenhouse conditions can be reproduced, and that is the challenge for the future.
Conclusions In conclusion, the EAF, VLC2, VLC3 and VLC4 fractions had an antagonistic effect against Xac 306 in vitro. EAF, VLC2 and VLC3 demonstrated no cytotoxic effect on HEp-2 cells at a concentration of 10 µg•mL −1 . VLC3 had significant results in the control of foliar citrus canker lesions caused in C. sinensis cv. Valencia under greenhouse conditions. On the other hand, further studies need to be carried out to determine the best application conditions to obtain high efficiency of the bactericidal effect of the VLC3 fraction (also including EAF and VLC3), as well as to identify and purify the secondary metabolites extracted from the LN supernatant. These determinations are very important to evaluate the possible impact of these metabolites on the environment, given that the biological nature of secondary metabolites does not necessarily assure that they are not hazardous to the environment [22]. Figure 1. Correlation between the rate of cell mortality and fraction concentrations. (a) [EAF] Fraction from the second extraction with ethyl acetate of the supernatant of the bacterial culture; (b) [VLC2] Fraction derived from the vacuum liquid chromatography extracted with dichloromethane; (c) [VLC3] Fraction derived from the vacuum liquid chromatography extracted with ethyl acetate. Figure 2. Control of citrus canker lesion formation by Xac 306 on leaves of orange trees (C. sinensis cv. Valencia) by the fraction derived from the second extraction with ethyl acetate (EAF) of the supernatant of the LN culture and the fractions obtained from vacuum liquid chromatography extracted with dichloromethane (VLC2) and ethyl acetate (VLC3), at a concentration of 10 mg•mL −1 . Values are the means of 5 replicates. Means for each treatment with the same letter are not significantly different by Tukey's test (P ≤ 0.05). Table 1. Effects of the fractions derived from the supernatant of the LN culture extracted with methanol (AMF) and ethyl acetate (EAF), and of the fractions obtained from vacuum liquid chromatography extracted with n-hexane (VLC1), dichloromethane (VLC2), ethyl acetate (VLC3), methanol (VLC4), water:methanol 1:1 (v/v) (VLC5) and distilled water (VLC6), in ten-fold dilution, on the growth of Xac 306 on nutrient agar in Petri dishes.
v3-fos-license
2019-01-11T13:30:49.627Z
2013-08-23T00:00:00.000
128978737
{ "extfieldsofstudy": [ "Geology" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=36396", "pdf_hash": "14569e785f408a663fdc71bcefce105933547deb", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41995", "s2fieldsofstudy": [ "Geology" ], "sha1": "14569e785f408a663fdc71bcefce105933547deb", "year": 2013 }
pes2o/s2orc
Spectral Comparison of Real Spectra with Site Effects Included vs MOC-2008 Theoretical Spectra for Guadalajara City, Mexico Guadalajara city is the capital of the Mexican federal state of Jalisco. It is located close to the Pacific coast and is subject to a large seismic risk. We present a seismic response study for several sites across the city. We calculated shear-wave response spectra by using seismic records of actual earthquakes registered on rock near the source as Green functions and propagating them through flat, horizontal layered media using a linear equivalent method, in order to compare them with the response spectra calculated according to the Federal Commission of Electricity (CFE) seismic design manual for buildings (MOC-2008, 2008), which is widely used as a reference for construction projects in Mexico. Our results show that the MOC-2008 manual underestimates the spectral amplitudes and the frequency band of the response spectra. Introduction It is well known that seismic damage distribution is strongly influenced by the physical and dynamic properties of the soil. The cyclic load response capacity to earthquakes, for example, depends on these properties. Seismic response evaluation is one of the most important problems to solve in seismic engineering. The analysis of the seismic response of the soil is used to predict the seismic motion at the surface, which in turn is essential to obtain the design spectra used to evaluate the risk. Evaluation is especially important in places with potential liquefaction risk. Design spectra are a tool that allows us to evaluate the forces to which structures will be subjected, depending on their own dynamic characteristics. Guadalajara occupies the second place in Mexico in population and economic growth. In recent years, more modern and larger civil structures have appeared throughout the city (Figure 1). The Federal Commission of Electricity (CFE) seismic design manual for buildings [1] has been used as a quasi-official reference for construction since its appearance in 1993. An update of the building regulations, based on this type of study, is required: the recurrence cycle of earthquakes for this region of México is about 80 to 100 years and the last devastating earthquake occurred in June 1932 [2], so it becomes clear that this type of study should be the basis for a review of the building regulations. In this article we estimate the seismic response at eight sites distributed within the urban zone of Guadalajara city using records from real earthquakes that occurred along the subduction zone, assuming one-dimensional propagation, in order to test the current regulation. The seismic records and earthquakes used are shown in Table 1. Earthquakes and Seismic Records Site effects due to geological conditions are one of the main factors that contribute to damage distribution during earthquakes. Subsoil impedance contrasts can significantly amplify the shaking level, as well as increase the duration of strong ground motion. The first action to take to prevent damage is to know the site effect distribution. Subsoil dynamic properties allow us to predict the site response when it is subjected to earthquakes or dynamic loads.
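As a rough illustration of how an impedance contrast amplifies vertically incident SH waves, the classic single-layer-over-elastic-bedrock transfer function can be sketched as below. The layer thickness, velocities and densities are made-up values, not the properties of any of the Guadalajara sites, and the sketch ignores damping; the study itself uses the multi-layer equivalent-linear approach described in the following sections.

```python
import numpy as np

def sh_amplification(freq_hz, h_m, vs1, rho1, vs2, rho2):
    """|Surface / rock-outcrop| amplification for a single elastic soil layer
    (thickness h_m, shear velocity vs1, density rho1) over elastic bedrock
    (vs2, rho2), for vertically incident SH waves and no damping."""
    k_h = 2.0 * np.pi * freq_hz * h_m / vs1          # dimensionless frequency
    alpha = (rho1 * vs1) / (rho2 * vs2)              # soil/rock impedance ratio
    return 1.0 / np.sqrt(np.cos(k_h) ** 2 + (alpha * np.sin(k_h)) ** 2)

freqs = np.linspace(0.1, 10.0, 500)
amp = sh_amplification(freqs, h_m=30.0, vs1=250.0, rho1=1800.0,
                       vs2=1000.0, rho2=2300.0)      # hypothetical soil/rock properties
print(f"peak amplification ~ {amp.max():.1f} at {freqs[amp.argmax()]:.2f} Hz")
```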
In order to estimate the seismic response we chose earthquakes occurred along the subduction zone and seismic records (accelerations) from the closest stations installed over hard rock in order to prevent site effect to be used as Green Functions.We selected seismic stations far enough to assure vertical incidence at Guadalajara sites. We selected three earthquakes: Tecomán, 2003 (Mw 7.5) recorded at Manzanillo station 65 km northwest from the epicenter, Colima, 1995 (Mw 7.3) recorded at 1. PGA column refers to maximum acceleration on the station.Seismic records used as Green functions are shown in Figure 3.As the interest is on SH waves propagation, we used only the horizontal components. Sites of Study We selected eight sites distributed within the urban zone of Guadalajara city (Figure 4) where Lazcano Diaz [3] performed a site characterization which included thicknesses, density, shear-wave velocity (Vs) among other parameters. Table 2 shows some basic parameters at each site: location, shear-wave velocity for the first 30 m (Vs30), dominant period from Federal commission of electricity CFE [1] calculated with PRODISIS [4] using the velocity profile, dominant period from Lazcano Diaz [3] and NEHRP [5] soil classification.In all cases, classification corresponds to rigid soil with shear velocities between 180 to 360 m/s. As it is important to understand shear-wave propagation obtained in this study, we reproduce shear-wave velocity profiles from Lazcano Diaz [3] in Figure 5. Wave One-Dimensional Propagation. The Linear Equivalent Method We propagated through a one-dimension layered media SH waves computed from the horizontal components of or a damping of rs of any project from a probabilistic po methodology (LEM) [6].Soil shear modulus depends on the deformation resistance of the soil.Damping ratio is associated with dissipative behavior of the soil to cyclic inputs.In a linear equivalent scheme, non-linear behavior of shear modulus and damping ratio are represented by curves as those shown in Figure 6 proposed by many authors ( [7,8], among others). We used ProShake [9] soft ave propagation in order to add site effect to the original record assuming:  Harmonic waves: Spectra (5%) and Design Spectra MOC-2008 ng the seismic recor media, we built seismic records with site effect included for each site.We then used them to calculate, the response spectra using Degtra program [10]. Response spectra are usually estimated f 5% because it is representative of the observed damping of reinforced concrete and structural steel.Figures 7 and 8 show the response spectra for a damping of 5% for each site.We show the spectra for both horizontal components using each real record as well as the mean plus standard deviation. Seismic paramete int of view (the deterministic way is no longer used), requires certain knowledge of seismic activity of the area.The probability of occurrence of a seismic event greater than a reference is given by where "P" is the probability that an "X earthquake" in building codes. 
" event is less than an x event previously defined in a time t.This probability in terms of the project can be defined considering the probability of the usual buildings which is of 10% in 50 years.It is associated with the existence of an earthquake which occurs every 450 years and it is named "rare For the same sites, we calculated the response spectra for each site according to MOC-2008 [1] procedure which is based on probabilistic criteria.Figures 9 and 10, shows mean plus standard deviation response spectra for both horizontal components calculated in this study and the responses for a rock site, for edge state of service and for collapse according to MOC-2008 [1] procedure.We observe the higher amplitudes at sites Av.Patria fortunately we do not have records to analyze in the soud namericana, Torrena, J. del Bosque and Gran Plaza where of around 3.12 g and the lower amplitudes of around 1.2 g at the sites located in downtown, Biblioteca and Rotonda.Amplitudes are higher for response spectra using seismic records and frequency band is shorter for response spectra from MOC-2008 [1].The flat zone in the last case is from about 0.1 to 1.6 sec while significant amplitudes from seismic records extend up to 1.6 seconds. Co Our results show th zone of Guadalajar fication of the seismic waves despite NERPH soil classification which corresponds to rigid soil with shear velocities between 180 to 360 m/s. It is obvious to conclude that the actual regulation must not be applied indiscriminately to every site where there is a construction project but it is necessary to make studies of this kind to characterize the site effect at each site.While MOC-2008 manual i odels, a comparison with real records is needed.This kind of studies should be complemented with other studies such as seismic microzonation. This research was partially supported by Institutional Copyright © 2013 SciRes.OJCE Figure 1 . Figure 1.Large buildings appear each year in Guadalajara city. Figure 2 . Figure 2. Location of Guadalajara city and earthquakes used in this study. Figure 4 . Figure 4. Location of sites selected for this study (triangles). dotted line represents spectra from the Oaxaca 1999 record.Green line is the mean plus standard deviation.(a) Biblioteca site; (b) Rotonda site; (c) J. del bosque site; (d) Torrena site. Figure 8 . Figure 8. Response spectra for 5% damping for the eight sites using real records propagating through a layered media.Continuous line represents spectra using the Colima 2003 record, continuous line with dots, the spectra using Colima 1995 record, Figure 9 . Figure 9. Response spectra calculated according to MOC-2008 procedure vs response spectra using seismic records.The different curves represent mean plus standard deviation response spectra for both components, response for a rock site, for edge state of service and for collapse.(a) Biblioteca site; (b) Rotonda site, (c) J. del bosque site; (d) Torrena site. Figure 10 . Figure 10.Response spectra calculated according to MOC-2008 procedure vs response spectra using seismic records.The different curves represent mean plus standard deviation response spectra for both components, respon for a rock site, for edge state of service and for collapse.(a) Gran Plaza site; (b) Eulogio Parra site; (c) Av.Patria site; (d) U ite. Table 1 . Main features of earthquakes used in this study. 
Table 2. Main features of the sites used in this study.
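For readers who want to reproduce the kind of spectra discussed above, the sketch below shows one standard way to compute a 5%-damped pseudo-acceleration response spectrum from a single accelerogram, by integrating a unit-mass single-degree-of-freedom oscillator with the Newmark average-acceleration scheme. It is only an illustration of the operation that tools such as Degtra [10] perform on the records with site effect included (the ProShake [9] propagation step is assumed to have been applied already); the function name, the integration scheme and the array conventions are our assumptions, not part of the original study.

```python
import numpy as np

def response_spectrum(ag, dt, periods, zeta=0.05):
    # Pseudo-acceleration response spectrum of one ground-acceleration record.
    #   ag      : 1-D array with the accelerogram (site effect already included)
    #   dt      : time step of the record, in seconds
    #   periods : oscillator periods at which the spectrum is evaluated
    #   zeta    : damping ratio (5% is the usual value for RC and steel)
    gamma, beta = 0.5, 0.25                  # Newmark average-acceleration scheme
    psa = np.empty(len(periods))
    for j, T in enumerate(periods):
        wn = 2.0 * np.pi / T                 # natural circular frequency
        k, c = wn ** 2, 2.0 * zeta * wn      # stiffness and damping for unit mass
        keff = k + gamma * c / (beta * dt) + 1.0 / (beta * dt ** 2)
        u = v = 0.0
        a = -ag[0]                           # relative acceleration at t = 0
        umax = 0.0
        for p in ag[1:]:
            peff = (-p
                    + u / (beta * dt ** 2) + v / (beta * dt) + (0.5 / beta - 1.0) * a
                    + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                           + dt * (gamma / (2.0 * beta) - 1.0) * a))
            u_new = peff / keff
            a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
            v += dt * ((1.0 - gamma) * a + gamma * a_new)
            u, a = u_new, a_new
            umax = max(umax, abs(u))
        psa[j] = wn ** 2 * umax              # pseudo-spectral acceleration ordinate
    return psa

# Example: ordinates comparable to a design spectrum, for a record sampled at 100 Hz.
# sa = response_spectrum(record, dt=0.01, periods=np.arange(0.05, 4.05, 0.05))
```

Ordinates computed this way for both horizontal components, averaged and augmented by one standard deviation, are the quantities that the figures above compare against the MOC-2008 design spectra.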
v3-fos-license
2018-04-03T04:45:44.921Z
2015-09-11T00:00:00.000
5480770
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.5005/jp-journals-10005-1321", "pdf_hash": "e567b2e65627008e1674b657ae998a482757a443", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:41996", "s2fieldsofstudy": [ "Medicine" ], "sha1": "e567b2e65627008e1674b657ae998a482757a443", "year": 2015 }
pes2o/s2orc
Oral Manifestations and Dental Management of Epidermolysis Bullosa Simplex ABSTRACT Epidermolysis bullosa (EB) is a group of hereditary chronic disorders characterized by fragility of the skin and mucous membranes in response to minor mechanical trauma. The objective of this study was to report the case of a young girl diagnosed with epidermolysis bullosa simplex (EBS), transmitted by an autosomal dominant gene. Cutaneous findings included blisters and dystrophy following minimal friction. Recurrent blisters and vesicle formation on the hard palate were the main oral findings. In conclusion, publications concerning the oral and clinical manifestations of EBS are important for providing knowledge and an early multidisciplinary approach that prevents blister formation and improves these patients' quality of life, with the dentist playing an important role in oral health management. How to cite this article: Scheidt L, Sanabe ME, Diniz MB. Oral Manifestations and Dental Management of Epidermolysis Bullosa Simplex. Int J Clin Pediatr Dent 2015;8(3):239-241. INTRODUCTION Epidermolysis bullosa (EB) is a heterogeneous group of hereditary disorders characterized by extreme fragility of the skin and mucous membranes, which gives rise to the formation of blisters following minor trauma. 1 This dermatological condition is a severe autoimmune disease. 2,3 There are four major types of EB that differ phenotypically and genotypically: simplex (EBS), junctional (JEB), dystrophic (DEB) and Kindler's syndrome. 4 Transmission electron microscopy (TEM) is considered the ideal method for diagnosing this pathology. 4 The most prevalent type of EB is EBS, which mostly involves the feet, hands and neck. Histological analysis reveals that its cleavage level is above the basement membrane. 5 Local pain is the most common symptom, and avoiding friction will prevent lesions. 6 The maintenance of skin integrity is a serious challenge for dental practice. 7 Therefore, the aim of this study was to report the case of a girl with EBS, describing the clinical features and the precautions that help improve the patient's quality of life, particularly in relation to dental treatment. CASE REPORT A 10-year-old girl began pediatric dental treatment in 2005 and continued to attend monthly appointments. Her mother authorized the use of her case file for the purposes of scientific studies and signed a term of free, informed consent. The EBS case was diagnosed by a pediatrician soon after birth. Scars on the feet and blisters on the hands showed the need for a precise diagnosis. Therefore, TEM confirmed the autosomal dominant gene through paternal inheritance. The girl's diet necessarily includes only soft foods. Oral hygiene has always been performed carefully with an extra-soft rubber toothbrush and fluoride dentifrice. Intraoral examination showed mixed dentition (Fig. 1), and the hard palate showed numerous vesicles (Fig. 2), but the tongue presented normal characteristics (Fig. 3). Radiographs were not requested because these lesions affect the skin. The oral manifestations have remained the same since she began dental treatment. Her hands were dystrophic (Fig. 4) and usually protected by gloves to avoid any impact. Her right hand showed a blister that she had just perforated (Fig. 5). The purpose of the dental appointments is to control and prevent caries. The use of an aloe vera tooth gel (bright sparkling, forever living products, Scottsdale, Arizona, USA) at home was suggested to soothe the burning feeling affecting the gums.
A mouthwash was also prescribed (Biotene, GlaxoSmithKline, USA) to fortify bioactive enzymes and help the salivary immune system protect the mucosal surfaces. 8 A diagnosis of EB required monthly dental appointments to maintain a high standard of personal oral hygiene. The recurrent blister lesions continue to develop, mostly on the hard palate, but she never had any systemic complications related to EBS. DISCUSSION Epidermolysis bullosa is a challenge to health professionals because there is no definitive cure. Skin care attempts to minimize the severity of blister lesions due to the pain, risk of infection and dissatisfaction with appearance. 2 Epidermolysis bullosa is a prime example of a dermatological condition that has a profound psychological impact across all aspects of health. 9 Depression and shame are very common as a result of the appearance. 10 The patient described in this study is shy. All major types of EB are characterized by blisters following mild mechanical trauma. Many patients with EB can present systemic complications, such as ocular, genital and oropharyngeal infections, involving difficulty in swallowing. 12 The patient described in this study was diagnosed early and has not developed any complications or disturbances in swallowing, which is in agreement with Fortuna et al. 11 Epidermolysis bullosa patients require special precautions during dental treatment because of the greater probability of lesioning the soft tissue when handling cutting instruments close to the skin and oral mucosa. 5 Cariogenic food, limited mouth opening caused by wounds and poor oral hygiene caused by pain are predisposing factors to dental caries. 12 In this case, minimal intervention has so far preserved the oral cavity, and monthly topical fluoride application helped to control dental caries. The patient maintains continuous contact with the health team to avoid complex treatments. Numerous alternative therapies are used as first aid treatment for blisters. The application of aloe vera gel (bright sparkling, forever living products, Scottsdale, Arizona, USA) diminishes the subdermal temperature, providing a refreshed sensation, reducing the healing period and promoting antimicrobial activity. 13 The decrease in blister formation due to oral moisturizing and saliva stimulation is the reason Biotene mouthwash (GlaxoSmithKline, USA) was prescribed. This product possesses buffering capacity, an immunological effect, antimicrobial activity and a self-cleaning effect. 8 Epidermolysis bullosa treatment is generally focused on support. Perforating the blisters contributes to accelerating the healing process and prevents continued lateral spread of the blisters. Currently, researchers are focusing their attention on gene and cell therapy, recombinant protein infusions, intradermal injections of allogenic fibroblasts and stem cell transplantation. Other developing therapies are directed toward the enhancement of wound healing and a better quality of life for EB patients. 14 A multidisciplinary approach involving the following health professionals is essential: nutritionist, pediatrician, dermatologist, plastic surgeon, hematologist, gastroenterologist, ophthalmologist, cardiologist, pediatric dentist, nurse and occupational therapist. The girl comes to the dental office every month to maintain her oral health. She attends dermatological reevaluations sporadically and, once a year, returns to her pediatrician for control exams.
This girl has gotten used to soft food and to avoiding certain physical activities that can hurt her. She believes she has good quality of life, but is always careful not to cause the formation of more blisters. CONCLUSION This case emphasizes that patients with EBS need special precautions during dental treatment because of the greater probability of blister formation. Moreover, those patients require an early multidisciplinary approach to improve their quality of life, with the dentist playing an important role in oral health management.
v3-fos-license
2019-03-08T14:19:04.865Z
2019-02-15T00:00:00.000
67857470
{ "extfieldsofstudy": [ "Medicine", "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0212361&type=printable", "pdf_hash": "df60af8b62ba5766d88a395a5252b8d850508ce5", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42001", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "df60af8b62ba5766d88a395a5252b8d850508ce5", "year": 2019 }
pes2o/s2orc
Support vector machine with quantile hyper-spheres for pattern classification This paper formulates a support vector machine with quantile hyper-spheres (QHSVM) for pattern classification. The idea of QHSVM is to build two quantile hyper-spheres with the same center for positive or negative training samples. Every quantile hyper-sphere is constructed by using pinball loss instead of hinge loss, which makes the new classification model be insensitive to noise, especially the feature noise around the decision boundary. Moreover, the robustness and generalization of QHSVM are strengthened through maximizing the margin between two quantile hyper-spheres, maximizing the inner-class clustering of samples and optimizing the independent quadratic programming for a target class. Besides that, this paper proposes a novel local center-based density estimation method. Based on it, ρ-QHSVM with surrounding and clustering samples is given. Under the premise of high accuracy, the execution speed of ρ-QHSVM can be adjusted. The experimental results in artificial, benchmark and strip steel surface defects datasets show that the QHSVM model has distinct advantages in accuracy and the ρ-QHSVM model is fit for large-scale datasets. Introduction Support vector machine (SVM) [1] proposed by Vapnik and his cooperators has become an excellent tool for machine learning. SVM is a comprehensive technology by integrating the margin maximization principle, kernel skill and dual method. It has perfect statistical theory, which makes SVM be widely applied in many fields [2][3][4]. In spite of that, great efforts are needed to improve SVM. So, SVMs with different attributes have been proposed, such as least squares SVM (LS-SVM) [5], proximal SVM (PSVM) [6], v-SVM [7], fuzzy SVM (FSVM) [8] and pinball loss SVM (Pin-SVM) [9]. In 2007, Jayadeva et al. proposed a twin support vector machine (TWSVM) [10] for pattern classification. TWSVM is derived from generalized eigenvalue proximal SVM (GEPSVM) [11]. GEPSVM and the other multi-surface classifiers [12][13] are used to solve the XOR problems and reduce the computing time of SVM. Similarly, the TWSVM classifier determines two nonparallel separating hyper-planes by solving two quadratic programming problems (QPPs) PLOS with smaller size. TWSVM has advantages in classification speed and generalization, which makes TWSVM become a new popular tool for machine learning. Based on TWSVM, some extended TWSVMs have been proposed, such as least squares TWSVM (LS-TSVM) [14], twin bounded SVM (TBSVM) [15], twin parametric-margin SVM (TPMSVM) [16], Laplacian TWSVM (LTWSVM) [17] and weighted TWSVM with local information (WLTSVM) [18]. Support vector data description (SVDD) [19] inspired by support vector classifier is a oneclass learning tool. SVDD implements the minimum volume description by building a hypersphere for target samples. When negative samples can be used, [19] provided a new SVDD with negative examples (SVDD_neg). SVDD_neg merges negative samples into training dataset to improve the description of hyper-sphere with the minimum volume. Different versions of classifiers have been extended from SVDD because the inner-class of samples can be gathered to the greatest extent. 
These classifiers include maximal-margin spherical-structured multi-class SVM (MSM-SVM) [20], twin support vector hyper-sphere (TSVH) [21], twinhypersphere support vector machine (THSVM) [22], maximum margin and minimum volume hyper-spheres machine with pinball loss (Pin-M 3 HM) [23] and least squares twin support vector hyper-sphere (LS-TSVH) [24]. A main challenge for all versions of SVM is to avoid the adverse impact of noise. As mentioned in [9], classification problems may have label noise and feature noise. So, anti-noise versions of SVM have been proposed. [13] proposed L1-norm twin projection support vector machine. In [13], L1-norm is shown to be robust to noise and outliers in data. [25] overcame noise impact on LS-SVM with weight varying. [26] adopted a robust optimization method in SVM to deal with uncertain noise. [27] built a total margin SVM with separating hyperplane which is insensitive to noise. [8] built a fuzzy SVM by applying a fuzzy member into each input sample. Fuzzy SVM can restrain the adverse effect brought by noise. These versions of SVMs have achieved some success in avoiding the adverse impact of noise, but they are not good at dealing with the feature noise around the decision boundary. In 2014, Huang et al. [9] designed a novel Pin-SVM by introducing pinball loss. Pin-SVM uses pinball loss to replace hinge loss, which makes Pin-SVM not only maintain the good property of SVM, but also be less sensitive to noise, especially the feature noise around the decision boundary. As such, the pinball loss has been successively introduced into different versions of SVM in [23], [28] and [29]. In this paper, a novel support vector machine with quantile hyper-spheres (QHSVM) for pattern classification is proposed. It inherits the excellent genes of SVDD_neg, TWSVM and Pin-SVM. QHSVM has the following attributes and advantages. a. QHSVM adopts pinball losses instead of hinge losses. The hinge losses with maximizing the shortest distance between two classes of samples are sensitive to noise. The pinball losses adopt quantile distance to replace the shortest distance. The quantile distance depending on many samples reduces the sensitivity to noise, especially the feature noise around the decision boundary. So, QHSVM improves the anti-noise ability of hyper-spheres by using the pinball losses. b. QHSVM searches for two quantile hyper-spheres with the same center for positive or negative samples. On the premise of using pinball losses, the volume of one quantile hypersphere is required to be as small as possible, while that of the other one is required to be as big as possible. Moreover, QHSVM requires the target samples to be close to the same center of two hyper-spheres as much as possible. These attributes ensure that the margin maximization principle and the inner-class clustering maximization of samples are implemented. c. QHSVM has a QPP for positive or negative samples. The QPP makes one class as a target class and makes the other class as a negative class. QHSVM explores the potential information of target samples to the greatest extent. And the negative samples are used to improve the description of hyper-sphere. These attributes improve the generalization of QHSVM. d. In order to meet the classification requirement of high efficiency, a new local center-based density estimation method is proposed. And QHSVM with surrounding and clustering samples (ρ-QHSVM) is given. 
The local center-based density estimation method can appropriately split training samples into surrounding samples and clustering samples. The hyper-spheres of ρ-QHSVM will be described by sparse surrounding samples, while the center of hyper-spheres will be clustered by clustering samples. In [23], Pin-M 3 HM also has the genes of THSVM and Pin-SVM. It seems that our QHSVM is similar to Pin-M 3 HM. In fact, our QHSVM is different from Pin-M 3 HM in the above attributes (b), (c) and (d). Furthermore, our QHSVM formulates two QPPs with the same structures, but Pin-M 3 HM has two QPPs with different structures. This paper is organized as follows. Section 2 reviews related work. Section 3 proposes the model of QHSVM and the local center-based density estimation method. Section 4 solves the new QHSVM and ρ-QHSVM. Section 5 deals with experimental results and Section 6 contains concluding remarks. Support vector machines with hinge loss and pinball loss For binary classification, the hinge loss is widely used. The hinge loss proposed in [1] brings popular standard SVM classifier. Suppose a training dataset T r = {(X 1 ,y 1 ),(X 2 ,y 2 ),� � �,(X m ,y m )}, where X i 2 < d�1 and y i 2{1,−1}. Standard SVM searches for an optimal separating hyperplane w T φ(x)+b = 0 by convex optimization, where w 2 < d�1 , b 2 < and φ(�) is a nonlinear feature mapping function. Its corresponding optimization problem can be described as follows: where c is a trade-off parameter. The hinge loss (L h ) i is given by (1), the final QPP of SVM can be obtained: QPP (3) of SVM searches for two support hyper-planes w T φ(x)+b = ±1 by maximizing the shortest distance between two classes of samples. The support hyper-planes belong to boundary hyper-planes. So, SVM is sensitive to noise. In 2014, Huang et al. [9] proposed a Pin-SVM classifier by introducing the pinball loss into standard SVM. Pin-SVM has the good property of standard SVM and is insensitive to noise, especially the feature noise around the decision boundary. The pinball loss in [9] is just like the following: ( where τ is an adjusting parameter. Replacing (L h ) i in (1) with (L τ ) i , the QPP of Pin-SVM can be obtained: Pin-SVM is insensitive to noise because the pinball loss is correlated with quantiles [30][31]. The pinball loss in (5) changes the idea of (3) into maximizing the quantile distance. Specially, when τ!0, Pin-SVM reduces to SVM. And the decision functions of QPPs (3) and (5) can be determined by using Lagrangian function, Karush-Kuhn-Tucker (KKT) condition and kernel function. Their formulas can be found in [1] and [9]. Twin support vector machine TWSVM determines two nonparallel hyperplanes by optimization two QPPs, which is different from standard SVM. Each QPP of TWSVM is very much in line with standard SVM. Its size is smaller than single QPP of SVM. So, TWSVM is comparable with SVM in classification accuracy and has higher efficiency. Moreover, TWSVM is excellent at dealing with the dataset with cross planes. Support vector data description with negative examples SVDD is an efficient method to solve a one-class data description problem. It builds a hypersphere to cover one class of target samples by the description of the minimum volume. The hyper-sphere embodies the inner-class clustering maximization of samples. Based on SVDD, SVDD_neg adds negative samples. When negative samples can be used, they can improve the hyper-sphere description of target samples. 
The QPP of SVDD_neg can be given by (9), where R and C are the radius and center of the hyper-sphere, respectively. QPP (9) requires the target samples to be inside the hyper-sphere and the negative samples to be outside it. On one hand, this requirement ensures that the hyper-sphere describes a closed boundary around the target samples well. On the other hand, it can be used to distinguish the target samples from the negative samples. Inspired by SVDD_neg, some classifiers with hyper-spheres have been proposed in [20][21][22][23][24]. Pinball losses for quantile hyper-spheres The idea of QHSVM is similar to SVDD_neg in building a hyper-sphere. However, it needs to build two hyper-spheres with the same center for the target samples, which is different from SVDD_neg. We first consider a support vector machine with boundary hyper-spheres (BHSVM). BHSVM has two boundary hyper-spheres with the same center for the target samples, which are shown in Fig 1(A). For binary classification, X_i^+ is first considered as a target sample, so X_j^- is considered as a negative sample. These two boundary hyper-spheres must satisfy the inequality constraints (10) and (11), where R^+ is the radius of the boundary hyper-sphere covering the target samples and R̂^+ is the radius of the other boundary hyper-sphere. C^+ is the common center of the two hyper-spheres, and the negative samples lie outside the hyper-sphere with radius R̂^+. ξ_i^+ and ξ̂_j^+ are the corresponding slack variables. Moreover, BHSVM requires min R^+ and max R̂^+. So, a BHSVM satisfying (10) and (11) maximizes the shortest distance between the two classes of samples. Hinge losses are adopted in (10) and (11) and can be written as (12); they are shown in Fig 1(C). It is known that hinge losses are sensitive to noise [9]. In order to reduce the adverse effect brought by noise, QHSVM is generated by introducing pinball losses into BHSVM. In this respect, QHSVM inherits the ideas of Pin-SVM. The pinball losses for the quantile hyper-spheres can be expressed as in (13) and (14); the pinball losses (14) are shown in Fig 1(D). If the hinge losses in (10) and (11) are replaced by (14), then two inequality constraints with pinball losses, (15) and (16), can be obtained. Under the constraints (15) and (16), the hyper-spheres of QHSVM are insensitive to noise because they are quantile hyper-spheres. The quantile hyper-spheres are shown in Fig 1(B). Maximizing the quantile distance instead of the shortest distance is implemented. Compared with (10), (15) requires that some samples be distributed outside of the hyper-sphere, which can be controlled with the parameter τ. That is to say, maximizing the quantile distance of QHSVM depends on a number of samples. So, QHSVM is insensitive to noise, especially the feature noise around the decision boundary. When τ→0, (15) becomes (10). For (16), a similar conclusion can be drawn. For binary classification, the other case is that X_j^- is the target sample and X_i^+ is the negative sample. Similarly, the corresponding pinball losses can be obtained, and the inequality constraints with pinball losses can be expressed as (19) and (20), where C^- is the center of the two quantile hyper-spheres, R^- and R̂^- are their radii, and ξ_j^- and ξ̂_i^- are the corresponding slack variables. Primal formulation and analysis For binary classification, consider two datasets X^+ = {X_i^+ | i = 1, 2, ⋯, m^+} and X^- = {X_j^- | j = 1, 2, ⋯, m^-}; a small numerical illustration of the hinge and pinball losses used in the constraints above is sketched below, before the two QPPs are formulated.
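The display equations (12)-(18) did not survive extraction in this copy, so the following minimal sketch only illustrates the two scalar losses being contrasted, in the form used by Pin-SVM [9]: the hinge loss max(0, u) and the pinball loss that also charges a fraction τ of the "violation" on the other side, so that τ → 0 recovers the hinge loss. The function names and test values are ours; in the hyper-sphere constraints, u stands for the signed violation of the corresponding radius rather than a classification margin.

```python
import numpy as np

def hinge_loss(u):
    # L_h(u) = max(0, u): only violations (u > 0) are penalised.
    return np.maximum(0.0, u)

def pinball_loss(u, tau):
    # L_tau(u) = u for u >= 0 and -tau * u for u < 0: samples on the "safe"
    # side also contribute, which is what turns the boundary hyper-spheres
    # into quantile hyper-spheres; tau -> 0 reduces this to the hinge loss.
    return np.where(u >= 0.0, u, -tau * u)

u = np.linspace(-2.0, 2.0, 9)
print(hinge_loss(u))             # [0, 0, 0, 0, 0, 0.5, 1, 1.5, 2]
print(pinball_loss(u, tau=0.5))  # [1, 0.75, 0.5, 0.25, 0, 0.5, 1, 1.5, 2]
```

Because the pinball penalty is active on both sides, the optimal hyper-sphere depends on many samples rather than only the boundary ones, which is the source of the noise insensitivity described above.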
Next, we formulate two QPPs with the inequality constraints (15), (16), (19) and (20): whereX þ and � X þ represent two datasets in class +1.X À and � X À represent two datasets in class -1. The numbers of samples in four datasets arem þ ,m À , � m þ and � m À respectively. For QHSVM, these datasets are specified as X + or X − . So, for QHSVM, QPPs (21) and (22) need to satisfy the following condition: For QPP (21) with the condition (23), X þ i is the target sample, and X À j is the negative sample. QPP (21) searches for two quantile hyper-spheres: O + andÔ þ . Their radii are R + andR þ . And the two hyper-spheres have the same center C + . The first term of objective function in QPP (21) minimizes (R + ) 2 , which tends to keep the volume of O + as small as possible. The second term maximizes ðR þ Þ 2 , which is to force the volume ofÔ þ as big as possible. On the other hand, minimizing (R + ) 2 and maximizing ðR þ Þ 2 mean to keep the margin between O + andÔ þ as big as possible, which embodies the margin maximization principle. The first and the second constraint conditions in QPP (21) make O + be a quantile hyper-sphere controlled by τ instead of boundary hyper-sphere because some target samples fall outside of O + . The third and the fourth constraint conditions in QPP (21) also makeÔ þ be a quantile hyper-sphere controlled by τ because some negative samples fall inside ofÔ þ . These constraints make the maximum margin depend on many samples instead of few samples, which ensures QPP (21) is insensitive to noise, especially the feature noise around the decision boundary. The third and the fourth terms of objective function in QPP (21) are to minimize the sum of slack variables caused by some samples not satisfying the constraint conditions. The fifth term and constraint condition require the target samples to be distributed in the center of O + as much as possible. In other words, the center of O + is close to the cluster of target samples. This means our QHSVM exploits the prior structural information of target samples. Our QHSVM should be not sensitive to the structure of the data distribution. So, the term ensures that the inner-class clustering of samples is maximized. The last constraint condition ensures the radius of O + is not smaller than that ofÔ þ . c þ 2 , c þ 3 and v + are trade-off parameters. For QPP (22) with the condition (23), X À j is the target sample, and X þ i is the negative sample. QPP (22) is similar to QPP (21) in attribute and conclusion. So, it is not necessary to analyze again. Similar to TWSVM, QHSVM builds two support hyper-spheres for binary classification. For QPP (21) with the condition (23), O + with parameters C + and R + is referred to as the support hyper-sphere of the target sample X þ i . The negative sample X À j is only used to improve the description of O + . O + is described by using the margin maximization principle and inner-class clustering maximization of samples. It is insensitive to noise. X À j is only used to implement the margin maximization principle. Similarly, for QPP (22) with the condition (23), O − with parameters C − and R − is reckoned as the support hyper-sphere of X À j . X þ i is only used to improve the description of O − . All mentioned above is helpful to improve the generalization of QHSVM. For binary QHSVM, the following decision function can be obtained. 
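The display equation for this decision function is not recoverable from this copy. By analogy with other hyper-sphere twin classifiers such as THSVM, one plausible rule assigns a test point to the class whose support hyper-sphere it fits best, comparing the kernel-space distance to each centre against the corresponding radius; the sketch below illustrates only that assumption and should not be read as the paper's exact rule. The centre C = Σ_i β_i φ(x_i) is handled with the kernel trick, and all names are ours.

```python
import numpy as np

def kernel_dist2(k_xx, k_xv, beta, K_vv):
    # Squared feature-space distance ||phi(x) - C||^2 for a centre
    # C = sum_i beta_i * phi(x_i), using only kernel evaluations:
    #   k_xx : k(x, x)
    #   k_xv : vector of k(x, x_i) over the expansion points x_i
    #   K_vv : Gram matrix k(x_i, x_j) of the expansion points
    return k_xx - 2.0 * k_xv @ beta + beta @ K_vv @ beta

def qhsvm_decide(d2_pos, d2_neg, r2_pos, r2_neg):
    # Hypothetical rule: relative fit inside the positive support hyper-sphere
    # (centre C+, squared radius r2_pos) versus the negative one (C-, r2_neg).
    return 1 if d2_pos / r2_pos <= d2_neg / r2_neg else -1
```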
QHSVM with surrounding and clustering samples For QPPs (21) and (22) with the condition (23), all training samples are used for optimization with inequality constraints, which means QHSVM is fit for classification without high efficiency requirement. For a highly efficient classification problem, we provide a QHSVM with surrounding and clustering samples, which is called ρ-QHSVM. The surrounding samples refer to samples that are distributed near the boundary of the quantile hyper-spheres. In the case of X + , its surrounding samples are distributed near the boundary of O + . The clustering samples refer to samples that are distributed near the center of the quantile hyper-spheres. The quantile hyperspheres of ρ-QHSVM can be obtained by using sparse surrounding samples rather than all samples. So, these training samples should be divided into surrounding samples and clustering samples. In order to achieve it, a novel local center-based density estimation method is proposed. Local center-based density estimation is originated from kernel density estimation in [32]. Kernel density estimation yields Gaussian weight by calculating the distance between a sample and its K-nearest neighbors. This kernel density weight can efficiently characterize the local geometry of samples manifold, but it can't capture surrounding samples in the training dataset. So, the local center-based density estimation method is designed. Consider a training dataset X = {X i |i = 1,2,� � �,m}. Firstly, the kernel function C(X i ,X l ) = φ (X i )�φ(X l ) is introduced. Then, the steps for a local center-based density estimation method are given in nonlinear feature mapping space: Step 1: Calculate the square distance between each sample X i and the others. Step2: Search for K-nearest neighbors in nonlinear feature mapping space for each sample X i . Step 3: Calculate the mean of square distances for the training dataset. Step 4: Calculate the kernel density weight for each sample X i . Step 5: Determine the center of K-nearest neighbors for sample X i . Step 6: The local center-based density of X i is estimated as follows. It can be seen from the above steps that the local center-based density of X i is estimated with the distance between the sample and its K-nearest neighbors, where K is given by user. Moreover, the local center-based density is a Gaussian kernel density. When q i = 1, r i ¼ r w i . A bigger r w i indicates that X i is closer to its K-nearest neighbors. So, r w i can be used to check if X i is a clustering sample or an isolated sample. However, r w i can't be used to identify surrounding samples. The training dataset can be divided into clustering samples and surrounding samples from center to outside. The surrounding samples distributed near the boundary of the quantile hyper-spheres deviate the center of K-nearest neighbors. Fig 2 shows that the surrounding sample x s is far from the center of K-nearest neighbors, while the clustering sample x c is close to that center of K-nearest neighbors. This is their distinctive characteristics. So, q i is used to represent the deviation degree. When q i 6 ¼1, each ρ i must be compensated with q i . ρ i is called as local center-based density. The smaller ρ i is, the closer X i is to boundary. The bigger ρ i is, the closer X i is to clustering region. On the other hand, Gaussian kernel parameter δ 2 is set as � d 2 . 
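The display equations behind the six steps above are not recoverable here, so the sketch below gives one consistent reading of them: a Gaussian weight computed from the K-nearest-neighbour distances, compensated by how far the sample sits from the centre of those neighbours, with δ² fixed to the mean squared pairwise distance of the training set (as the surrounding prose states). For brevity it works with input-space distances, whereas the paper works in the kernel-induced feature space; the exact form of the compensation q_i and the ε-based split are our interpretation of the prose, not the paper's formulas.

```python
import numpy as np

def local_center_density(X, K=8):
    m = X.shape[0]
    # Step 1: squared pairwise distances (a kernel-induced distance
    # K_ii + K_ll - 2*K_il could be substituted here).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # Step 3: delta^2 is the mean squared distance over the dataset.
    delta2 = d2[np.triu_indices(m, k=1)].mean()
    # Step 2: indices of the K nearest neighbours of every sample (self excluded).
    nn = np.argsort(d2, axis=1)[:, 1:K + 1]
    rho = np.empty(m)
    for i in range(m):
        # Step 4: Gaussian kernel-density weight from the K-NN distances.
        w = np.exp(-d2[i, nn[i]] / delta2).mean()
        # Step 5: centre of the K nearest neighbours.
        c = X[nn[i]].mean(axis=0)
        # Step 6: compensate by the deviation from that centre; q = 1 when the
        # sample coincides with its K-NN centre, q < 1 otherwise, so surrounding
        # samples end up with the smaller densities.
        q = np.exp(-((X[i] - c) ** 2).sum() / delta2)
        rho[i] = q * w
    return rho

def split_by_density(X, eps, K=8):
    # Our reading of the epsilon-based division: the fraction eps with the
    # smallest density becomes the surrounding set, the rest the clustering set.
    rho = local_center_density(X, K)
    order = np.argsort(rho)
    n_surround = int(np.ceil(eps * len(X)))
    return order[:n_surround], order[n_surround:]
```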
� d 2 is the mean of square distances for the training dataset, which makes (30) fit for different training datasets with different clustering degrees. whererĩ þ and � r � i þ are the local center-based densities ofXĩ þ and � X � i þ respectively. The principle of division isrĩ þ < � r � i þ . So,Xĩ þ is a surrounding sample with small local center-based density. And � X � i þ is a clustering sample with big local center-based density. Support vector machine with quantile hyper-spheres according to ratio ε. whererĩ þ and � r � i þ are the local center-based densities ofXĩ þ and � X � i þ respectively. The principle of division isrj À < � r � j À . Based onX þ , � X þ ,X À and � X À , the two QPPs of ρ-QHSVM are expressed as (21) and (22). Comparing with QHSVM, X þ i and X À j are changed asX þ i andX À j respectively.X þ andX À are sparse datasets because the number of samples is reduced greatly. And their sparseness is controlled by ε. This means that the number of samples with inequality constraints is greatly reduced. So, the optimization speed of ρ-QHSVM is improved. Moreover, it can be seen from QPP (21) that the optimization accuracy is controlled by boundary samples. So, for QPP (21), the surrounding samples sets fX þ i ji ¼ 1; 2; � � � ;m þ g and fX À j jj ¼ 1; 2; � � � ;m À g ensure the optimization accuracy because they include boundary samples. On the other hand, comparing with QHSVM, X þ i is changed as � X þ l , which shows that the center of the support hyper-sphere is closer to the samples with higher clustering degree. And � X þ is a sparse dataset, because the number of samples is also reduced. So, the clustering samples improve optimization speed and accuracy with equality constraints. For QPP (22), similar attributes and conclusions can be obtained. In summary, ρ-QHSVM is fit for high efficiency classification. Solution to ρ-QHSVM Comparing with ρ-QHSVM, QHSVM has the additional condition (23). So, QHSVM can be considered as a special case of ρ-QHSVM. So, the solution of ρ-QHSVM is only given in this section. And the solution of QHSVM can be obtained from ρ-QHSVM. Experiments and results analysis In order to test the performance of the proposed classification model, QHSVM, ρ-QHSVM, SVM, Pin-SVM, TWSVM and THSVM are compared by using artificial and benchmark datasets with noise. Moreover, ρ-QHSVM is used to classify strip steel surface defects datasets obtained from a steel plant in China. It must be noted that THSVM is an extended binary classifier based on SVDD_neg. In this experiment, the nonlinear classifiers adopt kernel function C(X i ,X l ) = exp(−kX i −X l k 2 /2δ 2 ). And the linear classifiers adopt C(X i ,X l ) =X i �X l . All classifiers are solved and executed with MATLAB 7.11 on Windows 7 running on a PC with Intel Core CPU (3.2 GHz) and 4 GB RAM. Moreover, for a fair comparison, all classifiers use the same quadprog solver in MATLAB. For QHSVM, some parameters need to be determined. In order to reduce the computation complexity, assume that c þ 2 ¼ c À 2 , c þ 3 ¼ c À 3 and v + = v − for QHSVM and ρ-QHSVM. This brevity method has also been used in [16], [22], [23], [29] and [32]. For TWSVM and THSVM, c 1 = c 2 and v 1 = v 2 are set. All parameters c's, v's and δ's are chosen from the set {2 l |l = −9,−8,� � �,10}. K is used to control the number of nearest neighbors. For the nearest neighbors' algorithm, K is generally determined by grid search. In [18] and [32], K has been discussed. According to [18] and [32], K is set as 8. 
The parameter τ is chosen from {0.1,0.2,0.5,1}. There are some common parameter selection methods: exhaustive search, 5-fold cross validation, grid search and optimization search. In the experiments, in order to completely cut interactions between training and testing phases, the following selection methods are adapted. Firstly, we randomly split m all samples into m training training samples and m testing testing samples, where m all = m training +m testing . And the split step is repeated n training times. Thus, n training training/testing datasets are obtained. Then, the parameter values are determined by 5-fold cross validation and grid search for the ith training dataset, where i = 1,2,� � �,n training . The final classifier is set through the determined parameter values and is used to evaluate the accuracy for the i-th testing dataset. It can be seen that the step is repeated n training times. Finally, we can obtain n training testing accuracies. And the average accuracy and the standard deviation of all accuracies are calculated. The average accuracy and the standard deviation are used to evaluate the performance of the classifiers in UCI datasets and strip steel surface defects datasets. In artificial datasets, the average accuracy is used to represent the performance of the classifiers. To make statistical analysis sound, n training is set as 50 and m training = 5m testing . Artificial datasets To illustrate the ability of QHSVM graphically, the 2-D artificial datasets with Gaussian distribution are adopted. Suppose the samples X þ i (i = 1,2,� � �,m + ) satisfy Gaussian distribution N (μ 1 ,∑ 1 ). And the mean μ 1 is [−0.38,−0.38] T and covariance matrix ∑ 1 is diag (0.1,0.1). Suppose the samples X À j (j = 1,2,� � �,m − ) also satisfy Gaussian distribution N(μ 2 ,∑ 2 ) with μ 2 = [0.38,0.38] T and ∑ 2 = diag(0.03,0.03). Moreover, some samples in artificial datasets are introduced with noise around the decision boundary by using an adjustable parameter θ, which are called noise samples. θ is the ratio of the number of noise samples to the number of training samples. These noise samples affect the labels around the boundary. The labels of these noise samples are selected from {+1,−1} with equal probability. And the positions of these samples satisfy Gaussian distribution with the following parameters μ n = [0,0] T and S n = diag (0.03,0.03). Firstly, the dataset D 1 with m + = 100 and m − = 100 is built according to the above Gaussian distribution. Then, the dataset D n 1 is obtained by introducing noise samples with θ = 10% into D 1 . 4 (A-1) that the decision boundary of SVM is obtained based on two parallel support hyper-planes. These two support hyper-planes belong to boundary hyper-planes. Compared with Fig 4(A-1), the boundary hyper-planes of SVM in Fig 4(A-2) change in position. The result proves that SVM is adversely affected by noise samples. The support hyper-planes of Pin-SVM are quantile hyper-planes. Many samples are added between two quantile hyper-planes, which dilutes the adverse impact of noise samples. So, the decision boundary has not changed much for Pin-SVM on D 1 and D n 1 . Different from SVM, TWSVM uses two nonparallel support hyper-planes to describe two classes of samples. This attribute makes TWSVM be in favor of the description of training dataset, especially the dataset with cross planes. However, each support hyper-plane of TWSVM needs to be supported by a parallel boundary hyper-plane. 
So, noise samples also affect the nonparallel support hyper-planes of TWSVM. THSVM builds two support hyper-spheres. Each hyper-sphere covers one class of samples and keeps away from the other class of samples. THSVM maximizes the margin between the two classes and the inner-class clustering of samples. So, the decision boundary of THSVM becomes more reasonable. It can be seen from Fig 4(D-1) that the decision boundary of THSVM curves to the clustering samples. However, its two support hyper-spheres belonging to boundary are affected by noise samples near the boundary. If τ = 0, the quantile hyper-spheres reduce to the boundary hyper-spheres for QHSVM. So, QHSVM (τ = 0) includes two boundary hyperspheres with the same center for every class of samples. It can be seen that it has similar attributes with THSVM. So, it is clear that the decision boundary of QHSVM (τ = 0) will be changed by noise samples. QHSVM (τ = 0.5) builds two quantile hyper-spheres with the same center for every class of samples. Compared with the boundary hyper-spheres, some samples are added inside or outside of the quantile hyper-spheres. These samples reduce the adverse impact caused by noise samples around the decision boundary. So, the training results of QHSVM (τ = 0.5) for D 1 and D n 1 are not changed obviously, just like the support hyper-spheres and the decision boundary. Moreover, the decision boundary of QHSVM (τ = 0.5) for D 1 and D n 1 are both reasonable, which are curved to clustering samples. All these results prove that QHSVM has better performance because it integrates the excellent attributes of Pin-SVM, TWSVM and THSVM. Then, the dataset D 2 with m + = 200 and m − = 200 is built. According to the prescribed rules, it is divided into the training dataset and testing dataset. And noise samples with θ = 0%, 5%, 10%, 20% are introduced into the training dataset respectively. At last, the testing accuracies for different classifiers with linear kernel are shown in Table 1. For θ = 0%, compared with SVM and TWSVM, THSVM and QHSVM have better classification accuracies, which shows that the nonparallel hyper-planes (hyper-spheres) and inner-class clustering of samples strengthen the performance of classifiers. For θ = 0%, the testing accuracy of Pin-SVM is lower than that of SVM. One possible reason is that there are some isolated samples in D 2 , which can be seen from Fig 4(B-1). The only error point in Fig 4(B-1) deviates from the dataset with "+" in black. Quantile hyper-plane is sensitive to isolated samples as well as noise samples. For θ6 ¼0%, QHSVM provides the best testing accuracy compared with the other classifiers. All these results show that QHSVM performs the best in accuracy for datasets with noise samples, which is due to pinball losses, two nonparallel support hyper-spheres and inner-class clustering of samples. For θ6 ¼0%, the testing accuracy of Pin-SVM is higher than that of SVM, TWSVM and THSVM, which shows that the pinball loss can improve classifier's performance for datasets with noise samples. The testing accuracy of Pin-SVM is lower than that of QHSVM. The reason is that it does not have the attributes of inner-class clustering of samples and nonparallel support hyper-planes. Testing accuracies corresponding to different classifiers with nonlinear kernel are shown in Table 2. For all conditions, QHSVM has the best testing accuracy. 
Compared with Table 1, testing accuracies corresponding to all classifiers in Table 2 are improved, which shows that the classifiers with nonlinear kernel improves the classification results. Finally, in order to test the performance of ρ-QHSVM, the datasets D 3 (m + = m − = 100), D 4 (m + = m − = 400), D 5 (m + = m − = 700) and D 6 (m + = m − = 1000) are built. And noise samples with θ = 0% are introduced into these datasets. Nonlinear classifiers of SVM, Pin-SVM, TWSVM, THSVM and QHSVM are tested on accuracy and speed. Testing results are shown in Table 3. The conclusions on Table 3 are nearly the same with that on Table 2, which shows that QHSVM has excellent and stable performance for different-scale datasets. THSVM and TWSVM are faster than SVM, Pin-SVM and QHSVM. The reason is that these two classifiers solve two smaller QPPs instead of one large QPP used for SVM and Pin-SVM. The efficiency of QHSVM is the lowest because it solves two large QPPs to obtain better classification accuracy. So, QHSVM is not fit for high efficiency requirement. In order to solve the above problem, ρ-QHSVM with adjustable execution speed is proposed. It uses parameter ε to adjust the execution speed. The accuracy and execution time of ρ-QHSVM with different ε for differentscale datasets are shown in Table 3. The classification accuracy of ρ-QHSVM reduces as ε becomes small. The sparseness of surrounding samples and clustering samples is controlled by ε. This is caused by the fact that reducing ε means reducing the number of surrounding samples. Fewer surrounding samples inevitably reduce the classification accuracy for datasets with noise samples. For small-scale datasets, if ε is big, ρ-QHSVM is close to QHSVM, and exceeds the other classifiers in accuracy. Take the dataset D 3 as an example, when ε = 0.7, ρ-QHSVM is close to QHSVM in accuracy. For large-scale datasets, when ε is small, ρ-QHSVM is close to QHSVM in accuracy. And it also exceeds the other classifiers in accuracy. For the dataset D 6 , the classification accuracy of ρ-QHSVM exceeds that of Pin-SVM when ε = 0.3, and is close to that of QHSVM when ε = 0.4. It can be seen from Table 3 that the smaller ε is, the higher the efficiency of ρ-QHSVM is. When ε�0.4, ρ-QHSVM is the fastest classifier, which shows that the efficiency of ρ-QHSVM can be adjusted by ε. The results of Table 3 show that the improvement of execution time brought by ρ-QHSVM is limited for small-scale datasets under the premise of high accuracy. However, for small-scale datasets, this difference is insignificant because the execution time of classifiers is small. For large-scale datasets, the execution time of ρ-QHSVM is reduced greatly under the premise of high accuracy. For example, ρ-QHSVM has high efficiency and testing accuracy for the dataset D 6 when ε = 0.3. So, ρ-QHSVM is fit for large-scale datasets with high efficiency requirement. UCI datasets with noise samples In order to further test the performance of QHSVM, all classifiers are run on fifteen public benchmark datasets downloaded from the UCI Machine Learning Repository [33]. Ten smallscale or middle-scale datasets are used for testing accuracy, including Heart, Ionosphere, Breast, Thyroid, Australian, WPBC, Pima, German, Sonar and ILPD. And five large-scale datasets are used for testing accuracy and speed, including Wifi, Splice, Wilt, Musk and Spambase. The details of these original benchmark datasets are listed in Table 4. 
In order to highlight the anti-noise ability of QHSVM, the benchmark datasets with noise samples are tested. Each benchmark dataset is corrupted by zero-mean Gaussian noise. For each feature, the ratio of the variance of noise to that of the feature denoted as θ is set to be 0%, 5% and 10%. And all original and corrupted benchmark datasets are normalized before training. Table 5 shows the testing accuracies of SVM, Pin-SVM, TWSVM, THSVM and QHSVM with nonlinear kernels on the ten benchmark datasets. It can be seen that QHSVM achieves the best testing accuracy for majority of datasets. For the original benchmarked datasets with θ = 0%, QHSVM and THSVM yield the best testing accuracy on 5 and 2 of 10 datasets respectively. And SVM, Pin-SVM and TWSVM yield the best testing accuracy on 1, 1 and 1 of 10 datasets respectively. This result shows that QHSVM and THSVM with nonparallel hyperspheres and inner-class clustering of samples strengthen the performance of classifiers. It should be pointed out that QHSVM has obvious advantage for corrupted benchmark datasets. For the corrupted benchmark datasets with θ = 5% and θ = 10%, QHSVM and Pin-SVM yield the best testing accuracies on 13 and 5 of 20 datasets respectively. And SVM, TWSVM and THSVM yield the best testing accuracy on 2, 1 and 1 of 20 datasets respectively. This result shows that QHSVM and Pin-SVM are better than classifiers with hinge loss for the corrupted datasets. Moreover, for the original and corrupted benchmark datasets, QHSVM is superior to THSVM and Pin-SVM, because it has merits of pinball losses, nonparallel hyper-spheres and inner-class clustering of samples. This conclusion is the same as experimental results on the artificial datasets. Table 6 shows the classification accuracies and execution time of SVM, Pin-SVM, TWSVM, THSVM and ρ-QHSVM with nonlinear kernels on the five large-scale datasets. In the above section, it has been found that ρ-QHSVM improves the execution efficiency for the large-scale artificial datasets under the premise of high accuracy. This part of experiment also proves the same conclusion. According to the experimental results on the artificial datasets, parameter ε of ρ-QHSVM is set as 0.3. Compared with the other classifiers, the execution time of ρ-QHSVM is the shortest. The reason is that ρ-QHSVM solves smaller QPPs with inequality constraints. These smaller QPPs are produced on sparse surrounding samples. On the other hand, ρ-QHSVM achieves the best testing accuracy for the majority of datasets. For the original benchmark datasets with θ = 0%, ρ-QHSVM yields the best testing accuracy on 3 of 5 datasets, while for the corrupted benchmark datasets with θ = 5% and θ = 10%, ρ-QHSVM yields the best accuracy on 6 of 10 datasets. The reason is that the local center-based density estimation method ensures the reasonable division about surrounding samples and clustering samples. In general, ρ-QHSVM has higher efficiency and accuracy for large-scale datasets compared with the other classifiers. PASCAL VOC dataset The PASCAL VOC dataset [34] is a public benchmark dataset and is often used in challenge competitions for supervised machine learning. The dataset is composed of color images of twenty visual object classes in realistic scenes. In the experiment, the ten classes of them are chosen, such as person, cat, cow, dog, horse, sheep, bicycle, bus, car and motorbike. 
These color images are converted to the intensity images, then are resized to s times the size of the original color images so that they have the specified 4096 pixels, where s is a positive real number. So, each image is represented as a sample vector with 4096 elements. We choose 800 and 1600 vectors as training samples respectively and the others are testing samples. In order to highlight the anti-noise ability, the PASCAL VOC dataset with noise are built. The PASCAL VOC dataset is corrupted by zero-mean Gaussian noise. For each feature, the ratio of the variance of noise to that of the feature denoted as θ is set to be 5%. For brevity, we build ten nonlinear binary classifiers with one-against-rest method. Then, the mean of ten accuracies is presented in Fig 5. It can be seen from Fig 5(A) that the performance of our QHSVM is superior to that of SVM, Pin-SAVM, TWSVM and THSVM in challenging the PASCAL VOC dataset with noise. The result highlights that the new attribute of pinball losses improves the anti-noise ability of our QHSVM. Furthermore, the attribute of nonparallel hyper-spheres strengthens the generalization performance of the classifier. Fig 5(B) shows that the average accuracy of QHSVM is not lower than that of other classifiers. This indicates that the QHSVM also has reliable performance in challenging the original PASCAL VOC dataset. The robustness of QHSVM is strengthened by maximizing the margin between two hyper-spheres with the same center and maximizing the inner-class clustering of samples. Moreover, two nonparallel quantile hyperspheres improve the generalization of QHSVM. In addition, the performance of all classifiers is improved with the increase of training samples. In the case of more training samples, the performance of all classifiers in corrupted dataset is close to that in original dataset. Notably, the accuracies of our classifier in the two datasets are close. This also shows that our QHSVM has better robustness than other classifiers. Strip steel surface defects datasets Strip steel surface defects datasets are obtained from Northeastern University (NEU) surface database [35]. In the experiment, four typical defects datasets in NEU surface database are Support vector machine with quantile hyper-spheres investigated: patches (S1 Dataset), inclusion (S2 Dataset), scratches (S3 Dataset) and scale (S4 Dataset). Their typical images are shown in Fig 6. These defect images are extracted as defect samples, and each defect sample includes sixteen attributes. This means that each defect sample is a 16-dimensional vector. Their related attributes have been described in our previous work [36]. It can be seen that the strip steel surface defects classification belongs to multi-class classification. There are many multi-class classification methods based on binary classifier, such as one-against-one, one-against-rest, decision directed acyclic graph and binary tree [37]. And the binary tree model is most widely used. Multi-class classifiers for SVM, Pin-SVM, TWSVM, THSVM and ρ-QHSVM can be obtained on the binary tree. According to the binary tree model, three QPPs are needed to solve for SVM and Pin-SVM, while six QPPs are needed to solve for TWSVM, THSVM and ρ-QHSVM. Moreover, to obtain more samples, the strip steel surface defects datasets are supplemented by rotation, distortion, translation, and scaling. In the end, the strip steel surface defects datasets include 8000 samples and each type of defects includes 2000 samples. 
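The corruption protocol used for both the UCI and PASCAL VOC experiments (zero-mean Gaussian noise whose variance is a fraction θ of each feature's variance) is easy to state in code; the following is a minimal sketch, with the random-number handling being our own choice rather than the paper's.

```python
import numpy as np

def corrupt_features(X, theta, seed=None):
    # Add zero-mean Gaussian noise to every feature; the noise variance of a
    # feature equals theta times the sample variance of that feature
    # (theta = 0.05 or 0.10 in the experiments described above).
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(theta * X.var(axis=0))
    return X + rng.normal(size=X.shape) * sigma

# Example: X_noisy = corrupt_features(X_train, theta=0.05, seed=0)
```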
All parameters of classifiers are obtained with the same method mentioned above. And the parameter ε of ρ-QHSVM is set to 0.3. The accuracies and execution time of all classifiers for all types of defects are shown in Table 7 and Fig 7 respectively. It can be seen from Table 7 that the accuracy of ρ-QHSVM is always the best for all types of defects. The accuracy of Pin-SVM is better than that of SVM, TWSVM and THSVM. The reason is that the strip steel surface defects datasets are corrupted by noise usually. It is well known that there is noise on the production line of strip steel. So, the pinball losses in ρ-QHSVM and Pin-SVM work for the strip steel surface defects datasets with noise samples. Moreover, the other excellent attributes improve the performance of ρ-QHSVM further. Besides, the efficiency of ρ-QHSVM is high. TWSVM, THSVM and ρ-QHSVM is better than SVM and Pin-SVM in execution time, which is shown in Fig 7. Though SVM and Pin-SVM only need to solve three QPPs for four types of datasets, these QPPs are all large. TWSVM, THSVM and ρ-QHSVM need to solve smaller QPPs, which improves the execution time. ρ-QHSVM has the fastest speed, which is benefited from the local center-based density estimation method. The method improves the classification efficiency under the premise of high accuracy. In summary, ρ-QHSVM is very fit for the strip steel surface defects classification. Conclusions A novel QHSVM classifier is proposed for pattern recognition in this paper. QHSVM has remarkable attributes: pinball losses, two nonparallel quantile hyper-spheres and inner-class clustering of samples. The quantile hyper-spheres ensure that QHSVM is insensitive to noise, especially the feature noise around the decision boundary. The robustness of QHSVM algorithm is strengthened by maximizing the margin between two hyper-spheres with the same center and maximizing the inner-class clustering of samples. Moreover, compared with standard SVM model, two nonparallel quantile hyper-spheres improve the generalization of QHSVM. On the other hand, in order to satisfy the requirement of high efficiency for large-scale datasets classification, a new version of QHSVM with adjustable execution speed is proposed, which is called ρ-QHSVM. Under the premise of high accuracy, ρ-QHSVM reduces the execution time. That benefits from the local center-based density estimation which reasonably divides training samples into surrounding samples and clustering samples. The proposed QHSVM and ρ-QHSVM are compared with SVM, Pin-SVM, TWSVM and THSVM through numerical experiments on artificial, benchmark and strip steel surface defects datasets with noise. The results show that QHSVM performs the best in accuracy for datasets with noise samples, which is due to pinball losses, two nonparallel support hyper-spheres and inner-class clustering of samples. The execution time of ρ-QHSVM is reduced greatly under the premise of high accuracy for large-scale datasets, especially strip steel surface defects datasets. ρ-QHSVM has the fastest speed, which is benefited from the local center-based density estimation method. In the future, it is necessary to find the optimal parameters for QHSVM with some effective methods. And how to apply QHSVM to unbalanced datasets will be investigated. Supporting information S1 Dataset. Patches dataset. The first typical strip steel surface defects dataset.
Development of tissue inflammation accompanied by NLRP3 inflammasome activation in rabbits infected with Treponema pallidum strain Nichols Background The inflammasome responses in Treponema pallidum infection have been poorly understood to date. This study aimed to investigate the expression of the nucleotide-binding leucine-rich receptor protein 3 (NLRP3) inflammasome in the development of tissue inflammation in rabbits infected with T. pallidum. Methods Forty-five rabbits were randomly assigned to a blank group or an infection group, and the latter was divided into no benzathine penicillin G (BPG) and BPG treatment subgroups. Rabbits in the infection group were injected intradermally with 0.1 mL of a 107/mL T. pallidum suspension at 10 marked sites along the back, and the blank group was treated with normal saline. The BPG treatment subgroup received 200,000 U of BPG administered intramuscularly twice, at 14 d and 21 d post-infection. The development of lesions was observed, and biopsies of the injection site and various organs, including the kidney, liver, spleen, lung, and testis, were obtained for NLRP3, caspase-1, and interleukin-1β (IL-1β) mRNA analysis during infection. Blood was also collected for the determination of IL-1β concentration. Results Rabbits infected with T. pallidum (both the BPG treatment and no BPG treatment subgroups), exhibited NLRP3 inflammasome activation and IL-1β secretion in cutaneous lesions, showing a trend in elevation to decline; NLRP3 mRNA expression reached a peak at 18 d in the BPG treatment subgroup and 21 d in the no BPG treatment subgroup and returned to “normal” levels [vs. the blank group (P > 0.05)] at 42 d post-infection. The trend was similar to the change in cutaneous lesions in the infected rabbits, which reached a peak at 16 d in the BPG treatment subgroup and 18 d in the no BPG treatment subgroup. NLRP3, caspase-1, and IL-1β mRNA expression levels were slightly different in different organs. NLRP3 inflammasome activation was also observed in the kidney, liver, lung, spleen and testis. IL-1β expression was observed in the kidney, liver, lung and spleen; however, there was no detectable level of IL-1β in the testes of the infected rabbits. Conclusions This study established a clear link between NLRP3 inflammasome activation and the development of tissue inflammation in rabbits infected with T. pallidum. BPG therapy imperceptibly adjusted syphilitic inflammation. Background Syphilis is a sexually transmitted disease caused by the bacterial spirochete Treponema pallidum [1]. The inflammatory processes induced by T. pallidum within infected tissues result in the development of lesions, and lesion resolution has been reported previously [2]. The innate immune system, the first line of host defense of microbial infection, is recognized as the major contributor to the acute inflammation induced by tissue damage or microbial infection [3]. The innate immune system has an imperative function in controlling the initial pathogen invasion and activates various members of the nucleotide-binding leucine-rich receptor (NLR) family in the cytoplasm, resulting in the assembly of an NLR-containing multiprotein complex that recruits and activates caspase-1, leading to interleukin-1β (IL-1β) production [4]. 
NLRP3 is the best-characterized member of the NLR family involved in the innate immune system; this system is activated by exogenous and endogenous stimulatory factors, such as bacteria, viruses, fungi, and components of dying cells [5,6], and NLRP3 serves as a platform for the activation of caspase-1 and the maturation of the proinflammatory cytokine IL-1β to engage in the innate immune response [7]. The role of the NLRP3 inflammasome in pathogenic infections, such as those caused by Pneumococcus [8], Helicobacter pylori [9], Neospora caninum [10], and Mycobacterium tuberculosis [11] has been demonstrated. However, the involvement of NLRP3 in the inflammatory processes of T. pallidum infection is poorly understood. In this study, we investigate the expression of the NLRP3 inflammasome during the development of tissue inflammation associated with syphilis, the activation of the inflammasome and release of IL-1β were estimated during T. pallidum infection in a rabbit model. Animal experiments The T. pallidum Nichols strain was kindly provided by Lorenzo Giacani, Ph.D. (University of Washington, Seattle) and was propagated via intra-testicular serial passage in New Zealand white rabbits to maintain virulence in our laboratory as previously described [12]. Forty-five male New Zealand white rabbits (purchased from the Xiamen University Laboratory Animal Center, weighing approximately three kilograms each) with negative results in both the reactive rapid plasma reagin and T. pallidum particle agglutination tests, were randomly assigned to two groups, a blank group (n = 15) and an infection group (n = 30). The latter was divided into the no benzathine penicillin G (BPG) treatment subgroup (n = 15) and the BPG treatment subgroup (n = 15). The animals were housed individually at 16 to 18°C and were fed with antibiotic-free food and water. Rabbits in the infection group were injected intradermally with 0.1 mL of a 10 7 treponeme/mL suspension at 10 marked sites along the back, while rabbits in the blank group were injected with normal saline. The backs of the rabbits were meticulously kept free of fur by daily clipping throughout the experiment. Rabbits in the BPG treatment subgroup received 200,000 U of BPG administered intramuscularly twice, at 14 d and 21 d post-infection. One representative site of each animal was selected separately and biopsied (4-mm punch biopsies obtained under local lidocaine anesthesia) for RNA extraction at 1, 4, 7, 10, 14, 18, 21, 28, 35 and 42 d post-infection. One representative site on each animal was dedicated exclusively for the observation of lesion appearance and development up to 42 d post-infection; the diameter of the lesion was measured using a vernier caliper. Three animals were randomly selected for euthanasia in the two groups at 7, 14, 21, 28, and 42 d post-infection, and the kidney, liver, spleen, lung, and testis organs were then harvested for experimental analysis. Blood was collected at 1, 4, 7, 10, 14, 18, 21, 28, 35 and 42 d postinfection, and serum was isolated and frozen at − 80°C until analysis of the IL-1β concentration. All protocols involving animals were approved in advance by the animal experimental ethics committee of the Medical College of Xiamen University. Statistical analysis The data were expressed as the mean ± SD. Statistical analyses were performed using the SPSS 13.0 software (SPSS Inc., Chicago, USA). Student's t-test was applied to compare the means between two groups. 
In cases with more than two groups, a one-way analysis of variance was employed to examine the differences between the groups, and Dunnett's post-comparison test was used to conduct multiple comparisons. A 2-tailed P value of less than 0.05 was accepted as being statistically significant.
Development of cutaneous lesions in rabbits infected with T. pallidum
In the infected rabbits, cutaneous lesions began to develop at 4 d post-infection and then reached a peak at 16 d in the BPG treatment subgroup and at 18 d in the no BPG treatment subgroup (Fig. 1a, b). In the BPG treatment subgroup, the lesions gradually began to shrink at 16 d (2 d after the first BPG treatment) and subsequently disappeared at 28 d post-infection. The cutaneous lesions disappeared at an earlier time point in the BPG treatment subgroup than in the no BPG treatment subgroup (28 d vs. 42 d). The lesions were barely detectable at 35 d and disappeared at 42 d in the no BPG treatment subgroup. The cutaneous lesions in the no BPG treatment subgroup were significantly larger than those in the BPG treatment subgroup at 18, 21, 24, 28 and 35 d post-infection (Fig. 1c) (P < 0.05). No lesions developed in the blank group (data not shown). [Fig. 1 legend: The results are expressed as the mean ± SD. Student's t-test was applied to compare the means of the diameters between the BPG treatment and no BPG treatment subgroups. * P < 0.05.]
NLRP3 inflammasome activation in the cutaneous lesions of infected rabbits
In the infected rabbit group, the NLRP3 mRNA levels showed a trend in elevation to decline and reached a peak at 18 d post-infection in the BPG treatment subgroup and at 21 d in the no BPG treatment subgroup. In the BPG treatment subgroup, NLRP3 mRNA expression was suppressed at 18 d post-infection (4 d after the first BPG treatment) and returned to "normal" levels [i.e., not significantly different from the blank group (P > 0.05)] at 42 d post-infection. Notably, the level of NLRP3 mRNA exhibited a reduction at an earlier time point in the BPG treatment subgroup (18 d) than in the no BPG treatment subgroup (21 d). The expression of NLRP3 mRNA in the no BPG treatment subgroup was significantly higher than that in the BPG treatment subgroup at 21, 28, and 35 d post-infection (P < 0.05) (Fig. 2a). However, the expression of caspase-1 and IL-1β mRNA showed a "saddle pattern" of change over time post-infection; caspase-1 expression reached an initial peak at 7 d and a second peak at 28 d, while IL-1β mRNA reached a first peak at 14 d and a second peak at 28 d in both the BPG treatment and no BPG treatment subgroups (Fig. 2b, c). Notably, despite different trends in the expression of NLRP3, caspase-1, and IL-1β mRNAs in cutaneous lesions during infection, at 42 d post-infection the expression of all three mRNAs returned to "normal" levels [i.e., were not significantly different from the blank group levels (P > 0.05)] in both the BPG treatment and no BPG treatment subgroups. The expression of NLRP3, caspase-1, and IL-1β mRNAs was maintained at a low level and showed no fluctuations in the blank group during the experimental period (Fig. 2). [Fig. 2 legend: Values represent the mean ± SD of triplicate experiments. A one-way analysis of variance was employed to examine the differences in the three groups, and Dunnett's post-comparison test was used to conduct multiple comparisons. * P < 0.05, the BPG treatment subgroup or no BPG treatment subgroup vs. the blank group. # P < 0.05, the BPG treatment subgroup vs. the no BPG treatment subgroup.]
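The group comparisons reported above rely on Student's t-test for two groups and on one-way ANOVA with Dunnett's post-comparison test against the blank (control) group for three groups. Below is a minimal, hypothetical sketch of that workflow on made-up fold-change values (not the study data); scipy.stats.dunnett is assumed to be available (SciPy ≥ 1.11).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical NLRP3 mRNA fold-change values, for illustration only (15 animals per group).
blank  = rng.normal(1.0, 0.1, size=15)
bpg    = rng.normal(1.6, 0.3, size=15)
no_bpg = rng.normal(2.1, 0.4, size=15)

# Two-group comparison (e.g., lesion diameters, BPG vs. no BPG): Student's t-test.
t_stat, p_t = stats.ttest_ind(bpg, no_bpg)

# Three-group comparison: one-way ANOVA, then Dunnett's test against the blank (control) group.
f_stat, p_anova = stats.f_oneway(blank, bpg, no_bpg)
dunnett_res = stats.dunnett(bpg, no_bpg, control=blank)  # requires SciPy >= 1.11

print(f"t-test p = {p_t:.3f}; ANOVA p = {p_anova:.3f}")
print("Dunnett p-values (BPG, no BPG) vs. blank:", dunnett_res.pvalue)
```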
IL-1β secretion in rabbits infected with T. pallidum
Except at 1 d and 42 d post-infection, the IL-1β concentration in the infection group was significantly higher than that in the blank group, which maintained a low level with no fluctuations. The dynamic tendency of the serum IL-1β concentration in the BPG treatment subgroup was similar to that in the no BPG treatment subgroup, presenting an early increase and a later decrease (Fig. 3). The serum IL-1β level reached a peak at 18 d in the BPG treatment subgroup and at 21 d in the no BPG treatment subgroup, before returning to normal levels [i.e., not significantly different from the blank group level (P > 0.05)] at 42 d post-infection. [Fig. 3 legend: Dynamics of the IL-1β concentration in rabbits infected with T. pallidum. Values represent the mean ± SD of triplicate experiments. A one-way analysis of variance was employed to examine the differences between groups, and Dunnett's post-comparison test was used to conduct multiple comparisons. * P < 0.05, the BPG treatment subgroup or no BPG treatment subgroup vs. the blank group.]
NLRP3 inflammasome expression in the organs of rabbits infected with T. pallidum
Additionally, the dynamics of NLRP3, caspase-1, and IL-1β mRNAs were monitored in five organs: the kidney, liver, spleen, lung, and testis. The results showed that NLRP3, caspase-1, and IL-1β mRNAs had different expression levels in the different organs of infected rabbits. The NLRP3 mRNA expression levels in the infection group showed a trend in elevation to decline in all five organs but remained higher than "normal" at the endpoint of the study (vs. the blank group, P < 0.05) (Fig. 4a-e). Similar to the trend in NLRP3, caspase-1 mRNA showed an initial increase and a later decrease in four organs (kidney, liver, lung, and testis). The expression of caspase-1 mRNA in the kidney showed a "saddle pattern" of change over time post-infection, which was different from that in the other three organs (liver, lung, and testis), and remained higher than that in the blank group at the endpoint of the study (P < 0.05). However, the caspase-1 mRNA level in the spleen was not different among the BPG treatment subgroup, the no BPG treatment subgroup, and the blank group (Fig. 4f-j). Regarding the expression level of IL-1β mRNA, there was no difference in the testes among the BPG treatment subgroup, the no BPG treatment subgroup, and the blank group. However, IL-1β mRNA expression showed an earlier increase and later decrease in the kidney, liver, spleen, and lung and returned to normal at 21 d in the kidney and 42 d in the liver [vs. the blank group (P > 0.05)]. IL-1β mRNA was still expressed in the lung and spleen (vs. the blank group, P < 0.05) at the endpoint of the study (42 d post-infection) (Fig. 4k-o).
Discussion
T. pallidum can provoke an intense innate immune response, which is generally believed to be the cause of tissue damage [14]. In a rabbit model, T. pallidum infection presents with progressive macrophage activation and mononuclear cell infiltration at the sites of experimental inoculation [15]. Immunohistochemistry and real-time PCR analysis of biopsy specimens obtained from primary and secondary syphilis lesions demonstrate that syphilitic skin lesions are also composed of macrophages and lymphocytes that express mRNAs for IL-1β, interferon-γ and IL-12, both in experimentally infected rabbit tissues [16] and in human primary syphilitic lesions [17].
Results from prior studies have confirmed that innate immune cells, such as macrophages, can express pattern recognition receptors and sense microbes by recognizing pathogen-associated molecular patterns [18]; various members of the NLR family in the cytoplasm are then activated, resulting in the assembly of an NLR-containing multiprotein complex and the activation of caspase-1, leading to IL-1β production [4]. NLRP3 inflammasome activation/IL-1β release results in hepatocyte pyroptosis, liver inflammation, and fibrosis in mice [19]. In the present study, we found that NLRP3 inflammasome activation and IL-1β secretion were evident in T. pallidum-infected rabbits in the early phase and showed a trend in elevation to decline. The trend was similar to the changes in the lesions of the infected rabbits, which provides evidence of a link between NLRP3 inflammasome activation and the inflammatory injury caused by T. pallidum infection. The activation of the NLRP3 inflammasome is closely related to disease development. Penicillin has been recommended as the mainstay of treatment for all types of syphilis since this drug was first used for this indication in 1943 [20]. In this study, we also investigated the effect of penicillin treatment on the expression of the NLRP3 inflammasome during the development of tissue inflammation due to syphilis. We found that regardless of whether the infected rabbits received BPG treatment, the expression levels of NLRP3, caspase-1, and IL-1β in cutaneous lesions all showed an identical trend in elevation to decline, similar to the trend observed for the cutaneous lesions themselves, and the expression of NLRP3, caspase-1, and IL-1β mRNAs in lesions eventually returned to "normal" levels in both the BPG treatment and no BPG treatment subgroups, although the time point of the reduction was slightly different. The cutaneous lesions disappeared at an earlier time point (at 28 d) in the BPG treatment subgroup than in the no BPG treatment subgroup (at 42 d). In addition, NLRP3 mRNA expression was suppressed at an earlier time point in the BPG treatment subgroup (18 d) than in the no BPG treatment subgroup (21 d). BPG therapy imperceptibly adjusted syphilitic inflammation. T. pallidum disseminates systemically and induces inflammation in diverse tissues and organs [21]. Innate immune cells, such as macrophages, in tissues and organs not only mediate bacterial clearance but also lead to tissue damage and clinical symptoms [22]. In this study, we detected NLRP3 inflammasome activation in five organs, the kidney, liver, lung, spleen and testis, further confirming that T. pallidum induces systemic inflammation during infection. We also found that IL-1β was expressed in the kidney, liver, lung and spleen tissue but was not detectable in the testes of the infected rabbits. One possible explanation is that there may be some difference in the number of IL-1β-producing cells (such as macrophages) or in the cellular function of cytokine production in response to T. pallidum stimulation among different organs.
The other possible reason is that the testis represents a distinct immunoprivileged site where invading pathogens can be tolerated without evoking detrimental immune responses [23]. In addition, we found that NLRP3 was differentially expressed in different organs and also recovered at different times, further confirming the existence of different immune response profiles to T. pallidum in different organs. Additionally, only three animals were harvested for experimental analysis at each time point; thus, individual differences in the immune response of the animals may account for the observed irregularity, and further studies with more animals are required to eliminate individual differences. In this study, we demonstrated that T. pallidum-induced inflammasome activation was positively correlated with changes in the skin lesions of rabbits. Further studies are required to understand the mechanisms of NLRP3 inflammasome regulation by IL-1β in T. pallidum infection. Also, T. pallidum multiplicity may correlate with different disease outcomes [1,24]; the immune response to different T. pallidum strains deserves future study. In addition, we only monitored the changes up to 42 d post-infection. Therefore, further studies are required to determine changes in the NLRP3 inflammasome in rabbits with relapse in the no BPG treatment subgroup.
Conclusions
In the present study, we established a clear link between NLRP3 inflammasome activation and the development of tissue inflammation in rabbits infected with T. pallidum; NLRP3 inflammasome activation was similar to the process of a self-limited disease. We also found that BPG therapy imperceptibly altered the syphilitic inflammation, but the underlying mechanism remains unclear.
Availability of data and materials
The datasets generated during the current study are available from the corresponding author on reasonable request.
Authors' contributions
YL, LLL and TCY conceived and designed the study. LRL, YX, and WL analyzed the data and drafted the manuscript. YYC, XZZ, ZXG, and KG collected and organized the data. MLT, HLZ, SLL, HLL, WDL, XML kept the rabbits and collected samples. LRL, TCY, LLL, MLT, XML, YYC, YX and HLZ obtained the funding and revised the manuscript critically for important intellectual content. All authors read and approved the final manuscript.
Fig. 4 Dynamics of NLRP3, caspase-1, and IL-1β mRNA expression in organs of rabbits infected with T. pallidum. a-e Dynamics of NLRP3 mRNA expression in the kidney (a), liver (b), lung (c), spleen (d) and testis (e). f-j Dynamics of caspase-1 mRNA expression in the kidney (f), liver (g), lung (h), spleen (i) and testis (j). k-o Dynamics of IL-1β mRNA expression in the kidney (k), liver (l), lung (m), spleen (n) and testis (o). Values represent the mean ± SD of triplicate experiments. A one-way analysis of variance was employed to examine the differences in groups, and Dunnett's post-comparison test was used to conduct multiple comparisons. * P < 0.05, the BPG treatment subgroup or no BPG treatment subgroup vs. the blank group.
Chronic Kidney Disease and Fibrosis: The Role of Uremic Retention Solutes Chronic kidney disease (CKD) is a major global health concern, and the uremic state is highly associated with fibrogenesis in several organs and tissues. Fibrosis is characterized by excessive production and deposition of extracellular matrix proteins with a detrimental impact on organ function. Another key feature of CKD is the retention and subsequent accumulation of solutes that are normally cleared by the healthy kidney. Several of these uremic retention solutes, including indoxyl sulfate and p-cresyl sulfate, have been suggested to be CKD-specific triggers for the development and perpetuation of fibrosis. The purpose of this brief review is to gather and discuss the current body of evidence linking uremic retention solutes to the fibrotic response during CKD, with a special emphasis on the pathophysiological mechanisms in the kidney. Introduction The kidneys are essential for the clearance of (metabolic) waste products. And even though the primary cause of kidney disease (either acute or chronic) is often related to direct injury, e.g., inflammatory damage in the case of glomerulonephritis and pyelonephritis, or hypoperfusion, ischemia, and toxic damage, this will ultimately result in a reduced renal function. When the kidneys fail to purify the body of metabolic end-products, a host of substances that are normally excreted into the urine are retained. This condition is called uremia, after the first recognized and most abundant retention product, urea. Many uremic retention solutes are biologically active and exert toxicity, affecting the functional capacity of almost every organ system in the body, resulting in the complex clinical picture of uremia (1). Currently, uremic retention products are most often classified according to their removal pattern by dialysis, which is, up to now, the most frequently applied method to reduce solute levels in patients with endstage renal disease (ESRD). Three major groups are considered: (1) small water-soluble compounds, molecular weight (MW) <500 D, which are easy to remove by any type of dialysis; (2) larger middle molecules, mostly peptides, MW >500 D, which can only be cleared by dialyzers containing large pore membranes (high flux dialysis); and (3) protein-bound compounds, mostly with a low MW (<500 D), these solutes are difficult to remove by any type of dialysis, as protein binding hampers their clearance. Especially this class of retention solutes greatly contributes to comorbidities, such as organ fibrosis, observed in chronic kidney disease (CKD) patients. Fibrosis and Uremia: Clinical Aspects Fibrosis is a process whereby functional tissue is replaced by connective tissue. Once this phenomenon exceeds the level of physiological repair, it will result in loss of organ architecture as well as loss of functional tissue. Causes may be local damage, e.g., trauma, infection, or ischemia, but also more diffuse conditions, like systemic inflammation. In what follows, we will summarize the clinical consequences of fibrotic changes in uremia. Renal Fibrosis As stated above, kidney disease is often due to direct injury, yet in many cases this initial insult will initiate fibrogenesis, especially when regeneration as healing process is inadequate (2). The kidney is a complex organ containing a wide variety of cells that constitute the glomeruli, tubules, interstitium, and capillaries (3). 
And the initial site of injury determines the subsequent pathology, e.g., glomerular IgA deposition will cause glomerular fibrosis, whereas infections or proteinuria will provoke tubulointerstitial fibrosis (3). Still, irrespective of etiology, the subsequent fibrotic response will ultimately affect the functional capacities of the kidney, with uremia as one of the consequences. To cope with this problem, many strategies have been developed in hopes of slowing down or even reversing fibrogenesis (4). Although several studies have been successful at the pre-clinical level, only limited advances have been made at this time in the translation of these findings to the level of patient treatment (4). In addition, in the analysis of the urinary proteome related to CKD and CKD progression, a marked positive correlation appears with collagen or matrix protein fragments, which via a bottom-to-top approach confirms the pathophysiological role of fibrosis in the functional disturbance of kidneys and other tissues in patients with CKD (5). Cardiac Fibrosis Similar to the kidney, fibrosis of the heart and its valves depends on a host of damaging processes, such as ischemia or inflammation, with heart failure as the functional resultant. Many of the factors that cause kidney failure, e.g., hypertension and diabetes, concurrently promote cardiac fibrosis (6, 7), a condition induced by uremic retention solutes as well (8,9). In its turn, the ensuing cardiac failure causes hypoperfusion and ischemia of the kidneys, which is causative for uremia. The Cardio-Renal Axis All these elements together result in a close interaction of renal and cardiac dysfunction, often termed, correctly or incorrectly, cardiorenal syndrome (10,11). Nevertheless, there is no debate that kidney and heart dysfunction are closely intertwined, resulting in a high cardiovascular burden in renal failure patients (12,13). Fibrosis of Other Organ Systems It seems conceivable that uremia at large is a profibrotic condition, which may be detrimental for organ systems other than the kidney and heart. A study in CKD rats demonstrated the presence of fibrosis in the peritoneal membrane, an organ system not directly involved in hemodynamic homeostasis, within 6 weeks (14). The most clinically relevant organ system next to the heart and the kidneys is the vascular bed. Vessel stiffness, a key pathophysiological feature of uremia, and at least in part the consequence of fibrosis (15), results in systolic hypertension, diastolic hypoperfusion, a diminished physiologic response to orthostasis and volume loss, and an enhanced risk of cardiovascular events such as myocardial ischemia and ischemic and hemorrhagic stroke (16). Thus, it is clear that organ fibrosis is a key feature of CKD, yet the pathophysiological mechanisms underlying the fibrotic response during uremia remain to be fully elucidated. Uremic Solutes and Renal Fibrosis Fibrosis is the end result of a complex cascade of cellular and molecular responses initiated by organ damage (3). And even though there is a range of organ-specific triggers, the fibrotic process and associated signaling pathways are highly conserved between different organs (3). Furthermore, in recent years, epithelial-to-mesenchymal transition (EMT) has emerged as a leading, yet highly debated, hypothesis for the origin of collagenous matrix-producing myofibroblasts that contribute to the fibrotic response (17)(18)(19)(20)(21). 
Renal fibrosis ends in uremia, yet uremia per se will also further enhance the fibrotic process, because of the direct biological effects of uremic toxins. In a recent systematic review on the toxicity of two uremic retention products, e.g., indoxyl sulfate and p-cresyl sulfate, of the 27 retrieved highquality studies, at least five demonstrated a direct link to EMT and/or kidney fibrosis (22). Therefore, the remainder of this review will delineate the profibrotic impact of several uremic solutes on the kidney (summarized in Table 1). TGF-β β β Signaling Pathway in CKD Transforming growth factor (TGF)-β is one of the key factors driving the fibrotic response in most organs. Binding of TGF-β to a serine-threonine kinase type II receptor results in the recruitment and phosphorylation of a type I receptor, which in turn phosphorylates SMADs thereby initiating a host of signaling cascades (3,23). TGF-β is synthesized and secreted by inflammatory cells and a variety of effector cells, and activation of the pathway results in the formation and deposition of extracellular matrix proteins (3). In 1992, the role of TGF-β in renal fibrosis was still uncertain (24), yet in the following years more and more studies demonstrated the involvement of this factor in renal fibrogenesis (25)(26)(27). More recently, the interplay between uremic retention solutes, such as indoxyl sulfate and p-cresyl sulfate, and TGF-β has gained more scientific attention (28,29). Impact of Indoxyl Sulfate on Fibrogenesis Indoxyl sulfate is a small organic aromatic polycyclic anion derived from dietary tryptophan that has extensively been studied in conjunction with CKD-associated cardiovascular disease (22), and it is reported that this uremic solute can induce vascular calcification and correlates with coronary artery disease and mortality (30)(31)(32). Indoxyl sulfate is thought, however, to also contribute to a plethora of pathologies observed in dialysis patients, including tubulointerstitial inflammation and whole-kidney damage (22). Already in the 1990s, studies were published linking indoxyl sulfate to progression of renal disease as well as renal fibrosis (33,34). Miyazaki et al. observed that indoxyl sulfate overload augmented the gene expression of tissue inhibitor of metalloproteinases (TIMP)-1, intercellular adhesion molecule (ICAM)-1, alpha-1 type I collagen (COL1A1), and TGF-β in the renal cortex of 5/6-nephrectomized rats (33,34). Moreover, indoxyl sulfate stimulated the production of TGF-β by renal proximal tubular cells in vitro (34). Almost a decade later, it was demonstrated that exposure of HK-2 cells to indoxyl sulfate resulted in a reactive oxygen species (ROS)-mediated up-regulation of plasminogen activator inhibitor (PAI)-1 (28), a downstream signaling molecule of the TGF-β pathway associated with most aggressive kidney diseases (35)(36)(37). Furthermore, Saito et al. reported that indoxyl sulfate can increase α-smooth muscle actin (α-SMA) and TGF-β expression in HK-2 cells by activation of the (pro)renin receptor through ROS-Stat3-NF-κB signaling (38). Also in mouse renal proximal tubular cells, it was demonstrated that indoxyl sulfate activates the TGF-β pathway, as illustrated by an increased SMAD2/3 phosphorylation (39). Although the contribution of EMT to fibrosis remains controversial, phenotypic alterations reminiscent of EMT, also referred to as epithelial phenotypic changes (EPC), might play a role in the fibrotic response as well as disease progression (40,41). 
Several studies have demonstrated that indoxyl sulfate induces EMT, as demonstrated by a reduced expression of E-cadherin and zona occludens (ZO)-1, and increased expression of α-SMA in rat kidney as well as rat proximal tubular (NRK-52E) cells (42,43). Furthermore, Sun et al. reported that treatment with indoxyl sulfate increased the expression of the EMT-associated transcription factor Snail, concurrent with an elevated expression of fibronectin and α-SMA as well as a diminished expression of E-cadherin in both mouse kidneys and murine proximal tubular cells (39). Similar effects of indoxyl sulfate have also been observed in human renal cell models (42,44). Renal cells can become senescent due to a variety of (stress) triggers, including aging, and these cells, while in growth arrest, can contribute to renal fibrosis by secreting profibrotic cytokines and growth factors (45). It has been demonstrated that exposure of HK-2 cells to indoxyl sulfate resulted in an increased expression of p53 and p65, and augmented β-galactosidase activity (46,47), indicating that indoxyl sulfate induces senescence. Lastly, the renoprotective anti-aging factor klotho, which is involved in a myriad of homeostatic processes (48)(49)(50), might mitigate renal fibrosis by suppressing TGF-β signaling and vice versa, deficient klotho expression may accelerate senescence and fibrosis (51,52). Adijiang et al. reported that in both Dahl salt-resistant normotensive and Dahl salt-sensitive hypertensive rats, treatment with indoxyl sulfate resulted in lower gene expression of klotho (53). These findings were corroborated by the study of Sun and colleagues showing that indoxyl sulfate suppressed klotho expression in murine renal proximal tubules as well as HK-2 cells (54). Taken together, it is evident that indoxyl sulfate can contribute to renal fibrogenesis via an array of pathophysiological mechanisms (Figure 1), e.g., ROS production, stimulating expression of the profibrotic factor TGF-β, induction of EMT/EPC, promoting cellular senescence and by reducing klotho expression. Profibrotic Activity of Other Protein-Bound Solutes Next to indoxyl sulfate, several other uremic solutes have been linked to renal fibrosis, most prominently the p-cresol metabolite, p-cresyl sulfate (22,29). p-Cresol is formed by colonic bacteria from dietary tyrosine and this parent compound is either conjugated to sulfate or glucuronic acid giving rise to circulating p-cresyl sulfate or p-cresyl glucuronide (55). The main profibrotic effect currently described for p-cresyl sulfate is the induction of TGF-β (protein) expression. Sun et al. reported that exposure of murine renal proximal tubular cells to p-cresyl sulfate resulted in an increased expression of TGF-β and SMAD phosphorylation, concurrent with the induction of EMT (39). Conversely, in human conditionally immortalized renal proximal tubule epithelial cells, p-cresyl sulfate failed to induce EMT, whereas p-cresyl glucuronide did promote phenotypical changes associated with EMT (56). Furthermore, Watanabe et al. showed a ROS-dependent production and secretion of TGF-β protein in HK-2 cells upon treatment with p-cresyl sulfate (57). Moreover, they reported that p-cresyl sulfate increased the gene expression of TIMP-1 and COL1A1 (57). And, similar to indoxyl sulfate, it has been demonstrated that p-cresyl sulfate mitigated the expression of klotho in both murine and human renal cell models (54). Two other widely studied uremic solutes are hippuric acid and indole-3-acetic acid. 
Both protein-bound compounds have deleterious effects on normal renal (metabolic) functioning (58,59), yet there is scant evidence for their potential impact on fibrosis. Satoh and colleagues demonstrated that treatment with either hippuric acid or indole-3-acetic acid induced glomerular sclerosis in rats (60). And indole-3-acetic acid stimulated interstitial fibrosis (60). Also, it has been reported that indole-3-acetic acid activated the TGF-β pathway in HK-2 cells, as illustrated by an increased expression of PAI-1 (28). Thus, several of the protein-bound uremic retention solutes, although chemically very diverse entities, can elicit similar toxic effects thereby promoting renal fibrosis. Yet, the majority, if not all, of this evidence has been obtained experimentally in animals or by in vitro studies, whereas clinical studies on this aspect are virtually absent. Therefore, far more (clinical) research is needed to fully characterize the possible profibrotic effects of the more than 150 cataloged uremic retention solutes. New Kids on the Block Next to the widely studied protein-bound uremic toxins, several other lesser-known retention solutes might play a role in renal fibrosis, for instance leptin and marinobufagenin (MBG). Leptin (from the Greek word leptos, meaning "thin") is a product of the obese gene, identified in 1994 (61), and is secreted by adipocytes. Activation of its signaling pathway in the hypothalamus reduces food intake and increases energy expenditure. This adipocytokine is eliminated from the circulation via the kidneys mainly by metabolic degradation in the tubules (62). And it has been reported that serum leptin levels are increased in CKD and ESRD patients (63,64 (65). Furthermore, it has been reported by Wolf and colleagues that leptin plays a role in the progression of renal fibrogenesis (66). They demonstrated that leptin triggers glomerular endothelial cell proliferation via TGF-β. In addition, Briffa et al. reported that leptin increased TGF-β production and secretion in opossum kidney proximal tubule cells (67). Furthermore, it is shown that leptin stimulates COL1A1 production in renal interstitial (NRK-49F) fibroblasts (68). Moreover, hyperleptinemia is associated with increased blood pressure, a known risk factor for renal fibrosis, and additional deleterious (potential profibrotic) effects of this adipocytokine have been described in experimental and clinical studies (69,70). Noteworthy, two classes of compounds with similar damaging effects on the cardiovascular system as leptin are dimethylarginines and advanced glycation end-products; however, more studies are needed to unveil the suspected profibrotic potential of these compounds. For an overview of the toxicity of both groups of solutes, the interested reader is referred to the reviews by Schepers et al., and Mallipattu et al. (71,72). Marinobufagenin belongs to the family of endogenous cardiotonic steroids (CTS), also known as digitalis-like factors, and is produced by adrenal cortical cells (73). MBG is a Na + /K + -ATPase inhibitor that specifically binds to the α subunit of the sodium pump. This results in renal sodium excretion, increased myocardial contractility, and vasoconstriction (73). Therefore, CTS derived from dried toad skins were already used 1000 years ago in traditional medicine to treat congestive heart failure. Elevated levels of MGB have been detected in CKD and hemodialysis patients (74,75), and Fedorova et al. 
reported that MBG stimulated renal fibrosis in rats as well as increased tubular expression of Snail (76). Furthermore, they demonstrated that MGB induced EMT in LLC-PK1 cells as observed by increased levels of collagen I, fibronectin, and vimentin (76). In line with the observed profibrotic effect of MBG, it was reported that immunization against this steroid attenuated renal fibrosis in 5/6-nephrectomized rats (77). These thought-provoking results warrant further scrutiny and without a doubt many more uremic retention solutes may be classified as profibrotic in the near future. Future Directions CKD is a growing health concern and renal fibrosis is an integral part of the pathophysiological mechanism underlying disease progression. Current therapies for renal fibrosis mainly focus on the etiology of the disease, such as hypertension or diabetes, and as such show only limited efficacy in halting the fibrotic process (3). A key feature of uremia is the accumulation of a wide array of potential toxic solutes and slowly a body of evidence is emerging implicating these retention solutes as culprits in CKDassociated (renal) fibrogenesis. Therefore, therapies aimed at limiting the intake/absorption/production of uremic solutes, such as oral adsorbents or probiotics (78)(79)(80), or treatment modalities supporting the clearance of these compounds, e.g., living dialysis membranes (81), will most likely have a great potential for slowing fibrosis. Moreover, better understanding of the profibrotic effects of the multiplicity of uremic retention solutes will further aid in unveiling novel therapeutic targets. Author Contributions HM and PO conceived the manuscript. HM, ES, and RV wrote the manuscript. GG and PO critically revised and improved the manuscript writing. All authors approved the final version of the manuscript and fully agree with its content. Funding This work was supported by the Netherlands Organisation for Health Research and Development (ZonMW; grant number 114021010).
A Conceptual Model of Nurses’ Turnover Intention The World Health Organisation predicts a lack of 15 million health professionals by 2030. The lack of licenced professionals is a problem that keeps emerging and is carefully studied on a global level. Strategic objectives aimed at stimulating employment, improving working conditions, and keeping the nurses on board greatly depends on identifying factors that contribute to their turnover. The aim of this study was to present a conceptual model based on predictors of nurses’ turnover intention. Methods: A quantitative, non-experimental research design was used. A total of 308 registered nurses (RNs) took part in the study. The Multidimensional Work Motivation Scale (MWMS) and Practice Environment Scale of the Nursing Work Index (PES-NWI) were used. Results: The conceptual model, based on the binary regression models, relies on two direct significant predictors and four indirect significant predictors of turnover intention. The direct predictors are job satisfaction (OR = 0.23) and absenteeism (OR = 2.5). Indirect predictors that affect turnover intention via job satisfaction are: amotivation (OR = 0.59), identified regulation (OR = 0.54), intrinsic motivation (OR = 1.67), and nurse manager ability, leadership and support of nurses (OR = 1.51). Conclusions: The results of the study indicate strategic issues that need to be addressed to retain the nursing workforce. There is a need to ensure positive perceptions and support from managers, maintain intrinsic motivation, and promote even higher levels of motivation to achieve satisfactory levels of job satisfaction. Introduction Since the early 1920s, the phenomenon of employee turnover has preoccupied many experts. With the advent of the 21st century, it became a global problem that spread to all areas of organisational processes. Considering the development of new strategies for employees' retention, market dynamics, the development of research methodology, and technology, it is not surprising that employee turnover intention is a phenomenon that will be studied again and again. The practical application of theoretical knowledge and the development of conceptual models of employees' turnover intention will help managers develop better and more efficient business practices that will ultimately optimise the use of organisational resources. Turnover intention reflects the employees' attitude towards the organisation, or, in other words, their conscious intention to leave the organisation [1]. An individual's intention is identified as the most important cognitive antecedent of behaviour [2]. The healthcare sector is the largest group of employees in the world, and nurses account for the largest share in this group. The lack of licenced professionals is a recurring problem that is being carefully studied at the global level. Strategic objectives aimed at promoting employment, improving working conditions, and keeping nurses on board depend heavily on identifying the factors that contribute to their turnover. The results of numerous international studies indicate that there is a significant increase in the number of nurses who express their intention to change jobs [3,4]. Throughout history, various theories and models of turnover intention have been presented describing how to identify the key cognitive antecedent of this behaviour. 
Considering the presented constructs, in this study, we have singled out the ones that played an important role and thus presented a new model of nurses' turnover intention. Background Even though the term turnover intention was not generally used until the middle of the 20th century, the first published work dates from 1925. It dealt with questions and answers about turnover intention among employees and it is considered as the antecedent of the standard model until the 1960s. Over the years that followed, scientists introduced methodological characteristics that are still used today in studies that look into employees' attitudes, workplace conditions, job satisfaction, and demographic characteristics [5]. The first formal theory of voluntary fluctuation was described by March and Simon in 1958 when they made a paradigm shift that existed at that time. From that point on, and until 1970, was a period referred to as the foundational models period. The period around the 1980s was the time of theory testing and served as an introduction for models that appeared in the 1990s and became widespread after that. The early 21st century was marked by meta-analyses and the creation of new constructs through the concept of turnover intention [5]. It is extremely important to note that models and theories related to turnover intention are frequently referenced and identified in the literature. In her paper, Ngo-Hena states that models are closely related to theories, and that the difference between theories and models is not always clear [6]. Peterson and Bredow clearly distinguish between models and theories. The main components of theories are concepts and propositions, and core knowledge is considered an essential feature of any profession. These include models, definitions, constructs, and analyses where the interrelationship between the observed constructs and the accompanying variables used to explain the occurrence of certain phenomena is crucial. On the other hand, conceptual models are used to define the purpose and objective in building theoretical frameworks [7]. Conceptual Models of Turnover Intention Turnover intention is considered in the literature as a complex concept, because it is associated with economic, organisational, and psychological outcomes that depend on a variety of factors [8] and have implications for organisations and individuals alike [9], and that cannot be measured with a single variable. A concept represents a different perception that varies from one individual to another and refers to interrelated ideas that represent and carry a mental image of a phenomenon. Concepts are focused on a specific phenomenon; they are more abstract and less explicit and specific than theories. They develop through three stages: formulation, modelling, and validation. They provide a solid background for building relevant theories, and researchers use them as a guide for developing research ideas [7]. Throughout history, numerous turnover intention models have been presented and analysed. The most far-reaching models that had the most far-reaching impact on the development of the concept of turnover intention and theoretical frameworks are presented in the continuation. The first published formal theory about turnover intention initiated an entire conceptual era. In 1977, Mobley and Price were among the first researchers who tested various turnover intention models, which were based on the Theory of Organisational Equilibrium. 
They identified a more comprehensive turnover process and outlined the sequence of steps that employees go through before making the actual decision to quit [10]. Their intermediate linkages model proposed a set of realizations in the process of withdrawal (e.g., thoughts of abandonment, expected benefits) and job search behaviour (e.g., evaluation of alternatives) that associate job dissatisfaction with actual withdrawal. In particular, Mobley presented job dissatisfaction as the main construct and showed that its effects that lead people to think about leaving their jobs. It is interesting to note that the aforementioned authors were among the first to also identify the potentially mitigating effects on turnover intention [6,11]. Based on Price's earlier work, Price and Mueller developed a causal turnover model in 1981 that identified the antecedents of job satisfaction and turnover intention, and added organisational commitment as a mediator between these two variables [12]. Other signs of turnover intention were, among other things, the nature of the job (e.g., a routine job), involvement, job commitment, and family connections. Price's work represented a significant horizontal and vertical shift in the turnover model development by introducing job satisfaction [6,13]. Characteristic of the research of these authors was the specificity of the sample, since the participants were exclusively RNs. The research showed that studies based on specific groups of subjects were extremely important for the development of theoretical frameworks [6]. An interesting approach to turnover intention was also presented by Hulin and Hanisch in 1991 in their general withdrawal model. The model is based on the presumption that job dissatisfaction in general or some specific aspects trigger a set of behavioural and cognitive responses. Psychological and behavioural withdrawal are only a part of the comprehensive set of adaptive behaviours, which also include modified behaviour and attempts to influence better work outcomes. Actual turnover intention is merely a subset within the withdrawal construct, which also includes alternative actions, such as tardiness, absenteeism, and retirement [13,14]. In 1994, Lee and Mitchell presented the unfolding model, in which they showed that turnover decision is not always the result of accumulated job dissatisfaction, but can occur even without much reflection. They suggested several decision-making paths that employees can take before actually deciding to quit their job [10]. In general, their model highlights the complexity and dynamics of the turnover process and suggests that future researchers who wish to study turnover intention should consider the ways in which people leave their jobs [13]. This paper outlines the complex process by which employees, as individuals, evaluate their feelings, personal situation, work environment, and ultimately make the decision over time to stay with or leave the organisation. They noted that existing models of employee turnover are too simplistic, and that turnover intention can develop in many ways. In addition, one of the key events that trigger turnover is system shock, described as an event that causes individuals to evaluate their current and potentially future jobs. They also emphasized the urgent need for a new theory of turnover intention that systematically analyses all the important facts that have taken place in the last 50 years [10]. 
In 2002, Steel presented an analysis that identified two other important reasons for employees' turnover in situations where they do not have alternative employment. The first was having an alternative source of income and not being forced to work, and the second was receiving a spontaneous job offer [15]. In 2004, Maertz and Campion identified the proximal causes of turnover intent and, consequently, the best predictors of turnover intention behaviour. They combined content and process models of turnover intention and showed that the motivational forces of commitment and withdrawal are systematically related to the type of turnover decision [16]. This suggests that different groups of employees are motivated by different triggers. It also suggests that these causes play a role in the effects of all other major constructs in the literature. A key finding of the empirical study was that employees who quit without a job alternative had stronger negative affect than those who chose otherwise, indicating the importance of the effect of impulsive quitting [13,16]. Using theoretical approaches, researchers have developed several empirical models that interpret individuals' behaviour. Common themes among these models point to the fact that behaviour in turnover intention is a multifaceted process that includes attitudes, decisions, and behavioural components. The more recently proposed models are often continuations or fine-tunings of earlier models that are considered to be the basis of the current concept of turnover intention and related theories [6]. Despite numerous studies, many questions remain unanswered about the mechanisms that drive nurses to quit and the causes and consequences that this phenomenon entails at the personal, organisational, and social levels [17]. Constructs Based on previous studies, researchers have used numerous constructs to measure, explain, and describe turnover intention. Most commonly, they analysed job satisfaction, stress, emotional exhaustion, income, working conditions, autonomy, recognition, and respect within a team of health professionals, personal characteristics, leadership skills, work environment, and loyalty to the organisation [18][19][20][21]. These constructs are presented in continuation of the subsections: work motivation (Section 1.3.1), nursing practice environment (Section 1.3.2), job satisfaction (Section 1.3.3), and absenteeism (Section 1.3.4). Work Motivation The individual's motivation is deeply rooted in the Self-Determination Theory-SDT [22,23]. It is a widely accepted theory applied in various fields (e.g., education, healthcare, sports) as well as in organisational management [24]. Central to SDT is the difference between autonomous motivation and controlled motivation. Controlled motivation is influenced by extrinsic regulation (social and material) and introjected regulation, whereas autonomous motivation is influenced by identified regulation and intrinsic goals [25]. It is interesting to note that professional motivation in nursing has not been analysed as a separate factor, but instead it has been analysed through or within other factors. Turnover intention, older age, and low motivation of the newly employed are listed in the literature as reasons for the current nurse shortage [26]. Nursing Practice Environment The nurses' working environment is a complex construct to conceptualise and measure. 
Its theoretical underpinnings span into organisational, occupational, and labour sociology, and it is defined as a set of organisational determinants in the work environment that impact professional nursing practice [27]. Emphasis was placed on more intense research in nurses' work environment to better understand of the turnover intention [28]. Job Satisfaction Job satisfaction is defined as an individual's complex perception that includes certain assumptions and beliefs about a particular job (cognitive component), feelings toward the job (affective component), and job evaluation (evaluative component) [29]. Nurses' job satisfaction can be divided into three categories: those related to the organisation and performance of the job, those related to interpersonal relationships, and those related to the personal characteristics of the employees themselves [30]. Absenteeism Absenteeism is defined as the employee's intentional or regular absence from their workplace [31]. It is also defined as absence from work in the sense of a shortage of workforce due to the inability of employees to perform efficiently, for personal, legal, or illness-related reasons. Sick leave-related absenteeism is defined as the altered perception, ability, and motivation of an employee, usually due to illness or injury. This leads to staff shortages and consequently jeopardises the efficiency and quality of care while increasing the burden on other employees [32]. Based on our previous work demonstrating the content and construct validity of variables of individual constructs [33], which was a pre-test to the current study, we decided to select, examine, and present those that have either a positive or negative significant effect on turnover intention, though this time on a larger number of subjects, as proposed in the previous study. In addition, the purpose of this study was to present a conceptual model based on the effect of the predictors that influence nurses' turnover intention. Study Design and Participants A quantitative, non-experimental research design was used. An anonymous survey was conducted. A closed-ended questionnaire was used as an instrument. The questionnaire was sent via email using the web-based survey tool 1KA [34]. Data were collected between December 2019 and December 2021 at the University of Rijeka, Faculty of Health Studies (Croatia), where 308 registered nurses (RNs) voluntarily responded to the questionnaire. The study included fully employed RNs with high school vocational education and training (VET) and Bachelor of Science (BSc) nurses who are continuously and directly employed in healthcare. To obtain a heterogeneous sample, we collected data from nurses from all three levels of health as well as social welfare institutions from different Croatian counties. The binary regression sample size formula n = 100 + 50i [35] was used to determine the sample size of the study, where i represents the number of predictors. The binary regression analysis with the highest number of predictors, i.e., predicting job satisfaction, was used as a reference. In the pre-test for the current study [33], a total of i = 4 predictors were found to be significant, resulting in a minimum sample size of 300 participants. 
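As an illustration of the sample-size rule n = 100 + 50i and of how odds ratios such as those reported later in the Results are typically obtained from a binary (logistic) regression, the following is a minimal, hypothetical sketch on synthetic data; it is not the study dataset, the coefficients are invented, statsmodels is assumed to be available, and the Nagelkerke pseudo-R² is computed manually from the fitted and null log-likelihoods.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Sample-size rule used in the study: n = 100 + 50 * i, where i is the number of predictors.
def minimum_sample_size(i):
    return 100 + 50 * i

print(minimum_sample_size(4))  # 300

rng = np.random.default_rng(7)
n = 308

# Synthetic stand-ins for two predictors (illustration only).
X = pd.DataFrame({
    "job_satisfaction": rng.integers(0, 2, size=n),   # 1 = high, 0 = low (median split)
    "absenteeism_days": rng.poisson(5, size=n),
})
# Synthetic dichotomous outcome loosely following the reported direction of effects.
lin = -0.5 - 1.4 * X["job_satisfaction"] + 0.15 * X["absenteeism_days"]
turnover_intention = rng.binomial(1, 1 / (1 + np.exp(-lin)))

model = sm.Logit(turnover_intention, sm.add_constant(X)).fit(disp=0)

# Odds ratios (and 95% CIs) are the exponentiated regression coefficients.
print(np.exp(model.params).round(2))
print(np.exp(model.conf_int()).round(2))

# Nagelkerke pseudo-R^2 from the full and null log-likelihoods.
cox_snell = 1 - np.exp(2 * (model.llnull - model.llf) / n)
nagelkerke = cox_snell / (1 - np.exp(2 * model.llnull / n))
print("Nagelkerke R^2:", round(nagelkerke, 3))
```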
Instrument The questionnaire was based on the Multidimensional Work Motivation Scale (MWMS) and the Practice Environment Scale of the Nursing Work Index (PES-NWI), as well as a section with closed-ended questions to collect demographic data (e.g., age, gender, years of service as an RN), employment information, absenteeism (number of days absent in the past 12 months), and a stand-alone question that measured the level of job satisfaction. The MWMS was developed for the application of SDT in practice [23], while the PES-NWI [36] was designed for the analysis of nurses' work environment. The MWMS consists of six constructs: amotivation, external regulation (social), external regulation (material), introjected regulation, identified regulation, and intrinsic motivation, while the PES-NWI consists of five: nursing foundations for quality of care; nurse participation in hospital affairs; nurse manager ability, leadership, and support of nurses; staffing and resource adequacy; and collegial nurse-physician relations. The PES-NWI was developed based on the characteristics of hospitals known to attract employees (so-called "magnet hospitals"). The validity and reliability of the Croatian versions of both instruments have been confirmed in previous studies [33,37]. Permission to use and adapt both instruments was obtained from the authors of both questionnaires by e-mail. The level of job satisfaction was measured in accordance with the global approach on a scale of 1-10, where 1 represents dissatisfaction and 10 complete satisfaction. The global approach is based on the definition that job satisfaction is a general affective attitude towards one's job and organisation. It can be understood as a one-dimensional construct, and it is measured by a single-item scale, i.e., based on the answer to a completely unambiguous question: "How satisfied are you with your job?" [38]. This was later recoded into a dichotomous variable: low job satisfaction (median and below) and high job satisfaction (above median). Turnover intention was measured with a dichotomous (yes/no) variable: "Have you considered changing your job in the last year?" Absenteeism was measured as the number of days of absence from work during the previous year. Ethical Considerations All participants were informed of the research objectives and asked to participate voluntarily in the study. They had the right to withdraw from the study at any time without any consequences. The study was conducted in accordance with ethical principles and human rights standards [39]. Ethical approval was granted by the Ethical Committee of the Faculty of Health Studies in Rijeka and the Faculty of Medicine in Osijek (both in Croatia, EU). Confidentiality of participants was ensured both during and after the study. All categorical data were represented by absolute and relative frequencies, whereas numerical data were represented by the arithmetic mean and standard deviation, or by the median and interquartile range if the data were not normally distributed. The Shapiro-Wilk test was used to check the normality of the distribution [40]. Comparison of categorical data within and between groups was performed using the Chi-square test. Correlations between numerical variables were tested using the non-parametric Spearman's rank correlation test (rs). The Mann-Whitney U test was used to test differences in numerical variables between two groups. Binary regression analysis was used to investigate the predictors of turnover intention. 
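The analysis workflow described above (normality checking, non-parametric correlation and group comparisons, and binary logistic regression yielding odds ratios) could be reproduced along the following lines. This is a generic sketch, not the authors' code; the file and column names (e.g., survey.csv, job_satisfaction, turnover) are hypothetical placeholders, and it assumes one row per respondent in a pandas DataFrame.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# One row per respondent; all column names below are illustrative only.
# turnover is assumed to be coded 0/1 (no/yes).
df = pd.read_csv("survey.csv")

# Shapiro-Wilk normality check for a numerical variable
print(stats.shapiro(df["age"]))

# Median split of the 1-10 job satisfaction score (low = median and below)
df["satisfaction_high"] = (df["job_satisfaction"] > df["job_satisfaction"].median()).astype(int)

# Spearman's rank correlation between two numerical variables
print(stats.spearmanr(df["age"], df["years_of_service"]))

# Mann-Whitney U test comparing a numerical variable between the two groups
low = df.loc[df["satisfaction_high"] == 0, "absenteeism_days"]
high = df.loc[df["satisfaction_high"] == 1, "absenteeism_days"]
print(stats.mannwhitneyu(low, high))

# Chi-square test for two categorical variables
print(stats.chi2_contingency(pd.crosstab(df["turnover"], df["satisfaction_high"])))

# Binary logistic regression: predictors of turnover intention
X = sm.add_constant(df[["job_satisfaction", "absenteeism_days"]])
logit = sm.Logit(df["turnover"], X).fit()
print(logit.summary())
print(np.exp(logit.params))  # exponentiated coefficients = odds ratios (OR)
```

In such a model an odds ratio below 1 marks a negative predictor and an odds ratio above 1 a positive one, which is how the results reported below should be read.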
All the assumptions regarding the use of this statistical analysis (e.g., sample size, multicollinearity) were carefully considered [41]. Results A total of 308 participants were included in this study, with a median age of 30 years. They were predominantly female RNs with high school vocational education and training (VET) (Table 1). Two significant predictors of turnover intention were identified when binary regression was applied: job satisfaction as a negative predictor with OR = 0.23 and absenteeism with OR = 2.5 (Table 2). Furthermore, four significant predictors of job satisfaction were identified: two negative predictors, amotivation with OR = 0.59 and identified regulation with OR = 0.54, and two positive predictors, intrinsic motivation with OR = 1.67 and nurse manager ability, leadership, and support of nurses with OR = 1.51 (Table 3). In the preliminary analyses conducted to investigate multicollinearity between potential predictors, the variable years of work experience was excluded from further analysis because it was strongly correlated with age (rs = 0.930, p < 0.001), with no statistically significant differences between participants with high and low job satisfaction (Mann-Whitney U test, p = 0.550). The binary regression model for job satisfaction was significant (Nagelkerke R2 = 29.0%). The attempt to identify predictors of absenteeism was unsuccessful, as the binary regression model was not significant (χ2 = 11.854, d.f. = 14; n = 254; p = 0.62; Nagelkerke R2 = 5.5%). The conceptual model of turnover intention was constructed based on the binary regression results and is presented in Figure 1. Discussion Any progress towards a better understanding of RN turnover intention is of utmost importance. The World Health Organisation predicts a shortage of 15 million health workers by 2030 [42]. This will result in the largest expected skills shortage, which could trigger global competition for qualified health workers. 
We also need to highlight work motivation as one of the important factors in solving the recruitment and retention problems in the healthcare sector. Middle-income countries will face labour shortages, as their demand for labour will exceed supply [43]. The economic impact of nurses leaving the healthcare system cannot be fully represented due to a lack of consistent definitions and measurements. However, it is estimated that the cost is four to five times higher, as productivity decreases with new hires [44]. When employees leave their jobs, valuable financial and social capital is lost, affecting the morale of other employees and the reputation of the organisation, as well as teamwork and work processes [1]. In 1981, Price and Mueller tested their prevailing conceptual model exclusively on RNs to explain turnover intention. The relatively rapid recognition and persistent use of their model compared to those of other authors contributed to its legitimacy and subsequently triggered a series of studies on the causes of turnover intention among nurses [13,45]. Many authors describe organisational and individual factors that influence job satisfaction, turnover intention, and actual quitting. Lake, in her study, integrated specific variables that pertain to the nursing profession (burnout and autonomy) [46]. Brewer-Kovner emphasises the economic factor by including a robust set of variables (labour market demand and household demands) to predict factors influencing the turnover of young RNs [18]. Inspired by the development of various models throughout history, in this study we have examined the relationship between the predictors that influence nurses' turnover intention and have presented a conceptual model based on the results. Many studies on work motivation are based on the examination of intrinsic motivation; therefore, the application of SDT to RN work motivation contributes to a better understanding of the concept of motivation [47]. Identifying factors that influence nurses' motivation is considered a preventive tool against dissatisfaction and turnover intention [48]. Amotivation is described as the absence of motivation for an activity [23]. In the present study, amotivation had a significant negative impact on the level of job satisfaction. Furthermore, amotivation is emphasised as a negative predictor of job performance because it has to do with not being motivated at all [49]. This is not surprising: without motivation, no one will enjoy working, because motivation is the drive that makes a person work towards a goal. Our results show that intrinsic motivation had a significant positive impact, while identified regulation had a negative impact on the level of job satisfaction. In the literature, intrinsic motivation is described as doing an activity for its own sake, i.e., because it is intrinsically interesting and enjoyable, while identified regulation refers to doing an activity because one identifies with its meaning/value and voluntarily accepts it as one's own [23]. An intrinsically motivated worker does not work just to earn money to satisfy his/her needs or those of his/her family, but works because of the satisfaction he/she receives from the challenges of work. This provides the opportunity to use his/her knowledge, skills, and potential, thereby developing a sense of accomplishment and self-fulfilment, which in turn makes him/her marketable in society [49]. 
Intrinsic motivation factors of nurses' job satisfaction have a significant impact on the performance, stability, and productivity of the health institution [50]. Nurses' lifelong learning has a significant impact on performance and productivity [51]. Accordingly, continuing professional development (CPD) programmes are central to nurses' lifelong learning and are an important aspect that keeps nurses' knowledge and skills up to date. This requires different learning methods and ways of acquiring and building knowledge. To achieve this, nurses can take different approaches to acquiring knowledge: CPD through formal learning, courses, or workshops, as well as informal learning in the workplace, self-reflection, review of the literature for best evidence through journal clubs, and mutual feedback. Evidence from the CPD literature indicates that many nurses prefer informal learning methods in the workplace and find that the most meaningful learning occurs in interaction with their colleagues. Nurses were found to value informal learning methods such as supervision, participation in team meetings/briefings, and mentoring. Organisational culture played an important role in staff professional development [50,51]. The main challenge for nursing is capitalising on the workplace as a learning resource that can integrate learning with development, improvement, knowledge translation, inquiry, and innovation. This requires skilled facilitators, particularly for systems leaders [52]. In previous studies, identified regulation, as part of autonomous motivation, has been associated with increased job satisfaction [25,53]. Our result presents a paradoxical situation, because Croatian nurses who identify with their vocation, find meaning in their job, and internalise the values that the work of a nurse entails are likely to feel less satisfied in their job [33]. If a worker identifies with and/or genuinely enjoys their work, motivation by earnings may be secondary [54]. Of all the SDT motivation types in this study, only intrinsic motivation had a positive impact on Croatian nurses' job satisfaction. RNs tend to be more motivated by intrinsic motivation factors than other occupational groups. Intrinsic motivation factors have a significant impact on nurses' performance; they are a form of natural motivation that can increase RNs' interest in performing their tasks. A 2019 study by Gunawan et al. states that hospital management needs to create and improve work motivation that can affect RNs' performance [55]. If managers know what motivates staff, it can have an impact on performance results. This is not surprising, as Maslach and Leiter [56], in their six areas of work-life model, emphasised the importance of intrinsic motivation through intrinsic rewards such as recognition for one's contributions at work. Managers are advised to identify the needs of nurses and design a relevant motivational programme to encourage nurses to achieve maximum performance [55]. Similar to our results, Moll-Khosrawi et al., in 2021, also found that job satisfaction was positively correlated with intrinsic motivation and negatively correlated with amotivation [57]. Our results suggest that, of all the PES-NWI constructs, only nurse manager ability, leadership, and support of nurses was found to be a valid predictor of job satisfaction. 
In the literature, the construct nurse manager ability, leadership, and support of nurses refers to the ability of head nurses to be good nursing managers and leaders, to support other nurses, and to back them up when making decisions [36]. Maslach and Leiter [56] also emphasise the importance of managers who can improve intrinsic motivation through intrinsic rewards. This suggests that a nurse manager can have a significant impact on employees' intrinsic motivation. The nurse manager is responsible for a large area of a healthcare organisation and manages large budgets and a large number of RNs. Therefore, an RN should not be promoted to the role of nurse manager without the necessary training [58]. Both nurses who wish to be promoted to nurse manager and current nurse managers should attend training programmes. They also need to develop advanced management skills to meet current and future challenges [59]. The healthcare industry, in particular, generally suffers the consequences of absenteeism and turnover and has one of the highest turnover rates of any industry. Supervisors or managers with strong leadership and motivational skills are essential to achieving the desired behaviours and attitudes in employees. Nurse managers with higher emotional intelligence have a greater potential to be successful in a leadership role. Emotional intelligence can be developed and trained through nurses' lifelong learning. These skills affect the nurse's behaviour, which ultimately affects their leadership skills, such as decision-making, performance, and productivity [60]. Previous research has shown that collegial support is a factor that can help reduce turnover among staff [42]. A study conducted in the Netherlands shows that organisational or managerial interventions have the greatest impact in preventing increased turnover [61]. Several authors have shown that personal characteristics such as age [4,62,63] and years of work experience [4] are associated with job satisfaction. No similar results were found in our study. Implications for Nursing Practice Knowing the intrinsic and extrinsic motivators that drive people to engage in nursing is an important aspect of recruitment. Our plan is to obtain additional data on RN turnover at the national level to better understand this phenomenon and to contribute with our research to the development of a new approach in the recruitment process. Given the current RNs' work environment, the challenge for healthcare management is to find out how to improve motivational scores through their work. Although there is empirical evidence for some of the constructs in this study, much more attention should be given to research aimed at understanding nurses' turnover intention in terms of individual behaviour. More conceptual models should be developed with the antecedents and consequences of turnover intention to better understand this phenomenon. As mentioned above, there is a need to implement targeted lifelong training programmes for nurse managers with the aim of enhancing and improving the competencies necessary for leadership and the management of material and human resources (e.g., Emotionally Intelligent Leadership or transformational leadership programmes). Emotional intelligence is an essential attribute of an effective leader because it affects job satisfaction, atmosphere, and the way people work. 
For nurse managers, emotionally intelligent leadership is thought to potentially impact staff retention, teamwork, quality of patient care, and job satisfaction. Leadership style impacts nurse training and support, the work atmosphere, nurses' mental health, and staff retention [64]. Transformational leadership is a leadership style characterised by a leader's ability to understand their organisation's culture and to reimagine and rebuild it according to a new vision. This authentic form of leadership embraces innovation and creativity while requiring competence in building trust and relationships and rational compassion [65]. We would like to encourage Croatian researchers and all relevant stakeholders to participate in national/EU/global initiatives on nursing education and retention. Limitations of the Study There are two limitations to this study that should be considered before generalising its results. First, the results are based on data collected using a web-based questionnaire. Although the survey was anonymous, participants may not have provided entirely truthful responses. This raises several issues, such as the possibility that the observation of participants may bias their actual responses. Therefore, it remains a challenge to choose research methods and data collection procedures that overcome these issues. Second, the study was conducted in one country. Hence, further replications should be performed in various settings to confirm the validity of the presented conceptual model. We point to four directions for future research: (1) conducting an in-depth study of nurses who recently left their previous jobs, identifying the actual reasons for turnover (e.g., accepting a job abroad, low salary) and identifying and classifying their new job positions, (2) examining the impact of employee turnover on other team members and the organisation as a whole, (3) investigating whether nurse managers have acquired adequate leadership skills and competencies, and (4) better clarifying the impact of absenteeism as a predictor of turnover intention. Conclusions The presented conceptual model indicates the strategic issues that need to be addressed to retain the nursing workforce. Intrinsic motivation had a significant positive impact, while identified regulation had a negative impact on the level of job satisfaction. Nurse manager ability, leadership, and support of nurses was also found to be a valid predictor of job satisfaction. There is a need to ensure positive perceptions and support from managers, maintain intrinsic motivation, and promote higher levels of motivation to achieve a satisfactory level of job satisfaction. Organisations need to ensure that managers and supervisors are properly trained to adequately support their employees.
Two new species of the genus Macrothele Ausserer, 1871 (Araneae, Macrothelidae) from China Abstract Background The family Macrothelidae Simon, 1892 belongs to the infraorder Mygalomorphae and currently contains two genera and 47 described species, distributed in southern Europe, South, East and Southeast Asia, and Central, West and North Africa. New information Two new species of the funnel-web spider genus Macrothele Ausserer, 1871 from Yunnan Province, China are described: Macrothele washanensis Wu & Yang, sp. n. (♂♀), and M. wuliangensis Wu & Yang, sp. n. (♂♀). Detailed descriptions, diagnostic illustrations and a distribution map are provided. All specimens are deposited in the Institute of Entomoceutics Research, Dali University (DUIER). Introduction The spider family Macrothelidae Simon 1892 is an important spider group in the infraorder Mygalomorphae. These spiders usually build funnel webs using crevices and cavities in slopes, and occasionally build webs in the surface leaf-litter layer. So far, the family comprises 47 species in two genera reported worldwide (World Spider Catalog 2022), of which 29 species are known from China (Pocock 1901, Saitô 1933, Hu and Li 1986, Shimojana and Haupt 1998, Song et al. 1999, Xu and Yin 2001, Xu et al. 2002, Li and Zha 2013, Shi et al. 2018, Wang et al. 2019, Tang et al. 2020, Lin et al. 2021, Tang et al. 2022). We are carrying out a systematic investigation of the Chinese fauna of Macrothelidae and have collected numerous specimens from Yunnan Province. During this study, two new species were discovered and are described here: Macrothele washanensis Wu & Yang, sp. n. and M. wuliangensis Wu & Yang, sp. n. Materials and methods Specimens were examined and measured with Olympus SZX16 and Leica M205A stereomicroscopes and an Olympus CX33 compound microscope. All specimens examined were preserved in 80% ethanol. The left male palps were examined after dissection and removal from the specimens, and the female genitalia were treated in 10% NaOH for 24 hours to dissolve tissue and examine the vulvae. The distribution map was produced with ArcMap software (version 10.8). Diagnosis Males of Macrothele washanensis sp. n. resemble M. arcuata Tang, Zhao & Yang, 2020 in having a similar bulb shape, but they can be distinguished by the BH lacking a protrusion in prolateral view, the embolus tapering from base to apex and hook-shaped, the ratio of the length of the BH to the length of the E being almost 1 : 4 (Fig. 1B-E), the four tibial spines visible in prolateral view (Fig. 1G-I), tibia I with nine spines visible in ventral view, and tibia II straight, with three ventral spines (Fig. 2) (vs tibia with three prolateral spines and three ventral spines, embolus with a visible protrusion, joint of embolus and bulb strongly bent, embolus needle-shaped, the ratio of the length of the BH to the length of the E almost 1 : 5, tibia I with 26 spines, and tibia II with a retrolateral bend and 15 ventral spines in M. arcuata). Females of M. washanensis sp. n. can be differentiated from M. arcuata by the receptacula being apically teardrop-shaped and the ratio of the length of the T to the length of the CD being almost 1 : 6 (Fig. 6) (vs copulatory duct long, shaped like the letter "G", receptacula apically oval, and the ratio of the length of the T to the length of the CD almost 1 : 8 in M. arcuata). Etymology The species epithet is a noun in apposition referring to the type locality. Ecology This species spins large funnel webs over crevices. 
The female often stays at the entrance of the funnel tube; when the sheet part of the funnel web is hit by another animal, she quickly rushes out to catch the prey or attack the enemy (Fig. 7B). If a male of the same species arrives, releasing a chemical cue or sending vibrations via the web, the female accepts the cue and walks out for further communication and copulation (Fig. 7A). Diagnosis Males of Macrothele wuliangensis sp. n. resemble M. washanensis sp. n. in having similar palpal bulb morphology, but they can be distinguished by the spination visible in prolateral and dorsal views of the palpal tibia; females of the new species resemble those of M. washanensis sp. n. in the apically teardrop-shaped receptacula bent inwards apically. Males of M. wuliangensis sp. n. can be distinguished from M. washanensis sp. n. by having five tibial spines visible in prolateral view and two tibial spines visible in dorsal view (Fig. 8G-I); tibia I has 10 ventral spines, with six arranged in three pairs, and tibia II has seven ventral spines (Fig. 9) (vs four tibial spines visible in prolateral view, no dorsal spines, tibia I with nine spines visible in ventral view, and tibia II with three ventral spines in M. washanensis sp. n.). Females of M. wuliangensis sp. n. can be differentiated from M. washanensis sp. n. by the ratio of the length of the T to the length of the CD being almost 1 : 5 (Fig. 13) (vs almost 1 : 6 in M. washanensis sp. n.). Etymology The specific name refers to the type locality and is a noun in apposition.
Toothbrush, its Design and Modifications: An Overview Abstract The toothbrush has been an integral part of the daily routine across many cultures around the world from the times of antiquity to the 21st century. Over the years, several types of toothbrush have been invented. Some of them are useful for physically and mentally handicapped children. The aim of this review article is to describe toothbrush design and the various modifications that have been made over the years. INTRODUCTION Effective plaque control facilitates good gingival and periodontal health, prevents tooth decay and preserves oral health for a lifetime. (1) The various methods commonly used for plaque removal include chemical and mechanical methods. Among the various mechanical aids available, toothbrushing is the primary and most widely accepted method of plaque removal. (2) Toothbrushing carried out with effective technique and for an adequate duration of time has been found to be a highly effective measure of plaque control. The design of a toothbrush, especially with regard to its size and contour, should be such that it aids in the mechanical removal of plaque. The efficacy depends on the type and design of the brush, the method of brushing, the time taken and also on supervision in the care of small children. (3) Over its long history, the toothbrush has evolved to become a scientifically designed tool using modern ergonomic designs and safe and hygienic materials that benefit us all. (4) Due to the variety of brushes currently available and the constant development of new brushes, the dental professional must maintain a high level of knowledge of these products and advise patients appropriately. (5) Hence, this review article emphasizes toothbrush designs. DISCUSSION Among the various mechanical aids available, toothbrushing is the primary and most widely accepted method of plaque removal (Loe 1979). (6) Various toothbrushing methods have been advocated. Each has been designed for the specific needs of the patient, such as dental and periodontal conditions. The basic fundamentals have not changed since the times of the Egyptians and Babylonians, which include a handle to grip and a bristle-like feature to clean the teeth. Using modern ergonomic designs and safe and hygienic materials, the toothbrush has evolved over its long history to become a scientifically designed tool that benefits all of us. (7) Types of toothbrushes An ideal toothbrush should (1): 1) Conform to the individual's requirements in size, shape and texture. 2) Be easily and efficiently manipulated. 3) Be readily cleaned and aerated, and impervious to moisture. 4) Be durable and inexpensive. 5) Have the prime functional properties of flexibility, softness and appropriate diameter of the bristles or filaments, and strength, rigidity and lightness of the handle. 6) Be designed for utility, efficiency and cleanliness. I) Conventional or manual toothbrush design The ideal toothbrush design is specified as being user-friendly, removing plaque effectively and having no deleterious soft tissue or hard tissue effects. (9) The conventional manual toothbrush design mainly consists of (10) the head, bristles and handle. 
Typical head specifications are: 1) width: 3/8 inch; 2) surface area: 2.54-3.2 cm; 3) number of rows: 2-4 rows of bristles; 4) number of tufts: 5-12 tufts per row; 5) number of bristles: 80-85 bristles per tuft. 1) Head: It is designed for effective cleansing of every tooth surface. Each brush head is divided into 2 parts: the toe, located at the extreme end of the head, and the heel, closest to the handle. (12) Toothbrush heads are composed of tufts, which are individual bundles of filaments secured in a hole in the toothbrush head. (5) Filaments within the tufts are known as bristles. Toothbrush heads usually come in different shapes and sizes. (8) a) Shapes: There are a variety of shapes, such as rectangular, oblong, oval, almost round and diamond. Every tooth surface can be cleaned effectively with the conventional toothbrush head designs. A diamond-shaped toothbrush is convenient for cleaning posterior teeth, as its head is narrower than the conventional one. A round or oblong-shaped head is easier to guide around brackets and wires. (5) b) Size: There are usually three head sizes: medium, large and small. The size of the head is usually chosen based on the size of the individual's mouth. (13) For adults, large or medium-sized heads are sufficient. Small heads are recommended for children, as their teeth and mouths are generally smaller. (5) Based on the size of the oral cavity, different head sizes are available according to age: (7) 0-2 years: brush head size should be approximately the diameter of a Hong Kong 10-cent coin (~15 mm); 2-6 years: approximately the diameter of a Hong Kong 20-cent coin (~19 mm); 6-12 years: approximately the diameter of a Hong Kong 50-cent coin (~22 mm); 12 years and above: approximately the diameter of a Hong Kong one-dollar coin (~25 mm). The latest toothbrush heads are flexible: the head is split into two parts joined by a rubber portion, so that it bends and curves to follow the curvature of the teeth as we brush. It also helps to access places which are hard to reach. (13) 2) Bristles: Toothbrush heads are composed of tufts, which are individual bundles of filaments secured in a hole in the toothbrush head. Filaments within the tufts are known as bristles. (14) Bristles are vital because they directly contact the teeth and gum tissue. (15) Bristles usually vary in the following respects. (16) Bristle type: Toothbrush bristles range from very soft to soft in texture, although harder bristle versions are available. (4) Soft-bristle toothbrushes are preferred because, firstly, many people do not follow a proper toothbrushing technique, and hard toothbrush bristles cause abrasion of the surface and tend to remove the surface enamel of the tooth. Secondly, gingival damage by hard bristles pulls the gingiva down towards the root, which leads to sensitivity of the teeth while drinking cold liquids, even water. (17) Pattern: (14) The different bristle designs include flat trim, multilevel, wavy and zigzag designs. The firmness of a bristle depends on three factors, i.e., material, diameter and length. Bristle shape: Toothbrush bristles with sharp edges (also known as burrs) are more destructive to oral tissues than end-rounded bristles. The soft-bristled brushes that are ADA approved are end-rounded. (5) Bristle arrangement: Multitufted brushes usually offer assorted bristle sizes and shapes and are engineered for better cleaning. 
(5) 3) Handle: The handle is the part of the brush by which it is held. The most recent toothbrush models include handles that are straight, angled, curved, and contoured, with grips and soft rubber areas to make them easier to hold, use and control. (4) The handle should provide a good grip for the hand. (17) II) Powered toothbrush Mechanical devices were patented in the mid-19th century with the goal of addressing the limitations of manual toothbrushes. (18) Ritsert and Binns and Grossman and Proskin found that an electric toothbrush was more effective in removing plaque than a manual toothbrush when used by children and adolescents. (6) The power toothbrush as we recognize it today has its roots in prototypes first commercially available in the 1960s. With the introduction of the Oral-B Plaque Remover 'D5' and its novel, prophylaxis-inspired oscillating-rotating mode of action, a major milestone in the development timeline of power toothbrushes occurred in 1991. With a cup-shaped brush head and end-rounded bristles providing robust plaque removal at 5600 oscillations per minute, this was the first clinically proven power toothbrush technology to clean teeth better than a manual toothbrush. It also featured new compliance-enhancing features, including a two-minute light timer to boost brushing frequency. (18) In 2007, the Oral-B Triumph with Smart Guide was the first power toothbrush with clinically proven combined oscillating/rotating/pulsating technology, along with an innovative new wireless remote display feature (Smart Guide) for continuous visible brushing feedback. Since their debut in the early 1990s, sonic power toothbrushes have continued to evolve. Oral-B introduced a new sonic power brush (Sonic Complete TM) in 2004, followed by the Pulsonic TM in 2008, targeting consumers who favored sonic brushes but wanted a quieter, slimmer/lighter option with maximum cleaning performance. Most recently, DiamondClean TM by Philips boasts a redesigned handle and high-density, diamond-shaped bristles that should improve cleaning and whitening. (18) Differences in Power Toothbrush Technologies (5,7,18) Three variables that distinguish commercially available power toothbrushes are: brush head, power source, and cleaning technology modality. C) Brush Heads: The small, round brush head is designed to perfectly cup and wrap the tooth surface. Brush heads customized for specific patient desires/needs have been offered by sonic toothbrush manufacturers. Basis for Professional Recommendation of Power Toothbrushes (19) There are three key reasons why a power toothbrush is a wise choice. 1) Patient Compliance and Preference: Power toothbrushes overcome common barriers to maintaining good oral hygiene through increased self-feedback and ease of use, and have been shown to enhance motivation and compliance. 2) Clinical effectiveness: Many current-generation power toothbrushes have shown convincing evidence of efficacy in reducing plaque, gingivitis, stain and calculus in clinical research of varying study designs, lengths and patient populations. 3) Safety: The safety of modern power toothbrushes has been researched extensively and is not a matter of concern. The recommendation should be based on clinical effectiveness in plaque, gingivitis, stain, and calculus control and on safety, with allowances for patient preference. 
(20,21) • Rechargeable brushes: Rechargeable brushes have many features, including cost variation based on the extent of high-tech options to monitor safety and brushing time and to ensure the best brushing experience. Some models (e.g., Oral-B premium brushes) offer these options. • Oscillating-Rotating Brushes: An extensive independent review has also concluded that oscillating-rotating power toothbrushes are as gentle on teeth and gums as a manual toothbrush. • Multi-Directional Brushes: This brush was designed for patients who prefer a manual-like brushing experience but still want better cleaning results than a regular manual brush or a leading sonic power technology. • Sonic Brushes: Sonic toothbrushes are widely available, and recent clinical research has shown the effectiveness of sonic power technology in plaque, gingivitis and stain reduction. • Battery-Powered: These brushes represent the lowest end of the cost spectrum and are valued by those seeking a budget-friendly power brush option or who want to test the waters with power toothbrushes at a minimal cost investment. 2) The patient is instructed to spread the dentifrice over several teeth before starting to brush to prevent splashing of the dentifrice. 3) Not turning the power brush on until the brush is in the oral cavity also reduces the spattering of toothpaste. 4) The patient should vary the brush position to reach each tooth surface, including the distal, facial, mesial and lingual surfaces. The angulation may need to be altered for access to malpositioned teeth. Be sure to instruct the patient to "feel" the toothbrush on all surfaces of the teeth. This will become second nature after a while, so the patient will not have to think about it. For brushing the occlusal surfaces, the toothbrush is placed with the filaments pointing into the occlusal pits at a right angle. The brush head is moved in a slight circular motion while the filaments are in the occlusal pits, or pressed moderately (without bending the bristles) so that the filaments go straight into the pits and fissures. Sharp and quick strokes are used for the occlusal surfaces. To dislodge any loosened debris, the toothbrush should be lifted after each stroke. 5) With a power toothbrush, tongue cleaning can also be done, as it retards plaque formation and total plaque accumulation. For tongue cleaning, some toothbrushes have a specific brush head design. With the tongue extruded, the brush head should be placed at a right angle to the midline of the tongue with the bristles pointing toward the throat. The sides of the filaments are drawn forward toward the tip of the tongue with light pressure. This should be repeated 3-4 times until the tongue surface is clean. III) Other different types of toothbrushes 1. Proxabrush: The interdental brush is slender, so it is only effective over a small surface area per stroke. These shortcomings call for a specially designed brush that can remove plaque easily and efficiently from the critical surfaces which bound residual ridges in the partially edentulous subject. These brushes are known as proxabrushes. Their design facilitates access to proximal surfaces, even as far back as the third molars. This brush has the advantage that it carries the head of the brush at right angles to the handle, and it is thus easy to apply to the distal and mesial surfaces of posterior teeth. (6) 2. 
Soladey-2: A new toothbrush called Soladey-2® has recently been introduced and is claimed to have better plaque-removing potential than conventional toothbrushes due to a photo-electrochemical effect arising from the incorporation of an N-type semiconductor of titanium dioxide (TiO2) at the neck of the brush. It is possible that the reported photocatalytic property of the semiconductor may be involved in some way in the observed reduction of plaque (Niwa & Fukuda 1989). (23) 3. The traveler's toothbrush: The current traveler's toothbrush includes a toothbrush that houses toothpaste in a cylindrical handle. It uses a mechanical device consisting of a twist knob attached to a string and rubber gasket. The redesigned toothbrush also includes toothpaste within its handle, but possesses an ergonomically shaped handle allowing a comfortable grip while brushing. 11. Musical toothbrushes: De La Rosa suggested that an average child removes only about 50% of the plaque present on teeth. This toothbrush has a handle that is available in different animal shapes, and when the button is pressed, music plays for 3 min. The child starts brushing when the music starts and stops brushing when the music stops. (30) 12. Clinically proven products to meet the needs of patients undergoing more specialised care, such as orthodontics, implants and periodontal surgery. a) Orthodontic Toothbrush: The orthodontic toothbrush has been developed for safe and effective brushing of teeth fitted with orthodontic appliances, including braces, brackets, tubes and wires. b) Post-Surgical Toothbrush: After oral surgery, it is important that patients keep their mouth clean, especially to help the wound heal uneventfully. The post-surgical toothbrush has been designed with those instructions in mind - to help keep the healing wound clean. The post-surgical toothbrush is highly effective in removing dental plaque and food debris near the healing wound and any sutures that keep the wound closed. These brushes are designed to be used until the surgical site is fully healed. c) Denture Toothbrush: It is recommended for the daily care of removable dentures and acrylic retainers. The denture brush consists of two differently configured brush heads: a flat-bristled head for smooth surfaces and a single-tufted head for hard-to-reach areas. It is recommended that removable dentures and orthodontic retainers are brushed at least twice a day, especially after meals. SUMMARY & CONCLUSION Plaque control is one of the key elements of the practice of dentistry. Mechanical plaque removal with toothbrushes remains the primary method of maintaining good oral hygiene. Keeping in mind the main purpose of brushing, any toothbrush with a simple design following ADA specifications and providing access to all areas of the mouth should be suitable, provided the patient uses a proper brushing technique. It is certain that, for a motivated, well-instructed person with the time and skill, mechanical plaque control measures are sufficient to attain complete dental health. Toothbrushing and interproximal oral hygiene aids prove the optimal method of controlling plaque accumulation, whereas gingivitis can be prevented by daily toothbrushing. Powered toothbrushes are superior to their manual counterparts in their ability to remove plaque from the approximal areas but show equality on the flat or facial surfaces of the teeth. 
An oral hygiene training program has to be based on risk analysis and tailored to the individual's needs through diagnosis, education and training, and needs-related oral hygiene.
Retinoprotective Effects of TAT-Bound Vasoactive Intestinal Peptide and Pituitary Adenylate Cyclase Activating Polypeptide Vasoactive intestinal peptide (VIP) and pituitary adenylate cyclase activating polypeptide (PACAP) belong to the same peptide family and exert a variety of biological functions. Both PACAP and VIP have protective effects in several tissues. While PACAP is known to be a stronger retinoprotective peptide, VIP has very potent anti-inflammatory effects. The need for a non-invasive therapeutic approach has emerged, and PACAP has been shown to be retinoprotective when administered in the form of eye drops as well. The cell penetrating peptide TAT is composed of 11 amino acids, and tagging of TAT at the C-terminus of the neuropeptides PACAP/VIP can enhance the traversing ability of the peptides through biological barriers. We hypothesized that TAT-bound PACAP and VIP could be more effective in exerting retinoprotective effects when given in eye drops, by increasing the traversing efficacy and enhancing the activation of the PAC1 receptor. Rats were subjected to bilateral carotid artery occlusion (BCCAO), and retinas were processed for histological analysis 14 days later. The efficiency of the TAT-bound peptides in reaching the retina was assessed, as well as their cAMP-increasing ability. Our present study provides evidence, for the first time, that topically administered PACAP and VIP derivatives (PACAP-TAT and VIP-TAT) attenuate ischemic retinal degeneration via the PAC1 receptor, presumably due to a multifactorial protective mechanism. Introduction Vasoactive intestinal peptide (VIP) and pituitary adenylate cyclase activating polypeptide (PACAP) belong to the same peptide family. PACAP exists in 27 and 38 amino acid forms, and the shorter peptide shows 67% homology with VIP. They also share their receptors: VPAC1 and VPAC2 receptors bind both VIP and PACAP. However, PACAP also has a specific PAC1 receptor, which only binds PACAP. PACAP has a widespread occurrence in the body and a broad array of functions. Among others, PACAP influences gastrointestinal, urinary and cardiovascular functions (Heppner et al. 2018; Parsons and May 2018; Reglodi et al. 2018a), plays a role in reproduction and pregnancy (Lajko et al. 2018; Reglodi et al. 2012; Ross et al. 2018), has diverse behavioral and cognitive functions (Farkas et al. 2017; Gupta et al. 2018; Han et al. 2014; King et al. 2017), plays roles during both early development and aging (Reglodi et al. 2018b; Watanabe et al. 2007), as well as influences the functions of both endocrine and exocrine glands (Bardosi et al. 2016; Egri et al. 2016; Prevost et al. 2013; Sasaki et al. 2017). VIP has also been shown to have diverse actions in addition to the originally described vasodilatory effects (Gozes 2008; Hill et al. 2007; Moody and Gozes 2007; Vu et al. 2015). VIP was originally isolated as a vasoactive peptide in the airways, later confirmed in the gastrointestinal tract (Vu et al. 2015). VIP is involved, among others, in immunomodulatory pathways (Abad and Tan 2018; Carrión et al. 2016; Jimeno et al. 2014), and in nervous system development and in the acquisition of certain neurological disorders (Maugeri et al. 2018a, b; Morell et al. 2012). Both PACAP and VIP exert protective effects in several tissues (Brifault et al. 2016; Giladi et al. 2007; Reglodi et al. 2011; Shioda and Gozes 2011). VIP has stronger anti-inflammatory effects (Olson et al. 
2015), while PACAP is a more potent antiapoptotic peptide (Reglodi et al. 2018c). In the eye, VIP and PACAP have various biological effects. Among others, PACAP has been described to participate in iris sphincter functions (Yoshitomi et al. 2002), to stimulate tear secretion (Shioda et al. 2018) and modulate its composition (Gaal et al. 2008), to influence corneal keratinization and wound repair (Ma et al. 2015; Nakamachi et al. 2016), and to be involved in the sensory innervation of the ocular surface (Wang et al. 1996). PACAP and VIP also have protective effects on the corneal endothelium (Koh et al. 2017; Maugeri et al. 2018a, b). Both peptides and their receptors are also distributed in the retina, where they are involved in the information processing of visual stimuli (Akrouh and Kerschensteiner 2015; Atlasz et al. 2016; Dragich et al. 2010; Pérez de Sevilla Müller et al. 2017; Webb et al. 2013) and have trophic functions (Endo et al. 2011; Fabian et al. 2012). The retinoprotective effects of PACAP are well-documented and have been proven in different injury models, such as excitotoxic, ischemic, UV light-induced, traumatic, diabetic and oxygen-induced injuries (Atlasz et al. 2008, 2011; Kvarik et al. 2016; Shioda et al. 2016; Szabadfi et al. 2016; Vaczy et al. 2016). VIP, on the other hand, seems to be a less potent retinoprotective peptide. VIP has been shown to exert retinoprotective effects mainly in conditions involving inflammatory processes (Shi et al. 2016; Tunçel et al. 1996). However, in ischemic retinopathy, VIP was proven to be ten times less active than PACAP. In most in vivo retinal disease models, PACAP and VIP have been administered as intravitreal injections in order to guarantee that the injected peptides reach the retina in high enough concentrations to exert protective effects. As PACAP exerts dramatic retinoprotective effects, proven by dozens of studies, therapeutic use is implied, and so the need for a non-invasive approach has emerged. One possible approach is to enhance the cell penetration of these peptides. The cell penetrating peptide TAT (GRKKRRQRRRPQ) is derived from the HIV Tat protein (Schwarze et al. 1999). TAT has protein transduction domains (PTDs) with the ability to efficiently traverse cellular membranes. TAT can not only transfer different types of molecules (peptides, large molecular proteins, DNAs) into a variety of cell types, but can also bring the linked molecules across many biological barriers, such as the blood-brain barrier (BBB), the mucosal barrier and the lung respiratory epithelium in vivo (Dietz and Bähr 2004). Our previous study reported that the tagging of TAT at the C-terminus of the neuropeptides PACAP/VIP enhanced the traversing ability of the peptides through biological barriers, such as the BBB, the blood-air barrier and the blood-testis barrier (Yu et al. 2012a, b). Furthermore, we found that VIP-TAT has higher activity in the activation of the PACAP-preferring PAC1 receptor than VIP (Yu et al. 2014). The structure analysis showed that TAT has a two-dimensional structure similar to that of PACAP(28-38), and PACAP(28-38) has been shown to facilitate the binding and the activation of PAC1-R (Vaudry et al. 2009). PACAP in the form of eye drops was first shown to exert local effects on the cornea. It has been shown to enhance corneal wound regeneration and nerve regrowth after injuries (Fukiage et al. 2007; Ma et al. 2015; Nakamachi et al. 2016; Shioda et al. 2018). 
Similarly, VIP has been shown to enhance corneal wound repair after alkali burn injury (Tuncel et al. 2016). Our recent studies have demonstrated that PACAP eye drops not only lead to topical effects, but PACAP is able to pass the ocular barriers and reach the retina, where it can exert retinoprotective effects (Werling et al. 2017). We hypothesize that the passage through the ocular layers can be further enhanced by the binding of the TAT peptide, which is known to increase the passage of peptides through biological barriers. We have previously shown that intravitreally administered VIP is able to protect the retina against hypoperfusion-induced injury, but only in a dose ten times higher than that of PACAP. We hypothesized that TAT-bound PACAP and VIP (PACAP-TAT, VIP-TAT) could be more effective in exerting retinoprotective effects when given in eye drops, by increasing the traversing efficacy and enhancing the activation of the PAC1 receptor. The aim of the present study, therefore, was to investigate the potential retinoprotective effects of PACAP-TAT and VIP-TAT administered in eye drops following bilateral carotid artery occlusion (BCCAO)-induced retinopathy in rats. Materials The peptides PACAP38, VIP, PACAP-TAT (TAT tagged at the C-terminus of PACAP38) and VIP-TAT (TAT tagged at the C-terminus of VIP) were chemically synthesized by GL Biochem Ltd. (Shanghai, China). Peptides Labeled with Fluorescein Isothiocyanate In order to trace the drugs, peptides were labeled with fluorescein isothiocyanate (FITC) using a FITC Protein Labeling Kit from ChangRu Biotech Ltd. (Guangzhou, China) according to the manufacturer's protocol. After the labeling reaction, gel filtration was used to remove the free FITC. In order to determine the amount of residual free FITC, the peptides were subjected to ultrafiltration using Amicon Ultra-0.5 mL devices (Millipore, USA) with a 2000 Da molecular weight cut-off. After centrifugation (1000×g, 10 min), the peptide was subjected to fluorescence measurements performed with the multiwavelength scanner Victor 3 (GE, USA) at an excitation of 495 nm and an emission of 520 nm. Protein concentrations were determined using the K4000 Bradford Protein Quantification Kit (Innovative, Guangzhou, China). The labeling efficiency was calculated using the following formula: label efficiency (LE) = fluorescence value (FV)/peptide mass (PM, mol), representing the fluorescence intensity (AU) per mol of peptide. The Efficiency of Reaching the Retina Male rats with body weights from 160 to 180 g were purchased from the Medical and Experimental Animal Center (Guangdong, China). Rats were randomly assigned to one of the experimental groups (7 rats per group) and subjected to eye drops with FITC-labeled peptides (100 nmol/kg) or PBS as control. Rats were sacrificed under anesthesia 2 h after the eye drop administration, and the retina was separated, weighed, washed three times with PBS and divided into two parts. One part was prepared on a glass slide with glycerin and subjected to fluorescence microscopic observation of FITC with 495 nm excitation/520 nm emission. All images, focused on the left upper regions of the retina, were taken with a 500 ms exposure time. The other part of the retina was subjected to grinding and ultrasonication in PBS at a concentration of 100 mg of tissue per milliliter of PBS. The supernatant was collected by centrifugation and the fluorescence intensity in the supernatant (100 uL) was determined. 
The valid fluorescence intensity (FI) for each sample treated with FITC-labeled peptide was corrected by subtracting the fluorescence value of the sample treated with PBS, which was used as a blank background. The Efficiency of Traversing Eye to Retina (EtE) was expressed as the percentage of the FITC-labeled peptide mass in the retina relative to the total FITC-labeled peptide mass. The EtE was calculated using the following formula: EtE = tFI/LE/PW × 100%, where tFI represents the fluorescence intensity of the retina (arbitrary units, AU), LE represents the label efficiency of each peptide, determined as described above, and PW represents the peptide mass (mol). cAMP Accumulation Assay PAC1-CHO cells cultured in Dulbecco's Modified Eagle's Medium (DMEM) at 37°C were scraped off the surface with a rubber policeman, washed twice with PBS, and the density of the cells was adjusted to 2 × 10⁶/mL. Peptides were added to 500 uL of cell suspension at the corresponding varying working concentrations of the tested factor. After incubation at 37°C for 5-10 min, cells were harvested and the lysates were subjected to cAMP quantification using an enzyme immunoassay kit for cAMP (Biyuntian, Shanghai, China), following the manufacturer's instructions. The protein concentration of each sample was determined using a BCA assay, and the cAMP level of each sample was calculated using the formula: cAMP level (pmol/mg protein) = cAMP concentration (pmol/mL)/protein concentration (mg/mL). The cAMP level in each sample was plotted as the percentage (%) of the maximal cAMP level in cells treated with PACAP27 versus the logarithmic value of the peptide concentrations. All experiments were run with at least four parallel samples and were repeated three times. Histological Procedure in the Retina Adult male rats were housed in the animal facility in individual cages on a 12 h light-dark cycle with food and water ad libitum. Animal housing, care and the application of experimental procedures were in accordance with institutional guidelines under approved protocols (No: BA02/2000-26/2017, University of Pecs). Under isoflurane anesthesia, the common carotid arteries were exposed on both sides through a midline incision and then ligated with a 3-0 filament. A group of animals (sham group) underwent all steps of the operating procedure except ligation of the carotid arteries. Immediately following the operation, the right eye of the animals was treated with derivatives of PACAP (PACAP-TAT /n = 17/ or VIP-TAT /n = 17/) eye drops (1 μg/drop). The dose and schedule of the eye drop treatments were based on our previous experiments. In the experiment for histological analysis, the different derivatives were dissolved in benzalkonium solution for ophthalmic use (solutio ophthalmica cum benzalkonio (SOCB)). The left eye, serving as a control, was treated with vehicle containing neither PACAP-TAT nor VIP-TAT. Animals were treated for five consecutive days, twice a day, with one drop of drug, under brief isoflurane anesthesia (max. 5 min). Fourteen days after the operation, rats (n = 10 SHAM and n = 24 BCCAO) were killed with anesthetic and the eyes were processed for histology. The eyes were removed and the retinas were placed in phosphate buffered saline (PBS), fixed in 4% paraformaldehyde dissolved in 0.1 M phosphate buffer (PB) (Sigma, Budapest, Hungary) and embedded in Durcupan ACM resin (Sigma, Budapest, Hungary). Retinas were cut at 2 μm and stained with toluidine blue dye (Sigma, Hungary). 
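The three normalisations defined in this and the preceding section (label efficiency, Efficiency of Traversing Eye to Retina, and protein-normalised cAMP level) are simple ratios and can be written out as a minimal sketch. The numerical values below are illustrative placeholders only, not measured data.

```python
def label_efficiency(fluorescence_au: float, peptide_mol: float) -> float:
    # LE = FV / PM : fluorescence intensity (AU) per mol of FITC-labeled peptide
    return fluorescence_au / peptide_mol

def efficiency_to_retina(retina_fluorescence_au: float, le: float, applied_peptide_mol: float) -> float:
    # EtE = tFI / LE / PW x 100 : percentage of the applied labeled peptide recovered in the retina
    return retina_fluorescence_au / le / applied_peptide_mol * 100.0

def camp_level(camp_pmol_per_ml: float, protein_mg_per_ml: float) -> float:
    # cAMP level (pmol/mg protein) = cAMP concentration (pmol/mL) / protein concentration (mg/mL)
    return camp_pmol_per_ml / protein_mg_per_ml

# Illustrative numbers only
le = label_efficiency(fluorescence_au=5.0e6, peptide_mol=1.0e-8)
print(f"EtE  = {efficiency_to_retina(1.8e3, le, 1.0e-9):.2f} %")
print(f"cAMP = {camp_level(12.0, 0.8):.1f} pmol/mg protein")
```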
Histological Procedure in the Retina
Adult male rats were housed in the animal facility in individual cages on a 12 h light-dark cycle with food and water ad libitum. Animal housing, care and the application of experimental procedures were in accordance with institutional guidelines under approved protocols (No: BA02/2000-26/2017, University of Pécs). Under isoflurane anesthesia, the common carotid arteries were exposed on both sides through a midline incision and then ligated with a 3-0 filament. A group of animals (sham group) underwent all steps of the operating procedure except ligation of the carotid arteries. Immediately following the operation, the right eye of the animals was treated with eye drops of the PACAP derivatives (PACAP-TAT, n = 17, or VIP-TAT, n = 17; 1 μg/drop). The dose and schedule of the eye drop treatments were based on our previous experiments. In the experiment for histological analysis, the derivatives were dissolved in a benzalkonium solution for ophthalmic use (solutio ophthalmica cum benzalkonio, SOCB). The left eye, serving as a control, was treated with vehicle containing neither PACAP-TAT nor VIP-TAT. Animals were treated for five consecutive days, twice a day, with one drop of drug, under brief isoflurane anesthesia (max. 5 min). Fourteen days after the operation, rats (n = 10 SHAM and n = 24 BCCAO) were killed with an anesthetic and the eyes were processed for histology. The eyes were removed and the retinas were dissected in phosphate buffered saline (PBS), fixed in 4% paraformaldehyde dissolved in 0.1 M phosphate buffer (PB) (Sigma, Budapest, Hungary) and embedded in Durcupan ACM resin (Sigma, Budapest, Hungary). Retinal sections were cut at 2 μm and stained with toluidine blue (Sigma, Hungary). Sections were mounted in DPX medium (Sigma, Hungary) and photographs were taken with a digital CCD camera using the Spot program. Central retinal areas within 1 mm of the optic nerve were used (n = 5 measurements from one tissue block). The following parameters were measured: (i) the cross-section of the retina from the outer limiting membrane (OLM) to the inner limiting membrane (ILM), (ii) the width of the individual retinal layers (outer nuclear layer [ONL], outer plexiform layer [OPL], inner nuclear layer [INL], inner plexiform layer [IPL]), (iii) the number of cells/100 μm section length in the GCL, and (iv) the number of cells/1 μm² in the OPL and in the IPL. Results are presented as mean ± SEM. Statistical comparisons were made using two-way ANOVA followed by Tukey's post hoc analysis.

TAT Tagging Enhances the Efficiency of Reaching the Retina
The fluorescence imaging results of the retina after treatment with eye drops of FITC-labeled peptides (Fig. 1) showed that the FITC fluorescence density per unit area in retinas treated with eye drops of PACAP-TAT (Fig. 1a) and VIP-TAT (Fig. 1c) was much higher than in retinas treated with eye drops of PACAP/VIP, indicating that PACAP-TAT/VIP-TAT reached the retina more efficiently than PACAP/VIP. Calculation of the efficiency of traversing the eye to the retina (EtE) showed that PACAP-TAT and VIP-TAT reached the retina with efficiencies (3.66 ± 0.67% and 3.05 ± 0.58%, respectively) about three-fold those of PACAP and VIP (1.23 ± 0.56% and 0.97 ± 0.47%, respectively).

TAT-Tagging Enhanced the Activity of PACAP/VIP on the Activation of PAC1-R
The results of the cAMP assay (Fig. 2) showed that PACAP-TAT had an EC50 of 23.6 ± 4.4 pM, significantly higher than that of PACAP38 (11.7 ± 3.1 pM), whereas VIP-TAT had an EC50 of 0.14 ± 0.02 nM, about 1/200 of the EC50 of VIP (30.1 ± 4.1 nM). These results show that TAT-tagging enhanced the activity of VIP on the activation of PAC1-R, but inhibited the activity of PACAP38.
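As a purely illustrative aside, EC50 values such as those reported above are typically obtained by fitting a sigmoidal dose-response model to the normalized cAMP data. The sketch below fits a four-parameter logistic curve with SciPy; it is a generic approach applied to invented sample data, not necessarily the fitting procedure used in the study.

```python
# Hypothetical sketch: estimating EC50 from cAMP dose-response data
# by fitting a four-parameter logistic curve. Sample data are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_conc, bottom, top, log_ec50, hill):
    """Four-parameter logistic: response as a function of log10(concentration)."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_ec50 - log_conc) * hill))

# Peptide concentrations (M) and cAMP responses (% of maximal PACAP27 response)
conc = np.array([1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7])
response = np.array([5.0, 22.0, 55.0, 82.0, 95.0, 99.0])   # invented values

log_conc = np.log10(conc)
p0 = [response.min(), response.max(), np.median(log_conc), 1.0]  # initial guess
params, _ = curve_fit(four_pl, log_conc, response, p0=p0)
bottom, top, log_ec50, hill = params

print(f"EC50 ~ {10 ** log_ec50:.3g} M (Hill slope {hill:.2f})")
```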
Morphological Analysis in the Retina after BCCAO
Carotid occlusion caused a significant reduction in the thickness of all retinal layers compared to sham animals. The most marked reduction in thickness was found in the outer and inner plexiform layers, and as a consequence, the total retinal thickness (OLM-ILM) was significantly less than in control retinas (Figs. 3 and 4). Administration of the PACAP derivatives (PACAP-TAT, VIP-TAT) alone in sham animals did not cause any changes in retinal thickness (Figs. 3 and 4). Eye drops containing PACAP-TAT or VIP-TAT caused significant amelioration in all retinal layers compared to the sham group. The thickness of the major retinal layers was significantly larger than that of the degenerated ones (Figs. 3 and 4). This was especially conspicuous in the OPL, which almost disappeared in several BCCAO-induced degenerated retinas and was preserved in PACAP-TAT- or VIP-TAT-treated animals. The number of cells in the different retinal layers also changed. BCCAO led to a significant cell loss in the ONL, INL and GCL. Eye drops with PACAP-TAT counteracted the effects of the BCCAO in all nuclear layers. The cell numbers in the GCL/100 μm, in the ONL/500 μm² and in the INL/500 μm² were significantly higher compared to the vehicle-treated BCCAO retinas.
[Figure legend: Statistical significance (*p < 0.05 vs. SHAM+SOCB retinas, #p < 0.05 vs. BCCAO+SOCB retinas) was calculated by two-way ANOVA followed by Bonferroni's post hoc test.]

Discussion
In the present study we demonstrated the efficacy of TAT-bound PACAP and VIP peptides to reach the retina and exert a retinoprotective effect in a model of ischemic retinopathy in rats. The retinoprotective effects of PACAP are well-documented in models of many different retinopathies (Atlasz et al. 2011; Shioda et al. 2016). Intravitreal injections of PACAP have been shown to lead to robust retinoprotective effects in various models of retinal injury. The protective effects have been demonstrated to affect all neuronal cell types, from ganglion cells (Atlasz et al. 2010; Shoge et al. 1999) to bipolar neurons (Szabadfi et al. 2016), the two main interneuronal types, amacrine and horizontal cells, and the main glial cells, Müller glial cells (Nakatani et al. 2006; Werling et al. 2016). Furthermore, PACAP is an endogenous regulator of retinal microglial cells/macrophages, which is important in certain pathological conditions (Wada et al. 2013). PACAP not only affects the neurons and glial cells of the retina leading to retinoprotection, but also helps to preserve the integrity of the blood-retinal barrier (Scuderi et al. 2013) and protects the retinal pigment epithelial cells against oxidative stress injury, a process important in the preservation of the outer barrier of the retina. Furthermore, PACAP influences retinal vasculogenesis, especially under pathological conditions. VIP has also been shown to have effects in the visual system according to some studies, although most results point to its involvement in photic neuronal transmission rather than to trophic effects (Akrouh and Kerschensteiner 2015; Dragich et al. 2010; Pérez de Sevilla Müller et al. 2017; Webb et al. 2013). VIP is an important neuromodulator along the visual transmission pathways, not only in the retina but all the way to the cortex, where it influences visual information processing (Galletti and Fattori 2018; Wilson and Glickfeld 2014). Regarding retinoprotection, a few studies indicate that VIP may also exert trophic effects in certain retinal injuries. Among others, VIP has been shown to protect retinal ganglion cells against excitotoxic injury in vitro (Shoge et al. 1998). VIP also protected against ischemia-reperfusion injury induced by ophthalmic vessel ligation (Tunçel et al. 1996), where both systemic and intravitreal VIP decreased oxidative stress, as shown by reduced malondialdehyde levels. This led to a better preserved histological structure, which is in accordance with our present findings. Our earlier study, using the same hypoperfusion model as the present one, showed that intravitreal VIP administration led to retinal morphological amelioration, but only at doses ten times higher than PACAP. In the present study, we show a similar degree of protection using TAT-bound VIP. VIP's actions include not only direct effects, but also indirect effects through stimulation of activity-dependent neurotrophic protein (ADNP) and its short fragment NAP, which have highly potent neuroprotective effects. Both ADNP and NAP exerted strong protection against a variety of stress factors (Steingart et al. 2000). In the retina, NAP has been shown to protect against laser-induced retinal damage (Belokopytov et al. 2011), to decrease hypoxia-inducible factor levels in a model of diabetic retinopathy (D'Amico et al. 2018; Maugeri et al. 2017), to prevent apoptotic cell death (Scuderi et al. 2014) and to promote neuronal growth after hypoxia-induced injury (Zheng et al. 2010). VIP also affects autonomic reflexes and choroidal blood flow, which eventually affects the retinal blood supply (Bill and Sperber 1990).
Applying VIP to the ocular surface in the form of eye drops has so far been shown to exert local effects on the cornea. Regarding ischemic injury, PACAP has been shown to be protective in most of the cell layers affected in BCCAO-induced retinal ischemia. VIP was previously shown to be ten times less effective: intravitreal 100 pmol VIP, in contrast to the same dose of PACAP, had no ameliorating effect on the retinal structure, whereas 1000 pmol intravitreal VIP produced a protective effect. As eye drops, VIP alone was not effective (not shown). In our present study, however, we confirm that VIP bound to the TAT peptide can effectively traverse the ocular barriers and exert a neuroprotective effect in the retina. PACAP-TAT did not prove to have significantly higher retinoprotective efficacy than untagged PACAP, but VIP exerted much stronger retinoprotective effects when bound to TAT. These results are consistent with our previous report that TAT, which has a structure similar to that of PACAP(28-38), endowed VIP with higher affinity for PAC1-R (Yu et al. 2014). For PACAP38, tagging with TAT at the C-terminus appears to be redundant and to interfere with receptor binding, which may be the reason why TAT tagging had some negative effect on the ability of PACAP38 to activate PAC1-R. Also, as VIP has been implicated as a possible therapeutic approach in a variety of other ocular diseases (Berger et al. 2010; Cakmak et al. 2017; Satitpitakul et al. 2018), our results showing that topical application leads to retinoprotection may open new therapeutic approaches. In summary, our present study provides evidence, for the first time, that topical administration of the PACAP and VIP derivatives PACAP-TAT and VIP-TAT dissolved in SOCB attenuates ischemic retinal degeneration via the PAC1 receptor, presumably through a multifactorial protective mechanism.
On Company Contributions to Community Open Source Software Projects

The majority of contributions to community open source software (OSS) projects are made by practitioners acting on behalf of companies and other organisations. Previous research has addressed the motivations of both individuals and companies to engage with OSS projects. However, limited research has been undertaken that examines and explains the practical mechanisms or work practices used by companies and their developers to pursue their commercial and technical objectives when engaging with OSS projects. This research investigates the variety of work practices used in public communication channels by company contributors to engage with and contribute to eight community OSS projects. Through interviews with contributors to the eight projects we draw on their experiences and insights to explore the motivations to use particular methods of contribution. We find that companies utilise work practices for contributing to community projects which are congruent with their circumstances and capabilities and which support their short- and long-term needs. We also find that companies contribute to community OSS projects in ways that may not always be apparent from public sources, such as employing core project developers, making donations, and joining project steering committees in order to advance strategic interests. The factors influencing contributor work practices can be complex and are often dynamic, arising from considerations such as company and project structure, as well as technical concerns and commercial strategies. The business context in which software created by the OSS project is deployed is also found to influence contributor work practices.

INTRODUCTION
Open source software (OSS) is widely deployed in commercial software products and services [1], [2], [3] and within companies [4], [5], [6], and is used to support open innovation processes between companies [7], [8]. Given the level of integration into company products, processes and services, company software developers have long contributed to OSS projects for many reasons, including improvement of the quality of the software they use and a desire to influence the direction in which the software is developed [1], [8], [9], [10], [11]. Research on developers' contributions to OSS projects has focused on the motivation and behaviour of individuals [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], as well as the challenges of using the tools available to make technical contributions [22], [23], [24], [25], [26], [27]. A wide range of research on company engagement with and contribution to OSS projects has provided an understanding of the motivations of companies to use OSS and to work with projects, and of their ways of working [8], [10], [11], [28], [29], [30], [31]. However, while some research illuminates company strategies when engaging with OSS projects [8], [28], [30], [31], [32], [33], it is often set in the context of projects where the company has a controlling influence over the community and the direction of software development. In this work, we focus on engagement by companies and their employees and contractors with community OSS projects. By community OSS project we mean an OSS project managed by a foundation or otherwise collectively organised [34], where many contributors are professional practitioners directed by companies and other organisations who collaborate to create high quality software [1].
Decisions on how companies engage with community OSS projects are taken both by managers within each company and individual developers, and the majority of contributions are made directly by developers or other employees, who decide how to conduct each individual interaction with the project [28], [31], [34], [35], [36]. It can be inferred that many contributions made by companies are motivated by software development needs driven by business requirements. Further, we conjecture that practitioners commissioned by companies are working with technical, fiscal and temporal constraints within the business context that may not be apparent which also motivate contributions and how they are made. The context for decisions about engagement and contributions is also determined by the governance [37], [38] and licensing [39], [40] of the OSS project as well as the strategic interests of other participants in the project [31], [33]. Previous research has shown that when a company and individuals (owners, employees and contractors) affiliated with the company engage with an OSS project governed outside the company's control and specific development context, it is critical to adhere to established "work practices that are appreciated by community members" [41]. This article seeks to illuminate how companies engage with and contribute to OSS projects independent of company control and collaborate with other companies and organisations to achieve their own and common aims. To that end we ask the first research question: RQ 1: How do companies contribute to community OSS projects? The context in which a company contributes to a community OSS project is framed by many factors, including the business and technical interests of the contributing business, as well as those of others in the community, and the governance systems of the OSS project. Less clear is how those factors combine to motivate the use of particular work practices, i.e. the specific methods or approaches used to collaborate in community OSS projects. For example, why a particular method of interaction might be chosen to achieve a given outcome. To explore the motivations of companies and practitioners working on their behalf to adopt specific work practices we ask a second research question: RQ 2: What factors inform the selection of specific work practices used by companies to contribute to community OSS projects? To answer RQ 1 we undertook an investigation of the public online records of eight community OSS projects (including mailing lists and issue trackers) to identify the opportunities to make contributions to each project and the work practices used to make the contributions. In addition we interviewed practitioners from companies engaged with the eight community projects investigated to obtain further information about the types of contributions made, particularly those that may not be apparent from public records. The interviews also explored the practitioners' motivations to use particular forms of contribution in order to answer RQ 2. The interviews examined both the more strategic or policy level decisions about why a company might engage with an OSS project in a particular way, and the choices individuals working for companies make about how to make a contribution to an OSS project. This research extends previous work [42] that investigated RQ 1 in five community OSS projects. 
In this research the scope of the investigation is increased to include all contributions made to a project, expands the number of OSS projects studied from five to eight, and adds a second research question to examine motivation for the observed behaviour. The remainder of the article is structured as follows. We first present background information and a review of the academic literature (Section 2). In Section 3 we outline the research methodology and give details of the selected projects in Section 4, including the governance mechanisms that frame project activity. In Section 5 we present results, which detail interactions between companies and OSS projects as well as the reasons given by developers for the approaches used. In Section 6 we analyse and discuss the results, and present the conclusions from the study in Section 7. BACKGROUND AND RELATED WORK In a keynote presentation at ICSE 2017, the Executive Director of the Eclipse Foundation, Mike Milinkovich, explained how many companies strategically engage with OSS projects and claimed that "every software company is an open source company" [43]. Research shows that many companies in different sectors, utilise software that is developed and maintained by OSS projects external to the company [44]. Software created by OSS projects supports business activities, including software development, as part of revenue generating products and services, and as part of open innovation processes [7], [8]. Practitioners engaged with software deployed from well-known community OSS projects (including Linux and the Apache web server) and open source foundations (e.g. Eclipse Foundation and MariaDB Foundation) experience many business benefits, and at the same time encounter a number of challenges that companies need to overcome in order to engage successfully with and contribute to OSS projects [43], [45]. Community OSS projects, like Linux and the Apache web server, are organised for the mutual benefit of participants, in contrast to single vendor, or company controlled, OSS projects, where the OSS project is intended to benefit the controlling business [46]. The development of software by companies in open innovation processes mediated by community OSS projects has been described as "OSS 2.0" [1]. The benefits to businesses of contributing to OSS projects extend beyond the creation of software and include the acquisition of marketable knowledge and expertise [47], and organisational learning [8], [9], [48]. Individual contributors also benefit in terms of their careers [21] and their careers within projects [49], which also has value for the employer [21], [31]. Business and OSS Projects Research on the interaction between companies, practitioners, and OSS projects has been undertaken from a variety of perspectives (See Table 1). Fitzgerald [1] articulated the idea that OSS development had evolved from being dominated by individuals to a process where businesses, and, particularly, professional software developers undertaking paid work on behalf of businesses, collaborate to develop software, and characterised it as OSS 2.0. The continuing growth and development of OSS 2.0 is exemplified by very large scale community OSS projects like OpenStack.Ågerfalk and Fitzgerald noted that businesses using and contributing to OSS projects take part in external collaboration with an "unknown workforce" [50]. 
A further observation made byÅgerfalk and Fitzgerald, and also by Dahlander and Magnusson [30], was that the working relationships between companies within OSS projects are not governed by contracts that, for example, formally specify deliverables and delivery dates. Consequently, companies need to give careful consideration to the relationship between internal software development processes and engagement with strategically important community OSS projects. The strategic concerns, as Lundell et al. [44] argued, include the sustainability of both external OSS projects important to the business and the business itself. Riehle [34], [46] identified the principles of the business models used by participants in community OSS projects, arguing that the software developed is often restricted to core non-differentiating functionality to which companies then add value to generate revenue. Further, Germonprez et al. [36] highlighted that community OSS projects are, sometimes unplanned, collaborations between competitors to create core software platforms. OSS Project Governance In the absence of contracts, governance processes used by OSS projects provide the basis for collaboration between businesses involved in OSS projects; regulating financial contributions, where they are permitted, and contributions from individuals working on behalf of the businesses. Research has examined forms of governance, and how governance facilitates both contributions and the activities of contributors. Markus [37] synthesised a set of core functions a governance system fulfils from a review of the academic literature. Informal aspects of governance found as norms within OSS projects are recognised, in addition to the formal aspects of governance recorded as rules. Four broad types of governance are identified by Germonprez et al. [52] that include meritocracies, as well as more flexible systems, and that these forms of governance can coexist within the same project. In a case study of the Linux kernel, Shaikh and Henfridsson [38] found that governance evolves within an OSS project and may also manifest itself as different coexistent forms. Furthermore, Shaikh and Henfridsson found evidence that the tools used to manage software development contribute to the project governance process [38]. Alves et al.'s [51] systematic review of the literature on governance systems supporting both open and proprietary software ecosystems identified three key aspects of governance systems: how the participants are able to create value from their contributions, the coordination of the activities of contributing organisations, and mechanisms used to balance between openness and control. Business: Activity and Motivation Bonaccorsi and Rossi [10] found that companies were motivated to contribute to OSS projects for economic and technological reasons, rather than the more altruistic motives sometimes ascribed to individuals. Germonprez et al. [36] described business motivations to use community OSS as a means of open collaboration to create core, nondifferentiating software. The authors also identified that the process of collaboration is not always easy, but that participants gain clear economic benefits [36]. Activity in two community OSS projects governed by foundations and four OSS projects each controlled by a single company was compared by Mäenpää et al. [53] who found the foundation governed projects facilitated greater external collaboration through increased openness. 
Businesses can also be motivated to participate actively in a project for strategic purposes. Schaarschmidt et al. found that resource deployment, that is, the deployment of company developers in the OSS project, was a common strategy in the community OSS projects investigated, particularly where company developers acquire committer or core developer status [31]. (We use the term core developer to refer to contributors who have commit privileges to the project version control system and therefore act as gatekeepers.) The acquisition of knowledge is also an important benefit for businesses participating in OSS projects. The contribution to organisational learning was identified by practitioners interviewed by Lundell et al. [9]. The finding is supported by Munir et al. [8], who observed the value to a company of knowledge interchange with OSS projects in an open innovation process. Andersen-Gott et al. [47] also highlighted that technical knowledge gained through contribution to an OSS project can be monetised by the business through the provision of complementary services. The finding is aligned with earlier work by Lakhani and von Hippel [48], which concluded that the main beneficiaries of help-giving in the Apache web server project were the help-givers themselves, who derived direct learning benefits from the experience. Businesses can also be cautious about contributing to community OSS projects. A study of OpenStack by Zhang et al. [56], [57] considered the importance of dominance of the software development process by particular companies and the consequences for both the software created by the OSS project and the community. The authors concluded that there are advantages to single company dominance for software development, but that some smaller companies become reluctant to contribute to the development process because of concerns that they are providing free labour to support the efforts of the dominant companies [56], [57]. An investigation by Morgan and Finnegan [55] found tensions between senior managers' desire for the company to be self-reliant and the opportunity to derive business value from OSS. Lundell et al. [54] also found differing attitudes towards collaboration with OSS projects held by technical staff and management. Caution also extends to technical contributions. Linåker et al. [7] studied a model, developed within Sony Mobile, that supports decision-making concerning the contribution of internally developed enhancements to OSS projects. The concern for the company is to distinguish between source code that represents functionality with business value and code that can be contributed to the upstream OSS project.

Practitioners: Activity and Motivation
The motivation of individual developers to contribute to OSS projects has been studied extensively. Many authors, including Lerner and Tirole [12], have found that developers are motivated by "career considerations and ego gratification" [12]. More recent studies of developers working in community OSS projects continue to support this perspective. The motivating factor of a "career path" for company developers working on OpenStack was found by van Wesel et al. to be a strong influence on their activity [49], while Riehle [21] highlighted the value to developers' careers of contributing to OSS projects. Furthermore, Riehle [21] provided evidence of the value of core developers to businesses, also identified by Schaarschmidt et al. [31]. However, these studies focused on the motivation of developers to work
with OSS projects, and did not examine either how developers work to support the aims of the business they work for, or the commercial pressures and constraints on their activity. The retention of contributors by OSS projects has also been a concern for researchers, particularly in the first few months of a contributor's activity. Zhou et al.'s study of company developers contributing to Gnome and Mozilla [58], for example, examined the characteristics of the behaviour of those who became long-term contributors, rather than those making at most a small number of contributions. However, the study did not consider the motivations of company developers whose involvement with the project might be intended to complete a specific task for commercial reasons. Further evidence of short-term or intermittent contribution by individuals is found in the research literature. For example, Pinto et al. [19] investigated contributors to GitHub projects who make single or very limited contributions, sometimes referred to as drive-by, or casual, contributions. Although the authors considered the motivations of contributors, they did not consider commercial aspects in detail. However, they observed that some casual contributors were known to be longer-term contributors to other OSS projects. An investigation by Lin et al. [18] found that contributors to five community OSS projects who created source code tended to have briefer relationships with projects than those who edited the project source code. The focus of the work was on developer turnover in projects and did not examine the reasons behind short-term contribution. A speculative explanation might be that some developers contribute source code to resolve an issue of significance to their employer, and once the task has been completed there is no business case to contribute further code. To summarise, the academic literature identifies the business incentives to contribute to community OSS projects, and the principles of the business models used to generate revenue from participation. There is also evidence that businesses gain from participation in OSS projects in other ways, such as knowledge acquisition. Researchers have also identified that participation in OSS projects can be challenging for businesses. Much research acknowledges that contributors to OSS projects are mainly acting on behalf of businesses, and documents the career incentives for developers. However, few studies examine how such developers interact with OSS projects, particularly community projects, to undertake tasks in order to achieve business goals.

RESEARCH DESIGN
In this article we report on a descriptive multi-case study [59] of a purposeful sample [60] of eight community OSS projects. We initially compiled a list of software created by OSS projects that is of strategic importance to the businesses represented by six of the authors. From that list, we selected eight OSS projects to investigate according to three fundamental criteria. Firstly, the projects investigated are community OSS projects; that is, they are neither exclusively controlled nor maintained by a single commercial entity, but are maintained independently, by independent foundations, or under the aegis of the Apache Software Foundation (ASF) and the Eclipse Foundation. Secondly, the projects are widely used in that they are deployed in, or support the development or provision of, products and services in multiple businesses; i.e.
the projects studied provide software recognised by many businesses as appropriate for use in commercial contexts. Thirdly, the projects have active communities of contributors, and histories measured in years. In addition, the software solutions implemented by the projects are not concentrated in a single domain and represent a variety of project types including open innovation and large-scale industry collaborations (See Table 2). Furthermore, the eight projects are also ones that seven of the authors have first hand experience of as users in commercial deployment contexts and as contributors. In summary, in this work we investigate company participation in OSS projects where there is a non-exclusive relationship between the contributor and the project, and the company must interact with other contributors -commercial entities and individuals -to achieve its goals. Archival Investigation We examined the public archival records [61] available for each of the eight projects to investigate the first research question. Beginning with the project website and any project pages on foundation websites, we identified the online resources that define the project and how it works, as well as the systems used to contribute to the project including mailing lists, changelogs, and bug tracking information (see Appendix A for details). The interactions recorded in publicly available online resources for each project were analysed. Firstly, to identify the forms of communication used to contribute to the projects and the types of contribution made; and, secondly, for evidence of both the manner in which interactions were conducted and their outcomes. The resources used by each project vary and those examined include mailing lists, online forums, and bug and issue trackers. Three categories of activity in OSS projects reported in the academic literature were used as an a priori framework to identify the wider purpose of the contribution: bug reports [62], feature requests [62], and support messages or help requests [48], [63]. A fourth category was also used to capture events outside the scope of the other three categories, including project governance activity. The characteristics of the work practices used by individuals to pursue their aims in each contribution were also identified and captured using an open coding process, informed by Glaser's ideas [64], [65]. The first author took main responsibility for the coding process and emerging codes were discussed and scrutinised by the first three authors as the coding progressed. A key principle adopted is the notion that, contingent on context, any unit of coding is acceptable. Accordingly codes are applied to email threads, issue tracker tickets, individual comments, emails, and to sentences to develop and refine abstract concepts grounded in the data to support reporting of observations. Evolving observations were discussed by all authors. We followed an iterative coding process. Initially, one month of activity on the Apache Solr issue tracker and the four project mailing lists was considered in the analysis. Eleven more months of activity on Solr was subsequently considered so that the period between April 2017 and the end of March 2018 was analysed. Thereafter activity in the public communication channels of each project was considered for the same period, with the exception of OpenStack Nova. OpenStack Nova has a volume of activity and number of communication channels that is considerably greater than the other projects analysed. 
Three months of activity (May and October 2017, and January 2018) were selected at random for analysis. Practitioner Interviews To contribute detail to the first research question and to investigate the second research question we conducted open interviews with contributors to each project. Email invitations were sent to individuals identified as having contributed to each of the eight OSS projects in public communication channels as part of their employment. In total, seventeen respondents were interviewed; nine and eight respectively from the primary and secondary software sectors [66]. The majority of interviewees were employed to work in multiple roles, including non-technical roles. Sixteen interviewees, for example, spent at least part of their working week as a software developer, and some of those also had a role as a core developer in an OSS project as part of their paid work. Some interviewees had consultancy roles; both technical consultancy working with a specific OSS project, and providing bespoke solutions where the OSS formed part of the solution. Just less than half also had non-technical roles, for example in business management and in practitioner training. As well as experience of project governance as core developers, a few interviewees also had wider experience of foundation and project governance having had roles on steering committees and in foundation administration. Interviewees with software development roles all have several years of industry experience. Some have sought roles where they can contribute to OSS projects, while others work with OSS because of strategic decisions made by their employer. Interviews were conducted in English by the first author of which fourteen were conducted by telephone and three by email. English is the native language of the first author and four interviewees (all telephone interviewees), and a working language for thirteen interviewees. Interviewees were informed that reporting of the research will preserve anonymity for both the individuals and their employer so that they were able to discuss their experiences and motivations more freely. Interviews initially probed for a generic understanding of the interviewee's work context through an opening question of the form: "Please describe your involvement with [OSS project] and how that activity relates to your employment?" For each interview a number of follow up questions were prepared to explore observed activities in which the interviewee had participated. The telephone interviews were recorded and transcribed. Each interviewee was sent the transcript of the interview to check the transcription of the conversation and correct any misunderstandings that may have occurred during the conduct of the interview. Interviewees were also invited to expand on any points made during the interview, if they wished. The approved interview transcripts were analysed using an open coding process (using the same open coding strategy as used in the archival investigation, see Section 3.1). A coding scheme was developed through analysis of the first five interview transcripts by the first author and evolved iteratively thereafter. A coding dictionary was maintained and the coding of interviews was reviewed following each interview through scrutiny by the first three authors as the systematic coding process progressed. Anonymised synopses of the interviews, including quotes, were discussed by all authors during a four month period to allow time for reflection. 
In retrospect, we found that after twelve interviews saturation was being reached, because subsequent interviews gave very limited additional material apart from further examples supporting the evidence already gathered.

CASES AND CASE CHARACTERISTICS
The eight OSS projects investigated are: Apache CloudStack, Apache Solr, Bouncy Castle, Contiki-NG, Eclipse Leshan, MariaDB Server, OpenStack and Papyrus (See Table 2). As well as having a specific technical mission, each project provides opportunities for both companies and individuals acting on their behalf to interact with the project. The governance model for each project defines the intellectual property mechanisms and communication systems through which contributions are made (See Table 3). In addition, contributions may be made to the governing foundation depending on its financial structure (See Table 4). Apache CloudStack provides a cloud computing platform. Initially developed as a proprietary licensed product, CloudStack became an OSS project in 2010 and came under the control of the ASF in 2012 [76]. Apache Solr was initially proprietary licensed software that became an OSS project governed by the ASF in 2006. The close relationship with the Apache Lucene project led to the integration of the two projects in 2010. Bouncy Castle is a widely used cryptographic library first released as an OSS project in 2000. Contiki-NG provides an operating system for small, low-powered devices [77]. Contiki-NG was forked in 2017 from Contiki-OS, which was established as an OSS project in 2003 (a fork occurs where an OSS project divides into two separate projects; there are many reasons why projects may fork, including inactivity by core developers and disagreements over the direction of software development [41]). Contiki-OS remains an active OSS project, but has not released a version of the software since 2015. The Contiki-NG project is independent of company control and is managed by its core developers. MariaDB Server is a fork of MySQL. Originally a closed source database management system, MySQL was established as an OSS project in 1995. MariaDB Server was forked from MySQL in 2009 and the MariaDB Foundation was created in 2012 to preserve the project's independence. Leshan and Papyrus are projects in the Eclipse ecosystem and are governed by the Eclipse Foundation. Leshan is an implementation of the Lightweight Machine to Machine (LWM2M) protocol [78] and is overseen by the Eclipse Internet of Things Working Group [79]. Papyrus is an industry-led project to create a UML and SysML modelling tool and has been an Eclipse Foundation project since before its initial release in 2008 [35], [80]. Papyrus is managed by the Papyrus Industry Consortium [81], a member of the PolarSys [82] working group, which oversees a number of projects focused on model driven engineering and embedded systems. OpenStack provides a platform that supports the provision of private and public clouds through virtual servers and software defined storage on heterogeneous hardware [83]. Development of OpenStack is supported by the OpenStack Foundation [84]. The governance of an OSS project outlines the management of the project itself and its assets, and defines the mechanisms through which the project publishes information, manages activities and receives contributions. Markus [37] identified six categories of formal and informal governance structures and rules found in OSS projects:
• Ownership of Assets: rules for ownership of intellectual property, foundation structure.
• Chartering the Project: the goals of the project.
• Community Management: rules pertaining to membership and the roles members may have.
• Software Development Processes: rules for requirements gathering, coordination, software changes and release management.
• Conflict Resolution and Rule Changing: rules concerning conflict resolution and changing rules.
• Use of Information and Tools: rules concerning communication, and the use of tools and repositories.

Table 3 provides an overview of the way each of the eight selected projects implements governance mechanisms for each category identified by Markus, with the exception of 'Chartering the Project', which all eight projects do through their websites and in other documentation, and which is omitted from the table to avoid duplication. The MariaDB Server project, for example, states that the software is "... an enhanced, drop-in replacement for MySQL" and will remain open source [88]. The practical differences for individual contributors that arise from the governance mechanisms are in two main areas. Firstly, the communication channels and software development coordination tools used by the projects vary in number and complexity, from Contiki-NG's use of a GitHub repository to the multiple mailing lists, code review tools, planning system and internet relay chat channels used by OpenStack. Secondly, the mechanisms used by the projects to acquire the right to use and distribute contributed source code introduce a layer of bureaucracy in six projects. The ASF, Eclipse Foundation projects, and OpenStack ask individual contributors, and companies for ASF projects and OpenStack, to complete a Contributor Licence Agreement (CLA) [89], [90], [91], [92], which gives the project or foundation a perpetual copyright licence for the contribution. (For the Eclipse agreement we refer to v2.0.0 of the ECA, dated 2016-08-22, which was in force during the period of data collection for this research; v3.0.0 of the ECA was published in October 2018. Access to the OpenStack CLA requires a Launchpad account.) The MariaDB CLA [93] is used for contributions made under version two of the GNU Public License (GPL v2) and relies on copyright assignment. The MariaDB CLA grants joint copyright at the point of contribution and, thus, appears to function similarly to a copyright licence. Contributions to MariaDB can also be made using the permissive MIT Licence. From a contributor's perspective, a CLA requires a business to approve the terms under which technical contributions are made on its behalf, and practitioners acting for a company must seek that approval. The licences under which the source code for each project is distributed place varying obligations on users that modify the source code, and few restrictions on users deploying the software. The more permissive licences, BSD 3-Clause, the Eclipse Distribution Licence (EDL) and MIT, place yet fewer obligations on users of the software, allowing industrial users to integrate the code with their own solutions and products. Companies and individuals may also contribute to the foundations financially, or in kind, and the nature of the relationships formed between the foundations and the donors is related to the legal status of each foundation (See Table 4). The Legion of the Bouncy Castle Inc. supports the development of the Bouncy Castle library through donations from companies and individuals [94]. The ASF uses sponsorship to pay for the infrastructure used by projects, accounting and legal costs, as well as marketing [95].
The Eclipse, MariaDB and OpenStack Foundations offer a range of membership types through which companies and organisations are able to influence the direction of software development to varying degrees [96], [97], [98], [99].

RESULTS
In this section we report on interactions between practitioners representing companies and OSS projects. First we report observations of how businesses interact with OSS projects when making contributions. We then report on why software developers, both core and non-core, use specific work practices, and on their employers' motivations to contribute to the OSS projects.

Work Practices Used to Contribute to OSS Projects
Governance frames the activities of contributors to OSS projects and the public communication systems through which they contribute to projects. Through examination of interactions found in the archives of the collaboration platforms used by the projects (Appendix A lists the data sources and archives examined) we identified the work practices used to contribute to OSS projects.

Bug Reporting and Fixing
Practitioners adopt two fundamental approaches when reporting bugs. One approach is to ask an exploratory question on a mailing list or in a forum. The question typically enquires about some functionality to ensure that the submitter understands how the software is intended to work and whether the observed behaviour is to be expected, the result of an error on their part, or a genuine issue (See Table 5, Example 1). Often the contributor will include precise details of the software, hardware and operating system they are using to provide context. Where the issue is identified as a bug it may then be reported via the issue tracker, either by the original contributor or by a core developer. Where the contributor is more certain that they have identified a problem with the software, a bug report is submitted to the mailing list or issue tracker with supporting evidence (Example 2). Bug fixes are contributed to projects in response to an identified fault and, sometimes, together with a bug report. The mechanism for contributing bug fixes varies according to the tools projects use and the project workflow. For example, a bug report accompanied by source code may be submitted as a pull request on GitHub. Many projects prefer that bug fixes are submitted with unit tests that both demonstrate the problem and establish that the fix solves it, and that support regression testing (Example 3). Where unit tests have been omitted from a proposed bug fix, the core developers will often request unit tests to support the fix as part of the code review process. Interactions with an OSS project require the input of company resources, including staff time, and are thus a financial cost for the business. Many reports of issues consist of a few steps and messages exchanged between the contributor and the OSS project (as in Example 2). Sometimes, however, despite details being provided, the core and non-core developers misunderstand each other, which may require a further contribution of time by both parties (Example 4). Contributions in larger projects can also be forgotten or take a long time to be integrated into the code base, especially when they are not a priority for core developers, and thus have the potential to become wasted effort for the contributor (Example 5).

Feature Requests
Non-core contributors also make requests to add new features to OSS projects.
As with reporting bugs, the opportunities available for the initial approach include an exploratory question on a mailing list, in a forum or as a GitHub issue, so that the contributor can understand whether the project would be receptive to the proposed feature (Example 6). Sometimes, as with bug fixes, non-core developers will submit a feature request with the source code that implements the feature. The implementation represents an investment of resources by the contributing company. Typically, as with code submitted to fix bugs, the developer will take part in a review process and revise the code to ensure that it meets the core developers' requirements before it is integrated into the code base. There are also occasions where contributors do not take part in the review process and the code, depending on project policy, often does not get merged into the project. It is uncommon, but significant amounts of software development work (sometimes of the order of thousands of lines of code) can be submitted and abandoned by their contributors. In some cases, the core developers have reviewed the contribution, the requested revisions are not made, and the contributor does not respond to further messages (Example 7). Review comments on source code submitted by noncore contributors are, in many OSS communities, mostly made by the core developers who have to accept the code. Sometimes, however, commercially unrelated non-core developers will contribute to code reviews and we infer that there are likely to be strong motivations for what appear to be unconventional actions. Observed interactions include the addition of relatively minor technical points that may improve the maintainability of the code (Example 7). We also observed a core developer not directly involved in the code review process intervene when they recognise a more generic use of a new feature that helps their employer's use of the software (Example 8). OpenStack Nova uses a different process for making a feature request. The most common workflow is that an idea is discussed initially in the IRC channel and a proposal subsequently developed and added as a blueprint on the Launchpad platform (Example 9). The blueprint can be reviewed by the community to ensure that the proposal is relevant for the project, that the proposed implementation is sound and does not duplicate or restrict existing functionality, or overlap with other proposals. The blueprints also support traceability of the feature implementation and associated code reviews. Core developers also make feature requests. Mostly, feature requests made by core developers are openly documented alongside those made by non-core developers in the projects investigated, including Leshan, OpenStack, and Solr. Feature requests are made openly so that other members of the project community can understand what functionality and implementation is being proposed, that the idea is sound, and welcome, and to identify whether the idea has been considered previously (Example 10). Furthermore, documenting intended areas of development may attract potential contributors. Support Project documentation may be incomplete, misunderstood, not read closely, or possibly out of date. Consequently, users of the software often seek help on mailing lists and in forums, and both core and non-core contributors provide support (Example 11). As software users, those giving help have detailed collective knowledge of the deployment and use of the software. 
Mailing lists and issue trackers are asynchronous communication channels. Sometimes practitioners seeking help may have solved a problem, or have continued working on it, while waiting for a response. In such cases, practitioners can provide a commentary on their ongoing problem solving activity and the outcomes (Example 12). Occasionally, contributors make mistakes and take actions in communications channels that the core developers may not anticipate, despite extensive documentation of how the core developers expect contributors to behave. Typically, users would be expected to ask for help on a mailing list and only to use the issue tracker when a bug has been identified. However, on occasion, help questions can be posted directly to the issue tracker (Example 13). The response of the core developers to such errors varies from project to project. Other Activities While the literature reports three groupings of contribution made in the communication channels of OSS projects, those active in OSS projects contribute in other ways. Three main additional types of activity are observed. The first is the creation and maintenance of documentation. A second type of activities relates to project governance and administration, and community building. Thirdly there are opportunities for additional activity in some larger projects as a consequence of the additional communication channels and tools available to developers. Documentation activity in OSS projects can be seen as two processes. There is a deliberate effort to create and maintain documentation of the software. In some larger projects, such as Solr, there is often at least one person who focuses on managing the documentation effort (Example 14). The other form of documentation is a knowledge maintenance task undertaken by contributors -mostly core developers -that occurs during other activities such as providing support, fixing bugs, and feature implementation. While working on the primary task, links or connections are made to related items in the issue tracker, and sometimes to sources of information outside the project. Recording connections between mailing list threads, issues and other items annotates and connects knowledge within the communication and software development systems to create a detailed, emergent documentation of the project. Project governance activities and opportunities differ between foundations and projects. ASF project core developers, for example, vote to accept a release candidate as the next release. ASF project management committees also vote privately to appoint new committers, who are subsequently introduced to the community on the developer mailing list. A similar process happens when new core reviewers are appointed for OpenStack Nova. Occasionally, projects need to discuss issues such as the impact of the foundation's intellectual property rules on the project (Example 15). The core developers in most of the projects studied are responsible for ensuring contributors have completed the appropriate CLAs when making their initial technical contribution. In addition, OpenStack Nova, as noted, uses IRC extensively and makes logs available so others can, as with email threads, follow discussions and understand the discussion leading to a specific decision. IRC is also used to help coordinate other aspects of OpenStack development in close to real time (Example 16), as well as a channel for automated messages from the Gerrit code review tool, and to organise or invite review for particular code revisions. 
Also within OpenStack are a number of special interest groups (SIGs) that developers with common interests use to coordinate their activities across the project (the OpenStack SIGs are listed at https://wiki.openstack.org/wiki/OpenStack_SIGs and include activities related to supporting newcomers and to high performance computing).

Motivations to Adopt Contribution Strategies
In this and the following subsection we report on the analysis of the data collected from interviews conducted with contributors to the eight OSS projects studied. This subsection focuses on the strategic choices made about company contributions to OSS projects, and the following subsection (Section 5.3) focuses on why practitioners use individual work practices. In both subsections we report additional work practices, or aspects of those reported above, which were uncovered during the interviews. A variety of factors influencing the type and extent of company engagement with the eight community OSS projects emerged from the analysis of the collected interview data (See Table 6). Two key factors influencing the extent of engagement are the relationship between the company's business model and the deployment of the project software, and the maturity of the domain and the software. Another factor identified is the influence of the location of knowledge and expertise within the project and the contributing business on the manner in which companies contribute to OSS projects.
[Table 6, reconstructed from the flattened text: Business Model: OSS is deployed to generate revenue by delivering functionality or service to customers; revenue is generated through adding value with software that depends on, adapts, or enhances OSS; the OSS project acts as an open innovation platform and revenue is generated by other company products. Software and Domain Maturity: the technical context of the OSS project software, including the factors contributing to the evolution of the domain and the pressures for continuing software development; the domain evolves through external pressures, e.g. technology change; additional functionality is required by users; the implementation of a specification is incomplete. Knowledge and Expertise: the knowledge and expertise required to deploy and develop software is not evenly distributed in OSS project communities; core developers have implementation expertise; other contributing businesses may have the required implementation or domain expertise; users of an OSS project have extensive experience of deploying the software.]
Some contributors to OpenStack and Apache CloudStack worked 90% or more of their time on the OSS project. In each case the company involved deploys the software as a key component of one or more of the company's revenue streams. In one business model, for example, the software is deployed to customers as a platform for them to deliver their services and products; in another, the business deploys the software as a platform to support service delivery to their customers, as well as being a platform their customers could also add value to. The cost to the businesses of switching to different software would be considerable, or even existential, as stressed by one interviewee: "... if the project dies then basically our company dies because the core business of the company is based on [the OSS project]." Where the business's product is part of the OSS project, the business adds value to the OSS by developing functionality for itself or a client, and contributes enhancements to the OSS project, perhaps identifying possible bugs as well.
Some other businesses contributing to OpenStack and Apache CloudStack use a more product-focused strategy. Product-focused companies are similarly reliant on deployment of software from the OSS project to deliver services to their customers. While the contributors, the company developers, still work on the upstream project their main focus is on the product they develop within the business. The product is integrated with or dependent on the software created by the OSS project, and company developers contribute features in the project software and fix bugs to meet specific requirements to support their product. The high level of confidence that businesses place in the capabilities and initiative of some individuals at the centre of their engagement with OSS projects was also reflected in the qualitative analysis. Some developers were given a great deal of license by their employer to work on the OSS project while delivering value to the business. For example, one interviewee commented that around 90% of their work on the OSS project was not specifically requested or directed by the company. The employer was said to have decided to invest in the OSS project to improve the quality and extend the functionality of the software, which is integral to the business. As the interviewee observed: "If we work towards code quality and community building, [the project] will become more attractive for other developers." By encouraging wider deployment and participation the company anticipates maintenance costs will be reduced in the long term. In addition, interviewees suggested other reasons for the intensity of the company's engagement with CloudStack or OpenStack including the relative size of the project and the relatively fast pace of software development, as well as the evolution of cloud services and the cloud domain. Two developers commented on the reciprocated development effort between company and OSS project as mutually beneficial, with one saying: ". . . we provide contributions to [the OSS project] at the same time that we get benefits from the community." Another described the process as follows: ". . . we execute the change in our branch and then we push to upstream. We work with a custom fork of [the project], which enables us to customize it to our needs and to create new features faster. We do have a roadmap to migrate back to upstream versions. Then, we fork again, and so on." The IoT sector, for example, follows a similar open innovation model to that seen in CloudStack and OpenStack by developing standards compliant infrastructure, such as communication protocol stacks, in OSS projects. The business model identified from analysis of interview data is product-focused. Companies collaborate in OSS projects, such as Leshan, to implement infrastructure that complies with established technical standards. Accordingly, development and maintenance costs for the communication systems are shared between companies contributing to the OSS projects. Individual businesses generate revenue through the development of connected products. Importantly, those products are interoperable because they have been developed to use standards-based implementations of supporting infrastructure. Furthermore, the development of new products is supported through the provision of reference implementations of communication and server infrastruc-ture. In addition to businesses developing products, some companies employ developers to work on open innovation projects, though not exclusively on a single project 7 . 
7. For example, the approach used by Bosch Software Innovations GmbH is outlined as part of a talk at FOSDEM 2018 (https://fosdem.org/2018/schedule/event/eclipse_iot/).
A third group of businesses is identified from analysis of the interview data. The companies can be less directly engaged with OSS projects as a consequence of their business model. Interviewees working for consultancies, and as individual consultants, for example, tended to interact intermittently with the OSS project, mostly when deploying software for a client or maintaining an existing installation. Although their business model is dependent on software from the OSS project, the need to interact with the project is reduced because of the manner in which the software is deployed. Typically the interaction consists of reporting issues or asking questions on the user mailing list. However, as some interviewees explained, there are occasions where they develop software for the OSS project to support their particular use case and contribute that code back to the project, if appropriate. The mechanisms available for businesses to contribute to an OSS project obviously influence the types of contributions that can be made. Analysis of the data collected from interviews also reveals constraints and opportunities within both businesses and OSS projects that shape contribution strategies. The value of participation in project governance to their employers, and in particular the enhancement of company reputation through active involvement in project governance, was a strong influence for some interviewees. One interviewee, however, provided a note of caution through a critique of company involvement in foundations and project steering committees, arguing that tensions within larger companies around budgets and company aims often mean that company commitment to projects does not always fully reflect the business's strategic interests. Companies may also adopt strategies that support a non-technical effort made by the project which brings business benefits to the contributor. The Bouncy Castle library, for example, is deployed in some operational contexts where software must be certified as meeting the Federal Information Processing Standards (FIPS). An interviewee described how the FIPS certification process is expensive and companies donate money to the Legion of the Bouncy Castle Inc. to contribute towards the payment of FIPS certification fees. Furthermore, other interviewees commented that financial contributions, or contributions in kind, can be more relevant for the project or foundation in addition to technical contributions, in some circumstances, and can form part of a strategy to support the OSS projects the business uses. One example highlighted by interviewees is the contribution of money through foundation memberships to finance the computing infrastructure required for the development of OpenStack. The location of expertise and knowledge within the business and the project also emerged, during analysis of interview data, as a factor influencing decisions about contributing to an OSS project. Constraints on a business in terms of expertise, knowledge or personnel required to contribute can mean that sometimes it can be more cost effective to commission work through consultancy and software development companies already engaged with a project to fix bugs or develop software for the OSS project. Committers for the two ASF projects studied, for example, are mostly company developers paid to work on the project.
Some businesses employing core developers sell services and support for software from the OSS project. One interviewee spoke of their employer having a support contract for much of the work with the upstream project, but added that they also undertake some smaller tasks in-house and submit bug fixes and feature requests as well. A slightly different model is found with Bouncy Castle and MariaDB Server where considerable domain expertise is often required to work with the source code. Both projects receive technical contributions from their respective communities, including larger companies. In addition, Crypto Workshop, a company run by the founders of Bouncy Castle, sells support subscriptions and undertakes commissioned work on the project source code. The MariaDB Corporation also undertakes commissioned work to develop additional functionality. Interviewees commented that acquiring the necessary expertise to create technical contributions can be prohibitively expensive, or cannot produce the timely solution required. Accordingly, paying for established expertise can be a cost effective means of contributing code to the project and gaining the required functionality.
The motivations identified for each work practice are summarised below.
• Bug reporting: bug report, to develop understanding of the problem and explain it clearly; tentative bug report, to acknowledge the potential knowledge gap between reporter and core developers, and to protect reputation in the project community by allowing for error.
• Feature requests: feature proposal, to ensure relevance of the proposed contribution and to prevent unnecessary work by the contributor; code review, to ensure suitability of contributed code, to support and encourage new contributors, and to document project source code quality expectations.
• Support: help-seeking, to identify a solution to a problem, to indicate continuing use of a feature under threat of deprecation, and to document a problem for a client; help-giving, to encourage software use, to identify bugs that might lie behind support requests, and to gain knowledge and develop skills to support customers.
• Other activities: documentation, to record project processes for fellow core developers; source code maintenance, to ensure tasks that do not attract non-core developers are completed; governance, to provide a layer of project oversight.
Bug Reporting
The care taken and attention to detail by developers contributing bug reports was a theme identified during analysis of the interviews. An interviewee explained that bug reporting was often a slow and painstaking process. They highlighted the challenge of gathering information from different parts of the project documentation and systems such as issue trackers, as well as external sources including question and answer sites like Stack Overflow, to determine whether the observed problem had been seen previously, and potentially resolved. The same interviewee asserted that only then would they ask a question on the appropriate mailing list. Even at that stage, they explained, the question would employ the use of negative politeness in a formula such as ". . . or have I missed something?", so that the possibility of an error or oversight is allowed for to protect the reporter's community reputation. Another interviewee emphasised the value of bug reports for improving project documentation. In their experience -both as a core developer and contributor -some bug reports arise because software behaviour found by users differs from the documentation. They also explained that where standards are implemented, bug reports can help identify and document cogent misinterpretations and misunderstandings of the standard. As well as the desire expressed by bug reporters to report their observations clearly, interview analysis found that core developers also needed clear evidence in bug reports. Without clear evidence to help identify the underlying cause they found the process of determining the nature of the problem, and possible solutions, to be considerably more challenging and often time-consuming. Some request that specific forms of evidence be included in bug reports, but it is not always supplied. One core developer interviewed, however, remained relatively optimistic: ". . . we have a template . . . please tell us which branch, please give us the logs. And it's frequently ignored. But, OK, if it's ignored and we can handle the issue in another way it's OK."
Feature Requests
Qualitative evaluation of the collected interview data found some developers are often motivated to submit feature requests to migrate already implemented features into the OSS project. Often the feature has been implemented and tested within the contributor's private version of the OSS project source code, and the feature would be easier to manage if it were incorporated in the upstream source code and released as part of the software from the OSS project. One interviewee explained the challenging and expensive process of needing to integrate local code revisions into each release of the upstream project software to introduce functionality required before the company could deploy the software from the OSS project to customers. The effort required to maintain their own version of the upstream project's source code and make revisions to each release to integrate their own code introduced an unacceptable level of effort and cost for the business. By having their features incorporated into the upstream project, future revisions to the project would not break the code, and the company would not have the overhead of reintegrating code to each release. The interviewee observed that the process of having the feature accepted and integrated in the OSS project had taken a long time and had been achieved in a series of steps, rather than as a single feature request. They also emphasised the importance of the project to the business, saying: "We had to do some fairly awkward things . . . to continue being able to produce what we needed to produce and to make a saleable product." Depending on the project infrastructure, there are opportunities to negotiate with core developers and other users about proposed features to understand whether the proposed feature is likely to be accepted. Some interviewees commented that discussion of proposals of new or extended functionality was an effective process for scrutinising proposals and revising them to ensure the quality of the proposed contribution, and its acceptability to the rest of the community. Another interviewee added that working in relevant areas of the project source code that other contributors appeared not to be interested in increased the likelihood of features being accepted. Part of the process of submitting a feature request is the code review work undertaken by the core developers prior to accepting and integrating the feature. In a large project such as OpenStack there is an extensive process of code review undertaken by developers from different contributing companies.
A theme that emerged from the analysis of the collected interview data was the value of the review process as a means of preventing unwanted or undesirable changes, and supporting the longevity of the project. Interviewees also noted the need for vigilance during reviews to ensure that implementations were sufficiently generic so that the project remained useful to the wider user community and that new features were implemented for all supported platforms. Core developers also highlighted the value of a supportive and educational review process. In particular, the contributing developer should be encouraged to continue contributing to support the longevity of the community and the software created by the project. Two core developers also commented that supportive code reviews can require additional effort and might seem an inefficient use of time. However, they both emphasised the value to the OSS project of investing time, with one saying: "Is it worth it? Personally, I feel that code contribution integration is often less efficient than if I coded it myself but this is normal as contributors need to gain skills on the project, this is a kind of investment. For the longevity of the project, it's important to have more people involved." An additional benefit of code reviews, explained by one interviewee, is that they document the core developers' expectations for source code quality, and, in their opinion, potentially, influence the quality of future contributions. One interviewee also drew attention to the challenges of processing large feature requests, explaining that the larger a submitted feature was the more difficult it was to review. In practice they preferred submissions to consist of smaller features so that each could be better understood, tested in isolation, and integrated more easily. Non-core contributors with limited experience of OSS reported finding the practice a challenge at first. Interviewees working as core developers also identified that some software has additional requirements that are not always apparent to contributors of code. Additional considerations can consequently make integration of contributions time-consuming and challenging. Examples given by interviewees include the security aspects of Bouncy Castle, Contiki-NG and MariaDB, and differences between the virtualisation models implemented on hardware platforms used for CloudStack and OpenStack installations.
Support
The definition of support activities used earlier includes both asking and answering questions as well as the provision of documentation, both for fellow contributors and end users. Some subtleties of the process of asking for and providing help in forums and on mailing lists were illuminated by analysis of the interviews. Rather than simply asking for help, help requests can be considerably richer and have multiple intended audiences. One interviewee explained having used a mailing list question about a potential bug: ". . . to document to the project that there are people still using it [the functionality] and there are likely to be people using it for quite some time." Furthermore, the same interviewee reported using questions about potential bugs and possible solutions to document for their clients that the company was making progress towards resolving the issue. Additional uses of mailing lists were identified through the qualitative analysis. Mailing lists are not just help forums, or places to make announcements, but can also be a practical means of disseminating information.
One interviewee saw the provision of an email summarising decisions made in different communication channels as helpful for those involved in one particular area of development. Furthermore, they argued that while a variety of communication channels enhanced interaction in large projects, it also created problems for information management, particularly in the sense of curating knowledge of the project's evolution. Analysis also found both help-givers and help-seekers value the learning process required to formulate and respond to mailing list questions. One help-seeker explained the value of preparing questions to their working life saying: ". . . part of the motivation is that I found it to be a useful way of fixing problems. So I probably write twice as many questions with the intention of posting them to a mailing list as I actually post." They then elaborated: ". . . by the time you have formulated a good question and collected all the information to say what the problem is then the process of asking the question will often make the answer become clear." A consultant explained the value of reading mailing lists and providing help as a way of acquiring knowledge and skills. As well as learning the soft skills required to help others, the interviewee identified an additional benefit to their professional practice as: ". . . learning about the problems that other people face so that when I run into similar problems with the consultancy work I can remember the problem." Other Activities We also found that core developers, in particular, but not exclusively, engaged in a wide range of activities, both technical and non-technical, to support the longevity of the project. An interviewee explained the value of documenting project processes, ". . . because then anyone else can take over parts of the process when someone leaves . . . " Three more interviewees highlighted, from their experience as core developers, the importance of undertaking basic software maintenance tasks and code quality improvement activities, such as fixing some bugs, refactoring code and identifying unused portions of source code for deletion. One commented about the motivation of contributors to maintain code: ". . . if there is a very well-known bug that somebody needs to sit and fix: for some people it's not part of the day-to-day job so they will not do it. . . . sometimes it means that we end up getting lots of nice cool new things, but there's that old thing [bug] back there that nobody is looking at." From the interview analysis we also identified motivations for other forms of contribution. Some, such as contributions to project governance, for example, depend on the opportunities provided by the project and foundation structure. Projects including Eclipse Foundation projects and OpenStack have steering committees that help determine the direction of software development. One interviewee emphasised the value of strong steering committees to projects in providing a layer of oversight. OpenStack is a very large project and has forms of internal organisation and structures that many other projects do not. For example there are a number of SIGs for cross-cutting, project-wide concerns, such as security. One OpenStack developer interviewed stressed participation in the SIGs as a valuable aspect of their work because they provide input into technical contributions in the form of trying to standardise development approaches across the project. 
ANALYSIS The combination of observations and analysis of project archives and rich insights and experiences of how experienced contributors work with community OSS projects provides rich accounts of work practices used, as well as explanations of why the observed approaches are used. In this section we elaborate on the variation in type, extent and intensity of interaction between company practitioners and OSS projects. We also identify how the nature of some contributions represent an investment in the project by the business, and analyse how costs and availability of resources can influence the way that businesses and practitioners contribute to community OSS projects. Type, Extent and Intensity of Interaction A wide variation in the extent and intensity of engagement between individual practitioners and companies, and the OSS projects was observed. Some practitioners spend a large proportion of their working time directly on a project while others interact with projects less often. The form and the intensity of the engagement with the OSS project appear to be largely related to how the business adds value to the project software. The software might be deployed as a component of a product or service that directly generates revenue, for example, or the business may add value by applying their expertise to deploy, and perhaps manage, solutions for customers that include the OSS project software. An additional factor identified by interviewees was the critical nature of the OSS project software to the business model of the company they worked for. For some practitioners there was no viable alternative software solution. Their interactions with the project focused on the two objectives of delivering a viable product or technical solution now, despite difficult challenges, and working towards a more cost effective solution for the business in the future. Where companies provide consultancy services to support deployment of the project software then their interactions with the OSS project may be infrequent and limited largely to help-seeking and bug reporting. The company's main requirement in such situation is reliable software for their customers to use and, perhaps, some additional knowledge of how to deploy the software. Companies that add value by incorporating the OSS in a product or service will have similar needs to those adding value through consultancy services, but also may need to add or improve functionality to support their work. It is notable, however, that many companies deploying OSS components in products and services do not engage with or contribute to some, if not most, of the OSS projects whose software they use. The first reason is that the component is perceived to be a commodity, or is used as if it were one. For example a specific version of a component may be deployed and only reliable, fully tested functionality is used. The second reason is that the component is replaceable. Amongst the interviewees there was also variation in the intensity of engagement with the OSS project. Some contributors, especially in the cloud domain, spent a great deal of their time working on the project software either to improve the software or to add functionality required by products they were developing that built on the OSS project software. The reason for the intense or extensive engagement with the project seemed to be related to the domain, as the greatest intensity of activity was seen with contributors to CloudStack and OpenStack. 
A contributing factor reported by interviewees was the speed of product development within the domain, where, typically, development work was undertaken on a private fork of the project software, and then reintegrated with the upstream project. Investment in Projects Some interviewees, particularly core developers, identified strategic dimensions to their activities. They spoke of an additional investment of time and effort when reviewing and integrating contributed source code to encourage further contributions. Others identified software maintenance tasks such as refactoring that would not be done by non-core contributors. In some cases, where the project software is a major component of the employer's revenue stream, core developers also reported a larger part of the work they did was of direct benefit to the project, but of less immediate benefit to their employer. Work of this kind contributes to the quality of the software created by the project as well as making technical contributions easier by reducing technical debt. Similarly, activities, that may be non-technical, such as supporting the governance of the project contribute to the longevity of the project, and represent a longer term perspective of the project. That companies invest in the project rather than just contributing to the technical effort is indicative of the long-term importance of the project to the business. Operational Costs Engagement with a project is often seen as a long-term investment in staff time and expertise, which also consumes company resources that are typically considered a shortterm operational cost. Consequently, for both businesses and individual contributors it is desirable that interactions with OSS projects should be effective and efficient. We found interactions where developers proceed cautiously by, for example, trying to explore whether a specific feature might be accepted (Example 6) so as to avoid duplication of effort, or unnecessary work. However, we also found that contributors can be drawn into time-consuming interactions (e.g. Example 4). Example 4 and similar cases can have obvious causes, such as miscommunication or not following instructions, but in some instances the cause is less clear and further research is needed to understand the causes of inefficient interactions between contributors and project and how they might be avoided. If, as Milinkovich argues [43], OSS will be used increasingly, then without understanding the causes of inefficient interaction and how to avoid them, a lot of working time, and thus resources, may be used unnecessarily. Several interviewees indicated the value of preparation of questions and bug reports using the project documentation, and external sources, as a way of ensuring that interactions within the project were more effective and efficient. One interviewee drew attention to the value to their employer of help-giving within a project as preparation for working with customers. Interviewees also identified the richness of some contributions that was not immediately apparent from observation as a means of trying to achieve additional goals. For example, an outwardly simple helprequest was used to try to influence software development plans within the project. A further aspect of the costs encountered, and identified by interviewees, is that of technical debt, in the sense that maintaining local source code improvements, or bug fixes and reintegrating them at each release, is not an efficient or cost effective way of working. 
It is, however, as was identified in one case, a necessity where the upstream project is critical to the business model. In the long term a business reduces costs through enhancements to OSS project software being integrated in the project software; so long as they do not generate revenue through the addition of commercially differentiating functionality. As well as the influence of deployment and the business model on the way that companies contribute to OSS projects, two additional factors appear to motivate the approaches to contribution by companies and those working with them, or on their behalf. The first is the maturity of the software implementation and the domain, and the second is the location of the knowledge and expertise required to complete a given task. In both cases care is needed to identify cost effective opportunities to make contributions. Software and Domain Maturity Projects where the software implementation is perceived as relatively immature and the core functionality under development, such as Leshan (where some parts of the OMA LWM2M specification remain to be implemented), can require greater investment of company resources in the project (in collaboration with competitors or not) to ensure the software meets the company's requirements. Opportunities for a company to work with the community include company developers joining the community, and the company employing core developers within the community. The latter is possible where the implementation aims of the company do not conflict substantially with those of the community. An example might be the development of software to implement an existing standard where the domain is thought to be relatively mature and clearly defined. Software maturity is not easily evaluated and has many aspects. Consequently, determining the maturity of an OSS project requires recognition of aspects of the project, and whether they might be subject to change. The core functionality of Solr, for example, is perceived as relatively mature. However, machine learning techniques are relevant in the search domain and are being introduced to Solr. In addition, relative increase in the amount of data searched and consequent changes to the hardware platforms on which Solr is deployed in industry also influence the direction of software development. Consequently there can be implementation or reimplementation of features and functionality as software technology develops, and software and configuration changes in response to developments in external technology. Accordingly, though aspects of an OSS project may be considered mature and the direction of software development largely self-regulates, there are aspects of the project that may evolve as a consequence of external changes. Although the involvement of some companies using the software is mainly limited to contributing bug reports, vigilance is also required to ensure the project software continues to meet existing requirements. Knowledge and Expertise Interviewees also identified the location of knowledge and expertise as a significant factor when deciding to make a contribution to an OSS project, because it can indicate who may be best placed to contribute as well as the type of contribution that can be made. Three broad categories of knowledge emerged from analysis of OSS projects and the interview data: knowledge of the application domain, knowledge the software implementation, and knowledge of software deployment and use. 
First, the application domain knowledge in the software is an asset that companies exploit to add value when delivering a service or a product. In some sectors, for example security, or during product innovation there can also be a significant level of domain knowledge within the company using the software. In the case of Bouncy Castle, for example, the application domain expertise and awareness of the community helps to identify new areas for development, as well as to report bugs clearly to the developers. Also, in projects associated with open innovation, like Leshan, expertise within companies that arises from product development supports the implementation of features, or missing functionality, in the upstream project. Second, detailed knowledge of the software implementation is usually limited to the core developers within a project. Generally, it is possible to delegate responsibility for implementing bug fixes and feature requests to the upstream project. However, some deployment contexts can require timely implementation of fixes and features. Where the project software is deployed as part of a product, or delivers a critical service, it can be necessary for contributors to implement bug fixes to maintain revenue generation. A key challenge in such circumstances, therefore, is to acquire sufficient knowledge of the software implementation to be able to implement meaningful changes. Furthermore, there is a trade-off between implementing a bug fix that resolves the problem until it is fixed in the next release, and a solution that meets the requirements and development plans of the core developers sufficiently that the changes will be incorporated into the upstream project. Striking the right balance is a key challenge for a company, and, as identified through the analysis of interview data, there are many ways to acquire software implementation knowledge and expertise. Employees can acquire knowledge sufficient to implement fixes, though there may be limitations to the level of expertise that can be acquired alongside their dayto-day work. However, some core developers are willing to invest time to nurture contributors so that they are able to make more effective technical contributions. There is also the issue highlighted by some interviewees where the level of knowledge and expertise required to develop the software, in particular cases, can be such that it may be an unrealistic proposition for a contributor (individual or company) to acquire that expertise, especially where business resources are constrained. A further option, identified by some interviewees, is to hire expertise already within the upstream project in the form of core developers -either as consultants, or to invest in the project, if appropriate, and employ core developers as staff. Alternatively, companies may develop working relationships with businesses that employ core developers. Third, deployment and use knowledge and expertise lies both with the user community and the core developers. Contributing knowledge to the project community makes the project more attractive to new users and contributors, and can contribute to the development of professional expertise of the help-giver. Documenting knowledge of deployment also helps improve the quality of the software by recording use cases and requirements that the core developers may not have considered. 
To summarise: the challenge for any business contributing to an OSS project is to understand the wide range of factors that might motivate the contribution, as well as the factors that constrain avenues of action, and the need to identify appropriate means of interacting with the project that are an effective use of resources. Discussion In this article we have reported on how and why companies and practitioners acting on their behalf contribute to eight community OSS projects. Companies contribute to OSS projects for business reasons, and decisions about the nature of the contribution are influenced primarily by the need for the business to generate long-term value, i.e. for the business to benefit from the contribution. We found a wide variety of ways businesses contribute to projects and provided some explanations of the choices made from analysis of interviews with practitioners. Observations of practitioners' work practices and the interviews provide rich descriptions of the factors considered, including where expertise and knowledge lie, and how effectively the business might be able to contribute in order to achieve its goals. The goals may be short-term, such as fixing a bug, or more long-term, such as supporting a particular development effort, or perhaps even influencing the direction of software development. A major influence on the type and level of engagement with an OSS project is the way in which the business deploys the software created by the project. In some deployment contexts the software requires little or no modification to create revenue for the company; the revenue comes from selling the expertise required to deploy the software. At the other end of the spectrum the software is deployed in a context where ongoing development of features is required to support the generation of revenue. In such usage contexts the software may be deployed to provide services to customers who have evolving requirements, or it may be that the software is in a rapidly evolving domain. However, it is important to remember that businesses participate in OSS projects for multiple reasons, and while there may be common factors for many companies contributing to a given project, they do not lead to the same form of contributions to the project. Contributions, and consequently the pace and direction of development, are the combination of the needs and capabilities of many companies. The capability of a company to make a technical contribution to an OSS project is contingent on having the necessary technical and implementation knowledge and expertise, as well as other resources, including staff time to make the contribution. Companies need to be aware of their strengths and weaknesses when making decisions and to adopt a cautious approach when proposing additional features, for example. Furthermore, that if, for any reason, the company lacks the capacity to make a technical contribution, then there is the opportunity to outsource the work to others. An orthogonal factor that must be considered is the timeliness of any implementation; can the company afford to wait for the project to implement the change? The company may then need to acquire the necessary expertise. The contributions made by interviewees were motivated, mostly, by the need to complete software development tasks for their employer. The majority of activity was intended to meet short-term goals, and driven by the need to deliver a product or service. 
Some activity was more strategic and intended by the commissioning business to support the community OSS project. We also identified instances where individuals spent part of their paid work making contributions to develop skills beneficial to the business. While many interviewees were motivated for career reasons to work in jobs where they could contribute to OSS projects, contributions were made in the context of paid employment, and for the benefit of the business. Rather than simply contributing in ways that support the technical effort where there are direct benefits to the company's revenue, some companies contribute to the longer term future of the project. There appears to be a point at which the project becomes of sufficient importance for the business to support aspects of software development, through employing core developers, that contribute to the long-term future of the project in ways that do not have an apparent financial return for the company. Exactly what factors lead to the decision to invest in a community OSS project is a subject for future research. We have identified a range of non-code contributions including donations, sponsorship, and participation in governance processes, and some motivations to make such contributions. Documentation, for example, is an activity that developers contribute to and that some individuals specialise in. Individuals contributing primarily to the documentation process did not respond to invitations for interview. The broad topic of non-technical contributions and particularly those who make only non-code contributions to community OSS projects is therefore an area for future research. The study presented in this article presents systematic analyses of collected data from public sources and contributors to eight widely used community OSS projects implementing software in a variety of domains. We acknowl-edge the inherent characteristics of utilising a purposeful sampling for transferability of findings from the study. While we cannot reflect every possible form of business pressure on, and work practice used by, contributors to OSS projects in this study, we have provided results which draw from a systematic analysis of rich insights and experiences of activities related to the investigated community OSS projects, including drawing on the long experience of the authors. Consequently, we conjecture that the findings may be particularly representative for transferability of results related to company involvement with other community OSS projects. CONCLUSIONS We have reported on the interactions of companies with eight community open source software (OSS) projects governed independently and by foundations. The work practices used by companies to contribute to OSS projects are identified and characterised, and the motivations for the use of particular contribution strategies and work practices explored through analysis of data gathered during interviews with contributors. Our investigation provides a picture of the inherent complexity for businesses working with OSS and illuminates the manner in which companies and practitioners contribute to OSS projects, despite the outward similarity of the project structures, available communication channels, and apparent business models and priorities of participants. 
We found key factors that help determine how a company interacts with a community OSS project include the maturity of the software created by the project, the business context within which the company deploys the software, and the balance of areas of knowledge and expertise between the company and the project. In addition, companies have a strong interest in the longevity and sustainability of projects they use and contribute to, that also motivates their activities and both technical and non-technical contributions, and can motivate a much more strategic investment of resources in the project. This study makes the following contributions to the existing body of knowledge: • Identification of work practices used by companies and practitioners to contribute to eight widely used community OSS projects. • Documentation of factors for both the community OSS project and contributors that influence company and practitioner decision-making. • Documentation of insights from industrial praxis into the opportunities and constraints of the relationships between projects, companies and practitioners. • Identification of opportunities for companies and practitioners to improve their strategies for working with community OSS projects. In summary, this study contributes novel findings about the nature of, and the decision-making behind, strategic and everyday contributions by companies and practitioners to community OSS projects. The rich descriptions and analysis of the interviewed practitioners' insights and experiences provide an understanding of the nature of the complex interplay between influences from technical and business considerations that inform decisions made by businesses and individual practitioners about the work practices used to make contributions to independently governed OSS projects. The findings draw from investigations of company involvement with eight community OSS projects which to large extent may be transferable to other similar contexts involving other OSS projects. However, findings from the study should not be perceived as supporting a contextindependent prescribed method for contributing successfully to all other community OSS projects. Hence, findings from the study indicate the need for awareness and understanding by businesses and practitioners of the many characteristics of both their own situation and goals, and those of the OSS project, in order to be able to contribute effectively. Erik Lönroth holds an MSc in Computer Science and is the Technical Responsible for the high performance computing area at Scania IT AB. He has been leading the technical development of four generations of super computing initiatives at Scania and their supporting subsystems. Erik frequently lectures on development of super computer environments for industry, open source software governance and HPC related topics.
v3-fos-license
2024-05-07T15:06:06.928Z
2024-05-01T00:00:00.000
269609357
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://assets.cureus.com/uploads/case_report/pdf/227433/20240505-22589-1vpwrpj.pdf", "pdf_hash": "253fc63c8ebc58735becdadf6406e2b1e935305f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42013", "s2fieldsofstudy": [ "Medicine" ], "sha1": "a33a19653b3d22573d963ec1d7be5496c65aaa4e", "year": 2024 }
pes2o/s2orc
Occult Gastrointestinal Hemorrhage From a Meckel's Adenocarcinoma: A Diagnostic Dilemma
Gastrointestinal bleeding from Meckel's diverticulum can be challenging to diagnose. We present a case of a 78-year-old man with painless hematochezia. Despite undergoing standard investigations, the source of bleeding remained elusive until arteriography localized bleeding from Meckel's diverticulum. Prompt management involved embolization followed by laparoscopic resection. This case underscores the need to consider Meckel's diverticulum as a source of obscure gastrointestinal bleeding even in the elderly, as well as the need to use non-conventional diagnostic approaches when standard methods fail. The successful management of the case through embolization and laparoscopic resection highlights the crucial role interventional radiologists and surgeons play in the management of Meckel's diverticulum-related complications.
Introduction
Lower gastrointestinal bleeding (LGIB) is a frequently encountered clinical phenomenon. Typically, colonoscopy, nuclear medicine scan, and angiography, including CT mesenteric angiography (CTMA), successfully identify the etiology of the bleeding in most cases. However, a subset of patients presents a diagnostic dilemma, as the source of bleeding remains elusive despite employing conventional procedures [1]. We present the case of a 78-year-old man who presented with painless hematochezia. He underwent colonoscopy, upper gastrointestinal endoscopy, and computed tomography (CT) angiogram, all of which yielded inconclusive results. A nuclear medicine red blood cell scan revealed bleeding activity in the right colon, and a repeated colonoscopy with terminal ileum intubation was unremarkable. Subsequent provocative arteriography, performed with heparin and alteplase, raised suspicion of Meckel's diverticulum due to active bleeding from the ileal branch of the superior mesenteric artery, initially thought to be the vitelline artery. This was managed with embolization followed by laparoscopic resection.
Case Presentation
A 78-year-old man with a medical history of non-anticoagulated paroxysmal atrial fibrillation presented with painless hematochezia for one day. He did not report previous episodes of gastrointestinal (GI) bleeding. On presentation, he was hemodynamically stable with hemoglobin at 12.8 g/dL. The coagulation profile and serum chemistries were within normal limits.
A colonoscopy showed the presence of hematin in the terminal ileum and throughout the colon. However, the source of the bleeding was not identified. Repeat colonoscopy with terminal ileum intubation the next day was unremarkable. Due to continued hematochezia, he underwent a CT angiogram of the abdomen and pelvis, which did not reveal the source of the bleeding. He then underwent an esophagogastroduodenoscopy (EGD), which also did not reveal a source of bleeding. Subsequently, a nuclear red blood cell bleeding scan indicated bleeding activity suspected to be in the right colon (Figure 1). Based on this result, he underwent a repeat colonoscopy which showed a normal terminal ileum but re-demonstrated hematin throughout the colon without active bleeding. A subsequent arteriography did not reveal active bleeding. The patient was transferred to the intensive care unit (ICU) for provocative arteriography using a hospital-established protocol of heparin and alteplase. He had an episode of hematochezia in the ICU. Direct arteriography after provocation showed active bleeding from the ileal branch of the superior mesenteric artery, which was thought to be the vitelline artery. Thus, bleeding from a Meckel's diverticulum was suspected. Coil embolization of the bleeding vessel was performed during angiography. The patient ultimately underwent diagnostic laparoscopy with washout and resection of a perforated Meckel's diverticulum. Pathology showed a moderately differentiated adenocarcinoma with mucosal ulceration, necrosis, and involvement of all layers of the bowel (Figure 2). Postoperatively, the patient experienced no further episodes of bleeding.
Discussion
Although the majority of GI bleedings are identified and subsequently managed by upper GI endoscopy or colonoscopy, obscure GI bleeding can be challenging to manage [1,2]. In this case, the source of GI bleeding was not identifiable by colonoscopy with terminal ileum intubation, EGD, CT angiography, direct arteriography, and nuclear medicine red blood cell scan. Faced with ongoing bleeding and diagnostic uncertainty, we proceeded with provocative angiography in accordance with our institutional protocol. This procedure was conducted in the ICU with the simultaneous presence of a general surgeon and interventional radiologist, ensuring a swift response to any potential complications. Capsule endoscopy could have been considered as a diagnostic option, but it was not available at the hospital units at the time of the patient encounter.
Meckel's diverticulum is a rare cause of GI bleeding [3], with Meckel's adenocarcinoma being even rarer [4]. Meckel's diverticulum is the most common congenital malformation of the gastrointestinal tract [3]. It results from an incomplete obliteration of the fetal vitelline duct during the first 8 weeks of gestation. As a true diverticulum, it is lined by typical ileal mucosa. However, most Meckel's diverticula contain heterotopic tissue, with ectopic gastric mucosa being the most common heterotopic tissue. Although Meckel's diverticula can remain asymptomatic, complications may occur due to the acid secretion by the heterotopic gastric mucosa, leading to ulceration of the nearby ileal mucosa [3]. Although the incidence of Meckel's diverticulum is equal in both males and females, the frequency of complications is 3-4 times higher in males [5]. While Meckel's diverticula typically present in childhood, they may remain undiagnosed until adult life [3,6]. Due to its rarity in adults, it is often missed or its diagnosis is delayed preoperatively [6]. The most common complication in adults is intestinal obstruction with intussusception, with Meckel's diverticulum serving as the lead point [7]. Other complications include ulceration, diverticulitis, and perforation with subsequent fistula formation. Rarely, individuals with Meckel's diverticulum may develop torsion, volvulus, and tumors, such as carcinoid tumors, carcinoma, sarcoma, or adenocarcinoma [7]. Hemorrhage is a common presentation in both children and adults, with children having red or maroon stools and adults having more melenic stools due to slower colonic transit time [7]. Imaging modalities, such as ultrasound, CT scan, angiography, and MRI, are available to aid in diagnosis; however, the sensitivity and specificity are low [8]. Additionally, the 99mTc-pertechnetate Meckel's scan is helpful in diagnosing Meckel's diverticulum as it can detect ectopic gastric mucosa [9]. A Meckel's scan was not performed in our case due to a lack of clinical suspicion for Meckel's diverticulum. Given the patient's age and presentation with painless hematochezia, other etiologies such as diverticulosis, vascular malformations, or malignancies were considered initially. Instead, we opted for other standard diagnostic modalities guided by the patient's clinical presentation and established investigative practices for evaluating gastrointestinal bleeding in adults. We became suspicious of Meckel's diverticulum based on the arteriography, which showed active bleeding from the ileal branch of the superior mesenteric artery, initially thought to be the vitelline artery. Subsequently, the patient underwent embolization and surgical resection of the Meckel's diverticulum. There are several teaching points involved in this case. Meckel's diverticulum can be seen in elderly patients. It can be missed despite intubation of the terminal ileum during colonoscopy. Interpretation of a positive nuclear medicine bleeding scan requires care and attention. The location of activity shows where blood is found at the time of the scan and not necessarily the bleeding site. A provocative arteriography may be required to identify and treat the source of obscure gastrointestinal bleeding. While asymptomatic Meckel's diverticulum may not necessitate surgery, surgical resection becomes imperative for symptomatic cases, particularly those involving bleeding, as there is a potential risk of harboring malignancy.
Conclusions
In summary, this case underscores the diagnostic challenges of obscure gastrointestinal bleeding, particularly in elderly patients with uncommon conditions like Meckel's diverticulum. Despite a thorough evaluation involving various imaging modalities, the source of bleeding remained elusive until provocative arteriography revealed active bleeding from a Meckel's diverticulum, ultimately leading to surgical resection. The case highlights the need for a high index of suspicion, especially in the elderly, and the importance of considering rare etiologies in cases of persistent gastrointestinal bleeding. It also emphasizes the limitations of conventional imaging techniques and the significance of interpreting positive nuclear medicine bleeding scans with caution. The successful resolution of the case through targeted interventions serves as a valuable teaching point for clinicians managing obscure gastrointestinal bleeding.
FIGURE 1: Images of the nuclear medicine red blood cell scan showing bleeding in the right upper quadrant, suspicious for bleeding in the terminal ileum or right hemicolon (red arrows).
FIGURE 2: Pathologic assessment of the resected tissues. (A) Hematoxylin and eosin staining at 100x magnification showed moderately differentiated adenocarcinoma (red arrow) adjacent to benign small intestinal mucosa (blue arrowhead). (B) High-resolution images with hematoxylin and eosin staining at 400x magnification demonstrated marked architectural distortion, variations in nuclear size and shape, hyperchromasia, and increased mitoses. (C) Immunohistochemistry staining for CK7 at 100x magnification indicated strong staining in the adenocarcinoma (red arrow) and weak staining of the adjacent benign small intestinal mucosa (blue arrowhead). (D) Immunohistochemistry staining for Villin at 400x magnification showing loss of Villin immunostaining in the moderately differentiated adenocarcinoma (red arrow) and normal small intestinal mucosa with strong luminal and membrane staining (blue arrowhead).
v3-fos-license
2020-12-17T09:10:41.293Z
2020-12-01T00:00:00.000
229323528
{ "extfieldsofstudy": [ "Medicine", "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2304-8158/9/12/1875/pdf", "pdf_hash": "130aea7e2ee2b4ec02edcfd32b6eda21f651445e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42014", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "sha1": "ad6d5c236eab76085c80e4758cefda6af1bbc54d", "year": 2020 }
pes2o/s2orc
Nutrient Effect on the Taste of Mineral Waters: Evidence from Europe
In this study, 15 selected bottled mineral waters from chosen European countries were tested for their mineral nutrient contents. In particular, six important nutrients (Ca2+, Mg2+, Na+, K+, HCO3−, Cl−) were measured using atomic absorption spectroscopy. The content of mineral nutrients in all sampled mineral waters was compared to their expected content based on the label. Consequently, their taste was evaluated by 60 trained panelists who participated in the sensory analysis. The results from both the atomic absorption spectroscopy and sensory analysis were analyzed using the regression framework. On the basis of the results from the regression analysis, we determined to what extent the individual mineral nutrients determined the taste of the mineral water. According to the regression results, four out of six analyzed nutrients had a measurable impact on taste. These findings can help producers to provide ideal, health-improving nutrients for mineral water buyers.
Introduction
The body of an average adult male is 60% (w/w) water and the body of an average adult woman is 55% (w/w). There may be significant differences between individuals on the basis of many factors such as age, health, weight, and gender. Body water is divided into intracellular and extracellular fluids. The intracellular fluid, which makes up about two-thirds of the body water, is the fluid contained in the cells. The extracellular fluid that makes up one-third of the body's fluids is the fluid contained in the areas outside the cells. Extracellular fluid itself is divided into plasma (20% of extracellular fluid), interstitial fluid (80% of extracellular fluid), and transcellular fluid, which is normally ignored in water calculations, including gastrointestinal, cerebrospinal, peritoneal, and ocular fluid [1]. How much water one should drink is broadly discussed outside of scientific circles, but there is no one-size-fits-all answer. The daily four-to-six cup rule is for generally healthy people. This can also vary, especially if there is a water loss through sweat because of exercise or higher temperatures [2]. Drinking water is usually consumed as bottled water or as tap water. Bottled waters are generally very popular and also very diverse in terms of overall mineral content and composition. There are many mineral spring drinking waters on the market, which are much diversified in terms of mineral composition. According to Act 275/2004 Coll. §2 of Czech law, bottled infant water is a product of high-quality water from a protected underground source, which is suitable for the preparation of infant food.
Mineral Nutrients
The mineral nutrient contents are important characteristics of mineral water. Mineral nutrients are inorganic substances that must be ingested and absorbed in adequate amounts to satisfy a wide variety of essential metabolic and/or structural functions in the body [5]. Mineral water contains a combination of the main cations (Ca2+, Mg2+, Na+, K+), anions (HCO3−, Cl−), and specific compounds (which can determine the medicinal value of water) in varying amounts. All mineral nutrient contents can be read from individual labels provided on the packaging. Labeling fulfils Directive 2009/54/EC of The European Parliament and of The Council on the exploitation and marketing of natural mineral waters from 18 June 2009 [6]. Mandatory criteria that have to be written on a label are presented in Table 1.
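The comparison between measured and label-declared contents described above can be illustrated with a small calculation. The following is a minimal sketch, assuming hypothetical concentrations for the six measured ions and an arbitrary 10% tolerance; none of these numbers come from the study or from Directive 2009/54/EC, and the function name is illustrative only.

```python
# Hypothetical sketch: compare measured nutrient contents (e.g. from AAS)
# against the values declared on a bottle label. The sample data and the
# 10% tolerance are illustrative assumptions, not values from the study.

NUTRIENTS = ["Ca2+", "Mg2+", "Na+", "K+", "HCO3-", "Cl-"]

def label_deviation(measured_mg_l: dict, labelled_mg_l: dict, tolerance: float = 0.10) -> dict:
    """Return the relative deviation of measured vs. labelled content for each nutrient."""
    report = {}
    for ion in NUTRIENTS:
        measured = measured_mg_l[ion]
        labelled = labelled_mg_l[ion]
        deviation = (measured - labelled) / labelled if labelled else float("nan")
        report[ion] = {
            "measured_mg_L": measured,
            "labelled_mg_L": labelled,
            "relative_deviation": round(deviation, 3),
            "within_tolerance": abs(deviation) <= tolerance,
        }
    return report

if __name__ == "__main__":
    # Entirely made-up example water, for illustration only.
    measured = {"Ca2+": 68.0, "Mg2+": 21.5, "Na+": 9.8, "K+": 1.2, "HCO3-": 310.0, "Cl-": 12.4}
    labelled = {"Ca2+": 70.0, "Mg2+": 20.0, "Na+": 10.0, "K+": 1.0, "HCO3-": 300.0, "Cl-": 12.0}
    for ion, row in label_deviation(measured, labelled).items():
        print(ion, row)
```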
An overview of the mineral nutrients and their intakes, as well as dislike thresholds, is presented in Table 2.
Table 1. Indications and criteria laid down in Article 9(2) of Directive 2009/54/EC [6].
Very low mineral content: mineral salt content, calculated as a fixed residue, not greater than 50 mg/L.
Low mineral content: mineral salt content, calculated as a fixed residue, not greater than 500 mg/L.
Rich in mineral salts: mineral salt content, calculated as a fixed residue, greater than 1,500 mg/L.
Calcium The equilibrium state of calcium is given by the relationship among its intake and its absorption and excretion. Even small changes in absorption and excretion can neutralize a high intake or compensate for a lack of it. There are large variations in calcium intake across countries, depending mainly on milk and dairy consumption. Developing countries have the lowest consumption, especially in Asia, and the highest consumption is found in Europe and North America [7]. Magnesium Most magnesium is found in bones and teeth. To a lesser extent, it is found in the blood and tissues. It is irreplaceable for most biochemical reactions in the human body, and its deficiency worsens the course of virtually every disease. We take in magnesium by eating foods such as bananas, nuts, or almonds. Magnesium relieves irritability and nervousness, releases energy from glucose (blood sugar), and affects proper bone structure. It keeps the circulatory system in good condition and prevents heart attacks. Deficiency can be caused by increased consumption of alcoholic beverages, caffeinated beverages, or semi-finished products, by stressful situations, and by the use of certain drugs such as various antibiotics or contraceptives. Sodium Virtually all sodium present in food and water is rapidly absorbed by the gastrointestinal tract. Its level in extracellular fluids is maintained by the kidneys and determines the volume of these fluids. Sodium balance is controlled through a complex mechanism involving both the nervous and hormonal systems. Sodium is mainly excreted in the urine in amounts that correlate with dietary intake [8]. It is also important in maintaining the acid-base balance, and thus the pH, in the human body; it also contributes to the control of blood pressure. Sodium and potassium are minerals necessary for the function of muscles and nerves, and potassium plays a vital role in the heart. Sodium-rich mineral waters can be an adjunct in conditions where excessive sweating occurs but are not suitable for long-term consumption by people suffering from hypertension and chronic heart disease. Sodium is hardly present in drinking tap water [9,10]. Potassium The adult human body contains a total of approximately 110-137 g of potassium, with 98% stored inside the cells and only 2% in the extracellular fluid [11]. Potassium is the cation most commonly found in intracellular fluid, and its proper distribution across the cell membrane is essential for normal cell function. Long-term maintenance of potassium homeostasis is ensured by changes in its renal excretion as a function of changes in its intake [7]. It is one of the most abundant minerals in the human body. In unprocessed foods, potassium is most often present in the form of bicarbonate generators such as citrate. The potassium added during food processing is mostly potassium chloride. The body absorbs about 85% of the potassium intake [10]. Chlorine Chlorine exists in nature primarily in the form of sodium chloride [11].
The electrolyte balance is maintained in the body by adjusting the excretion of the kidneys and the gastrointestinal tract according to the total intake. In healthy individuals, chloride is almost completely absorbed in the proximal part of the small intestine. Normal fluid loss is about 1.5 to 2 L per day, along with about 4 g of chloride per day. Most of it (90-95%) is excreted in the urine, while smaller amounts are excreted in the feces (4-8%) and sweat (2%) [12]. Healthy individuals can tolerate large amounts of chloride, provided there is sufficient concomitant intake of freshwater. Hypochloremia (a decrease of Cl − , especially in extracellular fluid) leads to a decrease in glomerular filtration in the kidneys. 1.1.6. Neutralization Capacity and HCO − 3 For natural, drinking, and service water, it is assumed that the most important buffer system is carbon dioxide (free)-bicarbonate-carbonate. It can be said that at approximately pH 4.5, all total carbon dioxide will be present in the form of free carbon dioxide, and at a pH of about 8.3, all total carbon dioxide will be in the form of bicarbonates. Normal water is usually in the range of 4.5-8.3; therefore, acid neutralizing capacity (ANC) is usually determined to pH 4.5 using a standard solution of hydrochloric acid, and base neutralizing capacity (BNC) to pH 8.5 is determined using a sodium hydroxide solution [13]. Literature Review The World Health Organization (WHO) has issued recommendations to define criteria of comfort and pleasure (water pleasant to drink, clear, and with a balanced mineral content). These recommendations are the basis used by the European Union to prepare directives to define drinking water as lacking any particular taste [30]. Nevertheless, different waters have different tastes. This is caused by the minerals dissolved in the water. Cations such as calcium, sodium, and potassium impact drinking water taste. A neutral taste is encountered where CaCl 2 < 120 mg/L and Ca(HCO 3 ) 2 > 610 mg/L, although when CaCl 2 is at levels > 350 mg/L, water is disliked. The optimum sodium concentration is 125 mg/L for distilled water and is typically found as NaHCO 3 and Na 2 SO 4 . Water is disliked when NaHCO 3 exceeds 630 mg/L or Na 2 CO 3 exceeds 75 mg/L. Potassium is typically present at low levels as KHCO 3 , K 2 SO 4 , and KCl, and is important at the cellular level of the taste buds. A low potassium concentration has positive effects on water acceptance. KCl acts similarly to NaCl in its taste effects. Magnesium is typically present in water as MgCO 3 , Mg(HCO 3 ) 2 , MgSO 4 , and MgCl 2 and can impart an astringent taste; it can be tasted at 100-500 mg/L [24]. Water containing magnesium salts at 1000 mg/L has been considered acceptable [31]. Consumers dislike water containing MgCl 2 > 47 mg/L and Mg(HCO 3 ) 2 > 58 mg/L. Anions such as bicarbonate, chloride, and sulfate also impact the taste. At neutral pH, bicarbonate is more common than carbonate and helps keep cations in solution. In contrast, carbonate increases at higher pH and at lower dissolved CO 2 levels. Aeration also adds O 2 and removes CO 2 , promoting carbonate precipitation. The taste threshold concentration for chloride is 200-300 mg/L [24,27]. Increased chloride levels in the water in the presence of sodium, calcium, potassium, and magnesium can cause water to become objectionable. Preference testing has revealed that water containing NaCl < 290 mg/L is acceptable and NaCl > 465 mg/L is disliked.
Testing also indicates that CaCl 2 < 120 mg/L is neutral, while CaCl 2 > 350 mg/L is disliked [18]. Sensory analysis of water can be conducted, for example, by Quantitative Descriptive Analysis [32] or Quantitative Flavor Profiling [33]. However, a detailed methodology was discussed earlier in Krasner et al. [34] and Suffet et al. [35], resulting in the standard method AWWA [36]. There are also a number of standards and articles focused only on odor [35,[37][38][39]. In one of the recent articles, Rey-Salgueiro et al. [40] evaluated bottled natural mineral water (17 still and 10 carbonated trademarks) to propose a training procedure for new panelists. The tasting questionnaire included 13 attributes for still water plus overall impression, sorted by color hues, transparency and brightness, odor/aroma, and taste/flavor/texture, and two more attributes for carbonated waters (bubbles and effervescence). Harmon et al. [41] investigated people's preferences for different water sources and the factors that predict such preferences using a blind taste test. The water preferences of 143 participants for one name-brand bottled water, one groundwater-sourced tap water, and one indirect potable reuse (IDR) water were assessed. As a predictor of water preference, the authors measured each participant's phenylthiocarbamide taste sensitivity. Koseki et al. [42] evaluated the taste of alkali ion water (calcium sulfate or calcium lactate added to tap water with a calcium concentration of 17.5 mg/L, creating a calcium concentration of 40 mg/L) and bottled mineral waters. There were two studies, one with 166 panelists and one with 150 panelists. For the evaluation of taste, the studies used a five-point Likert scale. The goal was to assess the improvement or deterioration of taste after adding alkali. It was found that the addition was preferable for any mineral water. Platikanov et al. [28,43] conducted chemometric analysis experiments with sensory analysis of water samples. The first experiment was divided into two studies. These were conducted with 17 and 13 trained panelists and 23 and 28 samples of water, respectively. The second experiment was conducted with 69 untrained volunteers and 25 samples of water. Both cases used bottled and tap water. In both cases, the researchers concluded that the most important factor that influenced panelists' preferences was the overall level of mineralization. Water samples with high levels of mineralization were rated with low scores. All aforementioned mineral nutrients also influenced the taste of mineral water. Consumers have different preferences as to which mineral water they will buy. One of these criteria is necessarily the taste of mineral water. The taste of mineral water in terms of nutrients is, therefore, one of the most important characteristics that determine the success of mineral waters in the market. Similarly, it is important to determine which mineral nutrients are perceived by consumers to be tasty and whether or not poor taste may make consumers omit the consumption of important health-supporting nutrients from mineral water. In this paper, we examined six important nutrients and their influence on the taste of water. Firstly, the nutrient content in water was measured, focusing on mineral waters with generally >500 mg/L mineral residue or TDS. Then, a model was built to define and explain their influence on taste. Finally, on the basis of the findings, conclusions were drawn.
Determination of Composition Three samples of each mineral water available on the market in Central Europe were analyzed. HCO − 3 determination: 100 mL of the sample was collected in a titration flask and 3 drops of methyl orange were added. Then, the sample was titrated with a standard hydrochloric acid solution until the first hint of onion coloring occurred, and the volume of acid consumed was recorded. Three measurements were made for each sample. The bicarbonate concentration was calculated using Equation (1) below, where C HCO3 is the molar concentration of HCO − 3 (mol/L); C HCl is the molar concentration of HCl (mol/L); V HCl is the volume of HCl (L); V is the volume of water (L); and M HCO3 is the molar weight of HCO − 3 (g/mol). Determination of Cl − : A total of 100 mL of the sample was collected in a titration flask and 1 mL of potassium dichromate was added. The sample was then titrated with a standard solution of AgNO 3 (0.07372 mol/L) until the first constant red color occurred. The volume of the consumed solution was recorded, and the concentration was calculated according to Equation (2) below. The volumetric solution itself was standardized against a sodium chloride base. Here, C Cl is the molar concentration of Cl − (mol/L); C AgNO3 is the molar concentration of AgNO 3 (mol/L); V AgNO3 is the volume of AgNO 3 (L); V is the volume of water (L); and M Cl is the molar weight of Cl − (g/mol).
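The arithmetic behind Equations (1) and (2) is simple 1:1 titration stoichiometry (HCl neutralizing HCO3− and AgNO3 precipitating Cl−). The exact printed form of the equations did not survive extraction here, so the short Python sketch below is a hedged reconstruction of that calculation; the titrant volumes in the example are placeholders, not measured values from this study.

```python
# Molar masses (g/mol)
M_HCO3 = 61.02
M_CL = 35.45

def titration_mg_per_l(c_titrant_mol_l, v_titrant_l, v_sample_l, molar_mass):
    """Analyte concentration in mg/L for a 1:1 titration:
    c_analyte = c_titrant * V_titrant / V_sample, then converted to mass units."""
    c_analyte_mol_l = c_titrant_mol_l * v_titrant_l / v_sample_l
    return c_analyte_mol_l * molar_mass * 1000.0

# Placeholder example: 100 mL sample, 4.1 mL of 0.1 mol/L HCl consumed (Equation (1)).
hco3 = titration_mg_per_l(0.1, 0.0041, 0.100, M_HCO3)
# Placeholder example: 100 mL sample, 0.5 mL of 0.07372 mol/L AgNO3 consumed (Equation (2)).
cl = titration_mg_per_l(0.07372, 0.0005, 0.100, M_CL)
print(f"HCO3-: {hco3:.1f} mg/L, Cl-: {cl:.1f} mg/L")
```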
Other elements were measured by AAS (atomic absorption spectroscopy). A VARIAN SPECTR AA 110 atomic absorption spectrometer (GFAAS, Varian AA280Z, Varian, Australia, equipped with a GTA-110Z graphite furnace atomizer) was used for the measurements (Figure 1). For the measurement of calcium, a calcium carbonate standard (1 g/L) was used; for magnesium, a magnesium carbonate standard (1 g/L); for sodium, a sodium carbonate standard (1 g/L); and for potassium, a potassium nitrate standard (1 g/L); together with 1.5% nitric acid, 5% lanthanum chloride, and demineralized water. For the measurements with AAS, an acetylene/air gas mixture was used, and each particular element was measured with a hollow-cathode discharge lamp. After switching off the gas, heating the lamp, and optimizing the source, we first measured the standards in order to determine the calibration curve. Samples were then measured, and a blank of demineralized water was measured between each of the 3 samples, with the instrument reporting results in mg/L concentrations, from which the average of the blank was subtracted; the result was then converted to the original concentration before dilution. Grouping Refreshing, tasty, and good water is formed by a certain (optimal) concentration of inorganic water components. It is significantly affected mainly by the concentrations of calcium, magnesium, iron, manganese, bicarbonates, alumina, etc. Since there are clear differences among mineral waters, they were divided into several groups. The first grouping is based on mineralization. Waters were split into 4 groups: • VSM = very strongly mineralized, with a content of solutes greater than 5000 mg/L. The second grouping is based on pH. The third grouping is based on the country of origin. Finally, we anonymized all brands by all of these groups and numbered them. Panelists, Water Samples, and Preparation The taste and smell of water are among the most important characteristics of drinking water quality from the consumer's point of view. Subjective sensory evaluation, therefore, has its irreplaceable place in the monitoring of drinking water quality. To evaluate the taste of the individual mineral waters, 60 trained panelists participated in the sensory analysis of taste. The sensory water analysis training is carried out by the National Institute of Public Health (NIPH). Determination of the smell and taste of the mineral waters was performed by experts who had completed a course in sensory analysis, according to the EN 1622 standard. This standard defines the sampling, test environment, standardized procedure, and expression of results. Each panelist was always randomly assigned 1 water from each group to assess the full range of samples. Members worked individually and no discussion took place after the session. Samples were administered at room temperature (20 °C) in 200 mL beakers filled to approximately one-third of their volume. Samples were labeled with a 4-digit code and were poured in a different room to avoid influencing the rating through the personal preferences of the evaluators. Since each evaluator evaluated only 5-7 samples (18 evaluators per water), the waters were divided into 5 groups of 3 samples each according to the amount of minerals, based on the data on total dissolved content. Descriptive Sensory Analysis The evaluators assessed the overall taste on a 10 cm graphic unstructured scale bordered on both sides, in which one extreme was marked "zero taste intensity (1)" and the other "maximum taste intensity (10)". The mark on the scale was converted to a value with a ruler to the nearest half-centimeter. Regression Analysis Regression analysis was performed [44][45][46].
Using regression analysis, we were able to obtain information as to whether, for example, a higher concentration of chloride in mineral water improved taste or, on the other hand, a higher concentration of bicarbonate deteriorated taste. The general model for n variables is in the form of Equation (3): y = β0 + β1x1 + β2x2 + … + βnxn + e, where y is the vector of dependent variables; β is the vector of regression coefficients; x are the vectors of independent variables; and e is the matrix associated with the errors of the estimation. On the basis of the sensory analysis results, we performed a linear regression analysis with taste as the dependent variable and the individual nutrients as independent variables. All statistical analyses were conducted in the statistical software R version 3.6.1 [47]. Atomic Absorption Spectrometry Results Results from the mineral nutrient measurements are presented in Table 2. The authors grouped the mineral waters according to the methodology presented above. The results in Table 3 are compared to the values stated on the labels. There were no statistically significant differences (in terms of a paired t-test) between the data measured by AAS and the data on the labels (Table 3). Minor deviations can be expected, since the evaluation of water on the labels is usually not up to date, as these analyses are carried out at larger intervals and the quantitative characteristics of the water may vary depending on many factors such as temperature, rainfall, and potential momentary industrial and agricultural pollution [48]. Results of Sensory Analysis and Hedonic Pricing Sensory analysis was performed on 15 samples, where every sample was tested 18 times. Results of the sensory analysis are presented in the last column of Table 2. Finally, on the basis of the results from the nutrient content measurements and taste perception, we performed a regression analysis in order to determine a possible relationship between composition and taste. After several iterations in which some of the mineral nutrients had to be removed as they were superfluous, we generated the final regression results, which are presented in Table 4. The two removed nutrients are chlorine and potassium. A significant regression was found (F(6, 8) = 8.56, p < 0.005, N = 18), with an R 2 of 0.8652. The assumptions of linear regression were verified by the R library gvlma [49] and by the Breusch-Pagan test of heteroscedasticity [50]. All assumptions of linear regression were accepted: global stats (p = 0.5564), heteroscedasticity (p = 0.2981), skewness (p = 0.1689), kurtosis (p = 0.9544), and link function (p = 0.8624). The results indicate that the selected nutrients have a significant impact on taste, albeit their influence is relatively small. For example, when the concentration of calcium goes up by 1 mg per liter, the perceived taste goes up by 0.01. A similar interpretation can be made for the other mineral nutrients. While calcium and magnesium have a positive impact on taste, sodium and bicarbonate have a negative impact on taste. Magnesium has the relatively strongest influence on taste, while bicarbonate has an influence that is approximately 17 times smaller. It has to be acknowledged, however, that the estimates of the coefficients are probably valid only within a certain range. Above a certain threshold, the positive effect on taste may turn into a negative perception. Moreover, the difference between the predicted value and the real sensory taste value is depicted in Figure 2. Figure 2 also graphically benchmarks the taste of the individual mineral waters.
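The label-versus-measurement comparison and the taste regression described above can be reproduced with standard statistical routines. The sketch below is written in Python with SciPy and statsmodels purely for illustration (the original analysis was run in R 3.6.1 with gvlma); the file name and column names are hypothetical, not the authors' data layout.

```python
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan

# Hypothetical table: one row per water sample with measured nutrients (mg/L),
# the corresponding label values, and the mean sensory taste score from the panel.
df = pd.read_csv("mineral_waters.csv")  # placeholder file name

# Paired t-test: AAS measurements vs. label values (shown here for calcium).
t_stat, p_val = stats.ttest_rel(df["ca_measured"], df["ca_label"])

# Linear regression of taste on the retained nutrients (Cl- and K+ dropped).
X = sm.add_constant(df[["ca_measured", "mg_measured", "na_measured", "hco3_measured"]])
model = sm.OLS(df["taste"], X).fit()
print(model.summary())

# Breusch-Pagan test of heteroscedasticity on the residuals.
bp_stat, bp_p, _, _ = het_breuschpagan(model.resid, model.model.exog)
print(f"paired t-test p = {p_val:.3f}, Breusch-Pagan p = {bp_p:.3f}")
```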
Exceptions were Czech MM neutral 2, Czech PM neutral 2, and Czech PM neutral luxury 1. However, even for those three types, the difference was around 15%. Discussion Many authors who examine taste concentrate primarily on the amount of total dissolved solids (TDS) [51]. TDS can be described as the total amount of dissolved cations Al 3+ , Fe 2+ , Mn 2+ , Ca 2+ , Mg 2+ , K + , and Na + , and anions such as CO 2− 3 , HCO − 3 , SO 2− 4 , and Cl − . For example, Bruvold and Daniels (1990) [51] claimed that the higher the amount of TDS, the poorer the taste. Daniels et al. (1988) claimed that a high amount of TDS may lead consumers to refuse to drink water at all [52]. According to Kozisek (2004) [4], cations such as calcium, sodium, and potassium impact drinking water taste. While calcium is usually perceived neutrally, sodium can be perceived to influence the taste of water negatively. Potassium is perceived mainly neutrally, although a low potassium level increases the acceptance of water. Magnesium is usually perceived mainly positively [31]. For chloride, enhanced chloride levels in mineral water can cause water to become less acceptable. This is particularly true when sodium, calcium, potassium, and magnesium are present in the water [27]. In some studies, chloride was found to be neutral [53]. In other studies, the chloride content in water was considered to be negative, while its effect can be mitigated by decreasing the water temperature [54]. Whelton et al. (2007) [18] provided a summary of mineral effects on the taste of drinking water for Cl − , HCO − 3 , Na + , K + , and Mg 2+ . The review indicates that K + is perceived mainly positively, Ca 2+ and Na + are perceived neutrally or positively, and Cl − and Mg 2+ are perceived neutrally or negatively. Acceptance usually depends on the concentration of other minerals. Chidya et al. (2019) [55] published an article wherein they found differences between the labels and the actual water content in some cases, although we did not find this to be the case in this article. Issues were found in terms of pH and the concentration of F − anions. Zuliani et al. (2020) [56] presented a multi-elemental analysis of 13 bottled waters and found results similar to those in this article, in that mostly HCO − 3 , Ca 2+ , and Mg 2+ dominated in bottled mineral and spring water.
This paper is novel in terms of taste perception and in the selection of the composition of the analyzed waters. The drinking waters investigated in this paper mostly contained higher levels of Ca 2+ and HCO − 3 , while the levels of Cl − were lower (compared with [28,41,43,48] by Welch t-test). The drinking water considered in those papers was also studied in terms of a total dissolved solids concentration of less than 500 mg/L. In this paper, we also analyzed drinking waters with a sum of minerals above 500 mg/L (in the cases of moderately mineralized, strongly mineralized, and healing waters). For these waters, the data also showed that if the ratio of the HCO 3 − concentration to the Cl − concentration was greater than 50, Cl − did not significantly (p-value < 0.01) affect the taste, and only HCO 3 − was important. Generally, as tap water flavor is among the major concerns for water supporters, only a minor percentage is used for drinking [48], increasing the importance of bottled water composition and flavor [57]. It needs to be kept in mind that consumers assess their tap water primarily through an initial assessment of taste, odor, and appearance [58,59]. However, similar preferences and assessments can also be seen for the quality of drinking water produced by, for example, reverse osmosis [60] or other filtration methods, as well as when optimizing drinking water taste by appropriately adjusting the mineralization (measured by TDS), such as that done by [61]. In any case, it is important not to forget the role of consumer preferences, which are also valid in the case of the mineralization of water, wherein the preference ratings vary [48]. However, it is necessary to uncover taste determinants, as this is the initial basis for water industry providers in understanding perceptions and preferences among types of drinking water [48]. Therefore, determining to what extent the individual mineral nutrients determine the taste of the mineral water is essential, as such findings can help producers to provide ideal, health-improving nutrients for mineral water buyers. Conclusions The measurement of nutrients in bottled mineral water from selected European countries shows that the measured values of the nutrients correspond for the most part with the label values. This paper brings new findings with regard to the influence of mineral nutrients on the taste of mineral water. The results revealed which minerals are considered to be tasty, so that the producers of mineral waters may adjust their content in order to better satisfy consumer preferences. At the same time, given that the health effects of certain minerals are well known, an optimal taste can contribute to the improvement of consumers' health. Specific nutrients were identified that affect the taste of water in both positive and negative ways. Our study also confirms the findings of other authors, such as in the case of magnesium being perceived positively. On the other hand, some results were quite surprising, such as chloride being found to have an insignificant impact on taste. The present study provides directions to producers of mineral waters as to how to change the content of certain minerals and in this way better satisfy the needs of their customers. In cases wherein a better taste is also associated with an improvement in human health due to certain mineral nutrient contents, the benefits from drinking mineral water may spill over into the wider society through higher healthcare cost savings.
Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2023-10-20T21:43:43.643Z
2023-12-04T00:00:00.000
264376074
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "f058b5e7b4c1308211b35a36521ff62f79edfb1d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42015", "s2fieldsofstudy": [ "Physics" ], "sha1": "748a876643d776b0f3821805c3a984af502bff65", "year": 2023 }
pes2o/s2orc
Neural network assisted high-spatial-resolution polarimetry with non-interleaved chiral metasurfaces Polarimetry plays an indispensable role in modern optics. Nevertheless, the current strategies generally suffer from bulky system volumes or spatial multiplexing schemes, resulting in limited performance when dealing with inhomogeneous polarizations. Here, we propose a non-interleaved, interferometric method to analyze polarizations based on a tri-channel chiral metasurface. A deep convolutional neural network is also incorporated to enable fast, robust and accurate polarimetry. Spatially uniform and nonuniform polarizations are both measured through the metasurface experimentally. Distinction between two semblable glasses is also demonstrated. Our strategy features the merits of compactness and high spatial resolution, and would inspire more intriguing designs for detecting and sensing. Introduction Amplitude, phase, polarization, and wavelength are fundamental properties of light. Among them, the state of polarization (SoP) characterizes the direction in which the electric component of the field oscillates. The analysis and measurement of the SoP play a key role in a wide range of applications, from remote sensing 1 and astronomy 2 to biology 3,4 and microscopy 5 , since light-matter interactions have strong dependences on the SoP. Hence, various types of polarization detection (i.e., polarimetry) systems have been developed over the past decades. In general, traditional polarimetry systems can be categorized into division of time and division of space 6 . The former requires rotating polarization elements, resulting in long detection times. The latter can be further grouped into division of amplitude, aperture, and focal plane. All of these techniques are equipped with a set of polarizers, waveplates, beam-splitters or filters, making the systems bulky and complex, which therefore hinders the future development of miniature and compact optical devices.
Metasurfaces, newly emerging flat optical devices, enable thin and lightweight optical elements with precisely engineered wavefronts [7][8][9][10][11][12][13][14][15][16][17] . Many innovative methods based on metasurfaces for polarimetry have been proposed. By constructing a subwavelength scatterer 18 or an antenna array 19 , an on-chip polarimeter can be realized; however, the propagation waveguide for polarization-selective coupling or the free space required for directional scattering restricts the application to full local mapping of an inhomogeneous SoP. Based on polarization filters and a spatial multiplexing scheme 20,21 , full-Stokes polarimetric measurements can be obtained, albeit with complex fabrication due to the dual-layer configuration [22][23][24] . Polarimetry can also be demonstrated with the design of plasmonic meta-gratings 25,26 , yet the reflection/diffraction mode makes direct integration on sensors challenging. Similar to the division of focal plane, researchers design a metalens array (usually consisting of three to six metalenses) to split and focus light in different polarization bases for estimating the full Stokes vectors [27][28][29][30][31] . However, this spatially interleaved design method still suffers from the trade-off between the detection pixel size (i.e., 3-6 metalenses) and measurement crosstalk, preventing access to high spatial resolution in polarization analyses. Matrix Fourier optics has also been introduced and applied to the realization of polarimetry 32 , whereas the design needs to be fed into an optimization and the polarization-dependent propagation with different diffraction orders would occupy a substantial volume of space. Most of the relevant works focused on the intensity measurement of different polarization bases (at least four) to calculate the Stokes vector. Actually, an arbitrary SoP can be decomposed into a pair of orthogonal polarization states (e.g., right-hand and left-hand circular polarizations, termed RCP and LCP) with different amplitudes and phase shifts. If the amplitude contrast and the phase difference can be detected simultaneously, then the SoP can be obtained at once without spatial multiplexing. Here, we propose a new strategy based on a non-interleaved chiral metasurface and neural network assistance to analyze the SoP with high spatial resolution. This single chiral metasurface can modulate the co-polarization and two cross-polarizations independently to present the amplitude and phase information. Both spatially uniform and nonuniform polarization states are detected in simulations and experiments, showing very good fidelity. Note that in the spatially nonuniform polarization detection, an inhomogeneous SoP beam is generated by a specially designed metasurface, and a neural network is employed to strengthen the detection of SoPs. The Stokes parameters, defined as S0 = |Ex|^2 + |Ey|^2, S1 = |Ex|^2 − |Ey|^2, S2 = 2|Ex||Ey|cos δxy, and S3 = 2|Ex||Ey|sin δxy, are widely utilized to represent the SoP of light, in which Ex and Ey are the complex amplitudes of the x and y polarized components and δxy is the corresponding phase difference. Based on the Stokes parameters, the ellipticity angle χ and the azimuth angle ψ of polarized light can be expressed as tan(2ψ) = S2/S1 and sin(2χ) = S3/S0, where 2χ and 2ψ are also the spherical coordinates on the surface of the Poincaré sphere. Under the circular basis, these two parameters can be written in terms of the amplitude contrast and the phase difference of the circular components, in which ER and EL are the complex amplitudes of the RCP and LCP components and δcp is the phase difference calculated as δcp = φL − φR. Apparently, the SoP can be derived once the amplitude contrast and the phase shift of the two CP components are known.
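As a concrete illustration of these relations, the short Python sketch below converts a field given in the x-y basis into Stokes parameters, ellipse angles, and circular components. Sign and handedness conventions for δxy, δcp, and the RCP/LCP basis vary across the literature, so the particular choices here are assumptions rather than the paper's exact definitions.

```python
import numpy as np

def stokes_from_xy(Ex: complex, Ey: complex):
    """Stokes parameters from the x- and y-polarized complex amplitudes."""
    S0 = abs(Ex) ** 2 + abs(Ey) ** 2
    S1 = abs(Ex) ** 2 - abs(Ey) ** 2
    S2 = 2 * (np.conj(Ex) * Ey).real  # 2|Ex||Ey|cos(delta_xy)
    S3 = 2 * (np.conj(Ex) * Ey).imag  # 2|Ex||Ey|sin(delta_xy); sign depends on convention
    return S0, S1, S2, S3

def ellipse_angles(S0, S1, S2, S3):
    """Azimuth psi and ellipticity chi of the polarization ellipse."""
    psi = 0.5 * np.arctan2(S2, S1)
    chi = 0.5 * np.arcsin(np.clip(S3 / S0, -1.0, 1.0))
    return psi, chi

def circular_decomposition(Ex: complex, Ey: complex):
    """RCP/LCP complex amplitudes under one common basis convention."""
    ER = (Ex + 1j * Ey) / np.sqrt(2)
    EL = (Ex - 1j * Ey) / np.sqrt(2)
    return ER, EL

# Example: an elliptical state; the amplitude contrast and delta_cp are the two
# quantities the interferometric scheme is designed to read out.
Ex, Ey = 1.0 + 0j, 0.5j
ER, EL = circular_decomposition(Ex, Ey)
delta_cp = np.angle(EL) - np.angle(ER)
print(ellipse_angles(*stokes_from_xy(Ex, Ey)), abs(ER) / abs(EL), delta_cp)
```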
To obtain the amplitude and phase information simultaneously, we propose a novel interferometric strategy to analyze the polarization states. As shown in Fig. 1, the designed chiral metasurface generates two focal lines in the case of RCP incidence: n is the cross-CP part (RCP to LCP), and m is the co-CP part, with the point of intersection marked as A. Under LCP incidence, two similar focal lines emerge as well: one is the same co-CP component in the position of m, and the other is the cross-CP component (LCP to RCP), termed l, with the point of intersection marked as B. When the SoP of the incidence is linear or elliptical, all three focal lines appear, with a new intersection marked as C. From the average intensity distributions of AC and BC, the amplitude contrast of the RCP and LCP components can be obtained. By analyzing the intensities of points A and B, whose electrical fields are superpositions of the co-CP and cross-CP focal contributions weighted by the ratios α0, α1, and α2 related to the specific design, the phase difference between RCP and LCP can then be derived as well due to the interferometric effect. The implementation of the above-mentioned functionalities demands independent phase modulation for the three polarization channels (RCP to LCP, LCP to RCP, and the co-CP part). Similar independent manipulations of multiple channels (phase, or the combination of phase and amplitude) have been demonstrated recently based on a few layers of structures, supercells, adding noise, or optimization methods [34][35][36][37][38][39][40][41] . Here, considering a single planar anisotropic meta-atom, we first analyze the possibility of independently modulating the three polarization channels. The Jones matrix describing the relation between the input and the output electric field in Cartesian coordinates can be written in a general rotated form, in which R(θ) is the rotation matrix added to increase the degrees of freedom with the Pancharatnam-Berry (PB) phase 42,43 , and the amplitude differences are ignored for a straightforward estimation. After the transformation into the circular basis, it can be concluded from the resulting Jones matrix that the existence of φxy ensures the independent phase manipulation of one co-CP (diagonal term) and two cross-CP (non-diagonal term) components. Thus, the meta-atoms to be utilized should break the mirror symmetry and the n-fold (n > 2) rotational symmetry for the generation of the cross components of the electric polarizability (i.e., e^(iφxy)). Inspired by our previous work 44,45 , planar chiral meta-atoms are chosen for the decoupling purpose, and the design principle simplifies to a combination of the propagation phase φd 46 , the PB phase φPB, and the chiral phase delays φχRL and φχLR, which refer to the different chiral phase delays for the working LCP light (RCP incidence) and RCP light (LCP incidence), respectively (|φχRL| ≠ |φχLR|). For the proof of concept, we first perform simulations (using the commercial finite-difference time domain (FDTD) software Lumerical) to build up a meta-atom library. The design wavelength is set as 470 nm. The material SiNx is chosen for its high transmission in visible light (n = 2.032 + 0.0013i) and to facilitate integration with complementary metal-oxide-semiconductor (CMOS) technology. As illustrated in Fig. 2a,
the lattice is hexagonal with a lattice constant a = 360 nm to suppress higher-order diffraction. The height of the meta-atom is set as 1.2 μm to cover a 0-2π phase shift. By changing the structural sizes of the chiral meta-atoms (nanorods as particular cases are also included), a data cube containing the different phase delays of the co- and cross-polarizations can be obtained. Figure 2b shows the scattered distribution of φRL + φLR and φco, indicating a rich parameter space (gray dots). The φco (i.e., φd) and φRL + φLR (i.e., φχRL + φχLR) values are simplified under an eight-grade phase approximation. The atoms are chosen by simultaneously satisfying the φco and φRL + φLR requirements, which enables the co-polarization and the two cross-polarization phase modulations by setting appropriate rotation angles, respectively. Besides, the efficiency of the selected atoms is expected to be as high as possible. The blue dots mark the finally selected atoms with the required phase delays, and the average efficiency is about 71% (details can be found in Supplementary Note 1). In order to realize the functionalities stated above for polarimetry (Fig. 1), the three polarization channels need to acquire uncorrelated phase profiles, each producing an off-axis focal line, where s0 determines the offset distance of the focal line from the center, f is the focal length, and θn and θl are the angles between the focal lines (n and l) and the x axis (θn = 120°, θl = −120°). Figure 2c shows the co-CP phase distribution of a single metasurface with dimensions of D = 20 μm, s0 = 3 μm, and f = 25 μm, corresponding to an off-axis focused cylindrical lens, while the other two cross-CP phase distributions are the relatively rotated ones. Considering the requirements of both the spatially uniform and nonuniform SoP detections, an array (25 × 25) of such chiral metasurfaces (each termed a detection pixel) is designed and fabricated. Figure 2d displays its optical microscopy image, and the right panel is the enlarged picture of the area marked by the blue dotted line, with four detection pixels (2 × 2) and sizes of 40 μm × 40 μm. The partial scanning electron microscopy (SEM) image is also illustrated in Fig. 2e, indicating that the basic morphology is maintained. To validate the polarimetry function, we first perform simulations and experiments with different uniform polarization incidences. Figure 3 shows the results for six polarizations, including two linear polarizations (x with S = (1,0,0), and y with S = (−1,0,0)), two circular polarizations (LCP with S = (0,0,−1), and RCP with S = (0,0,1)) and two elliptic polarizations (S = (0.64,0,0.77) and S = (0.64,0,−0.77)). The full-wave simulation results (performed with the Lumerical software) are illustrated in Fig. 3b. The focal lines are not ideally homogeneous and background noise emerges due to the invalidity of the local periodic approximation 47,48 . Thus, in addition to the ratios in Eq. 3, circular polarization analysis (extracting the field profiles under two CP biases in simulations and adding two CP filters in experiments) is further applied to improve the accuracy of the polarization calculation and analysis. The top panel in Fig. 3b
corresponds to the LCP components and the bottom panel to the RCP components. In the first column (x polarization), lines n and m appear in the LCP bias (top), corresponding to the co-CP part of the LCP component and the cross-CP part of the RCP component. Destructive interference occurs at point A due to the π difference between the RCP and LCP components, corresponding to δcp = 0 (the π difference is the original phase shift between the RCP and LCP components, which is considered in all cases). As for the RCP bias (bottom), lines l and m appear, and point B also disappears due to destructive interference. In the second column (y polarization), lines n and m similarly appear in the LCP bias (top) and lines l and m appear in the RCP bias (bottom), yet points A and B are both enhanced to about twofold intensity, indicating constructive interference with δcp = π. For CP light incidences, things get easier: only one line appears in each of the different biases, as shown in the third and fourth columns. With elliptic polarization incidence, the situation becomes more complicated: two lines appear with distinct intensity contrast in both the LCP and RCP biases. By carefully calculating the amplitude contrast and the intensities of points A and B, the ellipticity angle χ and the azimuth angle ψ can be obtained, and the S parameters are further derived, as shown in Fig. 3a. More results for other uniform polarizations are shown in Supplementary Note 2. Based on about 34 different polarization incidences, the average transmission of the metasurface, defined as the transmitted power divided by the incident power, is about 80%. The average diffraction efficiency, defined as the power of the focal line within three times the full-width at half maximum (FWHM) divided by the transmitted co-polarized light, is calculated to be ~70%. Thus, the total efficiency, defined as the average transmission multiplied by the average diffraction efficiency, is calculated to be about 56%, indicating potential applications in real-world scenarios. The details can be found in Supplementary Note 2. In the experiments, the metasurface was illuminated by a white-light laser (Fianium Super-continuum, 4 W) with a 470 nm filter with a bandwidth of 10 nm. A polarizer (Thorlabs, WP25L-VIS) and a quarter-wave plate (Thorlabs, AQWP05M-600) were used to generate the different polarization incidences. The focal plane was then imaged onto the image sensor through an objective (NA = 0.5) for a clearer view of the details. Figure 3c illustrates the captured images (one detection pixel) with different commercial CP filters (top panel showing the LCP component and bottom panel showing the RCP component). The results are consistent with the simulations, although with different intensity ratios due to fabrication errors. The calculated S parameters are also shown in Fig. 3a, close to the theoretical values. To enable the polarimetry to be as precise as possible, we need to locate points A and B carefully, which inevitably brings inconvenience when the experimental setup is changed or the image patterns are distributed at different positions on the sensors. To address these issues, we introduced a deep learning framework [49][50][51][52][53] to assist with the polarization detection. Such a deep learning framework involves training neural networks to learn the relations between the detected images and the corresponding S parameters. The construction of the neural-network-assisted polarization detection process is divided into two steps: preparation of the training data and the training process of the neural network, as shown in Fig. 4a-c. In the first step, to obtain a sufficient amount of training data within a limited time and to enhance the robustness of the entire system, we used data augmentation to process the experimental and simulated data, by adding noise and performing image translation, rotation, scaling, and other transformations, as shown in Fig. 4a.
These operations can help the neural network analysis mitigate the influence of fluctuations in the surrounding environment to some extent, and improve the potential for practical applications. After one image augmentation operation, we obtain two 128 × 128 images, corresponding to the focal plane intensity profiles with LCP bias and RCP bias, respectively. The combination of these two images results in a dual-channel three-dimensional image (with a size of 128 × 128 × 2), which serves as the input to the neural network. The output of the neural network is a 1 × 3 vector representing the corresponding Stokes parameters (S1, S2, S3) of the input data, as shown in Fig. 4b. After data augmentation, we ultimately obtained 8500 sets of training data and 585 sets of test data. Such an amount of training data is sufficient for this particular kind of application scenario. In the second step, we constructed a deep convolutional neural network (CNN), whose structure and parameters are shown in Fig. 4c. Each input was processed through 6 consecutive convolutional layers, during which hidden features in the intensity profiles were extracted and calculated. After flattening the output of the convolutional layers into a 1D vector (512 × 1) and passing it through a fully connected layer, the predicted Stokes parameters were generated. Throughout the network, a ReLU activation function was applied to each layer except for the last one, for which there was no activation function 52 . The loss function used for training was the mean squared error (MSE) 51 , defined as MSE = (1/N) Σ (Sreal − Spre)^2, where Sreal is the real S parameters for the input data, while Spre is the prediction generated by the CNN. The number of training epochs was set to 200, and the distribution of the MSE with respect to the training epoch is shown in Fig. 4d. When the training was completed, the MSE was 0.003 for the training set, which corresponds to an average prediction standard deviation (sqrt(MSE)) of 0.055 for each S parameter. Then, we demonstrated the accuracy of the well-trained CNN by feeding the entire test set into it and calculating the corresponding deviations. Figure 4e shows the statistical distribution as well as the cumulative probability of the prediction deviation for the test set, where the average deviation for each S parameter is no more than 0.1 for 85% of the test set and the average deviation over all the test data is about 0.06, indicating very good fidelity. The performance on the test set demonstrates that the CNN model has good predictive ability and is robust, because it works well even when the input images have been translated, rotated, or scaled by different lengths, angles, or ratios, respectively. With the help of the CNN model, there is no need to locate points A and B precisely for every image, which allows us to calculate the polarization state of the light in a very short time (< 0.05 s for each image). Moreover, the dimension of the single metasurface can be further reduced to 10 μm with s0 = 2 μm and f = 12 μm, corresponding to a higher spatial resolution. Due to the limitations of our current experimental conditions, only simulations under different incidences were performed to construct the foundation of the training and test data. The statistical distribution and the cumulative probability of the prediction deviation for the new test set are shown in Fig. 4f, in which the average deviation for the whole test set is about 0.108, and the percentage with an average deviation of no more than 0.1 is 70%, also indicating reliable predictions.
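For readers who want a concrete picture of such a network, the following PyTorch sketch reproduces the stated input and output shapes (a 128 × 128 × 2 input, six convolutional layers, a 512-element flattened vector, and a three-element output trained with an MSE loss). The channel widths, kernel sizes, strides, and optimizer settings are not specified in the text and are assumptions here, so this is an illustrative stand-in rather than the authors' actual model.

```python
import torch
import torch.nn as nn

class StokesCNN(nn.Module):
    """Toy CNN mapping a 2-channel 128x128 focal-plane image to (S1, S2, S3).
    Channel counts, kernel sizes, and strides are illustrative assumptions."""
    def __init__(self):
        super().__init__()
        chans = [2, 8, 16, 32, 64, 128, 128]  # 6 conv layers (assumed widths)
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            # Stride-2 convolutions halve the spatial size: 128 -> 2 after 6 layers.
            layers += [nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1), nn.ReLU()]
        self.features = nn.Sequential(*layers)
        self.flatten = nn.Flatten()      # 128 channels * 2 * 2 = 512, matching the 512x1 vector
        self.head = nn.Linear(512, 3)    # no activation on the last layer

    def forward(self, x):
        return self.head(self.flatten(self.features(x)))

model = StokesCNN()
loss_fn = nn.MSELoss()                    # MSE between predicted and real Stokes vectors
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random stand-in data.
images = torch.randn(16, 2, 128, 128)     # LCP-bias and RCP-bias intensity profiles
targets = torch.rand(16, 3) * 2 - 1       # (S1, S2, S3) values in [-1, 1]
optimizer.zero_grad()
loss = loss_fn(model(images), targets)
loss.backward()
optimizer.step()
```

With stride-2 convolutions, the 128-pixel input is halved six times down to a 2 × 2 map of 128 channels, which is what makes the flattened vector come out at 512 elements.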
Furthermore, in order to demonstrate the spatially nonuniform SoP analysis capability, a specifically designed PB-metasurface with a diameter of D = 200 μm was fabricated to generate a beam with inhomogeneous polarization states for the measurement (details can be found in Supplementary Note 3). In the experiments, the linearly polarized laser beam with a wavelength of 470 nm was first enlarged by a beam expander (see Fig. S7), then passed through the PB-metasurface and formed a vector beam by acquiring different phase shifts under different biases (Fig. 5a). This vector beam then illuminated the chiral metasurface array and was focused onto the image plane with different patterns in each detection pixel. An objective accompanied by a CP filter was further utilized to transfer the image to a CMOS sensor. Figures 5b, c display the focal plane intensity profiles with LCP bias and RCP bias, respectively. We first align the center of the beam pictures without and with passing through the metasurface polarimetry, corresponding to Fig. 5a, b (or 5c). Then the distance between adjacent intersections was extracted as the mesh size to draw the dividing grid. The dotted line marks the beam size, in line with the one in Fig. 5a. The region of interest marked in numbers corresponds to the middle of the beam. Assisted by the neural network proposed in Fig. 4, we can locally map the Stokes parameters easily, as shown in Fig. 5d-f. The numbering of the red points in the figures corresponds to the positions in Fig. 5b, c with different azimuth angles. The dotted blue line shows the designed continuously variable SoP, in high accordance with the neural network assisted mapping. According to the measured Stokes parameters, the schematics of the polarization distribution are drawn in Fig. 5g for an intuitive view, which shows very good coincidence with the designed one (Fig. 5h). The polarization distribution is more complicated than the ideal situation (a radially-polarized vector beam) due to the partial circular polarization conversion; however, such complexity happens to demonstrate the polarimetry capability of the proposed neural network assisted chiral metasurface. Objects that have similar appearances but different materials or functions are ubiquitous in our daily lives. Here, in the final part, the non-interleaved chiral metasurfaces were implemented to distinguish two semblable glasses. As shown in Fig. 6a, b, these glasses have such similar morphological features that we cannot differentiate them with the naked eye. Yet, enabled by the chiral metasurfaces, the specific function of each eyeglass can be discovered. In the experiments, the generated light without a defined polarization is first filtered at a wavelength of 470 nm and transmitted through a beam expander to illuminate each pair of glasses. The transmitted light further passes through the chiral metasurfaces, and is captured by an objective and a CMOS sensor (Fig. S8). Figure 6c, d illustrate the intensity profiles under different CP biases, from which the differences between the glasses can easily be distinguished. In Fig. 6c, the left and the right glass exhibit the same linear SoP patterns through the metasurface when light passes through them, indicating the feature of linear polarization for sun-protection purposes (termed LP glass). In Fig. 6d, by contrast, the left and right glass show totally different circular SoP patterns, revealing the features of RCP and LCP for 3D display purposes (termed CP glass).
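The local mapping step can be pictured as tiling the two bias images into detection-pixel patches and running the trained network on each patch. The sketch below assumes a square tile whose size in camera pixels is known from the metasurface layout; in the experiment the grid was instead drawn from the measured intersections, so the uniform tiling shown here is a simplification.

```python
import torch

def map_stokes(lcp_img: torch.Tensor, rcp_img: torch.Tensor, model, tile: int = 128):
    """Tile the two focal-plane images into detection-pixel patches and predict
    (S1, S2, S3) for each patch with the trained CNN. The tile size and grid
    alignment are placeholders; in practice they follow the metasurface layout."""
    h, w = lcp_img.shape
    rows, cols = h // tile, w // tile
    stokes_map = torch.zeros(rows, cols, 3)
    model.eval()
    with torch.no_grad():
        for i in range(rows):
            for j in range(cols):
                patch = torch.stack([
                    lcp_img[i*tile:(i+1)*tile, j*tile:(j+1)*tile],
                    rcp_img[i*tile:(i+1)*tile, j*tile:(j+1)*tile],
                ])                                    # shape (2, tile, tile)
                stokes_map[i, j] = model(patch.unsqueeze(0))[0]
    return stokes_map
```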
Discussion To summarize, we present a novel interferometric polarimetry method based on non-interleaved chiral metasurfaces. Different uniform polarizations are measured both in simulations and in experiments. Furthermore, incorporating a deep convolutional neural network, spatially nonuniform polarizations are experimentally analyzed and mapped with a resolution of 20 μm in a highly accurate, quick, and robust way. The spatial mapping resolution can also be increased to 10 μm with the assistance of the neural network. Objects with similar morphological features but different polarization characteristics can also be easily distinguished through the metasurface. The proposed non-interleaved design is a clear departure from conventional ones, and can enable polarimetry with higher spatial resolution. Normally, the dimension of a single metasurface cannot be reduced to a very small value, since the insufficient sampling would result in inaccuracy, crosstalk, or even invalidation of the measurements. Such a limitation can be alleviated by utilizing a neural network. Although the performance degrades slightly as the dimension decreases, the incorporation of the neural network still promises access to higher spatial resolution with the merits of quickness and robustness. For mixed polarization scenarios with incoherent light, however, the proposed scheme is hard to apply because its design principle is based on the interferometric effect. Note that due to the mismatch between the focal length and the working distance of our CMOS sensor, we did not directly mount the metasurface on the sensor to build a compact prototype. Nevertheless, the direct transmission mode guarantees further integration with sensors to realize miniature and compact devices in the near future. The proposed scheme can undoubtedly be extended to other spectral bands and would shine in situations with high spatial resolution requirements for polarimetry. Materials and methods The metasurface was prepared with a combined electron-beam lithography (EBL) and reactive ion etching (RIE) process. First, the SiNx layer was deposited on the fused-silica substrate using plasma-enhanced chemical vapor deposition (PECVD) to a thickness of 1200 nm. Then a PMMA A4 resist film with a thickness of 200 nm was spin-coated onto the substrate and baked at 170 °C for 5 min. Next, a 42 nm thick layer of a water-soluble conductive polymer (AR-PC 5090) was spin-coated on the resist for the dissipation of E-beam charges. The device pattern was written on the electron beam resist using an E-beam writer (Elionix, ELS-F125). The conductive polymer was then dissolved in water and the resist was developed in a resist developer solution. An electron-beam-evaporated chromium layer was used to reverse the generated pattern with a lift-off process, and was then used as a hard mask for dry etching of the silicon nitride layer. The dry etching was performed in a mixture of CHF3 and SF6 plasmas using an inductively coupled plasma reactive ion etching process (Oxford Instruments, PlasmaPro100 Cobra300). Finally, the chromium layer was removed with a stripping solution (ceric ammonium nitrate).
Fig. 1 Schematic illustration of the proposed non-interleaved chiral metasurface for polarimetry. n, l, and m are the independent focal lines corresponding to the three channels (RCP to LCP, LCP to RCP, and the co-CP part). A, B, and C are the intersections. s0 corresponds to the off-center distance of the focal lines.
Fig. 2 Design and manufactured chiral metasurface. a Diagram of the unit cell and the hexagonal lattice. b Phase parameter space with the selected ones marked in blue. c Co-CP phase distribution of a supercell. d Optical microscopy image and the enlarged picture of 2 × 2 supercells (40 μm × 40 μm). e Enlarged SEM image showing the meta-atom morphology.
Fig. 3 Polarimetry results for different uniform polarizations. a Calculated S parameters from the simulation and experiment results. b The focal plane electrical field profiles with LCP bias (top panel) and RCP bias (bottom panel) in simulation. c The focal plane intensity profiles for LCP light (top panel) and RCP light (bottom panel) in experiments.
Fig. 4 Neural network architecture for the polarimetry. a The input of the neural network with data augmentation. b The corresponding real S parameters of the input data in a. c The structure and parameters of the CNN. d The distribution of the MSE with respect to the training epoch. The statistical distribution and cumulative probability distribution of the prediction deviation for the test set with spatial resolution of (e) 20 μm and (f) 10 μm.
Fig. 5 Experimental results for spatially nonuniform SoP analysis. a Intensity distributions of the spatially nonuniform polarized incident beam. φ is the azimuth angle with respect to the drawn diameter. Focal plane intensity profiles of the chiral metasurface array with LCP (b) and RCP (c) bias. The scale bar is 10 μm. d-f Analysis of the Stokes parameters with the assistance of the proposed neural network. The numbering of the red points is in accordance with the positions and azimuth angles in b and a. The dotted blue lines are the designed S parameters. g Schematics of the polarization distribution according to the measured S parameters. h Schematics of the designed polarization distribution.
Fig. 6 Experimental results of the glasses distinction. Photos of two pairs of glasses with similar morphological features but different functions: a linearly polarized glasses, b circularly polarized glasses; c and d are the relevant measurement results.
v3-fos-license
2023-10-11T13:10:00.424Z
2023-10-05T00:00:00.000
263802963
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10592951", "pdf_hash": "1c5e12f50fff194cb90d4b43be46577e9e2d857e", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42016", "s2fieldsofstudy": [ "Biology" ], "sha1": "df0dcaa542fde79a67faa86ff93ddd319f04b060", "year": 2024 }
pes2o/s2orc
MOIRE: A software package for the estimation of allele frequencies and effective multiplicity of infection from polyallelic data

Malaria parasite genetic data can provide insight into parasite phenotypes, evolution, and transmission. However, estimating key parameters such as allele frequencies, multiplicity of infection (MOI), and within-host relatedness from genetic data has been challenging, particularly in the presence of multiple related coinfecting strains. Existing methods often rely on single nucleotide polymorphism (SNP) data and do not account for within-host relatedness. In this study, we introduce a Bayesian approach called MOIRE (Multiplicity Of Infection and allele frequency REcovery), designed to estimate allele frequencies, MOI, and within-host relatedness from genetic data subject to experimental error. Importantly, MOIRE is flexible in accommodating both polyallelic and SNP data, making it adaptable to diverse genotyping panels. We also introduce a novel metric, the effective MOI (eMOI), which integrates MOI and within-host relatedness, providing a robust and interpretable measure of genetic diversity. Using extensive simulations and real-world data from a malaria study in Namibia, we demonstrate the superior performance of MOIRE over naive estimation methods, accurately estimating MOI up to 7 with moderate-sized panels of diverse loci (e.g. microhaplotypes). MOIRE also revealed substantial heterogeneity in population mean MOI and mean relatedness across health districts in Namibia, suggesting detectable differences in transmission dynamics. Notably, eMOI emerges as a portable metric of within-host diversity, facilitating meaningful comparisons across settings, even when allele frequencies or genotyping panels are different. MOIRE represents an important addition to the analysis toolkit for malaria population dynamics. Compared to existing software, MOIRE enhances the accuracy of parameter estimation and enables more comprehensive insights into within-host diversity and population structure. Additionally, MOIRE's adaptability to diverse data sources and potential for future improvements make it a valuable asset for research on malaria and other organisms, such as other eukaryotic pathogens. MOIRE is available as an R package at https://eppicenter.github.io/moire/.

Introduction

Genetic data can be a powerful source of information for understanding malaria parasite phenotype and transmission. Naive estimation without consideration of strain composition from polyclonal samples results in a consistent overestimation of heterozygosity, leading to potentially faulty inference about population diversity. Additionally, naive estimation offers no principled way to address genotyping error beyond heuristics, further biasing estimates of diversity in ways that depend on choices made during initial interpretation of genotyping data. Alternatively, considering only monoclonal samples is potentially problematic, as this may require a substantial number of samples to be discarded when collected from regions where multiple infection is the rule rather than the exception. Further, the monoclonal subset of samples is fundamentally different from the larger population of interest, as such samples preclude the possibility of within-host relatedness between strains. This ignores a potentially important source of information about transmission dynamics, as within-host relatedness may be indicative of co-transmission or persistent local transmission (Wong et al., 2017; Nkhoma et al., 2020; Wong et al., 2018).
To address these issues and make full use of available data, Chang et al. (2017) developed a Bayesian approach (THE REAL McCOIL) to estimate allele frequencies and MOI in the context of polygenomic infections from single nucleotide polymorphism (SNP) based data. More recently, coiaf (Paschalidis et al., 2023) and SNP-Slice (Ju et al., 2023) have been developed to further improve computational efficiency and resolving power. Briefly, coiaf takes user-provided allele frequencies and SNP read count data and applies an optimization procedure to estimate either discrete or continuous values for MOI. SNP-Slice also takes SNP read count data and uses a non-parametric Bayesian approach to simultaneously estimate phased strain identity and within-host strain composition. While Paschalidis et al. suggest within-host relatedness as a possible explanation for continuous values of MOI, and the method by Ju et al. may provide a way to interrogate within-host relatedness through phased strain composition, none of these methods directly consider or estimate within-host relatedness. Further, these methods are all tailored to SNP based data and are unable to accommodate more diverse polyallelic loci, such as microsatellites, which have been widely used in population genetic studies (Anderson et al., 2000; Tessema et al., 2019; Roh et al., 2019; Pringle et al., 2019). Other methods that infer within-host relatedness (Zhu et al., 2019), in contrast, rely on whole genome sequencing (WGS) data. WGS based approaches, however, frequently have poor sensitivity for detecting minority strains and low density infections (Tessema et al., 2022). In recent years, the declining cost of DNA sequencing and development of high throughput, high diversity, targeted sequencing panels have made polyallelic data even more attractive for genomic based studies of malaria (Tessema et al., 2022; LaVerriere et al., 2022; Kattenberg et al., 2023). Genetic analysis methods leveraging polyallelic loci have the potential for substantially increased resolving power over their SNP based counterparts, particularly in the context of related polyclonal infections in malaria (Taylor et al., 2019; Gerlovina et al., 2022). Unfortunately, there are limited tools available to analyze these types of data.

We present here a new Bayesian approach, Multiplicity Of Infection and allele frequency REcovery from noisy polyallelic data (MOIRE), that, like THE REAL McCOIL, enables the estimation of allele frequencies and MOI from genomic data that are subject to experimental error. In addition, MOIRE estimates and accounts for within-host relatedness of parasites, a common occurrence due to the inbreeding of parasites serially co-transmitted by mosquitoes (Nkhoma et al., 2020, 2012). Critically, MOIRE takes as input genetic data of arbitrary diversity, allowing for estimation of allele frequencies, MOI, and within-host relatedness from polyallelic as well as biallelic data. MOIRE is able to fully utilize polyallelic data, yielding joint estimates of allele frequencies, sample specific MOIs and within-host relatedness along with probabilistic measures of uncertainty. We demonstrate through simulations and applications to empirical data the ability of MOIRE to leverage a variety of polyallelic markers. Polyallelic markers can greatly improve jointly estimating sample MOI, within-host relatedness, and population allele frequencies, resulting in reduced bias and increased power for understanding population dynamics from genetic data. We also introduce a new metric of diversity, the effective MOI (eMOI), a continuous value that combines estimates of the true MOI and the degree of within-host relatedness in a single sample, providing an interpretable quantity that is comparable across genotyping panels and transmission settings. We contrast this with the within-host infection fixation index, F_WS, a frequently used metric of within-host diversity and signal of inbreeding and population sub-structure (Manske et al., 2012; Auburn et al., 2012), and demonstrate the inherent shortcomings of F_WS as a non-portable metric.
Consider observed genetic data X = (X_1, . . ., X_n) from n samples indexed by i, where each X_i is a collection of vectors indexed by l of possibly differing length, representing the varying number of alleles possible at each locus, e.g. polyallelic loci. Each vector is binary, with 1 representing that the allele was observed and 0 representing that the allele went unobserved at locus l for sample i. From these data, we wish to estimate the MOI for each individual (μ = [μ_1, . . ., μ_n]), within-host relatedness (r = [r_1, . . ., r_n]), defined as the average proportion of the genome that is identical by descent across all strains, individual specific genotyping error rates (ε^+ and ε^-), and population allele frequencies at each locus (π = [π_1, . . ., π_l]). Similar to Chang et al. (2017), we applied a Bayesian approach and looked to estimate the posterior distribution of μ, r, ε^+, ε^- and π as

P(μ, r, ε^+, ε^-, π | X) ∝ P(X | μ, r, ε^+, ε^-, π) P(μ, r, ε^+, ε^-, π).

Estimation of multiplicity of infection, within-host relatedness, and allele frequencies

We simulated collections of 100 samples under varied combinations of population mean MOI, average within-host relatedness, false positive and false negative rates, and different genotyping panels (details of our simulation procedure may be found in the supplement, section 4). Individual MOIs were drawn from zero truncated Poisson (ZTP) distributions with rate parameters 1, 3, and 5, resulting in mean MOIs of 1.58, 3.16, and 5.03 respectively. Within-host relatedness was simulated from settings with low, moderate, and high relatedness. False positive and false negative rates were varied from 0 to 0.1. We first simulated synthetic genomic loci with prespecified diversity: 100 SNPs, 30 loci with 5 alleles (moderate diversity), 30 loci with 10 alleles (high diversity), and 30 loci with 20 alleles (very high diversity), with frequencies drawn from the uniform Dirichlet distribution. We also assessed potential real world performance of MOIRE by simulating data for 5 currently used genotyping panels from 12 regional populations characterized by the MalariaGEN Pf7 dataset (Abdel Hamid et al., 2023), as described in the supplementary material (section 7, Supplementary Figure 4). Genetic loci were selected according to a 24 SNP panel (Daniels et al., 2008), a 101 SNP panel (Chang et al., 2019), and 3 recently developed amplicon sequencing panels consisting of 128 (LaVerriere et al., 2022), 165 (Aranda-Diaz and Neubauer Vickers, 2022), and 233 (Kattenberg et al., 2023) diverse microhaplotypes respectively. Like the fully synthetic simulations, these simulations were varied over a range of MOI and within-host relatedness; however, error rates were fixed at moderate false positive and false negative rates of .01 and .1 respectively for the purposes of computational feasibility, due to the extensive number of simulations required. We chose these levels as we believe they are reflective of the most likely situation of higher levels of false negatives and relatively low rates of false positives from a typical bioinformatics pipeline. We then ran MOIRE and calculated summary statistics of interest on the sampled posterior distributions.
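As a quick check on the quoted simulation settings, the mean of a zero-truncated Poisson with rate λ follows directly from renormalizing the Poisson distribution after removing zero; the figures below reproduce the stated mean MOIs.

$$
\mathbb{E}[M] = \frac{\lambda}{1 - e^{-\lambda}}, \qquad M \sim \mathrm{ZTP}(\lambda),
$$
$$
\lambda = 1:\ \frac{1}{1 - e^{-1}} \approx 1.58, \qquad
\lambda = 3:\ \frac{3}{1 - e^{-3}} \approx 3.16, \qquad
\lambda = 5:\ \frac{5}{1 - e^{-5}} \approx 5.03 .
$$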
We estimated allele frequencies, heterozygosity, MOI, within-host relatedness, and eMOI using the mean or median of the posterior distribution output by MOIRE. It should be noted that within-host relatedness is only defined for polyclonal infections, so the posterior distribution of within-host relatedness is conditional on the MOI being greater than 1. We contrasted these with naive estimates of allele frequency and MOI, assuming that an observed allele was contributed by a single strain and estimating MOI as equal to the second-highest number of alleles observed across loci. We calculated ground truth allele frequencies using the true number of strains contributing each allele.

Under moderate false positive and false negative rates of 0.01 and 0.1 respectively, MOIRE accurately recovered parameters of interest across a range of genotyping panels, population MOI, and within-host relatedness (Figure 1, Table 1). Allele frequencies estimated by MOIRE were unbiased across genotyping panels (Figure 1B), leading to unbiased estimates of heterozygosity (Figure 1C). Naive estimation exhibited substantial bias.

MOI was also well estimated by MOIRE, with accuracy increasing substantially in the presence of more diverse loci (Figure 1D). In the context of SNPs, MOIRE recovered MOI accurately up to approximately 4 strains, and then began to exhibit limited ability to resolve. More diverse panels enabled greatly improved resolving power, allowing for the accurate recovery of MOI up to approximately 7 strains. Naive estimation substantially underestimated MOI in comparison, due in part to the limited capacity of low diversity loci to discriminate MOI, as well as the presence of related strains that deflate the observed number of distinct alleles. This bias was particularly prominent for low diversity markers such as SNPs, which can only resolve up to 2 strains.

MOIRE was generally able to recover within-host relatedness, particularly for moderate and high diversity markers in the context of high relatedness (Figure 1E). SNP based panels had difficulty resolving individual level within-host relatedness and were sensitive to the uniform prior. It should be noted that in the circumstance that a monoclonal infection has an inferred MOI greater than 1, MOIRE will likely classify these infections as having very high relatedness (Figure 1E). This is due to the presence of false positives, which MOIRE will sometimes infer as an infection consisting of highly related strains rather than being explained by observation error. Therefore, within-host relatedness should be interpreted in the context of the probability of the infection being polyclonal. A more robust metric is eMOI, since it is a metric of diversity that integrates MOI and within-host relatedness.

MOIRE recovered eMOI with high accuracy under all conditions using polyallelic panels (Figure 1F). SNP panels exhibited a larger degree of bias at higher eMOI, but still performed relatively well for eMOI of up to 4. This demonstrates that while identifiability of MOI or within-host relatedness may be challenging in some situations, eMOI is a reliably identifiable quantity when estimated using highly polymorphic markers.

All simulations were also conducted without any relatedness present. MOIRE was still able to accurately recover allele frequencies, heterozygosity, and MOI, indicating that minimal bias or uncertainty are introduced by attempting to estimate relatedness (Supplementary Figure 1).
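The naive estimators described above are simple to compute from an allele-presence representation of the data. The sketch below is a minimal illustration (not MOIRE's implementation), assuming the data for one sample are stored as one binary vector of observed alleles per locus, as in the model description; the function and variable names are ours.

```python
import numpy as np

def naive_moi(sample):
    """Naive MOI: the second-highest number of distinct alleles observed across loci.

    `sample` is a list of binary arrays, one per locus, where 1 marks an observed allele.
    """
    counts = sorted(int(np.sum(locus)) for locus in sample)
    if len(counts) < 2:
        return counts[0] if counts else 0
    return counts[-2]  # second-highest allele count across loci

def naive_allele_frequencies(samples, locus_index):
    """Naive frequencies: each observed allele is assumed to come from a single strain."""
    presence = np.array([s[locus_index] for s in samples], dtype=float)
    totals = presence.sum(axis=0)
    return totals / totals.sum()

# Toy example: one sample genotyped at three loci.
sample = [np.array([1, 0, 1, 0]),      # 2 alleles observed
          np.array([1, 1, 1]),         # 3 alleles observed
          np.array([0, 1, 0, 0, 1])]   # 2 alleles observed
print(naive_moi(sample))  # -> 2 (second-highest of [2, 2, 3])
```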
These patterns held across the range of false positive and false negative rates simulated with the fully synthetic simulations. Allele frequencies and heterozygosity remained well estimated by MOIRE across settings; however, bias was elevated for individual level estimates of MOI, within-host relatedness, and eMOI when false positive rates were increased and panel diversity was low. Increased false negative rates did not result in any additional bias within the range of tested values (Supplementary Figure 2).

Population inference

MOIRE is a probabilistic approach providing a full posterior distribution over model parameters, allowing estimation of credible intervals for model parameters as well as functions thereof. While sample level parameters estimated by the model are useful, it may also be useful to estimate population level summary statistics for reporting and comparison purposes. We thus calculated the posterior distribution of population level summaries of interest, such as mean MOI, mean within-host relatedness, and mean eMOI. We note that mean within-host relatedness is defined only for samples with MOI greater than 1; therefore, the posterior distribution of mean within-host relatedness was calculated across samples with MOI greater than 1 at each iteration of the MCMC algorithm. MOIRE accurately estimated these quantities across a range of conditions (Supplementary Figure 3), with the best performance seen for polyallelic data.

Population mean MOI was accurately estimated across all panels, with improved precision at lower levels (Supplementary Figure 3A, Table 1). Credible interval (CI) coverage in general was poor, likely due to the challenge of identifiability in conjunction with within-host relatedness. SNP panels were largely unable to resolve population level mean within-host relatedness and exhibited poor CI coverage and substantial sensitivity to the uniform prior specification, due to the low relative information contained in these markers. Polyallelic panels in contrast had improved precision as more diverse panels were used, although CI coverage was also poor due to persistent sensitivity to the uniform prior, as indicated by slightly overestimating within-host relatedness below .5 and underestimating within-host relatedness above .5.

Population mean eMOI was remarkably accurate for low and medium mean MOI when using SNP based panels, with bias only becoming apparent at higher mean MOI (Supplementary Figure 3C, Table 1). Polyallelic panels had substantially improved precision across a wide range of values, further demonstrating that while population mean within-host relatedness or mean MOI may be challenging to identify, mean eMOI remains a highly identifiable quantity when genetic markers with sufficient diversity are used.
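The conditional population summary described above can be computed directly from posterior draws. Below is a minimal sketch of that bookkeeping under assumed array shapes (draws × samples); it is not MOIRE's code, and the function and variable names are ours.

```python
import numpy as np

def population_summaries(moi_draws, relatedness_draws):
    """Population-level posterior summaries from MCMC draws.

    moi_draws, relatedness_draws: arrays of shape (n_draws, n_samples), where row t
    holds the MOI / within-host relatedness of every sample at MCMC iteration t.
    """
    mean_moi = moi_draws.mean(axis=1)  # one value per posterior draw
    # Mean within-host relatedness is defined only for polyclonal samples, so
    # average over samples with MOI > 1 separately at each iteration.
    poly = moi_draws > 1
    mean_relatedness = np.array([
        relatedness_draws[t, poly[t]].mean() if poly[t].any() else np.nan
        for t in range(moi_draws.shape[0])
    ])
    summarize = lambda x: (np.nanmean(x), np.nanpercentile(x, [2.5, 97.5]))
    return summarize(mean_moi), summarize(mean_relatedness)
```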
Metric stability across genetic backgrounds

Population metrics of genetic diversity enable researchers to make comparisons across space and time, and to answer questions relating to differences in transmission dynamics. In order for a metric to be useful for these purposes, it must be sensitive to changes in transmission dynamics while remaining insensitive to other factors that vary and may confound interpretation, such as the genotyping panel used or the local allele frequencies for a given panel. For example, if we were to compare two populations that exhibit the same transmission dynamics, we would want the metric to be the same, uninfluenced by differing population allele frequencies. It would be even better if the metric were insensitive to the genotyping panel used, allowing for comparisons across studies that are independent of the technology utilized.

To explore the performance of eMOI across varying transmission settings, we simulated 100 samples with MOI drawn from a ZTP distribution with either λ = 1 or λ = 3. For each sample, we then simulated either low or high within-host relatedness. For each individual level simulation, we then observed simulated genetics parameterized by each of the 12 regional populations previously described using the 5 genotyping panels, followed by the previously described observation process with false positive and false negative rates of .01 and .1 respectively. We then fit MOIRE on each simulation independently.
For each simulation, we calculated mean eMOI, mean naive MOI, and the within-host infection fixation index (F_WS) (Roh et al., 2019; Manske et al., 2012), a frequently used metric of within-host diversity that relates genetic diversity of the individual infection to diversity of the parasite population. Mean MOI was calculated using the second-highest number of observed alleles, and F_WS used the observed genetics, assuming all alleles were equifrequent within hosts, and naive estimates of allele frequencies to estimate heterozygosity. For these metrics to be most useful in characterizing transmission dynamics, they should be the same for all simulations with the same degree of within-host relatedness and mean MOI, no matter the panel used nor the genetic background of the population.

We found that mean eMOI was stable across all genetic backgrounds using microhaplotype based panels, yielding accurate estimates of mean eMOI despite substantial variability in local diversity of alleles, as shown by heterozygosity, and differing genomic loci (Figure 2A). Interestingly, while the SNP panels exhibited reduced precision and downward bias as expected, they were consistently biased with respect to the true eMOI, even across different panels. This suggests that SNP panels, while limited in resolving power, may still have utility in estimating relative ordering of eMOI. These results also demonstrate that eMOI may be readily used and compared across transmission settings and is relatively insensitive to other factors, such as heterozygosity, that may vary across settings. In contrast, mean naive MOI and F_WS were sensitive to genetic background and genotyping panel in confounded ways. Mean naive MOI, only useful with polyallelic markers, exhibits an inherent upward bias as mean heterozygosity increases that is most severe at higher mean MOI. This bias also varied with the genotyping panel used, making it difficult to interpret and compare across settings (Figure 2B). F_WS is also sensitive to genetic background and panel used, exhibiting an upward trend as heterozygosity increases and a bias that varies across panels. This is inherent to the construction of the metric, as it is coupled to an estimate of the true heterozygosity of the genetic loci being used (Figure 2C). This simulation demonstrates limitations in the utility of F_WS as a metric of within-host diversity for a population, as it is inherently uncomparable across settings due to its high sensitivity to varying genetic background and genotyping panel used. Mean eMOI, in contrast, is a stable metric of genetic diversity that is insensitive to genetic background and genotyping panel, and is thus readily comparable across settings.
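For reference, a minimal sketch of the naive F_WS computation as described above is given below. It assumes the common definition F_WS = 1 - H_W/H_S (mean within-host heterozygosity over mean population heterozygosity across loci), with observed alleles treated as equifrequent within the host and population frequencies estimated naively; this is our illustrative reading, not code from MOIRE or the cited papers.

```python
import numpy as np

def expected_het(freqs):
    """Expected heterozygosity 1 - sum(p_i^2) for a vector of allele frequencies."""
    freqs = np.asarray(freqs, dtype=float)
    return 1.0 - np.sum(freqs ** 2)

def naive_fws(sample, population_freqs):
    """Naive F_WS for one sample.

    sample: list of binary allele-presence vectors, one per locus.
    population_freqs: list of naive population allele-frequency vectors, one per locus.
    Assumes F_WS = 1 - mean(H_W) / mean(H_S), with equifrequent within-host alleles.
    """
    hw, hs = [], []
    for obs, pop in zip(sample, population_freqs):
        k = int(np.sum(obs))
        if k == 0:
            continue  # locus failed to genotype; skip it
        within = np.full(k, 1.0 / k)       # equifrequent observed alleles within the host
        hw.append(expected_het(within))
        hs.append(expected_het(pop))
    return 1.0 - np.mean(hw) / np.mean(hs)
```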
Application to a study in Northern Namibia

We next used MOIRE to reanalyze data from a previously conducted study carried out in northeastern Namibia, consisting of 2585 samples from 29 health facilities across 4 health districts genotyped at 26 microsatellite loci (Tessema et al., 2019). We ran MOIRE across samples collected from each of the 4 health districts independently. Running MOIRE in this way implies that we are assuming that all samples from each health district come from a shared population with the same allele frequencies. We then calculated summary statistics of interest on the sampled posterior distributions.

We compared our results to the naive estimation conducted in the original study and found that the overall relative ordering of mean MOI was maintained, with Andara and Rundu exhibiting the highest MOI, Zambezi the lowest, and Nyangana in between, consistent with contemporary estimates of transmission intensity (Tessema et al., 2019). However, similar to our simulations, naive estimation substantially underestimated mean MOI across health districts compared to MOIRE (Figure 3A and C). Individual within-host relatedness was estimated to be very high across sites (IQR: .61-.91) with no differences between sites (Figure 3B). This suggests substantial inbreeding, which may be indicative of persistent local transmission, consistent with the original findings by Tessema et al. (2019). We also found that heterozygosity across loci estimated by MOIRE was generally lower (IQR: .55-.85), consistent with the previously described simulations demonstrating that naive estimation overestimates heterozygosity, and that previously detected statistically significant differences in heterozygosity between the Zambezi region and the other three regions may have been an artifact of biased estimation (Figure 3D).

We also ran MOIRE independently across each of the 29 health facilities, excluding 2 health facilities from the Zambezi region due to the low total number of samples (n = 9 in each). Stratifying by health facility revealed substantial heterogeneity in mean MOI, within-host relatedness, and consequently eMOI, also consistent with the findings by Tessema et al. (2019) (Figure 4).

Fig. 3: Estimated MOI, relatedness, eMOI and heterozygosity in Northern Namibia. MOIRE was run on data from 2585 samples from 29 clinics genotyped at 26 microsatellite loci, subset across four health districts. Each point represents the posterior mean or median for each sample or locus level parameter. The black circle represents the population mean with 95% credible interval for each health district and the black triangle indicates the naive estimate where applicable. In the case of eMOI (C), the naive estimate is simply the MOI. Opacity was used to accommodate overplotting in A, C and D; however, opacity in B is reflective of the posterior probability of a particular sample being polyclonal, to emphasize that an observation's contribution to the posterior distribution of mean within-host relatedness is weighted by its probability of being polyclonal. This is due to the fact that mean within-host relatedness is only defined for samples with MOI greater than 1, and thus the posterior distribution of within-host relatedness was calculated by taking the mean within-host relatedness across samples with MOI greater than 1 at each iteration of the MCMC algorithm. Therefore, the opacity of each point in B is reflective of the contribution of that sample to the posterior distribution of mean within-host relatedness.
Interestingly, Tessema et al. (2019) identified Rundu district hospital as having exceptionally high within-host diversity as measured by F_WS, which was posited to be due to a large fraction of the patients having traveled to or resided in Angola. We found that Rundu district hospital had the highest mean eMOI and the greatest spread across observations (IQR = 4.88). This was mainly driven by a much higher mean MOI (7 [95% CI: 6.5-7.5]) and low mean within-host relatedness (.47 [95% CI: .43-.51]). The combination of high MOI and relatively low within-host relatedness translates into a high population mean eMOI.

In particular, naive estimation systematically overestimates measures of allelic diversity such as heterozygosity and systematically underestimates MOI. State-of-the-art methods … diversity than naive MOI or F_WS, and is insensitive to other factors that may vary across settings, such as allele frequencies of given genetic markers. Further, by decomposing the genetic state of an infection into components of within-host relatedness and the number of distinct strains present, we have enabled the characterization of these quantities independently, which may be of interest in their own right. For example, within-host relatedness may be of interest in the context of understanding the role of inbreeding and co-transmission in the parasite population (Wong et al., 2022; Nkhoma et al., 2020), and the number of distinct strains may be of interest in the context of understanding superinfection dynamics.

While we have demonstrated the utility of polyallelic data, MOIRE is still compatible with SNP based data and can offer benefits over other approaches. When using SNP based panels, eMOI is still well characterized up to moderate levels, and while the reduced capacity of SNPs generally results in biased estimates, the estimates recovered reflect changes in within-host relatedness yet are stable across genetic backgrounds. Thus, these data may be useful for comparing relative ordering of eMOI across settings and providing inference. In contrast, existing analytical approaches are likely to be sensitive to model misspecification by not considering within-host relatedness and varying genetic backgrounds, and may be biased in ways that are difficult to interpret and compare across settings.
We also note that while increasing the number of loci genotyped is always beneficial, the largest gains in recovering estimates of interest come from using sufficiently diverse loci. Our simulations demonstrate that, even with a modest number of very diverse loci, such as our synthetic simulations using 30 loci, eMOI can be recovered with high accuracy and precision. The marginal increase in complexity of incorporating several highly diverse loci, for example in the context of drug resistance monitoring, may be outweighed by the substantial insights obtained from jointly understanding transmission dynamics, population structure, and drug resistance through increased accuracy in estimating resistance marker allele frequencies. Modern amplicon sequencing panels have been developed precisely for these contexts, combining high diversity targets with comprehensive coverage of known resistance markers (LaVerriere et al., 2022; Aranda-Diaz and Neubauer Vickers, 2022; Kattenberg et al., 2023).

MOIRE provides a powerful tool for leveraging polyallelic data to understand malaria epidemiology, and there are multiple avenues for future work to further improve inference. First, the observation model does not currently fully leverage the information in sequencing based data where the actual number of reads may be available. This may provide additional information, e.g. to inform false positive rates by considering the number of reads attributable to an allele, as well as false negative rates by considering the total number of reads at a locus, which may be indicative of sample quality. Second, we currently consider only a single, well mixed background population parameterized by allele frequencies at each locus. However, it may be the case that there are multiple distinct populations with their own allele frequencies, and that the observed data are a mixture of these populations. This may be particularly relevant in the context of malaria transmission where there may be multiple distinct populations of parasites circulating in a region. Future work may consider a mixture model over allele frequencies, where the number of populations …
Fig. 1: True vs. estimated values of parameters across panels of varying genetic diversity. Panel A summarizes the distribution of heterozygosity across each panel used. Each symbol represents the estimated value of the parameter for a single simulated dataset, with the true value of the parameter on the x-axis and the estimated value on the y-axis. Simulations were pooled across mean MOIs and levels of relatedness. False positive and false negative rates were fixed to 0.01 and 0.1 respectively. Opacity was set to accommodate overplotting, except in the case of within-host relatedness, where opacity reflects the estimated probability that a sample is polyclonal, calculated as the posterior probability of the sample MOI being greater than 1, as individual within-host relatedness is only defined for samples with MOI greater than 1. MOIRE accurately recovered parameters of interest with increasing accuracy as panel diversity increased, while naive estimation exhibited substantial bias where such estimators exist.

Fig. 2: Comparison of mean eMOI to other summary measures of diversity across varying levels of within-host relatedness. For each level of relatedness (low and high), we simulated 100 infections with a mean MOI of 1.51 and 3.16, for a total of 400 infections across 4 conditions. Keeping the MOI and relatedness fixed for each sample, we varied the genetic diversity of the panel used to genotype each sample. We then calculated the mean eMOI from MOIRE, mean MOI using the naive estimator, and mean F_WS using a naive estimate of allele frequencies for each simulation, to assess the sensitivity of each metric to varying the genetic diversity of the panel. True mean eMOI and mean MOI are fixed values within levels of within-host relatedness and are annotated by dashed lines. Mean F_WS is not fixed within levels of within-host relatedness and MOI because it is a function of the genetic diversity of the panel.

Fig. 4: Estimated MOI, relatedness, eMOI and heterozygosity in Northern Namibia, stratified by health facility. MOIRE was run independently on data from each health facility. Two health facilities from the Zambezi region were excluded due to only having 9 samples present in each subset. Health facilities are plotted in geographic order from West to East. Plotting conventions are the same as in Figure 3.
… within-host relatedness decreases; however, the contribution to the estimate of eMOI from within-host relatedness also decreases, and thus the overall precision of eMOI is maintained.

Inference and Implementation

We fit our model to observed genetic data using a Markov Chain Monte Carlo (MCMC) approach using the Metropolis-Hastings algorithm with a variety of update kernels. Details of sampling and implementation are described in the supplementary material (section 5). MOIRE is implemented as an R package and is available with tutorials and usage guidance at https://eppicenter.github.io/moire/. All sampling procedures were implemented using Rcpp (Eddelbuettel and Francois, 2011) for efficiency. Substantial effort was placed on ease of use and limiting the amount of tuning required by the user, by leveraging adaptive sampling methods. We provide weak default priors for all parameters and recommend that users only modify priors if they have strong prior knowledge about the parameters, such as experimentally derived estimates of false positive and false negative rates using samples with known parasite compositions and densities. All analysis conducted in this paper was done using MOIRE with default priors and settings, using 40 parallel tempered chains for 5000 burn-in steps, followed by 10,000 samples which were thinned to 1000 total samples.

Table 1. Mean absolute deviation (MAD) of estimates of MOI, heterozygosity, within-host relatedness, and eMOI across simulations using synthetic (top) and real-world (bottom) genotyping panels. The MAD of estimates of MOI was calculated by taking the mean of the MAD for each stratum of true MOI between 1 and 10. Within-host relatedness accuracy is only considered for samples with a true MOI > 1. Coverage rates of 95% credible intervals are shown in parentheses for estimates by MOIRE.
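To make the inference machinery above concrete, here is a bare-bones Metropolis update for a generic parameter vector with a symmetric random-walk proposal. This is an illustrative sketch only; MOIRE's actual samplers use multiple specialized update kernels, adaptive tuning, and parallel tempering implemented in Rcpp, none of which is shown here.

```python
import numpy as np

def metropolis_sampler(log_posterior, theta0, n_steps=10_000, step_size=0.1, rng=None):
    """Generic random-walk Metropolis sampler over an unconstrained parameter vector."""
    rng = rng or np.random.default_rng()
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    lp = log_posterior(theta)
    draws = np.empty((n_steps, theta.size))
    for t in range(n_steps):
        proposal = theta + step_size * rng.standard_normal(theta.size)
        lp_prop = log_posterior(proposal)
        # Symmetric proposal, so the acceptance ratio is just the posterior ratio.
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        draws[t] = theta
    return draws
```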
v3-fos-license
2022-04-12T06:22:43.534Z
2022-04-01T00:00:00.000
248084020
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://osf.io/r57hb/download", "pdf_hash": "650343823c156bca5dcf44fe055d70bbdc30ae27", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42019", "s2fieldsofstudy": [ "Philosophy" ], "sha1": "9c1c19c700cc44ca8bda6d28fadeacc508aa17a6", "year": 2022 }
pes2o/s2orc
Varieties of Ignorance: Mystery and the Unknown in Science and Religion

Abstract

How and why does the moon cause the tides? How and why does God answer prayers? For many, the answer to the former question is unknown; the answer to the latter question is a mystery. Across three studies testing a largely Christian sample within the United States (N = 2524), we investigate attitudes toward ignorance and inquiry as a window onto scientific versus religious belief. In Experiment 1, we find that science and religion are associated with different forms of ignorance: scientific ignorance is typically expressed as a personal unknown ("it's unknown to me"), whereas religious ignorance is expressed as a universal mystery ("it's a mystery"), with scientific unknowns additionally regarded as more viable and valuable targets for inquiry. In Experiment 2, we show that these forms of ignorance are differentially associated with epistemic goals and norms: expressing ignorance in the form of "unknown" (vs. "mystery") more strongly signals epistemic values and achievements. Experiments 2 and 3 additionally show that ignorance is perceived to be a greater threat to science and scientific belief than to religion and religious belief. Together, these studies shed light on the psychological roles of scientific and religious belief in human cognition.

How and why does the moon affect the tides? How and why do bad things happen to good people? In the current paper, we investigate the role of ignorance across the domains of science and religion, focusing on ignorance regarding the answers to questions about how or why something is the case. When is this kind of ignorance taken as a basis to question belief (e.g., that the moon really does cause the tides, or that bad things really do happen to good people), and when is it a call to action--evidence that inquiry is worth pursuing? When is such ignorance considered a threat, and when is it better accepted, and even favored or revered?

Science and religion are sometimes characterized as different "ways of knowing" (e.g., Harris, 2007; see also Gould, 2002), suggesting that each domain involves a unique set of norms governing knowledge and justified belief. A complementary idea is that science and religion involve different forms of ignorance (or the absence of knowledge or belief), with unique sets of norms governing attitudes and practices concerning the unknown. This is the hypothesis we explore, with the expectation that attitudes toward ignorance have the potential to reveal epistemic commitments concerning the nature of inquiry and justified belief across domains, and also to shed light on the psychological roles of scientific and religious belief more broadly.

To test this hypothesis, we focus on ignorance concerning how and why something is the case--forms of knowledge often taken to be central to understanding (e.g., Lombrozo & Wilkenfeld, 2019). We refer to these manifestations of ignorance as "scientific unknowns" when they concern the natural world and canonically scientific content (e.g., how and why the big bang created the universe), and as "religious unknowns" when they incorporate the supernatural and concern canonically religious content (e.g., how and why God created the universe).
We predict that scientific and religious unknowns differ in the following three respects: (1) scientific unknowns are regarded as more viable, appropriate, and valuable targets for inquiry than are religious unknowns; (2) scientific unknowns (by virtue of suggesting failures or limitations of inquiry) are perceived as more threatening to scientific belief and to the domain of science than religious unknowns are to religious belief and the domain of religion; and (3) these different attitudes toward ignorance are reflected in language, such that ignorance about scientific answers is more naturally expressed in terms of the unknown (e.g., "It's unknown how and why the big bang created the universe"), whereas ignorance about religious answers is more naturally expressed in terms of mystery (e.g., "It's a mystery how and why God created the universe").

Prior work suggests that scientific and religious beliefs may indeed differ with respect to the epistemic practices and norms that govern them. For instance, there is evidence that scientific (vs. religious) beliefs are more likely to be justified by appeal to evidence (Metz, Weisberg, & Weisberg, 2018; Shtulman, 2013), more likely to be held with high confidence (Davoodi et al., 2019; Harris, Pasquini, Duke, Asscher, & Pons, 2006), and more likely to be perceived as objectively true (Heiphetz, Spelke, Harris, & Banaji, 2013; see also Friesen, Campbell, & Kay, 2015; Gottlieb, 2007; Heiphetz, Spelke, Harris, & Banaji, 2014). Consistent with the idea that people intuitively differentiate scientific and religious belief, Heiphetz, Landers, and Van Leeuwen (2018) found that English-speaking adults in the United States tend to use the word "think" in discussing scientific claims (e.g., "I think the universe started with the Big Bang") and the word "believe" in discussing religious claims (e.g., "I believe God created the world in seven days"). Evidence for a similar distinction has been documented across several other languages and cultures (Van Leeuwen, Weisman, & Luhrmann, 2021).

These findings fit within a developing framework according to which scientific and religious beliefs are tuned to different psychological functions (e.g., Davoodi & Lombrozo, 2021; Davoodi & Lombrozo, 2020; Tetlock, 2002; Van Leeuwen, 2014). For instance, Davoodi and Lombrozo (2021) find that scientific explanations are more strongly associated with epistemic merits (e.g., being logical and based on evidence), whereas religious explanations are more strongly associated with nonepistemic merits (e.g., social, emotional, and moral benefits). One possibility, then, is that scientific beliefs typically serve more epistemic roles--such as offering veridical representations of the world that support accurate predictions and effective interventions--whereas religious beliefs typically serve other roles, such as buffering existential anxiety, signaling group membership, or promoting prosociality within groups (e.g., Norenzayan, 2013; Norenzayan & Hansen, 2006; Pichon, Boccato, & Saroglou, 2007). If these functional profiles predict different attitudes toward scientific versus religious belief, we should expect corresponding differences in attitudes toward scientific versus religious ignorance.

What might this functional approach predict about attitudes toward scientific versus religious ignorance? First, consider the case of science.
If science (and scientific belief) is aligned with epistemic goals, then ignorance should be acknowledged because it defines the contours of prior success and points the way toward future progress. It might be aversive (insofar as it indicates current limitations), but it should also be motivating--a sign that there is further inquiry to pursue. This is consistent with the observation that recognizing and reporting ignorance and uncertainty is central to scientific practice (see Smithson, 1993), but empirical evidence on the corresponding psychology has been relatively sparse. Prior work has documented variability in public attitudes toward scientific uncertainty (Gustafson & Rice, 2020), as well as preferences for calibrated expressions of uncertainty concerning factual claims (e.g., Sah, Moore, & MacCoun, 2013; Tenney, MacCoun, Spellman, & Hastie, 2007). However, little is known about scientific ignorance as such. Kominsky, Langthorne, and Keil (2016) found that adults (and even children as young as 9 years old) appreciate "virtuous ignorance" when it comes to factual matters that are actually or practically unknowable, such as the number of leaves on all the trees in the world. In these studies, participants favored an informant who acknowledged ignorance about "unknowable" facts over an informant who claimed to have knowledge. Similarly, preschoolers have been shown to distinguish between an informant who professes ignorance and an informant who provides inaccurate information, attributing and endorsing future knowledge more often to the informant in the former instance (Kushnir & Koenig, 2017). This research suggests that recognizing (scientific) ignorance is likely to be valued in certain contexts. However, it remains unclear whether such ignorance has implications for inquiry. Inquiry concerning something "knowable" may be fruitful, but inquiry into the unknowable may be a waste of time.

What about ignorance in the case of religion? At least some religious traditions, including several Christian traditions, seem to embrace a notion of mystery, perhaps signaling that something is not only unknown, but also unknowable, or otherwise inappropriate as a target of inquiry. Monsignor Charles Pope (2013), for example, states that "in the ancient Christian tradition, mystery is something to be accepted and even appreciated…the attempt to solve many of the mysteries in the Christian tradition would be disrespectful, and prideful too." This attitude toward the unknown is potentially puzzling if religion (and religious belief) is regarded as a predominantly epistemic exercise. However, it makes more sense if religion (and religious belief) is instead aligned with other goals--such as signaling commitment or structuring community, and to that end nurturing faith and humility.

Consistent with these suggestions, there is evidence that at least within some populations, religion (vs. science) is in fact judged to be less oriented toward inquiry and more oriented toward mystery. In one set of studies, predominantly Christian participants within the United States judged that questions about science demanded an explanation more strongly than questions about religion, and correspondingly, that it was more appropriate to answer questions about religion with "It's a mystery" than it was to answer questions about science in the same way (Liquin, Metz, & Lombrozo, 2020). These results were driven in part by participants' stronger beliefs that questions about religion (vs.
science) could not be answered (because the answer is beyond human comprehension), but also that they should not be answered (perhaps because doing so would be seen as disrespectful or prideful, as Monsignor Charles Pope suggests). In other studies (Gill & Lombrozo, 2019), a similar population was presented with vignettes about a character who encounters a scientific or religious claim for the first time, and decides either to pursue further inquiry (i.e., seeking evidence or explanation) or to abdicate from further inquiry (i.e., not seek further evidence or explanation). Participants were asked to judge how committed that character is to both science and religion. Characters who pursued inquiry (on any topic) were judged more committed to science than characters who did not. By contrast, for religious claims in particular, characters who pursued further inquiry were regarded as less committed to religion than were characters who chose not to pursue further evidence or explanation.

The theoretical considerations and evidence just reviewed motivate our predictions that scientific unknowns will be regarded as more viable, appropriate, and valuable targets for inquiry than religious unknowns, and that the latter will be more strongly associated with "mystery" versus a more generic expression of ignorance, such as "unknown." These considerations also suggest that whereas scientific ignorance may be regarded as more circumscribed in scope (limited to particular people or points in time), religious ignorance might be seen as insurmountable: not merely a current unknown, but something that is in principle unknowable, and appropriately so. We test this suite of predictions in Experiments 1 and 2. In Experiment 1, we investigate the expressions of ignorance that participants judge most appropriate in response to questions about science and religion (e.g., how and why the big bang occurred vs. how and why God answers prayers). We predict that expressions of ignorance in science will tend to focus on the unknown (vs. mystery) and to involve a personal scope ("it's unknown to me"), whereas expressions of ignorance within religion will tend to focus on mystery (vs. the unknown) and a universal scope ("it's a mystery [to everyone]"). In Experiments 1 and 2, we additionally investigate whether scientific unknowns, as compared to religious mysteries, are seen as more consistent with epistemic goals, such that they are more viable, appropriate, and valuable targets for inquiry (Experiment 1), and more indicative of epistemic values and norms (Experiment 2).

In Experiments 2 and 3, we additionally consider the implications of ignorance for belief. If someone learns that it is unknown how the first living organisms emerged from natural processes, or that it is a mystery how God parted the Red Sea, is this likely to threaten the corresponding beliefs--that the first living organisms emerged from natural processes, or that God parted the Red Sea? Do scientific unknowns pose a threat to science, or religious mysteries to religion? Does the level of threat to a belief or domain depend on the form that ignorance assumes: a mere unknown versus a mystery? If science is evaluated in terms of its epistemic success, then ignorance might be threatening insofar as it suggests the domain of science is circumscribed (because something is unknowable) or that a belief system is incomplete (because something is currently unknown).
On the other hand, if religious ignorance is assumed to be inevitable or even desirable, religious ignorance is unlikely to threaten the corresponding beliefs or the domain as a whole. Prior work offers conflicting predictions. Klein and Colombo (2018) develop a theoretical account of mysteries, from which they argue that learning that something is a mystery can sometimes offer evidence against the corresponding belief. Specifically, they suggest that some mysteries pose a conflict with our pre-existing beliefs (e.g., Jesus turning water into wine conflicts with our intuitive theories of matter and material change), while the negation of the mystery does not (that Jesus did not turn water into wine). They argue that in such cases, learning that something is a mystery offers some evidence against the claim itself (that Jesus in fact turned water into wine). Because both religious and scientific claims are often counterintuitive (Boyer, 1994, 2001; Boyer & Ramble, 2001; Lane & Harris, 2014), Klein and Colombo's analysis suggests that learning that something is a mystery is potentially threatening to beliefs from both domains (but see Bussey (2011) for an argument about mystery as an appropriate form of not knowing in both science and religion, which suggests that mysteries may not be threatening to either domain). On the other hand, there is evidence that religious beliefs may derive psychological value from their status as unverifiable. In particular, Friesen et al. (2015) found that religious believers reported greater religious conviction after reading a passage that claimed the existence of God could never be proven or disproven, versus one that claimed the existence of God would eventually be proven or disproven. Moreover, when their religious beliefs were threatened, participants were more likely to endorse unfalsifiable (vs. falsifiable) reasons for religious beliefs. If "mystery" signals unverifiability, a declaration of mystery could actually bolster, rather than challenge, the corresponding religious belief.

In Experiment 2, we test the hypothesis that ignorance is more threatening to scientific belief and to science than to religious belief and religion, with the largest threat coming from ignorance in the form of mystery (because it is less consistent with epistemic norms). In Experiment 3, we look at personal belief, asking whether people's confidence in their own beliefs decreases in the face of stated ignorance, with differential effects across domains (science vs. religion) and forms of ignorance (unknown vs. mystery). Across Experiments 1-3, we thus offer the first systematic investigation of the psychology of scientific and religious ignorance, including implications for inquiry and belief.

Experiment 1

In Experiment 1, we investigated the forms of ignorance associated with scientific versus religious explanation-seeking questions (e.g., how and why humans evolved from earlier primates vs. how and why God answers prayers). We varied two dimensions of ignorance: unknown ("it's unknown") versus mystery ("it's a mystery"), and universal (e.g., "it's unknown") versus personal (e.g., "it's unknown to me"). We predicted that compared to science, religion would be more strongly associated with mystery versus unknown, and with universal versus personal scope.
To create conditions in which participants would naturally respond to a "how and why" question about science or religion with some form of ignorance, participants were first asked to report religious or scientific beliefs that they hold (e.g., that humans evolved from earlier primates, or that God answers prayers), but for which they do not know the "how and why." They were then asked to select the most appropriate response from the four options generated by crossing unknown versus mystery with personal versus universal. In addition, participants were asked about the possibility and value of inquiry into the "how and why" concerning their belief (e.g., "It would be fruitful to investigate how this happens"), the norms governing inquiry ("People shouldn't try to answer how and why this happens"), the verifiability of their belief (e.g., "This belief can be tested"), and whether it was held on faith ("I hold this belief on faith"). Most of these items were included to offer a conceptual replication of prior work documenting perceived differences between science and religion, with science more strongly associated with inquiry (e.g., Gill & Lombrozo, 2019; Liquin et al., 2020) and with epistemic dimensions of belief (e.g., Davoodi & Lombrozo, 2021). Following up on this domain distinction, the current study allowed us to ask whether unknown reflects a more "scientific" profile than mystery with respect to these aspects of inquiry and belief.

The procedures, predictions, and analyses for Experiment 1 were preregistered and are available on OSF (https://osf.io/43yte/). A copy of the survey and data are available at https://osf.io/8x6vq/.

Participants

Participants were 506 adults recruited on Prolific (257 self-identified as a woman, 241 as a man, and 8 as nonbinary; M_Age = 34 years, SD_Age = 12 years). Of these, 37% identified as Christian and 29% as Atheist, with the remaining 34% including "other" (14%), "Spiritual" (11%), and other religious affiliations (9%)--Buddhist, Jewish, Hindu, Muslim, and combinations of two or more affiliations. Participation in all studies was restricted to Prolific workers in the United States who had not participated in any related pilot studies, and who had an approval rating of at least 95% based on at least 100 prior tasks. An additional 103 participants were excluded from analyses because they did not meet criteria for belief generation in either domain (i.e., religious or scientific beliefs; N = 99), as detailed below, and/or because they did not pass attention checks in blocks where they did meet criteria for belief generation.

Procedure

Participants completed all procedures online using Qualtrics Survey Software (see OSF page for the full survey). Each participant first consented to participate and pledged to pay attention and respond carefully. Participants were then told: "In this survey, we will ask you about some of your beliefs. Sometimes, we may hold a belief (for example, that humans evolved from earlier primates), but when asked about the details of this belief (for example, how and why humans evolved from earlier primates), we may not have all the information. These are the kinds of beliefs we would like to ask you about." They then completed a brief training on the "content" of beliefs. This was to ensure that when asked to describe their belief, participants only produced the propositional content of their belief (e.g., "humans evolved from earlier primates" vs.
"I believe that humans evolved from earlier primates"), as this content was used for later questions. In the training phase, they were also familiarized with the scale used for subsequent questions. On the next page, participants were asked to indicate whether they could think of a scientific belief or a religious/spiritual belief (order counterbalanced) that they hold, but for which they do not know much about the how and why. If they indicated that they could not do so, they were then asked the same question about a belief from the other domain. If they indicated that they could not do so in the second domain either, they were taken to the end of the survey and corresponding data were excluded from analysis. If participants indicated that they could think of a belief that met our requirements in the given domain, they were invited to type the content of the belief into a text box. The text entered into this box was then used on the next page, where participants were asked: "Which of the following answers do you think is most appropriate in response to the question of how and why [generated belief]." Using a drop-down menu, participants selected one of: "It's unknown," "It's unknown to me," "It's a mystery," "It's a mystery to me" (order randomized). On the next page, participants rated their agreement with five statements about the value and possibility of inquiry regarding their belief (order randomized), on a scale from -3 (strongly disagree) to 3 (strongly agree), with 0 representing "neither agree nor disagree" (see Table 1, items 1-5). On the next page, participants rated their agreement with four statements about the evidential status of the belief itself (see Table 1, items 6-9). If participants reported holding a belief in the second domain that met our requirements, these steps were repeated for the second belief. There was one attention check item intermingled among the statements about belief for each domain (e.g., "for this item, please select the midpoint of the scale."). If participants failed the attention check for a given domain, data from that domain were excluded from analysis. At the end of the survey, participants completed a brief demographics survey where they were asked about gender, age, income, education, religiosity, and religious affiliation. Table 1 Experiment 1--Measures used to assess judgments about mechanism and belief Questions about inquiry 1. Inquiry-how "It would be fruitful to investigate how this happens." 2. Inquiry-why "It would be fruitful to investigate why this happens." 3. Inquiry-investigation "Investigating this phenomenon should be a priority for future research." 4. Epistemic limit "How and why this happens is beyond human comprehension." 5. Epistemic regulation "People shouldn't try to answer how and why this happens." Questions about belief 6. Verifiability-verified "This belief can be verified as true or false." 7. Verifiability-tested "This belief can be tested." 8. Verifiability-evidence "In principle, one could find evidence relevant to whether this belief is true or false." 9. Faith "I hold this belief on faith." Results We first verified that participants who reported scientific or religious beliefs that met our requirements in fact generated propositions from the corresponding domains. In particular, we did not want to code beliefs that rejected religion as religious (e.g., "heaven or hell does not exist"), or pseudoscientific beliefs as scientific (e.g., "I believe there really is a big-foot"). 
Two independent coders coded all beliefs as "domain appropriate" or not, with very high levels of agreement (religious beliefs: 98% agreement; kappa = 0.84, SE = 0.06, p < .001, CI [0.71, 0.95]; scientific beliefs: 99% agreement; kappa = 0.84, SE = 0.08, p < .001, CI [0.67, 0.99]). Disagreements were resolved by a third coder. This resulted in excluding 22 beliefs as domain-inappropriate for religion (vs. 274 as domain-appropriate and included in analyses of all qualifying beliefs), and 11 as domain-inappropriate for science (vs. 431 as domain-appropriate and included in analyses of all qualifying beliefs). However, including these beliefs in the analyses that follow does not change the patterns reported in the results. Domain-appropriate religious beliefs mostly reflected canonical beliefs from Abrahamic traditions (e.g., God's creation of the universe/humans; God's existence; the existence of an afterlife in some form) as well as Christian beliefs about Jesus' life and crucifixion. Spiritual beliefs included beliefs in karma and reincarnation, and were coded as domain-appropriate together with religious beliefs. Domain-appropriate scientific beliefs were more diverse compared to religious beliefs and included a range of scientific topics, including climate change, physical health, mental health, genetics, evolution, and the big bang. A complete list of all domain-appropriate and domain-inappropriate beliefs generated by participants is included on OSF (https://osf.io/h9zk3/). Analytic approach We addressed our three research questions with analyses (detailed below) conducted in two ways. We first analyzed all qualifying beliefs, whether the participants generated a qualifying belief in a single domain or in both domains. We then repeated each analysis including only data from participants who generated a qualifying belief in both domains (N = 205). The advantage of the former approach is that it excludes fewer participants; the advantage of the latter is that it ensures that any effects, if found, reflect genuine differences across domains, and not selection effects leading some kinds of participants to be systematically excluded from a single domain. All reported confidence intervals refer to the 95% level. Forms of ignorance in science and religion Our first research question concerned the relationship between domain (religion vs. science) and forms of ignorance, which varied along two dimensions (unknown = 1 vs. mystery = 0, and personal scope = 1 vs. universal scope = 0). For each dimension, we regressed the response code (0 vs. 1) on domain using a mixed-effects binomial regression fit with the glmer function from the lme4 package in R. We also included by-participant intercepts to account for the within-subjects structure of domain. For the scope of ignorance, we found that questions about scientific beliefs were more often judged to involve personal ignorance and less often judged to involve universal ignorance than were questions about religious beliefs (B = 2.45, SE = 0.30, p < .001, OR = 11.57, CI [6.48, 20.67]). When considering only participants with qualifying beliefs in both domains, we replicated these results (B = 2.53, SE = 0.35, p < .001, OR = 12.61, CI [6.32, 25.14]) (Fig. 1). Attitudes toward inquiry and evidence in science and religion Our second research question concerned the perceived roles of inquiry, verifiability, epistemic regulation, epistemic limits, and faith across domains.
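Before turning to those measures, the following minimal R sketch illustrates the mixed-effects binomial regression described in the preceding paragraph. It is a sketch only: the data frame d, its column names, and the factor coding are hypothetical stand-ins for the study's actual variables.

```r
library(lme4)

# One row per qualifying belief: participant id, domain, and 0/1 codes
# derived from the selected answer option (hypothetical column names).
d$domain <- factor(d$domain, levels = c("religion", "science"))

# Unknown (1) vs. mystery (0) as a function of domain,
# with a by-participant random intercept.
m_type <- glmer(unknown ~ domain + (1 | participant),
                data = d, family = binomial)

# Personal (1) vs. universal (0) scope as a function of domain.
m_scope <- glmer(personal ~ domain + (1 | participant),
                 data = d, family = binomial)

summary(m_scope)                      # fixed-effect estimate (B), SE, p-value
exp(fixef(m_scope)["domainscience"])  # odds ratio for science vs. religion
```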
We first created composite scores for inquiry and verifiability. Based on pilot testing, we expected the three inquiry items to form a reliable construct, which warranted combining them into a single score (α = 0.87). Likewise, the three items measuring verifiability formed a reliable construct and were combined into a single score (α = 0.94). For each dependent variable, we conducted mixed-effects linear regression analyses with Domain as a predictor, using the lme function from the nlme package in R. Again, we included by-participant random intercepts to account for variability within participants across Domain. Fig. 2 shows the effects of Domain for each measure. For epistemic limits and regulation, we found that questions about science were less likely to be judged beyond human comprehension than were questions about religion (B = -3.80, ...). For verifiability, we found that scientific beliefs were rated as more verifiable than religious beliefs, both when all qualifying beliefs were included (B = 3.39, SE = 0.10, t = 34.80, p < .001, CI [3.20, 3.58]) and when only participants with qualifying beliefs in both domains were included (B = 3.43, SE = 0.13, t = 26.01, p < .001, CI [3.17, 3.69]). Associations between ignorance and attitudes toward inquiry and evidence within domains The analyses above reveal that on average, questions about science versus religion are associated with different forms of ignorance, and also with different attitudes toward inquiry and evidence. Our third research question concerned whether variation in forms of ignorance was associated with these attitudes within each domain, with unknown showing a more scientific profile than mystery. Correspondingly, we conducted linear regression models within each domain on each of the four epistemic measures and on "faith," with either Ignorance Type (unknown vs. mystery) or Ignorance Scope (personal vs. universal) as a predictor. Within the domain of religion, we did not find significant effects of ignorance type or ignorance scope on inquiry (Ignorance Type: ...). For verifiability, the effect of Ignorance Type also failed to reach significance (B = -0.14, SE = 0.20, t = -0.70, p = .48), but there was a significant effect of Ignorance Scope (B = 0.48, SE = 0.21, t = 2.27, p = .02, CI [0.06, 0.90]): personal (vs. universal) ignorance was associated with higher attributions of verifiability to religious beliefs. All of these patterns remained consistent when we restricted our sample to only those with qualifying beliefs in both domains (note that in this second set of models, we analyzed participants' religious beliefs, but only for those who generated qualifying beliefs in both domains). Within the domain of science, Ignorance Scope did not predict ratings for epistemic regulation (B = -0.19, SE = 0.12, t = -1.53, p = .12). All of these patterns remained the same when restricting the sample only to participants with qualifying beliefs in both domains (note that in this second set of models, we analyzed participants' scientific beliefs, but only for those who generated qualifying beliefs in both domains). Finally, ratings for the faith-based item in the domain of science were the only dimension to show significant effects of Ignorance Type: mystery (vs. unknown) was associated with stronger agreement that a belief was held on faith (B = -0.57, SE = 0.22, t = -2.57, p = .01, CI [-1.00, -0.13]). Ignorance Scope also significantly predicted "faith-based" ratings, such that universal (vs.
personal) ignorance was associated with stronger agreement that a belief was held on faith (B = -0.59, SE = 0.25, t = -2.33, p = .02, CI [-1.09, 0.09]). While the effect of Ignorance Scope remained significant for participants with qualifying beliefs in both domains, the effect of Ignorance Type was not significant within this sample (B = -0.28, SE = 0.31, t = -0.92). Discussion The findings from Experiment 1 support the prediction that unknown answers to scientific and religious questions are associated with different forms of ignorance. Scientific questions about how and why something is the case were most often answered with "it's unknown to me," while religious questions about how and why something is the case were most often answered with "it's a mystery." These responses reveal variation along two dimensions of ignorance: unknown versus mystery, and personal versus universal scope. Experiment 1 also corroborated prior research (Davoodi & Lombrozo, 2020;Gill & Lombrozo, 2019;Liquin et al., 2020) in finding reliable differences across domains in attitudes toward inquiry and evidence. Specifically, we found that compared to scientific beliefs, religious beliefs were judged less appropriate targets for inquiry, were deemed less verifiable, and were more likely to be held on faith. Interestingly, this held even though participants were ignorant of the "how and why" regarding their beliefs in both domains. We also predicted that even within a domain, selecting unknown versus mystery as the appropriate form of ignorance would be associated with a more "scientific" profile. This prediction was only borne out when it came to holding a belief on faith within the domain of science: participants who selected "it's a mystery [to me]" (vs. "it's unknown [to me]") in response to a scientific question were more likely to indicate that their belief was held on faith, although this difference did not persist among participants with qualifying scientific and religious beliefs. We did find more reliable differences between personal versus universal ignorance: within both domains, universal ignorance was associated with lower ratings for verifiability, and within science, universal ignorance was also associated with higher ratings for inquiry, epistemic limits, and being held on faith. Experiment 2 Experiment 1 successfully demonstrated that science and religion are associated with different forms of ignorance: unknown in science, and mystery in religion. Experiment 1 was also successful in corroborating prior work on differences across domains, with scientific claims more strongly associated with evidence and inquiry than religious claims. Experiment 2 went beyond these domain-based associations by introducing an experimental manipulation of ignorance: participants were presented with a hypothetical expert who was posed a question about science or religion for which the participant did not know the "how and why," and to which the expert responded with either "It's unknown" or "It's a mystery." Although this is a very subtle manipulation, we expected participants to draw different inferences about the expert as a function of their ignorance type, and for ignorance to be treated differently across domains. We elaborate on both predictions below. Our first hypothesis concerned the epistemic commitments reflected by unknown versus mystery. Based on the material reviewed in the introduction and the findings from Experiment 1, we predicted that expressing ignorance in the form of unknown (vs. 
mystery) would be more consistent with a scientific orientation, and thus with epistemic goals (e.g., wanting to know more), epistemic achievements (e.g., being knowledgeable), and epistemic norms (e.g., valuing truth). To test this, participants were asked to indicate the extent to which the expert who reported "It's unknown" or "It's a mystery" is curious, knowledgeable, and values truth. Our second hypothesis concerned the implications of ignorance in each domain. Does ignorance in response to a scientific question (e.g., stating that it is unknown or a mystery how and why the universe was caused by the big bang) threaten the truth of that belief (e.g., that the universe was so created) or science as a whole? Does ignorance in response to a religious question (e.g., stating that it is unknown or a mystery how and why God created the universe) threaten the truth of that belief or religion as a whole? We expected that in the domain of science, mystery would be more threatening than unknown, because it is less consistent with the epistemic norms that govern science. For the domain of religion, by contrast, we expected this effect to be attenuated or reversed. The procedures, predictions, and analyses for Experiment 2 were preregistered and are available in OSF at (https://osf.io/6daec/). A copy of the survey and data are available at (https://osf.io/pb3xk/). Participants Participants were 1014 adults recruited on Prolific (559 self-identified as a woman, 443 as a man, and 12 as nonbinary, M Age = 33 years, SD Age = 12 years). Of these, 39% identified as Christian and 24% as Atheist, with the remaining 37% including "other" (14%), Spiritual (11%), and other religious affiliations--Buddhist, Jewish, Hindu, Muslim--as well as combinations of two or more affiliations (12%). Participation in all studies was restricted to Prolific workers in the United States who had not participated in any related pilot studies, and who had an approval rating of at least 95% based on at least 100 prior tasks. An additional 110 participants were excluded from analyses because they did not meet criteria for belief generation in either domain (i.e., religious or scientific beliefs; N = 92), as detailed below, and/or because they did not pass attention checks in blocks where they did meet criteria for belief generation. Procedure Participants completed all procedures online using Qualtrics Survey Software (see OSF page for the survey). Each participant first consented to participate and pledged to pay attention and answer questions carefully. The task was introduced as follows: "In this survey, we will ask you several questions. For most of them, there are no right or wrong answers. We ask that you try your best to answer the questions based on your beliefs and, where relevant, the information provided." Phase 1--Belief identification For each participant, we first identified a proposition that met the requirements outlined in Experiment 1 (namely, one that the participant endorses, but for which they do not know much about the how and why). To do so, participants were presented with claims from the domain of religion or science, with each domain presented in a separate block in counterbalanced order. For each domain, participants worked through a minimum of 1 and a maximum of 5 claims in a fixed order until we identified one for which (1) the participant indicated that they believed that claim, and (2) when asked whether they "know how and why this happens/d?" 
they indicated "no" (see Table 2 for a complete list of religious and scientific claims). As soon as a belief that met the relevant requirements was identified, participants moved on to the next phase. If a belief that met both requirements was not identified after the fifth claim, we did not analyze data from that participant for that domain. Do you believe that physical exercise rejuvenates cells in the brain? 5 Do you believe that God created the universe? Do you believe that humans evolved from earlier forms of primates? Note that each participant could see 1-5 of these questions, depending on which met the requirements of belief without knowledge of "how and why." Phase 2--Expert introduction and ignorance affirmation After identifying a belief that met our requirements, we introduced participants to an expert who was said to hold the same belief that met our requirements for the participant. For example, if a participant indicated that they believed God answers prayers but did not know much about the "how and why," they were told: "'A' is an expert when it comes to religious questions like those you were asked about. 'A' also believes that God answers prayer." Participants were then told "When asked how and why this happens, here is what 'A' said," with the response being either "It's a mystery" or "It's unknown." This manipulation was betweenparticipants, with random assignment. Phase 3--Ratings Next, participants indicated their agreement with three sets of four statements: belief-threat items, domain-threat items, and expert-epistemic items ( Table 3). The set of expert-epistemic items included one attention check item intermingled among the other three questions. Each set of items was presented in a single block, with both block order and the order of items within each block randomized. Items were rated on a scale from -3 (strongly disagree) to 3 (strongly agree), and with 0 representing "neither agree not disagree." These three phases were subsequently repeated for the second domain. Data from a given domain were excluded if the participant did not pass the attention check ("for this item, please select the number between 3 and 1") corresponding to that domain. At the end of the experiment, participants completed a brief demographics survey where they were asked about gender, age, income, education, religiosity, and religious affiliation. Results The main goals of our analyses were to ask whether unknown versus mystery reflect different epistemic commitments, and whether they are differentially threatening across domains. Table 3 Measures used to assess judgments about belief, domain, and epistemic commitments in Experiment 2 Belief-threat items 1. Threat "Saying 'it's a mystery/it's unknown' when asked why and how some things happen, is threatening to scientific/religious belief." 2. Importance "Saying 'it's a mystery/it's unknown' when asked why and how some things happen, makes scientific/religious belief seem less important." 3. Value "Saying 'it's a mystery/it's unknown' when asked why and how some things happen, makes scientific/religious belief seem less valuable." 4. Questioning "Saying 'it's a mystery/it's unknown' when asked why and how some things happen, makes one question scientific/religious belief." Domain-threat items 5. Threat "Saying 'it's a mystery/it's unknown' when asked why and how some things happen, is threatening to science/religion." 6. 
Importance "Saying 'it's a mystery/it's unknown' when asked why and how some things happen, makes science/religion seem less important." 7. Value "Saying 'it's a mystery/it's unknown' when asked why and how some things happen, makes science/religion seem less valuable." 8. Value "Saying 'it's a mystery/it's unknown' when asked why and how some things happen, makes one question science/religion." Expert-epistemic items 9. Truth "A/B values truth above all." 10. Knowledge "A/B is a knowledgeable person." 11. Curious "A/B is a curious person and always wants to know more." Below, we present analyses that reflect each of our two main predictions. As in Experiment 1, we first included qualifying beliefs from each domain in our analyses (scientific beliefs: N = 894, religious beliefs: N = 713) and then repeated the analysis with participants who had qualifying beliefs from both domains (N = 588). Epistemic commitments Our first prediction was that an expert's ignorance in the form of unknown (vs. mystery) would be seen as more reflective of epistemic commitments. To test this, we first created a composite Epistemic Commitments score based on the average ratings for "curious," "knowledgeable," and "valuing truth" (items 9-11 in Table 3). Based on pilot testing, we expected these items to form a single reliable construct, and reliability was indeed high (α = 0.87). We conducted a mixed-effects linear regression model (using the lme function) on this composite score with Domain (Religion, Science), Ignorance Type (Unknown, Mystery), and their interaction as predictors, defining a random intercept accounting for participant-level variability in ratings across Domains. This model revealed a main effect of Ignorance Type (B = 0.36, SE = 0.10, t = 3.66, p < .001, CI [0.16, 0.54]): when the expert indicated that the answer is unknown, participants attributed higher levels of curiosity, knowledgeability, and valuation of truth than when the expert indicated that the answer is a mystery. Neither the main effect of Domain nor the interaction between Domain and Ignorance Type was significant (B = -0.14, Fig. 3. Experiment 2-The effect of Ignorance Type on the attribution of epistemic commitments by Domain, both when including all qualifying beliefs (left panel) and only participants with qualifying beliefs in both domains (right panel). Note: "0" represents "neither agree nor disagree." Negative numbers indicate lower attribution of commitments (curiosity, knowledgeability, and valuation of truth) and positive numbers indicate higher attribution of commitments. Points represent jittered data points. Boxes represent average scores--black line in the middle--and +/-1 SEM. SE = 0.07, t = -1.89, p = .06 and B = 0.10, SE = 0.10, t = 0.96, p = .34, respectively; see Fig. 3). The pattern remained the same when we restricted analysis to participants with qualifying beliefs in both domains. Threats to belief and domain Our second prediction was that mystery (vs. unknown) would be perceived as more threatening to scientific beliefs and science. We also expected that this difference between mystery and unknown would be attenuated or reversed for religious beliefs and religion. To test these predictions, we first created a "threat" composite score based on ratings of value, importance, threat, and questioning for both beliefs and the domain as a whole (items 1-8 in Table 3). Based on pilot testing, we expected these items to form a single reliable construct, and reliability was indeed very high (α = 0.95). 
We then ran a mixed-effects linear regression model on the Threat composite score with Ignorance Type, Domain, and their interaction as predictors. The model included a by-participant random intercept and revealed an interaction between Ignorance Type and Domain (Fig. 4). We followed up this interaction by analyzing effects of Ignorance Type within each domain. For science, we found the predicted main effect of Ignorance Type such that mystery posed more threat as compared to unknown (B = -0.35 with Mystery as the reference level, SE = 0.12, t = 2.90, p = .003, CI [-0.58, -0.12], and B = -0.38, SE = 0.13, t = -2.83, p = .005, respectively, for participants with qualifying scientific beliefs and those with qualifying scientific and religious beliefs). For religion, we found no significant effect of Ignorance Type (B = -0.06, SE = 0.11, t = -0.53, p = .60, and B = -0.06, SE = 0.12, t = -0.48, p = .63, respectively, for participants with qualifying religious beliefs and those with qualifying religious and scientific beliefs). Discussion Experiment 2 found that across the domains of both science and religion, an expert's ignorance was taken to indicate subtly different epistemic commitments depending on whether that ignorance was expressed as unknown or mystery. Specifically, when an expert said "It's unknown," participants inferred greater curiosity, knowledgeability, and valuation of truth than when the expert said "It's a mystery." This is consistent with the findings from Experiment 1 in that unknown (vs. mystery) was more strongly associated with science, and science was more strongly associated with epistemic goals and values (namely evidence and inquiry). However, it goes beyond Experiment 1 in demonstrating an experimental effect of ignorance type across domains, and in linking ignorance type to epistemic commitments more directly. Experiment 2 also found differences in the role of ignorance across domains. Overall, participants tended to judge that a statement of ignorance was not threatening to the corresponding belief or to its domain. However, they were least likely to dismiss ignorance as a threat when that ignorance came in the form of mystery in the domain of science. In the domain of science, "it's a mystery" was judged more threatening than "it's unknown." In the domain of religion, no such difference was observed. These findings are consistent with the idea that because science has epistemic aims, forms of ignorance that are less aligned with epistemic aims are more threatening--they suggest a form of incongruity or failure. In Experiment 3, we further test effects of different types of ignorance across domains. Experiment 3 Experiment 2 found that affirmations of ignorance were judged more threatening to science and scientific belief than to religion and religious belief, especially when ignorance took the form of mystery versus unknown. In Experiment 3, we tested an implication of this result: that ignorance about "how and why" some scientific proposition is the case should reduce confidence in the truth of that scientific proposition, with weaker (or absent) effects of ignorance on confidence in religious propositions. So, for example, learning that it is unknown (or a mystery) how and why the moon causes the tides should decrease confidence in the claim that the moon causes the tides, whereas learning that it is unknown (or a mystery) how and why God answers prayers should decrease confidence that God answers prayers to a smaller extent (or not at all).
This result would extend the effect of ignorance on perceived threat to a belief or domain found in Experiment 2 to the much more personal currency of confidence in one's own beliefs. Following Experiment 2, we also predicted that for science, ignorance expressed as mystery (vs. unknown) would reduce confidence more strongly. The procedures, predictions, and analyses for Experiment 3 were preregistered and are available in OSF at (https://osf.io/fh7ex/). A copy of the survey and data are available at (https://osf.io/jgykh/). Participants Participants were 1004 adults recruited on Prolific (540 self-identified as a woman, 440 as a man, 22 as nonbinary, and 2 as "other"; M Age = 37 years, SD Age = 14 years). Of these, 42% identified as Christian and 25% as Atheist, with the remaining 33% including "other" (13%), "Spiritual" (10%), and other religious affiliations--Buddhist, Jewish, Hindu, Muslim-as well as combinations of two or more affiliations (10%). Participation in all studies was restricted to Prolific workers in the United States who had not participated in any related studies, and who had an approval rating of at least 95% based on at least 100 prior tasks. An additional 251 participants were excluded from analyses because they did not meet criteria for belief generation in either domain (i.e., religious or scientific beliefs; N = 92), as detailed below, and/or because they did not pass attention checks in blocks where they did meet criteria for belief generation. Procedure Participants completed all procedures online using Qualtrics Survey Software (see OSF page for the survey). Each participant first consented to participate and pledged to pay attention and answer questions carefully. The introduction to the task was the same as that of Experiment 1, encouraging participants to think of a belief that they hold, but for which they do not know "the how and why." Also as in Experiment 1, participants then completed a brief training on how to report the "content" of beliefs, and if they could generate a belief that met our requirements, they were asked to type its content into a text box. If they indicated that they could not think of such a belief in a given domain, they moved on to the same question for the other domain (order counterbalanced). Once participants produced the content of a belief that met the requirements in either domain, the content of the belief was reproduced on a new page and they were asked "how confident are you that [content of belief]?". Participants answered this "Confidence-pre" measure on a scale of 1-7, with 7 representing "completely confident" and 1 representing "not so confident." On the next page, they were asked to report the likelihood that an expert knows the "how and why" pertaining to the belief (Expectation of Knowledge: "even though you do not know how and why [belief], how likely do you think it is that experts do?"). This question was answered on a 1-7 scale, with 1 representing "not likely at all" and 7 representing "very likely." We included this question to rule out a plausible interpretation of our predicted pattern of results: that participants expect the answers to scientific questions to be known, and the answers to religious questions to be unknown, such that a profession of ignorance is more challenging to the former only because it violates expectations. Measuring Expectation of Knowledge allowed us to account for such expectations in our statistical analyses. 
After participants rated their level of confidence and Expectation of Knowledge with respect to their own belief, they moved on to a phase like that of Experiment 2. They were first introduced to an expert ("imagine someone who is an expert when it comes to scientific [religious] questions and whom you trust. This expert also believes that [content of participant generated belief]"). They were then asked to imagine that when asked how and why [content of belief] happens, the expert responds with either "it's a mystery" or "it's unknown" (between-subjects). Participants were then encouraged to take a moment to think about the implications of the expert's response and to answer the following questions based on this response. Immediately after, they were asked to rate their level of confidence in the belief again (Confidence-post). Before moving on to the next block, participants also responded to an attention check ("for this item, please select the number on the scale that is greater than two but less than four"). If participants failed the attention check from a given block corresponding to belief from one of the two domains, data from that domain were excluded from analysis. At the end of the experiment, participants completed a brief demographics survey where they were asked about gender, age, income, education, religiosity, and religious affiliation. Results As in Experiment 1, we first verified that participants who reported scientific or religious beliefs that met our requirements in fact generated propositions from the corresponding domains. Two independent coders coded all beliefs as "domain appropriate" or not, and agreement was substantial for religious beliefs and moderate for scientific beliefs (religious beliefs: 95% agreement; kappa = 0.66, SE = 0.06, p < .001, CI [0.54, 0.77]; scientific beliefs: 98% agreement; kappa = 0.43, SE = 0.11, p = .007, CI [0.17, 0.69]), with disagreements resolved by a third coder. This resulted in excluding 59 beliefs as domain-inappropriate for religion (e.g., "humans create babies"; vs. 559 as domain-appropriate and included in analyses of all qualifying beliefs), and 20 beliefs as domain-inappropriate for science (e.g., "horoscopes are true on a daily basis"; vs. 867 as domain-appropriate and included in analyses of all qualifying beliefs). However, including these beliefs in the analyses that follow does not change the reported patterns. A complete list of all domain-appropriate and domain-inappropriate beliefs generated by participants is included on OSF (https://osf.io/cys7f/). As in the two prior studies, we first conducted analyses on all qualifying beliefs and then restricted the same analyses to participants who generated qualifying beliefs in both domains (N = 433). Change in confidence Our main prediction was that ignorance would result in a larger decrease in confidence for scientific beliefs than for religious beliefs, with a moderating effect of ignorance type (such that mystery was more threatening to science than unknown). To test this, we fit a mixed-effects linear regression model on Confidence Change (post - pre) with Domain, Ignorance Type, and their interaction term as predictors. We defined a by-participant random intercept to account for within-participant variability across the two domains.
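A minimal R sketch of this model follows; the data frame and column names are hypothetical, and the covariate-adjusted follow-up reported below is included as a second model:

```r
library(nlme)

# One row per qualifying belief: pre/post confidence, domain, and the
# between-subjects ignorance manipulation (hypothetical column names).
d$conf_change <- d$confidence_post - d$confidence_pre

m <- lme(conf_change ~ domain * ignorance_type,
         random = ~ 1 | participant,
         data = d, na.action = na.omit)
summary(m)

# Follow-up described in the next paragraphs: the effect of Domain
# while controlling for Expectation of Knowledge.
m_ctrl <- lme(conf_change ~ domain + expectation_of_knowledge,
              random = ~ 1 | participant,
              data = d, na.action = na.omit)
summary(m_ctrl)
```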
There was a main effect of Domain (B = -0.53, SE = 0.10, t = -5.51, p < .001, CI [-0.72, -0.34]), but there was no effect of Ignorance Type (B = 0.02, SE = 0.12, t = 0.22, p = .83), and no interaction between Domain and Ignorance Type (B = 0.10, SE = 0.14, t = 0.76, p = .45). The same pattern was found for participants who generated beliefs that met the requirement in both domains (Domain: B = -0.65, SE = 0.11, t = -5.98, p < .001; Ignorance Type: B = -0.06, SE = 0.13, t = -0.46, p = .65; Domain X Ignorance Type: B = 0.05, SE = 0.16, t = 0.35, p = .73). As shown in Fig. 5, confidence in scientific beliefs dropped more than confidence in religious beliefs, but this effect was not moderated by the type of ignorance professed by the expert. To further probe the effect of Domain, we asked whether the difference in confidence change between scientific and religious beliefs could be an artifact of participants' expectation about whether the answer to a given question is known (Expectation of Knowledge). We conducted a linear mixed-effects model on Confidence Change with Domain as the predictor, while controlling for Expectation of Knowledge. For all qualifying beliefs and for participants with qualifying beliefs in both domains, Expectation of Knowledge significantly predicted Confidence Change (B = -0.07, SE = 0.02, t = -3.11, p = .002, CI [-0.12, -0.03] and B = -0.07, SE = 0.03, t = -2.45, p = .01, CI [-0.12, -0.01], respectively), but the main effect of Domain remained significant (B = -0.25, SE = 0.10, t = -2.52, p = .01, CI [-0.45, 0.06] and B = -0.42, SE = 0.12, t = -3.59, p < .001, CI [-0.64, -0.19], respectively). So, while answers to scientific questions were in fact judged more likely to be known than answers to religious questions, this difference does not fully account for the fact that confidence in scientific beliefs decreased more sharply in response to expert ignorance than did confidence in religious beliefs. Discussion Experiment 3 found a striking shift in participants' own scientific beliefs in the face of ignorance. On average, participants reported that they would be less confident in their own reported scientific beliefs (e.g., "exercise keeps you healthy") compared to their religious beliefs (e.g., "Christ will return to earth in the future") if they heard a trusted expert affirm that the "how and why" of the belief is unknown or a mystery. Importantly, although participants judged scientific questions to be more "knowable" than religious questions, this did not explain the more sizeable decrease in confidence about scientific beliefs in the face of ignorance. Contrary to our predictions, an expert affirming "mystery" did not decrease confidence in scientific beliefs more than an expert declaring something "unknown." Thus, Experiments 2 and 3 were consistent in finding weaker effects of ignorance in challenging religious belief than scientific belief, but unlike patterns in Experiment 2, Experiment 3 did not show a moderating role for ignorance type in this effect of domain. General discussion In both science and religion, (perceived) ignorance and (perceived) knowledge are arguably two sides of the same coin. Just as norms for knowledge might differ across domains, so too, might the role of ignorance: in science, ignorance about how and why something is the case often propels inquiry, but in religion, such ignorance may more readily be accepted as mystery. 
Across three experiments, we asked whether ignorance in the case of science is perceived differently from ignorance in the case of religion, and whether this difference reflects distinct profiles with respect to epistemic commitments and goals. In Experiment 1, we found that scientific ignorance (i.e., not knowing how and why some scientific phenomenon is the case) is most often expressed as a personal unknown ("It's unknown to me"), whereas religious ignorance (i.e., not knowing how and why some religious phenomenon is the case) is more commonly expressed as a universal mystery ("It's a mystery"). Corroborating previous work on the epistemic qualities of science versus religion, Experiment 1 also documented stronger associations between scientific (vs. religious) questions and the perceived viability and value of inquiry (Liquin et al., 2020). We also predicted that even within each domain, expressions of ignorance in the form of "unknown" would be associated with a more epistemic profile than expressions of mystery. For instance, we expected that scientific "unknowns" would be perceived as more viable and valuable targets of inquiry than scientific "mysteries." Despite the striking differences across domains in both expressions of ignorance and in the perceived value of inquiry, these predicted patterns of differentiation within domain were not observed. In Experiment 2, we found that experts who reported that the answer to a scientific or religious question is unknown were perceived to be more knowledgeable, more curious, and more concerned with truth than were experts who reported that the answer to a scientific or religious question is a mystery. Thus, across domains, we did observe an association between expression of ignorance in the form of unknown (vs. mystery) and stronger expectations of adherence to epistemic values and epistemic achievements. Experiment 2 also found that ignorance is perceived to pose a greater challenge to science and scientific belief than to religion and religious belief. Similarly, Experiment 3 more strikingly found that participants' confidence in their own scientific beliefs dropped more substantially compared to confidence in their religious beliefs after they were asked to imagine an expert who confessed ignorance. Thus, across both Experiments 2 and 3, we present evidence that ignorance poses a greater threat to scientific belief than to religious belief. While the predicted patterns of variation across domains and forms of ignorance were consistent across studies, the interactions between domain and ignorance type were not. Notably, Experiment 2 found that "mysteries," as compared to "unknowns," were perceived as more threatening to science. However, this difference was not found in Experiment 3, which investigated a similar effect by measuring the decline in participants' confidence in their own beliefs. We speculate that if this difference across experiments is reliable, it may have arisen from the different materials used. In Experiment 2, the set of beliefs that participants could select was restricted to five predesignated belief statements, all of which are prominent and common in public discourse (e.g., that the moon causes the tides; that CO 2 emissions cause climate change; see Table 2). 
By contrast, in Experiment 3, participants generated their own beliefs, which tended to be more specialized and idiosyncratic (e.g., "oxygen makes cancer cells spread," "not getting enough sleep can lead to obesity," "muscles have a memory that is adaptive," and "micro-wounds end up healing the cells of the body"). It may be that learning that something is a mystery is especially threatening to scientific claims that are known to be elements of shared public discourse, versus those that are more personal. With the exception of this inconsistency across Experiments 2 and 3, our findings are consistent across experiments, and also consistent with prior work in suggesting that within our largely Christian and U.S. adults sample, science and religion are indeed associated with different norms for knowledge and belief (e.g., Davoodi & Lombrozo, 2021), with correspondingly different attitudes toward inquiry (Gill & Lombrozo, 2019;Liquin et al., 2020). However, we go beyond this prior work in three important ways. First, we show that science and religion are associated with different forms of ignorance: personal unknowns versus universal mysteries (Experiment 1). Second, we show that these forms of ignorance are differentially associated with epistemic goals and norms: expressing ignorance in the form of "unknown" (vs. "mystery") more strongly signals epistemic values and achievements (Experiment 2). Third, we show that universal ignorance (i.e., experts not knowing the answers) is perceived to be a greater threat to science and scientific belief than to religion and religious belief (Experiments 2 and 3). A potential limitation of our work stems from the focus on "how and why" questions. Are the findings documented here likely to extend to other forms of ignorance and inquiry, such as ignorance concerning whether something is the case (e.g., Does the moon cause the tides? Does God answer prayers?). Gill and Lombrozo (2019) found that differences across science and religion in response to evidence-seeking extended to the truth of claims themselves (e.g., whether the shroud of Turin is the burial cloth of Jesus), even when they did not concern the "how and why." That said, we do expect boundary conditions on our effects. For instance, ignorance related to the performance of relevant duties is likely to be a call to action in either domain: How should a ritual be performed? How should a cell culture be maintained? Our expectation is that ignorance will prompt inquiry in either domain when it impedes the realization of the functions of belief in that domain, such that differences in the roles of ignorance stem from differences in the functional roles of the corresponding beliefs (see Davoodi & Lombrozo, 2021, for relevant discussion). Second, we expect that the threat of professed ignorance to belief (in Experiments 2 and 3) will only emerge for beliefs about which participants have some uncertainty. If a claim is firmly rejected (i.e., assigned zero confidence), ignorance will (trivially) have no effect. At the other extreme, if a claim is already accepted with extremely high confidence (e.g., that the dinosaurs became extinct, or that Jesus died for our sins), it seems unlikely that an expression of ignorance concerning "how and why" would have much, if any, effect, since high confidence was already achieved in the absence of such knowledge. 
In such cases, belief is likely to be based on sources of justification that do not depend on "how and why" (e.g., fossil evidence of dinosaur extinction and deference to religious authorities). A second (and in our view, more substantial) limitation of this work is the exclusive focus on a largely Christian sample within the United States. It thus remains unclear whether the features associated with the domains of science and religion that we observe extend beyond this sample. In particular, it is plausible that adherents of other religious traditions differ strongly in their attitudes toward inquiry and mystery. As one example, the world-renowned Nobel laureate in Physics and devout Muslim Abdus Salam viewed his religious faith as an inspiration for his notable scientific findings. He believed that "the Holy Quran enjoins us to reflect on the varieties of Allah's created laws of nature," and he saw his scientific career as doing just that (Lewis, 1980). Given this unique pattern of interplay between religious faith and scientific findings, one would expect religious "mysteries" to invite inquiry and a search for evidence. This would imply a pattern similar to our participants' judgments of the implications of scientific unknowns for inquiry. Other religious traditions, such as Judaism, also promote inquiry concerning a variety of religious matters. In the words of Ismar Schorsch (2000), rabbi and professor of Jewish history, "the centrality of revelation never put a damper on the human right to question the divine." Investigating differences across cultures (and across different religious communities within a given cultural context) is, therefore, an important direction for future research. Even within Christianity itself, there is little doubt that interactions between cultural traditions, theological teachings, and personal factors will also lead to variability in attitudes toward inquiry about religious mysteries. For example, in contrast to the sentiment expressed by Monsignor Charles Pope (2013) to accept and respect religious mysteries rather than scrutinize them, Christian priest John Van Sloten (2021) preaches about specific scientific findings and mechanisms (e.g., how the human knee or the human gut works) as a way to uncover the mysteries of creation and to express truths about God's mind. Specific cultural factors that might lead to this variability include the relationship between politics and religion. For example, in theocratic sociopolitical structures, religious mysteries may be more accepted and revered, whereas a democratic structure may encourage personal choice in seeking explanations for religious mysteries. Additionally, personal factors can impact one's attitude toward religious mysteries. For example, science curiosity (see Landrum, Hilgard, Akin, Li, & Kahan, 2016) or cognitive style (see Pennycook, Cheyne, Seli, Koehler, & Fugelsang, 2012; Shenhav, Rand, & Greene, 2012) may contribute to variability in inquisitiveness toward religious mysteries and explanation-seeking behaviors. The interplay between relevant cultural and personal factors should be studied within and between particular religious traditions to better understand the extent to which our findings are generalizable, and more generally, whether and when a "scientific" attitude can be adopted towards religious content, or a "religious" attitude towards science. A final limitation of our work worth highlighting relates to the limited range of epistemic attitudes we investigated.
Specifically, while we show that objective, verifiable, and empirical ways of knowing are rated as less valuable and appropriate for religious beliefs compared to scientific beliefs, it is possible that within the domain of religion, some regard more subjective or personal experiences (such as miracles or mystical experience) as legitimate sources of evidence or justification for belief. In fact, in prior work, religious individuals have been documented to rely on religious sources or subjective experiences, such as what one feels in one's heart, as justifications for their religious beliefs (Metz et al., 2018). Thus, it is possible that with a broader range of measures, we would be able to identify different forms of evidence or inquiry associated with each domain. Relatedly, it might be important to ask whether there are further kinds of ignorance worth distinguishing. For example, might people approach scientific unknowns that are thought to be unknowable differently from those that are at present merely unknown? Are scientific matters that are thought to be beyond human comprehension (e.g., Chomsky, 2009) distinguished from those that we could comprehend but never know, such as the number of grains of sand in the world (e.g., Kominsky et al., 2016)? These are important questions to raise within the broader project of characterizing the varieties of ignorance that shape cognition. Our findings also raise new questions for science communication and science education. As noted in the introduction, recognizing ignorance and uncertainty is an inevitable and invaluable part of the scientific process (Firestein, 2012). The fact that scientific ignorance is sometimes regarded as threatening to science or scientific belief is, therefore, a potential concern for public understanding and acceptance of science. Prior work--for instance, in the context of risk communication and climate change--has investigated public responses to scientific uncertainty (Gustafson & Rice, 2020), including different ways in which uncertainty can be conveyed. It is striking that a subtle change in the linguistic expression of ignorance--from "It's unknown" to "It's a mystery"--has reliable (if small) effects on belief. Further research on expressions of ignorance and uncertainty is thus likely to play an important role in scientists' and policy-makers' ability to craft effective messages about scientific content. Reducing our ignorance about ignorance may seem like a roundabout way to get at the nature of epistemic commitments and intuitive theories of knowledge, but it is a powerful one: beliefs about what we can and should know shape decisions at all scales, from the questions we ask of others and ourselves to the funding policies we are likely to support. Focusing on the domains of science and religion helps reveal the systematic links between ignorance, inquiry, and belief, and the viability and value of investigating the psychology of what we do not know.
v3-fos-license
2017-05-17T19:53:57.233Z
2017-05-15T00:00:00.000
6426839
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2017.00233/pdf", "pdf_hash": "bd7f248bf3286858200277400b204a72f0dc1a40", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42021", "s2fieldsofstudy": [ "Medicine", "Biology", "Chemistry" ], "sha1": "bd7f248bf3286858200277400b204a72f0dc1a40", "year": 2017 }
pes2o/s2orc
High Fat Diet-Induced Hepatic 18-Carbon Fatty Acids Accumulation Up-Regulates CYP2A5/CYP2A6 via NF-E2-Related Factor 2 To investigate the role of hepatic 18-carbon fatty acid (FA) accumulation in regulating CYP2A5/2A6 and the significance of Nrf2 in this process during hepatocyte steatosis, Nrf2-null and wild-type mice fed a high-fat diet (HFD), and Nrf2-silenced or Nrf2-overexpressed HepG2 cells administered 18-carbon FA, were used. HE and Oil Red O staining were used for hepatic pathological examination of the mice. The mRNA and protein expressions were measured with real-time PCR and Western blot. The results showed that hepatic CYP2A5 and Nrf2 expression levels were increased in HFD-fed mice, accompanied by hepatic 18-carbon FA accumulation. Nrf2 expression was increased dose-dependently in cells administered increasing concentrations of stearic acid, oleic acid, and alpha-linolenic acid. Nrf2 expression was decreased dose-dependently in cells treated with increasing concentrations of linoleic acid, but the Nrf2 expression level was still higher than in the control cells. CYP2A6 expression was increased dose-dependently in cells treated with increasing concentrations of 18-carbon FA. The HFD-induced up-regulation of hepatic CYP2A5 in vivo and the 18-carbon FA treatment-induced up-regulation of CYP2A6 in HepG2 cells were, respectively, inhibited by Nrf2 deficiency and Nrf2 silencing. However, the basal expression of mouse hepatic CYP2A5 was not altered by Nrf2 deletion. Nrf2 over expression enhanced the up-regulation of CYP2A6 induced by 18-carbon FA. As the classical target gene of Nrf2, GSTA1 mRNA relative expression was increased in Nrf2 over expressed cells and was decreased in Nrf2 silenced cells. In the presence or absence of 18-carbon FA treatment, the change in CYP2A6 expression level was similar to that of GSTA1 in Nrf2 silenced or over expressed HepG2 cells. It was concluded that HFD-induced hepatic 18-carbon FA accumulation contributes to the up-regulation of CYP2A5/2A6 via activating Nrf2. However, CYP2A5/2A6 expression does not depend only on Nrf2. INTRODUCTION Non-alcoholic fatty liver disease (NAFLD) is an important chronic liver disease and metabolic syndrome, correlated with diabetes, obesity, and cardiovascular diseases (Angulo, 2002; Marchesini et al., 2003; Yoneda et al., 2012). In recent years, the morbidity of NAFLD has grown not only in Eastern but also in Western countries, and high-fat intake has become an important health issue (Kojima et al., 2003; Bhala et al., 2013). A "two-hit" theory is widely advocated to explain the progression of NAFLD, although the exact mechanism of NAFLD is still unknown. Lipid deposition in hepatocytes is considered to be the first hit; it is caused by insulin resistance and disordered lipid metabolism and results in non-alcoholic simple steatosis (SS). Oxidative stress and the imbalance of proinflammatory cytokines resulting from lipid peroxidation and increased reactive oxygen species (ROS) are likely to be the second hit. The second hit contributes to the progressive development of non-alcoholic steatohepatitis (NASH), liver fibrosis, cirrhosis, and even hepatocellular carcinoma from SS (Bugianesi et al., 2002). Excessive fatty acid (FA) deposition and its peroxidation in hepatocytes are considered to be the major factors causing cytotoxicity and exacerbated hepatopathology.
According to the reported data of our lab and others, 18-carbon FA is the most abundant component of the hepatic FA composition and accumulates markedly in the HFD-fed liver, especially in Nrf2-deleted animals (Zhukova et al., 2014). CYP2A5 and Nrf2 expression were both induced by 18-carbon FA treatment in mouse primary hepatocytes (Cui et al., 2016). Nevertheless, the role of 18-carbon FA in inducing CYP2A6 in a human hepatoma cell line has never been reported. The metabolism and detoxification of xenobiotics, including drugs and environmental toxicants, mainly take place in the liver. A variety of enzymes with overlapping substrate specificity are expressed in the liver and are divided into Phase I (oxidizing) and Phase II (conjugating) drug metabolizing enzymes (DMEs). Approximately 90% of Phase I metabolism is carried out by enzymes belonging to the cytochrome P450 (CYP) superfamily. Mouse CYP2A5 and its human ortholog CYP2A6 belong to the CYP450 family. There is increasing evidence that hepatic CYP2A5 expression and activity are enhanced in the liver exposed to various chemical hepatotoxins and pathophysiological conditions [chemicals (Jounaidi et al., 1994), AFB1, microorganisms (Sipowicz et al., 1997), carcinoma, parasites (Montero et al., 1999)], while the levels of most CYP enzymes are either unchanged or decreased (Kojo et al., 1998). CYP2A5/2A6 is a coumarin-7-hydroxylase responsible for the metabolism of nicotine, drugs, and procarcinogens such as aflatoxin B1 (AFB1) and nitrosamines (Camus-Randon et al., 1993; Kirby et al., 1994a,b; Felicia et al., 2000). The hepatotoxic compounds that up-regulate CYP2A5/2A6 are structurally unrelated and are not considered to be CYP inducers.
Abbreviations: ALA, α-linolenic acid; CD, control diet; FA, fatty acids; GGT, gamma-glutamyltransferase; HFD, high fat diet; LA, linoleic acid; Nrf2, NF-E2-related factor 2; OA, oleic acid; SA, stearic acid; SS, simple steatosis; TFA, total fatty acids.
The mechanism by which CYP2A5/2A6 is increased in liver damage of different pathogeneses is still unclear. It has been proposed that the common mechanism in the CYP2A5-inducing conditions is a direct or indirect systemic effect elicited by toxicity or tissue damage, rather than the chemical itself (Camus-Randon et al., 1996; Salonpää et al., 1997). Lipid accumulation (hepatocyte steatosis) is generally the early stage of liver damage induced by various structurally unrelated chemicals. Studies focusing on the expression of hepatic CYP2A5/2A6 in hepatocyte steatosis are limited at the moment. NF-E2-related factor 2 (NFE2L2 or Nrf2), a basic leucine zipper transcription factor that belongs to the Cap "N" Collar (CNC) family of transcription factors, is expressed in diverse cell types including hepatocytes (Oyake et al., 1996). Activated by electrophiles and oxidants, Nrf2 binds to DNA sequences named antioxidant response elements (AREs) and initiates the transcription of target genes that contribute to the elimination of free radicals and electrophiles (Wakabayashi et al., 2004). In other words, Nrf2 is a key nuclear transcription factor that regulates the expression of genes that protect against oxidative stress. Nrf2 is reported to play a cytoprotective role in NAFLD by regulating the expression of antioxidants and cytokines, thus resisting the oxidation, inflammation, and fibrosis that generate the second hit of the "two-hit" theory (Chowdhry et al., 2010; Sugimoto et al., 2010; Meakin et al., 2014). Deficiency of Nrf2 in mice leads to rapid onset and progression of NAFLD.
Thus, the potential of Nrf2 as a treatment target for NAFLD has been demonstrated using Nrf2 activators in vivo and in vitro (Shimozono et al., 2013). According to former reports of our lab, Nrf2 and CYP2A5 mRNA expressions were both elevated in a mouse model of hepatocyte steatosis, accompanied by 18-carbon FA accumulation in the hepatocytes. In mouse primary hepatocytes treated with 18-carbon FA, Nrf2 and CYP2A5 expressions were increased (Cui et al., 2016). We hypothesize that the common stimulus for up-regulation of CYP2A5/2A6 in liver damage caused by various structurally unrelated chemicals is hepatocellular FA accumulation (generally the early stage of liver damage), and that Nrf2 is a potential mechanism by which 18-carbon FA induces CYP2A5/2A6 expression. Our objective is to investigate the relationship between hepatocellular 18-carbon FA accumulation and CYP2A5/2A6 expression and the involvement of Nrf2 in the process by (i) investigating the effects of hepatic steatosis on CYP2A5 expression via Nrf2 in Nrf2-null and wild type (WT) mice fed with HFD, (ii) examining the effects of 18-carbon FA [stearic acid (SA, C18:0), oleic acid (OA, C18:1), linoleic acid (LA, C18:2), and alpha-linolenic acid (ALA, C18:3)], which significantly accumulated in the liver of HFD-fed mice, on Nrf2 and CYP2A6 expressions in HepG2 cells, and (iii) determining whether the effects of 18-carbon FA on CYP2A5/2A6 expression are related to Nrf2, using Nrf2-silenced or Nrf2-overexpressed HepG2 cells. As the classical target gene of Nrf2, GSTA1 mRNA expression was detected in Nrf2-silenced or Nrf2-overexpressed HepG2 cells to indirectly reflect the activation of Nrf2. The results indicated that the HFD-induced hepatocellular 18-carbon FA accumulation up-regulates CYP2A5/2A6 via Nrf2 during hepatocyte steatosis. However, Nrf2 is not the only factor that regulates CYP2A5/2A6 expression. Animals and Diets As described in our former report, 8-week-old WT and Nrf2-null male mice on an ICR background, fed a control diet (CD) or HFD for 8 weeks, were used for the experiments. All the mice were pathogen-free. Each group consisted of 10 mice. All of the experimental protocols that involve animals were approved by the Northeast Agricultural University Animal Care and Use Committee prior to the initiation of the study. Mouse Liver Pathology Liver sections (4 µm) were dewaxed in xylene, passed through graded ethanol solutions, stained with hematoxylin and eosin (HE), and then examined by three pathologists blinded to the mouse groups and their diets. Oil Red O Staining Oil Red O staining was performed according to the method described in our previous study. The fat accumulated in hepatocytes was stained red. Cell Culture and Treatments HepG2 cells used in our experiments were obtained from Harbin Medical University, China. The passage number of cells used in each experiment was 3 or 4. All assays were performed with nine replicates. MTT Cell Viability Assay 1 × 10^4 HepG2 cells were plated in 96-well plates and cultured in Dulbecco's modified Eagle medium (DMEM, Gibco, NY, USA) with 10% fetal bovine serum (FBS, Invitrogen, Life Sciences, USA) and 0, 0.25, 0.5, 1, or 2 mmol/L (mM) SA, OA, LA, and ALA, respectively, for 24 h. Then, 10 µL of 5 mg/ml MTT was added to each well and the plates were incubated at 37°C for 4 h. The supernatant was then discarded. One hundred fifty microliters of dimethyl sulfoxide (DMSO, Jiancheng, Nanjing, China) was added to each well and incubated at 37°C for 10 min.
The formazan formed in living cells was solubilized by the DMSO, and the absorbance was measured at 490 nm with a microplate reader (Bio-Rad iMark, USA).
HepG2 Cell Culture and Treatment
HepG2 cells were cultivated in DMEM containing 10% FBS at 37 °C in 5% CO2. 2 × 10^5 HepG2 cells were plated in 6-well plates and cultivated for 6 h, after which the culture medium was changed to DMEM without FBS. Twenty-five hours later, 0.25, 0.5, or 1 mM SA, OA, LA, or ALA standards (Sigma, St. Louis, MO, USA), prepared via saponification and albumin binding, were added to the culture medium. The cells were incubated for 24 h before being harvested for detection of Nrf2 and CYP2A6 levels.
Nrf2 Gene Overexpression in HepG2 Cells
pcDNA3-EGFP-C4-Nrf2 and pcDNA3-EGFP-C4 were the Nrf2 overexpression plasmid and the negative control (NC) plasmid, respectively; both were purchased from Addgene. The transfection and cell treatment procedures were the same as in Section Nrf2 Gene Silence in HepG2 Cells.
Real-Time PCR
HepG2 cells were disrupted with TRIzol reagent (Invitrogen, Carlsbad, USA). Total RNA isolation and reverse transcription were performed as in our former report. An ABI 7500 sequence detection system was used to perform SYBR Green quantitative real-time polymerase chain reaction (RT-PCR). The mRNA levels of human CYP2A6, Nrf2, and GSTA1 were normalized to β-actin. Relative changes in mRNA expression were calculated using the 2^-ΔΔCt method. Primers were designed using Primer Premier 5.0 software based on the mRNA sequences available at the National Center for Biotechnology Information (Table 1). Band intensities were measured using Quantity One software (Bio-Rad, Hercules, CA). Nrf2 protein expression in the nucleus was quantified relative to Histone H1, and in the cytoplasm relative to β-actin. Nrf2 and CYP2A5/2A6 protein expression in hepatocytes was quantified relative to β-actin.
Statistical Analysis
Ten replicates were used to generate an individual data point in each of the independent experiments in vivo. Nine replicates were used to generate an individual data point in each of the independent experiments in vitro. Results are expressed as the mean ± standard deviation (SD). A statistical software package (SPSS, version 17.0) was used for the data analyses. ANOVA was used to compare quantitative data among groups. The column charts were prepared with GraphPad Prism 6.0.
Nrf2 Deletion Aggravated HFD-Induced Hepatic Steatosis
As shown in Figure 1, after 8 weeks of HFD feeding, livers from WT mice showed micro- and macrovesicular fat accumulation. In contrast, livers from Nrf2-null mice were more sensitive to HFD feeding than those of WT mice, with evidence of greater macrovesicular fat accumulation. In addition, Oil Red O-stained liver sections showed that fat accumulated to a much greater extent in the hepatocytes of HFD-Nrf2-null mice than in those of HFD-WT mice. In the liver of CD-Nrf2-null mice there was slight hepatocellular fat accumulation, whereas there was no evidence of fat accumulation in the liver of CD-WT mice. In addition, the hepatic FA composition data also give evidence of FA accumulation in the liver of HFD-fed mice. Nrf2 deficiency enhanced the hepatic FA accumulation induced by HFD, especially that of the 18-carbon FAs. As the predominant components of hepatic FA, 18-carbon FAs accumulated significantly in the livers of HFD-fed mice of both genotypes (WT and Nrf2-null).
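The 2^-ΔΔCt calculation referred to above is simple arithmetic; a minimal sketch with hypothetical Ct values (β-actin as the reference gene, as stated in the Methods) is given below. The numbers are illustrative only and are not data from this study.

```python
# Minimal sketch of the 2^-ddCt calculation used for relative mRNA expression.
# All Ct values below are hypothetical; beta-actin is the reference gene.

def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Return the fold change of a target gene by the 2^-ddCt method."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # treated vs. control
    return 2 ** (-dd_ct)

# Hypothetical example: CYP2A6 in FA-treated vs. untreated HepG2 cells.
fold_change = relative_expression(ct_target_treated=24.1, ct_ref_treated=17.0,
                                  ct_target_control=26.8, ct_ref_control=17.0)
print(f"CYP2A6 fold change: {fold_change:.2f}")  # ddCt = -2.7 -> about 6.5-fold
```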
Nrf2 Deletion Inhibited HFD-Induced Mouse Hepatic CYP2A5 Expression
In HFD-WT mice, the Nrf2 protein expression level in the hepatocyte nucleus was increased 3.87-fold in comparison with the control group (Figure 2A). In the liver cytoplasm, by contrast, the Nrf2 protein expression level was decreased by 36.8% in HFD-WT mice compared to the control (Figure 2B). The hepatic CYP2A5 protein expression level was increased 11.13-fold in HFD-WT mice compared to CD-WT mice (Figures 2C,D). However, in Nrf2-null mice the expression level of CYP2A5 in hepatocytes was not changed by HFD feeding (Figures 2C,D). In the control groups, Nrf2 deficiency did not influence the basal expression of hepatic CYP2A5.
FIGURE 2 | Nrf2 and CYP2A5 protein expressions in HFD-fed mice. Nrf2 protein expression in the mouse hepatocyte nucleus and its quantity relative to Histone H1 are shown in panel (A). Nrf2 protein expression in the liver cytoplasm and its quantity relative to β-actin are shown in panel (B). Hepatic CYP2A5 protein expression in WT and Nrf2-null mice fed with CD or HFD is shown in panel (C), and its quantity relative to β-actin in panel (D). * represents a statistical difference caused by HFD within the WT or Nrf2-null groups; $ represents a statistical difference caused by Nrf2 deficiency on the same diet. *0.01 < P < 0.05, **P < 0.01; $0.01 < P < 0.05, $$P < 0.01.
Nrf2 and CYP2A6 Expression Levels Were Increased in HepG2 Cells Treated With 18-Carbon FA
The low concentrations (0.25, 0.5, 1 mM) of SA, OA, LA, and ALA had no significant effect on the viability of HepG2 cells. However, HepG2 cell viability was significantly inhibited (by over 25%) when the FA concentration was 2 mM (Figure 3A). Nrf2 protein (Figures 3B,C) and mRNA (Figure 3E) expression increased with increasing doses of SA, OA, and ALA (0.25, 0.5, 1 mM). Nrf2 protein (Figures 3B,C) and mRNA (Figure 3E) expression decreased gradually in HepG2 cells administered increasing doses of LA (0.25, 0.5, 1 mM), but these expression levels remained higher than in the control group. CYP2A6 protein (Figures 3B,D) and mRNA (Figure 3F) expression was dose-dependently up-regulated in HepG2 cells treated with SA, OA, LA, and ALA in comparison with the control cells.
Nrf2 Silencing Inhibited 18-Carbon FA-Induced CYP2A6 Expression, Nrf2 Overexpression Accelerated CYP2A6 Expression
After transfection with Nrf2 siRNA, Nrf2 protein expression declined by 86% at 24 h and by 84% at 48 h (Figures 4A,B). Transcription of Nrf2 in HepG2 cells declined by 90.10% at 24 h and by 91.26% at 48 h (Figure 4C). Twenty-four hours after pcDNA3-EGFP-C4-Nrf2 was transfected into HepG2 cells, Nrf2 protein (Figures 4D,E) and mRNA (Figure 4F) expression increased 2.01-fold and 10.88-fold, respectively. Forty-eight hours after pcDNA3-EGFP-C4-Nrf2 was transfected, Nrf2 protein (Figures 4D,E) and mRNA (Figure 4F) expression increased 1.89-fold and 10.84-fold, respectively. GSTA1 mRNA relative expression showed an elevation of 528% in cells transfected with pcDNA3-EGFP-C4-Nrf2 and was decreased by 43.1% in cells transfected with Nrf2 siRNA (Figure 4G). In cells transfected with the NC plasmids (NC siRNA or pcDNA3-EGFP-C4), Nrf2 and GSTA1 expression showed no statistical difference compared to the control cells (Figure 4). CYP2A6 mRNA and protein expression was increased by LA and ALA treatment in HepG2 cells transfected with either the functional plasmids or the NC plasmids.
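The fold changes reported above (e.g., the 3.87-fold increase in nuclear Nrf2) are ratios of densitometric band intensities normalized to the loading controls (Histone H1 or β-actin). A minimal sketch of that normalization follows; the band intensities are made up and were chosen only so that the example reproduces a ratio close to 3.87, they are not values from this study.

```python
# Illustrative western blot densitometry normalization (hypothetical intensities):
# fold change = (target / loading control) in treated / (target / loading control) in control.

def fold_change(target_treated, loading_treated, target_control, loading_control):
    return (target_treated / loading_treated) / (target_control / loading_control)

# e.g. nuclear Nrf2 normalized to Histone H1, HFD-WT vs. CD-WT (made-up numbers)
print(round(fold_change(target_treated=5800, loading_treated=1500,
                        target_control=1000, loading_control=1000), 2))  # -> 3.87
```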
HepG2 cells were examined for mRNA and protein expression levels in the presence or absence of LA and ALA treatment, the expressions of mRNA ( Figure 5A) and protein (Figures 5B,C) levels of CYP2A6 were extremely reduced in Nrf2 silenced cells compared to the NC groups. Conversely, Nrf2 over-expressed HepG2 cells expressed much higher CYP2A6 mRNA ( Figure 5D) and protein (Figures 5E,F) levels than the cells transfected with NC plasmid. In the presence or absence of LA and ALA treatment, GSTA1 mRNA relative expression was reduced in Nrf2 silenced cells ( Figure 5G) and was increased in Nrf2 over expressed cells (Figure 5H) compared to the NC groups. DISCUSSION AND CONCLUSIONS Hepatic CYP2A5/2A6 is reported up-regulated in various pathophysiological liver damages and induced by structurally variable hepatotoxic chemicals (Jounaidi et al., 1994;Pelkonen et al., 1997;Sipowicz et al., 1997;Raunio et al., 1998;Montero et al., 1999), including NAFLD . In all of these conditions, lipid accumulation in hepatocytes always occurs at the early stage of liver damage and results in redox status disorder in hepatocytes, which contributes to the second hit. Nrf2 is a transcription factor that activated by oxidative stress and regulates the transcription of numerous cytoprotective target genes. Our former researches showed that hepatic Nrf2 and CYP2A5 mRNA expressions were increased significantly in HFD fed mice and accompanied with badly hepatic 18-carbon FA (SA, OA, LA, and ALA) accumulation, which is the most abundant component of liver FA . Meanwhile, mice primary hepatocytes treated with 18-carbon FA also express more Nrf2 and CYP2A5 than the control cells (Cui et al., 2016). Thus, we hypothesize that HFD-induced hepatic 18-carbon FA accumulation upregulates CYP2A5 in hepatocytes steatosis may correlate with the activation of Nrf2. Whether Nrf2 expression is necessary for CYP2A5/2A6 up-regulation in hepatocellular steatosis has never been reported. Our study aims to investigate the necessity of Nrf2 expression for hepatic 18-carbon FA accumulation induced CYP2A5/2A6 up-regulation. Serological and pathological tests indicated that mice developed liver steatosis that was exacerbated by Nrf2 deletion after 8 weeks of HFD feeding. In condition of mild damage in hepatocytes, ALT from the cytoplasm leaks out because the membrane permeability is enhanced. While in condition of severe damage in hepatocytes, AST from mitochondria leaks out due to the disruption of mitochondrial membrane. Compared to the HFD-WT mice, the increase of AST and the decrease of ALT in HFD-Nrf2-null mice indicated that Nrf2 deficiency enhanced the hepatocytes damage induced by HFD feeding. ALP leaks into blood from hepatocytes in condition of liver damage. Kidney is considered to be the major organ that produces GGT, but the GGT in serum primarily comes from the hepatobiliary system. In heavily injured hepatocytes, GGT existed in the smooth endoplasmic reticulum will leak out into blood. Thus, the increase of serum ALP and GGT levels induced by Nrf2 deletion in HFD fed groups also demonstrated that Nrf2 deficiency accelerated the HFD-induced liver damage. Moreover, the grown macrovesicular fat accumulation in HFD-Nrf2-null mice liver in comparison with the HFD-WT mice indicated that Nrf2 deficiency intensified the liver sensibility to HFD feeding. The changes of CYP2A5 protein expression and Nrf2 nuclear translocation showed the same trend in HFD fed WT mice liver compared to the control mice (Figure 2). 
Considering the hepatic FA accumulation, the increase of CYP2A5 protein expression and Nrf2 nuclear translocation in HFD-WT mice demonstrated that HFD-induced FA accumulation up-regulated CYP2A5 and activated Nrf2. However, the hepatic CYP2A5 protein expression in HFD-Nrf2-null mice remained close to the basal level of mice fed with CD. This phenomenon indicated that expression of Nrf2 is crucial for the up-regulation of mouse hepatic CYP2A5 induced by HFD feeding. Another study, based on mice exposed to cadmium chloride (16 mmol/kg body weight), reported that cadmium alters the cellular redox status and induces hepatic CYP2A5 in WT mice but not in Nrf2-null mice (Abu-Bakar et al., 2004). Our study is in agreement with the above finding that Nrf2 may be the common regulator contributing to the up-regulation of CYP2A5 in liver injuries induced by structurally unrelated pathogeneses. Our data also showed that in Nrf2-null mice the increase of CYP2A5 induced by hepatic FA accumulation was inhibited while the hepatocellular damage was enhanced, which indicated that CYP2A5 up-regulation does not contribute to the development of HFD-induced liver damage.
FIGURE 4 | Nrf2 gene silencing and overexpression. In Nrf2 siRNA-transfected HepG2 cells, Nrf2 protein expression is shown in panel (A) and its quantity relative to β-actin in panel (B); Nrf2 mRNA expression is shown in panel (C). In pcDNA3-EGFP-C4-Nrf2-transfected HepG2 cells, Nrf2 protein expression is shown in panel (D) and its quantity relative to β-actin in panel (E); Nrf2 mRNA expression is shown in panel (F). The GSTA1 mRNA relative expression in the different groups is shown in panel (G). NC in panels (A-C) represents HepG2 cells transfected with negative control siRNA, and in panels (D-F) HepG2 cells transfected with the negative control plasmid (pcDNA3-EGFP-C4). In panel (G), NC-O represents cells transfected with pcDNA3-EGFP-C4 and NC-S represents HepG2 cells transfected with negative control siRNA. "Cell" denotes control cells without any administration. Asterisks represent statistical differences caused by Nrf2 siRNA or pcDNA3-EGFP-C4-Nrf2 transfection compared with control cells without any stimulation. *0.01 < P < 0.05, **P < 0.01.
As shown above, 18-carbon FAs accounted for a high proportion (almost 60%) of mouse liver FA and showed high sensitivity to Nrf2 deficiency, which indicated that 18-carbon FAs are the predominant components contributing to the formation of hepatocyte steatosis. Thus, SA (C18:0), OA (C18:1), LA (C18:2), and ALA (C18:3) were chosen as stimuli for HepG2 cells to investigate the role of 18-carbon FA in inducing CYP2A6 and the necessity of Nrf2 in this process. The HepG2 cell line was used in this study because it regenerates easily and is very similar to normal human hepatocytes in resisting lipid accumulation (Chao et al., 2010; Chavez-Tapia et al., 2011; Yao et al., 2011). FAs show strong cytotoxicity at certain concentrations, inhibiting cell growth and even causing cell death. The toxicity of FA to cells depends on the concentration, solvent, cell type and culture conditions, which is why different FA concentrations have been used in different studies. In our study, the FA used to stimulate the HepG2 cells was prepared via saponification and albumin binding, so that any toxicity of DMSO or ethanol on cell viability was avoided.
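The working concentrations used above come down to a simple dilution calculation from the albumin-bound FA stock. The sketch below is purely illustrative: the stock concentration and well volume are assumptions, as the paper only states that the FA was prepared via saponification and albumin binding.

```python
# Hedged arithmetic sketch of reaching the FA working concentrations (0.25-1 mM).
# STOCK_MM and WELL_VOLUME_ML are assumed values for illustration only.

STOCK_MM = 100.0        # assumed FA/albumin stock concentration (mM)
WELL_VOLUME_ML = 2.0    # assumed medium volume per 6-well plate well (mL)

for target_mm in (0.25, 0.5, 1.0):
    # C1 * V1 = C2 * V2 -> volume of stock per well (neglecting the small added volume)
    stock_ul = target_mm * WELL_VOLUME_ML / STOCK_MM * 1000.0
    print(f"{target_mm} mM: add {stock_ul:.0f} uL stock to {WELL_VOLUME_ML} mL medium")
```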
The MTT cell viability assay revealed that the optimal FA concentrations for inducing HepG2 cell steatosis without significant inhibition of cell viability were 0.25, 0.5, and 1 mM (Figure 3A). Nrf2 and CYP2A6 expression induced by 18-carbon FA (SA, OA, LA, and ALA) in vitro (Figures 3B-F) showed the same tendency as the experiments in vivo (Figure 2), which indicated that the in vitro experiments reproduced the in vivo process to a certain extent. Interestingly, Nrf2 mRNA and protein expressions were dose-dependently increased in HepG2 cells exposed to SA, OA, and ALA, but dose-dependently decreased in HepG2 cells exposed to LA, although they remained much higher than in the control cells. Current reports show that although LA and its metabolite derivatives play a cytoprotective role in HepG2 cells and mouse/rat organs by inducing the expression of Nrf2 and its downstream antioxidative genes (Mollica et al., 2014; Furumoto et al., 2016), LA concentrations higher or lower than the optimal value attenuate its stimulatory action on Nrf2 expression (Zeng et al., 2016). However, the exact mechanism by which diverse LA concentrations (with similar cytotoxicity) exert different effects on inducing Nrf2 expression remains to be investigated. Nrf2 or Keap1 activators, inhibitors, and primary hepatocytes from WT and Nrf2-null mice have been used in some studies focused on the relationship between Nrf2 and CYP2A5 in the liver (Abu-Bakar et al., 2004; Lämsä et al., 2010; Shimozono et al., 2013). However, the expression of Nrf2 cannot be fully suppressed by inhibitors (Jnoff et al., 2014), and primary hepatocytes are difficult to passage continuously. In our experiment, Nrf2-silenced and Nrf2-overexpressing cell models were established successfully by transfecting HepG2 cells with Nrf2-specific siRNA and pcDNA3-EGFP-C4-Nrf2 using a liposome-mediated method. The eukaryotic expression vector or siRNA, combined with liposomes, entered the cells by endocytosis to accomplish transient Nrf2 overexpression or silencing. As observed in Figure 4, the transfection did not show a statistically significant effect on cell viability at either time point (24 and 48 h), and Nrf2 mRNA and protein expression in the Nrf2-silenced and Nrf2-overexpressing cell models was correspondingly decreased or increased. The 24 h time point was chosen in the current study because longer liposome exposure causes heavier damage to hepatocytes, and the Nrf2 silencing and overexpression results were almost the same at 24 and 48 h. The variation trend of GSTA1 mRNA relative expression was similar to that of Nrf2 in Figure 4. This result indirectly indicated that the Nrf2-silenced and Nrf2-overexpressing cell models were satisfactory for use. It is well-known that LA and ALA are important precursors of many long-chain FAs in vivo. LA and ALA cannot be synthesized by the body or by cells and must be absorbed from food. Our data showed that LA and ALA at 1 mM were the most effective activators of CYP2A6. Thus, LA and ALA were chosen as the stimuli for the Nrf2-silenced and Nrf2-overexpressing HepG2 cells in the current study to scrutinize the involvement of Nrf2 in regulating CYP2A6. As shown in Figure 5, the expression of CYP2A6 induced by LA and ALA was significantly attenuated in Nrf2-silenced cells and markedly enhanced in Nrf2-overexpressing cells, which indicated that Nrf2 expression is crucial for LA- and ALA-induced CYP2A6 up-regulation.
As the classical target gene of Nrf2, GSTA1 showed a change in mRNA expression in this experiment that was similar to that of CYP2A6, which also indirectly proved that CYP2A6 expression was influenced by Nrf2. Is the CYP2A5/2A6 gene itself regulated by Nrf2, and is Nrf2 the sole factor regulating CYP2A6 in hepatocyte steatosis? Two putative stress response elements (StRE) within the promoter of mouse Cyp2a5, at positions -2514 to -2505 and -2386 to -2377, have been identified by computer-based sequence analysis and may interact with Nrf2 (Abu-Bakar et al., 2007). In our study, the expression of Nrf2 and CYP2A6 showed an inverse tendency in cells treated with increasing concentrations of LA, which indicated that Nrf2 is not the only pathway contributing to the increase of CYP2A6 induced by hepatic 18-carbon FA accumulation.
FIGURE 5 (Continued) | ... expressions were increased in Nrf2-overexpressing cells with or without LA and ALA administration. In the presence or absence of LA and ALA administration, GSTA1 mRNA expression was down-regulated in cells transfected with Nrf2 siRNA (G) and up-regulated in cells transfected with pcDNA3-EGFP-C4-Nrf2 (H). The NC cells in panels (A-C,G) were transfected with NC siRNA; the NC cells in panels (D-F,H) were transfected with the NC plasmid (pcDNA3-EGFP-C4). * represents a statistical difference caused by transfection of Nrf2 siRNA or pcDNA3-EGFP-C4-Nrf2 compared with cells transfected with NC siRNA or NC plasmid; $ represents a statistical difference caused by LA and ALA administration within Nrf2-silenced cells, Nrf2-overexpressing cells, and control cells. *0.01 < P < 0.05, **P < 0.01; $0.01 < P < 0.05, $$P < 0.01.
An aryl hydrocarbon receptor (AHR)-dependent pathway has been reported to be associated with the up-regulation of CYP2A5, and a putative AHR response element (XRE) was identified in the Cyp2a5 promoter at positions -2514 to -2492 using luciferase reporter gene assays (Arpiainen et al., 2005). In addition, the dexamethasone (DEX)-induced CYP2A6 increase in human primary hepatocytes was attenuated by a glucocorticoid receptor (GR) antagonist and by a mutation of the hepatic nuclear factor 4 (HNF4) alpha response element (HNF4-RE), which suggested that GR and HNF4 alpha are involved in the induction of CYP2A6 by DEX (Onica et al., 2008). However, how many genes contribute to the up-regulation of CYP2A5/2A6 induced by hepatocyte steatosis remains to be investigated. In conclusion, our data suggest that HFD-induced hepatic 18-carbon FA accumulation up-regulates CYP2A5/2A6 via Nrf2 during hepatocellular steatosis. However, Nrf2 is not the only molecule that regulates the expression of CYP2A5/2A6.
AUTHOR CONTRIBUTIONS
XheW and XZ designed this study and contributed to the paper writing. XC, XS, and XhuiW established the Nrf2-silenced and Nrf2-overexpressing HepG2 cell models and participated in the western blot assays and the paper writing. XL and YQ participated in the HepG2 cell viability assay and animal feeding. MH, WL, and IM conducted the liver pathology experiments and the real-time PCR assays.
v3-fos-license
2021-06-03T00:45:16.347Z
2021-02-04T00:00:00.000
235286951
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://iopscience.iop.org/article/10.1088/1757-899X/1056/1/012004/pdf", "pdf_hash": "38d2af5be180e5b9df4d4b15e89e15a12ca3c2a5", "pdf_src": "IOP", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42022", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "38d2af5be180e5b9df4d4b15e89e15a12ca3c2a5", "year": 2021 }
pes2o/s2orc
Interfacial compatibilization of PLA and Mg in composites for bioresorbable bone implants
In this study, polylactide (PLA)/magnesium (Mg) composites were produced through extrusion and compression-molding. In order to enhance the interfacial adhesion between the hydrophobic matrix and the hydrophilic filler, an amphiphilic PEO-b-PLLA block copolymer was used. The morphological study shows an effective improvement of the PLA/Mg interactions following the copolymer addition. Moreover, the surface contact angle test proves the decrease of the PLA hydrophobicity. Thus, a significant influence on cell adhesion is expected. Furthermore, hydroxyapatite formation in the bulk after eight weeks of immersion in a simulated body fluid (SBF) is also shown, suggesting that the bioactivity will be noticeably improved. However, a decrease in mechanical properties and cell adhesion is observed.
Introduction
Stainless steel, cobalt-based alloys and especially titanium-based alloys have been the key materials for bone implants since the 1980s. However, these permanent implants present some disadvantages, such as side effects from long-term use: implant erosion and partial resorption, and cortical bone erosion and fragmentation due to the high Young's modulus of the implant, requiring second or multiple surgical interventions [1]. Thus, alternatives to the permanent metal implants already available on the market, such as bioresorbable implants, have been developed and continue to gain popularity. Together with the requirements for any implant (biocompatibility and suitable mechanical properties), bioresorbable implants must provide biodegradability and bioactivity so as to achieve adequate strength and stiffness maintained over the required time interval, allow cell adhesion and proliferation, and degrade and resorb during bone healing until its complete regeneration. For these reasons, polylactide (PLA) is among the biodegradable and biocompatible polymers of choice [2]. However, PLA possesses low mechanical properties (Young's modulus of 3-4 GPa, compared with 7-30 GPa for bone), an uncontrollable and relatively slow degradation time (>6 months), and acidic degradation products (causing local inflammation issues) [3]. To successfully remediate these issues, (nano)fillers have often been incorporated into polymers. Among them, Mg presents good osteoconductivity, high mechanical properties (compared to monolithic polymers) and shorter degradation times (<1 month) [4,5]. Bearing in mind the published results, we propose here a combination of PLA and Mg. However, the poor interfacial adhesion between the hydrophobic PLA and the hydrophilic Mg is expected to negatively affect the matrix/filler (composite) properties. One of the possibilities to enhance the compatibility between a polymer and a filler involves modification of the filler surface using compatibilizing agents [6,7]. We report on the testing of an amphiphilic copolymer to form a new interface between the matrix and the filler, thus improving the interfacial adhesion of PLA/Mg biocomposites and the composite's biological activity.
Preparation of composites
The amphiphilic diblock copolymer poly(ethylene oxide)-block-poly(L-lactide) (PEO-b-PLLA) was synthesized via bulk ring-opening polymerization (ROP) of L-lactide initiated by a PEO with a fixed chain length of 5000 g/mol [8]. The amount of copolymer was fixed at 10 wt.% in PLA.
The filler amount was 5 wt.%, 10 wt.% or 15 wt.% of Mg, and the composites were prepared via extrusion and shaped into cylinders (12 mm length and 4 mm diameter) and rectangular specimens of 60 × 12 × 3 mm³ (length × width × thickness) via compression-molding.
Morphology, surface hydrophilicity and dynamic mechanical analysis
Morphology
The morphology of all the composites was visualized using scanning electron microscopy (SEM) to evaluate the influence of PEO-b-PLLA on the PLA/Mg interfacial adhesion. The samples were cryofractured and covered with gold prior to observation. As seen in Figure 1(A), the poor interactions between the hydrophobic PLA matrix and the hydrophilic Mg filler resulted in cavities left by the Mg particles after cryofracture. This behavior is considered typical of incompatible composites with low interfacial adhesion between the matrix and the filler [5]. In contrast, the presence of the amphiphilic copolymer provided remarkably good compatibility between PLA and Mg, as evidenced by the absence of cavities and the smooth, homogeneous surface after cryofracture (Figure 1).
Surface hydrophilicity
Together with the surface morphology, surface hydrophilicity is known to tailor the bioactivity of implants [8]. Therefore, the influence of the copolymer on the PLA/Mg surface hydrophilicity was evaluated using water contact angle (WCA) analysis. Interestingly, the addition of PEO-b-PLLA showed a positive effect on the composite wettability. Indeed, the amphiphilic copolymer is able to enhance the hydrophilicity of the PLA/Mg composites through selective surface localization of the hydrophilic PEO blocks [9]. Thus, a decrease in the WCA from the first set (without copolymer) to the second one (with copolymer) was observed.
Dynamic mechanical analysis
Dynamic mechanical analyses (DMA) are often used to assess the mechanical properties of samples as a function of temperature. The results obtained for all composites and the neat PLA are presented in Table 2. As seen, an improvement in the storage modulus (E') was reached at 10 wt.% of Mg and 37 °C for the first set (PLA/xMg). Concerning the PLA/10Copo/xMg set, E' follows the same trend as in the first set, but with lower values. This decrease in E' indicates the plasticizing effect of the PEO present in the PEO-b-PLLA copolymer [10]. After the copolymer addition, a significant drop in the loss factor tan δ was detected as the Mg content increased. According to the literature, a decrease in tan δ is related to stronger interfacial adhesion [11]. Consequently, the composites reinforced with 10 wt.% of Mg filler are investigated more closely in the following section.
In vitro degradation study
In order to evaluate the bioactivity of the composites, an in vitro degradation test was performed by soaking the cylinders in SBF at 37 °C and pH 7.4. Prior to each analysis, the samples were taken out of the SBF, rinsed several times with distilled water, wiped and dried under nitrogen. The visual aspect of the PLA/10Mg and PLA/10Copo/10Mg composites revealed the formation of a white friable layer on the composite surfaces, and its thickness for PLA/10Copo/10Mg was found to increase over time. This might be explained by the formation of hydroxyapatite (HAp), the major component of natural bone, driven by the Mg degradation [12].
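Returning briefly to the DMA results discussed above: the loss factor is defined as tan δ = E''/E', so a drop in tan δ at comparable or higher E' reflects reduced damping and hence stronger matrix-filler coupling. The short sketch below only illustrates how tan δ would be obtained from raw DMA output; the modulus values are hypothetical, since the paper reports trends rather than these numbers.

```python
# Hedged sketch: loss factor from DMA data, tan(delta) = E'' / E'.
# All modulus values are hypothetical and serve only to illustrate the calculation.

samples = {
    # name: (storage modulus E' [MPa], loss modulus E'' [MPa]) at 37 degC
    "PLA/10Copo":      (2300.0, 115.0),
    "PLA/10Copo/10Mg": (2600.0,  91.0),
}

for name, (e_storage, e_loss) in samples.items():
    tan_delta = e_loss / e_storage
    print(f"{name}: E' = {e_storage:.0f} MPa, tan(delta) = {tan_delta:.3f}")
```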
According to the literature [13], Mg degrades in SBF mainly through reactions with water, producing Mg cations (Mg2+) and hydroxide anions (OH-) and releasing hydrogen gas. Indeed, some cracks and pitting corrosion were observed in the composites (Figure 2). This behavior is more pronounced in the PLA/10Copo/10Mg composite than in PLA/10Mg because of the hydrophilicity of PEO. In order to confirm the chemistry of the white layer, EDX analysis was performed on the composite surface and in the bulk (cross-section). The data obtained revealed that after a period of eight weeks HAp was only formed at the surface of PLA/10Mg (Ca/P = 1.6), while the mineral was present throughout the bulk of the PLA/10Copo/10Mg composite. Interestingly, the Ca and P content in the bulk was more prominent in the presence of the copolymer. Indeed, a Ca/P ratio of 1.64 was reached after eight weeks of degradation, versus 1 for the PLA/10Mg composite. Thus, the copolymer-containing samples seem more beneficial for new bone tissue formation during the bone-healing process [14].
Conclusion
As was shown above, the presence of an amphiphilic copolymer enhances the PLA/Mg interfacial adhesion and decreases the PLA hydrophobicity by creating a completely new interface between PLA and Mg. In fact, it is assumed that the more hydrophilic the surface, the better the hydrophilic bone-like layer adheres. Moreover, HAp, the major component of natural bone, was formed in the bulk after eight weeks of immersion in the SBF only for the PLA/10Copo/10Mg composite. However, the copolymer presence decreased the mechanical properties of the composite-based materials. In this regard, research is in progress to develop composites that meet the mechanical properties of natural bone by using novel (co)polymers to improve the PLA/Mg adhesion.
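The Ca/P ratios quoted above come from EDX atomic percentages and are compared against stoichiometric hydroxyapatite, Ca10(PO4)6(OH)2, for which Ca/P ≈ 1.67; the Mg corrosion feeding this mineralization proceeds overall as Mg + 2H2O → Mg(OH)2 + H2. The sketch below only shows the ratio check; the atomic percentages are hypothetical values chosen to reproduce the reported ratios, not measured data.

```python
# Hedged sketch of the Ca/P check against stoichiometric hydroxyapatite (Ca/P ~ 1.67).
# The EDX atomic percentages below are hypothetical.

STOICHIOMETRIC_HAP = 10 / 6  # Ca10(PO4)6(OH)2 -> Ca/P ~ 1.67

edx_atomic_percent = {
    # sample: (Ca at.%, P at.%) measured in the bulk cross-section (illustrative)
    "PLA/10Mg":        (3.1, 3.1),
    "PLA/10Copo/10Mg": (5.9, 3.6),
}

for sample, (ca, p) in edx_atomic_percent.items():
    ratio = ca / p
    print(f"{sample}: Ca/P = {ratio:.2f} (HAp reference {STOICHIOMETRIC_HAP:.2f})")
```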
v3-fos-license
2023-01-12T16:31:00.084Z
2023-01-01T00:00:00.000
255666046
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1660-4601/20/2/1084/pdf?version=1673085078", "pdf_hash": "f7d735db742c45a90df75b1be06be5a44166c895", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42023", "s2fieldsofstudy": [ "Medicine" ], "sha1": "31fa06167c8da6721a66740c954b7bd51f6e52b6", "year": 2023 }
pes2o/s2orc
Factors Influencing the Adoption of Magnetic Resonance-Guided High-Intensity Focused Ultrasound for Painful Bone Metastases in Europe, A Group Concept Mapping Study Magnetic resonance imaging-guided high-intensity focused ultrasound (MR-HIFU) is an innovative treatment for patients with painful bone metastases. The adoption of MR-HIFU will be influenced by several factors beyond its effectiveness. To identify contextual factors affecting the adoption of MR-HIFU, we conducted a group concept mapping (GCM) study in four European countries. The GCM was conducted in two phases. First, the participants brainstormed statements guided by the focus prompt “One factor that may influence the uptake of MR-HIFU in clinical practice is...”. Second, the participants sorted statements into categories and rated the statements according to their importance and changeability. To generate a concept map, multidimensional scaling and cluster analysis were conducted, and average ratings for each (cluster of) factors were calculated. Forty-five participants contributed to phase I and/or II (56% overall participation rate). The resulting concept map comprises 49 factors, organized in 12 clusters: “competitive treatments”, “physicians’ attitudes”, “alignment of resources”, “logistics and workflow”, “technical disadvantages”, “radiotherapy as first-line therapy”, “aggregating knowledge and improving awareness”, “clinical effectiveness”, “patients’ preferences”, “reimbursement”, “cost-effectiveness” and “hospital costs”. The factors identified echo those from the literature, but their relevance and interrelationship are case-specific. Besides evidence on clinical effectiveness, contextual factors from 10 other clusters should be addressed to support adoption of MR-HIFU. Introduction Pain is a common consequence of bone metastases that substantially reduces the quality of life of patients with advanced cancer [1,2]. For patients with persistent pain despite the use of analgesics, radiotherapy is a well-established treatment option that leads to complete or partial pain relief after two to four weeks in about 60-70% of patients [3][4][5]. Magnetic resonance image-guided high-intensity focused ultrasound (MR-HIFU) is an emerging non-invasive alternative that holds the promise to promote faster pain palliation than radiotherapy in a larger proportion of patients [6][7][8]. HIFU thermally ablates the periosteal nerve and tumor by delivering acoustic energy to the targeted treatment region [9]. HIFU can be performed under the guidance of magnetic resonance imaging (MRI) or ultrasound, but MRI guidance is preferred for bone treatments because MRI thermometry provides a near real-time assessment of temperature and thermal-dose distribution on soft tissues [9]. This enables monitoring the thermal damage on the treated and surrounding healthy tissues, and modulation of the energy level in case the temperature rise is insufficient [9]. MR-HIFU can be performed under general anesthesia or sedation depending on the location of the treatment, the patient characteristics, and the experience of the attending physicians, and it therefore requires an anesthesiologist or sedationist in the MRI room during the procedure [10]. After MR-HIFU treatment, pain response occurs within three days, and 67% to 88% of patients have complete or partial pain relief [6,7,11]. To date, no randomized controlled trial (RCT) has been performed to compare the effectiveness of MR-HIFU to radiotherapy. 
Therefore, a three-armed RCT was designed to compare focused ultrasound and radiotherapy for noninvasive palliative pain treatment in patients with bone metastases-the FURTHER-trial (ClinicalTrials.gov Identifier: NCT04307914). Evidence from RCTs should underpin the adoption of medical technologies in medical settings, including oncology [12]. However, the adoption of medical technologies encompasses multiple interacting factors, such as the patient's experience with the underlying illness, the clinician's resistance to new technologies, the processes of technology application in organizations, financing, and regulatory aspects [13]. These contextual factors have proven to play an even stronger role in the adoption of new technologies than the proof of their effectiveness [12]. Thus, to understand the complexity of the interventions, and the complexity of the social context in which the interventions are being tested, qualitative research is increasingly undertaken alongside RCTs [14]. This is necessary because RCTs may tolerate or control the context, but they do not engage with the context from different perspectives. Moreover, to support the implementation of new technologies, barriers and facilitators from different levels and contexts need to be elicited in order to ground the development of effective implementation strategies [15]. The most common methodologies applied to elicit contextual factors on various levels are focus groups, semi-structured interviews, or mix-method research such as Delphi panels [14]. Group concept mapping (GCM) is one alternative participatory mixed-method research that has been applied to theory development, planning of programs and social interventions, and evaluation of programs in health care [16]. The adoption of MR-HIFU technology is expected to face several challenges, including technical advancements, accumulation of clinical evidence, and reimbursement [17]. However, a systematic evaluation of barriers and facilitators influencing the adoption of MR-HIFU for bone metastases was lacking. To investigate barriers and facilitators influencing the adoption of MR-HIFU in European countries, a GCM approach was applied alongside the FURTHER-trial. Our objective was to elicit the contextual factors influencing the adoption of MR-HIFU, which are not routinely addressed in the RCT design, but could equally impact successful adoption of this technology. Study Settings FURTHER is a H2020-funded research project that aims to assess the effectiveness of MR-HIFU to improve early pain palliation for cancer patients with painful bone metastases. The FURTHER project's main component is a prospective, multicentric, three-arm RCT (ClinicalTrials.gov registration number NCT04307914); it is the first to assess the effectiveness of MR-HIFU compared to either radiotherapy or a combination of MR-HIFU and radiotherapy for pain palliation. Patient recruitment for the trial started on 10.03.2020 in the Netherlands, Germany, Finland, and Italy [18]. The GCM study took place in an early phase of the FURTHER-trial. GCM GCM combines qualitative data obtained from participatory inquiry and multivariate statistical analyses to create concept maps. These concept maps are visual representations that summarize the main ideas of the group (i.e., representing multiple perspectives) and their interrelationships [19,20]. 
The resulting concept maps express the opinion of the participants on the topic using their own terms and can then be used as a guide for strategically planning the adoption of medical technologies [16,19]. Participant Selection of the GCM Study The participants represented different perspectives: patients, referring physicians, medical specialists, clinical researchers, technology providers, hospital managers, including members of the FURTHER Consortium. The participants were selected using two different methods. First, purposive sampling was used to ensure diverse representation [21]. Second, snowball sampling (i.e., a chain-referral method) was used to facilitate participant engagement [21]. An invitation letter was sent via email to all identified stakeholders outlining the purpose of the GCM study. The invitation letter included a link to the FURTHER project website, where information on MR-HIFU procedure and the FURTHER project was available. A link was provided at the end of the letter, and those interested in participating created a username and password. A similar invitation was sent before the beginning of phase I and phase II and participation in phase II was independent from phase I. Data Collection and Analysis Data collection was conducted online using the platform from Group Wisdom™ (Concept System Inc., Ithaca, NY, USA, Version 2020). At first login, the participants signed electronically an informed consent (provided in Supplementary Materials-File S1) and were informed that they could withdraw consent for participation anytime. Participants' anonymity was guaranteed, and they were asked three to five non-identifying questions about their own background to allow subgroup analyses (File S1). The GCM study was then conducted in two phases: phase I consisted of a brainstorming task, and phase II comprised sorting and rating tasks. The tasks were conducted in English, with the objective of engaging all countries in creating a single European concept map. Figure 1 summarizes the tasks presented to each participant in each phase and how the data were processed and analyzed. PHASE I BRAINSTORMING Task: add statements that complete the focus prompt: One factor that may influence (either positively or negatively) the uptake of MR-HIFU in clinical practice in my country or local context is that ... PHASE II SORTING & RATING Sorting Task: sort the statements into different piles based on how they consider ideas as related; and label these piles Overview of data collection and analysis for the GCM study. Participants are responsible for generating ideas (phase I) and organizing and structuring the ideas (phase II). a Performed by two researchers independently. Phase I-Brainstorming Phase I took place from 1 August to 31 December 2021. During this period, the participants were asked to brainstorm statements guided by a focus prompt. The focus prompt reflected the research question in a complete-the-sentence format: "One factor that may influence (either positively or negatively) the uptake of MR-HIFU in clinical practice in my country or local context is that..." Reminders were sent by email monthly encouraging the participants to add new statements and to complement the ideas from other participants gathered during that period. Phase I was stopped when the topic was exhausted (i.e., if one week after the last reminder, the participants stopped adding new statements). To eliminate redundancy and potential ambiguity, the statements added were processed. 
Two researchers (JSCG and ACS) followed a stepwise approach: (i) splitting statements with more than one idea; (ii) merging redundant statements; (iii) editing the remaining statements to ensure comprehensibility. Finally, one participant revised the resulting list of statements to ensure there were no data loss or changes in meaning. Overview of data collection and analysis for the GCM study. Participants are responsible for generating ideas (phase I) and organizing and structuring the ideas (phase II). a Performed by two researchers independently. Phase I-Brainstorming Phase I took place from 1 August to 31 December 2021. During this period, the participants were asked to brainstorm statements guided by a focus prompt. The focus prompt reflected the research question in a complete-the-sentence format: "One factor that may influence (either positively or negatively) the uptake of MR-HIFU in clinical practice in my country or local context is that..." Reminders were sent by email monthly encouraging the participants to add new statements and to complement the ideas from other participants gathered during that period. Phase I was stopped when the topic was exhausted (i.e., if one week after the last reminder, the participants stopped adding new statements). To eliminate redundancy and potential ambiguity, the statements added were processed. Two researchers (JSCG and ACS) followed a stepwise approach: (i) splitting statements with more than one idea; (ii) merging redundant statements; (iii) editing the remaining statements to ensure comprehensibility. Finally, one participant revised the resulting list of statements to ensure there were no data loss or changes in meaning. Phase II-Sorting and Rating Phase II took place from 12 April to 31 May 2022, and reminders were sent every two weeks. The participants had the choice to log out and resume as many times as needed until the predefined end date of phase II. The statements were presented in a random order for the participants to complete two tasks: sorting and rating the statements. First, the participants were asked to sort the statements into different piles based on how they consider ideas to be related and to label these piles. The participants were explicitly instructed not to sort statements according to priority or value (e.g., important, hard-todo) and not to group dissimilar statements into an indefinite pillar (e.g., labeled "other"). Second, the participants were asked to rate statements on two dimensions: (i) Importance (i.e., how important is this factor for the uptake of MR-HIFU treatment for bone metastases in your country?), and (ii) Changeability (i.e., how possible is it to act on this factor to promote the adoption of MR-HIFU for bone metastases in your country?). To answer both questions, each statement was rated using 5-point Likert scales, from 0 (not at all important/not at all possible) to 4 (extremely important/extremely possible). Data generated in phase II were analyzed using the GCM software (Concept System Inc., Version 2020). To generate the point map, multidimensional scaling (MDS) was used to attribute XY coordinates to the statements, which were then plotted into a twodimensional plane. To understand the cohesiveness between statements, bridging indices were calculated (on a 0 to 1 scale). Bridging indices closer to 0 indicate that a statement was often piled together with statements immediately adjacent to it on the map. Finally, we calculated the stress value for this study. 
The stress value reflects the discrepancy between the input data matrix (i.e., the original sorting data) and the final point map (File S2) [22]. Stress values of previous GCM studies ranged between 0.205 and 0.365. Thus, having a lower stress value than the average of previous studies (0.285) indicates that the participants sorted the statements in a similar manner [19,20]. To develop the cluster map, Ward's hierarchical cluster analysis was applied to group statements reflecting similar concepts into clusters. To decide on the final number of clusters, two researchers (ACS and JSCG) independently examined several cluster solutions (from 15 to six). Starting with the 15-cluster solution, the clusters were merged one by one until information was lost, which could impact the practicality or interpretability of the cluster map. Bridging indices were considered while constructing the cluster map, and labels derived from the original sort data. A closing session was organized in a hybrid event with all authors to finalize the labeling of the clusters (in cases where a clear preference from the original sort data could not be identified). Furthermore, we calculated average ratings for each statement and cluster of statements. Average rating values were plotted in pattern matches to show how the clusters were ranked according to importance and changeability. Average ratings were plotted in go-zone displays (i.e., bi-variate graphs for two rating dimensions-importance and changeability). The go-zone is divided into four quadrants (above and below the mean rating for each dimension). Statements falling at the northeast quadrant are important statements, on which it is possible to act, and therefore should be prioritized. The Pearson's correlation coefficient was calculated to measure the linear relationship between the two rating dimensions. Lastly, subgroup analyses were performed per country, and we calculated the variance of average ratings to determine the coherence between country subgroups. Participants Overall, 79 stakeholders were invited, and 45 of them were involved in at least one phase of this study, resulting in a participation rate of 56%. In phase I, 28 (35%) the participants contributed to the brainstorming task. In phase II, 31 (39%) contributed to the sorting task, 33 (41%) rated statements according to importance, and 29 (36%) according to changeability. Table 1 shows the participants' characteristics according to each phase of this study. Collected Statements Seventy-one statements were collected at the end of phase I. Monthly reminders were useful especially because when the participants logged in a second time, they could read and complement the statements added by other participants. For example, one participant added the statement "reimbursement"; in a second login other participants complemented with the statements "Reimbursement in ambulatory care is essential", and "Reimbursement is important, both inside the hospital and in ambulatory care". After adjusting for redundancy and potential ambiguity, 49 statements entered phase II to be sorted and rated. In the File S3, Figure S1 and Table S2 detail and exemplify the process of splitting and merging statements. Concept Maps Sorting data from 28 participants entered the MDS and cluster analysis. Three participants had to be excluded because they sorted most statements according to priority (e.g., do not agree, important) or value (i.e., two piles of positive vs. negative factors). 
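The MDS embedding, Ward clustering, go-zone split and Pearson correlation described above were run in the Group Wisdom software. For readers who want to reproduce the general pipeline outside that platform, a minimal open-source sketch is shown below. It uses toy sorting and rating data, not the study's 49-statement matrix, and is only an approximation of the proprietary implementation (for instance, it does not compute bridging indices).

```python
# Hedged sketch of the core GCM computations: dissimilarities from sorting data,
# 2D MDS point map, Ward clustering, go-zone quadrants and Pearson correlation.
# All data below are toy values.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_sorters, n_statements = 25, 12

# sorts[k, i] = pile into which sorter k placed statement i (toy data)
sorts = rng.integers(0, 4, size=(n_sorters, n_statements))

# co-occurrence: fraction of sorters placing statements i and j in the same pile
co_occurrence = (sorts[:, :, None] == sorts[:, None, :]).mean(axis=0)
dissimilarity = 1.0 - co_occurrence

# point map: 2D coordinates for each statement
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
xy = mds.fit_transform(dissimilarity)

# cluster map: Ward's hierarchical clustering on the MDS coordinates
clusters = fcluster(linkage(xy, method="ward"), t=4, criterion="maxclust")

# go-zone: statements above the mean on both rating dimensions
importance = rng.uniform(1.5, 4.0, n_statements)
changeability = rng.uniform(1.5, 4.0, n_statements)
go_zone = (importance > importance.mean()) & (changeability > changeability.mean())
pearson_r = np.corrcoef(importance, changeability)[0, 1]

print("cluster per statement:", clusters)
print("go-zone statements:", np.flatnonzero(go_zone))
print(f"Pearson r between rating dimensions: {pearson_r:.2f}")
```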
The point map (Figure S2 in File S4) shows the statements (and their respective identification numbers) plotted on an x-y chart. The calculated stress value was 0.2560. The cluster map (Figure 2) comprises 12 clusters: "competitive treatments", "physicians' attitudes", "alignment of resources", "logistics and workflow", "technical disadvantages", "radiotherapy as first-line therapy", "aggregating knowledge and improving awareness", "clinical effectiveness", "patients' preferences", "reimbursement", "cost-effectiveness" and "hospital costs". Table 2 illustrates one representative statement for each cluster, and a full list of the statements contained in each cluster is provided in the Supplementary Materials (File S5). To ensure internal validity, one adjustment in the clusters had to be made. According to the initial hierarchical cluster analysis, statement 14 (i.e., "difficult patient recruitment, due to a large range in referring medical specialists") was assigned to the cluster "radiotherapy as first-line therapy". However, bridging values indicated that statement 14 was often piled with statements from the clusters "physicians' attitude" and "logistics and workflow". Because statement 14 matched the issue addressed in the cluster "physicians' attitude" more appropriately, it was manually moved to this cluster.
Importance and Changeability of Statements and Clusters
Pattern matches show the differences between the two rating dimensions (importance vs. changeability) (Figure 3). The cluster "clinical effectiveness" was the most important and the most changeable, while the cluster "competitive treatments" was the least important and the least changeable. The cluster "clinical effectiveness" was the most important, followed by "radiotherapy as first-line therapy" and "patients' preferences". The coherence of perceived importance was notably lower for the clusters "reimbursement" and "clinical effectiveness" (i.e.,
variance between countries 0.34 and 0.14, respectively). Table 3 shows the clusters ranked in order of importance. For the cluster "reimbursement", average importance ratings were higher for Germany and the Netherlands (average ratings ≥ 3.00) compared to Italy (average 2.33) and Finland (average 1.83). The low coherence between countries regarding the importance of the cluster "clinical effectiveness" was explained by divergence in one country. Figure 4 shows the average ratings on the importance dimension according to country-specific subgroups. In Italy, the most important factors were the availability of anesthesiologists for MR-HIFU procedures (statement 43) and the frequency of time slots at the MRI dedicated to HIFU (statement 31), both from the cluster "alignment of resources". Figure 5 shows average ratings for how important the statements are and how possible it is to act on each statement to promote the adoption of MR-HIFU. The correlation between the two rating dimensions was high (r = 0.77), resulting in 22 (44%) statements falling in the northeast quadrant (i.e., important statements on which it is possible to act). Notably, all statements contained in the clusters "clinical effectiveness" and "patients' preferences" fell into the northeast quadrant. At least one factor from eight other clusters (including "physicians' attitudes", "alignment of resources", "logistics and workflow", "technical disadvantages", "radiotherapy as first-line therapy", "aggregating knowledge and improving awareness", "reimbursement", and "cost-effectiveness") fell into the northeast quadrant.
The cluster map developed in our study elicited several individual experiences and offers a conceptual understanding of the factors that may influence the adoption of MR-HIFU in clinical practice. The low stress value (0.25) shows that the participants sorted statements in a similar manner; however, the subgroups per country perceived the importance of these factors slightly differently. In subgroup analysis per country, reimbursement is notably more important in Germany and the Netherlands compared to Finland, which might be explained by the specific health care financing structures of these countries [23]. For example, in Germany health care providers can negotiate supplementary bundled payment from statutory health insurances for innovative procedures (Neue Untersuchungs-und Behandlungsmethoden) In contrast, none of the statements from the clusters "competitive treatments" and "hospital costs" fell in the northeast quadrant. Statements located in the northeast quadrant are listed in the Supplementary Materials (File S6). Discussion Evidence from the FURTHER-trial is expected to be paramount to the adoption of MR-HIFU but is not enough to ensure successful adoption of this technology. The cluster map developed in our study elicited several individual experiences and offers a conceptual understanding of the factors that may influence the adoption of MR-HIFU in clinical practice. The low stress value (0.25) shows that the participants sorted statements in a similar manner; however, the subgroups per country perceived the importance of these factors slightly differently. In subgroup analysis per country, reimbursement is notably more important in Germany and the Netherlands compared to Finland, which might be explained by the specific health care financing structures of these countries [23]. For example, in Germany health care providers can negotiate supplementary bundled payment from statutory health insurances for innovative procedures (Neue Untersuchungs-und Behandlungsmethoden) [10]. In contrast, Finland has a system of cost-outlier payment (i.e., individual cases with exceptionally high costs are billed separately) and Finnish municipalities act as both payers and providers of health care [23]. Moreover, in Germany and the Netherlands, the time-lag between collection of data (e.g., resource use) and preparing the data for hospital reimbursement takes in average two years, while in Finland, this time-lag for the data is less than one year [23]. In addition, divergences between countries could be explained by MR-HIFU being at different phases of implementation within the specific organizations or health care sys-tems [24,25]. This could explain why in our results the cluster "clinical effectiveness" is perceived as the most important in all countries, except for Italy where the cluster "alignment of resources" is more important. A multiple case study on the adoption of intensitymodulated radiotherapy found that availability of resources is very important at the pre-implementation phase (i.e., when adopters are still forming an attitude about the innovation). In contrast, clinical evidence becomes more important in the post-implementation phase (i.e., confirming the decision and continuing action) [24]. In health care markets, the adoption of technologies often follows a cyclical and dynamic process, more so for medical devices that are continuously being updated and enhanced with supplementary technology [24]. 
There are several theories and frameworks describing the diffusion of innovations in health care [25,26]. Based on a literature review of theoretical and empirical studies, Greenhalgh et al. proposed a theoretical framework, the NASSS framework [27]. The NASSS framework stands for Non-adoption, Abandonment, Spread, Scale-up and Sustainability of health and care technologies. According to the NASSS framework, the probability of successful adoption depends on the degree of complexity in seven domains: (i) the condition, (ii) the technology, (iii) the value proposition, (iv) the adopter system, (v) the health care organization, (vi) the wider system, and lastly (vii) the continuous embedding and adaptation over time [13,27]. The statements identified in our study generally fit the domains of the NASSS framework, even though the structure/categorization may deviate in some points [27,28]. For example, the clusters "alignment of resources" and "logistics and workflow" reflect the complexity within the health care organization (domain v), and the cluster "physicians' attitude" reflects the complexity within the adopter system (domain iv). On the other hand, the statement "bone metastases patients are often unfit for general anesthesia" (ID 45) highlights a complexity that could be intuitively placed within the condition domain. However, this statement was grouped in the cluster "physicians' attitude" because it was assumed to be an important part of the physicians' rationale. Hence, although the factors influencing the adoption of MR-HIFU echo previous findings, the relevance of each factor (and how they interact) is notably specific to the case studied [13].

According to our results, to promote the adoption of MR-HIFU for pain palliation of bone metastases, clinical evidence from randomized clinical trials (statement 34) is seen as the utmost priority. This might result from the fact that 70% of our participants were involved in the FURTHER-trial. However, previous research has shown that the strength or quality of scientific evidence does not always have a large influence on the decision to adopt innovations in health care [12,29]. For many decision-makers, experiential knowledge can feel more relevant and applicable, and real-world data about the budgetary, operational, and patient impacts can have an equally high impact [12]. Although the cluster "competitive treatments" was perceived as generally unimportant, it is noteworthy that "radiotherapy as first-line treatment" was clustered separately. Radiotherapy is the current standard of care for patients with bone metastases [4], and its importance for the adoption of MR-HIFU is indubitable. However, the competitive advantage of radiotherapy seems difficult to overcome, largely due to the logistic advantages of radiotherapy and the already established referral workflow between care providers.

There were several advantages to using GCM alongside a multicentric RCT. First, GCM makes it possible to study the context in which the intervention will be applied, which is normally overlooked by the RCT design. About 30% of the participants were not members of the FURTHER consortium, such as representatives from medical societies and regulatory bodies, who broadened the perspective of an otherwise highly specialized research group. Second, in a multicentric European RCT, the online and asynchronous format was advantageous for engaging participants who have busy schedules and are geographically dispersed [30].
Third, GCM brainstorming has been shown to be efficient in terms of time and financial costs compared to other qualitative research approaches such as interviews [31]. Fourth, GCM offered a structured process that allowed the engagement of different stakeholders while giving them equal voice and relevance [20]. The anonymous participation in the brainstorming task allowed the participants to respond freely and may offset response behavior that can stem from the hospital hierarchy [20]. Moreover, the involvement of stakeholders in the process itself creates commitment to the adoption of MR-HIFU [20]. The online GCM format qualified as a reliable and practical solution for stakeholder engagement in the face of the current travel restrictions imposed by the COVID pandemic. However, it should be acknowledged that the COVID pandemic could have influenced the perceived importance of some factors. For instance, the availability of anesthesiologists for MR-HIFU procedures was perceived as a very important factor. Because anesthesiologists were pulled from elective treatments to attend to patients with COVID and were broadly unavailable for MR-HIFU treatments, the importance of this factor could have been overestimated.

Because MR-HIFU is in an early phase of implementation in clinical practice and the topic is novel, the number of participants was considered sufficient to answer the research question. Although GCM studies can have larger sample sizes, the number of participants at each phase was appropriate to perform all the GCM analyses [20]. The overall participation rate was similar to the average participation in online-based qualitative studies, which according to a systematic review is 44.1% [32]. One important limitation of the present GCM study was low patient representation. The patient group consists of older patients with advanced cancer, who have multimorbidity, limited mobility, and limited life expectancy. The online format was thought to be appropriate because it avoids in-person interaction (e.g., as needed for focus groups). However, patient recruitment for the FURTHER-trial stopped for two years during the COVID pandemic. As a result, only six patients were invited to participate or to appoint a representative, and five declined, mainly due to a language barrier. Future studies that intend to apply the GCM methodology in the context of a multinational trial should consider engaging patients in their own language.

Conclusions

In conclusion, GCM offered a structured process that promoted the engagement of different stakeholders alongside the FURTHER-trial. The resulting concept maps shed light on how the participants discern the interrelationships and the relevance of factors that may influence the adoption of MR-HIFU in clinical practice in Europe. Although these are likely to change as the technology evolves and the implementation process continues, the present GCM study was able to construct a common understanding among participants. The findings of this GCM study can be used as a basis to develop strategies and recommendations on how to support the adoption of MR-HIFU in European oncologic care.

Institutional Review Board Statement: Ethical review and approval were waived for this study because no personal information was collected, and the data are not considered sensitive or confidential in nature.

Informed Consent Statement: Informed consent was obtained from all subjects involved in this study.
v3-fos-license
2021-09-12T06:16:32.295Z
2021-09-11T00:00:00.000
237479577
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://academic.oup.com/jcem/article-pdf/107/2/e653/42224956/dgab671.pdf", "pdf_hash": "ffe538f1c881524201c52dae0a6530c91441a4aa", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42024", "s2fieldsofstudy": [ "Medicine" ], "sha1": "621a4dbef34ae7f341634379b9b3a3510b4fed8a", "year": 2021 }
pes2o/s2orc
Waist-height ratio and the risk of severe diabetic eye disease in type 1 diabetes: a 15-year cohort study

Disclosure summary: EBP reports receiving lecture honorariums from Eli Lilly, Abbott, Astra Zeneca, Sanofi, Boehringer Ingelheim and is an advisory board member of Sanofi. P-H.G. reports receiving lecture honorariums from Astellas, Astra Zeneca, Boehringer Ingelheim, Eli Lilly, Medscape, MSD, Mundipharma, Novo Nordisk, PeerVoice, Sanofi, Sciarc and being an advisory board member of Astellas, Astra Zeneca, Bayer, Boehringer Ingelheim, Eli Lilly, Janssen, Medscape, MSD, Mundipharma, Novo Nordisk, and Sanofi. No other potential conflicts of interest relevant to this article were reported. VH and CF report no conflict of interest.

Abstract

Context: Obesity prevalence has increased in type 1 diabetes (T1D). However, the relationship between body composition and severe diabetic eye disease (SDED) is unknown. Objective: To investigate the associations between body composition and SDED in adults with T1D. Methods: From 5401 adults with T1D in the Finnish Diabetic Nephropathy Study, we assessed 3468, of whom 437 underwent dual-energy X-ray absorptiometry for body composition analysis. The composite outcome was SDED, defined as proliferative retinopathy, laser treatment, anti-VEGF treatment, diabetic maculopathy, vitreous hemorrhage, or vitrectomy. Logistic regression analysis evaluated the associations between body composition and SDED. Multivariable Cox regression analysis assessed the associations between the anthropometric measures and SDED. Subgroup analysis was performed by stages of albuminuria. The relevance ranking of each variable was based on the z statistic. Results: During a median follow-up of 14.5 (IQR 7.8-17.5) years, 886 SDED events occurred. The visceral/android fat ratio was associated with SDED (OR 1.40, z=3.13), as were the percentages of visceral (OR 1.80, z=2.45) and android fat (OR 1.28, z=2.08), but not the total body fat percentage. Waist-height ratio showed the strongest association with the SDED risk (HR 1.28, z=3.73), followed by the waist circumference (HR 1.01, z=3.03), body mass index (HR 1.03, z=2.33), and waist-hip ratio (HR 1.15, z=2.22). The results were similar in normo- and microalbuminuria, but not significant in macroalbuminuria. A WHtR ≥ 0.5 increased the SDED risk by 28% at the normo- and microalbuminuria stages. Conclusions: WHtR, a hallmark of central obesity, is associated with SDED in individuals with type 1 diabetes.

Introduction

Diabetic retinopathy (DR) is a common microvascular complication of diabetes, which may progress to severe stages and even to blindness. It is the fifth most common cause of blindness and visual impairment worldwide (1). From 1990 to 2015, the crude global prevalence of each cause of blindness and visual impairment decreased, except for DR, which increased (1). Given that the incidence of type 1 diabetes increases by 4.2% annually, also among young adults (2), it is possible that in the future there will be an even higher number of individuals who have to cope with type 1 diabetes.
The knowledge of modifiable risk factors, early diagnosis, and treatment are crucial to avoid the progression and the burden of DR. Although there are well-known risk factors for severe diabetic eye disease (SDED) in individuals with type 1 diabetes (3)(4)(5), it is still unknown whether central obesity is related to SDED in those individuals. A few cross-sectional studies have been conducted in individuals with type 1 diabetes to assess the relationship between body mass index (BMI) and DR, but the results are controversial (6,7). Furthermore, BMI may not reflect central obesity (8,9), which has been considered a risk factor for DR in individuals with type 2 diabetes (10,11). Considering that the prevalence of obesity has increased in type 1 diabetes over the last decades (12), it is important to understand whether the fat distribution, especially the visceral fat, is a risk factor for SDED in this population. Thus, this study aimed to explore the associations between body composition and SDED in adults with type 1 diabetes.

Study design

This study included two different analyses. First, an observational prospective study was conducted to investigate the impact of anthropometric measures related to central obesity, namely waist-height ratio (WHtR), waist-hip ratio (WHR), and waist circumference (WC), as well as BMI as a measure of general obesity, on the risk of SDED in a large cohort of adults with type 1 diabetes. Second, a cross-sectional analysis was performed to investigate the association between body composition and the prevalence of any retinopathy except SDED, or of SDED. Furthermore, a similar cross-sectional analysis was performed to evaluate the association between WHtR (representing central obesity) and the Early Treatment of Diabetic Retinopathy Study (ETDRS) grading (13).

Study population

The Finnish Diabetic Nephropathy (FinnDiane) Study is a nationwide, prospective, multicenter (93 centers across Finland) study that has been running since 1997 and aims to identify risk factors for type 1 diabetes complications; recruitment of new participants is still ongoing. For the longitudinal analysis, from a total of 5401 individuals with type 1 diabetes in the FinnDiane cohort, 1933 individuals were excluded due to SDED at baseline. Thus, we assessed 3468 individuals for the occurrence of SDED. Then, since no anthropometric measure was associated with SDED in the macroalbuminuria stage, we limited the analyses to 3146 individuals with normo- and microalbuminuria, of whom 437 had their body composition evaluated by dual-energy X-ray absorptiometry (DXA) as part of the regular study visits.

The composite outcome was SDED, defined as proliferative diabetic retinopathy (PDR), the initiation of laser treatment or anti-vascular endothelial growth factor (anti-VEGF) treatment, diabetic maculopathy, vitreous hemorrhage or vitrectomy, identified from the Care Register for Health Care until the end of 2017, whichever came first. The diabetic retinopathy classification at baseline was based on the FinnDiane questionnaire, in which the participant as well as the attending physician answered the question of whether the participant had or did not have previous diabetic retinopathy and/or had undergone laser treatment for diabetic eye disease. Furthermore, this information was later double-checked by a physician from the FinnDiane Study Group by reviewing the patient files for all potential information on retinal screening and ophthalmology consultations.
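To make the "whichever came first" composite outcome concrete, the sketch below derives a time-to-first-SDED-event variable from per-patient event dates. The column names, the registry layout, and the censoring handling are illustrative assumptions, not the actual FinnDiane data pipeline.

```python
import pandas as pd

# Hypothetical registry extract: one row per patient, one date column per SDED component
# (NaT when the event never occurred), plus the baseline visit date.
df = pd.DataFrame({
    "baseline": pd.to_datetime(["2001-03-10", "2003-07-22"]),
    "pdr": pd.to_datetime(["2009-05-01", None]),
    "laser": pd.to_datetime([None, None]),
    "anti_vegf": pd.to_datetime([None, None]),
    "maculopathy": pd.to_datetime(["2011-01-15", None]),
    "vitrectomy": pd.to_datetime([None, None]),
})

components = ["pdr", "laser", "anti_vegf", "maculopathy", "vitrectomy"]
censor_date = pd.Timestamp("2017-12-31")          # end of register follow-up

first_event = df[components].min(axis=1)          # earliest component, NaT if none
df["sded"] = first_event.notna().astype(int)      # event indicator
end = first_event.fillna(censor_date)             # event date or administrative censoring
df["followup_years"] = (end - df["baseline"]).dt.days / 365.25

print(df[["sded", "followup_years"]])
```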
Based on this information, participants were categorized into no retinopathy, any retinopathy except SDED, and SDED. In a subset of participants, ETDRS grading data were available for a further sensitivity analysis, and they were classified at baseline according to the ETDRS grading as no retinopathy (ETDRS 10), mild nonproliferative diabetic retinopathy (NPDR) (ETDRS 20 and 35), moderate NPDR (ETDRS 43 and 47), severe NPDR (ETDRS 53) and PDR (ETDRS 61-85) (13).

Appendicular lean mass refers to the lean mass of both legs and arms. BMI was calculated as total body weight (kilograms) divided by the square of the height (meters). WC was measured in centimeters with a stretch-resistant tape at the horizontal plane midway between the superior iliac crest and the lower margin of the lowest rib. The hip circumference was measured with the same tape around the widest part over the great trochanters, and WHR was calculated by dividing the WC by the hip circumference. The WHtR was calculated by dividing the WC by the height, and values < 0.5 were considered normal for both sexes (15).

Statistical analyses

Data on categorical variables are presented as frequencies, and continuous variables as means (± standard deviation, SD) for normally distributed values and otherwise as medians (interquartile range, IQR). Between-group comparisons were performed with the χ2 test for categorical variables, with ANOVA for normally distributed continuous variables, and with the Mann-Whitney or Kruskal-Wallis test for non-normally distributed continuous variables. After excluding the individuals with SDED at baseline, a multivariable Cox regression analysis was used to assess the association between the anthropometric measures and the risk of SDED, adjusted for baseline covariates such as age at onset of diabetes, duration of diabetes, sex, glycated hemoglobin A1c (HbA1c), systolic blood pressure (SBP), triglycerides, smoking, lipid-lowering medication, any retinopathy except SDED, estimated glomerular filtration rate (eGFR) and DN stages. Then, in a subset of 768 participants, a sensitivity analysis for the association between WHtR and the risk of SDED was performed using a similar model but replacing the covariate any retinopathy except SDED at baseline with ETDRS grading at baseline. Follow-up time was counted from the baseline visit until one of the components of the composite outcome occurred. Given that WHtR was the anthropometric measure most strongly associated with the risk of SDED, we performed a score ranking of WHtR and the other risk factors (HbA1c, age at onset of diabetes, duration of diabetes, triglycerides, systolic blood pressure, smoking, sex, lipid-lowering medication, any retinopathy except SDED, eGFR and DN stages) using z statistics. Since the interactions between sex and the anthropometric measurements or body composition variables were not significant, the analyses were conducted by pooling men and women together. Finally, we used the %FINDCUT SAS macro tool to identify an optimal cutoff point for the WHtR to classify individuals at high risk versus low risk of SDED (16). After the establishment of the cutoff value, two groups were created and the risk of SDED was compared between the groups. In the cross-sectional analysis, a multinomial logistic regression model was used to evaluate the associations between body composition and any retinopathy except SDED or SDED, taking no retinopathy as the reference group. The model was adjusted for HbA1c, SBP, triglycerides, smoking, lipid-lowering medication, eGFR and DN stages.
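As a hedged illustration of the multivariable Cox model described above (the study itself used SAS-based tooling such as the %FINDCUT macro, not this code), the following Python sketch fits a Cox proportional hazards model with the lifelines package and ranks covariates by the absolute z statistic, mirroring the relevance ranking. The data frame, column names, and the rescaling of WHtR to 0.1-unit increments are assumptions for illustration, and the data are synthetic.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-in for the analysis dataset (not FinnDiane data).
df = pd.DataFrame({
    "followup_years": rng.uniform(1, 17, n),
    "sded": rng.integers(0, 2, n),
    "whtr_per_0_1": rng.normal(5.2, 0.6, n),   # WHtR expressed in 0.1 units
    "hba1c": rng.normal(8.5, 1.2, n),
    "sbp": rng.normal(130, 15, n),
    "triglycerides": rng.normal(1.2, 0.5, n),
    "diabetes_duration": rng.normal(15, 8, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="sded")

# Hazard ratios and z statistics, ranked by |z| as in the relevance ranking.
summary = cph.summary[["exp(coef)", "z", "p"]]
print(summary.reindex(summary["z"].abs().sort_values(ascending=False).index))
```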
The same multinomial logistic regression model was used to evaluate the association between WHtR and the ETDRS grading.

Association between anthropometric measures and SDED

In the longitudinal dataset including all 3468 individuals with type 1 diabetes, the median age was 34.8 (IQR 25.7-45.1) years and 51.2% were female; baseline characteristics, including the median duration of diabetes, are shown in Table 1. In the analysis including all individuals, the WHtR was the anthropometric measure most strongly associated with the risk of SDED (HR 1.28, z=3.73).

Subgroups by DN stages

In the subgroup analysis by DN stages, WHR was no longer associated with SDED in the individuals with normo- and microalbuminuria (Table 2). At the macroalbuminuria stage, no anthropometric measure was associated with SDED. Thus, after excluding the individuals with macroalbuminuria and unknown stage of albuminuria at baseline, 3146 individuals remained for further analysis, of whom 24.3% developed SDED during a median follow-up of 15.0 (IQR 8.4-17.6) years (Table 1). At baseline, the median age was 34.3 (25.3-44.4) years, 51.9% were women and the median duration of diabetes was 14.7 (7.9-22.6) years. Among the individuals with normo- and microalbuminuria, at baseline, 99.6% of those with obesity (BMI > 30 kg/m2), 69.1% of those with overweight (BMI ≥ 25 kg/m2 and < 30 kg/m2) and 10.7% of those with normal weight (BMI < 25 kg/m2) presented a WHtR ≥ 0.5. Baseline clinical characteristics according to the incidence of SDED in individuals with normo- and microalbuminuria are shown in Table 1. Among those with normo- and microalbuminuria, the WHtR was the anthropometric measure most strongly associated with the risk of SDED (HR 1.32 per 0.1 increase, z=3.86), followed by WC (HR 1.01 per 1 cm increase, z=3.04) and BMI (HR 1.03 per 1 kg/m2 increase, z=2.73) (Table 2). The results were similar when individuals with normo- or microalbuminuria were analysed separately. The risk of SDED in the group with normo- and microalbuminuria was 28% higher (HR 1.28, 95% CI 1.08-1.50) in individuals with a WHtR ≥ 0.5 compared to the individuals with a WHtR < 0.5 (Figure 1). In the score ranking of the relevance of SDED risk factors at the normo- and microalbuminuria stages, WHtR appeared in the fifth position.

Association between body composition and SDED

The individuals with SDED presented similar body weight and lean mass (total and appendicular lean mass), notwithstanding that they had a higher percentage of body fat mass, visceral fat mass and android fat mass, and a lower percentage of body lean mass and appendicular lean mass, compared to those without SDED. Consequently, they had higher ratios of visceral and android fat to appendicular lean mass (Table 3). The percentages of visceral and android fat were positively associated with SDED, whereas the percentage of appendicular lean mass was negatively associated with SDED (Table 4). Interestingly, the percentages of the total body fat and the total body lean masses were not associated with SDED (Table 4). Using "no previous retinopathy" in the FinnDiane questionnaire as the reference group in a multinomial logistic regression model, the visceral fat mass percentage was also associated with any retinopathy except SDED at baseline (OR 1.63, 95% CI 1.03-2.59, p=0.04); however, the total body fat mass percentage was not (OR 1.01, 95% CI 0.97-1.05, p=0.62). The associations between body composition and SDED are shown in Table 4.
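The multinomial comparison across the three retinopathy categories can be sketched as below with statsmodels; the data frame and column names are hypothetical, and the real model additionally adjusted for HbA1c, SBP, triglycerides, smoking, lipid-lowering medication, eGFR and DN stage.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400

# Outcome coded 0 = no retinopathy, 1 = any retinopathy except SDED, 2 = SDED.
y = rng.integers(0, 3, n)
X = pd.DataFrame({
    "visceral_fat_pct": rng.normal(1.0, 0.5, n),
    "hba1c": rng.normal(8.5, 1.2, n),
    "sbp": rng.normal(130, 15, n),
})
X = sm.add_constant(X)

fit = sm.MNLogit(y, X).fit(disp=False)

# Exponentiated coefficients approximate odds ratios versus the reference category (0).
print(np.exp(fit.params))
```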
Discussion

In this study, we showed that a simple measure such as the WHtR is associated with an increased risk of SDED in adults with type 1 diabetes, placing it among the six most important risk factors for SDED in this population. Furthermore, we found that the central body fat distribution is associated with the presence of SDED. We are not aware of any other studies in a large cohort of individuals with type 1 diabetes that have assessed such relationships, especially stratified by different stages of albuminuria. Obesity is causally related to DN in individuals with type 1 diabetes (17), whilst its relationship with SDED is still unclear. Although studies including individuals with type 2 diabetes have shown that a higher BMI was associated with DR (18,19), a meta-analysis and systematic review revealed that being overweight or obese did not confer an increased risk of DR (20). Possibly, these discrepancies concerning the relationship between BMI and DR in individuals with type 2 diabetes arise because BMI does not necessarily reflect the body fat distribution, especially the central fat, which has been associated with DR in people with type 2 diabetes (10,11). Concerning studies in individuals with type 1 diabetes, the data are even more scarce, and the results are also controversial. Similar to our findings, a Belgian cross-sectional study (6), including 592 participants with type 1 diabetes, and the DCCT/EDIC study (3) have shown that individuals with DR presented with a higher BMI. A cross-sectional Australian study, including 501 adults with type 1 diabetes, found an association between BMI > 30 kg/m2 and DR (7). However, in the DCCT/EDIC study (3) the authors did not find an association between BMI and the progression of DR. Nevertheless, we have to take into consideration that we are looking at different endpoints. The DCCT/EDIC study evaluated the progression of DR, while the FinnDiane study did not look at each progressive stage of DR, but at the risk of developing a severe stage of diabetic eye disease. The discrepancies may also be explained by the fact that the DCCT/EDIC cohort is better clinically characterized, including a greater number of individuals with ETDRS grading, than the FinnDiane cohort. Another possible reason may be related to the relationship between body composition and SDED, which was not explored in the DCCT/EDIC study. Since we showed that the visceral fat mass percentage but not the total body fat mass percentage is associated with SDED, differences in body composition between the FinnDiane cohort and the DCCT/EDIC cohort may explain the different results, despite a similar BMI. In our study, BMI was positively associated with SDED, although it was the third of four anthropometric measures in the ranking of relevance. The weaker association, by z-value, between BMI and SDED compared to the association between WHtR and SDED may be due to the lower power of BMI compared to WHtR to estimate the visceral fat in individuals with T1D, according to previous research by our group (21). Recently, we also showed that although BMI and WHtR are associated with non-alcoholic fatty liver in adults with type 1 diabetes, WHtR shows a stronger association than BMI (22).
The observed differences between BMI and WHtR are even more relevant in clinical practice: in the present dataset, 10.7% of the individuals with normal BMI and 69.1% of those with overweight presented a WHtR ≥ 0.5, which means that several individuals at high risk of SDED would not be recognized if only a BMI ≥ 30 kg/m2 were considered as a risk factor. The central fat, estimated by WHR, has been associated with DR in a few studies including individuals with type 2 diabetes (10,11). However, in the present study, it was the last of four anthropometric measures in the ranking of relevance and, beyond that, WHR was not associated with SDED in the subgroup analysis according to DN stages. To understand the disagreement with the literature, it is important to recognize that the present study included individuals with type 1 diabetes, who differ from those with type 2 diabetes in many aspects; furthermore, the WHR was not as good an estimator of visceral fat as the WHtR, according to our previous research (21). In other words, it seems that visceral fat is the main factor for SDED; therefore, the stronger the association between the anthropometric measure and the visceral fat, the better the predictor. In the present study, we showed for the first time that the percentage of visceral fat mass is closely associated with SDED in individuals with type 1 diabetes and that the ratio of visceral to android fat shows an even stronger association. This result emphasizes the greater relevance of the visceral fat for the risk of SDED compared to the android fat, which includes the visceral and subcutaneous fat located in the android region. Furthermore, the associations between SDED and the ratios of visceral and android fat to appendicular lean mass demonstrate the importance of having a balance in the body composition concerning lean mass relative to central fat mass, since functional muscle tissue improves insulin sensitivity whereas visceral fat increases insulin resistance. The mechanism involved in the relationship between visceral fat and SDED is still unknown, albeit some hypotheses can be suggested. Adipocytes from visceral fat produce plasminogen activator inhibitor type 1 (PAI-1) (23), which has been associated with end-stage proliferative DR in individuals with type 2 diabetes (24). Visceral fat also produces TNF-α (25), which has been associated with DR in individuals with type 1 diabetes (26), besides leading to an inflammatory and insulin-resistant state (27), thus contributing to the increase in blood glucose and triglycerides, two relevant risk factors for SDED. Since insulin resistance has also been associated with low skeletal muscle mass (28), which has been associated with DR in type 2 diabetes (29), this may explain the negative association between SDED and the appendicular lean mass percentage, as well as the positive association between SDED and the ratios of central fat to appendicular lean mass in our study. Another possible link between SDED and visceral fat is the positive association between visceral fat and VEGF (30), which is involved in the pathogenesis of DR (31). Another novelty of the present study was to show the contribution of WHtR alongside the well-known risk factors for SDED. Similarly to the results from the DCCT/EDIC study (3), we showed that HbA1c is the most important risk factor for SDED in our cohort. However, we also found that central obesity, represented by WHtR, is another important risk factor.
It is of note that the association between WHtR and the risk of SDED remained after adjusting for ETDRS grading in a subset of individuals. In this study, no anthropometric measure was associated with the risk of SDED in individuals with macroalbuminuria. Possibly, the advanced DN stage is such an important risk factor for SDED that it overwhelms any other risk factor. The present study has some limitations. We used the variable "any retinopathy at baseline" to adjust the analysis, but there was no detailed information on the grading of the retinopathy in the questionnaires. Another limitation is that we did not have information on ETDRS grading for all individuals at baseline and during FinnDiane follow-up visits, which hampers any assessment of the impact of body composition and WHtR on the progression of DR, as was done with BMI in the landmark DCCT/EDIC study. However, we tried to mitigate this limitation by performing a sensitivity analysis that showed the WHtR was still associated with SDED after adjusting for baseline ETDRS grading. Another limitation of the present study is the fact that it was conducted in a Caucasian-Finnish population with type 1 diabetes; therefore, we cannot exclude that ethnicity may have an impact on the results, since the waist threshold may differ according to ethnicity. On the other hand, the WHtR threshold of 0.5 we found in our cohort for the risk of SDED is the same well-known WHtR threshold for cardiovascular risk and mortality (15,32) in the general population. Thus, our findings may motivate further studies to investigate the mechanisms involved in the relationship between visceral fat and SDED. This study has several strengths, and the main one is the long-term follow-up of a large cohort of individuals with type 1 diabetes. Second, the body composition was assessed by DXA, which is the gold standard method. Furthermore, we showed in a large sample of individuals with type 1 diabetes that WHtR, a simple measure with a unique threshold for both sexes, is associated with the risk of a severe complication of diabetes in the absence and at the early stage of DN. From a clinical perspective, this study not only highlights a new modifiable risk factor for SDED but, more importantly, shows that a simple anthropometric measure related to central obesity is associated with SDED in individuals with type 1 diabetes. Moreover, WHtR is a modifiable risk factor.

Acknowledgements

The skilled technical assistance of Anna Sandelin, Mira Korolainen, and Jaana Tuomikangas is gratefully acknowledged. The authors also acknowledge all people from the FinnDiane Study Group and all physicians and nurses at each FinnDiane center participating in patient recruitment and characterization.

Data availability

Restrictions apply to the availability of some or all data generated or analysed during this study, to preserve patient confidentiality or because they were used under license. The corresponding author will on request detail the restrictions and any conditions under which access to some data may be provided.

The logistic regression model was adjusted for age at onset of diabetes, duration of diabetes, sex, glycated hemoglobin A1c, systolic blood pressure, triglycerides, smoking, lipid-lowering medication, estimated glomerular filtration rate and DN stage. OR: odds ratio. CI: confidence interval. Appendicular means both arms and legs. The percentages of body fat and lean mass are relative to total body weight.
v3-fos-license
2018-04-03T05:16:46.068Z
2009-01-01T00:00:00.000
21590017
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.scielo.br/j/rbccv/a/hyn9bPn3ymNTXQTLcVmpyGC/?format=pdf&lang=pt", "pdf_hash": "9c5de9f88ab945a70b1e4bfc3daa397f517f7972", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42025", "s2fieldsofstudy": [ "Medicine" ], "sha1": "231a21d1ef0de4a526b987960aaf1d9c15cc3a51", "year": 2009 }
pes2o/s2orc
Outcomes of aortic coarctation surgical treatment in adults

Objective: The aim of this study is to describe our experience in aortic coarctation surgery in adult patients by assessing the immediate and mid-term outcomes. Methods: From January 1997 to March 2000, 50 consecutive adult patients underwent surgery for correction of aortic coarctation through a left lateral thoracotomy. Of these, forty-two (84%) patients presented high blood pressure, with a mean systolic arterial pressure of 170.56 mmHg (125-220 mmHg). The mean pressure gradient in the coarctation area was 51.4 mmHg (18-123 mmHg). Other associated surgical cardiovascular diseases were not treated in the same operative act, except in two cases of patent ductus arteriosus (PDA). Three different techniques were used: aortic coarctation resection with end-to-end anastomosis was performed in 20 (40%) patients, coarctation enlargement with a bovine pericardial patch was performed in 22 (44%) patients, and synthetic tube interposition was performed in eight (16%) patients. Results: Operative morbidity was low; there was one case of bleeding that required reoperation. The most common immediate postoperative event was high blood pressure (98%), but it was easily controlled by intravenous drugs. There was no hospital death. The mean residual pressure gradient was 18.7 mmHg (8-33 mmHg). Patients were discharged in 9.5 days (5-30). The mean postoperative follow-up was 46.8 months (1-145 months) in 45 (91.8%) patients. Forty-one (91.1%) of these followed-up patients had normal blood pressure, 75.6% of them without drug intake, and 93.3% of these followed-up patients were asymptomatic. Four of them required a further surgical procedure: one needed a pacemaker implant, two other patients needed a cardiac valve replacement, and one had endocarditis. There was one related death due to sepsis secondary to endocarditis.

Faculdade de Medicina da Universidade de São Paulo, São Paulo, SP, Brazil. Correspondence address: Marcelo Biscegli Jatene, Av. Dr. Enéas de Carvalho Aguiar, 44, São Paulo, SP, Brazil - ZIP Code: 05403-900. E-mail: mbjatene@uol.com.br. Article received on October 13th, 2008; article accepted on June 5th, 2009.

Our experience with this type of disease detected in patients aged over 18 years, including preoperative clinical aspects, correction techniques, and immediate and long-term postoperative evolution, will be discussed next.

METHODS

From January 1997 to March 2000, 50 consecutive patients aged over 18 years with aortic coarctation underwent surgery at the Heart Institute of the Clinics Hospital - University of São Paulo. The patients' ages ranged from 18 to 59 years (mean 25.4 years), and 36 (72%) were male. The aortic coarctation was located in the descending aorta after the emergence of the left subclavian artery in all cases; patients presenting stenosis or coarctation of the aortic arch or abdominal aorta were excluded from this study. SH was present in 42 (84%) cases, with a mean systolic pressure of 170.5 mmHg (125 to 220 mmHg) and mean diastolic pressure.

INTRODUCTION

The presence of aortic coarctation not submitted to surgical correction in adult patients leads to the frequent occurrence of high blood pressure (HBP) in the upper limbs, as well as presenting a greater risk of several clinical symptoms.
Problems such as acute myocardial infarction, intracranial hemorrhage, aortic rupture and cardiac insufficiency may manifest at various moments, in association with HBP, which could lead to increased mortality due to the possibility of the occurrence of any of the problems previously mentioned [1,2]. In many patients, the aortic coarctation has an asymptomatic evolution, with the diagnosis being made during the investigation initiated after HBP is detected. In almost all cases, there is an exuberant collateral circulation consisting of dilated intercostal arteries, internal thoracic arteries or branches of arteries near the aortic coarctation. Perfusion from the distal aorta to the aortic coarctation occurs, generating sufficient flow to satisfactorily perfuse the corresponding organs and tissues, concealing symptoms and complicating early diagnosis [2,3]. The ideal time for surgical referral in aortic coarctation cases, following the established diagnosis, is variable. However, according to general consensus, the aortic coarctation must be corrected in the neonatal period or childhood, in order to avoid the sequelae of late treatment.

Conclusion: Surgical treatment of aortic coarctation, even in adult patients, is an efficient therapeutic choice, regardless of the applied surgical technique, with low morbidity and mortality. It efficiently reduces the arterial pressure levels in both the immediate and the mid-term follow-up. Descriptors: Aortic coarctation/surgery. Aorta/surgery. Heart defects, congenital. Adult.

Twenty-five (50%) patients presented symptoms of small extent (myocardial insufficiency I and II, according to the New York Heart Association); 19 (38%) were asymptomatic and 6 (12%) presented more intense symptoms (myocardial insufficiency III and IV), characterized predominantly by dyspnea on minimal effort, as well as symptoms related to HBP, such as headache and dizziness. Two (4%) patients presented symptoms of a preoperative hypertensive emergency, one case of acute pulmonary edema and the other of hypertensive encephalopathy, both with positive evolution, with HBP controlled by specific medication and regression of symptoms. Two (4%) patients who presented symptoms of congestive cardiac insufficiency caused by valvulopathy, moderate mitral insufficiency in one patient and moderate aortic stenosis with an aortic transvalvar gradient of 58 mmHg in the other, had the aortic coarctation diagnosed during the work-up of the valve disease and underwent valvulopathy correction prior to the aortic coarctation correction. In 13 (26%) patients, there were other associated heart diseases, of whom 8 (16%) presented valvar disease and 5 (10%) presented congenital heart disease. The diagnoses of the associated lesions are shown in Table 1. After the clinical suspicion of aortic coarctation, all patients underwent echocardiographic evaluation that confirmed the clinical diagnosis of aortic coarctation, as well as detecting hypertrophy of the left ventricle (LV) in 33 (66%) patients and moderate LV dysfunction in 5 (10%). Complementary diagnosis by angiography was performed in 34 (68%) patients and by magnetic resonance imaging in 10 (20%) patients. The mean systolic gradient in the aortic coarctation region was 58.2 mmHg (28 to 123 mmHg).

Surgical technique

All patients underwent surgical treatment by left lateral-posterior thoracotomy; the approach was performed through the 4th left intercostal space, with selective lung intubation.
A careful thoracic opening was performed, with dissection and isolation of the aorta and the coarctate area. Different correction techniques were used, varying according to the intraoperative aspect or the surgeon's preference for a given technique. Among the patients in whom the correction was performed using synthetic tubes, in 5 (10%) the tube was interposed to replace the coarctate aortic segment, with end-to-end anastomoses to the proximal and distal stumps of the aorta; in the remaining 3 (6%), the tube was used as a bypass, with the proximal anastomosis performed with the subclavian artery and the distal anastomosis with the descending aorta, distal to the aortic coarctation.

The numeric variables are presented as mean and standard deviation. The preoperative and postoperative variables were compared by the Student's t test for paired and unpaired values and by two-factor analysis of variance. Statistical significance was considered for a value of p<0.05. The behavior of the variables over time was estimated by the regression model proposed by Blackstone [15].

RESULTS

There was no hospital mortality. The surgical treatment consisted of isolated correction of the aortic coarctation in all cases, except for two patients, in whom section and suture of the patent ductus arteriosus were performed as part of the surgical technique for isolation of the aorta, in order to facilitate its mobilization. In the immediate postoperative period (IPO), HBP was observed in 49 (98%) patients, requiring specific medication (sodium nitroprusside in the first hours and the association of beta-blocking agents or angiotensin-converting enzyme inhibitors). Temporary low cardiac output was observed in two (4%) cases, with resolution in both by the second postoperative day; arrhythmias were detected in two (4%) patients, atrial fibrillation (AF) in one and non-sustained ventricular tachycardia in the other, with positive resolution after hydro-electrolytic control and medication with amiodarone and intravenous lidocaine (Xylocaine), respectively. One patient required reoperation due to bleeding, in order to revise the hemostasis, with positive evolution.

Forty-five (91.8%) patients were evaluated over 1 to 145 months of follow-up (mean 46.8 months). One late death was observed within three months of the postoperative period, from sepsis due to bacterial endocarditis of the aortic valve; three patients required further surgical procedures (one a pacemaker implant and two a cardiac valve replacement). Echocardiographic evaluation revealed a reduction of the mean gradient across the aortic coarctation area, from 58.2 mmHg to 21.4 mmHg. The results of the preoperative and postoperative gradients with the different techniques used are shown in Figure 1. In the evaluation of the gradient across the aortic coarctation area in all patients, a significant reduction in the postoperative period was observed compared to the preoperative period, as displayed in Figure 2. When comparing the surgical techniques of aortic coarctation correction (enlargement with bovine pericardial patch versus resection and end-to-end anastomosis), a significant reduction of the gradient in the postoperative period compared to the preoperative period was observed with both techniques; however, when comparing the two techniques with one another, the preoperative and postoperative gradients were similar (Figure 3).
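As a simple illustration of the pre- versus postoperative gradient comparison described above, a paired Student's t test can be run as in the sketch below. The numbers are made-up values in the reported range, not the study data, and the study additionally used two-factor analysis of variance and Blackstone's regression model, which are not shown here.

```python
import numpy as np
from scipy import stats

# Hypothetical paired gradients (mmHg) for a handful of patients,
# chosen to resemble the reported means (58.2 pre vs 21.4 post).
pre = np.array([45.0, 62.0, 58.0, 70.0, 55.0, 60.0])
post = np.array([18.0, 25.0, 20.0, 28.0, 19.0, 22.0])

t_stat, p_value = stats.ttest_rel(pre, post)

print(f"mean reduction: {np.mean(pre - post):.1f} mmHg")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")  # compared against p < 0.05
```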
After the reduction of the postoperative gradients, during the mid-term evolution the mean gradients measured at different moments of the postoperative period were maintained (represented by the dots in the dispersion diagram), as displayed in Figure 4. Considering the 45 patients followed up during the postoperative evolution, regarding the arterial pressure, a reduction of the systolic pressure levels was observed, with statistical significance between the preoperative and postoperative periods, as shown in Figure 5. From the clinical standpoint, out of 45 patients evaluated, 42 (93.3%) were found asymptomatic and 3 (6.7%) had mild symptoms (myocardial insufficiency I and II).

Fig. 2 - Preoperative and postoperative gradients

Regarding the control of arterial pressure, 41 (91.1%) patients were found normotensive (mean pressure of 127/76 mmHg) and 4 (8.9%) still remained hypertensive while using medication; 34 (75.6%) patients were not taking any sort of anti-hypertensive medication and 11 (24.4%) were under medication, 9 (20%) under one medication (preferably a beta-blocking agent) and 2 (4.4%) under two medications (beta-blocking agent and diuretic drugs). The behavior of the systolic arterial pressure during the postoperative evolution is shown in Figure 6.

DISCUSSION

Considering the surgical techniques used for the aortic coarctation correction, some aspects should be observed in order to obtain good results and prevent eventual recurrence of the aortic coarctation; among these, the resection of the coarctate area together with the ductal tissue has an important role in preventing problems. In neonates and infants, resection of the aortic coarctation with end-to-end anastomosis is advocated as the technique of preference, due to the possibility of growth of the aorta and the low rate of recoarctation [16]. Sanches et al. [17] suggested that abnormalities of the periductal tissue in the aortic wall might be responsible for restenosis in 22% of the operated patients, in the period from 6 weeks to 66 months. In adults, it is not always possible to perform the resection of the coarctate area, due to the local anatomical conditions, such as the presence of large-caliber collaterals, difficulty in mobilizing the aorta, as well as hypoplastic segments of the aorta next to the aortic coarctation. Therefore, other techniques can be used, minimizing the risks of bleeding caused by excessive manipulation and local dissection, such as enlargement of the coarctate area with patches (bovine pericardium or Gore-tex) and interposition or bypass with synthetic tubes. Heinemann et al. [8] indicate the extra-anatomical bypass in the following conditions: complex coarctation, reoperations, extensive aortic occlusive disease and complicated aneurysms, although this last condition is considered to be an exception. In our experience, resection of the coarctate area with end-to-end anastomosis was possible in only 40% of the cases; in 60% of the cases of our series, resection was not possible, and enlargement of the coarctate area with a bovine pericardial graft (44%) or synthetic tube interposition (16%) was performed. Bouchart et al. [7] report the use of end-to-end anastomosis for the correction of 86% of aortic coarctation cases with a mean age of 28 years, emphasizing the importance of extensive mobilization of the entire aorta (arch, base vessels and descending aorta). Aris et al.
[18], in their experience with patients with aortic coarctation aged over 50 years, report the use of Dacron tubes to perform a bypass of the aortic coarctation. Oliveira et al. [19], in 29 patients, performed aortoplasty without graft in 9 (31%), aortoplasty with graft in 18 (62%), end-to-end anastomosis in 1 (3.5%) and aortoplasty with the subclavian artery in 1 (3.5%). In the adult patient, concerns about the necessary growth of the aorta are minimized, thus allowing more options in the use of the referred techniques, with a lower probability of recurrence of the aortic coarctation. Aortic coarctation aortoplasty employing the Dacron patch was considered effective and safe by Venturini et al. [20], with the occurrence of aneurysmal formation in only one of the 60 patients. On the other hand, Parks et al. [21] reported that, of 39 patients being followed up, 10 presented aortic rupture, and this technique was discontinued. Silva [22] emphasized the good results achieved in 3 patients followed up for 30 years using pedicled pericardium, with no aneurysmal formation. In our series, in the cases in which enlargement of the coarctate area with a patch was used, no evidence of aneurysmal formation was found. Another important aspect refers to the risk of bleeding during the opening of the thorax and dissection of the structures adjacent to the aortic coarctation, due to the presence of large collaterals close to the aorta, as well as in the superficial and intercostal muscular planes [13,14]. Additional care must be taken during the opening, with accurate control of the blood pressure, due to the presence of HBP in the large majority of patients. Sweeney et al. [23] report the occurrence of bleeding and hemodynamic instability. In our series of cases, there were no problems related to bleeding during the thorax opening, and only one patient required reoperation for bleeding, with positive evolution. Regarding the postoperative events, in addition to the risk of postoperative bleeding, observed in one of our patients, HBP represented the most frequent occurrence. In our experience, as expected, the majority of the patients presented HBP in the immediate postoperative period, requiring the use of intravenous medication for control. SH was present in 81.8% of the patients in the experience of Oliveira et al. [19]. In addition to HBP, complications such as neurological or motor disturbances have been observed [24]; in our experience, no patient presented neurological complications. Lisboa et al. [11] also reported the absence of neurological complications with extra-anatomical techniques. In our series, there were no complications such as ventricular dysfunction or arrhythmias in the postoperative period, with the presence of left ventricular hypertrophy, as previously described, observed in all patients in the preoperative period by the echocardiogram. Also, in our series, there were no pulmonary infections or complications of any other sort. As for HBP control, the idea of removing the mechanical obstacle that caused the increase in blood pressure may suggest that there is pressure stabilization in all patients; however, this is not constant. As observed by Hager et al. [24], in a study involving 404 patients followed up for a period from 1 to 27 years, the majority of the patients remained hypertensive in the long term, whereas only a minority of the cases presented gradients higher than 20 mmHg, and 43% of the patients who underwent surgical correction of the aortic coarctation presented blood pressure stabilization.
Unlike some reports, in our study more than 90% of patients presented blood pressure stabilization during the postoperative evolution period, with or without the use of medication. We believe that the efficient relief of the stenosis, regardless of the technique used, might be an important factor in the control of HBP. However, in approximately 25% of the patients, the use of one or more anti-hypertensive drugs is required for the control of SH. Bouchart et al. [7] observed that, of 35 operated patients, 23 became normotensive without medication, 6 with monotherapy and 6 with combined anti-hypertensive medication. The fact that not all patients presented blood pressure stabilization can be explained by the longer duration of preoperative HBP, with less elasticity of the aortic wall, at times even with fibrosis and calcification in the region next to the coarctate area, which, besides making surgical correction difficult, prevents adequate control of the arterial pressure. The persistence of HBP in the postoperative period may be associated with multiple factors, such as the persistence of endocrine factors and/or the reduction of the compliance of the vascular bed next to the aortic coarctation [25], factors not changed by the surgery. Such aspects could not be evaluated in the patients of our series, which makes it difficult to interpret the high rate of HBP control in the patients operated on by our group; although, besides the relief of the stenosis, another factor that could, in our opinion, be related to the control of HBP would be the mean age of our patients, under 25 years, which could be related to less fibrosis and better vascular compliance. Other authors have correlated the persistence of hypertension in the postoperative period with the existence of a residual gradient higher than 30 mmHg [25]. In our series, the postoperative gradient observed by the echocardiogram was lower than 20 mmHg, which would be an additional factor that could possibly explain the high rate of HBP control. Another related factor may be a lower capacity to regulate the blood pressure, due to the lower sensitivity of the blood pressure receptors located at different sites along the aorta. As for the most frequent medication applied in the postoperative control of the blood pressure, a large variability can be observed among different groups, with the application of different drugs, as well as associations of one or more medicines. In our study, in the cases in which preoperative medication was required, beta-blocking agents were the drug of choice, exclusively or in association with different types of diuretic drugs. In the last few years, endovascular treatment by balloon or endoprosthesis has been performed more frequently, despite the higher incidence of restenosis in these patients [26]. Tyagy et al. [27] recommend the use of nitinol endoprostheses for the correction of unsatisfactory results with balloon angioplasty. Karl [28] considers that the best treatment for aortic coarctation is the surgical one for most cases, considering the long-term results. Based on the data acquired in this study, we can conclude that the surgical treatment of aortic coarctation can be performed with efficient results regardless of the technique applied, with low morbidity and mortality, reducing the pressure levels in the mid-term follow-up, with or without the use of anti-hypertensive medication, even in adult patients.
v3-fos-license
2014-10-01T00:00:00.000Z
2013-08-30T00:00:00.000
14363956
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://arthritis-research.biomedcentral.com/track/pdf/10.1186/ar4280", "pdf_hash": "4091c6e6d3475c6a68ea847657149b26519575a6", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42027", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "b2a8e35f45c23b5fd6250792c8fb743b6f8576c6", "year": 2013 }
pes2o/s2orc
Genomic characterization of remission in juvenile idiopathic arthritis

Introduction: The attainment of remission has become an important end point for clinical trials in juvenile idiopathic arthritis (JIA), although we do not yet have a full understanding of what remission is at the cell and molecular level. Methods: Two independent cohorts of patients with JIA and healthy child controls were studied. RNA was prepared separately from peripheral blood mononuclear cells (PBMC) and granulocytes to identify differentially expressed genes using whole genome microarrays. Expression profiling results for selected genes were confirmed by quantitative, real-time polymerase chain reaction (RT-PCR). Results: We found that remission in JIA induced by either methotrexate (MTX) or MTX plus a TNF inhibitor (etanercept, Et) (MTX + Et) is characterized by numerous differences in gene expression in peripheral blood mononuclear cells and in granulocytes compared with healthy control children; that is, remission is not a restoration of immunologic normalcy. Network analysis of the differentially expressed genes demonstrated that the steroid hormone receptor superfamily member hepatocyte nuclear factor 4 alpha (HNF4α) is a hub in several of the gene networks that distinguished children with arthritis from controls. Confocal microscopy revealed that HNF4α is present in both T lymphocytes and granulocytes, suggesting a previously unsuspected role for this transcription factor in regulating leukocyte function and therapeutic response in JIA. Conclusions: These findings provide a framework from which to understand therapeutic response in JIA and, furthermore, may be used to develop strategies to increase the frequency with which remission is achieved in adult forms of rheumatoid arthritis.

Introduction

The advent of biological therapies for chronic forms of arthritis has been accompanied by the hopes that: (1) therapies can be increasingly tailored to specific pathogenic pathways, decreasing unwanted side effects; and (2) by use of more targeted therapies, patients will experience more sustained periods of disease quiescence and, therefore, functional and subjective well-being. In juvenile idiopathic arthritis (JIA), the most common form of chronic arthritis in children, achieving the second of these objectives appears to be very near [1]. JIA is a term used to denote a heterogeneous group of childhood illnesses characterized by chronic inflammation and hypertrophy of synovial membranes. Distinct phenotypes are recognized based on disease presentation, clinical course, and specific biomarkers, for example, IgM rheumatoid factor [2]. However, even within carefully specified disease subtypes, considerable heterogeneity exists, especially with respect to response to therapy and overall outcome [3]. The biology underlying these differences is poorly understood, and obtaining a molecular understanding of phenotypic and therapeutic response differences is an important step toward developing individualized therapies for this family of diseases and their cognate conditions in adults. A major advance in pediatric rheumatology has been the recognition that treatment response can be staged based on consensus criteria developed by an international panel [4], and that these stages have biological validity that can be characterized at the molecular level by gene transcriptional profiling [5][6][7]. Wallace et al.
[4] defined these specific states as: active disease (AD), inactive disease (ID), clinical remission on medication (CRM), and clinical remission (CR). While true remission (CR) appears to be difficult to achieve (in the Wallace study [7], only 5% of children with the multiple-joint, polyarticular form of JIA achieved remission within 5 years of diagnosis), sustained periods of disease control (CRM) have become a reality and the target end point for childhood arthritis clinical trials. However, although achieving CRM has become commonplace in pediatric rheumatology clinical care, preliminary studies have suggested that the CRM biological state is not a return to normal, but, rather, a homeostatic state where pro-inflammatory disease networks are counterbalanced by the emergence of antiinflammatory networks [6]. Indeed, peripheral blood gene expression abnormalities persist even in children who have been disease-free and off medication for a year or more [5,6]. Now that remission (or at least CRM) has become both the gold standard for clinical care and the end point for clinical trials for children, it is critical that we understand it at the molecular/biological level. One complication in doing so is that, while approximately 35 to 50% of children with JIA will experience CRM with the use of methotrexate (MTX) (usually in combination with nonsteroidal antiinflammatory drugs +/-corticosteroids, used systemically or via joint injection), others will attain this state only after a biological agent, most commonly a TNF inhibitor, is added to methotrexate and the other agents [8]. However, whether the state of remission induced by MTX is, at the molecular level, identical to remission induced by the addition of a TNF inhibitor remains unknown, even though the remission phenotype is identical in each case. Answering this question is critical to our understanding of both the biology of response to therapy in JIA and toward our understanding the disease process itself. Furthermore, while there may be fundamental differences in the biology of response in adults compared to children and between the different disease entities in which anti-TNF therapies are used, the frequency with which remission (as defined here) can be achieved in children provides an excellent opportunity to understand mechanisms of response in such a way that these therapies might be manipulated in adults or in other diseases to achieve the same ends. Thus, understanding remission at the molecular level in this specific disease can be expected to have a useful impact on the multiple other chronic forms of arthritis in which immunosuppressive and anti-TNF therapies are used. In this study, we used gene expression profiling to compare two groups of children with JIA who had achieved remission (CRM) to examine the medicationspecific effects on gene transcriptional profiles. Patients and controls This study was approved by the Oklahoma University Health Sciences Center (OUHSC) Institutional Review Board, and informed consent was obtained from all patients, or their parent/guardian, prior to the initiation of the study. We studied two independent cohorts of patients. One cohort was designated the training cohort, and the second, termed the testing cohort, was used to corroborate the results from the training cohort using quantitative, real-time PCR (qRT-PCR). 
Children with polyarticular onset, rheumatoid factor (RF)-negative JIA were recruited from the OU Children's Physicians' rheumatology clinics and fit criteria for this subtype as specified by the International League of Associations for Rheumatology (ILAR) [9]. All children were on treatment at the time they were studied, and all fit criteria for CRM as defined by Wallace and colleagues [7]. That is, these children had all reached the ID state (normal physical examinations, absence of morning stiffness, and normal complete blood counts and erythrocyte sedimentation rates on laboratory monitoring studies) and, to fit criteria for CRM status, had maintained the ID state for 6 continuous months. The patients were followed every 2 to 3 months following their achieving ID, and CRM state samples were taken 6 to 8 months following the achievement of ID status. In the training cohort (for microarray), 14 children (ages 8.9 ± 3.1 years; seven females and seven males) achieved CRM with the use of MTX alone, 7 to 48 months after starting therapy. The patient comparison group in this cohort consisted of 14 other children with polyarticular JIA (ages 8.9 ± 4.2 years; 13 females and one male) who achieved CRM only after the addition of the TNF inhibitor, etanercept (Et), 11 to 48 months after the initiation of therapy. Both of these groups were compared to a group of 15 healthy children (ages 11.5 ± 2.6 years; seven female and eight male) recruited from the OU Children's Physicians' General Pediatrics clinic (Table 1). A separate testing patient cohort of children with JIA was used to validate results from the first patient cohort. Eight of these children (ages 9.4 ± 4.7 years; eight females) achieved CRM with the use of MTX alone, 10 to 27 months from the initiation of therapy, and an additional eight children with polyarticular JIA (age 9.6 ± 4.5 years; eight females) achieved CRM only after the addition of Et, 10 to 26 months after the initiation of therapy. Eight healthy children (age 11.1 ± 2.8 years; four females and four males) were used as an independent comparison group to the testing cohort. Patient groups and characteristics are summarized in Table 1. All patients were treated with naproxen, 10 mg/kg/dose, as an adjunct to their primary drugs (that is, MTX +/-Et). Cell isolation Whole blood was drawn into 10 mL citrated Cell Preparation Tubes (Becton Dickinson, Franklin Lakes, NJ, USA). Cell separation procedures were started within one hour from the time the specimens were drawn. Peripheral blood mononuclear cells (PBMC) were separated from granulocytes and red blood cells by density-gradient centrifugation. Red cells were removed from granulocytes by hypotonic lysis, and PBMC and granulocytes were then immediately placed in TRIzol ™ reagent (Invitrogen, Carlsbad, CA, USA) and stored at -80°C. RNA isolation, labeling and gene expression profiling Total RNA was extracted using TRIzol ™ reagent according to the manufacturer's directions. RNA was further purified using a RNeasy MiniElute cleanup kit including a DNase digest according to the manufacturer's instructions (Qiagen, Valencia, CA, USA). RNA was quantified spectrophotometrically (Nanodrop, Thermo Fisher Scientific, Wilmington, DE, USA) and assessed for quality by capillary gel electrophoresis (Agilent 2100 Bioanalyzer; Agilent Technologies, Inc., Palo Alto, CA, USA). 
For the training cohort, sufficient amounts of high quality RNA for use in microarrays were obtained from 43 PBMC samples obtained from 14 JIA patients treated with MTX + Et, 14 JIA patients treated with MTX alone, and 15 healthy control children. From granulocytes, a sufficient amount of high quality RNA was obtained from 12 JIA patients treated with MTX + Et, 10 JIA patients treated with MTX alone, and 13 healthy control children. RNA samples were processed using GeneChip 3' IVT Express kit and hybridized to human U133 Plus 2.0 GeneChip ™ microarrays according to the manufacturer's protocol (Affymetrix, Santa Clara, CA, USA). GeneChips ™ were washed and stained using an Affymetrix automated GeneChip ™ 450 fluidics station and scanned with an Affymetrix 3000 7G scanner. All gene expression data has been made available publically via the Gene Expression Omnibus (accession GSE41831). Statistical analysis and network modeling CEL files were generated from scanned images using GeneChip ™ Operating Software (GCOS, Affymetrix, version 1.3.0.037). Signal intensities were generated using JustRMA software (BRB-Array Tools). A log base 2 transformation was applied before the data were quantile normalized. Signal intensities were filtered using a log intensity variation (BRB-Array Tools) to obtain probes with the 25% highest variance across the arrays (13,668 probes) for further evaluation. Samples were divided into three groups (controls, patients treated with MTX + Et, and patients treated with MTX alone) and gene expression differences were separately evaluated in each patient group relative to controls. Differences between groups were considered statistically significant using a two-sample t test with univariate random variance model if the P value was ≤0.001 (BRB Array Tools, version 3.8.0 stable release). Statistically significant differentially expressed genes were filtered to obtain those with a minimal 1.3-fold change between groups and a mean expression level above background in at least one group. Annotations for probes were obtained from Affymetrix and were further supplemented by SOURCE [10]. These gene annotations were compared with the Gene Ontology (GO) database [11] to identify overrepresented terms using the R package GO available from Bioconductor within BRB-Array Tools [12]. A minimum of five observations in a GO class and parent class plus a minimum ratio of 2 for the observed vs. expected numbers were required for further consideration. Interactions among differentially expressed genes in PBMC and granulocytes were analyzed using Ingenuity Pathway Analysis (IPA) software (Ingenuity Systems, Inc, Redwood City, CA, USA). Differentially expressed genes were mapped onto a global molecular network developed from information contained in the Ingenuity Pathways Knowledge Base. Gene expression validation by quantitative real-time RT-PCR Total RNA (described above) was reverse transcribed with iScript ™ cDNA synthesis kit according to the directions of the manufacturer (Bio-Rad, Hercules, CA, USA). Real-time RT-PCR was performed using SYBR Green reagents on an ABI Prism 7000 (for the training group; Applied Biosystems, Foster City, CA, USA) or a StepOne Plus (for the testing group; Applied Biosystems, Foster City, CA, USA). The temperature profile consisted of an initial 95°C step for 10 min, followed by 40 cycles of 95°C for 15 sec, 60°C for 1 min, and then a final melting curve analysis with a ramp from 60°C to 95°C over 20 min. 
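As an aside on the statistical analysis described above, the short Python sketch below illustrates the same filtering logic (log base 2 transformation, quantile normalization, a top-25% variance filter, per-probe two-sample tests, and a 1.3-fold-change cutoff). It is only a minimal approximation: it substitutes an ordinary t test for the univariate random-variance model used in BRB-ArrayTools, and the function names, array shapes and thresholds are illustrative assumptions rather than part of the original analysis.

```python
# Minimal sketch (not the authors' pipeline): differential-expression filtering
# on a probes x samples intensity matrix, assuming positive signal values.
import numpy as np
from scipy import stats

def quantile_normalize(x):
    """Quantile-normalize a (probes x samples) matrix."""
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)
    mean_sorted = np.sort(x, axis=0).mean(axis=1)
    return mean_sorted[ranks]

def differential_probes(signal, is_patient, p_cut=0.001, fc_cut=1.3, top_frac=0.25):
    """is_patient: boolean NumPy array over samples (True = patient, False = control)."""
    log2 = np.log2(signal)                             # log base 2 transformation
    norm = quantile_normalize(log2)                    # quantile normalization
    variances = norm.var(axis=1)
    keep = variances >= np.quantile(variances, 1 - top_frac)   # top 25% variance
    norm = norm[keep]
    pat, ctl = norm[:, is_patient], norm[:, ~is_patient]
    _, p = stats.ttest_ind(pat, ctl, axis=1)           # simple two-sample t test
    log2_fc = pat.mean(axis=1) - ctl.mean(axis=1)
    hits = (p <= p_cut) & (np.abs(log2_fc) >= np.log2(fc_cut))
    return np.flatnonzero(keep)[hits]                  # indices of candidate probes
```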
Gene-specific amplification was confirmed by a single peak in the ABI Dissociation Curve software. Average threshold cycle (Ct) values for GAPDH (run in parallel reactions to the genes of interest) were used to normalize average Ct values of the gene of interest. These values were used to calculate averages for each group (healthy control or patient subsets), and the relative ΔCt was used to calculate fold-change values between the groups [5]. The nucleotide sequences of the primers are listed in Table 2. Table 2 Primers used for quantitative real-time PCR validation. Results Our primary aim in this study was to determine whether the CRM state as achieved in a typical clinical setting results in a return to normal immune homeostasis in peripheral blood leukocytes. Preliminary studies from our research group [5,6] indicated that this was not likely, and this study, performed on a larger group of patients with independent corroboration in a second group of patients, was designed to answer that question in a definitive way and to elucidate differences at the molecular level. To do this, we first compared each of the CRM groups (that is, MTX or MTX + Et) to healthy control children. In both PBMC and granulocytes, there were differences relative to healthy control children both in children who achieved remission on MTX and in those who achieved remission on MTX + Et. That is, although remission (CRM) is a distinct biological state, and phenotypically is indistinguishable among the groups, there were still differences in patterns of gene expression between and among the groups. For both cell types, hierarchical clustering of samples from the three groups (that is, healthy controls, children who had achieved remission on MTX, and children who had achieved remission on MTX + Et) revealed two clusters, each containing a similar proportion of samples from children in remission on MTX and healthy control children, while all but one of the samples from children who achieved remission on MTX + Et were grouped in one cluster. A 3 × 2 contingency table of these distributions revealed a nonrandom distribution of samples in both cell types (Table 3; PBMC: χ2 = 9.86, P < 0.007; granulocytes: χ2 = 11.5, P < 0.003). These results suggest that combined treatment with MTX + Et produced gene expression responses that are more distinct and more biologically focused than the more heterogeneous responses detected among the MTX-treated patients, consistent with the idea that Et represents a more targeted therapy. Differences in PBMC gene expression profiles Gene expression differences were detected in 67 genes represented by 75 probes when PBMC from JIA patients who achieved remission using MTX alone were compared to healthy controls. Twenty-two of these genes showed higher levels of expression and 45 showed lower levels in patient compared with control samples (Table S1 in Additional file 1). Thus, MTX appears to act not merely by suppressing pro-inflammatory genes, but by a re-ordering of specific transcript levels. Not surprisingly, functional associations among the differentially expressed genes included genes whose products are active in cell-mediated immunity. This included decreased expression of signal transducer and activator of transcription 1 (STAT1), which plays an important role in mediating the effects of interferon gamma and in TH17 cell differentiation.
STAT1 is activated by IL-6, an important cytokine in the pathogenesis of JIA [13], and IL-6 is known to be modulated by methotrexate [14,15]. Similarly, the expression of complement factor B, whose expression is increased by pro-inflammatory cytokines, was decreased in MTX-responsive patient samples. The decreased expression patterns of chemokine receptor 6, granzymes A and K, and the killer cell lectin-like receptor subfamily D member 1 and subfamily K member 1 transcripts suggest modulation by MTX in cells of the innate (natural killer cells) and adaptive (cytotoxic T cells) immune systems in JIA. Input of the differentially expressed genes from this analysis into IPA software revealed a statistically significant downregulation of leukocyte activation in the methotrexate-responsive samples based on the downregulation of five genes: GZMA, which activates monocytes [16], STAT1, whose phosphorylation leads to macrophage activation [17], CALR, which increases activation of dendritic cells [18], SERPINB9, which increases leukocyte activation [19], and KLRK1, which increases NK cell activation [20]. Upregulation of CALR, PRDM1, STAT1, TAGAP and TNRC6B has been associated elsewhere with adult rheumatoid arthritis or juvenile polyarticular arthritis [21][22][23][24]. Their downregulation here is consistent with an immunosuppressive effect of MTX across a variety of different molecules. Downregulation of GZMA by MTX was previously reported by Belinsky et al. [25]. Fifty-two genes represented by 56 different probe sets were differentially expressed in PBMC samples from patients treated with MTX + Et relative to samples from healthy controls (Table S2 in Additional file 2). Transcripts for 25 of these genes were expressed at higher levels in patients and 27 were expressed at lower levels compared with controls. GO analysis indicated an overrepresentation of products of the differentially expressed genes with roles in immunity, as expected. Of the 24 GO biological process categories obtained, 11 were related to immunity with ratios of the number of observed to expected genes varying from 4.57 to 22.05. The second-most overrepresented categories were related to histone or chromatin modification, with observed/expected ratios between 3.29 and 12.53. This finding is consistent with the hypothesis that the alterations in gene expression that characterize the transition from active disease to remission may be accomplished through epigenetic alterations in chromatin accessibility. Twenty-one of the transcripts which showed differential expression when the MTX + Et group was compared with controls, including KLRD1 and CFB, were also differentially expressed in the MTX alone group vs. controls, suggesting either MTX-induced changes in the expression of these genes or persistence of pre-existing expression abnormalities not corrected by either drug regimen. AGRN and KLRD1, which showed decreased expression in the patient samples, are involved in T cell activation. CFB, AGRN and KLRD1 are involved in the inflammatory response, while CFB and AGRN plus PPP1R14A are involved in cell-cell interactions. The remaining 31 genes were uniquely differentially expressed in the MTX + Et vs. controls comparison (including CD22, CCR6 and TREM1, which are of immunologic interest and suggest a unique effect of Et or an MTX + Et interaction) and 46 were uniquely differentially expressed in the MTX vs.
controls comparison (including CXCR6, GZMA, TCRGC2, TCRGV5, TCR delta and STAT1, which are of immunologic interest and suggest a unique effect of MTX). The increased expression of CD22 and decreased expression of TREM1, which occurred in patients who responded to the combined methotrexate and etanercept therapy but not in those who responded to methotrexate alone, suggest an Et-dependent effect on these molecules. CD22 is known to negatively regulate B cell activation [26,27], while TREM1 activation in monocytes induces pro-inflammatory cytokines and chemokines such as TNF, IL-1 and IL-6 [28,29]. The transcriptional modulation of these molecules in response to these therapies should reduce inflammation and modulate immune responses in responding patients. Differential expression of genes in granulocytes The most striking differences in gene expression profiles of patients with JIA responding to therapy were detected in granulocytes. A total of 207 differentially expressed genes were identified when samples from patients treated with MTX + Et were compared to control samples (Table S3 in Additional file 3). This contrasts with 23 genes that were differentially expressed in patients who achieved remission on MTX relative to controls (Table S4 in Additional file 4). That is, patients achieving remission on MTX alone had granulocyte gene expression profiles that more closely resembled normal than did patients treated with MTX + Et. Four genes were upregulated in samples from patients who achieved remission with either therapeutic regimen, namely chromodomain helicase DNA-binding protein 2 (CHD2), RNA-binding motif protein 25 (RBM25), tripartite motif-containing 23 (TRIM23) and the KIAA0907 gene; seven genes were downregulated in both groups of patient samples relative to controls, namely forkhead box O1 (FOXO1), 3-phosphoinositide-dependent protein kinase-1 (PDPK1), PHD finger protein 20 (PHF20), splicing factor, arginine/serine-rich 18 (SFRS18), SAPS domain family, member 3 (SAPS3), neutral sphingomyelinase activation associated factor (NSMAF), and transmembrane protein 140 (TMEM140). FOXO proteins have been shown to regulate the expression of the TNF-related apoptosis-inducing ligand (TRAIL) [30], a TNF family member that can accelerate the rate of apoptosis in neutrophils [31][32][33]. TNF has been shown to induce granulocyte apoptosis in a dose-dependent manner and via differential effects on expression of Mcl-1 and Bfl-1 [34][35][36]. Twenty-eight statistically significant (P < 10^-2) canonical pathways were identified from the MTX + Et differentially expressed genes using IPA, many of which are directly associated with inflammation, immunity, or apoptosis (Table 4). These include the RANKL signaling pathway, which regulates bone remodeling, the caspase-dependent apoptotic TWEAK signaling pathway, the CD27 apoptotic signaling pathway, the TNFR2 signaling pathway, the antigen-presenting CD40 signaling pathway, the TNF family and immunoregulatory APRIL and BAFF signaling pathways, the glucocorticoid receptor signaling pathway, and the IL-6 signaling pathway. Many of these differentially expressed genes, such as ATM, BIRC3, MAP2K7, NFKBIE, TRAF3 and XIAP, occurred in more than one signaling pathway. Based on the expression patterns of the molecules in Table 4, IPA software predicted decreased cellular apoptosis (3.24 × 10^-4).
Collectively, these findings demonstrate an important role of Et in the modulation of the innate immune system of responding patients, which may have implications for this and other diseases. Between-group comparisons Having established that CRM is not a return to normal, we next sought to determine the degree to which the homeostatic state induced by MTX alone resembled that induced by the combination of MTX + Et. We first examined PBMC, comparing the gene expression profiles in JIA patients in remission following treatment with combined MTX + Et to samples from patients treated with MTX alone. Six genes were identified as being differentially expressed in PBMC between these groups (Table S5 in Additional file 5). Four genes were upregulated in patients treated with the combined therapy: cardiotrophin-like cytokine factor 1 (CLCF1), complement component 3 (C3), the nonprotein-encoding XIST antisense RNA (TSIX), and one gene currently lacking annotation (Affymetrix probe set ID 240861_at). Two genes were downregulated in patients using combined therapy: insulin-like growth factor 1 receptor (IGF1R) and the Y-linked protein kinase gene (PRKY). When the granulocyte expression profiles of children who achieved remission on MTX were compared directly with those who achieved remission on MTX + Et, we found 33 genes (42 probes) that showed differences in expression. This was an expected finding, given that children who had remission on MTX alone had expression profiles that more closely resembled the healthy controls than did children who achieved remission on MTX + Et. Differential gene expression was also detected with three different probes for the eukaryotic translation initiation factor 1A, Y-linked gene (EIF1AY), with no evidence of expression in the MTX + Et samples relative to the MTX alone samples; with six different probes for the nonprotein-coding X (inactive)-specific transcript (XIST), with expression above background only in the MTX + Et-treated samples; and with three different probes for the nibrin gene (NBN), expressed above background signal intensities in both groups. Only one gene, the insulin-like growth factor 1 receptor gene (IGF1R), was differentially expressed in both PBMC and neutrophils. However, this gene was overexpressed in granulocytes from patients treated with MTX + Et, but underexpressed in MTX + Et-treated PBMC samples. Collectively, these findings indicate very different effects of Et on different subsets of peripheral blood. The nibrin gene was overexpressed in MTX + Et relative to both MTX only and controls, as were the genes encoding cytochrome b-245, beta polypeptide (CYBB), haloacid dehalogenase-like hydrolase domain-containing 1A (HDHD1A), baculoviral IAP repeat-containing 3 (BIRC3), TNF receptor-associated factor 3 (TRAF3), and X (inactive)-specific transcript (XIST). The histone cluster 1, H1c (HIST1H1C) gene was found to be downregulated in MTX + Et relative to both MTX only and control samples. No overlap was found among the differentially expressed genes in the MTX only vs. MTX + Et and the MTX only vs. control samples (Table S6 in Additional file 6). Network analysis of differentially expressed genes Functional associations between differentially expressed genes identified above were analyzed using the IPA software. It is interesting to note that many of these networks contained hub-and-node structures characteristic of scale-free systems [37], as we have previously reported [38].
While some of these structures may be artifacts that emerge from the algorithms used by IPA to query the existing literature, there is biological coherence in many of the networks, all of which were generated in an unbiased fashion. For example, in both PBMC ( Figure 1A, and Figure S1 in Additional file 7) and granulocytes ( Figure 1B, and Figure S2 in Additional file 8), TNF alpha appears as a hub in at least one network, as would be predicted given Et's mechanism of action. We also noted hub-and-node structured networks derived from both types of cells that demonstrated interactions between the steroid hormone receptor/transcription factor hepatocyte nuclear factor 4 alpha (HNF4α) and differentially expressed gene products in both types of cells (Figure 2). Connections in these networks reflect HNF4α binding to DNA sequences in or adjacent to these genes that were identified by chromatin immunoprecipitation assays [39]. Because HNF4α had not been reported to be expressed in leukocytes, we undertook experiments to investigate this finding further. Expression of HNF4a in leukocytes Network analyses of the microarray data suggested a role for HNF4α in regulating a number of genes associated with remission (for example, Figure 2). HNF4α is a transcription factor and a steroid hormone receptor superfamily member that is expressed mainly in liver and kidney, and at lower levels in pancreatic islets, small intestine and colon [40]. Given the relationship between HNF4α and a number of the differentially expressed genes in PBMC and granulocytes, we tested for the presence of HNF4α protein in human leukocytes using immunofluorescence microscopy. As a positive control, we observed intense nuclear and light cytoplasmic staining in human hepatocellular carcinoma HepG2 cells ( Figure 3A). We next examined CD66b+ granulocytes, CD4+ T cells, and CD8+ T cells and detected HNF4α in each of these leukocyte subsets. In T cells, HNF4α immunofluorescent signals were of similar intensity in CD8+ and in CD4+ T cells and lower than intensities observed in CD66b+ granulocytes. All T cells were immunofluorescent positive for HNF4α while one-third of CD66b+ cells were positive. Staining in CD66b+ cells was primarily cytoplasmic, while nuclear and cytoplasmic staining was observed among each T cell subpopulation. All of these findings were observed in cells from healthy children, healthy adults, and children with JIA (data not shown). The findings support the hypothesis that HNF4α is expressed in each of these types of cells and indirectly corroborate the functional interaction of gene products in the networks reported above. Validation of microarray data We performed quantitative real-time PCR on RNA obtained from granulocytes and PBMC from the training cohort of patients and healthy controls to confirm the altered pattern of gene expression detected with microarrays. Ten differentially expressed genes identified from microarray expression patterns of PBMC were evaluated ( Figure 4A). All genes were similarly over-or underexpressed using both methods. In granulocytes, nine genes that were tested by qRT-PCR exhibited agreement between microarray and quantitative real-time PCR results ( Figure 4C). 
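For illustration, the relative ΔCt fold-change calculation used for the validation above can be written out as follows; the Ct values, group labels and function name are invented for the example and are not data from this study.

```python
# Hypothetical 2^-ΔΔCt-style fold-change sketch: gene Ct values are normalized
# to GAPDH, averaged per group, and converted to a fold change (toy values only).
import numpy as np

def fold_change(gene_ct, gapdh_ct, is_patient):
    gene_ct, gapdh_ct = np.asarray(gene_ct, float), np.asarray(gapdh_ct, float)
    delta_ct = gene_ct - gapdh_ct                      # normalize to GAPDH
    is_patient = np.asarray(is_patient, bool)
    ddct = delta_ct[is_patient].mean() - delta_ct[~is_patient].mean()
    return 2.0 ** (-ddct)                              # >1 means higher in patients

is_patient = [True, True, True, False, False, False]
print(fold_change([25.1, 24.8, 25.4, 27.0, 26.7, 27.2],
                  [18.0, 17.9, 18.2, 18.1, 18.0, 18.1],
                  is_patient))                         # about 3.6-fold higher in patients
```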
To further confirm the findings from the microarray results, 12 differentially expressed genes identified from microarray expression patterns in PBMC and 14 differentially expressed genes identified from microarray expression patterns in granulocytes were evaluated by quantitative real-time PCR on RNA obtained from the independent testing cohort. The PCR results confirmed the differential expression of 11 of the 12 genes in PBMC (Figure 4B, 92% validation) and 13 of the 14 genes in granulocytes (Figure 4D, 93% validation). The differentially expressed genes CXCR6 (in PBMC) and FOXO1 (in neutrophils) were not validated by PCR (data not shown). Five (IER5, FUZ, RNF167, TRIM4 and ZNF277) of the 13 genes we validated in granulocytes were also directly relevant to the HNF4α network mentioned above. TARP, which was present in the PBMC network with an HNF4A hub (above), was validated by PCR in both training and testing groups. Thus, PCR experiments, including those performed on an independent patient cohort, corroborated results from both the expression and network analysis data. Discussion Significant advances have been made in the past 10 to 15 years in the treatment of JIA. Indeed, it is now possible to achieve remission in the majority of children with even the more severe polyarticular-onset forms of this disease, although sustained periods without clinical disease can currently be achieved and maintained only by sustained use of immunosuppressive medications. While preliminary studies from our group suggested that remission (both CRM and CR) in JIA represents a distinct biological 'state' that can be recognized at the molecular level as well as clinically [6], it is critical to derive a deeper understanding of the biological meaning of these states. In particular, since children with JIA can achieve remission on different medications, it is of great interest to know whether remission achieved on MTX leads to an immunologic/biologic state identical to that achieved on a TNF inhibitor. We demonstrate here that children who have achieved the CRM state show significant differences from healthy controls, although those differences are more pronounced in the group who achieved remission on MTX + Et compared with those who achieved remission on MTX alone. The findings here corroborate our smaller, preliminary studies [22] and demonstrate that the CRM state is not a return to 'normal', but rather a re-ordering of transcriptional profiles in leukocytes (and very possibly other cells or tissues) in such a way that pro-inflammatory responses are counterbalanced by anti-inflammatory responses. Furthermore, this re-ordering occurs in cells of both the innate and adaptive immune systems. Findings in this study are thus consistent with previously published work [5,6,22,40] suggesting that, rather than being driven purely by aberrant adaptive immune processes, the pathogenesis of polyarticular JIA likely involves complex interactions between innate and adaptive immunity. For example, 21 of the differentially expressed genes in PBMC in patients (MTX + Et and MTX alone) relative to controls are known to be involved in T cell activation (AGRN and KLRD1), the inflammatory response (CFB, AGRN and KLRD1), and cell-cell interaction (CFB, AGRN and PPP1R14A). The differentially expressed genes in PBMC in patients (MTX alone) relative to controls are involved in T cell and NK cell proliferation (STAT1 and KLRK1) and in T cell apoptosis and death (SERPINB9, CALR, PRDM1 and GZMA).
The more dramatic differences between children in remission and healthy control children were observed in the expression profiles of granulocytes. This was especially true when we compared children in remission on MTX + Et with the control population. In that comparison, there were 207 genes that showed differential expression. Not surprisingly, many of the differentially expressed genes act through TNF-associated pathways. For example, ABCF1 can be regulated by TNF-alpha and plays a role in enhancing protein synthesis and the inflammatory process [41]. TNFAIP6, increased in JIA patient granulocytes, can be induced by pro-inflammatory cytokines such as TNF-alpha and IL-1 [42]. Enhanced levels of TNFAIP6 protein have also been found in the synovial fluid of patients with osteoarthritis and rheumatoid arthritis [43]. GCH1 protein expression and enzyme activity are strongly induced by a mixture of three pro-inflammatory cytokines, IL-1beta, TNF-alpha, and IFN-gamma [44]. XIAP belongs to a family of apoptotic suppressor proteins and acts by binding to the TNF receptor-associated factors TRAF1 and TRAF2. This protein also inhibits at least two members of the caspase family of cell-death proteases, caspase-3 and caspase-7. XIAP also regulates innate immune responses by interacting with NOD1 and NOD2 via RIP2 [45,46]. Despite the large number of differentially expressed genes in granulocytes of patients who responded to MTX + Et therapy, only 23 differentially expressed genes in granulocytes were identified in patients who achieved remission on MTX relative to controls. This finding argues that the majority of the MTX + Et transcriptome changes were driven by Et. Nevertheless, there may be functional overlap in genes affected by these therapeutic regimens. For example, among genes affected by MTX, neutral sphingomyelinase (N-SMase) activation associated factor (NSMAF), downregulated in MTX-treated patient vs. control samples, is an adaptor protein that constitutively binds to TNF-R1 and is involved in TNF-induced expression of genes such as IL-6 and CXCL-2 and in leukocyte recruitment, contributing to the establishment of the specific immune response [47]. The effect of these drugs is not limited to TNF-related modifications. Eleven genes (CHD2, KIAA0907, PHF20, RBM25, NSMAF, FOXO1, PDPK1, SAPS3, SFRS18, TMEM140 and TRIM23) were differentially expressed following treatment with MTX or MTX + Et, and participate in various cellular processes including cellular development, carbohydrate metabolism, cell morphology, cell death and gene expression. A number of differentially expressed genes identified in this study have been shown to bind the transcription factor HNF4α [40]. HNF4α belongs to the steroid hormone receptor superfamily and is enriched in liver [48]. HNF4α contributes to regulation of a large fraction of the liver and pancreatic islet transcriptomes by binding directly to nearly half of the actively transcribed genes in those tissues, and plays a role in regulating the cytokine-induced inflammatory response [41,49]. Based on the predicted interaction between HNF4α and a number of differentially expressed genes in this study, we demonstrated expression of HNF4α at the protein level in PBMC and granulocytes from patients with JIA and from healthy controls using immunofluorescence assays. HNF4A transcripts were expressed above background signal intensities on the microarrays (data not shown).
Our results support the hypothesis that HNF4α controls many genes associated with remission. Although we did not see significant differences in the expression levels of HNF4A transcripts in leukocytes between patients and healthy controls, HNF4α may be controlled by posttranscriptional events or may act as a cofactor, interacting with other transcription factors, for example the ETS-domain transcription factor ELK1, rather than binding DNA directly to regulate these genes [50]. Some ETS family proteins interact with other transcription factors (AP-1, NF-κB and Stat-5) to co-regulate the expression of cell-type-specific genes, and these interactions coordinate cellular processes in response to diverse signals from cytokines, growth factors, antigens, and cellular stresses [51]. The differences in the intracellular localization of HNF4α observed here in granulocytes and T cells remain to be explained. They may be related to the abundance of other transcriptional binding factors in such cells or to mutations or exon splice variations that are present in the HNF4A gene. Together, our results demonstrate that the remission state in JIA is not the result of a normalization of immune homeostasis. Gene expression in both PBMC and granulocytes remains abnormal when patients in remission are compared with healthy control children. Furthermore, while there are some overlapping points, remission achieved on MTX differs from remission achieved on MTX + Et, especially in granulocytes, suggesting overlapping but not identical 'set points' for each of these remission states. These findings provide insight into two of the most important clinical features of chronic arthritis in children: the frequency of disease recurrence and the rarity of true remission (defined by the Wallace group as a full year off all medications without recurrence of disease signs or symptoms). Our studies show that the CRM state is still associated with distinct differences between children in remission (who appear to be completely normal) and perfectly healthy children. The degree to which these abnormalities reflect persistence of the underlying condition itself or a new immunologic homeostasis that emerges because of the drugs themselves is unclear, although our earlier studies [5,6] strongly suggest the latter. Longitudinal studies will be required to monitor the expression of these or other genes prior to, throughout, and after treatment to identify biomarkers that may predict which patients with JIA are likely to respond to particular therapeutic regimens, in order to optimize therapy in the future. These findings represent the first steps in the identification of such molecules.
v3-fos-license
2018-04-03T02:11:02.776Z
2017-05-18T00:00:00.000
4022820
{ "extfieldsofstudy": [ "Psychology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://trialsjournal.biomedcentral.com/track/pdf/10.1186/s13063-017-1958-8", "pdf_hash": "3b76f33b989d51f499236f1285ae8a35f9151d96", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42028", "s2fieldsofstudy": [ "Medicine" ], "sha1": "7db9bf979d75ef5ad5f717a161ebbadebc4f851a", "year": 2017 }
pes2o/s2orc
Does the use of the Informed Healthcare Choices (IHC) primary school resources improve the ability of grade-5 children in Uganda to assess the trustworthiness of claims about the effects of treatments: protocol for a cluster-randomised trial Background The ability to appraise claims about the benefits and harms of treatments is crucial for informed health care decision-making. This research aims to enable children in East African primary schools (the clusters) to acquire and retain skills that can help them make informed health care choices by improving their ability to obtain, process and understand health information. The trial will evaluate (at the individual participant level) whether specially designed learning resources can teach children some of the key concepts relevant to appraising claims about the benefits and harms of health care interventions (treatments). Methods This is a two-arm, cluster-randomised trial with stratified random allocation. We will recruit 120 primary schools (the clusters) between April and May 2016 in the central region of Uganda. We will stratify participating schools by geographical setting (rural, semi-urban, or urban) and ownership (public or private). The Informed Healthcare Choices (IHC) primary school resources consist of a textbook and a teachers’ guide. Each of the students in the intervention arm will receive a textbook and attend nine lessons delivered by their teachers during a school term, with each lesson lasting 80 min. The lessons cover 12 key concepts that are relevant to assessing claims about treatments and making informed health care choices. The second arm will carry on with the current primary school curriculum. We have designed the Claim Evaluation Tools to measure people’s ability to apply key concepts related to assessing claims about the effects of treatments and making informed health care choices. The Claim Evaluation Tools use multiple choice questions addressing each of the 12 concepts covered by the IHC school resources. Using the Claim Evaluation Tools we will measure two primary outcomes: (1) the proportion of children who ‘pass’, based on an absolute standard and (2) their average scores. Discussion As far as we are aware this is the first randomised trial to assess whether key concepts needed to judge claims about the effects of treatment can be taught to primary school children. Whatever the results, they will be relevant to learning how to promote critical thinking about treatment claims. Trial status: the recruitment of study participants was ongoing at the time of manuscript submission. Trial registration Pan African Clinical Trial Registry, trial identifier: PACTR201606001679337. Registered on 13 June 2016. Electronic supplementary material The online version of this article (doi:10.1186/s13063-017-1958-8) contains supplementary material, which is available to authorized users. Part 2. Questions about claims Instructions: Read the text above each question, and then answer the question using one of the provided answers. For each question, choose what you think is the best answer and write the letter for that answer in the box provided. Example A teacher says that the children in his school run faster than the children going to school in another village. Question: How can the teacher be sure about this? Options: A doctor did a research study to find out if drinking tea keeps people from getting sick. He tossed a coin to decide who should get the tea and who should not. 
People who got tea went to the doctor's office every day to drink their tea. At the end of the study, people who got the tea were less likely to be sick than those who got no tea. Based on the text above, please answer the following questions: 2.1 Who went to the doctor's office every day? Options: A) People who did not get tea B) People who got tea Answer: 2.2 How did the doctor decide who should get tea? Options: A) By tossing a coin B) By asking people what they would like C) The doctor gave tea to those who were more likely to be sick D) The doctor asked people who came to his office Answer: 3. A doctor did a research study to find out if drinking tea keeps people from getting sick. He tossed a coin to decide who should get the tea and who should not. People who got tea went to the doctor's office every day to drink their tea. At the end of the study, people who got the tea were less likely to be sick than those who got no tea. Based on the text above, please answer the following questions: What was the treatment? Options: 4. Annette sees an advert on TV for a new soap which the makers say protects people from getting skin rashes. Annette thinks that this soap must be better than other soaps for protecting her skin. Question: Is Annette right? Options: A) No, the soap may be newer, but that does not mean that it is better than other soaps B) Yes, the new soap is probably better than most other soaps because it is newer C) Yes, the new soap is probably better than most other soaps because a well-known company makes it Answer:  5. Regina has an illness that makes it difficult for her to breathe. She hears on the radio about a medicine that has helped many people for their breathing problems. Question: How sure can Regina be that the medicine does not have any harms? Options: A) It is not possible to say. However, medicines are rarely harmful B) Not very sure, because all medicines may harm people as well as help them C) Very sure, since the medicine has helped many people, it is unlikely that it also harms people Answer:  6. John has a skin rash on his leg. A shop sells several creams to treat skin rashes. John chooses a cream from a well-known company, even though it is more expensive than the other creams. John thinks the cream is more likely to heal his rash than the other creams because it is more expensive. Question: Is John right? Options: A) No, just because the cream is expensive does not mean that it will work better than other creams B) It is not possible to say. However, expensive creams are likely to be better because the companies spend more time making them C) No, the cream is probably not as good as the other creams. Well-known companies are usually better at advertising D) Yes, the company is well-known for a reason, so it is more likely to be better than creams sold by lesser-known companies Answer:  7. Two companies make two different medicines for treating stomach pain. Each of them says that their medicine is the better one. Question: How can you know which of the two medicines is better for stomach pain? Options: A) It is not possible to say. The companies may just say their medicine is best because they want to make money B) I would rely on the best known company; it is more likely to have the best medicine C) I cannot trust either of the companies. They are probably both wrong 10. Sarah has an illness. There is a medicine for it, but she is unsure if she should try it. 
A research study comparing the medicine with no medicine found that the medicine was helpful but also that it could be harmful. Three of Sarah's friends are giving her advice about what to do. Question: Which advice below given to her by her friends is the best advice? Options: A) She should only take the medicine if many people have tried the medicine before B) She should only take the medicine if she thinks it will help her more than it will harm her C) If Sarah has enough money to buy the medicine, it could not hurt to try it Answer:  11. Dr. Acheng is an expert on treating headaches. A news reporter interviews Dr. Acheng about a new medicine. Dr. Acheng says that, in her personal experience, the new medicine is good for treating headaches. Question: How sure can we be that Dr. Acheng is right? Options: A) It is not possible to say. It depends on how long Dr. Acheng has been an expert on treating headaches B) Not very sure. Even though Dr. Acheng is an expert, the new medicine still needs to be compared in studies with other treatments C) Very sure. Dr. Acheng is an expert, so she knows if the new medicine is good or not based on her experience D) Very sure. Dr. Acheng would not be interviewed by a news reporter if her advice was not good Answer:  12. Edith has a stomach pain. Edith's mother says that fruit juice is a good treatment for stomach pain. She learnt about this treatment from Edith's grandmother. Over many years, other families she knows have also used fruit juice to treat stomach pain. Question: Based on this, how sure can we be that fruit juice is a good treatment for stomach pain? Options: A) Not very sure. Even though people have used fruit juice over many years, that does not mean that it helps stomach pain B) Very sure. If it has worked for Edith's mother and other people who have tried it, it will probably work for her too C) Not very sure. Edith should ask more families if they use fruit juice to treat stomach pain Answer:  13. At David's school, some students have poor parents. The students with poor parents drink less fruit juice than the children of other parents. The students with poor parents are also more often sick. Based on this link, David thinks that people who drink fruit juice, are less likely to get sick. Question: Is David correct? Options: A) It is not possible to say, it depends on whether or not David has poor parents B) Yes, students with poor parents do not drink fruit juice and are more often sick C) Yes, the juice is the only possible reason why the students with the poor parents are more often sick D) It is not possible to say. There could be other reasons why students with poor parents are more often sick Question: How sure can Harriet be that the old medicine is better than the new medicine? Options: A) Not so sure, because Harriet needs to know the results of all other studies comparing the new medicine with the old medicine B) Very sure, because she heard about the study on the radio C) Not so sure, unless she finds another study with the same results D) Very sure, because this is a new study Answer:  17. Doctors studied people with stomach pain before and after they took a new medicine. After taking the new medicine, many people felt less pain. Question: Can we be sure that the new medicine is good for treating stomach pain? Options: Question: Based on this link between using cream and smooth skin, is Judith correct? Options: A) It is not possible to say. 
It depends on how many younger and older girls there are B) It is not possible to say. There might be other differences between the younger and older girls C) Yes, because the younger girls use cream on their skin and they have smoother skin D) No, Judith should try using the cream herself to see if it works for her Answer:  20. Dr. Wasswa has done a research study giving a new medicine to people who were vomiting. Some of the people stopped vomiting after they got the new medicine. Dr. Wasswa says that this means that the medicine works. Options: A) No. The people who used the medicine were not compared with similar people who did not use the medicine B) Yes, some of the people stopped vomiting C) No, since not all of the people stopped vomiting Answer:  Instructions: Read the text at the top of the box. Then read the text in each row and choose what you think is the best answer by making a tick in one of the two boxes. There should be only one tick in each row. 21. When you are sick, sometimes people say that somethinga treatment -is good for you. Below you will find different things people say about such treatments. Do you agree or disagree with each of the following things being said? For each thing being said below, use a tick to mark whether you "agree" or "disagree". Things being said: I agree I disagree 21.1 Peter says that if a treatment works for one person, the treatment will help others too 21.2 Alice says that if some people try the treatment and feel better, this means that the treatment helps 21.3 Habibah says that, just because many people are using the treatment, this does not mean that it helps 21.4 Julie says that companies sometimes say that the treatment they make is best just to make money 22. A doctor wanted to know if a new medicine for treating headaches is better than an older medicine. The doctor did a research study, comparing the two medicines. Would the actions below make you more sure or less sure about the results of the study? For each action below, use a tick to mark whether you think the action would help you become "more sure" or "less sure". 23. To know if a treatment helps you, the treatment should be compared in research studies to other treatments (fair comparisons). Below you will find different things people say about such studies. Do you agree or disagree with each of the following things being said? For each thing being said below, use a tick to mark whether you "agree" or "disagree".
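As a purely illustrative footnote to the allocation procedure described in the Methods of this protocol (stratified random allocation of the 120 school clusters by geographical setting and ownership), the Python sketch below shows one generic way to perform a stratified 1:1 cluster allocation. The school records, seed and split rule are assumptions for the example and do not reproduce the trial's actual randomisation.

```python
# Generic stratified 1:1 cluster allocation sketch (illustrative only).
import random
from collections import defaultdict

def allocate(schools, seed=2016):
    """schools: iterable of dicts with 'id', 'setting' and 'ownership' keys."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for s in schools:
        strata[(s["setting"], s["ownership"])].append(s["id"])
    arms = {}
    for members in strata.values():
        rng.shuffle(members)                 # randomize order within each stratum
        half = len(members) // 2
        for sid in members[:half]:
            arms[sid] = "intervention"       # IHC resources and lessons
        for sid in members[half:]:
            arms[sid] = "control"            # existing curriculum
    return arms
```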
v3-fos-license
2016-05-04T20:20:58.661Z
2015-03-03T00:00:00.000
262641911
{ "extfieldsofstudy": [ "Medicine", "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0118290&type=printable", "pdf_hash": "cfd3fcdbfec154cf3b3c3f1453c3e26403663c30", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42031", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "sha1": "cfd3fcdbfec154cf3b3c3f1453c3e26403663c30", "year": 2015 }
pes2o/s2orc
Discovery of Novel New Delhi Metallo-β-Lactamases-1 Inhibitors by Multistep Virtual Screening The emergence of NDM-1-containing multi-antibiotic-resistant "Superbugs" necessitates the development of novel NDM-1 inhibitors. In this study, we report the discovery of novel NDM-1 inhibitors by multi-step virtual screening. From a 2,800,000-compound virtual drug-like library selected from the ZINC database, we generated a focused NDM-1 inhibitor library containing 298 compounds, of which 44 chemical compounds were purchased and evaluated experimentally for their ability to inhibit NDM-1 in vitro. Three novel NDM-1 inhibitors with micromolar IC50 values were validated. The most potent inhibitor, VNI-41, inhibited NDM-1 with an IC50 of 29.6 ± 1.3 μM. Molecular dynamics simulation revealed that VNI-41 interacted extensively with the active site. In particular, the sulfonamide group of VNI-41 interacts directly with the metal ion Zn1 that is critical for catalysis. These results demonstrate the feasibility of applying virtual screening methodologies to identify novel inhibitors of NDM-1, a metallo-β-lactamase with a malleable active site, and provide a mechanistic basis for the rational design of NDM-1 inhibitors using sulfonamide as a functional scaffold. Introduction Antibiotics used to treat or prevent infectious diseases have revolutionized the practice of medicine. Without them, numerous modern therapies such as organ transplantation and cancer chemotherapy would simply not be possible [1]. Unfortunately, overuse and/or misuse of antibiotics in farming and clinical practice has resulted in the rise of multidrug-resistant bacterial strains, among which gram-negative bacteria producing β-lactamases have become the most prevalent [2][3][4]. According to the U.S. Centers for Disease Control and Prevention, more than two million people are affected by antibiotic-resistant infections and at least 23,000 people die each year in the United States [5].
β-lactamases have been classified into four classes (A-D) based on their structures (Ambler classification) [6][7][8], among which the class B enzymes, also known as metallo-β-lactamases (MBLs), require a bivalent metal cation, normally Zn2+, as a cofactor and are further classified into B1, B2, and B3 subclasses [9]. B1 and B3 MBLs, which contain two zinc binding sites, exhibit a broad substrate spectrum that includes the last line of antibiotic defense, the carbapenems, and therefore pose a looming pandemic threat [10]. One typical example is the global dissemination of bacteria harboring the B1 subgroup member New Delhi metallo-β-lactamase (NDM-1). These bacteria often carry several different resistance genes in addition to the NDM-1 gene, blaNDM-1, and are resistant to almost all antibiotics and only partially susceptible to colistin, tigecycline, and fosfomycin, creating enormous challenges in managing these multi-resistant "Superbugs" [11][12][13]. In addition, colistin- and tigecycline-resistant NDM-1-harboring bacteria have been reported [14][15][16], and the NDM-1 gene has been isolated from more than 11 bacterial species in the natural environment [17][18][19]. Given the biomedical importance of MBLs, the development of MBL inhibitors has become an urgent need. High-throughput screening (HTS) and virtual screening (VS) are the two main methods to identify novel scaffolds for drug discovery. Indeed, HTS has successfully identified a number of MBL inhibitors [20][21][22], yet structure-based drug design and virtual screening have not been widely applied to MBL inhibitor development [23]. Since force fields and zinc parameters have been optimized for metalloenzymes, molecular docking has proved to be a feasible way to find inhibitors or predict actual substrates of metalloenzymes [24][25][26][27][28]. Five inhibitors of CcrA, a B1 subclass MBL, with apparent Ki values less than 120 μM have been identified by virtual screening [29]. Recent high-resolution X-ray crystallographic analyses of multiple three-dimensional structures of NDM-1 reveal that it shares a common structural fold with other B1 MBLs [30][31][32][33][34]. In addition, all three subclasses of MBLs share a common substrate hydrolysis mechanism [31]. These findings suggest that discovery of NDM-1 inhibitors via structure-based design and in silico screening may be productive. The aim of this study is to identify novel inhibitors of NDM-1 using virtual screening methods. Bacterial strains and plasmids MBL DNA sequences of NDM-1, VIM-2 and SIM-1 lacking the signal sequences were codon-optimized for expression in E. coli, chemically synthesized and inserted into pUC-19. Sequencing-validated MBL genes were further cloned into the pET28a expression vector using the NcoI/XhoI sites. E. coli DH5α (ATCC 53868) was used routinely as the host for molecular cloning and plasmid amplification, while E. coli BL21 (DE3) was used for MBL expression. Bacteria were grown in Luria-Bertani (LB) medium supplemented with appropriate antibiotics. Protein expression and purification Recombinant NDM-1, VIM-2 and SIM-1 proteins were expressed in E. coli
BL21 (DE3) cells by induction with 0.1 mM IPTG for 10 h at 2°C when the optical density (OD 600 nm) reached 0.7-0.8. Cells were harvested and cell lysate was prepared by sonication at 4°C. The protein expression levels in the soluble and insoluble fractions were analyzed by 12% SDS-PAGE after ultracentrifugation. Each MBL was purified from the lysate supernatant using a Ni2+-affinity column (Bio Basic Inc, Markham, Canada). All three recombinant proteins showed abundant expression after induction for 10 h and could be purified with an estimated purity of around 95% (Figure A in S1 File). MBL activity analysis was carried out using the nitrocefin assay at 30°C in 300 μL HEPES buffer (30 mM HEPES, 10 μM ZnCl2, 100 mM NaCl, 20 μg/mL BSA, pH 6.8) at 482 nm with a UV-2400PC spectrophotometer (Shimadzu, Tokyo, Japan). The Michaelis constants for NDM-1, VIM-2 and SIM-1, determined under initial velocity conditions by Lineweaver-Burk plots, were 9.54 ± 0.43 μM, 14.48 ± 0.68 μM and 31.3 ± 0.24 μM, respectively. These values are consistent with those previously reported [33,35]. Selection and preparation of structure models Twenty-two reported NDM-1 X-ray crystallographic structures were analyzed [30][31][32][33][34][36] (Table A in S1 File) using the protein alignment and superpose biopolymer modules in the Molecular Operating Environment suite (MOE, version 2009.10; Chemical Computing Group Inc, Montreal, QC, Canada) or the Protein Model Portal (PMP) [37] to facilitate the structure-based virtual screening. Structure 3Q6X (Figure B.A in S1 File), with a resolution of 1.30 Å, was selected for the screening process. The structural file contains two almost identical NDM-1 molecules with an RMSD value of 0.21 Å for Cα atoms [31]. The second structure, after removal of ligands and non-conserved water molecules in the active site, was processed with Protonate 3D and Energy Minimize using MOE. All hydrogen atomic coordinates were refined by the conjugate gradient method using the MMFF94x (Merck Molecular Force Field 94x) force field [38]. The other 21 NDM-1 structures were also processed with ligand and solvent deletion, Protonate 3D and energy minimization using the same parameters and superposed together. Initial virtual screening Hydrolyzed ampicillin, L-captopril, ampicillin and nine other β-lactams (cefepime, cefotaxime, ceftazidime, cefuroxime, faropenem, imipenem, meropenem, penicillin G, piperacillin), with structures downloaded from the ZINC database, were docked into the NDM-1 active site using different docking simulations in MOE and docking protocols in Discovery Studio (ADS, version 2.5; Accelrys Inc, San Diego, USA) according to the following procedure: the docking box was generated around the active site using the site finder module in MOE (Figure B.B in S1 File). The dimensions of the docking box were manipulated to accommodate all the amino acid residues present in the active site. Default parameters were used for all computational procedures unless otherwise stated. A drug-like compound subset of the ZINC database containing 2,800,000 compounds served as the screening library [39]. The hits with firm binding conformations were collected and redocked into the active site using the libdock protocol in ADS. Those compounds with high libdock scores were selected as a focused library for further analysis.
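For reference, the Michaelis constants quoted above were obtained from initial velocities by Lineweaver-Burk analysis; the short Python sketch below shows a generic double-reciprocal fit of that kind on synthetic data. The substrate concentrations, rates and function name are assumptions for illustration and are not the authors' analysis.

```python
# Generic Lineweaver-Burk (double-reciprocal) fit: 1/v = (Km/Vmax)(1/[S]) + 1/Vmax.
import numpy as np

def lineweaver_burk(s, v):
    """Estimate (Km, Vmax) from substrate concentrations s and initial rates v."""
    x, y = 1.0 / np.asarray(s, float), 1.0 / np.asarray(v, float)
    slope, intercept = np.polyfit(x, y, 1)
    vmax = 1.0 / intercept
    km = slope * vmax
    return km, vmax

s = np.array([2.5, 5.0, 10.0, 20.0, 40.0, 80.0])   # μM substrate (toy values)
v = 1.0 * s / (10.0 + s)                           # synthetic data with Km = 10 μM, Vmax = 1
print(lineweaver_burk(s, v))                       # approximately (10.0, 1.0)
```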
Docking results analysis Energy calculations and analysis of docking poses were performed on MOE.The resulting protein-inhibitor or protein-β-lactam complexes were analyzed using the protein-ligand interaction fingerprint (PLIF) implemented in MOE [40].The hydrolyzed ampicillin and NDM-1 residue interaction energies were calculated for the docked pose with the least RMSD value, assigning energy terms in kcal mol -1 for each residue.LigX-interaction application was used to provide ligand-interaction diagram to understand the binding type of those docked hits [41]. IC 50 Determination Ten different concentrations of compounds VNI-24, VNI-34 and VNI-41 ranging from 0 μM to 45.0 μM were used to determine the half-maximal inhibitory concentration (IC 50 ) against NDM-1 (1 nM) using nitrocefin (20 μM) as substrate.The assay was performed in the buffer for inhibitor screening in the presence or absence of 0.01% Triton X-100 [42].Each data point was performed in quadruplicate and the inhibition data were analyzed by a standard dose response curve fitting in the Origin 8.0 software. Analysis of NDM-1/VNI-41 complexes by molecular dynamics study NDM-1 and VNI-41 after optimized with partial charge were then subjected to molecular dynamics simulation (MD) employing the NVT (N = constant number, V = volume, and T = temperature) statistical ensemble and Nosé-Poincaré-Andersen (NPA) algorithm with the periodic boundary conditions applied to analysis stability of binding model of the compound.The complex was solvated in water molecules in a sphere mode with a 10 Å width layer.The molecular dynamics simulations were performed at a temperature of 310 K for 2000 ps.The data of position, velocity and acceleration were saved every 0.5 ps. NDM-1 structural superposition and optimization Structural superposition of the 22 reported NDM-1 structures using force realignment and refined with gaussian distance weight showed that most of the independently solved structures (except 3S0Z) shared a high degree of structural similarity with each other (Figure 1A, 1B, Figure C in S1 File).The average pair-wised RSMD for all atoms in these structures is 0.801 Å (Fig. 1A) and 3Q6X showed a high similarity with 4EYF, 4HOD, 4HL1, 4HL2, 4EYB and 4EY2 with RMSD values below 0.3 Å (Fig. 1A, Table A in S1 File).A notable variation is the distance between zinc ions ranging on average from 3.48 to 4.6 Å (Table A in S1 File, Figure B.C in S1 File).This indicates that the metal ions are relative flexible to move within the active site.While NDM-1 structures complexed with hydrolyzed antibiotics share a greater metal-ion separation (4.53 ± 0.11Å, 3Q6X 4.59 Å) with a slight outward flexing of His120 and His122, the binuclear Zn distance appears to be significantly less (3.72 ± 0.17Å, p < 0.001) in most apo-NDM-1 structures (Table A in S1 File).Since the binuclear Zn distance is compatible with μ-η1:η1carboxylate coordination, such distance changes also prevail in all other binuclear Zn MBLs [43], and this inherent flexibility of metal ions in the active site is likely important for substrate binding and turnover [36,44].4H0D and 4HL1, where the Zn ions replaced by Mn 2+ or Cd 2+ showed a similar hydrolyzed ampicillin binding framework as 3Q6X [45].A detailed RMSD analysis for 3Q6X reveals that residues in loop L3 (Leu65-Gly73) adjacent to the active site possess greatest variation (Fig. 
1C).Moreover, L3 displayed the greatest deviations with the exception of the N terminal signal peptide when structures with hydrolyzed antibiotics (4HKY, 4EY2, 4EYF, 4HL1, 4HL2, 4H0D.4EYB and 3Q6X) were superposed with apo structures while L3 showed less difference among 3Q6X, 4HL2 (Hydrolyzed Ampicillin), 4EYB (Hydrolyzed Oxacillin), 4EYF (Hydrolyzed Benzylpenicillin) and 4EY2 (Hydrolyzed Methicillin) . These results suggest that L3 is involved in substrate binding in NDM-1.Loop L10 (Gly205-His228) showed the subordinate deviation after L3 among NDM-1 structures (Fig. 1C, Figure C.C in S1 File).Asn220 in L10 interacts with Zn1 to provide an oxyanion hole in polarizing the lactam carbonyl upon binding, and facilitates nucleophilic attack by the adjacent hydroxide [32].Regions of Ala121-Met129 flanking NDM-1 active site Quantitative stability-flexibility relationship analysis revealed that NDM-1 had several regions with significantly increased rigidity when compared with other four B1 MBLs [46].In most NDM-1 structures (except mutants and 3S0Z), RMSD of these regions were blow 0.5 Å (Figure C.A in S1 File).These evolutionary traits of NDM-1, with more rigid regions out of the active site together with the more plastic and more hydrophobic L3 loop [31] as compared to other MBLs, may provide more flexibility to accommodate a broader spectrum of substrates. Based on our detailed analysis, many NDM-1 structures shares some identical waters in the active site (Fig. 1D), which may play a role in the overall structure stability or in substrate binding and product turnover.The bridging water in the active site showed different distances among these structures, a likely consequence of change in distance between the metal ions (Figure B.D in S1 File) or the pH conditions that the protein crystallization were used.Unlike most other MBLs, NDM-1 functions well at high pH conditions [47].Our analyses suggest that 3Q6X is of high resolution and possesses a high degree of the structural similarities with other NDM-1 structures, therefore is suitable for docking and screening studies. Molecular docking Hydrolyzed ampicillin, L-captoril, ampicillin and other 9 β-lactams (cefepime, cefotaxime, ceftazidime, cefuroxime, faropenem, imipenem, meropenem, penicillin G, piperacillin) were docked into NDM-1 active site using different docking simulations in MOE and docking protocols in ADS to evaluate the ability of these programs to reproduce the experimental binding modes.For all programs the binding modes of the docked hydrolyzed ampicillin structures were found in a narrow range of RMSDs (Fig. 2A).The RMSDs of hydrolyzed ampicillin were 1.53-2.07Å, 1.86-2.62Å, 1.98-2.78Å and 1.79-2.31for Triangle Matcher, Alpha PMI, Alpha triangle, Proxy triangle placement in MOE respectively, while 1.46-2.65Å for libdock in ADS receptor-ligand interactions protocols.In general, poses with an RMSD < 2 Å are considered a success, and dockings with RMSDs between 2 and 3 Å are considered a partial success [48].For L-captopril, the RMSD values between poses docked into the active sites and the determined ligand structure in 4EXS arranged from 0.72 to 2.03 Å and the 2D interaction map of the docked L-captopril was similar to that in 4EXS (Fig. 3).Hydrolyzed ampicillin-residue interaction energies for the best docked pose and the structure reference were calculated [49].The interaction of best docked hydrolyzed ampicillin and ampicillin showed similar interactions as revealed by the X-ray structure (Fig. 
2B, 2C, 2D).Among the conserved residues in the active site, Leu65, Gln123, Asp124, His189, Cys208, Lys211, Asn220 interact with the hydrolyzed ampicillin in both the structure complex and the docked pose.PLIF analysis showed that Gln123, His189 and Asn220 interacted with the docked hydrolyzed ampicillin at a high frequency (Fig. 4A).The residue interaction energies between NDM-1 and hydrolyzed ampicillin, and the 2D interaction map (Fig. 2E) well defined the dock results. After ampicillin and other β-lactams were docked into the active site, the 2D binding pattern of these binding pose were analyzed by the ligX-interactions and RLIF.404 docked poses of 10 different β-lactams showed that those substrates interacted with His120, His122, Asp124, His189, Lys211, Ser249, Asn220, Zn1 and Zn2 at a high frequency compared with other residues around the docking site.On the other hand, other residues Ile35, Phe70, Asp212 and Ser217 interacted less frequently (Fig. 4B).Docking poses also formed the inhibited conformers at a low frequency (< 10%), in which the carboxylic group of β-lactams coordinated with two zinc ions and kept the amide group away from the metal ions as described before [50].These findings, along with the high flexibility of NDM-1 active site, is consistent with NDM-1's broad substrate spectrum, as well as the fact that most reported MBL inhibitors interact with Zn 2+ or Zn 2+ chelating residues [51]. Structure-based screening and analysis Triangle Matcher placement method, followed by molecular mechanics refinement and scoring, was used for the first round docking based screening process.The placement stage was scored by E_place.Binding free energy values (G binding ) was quantified using London dG [52] and Affinity dG [53].After 1000 binding orientations for each compounds were refined, 30 conformations with lowest binding free energy, lowest affinity dG and London dG values was produced.In the screening process, we adopted the strategy that the most anticipated hits would exhibit the desirable scores in all the evaluation algorithms and be in conformity with screening threshold of different screening methods. Docking poses without major clashes were scored for receptor complementarity and were further screened using the criteria that affinity dG value was less than -10 kcal/mol and that london dG was less than -20 kcal/mol.E_refine for refinements using GridMIn was limited to 190 kca1/mol.E_conf, the energy of the conformer calculated at the end of the refinement was In the second round of screening, the receptor-ligand interactions protocols of ADS 2.5 were used to dock these 2218 compounds into the docking box of NDM-1.For each ligand, another 30 different conformations for each compound were generated by the libdock process.On the basis of the docking scores, the compounds were ranked, and 1388 conformations of 298 compounds with libdock score above 150, Absolute Energy under 200 kcal/mol; Relative Energy under 25 kcal/mol were selected. The 1388 screened conformations displaying in a camel-like appearance (Figure D.A, D.B & D.C in S1 File) were further analyzed by PLIF.During the PLIF screening, we focused on interactions with His122, His189, Asn220, His250, Zn1, and Zn2 because all these elements showed high interaction frequency in the β-lactam based docking (Fig. 
4B).1,388 conformations (poses) of 298 compounds satisfied with above specific binding requirement were selected as a focused library, and most of which also interact with Ile35, Gln123, Asp124, Lys211, Ser217, Gly219 and Ser251 at a high frequency (Fig. 4C).In addition, these molecules were inspected visually for features not captured in the docking calculation. Activity of the three compounds against VIM-2 and SIM-1 was also tested.Within the aqueous solubility limit of these compounds (Table C in S1 File), none of the three compounds showed significant inhibition for SIM-1.While 45 μM VNI-24 and VNI-41 inhibited VIM-2 activity by 19.6% ± 3.1% and 34.2% ± 5.2%, respectively, VNI-34 was ineffective in blocking VIM-2 activity.These results suggest that VNI-24, VNI-34 and VNI-41 are selective NDM-1 inhibitors capable of discriminating among various MBLs.Taken together, our study shows that it is feasible to develop novel NDM-1 specific inhibitors via in silico screening. Molecular dynamic study of the NDM-1/VNI-41 complex To investigate stability of the active site cavity in response to the binding of VNI-41, the most potent NDM-1 inhibitor validated in our study, MD simulations were performed.RMSD for zinc ions, VNI-41 and the active site atoms (atoms in Fig. 1D) of NDM-1 from their initial positions (t = 0) was calculated.Overall, the RMSD values of NDM-1 active site fluctuated from 0.5 to 2.5 Å and reached a steady state (Fig. 6A) that the systems were equilibrated and the predicted pose of each inhibitor was compatible with the pocket in the catalytic cavity of NDM-1 structure.Close examination of MD simulation snapshots (N = 10, with different time intervals) of the VNI-41/NDM-1 complex relative to the original pose revealed a coordinated movement of L3, L10 and L12 around the active site (Fig. 7A, 7B).The distance of the zinc ions maintained a steady state during the dynamic simulation and the RMSD of the zinc was less than 0.5 Å.While the ligand underwent a maximal change with a RMSD value of 1.6 Å, the active site showed a change with a RMSD value about 2.0 Å.The conservative water in the active site fluctuated and this may be caused by the solvent used to dissolve NDM-1 in the MD simulation competing with the water kept before MD simulation (Fig. 6A). It is reported that the expanded cavity volume of the active site in the surrounding loops (Loop L1, L3, L10 and L12 in Figure B.A in S1 File) is important for the broad substrate spectrum of NDM-1 [33].To investigate residue movement in the active site cavity in response to VNI-41 binding, a surface analysis was performed on apo NDM-1 and NDM-1/NVI-41 complex after MD simulation [54].Apo NDM-1 has the largest cavity (surface area = 392.4Å 2 ; volume = 693.2Å 3 ) while NDM-1-hydrolyzed ampicillin complex has a smaller cavity after removing the ligand (surface area = 376.2Å 2 ; volume = 633.5 Å 3 ).Active site cavity of NDM-1-VNI-41 complex is the smallest after removing the docked ligand (surface area = 345.3Å 2 ; volume = 596.3Å 3 ) (Fig. 7C, 7D, 7E).Our study shows that VNI-41 clamped into the groove surrounded by active site, induced L3 and L10 movement and narrowed the active cavity (Fig. 7F). 
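As a concrete illustration of the RMSD tracking used in the molecular dynamics analysis above, the short Python/numpy sketch below computes the RMSD of a coordinate set relative to its initial frame after optimal superposition with the standard Kabsch algorithm. The coordinates are random placeholders, and the MOE implementation used in this study may differ in detail.

import numpy as np

def kabsch_rmsd(P, Q):
    # RMSD between two (N, 3) coordinate sets after optimal rigid-body superposition.
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # rotation mapping P onto Q
    return np.sqrt(((P @ R.T - Q) ** 2).sum() / len(P))

# Placeholder coordinates standing in for active-site atoms at t = 0 and a later frame.
rng = np.random.default_rng(1)
frame0 = rng.normal(size=(30, 3))
frame_t = frame0 + rng.normal(scale=0.2, size=(30, 3))
print(f"RMSD = {kabsch_rmsd(frame_t, frame0):.2f} (coordinate units)")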
Interactions between NDM-1 and compound VNI-41 among the MD generated steady conformations during MD simulation were analyzed.The benzoxadiazole moiety binding to NDM-1 hydrophilic site adopted an appropriate conformation with the double π-π stacking interactions with His122, and the ring-to-ring distances were 3.03 and 2.79 Å for the five member ring and the six number ring interacted with His122, respectively (Fig. 6B, Fig. 6C).Moreover, one oxygen atom from the sulfonamide group interacted with Zn2 via a metal contact (score 100%, distance 2.5 Å), forming a solvent contact with the bridge water (H 2 O in the 2D interaction map) (score 33%, distance 3.0 Å), and ionic contacted with His122, His120 and His189 (score 61%, distance 1.9 Å; score 42%, distance 1.8 Å; score 25%, distance 1.8 Å).The other oxygen atom interacted with Asn220 (score 69%, distance 2.06 Å).The naphthalene group clamped into hydrophobic groove around the active site and contacted with His250 and Ile30 (Fig. 6D).Mechanisms study has showed that the bridging hydroxide-zinc serving as the general base while a surrounding water molecule serving as the nucleophile responsible for the nucleophilic attack which results in a negatively charged intermediate stabilized by oxyanion hole of NDM-1 [45].In the conformation of VNI-41 interacting with NDM-1, the bridge water formed solvent contacting with oxygen atom from sulfonamide group may prevent the proton transfer from the surrounding water to the bridging water in the active site. To date, various sulfamide/sulfonamide/sulfamate containing metalloenzyme inhibitors, such as diuretic and antiglaucoma agents (acetazolamide, methazolamide, dichlorophenamide, and brinzolamide), have been clinically used to inhibit carbonic anhydrases [55].Sulfamide/ sulfonamide/sulfamate containing MBL inhibitors have also been reported.The crystal structure of 4-nitrobenzenesulfonamide interacting with BJP-1, a B3 subclass MBLs, reveals that binding of sulfonamide changes coordination number and geometry for Zn1 by adding one oxygen atom of sulfonamides to the Zn2 and the nitrobenzene moiety form a hydrophobic pocket in the active site [56].MBL inhibitor DansylCnSH can be docked and shown to interact with the core region of the active site of IMP-1 via sulfamide [57].Since the zinc ions of NDM-1 are essential for catalytic activity and participate directly in the catalysis, the ability of the sulfonamide group of VNI-41 to interact with the Zn ion in the active site of NDM-1 suggests that sulfamide/sulfonamide/sulfamate containing compounds may represent promising leads for developing clinically effective NDM-1 inhibitors. In summary, we have identified novel inhibitors of NDM-1 using a multistep docking methodology.Dynamic and ligX-interaction analyses have revealed that VNI-41 interacts with Zn1.This study has demonstrated the feasibility of identifying inhibitors of NDM-1 with a plastic active site by virtual screening.Further investigations and future modifications studies for rational design of NDM-1 inhibitors using sulfonamides as a functional scaffold will lead to a better understanding of their exact mechanism of action, laying a solid foundation for further structure-based hit-to-lead optimization. Fig 1 . Fig 1. 
Comparative analyses of 21 published NDM-1 X-ray crystal structures.(A) Pairwise RMSD matrix table of NDM-1 structures superimposed with force realignment method and refine with Gaussian Weights in MOE.PDB codes for structures with hydrolyzed substrate in the active site are highlighted in red.(B) Superposition of the 22 NDM-1 structures.3S0Z, 4GYU, 4GYQ, 3SPU, and 3Q6X2 are highlighted in thick line and colored as shown in the index panel.(C) The RMSD-residue index 3D waterfall plots of NDM-1 structures compared with 3Q6X structure.(D) Superimposition of the active site among the reported NDM-1 structures (without 3S0Z and NDM-1 mutants 4GYQ and 4GYU) showing the metal chelating residues (Oliver) and conserved water molecules (Red) in the active site of NDM-1 structures.Residues from 3Q6X are highlighted in green.doi:10.1371/journal.pone.0118290.g001 Fig 2 . Fig 2. Docking of the hydrolyzed ampicillin in the active site of NDM-1.(A) Molecular surface of NDM-1 (PDB 3Q6X) active site with docked hydrolyzed ampicillin.The structurally determined hydrolyzed ampicillin is shown in gray stick representation while docked poses are shown in colored stick.2D ligand-protein interaction maps showing the detailed binding pattern of structurally determined hydrolyzed ampicillin (B), docked hydrolyzed ampicillin (C) and docked ampicillin (D) in the active site of 3Q6X.(E) Residue-ligand interaction energies between NDM-1 (3Q6X) and hydolyzed ampicillin (vdw_ref) or docked hydolyzed ampicillin (vdw_pose).The hydrolyzed ampicillin and NDM-1 residue interaction energies were calculated for the best pose (RMSD = 1.53 Å). doi:10.1371/journal.pone.0118290.g002 Fig 3 . Fig 3. L-captopril docked in the active site of NDM-1.2D ligand-protein interaction maps showing the detailed binding pattern analysis of structurally determined (A) and docked (B) L-captopril.(C) Molecular surface of NDM-1 active site with docked hydrolyzed ampicillin.doi:10.1371/journal.pone.0118290.g003 Fig 4 . Fig 4. PLIF analysis of the docking process.The interaction frequency of individual residue with the docking poses of (A) hydolyzed ampicillin; (B) 10 beta-lactams (ampicillin, cefepime, cefotaxime, ceftazidime, cefuroxime, faropenem, imipenem, meropenem, penicillin G, piperacillin); (C) 298 virtual hit compounds.Each columns of every residue are denoted by some of the following characters to indicate the interaction role of each residue: side chain hydrogen bond acceptor, backbone hydrogen bond donor, backbone hydrogen bond acceptor, solvent hydrogen bond, ionic attraction or surface contact to the atom of the residues.doi:10.1371/journal.pone.0118290.g004 Fig 7 . Fig 7. Structure movements during molecular dynamic simulation process.Overlaid snapshots of the ribbon diagrams of NDM-1 Cα atoms and VNI-41 compound around the active site before (A) (snapshots interval 10 ps, N = 10) and after (B) (snapshots interval 150 ps, N = 10) the system reached equilibrium.Surface analyse of the active site cavity was performed on apo NDM-1(C), NDM-1/hydrolyzed ampicillin (D) and NDM-1/NVI-41(E).Atoms in the active site active site cavity are highlighted in colored balls.(F) The ribbon diagrams showing the active site associated loops (L3, L6, L10) moving toward the ligand and contraction of the active site.The structure of apo NDM-1, NDM-1/hydrolyzed ampicillin and NDM-1/NVI-41 is colored in blue, green and black respectively.doi:10.1371/journal.pone.0118290.g007
A hierarchical inventory of the world’s mountains for global comparative mountain science A standardized delineation of the world’s mountains has many applications in research, education, and the science-policy interface. Here we provide a new inventory of 8616 mountain ranges developed under the auspices of the Global Mountain Biodiversity Assessment (GMBA). Building on an earlier compilation, the presented geospatial database uses a further advanced and generalized mountain definition and a semi-automated method to enable globally standardized, transparent delineations of mountain ranges worldwide. The inventory is presented on EarthEnv at various hierarchical levels and allows users to select their preferred level of regional aggregation from continents to small subranges according to their needs and the scale of their analyses. The clearly defined, globally consistent and hierarchical nature of the presented mountain inventory offers a standardized resource for referencing and addressing mountains across basic and applied natural as well as social sciences and a range of other uses in science communication and education. Background & Summary In recent years, the number of scientific reports, syntheses, assessments, and cross-scale comparisons of patterns and trends in natural and social systems has increased rapidly. These efforts represent a wealth of knowledge and unique resources for agenda setting and negotiations. However, a growing challenge inherent to these contributions lies in the partitioning of the world into relevant and comparable (mapping) units for data aggregation, analysis, and reporting. Different means of regionalization are adopted for various purposes and in different fields. General purpose approaches include administrative units 1 , watersheds or river basins 2 , landforms 3 , ecoregions 4 , terrestrial habitats 5 , or the GEO, IPBES or IPCC (sub-) regions and units of analysis. Examples of more specific partitioning schemes include the World Geographical Scheme for Recording Plant Distributions 6 . Any layer that shows the location and extent of geographical features allows the detailed spatial localization of field data 7 and their analysis at different resolutions. However, the use of different regionalizations also leads to different results, which calls for a clear understanding of the spatial geometry and extent of mapping units and a well-founded rationale for their adoption 8 . In mountains, the integrative and comparative biogeographical and socio-ecological sciences needed for global reporting (e.g., Aichi target 14 on the restoration and safeguard of essential services, SDG target 15.4 on the conservation of mountain ecosystems) and safeguard have long been hampered by a lack of consensus about the definition of mountains [9][10][11][12] . Indeed, assigning an outer border to mountainous regions always remains arbitrary as mountain ranges are inherently fuzzy physiographical objects that often gradually merge into the surrounding terrain 13 , and because there is a considerable amount of (personal, cultural, and regional) subjectiveness in deciding what a mountain is 9 . This long-standing lack of consensus, which accounts for an approximate 60% difference between estimates of global mountain coverage 8,10,12 , served until recently as an explanation for the absence of a global inventory of the world's mountains. In 2017, Körner and colleagues filled this gap with the first of its kind global mountain inventory (GMBA Inventory v1.0) 10,14 . 
This inventory was based on a definition of mountains proposed by Körner et al. in 2011 15 (GMBA Definition v1.0) and consisted of 1003 hand-drawn shapes (GIS vector polygons) representing mountain ranges. An updated version including 1047 shapes (v1.4) was released in 2019. Here we propose a new inventory of the world's mountains based on a refined mountain definition. Our new inventory delimits 8616 mountain ranges and overcomes several of the limitations of the first one by introducing a hierarchical structure and using rivers to establish the borders between contiguous mountain ranges ('inner borders' , Fig. 1). This ensures global consistency in the level of detail and resolution, and a high accuracy. The hierarchical structure allows customized aggregation into specific units of analysis for global mountain sciences, policy, advocacy, and collective action 16,17 . Our refined mountain definition, in turn, distinguishes itself from existing ones by relying on a set of parameters derived directly from digital elevation models (DEM), and not on expert judgement. We developed interactive visualizations of the resource on the GMBA mountain inventory page (https:// www.earthenv.org/mountains) 18 of EarthEnv to support a versatile exploration of potential uses and download the shapefiles. The dedicated EarthEnv app currently allows a choice of preferred hierarchical level and provides basic statistics. Further tools built on top of this initial browser are in development. Additionally, a set of R functions is provided on Github (https://github.com/GMBA-biodiversity/gmbaR) to work with the inventory. Applications for which this inventory is particularly well-suited include the geolocation of data and knowledge pertaining to specific mountain ranges (e.g. species occurrence and distribution), the extraction of data from spatial layers (e.g. human population and settlements) at the level of named mountain ranges, as well as the spatially-explicit hierarchical aggregation of data for analysis of and reporting on patterns in mountain social-ecological systems. The reference to, or application of, the GMBA Inventory v1.0 in more than 90 studies across the natural and social sciences (See https://www.gmba.unibe.ch/services/tools/mountain_inventory, for a list of publications) and its uptake on several online platforms attest to the usefulness of such a standardized tool. Its application also served to build a community of users whose feedbacks informed the development of the inventory we present here. By offering the flexibility for both the aggregation and the disaggregation of data and knowledge at different scales, our new inventory greatly expands the scope of potential applications of such a tool. Methods The generation of this map of the world's mountains consisted of five steps ( Fig. 1): (i) the identification and hierarchisation of named mountain ranges and the recording of range-specific information; (ii) the manual digitization of the ranges' general shape; (iii) the definition of mountainous terrain (and the inventory's outer borders) using a DEM-based algorithm; (iv) the automatic refinement of the digitized and named ranges' inner borders; and (v) the preparation of the final layers. The resulting products consist of a refined mountain definition (GMBA Definition v2.0), two versions of the inventory (GMBA Inventory v2.0_standard & GMBA Inventory v2.0_broad), and a set of tools to work with the inventories. 
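As an aside on steps (i) and (v), the conversion of recorded parent-child relations into full hierarchical paths, described in detail in the following sections, can be illustrated with a minimal Python sketch. The range names and relations below are made up, and the actual workflow used a recursive SQL query on the Access database rather than Python.

# Toy parent-child table (child -> parent); None marks a top-level unit.
# Names are illustrative and not taken from the GMBA "Mountain database".
parents = {
    "Europe": None,
    "Alps": "Europe",
    "Eastern Alps": "Alps",
    "Hohe Tauern": "Eastern Alps",
}

def build_path(name, parents):
    # Walk up the parent links and return the path from the top level downwards.
    path = []
    while name is not None:
        path.append(name)
        name = parents[name]
    return list(reversed(path))

for unit in parents:
    print(" > ".join(build_path(unit, parents)))
# e.g. Europe > Alps > Eastern Alps > Hohe Tauern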
All identified mountain ranges were recorded in a Microsoft Access relational database ("Mountain database", see below) and given a name, a unique 5-digit identifier (GMBA_V2_ID), and the corresponding Wikidata unique resource identifier (URI), when available. This URI gives access to a range's name as well as to its Wikipedia page URL in all available languages and lists other identifiers for given mountain ranges in a variety of other repositories such as GeoNames or PEMRACS. The primary mountain range names were based on the resources used for range identification and were preferably recorded in English. Names used nationally, locally, as well as/or by indigenous people and local communities were extracted from Wikidata and recorded in a separate attribute field. In the process of cataloguing, we attributed a parent range to each of the mapped mountain ranges. Information about parent ranges is included in PEMRACS, often also in Wikidata as a property that can be extracted though a SPARQL query, in the corresponding Wikipedia pages description, and in regional hierarchical mountain classifications that exist for the European Alps (SOIUSA), the Carpathians, and the Dinaric Alps. When no such information was available, we relied on other sources of information that we found either using a general web search (leading to specific papers, reports, or web pages on mountain ranges) or by consulting www.nature.com/scientificdata www.nature.com/scientificdata/ (online) topographical maps and atlases at different scales. The information about parent ranges was used to construct a hierarchy of up to 10 levels using a recursive SQL query (see Step v). The result of this step was a relational database with a hierarchy of mountain systems and (sub-) ranges (Fig. 1, "Mountain database"). www.nature.com/scientificdata www.nature.com/scientificdata/ Step ii: Digitization of the mountain ranges. In a second step, we digitized all identified 'childless' mountain ranges (i.e. smallest mapping units, called 'Basic' as opposed to ' Aggregated' in the database) in one vector GIS layer. To do so, we used the Google Maps Terrain layers (Google, n.d.) as background and the WHYMAP named rivers layer 42 as spatial reference since descriptions of mountain range areal extension is often given with reference to major rivers. The digitization, which was done in QGIS 43 using the WGS 84 / Pseudo-Mercator (EPSG 3857) coordinate reference system, consisted in the drawing of shapes (polygons) that roughly followed the core area of each mountain range. In general, the approximate shape and extent of the mountain ranges we digitized could be distinguished based on the terrain structure as represented by the shaded relief background that corresponded to the placement and orientation of the range's name label on a topographical map, atlas or other resource. As the exact placement and orientation of mountain range labels in each specific source can be influenced by cartographic considerations (e.g. avoiding overlaps with other features), the final approximation of the mountain range was obtained by consulting a variety of sources for each mountain range. Occasionally, the mountain terrain's geomorphological characteristics strongly hampered the accuracy of our visual identification of mountain subranges within larger systems. This was particularly the case in old, eroded massifs such as the www.nature.com/scientificdata www.nature.com/scientificdata/ well-defined valleys and have a very complex topography. 
In these cases, we referred to available topographical descriptions of range extent and to the river layer (see above). Other complex regions included Borneo and the Angolan Highlands, whereas subranges in mountain systems such as the European Alps, the Himalayas, and the North American Cordillera were comparatively easy to map. Moreover, the density of currently available mountain toponymical information varied quite strongly between regions. Accordingly, regional variation in the size of the smallest mountain range map units can be considerable. The result of this step was a (manually) digitized vector layer of named mountain ranges shapes ( Fig. 1, "Manual mountain shapes"). Step iii: Definition of mountainous terrain. In a third step, we defined mountainous terrain (GMBA Definition v2.0). To distinguish mountainous from non-mountainous terrain, we developed a simple algorithm which we implemented in ArcMap 10.7.1 44 . This algorithm is based on ruggedness (defined as highest minus lowest elevation in meter) within eight circular neighbourhood analysis windows (NAWs) of different sizes (from 1 pixel (≈ 250 m) to 20 (≈ 5 km) around each point, Fig. 2c) combined with empirically derived thresholds for each NAW (Fig. 2). The decision to use multiple NAW sizes was made because calculating ruggedness based on only a small or a large NAW comes at the risk of identifying the many local irregularities typically occurring in flat or rolling terrain as mountainous or of including extensive flat 'skirts' through the smoothing and generalization of large NAWs 3 . Accordingly, our approach ensures that any point in the landscape classified as mountainous showed some level of ruggedness not only at one but across scales. This also resulted in a smooth and homogeneous delineation of mountainous terrain, very suitable for our mapping purpose. We used the median value of the 7.5 arc second GMTED2010 DEM 45 as our source map. To reduce the latitudinal distortion of the raster, and thus the shape and area of the NAWs, we divided the global DEM into three raster layers corresponding to three latitudinal zones (84° N to 30° N, 30° N to 30° S and 30° S to 56° S) excluding ice-covered Antarctica and projected the two high latitude zones to Lambert Azimuthal Equal Area and the equatorial zone to WGS 1984 Cylindrical Equal Area. We used these reprojected DEM layers to produce eight ruggedness layers, each using one of the eight NAWs. To determine the threshold values of our algorithm, we selected 1000 random points within the area defined by the geometric intersection ( Fig. 1b) of the three commonly applied mountain definitions, i.e. the definitions by UNEP-WCMC 46 , GMBA 15 , and USGS 3 . These layers (referred to as K1, K2, and K3, respectively by Sayre and co-authors 12 ) were obtained from the Global Mountain Explorer 47 . We eliminated 80 clearly misclassified points (i.e., points that fell within lakes, oceans, or clearly flat areas according to the shaded relief map we used as a background) and used the remaining 920 to sample the eight ruggedness layers. For each of the 8 layers, we retained the lowest of the 920 ruggedness values as the threshold for the layer's specific NAW (Fig. 2c). The eight threshold values were then used to reclassify each of the eight layers by attributing the value 1 to all cells with a ruggedness value higher than or equal to the corresponding threshold and the value 0 to all other cells. Finally, we performed a geometric intersection (see Fig. 
1b) of the eight reclassified layers to derive the new mountain definition. After these calculations, we reprojected the three raster layers to WGS84 and combined them through mosaic to new raster. To eliminate isolated cells and jagged borders, we then generalized the resulting raster map by passing a majority filter (3 × 3 pixels, majority threshold) three times. This layer corresponds to the GMBA Definition v2.0. The resulting mountain definition (GMBA Definition v2.0) distinguishes itself from previous ones because of the empirically derived thresholds method used to develop it and the use of eight NAWs. In line with the previous GMBA definition, it relies entirely on the ruggedness values within NAWs. The GMBA Definition v2.0 was used to determine the outer delineation of this inventory's mountainous terrain. As expected, it includes neither the wide 'skirts' of flat or undulating land around mountain ranges nor the topographical irregularities that are both typically included when other approaches are applied. It also successfully excludes extensive areas of rolling non-mountainous terrain such as the 52,000 km 2 Badain Jaran Desert sand dunes in China. However, this mountain definition is conservative and only includes the highest, most rugged cores of low mountain systems, as for example in the Central Uplands of Germany, and therefore excludes some lower hill areas still considered by some as mountains. As a further step towards generalization, we considered that small (<100 km 2 ) inner-mountain flat areas corresponding to valley floors, small depressions, and isolated high plateaus were part of the mountainous terrain. Additionally, to avoid self-intersecting polygons in the final product we also eliminated mountain 'appendages' consisting of isolated raster cells smaller than 2 km 2 and touching the main mountain area through one corner only. For the generalization process, we used a vectorized version of the mountain definition that we reconverted to a raster file for use in the creation of the GMBA Inventory v2.0_standard shapes in step iv. This simplification of the GMBA Definition v2.0 was deemed necessary to generate the cleanest possible range shapes. Despite this generalization, we consider that these shapes can be used for most comparative mountain studies. However, for very precise area calculations, the new inventory layer can be intersected with the GMBA Definition v2.0. Step iv: Refinement of the ranges' inner borders. To generate the final shape layer, we extended the hand-drawn polygons to the nearest surrounding river by allocating the value of the GMBA_V2_ID to all intermediate raster cells using the ArcMap tool 'Cost Allocation' . For this, we intersected the simplified GMBA Definition v2.0 (output step iii) with a rasterized river layer from Hydrosheds 2 . This resulted in a mask layer (Fig. 1, "Mask (GMBA)") with value 1 for areas that are mountainous and not rivers and 'NoData' for non-mountain areas or rivers. We then combined this mask with the digitized vector layer of named ranges ( Fig. 1 "Manual mountain shapes", output step ii) and allocated to each cell the 5-digit GMBA_V2_ID value of the nearest digitized mountain range shape. In the allocation, rivers and non-mountain areas (value 'NoData' in the mask) acted as barriers to the cost allocation. This resulted in individual mountain ranges separated by rivers inside the overall www.nature.com/scientificdata www.nature.com/scientificdata/ GMBA Definition v2.0 area. 
As the river cells maintained a value 'NoData' during the cost allocation operation, we performed a second cost allocation with a mask consisting of the GMBA Definition v2.0 only, to fill these (river) cells with the nearest GMBA_V2_ID. Finally, we converted the resulting raster map to shapes representing the smallest mountain map units ('Basic' unit) of our inventory (Fig. 1, "Processed mountain shapes"). Step v: Preparation of the final products. To create the final products, we first developed a recursive query to convert the parent-child relations recorded in the "Mountain database" (output step i) into unique hierarchical paths (see Fig. 3e), leading from the basic mapping unit up to the highest level of aggregation (Level 1, continents and oceans). We then combined the mountain range shapefile layer ("Processed mountain shapes", output step iv) with an export query of the "Mountain database" as an attribute table containing the complete hierarchy for each mapped mountain range. This allowed us to construct all the higher parent ranges levels by dissolving according to individual levels in the hierarchy. The resulting layer representing mountain range shapes at the ten levels in the hierarchy were merged into one final 'stacked' shapefile entitled GMBA Inventory v2.0_standard containing all mountain range shapes at all (overlapping) hierarchical levels (Fig. 3). To produce a mountain inventory that can be intersected with any of the three mountain definitions currently in use and available on the Global Mountain Explorer, we also applied the 'cost allocation' (step iv) to an additional, considerably broader mountain layer that we obtained in step (iii) by geometric union (Fig. 1c) of the UNEP-WCMC, GMBA, and USGS layers. A 5 km buffer was added to ensure that small, isolated mountain patches would be connected and thus facilitated the cost allocation procedure. We then processed the resulting layer ("Broad") following the exact same steps (iv and v) as above to generate a second version of the raster map and a second version of the 'stacked' shapefile. This second version, entitled GMBA Inventory v2.0_broad, enables comparative mountain science based on a different definition of mountains than the one presented here but needs to be intersected with the chosen definition before use. www.nature.com/scientificdata www.nature.com/scientificdata/ Data Records New GMBA Definition v2.0. The GMBA Definition v2.0.tif is a high resolution (7.5 arcsecond) binary raster file identifying areas considered mountainous with the value 1 and non-mountain areas with the value 'NoData' . GMBA Inventory v2.0_standard. The GMBA Inventory v2.0_standard.shp (Table 1) is a 'stacked' shapefile including the 8327 shapes of the inventory (at all hierarchical levels) that overlaps with the GMBA Definition v2.0. These shapes include both basic map units (i.e. units without 'child' sub-division at the bottom of the hierarchy) and aggregated ones (i.e. 'parent' ranges higher up in the hierarchy). In this inventory, the outer borders of the mountain ranges and systems (i.e., the borders between mountainous terrain and surrounding flat or rolling land) correspond to the newly proposed GMBA Definition v2.0. Because all parent and child polygons are included in this layer, there is a high level of overlap (up to ten overlapping polygons for the few cases where the hierarchy consists of all 10 levels). 
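Because parent and child polygons overlap in this stacked layer, analyses are typically run on a single, non-overlapping subset of shapes. A minimal Python/geopandas sketch of such a selection is shown below; the file name is a placeholder, and the attribute names follow the Table 1 descriptions listed further down, so they should be checked against the release actually downloaded.

import geopandas as gpd

# Placeholder path to the downloaded stacked shapefile (all hierarchical levels).
gmba = gpd.read_file("GMBA_Inventory_v2.0_standard.shp")

# Example 1: all shapes (parents and children) that intersect Switzerland,
# using the documented Alpha-3 country-code attribute.
swiss = gmba[gmba["CountryCodes"].str.contains("CHE", na=False)]

# Example 2: a non-overlapping subset defined by a list of GMBA_V2_ID values,
# e.g. exported from the Selection Tool (the IDs below are placeholders).
selected_ids = [10001, 10002]
subset = gmba[gmba["GMBA_V2_ID"].isin(selected_ids)]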
Besides mountain systems, ranges, and subranges, the 8327 mapped units in the GMBA Inventory v2.0_standard consist of different types of geographical features ('Feature' in Table 2), including islands, archipelagos, peninsulas, highlands, plateaus, and escarpments. These feature categories were broadly taken from PEMRACS and were included to ensure the largest possible coverage of mountainous physiographical elements at different scales/levels, even if no mountain range name was found. This is particularly the case for mountainous islands, which are clearly defined spatial objects requiring a place in the hierarchy. In the case of spatial features such as islands, a polygon named after the given feature (e.g., Sardinia) represents the mountainous area of that feature, not the entire island. It could therefore also be named 'Sardinian mountains' or 'Sardinian Highlands', but this has only been done if such a naming was accepted in the literature, for example for the 'New Guinea Highlands'. Commonly used geographic divisions of mountain systems, such as the Eastern Alps or Northern Andes, were also included in the hierarchy and identified as 'Geographically-defined subrange'. So-called 'support polygons' were introduced either to close the gap between parts of named subranges within a range, or to introduce an intermediate level in the hierarchy for aggregation purposes. Some were given a name, if they corresponded to a well-defined region, while others were left with the name of the parent range and the addition '(nn)' for 'no name'. The various features (including support polygons) that do not strictly correspond to mountain systems, ranges, and subranges might be updated with a mountain range name in the future, when more toponymical resources become available, or if users provide specific feedback.

Table 1 attribute descriptions (column headers: Attributes | Objects/file size):
GMBA_V2_ID: A five-digit unique identifier of the mountain range, linking the polygon in the shapefile with the record in the "Mountain database".
Elev_High: Highest elevation as calculated from the GMTED 2010 7.5 arcsecond DEM.
Path: The full hierarchical path leading to the mountain range (using DBaseName), starting at level 1 (continents/oceans).
PathID: The full hierarchical path leading to the mountain range expressed as a concatenation of GMBA_V2_IDs.
Level_01 to Level_10: Ten columns each containing the hierarchical levels for each mountain range. These are the same levels as in "Path", but one level per column.
Select_300: A customised selection of 292 mountain ranges / systems, useful for global and IPCC or IPBES (sub-) regional level analyses. It was generated using the GMBA Inventory v2.0_SelectionTool.
Countries: String variable with the names of the countries that intersect with a mountain range.
CountryCodes: String variable with the Alpha-3 ISO country codes (https://en.wikipedia.org/wiki/ISO_3166-1_alpha-3) of the countries that intersect with a mountain range.
ColorAll: A number from 1 to 6 for colouring polygons with six distinct colours without any contiguous shape sharing the same colour. For use with the 'Level' layers.
ColorBasic: idem, for use with the 'Basic' layer.
Color300: idem, for use with the '300' layer.

GMBA Inventory v2.0_broad. The GMBA Inventory v2.0_broad.shp layer (Table 1) is similar to the GMBA Inventory v2.0_standard shapefile, but the outer borders correspond to the "Broad" layer (see step v).
Because the mountain extent is considerably larger in the "Broad" than in the GMBA Definition v2.0, it also includes more of the mountain ranges identified in the "Mountain database" and the "Mountain manual shapes" layer. Accordingly, with 8616 shapes, this inventory includes 289 more named ranges (at all hierarchical levels). As it includes the terrain that fulfils the criteria of all three mountain definitions, this broad version of the inventory also includes high plateaux such as those of the Andes (altiplano) and Tibet, provided they reach above 2500 m.a.s.l. GMBA Inventory v2.0_Selection_Tool and gmbaR R package. The GMBA Inventory v2.0_Selection_ Tool.xlsx is a spreadsheet (MS Excel workbook, Table 3), in which the rows represent all 8616 mountain ranges and their position in the hierarchy, and which includes programmed functions to enable customized selections of mountain ranges at intermediate levels of aggregation. It can be used independently or in combination with the R package gmbaR (see Code Availability) to make customized selections of mountain ranges (see Usage Notes). The GMBA Definition v2.0 the GMBA Inventory v2.0_standard, and the GMBA Inventory v2.0_broad are available for unrestricted download on the GMBA Mountain Inventory page 18 technical Validation To validate the inventory, we performed a consistency check regarding mountain range names and extents, and an evaluation of the completeness of the recorded mountain ranges. The validation of the new mountain definition (GMBA Definition v2.0) consisted in a quantification of the contribution of the selected NAWs in capturing mountain terrain and a comparison of the new mountain definition with the three definitions featured on the Global Mountain Explorer. Mountain inventory. Mountain range names and extents. Names and extents of mountain ranges were validated during the collation process by consulting several sources for each entry (see step i). Besides occasional differences, the naming and geographical extent of mountain ranges showed a high level of consistency between sources. However, mountain ranges are typically known under different names through space (e.g. north or south of the range) and time (for example when colonial names superseded the original indigenous ones) 48 . To account for this diversity, we queried Wikidata for alternative local and indigenous names and added these to the database (LocalNames, Table 1). In general, the consistency in naming and extent of mountain ranges was highest for smaller mountain ranges and well-defined, isolated large mountain systems (such as the European Alps and the Andes). In contrast, the naming and extent of larger mountain ranges within mountain systems such as the Kun Lun Mountains, the Hengduan Mountains, and their subranges could differ quite substantially between sources. Here, we adopted a conservative approach, attributing the smallest number of reported subranges to each mountain system. Attributes Objects / file size Level_1 to Level_10: Mountain range in hierarchical structure. This version of the levels only shows the mountain range name in its hierarchical position. This is mainly a visual difference that better displays the hierarchical structure. Range_Selector: The column where mountain ranges can be selected by adding an "x" in the corresponding row. Selected_Range: A formula column that fills the selected range for all its child ranges. 
Level_Selected: A formula column that fills the hierarchical level of the selected range for all its child ranges. GMBA_V2_ID_Selected: A formula field that returns the GMBA_V2_ID for the selected range. Overlap_Warning: A formula field which returns the message "polygons overlap!" if a subrange of a selected parent range is selected. GMBA_V2_ID_DissolveField: A formula field which returns the GMBA_V2_ID of the corresponding selected parent range. It can be used in combination with the GMBA_V2_ID of the range, exported to txt or csv, and used as a dissolve field to construct the polygons. Lon_Centr: Longitude of the mountain range centroid. IPCC_maj: IPCC AR6 subregion with the largest overlap with the mountain range. IPCC_str: List of all IPCC AR6 regions intersecting with the range. IPBES_maj: IPBES subregion with the largest overlap with the mountain range. IPBES_str: List of all IPBES subregions that intersect with the mountain range. Name_xx: See Table 1. www.nature.com/scientificdata www.nature.com/scientificdata/ Completeness of the inventory. To evaluate the completeness of the inventory, we calculated the percentage of mountain range names gathered from two reference lists that are also included in the former (GMBA Inventory v1.4) and current (GMBA Inventory v2.0_standard) inventory, respectively (Table 4). To generate the first reference list (WoS list) we first queried the Web of Science with the keyword string "mountains" OR "mountain range" NOT "mountain pass", sorted the obtained publications by relevance, selected those published in 2020, and exported the first 500 records. We then manually extracted the mountain range names from the title, abstract, and keywords fields, or searched in the methods section (study site) if no mountain range name was mentioned in these fields. We extracted 524 range names, of which 276 were unique. The second list (GMBA member list, which includes mountain scientists from 69 countries across the world) consisted of all the mountain ranges named by GMBA members as their (geographical) area of work and/or expertise. We extracted 819 mountain range names, of which 229 were unique. For both lists, we individually checked all mountain range names for differences in spelling or alternative names. We calculated the percentage both for unique names (i.e., each unique mountain range name is counted only once) and for all names (i.e., including repetitions). The latter value better captures the likelihood that a mountain range used in the academic context is represented in the inventory. The completeness percentages calculated based on the WoS and GMBA member lists show a high level of agreement. Based on our samples, the likelihood that a mountain range reported in the academic literature is also included in our new inventory is higher than 95% (Table 4). This likelihood is about two times higher than the likelihood of inclusion in the previous inventory (GMBA Inventory v1.4). The results of this completeness analysis do not inform about the absolute completeness of the GMBA Inventory v2.0, as many and especially smaller ranges are not (yet) included. These results rather indicate that most of the mountain ranges or mountain systems generally referred to in the academic literature and by the mountain research community are well represented in the inventory. 
Compared with all objects classified as mountain range in the Geonames gazetteer (category "MTS": 26,312 entries on 22 March 2021) and in Wikidata (category "Q46831" mountain range: 20,768 entries on 22 March 2021), the completeness of our inventory is modest, but these repositories include many very small mountain ranges as well as many double entries (Wikidata). From the 1033 polygons common to both versions of the GMBA Inventory, 759 have remained 'basic' map units (i.e., they were not attributed any child ranges), while 274 polygons were split into subranges in the GMBA Inventory v2.0_standard. Mountain definition. Sensitivity analysis of the NAW-threshold pairs. All global geomorphometric approaches to mountain definitions presented on the Global Mountain Explorer calculate slope or elevation range (ruggedness) metrics using unique combinations of input DEM (with their respective cell size), NAW number and size, and threshold value(s) 8,12 . In all cases, the size and shape of the NAWs as well as their corresponding thresholds for a given cell size of the elevation raster (DEM) are based on expert judgement. Here, we derive parameters for the NAW and their corresponding threshold values empirically by sampling the 920 random points (see step iii in Methods) in an area considered as mountainous by all three definitions featured on the Global Mountain Explorer. For validation purposes, we assessed the relative contribution of the different NAW-threshold pairs to the final mountain definition. For attributing the value 'mountainous' to a cell, the algorithm requires the local elevation range calculated for each NAW to be higher than its corresponding threshold value. Because of this, each additional NAW-threshold pair reduces the area considered as mountainous and thus contributes towards spatially refining the final mountain area by intersection (see Fig. 1b). We first calculated the percentage correspondence between the area identified as mountainous according to each NAW-threshold pair and the final mountain definition. The larger this percentage, the better the NAW-threshold pair captured the mountain terrain (Fig. 2d). We also calculated the percent difference in mountain terrain between the final mountain definition and the mountain area achieved when leaving out one of the eight NAW-threshold pairs from the algorithm at a time. The larger the percentage area, the more important the corresponding NAW-threshold pair was for refining the mountainous terrain area (Fig. 2e). Finally, we calculated the difference between the mountain definition and all combinations of two and three NAW-threshold pairs. When assessing the relative contribution of each NAW-threshold pair to the final mountain definition (Fig. 2d) we observed a steady increase from small to larger NAWs, with a peak at the 10 pixel NAW (5 km Table 4. Validation of the inventory. Validation against a selection of mountain ranges extracted from the Web of Science and from the GMBA member database. The column 'Unique names' gives the number of distinct ranges reported, while the column ' All names' includes repetitions of the same ranges. www.nature.com/scientificdata www.nature.com/scientificdata/ diameter), which overlaps 89% with the final mountain terrain. When looking at combinations of two NAWs, the combination of the 10 and 20 pixel NAW comes closest with 95.6% overlap. This overlap increases to 98% after adding the 4 pixel NAW. 
As expected, the smaller NAWs and their empirically derived thresholds were least effective in delimiting mountainous terrain because these ruggedness thresholds are too low to adequately capture small areas of less rugged terrain (valley floors, gentle lower slopes etc) also included in mountain areas. Applying the two smallest NAWs only marginally improves the definition by allowing the elimination of only 0.1 to 0.2% from the total mountain area (Fig. 2e), while, when used in isolation, these NAW-threshold-pairs include up to 70% non-mountainous area. If these smaller NAWs were used alone (i.e., not in combination with some "smoothing" larger NAWs), and with a higher corresponding threshold, the result would be considerably patchier, including many small mounds in flat or rolling terrain, and many small "flat" areas in mountainous terrain. When comparing the current algorithm with the original NAW-threshold pair applied in Körner et al. 10,15 (mean NAW size = 3.4 km 2 , threshold > = 200 m), we see that at a similar maximum distance within the NAW (around 3 km), the current algorithm applies a lower threshold value (126 m, see Fig. 2c). This is to be expected as the scale-dependent choice of different NAW-threshold pairs in the current algorithm allows for each NAW-threshold pair to be less restrictive than in the case of a unique NAW-200 m ruggedness threshold pair. Comparison of mountain definitions. To compare our refined mountain definition with existing ones, we calculated and compared their planimetric area. Table 5 shows that the overall mountain area according to the GMBA Definition v2.0 applied is about 50% larger than that estimated based on the previous GMBA mountain definition (GMBA Definition v1) 10,15 as a result of the smoothing effect of the new algorithm (see Methods). With a coverage of about 18.2% of the global land area excluding Antarctica, this new mountain definition lies between the GMBA Definition v1 (12.3%) and the definitions proposed by UNEP-WCMC (24.2%) and USGS (30.4%). However, these differences in mountain cover according to the different definitions are not homogeneously distributed across regions. In major, large, high, and rugged mountain systems (e.g. the Andes, the European Alps, the Caucasus, the Dinaric Alps, the Pindus, the Himalayas and associated mountain systems, or regions such as the spines of New Guinea, Sumatra, and the Japanese Archipelago), the correspondence between areas classified as mountainous according to the various definitions is much higher than in lower, less well-defined and less rugged mountain systems (e.g. Brazilian Highlands, highlands of Madagascar, or Central European Highlands, see Fig. 4), where the WCMC and USGS definitions tend to include much more rolling land and wider flat skirts than the more conservative GMBA definitions. Usage Notes polygon extent. A key difference between the GMBA Inventory v2.0_standard and the GMBA Inventory v2.0_broad is the extent of the outer borders of the mapped mountain ranges. In the GMBA Inventory v2.0_standard, the shapes' outer borders correspond to the mountain definition introduced in this paper (GMBA Definition v2.0). In the GMBA Inventory v2.0_broad, outer borders correspond to the union of the three mountain definitions presented on the Global Mountain Explorer (Fig. 1c) and a 5 km buffer. Accordingly, users adopting this version first need to intersect it with the mountain definition of their choice (K1, K2 or K3, as found on the Global Mountain Explorer). 
The resulting raster layer can be used as such for analysis or converted to polygons (see "Checking geometry"). Checking geometry. The process of converting the mountain rasters to shapes (step iii) results in cases of apparent shape self-intersections, for example in points where shapes touch themselves. The layers provided have been geometrically corrected by applying a buffer of size zero, but it is prudent to check and correct the geometries before the shapes are used in any calculations, especially after intersecting the GMBA Inventory v2.0_broad version with one of the mountain definitions and polygonising it. Selecting polygons from the hierarchical structure. The hierarchical structure allows users to zoom into mountain systems and their subranges and explore the changing spatial patterns with increasing spatial resolution. However, as a result of differences in mountain toponymical information density and of physiographic differences in the position of mountain systems and their subranges, the number of levels in the mountain hierarchy constructed from the parent-child relationships for each basic unit varies between 4 and 10. This puts many small ocean island ranges at the same level in the hierarchy (level 4) as some continental mountain systems, which is not useful for global scale comparative mountain research. For such research, we encourage the use of our selection of 292 mountain systems or a custom selection of non-overlapping mountain ranges made using the GMBA Inventory v2.0_SelectionTool or the associated R package gmbaR. The Excel-based Selection Tool (GMBA Inventory v2.0_Selection_Tool.xlsx) includes the entire GMBA Inventory v2.0_standard attribute table and allows the user to make a customised selection of mountain ranges. Changing the cell value in column 'Range Selector' to 'x' selects the range and all its subranges (the range and its subranges are highlighted). If a child of any selected range is also selected (i.e., a subrange within the highlighted list), a warning appears in column "Overlap_Warning", as a parent always overlaps with a child. The list of selected mountain range IDs (GMBA_ID_V2) can be imported into a GIS or to R to select the desired mountain range shapes. Alternatively, the R package gmbaR allows seamless work with the mountain inventory in R (the package can be installed from https://github.com/GMBA-biodiversity/gmbaR). The GMBA Inventory v2.0 can be directly read to R, and mountain ranges can be selected based on, for example, different Selection Tool attributes (gmba_select()) or point coordinates (gmba_ids_from_points()). Table 5. Planimetric area and percent coverage of mountain terrain according to the current approach and previous ones (GMBA Definition v1, UNEP-WCMC, and USGS). For each definition, the value between brackets indicates the resolution of the elevation raster in arc minutes and seconds. The GMBA Definition v2.0 layer corresponds to the output of step iii after the application of the majority filter. The columns are not ordered by year of publication of the mountain delineations but increasingly relative to the cover percentage of the GMBA Definition v2.0. Limitations. The GMBA Inventory v2.0 is a spatially explicit inventory of mountain ranges across the world of unprecedented completeness. Several assumptions or decisions had to be made to make the development of this resource explicit and transparent: i.
Status of the inventory: The GMBA Inventory v2.0_standard is an operational and spatially explicit collection of mountain range identifiers at various geographical scales. It offers a listing of mountain range names and associated map shapes representing their spatial extent. It enables quantitative comparisons from a regional to the global level. The inventory does not replace existing national or regional mountain classifications nor does it pretend to represent a new standard or official gazetteer. ii. Naming of the mountain ranges: for the primary mountain range identifier we have given preference to the name as it appears on topographical maps. However, other sources of mountain range identification (such as landscape and physiographic classifications or complete mountain hierarchies) sometimes use different nomenclature, and mountain ranges are sometimes known under different names according to the source or classification system. Given that in the (scientific) literature geographical features are typically named in English, we chose the English name as the main textual identifier of most map units in the inventory. In addition, we provide the names in seven globally spoken languages, as well as in all additional languages spoken in the countries where the range occurs and available on Wikidata. By doing so, we also include indigenous or local names. To date, the number of mountain ranges with indigenous or local language translations is still very limited in Wikidata. www.nature.com/scientificdata www.nature.com/scientificdata/ iii. Delineation of the mountain ranges: no universal agreement exists as to where a mountain range grades into surrounding lower areas. Different attributes of landforms can be considered to determine what a mountain (range) is (relief, lithology, geomorphology, culture), and such perspectives are different between individuals and communities. Here we adopted a purely morphometric approach, which leads to a conservative delineation of low and scattered mountains, in which only the most rugged cores have been included. The shapes representing these mountain ranges in the inventory might therefore differ quite markedly from what is generally considered to be part of these low mountain systems. As the outer borders of the mountain shapes in the GMBA Inventory v2.0_broad correspond to the geometric union of the three definitions (and an additional 5 km buffer) it should only be used after intersection with one of these definitions available on the Global Mountain Explorer, or any other mountain definition. Code availability The R package gmbaR provides a set of R functions to read and work with the GMBA Inventory 2.0. This package is provided, explained, and continuously developed on Github (https://github.com/GMBA-biodiversity/gmbaR).
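Complementing the gmbaR functions noted above, the zero-width-buffer geometry check described under "Checking geometry" can be sketched in Python as follows; the file names are hypothetical, and make_valid is offered only as a fallback for shapes that the buffer trick does not repair.

import geopandas as gpd
from shapely.validation import make_valid

# Hypothetical file name; any polygonised GMBA layer can be checked the same way.
ranges = gpd.read_file("gmba_inventory_v2_standard.gpkg")

invalid = ~ranges.geometry.is_valid
print(invalid.sum(), "shapes flagged as self-intersecting or otherwise invalid")

# Zero-width buffer is the correction mentioned under "Checking geometry".
ranges.loc[invalid, "geometry"] = ranges.loc[invalid, "geometry"].buffer(0)

# Fallback for anything the buffer trick leaves invalid.
still_invalid = ~ranges.geometry.is_valid
ranges.loc[still_invalid, "geometry"] = ranges.loc[still_invalid, "geometry"].apply(make_valid)

assert ranges.geometry.is_valid.all()
ranges.to_file("gmba_inventory_v2_standard_clean.gpkg", driver="GPKG")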
v3-fos-license
2021-08-24T13:23:00.543Z
2021-08-24T00:00:00.000
237271290
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphy.2021.635306/pdf", "pdf_hash": "fffa82122b6e6dc8801f67f771d797ff9e318b7b", "pdf_src": "Frontier", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42033", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "fffa82122b6e6dc8801f67f771d797ff9e318b7b", "year": 2021 }
pes2o/s2orc
Research on the Failure Evolution Process of Rock Mass Base on the Acoustic Emission Parameters Fracture mechanics behavior and acoustic emission (AE) characteristics of fractured rock mass are related to underground engineering safety construction, disaster prediction, and early warning. In this study, the failure evolution characteristics of intact and fracture (e.g., single fracture, parallel fractures, cross fractures, and mixed fractures) coal were studied and contrasted with each other on the basis of the distribution of max amplitude of AE. The study revealed some meaningful results, where the value of b (i.e., the distribution characteristic of max amplitude of AE) could represent the failure evolution process of intact and fractured coal. The maximum amplitude distribution of AE events was characterized by Gaussian normal distribution, and the probability of the maximum amplitude of AE events corresponding to 35∼50 dB was the largest. In the stress range of 60∼80%, AE events and maximum amplitude increased rapidly, and the corresponding b value decreased. The energy of AE events showed a downward trend after reaching the maximum value at about 80% stress level. Under the same stress level, the more complex the fracture was, the larger the b value of coal–rock mass was, and the stronger the inhibition effect on the fracture expansion caused by the internal fracture distribution was. Due to the anisotropy of coal–rock mass with a single crack, the distribution of the b value was more discrete, while the anisotropy of coal–rock mass with mixed crack decreased, and the dispersion of the b value decreased. The deformation of cracked coal mainly caused by the adjustment of cracks during the initial loading b value experienced a trend of decreasing first, then increasing, and then decreasing in the loading process. When the load reached 0.8 times of the peak strength, the b value had a secondary decreasing trend, indicating the macroscopic failure of the sample, which could be used as a precursor criterion for the complete failure of coal–rock mass. INTRODUCTION As an important strategic resource in China, coal plays a very important role in primary energy consumption. In the National Energy Development Strategy (2030∼2050), it was predicted that China's coal output will reach 3.4 ∼4 billion tons in 2020∼2030. In 2030 and 2050, coal will maintain about 55 and 50% of China's primary energy structure, respectively, so coal will remain the main energy resource in China for a long time in the future. Due to the influence of geological structure and mining, fracture development occurs in the surrounding rock mass. Compared with the intact rock mass, the fractured rock mass has become very sensitive to blasting or mechanical disturbance load. Therefore, the study on the instability evolution characteristics of fractured coal-rock mass has important practical significance for efficient coal mining. In general, the problem of rock mechanics was the mechanical behavior of fracture rock mass in the engineering scale, especially characters of strength, deformation, and failure of fracture rock mass. And the mechanical behavior of fracture rock mass was always a hotspot and difficult problem in rock mechanical field, where the hotspot was its strong application value and the difficulty lay in obtaining the fracture rock mass specimens. 
At present, the primary research methods for the fracture rock mass were fastened on the physical simulating test [1][2][3][4][5], numerical analysis [6,7], in situ test, and others. Many achievements have been acquired through those methods, but their drawbacks are also obvious. For the physical simulation experiment, similar material and prefabricated crack were used to simulate the fracture rock mass specimens. But compared with the rock materials, the differences of similar material in the internal crystal structure, composition, and cementing material led to the essential difference on crack enlargement. Numerical methods were effective ways to study the mechanical behavior of fracture rock mass in recent years; however, the inhomogeneity of rock materials, parameter selection, and the failure criterion were problems that cannot be ignored. For the in situ test, the main problems were as follows: expensive, time consuming, and others. In this study, the fractured rock mass specimens were obtained through preloading the intact specimens to avoid the difficulty of obtaining fractured rock mass. However, the fracture rock mass was cut by all kinds of structural surfaces, such as primary structural plane and secondary structural plane. This is bound to make the failure mechanism of fracture rock mass different with the intact rock. It must be more complex and diverse. It is very important to analyze this problem more reasonably and effectively. With the development of science, new technologies were also used in the research on rock mechanics, such as CT and AE [8][9][10][11]. Especially, since the AE technology was introduced into the field of rock mechanics by Goodman in the 1960s, the AE technique had become an indispensable methodology to study rock behaviors. At present, there are many studies on the AE response of the rock failure process [12][13][14][15][16]. Chen [17] and Zhang [18] discussed the application of AE technology in rock mechanics research. Li et al. [19] studied the fracture development of intact shale in the fracturing process by using the AE ring number ratio and energy rate, and determined the damage evolution law of shale according to the AE ring number, and characterized the shale degradation behavior. Zhou [20], Zhang [21], and Xu et al. [22] by using AE monitoring system, such as breaking process of rock, studied the AE response of different stress stages. In particular, the AE b value is one of the important parameters; studying the characteristics of rock AE can reflect the change of micro-cracks on rock internal scale, and the b value of mutations usually could be used as rock macroscopic failure precursors [23][24][25]. Lei et al. [26] showed that the sudden drop of the b value indicated that the interaction between the cracks inside the rock was enhanced, indicating that the rock was about to be unstable and might be destroyed soon. Yang et al. [27] found that the AE b value was relatively small at the early stage of loading, indicating the rock crack compaction behavior. The b value gradually increased in the elastic stage, indicating the elastic deformation behavior of the rock crack. When the stress level reaches 70% in the late loading period, the sudden drop of the b value corresponded to the crack propagation behavior. When the stress level reaches 90%, the low level of the b value indicated the macroscopic failure of rock. Xue et al. 
[28] found that the b value was abnormal in the early loading stage and was at a high value in the early loading stage. When the stress reached about 80% in the plastic stage, the b value began to decrease rapidly, indicating the rapid development of the number of large-scale cracks. Zha et al. [29] and Zhang et al. [30] believed that in the process of uniaxial compression of rock, the b value dropped sharply with the increase of stress at the late loading period, indicating the fracture of the rock. Lisjak et al. [31] obtained through numerical results that the b value of rock dropped sharply twice in the process of failure. The first time was at the pre-peak stress level of 75%, and the second time was at the pre-peak stress level of 97%. The decrease of the b value indicated that the crack on the main fracture plane was transformed from diffusion nucleation to crack coalescence. However, the existing studies on AE response and the b value of rock are basically focused on intact rock mass, and there are few studies on AE characteristics during deformation and failure of fractured rock mass. In this study, the fractured coal-rock mass was acquired by preloading the intact coal rock. According to the different combinations of cracks in the fractured coal-rock mass, the specimens could be divided into single fracture, parallel fracture, cross fracture, and mixed fractured coal-rock mass, statistically. Based on rock material with the acoustic emission phenomenon in the failure process under loading, the maximum amplitude of AE events (i.e., the b value) was used to study the failure evolution process of the intact coal rock, the single fracture, parallel fracture, cross fracture, and mixed fractured coal-rock mass, and then, the failure evolution characteristics and difference between intact rock and fracture rock mass were studied. Technical and Test Equipment Feasibility Fracture mechanical behavior of rock mass plays an important role in engineering practice. The fracture rock mass specimens were mainly obtained by direct or indirect methods presently. Affected by factors such as sampling and specimen processing, the direct method used to obtain fracture rock mass specimens during manufacturing was difficult. And so, the indirect method was the main method to obtain fracture rock mass specimens. As the anisotropic materials controlled by the structural plane, the fractured rock mass was obviously different from that of the conventional rock. In order to obtain the fractured rock mass by the loading of conventional rock, it was necessary to analyze and classify the deformation and failure process. For general rock materials, due to their relatively high strength, they had brittle fracture characteristics, most of which were "II" deformation and failure curves. The damage was severe, and the process from cracking to penetrating the specimen was very short. The success rate of obtaining fractured rock specimens was low. For the soft coal rock, the deformation and damage severity were relatively low, showing the "I" type deformation failure curve ( Figure 1). It was feasible to stop the loading before the cracks penetrated the test piece to obtain the fractured rock specimen. Based on it, the method for obtaining the fractured rock mass specimens was as follows: considering the failure process of coal-rock mass as the development and expansion of internal micro-cracks, and the process of macro-cracking through the test piece. 
Then by preloading the intact rock specimens, the fracture rock specimens could be obtained by stopping the loading process before cracks penetrated the specimens ( Figure 2). As mentioned above, it was feasible to obtain fracture rock mass on a laboratory scale when the test equipment loading system could stop loading after the max loading point and before the destruction point, and then, rock mass specimens with fracture and without destruction could be obtained. In this study, the early prefabricated and later loading of the fracture rock mass specimens were conducted by MTS815 rock mechanics test system ( Figure 3) from Sichuan University. The testing system has a higher integral rigidity and electro-hydraulic servo control system, which can achieve a variety of control conditions such as stress, strain, and transformation. The testing system could also be made to stop loading before rock specimens are destroyed completely. Therefore, indoor prefabricated fractured rock specimens are also feasible on the test equipment. In order to avoid other influence of the fracture rock mass on failure evolution, uniaxial compression loading scheme to the specimens was adopted, with the axial loading rate of 10 kN/min before the loading arrived at the peak, and then, lateral deformation control was used after max loading point with the rate of 0.02 ∼ 0.04 mm/min. In addition, in order to capture AE space location points, a total of 8 AE sensors were arranged on the upper and lower parts of the samples in the direction of vertical diameter. Therefore, whether it was technical feasibility or the requirements of the test equipment, it was feasible to obtain the samples of the fractured coal-rock mass by loading the intact coal rock by laboratory test means. Preparation Scheme of Fractured Coal-Rock Mass Based on the MTS815 rock mechanics test system, different loading methods were used to perform fractured coal-rock mass samples of Pingdingshan and Tashan intact coal rock. The loading methods include uniaxial, conventional triaxial, and three different mining methods (the caving, no pillar, and protective layer mining). The relevant loading schemes are as follows [32]: 1) Uniaxial loading test The axial compression was loaded to the peak load at a rate of 10 kN/min, and the post-peak stage was controlled by transverse deformation. The loading was stopped after the set stress value (σ′ (92% − 96%)σ max ) was loaded at a rate of 0.02∼0.04 mm/min. 2) Conventional triaxial loading test It mainly included two stages: adding confining pressure stage, in which the internal confining pressure was loaded to 25 MPa at a rate of 3 MPa/min; in the axial compression stage, when the confining pressure was loaded to 25 MPa, the axial compression was loaded to the peak stage at a rate of 30 kN/min. After the peak, the lateral deformation control was adopted, and the loading was stopped at a rate of 0.02∼0.04 mm/min to the preset stress value (σ′ (92% − 96%)σ max ). 3) Indoor simulation loading of three mining methods in coal mine There were mainly three stages: confining pressure loading stage, in which the confining pressure was loaded to 25 MPa at a rate of 3 MPa/min. In the first stage of confining pressure unloading, the confining pressure was unloaded at a rate of 1 MPa/min, and the axial load was loaded to 37.5 MPa at a loading rate of 2.25 MPa/min. In the second stage of confining pressure unloading, the confining pressure continues to be unloaded at a rate of 1 MPa/min. 
The axial load was loaded at a rate of 2.25, 3.5, and 4.75 MPa/min, respectively, according to the three mining methods of protective layer, top coal caving, and no coal pillar, until the peak. After the peak, the lateral deformation control was adopted. Loading was performed at a rate of 0.02∼0.04 mm/min to the preset stress value (σ′ = (92%−96%)σmax) and then stopped. Through the above indirect method, fractured coal-rock mass specimens could be obtained and classified, and then the failure evolution and difference of intact rock and fracture rock mass could be studied. Fractured Rock Mass Classification For the sake of better research on the failure evolution of the fracture rock mass, the fracture rock mass could be divided into single fracture, parallel fracture, cross fracture, and mixed fracture rock mass in the statistical sense based on the space composition and complexity of fracture in rock mass by precasting. And the typical specimen photos, CT scans, and their classification of the intact coal and fractured coal-rock mass specimens with different compositions are shown in Table 1. For convenient distinction between the intact coal and fractured coal-rock mass specimens, the letter F in the specimen number indicates that the specimen was a fractured coal-rock mass obtained by pre-casting. In order to avoid the deviation of the analysis results caused by different rock types, the hard rock and soft rock samples were, respectively, collected from Tashan coal mine and Pingdingshan coal mine, and the fractured rock samples were prefabricated. The physical characteristics and microscopic composition are shown in Table 2. On this basis, the study carried out experimental research through 26 effective specimens, including six intact coal-rock specimens, five single fracture rock masses, three parallel fracture rock masses, five cross fracture rock masses, and seven mixed fracture rock masses. THE QUANTITATIVE DESCRIPTION OF THE MAXIMUM AMPLITUDE DISTRIBUTION OF ACOUSTIC EMISSION The maximum amplitude of a single AE event was analyzed during the loading process. For a single AE event, analyzing its maximum amplitude was meaningless, but the significance lies in the distribution of all the maximum amplitudes of AE events during the failure process, which could reveal the failure evolution regularities and difference of fracture rock mass. For rock materials, Katsuyama [33] represented the distribution of the maximum amplitude of AE through the following Eq. 1:
n(a) = k·a^(−m) (1)
where a is the maximum amplitude of AE events in the process of damage, and its unit is dB (0 dB is equivalent to 100 μV, 100 dB is equivalent to 10 V); n(a) is the frequency distribution of the maximum amplitude a, i.e., the number of AE events whose amplitude lies between a and a + da; and k and m are constants. Through Eq. 1, the amount N(A) of AE events whose maximum amplitude is greater than A could be obtained by integrating Eq. 1 to infinity:
N(A) = ∫_A^∞ n(a) da (2)
Therefore,
N(A) = [k/(m − 1)]·A^(−(m−1)) = t·A^(−(m−1)) (3)
where A is the maximum amplitude of AE events during the failure process, dB; N(A) is the amount of AE events whose maximum amplitude was greater than (or equal to) A in the process of failure; t is a constant, t = k/(m − 1); and m is a constant whose physical meaning was equivalent to the probability of hindering the specimen damage: when the distribution density of blockage was higher, the probability of hindering the specimen damage was greater, corresponding to a higher value of m. When letting b = m − 1, Eq. 3 could be converted to Eq. 4 as follows:
N(A) = t·A^(−b) (4)
Then, taking the logarithm on both sides of Eq. 4:
log N(A) = log t − b·log A (5)
Therefore, by collecting AE events during the process of failure evolution, Eq.
5 could be used to obtain a quantitative description index (i.e., the value of b) of maximum amplitude distribution of AE events, where the b value was equivalent to the probability of hindering to the specimen failure. As a quantitative evaluation index, this parameter was used to analyze the failure evolution regularities and difference of fracture rock mass. TEST RESULT AND ANALYSIS Based on the uniaxial loading condition, the maximum amplitude distribution characteristics of intact coal rock and four different fractured rock mass were analyzed. Amplitude Variation Regularities Under Different Stress Levels As a quantitative description of the maximum amplitude distribution characteristics of AE events in the loading failure process of rock, the b value could reflect the change of crack scale inside rock. In order to explore the similarities and differences in the crack evolution process of coal-rock mass with different fracture degrees in the loading failure process, the amplitude distribution of intact, single fracture, parallel fractures, cross fractures, and mixed fractures coal under different stress levels in the uniaxial loading process is listed in Table 3. Using the above method to calculate the b value, the fitting correlation coefficient under different stress levels was above 0.8, and part could reach 0.9, which showed that the b value error was small, to meet the requirements of the error [34]. Due to the length of the article and considering the small difference of the correlation coefficient of the value at different stress levels, the average correlation coefficient of various coal-rock masses at different stress levels is listed in Table 3. Due to the few AE events in the initial loading stage of individual samples, the b value was partially missing. The results showed that the b value was lower when the stress level was higher, which meant that the b value trended to decrease with the increasing of stress levels. The distribution regularity of the b value for the intact coal and four kinds of fractured coal-rock mass under different stress levels is shown in Figure 4, where σ max represents the peak strength, namely, the failure point of the sample; σ is the actual stress. On the whole, the b value tended to decrease with the increasing of stress levels, where the illustrated maximum amplitude of distribution regularity was that AE events with large amplitude were increasing under the condition of the same amount of AE events, the physical meaning was that the propagation extent of internal cracks had a trend of increasing, and the scale of those cracks was becoming larger on the microcosmic with the increasing of uniaxial loading. From the local of the curve from Figure 4, the b value of both intact coal and fractured rock mass had a stage of increasing first and then decreasing, and the stress levels of this stage that appeared in fractured coal-rock mass were obviously higher than those in intact coal; for example, the stress levels of this stage in intact coal were 40 ∼ 60% of the peak, while the fractured coal-rock mass was about 50 ∼ 90%. At this stage, the amount of AE events with large amplitude trended to decreasing, which revealed that the crack extension was restrained during the process of failure. Then, with the increasing of stress levels, AE events with large amplitude increased, but the b value was decreased fleetly. 
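A minimal sketch of the b-value estimation defined by Eq. 5 is given below; the amplitude grid and the synthetic event list are illustrative only, and in practice the fit would be applied to the AE events recorded in each stress interval.

import numpy as np

def b_value(max_amplitudes_db, a_min=35, a_max=70, step=1):
    """Estimate b from Eq. 5, log N(A) = log t - b log A, where N(A) is the
    number of AE events whose maximum amplitude is >= A (amplitudes in dB)."""
    amps = np.asarray(max_amplitudes_db, dtype=float)
    A = np.arange(a_min, a_max + step, step)
    N = np.array([(amps >= a).sum() for a in A])
    keep = N > 0                                    # log is undefined where N = 0
    slope, _ = np.polyfit(np.log10(A[keep]), np.log10(N[keep]), 1)
    return -slope                                   # b is the negative slope

# Synthetic amplitudes (dB) standing in for the AE events of one stress interval.
events = 35 + np.random.default_rng(0).exponential(scale=6.0, size=2000)
print(round(b_value(events), 3))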
Comparing the b value of intact coal with that of fractured coal-rock mass ( Figure 4) at this stage, the more complex fracture contained inside the coal-rock mass, the b value was increased more obviously, and the b value of the fractured coal-rock mass increased significantly higher than that of the intact coal. The b value of intact coal, single fracture, parallel fracture, cross fracture, and mixed fractured coal-rock mass was increased by 0.055, 0.085, 0.106, 0.155, and 0.178, respectively. The more complex the fracture (i.e., in sequence of intact, single fracture, parallel fractures, cross fractures, and mixed fractures coal) in coal-rock mass, the greater b value it was under the same stress level. For example, when the loading reached the peak, the b value of the fracture rock mass from complex to simple was 3.296, 2.875, 2.731, and 2.448, respectively, while the b value of the intact coal was 2.014. In addition, compared with the results of distribution regularity of the b value for the intact coal, single fracture, parallel fracture, cross fracture, and mixed fractured coal-rock mass had a similar trend during the process of failure evolution at the stress level from 0 to the peak. In conclusion, the characteristic parameter b of maximum amplitude distribution of AE events changing regularities under different stress levels showed that the damage of intact coal and fractured coal-rock mass were embodied as the increasing scale of cracks inside on the microscopic and increasing AE events with large amplitude. During the changing process, the growth of micro-cracks was restrained when the stress had increased to a certain level, and then, micro-cracks inside started to propagate until the stress reached the peak strength. In addition, the b value decreased before the specimen reached instability, which was also consistent with the existing literature reports [35][36][37]. The decrease of the b value represented the largescale development of high-energy acoustic emission events. Before the failure, the b value dropped sharply, indicating that the proportion of high-energy large-scale micro-cracks increased gradually, and the development of micro-cracks changed from disorder to order. When the micro-crack size distribution was relatively constant, the b value gradually tended to be stable. Finally, crack penetration led to specimen instability and failure. The Frequency of Amplitude Distribution in Different Stress Intervals Due to similar changing regularities of the characteristic parameter b of AE events in intact coal and fractured coal-rock mass during the failure evolution process, in order to study furthermore, the intact coal specimens 1-3# were used as an example to analyze the failure evolution process reflected by the b value in different stress intervals. During the uniaxial loading, the frequency of maximum amplitude distribution of AE events in different intervals is shown in Figure 5. Combined with the b value of intact coal rock in Table 2, when the stress level was in the interval of 0 ∼ 20%, there was few of AE events with maximum amplitude about 35 ∼ 45 dB mainly. As the stress level was loaded to the interval of 20 ∼ 40%, the amount of AE events and amount of AE events with larger amplitude had increased, where the maximum amplitude was about 35 ∼ 50 dB primarily, and the value b was reduced comparing with the previous interval. 
When the stress level was in the interval of 40 ∼ 60%, the amount of AE events continued to increase, and the events with low amplitude increased faster than those with larger amplitude, which suggested that the propagation of micro-cracks in this interval was restrained, as reflected in the increasing b value. When the stress level was in the interval of 60 ∼ 80%, the amount of AE events with large amplitude increased rapidly compared with the previous interval, especially the AE events with maximum amplitude of about 50 ∼ 60 dB, which indicated that large-scale cracks were developing rapidly on the microscale, as reflected in the decreasing b value. After loading the stress level to the interval of 80 ∼ 100%, the growth of AE events with large maximum amplitude became stable, and the decrease of the b value also became gentle. Compared with the previous interval, in which larger-scale cracks developed, the crack development was now dominated by adjustment and began to expand along the existing large-scale cracks on the microscale. In order to better reflect the amplitude distribution characteristics of coal-rock mass, the Gaussian normal distribution function g(x) was selected to perform statistical probability fitting for the maximum amplitude of AE events, and the Gaussian normal distribution curve shown in Figure 6 was obtained. Its distribution function and degree of fitting are shown in Table 4. The amplitude distribution of AE events showed good Gaussian normal distribution characteristics, with a high fitting degree. In different stress regions, the probability of AE events concentrated in the range of maximum amplitude from 35 to 50 dB is the largest. In summary, the quantitative evaluation index b could reflect very well the distribution regularities of the maximum amplitude of AE events during the failure evolution process of coal, and the maximum amplitude of AE events could directly reflect the expanding intensity of cracks in the coal. Therefore, the characteristic parameter b of AE events could describe the failure evolution process of coal well. In addition, the stress interval with a large amplitude increase, identified from the maximum amplitude distribution frequency of the AE events, could be used as a precursor to the peak of the specimen. Likewise, the stress level at which the b value characterizing the maximum amplitude distribution of the AE events drops drastically and then becomes gradual could be used as a precursor of destruction. The Variation Regularities of Maximum Amplitude Between Different Fractured Rock Mass The maximum amplitude distribution frequency of AE events for intact coal and fractured coal-rock mass during the stress level from 0 to the peak loading is shown in Figure 6. As shown in Figure 6, the distribution frequency of maximum amplitude of intact coal was more "full" than that in fractured coal-rock mass from the distribution shape, where it was more "slender" in fractured coal-rock mass. And comparing the shape in different fracture rock mass, the more complex the fracture (i.e., in sequence of intact coal, single fracture, parallel fractures, cross fractures, and mixed fractures) in coal-rock mass, the more "slender" it was.
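The Gaussian probability fit g(x) mentioned above might be reproduced along the following lines; the amplitudes are synthetic, and the bin width and starting values are arbitrary choices for illustration.

import numpy as np
from scipy.optimize import curve_fit

def g(x, A, mu, sigma):
    """Gaussian normal distribution function used for the probability fit."""
    return A * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

# Synthetic maximum amplitudes (dB) standing in for one group of AE events.
amplitudes = np.random.default_rng(1).normal(loc=44.0, scale=5.0, size=3000)

counts, edges = np.histogram(amplitudes, bins=np.arange(30, 75, 2.5))
centres = 0.5 * (edges[:-1] + edges[1:])
freq = counts / counts.sum()                        # relative frequency per bin

(A_fit, mu_fit, sigma_fit), _ = curve_fit(g, centres, freq, p0=[freq.max(), 45.0, 5.0])
print(f"peak probability near {mu_fit:.1f} dB, sigma = {sigma_fit:.1f} dB")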
Based on the distribution characteristics of maximum amplitude of AE events, the intact coal had a larger amount of AE events and a wider distribution range, where the amplitude of AE events was distributed mainly in the interval of 35 ∼ 55 dB, and there were 87 AE events with amplitude more than 70 dB. For the fractured coal-rock mass with more complex fracture, the decrease of AE event amount was more obvious, and the AE events with large amplitude reduced more. Taking mixed fracture of coal-rock mass as an example, the maximum amplitude of AE events was mainly from 35 to 45 dB, where the amount of AE events declined steeply when the amplitude was larger than 45 dB, and there were no AE events with amplitude greater than 70 dB. So, comparing intact coal and fractured coal-rock mass, the more complex the fracture (i.e., in sequence of intact coal, single fracture, parallel fracture, cross fracture, and mixed fractured coal-rock mass), the greater the reduction of the total AE events and of the amount of AE events with large amplitude. The reason for this was that with more complex fracture inside, the internal cracks could expand along the existing fracture more easily, and large-scale crack expansion was reduced more because of the low bearing capacity of the rock bridge cut by fractures. In order to better reflect the amplitude distribution characteristics, through statistical probability fitting of the distribution frequency of the maximum amplitude, the Gaussian normal distribution law of the maximum amplitude distribution frequency with a high degree of fit can be obtained. Table 5 shows the Gaussian normal distribution function and fitting degree of coal-rock mass with different fracture combinations. It was observed that the probability of the maximum amplitude of AE events occurring at about 45 dB was the largest. Based on the characterization of the maximum amplitude distribution at the range from 0 to the peak loading, the b value characteristics at the peak stress of coal-rock mass with different fractures were further analyzed. As shown in Figure 7, under uniaxial loading, the b value at the peak stress of intact coal, single fracture, parallel fracture, cross fracture, and mixed fractured coal-rock mass was 2.014, 2.448, 2.731, 2.876, and 3.269, respectively. Thus, the b value of intact coal was the smallest, while the b value of coal-rock mass with mixed fractures, with the most complex fractures, was the largest. Thus, the more complex the primary fractures were, the greater the b value was, where the physical meaning was that the more complex the fracture inside, the more barriers existed to restrain crack propagation caused by internal crack distribution. On the microscale, the main reason for this was that the more fractures existed in the coal-rock mass, the lower its capacity to bear load and store energy inside, and the cracks could be better adjusted during crack propagation, so that the more complex the fractures inside, the fewer large-scale cracks occurred. In addition, as shown in Figure 8, the b value distribution of single fracture coal-rock mass was more discrete, the reason being that the bearing capacity of single fractured rock mass was influenced by the direction between the load and the fracture, reflecting its anisotropic characteristics.
For mixed fractured coal-rock mass, the discrete degree of the b value was lower than the single fracture inside, where it was shown that the effects were reduced which was caused by the direction between the loading and fracture with lower characteristics of anisotropic. Temporal and Spatial Distribution Characteristics of Acoustic Emission Events In order to reflect the relationship between the evolution of internal fractures and the degree of main fractures in the process of coal-rock failure, the spatial distribution of AE events and energy in the process of failure evolution of intact coal and coal-rock mass with different fractures is shown in Figure 9. By correcting the accuracy of the AE positioning system in the test process, the experiment was carried out under the condition that the absolute errors of the AE source in X direction, Y direction, and Z direction were all less than 2 mm. As for the energy of AE events, the energy and quantity of AE events in the failure process of intact coal rock were the largest. Compared with the coal-rock mass with different combinations of fractures, the amount of AE events and its energy had shown a trend of decrease when the fractures were more complex. Compared with fractured coal-rock mass, the spatial distribution of AE events in intact coal rock was discrete, and the crack propagation was random, which would not be affected by fracture. However, the spatial distribution of AE events reflected the correlation between the spatial location of fractures and AE events. Taking single, cross, and mixed fractured coal-rock mass as examples, under the condition of uniaxial loading, the expansion of the micro-cracks was controlled by the existing fractures, which led that the AE events were mainly distributed along the penetration failure surface presenting with concentrated distribution on the macroscale. For single fracture coal-rock mass, the AE events were mainly distributed along the surface of existing fracture, while for cross and mixed fractured coal-rock mass, the AE events were mainly distributed in the locked segment. The amplitude, energy, and the b value of AE events were combined to reflect the failure evolution process of intact coal and coal-rock mass with different fracture combinations and the relationship between the parameters. The relationship among AE event amplitude, energy, and the b value in different stress levels of intact coal and fractured coal-rock mass with different fracture combinations under uniaxial loading is shown in Figure 10. As shown in Figure 10, the energy of AE events in intact coal-rock mass was greater than that in fractured coal-rock mass significantly, and the more complex the fracture was, the lower the AE energy was. At the low stress level, the more complex the fracture was, the deformation in the failure process was mainly adjusted along the existing fracture, and the lower the energy of acoustic emission event was. When the stress level was loaded to 0.6∼0.8 times of the peak strength (σ max ), the distribution of AE events with large energy was relatively concentrated, caused by cracks extending in special direction. During the stress level of 0.8 ∼ 1, both the AE amplitude and AE energy increased and then decreased. Based on the AE amplitude and energy distribution of intact coal and fractured coal-rock mass, the variation trend of AE amplitude was consistent with that of AE energy, and when the energy value was large, the AE amplitude increased. 
FIGURE 10 | The distribution of AE event amplitude, energy, and the b value of intact and fractured specimens at pre-peak stress level. Comparing with the distribution in intact coal, because of the effect of the existing fractures, AE events with large energy in fractured coal-rock mass were mainly distributed along the failure fracture during the whole loading process, which mainly happened at the stress level of about 0.8, while in the intact coal-rock mass they occurred at 0.6∼0.8 times the peak. The amplitude of intact coal varied continuously during the test, while the amplitude of fractured coal-rock mass changed from disperse to continuous, and the amplitude frequency was high when the stress level was 0.6∼1. The results of this study were consistent with those of Meng et al. [38,39]. In the compaction and elastic stages, the amplitude of AE energy was relatively low, and new micro-cracks could not be formed under low stress. When the stress entered the plastic stage, the internal structure of coal rock was damaged, and the amplitude of AE energy increased gradually. As shown in Figure 10, for both intact coal and fractured coal-rock mass with different fracture combinations, the energy of AE events showed a decreasing trend after reaching the maximum value during the failure evolution process, which indicated that, after the AE event with maximum energy arrived, the failure evolution was dominated by development and extension of cracks of smaller size and by breakthrough of the rock bridges between cracks. In general, the acoustic wave was the external macroscopic representation of crack extension in rock material during its failure process, where the maximum amplitude of the AE event was positively related to its energy. Combined with the distribution of the b value at different stress levels, the b value of AE events showed an increasing trend at the stress level of 0.6∼0.8 in fractured coal-rock mass, which showed that the amount of AE events with large amplitude was decreasing and the internal energy was accumulating during this stress interval. When the stress level was loaded up to about 0.8, the b value was reduced obviously, and the amount of AE events with large amplitude increased conspicuously, which was in accordance with the energy distribution of AE events at this stress level. After the stress level of about 0.8, the failure evolution was dominated by development and extension of cracks of smaller size, and the b value representing the distribution regularities of AE events decreased gradually and finally trended to be gentle. Therefore, based on the characteristics during the failure evolution process in fractured coal-rock mass, some regularities could be presented: the b value would decrease, then increase, and then decrease again in the pre-peak phase, and the stress level at which this second decrease occurred could be considered a precursor of failure. For intact coal, internal cracks extended randomly under uniaxial loading because of the absence of constraint from existing fractures and the higher strength inside, and the AE events with large energy were distributed throughout the whole failure process, where the b value decreased on the whole, in distinction from fractured coal-rock mass. When the stress level was loaded up to about 0.8, the b value trended to be gentle after the AE event with the largest energy happened, which was in accordance with fractured coal-rock mass.
So, based on the variation regularity of the b value reflected and the maximum amplitude distribution of AE events in intact coal and fractured coal-rock mass during the failure evolution process, it could be considered that the second decrease and flattening of the b value is the premonitory criterion of failure. CONCLUSION As we know, one of the difficulties in studying the mechanical behavior of fractured rock mass was on how to obtain the fractured rock mass specimens. In this study, the fractured rock mass specimens were obtained by preloading intact rock (coal) in the rock mechanic rigidity servo testing system (MTS815). Then through the acoustic emission phenomenon of rock during loading failure and the maximum amplitude distribution law of the AE event, the failure evolution process of intact rock and fractured rock mass specimens under uniaxial loading condition was studied on the basis of the maximum amplitude distribution of AE events. The main characteristics during the failure process could be concluded as follows: 1) Based on the quantity and spatial distribution of fractures, the fractured rock mass could be categorized into four types from simple to complex, such as single fracture, parallel fractures, cross fractures, and mixed fractures successively in the statistical sense. 2) The b value which represented the characteristics of the maximum amplitude distribution of AE events could be used to reflect the failure evolution process of the rock mass, where the more the fractures inside, the more obvious damage effect and the larger b value were. 3) Under different stress levels, the b value of intact rock and fractured rock mass showed a decreasing trend with the increase of load on the whole. Under the condition of the same stress level, the more complex the fractures inside, the larger the b value was. That is, the value b of intact coal rock < single fractured rock mass < parallel fractured coal-rock mass < cross fractured rock mass < mixed fractured coal-rock mass. 4) During the uniaxial loading, the cracks inside intact rock were extended randomly, and the distribution of AE events was discrete, where the AE events of the cracks inside fractured rock mass were concentrated under the influence of existed fractures. 5) Before the loading reached the peak value, in the process of failure evolution of intact rock mass and fractured rock mass, the b value decreased, then increased and then decreased, and finally tended to be flat. The feature that the b value decreased for the second time and gradually flattened out could be regarded as an early warning signal that the loading reached the peak. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
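As a rough illustration of conclusion 5, the windowed b-value calculation and a naive flag for the second decrease might be coded as below; the stress windows, the minimum event count, the amplitude grid, and the synthetic data are all arbitrary choices for illustration, not values taken from the tests reported here.

import numpy as np

def b_value(amps, a_grid=np.arange(35, 71)):
    """b from the log-log fit of N(A), the count of events with amplitude >= A (dB)."""
    N = np.array([(amps >= a).sum() for a in a_grid])
    keep = N > 0
    return -np.polyfit(np.log10(a_grid[keep]), np.log10(N[keep]), 1)[0]

def b_series(stress_levels, amplitudes_db, edges=np.arange(0.0, 1.01, 0.1)):
    """b value per pre-peak stress window (stress given as sigma / sigma_max)."""
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        amps = amplitudes_db[(stress_levels >= lo) & (stress_levels < hi)]
        out.append(b_value(amps) if amps.size > 30 else np.nan)  # needs enough events
    return np.array(out)

def second_drop(b):
    """Index of the window where the second sustained decrease of b begins."""
    drops, falling = [], False
    for i in range(1, len(b)):
        if np.isnan(b[i]) or np.isnan(b[i - 1]):
            falling = False
            continue
        if b[i] < b[i - 1]:
            if not falling:
                drops.append(i)
            falling = True
        else:
            falling = False
    return drops[1] if len(drops) > 1 else None

# Synthetic demo: amplitudes recorded at random pre-peak stress levels.
rng = np.random.default_rng(2)
stress = rng.uniform(0, 1, 5000)
amps = 35 + rng.exponential(6.0, 5000) + 6 * (stress > 0.8)  # larger events near the peak
b = b_series(stress, amps)
print(np.round(b, 2), "second-decrease window:", second_drop(b))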
v3-fos-license
2017-08-03T02:30:35.605Z
2016-03-17T00:00:00.000
16403479
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/s13104-016-1988-4", "pdf_hash": "1180b807dd52d8a89d5b19effd08fe07ed682fc6", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42034", "s2fieldsofstudy": [ "Medicine" ], "sha1": "1180b807dd52d8a89d5b19effd08fe07ed682fc6", "year": 2016 }
pes2o/s2orc
Cannabis use among Navy personnel in Sri Lanka: a cross sectional study Background Prevalence of cannabis use among military populations vary. There is evidence that drug use is associated with combat exposure and PTSD. The objective of the study was to assess the prevalence of cannabis use among Sri Lanka Navy (SLN) personnel and to identify any relationship with cannabis use and combat exposure. Methods This cross sectional study was carried out among representative samples of SLN Special Forces (Special Boat Squadron) and regular forces deployed in combat areas. Both Special Forces and regular forces were selected using simple random sampling. Personnel who had served continuously in combat areas during the 1 year period prior to end of combat operations were included in the study. Cannabis use was defined as smoking cannabis at least once during the past 12 months. Results The sample consisted of 259 Special Forces and 412 regular navy personnel. Prevalence of cannabis use was 5.22 % (95 % CI 3.53–6.9). There was no significant difference in prevalence of cannabis use among Special Forces personnel compared to regular forces. Cannabis use was significantly higher in the age group 18–24 years [OR 4.42 (95 % CI 2.18–8.97)], personnel who were never married [OR 2.02 (95 % CI 0.99–4.12)], or had an educational level less than GCE O’Level [OR 4.02 (95 % CI 1.17–13.78)]. There was significant association between cannabis use and hazardous alcohol use [adjusted OR 5.47 (95 % CI 2.65–11.28)], PTSD [adjusted OR 4.20 (95 % CI 1.08–16.38)], GHQ caseness [adjusted OR 2.83 (95 % CI 1.18–6.79)] and multiple somatic complaints [adjusted OR 3.61 (95 % CI 1.5–8.7)]. Cannabis use was not associated with smoking. Risk of cannabis use was less in those who had seen dead or wounded [adjusted OR 0.42 (95 % CI 0.20–0.85)]. Experiencing hostility from civilians was the only combat exposure that significantly increased the risk of cannabis use [adjusted OR 4.06 (95 % CI 1.06–15.56)]. Conclusions Among Sri Lanka Navy personnel exposed to combat cannabis use was significantly associated with hazardous alcohol use but not smoking. PTSD and other adverse mental health outcomes were associated with an increased risk of cannabis use. Exposure to combat was not associated with increased risk of cannabis use. Background Cannabis also known as marijuana is an illicit psychoactive substance derived from the Cannabis sativa plant. Regular cannabis use is associated with cannabis dependence syndrome. Cannabis users are also more likely to use other illicit drugs. Studies in high income countries show that the pattern of drug initiation starts with alcohol and tobacco, followed by cannabis, and then other illicit drugs [1]. Cannabis use impairs cognitive and behavioural functions, especially for sustained-attention tasks [2,3]. It also increases the risk of mental disorders particularly psychoses [4][5][6]. Cannabis use is increasing in both developed and developing countries [4]. The prevalence varies widely from 0.5-42 % depending on the population [7]. According to the National Epidemiologic Survey on alcohol and related conditions, in the United States, the lifetime prevalence of cannabis is 8.4 % in males and 4.3 % in females [8]. In Europe, among patients presenting to emergency rooms with acute drug toxicity, cannabis was the third commonest drug used after heroin and cocaine [9]. Several studies have shown an association between exposure to trauma and substance use. 
The National Comorbidity Survey found that one third of individuals with lifetime post traumatic stress disorder (PTSD) had lifetime substance use disorder [10]. This association is seen among both civilians and combat veterans [11]. Increased prevalence of PTSD has been demonstrated in several groups such as community dwelling adolescents with alcohol dependence, treatment seeking male substance users and non treatment seeking users [11]. These groups were exposed to physical or sexual assault or crime victimization [12]. The comorbidity rates for PTSD and substance use are highest for male combat veterans [11]. Among veterans with PTSD the rates of comorbid drug and alcohol use are much higher than in the general population. Prevalence of cannabis use among military populations vary. In the French army 18.5 % reported using cannabis at least once in the past 12 months and 8.1 % reported regular use (smoking at least 10 joints per month) [13]. Among Canadian Forces 14 % reported cannabis use [14]. Among military police in Brazil, lifetime use of cannabis was 8.1 % [15]. Khat is the commonest drug used by Somali combatants followed by cannabis (10.7 %) [16]. However low rates of drug use have been reported in some military personnel returning from combat duty. Among Persian gulf war veterans 2-3 % reported drug problems 6 years after return [17]. Low rates may be due to the fact that cannabis use is illegal in the military [18]. From 2006 to 2009 the Sri Lanka Defense Forces were engaged in combat operations. During this period 190 officers and 5700 other ranks of the Sri Lanka Army and 485 personnel from the SLN were killed and 27,000 injured [19]. Both Special forces and regular forces were exposed to traumatic events. More than 60 % in both groups had seen dead or injured persons. More than 80 % of the Special Forces reported discharge of weapons in direct combat compared to 26.7 % of the regular forces. Among the Special Forces, 81.5 % had engaged in combat with enemy vessels compared to 29.4 % of the regular forces [19]. Although there is evidence that drug use is associated with PTSD there is little evidence that it is associated with exposure to trauma per se. To investigate if use of cannabis in military personnel is associated with exposure to trauma we looked at cannabis use in the Sri Lanka Navy personnel deployed in combat areas. Methods The study methods are described in detail in a previous publication [19]. The data was collected as part of a study comparing the mental health status of Special Forces personnel with regular forces of the Sri Lanka Navy (SLN). Data collection commenced 3 months after combat operations ended in 2009. This cross sectional study was carried out among representative samples of SLN Special Forces and regular forces deployed in combat areas. Both Special Forces and regular forces were selected using simple random sampling. The sample of SLN Special Forces was selected from the Special Boat Squadron. The sample size was calculated to detect an odds ratio of 2.0 for disorders with an estimated prevalence of 15 %, a power of 90 % and confidence of 95 % (two tailed). The required sample size was 240 in each group. The sample size was increased by 15 % to adjust for nonresponse. The comparison group (regular forces) was oversampled to include more combat troops. The sampling frames used were the lists of personnel from the navy central data base. Samples were selected using computer generated random numbers. Participation was voluntary. 
The response rate was 93.8 %. The rate of missing values for individual items in the survey was about 10 %. Only personnel who had served continuously in combat areas during the 1 year period prior to end of combat operations were included in the study. The sample included only males. A total of 259 Special Forces and 412 regular navy personnel were recruited to the study. Outcome measures The 28 page questionnaire used in the study "Health of UK military personnel deployed to the 2003 Iraq war" was used as the data collection instrument [20]. Permission was obtained from the authors for the use of the questionnaire. Mental health outcomes were measured using several scales. Case definitions used were the same as those in the study of UK personnel deployed to Iraq [20]. Symptoms of common mental disorder were identified using the General Health Questionnaire 12 (GHQ-12) and cases were defined as individuals scoring 4 or more. PTSD was diagnosed using the 17 item National Centre for PTSD checklist civilian version (PCL-C) and cases were defined as individuals scoring 50 or more. Fatigue was assessed using the 13 item Chalder fatigue scale and cases were defined as individuals scoring 4 or more [21]. Hazardous alcohol use was identified using the WHO alcohol use disorder identification test (AUDIT), with individuals scoring ≥8 identified as cases. Multiple physical symptoms were elicited using a checklist of symptoms and cases were defined as individuals with 10 or more symptoms. This case definition represents the top decile of this sample. Cannabis use was defined as smoking cannabis at least once within the past 12 months. Ethical approval Ethical clearance was obtained from the Ethics Review Committee of the Faculty of Medicine, University of Colombo. Participation was voluntary and written informed consent was obtained from all participants. The questionnaire did not identify the participants by name. Statistical analysis Prevalence of cannabis use was calculated according to demographic variables. Association between cannabis use and combat exposure was explored using multiple logistic regression analyses which adjusted for demographic variables and service type. Statistical analysis was carried out using SPSS version 13.0 for Windows. Study sample The sample consisted of 259 Special Forces and 412 regular navy personnel [19]. The mean age of the sample was 27.6 years (SD 5.02). Of the sample 49.0 % were single, 49.6 % were married and 0.3 % were previously married. One third of the sample (35.2 %) were engaged in combat duty, 29.1 % served on board naval vessels and 35.3 % were engaged in non-combat duties which included medical, logistic, engineering, communication and administrative roles. Prevalence of cannabis use Cannabis use according to demographic characteristics is shown in Table 1. Cannabis use and exposure to trauma Association between ten items measuring combat exposure and cannabis use was assessed using logistic regression analysis. These associations remained even after adjusting for demographic variables. Fatigue was not significantly associated with cannabis use (Table 3). Cannabis and other substance use There was significant association between cannabis use and hazardous alcohol use (total score of ≥8 in the AUDIT scale) [adjusted OR 5.47 (95 % CI 2.65–11.28)]. Discussion This study provides information about cannabis use in a military population in Sri Lanka. It explores the association of cannabis use with combat exposure and mental health.
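As an illustration of the outcome scoring and the adjusted models described under Methods above, a minimal sketch in Python is given below; all file and column names are hypothetical, and the original analysis was carried out in SPSS, so this is not the authors' code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names standing in for the questionnaire variables.
df = pd.read_csv("sln_survey.csv")

# Case definitions as described above.
df["ghq_case"] = (df["ghq12_score"] >= 4).astype(int)
df["ptsd_case"] = (df["pcl_c_score"] >= 50).astype(int)
df["fatigue_case"] = (df["chalder_score"] >= 4).astype(int)
df["hazardous_alcohol"] = (df["audit_score"] >= 8).astype(int)
df["multiple_symptoms"] = (df["symptom_count"] >= 10).astype(int)
df["cannabis_use"] = (df["cannabis_past_12m"] == "yes").astype(int)

# Logistic regression of cannabis use on one combat exposure item,
# adjusted for demographic variables and service type.
model = smf.logit("cannabis_use ~ hostility_from_civilians + age_group"
                  " + marital_status + education + service_type", data=df).fit()
print(np.exp(model.params).round(2))   # adjusted odds ratios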
The prevalence of cannabis use was 5.22 % (95 % CI 3.53-6.9) among SLN personnel. Personnel of younger age (18-24 years), those who were never married and those with an educational level below GCE O'Level were more likely to use cannabis. Cannabis use was significantly associated with hazardous alcohol use but not with smoking. Except for experiencing hostility from civilians, other types of combat exposure were not associated with an increased risk of cannabis use. Cannabis use was significantly associated with PTSD, GHQ caseness and experiencing multiple somatic complaints. The sample consisted of Special Forces and regular forces personnel exposed to combat. We have previously reported that the prevalence of common mental disorders (11.8 %), PTSD (2.4 %), fatigue (13.4 %), multiple physical symptoms (10.4 %) and hazardous alcohol use in the SLN personnel was lower than that in United Kingdom (UK) and United States (US) personnel deployed in Iraq and Afghanistan [19,20,[22][23][24]. Despite higher exposure to potentially traumatic events, Special Forces had fewer mental health problems than regular forces. We found no significant difference in cannabis use between Special Forces and regular forces. In military populations, combat-induced PTSD is associated with substance abuse [25]. This may be because chronic substance users are more vulnerable to developing PTSD, or because people with PTSD use psychoactive substances as a means of self-medication [11]. We too found that cannabis use was significantly associated with PTSD. We found that experiencing hostility from civilians was the only combat exposure that was significantly associated with cannabis use. To understand the association between combat exposure, PTSD and cannabis use, we looked at the relationship between these variables. Of the different types of combat exposure, experiencing hostility from civilians, feeling one might die and injury from land mines were significantly associated with PTSD. Therefore, hostility from civilians may be a highly traumatic experience which increases the risk of PTSD and also of cannabis use. However, it must be noted that only a total of 18 individuals experienced hostility from civilians, and of these only three were cannabis users. Injury following land mine blasts and feeling one might die are also highly traumatic experiences which increased the risk of PTSD, but these were not associated with cannabis use. There is evidence that PTSD, rather than combat stress per se, is associated with substance use [25]. Our findings support this. Cannabis may be used to cope with symptoms of PTSD. Some states in the United States of America have approved the use of medical marijuana for PTSD. A study has reported that 23 % of patients seeking medical cannabis for the first time screened positive for PTSD [30]. Greater severity of PTSD was associated with more frequent cannabis use [32]. Those with PTSD were more likely to seek medical cannabis than those without PTSD. There is evidence that cannabis is used to cope with symptoms such as poor sleep and intrusive thoughts [31]. Seeing dead or wounded persons was associated with a significantly lower risk of cannabis use. We have previously reported that significantly more Special Forces personnel had reported seeing dead or wounded persons [19]. Lower rates of mental health problems among Special Forces may explain the lower associated risk of cannabis use.
According to the gateway theory, tobacco or alcohol use leads to cannabis use, and cannabis users are more likely to go on to use heroin and cocaine [1]. However, the evidence regarding the gateway theory is inconsistent, and we too did not find a significant association between cannabis use and smoking [26]. In this sample the prevalence of smoking (17.9 %) was lower than that reported among the general population (29.9 % in urban areas and 24.4 % in rural areas) [27,28]. The anti-smoking policy which was in force in the SLN at that time restricted access to cigarettes. However, there was a significant association between hazardous alcohol use and cannabis use. There is evidence that hazardous alcohol use is associated with PTSD [12]. The General Health Questionnaire is a scale used to identify psychological morbidity in non-psychiatric settings. Our study found that cannabis users were more likely to be identified as cases based on the GHQ score. This association disappeared when we adjusted for hazardous alcohol use, suggesting that hazardous alcohol use acts as a confounding factor in the aetiology of psychological morbidity. In the absence of data on cannabis use among the general population in Sri Lanka, it is not possible to decide whether the rate of cannabis use in this military population differs from that of the general population. Since this sample consisted only of males and a high proportion were young and unmarried, which are factors associated with illicit drug use, we can expect the overall prevalence in this group to be higher than in a general population sample. The prevalence in this sample was higher than among a cohort of mentally ill patients in Sri Lanka [29]. However, that study may have underestimated use, as it relied on patient records for identification of cannabis use. We have previously reported that the prevalence of hazardous alcohol use and smoking in this sample is lower than that reported in US and UK military personnel [19]. The prevalence of cannabis use, too, is lower than that reported among French, Canadian, Brazilian and Somalian military personnel [13][14][15][16]. The prevalence of probable PTSD was 1.9 % in the Special Forces and 2.7 % in the regular forces [19]. Since there is evidence that substance use disorders are associated with PTSD, the low rate of PTSD may explain the low rates of cannabis and alcohol use in this sample. Access to cannabis also may have been restricted because the personnel were deployed in combat areas and because interactions with civilians and other means of acquiring cannabis were limited. The main limitation of our study was that self-reports were used to identify cannabis use. Under-reporting is known to occur with self-reports of substance use. Under-reporting of cannabis use in our sample is a distinct possibility because cannabis is an illicit drug. We also did not assess the frequency and quantity of cannabis use. Despite this limitation, this study provides data on cannabis use in Sri Lanka and also supports previous findings that there is no significant association between cannabis use and combat exposure.

Conclusions
The prevalence of cannabis use was lower among Sri Lanka Navy personnel than that reported among military personnel from other countries. Personnel of younger age, those who were never married and those with an educational level below GCE O'Level were more likely to use cannabis. Cannabis use was significantly associated with hazardous alcohol use but not with smoking.
PTSD and other adverse mental health outcomes were associated with an increased risk of cannabis use. Exposure to combat per se was not associated with an increased risk of cannabis use.
Damage to the Ventromedial Prefrontal Cortex Impairs Learning from Observed Outcomes Individuals learn both from the outcomes of their own internally generated actions (“experiential learning”) and from the observation of the consequences of externally generated actions (“observational learning”). While neuroscience research has focused principally on the neural mechanisms by which brain structures such as the ventromedial prefrontal cortex (vmPFC) support experiential learning, relatively less is known regarding how learning proceeds through passive observation. We explored the necessity of the vmPFC for observational learning by testing a group of patients with damage to the vmPFC as well as demographically matched normal comparison and brain-damaged comparison groups—and a single patient with bilateral dorsal prefrontal damage—using several value-learning tasks that required learning from direct experience, observational learning, or both. We found a specific impairment in observational learning in patients with vmPFC damage manifest in the reduced influence of previously observed rewards on current choices, despite a relatively intact capacity for experiential learning. The current study provides evidence that the vmPFC plays a critical role in observational learning, suggests that there are dissociable neural circuits for experiential and observational learning, and offers an important new extension of how the vmPFC contributes to learning and memory. Introduction To make optimal decisions, individuals must learn the value of stimuli in an ever-changing environment and use this information to guide choice behavior. While much research has been conducted regarding the role of the ventromedial prefrontal cortex (vmPFC) in value-guided choices based on learning from the consequences of one's own actions (Damasio et al. 1990;Bechara et al. 2000;Rolls 2004;Rangel et al. 2008;Kable and Glimcher 2009;Rushworth et al. 2011;Rudebeck and Murray 2011a), less is understood about the neural mechanisms by which individuals learn the value of stimuli in the environment through the passive observation of externally generated actions and their consequences. Interestingly, however, a recent neuroimaging investigation (Burke et al. 2010) found that observational learning in a social context was related to brain activation in specific portions of prefrontal cortex, specifically the vmPFC and dorsolateral prefrontal cortex (dlPFC). Notably, activation of the 2 regions was related to dissociable aspects of observational learning: vmPFC activation was linked specifically to prediction errors at the time of "outcome" (i.e., outcome prediction error: Differences between expected and observed reward), whereas activity in dlPFC reflected prediction errors at the time of "choice" (i.e., action prediction error: Differences between expected and observed action). In the current study, we tested the hypothesis that the vmPFC is necessary for an intact capacity for observational learning, by examining the performance of patients with focal lesions of the vmPFC. While the notion of observational learning is multifaceted (Hopper 2010;Zentall 2012), here we focus on a specific component of this process: The ability of individuals to learn from passively watching the consequences of actions that were externally generated (i.e., by a computer player whose choices were random, and known to be so by participants). 
Our experimental paradigm, therefore, differs from previous studies in the field of social decision-making that have examined how individuals learn from and imitate the behavior of intelligent social agents (Sanfey 2007;Hampton et al. 2008;Behrens et al. 2009;Burke et al. 2010;Seo and Lee 2012;Boorman et al. 2013;Chang et al. 2013). Instead, our procedure has closer parallels with experimental paradigms used to investigate the differential contributions of the striatum and hippocampus to feedback-based and observational learning, respectively, in the memory literature (Poldrack et al. 2001;Shohamy et al. 2004)-and also the "ghost" conditions used previously to examine the nonsocial components of observational learning-for example, where the observer passively watches a remotely controlled door being moved randomly either to the right or to the left by an experimenter to reveal a food reward (Hopper et al. 2008;Hopper 2010). We assessed the performance of a cohort of patients who had focal damage to the vmPFC (vmPFC group), as well as healthy normal comparison (NC), brain-damaged comparison (BDC) groups, and a single case with damage to the bilateral dorsal prefrontal cortex (dPFC; Fig. 1 in a series of relatively simple two-arm bandit tasks (Fig. 2) involving either "experiential learning" alone ( phases 1 and 2), "observational learning" alone ( phase 4), or both types of learning interleaved ( phase 3). Although previous neuropsychological studies have assessed whether vmPFC patients perform decision-making tasks differently from comparison participants by using coarse aggregate measures of behavior (Rolls et al. 1994;Fellows and Farah 2003;Hornak et al. 2004;Fellows and Farah 2007;Tsuchida et al. 2010), here we applied fine-grained analytic procedures to characterize choice behavior (Lau and Glimcher 2005;Kennerley et al. 2006;Rutledge et al. 2009;Kovach et al. 2012). Critically, this analysis allowed us to distinguish the unique influences of past actions and outcomes (both "experienced" and "observed") on subsequent behavior. Based on recent neuroimaging findings relating to observational learning in the social domain (Burke et al. 2010), we predicted that patients with vmPFC lesions would show an impairment in observational learning in the context of our experimental paradigm, reflected in the reduced influence of observed outcomes on participants' behavior. Indeed, our study was specifically configured to test the prediction that the vmPFC makes a critical contribution to learning from observed outcomes, because outcomes were informative whereas observed choices were not (i.e., the choices of the computer player were random). Participants: Neuropsychology The target patient group for this study (N = 11) consisted of adults with damage to the vmPFC who have generally intact psychometric intelligence, memory, and executive function (see Supplementary Table 1). A group of neurologically and psychiatrically normal adults (N = 11) were enrolled as healthy comparison (NC) participants, and a group of psychiatrically normal adults with focal brain damage (N = 11) served as BDC participants. All 3 groups were well matched for average age (mean [SD] , and equal numbers of men and women were enrolled in each group (6 female and 5 male; see Supplementary Tables 1 and 2). We also tested 1 patient with damage to the dPFC (see Supplementary Table 2 Overview of experimental phases in the probabilistic task. 
(A) During "ACTIVE" trials (blue border, for illustration), participants chose a fractal and then either received feedback concerning the outcome of their choice ( phases 1-3), or received no feedback ( phase 4). During "WATCH" trials (yellow border, for illustration), participants observed a computer player making a choice, followed by the outcome of this choice. Participants could learn by observing the outcomes during WATCH trials-though the actual choices of the computer were known to be uninformative (i.e., random). The composition of individual phases differed as to whether WATCH trials were included ( phases 3-4), and whether the participant received feedback as to the outcomes of their own choices during ACTIVE trials (this was present in phases 1-3; no experiential outcome feedback was presented in phase 4 although participants could still observe computer outcomes during WATCH trials). Reversals occurred in phases 2-4, but not in phase 1. Different pairs of fractals were used in each phase. See Materials and Methods. (B) Schematic illustration of trial type included (ACTIVE/WATCH) and feedback (yes or no) in each of the 4 experimental phases. (C) A typical reward schedule for one phase and one vmPFC patient, illustrating how the reward values of the 2 fractals (blue and green lines) fluctuated over trials. Note that 4 reversals were triggered in this example, through the participant choosing the good stimulus in the last 9 of 10 preceding choices. case was studied because of the rarity of bilateral lesions of this region, in contrast to unilateral lesions. The single dPFC patient closely matched the other groups demographically and neuropsychologically, except for a specific impairment in executive function (i.e., Wisconsin card sorting task; see Supplementary Table 2). Patients (vmPFC, BDC, and dPFC) were selected from the Patient Registry of the Division of Cognitive Neuroscience at the University of Iowa. All patients conformed to the inclusion criteria of the Patient Registry. Specifically, they had focal, stable lesions that could be clearly identified on magnetic resonance (MR) or computerized tomography (CT) scans, and they were free of dementia, psychiatric disorder, and substance abuse. All participants were free of significant intellectual impairments. The patients had no premorbid histories of abnormal social conduct, emotional maladjustment, or other psychological disturbance. Neuropsychological, neuroanatomical, and experimental studies were all conducted in the chronic phase of recovery, >3 months after lesion onset. All lesions were acquired in adulthood, and were stable since the patient's most recent neuroimaging session and corresponding lesion analysis (see below). The neuropsychological tests reported in Supplementary Tables 1 and 2 were administered after participating patients had enrolled in the Patient Registry, but prior to their participation in this project. At recruitment, the neuropsychological profiles of all patients were stable since their most recent examinations, as supported by informal observation during enrollment and test sessions. NC participants were recruited from the surrounding community through advertisement, and they were compensated for their participation. The study was approved by the Human Subjects Committee of the University of Iowa, and all participants gave informed consent before completing the study in compliance with the Declaration of Helsinki. 
Data were collected at the University of Iowa Hospitals and Clinics, and were de-identified before being transmitted to other authors for collaborative consideration and analysis. Participants: Lesion Description and Analysis vmPFC Group Neuroanatomical analysis was based on MR data for 5 vmPFC participants and CT data for 5 vmPFC participants (i.e., those with surgically implanted clips). All structural neuroimaging data were obtained in the chronic epoch. Each patient's lesion was reconstructed in 3 dimensions using Brainvox software (Damasio and Frank 1992;Frank et al. 1997) and the MAP-3 technique (Damasio et al. 2004). Lesioned tissue evident on each patient's MR or CT scan was manually warped to corresponding anatomy of a normal template brain. The template brain was then used as a common frame of reference to determine the overlap of lesion location among patients. Additionally, the template brain was parcellated according to gyral boundaries (cf. Desikan et al. 2006), which permitted volumetric analysis of the lesions within parcels ( Fig. 1 and see Supplementary Fig. 2). One vmPFC participant (ID 10 in Supplementary Table 1) was excluded from volumetric analysis. Although CT data revealed an orbitofrontal/ ventromedial lesion in this participant, there was insufficient anatomical detail for application of the MAP3 lesion mapping method. Notably, all analyses of behavioral data were conducted with and without this participant, and the same patterns of vmPFC performance were uniformly observed. Patients in the vmPFC group were selected on the basis of having damage that included vmPFC in one or both hemispheres, where vmPFC was defined in the space of the template brain and included gyrus rectus, ventromedial superior frontal gyrus (vmSFG), and the medial orbitofrontal gyrus (mOrbG). Lesions were caused by meningioma resection (5), arteriovenous malformation resection (1), subarachnoid hemorrhage (3), and stroke (2). All patients with etiologies of stroke or subarachnoid hemorrhage had involvement of the anterior communicating or anterior cerebral artery and had surgically implanted clips, as did one resection patient. Volumetrically (see Supplementary Tables 3 and 4), 9 of the 10 patients included in the analysis had lesions that included gray matter within the gyrus rectus bilaterally, and one patient's lesion did not include gray matter of the gyrus rectus (ID 5, refer to Supplementary Tables 1 and 2). Among patients with gyrus rectus damage, the proportion of parcel voxels included in the lesion was similar in the left (mean = 0.550; SD = 0.228) and right (mean = 0.585; SD = 0.288) hemispheres. Meanwhile, 9 of the 10 patients had bilateral lesions that included gray matter in mOrbG, whereas one patient's lesion included only left mOrbG gray matter (5). Among patients with mOrbG damage, the proportion of parcel voxels included in the lesion was somewhat greater on the right (mean = 0.357; SD = 0.281) than on the left (mean = 0.248; SD = 0.181), but the difference was not statistically reliable [T (17) = 1.015, P = 0.324]. Medially and more superior, 8 of the 10 patients had lesions that bilaterally included gray matter in the vmSFG, while one patient had vmSFG damage limited to the right hemisphere (ID 4), and one had vmSFG damage limited to the left hemisphere (ID 5). 
Among patients with vmSFG damage, the proportion of parcel voxels included in the lesion was somewhat greater on the right (mean = 0.641, SD = 0.279) than on the left (mean = 0.519, SD = 0.284), but the difference was not reliable [T (16) = 0.918, P = 0.372]. Finally, a region just outside of what is typically considered human vmPFC is presented for comparison. Only 4 of the 10 patients included in the analysis had lesions that bilaterally included gray matter in the lateral orbitofrontal gyrus (lOrbG), whereas an additional 3 patients had damage to limited to the right lOrbG, and 1 patient had damage limited to the left lOrbG. Among patients with any nonzero lOrbG damage, the proportion of parcel voxels included in the lesion was somewhat greater on the right (mean = 0.216, SD = 0.196) than on the left (mean = 0.075, SD = 0.046), but the difference was not reliable [T (10) = 1.55, P = 0.151]. BDC Group Participants in the BDC group underwent scanning and analysis procedures identical to those described for the vmPFC group, and neuroimaging data were available for all BDC participants (10 MR and 1 CT). None had damage to any portion of the vmPFC per our definition (see above). Lesions were caused by meningioma resection (N = 1), temporal lobectomy (N = 2: 1 left and 1 right), arteriovenous malformation resection (N = 1), and stroke (N = 5). Additionally, two BDC participants had combined etiologies: BDC 8 underwent a right temporal lobectomy and suffered a contemporaneous right anterior choroidal artery stroke; and BDC 11 suffered a combined subarachnoid hemorrhage and infarct. For all BDCs, damage was principally limited to the temporal, occipital, or parietal lobes (see Supplementary Fig. 1). Unique Dorsal Prefrontal Patient The dPFC participant underwent a bilateral frontal meningioma excision. The extent of the participant's brain injury was assessed using MR data. Significant involvement of gray and white matter in the dorsal medial and lateral prefrontal cortex was evident (see Supplementary Fig. 3), but the vmPFC was preserved. Experimental Tasks Participants were paid $25 for the 2-h test session. Participants were instructed that they would be playing a game in which they would make choices, with their goal being to win as many points as possible. The following versions of the task were used, with the same task order used in all participants tested. No response deadline was used, with participants encouraged to respond as soon as they felt confident of their choice. Probabilistic Task Participants completed 4 phases of the probabilistic version of the task including phases with only observational learning, only experiential learning, or a mixture of both (Fig. 2). The structure of phase 2 (experiential learning only) was based on the task used previously in Hornak et al. (2004) andO'Doherty et al. (2001). In each experimental phase, different fractal pairs were used. The value of the good stimulus was probabilistically drawn from 1 of 2 uniform distributions: 70% (+80 to +250) or 30% (−10 to −60). The value of the bad stimulus was likewise probabilistically drawn from 1 of 2 different uniform distributions: 60% (−250 to −600) or 40% (+30 to +65). The values of the 2 fractals were generated randomly for each phase and participant: Different pairs of fractals were used in each phase, and on each trial the position (i.e., left or right) of a fractal was randomly generated. In all phases, cumulative totals of participants' overall winnings were presented after every 5 trials. 
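A minimal sketch of how a reward schedule of this kind could be generated is shown below; it simply draws per-trial values for the good and bad stimuli from the mixtures of uniform distributions just described. This is an illustrative reconstruction only, not the authors' task code (numpy assumed).

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_good():
    # Good stimulus: 70 % uniform(+80, +250), 30 % uniform(-60, -10)
    if rng.random() < 0.70:
        return rng.uniform(80, 250)
    return rng.uniform(-60, -10)

def draw_bad():
    # Bad stimulus: 60 % uniform(-600, -250), 40 % uniform(+30, +65)
    if rng.random() < 0.60:
        return rng.uniform(-600, -250)
    return rng.uniform(30, 65)

n_trials = 70
schedule = np.array([[draw_good(), draw_bad()] for _ in range(n_trials)])
print("mean value of good stimulus:", schedule[:, 0].mean().round(1))
print("mean value of bad stimulus: ", schedule[:, 1].mean().round(1))
```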
However, trial-by-trial feedback concerning the outcome of participants' choices was only presented in phases 1, 2, and 3 (see below). Phase 1 (experiential learning). Participants completed "ACTIVE" trials where they received trial-by-trial outcomes related to their choices, with participants performing the task until a criterion of 14 of 16 choices of the good stimulus was reached. Good and bad item value distributions for this phase were different from those used in phases 2-4, as per Hornak et al. (2004). The value of the good stimulus was probabilistically drawn from 1 of 2 uniform distributions: 70% (+60 to +200) or 30% (+10 to +50). The value of the bad stimulus was likewise probabilistically drawn from 1 of 2 different uniform distributions: 60% (−70 to −300) or 40% (+10 to +100). Phase 2 (experiential learning). Participants completed ACTIVE trials and received trial-by-trial outcomes related to their choices, with a reversal occurring after a criterion of 9 of 10 choices of the good stimulus. The phase was terminated after 70 trials in total. Phase 3 (experiential and observational learning). Participants performed the task as in phase 2, and received trial-by-trial feedback on ACTIVE trials as to the outcomes of their choices. However, on alternate "WATCH" trials, participants did not make choices themselves, but instead were given the opportunity to observe the choices and outcomes generated by a computer player. While participants were aware that the computer's actual choices were randomly determined and that they would not receive any rewards for the computer's choices, they were instructed that the computer was effectively choosing in the same environment (i.e., with the same underlying reward schedules), and therefore they could learn about the values of the fractals through observation. Participants were also told that they would not win or lose points during WATCH trials, and running totals of accumulated points ( presented after every fifth choice by the participant) did not reflect or report computer wins or losses. This phase totaled 70 ACTIVE and 70 WATCH trials. Reversals occurred as described for phase 2. Phase 4 (observational learning only, reversal component, outcomes displayed only for computer choices, and not for participant choices). This phase was identical in format to phase 3 (i.e., alternating ACTIVE and WATCH trials), with the notable exception that participants did not receive feedback concerning the reward outcomes of their own choices during ACTIVE trials. Thus, participants could only learn the stimulus values through observation, by watching the computer's choices and outcomes. Cumulative point totals were provided after every fifth participant choice. Reversals occurred as described for phase 2. Deterministic Task (Experiential Learning) This paradigm generally followed the procedure used by Fellows and Farah (2003) and was used as a control task to measure experiential learning performance in a very simple context. In our implementation, participants were required to choose between 2 fractals, which we refer to as A and B, presented on the left and right sides, respectively, of the display; position was randomized. On any given trial, one fractal consistently (i.e., with 100% contingency) yielded a gain of 50 points ("good" stimulus), with the other fractal yielding a loss of 50 points ("bad" stimulus). Trial-by-trial feedback was provided to the participants, in addition to a display of their cumulative point total after every 5 trials. 
Reversals occurred as described for phase 2 of the probabilistic task. Each participant completed 70 trials in total.

Analytic Procedures
In the probabilistic task, the reward values of the 2 fractals were randomly generated for each phase and participant according to the prespecified schedule described above. This was done to avoid the possibility that observed deficits could arise due to the specific properties of any particular reward schedule (i.e., a possible concern with the use of a single reward schedule across all participants). To control for possible differences in average reward values of stimuli between participant groups, we computed the total number of points won in any phase above that predicted by random choice (which would be equivalent to the average value of both stimuli over trials). This measure ("points won") is used throughout and entered into the statistical analyses performed. Basic statistical analyses were performed in SPSS 19 and R software. Statistical comparison of the single dPFC patient with the performance of the other experimental groups (control, vmPFC, and BDC) was performed using the modified Crawford's t-test, which was designed for this purpose (Crawford and Garthwaite 2002).

Logistic Regression
We performed an analysis to examine the influence of previous outcomes (i.e., points won) and choices on current behavior. To achieve this, we carried out a logistic regression analysis following the procedure previously used to evaluate the performance of monkeys (Lau and Glimcher 2005; Kennerley et al. 2006) and humans (Rutledge et al. 2009; Kovach et al. 2012). This procedure has the advantage of allowing choice parameters to be included in the model in order to capture the effect of the past history of choices, a feature which is particularly relevant in the current context given previous reports that patients with vmPFC damage may exhibit reversal deficits due to an increased tendency to perseverate (McEnaney and Butter 1969; Rolls et al. 1994; Fellows and Farah 2003). This analysis seeks to estimate weights (i.e., coefficients), which define the contribution of past rewards and choices to the current choice, indexed by the choice log odds:

log(P_{A,t} / P_{B,t}) = γ + Σ_{j=1..N} α_j R_{A,t−j} + Σ_{j=1..N} β_j C_{A,t−j}

where log(P_{A,t}/P_{B,t}) is the choice log odds; t is the current trial; α_j is a reward coefficient relating to j trials in the past; β_j is a choice coefficient relating to j trials in the past; R_{A,t−j} is the magnitude of reinforcement (i.e., points won) received for choosing fractal A at j trials in the past; C_{A,t−j} is 1 if fractal A was chosen at j trials in the past (and zero otherwise); and γ is an intercept term that captures any residual bias not explained by past rewards and choices. We included both subject choices and rewards received, and those of a computer player (where relevant, i.e., phases 3 and 4), in the regression model, with the aim of capturing the influence of both directly experienced, and observed, actions and outcomes on current behavior. Given that subjects did not receive feedback relating to their choices in phase 4, the magnitude of subject rewards for phase 4 was not included (i.e., coded as zero). Subject choices were entered into the regression analysis in all phases. Positive weights, therefore, index an increase in the log odds (i.e., a tendency to favor fractal A) as a function of past rewards (α_j) or choices (β_j).
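To make the structure of this regression concrete, the sketch below assembles a lagged design matrix for a single participant's ACTIVE trials and fits it with an off-the-shelf logit routine. The signed coding anticipates the symmetry assumption described next; the data are simulated, and the code is an illustration under these assumptions rather than the authors' implementation (statsmodels assumed).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def lagged_design(choices, rewards, n_lags=5):
    """Build signed lagged regressors for a choice-history regression.

    choices : array of +1 (fractal A) / -1 (fractal B) on each ACTIVE trial
    rewards : array of points won on each ACTIVE trial
    """
    rows = []
    for t in range(n_lags, len(choices)):
        row = {}
        for j in range(1, n_lags + 1):
            # Signed coding: a reward (or choice) for A pushes the log odds up,
            # the same reward (or choice) for B pushes it down by the same amount.
            row[f"reward_t-{j}"] = rewards[t - j] * choices[t - j]
            row[f"choice_t-{j}"] = choices[t - j]
        row["chose_A"] = int(choices[t] == 1)
        rows.append(row)
    return pd.DataFrame(rows)

# Hypothetical data for one participant.
rng = np.random.default_rng(1)
choices = rng.choice([1, -1], size=70)
rewards = rng.uniform(-600, 250, size=70)

df = lagged_design(choices, rewards)
X = sm.add_constant(df.drop(columns="chose_A"))
fit = sm.Logit(df["chose_A"], X).fit(disp=0)
print(fit.params)        # alpha_j (reward) and beta_j (choice) estimates
print("AIC:", fit.aic)   # used for the model comparison described below
```

The same construction extends to the computer player's choices and outcomes on WATCH trials by adding analogous lagged columns.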
Given that subjects had to choose between 2 options in our task, we assume symmetrical weights for both options [as in Lau and Glimcher (2005)]: that is, a reward obtained j trials ago increases the log odds by α_j if it was received through choosing fractal A, but decreases the log odds by α_j if it was received by choosing fractal B. For example, an α_2 of 0.002 indicates that a positive reward of 200 points received by choosing fractal A 2 trials in the past would increase the choice log odds by 0.4 (or the odds by e^0.4 ≈ 1.49).

Model Comparison
Following Lau and Glimcher (2005), we carried out a search of the parameter space, considering models that included differing trial-length histories relating to human and computer player choices and reward outcomes. To restrict the overall number of parameters given the size of our experimental dataset and ensure the robustness of fit, we constrained our search to 6 ACTIVE trials in the past. Given that WATCH trials alternated with ACTIVE trials in phases that incorporated observational learning (i.e., phases 3 and 4), we considered models that included terms for up to 3 WATCH trials in the past (since, if the participant was currently on a WATCH trial, the t−1 WATCH trial actually occurred 2 trials ago due to the intervening ACTIVE trial). As in previous work (e.g., Lau and Glimcher 2005; Daw et al. 2006; Daw 2011), models were fit using maximum likelihood estimation. We computed the fit of the regression model using a pseudo-R² statistic:

pseudo-R² = 1 − LL_M / LL_R

where LL_R indexes the maximum log likelihood of the model under random choice, and LL_M indexes the log likelihood of the estimated model given the data. Regression models with different trial histories (e.g., 5 trials in the past vs. 3 trials) were compared using a standard measure, the Akaike information criterion (AIC):

AIC = 2k − 2 LL_M

where LL_M is the maximum log likelihood of the estimated model, and k is the number of parameters. Models with lower AIC values are preferred, where a difference of >10 indicates strong support for the better-fitting model. The main regression model reported (see Results) included terms representing: (1) rewards received by the participant after each of the previous 5 trials; (2) choices made by the participant during each of the previous 5 trials; (3) rewards received by the computer player (in phases 3 and 4) after each of the previous 3 computer trials; and (4) choices made by the computer player (in phases 3 and 4) on each of the previous 3 computer trials. The model specified above, consisting of 16 parameters and a constant term, provided the best observed fit to the overall dataset, indexed by the pseudo-R² measure (0.26) and an AIC of 2391, which takes into account model complexity. In comparison, simpler models (e.g., modeling the effect of only 1 past trial for both human and computer rewards and choices) provided a substantially worse fit to the data: AIC = 2461.

Parameter Evaluation
To avoid assumptions of normality and symmetry in our parameter estimates, we evaluated differences between parameter estimates using nonparametric confidence intervals and P-values calculated with the BCa method (DiCiccio and Efron 1996). A total of 1000 bootstrap samples were drawn from our observed data using the methodology of Hsu et al.
(2005), which stipulates randomly drawing participants (N = 11 in our case) from the original sample with replacement, and forming a single bootstrap sample composed of all observations associated with the resulting sample of participants (Hsu et al. 2005). Parameter values for the best-fit logistic model were recorded for each bootstrap sample. 95% BC a confidence intervals were calculated from these sets of bootstrapped parameters, and the P-value reflecting the probability that the interval contained 0 (P BCa ) was used to evaluate parameter significance. We used this procedure within the comparison and vmPFC groups, and additionally in a combined model including interaction terms for group membership with each parameter. Results Participants completed 4 phases of a probabilistic two-arm bandit task, which differed in terms of whether reversal occurred ( phases 2, 3, and 4), and in terms of trial composition (see Fig. 2 and Materials and Methods). As is often the case in neuropsychological studies, patients were reimbursed with a fixed monetary account at the end of the experiment (i.e., rather than points won translating into real monetary gain). During "ACTIVE" trials ( phases 1-4), participants selected an option, and in all phases except phase 4, they received trial-by-trial feedback concerning the outcome of their choice. During "WATCH" trials ( phases 3 and 4), participants did not make choices themselves but instead were given the opportunity to observe the consequences (i.e., reward outcomes) of externally generated actions (i.e., of the computer player). Different experimental phases, therefore, involved either only experiential learning ( phases 1, 2), only observational learning ( phase 4), or a mixture of both ( phase 3). Overall Performance on Probabilistic Task We first considered participants' performance across the 4 experimental phases in terms of overall aggregate measures, such as the total number of points won and tendency to switch after a high magnitude loss. The pattern of performance was generally similar for all groups, who earned approximately the same number of points per condition. The only exception was in the observational-only condition ( phase 4), in which vmPFC patients earned fewer points than comparisons prior to their first reversal. To anticipate, this modest deficit may reflect the more significant deficit that we observed in our subsequent regression analysis (see the next section). In phase 1 (experiential learning only: Materials and Methods and Fig. 2), the vmPFC group required 45 trials on average to reach a criterion (SEM = 11.3; see Supplementary Table 5), defined as the choice of the higher valued stimulus on 14 of the 16 preceding trials, whereas NCs averaged 33 trials to reach a criterion (SEM = 6.2) and BDCs averaged 31 trials to reach a criterion (SEM = 5.2). No significant group-level differences were found for this measure or for points won (each P > 0.2; Fig. 3). A repeated-measures ANOVA confirmed that the choice behavior of all groups showed was sensitive to large magnitude trial outcomes, indexed by a significantly greater tendency to switch to the alternative stimulus following a large magnitude loss (i.e., >200 points) when compared with a large magnitude win (i.e., >100 points): main effect of the outcome valence-F 1,27 = 20.457, P < 0.0005; no significant group × valence interaction, F 2,27 = 0.324, P > 0.7. No significant difference was found in overall switching tendency between groups (F 2,27 = 1.387, P > 0.25). 
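Returning briefly to the Parameter Evaluation procedure described above, the participant-level resampling can be sketched as follows. The percentile interval shown is a simplification of the BCa interval actually used, `fit_fn` is a placeholder for refitting the lagged regression, and all names are hypothetical.

```python
import numpy as np
import pandas as pd

def participant_bootstrap(df, fit_fn, n_boot=1000, seed=0):
    """Cluster bootstrap: resample participants (with replacement), refit,
    and collect the parameter vector from each bootstrap sample."""
    rng = np.random.default_rng(seed)
    ids = df["participant"].unique()
    estimates = []
    for _ in range(n_boot):
        sample_ids = rng.choice(ids, size=len(ids), replace=True)
        # Keep *all* trials belonging to each sampled participant,
        # including duplicates when a participant is drawn more than once.
        boot_df = pd.concat([df[df["participant"] == i] for i in sample_ids])
        estimates.append(fit_fn(boot_df))
    return pd.DataFrame(estimates)

# `fit_fn` would refit the lagged logistic regression and return its coefficients.
# A simple (non-BCa) 95 % percentile interval for each coefficient:
# boot = participant_bootstrap(trials, fit_fn)
# ci = boot.quantile([0.025, 0.975])
```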
We next examined the overall performance of participants in experimental phase 2 (experiential learning only) and 3 (experiential and observational learning; Fig. 3 and see Supplementary Table 5). No significant differences were observed between the participant groups along any measure of aggregate performance (P > 0.1). All participant groups, therefore, were able to modify their behavior in response to big wins and big losses (both as defined earlier): main effect of valence, each F 1,30 > 50, each P < 0.0001; no significant interactions, each P > 0.5. Furthermore, there were no significant differences between participant groups in cumulative gain (i.e., points won) over the course of the experiment (each F 2,30 < 0.4, each P > 0.7) or the number of reversals achieved (each F 2,30 < 0.9, each P > 0.4). Interestingly, therefore, the performance of our cohort of vmPFC patients based on directly experienced actions and outcomes (i.e., phase 2) was indistinguishable from that of both NC and BDC participants. Given prior work suggesting that lesions of the lateral orbitofrontal cortex (OFC)-or combined lesions of the lateral and medial OFC (Rudebeck and Murray 2011b)-rather than the vmPFC, produce marked reversal learning deficits , we asked whether damage to this brain region predicted performance. No significant correlations were found between performance in any phase (i.e., either in terms of points won or number of reversals) of individual vmPFC patients and the extent of damage to the lateral orbital gyrus (each P > 0.1), as measured using volumetric analyses (see Supplementary Table 4 and Materials and Methods). Note, however, that this analysis can be considered exploratory and underpowered given the relatively limited extent of damage to those regions in most patients (<10% overall, many with no damage or no damage: see Fig. 1 and Supplementary Fig. 1 and Table 4). We next considered the overall performance of the vmPFC patient cohort in phase 4 (observational learning only), where learning was dependent solely on the observation of reward outcomes associated with the actions of a computer player during WATCH trials (i.e., participant choices during ACTIVE trials were not followed by feedback, although the unseen reward was counted toward their cumulative total points won as in other phases), which was known to be playing in the same environment but whose choices were random. Interestingly, all participant groups demonstrated a substantial capacity to learn about the values of stimuli, and to update these estimates in the face of change (i.e., reversals), through observation. All participants gained significantly more points than predicted by chance (each t (10) > 2.5, each P < 0. Table 5). While all groups performed similarly in this phase according to several measures including the number of reversals, points won after the first reversal, and switching behavior (all P > 0.1), both comparison groups outperformed vmPFC participants in the number of points won prior to the first reversal, F 2,25 = 6.9, P = 0.004 (vmPFC mean = 510 [339], NC mean = 1529 [267], and BDC mean = 1908 [219]). Planned comparisons between vmPFC and NC groups (t (16) = 2.361, P = 0.031) and vmPFC and BDC groups (t (17) = 3.5, P = 0.003) indicated that these differences were significant; meanwhile, the comparison groups did not differ (t (17) = 1.1, P > 0.2). While vmPFC participants also required numerically more trials to trigger the first reversal in phase 4 (vmPFC mean = 24. 
Logistic Regression Analyses: Experiential Learning
The relatively intact performance of vmPFC patients indicates that generally normal experiential learning is possible in the face of vmPFC damage. Critically, however, measures which summarize performance across an entire phase (e.g., total points won) may be insensitive to subtle abnormalities in underlying value representations (see Supplementary Table 5 for full details of performance indices). Therefore, we applied a finer-grained method of analysis to characterize the choice data using a logistic regression model (cf. Lau and Glimcher 2005; Rutledge et al. 2009; Kovach et al. 2012; see Materials and Methods). Importantly, our study generated a large quantity of choice data consisting of several thousand trials per group, making it well suited to this type of analysis. Logistic regression analyses afford the opportunity to examine the influence of past rewards and actions on current behavior, without assuming that these influences decay exponentially over time (i.e., as in reinforcement learning models; Lau and Glimcher 2005; Rutledge et al. 2009). This procedure seeks to estimate weights (i.e., coefficients), which define the contribution of past rewards and choices to the current choice, up to j trials in the past (see Materials and Methods). While the reward weights (α_j) index the influence of a reward at trial t−j in the past on the current choice at trial t, the choice weights (β_j) index the tendency of a given participant to perseverate (i.e., repeat the same choice) independent of any association of choice with reward. Separate coefficients were estimated for both directly experienced choices and outcomes (i.e., relating to the participants themselves during ACTIVE trials) and observed choices and outcomes (i.e., of the computer player during WATCH trials; see Materials and Methods). Positive weights, therefore, index an increase in the choice log odds (e.g., a tendency to favor fractal A) as a function of past rewards (α_j) or choices (β_j). Data from all experimental phases of the probabilistic task were entered into the regression model to increase the size of the dataset and the reliability of the parameter estimates (Lau and Glimcher 2005). The best-fitting regression model (see Materials and Methods for details of the fitting procedure) included terms representing: (1) rewards received by the participant during each of the previous 5 ACTIVE trials (in phases 1, 2, and 3); (2) choices made by the participant during each of the previous 5 ACTIVE trials; (3) rewards received by the computer player (in phases 3 and 4) during each of the previous 3 WATCH trials; and (4) choices made by the computer player (in phases 3 and 4) on each of the previous 3 WATCH trials (Materials and Methods). The results of an overall ANOVA, which included factors comprising participant group, latency (i.e., trials into the past), and information type (i.e., directly experienced and observed choices and reward outcomes), are summarized in Supplementary Table 6. There was a significant effect of group [Χ2(50) = 72.02, P = 0.022], and, to anticipate the results, this was driven by a specific deficit in observational learning in the vmPFC group (see below). We first consider effects relating to experiential learning, that is, the influence of participants' own past choices and reward outcomes on current behavior. As expected, there was a significant effect of participant rewards during ACTIVE trials [Χ2(15) = 1217.10, P < 0.0001: Fig. 4], whose influence decayed with increasing time into the past [latency × participant reward interaction: Χ2(12) = 25.76, P = 0.012].
[Figure 4 caption (abridged): regression coefficients for the influence of past rewards (top panels) and past choices (bottom panels) on the current choice, plotted against the number of trials into the past; left-side panels show experiential learning (ACTIVE trials, up to t−5) and right-side panels show observational learning (WATCH trials, up to t−3); solid lines denote the vmPFC group and dashed lines the appropriate comparison group; whiskers denote BCa 95 % confidence intervals (note that overlapping intervals do not necessarily indicate nonsignificant differences); *P_BCa < 0.05, ~P_BCa < 0.10. See Materials and Methods for details.]

There were no significant interactions of these factors with group [Χ2(8) = 8.16, P = 0.418]. In NC participants, the influence of rewards for trials t−1 (P_BCa < 0.0001, where P_BCa values were calculated using the BCa method and nonparametric confidence intervals; see Materials and Methods), t−2 (P_BCa < 0.0001), and t−3 (P_BCa < 0.05) was significant, whereas that at trial t−4 was marginal (P_BCa = 0.11). The vmPFC patients also showed effects of recent rewards on current choices at trials t−1 (P_BCa < 0.0001), t−2 (P_BCa < 0.05), and t−3 (P_BCa < 0.05).
While pairwise comparisons suggested that the influence of rewards received at trial t−2 was reduced in both the vmPFC and BDC groups relative to the NC group (P BCa < 0:005 and <0:05; respectively), these effects cannot be considered reliable in the absence of a Group × Participant Reward × Latency interaction in the overall ANOVA (see Supplementary Table 6). We also found a significant effect of past participant choices during ACTIVE trials on behavior [Χ 2 (15) = 1365.71, P < 0.0001], with the influence of past choices decaying with increasing latency [latency × participant choice interaction: Χ 2 (12) = 378.60, P < 0.0001]. No interaction of these factors with group was found [Χ 2 (8) = 7.50, P = 0.484]. On any given trial, therefore, all participant groups tended to repeat choices they had made in the past-an effect that extended in NC participants until trial t−3 (each time point, P BCa < 0:001), and in vmPFC and BDC patients to trial t−2 (P BCa < 0:005). Interestingly, this tendency to repeat past choices independent of any rewards received (i.e., perseverate) was in fact most prominent in the NC group: The choice made in trial t−3 exerted a significantly greater influence on the current choice in NC participants than in vmPFC patients (P BCa < 0:05), with a marginally significant effect observed when the NC group was compared with the BDC group (P BCa ¼ 0:058). Notably, however, the absence of a significant Group × Choice × Latency effect in the overall ANOVA [Χ 2 (8) = 7.50, P = 0.484] suggests caution in interpreting this perseverative tendency in NC participants. Results: No Evidence for a Failure of Contingent Learning in Patients with vmPFC Damage Our findings, therefore, suggest that our cohort of vmPFC patients retain a similar capacity as the BDC group in adapting their behavior based on the rewarding consequences of their own internally generated choices, with only equivocal evidence that this differed from that of the NC group. We next considered whether damage to the vmPFC might specifically impair "contingent learning," whereby discrete associations are formed between individual choices on a given ACTIVE trial and their specific outcomes Rushworth et al. 2011). Indeed, recent work in monkeys suggests that the lateral OFC may play an important role in maintaining a neural representation of current choice and the outcome, facilitating their association with one another ). As such, damage to the lateral OFC in non-human primates results in a phenomenon known as a "spread of effect," whereby reinforcement information on a given trial is assigned backwards, or even forwards, in time, therefore becoming associated with past or future actions . Blurring of the specific choice-outcome history in this way can lead to surprising consequences: for example, the receipt of a large loss when a given stimulus (e.g., fractal A) was chosen on trial t−1 might aberrantly increase the probability of reselecting this stimulus on the current trial (t), if the same stimulus had been associated with a favorable outcome on previous trials (e.g., the t−2 trial). In our next analysis, therefore, we asked whether our cohort of vmPFC patients might have a specific deficit in contingent learning. To carry out this regression analysis, we included all possible combinations of associations between choices and outcomes pertaining to the last 3 trials in the regression model, following the procedure of Walton et al. (2010). 
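A minimal sketch of how such a fully crossed choice-by-outcome design could be constructed is shown below. The 3 × 3 grid of regressors mirrors the description above, with diagonal terms capturing veridical choice-outcome pairings and off-diagonal terms capturing "spread of effect"-style false pairings; the data are hypothetical and the code is illustrative only, not the procedure of Walton et al.

```python
import numpy as np
import pandas as pd

def contingency_regressors(choices, rewards, n_lags=3):
    """All pairings of past choices (t-i) with past outcomes (t-j), i, j <= 3.

    choices : +1 / -1 coding of the stimulus chosen on each past trial
    rewards : points received on each past trial
    Diagonal regressors (i == j) represent contingent (veridical) learning;
    off-diagonal regressors represent false choice-outcome pairings.
    """
    rows = []
    for t in range(n_lags, len(choices)):
        row = {}
        for i in range(1, n_lags + 1):        # lag of the choice
            for j in range(1, n_lags + 1):    # lag of the outcome
                row[f"choice_t-{i}_x_reward_t-{j}"] = choices[t - i] * rewards[t - j]
        rows.append(row)
    return pd.DataFrame(rows)

# Example with hypothetical data: 9 regressors (3 choice lags x 3 outcome lags).
rng = np.random.default_rng(2)
X = contingency_regressors(rng.choice([1, -1], 70), rng.uniform(-600, 250, 70))
print(X.shape)   # (67, 9)
```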
We found no evidence for a failure of contingent learning in patients with vmPFC damage. Specifically, the influence of specific choice-outcome associations (e.g., t−1 reward with choice on t−1 trial) was greater than false associations (e.g., reward received on trial t−1 with choice made on trial t−2) in vmPFC patients, NC participants, and BDC patients (Fig. 5). Logistic Regression Analyses: Observational Learning We next examined the capacity of vmPFC patients, and the 2 comparison groups, to learn in an observational fashion, in phases 3 and 4 where participants had the opportunity to benefit from viewing the choices and outcomes of a computer player on alternate WATCH trials (see Materials and Methods). As stated previously, participants were instructed that the computer's choices were entirely random, but also that they would be able to learn through observation of the resultant outcomes. Participants' current choices were significantly influenced by the observed outcomes relating to the computer player [Χ 2 (15) = 1082.78, P < 0.001]. There was a positive influence of past computer rewards relating to the immediately preceding trial (i.e., t−1) in all groups (reward weights: P < 0.0001 all groups), which declined over time into the past [latency × computer reward interaction: Χ 2 (12) = 236.73, P < 0.001]. However, there was a specific deficit in the vmPFC group in using previously observed reward outcomes to guide their choice behavior, evidenced by a significant group × computer reward interaction [Χ 2 (10) = 42.34, P < 0.0001; Fig. 4 and see Supplementary Table 6]. Pairwise comparisons revealed that both NC and BDC groups showed a significantly greater influence of t−1 rewards than vmPFC patients (NC P BCa ¼ 0:033 and BDC P BCa ¼ 0:016). The influence of observed rewards did not differ between NC and BDC groups (P BCa > 0:1). Of note, there was also a significant group × latency × computer reward interaction [Χ 2 (8) = 41.05, P < 0.0001]: This was driven by a tendency for the NC and BDC groups to select the less rewarding of the 2 stimuli observed on WATCH trials occurring at t−2 and t−3 trials in the past (see Fig. 4A and B, top right panels), in contrast to the vMPFC group where t−2 and t−3 trial outcomes exerted no influence on current choice (i.e., regression coefficient = 0). While it is difficult to provide a definitive interpretation of this finding-which is qualitatively mirrored in the analysis of observed choices (see Fig. 4A and B bottom right panels)-it is consistent with the overall diminished influence of previously observed outcomes on current choice in the vmPFC group, and we speculate that this may reflect strategic biases in the NC and BDC groups. We also found that participants' current choices on ACTIVE trials were significantly influenced by the past choices made by the computer player during WATCH trials [Χ 2 (15) = 97.30, P < 0.001]. As such, all groups had a tendency to follow the previous action of the computer player, even though this was known to be random in nature, and therefore uninformative. This influence also decreased over time into the past [latency × computer choice interaction: Χ 2 (12) = 41.51, P < 0.001; see Supplementary Table 6]. As such, the influence of observed choices also extended only to the immediately preceding trial (i.e., t−1 choice weights, NC group, P BCa < 0:001; vmPFC group P BCa ¼ 0:005; BDC group P BCa ¼ 0:010; Fig. 4b). 
Interestingly, the tendency of the NC group to repeat the computer player's last action was significantly greater than that of the vmPFC group (P BCa ¼ 0:010), whose performance was not distinguishable from that of the BDC group (P BCa > 0:2). However, this choice-related finding should be considered unreliable, given the absence of a significant group × computer choice effect [Χ 2 (10) = 13.47, P = 0.1985]. Our findings provide evidence that the vmPFC makes a specific contribution to the learning of values through the observation of the reward outcomes that resulted from externally generated actions (i.e., of a computer player) during WATCH trials. Given previous work implicating the vmPFC in reversal learning (e.g., Fellows and Farah 2003), however, we next considered the possibility that this deficit in observational learning might reflect an impairment in updating values after a reversal has occurred. To consider this possibility, we restricted the logistic regression analysis to trials before the first reversal had occurred, in each experimental phase. Given the reduced quantity of data entered into the model (approximately 1200 trials per participant group), we adjusted the number of parameters to increase the robustness of our model's fit, and so the influence of participant and computer choices and rewards was restricted to t−3 trials in the past. Importantly, a qualitatively similar pattern of findings was observed to those reported for the full dataset. Specifically, we found a selective deficit in the ability of vmPFC patients to use observed rewards to guide their decision-making behavior [threeway interaction of group by latency by computer reward, Χ 2 (4) = 18.63, P < 0.001]. This result, therefore, confirms that patients with vmPFC damage show a reduced influence of observational rewards on current behavior during initial learning relative to the comparison groups, even prior to the occurrence of any reversals. As a supplemental analysis, we also included the cumulative point totals that were presented every fifth trial to participants (see Materials and Methods) as an additional variable in the regression analysis relating to phase 4. A qualitatively similar pattern of findings was observed as in the primary analysis reported above, with no significant effect of cumulative totals on choice behavior (P > 0.1). We also considered the possibility that patients with vmPFC damage may be impaired at observational learning because it is simply more difficult than experiential learning in our paradigm. To address this issue, we examined the response time (RT) data obtained during ACTIVE trials: We found that the RT of vmPFC patients was numerically [but not statistically; each T (12) < 0.5, each P > 0.6] faster in the experimental phases that involved observational learning (i.e., phases 3 and 4), when compared with phase 2 which involved purely experiential learning (see Supplementary Fig. 4). Importantly, there was no significant group × phase interaction in terms of RT (F 4,60 = 0.083, P = 0.987), arguing against the notion that there was an increase in task difficulty for the vmPFC group in experimental phases involving observational learning (also see Discussion for further consideration of this point). 
We also considered the possibility that the impairment in observational learning exhibited by vmPFC patients might be accounted for through a difference in general motivation-specifically, that their apparent deficit might be driven selectively by their performance in phase 4 (i.e., pure observational phase: see Materials and Methods) due to a total absence of directly experienced rewards which led them to be insufficiently motivated. Arguing strongly against this possibility, we found that patients with vmPFC damage showed a deficit in observational learning even when the logistic regression analysis was restricted solely to trials from phase 3 (P_BCa < 0.001), where directly experienced rewards occurred on alternate trials. Moreover, the same pattern was also observed in an analysis using only data from phase 4 (P_BCa < 0.001).

Results: Performance of a Patient with Damage to Bilateral Dorsolateral PFC on the Probabilistic Task

In summary, our results point to a specific deficit in observational learning in patients with damage to the vmPFC. In contrast, our results do not provide strong support for the role of the vmPFC in experiential learning, at least in the context of the relatively simple probabilistic scenarios examined (see Discussion). One question that arises is whether this dissociation between experiential and observational learning is specific to the vmPFC or also applies to other PFC regions.

Figure 5. Results of logistic regression analysis designed to assess contingent learning. Contingent learning refers to the formation of discrete associations between an individual choice (e.g., on the last, n−1, trial) and its specific outcome (i.e., on the n−1 trial), as opposed to false associations between different trials (e.g., n−2) and other outcomes (e.g., n−3 trial). The matrices (left: NC, center: BDC, and right: vmPFC group) show the magnitude of regression coefficients (white = large and black = small) relating to the association of each of the 3 past reward outcomes with each of the past 3 choices. Squares on the diagonal, therefore, relate to contingent learning: that is, veridical associations between a specific trial (e.g., n−1) and the specific outcome received (at n−1 trial). Off-diagonal components index the strength of false associations: for example, the bottom left square relates to the association of the reward received on the last trial (n−1) with the choice made on the n−3 trial. As such, a high magnitude coefficient indexed by this component would indicate that if fractal A had been chosen 3 trials in the past, then receiving a large reward on the last trial would increase the log odds of selecting fractal A on the next trial, irrespective of which fractal was chosen on the n−1 trial and what reward was received on the n−3 trial. See the main text for details.

While a single neuropsychological study by nature is not able to localize a single cognitive function (e.g., observational learning) to a unique brain region (e.g., vmPFC), we sought additional neuroanatomical constraint by assessing the effects of damage to the dorsal PFC [i.e., sparing vmPFC: see Supplementary Fig. 3 and Table 2 (legend)] on performance in our task. Since lesions affecting the dorsal PFC bilaterally are rare, we examined the single case extant to our knowledge in the Iowa Registry (see Materials and Methods).
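Before turning to that single case, the contingent (diagonal) and false (off-diagonal) association regressors summarized in Figure 5 can be made concrete with a short sketch. The data below are synthetic, and the construction shown is one straightforward way to build such conjunction regressors, not necessarily the exact coding used in the original analysis.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_trials = 300
choice = rng.choice([-1, 1], size=n_trials)   # hypothetical choices (-1 = fractal A, +1 = fractal B)
reward = rng.integers(1, 6, size=n_trials) * rng.choice([-1, 1], size=n_trials)  # hypothetical signed outcomes

rows = []
for t in range(3, n_trials):
    row = {"next_choice": int(choice[t] == 1)}
    # One regressor per (choice lag i, reward lag j) pair: the stimulus chosen on trial
    # t-i, signed by the outcome received on trial t-j. Terms with i == j (the diagonal
    # of Fig. 5) capture contingent, veridical credit assignment; terms with i != j
    # capture "false" associations between a choice and an outcome from another trial.
    for i in (1, 2, 3):
        for j in (1, 2, 3):
            row[f"choice_t{i}_x_reward_t{j}"] = choice[t - i] * reward[t - j]
    rows.append(row)
design = pd.DataFrame(rows)

y = design["next_choice"]
X = sm.add_constant(design.drop(columns="next_choice"))
coefs = sm.Logit(y, X).fit(disp=0).params
matrix = coefs.drop("const").to_numpy().reshape(3, 3)  # rows: choice lag, columns: reward lag
print(np.round(matrix, 3))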
Due to the dPFC patient's unique status in our study, the direct statistical comparisons used elsewhere to test the performance of other groups against one another were not possible. Instead, we provide specific details about his performance in each condition of each task, accompanied where possible by statistical comparisons using the modified Crawford's t-test [i.e., designed for use in comparing single cases with groups: see Materials and Methods (Crawford and Garthwaite 2002)]. The difference in performance between the dPFC case and the other groups (i.e., vmPFC, NC, BDC) was striking (see Supplementary Tables 5 and 7). In terms of coarse aggregate measures (see Supplementary Table 5), the dPFC case failed to reverse in any session, scored below chance performance in all sessions but one (session 3), and showed no sensitivity to large magnitude gains or losses (i.e., indexed by % switching tendency: see Supplementary Table 5). The performance of the dPFC case was significantly worse than that of the other patient groups in nearly all sessions (i.e., using the modified Crawford's t-test; see Methods and Supplementary Table 7 for details). Logistic regression analysis was also applied to the dPFC patient's choice and reward history as described above, although it should be noted that, given this is a single case, relatively few trials were available; the fitted parameter values should therefore be interpreted with caution. This analysis suggested that the patient was largely insensitive to the history of rewards or choices, whether his own or that of the computer player (see Supplementary Fig. 5; red dots indicate dPFC patient parameter estimates). This contrasts sharply with the performance of the other groups, who typically showed significant evidence of sensitivity to reward and choice history, even where, for example, damage to the vmPFC was related to reductions in the size of the parameter estimates relating to observational learning. These findings are consistent with the notion that value signals are distributed across prefrontal regions (Kennerley et al. 2009;Rushworth et al. 2011). While we are cautious about drawing conclusions from a single case study, these results suggest that the dPFC may play a role in feedback-driven learning regardless of whether the feedback is experienced or observed, at least in the context of the experimental setting examined. In relation to this, it is worth bearing in mind that in our paradigm, the magnitude of rewards was informative (cf. other settings where reward magnitude is fixed at +1 or −1, and only the probability varies across stimuli: also see Discussion)-an aspect of the task that may interact with the impairment in executive function (i.e., Wisconsin Card Sorting Test-see Supplementary Table 2) shown by the dPFC case. Regardless, the results from the analysis of this dPFC case support the specific contribution of the vmPFC to observational learning, over and above its contribution to experiential learning.

Results: Overall Performance on Deterministic Task

Given that these results were obtained in the context of a probabilistic reward environment, for completeness we also assessed the experiential learning performance of our cohort of vmPFC patients in a deterministic setting. This was motivated by a previous report which found that a separate cohort of patients with damage to the vmPFC showed marked impairments in such a setting (Fellows and Farah 2003).
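The single-case comparisons above rest on a modified t-test for comparing one patient with a small normative sample. In its basic Crawford-Howell form, the statistic can be computed as follows; this is a sketch with made-up numbers, and the procedure of Crawford and Garthwaite (2002) additionally provides interval estimates of abnormality.

from math import sqrt
from statistics import mean, stdev
from scipy import stats

def crawford_howell(case_score, controls):
    # Single-case comparison treating the case as a sample of one drawn from the
    # control population (Crawford & Howell form): the control SD is inflated by
    # sqrt((n + 1) / n) and the statistic is referred to t with n - 1 df.
    n = len(controls)
    t = (case_score - mean(controls)) / (stdev(controls) * sqrt((n + 1) / n))
    p_two_tailed = 2 * stats.t.sf(abs(t), df=n - 1)
    return t, p_two_tailed

# Illustrative values only (not the study's data).
t, p = crawford_howell(300, [1800, 2100, 1950, 2300, 2050, 1900])
print(round(t, 2), round(p, 4))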
Supplementary Table 8 and Figure 6 illustrate the performance of vmPFC and both comparison groups under these conditions, according to a number of basic measures such as total points won and number of reversals achieved. While no significant group difference was observed between vmPFC patients and comparisons on either of these measures (each F(2,30) < 2.4, each P > 0.1), the exceptional nature of the performance of 2 vmPFC patients was noteworthy. The total number of points won by these patients (200 and −100 by patients 8 and 3, respectively) fell more than 6 SD below the mean performance of BDCs (mean = 2082, SD = 282), more than 5 SD below the mean performance of NCs (mean = 2000, SD = 352), and more than 4 SD below the mean of the remaining vmPFC patients (mean = 2106, SD = 397, see Supplementary Fig. 6). Additionally, these 2 patients completed only 1 reversal each, falling more than 7 SD below the average performance of the other vmPFC patients (mean = 4.7 reversals, SD = 0.5), more than 5 SD below the performance of BDCs on this measure (mean = 4.3, SD = 0.8), and more than 4 SD below the performance of NCs (mean = 4.6, SD = 0.8). In contrast to the findings of Fellows and Farah (2003), a substantial majority (i.e., 9/11) of the patients in our cohort performed equivalently to NC and BDC participants on the deterministic version of the reversal task. Despite the low variance in the independence in activities of daily living (IADL) scores of our vmPFC group, we also observed a significant correlation between this measure and performance of patients on the reversal task as indexed by total points won and number of reversals achieved (both r > 0.6, P < 0.05). Note, however, that this correlation was driven primarily by the marginally low (i.e., 20/21) IADL scores of the 2 patients mentioned above who performed very poorly on the reversal task. As reported previously for the probabilistic scenario, no significant correlations were found (P > 0.1) between performance (i.e., either in terms of points won or number of reversals) of individual vmPFC patients and the extent of damage to the lateral orbital gyrus, as measured using volumetric analyses (see Materials and Methods). Notably, the dPFC case performed poorly in this setting also, failing to complete any reversals. Even in this task, the dPFC patient scored only 300 points (see Supplementary Table 8), a performance matched only by the 2 very poorly performing vmPFC patients (see above).

Discussion

Our neuropsychological study investigated the contribution of the vmPFC to experiential and observational learning-a distinction operationalized here as the ability to update stimulus values based on the rewarding outcomes that follow either internally generated (i.e., experiential) or externally generated (i.e., observational) actions. Fine-grained analysis of experiential choice behavior revealed that the current behavior of vmPFC patients was significantly influenced by past rewards and choices in a manner that was equivalent to a BDC group, and only marginally different from the NC group. Despite this relatively intact capacity for experiential learning, patients with vmPFC damage exhibited a significant deficit in observational learning, manifest in the reduced influence of previously observed rewards on current choices.
Our findings provide causal evidence that the vmPFC is necessary for normal learning of stimulus values from observed rewards, and point toward the conclusion that there are dissociable neural circuits for experiential and observational learning. It is interesting to consider our findings in relation to a previous neuroimaging study by Walton et al. (2004), which used an experimental design that has conceptual similarities with our own (Walton et al. 2004). In their experiment, participants were required to adjust their behavior by monitoring outcomes under 2 different conditions: One in which participants had freely chosen the action themselves and the other in which the action to be performed was instructed by an external cue specified by the experimenter. They reported a dissociation between the medial and lateral OFC and the dorsal anterior cingulate cortex (ACC): Neural activity in the OFC was greatest when participants were required to monitor the consequences of externally cued actions, whereas that in the ACC was highest in relation to the outcome of voluntarily chosen actions. Though the exact task performed by participants in the study by Walton et al. differs from our paradigm-and while neural activity in the OFC was localized primarily to the lateral (rather than medial) portion in their study-these previous results are broadly consistent with our finding that the integrity of the vmPFC is particularly important when stimulus values must be updated based on the observation of the consequences of externally (vs. internally) generated actions. Though our study focused explicitly on a core component of observational learning (i.e., learning from the outcomes of externally generated actions) within a nonsocial domain, it is worth relating our findings to recent fMRI studies that have sought to reveal the neural signatures of observational learning within the social domain (Burke et al. 2010;Cooper et al. 2012;Liljeholm et al. 2012), and one study in particular that employed a closely related experimental design (Burke et al. 2010). As in phase 3 of our experiment, participants in the study by Burke et al. showed evidence of learning both from the outcomes of their own choices, but also those of another agent-in this case, a human confederate who was unknown to them. In addition, participants could also profit from the observed choices of the confederate agent (e.g., through imitation)-a source of information that was intentionally absent in our paradigm (i.e., the choices of the computer player were random), to allow us to focus squarely on learning from observed outcomes. Activity in the vmPFC in the study by Burke et al., at the time when the outcome of the confederate's choice was revealed, was found to reflect a signed prediction error signal [i.e., the difference between expected and observed outcome; also see Suzuki et al. (2012)]. Interestingly, the pattern of neural activity in the vmPFC was qualitatively different from that in the dorsolateral PFC, where activity at time of confederate choice tracked the difference between actual and expected action (i.e., an action prediction error). By demonstrating that damage to the vmPFC impairs the ability to learn from observed rewarding outcomes, our findings are consistent with the hypothesis that the outcome prediction error signals observed by Burke et al. causally drive learning. 
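For reference, the two teaching signals contrasted here can be written compactly. The sketch below is a generic, Rescorla-Wagner-style illustration under simple assumptions; it is not the computational model fitted by Burke et al. (2010), and the action-surprise term is deliberately simplified.

# Generic illustration of the two error signals discussed above.
def observe_confederate(V, chosen_stimulus, outcome, expected_choice_prob, alpha=0.1):
    # Outcome prediction error: observed outcome of the other agent's choice minus the
    # observer's current value estimate for the chosen stimulus (the signal reported
    # in the vmPFC at the time the confederate's outcome was revealed).
    outcome_pe = outcome - V[chosen_stimulus]
    V[chosen_stimulus] += alpha * outcome_pe
    # Action prediction error: how surprising the observed action was, here simplified
    # to 1 minus the probability the observer assigned to that action (the signal
    # reported in dorsolateral PFC at the time of the confederate's choice).
    action_pe = 1.0 - expected_choice_prob
    return V, outcome_pe, action_pe

V = {"A": 0.0, "B": 0.0}
V, outcome_pe, action_pe = observe_confederate(V, "A", outcome=50, expected_choice_prob=0.7)
print(V, outcome_pe, action_pe)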
While we characterized choice behavior using a logit model (e.g., to avoid the assumption of an exponential decay in the influence of past outcomes over time-see Materials and Methods) rather than a reinforcement learning model, our demonstration that vmPFC patients are less influenced by observational outcomes is entirely consistent with an impairment in an error-correcting learning process. Our findings, together with those of Burke et al. (2010), raise the possibility that the vmPFC forms part of the neural circuitry that supports observational learning, perhaps irrespective of whether the agent under observation is inherently social or not, a hypothesis that merits direct testing in future studies involving patients with vmPFC damage. Importantly, we believe that any vmPFC contribution to observational learning is distinct from Pavlovian learning (and the related notion of Pavlovian-instrumental transfer) where outcomes are directly experienced by the participant, and which has previously been associated with the ventral striatum [e.g., see Dickinson and Balleine (2002), O'Doherty et al. (2004), Talmi et al. (2008)]-although we cannot rule out the possibility that participants imagine receiving rewards given to the computer player, enabling a form of Pavlovian learning.

The focus of our study was the role of the vmPFC in observational learning, but we also established that all participants performed similarly when learning experientially. Notably, our finding of unimpaired experiential learning might appear surprising given the critical role ascribed to the vmPFC in value-guided decision-making (e.g., Rushworth et al. 2011;Levy and Glimcher 2012). A critical factor may be that, in our task, participants were required to choose between 2 stimuli whose magnitude varied probabilistically (cf. Daw et al. 2006) and whose reward schedules were markedly separated. As such, our probabilistic task differs in an important respect from other settings where reward magnitude is fixed at +1 or −1, and only the probability varies across stimuli (e.g., Noonan et al. 2010). Unlike these other probabilistic scenarios (e.g., reviewed in Rushworth et al. 2011), in our task it is not critical to integrate reward outcome information over long time windows. Instead, a simple strategy involving switching after a large loss would be relatively effective in our task-indeed, we observed similar rates of response switching after significant losses in all groups. This account may also explain previous findings that lesions of the vmPFC can impair experiential learning in the Iowa Gambling Task (IGT; Bechara et al. 1994;Gläscher et al. 2012). Specifically, our paradigm and the IGT differ considerably in terms of their complexity. In the IGT participants must choose between 4 options with differing reward schedules, rendering a "lose-shift" strategy a relatively ineffective solution (though one notably employed by both vmPFC patients and comparison participants at similar rates; Bechara et al. 1994). Taken together, therefore, our findings and those from previous reports of IGT impairment after vmPFC damage suggest that the complexity of the environment (i.e., number of options and reward schedules) is a critical determinant of whether damage to the vmPFC produces an impairment in decision-making based on experiential learning. Intriguingly, this hypothesis receives support from recent work in humans which suggests that the vmPFC is only engaged when choices are sufficiently difficult (Hunt et al. 2013), and in non-human primates (Noonan et al. 2010) which demonstrates that damage to the mOFC/vmPFC only produces a significant performance deficit in a three-armed bandit task when reward schedules are closely aligned.

Along these lines, it is also worth considering whether our finding that observational learning was specifically affected by damage to the vmPFC could have arisen because it is more difficult than experiential learning (i.e., in the experiential tasks tested)-or as a result of attentional deficits in the vmPFC group. While we cannot definitely rule out such an account, we would argue that the overall pattern of findings (i.e., number of points won, RT, and neuropsychological scores) does not provide support for either of these accounts. First, NC and BDC participants acquired a comparable number of points over the course of experimental phases whether the task required pure experiential learning (phase 2) or pure observational learning (phase 4). Secondly, the RT of vmPFC patients was not slower in the experimental phases that involved observational learning (i.e., phases 3 and 4) compared with the phase that involved purely experiential learning (see Results and Supplementary Fig. 1), suggesting that all scenarios were similarly challenging. Furthermore, there was no significant group × phase interaction in terms of RT (see Results), arguing against the notion that there was an increase in task difficulty for the vmPFC group in experimental phases involving observational learning. Thirdly, the choices of the vmPFC group were influenced not only by the outcomes received but also by the choices made by the computer player-refuting the possibility that vmPFC patients paid little attention during WATCH trials. Finally, neuropsychological test scores did not indicate that either group had general deficits in concentration or attention, with no significant differences observed between BDC and vmPFC on relevant measures [i.e., Complex Figure copy, Wisconsin Card Sorting Test, Trails B-A, and working memory index (WMI): all P > 0.1 except WMI, where there was a trend for superior performance in the vmPFC group (P = 0.07)].

Our tasks incorporated a reversal component, and vmPFC patients performed the experiential phases of the task similarly to comparison participants in terms of overall measures (e.g., total points won and number of reversals). This contrasts with previous neuropsychological studies demonstrating a significant reversal deficit in closely related deterministic (Rolls et al. 1994;Fellows and Farah 2003) and probabilistic tasks (Hornak et al. 2004) that relied on experiential learning. For instance, a study by Fellows and Farah (2003) showed that patients classified as having primary damage to the vmPFC made more reversal errors than comparison participants in the context of a deterministic task where one response yielded a $50 win and the other a $50 loss. Notably, vmPFC patients in that study were significantly more disabled, as indexed by lower scores on a standard measure assessing IADL (mean = 17.8, SD = 3.4), than those in our cohort (mean = 20.6, SD = 0.5; NB: the maximum IADL score is 21). As such, one factor that may account for the performance difference between our cohort of vmPFC patients and those in previous reports is that the severity of damage and anatomical locus varies considerably between individual patients and groups. While these earlier reversal learning studies have typically not performed a quantitative analysis of lesion extent
[Rolls et al. 1994;Fellows and Farah 2003;Hornak et al. 2004; although see Tsuchida et al. (2010)], patients characterized as having vmPFC lesions sometimes have damage extending into the lateral OFC, a region that was relatively spared in our cohort. Although it is beyond the scope of the current study, future research should directly contrast the roles of the human vmPFC/mOFC and lateral OFC to determine whether these regions make unique contributions to experiential and observational learning.

One potential account of the finding of preserved experiential learning in vmPFC patients is that our study was not sufficiently powered to observe a significant impairment in experiential learning. We suggest, however, that this scenario is unlikely for 2 reasons. First, our dataset included 3000 trials per participant group (cf. 500-800 in other studies, e.g., Fellows and Farah 2003) as a result of testing a relatively large sample of vmPFC patients in a range of decision-making scenarios. Based on the magnitude of the deficit observed in previous studies [e.g., around 4000 points in the probabilistic scenario of Hornak et al. (2004)], our study would have been ideally powered to detect any impairment. Secondly, we provide "positive" evidence that patients with vmPFC damage are significantly influenced by their past rewards and choices when learning experientially.

In summary, our findings demonstrate a specific deficit in observational learning-operationalized here as the ability to learn stimulus values from the rewarding outcomes of externally generated actions-among patients with vmPFC damage. As argued above, our data provide evidence that this finding is not easily explained by task difficulty or attentional differences between the experiential tasks used. To be clear, however, we fully concur with previous lines of work suggesting that the vmPFC is likely critical to experiential learning in settings whose complexity or probabilistic nature (i.e., where magnitude is fixed, and probability of gain/loss varies across stimuli) differs from those examined in this study [e.g., Noonan et al. 2010;Hunt et al. 2013; also see Suzuki et al. (2012)]. Interestingly, such a role for the vmPFC in updating reward representations of stimuli based on passive observation of the environment accords well with previous work suggesting that the vmPFC automatically computes a value (Lebreton et al. 2009) through integrating different sources of information (e.g., Smith et al. 2010;Levy and Glimcher 2012), drawing on its rich connectivity and functional interactions with sensory areas of the neocortex (Carmichael and Price 1996;Noonan et al. 2011, 2012). This contrasts with regions such as the ACC, which are often believed to sustain reward representations that are more tightly coupled to specific actions, mediated through more direct projections to motor areas (e.g., premotor area; Kennerley et al. 2006;Rushworth et al. 2007;Hayden and Platt 2010;Kennerley and Walton 2011;Hunt et al. 2013). While the vmPFC, therefore, appears to be necessary for observational learning, one caveat to this conclusion is that lesions that encompass this brain structure may also inadvertently damage fibers of passage which themselves might produce behavioral impairments [e.g., see Rudebeck and Murray (2011b)]. Furthermore, prior evidence suggests that other brain regions also contribute to observational learning, notably the hippocampus (Poldrack et al. 2001;Shohamy et al. 2004, 2008), a structure which is also thought to interact functionally with the vmPFC during goal-directed decision-making (Kumaran et al. 2009, 2012;Roy et al. 2012). Indeed, the "episodic" nature (i.e., extending only to the t−1 trial) of the influence of observed rewards on current behavior accords with such a notion, and highlights the potential involvement of the vmPFC in episodic control (Lengyel and Dayan 2007). It is tempting to speculate, therefore, that the vmPFC may support observational learning through functional interactions with the hippocampus, an intriguing hypothesis that deserves investigation in future studies-perhaps drawing on recent advances in using multivariate techniques in lesion studies to identify the joint contribution of multiple brain areas to behavior (Smith et al. 2010).
Use of oral cholera vaccine in complex emergencies: what next? Summary report of an expert meeting and recommendations of WHO.

Two meetings of the World Health Organization (WHO)—in 1999 and 2002—had examined the potential use of oral cholera vaccines (OCVs) as an additional public-health tool for the control of cholera. In the light of the work accomplished since 2002, WHO convened a third meeting to reexamine with a group of experts the role that OCVs might play in preventing potential outbreaks of cholera in crisis situations and to discuss the use of OCVs in endemic settings. The aim of the meeting was to agree a framework for the recommendations of WHO on these subjects and to consider the pertinence of further demonstration projects in endemic settings. The meeting addressed key issues, including currently-available vaccines, surveillance, and cholera-control measures in complex emergencies, and past experiences of using OCVs. More than 40 participants took part in the discussions, representing cholera-prone countries, humanitarian organizations, scientific institutions, United Nations agencies, and WHO. The experts agreed that when considering the use of OCVs in emergencies, a multidisciplinary approach is essential and that the prevention and control of cholera should be envisaged within the larger context of public-health priorities in times of crisis. As for the use of OCVs in endemic settings, all participants acknowledged that further data need to be collected before a clear definition of endemicity and potential vaccination strategies can be established. Results of further studies on the vaccines per se are also awaited. Recommendations relating to the use of OCVs (a) in complex emergencies and (b) in endemic settings were elaborated, and a decision-making tool for assessing the pertinence of use of OCVs in emergency settings was drafted. The document was finalized by an ad-hoc working group convened in Geneva on 1 March 2006 and is now available for field-testing. After testing, which should be carried out with the involvement of WHO and feedback from field partners, the decision-making tool will be adapted and disseminated.

INTRODUCTION

Although well-known since the nineteenth century, cholera remains the most feared and stigmatized diarrhoeal disease. As a waterborne disease, it mainly affects the poorest and the most vulnerable populations who live without access to safe water and proper sanitation. The burden it imposes on healthcare systems and on its victims is enormous. Furthermore, countries, fearful of possible commercial sanctions that would prevent the export of food products, are often reluctant to report cases and seek support. Heavy death tolls are regularly reported and, in disaster situations, the possibility of cholera frequently triggers panic-even when the risk of outbreak appears extremely limited. Implementation of the prevention and control measures usually recommended, including improvement of water and sanitation, remains a challenge in both urban slums and crisis situations. To date, there has been no concrete global improvement despite efforts made at the country level; the incidence of disease has even increased in recent years. Predicting potential outbreaks remains difficult and is often complicated by the lack of data on trends and patterns of the disease over time. It is clear that additional public-health tools, such as vaccines, can play a critical role in the control of cholera.
The pre-emptive use of oral cholera vaccines (OCVs) in emergency situations was recommended by the World Health Organization (WHO) in 1999, and this general recommendation remains valid (1,2). However, vaccines must be used in appropriate circumstances, where they can provide a definite benefit and will not jeopardize the response to other health priorities. Identifying the population at risk of epidemic cholera is, therefore, a key element in considering the use of OCVs, as is the cost-effectiveness of such an intervention. Several mass-vaccination campaigns have already been carried out in crisis situations, and a group of experts, convened in a WHO meeting, used the evidence provided by these interventions as the basis for developing assessment tools and recommendations for the use of OCVs in mass-vaccination campaigns and to identify the possible constraints and limitations. This meeting, held in Cairo, Egypt, on 14-16 December 2005, was intended to establish a framework for recommendations on the use of OCVs in complex emergencies and natural disasters and in endemic settings (3). More than 40 participants were present, representing cholera-prone countries that had already used or expressed interest in using OCVs, humanitarian organizations, scientific institutions, United Nations agencies, and WHO. A vaccine manufacturer was granted observer status, but did not attend sessions aimed at developing recommendations on the use of OCVs.

Available vaccines and new developments

Because of its low protective efficacy and the frequent occurrence of severe adverse reactions, the early parenteral cholera vaccine was never recommended for use (4). To date, two oral vaccines have been licensed internationally. One consists of killed whole-cell Vibrio cholerae O1 with purified recombinant B-subunit of cholera toxin (WC/rBS). It is administered in two doses, with an interval of 10-14 days between the doses. A large volume (75-150 mL) of liquid is needed for administration, meaning that the vaccine cannot be given to children aged less than two years. Protection starts 10 days after the ingestion of the second dose and has been shown to reach 85-90% after six months in all age-groups, declining to 62% at one year among adults (5). This vaccine, currently produced in Sweden, has been granted WHO prequalification. The second licensed vaccine consists of an attenuated, live, and genetically-modified V. cholerae O1 strain (CVD 103-HgR) (6). It is administered in a single dose to individuals aged two years and over; protection starts eight days after ingestion (7). Although a 95% seroconversion and protection was observed during a challenge study, a large field trial undertaken in Indonesia, in circumstances that complicated interpretation, failed to demonstrate convincing protection (8). The manufacturer stopped production in 2004, and the vaccine, although licensed, is currently unavailable. Technology transfer to Viet Nam has generated a variant of the killed whole-cell vaccine containing no recombinant B-subunit, i.e. WC vaccine. This vaccine, currently produced and used only in Viet Nam, is given in two doses at an interval of 10-14 days, without the need for a buffer solution. The protective efficacy of a first-generation monovalent (anti-O1) Vietnamese cholera vaccine was shown to be 66% (68% in children) 8-10 months after vaccination (9). Killed O139 whole cells were added to the Vietnamese vaccine following the emergence of the new form of epidemic cholera caused by this serogroup.
A study found the bivalent vaccine to be safe and immunogenic in adults and children aged one year and older (10,11). Technology transfer to India, which could lead to WHO prequalification, is underway. A number of other live oral vaccines are under development in the USA (12) and in Cuba (13). In addition, research is currently being conducted on parenteral conjugate vaccines and on ways to improve vaccine formulation to ease the numerous logistics constraints, particularly acute in emergencies, linked to the mode of administration of the vaccine presently available. Indeed, the limitations of the WC/rBS vaccine in emergency settings, where logistic and practical constraints abound, are numerous, but its use in a routine context is much more easily managed (14). Since efficacy requirements may be lower in an emergency context, vaccines specifically designed for emergency public-health applications might be considered (15,16).

Potential effect of herd protection

In researching the public-health impact of cholera immunization, the concepts of herd protection and herd amplification, which arose from recent environmental studies, are important issues that merit examination. When dealing with a killed vaccine, the term herd protection is preferred to herd immunity as unvaccinated persons do not develop antibodies. If these concepts prove to be sound, herd protection may have a major role in increasing the impact of vaccination and reducing the cost and burden of cholera-factors that are essential elements in any consideration of the future use of cholera vaccines. A new analysis of the 1985 cholera vaccine trial in Bangladesh established that there was an additional indirect protective effect among both vaccinated and non-vaccinated individuals when a high proportion of the population was vaccinated and a possible reduction of the incidence of cholera in all age-groups (17). The public-health impact of killed OCV may, thus, have been underestimated in the past, as only the conventional protective efficacy of the vaccine was measured and not the potential effect of herd protection. Further studies are, therefore, needed to precisely evaluate the effect of herd protection, especially as a number of circumstances could have introduced a bias (density of population and dwellings, environmental factors, health-education programmes, and microbiological aspects of the disease) (18). The design of future vaccine-evaluation and efficacy studies will need to consider the role of herd protection. The hypothetical existence of significant herd protection will have implications for the choice of target populations for cholera vaccination. It is likely that access to the vaccine might be enhanced for groups who do not usually have access to or seek treatment. It remains to be determined how these elements will influence the development of strategies that focus on reaching a particular threshold level of vaccination to achieve an acceptable level of protection in a community.

Surveillance in complex emergencies

Several definitions describe the blurred concept of complex emergencies. In this meeting, a pragmatic public-health perspective was adopted, aiming principally at highlighting the health priorities and challenges.
All participants agreed to define complex emergencies in the following terms:
• a large part of the population is affected, leading to potential massive movements of populations;
• coping capacities of the local and national authorities are overwhelmed by the magnitude of man-made or natural disasters; and
• numerous national and international actors may participate in the relief efforts.
The first consequence of a complex emergency is the upheaval of usual life and the emergence of 'new' vulnerable population groups. Lack of access to basic services and healthcare, lack of food, and displacement have an impact on the health status of the population. Restoring an acceptable health status presents a number of varied challenges: prioritization of health issues, coordination of the numerous actors involved, and timeliness when urgent action is required. The decision-making and preparatory phase is often extremely short, and access to vulnerable populations is frequently limited by specific geographical difficulties, further natural disasters, a volatile security environment, or mass population movements. People living in overcrowded camps with poor environmental status are exposed to a higher risk of transmission of cholera if V. cholerae is endemic in the area or has been introduced. Moreover, uncontrolled rumours and panic are often rife: in every catastrophe, false beliefs regarding plagues and epidemics transmitted by dead bodies tend to be widespread. In such contexts, cholera remains, rightly or wrongly, the disease most feared by the population and by the authorities. In all cases, the occurrence, spread, and extent of an outbreak of cholera are extremely difficult to predict. They depend on a multiplicity of aspects, including local endemicity, living conditions, forced or voluntary movements of population, environmental and cultural factors, and the effectiveness of any control measures put in place. In some endemic situations, where outbreaks tend to occur at regular intervals, seasonal recrudescence can be anticipated, provided that enough epidemiological data are known. The establishment of an epidemiological surveillance system that will provide baseline data and trends is, thus, a key element in directing the potential use of OCVs. However, while an early warning system is a challenge in many countries, surveillance and gathering of data in a complex emergency are even more problematic. One of the main difficulties is to establish a system that is both reactive and sustainable; this is particularly tricky when resources are scarce and security cannot be ensured. For cholera, the introduction of a rapid, easy-to-use, and affordable diagnostic test, currently under development, will be critical. Nevertheless, the task is complicated by the stigma attached to the disease and the reluctance of many to report cases for fear of travel and trade sanctions, a fact that impacts negatively on surveillance. An early warning system using a standard case definition is essential to trigger the alert promptly, an element particularly critical in high-risk situations, such as in refugee camps and urban slums, and among displaced populations. The definition of outbreak should take into account essential background information, including the occurrence of previous cases or outbreaks and endemicity.
The following definitions used by Médecins sans Frontières (MSF) are a good example:
• in endemic areas: doubling of cases over three consecutive weeks or increase in cases compared to the previous year;
• in non-endemic areas: an increasing number of confirmed cases;
• an increasing number of adults dying of watery diarrhoea.

Water and sanitation in complex emergencies

Other elements closely relating to the containment of outbreaks of cholera are the water supplies and sanitation status of populations at risk. The example of Darfur, Sudan, offers valuable indications of the cost, impact, and challenges of water and sanitation projects in complex emergencies and of the role of such projects in preventing outbreaks of cholera. At the beginning of the humanitarian intervention in May 2004, only 20% of the internally-displaced persons (IDPs) living in areas reachable by the United Nations agencies had access to adequate water, and only about 5% to proper sanitation; by September 2005, 16 months later, these figures had risen to 52% and 76% respectively. These numbers are not calculated according to the Sphere standards and are provided for information only; they cannot be considered an accurate indicator of water and sanitation supplies to a population and do not describe the actual conditions faced by the people. Clearly, despite the enormous efforts provided by all humanitarian bodies active in the field for more than a year, a significant number of people still lacked access to minimum water supply and sanitation facilities. This situation serves also to illustrate the obstacles faced by humanitarian workers-lack of human and financial resources, logistic constraints, limited access to beneficiaries, and poor planning and coordination-that prevent sustained implementation and maintenance. The exact cost of improved water and sanitation is difficult to establish; a comparison of the costs of different interventions is, therefore, needed. The cost-benefit of improved water and sanitation, from both health and socioeconomic perspectives, is seen mainly in the reduction of waterborne diseases-cholera and others-which lowers health-related costs and reduces morbidity.

Usually recommended cholera-control measures

Once an outbreak is detected, the usual intervention strategy aims at reducing mortality-ideally below 1%-by ensuring access to treatment and controlling the spread of disease. To achieve this, all partners involved should be properly coordinated and those in charge of water and sanitation must be included in the response strategy. The main tools used for the treatment of cholera are:
• proper and timely rehydration in cholera-treatment centres and oral rehydration corners;
• specific training for proper case management, including avoidance of nosocomial infections;
• sufficient prepositioned medical supplies for case management, e.g. diarrhoeal disease kits;
• improved access to water, effective sanitation, proper waste management, and vector control;
• enhanced hygiene and food safety practices; and
• improved communication and public information.
Among these measures, the provision of safe water and sanitation in emergencies is a formidable challenge but remains the critical factor in reducing the impact of outbreaks of cholera. Recommended control methods, including standardized case management, have proved to be effective in reducing the case-fatality rate.
A comprehensive multidisciplinary approach should, therefore, be adopted for dealing with a potential outbreak of cholera, and the use of OCVs should be integrated when it can make a difference. Cost-effectiveness of such interventions needs clarification. Although the exact cost of different interventions is difficult to establish, they still need to be investigated and compared.

Use of OCVs in crisis situations: recent examples

Two recent campaigns-in Darfur and in Aceh-were examined and compared. Both the campaigns took place during complex emergencies, but the nature of the emergencies and of the target populations, the simultaneous implementation of programmes to address other public-health priorities, the location of the campaigns, and the partners involved were widely different. In Darfur, 87% of 53,537 people targeted-internally-displaced people accommodated in two camps where water supplies and sanitation were poor-received two doses of the WC/rBS. The campaign was completed in about six weeks, and the direct costs of the campaign reached US$ 336,527, or US$ 7 per fully-immunized person. In Aceh, 69.3% of 78,870 people initially targeted-people displaced by the tsunami and scattered around large areas-had received two doses of the same vaccine. The campaign was completed in more than six months, and the direct costs of the campaign reached US$ 958,649, or US$ 18 per fully-immunized person (19). The evidence from Darfur indicates that a small-scale mass-vaccination campaign with OCVs is feasible provided that there is a strong political commitment, easy access to a target population accommodated in closed IDP camps, widespread community mobilization, and involvement of all partners. The Aceh campaign, however, points clearly to the limitations of using a two-dose vaccine in the context of a natural disaster. Enormous logistics and operational constraints greatly delayed the implementation of the vaccination-it took more than six months to complete it-and increased the costs disproportionately. Furthermore, an insufficient cold chain and the short shelf-life of vaccines led to an overall vaccine wastage of 11.7%. The feasibility of large-scale interventions is questionable: future campaigns will require solutions to the many difficulties encountered, and a suitable methodology is needed to guide the decision-making process of governments wishing to consider the use of OCVs. The main lessons learned from Aceh and Darfur can be summarized as follows:
• An OCV campaign is feasible in natural and man-made disasters, provided that political commitment and good social mobilization can be achieved, good logistics can be ensured, and sufficient funds are available. The target population should be well-defined, localized in a small area, and stable.
• A mass OCV-vaccination campaign serves to highlight important deficiencies in water and sanitation coverage and to build the commitment of stakeholders and implementing agencies.
• The use of OCV is only one part of a set of comprehensive public-health preventive interventions.

Logistics and planning challenges for use of OCVs in crisis situations

These two recent examples show that the use of the two-dose OCV in emergency settings can be seriously challenged by various shortfalls and the onerous logistics involved.
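As a quick check on the per-person cost figures quoted above for Darfur and Aceh, the direct cost per fully-immunized person follows directly from the campaign totals and the two-dose coverage:

# Cost per fully-immunized person (FIP), using the campaign figures quoted above.
campaigns = {
    "Darfur": {"direct_cost_usd": 336_527, "targeted": 53_537, "two_dose_coverage": 0.87},
    "Aceh":   {"direct_cost_usd": 958_649, "targeted": 78_870, "two_dose_coverage": 0.693},
}
for name, c in campaigns.items():
    fully_immunized = c["targeted"] * c["two_dose_coverage"]
    print(name, round(c["direct_cost_usd"] / fully_immunized, 1), "USD per FIP")
# Darfur comes out at roughly USD 7 and Aceh at roughly USD 18 per fully-immunized
# person, consistent with the figures cited in the text.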
Several characteristics of the vaccine are less than ideal for emergency settings, including its shelf-life, required storage conditions (cold-chain, at between +2 °C and +8 °C), and volume (25 times greater than measles vaccine); moreover, its mode of administration demands the availability of significant volumes of clean water and requires the target population to be reached twice within a short time (10-14 days). Although logistic constraints can often be overcome, they usually lead to delays in implementation and significant increases in cost. In each situation, the cost-benefit must be thoroughly assessed and the whole campaign planned in detail. Experience in planning and implementing mass OCV-vaccination campaigns in various settings since 1999 has helped identify the following 12 principal challenges:
• During natural disasters or other complex emergencies, basic infrastructures are damaged and disrupted, the population is vulnerable and subject to continual threats, and healthcare personnel are scarce.
• Access to target populations is often limited by geographical factors, destruction of roads, climatic conditions, potential aftershocks, and a volatile security situation.
• To deal with perceived but unconfirmed risks that may not be based on solid evidence, a risk assessment should be carried out: available epidemiological data, living conditions faced by the population, climatic conditions, environmental management, and cultural behaviours are the key elements to be examined.
• The target population may be difficult to identify with precision when there are continual population movements.
• Thorough planning and preparation are crucial: coordination with partners is important, as are the assignment of responsibilities and good logistic arrangements. Functioning communications, training of field staff, adequate health education, and social mobilization programmes are other elements to be taken into account.
• During the implementation phase, monitoring of the operations, ensuring timely delivery of supplies, and maintaining communication with community leaders are crucial.
• Logistics must be thoroughly planned and closely monitored throughout the campaign, with the principal focus on transport and storage of supplies, transport of field teams, cold-chain facilities, management of wastes, and reliable telecommunications.
• An efficient surveillance system is vital for the early detection of any cholera cases that occur after the vaccination and for the implementation of specific control measures.
• Sustained improvement in environmental management, access to safe water and proper sanitation, and adequate hygiene and food safety are essential components of a comprehensive control strategy for cholera.
• Health education constitutes a long-term effort and needs to address the vaccine itself, the vaccination campaign, and food hygiene, and water and environmental safety. Involvement of the community is critical to ensure effective social mobilization for the campaign and to avoid culturally-inappropriate activities.
• Problems with vaccine availability, affordability, and packaging (if not adequately designed) can prevent smooth implementation. Before the campaign begins, ad-hoc solutions must be found.
• The reality of a vaccination campaign inevitably differs from what was originally planned and expected. A detailed timeline helps anticipate potential hindrances and plan alternative solutions.
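One timing constraint implicit in several of these challenges is worth spelling out. Using the schedule described earlier for the internationally available two-dose vaccine (second dose 10-14 days after the first, protection starting about 10 days after the second dose), the minimum interval from the first dose to the onset of protection is roughly three weeks, before any planning, procurement, or delivery time is counted:

# Earliest possible protection with the two-dose schedule described earlier in this
# report (campaign set-up and delivery time excluded).
dose_interval_days = (10, 14)    # second dose given 10-14 days after the first
onset_after_second_dose = 10     # protection starts about 10 days after dose 2
earliest = dose_interval_days[0] + onset_after_second_dose
latest = dose_interval_days[1] + onset_after_second_dose
print(f"protection begins roughly {earliest}-{latest} days after the first dose")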
Clearly, a mass-vaccination campaign cannot be improvised at the last moment-it needs careful advance preparation. If time constraints do not allow for proper planning, for instance if an outbreak is about to start or has already started, the use of OCVs may not be appropriate. Experience shows that, once an outbreak of cholera has begun, a reactive vaccination campaign with a two-dose vaccine is almost impossible. In addition, the use of OCVs needs to be positioned within the larger context of other public-health priorities. It should be additional to health education and improvements in water and sanitation, not the sole intervention, and should never be seen as a substitute for preparedness for outbreaks of cholera-pre-positioning of supplies for case management, health education, and improvements in water supply and sanitation. In settings where a population is inaccessible for extended periods (for example, in detention facilities) or when the water and sanitation status cannot be rapidly improved, the use of OCVs may be a definite benefit. The use of two-dose OCV is easier in closed settings (refugee and IDP camps, detention facilities, etc.), where population movements are limited and can be better controlled than in open settings, such as the spontaneous IDP settlements found in Aceh. The feasibility of scaling up interventions remains to be proved, and the cost-benefit should be further analyzed. For the time being, the two-dose vaccine and the logistics associated with its use remain very expensive.

Use of OCVs in endemic settings

A demonstration project, carried out in Beira, Mozambique, showed that a mass-vaccination campaign using OCVs was feasible (20,21), acceptable, and effective (22) for at least six months. Around 57% of the target population-inhabitants of Esturro neighbourhood-received two doses of the WC/rBS, and a case-control study conducted in 2004, involving 43 patients with cholera, demonstrated a protective efficacy of 78%. Nevertheless, the study left a number of important questions unanswered, including duration of the protection, existence of herd protection, protection within the HIV-positive population, and cost-effectiveness. Globally, studies carried out in different countries suggest that the definitions of cholera cases and of endemicity need to be further refined. Differences in methodologies and in attitudes of national authorities towards cholera can result in different approaches to the disease-including the prevention measures to be adopted and the potential use of OCVs. It is, therefore, important to find a definition of cholera endemicity that can be widely adopted. A threshold of one case per 1,000 people has been proposed, but has yet to be universally accepted. Epidemiological data still need to be collected: lack of these data is an obstacle for advocating the use of OCVs. On the other hand, increasing treatment costs and rising antimicrobial resistance make development of a vaccination strategy for endemic settings highly desirable, provided that the vaccine can be formulated for administration to children aged less than two years, can protect against both V. cholerae O1 and O139 serotypes, and is cost-effective. Indeed, the cost per death averted and per hospitalization averted declines with the increasing incidence of cholera: even a very inexpensive vaccine becomes cost-effective only when the incidence exceeds 1/1,000.
By comparison, the same model estimates that case management, if provided through routine hospital or treatment centre care, costs about US$ 350 per death averted. Even moderately-inexpensive vaccines, therefore, quickly become too expensive. For example, a vaccine requiring two doses at US$ 3 per dose will cost more than US$ 3,000 per death averted, even where the incidence is high. By contrast, a vaccine priced at US$ 0.40 will cost less than US$ 400 per death averted, which compares favourably with case management, especially as hospital and treatment costs will decrease. In addition, models have been developed to determine the key variables, the most important of which appear to be the incidence of cholera and the cost of the vaccine, including delivery cost (23). The efficacy of vaccines seems of less importance. Vaccines should, thus, be inexpensive and easy to administer and should be provided to inhabitants of high-risk areas. Furthermore, a vaccine marketed over-the-counter may be economical for health ministries, since it would shift the vaccine costs to the consumer rather than to the government. Finally, the adoption of vaccination strategies will not replace treatment facilities. Countries interested in using OCVs in endemic settings will need to design vaccination strategies that will achieve the best possible coverage. Different strategies can be envisaged, but all should be based on mapping of risk and should take account of high-risk groups (particular age-groups and vulnerable populations living in specific geographical areas) and feasibility. The sustainability of vaccination strategies is the paramount consideration: mass-vaccination campaigns that are not sustainable may be useless and possibly counter-productive. Experts present at the meeting recognized that a number of issues still need to be studied, including the efficacy of OCVs in populations with a high proportion of HIV-positive individuals, a definition of endemicity, and the cost-effectiveness of the vaccine. Work should also be done on vaccination strategies. Although the use of OCVs in endemic settings can be supported in principle, detailed recommendations remain to be worked out. The group recommended the synergistic use of control measures other than the vaccine, namely improvement of water supply and sanitation and health education. Demonstration projects should yield additional useful data.

Pertinence of a stockpile of cholera vaccines

The example of the International Coordinating Group on Meningitis (ICGM) for supply of antimeningococcal vaccine was taken to assess the pertinence of creating a stockpile of cholera vaccines. In view of the numerous difficulties and high financial costs involved, the advantages and disadvantages of creating a stockpile should be examined in detail, and an adequate stock rotation should be ensured. The only OCV currently available on the international market is manufactured by the Swedish company-SBL Vaccines-under the commercial name Dukoral®. To date, the vaccine is not widely used, although licensed in 45 countries. Production costs remain high and are not covered by the price of the vaccine-up to €5 (US$ 6.10) a dose. Maintained in a cold-chain, Dukoral® has a shelf-life of three years; according to the manufacturer, it can be kept at 25 °C for three months and at 37 °C for one month, but these storage conditions were not recognized in the prequalification process.
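Returning briefly to the cost-effectiveness figures discussed above, the way cost per death averted scales with incidence can be illustrated with a deliberately rough calculation. Every parameter value below (protection period, efficacy, case fatality, delivery cost) is an assumption chosen for illustration only and is not taken from the published model cited in the text:

def cost_per_death_averted(incidence_per_1000, price_per_dose, doses=2,
                           delivery_cost=1.0, efficacy=0.65,
                           protection_years=2.0, case_fatality=0.20):
    # Illustrative structure only: expected cholera deaths averted per vaccinee
    # = annual incidence x years of protection x case fatality x vaccine efficacy.
    deaths_averted = (incidence_per_1000 / 1000) * protection_years * case_fatality * efficacy
    cost_per_person = doses * price_per_dose + delivery_cost
    return cost_per_person / deaths_averted

for incidence in (0.5, 1, 5, 10):   # cases per 1,000 people per year
    print(incidence, round(cost_per_death_averted(incidence, price_per_dose=3.0)), "USD per death averted")

Under these made-up parameters the absolute numbers differ from those of the published model, but the qualitative point is the same: cost per death averted falls in proportion to incidence and rises with the delivered price per person.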
The WHO recommendations of 1999 proposed the establishment of a stockpile of two million doses of cholera vaccine for use in endemic and emergency settings. However, because of the lack of precise guidelines for the use of OCV, the high costs involved, and the limitations that became apparent during mass-vaccination projects carried out in the 2000-2005 period, the stockpile was never implemented. Moreover, the only current OCV manufacturer has clearly stated that, without firm orders, its limited production capacities will not be expanded. Thus, until recommendations and guidelines are issued and promoted, the issue of a stockpile is not relevant. The subject will be raised with partners and donors after field validation of the recommendations-and, in particular, of the decision-making tool-when countries concerned express their willingness to implement large-scale mass vaccinations or to introduce OCVs into their routine expanded programme of immunization (EPI).

CONCLUSION

The group of experts convened in Cairo agreed on two sets of recommendations, dealing respectively with: (a) public-health use of OCVs in endemic settings, and (b) public-health use of OCVs in complex emergencies. Although several aspects of the disease itself, and of the vaccines to fight it, still need to be clarified-an agreed threshold of cholera endemicity being at the top of the list-the recommendations summarized below will guide those who consider a large-scale use of OCVs (the full text can be found in Annexure 1). Future progress in research, particularly in vaccine formulation, will call for further consultation and possible amendment of these recommendations; this will be examined in due course.

Summarized recommendations for the use of OCVs in complex emergency settings

The relevance of oral cholera vaccination should be examined in the light of other public-health priorities. It should be linked to improved surveillance and enhanced water and sanitation programmes. A high-level commitment by all stakeholders and national authorities is critical, and a multidisciplinary approach is essential. Vaccination with the current internationally-available pre-qualified vaccine is not recommended once an outbreak of cholera has started and should not be undertaken if basic favourable conditions are not present. A decision-making tool, developed by an ad-hoc group, is intended to assist those in charge of risk assessments and subsequent decisions. This tool still needs to be validated in field conditions.

Summarized recommendations for the use of OCVs in endemic settings

Despite the limitations of the currently-available vaccine, the use of OCVs in certain endemic situations should be recommended, and guidelines should be developed. Such use must be complementary to existing strategies for cholera control, such as safe water and sanitation, case management, and health education of the community. Without jeopardizing the issue of recommendations, a number of topics still need to be addressed: vaccine formulation, protection against O139 serotype, and efficacy in children and in HIV-positive individuals. Surveillance data should be available to determine the best timing for vaccination, define the population at risk, and monitor the impact of interventions. Sustainable and cost-effective vaccination strategies should be decided according to each specific situation.
Decision-making tool for use of OCVs in complex emergencies In accordance with these recommendations, an ad-hoc meeting was organized to finalize the decision-making tool for the use of OCVs in complex emergencies that was drafted during the conference (the full text can be found in Annexure 2). A three-step approach was adopted, the relevance of OCV-use being examined at each step: a. a risk assessment for an outbreak of cholera should be undertaken first; b. an assessment of whether key public-health priorities are or can be implemented in a timely manner, combined with an analysis of the capacity to contain a possible outbreak; and c. an assessment of the feasibility of an immunization campaign using OCVs. This document, meant to be a convenient tool, includes many practical aspects of importance when thinking of performing a vaccination campaign. Essential elements are reminded, such as the need to reinforce surveillance systems and to conduct vaccination campaigns concomitantly with other interventions, especially improvement of environmental conditions. The Global Task Force on Cholera Control, at the WHO headquarters, will provide expertise and guidance whenever necessary. Decision-makers should not hesitate to contact the Task Force with any doubts or questions, or if envisaging the use of OCVs. This decision-making tool is now ready for fieldtesting. Results and feedback will be included in an adapted version. WHO is willing to coordinate with institutions or governments interested in performing such studies. If OCVs are to be used as a public-health tool, now is the moment for decision-makers-ministries of health and governments, NGOs and institutions, donors and stakeholders-to express their views of using this vaccine in the best interest of the innumerable populations affected by cholera. It cannot be conducted without advocacy, strong involvement, and resources. Sustainability, surveillance data, and a true knowledge on the burden of disease are also indispensable to effective interventions. (1) Recommendations for the use of OCV in complex emergency settings Relevance and multidisciplinary approach: The relevance of oral cholera vaccination should be examined in the light of other public health priorities. Among the top 10 priorities in emergencies is the control of communicable diseases, which should always include a risk assessment for cholera. If a cholera vaccination campaign is deemed necessary after assessment of epidemic risk and public health priorities, water and sanitation programmes should be implemented before or concurrently with the vaccination campaign. Surveillance systems should be reinforced. A high level commitment by all stakeholders and national authorities is critical. Exclusion criteria for OCV use: Vaccination with the current internationally available prequalified vaccine is not recommended once a cholera outbreak has started. An OCV campaign that would interfere with other critical public health interventions should not be carried out. Other exclusion criteria include: very high mortality from a range of causes; basic needs (food, shelter) not covered; an ongoing outbreak of another disease; an untenable security situation. Development of a decision-making tool for OCV use: A decision-making tool will help in determining the relevance of cholera vaccination in a given setting. 
A three-step process is proposed: a risk assessment for a cholera outbreak, which should be undertaken first; an assessment of whether key public health priorities are or can be implemented in a timely manner together with an analysis of the capacity to respond to a possible outbreak; an assessment of the feasibility of an immunization campaign. The decision-making tool needs to be tested and validated in complex emergency settings. (2) Recommendations for the use of OCV in endemic settings Despite the limitations of the currently available vaccine identified in the public health context, the use of OCV in certain endemic situations 1 should be recommended and guidelines should be developed. Such use must be complementary to existing strategies for cholera control, such as safe water and sanitation, case management, and health education of the community. Without jeopardizing the issue of recommendations, a number of topics still need to be addressed. Recommendations can be modified accordingly, at a later stage: Vaccines per se: New vaccines with improved "fieldability" 2 and cost-effectiveness are needed. Their efficacy should be established in the field. Where the O139 serotype is responsible for a significant proportion of cholera cases, O139 should be included in the OCV. Documentation of OCV efficacy is needed in children and in HIV-positive individuals. Surveillance, endemicity and seasonality: Criteria for a definition of endemicity should be established. Studies should be conducted to determine the best timing for vaccination (seasonality, baseline data, etc.) in order to enhance the protection of the population. Past experience has shown that a two-dose vaccine cannot be used once an outbreak has started. Vaccination campaigns should be accompanied by surveillance to define the population at risk and to monitor the impact of vaccination programmes (e.g. among particular age groups and spatial clusters). 1 A definition of endemicity of: one or more cholera cases/1000 population at risk per year has been proposed, but no consensus was reached. 2 To be understood as the practicability of the vaccine when used in difficult field conditions. Recommendations (3) Vaccination strategies: Vaccination strategies should aim for the highest possible vaccination coverage to realize the benefits of herd protection; strategies should be examined and defined according to each specific situation. Characteristics of the currently available OCV (age group, formulation, etc.) make it difficult to include the vaccine in routine EPI. The cost-effectiveness, sustainability, and economic viability of vaccination strategies should be assessed at country level. Additional recommendations for WHO: • Develop a decision-making tool and guidelines for use of OCV (1) in complex emergencies and (2) in endemic settings. An ad-hoc working group will be established to develop the draft risk assessment and decision-making tool further; the first draft was to be avail-able for circulation among the meeting participants by the end of February 2006. After revision, the document would be submitted to partners, including meeting participants, and countries. • Test and validate the draft decision-making tool in field conditions, at community level. • Identify possible sites for implementation projects, as a follow-up to the demonstration projects already carried out between 2002 and 2005. • Ensure regular meetings for review and guidance. 
• Develop an information and advocacy strategy for regional offices, country offices, countries and potential donors.

Introduction

The aim of the decision-making tool described in this annex is to help determine the relevance of OCV use for mass immunization campaigns in the context of complex emergencies. For this purpose, complex emergencies are defined as situations in which: • a large part of the population is affected, leading to potential massive population movements; • the coping capacities of local and national authorities are overwhelmed by the magnitude of the man-made or natural disaster; • numerous national and international actors may participate in the relief effort. While this tool can be used in other crisis situations, WHO plans another document, to be published shortly, on the use of OCV in endemic settings. The decision-making process follows a three-step approach (see Figure A1.1), with the relevance of OCV use being examined at each step: • a risk assessment for a cholera outbreak, which should be undertaken first; • an assessment of whether key public health priorities are or can be implemented in a timely manner, combined with an analysis of the capacity to contain a possible outbreak; • an assessment of the feasibility of an immunization campaign using OCV.

Relevance of OCV use: During the course of a complex emergency, the following public health aspects should be taken into account when examining the relevance of the potential use of OCV: • The top 10 public health priorities in emergencies 1 include the control of communicable diseases: a risk assessment for cholera should always be part of the initial assessment. • Regardless of whether or not OCV is used, access to sufficient safe water and adequate sanitation should be ensured. • Priority should be given to other health priorities when: mortality is very high (above the emergency threshold of 1/10 000 per day); basic needs (food, shelter, basic health services, and security) are not met; or an outbreak of another disease is ongoing. • With the currently available internationally prequalified vaccine, 2 vaccination is not recommended in an area where an outbreak has already started. The relevance of oral cholera vaccination should therefore be examined in the light of all public health priorities identified.

[Figure A1.1: decision-making tree for OCV use in complex emergencies, Steps 1-3.]

Remarks: Each step of the decision-making process should be assessed carefully and each element linked with the next, as shown in the decision-making tree (Figure A1.1). The Global Task Force on Cholera Control, at WHO headquarters, will provide expertise and guidance whenever necessary. Decision-makers should not hesitate to contact the Task Force with any doubts or questions. A high level of political commitment by all stakeholders and national authorities is critical. If a decision is made to conduct a cholera vaccination campaign, water and sanitation programmes should be implemented before (or at least concurrently with) vaccination. A surveillance system, including laboratory capacity to diagnose cholera and basic health education for communities, should also be implemented before a mass cholera vaccination campaign is started.

2. Whole-cell killed V. cholerae O1 with purified recombinant B-subunit of cholera toxin (WC/rBS), administered in two doses, 10-14 days apart, in 150 ml of water mixed with a buffer.
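The three-step screen and exclusion criteria described in this annex lend themselves to a compact illustration. The sketch below is a schematic rendering for illustration only; the field names, the ordering of checks and the reduction of each criterion to a boolean are simplifications and are not part of the validated WHO decision-making tool.

```python
# Schematic rendering of the three-step screen described above.
# NOT the validated WHO decision-making tool; fields and ordering are
# simplifications for illustration only.

from dataclasses import dataclass

@dataclass
class EmergencyAssessment:
    cholera_risk_high: bool            # step 1: risk assessment for an outbreak
    outbreak_already_started: bool
    mortality_above_1_per_10000_per_day: bool
    basic_needs_unmet: bool            # food, shelter, basic health services, security
    other_outbreak_ongoing: bool
    would_displace_critical_interventions: bool
    campaign_logistics_feasible: bool  # step 3: cold chain, two-dose schedule, access

def ocv_campaign_advised(a: EmergencyAssessment) -> bool:
    # Step 1: without a high risk of cholera, vaccination is not relevant.
    if not a.cholera_risk_high:
        return False
    # Step 2: exclusion criteria and competing public-health priorities.
    if (a.outbreak_already_started
            or a.mortality_above_1_per_10000_per_day
            or a.basic_needs_unmet
            or a.other_outbreak_ongoing
            or a.would_displace_critical_interventions):
        return False
    # Step 3: feasibility of a mass immunization campaign with the current OCV.
    return a.campaign_logistics_feasible

example = EmergencyAssessment(
    cholera_risk_high=True,
    outbreak_already_started=False,
    mortality_above_1_per_10000_per_day=False,
    basic_needs_unmet=False,
    other_outbreak_ongoing=False,
    would_displace_critical_interventions=False,
    campaign_logistics_feasible=True,
)
print(ocv_campaign_advised(example))  # True: a campaign can be considered, alongside
                                      # water/sanitation work and reinforced surveillance
```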
v3-fos-license
2023-01-22T16:13:50.739Z
2023-01-18T00:00:00.000
256077539
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/adem.202201479", "pdf_hash": "a1c03eaa0f0aa96a6a31454cfa3427b5f5a0f428", "pdf_src": "Wiley", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42038", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "2a953664b2c4e435e8a45f973de9988b3d070aca", "year": 2023 }
pes2o/s2orc
Indentation Plastometry of Particulate Metal Matrix Composites, Highlighting Effects of Microstructural Scale Herein, it is concerned with the use of profilometry‐based indentation plastometry (PIP) to obtain mechanical property information for particulate metal matrix composites (MMCs). This type of test, together with conventional uniaxial testing, has been applied to four different MMCs (produced with various particulate contents and processing conditions). It is shown that reliable stress–strain curves can be obtained using PIP, although the possibility of premature (prenecking) fracture should be noted. Close attention is paid to scale effects. As a consequence of variations in local spatial distributions of particulate, the “representative volume” of these materials can be relatively large. This can lead to a certain amount of scatter in PIP profiles and it is advisable to carry out a number of repeat PIP tests in order to obtain macroscopic properties. Nevertheless, it is shown that PIP testing can reliably detect the relatively minor (macroscopic) anisotropy exhibited by forged materials of this type. Introduction Interest in particulate-reinforced metal matrix composites (MMCs) became strong in the 1980s, and they have been in significant commercial use for over 20 years. The particles are normally ceramic, with SiC and Al 2 O 3 being the most common. A range of metallic matrices have been used, but aluminum alloys of various types are among the most popular. The addition of such reinforcement offers potential for substantial enhancements in the stiffness and strength (resistance to plastic deformation) of the matrix, [1][2][3][4] while retaining acceptable levels of toughness. Such materials are also attractive in terms of tribological characteristics, commonly exhibiting major improvements in resistance to various types of wear. [5,6] For such enhancements to be significant, reinforcement contents are needed of at least about 10% (by volume) and levels of 20% or 30% are commonly employed. Commercial usage of composite materials of this type has extended to a range of applications, including several in the aerospace industry. It has always been clear, however, that the processing conditions, and hence the details of the microstructure, are of critical importance if optimum properties are to be obtained. While several types of processing route have been employed to produce various MMCs, powder blending and consolidation has in general been the most successful one for particulate MMCs. In this case, process optimization is usually aimed at producing a uniform spatial distribution of relatively fine particles that are well bonded to the matrix, with little or no porosity. A number of studies [7][8][9] have been aimed at clarifying the effects involved, including those concerning powder size, blending techniques, sintering, and consolidation conditions, etc. There have also been studies [10,11] in which an inverse correlation has been established between the degree of clustering (i.e., the level of inhomogeneity of particle distribution) and the ductility. In contrast, it may be noted that some effects, such as the creation of anisotropy from alignment of "stringers" of reinforcement particles, could be beneficial and should in any event be monitored and controlled. 
Moreover, the microstructure of the matrix is likely to be affected by the presence of particulate, so modifications to thermomechanical treatments may be needed for optimization of age hardening and other characteristics. For example, previous work [12,13] has shown that recrystallization characteristics can be sensitive to the level and type of reinforcement. Another issue of potential significance for particulate MMCs is the presence of internal residual stresses, largely arising from differential thermal contraction during the latter stages of production. These have been studied in some depth. [14][15][16][17][18] They certainly have the potential to be significant, since a typical difference in thermal expansivity between particle and matrix is about 20 microstrain K⁻¹, so that a temperature drop of, say, 500 K will generate a (large) misfit strain of about 10 millistrain. In fact, since the resulting stresses are, on average, purely hydrostatic (compressive in the particles and tensile in the matrix), they tend to have less effect on the behavior than for MMCs that incorporate directionality, such as aligned fiber composites. Residual stresses in those have nonzero (average) deviatoric components and those in the matrix can have pronounced effects on plasticity (and creep) characteristics of the composite. Nevertheless, it should be noted that relatively large residual stresses are in general present in particulate MMCs. Moreover, on a local scale, there are nonzero deviatoric components; for example, tensile hoop stresses are present in the matrix immediately adjacent to a (spherical) particle. These local stresses have the potential to affect the way that plasticity develops under macroscopic loading. In particular, it has repeatedly been suggested, [14][15][16][17] from both modeling and experimental work, that one effect of these local residual stresses is to make the macroscopic yielding more progressive (transient), so that the yield point is less well defined. This certainly tends to be a feature of many experimental tensile stress-strain curves of particulate MMCs. One consequence of the sensitivity of microstructure and properties to the processing conditions, including the level and nature of the reinforcement, is a pressing need for a quick and convenient method of characterising the mechanical properties, including any variations with location in the sample and any anisotropy that may have arisen.
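As a brief aside, the misfit-strain figure quoted in this paragraph follows directly from the product of the expansivity mismatch and the temperature drop:

```latex
\Delta\varepsilon_{\mathrm{misfit}} = \Delta\alpha\,\Delta T
  \approx \left(20\times10^{-6}\ \mathrm{K^{-1}}\right)\left(500\ \mathrm{K}\right)
  = 1.0\times10^{-2}\quad\text{(about 10 millistrain).}
```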
There is therefore interest in applying the recently-developed methodology of profilometry-based indentation plastometry (PIP) to composite materials of this type. This procedure is based on iterative FEM simulation of the indentation process, with the plasticity parameters (in a constitutive law) being repeatedly changed until optimal agreement is reached between experimental and predicted outcomes. This is done by converging in parameter space on the set of values for which the misfit between modeled and measured profiles, characterized via a sum of squares parameter, is a minimum. There are several constitutive laws that could be used to capture the plasticity characteristics, but the one used in the current work is that of V oce in which σ Y is the yield stress, σ s is a "saturation" level, and ε 0 is a characteristic strain for the exponential approach of the stress toward this level. It should be noted that the elastic constants of the material (Young's modulus and Poisson ratio) are required as input parameters for the modeling. However, the outcome does not have a high sensitivity to these values, which need only be specified to an accuracy of about 10-15%. Simply knowing the base metal is usually sufficient for this. While the target outcome used in much early work on obtaining stress-strain curves from indentation data via inverse FEM modeling was the load-displacement plot, it has become clear that using the residual indent profile offers major advantages. These are explained in some detail in a recent review paper. [19] In summary, these are (a) improved sensitivity of the experimental outcome to the stress-strain relationship, (b) a capability to detect the presence and sense of any (in-plane) anisotropy in the sample (via a lack of radial symmetry in the profile), (c) improved experimental convenience and accuracy, by eliminating the need to make any measurements during the test or to be concerned about the compliance of the loading train, and (d) potential for obtaining further information by carrying out indentation to more than one depth and measuring the profiles for them. Detailed information is also available [19] concerning the sources and likely magnitudes of errors that could arise in various ways. The superior reliability of PIP to the ("Instrumented Indentation Technique"-IIT) methodology of converting a load-displacement plot directly to a stress-strain curve via analytic relationships has been clearly demonstrated. [20] Integrated facilities are now available that allow stress-strain curves to be obtained from a single indentation experiment within a timescale of a couple of minutes or so. Advantages of the PIP procedure, compared with uniaxial testing, include minimal specimen preparation requirements and a capability to map properties over a surface on a relatively fine (%a mm) scale. These are also offered by hardness testing, but hardness numbers are not well-defined material properties and they should be regarded as no better than semi-quantitative guides to the plasticity of metals. [21] There are several types of sample for which the fine-scale mapping of material response, including anisotropic effects, is a very attractive prospect. A recent paper [22] covers its use for monitoring variations in stress-strain relationships in the vicinity of welds. Similar attractions apply to the testing of MMCs, for which both local property variations and the presence of anisotropy are of interest. 
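The Voce expression referred to above does not appear in the extracted text; it is presumably the standard saturating form, written here in terms of the three parameters just defined (with σ the true stress and ε the true plastic strain):

```latex
\sigma \;=\; \sigma_{s} \;-\; \left(\sigma_{s}-\sigma_{Y}\right)\exp\!\left(-\,\varepsilon/\varepsilon_{0}\right)
```

so that σ starts at σ_Y at zero plastic strain and approaches the saturation level σ_s with characteristic strain ε_0.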
The present work is therefore aimed at comparing the outcomes of PIP testing a range of particulate MMC materials with those from conventional testing, and also correlating these results with microstructural observations. There have been a few studies [23,24] involving nanoindentation of MMCs, but such testing cannot generate bulk properties. In fact, while this is the case for most materials, partly because nanoindentation commonly takes place within single grains, it is particularly true of MMCs, since there is no possibility of deforming a representative volume during a test (with a typical lateral indent size of a few microns and a depth of no more than a micron). The PIP procedure involves deforming a region with dimensions of the order of hundreds of microns, and this volume is expected to be a sufficiently large to be representative for particulate MMCs, at least for cases in which the particles are relatively small (no larger than a few tens of microns) and homogeneously distributed. Some previous indentation works on particulate MMCs have been undertaken with similarly large (spherical) indenters, [25,26] but these did not involve residual profile measurements or iterative FEM simulation and did not lead to successful extraction of stress-strain relationships from indentation data. There have also been a few purely theoretical studies, [27] but confirmation of the assumptions incorporated in such models is always problematic. One suggestion that has, however, been made [28] is that (large-scale) indentation may cause "crowding" of the particulate during the deformation and that this could affect the response. This possibility is worth investigating, since it has certainly not been confirmed so far. The current work constitutes the first reported application of PIP testing to MMCs, allowing comparison between indentation -inferred and uniaxially obtained stress-strain curves. Materials Particle-reinforced MMC materials were produced at Materion, using powder blending and consolidation procedures. The matrix was in all cases 6063 Al alloy, with an average original powder particle size of about 70-100 μm. Two grades of SiC particulate were used, with the finer one having a size of up to about 0.7 μm and the coarser one up to about 3 μm. Four types of sample were produced, covering a range of combinations of reinforcement content and processing conditions-as listed in Table 1. All four of these materials were produced in the form of "slabs", which were approximately 15 mm thick (and with lateral dimensions of at least about 30 mm). No distinction could be drawn between the two "in-plane" directions, and it was expected that there would be no in-plane anisotropy. However, the through-thickness direction could be distinguished from the two in-plane directions. For the HIPed material, this direction would be expected to be equivalent to the other two, but for the forged materials (in which it was the direction in which the forging force was being applied), some differences might be expected. Microstructural Examination Optical microscopy was used mainly to obtain an indication of the spatial distribution on the reinforcement particles on various length scales. There is particular interest in how they are distributed on the scale of a few hundred microns up to an mm or so, since this is the order of magnitude of the linear dimensions of the region deformed during PIP testing. 
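To make this scale argument concrete, a rough particle count inside the PIP-affected zone can be estimated from the particle size and volume fraction. The hemispherical zone radius, the treatment of particles as uniform spheres, and the specific numbers used below are simplifying assumptions for illustration, not measurements from this study.

```python
# Rough estimate (illustrative assumptions, not data from this study) of how
# many reinforcement particles sit inside the zone deformed by a PIP indent,
# compared with a nanoindent.
import math

def particle_count(zone_radius_um, particle_diameter_um, volume_fraction):
    """Particles in a hemispherical deformed zone, treating particles as spheres."""
    zone_volume = (2.0 / 3.0) * math.pi * zone_radius_um ** 3
    particle_volume = (math.pi / 6.0) * particle_diameter_um ** 3
    return volume_fraction * zone_volume / particle_volume

# PIP-scale zone (~300 um radius) with 3 um particles at 30 vol%:
print(f"PIP-scale zone : ~{particle_count(300, 3.0, 0.30):.1e} particles")
# Nanoindentation-scale zone (~3 um radius), same material:
print(f"Nanoindent zone: ~{particle_count(3, 3.0, 0.30):.1f} particles")
```

Even on these crude assumptions a PIP indent samples of the order of a million coarse particles, whereas a nanoindent samples only one or two, which is why only the former can approach a representative volume.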
In fact, since the particle size is approximately either 0.7 or 3 μm, they cannot readily be resolved individually using optical microscopy. However, a clear impression can nevertheless be obtained concerning their overall spatial distribution. Samples were prepared by grinding and polishing to 3 μm finish and viewed using reflected light under bright-field conditions. Microstructures were also examined at higher magnification, using scanning electron microscopy (SEM), after the same sample preparation procedures. A Zeiss Merlin field-emission gun scanning electron microscope (FEG-SEM) was used, with images being obtained in both back scattered and secondary electron modes, with an accelerating voltage of 15 kV. Highresolution electron backscattered diffraction (EBSD) maps were also acquired using a Bruker eFlash HR detector on the XZ-orientation. The same sample was further fine polished using a Gatan PIPS II precision ion polishing system (Model 695). Two Ar-ion guns were positioned at 8°to the sample surface at 8 keV for 16 min. The sample was rotated at 6 rpm in vacuum. EBSD maps were acquired using an accelerating voltage of 15 kV with a probe current of 20 nA. The Kikuchi diffraction patterns were stored at 320 Â 240 resolution with a step size of 223 nm. ESPRIT 2.3 and HKL Channel 5 software were used for data processing. Uniaxial Testing Tensile tests were carried out in accordance with BS EN/ISO 6892-1:2009 and ASTM E8M, using an Instron 3369 loading frame with a 50 kN capacity. Cylindrical specimens were used, with a 5 mm diameter, 25 mm gauge length, and surface finish ≤0.4R a . Prior to testing, samples were put through a precycle to remove slack from the loading train. Following this, the tests were completed at a constant strain rate of about 10 À4 s À1 . Strain was measured to failure, via a clip-on dual averaging extensometer with a maximum strain limit of 10%. For samples that reached 10% strain without failure, the test was paused and restarted to reach eventual failure. Samples were tested only in "in-plane" directions. Compression tests were also carried out using an Instron 3369 loading frame, with a 50 kN capacity. Samples were in the form of cylinders (4 mm diameter and 4 mm long). No lubricant was used. Displacement was measured using a linear variable displacement transducer (LVDT), attached to the upper platen and actuated against the lower one. In addition, Techni-Measure 1 mm linear strain gauges were attached to both sides of each sample. They have a range of up to about 2%. The average value from these two was used to apply a compliance correction to the LVDT data. This also removes the uncertainty associated with the "bedding down" effect. Compression testing was carried out in both "in-plane" and "through-thickness" directions. Indentation Plastometry Four steps are involved in obtaining a tensile (or compressive) nominal stress-strain curve from a PIP test. These are (a) pushing a hard indenter into the sample with a known force, (b) measuring the (radially symmetric) profile of the indent, (c) iterative FEM simulation of the test until the best-fit set of (V oce ) plasticity parameter values is obtained, and (d) converting the resultant (true) stress-strain relationship to a nominal stress-strain curve that would be obtained during uniaxial testing. For tensile testing, up to the onset of necking, this conversion can be done using the standard analytical relationships. 
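The "standard analytical relationships" invoked here are presumably the usual constant-volume conversions between nominal and true stress and strain, valid in tension only up to the onset of necking:

```latex
\sigma_{\mathrm{true}} = \sigma_{\mathrm{nom}}\,(1+\varepsilon_{\mathrm{nom}}),\qquad
\varepsilon_{\mathrm{true}} = \ln\!\left(1+\varepsilon_{\mathrm{nom}}\right);
\qquad
\sigma_{\mathrm{nom}} = \sigma_{\mathrm{true}}\,e^{-\varepsilon_{\mathrm{true}}},\qquad
\varepsilon_{\mathrm{nom}} = e^{\varepsilon_{\mathrm{true}}} - 1 .
```

As the text goes on to note, these expressions break down after the onset of necking in tension and take no account of friction in compression, which is why FEM simulation is needed in those regimes.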
For accurate conversion to a compressive nominal stress-strain relationship, or for a tensile one after the onset of necking, FEM simulation of the test is required (with a friction coefficient required for a compression test). Full details are provided in a recent review paper. [19] Indentation was carried out into both top (x-y) and transverse (x-z) surfaces. Penetration ratio (depth over indenter radius) values of around 15-20% were used. Most of the indentation was carried out using a sphere of 1 mm radius, with the indent profiles measured using a stylus profilometer. It became clear that, while some of the indents appeared to be radially symmetric, others exhibited a degree of asymmetry-i.e., variations in pile-up height with scan angle. This is commonly taken to be indicative of anisotropy in the material, with directions in which the pile-up is higher being "softer" than other directions. However, it became clear that what was being observed was more complex than being due to consistent anisotropy. For example, sometimes the two halves of a single scan were asymmetric. This is suggestive of inhomogeneity in the sample, on a relatively coarse scale (of the order of the indenter radius). Microstructural examinations ( §3) confirmed that such variations are indeed present in these materials. This was investigated further by more detailed study of the Fine-20-HIP material, in which no overall anisotropy is expected. A relatively large number of PIP tests were carried out, using four different ball sizes-with radii of 0.5, 1.0, 1.5, and 2.0 mm. The profilometry was in this case carried out using an optical interferometric system (NetGAGE3D from Isra Vision), facilitating the rapid examination of multiple scan directions. This work was aimed at exploring statistical aspects of the outcomeparticularly related to the potential detection of local anisotropy or inhomogeneity (apparent as differences in pile-up height in different directions and locations). Importance of Scale To assist with interpretation of the mechanical testing outcomes, it is helpful with these materials to have a clear picture of their microstructures. This relates particularly to issues of scale. While conventional uniaxial testing involves interrogation of the material on a very coarse scale (of the order of several mm in linear dimensions), the region deformed during PIP testing is smaller-of the order of a few hundred μm to several hundred μm in linear dimensions. For most materials, such a region will be large enough to constitute a "representative" volume-for example, it will commonly contain "many" grains (capturing the effects of grain size and shape, texture, grain boundary structure, etc). However, in these MMCs, there is a possibility that the spatial distribution of the particulate could be such as to create inhomogeneity and/or anisotropy on a length scale of the order of hundreds of μm. If so, there could be point-to-point variations and/or apparent anisotropy (even if the material is macroscopically homogeneous and isotropic). Scanning Electron Microscopy and Electron Backscattered Diffraction Microscopy The SEM is potentially helpful for studying the size and shape of individual particles, as well as their spatial distribution on a local scale. For example, Figure 1 shows the microstructure of the Coarse-30-Forge material at two different magnifications. This gives an impression of the homogeneity of the distribution, which in general is good (at least on a scale of up to a few tens of μm). 
The particulate in this material was sieved down to a few μm, and there are no particles larger than about 3 μm, which indeed was the target size for this processing route. There is also a significant proportion of finer particles. These probably originated partly from fines in the powder sample and partly from a certain amount of fracture of the larger particles during the manufacturing process. A further point to note here is that the material is apparently free of any porosity. There is also interest in the matrix grain structure and the possibility of texture, which is most conveniently studied via EBSD images and associated pole figures. An example can be seen in Figure 2, which shows an x-z section from the Coarse-30-Forge material. There are indications here of partial recrystallization, probably with lateral growth inhibited (via Zener pinning) by very fine oxide particles that have become aligned in the vertical direction (normal to the forging direction). Such structures are quite common in MMC materials of this type, [12,13] with the oxide particles coming from the free surfaces of the original aluminum particles. The pole figure in Figure 2 indicates that some texture has developed during this process, although it is www.advancedsciencenews.com www.aem-journal.com relatively weak. This is consistent with the recrystallization being quite limited. Partial recrystallization of this type could contribute to local inhomogeneity in the material, with the recrystallized grains being relatively soft. It may also be relevant in terms of anisotropy that could be exhibited by this material, although the distribution of particulate is also likely to influence this-see below. The other forged materials also exhibit similar features. Optical Microscopy Optical microscopy is helpful for study of the particle distribution on a relatively coarse scale. In the following micrographs, the structures are shown (at two magnifications) for both in-plane (x-y) and transverse (x-z) sections. Figure 3 relates to the HIPed material. As expected, there is no clear directionality on a macroscopic scale in this material. However, on the scale of an indent-typically of the order of several hundred microns in depth and perhaps a mm in width-there is in places a degree of alignment of the particulate into "stringers". The regions that appear light in these micrographs are depleted of particles, and these are indicative of such alignment in some locations. Such variations could have the effect of giving profiles-particularly pile-up heights-that vary in different directions, or even of introducing a difference between the profiles on the two halves of a single scan. The likelihood of this is difficult to assess from such micrographs, partly because the subsurface microstructure will also affect the behavior. It is also unclear whether pile-up heights are in fact very sensitive to the microstructure in the immediate vicinity of the pile-up or are also affected strongly by the nature of material in deeper and more central locations. Nevertheless, inhomogeneities of this type, on this scale, have the potential to create apparent anisotropies or anomalies. It might be expected that this would be more apparent with smaller ball radii, although this is likely to depend on the sensitivity of pile-up height to the microstructure of local or more remote regions. The microstructures of the forged materials do appear to show some overall directionality-see Figure 4 and 5. 
This takes the form of alignment parallel to the x-y plane, which is apparent in the x-z sections. However, they also show inhomogeneities of a similar type to that apparent in Figure 3. This is clearer in the x-y sections. The material with the coarser particulate (Coarse-30-Forge) also shows a similar type and degree of alignment ( Figure 6). There is also some evidence of clustering (variations in local particle volume fraction), which can be seen more clearly than with the finer particulate. In fact, this occurs on scales ranging from a few tens of μm to a few hundreds of μm. Overall, however, the level of macroscopic homogeneity is good. It may be noted that this type of clustering is not necessarily clear in higher magnification (SEM) micrographs, such as those in Figure 1. It is potentially helpful to bear in mind the nature of these effects when considering various aspects of the PIP outcomes. . Uniaxial Test Outcomes The four experimental tensile (nominal) stress-strain curves are shown in Figure 7a, while the corresponding compressive curves are shown in Figure 7b. Compression testing has been carried out in both in-plane (x or y) and through-thickness (z) directions, while the tensile test samples were all loaded in an in-plane direction only. Certain features can be noted at this point. First, all curves have a noticeably transient shape during initial yielding. This has been reported previously and is probably a consequence of residual stresses in the matrix causing the yielding to take place progressively. This shape makes it particularly important to look at complete stress-strain curves, rather than trying to extract specific values for the yield stress. www.advancedsciencenews.com www.aem-journal.com The differences between the four MMC types look plausible. Raising the particulate content (green v. blue curves) increases the hardness and reduces the ductility, as expected. Also, using finer particulate (green v. purple curves) has a similar effect. This suggests that the finer (0.7 μm) particles are causing some inhibition of dislocation mobility. Switching from forging to HIPing (red v. blue curves) appears to raise the hardness slightly and reduce the ductility. The ductility reduction could be due to lower homogeneity in the HIPed material. There is good consistency between tensile and compressive curves. Of course, the shapes look different when presented as nominal plots, but the onset of yielding is occurring at similar stress levels in all four cases, with similarly transient behavior. To check for consistency at higher plastic strains, both must be converted to true stress-strain curves. Figure 8 shows the outcome of this operation, with the conversions made using the standard analytical expressions. This can only be done up to the onset of necking for the tensile curves. For the compressive curves, this simple conversion takes no account of the effect of friction. Since this raises the experimental (nominal) curves to higher stress levels, typically by about 5-10%, the true stress in the compressive curves should be correspondingly reduced. When this is done, the agreement in Figure 8 between compressive and tensile plots is very good. The other main point to be noted from the compressive curves in Figure 7b is that they indicate that, macroscopically, all of these materials are at least approximately isotropic. This is not unexpected, certainly for the HIPed material and possibly for the forged material as well. 
While the microstructures in Figure 4-6 suggest that some anisotropy might be expected, it is not surprising that this alignment of the particulate into "planes" of high particle content apparently does not lead to any strong effects overall. PIP Indent Profiles Among the first points to be checked during PIP testing is whether samples appear to exhibit inhomogeneity and/or anisotropy. Inhomogeneity can be investigated by simply creating indents in a variety of locations (on a given plane, in a given sample) and monitoring any variations in the corresponding indent profiles. For these samples, it was found that there was little systematic change, although there were certainly some apparently fairly random variations. They can thus be taken to be macroscopically homogeneous, although apparently with some local variations in structure. This is broadly consistent with the microstructural evidence. Anisotropy is conventionally detected during PIP testing in the form of a systematic lack of radial symmetry in the indent profiles-particularly the pile-up heights. Such variations were www.advancedsciencenews.com www.aem-journal.com certainly detected in the current work, although they did not appear to be entirely systematic. Typical profile scans (obtained using stylus profilometry) from the HIP material, for both inplane and transverse surfaces, are shown in Figure 9. These are not fully consistent with a general expectation of isotropy for the HIPed material, and with the compression test results in Figure 7b, although some of them do show sufficient radial symmetry for a stress-strain curve to be inferred. These variations are apparently caused by local differences in particulate content and distribution of a type that is apparent in Figure 3. Some systematic anisotropy was detected in the forged samples. Typical scans can be seen in Figure 10, which shows indent www.advancedsciencenews.com www.aem-journal.com profiles in both types of plane, for all three of the forged materials. Many of those in the x-y plane do exhibit radial symmetry, indicating the expected isotropy, but there were again some cases where a degree of asymmetry was observed. In the x-z plane, the asymmetry appeared to be much more systematic, with the z direction being noticeably softer, suggesting that there is some overall anisotropy. These observations are broadly consistent with the local particulate distributions, as shown in Figure 4-6. The presence of "layers" with higher particulate content, oriented parallel to the x-y plane, will tend to make the in-plane directions a little harder than the through-thickness (z) direction, as observed. For cases in which there is asymmetry in the radial profiles, the standard PIP procedure cannot be used to obtain stressstrain relationships (since it requires a single profile as the target outcome, with radial symmetry assumed in the associated FEM model). Indentation generates strains in all directions, so any stres-strain curve inferred from such a test will tend to be a direction-averaged one. Nevertheless, there were a number of indents that exhibited radial symmetry, at least to a good approximation, and these were used to obtain stress-strain curves. Optical Profilometry and Effects of Ball Radius To explore the nature of these apparently rather random variations in the details of profile shape, several indents were made in the Fine-20-HIP material, using balls with radii of 0.5, 1.0, 1.5, and 2.0 mm. 
The outcome can be seen in Figure 11, in which pile-up height is plotted as a function of scan angle, normalized by indent depth, for 2 indents with each ball radius. The picture here is not a simple one. While there appears to be a slight tendency for the variations to be smaller in amplitude with the larger balls, this trend is certainly not consistent or marked. Furthermore, the fluctuations appear to be completely random, with it being relatively rare for the pile-up height to be the same at both ends of a single scan (separated by 180° in scan angle). These variations are evidently not caused by consistent anisotropy in the samples and in fact it is clear that they are due to inhomogeneities in microstructure. Moreover, similar plots were obtained in the x-z plane (and indeed, for the HIPed material, it is not possible to identify different directions in terms of the processing conditions). The diameters of these indents were about 0.7, 1.4, 2.1, and 2.8 mm. The regions being deformed are increasing in volume as the ball radius is raised, reaching levels that would certainly be expected to smooth out the kind of microstructural inhomogeneities that are apparent in Figures 2 and 3. It follows that the pile-up heights must be quite sensitive to the microstructure in their immediate vicinity. Even for the largest ball used here, that region may be just a few hundreds of microns in "width," or possibly even less. The depth of the region that most strongly affects the pile-up height is probably similar. However, the data for the forged samples are more consistent and do indicate the presence of overall anisotropy. A plot of the same type as Figure 11, covering all the materials, is shown in Figure 12. For indents in the x-y plane (solid lines), the picture for the forged samples is similar to that for the HIPed sample, i.e., there are some fairly random variations, although they are less pronounced than in the HIPed material. In the x-z plane (dotted lines), in contrast, a more consistent picture emerges for the forged materials, with these plots approximating to the expected shape, i.e., there is a well-defined direction in which the pile-up height is appreciably higher (and virtually all scan directions are symmetric about the central axis, so that heights are similar for any two directions that are 180° apart). The material is "softer" in this direction, which is the z (through-thickness) direction. The "hardest" direction is expected to be normal to this. This is consistent with the microstructures seen in Figures 4-6 and the complete profiles in Figure 10. This demonstrates the high sensitivity of PIP for detecting such anisotropy, which is not picked up by the uniaxial compression testing.
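One way to make the distinction drawn here quantitative is to compare, for each scan direction, the pile-up heights measured 180° apart: a genuinely anisotropic response gives similar heights at opposite ends of a scan but a systematic variation with direction, whereas random inhomogeneity shows up as large end-to-end differences. The sketch below is only an illustration of that idea, using a hypothetical data format (normalized pile-up height keyed by scan angle in degrees); it is not the analysis actually used by the authors.

```python
import numpy as np

def symmetry_metrics(heights_by_angle):
    """Split pile-up variation into a part that is equal at opposite ends of a
    scan (anisotropy-like) and a part that differs (inhomogeneity-like)."""
    angles = sorted(heights_by_angle)
    h = np.array([heights_by_angle[a] for a in angles])
    h_opp = np.array([heights_by_angle[(a + 180) % 360] for a in angles])
    symmetric = 0.5 * (h + h_opp)       # shared by the two ends of each scan
    antisymmetric = 0.5 * (h - h_opp)   # end-to-end difference
    return {"anisotropy_amplitude": float(symmetric.max() - symmetric.min()),
            "inhomogeneity_rms": float(np.sqrt(np.mean(antisymmetric ** 2)))}

# Hypothetical scan: a weak 180-degree-periodic (anisotropy-like) signal plus noise.
rng = np.random.default_rng(0)
scan = {a: 0.20 + 0.03 * np.cos(np.radians(2 * a)) + 0.005 * rng.standard_normal()
        for a in range(0, 360, 30)}
print(symmetry_metrics(scan))
```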
It should perhaps be noted that, while the scans in Figure 11 show that quite significant pile-up height variations (of the order of up to a few tens of microns) may be present, they can be smoothed out and the differences between the averaged profile and the "outliers" will in many cases be quite small. The resultant true stress-strain curve can be converted to a nominal one for uniaxial loading (tensile or compressive). This is most easily done using the well-known analytical relationships, although the points made above concerning these conversions should be noted. Both the post-necking regime in tensile testing and the effect of friction in compression can be obtained from the true stress-strain curve via FEM modeling, although this must be done for the specific sample dimensions (and friction conditions) of the test. Since the experimental tension and compression curves are in good agreement, it is logical to just compare the PIP-derived curves with one of these sets. This has been done in Figure 13 www.advancedsciencenews.com www.aem-journal.com for the tensile (nominal) plots. A couple of points should be noted. First, two of these materials-the Fine-20-HIP (red) and the Fine-30-Forge (green)-fractured before necking. This could be regarded as "premature" fracture, which cannot be picked up by PIP testing. Second, the PIP procedure is relatively insensitive to a transient nature in the onset of yielding, which cannot be captured via a set of V oce parameter values. This is evident in the PIP-derived curves. Nevertheless, they do capture well both the yield stresses and the general nature of the work hardening (and give reliable UTS values, particularly if the curves are curtailed at the fracture strain for the materials exhibiting "premature" fracture). Among other points, this agreement suggests that "crowding" (a proposed tendency for the volume fraction of particulate to increase in certain locations as a result of plastic straining-changing the stress-strain relationship)-is not a significant effect. In any event, PIP clearly has the potential for study of local inhomogeneities and anisotropy, and this is likely to be of particular interest for testing of particulate MMCs. Conclusions This study concerns the response of Al-based particulate MMCs to profilometry-based indentation plastometry (PIP) testing. Comparisons are made between outcomes from this type of test and from conventional uniaxial (tensile and compressive) testing. Four different types of MMC, with variations in particle size and volume fraction, have been examined. The following conclusions can be drawn: 1) In general, good agreement is observed between the stress-strain curves obtained via PIP testing and via conventional uniaxial testing. This indicates that the PIP testing involves deformation of a volume that is large enough to be representative of the bulk. This is potentially a concern for materials such as MMCs, since a representative volume clearly needs to incorporate a relatively large number of reinforcement particles; 2) Even with relatively large ball radii, significant (random) variations have been observed in pile-up height, as a function of scan angle. Such variations would normally be indicative of anisotropy. However, with the MMC material produced by HIPing, no such anisotropy is expected and indeed uniaxial testing confirmed that it is isotropic. 
The variations are attributed to local differences in microstructure, particularly in particle volume fraction, on a fairly coarse scale (≈ hundreds of microns). PIP thus constitutes a sensitive methodology for detecting such (relatively coarse) inhomogeneities; 3) With the forged materials, in contrast, the microstructure is such that some anisotropy is expected. The particles have become somewhat concentrated into planes lying normal to the forging direction, which is expected to lead to the through-thickness direction being slightly softer than the in-plane directions. Measured variations in pile-up heights for directions within transverse planes are fully consistent with this. The sensitivity of the PIP procedure to detection of such effects is high, since this anisotropy was not strong enough to be detected by conventional uniaxial testing; 4) These materials exhibit quite strongly transient yielding. This cannot be picked up by PIP testing, so the initial parts of the curves look slightly different for the two types of test. Nevertheless, the values obtained for the yield stress are in good agreement for all of the materials tested; and 5) A further point to note about these materials is that, while they exhibit good toughness, there is a tendency for the strain to failure (ductility) values to be relatively low (≈4-10%). In particular, they may fracture before the onset of necking. If this happens, then a (relatively small) discrepancy may arise between the UTS value obtained by PIP testing (which corresponds to the onset of necking) and that from a tensile test.
v3-fos-license
2018-11-13T14:06:15.931Z
2018-11-13T00:00:00.000
53287469
{ "extfieldsofstudy": [ "Medicine", "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmolb.2018.00091/pdf", "pdf_hash": "3fdd6c23d181a88786f5db85f4d71600f137b9c2", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42042", "s2fieldsofstudy": [ "Biology" ], "sha1": "3fdd6c23d181a88786f5db85f4d71600f137b9c2", "year": 2018 }
pes2o/s2orc
The PII-NAGK-PipX-NtcA Regulatory Axis of Cyanobacteria: A Tale of Changing Partners, Allosteric Effectors and Non-covalent Interactions

PII, a homotrimeric very ancient and highly widespread (bacteria, archaea, plants) key sensor-transducer protein, conveys signals of abundance or poorness of carbon, energy and usable nitrogen, converting these signals into changes in the activities of channels, enzymes, or of gene expression. PII sensing is mediated by the PII allosteric effectors ATP, ADP (and, in some organisms, AMP), 2-oxoglutarate (2OG; it reflects carbon abundance and nitrogen scarcity) and, in many plants, L-glutamine. Cyanobacteria have been crucial for clarification of the structural bases of PII function and regulation. They are the subject of this review because the information gathered on them provides an overall structure-based view of a PII regulatory network. Studies on these organisms yielded a first structure of a PII complex with an enzyme (N-acetyl-L-glutamate kinase, NAGK), deciphering how PII can cause enzyme activation, and how it promotes nitrogen stockpiling as arginine in cyanobacteria and plants. They have also revealed the first clear-cut mechanism by which PII can control gene expression. A small adaptor protein, PipX, is sequestered by PII when nitrogen is abundant and is released when it is scarce, swapping partner by binding to the 2OG-activated transcriptional regulator NtcA, co-activating it. The structures of PII-NAGK, PII-PipX, PipX alone, of NtcA in inactive and 2OG-activated forms and as the NtcA-2OG-PipX complex explain structurally PII regulatory functions and reveal the changing shapes and interactions of the T-loops of PII depending on the partner and on the allosteric effectors bound to PII. Cyanobacterial studies have also revealed that in the PII-PipX complex PipX binds an additional transcriptional factor, PlmA, thus possibly expanding PipX roles beyond NtcA-dependency. Further exploration of these roles has revealed a functional interaction of PipX with PipY, a pyridoxal-phosphate (PLP) protein involved in PLP homeostasis whose mutations in the human ortholog cause epilepsy. Knowledge of cellular levels of the different components of this PII-PipX regulatory network and of KD values for some of the complexes provides the basic background for gross modeling of the system at high and low nitrogen abundance. The cyanobacterial network can guide searches for analogous components in other organisms, particularly of PipX functional analogs.
Protein P II was discovered in the late sixties of last century (Stadtman, 2001), when Escherichia coli glutamine synthetase (GS) was found to exist in feed-back inhibition susceptible or refractory forms depending on the adenylylation state of one tyrosine per GS subunit. P I and P II were the first and second peaks from a gel filtration column (Shapiro, 1969). P I is a bifunctional enzyme (ATase) that adenylylates or deadenylylates GS (Jiang et al., 2007). P II controls the activity of the ATase. We now know that P II proteins are highly conserved and very widespread sensors used to transduce energy/carbon/nitrogen abundance signals in all domains of life (Kinch and Grishin, 2002; Sant'Anna et al., 2009). They are found in archaea, bacteria (Gram+ and Gram−), unicellular algae and plants. Many organisms have two or more genes for P II proteins (reviewed in Forchhammer and Lüddecke, 2016), such as E. coli, which has two paralogous genes encoding P II proteins with distinct functions, one (GlnB) involved in the control of GS, and the other one (GlnK) involved in the regulation of ammonia entry into the cell. By binding to target proteins, including channels, enzymes, or molecules involved in gene regulation, and by altering the function of these target molecules, P II proteins can regulate ammonia entry, nitrogen metabolism and gene expression (Forchhammer, 2008; Llácer et al., 2008). Cyanobacteria, and particularly among them Synechococcus elongatus PCC 7942 (hereafter S. elongatus), have been and continue to be very useful organisms for studies of P II actions, fuelling structural understanding of P II regulation. Studies on these organisms exemplify very clearly how enzyme activity and gene regulation can be controlled by P II via formation of several complexes (summarized in Figure 1) mediated by weak intermolecular interactions that are crucially regulated by allosteric effectors of the proteins involved in these complexes. This is the focus of the present review. Abbreviations: P II , a homotrimeric signaling protein; GlnK, GlnK3 and GlnB, different paralogous forms of P II proteins; GlnD, the bifunctional enzyme that uridylylates and deuridylylates GlnB in E. coli; GS, glutamine synthetase; AmtB and Amt, homologous trimeric bacterial transporters of ammonia; PamA, putative channel of unknown function that is encoded by sll0985 of Synechocystis sp.
PCC 6803; NtcA and CRP, homologous homodimeric transcription factors of cyanobacteria and of E. coli, respectively; the imperfectly palindromic target DNA sequences to which they bind specifically are called NtcA box and CRP box, respectively; PlmA, putative homodimeric transcription factor of the GntR family that is found in cyanobacteria; P I or ATase, bifunctional enzyme that adenylylates and deadenylylates glutamine synthetase in E. coli and other enterobacteria; NAGK, N-acetyl-L-glutamate kinase; NAG, N-acetyl-L-glutamate; PipX, a small monomeric protein of cyanobacteria that can interact with P II and with NtcA; FRET, fluorescence resonance energy transfer (also known as Förster resonance energy transfer), a phenomenon in which a fluorophore emits light of its characteristic frequency when a nearby different absorbing group is excited by light; the FRET signal decreases with the 6th power of the distance between the absorbing and emitting groups; 2OG, 2-oxoglutarate, also called α-ketoglutarate; AcCoA carboxylase, acetyl coenzyme A carboxylase; BCCP, biotin carboxyl carrier protein, the protein subunit that hosts the covalently bound biotin in many bacterial AcCoA carboxylases; PLP, pyridoxal phosphate; PipY, product of the gene that in S. elongatus is the next downstream of pipX, forming a bicistronic operon with it; it is a PLP-containing protein.

THE P II SIGNALING PROTEIN

S. elongatus P II (Figure 2), as other P II proteins, is a homotrimer of a polypeptide chain of 112 amino acids that exhibits the ferredoxin fold (βαβ)2 followed by a beta hairpin (Xu et al., 2003). The trimer (Figure 2A) has a hemispheric body nucleated by three antiparallel oblique (relative to the three-fold axis) β-sheets, each one formed by the 4-stranded sheet (topology ↓β2↑β3↓β1↑β4) of a subunit (see for example subunit B in the central panel of Figure 2A) extended on its β4 end by the C-terminal hairpin (β5-β6) of an adjacent subunit (subunit A) and on the β2 end by the β2-β3 hairpin stem (the root of the T-loop, see below) of the other subunit of the trimer (subunit C). The three sheets become continuous on the flat face of the hemispheric body via their β2-β3 hairpins (Figure 2A). The subunit sheets encircle like a 3-sided pyramid the three-fold axis, filling the inner space between them with their side-chains. They are covered externally by 6 helices (two per subunit) that run parallel to the β strands, contributing to the rounded shape of the hemispheric trimer (Figure 2A, panel to the right) and to the outer part of its equatorial flat face. In the convex face, three crevices are formed at subunit junctions between adjacent β-sheets, over the β2-β3 hairpins (Figure 2B). These crevices host the sites for the allosteric effectors ATP/ADP [and in some species AMP (Palanca et al., 2014)] and 2-oxoglutarate (2OG) that endow P II with its sensing roles (Kamberov et al., 1995; Zeth et al., 2014) (Figure 2B), the nucleotides reflecting the energy status (Fokina et al., 2011) and 2OG reflecting the abundance of carbon and, inversely, the nitrogen richness (see for example Muro-Pastor et al., 2001). Very salient structural features of P II are the long flexible T-loops (Figures 2A,C) formed by the 18 residues that tip the β2-β3 hairpin of each subunit (Xu et al., 2003). These loops are key elements (although not the exclusive ones, see Rajendran et al., 2011 and Schumacher et al., 2015) for P II interaction with its targets (Conroy et al., 2007; Gruswitz et al., 2007; Llácer et al., 2007, 2010; Mizuno et al., 2007; Zhao et al., 2010b; Chellamuthu et al., 2014). By binding at the boundary between the T-loop and the P II body, at the crevice formed between adjacent subunits, the adenine nucleotides and MgATP/2OG promote the adoption by the T-loop of different conformations (Figure 2C) (Fokina et al., …).

FIGURE 1 | Summary of the P II -PipX-NtcA network of S. elongatus. The network illustrates its different elements and complexes depending on nitrogen abundance (inversely related to 2OG level) and the structures of the macromolecules and complexes formed (when known). For PlmA (dimer in darker and lighter blue hues for its dimerization and DNA-binding domains, respectively) and its complex the architectural coarse model proposed (Labella et al., 2016) is shown, with the C-terminal helices of PipX (schematized in the extended conformation) pink-colored and the two P II molecules in dark red. The DNA complexed with NtcA and with NtcA-PipX is modeled from the structure of DNA-CRP (Llácer et al., 2010), since no DNA-NtcA structure has been reported. BCCP, biotin carboxyl carrier protein of bacterial acetyl CoA carboxylase (abbreviated AcCoA carboxylase); the other two components of this enzyme, biotin carboxylase and carboxyl transferase, are abbreviated BC and CT, respectively. No structural model of BCCP has been shown because the structure of this component has not been determined in S. elongatus and also because the structures of this protein from other bacteria lack a disordered 77-residue N-terminal portion that could be highly relevant for interaction with P II . The yellow broken arrow highlights the possibility of further PipX interactions not mediated by NtcA or P II -PlmA resulting in changes in gene expression (Espinosa et al., 2014). The solid semi-transparent yellowish arrow emerging perpendicularly from the flat network symbolizes the possibility of functional interactions of PipX not mediated by physical contacts between the macromolecules involved in the interaction, giving as an example the functional interaction with PipY. Its position outside the network tries to express the different type of interaction (relative to the physical contacts shown in the remainder of the network) as well as to place it outside the field of 2OG concentrations.

The T-loop also is the target of regulatory post-translational modification (reviewed in Merrick, 2015), first recognized in the regulatory cascade of the GS of E. coli as uridylylation of Tyr51 (see Figure 2C, 3rd panel from the left) mediated by a glutamine-regulated bifunctional P II uridylylating-deuridylylating enzyme, GlnD (Stadtman, 2001). Thus, in the enterobacterial GS-regulating cascade P II is uridylylated or deuridylylated depending on whether 2OG is abundant and L-glutamine is low or the reverse. P II -UMP activates the GS deadenylylating activity of ATase (Jiang et al., 2007), activating GS by decreasing its susceptibility to feed-back inhibition (Stadtman, 2001).
This uridylylation (or, in Actinobacteria, adenylylation of Tyr51) occurs at least in proteobacteria and actinobacteria (Merrick, 2015), but it might be more widespread, since it has also been reported in an archaeon (Pedro-Roig et al., 2013). Structural studies with E. coli P II (Palanca and Rubio, 2017) have excluded the stabilization of the T-loop into a fixed conformation by Tyr51 uridylylation, suggesting that the Tyr51-bound UMP physically interacts with the ATase. Although Tyr51 is conserved in cyanobacteria, it is not uridylylated. The T-loop serine 49 (Figure 2C, 1st, 2nd and 4th panels from the left) is phosphorylated in S. elongatus under conditions of nitrogen starvation by an unknown mechanism (Forchhammer and Tandeau de Marsac, 1994), whereas the phosphatase that dephosphorylates phosphoSer49 has been identified and proven to be 2OG-sensitive (Irmler et al., 1997).

FIGURE 2 | (A) Overall view in cartoon representation of S. elongatus P II along its three-fold axis from the P II flat face (left), from its convex side (middle), or with the three-fold axis vertical and the flat surface down (right). The structure corresponds to Protein Database (PDB) file 1QY7 (Xu et al., 2003). Each subunit in the trimer is colored differently. Some relevant traits are highlighted. (B) The P II allosteric sites shown in cartoon representation (top) and in semi-transparent zoomed surface representation (bottom) in approximately the same pose as in the cartoon representation. For clarity, in the cartoon representations only the two subunits forming each site are shown, in the same colors as in (A). Ligands are shown in sticks and balls representation, with atoms of C, O, N, P, and Mg in yellow, red, blue, orange, and green, respectively, except in the leftmost cartoon figure in which all the atoms of ATP are pale gray to highlight the bound 2OG (colored). Note in the corresponding panel of the bottom row that MgATP and 2OG are nearly fully buried in the P II molecule. The organisms from which the P II derive are indicated in the figure. The two panels to the left belong to isolated S. elongatus P II (PDB file 2XZW; Fokina et al., 2010a); the third panels illustrate E. coli GlnK taken from its complex with AmtB (PDB 2NUU, Conroy et al., 2007); the rightmost panels show Chlamydomonas reinhardtii P II , taken from its complex with Arabidopsis thaliana NAGK (PDB 4USJ; Chellamuthu et al., 2014). (C) Illustration of different shapes of the T-loops found in distinct complexes with allosteric effectors or with partner proteins. The T-loop is shown in cartoon representation, within a semi-transparent surface representation, as if this loop were isolated from the remainder of P II and from the protein partner in the complex. In the third panel, the side chain of Arg47 of E. coli GlnK is represented in sticks, given its importance for inhibiting the AmtB channel. Taken, from left to right, from: S. elongatus P II with MgATP and 2OG bound (PDB file 2XZW; Fokina et al., 2010a); S. elongatus P II -PipX (PDB 2XG8; Llácer et al., 2010); E. coli GlnK-AmtB complex (PDB 2NUU, Conroy et al., 2007); and the P II -NAGK complex (PDB 2V5H; Llácer et al., 2007).

FRET studies with engineered fluorescent S. elongatus P II used as an ADP- and ATP-sensitive probe (Lüddecke and Forchhammer, 2015) have challenged the claim (Radchenko et al., 2013) that P II proteins have a very slow ATPase activity that would regulate P II similarly as the signaling GTPases with bound GTP and GDP.
Although this ATPase was reported as a 2OG-triggered switch that appeared an intrinsic trait of P II proteins (Radchenko et al., 2013), the FRET experiments with S. elongatus P II (Lüddecke and Forchhammer, 2015) appear to indicate that an endogenous ATPase is not a relevant mechanism for the transition of P II into the ADP state. P II COMPLEXES WITH CHANNELS The first structurally solved P II complex was the one of E. coli GlnK with the AmtB ammonia channel (Conroy et al., 2007;Gruswitz et al., 2007) (Figure 3A) formed under nitrogen richness conditions. This structure showed that AmtB was inhibited by GlnK because the extended T-loop fits the channel entry, with the insertion into the channel of a totally extended arginine emerging from the T-loop and blocking the channel space ( Figure 2C, 3rd panel from the left, and Figure 3A, zoom). The ADP-bound and MgATP/2OG-bound structures of an Archeoglobus fulgidus GlnK protein (Maier et al., 2011) indicated that 2OG may prevent GlnK binding to the ammonia channel because of induced flexing outwards (relative to the 3-fold molecular axis) of the T-loops, preventing their topographical correspondence with the three holes of the trimer of ammonia channels ( Figure 3B). Interestingly, the T-loops of MgATP/2OG-bound S. elongatus P II ( Figure 2C, leftmost panel) and A. fulgidus GlnK3 ( Figure 3B) exhibited different flexed conformations (relative to the ADP-bound extended forms), and thus 2OG-binding by itself does not determine a single T-loop conformation, at least with different P II proteins. Yeast two hybrid approaches (Osanai et al., 2005) detected the interaction between P II and the putative channel PamA (encoded by sll0985) of Synechocystis sp. PCC 6803 (from now on Synechocystis), but molecular detail on this protein is non-existent, and, therefore, it is uncertain whether such interaction might resemble the GlnK-AmtB interaction. PamA is not conserved in many cyanobacteria, and the most closely related putative protein of S. elongatus, the product of the Synpcc7942_0610 gene, failed to give interaction signal with S. elongatus P II in yeast two hybrid assays (Castells, M.A., PhD Dissertation, Universidad de Alicante, 2010), despite the fact that the sequence identity with PamA concentrated in the C-terminal region, where P II binds in Synechocystis (Osanai et al., 2005). In vitro studies with the recombinantly produced Synechococystis PamA and P II showed that their interaction was lost in the presence of ATP and 2OG. Thus, similarly to the GlnK-AmtB and GlnK3-Amt complexes, the P II -PamA complex is formed under conditions of nitrogen abundance. However, T-loop phosphorylation did not dissociate this complex (Osanai et al., 2005). The function of PamA is not known, but its deletion from Synechocystis changed the expression of FIGURE 3 | P II proteins and the ammonia channel. (A) The structure (PDB file 2NUU; Conroy et al., 2007) of the E. coli complex of GlnK (a P II protein in charge of ammonia channel regulation) and the ammonia channel AmtB is shown to the right, whereas the zoom to the lower left shows only a part of the complex, to highlight the interaction of one T-loop with one channel. AmtB is in semi-transparent surface representation. GlnK is in the main figure in cartoon representation with each subunit colored differently, with the side-chain of the T-loop residue Arg47 shown in sticks representation. 
In the zoomed image GlnK is shown in surface representation in yellow with the T-loop residues highlighted in space-filling representation, illustrating the fact that the side-chain of Arg47 is the element getting deep into the channel and blocking it. (B) Super-imposition of the structures of Archeoglobus fulgidus GlnK3 (one of the three P II proteins of the GlnK type in this archaeon; Maier et al., 2011) with ADP bound (green; PDB file code 3TA1) or with ATP and 2OG bound (yellow; PDB 3TA2) to illustrate how 2OG binding fixes the T-loops in an outwards-flexed position (relative to the positions without 2OG) that would be inappropriate for fitting the topography of the entry chambers to the three ammonia channels in trimeric Amt (the ammonia channel in this organism). P II COMPLEXES WITH ENZYMES IN CYANOBACTERIA (AND BEYOND) The complexes of P II with the N-acetyl-L-glutamate kinase (NAGK) enzymes from S. elongatus and Arabidopsis thaliana presented a very different architecture with respect to the structure of the GlnK-AmtB complex of E. coli (Llácer et al., 2007;Mizuno et al., 2007) (Figures 4A,B). The P II -NAGK complex is an activating complex in which the T-loops of P II are flexed ( Figure 2C, rightmost panel) and integrated into a FIGURE 4 | P II -NAGK complex and active and arginine-inhibited NAGK. (A) The P II -NAGK complex of S. elongatus (PDB 2V5H; Llácer et al., 2010). Surface representations of the complex formed by two P II trimers (yellow) capping on both ends the doughnut-like NAGK hexamer (trimer of dimers; each dimer in a different color). The three-fold axis is vertical (top) or perpendicular to the page (bottom). Figure of J.L. Llácer and V. Rubio taken from Chin (2008). Reprinted with permission from AAAS (B). Cartoon representation of the S. elongatus P II -NAGK complex after removing the back NAGK dimer for clarity. The three-fold symmetry axis is vertical. Reprinted from Current Opinion in Structural Biology, 18, Llácer et al., Arginine and nitrogen storage, 673-681, 2008, with permission from Elsevier. (C) P II subunit-NAGK subunit contacts. P II , NAGK, and NAG are shown as strings, ribbons, and spheres, respectively. The contacting parts of the T-loop, B-loop, and β1-α1 connection, including some interacting side chains (in sticks), are blue, red, and green, respectively. The surfaces provided by these elements form meshworks of the same colors. The NAGK central β-sheet is green, and other β-strands and the α-helices are brownish and grayish for N-and C-domains, respectively. Some NAGK elements and P II residues are labeled. This figure and its legend reproduce with some modifications a figure and its legend of Llácer et al. (2007). The crystal structure of the complex of P II and acetylglutamate kinase reveals how P II controls the storage of nitrogen as arginine. Copyright (2007) National Academy of Sciences. (D,E), active and inactive conformations, respectively, of hexameric arginine-inhibitable NAGK. The active form is from a crystal of the enzyme from Pseudomonas aeruginosa (PDB 2BUF) while the inactive form is from the Thermotoga maritima enzyme (PDB 2BTY) (Ramón-Maiques et al., 2006). Note that the inactive form is widened relative to the active form, and that it has arginine sitting on both sides of each interdimeric junction. In the active form the nucleotide (in this case the product ADP rather than the substrate ATP) and NAG sit one in each domain of individual subunits. 
The NAGK observed in the P II -NAGK complex is in the active form, being stabilized in this form by its contacts with P II . hybrid (both proteins involved) β-sheet with NAGK, forming also a hybrid ion-pair network ( Figure 4C; Llácer et al., 2007). Apparently this flexing from an extended conformation could occur in two steps (Fokina et al., 2010b). The initial step would be mediated by a smaller loop of P II called the B-loop (Figures 2A, 4C). P II binding of 2OG also favors the flexing of the T-loop ( Figure 2C, leftmost panel) (Fokina et al., 2010a;Truan et al., 2010) although the resulting conformation appears inappropriate for interacting with NAGK. In addition, 2OG can also promote the disassembly of the P II -NAGK complex because certain P II residues like Arg9 that are involved in the binding of 2OG are also involved in the interaction with NAGK (and also with PipX, another target of P II , see below). Therefore, 2OG, an indicator of low ammonia levels , abolishes P II -NAGK complex formation (Figure 1). In S. elongatus 2OG can also promote the disassembly of the P II -NAGK complex by favoring the phosphorylation of Ser49 (Forchhammer and Tandeau de Marsac, 1994;Irmler et al., 1997), since the bound phosphate sterically prevents formation of the P II -NAGK hybrid β-sheet (Llácer et al., 2007). Plants and cyanobacteria stockpile ammonia as arginine, the protein amino acid with the largest nitrogen content (four atoms per arginine molecule). Arginine-rich proteins are very abundant in plant seeds (VanEtten et al., 1963). Cyanobacteria make non-ribosomally an arginine-rich amino acid polymer called cyanophycin (Oppermann-Sanio and Steinbüchel, 2002;Watzer and Forchhammer, 2018). The arginine stockpiling as arginine-rich macromolecules minimizes the osmotic effect while permitting rapid nitrogen mobilization for protein-building processes such as seed germination and cell multiplication. The selection of NAGK as the regulatory target stems from the fact that in many bacteria (including cyanobacteria) and in plants NAGK controls arginine synthesis via feed-back inhibition by L-arginine (Hoare and Hoare, 1966;Cunin et al., 1986;Lohmeier-Vogel et al., 2005;Beez et al., 2009). This inhibition must be overcome if large amounts of ammonia have to be stored as arginine (Llácer et al., 2008). Indeed, the P II -NAGK complex exhibits decreased inhibition by arginine Llácer et al., 2008). In arginine-sensitive NAGK (Figures 4D,E) the N-terminal αhelix of each subunit interacts with the same helix of an adjacent dimer, chaining three NAGK homodimers into a doughnutshaped hexameric ring with three-fold symmetry and a central large hole (Ramón-Maiques et al., 2006). The NAGK reaction (phosphorylation of the γ-COOH of N-acetyl-L-glutamate, NAG, by ATP) occurs within each NAGK subunit. NAG and ATP sit over the C-edge of the central 8-stranded largely parallel β sheet of the N-terminal and C-terminal domains, respectively ( Figure 4D; Ramón-Maiques et al., 2002). Catalysis requires the mutual approach of both domains of each subunit to allow the contact of the ATP terminal phosphate with the attacking NAG γ-COOH (Ramón-Maiques et al., 2002;Gil-Ortiz et al., 2003). Arginine, by binding in each subunit next to the N-terminal αhelix (Figure 4E), expands the hexameric ring hampering the contact of the reacting groups and preventing catalysis (Ramón-Maiques et al., 2006). 
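The chemistry at stake in this regulation is compact enough to be written out. The equation below merely restates, for clarity, the NAGK reaction described above (ATP-dependent phosphorylation of the γ-carboxylate of NAG); the product name follows standard enzymological nomenclature and is not spelled out in the text:

\[
\text{N-acetyl-L-glutamate} + \text{ATP} \;\longrightarrow\; \text{N-acetyl-L-glutamyl-5-phosphate} + \text{ADP}
\]

Arginine and P II do not act on this chemistry directly; as described above, they respectively hinder or favor the mutual approach of the two domains that bind NAG and ATP.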
In the P II -NAGK complex two P II trimers sit on the threefold axis of the complex, one on each side of the NAGK ring, making contacts with the inner circumference of this ring (Figures 4A,B). Each P II subunit interacts via its T and B loops with each NAGK subunit ( Figure 4C) gluing the two domains of this last subunit ( Figure 4A). By restricting NAGK ring expansion ( Figure 4C) even when arginine is bound, P II renders NAGK highly active (Llácer et al., 2007;Mizuno et al., 2007). P II does not compete physically with arginine for its sites on NAGK, simply these sites are widened in the P II -NAGK complex (Llácer et al., 2007), resulting in decreased apparent affinity of NAGK for arginine (as reflected in the dependency of the NAGK activity on the arginine concentration). In addition, the hybrid P II -NAGK ion pair network (Figure 4C) enhances the apparent affinity for NAG (assessed as the K m or S 0.5 value of NAGK for NAG) of cyanobacterial NAGK Llácer et al., 2007). Overall, the NAGK bound to P II exhibits decreased apparent affinity for arginine and increased activity, rendering NAGK much more active in the presence of arginine than when not bound to P II (Llácer et al., 2008), something that is crucial for nitrogen storage as arginine. NAGK appears to be a P II target only in organisms performing oxygenic photosynthesis (cyanobacteria, algae, and plants, Burillo et al., 2004). P II proteins from plants have lost the ability to bind ADP, while still binding ATP and 2OG (Lapina et al., 2018). In addition, except in Brassicae, the Cterminal part of plant P II is extended to form two helical segments and a connecting loop (Q-loop; Figure 2B, rightmost panels), creating a novel glutamine site, resulting in glutaminesensitivity of the P II -NAGK interaction (Chellamuthu et al., 2014). This is not the case with cyanobacterial P II , which binds both ADP and ATP and is glutamine-insensitive (Chellamuthu et al., 2014). P II has also been shown to interact in plants (Feria-Bourrellier et al., 2010) and bacteria, including cyanobacteria (Rodrigues et al., 2014;Gerhardt et al., 2015;Hauf et al., 2016), with the biotin carboxyl carrier protein (BCCP) of the enzyme acetyl coenzyme A carboxylase (AcCoA carboxylase) (Figure 1), although this complex has not been characterized structurally. BCCP is the component that hosts the covalently bound biotin that shuttles between the biotin carboxylase component and the transferase component of AcCoA carboxylase (Rubio, 1986). P II -BCCP complex formation tunes down AcCoA utilization and thus subsequent fatty acid metabolism (Feria-Bourrellier et al., 2010;Gerhardt et al., 2015;Hauf et al., 2016), promoting uses of AcCoA for different purposes than the synthesis of fatty acids, and therefore linking P II to AcCoA and fatty acid metabolism. For interaction, P II has to be in the ATP-bound and 2OGfree form (Figure 1) (Gerhardt et al., 2015;Hauf et al., 2016), which are conditions at which P II also binds to NAGK (Llácer et al., 2008). Therefore, there could be in vivo simultaneous activation of NAGK and inhibition of AcCoA carboxylase by P II . Mutational evidence suggests the involvement of the T-loop in this interaction with AcCoA carboxylase (Hauf et al., 2016), in principle excluding P II -NAGK-BCCP ternary complex formation and raising the possibility of competition between NAGK and BCCP for P II . The classical example of interaction of P II with an enzymatic target was with the ATase of E. 
coli (see introductory section), with which uridylylated or deuridylylated GlnB (GlnB is one of the two P II proteins of E. coli) can interact (Stadtman, 2001). We will not deal with this enzyme here because the P II /ATase/GS cascade of enterobacteria does not appear to have general occurrence, for example in cyanobacteria, and also because we have only partial information on the structure of the ATase (Xu et al., 2004(Xu et al., , 2010 and no direct information on the structure of the GlnB-ATase complex, although a model for such complex has been proposed (Palanca and Rubio, 2017). THE PipX ADAPTOR PROTEIN AND ITS COMPLEX WITH P II A yeast two hybrid search for proteins interacting with P II in S. elongatus identified (Burillo et al., 2004), in addition to NAGK, a small novel protein (89 amino acids) that was named PipX (P II -interacting protein X). This protein was identified later in a search (Espinosa et al., 2006) for proteins interacting with NtcA, the global nitrogen regulator of cyanobacteria (Vega-Palas et al., 1992). PipX binding to P II occurs under conditions of ammonia abundance (Figure 1), the same conditions prevailing for P II -NAGK complex formation (Espinosa et al., 2006). NAGK-PipX competition for P II was revealed in NAGK assays that showed that PipX decreased P II -activation and increased arginine inhibition of NAGK (Llácer et al., 2010), excluding NAGK-P II -PipX ternary complex formation. 2OG binding to P II disassembles the P II -PipX complex (Espinosa et al., 2006;Llácer et al., 2010), leaving PipX free to interact with NtcA (Figure 1). The crystal structures of P II -PipX complexes of S. elongatus (Llácer et al., 2010) and of Anabaena sp. PCC7120 (Zhao et al., 2010b) provided the first structural information on PipX (Figure 5A), revealing that it is formed by a compact body folded as a Tudor-like domain (a horseshoe-curved β-sheet sandwich) (Lu and Wang, 2013), followed by two C-terminal helices. In the P II -PipX complex ( Figure 5B) the three PipX molecules are enclosed in a cage formed between the flat face of the hemispheric P II trimer and its three fully extended T-loops (see Figure 2C, 2nd panel from the left) emerging perpendicularly to the P II flat face at its edge. The shape and orientation of these T-loops is very different relative to the P II bound to NAGK, Figure 5C). In turn, the caged Tudor-like domains form a homotrimer over the P II flat surface (Figure 5B, bottom), with the PipX self-interaction detected in yeast three-hybrid assays using P II as bridging protein (Llácer et al., 2010). Tudor-like domains characteristically interact with RNA polymerase (Steiner et al., 2002;Deaconescu et al., 2006;Shaw et al., 2008), suggesting that PipX could have some role in gene expression that would be blunted by sequestration of these domains in the P II cage. In the structure of the Anabaena P II -PipX complex, the two C-terminal helices of each PipX molecule lie one along the other in antiparallel orientation ("flexed"), being exposed between two adjacent T-loops in transversal orientation relative to these loops (Zhao et al., 2010b). Recent structural NMR data on isolated PipX showed that when PipX is alone (that is, not bound to a partner) the C-terminal helices are "flexed" (Figure 5A) . As shown below, the C-terminal helices of PipX in the NtcA-PipX complex are also flexed (Llácer et al., 2010). However, in the S. 
elongatus P II -PipX complex only one PipX molecule presents the "flexed" conformation, whereas in the other two PipX molecules the C-terminal helix is "extended, " not contacting the previous helix and emerging centrifugally outwards from the complex, between two T-loops ( Figure 5B) (Llácer et al., 2010). P II binding might facilitate the extension of the PipX C-terminal helix, endowing the P II -PipX complex with a novel surface and novel potentialities for interaction with other components. These novel potentialities were substantiated recently by the identification, in yeast three-hybrid searches (Labella et al., 2016), of interactions of PipX in the P II -PipX complex with the homodimeric transcription factor PlmA (see proposal for the architecture of this complex in Figure 1 bottom left; Labella et al., 2016). Interactions were not observed in yeast two-hybrid assays between PipX or P II and PlmA. Residues involved in three-hybrid interactions, mapped by site-directed mutagenesis, are largely localized in the C-terminal helix of PipX. PlmA belongs to the GntR super-family of transcriptional regulators, but is unique to cyanobacteria (Lee et al., 2003;Hoskisson and Rigali, 2009;Labella et al., 2016). Little is known about PlmA functions other that it is involved in plasmid maintenance in Anabaena sp. strain PCC7120 (Lee et al., 2003), in photosystem stoichiometry in Synechocystis sp. PCC6803 (Fujimori et al., 2005), in regulation of the highly conserved cyanobacterial sRNA YFR2 in marine picocyanobacteria (Lambrecht et al., 2018), and that it is reduced by thioredoxin, without altering its dimeric nature in Synechocystis sp. PCC6803 (Kujirai et al., 2018). The P II -PipX-PlmA ternary complex suggests that PipX can influence gene expression regulation via PlmA, although the PlmA regulon remains to be defined. THE GENE EXPRESSION REGULATOR NtcA When ammonia becomes scarce the increasing 2OG levels should determine the disassembly of the P II -NAGK, P II -BCCP, and P II -PipX complexes (Figure 1). These same conditions promote the binding of PipX (see below) to the transcriptional regulator NtcA (Figure 1), an exclusive cyanobacterial factor of universal presence in this phylogenetic group (Vega-Palas et al., 1992;Herrero et al., 2001;Körner et al., 2003). The determination of the structures of NtcA from S. elongatus (Figures 6A,B) (Llácer et al., 2010) and from Anabaena sp. PCC7120 (Zhao et al., 2010a) confirmed the sequence-based inference (Vega-Palas et al., 1992) that NtcA is a homodimeric transcriptional regulators of the family of CRP (the cAMP-regulated transcriptional regulator of E. coli) (McKay and Steitz, 1981;Weber and Steitz, 1987). Similarly to CRP, NtcA has a C-terminal DNA binding domain of the helix-turn-helix type. In CRP, the DNA binding helices of its two C-terminal domains are inserted in two adjacent turns of the major groove of DNA that host the imperfectly palindromic target DNA sequence (called here the CRP box) Llácer et al., 2010). Note that the major difference between the two conformations is the large movement of the C-terminal helix around its flexible linker . The same flexed conformation was observed in the complex with NtcA (see below) and agrees with the data of structural NMR studies on isolated PipX . The elements of the Tudor-like domain are encircled in a blue circumference. (B) The P II -PipX complex of S. elongatus viewed with the three-fold axis of PII vertical (top) or in a view along this axis, looking at the flat face of P II (bottom). 
(C) Superimposition of S. elongatus P II in the complex with PipX and in that with NAGK. The changes in the T-loops are very patent. (McKay and Steitz, 1981;Weber and Steitz, 1987). The consensus DNA sequence to which NtcA binds (consensus NtcA box) is quite similar to the consensus CRP box (Berg and von Hippel, 1988;Luque et al., 1994;Jiang et al., 2000;Herrero et al., 2001;Omagari et al., 2008), and thus NtcA and CRP are expected to bind in similar ways to their target DNA sequences. In vitro studies revealed that 2OG is an NtcA activator (Tanigawa et al., 2002;Vázquez-Bermúdez et al., 2002), increasing NtcA affinity for its target sequences. As in the case of cAMP for CRP, 2OG binds to the NtcA regulatory domain. This domain is responsible for the NtcA dimeric nature (Figure 6A) (Llácer et al., 2010;Zhao et al., 2010a). The regulatory domain of NtcA is highly similar to the corresponding domain of CRP (Llácer et al., 2010). The main differences reflect the changes in the characteristics of the site for the allosteric effector that enable the accommodation of 2OG instead of cAMP. Each 2OG molecule interacts in NtcA with the two (one per subunit) long interfacial helices that form the molecular backbone, crossing the molecule in its longer dimension, linking in each subunit both domains (Figures 6A,C) (Llácer et al., 2010;Zhao et al., 2010a). 2OG interactions with both interfacial helices favor a twist of one helix relative to the other, dragging the DNA binding domains and helices to apparently appropriate positions and interhelical distance for binding in two adjacent turns of the major groove of DNA where the NtcA box should be found (Figures 6A,B, and inset therein), although the experimental structure of . (A,B), structures of "active" (A) and "inactive" (B) S. elongatus NtcA (PDB files 2XHK and 2XGX, respectively) (Llácer et al., 2010). The two subunits of each dimer are in different colors, with the DNA-binding domains in a lighter hue than the regulatory domain of the same subunit. In the cartoon representation used, helices are shown as cylinders to illustrate best the changes in position of the DNA binding helices and of the long interfacial helices (labeled) upon activation. Bound 2OG is shown in "active" NtcA (in spheres representation, with C and O atoms colored yellow and red, respectively). The DNA, in surface representation in white, has been modeled from the CRP-DNA structure (for details see Llácer et al., 2010). The inset superimposes the "active" and "inactive" forms colored as in the main figure to illustrate the magnitude of the changes. (C) Stereo view of sticks representation of the 2OG site residues of the "active" (green) and "inactive" (raspberry) forms of NtcA. The 2OG bound to the "active" form is distinguished by its yellow C atoms. Note that only two residues, both 2OG-interacting and highly polar, experience large changes in their positions between the inactive and the active forms: Arg128 from the long interfacial helix of the subunit that provides the bulk of the residues of the site, and Glu133 from the interfacial helix of the other subunit. They are believed to trigger the changes in the relations between the two interfacial helices that result in NtcA "activation". DNA-bound NtcA should be determined to corroborate this proposals. 
Although NtcA and CRP boxes are quite similar, plasmon resonance experiments (Forcada-Nadal et al., 2014) revealed that CRP exhibits complete selectivity and specificity for the CRP box, with absolute dependency on the presence of cAMP. In contrast, NtcA had less strict selectivity, since it still could bind to its promoters in the absence of 2OG, although with reduced affinity, and it could also bind to the the CRP promoter tested. Nevertheless, it is unlikely that NtcA could bind in vivo to the CRP boxes of cyanobacteria in those species where CRP is also present, given the much higher affinities for the CRP sites of cyanobacterial CRP and the relative concentrations in the cell of both transcriptional regulators (Forcada-Nadal et al., 2014). While the structures of 2OG-bound NtcA of S. elongatus (Llácer et al., 2010) and of Anabaena (Zhao et al., 2010a) are virtually identical, the reported structures of "inactive" NtcA of Anabaena without 2OG (Zhao et al., 2010a) and of S. elongatus (Llácer et al., 2010) differed quite importantly in the positioning of the DNA binding domains (Figure 1, "Inactive forms" under "NtcA"), although in both cases the DNA binding helices were misplaced for properly accommodating the NtcA box of DNA, raising the question of whether these structural differences are species-specific or whether "inactive" NtcA can be in a multiplicity of conformations. PipX AS AN NtcA CO-ACTIVATOR Soon after PipX was found to interact with NtcA (Espinosa et al., 2006), it was also shown to activate in vivo transcription of NtcA-dependent promoters under conditions of low nitrogen availability (Espinosa et al., 2006(Espinosa et al., , 2007. Direct binding studies with the isolated molecules proved that PipX binding to NtcA requires 2OG (Espinosa et al., 2006). Nevertheless, as PipX was not totally essential for transcription of NtcA-dependent promoters (Espinosa et al., 2007;Camargo et al., 2014), it was concluded (1) that PipX was a coactivator of 2OG-activated NtcA-mediated transcription (Espinosa et al., 2006;Llácer et al., 2010); and (2) that the degree of activation by PipX depended on the specific NtcA-dependent promoter (Espinosa et al., 2007;Forcada-Nadal et al., 2014). Detailed plasmon resonance studies (Forcada-Nadal et al., 2014) using sensorchip-bound DNA confirmed for three Synechocystis promoters that PipX binding to promoter-bound NtcA has an absolute requirement for 2OG, since no PipX binding was observed when NtcA was bound to the DNA in absence of 2OG. In these studies PipX increased about one order of magnitude the apparent affinity of NtcA for 2OG. In other in vitro experiments with four NtcAdependent promoters of Anabaena sp. PCC 7120, PipX was also found to positively affect NtcA binding to its DNA sites (Camargo et al., 2014). The induction by PipX of increased NtcA affinity for 2OG and for its promoters could account for the PipX-triggered enhancement of NtcA-dependent transcription. The crystal structure of the NtcA-PipX complex of S. elongatus (Figure 7) (Llácer et al., 2010) corresponded to one "active" NtcA dimer with one molecule of each 2OG and PipX bound to each subunit. PipX is inserted via its Tudor-like domain (Figure 7A), filling a crater-like cavity formed over each NtcA subunit ( Figure 7B) largely over one regulatory domain, being limited between the DNA binding domain and the long interfacial helix of the same subunit, and the regulatory domain of the other subunit. 
PipX extensively interacts with the entire crater, with nearly 1200 Å 2 of NtcA surface covered by each PipX molecule, of which 65% belongs to one subunit (40%, 15% and 10% belonging to the DNA-binding domain, the interfacial helix and the regulatory domain, respectively) and 35% belongs to the regulatory domain (including the interfacial helix) of the other subunit, gluing together the elements of half of the NtcA dimer in its active conformation, stabilizing this conformation (Llácer et al., 2010). This conformation is the one that binds 2OG and that should have high affinity for the DNA, thus explaining the requirement of 2OG for PipX binding and the increased affinities of NtcA for 2OG and DNA when PipX was bound to NtcA (Forcada-Nadal et al., 2014). Since a similar crater-like cavity exists in other transcription factors of the CRP family including CRP (see for example McKay andSteitz, 1981 or Weber andSteitz, 1987) it would be conceivable that PipX-mimicking proteins could exist for these other transcriptional regulators of the CRP family, although PipX cannot do such a role since it does not bind to CRP (Forcada-Nadal et al., 2014). Furthermore, a large set of highly specific contacts (Llácer et al., 2010) ensure the specificity of the binding of PipX to NtcA. The elements of the Tudor-like domain that interact with NtcA are largely the same that interact with the flat surface of the hemispheric body of P II (many of them mediated by the upper layer of the Tudor-like β-sandwich, particularly strands β1 and β2), predicting total incompatibility for the simultaneous involvement of PipX in the NtcA and P II complexes (Llácer et al., 2010). While the Tudor-like domain monopolizes the contacts of PipX with NtcA, the C-terminal helices of PipX do not participate in these contacts and remain flexed, as in isolated PipX , protruding away from the (Llácer et al., 2010). The projection of NtcA differs somewhat from that in Figure 6A to allow visualization of both PipX molecules, illustrating the fact that the Tudor-like domain is the part of PipX that binds to NtcA. Note that the two PipX molecules (the asymmetric unit contained an entire complex with two PipX molecules, PDB file 2XKO) are in the "flexed" conformation, that the flexed helices protrude away from the NtcA molecule and that they do not contact the modeled bound DNA (main figure and inset). The inset represents the same complex in a different orientation (viewed approximately along the DNA) and with all elements in surface representation except the DNA (shown in cartoon) to better visualize the protrusion in large part due to the flexed helices of both PipX molecules. (B) Deconstruction of the NtcA-PipX complex to show in surface representation the crater at the NtcA surface where PipX binds, and the surface of Pip X used in this binding. (C) A model based on CRP (Llácer et al., 2010) for the complex of the NtcAPipX complex with DNA and with the C-terminal domain of the α-subunit of RNA polymerase (αCTD), to show that the C-terminal helices of PipX could reach this part of the polymerase. In this figure the C-terminal helix of NtcA is colored red because it has no counterpart in CRP and is involved in the interactions with PipX. complex ( Figure 7A). The coactivation functions of PipX for NtcA-mediated transcription could also involve these helices. 
However, in vitro experiments (Llácer et al., 2010;Camargo et al., 2014) and modeling (based on CRP) of DNA binding by the NtcA-PipX complex ( Figure 7A and inset therein) (Llácer et al., 2010) did not support the idea of PipX binding to DNA. Alternatively, these helices could interact with RNA polymerase, particularly given the location, in the homologous CRP-DNA complex, of the binding site for the α-subunit of the C-terminal domain (αCTD) of RNA polymerase ( Figure 7C; and discussed in Llácer et al., 2010). Further structures of P II -PipX bound to DNA alone or with at least some elements of the polymerase are needed to clarify these issues. THE PipX REGULATORY NETWORK IN QUANTITATIVE PERSPECTIVE The gene encoding P II could not be deleted in S. elongatus unless the pipX gene was previously inactivated (Espinosa et al., 2009). Further studies led to the conclusion that decreasing the P II /PipX ratio results in lethality in S. elongatus, indicating that PipX sequestration into P II -PipX complexes is crucial for survival and implicating both proteins in the regulation of essential processes Laichoubi et al., 2012). The ability of P II to prevent the toxicity of PipX suggests that P II acts as a PipX sink even under conditions in which the affinity for NtcA would be highest, supporting the idea that not all PipX effects are related to its role as NtcA co-activator. Mutational studies (Espinosa et al., 2009 and massive transcriptomic studies of S. elongatus mutants centered on PipX (Espinosa et al., 2014) also support the multifunctionality of PipX, stressing the need for additional studies, including the determination of PlmA functions and the search for further potential PipX-interacting proteins (Figure 1, broken yellow arrow). Massive proteomic studies (Guerreiro et al., 2014) have estimated the number of chains of each protein of the P II -PipX network (Figure 1) in S. elongatus cells. The values obtained ( Table 1) are corroborated by those obtained in focused western blot studies for some of these macromolecules (Table 1) (Labella et al., 2016(Labella et al., , 2017. These quantitative data give an opportunity to evaluate the possible frequency of the different complexes and macromolecules of the P II -PipX-NtcA network (Figure 1) in one or another form (schematized in Figure 8). Of all the proteins mentioned here until now, P II is by far the most abundant in terms of polypeptide chains ( Table 1). In comparison, the sum of all the chains of other known P II -binding proteins represents no more than 20% of the P II chains. Among these molecules is PipX, which only represents ∼10% of all the P II chains. This indicates that P II has the potential to sequester all the PipX that is present in the cell (Figure 8). In turn, PlmA could be fully trapped in the P II -PipX-PlmA complex if this complex has the 1:1:1 stoichiometry proposed for it (Figure 1) (Labella et al., 2016), since the number of PlmA chains only represent about 10% of the number of PipX chains. Thus, about 10 and 1% of the P II trimer could be as the respective P II -PipX and P II -PipX-PlmA complexes under nitrogen abundance conditions. In contrast, with nitrogen starvation all of the NtcA could be bound to PipX (Figure 8), given the ∼five-fold excess of PipX chains over NtcA chains (Table 1). Thus, assuming that under conditions at which 2OG and ATP reach high levels P II is totally unable to bind PipX, ∼80% of the PipX molecules could be free to interact with additional protein partners. 
When given as percentages, the data are relative to the number of P II chains (given the value of 100). a Data from massive proteomic study (Guerreiro et al., 2014). Percentages within parentheses are data based on immunoquantification in Western blots (Labella et al., 2016(Labella et al., , 2017. b Rounded to the closest integer or to the closest first decimal figure. c Given for reference, since there is no evidence of physical interaction with any of the other proteins. d The physical interaction of this putative channel with P II was found in Synechocystis sp. PCC6803, but the findings were not replicated in S. elongatus with the homologous product of gene Synpcc7942_0610. These inferences are consistent with the K D values for the P II -NAGK (Llácer et al., 2007) and P II -PipX complexes in the absence of 2OG (Llácer et al., 2010) and for the PipX-NtcA complex at high 2OG and ATP (Forcada-Nadal et al., 2014) (∼0.08, 7 and 0.09 µM, respectively). For the estimated cellular levels of the different components (Table 1), assuming a cell volume of 10 −12 ml, virtually all the NAGK and ∼95% of the PipX could be P IIbound in the absence of 2OG, and ∼98% of the NtcA could be PipX-complexed in the presence of 2OG. However, the impacts of varying concentrations of 2OG on the disassembly or assembly of the complexes most likely differ for the various complexes. For example, a two-order of magnitude increase in the K D value for the P II -NAGK complex due to 2OG binding might have much less impact (a 7% decrease in the amount of NAGK bound to P II would be estimated from the mere total protein levels and K D value) than a two-order of magnitude increase in the K D for the P II -PipX complex (an 80% decrease in the amount of PipX bound estimated similarly). These estimations are very crude, since they do not take in consideration that in S. elongatus P II phosphorylation prevents NAGK binding (Heinrich et al., 2004), and that this phosphorylation is greatly influenced by the abundance of ammonia (Forchhammer and Tandeau de Marsac, 1994). Furthermore, we have not considered in these estimates the influence of the ATP concentrations, recently shown to decrease in vivo in S. elongatus upon nitrogen starvation (Doello et al., 2018). Therefore, the situation is much more complex than would be expected from the mere consideration of the abundances of the different proteins and of the K D values for the non-phosphorylated form of P II . Nevertheless, it appears desirable to estimate the influence of different 2OG levels on K D values as an important element to take into consideration in future attempts to model the concentrations of the different FIGURE 8 | Protein complexes of the P II regulatory system in S. elongatus according to availability of ammonium in the cell, and their corresponding functional consequences. The frequencies of the different chains in the various forms are based on the levels of the proteins in the cell found in proteomic studies ( Table 1). The P II trimer has been colored blue, PipX and its C-terminal helices are red, PlmA dimers have their DNA binding domains yellow or orange and their dimerization domains in two hues of green, NAGK is shown as a purple crown, and the regulatory and DNA binding domains of NtcA are given dark and light shades of blue, respectively. 
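To make this kind of estimate easy to reproduce, the sketch below shows one way to convert copy numbers per cell into concentrations and to compute the fraction of a partner bound to P II from the exact solution of a simple two-component binding equilibrium. The cell volume (10⁻¹² ml) and the K D of ∼7 µM for P II -PipX in the absence of 2OG are taken from the text; the copy numbers are hypothetical placeholders (the Table 1 values are not reproduced here), and the code is only an illustrative sketch, not the calculation performed by the authors.

```python
import math

AVOGADRO = 6.022e23
CELL_VOLUME_L = 1e-12 * 1e-3   # a cell volume of 10^-12 ml, as assumed in the text, in litres

def copies_to_molar(copies_per_cell):
    """Convert a copy number per cell into a molar concentration."""
    return copies_per_cell / (AVOGADRO * CELL_VOLUME_L)

def bound_complex(a_total, b_total, kd):
    """Equilibrium [AB] for A + B <-> AB, from the exact quadratic mass-balance solution."""
    s = a_total + b_total + kd
    return (s - math.sqrt(s * s - 4.0 * a_total * b_total)) / 2.0

# Hypothetical copy numbers per cell (placeholders, NOT the Table 1 values).
pii_trimer = copies_to_molar(100_000) / 3   # P II is a homotrimer
pipx = copies_to_molar(10_000)              # ~10% of the P II chains, as stated above
kd_pii_pipx = 7e-6                          # M; K_D quoted in the text in the absence of 2OG

bound = bound_complex(pii_trimer, pipx, kd_pii_pipx)
print(f"PipX bound to P II: {100 * bound / pipx:.0f}%")
# A 100-fold weaker K_D (mimicking an effect of 2OG) illustrates how the impact of a
# given change in K_D depends on the concentrations involved.
weaker = bound_complex(pii_trimer, pipx, 100 * kd_pii_pipx)
print(f"...with a 100-fold higher K_D: {100 * weaker / pipx:.0f}%")
```

Applying the same function to the P II -NAGK and NtcA-PipX pairs with their respective K D values reproduces the qualitative point made above: how strongly a complex is depleted by a given increase in K D depends on how far above K D the free partner concentration sits.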
A NOVEL NETWORK MEMBER FUNCTIONALLY RELATED TO PipX Recently, a novel protein has been identified as belonging to the PipX regulatory network (Figure 1, yellow 3D arrow projecting upwards from the plane of the network). In this case no direct protein-protein interaction with PipX has been shown (Labella et al., 2017). This protein (PipY), is the product of the downstream gene in the bicistronic pipXY operon. The regulatory influence of PipX on PipY was originally detected in functional, gene expression and mutational studies (Labella et al., 2017). More recently it has been concluded that PipX enhances pipY expression in cis, preventing operon polarity, a function that might implicate additional interactions of PipX with the transcription and translation machineries, by analogy with the action of NusG paralogues, which are proteins bearing, as PipX, Tudor-like domains. It has been proposed that the cis-acting function of PipX might be a sophisticated strategy for keeping the appropriate PipX-PipY stoichiometry. PipY is an intriguing pyridoxal phosphate-containing protein that is folded as a modified TIM-barrel (Figure 1). The PipY structure (Tremiño et al., 2017) gives full structural backing to unsuccessful experimental attempts to show enzymatic activity of PipY and its orthologs (Ito et al., 2013). Because of these negative findings, and given the pleotropic effects of the inactivation of the PipY orthologs in microorganisms and humans, it has been concluded that these proteins have as yet unclarified roles in PLP homeostasis (Ito et al., 2013;Darin et al., 2016;Prunetti et al., 2016;Labella et al., 2017;Plecko et al., 2017). Interestingly, these proteins are widespread across phyla and the deficiency of the human ortholog causes vitamin B 6dependent epilepsy (Darin et al., 2016;Plecko et al., 2017;Tremiño et al., 2018), providing an excellent example that investigations of cyanobacterial regulatory systems like the one summarized here can have far-reaching consequences spanning up to the realm of human and animal pathology. If any lesson can be inferred from all of the above, is that the investigation on P II and particularly on PipX proteins require further efforts. FINAL REMARKS The rich P II regulatory network summarized in Figure 1 of this review even for a unicellular microorganism with a single type of P II protein, attests the importance of P II and of its regulatory processes. This importance, possibly underrecognized until now, is highlighted, for example, by the very wide distribution of P II proteins among microorganisms and plants. Furthermore, in the many organisms with several genes for P II proteins the levels of complexity expected from P II regulatory networks may be much greater than the one presented here. Each one of these paralogous P II proteins may command a regulatory network, and it would be unlikely that these networks would not be interconnected into a large meshwork that will require the instruments of systems biology to be fully understood. PipX also deserves deeper attention than received until now. Massive transcriptomics studies (Espinosa et al., 2014) have ascribed to this protein a paramount regulatory role in S. elongatus. For full understanding of this role further searches for PipX-interacting or functional partners like PipY appear desirable, with detailed investigation of the molecular mechanisms of the physical or the functional interactions. PlmA merits particular attention to try to characterize the roles of the P II -PipX-PlmA complex. 
PipY and its orthologs deserve similar attention, to try to define molecularly their PLP homeostatic functions, a need that is made more urgent by the role in pathology of the human ortholog of PipY. In addition to all of this, the structural evidence reviewed here makes conceivable that adaptor proteins capable of stabilizing active conformations of other transcriptional regulators of the CRP family could exist outside cyanobacteria, mimicking the PipX role. In summary, there are many important questions to be addressed arising from the field reviewed here, some within cyanobacteria, but others concerning whether the mechanisms and complexes exemplified here could have a parallel in other bacterial or even plant species. Clearly more investigations on P II and its partners in other phylogenetic groups using the approaches and experimental instruments used to uncover the cyanobacterial P II regulatory network would appear highly desirable. AUTHOR CONTRIBUTIONS AF-N, JLL, AC, CM-M, and VR reviewed the literature and their own previous work and contributed to the discussions for writing this review. The main writer was VR, but all the authors contributed to the writing of the manuscript. AF-N and CM-M prepared the figures, with key inputs from VR. FUNDING Supported by grants BFU2014-58229-P and BFU2017-84264-P from the Spanish Government. ACKNOWLEDGMENTS We acknowledge support of the publication fee by the CSIC Open Access Publication Support Initiative through its Unit of Information Resources for Research (URICI). We are grateful to ESRF (Grenoble, France), Diamond (Oxfordshire, UK) and Alba (Barcelona, Spain) synchrotrons for access and for staff support to collect the data used for the determination by our group of most of the structures mentioned in this paper, which have been previously published as referred. CM-M holds a contract of CIBERER.
Effect of Play Activities Versus Passive Distraction Technique on Preoperative Anxiety and Fear Levels among Children Undergoing Surgeries

Background: Illness, hospitalization, and surgery are among the first crises that children face, especially during the early years. Play and the passive distraction technique are non-pharmacological approaches used to control emotions. They are considered anxiety-reducing strategies that divert children's attention away from noxious or unpleasant stimuli and subsequently minimize their anxiety and fear. Objective: To determine the effect of play activities versus the passive distraction technique on preoperative anxiety and fear levels among children undergoing surgeries. Setting: The study was conducted in the general pediatric surgical units at the Alexandria University Children's Hospital at El-Shatby and at Smouha Specialty Hospital. Subjects: A convenience sample of 90 hospitalized school-age children undergoing general surgeries was recruited from the previously mentioned settings. Tools: Three tools were used: the Socio-demographic and Medical Data of Children Structured Interview Schedule, the State Trait Anxiety Inventory for Children (STAIC), and the Children's Fear Scale (CFS). Results: The results revealed that 66.7% of school-age children in the play activities group and 30% of children in the passive distraction group had low anxiety, compared to none of the children in the control group (0.0%). In addition, 66.7% of school-age children in the control group had extreme fear, compared to none of the children in either the play activities group or the passive distraction group. A highly statistically significant difference in preoperative anxiety and fear was detected between the school-age children in the two study groups (I and II) one hour before surgery, where 50% of children in study group II reported average anxiety compared to 33.3% of those in study group I. In addition, nearly half of the children in study group II reported medium fear (46.7%) compared to 16.7% of those in study group I. Conclusion: Practicing play activities and the passive distraction technique with school-age children preoperatively minimized their levels of anxiety and fear. In addition, play activities were more effective in decreasing children's levels of anxiety and fear than the passive distraction technique. Recommendations: It was recommended that play activities be applied preoperatively to children undergoing surgeries in hospitals.
Introduction Hospitalization for medical illness or surgical procedures is experiencing various emotions due to the unknown environment, unfamiliar people, and various frightening equipment.The effects of the disease, changes in the environment and habits as well as separation from family members and friends can lead to stress particularly during painful procedures (Al-Yateem et al., 2015;Kapkın et al., 2020).Surgical operations are situations that provoke anxiety, fear and varying levels of distress not only for the child who is unwell but also for the parents.Varying degrees of anxiety and fear may be experienced by children depending on the seriousness of the health problem and the level of parental ; anxiety and concern exhibited Chow et al., 2016).Anxiety is defined as a psychological, physiological, and behavioral state induced in human beings by a threat to well-being or survival, either actual or potential.Preoperative anxiety (PA) is of utmost concern in pediatrics.It was documented that 40% to 60% of children undergoing surgical procedures experience high levels of preoperative anxiety (Fortier & Kain, 2015). Fear is a natural, powerful, and inherent human emotion.It involves a universal biochemical reactions as well as a high individual emotional response.Fear refers to the presence of danger or the threat of harm, whether that danger is physical or psychological (Fritscher, 2020).Anxiety and fear in children undergoing hospitalization can be reduced by several measures in the form of play activities and passive distraction technique.In accordance with the stage of development of play for children aged 6-12 years, the game that can be done is constructive play.In this playing activity, the child will create something, create a particular building with the available game tools (Supartini, 2014;Kaluas, 2015(.Play activities are playing technique used to reduce anxiety and fear among hospitalized children.They evaluate their feelings and misunderstandings toward treatments and procedures and help them develop positive coping methods (Kapkın ,2020).Several studies have shown that play activities can help establish a bond and communication with the hospitalized children.The expression of feelings among children, relieve stress and anxiety and prepare them for invasive interventions (Caleffi et al ., 2016). Passive distraction technique means that the children usually remain silent during the procedure through watching a stimulant rather than the active participation.It is hypothesized to be an effective strategy for decreasing procedural pain, fear, anxiety by reducing the sensory and affective components of pain, anxiety and fear and the diversional capacity left to process that pain.In addition, when an individual is distracted, regional cerebral blood flow associated with processing a painful event is reportedly reduced.Likewise, when an individual's attention is occupied by a distracting task, activation is reduced to the areas of the brain associated with pain such as the thalamus, insula and the anterior cingulate cortex producing correspondingly lower pain and anxiety scores (Guzzetta et al .,2007). 
The pediatric nurse has a crucial role in reducing anxiety and fear among children in the preoperative period. Some hospitals provide preoperative preparation programs to reduce anxiety in children and their parents. Preoperative preparation programs allow children and their parents the chance to familiarize themselves with the hospital environment and procedures some days before the operation. By doing this, they can increase their knowledge, learn coping strategies, and lower anxiety (Olson, 2018). The pediatric nurse can also reduce the negative effects of hospitalization by preparing the child and family for this process: establishing a trusting relationship, providing information about procedures, helping children to express emotions, and supporting coping strategies and distraction techniques. Previous experiences of the child and family should also be considered in order to establish communication appropriate for the age of the child, answer questions carefully, and eliminate needless worries (Çelebi et al., 2015; Kapkın, 2020).

Tool II: State-Trait Anxiety Inventory for Children (STAIC): This scale was developed by Charles D. Spielberger (1964) and adopted in the current study. Its validity and reliability were re-examined by Levent Kirisci and Duncan B. Clark (1996), with Cronbach's reliability coefficients for the STAIC ranging from 0.82 to 0.89. The STAIC is a self-report instrument used to evaluate anxiety in children between the ages of 7 and 11 years. The scale consists of 20 items that ask children how they feel at a particular time. Children were instructed to respond according to how they feel about their surgeries. They responded to the STAIC by selecting one of three scores (hardly ever, often, always). Scoring system: The total score is the summation of the item scores (range 20 to 60). For statistical purposes, scores from 20 to less than 30 were considered low anxiety; 30 to less than 40, average; 40 to less than 50, above average; and 50 to 60, a very high level of anxiety.

Tool III: The Children's Fear Scale (CFS): This is a self-report scale developed by McMurtry et al. (2011) to measure fear among children.

Statistical Analysis: Data were fed to the computer and analyzed using IBM SPSS software package version 20.0 (Armonk, NY: IBM Corp). Qualitative data were described using number and percent. The Shapiro-Wilk test was used to verify the normality of distribution. Quantitative data were described using range (minimum and maximum), mean, standard deviation, and median. Significance of the obtained results was judged at the 5% level. The tests used were: 1. Chi-square test: for categorical variables, to compare between different groups. 2. Fisher's exact test or Monte Carlo correction: correction for chi-square when more than 20% of the cells have an expected count less than 5. 3. Paired t-test: for normally distributed quantitative variables, to compare between two periods. 4. Kruskal-Wallis test: for abnormally distributed quantitative variables, to compare between more than two studied groups. 5. Wilcoxon signed-ranks test: for abnormally distributed quantitative variables, to compare between two periods.
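As a purely illustrative sketch of how the STAIC scoring rules and the group comparisons described above could be reproduced outside of SPSS, the following Python fragment maps total scores to the study's anxiety categories and applies a chi-square test, with Fisher's exact test as the small-cell fallback noted above. The scores shown are invented for demonstration; this is not the authors' analysis script, and the Monte Carlo correction reported in the tables plays the role that the exact test plays here.

# Minimal sketch (not the authors' code): scoring STAIC totals and comparing
# anxiety categories between two groups with a chi-square test.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def staic_category(total_score):
    """Map a STAIC total (20-60) to the categories used in the study."""
    if total_score < 30:
        return "low"
    elif total_score < 40:
        return "average"
    elif total_score < 50:
        return "above average"
    return "very high"

# Invented example scores for a study group and a control group.
study_scores = [24, 27, 31, 29, 35, 26, 38, 30, 28, 33]
control_scores = [48, 52, 45, 55, 41, 50, 46, 53, 44, 49]

categories = ["low", "average", "above average", "very high"]

def counts(scores):
    cats = [staic_category(s) for s in scores]
    return [cats.count(c) for c in categories]

table = np.array([counts(study_scores), counts(control_scores)])

# Chi-square test of independence between group and anxiety category.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# When many expected cell counts are below 5, an exact test is preferable;
# fisher_exact handles the 2x2 case (e.g., low vs. not-low anxiety).
low_vs_rest = np.array([[table[0, 0], table[0, 1:].sum()],
                        [table[1, 0], table[1, 1:].sum()]])
odds, p_exact = fisher_exact(low_vs_rest)
print(f"Fisher exact (low vs. other): p = {p_exact:.4f}")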
Results

The relationship of the preoperative level of anxiety between the school-age children in study group one and the control group one hour before surgery is shown in Table (2). It was found that the level of anxiety was very high for 46.7% of children in the control group compared to none of the children in study group one (0.0%). In addition, the anxiety level of 66.7% of children in study group one was low compared to none of them in the control group (0.0%), and the difference was statistically significant (p < 0.001).

Table (3) presents the relationship of the preoperative level of fear between the school-age children in study group one and the control group one hour before surgery. A significant difference was found between the two groups (p < 0.001), where the majority of children in study group one reported that they had simple fear (83.3%) compared to none of those in the control group (0.0%). Furthermore, 66.7% of children in the control group mentioned that they had extreme fear compared to none of those in study group one (0.0%).

Discussion

Surgical operations are situations that develop for multiple reasons and cause stress for both children and their families. This stress is reflected as anxiety, fear, and anger originating mainly from parental separation and a strange environment. The identification and treatment of these clinical phenomena are very important to prevent both psychological and physiological side effects (Aytekin et al., 2016). The most significant advantage of non-pharmacological methods is that they reduce the use of analgesics and enhance children's adaptation to stressful situations and fears. Play activities and the passive distraction technique are types of these non-pharmacological methods (Inan & Inal, 2019).

The findings of the present study revealed that the school-age children who received play activities exhibited lower anxiety and fear than those who did not (Tables 2 & 3). The positive effect of play activities in the present study could be explained in the light of certain issues; it has been stated that play activities are an effective strategy for decreasing procedural pain, fear, anxiety, and distress because they reduce the sensory and affective components of these feelings. Play activities also serve as a vehicle to modify how noxious and fearful stimuli are processed (Guzzetta et al., 2007).

The results of the present study also revealed that the use of the passive distraction technique with children preoperatively has a positive effect in reducing their anxiety and fear (Tables 4 & 5). This could be related to the fact that passive distraction (e.g., watching cartoons) leads to endorphin secretion and thus can modify emotions, increase children's comfort, and reduce pain, fear, and anxiety (Kazemi et al., 2012).

The results of the current study were congruent with the findings of Amer et al. (2021), who recommended the use of the storytelling technique (a type of passive distraction technique) beside routine hospital programs for children undergoing surgery. They found that children who were exposed to storytelling experienced lower anxiety and fear scores compared to those children who received routine hospital care only.
The findings of the current study revealed that play activities are more effective than the passive distraction technique in minimizing the levels of anxiety and fear of school-age children preoperatively, as illustrated in Tables 6 & 7. The superior effect of play activities in this study could be related to the fact that the school-age children in the play activities group acted as active participants who were completely involved, immersed, and occupied with the play games. Thus, they focused all their attention on the games, which helped them to reduce their feelings of anxiety and fear, whereas with the passive distraction technique the children acted as passive participants and were only viewers (Elsayed, 2020).

The findings reported by Arıkan & Esenay (2020) in their study on active and passive distraction interventions in an emergency pediatric department to alleviate pain and anxiety during venous blood sampling supported the findings of the present study, as they cited that the active distraction group had lower levels of procedural pain, fear, and anxiety than those in the other groups.

The findings of Inan & Inal (2019), in their clinical trial evaluating the impact of three different distraction techniques on the pain and anxiety levels of children during venipuncture, were also parallel with the findings of the current study: the anxiety levels of the group playing video games (an active distraction method) during the venipuncture procedure were significantly lower than those of the children who did not.

On the contrary, many authors (Millett and Gooding, 2017; Gul, 2021; Shekhar et al., 2022) stated that there was no significant difference between the groups of active and passive distraction techniques in reducing pain, anxiety, and fear. In addition, Durak & Uysal (2022) and Ugucu et al. (2022) reported that cartoon watching (a type of passive distraction) was more effective in reducing pain, anxiety, and fear in children than distraction cards (a type of active distraction).

Unfortunately, children's previous experience of hospitalization in the current study did not have the expected effect on their anxiety and fear levels; children who had no previous experience of hospitalization actually had lower anxiety and fear levels than those who did.

Conclusion: It is concluded from the present study that practicing play activities and the passive distraction technique with school-age children preoperatively minimized their levels of anxiety and fear. In addition, play activities were more effective in decreasing children's levels of anxiety and fear than the passive distraction technique.

Recommendations: 1. An educational training program should be conducted for pediatric nurses about various methods of distraction to minimize children's anxiety and fear preoperatively. 2. Play activities, as a non-pharmacological anxiety and fear management approach, should be used routinely in daily care in pediatric hospitals. 3. Play activities should be offered to children before surgical procedures in hospitals. 4. Playing rooms with interactive toys should be established in surgical units, and children should be shown how to use them.
The Children's Fear Scale (CFS) was used to measure fear among children. The child's version consists of 5 faces. The first face (from the left) is not scared at all and represents no fear. The next four faces show incremental amounts of fear, ranging from low fear to very fearful, and the last face represents extreme fear. These faces show different amounts of being scared: "This face [point to the left-most face] is not scared at all, this face is a little bit more scared [point to second face from left], a bit more scared [sweep finger along scale], right up to the most scared possible [point to the last face on the right]." For statistical purposes, each face is assigned a numerical value from 0 to 4 (zero indicating no fear and 4 indicating extreme fear).

Figure 1: Children's Fear Scale (McMurtry, C.M., Noel, M., Chambers, C.T., & McGrath, P.J. (2011). Children's fear during procedural pain: Preliminary investigation of the Children's Fear Scale. Health Psychology, advance online publication.)

Method: 1. Approval from the Research Ethics Committee, Faculty of Nursing, Alexandria University was obtained. 2. An official letter was sent from the Faculty of Nursing to the directors of the previously mentioned settings to facilitate research implementation after explanation of the aim of the study. 3. Tool I was developed by the researcher. 4. Tools II & III were translated into Arabic. 5. Tools I, II & III were tested for content validity by five experts in the pediatric nursing field; necessary modifications were made, and the content validity was 94.8%.

Table (2): The relationship of the preoperative level of anxiety between the school-age children in study group one and the control group one hour before surgery. Chi-square test; MC: Monte Carlo; *: statistically significant at p ≤ 0.05.
Table (3): The relationship of the preoperative level of fear between the school-age children in study group one and the control group one hour before surgery. Chi-square test; MC: Monte Carlo; *: statistically significant at p ≤ 0.05.
Table (4): The relationship of the preoperative level of anxiety between the school-age children in study group two and the control group one hour before surgery. Chi-square test; MC: Monte Carlo; *: statistically significant at p ≤ 0.05.
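For completeness, a tiny illustrative snippet of the CFS face-to-score mapping described above is given below. The intermediate labels are assumptions chosen to mirror the fear levels named in the Results; they are not taken from the original scale documentation.

# Illustrative only: mapping CFS face positions (0-4) to fear labels.
CFS_LABELS = {
    0: "no fear",        # left-most face
    1: "simple fear",    # assumed label, mirroring the Results terminology
    2: "medium fear",    # assumed label
    3: "high fear",      # assumed label
    4: "extreme fear",   # right-most face
}

def cfs_label(face_index):
    """Return the fear label for a chosen face on the Children's Fear Scale."""
    if face_index not in CFS_LABELS:
        raise ValueError("CFS face index must be between 0 and 4")
    return CFS_LABELS[face_index]

print(cfs_label(0), "|", cfs_label(4))  # no fear | extreme fear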
v3-fos-license
2023-12-16T17:16:17.116Z
2023-01-01T00:00:00.000
266301156
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2023/90/e3sconf_icsdg2023_01038.pdf", "pdf_hash": "ba7a8c4ec0f0331a33eacfd1ddaa53cc2a590eeb", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42045", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "sha1": "3fbd4dcd03cd195a090235f85f052b9d5d3d4d31", "year": 2023 }
pes2o/s2orc
Regenerative Manufacturing: Crafting a Sustainable Future through Design and Production

In an era characterised by mounting environmental concerns and a growing awareness of the critical need for sustainability, the manufacturing industry stands at a crossroads. "Regenerative Manufacturing" emerges as a visionary strategy that not only tries to lower the ecological footprint of production but also seeks to restore and rejuvenate ecosystems, communities, and economies. This abstract provides a look into the profound potential of regenerative manufacturing, showcasing its main principles and processes and its transformational impact on the future of design and production. Regenerative manufacturing signifies a fundamental transformation in the conceptualization, production, and use of items. The manufacturing process incorporates sustainability, circularity, and resilience throughout all its stages, encompassing material selection, design, production, distribution, and end-of-life concerns. The holistic approach discussed here places significant emphasis on the reduction of waste, the optimisation of energy usage, and the utilisation of regenerative resources. This strategy aims to establish a regenerative cycle that actively supports the nourishment of the environment rather than causing its depletion. By employing novel methodologies such as biomimicry and generative design, this approach effectively harnesses the knowledge inherent in nature to stimulate the development of sustainable solutions. The regenerative manufacturing paradigm places significant emphasis on the core principles of collaboration and inclusivity. The recognition of the interconnection of all stakeholders is evident, encompassing producers, designers, customers, and local communities. By promoting openness and upholding ethical standards, this approach facilitates socially responsible production techniques that enhance the agency of local economies, safeguard cultural heritage, and prioritise the welfare of employees. The revolutionary capacity of regenerative manufacturing extends beyond the scope of specific goods and sectors. Its power lies in its ability to transform economic systems, facilitating a shift away from a linear model characterised by the processes of extraction, production, and disposal, towards a regenerative and circular economy. This transition offers not only ecological advantages but also financial robustness and enduring success.
Introduction

The necessity for sustainable manufacturing has emerged as a crucial concern in the 21st century as the global community works to address significant environmental issues, the limited availability of resources, and the repercussions of an unwavering focus on industrial expansion [1]. With the ongoing growth of the global population and the corresponding rise in demand for consumer products, energy, and infrastructure, it has become evident that conventional manufacturing practises are no longer viable in the long term. Human activities have degraded the Earth's ecosystems, depleted limited resources, and accelerated climate change. In light of this urgent circumstance, sustainable manufacturing has arisen as a pivotal shift in thinking, embodying a trajectory aimed at reducing the ecological consequences of industrial operations while simultaneously promoting economic advancement and public welfare. The traditional industrial paradigm, which follows a linear "take-make-dispose" framework, has historically operated under the assumption of inexhaustible resources and the transfer of environmental burdens to other entities. In the present model, raw materials are extracted at a rate that is not environmentally sustainable, processed and converted into various goods, and ultimately discarded as waste in landfills or through incineration. This linear strategy has led to substantial environmental consequences, including the degradation of habitats, pollution, and the accumulation of greenhouse gases in the atmosphere. The repercussions of this nonviable model are progressively manifesting themselves in climate change, the depletion of biodiversity, and the shortage of resources [2]. The old manufacturing model is being challenged by sustainable manufacturing, which promotes a circular and regenerative approach. Fundamentally, sustainable manufacturing seeks to minimise the generation of waste, decrease the utilisation of resources, and diminish the environmental impact associated with industrial procedures. The necessity of sustainable manufacturing is rooted in its capacity to alleviate the detrimental environmental consequences of industrialization while concurrently addressing the social and economic aspects of sustainability. A key element of sustainable manufacturing is the principle of "design for sustainability." Under this framework, items are designed and engineered with consideration for their complete lifecycle, from the extraction of raw materials to eventual disposal or recycling at the end of their useful life. The objective of design for sustainability is to reduce resource utilisation, energy consumption, and waste generation through the prioritisation of environmentally friendly materials, efficient production methods, and the incorporation of recyclable or biodegradable components [3]. The incorporation of renewable resources, such as solar and wind power, into manufacturing operations is also advocated, resulting in a decreased dependence on fossil fuels and a reduction in greenhouse gas emissions. In addition, sustainable manufacturing advocates the adoption of cutting-edge technology and novel materials. The utilisation of additive manufacturing, more generally referred to as 3D printing, serves as a
prime illustration of how technology may be effectively employed to achieve sustainable objectives.The utilisation of 3D printing technology enables the achievement of precise and timely manufacturing processes, resulting in a substantial reduction in both material waste and transportation expenses.Moreover, it facilitates the fabrication of complex and lightweight architectures that were previously unachievable using conventional techniques, hence boosting energy efficiency and optimising material use [4]- [8].The concept of sustainable manufacturing encompasses the field of materials science as well.Scientists are currently engaged in the advancement of a diverse array of sustainable materials, including bioplastics, recycled metals, and bio-composites.These materials are not only ecologically sound but also possess the necessary mechanical characteristics to fulfil a variety of practical uses.These materials has the capacity to bring about a significant transformation in several industries through the reduction of dependence on primary resources and the mitigation of environmental consequences associated with production processes [9].In conjunction with the technical dimensions of sustainable manufacturing, the significance of social responsibility and ethical practises cannot be overstated.Manufacturers and designers are obligated to take into account the welfare of workers, local communities, and the world population as a whole.Ethical labour practises, equitable remuneration, and secure working environments are vital elements of sustainable production.In addition, it is imperative for manufacturers to actively involve themselves in local communities, actively soliciting their feedback and addressing any problems that may arise [10].This approach is crucial in order to ensure that the advantages of industrialisation are spread in a fair and equitable manner. 
In addition to its use on the factory floor, sustainable manufacturing encompasses the fundamental principles of a circular economy.The focal point of this economic framework lies in the promotion of the reuse, refurbishment, and recycling of items and materials, hence redirecting them away from landfills and incineration facilities [11].Through the implementation of extended producer responsibility programmes, manufacturers assume accountability for the complete life cycle of their products, thereby fostering the collecting and recycling of utilised things.The circular economy not only facilitates resource conservation but also engenders novel economic prospects, such as the practise of remanufacturing and the establishment of secondary markets for recycled materials.The concept of sustainable manufacturing also acknowledges the significance of engaging stakeholders in collaborative efforts.In order to advance sustainable practises, it is imperative for manufacturers, designers, consumers, and legislators to collaborate harmoniously [12].The alignment of interests towards sustainability is significantly influenced by transparent supply chains, ethical consumer choices, and severe environmental regulations.In addition, the implementation of education and awareness campaigns plays a crucial role in facilitating the transformation of societal norms towards consumption patterns that are more sustainable.These efforts effectively promote the adoption of informed decision-making and the endorsement of sustainable products among individuals [13]- [15].The obligation to engage in sustainable manufacturing include the crucial aspect of restoring and preserving ecosystems.The objective of sustainable manufacturing is to mitigate the adverse environmental impacts resulting from industrial activity through the implementation of ecosystem restoration projects [16].For example, firms have the opportunity to allocate resources towards reforestation programmes as a means to counterbalance carbon emissions, or alternatively, contribute to initiatives aimed at safeguarding crucial habitats.Sustainable manufacturing plays a significant role in the restoration of ecosystems, thereby contributing to the regeneration of natural resources and facilitating the reestablishment of the intricate equilibrium of our planet.Energy efficiency is a fundamental component of sustainable production.Given the ongoing increase in global energy consumption, it is crucial to prioritise the transition towards sustainable energy sources.Sustainable manufacturing facilities are progressively dependent on renewable energy sources, including solar, wind, and hydropower [17].The implementation of energy-efficient technologies, in conjunction with intelligent energy management systems, contributes to the reduction of energy consumption, the decrease in operational expenses, and the mitigation of the carbon footprint associated with industrial processes.Additionally, the implementation of legal and policy frameworks is crucial in promoting and encouraging the use of sustainable manufacturing practises [18].It is imperative for governments and international organisations to establish unambiguous benchmarks and offer financial incentives in order to foster the adoption of sustainable practises among enterprises.These measures encompass carbon pricing mechanisms, tax incentives aimed at promoting eco-friendly investments, and rules that require eco-labeling and product lifecycle analyses [19]. 
Public awareness and education play crucial roles in the imperative for sustainable production. As individuals gain greater awareness of the ecological and societal consequences associated with their consumption patterns, they possess the capacity to stimulate the market for sustainable goods and exert pressure on manufacturers to uphold responsible practises. Education campaigns, sustainability certifications, and comprehensive reporting on environmental and social performance serve as crucial tools for empowering customers to make well-informed decisions that align with the principles of sustainable production [20]. In the current era, the global community is confronted with a pivotal moment, characterised by a multitude of environmental obstacles that necessitate a profound reassessment of our methods for generating commodities and facilitating transactions. In light of the prevailing issues of climate change, resource depletion, and ecological degradation, a novel concept referred to as "regenerative manufacturing" has emerged and gained traction. Regenerative manufacturing has emerged as a viable solution to address the constraints and deficiencies inherent in conventional manufacturing methods. This innovative approach not only seeks to mitigate environmental damage but also strives to rejuvenate and invigorate ecosystems, communities, and economies. This essay examines the birth of regenerative manufacturing, delving into its fundamental principles and assessing its potential to influence a future characterised by sustainability and regeneration [21]. The prevailing style of production globally has historically been traditional manufacturing, which is characterised by linear and extractive processes. Within this conventional framework, raw materials are procured, converted into finished products, and ultimately disposed of as waste. This linear method has resulted in the exhaustion of limited resources, the degradation of habitats, and the accumulation of pollutants and greenhouse gases. The implications of this model have become progressively evident, as indicated by escalating global temperatures, the extinction of species, and damage to ecosystems. Regenerative manufacturing has emerged as a viable solution to address the environmental crises that have been compounded by conventional manufacturing practises [22]- [26].
The concept of "design for regeneration" is a fundamental premise in the field of regenerative manufacturing. This theory underscores the significance of incorporating ecological concepts into the design and production processes of various products. The concept of design for regeneration extends beyond mere environmental harm reduction, as it actively seeks to develop goods and systems that exert a beneficial influence on the ecosystems they engage with [27]. This entails emulating nature's effective and sustainable mechanisms, such as circularity and the optimisation of resources. Biomimicry, an integral element of design for regeneration, entails deriving inspiration from nature's mechanisms for addressing intricate challenges [28]. For instance, an examination of the hierarchical arrangement of branches in a tree can serve as a source of inspiration for the optimisation of distribution networks, whereas a comprehensive understanding of the inherent ability of some species to regenerate and repair themselves can contribute to the advancement of durable materials. Biomimicry encourages designers and engineers to seek inspiration from the natural world in order to develop new and environmentally sustainable solutions. Generative design is an additional facet of design for regeneration that utilises computer techniques to enhance designs by optimising them according to particular criteria. Generative design software has the capability to generate designs that optimise material utilisation, minimise waste generation, and improve energy efficiency by incorporating environmental and sustainability objectives. This not only facilitates the development of sustainable products but also enhances creativity by enabling the exploration of design alternatives that may not be readily apparent to human designers [29]. The third component of design for regeneration, known as modular and adaptive design, places emphasis on the development of goods that can be disassembled, repaired, and improved with ease. This methodology effectively mitigates the necessity for frequent substitution and disposal, hence prolonging the durability of goods and mitigating the ecological repercussions associated with production [30]- [31]. Technological breakthroughs have a significant impact on the development of regenerative manufacturing. Advanced manufacturing technologies, such as additive manufacturing, colloquially referred to as 3D printing, are fundamentally transforming the production landscape. The utilisation of 3D printing technology enables accurate and customizable manufacturing processes while minimising the generation of excess material waste. Additionally, it facilitates the development of intricate and lightweight constructions that were previously difficult to attain using conventional techniques [32]. This technique is in accordance with the concepts of regenerative manufacturing since it effectively minimises material waste and energy usage.
The utilisation of digital twins, which are virtual replicas of tangible entities or systems, is progressively gaining prominence within the area of sustainable manufacturing.These technologies provide the continuous monitoring, analysis, and optimisation of production processes in real-time.Digital twins play a crucial role in the identification of potential for resource efficiency and waste reduction by simulating manufacturing scenarios and assessing environmental implications.This technology enables producers to make well-informed decisions that are in line with the principles of regeneration.The integration of the Internet of Things (IoT) inside the manufacturing sector is a significant factor in the advancement of regenerative manufacturing.Internet of Things (IoT) devices possess the capability to gather and communicate data pertaining to energy consumption, machinery functionality, and environmental circumstances.The aforementioned data has the potential to provide valuable insights for making informed decisions pertaining to energyefficient production, predictive maintenance, and environmental sustainability [33]- [36].The implementation of Internet of Things (IoT) technology allows manufacturers to enhance their operational efficiency while simultaneously reducing their impact on the environment.Regenerative manufacturing focuses significant importance on ethical and social responsibility as well.The recognition of the connection among various stakeholders, including manufacturers, designers, customers, and local communities, is evident.The incorporation of ethical manufacturing practises, fair remuneration, secure working environments, and the upholding of human rights are fundamental components of regenerative manufacturing.The primary focus is on promoting the welfare and livelihoods of employees and local communities, so cultivating economic resilience and ensuring social equality.The establishment of transparent supply chains is a fundamental element in the context of ethical production.Consumers can make educated decisions that are consistent with their beliefs by gaining access to information regarding the origins and production methods of items.The transparency provided by this approach also serves to ensure firms are held responsible for their environmental and social practises.The selection of ethical consumer options has the potential to stimulate the demand for sustainable and regenerative products, thereby incentivizing enterprises to embrace responsible practises [37].Community participation plays a vital role in the area of regenerative manufacturing.It is imperative for manufacturers to proactively solicit input from local communities, effectively address their issues, and make constructive contributions to their overall welfare.By actively engaging communities in the decision-making processes, manufacturers may effectively ensure that their activities have a positive impact on the local regions in which they are situated, rather than causing any detrimental effects [38].These programmes have a significant role in enhancing economic resilience and promoting the well-being of the community. 
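As a concrete, purely illustrative sketch of the digital-twin and IoT monitoring ideas mentioned earlier in this section, the following Python fragment compares the estimated energy use of two production schedules and flags a drifting sensor reading for maintenance. All machine names, power figures, and thresholds are invented; real digital-twin platforms and IoT stacks involve far more than this.

# Illustrative sketch only: a toy "digital twin" that estimates energy use for
# alternative production schedules and flags sensor drift for maintenance.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Machine:
    name: str
    idle_kw: float      # assumed idle power draw
    active_kw: float    # assumed power draw while producing

def schedule_energy_kwh(machine, active_hours, idle_hours):
    """Estimate energy use (kWh) for one machine under a given schedule."""
    return machine.active_kw * active_hours + machine.idle_kw * idle_hours

press = Machine("hydraulic_press", idle_kw=2.0, active_kw=15.0)

# Compare two hypothetical schedules: batching work vs. leaving the machine idling.
batched = schedule_energy_kwh(press, active_hours=6, idle_hours=1)
spread_out = schedule_energy_kwh(press, active_hours=6, idle_hours=6)
print(f"batched: {batched:.0f} kWh, spread out: {spread_out:.0f} kWh")

def drift_alert(readings, new_reading, z_threshold=3.0):
    """Flag a sensor reading that deviates strongly from its recent history."""
    mu, sigma = mean(readings), stdev(readings)
    return sigma > 0 and abs(new_reading - mu) / sigma > z_threshold

vibration_history = [0.42, 0.40, 0.43, 0.41, 0.44, 0.42]
print(drift_alert(vibration_history, 0.61))  # True -> schedule a maintenance check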
Principles of Regenerative Manufacturing

The concept of design for regeneration is a fundamental aspect of regenerative manufacturing, and it fundamentally transforms our understanding of, production of, and engagement with various goods and systems. Design for regeneration is based on the notion that industrial processes should not solely focus on reducing environmental harm but should actively seek to make positive contributions to ecological health and resilience. This methodology utilises the ingenuity of natural systems, employing concepts derived from the natural environment, including biomimicry, generative design, and modular and adaptive design. This investigation examines three fundamental elements of design for regeneration, elucidating their transformative impact on manufacturing methodologies and their contribution to the advancement of sustainability objectives.

Biomimicry and nature-inspired design are two closely related concepts that have gained significant attention in the fields of engineering and design. Biomimicry refers to the practise of emulating and drawing inspiration from nature. Often known as nature-inspired design, it is a fundamental aspect of regenerative manufacturing that draws inspiration from the diverse and intricate patterns found in the natural world [39]. This methodology entails examining and replicating the astute tactics, structures, and mechanisms observed in ecosystems, species, and animals in order to develop sustainable and regenerative solutions. The underlying principle of biomimicry posits that nature, over billions of years of evolution, has already effectively addressed numerous intricate design and engineering problems encountered by humans, as shown in Fig. 1.

Fig. 1 Generative Design application in manufacturing

Within the field of manufacturing, biomimicry presents a vast array of inventive concepts. Engineers have derived stronger and lighter materials for a wide range of uses, such as aviation components and building materials, through the analysis of honeycomb structures. The vascular structure found in leaves has served as a source of inspiration for the development of transport networks characterised by enhanced efficiency and resilience. The unique characteristics of geckos' feet have prompted the advancement of adhesive materials that provide non-destructive adhesion and detachment, hence minimising waste in production and maintenance procedures. Biomimicry promotes a significant transformation in cognitive processes, urging designers and engineers to perceive nature not merely as a means to exploit but rather as a source of guidance and inspiration [40]- [42].

Generative design, which utilises sophisticated algorithms and computational technology, is another significant aspect of design for regeneration. Generative design fundamentally revolves around leveraging computational capabilities to optimise and enhance designs according to predetermined criteria, including but not limited to sustainability, efficiency, and material utilisation. Through the use of these criteria, designers have the opportunity to explore a wide range of design alternatives that may not be readily apparent to human designers. This process ultimately leads to the development of solutions that are more efficient in their use of resources and more environmentally sustainable. The process of generative design commences by constructing a digital representation of the desired product or
system. This model functions as a platform for conducting experiments in which designers input their design objectives, limitations, and performance metrics. The generative design programme utilises sophisticated algorithms to produce and iterate designs that satisfy the predetermined criteria [43]. The designs exhibit a range of variations in terms of geometry, material utilisation, and other characteristics, hence demonstrating a multitude of potential solutions [44]. Modular design encompasses the development of goods or systems comprising distinct modules or components that can be readily replaced or improved. This technique differs from conventional monolithic designs, which can pose difficulties or cost impracticalities when it comes to repairs and upgrades. Modular smartphones provide customers with the capability to substitute specific components, such as the camera or battery, as opposed to replacing the entire device. Not only does this practise contribute to the reduction of electronic waste, but it also fosters resource efficiency. Adaptive design extends the principles of modularity by developing products that possess the ability to evolve and adjust in response to shifting requirements and circumstances. In the field of architecture, adaptive building designs have the potential to adjust to fluctuations in temperature, occupancy levels, and energy availability. This capability allows for the optimisation of both comfort levels and resource utilisation. Adaptive systems can achieve responsiveness through the integration of sensors, actuators, and intelligent technology. Modular and adaptive design strategies correspond effectively with the regenerative manufacturing philosophy, as they contribute to waste reduction, resource conservation, and the cultivation of a repair- and reuse-oriented culture. They facilitate the adaptation of products and systems to changing circumstances, hence mitigating the necessity for new replacements and prolonging the lifespan of current assets. The pursuit of regenerative manufacturing involves a strong interconnection between the principles of modular and adaptive design and the concepts of a circular economy and sustainable materials management. These principles acknowledge the significant influence of material choices on the environmental and economic sustainability of manufacturing operations. This study examines three key elements of material innovation and the circular economy in the context of modular and adaptive design: sustainable materials selection, closed-loop material cycles, and recycling and upcycling solutions. Collectively, these components constitute a robust basis for regenerative manufacturing, presenting a trajectory towards waste reduction, improved utilisation of resources, and ecological rejuvenation [45]- [46].
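To make the generative-design loop described above more tangible, here is a toy Python sketch that randomly proposes beam cross-sections, scores them against a material-use criterion and a stiffness constraint, and keeps the best candidate. It is a stand-in for real generative-design software; the formulas, parameter ranges, and threshold are invented for illustration. Real tools explore far richer geometry and multiple objectives, but the generate-evaluate-select cycle is the same basic idea.

# Toy illustration of a generative loop: propose candidate designs, score them
# against sustainability-style criteria, and keep the best one.
import random

random.seed(0)

def score(width_mm, height_mm):
    """Return (material_volume, stiffness_proxy) for a rectangular beam section."""
    volume = width_mm * height_mm                  # proxy for material used
    stiffness = width_mm * height_mm ** 3 / 12.0   # second moment of area
    return volume, stiffness

def generate_designs(n, min_stiffness):
    """Random search: keep the feasible design that uses the least material."""
    best = None
    for _ in range(n):
        w = random.uniform(10, 60)    # assumed allowable width range (mm)
        h = random.uniform(10, 120)   # assumed allowable height range (mm)
        volume, stiffness = score(w, h)
        if stiffness >= min_stiffness and (best is None or volume < best[0]):
            best = (volume, w, h)
    return best

best = generate_designs(n=5000, min_stiffness=2.0e6)
if best:
    volume, w, h = best
    print(f"best feasible section: {w:.1f} x {h:.1f} mm, material proxy {volume:.0f} mm^2")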
The Role of Technology and Innovation

Technology and innovation are crucial factors in the advancement of regenerative manufacturing, as they have the power to transform conventional production methods and promote sustainability. Advanced manufacturing technologies play a prominent role in driving this change by providing novel approaches to minimise waste, boost the efficient utilisation of resources, and bolster the overall environmental and economic sustainability of manufacturing processes. This study examines three fundamental elements of sophisticated manufacturing technologies in the area of regenerative manufacturing: additive manufacturing (also known as 3D printing), nanotechnology and materials science, and the Internet of Things (IoT) and data analytics in the field of manufacturing. Collectively, these technical breakthroughs are facilitating the shift towards manufacturing practises that are more sustainable and regenerative in nature. The technology of additive manufacturing, also referred to as 3D printing, has emerged as a transformative innovation with significant implications for regenerative manufacturing [47]. It also presents the possibility of decentralised and regionalized manufacturing. This implies that the production of goods can be localised in proximity to the intended consumers, hence mitigating the necessity for extensive transportation and the consequent release of carbon emissions. Moreover, 3D printing has the potential to allow production to fulfil consumer demands immediately, hence minimising the need for excessive inventory and mitigating the associated risks of overproduction. The utilisation of 3D printing technology is also playing a significant role in the advancement of sustainable materials. Scientists are currently engaged in ongoing research efforts to develop bio-based and recycled materials that demonstrate compatibility with 3D printing methodologies. These materials possess the twin benefits of lower environmental impact and more design freedom [48].
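As a back-of-the-envelope illustration of the material-waste argument above, the short snippet below compares how much input material ends up in a finished part for a subtractively machined component versus an additively built one. The masses are hypothetical; actual figures depend heavily on the part geometry, process, and support-structure requirements.

# Hypothetical comparison of material utilisation: machining from billet vs. 3D printing.
def utilisation(final_part_kg, input_material_kg):
    """Fraction of the input material that ends up in the finished part."""
    return final_part_kg / input_material_kg

part_mass = 1.2                      # kg, assumed finished bracket
machined_billet = 9.0                # kg of stock required when machining
printed_input = 1.5                  # kg of powder/filament including supports

for label, input_kg in [("subtractive", machined_billet), ("additive", printed_input)]:
    u = utilisation(part_mass, input_kg)
    print(f"{label}: {u:.0%} of input material ends up in the part, "
          f"{input_kg - part_mass:.1f} kg of waste or recyclable scrap")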
Nanotechnology is also of critical importance in enhancing energy efficiency and promoting sustainability. Nanomaterials have the potential to augment the operational capabilities of photovoltaic cells, hence amplifying the conversion efficiency of solar energy into electrical power. In a similar vein, the application of nanocoatings has been found to effectively mitigate friction and wear in industrial machinery, resulting in notable energy conservation and prolonged operational durability of the equipment. In addition, nanotechnology plays a significant role in enhancing resource efficiency through its ability to exert precise control over material properties. This has the potential to facilitate the creation of materials that possess diminished resource demands [49]. For instance, it can enable the fabrication of lightweight structural elements that exhibit both strength and durability, necessitating a lower quantity of raw materials and energy during the manufacturing process. The field of materials science serves as a valuable complement to nanotechnology, as it establishes a fundamental basis for the development and production of environmentally friendly materials. Academic researchers are currently engaged in the exploration of alternative materials, with the aim of identifying solutions that combine environmentally sustainable characteristics with the requisite mechanical capabilities for diverse applications. The utilisation of sustainable materials, such as bioplastics sourced from renewable resources or recycled metals, is becoming increasingly prominent in the field of regenerative manufacturing. These materials serve to decrease the dependence on primary resources and alleviate the environmental consequences associated with industrial operations. Ongoing investigations in the field of materials science are also revealing novel methodologies, such as the development of self-healing materials capable of autonomously repairing damage. This advancement holds the potential to significantly prolong the durability of various products, hence mitigating the necessity for frequent replacements. In brief, the fields of nanotechnology and materials science play a crucial role in the progression of regenerative manufacturing by facilitating the creation of materials that possess improved characteristics and a focus on long-term viability. The integration of these disciplines facilitates the development of environmentally sustainable materials and methodologies that adhere to the ideals of waste minimization, efficient resource utilisation, and responsible environmental stewardship.

The integration of the Internet of Things (IoT) and data analytics is facilitating a paradigm shift in the manufacturing industry, characterised by enhanced precision, efficiency, and sustainability. These technologies provide the continuous monitoring, analysis, and optimisation of industrial processes, thereby offering useful insights to mitigate wastage, boost the utilisation of resources, and improve environmental performance. The Internet of Things (IoT) encompasses the integration of physical items and devices with internet connectivity, enabling them to gather and share data. In the manufacturing domain, Internet of Things (IoT) sensors are integrated into machinery, equipment, and even goods, facilitating the acquisition of real-time data pertaining to diverse facets of the production process [50]. As shown in Fig.
2, the Internet of Things (IoT) sensors have the capability to monitor several elements, including but not limited to temperature, humidity, energy usage, and machine performance.The aforementioned data is thereafter communicated to a centralised system, wherein it can be subjected to analysis and utilised for the purpose of making well-informed decisions.Predictive maintenance stands as a prominent advantage of implementing Internet of Things (IoT) technology inside the manufacturing sector.Through the constant monitoring of equipment, Internet of Things (IoT) sensors possess the capability to identify initial indications of deterioration, malfunctions, or inefficiencies.This enables manufacturers to proactively plan maintenance activities, thereby minimising instances of unplanned downtime, prolonging the operational lifespan of equipment, and promoting resource conservation.The Internet of Things (IoT) has a substantial influence on energy efficiency.Sensors possess the capability to actively monitor and track energy use in real-time, hence enabling the identification of potential areas for optimisation.Manufacturers have the potential to decrease energy costs and mitigate their carbon emissions by making modifications to equipment settings or production schedules in response to energy demand.Data analytics, which is enhanced by the utilisation of machine learning and artificial intelligence techniques, serves as a valuable adjunct to the Internet of Things (IoT) by effectively handling and deciphering the extensive volumes of data produced by IoT sensors.Sophisticated algorithms possess the capability to scrutinise past data, discern recurring trends, and generate prognostications regarding forthcoming manufacturing results [51]. Case studies with Examples Tesla, Inc. is a prominent exemplar of a corporation leading the charge in sustainable manufacturing within the automotive sector, particularly in the domain of electric vehicles (EVs).The dedication of Tesla to electric vehicles (EVs) has brought about a significant transformation in the market, as shown in fig. 3. 
Tesla's production of electric vehicles (EVs) that emit no tailpipe emissions is making a significant contribution to the reduction of greenhouse gas emissions and air pollution. Tesla also implements sustainable methodologies in its production facilities, encompassing the utilisation of renewable energy resources, such as solar panels and wind power, to generate electricity for its factories. The primary objective of the company's Gigafactories is to attain a state of zero emissions by incorporating renewable energy sources, implementing material recycling practises, and minimising waste generation. The Toyota Motor Corporation is widely recognised as a trailblazer in the implementation of lean manufacturing principles and has established itself as a prominent frontrunner in the area of sustainable manufacturing practises [52]. The Toyota Production System (TPS) implemented by the corporation serves as a prominent exemplar of lean manufacturing, placing significant emphasis on optimising efficiency and minimising wasteful practises. Toyota's operational strategy encompasses the implementation of just-in-time production, which aims to reduce inventory waste, as well as a commitment to continual development. Moreover, Toyota demonstrates a steadfast dedication to mitigating the ecological consequences associated with its automotive products, placing significant emphasis on the advancement of hybrid and hydrogen fuel cell technologies. The Toyota Mirai exemplifies a hydrogen fuel cell automobile built with the purpose of mitigating emissions and fostering the advancement of sustainable mobility. Patagonia, a well-established enterprise in the outdoor clothing industry, has emerged as a prominent advocate for sustainability by integrating it as a fundamental tenet of its corporate vision. Within the fashion industry, it aggressively advocates for the adoption of regenerative practises. Patagonia incorporates organic and recycled materials into its product offerings, implements measures to curtail water consumption within its supply chain, and promotes customer engagement in clothing repair and reuse efforts, exemplified by the "Worn Wear" programme. In addition, Patagonia holds the distinction of being a Certified B Corporation, which underscores its dedication to upholding environmental and social obligations. Another example concerns the sustainable fashion and circular design practises employed by the brand EILEEN FISHER. EILEEN FISHER, a renowned company specialising in women's clothes, has emerged as a trailblazer in the area of circular fashion and sustainable design. The company has established "take-back" initiatives, enabling customers to return their previously owned EILEEN FISHER apparel in exchange for shop credit. Subsequently, these previously owned items undergo a process of refurbishment, followed by their resale or transformation into novel designs through upcycling. The company has additionally pledged to use organic and sustainable fibres, minimise water wastage, and uphold ethical labour practises within its supply chain. The commitment of EILEEN FISHER to circular design and the utilisation of sustainable materials serves as a notable illustration of the fashion industry's transition towards regenerative practises [53]. Community-led initiatives refer to projects or programmes that are driven and implemented by members of a specific community. These initiatives are characterised by active participation and engagement by community members. The topic of discussion
pertains to the establishment of local manufacturing hubs, specifically focusing on the utilisation of Fab Labs and Makerspaces.Community-led projects, such as Fab Labs and makerspaces, serve as local manufacturing hubs that enable individuals and small enterprises to actively participate in sustainable and regenerative manufacturing practises.These facilities offer individuals with the opportunity to utilise cutting-edge manufacturing equipment such as 3D printers, computer numerical control (CNC) machines, and laser cutters.These initiatives facilitate cooperative learning and foster creativity, empowering members of the community to engage in the process of designing, prototyping, and manufacturing items within their local context.Fab Labs and makerspaces have been found to facilitate and nurture creativity among individuals.Additionally, they contribute to the reduction of transportation emissions that are typically linked with lengthy supply chains.Also, these spaces promote a more decentralised approach to manufacturing, hence supporting a shift away from centralised production methods.The topic of discussion revolves around the significance of artisanal and indigenous practises in relation to cultural sustainability.Numerous global communities, with a particular emphasis on indigenous and artisanal groups, engage in sustainable and regenerative manufacturing practises that are firmly ingrained in their cultural past.These practises place a high emphasis on the use of locally sourced materials, the application of traditional artisan techniques, and the preservation and transmission of intergenerational knowledge.Indigenous people in different geographical areas engage in the production of textiles, pottery, and handicrafts, employing sustainable methodologies and utilising locally-sourced resources derived from their natural environments.The community-led initiatives not only make a significant contribution to the preservation of culture but also provide useful insights into sustainable resource management and regenerative practises.The presented case studies and examples illustrate the adoption of sustainable and regenerative manufacturing practises by enterprises operating in sectors such as automotive and fashion.Also, it is evident that community-led initiatives, such as the establishment of local manufacturing centres and the utilisation of indigenous practises, play a significant role in advocating for sustainability and safeguarding cultural heritage.Collectively, these initiatives and organisations serve as prime examples of the continuous transition towards a more regenerative and ecologically conscientious approach to manufacturing and production. 
Conclusion

The emergence of 3D manufacturing, sometimes referred to as 3D printing, signifies a significant and revolutionary influence within the area of manufacturing and production. This technology has undergone rapid development and has been widely embraced in many sectors, radically transforming the processes of item and product design, prototyping, and manufacturing.

The utilisation of 3D printing spans various industries, encompassing aerospace, healthcare, automotive, and consumer goods, among others. The adaptability of this technology facilitates the production of complicated and personalised designs, hence enabling novel advancements that were previously challenging or unattainable using conventional manufacturing techniques.

The process of prototyping has been significantly transformed by the rise of 3D printing technology, leading to rapid iteration. This technology enables engineers and designers to efficiently generate tangible prototypes, thereby diminishing the duration and expenses associated with product development. The use of an iterative strategy expedites the process of product development and fosters creativity. The capability to produce highly customised and personalised objects is regarded as one of the most prominent benefits of 3D printing. The healthcare industry, as an illustration, derives advantages from the utilisation of patient-specific implants and prosthetics, whereas customers have the ability to customise their products according to their individual preferences.

The utilisation of 3D printing technology allows for the creation of intricate and organic geometries that pose difficulties or are unattainable through traditional manufacturing techniques. This capability has a significant influence on various industries, particularly the aerospace sector, where the importance of lightweight and aerodynamic structures cannot be overstated.

Newly developed printing materials provide the potential to enhance the range of uses for 3D printing technology while simultaneously addressing environmental considerations. The proliferation of desktop 3D printers has enabled individuals and small enterprises to actively participate in the utilisation of 3D printing technology. This accessibility has resulted in the emergence of a dynamic community of makers and individuals engaged in do-it-yourself activities.

Fig. 2 Internet of Things (IoT) data in manufacturing
Fig. 3 Graphical representation of pioneering lean manufacturing in industry
v3-fos-license
2023-11-18T16:06:54.710Z
2023-11-16T00:00:00.000
265271940
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/pcn5.159", "pdf_hash": "7918f50d533917bdf3c8338ded355558f5fbbded", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42046", "s2fieldsofstudy": [ "Medicine" ], "sha1": "7117cb1c03bd6c3ca23b8b456280a0631f9622ca", "year": 2023 }
pes2o/s2orc
Delirium due to Trousseau syndrome treated with memantine and perospirone: A case report

Abstract

Background: Trousseau syndrome is a hypercoagulability syndrome associated with cancer. It is known that delirium occasionally occurs after the onset of Trousseau syndrome. However, there have been no detailed reports about treatment for the psychiatric symptoms of delirium associated with Trousseau syndrome. Case Presentation: A 61-year-old man with lung cancer was hospitalized due to Trousseau syndrome. Delirium occurred after hospitalization and the psychiatric symptoms worsened. Although haloperidol, risperidone, and chlorpromazine were used, severe insomnia persisted. After memantine (5 mg/day) was used with perospirone, the patient's psychiatric symptoms gradually decreased; he could sleep for 4-5 h at night. Due to the psychiatric improvement, he was able to return home and resume immunotherapy for lung cancer as scheduled. Conclusion: We report the first case of Trousseau syndrome delirium treated with memantine used with perospirone. Although further studies are needed, memantine and perospirone might be candidates for the management of psychiatric symptoms associated with Trousseau syndrome.

BACKGROUND

Trousseau syndrome is a clinical condition named after Armand Trousseau, who first reported the relationship between venous thromboembolism and malignancy in 1865. 1 Today, Trousseau syndrome is defined as unexplained thrombotic events that precede the diagnosis of an occult visceral malignancy or appear concomitantly with the tumor. 2 The clinical manifestations of Trousseau syndrome in patients with cancer include various conditions, such as deep vein thrombosis, pulmonary embolism, and brain infarction. 3 However, there have been almost no reports about delirium due to Trousseau syndrome. Moreover, to the best of our knowledge based on a literature review, there are no previous reports about the psychiatric symptoms or management of delirium associated with Trousseau syndrome. Furthermore, we could find no reports mentioning how often delirium occurs with Trousseau syndrome. Although antipsychotics can be used off-label for delirium in clinical situations, we did not find any reports on the effectiveness of these drugs. Here, we describe the first case of Trousseau syndrome in which the psychiatric symptoms of delirium were controlled by memantine used with perospirone.

CASE PRESENTATION

A 61-year-old man with stage IVB pulmonary adenocarcinoma diagnosed more than 2 years earlier was admitted to our hospital due to the sudden onset of impaired consciousness, left hemiplegia, left facial nerve paralysis, and dysarthria. Vital signs were as follows: temperature 36.9°C, pulse 74 beats/min, blood pressure 160/104 mmHg, and SpO2 98%. He had a history of diabetes but no history of hypertension or atrial fibrillation. Diffusion-weighted magnetic resonance imaging revealed many small acute brain infarctions in both hemispheres (Figure 1). The patient's symptoms were consistent with Trousseau syndrome secondary to lung cancer. His mental status had been otherwise normal. He had no history of mental disorders and he had never had cognitive problems. With the introduction of anticoagulation therapy (days 0-10, heparin 23,000 U/day; day 11 onwards, edoxaban tosylate hydrate 60 mg/day), his neurological symptoms recovered. Several days after hospital admission, the neurological deficits had resolved, except for mild left arm paralysis and left facial nerve paralysis.
However, delirium occurred 1 day after hospital admission, following the onset of Trousseau syndrome. Blood tests were unremarkable. His daily medications included loxoprofen (180 mg/day, oral), lansoprazole (15 mg/day, oral), and suvorexant (15 mg/day, oral), which were considered unlikely to induce delirium. Moreover, the last dose of pembrolizumab (an anti-programmed cell death 1 antibody [anti-PD-1]) had been given 4 months earlier, and the reported incidence of psychiatric disorders, including delirium, associated with anti-PD-1 therapy is 1.91%, the lowest among immune checkpoint inhibitors. 4 We therefore diagnosed delirium due to Trousseau syndrome; delirium due to other causes was considered unlikely.

Psychiatric symptoms such as insomnia, agitation, and restlessness gradually progressed to the extent that the patient moved around his room restlessly at night without sleeping. He could not stay still, restlessly sitting and standing beside his bed. He often became confused and reported visual hallucinations. We first selected an intravenous drip of haloperidol or risperidone liquid because the patient reported difficulty in swallowing after the onset of Trousseau syndrome, and effects such as sedation and suppression of hallucinations were expected. Haloperidol (days 2–6, up to 10 mg/day, intravenous), risperidone liquid (days 6–12, up to 1.5 mg/day, oral), and chlorpromazine (days 11–17, up to 37.5 mg/day, oral) were used to manage his psychiatric symptoms (Figure 2). Before administration, it was explained to the patient and his family that these were off-label prescriptions and that there were possible side effects. After the initiation, hallucinations and agitation gradually lessened. However, inattention, altered sleep/wake cycle, altered level of consciousness, and symptom fluctuation persisted. The patient complained of unbearable insomnia for more than 2 weeks. In addition, extrapyramidal side effects such as sialorrhea and bradykinesia were observed. Moreover, chlorpromazine was not tolerated because of drowsiness and urinary retention.

We then tried perospirone (day 17 onwards, 8 mg/day, oral), but this was not effective either, and the insomnia persisted. The Intensive Care Delirium Screening Checklist (ICDSC) 5 score at that time remained 4, indicating the existence of delirium, and hospital discharge remained impossible. We maintained the use of perospirone at this point because we began to consider combining perospirone with a medication other than an antipsychotic.

After the failure of the antipsychotics, we tried memantine to alleviate his psychiatric symptoms. We considered it worth trying because memantine has a different mechanism of action from antipsychotics and has no drug interactions with anticancer agents, 6 which is favorable in terms of not hindering cancer treatment. Before use, we fully explained to the patient and his family that this was an off-label prescription and also explained the expected clinical effects and possible side effects. After the use of memantine (day 19 onwards, 5 mg/day, oral) with perospirone, the patient's psychiatric symptoms lessened. He could sleep for 4–5 h at night for the first time after admission. He could rest in his room at night, and the restlessness gradually resolved. Moreover, daytime sleepiness was not observed. Memantine was tolerated with no apparent side effects. With the psychiatric improvement, he was able to return home after 26 days of hospitalization (Figure 3). Pembrolizumab was restarted regularly as previously planned, with the aid of his family and his home-visiting physician.
DISCUSSION AND CONCLUSION

As far as we know, there have been no reports about treatment and drug choice for the psychiatric symptoms of delirium due to Trousseau syndrome. This is the first case report of Trousseau syndrome delirium alleviated by memantine used with perospirone. It was notable that antipsychotics used as the sole regimen were effective for hallucinations and psychomotor agitation, but not especially effective for inattention and persisting insomnia in Trousseau syndrome delirium.

Perospirone is an atypical antipsychotic with potent serotonin 5-hydroxytryptamine 2 and dopamine D2 antagonist activity, and Takeuchi et al. showed the effectiveness of perospirone for delirium treatment. 7 They reported that perospirone was effective in 86.8% (33/38) of patients, and the effect appeared within several days (5.1 ± 4.9 days). The initial dose was 6.5 ± 3.7 mg/day and the maximum dose of perospirone was 10.0 ± 5.3 mg/day. However, the effect of perospirone on Trousseau syndrome delirium was not mentioned, in contrast to delirium due to other causes.

In this case, we used perospirone first and then added memantine. It is possible that the sole use of perospirone contributed to the alleviation of the Trousseau syndrome delirium. However, when perospirone was used without memantine, there was no apparent psychiatric improvement. In addition, perospirone 8 mg is regarded as equivalent to haloperidol 2 mg and risperidone 1 mg. 8 We used haloperidol up to 10 mg and risperidone up to 1.5 mg, both of which were ineffective against inattention, altered sleep/wake cycle, and altered level of consciousness. We therefore consider that not only perospirone but also memantine contributed to the alleviation of the psychiatric symptoms of Trousseau syndrome delirium.

We speculate that the reason why memantine alleviated the delirium due to Trousseau syndrome might be partly related to the recovery of N-methyl-D-aspartate (NMDA) receptor function, which is hypothesized to play a role in psychiatric symptoms such as the positive and negative symptoms of schizophrenia and in cognitive function. 9 Memantine is thought to work as both a partial antagonist and a partial agonist of the NMDA receptor. 10 Thus, memantine might contribute to the recovery of NMDA receptor function, thereby reducing the psychiatric symptoms of delirium due to Trousseau syndrome, as in schizophrenia. In addition, memantine is reported to be an effective augmentation agent in refractory schizophrenia treated with clozapine; 11,12 it is therefore possible that the combination of memantine with antipsychotics, including perospirone, may also be effective for the psychiatric symptoms of delirium following Trousseau syndrome.

We also speculate that the mechanism by which memantine alleviates the psychiatric symptoms of delirium is related to attenuation of the progression of ischemic changes and brain infarction at the microscopic level. Chen et al. showed that memantine could prevent ischemic stroke-induced neurological deficits and brain infarction both in vivo and in vitro, attenuating brain damage and neuronal loss in rats. 13 Memantine is expected to produce similar effects in Trousseau syndrome, in which ischemic changes due to the progression of microcoagulation by tumor cells are predominant. Besides, as for the rapid improvement, memantine is reported to have effects on brain infarction within 5 days by lowering matrix metalloproteinase-9, which is associated with the development of delirium.
14,15 We speculate that the stroke-related mechanism of memantine contributed more strongly in this case, because such a mechanism is not dominant in chronic dementia but is expected to operate abundantly in Trousseau syndrome.

Based on the hypothesis that memantine is effective for delirium with Trousseau syndrome, is memantine also effective for delirium due to other causes? 17,18 Although the neuropathogenesis of delirium remains unclear, acetylcholine deficiency and dopamine and glutamate excess are related to the onset of delirium. 19 As for glutamate, Choi mentioned that glutamate neurotoxicity has been linked to a lethal influx of extracellular Ca2+ through cell membrane channels, 20 pointing out that an influx of extracellular Ca2+ can cause cell death and that Ca2+ is associated with excessive NMDA receptor activation. Based on the NMDA receptor hypothesis, it is possible that memantine is also effective for delirium due to other causes. To answer this question, we think it is necessary to investigate the effectiveness of memantine for delirium due to other causes as well as for delirium due to Trousseau syndrome and to compare their differences.

This case study has several limitations. First, we only examined one case. Second, it could be that this case was in part a transient, natural course of the delirium syndrome induced by Trousseau syndrome. However, psychiatric symptoms such as the sleep/wake cycle and inattention were unchanged or worse before the introduction of memantine and perospirone. The ICDSC score fell below 4 after the start of memantine, and we suggest that the natural course alone may not be enough to explain the whole psychiatric improvement. Third, other factors such as nursing care and environmental changes after hospitalization might have influenced the patient's mental status; therefore, to validate the effect of memantine and perospirone, it is necessary to increase the number of clinical cases in which memantine and perospirone are used to treat delirium due to Trousseau syndrome while controlling other factors such as psychotropics, environment, and nursing care. Memantine and perospirone are regarded as possible candidates for the management of psychiatric symptoms associated with Trousseau syndrome. Further investigations are necessary to verify their clinical effect.

FIGURE 1: Magnetic resonance image of the head taken on the day of hospitalization. There are multiple small acute brain infarctions distributed in both hemispheres, indicating Trousseau syndrome.
FIGURE 2: The clinical course of delirium with Trousseau syndrome and psychotropics. ICDSC, Intensive Care Delirium Screening Checklist.
FIGURE 3: The psychiatric change of delirium with Trousseau syndrome. ICDSC, Intensive Care Delirium Screening Checklist.

AUTHOR CONTRIBUTIONS

Junji Yamaguchi and Takatoshi Hirayama treated the patient and acquired the data. Junji Yamaguchi drafted the manuscript. Takatoshi Hirayama supervised the work. All authors substantially revised the manuscript. All authors read and approved the final manuscript.
v3-fos-license
2018-04-03T05:24:25.199Z
2017-09-25T00:00:00.000
20597244
{ "extfieldsofstudy": [ "Geography", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ece3.3424", "pdf_hash": "58f6b10c7cdbb033b409f48444a3ec4664b1e71a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42050", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "sha1": "58f6b10c7cdbb033b409f48444a3ec4664b1e71a", "year": 2017 }
pes2o/s2orc
Influences of population pressure change on vegetation greenness in China's mountainous areas

Abstract

Mountainous areas in China account for two-thirds of the total land area. Due to rapid urbanization, rural population emigration in China's mountainous areas is very significant. This raises the question of to what degree such population emigration influences vegetation greenness in these areas. In this study, 9,753 sample areas (each measuring about 64 square kilometers) were randomly selected, and the influences of population emigration (population pressure change) on vegetation greenness during 2000–2010 were quantitatively expressed by a multivariate linear regression (MLR) model, using census data and controlling for natural elements such as climatic and landform factors. The results indicate that the vegetation index over the past 10 years has presented an increasing overall trend, albeit with local decreases in some regions. The combined area of the regions with improved vegetation accounted for 81.7% of the total mountainous area in China. From 2000 to 2010, the rural population decreased significantly, with the most significant decreases in the northern and central areas (17.2% and 16.8%, respectively). In China's mountainous areas and in most of the subregions, population emigration has significant impacts on vegetation change. In different subregions, population decrease influenced vegetation greenness differently, and the marginal effect of population decrease on vegetation change presented obvious differences from north to south. In the southwest, with other factors controlled, a population decrease by one unit could increase the slope of vegetation change by 16.4%; in the southeastern, northern, northeastern, and central areas, the corresponding proportions for improving the trend of NDVI variation were about 15.5%, 10.6%, 9.7%, and 7.5%, respectively.

It has been shown that the impact of climate factors on vegetation growth is crucial in some regions (Chuai, Huang, Wang, & Bao, 2013; Piao, Mohammat, Fang, Cai, & Feng, 2006). In particular, temperature is claimed to play a dominant role compared to precipitation (Thavorntam & Tantemsapya, 2013; Tian et al., 2015). Some scholars have explored the relationship between human activities and vegetation change through correlation analysis (Cai et al., 2014; Lu et al., 2015). For example, Cai et al. (2014) explored the relationship between population emigration and vegetation change at the county scale in the karst areas of southwest China using Spearman correlation analysis, and found a positive influence of population emigration on the vegetation index. However, they used only a single factor and did not control for other variables. Lu et al. (2015), using Pearson's correlation analysis, selected a large number of factors (such as population, labor force, GDP, and investment) at the provincial scale as explanatory variables and analyzed the influences of China's social and economic factors on the vegetation index, without controlling for natural factors. Other authors have tried to isolate the influence of human activity from the combined influences using residual analysis (Evans & Geerken, 2004; Ferrara, Salvati, Sateriano, & Nolè, 2012; Sun et al., 2015; Tousignant et al., 2010; Wessels et al., 2007). For example, Wessels et al. (2007) used residual analysis to isolate the influences of human activities on vegetation productivity in a study on the impacts of land degradation in South Africa.
In mountainous areas, Wang et al. (2015) investigated the influences of climate and human activity factors on the vegetation of southern China using residual analysis, obtaining regression equations of the vegetation index on temperature and precipitation, whereas the influences of human factors were entirely captured by the residual terms of the regression equation. However, the residual analysis method can only reveal the positive or negative effects of human activity rather than identifying the types, intensity, and contribution ratio of human activities. Such studies have deepened our understanding of the relationship between human activities and vegetation change. However, these approaches could not identify the types, intensity, and contribution ratios of the human activities that influence vegetation greenness change, and they do not exclude the influences of natural factors such as temperature and precipitation. In addition, these studies usually choose a certain administrative unit as the research object. Due to the large land area and complex topography of mountainous areas, the spatial differences inside an administrative unit are significant, and therefore using an administrative unit as the analysis unit increases the uncertainty of the results. Mountainous areas in China account for two-thirds of the total land area; they are characterized by high intensity of human activity and fragile ecological environments. Furthermore, a number of large rivers rise in China's mountainous areas, such as the Yangtze River, the Yellow River, and the Lancang River, as well as some international rivers; the state of the ecological vegetation of these areas significantly affects the hydrological conditions of these rivers in China as well as in neighboring countries. It is therefore very important to evaluate the effects of human activities on mountainous vegetation in China. This study chose the widely used vegetation index NDVI (Normalized Difference Vegetation Index) as an indicator of vegetation conditions (Fu & Burgher, 2015; Guay et al., 2014; He et al., 2012; Liu & Gong, 2012; Starns, Weckerly, Ricca, & Duarte, 2015; Stow et al., 2003; Zhang, Zhang, Dong, & Xiao, 2013). Furthermore, we used two indices, population pressure (population density) and land-use intensity, to express human activity intensity. Then, 9,753 samples of 64 square kilometers each were selected to reduce the uncertainty caused by using administrative units with a large area. This allowed us to quantitatively assess the effects of human activities on vegetation conditions, using the multivariate linear regression (MLR) model while controlling for the influences of natural factors.

Study area and data source

The study area covers about five million square kilometers (Figure 1). According to previous research (Guo & Zhang, 1986), China's mountainous areas can be divided into seven regions: northwest, northeast, north, central, southeast, southwest, and Tibet. In this study, we mainly used the following data: NDVI data from the Network Information Center, Chinese Academy of Sciences (GSCLOUD), with a spatial resolution of 1 × 1 km and a temporal resolution of 1 month. In northern China, the growing season is from April to October (Li, Sun, Tan, & Li, 2016); therefore, the average NDVI value for the growing season was adopted in place of the annual average NDVI value in this study. Temperature and precipitation data, as well as the land-use data, were obtained from the Chinese Academy of Sciences. In this data set, there are seven main land use types.
These are arable land, woodland, grassland, water area, construction land, unused land, and other areas. The population density data mainly refer to the research report of Tan, Li, Li, and Li (2016), and the spatial distribution of population density is modeled based on nighttime light imagery, land-use data, and the fifth and sixth nationwide census data. Figure 2 shows the overall design of this study, expressed as a flowchart.

Interpolation of meteorological data

The original temperature and precipitation data were grid data sets with a precision of 0.5 × 0.5°. These data sets were obtained by interpolation based on 2,472 weather stations across the country. In this study, using the ANUSPLIN software (The Australian National University, Canberra, ACT, Australia), we carried out an interpolation on the grid data, using elevation as the covariate, via thin-plate spline interpolation (Liu et al., 2008; Qian, Lu, & Zhang, 2010), thereby obtaining grid data with an accuracy of 1 × 1 km.

FIGURE 1. Regional division and sample spatial distribution of China's mountainous areas. Note: the landform map is derived from the State Key Laboratory of Resources and Environmental Information System, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences; the scale is 1:4,000,000.

Trend analysis of NDVI and meteorological data

As the interannual variabilities of temperature, precipitation, and NDVI were obvious, trend analysis was used to calculate their variation trends from 2000 to 2010 (Tian et al., 2015; Wang et al., 2015; Zhao et al., 2015). Here, we use temperature as an example to introduce the calculation process for this method:

T_Slope = (m × Σ_{j=1}^{m} (j × t_j) − Σ_{j=1}^{m} j × Σ_{j=1}^{m} t_j) / (m × Σ_{j=1}^{m} j² − (Σ_{j=1}^{m} j)²)    (1)

where T_Slope is the slope of the temperature change, m is the fixed number of years of the study (equaling 11 in this study), and t_j is the temperature of the j-th year. When T_Slope < 0, the temperature presents a decreasing trend during the study period; otherwise, it presents an increasing trend. In this study, the data were processed at the pixel level, and the average value of each index was determined within the sampling scope using the zonal statistics tool of ArcGIS 10.2.

Calculation of land-use intensity (LUI)

According to the comprehensive analysis methods for measuring LUI proposed in previous studies (Gao, Liu, & Zhuang, 1998; Liu, 1997; Wang, Liu, & Zhang, 2001; Zhuang & Liu, 1997), the land was divided into four land-use grades: the unused land grade (with a grading index of 1), which contains saline-alkali land, marsh land, sand land, bare land, and other unused or hardly used land, for instance alpine desert and tundra; the forest-grass-water land grade (with a grading index of 2), which includes forest land, grassland, and water areas; the agricultural land grade (with a grading index of 3), which includes cultivated land, garden land, and artificial grassland; and the urban settlement land grade (with a grading index of 4), which includes town land, residential land, and industrial and traffic land. The calculation formula for the comprehensive index of LUI is as follows:

I = 100 × Σ_{i=1}^{4} (M_i × S_i)    (2)

where I represents the land-use intensity, i is the grade number of the land-use intensity grades, M_i refers to the grading index of the i-th land-use intensity grade, and S_i represents the area percentage of the i-th land-use intensity grade.
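The two formulas above lend themselves to a compact numerical illustration. The short Python sketch below implements the least-squares trend slope of Equation (1) and the comprehensive land-use intensity index of Equation (2); the input series and area shares are invented for illustration only and are not values from the paper.

```python
import numpy as np

def trend_slope(values):
    """Least-squares trend slope (Equation 1): values is the yearly series
    t_1..t_m, e.g., growing-season mean NDVI or annual mean temperature."""
    m = len(values)
    j = np.arange(1, m + 1, dtype=float)
    t = np.asarray(values, dtype=float)
    return (m * np.sum(j * t) - j.sum() * t.sum()) / (m * np.sum(j ** 2) - j.sum() ** 2)

def land_use_intensity(area_shares):
    """Comprehensive land-use intensity index (Equation 2): area_shares maps
    the grading index M_i (1-4) to the area share S_i (fractions summing to 1)."""
    return 100.0 * sum(m_i * s_i for m_i, s_i in area_shares.items())

# Hypothetical numbers, not taken from the paper.
ndvi_2000_2010 = [0.41, 0.42, 0.42, 0.44, 0.43, 0.45, 0.46, 0.45, 0.47, 0.48, 0.48]
print(trend_slope(ndvi_2000_2010))                                # positive slope -> greening trend
print(land_use_intensity({1: 0.10, 2: 0.60, 3: 0.25, 4: 0.05}))   # index between 100 and 400
```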
Selection of influencing factors

In terms of studying the influencing factors of vegetation change, this article mainly selects natural factors and human activities (Table 1). For the natural factors, due to the obvious differences in the interannual change of temperature and precipitation, the variation trends of annual total precipitation and annual average temperature over the study period were used. Besides, gradient and aspect determine the vegetation site conditions and were also introduced into the model as explanatory variables. Differences in land-use intensity can reflect the degree of influence of human land-use activities on vegetation change (Zhuang & Liu, 1997) and can quantitatively reveal the comprehensive level of regional land use (Wang, Liu, & Zhang, 2001). Here, population density change is the indicator that reflects population pressure.

Influences of population pressure change on vegetation greenness variation

Before model estimation, we adopted the Variance Inflation Factor (VIF) to carry out a full-collinearity test on the explanatory variables.

TABLE 2. Summary statistics of variables. Note: all operations were implemented in Stata 13.0; the number of samples for the statistics is 9,753; * denotes 10,000 times NDVI_Slope, ** denotes 1,000 times the temperature slope, and *** denotes the natural logarithm of the original average elevation.

FIGURE 3. Spatial distribution of average NDVI values in the growing season in 2000 in mountainous areas of China.

For all explanatory variables, the VIF values were below 10, which means that there was no significant collinearity between variables. According to the national-level estimation results (Table 3), the standardized partial regression coefficients show that the main factors influencing the NDVI change, in descending order of influence, include average elevation, trend of precipitation variation, population pressure change, trend of temperature variation, gradient slope, and aspect. To reveal the regional differences, we carried out further analysis of the factors that influence vegetation greenness change in Models 5–11. The results show that, overall, population pressure change significantly influences the trend of NDVI variation in the six regional models, except for the northwest. Among these models, the influence of population pressure on vegetation greenness was significant at the 1% significance level for Models 5, 8, 9, 10, and 11, and significant at the 5% significance level for Model 6. The situation in Tibet was totally different; that is, a population increase by one unit could increase the trend of NDVI variation by 18.5%. Furthermore, Table 4 shows that there were large regional differences in the influences of land-use intensity on the explained variables. In the northwestern and Tibet areas, land-use intensity had a significant positive influence on vegetation greenness change at the 5% significance level. In other regions, land-use intensity change had no significant impact on vegetation greenness change. As this is not the focus of this study, we elaborate on it further in Appendix S1.
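As a rough illustration of the workflow described above (VIF screening followed by multivariate linear regression), the sketch below uses Python with pandas and statsmodels instead of the Stata 13.0 used by the authors; all column names and the simulated data are hypothetical, and plain heteroskedasticity-robust standard errors stand in for the paper's cluster-adjusted errors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical table: one row per 64 km^2 sample area (names are illustrative).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "ndvi_slope": rng.normal(0.002, 0.001, n),
    "pop_change": rng.normal(-0.1, 0.05, n),   # population pressure change
    "lui_change": rng.normal(0.0, 1.0, n),     # land-use intensity change
    "temp_slope": rng.normal(0.02, 0.01, n),
    "prec_slope": rng.normal(1.0, 0.5, n),
    "elevation":  rng.uniform(200, 4000, n),
    "gradient":   rng.uniform(0, 40, n),
})

X = sm.add_constant(df.drop(columns="ndvi_slope"))

# Collinearity check: VIF < 10 for every explanatory variable.
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=X.columns[1:],
)
print(vif)

# Multivariate linear regression of the NDVI trend on the explanatory variables.
model = sm.OLS(df["ndvi_slope"], X).fit(cov_type="HC1")  # robust standard errors
print(model.summary())
```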
DISCUSSION

In this study, on the basis of analyzing the variation trend of the NDVI, we quantitatively assessed the influence of population pressure change on vegetation greenness in China's mountainous areas while controlling for natural factors. From the beginning of the 21st century, the rural population in China's mountainous areas has been decreasing significantly. In general, population pressure in two-thirds of China's mountainous areas has been decreasing over the past 10 years. For instance, the northern, central, and southeastern mountainous areas presented the most obvious decreases. From 2000 to 2010, the rural populations of these three regions fell by 17.2%, 16.8%, and 12.6%, respectively. Accordingly, the slope of NDVI change in these regions was relatively large, and the increase in vegetation greenness was significant (Figure 6). In contrast, in the northwestern and Tibet areas, the rural population decreased by only 4.1% and 1.8%, respectively, which was significantly lower than the national average of 17% (Li, Sun, Tan, & Li, 2016). As a result, the NDVI increased slowly in these areas. With the decrease in rural population, the vegetation index generally showed an increasing trend, except for a few regions that showed a decreasing trend and accounted for 18.3% of the total study area during the study period.

TABLE 3. Models of the impact of population pressure change on NDVI slope at the national level in mountainous areas of China. Note: figures in [] are marginal effects of population pressure change; figures in () are t values; *, **, and *** denote coefficients different from zero at the 10%, 5%, and 1% significance levels, respectively; region dummies = yes; standard errors adjusted for 9,753 clusters (one per sample).

Some scholars in China have come to similar conclusions (Cai, Yang, Wang, & Xiao, 2014; Han & Xu, 2008; Li, Sun, Tan, & Li, 2016). For instance, Lu et al. (2015) stated that China experienced both vegetation restoration and degradation with great spatial heterogeneity. In addition, Han and Xu (2008), using the correlation analysis method, found that demographic factors significantly affected vegetation productivity in the undeveloped regions at a great distance from the center of Chongqing, especially in the mountainous areas. In other countries of the developing world, research has led to similar conclusions. For example, Olsson, Eklundh, and Ardö (2005) found that population emigration in marginal areas of the southern Sahara region increased the rate of cultivated land abandonment, thus promoting the spontaneous recovery of vegetation to a certain degree. The results of a similar study showed that cultivated land abandonment in mountainous areas caused by rural-to-urban labor migration in St. Lucia, West Indies, had a certain facilitating effect on forest restoration in mountainous areas (Bradley, 2016). Based on these studies, it can be concluded that a decrease in population pressure positively impacts vegetation greenness change. However, as mentioned in the introduction, previous research has rarely analyzed the effects of human activity changes on vegetation greenness while controlling natural factors. In this study, we quantitatively analyzed the influences of human activities while controlling climatic and landform factors (Table 1). In some areas, human activities such as irrigation also contributed to the improvement of vegetation conditions. Studies have shown that, through irrigation, the grassland biomass in Tibet could be significantly increased (Ganjurjav et al., 2014, 2015). Above all, the proportion of shrubs and broad-leaved forbs was also increased under irrigation conditions.
We calculated land-use intensity according to equation (2), which divides land-use status into four different grades. Nevertheless, this discontinuity of variables cannot fully reflect the influences of land use and masks a large amount of vegetation responses to land-use change. Remarkably, in Model 7 and 11 (Table 4), land-use intensity has a significant positive influence on the dependent variables; the increase of land-use intensity can promote vegetation greenness. This situation occurs in northwestern China and is, most likely, mainly related to the development of irrigated agriculture in the area. In the valley and piedmont zones of the northwestern area, the temperature rise in recent years has increased the amount of alpine snow water, which promoted the development of irrigated agriculture in these regions, thus improving regional vegetation (Ta, Dong, & Caidan, 2006). Previous studies have shown that from 1975 to 2005, the large-scale development of cultivated land in Xinjiang has significantly influenced regional vegetation (Wang, Wang, Zhang, & Duan, 2014). The vigorous development of irrigated agriculture in these regions has improved the vegetation conditions to some extent. In addition, vegetation change is affected not only by natural conditions and human factors, but also by other factors such as land use policy, related projects, and policies of vegetation protection (Li, Wu & Huang, 2012;Lu et al., 2015;Luck, Smallbone, & O'brien, 2009). Especially since the 1990s, large-scale ecological protection and afforestation projects have been significantly affecting vegetation F I G U R E 5 Standardized coefficients of significant explanatory variables based on multivariate linear regression models shown in Table 4 in subregions F I G U R E 6 The rural population and average values of trend of NDVI variation in different areas restoration (Lu, Fu, Wei, Yu, & Sun, 2011). The variable of land-use intensity in this study can, to a certain extent, reflect the influence of the "grain-to-green" policy. However, land-use changes do not fully reflect the influence of policies on vegetation greenness. Further studies are therefore required to assess the impacts of population change on vegetation greenness change.
v3-fos-license
2023-05-17T15:19:14.639Z
2023-05-01T00:00:00.000
258732491
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "959c9ae2de7916e0526668587873ab1a679baec2", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42051", "s2fieldsofstudy": [ "Engineering" ], "sha1": "1d05b9f4d8114d9a4cc7fa6b24a9e33ac2786d23", "year": 2023 }
pes2o/s2orc
An Overview of Recycling Wastes into Graphene Derivatives Using Microwave Synthesis; Trends and Prospects

It is no secret that graphene, a two-dimensional, single-layered crystal lattice of carbon atoms, has drawn tremendous attention due to its distinct electronic, surface, mechanical, and optoelectronic properties. Graphene has also opened up new possibilities for future systems and devices due to its distinct structure and characteristics, which has increased its demand in a variety of applications. However, scaling up graphene production is still a difficult, daunting, and challenging task. Although there is a vast body of literature reported on the synthesis of graphene through conventional and eco-friendly methods, viable processes for mass graphene production are still lacking. This review focuses on the variety of unwanted waste materials, such as biowastes, coal, and industrial wastes, for producing graphene and its potential derivatives. Among the synthetic routes, the main emphasis is on microwave-assisted production of graphene derivatives. In addition, a detailed analysis of the characterization of graphene-based materials is presented. This paper also highlights the current advances and applications achieved through the recycling of waste-derived graphene materials using microwave-assisted technology. Finally, it discusses the current challenges and forecasts the future prospects and developments of waste-derived graphene.

Introduction

Graphite has been utilized as the essential raw material in the production of graphene since its discovery. Graphite has an exceptionally anisotropic structure, which leads to its in-plane and out-of-plane surface properties being very different [1]. Graphene is a single layer of graphite: a one-atom-thick sheet of sp2-hybridized carbon atoms organized in a hexagonal lattice, with extraordinary properties such as high surface area, high electrical conductivity, and excellent mechanical strength [2]. Due to its exceptional physical characteristics, such as its ultra-thin nature, significant nonlinearity, and electrical tunability, graphene is frequently used in combination with other materials to create tunable optical and other electronic devices [3]. Since each carbon atom retains an unhybridized p-orbital, graphene has high intrinsic flexibility and electronic conductivity. Recently, 3D structures of graphene honeycombs have been studied through large-scale molecular dynamics simulations for mechanistic understanding and deformation behaviors, as displayed in Figure 1 [4].
Graphene oxide (GO) is not a conductor. However, it can be reduced by heat processes into conductive reduced GO (rGO) [5]. rGO is produced by removing the oxygenated groups of GO, where GO is a variant of graphene decorated with functional groups [6]. Although rGO is a derivative of graphene, the rigorous process of oxidation and reduction introduces damaged areas into the rGO sheets, and unreacted functional groups remain attached to the rGO plane (Figure 2) [7].

Graphene, both single layer and multilayer, can now be manufactured in a variety of ways. Graphene layers are fabricated through either a top-down or a bottom-up methodology [8]. Graphite is composed of graphene layers, and the graphene layers involve two types of bonding. Weak Van der Waals interactions hold the graphene layers together, with a distance of around 0.341 nm between adjoining graphene layers [9]. The Van der Waals interactions have a significant impact on the frequency of modes with relative movement between the layers in the vibrational dispersion [10]. Some of the phenomena related to Van der Waals interactions include friction, surface tension, viscosity, adhesion, cohesion, and wetting [11]. The assembly of carbon atoms into a graphene arrangement is the bottom-up method of synthesis. The two methodologies have advantages and downsides that have been explored in the literature [12,13]. In this study, we aim to review the available literature on the synthesis of graphene and graphene-based materials derived from wastes in the last decade. The focus is on biowaste, coal, and industrial waste as source materials, and the specific synthesis method is microwave synthesis. Moreover, numerous characterization techniques are discussed, along with emerging future prospects and recommendations.

Biowastes

It has become challenging in the 21st century to obtain clean, affordable, and reliable energy sources, which are essential from both a financial and an environmental outlook. Biomass has been identified as one of the most favorable sustainable sources of energy [14].
Biomass is natural material derived from plants and animals (including microorganisms), and it contains stored energy from the sun [15]. Since plants and animals are classified as sustainable resources, the word "renewable" is applicable to both. Moreover, biomass is often obtained from forestry, agricultural, industrial, household, and municipal solid wastes (MSW) [16,17]. Every year, various biowastes from large-scale livestock or agricultural sources are dumped into the environment [18]. Biomass is mostly composed of long chains of carbon, hydrogen, and oxygen compounds, with a carbon content as high as 55% by weight [19]. The carbon content of biomass should be concentrated before it can be converted completely to graphene; industry has utilized this strategy to make biochar. Biochemicals, biofuels, and other bio-based products are created from biomass using heat treatment methods such as gasification, carbonization, liquefaction, and pyrolysis [20]. Carbonization is a pyrolysis process that converts biomass into a carbonaceous, charcoal-like material [21]. On the other hand, graphitization is a method in which amorphous carbon is heated before being converted into three-dimensional graphite [22]. It should be noted that the carbonization process frequently results in amorphous carbon rather than graphite-like carbon. Pyrolyzed carbon exists in two forms, hard and soft carbon. In fact, despite heating to extremely high temperatures, graphitization of hard carbon has yet to be achieved [23]. Meanwhile, heat treatment readily converts soft carbon into graphite. Although the properties of the converted carbon structures are similar to those of graphene, they are not unmodified graphene because of the presence of extra carbon components [24]. Thermal exfoliation and carbon growth are two methods for the thermal degradation of biomass [25]. The exfoliation technique with graphitized biomass involves breaking up the carbon structure by overcoming the Van der Waals forces, resulting in graphene sheets (GSs). This process is similar to the conversion of graphite into graphene, with graphitized biomass substituted for graphite [26,27]. Table 1 shows examples of methods for the conversion of biomass into graphene derivatives.

Coal Waste

Coal is a unique carbon material that can be subdivided into lignite, bituminous coal, and anthracite [46]. Lignite and sub-bituminous coal are classified as inferior coals because of their high moisture content, high impurity levels, high volatile matter content, and low calorific value [47]. Coal is generally converted into fuel through various processes, such as combustion, pyrolysis, gasification, and liquefaction [48]. The traditional methods have drawbacks, which include a lack of energy efficiency and ecological contamination [49,50]. Consequently, a high-value and environmentally sustainable way of using coal is required [51]. Coal particles, in contrast with normal pieces of graphite and other precursors, contain a number of aromatic units as well as short aliphatic and ether bonds [52]. It is believed that coal might be a good option for creating carbon nanomaterials because of its multilevel nanoarchitecture and specific functionalities. Savitskii et al. [53] utilized anthracite coal and a thermo-oxidative technique to produce colloidal GO nanoparticle dispersions in the size range from 122 nm to 190 nm. Pakhira et al. [54] showed that GO can be synthesized from low-grade coal.
It was formed from the natural coalification of plant metabolites and isolated by chemical exfoliation with cold HNO3. However, such GO sheets tend to break up into small, rounded pieces a few nanometers in size. It is striking that the utilization of coal-derived nanomaterials for a variety of industrial applications is expanding [52]. Currently, the strategy for reprocessing coal into graphene is to first convert the large coal molecules into a precursor carbon source prior to synthesizing graphene. The precursor carbon source can be in gaseous or particulate form [55]. Preliminary screening of the raw coal, impurity removal, pyrolysis (dry distillation), gasification, and liquefaction of coal are the steps in the preparation of a precursor carbon source [56]. Zhou et al. utilized a catalytic graphitization-assisted dielectric barrier discharge (DBD) plasma strategy to make Taixi anthracite-based chemically derived graphene as well as metallic nanoparticle-decorated graphene sheets [57]. In this method, raw coal was graphitized at 2400 °C for 2 h (under Ar) directly with Fe2(SO4)3 as a catalyst, followed by oxidation via Hummers' method into the corresponding graphite-like carbon oxides (TX-NC-GO and TX-C-GO, respectively) [57,58].

Industrial Wastes

Malaysia is an emerging nation that relies on industrial productivity as one of its economic contributors. Different types of wastes are produced in industrial processes, including chemical effluents, industrial plant waste, paper waste, metals, concrete, sludge, electronic device wastes, etc. [59]. A number of significant materials (e.g., graphite, Cu, Fe, and Zn) can be recovered from industrial waste using a hydrometallurgical technique called leaching [60]. The commercialization of graphite-based products has improved immensely during the twenty-first century [61]. This is due to their unique physical and chemical properties, such as high chemical resistance, heat capacity, high electrical conductivity, and lubricity. These unique properties are suitable for various modern applications, such as electronic devices, lubricants, and metallurgy [62]. A modified Hummers method was utilized to prepare GO from graphite obtained by leaching of industrial waste [63]. Concentrated sulfuric acid (H2SO4, 30 mL) and graphite (1 g) were mixed homogeneously in an ice bath for 30 min during the synthesis. A total of 5 g of potassium permanganate (KMnO4) was added and mixed for another 15 min at temperatures below 10 °C [64]. The proportion of KMnO4 was subsequently increased from 1:3 to 1:5 to speed up the oxidation. After that, 8 mL of ultrapure water was added dropwise over 15 min, and the temperature of the mixture was kept under 98 °C for around 60 min. Finally, the oxidation reaction was completed by adding 60 mL of ultrapure water followed by 1 mL of H2O2 [65].

Microwave Synthesis of Graphene Nanomaterials from Waste Materials

Microwave radiation is electromagnetic radiation with wavelengths ranging from 0.01 to 1 m and frequencies ranging from 300 MHz to 300 GHz [66]. Industrial microwave systems operate at two frequencies, 915 MHz and 2.45 GHz, while consumer microwave ovens use a single frequency of 2.45 GHz, corresponding to a wavelength of 12.25 cm [67]. Microwaves are widely used to heat materials that can absorb microwave radiation and convert it to heat [68]. The dipolar molecules in such materials rapidly reorient with the alternating electric field, leading to increased internal molecular friction and volumetric heating of the whole substance [69].
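As a quick arithmetic check of the microwave parameters quoted above, the free-space wavelength follows from λ = c/f; the short snippet below reproduces the roughly 12.25 cm figure for 2.45 GHz (a sketch for orientation only).

```python
c = 299_792_458.0  # speed of light in vacuum, m/s
for f_ghz in (0.915, 2.45):
    wavelength_cm = c / (f_ghz * 1e9) * 100
    print(f"{f_ghz} GHz -> free-space wavelength ~ {wavelength_cm:.2f} cm")
# 2.45 GHz gives ~12.24 cm, matching the ~12.25 cm quoted for consumer microwave ovens.
```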
As a result, microwave-assisted technology is able to provide a quick and efficient method of evenly heating the material or system from within. Conventional heating, on the other hand, is relatively slow and inefficient [70]. Graphite, or GO, is a typical source of GSs, which are made by a conventional or modified Hummers' method [71]. Hummers' method is the most widely used method for the synthesis of GO, using a mixture of concentrated H2SO4 and KMnO4 [72]. Since then, numerous modified versions have been developed; however, the experimental procedures remain very similar to the original Hummers method. Oxidation is achieved using KMnO4, and the reaction is quenched by adding hydrogen peroxide to the solution [73]. Several hazardous reducing agents, such as hydrazine (N2H4) and NaBH4, are normally utilized in chemical approaches to reduce GO. Thermal treatment, on the other hand, does not require the use of hazardous reducing agents, making it a more attractive option [74]. The microwave-assisted technique has gained popularity as an alternative to conventional graphene preparation. It treats GO or natural graphite in a microwave or in a microwave plasma-assisted chemical vapor deposition (MPCVD) system, or it utilizes microwave-assisted solvothermal/hydrothermal strategies [75]. Microwave radiation provides a quick and uniform heating rate that leads to fast particle nucleation and growth, which may reduce the reaction time and eventually lead to significant energy savings [76]. Figure 3 portrays one potential microwave-assisted strategy for graphene synthesis. Microwave illumination produces very high temperatures and pressures, and energy is transferred directly into the GO [77]. Furthermore, the interaction of polar solvents with the surface oxides on GO sheets is the key factor in determining deposit regularity [77].
In addition, microwave-assisted heating can significantly improve the transfer of energy directly to the reactants, resulting in an instantaneous internal temperature rise [80]. Furthermore, microwave technology enables the use of environmentally friendly solvents, resulting in cleaner products that do not require additional purification steps [81]. Since it involves a quick warming and very fast rate of crystallization to create the ideal nanocrystalline items, microwave illumination has recently been proposed as a valuable procedure for delivering carbon-related composites with uniform scattering as well as size and morphology control [82]. Table 2 shows examples of waste materials in graphene derivatives by using the microwave method. Furthermore, the reduction degree of GSs was further enhanced, and the functional groups on the surface of GO are successfully lowered [78]. There are several obvious advantages to producing graphene using microwave technology. Firstly, the advantage of microwave-assisted heating over traditional heating methods is its uniform and rapid heating of the reaction mixture [79]. In addition, microwave-assisted heating can significantly improve the transfer of energy directly to the reactants, resulting in an instantaneous internal temperature rise [80]. Furthermore, microwave technology enables the use of environmentally friendly solvents, resulting in cleaner products that do not require additional purification steps [81]. Since it involves a quick warming and very fast rate of crystallization to create the ideal nanocrystalline items, microwave illumination has recently been proposed as a valuable procedure for delivering carbon-related composites with uniform scattering as well as size and morphology control [82]. Table 2 shows examples of waste materials in graphene derivatives by using the microwave method. X-ray Diffraction (XRD) and X-ray Photoelectron Spectroscopy (XPS) Firstly, XRD is a reliable technique for the structural analysis of GO. This analysis can be used to assess the pattern/shape and crystallinity of GO [103]. XRD also is comparable to a fingerprint that is unique for each sample or species. This is due to the evaluation of achieved data which can be compared with the database results to identify that material [104]. In spite of the fact that XRD is certainly not an optimal device for recognizing single-layer graphene, it can be used to recognize graphite and graphene tests. In the XRD design, the unblemished graphite has a basal reflection (002) peak at 2θ = 26.6 • (d spacing = 0.335 nm). Later, the oxidation of graphite into graphite oxide shows middle basal (002) reflection peak moves to 11.2 • , corresponding to a d spacing of 0.79 nm [105]. The increase in interlayer space is due to water atoms intercalating between the oxidized graphene layers. The presence of metallic mixtures in graphene structures was analyzed utilizing XRD examination. In addition, an x-beam connection with a graphitic translucent stage produces a diffraction design [106]. Non-covalent functionalization of rGO with two poly ionic fluids (PIL), poly (1-vinylimidazole) (PVI), and 2-bromopropionyl bromide resulted in the disappearance of a sharp GO diffraction peak at 2θ = 11.8 • in PIL-rGO diffractograms [107]. This trend is predictable with the detailed information and a slight expansion in the power of the GO trademark top in 2θ = 44.5 • (101) which relates to the basal reflexing plane of the tri-layered graphite [108]. 
Figure 4 displays the XRD profiles of graphite, GO-I, GO-II, and rGO. The formation of GO was confirmed by the diffraction peak at 2θ = 11.01° at the (001) reflection plane. A diffraction peak that appeared at 2θ = 26.8° at the (002) reflection plane after the thermochemical treatment confirmed the reduction of GO [109]. This diffractogram demonstrated the disappearance of the GO peak, providing evidence that GO was converted into rGO. In addition, GO was prepared using different ratios of acids (I and II), as shown in Figure 4 [109,110].

XPS is one of the most common techniques used to study the relative amounts of carbon, oxygen, and functional groups present in GO and electrochemically reduced GO (ErGO). It is a more accurate technique than elemental analysis for determining the amounts of carbon and oxygen, because it is difficult to fully dehydrate a GO sample [111]. It is a quantitative and reliable technique in which electrons are ejected from the C 1s and O 1s levels of graphene using X-rays, and the energies of the emitted electrons are determined by the atomic composition of the material [112]. XPS can quantify the different types of carbon functionalities present, indicate the formation of chemical bonds, and evaluate the physisorption of molecules through the O/C ratio [113]. This quantification is critical for correlating the chemical properties of graphene-based materials with their performance, for example, in permeability [114], water purification [115], or bio-sensing [116]. Furthermore, the surface chemistry and binding sites of both electrically conducting and non-conducting materials can be studied by XPS, and it is possible to characterize the networks and bonds in the material sample. The photoelectric effect serves as the basis for the technique. Additionally, XPS can shed light on the percentage atomic composition. Figure 5 displays the XPS spectra of GO and rGO, which exhibit distinctive patterns that reveal their chemical composition [117]. The C(1s) and O(1s) peaks, located at about 285 and 532 eV, respectively, are discernible in the XPS full-scan spectra of GO and rGO [118]. The bonding involved is further highlighted by the deconvolution of the C(1s) and O(1s) core orbitals [119]. Peaks for C=O, O=C-OH, C=C, and C-C bonds are respectively visible in the C(1s) deconvolution for GO at binding energies of 287, 289, 284, and 285 eV. The C-OH and C-O-C groups have peaks on the O(1s) deconvolution curve for GO at 532 and 533 eV, respectively [120].
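To make the O/C-ratio idea concrete, the minimal sketch below converts hypothetical C 1s and O 1s peak areas into approximate atomic percentages using illustrative relative sensitivity factors; real sensitivity factors are instrument- and library-dependent, so both the areas and the factors here are assumptions.

```python
# Hypothetical C 1s / O 1s peak areas (arbitrary units) and illustrative
# relative sensitivity factors (RSF); values are not taken from the text.
peaks = {"C1s": {"area": 12000.0, "rsf": 1.00},
         "O1s": {"area": 20000.0, "rsf": 2.93}}

corrected = {k: v["area"] / v["rsf"] for k, v in peaks.items()}   # RSF-corrected intensities
total = sum(corrected.values())
at_percent = {k: 100 * v / total for k, v in corrected.items()}
o_to_c = corrected["O1s"] / corrected["C1s"]

print(at_percent)                     # approximate atomic percentages of C and O
print(f"O/C ratio ~ {o_to_c:.2f}")    # GO is typically oxygen-rich; rGO shows a much lower ratio
```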
Table 3 highlights the GO and rGO binding energy values. In the case of rGO, these peaks appear with low intensity, confirming the reduction of GO. As a result, the peaks become narrower when GO is reduced to rGO. Additionally, the rGO peak at 284 eV, which corresponds to C=C, becomes more intense as the π-conjugation is restored. The deconvoluted peaks in rGO shift to binding energy values greater than those of GO for O(1s).

Table 3. Binding energy values of GO and rGO in eV from the XPS plots [117,119].

The appearance of distinctive peaks in XPS can be used to verify that graphene has been successfully non-covalently functionalized [121]. According to Khan et al., two distinct peaks at 729 and 715.3 eV can be used to identify the presence of magnetic nanoparticles anchored on the GO surface [122]. Furthermore, it is noted that the XPS C 1s spectrum after Fe3O4 functionalization shows peaks associated with C=O (285 eV), C=C (286.2 eV), and C-O-O (289 eV) bonds. XPS can also be used to identify active sites and further illuminate associated reaction mechanisms in graphene-based catalytic materials. This is best illustrated by the direct observation of active sites during the oxygen reduction reaction (ORR) over nitrogen-doped graphene (NG) catalysts [123]. Even though many simulation results have suggested various reaction pathways and adsorption sites for the ORR over NG, the actual mechanism is still in dispute, primarily because there is no direct evidence of the detection of intermediate species or active sites [124].

Raman Spectroscopy and Fourier-Transform Infrared Spectroscopy (FTIR)

Raman spectroscopy detects the change in energy connected with the Stokes and anti-Stokes transitions of the scattered photons. It is a non-destructive technique that provides information on chemical structure and molecular interactions through the interaction of light with the bonds of a material [125]. Moreover, Raman spectroscopy is one of the most useful tools for studying the structure and quality of carbon-based materials such as graphene [126]. It is a powerful, quick, sensitive, and analytical technique that provides qualitative and quantitative information on graphene-based materials [127]. Raman spectroscopy is a significant instrument for determining the number of graphite layers and the degree of graphitization [128]. Graphene mostly shows D, G, and 2D bands in Raman analysis [129]. The D band is commonly situated around 1350 cm−1 and reflects the level of defects in the graphite; the higher the D band, the more defects are observed [130]. The G band is linked to the in-plane vibration of sp2-hybridized carbon atoms and is located near 1580 cm−1. The 2D peak, also known as G', reflects the number of graphene layers and is observed at about 2700 cm−1 [131]. Figure 6 depicts the Raman spectra of graphene reduced under various reduction conditions, which reflect the significant structural changes that occur during each stage of the electro- and thermal processing [132].

Figure 6. Raman spectra of samples at various stages of processing [132].
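A common way to put a number on the D and G bands described above is the I_D/I_G intensity ratio. The sketch below estimates it from a synthetic spectrum using simple window maxima; in practice, Lorentzian peak fitting would normally be used, so treat this only as an illustration of the idea, with all spectral values invented.

```python
import numpy as np

def id_ig_ratio(shift_cm1, intensity, d_window=(1300, 1400), g_window=(1530, 1630)):
    """Crude I_D/I_G estimate: maximum intensity inside each band window."""
    shift = np.asarray(shift_cm1)
    inten = np.asarray(intensity)
    i_d = inten[(shift >= d_window[0]) & (shift <= d_window[1])].max()
    i_g = inten[(shift >= g_window[0]) & (shift <= g_window[1])].max()
    return i_d / i_g

# Synthetic spectrum with D (~1350 cm^-1), G (~1580 cm^-1) and 2D (~2700 cm^-1) bands.
x = np.arange(1000, 3000, 1.0)
lorentz = lambda x0, w, a: a * (w ** 2 / ((x - x0) ** 2 + w ** 2))
y = lorentz(1350, 30, 0.9) + lorentz(1580, 20, 1.0) + lorentz(2700, 35, 0.6) + 0.02

print(f"I_D/I_G ~ {id_ig_ratio(x, y):.2f}")  # ~0.9 here; higher values indicate more defects
```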
Pristine graphene is a carbon allotrope, and no signal can be collected from it using FTIR. Graphite oxide exfoliation is one of the primary routes for preparing practical graphene that supports catalytic research, and the oxidation step is critical [133]. As a result, many functional groups may remain in graphene-based catalysts even after being "completely removed", having a significant impact on catalytic performance [134]. Therefore, it is important to evaluate the reduction level. FTIR is one of the most efficient and simple methods for investigating residual functional groups [135]. FTIR analysis is also the method used to determine the bonding configuration of the different types of oxygen. Additionally, FTIR is a tool that complements Raman spectroscopy. The identifiable functional groups do not show any distinctive peaks in the pristine graphite FTIR spectrum [136]. It only shows two peaks at about 1610 and 450 cm−1, which are attributed to the vibration of adsorbed water molecules (the O-H stretching) and the skeletal vibrations from graphite domains (the sp2 aromatic C=C), respectively [137]. The oxygenated graphene sheets (GSs) may exhibit a variety of absorption bands or characteristic peaks ranging from 900 to 3500 cm−1 following treatment with oxidizing agents [138]. These include the stretching vibrations of epoxy C-O groups (1000-1280 cm−1), alkoxy stretching vibrations (1040-1170 cm−1), O-H stretching vibrations (3300-3500 cm−1), O-H deformation peaks (1300-1400 cm−1), and carboxyl peaks (1700-1750 cm−1) [139]. Notably, the aromatic C=C peak is visible between 1600 and 1650 cm−1. This peak is a result of the sp2 domains in the unoxidized region of the graphite, and the vibration produced there is known as skeletal vibration [140].
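The band assignments listed above can be encoded as a simple lookup so that observed peak positions are labeled automatically. The sketch below only restates the ranges quoted in the text; the exact boundaries and any overlapping assignments (e.g., epoxy versus alkoxy C-O) should be treated as approximate.

```python
# Approximate FTIR assignments for oxidized graphene, taken from the ranges above.
ASSIGNMENTS = [
    ((1000, 1280), "epoxy C-O stretching"),
    ((1040, 1170), "alkoxy C-O stretching"),
    ((1300, 1400), "O-H deformation"),
    ((1600, 1650), "aromatic C=C (skeletal vibration)"),
    ((1700, 1750), "carboxyl C=O"),
    ((3300, 3500), "O-H stretching"),
]

def assign_peak(wavenumber_cm1: float) -> list[str]:
    """Return every assignment whose range contains the observed wavenumber."""
    return [name for (lo, hi), name in ASSIGNMENTS if lo <= wavenumber_cm1 <= hi]

for peak in (1052, 1625, 1730, 3420):
    print(peak, "->", assign_peak(peak) or ["unassigned"])
```

The overlap printed for 1052 cm−1 illustrates why peak assignment in this region usually needs supporting evidence from the other techniques discussed here.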
Atomic Force Microscopy (AFM)
As a result of the limitations of scanning tunneling microscopy (STM), such as the requirement for conductive samples, atomic force microscopy (AFM) was created in 1985 [141]. AFM is a multifunctional instrument that can visualize the topography of a sample, measure its roughness, and distinguish the various phases of a composite [142]. It is widely used to measure the adhesive strength and mechanical properties of materials. It requires the use of conductive tips that act as top electrodes, together with the associated software. Furthermore, nanoindentation can be utilized to quantify mechanical properties, such as Young's modulus and hardness [143]. AFM is broadly utilized in materials science [144], life science, and other disciplines [145]. As AFM technology progresses, imaging resolution improves, the scope of applications extends, and more quantitative analysis of the observed images has begun [146]. For instance, in the field of biomedicine, most experimental studies have focused on the connection between the structure and related functions of biological macromolecules, especially nucleic acids and proteins [147]. In materials science, AFM can provide data on the three-dimensional morphology and surface roughness of a material surface, as well as differences in the distribution of physical properties across the surface, for example, morphological analysis [148] and dielectric constant [149]. A modified Langmuir-Schaefer deposition method was used to create a thin monolayer film suitable for imaging in the samples for AFM measurements. Figure 7 shows a representative AFM image of the GO monolayer deposited on the Si substrate as well as the corresponding size distribution of the GO sheets [150].
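AFM height images such as the one in Figure 7 are routinely used to read off the apparent thickness of deposited GO sheets. The sketch below estimates a step height from a single line profile by comparing the substrate level with the sheet level; the synthetic profile and the threshold-based segmentation are illustrative simplifications, not the procedure used in the cited work.

```python
import numpy as np

def step_height(profile_nm: np.ndarray) -> float:
    """Estimate sheet thickness from a 1-D AFM height profile (in nm).

    Pixels below the midpoint between the lowest and highest levels are taken
    as substrate, those above as sheet; the difference of the medians is the
    apparent step height.
    """
    threshold = 0.5 * (profile_nm.min() + profile_nm.max())
    substrate = profile_nm[profile_nm < threshold]
    sheet = profile_nm[profile_nm >= threshold]
    return float(np.median(sheet) - np.median(substrate))

# Synthetic profile: flat substrate with a roughly 1 nm high sheet in the middle.
rng = np.random.default_rng(0)
profile = rng.normal(0.0, 0.05, 500)
profile[150:350] += 1.0
print(f"Apparent GO sheet thickness: {step_height(profile):.2f} nm")
```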
Scanning Electron Microscopy-Energy Dispersive X-ray Spectroscopy (SEM-EDS)
Since it can rapidly examine and image the morphology of a large sample, electron microscopy is broadly utilized in routine investigations [151]. A potential difference (1.0-30.0 kV) accelerates thermionic electrons emitted by a tungsten filament (cathode) toward the anode. Condenser and objective electromagnetic lenses are utilized to focus the beam onto the sample under vacuum (10−5 Pa) [152]. Secondary and backscattered electrons are emitted during the scan, as well as Auger electrons and X-rays, and detectors convert them into grayscale images. Images of the sample are given by the secondary and backscattered electron detectors, while compositional data are given by the X-ray spectrometer [153]. Secondary electrons are fundamentally created by inelastic scattering from the outer shell, while backscattered electrons are delivered by the primary electrons [154]. To avoid surface and underlying damage from the beam, delicate samples, such as polymers, need to be treated carefully. Nonconductive samples require surface pre-treatment, and the sample is normally covered with a gold or carbon overlayer [155]. Due to the oxygenated epoxy groups of GO, it shows multilayers with some wrinkles [156]. SEM images provide 3D visualization of nanoparticle morphology and dispersion in cells and other matrices. Lateral dimensions, rapid analysis of nanoparticle element composition, and surface flaws, such as cracks, etching residues, differential swelling, and holes, can also be seen [157]. Figure 8 shows SEM images of protruded GNP produced by GNP debonding from the polymer matrix upon failure, as indicated by circles, when the GNP loading is increased to 10% and 20%, respectively. It has been observed that when the GNP loading is increased to 10% and 20% (Figure 8c,d), the fractured surfaces become much coarser [158].

Transmission Electron Microscopy (TEM) and High-Resolution Transmission Electron Microscopy (HRTEM)
While TEM is best known for imaging a specimen's morphology, a wide variety of other combined techniques are also available in TEM to extract chemical, electrical, and structural data. For instance, local diffraction patterns can be measured using the parallel electron beam of the TEM, which can offer precise measurements of the crystal system and parameters [159]. Furthermore, the transparent, corrugated, or wrinkled structure of the two-dimensional (2D) GO and rGO nanosheets is visible under the TEM [160]. It is also described as having the morphology of an ultrathin silk veil with folds and scrolls along its edges, which is attributed to graphene's inherent properties [161]. A highly effective method for characterizing the structure of graphene is HRTEM. It is a special tool for describing graphene's atomic structures and interfaces. It has been used to observe graphene flakes in a fraction of a micron and to reveal the fine chemical structure of GO [162]. Based on a TEM image of the folds formed at the edge, HRTEM also provides data on the number of graphene layers. Graphene's electron diffraction pattern can also be used by HRTEM to identify its crystalline nature [163]. It is noteworthy that HRTEM can reveal the quantity of layers present in various areas of the sheets [164]. The measured lattice spacing of single-layer graphene using this method is 0.236 nm [165]. Figure 9 shows TEM and HRTEM images of rGO.
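Lattice spacings such as the 0.236 nm value quoted above are typically obtained by averaging over many lattice fringes in a calibrated intensity profile. A minimal sketch is given below; the pixel size and the synthetic profile are placeholders for instrument-calibrated data, and fringe detection in real images usually needs filtering first.

```python
import numpy as np
from scipy.signal import find_peaks

def fringe_spacing_nm(profile, pixel_size_nm):
    """Average lattice-fringe spacing from a 1-D intensity profile.

    The spacing is the distance between the first and last fringe peaks divided
    by the number of intervals, which is less noisy than using adjacent peaks.
    """
    peaks, _ = find_peaks(profile, distance=3)
    if len(peaks) < 2:
        raise ValueError("need at least two fringe peaks")
    return (peaks[-1] - peaks[0]) * pixel_size_nm / (len(peaks) - 1)

# Synthetic profile: fringes every 0.236 nm sampled at 0.02 nm per pixel.
pixel_nm = 0.02
x = np.arange(0, 600) * pixel_nm
profile = 1.0 + 0.5 * np.cos(2 * np.pi * x / 0.236)
print(f"Estimated fringe spacing: {fringe_spacing_nm(profile, pixel_nm):.3f} nm")
```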
Field Emission Scanning Electron Microscopy (FESEM)
The image of the materials' microstructure is captured using the cutting-edge technology known as FESEM. Gas molecules have a tendency to disturb the electron beam and the emitted secondary and backscattered electrons used for imaging, so FESEM is typically carried out in a high vacuum [166]. The difference between the surface morphology of GO and rGO was further demonstrated by FESEM analysis [167]. It has been demonstrated that the FESEM image of rGO in Figure 10 has more wrinkles than that of GO [168]. The removal of oxygenated functional groups from the GO surface during the reduction process was supposed to be the cause of the corrugations on the rGO surface [169].

Future Prospects
Even though scientific interest in graphene has increased for a variety of applications, there are still several significant obstacles and challenges that need to be addressed and overcome. One of the critical issues is the reproducibility of converting waste materials into graphenaceous materials. Improved morphological properties should be combined with procedures that are both scalable and affordable. Excitingly, there is a sustained interest in the synthesis of materials based on graphene and the evaluation of their production and fusion with other materials. Although waste precursors have been the subject of recent studies, none of them have yet been able to be made into commercially available products. Figure 11 shows the future prospects in graphene synthesis from a variety of wastes.
Noteworthy future prospects include:
• Optimization of process variables and techniques to regulate the size, quality, and morphology of graphene-derived materials from waste materials.
• Improved synthetic concepts and methods are highly inspiring and necessitate commercial research involving renewable and biodegradable waste materials.
• Well-ordered oxidation/reduction and functionalization are expected for fine-tuning material properties, for example, band gap, electrical conductivity, and mechanical properties [170].
• Controlled modification of graphite, GO, and rGO is therefore critical for broadening the applications of graphene-based materials.
• To assess the health risk related to graphene and its derivatives, the toxicity and biocompatibility of these unique carbon structures and their derivatives should be examined [171].
• Due to its extensive properties, graphene preparation is a crucial area for material scientists. As a result, the scientific community should focus on advanced and novel microwave instruments, which would be a great substitute for toxic and harsh chemicals.
• To explore more variations involving novel synthetic techniques and high-purity GO for its mass production.
• There should be more consideration to lessen the cost of graphene derivatives.
• There should be more emphasis on the high yield and purity of graphene derivatives using a variety of wastes through microwave synthesis.
• This may also lead towards the excellence of functionalization, such as 1D, 2D, and 3D graphene structures, to fabricate waste materials into graphene-based structures with enhanced functionalities and high surface areas [172].
• Improving synthetic ideas and microwave approaches is remarkably motivating and requires further investigation by recycling waste materials for the optimization of parameters such as time, power, and frequency.
• Further analysis of microwave synthesis and applications should be explored where the waste-based graphene derivatives can be utilized and, thus, the structures and properties can be modified as per the industrial demands.

Figure 11. Future prospects in graphene synthesis from a variety of wastes.

Conclusions
Due to graphene's industrial significance, there is great concern about its sources and synthesis methods.
These variables affect the price of graphene, and its industrial applications are constrained. The current study presents an overview of the various types of wastes used for the synthesis of graphene-based materials. Graphene, with its intriguing qualities, continues to create wonders and attract great attention from researchers all over the world. GO isolation has been established for more than ten years. However, the process is a continual exploration of variations involving novel synthesis techniques and highly pure GO for its mass production and commercialization. The major aim is to have a process that is cost effective and economical. The literature is rich with important process parameters, their optimizations, and the synthesis of GO from a variety of wastes, which is useful for a wide range of applications. We have narrated literature that lists the synthetic routes for GO, particularly microwave synthesis, as well as different characterization methods. Although waste biomass-derived graphene is another encouraging material with various applications, its synthesis method is still open to be explored. Accordingly, more examination is needed to establish the best strategy for creating graphene with the best properties and optimization. In addition, successful and cost-effective planning may lead to the use of graphene in a wide range of applications from energy to the environment. Furthermore, material progress always demonstrates a superior effect in any field. Due to its diverse properties, graphene preparation is an important area for material scientists. As a result, the scientific community will always give attention to effective and efficient graphene preparation. The development of graphene through green synthesis represents a significant advancement in graphene technology. The cost of producing graphene in large quantities could be reduced in alternative ways using carbonaceous wastes as raw materials. The production of graphene for industrial applications should successfully utilize a variety of environmentally hazardous solid waste precursors. Since waste-derived graphene might have impurities, additional purification procedures are needed. Future research is therefore required to increase graphene production with better yield and properties.
Conflicts of Interest: The authors declare that there are no conflicts of interest regarding the publication of this manuscript.
Digital health developments and drawbacks: a review and analysis of top-returned apps for bipolar disorder Background Although a growing body of literature highlights the potential benefit of smartphone-based mobile apps to aid in self-management and treatment of bipolar disorder, it is unclear whether such evidence-based apps are readily available and accessible to a user of the app store. Results Using our systematic framework for the evaluation of mental health apps, we analyzed the accessibility, privacy, clinical foundation, features, and interoperability of the top-returned 100 apps for bipolar disorder. Only 56% of the apps mentioned bipolar disorder specifically in their title, description, or content. Only one app’s efficacy was supported in a peer-reviewed study, and 32 apps lacked privacy policies. The most common features provided were mood tracking, journaling, and psychoeducation. Conclusions Our analysis reveals substantial limitations in the current digital environment for individuals seeking an evidence-based, clinically usable app for bipolar disorder. Although there have been academic advances in development of digital interventions for bipolar disorder, this work has yet to be translated to the publicly available app marketplace. This unmet need of digital mood management underscores the need for a comprehensive evaluation system of mental health apps, which we have endeavored to provide through our framework and accompanying database (apps.digitalpsych.org). Background With the rise of digital tools and applications, smartphone apps offer promising tools to augment support and self-management for individuals with bipolar disorder (BD). With a prevalence rate of > 1% of the world's population, patients who need chronic illness management may not have access to subspeciality clinics, and primary care providers are increasingly comfortable working with this patient population, provided supportive technology to facilitate monitoring and follow-up. (Rowland and Marwaha 2018). Studies have suggested that, as of 2019, smartphone ownership among individuals with bipolar disorder exceeds 75% (Hidalgo-Mazzei et al. 2019;Young et al. 2020), and there is evident interest in app use among individuals with BD, with 40% of young adults with bipolar disorder having used an app for symptom management and 79% of those not using an app wanting to try (Nicholas et al. 2017). The feasibility and preliminary efficacy of mobile interventions for bipolar disorder have been validated in a variety of settings, with programs including a personal digital assistant (Depp et al. 2010), weekly text-services, and other short-message based interventions demonstrating evidence of benefit (Bopp et al. 2010). There is a robust literature base of both internet-based and smartphone-based interventions supporting self-management strategies for ongoing monitoring, education, and maintaining hope (Gliddon et al. 2017). Smartphone apps can enable both active (user-inputted) and passive (automatically collected) data collection to aid in BD diagnostics, advance evidence-based treatments like social rhythm therapy, and self-management (Torous and Powell 2015). Software applications such as Mood Rhythm and MONARCA, for example, use sensors and self-assessments in order to gain data about sleep, social activity, and mood to provide more information for both the patients and their clinicians (Matthews et al. 2016). 
In a 6-month, randomized, placebo-controlled, single-blind, parallel group trial utilizing MONARCA, bipolar patients who used the app, in comparison to those who did not, had fewer symptoms of mania, highlighting that both active and passive data collection may meaningfully augment conventional treatment (Faurholt-Jepsen et al. 2015). A study involving smartphone-based monitoring systems in conjunction with wrist worn accelerometers demonstrated adequate usability and feasibility (Faurholt-Jepsen et al. 2019a). Indeed, digital phenotyping-derived metrics, like location and activity patterns, social phone utilization, and symptom change, are emerging as targets for mobile interventions for BD (Huckvale et al. 2019). These metrics may help elucidate digital biomarkers to detect both diagnostic mood status and symptom change (Ortiz and Grof 2016), ultimately facilitating the potential for early relapse detection (Jacobson et al. 2019; Faurholt-Jepsen et al. 2019b). Survey studies have indicated that individuals with bipolar disorder are interested in and open to using apps for illness management, including apps with automatic data collection to complement traditional user-inputted metrics (Daus et al. 2018). Digital phenotyping is the latest promising avenue of exploration, adding to the robust base of literature supporting the efficacy of and receptiveness to smartphone interventions for BD. However, these promising research findings may not translate into widely available apps that patients and clinicians can use as tools today if these digital tools are not available to an end user of the app store. While a search on the app store yields numerous results with the "bipolar" search term, it is unclear whether the order of returned search results is at all associated with app quality or clinical utility. Since the last systematic review of publicly available apps for BD in 2015, which highlighted serious concerns around privacy, evidence, engagement, and potential for harm, it is unknown if the landscape has meaningfully changed in response to emerging research about the potential of digital interventions for BD, and whether an average app store user is now more able to access quality, research-supported apps (Nicholas et al. 2015). Other excellent recent reviews have focused on the evidence for bipolar disorder apps based on the research literature, but what is available and being offered to patients today in the public app marketplace is likely different from the subject of those reviews (Bauer et al. 2020). We intend to address this lacuna in the existing literature: while recent work by our team and others has thoroughly investigated the potential of apps in research settings, far less attention has been paid to what an app store end user is able to find and access. Advances in digital health research are promising, but without widespread translation to the broader public their impact is limited. We thus sought to examine the safety, relevance, and clinical utility of the apps that are most readily available for a layperson seeking tools for BD, which is also understudied when compared to technology for anxiety, depression, and other mental health conditions. In the absence of strict oversight to help guide users to appropriate tools in the app store, we have proposed an enduring and reproducible framework to guide the evaluation and assessment of mobile health apps (Lagan et al. 2020).
The framework is based on the American Psychiatric Association's app evaluation model, which has been well studied and utilized (Martinengo et al. 2019; Cohen et al. 2020; Bergin and Davies 2020; Ondersma and Walters 2020; Levine et al. 2020). The framework comprises 105 different questions examining app accessibility, origin, functionality, privacy, features, and clinical foundation, ultimately providing a comprehensive picture of app quality and utility. Each question in the framework corresponds to a principle in the American Psychiatric Association's app evaluation model but is now reduced to a reproducible data point or number to encourage transparency and cultural respect. Seeking to identify the attributes of the most accessible apps for bipolar disorder, we applied this framework to the 100 top-returned apps on the Apple iOS store, investigating a wide array of their features along with correspondence to evidence-based principles of BD treatment and self-management. Methods Beyond appearing in the search, there were no inclusion criteria for app analysis, as an objective of this study was to assess the features of the most readily available and easily findable apps for a layperson. On February 20, 2020 the term "bipolar" was entered into the iOS app store. Of the first 107 returned apps, nine were not usable (5 required an access code; 2 were unavailable in English; 2 became unavailable over the course of the study). The remaining 98 apps were downloaded onto an iPhone 6 and iPhone 8 for complete assessment. Apps that were not free to download were purchased by raters for analysis. Each app was assessed by at least two raters (AMR, ANR, EL, SL). Apps were evaluated based on our previously established 105-question framework created in conjunction with the American Psychiatric Association (APA). These 105 questions are based on the APA's App Evaluation Model, with questions sorted into six categories: Origin, Functionality, and Accessibility (31 questions); Inputs and Outputs (15 questions); Privacy and Security (14 questions); Evidence and Clinical Foundation (8 questions); Features and Engagement Style (31 questions); and Interoperability and Data Sharing (6 questions). The questions are all objective and can be coded with a binary or numeric answer. The framework employed thus promotes comprehensive user testing while minimizing subjectivity and providing transparent assessment results. The ratings for each individual app are available upon request, and the apps suitable for mental health and wellness use are included in the database, which can be accessed at https://apps.digitalpsych.org/Apps. All raters underwent a 1-h training in order to complete the framework questions. Interrater reliability was assessed using Cohen's kappa statistic (McHugh 2012). Following the training, the majority of raters demonstrated very good interrater reliability (defined as a kappa value of above 0.75), with an average kappa of 0.84 across the first five apps rated. Discrepancies between the two raters were initially addressed one-by-one in discussion and used to clarify the description of each question, and subsequently rectified by a second look at the source of the discrepancy (either app store information, privacy policy, or in-app features and functionality). All training materials have now been published online alongside the database, enabling any interested user to undergo the training and become an app rater. The resulting data was analyzed with descriptive statistics.
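As a small illustration of the interrater-reliability step described above, Cohen's kappa for two raters' binary answers can be computed as follows; the rating vectors here are invented placeholders, not data from the study.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary answers from two raters for the same 15 framework questions.
rater_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1]
rater_2 = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")  # values above 0.75 were treated as very good agreement
```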
Results Of the first 107 bipolar disorder related apps, nine (8%) were inaccessible without an access code or unavailable in English, and thus, a total of 98 iOS bipolar disorder apps were evaluated with our framework (Fig. 1). Function and relevance In the iOS Store, apps returned on a search for bipolar disorder (BD) were categorized as Business (n = 1), Education (n = 4), Entertainment (n = 2), Games (n = 13), Health and Fitness (n = 34), Lifestyle (n = 19), Magazine and Newspaper (n = 1), Medical (n = 19), Productivity (n = 2), Social Networking (n = 3), and Stickers (n = 2). The range of primary app functions reflects that the top returned apps are not necessarily patient facing or relevant to an individual with BD. Origin and accessibility The framework's questions around origin and accessibility offer a comprehensive picture of who can download and access the app, including considerations like availability across different platforms, cost, offline functionality, last update, app size, and availability in different languages. Of the 98 top-returned iOS apps, 35 apps were also available on the Google Play store. 72 apps were free to download, although 41 of these apps required an in-app purchase or required a subscription to access the full swath of content. Of the 22 apps that were not free to download, the minimum cost was $0.99 USD and the maximum cost was $49.99 USD, with a median cost of $1.99. Sixty-six apps did not require an active internet connection after download and could be accessed offline, while 32 apps required internet connection after download to access content. Many of the apps were infrequently downloaded and not currently updated, prompting concern, given that a lack of updates in the last 180 days is a metric associated with lower app quality (Wisniewski et al. 2019a). Only 38 apps had been updated in the last 180 days in the iOS Store, with 10 apps still in the first version. The average rating for analyzed apps was 4.2 in the iOS store and 4.0 in the Google Play store. 44 apps had over 20 reviews in the iOS store and 23 apps had over 20 reviews in the Google Play store. The fact that more than half of the apps available on iOS had fewer than 20 reviews suggests that apps returned on search for BD may not be widely downloaded (the iOS app store does not provide direct data on number of downloads). The average app size was 47.5 MB in the Apple Store and 20.6 MB in the Google Play store, and 20 apps were available in at least one other language in addition to English, with the majority (n = 19) of these offering Spanish functionality. Regarding app origin, only two apps were affiliated with a university or healthcare organization, and both were treatment guideline apps intended for physicians. None of the top 100 apps had government affiliation, although several governmental organizations, including the Department of Veterans Affairs and Department of Defense, have ventured into the mental health app space and developed apps (Owen et al. 2018). None of the apps that had been assessed in research studies for effectiveness at BD management or treatment appeared in the top 100. Inputs and outputs Questions about inputs and outputs help illustrate the kind of data that each app collects and returns to the user. The most common inputs among the 98 apps analyzed were surveys (n = 48), diaries or user inputted text entries (n = 38), geolocation (n = 9), and camera (n = 9).
Six of the apps that collected geolocation data from a user's phone were mood-tracking apps, while two were apps with a peer support or community forum. The most common outputs were notifications (n = 49), summaries of data (n = 41), graphs of data (n = 40). 8 apps provided a link to formal care or coaching within the app itself. Privacy and confidentiality Privacy policies were available to the public in 66 of the 98 apps, through either a link from the app store description or within the app itself. 27 apps (40.9%) of apps with privacy policies mentioned the disclosure of users' personal information to third parties. Although several apps included features that prompted users to enter personal health information such as medication tracking (n = 5) alongside identifying data, only one app claimed to be HIPAA compliant. Clinical foundation Questions about clinical foundation assess each app's veracity of claims, support in peer-reviewed studies, and potential to cause harm. 93 apps in our analysis provided what they claimed, while 5 apps did not meet their claims, failing to offer the features that were advertised in their app description. One app's description read, for example, "stimulate vital areas of the brain and heal natu-rally…prepare yourself for Coronavirus!" despite providing no psychoeducation or links to information about COVID-19 (Brainwaves Psychological 2020). Another claimed to provide meditation and yoga, but instead primarily served as a game without offering comprehensive meditation modules. Only one app had supporting feasibility and efficacy studies, with 7 Cups of Tea backed by one feasibility study (Baumel and Schueller 2016) and two efficacy studies (Baumel et al. 2018;Baumel 2015), none of which involved participants with bipolar disorder specifically. Although another app, Daylio, was the subject of an article in JMIR's mHealth (Chaudhry 2016), the article offered an in depth description of the app without a supporting feasibility or efficacy study and thus was not included in our analysis. 12 apps were rated as capable of causing harm to a user. 11 of these potentially harmful apps offered unmoderated forums or information that was triggering and not aligned with current treatment guidelines. One app, for example, encouraged users to "break the silence on your hopelessness and depression by speaking up in a small way and using a background that reflects your inner misery and despair" and provided phone wallpapers with messages including, "I disappoint myself " and "I don't like what I'm becoming" (Wallpapers 2020). The other app was a "bipolar test and personality quiz" that offered users a "test" for bipolar disorder (Bipolar Test 2020). The questions, however, did not align with the clinically-validated Mood Disorder Questionnaire (MDQ) and even if a user obtained a result confirming they were at high risk of BD, the app provided no links or references and did not direct users to a medical professional or other resources. Features and engagement style Mood-tracking, journaling, and psychoeducation were the most common features offered by apps returned in the search for bipolar disorder. 43 apps offered mood-tracking, 41 apps provided a platform for journaling, and 19 apps provided psychoeducation about either BD specifically or coping strategies and treatments more broadly. The most common engagement style was through user-generated data (n = 44), gamification (n = 23), and peer support (n = 13). 
Features varied significantly as a function of cost (χ 2 = 15.982, p = 0.042), with apps that were totally free more likely to offer screeners or assessments, and apps with either up-front costs or in-app purchases more likely to provide meditation and mindfulness (Table 1, Fig. 2). Discussion Our analysis reveals that a simple app store search may not be sufficient for an individual seeking to find an app suitable to BD education, management, or treatment, as many apps in the top 100-including paid apps-were irrelevant or raised concerns that warrant a cautious approach to app selection. Some apps offered harmful or misleading content. The order that the search returned apps was not indicative of clinical utility, as some misleading, stigmatizing, and dangerous apps appeared before apps with features suitable to BD management and treatment. One app encouraged users to "break the silence on your hopelessness and depression by speaking up in a small way and using a background that reflects your inner misery and despair" and provided phone wallpapers with messages including, "I disappoint myself " and "I don't like what I'm becoming" (Wallpapers 2020). Another app's description read, "stimulate vital areas of the brain and heal naturally…prepare yourself for Coronavirus!" despite providing no psychoeducation or links to information about COVID-19 (Brainwaves Psychological 2020). One app offered no features besides downloadable stickers of an "unpredictable bipolar bear" depicted in cartoon imagery (The Bipolar Bear Bonacorso 2020). Only one app that appeared in the search had supporting feasibility and efficacy studies, and even those peerreviewed publications did not involve individuals with Bipolar Disorder, instead focusing on the app's ability to mitigate symptoms of depression. While the overtly incorrect claims by some apps are a serious area of concern, even subtler claims made by commercial apps, such as "manage your symptoms of Bipolar Disorder!" should be approached cautiously given the lack of evidence across all returned apps. Another challenge was that some apps offered what appeared to be evidence-based interventions, but upon closer inspection were likely not. For example, thirteen apps claimed to offer peer support in some form; nine of them, however, did so via unmoderated forums, where users were able to post and view content posted on a public forum, or the "moderators" did not intervene until after a comment was reported. One mood tracking app, for example, automatically published all mood and diary entries to a public newsfeed and required an in-app purchase in order to access a private diary that would not be published (Moodtrack Social Diary 2020). In all of these apps, the risk of triggering, non clinically-useful content was concerning. The concept of peer support was also defined loosely. While certified peer specialists have been shown to improve treatment outcomes across a range of mental health conditions (Felton et al. 1995), no apps we reviewed utilized certified peer specialists, instead defining "peer" to be anyone else using the app. Integrating peer specialist support into technology is a continuing area of research, with preliminary evidence of both feasibility and efficacy, but our analysis reveals that these advances in peer specialist technology research have yet to be translated to the area of publicly available BD apps (Fortuna et al. 2018). 
The lack of privacy policies and, specifically, the lack of HIPAA-compliant apps further underscores the necessity of a cautious approach to app selection. Previous literature has highlighted the numerous risks around disclosure of behavioral data by mental health apps (Bauer et al. 2017). Our study of 98 BD apps found that 32.7% of apps did not have a privacy policy readily available to users either through the App Store or in the app itself. Moreover, of the apps with a privacy policy, the average reading grade level was 12.1 (SD 2.5), with only 7 apps having a grade level of 9th grade or lower and 34 apps having a collegiate reading grade level or higher. While privacy and security remain important features to users (Dehling et al. 2015), the lack of transparent policies and the college-level literacy they require indicate the need to improve the state of privacy and transparency among BD apps. Comparison with prior work This study builds upon numerous prior efforts in the area of mental health apps, allowing an analysis of potential changes in the space. Compared to the review of BD apps in 2015 by Nicholas et al., we employed fewer search terms, utilizing only "bipolar" instead of "bipolar", "manic depression", "mood swings", and "mood". By using this singular search term, our objective was to assess the apps that would be most readily accessible for an individual searching for a BD app. When including the top 98 returned apps in our analysis, we found that 43% of apps did not even mention Bipolar Disorder in their app title, description, or content. Like Nicholas et al., we found symptom monitoring tools such as mood-tracking and journaling to be most common among reviewed apps; in contrast to the 2015 review, however, we identified 13 apps providing community or peer support, a significant increase from the 4 such apps five years ago. Another major improvement is around privacy policies. A striking finding from Nicholas et al. was that only 18 of 82 apps had a privacy policy, a figure that has noticeably advanced, with 66 of the 98 apps we reviewed now possessing a privacy policy. Our team has done smaller app evaluation studies where we looked at the top 10 apps for bipolar disorder (and other conditions) as returned by an app-store search, but the results here are different as they seek to quantify the state of the field beyond the highlights of the app stores (Wisniewski et al. 2019b; Mercurio et al. 2020). In terms of evidence-based apps, however, the commercial app space has not significantly progressed: as in the 2015 review by Nicholas et al., we identified just one app supported by feasibility or efficacy studies. This finding suggests that research around digital tools for Bipolar Disorder has not been translated into many evidence-based, clinically relevant apps on the app store. As the need for digital resources becomes increasingly urgent in the wake of the COVID-19 pandemic, the app store content most available to end users is of the utmost importance. Overall, while our analysis demonstrates that some strides have been made in the landscape of apps for bipolar disorder since 2015, a noticeable gap between research and practice is still present. Despite the body of research highlighting the potential of apps to support individuals with bipolar disorder, such research has yet to be translated to the publicly available app marketplace, where only one app that appears in a user's search for "bipolar" is backed by supporting studies.
Although there is evidence that individuals with bipolar disorder are interested in apps with automatic data collection to aid in symptom management, few of the apps utilized passive data (Daus et al. 2018). And while the effectiveness of peer support has received growing attention, available "peer support" apps have yet to progress beyond a potentially triggering community forum model. If the potential of technology is to be fully harnessed, research-backed apps must be made available to the public, and a comprehensive app evaluation system is urgent in light of limited regulation from the app stores. Limitations This study employed a single search term, "bipolar, " as we sought to analyze the most immediately accessible apps for a layperson seeking BD resources in the app store. Utilizing this limited search term, however, prevented a full perusal of apps that may be relevant for an individual with bipolar disorder; it is possible that mood-tracking, mindfulness, and other tools for self-management may not explicitly reference BD at all but nonetheless offer clinical benefit. For example, the app HealthRhythms was designed to target bipolar disorder but does not return in any search for the term (Measure Health 2020). Recognizing the limited scope of this work, we view our analysis as a marker of the current state of the field and call for better processes in finding a relevant, clinically usable app via the app store for an individual seeking resource for Bipolar Disorder. This review is just one component of our broader effort to link clinicians and patients with safe, effective apps. Our database of mental health apps enables users to filter and find apps based on desired characteristics and thus connects individuals to tailored tools more effectively than a simple app store search (Nicholas et al. 2015). Additional limitations arise from individual differences in the algorithm that determines which apps appear in what order on an iOS store search. The algorithm in fact changes daily, as a search a week later conducted on the same phone yielded the same apps but in a slightly different order. A growing body of literature highlights the rampant turnover characterizing the app space, with, for example, a clinically relevant app for depression becoming available every 2.9 days (Larsen et al. 2016). Given the dynamic nature of the app store, and the increasing focus on developing technology to support mental health (Monteith et al. 2016), it is possible that clinically relevant apps for BD have emerged in between drafting and publication of this study. Conclusion Despite both the continued proliferation of mental health apps and promising research around the efficacy of smartphone apps for management and treatment of bipolar disorder, our study highlights how it remains difficult for an individual seeking a relevant app for BD to find an appropriate tool in the app store. The primary shortcoming is that users must wade through irrelevant, misleading, and even potentially dangerous apps to find a relevant one. The lack of privacy protection and transparency around user data, along with the lack of supporting evidence among available apps and potential for misleading content, all raise concerns about the most accessible public facing apps and highlight the need for a way to evaluate apps beyond app store metrics. 
We employed a framework for app assessment that is research based and entirely reproducible, paving the way for future analyses of health apps and providing a tool to help clinicians, patients, and the wider public reap the benefits of digital health. All of our results are available to the public on our database that is informed by our evaluation model. Recognizing the limitations of this study, we regularly update our database to reflect the changing nature of available apps and emergence of new ones. We encourage crowd-sourcing and collaboration around app evaluation in order to provide clarity amidst the profusion of available apps for end users, ultimately equipping them to make an informed choice around an app to help them meet their goals.
COVID-19 Risk Factors for Cancer Patients: A First Report with Comparator Data from COVID-19 Negative Cancer Patients Simple Summary The COVID-19 pandemic has had a detrimental impact on cancer patients globally. Whilst there are several studies looking at the potential risk factors for COVID-19 disease and related death, most of these include non-cancerous patients as the COVID-19 negative comparator group, meaning it is difficult to draw hard conclusions as to the implications for cancer patients. In our study, we utilized data from over 2000 cancer patients from a large tertiary Cancer Centre in London. In summary, our study found that patients who are male, of Black or Asian ethnicity, or with a hematological malignancy are at an increased risk of COVID-19. The use of cancer patients as the COVID-19 negative comparator group is a major advantage to the study as it means we can better understand the true impact of COVID-19 on cancer patients and identify which factors pose the biggest risk to their likelihood of infection with SARS-CoV2. Abstract Very few studies investigating COVID-19 in cancer patients have included cancer patients as controls. We aimed to identify factors associated with the risk of testing positive for SARS CoV2 infection in a cohort of cancer patients. We analyzed data from all cancer patients swabbed for COVID-19 between 1st March and 31st July 2020 at Guy’s Cancer Centre. We conducted logistic regression analyses to identify which factors were associated with a positive COVID-19 test. Results: Of the 2152 patients tested for COVID-19, 190 (9%) tested positive. Male sex, black ethnicity, and hematological cancer type were positively associated with risk of COVID-19 (OR = 1.85, 95%CI:1.37–2.51; OR = 1.93, 95%CI:1.31–2.84; OR = 2.29, 95%CI:1.45–3.62, respectively) as compared to females, white ethnicity, or solid cancer type, respectively. Male, Asian ethnicity, and hematological cancer type were associated with an increased risk of severe COVID-19 (OR = 3.12, 95%CI:1.58–6.14; OR = 2.97, 95%CI:1.00–8.93; OR = 2.43, 95%CI:1.00–5.90, respectively). This study is one of the first to compare the risk of COVID-19 incidence and severity in cancer patients when including cancer patients as controls. Results from this study have echoed those of previous reports, that patients who are male, of black or Asian ethnicity, or with a hematological malignancy are at an increased risk of COVID-19. Introduction Whilst the COVID-19 research and innovation landscape has led to a plethora of publications in the context of cancer, we still do not have a good understanding of what may make some cancer patients more likely to get infected with SARS-CoV-2. To our knowledge, most studies to date discuss the risk factors for COVID-19 in the general population, as described recently in a meta-analysis by Pijls et al. [1]. Based on 59 studies, including 36,470 patients, male sex and age >70 were found to be consistently associated with a higher risk of COVID-19, severe disease, intensive care unit (ICU) admission, and death [1]. In the context of cancer, several cohort studies presented the clinical and demographic characteristics of cancer patients diagnosed with SARS-CoV-2 infection and/or their association with COVID-19 outcomes [2,3]. However, the true rate of COVID-19 disease in oncology patients remains unquantified because the denominator is not known, i.e., the actual number of all cancer patients infected with SARS CoV-2 [4]. 
We have previously reported on the scale of COVID-19 infection in cancer patients based on 1 week of COVID-19 testing in our Cancer Centre and identified that 1.38% of cancer patients tested positive for COVID-19 [5]. We have now completed this data, which provides us with the unique opportunity to identify which factors are associated with an increased risk of SARS-CoV-2 infection in cancer patients, a question which hitherto has not been investigated due to the lack of data on a comparator, i.e., cancer patients who tested negative for SARS-CoV-2 infection. In this study, we aimed to describe factors associated with the risk of COVID-19 in cancer patients, whilst using cancer patients with a COVID-19 negative test as the comparator. Materials and Methods Our Centre in South-East London, treating approximately 8800 patients annually (including 4500 new diagnoses), is one of the largest comprehensive cancer Centers in the UK and was at the epicenter of the UK COVID-19 epidemic during the first wave. We reported our first COVID-19 positive cancer patient on 29th February 2020. Until 30th April 2020, a COVID-19 swab was ordered for cancer patients with symptoms necessitating hospitalization or for those scheduled to undergo a cancer-related treatment. From 1st May 2020 until mid-June 2020, COVID-19 testing was introduced as a standard of care, with about 25% of patients being swabbed daily depending upon staff and testing kit availability [5]. COVID-19 was categorized based on the World Health Organization (WHO) criteria for disease severity [6], and we included those who died from COVID-19 in the severe group. A detailed analysis of the COVID-19 positive cancer patients (29th February until 30th June 2020) at our center was published elsewhere and focuses specifically on the cancer patient characteristics indicative of COVID-19 severity and death [7,8]. Here, we analyzed data from 1st March to 30th June 2020 for COVID-19 test results in all cancer patients at our center. All data was collected and analyzed as part of Guy's Cancer Cohort (Ethics Reference number: 18/NW/0297) [9], a research ethics committee-approved research database of all routinely collected clinical data of cancer patients diagnosed or treated at Guy's and St Thomas' (GSTT) NHS Foundation Trust. Over 83% of patients filled out a symptom assessment form, of which 82% were asymptomatic. Based on their demographics and tumor characteristics, this sample can be considered to be representative of the total population. Statistical Analyses Descriptive statistics were used to describe the demographic and clinical characteristics of the patients based on COVID-19 status. Socio-economic status (low, middle, high) was categorized based on the English Indices of Multiple Deprivation for postcodes [10]. Radical treatment referred to those patients with a chance of long-term survival or cure. We conducted logistic regression analyses to identify which factors were associated with a positive COVID-19 test. Additional analyses were conducted, whereby a positive COVID-19 test was further categorized into mild/moderate or severe disease. 
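For readers who want to reproduce this kind of analysis on their own data, the sketch below shows the general form of a logistic regression yielding odds ratios with 95% confidence intervals. The data frame and column names are invented placeholders with simulated values; the study's actual models used DAG-informed adjustment sets (described below) rather than this single specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis dataset; column names and values are placeholders, not study data.
df = pd.DataFrame({
    "covid_positive": np.random.binomial(1, 0.09, 500),
    "sex": np.random.choice(["Female", "Male"], 500),
    "ethnicity": np.random.choice(["White", "Black", "Asian"], 500, p=[0.8, 0.15, 0.05]),
    "cancer_type": np.random.choice(["Solid", "Haematological"], 500, p=[0.85, 0.15]),
})

model = smf.logit(
    "covid_positive ~ C(sex, Treatment('Female'))"
    " + C(ethnicity, Treatment('White'))"
    " + C(cancer_type, Treatment('Solid'))",
    data=df,
).fit(disp=False)

# Odds ratios with 95% confidence intervals, in the form reported in the Results.
odds_ratios = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
odds_ratios.columns = ["OR", "2.5%", "97.5%"]
print(odds_ratios)
```

Exponentiating the fitted coefficients and their confidence limits is what turns the log-odds estimates into the odds ratios quoted in the abstract and tables.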
Pneumonia with or without sepsis (i.e., those patients managed on the ward) was an indicator of mild/moderate COVID-19, whereas acute respiratory distress syndrome (ARDS), septic shock (i.e., those patients whose severity reached criteria for Intensive Care Unit admission, if deemed clinically appropriate), or COVID-related death were indicators of severe COVID-19, as defined by the WHO COVID-19 classification [6]. We used a directed acyclic graph (DAG) ( Figure A1 in Appendix A) to inform the models to quantify the association between each factor and COVID-19 status. Each factor was individually set as the main exposure variable in the model when determining the minimal adjustments required (Table A1 in Appendix A). Cohort Demographics Of the 2152 patients included in the study, 190 patients (9%) tested positive for COVID-19 (Table 1), of which 34 (18%) were asymptomatic. Overall, there were slightly more females than males in the cohort (55% vs 45%, respectively); however, in the COVID-19 positive cohort, there was a higher proportion of males compared to females (59% vs 41%, respectively). The age groups were fairly similarly distributed between the COVID-19 positive and negative patients. The mean age in the COVID-19 positives was 63.80 (SD 14.80), whilst the mean age in the COVID-19 negatives was 62.50 (SD 13.20). The majority of patients were of a low SES (86%). Overall, 12% of patients were black and just under 3% were Asian. When stratified by COVID-19 status, 22% were black in the COVID-19 positive group compared to under 12% in the negative patients. In terms of cancer characteristics, the most common tumor type was breast in both the overall (21%) and COVID-19 negative patients (22%). However, in the COVID-19 positive group, the largest proportion of patients had a urological tumor (22%). Hematological cancers were the second most common in the COVID-19 positive group, present in 17% of patients. With respect to the treatment paradigm, in the overall cohort, palliative treatment was the most common (45%), followed by adjuvant (20%) and radical treatment (18%). There was a higher proportion of patients who were treatment-naïve in the COVID-19 positive group compared to the COVID-19 negative patients (10% vs 2%, respectively). Of the patients on palliative treatment, the majority of patients were either on first-or second-line treatment. For those patients on systemic anticancer therapy (SACT), the most common type was systemic chemotherapy (45%); this remained true when stratified by COVID-19 status. Whilst the highest proportion of patients had their cancer diagnosed just under a year prior to their test for COVID-19, the median time from cancer diagnosis was 13 months (IQR:4, 37 months). However, for those with COVID-19, , the median time since cancer diagnosis was 9 months (IQR:2-45 months) compared to 14 months (IQR:4, 37 months) in those patients who did not have COVID-19. Risk of Developing COVID-19 Males were at an increased risk of being diagnosed with COVID-19 compared to females (OR = 1.85, 95%CI:1.37-2.51) ( Table 2). Patients of black ethnicity (OR = 1.93, 95%CI:1.31-2.84) and those with a hematological cancer type (OR = 2.29, 95%CI:1.45-3.62) were at an increased risk of having a positive COVID-19 result compared to those of white ethnicity and those with solid malignancies, respectively. 
Patients who were on either radical or palliative treatment appeared to be at a lower risk of COVID-19 compared to patients on no active treatment (Radical, OR = 0.37, 95%CI:0.20-0.66 and palliative, OR = 0.39, 95%CI:0.22-0.70). Discussion To the best of our knowledge, this is one of the first studies to assess the risk of COVID-19 in a real-world cancer patient population, using the COVID-19 negative group as controls for those with COVID-19. Our results show that cancer patients who are male, of black ethnicity and those with a hematological cancer type were at a significantly increased risk of COVID-19 and more specifically mild/moderate disease. Furthermore, cancer patients who are male, of Asian ethnicity, with a hematological cancer type and those diagnosed with cancer over 5 years ago were at a significantly increased risk of severe COVID-19. The results from this study corroborate with those of our previously reported results when looking at the risk of COVID-19 severity and death in a cohort of COVID-19 positive cancer patients [7,8]. We previously reported a prevalence of 1.4% of COVID-19 in our cancer population, a figure taken from 1 weeks' worth of testing as standard care at our center [5]. In the current study, 9% of our cancer population tested positive for COVID-19 over the 5 months of this study. This figure is a reflection of the targeted testing carried out at our center. However, it is also possible that, during the latter months, some patients were being tested and diagnosed with COVID-19 at their local testing centers; therefore, we may not have captured all COVID-19 positive cancer patients under our care. A previous study, which looked at the prevalence of COVID-19 in a population of cancer patients, found that 18% of patients who had suspected COVID-19 had detectable SARS-COV2 infection [11]. This higher proportion of detected cases compared to our study may be somewhat explained by the reasons for testing. In the study by Assaad et al., patients were only tested if COVID-19 was suspected. However, at our center, patients underwent testing for COVID-19 infection for numerous reasons over the study period, including screening for treatment, such as having surgery, being symptomatic, and due to routine testing. Only 17% of our patients were symptomatic or deemed clinically or radiologically suspicious for COVID-19. It is also worth noting that our data was collected at the height of the first wave of the COVID-19 pandemic when the UK was in national lockdown and, therefore, most of our patients were shielding. These factors may have influenced our lower infection rate compared to that of Assaad et al. in France [11]. Males have consistently been observed to be at an increased risk of COVID-19. In their systematic review, comprising 17 studies, Park et al. reported an OR of 1.60 (95%CI:1.38-1.85) for the composite of severe COVID-19 and all-cause death for males compared to females [12,13]. The current study has shown that this increased risk for males also stands true for patients with cancer. This is an important finding as most studies do not look at cancer patients specifically. Biological mechanisms surrounding the role of ACE2 and TMPRSS2 in males enhancing the viral entry and invasion of cells have been proposed, with increasing evidence to support this [12,14]. Here, we also report that cancer patients of black ethnicity were at an increased risk of developing COVID-19, albeit mild/moderate infection, compared to patients of white ethnicity. 
We also report an increased risk of severe COVID-19 for patients of Asian ethnicity compared to those of white ethnicity. These results concur with our preceding report on COVID-19 positive patients only [7,8]. In this previous study, we found that Asian patients, but not black patients, were at an increased risk of severe COVID-19 (compared to mild/moderate disease) when compared to white patients. Black, Asian, and minority ethnic (BAME) individuals have repeatedly been over-represented within non-cancerous COVID-19 cohorts [15][16][17][18][19]. One study, performed using data from the UK biobank, investigated whether factors, such as deprivation, cardiometabolic morbidities, and 25(OH)-vitamin D levels, attenuated the association of ethnicity with COVID-19 status [15]. Similar to our study, they found no significant association with deprivation and risk of COVID-19 and further concluded that these factors did not explain the strong association with ethnicity [15]. Several published studies, including our own, have reported worse outcomes and higher mortality rates in patients with hematological malignancies compared to solid cancers [7,[20][21][22]. Whilst we did not look at mortality in this current study, we found that patients with a hematological malignancy were at a two-fold increased risk of severe COVID-19, thus complimenting the results from other studies. A recent, yet to be published, study delved into potential reasons behind this association [22]. The researchers found that patients with hematological malignancies had an impaired SARS-CoV2-specific antibody response when compared to those with solid malignancies; an observation also noted by Abdul-Jawad et al. [22,23]. They further concluded that, in the absence of a humoral response, CD8 T cells were critical for the survival of hematological cancer patients with COVID-19 [22]. The authors explain that this immune response may impact the response that hematological patients have to the COVID-19 vaccines, thus highlighting this as a vital area for future research. Being on a curative or palliative treatment paradigm was found to be associated with a decreased risk of COVID-19. This interesting result may in part be explained by the behavior of cancer patients. Many cancer patients were either asked to or chose to shield when the UK went into lockdown in March 2020. As a result, patients undergoing active treatment, whether this be curative or palliative, may have been protected from COVID-19 due to a reduced exposure. On the contrary, the study by Assaad et al. found a higher proportion of patients undergoing cancer treatment in the past month in the COVID-19 positive patients compared to those who remained COVID-19 free (p = 0.049) [11]. Lee et al. reported that cancer patients who had undergone chemotherapy in the 4 weeks prior to testing positive for COVID were not at risk of increased mortality from COVID-19 [24]. In our previous study, we also reported that patients on palliative treatment were at increased risk of being diagnosed with severe COVID-19 (compared to mild/moderate disease) and COVID-19-related death [7,8]. A strength of this study is the use of COVID-19-negative cancer patients as controls in our cohort. Previous studies have frequently used non-cancerous patients as controls [20,[25][26][27][28] or performed case-control studies using non-cancerous patients only [1,15]. 
By using cancer patients as the control, we can better understand the true impact of COVID-19 on cancer patients and which factors pose the biggest risk to their likelihood of infection with SARS-CoV2. By including our entire cancer population, this can potentially minimize selection biases in comparison to reports including only the COVID-19 positive patients. Many of these studies include large multi-center consortiums, where the denominator of patients and their outcomes are unknown. A limitation to the current study is the use of single center data. Having said this, we were still able to use data on a large population of patients (n = 2152). Moreover, as previously discussed, some COVID-19 positive cases may have been missed due to testing in the community or at external centers. We minimized the impact of this by cross-checking our data using Network Hospitals and Cancer Alliance networks. Despite rigorous internal validation of the dataset, a further limitation to the study is the proportion of missing data for certain variables, such as ethnicity (30%) and smoking status (44%). In light of the results published by Monin-Aldama et al. [29], whereby the immune efficacy of the COVID-19 vaccine was increased with a booster vaccine within 21 days in cancer patients, future research (already underway) includes looking into the effects of the COVID-19 vaccination program on these cancer patients. Conclusions Data from this study, comparing both COVID-19 negative and positive cancer patients, has provided us with the unique opportunity to identify which factors are associated with an increased risk of SARS-CoV-2 infection in cancer patients. Results from this study have echoed those of previous reports that both demographic (sex and ethnicity) and clinical characteristics (type of malignancy) are associated with an increased risk of COVID-19. To the best of our knowledge, this is one of the first studies to utilize cancer patients as the comparator group for COVID-19 risk factors; hence, studies to date need to be considered carefully, since they often include non-cancer patients as the denominator. These results, together with data from our previous studies on mortality, can further help clinicians to identify their patients most at risk of COVID-19, thus giving them the opportunity to take appropriate actions to alleviate this risk. Future studies will also have to take into account the effects of COVID-19 vaccination programs, which are currently being rolled out across the globe [30].
Metabolomics Coupled with Proteomics Advancing Drug Discovery toward More Agile Development of Targeted Combination Therapies* To enhance the therapeutic efficacy and reduce the adverse effects of traditional Chinese medicine, practitioners often prescribe combinations of plant species and/or minerals, called formulae. Unfortunately, the working mechanisms of most of these compounds are difficult to determine and thus remain unknown. In an attempt to address the benefits of formulae based on current biomedical approaches, we analyzed the components of Yinchenhao Tang, a classical formula that has been shown to be clinically effective for treating hepatic injury syndrome. The three principal components of Yinchenhao Tang are Artemisia annua L., Gardenia jasminoids Ellis, and Rheum Palmatum L., whose major active ingredients are 6,7-dimethylesculetin (D), geniposide (G), and rhein (R), respectively. To determine the mechanisms underlying the efficacy of this formula, we conducted a systematic analysis of the therapeutic effects of the DGR compound using immunohistochemistry, biochemistry, metabolomics, and proteomics. Here, we report that the DGR combination exerts a more robust therapeutic effect than any one or two of the three individual compounds by hitting multiple targets in a rat model of hepatic injury. Thus, DGR synergistically causes intensified dynamic changes in metabolic biomarkers, regulates molecular networks through target proteins, has a synergistic/additive effect, and activates both intrinsic and extrinsic pathways. Currently, a paradigm shift is occurring in that there is a new focus on agents that modulate multiple targets simultaneously, rather than working at the level of single protein molecules (1). Multiple-target approaches have recently been employed to design medications that are used to treat atherosclerosis, cancer, depression, psychosis, and neurodegenerative diseases (2). During the past few years, the pharmaceutical industry has seen a shift from the "one disease, one target, one drug" and "one drug fits all" approaches to the pursuit of combination therapies that include more than one active ingredient (3,4). Because of the complexity of medicine, treatment protocols should be carefully designed, and prescriptions must be carefully developed to successfully fight a given disease. A growing body of evidence has demonstrated that treating illnesses such as human acute promyelocytic leukemia (5,6), cancer (7), HIV (8), chronic hepatitis C (9), and diabetic nephropathy (10) with treatment regimens that use multiple drugs for combination therapy and related mechanisms usually amplifies the therapeutic efficacy of each agent, yielding maximum therapeutic efficacy with minimal adverse effects (11). Such developments represent a triumph for modern medicine and provide fertile ground for modern drug development (12,13). Interestingly, traditional Chinese medicine (TCM), 1 which is a unique medical system that assisted the ancient Chinese in dealing with disease, has advocated combinatorial therapeutic strategies for 2,500 years using prescriptions called formulae (14). Typically, formulae consist of several kinds of crude drugs that originate from medicinal plants, animals, or minerals; one represents the principal component and is called the monarch drug in TCM, and the others serve as adjuvant components that facilitate the delivery of the principal component to the disease site within the body. 
More specifically, according to the rules of TCM theory, the famous formulae include four elements: the monarch (which plays the most important role in the formula), the minister (which increases the effectiveness of the monarch herb), the assistant (which helps the monarch and minister herbs reach their target positions), and the servant (which can reduce the adverse effects and/or increase the potency of the whole formula). In formulae, the herbs work together harmoniously to achieve an ideal therapeutic outcome. Therapeutic regimens that include more than one active ingredient are commonly used clinically in Chinese medicine (15). The therapeutic efficacy of TCM is usually attributed to its synergistic properties, its capacity for minimizing adverse reactions, or its improved therapeutic efficacy. Synergism is a core principle of traditional medicine, or ethnopharmacology, and plays an essential role in improving the clinical efficacy of TCM. It is believed, at least in regard to some formulae, that multiple components can hit multiple targets and exert synergistic therapeutic effects (14). A scientific explanation for this type of synergy would certainly promote the reasonable and effective application of TCM and help to encourage rational approaches to the safe combination of healthcare systems from various cultures. Multidrug combinations are increasingly important in modern medicine (16 -18). However, the precise mechanisms through which formulae function are poorly understood and must be addressed using a molecular approach. Yinchenhao Tang (YCHT), which was recorded in Shanghanlun, a classic resource on TCM written by Zhongjing Zhang (150 -215 A.D.), is one of the most famous Chinese herbal formulae. YCHT consists of Artemisia annua L. (the monarch herb), Gardenia jasminoides Ellis (the minister herb), and Rheum Palmatum L. (the assistant and servant herb) and has been used for more than a thousand years to treat jaundice and liver disorders (19). Pharmacological studies and clinical practice have shown that it can be used clinically to treat cholestasis, hepatitis C, primary biliary cirrhosis, liver fibrosis, and cholestatic liver diseases (20,21). Our previous study showed that YCHT contains 45 compounds, 21 of which were detected in rat plasma (22). The compatibility of the different compounds in YCHT and the effect on the absorption of these 19 compounds were investigated (23). Interestingly, dimethylesculetin (D) has been shown to be effective in treating hepatic injury (HI) by exerting hepatoprotectivity, and it contributes directly to the therapeutic effect of YCHT (24). Geniposide (G), a primary component of the fruits of Gardenia jasminoids Ellis, exhibits various pharmacological properties, including antioxidant, anti-inflammatory, and hepato-protective effects (25)(26)(27). Intriguingly, a recent study showed that rhein (R), a metabolite of anthranoids and a major component of Rheum Palmatum L., helps to ameliorate liver fibrosis (28 -30). Studies indicate that D, G, and R have all been used as marker compounds in quality control for YCHT (31). It is noteworthy that a previous study reported the synergistic effects of DGR based on the pharmacokinetics of the main effective constituents of YCHT (32,33). These results demonstrate the clinical efficacy of DGR and indicate the need for further research regarding the mechanics of this formula. It has been proposed that DGR-based combination treatment for HI produces a synergistic effect. 
However, although much is known regarding the interactions among D, G, and R at pharmacokinetic sites, there is little knowledge regarding the compound's synergistic properties. Understanding the synergistic effects of DGR represents an even greater challenge because multilayered regulation might be involved, with the three compounds having overlapping but distinct target properties. In order to gain insight into the complex biochemical mechanisms that underlie this effective HI therapy, we conducted an investigation incorporating advanced technologies using metabolomic, proteomic, and biochemical analyses throughout the treatment process for HI (which has been shown to respond specifically to these agents). In analyzing the formula design in TCM, here we use the treatment of HI with DGR as a working model. D, G, and R-which are derived from Artemisia annua L., Gardenia jasminoids Ellis, and Rheum Palmatum L., respectively-were used as the active compounds in YCHT, and the efficacy and mechanisms of the DGR combination as used to treat HI were tested both in vivo and in vitro. This is the first study that investigates the unique synergistic effect of combination dosing and provides support for the popular view that traditional Chinese formulae require multiple components to exert their combined effects. MATERIALS AND METHODS Reagents-Acetonitrile (HPLC grade) was purchased from Merck (Darmstadt, Germany). Methanol (HPLC grade) was purchased from Fisher (USA). Distilled water was purchased from Watson's Food & Beverage Co., Ltd. (Guangzhou, China), and formic acid (HPLC grade) was purchased from the Beijing Reagent Company (Beijing, China). Leucine enkephalin was purchased from Sigma-Aldrich, and carbon tetrachloride (CCl 4 ) was purchased from the Chemicals Factory (Tianjin, China). Glycerol was supplied by the Chemicals Factory. Other chemicals, except as noted, were analytical grade. D, G, and R were isolated within our laboratory and identified via spectral analyses, primarily NMR and MS. After identification, the substances were further purified via HPLC to yield authorized compound with a purity of at least 99%. Freeze-dried YCHT powder was produced by our laboratory. The assay kits were purchased from the Nanjing Jiancheng Biotech Company (Nanjing, China). The other reagents that were used in the two-dimensional electrophoresis were purchased from Bio-Rad. Animals-Male Wistar rats were bred and maintained in a specific pathogen-free environment. The animals were allowed to acclimatize in metabolic cages for 1 week prior to treatment. The animals were randomly assigned to various groups and treated with D, G, and/or R at the doses indicated in the supplementary material. The experiments were performed with the approval of the Animal Ethics Committee of Heilongjiang University of Chinese Medicine, China. Blood was collected from the hepatic portal vein, and plasma was separated via centrifugation at 4,500 rpm for 5 min at 4°C, flash frozen in liquid nitrogen, and stored at Ϫ80°C until the liver function tests and proteomics analyses were performed. Urine was collected daily (at 6:00 a.m.) from the metabolic cages at ambient temperature over the course of the entire procedure and centrifuged at 10,000 rpm at 4°C for 5 min; the supernatants were then stored frozen at Ϫ20°C for subsequent metabolomic analysis. Biochemical Analysis-We collected plasma samples in heparinized tubes, kept them on ice for 1 h, and centrifuged them at 4,500 rpm for 15 min at 4°C. 
We quantified the levels of plasma alanine transaminase, aspartate transaminase, alkaline phosphatase, glutathione peroxidase, and superoxide dismutase activity and the malondialdehyde, triglyceride, glutamyl transferase, cholesterol, total protein, direct bilirubin, and total bilirubin content using assay kits according to the manufacturer's instructions. The rat livers were removed immediately after plasma collection and stored at -70°C until analysis. Histology, Immunohistochemistry, and TUNEL Assay-The livers were fixed in 4% neutral buffered formaldehyde at 4°C and embedded in paraffin. The liver tissue was stained with H&E for histopathological analysis. Immunohistochemistry was performed using antibodies against Fas and BCL-2. TUNEL staining was performed to detect and quantify apoptotic cells using the in situ cell death detection kit. The sections were viewed and photographed using standard fluorescent microscopic techniques. Metabolomics Analysis-Urine and serum were collected for UPLC-HDMS analysis. For the reversed-phase UPLC analysis, an ACQUITY UPLC BEH C18 column (50 mm × 2.1 mm inner diameter, 1.7 µm, Waters Corp., Milford, MA) was used. The column temperature was maintained at 35°C, the flow rate of the mobile phase was 0.50 ml/min, and the injection volume was fixed at 2.0 µl. Mobile phase A was 0.1% formic acid in acetonitrile, whereas mobile phase B was 0.1% formic acid in water. The data were collected in centroid mode, the lock-spray frequency was set at 5 s, and the lock-mass data were averaged over 10 scans. A "purge-wash-purge" cycle was employed for the auto-sampler, with 90% aqueous formic acid used as the wash solvent and 0.1% aqueous formic acid used as the purge solvent, which ensured minimal carry-over between injections. The mass spectrometry full-scan data were acquired in the positive ion mode from 100 to 1,000 Da with a 0.1-s scan time. For the plasma UPLC-HDMS analysis, the desolvation gas flow was 600 l/h, and the other parameters were the same as for the urine. The MS data were generated and recorded using MassLynx V4.1 (Waters Corp., Milford, MA), MarkerLynx Application Manager (Waters Corp., Milford, MA) was used for peak detection, and EZinfo 2.0 software (which is included in the MarkerLynx Application Manager and can be applied directly) was used for the principal component analysis (PCA), partial least squares-discriminant analysis (PLS-DA), and orthogonal projection to latent structures (OPLS) analysis. "Unsupervised" data were analyzed using PCA, and "supervised" analysis was conducted using PLS-DA and OPLS. Putative markers were extracted from S-plots that were constructed following the OPLS analysis, and markers were chosen based on their contribution to the variation and correlation within the data set. The processed data were then analyzed using EZinfo 2.0 software. The potential biomarkers were matched against structural information on metabolites acquired from available biochemical databases, the Human Metabolome Database, and the Kyoto Encyclopedia of Genes and Genomes. Proteomics Analysis-Two-dimensional polyacrylamide gel electrophoresis tests were performed. Protein spots with more than a 3-fold change in density (paired Student's t test yielding p ≤ 0.05) with consistent increases or decreases were considered differentially expressed and were selected for further identification via MALDI-TOF-MS/MS analysis.
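The spot-selection rule just described (a greater than 3-fold change in density together with a paired Student's t test at p ≤ 0.05) can be expressed in a few lines of code. The sketch below applies the rule to simulated spot volumes; it is illustrative only, under assumed spot counts and noise levels, and does not reproduce the dedicated 2-DE image-analysis software used in the study.

```python
# Sketch of the differential-expression filter for 2-DE spot volumes:
# keep spots with paired t-test p <= 0.05 and a >3-fold (or <1/3) mean change.
# Spot volumes are simulated; this is not the study's image-analysis pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_spots, n_pairs = 200, 6          # assumed numbers of spots and paired gels

control = rng.lognormal(mean=0.0, sigma=0.3, size=(n_spots, n_pairs))
treated = control * rng.lognormal(mean=0.0, sigma=0.3, size=(n_spots, n_pairs))
treated[:10] *= 4.0                # make the first ten spots strongly up-regulated

selected = []
for spot in range(n_spots):
    fold = treated[spot].mean() / control[spot].mean()
    t_stat, p_val = stats.ttest_rel(treated[spot], control[spot])
    if p_val <= 0.05 and (fold > 3.0 or fold < 1.0 / 3.0):
        selected.append((spot, fold, p_val))

print(f"{len(selected)} differentially expressed spots")
```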
Details regarding the immobilized pH gradient (IPG)-2-DE and image analysis, MALDI-TOF-MS/MS analysis, and Gene Ontology (GO) functional analysis can be found in the supplementary material. All experiments were performed at least in triplicate to ensure reproducibility. Statistical Analyses-All statistical analyses were performed using Student's t test. Differences with a p value of 0.05 or less were considered significant. Assays were performed in triplicate, and the results are expressed as mean Ϯ S.D. Therapeutic Efficacy of DGR as Indicated by a Rat HI Model-The efficacy of combination therapy with D, G, and R was compared with the efficacy of monotherapy with each of the three components individually in a rat model of HI. There was significant variation between the biochemical indicators for the control and model groups after CCl 4 treatment (supplementary Table S1). This indicates that the HI model successfully replicated the disease. The model group had higher alanine transaminase, aspartate transaminase, alkaline phosphatase, r-glutamyl transferase, triglyceride, total cholesterol (TC), malondialdehyde, total bilirubin, direct bilirubin, and total protein values but had lower levels of superoxide dismutase and glutathione peroxidase than the control animals. Each treatment group was treated back to baseline levels (i.e. those of the control group), which demonstrates that these drugs had a therapeutic effect in the rat HI model. Our data show that the DGR combination statistically intensified the therapeutic efficacy relative to the control condition or monotherapy using D, G, or R (supplementary Table S1). Interestingly, the DGR protocol decreased the levels of alanine transaminase, aspartate transaminase, alkaline phosphatase, total bilirubin, direct bilirubin, glutamyl transferase, malondialdehyde, and total protein but increased the levels of glutathione peroxidase, triglyceride, and total cholesterol. These results indicate that DGR combination therapy exerted a synergistic effect and yielded better therapeutic effects than did the approaches that were based on the use of D, G, or R as a single agent. These data provide evidence of the synergy that is created with co-administration. Among the various monotherapies, D showed the most potent therapeutic efficacy. Our data also indicate that D is the principal component of the formula, whereas G and R serve as adjuvant ingredients. DGR Reduces Histologic Changes and Hepatocyte Apoptosis-To confirm the protective effects of DGR in treating liver tissue damage, histological, TUNEL, and immunohistochemical analyses were performed on liver tissue that was obtained from HI rats and compared with tissue from control rats. Microscopic analyses of H&E-and TUNEL-stained liver sections showed that DGR significantly decreased hepatocyte necrosis, fibrotic area, and hepatocyte apoptosis levels, making them comparable to those in normal liver (Fig. 1A). The histopathological examination of liver sections that were stained with H&E revealed numerous apoptotic hepatocytes and the accumulation of massive necrosis with intralobular hemorrhage in the livers of HI rats (Fig. 1A). Further analysis revealed multiple and extensive areas of portal inflammation and hepatocellular necrosis in the HI group and a moderate increase in inflammatory cell infiltration. In the portal areas, Kupffer cells were detected within the sinusoids. 
The degree of necrosis was clearly lower in the CCL 4 -treated rats that received YCHT, the DGR combination, or bitherapies using various combinations of D, G, and R (Fig. 1A). Minimal hepatocellular necrosis and inflammatory cell infiltration and mild portal inflammation were observed in rats that were treated with either the DGR combination or YCHT as compared with animals that were treated with the control or with monotherapies or biotherapies composed of D, G, and/or R. It is impor-tant to note that only spotty necrotic hepatocytes were visible in the livers of the DGR-treated rats. Furthermore, the H&E-stained sections indicated that the hepatocyte size had changed in the model rats and that DGR restored normal cell size. These results suggest that the DGR combination prevents the destruction of liver tissue and intensifies therapeutic efficacy. Next, to investigate further the therapeutic and synergistic properties of the DGR combination, apoptotic hepatocytes were measured using TUNEL staining. Few TUNEL-positive hepatocytes were observed in the livers of CCL 4 -treated rats. However, numerous TUNEL-positive hepatocytes (p Ͻ 0.01) were found in the livers of animals that had been pretreated with DGR, YCHT, or two agents (Fig. 1A). Interestingly, the rats that were treated with DGR had the highest density of apoptotic hepatocytes, and the density of apoptotic cells in the rats that were treated with D, G, and R bitherapies was greater than in the rats that were treated with monotherapies (Fig. 1B). These results demonstrate that the DG, DR, and RG combinations moderately elevated the density of apoptotic FIG. 1. Histology, immunohistochemistry, and TUNEL assay. A, representative pictures of liver histopathology (H&E staining), immunohistochemistry, and TUNEL assay of apoptotic hepatocytes after CCL 4 with DGR treatment in rats (original magnification: ϫ200). B, quantitative evaluation of the density of apoptotic hepatocytes using TUNEL staining. C, quantitative immunohistochemistry evaluation of the density of Fas-and BCL-2 positive cells. The values are expressed as mean Ϯ S.D. and were compared via analysis of variance. *p Ͻ 0.05 versus the control group; **p Ͻ 0.01 versus the control group; # p Ͻ 0.05 versus the HI group; ## p Ͻ 0.01 versus the HI group. cells, and the maximum synergistic effect was observed in rats that were treated with the DGR protocol. This is strong evidence that the synergistic effect of D, G, and R occurred at the level of hepatocyte apoptosis. To rigorously test this conclusion, we performed an extensive immunohistochemical analysis using several various antibodies in formalin-fixed, paraffin-embedded liver sections. This analysis was performed with antibodies against Fas and BCL-2 and revealed hepatocytes both in clusters of various sizes in the portal areas and as single cells within the lobules (Fig. 1C). Interestingly, a statistically significant difference in the density of FASand BCL-2-positive cells was observed between the model and controls. A quantitative immunohistochemical analysis revealed that DG and DR, but not GR, induced a certain degree of (i) up-regulation of BCL-2 (antiapoptotic) (p Ͻ 0.05) and (ii) decreased density of FAS-positive hepatocytes (p Ͻ 0.05). Surprisingly, however, the DGR combination induced a much stronger (p Ͻ 0.01) antiapoptotic effect (Fig. 1C). Although G, R, and the GR combination did not dramatically affect the staining pattern, these two compounds facilitated the effects of D (Fig. 
1C), suggesting that the use of D to up-regulate BCL-2 is more effective when G and R are also used. The oral administration of DGR to HI rats might be more effective at inhibiting the extent of liver necrosis than other treatment regimens. The immunohistochemical assay further confirmed the cooperativity of D and G with R in up-regulating BCL-2 and down-regulating FAS, with the strongest effect resulting from the DGR combination, which supports the rationale of using YCHT to treat HI. Metabolomic Multivariate Analysis-Using a Waters AC-QUITY UPLC system together with a Waters Micromass quadrupole-time-of-flight Micro Synapt High Definition Mass Spectrometer (UPLC-HDMS) under optimal conditions as described in the supplemental material, the representative base peak intensity chromatograms of the urine or plasma samples that were collected from representative rats from each group were obtained, and these are presented in Figs. 2A and 2B. The low-molecular-mass metabolites were well separated at 11 min because of the minor particles (sub-1.7 m) of UPLC. PCA, PLS, and OPLS-DA were used to classify the metabolic phenotypes and identify the differentiating metabolites. Analysis of the OPLS-DA score plots identified the control and HI rats based on differences in their metabolites, suggesting that their metabolic profiles significantly changed as a result of CCL 4 administration (Fig. 2C). The PLS-DA loading plots displayed variables that were positively correlated with the score plots (Fig. 2D). S-plots and VIP-value plots were combined for the structural identification of the biomarkers (Fig. 2E). For example, this information for 2-((hexyloxy)carbonyloxy)benzoic acid is displayed in Fig. 2F. Finally, potential biomarkers of significant contributions were identified, with 20 and 12 in the urine and plasma, respectively ( Fig. 3 and supplementary Table S2). As seen in Fig. 4 and the supplementary Materials and Methods section, putative metabolic pathways were identified in the HI rats based on changes in the intermediates during substance metabolism. DGR Causes Synergistic Effects: Trajectory Analysis of HI Rats-In this study, a model of HI rats was constructed, and the dynamic metabolic profiles of treated rats with various outcomes were investigated using UPLC-HDMS and multivariate statistical analysis. With regard to PCA, as shown in Figs. 5A-5C, the control and model groups were significantly different after CCL 4 treatment, which validates the HI model. Fig. 5C shows that YCHT helped prevent HI and maintained the animals in a normal state; there were no distinct clustering differences between the control group and the YCHT treatment group. In contrast, distinct clustering differences were observed between the model group and the groups that received either monotherapy or bitherapy using D, G, and/or R; each dosing regimen returned the values to their baseline levels (i.e. those of the control group), but the DGR combination further enhanced this effect. Therefore, the maximum synergistic effect in terms of HI prevention was observed in rats that were treated with the DGR combination as compared with animals that were treated with monotherapy or bitherapy using D, G, and/or R (Figs. 5A-5C). Similarly, treatment with G or R alone inhibited the occurrence and development of HI, as shown by the clear differences between the control and model groups. Among the monotherapy groups, the group that was treated with D showed the highest level of improvement. 
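Before turning to the interpretation of these trajectories, a brief illustration of the unsupervised step may be useful: the sketch below computes a two-component PCA score plot from a simulated peak-intensity matrix of the kind produced by UPLC-HDMS peak detection. The matrix, group sizes, and shifted ions are all hypothetical; the study itself performed this analysis in MarkerLynx/EZinfo rather than in code.

```python
# Minimal PCA sketch for separating control and HI metabolic profiles.
# The intensity matrix is simulated; real peak tables came from MarkerLynx.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_per_group, n_features = 8, 300        # assumed samples per group, detected ions

control = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
hi_model = rng.normal(0.0, 1.0, size=(n_per_group, n_features))
hi_model[:, :30] += 2.0                 # shift a subset of ions in the HI group

X = StandardScaler().fit_transform(np.vstack([control, hi_model]))
scores = PCA(n_components=2).fit_transform(X)

labels = ["control"] * n_per_group + ["HI"] * n_per_group
for label, row in zip(labels, scores):
    print(f"{label:8s}  PC1 = {row[0]:7.2f}  PC2 = {row[1]:7.2f}")
```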
This suggests that the therapeutic effect of D on rats with liver injury can be improved by the addition of G and R and provides compelling evidence that the synergistic effect of D, G, and R occurred at the metabolomic level. DGR Triggers Dynamic Changes in Metabolic Biomarkers-A metabolomic trajectory analysis indicated that each dosing regimen returned the values to their baseline levels (i.e. those of the control group), but the maximum synergistic effect in preventing liver injury was observed in rats that were treated with the DGR combination, not in the animals treated with the control or with monotherapy or bitherapy using D, G, and/or R (Fig. 5). These results clearly show that the components of DGR exert a mutual reinforcing effect. This raises the question of how the DGR combination exerts its synergistic effects. To address this question, we further utilized the metabolomics approach to delineate metabolic changes in HI after each treatment. Potential biomarkers were identified, with 20 and 12 in the urine and plasma, respectively (supplementary Table S2). A dynamic change analysis of the levels of metabolic biomarkers was also used to determine whether D, G, and/or R exerted synergistic effects. These metabolites, when identified together, are important to the host's response to HI because they affect energy metabolism, amino acid metabolism, nucleotide metabolism, and lipid metabolism. The metabolic pathways of the biomarkers for the HI model are presented in Fig. 4. The coefficient of variation plots (Fig. 5D) for the biomarkers indicating the metabolite concentrations in the DGR group showed the high relative weights of these metabolite levels, thereby indicating that they had exerted the greatest effect on HI. Additionally, a dynamic change analysis of the biomarkers further validated the metabolomic findings. Clear differences between the treatment and control groups were observed with each dose. Interestingly, the maximum synergistic effect in preventing liver injury was observed in the rats that were treated with the DGR protocol, whereas the animals that were treated with monotherapy or bitherapy using D, G, and/or R showed weaker results. Taken together, these results provide a com- prehensive dynamic metabolic profile and serve as the basis for further study of the mechanisms that underlie synergism. Metabolic pathway analysis with IPA software revealed that metabolites that were identified together were important for the host response to HI (Fig. 4). To uncover signal transduction pathways and/or signaling networks associated with the differentially expressed metabolites, the identified metabolites were imported into the IPA software. According to the IPA knowledge base, major signaling networks, comprising 38 nodes, were associated with this set of proteins. The integrated network included differentially expressed metabolites and involved small molecular-transport-associated major signaling pathways. Proteomic Identification of the Target Proteins for HI-To better understand CCL 4 -induced proteins in HI rats, we investigated dynamic proteomic differences between the model and control groups. After the two-dimensional electrophoresis gels had been analyzed, the peptides were extracted from each differentially expressed protein spot using in-gel trypsin digestion, and the proteins were identified using MS/MS. Original images of the silver-stained two-dimensional gels are not shown in the supplementary materials. Representative FIG. 5. 
DGR causes synergistic effects on trajectory analysis of HI rats. For each cluster, dotted circles indicate the subject closest to the average user profile within the group. A, trajectory analysis of PCA score plots for the HI rats after D treatment. B, trajectory analysis of PCA score plots for the HI rats after G treatment. C, trajectory analysis of PCA score plots for the HI rats after R treatment. fC, control; fM (pink), HI group; fCOC (red), DGR group; fDG (green); fDR (brown); fGR (yellow); fG (blue); fD (brown); fR (green). two-dimensional electrophoresis images for the control and model groups are shown in Figs. 6A and 6B. The results of the MS/MS analysis-including the protein score, coverage, number of identified peptides, and best ion score for each spot-are summarized in supplementary Table S3. Among all of the protein spots that were separated using two-dimensional gel electrophoresis, 42 spots appeared to be significantly changed in percent volume as identified by peptide mass fingerprints that were based on MALDI-TOF-MS/MS and database searching. We found 15 plasma proteins (7 of which were up-regulated and 8 of which were down-regulated) that were expressed in the animal model. The upregulated proteins in the HI group were Ig kappa chain C, zinc finger protein 407, serotransferrin, haptoglobin, macroglobulin, alpha-1-antitrypsin, and complement factor B. The downregulated proteins were fibrinogen alpha chain precursor, glyceraldehyde-3-phosphate dehydrogenase, albumin, 2,4dienoyl-CoA reductase, transthyretin, ␣-1-inhibitor 3, vitamin D-binding protein, and prothrombin. The synergistically/additively regulated proteins appear to be involved in metabolism, energy production, immunity, chaperoning, antioxidation, and signal transduction. These data help to clarify the molecular mechanisms that underlie the therapeutic and synergistic properties of YCHT. Functional Analysis of Novel Proteomic Biomarkers-To better understand the molecular mechanisms and relevant pathways that underlie DGR's ability to treat HI at the proteomic level, we employed a proteomics strategy that involved combining two-dimensional gel electrophoresis and MALDI-TOF-MS/MS to perform functional enrichment analysis, with a special focus on the changes in the HI biological process after each treatment. We found seven and eight target proteins that were up-regulated and down-regulated, respectively, and that were highly specific to the HI model (supplementary Table S3). The detection of these proteins with distinct regulatory patterns provides evidence that novel biomarkers are actively involved in multifunctional pathways that are likely essential for HI. The biological functions of these critical proteins can be sorted into five groups: (i) generation and degradation of the extracellular matrix, including fibrinogen alpha chain precursors and macroglobulin; (ii) the regulation of transcription and translation, as provided by zinc finger proteins; (iii) acute phase reaction and immunity protection, as provided by Ig kappa chain C, alpha-1-antitrypsin, ␣-1-inhibitor 3, prothrombin, and vitamin D-binding protein; (iv) oxygenation and cell apoptosis, to which 2,4-dienoyl-CoA reductase contributes; and (v) transport and metabolism, as provided by glyceraldehyde-3-phosphate dehydrogenase, haptoglobin, serotransferrin, and transthyretin. 
The characteristic functions of these differentially expressed proteins were enriched within clusters that were based on biological processes such as immunity, cellular apoptosis, transport, signal transduction, cell growth and proliferation, and metabolism. For example, the modulation of several key members of the immunity cluster was revealed via proteome analysis, as highlighted by Ig kappa chain C, ␣-1-inhibitor 3, and prothrombin. As expected, currently available HI biomarkers rely on the measurement of substances that are key to the development, transport, and metabolism of HI and thus have unlimited clinical application value in predicting HI. These proteins therefore may be considered as candidates for the further FIG. 6. Two-dimensional gel electrophoresis representative proteomic maps. A, the control group. B, the HI group. C, the YCHT-treated group. D, the DGR-treated group. The protein spots that are indicated with numbers represent the differentially expressed proteins. investigation of synergistic mechanisms and might be potential therapeutic targets for HI. Synergistic Effects of DGR on the Relative Expression Levels of Target Proteins-Because DGR exerts preferential and synergistic effects on the proteome, it was worthwhile to further investigate the protein expression profiles of the model and of each dosing group using comparative proteomics analysis. We found that a total of 15 target proteins were synergistically/additively modified by DGR based on two-dimensional gels. Figs. 6B, 6C, and 6D show the proteome maps (two-dimensional gel electrophoresis images) for HI, YCHT-treated, and DGR-treated rats, respectively. Supplementary Table S4 summarizes the relative expression levels (in terms of the percent volume) of these proteins. Image analysis revealed 15 differentially expressed proteins (p Ͻ 0.01 by Student's t test with a fold change Ͼ 3.0). For the purpose of protein identification, differentially expressed proteins were isolated from the two-dimensional gel and analyzed using MS after in-gel digestion. Among these proteins, we found a striking number of uniquely HI-associated proteins. All of the details regarding the relative expression levels for the control, model, and dosed rats are provided in supplementary Table S4. For example, the amount of zinc finger protein (spot 2) was 0.1811 in the control and down-regulated to 0.0693 in the model group. We found that both D and G up-regulated the zinc finger protein at the proteomic level, whereas YCHT significantly enhanced zinc finger protein upregulation (0.1279). Interestingly, the DGR combination induced strong expression (0.1621), reflecting potential therapeutic and synergistic properties of DGR in treating HI. Equally notable is the haptoglobin protein (spot 4), which showed strong up-regulation in the model group and downregulation in the YCHT, DGR, D, and G groups. In this study, the level of alpha-1-antitrypsin (spot 6) was up-regulated in the model rats relative to controls (indicating its importance in liver injury), down-regulated in the YCHT and DGR groups, and absent (not detectable) in the DG, DR, GR, D, G, and R groups. We found that the expression of the vitamin D-binding protein (spot 14) was slightly increased in the groups that were treated with D (i.e. the D, DG, and DR groups) and was dramatically up-regulated in the DGR treatment group. 
Although G, R, and the GR combination did not exert this significant effect, these two compounds (G and R) enhanced the effects of D, indicating that the expression of the vitamin D-binding protein by D can be further amplified by G and R. This is strong evidence of a synergistic effect of D, G, and R at the proteomic level that occurs by means of coordinated protein expression. These observations indicate that DGR had a synergistic/additive effect on transport, metabolism, and the modulation of immunological processes in HI rats and that these effects were much more pronounced with co-treatment than with any single treatment. These data, together with the biochemical, immunohistochemical, and metabolomic analyses, suggest that D is the principal ingredient in this formula, whereas G and R serve as adjuvant components, and that by hitting multiple targets the DGR combination exerts more profound therapeutic effects than either monotherapy or bitherapy using D, G, and/or R. DISCUSSION Currently, in contrast to the traditional focus on single targets, a paradigm shift is occurring that is sparking new interest in agents that modulate multiple targets simultaneously with fewer adverse effects and lower toxicity. Combination medicines with multi-effect pathways and multi-effect targets tend to be more effective than single drugs and might help to address many treatment-related challenges (33). Modern medical therapy has long acknowledged the usefulness of combination therapies that regulate multiple nodes of the disease network simultaneously and have a synergistic effect on the treatment of multifactorial diseases such as acute promyelocytic leukemia and cancer (34 -36). However, continued progress will be essential if we are to develop effective, customized multi-drug regimens. TCM, the therapeutic efficacy of which is based on the combined action of a mixture of constituents, offers new treatment opportunities. In recent years, there has been a significant increase in the clinical use of combinatorial intervention in TCM to achieve synergistic interactions that are capable of producing a sufficient effect at low doses. Most Chinese therapeutic herbs that are traditionally used in co-treatment but not mono-treatment series exert significantly better pharmacological effects. However, the precise mechanism of synergistic action remains poorly understood. Based on syndromes and patient characteristics and guided by the theories of TCM, formulae are designed to contain a combination of various kinds of crude drugs that, when combined, will achieve better clinical efficacy. One example is YCHT, whose efficacy in treating HI has recently been well established. Because the means of action of DGR on HI has not been sufficiently investigated, we used an "omics" platform and identified the key mechanisms that underlie the observed effects. To gain insight into the complex biochemical mechanisms of this effective HI therapy, we conducted an investigation that incorporated modern biochemical analyses, immunohistochemistry, metabolomics, and proteomics, analyzing the effects of D, G, R, DG, DR, RG, and DGR treatments on HI. It is well known that CCl 4 is a potent hepatotoxic agent that is rapidly metabolized in vivo to the CCl 3 free radical, which subsequently acts on liver cells to covalently conjugate membranous unsaturated lipids, leading to lipid peroxidation; thus, CCl 4 is widely used to model HI in rats. 
Based on our previous metabolomics findings, we know that YCHT can significantly prevent HI by interfering with the trajectory changes in the biomarkers, exerting an overtly hepatoprotective effect (20,21). We have now tested the therapeutic and synergistic properties of YCHT that make it an effective treatment for HI using a rat model. We employed UPLC-HDMS, MALDI-TOF-MS/MS, and two-dimensional gel electrophoresis to systematically analyze the synergism of DGR at the levels of the proteome and metabolome, thereby exploring some of the key molecular mechanisms that underlie these synergistic effects. Our approach extends the wellestablished concept of combinatorial therapeutics and identifies new potential strands of investigation that involve multicomponent combinations that target multiple pathways. In this study, we have shown that the combined in vivo use of the active components of YCHT (namely, D, G, and R) exerts more profound therapeutic effects than any component used individually in a rat model of HI. DGR significantly intensified the therapeutic efficacy as indicated by our modern biochemical analysis. The immunohistochemical assay further supports the cooperation of D and G with R in upregulating BCL-2 and down-regulating Fas, with the strongest effect occurring with the DGR combination, thereby supporting the rationale of using YCHT for treating HI. These observations provide mechanistic insight into the synergistic effect. Our metabolomic trajectory analysis indicated that each dosing group could be regulated back to baseline levels (i.e. those of the control group), but the maximum synergistic effect on liver injury was observed in rats treated with the DGR combination, as opposed to mono-or bitherapies. These observations support the rationale of this formula wherein the compounds mutually reinforce each other. We also found that DGR activated an array of factors that are involved in energy, amino acid, nucleotide, fatty acid, cofactor, and vitamin metabolism, and we suggest that these effects might form the basis of DGR synergy. This study has indicated that a metabolomics approach can be used to elucidate the synergistic mechanisms employed by TCM. Our study has identified robust biomarkers that are related to HI, with a special focus on the changes in the relative expression levels of target proteins after each treatment. It is important to note that based on our results, DGR targets not only immunity and metabolism but also key regulatory pathways that are used in transport, signal transduction, and cell growth and proliferation, thereby helping to restore the normal function of the liver. Our data will help to indicate the molecular mechanisms of synergism at the proteomic level. According to the rules of TCM theory, the monarch, minister, assistant, and servant herbs that constitute a formula can work together harmoniously to achieve an ideally synergistic and therapeutic outcome. Understanding the synergistic effects of YCHT represents an even greater challenge than usual. This is because in YCHT, the multilayer regulation structure involves three compounds with overlapping but distinct target properties. However, determining their targets would shed new light on synergism and efficient therapeutic strategies. Although the YCHT formula was designed by TCM doctors in the pre-molecular era, its mode of function can be revealed using modern biochemical analyses. 
In summary, we have shown that TCM formulae can be analyzed using biomedical and chemical research approaches at the biochem-istry, proteome, and metabolome levels. Here, we report that the DGR-based treatment regimen yields encouraging outcomes, reinforcing its potential use as a frontline therapy for HI. This study can be considered as a useful pilot trial in the effort to evaluate traditional formulae on a larger scale and to bridge Western and Eastern medicines in this era of systems biology. Furthermore, these observations indicate that traditional Chinese formulae usually require multiple components to exert their effects, possibly laying the foundation for promising new schemes and patterns of drugs that are derived from TCM.
RELATIONSHIP OF GROWTH HORMONE GENE WITH SOME OF PRODUCTIVE TRAITS OF COMMON CARP Cyprinus carpio This study was carried out at Al-Radhwaniyah fish Reservoir (Baghdad) to investigate the polymorphism of GH gene and relationship with some of productive characteristics (total weight gain (T.W.G), daily growth rate (D.G.R), relative growth rate (R.G.R) and specific growth rate (S.G.R) in common carp. single nucleotide polymorphism (SNPs) in GH gene was analyzed by direct sequencing. Two SNP were identified in the third intron of GH1, the first SNP at site A1132T was negative correlated with growth traits, AA Genotype (wild) was significant(p<0.05) correlated with growth traits. The second SNP was happened at site of G1217T, any genotype significantly does not correlated with growth traits. the study summarized that identification of SNPs associated with growth performance can be candidate as genetics markers in marker-assisted selection (MAS) programs for improving growth traits INTRODUCTION Studies of genetic diversity at DNA level represented an expansion field in aquaculture, and aimed to find out those DNA variations associated with productive phenotypes, in order to use them as tools for assisting the offspring selection at any early stage and predict their productive performance (14).Growth hormone or somatotropin is a singlechain polypeptide has a weight of 20 to 22 kilo Dalton (kDa) produced by the pituitary gland (8), along with prolactin (PRL) and somatolactin (SL) the GH/ PRL/SL gene family, which share similar structure and overlapping biological function may be it was common in ancestral genes (12).Growth hormone plays a major role in stimulating somatic growth in Teleosts (21), it is also involved in linear growth, food conversion (1) and many metabolic functions, including reproduction (2), and also plays a role in osmoregulation (18), Ttherefore GH gene is a potential target for genetic studies on variation related to growth traits and a polymorphisms in GH gene that is associated with the growth rate of farmed fish which were the target of many breeding programs (6).In recent years, polymorphisms of GH gene have been reported in several fish, such as Tench Tinca tinca (9), Large yellow croaker Larimichthys crocea (16), rainbow trout Oncorhynchus mykiss (17), tilapia Oreochromis niloticus (4) and yellow catfish Pelteobagrus fulvidraco (11) and certain polymorphisms have been revealed to be associated with growth traits.Common carp is one of the most important cultured fish species in the world and historically the longest in the fields of fish culture, it bears the high temperature, Oxygen depletion, high stock density and fast growth, therefore it was the first fish reared in Iraq.Murakaeva (13) mentioned that the common carp has very large distribution area (from Central Europe, through Central Asia to East/South-East of Asia) with very different ecological conditions and variable growth rates, so that probably genetic varieties of the GH gene might be of adaptive importance.Both GH genes of common carp are very similar with each other where both genes consist of 5 exon and 4 introns with the 3rd intron in the GHII gene is the largest, therefore it is difficult to construct specific primer pairs for each of the two growth hormone genes to screen the polymorphisms.Due to the lack of ongoing studies in this regard in Iraq, the present study aimed to know the relationship of polymorphism in the growth hormone gene with a some of productive traits in common carps. 
MATERIALS AND METHODS This study was conducted at the Al-Radhwaniyah fish reservoir in Baghdad. Forty common carp were collected from a private fish farm and reared for 110 days in ponds measuring 7 * 3 * 1.2 m, and were fed a commercial pelleted feed with 26.8% crude protein, 1.5% crude fat, and 3165 kilocalories (kcal) of energy. All the experimental fish were reared under similar environmental conditions. Fish were tagged with a device from Hallprint Fish Tags (Australia) and were numbered by a hole near the dorsal fin. Parameters describing the growth traits of common carp, such as initial weight (IW), final weight (FW), TWG, DGR, RGR, and SGR, were studied. Genomic DNA Extraction One ml of blood was collected from the heart muscle of every trial fish. The samples were collected in EDTA tubes and kept in a freezer (-18 ºC) until DNA extraction using a DNA extraction kit (Geneaid, Korea). Before DNA extraction, the blood volume was reduced to 20 microliters (μl) and the phosphate-buffered saline (PBS) volume was increased to 200 μl, because all fish blood cells are nucleated and contain DNA, and protein levels in fish blood are higher than in mammalian blood. PCR amplification The primers were supplied by BIONEER (Korea) as lyophilized powder at different picomole concentrations. The primer sequences are listed in the corresponding table. According to (23), to be considered a SNP, the less frequent allele must exist at a frequency of 1% or more in the population; because the number of samples available was small, we set a minimum frequency of 25% for a SNP to be selected, to achieve reliable outcomes. Statistical analysis: The Statistical Analysis System (SAS) was used for data analysis (20) according to a completely randomized design (CRD) using the general linear model (GLM), and Duncan's multiple range test was used to compare the means at a significance level of p<0.05 (5). The genotype frequencies were calculated, and HWE was tested using the chi-square test of PopGene32. RESULTS & DISCUSSION The Polymerase Chain Reaction (PCR) amplified regions showed only one band, with a molecular weight of 770 bp (Figure 1). To detect the PCR product, a DNA ladder (100-3000 bp) was used, and the gel was visualized with a photo-documentation system. The same PCR product size of 770 bp, which represents the GH1 gene, was obtained, but we did not observe the second fragment at either 650 or 900 bp, which represents the GH2 gene in common carp according to the primers designed by Murakaeva (13). This may be due to the variability of our common carp strain, or the fish used in this experiment may have represented a combination of more than one strain; in Iraqi waters in particular, there are several strains of common carp introduced from several countries, with uncontrolled mating between these fish.
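The Hardy-Weinberg test mentioned above was run in PopGene32; a minimal sketch of the underlying chi-square calculation for a biallelic SNP such as A1132T is given below. The genotype counts are hypothetical (chosen only to total the 40 fish of this experiment) and are not the values reported in Table 2.

```python
# Hedged sketch of a Hardy-Weinberg chi-square test for a biallelic SNP.
# Genotype counts below are invented for illustration, not the paper's data.
from scipy.stats import chi2

counts = {"AA": 18, "AT": 16, "TT": 6}            # hypothetical genotype counts
n = sum(counts.values())

p = (2 * counts["AA"] + counts["AT"]) / (2 * n)    # frequency of allele A
q = 1 - p

expected = {"AA": p * p * n, "AT": 2 * p * q * n, "TT": q * q * n}
chi_sq = sum((counts[g] - expected[g]) ** 2 / expected[g] for g in counts)
p_value = chi2.sf(chi_sq, df=1)                    # 1 df for a biallelic locus

print(f"chi-square = {chi_sq:.3f}, p = {p_value:.3f}")
```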
Effect of GH gene polymorphism on growth traits
Table 3 shows the growth traits of common carp. At site A1132T there was a significant difference (p<0.05) in mean final weight among the genotypes: it was highest in the AA genotype at 303.62 g/fish, compared with 240.6 g/fish in the AT genotype and 229 g/fish in the TT genotype, while the second and third genotypes did not differ significantly from each other. Significant differences (p<0.05) were also found in T.W.G, which reached 109.37 g/fish in AA, 63.78 g/fish in AT and 58.33 g/fish in TT. The mean D.G.R was highest in AA (0.99 g/fish/day) and lowest in TT (0.63 g/fish/day); the AA genotype differed significantly from the other genotypes (p<0.05). R.G.R was also significantly affected (p<0.05) by GH genotype, with values of 56.42%, 36.60% and 34.17% for AA, AT and TT respectively. S.G.R showed the same pattern (p<0.05), with values of 0.40% g/day, 0.28% g/day and 0.26% g/day for the same genotypes.
For G1217T, no significant differences in final weight were found between genotypes: 264.19 g for GG and 268.62 g for GT. For T.W.G the computed means were 77.57 g in GG and 88.76 g in GT, again with no significant difference. Likewise, D.G.R, R.G.R and S.G.R showed only numerical differences between genotypes, without significant superiority of either.
The growth hormone gene is one of the candidate genes for studying genetic variation and its relationship with growth characteristics, and it is also a marker for studying the evolutionary relationships of different fish and the possibility of practical application (10). In fish, GH polymorphism has been found to be associated with growth characteristics, as demonstrated by Kang et al. (7) in olive flounder (Paralichthys olivaceus), Sanchez Ramos et al. (19) in gilthead seabream (Sparus aurata) and Blank et al. (3) in Nile tilapia (Oreochromis niloticus). Several lines of evidence underline the key role of growth hormone: the GH gene is a major research target in the aquaculture sector, one of whose main goals is to achieve the largest weight gain in the shortest time, so growth-related polymorphism of the GH gene is the target of many breeding studies, while the identification of SNPs in the GH gene of common carp is still limited. In this study, SNPs were identified by direct sequencing and allelic differences were found in the third intron of the GH gene.
Table 3. Effect of growth hormone gene polymorphism on growth traits of common carp (means ± standard error).
Superior performance was observed for individuals carrying the AA genotype at site A1132T for most of the studied growth traits, so individuals carrying this genotype can be selected; the mutation at this site had a negative effect, leading to deterioration of the growth characteristics of the individuals carrying it. These results differ from those of (17) for GH2 of rainbow trout, where three genotypes (AA, AB and BB) were found and the homozygote (BB) achieved significantly better results than the heterozygote (AB) (p<0.05) but did not differ significantly from the wild type (AA).
Ni et al. (15) investigated polymorphisms within the exon regions of the olive flounder Paralichthys olivaceus GH gene and found that exon 4 had two SSCP haplotypes, AA and AB; the AB genotype carried one non-synonymous mutation at site 1763 (C→T) that was positively correlated with body weight. At site G1217T the statistical analysis did not indicate significant differences between genotypes, meaning that the change at this site does not affect the performance of the fish. These results agree with (11), who found three genotypes at site 2100 bp in yellow catfish (Pelteobagrus fulvidraco) and did not observe any significant differences in body weight among the genotypes. Tian et al. (22) found three genotypes for the mutation at site 5045 T>C in Siniperca chuatsi and observed no significant differences in body weight among them. According to the results obtained, it is highly probable that genotype has a direct influence on the growth parameters of common carp. The statistical analysis showed that the genotype at site A1132T was associated with the studied growth characteristics, whereas the SNP at G1217T did not affect the growth characteristics either positively or negatively.
Figure 1. PCR products on a 1.5% agarose gel run at 40 V for 1 hour, visualized under UV light after staining with ethidium bromide.
Genotyping and allele frequencies
The sequencing results correspond to the GH gene (GH1) of common carp. Two SNPs were found in the third intron of the GH1 gene (Figure 2). The first SNP was at site 1132 bp (A→T) and revealed three genotypes (AA, AT and TT). The second SNP was at site 1217 bp (G→T) and revealed two genotypes (GG and GT). Table 2 shows the distribution of genotypes and allele frequencies according to the Hardy–Weinberg equilibrium (HWE).
v3-fos-license
2017-04-05T16:59:32.443Z
2015-08-11T00:00:00.000
215193525
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0133594&type=printable", "pdf_hash": "6025e97f47c355465aec2b42073de745b61e11e8", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42059", "s2fieldsofstudy": [ "Medicine" ], "sha1": "6025e97f47c355465aec2b42073de745b61e11e8", "year": 2015 }
pes2o/s2orc
Unilateral and Bilateral Cortical Resection: Effects on Spike-Wave Discharges in a Genetic Absence Epilepsy Model Research Question Recent discoveries have challenged the traditional view that the thalamus is the primary source driving spike-and-wave discharges (SWDs). At odds, SWDs in genetic absence models have a cortical focal origin in the deep layers of the perioral region of the somatosensory cortex. The present study examines the effect of unilateral and bilateral surgical resection of the assumed focal cortical region on the occurrence of SWDs in anesthetized WAG/Rij rats, a well described and validated genetic absence model. Methods Male WAG/Rij rats were used: 9 in the resected and 6 in the control group. EEG recordings were made before and after craniectomy, after unilateral and after bilateral removal of the focal region. Results SWDs decreased after unilateral cortical resection, while SWDs were no longer noticed after bilateral resection. This was also the case when the resected areas were restricted to layers I-IV with layers V and VI intact. Conclusions These results suggest that SWDs are completely abolished after bilateral removal of the focal region, most likely by interference with an intracortical columnar circuit. The evidence suggests that absence epilepsy is a network type of epilepsy since interference with only the local cortical network abolishes all seizures. Introduction The neurological syndrome epilepsy is characterized by the presence of recurrent spontaneous seizures although they are manifested in different ways. Absence seizures are commonly, but not exclusively, seen in children between 4 and 12 years old [1]; they are classified as generalized and predominantly nonmotor with impaired responsiveness [2]. The electroencephalographic (EEG) examination records bilateral, synchronous, and symmetrical spike-wave discharges (SWDs) with a frequency of 3-4 Hz on a normal background activity, first described by Gibbs et al. [3]. The search for mechanisms of generation, maintenance and abortion of SWDs typical of absence seizures has been carried out for more than half a century [4], and is still emerging. It is not possible as yet to have a clear picture about all processes and mechanisms involved, also considering that most concepts and theories are gained from different animal models and/or in vitro studies, and cannot be easily verified in humans. In general, the hypothesis that SWDs are generated within the cortico-thalamo-cortical network is widely accepted [5,6,7,8,9,10]. A relatively new theory for the initiation and generalization of absence seizures has been achieved in the genetic absence models with a detailed analyses of perictal local field potentials of a cortical grid on the somatosensory cortex and thalamic depth recordings with a nonlinear association analyses in the WAG/Rij rat and through intracellular recordings combined with local field potentials in different layers of the somatosensory cortex in GAERS [7,11,12]. While the former authors identified a cortical initiation zone in the peri-oral region of the somatosensory cortex, the latter ones showed that cells located in deep layers (layer VI) of the somatosensory cortex show a massive increase in firing already before SWDs onset. This introduced a location refinement of the cortical focus to the subgranular layers. 
Also results of recent studies in WAG/Rij rats are in line with the "cortical focus" theory establishing that the deep somatosensory cortex of absence epileptic rat is more excitable than the motor cortex and that this difference was not present in control rats [13]. Next, there is more seizure related pre-SWD activity in the focal cortical region than in the thalamus [14]. Other evidence (pharmacologic and neurochemical) for a cortical hyperexcitable region and network analyses in children with absence seizures has been reviewed recently [8,10,15,16]. The presumed focal origin of the "generalized" SWDs has already led to the exploration of a new experimental therapy such as bilateral local transcranial electrical stimulation of the focal regions [15]. A second issue, relevant from a clinical perspective, is that focal epilepsies can be treated with surgical resection. Surgical resection itself has been demonstrated to be sure and effective for the treatment of patients resistant to pharmacotherapy where there is inadequate seizure control [17]. Surgical treatment has never been considered in patients with refractory types of absence epilepsy and a reference protocol does not exist. However, if cortical focal sites of origin can be identified unambiguously, and its locations allow resection, then the possibility can be considered. Here, surgical resection of the epileptogenic zones on seizure occurrence is evaluated in WAG/Rij rats. By this, a property of focal epilepsies is investigated: if absence epilepsy is considered to be a focal type of epilepsy, then it is hypothesized that unilateral and bilateral surgical resection of the assumed focal region should decrease and abolish SWDs. The study may therefore also contribute to the discussion whether absence epilepsy should be considered as a focal, a generalized type of a network type of epilepsy [10,11,12,15,18]. for Cognition, Radboud University Nijmegen, The Netherlands. Prior to surgery the rats were housed in pairs (High Makrolon cages with Enviro Dri bedding material and cage enrichment) with free access to food and water and were kept under environmentally controlled conditions (ambient temperature = 22°C, humidity = 40%) in a room with reversed light-dark cycle (light on from 8:00 p.m. to 8:00 a.m.). The experiment, performed in accordance with Institutional and ARRIVE guidelines, was approved by the Ethical Committee on Animal Experimentation of Radboud University Nijmegen (RU-DEC). All efforts were done to keep the discomfort of the animals as minimal as possible; therefore it was decided to do the experiment in fully anesthetized animals. Drugs The experiments were carried out in anesthetized WAG/Rij rats: a combination of the mu opioid receptor agonist Buprenorphine (Vetergesic Multidosis, Ecuphar, The Netherland, solution 0.3 mg/ml) diluted (1 to 5) in saline (0.9% NaCl) was given subcutaneous (s.c.) at a concentration of 0.05 mg/ml in a volume of 1 ml/kg and haloperidol (Haldol, Janssen-Cilag BV Tilburg, Netherlands), at a concentration of 5 mg/ml also in a volume of 1 ml/kg was injected intraperitoneally (i.p.). A combination of a mu opioid receptor agonist and a D2 receptor antagonist was shown to have SWD enhancing effects closely mimicking the physiological SWDs regarding amplitude of the spikes, the intraspike frequency, and the morphology of the spike and waves spontaneous in free moving WAG/Rij and GAERS [19,20]. Moreover, the same drugs do not elicit SWDs in non-epileptic rats. Surgery and EEG recordings. 
The experiment was done while the rats were in the stereotactic apparatus. EEG recordings were made with the aid of two tripolar stainless steel electrode sets (Plastic One, Roanoke, VI, USA: MS 333/2). The electrode sets were kept in place by a custom made electrode holder; two out of four active electrodes were placed on the frontal region, coordinates with the skull surface flat and from bregma zero-zero, (AP + 4.2 mm, LF ± 3 mm) and two in the parietal region (AP -6.5 mm, LF ± 4 mm). Ground and reference electrodes were implanted symmetrically over both sides of the cerebellum. These and all following stereotactic coordinates were relative to bregma and according to the atlas of Paxinos and Watson [21]. Two differential EEG recordings were made, one from the right and one from the left hemispheres. EEG signals were allowed to pass between 1 and 100 Hz, digitalized at 200 samples s -1 , and stored for off-line analysis using Windaq system (DATAQ Instruments, Akron, OH, USA). After the EEG electrodes have been epidurally placed (surgery lasted about 30 minutes), the EEG of the 2 groups was recorded for one hour as base-line (precraniectomy) control. The wash-out period of isoflurane was about 30 minutes, therefore, only the data of the last half an hour were compared between the various phases of the experiment. Experimental protocol The rats were divided in two groups: 9 experimental and 6 control rats. Both groups received the general anesthetic isoflurane (Pharmachemie BV, Haarlem, the Netherlands) in combination with a mixture of the analgesic buprenorphine and the antipsychotic haloperidol during the implantation of the electrode sets, during the removal of the cranium above the somatosensory cortex (experimental and control groups), and during resection of the somatosensory cortex. During the EEG recording isoflurane administration was stopped and rats were anesthetized only via the neurolept-analgesic. The first injection with the analgesic buprenorphine occurred 30 min before the start of the surgery. The skull was removed (craniectomy) in both the rats of the experimental and control group, cortical resection only in the rats of the experimental group. Craniectomy. The dorsal part of the skull was exposed. A piezosurgery device (Mectron, Carasco, Italy, rounded/straight tip, mode power high; pump 2; frequency 25 kHz) was used to remove part of the cranium on the right and left side. The piezosurgery tip was constantly irrigated with ACSF. Special care was taken in order to avoid perforation of the dura or to cause any mechanical injury to brain tissue. The coordinates used for craniectomy are indicated in Fig 1; they enclose the cortical region with the assumed focal area in the somatosensory cortex, as was earlier established in this model [11]. The coordinates of the resected cranium were (with bregma at 0,0) from AP 3.0, L ± 2.4 till 5.4 mm to AP -4, L ± 3.2 mm till 6.2 mm. First the right cranium was removed, in the experimental group this was followed by removal of a trench of the ipsilateral somatosensory cortex. This was followed by craniectomy of the left hemisphere and a similar trench of the left hemisphere. Cortical resection. Fig 1 shows the resected part of the somatosensory cortex superimposed on the graph of Meeren et al [11] in which the foci were found as was determined in 8 individual rats in a mapping study with a cortical grid. The aim was to resect grey matter until the corpus callosum was visible. 
The removed area is the region in which local injections of phenytoin and ethosuximide reduce SWDs [22,23,24], with an increased expression of a subtype of Na + channels [25], a reduction of HCN channels [26] and a high cortical excitability established in vivo in free moving WAG/Rij rats [13]. The injection and recording times of both the experimental and control groups are presented in Figs 2 and 3. Subsequent injections of haloperidol (half-life 2 hours [27]) and buprenorphine (half-life 2-4 hours [28]) were given in order to maintain an appropriate depth of anaesthesia, which was regularly monitored via toe pinch reflexes. After the end of the day, the rats were still anesthetized, they were given an overdose of ketamine (100mg/ml)/xylazine (20 mg/ml), systemically administered, without any sign of adverse reaction. Next the brains were quickly removed. The resection of the somatosensory cortex was performed under a binocular stereo microscope (Euromex, Arnhem, Holland); the dura was incised by a sharp steel blade (length = 3 mm) to expose the cortical zone. A flat steel blade with a cutting surface 2 mm length × 1.5 mm width was used to make cortical incisions, next cortical tissue within the target area was removed by a steel curved blade, while contemporarily blooding was stanched with cotton. The depth of the incisions was determined by the coordinates above described. Considering the poor discriminability between grey matter of the cortex and white matter of the corpus callosum, the size of the lesioned area including its depth was subsequently verified by histological verifications. Body temperature was monitored and kept at 37°C via a heating pad, other vital parameters such as respiration were monitored continuously. EEG recordings and analyses. Rats of the experimental group were recorded for 5 hours (Fig 2). In the 10 min of the first hour of recording, rats were still given isoflurane anaesthesia, they had received buprenorphine 30 min before starting the surgery, after successful implantation of the EEG electrodes haloperidol was administered (i.p.), and isoflurane was stopped 3 minutes after haloperidol injection. Isoflurane induced an EEG with burst suppression without any SWDs, the combination of the neurolept analgesic anaesthesia mixture allowed the occurrence of SWD after isoflurane was washed-out. The combination of high dose of haloperidol (5 mg/kg) with buprenorphine increases immobile behaviour but it also increases the analgesic effects of buprenorphine [29]. The second hour was recorded following right side craniectomy at the right side under influence of the mixture while isoflurane anaesthesia was stopped 1 min after the beginning of recording. The third hour of EEG recording followed the resection of somatosensory cortex right side under influence of the mixture while isoflurane anaesthesia was stopped 1 min after the beginning of recording. The fourth hour of EEG recording was after left sided craniectomy and left-side cortical resection and under influence of the mixture. The fifth hour was an extra hour, to make sure that SWDs did not return, again no isoflurane was administered and rats were still anesthetized by the mixture. The behaviour of the rats was constantly monitored by a biotechnician (SMH). 
Rats of the control group (Fig 3) were recorded under identical anaesthetic conditions; in the 10 min of the first hour of recording the rats were still given isoflurane, they had received buprenorphine 30 min before starting the implantation of the EEG recording electrodes, after successful implantation of the EEG electrodes haloperidol was administered, and isoflurane was stopped 3 minutes after haloperidol injection, similar as rats from the experimental group. The second and third hour of EEG recording in this group was after the right and left side craniectomy respectively, and again always under the influence of the neurolept mixture. SWDs were marked at visual inspection of the EEG of the right and left hemisphere independently (differential recordings between frontal and parietal cortex) based on commonly used criteria: trains of sharp spikes and slow waves lasting minimally 1 s, an amplitude of the spikes at least twice the background, frequency of the SWDs between 7 and 10 Hz and an asymmetric appearance of the SWDs [30,31]. In case of doubt, i.e. after craniectomy or after cortical resection the monopolar recordings were used in order to decide whether a SWD was present or not. The same criteria were used for the monopolar and bipolar recordings. The SWDs seen under the neurolept-analgesic mix were visually identical to the spontaneous SWDs commonly seen in WAG/Rij rats, although the mean duration of the SWDs was longer than commonly observed [30,32]. Histological verification. Immediately after euthanasia the brains of the animals were removed and fixated in formaldehyde 3% for 30 days and 30% sucrose/PBS for 4 days. Coronal slices (100 μm) were made with a microtome and stained with Cresyl violet. Three slices per animal were inspected: one from the frontal, one from the middle and one from the posterior part of the resected cortex. Statistical analysis. The incidence of SWDs was determined per 30 min EEG recording. The EEG recordings in the base-line of the experimental and control group were only 24 minutes considering the duration of the first wash out period of isoflurane. All data were statistically analyzed with SPSS 19.0. For the data of the unilateral lesion, a general linear model repeated measures ANOVA with side (left vs right) and time (baseline, post craniectomy and post unilateral lesion) as within subjects factors was used to determine the statistical significance of the main effects and their interaction. SWDs were no longer present following the removal of the second (left) focal region, and therefore statistical tests are not meaningful for these (4 th and 5 th ) recording hours. Paired and unpaired t-tests were used to establish changes in the amplitude of the spike of the SWDs in both hemispheres after right craniectomy and whether these changes were different for the right and left hemisphere. A p value of 0.05 was chosen as the threshold level for significance. Additionally, t-tests for dependent groups were used as post-hoc tests to compare side differences at different time points and differences within a hemisphere between different time points. The data of the control group were similarly analyzed as the experimental group with side (left vs. right) and time as within subjects factors. Experimental group The rats were under the influence of buprenorphine throughout the whole experiment. During isoflurane anesthesia all rats showed an EEG with burst suppression and no SWDs were noticed. The bursts appeared bilateral synchronized (Fig 4A). 
After the injection of haloperidol and the washout of isoflurane WAG/Rij rats exhibited bilateral normally, with respect to frequency and amplitude of the spikes, appearing SWDs ( Fig 4B). However, the incidence of SWDs was higher than what can be seen in freely moving drug free rats [13,30]. The SWD inhibiting effect of isoflurane was also present at the subsequent periods when this inhalation anesthesia was repeated at surgical intervention periods. In all rats of the experimental group, unilateral resection of somatosensory cortex affected SWDs but differentially for the two hemispheres, an example is depicted in Fig 4C. The data on the incidence of SWDs in the various phases of the experiment are given in Fig 5A. Post lesion (resection), the incidence of SWDs was unchanged in the intact hemisphere compared to post craniectomy recording period of the resected hemisphere. In contrast, SWD were rare in the resected hemisphere. The ANOVA showed significant effects for the incidence of SWDs for time (F = 11.10, df 2,26, p < .001, η 2 = .58), left-right (F = 7.06, df 1,8, p < .03, η 2 = .47) and their interaction (F = 10.30, df 2,16, p < .001, η 2 = .56), post hoc t-tests showed there were no differences between left and right hemisphere before and after craniectomy, that unilateral craniectomy reduced the incidence SWDs at both sides (p < .05), while the unilateral resection of the cortex reduced SWD at the lesioned hemisphere (p < .05), but not at the intact hemisphere. Subsequent lesions of the previously intact hemisphere completely abolished all SWDs for the entire (2 hour) recording period in both hemispheres; an example of an EEG epoch following bilateral resection can be found in Fig 4D. SWDs were no longer present, in neither the right nor left hemisphere after resection of previously intact hemisphere. The incidence of SWDs (mean ± S.E.M.) decreased from 52.8 ± 7.3 (left) and 52.6 ± 7.0 (right side)/per 30 min pre lesion to 0 on both sides. Pearson correlation coefficients between the depth of the lesions in the frontal, middle and posterior part and the number of SWDs on both sides after the unilateral lesions were made. All correlations were small (between .33 and .03) and non significant, supporting the hypothesis that it is not the amount of resected cortical material, but the fact that lesions perse were made is a likely explanation for the diminishment of SWDs. The amplitudes of the all SWDs as recorded in the right and left hemispheres in the lesioned animals were calculated in the different phases of the experiment. Examples of SWDs and their powerspectra as detrmined by a Fast Fourier analyses are presented in Fig 6. It was found that right side craniectomy reduced the amplitude of SWD on the left and right side (t-tests for paired observations, n = 9, both p's < .05) in the experimental group by 28 and 43% respectively. Similar changes were found in the control group. However, as can be seen in Fig 6, SWDs keep their charactertistic morphology albeit with a smaller amplitude and they seem to be less regular as expressed on more peaks in the spectrogram. The size of this decrease in amplitude in the left and right side was not statistically different. Next it was found that lesions on the right side did not further decrease the amplitude on either the left (n = 9) or right (n = 5) side. SWDs tended to reduce gradually over time or as a consequence of craniectomy. 
Statistical evaluation showed neither a time effect (F = 2.24, df 2,10, p> .05, η 2 = .31), nor a side effect (F = 3.15, df 1,5, p> .05, η 2 = .39), although the effect sizes were rather large. This suggests that a larger sample size would yield significant main effects. Histological Evaluation The extent of the cortical lesions included most of the layers of the cortex but it showed some variation between animals, details about the size of lesions in the resected animals are presented in Table 1. Photographs of lesions with different depth are presented in Fig 1C. Also there were some differences between the left and right side. In three rats (nrs 4, 5 and 6) the bilateral resection was restricted to cortical layers II-III and IV of the frontal, middle and posterior part, while the other layers of the cortex were fully intact. In four rats (nrs 1, 2, 7 and 8), the bilateral resection of the somatosensory cortex was larger, layer V was removed at least in one location, in rat nrs 3 and 9 the resection was extensive until layer VIa in at least one of the three sections. The corpus callosum did not show any damage in any of the rats, as could be inferred from microscopic inspection. Discussion The major outcomes of this acute study in neurolept anesthetized WAG/Rij rats are that a unilateral lesion of the assumed focal region decreased the incidence of SWDs and this reduction was different in the two hemispheres. Bilateral resection completely abolished all SWDs. Removal of foci or interference with a network It is generally assumed that SWDs in rodents take place in an interconnected intact corticothalamo-cortical network, although the exact interactions between the cortex and different thalamic nuclei necessary for the generation and maintenance of SWDs are not fully understood [9,14,33]. It is clear that SWDs in WAG/Rij rats are initiated in the perioral region of the somatosensory cortex (S1po) [11,15], most likely by neurons located in the deep cortical layers, as was established in GAERS [12]. From this point the early appearance of SWD-activity can be easily visualized by local field potentials from the depth of the somatosensory cortex [15,Fig 6. Spike-wave discharges in acute neurolept anesthetized WAG/Rij rats. Example of bilateral differential LPF recording of the left and right hemisphere and spectral plot (1-25 Hz) of a SWD in an acute neurolept anesthetized WAG/Rij rat (for details on electrode position see accompanying text) after the electrodes have been implanted (top). After removal of the right cranium, SWDs with clear spikes and slow waves are seen, albeit with a smaller amplitude both during background and during SWD (middle). The characteristic peak frequency of the SWDs in both hemispheres remains unchanged, as well the presence of its characteristic harmonics. The diminishment of the amplitude at both sides can be best appreciated from the spectral plots, the reduction is largest at the lesioned hemisphere, although the left-right difference was not statistically significant. Bottom: SWD post removal right cortex shows that SWDs are clearly visible in the intact (left) hemisphere, their identification in the right hemisphere is doubtful since they no longer fulfill the criteria of SWDs (van Luijtelaar and Coenen, 1986). doi:10.1371/journal.pone.0133594.g006 Intact Cortical Network Is Imperative for Absences 34]. Our present data demonstrate that removal of the cortical regions which contain the initiation site of the SWDs (the foci) reduces SWDs in both hemispheres. 
Even when the focal layers, i.e. the cortical layers V and VI, are still intact, SWDs are reduced, suggesting that a decreased intactness of the cortical columns, part of the neural circuits in which SWDs are initiated, spread and maintained is responsible for a reduction or complete abolishment of SWDs. This conclusion is also supported by the lack of significant correlations between SWD incidence and the depth (size) of the cortical lesions. Other studies aiming to test the role of various parts of the network in their contribution to the occurrence of SWDs have shown that SWDs are suppressed by a functional inactivation of the whole neocortex by inducing a spreading depression in GAERS [35] or by micro-infusion of local inactivating drugs such as phenytoin in the subgranular layers and Lidocaine at the surface of the S1po in WAG/Rij rats [23,36]. Moreover, micro-infusion of ethosuximide in the region S1po, again in GAERS, causes a full and immediate decrease in SWD number, comparable to that tested after systemic administration of the same drug, supporting the involvement of this area as a crucial and specific area in the initiation or occurrence of SWDs [22,24,37]. Similarly, inactivation studies of various parts of the lateral thalamus including the rostral RTN abolished SWDs both in GAERS, as in WAG/Rij rats [38,39,40,41,42], suggesting that an interference with the intactness of this circuitry is crucial for the diminishment of the occurrence of SWDs. Unilateral lesions: a differential reduction in the ipsilateral and contralateral hemisphere The corpus callosum is the principal anatomical structure, necessary for the bilateral synchronous cortical and thalamic SWDs in intact brains since callosal transsections reduced the left right co-occurrence. It seems that each hemisphere is able to initiate SWDs independently [35] and that SWDs quickly appear bilateral symmetrically through the interhemispheric monosynaptic projections of the callosal projecting neurons [43]. The interhemispheric connections of the homotopic regions of the somatosensory cortex are constituent part of the corpus callosum [44]. Both a network analyses and Diffusion Tension Imaging study showed the relevance of the cortico-cortical interhemispheric connections for SWDs between the left and right Intact Cortical Network Is Imperative for Absences somatosensory cortices in these absence epileptic rats [45,46]. About 80% of the cell bodies of these callosal projecting neurons in rodents principally reside in cortical layers II/III, about 20% in layer V and a small fraction in layer VI [47]. Layers I through III are the main target of interhemispheric cortico-cortical afferents, and layer III is the main source of cortico-cortical efferents [48]. The unilateral resected focal region is no longer able to initiate SWDs. However, information transfer from the intact hemisphere via the corpus callosum to the partial resected hemisphere is still possible. This allows the presence of some SWDs in the resected hemisphere. It is also possible that SWDs, initiated at the intact hemisphere involve the contralateral hemisphere via interhemispheric thalamic projections. The reticular thalamic nuclei are known to project to the contralateral thalamus through bilateral connections with the ventro medial nuclei of the thalamus and intralaminar nuclei and can influence the activity of wide territories of the cerebral cortex and basal ganglia of both hemispheres [49]. 
It is clear that callosal and interthalamic transsections studies are necessary to establish the role of the contralateral hemisphere after ipsilateral lesions. The differential effects of unilateral lesions, as revealed by the significant interaction between left-right and pre-post lesion shows that the occurrence of SWDs in the left and right hemispheres should not be considered as completely independent processes. Instead, our data show that the effects of an unilateral lesion exert a larger effect at the ipsilateral than on the contralateral side. Although it seems logical that SWDs generated in one hemisphere quickly involve the other hemisphere through the excitatory pathways of the callosal projecting neurons interconnecting the focal regions in the left and right hemisphere [43,45,50], it also seems that the intact hemisphere is no longer inhibited by the resected hemisphere and that the number of SWDs are higher at the intact side as compared to the number at the lesioned side. This proposal would not be against the view that the function of interhemispheric transfer of information could be both inhibitory and excitatory in the same corpus callosum [51]. Bilateral lesions abolish all SWDs The third main finding is that bilateral resection of the assumed cortical foci in the somatosensory cortex abolished all SWDs, although the lesions were not always extended to the deepest cortical layers. The histological examination of the size of the resections in the current experiment showed that in rats of the experimental group the resection has been done till layers III-V of the S1po and, and in only 2 animals until layer VI. The sensory cortex including the somatosensory cortex with its S1Po is not only part of a larger cortico-thalamo-cortical and inter hemispheric network for information transfer, it is also columnar organized with many connections between various layers within the thickness of the cortex. In every layer morphological subtypes of cells are present [52,53], which project to various cortical regions [54]. Next, excitatory inputs from layer IV to supra granular layers III and II regulate and even amplify the sensory information transcolumnar [55], whereas projections from layer III to layer V and VI are also involved in intracolumnar circuits [56]. Our results suggest that interference of inter or intra layer communication of only the superficial cortical layers and thereby altering the normal cortical signal processing is sufficient to interfere with the occurrence of SWDs. Layers IV, V and VI are responsible for the communication between cortex and thalamus, layer IV is the main target of the thalamo-cortical afferents, as well as intra-hemispheric cortico-cortical afferents [56]. The infragranular layers V and VI establish a very precise reciprocal interconnection between the cortex and the first order thalamic neurons and higher order nuclei [57,58,59,60]. Interestingly, lesions of the cortex that communicated most directly with the thalamus and of cell layers that contain the most hyperexcitable cortical cells [12] involved in SWD generation, are not necessary for interference with SWD occurrence. More precise, our study points out that lesioning of the superficial layers is sufficient to prevent the occurrence of SWDs. Some support for the view that also the superficial cortical layers are also involved in the occurrence of SWDs is obtained from the Kandel and Buzsáki study [61]. 
These authors found sinks and sources during SWDs in all cortical layers, suggesting that also inter layer communication is necessary for the occurrence of SWDs. In addition, the basically different types of neurons present in every single column of the cortex are involved in the communication between cortical layers [53,55,62,63]. In all, it does not seem necessary to resect the cortical tissue completely to abolish the SWDs in these genetic epileptic rats. Control group The control group was added to our protocol in order to demonstrate the presence of SWDs during the various regimes of anesthesia both before and after unilateral and bilateral craniectomy. The analyses of the EEG recordings of control WAG/Rij rats showed that SWDs were abundantly present in all phases of the experiment and that there were no differences in parameters of SWDs between left and right side. The apparent decrease of SWDs over the recording hours as seen in the experimental and control group (Figs 5 and 6) is due craniectomy, and or to the cumulative effects of isoflurane over time or both. It has been demonstrated that craniotomy reduces the brain's excitability for an extended period [64]. and physical stimulation of the cortex in the form of pinpricks induces a spreading depression suppressing SWDs for 1 to 2 hours [65]. It is thought that even a careful brain operation might have short term consequences on cortical excitability causing SWDs to diminish. Isoflurane anesthetic was used repeatedly and intermittently (drilling holes, removal of cranium, removal of cortical tissue) and SWDs were never seen under isoflurane anesthesia. We noticed that the recovery time of isoflurane as measured by the reappearance of the SWDs increases from about 23 min from the first discontinuation of anesthesia, to about 35 min from the 2 nd period of isoflurane anesthesia. It is therefore thought that both factors, craniectomy and isoflurane, contribute to the simultaneous reduction of SWDs in the left and right hemisphere over time. However, SWDs remained present in our anesthesia regime on either side. Is the site of the lesion crucial? It would be interesting to make similar resections, or to make small lesions in other parts of the cortex in order to establish whether cortical lesions in different parts of the cortico-cortical network are also sufficient to prevent seizure occurrence since it might be thought that the decrease after the cortical resections is due to a non-selective effect of interfering with the functional integrity of the cortex. Polack et al. [66] established in GAERS that the blockade of neuronal activity by the topical application of the sodium channel blocker tetrodotoxin in the motor cortex did not affect the occurrence of SWDs in the somatosensory cortex, while the functional deactivation of neurons in the facial area of the somatosensory cortex by the same method abolished all ictal activities in the somatosensory cortex, including the SWD. This Polack et al. study [66] is the primary evidence that it matters for SWD occurrence which part of the cortex is inactivated or removed. 
Similarly, we previously established that rostral thalamic lesions in WAG/Rij rats abolished cortical SWD, while caudal thalamic lesions enhanced SWDs [42], again demonstrating that the effects of in this case thalamic lesions regarding their SWD reducing effects are specific for the location within the cortico-thalamo-cortical network and therefore this abolishment should not be considered as being caused by a non-specific lesion effect. The outcomes of two pharmacological studies confirm that selectivity of the somatosensory cortex as the initiation side for SWDs: infusion with ethosuximide or AMPA antagonists was only effective when applied in the somatosensory cortex and not in the motor cortex [22,67]). Additionally, a diminishment of SWD is not very likely in case of lesions in for example the visual cortex considering that the focal facial region receives necessary input from the VPM [68] and projects back to the posterior nucleus, RTN and somatosensory thalamus and not to the visual thalamus. The visual cortex and its thalamic counterpart, the lateral geniculate, are, to the best of our knowledge, not part of the SWD generating system. Moreover, no other SWD initiating sites have been described in this genetic rodent absence model outside the somatosensory cortex. In all, a non-specific effect of the lesion is not a likely explanation for the complete abolishment of SWD after bilateral cortical lesions. Concluding remarks The outcomes of the present study contribute to the discussion about the generalized nature of absence epilepsy; the successful removal of an assumed focal zone has been an argument for the distinction between focal and generalized epilepsies. Here the bilateral removal of the assumed focal regions, or more precise, a partial removal of the cortical zone dorsal to the assumed focal origin and or interference with the columnar intracortical networks was enough for complete seizure abolishment in this acute study. It is also thought that the site of the cortical lesion is specific. It is acknowledged that craniectomy and surgical resection is a radical surgical technique for seizure inactivation. More subtle alternative techniques should be explored. Small implanted electrodes for example, would allow making selective lesions in brain tissue. Moreover, before making lesions, these electrodes might be used for local field potential recordings of SWDs and for local evoked potentials elicited by stimulation of specific afferent pathways. In this way the cortical area would be functionally mapped, the excitable focal area could be identified before making the lesions. The experiments were carried out in anesthetized WAG/Rij rats. Although the SWDs as seen under this type anesthesia closely mimic the spontaneous occurring SWDs (Fig 6, upper trace), it is necessary to repeat these experiments in free moving animals and evaluate the long term effects of surgical manipulations. Finally, the possibility exists that surgical ablation restricted to the most superficial layers hampers the functionality of the deep layers of the cortex, considering that the intracolumn information flow is bidirectional and that resection of the dendritic arborescence is likely to modify the deep neurons integrative properties. Therefore, and as is the case in all in vitro studies and in some ablation studies in vivo, the functionality of the remaining neural tissue can be questioned. 
Differentially recorded local field potentials measured between the frontal and parietal cortex (the removed area lay between the two active EEG electrodes) showed a diminishment of the amplitude in both the experimental and control groups; however, no changes in the amplitude of the SWDs before and after the cortical resection were found on either the lesioned or the intact side, suggesting that after the lesion the remaining tissue was still able to generate some SWDs. The outcomes of our study emphasize the necessity of an intact cortical circuit for the occurrence of SWDs and demonstrate that absence epilepsy is a network type of epilepsy, since interference within the network involved in the communication between supra- and subgranular layers disrupts seizures. SWDs also have a cortical focal origin in patients with absence epilepsy [18,69,70,71,72], although the location of their foci might differ from that in animals [73,74]. Some absence epileptic patients are cognitively impaired, and this depends on the anatomical site of seizure onset, the hemisphere involved and the extent of the epileptogenic area [75]. Some forms of absence seizures have their origin in the frontal lobe, especially in the mesial frontal region [76,77], and patients with absence seizures showing clear focal abnormalities on the EEG have been identified [78]. Since some of these patients are intractable with current medications [79], partial inactivation of the assumed focus, or interference with an intracortical circuit, using modern techniques might be a treatment option in these refractory patients. Moreover, seizure control is of utmost importance because it might limit cognitive damage, since the duration of refractory epilepsy is a major determinant of cognitive deterioration, which also affects quality of life [80,81,82]. These first experimental outcomes of cortical resection teach us that surgical techniques combined with electrophysiological recordings should not be rejected out of hand, although many questions remain as to whether the present acute results in this genetic absence model can be translated to a chronic preparation and later to patients without further compromising the patients' function.
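In this study SWDs were scored by visual inspection using the criteria listed in the Methods (minimum duration of 1 s, spike amplitude at least twice the background, 7–10 Hz). Purely as an illustration of how such criteria could be turned into an automated first-pass screen, a minimal Python sketch is given below; the filter design, background estimate and thresholds are assumptions and do not reproduce the authors' visual scoring, although the 200 Hz sampling rate matches the recordings described above.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_swd(eeg, fs=200.0, band=(7.0, 10.0), min_dur=1.0, amp_factor=2.0):
    """Flag candidate SWD epochs: 7-10 Hz envelope exceeding amp_factor x background
    for at least min_dur seconds. Returns a list of (start_s, end_s) tuples."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    narrow = filtfilt(b, a, eeg)                 # 7-10 Hz component of the trace
    envelope = np.abs(hilbert(narrow))           # instantaneous amplitude
    background = np.median(envelope)             # crude background estimate
    above = envelope > amp_factor * background
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) / fs >= min_dur:
                events.append((start / fs, i / fs))
            start = None
    if start is not None and (len(above) - start) / fs >= min_dur:
        events.append((start / fs, len(above) / fs))
    return events

# Usage on a synthetic trace: 30 s of noise with a 3 s, 9 Hz burst starting at 10 s.
fs = 200.0
t = np.arange(0, 30, 1 / fs)
eeg = np.random.randn(t.size) * 0.1
eeg[int(10 * fs):int(13 * fs)] += 1.5 * np.sin(2 * np.pi * 9 * t[:int(3 * fs)])
print(detect_swd(eeg, fs))
```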
v3-fos-license
2017-10-23T14:28:18.154Z
2014-01-10T00:00:00.000
43089358
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=41831", "pdf_hash": "f6897f30b0c530f6a57cc9512eb5582f543e6e0a", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42061", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "sha1": "f6897f30b0c530f6a57cc9512eb5582f543e6e0a", "year": 2014 }
pes2o/s2orc
Comparative Spectroscopic Studies on Pure, 10 and 50 mol% Glycine Mixed L-Valinium Picrate Crystals
Nonlinear optical crystals of pure, 10 and 50 mol% glycine mixed L-valinium picrate have been grown from saturated aqueous solution by the slow evaporation method at a temperature of 36°C using a constant temperature bath with an accuracy of ±0.01°C. The synthesized organic optical material was purified by repeated recrystallization. The cell parameters were calculated using the single crystal X-ray diffraction technique, which confirmed the crystal system. The optical behavior was examined with a UV-Vis-NIR spectrometer in the range from 190 nm to 1100 nm, which revealed the absence of absorption in the entire visible region. Functional groups and modes of vibration were identified with an FT-IR spectrometer in the range between 400 cm−1 and 4000 cm−1. The 1H and 13C NMR spectra of the grown crystals were recorded using D2O as solvent on a Bruker 300 MHz (Ultrashield™) instrument at 23°C (300 MHz for 1H NMR and 75 MHz for 13C NMR) to confirm the molecular structure. The second harmonic generation conversion efficiency was investigated by the Kurtz powder method using an Nd:YAG laser as the source to explore the NLO characteristics.
Introduction
To satisfy modern society's demand for photonics and telecommunication, an extensive search for new nonlinear optical (NLO) materials is essential [1]. Investigations of new nonlinear optical crystals with high second harmonic generation efficiency are attractive because of their applications in telecommunication, optical computing and optical storage [2,3]. Organic nonlinear optical crystals are more versatile materials for NLO applications than inorganic materials because of their large electro-optic coefficients with low frequency dispersion and high nonlinearity [4]. Owing to their chiral symmetry and noncentrosymmetric structures, complexes of amino acids with organic acids are promising materials for NLO applications [5]. Hence, much research is being carried out to synthesize new organic NLO materials. In our laboratory, we are engaged in finding new NLO materials, and some of the results were reported recently [6][7][8]. The growth and characterization of the nonlinear optical crystal L-valinium picrate were carried out and reported earlier [9][10][11][12], and the growth and characterization of L-valinium picrate and 10 mol% glycine mixed L-valinium picrate were reported by other authors [13,14]. In the present investigation, the synthesis and growth of pure, 10 and 50 mol% glycine mixed L-valinium picrate crystals from aqueous solution by slow evaporation are reported. The cell parameters were calculated using single crystal X-ray diffraction. The transmission properties are reported from the UV-Vis-NIR spectrum, functional groups were identified by FT-IR analysis, the chemical structure is discussed using the FT-NMR technique, and an SHG test was performed to confirm the NLO property.
Crystal Growth
Analar grade samples of glycine, valine and picric acid were employed for the synthesis of L-valinium picrate (LVP), 10 mol% glycine mixed L-valinium picrate (10 GVP) and 50 mol% glycine mixed L-valinium picrate (50 GVP). LVP was synthesized by the reaction between picric acid and the amino acid L-valine taken in equimolar ratio. 10 GVP was grown from glycine, valine and picric acid taken in the ratio 0.1:0.9:1 (glycine : valine : picric acid) in deionized water, and the same procedure was carried out to obtain 50 GVP with the ratio 0.5:0.5:1 (glycine : valine : picric acid). The purity of the synthesized materials was improved by successive recrystallization. The chemical reaction involved in the formation of L-valinium picrate is a proton transfer from picric acid to L-valine.
The purified powders of LVP, 10 GVP and 50 GVP were dissolved thoroughly in double distilled water at 30°C to form saturated solutions. The solutions were heated to remove any undissolved substance and filtered to remove dust particles. The solutions were then kept aside undisturbed for the growth of single crystals. After two weeks, good quality transparent crystals were harvested.
XRD Technique
The single crystal diffraction analysis of LVP, 10 GVP and 50 GVP was carried out using an ENRAF NONIUS CAD-4 single crystal X-ray diffractometer with MoKα (λ = 0.71073 Å) radiation. From the XRD data, it was observed that 10 GVP crystallizes in the orthorhombic crystal system, while LVP and 50 GVP crystallize in the monoclinic system. The observed cell parameters are tabulated in Table 1.
UV-Vis-NIR Analysis
The UV-Vis-NIR spectra of the grown crystals were recorded using a Lambda 35 double beam spectrometer in the range 190 nm to 1100 nm and are shown in Figure 1. From the transmission spectra, it was observed that the grown crystals are transparent from 425 nm to 1100 nm. The UV cut-off wavelengths of L-valinium picrate (LVP), 10 mol% glycine mixed L-valinium picrate (10 GVP) and 50 mol% glycine mixed L-valinium picrate (50 GVP) are 425 nm, 460 nm and 450 nm respectively. Transparency above 460 nm satisfies the requirement for frequency doubling of the Nd:YAG laser. The peaks observed in the range 200 nm to 360 nm are due to n→π* transitions of the carbonyl group [15].
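Two simple conversions follow directly from the numbers above: Bragg's law turns a measured 2θ position (Mo Kα, λ = 0.71073 Å) into an interplanar spacing, and the UV cut-off wavelengths correspond to photon energies E(eV) ≈ 1239.84/λ(nm), often quoted as a rough estimate of the optical band gap. The sketch below is illustrative only; the 2θ value is hypothetical and the resulting energies are not values reported by the authors.

```python
import math

MO_KALPHA_A = 0.71073  # X-ray wavelength used for the single-crystal XRD (angstrom)

def d_spacing(two_theta_deg, wavelength=MO_KALPHA_A):
    """Interplanar spacing from Bragg's law, n*lambda = 2*d*sin(theta), with n = 1."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

def cutoff_energy_ev(wavelength_nm):
    """Photon energy at the absorption cut-off, E(eV) = 1239.84 / lambda(nm)."""
    return 1239.84 / wavelength_nm

print(d_spacing(20.0))  # hypothetical 2-theta value of 20 degrees -> d ~ 2.05 angstrom
for name, lam in {"LVP": 425.0, "10 GVP": 460.0, "50 GVP": 450.0}.items():
    print(name, round(cutoff_energy_ev(lam), 2), "eV")
# cut-off energies ~2.92, 2.70 and 2.76 eV for LVP, 10 GVP and 50 GVP respectively
```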
FT-IR Analysis
The FT-IR spectra of the grown crystals were recorded in the KBr phase in the frequency region 400–4000 cm−1 using a Perkin-Elmer FT-IR spectrometer (model Spectrum RX1) and are shown in Figure 2. The recorded spectra were compared with the available literature [16]. The observed vibrational frequencies and their tentative assignments are given in Table 2. The stretching vibration of the NH3+ group of the amino acid was observed at 3088, 3080 and 3081 cm−1 in LVP, 10 GVP and 50 GVP respectively; this band arises from the superposition of the OH and NH3+ stretching bands. A broad absorption occurred around 3446 cm−1 (LVP), 3428 cm−1 (10 GVP) and 3432 cm−1 (50 GVP) due to the OH stretching band. The absorptions around 1720 cm−1 and 1480 cm−1 in all three spectra are due to stretching of the COO− group. The rocking vibrations of NH3 and CH2 were observed around 1153 cm−1 and 906 cm−1 in all cases. The incorporation of glycine in L-valinium picrate is confirmed by the presence of a peak at 618 cm−1 in 10 GVP and 620 cm−1 in 50 GVP, attributed to a stretching vibration involving the oxygen-containing group of glycine. The absorptions of 10 GVP and 50 GVP have been compared with those of the parent compound (LVP); the shifts in the positions of the characteristic peaks confirm the formation of the new compound.
NMR Studies
The 1H-NMR and 13C-NMR spectra of the LVP, 10 GVP and 50 GVP crystals were recorded using D2O as solvent on a Bruker FT-NMR spectrometer; the obtained spectra are shown in Figures 3 and 4. The chemical shifts for the 1H-NMR and 13C-NMR spectra are expressed in δ ppm and are tabulated in Table 3. The protons of glycine and picric acid were assigned with the help of the available literature [17,18]. In the 1H-NMR spectra, the OH proton of picric acid in LVP, 10 GVP and 50 GVP is observed at δ = 8.75, 8.54 and 8.73 ppm respectively, whereas in free picric acid it is observed at δ = 11.94 ppm [18,19]. This upfield shift is due to shielding of the OH proton by the π and n electrons of glycine, confirming the charge-transfer phenomenon in the compound [20]. A further signal is assigned to the carboxylic acid proton [21]. The doublets observed around δ = 0.8 to 0.9 ppm are assigned to the protons of the two CH3 groups of L-valine in all three crystals. The signals observed at δ = 3.85 and 3.83 ppm in 10 GVP and 50 GVP respectively are due to the protons of the CH2 group of glycine, which indicates the incorporation of glycine into L-valinium picrate.
Figure 3. (a) 1H-NMR spectrum of LVP; (b) 1H-NMR spectrum of the 10 GVP crystal; (c) 1H-NMR spectrum of the 50 GVP crystal.
Figure 4. 13C-NMR spectra of the LVP, 10 GVP and 50 GVP crystals.
In the 13C-NMR spectra, the signals observed at δ = 172.01, 170.08 and 170.08 ppm for LVP, 10 GVP and 50 GVP are due to the COOH group of L-valine. The characteristic peak near δ = 162 ppm in all three crystals is attributed to the ipso carbon of picric acid. The peaks observed around δ = 141 ppm in the three crystals are due to the C2 and C6 carbon atoms bearing NO2 groups in the picric acid molecule. The signals at δ = 128.26, 127.80 and 128.22 ppm of the grown crystals are assigned to the C4 carbon atom of picric acid, and the peaks at δ = 127.21, 127.01 and 127.19 ppm are assigned to the C3 and C5 carbons of picric acid in LVP, 10 GVP and 50 GVP respectively. The resonance signals observed at δ = 58.64, 58.51 and 58.55 ppm are due to the tertiary carbon of L-valine, the signal around δ = 29.03 ppm represents the CH (isopropyl) group of L-valine, and the CH3 carbons of L-valine are observed between δ = 16 and 17 ppm. The resonance signals at δ = 40.14 ppm in 10 GVP and 50 GVP account for the carbon of the CH2 group of glycine, and the peaks at δ = 171.85 and 171.89 ppm in 10 GVP and 50 GVP respectively can be safely attributed to the carboxyl group (COOH) of glycine.
SHG Test
The nonlinear optical susceptibility of the grown crystals was measured through a second harmonic generation test using the standard Kurtz and Perry method [22]. The powdered sample is placed in the path of an Nd:YAG laser with a pulse width of 8 ns and a repetition rate of 10 Hz. The incident power was 2.8 mJ/pulse for the LVP and 50 GVP crystals and 3.5 mJ/pulse for the 10 GVP crystals. The green output signal from the sample confirmed second harmonic generation. The intensity of the output light was 1300 mV, 425 mV and 1050 mV for the LVP, 10 GVP and 50 GVP crystals respectively.
Conclusion
The organic NLO materials L-valinium picrate (LVP), 10 mol% glycine mixed L-valinium picrate (10 GVP) and 50 mol% glycine mixed L-valinium picrate (50 GVP) were grown from aqueous solution at room temperature using the slow evaporation technique. The cell parameters were determined by XRD analysis and the crystal systems were identified. The chemical environment of carbon and hydrogen in the grown crystals was characterized by the FT-NMR technique, which confirmed the presence of the dopant in the parent crystal. The NLO effect was confirmed by the Kurtz and Perry technique.
Table 1. Lattice parameter values of the grown crystals.
v3-fos-license
2018-12-08T06:16:16.091Z
2012-08-05T00:00:00.000
55108216
{ "extfieldsofstudy": [ "Materials Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/journals/amse/2012/592485.pdf", "pdf_hash": "e63b29d70808cf951c6a10e731169f0a6c007856", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42062", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "e63b29d70808cf951c6a10e731169f0a6c007856", "year": 2012 }
pes2o/s2orc
Structural and Phase Transformations in Water-Vapour-Plasma-Treated Hydrophilic TiO2 Films We have investigated structural and phase transformations in water-vapor-plasma-treated, 200–300 nm thick Ti films, maintained at room temperature, by injecting water vapor into a radio frequency (RF) plasma at different processing powers. Scanning electron microscopy (SEM) combined with optical microscopy and surface nanotopography analysis was used to view tracks of adsorbed water layers and to detect bulges or blisters that appeared on the surface of the treated samples. Rough surfaces with holes of different sizes (5–20 μm) through the entire film thickness have been observed. X-ray diffraction results show that the oxidation rate of the Ti film drastically increases in the presence of adsorbed water on the hydrophilic layer. It is assumed that the defining factor controlling the oxidation kinetics is the formation of hydroxyl radicals.

Introduction Exposure of Ti surfaces to water vapor results in the formation of an adsorbed layer on the surface. Rapid diffusion on the surface maintains quasi-equilibrium between the molecules bound to islands and isolated adsorbed water molecules. Water molecules oxidize Ti atoms (TixOy + (2x − y) H2O = x TiO2 + (2x − y) H2). These oxidation reactions are highly thermodynamically favorable and form a nanostructured hydrated titanium oxide layer [1]. It is known that the thermal oxidation of titanium by water vapor proceeds according to a linear-parabolic rate law resulting from a mixed reaction-diffusion regime. The growth of TiO2 takes place by rapid diffusion of substitutional hydroxide ions generated at the gas-scale interface [2].

It is established [3] that a titanium coating exposed to ultraviolet light has the extraordinary property of complete wettability for water. The ultraviolet light removes some of the oxygen atoms from the surface of the titania, resulting in a patchwork of nanoscale domains where hydroxyl groups become adsorbed, which produces the superhydrophilicity. It has been proved that Ti3+ ions are closely associated with the hydrophilicity of TiO2 [4,5].

Titanium coatings immersed in plasma are exposed to ultraviolet radiation and ion bombardment [6,7]. Due to radiation coming from the plasma, oxygen atoms are preferentially removed from the surface of titania, and the formation of suboxides as well as an oxygen-deficient surface in the steady state is registered [8]. The water molecules arriving at the plasma-activated surface tend, owing to the hydrophilicity, to spread out over the entire surface, as schematically shown in Figure 1. Under plasma radiation, hydroxyl radicals resulting from the dissociation of water molecules and the oxidation of Ti are split into their atomic components, hydrogen and oxygen [4]. Hydrogenation of the growing titanium oxide TiO2 takes place [9].

The present paper reports a study of the structural and phase transformations in thin Ti films treated by water-vapor plasma as a function of the surface coverage by the hydrophilic layer at different levels of processing power.
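As a worked instance of the general oxidation scheme quoted above (an illustrative substitution, not an additional result from the paper), setting x = 2 and y = 3 for the suboxide Ti2O3 gives a balanced reaction:

% General scheme from the text: Ti_xO_y + (2x - y) H2O = x TiO2 + (2x - y) H2
% Substituting x = 2, y = 3 (the Ti2O3 suboxide), so that 2x - y = 1:
\[
  \mathrm{Ti_2O_3} + \mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{TiO_2} + \mathrm{H_2}
\]
% Balance check: Ti 2 = 2; O 3 + 1 = 4 = 2 x 2; H 2 = 2.

Each suboxide formula unit thus releases one H2 molecule per missing oxygen atom, which is consistent with the hydrogen uptake and blistering discussed later in the paper.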
Experimental Technique The 80 L stainless steel vacuum chamber is equipped with a water inlet valve, which permits the introduction of water vapor from outside the chamber. The Ti samples to be treated are placed onto the water-cooled substrate holder. The reaction chamber is evacuated using a turbomolecular vacuum pump down to a pressure of about 10^-3 Pa and then connected to the source of water vapor. Once a steady-state water vapor pressure of about 5 to 10 Pa in the chamber at ambient temperature is reached, the Ti sample is exposed to the low-pressure RF plasma. The discharge characteristics have been controlled using a variable RF power supply by changing the input power level from 50 to 300 W. Without an applied axial magnetic field, the plasma ionization degree increases linearly with the plasma dissipated power. With an axial magnetic field, the ionization degree jumps to a maximum value at about 300 W and then saturates. A positive ion density and electron density of the vapor plasma of around 5 × 10^10 cm^-3 and an electron temperature between 1.2 and 1.6 eV were measured by a Langmuir probe at a processing power of 300 W. These values are in close agreement with the experimental results presented in [6].

The microstructure of the samples was characterized by the X-ray diffraction (XRD) method using a Bruker D8 diffractometer. The measurements were performed over the 2θ range 20°–70° using Cu Kα radiation in steps of 0.01°. All X-ray diffraction peaks were indexed using software with a Search-Match function and the PDF-2 database from the International Centre for Diffraction Data (ICDD). The thickness and surface topography of the Ti films were measured using a nanoprofilometer (AMBIOS XP 200). The spatial resolution was several nanometers. The surface views were investigated before and after plasma treatment by scanning electron (SEM, JEOL JSM-5600) and optical (Nikon Eclipse Lv150) microscopy. The distribution profiles of oxygen in the Ti films were measured by Auger electron spectroscopy (AES, PHI 700XI). The distribution profiles of hydrogen in the Ti films after hydrogenation were measured by glow discharge optical emission spectroscopy (GDOES, Spectruma Analytik GmbH).

Experimental Results Water vapor condenses on the hydrophilic Ti surface in the plasma environment to produce an adsorbed water layer. The distribution and size of the water island layers on the surface of the thin hydrophilic Ti films were investigated by trace analysis of the islands on the surface of the plasma-treated samples using optical and scanning electron microscopes. It was found that the water island layers were randomly distributed on the surface with an average size of about 8 μm. Figure 2 includes SEM surface views of plasma-treated Ti films: (a) at 50 W for 60 min and (b) at 300 W for 5 min. We observed that Ti films treated with lightly ionized water vapor plasma (at an RF power of 50 W) are homogeneous and smooth (Figure 2(a)). After treatment with highly ionized water vapor plasma (at an RF power of 300 W), holes throughout the entire film thickness were observed (Figure 2(b)). With an increase in the power dissipated in the plasma, the density of holes increased while their mean size remained about the same. The SEM surface view also reveals circular bumps distributed between the holes (Figure 2(b)).
Figure 3 includes surface height profiles of an untreated Ti film (Figure 3(a), curve 1) and a film plasma treated at 50 W for 60 min (Figure 3(a), curve 2). It was registered that the surface topography, initially flat with a mean roughness of about 2 nm, becomes periodically bumpy with a height amplitude of 8 nm and a period of 15-18 μm after treatment at 50 W for 60 min. As the output power of the RF power amplifier increased, the surface was subjected to the development of various bumps and blisters. After exposure to the water vapor plasma at 300 W for 5 min, the thin film that covered the blisters was lifted and the film contained randomly distributed small holes throughout the entire film thickness (Figure 3(b)), a process known as "plasma-cut". The location of the cuts correlates well with the wetted areas on the surface and can be explained by damage-induced in-plane stress and the corresponding elastic out-of-plane strain. We suppose that a rapid uptake of oxygen and hydrogen through the surface areas covered by island layers of adsorbed water occurs, and we deduce that fast H transients, because of their high mobility, may be responsible for the high compressive stresses which lead to plastic deformation of the film surface (50 W, Figure 3(a), curve 2) and its detachment.

In all cases, the mean surface roughness increases after plasma treatment. This is in agreement with other observations [10]. In the presented work, it was registered that the mean surface roughness of untreated samples is equal to 2-4 nm. It increases to 24 nm for samples treated at 200 W for 5 min and up to 180 nm for samples treated at 300 W for 5 min. It is established that the dominant technological parameter for the surface roughness is the RF power, while the plasma treatment time has a significantly smaller effect.

Water-plasma-treated hydrophilic Ti films, which had been completely covered by the water layer, were subsequently analyzed by the XRD technique. Figure 4 includes XRD patterns of untreated (curve 1) and plasma-treated Ti films: curve 2 for 5 min at 200 W, curve 3 for 20 min at 200 W, and curve 4 for 5 min at 300 W. It is seen that the phase and structural transformations in metallic Ti depend on the RF power dissipated in the plasma. SEM analysis showed that the Ti films entirely covered by the water layer were broken and lifted from the substrate after treatment at 300 W for 5 min. This is in agreement with the lifting, rupture, and formation of holes registered by SEM for a film covered by water island layers (Figures 2(b) and 3(b)).

Figure 5 includes the distribution profiles of O and H atoms (curves 1 and 2, respectively) in a Ti film treated at 200 W for 5 min. The Auger oxygen depth profile (Figure 5, curve 1) shows a sharp decrease of the oxygen concentration from 75 at.% at the surface, followed by a gradual decrease through the entire film thickness in the bulk to values around the maximum solubility of oxygen in titanium (34 at.%), while H atoms are homogeneously distributed over the estimated film thickness of about 500 nm (Figure 5, curve 2), except for a sharp increase near the surface where mobile H atoms are trapped at radiation defects in TiO2. The titanium film treated at 300 W for 5 min becomes completely oxidized, while the distribution profile of H atoms does not change significantly in the bulk, and an H maximum appears at the film-substrate interface. As a result of H accommodation at the interface, the surface is subject to the development of various bumps and blisters.
Discussions Water molecules present in the plasma are excited, ionized, and dissociated depending on the plasma processing power. In this environment, oxygen atoms are preferentially removed from the oxidized titanium surface by the radiation coming from the plasma. The oxygen-deficient surface with suboxides becomes highly reactive. Taking into account the superhydrophilicity of titania, the adsorbed water molecules and water drops converge into island layers. The adsorbed water molecules react quickly with the surface, leading to an increased hydroxyl density on the surface and the transformation of the reduced oxides into TiO2. The concentration of hydroxyl groups on the surface depends on the oxidation state of the titanium oxide. Irradiation of the surface area covered by the adsorbed water layer leads to the splitting of hydroxyl radicals into their atomic components, hydrogen and oxygen [7,10]. These factors contribute to the increase of the oxidation rate of surfaces covered by water layers. This increase is predominantly determined by the plasma radiation intensity. Additionally, the outermost layer becomes highly defected, and new pathways for the transport of water molecules and atoms become possible.

The split H atoms in the near-surface region of TiO2 are trapped at radiation defects, while the mobile H atoms detrapped from trapping centers diffuse through the oxide layer into the bulk and, taking into account titanium's high affinity for hydrogen, are absorbed by the titanium and consequently stored in the bulk. The substantial excess of atomic hydrogen present may be accommodated in the crystal lattice, resulting in gas bubbles. Lifting in the form of "popping off" discrete blisters was observed. The size distribution of the holes correlates well with the topology of the island water layers observed by SEM. One of the main issues concerning the mechanical performance of the films is the type and magnitude of the residual stresses around the crystalline TiO2 phase in the Ti matrix. Residual stresses may or may not generate microcracks around the precipitates, depending on their magnitude, the crystal size, and the film thickness.

Conclusions In summary, it emerges from our results that the study of structural and phase transformations in water-vapor-plasma-treated Ti films as a function of the processing parameters provides information for advancing knowledge of the behavior of adsorbed water molecules on the surface of titanium oxide under plasma radiation. Adsorbed water molecules on the hydrophilic surface converge into water island layers, leading to an increased hydroxyl group density on the oxidized Ti surface and its transformation into TiO2. The H atoms are trapped at radiation defects in the near-surface region of TiO2, while the mobile H atoms diffuse through the oxide layer and tend to form gas bubbles in the bulk and at the film-substrate interface.

Figure 1: Formation of water layer on the hydrophilic surface: (a) adsorbed water on a plasma-nonactivated surface and (b) water layer on a plasma-activated surface. Figure 2: SEM surface views of plasma-treated Ti films: (a) 50 W for 60 min and (b) 300 W for 5 min. Figure 4: XRD patterns of Ti film after plasma treatment: 1 as deposited, 2 for 5 min at 200 W, 3 for 20 min at 200 W, and 4 for 5 min at 300 W.
v3-fos-license
2021-10-18T18:40:40.106Z
2021-09-25T00:00:00.000
240426126
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2072-4292/13/19/3832/pdf?version=1632723152", "pdf_hash": "f0ef3ae2d288017eab552ad5e051b22738145d9e", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42063", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "e429a9ff7e9a330daa521faa2a1af07314b2a9c4", "year": 2021 }
pes2o/s2orc
Long-Term Projection of Water Cycle Changes over China Using RegCM: The global water cycle is becoming more intense in a warming climate, leading to extreme rainstorms and floods. In addition, the delicate balance of precipitation, evapotranspiration, and runoff affects the variations in soil moisture, which is of vital importance to agriculture. A systematic examination of climate change impacts on these variables may help provide scientific foundations for the design of relevant adaptation and mitigation measures. In this study, long-term variations in the water cycle over China are explored using the Regional Climate Model system (RegCM) developed by the International Centre for Theoretical Physics. Model performance is validated by comparing the simulation results with remote sensing data and gridded observations. The results show that RegCM can reasonably capture the spatial and seasonal variations in three dominant variables for the water cycle (i.e., precipitation, evapotranspiration, and runoff). Long-term projections of these three variables are developed by driving RegCM with boundary conditions of the Geophysical Fluid Dynamics Laboratory Earth System Model under the Representative Concentration Pathways (RCPs). The results show that increased annual average precipitation and evapotranspiration can be found in most parts of the domain, while a smaller part of the domain is projected with increased runoff. Statistically significant increasing trends (at a significance level of 0.05) can be detected for annual precipitation and evapotranspiration, which are 0.02 and 0.01 mm/day per decade, respectively, under RCP4.5 and are both 0.03 mm/day per decade under RCP8.5. There is no significant trend in future annual runoff anomalies. The variations in the three variables mainly occur in the wet season, in which precipitation and evapotranspiration increase and runoff decreases. The projected changes in precipitation minus evapotranspiration are larger than those in runoff, implying a possible decrease in soil moisture. Introduction The global water cycle is becoming more intense in a warming climate; increases in precipitation, evapotranspiration, and runoff can be widely observed over the world [1,2]. The resulting extreme rainstorms and heavy runoff can themselves lead to losses of life and damage to infrastructure (such as urban drainage systems), not to mention that they may also cause flood events with even more devastating consequences [3]. On the other hand, the delicate balance of the three variables also deserves attention, as, according to the surface water budget equation dS/dt = P − E − R (where S denotes the subsurface storage of water substances, P is precipitation, E is evapotranspiration, and R is runoff) [2,4], they are closely associated with variations in soil moisture. In cases where meteorological drought occurs (i.e., long-term rainfall deficit), the interactions among the water cycle components could affect how the meteorological drought is propagated to hydrological drought (i.e., deficits in runoff and other terrestrial water stores).

Methodology and Data Dynamical downscaling of water cycle components over China is developed using RegCM, which is a regional climate model developed by the International Center for Theoretical Physics [26]. The Community Land Model version 4.5 (CLM4.5) coupling is enabled in RegCM simulations to provide an improved description of land surface processes (e.g., carbon cycle, vegetation dynamics, and river routing) [27,28].
The detailed representation of water vapor fluxes for both non-vegetated and vegetated surfaces in CLM is expected to help with the simulation of evapotranspiration. In addition, CLM is embedded with SIMTOP (simple TOPMODEL-based runoff model) [29], which can take into account the influence of topographic information in runoff generation. The simulated total runoff is then routed to active ocean or marginal seas through a river transport model [30]. More details about RegCM parameterization scheme configuration can be found in Lu et al. [22]. Two rounds of RegCM hindcast simulations are conducted for model validation purposes; one of them is driven by the ERA-Interim reanalysis data developed by the European Centre for Medium-Range Weather Forecasts [31], which provide the realistic historical climate over China; the other one is driven by the historical climate scenario of the Earth System Model developed by the Geophysical Fluid Dynamics Laboratory (GFDL) [32,33], which is used to provide the baseline for projections. The baseline period is 1986 to 2005. The RegCM performance is validated through comparisons of the annual and seasonal averages of model results with those of the observations, remote sensing data, and reconstructed data. The months included in each season are as follows: December (of the previous year), January, and February for winter; March, April, and May for spring; June, July, and August for summer; September, October, and November for autumn. Spatial correlation is employed as a quantitative metric to reflect the similarity between the annual and seasonal averaged observation/remote sensing data/reconstructed data and the simulation results. In this study, version 4 of the high-resolution gridded observation dataset from the Climatic Research Unit (CRU) [34] is used for verifying the model-generated temperature and precipitation. This dataset is generated through the interpolation of extensive networks of gauge station observations into a 0.5° regular grid [34], and is widely applied for the calibration/validation of global and regional climate models [35]. For evapotranspiration, two sets of remote sensing data are employed, specifically, version 6 of the Moderate Resolution Imaging Spectroradiometer (MODIS) terrestrial evapotranspiration product (MOD16A2GF v006) [36], and the latest version of the Global Land Evapotranspiration Amsterdam Model (GLEAM, v3.5b) [37,38]. The MOD16A2GF dataset is created from remotely sensed data products of MODIS (e.g., land cover and albedo) based on the Penman-Monteith equation [39]. On the other hand, GLEAM assembles various satellite-based observations (e.g., radiation from CERES, precipitation from TMPA and MSWEP, air temperature from AIRS, soil moisture from SMOS and ESA CCI SM) and derives global evaporation variables with the Priestley and Taylor model [37]. Both datasets were demonstrated to be able to reasonably represent the actual evapotranspiration over China [40,41]. The evapotranspiration from MOD16A2GF and the actual evaporation from GLEAM are used for validating the RegCM-generated evapotranspiration. It is worth noting that the start years of these two remote sensing datasets are 2000 and 2003, respectively; therefore, RegCM results averaged over the same periods (i.e., 2000 to 2005 and 2003 to 2005) are used for comparison.
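As a concrete illustration of the pattern-correlation metric described above, the following minimal Python sketch computes the spatial correlation between a simulated and an observed 2-D annual-mean field on a common grid. The arrays here are synthetic stand-ins for a RegCM field and a regridded CRU field, not the actual data of the study, and area weighting is omitted for brevity.

import numpy as np

def spatial_correlation(model_field, obs_field):
    """Pearson correlation between two 2-D fields over their common valid cells."""
    model = np.asarray(model_field, dtype=float).ravel()
    obs = np.asarray(obs_field, dtype=float).ravel()
    valid = ~np.isnan(model) & ~np.isnan(obs)   # drop cells masked in either field
    return np.corrcoef(model[valid], obs[valid])[0, 1]

# Hypothetical 0.5-degree fields (latitude x longitude), for demonstration only.
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 1.5, size=(80, 140))           # stand-in for an observed annual mean
model = obs + rng.normal(0.0, 0.8, size=obs.shape)  # stand-in for the model hindcast
print(round(spatial_correlation(model, obs), 2))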
To assist the validation for evapotranspiration over the entire baseline period, station-based observed tank evaporation from the National Meteorological Information Center of China (NIMC) is also used (data available at data.cma.cn; accessed on 9 April 2019). The locations of the stations are shown in Figure S1 in the Supplementary Materials. For runoff, the validation was undertaken through the comparison of the model outputs with the Global Runoff Reconstruction (GRUN), which is constructed through machine learning techniques based on runoff and meteorological observations [42]. This dataset is widely used in weather and climate research and is shown to have a reasonable performance over China [43][44][45]. For future climate, GFDL projections of two representative concentration pathways (RCPs) are employed, which are RCP4.5 and RCP8.5, respectively, for intermediate and heavy emissions. Simulations are conducted for the entire twenty-first century. Three time-slices are considered in result analysis: 2020 to 2039 (or 2030s), 2040 to 2069 (or 2050s), and 2070 to 2099 (or 2080s); time averages and trends are calculated with respect to these periods. In addition, since the water cycle components are closely related to atmospheric moisture contents, and the saturation vapor pressure is related to the temperature following the Clausius-Clapeyron equation [46], different warming periods are also considered in this study. The warming periods are defined as twenty-year periods in which the domain average temperature increases by 1, 1.5, 2, 3, and 4 • C compared with the baseline average. Table 1 lists the respective periods for each warming level under the two emission scenarios. The numerical values of future trends are obtained based on Sen's slope estimator, and their statistical significance is examined by Mann-Kendall tests [47][48][49][50]. Model Validation Validation results for near-surface temperature, precipitation, evapotranspiration, and runoff are shown in Figures 1-4, respectively. The columns of each figure are for different datasets (i.e., CRU, MODIS, GLEAM, GRUN, RegCM driven by ERA-Interim, and RegCM driven by GFDL); the rows are for annual and seasonal averages. It can be observed that RegCM can reasonably reproduce the spatial distribution and seasonal variations of temperature over China. As shown by the CRU observations (Figure 1a), above-zero baseline-average temperature can be found in most parts of the domain, except for the Tibetan Plateau and a small part of northeastern China. This spatial feature is realistically generated in the two sets of RegCM results, although with underestimations of various degrees (Figure 1b,c). Such underestimations, as discussed by Lu et al. [22], are partly caused by the model setup and partly due to the driving GCM data. Temperature over China demonstrates clear seasonality, i.e., hot summer, cold winter, and mild spring and autumn. RegCM is able to capture the seasonal variations, although underestimations can still be observed. The spatial correlation between the observed and modeled temperature can be found in Table S1 in the Supplementary Materials. The high correlation (ranges between 0.94 and 0.98) indicates RegCM's good performance in temperature. In addition, the results of RegCM driven by GFDL have higher correlations with the observations than those of the raw GCM data (please refer to Table S2 in the Supplementary Materials), which suggests that RegCM is able to correct some biases in GFDL's temperature results. 
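For readers who wish to reproduce the trend diagnostics referenced in the Methodology (Sen's slope estimator and the Mann-Kendall test), the sketch below is a self-contained illustration. It uses the common normal approximation without tie correction and a synthetic annual series, so it is not the study's analysis code.

import itertools
import math

def mann_kendall_sen(series):
    """Return (Sen's slope per time step, two-sided p-value) for a 1-D series."""
    n = len(series)
    # Mann-Kendall S statistic: increasing minus decreasing pairs.
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i, j in itertools.combinations(range(n), 2)
    )
    # Variance of S (no tie correction); normal approximation with continuity correction.
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - math.copysign(1, s)) / math.sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    # Sen's slope: median of all pairwise slopes.
    slopes = sorted((series[j] - series[i]) / (j - i)
                    for i, j in itertools.combinations(range(n), 2))
    mid = len(slopes) // 2
    sen = slopes[mid] if len(slopes) % 2 else 0.5 * (slopes[mid - 1] + slopes[mid])
    return sen, p

# Hypothetical annual precipitation anomalies (mm/day) for 2010-2100, placeholders only.
years = list(range(2010, 2101))
anoms = [0.002 * (y - 2010) + 0.05 * math.sin(y) for y in years]
slope, p_value = mann_kendall_sen(anoms)
print(f"Sen's slope: {slope * 10:.3f} mm/day per decade, p = {p_value:.3f}")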
The performance of RegCM with respect to precipitation is less satisfactory than for temperature. Although it is able to generate the observed wet-in-the-southeast and dry-in-the-northwest precipitation pattern, underestimations can be found over the southeastern part of the domain and overestimations over the northwestern part (Figure 2a-c). This over- and underestimation pair was shown to be related to the simulation bias in vapor pressure by Lu et al. [22], who further argued that the bias in vapor pressure could be associated with the bias in temperature. There is also an apparent dry bias near the Sichuan Basin and a wet bias near the southeastern edge of the Tibetan Plateau, which could be related to the configuration of the cumulus convective scheme [22]. The spatial correlation (Table S1) is lower for precipitation than for temperature, which is consistent with the above results. The seasonal precipitation over China shows clear monsoon features (more precipitation in summer and less in winter), which is shared by the two sets of RegCM results.

The spatial pattern of actual evapotranspiration from MODIS and GLEAM (Figure 3a,d) shows more regional details than the observed precipitation pattern from CRU, which is in part due to the higher resolution of remote sensing data than that of the gridded observation, and in part due to evapotranspiration's closer relationship with the geophysical characteristics of the domain than precipitation.
The two sets of remote sensing data show similar annual patterns; subtle differences exist sporadically over the domain, which could be explained by the different skills of the two datasets over different land-use types [51]. RegCM shows better skills in simulating evapotranspiration than precipitation (Figure 3b,c,e,f), although minor overestimations can be noticed. As shown in Table S1, for annual average evapotranspiration, the spatial correlations between RegCM results and GLEAM are higher than 0.8, and those for MODIS are higher than 0.6, indicating a reasonable performance for RegCM in evapotranspiration. The spatial correlation between RegCM results and the observed tank evaporation from NIMC is 0.50 and 0.42, respectively, for the two rounds of hindcast simulations. The lower correlation between RegCM and NIMC could be related to the difference between tank evaporation and actual evapotranspiration. In terms of the seasonal variations, RegCM demonstrates overestimations in spring and winter, and underestimations in summer and autumn.

The RegCM-generated spatial patterns for runoff show considerably larger biases than for other variables. Apparent overestimations can be spotted near the southeastern corner of the Tibetan Plateau, which is likely to be related to the wet bias in precipitation that occurs at the same location. Slight underestimations in runoff can be found over the entire Tibetan Plateau, where overestimations in evapotranspiration can be identified; the latter is likely to be the cause of the former. The spatial correlations between the annual average runoff of the two sets of RegCM results and GRUN are 0.67 and 0.66, respectively (Table S1). From the seasonal perspective, the GRUN reconstructed runoff is high in summer and low in winter. This monsoon feature is well captured by RegCM. As shown by the spatial correlation, for runoff, RegCM shows better skills in summer/autumn (ranges between 0.65 and 0.69) than in winter/spring (between 0.37 and 0.53).

The RegCM-simulated domain average annual cycles for precipitation, evapotranspiration, and runoff are also examined (shown in Figure 5). The two sets of RegCM results present similar features (please refer to Figure S2 in the Supplementary Materials for a direct comparison of the two sets of simulation results). All three variables show a peak in their annual cycles during the monsoon period (May to September), which is consistent with previous results. On domain average, a considerable part of the precipitation is balanced by evapotranspiration, and a smaller portion is attributed to the runoff. The difference between the simulated precipitation and evapotranspiration is also plotted in Figure 5. As indicated by the surface water budget equation, the amount of precipitation that is not balanced by the other two variables contributes to the moisture storage in soil.
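As a concrete illustration of this bookkeeping, the short Python sketch below computes the implied change in storage as the residual P − E − R of the surface water budget from hypothetical monthly domain-average values (placeholders chosen only to mimic a monsoon-type annual cycle, not the RegCM output).

# Hypothetical domain-average monthly means (mm/day); placeholders only.
precipitation      = [0.7, 0.9, 1.4, 2.1, 3.0, 4.2, 4.8, 4.5, 3.1, 1.9, 1.1, 0.8]
evapotranspiration = [0.5, 0.6, 1.0, 1.6, 2.3, 3.0, 3.4, 3.3, 2.4, 1.5, 0.9, 0.6]
runoff             = [0.2, 0.2, 0.3, 0.4, 0.6, 0.9, 1.1, 1.0, 0.8, 0.5, 0.3, 0.2]

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

for month, p, e, r in zip(months, precipitation, evapotranspiration, runoff):
    storage_change = p - e - r          # dS/dt = P - E - R (surface water budget)
    sign = "gain" if storage_change > 0 else "loss"
    print(f"{month}: P-E = {p - e:.2f} mm/day, R = {r:.2f} mm/day, "
          f"storage {sign} {abs(storage_change):.2f} mm/day")

A positive residual corresponds to seasonal storage and a negative residual to dissipation, which is the behavior described next for the simulated annual cycle.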
It can be observed that the difference between precipitation and evapotranspiration is larger than runoff in early spring, and this relationship reverses in autumn, which indicates water storage in spring and dissipation in autumn.

Precipitation Having reasonable skills in reproducing the historical climate over China, RegCM is subsequently used to project future changes in water cycle components. The projected changes in precipitation over China in three future periods and under two emission scenarios are shown in Figure 6. The left three columns are for RCP4.5 and the right three for RCP8.5; for each scenario, the three columns, respectively, indicate 2030s, 2050s, and 2080s. The rows in the figure are the annual and seasonal averages. On annual average, increases in precipitation can be found in most parts of the domain. Precipitation changes of the two RCPs show certain similarities. For example, in the 2080s, precipitation increases of larger than 0.3 mm/day are expected in parts of the Tibetan Plateau, Yellow River Basin, Haihe River Basins, Yangtze Plain, and southern coastal hilly regions under both scenarios. In general, the area experiencing increased precipitation is larger under RCP8.5, especially in the Tibetan Plateau, where more pronounced increases (over 0.9 mm/day in the southeastern corner) can be observed as well.

Precipitation changes demonstrate apparent seasonal variations. In winter, precipitation decreases of over 0.3 mm/day can be found in parts of the Yunnan-Guizhou Plateau in the 2050s and 2080s under RCP4.5 and in the 2030s and 2050s under RCP8.5. In the 2080s under RCP8.5, more severe changes over larger areas can be noticed; parts of the Pearl River Basin are projected with precipitation decreases of over 0.3 mm/day and parts of the Yunnan-Guizhou Plateau of over 0.6 mm/day.
Summer precipitation exhibits similar pat-terns of changes in the 2050s under the two scenarios, where decreases of over 0.3 mm/day are to be found in the middle and lower reaches of the Yangtze River Basin, and increases of over 0.9 mm/day are expected near the Hengduan Mountains located in the southeastern corner of the Tibetan Plateau. In the 2080s, the spatial distributions of summer precipitation change under the two scenarios are quite different. Under RCP4.5, precipitation decreases of over 0.3 mm/day are likely to occur in the very north of northeastern China, parts of the Yangtze Plain, and parts of the Pearl River Basin, while increases of over 0.9 mm/day are projected in the Hengduan Mountains and the southeastern coastal hilly regions. In comparison, under RCP8.5, decreases in summer precipitation mainly occur in the middle and lower reaches of the Yangtze River Basin (over 0.3 mm/day), while increases of over 0.9 mm/day are to be found in the southern parts of the Tibetan Plateau, Hengduan Mountains, and parts of the Haihe River Basin. The changes in summer precipitation may be related to variations in its major moisture transport branches, which are the transportations by the Indian monsoon, Southeast Asian monsoon, and midlatitude westerlies, as shown by Simmonds [52]. The increases in summer precipitation in southeastern China and decreases in central south and southeastern China under both scenarios could indicate an enhanced Indian summer monsoon and a subsided Southeast Asian summer monsoon. In spring and autumn, precipitation increases are to be seen in the Yangtze Plain (can reach over 1.8 mm/day) and the Yellow River Basin (over 0.9 mm/day), respectively. The annual series of domain average precipitation under both RCPs are shown in Figure 7. For both scenarios, precipitation demonstrates an evident increasing trend (although a decreasing trend can be observed between 2050 and 2060 under RCP4.5). The Mann-Kendall test confirms the statistical significance of the trends at an α level of 0.05 (when the period of 2010 to 2100 is considered as a whole). The magnitudes of trends, given by Sen's slope estimator, are 0.02 and 0.03 mm/day per decade under RCP4.5 and RCP8.5, respectively (as shown in Table 2). Some seasonal trends are also statistically significant; summer precipitation shows increasing trends of 0.02 and 0.05 mm/day per decade under the two scenarios, and autumn precipitation increases at a rate of 0.02 mm/day per decade under RCP8.5. Precipitation changes (with respect to the baseline period) in the three future periods under both scenarios are listed in Table 3. Under RCP4.5, the annual average precipitation is projected to increase by 0.06, 0.08, and 0.16 mm/day, respectively, in the three future periods (statistically significant at an α level of 0.05). Under RCP8.5, precipitation change in the 2030s is not statistically significant, and the increases in the 2050s and 2080s are 0.12 and 0.2 mm/day, respectively. Spring and summer precipitation is expected to undergo larger increases than that in winter and autumn. These numbers are consistent with the changes in the annual cycles of precipitation as shown in Figure 8, in which large precipitation increases can be found from April to September in the 2050s and 2080s under both scenarios. Precipitation changes with respect to different warming levels under the two scenarios are shown in Table 4. The magnitudes of changes under the same warming level are similar under different scenarios. 
For example, with a domain average warming of 2 °C, the annual average precipitation is likely to increase by 0.12 and 0.13 mm/day under RCP4.5 and RCP8.5, respectively. This phenomenon is reasonable because, according to the Clausius-Clapeyron equation, the increase in the water holding capacity of the atmosphere is the same given the same temperature increase. The change in precipitation is not necessarily the same since the actual amount of water vapor available can be different. At a warming level of 2 °C, spring and summer precipitation is also projected to increase by 0.15 and 0.17 mm/day under RCP4.5, and by 0.21 and 0.17 mm/day under RCP8.5. When domain average warming reaches 4 °C under RCP8.5, the annual, spring, and summer precipitation are projected to increase by 0.19, 0.24, and 0.27 mm/day, respectively. Table 4. Projected changes (mm/day) in precipitation, evapotranspiration, and runoff at different warming levels (P, E, and R denote precipitation, evapotranspiration, and runoff, respectively).

Evapotranspiration The projected changes in evapotranspiration over China are shown in Figure 9. Similar to the spatial pattern of precipitation changes, the annual average evapotranspiration is likely to increase over most parts of the domain. The area experiencing increased evapotranspiration is larger than that for precipitation, but the magnitude of the increase is smaller when compared with precipitation. Annual average evapotranspiration changes are within ±0.3 mm/day for most of the time under both scenarios, except that increases of over 0.3 mm/day are projected in the Hengduan Mountains, Yunnan-Guizhou Plateau, and southeastern coastal hilly regions in the 2080s under RCP8.5. Intra-annual variations can also be observed for evapotranspiration changes. The most pronounced changes are to occur in summer, in which evapotranspiration increases of over 0.3 mm/day can be found over the entire domain except parts in northern and northeastern China in the 2080s under RCP8.5, and increases of over 0.6 mm/day are to be seen in the Hengduan Mountains. In spring, evaporation increases of over 0.3 mm/day are projected in areas between the Yangtze River and Pearl River Basins in the 2080s under RCP4.5, and in most of the southern parts of the domain in the same period under RCP8.5. In autumn, evapotranspiration increases of over 0.3 mm/day can be found in the Hengduan Mountains. Evapotranspiration changes in winter are within ±0.3 mm/day for all future periods under both scenarios.
The annual average evapotranspiration time series is shown in Figure 7, which appears to be less fluctuating than those of the other two variables. Evident increasing trends can be observed under both scenarios, the magnitudes of which are 0.01 and 0.03 mm/day per decade (statistically significant at an α level of 0.05), respectively (Table 2). All seasonal trends are exclusively statistically significant, which are 0.01, 0.02, 0.01, and 0.01 mm/day per decade, respectively, for winter, spring, summer, and autumn under RCP4.5 and 0.01, 0.03, 0.04, and 0.02 mm/day per decade under RCP8.5. Annual and seasonal evapotranspiration changes in the three future periods under both scenarios are all statistically significant, except for winter in the 2030s under RCP4.5 (Table 3). Under RCP4.5, annual average evapotranspiration is projected to increase by 0.06, 0.1, and 0.12 mm/day, respectively, for the 2030s, 2050s, and 2080s, while, under RCP8.5, the increases are 0.08, 0.14, and 0.22 mm/day. Such increases in evapotranspiration can also be observed in Figure 8, in which evapotranspiration changes are always above zero and larger increases can be found in the monsoon months. At a domain average warming of 2 °C, annual average evapotranspiration is projected to increase by 0.12 and 0.14 mm/day under the two scenarios (Table 4), which are close to the amount of precipitation increases. For the 4 °C warming period under RCP8.5, annual evapotranspiration is likely to increase by 0.21 mm/day, and seasonal evapotranspiration by 0.08, 0.23, 0.36, and 0.18 mm/day for winter, spring, summer, and autumn, respectively.

Runoff Figure 10 shows the projected changes in future runoff. In general, the annual and seasonal variations in runoff share a certain resemblance with those in precipitation, and only the area experiencing increased runoff is considerably smaller. Runoff changes in most parts of the domain are within ±0.3 mm/day in the three future periods under both scenarios. Under RCP4.5, runoff increases of over 0.3 mm/day can be found in the Yangtze Plain in the 2080s. Under RCP8.5, over 0.3 mm/day decreases in runoff are likely to occur in parts of the Yunnan-Guizhou Plateau in the 2050s and also in parts of the Yangtze River Basin in the 2080s. Seasonal changes in future runoff also demonstrate distinct characteristics. Among all seasons, runoff changes in summer are most severe. For example, under RCP4.5, runoff increases of over 1.2 mm/day are projected in the southeastern coastal hilly regions in the 2080s, which can be related to the increased precipitation in this area. Under RCP8.5, a considerable part of the Yangtze River Basin is projected to experience a runoff reduction of over 0.6 mm/day in the 2080s, which is likely to be caused by the simultaneous decrease in precipitation and increase in evapotranspiration.
In winter, changes in runoff are within ±0.3 mm/day in most parts of the domain, with decreases in the central and southwestern parts and increases elsewhere. Spring runoff is projected to increase in the Yangtze Plain by at least over 0.3 mm/day in the future. In autumn, parts of the Yellow River and Haihe River Basins are to receive runoff increases of over 0.3 mm/day. The observed changes in runoff are closely related to the corresponding changes in precipitation.

As shown in Figure 7, the interannual variation in runoff is highly correlated with those in precipitation and precipitation minus evapotranspiration (hereafter as P − E). In addition, the changes in P − E are slightly larger than those in runoff, which are consistent with the observations of Zhang et al. [2]. They offered two possible explanations: problems with model spinup or water balance closure, and changes in terrestrial water storage as a result of global warming [2]. The latter case indicates a reduction in future soil moisture, which may bring a negative influence on agriculture. The time series of annual average runoff do not exhibit apparent trends, which is consistent with the lack of a significant trend as shown in Table 2. Previous studies also noticed that the annual runoff series does not demonstrate a trend in the historical period; for example, no statistically significant trend can be detected in the Huaihe River Basin, according to Yu et al. [53]. Significant weak trends of 0.01 mm/day per decade are projected in autumn under RCP4.5 and in winter and autumn under RCP8.5. In comparison with precipitation and evapotranspiration, there is no statistically significant change in annual average runoff except for the 2080s under RCP4.5 (0.05 mm/day). Statistically significant runoff reductions are projected in summer, which are −0.08 and −0.09 mm/day in the 2030s and 2050s under RCP4.5 and −0.12, −0.13, and −0.16 mm/day in the three future periods under RCP8.5. Such a runoff reduction in summer is also evident in Figure 8. These observations are partially consistent with those of Zhang et al. [2], who noticed that changes in precipitation, evapotranspiration, runoff, and P − E mainly occur in the wet season with simultaneous increases in all variables. In this study, the most pronounced changes also occur in the wet season. The projected reduction in runoff and P − E could be related to the difference in the domain selection between this and Zhang et al.'s studies. There is no significant change in annual average runoff under all warming levels and emission scenarios, except that an increase of 0.04 mm/day is projected at a warming level of 3 °C under RCP8.5. At a warming level of 2 °C, runoff changes of 0.02 and 0.08 mm/day are likely to occur in winter and autumn under RCP4.5, and 0.03, −0.11, and 0.08 mm/day in winter, summer, and autumn under RCP8.5. With a domain average warming of 4 °C, runoff in winter, summer, and autumn is to change by 0.03, −0.15, and 0.01 mm/day under RCP8.5.
Discussions and Conclusions In this study, the long-term variations in the water cycle over China are studied using RegCM. The performance of RegCM in terms of temperature, precipitation, evapotranspiration, and runoff is validated through comparisons of the model-generated results with gridded observations, remote sensing data, and reconstructed data. The results show that RegCM can reasonably capture the spatial and seasonal variations in these variables, although certain biases exist, such as a cold bias in the entire domain, a dry and wet bias pair in the southeastern and northwestern parts of the domain, some over- and underestimations of evapotranspiration, respectively, in winter/spring and summer/autumn, and some over- and underestimations of runoff near the Tibetan Plateau. Long-term projections of precipitation, evapotranspiration, and runoff under two emission scenarios are then developed. The results show that increased annual average precipitation and evapotranspiration can be found in most parts of the domain, while a smaller part of the domain is projected with increased runoff. For precipitation, the regions most affected by global warming are the Yangtze Plain, Yellow River and Haihe River Basins, and southeastern parts of the Tibetan Plateau, where over 0.3 mm/day increases are expected in the 2080s under both scenarios. The projected increase in precipitation in the Yellow River and Haihe River Basins can also be observed in a CMIP6 GCM ensemble according to Tian et al. [54]. It is worth noting that although the projected precipitation increase in the Tarim Basin is within 0.3 mm/day, the percentage change could be large considering its low annual average total precipitation, which is why several studies (e.g., [55,56]) identified it as an area vulnerable to climate change. Precipitation increase has been shown to be among the driving factors for the increased flood frequency in the Tarim River Basin since the 1980s [57]; the increase in future precipitation as projected by this and the previous studies may indicate increased flood risks in this area, which suggests the need for relevant flood prevention measures. For evapotranspiration, areas experiencing evident increases are the Hengduan Mountains, Yunnan-Guizhou Plateau, and southeastern coastal hilly regions; the magnitude of change is 0.3 mm/day in the 2080s under RCP8.5. The apparent evapotranspiration increase in southeastern coastal hilly regions is also noted by Su et al. [58]. In terms of seasonal variations, summer and spring are likely to see larger increases in evapotranspiration; this feature is consistent with Ma et al.'s observation [59]. The developed projections in evapotranspiration can be used to evaluate climate change impacts on drought conditions through the calculation of evapotranspiration deficit; this index, compared with those that are based on precipitation and soil moisture, could more effectively reflect moisture deficiency in ecosystems [60].
For runoff, the regions most affected are the Yangtze Delta, Yunnan-Guizhou Plateau, and parts of the Yangtze River Basin, with increases of 0.3 mm/day for the first region in the 2080s under RCP4.5 and decreases of over 0.3 mm/day for the latter two regions in the 2080s under RCP8.5. The projected decrease in runoff in the middle reaches of the Yangtze River Basin is consistent with the results from Xing et al.'s study [61]. In addition, the runoff reduction in the Yunnan-Guizhou Plateau and the upper reaches of the Yangtze River Basin is also noticed by Zhai et al. in their ensemble projection of runoff [62]. Extreme high and low runoff are often related to flood and drought hazards [63], and the runoff projection developed in this study could help identify regions vulnerable to increased flood and drought risks, and thus support flood mitigation and water resource management [64]. In summary, future precipitation and evapotranspiration are likely to increase over China in the wet season, while runoff decreases. The projected changes in precipitation minus evapotranspiration are larger than those in runoff, implying a possible decrease in soil moisture. It is important that future variations in the water cycle components be considered when designing flood and drought mitigation measures. Extensions of this study can be conducted with respect to the current limitations. For example, more sophisticated bias correction techniques can be applied to the model results. In addition, more GCMs can be used to drive RegCM so that an ensemble can be constructed for more robust projections. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/rs13193832/s1, Figure S1: Location of NIMC weather stations. Note: Red dots indicate selected stations for evapotranspiration data. The NIMC tank evaporation data contains a large number of missing data; the stations are selected if the percentage of missing data is less than 50% for each season. Figure S2: Simulated annual cycles for precipitation, evapotranspiration, runoff, and the difference between precipitation and evapotranspiration in the baseline period. The R2's for the two sets of RegCM results are 0.88, 0.92, 0.83, and 0.82, respectively for precipitation, evapotranspiration, runoff, and the difference between precipitation and evapotranspiration. Table S1: Spatial correlation between RegCM-generated results and observations, remote sensing data, and reconstructed data. Table S2: Spatial correlation between raw GFDL data and observations.
v3-fos-license
2022-05-10T15:02:31.213Z
2022-05-02T00:00:00.000
248586314
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.johila.org/index.php/Johila/article/download/100/113", "pdf_hash": "49ebdad6cde1bca8fcc3c5359468293836b2f1c2", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42064", "s2fieldsofstudy": [ "Medicine" ], "sha1": "c8b378642b7638a0afb82072d2b9acf82734e395", "year": 2022 }
pes2o/s2orc
Just in time: integrating library services for literature searches in a hospital library setting Providing timely and comprehensive literature searches is a core service for hospital libraries. These expert searches are mediated across multiple databases and platforms. NSLHD Libraries have pivoted from services which were largely site based to services for staff working not only off-site but also in isolation across the district. Immediate access to literature searches has been achieved by integrating multiple library services. Background Northern Sydney Local Health District (NSLHD) has a community of almost 1 million residents (HealthStats NSW, 2022). Five libraries support a workforce of more than 10,000 staff, which includes nurses, allied health staff, medical professionals, community health centres and health service managers. By adopting a strategy of continuous improvement using technology to reduce barriers, NSLHD Libraries supports the NSLHD Strategic Plan vision to be "Leaders in healthcare and partners in wellbeing." NSLHD Libraries In 2020 NSLHD Libraries implemented several new resources: Springboard (OVID Discovery), Reftracker (Altarama), and Single Sign-On authentication through OpenAthens. After some months of embedding these systems into library workflows, Covid-19 changed the workplace. By early 2021 it became apparent that NSLHD Libraries needed to find agile ways to support staff. These new technologies enabled a pivot from services which were largely site based to services for staff who required access not just off-site, but also when working in isolation across the district. As Scott (2021) suggests, services need to be re-engineered to meet the needs of staff who struggle to be released from their clinical duties. NSLHD Libraries resolved to implement an improved remote service for clinicians. Changing environment NSLHD Libraries staff are able to deliver services and adapt to a rapidly changing environment. In doing so, they support the main themes that underpin the NSLHD Strategic Plan (2017-2022) - Evidence Based Decision Making, Responsive & Adaptable Organisation, and an Engaged and Empowered Workforce. All NSLHD Libraries remained open during the pandemic, delivering access onsite whilst simultaneously and rapidly expanding service delivery offsite and for staff no longer located in their usual place of work. The messaging across the district has been clear and proactive (Haugh, 2021), resulting in a substantial increase in visits to the NSLHD's five libraries and in offsite access via Springboard. Removing barriers Time is one of the 8 pain points identified by Laera et al. (2021) when users are accessing clinical information. With staff offsite, furloughed, upskilling, or taking on extra tasks, time is even more compromised. How could some barriers be removed to provide immediate access to search results? NSLHD Libraries set out to address these issues. Literature service review NSLHD Librarians reviewed 5 completed literature searches in a range of topics and databases for patient care for the NSLHD. This amounted to 300 references in total. The aim of the sample was to address the following issues: firstly, the number of articles available as full text via NSLHD Springboard; secondly, articles requiring the document delivery service; and thirdly, any technical issues. Results Results were encouraging: 70% of the articles were available immediately in full text. The remaining 30% were conference proceedings and results from obscure journals.
These results indicate depth in the NSLHD Libraries collection and reflect the expertise in the selection of relevant journals. Of the 60 citations in an X-ray search, for example, 52 articles were available as full text and only 8 required document delivery (ILLs). No access or authentication issues were identified. That search had been requested to assist in the design of a study looking at the effect of a chest X-ray interpretation tool in the Emergency Department. Solutions NSLHD Library staff set out to identify and remove barriers such as the need to fill out an additional form, copying and pasting citations, and the need to contact library staff for assistance. Library staff created a new EndNote output style incorporating Springboard URLs for literature searches. Then, by activating "Auto format" in Microsoft Word, the citations could be hyperlinked, ensuring that the user receives direct hyperlinks into Springboard. Trial of the new service NSLHD Libraries set up a trial of hyperlinking literature searches in July and August 2021. A short survey for users was added to our library forms for feedback. Survey results, though small, were positive, with no issues reported. Additionally, no questions or queries were received via the NSLHD's phone and email address. Training sessions for the Library team were set up via video call and work instructions provided. In September 2021 the new hyperlinked search results were implemented across the district. Conclusion Hyperlinking citations directly to Springboard gives immediate access to articles. For articles not held at NSLHD, an article request form pops up. Details are prefilled using OpenURL functionality through Reftracker. The implementation of single sign-on to our library systems, including our Discovery Service, has meant that the complex process of remembering additional logins and passwords is no longer required. Users are familiar with their institutional staff logins and use these to obtain access. This integration represents a significant time saving for users and library staff.
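The hyperlinking and prefilling workflow described above can be illustrated with a short sketch. The snippet below assembles an OpenURL-style (Z39.88 key/encoded-value) query string from basic citation metadata, the kind of link an output style or link resolver could embed in a reference list; the resolver base URL and the citation values are hypothetical placeholders, not NSLHD's actual Springboard or Reftracker configuration.

    import urllib.parse

    # Hypothetical link-resolver base URL; the real Springboard endpoint is not given in the text.
    RESOLVER_BASE = "https://resolver.example.org/openurl"

    def citation_to_openurl(citation):
        """Build an OpenURL 1.0 (KEV) query string from basic citation metadata."""
        params = {
            "ctx_ver": "Z39.88-2004",
            "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
            "rft.genre": "article",
            "rft.atitle": citation["title"],
            "rft.jtitle": citation["journal"],
            "rft.issn": citation["issn"],
            "rft.volume": citation["volume"],
            "rft.spage": citation["start_page"],
            "rft.date": citation["year"],
        }
        return RESOLVER_BASE + "?" + urllib.parse.urlencode(params)

    # Placeholder citation metadata, for illustration only.
    example = {
        "title": "Example article title",
        "journal": "Example Journal",
        "issn": "0000-0000",
        "volume": "1",
        "start_page": "1",
        "year": "2021",
    }
    print(citation_to_openurl(example))

In practice, the EndNote output style writes such a URL after each reference, and Word's auto-format turns it into a clickable hyperlink.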
v3-fos-license
2016-05-12T22:15:10.714Z
2015-04-29T00:00:00.000
2678596
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0125403&type=printable", "pdf_hash": "21090fef66c9a9ce8ceb1a00c61c439e95a9c398", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42065", "s2fieldsofstudy": [ "Medicine" ], "sha1": "21090fef66c9a9ce8ceb1a00c61c439e95a9c398", "year": 2015 }
pes2o/s2orc
Month 2 Culture Status and Treatment Duration as Predictors of Recurrence in Pulmonary Tuberculosis: Model Validation and Update Background New regimens capable of shortening tuberculosis treatment without increasing the risk of recurrence are urgently needed. A 2013 meta-regression analysis, using data from trials published from 1973 to 1997 involving 7793 patients, identified 2-month sputum culture status and treatment duration as independent predictors of recurrence. The resulting model predicted that if a new 4-month regimen reduced the proportion of patients positive at month 2 to 1%, it would reduce to 10% the risk of a relapse rate >10% in a trial with 680 subjects per arm. The 1% target was far lower than anticipated. Methods Data from the 8 arms of 3 recent unsuccessful phase 3 treatment-shortening trials of fluoroquinolone-substituted regimens (REMox, OFLOTUB, and RIFAQUIN) were used to assess and refine the accuracy of the 2013 meta-regression model. The updated model was then tested using data from a treatment shortening trial reported in 2009 by Johnson et al. Findings The proportions of patients with recurrence as predicted by the 2013 model were highly correlated with observed proportions as reported in the literature (R2 = 0.86). Using the previously proposed threshold of 10% recurrences as the maximum likely considered acceptable by tuberculosis control programs, the original model correctly identified all 4 six-month regimens as satisfactory, and 3 of 4 four-month regimens as unsatisfactory (sensitivity = 100%, specificity = 75%, PPV = 80%, and NPV = 100%). A revision of the regression model based on the full dataset of 66 regimens and 11181 patients resulted in only minimal changes to its predictions. A test of the revised model using data from the treatment shortening trial of Johnson et al found the reported relapse rates in both arms to be consistent with predictions. Interpretation Meta-regression modeling of recurrence based on month 2 culture status and regimen duration can inform the design of future phase 3 tuberculosis clinical trials. Introduction Tuberculosis remains one of the world's deadliest communicable diseases, causing an estimated 9 million new cases and 1.5 million deaths annually [1]. The identification of new regimens capable of shortening treatment without increasing the risk of recurrence has been a high priority for tuberculosis research for many years. A brief report by Mitchison in 1993 first proposed a role for sputum culture status after 2 months of treatment in the evaluation of such regimens [2]. Two subsequent independent analyses of regimen pairs of equal duration confirmed the relationship between sputum culture status and relapse risk [3,4]. However, the design of these studies precluded their ability to directly inform the likelihood of success of shorter new regimens in phase 3 trials. In 2013, a meta-regression analysis identified 2-month sputum culture status and treatment duration as independent predictors of recurrence, using data from 7793 patients treated with 58 diverse regimens of various durations published from 1973 to 1997 [5]. The regression model predicted that if a new 4-month regimen reduced the proportion of patients positive after 2 months of treatment to 1%, it would reduce to 10% the risk of a relapse rate >10% in a trial with 680 subjects per arm. The 1% target was far lower than anticipated. 
There have since been lingering concerns that the model, which was developed using data from decades-old trials, might have limited ability to predict results of contemporary studies. In October 2014, results of 3 phase 3 trials of 4 fluoroquinolone-substituted 4-month regimens were reported [6][7][8]. None of the four 4-month regimens tested in these trials proved successful. In the present publication, data from these trials have been used to assess and refine the accuracy of the 2013 meta-regression model. The accuracy of the updated model was then assessed using data from the treatment shortening study of Johnson et al [9]. None of these studies had been included during development of the original model. Model validation The original dataset, statistical programming code, and resulting mathematical model, as reported in 2013, comprised the training set for this study. That model predicted TB recurrence risk based on the proportion positive at month 2 and the treatment duration in months, as follows: logit(recurrence proportion) = 2.1471 + 0.4756 x logit(month 2 positive proportion)-2.2670 x ln(months duration). Proportions (recurrence and positive cultures at month 2) were transformed using the logit function. On an ordinary scale such proportions must be between 0 and 1. After logit transformation, values range from negative infinity to positive infinity, with logit(0.5) = 0. Logit transformation eliminates the possibility that a linear model will yield predicted proportions exceeding the limits of 0 and 1. Duration was transformed using the natural log function. The validation dataset consisted of results from the REMox, OFLOTUB, and RIFAQUIN studies [6][7][8]. For consistency with historic data, recurrence rates were calculated from those studies as the number of recurrences divided by the number of subjects at risk for recurrence (i.e., excluding those who had unsatisfactory outcomes prior to being assessed for recurrence), as reported in per-protocol analyses. The REMox and RIFAQUIN trials included in their primary analyses patients retreated for recurrent tuberculosis based on clinical criteria without full microbiologic confirmation (described in the two studies as "retreated" and "limited bacteriology" cases, respectively). For consistency, these cases are included in the primary analysis in the present study as they were reported; a secondary analysis includes only those with full culture confirmation. Sputum culture status (positive or negative for M. tuberculosis) after 2 months of treatment is as reported in each trial using solid culture medium, excluding invalid results due to contaminated or missing specimens (REMox supplemental table S8, OFLOTUB table 2, RIFAQUIN supplemental table 2). Proportions positive for M. tuberculosis at this single time point (without regard to subsequent cultures) were used for consistency with historic data. The confidence intervals of observed proportions were estimated using logistic regression and the Wald test [10]. Validation of the model was performed by examining the relationship between observed and predicted recurrence proportions on a logit scale. Model updating After the validation step, the model was updated using the full dataset, following the same methods as in the 2013 publication. Briefly, proportions were transformed using the logit function. Proportions reported as zero were assigned values of 0.005 (0.5%). 
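For concreteness, the fitted 2013 equation quoted above can be evaluated directly. The following minimal sketch (in Python) plugs a month-2 positive proportion and a regimen duration into the published coefficients; it returns the point prediction only, not the prediction interval described in the next section, and the example inputs are illustrative.

    import math

    def logit(p):
        return math.log(p / (1.0 - p))

    def inv_logit(x):
        return 1.0 / (1.0 + math.exp(-x))

    def predicted_recurrence(month2_positive, duration_months):
        """Point prediction of the recurrence proportion from the 2013 meta-regression model."""
        z = 2.1471 + 0.4756 * logit(month2_positive) - 2.2670 * math.log(duration_months)
        return inv_logit(z)

    # Example: a 4-month regimen with 1% of patients still culture positive at month 2.
    print(round(predicted_recurrence(0.01, 4), 3))

Note that the paper's 10% criterion refers to the upper limit of the prediction interval for a trial arm, not to this point estimate.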
As in the 2013 publication, the model included fixed effects for the logit of the month 2 culture positive rate and for the natural logarithm of the treatment duration. A random intercept was included for study. The within-study variance of each study arm was fixed using the asymptotic variance of the logit-transformed recurrence proportion, calculated as 1/(Np(1 - p)), where N was the arm's sample size and p was the recurrence proportion. The between-study variance was estimated by restricted maximum likelihood using the SAS MIXED procedure [11]. Regression parameters were estimated via weighted least squares using the inverse of the sum of the within-study variances as the weight. From the fitted model, we predicted recurrence proportions at given proportions of month 2 culture positivity and treatment duration. Two-tailed 80% confidence intervals (CI) were calculated, as well as corresponding prediction intervals (PI) for a hypothetical trial with 680 subjects per arm. The upper limit of this interval thus identifies the recurrence rate with only a 10% chance of being exceeded in a typical phase 3 trial (i.e., 90% power). The 10% value had been selected as the highest risk of failure likely to be considered acceptable by a pharma sponsor during the planning of such a trial. The prediction error variance on the logit scale was SE^2 + Vs + 1/(N_new q(1 - q)), where q was the model-predicted recurrence proportion at a given level of month 2 culture positive rate and treatment duration, SE was the standard error of the prediction on the logit scale, N_new was the number of subjects per arm of the hypothetical trial, and Vs was the estimated variance associated with the study. The intervals were formed on the logit scale and back-transformed to an ordinary scale. The SAS code for the model is available on request. Results Characteristics of the original (training) dataset as reported in 2013, the validation dataset (from the REMox, OFLOTUB, and RIFAQUIN trials), and the full dataset are described in Table 1. The regimens are diverse with respect to their composition, duration, and region of the world in which they were studied. Relative to the original data set, the regimens in the validation set were shorter, included more subjects, were more likely to contain rifampin, pyrazinamide, and fluoroquinolones, and were more likely to have been conducted in Africa. These differences are expected, as they reflect advances in tuberculosis treatment and clinical trials over a period of nearly 4 decades. Detailed characteristics of the validation dataset from the 3 recent fluoroquinolone trials are described in Table 2. The numbers of patients with recurrences according to stringent and less-than-stringent criteria are shown as they were reported in the REMox and RIFAQUIN trials. The potential impact of recurrences without full microbiologic confirmation was greatest for the control arm of the REMox trial, in which such cases exceeded the number of confirmed recurrences. Such instances in which retreatment of study subjects occurred without full culture confirmation had been prospectively designated as recurrences by the study protocol [6]. Evaluable subjects are those who at end-of-treatment have not met other unsatisfactory endpoints. 
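The interval calculation described in this paragraph can be sketched in the same way. The standard error SE and the between-study variance Vs below are placeholder values (they are not reported in the text), so the printed numbers are illustrative only; the structure follows the stated error-variance formula and the back-transformation from the logit scale, with 1.2816 as the normal quantile for a two-tailed 80% interval.

    import math

    Z_80 = 1.2816  # two-tailed 80% interval -> 90th percentile of the standard normal

    def logit(p):
        return math.log(p / (1.0 - p))

    def inv_logit(x):
        return 1.0 / (1.0 + math.exp(-x))

    def prediction_interval(pred_logit, se, v_study, n_new=680):
        """80% prediction interval for the recurrence proportion in a new trial arm.

        pred_logit: model-predicted recurrence proportion on the logit scale
        se: standard error of the prediction (placeholder value in the example)
        v_study: estimated between-study variance (placeholder value in the example)
        n_new: subjects per arm of the hypothetical trial
        """
        q = inv_logit(pred_logit)                      # predicted proportion on the ordinary scale
        var = se ** 2 + v_study + 1.0 / (n_new * q * (1.0 - q))
        half_width = Z_80 * math.sqrt(var)
        lower, upper = pred_logit - half_width, pred_logit + half_width
        return inv_logit(lower), inv_logit(upper)      # back-transform to proportions

    # Illustrative values only; SE and Vs are assumed, not taken from the paper.
    lo, hi = prediction_interval(pred_logit=logit(0.04), se=0.20, v_study=0.10)
    print(round(lo, 3), round(hi, 3))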
Observed relapse rates are from per-protocol analyses, calculated as the number of subjects meeting the primary definition of recurrence in each trial (REMox: "relapse" + "retreated"; OFLOTUB: unfavorable outcomes at 18 months; RIFAQUIN: "culture confirmed" + "other") divided by the number of evaluable subjects. Relapse was predicted using a model developed without data from the 3 trials in question, whose variables included total treatment duration and month 2 culture status using solid media [5]. The right-most column of Table 2 shows the predicted proportion of patients with recurrence using the model as originally described in 2013. Predictions were based on the proportion culture positive after 2 months of treatment, and the total duration of treatment. Observed and predicted recurrence proportions were highly correlated, with a coefficient (R2) of 0.86 and a normalized mean-squared error (NMSE) of 0.04 for the primary analysis of all recurrences (left panel, Fig 1). A threshold of 10% (-2.2 on a logit scale) had been proposed in the 2013 publication as the highest recurrence rate that would likely be considered acceptable by tuberculosis control programs. This threshold is indicated by the dotted horizontal and vertical lines (Fig 1). Using this criterion, the original model performed well as a test to predict regimen success, correctly identifying all 4 six-month regimens as satisfactory, and 3 of 4 four-month regimens as unsatisfactory (sensitivity = 100%, specificity = 75%, PPV = 80%, and NPV = 100%). In a secondary analysis that included only recurrences with full culture confirmation, the correlation between observed and predicted recurrence proportions nonetheless remained relatively high (R2 = 0.76, NMSE = 0.03). These findings confirm month 2 culture status and treatment duration as predictors of tuberculosis recurrence, and more generally confirm the utility of the mathematical model. The model was then updated to reflect the full dataset of 27 studies, 66 regimens, and 11181 subjects. The original and revised fitted parameters are shown in Table 3. The main effect of the revision was to increase to 10% the predicted recurrence rate in the sole 4-month fluoroquinolone regimen that had been incorrectly predicted to yield acceptable results. Table 4 shows corresponding results for the 80% prediction interval (PI) for a hypothetical trial with 680 patients per arm. Parameters yielding a risk of approximately 10% of a relapse rate >10% are indicated in bold. The target month-2 culture positive rate identified by the revised model for a new 4-month regimen remained 1%. An assessment of the updated model was performed using data from the TB Research Unit (TBRU) treatment shortening trial reported by Johnson et al. in 2009 [9]. In that study, 370 HIV-uninfected adult patients with non-cavitary pulmonary tuberculosis at baseline and negative sputum cultures after 2 months of standard treatment were randomly assigned to receive either 2 or 4 additional months of treatment with isoniazid plus rifampin. The study was halted by its safety monitoring board when a difference in relapse risk emerged between the 2 arms. The TBRU trial had not been included in the original meta-regression model. The updated model parameters were used to predict the relapse rates for the 2 arms in the trial. Calculations were performed using a month 2 culture positive proportion of 0.005 (0.5%, the lowest in the dataset), as values of zero are not permitted on a logit scale. 
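A compact sketch of this validation step is given below. The observed and predicted proportions are made-up stand-ins for the per-arm values of Table 2, and R2 is computed here as the squared correlation of the logit-transformed proportions, which is one common reading of the reported coefficient; the classification step simply checks concordance of observed and predicted status against the 10% threshold.

    import math

    def logit(p):
        return math.log(p / (1.0 - p))

    # Placeholder per-arm proportions; the actual values are those reported in Table 2.
    observed  = [0.02, 0.03, 0.14, 0.17, 0.05, 0.13]
    predicted = [0.03, 0.04, 0.12, 0.15, 0.04, 0.11]

    x = [logit(p) for p in predicted]
    y = [logit(p) for p in observed]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    ss_xy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ss_xx = sum((a - mx) ** 2 for a in x)
    ss_yy = sum((b - my) ** 2 for b in y)
    r_squared = ss_xy ** 2 / (ss_xx * ss_yy)   # squared Pearson correlation on the logit scale

    # Classify each arm against the 10% recurrence threshold (logit(0.10) is about -2.2).
    threshold = 0.10
    agreement = [(o <= threshold) == (p <= threshold) for o, p in zip(observed, predicted)]
    print(round(r_squared, 2), sum(agreement), "of", len(agreement), "arms classified concordantly")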
As indicated in Table 5, observed relapse rates for both arms fell within their respective prediction intervals. Discussion The translation of the results of phase 2 trials into phase 3 trials is a major challenge for the clinical development of shorter TB regimens. Phase 2 trials typically assess sputum culture conversion, whereas phase 3 trials assess relapse-free cure. Accordingly, TB regimen developers are keen to understand the quantitative link between these endpoints. The meta-regression model originally reported in 2013 and updated here provides a framework for direct translation of Phase 2 results to Phase 3 outcomes. Using the threshold for recurrence of 10% proposed in the original publication as the highest TB control programs would consider acceptable, the present study found that the model as reported in 2013 correctly predicted all 4 six-month regimens in recent trials as satisfactory, and 3 of 4 four-month regimens as unsatisfactory, based on month 2 culture status and duration. Predicted and observed recurrence rates were highly correlated (R 2 = 0.86). Updating the fitted model using the full dataset of 11181 patients resulted in only minimal changes to its predictions. It has been argued that the small sample size and resulting wide confidence intervals of typical phase 2 trials limit their ability to predict treatment shortening [6]. However, 5 prior phase 2 trials of 6 gatifloxacin or moxifloxacin-substituted regimens had reported month 2 culture positive proportions of 8-29% [12][13][14][15]. The 2013 model predicted that if administered for only 4 months, all 6 regimens would yield unsatisfactory recurrence rates (10.4-19.4%), consistent with those observed in the 3 phase 3 trials (12.5-17.8%) [5,16]. Thus, in these instances, the reduced sample size of the phase 2 trials did not adversely affect the validity of the predictions. The validation of mathematical models is often conducted by the random allocation of portions of a single dataset for training and validation. Random allocation increases the likelihood that the 2 portions will be comparable, thereby increasing the likelihood that validation will be successful. However, such an approach poses a risk that the model will not perform well in new populations. The validation and training datasets in the present study differ significantly in several key characteristics with the potential to affect the validity of the model. The finding that the original model accurately predicted outcomes despite significant differences in regimen composition, treatment duration, and geographic region indicates the model is robust and generalizable. The findings regarding the TBRU treatment shortening study [9] are particularly informative in this context. Lung destruction and cavity formation in tuberculosis are driven by the host immune response [17]. Although patients with overt immunodeficiency were excluded from the TBRU trial, host immune factors were nonetheless most likely responsible for the non-cavitary disease and early culture conversion that were required for enrollment. Despite having been derived solely from studies of TB chemotherapy trials, the model accurately predicted outcomes in the TBRU trial. This indicates a potential role of the model to inform the design of future studies in which host-directed and antimicrobial therapies are combined. The relapse rate in the experimental arm of the TBRU trial (7.0%) was unacceptable only in the context of the unusually low relapse rate in the control arm (1.6%). 
Had the latter been anticipated, alternative study designs might have been considered. Potential limitations of the present study arise from the comparison of modern and historic data. Formal definitions of intent-to-treat and per-protocol populations were uncommon in the original dataset, whereas they were specified in advance in all three recent trials. Molecular methods to distinguish tuberculosis recurrence due to relapse from that due to reinfection were not previously available. Additional data will be required from future trials if the risk of true relapse is to be modeled. As in the original model, the prediction intervals remain wide, indicating the contribution of other unmeasured predictors of recurrence risk (such as baseline radiographic extent of disease or sputum mycobacterial burden). Due to limitations in the range of regimen durations available in the present data set and the empiric nature of the model, extrapolating predictions of recurrence for regimens shorter than 4 months carries considerable uncertainty. The longest duration studied in the new Phase 3 trials was 6 months; accordingly, the accuracy of the model for regimens longer than 6 months in duration has not yet been prospectively confirmed. The opportunity to do so may arise as treatment-shortening trials in patients with multi-drug resistant tuberculosis are reported. The accuracy of any early biomarker requires that treatment continues as expected after assessment of the biomarker. This consideration necessitated the exclusion from the 2013 analysis of regimens in which rifampin was administered for the first 2 months but not subsequently, as clinical data indicate rifampin must be continued for the entire duration of treatment for its full effect to be evident [18]. This question must be addressed for each future tuberculosis drug on an individual basis. Finally, month 2 culture status remains a relatively weak predictor of outcomes for individual patients. The science of pharmacometrics has grown in the pharmaceutical industry over the past 2 decades precisely to prevent costly failures in phase 3 trials by identifying and maximizing the factors necessary for success [19]. One of the techniques that emerged is the use of meta-doseresponse and meta-regression analysis to inform drug development decision making. The observations of the present study indicate an important role of the meta-regression model to inform the translation of phase 2 culture conversion results to the design and expected outcomes of future phase 3 tuberculosis clinical trials.
v3-fos-license
2020-09-10T10:16:49.202Z
2020-09-09T00:00:00.000
225302275
{ "extfieldsofstudy": [ "Sociology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scholink.org/ojs/index.php/sll/article/download/3239/3273", "pdf_hash": "ce75e7518b62f65da855d8b0c2acd4c44ccc4c83", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42066", "s2fieldsofstudy": [ "Linguistics", "Education" ], "sha1": "67c2bee1a2bbd5e1d540a918941382f5c1e565fb", "year": 2020 }
pes2o/s2orc
A Reflective Account on Human Translation and Interpreting Faced with the Automated Text and Speech Processing Tools This reflection aims to depict the prospective position and role of translation and interpreting in the globalised world. Demographic factors point to a long-lasting multilingualism, which reflects the co-existence of linguistic identities within a variety of settings. From schools to workplaces and communities, different languages are and will continue to be in use. In many countries, there is an increasing wave of using vernacular and migrant languages in education. However, the current global academic discourse on language situations does not sufficiently reflect this new looming reality. The focus of translation and interpreting studies has traditionally been placed on those languages that were perceived as internationally important. One would assume that economic and diplomatic interests have influenced that approach and attitude. With changes affecting the globalised world in relation to the rise of some emerging economies and new resources, it is clear that interlingual communication will be one of the greatest challenges of the coming age. In this regard, a new paradigm in overall language promotion and education must be formulated within which human translation and interpretation continue to be seen as important skills to be generally acquired. Introduction Translation and interpreting are some of the human activities that have been around for several thousand years and many people have devoted their efforts and lives to them. This is especially true for the last fifty years, during which translation studies have been established as a subject worthy of the attention of academia (Bassnett, 2002; Munday, 2008). However, rather than the past and present state of the translation art and its various technical aspects, the focus of this paper is the prospective position and role of translation in the present and future globalised world, an aspect of the challenges that human translation and interpreting are facing vis-à-vis the automated text and speech processing tools. The current trends of international cooperation, national development and international migrations, the intensity of these processes or the lack of it, point to a long-lasting co-existence of hundreds of developed national communities with their own languages of national communication and education. These are demographic factors that are decisive for the existence of linguistic communities and one can fairly say that most of the communities that consist of compact populations of some five or more million people are likely to last for centuries. Moreover, globalisation is, indeed, closely intertwined with localisation, internationalisation with local human and societal development. These co-existing tendencies have their linguistic aspects. While internationalisation supports a limited number of languages in use, the most obvious, but surely not the only one, being English, the inevitable reality is that the human development in local linguistic communities is bound to occur in local languages, the mother tongues or first languages of millions of people. 
Under those circumstances, it is unlikely and even inconceivable that quality international communication and acceptance of works of art and science as well as national education worldwide could take place in any single language, English being no exception. The process of globalisation is not going to bring about a linguistically homogenous global society. It will rather bring about a world with a considerable number of political and cultural power centres, each using their own languages. Method The popularisation of information technology in a globalised world has led to new digital tools, such as electronic corpora, translation databases, memories, etc. Such technology advances have triggered new perceptions of human translation within the productivity-oriented perspective of emerging markets. Many researchers (Bowker, 2002; Jenkins, 2006; Munday, 2008; Hutchins, 2010; Pym, 2010, 2011; Doherty, 2016; Gentzler, 2016) have rightly started to reflect on the impact of automated text and speech processing tools on this human activity of translation and interpreting. Using secondary research data collected via the literature review, this paper analyses the trends and the impact of technology on the management of multilingual encounters. It makes a case for the need for academic and pedagogic interests in the synchronicity between human translation and interpreting activities and the automated text and speech processing tools. Language Skills in the Academic and in the Public Perceptions Cronin (2012) reflects on the relationship between the human and the machine in the labour market. While technological tools have brought changes in translation to facilitate interlingual communication and increase productivity, "these tools also represent significant challenges and uncertainties for the translation profession and the industry" (Doherty, 2016, p. 947). In academia, this debate is needed to reflect not only on the difficult relationship between the human and the machine in the academic and public domain, but also and mainly on the pedagogic challenge of linking languages, translation and interpreting skills with employability in the digital age. At present, academic and public discourse on language situations does not sufficiently reflect this looming reality, especially when compared with the disproportionate emphasis laid upon the danger of extinction of some, maybe even many, small languages on the one side and on the predominant role of English on the other side. This is, arguably, not a matter of purely theoretical speculation. In the context of discussing the future of translation and interpretation it may be a fairly relevant issue. Foreign language teaching and learning is, of course, in the foreground of many national and some international public discourses. Specifically, as we all are aware, through the Council Resolution on a European strategy for multilingualism (Council of European Union, 2008), the European Union recommended that each citizen of its member states, young or old, should be able to use their mother tongue and two other languages. Although the education and other authorities in many member states have already deployed considerable efforts to implement this policy, the practical implementation of this recommendation has not been an easy task. Even though there has been some moderate progress along these lines, the results have been partly satisfactory. 
One should be aware that even as more and more people have a command of two or three languages, it will be in most cases limited to relatively simple everyday communication and passive understanding of texts providing basic information. It is important to have a clear awareness of the difference between such a broadly achievable level of partial individual bilingualism or plurilingualism and a relatively deep, full-scale understanding of two or more languages and the skills required to work with and in those languages. While the lower level of plurilingualism is achievable in a multilingual context through the efforts of language teachers, training for quality communication on the higher level requires the engagement of institutions, including language schools, centres and departments in higher education. This is where another type of specially trained linguist comes onto the scene: researchers, editors, writers, teachers of languages and cultures, as well as translators and interpreters. There are also some special departments of translation studies or translatology. Researchers have documented the pedagogic approaches to translation and interpreting (Bogucki, 2010; Bogucki, 2012; Bielsa & Bassnett, 2009; Williams, 2013) to professionalise the industry. Though specially trained translators and interpreters are usually supposed to get full-time jobs, a considerable part of existing translation and interpretation tasks is being undertaken by the rest of trained language professionals. In fact, whatever the successes of language teaching in general may be, without a clear engagement and the availability of translators and interpreters, full timers and/or part timers, ever-growing international communication will be difficult. As far as translation of written texts is concerned, even in a hypothetical situation of a single powerful source language (e.g., English) from which texts would be fed to the rest of the world, there would be hundreds of target languages, requiring a one-way translation. In fact, such a situation will hardly ever happen. The contrary can be expected. All significant languages will become both source and target languages in a complicated network of two-way translation processes. Therefore, one would suggest that interlingual communication will be one of the greatest challenges of the coming age and, if the world of global markets is going to work, human translation and interpretation will have to be one of the key solutions and one of the key activities in that world. It remains to be seen how these activities will be distributed between people doing translation or interpretation full-time and professionals who devote to these activities only part of their time, though their contribution to the field may be highly sophisticated. Nevertheless, the current distribution of translation tasks suggests that human translation and interpretation will require several types of specialists. Most official international talks and meetings cannot, if they are to be fully effective, make do with machines. There must be trained human translators and interpreters. Major foreign offices have, to this end, their own staff. A large institutional body, the Directorate-General for Translation and Interpreting, does the job for the European Commission, in order to provide quality translations of all important official documents and interpreting in all important meetings. 
Translation and Interpreting in Multilingual Societies Managing linguistic diversity in multilingual societies makes it obvious that translation trainees should acquire the skill of seeking effective dynamic equivalence from dictionaries, be they monolingual or bilingual. Talking about translation dictionaries, preparing good monolingual, bilingual and in some cases even trilingual dictionaries must be seen as a task closely connected to translation, not only by its very nature, but also as necessary support for good translators and interpreters. This task seems not so urgent when seen from the perspective of the widely considered main languages, such as English, French, German, Italian, Spanish or other big languages with some hundred years' history of making sophisticated dictionaries. But the picture is different when seen, for example, from the perspective of other societies. Given the challenges faced in meeting the demand for professional language services in some multilingual contexts, it is clear that translation and interpretation in our globalised world come to work in a variety of environments and circumstances. Some challenges are common to the whole field, some are specific to certain forms of these activities. It is therefore worth stressing two points. First, both translation and interpretation affect the perceived quality of some other person's work or performance as well as the subsequent reaction of the recipient of the conveyed information to what he or she had learned or had been asked for. It is important to be always aware of the responsibility, often self-imposed moral responsibility, for translations and interpretations and always to improve the general knowledge, language competence and communication skills. Second, we should always be aware that, as translators or interpreters, the ones who are able to listen and to talk to both sides in a situation where one participant does not know what the other is actually saying, we are in a position of considerable power. Power to build or power to destroy. Power to help or power to exploit. Power to encourage goodwill or power to sow suspicion and mistrust. Translation and interpretation are jobs that involve powers and pose moral issues. The Shortcomings of the Automated Text and Speech Processing Tools for Translation and Interpreting It is widely recognised that the automated translation and speech processing tools have come a long way over the past years. The semantic accuracy of the text processing is unquestionable. Everyone should recognise the role that Google plays in providing multifaceted translation services for the widely spoken languages. However, while the technology has significantly improved the semantic accuracy in message transfers, it would be erroneous to assert that the machine is good enough to take over the role of professional translation and interpreting from humans, for different reasons. First, translation and interpreting are integral parts of communication modes and processes. What this means in pragmatic terms is that while the intertext elements can be accurately conveyed by machines, inferences and other non-verbal communication features may not be picked up by technology. Technology may manage semantic and syntactic categories but may miss other features of situational and linguistic context. According to Nida (1964, p. 120):
"Language consists of more than the meaning of the symbols and the combination of symbols; it is essentially a code in operation, or, in other words, a code functioning for a specific purpose or purposes. Thus, we must analyse the transmission of a message in terms of dynamic dimension. This dimension is especially important for translation, since the production of equivalent messages is a process, not merely of matching parts of utterances, but also of reproducing the total dynamic character of the communication. Without both elements the results can scarcely be regarded, in any realistic sense, as equivalent". In translation and interpreting, humans are more able to consider the role of context and inferences in communication. Second, there may be cases of untranslatability (Large et al., 2018) or hidden parts of communication where the interpretation of utterances requires the intelligibility of human cognitive capacities. Such hidden parts of communication are more significant in interpreting than in translation. Gentile (1991, p. 30) argues that "The role of the interpreter can be summarised as one where he/she is required to conduct himself or herself in a manner which makes the situation with an interpreter, as far as possible, similar to a situation without an interpreter". Assuming Gentile (op. cit.) is not unreasonably ambitious in the perception of the role of the interpreter, one could argue that technology-driven interpreting has some shortfalls: the subtlety of communicative encounters related to the various levels or dimensions (Hall, 1966) of discourse (intonation, gestures, rhetoric, speech acts, body language, proximity and other aspects of interaction) cannot be picked up by the technology. Third, at least for now, it may be difficult to localise translation and interpreting applications for all lesser taught and lesser spoken languages. In some countries with a huge mosaic of dialects, there may be many sociolinguistic challenges due to the constant changes of language corpora that machines would not pick up. And yet, those countries with rare vernacular languages are globally important as they may form a new bulk of emerging international markets. Human resources are therefore still needed to fulfil the role of professional translators and interpreters. The Pressing Needs for Human Translation and Interpreting Services and Training The current trends in international migrations are likely to continue at the present, if not a larger, scale. Asylum seekers, job seekers, victims of human trafficking, foreigners in various troubled situations, most of them highly vulnerable, will continue to need the assistance and services of trained and understanding professional interpreters. With reference to seeking fairness in service provision in the judiciary, the European Parliament (2010) recommends in article 2.1 on the Right to interpretation that "Member States shall ensure that suspected or accused persons who do not speak or understand the language of the criminal proceedings concerned are provided, without delay, with interpretation during criminal proceedings before investigative and judicial authorities, including during police questioning, all court hearings and any necessary interim hearings." Interpreters play a pivotal role in facilitating communication. 
Given the scope of necessary knowledge they need to command and the respect in society they deserve, these professionals should be, perhaps, called communication facilitators rather than interpreters, a term sometimes coloured with disparaging connotations. On the side of the administration, it is first the judiciary who are in need, for both practical and legal reasons, of translators and interpreters of many languages, for which they organize special networks. In a similar way, private translation agencies mediate between people or companies in need of translation and available persons who can accept ad hoc translation jobs generated at random by many subjects in a certain region. Then there is the classical field of quality translation: literary translation. There is no doubt it will continue to flourish (Bassnett, 2011) and, as national cultures grow and develop, it will encompass more and more languages in, perhaps, rather unbalanced, bilateral exchanges. According to Bassnett (2013), translation is increasingly generating a diverse interdisciplinary activity and developing technologies and new forms of media will reinforce this burgeoning reality. History shows that literary translations are often part of the creative activities of writers and poets, but some people make their living with such translations. As far as academic translation is concerned, it will probably be done partly by interested individuals knowledgeable in their respective field, as well as by individual scholars interested in having their own work published in a foreign language or in having an outstanding foreign study in their field translated into their own language to spread knowledge in their country. Finally, there are professionals who work with interpreters or translated material daily, adapting information from a foreign source or several sources to produce new texts in the target language. This seems to be a common practice in the media. Similar kinds of tasks are performed by specialists in public relations and advertising. Furthermore, there is no doubt that due to the fluidity of social situations and of the market, the demand for human translation and interpretation can hardly ever be covered by specially trained staff employed full time. Anyone who has some knowledge of a foreign or second language can be confronted with a situation when his or her help may be needed, or, to put it the other way around, with an opportunity to be useful and often to earn some money. It may therefore be said that acquisition of some translating and/or interpreting skills would be profitable for anyone who has some knowledge of another language, and people in need of some assistance in a foreign environment will naturally benefit. Translation studies as a special subject can, therefore, be only part of the solution. What is needed is training in translation and interpretation to be embedded into all language studies from a certain level up to graduation. To put it bluntly: translating skills should be added to training in speaking, listening, reading and writing skills. Translating and interpreting skills are, indeed, a combination of either listening and speaking or reading and writing skills. However, translation will add the skill of searching for equivalent meaning and adequate expression. 
This new paradigm in formal language education would bring multiple benefits: It would make the learners more aware that learning languages is useful not only for the learners themselves, but that it also contributes to other people and society. It would make the learners more aware of the intricacies of bilingual communication and help them acquire new practical knowledge and experience. It is, in fact, a very down-to-earth matter, not rocket science, as can be seen from the following short extract from the Reflections on Translation by Bassnett (2011, p. 87): "When you are the person who knows the language and your companion does not, you are inevitably the interpreter. Sometimes this is straightforward, such as when you ask someone for directions and then translate them for the person who is driving. Restaurant menus can be a great source of entertainment, particularly if you have the menu in the local language and he has the menu in execrable English and keeps asking you to explain the translation". Another benefit of introducing translation into foreign language classes would be better coordination of foreign language teaching with mother tongue development. Intralingual translation as opposed to interlingual translation is, no doubt, part of didactical fundamentals. In high school, one of the important tasks that is not always given the necessary attention is to instil awareness of the difference between everyday colloquial style and the syntactic structure of various types of written texts, and to teach students the skill of using both in their right places. Another positive effect of translation training in language classes would inevitably be more attention to all aspects of vocabulary, phraseology and mainly the true meaning of equivalence (Kenny, 1998; Vinay and Darbelnet, 1958). Equivalence, according to Vinay and Darbelnet (op. cit., p. 42), "replicates the same situation as in the original, whilst using completely different wording". In translation, equivalence has always been a central concept and yet remains the most controversial one among theorists and translators. In searching for and/or explaining the validity of equivalence, theorists, linguists and critics such as Vinay and Darbelnet (1958), Jakobson (1959), Nida and Taber (1964, 1969), Catford (1965), Newmark (1981, 1988), Baker (1992), House (1997) and Pym (1998, 2010) have approached the notion from contrastive or comparative linguistics through to situational and effects/impact-oriented (pragmatic) dimensions, yet it still falls short of a universally theorised approach to translation. It is therefore important to assess critically the limitations of theoretical approaches that informed the conceptualisation of equivalence to highlight the shortfalls of pedagogical tools, including the over-reliance on dictionaries in translation classrooms. A reflection on the concept unveils the looseness in nature, definition, scope and applicability of the concept and underlines the implications it may have on teaching translation, especially in the A-B language pair classroom experience. The classroom experience of translation can only be improved by helping learners to understand what seeking equivalence really means in translation. For translation experts, equivalence in meaning transfer should be dynamic. According to Nida and Taber (1969, p. 25), "dynamic equivalence in translation is far more than mere correct communication of information". Culler (1976, pp. 21-22) highlights the complexity of equivalence in translation. 
"If language were simply a nomenclature for a set of universal concepts, it would be easy to translate from one language to another. One would simply replace the French name for a concept with the English name […]. Each language articulates or organizes the world differently. Languages do not simply name existing categories, they articulate their own". This rejection of mechanical transfer of meaning in translation would, perhaps, make the operation and its process more demanding, where bilingualism is not enough. Conclusion To conclude, translation and interpretation competence in professional contexts is one of the great powers of our age, the age of global communication and acculturation. Cronin (2012, p. 5) challenges the 'messianic' trends and future of translation: "Translation, powerfully assisted by the digital toolkit, removes boundaries, abolishes frontiers, and ushers in a brave new world of communicative communion. However, […] such messianic theories not only misrepresent obdurate political realities, but also fail to account in any adequate way for what translators actually do in the present and have done in the past". The question is not whether translation and interpretation will have to be one of the key solutions and activities in multilingual contexts. It is rather the form, mode and shape of the activity in the digital age of a globalised world. This includes thinking about the use of translation and interpretation approaches and methods in preparing linguists and dictionaries. It is also about evaluating critically the impact of the automated text and speech processing tools on human translation and interpreting. If those who are involved in different forms of translating and interpreting, whether as practitioners or trainers, are aware of the importance of what they are doing in providing effective language skill services, they will also command the vital and irreplaceable role and responsibilities of human translators and interpreters in the communicative encounters of the digital age.
v3-fos-license
2023-10-27T15:19:22.040Z
2023-10-24T00:00:00.000
264513169
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2023.1290746/pdf?isPublishedV2=False", "pdf_hash": "e50f9a2797b5af24c95d26b43e7ee2452707c0ff", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42069", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "sha1": "d86c22dc21155c2c10bd0c209087eb3392a6aae6", "year": 2023 }
pes2o/s2orc
The diagnosis of tuberculous meningitis: advancements in new technologies and machine learning algorithms Tuberculous meningitis (TBM) poses a diagnostic challenge, particularly impacting vulnerable populations such as infants and those with untreated HIV. Given the diagnostic intricacies of TBM, there’s a pressing need for rapid and reliable diagnostic tools. This review scrutinizes the efficacy of up-and-coming technologies like machine learning in transforming TBM diagnostics and management. Advanced diagnostic technologies like targeted gene sequencing, real-time polymerase chain reaction (RT-PCR), miRNA assays, and metagenomic next-generation sequencing (mNGS) offer promising avenues for early TBM detection. The capabilities of these technologies are further augmented when paired with mass spectrometry, metabolomics, and proteomics, enriching the pool of disease-specific biomarkers. Machine learning algorithms, adept at sifting through voluminous datasets like medical imaging, genomic profiles, and patient histories, are increasingly revealing nuanced disease pathways, thereby elevating diagnostic accuracy and guiding treatment strategies. While these burgeoning technologies offer hope for more precise TBM diagnosis, hurdles remain in terms of their clinical implementation. Future endeavors should zero in on the validation of these tools through prospective studies, critically evaluating their limitations, and outlining protocols for seamless incorporation into established healthcare frameworks. Through this review, we aim to present an exhaustive snapshot of emerging diagnostic modalities in TBM, the current standing of machine learning in meningitis diagnostics, and the challenges and future prospects of converging these domains. Introduction Tuberculosis, caused by Mycobacterium tuberculosis, represents one of the major global public health issues (World Health Organization [WHO], 2022). Although it primarily infects the lungs, known as pulmonary tuberculosis (PTB), it can also affect extrapulmonary sites such as the central nervous system (Ohene et al., 2019). Specifically, TBM is a lethal form of tuberculosis, especially among infants and untreated HIV-infected individuals (Heemskerk et al., 2011; Seddon et al., 2019). Despite TBM accounting for only 1% of new diagnoses, its consequences are severe, leading to death or disability in nearly half of the patients (Marais et al., 2010). 
Mycobacterium tuberculosis, the causative agent for tuberculosis, is characterized by slow growth and acid resistance, features attributed to its complex cell wall that confer survival advantages within host organisms (Gao et al., 2003). The pervasive transmission of the pathogen, coupled with the intrinsic difficulties associated with bacterial culturing, elevates tuberculosis to a pressing issue in global healthcare. More specifically, these biological complexities present substantial obstacles in the accurate diagnosis and effective management of TBM. While the emergence of drug resistance exacerbates the complexity of treatment regimens, timely diagnosis and intervention can substantially reduce mortality rates (World Health Organization [WHO], 2022). However, early diagnosis of TBM is rendered exceptionally challenging due to the low sensitivity of current diagnostic gold standards and prolonged culture times (Pormohammad et al., 2019). Many patients only seek medical intervention at advanced stages, such as during a mental health crisis or a comatose state (Yan et al., 2020). Typically, TBM diagnosis is predicated on clinical manifestations and empirical treatment rather than concrete diagnostic evidence (Thwaites et al., 2000; Ssebambulidde et al., 2022). Alarmingly, commonly employed clinical markers lack specificity, thereby increasing the risk of misdiagnosing TBM as other types of meningitis, such as viral or bacterial forms (Venkatesan et al., 2013; Wang et al., 2019; Xing et al., 2020; He et al., 2023). Abbreviations: TBM, tuberculous meningitis; RT-PCR, real-time polymerase chain reaction; mNGS, metagenomic next-generation sequencing; PCR, polymerase chain reaction; NAATs, nucleic acid amplification tests; WHO, World Health Organization; AFB, acid-fast bacilli; IGRA, interferon-gamma release assays; 1H-NMR, proton nuclear magnetic resonance; LC-MS, liquid chromatography-mass spectrometry; GC-MS, gas chromatography-mass spectrometry; MRI, magnetic resonance imaging; DTI, diffusion tensor imaging; fMRI, functional magnetic resonance imaging; LR, logistic regression; DT, decision trees; RF, random forests; NN, neural networks; ML, machine learning; CART, classification and regression tree; ANN, artificial neural network; SVM, support vector machines; CSF, cerebrospinal fluid; AUC, area under curve; BM, bacterial meningitis; NBTrees, naïve Bayes trees; CRP, C-reactive protein; VM, viral meningitis; CDSS, clinical decision support system; ROC, receiver operating characteristic; DCA, decision curve analysis; CNN, convolutional neural network; RNN, recurrent neural network; NLP, natural language processing; LDA, linear discriminant analysis; KNN, k-nearest neighbors. 
Even in the face of such challenges, emerging technologies signal a positive shift in the diagnostic paradigms for TBM. The application of targeted gene sequencing is increasingly vital for discerning drug-resistant forms of Mycobacterium tuberculosis, an essential step for tailoring effective treatment regimens (Feuerriegel et al., 2021). Molecular diagnostic methodologies, such as RT-PCR and miRNA assays, provide sensitive and specific tools for the early diagnosis of TBM, allowing for the rapid identification of both tuberculosis and rifampicin resistance (Nhu et al., 2014; Hu et al., 2019). Moreover, mNGS is gaining traction as a diagnostic tool, with its capacity for the unbiased detection of a diverse array of pathogenic organisms, thereby revolutionizing the field of infectious disease diagnosis (Lin et al., 2023). Through the incorporation of high-throughput mass spectrometry, as well as metabolomic and proteomic analyses, the research landscape for TBM is expanding to include the identification of specific biomarkers, such as metabolites and proteins, that may be intricately linked with the pathophysiology of the disease (Mason and Solomons, 2021). As a result, these technological advancements are significantly enhancing diagnostic precision and facilitating unprecedented monitoring of disease progression. Moreover, MRI provides distinct advantages, particularly in the assessment of cerebral structural alterations and inflammatory responses (Dian et al., 2020). Machine learning offers a potential solution for improving the diagnostic processes of TBM and other forms of tuberculosis. Machine learning is a computational framework designed to make predictions or decisions by automatically extracting and decoding complex data patterns (Camacho et al., 2018). Specifically, these algorithms utilize feature vectors and corresponding labels to adjust the internal parameters of the model, leveraging a variety of optimization techniques (Capobianco and Dominietto, 2020). By analyzing large volumes of medical images, genomic data, or clinical records, machine learning models can identify complex patterns of disease that may be imperceptible to human experts (Greener et al., 2022). This technology has potential advantages in terms of accuracy and speed and has been widely applied in the field of infectious diseases to improve the diagnosis, detection of complications, treatment, and prognostic stratification (Peiffer-Smadja et al., 2020). Given its track record in related domains, the integration of machine learning into TBM research can thus be both possible and advantageous. Our detailed feasibility analysis, considering the substantial data volume, complexity, and interdisciplinary nature of TBM research, further underscores the significant potential of machine learning techniques. Moreover, by presenting instances where machine learning has been implemented in TBM studies, we emphasize its practical viability and the tangible benefits it brings to the field. This review aims to provide an overview of new diagnostic technologies for TBM, the current status and progress of machine learning in meningitis diagnosis, and the challenges and future directions when integrating these realms. 
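To make the preceding description concrete, the sketch below trains a simple supervised classifier on labelled feature vectors, in the sense described above. The data are synthetic and the feature set (a handful of hypothetical CSF and clinical variables) is chosen only for illustration; it is not a model from any of the cited studies.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 200

    # Synthetic, hypothetical features: e.g. CSF glucose ratio, CSF protein, illness duration, CSF lymphocyte %.
    X = rng.normal(size=(n, 4))
    # Synthetic labels (1 = TBM, 0 = other meningitis), loosely dependent on the features.
    y = (0.8 * X[:, 1] - 0.6 * X[:, 0] + 0.4 * X[:, 2] + rng.normal(scale=1.0, size=n) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train)                      # parameters adjusted from labelled feature vectors
    probs = clf.predict_proba(X_test)[:, 1]
    print("AUC on held-out data:", round(roc_auc_score(y_test, probs), 2))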
New diagnostic technologies for TBM Diagnosing TBM requires a multi-faceted approach, employing various advanced technologies and methodologies.Over the years, advancements in molecular biology, immunology, biomarker analysis, and imaging technologies have greatly enhanced our ability to detect and study TBM, providing clinicians and researchers with a broader toolkit for diagnosis and assessment.Each technology, from molecular tools like PCR to imaging modalities like MRI, has its unique advantages and challenges.This section delves into these various diagnostic technologies, exploring their capabilities and contributions to the field of TBM research and diagnosis.A comprehensive schematic illustration is provided in Figure 1. Polymerase chain reaction (PCR) Nucleic acid amplification tests (NAATs), such as PCR, hold particular promise for improving TBM diagnosis.The GeneXpert MTB/Rif test is a rapid, automated, cartridge-based nucleic acid amplification test that the World Health Organization (WHO) recommended in 2015 as the initial microbial diagnostic test for TBM (World Health Organization [WHO], 2015).In a recent Cochrane review, the summarized sensitivity of cerebrospinal fluid (CSF) Xpert against CSF culture was 71.1% (95% CI: 62.8-79.1%),and the summarized specificity was 96.9% (95% CI: 95.4-98.0%)(Kohli et al., 2021).Subsequently, GeneXpert MTB/Rif Ultra (Xpert Ultra) was developed (with a larger specimen volume reaching the PCR reaction, additional probes for two other DNA targets, optimized microfluidics, and PCR cycling), featuring enhanced sensitivity and more reliable rifampicin resistance detection (Bahr et al., 2018;Donovan et al., 2020).In 2017, the WHO recommended adopting Xpert Ultra for TBM diagnosis, replacing Xpert as the first-line test (World Health Organization [WHO], 2017). Analysis of miRNA Exosomes, which are microvesicles emanating from viable cells into the circulatory system and typically ranging between 30-100 nanometers in diameter, harbor RNA and protein constituents (Wang et al., 2022).Recently, these extracellular vesicles have ascended as potent instruments for the identification of biomarkers in a plethora of diseases, with miRNA identified as one of the most auspicious candidates.A small selection of studies centered on tuberculosis has illuminated the profiles of exosomal miRNAs.Research conducted by Singh et al. (2015) and Alipoor et al. (2017) divulged a differential spectrum of exosomal miRNAs originating from macrophages infected with mycobacterium tuberculosis, implicating the regulatory and diagnostic capabilities of these miRNAs during the infection.Additional studies have also indicated the feasibility of utilizing exosomal miRNAs for differentiating tuberculosis patients from healthy states (Lv et al., 2017). Further, Hu et al. (2019) discerned six differentially expressed exosomal miRNAs in tuberculosis cases; three of these exhibited substantial discriminatory capacity for TBM and were subject to support vector machine (SVM) modeling.The study also tentatively amalgamated electronic health records (EHR), a digital version of patient medical histories, with miRNA data, proposing the integration of multimodal datasets. 
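The sensitivity and specificity figures quoted above for CSF Xpert follow directly from the standard 2 × 2 confusion-matrix definitions. The short sketch below recomputes such summary metrics from hypothetical counts; the counts are illustrative assumptions chosen only to echo the quoted estimates, not data from the cited Cochrane review.

```python
# Illustrative only: hypothetical counts, not data from the cited review.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute standard diagnostic accuracy metrics from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)   # proportion of reference-positive cases detected
    specificity = tn / (tn + fp)   # proportion of reference-negative cases ruled out
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return {"sensitivity": sensitivity, "specificity": specificity, "ppv": ppv, "npv": npv}

# Hypothetical example: 71 of 100 culture-confirmed TBM cases detected,
# 969 of 1000 culture-negative samples correctly reported negative.
print(diagnostic_metrics(tp=71, fn=29, tn=969, fp=31))
```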
mNGS Over recent years, mNGS has ascended as a potent sequencing-based modality capable of pathogen identification without prior knowledge of the target (Chen et al., 2022). Notably, in contrast to microorganism-specific PCR techniques, mNGS exhibits heightened sensitivity for detecting low-abundance microbial infections in a solitary assay. A seminal pilot investigation involving 12 TBM cases revealed a diagnostic sensitivity of 67%, surpassing traditional methods like acid-fast bacilli (AFB) staining, PCR, and microbial culturing (Wang et al., 2019).

Immunological technologies 2.2.1. Interferon-gamma release assays (IGRA) Interferon-gamma release assays, which are founded on T-cell-based methodologies, serve as diagnostic tools for identifying infections caused by Mycobacterium tuberculosis (Lalvani et al., 2001). Currently, two commercial types of IGRAs are available: T-SPOT.TB (T-SPOT, Oxford Immunotec Ltd., Oxford, UK) and QuantiFERON-TB Gold (QFT, Cellestis Ltd., Carnegie, Australia or Qiagen, Hilden, Germany) (Bastian and Coulter, 2017). These assays employ enzyme-linked immunospot and enzyme-linked immunosorbent assay techniques, respectively (Uplekar et al., 2015). A meta-analysis and systematic review from 2016 determined that the overall sensitivity rates for blood and CSF IGRA were 78 and 77%, accompanied by specificity rates of 61 and 88% (Yu et al., 2016). These data suggest that these assays have only moderate accuracy. Specificity is enhanced when the assays are used on CSF, but large volumes are required (>2 ml) and indeterminate results are common (up to 15%). Consequently, the deployment of IGRA should be augmented by additional diagnostic modalities and comprehensive clinical assessments for a more reliable and precise diagnosis.

Biomarker analysis 2.3.1.
Protein and metabolite analysis The analysis of proteins and metabolites leverages advanced high-throughput technologies to carry out comprehensive and quantitative assessments of low-molecular-weight metabolites in biological specimens.These technologies include proton nuclear magnetic resonance ( 1 H-NMR), which is a technique that uses the magnetic properties of atomic nuclei for structural analysis; liquid chromatography-mass spectrometry (LC-MS), a powerful tool combining the separating capabilities of liquid chromatography with the quantitative and qualitative analysis abilities of mass spectrometry; and gas chromatography-mass spectrometry (GC-MS), which is similar to LC-MS but specializes in the analysis of volatile compounds.These metabolite profiles can serve as molecular characteristics of the disease state, offering valuable information for diagnosis, disease progression, and treatment efficacy (Qiu et al., 2023).In the context of TBM, the expression patterns of specific metabolites in cerebrospinal fluid and blood could be closely related to the onset, development, and prognosis of the disease (van Zyl et al., 2020;Gao et al., 2023).Precise analysis of these metabolites not only helps in enhancing the accuracy of early TBM diagnosis but may also reveal its pathophysiological mechanisms and contributing factors (Cao et al., 2022).For example, CSF lactate and CSF glucose, as the two primary metabolic markers identified from CSF metabolomics studies, have already been instrumental in the diagnosis of TBM.Crucially, for the diagnosis of TBM, the observed concentration ranges are 3.04-17 mmol/L for CSF lactate and 1.6-2.69mmol/L for CSF glucose (Mason and Solomons, 2021).Although metabolomics has huge potential in TBM diagnosis, it also faces the complexity of sample handling, challenges in data analysis, and the need for more clinical validation studies. Imaging technologies 2.4.1. MRI As a high-resolution, non-invasive imaging technique, MRI can provide detailed views of the anatomy and physiological activities of the central nervous system, including the brain, brainstem, and spinal cord.In the diagnosis of TBM, MRI is generally used for detecting inflammation in the meninges, ventricles, and brain tissues, including manifestations such as meningeal thickening, brain edema, ventricular dilation, and localized ischemia or hemorrhage (Pienaar et al., 2009).Advanced MRI technologies like diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI) can further assess microstructural changes in neural conduction and brain function (Mathur et al., 2010;Ding et al., 2013).This information is highly valuable for early diagnosis, disease severity assessment, and monitoring treatment responses. However, the application of MRI also has certain limitations, including high cost, limited availability of equipment, and reliance on patient cooperation.Additionally, the interpretation of MRI results requires specialized skills and experience.Despite this, MRI serves as a powerful diagnostic tool indispensable for accurate TBM diagnosis, treatment planning, and efficacy assessment, and may continue to play a crucial role in future research and clinical practice. Fundamentals of machine learning Applications of machine learning in the realm of biomedical sciences have increasingly captivated scholarly attention.Fundamentally, machine learning methodologies bifurcate into two cardinal classifications: supervised and unsupervised learning (Greener et al., 2022). 
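A minimal sketch of this bifurcation, which the following paragraphs elaborate, is given below: the same feature matrix is passed to a supervised classifier, which requires expert labels, and to an unsupervised clustering algorithm, which does not. The data and model choices are illustrative assumptions only.

```python
# Supervised vs. unsupervised learning on the same synthetic feature matrix.
# Data and model choices are illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=200, n_features=6, random_state=0)

# Supervised: labels (e.g., TBM vs. non-TBM) are required for training.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised training accuracy:", round(clf.score(X, y), 2))

# Unsupervised: latent structure is sought without labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [(clusters == k).sum() for k in (0, 1)])
```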
Supervised learning is predicated on utilizing expertly annotated datasets to train computational models for the extraction of specific, disease-related attributes.Upon rigorous training, such models acquire the capability to discern and categorize pertinent attributes within novel, unlabeled datasets, thereby augmenting clinical diagnostics (LeCun et al., 2015).From a technical perspective, supervised learning can be subdivided into classification and regression algorithms.Classification algorithms strive to categorize data samples, while regression algorithms aim to predict continuous variables.Specific techniques encompass logistic regression (LR), decision trees (DT), random forests (RF), neural networks (NN), and deep learning.Importantly, the majority of machine learning algorithms feature both classification and regression variants, rendering the choice of algorithm contingent upon the task at hand. Conversely, unsupervised learning seeks to unearth latent structural or pattern-related nuances in data without the crutch of pre-labeled datasets.This approach shines in its ability to handle complex, high-dimensional data, such as gene expression profiles (Becht et al., 2019).Through unsupervised learning, one can efficaciously identify co-expression modules in genes, potentially indicative of common biological mechanisms or pathways (Tawa et al., 2021).Noteworthy is the emergent interest in semisupervised learning methods, which amalgamate the virtues of supervised and unsupervised paradigms to bolster classification performance through clustering techniques (Dou et al., 2023). The integration of machine learning into medical practice constitutes a multi-stage, interdisciplinary endeavor.Initially, a dataset that is both large and representative is assembled, often comprising medical records and biomedical imaging.The data quality is crucial for model efficacy.Subsequently, meticulous pre-processing eliminates noise and balances the dataset, if necessary.Upon dataset preparation, algorithmic model development commences.Models, ranging from traditional DT to NN, are trained and fine-tuned on a data subset to optimize predictive capabilities.Following training, the model undergoes rigorous validation using an independent dataset and standardized evaluation metrics such as accuracy, sensitivity, and specificity.After successful validation, the model transitions to clinical deployment, serving as an auxiliary tool for clinicians in diagnosis and treatment planning.Continuous maintenance and periodic revalidation are imperative for sustained efficacy.The overall process is encapsulated in Figure 2, providing a roadmap for medical professionals interested in machine learning applications. Research status of machine learning in meningitis As meningitis research grows, more studies are employing diverse methods, evolving from traditional statistics to machine learning models, to diagnose the disease in hospital settings.These studies, especially those using larger datasets and multiple clinical factors, have shown improved predictive accuracy.Advanced diagnostic technology has also expanded the variety of features included in these models.Table 1 summarizes key studies, reviewing aspects like study population, outcomes, features, validation methods, and model performance. Research based on traditional statistical methods Earlier diagnostic models for TBM relied on relatively small sample sizes and traditionally employed statistical methods such as logistic regression.Wang et al. 
(2021) aimed to assess the clinical features associated with normal CSF protein levels in pediatric TBM. Conducted retrospectively, their study specifically examined two clinical features: vomiting and serum glucose levels. The research indicates that these features are correlated with normal CSF protein levels in children with TBM. This finding is particularly significant for diagnosing and managing pediatric TBM, as CSF protein is often employed as a crucial diagnostic marker for the disease. However, the study has several limitations, including a small sample size and the focus on a single research center, which may restrict its broader applicability. The study also did not explore the relationships between CSF protein and other CSF analytes. Huang et al. (2022) employed ELISA assays to examine the expression of eight proteins in the CSF of 80 patients, which included 22 confirmed cases of TBM, 18 probable cases, and 40 non-TBM cases. They discovered significant differences in the expression of seven proteins between TBM and non-TBM groups. Through unsupervised hierarchical clustering analysis, the researchers further identified a pattern composed of these seven differentially expressed proteins. Logistic regression analyses validated the high efficacy of a combination of three biomarkers (APOE, APOAI, S100A8) in distinguishing TBM from non-TBM cases, with an AUC of 0.916, a sensitivity of 0.95, and a specificity of 0.775. In contrast, Török et al. (2007) focused on regions with high tuberculosis incidence but limited laboratory resources. They selected a sample of 205 HIV-negative meningitis patients with lower CSF glucose levels. Employing LR and CART, the researchers successfully classified patients into TBM and bacterial meningitis (BM) groups. The LR model achieved a diagnostic sensitivity of 0.99 for TBM and 0.815 for BM, whereas the CART method reached diagnostic sensitivities of 0.87 for TBM and 0.865 for BM. These algorithms primarily relied on factors like age, white blood cell counts in blood and CSF, medical history, and the percentage of neutrophils in the CSF for diagnosis. Dendane et al. (2013) used data from 508 patients (comprising 274 cases of TBM and 234 cases of bacterial meningitis) to apply logistic regression models and CART analyses. They successfully identified six variables significantly associated with TBM diagnosis. These variables include female gender, symptom duration exceeding 10 days, focal neurological signs, blood white cell count less than 15 × 10^9/L, serum sodium below 130 mmol/L, and a total CSF white cell count less than 400 × 10^6/L. The sensitivity and specificity of this algorithm ranged between 0.87 and 0.88, and between 0.95 and 0.96, respectively. Thwaites et al. (2002) analyzed data from 251 adult patients in a Vietnamese infectious disease hospital, consisting of 143 TBM cases and 108 bacterial meningitis cases. The researchers pinpointed five features strongly correlated with TBM diagnosis: age, length of illness, white cell count, total CSF white cell count, and the proportion of neutrophils in the CSF. Based on these key features, the team formulated a diagnostic rule and evaluated it through both retrospective and prospective test data methodologies. The diagnostic rule demonstrated 0.97 sensitivity and 0.91 specificity in retrospective testing, and 0.86 sensitivity and 0.79 specificity in prospective testing. Luo et al.
(2021) constructed a diagnostic model based on multiple CSF markers and the TBAg/PHA ratio.Through multivariate logistic regression analysis, the model incorporated four key variables: CSF chloride levels, CSF nucleated cell count, the proportion of lymphocytes in CSF, and the TBAg/PHA ratio.The model achieved a sensitivity of 0.8158 and a specificity of 0.9184, with an accuracy exceeding 0.85 and an area AUC of 0.949. While these diagnostic models are high-performing, they often function as "black boxes," offering limited interpretability for clinicians.In contrast, clinical scoring tools are generally more accessible to healthcare professionals, being based on clear, intuitive variables and scoring systems.To address this, Lu et al. (2021) introduced a comprehensive new diagnostic scoring system, which synthesizes 28 clinical, laboratory, and radiological factors to differentiate TBM from other common central nervous system infections.This system, validated in a prospective cohort, excelled in sensitivity and specificity, achieving 0.858 and 0.877, respectively. Similarly, Handryastuti et al. ( 2023) developed a simplified scoring system for diagnosing pediatric TBM based on a retrospective analysis and multivariable prediction model.Although the system has lower sensitivity at the established threshold, reaching 0.471, its high specificity of 0.951 signifies a robust accuracy in clinical diagnosis. Research based on machine learning methods In light of increasing dataset sizes, several studies have been published over the past few years that employ machine learning methods with larger data set requirements, such as SVM and tree-based models.Guzman et al. (2022) scrutinized a substantial dataset consisting of 26,228 patients, characterized by 19 primary variables related to symptoms and initial CSF laboratory results.The central aim of their research was to identify the most effective classifier for meningitis etiology.Toward this goal, they explored a myriad of feature selection, dataset sampling, and classification model techniques, based predominantly on ensemble methods and DT.Following experimentation with 27 classification models, 19 of which employed ensemble methods, they found that the ensemble methods yielded the most optimal classifiers.Specifically, the union of Bagging and naïve bayes trees (NBTrees) resulted in peak performance metrics, boasting an F-measure of 0.89, along with an accuracy, recall, and AUC of 0.95 each.Their study also illustrated that, compared to using DT alone, the incorporation of ensemble methods substantially enhanced the model's diagnostic efficacy. Šeho et al. ( 2022) deployed a dataset of 1,000 instances, where 800 were meningitis patients and 200 were healthy individuals.Factors used in diagnosing meningitis included body temperature, protein levels, CSF-to-blood glucose ratio, CSF white cell counts, lactate, glucose, erythrocyte sedimentation rate, and C-reactive protein (CRP).They developed a classifier that utilized ANN for instance categorization.When tested on the employed dataset, the proposed system exhibited a classification accuracy of 0.9669.Jeong et al. 
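Most of the models above follow the same pattern: a handful of clinical and CSF variables are fed to a logistic regression, and performance is reported as sensitivity, specificity, and AUC. The sketch below reproduces that pattern on synthetic data; the feature names echo those reported in the cited studies, but the data, coefficients, and resulting metrics are illustrative assumptions, not a re-implementation of any published model.

```python
# Illustrative logistic-regression diagnostic model on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.normal(40, 15, n),      # age (years)
    rng.gamma(3.0, 4.0, n),     # illness duration (days)
    rng.normal(12, 5, n),       # blood white cell count (10^9/L)
    rng.gamma(2.0, 150.0, n),   # CSF white cell count (10^6/L)
    rng.uniform(0, 1, n),       # CSF neutrophil proportion
])
# Synthetic label (1 = TBM, 0 = BM), loosely tied to duration and neutrophil proportion.
logit = 0.15 * X[:, 1] - 4.0 * X[:, 4]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)

prob = model.predict_proba(X_te)[:, 1]
pred = (prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("AUC:", roc_auc_score(y_te, prob))
```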
(2021) applied a range of machine learning models, including Naive Bayes, LR, RF, SVM, and ANN, to differentiate between TBM and viral meningitis (VM).The study cohort consisted of 203 patients, incorporating data from 143 confirmed cases of VM and 60 cases of confirmed or probable TBM.Among all tested machine learning techniques, ANNs using imperative estimators yielded the highest AUC registering at 0.85 with a 95% confidence interval ranging from 0.79 to 0.89.Lélis et al. (2017) compiled a dataset of 22,602 potential meningitis cases in Brazil.Utilizing input data from nine symptom categories, alongside other patient information like age, gender, and location, they applied seven classification techniques and validated their models using 10-fold cross-validation.Their results indicated that the deployed methods could appropriately diagnose pneumococcal meningitis. Further extending the scope, Lelis et al. (2020) developed an integrated clinical decision support system (CDSS) aimed at assisting physicians in making early stage meningitis diagnoses based on observable symptoms.Built on explainable, treebased machine learning models and knowledge engineering techniques, this system integrated three intelligent components.The system was constructed and assessed on a Brazilian dataset encompassing 26,228 meningitis patients and demonstrated exemplary classification performance, particularly for the more severe type of meningitis, termed as MD-type, with an accuracy rate as high as 94.3%.Further experimentation corroborated that the system could accurately diagnose 88% of meningitis cases in a real-world database.This research holds particular importance for regions lacking financial resources and advanced medical facilities, as it offers an accurate, economical, and actionable methodology for early stage diagnosis. In recent years, novel multimodal data types such as metagenomic sequencing of CSF, exosomal miRNAs, and electronic health records have enriched the data resources available for machine learning models.This data diversity enables models to learn from multiple perspectives, thereby augmenting their diagnostic and predictive capabilities.Ramachandran et al. (2022) sought to enhance the diagnostic accuracy of TBM and its mimic diseases through an integrated machine learning classifier that combines metagenomic sequencing of CSF and host gene expression.Conducted in the sub-Saharan African region-a zone where TBM is prevalent yet challenging to diagnose-the study employed methods including the extraction of total nucleic acids from CSF samples followed by RNA and DNA sequencing, which were then analyzed using a machine learning classifier.Overall, the study found that this combined approach demonstrated high sensitivity and specificity in diagnosing TBM.In the test set, the diagnostic accuracy was 0.88, with sensitivity and specificity rates at 88.9 and 88%, respectively.Notably, the study also showed that this approach performs reliably in resourceconstrained settings.Hu et al. 
(2019) employed a combination of exosomal miRNA and electronic health data to conduct diagnostic studies on tuberculosis among 351 individuals, which included both active tuberculosis patients and a control group. The authors utilized an ExoQuick Kit and thrombin D to isolate exosomes from plasma samples, which were subsequently validated through nanoparticle tracking analysis, transmission electron microscopy, and protein blotting. In the exploratory phase, 102 exosomal miRNAs exhibited differential expression between tuberculosis patients and the healthy control group. Ten of these differentially expressed exosomal miRNAs were selected for further analysis. This study not only introduced new biomarkers but also optimized existing diagnostic methods, attaining an AUC of 0.97 (95% CI: 0.80-0.99) in diagnosing TBM.

With the emergence of imaging genomics, an increasing number of investigators are harnessing the power of machine learning algorithms in conjunction with a diverse array of radiological features derived from MRI scans to enhance the diagnostic precision of brain tuberculosis. Aftab et al. (2021) utilized patient data from Aga Khan University Hospital in Pakistan, encompassing not only demographic information but also radiological features derived from MRI. To address class imbalance during data preprocessing, the study employed various oversampling techniques for the minority class, such as SMOTE, SMOTE-TOMEK, SMOTE-ENN, and ADASYN. Two primary classification models, LR and RF, were tested. The LR model in combination with SMOTE + TOMEK techniques yielded the highest diagnostic performance, achieving an accuracy of 90.9%, an AUC of 95.4%, and an F1 score of 92.8%. These findings underscore the significant accuracy and effectiveness of this machine learning approach in the diagnosis of brain tuberculosis, particularly in emergency or clinical settings where rapid and accurate diagnosis is imperative. Ma et al. (2022) developed an automated, non-invasive diagnostic tool for early detection of basal cistern changes in TBM using deep learning and radiomics methods on a multicenter MRI dataset. The authors initially employed an nnU-Net-based model for the automatic segmentation of the basal cistern region, achieving an average Dice coefficient of 0.727. Subsequently, radiomics features were extracted from FLAIR and T2W images and subjected to independent sample t-tests and Pearson correlation coefficient analyses for feature selection. Finally, radiomics signatures were constructed using SVM and LR, and their performance was evaluated using receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA). Test results indicated that the AUCs for SVM classifiers based on T2W and FLAIR features were 0.751 and 0.676, respectively, signifying good discriminative ability. This integrated method demonstrated considerable potential in the early identification of subtle basal cistern changes in TBM, promising improvements in the early diagnosis and treatment of the disease.
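For the class-imbalance handling described above, oversampling must be confined to the training folds so that synthetic samples do not leak into the evaluation. The sketch below illustrates this with SMOTE inside a cross-validated pipeline on synthetic data, assuming the imbalanced-learn package is available; it is not the cited authors' implementation.

```python
# Sketch of class-imbalance handling with SMOTE + logistic regression.
# Synthetic data; illustrative pipeline only.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Imbalanced toy dataset: roughly 10% positive class.
X, y = make_classification(n_samples=600, n_features=12, weights=[0.9, 0.1], random_state=0)

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),         # oversample the minority class (train folds only)
    ("clf", LogisticRegression(max_iter=1000)),
])

# Cross-validated AUC; SMOTE is re-fitted inside each training fold to avoid leakage.
scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
print("mean AUC:", scores.mean().round(3))
```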
Discussion As illustrated in Figure 3, logistic regression emerges as the most favored statistical approach, extensively adopted in the domain of TBM research. In contrast, machine learning algorithms such as SVM and tree models are the second most frequently utilized methodologies for model construction. Notably, a marked uptick in research articles using machine learning for model creation has been observed since 2022, while studies relying on traditional statistical approaches like logistic regression were predominantly concentrated prior to 2021. This trend may signal the growing acceptance and proliferation of machine learning in the field.

FIGURE 3 The frequency of modeling techniques employed for diagnosing TBM.

Given its data-centric nature, machine learning offers significant potential in the diagnosis of TBM, especially in the last decade, where advancements in novel diagnostic technologies have supplied a variety of data types. As shown in Table 2, these include genomic data, transcriptomic data, fluorescent markers, metabolomic data, proteomic data, radiographic images, and standard clinical data and EHR. Such diverse data types not only enhance diagnostic accuracy but also serve as rich resources for training and validating machine learning models. Specifically, the amalgamation of these data types with advanced machine learning and deep learning techniques could pave the way for innovative diagnostic pathways. Some potential applications are outlined below.

Molecular biology data Deep learning algorithms such as convolutional neural network (CNN) and recurrent neural network (RNN) have the capacity to process high-dimensional and complex data structures. This enables them to accurately identify genes and transcription factors associated with TBM from extensive genomic or transcriptomic data. Compared to traditional machine learning and statistical methods, these deep learning algorithms are better equipped to handle data of higher dimensions and complexity, making them more suitable for identifying key elements within intricate biological networks and pathways (Zou et al., 2019; Kleppe et al., 2021; Wang et al., 2023). For instance, deep learning has already been successfully employed in cancer diagnostics to analyze transcriptomic data for the identification of specific gene expression patterns related to certain types of cancer (Coudray et al., 2018; He et al., 2020; Meng et al., 2023). This approach could be similarly applied to the study of TBM, wherein deep learning-based analyses of gene expression data could potentially reveal TBM-specific gene expression patterns.

Immunological data The utilization of immunological data offers significant prospects in TBM research. High-dimensional immunological datasets can be adeptly navigated using unsupervised and semi-supervised algorithms, such as k-means clustering and autoencoders (Tanner et al., 2013). These computational techniques not only enable the recognition of TBM-associated immune response configurations but also disclose potentially determinative immunological markers and subpopulations of cells that are impactful in the course and responsiveness of treatments. Through the application of high-throughput methodologies like immunohistochemistry and flow cytometry, scholars have effectively pinpointed specific subpopulations of immune cells that correlate with prognostic and therapeutic outcomes (Zhang Z.
et al., 2019;Kim et al., 2021;Ye et al., 2022).In a parallel vein, the aggregate analysis of TBM immunological data through similar unsupervised and semi-supervised machine learning algorithms may unearth key immunological metrics pertinent to the dynamics of disease and treatment.Such advanced analytical processes could contribute to the refinement of diagnostic frameworks and could potentially catalyze the formulation of more individualized treatment strategies. Biomarker data The value of metabolomics in the diagnosis of TBM has drawn the attention of researchers.Reduced glucose concentrations and elevated levels of proteins in the CSF have long been the two biochemical indicators used to diagnose TBM (Zhang et al., 2018;Zhang P. et al., 2019).However, a lot of information hidden in high dimensional data is often overlooked.To extract valuable information from these high-dimensional and variable biomarker data sets, machine learning algorithms like RF and SVM could be a solution (Reel et al., 2021;Sen et al., 2021;Sun et al., 2022).These algorithms exhibit exceptional feature selection and classification capabilities, enabling the identification of key metabolites and protein markers correlated with the diagnosis and prognosis of TBM. Radiological imaging The past few years have seen research initiatives that employ radiomics to manually extract features related to TBM (Aftab et al., 2021;Ma et al., 2022).However, the advent of CNN offers an automated, end-to-end analytical approach that achieves accuracy levels that meet or even surpass those of human experts (Hosny et al., 2018).For example, Jang et al. (2018) exemplified that CNN could autonomously identify glioblastoma features in MRI scans with a remarkable 0.87 AUC, significantly outperforming traditional image analysis methods.Therefore, employing this endto-end approach to the analysis of radiological images may be helpful in the diagnosis of TBM.Not only can complex biomarkers be automatically identified and analyzed, but more personalized treatment options are also possible.Such a fusion is expected to dramatically improve the accuracy of early diagnosis and disease surveillance, thereby optimizing treatment outcomes and improving patient quality of life. Routine clinical data and EHR In the realm of healthcare informatics, routine clinical data along with EHRs play an indispensable role.These expansive, multifaceted datasets typically include a wide range of information from clinical narratives and laboratory outcomes to imaging data and individual patient histories.While traditional methods of mining and analyzing these datasets have been laborious and error-prone, requiring manual scrutiny and specialized expertise, recent breakthroughs in natural language processing (NLP) and time-series analytics have revolutionized the process (Esteva et al., 2019).For example, Kehl et al. (2019) demonstrated the successful application of NLP techniques like text classification and entity recognition for the automatic extraction of oncologic outcomes from EHRs.Moreover, the application of time-series analysis has yielded valuable insights for evidence-based clinical decisions, especially in monitoring patient health and forecasting disease trajectories.When applied to TBM (Malo et al., 2021), this method enables the real-time monitoring of patient vitals, pharmacological responses, and disease advancement, thereby facilitating more individualized and prompt healthcare interventions. 
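As a concrete illustration of the feature-selection role described above for tree-based models, the sketch below ranks the columns of a synthetic biomarker matrix by random-forest importance. The matrix, labels, and resulting ranking are illustrative assumptions, not measured metabolite or protein data.

```python
# Illustrative tree-based feature ranking for candidate biomarkers.
# Synthetic data; column indices are placeholders, not validated markers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_samples, n_features = 120, 20
X = rng.normal(size=(n_samples, n_features))   # e.g., normalized metabolite intensities
# Synthetic outcome driven mainly by columns 3 and 7 plus noise.
y = (X[:, 3] - 0.8 * X[:, 7] + rng.normal(scale=0.5, size=n_samples) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=1).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
print("top candidate features:", ranking[:5])  # indices of the most informative columns
```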
Limitations of the application of machine learning in TBM Although the past decade has seen increasing research on machine learning-based diagnostic tools for TBM, the field remains relatively nascent.Applying machine learning to the diagnosis of TBM presents several challenges, encompassing data collection, model training, diagnostic accuracy, and practical clinical implementation.The majority of predictive models have been validated using retrospective data, but only a few have been trained and tested on prospective data sets.Another crucial issue is the clinical heterogeneity of TBM, which manifests in symptoms ranging from mild headaches to severe neurological damage (Garg, 2010).This complexity complicates the task of machine learning models in capturing a comprehensive array of variables and clinical presentations that influence disease progression.Consequently, high-quality, and reliable machine learning models necessitate validation through large-scale, multi-center data that span different healthcare systems.Such validation not only enhances model accuracy but also deepens our understanding of the mechanisms of TBM and various prognostic factors.Additionally, data heterogeneity arising from different regions, hospitals, or equipment can introduce variations that may affect the model's generalizability.However, the scarcity of such multicenter data, combined with the inherent data heterogeneity, currently limits the model's generalizability and precision.Lastly, it's worth noting that many emerging machine learning-based TBM diagnostic tools and algorithms are patented or commercialized, resulting in opacity regarding their specific algorithms and datasets.This lack of transparency hampers comprehensive and rigorous evaluations, further impeding advancements in diagnostic accuracy research. 
Conclusion This review provides a thorough overview of state-of-the-art diagnostic technologies and machine learning methodologies for the diagnosis of TBM. Our principal aim is to encapsulate extant research and scrutinize its application in the domain of TBM. For the purpose of this review, we bifurcate the research landscape into two primary categories: machine learning approaches and classical statistical methods. In the domain of machine learning, the literature is further segmented into categories of supervised and unsupervised learning, featuring key algorithms like SVM, linear discriminant analysis (LDA), k-nearest neighbors (KNN), ANN, boosting algorithms, RF, and k-means clustering. Conversely, traditional statistical methods mainly involve linear regression, logistic regression, chi-square testing, and CART. Overall, SVM is identified as the most widely applied machine learning tool for TBM diagnosis, whereas logistic regression remains the statistical method of choice. In recent years, machine learning has showcased enormous potential for elevating the diagnosis and treatment of TBM, outclassing traditional methods by excelling in the analysis of intricate biomedical data, including genomic sequencing, metabolomics, and proteomics. Future studies could explore integrating various machine learning algorithms into robust ensemble models and empirically validate these against human benchmarks in controlled trials. Additionally, assessing the integration of traditional diagnostics like MRI with emerging machine learning techniques offers a promising avenue for a more holistic diagnostic approach. Such advancements could significantly improve both TBM diagnosis and treatment, ultimately enhancing patient outcomes.

FIGURE 1 Comprehensive schematic illustration of current diagnostic technologies for TBM. The figure categorizes the diagnostic modalities into four primary technological approaches: PCR, miRNA, mNGS, IGRA, protein and metabolite analysis, and MRI. Each technology is represented with corresponding icons or graphical elements to delineate its unique contribution to the diagnosis of TBM.

FIGURE 2 General process for applying machine learning in medical diagnosis and treatment. It outlines crucial steps such as data collection, data processing, machine learning (ML) development, validation, and eventual clinical deployment. The figure aims to offer a roadmap for clinicians and researchers interested in integrating machine learning into medical practice.

TABLE 1 Summary of clinical models for meningitis diagnosis.

TABLE 2 Summary of available data type from new diagnostic technologies.
v3-fos-license
2022-09-08T06:16:38.323Z
2022-09-06T00:00:00.000
252109394
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41467-022-32941-6.pdf", "pdf_hash": "2e31a85193422d65828b9a6cdecf3420f6cb7c04", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42074", "s2fieldsofstudy": [ "Biology", "Engineering" ], "sha1": "a481d6d39d79f71c9b7e826e5de6fafe34cd7a4c", "year": 2022 }
pes2o/s2orc
Signal processing and generation of bioactive nitric oxide in a model prototissue The design and construction of synthetic prototissues from integrated assemblies of artificial protocells is an important challenge for synthetic biology and bioengineering. Here we spatially segregate chemically communicating populations of enzyme-decorated phospholipid-enveloped polymer/DNA coacervate protocells in hydrogel modules to construct a tubular prototissue-like vessel capable of modulating the output of bioactive nitric oxide (NO). By decorating the protocells with glucose oxidase, horseradish peroxidase or catalase and arranging different modules concentrically, a glucose/hydroxyurea dual input leads to logic-gate signal processing under reaction-diffusion conditions, which results in a distinct NO output in the internal lumen of the model prototissue. The NO output is exploited to inhibit platelet activation and blood clot formation in samples of plasma and whole blood located in the internal channel of the device, thereby demonstrating proof-of-concept use of the prototissue-like vessel for anticoagulation applications. Our results highlight opportunities for the development of spatially organized synthetic prototissue modules from assemblages of artificial protocells and provide a step towards the organization of biochemical processes in integrated micro-compartmentalized media, micro-reactor technology and soft functional materials. chemomechanical deformation and enzyme-mediated metabolism. A key strategy is to employ protocell-protocell contact-dependent interactions as the basis for prototissue assembly. For example, vesicle prototissues with spheroidal or sheet-like morphologies and controllable adhesion strengths and compaction have been prepared using membrane-mediated interactions involving streptavidin-biotin recognition or single-strand DNA complementarity 34 . Alternatively, externally applied acoustic or magnetic fields have been used to physically manipulate giant unilamellar lipid vesicles into localized prototissue architectures with coded configurations and microscale patterns [35][36][37] . In other studies, tissue-like enzymatically active spheroids that exhibit coordinated contractibility and mechanochemical transduction have been fabricated by the programmed chemical ligation of mixed populations of bio-orthogonally activated proteinosomes [38][39][40] . Furthermore, 3D printing has been used to prepare functional synthetic tissues based on contact-dependent interactions in networks of hemi-fused lipid-stabilized water-in-oil emulsion droplets 41,42 . The networks exhibit controlled mass transport and mutual communication via membrane pores to generate chemical and electrochemical circuitry and light-induced gene expression [43][44][45] . Based in part on the mimicking of extracellular matrix/living cell interactions, synthetic prototissues have also been constructed by immobilizing populations of artificial protocells in soft viscoelastic aqueous media such as polysaccharide hydrogels. For example, millimetre-sized emulsion droplets have been stabilized by entrapment in an alginate matrix 46,47 , and enzyme-active proteinosomes immobilized in helical hydrogel filaments to implement signalinduced movement and protocell-mediated micro-actuation 48 . 
Membrane-less coacervate droplets have been captured in single hydrogels by self-immobilization 49 or incarcerated in different hydrogel modules to produce a linear modular micro-reactor capable of a photocatalytic/peroxidation cascade reaction under nonequilibrium flow conditions 50 . Here, we describe a prototissue model based on the hydrogel immobilization and spatial segregation of catalytically active coacervate vesicles. The coacervate vesicles are employed as an integrated cytomimetic model 28 and re-designed as a stable biochemical reaction platform for the construction of a model prototissue with potential real-world applications. In general, coacervate vesicles offer novel opportunities in protocell science as in principle enzymes can be attached to the lipid outer surface to generate molecularly crowded micro-compartmentalized objects capable of membrane-mediated catalysis. Moreover, by attaching enzymes specifically to the coacervate membrane rather than using diffusive free biomolecules, prototissue-like materials with internal organization and functional stability can be generated using hydrogel immobilization. Using this strategy, here we assemble a three-layer tubular prototissue-like vessel capable of modulating the output of bioactive nitric oxide (NO). For this, we use a concentric arrangement of agarose hydrogels to spatially immobilize and segregate chemically communicating populations of enzyme-decorated phospholipid-enveloped polymer/DNA coacervate vesicles. Communication between the different coacervate vesicles is achieved by decorating their outer membranes with hydrophobically modified glucose oxidase (GOx), horseradish peroxidase (HRP) or catalase (CAT), which are attached prior to loading the cell-like constructs into the hydrogels. The three different protocell-loaded hydrogels are assembled as contiguous concentric modules to produce a tubular prototissue-like vessel that can implement logic-gate processing of a dual glucose/hydroxyurea input under reaction-diffusion conditions. We show that a distinct nitric oxide (NO) output can be produced in the internal lumen of the model prototissue by directional protocell-mediated processing using a specific spatial sequence of hydrogel modules. Under these conditions, a diffusive H 2 O 2 signal generated by GOx/glucose activity in an outer hydrogel module activates HRP-decorated protocells in an adjacent middle layer. This results in a downstream reaction with hydroxyurea to produce NO, which diffuses into the central channel through an inner module comprising CAT-decorated coacervate vesicles. The latter depletes any residual and potentially toxic H 2 O 2 to produce a distinct NO output in the central channel of the prototissue vessel. We use the NO output as a bioactive agent to inhibit platelet activation and blood clot formation in samples of plasma and whole blood located in the internal channel, thereby demonstrating proof-ofconcept use of the model prototissue vessel for anticoagulation applications. Design and construction of a model prototissue tubular vessel To fabricate a model prototissue vessel capable of integrated chemical processing, we designed and constructed cell-like constructs in the form of enzyme-decorated membranized coacervate micro-droplets ( Fig. 1a, b). 
To achieve this, positively charged (zeta potential, +13.2 ± 5.7 mV) membrane-less coacervate micro-droplets were prepared by associative liquid-liquid phase separation in aqueous mixtures of poly (diallyldimethyl ammonium chloride) (PDDA) and double-stranded DNA ( Supplementary Fig. 1), and then coated in a continuous phospholipid membrane to produce stable dispersions of dioleoyl phosphatidylcholine (DOPC)-enveloped coacervate vesicles (DOPC-CVs). Bright field and fluorescence microscopy images showed discrete spherical droplets with a mean size of 20.0 ± 6.3 μm that revealed a bright red fluorescent outer shell when stained with the lipophilic lipid bilayer probe Dil (Fig. 1c, d). The DOPC-CVs were stable with respect to coalescence and could be closely packed and stacked into stable multi-layer arrangements, providing the possibility of high density packing in three-dimensional space (Fig. 1e, f). The presence of a distinct lipid-stained surface membrane was consistent with transmission electron microscopy (TEM) images of minimally sized DOPC-CVs extracted from the supernatant after centrifugation. TEM images of single DOPC-CVs showed an electron dense homogeneous coacervate matrix surrounded by a continuous membrane with an estimated thickness of 6.4 ± 0.5 nm, comparable to the thickness of a phospholipid bilayer ( Supplementary Fig. 2). In contrast, no surface layer was observed by TEM for the membrane-free PDDA/DNA coacervate droplets ( Supplementary Fig. 2). Attachment of GOx, HRP or CAT to the outer membrane surface of the DOPC-CVs was achieved by increasing the lipophilicity of the enzymes by conjugation with palmitic acid (PA; hexadecanoic acid). Matrix-assisted laser desorption/ionization-time-of-flight mass spectrometry (MALDI-TOF MS) confirmed that the resulting PA-GOx, PA-HRP and PA-CAT nanoconjugates consisted on average of 1 to 6 hexadecanoic chains for each enzyme molecule ( Supplementary Fig. 3). Addition of hydrophobically modified PA-GOx labelled with fluorescein-5-isothiocyanate (FITC) to a suspension of DOPC-CVs followed by treatment with a lipid stain produced protocells with red and green fluorescence co-localized specifically at the membrane ( Fig. 1g-l). In contrast, native GOx (no PA-modification) did not bind to the phospholipid membrane of the DOPC-CVs ( Supplementary Fig. 4). Similar observations were made for the binding of FITC-PA-HRP or FITC-PA-CAT to the DOPC-CVs (Supplementary Figs. 5,6). In all cases, minimal levels of sequestered enzymes were observed in the coacervate core, consistent with a molecular weight cut-off for the lipidcoated coacervate droplets of approximately 6 kDa 28 . Molecular binding curves indicated that more than 82% of the enzymes were associated with the DOPC-CVs (Supplementary Figs. 7,8). Having established a procedure for attaching lipophilic enzymes to the outer surface of the DOPC-CVs (enzyme-CVs), we sought to fabricate a series of protocell-based hydrogel modules as the basis for constructing a model prototissue. Each module consisted of a hydrogel that was loaded with a single population of immobilized enzyme-CVs. In each case, capture of approximately 2 × 10 6 protocells per mL was achieved by slow cooling of an enzyme-CV/agarose aqueous suspension from 40°C to room temperature. Optical and fluorescence microscopy images, 3D reconstructions and scanning electron micrographs of the loaded hydrogels showed colonies of intact protocells that were immobilized and spatially distributed throughout the agarose matrix ( Fig. 
2a-d). No significant changes in the shape or size of the enzyme-CVs were observed after immobilization. Rheometric analysis under 1% strain indicated that the storage modulus (G′) of the hydrogels increased with increasing agarose concentration to give protocell-loaded modules with relatively high stiffness (Supplementary Fig. 9). The G′ values were essentially unchanged by immobilization of the DOPC-CVs within the hydrogel matrix, indicating that the hydrogels were not destabilized when loaded with the protocells. Based on the above observations, we assembled a self-supporting tubular prototissue-like vessel from a three-layer concentric arrangement of protocell-loaded hydrogel modules (Fig. 2e). For this, we implemented a stepwise construction process involving three glass rod templates of decreasing diameters that were sequentially placed within a glass tube to generate tubular compartments that were infilled with a hot aqueous agarose suspension containing single populations of the enzyme-CVs, followed by in situ hydrogelation (Supplementary Fig. 10). The prototissue-like vessel was 100 and 15 mm in length and outer diameter, respectively, and contained a 6 mm-wide central channel (Fig. 2f, g). Each hydrogel layer was approximately 1.5 mm in thickness and bounded by distinct interfaces (Fig. 2h). A series of leakage experiments showed that release of the PA-conjugated enzymes from coacervate vesicle-containing hydrogels was strongly inhibited when compared with analogous experiments undertaken with hydrogels containing free enzymes and no coacervate vesicles (Supplementary Fig. 11), confirming efficient attachment of the hydrophobized enzymes to the outer membrane of the immobilized coacervate vesicles. Processing of nitric oxide generation in a model prototissue Using the above procedures, we prepared a three-layered tubular model prototissue capable of generating the vasoactive and anticoagulation agent, nitric oxide (NO), via a spatially confined protocell-mediated enzyme process operating under a substrate reaction-diffusion gradient. The concentrically arranged hydrogel modules contained single populations of immobilized PA-GOx-, PA-HRP- or PA-CAT-decorated DOPC-CVs arranged as the outer, middle, and inner layers, respectively, of the vessel (Fig. 3a). Assays undertaken on each module indicated that the PA-enzymes remained active when attached to the DOPC-CVs and immobilized in the agarose hydrogels (Supplementary Fig. 12). Given that HRP catalyses the conversion of hydroxyurea into NO in the presence of H2O2 51, we reasoned that in situ production of H2O2 within the model prototissue could be used as a diffusive signal for generating NO within the central channel of the tubular vessel. To achieve this, glucose and hydroxyurea were added specifically to the exterior side of the model prototissue and then allowed to diffuse inwardly through the concentrically arranged hydrogels such that GOx-CV-mediated H2O2 production in the outer hydrogel module was subsequently processed downstream by HRP-CVs in the middle layer to produce NO. Under certain conditions, no NO output was detected in the lumen (Supplementary Fig. 15). This was attributed to complete degradation of the H2O2 intermediate due to the considerably higher rate constant of catalase-mediated H2O2 decomposition compared with GOx-mediated H2O2 production and HRP-mediated peroxidation.
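The relay logic described above can be caricatured with a well-mixed kinetic model that ignores diffusion and treats each enzyme step as a single lumped rate. In the sketch below, the rate constants k1-k3 and the concentration units are arbitrary placeholders chosen only to illustrate how NO accumulation requires both the glucose and hydroxyurea inputs; it is not a fit to the experimental data. A fuller treatment would couple these rates to diffusion across the three hydrogel layers, which is what gives rise to the spatial gradients described next.

```python
# Minimal well-mixed kinetic sketch of the GOx/HRP/CAT cascade (diffusion ignored).
# Rate constants and concentrations are arbitrary placeholders, not fitted values.
from scipy.integrate import solve_ivp

k1, k2, k3 = 0.02, 0.05, 0.5   # lumped GOx, HRP, CAT rates (arbitrary units)

def cascade(t, y):
    glucose, hu, h2o2, no = y
    r1 = k1 * glucose          # GOx: glucose -> gluconic acid + H2O2
    r2 = k2 * h2o2 * hu        # HRP: H2O2 + hydroxyurea (Hu) -> NO
    r3 = k3 * h2o2             # CAT: H2O2 -> H2O + O2 (lumped)
    return [-r1, -r2, r1 - r2 - r3, r2]

def no_at(t_end, glucose0, hu0):
    sol = solve_ivp(cascade, (0.0, t_end), [glucose0, hu0, 0.0, 0.0], max_step=1.0)
    return sol.y[3, -1]        # NO at the final time point

# AND-gate behaviour: NO accumulates only when both substrates are supplied.
for glu, hu in [(50, 6), (50, 0), (0, 6), (0, 0)]:
    print(f"glucose={glu}, hydroxyurea={hu} -> NO ~ {no_at(150, glu, hu):.2f}")
```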
The spatiotemporal distributions of H2O2 and NO at different positions along the reaction-diffusion gradient of the prototissue-like vessel were monitored using selective microelectrodes, while colorimetric assays were used to determine H2O2 and NO concentrations in the central lumen (Fig. 3d, e). An extension in reaction time led to a gradual increase in the production of H2O2 and NO over 150 min (Fig. 3f, g). Concentrations of H2O2 were highest in the outer hydrogel module and progressively decreased in the middle and inner layers as the signalling molecule became depleted by downstream reactions with HRP/hydroxyurea and catalase, respectively (Fig. 3f). Minimal levels of H2O2 were detected in the aqueous solution trapped in the central channel, indicating that any excess H2O2 entering the inner layer was decomposed by the immobilized CAT-CVs. In contrast, after 150 min, the highest levels of NO were observed near to the boundary between the outer and middle hydrogel modules (Fig. 3g). NO was also detected in significant amounts at the exterior of the model prototissue vessel as well as in the inner layer and lumen, indicating nondirectional diffusion of the output after formation in the middle layer. The initial rate of NO output in the central channel 30 minutes after the addition of glucose (50 mM) and hydroxyurea (6 mM) to the external medium was 2.4 nM min−1, resulting in the detection of NO concentrations in the interior lumen of up to 1.0 μM after 150 min. The production of NO was dependent on the glucose and hydroxyurea concentrations for a constant protocell number density, with increasing levels of NO formed when the substrate concentrations in the external medium were increased (Supplementary Fig. 16). Given that the H2O2 signal downstream of the HRP-CV-containing middle layer could be effectively removed by catalase activity in the inner layer, we exploited the model prototissue vessel as a dual-input/single-output device for the chemically mediated signal processing of NO production in the central lumen. We investigated four different input combinations using a fixed GOx-CV/HRP-CV/CAT-CV outer/middle/inner layer spatial sequence. The absence of at least one of the substrates resulted in a failure to generate NO (Fig. 3h). Only the dual input of glucose and hydroxyurea gave a distinct output of NO in the internal channel, corresponding to an AND logic gate operation (Supplementary Fig. 17). In contrast, dual inputs in the absence of catalase-decorated CVs in the inner module produced a mixture of NO and H2O2 in the lumen, while only a H2O2 output was observed when the HRP- and CAT-decorated CVs were absent from the middle and inner layers, respectively (Supplementary Fig. 18). No products were detected for glucose/hydroxyurea dual inputs in the absence of GOx-CVs or HRP-CVs in the outer and middle layers, respectively, or when GOx and HRP, or GOx and catalase, were absent (Supplementary Fig. 18). We investigated whether changes in the spatial sequence of the enzyme-CV-containing hydrogel modules gave rise to different signal outputs in the prototissue lumen under dual input conditions (Fig. 4). Switching the outer GOx-CV and middle HRP-CV modules while retaining the CAT-CV inner layer also gave a distinct NO output in the central channel, indicating upstream transfer of H2O2 into the outer HRP-CV layer and complete H2O2 decomposition in the inner module. Under these conditions, isotropic diffusion of NO from the outer layer ultimately gave rise to the detection of NO in the interior lumen. In contrast, switching the HRP-CV and CAT-CV modules whilst retaining the outer GOx-CV layer produced no NO or H2O2 outputs in the lumen due to complete depletion of the H2O2 signal during diffusion through the middle CAT-CV layer. Spatial sequences in which the CAT-CV module was positioned as the outer layer produced a mixture of NO and H2O2 in the interior channel, while H2O2 alone was detected for an outer/middle/inner layer sequence corresponding to HRP-CV/CAT-CV/GOx-CV, respectively.

Fig. 3 caption (panels b-h, as recovered): b H2O2 and NO were monitored at different positions (i-iv) using microelectrodes. c Reaction diagram for three-enzyme processing in the model prototissue vessel under reaction-diffusion conditions. H2O2 generation in the outer layer serves as a diffusive signal for HRP/Hu-mediated NO production in the middle layer. Excess H2O2 is removed by the CAT-CVs located in the inner layer adjacent to the lumen. Oxygen is consumed and produced in the outer and inner layer, respectively. k1, k2, and k3 are the enzyme reaction rates for the GOx-CVs, HRP-CVs and CAT-CVs, respectively, immobilized within the different hydrogel modules. Reaction schemes for the colorimetric determinations of H2O2 (HRP-mediated ABTS oxidation, d) and NO (Griess reagent, e). Spatiotemporal distribution of H2O2 (f) and NO (g) at different positions ((i)-(iv), see (b)) in the prototissue-like vessel. Reaction conditions: Glu (50 mM); Hu (6 mM); GOx (GOx-CV outer layer, 0.2 mg mL−1); HRP (HRP-CV middle layer, 0.2 mg mL−1); CAT (CAT-CV inner layer, 0.1 mg mL−1). Data are presented as mean ± s.d. (n = 3 independent experiments). h Logic gate generation of NO in a three-layer tubular enzyme-CV modular prototissue-like vessel demonstrating AND gate processing. Only case I with a dual substrate input gives rise to a distinct NO output in the interior lumen. A H2O2 output in the lumen is rendered null by catalase activity in the inner layer. Representations of the corresponding absorption spectra (ABTS assay of H2O2, blue; Griess assay of NO, purple) and colour changes for NO detection (Griess assay, 540 nm, purple coloration) are also shown. A threshold absorption value above 0.04 at 540 nm (Griess reagent, purple coloration with NO) was used as a verifiable (0,1) signal output. Reaction concentrations as given in (f, g).

Anti-coagulation activity in NO-producing model prototissues Given the known anti-coagulant properties of NO at low dose concentrations (Supplementary Fig. 19) 52,53, we exploited the model prototissues as tubular micro-reactors capable of inhibiting fibrin formation and thrombin-mediated platelet activation. To demonstrate bioactivity, we loaded solutions of fresh rabbit plasma or whole blood into the central cavity of the model prototissue and then added glucose and hydroxyurea to the external environment to generate a sustainable flux of NO into the lumen (Fig. 5a). Aliquots of plasma were removed after different time intervals up to 150 min, and NO-induced anti-coagulation monitored by addition of an activator (Ca2+ ions) to initiate a potential clotting cascade, which was monitored by light scattering. Compared to control plasma solutions that were not exposed to NO and which coagulated after addition of aqueous CaCl2, generation of NO within the model prototissue progressively increased the half-life for plasma coagulation (t1/2).
For example, operating the prototissue-like vessel for 150 min produced NO concentrations in the lumen of ca. 1 μM. This resulted in a decrease in the scattering profile from approximately 116 (control without NO) to 4 cps/min, 30 min after addition of Ca 2+ ions (Fig. 5b). Under these conditions, t 1/2 was extended from 32 (control) to 280 min (Fig. 5c), indicating that the model prototissue vessel was capable of effective NO-mediated anti-coagulation activity. No anticoagulation was observed in control experiments undertaken in the absence of the enzyme substrates (Supplementary Fig. 20). In related studies, we used thromboelastography (TEG) to determine the inhibition kinetics of clot formation in whole blood samples placed within the internal channel of the model prototissue. Fresh citrated blood was placed in the vessel lumen and exposed to the NO flux for 150 min, followed by addition of Ca 2+ ions and transfer of the blood to the TEG hemostatic assay (Fig. 5d). The viscoelastic properties of whole blood clot formation under low shear stress were quantified by measurement of the reaction time required for initial fibrin formation (R), the time required to achieve a certain level of clot strength (K time; at an amplitude of 20 mm), the rate of clot formation (angle α), and the ultimate mechanical strength of the clot formed (maximum amplitude, MA). Values for R and K increased in the NO-generating prototissue-like vessel and were accompanied by decreases in α and MA values (Fig. 5e), indicating efficient anticoagulation activity. In particular, the extension in K from 2.0 min (control) to 3.4 min in the vessel and corresponding decrease in α from 60 to 48°confirmed the strong NO-mediated inhibition of fibrin-mediated blood clotting. Discussion In this Article, the immobilization and spatial distribution of protocell colonies within concentrically arranged tubular hydrogel modules is used to construct a functional synthetic prototissue model capable of endogenous signal processing and generation of nitric oxide under reaction-diffusion conditions. The model prototissue is based on a type of protocell construct involving the surface decoration of phospholipidenveloped PDDA/DNA coacervate droplets with lipophilically modified enzymes that collectively process a dual glucose/hydroxyurea input, which under certain conditions results in a distinct NO output in the central channel of the prototissue-like vessel. Transposing or removing any of the hydrogel modules results in different outputs, indicating that the functionality of the model prototissue depends on the spatial organization of the processing domains when operating under enzyme substrate gradients. As a specific outcome of prototissue vessel functionality, we show that a sustainable flux of bioactive NO leads to the inhibition of blood coagulation in samples located in the internal lumen of the device, demonstrating proof-of-concept use as an on-site anticoagulation device, provided that the residence times of samples within the inner channel are commensurate with the rate of blood clotting. Moreover, as the mediation of physiological processes by NO is dosedependent and toxic at high gas concentrations, it should be possible to use the model prototissue vessel to optimize the flux of NO to address the particular requirements of specific biomedical antithrombotic applications. 
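To make the module-sequence logic summarized above easier to follow, the sketch below encodes it as a small boolean model in Python. It is a conceptual toy, not the authors' reaction-diffusion model: the propagation rules, function names and the treatment of diffusion are simplifying assumptions chosen only to reproduce the qualitative outcomes reported in the text.

```python
from itertools import product

def lumen_output(seq, glucose, hydroxyurea):
    """Qualitative output in the central lumen for a given module sequence.

    seq lists the enzyme carried by the coacervate vesicles in each hydrogel
    module from the outer to the inner layer, e.g. ("GOx", "HRP", "CAT").
    Simplifying assumptions: GOx + glucose makes H2O2 in its layer; H2O2
    reaches another layer (or the lumen) only if no CAT module lies between
    source and target; HRP + H2O2 + hydroxyurea makes NO; NO reaches the
    lumen from any layer (non-directional diffusion).
    """
    def h2o2_reaches(i, j):
        lo, hi = sorted((i, j))
        return all(seq[k] != "CAT" for k in range(lo + 1, hi))

    gox_layers = [i for i, e in enumerate(seq) if e == "GOx" and glucose]
    no_formed = hydroxyurea and any(
        seq[j] == "HRP" and h2o2_reaches(i, j)
        for i in gox_layers for j in range(len(seq))
    )
    h2o2_in_lumen = any(
        all(seq[k] != "CAT" for k in range(i + 1, len(seq))) for i in gox_layers
    )
    return {name for name, flag in (("NO", no_formed), ("H2O2", h2o2_in_lumen)) if flag}

# AND-gate truth table for the fixed GOx/HRP/CAT (outer/middle/inner) sequence
for glu, hu in product([False, True], repeat=2):
    out = lumen_output(("GOx", "HRP", "CAT"), glu, hu)
    print(f"glucose={glu!s:<5} hydroxyurea={hu!s:<5} -> {out or 'no output'}")

# Dual-input outputs for the transposed module sequences discussed in the text
for seq in (("GOx", "HRP", "CAT"), ("HRP", "GOx", "CAT"), ("GOx", "CAT", "HRP"),
            ("CAT", "GOx", "HRP"), ("HRP", "CAT", "GOx")):
    print("/".join(seq), "->", lumen_output(seq, True, True) or "no output")
```

Running the sketch reproduces the AND-gate truth table for the GOx/HRP/CAT arrangement and the qualitative lumen outputs reported for the transposed module sequences.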
More broadly, our results highlight opportunities to develop spatially segregated synthetic prototissue models from modules comprising immobilized assemblages of artificial protocells and provide a step towards the organization of biochemical processes in integrated micro-compartmentalized media, micro-reactors, and soft functional materials. In general, decoration of the phospholipid-enveloped PDDA/DNA coacervate droplets with lipophilically modified enzymes provides a strategy for locally concentrating biomolecules across the surface of a protocell membrane to generate functional micro-compartmentalized objects. By attaching the enzymes to the coacervate surface rather than employing a homogeneous dispersion of biomolecules within a coacervate-free hydrogel, soft materials with increased levels of internal organization can be generated. For example, the surfaceadsorbed enzymes can be coupled with other chemistries localized within the interior of the coacervate droplets to produce embedded signalling and network systems that are spatially distributed as reaction hot-spots throughout the hydrogel. Taken together, our results illustrate an approach to the construction of model prototissues based on the concept of protocell/ hydrogel modularization. Modularity is a major factor in the adaptability and resilience of living systems and is potentially a key requirement for implementing prototissue designs across diverse biomimetic applications. For example, given that enzyme processing in the tubular prototissue-like vessels can be regulated by employing different spatial sequences of the enzyme-CV-containing hydrogel modules, it seems feasible that the prototissue models could be individually designed for integration into biomedical and pharmaceutical applications involving glucose-mediated metabolic pathways. Moreover, designing synthetic prototissues with bespoke shape, size, spatial configuration and micro-anatomy could provide a route to artificial constructs that complement the use of organoids as therapeutic models in regenerative medicine 54 , or serve as micro-reactor-based implants for diagnostic, therapeutic, and theranostic applications. Hydrophobic modification of enzymes Hydrophobic modification of enzymes with palmitic acid ester was performed as follows 55 . 0.4 mL of palmitic acid N-hydroxysuccinimide ester (PA-NHS) solution in dry dimethyl sulfoxide (four portions of 0.1 mL every 2 h) was added to 6 mL of enzyme solution (4 mg mL −1 GOx, HRP or CAT) in 100 mM phosphate buffer in the presence of 1% (w/w) deoxycholate (pH 8.8). The molar ratio of enzymes to the ester in the reaction mixture was 1:40. The reaction mixture was kept for 8 h in a thermostat at 30°C while stirring. The mixture was then filtered through a 0.45 mm filter, and dialyzed with 50 mM phosphate buffer (pH 8.0) for 3 times. After dialysis, the solution containing the hydrophobized enzyme (PA-GOx, PA-HRP or PA-CAT) was filtered through a 0.2 µm filter, and then lyophilized and stored in powder form. The products were characterized by matrix-assisted laser desorption/ionization time of flight mass spectrometry (MALDI-TOF MS), which was performed on a 4700 Proteomics analyzer (Applied Biosystems) using 2,5-dihydroxybenzoic acid as matrix substance. The sample aqueous solution concentration was 5.0 mg mL −1 . FITC labelling of the PA-enzymes was conducted during the PA-NHS conjugation step. 
1 mg mL −1 of FITC in dimethyl sulfoxide together with PA-NHS was added to the enzyme solution to give a final concentration of 100 µg FITC/1 mg enzyme. The sample tube was wrapped in foil and mixed at room temperature for 2 h. Unreacted FITC was removed by ultrafiltration using Millipore Amicon-Ultra-15 tubes (MWCO = 10 kD) to give the FITC-labelled PA-enzymes which were collected and stored in 50 mM PBS buffer at −20°C for further use. FITC labelling of the native enzymes was also undertaken but without use of PA-NHS. Decoration of DOPC-CVs with hydrophobically modified enzymes Attachment of the palmitic acid (hexadecanoic acid)-modified enzymes to the outer membrane surface of the DOPC-CVs was performed as follows. 1 mL suspension of DOPC-CVs (8.0 mg mL −1 , pH = 8.0) was separately added to 5 mg mL −1 PA-enzyme solution (100 µL for PA-GOx, 100 µL for PA-HRP, and 50 µL for PA-CAT). After incubation for 60 min and subsequent centrifugation for 5 min at 20 × g, enzyme-decorated DOPC-CVs (PA-GOx-CVs, PA-HRP-CVs, and PA-CAT-CVs) were obtained. Due to incomplete adsorption and a limit on the loading efficiency, the final enzyme loading values obtained from adsorption experiments were around 0.4 mg mL −1 GOx for PA-GOx-CVs, 0.4 mg mL −1 HRP for PA-HRP-CVs, and 0.2 mg mL −1 CAT for PA-CAT-CVs. Preparation of protocell-based hydrogel modules Hydrogel modules were prepared by immobilization of enzymedecorated coacervate vesicles in agarose. Agarose powder (low gelling temperature) was poured into a conical flask along with Milli-Q water and heated in a microwave oven for 1-3 min until the solution was gently boiling and the agarose was completely dissolved to produce a 2 wt% solution. The agarose solution was then cooled to ca. 40°C in a water bath. 1 mL of the warm agarose solution was then immediately added to 1 mL of an aqueous suspension of enzyme-decorated DOPC-CVs (vesicle concentration, 8.0 mg mL −1 ; enzyme concentrations 0.4 mg mL −1 (GOx), 0.4 mg mL −1 (HRP), 0.2 mg mL −1 (CAT)) in a glass tube (inner diameter 15 mm) with a removable plastic stopper at one end, well-mixed, and then left at 4°C in a fridge for approximately 30 min during which hydrogelation (final concentration, 1 wt% agarose) occurred along with entrapment of the coacervate vesicles. The resulting hydrogel was obtained from the glass tube as a selfsupporting material by removing the plastic stopper and gently displacing the hydrogel using a glass rod. Agarose hydrogels were prepared using concentrations of 0. Design and construction of a tubular prototissue vessel A model prototissue vessel was assembled from three concentric hydrogel modules containing GOx-CVs, HRP-CVs, or CAT-CVs arranged respectively from the exterior to interior of the vessel using a gel perfusion method involving four steps: (1) A 15 mm-diameter glass tube glass tube (length, 100 mm) was sealed at one end by a plastic stopper and stood upright in an agarose hydrogel matrix. (2) A 12 mm-diameter glass rod (length, 90 mm) was placed in the centre of the glass tube and 10 mL of a hot (40°C) aqueous agarose suspension containing GOx-CVs was added to fill the empty space between the glass tube and inserted glass rod, and then cooled to 4°C for 30 min in a fridge to induce hydrogelation. The 12 mm-diameter glass rod was then carefully removed to produce a tubular outer layer comprising a hydrogel/GOx-CV module. 
(3) A 9 mm-diameter rod (length, 90 mm) was then placed into the centre of the glass tube, and 7 mL of a hot (40°C) aqueous agarose suspension containing HRP-CVs added to fill the empty space between the outer GOx-CV-containing hydrogel layer and inserted glass rod, and then cooled to 4°C for 30 min in a fridge to induce hydrogelation. The 9 mm-diameter glass rod was then carefully removed to produce a tubular middle layer consisting of a hydrogel/HRP-CV module. (4) A 6 mm-diameter glass rod (length, 90 mm) was placed in the centre of the glass tube and 2 mL of a hot (40°C) aqueous agarose suspension containing CAT-CVs added to fill the empty space between the middle HRP-CV -containing hydrogel layer and inserted glass rod, and then cooled to 4°C for 30 min in a fridge to induce hydrogelation. The 6 mm-diameter glass rod was then carefully removed to produce a tubular inner layer consisting of a hydrogel/CAT-CV module. Finally, the resulting tubular three-layer prototissue vessel was obtained from the glass tube as a self-supporting material by removing the plastic stopper and gently removing the 15 mm-diameter glass tube. Samples were stored in the fridge prior to use. The prototissue vessels were stained by adding Congo red (0.5 mM), Direct yellow (0.5 mM) and Brilliant blue (0.5 mM) to the outer, middle and inner layers during the assembly process. Prototissue model-mediated processing of nitric oxide generation A three-layer tubular prototissue vessel (length, 50 mm) comprising outer, middle and inner hydrogel layers loaded with GOx-CVs (0.2 mg mL −1 ), HRP-CVs (0.2 mg mL −1 ) or CAT-CVs (0.1 mg mL −1 ), respectively was used for the controlled production of NO in the internal lumen. The prototissue vessel was sealed at the bottom end with enzyme-free agarose hydrogel and then stood upright in a 20 mL beaker. DPBS buffer (Dulbecco's phosphate-buffered saline buffer) containing 50 mM glucose and 6 mM hydroxyurea was added as an external solution. The volume added was such that the open end of the vertical prototissue vessel was not immersed, preventing direct access of the enzyme substrates to the central lumen. The central channel of the prototissue vessel was then filled with DPBS buffer to generate a substrate diffusion gradient across the tubular micro-reactor. After reaction at different time intervals, the H 2 O 2 and NO produced in the different hydrogel layers of the prototissue vessel were recorded using H 2 O 2 and NO microelectrode sensors (World Precision Instruments). In situ microelectrode-based monitoring of H 2 O 2 and NO production Real-time monitoring of H 2 O 2 and NO within the different layers of the prototissue vessel were recorded using microelectrode sensors (electrophysiology tissue microelectrodes, World Precision Instruments). H 2 O 2 measurements were performed using a Four-Channel Free Radical Analyzer (WPI, TBR 4100) equipped with a peroxide hydrogen sensor (ISO-HPO-100). The microelectrode has a limit of detection (LOD) of 1 nM-1 mM, response time of < 5 s (90%), drift of <1.0 pA/min, and sensitivity of 1 pA/nM. The H 2 O 2 -reactive part of the needle electrode had a proximal length of 1-5 mm with a diameter of 100 µm. The electrode was equilibrated and polarized according to recommendations from the manufacturing company (WPI), and calibrated using 3% (w/w) H 2 O 2 solution in 100 mM PBS buffer. NO measurements were performed using a Four-Channel Free Radical Analyzer (WPI, TBR 4100) equipped with nitric oxide sensor (ISO-NOPF200). 
The microelectrode has a LOD of 0.2 nM, and Ag-AgCl was used as a reference electrode. The NO-reactive part of the needle had a proximal length of 1-5 mm with a diameter of 200 µm, which was coated in a hydrophobic gas permeable membrane. The instrument calibration was performed at 37°C. The electrode was equilibrated and polarized according to recommendations from the manufacturing company (WPI). For calibration, S-nitroso-N-acetyl-d,l-penicillamine (SNAP) was used in combination with the catalyst cuprous (I) chloride (CuCl) to generate a known amount of NO in solution. Colorimetric assays of H 2 O 2 and NO production in the prototissue lumen H 2 O 2 concentration in the lumen of the prototissue vessel was determined by using a H 2 O 2 -ABTS colorimetric assay. The solution collected from the lumen was ultra-filtrated to remove any enzymes that may have leached into the internal channel. Before H 2 O 2 determination, NO was removed through addition of Carboxy-PTIO, a nitric oxide radical scavenger. Samples (200 µL) were then placed in the wells of a 96-well clear microplate, and HRP (final concentration 50 ng mL −1 ) and ABTS (final concentration 2 mM) then added into each well. After 20 min, the optical density (OD) was measured at λ = 418 nm and compared to a standard curve. The amount of H 2 O 2 produced was estimated by monitoring the absorption of the ABTS diradical at λ = 418 nm (ε 418 = 36,800 M −1 cm −1 ). The NO concentration in the lumen was determined by the Griess reagent colorimetric assay. The Griess reagent was prepared by combining equal amounts of N-1-(naphthylethyl)ethylenediamine and sulfanilamide. Samples (100 µL) were placed in the wells of a 96-well clear microplate and Griess reagent (100 µL) added to each well. After 15 min, the optical density (OD) was measured at λ = 540 nm and compared to a standard curve. Anticoagulation activity in NO-producing tubular prototissue vessels All procedures with the animal experiments in this study were performed in accordance with the guidelines of the National Institutes of Health for the Care and Use of Laboratory Animals and were approved by the Ethics Committee on Animal Care of Hunan University (No. SYXK (Xiang) 2018-0006). Blood was collected through venipuncture in New Zealand white rabbits using a 21-gauge butterfly needle and placed in vacutainer tubes containing sodium citrate. The citrated whole blood was centrifuged at 50 × g for 15 min, and rabbit plasma collected from the supernatant. The plasma was added into the central channel of a tubular prototissue vessel comprising outer, middle, and inner hydrogel modules containing immobilized populations of GOx-CVs, HRP-CVs, and CAT-CVs, respectively. NO production in the central channel was initiated by the addition of glucose and hydroxyurea to the exterior side of the micro-reactor. After incubation for 90 min, 300 µL of plasma were removed from the lumen and 30 µL of 0.2 M CaCl 2 quickly added to initiate a clotting cascade. The coagulation kinetics were monitored by light scattering using a Hitachi FL-7000 spectrofluorometer with excitation and emission wavelengths were set at 580 nm. As a control experiment, 30 µL of 0.2 M CaCl 2 was directly added to a volume of 300 µL plasma without being exposed to NO. The initial rate of clotting was calculated from the slope of the scattering profile 30 min after the addition of the activator (Ca 2+ ions). 
The halflife (t 1/2 ) was obtained through fitting of a single exponential equation based on a one-phase exponential decay function with a time constant parameter by using Originlab. NO in the interior lumen was simultaneously determined by Griess colorimetric assay. Anti-coagulation in citrated whole blood samples was also evaluated by thrombelastography (TEG) performed at 37°C using a Haemostasis Analyzer (TEG-5000, Haemonetics, Braintree, MA) according to the manufacturer's guidelines. Aliquots (1 mL) of the blood were removed from the central lumen of a NO-producing prototissue vessel after incubation for 90 min, mixed with kaolin (Cat. No. 6300, Haemonetics, Braintree, MA), and then 360 µL of the kaolin-activated plasma transferred to the pre-warmed TEG cups. 30 µL of 0.2 M CaCl 2 was quickly added to initiate the clotting cascade 56 . As a control experiment, 30 µL of 0.2 M CaCl 2 was directly added to 360 µL of kaolin-activated plasma without being exposed to NO. The TEG analyzer was calibrated and evaluated daily by running quality control samples before experimental sample analysis. The following TEG parameters were determined to investigate the anticoagulation activity of NO generated from the prototissue vessels: (i) the reaction time R, which is the latency from start of the test to initial fibrin formation (coagulation phase I) reflecting formation of thromboplastin, thrombin, and factor Xa; (ii) K, the time from appearance of the first fibrin filaments to clot formation (coagulation phase II); (iii) MA, maximum amplitude of the clot (mm) indicating increase or decrease of fibrinogen concentration and corresponding to coagulation phase III; (iv) S, syneresis constant describing the entire period of fibrin coagulation; (v) MA, time to maximum amplitude of the clot; and (vi) J, the total index of coagulation calculated as J = 160 × tanα, where α is the angle between the horizontal line in the middle of the TEG tracing and the line tangential to the developing body of this tracing at 20 mm amplitude 57 . Statistical analyses Statistical analyses were performed using Origin 6.5. Student's t-tests were used for comparison between two groups according to data distribution. Values were normally distributed, and the variance was similar between compared groups. P < 0.05 was considered statistically significant. All the data were repeated at least three times on three different samples. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/ licenses/by/4.0/.
v3-fos-license
2023-09-09T05:16:02.891Z
2023-09-08T00:00:00.000
261612563
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://sjtrem.biomedcentral.com/counter/pdf/10.1186/s13049-023-01112-x", "pdf_hash": "d39fb00ee1a7e36bc75e6d4a5a4d6fe114ec9426", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42077", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "sha1": "f6f26a77ca233907d281d1487007e7ca8f5e8cc4", "year": 2023 }
pes2o/s2orc
Prehospital transportation of severe penetrating trauma victims in Sweden during the past decade: a police business? Introduction Sweden is facing a surge of gun violence that mandates optimized prehospital transport approaches, and a survey of current practice is fundamental for such optimization. Management of severe, penetrating trauma is time sensitive, and there may be a survival benefit in limiting prehospital interventions. An important aspect is unregulated transportation by police or private vehicles to the hospital, which may decrease time but may also be associated with adverse outcomes. It is not known whether transport of patients with penetrating trauma occurs outside the emergency medical services (EMS) in Sweden and whether it affects outcome. Method This was a retrospective, descriptive nationwide study of all patients with penetrating trauma and injury severity scores (ISSs) ≥ 15 registered in the Swedish national trauma registry (SweTrau) between June 13, 2011, and December 31, 2019. We hypothesized that transport by police and private vehicles occurred and that it affected mortality. Result A total of 657 patients were included. EMS transported 612 patients (93.2%), police 10 patients (1.5%), and private vehicles 27 patients (4.1%). Gunshot wounds (GSWs) were more common in police transport, 80% (n = 8), compared with private vehicles, 59% (n = 16), and EMS, 32% (n = 198). The Glasgow coma scale score (GCS) in the emergency department (ED) was lower for patients transported by police, 11.5 (interquartile range [IQR] 3, 15), in relation to EMS, 15 (IQR 14, 15) and private vehicles 15 (IQR 12.5, 15). The 30-day mortality for EMS was 30% (n = 184), 50% (n = 5) for police transport, and 22% (n = 6) for private vehicles. Transport by private vehicle, odds ratio (OR) 0.65, (confidence interval [CI] 0.24, 1.55, p = 0.4) and police OR 2.28 (CI 0.63, 8.3, p = 0.2) were not associated with increased mortality in relation to EMS. Conclusion Non-EMS transports did occur, however with a low incidence and did not affect mortality. GSWs were more common in police transport, and victims had lower GCS scorescores when arriving at the ED, which warrants further investigations of the operational management of shooting victims in Sweden. 
Introduction Gun homicide increased in Sweden during the past decade, in contrast to a decreasing incidence in the majority of European countries [1].Transitioning from a society with relatively few shootings, first responders on scene now face a new reality.It is known that the outcome of severe penetrating trauma is time sensitive [2][3][4].There may be a survival benefit from limiting prehospital interventions in severely injured patients in favor of urgent transport in urban settings, although optimal prehospital management is debated [5][6][7][8][9].Some of the deaths may be preventable depending on prehospital care, and "scoop and run" may be preferable to "stay and play" in select cases [10,11].The ultimate "scoop and run" approach is immediate transport of the victim to the hospital by police.The police are often first on scene, which may decrease the time from injury to arrival at definitive care.[12][13][14][15] However, these transports provide only a bare minimum of medical intervention.The first organized police transport approach was established in Philadelphia in 1996 [12].By 2016, more than 50% of the penetrating trauma in Philadelphia was transported by police to medical facilities [16].These patients presented with a higher injury severity score (ISS), lower Glasgow coma scale score (GCS), and a higher frequency of gunshot wounds (GSWs) than those transported by EMS [8,[16][17][18][19], and it is still debated whether a survival benefit can be deducted.Initial reports showed an increase in mortality for police transport compared with emergency medical services (EMS), although adjusted comparisons indicated no difference [8,[16][17][18][19] and one report indicated a survival benefit [15].The picture was further complicated by the fact that transport by private vehicles decreased the adjusted mortality in relation to EMS [20].It is not known whether transport of patients with penetrating trauma occurs outside of the EMS in Sweden or whether it affects outcome.Sweden is a relatively large country compared to its population, and level one trauma centers are located only in urban areas [21].Therefore, data from the US cannot be extrapolated to Sweden.Moreover, prehospital care cannot be compared directly, as organizations, operative competence, and mandates differ substantially between countries [22][23][24].Sweden is facing a surge of gun violence that mandates optimized prehospital transport approaches, and an understanding of current practice is fundamental for such optimization.Therefore, we used the Swedish National Trauma Registry (SweTrau) to investigate prehospital transportation modalities of severe, penetrating trauma in Sweden during 2011-2019.We hypothesized that transport by police and private vehicles occurred and that it affected mortality. Study population This was a retrospective, descriptive nationwide study of all patients with penetrating trauma and ISS ≥ 15 registered in SweTrau between its establishment on June 13, 2011, and December 31, 2019.The population in Sweden was 9,415,570 people in 2011 and 10,327,589 people in 2019.Patients of all ages and sexes were included.The study was approved by the Swedish Ethical Review Authority (no 2019-02842) and by the SweTrau steering group. 
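A minimal sketch of the cohort selection described above is given below, assuming a hypothetical flat export of the registry; the column names and example rows are purely illustrative and do not reflect SweTrau's actual schema.

```python
import pandas as pd

# Illustrative rows standing in for a registry export (hypothetical schema).
registry = pd.DataFrame({
    "patient_id":     [1, 2, 3, 4, 5],
    "mechanism":      ["penetrating", "penetrating", "blunt", "penetrating", "penetrating"],
    "iss":            [25, 9, 34, 17, 41],
    "arrival_time":   pd.to_datetime(["2012-03-01", "2015-07-14", "2018-01-02",
                                      "2019-11-30", "2010-05-05"]),
    "transport_mode": ["EMS", "police", "EMS", "private_vehicle", "EMS"],
})

start, end = pd.Timestamp("2011-06-13"), pd.Timestamp("2019-12-31")
cohort = registry[
    (registry["mechanism"] == "penetrating")          # penetrating trauma only
    & (registry["iss"] >= 15)                         # severe injury, ISS >= 15
    & registry["arrival_time"].between(start, end)    # study period
]

print(cohort.groupby("transport_mode").size())        # patients per transport mode
```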
Swedish trauma registry Data were extracted from the national trauma registry in Sweden, SweTrau, which was established in 2011.In 2019, 92% of all hospitals in Sweden with trauma capabilities (anesthesia, surgery and radiology competence available at all times) were associated with SweTrau, and 86% of hospitals in the registry reported actively [25].SweTrau follows "the revised Utstein Trauma Template for Uniform Reporting of Data following Major Trauma, 2009, a uniform template for reporting variables and outcomes in trauma allowing comparison of trauma systems in Europe [26].SweTrau estimates its coverage by comparing registry entries of trauma requiring intensive care with data in The Swedish Intensive Care registry (SIR) of admissions with the diagnosis "Trauma" and injury diagnoses SA01-TA04 and TA09-TA13.SweTrau's coverage was estimated at 72.6% in 2019 [25].To be included in SweTrau, patients needed to fulfill at least one of the following criteria: exposure to a traumatic event with subsequent trauma team activation at the receiving hospital, ISS > 15 without trauma team activation, ISS > 15 and transferred to a participating hospital within 7 days of the trauma.The exclusion criteria for registration in SweTrau were trauma team activation without a precipitating trauma and patients where the only injury was a chronic subdural hematoma. Definitions and missing data Penetrating trauma was defined as injuries caused by sharp objects.Transport by EMS was defined as ground ambulance.Airborne EMS and transports between hospitals were excluded.Scene time was defined as the registered time from EMS arrival to the scene of trauma until departure, and transport time was defined as the registered time from EMS departure from the scene of trauma to arrival at the receiving hospital.Prehospital time was defined as scene time combined with transport time.Missing data are presented with their respective categories in tables when applicable.Patients arriving on foot were excluded from Tables 2 and 3 due to isolated patients. Statistical analyses Data are presented as mean with interquartile range (IQR) for continuous variables.Descriptive statistics of patient characteristics are presented as numbers and percentages.Data analysis was performed with R (v. 4.0.3).Logistic regression models for dichotomous outcomes were used with restricted cubic splines and three knots placed at their respective quantiles.P < 0.05 was considered statistically significant. Outcomes and airway management The 30-day mortality for patients transported by EMS was 30% (n = 184), 50% (n = 5) for police transport, 22% (n = 6) for private vehicles, and all patients (n = 8) who arrived at the ED by foot survived (Table 3).Private vehicles, odds ratio (OR) 0.65 (confidence interval [CI] 0.24, 1.55, p = 0.4), and police transport, OR 2.28 (CI 0.63, 8.3, p = 0.2), were not associated with increased mortality in relation to EMS.The Glasgow outcome scale score was generally higher for patients transported by private vehicles and patients who arrived at the ED by foot compared with EMS and police transport.In total, 199 (32.5%) patients transported by EMS were intubated in the ED, compared with 6 (60%) patients transported by police and 12 (44.4%)patients transported by private vehicles.The mortality rates associated with transit time, scene time, and combined scene and transit time for EMS are presented in Fig. 3. 
Short transit times were significantly associated with increased mortality, but no other association was significant (Fig. 3).The ISS in relation to transport times for EMS is presented in Fig. 4. Discussion In this study, we showed that non-EMS transport of severe penetrating trauma occurred in 5.6% of cases.The mortality for police transport was 50% (n = 5) and 22% (n = 6) for private vehicles, and there was no mortality difference between EMS and police transport (OR 2.28 [CI 0.63, 8.3]) or private vehicles (OR 0.65 [CI 0.24, 1.55]).Adjusted mortality analysis of police transport and private vehicles was ceded due to limited sample size.The police transported 1.5% of the patients, who presented with lower GCS scores and a higher incidence of GSWs compared with EMS, in concurrence with earlier reports [8,16,17,19,20].The combination of GSW and low GCS score may have signaled an urgency that prompted police to transport the victim instead of waiting for the EMS, although the specific reasons in these cases could not be deduced.In contrast to previous observations, ISS did not differ between patients transported by the police and EMS [8,16,18,19].Further analysis of ISS in relation to mode of transport showed that police transported patients with a lower ISS to a lesser extent than EMS, although the median ISS did not differ.The police transported more severely injured patients (median ISS 25) compared with earlier reports (mean ISS 14.2 and mean ISS 15.5) [8,18], which is likely reflected in the increased mortality (50%) in relation to those reports (17.7% and 14.8%) [8,18].Private vehicles transported 4.1% of all cases, compared with previous observations of 12.6% and 20.5% [27,28].Patients transported by private vehicles had lower ISS, similar systolic blood pressure, and comparable GCS scores in relation to EMS, in concurrence with earlier reports [14,27].Private vehicles more frequently transported patients with GSW compared with EMS, which contrasts with a report from Wandling et al [20].The median ISS 20 for patients transported by private vehicles was elevated in relation to earlier reports (median ISS 2 and 84% with mean ISS ≤ 15), which likely influenced the increased mortality (22%) compared with those reports (2.2% and 2.1%) [20,27]. We detected a median scene time of 12 min for EMS.A nonsignificant trend of increased mortality with increased scene times was noted.Prehospital interventions may increase scene time and possible harm [3], and increased scene times have been associated with increased mortality [4,29].Advanced interventions enroute could lower the time on scene [30,31].Additionally, transport by non-EMS could decrease prehospital times [13,15] and limit medical interventions.We found no association between ISS and transport time.In other studies, severely injured penetrating trauma patients were associated with shorter transport times [2], and shorter transport times increased mortality unrelated to injury severity [29].These results may reflect an urgency in severely injured patients not necessarily mirrored in the present classification of injury severity. 
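The odds ratios quoted above are the kind of estimates produced by a standard logistic regression with EMS as the reference category. The following sketch uses statsmodels on simulated, illustrative data rather than the study data, and only indicates the restricted-cubic-spline term for continuous prehospital times as a commented option.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

np.random.seed(0)

# Simulated analysis table: one row per patient (illustrative only).
df = pd.DataFrame({
    "died_30d":       np.random.binomial(1, 0.3, 657),
    "transport_mode": np.random.choice(["EMS", "police", "private"], 657,
                                       p=[0.93, 0.02, 0.05]),
})

# 30-day mortality by transport mode, with EMS as the reference category.
model = smf.logit(
    "died_30d ~ C(transport_mode, Treatment(reference='EMS'))", data=df
).fit(disp=False)

odds_ratios = pd.DataFrame({
    "OR":      np.exp(model.params),
    "CI_low":  np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
    "p":       model.pvalues,
})
print(odds_ratios.round(2))

# Continuous prehospital times can be modelled flexibly with natural (restricted)
# cubic splines, e.g. via patsy in the formula:
#   smf.logit("died_30d ~ cr(transport_time, df=3)", data=df).fit()
```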
The incidence of gun homicide in Philadelphia was 146 per million inhabitants in 2016.Several cities in the US have a similar incidence of gun homicide as Philadelphia without an established practice of police transports [18,36], indicating additional contributing factors to the practice of non-EMS transport besides the incidence of gun homicide alone.In comparison, gun homicides occur at a rate of 4 per million inhabitants in Sweden and 1.6 per million inhabitants in Europe [1,37].Philadelphia has eight adult and pediatric trauma centers in proximity to shooting incidents, which is why conditions may be favorable for short transportation times by non-EMS [8,28].We have previously shown that the incidence of severe penetrating trauma was highest in the three largest metropolitan regions in Sweden [21].These areas provide relatively short transportation times.Unsurprisingly, increased distance between the scene of violence and hospitals may increase mortality [32], and access to trauma centers in Sweden varies considerably depending on geographic location.[33] The availability of trauma centers within different healthcare organizations likely influences the challenges posed by prehospital triage.Accurate prehospital triage of trauma patients is challenging, and undertriage of undifferentiated trauma patients has been associated with increased mortality [34], with possible subsequent harm from interhospital transfers.[34,35] Considering triage challenges by health care professionals, mistriage by non-EMS is likely elevated compared with EMS, with potential harmful effects on patients and health care resources. The increased shooting incidence in Sweden also risks increasing the number of casualties in areas with ongoing violence, and anecdotal stories of police transport were discussed in Swedish media [38].Here, we show that although transport by police and private vehicles occurred, the incidence was low.Nevertheless, in 2018, health and police authorities in the Stockholm region established an agreement that regulates the authorities' cooperation concerning the management of severely injured patients around scenes of violence [39].The agreement stated that EMS should always perform the transports unless time restraints or safety concerns dictate otherwise; in these circumstances, police may evacuate patients with a subsequent transfer to EMS at a safe location.Police transport to the hospital should be restricted to exceptional cases.Areas outside of Stockholm are still unregulated.Therefore, increased medical training of police officers may increase lifesaving interventions in either situation [40]. This study has some limitations that need to be acknowledged.First, this was an observational study with inherent limitations regarding association and causality.Second, prehospital deaths were not included in SweTrau, which may be a source of selection bias.Third, the number of non-EMS transports was small, which limited the analysis and decreased the observation confidence.Fourth, the coverage of SweTrau increased during the study period, which could affect outcomes, although we did not analyze trends. Conclusion Non-EMS transport did occur, however with a low incidence and did not affect mortality.GSWs were more common in police transport, and victims had lower GCS scores when arriving at the ED, which warrants further investigations of the operational management of shooting victims in Sweden. Fig. 1 Fig. 
1 Flowchart of patient inclusion. EMS emergency medical service, HEMS helicopter emergency medical service, ISS injury severity score, SweTrau Swedish national trauma registry
Fig. 2 Histogram visualizing ISS in patients transported by EMS and police. The police transported patients with lower ISS to a lesser extent compared with EMS. EMS emergency medical service, ISS injury severity scale
Table 1 Baseline characteristics. ED emergency department, EMS emergency medical service, GCS Glasgow coma scale, GSWs gunshot wounds, SWs stab wounds
Table 2 Patient injuries
v3-fos-license
2021-11-11T16:09:38.342Z
2021-11-08T00:00:00.000
243974778
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2077-1312/9/11/1237/pdf", "pdf_hash": "9673ff041c1ab40c76ee3437523efd6ef808af68", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42078", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "5c075e9090f081e822c41101b5f3a0a6d53ff7e1", "year": 2021 }
pes2o/s2orc
First Concurrent Measurement of Primary Production in the Yellow Sea, the South Sea of Korea, and the East/Japan Sea, 2018 : Dramatic environmental changes have been recently reported in the Yellow Sea (YS), the South Sea of Korea (SS), and the East/Japan Sea (EJS), but little information on the regional primary productions is currently available. Using the 13 C- 15 N tracer method, we measured primary productions in the YS, the SS, and the EJS for the first time in 2018 to understand the current status of marine ecosystems in the three distinct seas. The mean daily primary productions during the observation period ranged from 25.8 to 607.5 mg C m − 2 d − 1 in the YS, 68.5 to 487.3 mg C m − 2 d − 1 in the SS, and 106.4 to 490.5 mg C m − 2 d − 1 in the EJS, respectively. In comparison with previous studies, significantly lower ( t -test, p < 0.05) spring and summer productions and consequently lower annual primary productions were observed in this study. Based on PCA analysis, we found that small-sized (pico- and nano-) phytoplankton had strongly negative effects on the primary productions. Their ecological roles should be further investigated in the YS, the SS, and the EJS under warming ocean conditions within small Introduction Marine phytoplankton as primary producers play an important role as the base of the ecological pyramid in the ocean and are responsible for nearly a half of global primary production [1,2]. The primary production of phytoplankton is widely used as an important indicator to predict annual fishery yield in various oceanic regions [3][4][5], because it is one of key factors in determining amount of food source for upper-trophic-level consumers [6,7]. Lee et al. [8,9] also reported that an algorithm for estimation of the habitat suitability index for the mackerels and squids around the Korean peninsula was largely improved by including a primary production term. The physiological conditions and community structures of phytoplankton are closely related to physical and chemical factors (e.g., light regime, nutrients, and temperature) [10][11][12], which induce greatly different phytoplankton productions in various marine ecosystems [3,13,14]. Thus, the primary production measurements can provide fundamental backgrounds for better understanding marine ecosystems with different environmental conditions and detecting current potential ecosystem changes. The Yellow Sea (hereafter YS), the South Sea of Korea (SS), and the East/Japan Sea (EJS), belonging to the East Asian marginal seas, have experienced 2-4 times faster increase (0.7-1.2 • C) in seawater temperature than that in global mean water temperature (0.4 • C) for 20 years [15]. Moreover, some notable changes in physicochemical conditions were reported, such as increasing limitation of nutrients in the YS and rapid Table 1. Description of sampling sites in the YS, the SS, and the EJS for each cruise period, in 2018. (o) means investigation was conducted, while (-) means investigation was not conducted. The bottom depths at our sampling stations in the YS and the SS had relatively narrow range, whereas the EJS had a wide range of bottom depths (48-2340 m) in this study ( Table 1). The six water depths were determined at each station by converting Secchi disc depth to 6 corresponding light depths (100, 50, 30, 12, 5, and 1% of surface photosynthetic active radiation; (PAR)). 
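For readers who want the arithmetic behind the light-depth assignment just described, the sketch below converts a Secchi reading into the six optical depths. The attenuation coefficient is approximated with the common Poole-Atkins relation kd ≈ 1.7/ZSD; this coefficient is an assumption on our part, since the paper does not state the conversion it used.

```python
import math

def light_depths(secchi_depth_m, fractions=(1.0, 0.5, 0.3, 0.12, 0.05, 0.01)):
    """Estimate sampling depths (m) for given fractions of surface PAR.

    Assumes an exponential light profile E(z) = E0 * exp(-kd * z), with the
    diffuse attenuation coefficient approximated from the Secchi depth as
    kd ~ 1.7 / Z_SD (Poole-Atkins); the 1.7 coefficient is an assumption,
    not a value given in the text.
    """
    kd = 1.7 / secchi_depth_m
    return [round(-math.log(f) / kd, 1) for f in fractions]

# Example: a 10 m Secchi reading
for frac, z in zip((100, 50, 30, 12, 5, 1), light_depths(10.0)):
    print(f"{frac:>3d}% PAR = {z} m")
```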
Then, each water sample was collected from 6 different depths using Niskin bottles (8 L) equipped with a conductivity, temperature, and depth (CTD)rosette. The water temperature and salinity were obtained from SBE9/11 CTD (Sea-Bird Electronics, Bellevue, WA, USA). The mixed-layer depth (MLD) was defined as the depth at which the density is increased by 0.125 density units from the sea surface density [27,28]. Water samples for dissolved inorganic nutrients (NH 4 , NO 2 + NO 3 , PO 4 , and SiO 2 ) and chl-a (total and size-fractionated) concentrations were collected at three light depths (100, 30, and 1% of PAR). Water samples for measuring the particle organic carbon (POC) and particle organic nitrogen (PON) concentrations and total carbon uptake rates (primary production) of phytoplankton were collected at six light depths (100, 50, 30, 12, 5, and 1% of PAR). The euphotic zone is defined as the depth from 100 to 1% of PAR. Inorganic Nutrients Concentrations To measure concentrations of dissolved inorganic nutrients (NH 4 , NO 2 + NO 3 , PO 4 , and SiO 2 ), 0.1 L water samples were filtered onto Whatman GF/F filters (ø = 47 mm) at a vacuum pressure lower than 150 mmHg. Filtered water samples were immediately frozen at −20 • C for further analysis in our laboratory. An auto-analyzer (Quattro, Seal Analytical, Norderstedt, Germany) in the NIFS was used for the analysis of dissolved inorganic nutrients according to the manufacturer's instruction. Chl-a Concentration The primary method and calculation for determining the chl-a concentrations were conducted according to Parsons et al. [29]. Water samples (0.1-0.4 L) for total chl-a concentration were filtered through Whatman GF/F filters (ø = 25 mm), and samples (0.3-1 L) for three different size-fractionated chl-a concentrations were passed sequentially through 20 µm and 2 µm membrane filters (ø = 47 mm) and GF/F filters (ø = 47 mm) at low vacuum pressure. The filtered samples were then placed in a 15 mL conical tube, immediately stored in −20 • C freezer until the analysis. In the laboratory, the frozen filters were extracted with 90% acetone at 4 • C for 20-24 h, and chl-a concentrations were then measured using a fluorometer (Turner Designs, 10-AU, San Jose, CA, USA) calibrated based on commercially available reference material for chl-a. Measurements of Phyoplankton Carbon and Nitrogen Uptake Rate The 13 C-15 N dual stable isotope tracer technique was used for simultaneously measuring the carbon and nitrogen uptake rates of the phytoplankton as described by Dugdale and Goering [30] and Hama et al. [31]. In brief, water samples from each light depth (100%, 50%, 30%, 12%, 5%, and 1% of PAR) were immediately transferred to acid-rinsed polycarbonate incubation bottles (1 L) covered with neutral density screens (Lee Filters) [32] after passing through 333 µm sieves to eliminate the large zooplankton. The incubation bottles filled with seawater at each light depth were inoculated with the labeled carbon (NaH 13 CO 3 ) and nitrate (K 15 NO 3 ) or ammonium ( 15 NH 4 Cl), which correspond to 10-15% of the concentrations in the ambient water [30,31]. Then, the tracer-injected bottles were incubated in a large polycarbonate incubator at a constant temperature maintained by continuously circulating sea surface water under natural surface light for 4-5 h. 
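For orientation, the carbon uptake rate obtained from such a 13 C incubation is conventionally calculated from the isotopic enrichment of the particulate organic carbon, essentially following the Hama et al. formulation cited in this section. The sketch below uses invented numbers and simplified units, so it is a reading aid rather than the authors' exact computation.

```python
def carbon_uptake_rate(poc_umol_l, atom13c_sample, atom13c_natural,
                       atom13c_dic, incubation_h):
    """Hourly carbon uptake rate (approx. umol C L-1 h-1).

    poc_umol_l      : POC concentration after incubation
    atom13c_sample  : 13C atom% of the incubated POC
    atom13c_natural : natural-abundance 13C atom% of POC (~1.1%)
    atom13c_dic     : 13C atom% of the dissolved inorganic carbon pool after
                      tracer addition (tracer at 10-15% of ambient, as above)
    incubation_h    : incubation time in hours (4-5 h in this study)
    """
    excess = atom13c_sample - atom13c_natural
    enrichment = atom13c_dic - atom13c_natural
    return poc_umol_l * excess / enrichment / incubation_h

# Purely illustrative numbers:
rate = carbon_uptake_rate(poc_umol_l=10.0, atom13c_sample=1.35,
                          atom13c_natural=1.08, atom13c_dic=10.5,
                          incubation_h=4.5)
print(f"uptake = {rate:.3f} umol C L-1 h-1")
# A dark-bottle rate computed the same way would be subtracted, and hourly
# rates scaled to daily production using the photoperiod noted below.
```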
The incubated water samples (0.1-0.4 L) were filtered onto Whatman GF/F filters (ø = 25 mm) precombusted at 450 °C, and the filters were then kept in a freezer (−20 °C) until mass spectrometer analysis. At the laboratory of Pusan National University, the filters were fumed with strong hydrochloric acid in a desiccator overnight to remove the carbonate and dried with a freeze drier for 2 h. Then, POC and PON concentrations and the atom % of 13 C were analyzed by a Finnigan Delta+XL mass spectrometer at the stable isotope laboratory of the University of Alaska (Fairbanks, AK, USA). The carbon uptake rates of the phytoplankton were estimated as described by Dugdale and Goering [30] and Hama et al. [31]. The final values of the carbon uptake rates of phytoplankton were then calculated by subtracting the carbon uptake rates of dark bottles to eliminate the heterotrophic bacterial production [33][34][35]. The daily primary productions of phytoplankton were calculated from the hourly primary productions observed in this study and the 10-h photoperiod per day reported previously in the YS and EJS [22,24].

Statistical Analysis
The statistical analyses for Pearson's correlation, t-test, and one-way analysis of variance (one-way ANOVA) were performed using SPSS (version 12.0, SPSS Inc., Chicago, IL, USA). In the one-way ANOVA, a test to certify the homoscedasticity of variables was conducted using Levene's test. To compare pairwise differences for the variables, Scheffe's (homogeneity) and Dunnett's (heteroscedasticity) post hoc tests were used, based on the homogeneity of variances. Principal component analysis (PCA) with the Varimax method and Kaiser normalization, using the XLSTAT software (Addinsoft, Boston, MA, USA), was used to identify the relatively significant factors affecting the total carbon uptake rates of phytoplankton in each sea during our observation time. Fourteen variables for the PCA included physical (water temperature, salinity, and euphotic and mixed-layer depths), chemical (NH 4 , NO 2 +NO 3 , PO 4 , and SiO 2 concentrations), and biological (total and size-fractionated chl-a and POC concentrations) factors and the carbon uptake rates of phytoplankton.

Physicochemical Environmental Conditions
Seasonal vertical profiles of the mean temperatures and salinities at each light depth in the YS, the SS, and the EJS are presented in Figure 2. Seasonal water temperatures and salinities in the YS, the SS, and the EJS were evenly distributed within the euphotic zone except in August. The mean temperatures within the euphotic zone in the YS, the SS, and the EJS were lowest in February, with means of 5.9 (S.D. ± 2.3), 13.6 (± 1.3), and 9.9 (± 1.7) °C, respectively, and gradually increased to their highest in August, with means of 23.2 (± 1.4), 23.8 (± 1.6), and 20.9 (± 2.9) °C, respectively (Figure 2). The average water temperature in the YS was significantly lower than those in the SS and EJS during February and April (one-way ANOVA, p < 0.05). The highest mean salinities in the YS and EJS were observed in April (32.8 ± 0.8 and 34.4 ± 0.1 psu), whereas salinity in the SS was highest in February at 34.6 ± 0.0 psu (Figure 2). Overall, lower salinities were found in the YS than in the SS and the EJS throughout the observation period.
The mean euphotic depths in the YS, the SS, and the EJS were deepest in August, at 37.6 ± 15.6, 49.8 ± 11.3, and 54.4 ± 10.7 m, respectively (Figure 3). In particular, the euphotic depth in the EJS in February (51.0 ± 5.8 m) was significantly deeper (one-way ANOVA, p < 0.01) than those in the YS (12.8 ± 6.2 m) and the SS (28.1 ± 4.7 m). The deepest MLDs in the YS, the SS, and the EJS were observed in February, with means of 68.7 ± 15.7, 59.0 ± 40.5, and 80.6 ± 57.4 m, respectively (Figure 3). The MLDs in the YS, the SS, and the EJS became continuously shallower until August, at 12.0 ± 14.2, 13.7 ± 6.6, and 13.2 ± 6.2 m, respectively, and then deepened in October to 26.3 ± 13.7, 30.2 ± 16.1, and 37.9 ± 14.2 m, respectively. In all regions, the differences between the MLDs and euphotic depths were greatest in February, decreased toward April, and then reversed in August, when the MLDs were significantly shallower than the euphotic depths (t-test, p < 0.01) (Figure 3). These results indicate that the euphotic zone was vertically well mixed in all study regions during February and April, whereas strong stratification developed in the euphotic water columns during August.
Major dissolved inorganic nutrient concentrations at each light depth (100%, 30%, and 1%) in the YS, the SS, and the EJS for each cruise are summarized in Table 2. The ranges of NO 2 +NO 3 , PO 4 , and SiO 2 concentrations during the study period were 0.5-9.9, <0.1-0.6, and 2.4-10.0 μM in the YS; 0.9-8.1, 0.1-0.4, and 5.1-11.3 μM in the SS; and 0.2-8.7, 0.1-0.5, and 2.4-11.0 μM in the EJS, respectively. The ranges of nutrient concentrations except for NH 4 varied significantly in all regions during the study period, being generally high in February and low in the other seasons, except for the NO 2 +NO 3 concentrations in the YS in April. The nutrient concentrations, except for NH 4 , at the 1% light depths in the YS and the EJS were higher (one-way ANOVA, p < 0.01) than those at the 100% and 30% light depths during August and October, whereas vertical differences in the SS were only detected in August. NH 4 concentrations ranged from 0.5 to 1.2 μM in the YS, 0.1 to 0.6 μM in the SS, and 0.4 to 0.9 μM in the EJS during the observation period. Unlike the other nutrients, NH 4 concentrations showed no distinct seasonal or vertical characteristics in any of the study regions.

POC and PON Concentration
The mean POC concentrations integrated over the euphotic zone in the YS, the SS, and the EJS showed different seasonal patterns in comparison with the chl-a concentrations (Figure 6b). The POC concentrations in the YS and SS gradually increased from February, at 1.7 ± 0.5 and 2.7 ± 1.0 g C m −2 , to October, with 10.4 ± 3.7 and 7.5 ± 3.1 g C m −2 , respectively. In comparison, the POC concentrations in the EJS were highest during August, at 8.9 ± 1.5 g C m −2 , but remained constant at an average of ~4 g C m −2 during the other seasons.

Primary Production of Phytoplankton
The primary productions of phytoplankton integrated over the six different light depths (100, 50, 30, 12, 5, and 1%) ranged from 1.0 to 135.1 (YS), 1.8 to 63.7 (SS), and 2.3 to 119.3 (EJS) mg C m −2 h −1 , respectively (Figure 7). The ranges of the primary productions in the YS and the EJS were more variable than in the SS in this study. High mean primary productions in the YS, the SS, and the EJS were observed during April (29.3 ± 39.4, 42.6 ± 7.8, and 49.1 ± 25.2 mg C m −2 h −1 ) and October (60.6 ± 17.8, 48.4 ± 15.4, and 43.3 ± 31.1 mg C m −2 h −1 ) (Figure 6c). In comparison, the mean primary productions during February and August were low in the YS (2.6 ± 1.2 and 9.3 ± 1.0 mg C m −2 h −1 ), the SS (6.8 ± 3.5 and 19.5 ± 12.5 mg C m −2 h −1 ), and the EJS (10.6 ± 7.7 and 28.4 ± 20.4 mg C m −2 h −1 ) (Figure 6c). Overall, there were distinct seasonal variations in the primary productions, which were higher in spring and autumn than in winter and summer in all waters of the littoral seas of Korea in 2018.
The results of the PCA used to determine the major environmental and biological factors affecting the primary productions of phytoplankton in the YS, the SS, and the EJS throughout the observation period are shown in Figure 8a-c. The two ordination axes (PC1 and PC2) of the principal components (PC) accounted for cumulative variances of 61.6, 66.9, and 54.7% in the YS, the SS, and the EJS, respectively. Primary production in the YS was positively correlated with the chl-a and POC concentrations and temperature but negatively correlated with the MLD and the composition of nano-sized phytoplankton (Figure 8a). Positive relations between primary production and the total chl-a concentrations and the composition of micro-sized phytoplankton were observed in the SS (Figure 8b). In contrast, pico-sized phytoplankton compositions and nutrients except for NH 4 were negatively related to primary production in the SS (Figure 8b). For the EJS, the total chl-a concentrations, the composition of micro-sized plankton, and salinity had positive effects, whereas pico-sized plankton and water temperature had negative effects on the primary production (Figure 8c). No strong correlation (R 2 = 0.1225, p > 0.05) was found between the biomass contributions of pico-sized phytoplankton and the primary production of phytoplankton in the YS (Figure 9a). In contrast, significantly negative correlations between the biomass contributions of pico-sized phytoplankton and the primary production were observed in the SS (R 2 = 0.791, p < 0.01) and the EJS (R 2 = 0.801, p < 0.01) (Figure 9b,c).

Comparisons of Primary Production between This and Previous Studies
Based on a 10-h photoperiod and the hourly primary productions obtained in this study (Figure 6), the mean daily primary productions in the YS were 25.8 ± 11.9, 292.7 ± 393.9, 139.0 ± 66.9, and 607.5 ± 172.6 mg C m −2 d −1 during winter, spring, summer, and autumn, respectively.
Our values obtained in this study were slightly lower than the range (56-947 mg C m−2 d−1) of values reported previously in regions adjacent or nearly identical to our sites in the YS (Table 3). In particular, our spring (293 mg C m−2 d−1) and summer (139 mg C m−2 d−1) values in this study were significantly lower (t-test, p < 0.05) than the spring (851 ± 108 mg C m−2 d−1) and summer (555 ± 231 mg C m−2 d−1) values averaged from previous studies. These lower seasonal productions in 2018 might be explained by a recent change in the nutrient budgets in the YS. An increasing trend in dissolved inorganic nitrogen (DIN) concentration since the 1980s was reported, whereas a decreasing trend from the 1980s to 2000 followed by a slight increase in PO4 concentration was observed in the YS [16,36]. These changes in DIN and PO4 have induced a gradual increase in the N/P ratio and a shift from N-limitation to P-limitation in the YS [36]. The P-limited condition could shift the dominant phytoplankton species from diatoms to small-sized non-diatoms, which have higher growth rates in P-limited waters but lower photosynthetic efficiencies [18,24,37]. Lin et al. [16] reported that the dramatic decrease in primary production in the YS during all seasons between the 1983-1986 and 1996-1998 periods could be one of the ecological responses caused by the increase in the N/P ratio. In this study, the N/P ratios (32 ± 14) during the spring period were significantly higher (one-sample t-test, p < 0.01) than the Redfield ratio (16) [38], which could have resulted in a limitation for diatom growth [39,40]. Indeed, the diatom compositions (approximately 50%) in the YS in spring, based on the results from our parallel study (unpublished data), were distinctly lower than those reported previously in 1986 (89%) and 1998 (70%) [23]. This shift in dominant species could have caused the low primary production in spring 2018. Jang et al. [24] reported that a high contribution of pico-sized (<2 µm) phytoplankton to the total primary production could induce a lower total primary production in the YS when the N/P ratio is higher than 30 during the summer period. We did not measure the production of pico-sized phytoplankton in this study, but the higher N/P ratio (54 ± 78) at the upper euphotic depths (100 and 30% light depths), which accounted for about 75% of the integrated primary production, could explain the lower primary production in the YS during summer 2018.

Since primary production measurements have rarely been conducted in the SS section belonging to the northern part of the East China Sea, we compared our results with those measured previously in the entire East China Sea (Table 4). The average daily primary productions in the SS during this observation are within the range (102-1727 mg C m−2 d−1) reported previously in the East China Sea (Table 4). However, the winter and summer values in this study were significantly lower (t-test, p < 0.05) than the mean winter (206 ± 93 mg C m−2 d−1) and summer (621 ± 179 mg C m−2 d−1) productions reported previously. In comparison, the autumn value (487 mg C m−2 d−1) in this study was consistent with the previous findings (503 ± 186 mg C m−2 d−1).
For the springtime, our daily production (426 mg C m−2 d−1) was not statistically different (t-test, p > 0.05) from the mean production (350 ± 161 mg C m−2 d−1) in early spring (March), but our spring value was considerably lower than those reported previously in April (1727 mg C m−2 d−1) and May (1375 mg C m−2 d−1).

Table 3. Comparisons of daily primary production in the YS. PP represents daily primary production.

The daily primary productions measured in this study during the four seasons are within the range (44-1505 mg C m−2 d−1) obtained previously from various regions in the EJS in different seasons (Table 5). However, our value (284 mg C m−2 d−1) during the winter period was significantly higher (t-test, p < 0.05) than the winter mean value (75 ± 44 mg C m−2 d−1) reported by Nagata [51] and Yoshie et al. [52], whereas our spring (491 mg C m−2 d−1) and summer (106 mg C m−2 d−1) rates were significantly lower (t-test, p < 0.05) than the spring (858 ± 376 mg C m−2 d−1) and summer (519 ± 184 mg C m−2 d−1) values averaged from various previous studies. A plausible mechanism for the difference might be related to the development of the MLD in the EJS during the wintertime. Vigorous vertical mixing driven by the Asian winter monsoon can limit the availability of light to phytoplankton in winter [53,54] but induces an increase in the nutrient availability in the upper euphotic layer from spring to summer [55,56]. However, the MLD has gradually decreased with the increase in water temperature and the weakened wind stress in the EJS [17,57,58], which could offer better light conditions for phytoplankton growth in winter but fewer nutrients for the spring phytoplankton bloom. In this way, the difference in seasonal primary production in the EJS mentioned above could be explained by the recent change in the MLD. However, because our surveys in the EJS were restricted to only 2018, this mechanism needs to be verified by long-term observations.

Another reason for the low primary production, especially in spring 2018, could be that our observation period missed the bloom timing of the phytoplankton. In general, the spring bloom in the EJS is mainly driven by the massive growth of diatoms, which account for the majority of large-sized (>20 µm) phytoplankton [59-61]. Indeed, Kwak et al. [62] observed a significantly higher contribution (approximately 60%) of diatoms during the spring bloom period than in other seasons. In this study, the contribution of the large-sized phytoplankton was rather low during the spring (Figure 5c). However, much lower diatom contributions were detected in our parallel study, which showed that diatoms accounted for only 23.1% (± 9.9%) of the total phytoplankton communities in the EJS in spring (unpublished data). The other reason might be the conspicuously low phytoplankton biomass in the EJS in April 2018. Based on MODIS (Moderate Resolution Imaging Spectroradiometer)-Aqua monthly level-3 chl-a datasets (https://oceandata.sci.gsfc.nasa.gov/MODIS-Aqua/, accessed on 3 August 2021), the surface chl-a during April 2018 showed strong negative anomalies in the southwestern part of the EJS compared with April of 2003-2015 (data not shown). As the chl-a concentration in the EJS was one of the major factors controlling the primary production (Figure 8c), the noticeably low chl-a concentration could cause lower primary production in the EJS during the springtime in 2018.
At the current stage, it is difficult to find a solid reason for the low chl-a concentration in the EJS during April 2018, which should be further resolved for a better understanding of the EJS ecosystem.

Main Factors Affecting the Primary Production in the YS, SS, and EJS in 2018
Based on the PCA results (Figure 8), the major factors controlling the phytoplankton productions were different among the three seas. Total chl-a concentrations (positively; +), temperature (+), MLD (negatively; −), and the nano-sized phytoplankton contribution (−) were found to be major controlling factors in the YS. In comparison, total chl-a concentrations (+), pico- (−) and micro-sized (+) phytoplankton contributions, and nutrients (−) except for NH4 can greatly affect the primary production in the SS. For the EJS, the primary production of phytoplankton can vary greatly due to total chl-a concentrations (+), the micro-sized phytoplankton contribution (+), salinity (+), the pico-sized phytoplankton contribution (−), and water temperature (−). The effects of the physical (temperature, salinity, and MLD) and chemical (nutrients) factors are different in the YS, the SS, and the EJS. Given the positive relationships between the primary productions and the total chl-a concentrations in this study, biomass-driven primary productions are characteristic of the YS, the SS, and the EJS ecosystems, at least in 2018. However, the effects of the three size groups of phytoplankton can be different among the three seas. The contribution of nano-sized phytoplankton in the YS and the contributions of pico-sized phytoplankton in the SS and the EJS were negatively correlated with the primary productions in this study. Choi et al. [42] reported that nano-phytoplankton contributed greatly to the primary production in the YS, based on the large biomass contribution of nano-phytoplankton (approximately 60%). In this study, the negative relationship between the nano-sized phytoplankton contribution and the primary production indicates that increasing contributions of the nano-sized phytoplankton could decrease the primary production in the YS. In the EJS, several previous studies reported that higher contributions of pico-sized phytoplankton could cause a decrease in the primary production [12,22,69]. Indeed, marked decreasing trends in the primary productions with increasing pico-sized phytoplankton biomass were observed in the SS and the EJS during our observation period in 2018 (Figure 9b,c). This could be caused by the different primary productivities of pico- and large-sized (>2 µm) phytoplankton [22]. Generally, pico-sized phytoplankton have a lower primary productivity than large phytoplankton [14,22,70]. Therefore, the total primary production can be decreased by an increasing contribution of pico-sized phytoplankton, given their lower productivity. Under ongoing ocean warming conditions, pico-sized phytoplankton are expected to become predominant in phytoplankton communities [71-74]. In such a pico-sized-phytoplankton-dominated ecosystem, a lower total primary production could be expected in the SS and the EJS, based on the negative relationships between the primary production and pico-sized phytoplankton observed in this study. The ecological roles of pico-sized phytoplankton in regional marine ecosystems should be further investigated in the YS, the SS, and the EJS under the current environmental changes.
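The R2 and p-values cited for Figure 9 follow from ordinary linear regressions of depth-integrated primary production on the pico-sized biomass contribution; a sketch of that computation with placeholder arrays (not the measured station data):

```python
import numpy as np
from scipy import stats

# Placeholder station values: pico-sized contribution to total chl-a (%) and
# depth-integrated primary production (mg C m-2 h-1).
pico_pct = np.array([10., 20., 30., 40., 55., 65., 75., 85.])
pp = np.array([60., 52., 47., 38., 30., 22., 15., 10.])

res = stats.linregress(pico_pct, pp)
print(f"slope = {res.slope:.2f}, R^2 = {res.rvalue ** 2:.3f}, p = {res.pvalue:.4f}")
# A negative slope with R^2 around 0.8 and p < 0.01 would correspond to the
# relationships reported for the SS and the EJS; a flat, non-significant fit
# would correspond to the YS case.
```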
v3-fos-license
2017-10-04T07:03:27.442Z
2017-01-01T00:00:00.000
31987836
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.4274/balkanmedj.2015.0535", "pdf_hash": "af5d4a9ebcf7d67f51f87bbbe7aee422b2d77bc1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42080", "s2fieldsofstudy": [ "Medicine" ], "sha1": "af5d4a9ebcf7d67f51f87bbbe7aee422b2d77bc1", "year": 2017 }
pes2o/s2orc
Spontaneous Remission of Congenital Complete Atrioventricular Block in Anti-Ro/La Antibody-Negative Monozygotic Twins: Case Report
Taner Kasar, Murat Saygı, İsa Özyılmaz, Yakup Ergül
Department of Pediatric Cardiology, İstanbul Mehmet Akif Ersoy Thoracic and Cardiovascular Surgery Center and Research Hospital, İstanbul, Turkey

Background: Congenital complete atrioventricular block without any structural heart disease and with anti-Ro/La negativity is very rare. Discordant complete atrioventricular block, which is more frequently attributed in the literature to an autoimmune mechanism, is much rarer in monozygotic twins. Case Report: The 26-year-old healthy mother had given birth, in her first spontaneous, uneventful pregnancy, to monozygotic twins at week 35. While the first twin's physical examination proved her to be normal with a pulse rate consistent with her age, the second twin had a pulse rate of approximately 40 beats/minute. The patient was confirmed to have congenital complete atrioventricular block. Conclusion: Although this case appears to be an isolated one, regression of a discordant complete atrioventricular block without any autoimmune evidence should be included in the differential diagnosis of bradycardia in infants. Keywords: Anti-Ro/La antibody, congenital heart block, discordant complete atrioventricular block, infants

Congenital complete atrioventricular block (CAVB) is seen in approximately one in every 20,000 live births. Congenital CAVB is rarer in patients who are anti-Ro/La negative or who do not exhibit any structural heart disease. More than 90% of congenital atrioventricular blocks (AVB) are accompanied by maternal autoimmune antibodies or structural heart disease; the remaining 10% are idiopathic AVB (1). Congenital CAVB in monozygotic twins is rarer still, and is usually attributed in the literature to an autoimmune mechanism (2,3). We present here a case of discordant CAVB with sustained remission in monozygotic twin infants who were autoimmune negative and did not have any structural heart disease. To the best of our knowledge, this is the first report of discordant CAVB regression in monozygotic twins with no autoimmune evidence.

CASE PRESENTATION
The 26-year-old healthy mother had given birth, in her first spontaneous, uneventful pregnancy, to monozygotic twins at week 35, one of whom had a birth weight of 2320 grams. The mother did not have a history of infection, metabolic disease, autoimmune disease, or drug usage during the pregnancy period. After the evaluation of the fetuses at the 35th week of gestation, an emergency cesarean section was performed on the diagnosis of fetal distress due to bradycardia of the index fetus, which was determined by a non-stress test. Following birth, both babies had normal Apgar scores. While the first twin's physical examination proved her to be normal with a pulse rate consistent with her age, the second twin had a pulse rate of approximately 40 beats/minute; therefore, the twin with bradycardia was hospitalized.
Informed consent was obtained from the parents at this time. The patient was confirmed to have CAVB by 12-lead electrocardiography (ECG) (MAC 1600, GE Healthcare, USA) and 24-hour Holter monitorization (Lifecard CF, Del Mar Reynolds Medical, United Kingdom) (Figure 1a). No structural cardiac defect was observed on the echocardiogram (ECHO) (Philips iE33; USA). Serum electrolyte levels, cardiac enzymes, and pro-brain natriuretic peptide levels were normal, and there were no findings of congestive heart failure (shortening fraction: 36%). Viral serology markers for a myocarditis etiology were negative. Isoproterenol (HOSPIRA, INC., Lake Forest, USA) infusion was initiated for significant bradycardia (35-40/minute). Following treatment with isoproterenol, a normal sinus rhythm with a heart rate of 120/minute was reached; therefore, treatment was discontinued. Two days later, the heart rate dropped back to 50 beats/minute, as confirmed by a 12-lead ECG. Isoproterenol treatment was restarted and then discontinued one week later, as the majority of the rhythm was sinus. On the fifteenth day of the follow-up, the patient was confirmed to have a normal sinus rhythm with rare Wenckebach type II AVB (Figure 1b). As the patient's overall condition and vital signs were stable with normal cardiac functions, the patient was discharged. The ECG and ECHO findings of the mother and the other twin were normal. The immunological markers, including anti-Ro/SSA, anti-La/SSB, anti-DNA, antinuclear antibody, anti-cardiolipin antibody, anti-Sm, U1-RNP, Jo-1 and Scl-70, of both infants and the mother were found to be negative. Anti-Ro/SSA and anti-La/SSB immunoblotting yielded negative results in a reference laboratory with ISO 15189 accreditation. At the first and sixth month follow-up visits, the autoantibodies of the patient and the mother were checked for late seroconversion and were found to be negative. At the sixth month follow-up visit, both Holter monitoring and the ECG (Figure 1c) indicated a normal sinus rhythm.

DISCUSSION
Congenital CAVB is rare in patients who are anti-Ro/La negative and have no concomitant structural heart disease (3). Brucato et al. (4) showed that 20% of unselected congenital AVB cases have anti-Ro/La-negative mothers, and Maeno et al. (5) identified this rate at 18%. In these studies, two fetuses were reported to have second-degree AVB, with one diagnosed immediately after birth, and progressive AVB was demonstrated at the third month follow-up in the other three fetuses. Two other infants were noted as having alternating block with normal sinus rhythm; in one infant, a stable, normal sinus rhythm was reportedly restored. Breur et al. (6) reported a series of four patients with fetal heart block, structural heart defects, and negative maternal antibodies (anti-Ro or anti-La antibodies). The neonates were reported to have an unstable progression of the AVB pattern with an occasional block of the sinus rhythm at varying degrees. Our patient had CAVB at birth and varying degrees of AVB, including sinus rhythm, during follow-up, which was detected by Holter monitorization.
At the sixth month follow-up visit, normal sinus rhythm was observed in the ECG and the 24-hour Holter monitor recording. Previous studies have reported that negative immune complex (anti-Ro or anti-La antibody) findings do not always rule out an immune-mediated event; anti-Ro and anti-La antibodies can exhibit a stable profile for many years, but late seroconversion may remain a risk (7). Insensitive assay methods and low concentrations of maternal antibodies may be among the potentially significant factors in this respect. Late seroconversion is considered to be influenced by anti-Ro and anti-La antibodies as well as by unknown intrinsic (fetal) or extrinsic (maternal) factors (3). In our case, the levels of antibodies in the mother and infants, which were measured immediately after birth and at months one and six for the possibility of late seroconversion, were all found to be negative. On the other hand, other studies specify that maternal antibodies may not constitute an adequate reason for AVB, and it was shown that infants of seropositive mothers had an incidence rate of approximately 1-7.5% (8). Discordant congenital CAVB is a rare occurrence in monozygotic twin infants (2). To the best of our knowledge, there is no published report of congenital CAVB in autoimmune-negative twins. The reason for this discordance in twins remains unknown. However, the fact that autoimmune-positive mothers are reported to have children with congenital block at a rate of only 1-7.5% shows that the autoimmune mechanism falls short of accounting for this situation. This suggests that the discordant status may be related to unknown intrinsic (fetal), extrinsic (maternal), and/or environmental factors. Furthermore, there are conflicting reports about the prognosis of congenital AVB patients. According to Berg et al. (8), the mortality rate was found to be similar among children with congenital AVB who were anti-Ro negative and children with congenital AVB who were anti-Ro positive. A number of studies demonstrate the spontaneous improvement of congenital heart block without any cardiomyopathy within the follow-up period (6). In the light of this information, it is safe to say that congenital AVB patients may not always require urgent pacemaker implantation. Citing the most recent guidelines, symptomatic bradycardia with congenital AVB has been accepted as a Class I indication for pacemaker implantation in cases such as wide QRS escape and complex ventricular ectopy (9). Another Class I pacemaker indication in this group is the basal heart rate: a ventricular rate below 55 beats/minute for an infant, or a ventricular rate below 70 beats/minute in an infant with congenital heart disease. Although the original studies reporting these thresholds did not have 24-hour Holter data, this is frequently interpreted as an average 24-hour heart rate (10). We chose clinical follow-up for our patient because the average heart rate was sufficiently high when evaluated by 24-hour Holter ECG monitoring. Even though the initial heart rate was around 40/minute, the patient was asymptomatic, had no structural heart disease, and cardiac function was normal on ECHO evaluation. Pacemaker implantation is not a risk-free procedure, especially in infants, as it requires open heart surgery, the pacemaker generator needs space, and there is a risk of infection. We preferred to monitor our asymptomatic patient with more frequent follow-up visits and 24-hour Holter monitorization.
Our conclusion is that discordant congenital CAVB may develop in monozygotic twins born to an autoimmune negative mother. The pathogenesis of this condition is still unknown. If the cardiac functions of such patients are normal, they may be clinically monitored until normal sinus rhythm is restored.
v3-fos-license
2021-05-01T06:17:18.594Z
2021-04-22T00:00:00.000
233457346
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2073-4360/13/9/1374/pdf", "pdf_hash": "079e00a2f402efda6079daf7c0e674b3f054e487", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42081", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "sha1": "39650e9ea816c3ad1959cec9c2d77fb05510a81b", "year": 2021 }
pes2o/s2orc
Design, Manufacturing and Test of CFRP Front Hood Concepts for a Light-Weight Vehicle

Composite materials are very often used in the manufacture of lightweight parts in the automotive industry; manufacturing cost-efficient elements implies proper technology combined with structural optimization of the material structure. The paper presents the manufacturing process and the experimental and numerical analyses of the mechanical behavior of two composite hoods with different design concepts and material layouts, as body components of a small electric vehicle. The first model follows the black metal design and the second one is based on the composite design concept. The manufacturing steps and full details regarding the fabrication process are delivered in the paper. Static stiffness and strain values for lateral, longitudinal and torsional loading cases were investigated. The first composite hood is 2.54 times lighter than a similar steel hood and the second hood concept is 22% lighter than the first one. The improvement in lateral stiffness of the composite hoods over a similar steel hood is about 80% for the black metal design concept and 157% for the hood with a sandwich structure and modified backside frame. The transversal stiffness is a few times higher for both composite hoods, while the torsional stiffness shows an increase of 62% compared to a similar steel hood.

Introduction
Reducing vehicle weight is one of the main methods for lowering fuel consumption in cars with conventional engines or for increasing autonomy in the case of electric vehicles. A lightweight structure is advantageous for maximizing engine efficiency, accelerating force and braking power in comparison with a heavier vehicle. Automobile manufacturers are seeking to achieve a lightweight structure through structural design and by including proper materials, such as composite materials, in the manufacturing process. Car producers have started considering replacing doors and bonnets with lightweight structures made of carbon fiber reinforced polymer (CFRP). In 2012, the Ford Company presented in the Reinforced Plastics journal [1] a CFRP bonnet that weighs 50% less than a steel version. The fast and affordable production of automotive parts made of composite materials in large numbers is still a challenge; manufacturing cost-efficient elements implies a proper technology combined with structural optimization of the material structure. Carbon fiber reinforced plastic (CFRP) components such as a hood should meet high standards for stiffness, dent resistance and crash performance. The component must also perform well in pedestrian protection head-impact tests. The mechanical properties of CFRP have been studied by different authors over time. Tensile strength [2-4] and bending and Charpy impact fracture energy [5-7] have been reported for different stacking sequences of the layers, indicating the very good mechanical properties and low weight of CFRP structures. The technologies used to obtain CFRP parts are very important in order to achieve high mechanical properties and a very compact structure [8]. We can say that, for the moment, vacuum bag technology with an autoclave curing process is the best procedure to produce CFRP of a very high quality level. Through a wide range of experimental investigations of CFRP obtained by the autoclave curing process, the authors of [9] presented the importance of the autoclave curing parameters and of the polymerization mechanism.
Switching the design of metal parts to CFRP requires a new product design. Many designers treated fiber reinforced polymer (FRP) like metal or plastic materials: they designed pieces that copy the metal parts, reproducing the same geometric shape and reinforcing ribs, in a quasi-isotropic material approach. This black metal design (BMD) concept for composites limits the benefits that composite structures offer. The arrangement of the fibers in the direction of the maximum demands, the use of sandwich structures to achieve a rigid and light structure, and the use of a balanced stacking sequence of the layers in order to respond to complex loads are a few of the advantages of the FRP design concept. The development of new 3D reinforcement material types [10] and of sandwich structures with different types of light cores, such as Nomex [11-16], aluminum [17,18] or PP honeycomb, polymethacrylimide closed-cell foam (Rohacell) or balsa wood [19,20], contributes to the development of complex, light and rigid structures. Sandwich structures with CFRP face sheets and different light cores are experimentally studied in [21]. Proposed for lightweight body structures in electric vehicles, these structures were analyzed under static and dynamic loading. In order to replace steel with a new lightweight material in the manufacturing of a vehicle hood, it is necessary to estimate, besides the mechanical properties of the materials, also the structural performance of the part. The structural performance of a car hood comprises stiffness, modal and pedestrian protection properties. The stiffness of a hood is described by the relationship between load and deformation and can be evaluated by measuring the flexural displacement produced by an external load. Depending on the application point of the load and the supporting conditions, bending and torsion stiffness can be evaluated. Generally, the modal behavior can be characterized by the 1st torsion mode and the bending mode. Pedestrian protection for a hood usually considers the associated pedestrian protection collision standards, the Head Injury Criterion (HIC) being an important parameter to evaluate the pedestrian protection level. The outer hood panel is shaped according to the vehicle design and it cannot be subjected to any structural improvements, except for the layers' architecture in the case of composite materials. The inner panel of the hood can be structurally optimized in order to improve the stiffness, modal behavior and pedestrian protection, both by topological and material design, without changing the clearance between the inner panel and the engine zone. Related studies have shown that the structure of the inner panel of the hood has a greater impact on stiffness [22] and impact behavior [23,24]. Numerical and experimental investigations of an aluminum car hood presented in [25] indicate that the optimized hood structure shows similar stiffness performance compared to the original steel structure and better pedestrian protection performance than the original one; the proposed aluminum structure realized a weight reduction of 46.4% compared with the original steel one. Composite materials were considered as an alternative to steel and aluminum for hood structures. The main manufacturing steps of a CFRP front hood and a proposal for the composite material layers' distribution are presented in [26]. The inner panel design has a hexagonal-shaped reinforcement. Vacuum-assisted resin transfer molding (VRTM) was employed to obtain the CFRP front hood.
The composite material was a balanced woven fabric Twill 2 × 2 of 200 g/m2 with the distribution of layers [90/0 ± 452/90/0]. The analysis of the typical load cases and the computation of the lateral, transversal and torsional stiffness of the front hood were done using commercial finite element software and proved that the proposed design of the composite material and hood structure can fulfil the requirements of the standard static tests. A similar study but with a different composite material layup is presented in [22]; the proposed hood has no inner part, and the outer surface is designed to replace the inner part with a composite material with different mechanical properties. The behavior of three identical bonnets made of steel, aluminum and composite has been investigated in [23] in terms of head impact. The developed composite model (5-ply glass/epoxy) showed excellent results in HICs and torsional stiffness besides being lightweight, although it has the highest cost among the bonnets. Numerical modelling of a hood made of CFRP composite is presented in [24]; the work concluded that, using composite material for the hood structure, there was a reduction in HIC values and an increase of the head displacement compared to other materials. In [27], an automobile bonnet manufactured of flax/vinyl ester composite, which is being researched in various ways as an eco-friendly material, was evaluated to perform structural design and analysis. All hood variants made of composite materials must satisfy bending and torsion stiffness requirements compared to those of conventional steel and aluminum hoods. However, very few studies on hoods present a complete picture of the static behavior of a CFRP hood and propose a specific design, complete manufacturing details, experimental investigations and numerical studies. This paper presents the mechanical behavior of a CFRP hood manufactured considering two different solutions: one solution where the inner part mimics the original steel bonnet (BMD concept) and a second one that replaces the ribs and the reinforcements with a sandwich architecture. In the first part of the paper, the geometrical topologies of the investigated hoods, the CFRP materials employed for the hood design and the elastic material constants experimentally determined on flat specimens according to standard procedures are presented. The second part is dedicated to the manufacturing technology of the hoods and presents detailed aspects regarding the process steps, parameters and methodology. A 3D scanning technique revealed the manufacturing dimensional accuracy with respect to the CAD models. The fabricated hoods are experimentally tested using a self-developed testing frame under boundary conditions similar to reality. Static stiffness for the longitudinal, transversal and torsional load cases is evaluated by measuring the displacement produced by a concentrated force acting perpendicular to the hood upper surface under different supporting conditions. Strain values at three different points placed on the upper and lower hood surfaces are monitored as the load is applied, to obtain information about the strain state in the structure of the hood. Comparison of the mechanical behavior of the experimental hoods with numerical models was performed by finite element analyses. The good agreement between displacements and strains validates the numerical analyses and will support future static and dynamic analyses.
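For reference, the static stiffness values referred to throughout the paper can be defined from the measured load-displacement response; the formulation below is a generic convention, since the exact lever arm and sign conventions used in the tests are not specified here:

```latex
% Bending (lateral/longitudinal) stiffness from a concentrated load F and the
% displacement \delta measured at the loading point:
K_b = \frac{F}{\delta} \qquad [\mathrm{N/mm}]

% Torsional stiffness from a torque M_t = F\,L applied through a lever arm L,
% with the twist angle \theta estimated from the relative vertical displacement
% \Delta z measured across the lever arm:
K_t = \frac{M_t}{\theta} = \frac{F\,L}{\arctan(\Delta z / L)} \qquad [\mathrm{N\,mm/rad}]
```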
Hood Design Concepts
The development of the novel electric vehicle (EV) with small dimensions led to the design and manufacturing of components as light as possible. Figure 1 presents rendered models (Catia V5 software) of the developed EV concept. The main purpose of the new EV concept is to reduce the mass and increase the range. In this case, CFRP body-in-white parts were chosen to replace traditional materials such as conventional steel. For this study, two front hoods made of CFRP were designed, analyzed and manufactured. In the first step, a BMD concept based on a replica of a similar metal front hood, but in the new material, was chosen. In our case, the hood is made of CFRP prepreg using different materials and stacking sequences for the layers. The first version of the hood, denoted "A" and made according to the BMD concept, has two main parts: an exterior face and a backside frame. The backside frame includes all the interest points of the hood, such as the support points on the chassis, the hinge fastening points and the closing area. After manufacturing, an experimental and numerical study was performed to evaluate the mechanical behavior under static loads in terms of longitudinal, lateral and torsional load cases. A second variant, denoted "B", is an optimized composite design and shape of the backside frame. The aesthetic differences between the hoods are minor on the outside face: the initial design has a single rib on the middle (Figure 1a), and the second one has been slightly changed by inserting two less pronounced ribs on the central side (Figure 1b). For the second hood, the optimized model also includes changes in the CFRP structure, and the backside frame dimensions were reduced to decrease the mass of the hood. Experimental and numerical analyses were performed on the second variant to compare the results in terms of stiffness with the first variant and a metal (steel) hood. The studies are complex, including material selection, material testing and characterization, manufacturing of the front hoods at real scale, static tests under different loads and boundary conditions, and numerical simulations.
Materials and Mechanical Properties
In the previous studies [22,26], the proposed CFRP layup was a balanced woven fabric Twill 2 × 2 (200 g/m2) with the distribution of layers [90/0 ± 452/90/0] or a more complex structure including a Twill fabric (200 g/m2) combined with a biaxial fabric of 300 g/m2 and a Nomex honeycomb core. Based on the numerical results obtained and the expertise gained from the manufacturing process of one of these hoods, two new and different design variants of the composite hood are presented in the present paper. For an easier description, the two variants are referred to as variant "A" and "B", respectively. For the "A" hood, made by the BMD concept, the materials and stacking sequence of the layers are presented in Figure 2a. The sequence 1A consists of three layers of CFRP prepreg. The first layer was a CFRP prepreg of type GG245TSE-DT121H-42, the reinforcement being a 2 × 2 Twill fabric, 245 g/m2, 2 K, HR threads; layers 2-3 were CFRP prepreg of type GG430TSE-DT121H-42, in this case the reinforcement being a 2 × 2 Twill fabric, 430 g/m2, 12 K, HR threads. The stacking sequence of the CFRP for "1A" was noted as [90/0 ± 45/90/0]. For the sequence "2A", which represents the backside frame of the hood, two layers of the same materials were used. The first layer applied to the mold was a CFRP prepreg of type GG245TSE-DT121H-42 and the second one of type GG430TSE-DT121H-42. The stacking sequence of the layers was [0/90/±45]. The position of the layers corresponds to the order of lamination on the mold. All the mentioned materials were provided by the Delta Tech S.p.A. Company from Rifoglieto, Italy.
The first layer applied to the mold was CFRP prepreg type GG245TSE-DT121H-42 and the second one GG430TSE-DT121H-42 type. The stacking sequence of the layers was [0/90/ ± 45]. The position of the layers was in the order of laminating on the mold. All the mentioned materials were provided from Delta Tech S.p.A. Company from Rifoglieto Italy. The second studied hood noted "B" was manufactured using CFRP prepregs. The structure of this hood was different from the hood "A", in this case being used a sandwich structure. The purpose of this new concept (presented in Figure 2b) is to reduce the mass of the hood and achieve a good mechanical response of the structure. This concept removed a large number of carbon fibers. For the backside frame, the points of interest were preserved and a minimal backside frame was introduced around the interest points. The details regarding the stacking sequence of the applied layers for "B" hood are presented in Figure 3. The sequence 1B represented the exterior layers of the sandwich structure of the hood. Three CFRP prepreg layers GG090P-DT121H-48 type were used and combined with a plain fabric, 90 g/m 2 , 1 K, HR threads type of CFRP as reinforced material. The stacking sequence was [0/90/ ± 45/0/90]. For the interior the material is a Nomex honeycomb by 10 mm and 3.2 mm size with hexagonal cells. The sequence denoted by 2B is positioned around the edges of the hood and uses three layers of Biaxial CFRP prepreg with [±45]3 stacking sequence. The width of the applied strips was 50 mm and the selected material was G300X(T700)-DT121H-37 a CFRP biaxial fabric, with 300 g/sqm as reinforced material. In the border of the hood, the sandwich structure was not applied. All these layers were covered by a 1B sequence. Both the Nomex structure and the 2B sequence were covered. For the backside frame of the "B" hood (sequence 3B presented in Figure 2b), four layers of CFRP prepreg GG245TSE-DT121H-42 type were used. The stacking sequence was [0/90/ ± 45]2s. The second studied hood noted "B" was manufactured using CFRP prepregs. The structure of this hood was different from the hood "A", in this case being used a sandwich structure. The purpose of this new concept (presented in Figure 2b) is to reduce the mass of the hood and achieve a good mechanical response of the structure. This concept removed a large number of carbon fibers. For the backside frame, the points of interest were preserved and a minimal backside frame was introduced around the interest points. The details regarding the stacking sequence of the applied layers for "B" hood are presented in Figure 3. The sequence 1B represented the exterior layers of the sandwich structure of the hood. Three CFRP prepreg layers GG090P-DT121H-48 type were used and combined with a plain fabric, 90 g/m 2 , 1 K, HR threads type of CFRP as reinforced material. The stacking sequence was [0/90/ ± 45/0/90]. For the interior the material is a Nomex honeycomb by 10 mm and 3.2 mm size with hexagonal cells. For the sequence "2A" which represented the backside frame of the hood two layers from the same materials were used. The first layer applied to the mold was CFRP prepreg type GG245TSE-DT121H-42 and the second one GG430TSE-DT121H-42 type. The stacking sequence of the layers was [0/90/ ± 45]. The position of the layers was in the order of laminating on the mold. All the mentioned materials were provided from Delta Tech S.p.A. Company from Rifoglieto Italy. The second studied hood noted "B" was manufactured using CFRP prepregs. 
Before manufacturing the hoods, a mechanical characterization and a measurement of the elastic constants of the component materials are necessary. In this sense, several plates having the material layups described above were fabricated. The manufacturing technology of the proposed CFRP plates was vacuum bag technology with an autoclave curing process. For the manufacturing of the CFRP plates, a flat metal mold was used. The surface of the mold was rectified and polished, and a mold release liquid was applied on the active surface of the mold. The CFRP prepreg layers were laminated on the flat mold. At the end, the CFRP layers were covered by release foil and the breather. The entire system (mold and laminated prepreg composite) was introduced in a vacuum bag, closed at the edges, and a vacuum pressure of −0.9 Bars was applied. The curing process of the CFRP plates was carried out in an autoclave. The cycle steps and the set parameters are presented in Figure 4 and include:
Step 1. Autoclave heated from 20-80 °C at a 2 °C/min ramp rate for 30 min; pressure from 1-3 Bars.
Step 2. Autoclave heated from 80-120 °C at a 2 °C/min ramp rate for 30 min; pressure 3 Bars.
Step 3. Dwell at 120 °C for 120 min; pressure 3 Bars, vacuum pressure −0.9 Bars.
Step 4. Cool the part from 120-50 °C at 2 °C/min for 60 min, and release the pressure.
In all cycle steps the vacuum pressure was −0.9 Bars.
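The cure cycle above can be captured as a small schedule; the sketch below just encodes the four steps as stated and sums the nominal cycle time (nothing beyond the listed values is inferred):

```python
# Autoclave cure cycle for the CFRP plates, as listed above (cf. Figure 4).
cure_cycle = [
    {"step": 1, "temp_C": (20, 80),   "ramp_C_min": 2, "time_min": 30,  "pressure_bar": "1-3"},
    {"step": 2, "temp_C": (80, 120),  "ramp_C_min": 2, "time_min": 30,  "pressure_bar": "3"},
    {"step": 3, "temp_C": (120, 120), "ramp_C_min": 0, "time_min": 120, "pressure_bar": "3"},
    {"step": 4, "temp_C": (120, 50),  "ramp_C_min": 2, "time_min": 60,  "pressure_bar": "released"},
]
VACUUM_BAR = -0.9  # vacuum held during every step

total_min = sum(step["time_min"] for step in cure_cycle)
print(f"Nominal cycle time: {total_min} min ({total_min / 60:.1f} h) at {VACUUM_BAR} Bar vacuum")
```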
The materials, parameters and conditions are similar to those used for the manufacturing of the hoods. To perform the experimental determinations, standard specimens were cut by water jet from the obtained CFRP plates. Tensile tests following the ASTM D3039M standard were run for specimens with the warp fibers parallel to the load. A uniaxial tensile test of a ±45° laminate was performed according to the ASTM D3518M standard to evaluate the in-plane shear response. The elastic modulus, shear modulus and Poisson's ratio have been derived from these tests. In this study, a homogenization procedure is implemented; the material model adopted for the numerical simulations is an isotropic elastic model. For accurate measurements, unidirectional (1-LY1x-6/120) and bidirectional (1-XY91-6/120, HBM, Darmstadt, Germany) strain gauges with 120 Ω electrical resistance and 6 mm gauge length were applied to each specimen to monitor the longitudinal and transverse strain [3]. The bidirectional strain gauges have two overlapped grids able to measure the strains at the same point in two directions. A half-bridge set-up with passive strain gauges mounted on a non-loaded specimen of the same composite material was used to compensate for the temperature variation. The tests for elastic constant identification were conducted on an INSTRON 3366 (10 kN) universal test frame controlled by an electronic control unit which allows monitoring of the applied load and the speed of the crosshead. Strain signals and a second load cell were acquired by a digital data acquisition system (HBM Spider 8, Darmstadt, Germany) [3].
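The engineering constants reported in Tables 1 and 2 follow from standard reductions of these tests: the Young's modulus and Poisson's ratio from the warp-aligned specimens (ASTM D3039), and the in-plane shear modulus from the ±45° specimens (ASTM D3518, where the shear stress is half the axial stress and the shear strain is the difference between the axial and transverse strains). The sketch below uses synthetic, roughly linear records in place of the acquired signals, and the chord-modulus strain windows are assumptions:

```python
import numpy as np

def chord_modulus(stress_mpa, strain, low, high):
    """Slope of the stress-strain record between two strain levels (chord modulus)."""
    mask = (strain >= low) & (strain <= high)
    return np.polyfit(strain[mask], stress_mpa[mask], 1)[0]

# Synthetic warp-aligned specimen record (placeholders for the acquired signals).
eps_x = np.linspace(0.0, 0.008, 200)          # axial strain
sigma_x = 55_000 * eps_x                      # axial stress, MPa (assumed E ~ 55 GPa)
eps_y = -0.05 * eps_x                         # transverse strain (assumed nu ~ 0.05)

E_x = chord_modulus(sigma_x, eps_x, 0.001, 0.003)   # Young's modulus, MPa
nu_xy = -np.polyfit(eps_x, eps_y, 1)[0]             # Poisson's ratio

# Synthetic ±45° specimen record (ASTM D3518 reduction).
eps_x45 = np.linspace(0.0, 0.02, 200)
eps_y45 = -0.8 * eps_x45                      # assumed transverse response
sigma_x45 = 8_000 * (eps_x45 - eps_y45)       # axial stress consistent with G12 ~ 4 GPa
tau_12 = sigma_x45 / 2.0                      # in-plane shear stress
gamma_12 = eps_x45 - eps_y45                  # engineering shear strain
G_12 = chord_modulus(tau_12, gamma_12, 0.002, 0.006)

print(f"E_x ~ {E_x / 1000:.1f} GPa, nu_xy ~ {nu_xy:.3f}, G_12 ~ {G_12 / 1000:.1f} GPa")
```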
In Tables 1 and 2, the results obtained from the tensile tests of specimens having the stacking architecture and materials described in the previous paragraph are given. The values represent the mean values of several experiments. To analyze the microstructure of the tested CFRP, a morphological study was performed. The fracture areas of the tested CFRP specimens were analyzed using Scanning Electron Microscopy (SEM) with a Quanta 200 3D DUAL BEAM SEM (FEI Company, Hillsboro, OR, USA). Each of these CFRP segments was assembled on an aluminum support specific for SEM analyses of the microstructure. The Low Vacuum module offers the possibility to analyze the natural surfaces of the CFRP samples; thus, covering the sample surface with an electrically conductive coating was not necessary. The working parameters were set to 60 Pa for the working pressure and 10 kV for the acceleration voltage. This relatively low voltage prevents electrostatic charging of the samples. The detector type was a large field detector (LFD). To clearly observe the architecture of the surface in the fracture area and the CFRP structure, the images were acquired at higher magnifications (600×, 2400×). The microstructures of the CFRP plates subjected to tensile tests indicate a very well compacted material. Pores are not present in the material structure, the monofilaments are closely packed and the epoxy matrix is uniformly distributed (Figure 5). The matrix binds the monofilaments, which act together in the case of traction.
Parts of the epoxy resin left on the surface of the carbon monofilaments can also be observed, indicating a very good cohesion between the carbon monofilaments and the epoxy resin. The microstructure investigation of the analyzed samples in the fracture area shows a high-quality CFRP composite with good resin impregnation of the monofilaments and no delamination.

Manufacturing Methodology of CFRP Hoods
Molds were used for manufacturing the hoods, both for the front exterior side and for the back frame side. CNC-milled epoxy blocks were used for fabricating the molds. The obtained surface of the milled molds was sanded using glass paper of 400-1500 grit. The mold surfaces were finally treated using an abrasive polishing paste. In order to prevent the sticking of the CFRP prepreg on the mold surface, a release treatment procedure was applied. The procedure consisted in applying 4 layers of Mold Sealer type S31 from the Jost Chemicals Company (Wals, Austria). This substance closed the surface pores of the mold. In the next step, 5 layers of liquid mold release of type Frekote 770NC from the Loctite Company were applied, with a 10 min wait period after each layer in order to allow drying of the solvent. The last step in the mold preparation was surface polishing. The manufacturing of the CFRP hoods was made using vacuum bag technology and autoclave curing. For the "A" hood, the CFRP prepreg layers were laminated on the mold according to the stacking sequence presented above. On the backside edge of the hood, a peel ply layer was applied to cover the bonding area of the back frame. Release foil, textile breather and vacuum foil covered the CFRP prepreg layers at the end of the lamination. Mastic tape was used for sealing the vacuum foil on the edges of the molds. In the end, the vacuum bag was tested for one hour under −0.9 Bars vacuum pressure. The CFRP laminates and the molds were introduced in the autoclave for the curing procedure. The same cycle steps and parameters were used as in the manufacturing of the CFRP plates described in the previous paragraph. For the backside frame, the same curing conditions of the manufacturing procedure as for the exterior part were respected. The backside frame and the exterior side of the hood were bonded using a structural adhesive of Scotch Weld BP-9323 B/A type from the 3M Company (Bracknell, UK). In order to eliminate possible deformations of the parts, the gluing was done in the mold. The curing procedure of the structural adhesive ran at 80 °C for two hours. The material excess from the borders of the hood was eliminated at the end using a manual mechanical procedure. For the second front hood, marked with "B", the same mold was used. The first three sequence layers of P90 CFRP were applied to the mold (Figure 6a). On the borders of the hood, three layers of biaxial CFRP with 300 g/m2 were applied together with a peel ply textile layer. The result was a 50 mm reinforced area around the borders of the hood (Figure 6b).
The Nomex honeycomb structure was cut with an offset of 50 mm from the outer edges of the hood; it extends from the backside of the hood up to the biaxial CFRP reinforcement. The last three CFRP layers of P90 (sequence 1B) covered the entire structure (Figure 7b). The curing procedure was done under autoclave conditions. The autoclave parameters remained the same, except that the curing pressure was reduced to 1.5 bar during all steps of the autoclave cycle to avoid damaging the honeycomb structure. The backside frame of the hood was manufactured under the same conditions previously presented for hood "A". The bonding of the backside frame was done using the Scotch Weld structural adhesive under similar conditions as for the previous hood. For the obtained CFRP hoods, the surface structure is very good and no surface pores are present. The front hood has very good rigidity, and the CFRP material is very compact with a homogeneous structure, as seen in the morphological analyses. The mass of the "A" front hood (Figure 8) was 3407 g. For the second front hood, "B" (Figure 9), the mass was 2690 g, a mass reduction of 22% for the sandwich structure hood. For the exterior surface of the hood "B", which does not include the backside frame, the obtained mass was 1626 g, corresponding to a mass reduction of 53%. Considering this reduction, the backside frame could be eliminated, provided the points of interest, such as the support points on the chassis, the hinge fastening points and the closing area, are reinforced. In that case, the mass reduction for hood "B" could be between 40 and 50%. At the same time, the stiffness of the hood decreases if the backside frame is eliminated.
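As a quick check on the figures above, the mass reductions follow directly from the quoted masses; the short Python snippet below recomputes them (values copied from the text) and lands within about a percentage point of the reported 22% and 53%, the small difference presumably being rounding in the source.

```python
# Mass-reduction percentages recomputed from the masses quoted in the text (grams).
mass_hood_a = 3407.0        # hood "A" (BMD-type CFRP hood)
mass_hood_b = 2690.0        # hood "B" (sandwich structure)
mass_hood_b_outer = 1626.0  # hood "B" exterior panel only, without backside frame

def reduction_percent(reference: float, value: float) -> float:
    """Relative mass reduction with respect to `reference`, in percent."""
    return 100.0 * (reference - value) / reference

print(f"hood B vs hood A:        {reduction_percent(mass_hood_a, mass_hood_b):.1f} %")
print(f"B outer panel vs hood A: {reduction_percent(mass_hood_a, mass_hood_b_outer):.1f} %")
```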
Dimensional Evaluation of the Obtained CFRP Hoods

After the manufacture of the two hood models, they were dimensionally evaluated. The evaluation was carried out to verify how the manufacturing process influenced the precision of the parts. The dimensional evaluation of the CFRP hoods was done by scanning, and the obtained results were compared with the CAD model of the hoods. Scanning was performed with a white structured light scanner with the following specifications: measurement rate 550,000 measurements/s; resolution 0.100 mm; accuracy 0.100 mm (Go!SCAN, Creaform Inc., Lévis, QC, Canada). Scanning was performed with position targets, as recommended by the manufacturer, to increase scanning accuracy. Because the thickness of the piece is relatively small in relation to its length and width, three different scans were performed: one for each face and a partial scan covering both faces. During the processing stage, the three scans were concatenated into a single model using the VXElements software solution (Creaform Inc., Lévis, QC, Canada). The verification was done by comparing the two 3D models using the Deviation Analysis tool from CATIA V5 (Dassault Systèmes, Vélizy-Villacoublay, France). Figure 10 shows the deviation analysis between the CAD model used in the manufacture of the molds and the CAD model resulting from the processing of the scans made on hood "A". In this analysis, 268,633 points were used, of which 87.9% are within ±0.2 mm and 99.98% are within ±1 mm. The farthest scanned point from the CAD model is at 1.57 mm, and the standard deviation of all geometric deviations of the scanned model from the CAD model is 0.176 mm. Figure 11 shows the deviation analysis between the CAD model used in the manufacture of the molds and the CAD model resulting from the processing of the scans made for hood "B". This analysis used 295,909 points, of which 83.57% are within ±0.3 mm and 99.68% are within ±1 mm. The farthest scanned point from the CAD model is at 1.79 mm, and the standard deviation of all geometric deviations of the scanned model from the CAD model is 0.281 mm.
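The deviation metrics reported for Figures 10 and 11 (share of points inside a tolerance band, farthest point, standard deviation) are standard summary statistics over a per-point deviation export. The sketch below shows how they could be computed with NumPy; the deviation array is synthetic stand-in data, since the raw point export from the inspection software is not part of the paper.

```python
import numpy as np

# Synthetic stand-in for the signed point-to-CAD deviations (mm) exported from
# the deviation-analysis tool; one value per scanned point.
rng = np.random.default_rng(0)
deviations = rng.normal(loc=0.0, scale=0.18, size=268_633)

within_02 = 100.0 * np.mean(np.abs(deviations) <= 0.2)   # share of points in +/-0.2 mm
within_10 = 100.0 * np.mean(np.abs(deviations) <= 1.0)   # share of points in +/-1.0 mm
print(f"within +/-0.2 mm  : {within_02:.1f} %")
print(f"within +/-1.0 mm  : {within_10:.2f} %")
print(f"farthest point    : {np.max(np.abs(deviations)):.2f} mm")
print(f"standard deviation: {np.std(deviations):.3f} mm")
```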
Experimental Stiffness Investigation of Composite Hoods

Hood structural stiffness evaluation usually consists of three types of loads: transversal bending, longitudinal bending and torsion. The criterion for the stiffness tests is the elastic deflection due to the applied load. In all load cases, the hood is mounted in its real position, being constrained at the supporting points: the hinges at the rear and the buffer points at the front, respectively. The supports are defined for lateral, transversal and torsional stiffness estimation by their degrees of freedom (DOF) from 1 to 6 (1, 2 and 3 stand for translational constraints along the X, Y and Z axes, respectively, and 4, 5 and 6 stand for rotational constraints about the X, Y and Z axes, respectively), as shown in Figure 12. The orientation of the Cartesian coordinate system takes the X axis as the longitudinal axis of the car, the Y axis in the transversal direction and Z as the vertical axis. The loads are applied in the vertical direction and have different intensities for each load case. A conventional steel hood is used to establish the reference data for the load intensities and the stiffness requirements. Thus, for the lateral and longitudinal stiffnesses a concentrated force between 400 and 500 N is employed, while the torsional stiffness considers a force of about 100 N [28][29][30][31]. The experimental set-up for stiffness evaluation consists of a rigid frame that ensures the supporting conditions and load application. For the manufactured hood, the supporting points are presented in Figure 13 and consist of two hinges (A, B, with restricted DOF 1, 3, 4, 6) and two bumpers on each side (D, C, E, F, with restricted DOF 3).
For lateral and transversal stiffness measurements, all supporting points were employed; in the torsional loading case, the E and F supports were removed. The frame was instrumented with a simple load application device that converts the rotation of a nut into a linear movement of a screw. A force transducer type HBM F2B 10 kN (HBM, Darmstadt, Germany) connected to an HBM Spider 8 amplifier allows real-time force value acquisition. The deflections due to the load application were measured by linear inductive displacement transducers type HBM WA-T with a 10 mm measurement range, fixed on the opposite face of the bonnet, colinear with the load. The application points of the forces and displacement transducers are also presented in Figure 13. For additional information about the strain-stress state, several strain gauges were mounted on the hood surface and connected to the same amplifier. One unidirectional strain gauge (SG1) was glued on the outer surface at the marked point (Figure 13) in the longitudinal (x) direction of the hood, and the other two strain gauges on the inner surface (SG2 in the longitudinal direction and SG3 in the transversal direction (y)). Each strain gauge was connected in a half-bridge circuit with one dummy strain gauge for temperature compensation, glued on an unloaded specimen made of similar composite materials. All measurement data (load, displacement and strains) were analyzed with the HBM CatmanEasy data acquisition and measurement software. The instrumented set-up for the hood structural analyses is presented in Figure 14.
The hood "A" was loaded progressively with concentrated forces up to 500 N for the lateral and transversal stiffness and 50 N for the torsional stiffness case, respectively. Several measurements of the load-displacement values were recorded, the stiffness value being calculated through linear interpolation. The obtained mean values are presented in Table 3. Table 4 presents the measured strain values corresponding to the maximum force applied to the hood in each loading case: lateral, transversal and torsional. The structural analysis of the composite hood manufactured with a sandwich structure (hood "B") was carried out on the same experimental set-up as for the first variant (Figure 15a). Strain gauges were glued in the longitudinal and transversal directions of the hood at two points on the upper and lower surfaces. The positions of the application points of the loads and strain gauges are presented in Figure 15b. The load for the lateral stiffness investigation was about 230 N, for the transversal stiffness 524 N, and 206 N for the torsional loading case. Tables 5 and 6 present the experimental values of the displacements and strains measured for the composite hood "B" with the sandwich structure.
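The stiffness values gathered in Tables 3 and 5 are described as linear interpolations over the recorded load-displacement readings. A minimal sketch of such a computation is given below; the force and deflection arrays are hypothetical placeholders, not the recorded data, and the stiffness is taken as the least-squares slope in N/mm.

```python
import numpy as np

# Hypothetical load-displacement readings for one load case (force in N,
# deflection in mm); the paper only reports the resulting mean values.
force = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
deflection = np.array([0.00, 0.21, 0.43, 0.66, 0.87, 1.09])

# Stiffness as the slope of the least-squares line F = k * d + b, in N/mm.
k, b = np.polyfit(deflection, force, deg=1)
print(f"stiffness k ~ {k:.0f} N/mm")
```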
Numerical Analyses of Investigated Hoods

The finite element models of the hoods are based on the CAD geometries of the hoods described above, the simulations being carried out in ANSYS Workbench 2019 software (ANSYS Inc., Canonsburg, PA, USA) with the ACP tool for the lay-up modelling of the composite materials. The composite material used in the simulation was defined as in the manufacturing process; the isotropic material properties of the layers are those experimentally determined, or computed numerically in the case of a stack-up of several layers. For computational and accuracy reasons, a shell model was chosen. The upper and lower hood faces were designed with CATIA software in order to obtain delimited shell surfaces representing the mid-surface of the composite material. Surfaces with the same material properties were joined, and the thickness was defined as symmetrical, top or bottom, with respect to the CAD geometry, in such a way as to obtain a numerical model identical to the real one. The connection between the inner and outer surfaces of the hoods is a bonded connection in the contact areas. The numerical analyses respect the established supporting points, their free and cancelled DOFs and the application points of the loads as presented in the experimental analyses, in order to determine the lateral, transversal and torsional stiffness of the hoods. For hood "A", based on the black metal design concept, a geometrically identical model was first simulated with structural steel (E = 200 GPa, G = 76.9 GPa, ν = 0.3) as the material and a shell thickness of 0.6 mm. The idea is to compare our CFRP hoods with the conventional ones manufactured from steel, a widespread material in the automotive industry. Table 7 presents the stiffness values obtained for the "A" hood made of steel. All boundary conditions and load positions are identical to those presented in the experimental analyses. In the case of the hood "A" made of CFRP materials, with the lay-up and mechanical constants previously presented, Figure 16 shows the boundary conditions and the numerically computed displacements under the load cases and supporting conditions for the evaluation of the lateral, transversal and torsional stiffness. Directional displacements were recorded at the load application points, which, for the longitudinal load case (Figure 16b), lie on the same face (upper part of the hood); for the other cases (transversal and torsional), the corresponding displacement is the one that was experimentally measured on the lower surface of the hood. For similarity reasons, these points and their corresponding displacements are marked in the numerical models and used for the corresponding stiffness computations. The values extracted from the FE simulations are recorded in Table 8 for the applied forces and their corresponding displacements, and in Table 9 for the strains at the application points of the strain gauges applied on the upper and lower surfaces of the experimentally investigated hoods. Hood "B" was numerically analyzed using the same methodology. In this case, the sandwich structure consists of more plies, as described in the previous paragraph. A homogenization procedure is necessary to obtain the elastic constants of the shell elements representing the reinforced frame and the sandwich structure of the hood (Figure 2b). The materials of these plies were defined in the ANSYS Pre-Post Composite software module, resulting in a homogenized material for each zone of the hood "B".
For the honeycomb (Nomex structure), the material constants were provided by the producer and are presented in Table 10. After homogenization, the resulting isotropic material constants for the reinforced frame and the sandwich structure are presented in Table 11. The obtained values were used for the stiffness calculation. The displacement and strain values extracted from the FE simulations of the hood "B" are presented in Tables 12 and 13 for the analyzed load cases.
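For readers without access to the ACP homogenization used here, the general idea of collapsing a ply stack into a single equivalent shell material can be illustrated with a crude thickness-weighted (rule-of-mixtures) estimate of an in-plane modulus. The sketch below is only illustrative: the layer values are placeholders and the calculation is not the procedure implemented in the ANSYS Pre-Post Composite module.

```python
# Thickness-weighted (rule-of-mixtures) estimate of an equivalent in-plane
# modulus for a layered stack. Illustrative only: the layer values below are
# placeholders, and this is not the ANSYS ACP homogenization used in the paper.
layers = [
    # (in-plane modulus E in MPa, thickness t in mm)
    (55_000.0, 0.3),  # CFRP face ply (hypothetical)
    (55_000.0, 0.3),  # CFRP face ply (hypothetical)
    (90.0, 8.0),      # honeycomb core (hypothetical)
    (55_000.0, 0.3),  # CFRP face ply (hypothetical)
    (55_000.0, 0.3),  # CFRP face ply (hypothetical)
]

total_thickness = sum(t for _, t in layers)
e_equivalent = sum(e * t for e, t in layers) / total_thickness
print(f"total thickness: {total_thickness:.1f} mm, equivalent E ~ {e_equivalent:.0f} MPa")
```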
Discussions

The comparative analysis of the experimental and numerical results is focused on the stiffness values and the strains registered at the measuring points presented above. A good agreement between the experimental measurements and the numerical (finite element) simulations is presented in Figures 18 and 19. In general, the relative deviations between the experimental and numerical values in terms of displacements, strains and stiffnesses are under 10%; in a few cases the difference is larger, mainly due to imperfect experimental conditions (supporting conditions, application point of the load or application point of the displacement transducer). The obtained results validate both the experimental set-up and the numerical models of the two investigated hoods. The strain results for the hood "A" show significant values in the lateral load case compared with the transversal and torsional cases for all strain gauges, due to the proximity and position of the application point of the load. In the case of the hood "B", the behavior is slightly different, mostly for the transversal and torsional load cases, where the generated strain is higher than for the first variant of the composite hood. This effect relates to the material structure and to the backside frame, which is no longer continuous over the hood and changes the mechanical behavior under transversal and torsional loads. Both composite hoods show a higher transversal stiffness than lateral and torsional ones, mainly due to their design, with an under-unity ratio between the longitudinal and transversal dimensions and the presence of two supporting points on each side near the headlamps. Figure 20 presents a comparative analysis between the stiffness of a steel hood, a composite hood with a similar design and a composite hood with a changed design of the backside frame. It can be noticed that the composite hoods offer superior stiffness to a similar steel hood. The total mass reduction is 22% in the case of the hood "B", having the sandwich structure and modified backside frame, relative to the BMD structure ("A"). For the exterior surface of the hood "B", which does not include the backside frame, the obtained mass was reduced by about 53%. The computed mass of the steel hood with the same geometry as in the BMD concept is 6856 g; this mass was determined from the CAD model of hood "A" using CATIA V5 software. Compared with the steel hood, CFRP hood "A" has a roughly two-fold mass reduction, and hood "B", made with the sandwich structure and CFRP faces, has a mass reduction of about 2.54 times. If the backside frame of hood "B" is eliminated, the mass reduction is about 4.21-fold. This difference is remarkable: looking at the results presented in Figure 20, one can notice that the stiffness of the CFRP hoods is higher than that of the steel one, and hood "B", the lightest structure, has the highest lateral and transversal stiffness compared with all the other hoods. For the torsional stiffness, the obtained results are comparable. The improvement in lateral stiffness of the composite hoods with respect to a similar steel hood is about 80% for the hood with the black metal design and 157% for the hood with the sandwich structure and modified backside frame. The transversal stiffness is significantly higher for the composite hoods: about three times higher for hood "A" and about 12 times for hood "B".
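The mass ratios quoted above follow from the reported masses; the snippet below recomputes them against the 6856 g steel reference and returns roughly 2.0, 2.5 and 4.2 for hood "A", hood "B" and the hood "B" outer panel, respectively.

```python
# Mass ratios relative to the equivalent steel hood, recomputed from the masses
# quoted in the text (all values in grams).
steel_mass = 6856.0
cfrp_masses = {
    "hood A (BMD concept)": 3407.0,
    "hood B (sandwich)": 2690.0,
    "hood B outer panel": 1626.0,
}
for name, mass in cfrp_masses.items():
    print(f"{name}: {steel_mass / mass:.2f} times lighter than the steel hood")
```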
The hood designed as a sandwich structure is transversally stiffer (about 2.45 times) compared with the hood made by the BMD concept. Torsional stiffness is less influenced by the change of materials: for hood "A" there is an increase of 62%, while composite hood "B" reaches a value similar to that of the steel hood. This can be explained by the similar backside frame geometry, but different material, in the case of the first composite hood, and by the changed design of this back frame in the second composite hood.

Conclusions

The paper presents a complex study regarding two CFRP front hoods for a small EV. Very few studies on hoods present a complete picture of the static behavior of a CFRP hood and propose a specific design, complete manufacturing details, experimental tests on the materials and the whole structure, and numerical studies. Two different design concepts were investigated. The first concept (hood "A") is based on an equivalent CFRP hood with a geometry similar to that of the BMD design. A special composite design, which uses a light sandwich structure for the exterior side and changes the backside frame, was adopted for the second case (hood "B"). Based on previous studies, the authors proposed new CFRP materials and stacking sequences for manufacturing the CFRP hoods; the proposed materials have not been analyzed before. Comparative studies between these concepts were performed by experimental and numerical methods. The performed analyses first implied the measurement of the elastic constants of the five different CFRP composites with the same layer architecture as those used for hood manufacturing.
The manufacturing steps and full details regarding the fabrication process are given in the paper; a 3D scanning technique revealed the manufacturing dimensional accuracy with respect to the initial CAD models. The largest deviations of the points can be observed at the backside of the frame for the CFRP hoods. The mechanical behavior of the CFRP composite hoods was analyzed in terms of static stiffness and strain monitoring at three different points for the lateral, longitudinal and torsional loading cases. The experimental set-ups include supporting conditions similar to reality, load-displacement measuring sensors and strain gauges glued on the upper and lower surfaces of the hood. Numerical simulations of the investigated hoods with FE software revealed good agreement with the experimental data and validate both the experimental set-up and the developed numerical models. In summary, the main conclusions of the article are as follows:
• The vacuum bag technology and autoclave curing process is the best procedure to produce very high quality CFRP parts, a fact demonstrated by the microstructural analysis and the high dimensional accuracy of the manufactured hoods.
• A composite hood designed to copy the metal parts, by reproducing the same geometric shape and reinforcing ribs, brings the advantage of mass reduction with similar or higher stiffness, but limits the benefits that composite structures offer.
• The first composite hood, manufactured based on BMD, is 2.54 times lighter than a similar steel hood, and the improvement in lateral stiffness of this composite hood with respect to a similar steel hood is about 80%. The transversal stiffness is a few times higher, while the torsional stiffness shows an increase of 62% over a similar steel hood.
• The use of sandwich structures to achieve a stiff and light structure, and a balanced stacking sequence of layers in order to respond to complex requirements, are the advantages of an FRP design concept that allows, in the case of a car hood, replacement of the ribs and reinforcements and an important additional mass reduction.
• The second hood concept is 22% lighter than the first one. In this case, the outer hood panel, subjected to structural improvements through the layer architecture, offers a mass reduction of about 53%. The lateral stiffness is also improved by 42%, while the transversal stiffness is significantly higher. The torsional load case revealed a smaller value, but not lower than for a similar steel hood.
The obtained results are consistent and will provide support for future static and dynamic mechanical analyses of composite hoods, both experimentally and by the finite element method. In the future, improvements of the manufacturing technology for cost reduction (types of fabric, stacking sequence) will also be included, in addition to a closer analysis of the hinge areas and the backside frame. The main conclusions are already being implemented for other components, such as vehicle doors, front and rear fenders and a trunk lid fabricated from CFRP composites.
The Beneficial Effect of the COVID-19 Vaccine Booster Dose among Healthcare Workers in an Infectious Diseases Center

Introduction: Healthcare workers in Poland received a booster dose of the BNT162b2 mRNA vaccine (Pfizer-BioNTech, Manufacturer: Pfizer, Inc., and BioNTech; Moguncja, Germany) at the beginning of October 2021. Here, we report on the preliminary results of an ongoing clinical study into the antibody response to SARS-CoV-2 of healthcare workers previously exposed to the virus, with or without evidence of past infection, in the Hospital for Infectious Diseases in Warsaw before and after the vaccine booster dose. Methods: Blood samples were collected on the day the vaccine booster dose was administered and again 14 days later. The levels of SARS-CoV-2 IgG antibodies (against the n-protein, indicative of disease) and S-RBD (indicative of a response to vaccination) were measured. Results: One hundred and ten healthcare workers from the Hospital for Infectious Diseases were included in the study. The percentage of subjects with a positive test for anti-n-protein IgG antibodies at both time points remained unchanged (16, 14%), while a statistically significant increase in the percentage of subjects producing high levels of S-RBD antibodies (i.e., >433 BAU/mL) was observed (from 23, 21% to 109, 99%; p = 0.00001). Conclusions: The results of the study indicate that the booster dose of the vaccine significantly increases the percentage of people with high levels of S-RBD antibodies, regardless of previous contact with the virus, which may indicate greater protection against both the disease and a severe course of COVID-19.

Introduction

The novel coronavirus (SARS-CoV-2) responsible for coronavirus disease (COVID-19) was first detected in late 2019 in China [1]. SARS-CoV-2 belongs to the Betacoronaviridae, which also includes other viruses that caused outbreaks in the past (SARS-CoV and MERS-CoV). Within these viruses, mutations may occur not only due to frequent recombination, but also due to interspecies transmission [2]. Therefore, during the pandemic, several new SARS-CoV-2 variants emerged, including alpha, beta, gamma, delta, and omicron [3]. The infection usually affects the respiratory tract mildly; however, it may also have a severe course with acute respiratory distress syndrome and multiple organ failure [4]. The first cases in Poland were diagnosed at the beginning of March 2020, and by January 2022, over 4,220,000 cases and 100,000 deaths had been reported [5]. On 27 December 2020, Poland introduced a mass vaccination program using the BNT162b2 mRNA vaccine (Pfizer-BioNTech) [6]. Healthcare workers (HCWs) were prioritized for COVID-19 immunization. The vaccine was given in two doses, three weeks apart. At the beginning of October 2021, a booster dose was administered to fully vaccinated HCWs. It has been proven that anti-nucleocapsid antibodies serve as a marker of previous SARS-CoV-2 infection [7], and they can also be used as an indicator of past natural SARS-CoV-2 infection [8]. Meanwhile, a strong immune response to the virus's spike protein, particularly the receptor-binding domain (RBD) of the spike protein (which contains neutralizing epitopes), is considered to be a response provoked by SARS-CoV-2 vaccines [9].
Here, we report the preliminary results of an ongoing clinical study on the effectiveness of a booster dose of the Pfizer mRNA vaccine among healthcare workers in the Hospital for Infectious Diseases in Warsaw.

Methods

The study participants were adults who had previously (8-9 months earlier) been vaccinated with two doses of the BNT162b2 mRNA vaccine (Pfizer-BioNTech). Blood samples were collected on the day the vaccine booster doses were administered, and again 14 days later (October-November 2021) (Figure 1). The levels of SARS-CoV-2 IgG antibodies (against the n-protein, indicative of disease) and S-RBD antibodies (indicative of a response to vaccination) were measured using MAGLUMI SARS-CoV-2 IgG and MAGLUMI SARS-CoV-2 S-RBD IgG assays. According to the manufacturer's information, MAGLUMI® SARS-CoV-2 S-RBD IgG kits are 99.6% specific and 100% sensitive. The kits have been approved for sale in the European Union and have received a CE certificate. The group of COVID-19-recovered participants was distinguished based on positive PCR test results for SARS-CoV-2 from any time prior to the booster dose. Data on concomitant diseases were collected on the basis of a survey conducted among the study participants. In the statistical analyses, non-parametric tests were used as appropriate: the Chi2 test to compare categorical variables and the Wilcoxon test to compare dependent numeric variables. A p value of <0.05 was considered significant. The study was approved by the Bioethical Committee of the Medical University of Warsaw (Nr KB/2/2021). The study was funded from a research grant issued by the Medical Research Agency (Nr 2021/ABM/COVID19/WUM).
Results

One hundred and ten healthcare workers from the Hospital for Infectious Diseases in Warsaw were included in the study. Most participants were female (87, 79.1%) and were working in direct contact with patients (83, 74.5%). In terms of professions, there were 31 doctors (28.2%), 21 nurses (19.1%), and 31 from other professions (28.2%). Their median height was 1.65 m (IQR: 1.6-1.73 m), weight 70 kg (IQR: 61-84 kg), and BMI 25.15. Our analysis revealed that the presence of at least one concomitant disease was more likely to occur in the COVID-19-recovered group in our hospital. Within this group, two people had three concomitant diseases concurrently (one person had hypertension, type II diabetes mellitus, and a history of myocardial infarction; the second had asthma, hypertension, and was undergoing immunosuppressive treatment due to an interstitial lung disease), and seven people had one concomitant disease (hypertension (five people), hyperthyroidism, and one was a bone marrow transplant recipient). As for the group of subjects who had not had COVID-19 in the past, two participants had more than one disease (one had hypertension and asthma, and the second had hypertension and a history of myocardial infarction); the rest had hypertension (seven people), immunosuppressive treatment, asthma, obesity, hypothyroidism, or chronic hepatitis. The percentage of healthcare workers with a positive test for anti-n-protein IgG antibodies at both time points did not differ (16, 14%), while a statistically significant increase in the percentage of people with very high levels of S-RBD antibodies (i.e., >433 BAU/mL) was observed. On the day the booster dose of the vaccine was administered, 23 (20.9%) of the subjects had S-RBD antibodies > 433 BAU/mL, while two weeks later, 109 (99%) of the subjects had such antibody levels, p = 0.00001 (Table 2, Figure 2).
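To illustrate the two tests named in the Methods, the fragment below applies SciPy's chi-square test to the high-titre counts just reported (23 of 110 before the booster versus 109 of 110 after) and the Wilcoxon signed-rank test to a pair of placeholder titre vectors; the exact table layout and data used by the authors are not specified, so this is a sketch rather than a reproduction of the reported p-values.

```python
from scipy import stats

# Chi-square test on the 2x2 table of high-titre (>433 BAU/mL) vs lower-titre
# subjects before and after the booster (counts from the Results: 23/110 and
# 109/110). The layout is illustrative; the authors' exact table is not given.
table = [[23, 110 - 23],
         [109, 110 - 109]]
chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(f"chi-square p-value: {p_value:.2e}")

# Paired numeric comparisons (e.g. titres before/after in the same subjects)
# would use the Wilcoxon signed-rank test; the values below are placeholders.
titre_before = [120.0, 85.0, 300.0, 54.0, 410.0, 95.0, 150.0]
titre_after = [980.0, 760.0, 1500.0, 640.0, 2100.0, 830.0, 1200.0]
print(f"Wilcoxon p-value: {stats.wilcoxon(titre_before, titre_after).pvalue:.3f}")
```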
Discussion

Recent studies have shown the short-term efficacy of a two-dose regimen of the BioNTech/Pfizer mRNA BNT162b2 vaccine against COVID-19. Analyses have confirmed this efficacy in both clinical trials and real-world settings, using a two-dose schedule with a target interval of three weeks between doses [6,10]. On the other hand, there is evidence from Israel of waning SARS-CoV-2 immunity a few months after receiving the second vaccine dose. After a successful vaccination campaign starting in December 2020, more than half the adult population received two doses of the BNT162b2 vaccine within three months. In June 2021, an increase in the number of positive SARS-CoV-2 PCR tests was observed among both vaccinated and unvaccinated persons. Genetic analyses showed that the delta variant was responsible for most cases (98%) of the observed breakthrough infections during that time [11,12]. Reports of waning vaccine efficacy came from healthcare organizations all over the world [13][14][15]. In addition, there was a decrease in neutralizing antibody titers during the first six months after receiving the second dose of the vaccine [16]. Recent analyses have suggested that a booster dose of the BNT162b2 vaccine reduced the rates of both confirmed infection and severe COVID-19 illness [17]. The rate of confirmed infection was significantly lower than in the non-booster group. Moreover, in a study by Arbel et al., participants who received a booster dose of BNT162b2 at least five months after the second dose had 90% lower mortality rates due to COVID-19 than participants who did not receive a booster dose of the vaccine [18]. Both of the above studies reported no side effects during the research period. The protection gained by a booster dose is relevant for public health, especially in the context of waning vaccine efficacy. The booster dose and the expected reduction in the incidence and severe course of COVID-19 highlight the important role of maintaining immunity in the general population, which is critical for public health, especially in terms of saving lives. Study reports show that the vaccine's efficacy against primary symptomatic COVID-19 was directly related to an increase in anti-SARS-CoV-2 receptor-binding domain (RBD) IgG; these antibodies also appear to correlate better with virus neutralization [19]. In their study, Carta et al. strongly underlined that only serology assays specific for antibodies that target regions within the spike protein (e.g., the RBD) can be used to evaluate the immune response to the BNT162b2 vaccine [20]. Moreover, other studies also supported these findings, stating that only antibodies against the spike protein will be elicited after the BNT162b2 vaccine [21]. In our study, a significant increase in the percentage of people with very high levels of anti-S-RBD antibodies (>433 BAU/mL) was observed after a booster dose of the BNT162b2 mRNA vaccine (Pfizer-BioNTech). Narayan et al. analyzed data from 20 different hospitals and collected a study group of 14,837 HCWs. They underlined that vaccination against COVID-19 is effective; however, their analysis included HCWs vaccinated with only two vaccine doses [22]. Poukka et al. also analyzed two-dose vaccine efficacy in HCWs and likewise concluded that a vaccine booster dose may be beneficial for COVID-19 prevention in HCWs [23]. However, the efficacy of a booster dose in HCWs should be analyzed, given that the risk of workplace exposure to COVID-19 is higher for HCWs than for the general population [24]. In our analyses, there was a statistically significant increase in the titer of S-RBD antibodies when comparing the values before and after the vaccine booster dose (p = 0.00014) among COVID-19-recovered participants. RBD-specific antibodies with strong antiviral activity were found in studies of COVID-19-recovered participants in the pre-vaccine period, suggesting that a vaccine designed to raise such antibodies could be very effective [25]. This was the reason high expectations were placed on vaccines. Indeed, the data show that the COVID-19 vaccine can elicit specific antibody titers and neutralizing antibody concentrations above those observed in COVID-19 human convalescent serum in the first 100 days after vaccination [26]. Further analyses are needed in this area. Our study was conducted among healthcare workers, who are generally healthy and relatively young. This could be a limitation of our study, as this group was not compared with the general population. Despite these limitations, our findings support the recommendation of providing the COVID-19 vaccine booster dose to the broad population. HCWs were prioritized for COVID-19 immunization in our country, and they were a natural choice for obtaining initial results quickly.
To our knowledge, this is the first study to evaluate the beneficial effects of the booster dose based on the immune response in the Polish population.

Conclusions

The preliminary results of our study indicate that the vaccine booster dose significantly increases the percentage of people with high levels of anti-SARS-CoV-2 S-RBD IgG antibodies. This may indicate better protection against both the disease and a severe course of COVID-19, regardless of previous contact with the virus. Future research is needed to evaluate the long-term efficacy of the booster dose against current and emerging SARS-CoV-2 variants.
CellProfiler: Novel Automated Image Segmentation Procedure for Super-Resolution Microscopy

Background: Super resolution (SR) microscopy has enabled cell biologists to visualize subcellular details down to 20 nm in resolution. This breakthrough in spatial resolution has made image analysis a challenging procedure. Direct and automated segmentation of SR images remains largely unsolved, especially when it comes to providing meaningful biological interpretations. Results: Here, we introduce a novel automated image analysis routine based on Gaussian blurring followed by a segmentation procedure using CellProfiler software (www.cellprofiler.org). We tested this method and succeeded in segmenting individual nuclear pore complexes stained with gp210 and pan-FG proteins and captured by two-color STED microscopy. Test results confirmed the accuracy and robustness of the method even in noisy STED images of gp210. Conclusions: Our pipeline and novel segmentation procedure may help end-users of SR microscopy to analyze their images and extract biologically significant quantitative data from them in user-friendly and fully automated settings.

Background

Super resolution (SR) microscopy has unlocked new opportunities for cell biologists to investigate cells and cellular functions at unprecedented resolution, down to a few nanometers, which requires biologists to re-think both new and previous discoveries [1]. By visualizing single molecule clusters at nanometer resolution, SR microscopy has made image analysis a more complicated practice. Current image analysis of SR microscopy data relies mostly on complex analytical tools and MatLab (www.mathworks.com) based routines. Automated grouping of molecule clusters into biologically meaningful objects by direct segmentation remains largely difficult; however, density algorithms optimized for Single Molecule Localization Microscopy (SMLM) were the first automated attempt to segment and interpolate object boundaries directly from SMLM images, using local adaptive density kernels to merge and separate molecule clusters into meaningful objects [2]. Image segmentation algorithms that use shape or intensity data will identify single super-resolved clusters, but will not perceive a group of separate clusters as one functionally active domain. Automated algorithms that rely on adaptive density information to segment and interpolate boundaries have only been shown to work well with SMLM on reference structures with continuous densities, e.g., mitochondria and microtubules [2]; computational performance and accuracy on structures with intermittent densities and profiles, e.g., the nuclear pore complex, remain lacking. For such problems, most biological studies involving SR microscopy techniques rely on manually defined regions of interest (ROIs) with a geometry that best fits the structures investigated, e.g., elliptical, circular, square or rectangular [3][4][5]. This type of manual work is very tedious, but it is the most reliable alternative so far. SR microscopy techniques provide a much narrower Gaussian point spread function (PSF) of the focused scanning area, enabling us to resolve features separated by less than the diffraction limit of light, which is approximately 200-300 nm in (xy) and up to 500-700 nm along the (z) axis [6]. Smoothing is applied to images to average signals using a smoothing function.
Interestingly, applying a Gaussian can enlarge the PSF of SR images and drive the resolution backward toward the diffraction limit of light in a controlled fashion by mathematical means, e.g., by increasing the width of the Gaussian. An intuitive way to segment SR images into meaningful groups of molecules would therefore be Gaussian smoothing, to merge proximate signals within the artifact radius of the Gaussian. Structures and groups of molecule clusters can thus be merged when the distance separating them is below the Gaussian width of the applied filter. Once this cluster grouping is achieved, segmentation remains to be done. Merged super-resolved structures might show intensity variation throughout the whole object, because each of the merged clusters has a different intensity profile. For that reason, it is logical to use algorithms that use shape information to segment clumped objects, thereby avoiding over-segmentation problems [7]. Once groups of clusters in an image are perceived as biologically significant objects by means of Gaussian blurring, it remains possible to segment the actual SR image to extract single molecule cluster information. Associating the objects found in the SR image with those in its Gaussian blur then provides meaningful data about each group of molecules. Interestingly, automated imaging freeware tools like CellProfiler (www.cellprofiler.org) possess all the previously described algorithms and analysis paradigms [8], which should make it possible to implement Gaussian blur filters and combine image segmentation procedures to extract meaningful data about clusters of molecules directly from SR images in a fully automated way. In this paper, we introduce and test a novel image analysis procedure for SR microscopy, which depends on Gaussian blurring to merge super-resolved structural details in SR images into biologically meaningful objects. Followed by a segmentation process, it was then possible to interpolate object boundaries in the blurred images. Relating objects from both the SR images and their blurred versions allowed direct reading and quantification of cluster information per group of molecules. We used simulation data and CellProfiler to explain how the analysis works, and we applied our approach to study structures of the nuclear pore complex, to show our approach running on real data.
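As a scriptable illustration of the grouping idea described above (outside CellProfiler), the minimal Python sketch below blurs a sparse SR-like image until nearby spots merge, then thresholds and labels the blurred copy to obtain one object per group of clusters. The synthetic image and the filter widths are arbitrary choices made for the example.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters

# Synthetic SR-like image: two narrow spots a few pixels apart.
sr_image = np.zeros((64, 64))
sr_image[30, 28] = sr_image[30, 36] = 1.0
sr_image = ndi.gaussian_filter(sr_image, sigma=1.0)

# Step 1: blur with a Gaussian wide enough to merge spots closer than ~2*sigma.
merged = ndi.gaussian_filter(sr_image, sigma=5.0)

# Step 2: threshold and label the blurred copy; each label is one group of clusters.
group_mask = merged > filters.threshold_otsu(merged)
groups, n_groups = ndi.label(group_mask)
print("number of grouped objects:", n_groups)   # 1 for this example
```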
Basic Simulation of SMLM Image and Automated Cluster Analysis by CellProfiler

We created a basic simulation image of two adjacent single molecule clusters that simulate SMLM data, e.g., PALM or STORM images of active zone Bruchpilot protein clusters at synapses of neuromuscular junctions (NMJs) of Drosophila [3,9]. We used CellProfiler to automatically count the number of clusters per active zone by designing a pipeline tree (Fig. 1b). First, we used a Smooth module to apply a Gaussian filter with an artifact diameter capable of merging only the nanoscopic clusters of both virtual active zones, while still making it possible to delineate the two active zones (Fig. 1b). Next, we used an "IdentifyPrimaryObject" module and set MCT thresholding as indicated in the materials and methods. De-clumping of individual active zones was done by searching for shape indentations in the fused objects (Fig. 1b), while detection of single molecule clusters per active zone was done by another automated "IdentifyPrimaryObject" module to find single molecule clusters (Fig. 1c). In the latter object search module, we used the same thresholding strategy; however, to de-clump individual single molecule clusters we used maximum intensity to divide clumped objects (Fig. 1c). Since both identified sets of objects come from the same image, re-alignment was not required and we could accurately relate the objects using a "RelateObjects" module. This made it possible to quantitatively count the number of clusters per active zone (Fig. 1d).
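The same Smooth / IdentifyPrimaryObjects / RelateObjects logic can be approximated in a few lines of Python with scikit-image, as sketched below: single clusters are segmented from the SR image and assigned to the blurred "group" object underneath their centroids. This is an analogy for readers who prefer scripting, not the CellProfiler implementation itself; `sr_image` and `groups` are assumed to be arrays like those in the previous sketch.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, measure

def clusters_per_group(sr_image: np.ndarray, groups: np.ndarray) -> dict:
    """Count single clusters segmented from the SR image inside each blurred
    'group' object (label image), analogous to CellProfiler's RelateObjects."""
    cluster_mask = sr_image > filters.threshold_otsu(sr_image)
    cluster_labels, _ = ndi.label(cluster_mask)
    counts = {}
    for region in measure.regionprops(cluster_labels):
        row, col = (int(round(c)) for c in region.centroid)
        parent = int(groups[row, col])   # group label under the cluster centroid
        if parent > 0:                   # ignore clusters outside any group
            counts[parent] = counts.get(parent, 0) + 1
    return counts
```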
Intriguingly, the quantification revealed that the bandpass filter, but not smoothing, was sufficient to improve the accuracy of cluster segmentation: from the bandpass-filtered image we obtained a peak value of 8 clusters per donut (n = 167 of a total of 847 nuclear pores showed 8 subunits), whereas over-segmentation was prominent in the raw image and even after smoothing, as indicated by a shift of the histogram peak toward the right, i.e., more clusters per donut (Fig. 4d). Segmentation of the donuts themselves showed no evident change before and after pre-processing (Fig. 4c). Our final results agree with previous super-resolution reports on gp210 and the pan-FG proteins.

Discussion

Manually defining and tracking biologically meaningful ROIs in images obtained by SR microscopy is a time-consuming and largely subjective task, yet it is still the most commonly used approach in SR image analysis [3-5,12]. We designed pipelines that perform automated quantitative analysis of SR microscopy images based on a novel procedure: Gaussian fusion of super-resolved clusters into meaningful objects. We were able to perform cluster analysis directly on nuclear pore complex proteins by overlaying the Gaussian-fused objects (donuts) with the segmentation results from the original SR images (gp210 clusters) to estimate the number of gp210 clusters per nuclear pore. The method is potentially valuable for cell biologists who want to extract quantitative data from their SR microscopy data, with the advantage that it runs in fully automated mode via CellProfiler and requires no special programming skills. Although we have not presented test results on other cellular structures, we expect the method and its scope of application to grow as it is tested on further biological structures and on research questions that call for quantification at high resolution; we are pursuing this in collaborative work and are obtaining promising results. In this paper we mainly report the novelty of the procedure and make the pipelines publicly available to use, modify and develop. We also suggest that image pre-processing (e.g., bandpass filtering) and critical inspection of SR image quality should always be considered carefully and described clearly before any quantification results are published. We expect further advances in user-friendly, automated image analysis solutions optimized for SR microscopy to follow.

Fig. 2 Automated counting of gp210 and pan-FG protein clusters per nuclear pore complex. a Overview of the pipeline module sequence. b Influence of Gaussian blurring on the structure of STED-resolved gp210. c Two-colour raw STED image of immunostained gp210 (red) and pan-FG (green) and the image series produced after segmentation. d Zoomed-in segmentation results from the CellProfiler window showing two neighbouring nuclear pores (left), gp210 clusters (middle) and pan-FG clusters (right). e Histogram plots of the quantitative segmentation results. f Current model of the nuclear pore complex, with two-direction arrows connecting the model with the experimental results.

Conclusion

The proposed computational image segmentation procedure is a novel method optimized for super-resolution microscopy. It could allow direct segmentation of super-resolved features in a fully automated and user-friendly setting.
We successfully quantified the composition of the amphibian nuclear pore complex stained by gp210 and pan-FG proteins imaged with STED microscopy and CellProfiler was used to implement our method in fully automated mode. ImageJ and SMLM Data Simulation Image Gaussian filters applied in (Fig. 2) was done using ImageJ "Gaussian Blur" Plug-in. SMLM simulation in (Fig. 1) was made by brush tool in ImageJ to draw single molecule clusters that resemble Bruchpilot protein clusters at Drosophila neuromuscular junction (NMJ) active zones [3]. STED Images and Pre Processing STED images used in (Figs. 2, 3 and 4) of gp210 and pan-FG proteins of nuclear pore complex in ovarian amphibian cells were imaged by a two color STED microscope and were valuable gifts from Fabian Göttfert (Prof. Stefan Hell's Lab) [11]. Pre-processing of STED images via smoothing 3x3 pixels averaging, bandpass or Gaussian filters, and linear intensity scaling (for clear visualization) were done in ImageJ and all images were stored as tiff colored (RGB) formats. CellProfiler Pipelines and Data Analysis We used CellProfiler (version 2.1.1) to design and execute pipelines used in this paper. Gaussian filter was used for smoothing images by running smooth modules. Filter artifact diameter was set according to requisite Fig. 4 Comparing segmentation results on raw STED image, with and without 3x3 smooth or band pass filters. a, b Segmentation image of gp210 donuts (a) and gp210 clusters (b). c Number of donuts segmented from raw image, smoothed image, and band pass filtered image. d Over segmentation artifacts of gp210 clusters per gp210 donuts significantly reduced in image pre-processed by a bandpass filter using ImageJ effect, either to merge groups of structures/molecules into biologically meaningful ensembles, or to blur out noise to aid segmentation. "IdentifyPrimaryObject" module was used to find objects of interest in the analyzed images. Unless stated otherwise, Maximum Correlation Thresholding (MCT) was implemented with 1 pixel Gaussian smoothing [13]. Separating clumped objects was carried based on shape of objects or maximum intensity values within a radius range depending on objects of interest. Distance for lower maxima suppression, which aid to separate clumped objects was set manually to median radius of objects sizes. In that way most of clumped objects in test images did not display over segmentation artifacts. To associate segmented objects found in SR images, their respective blurred copy, or any images in different channels in case of two color imaging, we applied "RelateObjects" automated module. According to analysis purpose we set objects as either 'parents' , or 'children' and data were plotted directly by "DisplayHistogram" modules. Data were also exported to spreadsheets using "ExportToSpreadsheet" module to use them in further analysis and to plot data. We used OriginPro9.1 to plot data.
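The bandpass pre-processing mentioned above was done with ImageJ's FFT-based "Bandpass Filter" plug-in. As a rough script-level stand-in, a difference-of-Gaussians filter gives a similar band-limiting effect, suppressing both single-pixel noise and slowly varying background. The sketch below is only an approximation of that idea, not a re-implementation of the plug-in, and the two sigma values and the file name are assumptions that would need tuning to the pixel size of the images.

```python
# Sketch: approximate bandpass pre-processing of a noisy STED image with a
# difference-of-Gaussians filter (small sigma removes pixel noise, large sigma
# removes smooth background), followed by linear intensity rescaling for display.
import numpy as np
from skimage import filters, exposure

def bandpass_dog(image, sigma_small=1.0, sigma_large=8.0):
    img = np.asarray(image, dtype=float)
    low_pass_fine = filters.gaussian(img, sigma=sigma_small)    # keeps structures, removes noise
    low_pass_coarse = filters.gaussian(img, sigma=sigma_large)  # keeps only smooth background
    band = low_pass_fine - low_pass_coarse                      # band-passed image
    return exposure.rescale_intensity(band, out_range=(0.0, 1.0))

# Example usage (file name is a placeholder):
# sted = skimage.io.imread("gp210_sted.tif", as_gray=True)
# filtered = bandpass_dog(sted, sigma_small=1.0, sigma_large=8.0)
```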
Characterization of viable but nonculturable state of Campylobacter concisus Campylobacter concisus is an opportunistic bacterial pathogen linked with a range of human diseases. The objective of this study was to investigate the viable but nonculturable (VBNC) state of the bacterium. To induce the VBNC state, C. concisus cells were maintained in sterilized phosphate-buffered saline at 4°C for three weeks. The VBNC cells were monitored using quantitative analysis by propidium monoazide (PMAxx) coupled with quantitative real-time PCR (PMAxx-qPCR), targeting the DNA gyrase subunit B gene. The results demonstrated that C. concisus ATCC 51562 entered the VBNC state in 15 days, while ATCC 51561 entered the VBNC state in 9 days. The viable cell counts, assessed by PMAxx-qPCR, consistently remained close to the initial level of 107 CFU ml−1, indicating a substantial portion of the cell population had entered the VBNC state. Notably, morphological analysis revealed that the VBNC cells became coccoid and significantly smaller. The cells could be resuscitated through a temperature increase in the presence of a highly nutritious growth medium. In conclusion, under environmental stress, most C. concisus cells converted to the VBNC state. The VBNC state of C. concisus may be important for its environmental survival and spread, and the presence of VBNC forms should be considered in environmental and clinical monitoring. Introduction Microorganisms continually face dynamic environmental conditions, necessitating them to employ diverse survival strategies through adaptive adjustments in their physiology.This adaptation allows them to optimize resource utilization while preserving their structural and genetic integrity, ultimately enhancing their resilience and tolerance to adverse conditions. Preparation of heat-killed cell suspensions Bacterial cells grown on CA plates were harvested and resuspended into sterile phosphate-buffered saline (PBS, pH 7.4) to a concentration of 10 7 CFU ml −1 .One millilitre of cell suspension was subjected to heat treatment at 90°C for 5 min to kill the cells.The effect of the heat killing was confirmed by spreading 100 µl of the suspension on CA plates and incubating them at 37°C for 72 h under microaerobic conditions.The absence of colony formation confirmed the effectiveness of the heat treatment. Optimization of propidium monoazide concentration An improved version of photoactivable DNA binding dye, PMAxx, was acquired from Biotium (USA).PMAxx was added to bacterial suspensions at varying concentrations, and the mixture was incubated in the dark with constant shaking at 150 rpm for 10 min.Subsequently, DNA crosslinking was performed by exposing the tubes horizontally to a 500 W halogen light at a distance of 20 cm for 15 min.The mixture was then centrifuged at 15 000 × g for 10 min, and a single wash with sterile distilled deionized water was performed to eliminate any remaining PMAxx before DNA extraction.The optimal concentration of PMAxx required for qPCR experiments was determined using live and heat-inactivated C. concisus cells.Separate samples containing 10 7 CFU ml −1 of live or heat-inactivated cells were treated with PMAxx at concentrations of 0, 10, 20, 30, 40 and 50 µM prior to DNA extraction. Real-time quantitative PCR amplification Determination of total cell number and viable cells were performed using qPCR and PMAxx-qPCR, respectively.Primers targeting the gyrB gene, which encodes the subunit B protein of DNA gyrase, were used for C. 
concisus detection, as previously described [35]. The primer sequences used were as follows: Pcisus5-F (5′-AGCAGCATCTATATCACGTT-3′) and Pcisus6-R (5′-CCCGTTTGATAGGCGATAG-3′). The qPCR reactions were performed in a total volume of 12 µl, comprising 1× SensiFAST SYBR Mix (Bioline), 5 µl of DNA template, 100 nM of each primer and sterile water as required. The qPCR experiments were conducted on a CFX Connect Real-Time System (Bio-Rad). To account for potential contamination, a negative control (ddH2O) was included in each qPCR run, and each sample was tested in triplicate. The temperature cycle started with 300 s at 95°C (initial polymerase activation) followed by 40 cycles of 15 s at 95°C (denaturation), 15 s at 55°C (annealing) and 60 s at 72°C (extension). The PCR generated an amplicon of 344 bp.

Construction of standard curve from genomic DNA

To establish a standard curve for determining the efficiency of the qPCR, a 10-fold serial dilution of genomic DNA from C. concisus ATCC 51562 was prepared in ultrapure water, giving final DNA concentrations equivalent to cell contents ranging from 10^0 to 10^8 CFU ml−1. The efficiency, slope and correlation coefficient of the standard curves were determined by plotting the Cq values against log10 CFU ml−1. The amplification efficiency was calculated using the equation E = [10^(−1/s) − 1] × 100%, where E is the efficiency as a percentage and s is the slope of the standard curve. The limit of detection (LOD) of the assay was defined as the lowest CFU ml−1 of C. concisus that yielded Cq values below 35. Each run of the standard curve was conducted in triplicate, and the mean standard curve was calculated by averaging the cycle threshold (Cq) values and their respective standard deviations (s.d.) across the triplicate measurements.

Induction of viable but nonculturable state

The induction of the VBNC state at low temperature under limited nutrient conditions was performed following the protocol used for C. jejuni, with some modifications [5,36]. C. concisus cells were washed and resuspended in PBS to a concentration of 10^7 CFU ml−1. The cell suspensions were divided into three sets (Group C, culturable cells; Group T, total cell count; and Group V, VBNC cell count) and kept at 4°C to induce the VBNC state (figure 1). Entry of cells into the VBNC state was monitored using three methods: conventional culture, qPCR and PMAxx-qPCR. Every second day, samples were removed, vortexed briefly and tested.
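The standard-curve calculation described above reduces to a linear regression of Cq on log10(CFU ml−1) followed by E = [10^(−1/s) − 1] × 100%. A minimal sketch is given below; the Cq values are placeholders chosen only to illustrate the computation, not measured data.

```python
# Sketch: fit the qPCR standard curve (Cq versus log10 CFU/ml) and derive
# amplification efficiency E = [10^(-1/slope) - 1] * 100 %.
import numpy as np

# Placeholder example data: 10-fold dilution series from 10^2 to 10^8 CFU/ml
log_cfu = np.array([2, 3, 4, 5, 6, 7, 8], dtype=float)
cq      = np.array([33.1, 29.8, 26.4, 23.1, 19.8, 16.4, 13.1])  # illustrative Cq values

slope, intercept = np.polyfit(log_cfu, cq, 1)
predicted = slope * log_cfu + intercept
ss_res = np.sum((cq - predicted) ** 2)
ss_tot = np.sum((cq - cq.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

efficiency = (10 ** (-1.0 / slope) - 1.0) * 100.0  # in percent

print(f"slope = {slope:.3f}, intercept = {intercept:.2f}")
print(f"R^2 = {r_squared:.3f}, efficiency = {efficiency:.1f} %")
```

With the placeholder values above, the slope is close to −3.33, which corresponds to an efficiency of roughly 100%, the behaviour expected of a well-performing assay.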
Determination of culturable, total and viable but nonculturable cell number The culturable cell number (Group C) was counted every second day for 23 days by a traditional plate counting method.Briefly, a 10-fold serial dilution was performed in 0.85% (w/v) sterile NaCl solution and 0.1 ml aliquots were spread on CA plates and incubated at 37°C for 48 h under microaerophilic conditions.Colonies were then counted to calculate CFU ml −1 .The second set of samples (Group T) remained untreated and were used for the overall cell quantification by qPCR.The third set of samples (Group V) underwent PMAxx treatment under optimal conditions.The cell pellets from both the second and third groups were washed with 0.85% (w/v) normal saline prior to genomic DNA extraction (figure 1).When the culturable cell concentration was <1 CFU ml −1 , it was considered that all the viable cells quantified by PMAxx-qPCR were VBNC cells.As validation, 0.1 ml of the putative VBNC cell suspensions was spread on CA plates and incubated under the optimum microaerophilic conditions for 72 h to confirm that no cells could be recovered. Resuscitation of viable but nonculturable cells Upon loss of culturability on CA, the samples underwent a recovery process for further analysis.This involved subjecting the samples to temperature upshift and enrichment in Bolton Broth (BB) followed by plating on supplemented agar.This method has been previously used in C. jejuni [36].In brief, to initiate the resuscitation process, a 1 ml aliquot of the cell suspension was combined with 5 ml of sterile BB (Oxoid CM0983) and incubated under microaerobic conditions at 37°C with gentle agitation for 48 h.Following the enrichment period, a 0.1 ml portion of the suspension was plated on supplemented CA agar, which was then incubated at 37°C for 48 h prior to quantification. Pyruvate is known to be a factor that promotes the resuscitation of VBNC cells [14].A combination of ferrous sulfate, sodium metabisulfite and sodium pyruvate has been reported to promote C. jejuni viability in culture media [37].Therefore, the supplemented CA was prepared by incorporating Campylobacter agar base (Oxoid CM0689) along with Campylobacter growth supplement (Oxoid SR0232), comprising ferrous sulfate, sodium metabisulfite and sodium pyruvate. Cell morphology analysis Cell morphology was investigated using various microscopic techniques.Initially, cells were stained with the LIVE/DEAD Baclight Bacterial Viability Kit (L13152, Invitrogen) and observed under a confocal laser scanning microscope.Additionally, the examination of cell morphology was performed using a transmission electron microscope.To prepare the samples for transmission electron microscopy, bacterial cells were cultivated in Columbia broth.A carbon-coated parlodion film-covered grid was gently placed onto a 20 µl sample drop, allowing the sample to adsorb to the grid for 2 min.Subsequently, the grid was carefully lifted from the drop and transferred onto a 1% phosphotungstic acid solution (pH 7.0) for 1 min.To remove excess stain, the grid was carefully touched against a piece of Whatman filter paper, ensuring gentle contact.The coated samples were then examined using a JSM-840 transmission electron microscope (JEOL). 
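The three quantification routes just described boil down to two small calculations: plate counts are converted to CFU ml−1 from the dilution factor and the plated volume (Group C), and Cq values from qPCR or PMAxx-qPCR are interpolated on the standard curve (Groups T and V). The sketch below shows both steps; the standard-curve coefficients and example numbers are illustrative assumptions, not the values measured in this study.

```python
# Sketch: convert a plate count to CFU/ml (Group C) and a Cq value to log10 CFU/ml
# via the fitted standard curve (Groups T and V). All numbers are illustrative.

def cfu_per_ml(colonies, dilution_factor, volume_plated_ml=0.1):
    """CFU/ml = colonies / (dilution * plated volume); e.g. the 10^-5 dilution -> 1e-5."""
    return colonies / (dilution_factor * volume_plated_ml)

def log_cfu_from_cq(cq, slope=-3.33, intercept=39.8):
    """Interpolate a sample Cq on the standard curve Cq = slope*log10(CFU/ml) + intercept."""
    return (cq - intercept) / slope

# Example: 42 colonies on the 10^-5 dilution plate, 0.1 ml plated
print(cfu_per_ml(42, 1e-5))      # -> 4.2e7 CFU/ml (culturable count)

# Example: PMAxx-qPCR Cq of 16.5 for a Group V sample
print(log_cfu_from_cq(16.5))     # -> about 7.0 log10 CFU/ml (viable count)
```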
Optimization of propidium monoazide treatment conditions VBNC cells are now recognized to coexist with culturable and dead cells in bacterial cultures grown in standard laboratory conditions [2,3,38].To overcome the challenge of amplifying DNA from dead cells, a DNA modifier dye called PMAxx was employed [29].Previous investigations have identified an optimal concentration range of 2-100 µM PMA for accurately determining the viability of various bacterial species in different sample matrices [5,29,39,40].Establishing the appropriate concentration of PMAxx is crucial for precise quantification of viable cells and to avoid potential underestimation or false-negative outcomes.Use of an excessively high concentration of PMAxx can inhibit DNA amplification from viable cells, resulting in an underestimation of viable cell count.Conversely, a low concentration may not effectively suppress the signal from dead cells, leading to overestimation.Hence, it is important to determine the optimal concentration of PMAxx to ensure reliable assay performance. To establish the optimal PMAxx concentration for inhibiting DNA amplification from dead C. concisus ATCC 51562 cells in qPCR, cells were treated with a range of PMAxx concentrations (0, 10, 20, 30, 40 and 50 µM) prior to DNA extraction.PMAxx-qPCR was then performed.The greatest reduction in Cq values was observed with a PMAxx concentration of 20 µM (figure 2).Therefore, for subsequent studies, a concentration of 20 µM PMAxx was selected as it effectively inhibited DNA amplification from dead C. concisus cells without significantly impacting the quantification of viable bacterial cells.Xiaonan et al. [5] have reported that 20 µM PMAxx was sufficient to inhibit signals from dead C. jejuni cells for the quantitative determination of cells in the VBNC state. Standard curve for real-time quantitative PCR To determine the efficiency of the qPCR, a standard curve was generated by performing serial dilutions of genomic DNA from C. concisus ATCC 51562, covering a range of 10 8 -10 2 CFU ml −1 .As shown in figure 3, a negative correlation was observed between the number of bacterial cells (log 10 CFU ml −1 ) and the Cq value obtained from qPCR analysis of culturable C. concisus cells.The standard curve demonstrated linearity within the range of 1.5-8.5 log CFU ml −1 with an R 2 value of 0.996 (figure 3).The efficiency of qPCR amplification (E) was calculated to be 99.6%. Validation of the optimized PMAxx-qPCR assay To assess the capability of the PMAxx method to distinguish between viable and heat-inactivated C. concisus cells, a set of mixtures containing different proportions of live and dead cells, as depicted in figure 4, were subjected to treatment with a concentration of 20 µM PMAxx.The number of gene copies obtained from PMAxx-treated cells were subtracted from those of non-treated cells.The number of gene copies without PMA showed no differences within a mixture of viable and dead cells from C. concisus (t-test, p > 0.05), whereas the number of gene copies with PMAxx increased gradually with increasing proportions of viable cells resulting in ΔLog 10 values decreasing (figure 4).These findings demonstrate the successful quantification of viable cells in mixtures containing both live and heat-killed C. concisus cells using PMAxx-qPCR. Induction of the viable but nonculturable state under starvation and low temperature C. concisus showed a variable timeline to enter into the VBNC state (figure 5).C. 
concisus ATCC 51562, a member of GS1, entered into the VBNC state in 15 days, while another strain, ATCC 51561, a GS2 strain, entered into the VBNC state in 9 days when incubated at 4°C in limited nutrient conditions.During the entire duration, the total cell counts consistently remained close to the initial level of 10 7 CFU ml −1 , indicating that total viability remained almost constant while standard traditional viability dropped.The VBNC state has been observed in more than 100 bacterial species when exposed to unfavourable environmental conditions such as heat, oxidative stress, antibiotics and other stressors [3].Within the Campylobacter genus, it has been documented that four species, namely C. jejuni, C. hepaticus, C. coli and C. lari, can enter the VBNC state [3,7,8].C. jejuni cells become VBNC in 10 days when incubated in low nutrient conditions (PBS) at 4°C [5].However, under osmotic stress, the bacterium quickly enters the VBNC state within 48 h [5].Another study reported that approximately 80% of the total population of C. jejuni, C. coli and C. lari entered into the VBNC state within 3 days when kept at simulated aquatic conditions at 10°C [41].C. hepaticus enters into the VBNC state in 55 days when incubated in an isotonic solution at low temperature (4°C) [14]. Previous studies also reported that the survivability of C. concisus is temperature dependent [42,43] and that GS2 strains are better adapted to the human gut environment [44].C. concisus survived for up to 6 days in saliva samples at low temperatures (4°C) [42].In another study, bacterial numbers were not affected when clinical samples were tested after 7 h storage at 4°C [43].Consequently, there is speculation that C. concisus-contaminated food or beverages, especially those subjected to refrigeration, may serve as a plausible source of infection [42].There are some limitations in the study.A limited number of strains were investigated at a temperature of only 4°C under starvation.Effects of other factors such as osmotic stress, chemical stress, low pH, high temperature or a combination of multiple factors to induce VBNC in C. concisus need to be investigated. Resuscitation of viable but nonculturable cells and analysis of morphological changes A distinct and significant trait displayed by the VBNC cells is their ability to undergo in vitro resuscitation, wherein they can regain their cell division capability along with heightened metabolic activity, pathogenic potential and cellular morphology changes [14].The resuscitation mechanisms vary across bacterial species.Besides the removal of stress, some bacterial species require additional triggers such as physical, chemical or host-related stimuli to recover from the VBNC state to a cultivable condition [13,14].The well-characterized Gram-negative bacterium Escherichia coli was resuscitated from the VBNC state after temperature change [14].Several studies have reported the resuscitation of C. jejuni and C. coli VBNC cells using experimental animal models [13].Another member of Campylobacter species, C. hepaticus, was successfully resuscitated from VBNC to a culturable state using base media supplemented with a mixture of Vitox, FBP (ferrous sulfate, sodium metabisulfite and sodium pyruvate) and ʟ-cysteine [8].In this study, C. 
concisus VBNC cells were resuscitated by subjecting them to a temperature upshift to 37°C, pre-enrichment and culturing in modified CA media containing sodium pyruvate, sodium metabisulfite and ferrous sulfate.Pyruvate is known to be one main factor that promotes the resuscitation of VBNC cells as they act as reactive oxygen species scavengers [14].Pyruvate plays an important role in protecting C. jejuni from high oxygen stress and facilitates its growth [45].Recently, it has been reported that pyruvate stimulates DNA and protein biosynthesis in E. coli VBNC cells and eventually supports growth restoration and colony formation [46].Detailed studies including genomic and proteomic analysis are warranted to explore the underlying mechanisms in C. concisus VBNC cell resuscitation.The morphology of C. concisus resuscitated cells was examined and compared to VBNC and to normal, freshly grown, live cells.Significant morphological alterations were observed between cells in the exponential growth phase and those in the VBNC state.During the exponential phase, the cells displayed a characteristic rod or arc shape, with elongated forms clearly discernible under light microscope (1000×) and epifluorescence microscope (figure 6a,d).In contrast, in the VBNC state, a majority of the cells underwent a transformation into short rods or cocci, exhibiting irregular and distorted cell morphologies (figure 6b,e).Similar metamorphic changes were corroborated by transmission electron microscopy (figure 6g,h).Under TEM, it was observed that cells in the exponential phase had an average size of 4 × 0.5 µm (length × width), whereas the VBNC cells were smaller, measuring approximately 0.6 × 0.4 µm.Furthermore, LM, CLSM and TEM were employed to observe the morphological characteristics of resuscitated cells (figure 6c,f,i).The morphology of resuscitated cells was similar to C. concisus cells in exponential growth but larger than the VBNC cells. In previous studies, when C. jejuni entered the VBNC state, curved-shaped cells were converted to coccoid-shaped cells [47].Morphological transformation, from comma-shaped cells to coccoid-shaped cells, has also been noted in H. pylori upon entry into the VBNC state [31].VBNC cells possess a remarkable ability to undergo resuscitation both in controlled laboratory conditions [9] and in natural estuarine environments, triggered by environmental alterations such as temperature elevation.Notably, Vibrio alginolyticus was resuscitated from the VBNC state within 16 h by subjecting to a temperature upshift and natural estuarine environments owing to increased temperature [3,14,32].In another study, researchers were able to retrieve the causative agent of spotty liver disease in chickens, C. hepaticus, from its VBNC state using an enrichment medium [8].The findings of the current study highlight the importance of pre-enrichment in a nutrient-rich broth for the successful resuscitation of C. concisus. Conclusion The findings demonstrate that C. concisus has the ability to enter a VBNC state under stressful environmental conditions.The ATCC51561 strain belonging to GS2, which is known to be better adapted to gut environment, entered the VBNC state earlier than ATCC51562, a GS1 strain.The VBNC cells can be resuscitated through a temperature upshift in the presence of nutrients.Upon Figure 1 . Figure 1.Experimental flow diagram of C. 
concisus cold-stress VBNC induction and the assays used for quantification: plate counting (Group C), qPCR (Group T) and PMAxx-qPCR (Group V). Cell morphology was assessed using light microscopy, epifluorescence microscopy and transmission electron microscopy.

Figure 2. Optimization of the PMAxx concentration for detection of C. concisus ATCC 51562. Cq reductions were obtained by subtracting the mean Cq value of PMAxx-treated dead-cell aliquots from the mean Cq value of non-PMAxx-treated dead cells. Error bars represent s.d. from three independent replicates.

Figure 3. Standard curve of C. concisus ATCC 51562 concentration, in which mean Cq values were plotted against log10(CFU ml−1) of the bacterial standard solution. The linear equation of the regression line and the coefficient of determination (R^2) are displayed in the graph.

Figure 4. Suppression of DNA amplification from dead cells after PMAxx treatment in live/dead cell mixtures. The table presents the ratios at which viable and dead cells were combined. Results are shown as ΔLog(without − with PMA) values, obtained by subtracting the log(CFU ml−1) values of PMAxx-treated samples from those of PMAxx-untreated samples. Error bars represent the s.d. derived from three independent replicates. S denotes sample number.

Figure 5. Entry into the VBNC state of C. concisus ATCC 51562 (a) and ATCC 51561 (b) under nutrient starvation at low temperature (4°C). Blue circles represent the total cell count (Group T) quantified by real-time qPCR, green circles the viable cell count (Group V) quantified by PMAxx-qPCR, and black circles the culturable cells (Group C) counted by culture-based methods. Error bars were calculated from three replicates.
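The ΔLog(without − with PMA) values plotted in figure 4 are obtained by converting the Cq of each untreated and PMAxx-treated aliquot to log10(CFU ml−1) on the standard curve and taking the difference. A minimal sketch of that calculation is shown below; all Cq values and curve coefficients are made-up placeholders used only to show the arithmetic.

```python
# Sketch: Delta-Log(without - with PMAxx) for live/dead cell mixtures (cf. figure 4).
# Cq values and standard-curve coefficients below are placeholders.

SLOPE, INTERCEPT = -3.33, 39.8  # assumed standard curve: Cq = SLOPE*logCFU + INTERCEPT

def log_cfu(cq):
    return (cq - INTERCEPT) / SLOPE

# (cq_without_pma, cq_with_pma) for mixtures with an increasing proportion of viable cells
mixtures = {
    "0% live":   (16.4, 27.1),
    "50% live":  (16.4, 17.5),
    "100% live": (16.4, 16.5),
}

for name, (cq_no_pma, cq_pma) in mixtures.items():
    delta_log = log_cfu(cq_no_pma) - log_cfu(cq_pma)
    print(f"{name:>9}: DeltaLog = {delta_log:.2f}")
```

As in the figure, the difference is large when the mixture is dominated by dead cells and approaches zero as the proportion of viable cells increases.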
Student Perspective in Using Social Media As a Tool in English Language Learning Social media has great potential to support student-centered learning as they are flexible, interactive, and resource–rich in nature. One of its drawbacks is the quality of online instruction that lead to unfavorable perception and perspective of the students. The objectives of this research are to analyze the students’ perceptions and perspectives in using Social Media as a tool in English Language learning. Qualitative and quantitative data were collected from the students of the English Department, Faculty of Education, Islamic University of Riau. Research finding revealed that social media is very practical and useful for getting general information, knowledge and to increase their language I. INTRODUCTION Language educators have long used the concepts of four basic language skills: Listening, Speaking, Reading, and Writing.These four language skills are sometimes called the "macro-skills".The four basic skills are related to each other by two parameters: the mode of communication, oral or written and the direction of communication, receiving or producing the message.Listening comprehension is the receptive skill in the oral mode.When we speak of listening what we really mean is listening and understanding what we hear. Reading is the receptive skill in the written mode.It can develop independently of listening and speaking skills, but often develops along with them, especially in societies with a highly-developed literary tradition.Reading can help build vocabulary that helps listening comprehension at the later stages, particularly. Writing is the productive skill in the written mode.It, too, is more complicated than it seems at first, and often seems to be the hardest of the language skills, even for native speakers of a language, since it involves not just a graphic representation of speech, but the development and presentation of thoughts in a structured way. Speaking is the productive skill in the oral mode.It, like the other skills, is more complicated than it seems at first and involves more than just pronouncing words.Speaking is often connected with listening.Gillet and Temple [1] also emphasize the close relationship between listening and speaking in this way. Listening cannot be separated from the expressive aspects of oral communication.It is impossible to "teach listening" separately from speaking, or to set aside a portion of the instructional time for listening instruction and ignore it the rest of the time.When children develop their communicative powers they also develop their ability to listen receptively. Learning language now should not have been as difficult as it was in previous time as there are many technologies and facilities as learning media become available to help learn English.For example interactive social media and virtual web-based communities is gaining popular everyday as the number of web-based courses, colleges, and schools continues to increase significantly [2,3].Their popularity also increased because the process of teaching and learning can take place independence of place and time and more importantly they have many advantages by allowing for a more interactive, personalized, and independent learning experience [4].Digital devices also have been used not only to complement established education aids but also to develop new ways of learning [1]. 
Web based learning environments have great potential to support student-centered learning as they are flexible, interactive, and resource-rich in nature [1].Unfortunately, although web-based learning environments have unlimited prospects for educational use, they do however have some drawbacks -namely the implementation problems and challenges that are confronted when it comes to meeting all students' instructional needs [2,3].The sources of these problems may have been the low quality of online instruction, the nature of the non-linear attributes of the web based learning environments and diverse learner profiles and characteristics.This research is mainly focus on the first problem -the quality of online instruction -that lead to unfavorable perception and perspective of the students toward the learning process.In analyzing these two variables, this research used social media as a mean to deliver particular subjects of English and focusing on perception and perspective of the students. The word perception means "the ability to see, hear, or become aware of something through the senses, the way in which something is regarded or understood."Perspective at least has two meaning, first is the art of drawing solid objects on a two dimensional surface so as to give the right impression of their height, width, depth, and position in relation to each other when viewed from a particular point.The second meaning of perspective is a "particular attitude toward or way of regarding something or a point of view toward something" [5].In this research we use the second meaning that is the opinion, the attitude or the view point toward something -in this case toward the social media. Social media is website and application or computermediated technology that enable users to create and share various content, information, ideas, interest and various expression through communities by using virtual network or to participate in social networking.From this definition the core component of the social media are the technology and the application that enable people to connect each other. Social media is also defined as any form of online publication or presence that allows interactive communication, including, but not limited to social networks, blogs, internet websites, internet forums, and wikis.It could be concluded that media is communication channels through which news, entertainment, education, data, or promotional messages are disseminated which includes every broadcasting and narrowcasting medium such as newspapers, magazines, TV, radio, billboards, direct mail, telephone, fax, and internet.Therefore, lecturers need to learn how to select the instructional media in the learning process [6].Kustandi and Sutjipto concluded, "learning media is a tool that can help the learning process and serve to clarify the meaning of the message, so as to achieve the learning objectives perfectly [7].Similarly, learning media is defined as everything that can convey and deliver the message from the source in a planned manner so as to create a conducive learning environment. 
In development of technology, lecturers and students need to be creative facilitators and users of the technology.They have to be able to facilitate the teaching and learning activities in the class by presenting the appropriate use of technology.By knowing the perception and perspective of the students toward the use of social media in teaching, they could improve the teaching material in order to increase the quality of teaching.There are a large number of technology products today that can be used by teachers/lecturers in the class.Social media is one of the important ones [8]. In selecting media [10] some criteria should be considered as follows: 1.In accordance with the objectives to be achieved, 2. Right to support learning content that are facts, concepts, principles or generalizations, 3. Practical, flexible, and survive.Media chosen should be able to be used anywhere, anytime with the equipment available in the vicinity, as well as easily removed and taken anywhere, 4. Teachers' skill.Whatever the media, teachers should be able to use them in the learning process, 5. Grouping target.Effective medium to large groups is not necessarily equally effective when used in small groups or individuals.There is an appropriate medium for the type of large groups, medium groups, small groups, and individuals, 6. Technical quality.For example, a visual on the slide should be clear and fine information or messages to be conveyed and not be interrupted by other elements that form the background. The objectives of this research are to investigate the use of social media and analyze the students' perceptions and perspectives in using Social Media as a tool in English language learning at English Study Program. Research results about social media have actually shown that the great majority of students have a Twitter account (49.03%), the most popular social network among community diploma college students in King Khalid University.Other than twitter are MySpace, Live mocha, Video, Babble, Busuu, Unilang, Lang-8, Palabea, Italki.com,voxSwap, Myngle and English Baby (23.23%) and Google+ (18.71%) [9]. With regard to the use of social media for learning foreign languages, 66.9% of the surveyed students reported that they use this online tools to enhance their foreign language skills.The students were also asked to choose the social media network technologies that they use to boost their language skills.Results showed that most of the students like video sharing websites and Chat Tools.This implied that the students generally prefer to use a platform thoroughly which they can not only learn a language but also interact with friends, collegues and family [9]. To investigate the impact of social media on students' foreign language learning skills, the students were asked to select the four major language skills -listening, speaking, reading or writing.Based on the percentage ratings associated with each language skill reported to be improving is listening.Of those who participated in the survey, 45% said that listening is the most noticeable skill.Listening is usually considered as the weakest skill of most language learners.Easy access to the rich database of audio and video materials on social media certainly contributes in enhancing this Listening Language skill [9]. 
Based on data collected from 120 students, this research measures the effective contribution of Social Media Networking (SMN) in foreign language learning skills.Results of the research demonstrated that most of the students claimed they favor the use of these internet-based applications as these help them improve their four language skills.Given these educational benefits, the reseacrhers can stipulate that social media tools are capable of enriching the language learning experience.Therefore, the researchers recommend that educators use these online social communities whether they work in fully online, blended, or face -to-face language learning environments [9]. In teaching language skills, some lecturer has used technology but it is just used to introduce the material only.It is not really used as a medium in whole teaching and learning process.In this research, the writer will use social media as medium in teaching language skills as the solution of these problems faced by the students.In other word media also serves as a teaching tool. In learning English, social media has offered opportunities for learners to share information, create conversations and develop their own content of interest conveniently.They may share their knowledge or their assignment from blogspot to other students or leacturer.In facebook, they can create conversation and also share everything.In google classroom, they can get information about the course in one semester.There are many examples of these platforms (blogs: wordpress, blogspot, microblogs: Twitter, Posterous, Tumbler, wikis: Wikipedia, Scholarpedia, Social networking sites: Facebook, Edmodo.Academia, Linkedln, photosharing sites: Instagram, Cymera, instant messaging: Whatsapp, WeChat, Line, video-sharing sites: Keek, Youtube and many more that have benefits billions of users from all over the world. From the education perspective, many scholars have found that these platforms especially the social networking sites have enormous potential that can encourage critical engagement in discussion as well as harness peer feedback throughout the learning process [11].In this research the writers are interested in investigating and describing the students' perceptions and perspective toward the Social Media as a tool in English language learning.The use of instructional media at the stage of learning orientation will greatly assist the effectiveness of the learning process and the delivery of the message and the content of learning at the time.In addition to the motivation and interests of students, learning media can also help students improve comprehension, presenting interesting and reliable data, facilitate the interpretation of data, and condense the information".That's why in this research, the writers want to know the perception in using them as tools in teaching and learning English. II. 
METHODS The research has been conducted at English Department, Faculty of Education, the Islamic University of Riau Indonesia in 2017/2018 academic year.We used questionnaires and interviews for collecting quantitative and qualitative data from student respondents.In this survey we use 12 closed questions to explore the students' perceptions about the use of social media in the classroom.Each question has 5 alternative answers, from "strongly disagree" to "strongly agree".In addition to closed questions there are also open questions to give the students the opportunities to declare their opinion about the use of social media in the classroom.Data and the answers then are classified according to the topic and are analyzed using qualitative method in three steps: data managing, data interpreting and transcript analyses. III. RESULTS AND DISCUSSION The study revealed that most of the students have been using social media since they know how to use the communication gadget.There are many types social media they use but the most common and popular ones are WhatsApp and Instagram as shown in Fig 1 .These two social media are used by 64% of respondents while other social network, Line, Youtube, Facebook, Google class, Duolingo and Edmondo are used only around 9%, 8%, 6%, 6%, 4% and 4% respectively by the respondents. WhatsApp and Instagram are the most popular messaging and voice over IP Service for sending voice calls, text messages, video calls, images, documents and user locations.More than 1 billion people use WhatsApp to stay in touch for free through messaging or calling in over 180 countries in the world.According to the students, social media are very useful in helping them learning to improve their skill in English.This is because social media can be accessed easily and handy, unlike the conventional internet that need computer and internet connection. There are many reasons why the students use social media.As shown in Fig 2, the dominant reasons are to improve their general information and general knowledge, each account for 41%.The rest of the students answer that social media can improve their English skill and can be fun for the users; each account for 9%.In addition, the use of social media as medium for learning English give them the flexibility in term of time, variation of sources that can be accessed and the easiness to access from many places.With the advance of communication technology the use of social media from time to time is expected to be increasingly easier and user friendly. Social media have been influencing students quite significance in every aspects of human life, including in learning of English.Most of social media or information technology equipment use English as a default language.Many students also use English as their means of communication because this language has been preinstalled in the gadget at the first place. In this research we analyzed the students' perception and perspective in using social media such as Instagram, What Ups, You tube, and Face Book as tools in English Language Learning at English Study Program. Based on research results there are 67 % of student respondents stated that social media has improved their vocabulary through many ways.The rest of the students (33 %) said that the social media enable them to share the information to others.This become possible as the students have ample opportunities in exploring the media through internet and share the information to other friends and groups. Fig. 2. 
The Purpose of Using Social Media by Students The use of the social media has brought a big impact on the people life, either positive and negative ones.Among other positive impact of social media are: easy and more efficient in distributing news, easy to make contact with people across the world and long distance communication with single touch with social networking.More importantly are the availability of abundance of information on the internet with finger tips access from everywhere in the world.Internet has become a must equipment, smart book or a big encyclopedia.They can update knowledge and access the relevant information for them Most of the students (93%) enjoy to use social media in helping their learning process.This is because the internet or social media provide almost everything they need in the learning process.Only 7 % of the students are not happy with internet because not all information they need are available in the sites they are browsing. In addition to the advantage of social media, there are also, however, some disadvantages of social media.For example the social interaction and social cohesion may become weak because of lack of direct interactions and emotional exchange among people.People may also get misunderstanding as the emotion and feeling are not easily represented by symbol and letter.What the people really feel are not easily understood by the other side.Sometime other people get the opposite meaning of what other people really mean. Another problems are cyber bullying, cheating or even stealing information and money, security threat and addiction to the internet which sometime can be dangerous.These negative impacts of the social media have taken victims.For example, persecution done by some people as results of war in the social media. A. Students Perception on Social Media The first objective of this research is to find out the perception of students on the social media.Based on the survey we found that about 23 % of the students strongly agree that social media can increase their interest and motivation to learn English and 74% of students agree that the use of social media in the classroom have positive impact on the learning process.The expression they often say is "interesting."The rest of the respondents do not agree to use social media, but the number of this category is only 3% of the total students (see Fig 2).Other students said that social media can overcome their boring and tedious.Students also stated that the use of social media is more interesting than just reading the books or listening to lectures only.Some students also expressed the importance of choosing social media that has good sound and picture quality. The response of the students clearly show that almost all students are in favor of using social media in their learning in the classroom.Implication of this finding is that the lecturers should have considered this finding in their teaching process in the classroom and they need to learn and understand the social media as well. B. 
Students Perspective on Social Media The second research questions of this study is to find out the students' perspective in using social media as a tool in language learning.As Table 1 shows about 85% of the students stated that social media is very useful, practical, simple, easy to use and to understand, cheap, and can be used everywhere, anytime, and every day.Only small number of students (5.6%) who feel unhappy with social media.In other word social media has significant advantages or benefits for students in learning process. They also said that they are motivated to learn more as they can access any information and knowledge and learn from the various website anytime and everywhere.They can also send questions, material and subject matters to have feedback or just to get the questions answered by people. . Lecturers also get benefit as they can access many teaching material from reputable sites, handy and easy to use in any occasion, either in the class or outside the class rooms.From the perspective of the students, social media is not only beneficial in the class rooms but also outside of the class rooms.This advance of technology makes possible those were impossible in the past.Similarly, those are not possible today may be possible in the future. Social media also has its own disadvantageous as stated by few respondents.Among the negative effects are: we could easily become the victim of cyber attack, bullying, addiction, cheating, criminal acts, security threats and other criminal acts.Herewith, the social media have great impact on learning purposes as well as have some drawbacks.Another respondent gives negative answer because the hatred speech on the internet.Only one respondent gives neutral answer when asked about the benefit of social media. Another use of social media is for blended learning or the blended use of social media is for academic and non academic purposes [12].For example, academically students can learned vocabulary, listening and pronunciation simultaneously to improve their speaking skill.Another purpose of social network is the students can use social media for searching information, looking for material for non learning, chatting and group meeting.IV.CONCLUSION Based on this research result, we can conclude that learning English by using social media is more interesting to learn by the second semester English students at Islamic University of Riau.Moreover, English becomes more interesting as the sophisticated technology becomes available. The students are more eager to learn and to increase more of their time allocation in using social media for academic purposes, such as listening and other aspect of English language.They are more interested and happy to practice and to upgrade the quality of their English competencies.Technology is available to help and give the students opportunity to do more not only in the classroom but also outside the class room. By knowing the students' weakneses, the lecturer should give more practices to the students in order to improve their competencies.Social media can help lecturer provide hundreds of listening materials to the students.To avoid misuse of using the phone, the lecturers also have to check every students' activity while they are learning because while they are listening they also active in using their phone.The lecturer has to make sure if they are exactly listening to their subject. Fig. 1 . Fig. 1.Types and Percentage of Social Media Used by the Students TABLE I . 
PERSPECTIVE OF USING SOCIAL MEDIA BY STUDENTS
Interactions of the human cardiopulmonary, hormonal and body fluid systems in parabolic flight

Commercial parabolic flights accessible to customers with a wide range of health states will become more prevalent in the near future because of a growing private space flight sector. However, parabolic flights present the passengers' cardiovascular system with a combination of stressors, including a moderately hypobaric hypoxic ambient environment (HH) and repeated gravity transitions (GT). Thus, the aim of this study was to identify unique and combined effects of HH and GT on the human cardiovascular, pulmonary and fluid regulation systems. Cardiac index was determined by inert gas rebreathing (CIrb), and continuous non-invasive finger blood pressure (FBP) was repeatedly measured in 18 healthy subjects in the standing position while they were in parabolic flight at 0 and 1.8 Gz. Plasma volume (PV) and fluid regulating blood hormones were determined five times over the flight day. Eleven out of the 18 subjects were subjected to an identical test protocol in a hypobaric chamber in ambient conditions comparable to parabolic flight. CIrb in 0 Gz decreased significantly during flight (early, 5.139 ± 1.326 L/min; late, 4.150 ± 1.082 L/min) because of a significant decrease in heart rate (HR) (early, 92 ± 15 min−1; late, 78 ± 12 min−1), even though the stroke volume (SV) remained the same. HH produced a small decrease in the PV, both in the hypobaric chamber and in parabolic flight, indicating a dominating HH effect without a significant effect of GT on PV (−52 ± 34 and −115 ± 32 ml, respectively). Pulmonary tissue volume decreased in the HH conditions because of hypoxic pulmonary vasoconstriction (0.694 ± 0.185 and 0.560 ± 0.207 ml) but increased at 0 and 1.8 Gz in parabolic flight (0.593 ± 0.181 and 0.885 ± 0.458 ml, respectively), indicating that cardiac output and arterial blood pressure rather than HH are the main factors affecting pulmonary vascular regulation in parabolic flight. HH and GT each lead to specific responses of the cardiovascular system in parabolic flight. Whereas HH seems to be mainly responsible for the PV decrease in flight, GT overrides the hypoxic pulmonary vasoconstriction induced by HH. This finding indicates the need for careful and individual medical examination and, if necessary, health status improvement for each individual considering a parabolic flight, given the effects of the combination of HH and GT in flight.

Introduction

Parabolic flights performed in slightly modified passenger airplanes operating in the troposphere have been used extensively in past decades for space-related human physiological research. Most of these life science experiments have been conducted on parabolic flights in a KC-135 aircraft in the USA or in an Airbus A300 aircraft in Europe. The two aircraft fly parabolic trajectories in different sequences. The A300 performs 31 parabolas each flight day, with a 2-min break of level flight between two consecutive parabolas and 4-min breaks after each group of five parabolas; after the 16th parabola, an 8-min break marks the half-time of the flight. The KC-135 performed 40 parabolas per flight day on a so-called roller-coaster flight path with 10 parabolas back-to-back; typically, a KC-135 flight had 5-min breaks after the 10th and 30th parabolas and a 10-min break after the 20th parabola. Extensive previous research has investigated the cardiovascular system in the context of changing gravity in general and weightlessness in particular (Liu et al. 2012; Mukai et al. 1991; Petersen et al. 2011). Most of these experiments focused on cardiovascular responses during gravity transitions (Limper et al. 2011; Mukai et al. 1994). Fewer studies have investigated longitudinal changes in these cardiovascular responses over the course of a parabolic flight day. Only Mukai et al. reported longitudinal changes in cardiac index (CI), although concrete CI differences were not given; in particular, Mukai et al. used impedance cardiography during the first 10 parabolas of a parabolic flight. Mukai et al. (1994) also demonstrated a decreased thoracic fluid index by thoracic impedance measurements and noted that such a decrease may indicate an increase in thoracic fluids during parabolic flight. However, to date, no research has investigated how the body fluid system is influenced by parabolic flights, although some evidence suggests that the intravascular volume may increase on the day of a parabolic flight, as stated by Schlegel et al. (2001). Another important factor that has not been adequately addressed in prior experiments is the changed ambient atmosphere of the airplane cabin on the day of a flight. This seems astonishing given the tremendous amount of work that has been carried out on the effects of the hypoxia of commercial air travel on the human body (Gradwell 2006; Mortazavi et al. 2003).
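As a quick arithmetic check on the abstract values quoted above, dividing the reported 0-Gz cardiac output by the corresponding heart rate gives nearly identical stroke volumes early and late in flight, which is why the fall in output is attributed to heart rate rather than stroke volume. The short calculation below is only such a consistency check on the published means; it is not part of the study's analysis.

```python
# Consistency check: stroke volume implied by the reported 0-Gz cardiac output and heart rate.
co_early_l_min, hr_early = 5.139, 92   # early parabolas
co_late_l_min,  hr_late  = 4.150, 78   # late parabolas

sv_early_ml = co_early_l_min * 1000 / hr_early   # ~55.9 ml
sv_late_ml  = co_late_l_min  * 1000 / hr_late    # ~53.2 ml

print(f"SV early: {sv_early_ml:.1f} ml, SV late: {sv_late_ml:.1f} ml")
# Stroke volume changes by only a few millilitres, so the ~1 L/min drop in output
# is essentially a heart-rate effect, as stated in the abstract.
```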
An inflight cabin atmosphere that is more hypobaric, hypoxic and dry with respect to ground control may affect the responses of the cardiovascular system, particularly to changes in gravity. The ambient pressure of the European A300 at cruising altitude is approximately 830 mbar, which is equal to the ambient pressure at 1,650 m above sea level (m. a. s. l.) (Lehot 2012). The ambient pressure of the KC-135, which is no longer in service, was 751 mbar, which is equal to an altitude of 2,438 m. a. s. l. (Lehot 2012). Even the mild hypobaric hypoxia (HH), equivalent to 2,400 m. a. s. l., of a typical airplane cabin has been associated with a reduction of baroreflex sensitivity (Sevre et al. 2002), which is one of the most important cardiovascular control mechanisms under changing gravity. In contrast, hypoxemia with an arterial oxygen saturation of 90-95 % causes pulmonary vasoconstriction, leading to a 20 % increase in pulmonary pressure in healthy subjects during air travel (Smith et al. 2012). Hypoxic pulmonary vasoconstriction (HPV) leads to a reduction of the arterial and venous blood volume in the lungs (Sylvester et al. 2012). Consequently, we can expect opposing mechanisms to act on the pulmonary blood volume during parabolic flight: a cephalic volume shift under microgravity bouts would increase pulmonary blood volume, whereas the persistent HPV would decrease it. Furthermore, it is known today that even slight hypobaric hypoxic conditions at quite low altitudes, from 1,000 m. a. s. l., induce changes in blood volume (Bartsch and Saltin 2008). However, the effects of HH on baroreflex sensitivity, pulmonary blood volume and the body fluid system during parabolic flights have not yet been examined. We therefore measured cardiovascular, pulmonary, hormonal and fluid volume parameters during parabolic flights and repeated these measurements in a hypobaric chamber. Our specific hypotheses were the following: (1) cardiac output in a state of weightlessness is not constant during a parabolic flight but rather increases over time because of an increase in intravascular volume; and (2) the cephalic blood volume shift in weightlessness overrules the hypoxic pulmonary vasoconstriction and leads to an increase in lung tissue volume. When designing this study, we also considered the commercial airplane parabolic flights and upcoming suborbital commercial parabolic flights. We believe that more research must be done to clarify potential health issues that may arise from the combined effect of moderate hypobaric hypoxia and intense gravitational transitions, particularly for flight surgeons responsible for future, most likely elderly, customers of airplane and suborbital parabolic flights. Subjects Eighteen healthy subjects participated in the parabolic flight study, and 11 also repeated an identical test protocol in the hypobaric chamber of the German Aerospace Center (DLR) in Cologne, Germany, during the first 3 months after their flights (Table 1). The test protocols were approved by the pertinent authorities (a) for the parabolic flights: Agence française de sécurité sanitaire des produits de santé and Comité de protection des personnes Nord Ouest III and (b) for the hypobaric chamber experiments: Ärztekammer Nordrhein.
All subjects were free of any cardiopulmonary, renal or other systemic diseases, none were taking any medications on a regular basis and each passed a special parabolic flight medical examination (requirements of the parabolic flight executing company, (nOVESPAcE 2013)) based on the JAr class III examination at the aeromedical center of the Dlr, cologne. All subjects provided written informed consent to participate in the study. Heavy exercise and alcohol were strictly prohibited beginning 24 h before any testing. Scopolamine-hydrobromide was applied subcutaneously before the flights (125 μg in women and 1 3 175 μg in men) as a prophylactic against motion sickness. the same dosage was also administered before the hypobaric chamber tests to allow for comparable test conditions (Hyoscine Injection BP 400 µg/ml, UcB Pharma ltd, Berkshire, UK). Subjects drank between 100 and 200 ml of water during an experiment day to antagonize dry mouth caused by scopolamine medication and rebreathing maneuvers. Parabolic flights Data were obtained during the 15th, 16th and 19th Dlr parabolic flight campaigns between 2010 and 2012. the flights were performed in the Airbus A300 Zero-g of the French nOVESPAcE company in Bordeaux, France. Flights took off from and returned to the Bordeaux Merignac Airport (airport altitude: 49 m. a. s. l.) where the preand post-flight measurements were performed. Each flight campaign consisted of three successive flight days. thirtyone parabolas were flown on each flight day in sets of five consecutive parabolas separated by short 4-5 min phases of steady flight. the 16th and 17th parabolas were separated by a longer, 8-min break. During the flights, the cabin environmental conditions were as follows: 830 mbar pressure (equivalent to an altitude of 1,650 m. a. s. l.), approximately 15 % humidity, an ambient temperature of approximately 19 °c, an illumination level of approximately 800 lux, a light color temperature between 3,400 and 3,600 K, a noise level of 70-80 dB and a vibration level of approximately 0.008 g with a frequency spectrum of 1-400 Hz. the ambient atmospheric conditions on the ground during pre-and post-flight varied over the time period of the campaigns because of changes in weather conditions and seasons; the ambient pressure was approximately 1,005 mbar, the humidity ranged from 30 to 100 % and temperature ranged from 9 to 26 °c. After a light breakfast, each subject was equipped with a lead-II electrocardiogram (Ecg), impedance cardiogram (Icg) and finger blood pressure device (FBP). thereafter, an indwelling short 16 g catheter for blood sampling was inserted in the antecubital vein of the right arm (Vasofix ® certo, B. Braun Melsungen Ag, Melsungen, germany). Subsequently, subjects received scopolamine at 8 a.m., and a baseline blood sampling was performed. Baseline measurements were then conducted, consisting of at least three repetitions of cardiac index measurements by rebreathing (cI rb ), FBP, Hr and Icg in a standing position in the airplane cabin with the doors still open. the subjects were then seated for taxiing and take off for approximately 30 min. After a steady flight level was reached, three outbound data sets were collected in the standing position, and a second blood sample was obtained. During the flight phase of the parabolic trajectories, rebreathing exercises were performed in the standing position only at 0 and 1.8 g z during parabolas 2-5 (block 1), 14-16 (block 2), 17-19 (block 3) and 27-30 (block 4). 
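As a compact illustration of the measurement schedule just described, the short Python sketch below encodes the assignment of parabolas to rebreathing blocks; it is only a hypothetical bookkeeping helper, and the function and variable names are not taken from the original study.

# Hypothetical helper encoding the rebreathing schedule described above:
# standing rebreathing at 0 and 1.8 Gz was performed during parabolas 2-5 (block 1),
# 14-16 (block 2), 17-19 (block 3) and 27-30 (block 4).
def measurement_block(parabola):
    blocks = {
        "block 1": range(2, 6),    # parabolas 2-5
        "block 2": range(14, 17),  # parabolas 14-16
        "block 3": range(17, 20),  # parabolas 17-19
        "block 4": range(27, 31),  # parabolas 27-30
    }
    for name, parabolas in blocks.items():
        if parabola in parabolas:
            return name
    return None  # no rebreathing measurement scheduled for this parabola

print([measurement_block(p) for p in (3, 15, 18, 28, 10)])
# -> ['block 1', 'block 2', 'block 3', 'block 4', None]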
BP, Hr and Icg data were collected continuously. Subjects stood in the upright body position during parabolas 1-6, 12-21 and 27-31 and were sitting during the remaining 10 parabolas (Fig. 1) to recover from the intense orthostatic challenge and thus to decrease their risk of presyncope and motion sickness. two more blood samples were obtained after the 16th and 31st parabolas. At least three sets of rebreathing exercises and cardiovascular data points were collected during the return flight, and another three sets were collected after landing on the ground while the subjects were still in the airplane but with the doors open. the final blood sample was also collected on the ground. We adopted a rigorous rebreathing procedure that enabled us to measure two subjects at the same time (Online resource 1). In particular, an operator indicated the breathing frequency and depth by moving his hand up and down, and both subjects triggered their breath cycles to the hand signals. During the parabolic trajectories, the rebreathing maneuvers were strictly aligned to the pilot's announcements of trajectory: (1) "10 s" (2) "pull up" with increased g z load of up to 1.8 g z ; (3) "20", "30", and "40", signifying the rising angle of attack of the airplane; (4) "injection", with a rapid decrease of the g z load to approximately 0.05 g z ; and (5) "pull out", with an increased g z load of up to 1.8 g z. Each phase lasted approximately 20-25 s. controlled rebreathing was initiated after the pilot's announcement of "10 s" for Fig. 1 Study design: measurements were performed at regular ambient pressure before and after parabolic flight and hypobaric chamber runs (pre and post, respectively); and under low ambient pressure conditions in a standing position in a parabolic flight and in the hypobaric chamber (outbound, block 1-4, return); and in a standing posi-tion combined with gravity transitions in parabolic flight and without gravity transitions in the hypobaric chamber. Measurement blocks for cardiovascular and pulmonary data acquisition and time points of blood sampling are indicated the hyper-g measurements or at the pilot's announcement of "40" for the 0 g z measurements during the final pull-up seconds. thus, the breaths relevant for cI rb determination occurred during the 1.8 and 0 g z phases, respectively, and the rebreathing maneuver was completed before injection and pull out, respectively. Hypobaric chamber the actual individual parabolic flight protocol for 11 of the 18 subjects was identical to that of the hypobaric chamber test. this comparison was performed to determine any potential effects of hypobaric hypoxia, restricted water intake in flight and changes in body position on the parameters of interest and to separate such effects from the effects of hyper-and microgravity. tests were performed in the hypobaric chamber of the Dlr Institute of Aerospace Medicine in cologne, germany, which has dimensions of 2.8 × 2 m and provides seats for six people. One subject was tested during each chamber run, supported by two operators in the chamber. the subjects received identical instructions before the chamber run and their parabolic flights. the chamber runs started at the same time as the actual parabolic flights and lasted as long as the individual flight day. the rebreathing and body position protocols were the same as those performed in flight. 
The subjects received an equal amount of subcutaneous scopolamine before their chamber runs, and the parabolic flight blood draw protocol was performed similarly. The chamber was depressurized to the actual inflight cabin pressure of that particular subject's flight. De- and re-pressurization of the hypobaric chamber had exactly the same duration as in the actual flight of the subject. The other environmental conditions in the chamber were approximately 60 % humidity, an ambient temperature of approximately 23 °C, an illumination level of approximately 150 lux, a light color temperature of approximately 3,000 K, a noise level of approximately 70 dB due to airflow and no significant vibrations. Measurements Inert gas rebreathing Cardiac index (CIrb), stroke index (SIrb), oxygen consumption (VO2) and lung tissue volume (Vt) were determined by inert gas rebreathing (IGR) using an Innocor® commercial inert gas rebreathing device (Innovision, Glamsbjerg, Denmark). Oxygen saturation (SO2) of the arterial blood was measured during each rebreathing at a fingertip. The subjects breathed ambient air through a face mask fitted around the nose and mouth. When a CIrb measurement was required, the system switched to a closed rebreathing mode. A respiration bag was automatically filled with a gas mixture composed of 29.5 % O2 in N2, 0.5 % N2O (soluble tracer gas) and 0.1 % SF6 (non-soluble tracer gas). In our study, the volume of the respiration bag was approximately 40 % of the vital capacity of the subject. The pulmonary blood flow (PBF), which, in the absence of significant shunts, is equal to cardiac output, was calculated on the basis of the soluble tracer gas disappearance rate (N2O), the total volume of the system and the Bunsen solubility coefficient of the tracer gas in blood (Clemensen et al. 1994) (for details see Online Resource 2). Cardiovascular parameters Continuous beat-by-beat finger blood pressure was measured using a Finometer MIDI device [Finapres Medical Systems (FMS), Amsterdam, the Netherlands], which uses a photoplethysmographic technique based on the volume clamp method of the Czech physiologist J. Peňáz. The finger cuff was placed around the third finger of the left hand, and the left hand was fixed by a bandage at the level of the fourth intercostal space at the assumed level of the heart. The mean arterial pressure (MAP) was calculated from the systolic and diastolic finger blood pressures. The systemic vascular resistance was calculated by dividing the mean arterial pressure by COrb: SVR (mmHg × min/L) = MAP (mmHg) / COrb (L/min). Finger blood pressure, ECG and thoracic impedance data were measured continuously during the rebreathing pre- and post-flight on the ground, during the rebreathing maneuvers in the outbound and return phases and during the entire phase of the parabolic trajectories. These data were stored at 2,000 Hz using AcqKnowledge® 4.0 software (Biopac Systems Inc., Goleta, CA, USA) on a laptop (Dell Precision Workstation, Dell Inc., Round Rock, USA) for post-flight analyses. A solid-state hard drive was used for data storage to prevent automatic computer shutdown at 0 Gz, which would be triggered by the laptop's built-in free-fall sensor that would mistakenly indicate that the computer was falling during the 0 Gz phases. Rebreathing maneuvers were subsequently identified from the thoracic impedance signal.
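To make the pressure-derived quantities above concrete, the short Python sketch below applies the SVR relation given in the text (SVR = MAP / COrb). Because the authors' exact MAP formula did not survive text extraction, the conventional one-third pulse-pressure estimate is assumed here and should not be read as their exact method; the function names and numeric values are illustrative only.

# Minimal sketch of the arterial pressure bookkeeping described above.
def mean_arterial_pressure(fbp_syst, fbp_diast):
    # Assumed (not confirmed by the source): MAP = diastolic + one third of pulse pressure.
    return fbp_diast + (fbp_syst - fbp_diast) / 3.0

def systemic_vascular_resistance(map_mmhg, co_rb_l_min):
    # As given in the text: SVR (mmHg*min/L) = MAP (mmHg) / COrb (L/min).
    return map_mmhg / co_rb_l_min

# Example with illustrative (not measured) values:
map_value = mean_arterial_pressure(fbp_syst=125.0, fbp_diast=75.0)    # ~91.7 mmHg
svr_value = systemic_vascular_resistance(map_value, co_rb_l_min=5.0)  # ~18.3 mmHg*min/L
print(round(map_value, 1), round(svr_value, 1))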
The finger blood pressure and heart rate were averaged across the three relevant breaths required for the measurement of the SIrb, and these averages were then processed as single blood pressure and heart rate data points for further analysis (Limper et al. 2011). Blood counts and intravascular volume During parabolic flights and hypobaric chamber experiments, 10-ml serum and 6-ml EDTA blood samples were drawn. The overall amount of blood drawn during an experiment day was 80 ml. Intravascular volume on parabolic flight days and in the hypobaric chamber was determined using the Optimized Carbon Monoxide Rebreathing Technique (CORT) (Prommer and Schmidt 2007; Schmidt and Prommer 2005). In each case, 3-ml EDTA blood samples (S-Monovette®, Sarstedt AG & Co., Nümbrecht, Germany) were drawn from the antecubital vein via intravenous puncture (21 G Venofix® Safety, B. Braun Melsungen AG, Melsungen, Germany). Each subject performed the carbon monoxide rebreathing procedure only once at the German Aerospace Center in Cologne, Germany, with less than 2 months between the parabolic flights and hypobaric chamber runs (Online Resource 2). During the parabolic flights and the hypobaric chamber runs, blood samples were drawn in 2.0-ml EDTA tubes via the 16 G intravenous line in the right antecubital vein. Blood draws were performed five times per experiment day: pre, outbound, post 16 (meaning after the 16th parabola), post 31 (after the 31st parabola) and post. Identical labeling was used for the blood samples of both facilities. Blood samples were immediately refrigerated at 5 °C after collection. Blood count analyses were performed following landing at a local medical laboratory via a routine clinical method (Laboratoire d'Analyses Weckerle, Martignas-sur-Jalle, France). Analyses were performed twice, and the average results of the duplicates were used for intravascular volume calculations. After the hypobaric chamber experiments, blood counts were analyzed immediately at the laboratory of the German Aerospace Center in Cologne, Germany using the ABX Pentra 60 hematology analyzer (Horiba ABX SAS, Montpellier cedex, France). Biochemical analyses Following collection, all samples were refrigerated at 5 °C until centrifugation at 1,500 rpm for 15 min at 4 °C. Plasma and serum were then transferred to 1.5-ml tubes, immediately frozen on dry ice and then kept at −80 °C. Blood osmolality and albumin, cortisol, aldosterone, CTpro AVP, renin active and NTpro BNP concentrations were determined using standard methods by a commercial biomedical laboratory (MVZ Labor Dr. Quade und Kollegen, Cologne, Germany) within 3 months of blood sampling (for details, see Online Supplement 2). Statistical analyses We evaluated 631 inert gas rebreathing data sets and the same amount of simultaneously collected ECG and finger blood pressure data sets. In total, 374 data sets were collected during the parabolic flight days, whereas 257 data sets were collected during the hypobaric chamber tests; 128 and 66 inert gas rebreathing maneuvers were performed at 0 and 1.8 Gz, respectively. Analysis of variance (ANOVA) tests using a general linear model evaluated fixed effects of facility (parabolic flight vs. hypobaric chamber) and phase ("pre"; "outbound"; "block 1, block 2, block 3 and block 4"; "return" and "post", Fig. 1), and their interactions. Subject ID was used as a random factor to account for between-subject variability.
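As a rough illustration of the model structure just described (fixed effects of facility and phase plus their interaction, with subject as a random factor), the hedged Python sketch below shows one possible specification using pandas and statsmodels. The data frame, column names and file name are hypothetical; the original analysis was run in STATISTICA, not in Python, so this is an analogous sketch rather than the authors' procedure.

# Hypothetical sketch of a mixed-design analysis analogous to the one described above.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# df is assumed to hold one row per rebreathing data set with columns:
# 'ci_rb' (cardiac index), 'facility' ('flight' or 'chamber'),
# 'phase' ('pre', 'outbound', 'block1', ..., 'post'), and 'subject' (ID).
df = pd.read_csv("rebreathing_datasets.csv")  # hypothetical file name

# Linear mixed model: facility, phase and their interaction as fixed effects,
# subject as the grouping (random) factor.
model = smf.mixedlm("ci_rb ~ C(facility) * C(phase)", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())

# Post hoc comparison across phases (ignores the random structure; illustration only).
print(pairwise_tukeyhsd(df["ci_rb"], df["phase"]))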
Where fixed factors were significant, post hoc Tukey's Honestly Significant Difference test was employed to identify significant differences. For Phase, identical phase names were used for both parabolic flights and the hypobaric chamber to increase clarity. P = 0.05 was taken as the minimum level of significance. All statistical analyses were performed using STATISTICA 10 (StatSoft, Inc., Tulsa, OK, USA). Results The results which are given in the following originate from a mixed-gender sample (Table 1). Cardiovascular parameters Figures 2, 3, 4 show the cardiovascular and pulmonary responses in parabolic flight and in the hypobaric chamber. HR decreased significantly during the hypobaric chamber run (p < 0.001) with respect to pre but did not differ during measurements post relative to pre (p = 0.639). In parabolic flight, HR was significantly decreased at 0 Gz with respect to pre (p < 0.001) and significantly increased at 1.8 Gz with respect to pre (p < 0.001). However, HR at 0 Gz decreased significantly over time (block 1 vs. block 4, p < 0.001). A similar attenuation in the HR increase at 1.8 Gz was observed over time (block 1 vs. block 4, p < 0.001). HR was lower post-flight than in pre-flight measurements (p < 0.001). The stroke index by rebreathing changed significantly during the hypobaric chamber run (p < 0.001) (Fig. 3, Online Supplements 3, 5). With respect to normobaric normoxia (NN) conditions at pre, SIrb showed a significant increase during chamber outbound (p < 0.001) and block 2 (p < 0.001) and a tendency to increase during block 3 and block 4 (p = 0.0775 and p = 0.0798, respectively). SIrb was not significantly different between post measurements and pre measurements (p = 0.952). In parabolic flight, SIrb was significantly enlarged at 0 Gz with respect to pre (p < 0.001). It was also significantly higher during outbound with respect to pre (p = 0.026). However, SIrb did not decrease at 0 Gz over time (block 1 vs. block 4, p = 0.682). In the hyper-g of block 1, SIrb showed no significant difference with respect to pre measurements (p = 0.999) but was subsequently smaller in block 2 and block 3 with respect to pre (p = 0.017 and 0.002, respectively). SIrb was similar during pre- and post-flight measurements (p = 0.323). With respect to pre baseline measurements, CI by rebreathing did not change significantly in any phase in the hypobaric chamber (Online Supplements 3, 5). However, in parabolic flight, CIrb was significantly increased at 0 Gz with respect to pre in each block (p < 0.001). CI at 0 Gz decreased in a stepwise fashion (block 1 vs. block 4, p < 0.001). In block 1, CIrb was significantly increased at 1.8 Gz with respect to pre, similar to pre in block 2 and block 4 (p = 0.660 and 0.817, respectively), and was significantly smaller with respect to pre measurements only in block 3 (p = 0.0252).
Fig. 2 Time course of main pulmonary parameters in parabolic flight in 18 subjects and in hypobaric chamber in 11 subjects is shown as the mean ± SE; asterisks indicate significant differences with respect to pre: *p < 0.05, **p < 0.01, ***p < 0.001; gray background indicates measurements in hypobaric hypoxia after decompression to 830 mbar
Fig. 3 Time course of heart rate and stroke index responses in the hypobaric chamber and in parabolic flight; responses at 0 Gz, solid black graph, and at 1.8 Gz, dashed black graph, are shown separately. Asterisks indicate significant changes with respect to pre separately for hyper- and microgravity values; gray background indicates low ambient cabin pressure
Furthermore, CIrb was significantly decreased after parabolic flight with respect to pre (p = 0.006). The systolic blood pressure, shown in Fig. 4 and Online Supplements 3 and 5, did not show any significant change during the hypobaric chamber runs (p = 0.559). In contrast to the hypobaric chamber, FBPsyst changed significantly in parabolic flight, and the changes were of a similar pattern at 0 and 1.8 Gz. FBPsyst was significantly increased in block 1 at both 1.8 and 0 Gz with respect to pre (p < 0.001 and < 0.001, respectively). During block 2 and block 3, FBPsyst was not different at 1.8 Gz but significantly decreased at 0 Gz with respect to pre (1.8 Gz, p = 0.195 and 1.0, respectively; 0 Gz, p < 0.001 and 0.007, respectively). During block 4, FBPsyst did not differ significantly at 0 Gz or at 1.8 Gz with respect to pre (p = 0.104 and 0.537, respectively). FBPsyst was similar after flight with respect to pre (p = 0.999). Diastolic arterial pressure (FBPdiast) did not show any change during the hypobaric chamber test (p = 0.814), as shown in the Online Supplements 3 and 5. In parabolic flight, FBPdiast was significantly increased at 1.8 Gz in each block with respect to pre (p < 0.001 each), as shown in Fig. 4. At 0 Gz during block 1, FBPdiast did not differ significantly from pre values (p = 0.102) but was significantly lower at 0 Gz in block 2 to block 4 with respect to pre (p < 0.001 each). After parabolic flight, FBPdiast remained higher than before parabolic flight (p < 0.001). Systemic vascular resistance (SVR) did not change at any time in the hypobaric chamber (p = 0.921) (Online Supplements 3 and 5). However, in parabolic flight (Fig. 3), SVR showed a significant decrease at 0 Gz from block 1 with respect to pre (p < 0.001 for each block). At 1.8 Gz of block 1, SVR was statistically similar to pre (p = 0.999) but increased compared to pre values in block 2 and block 3 (p < 0.001 each). In block 4, SVR was again not significantly different from pre values (p = 0.198). However, SVR was significantly increased after parabolic flight with respect to pre values (p < 0.001). Pulmonary parameters Oxygen saturation was significantly decreased in reduced ambient pressure with respect to regular ambient pressure in both parabolic flight and in the hypobaric chamber (p < 0.001 and p < 0.001, respectively) (Fig. 2 and Online Supplements 4 and 5). With respect to normobaric normoxia baseline measurements, pulmonary tissue volume was significantly decreased only in hypobaric hypoxia conditions in the hypobaric chamber (p < 0.001); in contrast, it was significantly increased at 0 and 1.8 Gz (p < 0.001 and p < 0.001) (Fig. 2 and Online Supplements 4 and 5). Oxygen consumption did not show any significant change in the hypobaric chamber in HH relative to regular pressure (p = 0.330). Oxygen consumption was not different between 1.8 Gz and baseline 1 Gz + NN but was significantly increased at 0 Gz relative to 1 Gz + NN (p < 0.001) (Online Supplements 4 and 5). Plasma volume Figure 5 shows that the response patterns of plasma volume did not differ significantly between parabolic flight and the hypobaric chamber (p = 0.449).
Fig. 4 Time course of arterial pressure, systemic vascular resistance and cardiac index in parabolic flight; responses at 0 Gz, solid black graph, and at 1.8 Gz, dashed black graph, are shown separately. Asterisks indicate significant changes with respect to pre separately for hyper- and microgravity values; gray background indicates low ambient cabin pressure
However, responses in the hypobaric chamber seemed to be more distinct. In parabolic flight, there was a significant decrease in plasma volume after parabola 31 with respect to pre (p = 0.034). Pre and post values were not significantly different on the parabolic flight day (p = 0.9728). In the hypobaric chamber, there was a significant increase in plasma volume during outbound with respect to pre (p = 0.024) and then a significant decrease in plasma volume at post 16 and post 31 with respect to outbound (p < 0.001 and < 0.001, respectively). Post-flight plasma volume recovered to baseline values again and was significantly higher than at post 16 and post 31 (p = 0.004 and 0.007, respectively) (Online Supplement 6). Blood hormones and osmolality Cortisol and CTpro AVP are strongly influenced by motion sickness (Schneider et al. 2007; Kohl 1987; Drummer et al. 1990). Thus, the three motion-sick subjects 0AD, 0AP and 0AT were excluded from the statistical analyses of cortisol and the CTpro AVP parabolic flight values, but their responses are shown individually in Fig. 5. Subject 0AL showed a very different renin response pattern in the hypobaric chamber; thus, the renin hypobaric chamber values of 0AL were removed from the statistical analysis but are provided individually in Fig. 5. Depending on the inter-individual variance, some parameters are shown as differences from the baseline and some as absolute values. ΔCortisol decreased significantly after the parabolic flight relative to pre-flight measurements (p < 0.001). In the hypobaric chamber, a significant change in cortisol over time was also visible (p = 0.0475). CTpro AVP did not show any difference between its response to parabolic flight and its response to the hypobaric chamber (p = 0.810). In both facilities, CTpro AVP was not different after the run with respect to baseline levels (p = 0.154 and 0.566).
Fig. 5 Courses of plasma volume and a subset of blood hormones are shown as a continuous black graph for parabolic flight and as a dashed black graph for the hypobaric chamber results; significant differences with respect to pre are indicated as asterisk in parabolic flight (A300) and in the hypobaric chamber (chamber); the statistical significance of plasma volume and ΔAldosterone is illustrated as asterisk with respect to pre, degree symbol with respect to outbound, open diamond with respect to post 16th, and section symbol with respect to post 31st. In the Δpro AVP and ΔCortisol diagrams, the black open circle represents subject 0AP, the black cross represents subject 0AD, the black open diamond represents subject 0AT; in the renin active diagram (renin A), the black open triangle represents the individual responses of subject 0AL; gray background indicates hypobaric hypoxic conditions
Similarly, there was no significant difference in the course and absolute values of osmolality between both facilities (p = 0.726 and p = 0.379, respectively), although osmolality appeared to be slightly higher on parabolic flight days. Renin active did not react differently between the hypobaric chamber and parabolic flight (p = 0.213).
However, post 16th renin showed a significant increase with respect to pre (p = 0.013). In parabolic flight, renin active did not change significantly over time (p = 0.168). the time course of aldosterone levels was significantly different between the hypobaric chamber and parabolic flight (p = 0.044). In the hypobaric chamber, aldosterone levels tended to decrease over time with respect to pre without reaching statistical significance (p = 0.0926), but the decrease was significant after the chamber run with respect to post 16th (p = 0.0458). In parabolic flight, aldosterone levels increased over time. At post 16th and post-flight, aldosterone levels were significantly elevated with respect to pre (p = 0.041 and 0.011, respectively). the time course of ntpro BnP was statistically not different between the hypobaric chamber and parabolic flight (p = 0.136). However, it showed a strong tendency to increase in the parabolic flight (p = 0.075) and to decrease in the hypobaric chamber (p = 0.056) (Online Supplement 6). Discussion the major findings of the study are fourfold. First, confirmation of the observations of the studies of Iwase et al. (1999a) and Beckers et al. (2003) who found that cardiovascular responses to the transition into weightlessness in a standing position are different between the early parabolas of a parabolic flight and the later parabolic phases. Our results give new insights into the mechanisms of those differences by showing that, in the early phase of the flight, there is no distinct blood pressure decrease after the injection of 0 g z , which would have been expected. However, SVr is decreased at 0 g z from the early phase of the flight on. thus, blood pressure is kept elevated by a lack of heart rate decrease after injection together with an already increased stroke volume. Over the course of a parabolic flight day, the heart rate at 0 g z decreases and through decreased cardiac output leads to a pronounced blood pressure drop at 0 g z in the later phases. Second, the plasma volume response pattern in the hypobaric chamber and during parabolic flight is comparable. With respect to the baseline, an increase in PV during outbound was observed in both facilities, followed by a decrease after the 16th and 31st parabola and an increase after recompression. this pattern suggests that changes in plasma volume depend mainly on changes in body position and changes in ambient and oxygen partial pressure and depend to a lesser degree on gravity changes produced by parabolic trajectories. Third, differences in hormonal responses occur between the two facilities. Whereas the combination of hypobaric hypoxia and gravity changes in parabolic flight induces an increase in aldosterone, the opposite is the case after the hypobaric chamber run. ntpro BnP shows a strong tendency to increase during parabolic flight maneuvers but not in hypobaric hypoxia in a hypobaric chamber alone. renin active is increased in the hypobaric chamber but is not affected by parabolic flight. Fourth, the lungs and the cardiovascular system interact differently in the two facilities. Whereas in the hypobaric chamber, lung tissue volume is decreased by hypoxic pulmonary vasoconstriction, lung tissue volume is increased at 0 and in 1.8 g z in parabolic flight. Our results show that the cardiovascular response to microgravity transitions in the standing position differs between the early and later parabolas. 
this finding is of importance for the design and the interpretation of cardiovascular experiments on parabolic flights. Our results suggest that data collected during the initial five parabolas of a parabolic flight should be discarded to increase the homogeneity of the cardiovascular results. the observed variability may be partly derived from the administration of scopolamine, which has a serum elimination half-life of approximately 2 h (Stetina et al. 2005). this point may suggest that half-elimination of scopolamine from the circulation has already occurred after an early parabolic flight phase. However, the increased level of psychomotor excitement is also certainly of importance. In contrast, changing interactions of the baroreflex and a "Bainbridge-like reflex" may explain changes in cardiovascular responses over the parabolic flight. Whereas the Bainbridge-like reflex induces tachycardia triggered by a central volume increase by a volume shift at 0 g z (Petersen et al. 2011), the baroreflex would induce bradycardia. It is possible that the Bainbridge-like reflex predominates in early parabolic flight because the vagotropic blocking capability of Scopolamine suppresses the vagal efferent outflow of the baroreflex. the sympathetic withdrawal effect of the baroreflex at 0 g z seems to be unaffected, as indicated by a decreased SVr at 0 g z from the very first parabola (Fig. 3) and as found by Iwase et al. (1999b). However, whether factors other than scopolamine, e. g., hypobaric hypoxia, influence arterial and cardiac regulation remains unclear. Systolic and diastolic blood pressure was unaffected in the hypobaric chamber, whereas heart rate decreased, which could indicate a resetting of the baroreflex. However, the neutral, rather tedious environment of the confined hypobaric chamber experiment could well have contributed a further decrease in heart rate. How the baroreflex behaves in hypoxia and hypobaric pressure has been inconsistently defined in the literature. Halliwill and Minson (2002) present data supporting that hypoxia resets the baroreflex and 1 3 muscle sympathetic nerve activity (MSnA) to higher levels without changing the baroreflex sensitivity, whereas Sevre et al. (2002) found evidence for reduced baroreflex sensitivity in a hypobaric chamber experiment simulating an airplane cabin atmosphere with a pressure equivalent to an altitude of 2,400 m (Sevre et al. 2002;Halliwill and Minson 2002). Finally, heart rate increases typically in hypoxia caused by decreased oxygen partial pressure in the blood (West et al. 2007), which was contrary to the findings of our hypobaric chamber runs. We found a slight intermittent decrease in plasma volume over both parabolic flight and hypobaric chamber courses. this finding was contrary to our expectations. Schlegel reported that the 16 subjects of the parabolic flight experiment had on average a larger stroke volume in the supine position when compared between and after the flight, suggesting an increase in blood volume (Schlegel et al. 2001). During the parabolas, the subjects had been in an upright sitting position, which allowed a certain value of volume shift through gravity transitions. Schlegel did not measure intravascular volume and hormone concentrations directly but actually focused on the question of whether changing levels of arginine vasopressin (AVP) and renin-angiotensin-aldosterone could lead to an increase in intravascular volume during parabolic flights. 
Schlegel suggested, based on previous work, that the predominance of hyper g z during the flight with respect to μg z may have led to the expansion of intravascular volume. Indeed, the overall durations of the μg z and hyper g z phases during a parabolic flight were approximately 1,000 and 2,000 s, respectively, on the Kc 135 and approximately 700 and 1,400 s, respectively, on the A300 Zero-g, and therefore twice as long under hyper than under μg z . those longer micro-and hypergravity phases and the different flight profile on Kc 135 parabolic flights may be an explanation for the different findings between Schlegel's and our work. nevertheless, we found, in accordance with the concept of a modification of the Starling-landis pressure under changing gravity as noted by Hargens and richardson (2009), a plasma volume loss during parabolic flight and a recovery to baseline values after re-pressurization. Again, we would have expected that such a contraction of intravascular volume through the effects of hyper g z and μg z on the Starling-landis equation during the parabolic flight day would have been aggravated by hemoconcentration because of the hypobaric hypoxia of the airplane cabin inflight. It is well known that hypoxia induces a reduction in plasma volume that increases the hematocrit and thereby improves tissue oxygenation (Bartsch and Saltin 2008). this finding is already accounted for from just slight hypobaric hypoxia, i.e., in an ambient pressure equivalent to an airplane cabin inflight. However, in contrast to our expectation, Yamashita et al. were not able to find a significant effect of 130 min of quiet sitting in a hypobaric chamber in an ambient pressure equivalent to the pressure at 2,000 m and a low relative humidity of 20 % on hematocrit levels with respect to the regular ambient pressure of sea level (Yamashita et al. 2005). However, they did find a significant decrease of body weight (100-200 g) after the chamber test with respect to baseline values. this finding may indicate a loss of extracellular water with a concomitant preservation of intravascular volume. Yamashita et al. concluded that low humidity conditions may have a higher effect on fluid loss than the hypobaric hypoxia itself. However, we found more pronounced changes in plasma volume over the course of the hypobaric chamber testing than during the parabolic flight, whereas the response pattern for the plasma volume in both facilities seemed to be similar. An explanation for the greater plasma volume changes in the hypobaric chamber, in addition to the smaller subject collective, may involve a different volume status of the subjects during the chamber runs as indicated by a slightly lower average osmolality during the chamber runs. the high average osmolality of approximately 310 mosmol/ kg in the subjects during the parabolic flights is a strong indicator for a dehydrated state on the flight days. AVP is known to be increased in volume-contracted subjects, and ctpro AVP shows a similar pattern for changes in blood volume (Szinnai et al. 2007). the reduced fluid volume in parabolic flight may be explained by the participants' overnight fasting and then only having a slight breakfast without much morning fluid intake. the lower humidity in the airplane cabin inflight with respect to the chamber may have been an additional factor in the differences in osmolality, but the difference was already apparent during the baseline measurements. 
However, it should be taken into account that high osmolality has an impact on cardiovascular reflexes. charkoudian et al. reported that hyperosmolality of even 290 mosmol/kg increases the baroreflex sensitivity in young subjects and has a sympathoexcitatory influence in general (charkoudian et al. 2005). In the present study, hormones related to volume regulation and their precursor peptides responded differently to scopolamine, standing position, gravity changes and HH in the airplane cabin on the one hand and to scopolamine, standing position and HH in the hypobaric chamber on the other hand. Whereas aldosterone values increased in HH in combination with changing g-loads during the parabolic flight, they decreased in HH in the hypobaric chamber. Aldosterone levels are expected to decrease after a move to higher altitude and in hypoxia (Slater et al. 1969;Shigeoka et al. 1985). However, an aldosterone increase at altitude was reported by Humperler et al. (1980) and attributed by richalet (2001) to the physical exercise in the study. Under orthostatic stress, aldosterone levels are known to increase 1 3 (laszlo et al. 2001). taken together, these studies underpin the following interpretation of our own results: in the hypobaric chamber, the serum aldosterone concentration decreased in response to HH. In contrast in parabolic flight, increased orthostatic stress and muscular load of standing upright during hyper-g phases, and increased postural muscular work due to turbulent flight phases, and potentially increased muscular work, induced by the airplane's wholebody vibrations, increased aldosterone release. this finding could be interpreted as supporting the effects of orthostatic and exercise stress on the effect of the stress of HH during parabolic flight on aldosterone release. Indeed, renin active did not show a parallel response with aldosterone. the renin active post 16th measurement during hypobaric chamber testing showed a slightly significant increase with respect to pre. However, renin active did not show any further significant response, neither in parabolic flight nor in hypobaric chamber. this implies a dissociation of the aldosterone level and renin response under the particular conditions of parabolic flight. Interestingly, dissociation of plasma aldosterone levels and plasma renin activity has previously been reported in subjects experiencing presyncope and in subjects undergoing repeated orthostatic challenges by tilt table testing and lower body negative pressure (lBnP) tests (roessler et al. 2011;Hinghofer-Szalkay et al. 2011). the works of roessler et al. and Hinghofer-Szalkay et al. indicate furthermore that during orthostatic challenge aldosterone is rather controlled by adrenocorticotropic hormone (ActH) than by renin, which may serve as an explanation of the dissociated aldosteronerenin response in parabolic flight. ntpro BnP shows a strong tendency toward a different response in the chamber with respect to parabolic flight (p = 0.0524). Parabolic flight induces an increase, whereas only hypobaric hypoxia and a standing position do not affect ntpro BnP. this finding is in agreement with the literature, which reports that ntpro BnP does not increase with an acute ascent to altitude (toshner et al. 2008) and during tilt table orthostatic tests, but does increase after an increase in thoracic blood volume-by-volume loading in healthy volunteers (Heringlake et al. 2004). 
this finding may indicate that the trend toward increased ntpro BnP during parabolic flight is triggered by reiterative increasing in the thoracic blood volume at 0 g z . With subjects 0At, 0AP and 0AD excluded from the analysis of the parabolic flight data for cortisol because of possible stress responses arising from motion sickness, cortisol showed a significant decrease over the day in both facilities. this finding underpins the observation of a decreased heart rate during the flights and shows that the stress level decreases. However, this response results not only from decreasing excitement during the experiments but also from the distinct circadian rhythm of cortisol. the blood cortisol concentration shows a physiological peak between 8 and 9 a.m. and a continuous decrease thereafter. Peak values in healthy subjects are approximately 16 μg/dl and decrease to approximately 12 μg/dl at noon (Debono et al. 2009). the baseline measurements before the flights and the chamber runs were obtained between 8 and 9 a.m., i.e., during the physiological cortisol peak, and post-intervention measurements were obtained between noon and 1 p.m. the measured cortisol values at these time points were similar to the circadian values of these day phases given in the literature by Debono et al. (2009) Opposing alterations of pulmonary tissue volume in hypergravity and microgravity compared with hypoxia have been shown in different experiments. Snyder et al. (2006) showed a decrease in lung water and lung tissue volume under moderate hypoxia of 12.5 % inspired oxygen in resting healthy subjects. rohdin and linnarson found increased lung tissue volume in healthy subjects during 2 and 3 g z centrifugation in a sitting position. Furthermore, they reanalyzed the parabolic flight data of Vaida et al. and noted an increase in lung tissue volume in weightlessness (Vaida et al. 1997;rohdin and linnarsson 2002). Vaida had performed the experiment during parabolic flights in the former European caravel parabolic flight airplane under a hypobaric ambient pressure of 793 mbar. thus, the results of Snyder, rohdin and Vaida are in line with our observations of decreased Vt in the hypobaric chamber and increased Vt during weightlessness and hypergravity in parabolic flight. However, we are the first to show in the same subjects that the reduction in blood volume in hypobaric hypoxia of the airplane cabin is reversed by a central volume shift in weightlessness and by sequestration, as suggested by rohdin and linnarsson, of blood in the dependent parts of the lung circulation in the hypergravity phases. this finding could be of benefit for potential parabolic flight candidates suffering from pulmonary hypertension, which would be aggravated by the hypobaric hypoxia of the airplane cabin and which may be attenuated by the pulmonary response to hyper-and microgravity. Limitations the study design included some limitations that we tried to consider in our interpretation of the results. First, temperature, humidity, noise level, vibrations and light conditions in the hypobaric chamber were not fully comparable to parabolic flight because of a lack of air-conditioning in the hypobaric chamber, and because of a fixed installed nonchangeable illumination system in the hypobaric chamber. 
It seems unlikely that the slightly higher temperature in the hypobaric chamber of approximately 23 °c, with respect to approximately 19 °c in the cabin of the A300 inflight, led to changes in the orthostatic or volume-regulating behavior of the cardiovascular system for instance by skin vasodilation or even by increase of the core body temperature (Allan and crossley 1972). there was a 10 dB difference in the noise level during parabolic flight with respect to hypobaric chamber runs. thus, in both facilities the noise level was comparable and below 90 dB which is known to increase the degree of physiological arousal (Harding and Mills 1983) and therefore we do not assume a significant effect of differences in the noise levels of the two facilities on our data. On the other hand, vibrations which appeared only inflight and not in hypobaric chamber might have had a certain minor impact on our results. Although exposure to moderate levels of whole-body vibrations does not lead to consistent changes in basic measures of the cardiovascular system, there may be an increase in muscle activity to maintain body posture which may again lead to peripheral vasoconstriction (rollin Stott 2006). Furthermore, wholebody vibration induces a slight increase in metabolic rate which is comparable with that seen in light exercise and to hyperventilation with a reduction in cO 2 (rollin Stott 2006). However, forced hyperventilation due to vibrations inflight, with respect to the hypobaric chamber, would have led to an increase in SO 2 of the arterial blood inflight with respect to SO 2 in the hypobaric chamber, which indeed cannot be found in our data. the differences in illumination between the hypobaric chamber and the cabin of the A300 were approximately 650 lux in brightness and 600 K in light color temperature. noguchi investigated the influence of 50 and 150 lux of light brightness and 3,000 and 5,000 K of light color temperature on the activity of the autonomic nervous system. they could not find any difference in the activity of the autonomous nervous system under these conditions what makes us doubting a significant effect of the differences in light characteristics in our study on our data (noguchi and Sakaguchi 1999). Second, it is known that vestibular-autonomic interactions (Yates 1996) andcardio-postural interactions (goswami et al. 2012;Blaber et al. 2009) affect cardiovascular responses during orthostatic stress. therefore, subjects were instructed to avoid head movements during the hypergravity phases to minimize potential vestibular-autonomic interactions. However, minor differences in cardio-postural interactions in parabolic flight with respect to hypobaric chamber can be considered possible because in parabolic flight subjects were standing on a floor covered with soft padding and trying to adjust their upright body posture for turbulences using their postural muscles. Furthermore, large muscle groups may have been activated by airplane whole-body vibrations during flight. these advanced postural adjustments, which almost did not occur in hypobaric chamber, may have led to increased muscle pumping and increased venous return inflight with respect to the hypobaric chamber. Third, only 11 of the 18 subjects of the parabolic flights were available to participate thereafter in the hypobaric chamber tests. Fourth, three of the 18 subjects developed motion sickness in parabolic flight, which affects cardiovascular and hormonal regulation and removes the homogeneity of the subject population. 
It is well known that levels of AVP and cortisol are extensively increased in motion sickness; therefore, the CTpro AVP and cortisol values of the motion-sick subjects were excluded from the statistical analysis of the blood hormones, and individual data are shown instead. Furthermore, we did not analyze blood levels of ACTH, which might have allowed us to identify a close relationship between ACTH and aldosterone in parabolic flight, as is known for orthostatic stress during tilting and LBNP (Roessler et al. 2011; Hinghofer-Szalkay et al. 2011). Fifth, rebreathings at 0 Gz fell into the early 0 Gz phases, which are characterized by a sympathetic withdrawal and acute activation of the vagal nervous system. Later in the 0 Gz phase, there would be an increasing dominance of the sympathetic nervous system. We did not perform most of the rebreathings in this phase, and thus our results mainly represent the cardiovascular responses in the early 0 Gz phases. Sixth, forced breathing, as performed for the rebreathing maneuvers for CIrb determination, modulates cardiovascular regulation during gravity transitions (Schlegel et al. 1998; Iwase et al. 1999b). However, using a breathing frequency of 20 breaths per minute and a rebreathing volume between 1.5 and 2.5 L, we were within the range of low influence of breathing parameters on COrb noted by Damgaard and Norsk (2005). Conclusion In conclusion, the cardiovascular, pulmonary and body fluid systems are influenced not only by micro- and hypergravity but also by the hypobaric hypoxic cabin environment of the parabolic flight airplane. This finding leads, in some cases, to antagonistic reflex patterns in which reflexes triggered by GT abolish those triggered by HH. The compensation of the hypoxic pulmonary vasoconstriction by volume shift and the increases in cardiac output during parabolic flight maneuvers could have a positive effect on some potential parabolic flight participants with restricted health status, e.g., patients with mild chronic obstructive pulmonary disease or right ventricular strain; these effects should be investigated in future studies. for constructing the parabolic flight experiment rack and for his technical support during each phase of the study. We are grateful to Hartmuth Friedrich for operating the DLR hypobaric chamber and to Gernot Plath for his helpful technical mentoring. Furthermore, we thank the staff of the NOVESPACE company for their help before and during the parabolic flights and the DLR/German Aerospace Agency for providing the flight opportunities in the 15th, 16th and 19th DLR parabolic flight campaigns.
v3-fos-license
2023-05-15T06:16:17.456Z
2023-05-13T00:00:00.000
258677755
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://academic.oup.com/g3journal/advance-article-pdf/doi/10.1093/g3journal/jkad102/50738566/jkad102.pdf", "pdf_hash": "d6a7e6c5b802d9ab85db2acf3be4ffc5c9f1c865", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42092", "s2fieldsofstudy": [ "Biology", "Agricultural And Food Sciences" ], "sha1": "2358b2d9998245bd7413c01209b66f10cd48ccef", "year": 2023 }
pes2o/s2orc
Chromosome-level assembly of Dictyophora rubrovolvata genome using third-generation DNA sequencing and Hi-C analysis Abstract Dictyophora rubrovolvata, a rare edible mushroom with both nutritional and medicinal values, was regarded as the “queen of the mushroom” for its attractive appearance. Dictyophora rubrovolvata has been widely cultivated in China in recent years, and many researchers were focusing on its nutrition, culture condition, and artificial cultivation. Due to a lack of genomic information, research on bioactive substances, cross breeding, lignocellulose degradation, and molecular biology is limited. In this study, we report a chromosome-level reference genome of D. rubrovolvata using the PacBio single-molecule real-time-sequencing technique and high-throughput chromosome conformation capture (Hi-C) technologies. A total of 1.83 Gb circular consensus sequencing reads representing ∼983.34 coverage of the D. rubrovolvata genome were generated. The final genome was assembled into 136 contigs with a total length of 32.89 Mb. The scaffold and contig N50 length were 2.71 and 2.48 Mb, respectively. After chromosome-level scaffolding, 11 chromosomes with a total length of 28.24 Mb were constructed. Genome annotation further revealed that 9.86% of the genome was composed of repetitive sequences, and a total of 508 noncoding RNA (rRNA: 329, tRNA: 150, ncRNA: 29) were annotated. In addition, 9,725 protein-coding genes were predicted, among which 8,830 (90.79%) genes were predicted using homology or RNA-seq. Benchmarking Universal Single-Copy Orthologs results further revealed that there were 80.34% complete single-copy fungal orthologs. In this study, a total of 360 genes were annotated as belonging to the carbohydrate-active enzymes family. Further analysis also predicted 425 cytochromes P450 genes, which can be classified into 41 families. This highly accurate, chromosome-level reference genome of D. rubrovolvata will provide essential genomic information for understanding the molecular mechanism in its fruiting body formation during morphological development and facilitate the exploitation of medicinal compounds produced by this mushroom. Introduction Dictyophora rubrovolvata, a saprophytic fungus belonging to the family Phallaceae , is a widely artificially cultivated edible mushroom in Southwest China. Dictyophora rubrovolvata is also called "hong tuo zhu sun" (red-volva basket stinkhorn) in Chinese and regarded as the "queen of the mushroom" for its attractive appearance (Liao et al. 2015). Dictyophora rubrovolvata grows on the wet roots of bamboo groves or in the humus of bitter bamboo forests in Guizhou, Yunnan, and Sichuan provinces of China (Hang et al. 2012). It was initially discovered in Yunnan province of China in 1976 and then successfully artificially cultivated in 1983 (Zang and Ji 1985). In the cultivation cycle of D. rubrovolvata, the development process can be divided into 5 major stages: undifferentiated mycelia, primordia, ball-shaped stage, peach-shaped stage, and the mature stage (Wang et al. 2020). The mature fruiting body of D. rubrovolvata possesses a unique appearance, including a red-volva, white stipe, and white net-like veil. Dictyophora rubrovolvata has been widely used as a functional food in daily life in China and Japan for its variety of nutrients, including proteins, amino acids, minerals, vitamins, thiamine, riboflavin, nicotinic acid, and polysaccharides (Deng et al. 2016). 
Dictyophora rubrovolvata has also been reported to have many biological and pharmacologic activities, such as antiaging and hypoglycemic (Ye, Wen, Peng, et al. 2016), antifatigue and hypoxia endurance (Ye, Wen, Huan, et al. 2016), etc. Up to now, D. rubrovolvata was still considered a rare edible mushroom in China. However, compared with other mushrooms, such as Flammulina velutipes, Pleurotus eryngii, and Hypsizygus marmoreus, its biological and genetic information remains limited, which impedes the breeding of high-quality cultivars. In recent years, the genomes of many basidiomycetes have been obtained, including Stropharia rugosoannulata , Naematelia aurantialba (Sun et al. 2021), Sparassis latifolia (Xiao et al. 2018), F. velutipes (Park et al. 2014), P. eryngii (Dai et al. 2019), H. marmoreus (Jin Jing et al. 2015), Lentinula edodes (Shim et al. 2016), and Agaricus bisporus (Morin et al. 2012). The availability of these increased genome sequences promotes research on the development and utilization of medical and pharmaceutical products. Hi-C technology provides a unique and powerful tool to study nuclear organization chromosome architecture (Belton et al. 2012). In recent years, the Hi-C technique has been used to assemble high-quality genomes for many mushrooms. An updated draft genome sequence of S. latifolia was generated by Oxford Nanopore sequencing and the Hi-C technique, a 41.41-Mb chromosome-level reference genome of S. latifolia was assembled, and 13,103 protein-coding genes were annotated . Based on the genome assembly obtained from second-and third-generation sequencing and Hi-C data, Ophiocordyceps sinensis strain 1,229 was found to possess 6 chromosomes with a strong telomere interaction between chromosomes (Meng et al. 2021). The Hi-C technique was also used to construct the chromosome-level genome in L. edodes Yu et al. 2022) and Ganoderma lucidum (Wu et al. 2022). In the present study, we used the PacBio Sequel II platform to sequence the D. rubrovolvata genome, and assembled a highquality chromosome-scale reference genome using the Illumina platform combined with Hi-C scaffolding. The D. rubrovolvata genome sequence will be helpful for understanding the molecular mechanisms and advancing our understanding of its genetics and evolution. Fungal strains and strain culture The D. rubrovolvata fruiting body was collected from Fuzhou, Fujian Province, China. The D. rubrovolvata strain Di001 was obtained from the fruiting body by tissue isolation. At present, this strain has been preserved at the Institute of Edible Mushroom, Fujian Academy of Agricultural Sciences. The strain was maintained on potato dextrose peptone agar slants and subcultured every 3 months. Extraction of genome DNA To obtain sufficient cell amounts for genetic DNA extraction, D. rubrovolvata Di001 was inoculated into potato dextrose broth medium in a Petri dish for 10-15 days, and then the aerial mycelium was scratched out of the medium by sterilized cover glass. The sodium dodecyl sulfate method was used to extract the genomic DNA . The genomic DNA concentration was determined using the Nanodrop spectrophotometer (Thermo Fisher Scientific, NANODROP2000) and Qubit Fluorometer (Invitrogen, Qubit 3 Fluorometer), and agarose gel electrophoresis was performed to check its integrity. De novo sequencing The 15-kb SMRTbell library was constructed using the SMRTbell Express Template Prep Kit (version 2.0). The 350-bp small, fragmented library was constructed using the NEBNext ultra DNA library prep kit. 
After the library was qualified, the whole genome of D. rubrovolvata Di001 was sequenced using the PacBio Sequel II platform and Illumina NovaSeq PE150 at the Biomarker Technologies Corporation (Beijing, China), and the sequencing results were used for gene annotation. Genome assembly and assessment To obtain chromosome-level whole-genome assembly for D. rubrovolvata, we utilized a combined approach of Illumina, PacBio and Hi-C technology for the genome assembly, and chromosomelevel scaffolding. Regarding the PacBio Sequel II platform, on the basis of removing the low-quality reads (<500 bp) from the raw data, the automatic error correction function of the single-molecule real-time (SMRT) portal software was used to further improve the accuracy of the seed sequences, and finally, the variant caller module of the SMRT link v5.0.1 software was used to correct and count the variant sites in the initial assembly results using the arrow algorithm (Berlin et al. 2015). The ccs (circular consensus sequencing) reads were assembled using Hifiasm v0.12 (https://github.com/ chhylp123/hifiasm; Cheng et al. 2021), Pilon software (Walker et al. 2014) was used to further correct the assembled genome using the second-generation data, and finally, the genomes with higher accuracy was obtained. Regarding the Illumina NavaSeq PE150 platform, the clean reads were mapped to the D. rubrovolvata Di001 genome using Burrows-Wheeler Aligner software under its default parameters. Benchmarking Universal Single-Copy Orthologs (BUSCO) v 2.0 software was used to assess the completeness of the genome assembly. The lineage data set of BUSCO was fungi_odb9 (number of species: 85; number of BUSCOs: 290). Hi-C library construction and assembly of the chromosome Hi-C libraries were prepared as previously reported . Briefly, the sample was fixed with formaldehyde to maintain the 3D structure of DNA in cells and the restriction enzyme Hind III was applied to DNA digestion. Then, biotin-labeled bases were introduced using the DNA terminal repair mechanism. DNA was fragmented by a Covaris S220 focused ultrasonicator, and 300-700 bp fragments were recovered. The DNA fragments containing interaction relationships were captured by streptavidin immunomagnetic beads for library construction. Library concentration and insert size were determined using the Qubit 2.0 and Agilent 2100, respectively, and Q-PCR was used to estimate the effective concentration of the library. High-quality Hi-C libraries were sequenced on the Illumina NovaSeq PE150 sequencing platform, and the sequencing data were used for chromosome-level assembly (He et al. 2022). Hi-C data were filtered and evaluated using HiC-Pro software (Servant et al. 2015), it could identify the valid interaction pairs and invalid interaction pairs in the Hi-C sequencing results by analyzing the comparison results, and realize the quality assessment of the Hi-C libraries. The order and direction of scaffolds/contigs were clustered into super scaffolds using LACHESIS (Burton et al. 2013), based on the relationships among valid reads. Sequencing and assembly data The final D. rubrovolvata genome was composed of 136 contigs after genome assembly. The total length of all assembled contigs was 32,887,457 bp with a GC content of 45.16% and an N50 value of 2,480,000 bp. There were 233 complete BUSCOs in the assembled genes of D. rubrovolvata, the complete single-copy BUSCO was 232 (80%), and the complete duplicated BUSCO was 1 (0.34%). The complete BUSCO in D. 
rubrovolvata is lower than those in D. indusiata, S. latifolia, Tremella fuciformis, Naematelia encephala, and G. lucidum (Supplementary Table 1). The assembly encodes 9,725 protein-coding genes, which is less than the other 16 fungi except C. militaris (9,651 protein-coding genes) and O. sinensis (6,972 protein-coding genes). The GC content of D. rubrovolvata (45.2%) was also lower than the average value of 19 fungi in this study (Supplementary Table 2). The general features of the D. rubrovolvata genome, including assembly and gene model statistics, are presented in Table 1. Hi-C Hi-C has been widely used to map chromatin interactions within regions of interest and across the genome. In total, 20.1 million read pairs (6.03 Gb clean data) were generated from the Hi-C library, and the GC content and Q30 ratio (the percentage of clean reads more than 30 bp) were 43.25 and 94.03%, respectively (Supplementary Table 3). The Hi-C library quality was assessed based on the ratio of mapped reads and the proportions of valid interaction pairs and invalid interaction pairs. Only valid interaction pairs can provide effective information for genome assembly. Invalid interaction pairs mainly consist of self-circle ligation, dangling ends, religation, and dumped pairs. The mapped reads ratio was 95.19% (Supplementary Table 4). Of the unique mapped read pairs, 74.34% were the valid interaction pairs (12.12 million), which were used for the next Hi-C assembly (Supplementary Table 5). Overall, we constructed a chromosomal-level assembly of D. rubrovolvata with 11 pseudo-chromosomes with lengths ranging from 1.77 to 3.37 Mb (Table 2). Hi-C assembly incorporated Table 2. For the Hi-C assembled chromosomes, the genome was cut into 20 kb bins of equal length. The number of Hi-C read pairs covered between any 2 bins was then used as the intensity signal of the interaction between the bins to construct a heat map . The heat map demonstrated that the 11 chromosome groups can be clearly distinguished (Fig. 1). Within each group, the intensity of interaction in the diagonal position was higher than that in the off-diagonal position, indicating that the intensity of adjacent sequences (diagonal position) interaction in the Hi-C assembly was high, while the intensity of nonadjacent sequences (off-diagonal position) interaction was weak. The heat map of the Hi-C assembly interaction bins was consistent with a genome assembly of excellent quality. Repeat sequence The total length of the repeat sequence was 3,243,445 bp, which accounted for 9.86% of the D. rubrovolvata genome length. It was subdivided into 5 major types: retrotransposon, transposon, potential host gene, simple sequence repeat (SSR), and unknown duplications. A total of 2,428 retrotransposon, 2,141,399 bp in length, accounted for 6.51% of the genome length. In retrotransposon, the long terminal repeat-retrotransposons Copia (LTR/Copia) and long terminal repeat-retrotransposons Gypsy (LTR/Gypsy) accounted for 0.49 and 2.68% of the assembled genome, respectively. Transposon represented 0.71% of the assembled genomes. The Helitron transposable element, miniature inverted repeat transposable element, and terminal inverted repeat transposable element accounted for 0.16, 0.09, and 0.38% of the assembled genome, respectively (Table 3). Gene prediction and genome comparisons A total of 9,725 genes were predicted in the D. rubrovolvata genome (Supplementary Table 6), among which there were 8,830 homology-predicted genes or RNA-seq-predicted genes (90.79%; Fig. 1. 
Hi-C assembly of a chromosome interactive heat map. Lachesis Group (LG) means chromosome. LG01-LG10 are the abbreviations of 11 chromosomes. The abscissa and ordinate represent the order of each bin on the corresponding chromosome group. Fig. 2), indicating high reliability of the prediction. The total length of the encoded genes was 20.76 Mb, accounting for 63.1% of the whole genome, and the average length of each gene was 2,135.07 bp. The average exon and intron numbers were 7.08 and 6.08, respectively (Table 1). Gene function annotation To predict the protein sequences, a similarity analysis of 9,725 nonredundant genes in multiple public databases (GO, KEGG, KOG, NR, Pfam, CAZy, Swiss-Prot, and TrEMBL) identified 8,727 genes that were functionally annotated, which accounted for 89.74% of the assembled genome. Most genes were matched using the Nr (8,671 genes) database, followed by TrEMBL (8,298 genes) and Pfam (6,492 genes) database (Supplementary Table 7). GO annotations In GO database, 3 independent ontologies including biological process, cellular component, and molecular function were used to describe gene products according to their functional annotations. A total of 4,001 genes were assigned to 3 major categories: biological processes (18 branches), cellular components (15 branches), and molecular functions (14 branches). These were mainly distributed in 5 functional entries, "catalytic activity," "metabolic process," "cellular process," "cell part," and "cell," of which the number of annotated genes was 2,058, 1,873, 1,777, 1,722, and 1,702, respectively (Fig. 4). Dictyophora rubrovolvata had more genes in the common subcategories of "metabolic process" and "cellular process" within the biological process and "catalytic activity" within the molecular function categories (Supplementary Table 9). KEGG annotations To further systematically analyze the metabolic pathways of gene products in cells and the functions of these gene products, the KEGG database was used to annotate the gene functions of D. rubrovolvata. A statistical map of the number of annotated genes in the KEGG database is shown in Fig. 5. The 3,046 genes were assigned into 4 categories in KEGG: metabolism (90 branches), genetic information processing (15 branches), cellular processes (5 branches), and environmental information processing (1 branches). Of these, 1,863 genes were assigned to the "metabolism" category. Within metabolism, the biosynthesis of unsaturated fatty acids possesses 111 genes, followed by carbon metabolism (96), amino sugar and nucleotide sugar metabolism (47), and glutathione metabolism (46). A total of 817 genes were assigned to the "genetic information processing" functional category, including nucleocytoplasmic transport (100), ribosome (99), protein processing in the endoplasmic reticulum (85), and spliceosome (79). For cellular processes (265 genes), the cell cycle was the most involved (74). In addition to the above 3 major categories, only 28 genes were assigned to the "environmental information processing" category (Supplementary Table 10). Conclusion In this study, we report a highly accurate chromosome-level genome assembly of D. rubrovolvata based on the PacBio SMRT and Hi-C technologies. The final genome size was 32.89 Mb. A total of 9,725 protein-coding genes were predicted using the strategy of multievidence combination, and 8,727 genes were functionally annotated. 
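The functional-annotation summary above reduces to straightforward bookkeeping over a gene-to-database table: count the genes hit in each database and the fraction hit in at least one. A minimal sketch of that tally is given below; the input layout and the demo rows are assumptions for illustration, not project files.

```python
from collections import defaultdict

def summarize_annotations(rows, total_genes):
    """Count annotated genes per database and the overall annotated fraction.

    rows: iterable of (gene_id, database) pairs, one per annotation hit.
    """
    per_db = defaultdict(set)
    annotated = set()
    for gene_id, database in rows:
        per_db[database].add(gene_id)
        annotated.add(gene_id)
    summary = {db: len(genes) for db, genes in per_db.items()}
    summary["any_database"] = len(annotated)
    summary["annotated_fraction"] = len(annotated) / total_genes
    return summary

# Illustrative input; the real counts in the text were e.g. Nr: 8,671 and Pfam: 6,492,
# with 8,727 of 9,725 predicted genes (89.74%) annotated in at least one database.
demo = [("g1", "Nr"), ("g1", "Pfam"), ("g2", "Nr"), ("g3", "KEGG")]
print(summarize_annotations(demo, total_genes=5))
# {'Nr': 2, 'Pfam': 1, 'KEGG': 1, 'any_database': 3, 'annotated_fraction': 0.6}
```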
To the best of our knowledge, this assembly and its annotation represent the first genome-scale assembly of D. rubrovolvata. The genome data generated in this study will serve as a valuable resource for fungal diversity research and for breeding of D. rubrovolvata, and will further provide essential genomic information for understanding the molecular mechanisms underlying fruiting body formation during morphological development and facilitate the exploitation of medicinal compounds produced by this mushroom. Data availability Genome sequencing data of D. rubrovolvata Di001 generated for this study have been submitted to the NCBI (BioProject: PRJNA908074 and BioSample: SAMN32024313). Supplemental material is available at G3 online. Funding This work was supported by the Natural Science Foundation of Fujian province of China (2021J01504), the Special Fund for Scientific Research in the Public Interest of Fujian Province (2022R1035005), the Science and Technology Innovations Program of Fujian Academy of Agricultural Science (CXTD2021016-2), and the guiding scientific and technological innovation projects of Fujian Academy of Agricultural Science (YDXM202209). Conflicts of interest The author(s) declare no conflict of interest.
Experimental evolution with an insect model reveals that male homosexual behaviour occurs due to inaccurate mate choice The existence of widespread male same-sex sexual behaviour (SSB) is puzzling: why does evolution allow costly homosexual activity to exist, when reproductive fitness is primarily achieved through heterosexual matings? Here, we used experimental evolution to understand why SSB occurs in the flour beetle Tribolium castaneum. By varying the adult operational sex ratio across 82–106 generations, we created divergent evolutionary regimes that selected for or against SSB depending upon its function. Male-biased (90:10 M:F) regimes generated strong selection on males from intrasexual competition, and demanded improved ability to locate and identify female mates. By contrast, Female-biased regimes (10:90 M:F) generated weak male–male competition, and relaxed selection on mate-searching abilities in males. If male SSB functions through sexually selected male–male competition, it should be more evident within Male-biased regimes, where reproductive competition is nine times greater, than in the Female-biased regimes. By contrast, if SSB exists due to inaccurate mate choice, it should be reduced in Male-biased regimes, where males experience stronger selection for improved mate finding and discrimination abilities than in the Female-biased regime, where most potential mating targets are female. Following these divergent evolutionary regimes, we measured male engagement in SSB through choice experiments simultaneously presenting female and male mating targets. Males from both regimes showed similar overall levels of mating activity. However, there were significant differences in levels of SSB between the two regimes: males that evolved through male-biased operational sex ratios located, mounted and mated more frequently with the female targets. By contrast, males from female-biased selection histories mated less frequently with females, exhibiting almost random choice between male and female targets in their first mating attempt. Following experimental evolution, we therefore conclude that SSB does not function through sexually selected male–male competition, but instead occurs because males fail to perfectly discriminate females as mates. Empirical tests between these divergent explanations have not revealed a consistent reason for the widespread existence of SSB, and there is considerable variation between different taxa in SSB (Scharf & Martin, 2013), even when species are closely related (Serrano, Castro, Toro, & L opez-Fanjul, 2000). These different study findings could be the consequence of SSB having different functions in different taxa and/or circumstances. Here, we employed experimental evolution within a species to test explicitly whether maleemale competition or inaccurate mate discrimination can explain male SSB. We used the red flour beetle, Tribolium castaneum, a promiscuous species where SSB is recognized (Levan, Fedina, & Lewis, 2009). In this model, SSB generates measurable costs: when T. castaneum males invest in homosexual behaviour they are not engaged in searching for, courting or mating with females and fertilizing their eggs. 
In addition, there is some indirect evidence that SSB might function in intrasexual competition by reducing rival male life span: average life span of adults in single-sex male groups was under half that of males in isolation, or of females in single-sex groups, and many of the dead males in the group condition exhibited hardened white deposits around the mouth and tip of the abdomen (Spratt, 1980). We applied divergent experimental evolution regimes that allowed us to test between the two core hypotheses that SSB occurs (1) because it generates sexually selected benefits for males through competition or (2) because males do not perfectly identify females, so they mate indiscriminately with any adult to maximize female mating opportunities. Having maintained replicate independent lines evolved through divergent adult operational sex ratios (Lumley et al., 2015;Michalczyk et al., 2011b), we then conducted tightly controlled mate choice assays to measure how experimental evolution under different sexual selection regimes had shaped male SSB. Our Male-biased lines were reproduced through adult operational sex ratios containing 90 males and 10 females, while the Female-biased lines reproduced using 10 males and 90 females. Under Male-biased regimes, males must achieve fertilizations in the face of strong levels of sexual selection from maleemale competition. In tandem, males in Male-biased conditions face much greater selection to evolve abilities that improve mate location and discrimination, because females are rare in the adult population. Male-biased conditions will therefore promote the evolution of male behaviours that simultaneously improve maleemale competition and enhance female location and mate discrimination. By contrast, under Female-biased conditions, maleemale competition is weak, and males experience much more relaxed selection to locate and discriminate between potential mates because nine out of 10 adults encountered are female. Our Female-biased regimes therefore relaxed selection on the evolution of male behaviours that are required for reproductive competition, while simultaneously weakening selection on mate finding and discrimination abilities. Adult population densities (N ¼ 100) in every line and both regimes were kept identical throughout to maintain equal adult encounter rates. Since T. castaneum is a promiscuous species (Fedina & Lewis, 2008) in which females mate repeatedly with multiple males (Michalczyk et al., 2011a) and males have substantial mating rate and fertilization potential (Lumley et al., 2015), male and female encounter rates were expected to correlate closely with the operational sex ratio. Although there is limited evidence for it, if female T. castaneum take 'time out' of mating activity after copulation, this will only exacerbate the differences in selection acting on SSB between our Male-biased and Female-biased regimes: more mating opportunities in the Male-biased lines would increase any female 'time out' in that regime, making females even rarer, and therefore further increasing the selection on males from maleemale competition and female mate searching and discrimination. Previous work with these lines has confirmed that male reproductive competitiveness has evolved to become stronger following selection under male-biased conditions (Godwin et al., 2017). The contrasting regimes therefore provide an ideal opportunity to test between explanations for the evolution of male homosexual behaviour. 
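The selection logic of the two regimes can be made concrete with a simple calculation: given the operational sex ratios, the chance that a randomly encountered adult is female, and hence the expected rate of same-sex mounting if males targeted adults at random, differ roughly nine-fold between regimes. The short sketch below works this out; it is an idealized illustration of the null expectation, not part of the original analysis.

```python
# Illustrative null expectations from the operational sex ratios alone.
# Assumes a focal male encounters the other 99 adults in the population at random.

def encounter_probabilities(n_males, n_females):
    """Chance that a random non-self adult encountered by a male is female or male."""
    others = (n_males - 1) + n_females
    return {"female": n_females / others, "male": (n_males - 1) / others}

for regime, (males, females) in {"Male-biased": (90, 10), "Female-biased": (10, 90)}.items():
    p = encounter_probabilities(males, females)
    print(f"{regime}: P(encounter female) = {p['female']:.2f}, "
          f"expected SSB under random targeting = {p['male']:.2f}")
```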
If SSB functions within maleemale competition, maleefemale signalling, mating practice or some other sexually selected route to indirectly improve male reproductive fitness, then we would predict increased selection for SSB under the Male-biased, strong sexual selection regime. Males that evolved through stronger levels of sexual selection in the Malebiased regime should therefore exhibit a greater level of SSB. On the other hand, if SSB exists because males fail to find and recognize female mates correctly, then we would expect the reverse outcome: males from the Male-biased regime have faced stronger selection to improve their abilities in locating and identifying females as mates, and therefore should evolve lower levels of SSB. Applying this logic in reverse, if male SSB functions within maleemale competition, males from Female-biased regimes exposed to relaxed levels of sexual selection should engage less in SSB. If, however, SSB is the result of erroneous female recognition, then the relaxed selection on mate location and discrimination in our Female-biased regimes (where most potential adult mates are female) should result in higher levels of SSB among Female-biased males. Having evolved replicate lines across 82 and 106 generations of these contrasting intensities of selection on SSB depending on its function, we then used experimental mate choice assays to reveal what evolutionary forces influence the existence of male homosexual behaviour. Experimental Evolution and the T. castaneum Model Tribolium castaneum demonstrates significant levels of SSB (Levan et al., 2009;Martin, Kruse, & Switzer, 2015;Spratt, 1980), is readily cultured in the laboratory under experimental control, freely engages in measurable mating behaviour, and can be reared from egg to adult in 1 month (Sokoloff, 1972). Beetles were maintained in density controlled, nonoverlapping generations under standard conditions (30 ± 1 C, 60 ± 5% relative humidity and 16:8 h light:dark) with ad libitum fodder consisting of organic flour and yeast (9:1 by volume) topped with oats for traction (details in Godwin et al., 2017;Lumley et al., 2015;Michalczyk et al., 2011b, Michalczyk, Martin, Millard, Emerson, & Gage, 2010. Experimental evolution was applied by altering the adult operational sex ratio at every generation to create either Male-or Female-biased conditions for reproduction, as described previously in detail (Michalczyk et al., 2011b;Lumley et al., 2015). Experimental evolution took place through six independent lines, three per regime. In the Male-biased regime, 90 males and 10 females (all previously unmated) were placed in fresh fodder for 7 days of reproduction, after which adults were removed and eggs and offspring left to develop to the next generation (Fig. 1a). The Female-biased regime was engineered in the same manner, except that the adult operational sex ratio was reversed, comprising 90 females and 10 males. In the Female-biased regime, reproducing males were therefore nine times more likely to encounter a female than the Male-biased regime, and suffer nine times less competition for reproductive success. While these Male-biased and Female-biased operational sex ratios generate contrasting intensities of selection from reproductive competition and mate-finding abilities, their adult structures create identical theoretical effective population sizes, with microsatellite screening revealing similar levels of heterozygosity between the regimes (Lumley et al., 2015). 
In addition, the identical adult population densities (N ¼ 100) also equalized adult encounter rate. Mate Choice Assays Mate choice assays were conducted through two experimental repeats after 82 and 106 generations. Mating behaviour of 'focal' males from the Male-and Female-biased experimental evolution regimes was assayed within experimental trios, where simultaneous choice of a male and female sexual 'target' was presented to the focal male (Fig. 1b). All adults in the mate choice assays were reared to adulthood under identical conditions, having been isolated and sexed as pupae, and then stored singly in fodder-filled 1.5 ml Eppendorf tubes to metamorphose into adult imagos, and then allowed 12 ± 2 days posteclosion to ensure sexual maturity (Michalczyk et al., 2011b). Individual storage ensured that all adults in the assays were unmated virgins, and therefore without the potential for uncontrolled variance arising from prior mating and insemination activities. All adults emerged under identical conditions, thereby standardizing any sociosexual conditioning effects that could have influenced mating behaviour and SSB (Dukas, 2010;Engel, Manner, Ayasse, & Steiger, 2015;Fedina & Lewis, 2008). Mate choice trials were conducted in 5 cm diameter plastic petri dishes with lightly scored floors to aid traction. Each trio consisted of (1) the focal selection line male (from either the Male-or Femalebiased background, and marked with a white paint spot on the thorax), whose sexual behaviour was recorded, and (2) the male and (3) female mating target. All targets were sourced from Georgia 1 standard laboratory stock, which is ancestral to the experimentally evolved lines. To allow focal male mate choice, and to restrict confounding interference between all three adults, male and female targets were tethered using 2 cm lengths of ultrafine silk cotton tied between the thorax and abdomen of either adult. The targets were tethered to opposite sides of the petri dish mating arena (marked as either male or female and oriented randomly) and could move freely within their own hemispheres, but could not interact and mate with each other, transfer substances such as pheromones or cuticular hydrocarbons by direct contact, or interfere with or interrupt the focal male's mounting or mating attempts with the opposite target Scharf & Martin, 2013). Our mate choice trios were therefore designed to give the focal male maximum opportunity to express his sexual behaviour, while limiting interference, courtship, mating or direct maleemale competition by the target adults. Sexual behaviour was measured in a total of N ¼ 145 trios containing Male-biased males, and N ¼ 141 trios containing Female-biased males. To assay sexual behaviour, focal males from either experimental evolution background were introduced to the centre of the petri dish mating arena, equidistant from male and female targets. Sexual behaviour of the focal male with either target was then recorded for 15 min at 30 ± 1 C, 60 ± 5% relative humidity (Lumley et al., 2015). Sexual behaviour was categorized into either (1) mounting, where the focal male attempted to copulate with the target but did not remain on its dorsum for 36 s, or (2) mating where the focal male maintained a mounted copulatory position on the dorsum of the target for over 36 s, which is known to correlate with successful spermatophore transfer to females in T. castaneum (Bloch Qazi, Herbeck, & Lewis, 1996). 
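The behavioural scoring described above comes down to one rule: an unbroken mounted bout lasting longer than 36 s is counted as a mating, anything shorter as a mounting attempt. A minimal sketch of how per-trial observation records could be tallied under that rule is shown below; the record format is hypothetical.

```python
# Hypothetical record format: (target_sex, bout_duration_in_seconds) per focal male.
MATING_THRESHOLD_S = 36  # unbroken mounting > 36 s correlates with spermatophore transfer

def score_trial(bouts):
    """Tally mounting and mating events on female vs male targets for one 15-min trial."""
    summary = {"mounts_female": 0, "mounts_male": 0,
               "matings_female": 0, "matings_male": 0}
    for target_sex, duration in bouts:
        kind = "matings" if duration > MATING_THRESHOLD_S else "mounts"
        summary[f"{kind}_{target_sex}"] += 1
    return summary

example = [("female", 12), ("male", 40), ("female", 55), ("female", 8)]
print(score_trial(example))
# {'mounts_female': 2, 'mounts_male': 0, 'matings_female': 1, 'matings_male': 1}
```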
Thus, mounting and mating frequency and durations were recorded by observers, allowing the following sexual behaviours exhibited by the focal In mate choice assays, selection line FBR and MBR focal males were provided with a simultaneous choice of virgin male and female targets, both of which were tethered to prevent interaction or interference with one another. If MBR males invested greater relative amounts of mating effort than FBR males on the male target, the hypothesis that SSB functions in sexually selected maleemale competition is supported. If MBR males invested less relative mating effort than FBR males on the male target, the hypothesis that SSB occurs through inaccurate mate discrimination is supported. male to be assayed: (1) total mounting and mating behaviour; (2) the latency to first mounting and mating and the sex of the first mounting and mating target; (3) the proportion of total mounting and/or mating events on the female or male target, and (4) the proportion of total time within the 15 min observation period invested in mounting or mating either target. Ethical Note Beetles were maintained in conditions, and observed engaging in conspecific mating interactions, that are normal components of their life cycle (Fedina & Lewis, 2008;Levan et al., 2009;Martin et al., 2015;Sokoloff, 1972;Spratt, 1980). No invasive procedures were applied. Although male and female mating 'targets' were tethered to ensure they did not mate with each other or disrupt mating attempts by the focal male with the other target, they could move freely within their own hemispheres and were therefore able to physically resist mating attempts by the focal male. After each 15 min observation period, beetles were returned to their stock populations. Data Analysis All data were analysed with R.3.3.2 (R Core Team, 2017a) using the RStudio.0.99.903 wrapper (RStudio Team, 2016), and graphs plotted using 'ggplot{ggplot2}' (Wickham & Chang, 2016, following ;Weissgerber, Milic, Winham, & Garovic, 2015). Data were analysed using generalized linear mixed models (GLMMs), and maximum models were fitted using restricted error maximum likelihood (REML) available in 'glmer{lme4}' (Bates et al., 2016). The most appropriate error distribution for each GLMM was selected by examining diagnostic residual plots (Bolker et al., 2008;Crawley, 2013;Thomas et al., 2015). The total mounting and mating frequencies on both targets were analysed using a Poisson distribution with a log link function. The proportion of males mounting and mating the female target first used a Bernoulli binomial GLMM (where 1 ¼ mounting/mating the female first or 0 ¼ mounting/ mating the male target first) with a logit link function. The proportion of mounting events, mating events, total duration mounting and total duration mating that the male spent on the female target (out of the totals spent on both male and female targets) were analysed using binomial GLMMs with logit link functions (Thomas et al., 2015). In all analyses, the experimental evolution regime (Male-or Female-biased) was entered as a fixed factor, and their three independent replicate lines nested as random factors, together with the sampling generation and experimental repeat (Bates, M€ achler, Bolker, & Walker, 2015). After each model was fitted, the significance of the experimental evolution regime was assessed by a likelihood ratio test between models, with and without the factor of interest, using c 2 testing in 'drop1{stats}' (Bolker et al., 2008;R Core Team, 2017b). 
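The published models were binomial and Poisson GLMMs fitted in R with lme4, with replicate line, sampling generation and experimental repeat entered as random terms. As a rough, simplified illustration of the core comparison only, the sketch below fits a fixed-effects logistic regression in Python to hypothetical first-choice counts that approximate the reported proportions; it ignores the random-effect structure and is not the authors' analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-trial data: regime of the focal male and whether its first
# mount was on the female target (1) or the male target (0). Counts approximate
# the reported ~71% (Male-biased, n = 145) and ~51% (Female-biased, n = 141).
data = pd.DataFrame({
    "regime": ["MB"] * 145 + ["FB"] * 141,
    "first_mount_female": [1] * 103 + [0] * 42 + [1] * 72 + [0] * 69,
})

# Simplified fixed-effects logistic regression (the published analysis used a
# binomial GLMM with line, generation and repeat as random effects).
model = smf.logit("first_mount_female ~ regime", data=data).fit()
print(model.summary())
```

The fitted result's llr_pvalue gives a likelihood-ratio comparison against an intercept-only model, loosely analogous to the drop1-style test described above.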
Total Mating Effort There were no overall differences between Male-biased and Female-biased regime focal males in their mating activity, irrespective of male or female target. On average, Male-biased regime males engaged in 4.3 ± 0.2 mountings within each 15 min trial, and Female-biased males engaged in 4.4 ± 0.2 mountings (c 2 3,283 ¼ 0.1, P ¼ 0.780; Fig. 2a). Mating frequency was also similar between focal male regime background (irrespective of target): Malebiased ¼ 1.8 ± 0.1 matings and Female-biased ¼ 1.7 ± 0.1 matings by focal males (c 2 3,211 ¼ 0.7, P ¼ 0.400; Fig. 2b). Moreover, there were no differences between Male-biased and Female-biased focal males in total time invested in mounting both targets (c 2 3,283 ¼ 0.1, P ¼ 0.760; Fig. 2c), or in their total mating effort (c 2 3,211 ¼ 0.1, P ¼ 0.790; Fig. 2d). The contrasting experimental evolution regimes had therefore not caused divergence in overall male mating activity through our trials. First mounting and mating Focal males from the Male-biased experimental evolution regimes were 20% less likely to mount the male target first compared with focal males from Female-biased backgrounds (c 2 3,283 ¼ 9.4, P ¼ 0.002; Fig. 3a). Male-biased males exhibited a clear preference for the female targets, with 71% of the first mounts upon the female. By contrast, males from the Female-biased regime exhibited near random choice over their first mating partner, with 49% of the first mounts occurring on male targets versus 51% on female targets. When we analysed 'matings', where mounts in the copulatory position lasted more than 36 s, 77% of Male-biased males committed to their first mating with the female target, compared with 60% of the Female-biased males (c 2 3,211 ¼ 7.4, P ¼ 0.007; Fig. 3b). Mounting and mating frequencies Although there were no differences in overall mounting or mating frequencies by males between either experimental evolution regimes (Fig. 2), Male-biased males performed 9% fewer mountings on the male targets than Female-biased males (c 2 3,283 ¼ 4.4, P ¼ 0.036; Fig. 4a). In addition, as a proportion of total matings, Male-biased males invested 12% more of their mating frequency with females than males from Female-biased backgrounds (c 2 3,211 ¼ 5.1, P ¼ 0.024; Fig. 4b). Mounting and mating time investment Despite similar total time invested by focal males from either experimental evolution regime in mounting and mating (Fig. 2), focal males from the Male-biased background spent 12% more of their mounting time targeting the female, with 68% of their total time spent mounting females (and 32% investing in SSB on males). By contrast, Female-biased regime focal males spent 56% of their time mounting female targets (and 44% engaging in SSB with males; c 2 3,283 ¼ 5.7, P ¼ 0.017; Fig. 4c). Likewise, of the total time invested in matings lasting over 36 s, Male-biased regime focal males invested 79% of this time targeting the female, and 21% engaging in SSB with male targets. By contrast, Female-biased focal males invested 65% of their mating time with female targets, and 35% engaging in SSB with male targets (c 2 3,211 ¼ 12.6, P < 0.001; Fig. 4d). DISCUSSION Following experimental evolution under divergent intensities of sexual selection our study reveals that male SSB in T. castaneum is the consequence of inaccurate mate discrimination, where males are targeted for mating instead of females. We found no evidence that SSB is the consequence of sexually selected maleemale competition or female choice. 
Instead, our results showed that males from Male-biased experimental evolution regimes that had evolved under stronger opportunities for sexual selection engaged in less SSB than males that evolved under Female-biased ratios. By comparison with males that evolved through Female-biased operational sex ratios, Male-biased regime males demonstrated superior abilities for recognizing females as mating targets, through which direct reproductive fitness will be achieved. As well as investing more total effort into mounting and/or mating females, Male-biased regime males were more likely to mount or mate the female target first after initial introduction to their mate choice trio. By contrast, males from the Female-biased evolutionary background, experiencing weaker sexual selection and relaxed selection on female location and mate discrimination, engaged much more frequently in SSB, choosing the sex of their first mating attempt almost randomly (Fig. 3a). Overall, we found that males from Male-biased and Femalebiased evolutionary backgrounds engaged in similar levels of total mating investment within our trials. On average, focal males engaged in 4.3e4.4 mounting attempts across each 15 min mating trial, and 1.7e1.8 matings lasting more than 36 s. We therefore found no differences in levels of overall male mating effort between Male-biased and Female-biased backgrounds, removing the possibility that SSB arises because of biased levels of sexual activity. However, within these equivalent levels of sexual activity, Malebiased regime males invested significantly more mating effort towards the female targets and discriminated more effectively against SSB and male targets. In our Male-biased selection regime, where females were nine times less abundant in the adult mating population (and competitor males nine times more abundant), selection was predicted to act on superior female-finding and mate choice abilities if SSB occurs due to inaccurate mate choice: our mating trial assays revealed exactly this pattern. Previous work has shown that changing the operational sex ratio and proximate mating environment can change engagement in male SSB. For example, reduced female availability improved mate discrimination up to eight-fold in male field crickets, Teleogryllus oceanicus (Bailey & French, 2012), and when male density was increased during maturation in T. castaneum, SSB decreased (Martin et al., 2015). Relative to our Female-biased lines, we found that selection reduced male engagement in SSB in the Male-biased lines, which would be expected if SSB occurs due to inaccurate mating discrimination, and there is no sexually selected direct or indirect reproductive fitness to be gained from investing mating effort towards other males. Our study using experimental evolution concords with the ca. 80% of studies (N ¼ 87) reviewed by Scharf and Martin (2013) where inaccurate mate discrimination was the explanation for widespread SSB. A large number of studies have failed to find sexually selected explanations for SSB functioning through maleemale competition, such as through competitive dominance, signalling, mating practice or indirect sperm transfer (e.g. Bailey & French, 2012;Benelli & Canale, 2012;Dukas, 2010;Harari, Brockmann, & Landolt, 2000;Levan et al., 2009;Shimomura, Mimura, Ishikawa, Yajima, & Ohsawa, 2010). In T. 
castaneum, for example, mounting males actively engaging in SSB were no larger and did not gain greater reproductive fitness than the mounted males, lending little support to social or competitive dominance (Levan et al., 2009). Moreover, males initially engaging in SSB were no more successful in subsequent heterosexual matings, providing no support for mating practice being a reason for SSB (Levan et al., 2009). There is some evidence that SSB in T. castaneum can allow indirect sperm transfer leading to significant paternity gains via male proxy (Haubruge, Arnaud, Mignon, & Gage, 1999). However, further research has concluded that this phenomenon is rare: Levan et al. (2009) found that indirect sperm transfer only occurs in 7% of SSB matings, and that it achieves only 1% subsequent paternity, while Tigreros, South, Fedina, and Lewis (2009) found no viable sperm transfer from SSB. Although indirect sperm transfer could reduce some selection against SSB in T. castaneum, males will clearly achieve far greater reproductive fitness by targeting sperm transfer to females, especially under initial mating opportunities with virgins. If indirect sperm transfer was an important source of indirect reproductive fitness for males, we might expect SSB to be commoner in our Male-biased regime males, where increased opportunities for indirect sperm transfer exist and there is stronger competition for fertilizations. The wider evidence that SSB can function within sexually selected maleemale competition is scarce, but does exist in some systems (Emlen, 2008). A competitive advantage for males engaging in SSB has been proposed in ca. 10% of studies (N ¼ 87) reviewed by Scharf and Martin (2013). Males can use SSB to incapacitate or damage male rivals, causing reduced fitness and survival among competitor males (Bieman & Witter, 1982;Maklakov & Bonduriansky, 2009). For example, the high mortality of all-male T. castaneum groups is associated with the presence of desiccated ejaculates around the mouthparts and/or anogenital opening (Spratt, 1980), and abdominal damage has been found on male C. maculatus (Maklakov & Bonduriansky, 2009). However, studies directly linking SSB-derived damage and reduced male fitness are lacking, and the known costs of engaging in SSB also need to be considered. A review of 25 studies of SSB in Lepidoptera concluded that 68% of cases could be explained via intrasexual competition (Caballero-Mendieta & Cordero, 2012). For example, SSB in the oriental fruit moth, Grapholita molesta, was displayed when latearriving males interfere with males that are already engaged in courting females, reducing the subsequent mating success of courting males and allowing the late arrivals to gain reproductive success (Baker, 1983). In the broad-horned flour beetle, Gnatocerus cornutus, which reproduces through territoriality and ritualized physical fighting, SSB can act as a form of intrasexual competition, as pairs of males engaging in SSB subsequently showed reduced aggression where one consistently mounted the other, compared to pairs without SSB or fluctuating roles (Lane, Haughan, Evans, Tregenza, & House, 2016). Moreover, submissive males receiving SSB attempts had reduced subsequent mating success and reproductive fitness, relative to dominant males engaging in SSB or males exhibiting no SSB (Emlen, 2008;Lane et al., 2016). 
Despite the differences in rates of male SSB between our two selection regimes, we still found significant levels of SSB across all our mate choice trials, even those involving males that evolved through a Male-biased selection regime. Although varying by context and test condition, male homosexual mating activity is common in T. castaneum, and previous experiments using singlesex or mixed-sex quartets revealed SSB levels that were similar to our own. In a selection experiment across three generations, Castro, Toro, and L opez-Fanjul (1994) found evidence for genetic control of SSB, with realized heritability across four replicates of ca. 10%. Within two-male þ two-female quartets, an average of 30% of all male mating activities involved homosexual mountings, and this could be increased to around 40% across three generations by selecting males engaging in most SSB to sire the next generation (Castro et al., 1994). When focal males are placed within a quartet containing one male and two female mating targets, 33% SSB would be expected if 'focal' males exhibited no discrimination, so the homosexual mating rates found by Castro et al. (1994) indicate near-random mate choice. Using groups containing both T. castaneum and Tribolium confusum, Serrano et al. (2000) explored the relative levels of SSB shown by males of either species, by housing two conspecific males with two to four females of the other species. Tribolium castaneum males frequently engaged in SSB, with 53% of the mating activity being homosexual, versus 32% in T. confusum (Serrano et al., 2000). The mating activity we found in our Female-biased regime males also indicated a lack of discrimination, with an average of 49% of first mounts and 44% of total mounting investment on the male SSB target (Figs 3 and 4), contrasting with 39% and 32% of SSB mounting activity for the Malebiased males. Although conditions and contexts are very different, our SSB data by comparison with those of Castro et al. (1994) and Serrano et al. (2000) suggest that SSB remained similar to stock conditions in the Female-biased regime but reduced through selection in our Male-biased regime. However, because our experimental evolution regimes are very different to ancestral stock conditions in terms of effective population size, density and the possibility for uncontrolled drift and other evolutionary changes over 82e106 generations of stock maintenance, we cannot know whether SSB increased in the Female-biased regime and/or decreased in the Male-biased regime, relative to their ancestors. To standardize behavioural measurements, we assayed SSB within trios where focal experimental line virgin males were tested . Differences between focal males from Female-biased (purple) versus Malebiased (blue) experimental evolution regimes in the preferred sex of their first mounting or mating target. Sexual behaviour is defined as (a) attempting to mate through mounting, or (b) matings in which unbroken mounting lasted for >36 s. Error bars are 95% confidence intervals and experimental trial sample sizes are presented at the base of the plots. simultaneously against virgin male or female mate choice targets. 
The use of trios all containing virgins allowed consistent assays and tight experimental control, but the mating conditions are obviously different to those operating within each line through experimental evolution, where there will be added variation in mating conditions, individual mating history and interference, and so the costs and benefits of SSB may also be different. However, it would be impossible to measure SSB in larger groups of males and females without the addition of uncontrolled confounds arising from variation in individual mating history and status, as well as interference between adults (which is why we used tethered mating targets). There is good evidence that mating history and experience affect SSB in this system. In experiments investigating how social conditions affect SSB in T. castaneum, Martin et al. (2015) showed that SSB varied depending on prior sociosexual exposure, with increasing homosexual activity when males were held in all-male groups for longer periods, and SSB activity was greatest among males held in isolation compared with groups. In these experiments, SSB was measured in the experimental males when they were placed into quartets with three other males for 15 min observation periods. Seven-day-old males previously isolated showed an average of 4.5 homosexual mounts per 15 min observation period in the quartets, whereas males previously held in groups with seven other males exhibited 1.5 SSB mounts per trial; males previously housed with seven females showed 2.5 SSB mounts per trial (Martin et al., 2015). Previous sociosexual experience therefore has a strong influence over the relative levels of SSB, which is why we applied standardized conditions to assay SSB following experimental evolution. If SSB occurs due to inaccurate mate discrimination, and does not help in maleemale competition, why do male flour beetles still engage in significant levels of SSB, even when they have been under more than 100 generations of strong evolutionary pressure through our Male-biased selection regime to reduce this costly activity? It is possible that the reproductive ecology of T. castaneum makes it particularly challenging for males to distinguish between potential mates. Flour beetles live within their stored product food, often burrowing through it, and frequently at high infestation densities. Most communication is through olfaction (Shimomura et al., 2010), but sex-specific cues may be hard to signal or receive in these conditions, making discrimination between males and females as mates difficult Castro et al., 1994;Engel . Comparison of sexual activity of focal males from Male-biased (blue bars) and Female-biased (purple bars) experimental evolution regimes. Sexual activity is described as: (a) proportion of mounting events targeting the female, (b) proportion of mating events targeting the female, (c) proportion of total mounting time on the female target, and (d) proportion of total mating time with the female target. Matings are defined as periods of unbroken mounting lasting >36 s. Box plots have a horizontal median line, interquartile range (IQR) boxes and 1.5 Â IQR whiskers. Sample sizes are below boxes, empty dots are experimental trial data points and black dots are means. et al., 2015). Linked to this is the possibility that it may be less costly for overall male reproductive fitness to mate indiscriminately if, by being more discriminatory, there is the potential to lose heterosexual mating opportunities. 
Tribolium castaneum is a promiscuous species (Michalczyk et al., 2011a) in which males have a high potential reproductive rate (Lumley et al., 2015). Engaging in SSBs, although erroneous and without direct reproductive benefit, may enable males to maximize lifetime reproductive success if the species possesses a challenging mate discrimination system. If evolving a more discerning mate choice system also translates into more missed mating opportunities with hard-to-identify females, SSB can evidently persist in the T. castaneum mating system, even in male-biased evolutionary regimes where homosexual matings will impose more significant reproductive fitness costs for individual males (Taylor & Sokoloff, 1971;Thornhill & Alcock, 1983). Declaration of Interest The authors have no competing financial interests related to this research. Author Contributions Experimental evolution lines have been maintained for 10þ years by Ł.M., O.Y.M., A.J.L., R.V., K.S. and M.J.G.G. M.J.G.G. and K.S. conceived and designed the study, with input from all authors. K.S., T.T., J.G. and R.V. collected the mating trio data, and K.S. led the analyses. The manuscript was written by K.S. and M.J.G.G., with contributions from all authors.
Technologies and means of protection in process of managing pumping stations . This paper aims to analyze automatic protection and control measures at pump stations in the event of constant accumulation of groundwater and water dripping from the pump, leading to flooding in the pump station engine room. To prevent this, the pumping station's machine room should monitor the level of groundwater and water leaks from the pump and, promptly, automatically remove excess water with a pump. The excess water is collected in special reservoirs. From these, the ERSU series sensor, which is based on a contact electrode controller, does not allow it to constantly monitor the level in the storage tank within the required limits; therefore, as a completely automated system, it is unstable and cannot support the regulatory requirements for preventing it from starting one of its technical operations. The article discusses the positive aspects of technical solutions and experimental work in the DRV5023 IC with MSP430G2131 controller and no more than 2.7 mA consumer, NE555 IC-based timer and pump mode, stable contactless sensor, sensor-based automatic monitoring, and pumping system. The results are given. (12-17 seconds.) Preparation to start the pumping process is also a standard 4.7 V. power supply circuit. The work is based on production experience and is innovative. Introduction Many years of experience and the observations of the authors of this work in the field of production operation of energy and automation equipment of pumping stations made it possible to establish noticeable improvements to the structure of auxiliary technological processes and operations, especially in matters of protection of equipment and the production process taking place in the engine room of a pumping station from open water are possible.The efficiency of pumping station operation and sustainable protection against the flooding of its premises, including the station's engine room, depends entirely on the reliable operation of submersible pumps that provide drainage water pumping.Under production conditions, submersible pumps of some stations, for various reasons, are not able to provide complete removal of drained water.In addition to drained water in the engine room, there are leaks from pumping units, the origin of which is not considered in this paper.Still, their volume, together with the water not drained by submersible pumps, makes it necessary to add to the structure of the technological process in the engine room to protect the latter from flooding.These additions can be attributed to auxiliary processes, but they can provide reliable protection of the machine room from flooding. Materials and methods The general structure of the functioning of the pumping station includes several technological sections, which are auxiliary, and provide conditions for technical and technological operation and safe operation of electrical equipment.Such a site includes a technological section for pumping out drainage water (TUODV) (Fig. 
1).Generally, these sites are distributed at two levels of water transportation.At the first level of the suction pipeline, there is a section of the fore-chamber and a technological section for garbage protection and fish protection.On the second level of the pumping station premises, there are technological sections for pumping pumps, pumping out drainage water, a control section, a monitoring and a signaling section at the pumping station.The equipment and means of automation of the technological section for pumping out drainage water include a well, a submersible pump, a submersible pump control station, and outlet pipes.At the same time, at some pumping stations: Kibrai-TashGres in the Kibray district, Ramadan in the Zangiota district, Chirchik in the Bostanlik district, Boz-suv in the Chinaz district of the Tashkent region, within the framework of the technological section for pumping out drainage water, form an additional section in as part of a storage tank for drainage water and water flowing into the storage tank from leaks of working pumping units, an electrode regulator, a level indicator in the storage tank, a horizontal centrifugal pump for pumping water from the storage tank.The controller-indicator performs two technological operations: signaling the emergency state of the level in the storage tank and automatically connecting the pump for pumping water [1].Thus, the storage tank is a deep water collector in which, to prevent flooding of the pumping station, drained water and leaks are collected, and with the help of a signaling regulator, it is pumped from there. If this is not done, the water from the reservoir will heat up the engine room of the pumping station. The practice of operating the technological process constructed in this way has shown that the electrode signaling device-level controller (ERSU), which is part of the TUODV, has the disadvantages of relay-contact origin and, like the signaling device, is notable for its inability to continuously monitor the level.These shortcomings manifest in high humidity conditions affecting the electronic relay unit of the ERSU.In addition, the lack of continuous measurement of the water level in the reservoir does not allow one to have information before the moment the pumping pump is switched on, as required by the established operating standard at the pumping station [2,3].Namely -the operator for 12-17 s. must be aware of the start of this operation before turning on the suction pump.Also, some electrodes operated as part of the TUODV at the pumping stations in Uzbekistan, the level signaling device, although they are made of the appropriate alloy steel, however, over time, are subjected to biological "raids" and further corrosion, which significantly affects the reliability of the entire technological section of water pumping. 
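The operating requirement sketched above, continuous level measurement in the storage tank plus a 12-17 s notification to the operator before the pump is energized, can be expressed as a small control loop. The sketch below is illustrative only; the thresholds, the simulated level readings and the I/O hooks are assumptions, not taken from the station's actual control code.

```python
import time

# Illustrative thresholds and timings (assumptions, not the station's set-points).
START_LEVEL = 0.80       # fraction of tank depth at which pumping must begin
STOP_LEVEL = 0.15        # level at which the pump is switched back off
PRESTART_NOTICE_S = 15   # inside the required 12-17 s operator readiness window

def control_step(level, pumping, notify=print, set_pump=print):
    """One pass of the control logic; returns the new pump state."""
    if not pumping and level >= START_LEVEL:
        notify(f"operator notice: pump starts in {PRESTART_NOTICE_S} s (level {level:.2f})")
        time.sleep(PRESTART_NOTICE_S)   # readiness window before the pump is energized
        set_pump("pump ON")
        return True
    if pumping and level <= STOP_LEVEL:
        set_pump("pump OFF (tank emptied)")
        return False
    return pumping

# Simulated level readings standing in for the continuous non-contact sensor.
pumping = False
for level in (0.40, 0.60, 0.85, 0.50, 0.30, 0.10):
    pumping = control_step(level, pumping)
```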
As a result, water accumulates at pumping stations in the engine room and floods the pumping station, which leads to a long-term shutdown of pumping units.There are cases when the flooding of the machine room led to the burnout of the electric motor of the pumping unit and the complete failure of the electrical network of the pumping station [3,1].Formulation of the problem.Taking into account the state of the issue with the indicated shortcomings arising from the operation of the TUODV, the possibilities were studied, and proposals were made for the use of automation tools for pumping out pumping stations from flooding and control using a non-contact level sensor built on the principle of the Hall effect and schematic solutions at the technological site for pumping out drainage water, providing established technological requirements.Various sensors are built on the Hall effect, including level sensors [4,5].The auxiliary technological area for pumping water (Fig. 2) works as follows.In an arranged built-storage tank, water is collected from leaks of pumps from the machine room of M.Z. and non-diverted drained water D.V.This water must be pumped out to prevent it from overflowing through the reservoir and flooding the pumping station.The Hall sensor is permanently installed on the inner wall of the tank above the magnet with a float.An increase in the tank's water level leads to the magnet's approach to the sensor, and at a critical level value, through the control unit B.U. the centrifugal pump Ts.N. and water is pumped out of the reservoir and discharged from the premises of the pumping station. Results and discussion The solution methods in this work involve the use of the features of the operation of the Hall sensor at the time of entry/exit into the operating mode of measurement, the aggregation of circuit designs in the control unit, the autonomy of the monitoring and control system, the minimum power consumption, as well as mobility at a low cost compared to with ERSU regulator.Figure 3 shows a schematic diagram of the operation of the Hall sensor with the MSP430G2131 controller on the DRV5023 IC [6,7].Using it with a low power-consuming microcontroller, it is possible, thanks to a combination of software and hardware capabilities, to implement a battery-powered sensor [8,9] or with an autonomous standard 4.7 V power supply.An example of technical implementation involves connecting the Hall sensor directly to the microcontroller outputs and (if necessary) periodically turning it on for measurement.This mode can significantly reduce the load on the power supply [9,10].The purpose of this solution is to reduce the average current consumption by reducing the active time of the sensor itself. The longer the sensor inactivity time, the lower the average current consumption of the circuit [10].The use of microcircuits of the DRV5023 family [12] made it possible to have low current consumption (2.7 mA) of electricity, did not require an additional stabilizer, and also provided a fast turn-on time (35 μs). 
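The power-saving argument above is, in essence, a duty-cycle calculation: the average supply current falls in proportion to the fraction of time the sensor is actually powered. A rough worked example follows; the 2.7 mA active current and 35 µs turn-on time are the DRV5023 figures quoted above, while the sleep current, sample time and polling periods are assumptions for illustration.

```python
# Average-current estimate for a duty-cycled Hall sensor.
# Active current and turn-on time are the DRV5023 figures quoted in the text;
# sleep current, sample duration and polling periods are illustrative assumptions.

ACTIVE_CURRENT_A = 2.7e-3    # sensor powered and measuring
TURN_ON_TIME_S = 35e-6       # time needed before a valid reading
SAMPLE_TIME_S = 65e-6        # assumed additional time to latch the output
SLEEP_CURRENT_A = 1e-6       # assumed controller + sensor-off leakage

def average_current(period_s):
    """Mean supply current when the sensor is woken once every period_s seconds."""
    active_s = TURN_ON_TIME_S + SAMPLE_TIME_S
    duty = active_s / period_s
    return duty * ACTIVE_CURRENT_A + (1 - duty) * SLEEP_CURRENT_A

for period in (0.01, 0.1, 1.0):   # 100 Hz, 10 Hz and 1 Hz level polling
    print(f"poll every {period:>5} s -> {average_current(period) * 1e6:.1f} uA average")
```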
Consider the operation of the recommended device when the water level in the tank drops after the pump is switched on. As the float with its magnet descends, the sensor leaves its actuation region and its output state changes [5,13]. At the same time, field observations have shown that, because of the high sensitivity of the sensor, fluctuations of the falling level and, in some cases, rapid refilling of the tank cause the pump motor to switch on and off repeatedly. This leads to unstable operation of the automatic pump control system. To prevent this, a timer based on the NE555 IC was introduced into the monitoring and control system (Fig. 4) [14]. It keeps the pump motor running steadily until the tank is empty. When the timer interval expires, the system switches back to control from the level sensor; the reservoir begins to fill again, and the process repeats. Using potentiometer R1 (Fig. 4), the switch-off delay of the actuator (the pump motor) can be adjusted, changing the pumping duration, i.e. manually setting the timer interval in the range from 1 to 25 seconds (a numerical check of this range is given after the figure captions below). This, in turn, automatically notifies personnel of the completion of pumping and provides the 12-17 second readiness interval required before the pump is started. Technically (Fig. 4), this is achieved by connecting a 250 kΩ potentiometer in series with a fixed 10 kΩ resistor; the timing capacitor is 100 μF. The timer circuit works as follows. In the initial state, pin 2 is at a high level (logic 1, from the power supply) and pin 3 is at a low level (logic 0); transistors VT1 and VT2 are off. When a positive pulse is applied to the base of VT1, current flows through the circuit Vcc-R2-collector-emitter-common wire, VT1 turns on and triggers the NE555 into its timing mode [14]. A positive pulse then appears at the output of the IC, which turns on VT2, and the emitter current of VT2 operates the relay. If necessary, maintenance personnel can interrupt the cycle at any time by briefly shorting the Reset pin to ground. Conclusions. Operating experience and observation of the processes occurring in the engine rooms of pumping stations indicate the need for an auxiliary process, with its own instruments and equipment, for removing drainage water and pumping out accumulated leaks in order to avoid flooding of the engine room. The study of possibilities for automating the control of and protection against such flooding made it possible to create, and test in the laboratory, a local set of tools implementing this auxiliary automated technological process, including a storage tank, controls, instrumental monitoring of drained water and leaks in the tank, a horizontal centrifugal pump for automatic removal of excess water, and means of generating an alarm to notify the dispatching service about the state of the process. The complex includes: a contactless Hall level sensor based on the DRV5023 IC with the MSP430G2131 controller, with a current consumption of no more than 2.7 mA; a timer based on the NE555 IC, which ensures stable operation of the automatic pumping control system and the 12-17 s readiness mode for starting the pumping process; and a standard 4.7 V power supply. Fig. 1. Technological sections in the technical process of transporting water at a pumping station.
Figure 2 shows the proposed functional diagram of continuous monitoring of the water level in the storage tank of the TUODV at the pumping station, based on the Hall sensor, and of pumping the water out of the tank. Fig. 2. Functional diagram of automatic control and pumping of water from the reservoir. Fig. 3. Schematic diagram of the operation of the Hall sensor with ultra-low power consumption. Fig. 4. Schematic diagram of the timer on the NE555 chip.
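As a plausibility check of the 1-25 s adjustment range quoted for the timer stage, assume the NE555 is used in the standard monostable configuration (the actual circuit in Fig. 4 is not reproduced here), where the output pulse width is

$$ T = 1.1\,R\,C. $$

With $C = 100\,\mu\text{F}$ and $R$ ranging from the fixed 10 kΩ alone up to 10 kΩ + 250 kΩ,

$$ T_{\min} \approx 1.1 \times 10\,\text{k}\Omega \times 100\,\mu\text{F} \approx 1.1\ \text{s}, \qquad T_{\max} \approx 1.1 \times 260\,\text{k}\Omega \times 100\,\mu\text{F} \approx 28.6\ \text{s}. $$

This bracket of roughly 1 s to just under 30 s is consistent with the stated 1-25 s range (25 s corresponds to about 217 kΩ of added resistance); the exact figures depend on capacitor tolerance and on the actual wiring of R1, so this is only an order-of-magnitude check.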
Microbial Utilisation of Aboveground Litter-Derived Organic Carbon Within a Sandy Dystric Cambisol Profile
Litter-derived dissolved organic carbon (DOC) is considered to be a major source of stabilised C in soil. Here we investigated the microbial utilisation of litter-derived DOC within an entire soil profile using a stable isotope labelling experiment in a temperate beech forest. The natural litter layer of a Dystric Cambisol was replaced by 13 C enriched litter within three areas of each 6.57 m −2 for 22 months and then replaced again by natural litter (switching-off the 13 C input). Samples were taken continuously from 0 to 180 cm depths directly after the replacement of the labelled litter, and 6 and 18 months thereafter. We followed the pulse of 13 C derived from aboveground litter into soil microorganisms through depth and over time by analysing 13 C incorporation into microbial biomass and phospholipid fatty acids. Throughout the sampling period, most of the litter-derived microbial C was found in the top cm of the profile and only minor quantities were translocated to deeper soil. The microbial 13 C stocks below 30 cm soil depth at the different samplings accounted constantly for only 6-12% of the respective microbial 13 C stocks of the entire profile.
The peak in proportional enrichment of 13 C in subsoil microorganisms moved from upper (≤ 80 cm soil depth) to lower subsoil (80-160 cm soil depth) within a period of 6 months after switch-off, and nearly disappeared in microbial biomass after 18 months (< 1%), indicating little long-term utilisation of litter-derived C by subsoil microorganisms. Among the different microbial groups, a higher maximum proportion of litter-derived C was found in fungi (up to 6%) than in bacteria (2%), indicating greater fungal than bacterial dependency on litter-derived C in subsoil. However, in contrast to topsoil, fungi in subsoil had only a temporarily restricted increase in litter C incorporation, while in the Gram-positive bacteria, the C incorporation in subsoil raised moderately over time increasingly contributing to the group-specific C stock of the entire profile (up to 9%). Overall, this study demonstrated that microorganisms in topsoil of a Dystric Cambisol process most of the recently deposited aboveground litter C, while microbial litter-derived C assimilation in subsoil is low. Keywords: soil profile, European beech, litter-derived carbon, temporal dynamics, subsoil, microbial carbon utilisation, acid soil INTRODUCTION Dissolved organic carbon (DOC) is the most bioavailable and mobile soil organic carbon (SOC) fraction within the entire soil profile and a major C source in subsoil (1). Dissolved OC in forest soils originates mainly from the organic layer, is relocated within the soil profile by percolating soil water, and becomes partially immobilised in mineral soil horizons. With passage through the soil profile, both the concentration of DOC decreases and its composition changes to lower proportions of labile C compounds (2). The highest concentrations of DOC are generally found directly below the O horizon, decreasing steeply with increasing depth in most soils. Michalzik et al. (3) reported net reductions of up to 90% in DOC concentration in the first metre of the soil profile. Major processes contributing to DOC dynamics within soil profiles are temporal and selective immobilisation (sorption, co-precipitation) as well as repeated microbial processing and remobilisation (desorption, dissolution) of the organic compounds (4)(5)(6). However, the contribution of microbial DOC retention to the SOC pool within the soil profile is as yet poorly understood (7). The activity of soil microorganisms strongly affects DOC dynamics within the soil profile. During passage through the soil, large proportions of DOC are consumed by microorganisms for anabolic and catabolic processes, contributing to a continuous decrease in DOC concentration in the soil solution with increasing depth (8). The preferential microbial utilisation of easily assimilable OC compounds, as well as preferential sorptive retention of lignin-derived polymeric molecules, alters DOC composition and leads to an increase in proportions of more complex and microbially-derived OC compounds from topsoil to subsoil (5,9,10). However, not only does microbial activity change the quantity and composition of DOC as soil depth increases, but, in turn, changes in DOC concentration and properties also affect microbial community biomass, composition, and activity. The considerably lower microbial biomass in subsoil than in topsoil is largely related to the generally steep decline in available SOC with increasing soil depth (11). 
The typically heterogeneous distribution of microbial biomass in subsoil further reflects increasing spatial and seasonal heterogeneity in DOC fluxes with increasing soil depth (e.g., preferential flow paths vs. matric soil), with resulting greater variability in microbially available SOC (12). Besides abiotic factors such as bulk density and water content, changes in microbial community composition with soil depth are mainly attributable to changes in the composition of SOC and associated microbial group-specific feeding strategies (13,14). A large portion of the microbial communities inhabiting near-surface environments are adapted for rapid metabolism of labile OC compounds (e.g., root exudates and/or recent litter-derived C); these communities have proportionally high abundances of fungi and Gram-negative bacteria (11). Along with a decrease in the availability of more labile C sources with soil depth, the microbial communities shift towards an increasing proportion of microorganisms with low metabolic activity. These microorganisms are better adapted to resourcelimited conditions, feed on previously processed C sources, and are predominantly found among Gram-positive bacteria (15,16). The small proportion of labile SOC, facilitated by mineralorganic interactions, and the concomitant low metabolic activity of microbial communities in deeper soil depth is reflected in greatly reduced turnover rates and greater stability of subsoil C as indicated by radiocarbon ages of up to several thousand years (17,18). The C sequestration potential of soils is further affected by the fungal:bacterial ratio, whereby enhanced C storage has been related to a greater abundance of fungi (19). Although still under debate, this relationship between prolonged and increased C storage and higher fungal abundance is thought to be attributable to higher carbon use efficiency (CUE), more recalcitrant necromass, and longer C residence time in the living biomass of fungi than of bacteria (20,21). While the actual residence time of C in bacteria is estimated to range from days to weeks, in fungi it is on the scale of months (22,23). All these results have most often been obtained through separate studies of top-and subsoils, and have not considered the temporal dynamics of DOC downward migration within entire soil profiles (12). More specifically, we are not aware of studies comprehensively investigating the dynamics of microbial DOC utilisation during DOC transport through soil profiles. The aim of our experiment was thus to determine the allocation and incorporation of litter-derived DOC in different C pools throughout an entire soil profile. Whereas, Liebmann et al. (24) characterised the fate of the litter-derived C in chemical pools of OM (e.g., particulate and mineral-associated OM), in the present study we focused primarily on the extent and persistence of microbial utilisation of litter-derived C across depths and over time. More specifically, we investigated the respective roles of different microbial groups (bacteria and fungi) in this process. 
We tested the following hypotheses: (1) Litter-derived C is largely retained in the upper centimetres of the soil profile due to an active microbial community in near surface soil horizons; (2) A small quantity of litter-derived as well as microbially processed C migrates into deeper soil horizons and acts as an important C source for the local microbial community; (3) Bacteria and fungi in subsoil differ in their C utilisation patterns, with a longer C persistence in fungi than in bacteria. To address these hypotheses, we conducted a plot-scale stable isotope labelling experiment where the natural litter layer of a temperate beech forest was temporarily replaced by 13 C-labelled litter for a period of 22 months. The labelled litter was then replaced by the natural litter, and soil samples were taken from 0 to 180 cm soil depths immediately after replacement of the labelled litter, and 6 and 18 months thereafter. By 13 C analyses of microbial biomass ( 13 C mic ) and phospholipid fatty acids ( 13 C-PLFA) we followed the incorporation and persistence of the labelled litter-derived C into the different microbial C pools across depth and over time. Site Description The experiment was conducted in a ∼100-year old European beech (Fagus sylvatica L.) forest 40 km northwest of Hannover in Lower Saxony, Germany (Grinderwald; 52 • 34' 22" N, 9 • 18' 51" E; 100 m.a.s.l.). Besides the stand-forming European beech, the Grinderwald site has only few other vegetation and little understory. The climate is temperate and humid with mean annual temperature and precipitation for the period from 1981 to 2010 of 9.7 • C and 762 mm, respectively (data obtained from the closest German Meteorological Service weather station in Nienburg/Weser). During the experimental period, precipitation was measured by a local weather station directly at the experimental site (Supplementary Figure 1). Soil parent materials are fluvial and aeolian deposits from the Saale glaciation (25). The soil has been classified as an acid and sandy Dystric Cambisol (26) and pH values increased from 3.3 in topsoil to 4.5 in subsoil. Podsolization characteristics within the soil profile are only weakly expressed or absent. The predominant humus form is moder, and mean proportions of sand, silt, and clay are 77.2, 18.4, and 4.4%, respectively. During the establishment of the soil observatories and the various soil samplings we did not observe any signs of bioturbation by earthworms in the acidic soil of the Grinderwald. Additional soil properties of the Grinderwald site are listed in Supplementary Table 1 [adopted from Preusser et al. (27)]. Soil Observatories Three subsoil observatories were installed on the study site (12). The soil observatories are circular polyethylene shafts of diameter 150 cm, providing access to each undisturbed soil profile to a depth of 200 cm. Each soil observatory is equipped with soil moisture probes to continuously measure the volumetric water content at 10, 30, 50, 90, 150, and 180 cm soil depths within the undisturbed soil profiles (Supplementary Figure 1). Experimental Design In January 2015, the original litter layer, in circular areas surrounding the three soil observatories, each with a radius of 150 cm, was removed. 
Following removal of the natural litter, 13 C-labelled beech leaf litter was applied to one half and nonlabelled litter to the other half (each 6.57 m −2 ) of the surface area surrounding each soil observatory, in consideration of the initial litter amount (275 g per m −2 ) and thickness (Figure 1). The applied 13 C-labelled beech leaf litter was prepared by mixing highly labelled litter (10 atom-%; provided by IsoLife B.V., Wageningen, The Netherlands) with unlabelled litter, achieving a final δ 13 C enrichment of 1,241‰ at observatory I and 1,880‰ at observatories II and III (δ 13 C differences were due to restrictions in litter production capacity). To avoid windblown dispersal, the applied litter was covered with coarse mesh nets. After 22 months of field exposure (November 2016), the enriched litter was replaced by non-labelled litter originating from the study site (switch-off). This exchange allowed us to follow the persistence of the temporally distinct 13 C signal in the different C pools within the soil profile. Soil Sampling and Preparation In total, three sampling campaigns were carried out, with the first sampling the day of the labelled litter replacement with nonlabelled litter (November 2016), the second one 6 months (May 2017), and the third one 18 months (May 2018) later. At the first and third samplings, three replicate soil samples on each side of the circle (with 13 C-labelled litter and non-labelled litter) were taken from 0 to 180 cm soil depths by percussion drilling at the three soil observatories. Each of the obtained soil cores was divided into 15 subsamples (0-5, 5-10, 10-20, 20-30, 30-40, 40-50, 50-60, 60-70, 70-80, 80-90, 90-100, 100-120, 120-140, 140-160, and 160-180 cm soil depth) and mixed with the respective subsamples of the two other soil cores taken from the same side of the soil observatory. Due to limitations in surface area around the observatories (avoidance of soil compaction and damage to the installed soil probes), the second soil sampling was only possible by vertical sampling of six depth increments, from the soil surface down to 50 cm soil depth and by horizontal sampling at discrete depths of 60-70, 80-90, and 140-160 cm from the inner wall of the soil observatories using a drill rod. Sample partitioning and number of replicates were analogous to the two other soil samplings. This resulted in a total number of 90 samples (15 depth levels × 3 observatories × 2 litter treatments) at the first and third sampling date each, and of 54 samples (9 depth levels × 3 observatories × 2 litter treatments) at the second sampling date. All samples were sieved <2 mm immediately after sampling to remove larger particles. On one aliquot the gravimetric water content of the samples was determined, while the other aliquot was frozen at −23 • C until further analysis. Soil Organic C The labelled litter-derived SOC stocks (g labelled litter C m −2 ) were calculated for each sampling date based on the concentration of labelled litter-derived C (ng g −1 dry soil) and the depth-specific bulk density (data not shown) for each depth increment of the soil profile. The stocks for the missing depth increments at the second sampling were interpolated based on the respective stocks above and below. SOC data for the first and third sampling dates was obtained from Liebmann et al. (24). To determine the SOC at the second sampling date, 0.3 g of each soil sample was dried at 60 • C for 72 h and ground with a ball mill. 
Subsamples of 15-25 mg each were weighed into tin capsules and measured for total SOC and isotopic composition by dry combustion using an elemental analyzer (EA, Euro EA 3000, Euro Vector, Milan, Italy) coupled with an isotope ratio mass spectrometer (IRMS, Delta Plus XP, Thermo Finnigan MAT, Bremen, Germany) as described by Müller et al. (28). The calculation of labelled litter-derived C (%) was done using the following equation: where δ sample is the δ 13 C value of the respective sample, δ reference is the δ 13 C mean value of the respective non-13 C-addition samples, δ litter is the average δ 13 C value of the applied beech leaf litter and δ soil is the average δ 13 C value of the respective soil depth. The concentration of labelled litter-derived C (ng g −1 dry soil) was calculated based on the proportion of labelled litter C to total SOC. Microbial Biomass C and Extractable Organic C Microbial biomass C (C mic ) was determined using the chloroform fumigation extraction (CFE) method of Vance et al. (29). In brief, 10 g of each soil sample was fumigated with ethanol-free chloroform for 24 h to release C mic . After removing the chloroform, 40 ml of 0.025 M K 2 SO 4 solution was added to each of the subsamples, which were then shaken for 30 min on a horizontal shaker at 250 rev min −1 and centrifuged for 30 min at 4,420 × g. A second 10 g subsample of each soil sample was treated similarly but without fumigation to determine extractable organic C (EOC). The measurement of organic C in the clear supernatants of both fumigated and non-fumigated subsamples was conducted on a TOC-TNb Analyzer Multi-N/C 2100S (Analytik Jena, Jena, Germany) as described by Marhan et al. (30). The C mic content was calculated by subtraction of the C content of the non-fumigated sample from the C content of the corresponding fumigated sample using a k EC factor of 0.45 (31). The EOC content was calculated based on the non-fumigated samples only. For δ 13 C determination in C mic , 10 ml of the fumigated and non-fumigated extracts were evaporated at 60 • C in a rotatory evaporator (RVC 2-25, Martin Christ, Osterode am Harz, Germany). The residues were ground and weighed into tin capsules with minimum 10 mg C per sample and measured for isotopic composition as described above. The calculation of δ 13 C of the microbial biomass was done as described by Marhan et al. (30) using the following equation: where C f and C nf are extracted organic C content (µg C g −1 dry soil) of the fumigated and non-fumigated samples and δ f and δ nf are the corresponding δ 13 C values. Calculation of the percentage (%), content (ng g −1 dry soil), and stocks (g m −2 ) of labelled litter-derived C in C mic and EOC was done as described for SOC. The δ 13 C PLFA values were determined with an HP 6,890 gas chromatograph (Agilent Inc., Santa Clara, CA, USA) coupled with a combustion III Interface (Thermo Finnigan, Waltham, MA, USA) to a Delta Plus XP mass spectrometer (Thermo Finnigan MAT, Bremen, Germany) as described by Müller et al. (28). The δ 13 C values of all FAMEs were corrected for the addition of a methyl group by using a mass balance equation (38). The methanol used for methylation had a δ 13 C value of −40.23 ‰. Calculation of labelled litter-derived C (%) was done as described for SOC. Mean labelled litter-derived C (%) in the different microbial groups was calculated according to the relative proportions of the respective fatty acids to the total of the microbial group-associated fatty acids. 
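A compact restatement of the calculations described above may help orientation; the expressions below follow the standard two-pool isotope mixing approach implied by the variable definitions given in the text and the usual fumigation-extraction and methylation mass balances, and are given here as the conventional forms of the cited procedures rather than as equations reproduced from the original article. The fraction of labelled litter-derived C is

$$ f_{\text{litter}}\,(\%) = \frac{\delta_{\text{sample}} - \delta_{\text{reference}}}{\delta_{\text{litter}} - \delta_{\text{soil}}} \times 100, $$

the δ13C of microbial biomass follows the fumigation-extraction mass balance

$$ \delta^{13}C_{\text{mic}} = \frac{\delta_{f}\,C_{f} - \delta_{nf}\,C_{nf}}{C_{f} - C_{nf}}, $$

and the correction of FAME values for the methyl group added during methylation takes the form

$$ \delta^{13}C_{\text{PLFA}} = \frac{(n_{C} + 1)\,\delta^{13}C_{\text{FAME}} - \delta^{13}C_{\text{MeOH}}}{n_{C}}, $$

where $n_{C}$ is the number of C atoms of the respective PLFA and $\delta^{13}C_{\text{MeOH}} = -40.23$‰ as stated above.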
The content of labelled litter-derived C (ng g −1 dry soil) incorporated into the different microbial groups was determined based on the relative proportion of labelled litter-derived C to the total C of the group-specific fatty acids and the molecular weight of C (12.011 g mol −1 ). Calculations of microbial group-specific labelled litterderived C stocks (g m −2 ) were done as described for SOC. Statistical Analyses The effects of sampling date, soil depth, and their interactions on the variables SOC, EOC, C mic , PLFA, and on the relative and absolute labelled leaf litter-derived C incorporation into SOC (SO 13 C), EOC (EO 13 C), C mic ( 13 C mic ), and PLFAs ( 13 C-PLFA) were performed in SAS Version 9.4 (SAS Institute Inc., Cary, NC, 2016) using the GLIMMIX procedure for mixed linear models. Due to the experimental design, observatory was set as random factor. Significance was tested at p < 0.05 in all cases. Soil Organic Carbon and Contribution of Litter-Derived 13 C SOC content was highest in 0-5 cm soil depth at all sampling dates (up to 95.3 mg g −1 dry soil) and steeply decreased with increasing soil depth to a lowest value of 0.2 mg g −1 dry soil in the 160-180 cm soil depth; SOC content differed slightly by sampling date (date × depth; F = 7.40, p < 0.001; Supplementary Figure 2A). The depth gradient was more pronounced for the content of labelled litter-derived C, which decreased sharply from the topsoil to the subsoil (Supplementary Figure 2B). Litter-derived 13 C concentration did not exceed 1.8 µg g −1 dry soil below 30 cm soil depth, but did slightly increase below 60 cm soil depth at the second compared to the first sampling (date × depth; F = 2.08, p < 0.05). The overall lowest content of litter-derived C in SOC was observed at the third sampling. The total SO 13 C stock within the soil profile decreased from the first to the third sampling from a mean of 17.7 g to 3.25 g m −2 , respectively (Figure 2A). Over time, most of the litter-derived 13 C was retained in the top 5 cm (from 55.3 to 82.2% of total retained labelled litter C) followed by 5-30 cm soil depth (from 14.3 to 38.3%) and the subsoil below 30 cm (from 3.5 to 6.5%). At the third sampling, the highest proportion of the initially litterderived SO 13 C stock measured at the first sampling was found in 5-30 cm (49.2%) followed by the subsoil below 30 cm (33.7%) and the top 5 cm (12.4%) (Figure 3A). The negligible importance of the litter-derived C for SOC stocks in the different depths was indicated by the proportion (%) of labelled litter-derived C, which was highest (up to 0.5%) in 0-5 cm soil depth at each sampling date (Supplementary Figure 2C). With increasing soil depth, the relative proportion of litter-derived C generally decreased, although to a lesser extent than for content of litter-derived C. Maximum values in the subsoil (below 30 cm soil depth) were slightly higher at the second than at the first sampling (date: F = 5.44, p < 0.01; depth: F = 3.34, p < 0.01). The third sampling had overall the smallest proportion of litter-derived C within the entire soil profile (max. 0.1%). Extractable Organic Carbon and Contribution of Litter-Derived 13 C EOC decreased sharply from the topsoil (max. 244.6 µg g −1 dry soil in 0-5 cm soil depth) to the subsoil (min. 2.9 µg g −1 dry soil in 80-90 cm soil depth) and varied between the three sampling dates with the most pronounced seasonal variations at shallower soil depths (date × depth; F = 20.13, p < 0.001; Supplementary Figure 3A). 
Labelled litter-derived C in EOC substantially declined from the topsoil (max. 4.9 µg g −1 dry soil) to the subsoil (≤ 0.1 µg g −1 dry soil below 30 cm soil depth) at the first sampling date. At the second and third samplings, litter-derived C declined strongly (max. 0.25 µg g −1 dry soil) in the topsoil, but remained relatively unchanged in the subsoil (date × depth; F = 18.18, p < 0.001; Supplementary Figure 3B). In contrast to the total SO 13 C stocks, total EO 13 C stocks were more evenly distributed within the top 30 cm (0-5 cm: 58.4%, 5-30 cm: 35.1%) and contributed most to the total EOC while the subsoil below 30 cm soil depth contributed only 6.5% at the first sampling ( Figure 2B). Total EO 13 C stocks at the second and third sampling were similar to one another but considerably lower than at the first sampling, with a stronger reduction in the top 5 cm than in deeper soil depth. Comparing the EO 13 C stocks at the third with those at the first sampling indicated a proportional increase with soil depth from 3.3% in the top 5 cm to 52.7% below 30 cm soil depth (Figure 3B). The proportion (%) of labelled litter-derived C in EOC was generally highest in the topsoil, although it decreased over time (date × depth; F = 14.53, p < 0.001; Supplementary Figure 3C). In contrast, in the subsoil the largest contribution of labelled litter-derived C to EOC was measured at the second sampling. Microbial Biomass and Contribution of Litter-Derived 13 C Microbial biomass steeply decreased with increasing soil depth at all sampling dates and showed substantial seasonal variation in the three samplings, especially in the topsoil samples (date × depth; F = 4.31, p < 0.001; Supplementary Figure 4A). Concentration of labelled litter-derived C mic was highest in the topsoil (max. 6.4 µg g −1 dry soil in 0-5 cm soil depth) and steeply decreased with soil depth at the first sampling. Over time, it decreased steeply in top-and subsoil (date × depth; F = 4.96, p < 0.001; Supplementary Figure 4B), although the decrease in the topsoil was less pronounced than for EOC. The labelled litter-derived C mic stocks of the entire soil profile decreased from 0.42 g litter C m −2 at the first sampling to 0.06 g litter C m −2 at the third sampling, but again at a slower rate than for EOC ( Figure 2C). In contrast to EOC, the build-up of microbial 13 C stocks was more pronounced in the top 5 cm compared to 5-30 cm. This observed large contribution of the stocks in the top 5 cm to overall profile litter-derived C mic stocks decreased over time (from 76.1 to 20.0%), while that of the stocks in 5-30 cm increased (from 17.6 to 74.2%). In the subsoil below 30 cm, the contribution of litter-derived 13 C to total C mic stocks remained mostly stable at 5.9-11.9%, with the highest proportion at the second sampling. Microbial 13 C stocks strongly decreased from the first to the third sampling in the top 5 cm and below 30 cm soil depth, representing only 4.1 and 14.5% of the initial stocks, respectively, while C mic in the 5-30 cm soil depth still harboured 65.3% of the initial 13 C at the third sampling ( Figure 3C). In general, labelled litter-derived C contributed <3% to microbial biomass C, in most cases even <1% (Supplementary Figure 4C). The only exception was the 20-50 cm soil depth at the second sampling, with values of up to 6.4%, although this effect was not significant. 
Phospholipid Fatty Acids and Contribution of Litter-Derived 13 C The abundances of PLFA Gram+ , PLFA Gram− , and PLFA fun were highly variable between the samplings and steeply decreased from Frontiers in Soil Science | www.frontiersin.org The quantity of labelled litter-derived C incorporated into PLFAs decreased even more sharply with depth and had already, in the 10-20 cm depth, reached levels similar to those in subsoil (Supplementary Figure 5B). Accordingly, the decrease with time was most pronounced in the topsoil (date × depth; PLFA Gram+ : F = 2.62, p < 0.01; PLFA Gram− : F = 2.66, p < 0.01; PLFA fun : F = 2.32, p < 0.01). From the first to the second sampling, the maximum concentrations of 13 C in PLFAs in the deeper subsoil (below 30 cm soil depth) shifted from 40 to 90 cm (e.g., up to 1.4 ng g −1 dry soil for PLFA Gram+ ) to 80-90 and 140-160 cm (e.g., up to 1.6 ng g −1 dry soil for PLFA fun ). At the third sampling, only very small quantities of litter-derived C were present in PLFAs in the subsoil. The temporal patterns of 13 C stocks in microbial groups differed among the three soil depths 0-5 cm, 5-30 cm, and below 30 cm (Supplementary Figure 6). Gram-negative bacteria incorporated most 13 C in the top 5 cm throughout the experiment, while the relative share of 13 C in PLFA Gram+ and PLFA fun in 0-5 cm strongly decreased over time while increasing in the 5-30 cm soil depth. The incorporation of litter-derived C into PLFAs in the deeper subsoil below 30 cm soil depth decreased clearly for PLFA Gram− and PLFA fun and increased slightly for PLFA Gram+ . Accordingly, at the third sampling the microbial groups had the smallest proportional 13 C reduction compared to the first sampling at 5-30 cm soil depth, with fungi roughly doubling their 13 C stocks at this depth over time (Figures 3D-F). The proportion of labelled litter-derived C in the PLFAs of bacteria and fungi provides an indicator of substrate preferences in microbial groups. At the first two samplings, the proportions steeply decreased from the topsoil to a depth of 40 cm (Figure 4). However, although neither sampling date nor soil depth significantly influenced the proportions of litter-derived C in the different microbial groups, a temporal pattern with specific maxima at depths below 40 cm, similar to that for litterderived C content, was apparent. While at the first sampling the greatest proportion of labelled litter-derived C (e.g., up to 1.7% for PLFA fun ) appeared within the upper 80 cm of the soil profiles, this maximum (e.g., up to 6.3% for PLFA fun ) shifted to deeper soil depth (80 to 160 cm) by the second sampling, and almost disappeared at the third sampling with a maximum 13 C retention of only 0.5% (PLFA Gram− ). DISCUSSION Our study investigated the microbial utilisation of leaf litterderived C within a soil profile to 180 cm depth over 18 months. Primarily, we found that most litter C was incorporated by microorganisms within the topsoil. Fungi in particular incorporated litter C for a prolonged time within the top cm of the profile. Our study provides an estimate for the upper limit of microbial assimilation of actively circulating (and accessible) litter C (that is, not long-term stabilised in other C pools) in subsoil on the months to years scale. 
The small C quantities migrating into deeper soil led to a temporally restricted increase in litter C incorporation but contributed less than one percent to the total microbial C pool 18 months after stopping the DO 13 C input from the added litter into the mineral soil. This indicates an overall slow rate of microbially mediated SOC formation from aboveground litter in the subsoil. Litter-Derived Carbon Utilisation Across Depth and Over Time Microbial utilisation of litter-derived C was, by far, greatest in the mineral soil near the soil surface (0-30 cm) throughout the entire 18-month period. Microorganisms in near-surface environments are expected to be in a state of high metabolic activity, thus enhancing C turnover compared to those in deeper soil depth (39). This suggests rapid and direct microbial assimilation of recently introduced DOC (40) as a main driver of the high initial incorporation rates of labelled litter C in microorganisms in the top cm of the soil profile. Over time, however, microbial 13 C stocks (g litter C m −2 ) in topsoil (0-30 cm) remained much higher than those in deeper soil, although they generally decreased, and continuously accounted for ∼90% of the microbial 13 C stocks in the entire soil profile throughout the experimental period. As suggested by the observed more rapid reduction of 13 C stocks in the labile EOC fraction than in SOC, direct 13 C assimilation from DOC in topsoil likely decreased over time. However, two other pathways may be of importance for the long-term microbial labelled litter C acquisition: recycling of microbial necromass C, and assimilation of previously mineral-bound C. Microbial necromass has been shown to contribute greatly to the SOC pool and can be utilised by the microbial community either directly or after binding to reactive soil minerals (41)(42)(43). Reactive soil minerals are hotspots for C transformation and decomposition, playing a major role in sustaining the microbial C supply in both topsoil and subsoil (44). In line with this, Liebmann et al. (24) found, in the same experiment as that of the present study, that a substantial part of the newly formed mineral-associated OC was labile. Pronounced C retention in topsoil has been further explained by preferential sorption of newly introduced C to already present organo-mineral clusters in a DOC injection experiment (45). Nevertheless, soil mineralogy and related physico-chemical properties can control the extent and temporal pattern of microbial assimilation of formerly mineral-bound C, but this may vary substantially between different soil types (46)(47)(48). Over the experimental period, peaks of proportional (%) and absolute (ng g −1 dry soil) microbial incorporation of labelled litter C occurred at different soil depths, suggesting a downward movement of some litter C over time in the form of DOC. Preliminary evaluation of the DOC fluxes for the period November 2016 to May 2018 amounted to litter-derived DOC fluxes of about 0.265 (± 0.129) g m −2 in 10 cm, 0.013 (± 0.002) g m −2 in 50 cm, and 0.008 (± 0.008) g m −2 in 150 cm soil depth (Liebmann, unpublished), which showed a similarly strong decrease with soil depth, and seasonal variability as observed at the Grinderwald site for the period August 2014 to November 2015 (12). 
Small-scale litter C translocation within the topsoil environment was evidenced by the fact that microorganisms in the top 5 cm initially contributed 76% and in the 5-30 cm 18% to the total litter-derived microbial C stock, while this pattern was almost reversed 18 months later. This increasing contribution of the 5-30 cm soil depth to the total microbial 13 C stock over time could be explained either by persistence of microbially assimilated litter C or by a more balanced input/output ratio of litter C on the months to years scale. Both processes are also indicated by the distinctly high proportion of litter-derived microbial C stock in the 5-30 cm soil depth at the third sampling relative to the initial stock (65%). We found no evidence for an increased proportion of litterderived 13 C in microbial biomass C in subsoil, contradicting our hypothesis proposing that the low concentration of litter-derived 13 C measured in the subsoil SOC and EOC pools are of major importance for the microbial community. The only exception was the 20-50 cm soil depth at the second sampling, when up to 6% of the microbial C was labelled litter-derived. This was accompanied by a substantial decrease over time in the EO 13 C stocks in topsoil (by appr. 90%), which may indicate that some of this labile and mobile C fraction was translocated into deeper soil depth where it was assimilated by soil microorganisms. The time period between the first two samplings experienced higher soil water content than most of the experimental period, suggesting increased DOC fluxes (e.g., leaf litter leachate) to deeper soil depth. Microbial processing of litter C entering the 20-50 cm soil depth may thus have contributed to the moderate increase in the SO 13 C stock below 30 cm soil depth at that time. Overall, during the entire sampling period, the concentrations of labelled litter C found in both the SOC as well as in the microbial fraction were very small in the deeper soil depth as compared to those found in the top cm of the soil profile. In accordance with other studies on C translocation within soil profiles, this indicates that only a minor portion of the C input from aboveground litter is transported into deeper soil depth (49)(50)(51). At the third sampling date (May 2018), incorporation rates of the labelled litter C in C pools within the entire soil profile were very low, with the litter C contribution in subsoil even lower than in topsoil. Here, the maximum relative contribution of the labelled litter C to the microbial C pool in subsoil was less than one percent, 18 months after switchoff of the DO 13 C input, providing an estimate for the upper limit of long-term microbial assimilation of actively cycling C derived from aboveground litter in sandy subsoils. This low long-term contribution of the labelled litter C may be explained by (interactions of) different mechanisms. The labelled litter C may have been largely, first, displaced with the soil solution into soil depths below the maximum sampling depth (translocation); second, replaced by new litter C introduced into the soil profile after removal of the labelled litter (dilution); and/or, third, used for catabolic (energy production) rather than for anabolic (growth) processes by the microbial community (consumption). The last could apply in particular to the microbial community in the subsoil, where new litter-derived C inputs were found to be used primarily to overcome microbial energy limitation in decomposition processes (17). 
High C loss by respiration due to lower microbial CUE in subsoil (39) also offers an explanation for the rather short and low litter C incorporation in the microbial C pool of the subsoil (e.g., 0.004 g litter C m −2 resp. 5.9% of the total microbial 13 C stock at the third sampling). These microbial anabolic constraints are driven both by decreasing C content and greater proportions of more complex compounds following repeated processing of aboveground litter-derived DOC as it passes through the soil profile (5,7). This, in turn, implies a rather slow rate of microbial SOC formation in subsoil from aboveground litter C, which, 18 months after the end of the labelled litter C input, accounted for only 0.2 g litter C m −2 or ∼0.01% of the total SOC pool below 30 cm soil depth. However, we must keep in mind that we found no evidence of bioturbation in the acidic forest soil that limits the pathways by which litterderived C can be translocated to the subsoil, and we assume that the rate and extent of litter-derived C transport to the subsoil may be much higher in soils with pronounced bioturbation by, for example, earthworms or other soil-dwelling fauna (52). Microbial Group-Specific Carbon Utilisation In previous studies, microbial community structure and activity were found to undergo major changes with increasing soil depth, explained primarily by changes in OM composition and accessibility (11). The microbial group-specific proportions in 13 C incorporation in our study indicated that fungi respond more to recent aboveground litter-derived C than bacteria, and this was more evident in the subsoil than in the topsoil. In near-surface soil, neither Gram-positive and Gram-negative bacteria nor fungi had litter-derived 13 C proportions in their biomass higher than 2% at any of the three sampling dates. In contrast, fungi in subsoil strongly increased their proportional uptake of up to 6%, while the two bacterial groups remained at similar or slightly increased levels compared to bacteria in topsoil. The similar proportions of labelled litter-derived C across the three different microbial groups in the topsoil may have been due to the broad availability of C sources other than the labelled litter-derived DOC, such as high belowground C inputs from root exudation and decomposition in this densely rooted environment (53,54). However, the dynamics of the group-specific 13 C stocks within the top 30 cm showed major differences between bacteria and fungi. Stocks of 13 C incorporated into bacterial biomass generally decreased over time, while that of fungi decreased only in the top 5 cm but roughly doubled in the 5-30 cm soil depth from the first to the third sampling. The increased fungal litter-derived C storage suggests this soil depth as the primary location for fungaldriven C retention and underlines the importance of fungi for (microbial) C stabilisation in topsoil (21). Here, an important pathway for C translocation from upper to lower topsoil may be the hyphal network (55), which probably contributed to the more balanced input/output ratio of litter C at 5-30 cm depth. In addition, the higher persistence of fungal necromass than of bacterial necromass (56,57) may have contributed to the comparatively slow decrease in the SO 13 C stock at this depth. 
In the subsoil, however, the relative and absolute incorporation patterns indicated differences between bacteria and fungi in their dependency on litter-derived DOC and/or in the ability of these microbial groups to exploit this heterogeneously distributed C source in the subsoil environment. In previous studies by Kramer and Gleixner (13,15), Grampositive bacteria were found to predominantly utilise "older, " more processed SOC and to maintain these C preferences with increasing soil depth, while Gram-negative bacteria and fungi preferentially use more recent plant-derived C fractions. In line with this, Gram-positive bacteria were the only microbial group to moderately increase their proportion (up to 9%) of the subsoil 13 C stock to the 13 C stock of the entire soil profile, while fungi exhibited only a temporarily restricted increase in relative 13 C incorporation during the major stage of litterderived C migration into deeper soil. This indicates that fungi in subsoil, in contrast to Gram-positive bacteria, depend more strongly on the low but periodically occurring litter-derived C inputs, but are not able to store it for long periods under subsoil conditions due to, e.g., low CUE. This contradicts the importance of fungi for C stabilisation at the 5-30 cm soil depth as suggested in our study, and as was previously proposed for C-rich near-surface environments (19). Despite similar substrate preferences, differences in the incorporation of litter-derived C between fungi and Gram-negative bacteria in subsoil may to some extent have been driven by the prevailing moisture conditions and DOC fluxes. Seasonal variability and total DOC fluxes are of major relevance, as they strongly affect diffusive as well as convective transport and thus DOC availability to soil microorganisms in the soil matrix (8). Throughout the entire experimental period, soil moisture in the subsoil was lower than in the topsoil, with major water fluxes in deeper soil depths most likely limited to intense precipitation periods (e.g., November 2017 to January 2018). This assumption is supported by results from Leinemann et al. (12), who observed occasional strongly reduced total water fluxes in subsoil compared to topsoil at the experimental site. Under such conditions, hyphal growth may have given fungi a competitive advantage over Gram-negative bacteria in exploiting the leaf litter-derived C, as was also shown in a previous study at the same experimental site for root litter-derived C (54). Whereas, fungi can grow towards sparsely distributed resources in the subsoil (58), bacteria depend, for the most part, on solute transport and thus respond more strongly to decreasing soil water content (59)(60)(61). Microbial utilisation of specific C substrates is consequently determined by microbial group-specific preferences as well as by accessibility to C substrates within the soil matrix. Microbial access is thus not only controlled by the availability of C sources in the soil volume but also strongly by the microbial group-specific adaptations to prevailing habitat conditions (e.g., soil moisture and texture). CONCLUSION In this study, we demonstrated the dynamics of microbial groupspecific C utilisation as well as the downward movement of aboveground litter-derived C within an undisturbed sandy soil profile under beech forest. Most of the recently deposited litterderived C was processed in the top cm of the soil profile. 
The decoupling of the temporal dynamics of litter-derived C in EOC and C mic in this soil depth indicated internal recycling of microbial-bound C as well as microbial utilisation of previously mineral-bound C. Presumably, only a small proportion of the DOC input during the previous 2 years was further transported to the subsoil and served as a scarce but essential C source for microorganisms, especially for fungi. During the experimental period, the primary downward movement of the labelled litter C in the microbial fraction from topsoil to subsoil occurred over a period of 6 months. Our study provides an estimate for the upper limit of actively cycling litter-derived C (that is, not long-term stabilised in other C pools), which can be incorporated into microbial C pools of sandy forest subsoils; only 0.2 g m −2 or ∼0.01% of total SOC below 30 cm soil depth were litter-derived 18 months after 13 C-labelled litter was removed. Fungi seemed to be most important for this slow microbial C incorporation, although 13 C stabilisation in fungal biomass in subsoil was ephemeral. Overall, our study thus suggests that on annual timescales, microbial C assimilation may be of minor relevance for the stabilisation of aboveground litter-C in subsoil of sandy soils. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. AUTHOR CONTRIBUTIONS GG, RM, AD, KK, SM, CP, and EK contributed to conception and design of the study. SP, PL, AS, JW, and KM conducted the field work and sample analysis. JB provided data on precipitation. GG and PL provided data on SOC content at the first and third sampling date. SP conducted the data evaluation and statistics with contributions of JW and AS. SP wrote the manuscript with contributions of EK, SM, and CP. All authors read and approved the submitted manuscript. ACKNOWLEDGMENTS We would like to thank the editor and the three reviewers for constructive comments, Sabine Rudolph for her assistance in the laboratory, Julian Heitkötter and Bernd Marschner at the Institute of Geography at Ruhr-Universität Bochum for the project coordination, Wolfgang Armbruster at Institute of Food Chemistry at University of Hohenheim for IRMS measurements, Hans-Peter Piepho at the Biostatistics Department at University of Hohenheim for statistical support, and Kathleen Regan for English corrections. For support with field work and sampling we thank Frank Hegewald.
PROBLEMS AND PERSPECTIVES FOR THE LABOUR MARKET IN HEALTH CARE BULGARIA AS A PART OF THE EUROPEAN UNION AND THE COMMUNITY POLICIES
Specialists working in health care are the main assets of each EU member state. There are many tasks and challenges for the health systems in the Union. The demand for health services in the Community is increasing, and concurrently there are certain restrictions in the offering of such services. National policies and healthcare authorities in all EU member states need to consider the current issues of the Community and adapt their healthcare systems to the lack of a sufficient number of health specialists and to the ageing population. On 1 March 2005, Bulgaria signed the EU accession treaty, and since 1 January 2007 the country has been a member of the Community. As such, it must follow European sources of law and conduct a harmonised policy of governance along with the rest of the member states. In 2008 the EU adopted a Green Paper on the health workforce in Europe [1]. In order to examine and analyse attitudes towards training and career development of students in nursing care, a survey was conducted among first-year students in nursing care at the Faculty of Public Health, Medical University - Sofia. The average age of the first-year students is 30, and the results show a need for change and for attracting younger people into medical majors. 66.04% of respondents believe that nursing care would be more attractive for future students if nurses' competencies were expanded, while 54% of the respondents believe that medical majors could be made more attractive for future students if mass media and Internet advertising were used to promote such majors. There are many challenges at the national and Community level. Health is a core value, and healthcare is a part of the national security of any country. The ceaseless migration of health professionals and the outflow of candidates for medical specialities in Bulgaria present serious challenges to the country. It is necessary to take adequate measures to solve the actual problems in health care and to supply the labour market with health professionals. INTRODUCTION: The labour market is of key importance to the stability and prosperity of the economy of every country, as well as to its social and public development. The social and economic basis that influences the labour market is the cumulative result of the proper functioning of many spheres - demography, healthcare, education, the pension system, the legal system, and the business climate. The labour market is one of the priority sectors of the EU. According to data from the National Statistical Institute of Bulgaria, as at 31 December 2016 the population of Bulgaria was 7,101,859, which represents 1.48% of the population of the European Union. Of these, 4,304,436 were of working age, including 2,262,326 men and 2,042,110 women. As at June 2017, 142,781 persons were employed in the human healthcare and social work sector of the country. 2017 marked the 10th anniversary of Bulgaria's accession to the EU. The country's EU membership turned its citizens into EU citizens, which means that they can use the rights and benefits given to them by the EU.
In June 2010 the European Council adopted the Europe 2020 strategy [2], a multi-lateral strategy for growth and jobs for the period 2010-2020, whose aim is to help Europe overcome the heaviest economic crisis since the 1930s. The Europe 2020 strategy sets out three priorities: smart growth - building a knowledge-based and innovative economy; sustainable growth - encouraging a more environmentally friendly and competitive economy with more efficient use of resources; and inclusive growth - stimulating an economy with high levels of employment, leading to social and territorial cohesion. In 2008 the EU adopted a Green Paper on the health workforce in Europe. There are many tasks and challenges for the health systems in the Union. The demand for health services in the Community is increasing, and concurrently there are certain restrictions in the offering of such services. National policies and healthcare authorities in all EU member states need to consider the current issues of the Community and adapt their healthcare systems to the lack of a sufficient number of health specialists and to the ageing population. The implementation of new technologies for diagnostics, prevention and treatment improves the quality of the services provided in the field of healthcare; however, these technologies require financial resources and qualified staff to operate them. In order to meet the needs of the population in the field of healthcare, the health systems need an efficient workforce. In practice, health care is one of the most significant sectors of the EU economy: it provides employment to one in every ten employees in the EU, and approximately 70% of healthcare budgets are allocated to wages and other expenses directly connected with the work of health care workers. In 2007 the European Commission published the White Paper "Together for Health" [3], which forms the health strategy of the Community. The EU and its member states face together the common challenges of the Union - diseases, inequality, climate change, and an insufficiency of health professionals across the entire Community. The purpose of the strategy is to contribute to better lifelong health for the citizens by protecting them from health threats and supporting the introduction of new technologies in medicine. With the Green Paper adopted in 2008, the EU continues its strategy in the sphere of health care by focusing on issues related to the EU workforce in the health sector. The aim of the Green Paper is to help the national governments deal together with the challenges faced by health care workers in the EU - the ageing population and ageing workforce in the health sector, too few young people entering health care, too few young specialists in health care, and the migration of health workers within and outside the EU.
Article 152 of the Treaty on the EU [4] says that "Community action in the field of public health shall fully respect the responsibilities of the Member States for the organisation and delivery of health services and medical care". Concurrently, the same article underlines that the Community shall encourage cooperation between the Member States and the coordination of their policies and programmes. Community actions aim to complement national policies; however, the main responsibility for the organization and provision of health services is borne by the EU Member States. The EU may only provide support to the member states and contribute to the exchange of good practices. The ageing of the population in Europe goes along with the ageing of the workforce. In the period from 1995 to 2000, the number of medical doctors below the age of 45 in Europe decreased by 20%, and the number of medical doctors over the age of 45 increased by more than 50%. The average values increased also for the nursing staff - in five member states almost half of the nurses are over 45 years old. The crisis with human resources in health care may be overcome by training, recruitment and retention of young staff in the healthcare sector and retention of trained staff of mature age. The Council Recommendation of 11 July 2017 on the 2017 National Reform Programme of Bulgaria outlined the following: according to the Council of the EU as of July 2017 [5], the trends on the labour market in Bulgaria are positive, but there are structural issues. The workforce in Bulgaria is decreasing due to the ageing of the population combined with emigration. It is of major significance for the country to utilize the unused potential of the workforce; the inclusion of young people in the labour market is limited, and there is a deficit of skills and qualified staff. The main challenges in healthcare remain the limited accessibility, the poor financing, the emigration of specialists and poor health results. MATERIALS AND METHODS: In order to verify and analyse the attitude towards training and career development of students majoring in nursing care, a survey was conducted among the first-year students in nursing care from the Faculty of Public Health, Medical University - Sofia. A pilot survey was conducted in the period 20.09.2017-20.10.2017 and included 87 students in nursing care. Students were selected for the survey on a random basis; the survey was anonymous, which gives grounds to claim representativeness of the results. RESULTS AND DISCUSSION: To create a clearer profile of the responding students and to analyse the trend among the studying specialists in health care, they were asked about their age (Fig. 1. Age of respondents). Fig. 1 shows that the candidates most interested in studying nursing are those aged 31 to 40 years (37%) and those under 20 years of age (36%). First-year students aged between 41 and 50 years are 16%, and students aged between 21 and 30 years are 11%. Results show that more than half of the students, or 53% of the first-year students in nursing care, were aged between 31 and 50 years in 2017. Obviously, the recommendations for training young specialists and renewal of the staff on the labour market, as promoted by the EU, have not been realized. Medical majors are still not attractive enough for students due to the low pay and the quite limited competencies of nurses in Bulgaria.
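The reported age shares can be cross-checked with a short tabulation; the absolute counts below are back-calculated assumptions consistent with 87 respondents, not figures published with the survey.

```python
# Illustrative tabulation of the age distribution reported in Fig. 1.
# The counts are assumed (they sum to the 87 surveyed students); only the
# percentages appear in the article itself.
age_counts = {
    "under 20": 31,   # ~36%
    "21-30": 10,      # ~11%
    "31-40": 32,      # ~37%
    "41-50": 14,      # ~16%
}

total = sum(age_counts.values())  # 87 respondents
for group, n in age_counts.items():
    print(f"{group}: {n} students ({100 * n / total:.1f}%)")

# Share of first-year students aged 31-50, as discussed in the text (~53%)
share_31_50 = (age_counts["31-40"] + age_counts["41-50"]) / total
print(f"Aged 31-50: {100 * share_31_50:.0f}%")
```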
Students were asked whether, in their opinion, nursing care would be more attractive for prospective students if their competencies were extended (Fig. 2). 66.04% of respondents consider that nursing care would be more attractive for prospective students if their competencies were extended. Only 1.89% of the respondents answered that, in their opinion, the scope of competences of nurses is not the leading factor in selecting the relevant major. 16.98% of the respondents answered that they are not able to assess what the effect would be, and 15.09% consider that extending the competences of nurses would make the major more attractive, although they are not able to assess the effect accurately. Examining the methods and opportunities for increasing interest in nursing care, the respondents were asked whether the inclusion of mass media and Internet advertising for the promotion of majors in health care and for attracting new staff into the health care system would have a positive effect (Fig. 3). According to more than half of the respondents, or 54%, majors in health care would be more attractive for prospective specialists if mass media and Internet advertising were included in the promotion of majors in health care. 21% of the respondents consider that mass media and Internet advertising should be included in the promotion of majors in healthcare, although they are not able to assess the specific effect. 25% of the students in nursing care consider that it is not necessary to include mass media and Internet advertising in the promotion of majors in health care. CONCLUSIONS There are many challenges at the national and Community level. Health is a main value, and healthcare is a part of the national security of any country. The ceaseless migration of health professionals and the outflow of candidates from healthcare majors in Bulgaria present the country with serious challenges. It is necessary to take adequate measures to solve actual problems in health care and to supply the labour market with health professionals [6]. This requires an active policy of investing in human capital, focused on investing in the education and professional training of health specialists, investments in the professional qualification of specialists in health care, and overcoming the migration processes by creating motivation for health specialists to pursue a professional career in Bulgaria. Support for health care also supports the economic growth of each country by providing an opportunity for citizens to remain active and in better health for a longer period of time. The health status of citizens actively affects participation in and the efficiency of the labour market. Investments in the health of the population contribute to limiting future expenses for the treatment of preventable diseases and represent investments in assuring an effective workforce. Fig. 2. Opinion of students in nursing care on the relationship between the attractiveness of the nursing profession and the specific competencies of this profession. Fig. 3. Opinion of students in nursing care on the inclusion of mass media and Internet advertising to promote health specialties and attract new staff to the system.
Analytical Solution and Simulation of Oil Deliverability Analysis for Reorientation Hydraulic Fracture in Low-Permeability Reservoirs Hydraulic refracturing operations are often used to improve oil deliverability in low-permeability reservoirs. When the development of an oilfield has entered the high water cut stage, oil deliverability can be promoted by refracturing the reservoir. The orientation of the new fracture formed by refracturing will be changed; the newly formed fracture is called a reorientation fracture. To calculate the oil deliverability of refractured wells, a three-section fracture model that includes the reorientation fracture was established. The multiwell pressure-drop superposition theory is used to derive the analytical solution for refractured wells that include a reorientation fracture. A numerical simulation was conducted to validate the results of the analytical solution. Comparison of the deliverability of refractured wells with and without reorientation shows that permeability, deflection angle, and the length of the reorientation fracture jointly control the productivity of the refractured well. When the permeability in the direction of maximum principal stress is greater than the permeability in the direction of minimum principal stress, the capacity of reorientation fractures is relatively large. The deflection angle and the length of the reorientation fracture directly affect the drainage area of the fracture, and thus productivity. The reorientation fractures generated by repeated fracturing have great potential for improving oil deliverability in anisotropic low-permeability reservoirs. Introduction Hydraulic fracturing and refracturing operations have been widely used to improve oil deliverability in developing low-permeability reservoirs. The parameters of permeability and porosity can be improved by multiple fracturing treatments [1][2][3]. The benefit of refracturing existing wells has been proved by many studies [4,5]. Compared to drilling new wells, refracturing operations are an economical way of improving oil deliverability [6][7][8][9]. A refracture can be created in two forms: in one, the new fracture propagates in the same direction as the original fracture; in the other, the new fracture propagates in a new direction as the pressure field is redistributed. The conventional theory of hydraulic fracturing is built on the tensile failure of rocks [10][11][12]. Many studies have shown that the formation of fractures is mainly affected by induced stress, geological conditions, and production conditions. The direction of the initial fracture is parallel to the direction of maximum principal stress [13]. The stresses surrounding the fracture decrease after a fractured well has been producing for some time [14][15][16]. During a repeated fracturing treatment, if the direction of the maximum formation stress changes, the new fracture and the original fracture can be at a certain angle to each other [17,18]. The results of laboratory tests, as shown in Figure 1, also proved that the directions of refractures are not all along the original fracture direction [19,20]. What is more, the refracture location reported by the Chevron oil technology company in the Lost Hill Oilfield is about 30 degrees from the original fracture azimuth, which reveals the possibility of a new fracture orientation in repeated fracturing [21]. The new-orientation fracture is called a reorientation fracture.
The reorientation fracture will propagate into the undrained oil area, increasing the drainage area of the oil well [22][23][24]. In the actual production process, reorientation fractures characterized by multiple branches have become the main purpose of refracturing [25]. They are also the main way to actually increase production. Many studies have shown that the porosity and permeability near the well can be increased by fractures [26]. The generation of hydraulic fractures creates favorable conditions for the development of low-permeability reservoirs [27][28][29][30]. Scholars have proposed a variety of seepage interpretations for a single fracture [31][32][33]. Raghavan and Gringarten have presented methods to calculate well productivity considering the anisotropy of the hydrocarbon reservoir [34]. Wang summarized the productivity evaluation of hydraulically fractured wells and obtained a radial flow formula for vertically fractured wells with finite conductivity [35,36]. There are a number of analytical and semianalytical models for the simulation of vertical two-wing fractures or horizontal wells [37]. However, there are still few studies on the analysis of the productivity of the reorientation fracture. A mathematical model describing the oil deliverability of a reorientation fracture in anisotropic low-permeability hydrocarbon reservoirs is proposed in this paper. The model uses the multiwell pressure-drop superposition principle to integrate the oil productivity formulation of the reorientation fracture, which is validated by reservoir numerical simulation and by comparison with the productivity of the nonreorientation fracture. The proposed model is used to study the sensitivity of infinite-conductivity fracture parameters. It is shown that the productivity of the reorientation fracture in anisotropic low-permeability reservoirs is controlled by the anisotropy of reservoir permeability, the length of the reorientation fracture, and the angle between the reorientation fracture and the original fracture. Physical Model. A schematic of the stress distribution that generates reorientation fractures is displayed in Figure 2. It is important to note that this is a simplified model describing the actual fracture deflection: the model in this paper describes a fracture that experiences only one deflection, and it captures the principle of the practical calculation by simplifying a multisection deflected fracture into three sections. This simplification greatly reduces the complexity of the calculation. Actual reorientation fractures often undergo multiple deflections and form complex fracture morphology. No matter how complicated the reorientation fracture is, it can be discretized into multiple segments that each experience a small-angle deflection. Compared with the fracture described in this paper, more complex fractures only increase the complexity of the computational process, without putting forward new requirements on the analytical solution procedure. From Figure 2, the hydraulic fracture results in an elliptical pressure-drop zone. The initial fracture extends in the direction of the maximum initial stress, indicated by σ′max. After a certain period of production, the direction of maximum principal stress in the stress-drop zone changes to the σmax direction, and the refractured fracture will extend in the direction of σmax. Figure 3 displays a schematic of the physical model of a refractured fracture including the reorientation fracture.
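A minimal sketch of this three-section idealization is given below. The parameter names (deflection angle theta, rise H of the inclined middle section, and a common wing length) follow the model description in the next paragraphs, while the concrete construction and numbers are illustrative assumptions rather than code from the paper.

```python
import numpy as np

def three_section_fracture(theta, H, wing_len):
    """Illustrative geometry of a three-section refracture: two wings parallel to the
    x axis joined by a middle section deflected by theta (radians) that rises H.
    Returns the polyline vertices, the total length, and the projected x-extent."""
    p0 = np.array([0.0, 0.0])                            # one fracture tip, placed at the origin
    p1 = p0 + np.array([wing_len, 0.0])                  # first wing, parallel to x
    mid_len = H / np.sin(theta)                          # inclined middle section
    p2 = p1 + mid_len * np.array([np.cos(theta), np.sin(theta)])
    p3 = p2 + np.array([wing_len, 0.0])                  # second wing, parallel to x

    verts = np.vstack([p0, p1, p2, p3])
    seg_lengths = np.linalg.norm(np.diff(verts, axis=0), axis=1)
    total_len = seg_lengths.sum()
    x_extent = p3[0] - p0[0]                             # projected reach along x
    return verts, total_len, x_extent

# Example: a 45-degree deflection shortens the x-direction reach of a given total length.
verts, total_len, x_extent = three_section_fracture(np.pi / 4, H=50.0, wing_len=100.0)
print(f"total fracture length = {total_len:.1f} m, projected x-extent = {x_extent:.1f} m")
```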
In order to calculate the production of the refractured fracture, Cartesian axes are set up parallel to the direction of the reorientation fracture. Some basic assumptions are presented as follows. (1) The refractured fracture is divided into three sections in the Cartesian coordinate system. The middle part of the fracture is angled θ to the X axis, and both sides of it are parallel to the X axis. Rotating the Cartesian coordinate system by the angle θ, so that the X axis is parallel to the middle part, forms a new coordinate system. (2) The position of the well in the reservoir is set as (xw, yw). The position of any point in the reservoir is set as (x, y). The maximum distance from the fracture to the X axis is H, and the maximum distance from the fracture to the Y axis is l. (3) In this paper, the whole fracture includes the parts before and after refracturing (l − H/K + H/sin θ). The original fracture refers to the fracture before reorientation (H/sin θ). The reorientation fracture refers to the fracture after reorientation (l − H/K). The nonreorientation fracture is the fracture extended along the original direction. The length of the fracture is denoted by xf. (4) In the production process of refractured wells, the fluid output per unit length of the fracture is uniform; therefore, the flow rate at each point is equal to the output at the origin of coordinates. The method of establishing coordinate axes along different fracture directions greatly simplifies the calculation. It also provides a simplified approach for the calculation of ideal multistage fractures and of actual fractures that can be discretized into multiple sections. Figure 4 displays a schematic of a conventional vertical production well in a circular closed formation. The paper studies the pseudosteady-state seepage toward the center of the circular closed formation. Based on the multiwell pressure-drop superposition principle for a single conventional vertical production well, the production of the well in Figure 3 is calculated by integration. Mathematical Model and Solution. From Figure 4, at the center of a homogeneous formation with a boundary radius of re, there is a production well with a production rate of q and a well radius of rw. The model assumptions are listed below: (1) The model is homogeneous, and the permeability is anisotropic. In this study, coordinate transformation is used to simplify the calculation. The anisotropy is reflected in the different permeabilities in the Y and Y′ directions. (2) The pressure is constant, and both fluids and rock are slightly compressible. Formation permeability, formation fluid viscosity, porosity, and comprehensive compressibility are constants. The pseudosteady-state pressure distribution and the superposition principle of multiwell pressure drop are written first. The pressure differences of the three sections are denoted Δp1, Δp2, and Δp3, and the total pressure difference is their sum. The pressure-drop equations for the three sections of the fracture are then written. Integrating the right-hand sides of Eqs. (4)-(6) with respect to xw and adding them up, the expression for the pressure drop is obtained, together with its auxiliary definitions. It should be noted that q here is the flow rate at the point of origin of the coordinates. In this work, the fluid output per unit length of the fracture is assumed to be uniform, and under these circumstances
the production formula of the reorientation fracture, Eq. (10), is obtained. According to the ideas of Raghavan and Joshi [38], the yield formula for the nonreorientation vertical fracture is used, and the production formula for the nonreorientation fracture is obtained under the same assumption. 3. Results and Discussion Mathematical Model Validation. A numerical simulation conceptual model is established to verify the reliability of the mathematical model. The ideal numerical simulation model of the reorientation fracture is built with the black-oil model of ECLIPSE. Figure 5 describes the schematic of the grid simulation model and the hydraulic reorientation fracture. Figure 5(a) displays the schematic of the grid simulation model. From Figure 5(a), a block-shaped reservoir is built in ECLIPSE. The center of the reservoir contains three fracture sections with the same shape as in Figure 2, and there is a vertical well at the center of the original fracture. The reservoir parameters are the same as those in the productivity formula. There are 194 grid blocks in the X direction and 196 grid blocks in the Y direction, with steps of 1.0 m. The grid in the Z direction is divided into three layers, with a single-layer thickness of 2 m. The total number of grid blocks is 194 × 196 × 3 = 114,072. The permeability in the X direction is 0.1 D, the permeability in the Y direction is 0.05 D, and the porosity is 0.1. Figure 5(b) displays the schematic of the magnified hydraulic reorientation fracture. The fracture is set as an infinite-conductivity fracture and penetrates the entire reservoir in the Z direction. Usually, the fracture width is a few millimeters while the fracture length is a few hundred meters, so the width of the fracture is too small to represent directly. If the grid were divided according to actual values, the number of grid blocks would be very large, making the simulation time-consuming or impossible. In order to make the numerical simulation more accurate, grid blocks that intersect hydraulic fractures are refined until the grid sizes are small enough to be of the same order as the fracture width. The grid-refining process enables us to accurately describe flow characteristics near the hydraulic fracture region and guarantees the efficiency and accuracy of the simulation results. Figure 6 displays the different phase-permeability curves of the matrix and the hydraulic fracture. In order to analyze the reorientation fracture productivity, the oil production of reorientation fractures and nonreorientation fractures is compared between the mathematical model and the numerical simulation. A comparison of the oil productivity curves from the mathematical model and the numerical simulation solution is displayed in Figure 7. From Figure 7, at the beginning of the production phase, the two curves of the mathematical model and the numerical simulation basically overlap. With increasing production time, the curve of the mathematical model drops rapidly. At the end of production, the two curves basically coincide. The results of the mathematical model and the numerical simulation are still somewhat different. The differences between the two curves may be explained as follows: (1) Due to the use of an equivalent conductivity,
the permeability in the model is greater than that calculated in the formula, resulting in a larger numerical simulation solution. (2) In the process of derivation, the fracture is divided into three sections, ignoring the flow between the sections and making the deduced result smaller. (3) Since the grid is a block-centered grid, the circular borders of the model are not smooth, and the total area differs somewhat. In general, the results of the analytical solution and the numerical simulation solution are the same on the whole. Therefore, the formula has practical value and can be used to calculate the output during the actual production process. Reorientation Hydraulic Fracture Productivity Sensitivity Analysis. The proposed mathematical model indicates that the productivity of fractures is controlled by several factors. By studying the permeability of anisotropic reservoirs, the advantages and disadvantages of refractured wells in anisotropic reservoirs are illustrated. In isotropic reservoirs, the effects of the deflection angle and the length of the reorientation fracture on stimulation are studied. If the deflected fracture does not extend in the favorable direction of permeability, reorientation fracturing will not increase productivity. The deflection angle should be minimized and the length increased, so as to enlarge the drainage area and increase production. The Effect of Anisotropy on Permeability. In order to analyze the effect of anisotropic permeability, the permeability is set to be anisotropic, and reorientation and nonreorientation fractures of the same length are assumed. The angle of reorientation deflection is 45 degrees, and the reorientation fracture has the same length as the original fracture. The production of reorientation fractures and nonreorientation fractures is calculated using reservoir data from an oilfield. The permeabilities in the Y′ axis and Y axis directions are ky′ = 0.01 D and ky = 0.005 D; ky′ = 0.01 D and ky = 0.01 D; and ky′ = 0.01 D and ky = 0.02 D. Other reservoir data include average porosity = 0.1, formation fluid viscosity = 1.5 mPa·s, comprehensive compressibility = 6.0 × 10⁻⁴ MPa⁻¹, average reservoir thickness = 40.0 m, fluid volume factor = 1.08, boundary radius = 700 m, original formation pressure = 52.0 MPa, and production pressure difference = 15.0 MPa. The above data are substituted into the equations of the reorientation fracture and the nonreorientation fracture for the production calculation. Figure 8 displays the oil production changes for the different anisotropic low-permeability cases, plotted for ky′ = 0.01 D and ky = 0.005 D (ky′/ky = 2); ky′ = 0.01 D and ky = 0.01 D (ky′/ky = 1); and ky′ = 0.01 D and ky = 0.02 D (ky′/ky = 0.5). In this study, the fracture length is kept the same in all cases. From Figure 8(a), where the permeability in the Y′ direction is obviously higher than that along the Y axis (ky′/ky = 2), the productivity of both types of fractures decreases with production time, and the reorientation fracture well is less productive than the nonreorientation fracture well. From Figure 8(b), where the permeability in the Y′ direction is equal to that along the Y axis (ky′/ky = 1), the productivity of both types of fractures decreases with production time and the two curves almost coincide; it should be noted that reorientation fractures have slightly lower productivity than vertical fractures. From Figure 8(c), where the permeability in the Y′ direction is obviously lower than that along the Y axis (ky′/ky = 0.5), the production of the reorientation fracture is beneficial: it is much higher than that of the nonreorientation fracture, and with increasing production time the production of the reorientation fracture decreases faster than that of the nonreorientation fracture. The above studies confirm that reservoir permeability is an important factor controlling fracture productivity during fracture production, and that permeability anisotropy has great significance for the productivity of fractures. The anisotropy of reservoir permeability determines whether reorientation fractures improve productivity. When the reservoir is isotropic, the productivity of the reorientation fracture well is similar to that of an equal-length vertical fracture well. When the formation is anisotropic, the reorientation fracture well will increase productivity if the reorientation direction is the high-permeability direction; on the contrary, there will be a decline in production capacity. It can be concluded that, in anisotropic reservoirs, the fracture should be controlled to extend in the more beneficial direction, or it will not deliver better productivity.
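To illustrate how such a sensitivity sweep over the anisotropy ratio can be scripted, the sketch below uses a crude textbook proxy (the anisotropy coordinate transform plus Prats' effective wellbore radius inside a pseudosteady-state productivity index) rather than the paper's Eq. (10). The geometry, the axis convention, and the parameter values are assumptions, so the numbers are only qualitative and are not directly comparable to Figure 8.

```python
import numpy as np

def pi_proxy(kx, ky, verts, re=700.0, h=40.0, mu=1.5e-3):
    """Crude pseudosteady-state productivity-index proxy for an infinite-conductivity
    fracture given as a polyline of (x, y) vertices in an anisotropic reservoir.

    Not the paper's Eq. (10): it chains two standard approximations -
    (i) anisotropy transform x' = x*sqrt(kbar/kx), y' = y*sqrt(kbar/ky), kbar = sqrt(kx*ky);
    (ii) Prats' effective wellbore radius rw' ~ xf/2, taking the transformed tip-to-tip
        distance as the effective full length 2*xf.
    Only relative comparisons between cases are meaningful."""
    kbar = np.sqrt(kx * ky)
    scale = np.array([np.sqrt(kbar / kx), np.sqrt(kbar / ky)])
    v = np.asarray(verts, dtype=float) * scale
    tip_to_tip = np.linalg.norm(v[-1] - v[0])      # effective 2*xf
    rw_eff = tip_to_tip / 4.0                      # Prats: rw' ~ xf/2
    return 2.0 * np.pi * kbar * h / (mu * (np.log(re / rw_eff) - 0.75))

# Three-section path (wings parallel to x, middle section deflected 45 degrees)
# versus a straight fracture of the same 300 m total length.
a = 100.0 / np.sqrt(2.0)
bent = [(-100.0, 0.0), (0.0, 0.0), (a, a), (a + 100.0, a)]
straight = [(-150.0, 0.0), (150.0, 0.0)]

for ratio in (2.0, 1.0, 0.5):                      # sweep an assumed kx/ky anisotropy ratio
    kx, ky = 0.01, 0.01 / ratio
    rel = pi_proxy(kx, ky, bent) / pi_proxy(kx, ky, straight)
    print(f"kx/ky = {ratio}: PI(bent)/PI(straight) ~ {rel:.2f}")
```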
The Influence of Angle on Fracture Productivity. The deflection angle is an important property of the reorientation fracture. According to Figure 8(b), it can be concluded that the existence of a deflection angle will not only affect productivity through the anisotropy of the reservoir but will also affect the productivity of the well in an isotropic formation. In isotropic reservoirs, the influence of the deflection angle on fracture productivity is studied by setting θ to 0, π/4, π/2, and 3π/4. The productivity of the vertical two-wing fractured well is also described. It must be emphasized that the length of the reorientation fracture (l − H/K) is always equal to the length of the original fracture (H/sin θ). In this study, the total fracture length was set at 300 m. The oil production changes at different angles are displayed in Figure 9(a). From Figure 9(a), compared to the vertical fracture, the production of the reorientation fractured well decreases gradually, and the rate of decline accelerates as the angle increases. It can be expected that when the angle is increased to 2π, the fracture will coincide with the original fracture and the production will revert to that of the vertical fracture well; the production of the reorientation fractured well would be minimized, but such a hydraulic fracture is not possible in practice and is only an extreme case used for analysis and explanation. This study shows that in isotropic reservoirs the reorientation fracture does not increase fracture productivity but reduces it, because the angle affects the effective length of the reorientation fracture and decreases the drainage area, thus decreasing the productivity of reorientation fracture wells. It can be concluded that the reorientation fracture is disadvantageous in isotropic formations during vertical well fracturing production. In anisotropic reservoirs, the deflection angle will also have an adverse effect on production for the same reason. 3.2.3. The Influence of Length on Fracture Productivity. Fracture length is another important factor affecting fracture productivity.
The study of fracture length consists of two parts: one is the total length of the whole fracture, and the other is the influence of the ratio of the deflected fracture length to the original fracture length. The research on the total length of the whole fracture studies the influence of the total length on productivity when the deflection angle is fixed. The oil productivity changes at different lengths of the whole fracture are displayed in Figure 9(b). From Figure 9(b), when the total length of the reorientation fracture becomes longer, the fracture productivity increases gradually. This is because the drainage area of the fracture increases when the total length and side length of the fracture are increased, leading to an increase in fracture productivity. Therefore, during the fracturing process, the fracture should be made as long as possible. For the reorientation fracture, the fracture length and deflection angle are properties of the fracture in space, and they jointly affect the production of the reservoir. The compatibility between the reorientation fracture length and the angle must be considered. To study the effect of reorientation fracture length on productivity, the total length of the fracture is set as a constant value. The productivity of the fracture is studied by changing the ratio of the length of the reorientation fracture to that of the original fracture under different deflection angles. When the reorientation fracture reaches a certain length, it will inevitably turn back to the same direction as the initial principal stress, which limits the length of the reorientation fracture. Here, the ratio of the reorientation fracture length (l − H/K) to the original fracture length (H/sin θ) is assumed to be 0.5, 1.0, 1.5, and 2.0. In order to exclude the influence of permeability anisotropy on the reorientation fracture, the study was carried out under the condition of isotropic permeability. The angles between the reorientation fracture and the original fracture are set as π/4, π/2, and 3π/4. Figures 9(c)-(e) display the productivity changes as the length ratio varies. The degree of change is related to the deflection angle. For the same deflection angle, as the length ratio increases, the productivity of the reorientation fracture drops in the isotropic case. From Figure 9(c), when the deflection angle is π/4, the productivity values are similar. From Figures 9(d) and 9(e), at the angles of π/2 and 3π/4, the productivity is reduced to different degrees as the length ratio increases. At the angle of 3π/4, the productivity decreases faster with the length ratio. When the length ratio is greater than 1, the rate of productivity decline gradually slows down as the length ratio increases. Figure 9(f) displays an extreme example in which the angle is π. This is not possible in reality, but from Figure 9(f) the decrease in production as the deflection angle increases can be better explained. As the length of the reorientation fracture increases, the portions overlapping the original fracture are not effective fractures; they do not increase the productivity of the fracture but reduce the drainage area due to the overlap. This can happen with any fracture that has a deflection. However, when the deflection angle is small, as shown in Figure 9(c), the degree of interference between the reorientation fracture and the original fracture is reduced, and the productivity decline of the reorientation fracture well will also be smaller.
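The drainage-area argument made here can be visualized with a purely geometric proxy: the tip-to-tip extent of a simplified two-section path (one original wing plus one deflected wing) shrinks as the deflection angle and the length ratio grow. The construction below is an illustrative assumption, not the paper's parameterization.

```python
import numpy as np

def tip_to_tip_span(theta, ratio, total_len=300.0):
    """Tip-to-tip extent of a two-section fracture path: an original wing of length
    L_o along the +x axis followed by a reoriented wing of length L_r = ratio * L_o
    deflected by theta. A crude geometric proxy for the drainage extent."""
    L_o = total_len / (1.0 + ratio)
    L_r = total_len - L_o
    tip_a = np.array([0.0, 0.0])                       # outer tip of the original wing
    elbow = np.array([L_o, 0.0])
    tip_b = elbow + L_r * np.array([np.cos(theta), np.sin(theta)])
    return np.linalg.norm(tip_b - tip_a)

# Sweep the deflection angle and the length ratio used in Figures 9(c)-(f).
for theta in (np.pi / 4, np.pi / 2, 3 * np.pi / 4, np.pi):
    spans = [tip_to_tip_span(theta, r) for r in (0.5, 1.0, 1.5, 2.0)]
    print(f"theta = {theta / np.pi:.2f}*pi:",
          " ".join(f"{s:6.1f}" for s in spans), "m")
```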
The above results indicate that fracture length has an impact on fracture productivity. The longer the total fracture length, the higher the fracture productivity. The ratio of the length of the reorientation fracture to that of the original fracture has an adverse effect on productivity: as the ratio increases, the yield decreases more. The effect of the length ratio on fracture productivity is controlled by the fracture angle; the larger the deflection angle, the faster the productivity decreases with the length ratio. Secondary fractures increase well production by extending the total fracture length. But under the premise that the total length of the artificial fracture is fixed, the longer the reorientation fracture is, the larger the deflection angle is, and
Association Between Rotavirus Vaccination and Antibiotic Prescribing Among Commercially Insured US Children, 2007–2018 Abstract Background Vaccines may play a role in controlling the spread of antibiotic resistance. However, it is unknown if rotavirus vaccination affects antibiotic use in the United States (US). Methods Using data from the IBM MarketScan Commercial Database, we conducted a retrospective cohort of US children born between 2007 and 2018 who were continuously enrolled for the first 8 months of life (N = 2 136 136). We followed children through 5 years of age and compared children who completed a full rotavirus vaccination series by 8 months of age to children who had not received any doses of rotavirus vaccination. We evaluated antibiotic prescriptions associated with an acute gastroenteritis (AGE) diagnosis and defined the switching of antibiotics as the prescription of a second, different antibiotic within 28 days. Using a stratified Kaplan-Meier approach, we estimated the cumulative incidence for each study group, adjusted for receipt of pneumococcal conjugate vaccine, provider type, and urban/rural status. Results Overall, 0.8% (n = 17 318) of participants received an antibiotic prescription following an AGE diagnosis. The 5-year adjusted relative cumulative incidence of antibiotic prescription following an AGE diagnosis was 0.793 (95% confidence interval [CI], .761–.827) among children with complete rotavirus vaccination compared to children without rotavirus vaccination. Additionally, children with complete vaccination were less likely to switch antibiotics (0.808 [95% CI, .743–.887]). Rotavirus vaccination has averted an estimated 67 045 (95% CI, 53 729–80 664) antibiotic prescriptions nationally among children born between 2007 and 2018. Conclusions These results demonstrate that rotavirus vaccines reduce antibiotic prescribing for AGE, which could help reduce the growth of antibiotic resistance. Since its introduction in 2006 [1], rotavirus vaccination has had a beneficial impact on child health in the United States (US). Prior to vaccine introduction, rotavirus was the leading cause of severe diarrheal disease among US children <5 years of age [2,3]. The primary benefit of rotavirus vaccination is the reduction of severe gastroenteritis. Vaccine effectiveness is >85% against severe rotavirus gastroenteritis, and the introduction of vaccination in children has led to a dramatic reduction in rotavirus hospitalizations in all ages and resulting medical costs [4][5][6][7][8][9]. Additionally, rotavirus vaccination may lead to several nontargeted benefits, such as the reduction of type 1 diabetes and febrile seizures, both of which are potential downstream outcomes of rotavirus infection [10][11][12]. Vaccines may be an important tool in addressing the growing threat of antimicrobial resistance [13,14]. In 2019, the Centers for Disease Control and Prevention estimated that >2.8 million antibiotic-resistant infections occur in the US each year, resulting in >35 000 deaths [15]. As part of a comprehensive federal strategy, the Federal Task Force on Combating Antibiotic-Resistant Bacteria identified the use of existing and new vaccines as a key tool for reducing unnecessary antibiotics [16].
There is strong evidence that vaccines targeting bacterial pathogens, such as the Haemophilus influenzae type b and pneumococcal conjugate (PCV) vaccines, have resulted in reduced bacterial colonization, subsequent antibiotic use, and prevalence of antibiotic-resistant strains on a population level [13]. Vaccines targeting viral pathogens, such as influenza vaccines, have also shown the potential to reduce unnecessary antibiotic prescribing [13]. Similarly, rotavirus vaccination may impact antibiotic prescribing and resistance by 2 mechanisms. First, a reduction in acute gastroenteritis (AGE) cases could result in fewer inappropriately prescribed antibiotics [13]. In 2010-2011, there were 7.7 million visits by children <19 years of age for AGE in the US and 10.4% of them received an antibiotic (equating to 800 000 antibiotic prescriptions) [17]. When bacterial agents are the cause of gastroenteritis, appropriate treatment sometimes includes antibiotics. However, many enteropathogens, including rotavirus, present with nonspecific symptoms and are often not known to be the etiologic agent unless testing is conducted. Since etiology is usually not known at the time of a medical encounter, antibiotics are frequently prescribed for viral gastroenteritis even though they are not recommended [18]. Second, antibiotics may disrupt the enteric microbiome, which can lead to secondary bacterial infections, which can inherently be resistant to treatment and possibly require microbiome restorative treatments [19,20]. Therefore, we hypothesized that by preventing viral gastroenteritis, rotavirus vaccination may be associated with less antibiotic prescribing among children in the US. This question has only been investigated in low- and middle-income countries [21,22], where antibiotic treatment practices are likely different from the US. We used national health insurance claims data to investigate if rotavirus vaccination is associated with a reduction in antibiotic prescribing and the switching of antibiotics among children <5 years of age in the postvaccine era (2007-2018). Data Source Data for our analysis came from the IBM MarketScan Commercial Database [23]. The database contains deidentified, individual-level records on employees, their spouses, and their dependents who have employer-sponsored health insurance from all US states. The database includes data on enrollment status, inpatient and outpatient medical visits, and pharmaceutical claims for several million individuals each year. Construction of Cohort We constructed a retrospective cohort of commercially insured US children who were born between 2007 and 2018. All children whose births were included in MarketScan were eligible to be enrolled. We estimated each individual patient's date of birth by restricting to International Classification of Diseases, Ninth Revision and Tenth Revision (ICD-9/ICD-10) codes for live births, major diagnostic codes identifying "newborns and other neonates with conditions originating in perinatal period," and admission codes indicating maternity or newborn admissions. Inpatient and outpatient records were then combined in chronological order, with the earliest live birth claim date used as a proxy for date of birth. Since our primary exposure of interest (rotavirus vaccination) was introduced in the US in 2006 [1], we limited the analysis to children born in 2007 or later.
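A minimal sketch of the date-of-birth proxy just described, assuming hypothetical column names (member_id, service_date, is_livebirth_code) rather than MarketScan's actual field names:

```python
import pandas as pd

def estimate_birth_dates(claims: pd.DataFrame) -> pd.DataFrame:
    """Use the earliest live-birth-related claim per child as a proxy date of birth,
    then keep children born in 2007 or later. Illustrative only; column names are
    assumptions, not MarketScan fields."""
    birth_claims = claims[claims["is_livebirth_code"]].copy()
    birth_claims["service_date"] = pd.to_datetime(birth_claims["service_date"])
    dob = (birth_claims.groupby("member_id")["service_date"]
           .min()
           .rename("birth_date")
           .reset_index())
    return dob[dob["birth_date"].dt.year >= 2007]
```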
To reduce misclassification of rotavirus vaccination status, we limited the analysis to children who were continuously followed in the dataset for the first 8 months of life (ie, still enrolled 7 months after the birth month) and were born in a state that did not include a universal vaccination purchasing program. We excluded children whose enrollment lapsed during the first 8 months of life as they could have received rotavirus vaccination during that lapse and their vaccination data would not be included in this dataset. Similarly, we excluded children born in states that provide universal vaccination programs (ie, Alaska, Connecticut, Idaho, Massachusetts, Maine, North Dakota, New Hampshire, New Mexico, Oregon, Rhode Island, South Dakota, Vermont, Washington, Wisconsin, and Wyoming) that offer free immunization, which may not be recorded in insurance data. Exposure, Outcomes, and Potential Confounders We used Current Procedural Terminology codes 90680 and 90681 to identify all inpatient or outpatient visits that involved receipt of pentavalent rotavirus vaccine or monovalent rotavirus vaccine, respectively. In line with current clinical recommendations [24], children who completed a full rotavirus vaccination series (3 doses of pentavalent rotavirus vaccine or 2 doses of monovalent rotavirus vaccine) by 8 months of age (defined as the first 244 days of life) were considered to have complete vaccination (Table 1). In contrast, children who did not receive any rotavirus vaccine by 8 months of age were considered to have no rotavirus vaccination. The primary outcome of interest was an antibiotic prescription associated with an AGE diagnosis before 5 years of age. Based on clinical expert opinion (coauthors E. J. A. and S. F.), we defined a set of antibiotic and anti-infective agents that would likely be prescribed for AGE (Table 1) and extracted the relevant pharmaceutical claims. In parallel, we identified all AGE diagnoses using ICD-9/ICD-10 codes from inpatient and outpatient visits. An AGE diagnosis was defined as the presence of any AGE diagnostic code after 8 months of age. Any antibiotic prescription that occurred within 3 days of an outpatient AGE diagnostic code or within 3 days of the discharge date from an inpatient visit that included an AGE diagnostic code was considered an antibiotic prescription following AGE. As a secondary outcome, we evaluated the switching of antibiotics as a marker of potential antibiotic resistance. We defined the switching of antibiotics by identifying the prescription of a second, different antibiotic (ie, a different National Drug Code label and product number within any of the defined therapeutic drug classes) within 28 days of the initial prescription. Data on several potential confounders, previously identified in the literature, were extracted for each child in the cohort, including PCV vaccination status, provider type, and rural/urban location of residence. PCV vaccination status was of interest for several reasons. First, PCV is a routine childhood immunization with high coverage [25] and was, therefore, used to represent healthcare-seeking behavior. Second, PCV is a relatively new vaccine [26] (originally introduced in 2000) and may represent a parent/guardian's willingness to accept new vaccines such as the rotavirus vaccine.
Last, PCV vaccination directly impacts a child's likelihood of pneumococcal bacterial infection and resulting antibiotic treatment, potentially confounding the rotavirus vaccination and antibiotic prescription relationship. Children who had received at least 2 doses of PCV by 8 months of age were considered to have been vaccinated against PCV for our analysis. Rates of rotavirus vaccination and antibiotic prescribing are known to vary by provider type [27,28]. We categorized each child into 3 mutually exclusive provider-type categories based on data from outpatient visits. Provider type was categorized as "pediatrician" if a child had any outpatient visits to a pediatrician during the follow-up period. Otherwise, provider type was categorized as "family practitioner" if any of the outpatient visits were to a family practitioner, and "other" if the child did not visit a pediatrician or a family practitioner. Last, individuals were categorized as living in a rural area if the primary address for the plan holder was not within one of the defined metropolitan statistical areas [29], in an effort to capture some of the structural factors that may lead to differences in access to healthcare and health education. Statistical Analysis We summarized and compared covariates and outcomes among children who had complete rotavirus vaccination and children who had no rotavirus vaccination. Additionally, we conducted a time-to-event analysis to estimate the unadjusted and adjusted cumulative incidence of antibiotic prescription associated with AGE among children with complete rotavirus vaccination and children with no rotavirus vaccination. Follow-up began at 8 months of age, and if children lost coverage before experiencing the outcome, they were censored on the first day of the month in which coverage was lost. Participants who did not experience the outcome during follow-up were censored after 5 years of follow-up. Within each group, we defined 12 strata based on receipt of PCV vaccine (2 categories), provider type (3 categories), and urban/rural status (2 categories). For each group, we estimated the Kaplan-Meier survival probability for each day of follow-up and multiplied it by the proportion of the group within each stratum. The resulting probabilities were summed to generate cumulative incidence curves adjusted for receipt of PCV vaccine, provider type, and urban/rural status. Cumulative incidence ratios were estimated by comparing the daily cumulative incidence among children with complete rotavirus vaccination and children without rotavirus vaccination. Confidence intervals (CIs) were estimated using 1000 bootstrapped iterations. We conducted several sensitivity analyses using alternative outcome definitions. First, we conducted a sensitivity analysis using any antibiotic prescription (with or without an AGE diagnosis), a less specific outcome. Second, we evaluated a more specific outcome definition by conducting a sensitivity analysis that limited the outcome to antibiotic prescriptions with an AGE diagnosis that occurred during the historic rotavirus season (January-June) [30]. Third, we conducted the same stratified time-to-event analysis using the secondary outcome, switching antibiotics following an AGE diagnosis. Finally, we conducted a sensitivity analysis in which we expanded the definition of an AGE event to include ICD codes for vomiting [31] (alone and with unspecified nausea; results presented in Supplementary Data).
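A minimal sketch of the stratum-weighted Kaplan-Meier adjustment described above, written in Python with hypothetical column names (time, event, stratum, vaccinated); it is not the authors' analysis code and it omits the bootstrap step.

```python
import numpy as np
import pandas as pd

def km_cuminc(time, event, grid):
    """Kaplan-Meier cumulative incidence 1 - S(t) evaluated on a grid of follow-up days."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    order = np.lexsort((-event, time))          # at tied times, count events before censorings
    t, d = time[order], event[order]
    surv, out, i, n = 1.0, [], 0, len(t)
    for g in grid:
        while i < n and t[i] <= g:
            if d[i]:
                surv *= 1.0 - 1.0 / (n - i)     # n - i subjects remain at risk at this step
            i += 1                              # event or censoring: one subject leaves the risk set
        out.append(1.0 - surv)
    return np.array(out)

def adjusted_cuminc(group, grid):
    """Cumulative incidence for one exposure group, weighted by its stratum mix
    (strata = PCV receipt x provider type x urban/rural, 12 in total)."""
    weights = group["stratum"].value_counts(normalize=True)
    total = np.zeros(len(grid), float)
    for stratum, w in weights.items():
        sub = group[group["stratum"] == stratum]
        total += w * km_cuminc(sub["time"], sub["event"], grid)
    return total

# Hypothetical usage: `cohort` has columns vaccinated (bool), time (days from month 8),
# event (1 = antibiotic after AGE, 0 = censored), stratum (1-12).
# grid = np.arange(1, 5 * 365 + 1)
# cir_5yr = adjusted_cuminc(cohort[cohort["vaccinated"]], grid)[-1] / \
#           adjusted_cuminc(cohort[~cohort["vaccinated"]], grid)[-1]
```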
We extended the primary results to the entire cohort of children born in the US between 2007 and 2018 to estimate a lower bound of the number of antibiotic prescriptions that have been averted by rotavirus vaccination. First, we estimated the number of children who have been vaccinated against rotavirus from 2007 to 2018 using annual birth data from vital statistics [32] and annual estimates of rotavirus vaccination coverage [25,33]. Next, we multiplied the total number of children vaccinated against rotavirus by the proportion of children with an antibiotic prescription following AGE in our study sample to estimate the total number of vaccinated children who have received an antibiotic prescription following an AGE diagnosis nationally. We divided that by the cumulative incidence ratio and calculated the difference between the 2 quantities to estimate the number of antibiotic prescriptions associated with AGE that have been averted. Data management was done in SAS version 9.4, and all analysis was performed in R version 3.6.3 software. RESULTS Characteristics of children by rotavirus vaccination status are presented in Table 2. Children with complete rotavirus vaccination were more likely to have also received a PCV vaccine (99.5%) compared to children without rotavirus vaccination (57.8%). Additionally, participants with rotavirus vaccination were more likely to have seen a pediatrician (88.4%) and less likely to live in a rural area (16.4%) compared to participants without rotavirus vaccination (72.4% and 22.5%, respectively). Overall, 55.4% of participants (n = 1 183 658) received an antibiotic prescription during follow-up, and 1.5% (n = 17 318) of those prescriptions followed an AGE diagnosis. A higher proportion of children fully vaccinated against rotavirus had any antibiotic prescription during follow-up (56.7%, n = 846 883) compared to children who did not receive rotavirus vaccination (52.0%, n = 175 103). Children with full rotavirus vaccination were less likely to have an AGE diagnosis (19.7%, n = 293 655) and less likely to receive an antibiotic prescription following an AGE diagnosis (3.9% of AGE diagnoses) compared to children without rotavirus vaccination (21.2% and 4.4%, respectively). At 1 year of age, the adjusted relative cumulative incidence of antibiotic prescription following an AGE diagnosis was 0.909 (95% CI, .841-.986) among children with complete rotavirus vaccination compared to children without rotavirus vaccination (Table 3, Figure 1). The adjusted relative cumulative incidence decreased over time to 0.821 (95% CI, .784-.862) at 2 years of age and 0.793 (95% CI, .761-.827) at 5 years of age (Figure 2). We report the cumulative incidence of antibiotic prescription following AGE by age group and rotavirus vaccination in Supplementary Table 2. This association was stronger when we limited the outcome to only antibiotic prescriptions following AGE diagnoses that occurred during the typical rotavirus season, demonstrated by an adjusted relative cumulative incidence ratio of 0.836 (95% CI, .762-.921) at 1 year of age and 0.729 (95% CI, .691-.773) at 5 years of age. As a sensitivity analysis, we included ICD codes for vomiting, and the results followed a similar pattern (Supplementary Table 3). Additionally, at 5 years of age, the adjusted relative cumulative incidence of receiving a second, unique prescription within 28 days was 0.820 (95% CI, .750-.905) among children with complete rotavirus vaccination compared to children without rotavirus vaccination.
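The national extrapolation described in the methods above reduces to a short calculation. The sketch below uses the point estimates reported in this article, with the per-child outcome proportion among vaccinated children approximated from the published counts, so it reproduces the figure quantified in the next paragraph only approximately.

```python
# Back-of-the-envelope reproduction of the averted-prescription extrapolation.
# Inputs are point estimates from the text; p_outcome is an approximation inferred
# from the reported sample counts, so the result is close to, not exactly, 67 045.
births_2007_2018 = 48_092_250        # US births, vital statistics
coverage = 0.689                     # estimated rotavirus vaccination coverage
cir_5yr = 0.793                      # adjusted cumulative incidence ratio at 5 years

vaccinated = births_2007_2018 * coverage            # ~33.1 million children
p_outcome = 0.0077                                  # assumed share of vaccinated children
                                                    # with an antibiotic after AGE
observed = vaccinated * p_outcome                   # prescriptions among the vaccinated
counterfactual = observed / cir_5yr                 # expected had they been unvaccinated
averted = counterfactual - observed
print(f"averted prescriptions ~ {averted:,.0f}")    # on the order of 67 000
```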
According to vital statistics data, from 2007 to 2018 there were 48 092 250 children born in the US, and based on annual trends in rotavirus vaccination coverage, we estimated that 33 150 763 (68.9%) of them were vaccinated against rotavirus. Extending our results to this cohort of children, we estimated that rotavirus vaccination has averted the prescription of at least 67 045 (95% CI, 53 729-80 664) antibiotics associated with an AGE diagnosis among children born between 2007 and 2018. DISCUSSION The IBM MarketScan Commercial Database provided a unique opportunity to investigate the relationship between rotavirus vaccination and antibiotic prescriptions, which has not been previously described in the US. We found that children with rotavirus vaccination were less likely to be prescribed an antibiotic following an AGE diagnosis and were, in turn, less likely to switch antibiotics within 28 days of the initial prescription. This differential in antibiotic prescribing between fully vaccinated and unvaccinated children increased through follow-up to 5 years of age. When applied to the US child population, we estimate that rotavirus vaccination has prevented >67 000 initial antibiotic prescriptions since vaccine introduction. These findings highlight an important nontargeted benefit of rotavirus vaccination: a reduction in antibiotic prescribing among children with AGE. We found that the protection against antibiotics following an AGE diagnosis increased over time, with the rotavirus vaccine cumulative effectiveness increasing from 9.1% (95% CI, 1.4%-15.9%) at 1 year of age to 20.7% (95% CI, 17.3%-23.9%) by 5 years of age. The instantaneous vaccine effectiveness over time is presented in Supplementary Figure 1 and indicates that vaccine effectiveness increases steeply in the first couple of months after vaccine administration and gradually increases over the following year. Furthermore, limiting our analysis to AGE diagnoses that occurred during the typical rotavirus season (January-June) resulted in a higher cumulative vaccine effectiveness of 16.4% at 1 year (95% CI, 7.9%-23.8%) and 27.1% (95% CI, 22.7%-30.9%) at 5 years, supporting a causal relationship. These results compare with the estimated reduction of 11.4% within the first 2 years of life in low- and middle-income countries (LMICs) described in Lewnard et al [34]. However, it is important to note differences in treatment practices in LMICs compared to the US. Rogawski et al reported that 45.8% of diarrhea episodes within the first 2 years of life in 8 LMICs were treated with antibiotics [35], indicating that the absolute incidence of antibiotic prescribing associated with diarrhea that is averted by rotavirus vaccination is probably much higher in these settings. An additional study by Lewnard et al found that rotavirus was the leading etiologic cause of antibiotic-treated diarrhea in the first 2 years of life in LMICs and concluded that vaccination programs could substantially reduce antibiotic consumption in LMICs [21]. These results further support the utility of vaccination as part of a broader, comprehensive effort to reduce antimicrobial resistance [16]. In addition to promoting the development of new vaccines to prevent infections and reduce the general overuse of antibiotics, the National Vaccine Advisory Committee recently highlighted the importance of existing vaccines as a tool to prevent prescribing of antibiotics for viral gastroenteritis [36].
Rotavirus vaccine coverage in the US has remained low compared to other routine immunizations. In 2016-2017, only three-quarters (75.3%) of children had completed the full rotavirus vaccine series by 8 months of age, whereas >90% of children had completed 3 doses of diphtheria, tetanus, and acellular pertussis vaccine (93.4%) or PCV (91.5%) by 2 years of age [25]. Our results suggest that the potential benefits of achieving increased vaccination coverage will extend beyond reducing rotavirus gastroenteritis to antibiotic prescribing and perhaps other outcomes. These results should be interpreted within the context of a few limitations. First, we were unable to control for socioeconomic factors (race, ethnicity, income) that may confound our exposure-outcome relationship because they were unavailable in the database. However, controlling for PCV vaccination, provider type, and urban/rural status likely reduced the potential confounding due to known confounders [25][26][27][28]. The higher general antibiotic prescription rates among children who were vaccinated are likely a reflection of a difference in healthcare-seeking behavior between the 2 groups, but focusing our outcome on antibiotic prescriptions following an acute gastroenteritis diagnosis reduces that concern [37]. Additionally, it is possible that the provider type for rotavirus vaccine administration and AGE visits may be different, which could allow for residual confounding. Second, we did not evaluate other diagnostic codes concomitant with AGE diagnostic codes, which could result in some misclassification of the outcome. This approach could lead to the inclusion of AGE diagnoses associated with other health conditions that are unrelated to vaccination, which yields a conservative estimate of effect. Similarly, if individuals did not have prescription coverage as part of their insurance at the time of the AGE visit, we may be underestimating prescriptions following an AGE diagnosis, which would likewise result in a conservative estimate of effect. Third, although we included antibiotic prescriptions that occurred shortly after discharge from inpatient visits with an AGE diagnostic code, we are unable to capture antibiotics that are administered in inpatient settings. Fourth, we used an insurance claims database in which low-income populations, particularly populations that are more likely to be underinsured or on Medicaid, are underrepresented or not included. Excluding these populations, among whom antibiotic prescribing is high [38,39], could underestimate the impact of rotavirus vaccination. Finally, our results may further underestimate the impact of rotavirus vaccination because we did not specifically investigate indirect vaccine effects. Rotavirus vaccination has been shown to provide indirect (herd) protection against rotavirus gastroenteritis among both unvaccinated children and adults in the US [6]. We are unable to account for these indirect effects among the unvaccinated older populations, including adults, in our study, potentially underestimating the overall effect of rotavirus vaccination. As such, we suspect that our study provides a conservative estimate of the population-level impact of rotavirus vaccination on AGE-associated antibiotic prescribing in children and of the indirect benefit among adults, and that the true impact may in fact be larger. Previous studies suggest that approximately 21-24 million antibiotics are prescribed annually for children <5 years of age in the outpatient setting [17,38].
Although the 67 000 treated episodes averted by rotavirus vaccination according to these results are a relatively small portion of all antibiotics prescribed in this age group, these results indicate that rotavirus vaccination can be part of a multifaceted approach to reducing antibiotic treatment. Our analysis used a large, longitudinal, individual-level insurance claims database, which uniquely allowed for the detection and precise estimation of the association between rotavirus vaccination and antibiotic prescribing. These results demonstrate an additional important, nontargeted benefit of rotavirus vaccination and bolster evidence for the use of rotavirus vaccines for reducing antibiotic prescribing for acute gastroenteritis. In addition to the existing evidence that vaccination against respiratory infections may reduce antibiotic prescriptions [13], these results provide similar evidence for nonrespiratory infections as well. The reduction of antibiotic prescribing likely contributes to the broader effort of reducing antimicrobial resistance. Thus, increasing rotavirus vaccination coverage should be encouraged for both its intended and nontargeted effects. Supplementary Data Supplementary materials are available at Open Forum Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author. Notes Patient consent. As the MarketScan database is compliant with the Health Insurance Portability and Accountability Act, no additional ethics committee approval was needed for this research. Patient consent was not needed for this study. Financial support. This work was supported by funding from the Wellcome Trust (grant number 219823/Z/19/Z) and the National Institute of Allergy and Infectious Diseases (grant number T32AI074492). Potential conflicts of interest. B. L. reports grants and personal fees from Takeda Pharmaceuticals and personal fees from the World Health Organization (WHO), unrelated to the submitted work. E. W. H. reports consulting fees from Merck, unrelated to the submitted work. J. M. B. reports consulting fees from WHO, unrelated to the submitted work. All other authors report no potential conflicts of interest. All authors have submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Conflicts that the editors consider relevant to the content of the manuscript have been disclosed.
Econometric model of the effective activity of the enterprise

In modern conditions of globalization and growing economic instability, many enterprises cannot withstand competition and find themselves in a state of crisis. In order to carry out successful trading activities, there is a growing demand for reliable information about the future development of the organization, and therefore the most urgent problem of any enterprise is its financial stability. Using the example of an enterprise with the registered form of a limited liability company, the authors considered the assessment and analysis of the stability of the enterprise's development in order to confirm the future prospects of its functioning in the Russian markets. With the help of basic methodological approaches and principles of applying the apparatus of econometric modeling and analysis of economic phenomena and processes, an assessment of the efficiency of a limited liability company is carried out and a forecast of the development of economic phenomena and processes is made, which is an urgent task at this point in time. As a result of our research, we have determined a linear model of the dependence between indicators such as profit and cost of sales, which is statistically significant with a probability of 95% and allows us to predict the profit of a limited liability company. A comprehensive assessment of the financial condition of the company allowed us to formulate conclusions and proposals to improve the efficiency of the company.

Introduction The modern level of organization and management of entrepreneurial activity objectively determines the need to use the tools of financial and economic analysis to assess the current state and to choose forecast options for the development of entrepreneurship [1]. The methodology of econometric research allows us to identify the main factors behind changes in performance indicators, determine the degree of their influence on volume and non-financial indicators, and ultimately concentrate the actions of management personnel on the implementation of strategies with an active socio-economic orientation. The development of econometric methods and models, and their use with quantitative and non-quantitative information, opens new opportunities for substantiating current and prospective management decisions. The volume of sales of a particular product depends on a number of factors, the impact of which must be taken into account. Only with the help of econometric analysis is it possible to identify the influence of particular factors on the result. A properly conducted econometric study can reveal the close relationships among several explanatory factors in the regression equation. In practice, one variable cannot change while all other variables remain absolutely unchanged. Some factors of the model have a positive effect on the growth of the dependent variable, i.e., they accelerate the growth of output, while others, on the contrary, slow down its growth [2]. Thus, with the help of basic methodological approaches and principles of applying the apparatus of econometric modeling and analysis of economic phenomena and processes, we can assess the current state and make a forecast of the development of economic phenomena and processes, including evaluating the efficiency of the enterprise, which is an urgent problem at this point in time.
The store of the limited liability company (LLC) «Usadba» in the city of Tyumen is a popular neighborhood store with a traditional form of service; its main function is to promote food products to the consumer. Among the goods sold are milk and dairy products, bread and bakery products, groceries, meat and fish products, draft beer, and tobacco products.

Materials and methods Using the example of this store, we decided to conduct a study in order to identify the links between quantitative characteristics in the activities of the limited liability company and to build a model of the dependence of related quantitative characteristics for the purposes of analysis and forecasting. To achieve this goal, the following tasks were solved:
- collection of information on the available quantitative characteristics in the activities of the limited liability company «Usadba»;
- identification of the relationships between quantitative characteristics;
- determination of the general type of the dependence model between the features, determination of the parameters of this model and their interpretation;
- determination of the quality of the constructed model;
- forecasting based on the constructed model.
The scientific novelty of the results of our research is as follows: an econometric model has been developed based on the analysis of the quantitative characteristics of the enterprise, the limited liability company «Usadba», which allows predicting the results of its activities. The practical novelty of the research is that, on the basis of the developed model and the conclusions and proposals formulated, the efficiency of the enterprise can be increased. The formulated conclusions and suggestions can be used in similar enterprises. A model of the dependence of related quantitative characteristics in the activities of the limited liability company «Usadba» is constructed. According to this model, the analysis and forecast of the company's activities are carried out. A comprehensive assessment of the financial condition of the limited liability company «Usadba» allowed us to formulate conclusions and proposals to improve the efficiency of the enterprise.

Results and Discussion The financial position of the limited liability company «Usadba» of the city of Tyumen is characterized by its business activity. The criteria of business activity include indicators that reflect the qualitative and quantitative aspects of the development of the company's activities, the volume of sales of goods and services, profit, and indicators of the turnover of assets and liabilities. Thus, according to these criteria, it is possible to determine how effectively the company uses its funds. Without an analysis of its financial condition, it is impossible today for any economic entity to function, including those that, for certain reasons, do not pursue the goal of maximizing profits. While the efficiency of economic management is a voluntary matter for the agent of economic activity, financial reporting is mandatory [7].
The stable activity of the limited liability company «Usadba» depends both on the validity of its development strategy and marketing policy and on the effective use of all resources at its disposal, as well as on external conditions, which include the tax, credit, and pricing policy of the state and market conditions. Because of this, the information base for the analysis of the financial condition should be the reporting data of the store, certain specified economic parameters, and options under which the external conditions of its activity change, which must be taken into account when making analytical assessments and managerial decisions. The analysis of business activity carried out in Table 1 shows that revenue from the sale of goods in 2019 compared to 2018 increased by 3762.95 thousand rubles; therefore, gross profit also increased, covering the loss of 2018 and reaching a positive result, which amounted to 422.57 thousand rubles. The gross profit of the limited liability company «Usadba» for the period under study has a pronounced upward trend, which positively characterizes the commercial activity of the store. Expenses for ordinary activities in the limited liability company «Usadba» in 2018 amounted to 2,478 thousand rubles, and in 2019 to 3,324.32 thousand rubles, which is 34% more. The ongoing changes have affected the growth of the profitability of sales. The return on sales is calculated by dividing the profit from the sale of products, works, and services, or net profit, by the amount of revenue received. The calculation shows that in 2018 the profitability is negative, and in 2019 it is positive. Let us consider the model of the dependence of the profit of the limited liability company «Usadba» on the various factors listed in Table 2. To build a model of the dependence of the profit of the limited liability company «Usadba» on various factors, we studied the following indicators in rubles: Y - profit (loss) from core activities, X1 - receipt of goods to the warehouse, X2 - accounts payable to suppliers, X3 - cash inflow, X4 - cash outflow, X5 - customer debt, X6 - tax arrears to the budget, X7 - cost of sales, X8 - revenue from sales [8]. Consider the matrix of paired correlation coefficients for all variables involved in the analysis. The matrix was obtained using the Correlation tool from the Data Analysis package in Excel. Visual analysis of the matrix allows us to establish the following: 1. Profit has a fairly high pair correlation with factor x7 - the cost of sales, and a moderate pair correlation with factors x5 - customer debt, x6 - tax arrears to the budget, and x8 - sales revenue; 2. Some analysis variables demonstrate rather high pair correlations, which necessitates checking the factors for the presence of multicollinearity between them. Moreover, one of the conditions of the classical regression model is the assumption of the independence of the explanatory variables [8]. Conducting a step-by-step selection of factors by excluding statistically insignificant variables from the model, we obtain two statistically significant factors, namely x7 - cost of sales and x8 - revenue. A two-factor regression equation, all coefficients of which are significant at the 5% significance level, has the form (1). To select the most significant explanatory variables, after conducting a test for "long" and "short" regressions, we obtained two "short" regressions: the equation of the dependence of profit on the cost of sales (2) and the equation of the dependence of profit on revenue (3).
ŷ = 484742.5 − 0.73774 · x7   (2)

This equation shows that with an increase in the cost of sales by 1 ruble, the profit of the company «Usadba» will decrease by an average of 0.73774 rubles per month. Accordingly, with a decrease in the cost of sales by 1 ruble, the profit of the company «Usadba» will increase by an average of 0.73774 rubles per month. The resulting equation of the dependence of profit on sales revenue (3) shows that with an increase in sales revenue by 1 ruble, the profit of LLC «Usadba» increases by an average of 0.54 rubles per month. Checking the statistical significance of the obtained dependence equations (1), (2) and (3), we find that regression equations (1) and (2) are statistically significant at the 95% confidence level and are therefore suitable for further use, whereas regression equation (3) is not statistically significant at the 95% confidence level and is therefore not suitable for further use. As a result of the analysis of the "long" and "short" regressions, we give preference to the "short" regression (2). Using the elasticity coefficient, we evaluate the degree of influence of factor x7 on the result. The elasticity coefficient shows by how many percent the dependent variable changes when the factor changes by one percent. The obtained coefficient shows that with an increase in the cost of sales by 1% from the average level of 304729.2 rubles, the profit of LLC «Usadba» will decrease by 0.86% from its average level. Based on the data obtained, we consider the following types of forecasting: a point forecast of factor x7 and a point forecast of the Y indicator. As a point forecast of the factor, let us consider what value the cost of sales will take if it decreases by 10% from the average level. We get: x7* = x̄7 − 0.1 · x̄7 = 0.9 · x̄7 = 0.9 · 304729.2 = 274256.3. Accordingly, with a reduction in the cost of sales by 10% from the average level, the forecast profit value is obtained by substituting this value into equation (2). As a result of studying the dependence of the profit of LLC «Usadba» on eight different factors, it was found that the greatest attention should be paid to reducing the cost of sales, since reducing the cost of sales by 1 ruble entails an increase in profit by 0.73774 rubles.

Conclusion A comprehensive econometric analysis of the activities of the limited liability company «Usadba» in Tyumen showed that the highest correlation is manifested between profit and cost of sales [9,10]. The relationship between these indicators is linear and inverse. A linear model of the dependence of these indicators has been determined, which is statistically significant with a probability of 95% and allows us to forecast the profit of the limited liability company «Usadba». A comprehensive assessment of the financial condition of the limited liability company «Usadba» allowed us to formulate conclusions and proposals to improve the efficiency of the enterprise.

Fig. 1. The result of modeling and forecasting by pair regression.
Table 1. The main indicators of the activity of the limited liability company «Usadba» store for 2019-2020.
Table 2. Indicators of the turnover balance sheet for 12 months.
Table 3. Matrix of pair correlation coefficients.
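The modeling workflow described above (a pairwise correlation matrix, the "short" regression of profit on cost of sales, the elasticity coefficient, and a point forecast) can be illustrated with a small Python sketch. This is a minimal illustration, not the authors' Excel workflow: the monthly observations in the arrays below are hypothetical placeholders, while the intercept 484742.5, the slope -0.73774, and the average cost of sales 304729.2 rubles are taken from the text.

```python
import numpy as np

# Hypothetical monthly observations in rubles; the study used 12 months of
# turnover balance-sheet data for LLC "Usadba" that are not reproduced here.
profit = np.array([255000, 270000, 262000, 281000, 249000, 268000,
                   275000, 259000, 284000, 266000, 272000, 258000], dtype=float)
cost_of_sales = np.array([310000, 292000, 301000, 278000, 318000, 296000,
                          288000, 307000, 274000, 299000, 290000, 309000], dtype=float)
revenue = np.array([560000, 575000, 568000, 590000, 552000, 571000,
                    580000, 562000, 595000, 569000, 577000, 560000], dtype=float)

# Matrix of pairwise correlation coefficients (the study used Excel's
# Correlation tool from the Data Analysis package).
corr = np.corrcoef(np.vstack([profit, cost_of_sales, revenue]))
print("Correlation matrix:\n", np.round(corr, 3))

# "Short" regression y = b0 + b1 * x7 fitted by ordinary least squares.
b1, b0 = np.polyfit(cost_of_sales, profit, deg=1)
print(f"Fitted equation: y = {b0:.1f} + {b1:.5f} * x7")

# Coefficients and mean cost of sales reported in the text.
b0_rep, b1_rep, x7_mean = 484742.5, -0.73774, 304729.2

# Elasticity coefficient: percentage change in profit per 1% change in the
# cost of sales, evaluated at the mean levels; reproduces the reported -0.86.
y_mean = b0_rep + b1_rep * x7_mean
elasticity = b1_rep * x7_mean / y_mean
print(f"Elasticity at the mean: {elasticity:.2f}")

# Point forecast: cost of sales reduced by 10% from its average level,
# substituted into equation (2).
x7_forecast = 0.9 * x7_mean            # = 274256.3 rubles
y_forecast = b0_rep + b1_rep * x7_forecast
print(f"Forecast profit at x7 = {x7_forecast:.1f} rubles: {y_forecast:.1f} rubles")
```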
SNP-SNP Interaction between TLR4 and MyD88 in Susceptibility to Coronary Artery Disease in the Chinese Han Population

The toll-like receptor 4 (TLR4)-myeloid differentiation factor 88 (MyD88)-dependent signaling pathway plays a role in the initiation and progression of coronary artery disease (CAD). We investigated SNP-SNP interactions between the TLR4 and MyD88 genes in CAD susceptibility and assessed whether the effects of such interactions were modified by confounding risk factors (hyperglycemia, hyperlipidemia and Helicobacter pylori (H. pylori) infection). Participants with CAD (n = 424) and controls (n = 424) without CAD were enrolled. Polymerase chain reaction-restriction fragment length polymorphism analysis was performed on genomic DNA to detect polymorphisms in TLR4 (rs10116253, rs10983755, and rs11536889) and MyD88 (rs7744). H. pylori infections were evaluated by enzyme-linked immunosorbent assays, and the cardiovascular risk factors for each subject were evaluated clinically. The significant interaction between TLR4 rs11536889 and MyD88 rs7744 was associated with an increased CAD risk (p value for interaction = 0.024). In conditions of hyperglycemia, the interaction effect was strengthened between TLR4 rs11536889 and MyD88 rs7744 (p value for interaction = 0.004). In hyperlipidemic participants, the interaction strength was also enhanced for TLR4 rs11536889 and MyD88 rs7744 (p value for interaction = 0.006). Thus, the novel interaction between TLR4 rs11536889 and MyD88 rs7744 was associated with an increased risk of CAD, which could be strengthened by the presence of hyperglycemia or hyperlipidemia.

Introduction Coronary artery disease (CAD) is the most common cause of morbidity and mortality in China [1]. It is a complex disease determined by genetic predisposition and the accumulation of environmental factors, which play major roles in a number of associated vessel wall abnormalities [2]. A person's genetic make-up, as well as other well-known major risk factors, is important for the initiation and progression of CAD. Indeed, a substantial body of literature has investigated the association of CAD with gene polymorphisms [3][4][5]. Toll-like receptor 4 (TLR4) and myeloid differentiation factor 88 (MyD88), which act as the gate of the innate immune system and the trigger of the adaptive immune system, have been extensively studied for their roles in the pathogenesis and progression of CAD [6,7]. Compared with wild-type mice, mice deficient in the TLR4 or MyD88 gene exhibited significantly smaller infarctions, as well as lower levels of some atherogenic cytokines (e.g., IL-1β, IL-6, and TNFα) [8]. Some studies have found that a coding polymorphism in the TLR4 gene was associated with CAD or acute myocardial infarction in a Caucasian population, but not in a Chinese population [9]. Regarding the MyD88 gene, a single nucleotide polymorphism (SNP) in its 3′-untranslated region (3′-UTR) has been reported to be associated with Buerger disease but not with Takayasu arteritis in the Japanese population [10]. To date, 153 suggestive DNA variants associated with CAD have been discovered by genome-wide association studies (GWAS) worldwide. However, each variant usually confers a minimal to modest increase in relative risk, averaging only 18% (corresponding to an odds ratio of 1.18) [11]. Accordingly, the results of genetic polymorphism studies that have sought to identify relationships of the TLR4 and MyD88 genes with CAD remain controversial and inconclusive.
In most studies, the association between the risk of CAD and genetic polymorphisms was often limited to a single locus, or to haplotypes over several neighboring loci in one gene of interest, which seems insufficient because the genetic basis of CAD is complex and varied [12]. Thus, an increasing number of studies have assessed epistatic gene-gene interaction effects on CAD susceptibility [13,14]. TLR4 is an important membrane receptor, which not only can recognize most exogenous ligands, like the lipopolysaccharide (LPS) of Helicobacter pylori (H. pylori) [15], but also can bind to some endogenous ligands, such as fetuin-A (FetA), related to hyperglycemia, and minimally modified low density lipoprotein (mmLDL), involved in hyperlipidemia [16,17]. Thus, we made further efforts to evaluate how these related environmental factors modify the SNP-SNP interaction effect of the TLR4 and MyD88 genes. Consequently, in this study, we investigated potential SNP-SNP interactions of the TLR4 and MyD88 genes for their possible roles in susceptibility to CAD. We assessed whether the effects of such interactions were modified by environmental factors, such as hyperglycemia, hyperlipidemia and H. pylori infection, in order to determine the architecture of CAD predisposition and thereby improve personalized prevention for individuals at risk of this disease.

Study Population This was a single center, case-control study. We collected data from 848 consecutive participants who had undergone coronary angiography at the First Affiliated Hospital of China Medical University between 2012 and 2015. This study was approved by the Ethics Committee (Ethics Approval Numbers: [2011]18 and 2015-68-2). Patients who had at least one vessel with stenosis of no less than 50% of the diameter were defined as CAD cases (n = 424). Those who had no demonstrable lesions on angiography served as controls (n = 424). The exclusion criteria were as follows: participants with cardiomyopathy, autoimmune disease, severe kidney or liver disease, or malignant disease. All participants had their demographic characteristics (e.g., age, sex) recorded and were examined to determine the presence of cardiovascular risk factors. The confounding risk factors were as follows: (a) smoking: individuals who had smoked at least one cigarette per day for more than one year were classified as smokers; (b) alcohol consumption: individuals who had consumed at least one alcoholic drink a day for a minimum period of six months were defined as consumers of alcohol; (c) hypertension: individuals with systolic blood pressure ≥140 mmHg or diastolic blood pressure ≥90 mmHg, or both, were considered hypertensive; (d) hyperglycemia: individuals with fasting plasma glucose ≥7.0 mmol/L or 2-h plasma glucose ≥11.1 mmol/L, or both, were considered hyperglycemic; (e) hyperlipidemia: participants with plasma cholesterol concentration ≥5.17 mmol/L, plasma triglyceride concentration ≥1.70 mmol/L, or plasma low-density lipoprotein cholesterol concentration ≥2.58 mmol/L were considered hyperlipidemic. Details of the study group characteristics are summarized in Table 1.

SNP Selection and Genotyping A two-step approach was performed to identify tag-SNPs in the TLR4 and MyD88 genes as described previously [18]. First, tag-SNPs were selected from the combinations provided by the HapMap database (Release 27, Phase I + II + III) and Haploview software [19,20]. Next, the functional effects of the selected tag-SNPs were predicted by FuncPred software [21].
Accordingly, two tag-SNPs (rs10116253 and rs10983755) in the promoter region of TLR4, one tag-SNP (rs11536889) in the 3′-UTR of TLR4, and one tag-SNP (rs7744) in the 3′-UTR of MyD88 were screened. Genomic DNA of each subject was extracted from a blood clot using standard phenol-chloroform methodology. The polymorphisms were detected using the polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) procedure. Table S1 shows the details of the PCR-RFLP conditions for the four tag-SNPs.

H. pylori Serology The concentration of serum IgG specific for H. pylori was tested using an enzyme-linked immunosorbent assay (H. pylori IgG ELISA kit; BIOHIT, Helsinki, Finland). The cut-off value is 34 EIU, as given by the standard protocol (BIOHIT, Helsinki, Finland). If the titer value was above 34 EIU, the individual was defined as H. pylori-infected [22].

Statistical Analyses All statistical analyses were performed using the SPSS 16.0 statistical software package (SPSS, Chicago, IL, USA). Discrete variables, represented as frequencies and percentages, were evaluated by Pearson's χ2 tests. Continuous variables, presented as mean ± SD, were compared using ANOVA tests. SNP-SNP interaction effects were assessed using likelihood-ratio tests, by comparing the fit of the logistic model that included only the main effects of the environmental risk factors and genotypes with a fully parameterized model that also included the interaction terms [23]. Odds ratios (OR) with 95% confidence intervals (CI) were calculated as measures of association, adjusted for the confounding risk factors (age, sex, hypertension, hyperglycemia, hyperlipidemia and H. pylori infection) unless the risk factor had been used as a stratification factor. A two-sided p value of <0.05 was considered statistically significant.

Main Effect Analyses of Individual Polymorphisms in TLR4 and MyD88 The genotype distributions of the four SNPs studied in the control participants followed Hardy-Weinberg equilibrium (HWE) (p > 0.05) (Table S2). In our unpublished data, we found that, of the polymorphisms in TLR4 and MyD88, the TLR4 rs10116253 polymorphism was associated with a slightly decreased risk of CAD, whereas there was no overall genetic effect for TLR4 rs10983755, TLR4 rs11536889 or MyD88 rs7744 relating to CAD risk.

Two-Way Interactions between TLR4 and MyD88 Polymorphisms In the two-way interaction analyses, the most significant interaction was between TLR4 rs11536889 and MyD88 rs7744. This interaction was associated with an increased risk of CAD (p value for interaction = 0.024, OR (95% CI) = 1.928 (1.089-3.413)). In contrast, in the two-way analyses between TLR4 rs10116253 or TLR4 rs10983755 and MyD88 rs7744, no statistically significant interactions were observed (p value for interaction >0.05) (Table 2).

The Effect of Confounding Risk Factors on the Interaction between Polymorphisms in TLR4 and MyD88 In stratified analyses, we tested the effect of the environmental risk factors (H. pylori infection, hyperglycemia and hyperlipidemia) on the interaction strength (Table 3). Under conditions of hyperglycemia, the OR (95% CI) was 4.905 (1.640-14.673) between TLR4 rs11536889 and MyD88 rs7744 (p value for interaction = 0.004). In contrast, the OR (95% CI) was 1.336 (0.664-2.686) for the participants with normal plasma glucose levels (p value for interaction = 0.417). Moreover, when the participants had hyperlipidemia, the OR (95% CI) was 3.269 (1.398-7.644) between TLR4 rs11536889 and MyD88 rs7744 (p value for interaction = 0.006).
However, no interaction effect was noted in the participants without hyperlipidemia (OR (95% CI) = 1.156 (0.513-2.604), p value for interaction = 0.726). Furthermore, H. pylori infection did not influence the interaction effect between TLR4 rs11536889 and MyD88 rs7744 for CAD risk (p value for interaction >0.05). As for the analyses between TLR4 rs10116253 or TLR4 rs10983755 and MyD88 rs7744, no modification by any of the environmental risk factors was identified (p value for interaction >0.05) (Tables 4 and 5).

Discussion Genetic polymorphisms in humans can be used to predict the risk of particular diseases. However, many previous studies have focused their attention on identifying single gene polymorphisms responsible for disease risk, and often no effects or only weak effects have been found in such studies [24,25]. Recently, an increasing number of studies have investigated interactions among combinations of two or more SNPs, and the results have usually revealed a moderate or strong effect on disease risk [23,26]. To the best of our knowledge, this study is the first to assess the interaction effects of TLR4 and MyD88 polymorphisms on CAD risk in the Chinese Han population. TLR4, as the gate of the inflammatory reaction, not only can recognize pathogen-associated molecular patterns (PAMPs) but also can initiate inflammation in the lipid-laden artery wall via the NF-κB pathway, processes that have been shown to take part in the initiation and progression of atherosclerosis and its related complications [27,28]. MyD88, the cytoplasmic receptor adaptor of TLR4, has been widely studied in atherogenesis. Besides being involved in the classical TLR4-MyD88-dependent signaling pathway related to atherosclerosis, MyD88 also plays an important role in obesity-associated inflammatory diseases, including insulin resistance and atherosclerosis [29]. Hence, we performed interaction effect analyses on three tag-SNPs in TLR4 (rs10116253, rs10983755 and rs11536889) and one tag-SNP in MyD88 (rs7744) to evaluate the risk of CAD in the Chinese Han population. We found that an interaction effect between rs11536889 in TLR4 and rs7744 in MyD88 was associated with an increased risk of CAD. Furthermore, the interaction effect was exacerbated by the presence of hyperglycemia or hyperlipidemia. Evidence is accumulating that TLR4 and MyD88 have a close relationship with many inflammation-related diseases, and many studies in recent years have focused on the association of polymorphisms in the TLR4 and MyD88 genes with disease risk [30,31]. Some researchers have reported that the TLR4 rs11536889 polymorphism is associated with a variety of autoimmune diseases, such as Graves' disease and autoimmune pancreatitis [32]. A study by Wang et al. revealed a relationship between TLR4 rs11536889 and sepsis [33]. Furthermore, the results from Sato et al. indicated that genetic variation at rs11536889 contributes to translational regulation of TLR4, possibly by binding to microRNAs [34]. Regarding MyD88 rs7744, Chen et al. found that the variant genotypes of rs7744 were associated with Buerger's disease in a Japanese population [10]. However, we found that, when analyzed as a single locus, neither TLR4 rs11536889 nor MyD88 rs7744 had an effect on CAD risk. In contrast, the interaction effect of TLR4 rs11536889 and MyD88 rs7744 was associated with an increased risk of CAD.
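As an illustration of the likelihood-ratio approach described in the Statistical Analyses section, the following Python sketch compares a logistic model containing only main effects with a fully parameterized model that adds the SNP-SNP interaction term, and reports the interaction OR with its 95% CI. It is a hypothetical example: the simulated genotype coding and data are assumptions for demonstration only, not the study dataset or the SPSS procedure actually used.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 848  # same order of magnitude as the 424 cases + 424 controls

# Hypothetical data: carrier status (0/1) for the two variants, one binary
# covariate, and a simulated binary CAD outcome with a built-in interaction.
df = pd.DataFrame({
    "rs11536889": rng.integers(0, 2, n),      # TLR4 variant carrier
    "rs7744": rng.integers(0, 2, n),          # MyD88 variant carrier
    "hyperglycemia": rng.integers(0, 2, n),
})
lin_pred = -0.5 + 0.1 * df["rs11536889"] + 0.1 * df["rs7744"] \
           + 0.6 * df["rs11536889"] * df["rs7744"]
df["cad"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin_pred)))

# Reduced model: main effects of the genotypes and the covariate only.
X_main = sm.add_constant(df[["rs11536889", "rs7744", "hyperglycemia"]])
m_main = sm.Logit(df["cad"], X_main).fit(disp=0)

# Full model: adds the SNP-SNP interaction term.
df["interaction"] = df["rs11536889"] * df["rs7744"]
X_full = sm.add_constant(df[["rs11536889", "rs7744", "hyperglycemia", "interaction"]])
m_full = sm.Logit(df["cad"], X_full).fit(disp=0)

# Likelihood-ratio test: 2 * (logL_full - logL_reduced) ~ chi-square with 1 df.
lr_stat = 2.0 * (m_full.llf - m_main.llf)
p_interaction = chi2.sf(lr_stat, df=1)

# Interaction OR with 95% CI from the full model.
or_interaction = np.exp(m_full.params["interaction"])
ci_low, ci_high = np.exp(m_full.conf_int().loc["interaction"])
print(f"LR statistic = {lr_stat:.2f}, p for interaction = {p_interaction:.3f}")
print(f"Interaction OR = {or_interaction:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```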
We consider the interaction effect of these two SNPs to be epistasis, which has been implicated in susceptibility to various inflammation-related diseases, such as malignant tumors, asthma, and Parkinson's disease [35][36][37]. The epistatic effect of two or more genes can account for the missing heritability of many diseases, a phenomenon often underestimated or even ignored. Indeed, the genetic effects of the TLR4 rs11536889 and MyD88 rs7744 polymorphisms on the risk of CAD would most likely have been missed had they not been tested jointly. Consequently, the epistatic effects of TLR4 rs11536889 and MyD88 rs7744 on the pathogenesis and progression of CAD might depend on the presence of the other SNP. It is assumed that a functional effect on TLR4 and MyD88 in the TLR4-MyD88-dependent signaling pathway might account for the interaction effect we observed. Any genetic mutation within this pathway, especially in key genes like TLR4 and MyD88, could potentially alter the action of other components of the pathway so as to influence inflammatory reactions in the pathogenesis and progression of atherosclerosis. Our study focused only on a few tag-SNPs with potential functions in the promoter and 3′-UTR of the TLR4 and MyD88 genes, but this approach does not capture all genetic variants in these two genes. Therefore, further analyses covering more tag-SNPs should be undertaken to investigate the potential interaction effects of TLR4 and MyD88 more fully. In the current study, heterogeneity in the hyperglycemia or hyperlipidemia status of the study participants had a significant effect on the interaction of TLR4 rs11536889 and MyD88 rs7744. Moreover, the interaction strength was enhanced under conditions of hyperglycemia or hyperlipidemia. Evidence suggests that exogenous and endogenous ligands can activate the TLR4-MyD88-dependent signaling pathway [38,39]. Miller et al. showed that the mmLDL-induced stimulation of macropinocytosis was TLR4 dependent and resulted in lipid accumulation in macrophages [17]. Pal et al. found that FetA played a crucial role in regulating insulin sensitivity via the TLR4-MyD88-dependent signaling pathway in mice. FetA knockdown in mice with hyperglycemia resulted in inactivation of the TLR4-MyD88-dependent signaling pathway, whereas selective administration of FetA induced inflammatory signaling and insulin resistance [16]. In addition, Yu et al. [29] showed a MyD88-dependent interplay between myeloid and endothelial cells in the initiation and progression of atherosclerosis. MyD88 deficiency in endothelial cells results in a moderate reduction in diet-induced adipose macrophage infiltration and M1 polarization, selective insulin sensitivity in adipose tissue, and amelioration of spontaneous atherosclerosis [29]. Therefore, we hypothesized that TLR4 and MyD88 are highly likely to be associated with hyperglycemia and hyperlipidemia, consistent with the effect modification by hyperglycemia and hyperlipidemia that was observed for the TLR4 rs11536889 and MyD88 rs7744 interaction. Although the LPS of H. pylori has been shown to be one of the most powerful exogenous TLR4 ligands, there is no evidence of systemic invasion of H. pylori beyond the intestinal mucosa. Researchers have looked for H. pylori DNA in atheromatous tissue specimens using PCR. Kaklikkaya et al. did not detect H. pylori DNA in 21 patients with aortoiliac occlusive disease [40]. In addition, Dore et al. found that only one of 32 atherosclerotic plaques obtained at endarterectomy was positive for H. pylori DNA;
however, the possibility of contamination could not be excluded in this study [41]. Hishiki et al. speculated that a relationship exists between H. pylori infection, decreased body mass index, and decreased plasma total cholesterol caused by dyspepsia, and that eradication of H. pylori might exacerbate the metabolic syndrome [42]. In the present study, no interaction effect between the TLR4 rs11536889 and MyD88 rs7744 polymorphisms was identified in the subgroup analyses for H. pylori infection. Taken together, the evidence above indicates that H. pylori is unlikely to be involved in the atherogenic process in arteries, and supports our finding that H. pylori does not influence the interaction effect of TLR4 rs11536889 and MyD88 rs7744 on CAD risk. Our study has some limitations. Firstly, although our study comprised 424 CAD participants and 424 controls, this sample size may still be insufficient for fully analyzing interaction effects. Secondly, additional adenosine functional tests were absent, so we could not investigate the relationship of SNP-SNP interaction effects with microvascular dysfunction in the participants [43]. Thirdly, some information was missing for a small number of study participants, such as lifestyle factors (i.e., smoking and alcohol status), precluding their use as environmental factors in our multivariate logistic regression. Lastly, this study was hospital-based, which might increase selection bias in comparison with a population-based study.

Conclusions In summary, our study is the first to show that a novel SNP interaction between TLR4 rs11536889 and MyD88 rs7744 is associated with an increased risk of CAD. Furthermore, the interaction strength was enhanced under conditions of hyperglycemia or hyperlipidemia. Our results provide a potential genetic clue to help predict CAD risk in susceptible people. Large-scale studies and experiments to determine the mechanisms are required to confirm the findings of this study.
D-Lactate: Implications for Gastrointestinal Diseases

D-lactate is produced in very low amounts in human tissues. However, certain bacteria in the human intestine produce D-lactate. In some gastrointestinal diseases, increased bacterial D-lactate production and uptake from the gut into the bloodstream take place. In its extreme, excessive accumulation of D-lactate in humans can lead to potentially life-threatening D-lactic acidosis. This metabolic phenomenon is well described in pediatric patients with short bowel syndrome. Less is known about a subclinical rise in D-lactate. We discuss in this review the pathophysiology of D-lactate in the human body. We cover D-lactic acidosis in patients with short bowel syndrome as well as subclinical elevations of D-lactate in other diseases affecting the gastrointestinal tract. Furthermore, we argue for the potential of D-lactate as a marker of intestinal barrier integrity in the context of dysbiosis. Subsequently, we conclude that there is a research need to establish D-lactate as a minimally invasive biomarker in gastrointestinal diseases.

Introduction L-lactate is a familiar molecule to the human body and is also produced in large amounts in human tissues, depending on metabolic conditions [1,2]. In contrast, D-lactate is produced only in minute quantities in human tissues, and is therefore not detectable in the bloodstream under normal physiological conditions [3]. Certain bacteria in the human gut produce D-lactate as a byproduct of carbohydrate fermentation. Lactate-producing bacteria (LAB) are an example of intestinal bacteria which can generate both L- and D-lactate [4]. Under normal conditions, there is an equilibrium of bacteria and their metabolites, but when the composition of the microbiome is disrupted, such as through a reduction in diversity or an overgrowth of certain bacteria, dysbiosis develops. Dysbiosis is a condition that promotes disease [5] and is a characteristic of several diseases, including inflammatory bowel disease (IBD), metabolic disorders, autoimmune conditions, and psychiatric and neurological illnesses [6][7][8][9]. Elevated intestinal production of D-lactate in dysbiosis can lead to its detection in the bloodstream, and excessive accumulation of D-lactate in the blood can cause metabolic acidosis, also called D-lactic acidosis [10,11]. This metabolic phenomenon is well described as a complication primarily found in pediatric patients with short bowel syndrome (SBS) [12]. It can be life-threatening, and therefore it is crucial to understand the underlying pathophysiology of this metabolic disorder. Because D-lactate production in the gut increases in the presence of dysbiosis, this review will concentrate on illnesses that affect the gastrointestinal tract. The primary aim of this review is to provide an overview of the sources and metabolism of D-lactate, the pathophysiology of D-lactic acidosis in SBS, and the subclinical rise of D-lactate in diseases affecting the gastrointestinal tract. In addition, the review will address the important question of whether D-lactate could function as a biological marker for intestinal permeability in conjunction with the disease activity of gastrointestinal diseases. It is essential to understand the potential use of D-lactate as a biomarker, as this could have a significant impact on the diagnosis and management of gastrointestinal diseases. This is particularly interesting in pediatrics, with its need for minimally invasive diagnostic tools.
Materials and Methods For this narrative review, a search in PubMed and ClinicalTrials.gov was conducted in March 2023. The search terms were "d-lact*", "gastrointestinal disease*", "short bowel syndrome", "dysbiosis", "microbiome", "intestinal barrier", and "biomarker". These terms were combined into the following query: "d-lact*" AND ("gastrointestinal disease*" OR "short bowel syndrome" OR "dysbiosis" OR "microbiome" OR "intestinal barrier" OR "biomarker"). For the analysis, only studies in humans were considered. In total, 193 papers were retrieved. We read the abstracts of the publications and excluded studies that were not relevant to the research question (e.g., discussing other body fluids such as synovial or vaginal fluids, or analyzing only fecal D-lactate). The references of the selected publications were reviewed to identify additional relevant articles. In total, 56 papers were included in the review.

Biochemistry In 1780, the Swedish chemist Scheele discovered lactate, also known as 2-hydroxypropanoate, in sour milk. Lactic acid has two stereoisomers, L- and D-lactic acid. They are enantiomers due to the presence of an asymmetric second carbon atom. Despite their distinct chirality, both enantiomers share similar chemical and physical properties. Lactic acid, at a physiological pH, exists in its conjugate base form (L-lactate for L-lactic acid and D-lactate for D-lactic acid), which does not affect the chirality of the base anion [13].

Lactate Production and Metabolism Lactic acid is a byproduct of the process known as anaerobic respiration, during which glucose is broken down into pyruvate in the cytoplasm of cells. Pyruvate can then be converted to lactic acid through the action of lactate dehydrogenases [14]. There are two isomer-specific forms of lactate dehydrogenase (LDH), L-LDH and D-LDH, which produce L-lactate and D-lactate, respectively [15][16][17]. The conversion of pyruvate to lactate (specifically the L-lactate form) is an essential process for the body to generate energy during times of oxygen deprivation, such as during intense exercise. Lactic acid can be produced in various tissues, including muscle, red blood cells, and the brain. The accumulation of lactic acid in muscles during exercise can lead to fatigue and soreness. On the other hand, lactic acid can serve as a fuel source for other tissues, such as the heart and liver. For this, it is converted back to pyruvate and enters the aerobic respiration pathway to produce energy in the presence of oxygen [1,2,18].

Lactate Production and Metabolism in Humans Mammalian cells lack the enzyme D-LDH, and therefore produce nearly exclusively L-lactate. However, it has been found that limited amounts of D-lactate are produced endogenously via the methylglyoxal pathway [3]. This pathway generates D-lactate from dihydroxyacetone phosphate (DHAP), an intermediate in various catabolic pathways [19]. It is primarily active in tissues with high rates of glucose utilization, such as the brain and the lens of the eye. In these tissues, the production of D-lactate by the methylglyoxal pathway may serve a protective role by scavenging free radicals and protecting against oxidative stress [20]. The toxic methylglyoxal is quickly converted by the enzymes glyoxalase I and II to D-lactate and glutathione [20]. The resulting D-lactate is then metabolized in the liver and renal cortex to pyruvate by the mitochondrial enzyme D-2-hydroxy acid dehydrogenase (D-2-HDH) [16,17].
Lactate-Producing and -Utilizing Bacteria in the Gastrointestinal Tract In addition to endogenous production in human tissues, lactate can also be sourced from other places, including gastrointestinal bacteria and exogenous supply (Figure 1). The human gastrointestinal tract is home to millions of bacteria, with various bacterial species capable of synthesizing either L- or D-lactate. The specific type of lactate produced depends on the expression of L-LDH or D-LDH in the bacteria, and some bacteria can also convert one isomer to the other using DL-lactate racemase [4]. Common LAB are Lactobacillus (L-lactate, D-lactate, racemic mixture), Pediococcus (L-lactate, racemic mixture), Leuconostoc (D-lactate), Weissella (D-lactate or racemic mixture), Streptococcus (L-lactate), and Bifidobacterium (L-lactate) [21]. LAB are typically gram-positive, aerobic to facultatively anaerobic, and asporogenous rods and cocci. They are also oxidase-, catalase-, and benzidine-negative, and are unable to utilize lactate. The presence of LAB in the GI tract is beneficial for humans, as they help to maintain a healthy gut microbiota, improve nutrient absorption, and stimulate the immune system. Additionally, LAB are often used as probiotics to improve gut health and prevent or treat various gastrointestinal disorders [22]. Pediatric guidelines may recommend probiotics in specific settings, such as acute gastroenteritis and prevention of necrotizing enterocolitis [23]. In addition to LAB, the intestine also contains lactate-utilizing bacteria (LUB), which use lactate as a source of energy. Most LUB belong to the Firmicutes and Actinobacteria phyla [24] and include Lactobacillus, Streptococcus, and Bifidobacterium [25]. Lactobacillus and Bifidobacterium use the phosphoketolase pathway to convert lactate into short-chain fatty acids (SCFAs) [26], whereas Streptococcus uses the Embden-Meyerhof-Parnas (EMP) glycolytic pathway [27]. Maintaining a neutral pH in the gut is crucial for these LUB, as they are sensitive to acidic pH levels. The generated SCFAs include butyrate, acetate, and propionate [24]. They are used by the epithelial cells of the colonic mucosa as their main energy source and can provide energy to the body by absorption [28]. Lactate that is not metabolized to SCFAs is absorbed by the epithelial cells via the proton-dependent monocarboxylate transporter 1 (MCT-1) [29] or excreted with the stool. By efficiently utilizing lactate, LUB help to maintain the balance of lactate in the intestine, help to reduce inflammation, and promote the growth of other beneficial gut bacteria such as Bacteroides and Faecalibacterium, which in turn contribute to overall gut health [30].

Additional Sources Some fermented foods and beverages such as yogurt, sauerkraut, pickles, sour milk, tomatoes, apples, beer, and wine also contain L- and D-lactic acid [24]. Another source of L- and D-lactate is medication, such as Ringer's lactate solution, sodium lactate, propylene glycol, and some peritoneal dialysate solutions [31].

D-Lactate in Health and Disease In healthy individuals, D-lactate is generally considered safe and its amount is negligible.
The metabolism of D-lactate is tightly regulated by enzymes, and any excess D-lactate is rapidly cleared from the body by the kidneys. The level of D-lactate is typically maintained in balance with the L-lactate level in the body, and any changes in the level of D-lactate can indicate an underlying disease condition [32]. Interestingly, when D-lactate is administered orally or intravenously to healthy subjects, they do not develop metabolic acidosis or neurological symptoms [12].

D-Lactic Acidosis in Short Bowel Syndrome D-lactic acidosis (or D-lactate encephalopathy) was first described in 1979 in an adult patient with SBS [33]. D-lactic acidosis is characterized by severe neurological symptoms and metabolic acidosis with a blood D-lactate level of more than 3 mmol/L [34]. Symptoms of D-lactic acidosis include confusion, disorientation, difficulty speaking, and ataxia. In severe cases, D-lactic acidosis can lead to coma or even death. The exact mechanism of D-lactic acidosis in SBS is not fully understood, but it is thought to be related to the increased production of D-lactate by bacteria in the remaining intestine. In individuals with SBS, the remaining intestine may have a reduced capacity to metabolize D-lactate, leading to an accumulation of D-lactate in the blood.

Pathophysiology SBS is characterized by malabsorption and malnutrition as a result of congenital or secondary loss of a large portion of the small intestine [35]. Because of the resulting altered anatomy, a higher load of undigested or partially digested carbohydrates reaches the colon. Fermentation of these carbohydrates to SCFAs by the colonic microbiota leads to a progressive reduction of the intraluminal pH [36]. This pH change supports an overgrowth of acid-resistant, lactate-producing bacteria. In a vicious cycle, these bacteria force a further reduction of pH and thus favor the growth of their own kind [36]. Typically, bacteria of the genus Lactobacillus (L.), such as L. acidophilus, L. fermentum, L. buchneri, L. plantarum, or L. salivarius, which are all D-lactate-producing bacteria, are found in increased concentrations in cases of SBS [10,11]. In summary, in SBS, the increase in undigested carbohydrates in the colon leads to increased SCFA production, which in turn leads to the overgrowth of acid-resistant D-lactate-producing lactobacilli, resulting in an increased accumulation of D-lactate. A low intestinal pH is a common characteristic in patients with SBS. This leads to a high pH gradient across the epithelial membrane. As D-lactate is co-transported with protons (H+) via MCT-1, the uptake of D-lactate from the colon into the blood is enhanced in the setting of SBS [29]. In SBS, the metabolization of D-lactate in the body is impaired due to several mechanisms. First, the low pH in the blood, due to the increased uptake of D-lactate and protons from the colon, inhibits the enzyme D-2-HDH. This inhibition limits the metabolization of D-lactate to pyruvate and enhances the accumulation of D-lactate in the blood [12]. Second, the activity of D-2-HDH appears to be saturable, which at high D-lactate levels results in a build-up of D-lactate [37]. Third, oxalate, a potent inhibitor of the enzyme D-2-HDH, is excessively absorbed in SBS, further impairing the metabolization of D-lactate [38,39]. Last, patients with SBS tend to have higher pyruvate levels, as D-lactate is metabolized to pyruvate via the L-LDH pathway. High pyruvate levels also inhibit the activity of D-2-HDH via a negative feedback loop [40].
D-lactate is partially excreted in the urine. In the renal tubular system, there is a carrier-mediated system (sodium-lactate cotransporter) that can reabsorb D-lactate, so that at low blood levels the urinary excretion of D-lactate is negligible [41]. However, at high blood levels, the kidney is unable to increase the excretion to amounts sufficient for lowering the D-lactate level, even with decreasing reabsorption [41].

Diagnosis It is important to note that the diagnosis of D-lactic acidosis can be challenging, as it requires measurement of blood D-lactate levels. Unfortunately, in clinical routine, validated D-lactate assays are not available. Additionally, the symptoms of D-lactic acidosis can be similar to those of other neurological conditions, making it difficult to distinguish D-lactic acidosis from other disorders. Laboratory work-up typically shows a non-ketotic and non-lactic metabolic acidosis with an increased anion gap in the blood. The urine anion gap may also be increased. In cases of D-lactic acidosis with hyperchloremic acidosis and an increased anion gap in the urine, a misdiagnosis of renal tubular acidosis (RTA) is possible [42,43]. However, analyzing the urine osmolarity gap to calculate excreted NH4+ can help in these cases, as NH4+ excretion is high in D-lactic acidosis but low in RTA [12].

Treatment Acute management of D-lactic acidosis requires correction of the acidemia with IV bicarbonate and fluid hydration. Lactated Ringer's solution contains L- as well as D-lactate and should be avoided [44]. Because carbohydrates are the substrate for D-lactate production by intestinal bacteria, oral carbohydrate intake should be reduced [45]. With hemodialysis, D-lactate can be rapidly cleared from the blood [46]. Antibiotics suppress colonization with D-lactate-producing bacteria. However, caution is required, as antibiotics can cause D-lactic acidosis by promoting, for example, the overgrowth of antibiotic-resistant D-lactate-producing bacteria [47]. Because of the malabsorption of nutrients in SBS, nutritional deficiency may also play a role in the development of neurologic symptoms. Thiamine deficiency was found in a patient with recurrent D-lactate encephalopathy, and after oral thiamine supplementation D-lactic acidosis no longer occurred [48]. Long-term management focuses on preventing recurrences of D-lactic acidosis by correction of the dysbiosis and reestablishment of a healthy microbiome. Replacement of D-lactate-producing bacteria via supplementation of pure L-lactate-producing bacterial species has been successfully employed in pediatric and adult patients [47,49,50]. Another interesting option to change the composition and metabolism of the intestinal microbiota is fecal microbiota transfer, which resulted in resolution of D-lactic acidosis in a case report of a pediatric patient [51]. Nutritional restrictions are beneficial in the long-term management of D-lactic acidosis. Restriction of simple carbohydrate intake reduces the substrate availability for D-lactate-producing bacteria [4]. Oxalate restriction reduces the inhibition of the enzyme D-2-HDH, which metabolizes D-lactate by converting it to pyruvate [10]. On the other hand, the supplementation of calcium is beneficial, as it increases the intestinal pH and thereby favors the growth of non-acid bacteria and suppresses colonization with lactic acid bacteria [10].

Inflammatory Bowel Disease IBD is a chronic disease characterized by inflammation of the gut accompanied by dysbiosis and increased intestinal permeability [52].
Recent studies have found significantly higher levels of D-lactate in the blood of IBD patients compared with healthy controls [53][54][55] (Table 1). One study examined adult patients with ulcerative colitis (UC) who received treatment with mesalazine or mesalazine plus rifaximin. All patients had significantly higher D-lactate levels before therapy than after therapy, and the decrease in D-lactate levels correlated with a decrease in clinical activity (Mayo Score) and systemic inflammatory markers (C-reactive protein and erythrocyte sedimentation rate) [53]. Interestingly, in the group with co-treatment, this effect was more pronounced [53]. A general effect of treatment on D-lactate levels could be confirmed in another study, which included adult UC and Crohn's disease (CD) patients [54]. A study in CD patients that compared patients with active disease and patients in remission showed that D-lactate could be used to discriminate between the two disease states (AUC 0.815, 95% CI 0.692-0.904) [55]. From these studies in IBD, an association between blood D-lactate concentration and intestinal inflammation can be hypothesized. Additionally, D-lactate levels may be a useful biomarker for assessing disease activity as well as the effectiveness of IBD treatment. Further research is needed to better understand the relationship between D-lactate and IBD as well as the potential clinical applications of D-lactate measurement in IBD management. Whether D-lactate even plays an active role in the pathogenesis of IBD needs to be determined.

Acute Appendicitis Acute appendicitis is thought to result from luminal obstruction leading to mucus retention and bacterial overgrowth. The increase in tension of the appendiceal wall, accompanied by decreased blood and lymph flow, eventually leads to necrosis and perforation [56]. The diagnosis of acute appendicitis is often still challenging, despite improved diagnostic strategies and broadly available ultrasonography. Therefore, different studies have investigated whether D-lactate could be a diagnostic tool in acute appendicitis [57][58][59][60][61]. In all but one study, D-lactate levels contributed to the diagnosis of acute appendicitis. One study with pediatric patients further postulated that D-lactate levels can be used to differentiate between types of appendicitis (e.g., acute vs. perforated) [57]. However, this was not confirmed in two other studies [58,59]. It could be argued that a very localized inflammatory process is not sufficient to raise the blood D-lactate level, and that instead a more widespread sequence of events is required. Further studies are needed to provide a more conclusive picture of the role of D-lactate in acute appendicitis.

Intestinal Ischemia Intestinal ischemia is characterized by insufficient oxygenation of the intestinal mucosa, which leads to epithelial damage with an increased risk of bacterial translocation. To test its diagnostic and prognostic value, D-lactate was measured in intestinal ischemia due to embolic events [62], sepsis or septic shock [63,64], and complications of ruptured abdominal aortic aneurysm surgery [65,66]. In all these scenarios, D-lactate was significantly elevated in comparison with controls. It is noteworthy that patients undergoing surgery for mesenteric ischemia showed significantly higher levels of D-lactate in their blood compared with patients undergoing surgery for an acute abdomen without intestinal ischemia [63].
In septic patients, D-lactate levels in the blood correlated positively with splanchnic luminal CO2 production [63]. From these findings, it can be postulated that hypoxia is a crucial anchor point in the vicious cycle of D-lactate production and translocation. Hypoxia can disrupt the normal metabolism of carbohydrates and could lead to an increase in the production of D-lactate by bacteria, as is known for the production of L-lactate [67]. Furthermore, hypoxia can impair the intestinal barrier function [62,68], resulting in increased translocation of D-lactate and other bacterial metabolites from the gut into the bloodstream. These events can trigger an inflammatory response and further exacerbate dysbiosis, feeding the vicious cycle.

Liver Disease A dysfunctional gut-liver axis (the crosstalk between the gut microbiome, its metabolites, the immune system, and the liver) seems to play an important role in the pathogenesis of fatty liver disease, alcoholic liver disease, and liver cirrhosis. Dysbiosis and impaired intestinal barrier function are hallmarks of its dysfunction [69]. Accordingly, increased D-lactate levels in the blood were found in patients with liver diseases, including metabolic fatty liver disease, alcoholic liver disease, and liver cirrhosis [70][71][72][73]. The severity of cirrhosis was not uniformly reflected in the D-lactate levels, which may be due to different definitions of the patient groups. For example, in one study, D-lactate levels in alcoholic liver disease patients with Child-Pugh A or B cirrhosis were less elevated than in patients with acute hepatitis [72]. In another study, there was no difference in D-lactate levels between stable cirrhosis and acute decompensation [68]. A study in hepatitis B patients showed a positive correlation of D-lactate with increasing severity of cirrhosis categorized by Child-Pugh class A, B, and C [71]. Furthermore, in metabolic fatty liver disease, D-lactate levels correlated with fatty infiltration of the liver on ultrasonography. Thereby, D-lactate could be used to distinguish between mild and moderate/severe steatosis [73]. A high disease burden in liver disease is associated with impaired intestinal barrier function due to altered intestinal blood flow as well as a compromised immune response, which leads to dysbiosis and could thus explain raised D-lactate levels in the blood.

Cystic Fibrosis Cystic fibrosis (CF) is a monogenic disease with mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) gene leading to a paucity or absence of chloride channel activity in epithelial cells. In patients with CF, intestinal dysbiosis is often present, caused by multiple factors including pancreatic insufficiency, decreased gut motility, altered intestinal mucus layer composition, and recurrent treatments with antibiotics [74,75]. We found one study investigating D-lactate in pediatric patients with CF [76]. D-lactate levels correlated with clinical activity, defined by the Shwachman-Kulczycki score and pancreatic insufficiency. There was no association with diet composition or malnutrition, but there was a trend toward higher D-lactate levels in patients with signs of intestinal inflammation, expressed by higher fecal calprotectin levels. Further studies are needed to explore whether D-lactate could be a valuable surrogate marker for intestinal health in CF patients.
Diabetes Mellitus as an Example of a Non-Primary Gastrointestinal Disease

Several studies showed significantly increased D-lactate levels in patients with type 1 and type 2 diabetes mellitus (DM) [3,[77][78][79][80][81][82][83]. It is postulated that the increased D-lactate in DM patients may be intrinsically caused by higher endogenous D-lactate production. In favor of this hypothesis is the finding that DM patients show higher methylglyoxal levels compared to controls [79,[82][83][84]. Methylglyoxal is the only substrate that is metabolized to D-lactate in human cells [85], and high substrate availability could be one factor to explain high D-lactate levels in DM [3,82,86]. Additionally, one study showed that administration of high doses of metformin significantly decreased methylglyoxal levels in parallel with an increase in D-lactate [79]. The authors argue that metformin activates the methylglyoxal pathway, and thus increases degradation of methylglyoxal to D-lactate [79]. A known complication in patients with DM is ketoacidosis. In some cases, the extent of the metabolic acidosis cannot be explained by the measured concentrations of ketones [80,81]. This distinctive feature of ketoacidosis may actually be explained by high D-lactate levels. Ketone bodies are degraded to methylglyoxal via different pathways, including methylperoxidase [85], and methylglyoxal is then converted to D-lactate. Whether dysbiosis and increased intestinal D-lactate production may coexist as causes for measurable D-lactate was not investigated in these studies. However, patients with DM can show a dysbiosis [87,88] and an impaired intestinal barrier [89]. Therefore, it is conceivable that the altered microbiome and the higher intestinal permeability could also cause elevated D-lactate levels in the blood of patients with DM. Unfortunately, studies investigating this hypothesis are not available yet.

Emerging Role of D-Lactate as a Biomarker

Intestinal permeability refers to the ability of the intestinal lining to allow or prevent the passage of substances from the lumen of the gut into the bloodstream. Increased intestinal permeability has been associated with various gastrointestinal diseases, including inflammatory bowel disease [52,90], intestinal ischemia [91], advanced liver cirrhosis [92,93], and type 1 [94] and type 2 [89] diabetes mellitus. These diseases share pathophysiological similarities, including dysbiosis, with a possible overgrowth of D-lactate-producing bacteria, which displace the healthy gut microbiota [54,72,74,76,87]. Dysbiosis also contributes to the impairment of the intestinal barrier [95], which then allows translocation of D-lactate into the bloodstream [55,62,71,96]. Overall, it can be postulated that, in addition to increased bacterial D-lactate production, an alteration of the intestinal barrier is required to increase blood D-lactate levels in these diseases. Therefore, D-lactate could serve as a useful biomarker for the integrity of the intestinal barrier in the context of dysbiosis. In certain diseases that lead to elevated D-lactate in the blood, a correlation with disease activity has been observed. In patients with IBD, lower values were measurable during therapy than before therapy [54]. Patients with acute decompensated liver cirrhosis showed higher D-lactate levels in the blood than patients with stable cirrhosis [72]. D-lactate levels in blood also correlated positively with disease activity in pediatric patients with cystic fibrosis [76].
Assuming that D-lactate is a marker of intestinal barrier integrity, it is reasonable that in situations in which disease activity, and thus impairment of the intestinal barrier, changes, D-lactate levels reflect these altered statuses. D-lactate could therefore be a marker of disease activity in chronic diseases affecting the gastrointestinal tract. Limitation of the Study The studies reviewed here primarily describe preliminary data on small numbers of patients. In addition, the available studies focus on associations with crude outcomes and have a limited granularity. All studies concerning subclinical D-lactate levels were-with the exception of two in acute appendicitis [57,61] and one in CF [76]-performed on adults. More comprehensive data for the pediatric age group are available in SBS, but mostly from case reports. Unfortunately, no causal relationships and pathomechanisms are explored, such as the determination of the intestinal microbiome to detect D-lactate-producing bacteria or the correlation between the intestinal microbiome and blood D-lactate levels. From a technical point of view, there are no established cutoffs and normal values. The reasons for this are the unavailability of a standard assay and that the determination of D-lactate is still reserved for research settings. Conclusions The presented studies indicate that D-lactate could be a promising biomarker for assessing gut permeability in the presence of dysbiosis. Dysbiosis often coexists with impaired gut barrier function, leading to increased translocation of microbial products into the bloodstream and causing systemic inflammation. Since D-lactate is primarily produced by intestinal bacteria and can only be detected in the blood when the intestinal barrier function is compromised, measuring D-lactate levels could provide valuable information on the status of the gut barrier. In addition, D-lactate could also serve as an activity marker in chronic diseases that involve dysbiosis and impaired gut barrier function. For example, in inflammatory bowel disease, the severity of the disease is closely related to the degree of dysbiosis and gut barrier dysfunction. Therefore, monitoring D-lactate levels could potentially provide a useful measure of disease activity. However, further research with larger patient populations, including pediatric patients, is needed to validate the utility of D-lactate as a gut permeability or disease activity marker in clinical practice. In particular, a reliable and standardized D-lactate assay needs to be developed and validated for routine clinical use. If successful, the use of D-lactate as a minimal invasive biomarker could have important implications for the diagnosis, monitoring, and treatment of pediatric diseases associated with dysbiosis and gut barrier dysfunction. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest.
Molecular dissection of HBV evasion from restriction factor tetherin: A new perspective for antiviral cell therapy. Viruses have evolved various strategies to escape from the innate cellular mechanisms inhibiting viral replication and spread. Extensive evidence has highlighted the ineffectiveness of interferon (IFN) therapy against chronic hepatitis B virus (HBV) infection, implying the existence of mechanisms by which HBV evades IFN-induced antiviral responses. In our current study, we demonstrate that HBV surface protein (HBs) plays a crucial role in counteracting the IFN-induced antiviral response mediated by tetherin (also known as BST-2). The type I IFN treatment of HBV-producing cells marginally but significantly inhibited the release of HBsAg and viral DNA, but this release was recovered by the knockdown of tetherin. HBs can interact with tetherin via its fourth transmembrane domain thereby inhibiting its dimerization and antiviral activity. The expression of a tetherin mutant devoid of the HBs-binding domain promoted a prominent restriction of HBV particle production that eventually resulted in the alleviation of caspase-1-mediated cytotoxicity and interleukin-1β secretion in induced pluripotent stem cell (iPSC)-derived hepatocytes. Our current results thus reveal a previously undescribed molecular link between HBV and tetherin during the course of an IFN-induced antiviral response. In addition, strategies to augment the antiviral activity of tetherin by impeding tetherin-HBs interactions may be viable as a therapeutic intervention against HBV. INTRODUCTION The type I interferon (IFN) system, which includes IFNα and IFNβ, is an innate immune response [1]. Upon virus infection, cells can readily secrete IFNα/β as part of the biological defense mechanisms that plays a primary role in virus restriction. Indeed, IFNα/β induces the synthesis of a range of antiviral proteins, which serve as cell-autonomous intrinsic restriction factors [2]. However, viruses have evolved multiple strategies to evade the IFN system, which would otherwise limit viral spread at an early stage of infection [3]. Hepatitis B is a serious infectious illness of the liver caused by the hepatitis B virus (HBV) [4]. The primary treatment goal for patients with hepatitis B is to prevent progression of the disease to cirrhosis, liver failure, or hepatocellular carcinoma. Current antiviral therapies for HBV infection involve nucleoside reverse transcriptase inhibitors (NRTIs). However, the long-term treatment of hepatitis B with NRTIs can be associated with toxicity and the emergence of drug resistant viral mutations, which result in treatment failure and disease progression. Therefore, it is vital to develop a new type of antiviral drugs for hepatitis B treatment. Pegylated IFNα is also the standard first-line agent in the treatment of hepatitis B. The biological response to IFN is mediated by its binding to the IFN receptors and the activation of the Janus-activated kinase-signal transducer and activator of transcription www.impactjournals.com/oncotarget (STAT) pathway. This leads to the expression of several hundred IFN-stimulated genes (ISGs), such as tetherin (also known as . Tetherin inhibits the release of HIV, Ebola, Lassa, herpes and other enveloped viruses from infected cells by tethering progeny virions to the membranes of the host cells [5]. However, many viral proteins can inactivate tetherin in multiple ways [6]. 
For example, HIV-1 Vpu can displace tetherin from the site of viral assembly in the plasma membrane [7,8]. Ebola virus glycoprotein (GP) can bind tetherin directly for antagonizing its function, although the mechanism was not deciphered [9]. Furthermore, tetherin can induce NF-κB-mediated signal transduction, leading to the production of proinflammatory cytokines, thereby acting as an innate sensor of viral release [10][11][12]. Accumulating evidence now strongly indicates that IFNα may not be an effective treatment for hepatitis B virus infection [13]. These findings suggest that HBV has evolved strategies to block IFN signal transduction and its antiviral properties. Previous reports indicate that HBV polymerase can block STAT activation to limit IFNα-induced antiviral responses [14,15]. Although the aforementioned pathway might be associated with a high incidence of resistance to type I IFN in patients with HBV infection, it remains elusive as to whether there are other mechanisms that contribute to the IFN-resistance of HBV in connection with ISGs. Thus, it is likely that HBV has the countermeasures to repress the innate antiviral function of tetherin. In our present study, we reveal that IFN-induced tetherin can repress the release of HBV from infected cells but is antagonized by the viral protein HBs. We also suggest that the transduction of HBs-resistant tetherin in hepatocytes may be a potential option in the treatment of HBV infection. Type I IFN-induced tetherin marginally represses HBV release Since tetherin is one of many ISGs, we first investigated whether IFNα/β could induce the expression of tetherin in human hepatocytes. Whilst we found a relatively low level expression of tetherin in untreated cells, treatment with type I IFNs increased the tetherin protein to the certain levels in HepG2 cells and primary human hepatocytes ( Figure 1A). We then investigated the antiviral response by tetherin induced by IFNα treatment. HepG2 cells were treated with or without tetherinspecific siRNAs and IFNα, and then transfected with a HBV molecular clone (pUC19-C_JPNAT; genotype C) ( Figure 1B). Although IFNα did not affect the amounts of intracellular viral DNA (vDNA) ( Figure 1C), we found that IFNα weakly but statistically significantly decreased the levels of HBV surface antigen (HBsAg) and vDNA in the culture supernatant ( Figure 1D, 1E). Moreover, these IFN-induced effects were reduced by a knockdown of tetherin by siRNA ( Figure 1D, 1E). These results indicate that IFN-induced tetherin acts as an antiviral effector against HBV release, although in a relatively weak manner. HBs binds tetherin via its fourth transmembrane domain Hepatocytes have been reported to remain permissive for HBV infection regardless of the treatment of IFNα [16]. This is indicative of antagonistic properties of HBV against endogenous tetherin. To delineate this hypothesis, we assessed whether HBV-encoding proteins could interact with and inactivate tetherin. Immunoprecipitation analysis revealed that both large and small HBs (LHBs and SHBs) can interact with tetherin, but that no other viral protein (such as HBc, HBx and polymerase) could do so ( Figure 2A). To verify the association between SHBs and tetherin in cells, we next examined the intracellular localization of both proteins by immunofluorescence confocal microscopy. Our results indicated that these proteins colocalize in the perinuclear region and cytoplasm ( Figure 2B). 
We next attempted to identify the domain within SHBs that is responsible for the interaction with tetherin. Since SHBs has four transmembrane regions, we generated deletion mutants of these regions. Immunoprecipitation analysis with these SHB mutants indicated that the most C-terminal transmembrane domain (TM4) of SHBs is important for the binding of tetherin, since SHBs lacking TM4 (SHBs∆TM4) were not co-immunoprecipitated with tetherin ( Figure 2C). Moreover, in vitro pull-down analysis using recombinant HBs (genotype A, B, C and D) and tetherin proteins synthesized in cell-free protein systems [17], clearly demonstrated that their interactions are evolutionally conserved across all of the HBV genotypes tested ( Figure 2D). HBs inhibits the dimerization of tetherin to counteract its antiviral activity We next evaluated the antagonizing effect of HBs on the function of tetherin using a HIV-1 viral-like particle (VLP) model. In this model, HepG2 cells are cotransfected with vectors encoding HIV-1 Gag-Pol and tetherin together with either HBs (LHBs, SHBs and SHBs∆TM4) or HIV-1 Vpu (tetherin antagonist of HIV-1) as a positive control. As reported previously, the expression of tetherin restricted HIV-1 VLP release by approximately 10-fold in our ELISA experiments. This restriction was recovered by expression of LHBs and SHBs as well as Vpu. However, the SHBs mutant lacking tetherin binding site (SHBs∆TM4) failed to recover VLP release ( Figure 3A). These results suggest that the direct interaction of HBs with tetherin via TM4 domain might be essential for the anti-tetherin function of HBs. Although Vpu has been reported to decrease the expression of tetherin in cells [18,19], we did not observe this same effect in case of HBs ( Figure 3A). Since HBs was found to interact directly with tetherin ( Figure 2D), we hypothesized that it might interfere with the dimer formation of tetherin. As previously reported [20,21], wild-type (WT) tetherin is detectable as a dimer (50 kD) under non-reducing conditions ( Figure 3B). Interestingly, we found in our current experiments that HBs, but not HBs∆TM4, increased the monomeric tetherin level ( Figure 3B), suggesting that HBs counteracts the antiviral activity of tetherin by inhibiting dimerization. Consistently, the overexpression of a dimerization-defect tetherin mutant (C53, 63, 91A) [20] had no effects on HBV release ( Figure 3C, 3D). Collectively, these results suggest that HBs can bind and antagonize tetherin by inhibiting the functional dimerization of tetherin ( Figure 3E). The transmembrane domain of tetherin is responsible for HBs binding We next mapped the binding domain within tetherin that interacts with HBs. Tetherin consists of an N-terminal cytoplasmic (CT) domain, single transmembrane (TM) domain, extracellular (EC) domain and a C-terminal glycosylphosphatidylinositol (GPI) anchor ( Figure 4A). Tetherin mutants lacking these domains were generated and subjected to immunoprecipitation analysis. All of these tetherin mutants except for ∆TM efficiently interacted with HBs ( Figure 4A), indicating that the tetherin transmembrane domain is responsible for the HBs binding. To further confirm the possibility that HBs antagonizes tetherin through TM-TM associations, primary hepatocytes (bottom) treated with either IFNα or IFNβ (100 or 1,000 U/ml) for 24 h before harvesting. B-E. 
HepG2 cells were transduced with siRNA targeting tetherin (siTetherin-1 or 2) or control siRNA, and were treated with IFNα, then transfected with an HBV molecular clone (pUC19-C_JPNAT) 24 h later. One day after transfection, the cells were washed and treated with IFNα for three days. The expression of tetherin and tubulin in cells was detected by immunoblotting (B). The amounts of viral DNA (vDNA) in cells (C) and in the culture supernatants (E) were measured by real-time PCR. The amounts of HBsAg in the culture supernatants were measured by ELISA (D). ns, not significant; *P < 0.05; **P < 0.01. we substituted the TM domain of tetherin with the corresponding domain of transferrin receptor, hereafter referred to as TFRTM tetherin ( Figure 4B). As expected, TFRTM tetherin showed almost no interaction with HBs in our immunoprecipitation analysis ( Figure 4B). A HIV-1 VLP release assay demonstrated that the antiviral function of TFRTM tetherin were comparable with WT tetherin ( Figure 4C), as reported previously [22]. However, whereas the antiviral function of WT tetherin was inhibited by HBs, that of TFRTM tetherin was not ( Figure 4C). Consistently, the dimerization of TFRTM tetherin was also not abrogated by HBs expression ( Figure 4D). Figure 2: HBs binds tetherin via its fourth transmembrane domain. A. HEK293 cells were cotransfected with plasmids encoding HA-tetherin and the indicated FLAG-tagged HBV proteins. Cell lysates were immunoprecipitated with anti-FLAG antibody and then analyzed by immunoblotting with either anti-HA or anti-FLAG antibody. Vpu (tetherin-interacting HIV protein) was used as a positive control. B. HepG2 cells were transfected with plasmids encoding SHBs-FLAG and HA-tetherin. After 24 h, the cells were fixed, permeabilized, and stained with anti-FLAG (green) and anti-HA (red), followed by confocal microscopic analysis. Scale bar, 10 µm. C. Schematic representation of the domain structure of HBs (top). HEK293 cells were transfected with WT SHBs-FLAG or its transmembrane domain-deficient mutants (∆TM) together with HA-tetherin. Cell lysates were immunoprecipitated with anti-FLAG antibody, and the bound proteins were analyzed by immunoblotting with either anti-HA or anti-FLAG antibody (bottom). D. Alignment of the HBs TM4 sequence with the indicated HBV variants (top). Recombinant FLAG-tagged LHBs derived from the indicated HBV genotypes were mixed with recombinant biotinylated tetherin and then processed for the in vitro pull-down analysis with streptavidin sepharose beads. Captured proteins were analyzed by immunoblotting with either anti-FLAG antibody or horseradish peroxidase-conjugated streptavidin (bottom). DHFR (dihydrofolate reductase) was used as a negative control. www.impactjournals.com/oncotarget HBs-resistant chimeric tetherin efficiently restricts HBV release We next examined the effect of HBs-resistant tetherin on HBV release. Consistent with the data from our HIV-1 VLP assay ( Figure 4C), TFRTM tetherin exhibited more potency for inhibiting HBV release and was effective even at relatively lower amounts (i.e, 62 ng/well) ( Figure 5A, 5B). Recent studies have demonstrated that the stable expression of NTCP (Na + taurocholate cotransporting polypeptide) in HepG2 cells allows for HBV entry, viral protein synthesis, and subsequent production of progeny virions [23][24][25]. We thus generated HepG2 cells harboring a tetracyclineinducible NTCP gene, referred to hereafter as HepG2-Tet-NTCP. 
We also produced HepG2-Tet-NTCP cells that stably expressed tetherin at a lower level ( Figure 5C). We confirmed that these cell lines expressed an equivalent level of NTCP in the presence of doxycycline ( Figure 5D). These cells were infected with HBV for 10 days and the virus levels in the culture supernatants were then measured. Interestingly, HBV particle release from HepG2-Tet-NTCP cells was found to be comparable to that from those additionally expressing WT tetherin ( Figure 5E), suggesting that WT tetherin failed to inhibit HBV release due to a HBs-mediated counteracting mechanism. In contrast, HepG2-Tet-NTCP cells expressing TFRTM tetherin demonstrated significantly reduced levels of progeny virions in the culture supernatant ( Figure 5E). These results suggested that even at lower levels, TFRTM tetherin can effectively restrict HBV release. TFRTM tetherin inhibits virus-induced cytotoxicity in iPSC-derived hepatocytes The aforementioned results demonstrated that TFRTM tetherin has potent anti-HBV activity. Hence, a therapeutic strategy to utilize the TFRTM tetherin . At four days after transfection, the amounts of HBsAg and vDNA in the culture supernatants were measured. Note that 125 ng of WT tetherin can inhibit HBV release (see Figure 5B). E. Predicted model for the HBs-mediated counteraction of tetherin. www.impactjournals.com/oncotarget would have potential benefits. The recent development of induced pluripotent stem cell (iPSC) technology enabled us to utilize iPSC-derived hepatocytes in liver diseases including virus-induced hepatitis. We thus cotransfected human iPSC-derived hepatocytes with either WT or TFRTM tetherin and a HBV molecular clone to assess the inhibitory effect of TFRTM tetherin on HBV release ( Figure 6A). Compared with WT tetherin, TFRTM tetherin strongly inhibited the release of HBV ( Figure 6B). Although the aberrant expression of HBV caused prominent cytotoxicity in iPSC-derived hepatocytes, this was completely reverted by TFRTM tetherin ( Figure 6C, 6D), suggesting its cytoprotective activity against HBV. Several previous reports have demonstrated that the transduction of HBV DNA into primary hepatocytes induced cell death, possibly by apoptosis [26,27]. However, we could not detect cleaved caspase-3, an apoptotic marker, in HBV-transduced iPSC-derived hepatocytes (data not shown) but instead detected the inflammasome-mediated pyroptosis markers cleaved caspase-1 and interleukin (IL)-1β [28] in cell lysate and culture supernatant, respectively ( Figure 6E, 6F). Caspase-1 activation induces cell swelling and the release of cytosolic contents, including lactose dehydrogenase (LDH) [29]. Indeed, we detected LDH secretion ( Figure 6G) and the formation of membrane rupture in HBVtransduced hepatocytes ( Figure 6H). Notably, TFRTM tetherin decreased the levels of these pyroptosis markers ( Figure 6E, 6F, 6G). Taken together, our current data provide the proof of concept that the use of TFRTM tetherin is a viable strategy for cell-mediated anti-HBV therapy in the context of inhibiting viral release and virusinduced cytopathic effects. DISCUSSION In our current study, we decipher the molecular link between HBV and tetherin during the course of HBV infection as part of an IFN-induced antiviral response. We find that tetherin can block HBV release, but the HBV surface protein, HBs, can antagonize the antiviral activity of tetherin. 
This scenario strongly supports the "offensive and defensive battle" theory between host and virus [30], in which both players can be reciprocally suppressed in Figure 6: Transduction of TFRTM tetherin inhibits virus-induced cytotoxicity in iPSC-derived hepatocytes. A, B. Hepatocyte-like cells were differentiated from human induced pluripotent stem cells (iPSCs) according to a previously described method [42] (A). iPSC-derived hepatocytes were cotransfected with pUC19-C_JPNAT and vectors encoding WT or TFRTM tetherin. Four days after transfection, the amounts of HBsAg and vDNA in the culture supernatants were measured using ELISA and real-time PCR, respectively (B). *P < 0.05; **P < 0.01. C, D. Microscopic images (C) and cell viability (D) of indicated hepatocytes at 7 days after transfection. Scale bar, 200 μm. *P < 0.05; **P < 0.01. E-G. Detection of cleaved caspase-1 in cell lysates (E) and secreted IL-1β (F) and LDH (G) in the culture supernatants of indicated hepatocytes at 4 days after transfection. *P < 0.05; **P < 0.01. H. Plasma membrane raptures (black arrow) were mainly observed in HBV-transduced hepatocytes (at 3 days after transfection). Scale bar, 50 μm. accordance with the quantity and/or capability ratio of tetherin versus HBs. When HBs is more predominant than tetherin (e.g. during the acute infection phase), HBs can inactivate the antiviral function of tetherin via direct interaction through the TM4 domain. Conversely, if tetherin predominates, it can restrict the release of HBV. Indeed, many viral proteins can inactivate tetherin in multiple ways [6]. For example, Ebola virus Glycoprotein (GP) counteracts tetherin without inducing its degradation. Although the mechanism of this is not known, GP needs to bind tetherin directly to antagonize its function [9]. We found in our current study that HBs can inhibit the dimerization of tetherin, which is an essential process for anti-HBV activity. This is the first evidence of a mechanistic link between HBs and tetherin. Additionally, a recent study has indicated that in plasmacytoid dendritic cells, HBs abrogates the TLR7/9-induced innate immune reaction towards IFNα gene transcription [31], suggesting that as an upstream effector, HBs may downregulate the expression of ISGs itself in vivo. The role of intracellular restriction factors in HBV infection as a part of the type I IFN pathway has not been well studied. Several ISGs, including APOBEC3G and IDO, have been shown to act as putative anti-HBV factors in in vitro culture models [32,33]. However, for unknown reasons, the response rate of IFNα-treated hepatitis B patients was found previously to be very poor regardless of the induction of ISGs [34]. This discrepancy between in vitro and in vivo effects may be partly due to the stoichiometric balance between ISGs and its viral countermeasure (such as tetherin and HBs). Indeed, our current study findings demonstrated the effect of IFN-induced tetherin on viral release to be relatively weak, but that an overabundance of HBs-resistant chimeric tetherin strongly inhibited HBV release in comparison with WT tetherin. Hence, the development of new methods for transducing HBs-resistant tetherin into hepatocytes could prove to be a unique strategy to efficiently suppress HBV replication. In this regard, our current study has demonstrated that the transduction of HBs-resistant tetherin into iPSC-derived hepatocytes confers potent anti-HBV activity and protects the cells from HBV-induced cytotoxicity. 
This therapeutic strategy would be more practicable by utilizing recently developed genome editing and stem cell technologies. In our current analyses, we unexpectedly found that HBs-resistant tetherin strongly inhibited HBV-triggered cell death, accompanied by caspase-1 activation and IL-1β secretion. These are markers of "pyroptosis"-a proinflammatory cell death process that eliminates the infected cell [35]. During viral infection, viral DNA/RNA recognized by pattern recognition receptors (PRRs) in host cells can promote the formation of inflammasomes, resulting in the activation of caspase-1. This induces pyroptosis together with the secretion of proinflammatory cytokines IL-1β and IL-18 [35]. Several previous reports have demonstrated that AIM2, a cytosolic PRR, enhances immunopathology during HBV infection [36,37]. Tetherin may block the AIM2-dependent signaling pathway via unknown mechanisms. Alternatively, in the absence of tetherin, vast quantities of virions (including Dane particles, HBsAg or HBcAg subviral particles) released from HBV-producing cells may trigger pyroptosis. Indeed, HBcAg has been shown to induce the secretion of bioactive IL-18, another marker of pyroptosis, an event which is blocked by caspase-1 inhibitor [38]. Although the mechanisms underlying how PRRs sense HBcAg are not yet known, HBV-mediated pyroptosis can be activated by HBcAg invasion. Nonetheless, blockage of HBVassociated hepatic injury may be a therapeutic strategy for the treatment of chronic hepatitis B. Further studies should identify the involvement of inflammasome-mediated hepatocyte death in HBV infection. In conclusion, tetherin plays an important role in intracellular antiviral immunity during the course of HBV infection. Strategies to augment the antiviral activity of tetherin by impeding tetherin-HBs interactions may be a viable therapeutic intervention against HBV. Moreover, a better understanding of how HBV evades IFN-induced host immune response during infection may help to develop more effective vaccines. In the siRNA and IFNα experiments (Figure 1), one day prior to the transfection of the HBV molecular clone, cells were transduced with 60 pmol tetherin-specific siRNA (HSS101113 and HSS101114, Life Technologies) or control siRNA using Lipofectamine RNAiMAX (Life Technologies). Cells were then pre-treated with 1000 U/ ml IFNα (Sigma-Aldrich) for 3 h before transfection. At 24 h after transfection of pUC19-C_JPNAT (2.5 μg), cells were washed twice and then additionally cultured for three days with or without IFNα, followed by quantification of HBsAg and vDNA, as described above. HBV preparation and infection HBV stocks were derived from the supernatants of HepG2.2.15 cells, which were stably transfected with a complete HBV genome (genotype D). The collected supernatants were filtered through a 0.45-µm filter (Merck Millipore), and concentrated using PEG virus precipitation kit (BioVision, Milpitas, CA). HepG2-Tet-NTCP or its derivative cells in 24-well plates were infected with HBV (5000 GEq/cell) with or without 5 μg/ml Dox. The culture supernatants were then harvested and subjected to quantification of HBsAg and vDNA, as described above. Immunoprecipitation, in vitro pull-down and immunoblotting Immunoprecipitation were performed as previously described [43,44]. In vitro streptavidin pull-down analysis with recombinant proteins were also performed as previously described [43]. 
Briefly, biotinylated tetherin was incubated with SHBs-FLAG at 26°C for 2 h before being co-incubated with streptavidin-Sepharose beads (GE Healthcare, Little Chalfont, UK) at 4°C for 3 h. Bound proteins were analyzed by immunoblotting as follows. Samples in SDS loading buffer (with or without 2-mercaptoethanol) were loaded onto 10% or 15% gels and blotted onto PVDF membranes (Merck Millipore). Membranes were probed with primary antibodies and horseradish peroxidase-conjugated secondary antibodies (GE Healthcare). The antibodies used in this study were as follows: anti-tetherin (a gift from Chugai Pharmaceuticals), anti-α-tubulin (Sigma-Aldrich), anti-HA (Roche), anti-FLAG (Sigma-Aldrich), anti-Myc (Cell Signaling Technology, Danvers, MA), anti-p24 (NIH AIDS Reagent Program), anti-NTCP (Sigma-Aldrich), anti-vinculin (Sigma-Aldrich) and anti-caspase1 (Abcam, Cambridge, MA). The proteins detected were visualized on a FluorChem digital imaging system (Alpha Innotech, San Leanardo, CA) and the band intensities were quantified with NIH ImageJ software. HIV-1 VLP release assays HepG2 cells in 12-well plates were transfected with vectors encoding HIV-1 Gag-Pol (200 ng), tetherin (100 ng) and either HBs (200 or 1000 ng) or Vpu (100 or 500 ng). At four days after transfection, culture supernatants were harvested and clarified, and p24 antigens were measured with an HIV-1 p24 ELISA kit (Zepto Metrix, Buffalo, NY). For immunoblotting analysis, the virus-containing supernatants was layered onto 20% sucrose in PBS and centrifuged at 20,000 g for 2 h. Cell and virion lysates were then subjected to immunoblotting analysis as described above. Measurements of secreted LDH and IL-1β, and cell viability Human iPSC-derived hepatocytes in 12-well plates were transfected with of pUC19-C_JPNAT (1 μg) together with vectors expressing tetherin (125 or 250 ng) and GFP (200 ng, as a transfection control) using Lipofectamine 3000 (Life Technologies). At 2-5 days after transfection, the culture supernatants were collected and secreted LDH and IL-1β were assayed using an LDH cytotoxicity detection kit (Roche) and IL-1β ELISA kit (R&D systems, Minneapolis, MN), respectively. At day 4-7 after transfection, cell viability was determined by using Cell Titer-Glo (Promega, Madison, WI). Immunofluorescence One day before transfection, cells were seeded onto collagen-coated glass cover slip. At 48 h post-transfection, the cells were fixed with 4% paraformaldehyde and stained as described previously [41]. Alexa Fluor-conjugated secondary antibodies (Life Technologies) were used to detect signals. Microscopic imaging was performed with an FV1000-D confocal laser scanning microscope (Olympus, Tokyo, Japan). Statistical analysis All graphs present the means and SDs. The statistical significance of differences between two groups was tested using a two-tailed unpaired t test with Prism 6 software (GraphPad, La Jolla, CA). A P value of <0.05 was considered statistically significant.
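The group comparisons above were carried out in GraphPad Prism, but the same two-tailed unpaired t test can be reproduced with standard scientific-computing tools. The sketch below is illustrative only: the group names and measurement values are invented for demonstration and are not data from the study.

```python
# Illustrative two-tailed unpaired t test (the study used GraphPad Prism;
# the values below are hypothetical and are not taken from the paper).
import numpy as np
from scipy import stats

# Hypothetical HBsAg ELISA readings (arbitrary units) for two groups.
wt_tetherin    = np.array([1.02, 0.95, 1.10, 0.98])
tfrtm_tetherin = np.array([0.41, 0.55, 0.47, 0.50])

t_stat, p_value = stats.ttest_ind(wt_tetherin, tfrtm_tetherin)  # unpaired, two-tailed
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# By the criterion stated in the text, p < 0.05 would be reported as significant.
```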
The characterisation of rainfall in the arid and semi-arid regions of Ethiopia In order to plan effective agricultural and water resource projects, it is necessary to understand the spatial and temporal variability of rainfall. Although it is one of the most drought-hit countries in the world, almost no study has ever been conducted in characterising the rainfall pattern of the arid and semi-arid regions of Ethiopia. In this study, rainfall data of the past 50 years was used to study the basic statistical characteristics of the rainfall of this region. Annual and monthly rainfall was fitted to the theoretical probability distributions and the best distributions describing the data at respective stations were determined. Probability of wet days and dry periods of different durations was determined. It has been found that both annual and monthly rainfall at different stations was described by different probability distributions. There is high variation of rainfall pattern among the stations. Heavier rainfall events are infrequent but they make up a significant percentage of the total rainfall. In arid and semi-arid regions where both the amount and frequency of rainfall occurrence is low, it is essential to take into account the unique rainfall characteristics in such regions. Introduction Rainfall is the most important environmental factor limiting agricultural activities in the arid and semi-arid regions of the tropics. Although irrigation is believed to be an important strategy in alleviating the current food crisis, rain-fed agriculture is still the dominant practice in most developing countries. Soil moisture management in semi-arid and arid areas of the tropics is faced with limited and unreliable rainfall and high variability in rainfall pattern (Kipkorir, 2002). It is very hard for hydrologists to measure, collect and store hydrological data such as rainfall and runoff. In most cases, the available data are limited and may also contain some gaps in the series. The gaps in the data can be filled or the series extended to a longer period using mathematical equations. It is generally assumed that a hydrological variable has a certain distribution type. Some of the most common and important probability distributions used in hydrology are the normal, lognormal, gamma, Weibul and Gumbel (Aksoy, 1999). The normal distribution generally fits to the annual rainfall and flows of rivers. The lognormal distribution is also used for the same purpose. In hydrology, the gamma distribution has the advantage of having only positive values. The Weibul and Gumbel distributions are used for extreme values of hydrological variables. Generally only few studies of rainfall characteristics of arid and semi-arid regions of the tropics have ever been conducted. A review of research on tropical rainfall reveals that most detailed studies have been concerned with the more humid areas, a reflection of the distribution of both population and rainfall stations (Jackson, 1977;Oguntoyinbo and Akintola, 1983;Rowntree, 1988). The few published studies available from semi-arid areas tend to be from outside of the tropics (Sharon and Kutel, 1986) and the results are not necessarily representative of tropical areas. The Ethiopian arid and semi-arid region is no exception with almost no study to characterise the climatic pattern of this area. A recent study by Segele et al. (2005) tried to analyse the onset of Kiremt (rainy season), the rainy season of Ethiopia over the highlands of Ethiopia. 
This study tries to characterise the daily, monthly and annual rainfall distributions of the arid and semi-arid region of Ethiopia. The resulting information is essential for several research programmes, rehabilitation projects, irrigation scheduling, and hydrological studies in the area.

Data and method of data analysis

The study area and data

The study area encompasses the arid and semi-arid region of Ethiopia found in the southern, south-eastern, eastern, and north-eastern parts of the country (Fig. 1). The selection of the stations was restricted to eight stations due to unavailability of stations with complete data. Daily data of rainfall, temperature, humidity, wind speed, and sunshine hours were obtained from the National Meteorological Services Agency (NAMSA) of Ethiopia. As presented in Table 1, the length of data record for all the stations was greater than the 30 years of climatic data needed to do accurate climatic analyses in the tropics (Stewart, 1988; Aldabadh et al., 1982).

Method of data analysis

Monthly reference evapotranspiration was calculated using the FAO Penman-Monteith equation (Allen et al., 1998) given as:

ETo = [0.408 Δ (Rn − G) + γ (900 / (T + 273)) u2 (es − ea)] / [Δ + γ (1 + 0.34 u2)]    (1)

where:
ETo is the reference evapotranspiration (mm·d⁻¹)
Rn is the net radiation at the crop surface (MJ·m⁻²·d⁻¹)
G is the soil heat flux density (MJ·m⁻²·d⁻¹)
T is the air temperature (°C)
u2 is the wind speed at 2 m height (m·s⁻¹)
es is the saturation vapour pressure (kPa)
ea is the actual vapour pressure (kPa)
(es − ea) is the saturation vapour pressure deficit (kPa)
Δ is the slope of the vapour pressure curve (kPa·°C⁻¹)
γ is the psychrometric constant (kPa·°C⁻¹)

The agro-climatic zonation of the meteorological stations was determined using the UNESCO aridity index (AI) given in Rodier (1985) as:

AI = P / ETo    (2)

where:
P is the mean annual rainfall
ETo is the mean annual reference evapotranspiration.

According to this classification, P/ETo < 0.03 is a hyper-arid zone, 0.03 < P/ETo < 0.20 is an arid zone, and 0.20 < P/ETo < 0.50 is a semi-arid zone. Mean annual rainfall P was calculated from the rainfall data for each station. Monthly average data of temperature, humidity, wind speed and sunshine hours were used in Eq. (1).

Probability distributions of annual and monthly rainfall

For predictive purposes, it is often desirable to understand the shape of the underlying distribution of the population. To determine this underlying distribution, it is common to fit the observed distribution to a theoretical distribution by comparing the frequencies observed in the data to the expected frequencies of the theoretical distribution, since certain types of variables follow specific distributions. Two kinds of tests were used to identify which theoretical probability distribution function best fits the rainfall data: the chi-square goodness-of-fit test and the Kolmogorov-Smirnov test. The chi-square goodness-of-fit test compares the observed frequencies with the expected frequencies from the hypothesised distribution. To apply the chi-square goodness-of-fit test, the data are grouped into suitable classes, and the chi-square statistic is then calculated as the sum over the classes of the squared differences between the observed and corresponding expected frequencies in each class, each divided by the expected frequency. This test can be applied to discrete as well as continuous distributions; however, since it is a large-sample test, a fairly large sample is required to generate a reasonable frequency distribution.
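The chi-square procedure described above can be sketched in a few lines of Python. This is an illustrative example only, not code from the study: the rainfall values are hypothetical, the gamma distribution is chosen arbitrarily as the candidate to be tested, and the class boundaries are simple empirical quantiles; a real application needs a far longer record than shown here.

```python
# Illustrative chi-square goodness-of-fit test for annual rainfall.
# Hypothetical data; the candidate distribution (gamma) is arbitrary.
import numpy as np
from scipy import stats

annual_rainfall = np.array([612.0, 548.3, 701.2, 455.9, 830.4, 590.1, 664.7,
                            503.8, 721.5, 577.6, 498.2, 655.0, 710.9, 534.4,
                            602.3, 688.1, 559.7, 623.5, 470.8, 745.2])  # mm, hypothetical

# Fit the candidate distribution and group the data into classes.
params = stats.gamma.fit(annual_rainfall)                    # shape, loc, scale
edges = np.quantile(annual_rainfall, np.linspace(0, 1, 6))   # five classes
observed, _ = np.histogram(annual_rainfall, bins=edges)

# Expected class frequencies under the fitted distribution.
expected = np.diff(stats.gamma.cdf(edges, *params)) * annual_rainfall.size
expected *= observed.sum() / expected.sum()                  # make totals match exactly

# Chi-square statistic; ddof accounts for the three estimated parameters.
chi2, p_value = stats.chisquare(observed, expected, ddof=3)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
```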
The Kolmogorov-Smirnov test compares the observed distribution function to the hypothesised distribution function. The test statistic is based on the maximum absolute difference between these two distribution functions. In this study, five commonly used probability distributions were fitted to the annual and monthly rainfall data of eight stations in arid and semi-arid parts of Ethiopia. The five distributions are briefly described in the following section.

Normal distribution

The most important distribution of a continuous variable is the normal distribution, also called the Gaussian distribution, commonly applied for symmetrically distributed data. The probability density function n(x; μ, σ) of a random continuous variable x reads:

n(x; μ, σ) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²)),  −∞ < x < ∞

where μ is the mean and σ is the standard deviation of x.

Lognormal distribution

Large numbers of hydrological continuous random variables tend to be asymmetrically distributed. It is computationally advantageous to transform the distribution to a normal distribution. In many cases the transformation can be achieved reasonably well by considering the logs of the events. In case the natural logarithms of a variable x are normally distributed, the variable x is said to follow the logarithmic normal (lognormal) probability distribution. The probability density function of such a variable, with y = ln x, is:

f(x) = (1 / (x σy √(2π))) exp(−(ln x − μy)² / (2σy²)),  x > 0

where:
μy is the mean of ln x
σy is the standard deviation of ln x.

Gamma distribution

A random variable x is said to have a gamma distribution if the probability density function is given by:

f(x) = x^(β−1) exp(−x/α) / (α^β Γ(β)),  x > 0

where:
α is the scale parameter
β is the shape parameter of the distribution.

The normalising factor Γ(β) is defined such that the total area under the density function is unity:

Γ(β) = ∫₀^∞ t^(β−1) e^(−t) dt

Weibul distribution

The probability density function of the Weibul distribution is given by:

f(x) = (β/α) (x/α)^(β−1) exp[−(x/α)^β],  x ≥ 0

where:
α is the scale parameter
β is the shape parameter.

Gumbel distribution

The probability density function of the Gumbel distribution is given by:

f(x) = (1/α) exp[−(x − β)/α − exp(−(x − β)/α)],  −∞ < x < ∞

where:
α is the scale parameter
β is the location parameter of the distribution.

In order to determine the probability of a wet day, Pwet, the number of wet days (ni) was counted out of the total number of days (Ns) for the station as:

Pwet = ni / Ns

Since the climatic records for the stations are all longer than the minimum recommended length of data record (i.e., 30 years), the observed proportion of wet days is assumed to be a reasonably close sample estimate of the population probability of wet days. A day was considered to be wet when there was more than 1 mm of rainfall and dry when rainfall was 1 mm or less. The probability of a wet day vs. time was plotted to identify the time when the station is likely to be dry or wet. The probability of a dry spell of a given duration was determined on a monthly basis. To obtain this probability, the number of dry periods of a given duration was counted and divided by the total number of periods of that duration in the data series for a given month. For example, to determine the probability of a dry spell of 2 d, the number of two consecutive dry days was counted and divided by the total number of two-consecutive-day periods in the recorded historical rainfall data for a given month. Similarly, to determine the probability of three consecutive dry days, the number of three consecutive dry days was counted and divided by the total number of three-consecutive-day periods in the recorded historical rainfall data. The distribution of daily rainfall totals by amount and frequency was obtained using a frequency analysis of historic daily rainfall data.
This was achieved by counting the number of times a daily rainfall of specified amount occurred during the recorded period for the station. Agro-climatic zonation The aridity index (AI value) calculated using Eq. (2) is presented in Table 2. Included in the table is also the corresponding agroclimatic classification of the stations based on the UNESCO classification criteria. Assaita and Gode are relatively arid as a result of low rainfall and high evapotranspiration in these areas. Some stations such as Dire Dawa, despite high evapotranspiration due to the relatively high rainfall, are classified as semi-arid. Probability distributions of annual and monthly rainfall Annual rainfall recorded at eight rain-gauge stations in arid and semi-arid regions of Ethiopia was fitted to five probability distribution functions. The respective parameters of the distribution functions were determined and presented in Table 3. The values of the two goodness-of-fit tests chi-square (χ 2 ) and Kolmogorov-Smirnov (KS) are also presented in the table. The annual rainfall data of three of the stations (Gode, Assaita, Zeway) were not sufficient to calculate the chi-square goodness-of-fit test as this method requires a large size of data to be properly applied. Based on the value of the chi-square goodness-of-fit value, the annual rainfall of the five stations is best described by the respective theoretical probability distributions indicated in parenthesis as follows: Negele Borena (Gumbel), Dire Dawa (Weibul), Mekele (normal), Jijiga (Gumbel), and Assebe Teferi (Weibul) ( Table 3). Theoretical probability distributions superimposed on respective frequency histograms of annual rainfall are also presented in Fig. 2 for these stations. Goodness-of-fit for Gode, Assaita, and Zeway was evaluated based on Kolmogorov-Smirnov test static value for which Gumbel, Gumbel and lognormal distributions respectively best describe the annual rainfall at these stations. Out of the eight stations considered, the number of stations with annual rainfall following the given distribution is normal (1), lognormal (1), Weibul (2), and Gumbel (4). Monthly rainfall data were fitted to the theoretical distributions and the parameters of the respective distribution and the goodness-of-fit values are given in Table 4. Rainfall data of the relatively dry months could not be analysed due to limited non-zero data to apply the respective frequency distributions. A monthly rainfall frequency histogram superimposed to the fitted theoretical probability distributions for the wet months of Negele Borena, Dire Dawa, Mekele, and Jijiga is presented in Figs. 3, 4, 5, and 6. Out of the 12 months for which probability distributions were plotted, the number of stations with respective probability distribution is as follows: normal (2), lognormal (5), gamma (3), Weibul (1), and Gumbel (1). While no single distribution provides a good fit to monthly or annual rainfall data, it can be seen that most of the annual rainfall and monthly rainfall data fit the Gumbel and lognormal distribution respectively. The gamma distribution was also found to be the probability distribution of monthly rainfall in arid regions (Sen and Eljadid, 1999). Studies in various parts of the world indicate that there is a general consensus that while annual rainfall in wet areas and wet months' rainfall can be fitted to normal distribution, rainfall in arid and semi-arid areas is skewed. 
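Before turning to the comparison with studies from other regions, the selection procedure reported above can be sketched in code. The example below is illustrative only: the rainfall and daily values are hypothetical, and SciPy's maximum-likelihood fitting and Kolmogorov-Smirnov ranking are used as a stand-in for whatever estimation details the original study applied; the study's own results are those reported in Tables 3 and 4.

```python
# Illustrative sketch: fit the five candidate distributions to an annual
# rainfall series, rank them by the Kolmogorov-Smirnov statistic, and
# compute the wet-day probability from a daily record. Data are hypothetical.
import numpy as np
from scipy import stats

annual_rainfall = np.array([612.0, 548.3, 701.2, 455.9, 830.4, 590.1, 664.7,
                            503.8, 721.5, 577.6, 498.2, 655.0, 710.9, 534.4])  # mm

candidates = {
    "normal":    stats.norm,
    "lognormal": stats.lognorm,
    "gamma":     stats.gamma,
    "Weibul":    stats.weibull_min,
    "Gumbel":    stats.gumbel_r,
}

fits = {}
for name, dist in candidates.items():
    params = dist.fit(annual_rainfall)                          # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(annual_rainfall, dist.cdf, args=params)
    fits[name] = (ks_stat, p_value)

for name, (ks_stat, p_value) in sorted(fits.items(), key=lambda kv: kv[1][0]):
    print(f"{name:10s} KS = {ks_stat:.3f}  p = {p_value:.3f}")
best = min(fits, key=lambda k: fits[k][0])
print("Best-fitting distribution (smallest KS statistic):", best)

# Wet-day probability: a day is wet when rainfall exceeds 1 mm.
daily = np.array([0.0, 3.2, 0.4, 12.5, 0.0, 1.1, 0.0, 7.8, 22.3, 0.2])  # mm, hypothetical
p_wet = np.count_nonzero(daily > 1.0) / daily.size
print(f"P_wet = {p_wet:.2f}")
```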
Manning (1956) assumed that the distribution of annual rainfall in Uganda was statistically normal. Jackson (1977) has stressed that annual rainfall distributions are markedly 'skew' in semi-arid areas and the assumptions of a normal frequency distribution for such areas are inappropriate. Brooks and Carruthers (1953) stated that annual rainfall is slightly skew and that monthly rainfall is positively skew. For annual rainfall series which exhibit slight skewness, Brooks and Carruthers (1953) suggest the use of lognormal transformations. These comments apply equally well to tropical rainfall where annual totals exceed certain amounts. For example, Gregory (1969) suggests that normality is a reasonable assumption where the annual rainfall is more than 750 mm. Kenworthy and Glover (1958) suggest that in Kenya normality can be assumed only for wet season rainfall. Gommes and Houssaiu (1982) state that rainfall distribution is markedly skew in most Tanzanian stations. Mooley and Rao (1971) have shown that annual and monthly rainfall over different parts of India can be described by a gamma distribution.

Exceedance probability and return period of annual rainfall

Annual 20%, 50%, and 80% exceedance rainfall was calculated from the respective rainfall distribution of each station. The 20% exceedance rainfall is expected on average to be exceeded in 1 out of 5 years, 50% in 1 out of 2 years, and 80% in 4 out of 5 years. The annual rainfall which is expected at different exceedance probability levels and corresponding return periods is presented in Table 5. A return period implies the frequency with which one would expect, on average, a given total annual rainfall to occur. It can be calculated as:

T = 1 / P

where:
T is the return period (years)
P is the exceedance probability (i.e. the probability that a given annual rainfall is equalled or exceeded).

This statistical information helps in planning irrigation projects under different scenarios. In some areas such as Gode and Assaita, even if one is optimistic and assumes 20% exceedance annual rainfall, it is not possible to grow crops without supplemental irrigation.

Probability of a wet day

The frequency of rain-days is an important determinant of annual rainfall (Hofmeyr and Gouws, 1964). Knowledge of wet day probability is important in soil and water conservation planning and to predict the incidence of crop diseases. It should, however, be noted that a higher number of rainy days in arid and semi-arid areas does not necessarily imply higher daily rainfall, since in arid areas smaller numbers of rainy days are more frequent. A daily point rainfall frequency analysis was carried out and the result presented in Fig. 7. The plotted points on the graph show the frequency of occurrence of daily rainfall of 1 mm or more for each calendar day in the record period. The probability plot clearly shows the rainfall pattern in a year. It can be seen that over a long-time period, there is a well-defined daily rainfall probability pattern within the season. The shape of the curve varies from station to station. From the figure, the probability of daily rainfall occurrence on a specific day in a year may be inferred. The maximum probability for a wet day is 0.75 (on 9 August) for Mekele, 0.50 (on 10 August) for Dire Dawa, 0.52 (on 14 October) for Negele Borena, 0.48 (on 4 September) for Jijiga and 0.55 (on 30 August) for Zeway. Since the rainfall pattern in Mekele is highly unimodal, 80% of Mekele's annual rainfall occurs from June to September. The rainfall intensity is high during this period and runoff and erosion would be very high unless different soil and water conservation structures are implemented. Mekele area is almost dry for the rest of the season (October to May). The rainfall pattern in Dire Dawa area is bimodal with two rainfall peaks: one occurring in the period from March to April and the other from July to September. The first peak occurs in the last week of March while the second peak occurs in the beginning of August. About 45% of the annual rainfall occurs in the second peak period. The highest probability is 0.38 during the first peak period and 0.50 during the second peak. Negele Borena also exhibits a highly bimodal rainfall pattern. The two peak periods are such that the first peak is from mid-March to mid-May and the second peak is from the beginning of October to mid-November. Unlike other stations, July and August are dry periods. Jijiga area exhibited only slight bimodality in April and May and September, with the annual highest number of rainy days occurring in September. Generally, rainfall is distributed from March to September. At Zeway, the number of rainy days starts increasing in March, peaks in August and decreases thereafter. At a probability level of 0.20 of a day being wet, only a period from June to mid-September can be identified, which is the main growing season.

Figure 7: Probability of a wet day during a year at different stations

Probability of dry periods of different durations

Knowledge of the occurrence of dry periods of different durations is important in agricultural planning (irrigation scheduling) and hydrological studies. The probability of occurrence of continuous dry periods ranging from 2 d to one month duration was determined and is presented in Fig. 8. The probability of occurrence of dry periods of different lengths was different for different stations. At Mekele the probability of occurrence of a dry period of even 2 d is very low in July and August. However, the probability of occurrence of a dry period of one week is about 90% from October to February. Therefore, if not supplemented by irrigation, crop production is very risky during this latter period. In Dire Dawa area the probability of occurrence of dry periods of one week is less than 50% for the months from March to September. During July, August and September, the probability of a dry period of two consecutive days is less than 40%. At Zeway, the probability of a dry period of one week is less than 60% for the months from February to October. The probability of occurrence of a dry period of 3 d in July and August is the same as the probability of a dry period of one week in June. In Negele Borena area, the probability of a dry period of one month is less than 60% throughout the year, while the probability of a dry period of 2 d to one week is more than 60% for the months of June, July and August. In Jijiga area, the probability of a dry period of even 2 d is less than 60% throughout the year. The probability of a dry period of one week is less than 20% in July, August and September and the probability of a dry period of two weeks is less than 20% for the months from April to September.

Cumulative frequency and cumulative depth of daily rainfall

The annual distribution of daily rainfall is summarised in Fig. 9, which shows both the frequency distribution of rainfalls producing various rainfall amounts and estimates of the percentage of annual rainfall falling in daily rainfall within each class. It can be observed that the lightest rainfall events are more frequent. The distribution of daily rainfall depths is highly skewed, a comparatively small proportion of the rain-days supplying a high proportion of the rainfall. In South Africa, Harrison (1983) observed the following: only 13% of all rain-days in the Eastern Orange Free State are responsible for 50% of the rainfall and only 27% contributed 75% of the total rainfall, whereas the lowest 50% of all rain-days produce as little as 7% of the rainfall. Similar observations were made elsewhere in the world, including Argentina (Olascoaga, 1950), Florida (Riehl, 1949), the Philippines (Riehl, 1950), and the Sudan (Hammer, 1968). In this study the following observations were made. In Mekele area about 98% of the daily rainfall events have values of less than 20 mm but account for only 60% of the total rainfall. From Fig. 9 it can be seen that 3% of the daily rainfall events and 53% of the total rainfall amounts equal or exceed the 15 mm required for rain-water harvesting (Roberts, 1985). Although the heavier rainfall events are relatively infrequent, they make up a significant percentage of the total rainfall. At Dire Dawa, about 98% of the storms produce less than 20 mm but account for only 53% of the total rainfall. Only 1% of the storms produce 40 mm of rainfall or more, yet they account for 18% of the annual rainfall total. In Negele Borena area, about 97% of the storms produce less than 20 mm but account for only 45% of the total rainfall. Five percent of the storms and 67% of the total rainfall amounts equal or exceed the 15 mm. Only 1% of the storms produce 40 mm of rainfall or more, yet they account for 24% of the annual rainfall total. At Jijiga, about 97% of the storms produce less than 20 mm but account for only 55% of the total rainfall. From the figure it can be seen that 4% of the storms and 57% of the total rainfall amounts equal or exceed the 15 mm. One percent of the storms produce 40 mm of rainfall or more, yet they account for 17% of the annual rainfall total.

Figure 9: Cumulative frequency and amount and depth of daily rainfall

Conclusions

The annual and monthly rainfall distribution in most of the arid and semi-arid parts of Ethiopia is skewed and cannot be described by the normal distribution. Other distributions such as Gumbel and lognormal fit the data better. Although heavier rainfall events are infrequent, they make up a significant percentage of the total rainfall. The maximum probability of a wet day is 0.75 (on 9 August) for Mekele, 0.50 (on 10 August) for Dire Dawa, 0.52 (on 14 October) for Negele Borena, 0.48 (on 4 September) for Jijiga and 0.55 (on 30 August) for Zeway. The probability plot of the number of wet days follows a similar pattern to the rainfall amount distribution in a year. It is becoming increasingly important to understand the nature of the variability of rainfall so as to be able to optimally utilise the low rainfall areas.
v3-fos-license
2020-12-01T15:16:25.792Z
2020-12-01T00:00:00.000
227235380
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://mecp.springeropen.com/track/pdf/10.1186/s43045-020-00074-5", "pdf_hash": "8466b411ee0456231629143014390d3ac0bba4ad", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42123", "s2fieldsofstudy": [ "Psychology" ], "sha1": "8466b411ee0456231629143014390d3ac0bba4ad", "year": 2020 }
pes2o/s2orc
Emotional and behavioral problems of 9–18-year-old girls and their relationship to menarche age

Adolescence is associated with rapid changes in behavioral patterns which affect the functioning of the person in adulthood. The purpose of this research was to study the emotional and behavioral problems of 9–18-year-old girls and their relationship to menarche age. This cross-sectional study was done on girls aged 9–18 years old in Shiraz city. A cluster sampling method was used to select about 2000 students in 2015. Then, a questionnaire including demographic characteristics and the Strengths and Difficulties Questionnaire (SDQ) was completed for each of them. The SPSS software was used to analyze the collected data via descriptive statistics and chi-square tests. Among the 2000 tested samples, the highest mean and standard deviation (4.2 ± 2.25) were related to emotional symptoms. Most of the participants (960 individuals, 48%) scored abnormally. The mean and standard deviation of the total score were 15.61 ± 5.89, and the highest value was 33. The highest mean and standard deviation (16.69 ± 5.4) were found in the 17–18-year-old group (289 subjects). There was a significant relationship between the age of menarche and emotional and behavioral problems (p = 0.001). Most individuals (638 subjects, 46%) had abnormal emotional and behavioral problems at the menarche age of 11–12 years. Emotional symptoms were the most common emotional-behavioral problems of adolescents. There was a significant relationship between the menarche age and emotional and behavioral problems. It is necessary to be familiar with the problems of adolescent girls during adolescence and with ways to deal with these problems.

Background

Adolescence is one of the most important periods of life; in fact, this stage is considered to be a kind of transition from childhood to adulthood. This intermediate phase is accompanied by important physical, psychological, and social changes in addition to rapid changes in behavioral patterns that affect the performance of the individual during adulthood [1]. According to the World Health Organization (WHO), people between 10 and 19 years old are considered to be adolescents [2]. Based on the 2006 census, 21.8% of Iran's population is between 10 and 19 years old [3]. In adolescence, the basis of many behaviors affecting the health and lifestyle of individuals is formed [4]. Most attitudes and behaviors formed during this period determine the habits of a healthy lifestyle during adulthood [5]. The results of studies conducted by Friedman and colleagues in 2001, Kimm and colleagues in 2002, and O'Loughlin and colleagues in 2003 have shown that risk factors, both behavioral and biological, associated with non-communicable diseases are formed during childhood and adolescence and remain stable thereafter [6][7][8]. Behaviors and lifestyles of this age have a profound effect on major illnesses in the future, especially in today's world where the pattern of illnesses has changed and illnesses caused by unhealthy lifestyle patterns have moved to the top of the list of the causes of death [4]. Erikson considered adolescence as a period of identity vs. role confusion. Given that identity is the unity that exists in the three biological, social, and psychological systems, when such unity is not achieved, adolescents' relationships and behaviors are disturbed. Holling and colleagues in 2007, in their study on German teenagers, reported that 11.9% of adolescents needed mental health services due to behavioral problems [9].
The most frequent psychiatric disorders in childhood and adolescence are anxiety disorders (up to 31.9%), behavior disorders (16.3-19.1%), substance use disorders (8.3-11.4%), emotional disorders (3.7-14.3%), hyperkinetic disorders (2.2-8.6%), and aggressive anti-social disorders (2.1-7.6%) [10]. Personal problems usually occur when adolescents face the new conditions of puberty and identity crisis. In other words, the inability of the adolescent to adapt to the new conditions leads to the emergence of behavioral problems. The results of several studies have shown that girls express their problems as internalizing behaviors such as isolation, physical symptoms, depression, and anxiety [11]. Also, some studies have shown that the emotional and psychological problems of adolescents increase with age [12]. One of the most critical periods in a woman's life is adolescence, during which menstruation begins. About 70-90% of women undergo various physical and mental changes before or after menstrual bleeding or at its onset, which is called premenstrual tension or molimina [13]. Actually, emotional imbalance and instability are the most prominent features of adolescence. Sensitivity and irritability, which are prominent features of this period, are often due to changes in the endocrine system, the level of hormonal secretions, and the adolescent's previous education and upbringing, which together make up the emotional state of the adolescent [14]. Evaluation of the psychiatric problems of children in the community and finding the sufferers are the first steps in promoting the level of mental health in this age group. Because no extensive study has been conducted in this area in our society, the researchers decided to conduct a study to investigate the emotional and behavioral problems of 9-18-year-old girls and their relationship to menarche age.

Methods

This analytical, epidemiological, and cross-sectional study was performed in 2014-2015. All female students in primary, guidance, and high school were included in the study across all four districts of Shiraz city. By considering previous studies [15] and the advice of statistics experts, the required sample size was 1625 female students, based on the sample size formula and a confidence level of 95%. By considering the probability of sample attrition, 2000 was set as the sample size. The inclusion criteria were girls between 9 and 18 years old who were willing to participate in the study, had completed the written informed consent, and had no history of taking medication (except anti-allergics and painkillers, during the 3 months prior to the study) or of chronic physical and mental illness. The aim of these criteria was to include only completely healthy adolescents. Medication history is important because drugs such as antibiotics may be used to treat chronic diseases that affect the hormonal cycle and the onset of menarche. The exclusion criteria were suffering from any hormonal disease, such as growth hormone, thyroid gland, or adrenal gland disorders; diabetes; skeletal, muscular, and neurological disorders; and chronic diseases such as asthma. Awareness of diseases was based on self-reports by the girls and their parents. Those who had experienced a crisis or stressful event, those who wished to withdraw from the study, and those whose parents requested their withdrawal were excluded. First, the cluster sampling method was used, and 6 to 8 schools were selected randomly through convenience sampling for the selection of 500 students at each educational level.
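The sample size calculation is only referred to, not written out; as an illustration, a standard single-proportion formula of the kind alluded to here is shown below. The expected proportion p and the margin of error d actually used by the authors are not reported, so the expression is given in general form only:

n = z^2 * p * (1 - p) / d^2

where z is the standard normal value for the chosen confidence level (1.96 for 95%), p is the expected proportion of the outcome, and d is the acceptable margin of error.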
In the present study, the researcher asked the departments to complete the demographic and SDQ questionnaires after obtaining permission from the related authorities, examining the inclusion and exclusion criteria of the study, and explaining the study objectives. The scientific validity of the questionnaire was evaluated via content validity. Moreover, it was assured that the information about all subjects would remain confidential. The study instruments had two parts: (1) personal information about menarche (including menarche age and demographic information) and (2) the SDQ questionnaire. After studying the reference textbooks and various sources, the researchers selected Robert Goodman's questionnaire, with a Cronbach's alpha of 0.73. This questionnaire contains 25 questions about the behavioral and emotional problems of children from the viewpoint of parents and teachers, with three categories of response (not true, somewhat true, and certainly true). The minimum and maximum total score ranged from 0 to 40. The questionnaire has five indicators (emotional problems, hyperactivity problems, behavioral problems, problems in communication with peers, and appropriate social behaviors). This questionnaire has been validated by Dr. Tehranidoust in the Iranian children's community [16,17]. The collected data were analyzed through the SPSS software using descriptive statistics and chi-square tests.

Results

Table 1 shows the status of the indicators of emotional and behavioral problems in female students. The highest mean and standard deviation (2.35 ± 4.2) were related to emotional symptoms, and the lowest mean and standard deviation (1.93 ± 3.33) were related to peer problems. Table 2 shows the emotional and behavioral problems of female students. Most of the subjects (960 individuals, 48%) had abnormal scores, and the fewest (337 subjects, 16.9%) scored as intermediate. The mean and standard deviation of the total score were 15.61 ± 5.89. The highest value was 33, and the lowest was zero. Table 3 shows the emotional and behavioral problems in terms of age in female students. The highest mean and standard deviation (16.69 ± 5.4) were in the range of 17-18-year-old girls (289 subjects), and the lowest mean and standard deviation (13.82 ± 5.8) were in the range of 11-12-year-old girls (73 subjects). Table 4 shows the relationship between menarche age and emotional and behavioral problems in female students. The chi-square test between the menarche age and emotional and behavioral problems showed that there was a significant relationship between the two variables at the confidence level of 95%. The test value was equal to 22.17 with a significance level of p = 0.001. Most of the subjects (57%) had abnormal behavioral and emotional problems at the menarche age of 15-16 years old.
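The chi-square computation behind Table 4 can be illustrated with a minimal sketch. This is not the authors' code, and the cell counts below are hypothetical placeholders rather than the study data; they only show how a menarche-age-by-category contingency table would be tested.

```python
# Minimal sketch of the chi-square analysis reported for Table 4.
# The cell counts below are hypothetical placeholders, NOT the study data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: menarche age groups (9-10, 11-12, 13-14, 15-16 years)
# Columns: emotional/behavioral problem categories (normal, borderline, abnormal)
observed = np.array([
    [ 60,  30,  90],
    [250, 150, 640],
    [300, 120, 200],
    [ 93,  37,  30],
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
# A p-value below 0.05 indicates a significant association between
# menarche age group and problem category, as reported in the text.
```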
Ethical considerations

The local Ethics Committee of Shiraz University of Medical Sciences approved the study protocol (grant number 7173). Permissions were also received from the authorities in the schools. Written informed consent was collected from all the participants. The confidentiality of all participants' personal information was assured. Furthermore, they were free to withdraw from the study at any time.

Discussion

Epidemiological studies show that 5-10% of children and adolescents suffer from emotional and behavioral problems, which are among the most common psychiatric disorders for this age group [18]. Emotional and behavioral problems are associated with suffering and disturbances in the daily life of the affected person, his/her family, and the relatives. These problems are also associated with an increased risk of substance abuse, depression, and impaired social and emotional functioning during adolescence and early adulthood [18][19][20][21]. Therefore, emotional and behavioral problems in childhood should be identified and treated as soon as possible. The mean and standard deviation of the total score of the Strengths and Difficulties Questionnaire in the sample were 15.61 ± 5.89. Different values have been reported in other studies [17, 22, 25]. The reason mentioned for the difference in the total score of the strengths and difficulties is the difference in the real prevalence of problems in different countries and probably the different mean ages of the study subjects [22]. Accordingly, in this study group of 9-18-year-olds, the highest mean score was in the 17-18-year-old range, with an average of 16.69. Nasiri et al. carried out a study to determine the prevalence of mental health disorders in primary school children in Boushehr city (2006-2007). A total of 2350 SDQ questionnaires were distributed randomly in urban and rural primary schools. In that study, 946 (49.3%) subjects had an abnormal score, similar to the present study in which 960 (48%) subjects had an abnormal score [22]. A comparison of the indicators obtained from the SDQ questionnaire in the present study showed that the highest mean was related to the dimension of appropriate social behaviors; this is consistent with the studies of Tehranidoust et al. and Arabgol et al. The lowest mean was related to the dimension of problems with peers, while in Tehranidoust's and Arabgol's studies the lowest mean was related to behavioral problems [17,23]. Latif Nezhad et al. conducted a study with the aim of comparing emotional and behavioral problems and depression in two groups of girls before and after menarche in Mashhad city. In this case-control study, 320 healthy high school students aged 11 to 15 years old (140 girls in the pre-menarche period and 140 in the post-menarche period) who did not have emotional and behavioral problems were selected through multistage sampling from 18 high schools in Mashhad. The results showed no significant difference in the behavioral and emotional problems of the girls in the post-menarche period in comparison with those in the pre-menarche period. In the present study, however, the relationship between emotional and behavioral problems and menarche age was statistically significant. Most of the affected girls (638 subjects, 46%) had emotional and behavioral problems at the menarche age of 11-12 years [24]. However, in Tehranidoust et al.'s study, the scores of the SDQ questionnaire were not significantly correlated with age [17]. Moreover, in another study by Sanders on Japanese families living in Australia (2007), 50 families were evaluated in case and control groups. A significant difference was observed in the dimensions of parenthood, parenting, and adolescent behavioral problems at the end of the intervention. However, there was no significant difference in anxiety, stress, and depression [25]. Grant et al.'s (2003) study concluded that stressful events, such as conflict in the family, have a significant role in the development of emotional and behavioral problems in children and adolescents [26]. Garnefski et al.'s study (2005) also investigated children and adolescents aged 12 to 18 years in the Netherlands among the general population.
They found that the scores of people with emotional and behavioral problems were significantly higher than those of the control group and of the group with behavioral problems only in terms of cognitive coping strategies such as self-blame and rumination [27]. In a case-control study in Birjand, it was found that the mean of emotional and behavioral problems and aggression was significantly higher in children of divorce than in children of non-divorced parents [28]. Cognitive-behavioral therapy was strongly supported as an effective treatment for emotional and behavioral problems in children [29]. However, the vast majority of children and adolescents with emotional and behavioral problems do not receive evidence-based psychological treatment [30,31]. Turner et al. showed that adolescents who were in the warm, intimate, adaptive, communicative, and supportive environment of their family could control the negative effects of stress on their health [32]. In adolescence, the role of parents and their ability to communicate positively and constructively with their adolescent is very critical. Studies showed that warm and protective family relationships were predictive of positive outcomes in children and adolescents and are considered protective factors against emotional and behavioral problems in adolescence [33]. Van et al. (2012) investigated the impact of social skills training programs for children aged 7 to 13 years on emotional and behavioral problems. The results showed that social skills training caused positive changes in children's emotional and behavioral problems [34]. The studies of Chen (2006) and Spence (2003) on students at risk of behavioral and emotional disturbances indicated that social skills training, which included modeling, feedback, encouragement of proper performance, and role-plays, led to an increase in their social adequacy [35,36]. Senik showed that training social skills led to increased social interaction and interpersonal relationships, followed by increased indicators of psychological well-being and income-earning and, consequently, increased quality of life [37]. A meta-analysis showed that two thirds of adolescents who were at risk of behavioral and emotional disorders and received social skills training improved compared with the control group [38,39]. Generally, the review of the studies conducted on social skills training shows that, 25 years after the beginning of research in this field, researchers are attempting to teach these skills so that people acquire, maintain, and generalize them in order to overcome or reduce their behavioral and emotional problems [38,40]. It should be noted that mental disorders may activate corticotropin-releasing hormone in the nervous system, followed by an increase in cortisol and prolactin, which leads to menstrual symptoms [41,42]. In addition to the effects of the released hormones on the quality of life, these mental disorders may lead to suicide, addiction, early sexual experience, depression in adulthood, crime, loss of education, low self-esteem and its consequences, eventually leading to occupational, family, and social disorders [43,44]. These mental disorders are, however, rare in the premenstrual period.
Gender differences show that during puberty these disorders increase with a steep slope and are more common among girls, who are more vulnerable to various psychological factors, so that the ratio of girls to boys for these disorders is reported as 1/1/3 [45]. In explaining these results, it can be mentioned that life skills training is a behavior-change-based approach that can create a balance between knowledge, attitudes, and skills and can increase stress-coping skills, self-esteem, and individual control in different situations. Adolescence is a critical stage in the course of which the foundation of adulthood is set for a person [46]. This period is associated with significant physical and mental changes, and the lack of awareness of adolescents in this period may lead to inappropriate performance and adverse outcomes. Training can reduce many of the problems and crises of this period. Therefore, since it is important for adolescents to know how to enter the process of adolescence and how to overcome its ups and downs, families should become acquainted with the timing and trends of menarche and the factors affecting it in order to help their children make the right decisions in a timely manner [47]. Training, and especially the support of the family, can reduce the stresses and menstrual disorders of adolescent girls [48]. Given that the main aim of this project was to study the prevalence of menarche, including early and late menarche, the factors affecting adolescents' psychological problems have not been fully evaluated; the other factors affecting psychological problems are therefore noted as a limitation of the study design. Another factor in the development of emotional and behavioral problems is socioeconomic status; a possible clue could be related to the type of school and area of residence. Further research is suggested to confirm the relationship between environmental factors and menarche onset age, the development of secondary sexual characteristics, the perception of puberty or physical maturity compared to peers, and the rate of puberty in different racial-ethnic groups, so that the existing contradictions are resolved. Besides, interest in environmental factors that influence the onset of puberty has increased significantly over the past three decades. However, despite extensive studies, how environmental factors affect the first menstrual period is largely unclear.

Conclusion

Emotional symptoms were the most common emotional and behavioral problems in adolescents. There was a significant relationship between the menarche age and emotional and behavioral problems. Therefore, the attention of the parents and family plays a significant role in the behavior of adolescents. According to the results of this study, it is suggested that strategies should be developed in the form of educational programs based on the problems and psychological characteristics of the girls in order to prepare them for coping with the conditions of adolescence. It is necessary to develop mental health programs appropriate for the adolescent age, based on their problems and educational conditions. Besides, one of the goals set by the World Health Organization for 2020 is to promote a healthy lifestyle in the community. Accordingly, countries should put on their agenda strategies that are effective in improving individual and social life and in addressing the factors that lead to unhealthy lifestyles (such as poor physical activity, poor nutrition, and substance abuse).
Therefore, in health care systems, it is necessary to pay serious attention to behavioral approaches and risk factors simultaneously with clinical examination. Among these, due to the decrease in the age at which harmful behaviors develop and due to the sensitivity of adolescents and the formation of intellectual, ideological, social, and emotional values, this group should be prioritized.

Abbreviations

SDQ: Strengths and Difficulties Questionnaire; WHO: World Health Organization

Funding

This work was supported by the Shiraz University of Medical Sciences, Shiraz, Iran.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Table 4: The relationship between the age of menarche and emotional and behavioral problems among female students in Shiraz city (menarche age groups 9-10, 11-12, 13-14 and 15-16 years; N (%)).
v3-fos-license
2019-05-29T13:11:56.156Z
2014-01-01T00:00:00.000
13349897
{ "extfieldsofstudy": [ "Business" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://www.hrpub.org/download/20131215/UJIBM3-11601929.pdf", "pdf_hash": "d1e9cf9afcf32b11a56b73831449033a02e4f1d2", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42128", "s2fieldsofstudy": [ "Economics" ], "sha1": "7b12d755d8aebbe740fddd8f24324cb73691ef62", "year": 2014 }
pes2o/s2orc
Lithuanian Pension System's Reforms, Transformations and Forecasts

The aim of this article is to describe the Lithuanian pension system, its reform process and its long-term financial sustainability. We therefore describe the current reforms in the public pension system, influenced by the last economic crisis and by social challenges. We also forecast the financial dynamics of the public pension system, in the light of rising social expenses (due to second pillar pension reforms) and of demographic trends (such as an ageing society and low fertility). The results reveal the long-term sustainability of the system, albeit at the cost of initial negative balances to be covered from the public budget. Policy solutions could improve sustainability by encouraging and extending employment (especially for the disadvantaged) and by building trust in both the public and private pension systems.

Introduction

Before the economic crisis in 2008, Lithuania reformed its pension system in 1995 and 2003. Lithuania's pension system model is based on classical Bismarckian principles (earnings-related benefits and state guarantees), and from 2002 to 2008 the social security fund budget was in surplus. Pension expenditure in Lithuania in 2007 was only 6,8% of GDP, almost twice lower than the EU-27 average (11,8% of GDP in 2007): this is due to a more favourable population structure and to the fact that in the pre-crisis period of rapid economic growth pensions increased at a lower pace than GDP. Without any pension reform, the replacement rate (male worker retiring at 65 after 40 years of career) in the first pillar will decline from 48% to 35% in 2048 (European Commission. The Joint report on pensions, 2010, p. 88). Pension expenditure in Lithuania will grow: the change in age-related expenditure over 2007-2060 will be 4,6% of GDP (in the EU-27 it will be 2,4% of GDP over the period 2007-2060). Despite the negative prognosis showing an increase in pension expenditure in Lithuania, there are some factors which could mitigate the growth of pension system expenses: restriction of eligibility for a public pension (through a higher retirement age, reduced access to early retirement and changes to the disability pension system), higher employment and reduced generosity of pensions (European Commission. The Ageing report, 2009). According to the projections of Eurostat and the Lithuanian Ministry of Social Protection and Labour, the population of Lithuania will decline to 2,5 million from 2009 to 2060, and the elderly population (aged 65 and older) will more than double, from 16% to 32,7%. Lithuania has one of the highest negative rates of crude net migration in the EU-27 (-4,6% in Lithuania against 1,9% in the EU-27) (European Commission. Joint report on pensions, 2010, p. 87). However, Lithuania still has relatively high employment of older workers (55-64 years): the employment rate of older workers in 2009 was higher (51,6%) than the EU-27 average (46% in 2009) (European Commission. The Ageing report, 2009). The pace of pension reform accelerated over the period 2007-2010, and the changes include increases in pensionable ages, the introduction of automatic adjustment mechanisms and the strengthening of work incentives. Some countries have also better focused public pension expenditure on lower income groups. However, some recent reforms have raised controversy, such as the decision of some central and eastern European countries to pull back earlier reforms that had introduced a mandatory funded component (OECD, 2012, p. 15-18).
Today we could underline common challenges to be met by Europe's social security systems: demands for more personal choice and quality improvements in services and benefits; the impacts of globalization (greater flows of people, goods, services and capital across national borders); population ageing; and the economic, fiscal and social fallout of the current economic crisis (International social security association, 2010, p. 93). M. Ferrera emphasized that public protection schemes, a genuine European invention, were introduced to respond to the mounting "social question" linked to industrialization, as the disruption of traditional, localized systems of work-family-community relations and the diffusion of national markets (based on free movement and largely unfettered economic competition within the territorial borders of each country) profoundly altered the pre-industrial structure of risk and need (Ferrera, 2010, p. 45). When searching for better efficiency of the social security system and higher social security coverage, it is important to note that the social security structure depends on the type of social model. Today it is difficult to find a pure social model designed in the classic Bismarck or Beveridge tradition, but the essential elements of a theoretical model still dominate. The strengths of the Continental model (France, Germany) could be: mandatory participation in the social insurance system; the right to social security benefits related to the payment of social insurance contributions; relatively high benefits; indexation related to the economic situation; autonomous management of the system; and social insurance contributions related to the social insurance risks. The weaknesses of this model are: the complexity of the system; the system is not fully universal; and the system does not guarantee a minimum level of benefits. The strengths of the Anglo-Saxon model (United Kingdom, Ireland) are: universality; free medical care; and the system covers all needs of the person. The weaknesses are the following: a relatively low level of benefits; coverage of medical care (financed by taxes) depends on the economic situation; and the biggest role is given to additional voluntary private systems. The strengths of the Nordic model (Scandinavian countries) are: universality (wide coverage); extremely high benefits; an established minimum level of benefits; public social insurance dependent on the contributions paid; large public confidence in the system; and equality between women and men. The weaknesses of this model are the high cost of the system and the high level of social insurance contributions (Kahil-Wolff, B., Greber, P. Y., 2006, p. 47-49). The Eastern European social model (Lithuania) is characterized by Nordic social model features (active labour market policies), Continental model features (the structure of the social security system) and Anglo-Saxon features (development of private initiatives and a labour market liberalization policy). The development of the Eastern European social model is related to the fact that the countries in this region changed their economic orientation from a socialist to a market-oriented system. But we could point out that the economic transformation (increased unemployment, poverty, inequality, bankruptcies of companies and industries, fiscal crisis, creation of new public institutions) and other related factors (the needs of different social groups, recommendations of international institutions, European integration) resulted in limited public financial resources.
G. Esping-Andersen argues that the Eastern European countries have opted for a liberal social security system concept, where the basis of social security schemes has been privatized, social security coverage has been reduced, social assistance is based on the means-testing principle and the labour market is flexible (Esping-Andersen, G., 1996, p. 20). In this perspective, we present the Lithuanian pension system and its reforms. Also, we give a qualitative and quantitative evaluation of its sustainability in the light of current economic and demographic trends. We conclude with an agenda of further reforms. The paper starts with the presentation of the Lithuanian pension system (Sect. 2) and its reforms (Sect. 3). It continues with the forecasting of the Lithuanian public pension system (Sect. 5). Finally, conclusions are drawn (Sect. 6).

Pension System's Reforms in the Light of the International Organisations

The European Commission noted that the purpose of automatic adjustment mechanisms is to maintain the balance between revenues and liabilities in pension schemes, and these mechanisms impact both intergenerational adequacy and sustainability. These mechanisms imply that the financial costs of demographic changes will be shared between generations subject to a rule. To a varying degree they link: i) life expectancy to pension eligibility or replacement rates; ii) economic performance in terms of GDP growth or labour market performance (with valorisation of entitlements or indexation of benefits); iii) the balance of the system to valorisation of entitlements or indexation of benefits and contribution rates with indexation of benefits (European Commission. Joint report on pensions, 2010). The Organisation for Economic Co-operation and Development (OECD, 2012, p. 15-18) stressed that the crisis has accelerated pension reform initiatives, while private pension policy makers have focused their attention on regulatory flexibility and better risk management; that the introduction of automatic adjustment mechanisms in public pension systems will improve their sustainability, but may raise adequacy problems; that the coverage of funded, private pensions is insufficient in some countries to ensure benefit adequacy; that return guarantees are generally unnecessary and counterproductive, but in some countries they may be justified in order to protect pension benefits and raise public confidence and trust in the private pension system; and that a new roadmap for defined contribution pension plans is needed: policies to strengthen retirement income adequacy. Analysis shows that while some of the losses incurred during the crisis may be recovered relatively quickly during the economic recovery, a complete restoration of pension finances may take many years (meaning that people have lost a number of years of savings due to the financial crisis) and might not occur during their remaining active life (because of the vulnerability of pension levels in defined contribution schemes) (International Labour conference, 2011, p. 61). The crisis has wiped out years of economic and social progress and exposed structural weaknesses in Europe's economy; the world is moving fast and long-term challenges intensify: globalization, pressure on resources and ageing (European Commission. Communication "Europe 2020", 2010). Because public pension replacement rates have in general declined in the EU, reforms have given and will continue to give rise to greater individual responsibility for outcomes, and it is important to provide sufficient opportunities for complementary entitlements: e.g. enabling longer working lives and increasing access to supplementary pension schemes (European Commission. Green paper, 2010).
The International Labour Organization (ILO) notes that the repercussions that these developments will have on contributors and pensions are not straightforward, and will most likely affect people who retire after the crisis; however, pension funds in many countries suffered enormous losses during the global crisis in 2008. The OECD emphasized that countries' private pension funds lost 23% of their value in 2008. The degree of vulnerability of future pension levels to the performance of capital markets and other economic fluctuations, introduced in so many pension systems during the last three decades, was clearly a mistake that stands to be corrected. Strong minimum pension guarantees may work here as "automatic stabilizers" of retirees' living standards. A response to the economic crisis is only possible on the basis of existing administrative structures, that is, existing social institutions which either can automatically react to changing economic conditions thanks to their design, or can be easily adjusted (e.g. extended) to crisis-induced requirements (International Labour Office. World's social security report, 2010, p. 106-118). The European Commission, in the White paper "An agenda for adequate, safe and sustainable pensions", indicated that member states should: i) link the retirement age with increases in life expectancy; ii) restrict access to early retirement schemes and other early exit pathways; iii) support longer working lives by providing better access to life-long learning, adapting work places to a more diverse workforce, developing employment opportunities for older workers and supporting active and healthy ageing; iv) equalise the pensionable age between men and women; and v) support the development of complementary retirement savings to enhance retirement incomes (European Commission. Green paper, 2010). When revenue is declining, the simplest way to regulate the social insurance fund budget is to increase state social insurance contributions or to reduce benefits. However, these methods cannot be applied for a quick economic effect, because they indirectly impact the State's competitiveness and employment policy. Reduction of pension benefits may have certain undesirable legal and social implications and raise questions of social solidarity, social security unity, benefit differentiation and the legitimate expectations principle. Thus, the reduction of pensions means that persons are not encouraged to work further and to expect a higher pension, and pensions will decline despite higher social insurance contributions having been paid. The economic crisis and the reduction of pensions undermine the contribution-benefit balance, and it is important to maintain the state social insurance pension guarantees. Reduction of pensions could violate the main principle of the Bismarck social tradition: that benefits depend on paid contributions.

Pension System's Reforms in Lithuania

The last economic recession strongly impacted Lithuanian pension system reforms. From 1 July 2009, amendments to the Lithuanian Pension System Reform Law were adopted: the state social insurance contribution transfers to the private pension funds fell to 2% and social insurance benefits were reduced for two years. On 28 October 2009, a National Agreement was signed between the Government of the Republic of Lithuania and social partners: the largest trade unions, business and employers as well as pensioners' organizations.
Under this Agreement, the Government undertook to implement measures for financial consolidation, including a temporary reduction in all pensions (except the smallest pensions). Therefore, the government reduced pension benefits in 2009 (however, the Lithuanian Constitutional Court decided that the reduced part of pensions must be compensated in the future). Only in 2010 were complex pension system reforms adopted and future policies designed.

Reference Literature and Scientific Research

The tendencies of welfare state development, financing and pension system reforms in Lithuania are analysed by A. Guogis, R. Lazutka, T. Medaiskis and P. Gylys. A. Guogis and D. Bernotas analysed the social models and the development of the welfare state (Guogis, Bernotas, 2006). P. Gylys states that the experience of some East European states showed that contributory funded pension schemes were established without deep analysis of the reform consequences, and the reform results were worse than forecasted (Gylys, 2002). Evaluating the 2003 pension system reform, R. Lazutka points out that the primary objective of the pension reform was state protection for the corresponding businesses, not for private individuals, who failed to understand that participation in those pension schemes could hardly ensure more social safety (Lazutka, 2007). According to A. Guogis, such a solution to pension system problems only moves further away from, rather than towards, a vision of "social Europe" (Guogis, 2004). The development of the Lithuanian pension system after the latest economic crisis of 2008, the problems of pension adequacy and financial stability, and pension system evolution and related modelling constitute a new approach in the scientific literature. The results of the scientific analysis and the forecasts of transformations show the Lithuanian pension system's financial stability perspective. This study and its conclusions show the directions for future pension system reforms in Lithuania.

Purpose of Reforms

Pension system reforms should cover not only the traditional measures (reducing benefits and increasing contributions), but should be undertaken together with a comprehensive reform of the social security system and labour law: to increase employment, to introduce more flexible labour forms and active labour market policies, to review the system of social security benefits (reduce or eliminate some benefits), and to introduce health social insurance contributions on pensions (pensions are taxable in many EU countries, but not in Lithuania). The Organisation for Economic Co-operation and Development, in its 2009 pensions review, noted that in the face of the economic crisis the government adopts short-term practical solutions, while long-term strategic plans, which are important to pensioners' incomes, are ignored (OECD, 2009). The ILO indicated that the short-term responses to a crisis - macroeconomic stabilization, trade policies, financial sector policies and social security - cannot ignore longer-term implications for both economic development and vulnerability to future crises (International Labour Office, 2010, p. 112). The International Social Security Association noted (International social security association, 2013) that the last three years have seen a number of reform measures taken, not least to respond to longer-term trends and changes in the demographic and social environment and to more immediate fiscal pressures heightened by the crisis. The danger is that significant reforms (e.g. raising the retirement age) are being made without the coherent development of a necessary cross-sectoral policy strategy (e.g. employment, return-to-work, and occupational safety and health policies to support employment among older workers), and without a fuller national debate involving all relevant social partners and stakeholders about the likelihood of necessary further reform.
The time for reforms is actually critical: without the prolongation of the retirement age and without incentives for private pension accumulation, the deficit of the state social insurance fund will be higher and the trust of society in the social insurance system could fall.

Pension System Reform in 2003

In 2000, the Government of the Republic of Lithuania adopted the Concept of the Pension System Reform. This Concept indicated the principal goal: to change the pension system in such a way that persons at retirement age could get a higher pension income, the pension system would become more viable and would cover the whole population, and the redistribution effect in the system would be decreased. The Concept stated that a quasi-mandatory funded pension system would be introduced (without increasing the contribution rate for pension insurance). It should be mentioned that the Concept was adopted at a time of economic and social crisis: an existing deficit of the state social security fund, economic recession and a declining demographic situation. The Concept also provides that the first level (pillar) of the pension system of Lithuania should guarantee the state social security pension (retirement, disability, widows and orphans). The second level (pillar) consists of quasi-mandatory funded pensions operated by private pension funds. The third level of the pension system is an additional voluntarily funded pension system (operated by pension funds or life insurance companies). In July 2003 the Parliament adopted the Law on Funded Pensions. This law provides that from 1 January 2004 part of the contributions will be transferred to the private pension funds (if the person decides to participate). The reasons for introducing the funded pension system were the deterioration of the demographic situation, the sustainability of the pension system and the surplus of the state social security budget. The social insurance contribution rate transferred to the funded system was fixed at 2,5% for the first year and increased every year by 1 percentage point up to a maximum of 5,5%. There were no restrictions on participation by age (below the legal retirement age). The supplementary part of the state social insurance old-age pension was reduced in proportion to the size of the transferred contribution rate. The participants of the funded pension system can receive the accumulated benefits at retirement age. The volume of the accumulated sum depends on the annuity period, the transferred contributions, the investment results and the level of administration costs of the pension funds. Every year pension funds must inform participants about the accumulated sum. The Law on Funded Pensions defined that the lack of finances in the social security budget (because of the contributions transferred to the private funds) should be financed from state property privatization and from the state budget. Each year the Law on the Approval of Indicators of the Budget of the State Social Insurance Fund provides the compensation level for the state social insurance fund.
Participation in the funded pension system was active; however, this may be related to the Government's incentives, with the positive points of accumulation being explained in the mass media. A relatively high share of the older population (from 45 years of age) accumulates in this funded pension system (about 28% of the total population in 2010). About 85% of the social insurance system's participants had decided to accumulate for the funded pension in 2010. The economic crisis strongly influenced the funded pension system. The state social pension insurance contributions which are transferred to the pension funds were reduced from 5,5% to 2% in 2009-2011. The introduction of the funded pension system in 2003 means that the Lithuanian pension system turned towards the Anglo-Saxon model: the state social security system became partly dependent on the state budget, and participants of the funded pension system have fewer state guarantees from the first pension pillar.

Pension System Reforms after the Economic Crisis

The economic crisis and the analysis indicated in the Concept of the Reform of State Social Insurance and Pension Scheme of 15 June 2010 showed that there are several problems in pension insurance: the current benefit scheme enables the duplication of benefits; the redistributed part of social insurance pensions (the basic pension) has great significance for the pension level, while the impact of the contributions paid by a person is reflected insufficiently, which makes the scheme unattractive; benefits are not linked to life expectancy; there are no incentives to continue a longer working career; the identification of work incapacity and special needs is insufficiently transparent and controlled; the state social insurance scheme is financially vulnerable and a pension reserve fund has not been established; the indexation of pension benefits is not linked to economic and demographic indicators and is under a strong political impact; and there is no long-term strategy for pension accumulation. On 15 June 2010, the Concept of the Reform of State Social Insurance and Pension Scheme was approved. The goal of the reform is to establish financial sustainability, to guarantee adequate and target-oriented benefits and to administer the pension system more efficiently. In this Concept some proposals were fixed: to increase the pensionable age for women and for men until it reaches 65 years of age for both genders in 2027; to cancel new state pensions (not related to the insurance record); to introduce better management means for private pension funds; to pay non-contributory social insurance pensions from the state budget; to apply a new, clearer formula for pensions; to introduce economic indexation of pensions; to change the formula of the social insurance old-age pension calculation, introducing an accounting units ("points") system or a notional pension system; and to integrate state pensions into the general scheme of social insurance. The Lithuanian Parliament reached a wide political agreement and on 24 May 2011 adopted the Guidelines of pensions and social security reform. The Government adopted the Measures Plan for the implementation of the Parliament Guidelines (adopted by the Government on June 8, 2011) and a timetable for the preparation of the draft laws. The reform will proceed in two stages. The transitional period will start in 2012 and will last until 2026. The second stage will start from 2027.
The main aim of the reform, as indicated in the Guidelines, is to ensure that persons receive adequate pensions, to stabilise the state social insurance fund budget and to adjust the pension level to economic and demographic changes. Several principles are indicated in the Guidelines:

1. More transparency in the pension system: pension system participants should receive all information about their pension rights, should know about the system's benefits and should be constantly notified of the rights they have obtained to the state social security pension.

2. Separation of social insurance and social assistance: better correlation between contributions and benefits; making the labour market more flexible; gradually increasing the retirement age; relating the pension level to the demographic and economic situation; and government encouragement of the employment of elderly persons.

3. Establishing clear indexation rules and a clear relationship between the social insurance fund and the state budget. The indexation of pension benefits should be linked to economic and demographic, not political, indicators. Other changes relate to the new pension formula: transferring the basic flat-rate pension to the state budget and introducing an NDC (virtual accounts) system or an accounting units ("points") system.

4. Cancelling privileged benefits in the future, integrating all state privileged pensions into the social insurance system and creating professional pension funds.

5. Better regulation and more efficiency in second pillar private funded pension schemes. The accumulation in the second pillar should gradually be restored and voluntary pension accumulation should be encouraged. Measures for the better management of the pension funds should be introduced: introduction of a life-cycle investment system, analysis of the possibility of introducing a state pension fund, etc.

E. Volskis stressed (Volskis, 2012) that, as a response to growing demographic risks due to low fertility, increasing life expectancy, risks related to the migration of the working-age population and the shortening of the employment period, all three Baltic countries have successfully introduced new pension systems, which established good preconditions to mitigate the aforementioned risks. Nevertheless, the financial crisis of 2008 and 2009 and the currently ongoing crisis in the euro zone countries indicated that the pension systems in the Baltic countries were not properly protected against real economic risks related to long-term unemployment and the decrease of return rates below inflation rates for instruments such as term deposits and government bonds, which historically were considered risk-free financial investments with stable returns of 4-5%. The Council of the European Union noted (Council of the European Union, 2013) that the adequacy of pensions is a challenge, as the older population is at a high risk of poverty and exclusion. The 2012 reform of the pension accumulation system encourages 2nd pillar pension accumulation with financial incentives from the state budget. It also introduces the possibility to opt out from private pension accumulation and return to the state social insurance fund during a transitional period, as well as a gradual increase of the retirement age. These are important but isolated steps in the right direction, and more significant changes are needed, particularly within the 1st pension pillar. In addition, measures that promote the employability of older workers and age-friendly working environments are necessary.
Prolongation of the Retirement Age

The increase of the pensionable age is strongly related to longer life expectancy. One of the key recommendations of the European Union is the prolongation of the pensionable age and changes in the pre-retirement pension schemes. Prolongation of the pensionable age is a common process in many European countries because of the vulnerability of state social security pension systems, ageing and rising life expectancy. The European Union's strategy "Europe 2020: Integrated guidelines for the economic and employment policies of the member states" indicates that member states should emphasize promoting increased labour force participation through policies to promote active ageing (European Commission, Strategy "Europe 2020", 2010). The International Social Security Association noted (International social security association, 2013) that raising the retirement age, and thus pushing back the age at which benefits can be taken, can also support the financial sustainability of pension systems by encouraging continuing contributions from insured employment and reducing the duration for which benefits are likely to be paid out on average. In addition, a later retirement age may support efforts to improve benefit adequacy by allowing a longer period of accrual of benefits. On June 9, 2011 the Parliament approved amendments to the Law on State Social Insurance Pensions and it was decided to increase the retirement age. The retirement age will be increased by 4 months per year for women and 2 months per year for men from 2012, until it reaches 65 years in 2026. This decision was adopted with regard to the longer lifespan after retirement age. According to data from the Department of Statistics of Lithuania, in 2009 the average life expectancy after 65 years of age in Lithuania was 13.38 years for men and 18.25 years for women. According to the Eurostat projections, life expectancy will grow in the future (19 years for men and 22,6 years for women in 2050).

Accumulation for the Retirement Pension in the Second Pillar

Private schemes can relieve some of the pressure on public pension provision; however, increasing reliance on private schemes has fiscal costs, given the widespread practice of providing tax incentives during the accumulation phase (European Commission. Green paper, 2010). The International Labour Organization indicated that where the schemes were financed collectively and fully managed by the State (in particular through PAYG financing), the immediate impact was small. In contrast, fully funded schemes, where individual savings have been invested in relatively volatile products, have sustained severe losses. States should implement the following principles: regular actuarial studies, the establishment of contingency reserve or stabilization funds, and strict investment rules (International Labour Office, 2011, p. 183). On 14 November 2012, the Parliament approved changes in the funded pension scheme. The aim was to create opportunities for current and future retirees to decide how they would like to accumulate their pensions in the future. According to the new regulation, from 2014 the financial sources for the second pillar will consist of three parts: the contribution transferred from the state social insurance fund budget, a contribution paid from the person's earnings, and a subsidy from the state budget. There are three possibilities for persons.
First, a person could accumulate a pension under the current conditions, whereby a 2% share of the state social insurance contribution is transferred from the state social insurance fund to the private funds. The current contribution rate will remain until 2020 (from 2020, the contribution rate will be increased from 2% to 3.5%). Second, the person could pay an additional 1% (from 2016, 2%) from his or her earnings to the private pension fund, and a 2% (from 2020, 3,5%) share of the social insurance contribution will be transferred from the state social insurance fund to the private funds. In order to encourage persons to accumulate in private funds, the state will provide financial encouragement: in this option, a subsidy from the state budget (1% of the average wage in the national economy from 2014 and 2% from 2016) will be transferred from the state budget to the person's pension account. An additional 1% will be transferred from the state budget for every child up to three years of age. Third, during a transitional period (from 1 April 2013 to 30 November 2013), persons have the possibility to stop participation in the pension fund and to return to the state social insurance fund.

Methodology

In this section we evaluate the long-term financial sustainability of the Lithuanian public pension system, in the light of current demographic, financial and regulatory changes. Our model follows the traditional actuarial approach, widely adopted in Pension Economics. The interested reader can find technical references in Janssen and Manca (Janssen, Manca, 2006), Booth et al., Hyndman and Booth, and Pitacco (Pitacco, 2004, p. 279-298). Classic introductions to forecasting national population, which may be useful for the inexperienced reader, are the contributions of Leslie (Leslie, 1945, p. 183-212) and of the United Nations (United Nations, 1956). Other quantitative analyses of the Lithuanian case have been proposed by Klyvienė (Klyvienė, 2004) and Alho (Alho, 2002). We start by estimating the life dynamics of the national population divided by gender and age. We make a forty-year prediction, from 2012 to 2051, and we consider the demographic variables of mortality, fertility and migration. Then, we analyze the working conditions of the national population in the forecasting period. We find the initial contributors and pensioners and we forecast their life dynamics. Each year, new cohorts of workers enter the pension system, while some existing cohorts fulfill the pension requirements and retire. We calculate for each cohort of workers the cash inflows from contributions to the pension scheme, and for each cohort of pensioners the cash outflows from the pension scheme. The annual difference between total contributions and pensions determines the pension balance. The annual pension balances accrue over the forecasting period and determine the evolution (and the eventual sustainability) of the pension scheme. We estimate the financial evolution of the pension system under three scenarios, which correspond to the three possible individual contribution choices of Lithuanian workers according to the current reforms (see section 3.6 and figure 1). Also, we estimate the effects of the reform on the monthly income of future pensioners (see figure 2). Finally, we define a minimum and a maximum range of variation for the financial evolution of the public pension system, according to the results in the different scenarios (see figure 3).

Demographic Model

We calculate the evolution of the population divided by gender, age and working conditions, with the following formulae.
Let $N^s_x(y)$ represent the national population of gender s = {F, M} and age x, alive at year y. For x ≥ 1, we estimate the national population as $N^s_x(y) = N^s_{x-1}(y-1)\,[1 - q^s_x(y) + m^s_x(y)]$, where $q^s_x(y)$ and $m^s_x(y)$ represent, respectively, the mortality rate and the net migration rate at year y of individuals with gender s and age x. For x = 0, we estimate the national newborn population as $N^s_0(y) = \sigma^s \sum_x f_x(y)\, N^F_x(y)$, where $f_x(y)$ is the age-specific fertility rate and $\sigma^s$ the share of newborns of gender s. Let $N^s_{x,a}(y)$ represent the population of members of the pension system alive at year y of gender s, age x and seniority in the system a. Given a population of existing members $N^s_{x,a}(y)$ with a > 1, we estimate its evolution as $N^s_{x,a}(y) = N^s_{x-1,a-1}(y-1)\,[1 - q^s_x(y)]$, where $q^s_x(y)$ represents the mortality rate at year y of individuals with gender s and age x. We estimate new members as follows. We assume that all new contributors enter the pension system at age $x = \bar{x}$ and we estimate the population of new contributors $N^s_{\bar{x},1}(y)$ with a = 1 and $x = \bar{x}$ as $N^s_{\bar{x},1}(y) = \alpha^s_{\bar{x}}\,(1 - \beta^s_{\bar{x}})\, N^s_{\bar{x}}(y)$, where $\alpha^s_x$ and $\beta^s_x$ represent, respectively, the activity rate and the unemployment rate of the population with gender s and age x. The preceding formula tends to keep the activity and unemployment rates among the population unchanged over time. Financial Model Let $c_{g,s,x,a}(y)$ be the average contribution of type g paid at year y by a member of sex s, age x and working seniority a, determined as $c_{g,s,x,a}(y) = \gamma_{g,x,a,y}\, R_{g,s,x,a}(y)$, where $\gamma_{g,x,a,y}$ and $R_{g,s,x,a}(y)$ represent respectively the contribution rate and the expected financial amount (i.e., gross income) for the determination of the contribution of type g due at year y by an individual of sex s, age x and seniority a. Then, the annual cash flow at year y for contributions to the pension system is equal to $C_y = \sum_{g \in G} \sum_{s} \sum_{x,a} N^s_{x,a}(y)\, c_{g,s,x,a}(y)$, where the sum over x and a runs over the cohorts that have not yet fulfilled the retirement requirements of age $\bar{x}_{d,s,y}$ and seniority $\bar{a}_{d,s,y}$, in force at year y, for members of sex s to be entitled to a benefit of type d. Thus, the $N^s_{x,a}(y)$ considered in the previous equation are cohorts of active members. The term $c_{g,s,x,a}(y)$ is the average contribution of type g paid at year y by an individual of sex s, age x and seniority in the pension system a. The term $g \in G$ represents a generic contribution of all the existing types of contributions G. The annual pension disbursement at year y is equal to $B_y = \sum_{d \in D} \sum_{s} \sum_{x,a} N^s_{x,a}(y)\, b_{d,s,x,a}(y)$, where the sum over x and a runs over the cohorts that fulfil the retirement requirements of age $\bar{x}_{d,s,y}$ and seniority $\bar{a}_{d,s,y}$, in force at year y, for members of sex s to be entitled to a benefit of type d. Thus, the $N^s_{x,a}(y)$ considered in the previous equation are cohorts of retired members of the pension system alive at year y of gender s, age x and seniority a. The term $b_{d,s,x,a}(y)$ is the average benefit of type d received at year y by a pensioner of sex s, age x and working seniority a. The term $d \in D$ represents a generic benefit of all the existing types of benefits D. Let $V_y$ represent the cumulated balance of the pension system, which we model with the following recursive equation: $V_y = V_{y-1}(1 + r_y) + C_y - B_y - E_y$, where $r_y$ is the nominal annual interest rate on public debt, and $C_y$, $B_y$ and $E_y$ represent respectively the amounts of contribution income, pension disbursement and administrative expenses generated in the year y. All of the cash flows are assumed to take place at the end of each year. Demographic, Financial and Pension Assumptions We adopted the forecasting model described in section 4.2 under the following demographic assumptions: • Population of Lithuanian pensioners and contributors at 1st January 2012, divided by sex, age and seniority, estimated from data of the Lithuanian Official Statistics Portal. • Future new contributors enter the pension system at age 25.
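To make the recursions above concrete, the following minimal sketch (written in Python, with invented mortality, migration and fertility rates rather than the Lithuanian data used in this study) projects a two-gender population forward over the forty-year horizon; it only illustrates the cohort-component logic of the demographic model.

import numpy as np

# Minimal cohort-component projection in the spirit of the recursions above.
# All demographic rates here are invented placeholders, not Lithuanian data.
YEARS, MAX_AGE = 40, 101                      # horizon 2012-2051, ages 0..100

q = np.full(MAX_AGE, 0.01)                    # mortality rates q_x
q[65:] = 0.05
q[-1] = 1.0
m = np.full(MAX_AGE, -0.002)                  # net migration rates m_x (net emigration)
f = np.zeros(MAX_AGE)
f[20:40] = 0.05                               # fertility rates of women aged 20-39

N_f = np.full(MAX_AGE, 20_000.0)              # female population by single year of age
N_m = np.full(MAX_AGE, 19_000.0)              # male population by single year of age

for _ in range(YEARS):
    births = (f * N_f).sum()                  # newborns produced by the female population
    for N in (N_f, N_m):
        # each cohort ages one year, surviving mortality and adjusted for net migration
        N[1:] = N[:-1] * (1.0 - q[:-1] + m[:-1])
        N[0] = 0.0
    N_f[0] = 0.485 * births                   # assumed shares of newborns by gender
    N_m[0] = 0.515 * births

print("projected population in 2051:", int(N_f.sum() + N_m.sum()))

In the full model the same ageing step is applied, with year- and gender-specific rates, to the national population and, net of retirements and new entries, to the cohorts of contributors and pensioners.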
We adopted the forecasting model described in section 4.3 under the following financial assumptions: • Annual pension balance equal to collected contributions minus pensions minus the eventual governmental subsidy to 2nd pillar accounts (see scenario B in section 4.5). • We consider an annual management cost for the public budget of 42 million Litas in nominal values at 2012, appreciated annually by inflation. • We assume that the overall social security budget deficit at 1st January 2012 is 5 billion Litas, appreciated annually at a 3.65% interest rate (that is the average interest rate on Lithuanian bonds in August 2013, source European Central Bank). • Inflation rate equal to the European Central Bank long-term objective, thus equal to 2.00%. • Annual gross incomes, for each cohort of the same sex and age, equal to average values in 2010 published by the Lithuanian Official Statistics Portal, appreciated at the nominal GDP growth rate in 2011-2012 and at 3.00% for the following years. We estimated the old-age insurance public pensions under the following assumptions: • Benefits paid to pensioners who retired before 1st January 2012 estimated according to average values in 2012, published on the Lithuanian Official Statistics Portal. • Pensioners who retire after 1st January 2012 get an old-age insurance pension consisting of three components: a basic sum; a supplement based on working seniority; and an earnings-related part, calculated with an accounting unit ("points") system and reduced proportionally to second pillar contributions. • Pensions after 1st January 2012 estimated for each cohort of members by gender and age, according to current regulations. • All pensions are appreciated annually at the inflation rate. Each cohort of contributors retires immediately after fulfilling the requirements. We do not consider benefit reversion to survivors. The Contribution Regimes: Three Scenarios In estimating the financial dynamics of the Lithuanian public pension system, we considered the current reform of the second pillar system (exposed in section 3.6) that requires workers to choose among three different contributive options. Lithuanian regulations allow for two types of contributions: to the 1st pillar public pension system and to the 2nd pillar pension system. In the period 2004-2013 (before the reform produces its effects) we adopted for every worker the contemporary contribution rates. In the period from 2014, we considered the three contribution regimes that have been introduced with the reform (see section 3.6). We considered three scenarios in which every worker chooses to join the same contributory regime: A) contribution rate to the 1st pillar equal to 24 21). New workers enter immediately in the 2nd pillar system. Private annuities have been calculated at the retirement year by applying to the accrued contributions the regulatory conversion rates for 2012, fixed by the Lithuanian Central Bank. We assumed that every cohort of pensioners converts their 2nd pillar contributions into annuities. Results We estimated the financial dynamics of the Lithuanian public pension system and the effects of demographic, economic and regulatory variables in the period 2012-2051. We made three different forecasts according to the three scenarios described in section 4.5. In every scenario, the Lithuanian public pension system is sustainable in the long term, even if the state budget should bear an initial negative balance.
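As a compact illustration of how the three scenarios are compared, the sketch below iterates the cumulated-balance recursion $V_y = V_{y-1}(1 + r_y) + C_y - B_y - E_y$ under three stylised contribution paths; all cash-flow figures are invented placeholders, whereas in the actual model $C_y$ and $B_y$ are built from the cohort populations and the regime-specific contribution and benefit rules described above.

import numpy as np

# Cumulated balance V_y = V_{y-1}(1 + r) + C_y - B_y - E_y under three stylised
# contribution regimes; every cash-flow path below is an invented placeholder.
years = np.arange(2012, 2052)
t = np.arange(len(years))
r, infl = 0.0365, 0.02                         # interest on public debt, inflation
E = 42e6 * (1 + infl) ** t                     # administrative expenses (Litas)
B = 8.0e9 * (1 + infl) ** t                    # pension disbursement, indexed to inflation
C_base = 7.5e9 * 1.03 ** t                     # contribution income, growing with wages

scenarios = {
    "A (current 2nd-pillar transfer)": C_base * 0.97,
    "B (state subsidy to 2nd pillar)": C_base * 0.94,
    "C (all contributions to 1st pillar)": C_base * 1.00,
}

for name, C in scenarios.items():
    V = -5.0e9                                 # initial social security deficit
    for k in range(len(years)):
        V = V * (1 + r) + C[k] - B[k] - E[k]
    print(f"{name}: cumulated balance in 2051 = {V / 1e9:.1f} bn Litas")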
The rebalancing of the system happens at the cost of low pension payments, especially for older cohorts of current workers. Annual pension balances for each scenario are shown in Figure 1. Negative pension balances are initially observable and their persistence can vary. If all contributions flowed into the first pillar public system, as in scenario C, the annual public pension balance would turn positive in 2017. Conversely, if the Government subsidized every second pillar private account, as in scenario B, the annual pension balance would turn positive in 2026. In figure 3 we show the cumulated pension balances over the forecasting period under the three scenarios. The overall deficit of the public pension system is completely cancelled in 2027 under the favourable scenario C, and in 2043 under the less favourable scenario B. We can consider scenarios B and C, respectively, as the maximum and minimum range of variation of future results. Then, we deduce that the cumulated balance of the public pension system will turn positive between 2027 and 2043 (see figure 3), according to the workers' choices of contribution regime. This result suggests that current pension reforms guarantee the long-term sustainability of the public pension system. Figure 2 suggests that long-term sustainability is reached through low pension payments. Figure 2 shows the average monthly pension for different age cohorts, in real values at 2012, depending on the worker's choice of contribution regime. The most favourable choice is option B because it implies a public subsidy to the worker's private second pillar account. The advantage of option B is higher for younger workers and decreases with age. In all scenarios, pension provisions seem low and may expose pensioners to the risks of poverty and social exclusion; this is true especially for females and older workers. Forecasts should be considered with caution because the model cannot capture the effects of abrupt demographic and economic changes. Improvements in accuracy can be obtained with wider statistical data. Further quantitative analysis of the Lithuanian pension system would require a risk assessment through stress testing and percentile analyses. Conclusions The paper has presented the Lithuanian pension system, its reforms and an evaluation of its sustainability in the light of current economic and demographic trends. The quantitative analysis reveals its long-term tendency towards financial equilibrium, albeit at the cost of initial negative balances to be covered by the public budget. The system may expose workers to risks of poverty and social exclusion because of low pension payments. The problem is greater for older workers, who can benefit less from second pillar pension savings. The analysis leads to the following conclusions: 1. The key policy is to rebuild trust in public social insurance schemes and in private funded pension schemes. Participants in the private and public pension systems should be constantly and clearly notified of the pension rights they have obtained. 2. Therefore, the concept of social security should cover public state security schemes, state-funded second pillar pensions and all private funded or occupational pension schemes. The calculation of the social insurance pension replacement rate should comprise not only public pensions but also statutory private quasi-mandatory funded pensions (second and third pillars). 3.
The challenges for the Lithuanian pension system are an ageing population (especially the low fertility rate), a low employment rate, low pension benefits, poverty among older persons, the absence of clear indexation rules, emigration and the growth of pension expenditures. 4. The pension system has to respond directly to changes in the structure of society and must be very closely related to the flexibility of labour relations (part-time or half-day employment, opportunities for longer and less interrupted contributory careers, positive returns from financial markets, more lifelong learning, etc.), the creation of better working conditions and a change in the approach of employers towards older workers. 5. It is necessary to intensify the reform of the pension system in Lithuania because of the sharpening demographic and social changes. The main goals should be: to encourage and extend employment (especially for older workers, women and young persons); to revise all social security system benefits; to balance the budget of the social security fund and to introduce a pension reserve fund; to decrease the administrative costs of pension funds; to introduce pension benefit indexation rules; to reform the unemployment system and to reduce the early retirement pension system (introducing flexible retirement); to introduce automatic adjustment mechanisms; and to maintain the balance between revenues and expenses in the pension system.
Cultural Ecosystem Services Provided by Coralligenous Assemblages and Posidonia oceanica in the Italian Seas Posidonia oceanica meadows and coralligenous reefs are two Mediterranean ecosystems that are recognized as suppliers of valuable ecosystem services (ESs), including cultural services. However, valuation studies on these ecosystems are scarce; rather, studies have mainly focused on provisioning and regulating services. Here we focus on the cultural services provided by P. oceanica and coralligenous assemblages by addressing a specific group of users. Through an online survey submitted to Italian scuba divers, we assess their willingness to pay for a dive in the two ecosystems and how their preferences will change under different degradation scenarios. Diving preferences are assessed using a discrete choice experiment. The results confirmed that ecological knowledge is associated with higher ecosystem values. Moreover, the results confirm and assess how a high degradation of coralligenous and P. oceanica habitats would reduce the value of the underwater environment, by decreasing scuba diver satisfaction and their rate of return visits. Considering a 50% reduction in the coverage of keystone species, the marginal willingness to pay decreased by approximately €56 and €18 for coralligenous reefs and P. oceanica, respectively, while the willingness to pay decreased by approximately €108 and €34, respectively, when there was a total reduction in coverage. Our results can be used to support marine ecosystem-based management and the non-destructive use of Mediterranean Posidonia oceanica meadows and coralligenous reefs. INTRODUCTION Coralligenous reefs and Posidonia oceanica meadows are two Mediterranean ecosystems that are important suppliers of highly valuable ecosystem services (ESs) and benefits and have a fundamental role in supporting human wellbeing (Salomidi et al., 2012;Campagne et al., 2015). Following ES theory (CICES V4.3, 2012), coralligenous reefs and P. oceanica meadows provide humans with several services belonging to provisional (i.e., food, raw materials, pharmaceutical molecules), regulating (i.e., carbon sequestration, nutrient recycling), and cultural ecosystem services (CESs), including numerous services (i.e., high biodiversity, fish abundance, complex habitats to explore, and water clarity) that enhance the quality and the enjoyment of underwater recreation activities (Campagne et al., 2015;Thierry de Ville d'Avray et al., 2019). However, coralligenous reefs and P. oceanica meadows are particularly vulnerable to several anthropogenic pressures that are increasingly threatening coastal waters worldwide, including those in the Mediterranean Sea. Indeed, the Mediterranean Sea is currently facing multiple anthropic pressures, which affect the ecological, economic, and social spheres (de Groot et al., 2012). Mediterranean marine ecosystems, including coralligenous reefs and P. oceanica meadows, are highly threatened by local and global stressors, which often interact with one another. These stressors include intensive coastal development, pollution, invasive alien species, unsustainable fishing practices, poorly planned tourism (Coll et al., 2012;Katsanevakis et al., 2014;Randone et al., 2017), and global drivers of climate change (Jordà et al., 2012;Marbà et al., 2014;Martin et al., 2014;Gaylord et al., 2015;Zunino et al., 2017). 
Indeed, in the business-as-usual scenario of anthropogenic emissions (IPCC, 2014), the observed and projected levels of ocean acidification (OA) and global warming may highly threaten P. oceanica and coralligenous ecosystems (Jordà et al., 2012;Gattuso et al., 2015;Chefaoui et al., 2018;Zunino et al., 2019). Over the past two decades, the scientific community has developed and adopted the ES framework, aiming to highlight the complex relationship between human ecosystems and ecosystem functioning. The power of this framework lies in the integration of the ecological dimension into the economic and social dimensions by applying a common system of values. The goal is to improve environmental decision making by providing information regarding the benefits of nature conservation, the costs of degradation, and the consequences of ecosystem changes in terms of human wellbeing. Mapping and assessing ESs is considered to be a key action for environmental governance in support of biodiversity objectives and ecosystem-based planning (Maes et al., 2016). Moreover, monetary valuation tools can be used to raise awareness among users and provide information for managers and policy-makers (Wright et al., 2017). Although our understanding of the ways in which ESs support human wellbeing has increased over the last two decades, the available data on marine ecosystems and the methods used to assess them are much more limited when compared to those pertaining to terrestrial ecosystems (Liquete et al., 2013). In addition, assessing and valuing marine ESs is still a challenging task despite the recent methodological and operational advances (Hattam et al., 2015;Garcia Rodrigues et al., 2017;Newton et al., 2018). Indeed, Mediterranean marine ESs, particularly CESs, are often overlooked due to their limited visibility and accessibility (Liquete et al., 2013;Garcia Rodrigues et al., 2017) and due to the difficulties in valuing non-material benefits . To a large extent, the economies of countries bordering the Mediterranean Sea rely on cultural activities including both coastal and marine tourism. Marine tourism in the Mediterranean Sea generates US$110 billion annually (∼€90 billion), representing, together with coastal tourism, 92% of the annual economic output of all sectors related to the sea (Randone et al., 2017). Coastal natural destinations are of particular interest to scuba divers, whose recreational activities have become one of the most important factors in the marine tourism sectors globally (PADI, 2017;Lucrezi et al., 2018a). Uncontrolled coastal tourism, including unregulated scuba diving, can negatively impact marine ecosystems (United Nations Environment Programme [UNEP], 2015;Habibullah et al., 2016), and as the number of scuba divers is currently increasing, concern related to environmental deterioration has also increased (Flores-de la Hoya et al., 2018). However, because high-value nature-based tourism largely relies on the maintenance of a good environmental status and on the preservation of all ecosystem functions (Drius et al., 2018), the sustainable management of this activity can represent a viable alternative to more destructive uses of the environment (De Brauwer and Burton, 2018). Hence, there is an urgent need to analyze, estimate, evaluate and communicate all these values, including the cultural values, provided by threatened marine ecosystems, such as coralligenous reefs and P. 
oceanica meadows; this information can be used to facilitate their inclusion in ecosystem-based management policies, which are also directed to improve and maintain sustainable blue jobs (EU Commission, 2017). Here, we collected all the information on the existing valuation studies of cultural ecosystem services related to the Mediterranean marine ecosystem. Then, we performed an explorative evaluation study aimed at highlighting the societal implications of the degradation of coralligenous and P. oceanica meadow ecosystems, with a particular focus on the Italian diving sector. We apply non-market analysis techniques to assess underwater recreational services provided by coralligenous reefs and P. oceanica meadows using an online questionnaire (MEA, 2005;de Groot et al., 2010;Chan et al., 2012;Daniel et al., 2012). Study Area Italy is a country with 8309 km of coastline that essentially separates the Mediterranean Sea into Western and Eastern subbasins. Coralligenous assemblages are widespread along the Italian coast, with the exception of the sandy-muddy seabed between the Po river delta and the Gargano peninsula (Ingrosso et al., 2018). P. oceanica is present along most of the West Mediterranean and in the western Adriatic Sea. Coralligenous assemblages and P. oceanica meadows are considered to be the most important hotspots of species diversity in the Mediterranean (Ballesteros, 2006) (Figure 1). The Mediterranean seagrass P. oceanica creates the "climax community" of soft sublittoral habitats covering a known area of 1,224,707 ha in the Mediterranean Sea (Giakoumi et al., 2013;Telesca et al., 2015). In addition, coralligenous reefs have been described as mesophotic biogenic structures produced by the growth and accumulation of calcareous encrusting algae (Ballesteros, 2006). Bioconstructors, such as coralligenous species, can be very common from 18-20 to 100 m depth and more. Because a wide variety of human activities threaten P. oceanica and coralligenous ecosystems, they are both under protection. P. oceanica is included on both the Red List of marine threatened species of the Mediterranean (Boudouresque et al., 1990) and the list of priority natural habitats in Annex I of ES Framework We performed a literature review to understand the state of the knowledge related to the identification and/or assessment of the marine CES in the Mediterranean Sea. The review was performed in June 2019 through a search of the SCOPUS database for the following relevant keywords: cultural OR "cultural ecosystem service" OR tourism OR recreat * OR social AND "ecosystem service" AND "marine" OR "sea" AND Mediterranean. Subsequently, the abstracts that specifically targeted our research focus were selected. Valuation of CES: The Questionnaire An online questionnaire was distributed among Italian marine divers to assess their diving habits, environmental attitude, and preferences for different diving experiences following marginal changes to ecosystems due to degradation. This work expands the analysis conducted by Rodrigues et al. (2015), which assessed the extent of the negative preference for the degradation of coralligenous reefs in the Medes Islands Marine Protected Area (MPA) in Spain, to a different Mediterranean context of the Italian divers diving in the Italian Sea and with an analysis not limited only to MPAs. Moreover, this analysis addresses P. 
oceanica meadows, for which, despite their uncontroversial ecological relevance and contribution to the preservation of Mediterranean biodiversity, few cultural valuation studies exist (Dewsbury et al., 2016). Among the general public, the level of knowledge about seagrass ecosystems and their associated services is low (i.e., Unsworth et al., 2019), for this reason we addressed the valuation of seagrass cultural services among divers. Divers, as other stakeholders that directly interact with the sea, better understand the functioning of different marine ecosystems, including P. oceanica, and their associated services (Ruiz-Frau et al., 2018). The P. oceanica ecosystem is not a preferred diving location (Lucrezi et al., 2018b), and its inclusion in our study was made to provide insight into the general cultural benefits provided by seagrass to divers. From August 2016 to August 2017, an online survey was e-mailed to 18 diving centeres and clubs distributed along the Italian peninsula. The link was also spread through the OGS (Italian Institute of Oceanography and Applied Geophysics) webpage and the FIAS (Italian Underwater Activities Federation) official webpage and to all of their clubs (Figure 1 and Supplementary Appendix Figure A1). The questionnaire was structured with two sections that addressed diver preferences for different levels of ecosystem quality. The first section collected personal and demographic data, such as gender, age, level of education, and diving certification, to identify factors that could be related to their responses. Additional questions explored the type of benefits that the scuba diving experience provided to them. The second section was a choice experiment (CE) which was used to model the preferences for different typologies of the diving experience using a choice modeling approach (CM) (Hanley et al., 2001). This methodology is a preference-based method; such methods are currently the most commonly used approaches to assess the economic value of ESs (Kumar, 2010). The power of the CE relies on not asking respondents directly for their willingness to pay (WTP) or to accept compensation (WTA) for a certain environmental change. In our CE, the divers were asked 6 times to choose the most preferred alternative between two choices. The alternatives differed in terms of the quality and type of attributes (technically called attribute levels) characterizing the diving experience and the state of the ecosystem. Choice Experiment Design We reviewed the literature to identify the attributes that maximized the divers' utility and that would likely be impacted by climate change. In particular, following Wielgus et al. (2003), Gill et al. (2015), and Rodrigues et al. (2015), we tested the consistency of the selected attributes and of their respective levels administering the questionnaires to a pre-sample of divers selected from five Italian sites (Ischia, Ventotene, Ustica, Cyclops Island, and Siracusa). Thus, the selected attributes were the "Number of divers" (per dive trip), "Coral cover, " "Seagrass cover, " and "Price" (per dive) ( Table 1), while the chosen levels represented a spectrum of environmental conditions from good conservation status to heavily damaged, as detailed below: (1) Number of divers found on a diving trip: The crowding level is an important consideration when valuing the quality of a dive, as suggested by several studies (e.g., Wielgus et al., 2003;Gill et al., 2015;Rodrigues et al., 2015). 
We selected 25, 15, and 5 as the levels of this attribute. (2) Coral cover (the expected status of corals): We used the term corals as a proxy for the coralligenous environments since the latter is mostly known in the academic context (Tonin and Lucaroni, 2017). Indeed, corals are considered to be attractive features of coralligenous ecosystems in Italian diving destinations that are highly threatened by multiple anthropogenic pressures exacerbated by the drivers of climate change (Ingrosso et al., 2018 and the literature therein). Scientific literature suggests that coralligenous reefs could disappear or shift to deeper sites (Ingrosso et al., 2018) which are not suitable for recreational divers. Three levels were defined for this attribute: (a) 100% of the corals are in good condition; (b) 50% of the corals are degraded; and (c) 0% of the coral cover. (3) Seagrass cover (the expected status of P. oceanica): Although we are aware that this ecosystem is not among the favorites of divers, the presence and status of P. oceanica meadows were included in the questionnaires since this ecosystem is undergoing degradation as a result of anthropogenic activities (Short et al., 2011) declining by 34% in the last 50 years (Telesca et al., 2015) and possibly becoming functionally extinct by 2100 in the worst-case emission scenario (Chefaoui et al., 2018). Three levels were defined for this attribute: (a) 100% of the seagrass meadow is in good condition; (b) 50% of the seagrass meadow is degraded; and (c) 0% seagrass meadow cover. (4) Price of the dive: This attribute indicates the price of a single 50-min dive and includes the cost of the boat trip, air and tank for diving, and dive insurance. The average price is €40 during the high tourism season. For the CE, the price levels were set at €20, €50, €70 and €90. An opt-out option was offered in all cases. Starting from the attributes list -with their relative levelswe created a total choice set using ALgDesign package for R software (Wheeler, 2004). A full factorial design with three three-level factors ("Numbers of divers, " "Corals cover, " "Seagrass cover") and one four-level factor ("Price" -per-dive attribute) was created. The full factorial design comprised 135 combinations of the levels of each attribute (3 3 × 5). In order to make the questionnaire more manageable, we generated a fractional factorial design from the full factorial design with the function optFederov (Wheeler, 2004). Following the procedure described by Aizaki and Nishimura (2008) we obtained 24 alternatives that were considered to be cognitively manageable. The alternatives were then blocked into two sets of six paired choices, each with a "neither" alternative for consistency with market decisions and were presented to divers. The presence of this latter choice option mimicked real market situations in which the diver is not forced to make a choice but can opt-out (Rodrigues et al., 2015). Multinomial Logit Model The CE technique is an application of the theory of value (Lancaster, 1966) combined with the random utility theory (Thurstone, 1927;Manski, 1977). According to Lancaster theory of value (Lancaster, 1966), individuals (i) obtain utility (U) not from goods themselves but from the attributes that describe these goods (Hanley et al., 1998). 
This theory assumes that individuals are perfectly able to discriminate between sites, basing the choice of site j on a systematic and observable component, $v_{ij}$, which is based on the site attributes, and an additive random component $\varepsilon_{ij}$, which is not observable. The utility function for the model used in this work is specified in Equation 1: $U_{ij} = \beta_{\text{opt-out}}\,\text{Opt-out}_i + \beta_{\text{25 divers}}\,\text{25 divers}_i + \beta_{\text{15 divers}}\,\text{15 divers}_i + \beta_{\text{Coral 50\%}}\,\text{Coral 50\%}_i + \beta_{\text{Coral 0\%}}\,\text{Coral 0\%}_i + \beta_{\text{Seagrass 50\%}}\,\text{Seagrass 50\%}_i + \beta_{\text{Seagrass 0\%}}\,\text{Seagrass 0\%}_i + \beta_{\text{price}}\,\text{PRICE}_i + \varepsilon_{ij}$ (1), where Opt-out is a dummy variable that assumes a value of 1 for the no-choice option and 0 otherwise; 25 divers is a dummy variable that indicates a highly crowded diving experience, and 15 divers indicates a less crowded excursion; Coral 50% and Seagrass 50% represent dummy variables taking a value of 1 if the respondent chose the alternative including an environment that is 50% degraded and 0 otherwise; Coral 0% and Seagrass 0% are dummy variables taking a value of 1 if the respondent chose the alternative including a completely degraded environment and 0 otherwise; and, finally, PRICE is the price variable. The probabilistic odds that one alternative is selected over another can be estimated using a standard multinomial logit model (MNL), also called a conditional logit model (see Supplementary Appendix A1). The CE model analysis was conducted using the "survival" package (Therneau and Grambsch, 2000; Therneau, 2015) in the R statistical environment (R Core Team, 2015) using the clogit function (Aizaki, 2012). Latent Class Model As stated by Greene and Hensher (2003), the basic assumptions of the Latent Class Model (LCM) affirm that individual behavior is determined both by the attributes of the alternatives and by certain latent heterogeneity that is not observed by the researcher. Therefore, one of the main strengths of the model is its ability to capture the heterogeneity of preferences (Boxall and Adamowicz, 2002). This model assumes that the population consists of a number of latent classes that is exogenously determined, and it makes it possible to overcome the limits of the MNL model, which considers the utility coefficients as fixed among respondents. The unobserved heterogeneity is captured by these classes through the estimation of a parameter vector. In this study, the LCM was used as an instrument to identify groups of divers interested in counteracting the degradation of coralligenous and P. oceanica meadow habitats, with a particular focus on the implications for the Italian dive sector. The LCM analysis was conducted using the program NLogit 4. Both the MNL and the LCM models shared the same linear utility function (see Equation 1 above). Willingness to Pay The parameters estimated by the MNL and LCM models were used to identify the divers' willingness to pay. The WTP for each attribute was derived from the models using the coefficients for each attribute level and the coefficient for price. The estimated coefficients represent marginal utilities, that is, increments of utility. When the coefficients are compared with reference levels, they reveal the relative importance of attributes and their levels and reflect respondents' willingness to trade one attribute level for another. The addition of the price attribute in the utility expression is essential in order to derive implicit prices for marginal changes in attribute levels (Rodrigues et al., 2015), called the marginal WTP.
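A minimal, self-contained sketch of the estimation step is given below using synthetic choice data (the analysis in this study was carried out with the R clogit function and NLogit, not with this code); the attribute coding, the data-generating coefficients and the sample size are assumptions made only to show how the conditional logit likelihood and the marginal WTP ratio of an attribute coefficient to the price coefficient are computed.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic choice data: columns follow the dummies of Equation 1 plus price.
# [opt-out, 25 divers, 15 divers, coral 50%, coral 0%, seagrass 50%, seagrass 0%, price]
N_TASKS, N_ALT = 1200, 3                          # choice tasks, alternatives per task
true_beta = np.array([-1.0, -0.6, -0.3, -1.2, -2.3, -0.2, -0.8, -0.02])   # assumed values

def make_task():
    """Two dive profiles with random attribute levels plus an opt-out row."""
    alts = np.zeros((N_ALT, 8))
    for j in range(2):
        alts[j, 1:3] = np.eye(3)[rng.integers(3)][:2]       # crowding dummies (base: 5 divers)
        alts[j, 3:5] = np.eye(3)[rng.integers(3)][:2]       # coral dummies (base: 100% cover)
        alts[j, 5:7] = np.eye(3)[rng.integers(3)][:2]       # seagrass dummies (base: 100% cover)
        alts[j, 7] = rng.choice([20.0, 50.0, 70.0, 90.0])   # price per dive (EUR)
    alts[2, 0] = 1.0                                        # opt-out alternative
    return alts

X = np.stack([make_task() for _ in range(N_TASKS)])         # (tasks, alternatives, attributes)
utility = X @ true_beta + rng.gumbel(size=(N_TASKS, N_ALT))
choice = utility.argmax(axis=1)                             # simulated choices

def neg_loglik(beta):
    v = X @ beta
    v = v - v.max(axis=1, keepdims=True)                    # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(N_TASKS), choice].sum()

beta_hat = minimize(neg_loglik, np.zeros(8), method="BFGS").x
wtp = -beta_hat[1:7] / beta_hat[7]                          # marginal WTP = -beta_attribute / beta_price
print("estimated coefficients:", np.round(beta_hat, 3))
print("marginal WTP (EUR):", np.round(wtp, 1))

Because the price coefficient enters the denominator, the same attribute coefficients translate into smaller monetary values when price sensitivity is higher, which is why the ratio, rather than the raw coefficients, is reported in the results.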
The marginal WTP for attributes/levels (non-monetary variable) is calculated as -β nm /β price (Equation 2); where β nm is an estimated coefficient of the non-monetary variable and β price is an estimated coefficient of the monetary variable: From the parameter estimates it is possible to derive welfare changes in monetary terms. These values are associated with changes in the level of an attribute compared with its reference level, provided that the remaining parameters are held constant. WTP measures reflect utility increases when the value is positive. This can be interpreted as WTP for a change in a certain attribute level. However, a negative value indicates a decrease in utility. This result suggests that individuals require compensation through lower prices (Train and Weeks, 2005) to have the same level of utility as that in the reference dive. We calculated the negative ratios of the parameters associated with each attribute level and price. ES Framework The SCOPUS search listed 48 articles targeting CES in the Mediterranean Sea, which were further reduced to 14 after an analysis of the abstracts. The results were grouped into three categories, recreation and tourism, symbolic and aesthetic values, and cognitive effects, according to Liquete et al. (2013) (Table 2). Notably, only a few studies have focused on coralligenous and P. oceanica ecosystems. Among the others, Tonin and Lucaroni (2017) found that the general public is not generally aware of the existence and/or the value of coralligenous ecosystems; however, the authors highlight the important role of public education and communication to enhance awareness of endangered ecosystems. Valuation of CES: The Questionnaire The online survey was completed by 221 Italian scuba divers; of a total of 229 respondents, the surveys submitted by 8 were discarded due to incompleteness. The respondents were Italian, with an average age of 43 years, and 71% were male (χ 2 = 37.88, df = 1, p < 0.001). The mean number of years of diving experience per diver was 13 years, ranging from 1 to 50 years, and the divers mainly held superior dive licenses from divemaster to technical licenses (χ 2 = 135.67, df = 2, p < 0.001). The mean number of dive trips per recreational diver (open, advanced) was 15 per year, ranging from 1 to 40 dives per year. Divers holding a superior dive license reported a mean of 53 dives per year, ranging from 5 to 250. Over half of the surveyed divers were employed full time (73%; χ 2 = 383.77, df = 4, p < 0.001), with an average annual gross income of between €30,001 and €40,000 (51%) (Supplementary Appendix Table A1). The divers with more than 10 years of experience were asked to indicate their perception of the status of the underwater environments. 70% of the respondents agreed that the underwater habitat conditions had worsened since they began diving (χ 2 = 209.08, df = 4, p < 0.001). The main reasons were decreases in the numbers and size of the fish and corals (33%), increases in plastic litter, ghost nets and pollution (29%), and the increased abundance of stinging jellyfish, alien species and algae (9%). However, almost 12% of the respondents with more than 10 years of diving experience acknowledged that the MPAs have significantly improved the environmental status through an increase in the presence of marine biodiversity within the MPAs. 
Most of the respondents (74%; χ² = 51.88, df = 1, p < 0.001) visited at least one MPA during their lifetime, and among them, more than 91% evaluated the experience as good or excellent (χ² = 152.67, df = 1, p < 0.001). The respondents were also asked to choose among three species the one that they would like to see during a dive trip (Figure 3). The respondents were grouped by their level of expertise (i.e., beginners vs. experts). No difference was found between beginners and experts in the choice of emblematic species (χ² test, df = 1, p > 0.05; Figure 3), except for scorpionfishes and moray eels, which appear to be more appreciated by beginners (χ² test, df = 1, p < 0.05; Figure 3). Overall, more than half of the participants selected red corals (51%), groupers (42%), and seahorses (51%) (Figure 3). The results of the CE showed that the decision to take a dive was chosen by 81% of the respondents in the choice simulation. All coefficient estimates with the MNL were significant at the 99% level, except for the attribute level "Seagrass cover 50%", which showed 90% significance (Table 3). The coefficient of the variable "Price" was negative and significant, indicating the respondents' preference for a cheaper option. The attribute levels "Coral cover 50%" and "Coral cover 0%" had significantly negative coefficients (p-value < 0.05), indicating that respondents highly prefer habitats with good coral coverage for their diving rather than those with partial or total degradation of the coral cover (Table 3). The seagrass parameters indicate a slight preference of the divers (p = 0.02) for habitat with seagrass meadows in good condition ("Seagrass cover 100%") and that they significantly rejected degraded habitat ("Seagrass cover 0%") (Table 3). We found that the respondents significantly preferred less crowded dives instead of more crowded ones (p < 0.05; Table 3). Moreover, they also assigned a low preference to the intermediate level of crowdedness, the "15 divers" level (p < 0.05; Table 3), which differs from Rodrigues et al. (2015), who did not find a significant response in terms of the "15 divers" dive attribute level. The approximation of the mean WTP for the different levels of the attributes suggests that the divers were willing to pay approximately €56 less if the coral coverage decreased by 50% and €108 less if corals disappeared entirely. The same occurred for the P. oceanica coverage, for which the respondents were willing to pay approximately €18 and €34 less in the case of a partial or total loss, respectively. The results also suggest that the scuba divers were willing to pay less for highly crowded dive trips (25 divers). The number of classes for the LCM analyses was identified, before the evaluation of the parameters, using the Bayesian information criterion (BIC) and Akaike information criterion (AIC) (Boxall and Adamowicz, 2002) (Table 4). The model with three classes minimizes the BIC value; in addition, the AIC and R² values indicate that this model is suitable for the aim of our study. The three-class LCM (LCM-3) indicated that the sample showed heterogeneous preferences and that the respondents could be divided into three classes, representing 66, 13, and 21% of the divers, respectively. It is interesting that the coefficients for class two were not significant (p > 0.05) except for "Price", an intermediate number of divers and no seagrass cover.
The members of this class when choosing the most preferred alternatives considered only the number of divers found during a diving trip ("15 divers, " p < 0.05) and seemed to be independent of the other attributes considered in our experiment. However, they showed also a negative WTP for the absence of seagrass cover. Each of the other two classes was characterized by a different structure of preferences. In detail, members of class one were more concerned about having a good quality of coral cover ("Coral cover 50%, " p < 0.05, and "Coral cover 0%, " p < 0.05), the level of seagrass cover ("Seagrass cover 0%, " p < 0.05) and a low crowding level of divers on a trip ("25 divers, " p < 0.05; 15 divers, " p < 0.05), while members of class three preferred a high coral cover ("Coral cover 50%, " p < 0.05, and "Coral cover 0%, " p < 0.05) and a high seagrass cover (Seagrass cover 0%, " p < 0.05; Seagrass cover 50%, " p < 0.05) and did not have a clear preference regarding the number of divers on a trip ("25 divers, " p > 0.05). We will refer to members of class two as "dive-alone divers" and members of classes one and three as "pro-habitat conservation divers, " although class one had a positive but insignificant low seagrass cover attribute value, meaning a low preference for a high abundance of these meadows. Furthermore, members of class one had a lower (negative value) WTP for degraded coral cover on average (−€186) than the other classes. In addition, considering their WTP value for the highest level of the number of divers attribute, class one strongly preferred to avoid crowded trips (WTP −€187), while members of class three had a lower but negative WTP (€-15.7). The interactions between the attribute and socio-economic variables show that class one is composed of more experienced divers (in terms of years of experience) and those with a higher level of education (master's degree), while there was no correlation with the license level. Regarding class two (also called the "dive alone divers"), the results show a correlation only with the more experienced divers. The Opt-out coefficients capture the degree to which respondents tend to choose no scuba diving experiences. The Opt-out coefficient was negative and significant (p < 0.05) for classes one and three, indicating that respondents were more likely to choose one of the alternatives. DISCUSSION The results of the analysis of the data from the online questionnaire administered to Italian divers provide new insights into their attitudes toward coralligenous and P. oceanica meadow ecosystems and furthermore contribute to the valuation of the marine CESs they provide. As can be inferred from the results, a diving experience is rated not only in terms of the quantity and quality of the charismatic environments or species (as indicated by the rates of preferences for the presence of corals, the abundance of animals or the underwater landscape features) but also for knowledgerelated and environmental-related features, as expressed by the choices for new knowledge and the presence of P. oceanica meadows. The last choice, in particular, reveals a positive environmental attitude because the meadows themselves are not very attractive in terms of the diving experience. Therefore, the choice of a diving site, including those with P. oceanica, could indicate the respondent's knowledge regarding the indirect functions that the P. 
oceanica meadows provide to the ecosystem (i.e., protection of juveniles from predators and allowing the aggregation of individuals and their reproductive success). The divers' "pro-habitat" conservation attitudes were also confirmed by the LCM for the two most numerous classes (class one and class three), grouping 66 and 21% of the divers, respectively. This, in the end, supports the importance of "knowledge" as an important element which increases the "value" of the ecosystems and confirms that "understanding the ocean is essential to comprehending and protecting this planet on which we live, " as quoted from the Ocean Literacy Framework 1 . A secondary benefit of ES valuation exercises lies in fostering the relationship with the environment, acting "as a tool of selfreflection that helps people rethink their relationships with the natural environment and increase their knowledge about the consequences of consumption, choices and behavior" (Brondìzio et al., 2010). Thus, the scientifically sound assessment of ESs can support local managers as well as marine governance bodies in the complex process of valuation aimed at supporting both ecosystem-based management and knowledge on ecosystem functions and values. Coralligenous and P. oceanica meadow ecosystems provide a variety of ESs whose valuation requires a combination of different approaches to assess all the different types of benefits. Our study focused on a subset of these services, CESs, which are classified as non-consumptive benefits that are related to wellbeing, aesthetic inspiration, cultural identity, and spiritual experience. The WTP estimates obtained from the MNL and LCM confirm the importance of structured and complex ecosystems as providers of benefits for scuba divers. In the present study, as expected and in agreement with Rodrigues et al. (2015), we found that scuba divers have a strong predilection for coralligenous habitats but are also sensitive to the loss of P. oceanica meadows. Declines in the coverage of both corals and P. oceanica would result in significant economic losses to the recreational dive industry in the Italian peninsula. Conversely, proper management that promotes habitat conservation will likely have a positive economic impact on diving tourism and significantly influence the choice of a dive site destination. However, when considering the present valuation study, we must note that our results are based on the responses of people who voluntarily participated. This could mean that our sample was potentially biased toward scuba divers who were likely more interested and committed to environmental issues, as confirmed by the high percentage (74%) of respondents who had visited at least one MPA during their lifetime. Another important point to be taken into account is that the number and origin of the sample of 221 Italian divers, despite providing statistically significant results, should be treated and interpreted in the proper context. As for any other local valuation study, the extrapolation of these results requires the application of a benefit transfer analysis that allows scaling up and transposing the values obtained in an original study to a different policy context (Smith et al., 2002). WTP is strongly related to the socio-economic context in which the valuation takes place, and the generalization of the results should only be conducted by adjusting the results to the different contexts. 
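A common way to perform such an adjustment is a unit-value transfer with an income elasticity, as sketched below; the income figures and the elasticity are purely illustrative assumptions, and only the WTP value is taken from the results reported above.

# Unit-value benefit transfer with an income adjustment. The income levels and the
# elasticity are illustrative assumptions; the WTP figure comes from the MNL results above.

def transfer_wtp(wtp_study: float, income_study: float,
                 income_policy: float, elasticity: float = 0.5) -> float:
    """Rescale a study-site WTP to a policy site with a different income level."""
    return wtp_study * (income_policy / income_study) ** elasticity

wtp_coral_loss = -108.0          # EUR per dive for a total loss of coral cover (study site)
income_study_site = 35_000.0     # assumed average gross income at the study site (EUR/year)
income_policy_site = 28_000.0    # assumed average gross income at the transfer site (EUR/year)

print(round(transfer_wtp(wtp_coral_loss, income_study_site, income_policy_site), 1))
# about -96.6 EUR per dive under these assumptions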
This could allow us to extend these results to members of the entire international community who dive in Italy and to the coralligenous and P. oceanica ecosystems across the Mediterranean Sea. FIGURE 4 | Ocean and Ecological literacy can promote the sustainable use of marine ecosystems and contribute to both ecological conservation and blue growth. In fact, even if divers value (and are willing to pay for) pristine and well-preserved ecosystems mainly for aesthetic reasons, they appreciate, respect, and value them even more, when they are aware of the ecological importance of those systems. Positive feedback arises when extra-value is used to sustain education, conservation and mitigation. This valuation, however, as for any other valuation of ES, should not serve as a substitute for other scientific or ethical reflections and systems of values related to biodiversity conservation. Instead, valuation should be used to complement them and to provide information that can guide policymaking (Turner and Daily, 2008). CONCLUSION This study contributes to the valuation of Italian marine benthic ecosystems and estimates how their disappearance could lead to economic losses. The coastal underwater landscape attracts millions of scuba divers yearly and depends on the presence of healthy environments. The current threats imposed by human pressures on the marine environment, which are exacerbated by climate change, are degrading these ecosystems and the flow of their ESs. The potential loss of economic value may, in turn, negatively impact other economic activities directly or indirectly related to scuba diving tourism. By highlighting the losses that could be caused by habitat destruction, our estimates can be used to support sustainable and nondisruptive diving tourism activities and the implementation of local conservation policies. However, the consideration of an interest in coralligenous and seagrass should be treated with caution, as excessive scuba diving activities in these ecosystems may ultimately cause ecological damage. Balanced uses of diving sites must be considered by mitigating the negative effect of human presence pressures with the positive effects derived from increasing people's awareness of these ecosystems, especially for the P. oceanica meadows (Lucrezi et al., 2018b), see Figure 4. Different activities should be spread and routinely implemented, such as ecological guided tours and citizen-science projects that actively involve citizens in the data collection, using pictures, videos and dive computer data. Improving divers environmental awareness might be seen as a tool to support viable local-scale management solutions to dampen the degradation of coastal ecosystems. By highlighting a change in the WTP, our results indicate that the Italian diving sector might be willing to invest and implement sustainable actions for the protection of the marine ecosystems, thus fostering the development of sustainable jobs, as suggested by the EU Blue Growth initiative (EU Commission, 2017). DATA AVAILABILITY STATEMENT The datasets generated for this study are available on request to the corresponding author. ETHICS STATEMENT Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS SZ and DM developed the initial idea, designed and analyzed the questionnaire.
SZ performed the statistical, MNL and WTP analysis. ST and FM performed the LCL and WTP analysis. SZ and DM wrote the manuscript with contributions from all coauthors, discussed the results, and revised the manuscript. FUNDING This work was funded by the Acid.it Project "Science for mitigation and adaptation policies to ecological and socio-economic impacts of the acidification of Italian seas". ACKNOWLEDGMENTS Many thanks to Bruno Iacono, the FIAS president Bruno Galli, Flavio Gaspari, Gianluca Coidessa, and all the diving clubs who helped distribute the questionnaires.
Scope for feed-in tariff in a hydro-rich energy market−a case of Bhutan To promote the use of renewable energy (RE), several types of RE policies have been implemented globally. Among these, feed-in tariff (FiT) is one of the most accepted mechanisms of pricing policy. However, choosing the best policy for a market with a high RE penetration is a challenge. A case of Bhutan is considered in this paper as the source of electrical energy is predominantly hydropower. Additionally, a generous subsidy is provided by the government to keep the electricity tariff to a minimum. Recently, there has been an increase in the advocacy and motivation for other forms of RE sources (RES) to supplement hydropower and increase energy security in Bhutan. Bhutan aims to achieve a total of 20 MW of non-hydro RES by 2025 as per the RE policy of Bhutan. However, Bhutan still does not have an RE pricing policy, and therefore, there is a need to institute a suitable pricing mechanism to accommodate the penetration of the planned non-hydro RES. This paper discusses the challenges in introducing FiT for non-hydro RES in an electricity market dominated by hydropower in Bhutan. Subsequently, recommendations are made in the wake of the subsidised electricity tariff, which is the lowest in the region at 0.0171 USD/kWh for low voltage customers. FiTs for solar photovoltaic based on different categories of customers have been computed and proposed. Introduction Globally, there has been an increase in the use of renewable energy sources (RES), primarily to discourage the use of fossil-fuelled energy generation, reduce greenhouse gas emissions and increase energy security. Policy schemes such as feed-in tariff (FiT), net-metering and other mechanisms have been instituted to promote the use of RES. However, in regions where the share of RE is dominant, designing and implementing RE policy becomes a challenge. One such region is Bhutan, where the source of electricity is predominantly hydropower and a policy to provide incentives for the installation of RE is yet to be instituted. Bhutan is blessed with a huge hydropower potential of 36,900 MW with an annual production capacity of 154,000 GWh [1]. Chukha Hydropower Plant, the country's first hydropower plant (HP), was established in 1986-88. Since then, there has been considerable progress in the development of HPs in Bhutan. The total installed capacity of hydropower in Bhutan today stands at 2335 MW [1], which is 6.32% of the total potential. About 70% of the total generation is exported and sold to India [2], Bhutan's closest neighbour and ally, because of which there has been a surge in the country's economy. The electricity exported to India topped 4,465 MU fetching Indian Rupees (Rs.) 10,080 million (~ USD 136 million) in 2020, and the revenue from hydropower is approximately 7.5% of the country's gross domestic product (GDP) [2].
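For reference, dividing the quoted export revenue by the quoted export volume gives the implied average export price, a possible benchmark when possible FiT levels are compared with existing tariffs:

\[ \frac{\text{Rs } 10{,}080 \text{ million}}{4{,}465 \text{ MU}} \approx \text{Rs } 2.26\text{ per kWh} \;\; (\approx \text{USD } 0.030\text{ per kWh, using the quoted conversion}). \]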
In addition, Bhutan had set up an ambitious target of providing 100% electricity coverage to all households by 2020, but this target was brought forward, to be achieved by the year 2013 [3]. In May 2008, Bhutan and India signed the Protocol to the 2006 Agreement concerning Cooperation in the Field of Hydroelectric Power and agreed to achieve a total generation of 10,000 MW by the year 2020 in Bhutan [4]. However, considering the current pace of development and the complexities involved in the construction of HPs, and the fact that excessive dependence on a single resource for electricity would imperil the country's energy security, the country has recently started identifying RES that would be most appropriate for Bhutan. Wind power and solar power are the front runners among others in Bhutan. The theoretical solar potential in Bhutan is estimated at 6 TW, while the restricted technical potential is estimated at 12 GW [5]. As per [5], the wind energy potential is estimated at around 760 MW, mostly in the northern region of Bhutan, with Wangduephodrang district in the north accounting for around 19% of the total potential. Two districts in southern Bhutan, Chukha and Dagana, account for 12% and 10% of the total wind power potential, respectively [5]. Other renewable technologies such as biomass and biogas are mainly limited to cooking purposes in rural areas. An RE policy has not been implemented yet in Bhutan and there are no private generation companies (GenCos). As the country looks towards diversifying the sources of energy in Bhutan, there is a need to institute an appropriate RE pricing scheme. Among the existing RE policies, FiT is considered the most effective and widely implemented RE policy in the world [6]. This paper discusses the challenges in introducing FiT in Bhutan and subsequently proposes recommendations in the wake of a subsidised electricity tariff and a non-liberalised market dominated by hydropower. Energy sector in Bhutan The energy sector in Bhutan has been restructured to address the needs of the changing market scenario and to keep pace with global change. The restructuring of the energy sector in Bhutan is relatively recent: it was only in 2002 that the energy sector, which was fully state-owned, was restructured to derive maximum benefits for the country and the consumers. Bhutan Power Corporation Limited (BPC), established on July 1, 2002, serves as the system operator for Bhutan and is responsible for both transmission and distribution of electricity in the country. In addition, BPC also looks after the Bhutan Power System Operator (BPSO), small/micro HPs and a wind power plant comprising two wind power generators of 2×300 kW in Rubessa, Wangduephodrang. Therefore, the role of BPC is somewhat different from that of a conventional system operator elsewhere. Besides providing electricity services to the nation, BPC, as one of the companies of Druk Holding and Investments (DHI), is also mandated to generate revenue for the company. Figure 1 shows the overall structure of the energy sector in Bhutan. In general, the power and energy sector in Bhutan is comprised of only government entities and currently, there are no private GenCos, except as end-user consumers.
Renewable energy in Bhutan
Driven by the increasing domestic demand, the risks of relying on a single source of electricity and the need to harness other forms of clean energy, the Ministry of Economic Affairs of Bhutan (MoEA) came up with a draft RE policy in 2011, which was finalised and released as the 'Alternative Renewable Energy Policy 2013 (AREP)', the first edition of the renewable energy policy in Bhutan. Under this policy, Bhutan shall strive to generate 20 MW by 2025 through a mix of renewable technologies such as solar power, wind power, biomass, and others [8]. However, the targets set in AREP [8], as shown in Table 1, are comparatively lower than the available potential [5]. Although some focus on small hydropower is given in [8], no specific target has been set for small hydropower below 25 MW.

In Bhutan, the development and implementation of wind power and solar photovoltaics (PV) and their integration into the national grid is prioritised over other non-hydro RES [9]. The current priorities are small hydropower, wind power and solar PV, as they seem most appropriate and viable considering the rugged mountain terrain of Bhutan [8]. In a country where hydropower is the main source of electricity and a major portion of the country's economy depends on hydropower, penetration of other sources of renewable energy is seen as a challenge, primarily due to the overall costs and associated turnover. Keeping in view the challenges associated with hydropower generation, the threat to energy security due to reliance on a single source of electricity, and the increasing imports of fossil fuels, Bhutan has started diversifying its energy resources [8]. As per [9], major activities have been planned to promote RE and include the following types of RES: (i) wind power, (ii) solar PV, (iii) biomass, and (iv) small hydropower.

In 2014, two wind electric generators were installed in Rubessa, Wangduephodrang. The 2×300 kW wind power plant was established mainly to reduce the import of electricity during the lean period, increase sales during summer and diversify energy resources in the country. Although non-hydro RES for the generation of electricity (RES-E) has penetrated the market, BPC does not levy a separate tariff on its customers.

BPC is responsible for the transmission and distribution of electricity to residential, commercial, and industrial customers irrespective of the source of electricity. However, with the increase in the percentage of RE penetration, there will be a need to appropriately design and implement a pricing mechanism that suitably benefits investors as well as meets the objectives of the government. Additionally, the RE policy of Bhutan also indicates that a FiT scheme is to be introduced [8]. At the moment, non-hydro RES penetration has remained insignificant and, therefore, the electricity pricing modality and the tariff thereof continue to remain unchanged. Electricity generation from various sources in 2020 is shown in Fig. 2 [10].
Electricity generation from non-hydro RES-E will grow, and the share of these RES-E is expected to reach 20 MW by 2025. The DRE had planned to install three major renewable power plants: a 30 MW solar power plant at Shingkhar, Bumthang; a 17 MW solar power plant at Sephu, Wangduephodrang; and a 23 MW wind power plant at Gaselo, Wangduephodrang [11]. The planned solar power plants will be the first of their kind and the largest solar power plants in the country. While the plan to install the 17 MW solar PV plant in Sephu is in progress, the 30 MW solar PV plant in Shingkhar has been suspended due to community issues. Additionally, a utility-scale 180 kW grid-tied ground-mounted solar power plant was inaugurated on October 4, 2021, in Rubessa, Wangduephodrang as a pilot project. The solar power plant is expected to provide electricity to around 90 households in the locality through the grid. Therefore, FiT or other pricing mechanisms will have to be carefully planned to suit the needs of the country in view of the increase in non-hydro RES-E.

Feed-in tariff
A feed-in tariff in general is a government-driven policy to promote and support investments in renewable power generation. A FiT scheme enables RE GenCos, such as solar, wind or small hydropower, to receive an incentive for electricity generation. As per [7], there are two types of FiT models: (i) the market-dependent or feed-in premium (FiP) model and (ii) the market-independent or fixed-price model, where the main distinction is the dependency on the actual electricity market price. While the market-dependent model is associated with an additional premium on top of the market price, the market-independent model offers a guaranteed minimum payment for every unit of electricity injected into the grid. FiTs provide a long-term financial incentive to those who participate as GenCos and contribute electricity to the grid. Unlike the current system of unit-based pricing, the pricing is based on the conditions of the FiT contract with the utility company. A well-planned FiT can create a robust market for RE and encourage increased participation of RES-E. In addition to FiT, the following types of regulatory instruments are implemented in the promotion of RE: (i) feed-in premium, (ii) auction, (iii) quota, (iv) certificate system, (v) net-metering, (vi) mandate, and (vii) registry.

Fiscal incentives and other benefits such as grid access, access to finance and socio-economic benefits are also being implemented in the promotion of RE technologies. FiT schemes have been conducive to RE development since the early 2010s; however, the focus has now shifted from FiTs to competitive tendering schemes such as auctions [12]. In [13], an analysis is carried out to determine the effectiveness of the FiT policy, concluding that there are multiple avenues that can be leveraged to achieve a larger representation of RE. There is a preference for auctioning, net-metering and mini-grids as alternative mechanisms to FiT policies. It is reported that the implementation of these options should be done effectively to avoid the challenges faced with the FiT policy [13]. In Malaysia, the successful utilisation of RES for electricity generation and the global implementation of FiT have encouraged the implementation of FiT. Although FiT can be conveniently implemented for biomass, biogas and solid-waste energy, FiT has not attracted the expected investment in solar and wind RES due to their requirement of higher FiTs [14]. In Taiwan, the government has adopted FiTs coupled
with a bidding process to determine subsidised tariffs, owing to the difficulties in setting the tariff each year and because FiTs often cannot keep up with market and technology-varying conditions [15]. Several nations have introduced technology-specific FiTs combined with net-metering schemes.

Although FiT is an effective RE policy, it is interesting to note that several countries have either revised or discontinued their FiT policy. In a few cases, owing to reductions in solar PV costs and to prevent over-subsidisation of the markets, FiTs were replaced by combined policies [16]. More countries have opted for the auction modality over FiTs as their renewable energy pricing policy [17]. The number of countries following the FiT and auction modalities from 2014 to 2016 is shown in Fig. 3 [17]. As seen from the trend, by the end of 2016, more than 70 countries had adopted auctions as opposed to FiT. A substantial increase in the number of countries adopting auctions is noted, mostly due to their flexibility of design and because they can be made country specific.

Feed-in tariff in Bhutan
At the moment, there is a lack of FiT and other incentive schemes for RES in Bhutan. It would require a robust enabling policy environment for Bhutan to achieve a total of 20 MW of alternative RE generation by 2025 as planned. To enable a promising policy environment for alternative RE resources in Bhutan, the following are considered the highest priorities [18]: (i) formulation of a renewable energy master plan to identify, assess, and forecast the resources, (ii) formulation of a feed-in tariff framework, and (iii) preparation of implementation rules and guidelines for the alternative renewable energy policy.

With a feed-in policy in place, regulatory agencies such as the Bhutan Electricity Authority (BEA) and the DRE could initiate the development, implementation, and monitoring of the FiT programme. Further, the lack of private GenCos in Bhutan has not conditioned the government to pursue such feed-in policies. Further, in a study conducted by [19], it has been opined that the absence of a FiT policy in Bhutan is a barrier to promoting non-hydro RES. Based on the nature of the feed-in policy (feed-in tariff or feed-in premium), several types of incentives can be developed. As per [8], BEA shall design and develop the following: (i) the feed-in tariff as per the principles contained in the policy, and (ii) norms related to grid connectivity/interfacing and load dispatch, etc. Considering all circumstances, the current choice of RE pricing policy in Bhutan is FiT [8].

Electricity pricing in Bhutan
Bhutan's current electricity market structure is unique, as all components of the power sector (generation, transmission, and distribution) operate under one entity, the DHI, which is the largest and only government-owned holding company in Bhutan. To encourage additional participants in the electricity market, the government will have to devise various strategies, and this will necessitate a major reform of the electricity market structure. Further, Bhutan currently exports electricity to only a single customer, which is India. To benefit more customers, enhance cross-border energy trade, and increase energy security, Bhutan will have to continue exploring opportunities to export energy to other neighbouring countries such as Bangladesh, Nepal, and Myanmar. Additionally, Bhutan's dependency on energy export from hydropower, a single source of energy, poses a challenge to its energy security.
Electricity pricing in general is determined by the generation cost, transmission cost, distribution cost, interest and depreciation of assets, salaries of employees, and profit. These costs further depend on numerous factors, such as connected load, load conditions, demand factor, load factor, diversity factor, plant capacity factor, etc., which makes tariff fixation in an electricity market a rigorous exercise. BPC, as the state-owned system operator, levies electricity charges on customers, and these prices are regulated by BEA. The past and current end-user tariffs are shown in Table 2 [20,21].

Over the years, there has been a steady rise in the domestic electricity tariff based on numerous factors. Although small, the rise has been steady, and an extrapolation of the tariff gives an average annual increase of 1.8%. The variation in the tariff for low voltage (LV) customers from 2017 to 2022 (projected) is shown in Fig. 4. It can be concluded that the rise over the years has been kept to a minimum due to the subsidies provided by the government.

The existing low tariff has enabled significant success in the overall socio-economic growth of Bhutan. There has also been an increase in small, medium, and large energy-intensive industries due to access to reliable and affordable electricity. This is evident from the government's initiative of establishing various special economic zones in the country [7], where citizens are encouraged to establish various industries through government-led schemes. However, hydropower projects have their own share of shortfalls. Environmentalists continue to question whether hydropower projects in Bhutan can really be considered 'clean' sources of electricity.

In addition, there have been bottlenecks in the construction of hydropower projects, such as the 1200 MW Punatshangchhu-I, whose construction began in 2008 and was scheduled to be completed in 2016, but whose completion has been further delayed. The project has been facing serious challenges due to the geological conditions encountered in the construction of the dam. The delay in the completion of the project has caused serious financial and resource implications for the government.

Considering all circumstances, the government has now started embracing other forms of RE. Several projects are in the pipeline and construction will begin soon. A pricing modality needs to be developed to encourage the establishment of non-hydro RES-E projects in the country. As per [8], all RE projects for electricity generation, except for mini, micro, and small hydropower, are to be developed under the Build, Own, Operate (BOO) model.

Electricity tariff computation
The electricity tariff payable to BPC is derived as follows [23]. The cost of supply CoS is the sum of the generation cost G_C from DGPC and the network cost N_C (transmission and distribution) from BPC:

CoS = G_C + N_C    (1)

CoS includes the allowances for operation and maintenance costs, depreciation, return on fixed assets (i.e., including an allowance for company taxation), cost of working capital and any regulatory fees, duties, or levies that the licensee is liable to pay.
The network cost N_C includes the operation and maintenance costs, depreciation, return on fixed assets (i.e., including an allowance for company taxation), cost of working capital, any regulatory fees, duties, or levies that the licensee is liable to pay, and the cost of losses incurred to transmit, distribute and supply electricity to the customers. S_R is the subsidy granted by the Royal Government of Bhutan (RGoB) to BPC to keep the MV and LV customer tariffs lower than the CoS.

The end-user tariffs for LV, MV and HV work out as shown in Table 3 for the 2017 tariff sample [23]. From the table, we see that the unsubsidised unit price (cost of supply) for LV customers across all blocks is Nu. 5.81 (USD 0.0776). The subsidy provided by the government ranges from 0 to 5.81 Nu./kWh (0 to 0.0776 USD/kWh), based on the customer type and consumption. For LV customers, it ranges from 1.91 to 5.81 Nu./kWh (0.0255 to 0.0776 USD/kWh), which works out to a minimum of 32.87% and a maximum of 100% of the cost of supply (a short numerical sketch of this arithmetic is given below). Likewise, a subsidy is also provided to MV customers. Therefore, it is evident from Table 3 that the government provides an electricity subsidy based on different categories of customers, mainly supporting those in rural areas.

In Bhutan, all generation plants fully owned by the RGoB must provide 15% of their annual generation as royalty energy to the government free of charge [24]. This enables the government to grant significant subsidies so that the domestic tariff is kept at a minimum. To ascertain the impact of the subsidy provided by the government, which eventually benefits the end-user customers in Bhutan, electricity pricing in the South Asian region is compared. Table 4 gives a comparison of the average electricity pricing across South Asian countries, and Fig. 5 compares the average electricity price for household customers as of October 2015 [24]. From the comparisons, it is evident that Bhutan has the lowest unit price for household customers among the South Asian countries, while Sri Lanka has the highest unit price for a monthly consumption of 600 kWh. The comparison also indicates that the average unit prices for household customers, as well as those for commercial and industrial customers, are the lowest in Bhutan. Unfortunately, this creates bigger hurdles for those wishing to venture into non-hydro RES-E in Bhutan. Consequently, designing a renewable pricing policy becomes an even bigger challenge.

Pricing mechanism
In the context of the current discussion, the following two objectives, among others given in [25], are relevant and important: (i) to promote a safe and reliable supply of electricity throughout the country (leading to increased energy security), and (ii) to promote the development of renewable energy resources.
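Before turning to the design of the FiT itself, the short sketch below makes the block-wise subsidy arithmetic quoted above concrete. It reuses only figures given in the text (a cost of supply of Nu. 5.81/kWh, subsidy extremes of 1.91 and 5.81 Nu./kWh, the lowest LV tariff of Nu. 1.28/kWh and a conversion rate of roughly Nu. 74.8 per USD); it is a minimal illustration, not a reproduction of the full schedule in Table 3.

```python
# Minimal sketch of the LV subsidy arithmetic quoted in the text (2017 sample).
# All figures below are taken from the paper; nothing else is assumed.
NU_PER_USD = 74.8          # approximate conversion rate quoted for Feb 2022
COS_LV = 5.81              # Nu./kWh, unsubsidised cost of supply for LV customers

def subsidy_share(subsidy_nu_per_kwh: float, cos: float = COS_LV) -> float:
    """Return the subsidy as a percentage of the cost of supply."""
    return 100.0 * subsidy_nu_per_kwh / cos

for subsidy in (1.91, 5.81):                    # quoted LV subsidy extremes
    print(f"subsidy {subsidy:.2f} Nu./kWh -> {subsidy_share(subsidy):.2f}% of CoS")
# -> 32.87% and 100.00%, matching the range quoted in the text

lowest_lv_tariff_nu = 1.28                      # lowest LV tariff in the schedule
print(f"lowest LV tariff = {lowest_lv_tariff_nu / NU_PER_USD:.4f} USD/kWh")  # ~0.0171
```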
It has been established that a well-designed FiT scheme offers multiple benefits to all stakeholders, as the tariff can be both cost-effective and cost-efficient. However, in an RE-rich market with subsidised electricity prices, designing an effective FiT scheme will demand careful analysis and planning. Additionally, once these subsidies are withdrawn, private players may see promising prospects while the general population may end up paying an increased tariff. Further, the introduction of a FiT without careful analysis may drive the existing electricity tariff to rise considerably. This may put economic pressure on the general population, while only a handful of private participants operating as GenCos may benefit financially. Currently, there is a lack of standards for the private sector to participate in RE generation, and the rate of subsidy provided by the government makes it difficult for private individuals and business entities to participate in the RE business in Bhutan. This also poses one of the biggest challenges in implementing a FiT policy, aggravated by the lack of adequate grid infrastructure to support such schemes. As the overall design of a FiT scheme is also influenced by the market design and the availability of resources, the FiT scheme may not allow an effective market integration of RE. FiT in a non-liberalised, monopolised electricity market is not encouraging, and becomes viable perhaps only when there is an adequate number of private participants. Further, as recommended by [26], FiT policies must be kept transparent and simple, and must consider local conditions such as the RES-E potential. This work proposes a FiT scheme taking into consideration the following factors: (i) energy is almost entirely hydropower based; (ii) a subsidy is provided by the government, which results in a lower tariff for end-users; (iii) there is a lack of private participation and of knowledge of the benefits of investing in RES.

Globally, favourable policies such as FiTs, renewable portfolio standards, tenders, and tax incentives have contributed to the growth in RES-E, although these were aimed at overcoming the initial cost barrier to the development of RES-E compared with conventional sources of generation [27]. Further, an increase in RES-E penetration could impact electricity prices and, consequently, the revenue opportunities for RES-E plants. It will be a challenge to maintain rapid RES-E supply growth against decreasing costs and the technical and market-operation challenges associated with high penetration levels of RES-E [27]. In the case of Bhutan, where such initiatives are yet to take place, modifications to the FiTs will have to be made in future to address the needs of investors and the government.

The levelised cost of energy (LCOE), which serves as a cost indicator for various RE technologies, has an influence on the design of pricing strategies for RES-E. LCOE will continue to decrease for solar PV and wind power, while for hydropower it will increase, primarily due to inflation and labour costs [28][29][30]. Therefore, it is more viable to opt for non-hydro RES-E, especially solar PV.
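The LCOE figures adopted in this work are taken from published compilations rather than computed; for reference, the sketch below shows the standard discounted-cash-flow definition of LCOE. Every input value in it is an illustrative assumption, not a Bhutan-specific figure.

```python
# Generic discounted-cash-flow definition of LCOE, shown for reference only.
# Every input value here is an illustrative assumption, not a Bhutan-specific figure.
def lcoe(capex: float, annual_opex: float, annual_energy_kwh: float,
         lifetime_years: int, discount_rate: float) -> float:
    """Levelised cost of energy (currency per kWh): discounted costs / discounted energy."""
    disc_costs = capex
    disc_energy = 0.0
    for year in range(1, lifetime_years + 1):
        factor = (1.0 + discount_rate) ** year
        disc_costs += annual_opex / factor
        disc_energy += annual_energy_kwh / factor
    return disc_costs / disc_energy

# Hypothetical 5 kW rooftop PV system (the residential size assumed in the paper).
print(round(lcoe(capex=5000.0, annual_opex=50.0, annual_energy_kwh=7500.0,
                 lifetime_years=25, discount_rate=0.08), 4), "USD/kWh")
```

Under such a definition, a published figure like the 0.063 USD/kWh adopted later in this paper corresponds to a particular set of capital, operating and discounting assumptions.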
In terms of the evolution of incentive schemes, [31] gives a historical perspective on the introduction of the Grid Feed-in Law in Germany in 1990, which was replaced in 2000 by the FiT scheme based on the Renewable Energy Sources Act (EEG 2000). They further give the underlying reasons for moving from a fixed FiT scheme to auctions since 2016, although exceptions have been made for smaller installations of less than 750 kW, which continue to receive FiTs. In Germany, solar PV investors between 300 and 750 kW are provided with the following two options [32]: (i) a tender scheme for utility-scale PV without self-consumption, and (ii) a FiT, halved compared with smaller systems, under which self-consumption is allowed.

For the existing market in Bhutan, the subsidy provided by the government is based on 'blocks' of consumers. Currently, there are no grid regulations and no incentives for private participants. Therefore, to determine a FiT for effective implementation, the following components of FiT pricing, as per the traditional definition of FiT, will need to be considered: (i) generation tariff – where the generator is paid for every kWh of electricity generated; (ii) export tariff – where the generator is paid for every kWh injected into the grid; (iii) rate of own usage – the rate at which deficit energy is bought from the grid.

The existing pricing modality in Bhutan is shown in Fig. 6. Customers pay a subsidised electricity tariff E_T, and the rate at which power is exported to the third country, PE_E, is based on the power purchase agreement (PPA) signed with that country. In the absence of fossil-fuelled energy sources in the country, and given the existence of a well-established energy sector with hydropower as the primary source, support for other forms of RE is not well pronounced. Further, as per the energy policy, there is a cap of 25 MW on total generation from non-hydro RES-E by 2025. Therefore, based on the non-hydro RES-E prioritised in [8], a suitable FiT scheme needs to be designed for solar power and wind power.

While designing the scheme, it must be ensured that non-hydro RES-E investors do not suffer any losses. FiT is based on several economic and cost aspects, including the LCOE of the various RE technologies. The LCOE of residential PV systems in 2019 was between 0.063 USD/kWh and 0.265 USD/kWh [33]. India and China had the lowest country/market average LCOE for commercial PV up to 500 kW. Further, for onshore wind, the global weighted-average LCOE in 2019 was 0.053 USD/kWh. The most competitive weighted-average LCOE, below 0.050 USD/kWh, was observed in India and China, and the LCOE is expected to reduce further [33].

While information on the LCOE of different RE technologies in Bhutan would be ideal for this study, reliable information and data are not available. Therefore, the figure for India, 0.063 USD/kWh, is used instead, owing to its proximity and similar market scenario. The average residential load is assumed to be 5 kW. To assess LCOE, since it has an influence on the pricing strategy, LCOE values in selected countries have been reviewed. Table 5 shows the LCOE for residential solar PV and onshore wind power in various countries. Further, the lifetime of solar PV is between 20 and 35 years [33]; therefore, it is safe to assume a solar PV project lifetime n of 25 years in the current work. For wind power plants, [34] has concluded that the useful life of a wind power plant has changed over time and today stands at an average of 30 years.
An investor or a homeowner will only opt for non-hydro RES-E if it offers a clear advantage over electricity from the grid. If TC_RES-E is the total cost over the lifetime n of generating electricity from non-hydro RES-E and TC_grid is the total earnings over the lifetime n from energy injected into the grid, the following preliminary condition for an investor to invest in RES-E must be satisfied:

TC_RES-E < TC_grid    (2)

The total costs TC_RES-E and TC_grid are given by Eq. (3), where E_g is the total energy generation over n years, m is the number of months in a year, E_i is the kWh injected into the grid, and E_T is the retail LV tariff with an annual increase r of 1.8% per year, as indicated in Fig. 4. The simplest form of determining the total costs is considered in Eq. (3); it does not account for taxes, discount rates and other associated costs pertaining to Bhutan.

Results
Considering the current market scenario, it is proposed that the FiT be pegged to the block-wise subsidy to enable each category of customer to make net savings for injecting an equivalent number of kWh into the grid. The FiT is computed based on the maximum subsidy S_R_max and the electricity tariff E_T for each category of LV customer. As a sample, the FiT for LV Block-I (others), FiT_LVB1_o, is fixed as:

FiT_LVB1_o = S_R_max − E_T_LVB1_o    (5)
FiT_LVB1_o = 5.81 − 1.28 = 4.53 Nu./kWh = 0.0606 USD/kWh,

and the LCOE is 0.063 USD/kWh. The proposed FiTs, together with the net savings that each category of customer/investor can make, considering a plant life of 25 years with a 1.8% annual increase in the retail tariff, have been computed and are shown in Table 6. The preliminary condition for an investor to opt for non-hydro RES-E, based on Eq. (2), is shown in Fig. 7. The following conclusions can be drawn from Table 6 and Fig. 7: (i) the proposed FiT will not be effective for LV customers in rural and highland areas, Block I (others) and Block II (all), as they receive subsidies of 0.0777, 0.0777, 0.0606 and 0.0418 USD/kWh, respectively; (ii) only LV Block III and LV Bulk customers, with net injections of 500 and 1000 kWh per month respectively, will be able to make savings if they opt for RES-E (solar PV) – the proposed FiTs are 0.0290 and 0.0223 USD/kWh respectively, when the export tariff component is pegged to the subsidy; (iii) to encourage the maximum number of participants in the RE market, a generation tariff pegged to the lowest tariff (0.0171 USD/kWh) can also be provided in addition to the export tariff, which will result in higher net savings for all categories of LV customers, including rural and highland customers.

Further, FiTs for different countries in the region, and for other areas where the scheme has gained significant success, are shown in Table 7 for the purpose of drawing a comparison with the proposed FiT. For this comparison, the lowest FiT values in the respective countries are considered. The comparative analysis is shown in Fig. 8. From it, it can be concluded that the proposed FiT in Bhutan remains the lowest for all categories of customers, as it is pegged to the rate of subsidy provided by the government. This will not be encouraging for private investors who wish to participate in a FiT scheme in Bhutan, and therefore a more rigorous scheme will have to be instituted to encourage private investment.
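The sketch below gives one plausible, simplified reading of the comparison in Eq. (2) together with the Eq. (5) peg: lifetime generation is costed at the assumed LCOE, while grid earnings are valued at the block retail tariff escalated at 1.8% per year. It is not necessarily the formulation behind Table 6. The 200 kWh/month net injection is an assumed figure; the other inputs (Nu. 5.81, Nu. 1.28, 25 years, 0.063 USD/kWh, Nu. 74.8 per USD) are taken from the text.

```python
# One plausible, simplified reading of the Eq. (2) comparison and the Eq. (5) peg.
# The 200 kWh/month net injection is an assumed value; the remaining inputs are
# the figures quoted in the text. This is not a reproduction of Table 6.
NU_PER_USD = 74.8
S_R_MAX = 5.81             # Nu./kWh, maximum block subsidy (equal to the LV CoS)
E_T_LVB1_O = 1.28          # Nu./kWh, LV Block I (others) retail tariff
R = 0.018                  # 1.8% assumed annual tariff escalation
N_YEARS = 25               # assumed solar PV project lifetime
LCOE_USD = 0.063           # USD/kWh, Indian residential PV figure used as a proxy

# Eq. (5): FiT pegged to the block-wise subsidy.
fit_nu = S_R_MAX - E_T_LVB1_O
print(f"FiT_LVB1_o = {fit_nu:.2f} Nu./kWh = {fit_nu / NU_PER_USD:.4f} USD/kWh")

# Eq. (2)-style check for an assumed net injection of 200 kWh per month.
e_i_monthly = 200.0
tc_res_e = LCOE_USD * e_i_monthly * 12 * N_YEARS              # lifetime generation cost
tc_grid = sum(e_i_monthly * 12 * (E_T_LVB1_O / NU_PER_USD) * (1 + R) ** (year - 1)
              for year in range(1, N_YEARS + 1))              # lifetime grid earnings
print(f"TC_RES-E = {tc_res_e:.0f} USD, TC_grid = {tc_grid:.0f} USD, "
      f"invest if TC_RES-E < TC_grid: {tc_res_e < tc_grid}")
```

With these illustrative inputs the condition fails for the heavily subsidised Block I (others) tariff, which is in line with conclusion (i) above, although the numbers themselves should not be read as reproducing Table 6.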
To encourage private participants to invest in non-hydro RES-E in a country with a high share of RE (hydropower), and to achieve a reasonable return on investment, the following recommendations are made: (i) for a system with a block rate tariff, FiTs can be fixed based on the 'blocks' of kWh injected into the grid; (ii) the export tariff component of the FiT can be pegged to the block rate tariff, so that LV customers continue to draw benefits from their investment in non-hydro RES-E, and any sale of electricity after n years can be considered as 'profit'; (iii) the generation tariff (FiT) can be pegged to the lowest subsidy rate, although, for the government to make a marginal profit, the FiT should remain lower than or equal to the power export tariff (PE_E) based on the power purchase agreement, which may also include a framework for future bilateral cooperation in the field of energy between the two nations; (iv) in place of a 'generation tariff', a tax credit can also be provided – as the percentage of non-hydro RES-E installation is going to be minimal due to the 25 MW cap on the installation of RES-E, this will not place any major financial constraint on the government, and this strategy will encourage a greater number of participants.

The above analysis pertains to rooftop solar PV; however, LCOE is technology-based, and to make the proposal generic, a pricing modality is proposed as shown in Table 8 for both wind and solar PV for a period of 25 years. It must be noted that the retail tariff as well as the power export tariff keep varying based on the market scenario and the PPA, respectively. These variations must be taken into consideration while designing the FiT, so that neither the government nor the investor incurs a loss. Although many RE-mature countries like the United Kingdom and Germany are moving towards tendering schemes, for Bhutan, where there is a lack of (i) a deregulated market, (ii) an RE pricing modality, and (iii) private participants, the pricing modality can be initiated with the FiT scheme as proposed. With experience and an increase in the number of participants, the alternatives shown in Table 9 can be proposed in place of FiT.

Notes to Table 7: (a) based on [35], since it is the closest Indian state to Bhutan; although competitive bidding is largely followed, the proposed FiT for 1 MW to 5 MW solar PV in 2021 was 3 Rs./kWh (0.0405 USD/kWh) [35]; (b) provisional solar PV FiTs for the fiscal year April 2020–March 2021 in Japan, based on the Ministry of Economy, Trade and Industry (METI) [36]; (c) FiT based on large PV projects in three different regions in China [37]; (d) FiT for the Los Angeles Department of Water & Power [38]; (e) FiT in Germany [39].

Conclusion
In a market with a high share of RE, promotion of and investment in additional RES-E is seen as a challenge. With the increase in the share of RE installations globally, several countries have started moving away from the traditional FiT scheme towards tendering and other revised schemes. In the case of Bhutan, it is a challenge to promote and invest in non-hydro RES-E, since the electricity tariff is the lowest in the South Asian region. However, apart from the financial aspects, there is a need to consider energy security and related challenges, such as the dependency on a single source of energy in the country. Therefore, as Bhutan gradually begins diversifying its energy resources, a suitable RE policy will need to be instituted. Two RE technologies, solar power and wind power, are considered in this paper. A FiT for solar power has been computed and proposed for different blocks of customers. The analysis indicates that LV Block I customers would
not benefit from the FiT scheme, as they receive subsidies ranging from 0.0418 to 0.0777 USD/kWh. Only two categories of customers, LV Block III and LV Bulk, are able to make savings if they opt for the FiT scheme, at 0.0290 and 0.0223 USD/kWh respectively. Further, two favourable alternatives, (i) net-metering and (ii) auction, have been proposed. However, these recommendations are subject to change based on the market scenario and government policies. A more robust RE policy will need to be instituted to benefit all players.

Limitations and future work
The proposed schemes are based on a simplified method in which assumptions are made on the LCOE of the two variable renewable energy (VRE) sources, due to the lack of data specific to Bhutan. The recommendations made in this paper would serve as a guideline for decision making, planning, and instituting a more robust system to introduce an RE policy for non-hydro RES-E. As it would be a challenge to introduce an RE policy in Bhutan, primarily because the source of electricity is hydropower and generous subsidies are provided by the government, the cooperation of all players (MoEA, BEA and DHI, i.e. BPC and DGPC) would be necessary to work out a suitable RE policy. One of the immediate research tasks would be to determine the LCOE of the various RE technologies in Bhutan, in particular wind power and solar PV. Thereafter, comparisons can be drawn between different RE policies suitable for Bhutan's unique energy market on the basis of LCOE and the subsidies provided by the government.

Table 9 Alternatives to the FiT scheme:
1. Net-metering – there is no time-bound obligation or project time period; additional design complications, such as those related to the block rate tariff, project cost and LCOE, will be minimised; it allows investors to sell or use the units generated at any time without a specific contract.
2. Auction – as the bid is evaluated by the government, it offers flexibility to make appropriate decisions; it creates a competitive electricity market.

The electricity prices in Bhutan are determined by the BEA and fixed in accordance with the Tariff Determination Regulation 2016 of Bhutan. The regulation provides for the electricity prices in accordance with the Electricity Act of Bhutan 2001 and the Domestic Electricity Tariff Policy 2016. These prices apply to all licensees, including the following [22]: (i) generation licensee, (ii) transmission licensee, (iii) distribution and supply licensee, and (iv) system operation licensee.

Fig. 1 Structure of the energy sector in Bhutan
Fig. 3 Number of countries adopting FiT and auction [17]
Fig. 4 Increase in the LV tariff from 2017 to 2022
Fig. 5 Comparison of the average electricity price for household customers in South Asian countries
Fig. 7 Condition for investors to opt for RES-E
Fig. 8 Proposed FiT and FiTs in countries where it has been successfully implemented
Table 2 Electricity tariff in Bhutan. *For 2017–2018, it is only up to 300 kWh instead of 500 kWh. **The conversion rate as of Feb 2022 is approximately USD 1 = Nu. 74.8
Table 4 Average electricity price in South Asian countries in US cents/kWh at unity power factor [24]. USD conversion rate as of Oct 31, 2015: USD 1.00 = Nu. 65.40
Table 6 Comparison of costs and the proposed FiT. *The upper limits have been suitably assumed for the purpose of the study
Table 7 FiTs (USD/kWh) in select countries for solar PV (see the notes listed above)
Table 8 Proposal for FiT
v3-fos-license
2021-10-10T06:17:08.710Z
2021-10-08T00:00:00.000
238528939
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-021-99526-z.pdf", "pdf_hash": "22b64b09440814b942b42ab872bc0c0f5c138167", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42134", "s2fieldsofstudy": [ "Geology", "Environmental Science" ], "sha1": "0f1e5c03c7735cead65fa71eeaf4192e47fd53fa", "year": 2021 }
pes2o/s2orc
Goldilocks at the dawn of complex life: mountains might have damaged Ediacaran–Cambrian ecosystems and prompted an early Cambrian greenhouse world

We combine U–Pb in-situ carbonate dating, elemental and isotope constraints to calibrate the synergy of integrated mountain-basin evolution in western Gondwana. We show that deposition of the Bambuí Group coincides with closure of the Goiás-Pharusian (630–600 Ma) and Adamastor (585–530 Ma) oceans. Metazoans thrived for a brief moment of balanced redox and nutrient conditions. This was followed, however, by closure of the Clymene ocean (540–500 Ma), eventually landlocking the basin. This hindered seawater renewal and led to uncontrolled nutrient input, shallowing of the redoxcline and anoxic incursions, fueling positive productivity feedbacks and preventing the development of typical Ediacaran–Cambrian ecosystems. Thus, mountains provide the conditions, such as oxygen and nutrients, but may also preclude life development if basins become too restricted, characterizing a Goldilocks or optimal level effect. During the late Neoproterozoic-Cambrian fan-like transition from Rodinia to Gondwana, the newborn marginal basins of Laurentia, Baltica and Siberia remained open to the global sea, while intracontinental basins of Gondwana became progressively landlocked. The extent to which basin restriction might have affected the global carbon cycle and climate, e.g. through the input of gases such as methane that could eventually have contributed to an early Cambrian greenhouse world, needs to be further considered.

Although plate tectonics under regimes of shallower and hotter subduction might have operated since the Mesoarchean ("Proterozoic-style plate tectonics" 4), a change to modern-style plate tectonics characterized by deep subduction and colder thermal gradients apparently occurred in the Neoproterozoic, as suggested by the global distribution of ophiolites, blueschists and UHP (Ultra-High Pressure) rocks 4,5, especially in the Pan-African/Brasiliano orogens that formed during the amalgamation of Gondwana 7,8,11,12. This is a consequence of secular cooling of the mantle, which by the end of the Neoproterozoic might have cooled sufficiently to allow widespread cold and dense lithosphere slabs to collapse into the underlying asthenosphere without losing their coherence. Deep subduction of felsic continental crust and colder geotherms led to continental collision zones with deep roots and lower density, resulting in significant relief generated by isostatic rebound 13. Thus, the inception of modern-style plate tectonics produced large mountain belts up to thousands of km long and topographically higher than pre-Neoproterozoic orogens. High relief in mountainous areas can be maintained for at least ca. 40 Myr after the onset of continental collision 14, providing detritus for long-lived adjacent sedimentary basins over broad timescales. As an important outcome, extensive Ediacaran-Cambrian sedimentary basins developed throughout Gondwana, fed by denudation of the recently uplifted mountain belts. A late Ediacaran to early Cambrian metazoan biota that appears for the first time in the stratigraphic record has been described [15][16][17] in all of the basins that had the Pan-African/Brasiliano orogens as their main provenance areas.
To test how interlinked the development of mountain belts and metazoan-bearing sedimentary basins was in western Gondwana, we performed in-situ Laser Ablation-Inductively Coupled Plasma Mass Spectrometry (LA-ICPMS) determinations of U-Pb, Sr isotope and trace element data on samples from distinct carbonate levels in the Ediacaran-Cambrian Bambuí Group of east-central Brazil, along with novel stepwise Pb leaching mineral dating and Sr, C and O isotope data. We produced a comprehensive compilation of C, Sr, Nd isotope and detrital zircon data for different sections of this basin, and discuss other available proxies in light of the integrated framework of orogen-basin evolution proposed here. The Bambuí Group is ideal for testing the hypothesis of integrated metazoan-mountain evolution, as it is located at the core of western Gondwana and surrounded by collisional mountain belts developed diachronously around the São Francisco paleocontinent 18, between 630 and 600 Ma (Brasília Orogen to the southwest) 18, 585-530 Ma (Araçuaí-Ribeira Orogen to the east) 19 and 540-500 Ma (Araguaia-Paraguay-Pampean Orogen to the northwest) 20. The goal is to investigate the influence of the uplifting mountains on the sedimentary and biological record of the first metazoan-bearing basins through tectonic restriction of epeiric seas and changes in continentally-derived nutrient influx through time. We argue that progressive basin restriction by the surrounding mountains might have damaged the conditions for complex life development, and we discuss the possible global outcome of widespread basin restriction in Gondwana and its effects on global biogeochemical cycles.

Geological context
Deposition of the Bambuí Group spanned the whole Ediacaran and lower Cambrian over the São Francisco paleocontinent [21][22][23] (Fig. 1a-d), comprising a mixed carbonate-siliciclastic succession. The complete stratigraphic package is illustrated in the schematic lithostratigraphic column of Fig. 1e. Glacial diamictite at the base of the group rests atop striated pavements 24 and is probably related to the global Marinoan glaciation 21,25. The overlying Sete Lagoas Formation represents the basal carbonate succession of the Bambuí Group and is subdivided into two members (Fig. 1c,d). The lower Pedro Leopoldo Member covers the glacial deposits or onlaps the crystalline basement and comprises a typical early Ediacaran cap carbonate succession. At the base, a meter-thick patchy cap dolostone unit shows decreasing-upwards δ13Ccarb from −3.2‰ down to −6.5‰ and associated δ18O at −5‰ 21 (all values reported relative to VPDB). The cap dolostone is succeeded by an up to a couple of hundred meters-thick limestone containing pseudomorphs of calcite after original aragonite crystal fans with negative δ13Ccarb 21,23,25. Phosphorite deposits 26, apatitic cements 27 and centimetric barite layers with a characteristic Δ17O anomaly 28, probably caused by perturbations in the ozone layer due to excess CO2 accumulated during the Marinoan glaciation, are locally recognized 27,28. Cr isotope and geochemical data (negative Ce anomalies, low Th/U ratios, Mo and U contents, and Fe speciation data 25,29) suggest pulsed oxygenation of the post-glacial ocean due to meltwater contribution 25. The top of the Pedro Leopoldo cap carbonate succession is marked by a depositional hiatus or condensed section, recognized in both seismic and isotopic breaks 21,30.
Although some suggestions that this surface might represent an erosional unconformity have been put forward, no convincing field evidence other than dissolution features, tepees, mud cracks, dolomitization and other facies changes, as well as subtle variations in regional dip 30, has yet been described, so it is safer to assume this interval to be a depositional hiatus, here defined as the Lower Bambuí Hiatus (LBH). Above the LBH lies the couple of hundred meters-thick Lagoa Santa Member, comprising a second crystal-fan-bearing limestone level overlain by laminar and columnar stromatolites and thrombolites with δ13Ccarb at ca. 0‰. This intermediate succession contains some putative trace fossils and sparse, loosely packed Cloudina sp. 16 shells and Corumbella werneri 17 fragments 15. The δ13Ccarb values rise quickly upwards to >+10‰, reaching extreme values of ca. +16‰ 21,23, and the macrofossil content virtually disappears. These δ13Ccarb values are anomalously high when compared to Ediacaran global curves and persist upsection for around 350 m, spanning the siltstone-dominated Serra de Santa Helena Formation and dark storm-related limestone of the Lagoa do Jacaré Formation, and defining the Middle Bambuí Excursion (MIBE) 23. Above the Cloudina-bearing interval, the remainder of the Bambuí Group is mostly devoid of macrofossils, and anoxic conditions prevailed throughout the water column, as shown by isotopic, elemental and Fe speciation data 23,29,31,32.

In-situ trace element data indicate wide variations between the three studied sections. Box-Whisker plots (Fig. 4) show that there is a circa five-fold increase in Al-normalized trace metals such as Zn, Ni, Cu, and Ba in the Road Police section compared to the Sambra and Tatiana sections. The PAAS-normalized REE + Y patterns are LREE-depleted or MREE-enriched, with Y/Ho ratios between 35 and 61 and no Eu anomalies. The main distinctive traits between the sections are the Ce anomalies, which are prominently negative in the Sambra section (down to 0.2) and variable to null in the Tatiana and Road Police sections.

Discussion
Mountain building was diachronous in western Gondwana due to the protracted collision of the São Francisco-Congo, West African, Paranapanema/Rio de La Plata, Pampean(?), Amazonian and Kalahari paleocontinents and the minor intervening continental blocks 18 (Figs. 5, 6). Closure of the Goiás-Pharusian ocean generated the first major collisional belt, constrained at ca. 630-600 Ma, extending from the Tuareg Shield in the Saharan region to the […]. Some 15 Myr later, syn-collisional crustal anatexis and peak metamorphic conditions were attained in the Araçuaí-Ribeira Orogen to the east of the São Francisco paleocontinent, marked by widespread 585-530 Ma aluminous granites and high-grade rocks 19. This orogen was formed through closure of the V-shaped Adamastor ocean 12.

Fig. 5 […] and Sr (black dots) isotope data from the literature, interpreted in the framework proposed here and compared to the global carbon and strontium isotope curves 36. Literature-compiled Nd isotope data (in green, with poorer stratigraphic control than the C and Sr data) is also presented for comparison. The amount of nutrient input in each evolutionary stage of the basin is represented by Box-Whisker plots of Zn/Al ratios, color-coded according to Fig. 4.

In Fig. 5, the new geochronological, isotopic and trace element data presented here are integrated into a comprehensive compilation of C, Sr and Nd isotope data from the available literature.
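The Ce anomaly values reported for the three sections are computed from PAAS-normalised REE data; one common convention, which may differ from the formulation adopted in this study, interpolates the expected Ce from the neighbouring Pr and Nd. The sketch below follows that convention, using nominal shale reference values and purely illustrative sample concentrations.

```python
# Sketch of a PAAS-normalised Ce anomaly calculation. This follows one common
# convention (Ce/Ce* from neighbouring Pr and Nd, avoiding La); it is not
# necessarily the exact formulation used in this study. PAAS values are the
# widely used shale reference figures; the sample concentrations are illustrative.
PAAS_PPM = {"Ce": 79.6, "Pr": 8.83, "Nd": 33.9}   # nominal shale reference values

def ce_anomaly(sample_ppm: dict) -> float:
    """Ce/Ce* = Ce_N / (Pr_N**2 / Nd_N), where X_N is the PAAS-normalised value."""
    ce_n = sample_ppm["Ce"] / PAAS_PPM["Ce"]
    pr_n = sample_ppm["Pr"] / PAAS_PPM["Pr"]
    nd_n = sample_ppm["Nd"] / PAAS_PPM["Nd"]
    return ce_n / (pr_n ** 2 / nd_n)

# Illustrative limestone-like composition (ppm); values well below 1 indicate a
# negative Ce anomaly, as reported for the Sambra section.
print(round(ce_anomaly({"Ce": 1.0, "Pr": 0.45, "Nd": 1.9}), 2))
```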
Interpreted together in the framework proposed here, the amassed dataset establishes a chrono-correlation between the evolutionary stages of the Bambuí Group and the diachronous formation of mountain belts around the São Francisco paleocontinent. These can be summarized in a three-stage evolution:

(1) Goiás-Pharusian ocean closure, Brasília Orogen building and Marinoan deglaciation (ca. 635-615 Ma) – At the beginning of this stage, the São Francisco paleocontinent was surrounded by oceans (Fig. 6a). Closure of the Goiás-Pharusian ocean and building of the Brasília Orogen proceeded during this stage 11,18. Carbon and strontium isotopes in the Pedro Leopoldo Member mirror the global early Ediacaran curve (Fig. 5), due to continued seawater connection through the Adamastor ocean (Fig. 6b). The negative Ce anomalies detected in the Sambra Quarry crystal-fan-bearing carbonates (Fig. 4) are consistent with oxic conditions in the water column 38, especially for high-purity carbonate samples with low detrital and organic matter content such as crystal-fan precipitates. This additional proxy supports previous interpretations of oxic conditions for the surficial waters of the Bambuí basin in the aftermath of the Marinoan glaciation, based on Cr isotope 25, Mo and U enrichment and Fe speciation 29 data for this same interval. This, however, seems to have been a temporary and patchy oxygenation pulse, perhaps triggered by glacial meltwater input to the basin 25. Nevertheless, it led to an important input of sulfate and phosphate to the basin, with formation of phosphorites 26 and authigenic phosphate cements encrusting crystal fans 27. Ca isotope systematics 39, as well as the sharp 87Sr/86Sr peak from ca. 0.7074 to ca. 0.7080, support enhanced weathering of source areas during deposition of the cap dolostone.

The LBH is here constrained to between ca. 600 and 580 Ma, according to the U-Pb in-situ dates, within uncertainties, obtained in the Pedro Leopoldo and Lagoa Santa members, respectively (Fig. 3). This matches the time interval between closure of the Goiás-Pharusian and Adamastor oceans 11,12,18 (Fig. 5) and roughly coincides with the onset of the regional Gaskiers glaciation, recognized in parts of South America 40. Considering the U-Pb in-situ dates obtained here, along with the available field, lithostratigraphic and isotope data, the Pedro Leopoldo member would represent the cap carbonate to the Marinoan glaciation, deposited between 635 and 600 Ma and bearing consistent δ13C and 87Sr/86Sr signatures, […] where the first metazoans 15 briefly thrived. The disappearance of Ce anomalies in the Tatiana quarry limestones supports the available Fe speciation and trace metal proxies 29 that depict a significant change from a likely pervasively oxic basin in the aftermath of the Marinoan glaciation (Sambra quarry) to unstable marine redox conditions. Although this situation might be interpreted as hazardous to life, previous studies revealed the ability of the first metazoans to colonize the marine substrate during sporadic oxic episodes under a regime of dominantly anoxic water conditions 10,42, for example in the late Ediacaran Nama Group [42][43][44]. The cause of the shift in redox conditions between the Pedro Leopoldo and Lagoa Santa members is uncertain. At ca. 585-530 Ma the Bambuí basin was progressively surrounded by orogens, and erosion of the uplifting mountain belts probably enhanced the delivery of sediments and nutrients.
High rates of primary productivity may cause a general drawdown of dissolved oxygen if the amassed biomass is subjected to aerobic remineralization and organic carbon is not buried fast enough. The proximity to several mountain belts probably caused locally high sedimentation rates that resulted in increased rates of biomass burial, thus proportionally increasing inner-ramp dissolved O2 concentrations and enabling fleeting colonization by the cosmopolitan Cloudina genus during a time of heterogeneous redox conditions. The case of the Bambuí Group reinforces previous interpretations that the delivery of an optimum amount of nutrients from oxidative weathering of mountains, combined with a balance between primary production and organic matter burial, was probably essential to opportunistic benthic colonization during times of ephemeral oxygenation amid episodic incursions of anoxic waters beneath a fluctuating chemocline 42,[44][45][46][47].

(3) Clymene ocean closure, Paraguay-Araguaia Orogen building and full basin restriction during the MIBE interval (540-500 Ma) – Collision of the Amazonian paleocontinent and the consequent closure of the Clymene ocean to the west 20 caused renewed uplift of the Brasília Orogen (Fig. 6c), completely restricting the Bambuí basin on all sides. These processes were ultimately responsible for the unique Middle Bambuí Excursion (MIBE) of δ13Ccarb > 10‰ 23 (Fig. 5). The 87Sr/86Sr ratios became decoupled from the global curve, recording anomalously low values of circa 0.7072 (Fig. 5), probably due to the erosion of juvenile terranes and/or of ancient carbonate platforms uplifted in the surrounding orogenic belts 23,41 (Fig. 6). This interpretation is reinforced by the available Nd isotope data compiled from the literature, which show a slight increase of εNd(t) values during the MIBE, roughly coincident with the minima of the 87Sr/86Sr values (Fig. 5). This rise in εNd(t) values is consistent with the suggestion of increased weathering of juvenile terranes in the orogenic belts that surrounded the Bambuí basin, potentially increasing the delivery of key nutrients such as phosphorus from the erosion of basic and intermediate rocks 48. This is reinforced by the presence of detrital apatite in carbonates of the MIBE interval 32. This possible large increase in nutrient input is consistent with the five-fold rise in the concentration of micronutrients such as Zn, Cu, Ni and Ba in the Road Police section, upper Lagoa Santa Member (Fig. 4). Metal enrichment was probably controlled both by the weathering flux, as evidenced by paired low 87Sr/86Sr and high εNd(t), and by the anoxic conditions established for this stratigraphic interval from Fe speciation and trace element concentrations 29. In addition, detrital apatite found in carbonates of the MIBE interval 32 could indicate, besides sourcing from basic to intermediate rocks, strong recycling of the PO4-rich basal carbonate platforms and refertilization of the basin. In this scenario, progressive restriction due to tectonic confinement and uncontrolled delivery of nutrient-rich waters might have fueled biomass production that was efficiently re-mineralized through methanogenesis 41,49. Limestones that record the MIBE present covarying paired δ13Ccarb-δ13Corg data, high δ34Spyrite and low carbonate-associated sulfate (CAS) 31,32. A sulfate-poor, water-column methanogenesis environment was recently proposed 31,32, in which 13C-depleted methane is released from sediments and water without being oxidized.
Thus, seawater DIC records anomalously positive δ13Ccarb signals derived from methanogenic 13C-enriched CO2 31,32. This scenario is only achieved after a drastic reduction of the dissolved oxygen pool through aerobic respiration and the consumption of other oxidants. Methanogenesis in the water column, instead of porewater methanogenesis, is supported by the basinal scale of the MIBE, which occurs over hundreds of meters in distinct portions of the basin with little sample-to-sample variation, including in oolitic limestones deposited under shallow and agitated waters 32. Alternative factors might have contributed to the MIBE, such as a change in the total carbon input from the weathering of older carbonate rocks with high δ13Ccarb 23,41 and a third, authigenic carbonate sink 32, but these are unlikely to have acted alone in maintaining a 13C-enriched DIC throughout the basin 32. A giant Ediacaran graphite deposit interleaved in paragneisses of the Araçuaí Orogen 50 was proposed to potentially represent at least part of the organic carbon buried to generate the MIBE 32, but there is a seeming mismatch in the constrained ages of the two events, as the graphite deposit was metamorphosed to high grade at ca. 585-560 Ma 50 and the MIBE would only have started after the beginning of the Cloudina sp. biozone, i.e. after ca. 550 Ma 2.

While progressive basin restriction disconnected the Bambuí waters from the global sea and thus isolated the basin from the prevailing biotic and abiotic conditions, preventing the rise of a typical late Ediacaran/early Cambrian macrofaunal biota, some distinct microbial benthic assemblages might still have thrived, as indicated by localized stromatolites, recently described MISS structures 23 and putative ichnofossils such as Treptichnus pedum in the upper Bambuí Group 51. It should be noted, however, that microbial mats of the Jaíba Formation, situated below the unit from which the possible ichnofossils were described (Três Marias Formation), yielded δ13Ccarb around +3‰ and 87Sr/86Sr of ca. 0.7080 52, similar to the early Cambrian seawater curve, which might suggest a recoupling with global oceanic waters and a return to normal biogeochemical conditions at the topmost Bambuí Group after the intensely restricted period of the MIBE (Fig. 5). Inner-ramp stromatolites of the Road Police section yielded Ce anomalies that are consistent with the interpretation of anoxic seawater from other proxies such as Fe speciation and U and Mo contents for the same stratigraphic interval 29. A significant shallowing of the redoxcline and increasing anoxic incursions into proximal settings during the MIBE are thus interpreted. Anaerobic degradation of organic matter (through methanogenesis and dissimilatory Fe reduction) likely resulted in a degree of P recycling back to the water column, thus fueling productivity to probably very high levels 53, provided P did not get trapped in ferrous iron minerals. The absence of oxygen and probably methane-rich waters made the environment extremely eco-stressful and hampered complex life forms and the establishment of a typical Ediacaran-Cambrian style trophic chain 29 […] 44,54 and constrained from numerical models 55. This tentative model needs to be further tested and confirmed by a systematic study using other nutrient proxies in different parts of the basin.
Nevertheless, the available coupled δ13Ccarb-δ13Corg data 31,32, high δ34Spyrite and low CAS 32 throughout the MIBE indicate the attainment of primary production under low-sulfate and low-oxygen conditions, reinforcing the suggestion of a dominantly ferruginous water column modeled through Fe speciation and trace element data 29 for this time interval.

A compilation of published detrital zircon data (Fig. 6) supports the evolutionary model proposed here. Provenance of the Marinoan glacial diamictite and related units reflects the erosion of mainly cratonic sources, with the main Neoproterozoic detrital zircon peak at ca. 900 Ma and sparse crystals with a youngest peak at 670 Ma 56,57 that might have been transported in volcanic ash clouds derived from surrounding island arcs 12 (Fig. 6a,d). Building of the Brasília Orogen 18 provided youngest detrital zircons of ca. 636 Ma to foredeep conglomerate wedges that developed roughly during the time of cap carbonate deposition 22 (Fig. 6b,e). An important provenance shift is observed within the fossil-bearing limestone, with youngest detrital zircons at ca. 570 Ma 58, indicating erosion of the Araçuaí mountain belt (Fig. 6c,f). The 520.2 ± 5.3 Ma 33 U-Pb zircon date, interpreted as the age of extrusion of a tuff layer at the top of the Bambuí Group, indicates that deposition spanned the time of closure of the Clymene ocean and final amalgamation of western Gondwana.

The relationship between orogens and metazoan-bearing basins proposed for Ediacaran/Cambrian systems 7,8,11 is thus more complex than previously thought. According to our model, mountains might provide the conditions for life development, i.e., delivery of bio-essential nutrients, causing a boost in primary productivity and the subsequent rise of atmospheric and ocean oxygen levels. However, mountains may hinder complex life development if basins become too restricted by the surrounding uplifted areas, which hamper seawater renewal and encourage eutrophication from an excess of nutrients and biomass production. There is, then, a Goldilocks effect constraining the optimum conditions for metazoan development, especially in Ediacaran-Cambrian basins surrounded by mountain belts formed during Gondwana assembly. This effect might be recognizable in other moments of geological history as well 54.

Methods
Carbon, oxygen and strontium isotope ratio mass spectrometry. Carbonates had their CO2 extracted on a high-vacuum line after reaction with phosphoric acid at 25 °C, and the gas was cryogenically cleaned at the Stable Isotope Laboratory (NEG-LABISE) of the Department of Geology, Federal University of Pernambuco (UFPE), Brazil. The released CO2 gas was analyzed for O and C isotopes in a double-inlet, triple-collector mass spectrometer (VG-Isotech SIRA II), using the BSC reference (Borborema Skarn Calcite) that was calibrated against NBS-20 (δ13C = −1.05‰ VPDB; δ18O = −4.22‰ VPDB). The external precision, based on multiple standard measurements of NBS-19, was better than 0.1‰ for both elements. Aliquots of the carbonate samples were attacked with 0.5 M acetic acid in order to prevent dissolution of the siliciclastic fraction, following procedures described in 21. Sr was then separated using the conventional cation exchange procedure at the Laboratory of Geochronology, University of Brasília (UnB), Brazil. Samples were measured at 1250-1300 °C in dynamic multi-collection mode in a Thermo Scientific Triton Plus mass spectrometer.
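For reference, the δ13C and δ18O values reported in this work follow the standard delta definition relative to the VPDB scale; the short sketch below spells that definition out. The quoted VPDB 13C/12C ratio is a commonly cited figure, and none of the ratios used here are measurements from this study.

```python
# Standard delta-notation sketch for the C and O isotope values reported above.
# The isotope ratios used here are illustrative, not data from this study.
R_VPDB_13C = 0.011180    # commonly cited 13C/12C ratio of the VPDB standard

def delta_permil(r_sample: float, r_standard: float) -> float:
    """delta = (R_sample / R_standard - 1) * 1000, in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample ratio ~0.3% lower than VPDB gives a delta near -3 per mil,
# comparable in magnitude to the cap-dolostone values quoted in the text.
print(round(delta_permil(0.011180 * 0.997, R_VPDB_13C), 2))   # -> -3.0
```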
The 87 Sr/ 86 Sr values of the samples were corrected for the offset relative to the certified NIST SRM 987 value of 0.710250. The long-term (year-round) average of the 87 Sr/ 86 Sr ratios of this standard measured on this instrument is 0.71028 ± 0.00004. Procedural blanks for Sr are less than 100 pg. All uncertainties are presented at the 2σ level. For in-situ U-Pb analysis by LA-ICP-MS, each spot was pre-ablated for 3 s to remove surface contamination. Soda-lime glass NIST SRM-614 was used as a reference glass together with one carbonate reference material (WC-1) and three other carbonate reference materials (see below). Raw data were corrected online using our in-house Saturn software, developed by João Paulo Alves da Silva. Following background correction, outliers (± 2σ) were rejected based on the time-resolved 207 Pb/ 206 Pb and 206 Pb/ 238 U ratios. The mean 207 Pb/ 206 Pb ratio of each analysis was corrected for mass bias (0.3%) and the 206 Pb/ 238 U ratio for interelement fractionation (~ 5%), including drift over the sequence time, using NIST SRM-614. Effects of mass bias and drift correction on the Pb/Pb ratios were monitored using USGS BCR and BHVO glasses. Over the three days of analyses, the compiled Pb/Pb ratios for both standards were within error of the values certified by 62 . Due to the presence of carbonate matrix, an additional offset factor of 1.3 was determined using the WC-1 carbonate reference material 63 . The 206 Pb/ 238 U fractionation during 20 s depth profiling was estimated to be 3%, based on the common-Pb-corrected WC-1 analyses, and has been applied as an external correction to all carbonate analyses. Pooled together, the U-Pb data for WC-1 obtained during the sessions yielded an age of 254.5 ± 1.6 Ma (MSWD = 0.78, n = 53). Repeated analyses of a stromatolitic limestone from the Cambrian-Precambrian boundary in southern Namibia, analyzed during the same sequences, yielded a lower intercept age of 548 ± 16 Ma (MSWD = 3, n = 15). This is within uncertainty identical to the U/Pb zircon age of 543 ± 1 Ma from the directly overlying ash layer (Spitskopf Formation 64 ). Repeated analyses of the Duff Brown carbonate yielded a lower intercept age of 63.85 ± 0.77 Ma (MSWD = 1.5, n = 17), which is within error of the age of 64.0 ± 0.7 Ma 65 . Lastly, fifteen spots on our internal reference material Rio Maior calcite gave a lower intercept age of 62.43 ± 0.37 Ma (MSWD = 0.7, n = 15). This age is identical to our long-term measurement in the Department of Geology at Federal University of Ouro Preto (UFOP). In-situ LA-ICPMS Sr isotope ratio determination. A ThermoFisher Neptune Plus LA-(MC)-ICP-MS coupled with a 193 nm HelEX Photon Machine laser ablation system was used to obtain the Sr isotope composition of the carbonate samples at the Applied Isotope Research labs, Department of Geology, UFOP, Brazil. The methodology followed was proposed by 66,67 . Analytical conditions included a 6 Hz repetition rate and an energy density of 4 J cm −2 with a spot size of 85 µm. The acquisition cycle consisted of 30 s of measurement of the gas blank, followed by 60 s of sample ablation. Ablation and material transport occurred in a sample gas stream of Ar (0.8 l min −1 ) mixed with He (0.5 l min −1 ) and N 2 (0.005 l min −1 ). The dataset was reduced using an in-house Excel® spreadsheet for offline data reduction (modified from 66 ). In-situ LA-ICPMS trace element analysis.
Trace element composition of the calcite samples was obtained via LA-ICP-MS (CETAC 213 laser ablation coupled to a ThermoFisher Element 2) at the Applied Isotope Research laboratories, Department of Geology, UFOP, Brazil. The laser was set to produce spot sizes of 40 μm in diameter, during a period of 30 s at 10 Hz frequency. The data acquisition was done in bracketing mode and consisted of 4 analyses of standards (NIST 612 50 ppm glass) bracketing 10-15 unknowns. The data reduction was done via the Glitter software (GEMOC Laser ICP MS Total Trace Element Reduction), which provides an interactive environment for analytic selection of background and sample signals 72 . Instrumental mass bias and ablation depth-dependent elemental fractionation were corrected by tying the time-resolved signal for the unknown to the identical integration window of the primary standard NIST 612. BCR and BHVO were used as secondary control reference materials, and yielded values within the recommended USGS range. Errors are derived from the averaged counts for each mass for both standards, and sample values are then compared to those of the primary and secondary standards to determine concentrations. Stepwise Pb leaching dating. Analytical details. Rock samples bearing crystal fans were carefully cut out from a 5 mm thick slab with a rotating diamond blade mounted on a hand-held handpiece of the kind used in dental offices. The material was carefully ground by hand in an agate mortar, washed in deionized water, dried in an oven at 50 °C and then sieved into a 10-40 µm fraction. We prepared three aliquots: two fractions (A1, A2), which we processed further by subjecting them to magnetic separation using a Frantz Isodynamic separator, cleaning the sample material of magnetic particles from the rock matrix and of particles intergrown directly with the aragonite; and a third fraction (A3), which was neither treated nor purified. We performed stepwise Pb leaching 73 (TATI sample) on all three sample aliquots (200 mg sample amounts) using 2 mL of different acids (HBr, acetic acid, HCl and aqua regia) with concentrations specified in Supplementary Table S3 and reaction times of 10 min in every step. Supernatants in every step were centrifuged, pipetted off and then transferred to clean 7 mL Savillex™ Teflon beakers. A fifth of each sequential liquid aliquot of samples A1 and A3 was pipetted into separate beakers and an adequate volume of a 204 Pb enriched tracer solution was added to them. This allowed the precise determination of Pb amounts released during each and every sequential step. All sequentially leached samples were dried and, after conversion to the chloride form with 1 mL of 6 N HCl, Pb was separated on miniaturized 1 mL pipette tip columns with a fitted frit, charged with 300 µL of Biorad™ AG1-X8 100-200 mesh anion resin, using a conventional HCl-HBr anion exchange procedure with doubly distilled acids diluted to our needs with ultrapure water provided by a Milli-Q® Reference Water Purification System, contributing a blank of less than 100 pg. Pb was loaded together with 2 µL silica gel and 1 µL 1 M phosphoric acid and measured from 20 µm Re filaments on an 8-collector VG Sector IT mass spectrometer in static mode. Mass fractionation amounted to 0.068 ± 0.011%/AMU (2σ, n = 85), determined from repeated analyses of the NBS 981 Pb standard. Errors (reported at the 2σ level) and error correlations (r) were calculated after 74 . Isochron ages were derived using Isoplot 3.6 74 .
Errors assigned to the isochrons are 2σ, given at the 95% confidence interval. Results/discussion. TATI data, color-coded with respect to the three different aliquots processed, are plotted in the Pb isotope ratio diagram of Fig. 3g. It is apparent from the data in Supplementary Table S3 that the acetic acid steps removed most Pb (~ 70%) from the crystal fan separates, in line with our expectation that this acid is capable of preferentially attacking and dissolving the carbonate. In all experiments, the subsequent stronger leaching acids (HCl, HNO 3 -HCl mixture and aqua regia) released Pb fractions with a significantly more radiogenic Pb isotope signature (Supplementary Table S3). While this might indicate Pb release from at least one other subordinate phase with elevated U and Th relative to Pb, information deduced from the uranogenic-thorogenic common Pb diagram seems to indicate that this is unlikely. Instead, the leaching patterns in this diagram reveal a linear relationship in the data, with the exception of aliquot 3 (not purified by magnetic separation). A linear arrangement of TATI data in this diagram strongly supports the leaching of only one phase, in this case calcite pseudomorphs after aragonite, contributing Pb to the leaching acids. Conversely, the scattered leaching pattern of A3 in the uranogenic-thorogenic common Pb diagram reflects the presence of a multicomponent system with Pb contributed from phases likely having different U/Th. Based on the above, the well-defined correlation line defined by TATI data of the pure crystal fan separates (A1 and A2) in the uranogenic common Pb diagram is interpreted as a true mono-mineral isochron, with a slope corresponding to an age of 576 ± 36 Ma (MSWD = 0.72). We interpret this age to indicate the timing of growth of the respective aragonite fans. The results strongly reveal the importance of removing matrix phases, in this case from the crystal fans, to prevent erroneous interpretation of linear arrays in uranogenic Pb isotope diagrams as isochrons, whereas they instead signify mixing lines with no interpretable geological meaning. Carbon, strontium and neodymium data compilation. The δ 13 C (1597 samples), 87 Sr/ 86 Sr (103 samples) and Nd isotope data (66 samples) compiled in Fig. 5 come from the following sources: 21-23, 25, 35, 41, 75-82 . As distinct stratigraphic sections show different thicknesses, stratigraphic positioning of the data points was normalized to a common thickness for each formation. Nd isotope data are not reported with stratigraphic height tie-points, with the exception of the data reported by 25 ; thus, the εNd(t) values, recalculated for the expected age of deposition, were grouped in a single column for each unit. Only the least radiogenic 87 Sr/ 86 Sr results reported for each section in the cited literature, corresponding to samples with higher Sr concentration and lower Mn/Sr ratios, were used in the compilation. Detrital zircon compilation. For the compiled detrital zircon 206 Pb/ 238 U age probability density plots of Fig. 6d-f, data from the following works were recalculated and only the spots showing less than 5% discordance, low common Pb and low uncertainty were used. For our purposes, only spots with 206 Pb/ 238 U ages younger than 1000 Ma were compiled. The age of the youngest population is calculated as the weighted average 206 Pb/ 238 U age using the minimum three youngest age-equivalent spots of each dataset (red bars in Fig.
6d-f), with uncertainties presented at the 95% level. This method is known as the weighted average of the youngest three grains, or Y3Z 83 , and is generally considered successful and accurate for low-n datasets (n < 300). Spots with younger ages, but not age-equivalent to at least two other spots, are considered outliers, either reflecting Pb loss, low sample size, analytical issues or under-representation due to statistical or analytical bias, and are not considered as reliable indicators of maximum depositional age. Glacial diamictites (Fig. 6d) include the cratonic Jequitaí and Bebedouro formations and correlated diamictite-bearing units in the marginal fold belts: the Canabravinha Formation of the Rio Preto belt, the Cubatão Formation of the Brasília belt, the lower diamictite-bearing formations of the Macaúbas Group of the Araçuaí belt and the Capitão-Palestina Formation of the Sergipano belt. The latter is capped by the Olhos D'água cap carbonate, bearing identical C, O and Sr isotopic signals to the Pedro Leopoldo cap carbonate 21 . The sources for the compilation of Fig. 6d are: 56, 57, 77, 84-88 . Figure 6e is plotted with detrital zircon data from the Samburá conglomerate wedge 22 , which is overlain by the Pedro Leopoldo cap carbonate on the western part of the basin. No Neoproterozoic detrital zircon data are available for the cap carbonate unit itself. Figure 6f presents detrital zircon data for the Lagoa Santa Member compiled from 58, 77, 78 . Data availability. All data are available within the paper and its Supplementary Material tables or from the corresponding author upon reasonable request.
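To make the youngest-population calculation above concrete, the following is a minimal, illustrative Python sketch of an inverse-variance weighted mean of the three youngest age-equivalent zircon analyses, together with the MSWD used to check age equivalence. The ages and uncertainties are placeholder values, not data from the compilation.

```python
import numpy as np

def weighted_mean_age(ages_ma, sigmas_ma):
    """Inverse-variance weighted mean age, its 2-sigma uncertainty, and the MSWD.

    ages_ma   : 206Pb/238U ages in Ma
    sigmas_ma : 1-sigma analytical uncertainties in Ma
    """
    ages = np.asarray(ages_ma, dtype=float)
    sig = np.asarray(sigmas_ma, dtype=float)
    w = 1.0 / sig**2                          # inverse-variance weights
    mean = np.sum(w * ages) / np.sum(w)       # weighted mean age
    two_sigma = 2.0 / np.sqrt(np.sum(w))      # 2-sigma uncertainty of the mean
    mswd = np.sum(w * (ages - mean) ** 2) / (len(ages) - 1)
    return mean, two_sigma, mswd

# Hypothetical youngest three age-equivalent spots (placeholder values only)
print(weighted_mean_age([572.0, 570.0, 568.0], [4.0, 5.0, 4.5]))
```

An MSWD close to 1 supports treating the three spots as a single age population; spots that inflate the MSWD would be treated as outliers, as described above.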
v3-fos-license
2020-10-19T14:44:48.506Z
2020-10-05T00:00:00.000
224801771
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://discovery.ucl.ac.uk/10114109/7/Nachev_Generative%20model-enhanced%20human%20motion%20prediction.pdf", "pdf_hash": "4c9bb92393c5386d19c2da45c4698669cd82b5b0", "pdf_src": "ArXiv", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42135", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "73349d4ff95f12408e588d31f6657c1615f0dc55", "year": 2022 }
pes2o/s2orc
Generative model‐enhanced human motion prediction Abstract The task of predicting human motion is complicated by the natural heterogeneity and compositionality of actions, necessitating robustness to distributional shifts as far as out‐of‐distribution (OoD). Here, we formulate a new OoD benchmark based on the Human3.6M and Carnegie Mellon University (CMU) motion capture datasets, and introduce a hybrid framework for hardening discriminative architectures to OoD failure by augmenting them with a generative model. When applied to current state‐of‐the‐art discriminative models, we show that the proposed approach improves OoD robustness without sacrificing in‐distribution performance, and can theoretically facilitate model interpretability. We suggest human motion predictors ought to be constructed with OoD challenges in mind, and provide an extensible general framework for hardening diverse discriminative architectures to extreme distributional shift. The code is available at: https://github.com/bouracha/OoDMotion. The favoured approach to predicting movements over time has been purely inductive, relying on the history of a specific class of movement to predict its future.For example, state space models [Koller and Friedman, 2009] enjoyed early success for simple, common or cyclic motions [Taylor et al., 2007, Sutskever et al., 2009, Lehrmann et al., 2014].The range, diversity and complexity of human motion has encouraged a shift to more expressive, deep neural network architectures [Fragkiadaki et al., 2015, Butepage et al., 2017, Martinez et al., 2017, Li et al., 2018, Mao et al., 2019, Li et al., 2020b, Cai et al., 2020], but still within a simple inductive framework.This approach would be adequate were actions both sharply distinct and highly stereotyped.But their complex, compositional nature means that within one category of action the kinematics may vary substantially, while between two categories they may barely differ.Moreover, few real-world tasks restrict the plausible repertoire to a small number of classes-distinct or otherwise-that could be explicitly learnt.Rather, any action may be drawn from a great diversity of possibilities-both kinematic and teleological-that shape the characteristics of the underlying movements.This has two crucial implications.First, any modelling approach that lacks awareness of the full space of motion possibilities will be vulnerable to poor generalisation and brittle performance in the face of kinematic anomalies.Second, the very notion of In-Distribution (ID) testing becomes moot, for the relations between different actions and their kinematic signatures are plausibly determinable only across the entire domain of action.A test here arguably needs to be Out-of-Distribution (OoD) if it is to be considered a robust test at all.These considerations are amplified by the nature of real-world applications of kinematic modelling, such as anticipating arbitrary deviations from expected motor behaviour early enough for an automatic intervention to mitigate them.Most urgent in the domain of autonomous driving [Bhattacharyya et al., 2018, Wang et al., 2019], such safety concerns are of the highest importance, and are best addressed within the fundamental modelling framework.Indeed, Amodei et al. 
[2016] cites the ability to recognize our own ignorance as a safety mechanism that must be a core component in safe AI.Nonetheless, to our knowledge, current predictive models of human kinematics neither quantify OoD performance nor are designed with it in mind.There is therefore a need for two frameworks, applicable across the domain of action modelling: one for hardening a predictive model to anomalous cases, and another for quantifying OoD performance with established benchmark datasets.General frameworks are here desirable in preference to new models, for the field is evolving so rapidly greater impact can be achieved by introducing mechanisms that can be applied to a breadth of candidate architectures, even if they are demonstrated in only a subset.Our approach here is founded on combining a latent variable generative model with a standard predictive model, illustrated with the current state-of-the-art discriminative architecture [Mao et al., 2019, Wei et al., 2020].Myronenko [2018], take an analogous approach, regularising an encoder-decoder model for brain tumor segmentation on magnetic resonance images by simultaneously modelling the distribution of the data using a variational autoencoder (VAE) [Kingma and Welling, 2013].Here the aim is to achieve robust performance within a low data regime, which coincides with the demand for OoD generalisation. In short, our contributions to the problem of achieving robustness to distributional shift in human motion prediction are as follows: 1. We provide a framework to benchmark OoD performance on the most widely used opensource motion capture datasets: Human3.6M[Ionescu et al., 2013], and CMU-Mocap1 , and evaluate state-of-the-art models on it. 2. We present a framework for hardening deep feed-forward models to OoD samples.We show that the hardened models are fast to train, and exhibit substantially improved OoD performance with minimal impact on ID performance. We begin section 2 with a brief review of human motion prediction with deep neural networks, and of OoD generalisation using generative models.In section 3, we define a framework for benchmarking OoD performance using open-source multi-action datasets.We introduce in section 4 the discriminative models that we harden using a generative branch to achieve a state-of-the-art (SOTA) OoD benchmark.We then turn in section 5 to the architecture of the generative model and the overall objective function.Section 6 presents our experiments and results.We conclude in section 7 with a summary of our results, current limitations, and caveats, and future directions for developing robust and reliable OoD performance and a quantifiable awareness of unfamiliar behaviour. 
Related Work Deep-network based human motion prediction.Historically, sequence-to-sequence prediction using Recurrent Neural Networks (RNNs) have been the de facto standard for human motion prediction [Fragkiadaki et al., 2015, Jain et al., 2016, Martinez et al., 2017, Guo and Choi, 2019, Gopalakrishnan et al., 2019, Li et al., 2020b].Currently, the SOTA is dominated by feed forward models [Butepage et al., 2017, Li et al., 2018, Mao et al., 2019, Wei et al., 2020].These are inherently faster and easier to train than RNNs.The jury is still out, however, on the optimal way to handle temporality for human motion prediction.Meanwhile, recent trends have overwhelmingly shown that graph-based approaches are an effective means to encode the spatial dependencies between joints [Mao et al., 2019, Wei et al., 2020], or sets of joints [Li et al., 2020b].In this study, we consider the SOTA models that have graph-based approaches with a feed forward mechanism as presented by [Mao et al., 2019], and the subsequent extension which leverages motion attention, Wei et al. [2020].We show that these may be augmented to improve robustness to OoD samples. Generative models for Out-of-Distribution prediction and detection.Despite the power of deep neural networks for prediction in complex domains [LeCun et al., 2015], they face several challenges that limits their suitability for safety-critical applications.Amodei et al. [2016] list robustness to distributional shift as one of the five major challenges to AI safety.Deep generative models, have been used extensively for detection of OoD inputs and have been shown to generalise well in such scenarios [Hendrycks and Gimpel, 2016, Liang et al., 2017, Hendrycks et al., 2018].While recent work has showed some failures in simple OoD detection using density estimates from deep generative models [Nalisnick et al., 2018, Daxberger andHernández-Lobato, 2019], they remain a prime candidate for anomaly detection [Kendall and Gal, 2017, Grathwohl et al., 2019, Daxberger and Hernández-Lobato, 2019].Myronenko [2018] use a Variational Autoencoder (VAE) [Kingma and Welling, 2013] to regularise an encoder-decoder architecture with the specific aim of better generalisation.By simultaneously using the encoder as the recognition model of the VAE, the model is encouraged to base its segmentations on a complete picture of the data, rather than on a reductive representation that is more likely to be fitted to the training data.Furthermore, the original loss and the VAE's loss are combined as a weighted sum such that the discriminator's objective still dominates.Further work may also reveal useful interpretability of behaviour (via visualisation of the latent space as in Bourached and Nachev [2019]), generation of novel motion [Motegi et al., 2018], or reconstruction of missing joints as in Chen et al. [2015]. Quantifying out-of-distribution performance of human motion predictors Even a very compact representation of the human body such as OpenPose's 17 joint parameterisation Cao et al. 
[2018] explodes to unmanageable complexity when a temporal dimension is introduced of the scale and granularity necessary to distinguish between different kinds of action: typically many seconds, sampled at hundredths of a second. Moreover, though there are anatomical and physiological constraints on the space of licit joint configurations, and their trajectories, the repertoire of possibility remains vast, and the kinematic demarcations of teleologically different actions remain indistinct. Thus, no practically obtainable dataset may realistically represent the possible distance between instances. To simulate OoD data we first need ID data that is as small in quantity, and narrow in domain, as possible. For this reason we propose to define OoD on multi-action motion capture datasets as being the scenario where only a single action, the smallest labelled subset, is available for training and hyperparameter search. In appendix A, to show that the motion categories we have chosen can actually be distinguished at the time scales on which our trajectories are encoded, we train a simple classifier and show that it can separate the selected ID action from the others with high accuracy (100% precision and recall for the CMU dataset). In this way OoD performance may be considered over the remaining set of actions. Background. Here we describe the current SOTA model proposed by Mao et al. [2019] (GCN). We then describe the extension by Wei et al. [2020] (attention-GCN), which antecedes the GCN prediction model with motion attention. Problem Formulation. We are given a motion sequence X 1:N = (x 1 , x 2 , x 3 , · · · , x N ) consisting of N consecutive human poses, where x i ∈ R K , with K the number of parameters describing each pose. The goal is to predict the poses X N+1:N+T for the subsequent T time steps. DCT-based Temporal Encoding. The input is transformed using Discrete Cosine Transformations (DCT). In this way each resulting coefficient encodes information of the entire sequence at a particular temporal frequency. Furthermore, the option to remove high or low frequencies is provided. Given a joint, k, the position of k over N time steps is given by the trajectory vector x k = (x k,1 , . . ., x k,N ), which we convert to a DCT vector of the form C k = (C k,1 , . . ., C k,N ); these coefficients are computed with the standard orthonormal DCT. If no frequencies are cropped, the DCT is invertible via the Inverse Discrete Cosine Transform (IDCT), and the target is then simply the ground truth x k . Graph Convolutional Network. Suppose C ∈ R K×(N+T) is defined on a graph with K nodes and N + T dimensions; we then define a graph convolutional network to respect this structure. First we define a Graph Convolutional Layer (GCL) that, as input, takes the activation of the previous layer (A [l−1] ), where l is the current layer: GCL(A [l−1] ) = S [l] A [l−1] W [l] + b [l] , with A [0] = C ∈ R K×(N+T) . Here S [l] ∈ R K×K is a layer-specific learnable normalised graph laplacian that represents connections between joints, W [l] are the learnable inter-layer weightings and b [l] are the learnable biases, where n [l] is the number of hidden units in layer l. Network Structure and Loss. The network consists of 12 Graph Convolutional Blocks (GCBs), each containing 2 GCLs with skip (or residual) connections, see figure 5. Additionally, there is one GCL at the beginning of the network, and one at the end, with n [l] = 256 for each layer l. There is one final skip connection from the DCT inputs to the DCT outputs, which greatly reduces train time. The model has around 2.6M parameters.
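To make the encoding and the layer definition in this section concrete, the following is a minimal PyTorch/SciPy sketch of the DCT-based input preparation and a graph convolutional layer with a learnable graph Laplacian. The padding convention, initialisation and layer sizes are assumptions based on the description above, not the authors' released code.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.fft import dct, idct  # orthonormal DCT-II / inverse assumed


def encode_trajectory(x_obs, T):
    """Pad an observed joint trajectory of length N with its last frame T times, then DCT."""
    x_pad = np.concatenate([x_obs, np.repeat(x_obs[-1], T)])  # length N + T
    return dct(x_pad, norm="ortho")                           # DCT coefficients C_k


def decode_trajectory(C):
    """Invert the (uncropped) DCT back to the time domain."""
    return idct(C, norm="ortho")


class GCL(nn.Module):
    """Graph convolutional layer: output = S @ A @ W + b, with a learnable Laplacian S."""
    def __init__(self, n_nodes, in_features, out_features):
        super().__init__()
        self.S = nn.Parameter(torch.eye(n_nodes))                        # learnable graph Laplacian
        self.W = nn.Parameter(torch.randn(in_features, out_features) * 0.01)
        self.b = nn.Parameter(torch.zeros(out_features))

    def forward(self, A):        # A: (batch, n_nodes, in_features)
        return torch.einsum("kj,bjf,fo->bko", self.S, A, self.W) + self.b


class GCB(nn.Module):
    """Graph convolutional block: two GCLs with batch norm, tanh and a residual connection."""
    def __init__(self, n_nodes, features=256):
        super().__init__()
        self.gcl1 = GCL(n_nodes, features, features)
        self.gcl2 = GCL(n_nodes, features, features)
        self.bn1 = nn.BatchNorm1d(n_nodes * features)
        self.bn2 = nn.BatchNorm1d(n_nodes * features)

    def forward(self, A):
        b, k, f = A.shape
        h = torch.tanh(self.bn1(self.gcl1(A).reshape(b, -1)).reshape(b, k, f))
        h = torch.tanh(self.bn2(self.gcl2(h).reshape(b, -1)).reshape(b, k, f))
        return A + h                                                    # skip (residual) connection
```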
Hyperbolic tangent functions are used as the activation function. Batch normalisation is applied before each activation. The outputs are converted back to their original coordinate system using the IDCT to be compared to the ground truth. The loss used for joint angles is the average l 1 distance between the ground-truth joint angles and the predicted ones. Thus, the joint angle loss is (1/(K(N + T))) Σ n Σ k |x̂ k,n − x k,n |, where x̂ k,n is the predicted k-th joint angle at timestep n and x k,n is the corresponding ground truth. The model is separately trained on 3D joint coordinate prediction making use of the Mean Per Joint Position Error (MPJPE), as proposed in Ionescu et al. [2013] and used in Mao et al. [2019], Wei et al. [2020]. This is defined, for each training example, as (1/(J(N + T))) Σ n Σ j ‖p̂ j,n − p j,n ‖ 2 , where p̂ j,n ∈ R 3 denotes the predicted j-th joint position in frame n and p j,n is the corresponding ground truth, while J is the number of joints in the skeleton. Motion attention extension. Wei et al. [2020] extend this model by summing multiple DCT transformations from different sections of the motion history with weightings learned via an attention mechanism. For this extension, the above model (the GCN) along with the anteceding motion attention is trained end-to-end. We refer to this as the attention-GCN. 5 Our Approach. Myronenko [2018] augment an encoder-decoder discriminative model by using the encoder as a recognition model for a Variational Autoencoder (VAE) [Kingma and Welling, 2013, Rezende et al., 2014]. Myronenko [2018] show this to be a very effective regulariser. Here, for conjugacy with the discriminator, we consider the Variational Graph Autoencoder (VGAE), proposed by Kipf and Welling [2016] as a framework for unsupervised learning on graph-structured data. The generative model sets a precedent for information that can be modelled causally, while leaving elements of the discriminative machinery, such as skip connections, to capture correlations that remain useful for prediction but are not necessarily pursuant to the objective of the generative model. In addition to performing the role of regularisation in general, we show that we gain robustness to distributional shift across similar, but different, actions that are likely to share generative properties. The architecture may be considered with the visual aid in figure 1. Variational Graph Autoencoder (VGAE) Branch and Loss. Here we define the first 6 GCB blocks as our VGAE recognition model, with a latent variable z ∈ R K×n z distributed as N (µ z , σ z ), where µ z ∈ R K×n z , σ z ∈ R K×n z , and n z = 8 or 32 depending on training stability. The KL divergence between the latent space distribution and a spherical Gaussian N (0, I) is given by KL = −(1/2) Σ (1 + log σ z 2 − µ z 2 − σ z 2 ), summed over the K × n z latent dimensions. The decoder part of the VGAE has the same structure as the discriminative branch; 6 GCBs. We parametrise the output neurons as µ ∈ R K×(N+T) and log(σ 2 ) ∈ R K×(N+T) . We can now model the reconstruction of the inputs as maximum-likelihood samples of a Gaussian distribution, which constitutes the second term of the negative Variational Lower Bound (VLB) of the VGAE: −log p(C|z) = (1/2) Σ k,l (log σ k,l 2 + (C k,l − µ k,l ) 2 /σ k,l 2 ) + const, where C k,l are the DCT coefficients of the ground truth.
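The objective terms described in this section can be written compactly as below. This is an illustrative PyTorch sketch under the stated parameterisation (diagonal Gaussians, DCT-domain reconstruction); it is not the authors' exact implementation, and the function names are mine.

```python
import torch


def joint_angle_l1(x_pred, x_true):
    """Average l1 distance over all joints and all N+T frames."""
    return (x_pred - x_true).abs().mean()


def mpjpe(p_pred, p_true):
    """Mean per-joint position error; inputs shaped (batch, frames, J, 3)."""
    return torch.norm(p_pred - p_true, dim=-1).mean()


def kl_to_standard_normal(mu_z, log_var_z):
    """KL( N(mu_z, sigma_z^2) || N(0, I) ) for a diagonal Gaussian latent."""
    return -0.5 * torch.sum(1 + log_var_z - mu_z.pow(2) - log_var_z.exp())


def gaussian_nll(C_true, mu, log_var):
    """Negative log-likelihood of the DCT coefficients under N(mu, sigma^2), up to a constant."""
    return 0.5 * torch.sum(log_var + (C_true - mu).pow(2) / log_var.exp())


def total_loss(x_pred, x_true, C_true, mu, log_var, mu_z, log_var_z, lam):
    """Discriminative loss plus lambda-weighted negative VLB of the VGAE branch."""
    neg_vlb = kl_to_standard_normal(mu_z, log_var_z) + gaussian_nll(C_true, mu, log_var)
    return joint_angle_l1(x_pred, x_true) + lam * neg_vlb
```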
Training. We train the entire network with the addition of the negative VLB (the KL term plus the Gaussian reconstruction term above), weighted by λ, to the discriminative loss. Here λ is a hyperparameter of the model. The overall network is ≈ 3.4M parameters. The number of parameters varies slightly as per the number of joints, K, since this is reflected in the size of the graph in each layer (K = 48 for H3.6M, K = 64 for CMU joint angles, and K = J = 75 for CMU Cartesian coordinates). Furthermore, once trained, the generative model is not required for prediction and hence for this purpose is as compact as the original models. Datasets and Experimental Setup. Human3.6M (H3.6M). The H3.6M dataset [Ionescu et al., 2011, 2013], so called as it contains a selection of 3.6 million 3D human poses and corresponding images, consists of seven actors each performing 15 actions, such as walking, eating, discussion, sitting, and talking on the phone. Martinez et al. [2017], Mao et al. [2019], Li et al. [2020b] all follow the same training and evaluation procedure: training their motion prediction model on 6 of the actors (5 for train and 1 for cross-validation), for each action, and evaluating metrics on the final actor, subject 5. For easy comparison to these ID baselines, we maintain the same train, cross-validation, and test splits. However, we use the single, most well-defined action (see appendix A), walking, for train and cross-validation, and we report test error on all the remaining actions from subject 5. In this way we conduct all parameter selection based on ID performance. CMU motion capture (CMU-mocap). The CMU dataset consists of 5 general classes of actions. Similarly to [Li et al., 2018, 2020a, Mao et al., 2019] we use 8 detailed actions from these classes: 'basketball', 'basketball signal', 'directing traffic', 'jumping', 'running', 'soccer', 'walking', and 'window washing'. We use two representations: a 64-dimensional vector that gives an exponential map representation [Grassia, 1998] of the joint angles, and a 75-dimensional vector that gives the 3D Cartesian coordinates of 25 joints. We do not tune any hyperparameters on this dataset and use only a train and test set with the same split as is common in the literature [Martinez et al., 2017, Mao et al., 2019]. Model configuration. We implemented the model in PyTorch [Paszke et al., 2017] using the ADAM optimiser [Kingma and Ba, 2014]. The learning rate was set to 0.0005 for all experiments where, unlike Mao et al. [2019], Wei et al. [2020], we did not decay the learning rate, as it was hypothesised that the dynamic relationship between the discriminative and generative loss would make this redundant. The batch size was 16. For numerical stability, gradients were clipped to a maximum 2-norm of 1 and log(σ 2 ) values were clamped between −20 and 3.

                 Walking       Eating        Smoking       Discussion    Average
milliseconds     560    1000   560    1000   560    1000   560    1000   560    1000
GCN (OoD)        0.80   0.80   0.89   1.20   1.26   1.85   1.45   1.88   1.10   1.43
ours (OoD)       0.66   0.72   0.90   1.19   1.17   1.78   1.44   1.90   1.04   1.40

Table 2: Long-term prediction of Euclidean distance between predicted and ground truth joint angles on H3.6M. Mao et al. [2019] (GCN) and Wei et al. [2020] (attention-GCN) use this same Graph Convolutional Network (GCN) architecture with DCT inputs. In particular, Wei et al.
[2020] increase the amount of history accounted for by the GCN by adding a motion attention mechanism to weight the DCT coefficients from different sections of the history prior to their being inputted to the GCN. We compare against both of these baselines on OoD actions. For attention-GCN we leave the attention mechanism preceding the GCN unchanged, such that the generative branch of the model reconstructs the weighted DCT inputs to the GCN, and the whole network is end-to-end differentiable. Baseline comparison. Hyperparameter search. Since a new term has been introduced to the loss function, it was necessary to determine a sensible weighting between the discriminative and generative models. In Myronenko [2018], this weighting was arbitrarily set to 0.1. It is natural that the optimum value here will relate to the other regularisation parameters in the model. Thus, we conducted random hyperparameter search for p drop and λ in the ranges p drop = [0, 0.5] on a linear scale, and λ = [10, 0.00001] on a logarithmic scale. For fair comparison we also conducted hyperparameter search on GCN, for values of the dropout probability (p drop ) between 0.1 and 0.9. For each model, 25 experiments were run and the optimum values were selected based on the lowest ID validation error. The hyperparameter search was conducted only for the GCN model on short-term predictions for the H3.6M dataset and used for all subsequent experiments, hence demonstrating the generalisability of the architecture. Results. Consistent with the literature, we report short-term (< 500ms) and long-term (> 500ms) predictions. In comparison to GCN, we take short-term history into account (10 frames, 400ms) for both datasets to predict both short- and long-term motion. In comparison to attention-GCN, we take long-term history (50 frames, 2 seconds) to predict the next 10 frames, and predict further into the future. Conclusion. We draw attention to the need for robustness to distributional shifts in predicting human motion, and propose a framework for its evaluation based on major open-source datasets. We demonstrate that state-of-the-art discriminative architectures can be hardened to extreme distributional shifts by augmentation with a generative model, combining low in-distribution predictive error with maximal generalisability. The introduction of a surveyable latent space further provides a mechanism for model perspicuity and interpretability, and explicit estimates of uncertainty facilitate the detection of anomalies: both characteristics are of substantial value in emerging applications of motion prediction, such as autonomous driving, where safety is paramount. Our investigation argues for wider use of generative models in behavioural modelling, and shows it can be done with minimal or no performance penalty, within hybrid architectures of potentially diverse constitution. The general increase in the distinguishability that can be seen in figure 3b increases the demand to be able to robustly handle distributional shifts, as the distribution of values that represent different actions only gets more pronounced as the time scale is increased. This is true even with the naïve DCT transformation to capture longer time scales without increasing vector size. As we can see from the confusion matrix in figure 3c, the actions in the CMU dataset are even more easily separable. In particular, our selected ID action in the paper, Basketball, can be identified with 100% precision and recall on the test set.
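As a concrete illustration of the model configuration described above (Adam with learning rate 0.0005, batch size 16, gradient 2-norm clipping to 1, and clamping of the log-variances to [−20, 3]), the following is a minimal, hypothetical training-step sketch. It reuses the loss helpers sketched earlier, and the dictionary keys for the model outputs and batch are assumptions for illustration only.

```python
import torch


def train_step(model, batch, optimizer, lam):
    optimizer.zero_grad()
    out = model(batch["dct_in"])                          # hypothetical forward pass returning a dict
    out["log_var"] = out["log_var"].clamp(-20.0, 3.0)     # numerical stability for decoder variances
    out["log_var_z"] = out["log_var_z"].clamp(-20.0, 3.0)
    loss = total_loss(out["x_pred"], batch["x_true"], batch["dct_in"],
                      out["mu"], out["log_var"], out["mu_z"], out["log_var_z"], lam)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # 2-norm gradient clipping
    optimizer.step()
    return loss.item()

# optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)  # no learning-rate decay, batch size 16
```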
B Latent space of the VGAE. One of the advantages of having a generative model involved is that we have a latent variable which represents a distribution over deterministic encodings of the data. We considered the question of whether or not the VGAE was learning anything interpretable with its latent variable, as was the case in Kipf and Welling [2016]. The purpose of this investigation was twofold. First, to determine if the generative model was learning a comprehensive internal state, or just a non-linear average state, as is common to see in the training of VAE-like architectures. The result of this should suggest a key direction of future work. Second, an interpretable latent space may be of paramount usefulness for future applications of human motion prediction. Namely, if dimensionality reduction of the latent space to an inspectable number of dimensions yields actions, or behaviour, that are close together if kinematically or teleologically similar, as in Bourached and Nachev [2019], then human experts may find unbounded potential application for an interpretation that is both quantifiable and qualitatively comparable to all other classes within their domain of interest. For example, a medical doctor may consider a patient to have unusual symptoms for condition, say, A. It may be useful to know that the patient's deviation from a classical case of A is in the direction of condition, say, B. We trained the augmented GCN model discussed in the main text with all actions, for both datasets. We use Uniform Manifold Approximation and Projection (UMAP) [McInnes et al., 2018] to project the latent space of the trained GCN models onto 2 dimensions for all samples in the dataset, for each dataset independently. From figure 4 we can see that for both models the 2D projection relatively closely resembles a spherical Gaussian. Further, we can see from figure 4b that the action walking does not occupy a discernible domain of the latent space. This result is further verified by using the same classifier as used in appendix A, which achieved no better than chance when using the latent variables as input rather than the raw data input. This result implies that the benefit of using the generative model observed in the main text is significant even if the generative model itself has poor performance. In this case we can be sure that the reconstructions are at least not good enough to distinguish between actions. It is hence natural for future work to investigate whether the improvement in OoD performance is greater if the model is trained in such a way as to ensure that the generative model performs well. There are multiple avenues through which such an objective might be achieved, pre-training the generative model being one of the salient candidates. Mao et al. use the DCT transform with a graph convolutional network architecture to predict the output sequence. This is achieved by having an equal-length input-output sequence, where the input is the DCT transformation of x k = [x k,1 , . . ., x k,N , x k,N+1 , . . ., x k,N+T ], where [x k,1 , . . ., x k,N ] is the observed sequence and [x k,N+1 , . . ., x k,N+T ] are replicas of x k,N (i.e. x k,n = x k,N for n ≥ N). (a) Distribution of short-term training instances for actions in H3.6M. (b) Distribution of training instances for actions in CMU. Figure 3: Confusion matrices for a multi-class classifier for action labels. In each case we use the same input convention x k = [x k,1 , . . ., x k,N , x k,N+1 , . . ., x k,N+T ], where x k,n = x k,N for n ≥ N, such that in each case the input to the classifier is 48 × 20 = 960 dimensional. The classifier has 4 fully connected layers: layer 1: input dimension × 1024; layer 2: 1024 × 512; layer 3: 512 × 128; layer 4: 128 × 15 (or 128 × 8 for CMU), where the final layer uses a softmax to predict the class label. Cross-entropy loss is used for training, with ReLU activations and a dropout probability of 0.5. We used a batch size of 2048 and a learning rate of 0.00001. Figure 4: Latent embedding of the trained model on both the H3.6M and the CMU datasets, independently projected into 2D using UMAP from 384 dimensions for H3.6M and 512 dimensions for CMU, using default hyperparameters for UMAP. Code for all experiments is available at the following link: https://github.com/bouracha/OoDMotion. Table 1: Short-term prediction of Euclidean distance between predicted and ground truth joint angles on H3.6M.
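For concreteness, here is a sketch of the simple four-layer action classifier described in the caption above, together with the UMAP projection used for Figure 4. The hyperparameters follow the text; everything else (class structure, variable names) is an assumption, not the authors' code.

```python
import torch.nn as nn


class ActionClassifier(nn.Module):
    """4 fully connected layers: input -> 1024 -> 512 -> 128 -> n_classes (15 for H3.6M, 8 for CMU)."""
    def __init__(self, input_dim=960, n_classes=15, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 1024), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(1024, 512), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(512, 128), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(128, n_classes),   # logits; nn.CrossEntropyLoss applies the softmax internally
        )

    def forward(self, x):
        return self.net(x)

# 2D projection of the latent representation for visualisation (cf. Figure 4), assuming umap-learn:
# import umap
# embedding_2d = umap.UMAP().fit_transform(latent_means)   # default UMAP hyperparameters
```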
v3-fos-license
2022-07-11T15:07:16.971Z
2022-07-09T00:00:00.000
250415666
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://downloads.hindawi.com/journals/cin/2022/5794914.pdf", "pdf_hash": "eb3102a5eab495796349588a5b46fceb2c5e873f", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42136", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "2724ad6f3d18adc4b63a506d8947d2ffaaf4c77d", "year": 2022 }
pes2o/s2orc
Human Sports Action and Ideological and Political Evaluation by Lightweight Deep Learning Model. The purpose is to automatically and quickly analyze whether the rope skipping actions conform to the standards and give correct guidance and training plans. Firstly, aiming at the problem of motion analysis, a deep learning (DL) framework is proposed to obtain the coordinates of key points in rope skipping. The framework is based on the OpenPose method and uses the lightweight MobileNetV2 instead of the Visual Geometry Group (VGG) 19. Secondly, a multi-label classification model is proposed: attention long short-term memory-long short-term memory (ALSTM-LSTM), according to the algorithm adaptive method in the multi-label learning method. Finally, the validity of the model is verified. Through the analysis and comparison of simulation results, the results show that the average accuracy of the improved OpenPose method is 77.8%, an increase of 3.3%. The proposed ALSTM-LSTM model achieves 96.1% accuracy and 96.5% precision. After the feature extraction model VGG19 in the initial stage of OpenPose is replaced by the lightweight MobileNetV2, the pose estimation accuracy is improved, and the number of model parameters is reduced. Additionally, compared with other models, the performance of the ALSTM-LSTM model is improved in all aspects. This work effectively solves the problems of real-time and accurate analysis in human pose estimation (HPE). The simulation results show that the proposed DL model can effectively improve students' high school entrance examination performance. Introduction. The extracurricular physical exercise of primary and secondary school students after school and holidays is gradually reduced. The rise of online teaching and online tutoring has squeezed the time for students to exercise after school. Artificial intelligence (AI), big data, cloud computing, and other technologies not only break through the limitations of traditional physical education (PE) classrooms due to factors such as space, time, and region but also provide a guarantee for the teaching feedback link in extracurricular PE teaching in schools. By changing the traditional interaction between teachers and students, teachers have redesigned PE teaching and adopted new learning evaluation methods so that PE teaching presents a new learning space and learning environment [1]. In recent years, vision-based human motion analysis technology has received extensive attention with the continuous development and application of computer technology and AI technology. At present, vision-based human motion analysis is still a major challenge in the field of computer vision, mainly involving pattern recognition, image processing, virtual reality (VR), and other disciplines. It has broad application prospects in the fields of human-computer interaction (HCI), rehabilitation therapy, and sports training [2,3]. Computer vision has been widely used in sports training and other related fields, including motion type recognition and activity recognition, athlete tracking, and human pose estimation (HPE). The core problem of motion analysis is HPE. This is an important topic in the field of computer vision. The task of HPE is to identify the human body through computer image processing algorithms and determine the joint positions of the human body (such as eyes, nose, shoulders, wrists, and so on) [4][5][6]. The application of HPE involves human behavior understanding, HCI, health monitoring, motion capture, and other fields. Nadeem et al.
[7] proposed a new method for automatic HPE. The method intelligently recognizes human behavior by utilizing salient contour detection, robust body part models and multi-dimensional cues from full-body contours, and the maximum entropy Markov model (MEMM). Firstly, the image is preprocessed and noise is removed to obtain robust contours. Then, the body part model is used to extract 12 key body parts. These key body parts are further optimized to help generate multi-dimensional cues. Finally, MEMM is used to process these optimization modes further, using a leave-one-out cross-validation scheme. Better body part detection and higher identification accuracy are achieved on four benchmark datasets, with results superior to existing well-known state-of-the-art statistical methods. Cui and Dahnoun [8] proposed an HPE system based on millimeter-wave radar. The system detects persons with arbitrary poses at close range (within two meters) in indoor environments and estimates poses by locating key joints. Two millimeter-wave radars were used to capture the scene, and a neural network model was used to estimate the pose. The neural network model consists of a part detector that estimates the position of the subject's joints and a spatial model that learns the correlations between the joints. A time-dependent step is introduced in real-time operation to refine the estimation further. The system is able to provide accurate HPE in real time at 20 frames per second, with an average localization error of 12.2 cm and an average accuracy of 71.3%. Wu et al. [9] proposed a model in which multiple subnets are connected in parallel on the high-resolution main net. It maintains the network structure of high-resolution heat maps throughout the operation. The structure is applied to the human body key point vector field network, which improves the accuracy and operation speed of human body gesture recognition. Experimental results show that the proposed network outperforms existing mainstream research by 3%-4%. To sum up, the deep learning (DL) model is widely used in the field of computer science. Applying it to fields such as safety monitoring, health assessment, and HCI research can help recognize human actions according to human joints using sensors. For example, a convolutional neural network (CNN) can be used for feature extraction, and a multi-layer perceptron can be used as the standard for subsequent classification. These findings show that CNN-based supervised learning is very effective. The key to improving the performance of the rope skipping test is to automatically and quickly analyze whether the rope skipping action meets the standards and give correct guidance and training plans. The existing HPE algorithms based on computer vision have high complexity, poor robustness, and complex computation. Additionally, due to the lack of professional human action analysts, the research on human action analysis and sports quality evaluation needs to be further explored. The innovation of this work is that the coordinates of the key points in the rope skipping process are obtained through the two-dimensional (2D) HPE algorithm. The coordinates are preprocessed to obtain a robust data sequence. A multi-label classification model is proposed: attention long short-term memory-long short-term memory (ALSTM-LSTM). The proposed model can effectively solve the problems of real-time analysis and accurate analysis. The results show that the ALSTM-LSTM model can improve students' high school entrance examination scores. Smart Sports.
The basic task of smart sports is to obtain the pose changes of the human body through the equipment and to analyze and guide the pose [10,11]. In recent years, measurement technologies such as accelerometers, gyroscopes, and magnetometers have emerged. However, these technologies rely heavily on specialized equipment, which is very expensive. Professional competitive sports training mostly adopts traditional methods, and the training requirements are very high. Making professional sports training smart requires investing extensive human and financial resources [12]. Emerging technologies such as AI and big data have been widely used in sports evaluation and guidance. The sports data acquisition and analysis system utilizes wearable devices to capture and record students' actions during physical exercise and further evaluates their actions against specific standards. Based on virtual reality (VR) technology, a more in-depth experimental study on the PE teaching platform simulates human action scenes. With the in-depth development of deep neural networks and hardware technology, AI technology based on DL has shown relatively good results in HPE [13,14]. The HPE method based on DL can predict the data information of human skeleton points from each video frame and finally regenerate the human skeleton. 2D Human Pose Estimation Based on OpenPose. The OpenPose estimation project is an open-source library based on convolutional neural networks (CNN) and supervised learning. It is regarded as a skeleton point and skeleton detector, which can predict human facial key points, limb skeleton points, hand joint points, and other parts. It shows good robustness in the pose estimation of one or more people. OpenPose can take pictures, videos, or a real-time camera stream as input and output the position and coordinate information of the joint points of the human body [15]. The basic principle of OpenPose is to build a CNN in stages and then output the prediction confidence heat map of the skeleton points after predicting the human skeleton points in the image. Additionally, it also predicts the affinity field between bones. The affinity field is the basis for connecting the skeleton and participates in the next stage of skeleton point prediction, improving the speed and accuracy of the model [16,17]. The OpenPose processing flow is shown in Figure 1. It learns using nonparametric representations through part affinity fields, associating body parts with individuals in images, giving a bottom-up representation of the distribution of limb associations. In Figure 1, in order to obtain the key point coordinate information of the human body in OpenPose-based human pose estimation, it is necessary to use the Gaussian modeling method to obtain the confidence map of the key point position. A confidence map represents a key point.
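A minimal NumPy illustration of how a Gaussian confidence map for a single key point can be rendered is given below. The spread parameter sigma is an assumed value for demonstration, since the text only describes the Gaussian modeling qualitatively.

```python
import numpy as np

def confidence_map(height, width, keypoint_xy, sigma=7.0):
    """Heat map whose value at pixel p is exp(-||p - x||^2 / sigma^2) for key point position x."""
    ys, xs = np.mgrid[0:height, 0:width]
    dx = xs - keypoint_xy[0]
    dy = ys - keypoint_xy[1]
    return np.exp(-(dx**2 + dy**2) / sigma**2)

# Example: a 530 x 460 frame with a key point at (x=230, y=300)
heatmap = confidence_map(530, 460, (230, 300))
```

The peak of the map sits at the annotated joint position and decays smoothly with distance, which is what allows the network to be supervised with pixel-wise regression rather than a single coordinate.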
The values in the confidence map represent the probability of a certain key point location [18,19]. Confidence maps of key point locations are obtained by Gaussian modeling, where S j,k represents the confidence map of the j-th joint produced for the k-th individual in the image; j represents the j-th joint point of the individual; k represents the k-th individual in the image; p represents the coordinates of a predicted position in the image; δ represents a minimum value, which ensures numerical stability in the training process of the model; and x j,k represents the real coordinate position of the j-th joint point of the k-th individual. Multi-Label Classification Method. In practical applications, a person's body movements need to be analyzed in many aspects, and there are often multiple labels in one frame of the image. In the multi-label classification problem, there is a certain dependency or mutual exclusion between labels. In multi-label classification tasks, the relationship between categories is very complex due to the large number of labels. Therefore, multi-label classification is more complex than single-label classification [20,21]. According to the source of algorithm design ideas, there are two types of methods for multi-label classification problems: problem transformation methods and algorithm adaptation methods, as shown in Figure 2. Problem transformation-based methods usually transform multi-label classification problems into other learning scenarios. As shown in Figure 2, the multi-label learning methods can be divided into two categories, the problem transformation method and the algorithm adaptation method, represented by the left and right block diagrams, respectively. The algorithm adaptation-based method adapts currently popular algorithms in the classification process to handle multi-label data, such as adaptive DL algorithms and recurrent neural networks (RNN), to apply them to multi-label classification tasks [22]. Action Analysis Algorithm in Rope Skipping. The purpose of a lightweight network model is to make the model run in a shorter time and consume fewer resources while maintaining accuracy. When acquiring the human pose, the OpenPose network model first sends the image frame to the Visual Geometry Group (VGG) 19 network to obtain a collection of image feature maps. However, VGG19 is computationally expensive. Also, it produces a lot of parameters during training. Therefore, the occupied memory is also large [23]. However, the model trained with the MobileNetV2 network is small, fast, and has high accuracy. Therefore, when extracting image feature maps, the original OpenPose method is changed to MobileNetV2. The MobileNetV2 network is improved based on MobileNet. An inverted residual structure is added to MobileNetV2.
The inverted residual structure first maps low-dimensional features to high-dimensional features, then uses depth-wise separable convolution to perform the convolution operations, and then uses a linear convolution to map them back to low-dimensional features. In order to make the model more expressive in the calculation process, the nonlinear transformation in MobileNet is removed in the MobileNetV2 network. Meanwhile, a linear bottleneck is also introduced in MobileNetV2 [24]. In MobileNetV2, it is believed that the nonlinear activation function of each layer in the neural network brings two problems to the network: the first is that, after the ReLU activation function, the input data and output data are linearly transformed only where the output is nonzero. The other is that if the integrity of the manifold of interest in the activation space is high, the problem of space collapse will occur after the ReLU activation function. Then, the architecture of MobileNetV2 is analyzed by referring to the relevant literature [25], and the structure of MobileNetV2 is unfolded in the corresponding table. There are many nodes in the hidden layers that emphasize feature learning of the data. DL transforms the feature representation of samples in the original space into a new feature space, simplifies the classification and prediction of data, can learn from fewer samples, and can express complex functions with fewer parameters. This reduces the difficulty of setting and adjusting model parameters; the network contains more hidden layers than traditional shallow neural networks and can learn richer sample features. Additionally, its simulation performance is also better [26]. The long short-term memory network (LSTM) is essentially derived from the RNN. LSTM adds a state unit to the RNN. Its function is to save the previously input information; a gating mechanism with three gates is also designed, and an activation function and an excitation function are set in the structure of each gate. In general, the tanh function is chosen as the activation function for the input and output of the memory cell. The sigmoid function is used as the activation function of the gate structure [27]. The human pose analysis problem during jumping is transformed into a multi-label classification problem with a temporal relationship. LSTM can play the role of a global processing and storage unit, so that LSTM can maintain relatively good performance on time series. Attention is a global processing method. Therefore, attention is applied to LSTM and combined with a single LSTM for multi-label classification, which is the ALSTM-LSTM method. The specific network framework is shown in Figure 3. In Figure 3, the ALSTM-LSTM network includes five layers: input, batch normalization, ALSTM-LSTM, connection, and sigmoid layers. Additionally, a batch normalization layer is added before the LSTM and ALSTM layers. The study is a multi-label classification problem. According to the multi-label algorithm transformation method, the activation function of the last layer of the ALSTM-LSTM model is set to the sigmoid function, and the loss function selects the binary cross-entropy.
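A schematic PyTorch sketch of a two-branch ALSTM-LSTM multi-label classifier of the kind described above is given below: an attention-weighted LSTM branch plus a plain LSTM branch, concatenated and passed through a sigmoid output layer trained with binary cross-entropy. The feature dimension, the exact form of the attention, and the variable names are assumptions; the paper only specifies 64 hidden units, six labels, and the overall layout.

```python
import torch
import torch.nn as nn


class ALSTMLSTM(nn.Module):
    def __init__(self, n_features=28, hidden=64, n_labels=6):
        super().__init__()
        self.bn = nn.BatchNorm1d(n_features)
        self.lstm_a = nn.LSTM(n_features, hidden, batch_first=True)   # attention (ALSTM) branch
        self.attn = nn.Linear(hidden, 1)
        self.lstm_b = nn.LSTM(n_features, hidden, batch_first=True)   # plain LSTM branch
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, x):                                  # x: (batch, time, features)
        x = self.bn(x.transpose(1, 2)).transpose(1, 2)     # batch normalization before the LSTMs
        h_a, _ = self.lstm_a(x)                            # (batch, time, hidden)
        w = torch.softmax(self.attn(h_a), dim=1)           # temporal attention weights
        ctx = (w * h_a).sum(dim=1)                         # attention-pooled context vector
        _, (h_b, _) = self.lstm_b(x)                       # last hidden state of the plain branch
        feat = torch.cat([ctx, h_b[-1]], dim=-1)           # connection (concatenation) layer
        return torch.sigmoid(self.out(feat))               # one independent probability per label

# criterion = nn.BCELoss()   # binary cross-entropy over the six action-quality labels
```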
In the two branches of OpenPose, one branch is used to predict the confidence of the key points, and the other branch is used to predict the affinity field between two key points. The overall loss is the sum over all stages, f = Σ t (f S t + f L t ), where S denotes the confidence of the key points; L denotes the affinity field between two key points; f S t represents the loss of the key point confidence branch at stage t; and f L t represents the loss of the affinity field branch at stage t. Dataset. The dataset used in the experiment is the Max Planck Institute for Informatics (MPII) dataset. The MPII human pose dataset is a benchmark for pose estimation. The dataset includes 25,000 annotated images of more than 40,000 people and contains 410 regularly performed human activities with corresponding labels. These images are pulled from YouTube. The test set includes annotations for body part occlusion, 3D torso, and head orientation. The MPII dataset is used to train the network model and detect the key point coordinates of the head, shoulders, wrists, hips, knees, and ankles. In addition, many countries and regions include one-minute rope skipping in the middle school entrance examination. The rope skipping dataset used is from an experimental high school (Figure 4). In Figure 4, professionals get data labels by analyzing videos and labeling them by time segments. The labels of the data are set to six labels, namely: the body is kept upright, the left wrist is shaken, the left arm is tightened against the body, the right wrist is shaken, the right arm is tightened against the body, and the left and right arms are kept horizontal, as shown in Figure 5. The selection of these labels is based on the skipping skills assessed in the high school entrance examination. In Figure 5, because the captured images must be the same size, the dataset needs to be preprocessed. Preprocessing of video frames includes setting video frames of different sizes to the same size; the video height and width are set to 530 pixels and 460 pixels, respectively. The basic information of each skipping subject, including name, age, gender, height, and weight, is recorded and saved. The key point detection method is used to obtain the coordinate positions of 14 joint points: the nose, neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right hip, right knee, right ankle, left hip, left knee, and left ankle. For the obtained 14 key point coordinates, a Cartesian coordinate system is defined with the center of gravity of the triangle formed by the three points of the left hip, right hip, and neck as the origin. The coordinate matrix obtained in each frame is accumulated to obtain the cumulative coordinate matrix of each video. Setting of the Experimental Environment. In the process of pose estimation and skipping action analysis, the code is written in PyCharm and Jupyter Notebook. An Intel Core i7-8700K (3.70 GHz) CPU is used, with 32 GB of memory and a GTX 1080Ti GPU. In the LSTM, the number of units is set to 64 and the batch size is set to 100. In order to choose an appropriate sliding window length, the sliding window length is set to 10 frames, 15 frames, and 20 frames of cumulative coordinates, and the step size is set so that consecutive windows have 30% data overlap.
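A small helper sketch for segmenting the accumulated coordinate sequence into sliding windows with 30% overlap, as in the experimental setup above, is shown below. The array layout (frames by coordinate features) is an assumption for illustration.

```python
import numpy as np

def sliding_windows(sequence, window_len=15, overlap=0.3):
    """Split a (frames, features) array into windows; consecutive windows share 30% of their frames."""
    sequence = np.asarray(sequence)
    step = max(1, int(round(window_len * (1.0 - overlap))))
    windows = [sequence[s:s + window_len]
               for s in range(0, len(sequence) - window_len + 1, step)]
    return np.stack(windows) if windows else np.empty((0, window_len, sequence.shape[1]))

# Example: 60 seconds of video at 30 fps, 28 coordinate features (14 key points x 2) per frame
demo = np.zeros((1800, 28))
print(sliding_windows(demo, window_len=15).shape)   # (n_windows, 15, 28)
```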
The evaluation indicators of human key point detection are object key point similarity (OKS), average precision (AP), and mean average precision (MAP). Among them, OKS calculates the similarity between the ground-truth and the predicted human key points, as shown in the following equation:

OKS_p = Σ_i [ exp(−d_pi² / (2·S_p²·σ_i²)) · δ(v_pi > 0) ] / Σ_i δ(v_pi > 0),

where p represents the id of a person in the ground truth; i (pi) represents the key point id of that person; d_pi is the Euclidean distance between the predicted and ground-truth positions of key point i; v_pi = 0 means that the key point is not labeled; v_pi = 1 means that the key point is occluded but marked; v_pi = 2 means that the key point is unobstructed and marked; S_p is the square root of the size of the area occupied by the person; σ_i represents the normalization factor of the i-th bone point; and δ(·) is 1 when the condition holds and 0 otherwise. Common evaluation indicators for multi-label classification are accuracy, F1-score, precision, and recall. The calculation of each indicator is shown in equations (5)-(8):

accuracy = (TP + TN) / (TP + TN + FP + FN),
precision = TP / (TP + FP),
recall = TP / (TP + FN),
F1 = 2 · precision · recall / (precision + recall),

where true positives (TP) are positive samples that are correctly identified as positive samples; true negatives (TN) are negative samples that are correctly identified as negative samples; false positives (FP) are negative samples that are incorrectly identified as positive samples; and false negatives (FN) are positive samples that are misidentified as negative samples.

Analysis of the Experimental Results of Pose Estimation. In order to improve the accuracy and efficiency of pose estimation, the feature extraction model VGG19 in the initial stage of OpenPose is replaced by the lightweight network model MobileNetV2, and weights and penalties are introduced into the final loss function. The experimental results on the MPII dataset are shown in Figure 6. In Figure 6, the MAP of OpenPose is 74.5%, the MAP of the improved OpenPose method is 77.8%, and the MAP is therefore improved by 3.3%. After replacing the feature extraction model VGG19 in the initial stage of OpenPose with the lightweight network model MobileNetV2, the pose estimation accuracy has been improved and the total number of parameters of the model has been reduced. Therefore, the improved OpenPose model structure can meet the experimental requirements.

Analysis of the Experimental Results of Multi-Label Classification of Rope Skipping. Since the rope skipping process is a long-term sequence analysis process, it is necessary to segment the data through a sliding window. In this experiment, three groups of 10-frame cumulative coordinates, 15-frame cumulative coordinates, and 20-frame cumulative coordinates are set up for analysis to find the appropriate sliding window length. The step size is set to 30% data overlap. The specific experimental results are shown in Figure 7. In Figure 7, when the sliding window length is 10, the accuracy of ALSTM-LSTM is 85.36% and the F1 value is 83.13%. When the sliding window length is 15, the accuracy of ALSTM-LSTM is 89.55% and the F1 value is 88.75%. When the sliding window length is 20, the accuracy of ALSTM-LSTM is 87.77% and the F1 value is 86.43%. Therefore, ALSTM-LSTM performs best when the sliding window length is 15.

The support vector machine (SVM) is a linear classifier model defined in feature space; its training can be solved effectively as a quadratic programming problem. The performance of the proposed ALSTM-LSTM model and the SVM model is compared to further verify and optimize the proposed model. The results are given in Figure 8.
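A small NumPy sketch of the evaluation measures reconstructed above (OKS, and the multi-label accuracy/precision/recall/F1 counts) is given below; it follows the standard definitions rather than the authors' own code, and all function names are illustrative.

```python
# Sketch (NumPy) of the evaluation measures discussed above: object key point
# similarity (OKS) for pose estimation, and accuracy / precision / recall / F1
# for multi-label classification. Standard definitions; not the authors' code.
import numpy as np

def oks(d, s_p, sigma, v):
    """d: distances between predicted and true key points, s_p: sqrt of the
    person's area, sigma: per-joint normalisation factors, v: visibility flags
    (0 = not labelled, 1 = labelled but occluded, 2 = labelled and visible)."""
    labelled = v > 0                         # delta(v_pi > 0)
    k = np.exp(-d**2 / (2 * s_p**2 * sigma**2))
    return k[labelled].sum() / max(labelled.sum(), 1)

def multilabel_counts(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Example: perfect predictions on three labelled joints give OKS = 1.
print(oks(np.zeros(3), s_p=1.0, sigma=np.full(3, 0.1), v=np.array([2, 2, 1])))
```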
In Figure 8, in the multi-label classification problem of skipping action analysis, the proposed ALSTM-LSTM model achieved an accuracy of 96.1%, a precision of 96.5%, a recall of 95%, and an F1 value of 95.7%. DL models outperform traditional machine learning algorithms. In the rope skipping action analysis, the ALSTM-LSTM architecture provides the best performance on all metrics, and the SVM has the worst performance.

To sum up, this work studies the recognition and evaluation of 2D human poses based on OpenPose and designs and analyzes the ALSTM-LSTM network framework by combining it with the multi-label classification method. The results show that the MAP of the OpenPose method before and after optimization is 74.5% and 77.8%, respectively, an improvement of 3.3%. After replacing the feature extraction model VGG19 in the initial stage of OpenPose with the lightweight network model MobileNetV2, the pose estimation accuracy is improved and the number of model parameters is reduced. Some other scholars' research results are cited below. Ding et al. [28] researched HPE based on multi-feature and rule learning and proposed a new algorithm based on multi-feature and rule learning. Their results showed that feature fusion at the granularity level could reduce the dimension and sparsity of feature sets. Zhang and Callaghan [29] conducted real-time HPE based on an adaptive hybrid classifier. The pose-based adaptive signal segmentation algorithm was combined with a multi-layer perceptron classifier, and various voting methods were combined. A software-based sensor calibration algorithm was designed. The results showed that the adaptive hybrid classifier could improve the accuracy of real-time HPE. To sum up, the proposed improved HPE algorithm can improve the recognition accuracy of the system for human actions.

Conclusion. The HPE task is to recognize the human body through computer image processing algorithms. The applications of HPE involve human behavior understanding, HCI, health monitoring, motion capture, and other fields. Following a review of smart sports, the research is based on a lightweight DL model. OpenPose is used to design and evaluate human poses. The multi-label classification algorithm is used together with the architecture of MobileNetV2. The research content is based on the pose analysis in the scene of shaking feet and skipping rope in the high school entrance examination. Combined with the characteristics of the research scene, the HPE model based on the OpenPose network and the multi-label classification network structure are introduced. In order to improve the efficiency and accuracy of pose analysis, OpenPose is improved based on the lightweight network MobileNetV2. Then, the ALSTM-LSTM model is proposed to analyze whether the body actions are standardized during rope skipping. The results show that the MAP of the OpenPose method based on MobileNetV2 is improved by 3.3% compared to the standard OpenPose method. In the multi-label classification problem of skipping action analysis, the ALSTM-LSTM model achieves 96.1% accuracy, 96.5% precision, 95% recall, and 95.7% F1 value.
The ALSTM-LSTM architecture provides the best performance on all metrics for skipping action analysis. A limitation is that rope skipping in the physics test of the middle school entrance examination is the only research object considered here, although the research method is not limited to this scenario. In future work, more rope skipping videos from different scenes will be collected for the ALSTM-LSTM architecture to provide stronger data support for subsequent action analysis.

Figure 8: Comparison of the ALSTM-LSTM model with other models.

Note on the MobileNetV2 structure table: here, "-" indicates that the structure parameter value does not exist for that network layer. The network first uses a full convolution with 32 convolution kernels and then uses 19 inverted residual structures with linear bottlenecks. Such low-precision computation has stronger robustness. Further, t represents the "spread" factor, i.e., the channel expansion factor of the inverted residual network; c stands for the number of output channels; n refers to the number of repetitions; and s denotes the step length (stride).
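For reference, the inverted residual block with a linear bottleneck summarized in this table note can be sketched in Python (PyTorch) as follows. The ReLU6 activations and batch-normalization placement are the usual MobileNetV2 choices and are assumptions here, since the text does not list them explicitly.

```python
# Minimal sketch (PyTorch) of one inverted residual block with a linear
# bottleneck: a 1x1 expansion convolution (expansion factor t), a 3x3
# depth-wise convolution with stride s, and a linear 1x1 projection back to
# c output channels (no activation after the projection).
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, c_in, c_out, t=6, s=1):
        super().__init__()
        hidden = c_in * t                    # expand to a high-dimensional space
        self.use_skip = (s == 1 and c_in == c_out)
        self.block = nn.Sequential(
            nn.Conv2d(c_in, hidden, 1, bias=False),            # 1x1 expansion
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride=s, padding=1,
                      groups=hidden, bias=False),              # depth-wise 3x3
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, c_out, 1, bias=False),           # linear projection
            nn.BatchNorm2d(c_out),                             # no non-linearity here
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_skip else out

# Example: a block with expansion factor t=6 and stride s=1 on a 32-channel map.
y = InvertedResidual(32, 32, t=6, s=1)(torch.randn(1, 32, 56, 56))
```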
v3-fos-license
2019-09-17T03:03:03.822Z
2019-08-13T00:00:00.000
240846755
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://revistas.ucr.ac.cr/index.php/rbt/article/download/35965/39642", "pdf_hash": "77eee64f2a8d50266b23de5d61a5f2f042125ffb", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42137", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "sha1": "a1f8b61c6fbeb11d6ba8b4b42b4999b92d2c0938", "year": 2019 }
pes2o/s2orc
Review of California sea lion (Zalophus californianus) abundance, and population dynamics in the Gulf of California Introduction: The life history of the California sea lion (Zalophus californianus) in the Gulf of California is marked by a series of important events influencing and modifying its population growth, distribution, and evolution. Despite the fact that this population has been studied since the 1950s, research has been rather punctual and fragmentary. Before 2010, there are only a few surveys conducted simultaneously in all rookeries, thus there is no reliable information on key aspects of life cycle, population trend and potential threats. In the present work we conducted a review of California sea lion life history and environmental changes in the Gulf of California thorough a collation of survey data encompassing the last 37 years. Objective: Our aim was focused on identifying shortand long-term processes potentially acting on the population, and hopefully improve knowledge about the population trend and status using different points of view. Methods: We collected and analyzed population survey data from different sources since the 1970s to 2018: published papers, master’s and doctoral thesis, in addition to technical reports. The survey data are organized in sections corresponding with crucial population life history events. Results: Considering a long-time period the population size appears to be stable with zero growth. Cyclic interannual fluctuation seem to denote a certain dependence with climatic factors, not directly with El Niño, but with sea surface temperature anomalies that determine prey availability. However, many doubts persist about the incidence of different local environmental factors on gender and age, particularly related with juvenile recruitment and female survival rate. Conclusions: In conclusion, more information is required based on seasonal surveys, life cycle, regional environmental variation. Statistical errors need to be assessed and monitoring methods should be standardized and must be considered to ascertain shortand long-term population and colony spatial-temporal patterns. The life history of California sea lion (Zalophus californianus) in the Gulf of California (henceforth, the Gulf) is marked by a series of important events influencing and modifying its population growth, distribution, and evolution. The ancient settlement of local California sea lion demes in the Gulf dates to three million years ago, during the late Pleistocene or earlier (Maldonado, Orta-Dávila, Stewart, Geffen, & Wayne, 1995). Sea lions started to occupy the Gulf due to high primary productivity and large abundance of sardine (Sardinops sagax), a typical California sea lion prey, particularly around the Midriff island region (Fig. 1). The Gulf of California sea lion population progressively became isolated from the US California and Pacific Baja California populations, as demonstrated by several genetic studies (Fig. 1, Maldonado et al.,1995;Bowen et al., 2006;Schramm, 2002). Over the past several centuries, the Gulf of California sea lion population stabilized into 13 breeding colonies distributed along the Gulf (Fig. 1). Genetic analysis identified three groups in the same number of regions: North, Center and South (Schramm, 2002), seemingly related with the Gulf's physical environmental conditions. Females are confined near specific reproductive rookeries (high philopatry) (Maldonado et al., 1995). 
However, this division is not completely clear: only a few colonies have been genetically examined for each region. Using a multivariate model including genetics, diet and osteoarthritis data and other variables, Szteren and Aurioles-Gamboa (2011) divided the Gulf population into four demes located in the same number of eco-regions: North, Ángel de la Guarda (A.G.), Central and Southern Gulf. González-Suárez, Aurioles-Gamboa, and Gerber (2010) and Ward et al. (2009) obtained similar results yet considered that the distance between colonies is the main cause for their division. Nevertheless, uncertainties remain about the reasons why the sea lion population in the Gulf is comprised of distinct rookeries and where the true limits are. In the present study we use the Szteren and Aurioles-Gamboa (2011) eco-regional distribution (ca., Fig. 1). A life table produced by Hernández-Camacho (2001) for California sea lions in the Gulf, with further adjustments in survival rates (Hernández-Camacho, Aurioles-Gamboa, Laake, & Gerber, 2008), was used as a reference for many studies. This life table has some problems because it uses the growth rate found for Los Islotes, the Southern-most rookery (ca., Fig. 1), with environmental conditions very different compared to other sites. This was confirmed in 2015 when the same authors intended to apply surrogate data to estimate the growth rate of two colonies located at different sites, with doubtful results attributed to differences in survival rates between rookeries (see below). In preceding years, Inclán (1999) recognized the need to use a dynamic life table and different survival rates for the rookeries in the Gulf population, given the significant differences between ecoregions. In fact, Ward et al. (2009) used a multivariate state-space model and found synchrony in growth rates and variability between colonies from the same region and high temporal correlation among the three most Northern ecoregions. Scarcity of census data precluded a long-term evaluation of population patterns, which could contribute to misleading results of population viability analyses (Chirakkal and Gerber, 2010) and our perception about the Gulf of California sea lion population status. Szteren, Aurioles-Gamboa and Gerber (2006) published a study about survival rates and found a 20 % decrease between 1976 and 2004; this alarmed the scientific community. Despite that negative trend, another study considered the population to be stable over 37 years. This leads one to conclude that the results of a given study might vary depending on the method and time period considered. In the present work we conducted a thorough review encompassing the last 37 years of California sea lion life history and environment in the Gulf of California. This constituted an opportunity to focus on short- and long-term processes acting on the population and hopefully improve knowledge about the population trend and status and unveil some latent aspects hardly recognizable using a partial vision.

MATERIALS AND METHODS

Data from different sources from the 1970s to 2018 were collected and analyzed: published papers, master's and doctoral theses, and technical reports. Particularly, for the last eight years PROMOBI and PROMANP reports (CONANP, unpublished data, available upon request) were used. Data were organized in sections according to crucial population life history events for better understanding the evolution of environmental and anthropogenic factors.

Period (1533-1970). First population collapse: California sea lions were hunted historically 1 000-2 000 years before European colonization (Zavala-González & Mellink, 2000). Some 500 years ago, the Comcáac (Seri) people settled along the coast of Sonora, where they hunted sea lions for livelihood around Ángel de la Guarda and San Esteban, the largest islands in the Gulf of California. In 1872, sea lion hunting turned into a commercial activity to extract a low-price oil from the blubber and for the leather industry. Between 1860 and 1888 there was an intensive exploitation (Ronald, Selley, & Healey, 1982, cited in Zavala-González & Mellink, 2000), and sea lions started to decrease significantly in the late 1870s. In the late XIX Century, the federal government of Mexico granted sea lion hunting permits to Mexican citizens, which were sold to foreigners. During President Álvaro Obregón's government, hunting was declared open in small amounts per permit. In 1930, hunting was banned, but in 1937 sea lion flesh started to be used as bait for shark fishing. Sea lion exploitation was extended until the 1970s and came to an end in 1982, yet in the 1990s sea lion flesh was still used as bait (Zavala-González & Mellink, 2000). The Midriff island region was the most exploited area for sea lions, and adult males were the preferred target. The Mexican federal fisheries agency (Dirección General de Pesca) estimated that an average of 400 adult males were killed annually in the Gulf of California during the mid XX Century (Zavala-González & Mellink, 2000). In 1966, 10 366 adult sea lions were counted in seven colonies, and in 1991 the count rose to 17 486 animals in the same colonies (Zavala-González, 1993). Despite the large population decline in a 25-year period, apparently no genetic bottleneck took place (González-Suárez, Aurioles-Gamboa, & Gerber, 2010). In the early 1990s, California sea lions were still subject to capture in small numbers to sell them to dolphinaria and aquaria in Mexico and abroad. To date only stranded individuals treated for illness or malnutrition are collected via special permits for display at dolphinaria and aquaria.
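As a quick arithmetic check of the counts just cited (10 366 animals in 1966 and 17 486 in 1991 for the same seven colonies), the implied average annual growth rate over that 25-year interval can be computed with a simple exponential growth model; it comes out at roughly 2.1 % per year, consistent with the 1966-91 rate reported later in this review. The short Python snippet below is only an illustrative back-of-the-envelope calculation, not part of the original analyses.

```python
# Quick check of the implied annual growth rate between the 1966 and 1991
# censuses of the same seven colonies, assuming simple exponential growth.
n0, nt, years = 10_366, 17_486, 1991 - 1966
annual_rate = (nt / n0) ** (1 / years) - 1
print(f"annual growth rate ≈ {annual_rate:.1%}")   # ≈ 2.1 %
```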
First population collapse: California sea lions were hunted historically 1 000-2 000 years before European colonization (Zavala-González & Mellink, 2000). Some 500 years ago, the Comcáac (Seri) people settled along the coast of Sonora, where they hunted sea lions for livelihood around Ángel de la Guarda and San Esteban, the largest islands in the Gulf of California. In 1872, sea lion hunting turned into a commercial activity to extract a low-price oil from the blubber and for the leather industry. Between 1860 and 1888 there was an intensive exploitation (Ronald, Selley, Healey, 1982cited in Zavala-González & Mellink, 2000, and sea lions started to decrease significantly in the late 1870s. In the late XIX Century, the federal government of Mexico granted sea lion hunting permits to Mexican citizens, which were sold to foreigners. During President Álvaro Obregon's government hunting was declared open in small amounts per permit. Hunting In 1930, the hunting was banned, but in 1937 the sea lion flesh started to be used as bait for shark fishing. The sea lion exploitation was extended until the 1970s, and came to an end in 1982, yet in the 1990s the sea lion flesh was still used as bait (Zavala-González & Mellink, 2000). The Midriff island region was the most exploited area for sea lions, and adult males were the preferred target. The Mexican federal fisheries agency (Dirección General de Pesca) estimated that an average of 400 adult males were killed annually in the Gulf of California during the mid XX Century (Zavala-González & Mellink, 2000). In 1966, 10 366 adult sea lions were counted in seven colonies, and in 1991 the count raised to 17 486 animals in the same colonies (Zavala-González, 1993). Despite the large population decline in a 25-year period, apparently no genetic bottle neck took place (González, Aurioles-Gamboa, Gerber, 2010). In the early 1990s, California sea lions were still subject to capture in small numbers to sell them to dolphinaria and aquaria in Mexico and abroad. To date only stranded individuals treated for illness or malnutrition are collected via special permits in displays at dolphinaria and aquaria. Post -hunting period . Population recovery: After the hunting period, between 1970 and 1990, an important California sea lion recovery took place both in number of individuals and rookeries. From 1942-53, the population rose from 6 273-12 902 individuals; then there were 10 366 sea lions in 1965 (Lluch-Belda,1969;Zavala-González, 1999); 15 140 (14 colonies) in 1979; and 14 389 (in 12 colonies) in 1981 (Le Boeuf et al., 1983). According to Aurioles-Gamboa (1988), at the end of the 1970s the actual number of sea lions were 20 000 adjusted with correction factors to 25 000 animals. The population numbers continued to growth constantly to the point that some rookeries increased above 30 % from 1965-79 (Le Boeuf et al.,1983). During this period, the highest occupation index (total number of occupied island x 100/total number of islands), and pup production was observed in the Northern and Central Gulf up to San Esteban Island. The highest presence of sea lions in these regions was probably related to food availability since the Monterrey sardine (Sardinops sagax), Pacific mackerel (Scomber japonicus), herring (Opisthonema spp.) and Northern anchovy (Engraulis mordax) were concentrated in the summer around Ángel de la Guarda and Tiburón islands (Midriff area or Central region) (Aurioles-Gamboa, 1988). 
The increase of sea lion population was particularly considered to be associated with the dramatic rise of Monterrey sardine biomass during the 1970s until 1988-89 (Cisneros-Mata, Nevárez, Hammann, 1995Zavala-González, 1999). During 1982-83, one of the most intense El Niño events occurred in recorded history, causing a drastic decrease of US California and Pacific Baja California sea lion populations. The pup count decreased 35 %, and did not return to pre-El Niño levels until four years later (Lowry & Maravilla-Chávez, 2005). This decrease was related with longer female foraging trips due to lower prey availability caused by El Niño (Trillmich & Ono, 1991), which probably resulted in the worsening of pups' health and their death. Conversely, there were no apparent or significant negative effects in the Gulf's California sea lion population. Hernández-Camacho et al. (2008) documented at Los Islotes a low pup survival rate during 1980-81. However, during the El Niño the rate was very high, therefore they assumed a null influence of El Niño in the Gulf population. Other authors sustain that the environmental conditions inside the Gulf were not influenced by El Niño and the sea lion population was not affected, at least female fecundity, pup body condition index, juvenile survival rate or female foraging trip duration that remained similar (Lara-Lara, Holgin-Valdéz, Jiménez-Pérez, 1984;Aurioles-Gamboa & Le Boeuf, 1991;Samaniego-Herrera & Aurioles-Gamboa, 2000;Hernández-Camacho, 2001). Nevertheless, this hipothesis may not be completely certain, because there is evidence that there was a decrease in the number of adult males during El Niño associated with a long phase of warm temperature anomaly during 1979 and 1983 (Shirasago-Germán, Pérez-Lezama, Chávez and García-Morales, 2015). Hernández-Camacho (2001), recorded an increased male mortality in 1983 and assumed that, due to their higher metabolic rate and precocious weaning, males were more prone to environmental factor influence than females. This was confirmed by Aurioles-Gamboa and Zavala-González (1994) in 1982-83 in US Southern California when they found an increase in 6-to-12-month age juvenile deaths. Numbers in the Northern and Central Gulf had increased from the 1960s and during 1966-91 the population rose at an annual rate of 2.1 % (Zavala-González, 1993;. After the strong El Niño event, in 1984 the California sea lion population began to decline ( Fig. 2) throughout most of the 1990s. Unfortunately, the data reported are rather imprecise due to differences in the number of colonies surveyed, method of error estimation, and monitoring protocol. Furthermore, to obtain several total annual sea lion population sizes, some missing survey values were replaced using data corresponding to other years (Aurioles-Gamboa & Zavala-González, 1994; Szteren et al. 2006). However, this could result in an erroneous conclusion regarding the actual population size due to high interannual variability of enviromental conditions. Therefore, it is difficult to determine annual population size and trend particularly because the raw data are difficult to access or unavailable. The increase in the Gulf's California sea lion population during the 1980s was attributed to a concomitant recovery of the Monterrey sardine. 
Further, a decrease in sardine recruitment in 1984-86 (Cisneros-Mata et al., 1995; Zavala-González, 1999), due to the El Niño event in 1982-84 and a positive sea surface temperature anomaly in 1989-90, together with excess sardine fishing effort during 1990-91, might have resulted in a collapse of sardine biomass at the onset of the 1990s (Cisneros-Mata et al., 1995). At the same time, the sea lion population began to decline. Some authors hypothesized that the Gulf's California sea lion abundance trend is directly related to sardine biomass rather than to climatic events per se (Aurioles-Gamboa & Zavala-González, 1994; Szteren et al., 2006).

Global warming and fishing pressure (1990-2000). Second population collapse: As already mentioned, from 1989 to 1992 the Monterrey sardine catch in the Gulf dropped from 292 000 metric tons (mt) in 1988-89 to 7 500 mt in 1992-93 (Cisneros-Mata et al., 1995) and did not recover until 1995. Consequently, sardine predators declined during 1990-94 (Zavala-González, 1999), and simultaneously the Ángel de la Guarda Island (A.G.) California sea lion colonies started to decrease. In contrast to other colonies, the A.G. colonies showed a dominance of sardine in their diet (García-Rodríguez & Aurioles-Gamboa, 2004), which leads to the conclusion that these sea lion colonies are more dependent on prey availability (Szteren & Aurioles-Gamboa, 2006). Aurioles-Gamboa and García-Rodríguez (1999) found that during the 1980s, pup abundance in Los Cantiles considerably increased in conjunction with the Monterrey sardine biomass peak and rapidly decreased with the collapse of the sardine during the early 1990s; therefore they assumed the existence of some relation between the two events. The latter is not totally supported by Zavala-González (1999), who did not relate the sea lion abundance drop in Los Cantiles to the sardine biomass because the pups' nutritional condition remained constant during 1988-1993. The decline of the rookery was most likely related to emigration of sea lions out of the area rather than to individual mortality (Heat et al., 1994, cited in Zavala-González, 1999); either way, pup production had begun to decline years before (Zavala-González, 1999). To summarize, a probable cause of the California sea lion decrease in the Gulf was the increase of sardine fishing and a reduction of the sardine's natural productivity due to changing oceanographic conditions (Zavala-González, 1999). In support of this hypothesis, in 1993 the female foraging trips at Los Cantiles were longer than during 1988, probably related to prey availability (Heat et al., 1994, cited in Zavala-González, 1999). This might explain the reduction in sea lion female numbers during the onset of the 1990s in the central Gulf rookeries (Zavala-González, 1999). Although El Niño 1992-93 has been considered a moderate event, it coincided with a long period of positive sea surface temperature anomalies (Zavala-González, 1999), which resulted in a major Gulf ecosystem effect. From 1960 to 1990, in the Central Gulf and Ángel de la Guarda (A.G.), the sea lion population increased 2.1 % per year (Zavala-González, 1993). From 1984 to 1991, in eight colonies of this area the population was 12 600, and in 1990-1993 it dropped 34.8 % (Maravilla-Chávez et al., 1997, cited in Zavala-González, 1999).
As already mentioned, in these two areas the population had begun to decline since the 1980s but while colonies in the San Pedro Nolasco island and Los Cantiles ceased to decline in 1993 (58.6 and 29.9 %, respectively), the other colonies continued to show decreased sea lion numbers until 1997 (Zavala-González, 1999). Zavala-González (1999) assumed that since 1984, environmental changes had occurred in the Midriff island region which increased in 1990-93 and affected the sea lion population. This explains the 34.8 % decrease in total numbers, 37.1 % of females in 1991-92, 38 % of males in 1992-93 and 46.7 % of pups in 1990-93. In contrast, during 1985-1992Ward et al. (2009 found a population drop greater than expected according to Zavala-González's hypothesis. Conversely, the US and Pacific Baja California sea lion populations did not experience the same process and, although decreased during El Niño 1992-93, the pup production recovered rapidly than in 1983. After having declined to just 8 902 individuals in 1993, the central and A.G. sea lion population remained constant around 9 237 individuals until 1997. Surprisingly, during the strong 1997-98 El Niño event the population did not declined compared to the US population where the pup production decreased significantly (Lowry & Maravilla-Chávez, 2005). In the Central Gulf region, during 1993-1997 the pup production increased by 54.1 % (Zavala-González 1999). In Los Islotes, pup numbers increased, and their body condition was similar to that in other years (Szteren et al., 2006;Hernández-Camacho et al., 2008). Neither of both strong El Niño events in 1983 and 1997 impacted the population, and it has been proposed that the Gulf's environmental conditions during 1997 favored enough prey availability (Hernández-Camacho et al., 2008). This has not been substantiated with data. For 1998 in San Jorge Island (Northern Gulf), Mellink (2001) found an anomalous increase of adult individuals (6 717) without variation in pup numbers (793) probably due to immigration of individuals from the central and Southern rookeries. Later, during the 1999 La Niña, the author recorded more pups than in previous years (1 053). Yet the number of adults (2 953) varied slightly with respect to previous years (before El Niño), probably because the return of individuals to their original rookeries. This hypothesis of sea lion population movements to more favorable sites resembles the shift observed in seabirds nesting from Rasa Island south of A.G. to San Jorge Island during 1997-98 (Velarde, Ezcurra, Horn, & Patton, 2015). Zavala-González (1999) interpreted the increase in sea lion pup numbers as a strategy developed over short time periods to confront strong environmental pressures, such as what occurred in the US population (Lowry et al., 1992cited in Zavala-González, 1999. A decrease of pups and juveniles was not seen at Los Islotes during 1997-98; Hernández-Camacho (2001) found fewer adult males and females. Although this author interpreted the latter as being due to extended feeding trips and not to mortality, Samaniego-Herrera (1999) did not report changes in food searching times but did find a fecundity increase for the same time period. This means that a greater number of pups as compared to females were observed in the rookery, which seems to confirm Hernández-Camacho's theory. 
The 35 % population decrease during the 1980s and 1990s should not be attributed neither to pups -that showed an increase despite intense interannual variability, nor to adult males -that also increased-but to females abundance which had been decreasing over 14 years. This process generated in the Central Gulf rookeries a change in sex ratio from 1:10.9 (male:female) in 1984, to 1:4.3 in 1997, probably related to female mortality and a sharp drop in reproduction activity (Zavala-González, 1999). However, in San Pedro Nolasco rookery in 1999, relatively older females gave birth which suggested an increase in juvenile mortality (Aurioles-Gamboa, Godínez-Reyes, Hernández-Camacho, & Santos del-Prado-Gasca, 2011). During warm sea surface temperature periods (1983, 1987, 1992 and 1998) each age class and sex behaves in different way (Shirasago-Germán et al., 2015). These authors found that during El Niño events adult female numbers decreased more than all male age classes and that male were more likely to decline during prolonged warm periods for the reasons previously discussed. They found a strong negative correlation between El Niño and total adult males and females and interpreted this as evidence of the impact of environmental factors on breeding California sea lion colonies. In addition to climatic and ecological phenomena, the intensification of fishing effort could have contributed to the population drop of both, small pelagic fishes and California sea lions. During the 1980s and 1990s an increase of sardine fishing occurred, particularly in the A.G. region during summer, which coincides with the California sea lion reproductive season. Furthermore, in the same period, long line and entanglement net shark artisanal fishing notably increased, generating direct and indirect problems for the California sea lion population (Zavala-González, 1999). Sea lions were often found entangled in fishing gears and provoked an increase in fishermen unrest and sea lion shooting occurrences (Gallo-Reynoso, unpublished data). Also, fishers started to use sea lion flesh as bait for shark fishing (Gallo-Reynoso, 1986; Delgado-Estrella, Ortega-Ortíz, & Sanchez-Ríos, 1994). The first research on the above-mentioned topic was conducted in 1981 in Guaymas Sonora (Fleischer & Cervantes, 1990cited in Zavala-González, 1999 and at the end of 1980s (Aguayo, 1989cited in Zavala-González, 1999. Delgado et al. (1994) reported an increase of sea lion entanglement when, due to the decrease of shrimp productivity, fishers began to use monofilament nets for sardine and shark fishing in the A.G. region and near Los Islotes colonies. That same year, sea lions were declared under special protection by Mexican government due to the increased anthropogenic threats (Zavala-González, 1999). In 1997, the situation worsened because the artisanal fishing effort further increased during El Niño. That year in A.G. and the Central Gulf, the entanglement index (percentage of entangled animals/ total population) increased to 1.2 % (Zavala-González & Mellink, 1997). The rookery with a major risk for juveniles and females was Los Islotes where the entanglement index oscillated between 3.9 and 8.0 % until 2004 (Aurioles-Gamboa et al., 2011); interistingly, the sea lion population continued to increase. In some colonies of Northern and Midriff regions, a relationship was found with croacker (Pisces, Scianidae) commercial fishing and a high index of entanglement (Aurioles-Gamboa & Porras, 2006 cited in Aurioles-Gamboa et al., 2011). 
Aurioles-Gamboa, García-Rodríguez, Ramírez-Rodríguez, and Hernández-Camacho (2003) found in Los Islotes sea lion diets an low overlap of 5 %. The commercially important fish species was spotted sand bass (Paralabrax maculatofasciatus). Finally, during 1998-99 in San Pedro Mártir, Aurioles-Gamboa et al. (2011) registered a 10 % entanglement during the peak of shark fishing and assumed that shark overexploitation had generated an increase of giant squid (Dosidicus gigas) that began to compete with sea lions for sardine. These authors concluded that sea lions migrated to other areas which resulted in a 61 % population drop at the San Pedro Nolasco sea lion colony in 2000. After the shark fishing decreased, in 2003 the rookery recovered considerably in numbers (Aurioles-Gamboa at al., 2011). Interestingly, Zavala-González (1999) did not report important sea lion population changes over a 55-year time period analyzed: a stable age distribution and pup growth rate was interpreted by this author to reflect a constant population size. The colony of San Pedro Nolasco Island recovered during 1992 and 1997 and stabilized in 2003. Colonies in San Esteban and Los Islotes showed the highest fecundity rate (up to75 %) and Lobos Island and Rasito the lowest (down to 26 %). Colonies at Angel de la Guarda (A.G.), Los Cantiles and Los Machos were the most affected because the net population growth rate was less than one (Szteren et al., 2006). Total population growth decreased during the period of 1994 to 2004. Szteren et al. (2006) attributed the population decline to low prey availability, particularly sardine, as showed in Central Gulf and A.G. trends (García-Rodríguez & Aurioles-Gamboa, 2004) (Fig. 3). According to Pérez-Lezama (2010), Szteren's (2006) data represented a segment of the cyclic population abundance and not a true decrease. Furthermore, Aurioles-Gamboa & Zavala-González's (1994) survey included more colonies censed in different periods. Abundance numbers in 2004 as compared to 1979 or 1997 showed a decrease of 4.8 and 7.9 %, respectively. Including the corrections due to lack of colonies in Szteren's (2006) data (they used other years' numbers for two colonies not censused) the average varies from -2.9 to 0.65 %. Pérez-Lezama (2010) agreed with Szteren et al. (2006) regarding a relation between a drop of sea lion numbers and chlorophyll a concentration and prey availability but attributed the decline to local feeding emigration of some sea lion age-classes, particularly juveniles, to more productive areas. This explains the need to consider not only short-time frames but also the long-term patterns and differences within regions and between age classes. This last aspect was widely demonstrated by Hernández-Camacho et al. (2015) who applied birth and adult survival rate surrogate data (obtained from Los Islotes) to estimate population trend of different colonies: survival rate fitted well between observed and surrogate data for San Jorge Island but not for Granito. This error yielded an estimate of adult female survival rate lower than that for juveniles which is unusual for large mammals. High energy demand for gestation and the need to remain near the reproductive site for lactation could result in a major adult female mortality rate in an area with environmental stress such as the Midriff region islands. The mechanisms for the latter are unknown, and future investigations should focus on estimating adult survival rates and mortality. 
Neglecting differences in age-class survival rates could mislead interpretations in large-scale population trend determination; surrogate data should be used only with colonies that show a similar trend and belong to the same area (Hernández-Camacho, Bakker, Aurioles-Gamboa, Laake, & Gerber, 2015). Ward et al. (2009) proposed a multivariate state-space model that consider the different trends based on rookery's distribution in the Gulf. This approach allows to appreciate spatial patterns of synchrony and correlation. These authors found that all northern colonies have similar trends and the southern ones show a negative correlation with all other colonies. Due to the independence of the Southern Gulf region, Ward et al. (2009) concluded that there is a low risk of decline for the whole Gulf California sea lion population yet in the future, the center of abundance would probably shift southwards with an increasing contribution to the total population from 20 % in 2006 to 33 % in 2030. The Blob (2010-2016). Fast recovery and third population collapse: during 2010 a monitoring project took place by CONANP in collaboration with Mexican research institutions such as Centro Interdisciplinario de Ciencias Marinas (CICIMAR), and Centro de Investigación en Alimentación y Desarrollo (CIAD). The goal was to obtain data over a period of five consecutive years in all rookeries to better understand the state of the sea lion populations. In 2011 12 157 individuals and 4 089 pups were counted; the fecundity index was 60 %, with good pup body condition index. The majority of the 13 colonies were stable, recovering (San Esteban Island, San Pedro Mártir Island), or increasing (Los Islotes); in San Jorge island the colony was declining albeit a high fecundity rate was also observed. Compared to 2004, 2011 showed a slight decrease of population (13 185), with 4 299 pups . In 2012 the population increased to 12 885 adults and 5 242 pups 1 000 pups more than in year 2011, numbers similar to those dating back to the 1980s. In both years (2011 and 2012) the highest index of fecundity was found in San Jorge Island, Los Cantiles, Los Machos, Farallón de San Ignacio and Los Islotes, although only the last two colonies increased in number (Aurioles-Gamboa & Gallo-Reynoso, 2012). Some colonies experienced an important change in trend: Farallón de San Ignacio showed an increase, after a period of important decline (around 45 %), starting in 2004; after a long constant decrease for 30 years (Szteren et al., 2006;Ward et al., 2009) in 2013 Los Cantiles and Los Machos reached the highest number of the past 15 and 20 years, respectively. That same year, Granito island, El Partido and El Rasito slowly increased in sea lion numbers; conversely, in the latter years Los Islotes continued to increase but at a lower rate (Aurioles-Gamboa, Gallo-Reynoso, & Hernández-Camacho, 2013) (Fig. 4). The entanglement problems registered some changes as well. Before 2004 Los Islotes colony had the highest rate but in recent years the northern and central regions showed the highest entanglement rate, in particular, females (San Jorge) and yearlings. In contrast with 1997 the entanglement rate decreased with respect to the 1990s with a further decline during El Niño 2015. Although apparently this did not affect the growth rate, there is a great level of concern among researchers particularly due to a recent worsening again of the problem in risk areas as Midriff region (Aurioles- Gamboa & Gallo-Reynoso, 2012;Gallo-Reynoso, 2015). 
Until 2013, growth and fecundity rates had remained stable over the previous 20 years (Szteren et al., 2006; Ward et al., 2009; Aurioles-Gamboa et al., 2013). The problems faced by the California sea lion population began in 2014, when the total number (12 045) and pup production (2 878) decreased 18 and 45 %, respectively, as compared with 2012. Even though pup production remained within range, only San Pedro Mártir registered one of the lowest values of the 20 years prior to 2012. In 2014 the fecundity rate between the colonies was more variable and lower than in 2012 (30 to 100 %). In 2015, in 11 colonies, the total number of individuals was 7 966 (17 807 estimated), 16 % less than in 2014, but pup production increased 19 % (3 070, adjusted to 6 748) (Gallo-Reynoso, Hernández-Camacho et al., unpublished data, cited in Pelayo-González, 2018). In 2016 the joint monitoring project came to an end and the institutions did not share their information. Consequently, it is impossible to know each rookery's population trend for that year. Partial data from another CONANP monitoring project show a slight improvement of the sea lion population in five colonies distributed in different areas of the Gulf, except the A.G. region (Gallo-Reynoso, Aurioles-Gamboa, & Hernández-Camacho, 2016). To summarize, between 1979 and 2016 an important decrease of the total Gulf California sea lion population was estimated at 44 %, and of pup production at 36 %, with values below the historical average in the Northern and Central Gulf during 2015-16 (Fig. 4) (Hernández-Camacho et al., unpublished data, cited in Pelayo-González, 2018). Each colony shows a different growth rate and risk of extinction or extirpation: increasing at Rocas Consag and Los Islotes; decreasing at Lobos Island, El Granito Island, Los Cantiles, Los Machos, El Rasito and San Pedro Nolasco; and stable at San Jorge Island, El Partido, San Esteban, San Pedro Mártir and El Farallón de San Ignacio (Pelayo-González, 2018) (Fig. 3).

[Figure caption fragment: net value; total population number estimated during the period 1990-93 (Aurioles-Gamboa, 1988; Zavala-González, 1990 and 1993; Aurioles-Gamboa & Zavala-González, 1994).]

This result could reflect the state of the Gulf's ecosystem during 2011-2016. After two consecutive La Niña events, in 2013 and later in 2014 an important sea surface temperature anomaly of around 1-4 °C, named "The Blob" (NASA), developed in the northern Pacific Ocean; it generated a decrease in primary productivity and in California sea lion prey availability (Elorriaga-Verplancken, Ferretto, & Angell, 2015; Kintisch, 2015). After that, in 2015-2016, a strong El Niño event also developed (NASA). This series of climatic events determined profound ecosystem changes with important consequences. The sardine catch decreased from half a million mt in 2008-2009 to 3 500 mt in 2013-14; migratory birds in 2015-2016 (in contrast to El Niño 1997) did not nest on Rasa Island but in the Southern California Bight (US). US California sea lion abundance from 1975 to 2014 dramatically increased from 50 000 to 340 000 (McClatchie et al., 2016; Laake, Lowry, Delong, Melin, & Carretta, 2018). In 2011 the number of pups was the highest ever registered (61 943; Carretta et al., 2016), but during 2013-2016 the population declined significantly. In February and March of 2013 and 2015, a massive stranding of starving sea lion pups, termed an "unusual mortality event", was observed on the US Pacific west coast (NOAA).
This event also involved the Pacific of Baja California, California sea lion population that showed a decrease of pups and adult females in 2014(Elorriaga et al., 2015 and in the same region in 2015 it was registered an anomalous dispersion of Guadalupe fur seal to northern areas in California, Oregon and British Columbia (Canada) (Aurioles-Gamboa, Rodríguez, Rosas, & Hernández-Camacho, 2017). In 2015, a high number of Guadalupe fur seal juvenile and pups were found stranded along the coast of California (Aurioles-Gamboa et al. 2017). The US California sea lion population suffered a decrease in juvenile numbers (DeLong et al., 2017) with specific low recruitment of twoand five-year old females. Even when in 2012 the pup production was very high in the US, the cohort of two-year old pups was the lowest recorded since 1998 (Laake et al. 2018). Researchers found that this decrease in sea lion population is not directly related to the El Niño event but to sea surface temperature (SST) anomalies that correlate with pup and yearling survival even if not always appreciated due to the presence of biological and anthropogenic factors. The strong relationship seems to indicate an impact of climate change on sea lion population trends particularly in the US stock was found when positive SST anomaly exceeds 1 °C the population stops to increase and up to 2 °C start to decline (Melin, Orr, Harris, Laake, & DeLong, 2012;Laake et al. 2018). Pelayo-González (2018) obtained similar results in the Gulf of California. The author did not find any relation between pup production and El Niño events, but registered positive correlations when the SST anomaly surpassed + 0.5 °C in the northern and 1 °C in the central region; no relationship was found for the southern region. Therefore, the observed sea lion decline in the northern and central regions could be related to a change in prey availability generated by positive SST anomalies. In the southern region the highest prey variety (Brusca et al., 2005cited in Pelayo-González, 2018 could buffer this effect on sea lions. Pelayo-González (2018) found a positive relationship with small pelagic fish availability only in Los Cantiles which decreased with the sardine drop. During execution of the PROMOBI projects, differences in the diet composition were found between colonies: in 2012, sea lions in San Jorge Island were the most generalist and San Pedro Nolasco the most specialist. An increase of 15 N stable isotope reflecting a widening of the trophic niche (Gallo-Reynoso, Aurioles-Gamboa, & Hernández-Camacho, 2014) since 2004 was also found in the central region, during 2011-12 in the northern Gulf colonies and in 1999 in Los Islotes. This shows that at least three geographic areas are influenced by different environmental factors forcing sea lions to change their diet (Gallo-Reynoso et al., 2014). Particularly in 2015 it was found an increase of crustaceans which were more frequent than cephalopods, different to what was observed in1997. This might be related to the El Niño event but the presence of crustaceans in California sea lion diet had never been so high (Gallo-Reynoso et al, 2015). DISCUSSION Considering a long-time frame period the sea lion population could appear stable with zero growth due to extreme interannual fluctuations (Gallo-Reynoso Aurioles-Gamboa, Hernández-Camacho, 2015; Pelayo-González, 2018). 
After sea lion hunting came to an end in the 1970s (Zavala-González & Mellink, 2000), the population showed a constant increase until the onset of the 1990s when it drastically decreased. However, there is evidence that this low trend apparently started in 1984 (Zavala-González, 1999). Then, followed a period of relative stability (Pérez-Lezama, 2010) until La Niña event in 2011-2012 (Aurioles-Gamboa & Gallo-Reynoso, 2012) when the population showed an important growth but to levels reported in 1980s. The population began to decrease during El Niño of 2015-2016(Gallo-Reynoso et al., 2015. According to various authors (Aurioles-Gamboa & Zavala-González, 1994;Zavala-González, 1999;Peréz, 2010;Laake et al., 2018;Pelayo-González, 2018), the cyclic population fluctuation could reflect dependence with climatic factors, not directly with El Niño, but with SST anomalies that determine changes in prey availability. This seems to impact more the area where Monterrey sardine (Sardinops sagax) forms an important component of sea lion diet such as the A.G. region (Aurioles-Gamboa & Zavala-González, 1994;Pelayo-González, 2018) thus increasing the probability of extirpation (Gallo-Reynoso et al., 2014). The Southern region represents least concern due to major diet variability, so that it has been expected that in the year 2030 the southern colonies will comprise 33% of the total population (Ward et al., 2009;Pelayo-González, 2018). However, many doubts persist about the influence of environmental site-specific factors on local sea lions particularly related with juvenile recruitment and female survival rate (Zavala-González, 1999;Hernández-Camacho et al., 2008;Hernández-Camacho et al., 2015). Given the present state of knowledge, due to numerous inconsistencies reported in this review, it is difficult to conduct an assessment of California sea lion population in the Gulf. Particularly, because the data pre 1990's are rather vague so it is difficult to compare these data with more recent investigations. Furthermore, between 1998 to 2010 there is an important information gap (the only year completed was 2005), so it is impossible to compare data among all ecoregions except for few years. The only information comparable consist in the data collected during 2011-2016 because they show spatio-temporal consistency and standardized methodology. Hence, to improve the knowledge about the status of California sea lion population there is a need to conduct future investigations based on the recent years. To better understand which are the main factors that regulate population trends in California sea lions, more information is required based on seasonal surveys in all reproductive and a selection of resting colonies. Data on diet, sardine fishing pressure, migration rates, genetics, prey availability, life cycle and regional environmental variation must be considered to ascertain short-and long-term population and colony spatial-temporal patterns. Statistical errors need to be assessed and monitoring methods should be standardized. It is important to promote research on related topics such as harmful algal blooms, pathogens, persistent organic pollutants and heavy metal tissue concentrations. These factors may influence sea lion mortality/fertility rate and the lack of these data could generate errors in the construction of life tables and population viability analyses. 
Ethical statement: authors declare that they all agree with this publication and made significant contributions; that there is no conflict of interest of any kind; and that we followed all pertinent ethical and legal procedures and requirements. All financial sources are fully and clearly stated in the acknowledgements section. A signed document has been filed in the journal archives. ACKNOWLEDGMENTS We thank the Comisión Nacional de Áreas Naturales Protegidas (National Commission of Natural Protected Areas (CONANP) for facilitating the use of the biological monitoring data of the PROMOBIS from 2011 to 2015 and of the PROMANP of 2016.
v3-fos-license
2018-04-03T02:22:21.817Z
2013-08-26T00:00:00.000
20969158
{ "extfieldsofstudy": [ "Materials Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1364/oe.21.020404", "pdf_hash": "65c5adb85be5f6a7f7eaa4b0459bc368467571e6", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42138", "s2fieldsofstudy": [ "Engineering", "Physics" ], "sha1": "65c5adb85be5f6a7f7eaa4b0459bc368467571e6", "year": 2013 }
pes2o/s2orc
Shear stress sensing with Bragg grating-based sensors in microstructured optical fibers We demonstrate shear stress sensing with a Bragg grating-based microstructured optical fiber sensor embedded in a single lap adhesive joint. We achieved an unprecedented shear stress sensitivity of 59.8 pm/MPa when the joint is loaded in tension. This corresponds to a shear strain sensitivity of 0.01 pm/με. We verified these results with 2D and 3D finite element modeling. A comparative FEM study with conventional highly birefringent side-hole and bow-tie fibers shows that our dedicated fiber design yields a fourfold sensitivity improvement. ©2013 Optical Society of America OCIS codes: (060.2370) Fiber optics sensors; (120.3940) Metrology. References and links 1. A. Cusano, A. Cutolo, and J. Albert, Fiber Bragg Grating Sensors: Recent Advancements, Industrial Applications and Market Exploitation (Bentham Science Publishers, 2011). 2. F. Berghmans and T. Geernaert, “Optical fiber point sensors” in Advanced Fiber Optics: Concepts and Technology, L. Thévenaz, Ed. (EPFL Press, 2011) pp. 308–344. 3. A. Othonos and K. Kalli, Fiber Bragg gratings: Fundamentals and applications in telecommunications and sensing (Artech House, 1999). 4. G. Luyckx, E. Voet, N. Lammens, and J. Degrieck, “Strain measurements of composite laminates with embedded fibre bragg gratings: criticism and opportunities for research,” Sensors (Basel) 11(1), 384–408 (2011). 5. C. Martelli, J. Canning, N. Groothoff, and K. Lyytikainen, “Strain and temperature characterization of photonic crystal fiber Bragg gratings,” Opt. Lett. 30(14), 1785–1787 (2005). 6. T. Mawatari and D. Nelson, “A multi-parameter Bragg grating fiber optic sensor and triaxial strain measurement,” Smart Mater. Struct. 17(3), 035033 (2008). 7. C. Jewart, K. P. Chen, B. McMillen, M. M. Bails, S. P. Levitan, J. Canning, and I. V. Avdeev, “Sensitivity enhancement of fiber Bragg gratings to transverse stress by using microstructural fibers,” Opt. Lett. 31(15), 2260–2262 (2006). 8. T. Geernaert, G. Luyckx, E. Voet, T. Nasilowski, K. Chah, M. Becker, H. Bartelt, W. Urbanczyk, J. Wojcik, W. De Waele, J. Dearieck, H. Terryn, F. Berghmans, and H. Thienpont, “Transversal load sensing with fiber Bragg gratings in microstructured optical fibers,” IEEE Photon. Technol. Lett. 21(1), 6–8 (2009). 9. E. Chmielewska, W. Urbańczyk, and W. J. Bock, “Measurement of pressure and temperature sensitivities of a Bragg grating imprinted in a highly birefringent side-hole fiber,” Appl. Opt. 42(31), 6284–6291 (2003). 10. H.-M. Kim, T.-H. Kim, B. Kim, and Y. Chung, “Enhanced transverse load sensitivity by using a highly birefringent photonic crystal fiber with larger air holes on one axis,” Appl. Opt. 49(20), 3841–3845 (2010). 11. www.tekscan.com 12. www.hbm.com/en/menu/products/strain-gages-accessories/ 13. R. Khandan, S. Noroozi, P. Sewell, and J. Vinney, “The development of laminated composite plate theories: a review,” J. Mater. Sci. 47(16), 5901–5910 (2012). 14. K. Basler, Strength of Plate Girders in Shear (Lehigh University Institute of Research, 1960). 15. E. Real, E. Mirambell, and I. Estrada, “Shear response of stainless steel plate girders,” Eng. Struct. 29(7), 1626– 1640 (2007). #192831 $15.00 USD Received 24 Jun 2013; revised 9 Aug 2013; accepted 9 Aug 2013; published 22 Aug 2013 (C) 2013 OSA 26 August 2013 | Vol. 21, No. 17 | DOI:10.1364/OE.21.020404 | OPTICS EXPRESS 20404 16. M. D. Banea and L. F. M. da Silva, “Adhesively bonded joints in composite materials: An overview,” Proc. Inst. 
Mech. Eng. L J,” Mater. Des. Appl. 223, 1–18 (2009). 17. S. Benyoucef, A. Tounsi, E. A. Adda Bedia, and S. A. Meftah, “Creep and shrinkage effect on adhesive stresses in RC beams strengthened with composite laminates,” Compos. Sci. Technol. 67(6), 933–942 (2007). 18. H. Yousef, M. Boukallel, and K. Althoefer, “Tactile sensing for dexterous in-hand manipulation in robotics—A review,” Sens,” Actuator A-Phys 167(2), 171–187 (2011). 19. M. I. Tiwana, S. J. Redmond, and N. H. Lovell, “A review of tactile sensing technologies with applications in biomedical engineering,” Sens,” Actuator A-Phys. 179, 17–31 (2012). 20. C. Perry, “Plane-shear measurement with strain gages,” Exp. Mech. 9(19–N), 22 (1969). 21. J. W. Naughton and M. Sheplak, “Modern developments in shear-stress measurement,” Prog. Aerosp. Sci. 38(67), 515–570 (2002). 22. K. Noda, K. Hoshino, K. Matsumoto, and I. Shimoyama, “A shear stress sensor for tactile sensing with the piezoresistive cantilever standing in elastic material,” Sens,” Actuator A-Phys 127(2), 295–301 (2006). 23. H.-K. Lee, J. Chung, S.-I. Chang, and E. Yoon, “Normal and shear force measurement using a flexible polymer tactile sensor with embedded multiple capacitors,” J. Microelectromech. Syst. 17(4), 934–942 (2008). 24. K. Sundara-Rajan, A. Bestick, G. I. Rowe, G. K. Klute, W. R. Ledoux, H. C. Wang, and A. V. Mamishev, “An interfacial stress sensor for biomechanical applications,” Meas. Sci. Technol. 23(8), 085701 (2012). 25. J. Missinne, E. Bosman, B. Van Hoe, G. Van Steenberge, S. Kalathimekkad, P. Van Daele, and J. Vanfleteren, “Flexible shear sensor based on embedded optoelectronic components,” IEEE Photon. Technol. Lett. 23(12), 771–773 (2011). 26. W.-C. Wang, W. R. Ledoux, B. J. Sangeorzan, and P. G. Reinhall, “A shear and plantar pressure sensor based on fiber-optic bend loss,” J. Rehabil. Res. Dev. 42(3), 315–325 (2005). 27. S. C. Tjin, R. Suresh, and N. Q. Ngo, “Fiber Bragg grating based shear-force sensor: modeling and testing,” J. Lightwave Technol. 22(7), 1728–1733 (2004). 28. A. Candiani, W. Margulis, M. Konstantaki, and S. Pissadakis, “Ferrofluid-infiltrated optical fibers for shearsensing smart pads,” SPIE Newsroom (2012). 29. W. L. Schulz, E. Udd, M. Morrell, J. M. Seim, I. M. Perez, and A. Trego, “Health monitoring of an adhesive joint using a multiaxis fiber grating strain sensor system,” in Proc. SPIE 3586,” Nondestructive Evaluation of Aging Aircraft, Airports, and Aerospace Hardware III, 41–52 (1999). 30. J. C. Knight, “Photonic crystal fibres,” Nature 424(6950), 847–851 (2003). 31. P. S. J. Russell, “Photonic-crystal fibers,” J. Lightwave Technol. 24(12), 4729–4749 (2006). 32. W. Urbanczyk, T. Martynkien, M. Szpulak, G. Statkiewicz, J. Olszewski, G. Golojuch, J. Wojcik, P. Mergo, M. Makara, T. Nasilowski, F. Berghmans, and H. Thienpont, in Proc. SPIE 6619, Third European Workshop on Optical Fibre Sensors, “Photonic crystal fibers: new opportunities for sensing,” 66190G–66190G (2007). 33. O. Frazão, J. Santos, F. Araújo, and L. Ferreira, “Optical sensing with photonic crystal fibers,” Laser Photon. Rev. 2(6), 449–459 (2008). 34. T. Martynkien, G. Statkiewicz-Barabach, J. Olszewski, J. Wojcik, P. Mergo, T. Geernaert, C. Sonnenfeld, A. Anuszkiewicz, M. K. Szczurowski, K. Tarnowski, M. Makara, K. Skorupski, J. Klimek, K. Poturaj, W. Urbanczyk, T. Nasilowski, F. Berghmans, and H. Thienpont, “Highly birefringent microstructured fibers with enhanced sensitivity to hydrostatic pressure,” Opt. Express 18(14), 15113–15121 (2010). 35. S. Sulejmani, C. 
36. T. Geernaert, T. Nasilowski, K. Chah, M. Szpulak, J. Olszewski, G. Statkiewicz, J. Wojcik, K. Poturaj, W. Urbanczyk, M. Becker, M. Rothhardt, H. Bartelt, F. Berghmans, and H. Thienpont, "Fiber Bragg gratings in Germanium-doped highly birefringent microstructured optical fibers," IEEE Photon. Technol. Lett. 20(8), 554–556 (2008).
37. T. Geernaert, M. Becker, P. Mergo, T. Nasilowski, J. Wojcik, W. Urbanczyk, M. Rothhardt, C. Chojetzki, H. Bartelt, H. Terryn, F. Berghmans, and H. Thienpont, "Bragg grating inscription in GeO2-doped microstructured optical fibers," J. Lightwave Technol. 28(10), 1459–1467 (2010).
38. F. Berghmans, T. Geernaert, T. Baghdasaryan, and H. Thienpont, "Challenges in the fabrication of fibre Bragg gratings in silica and polymer microstructured optical fibres," Laser Photon. Rev., doi:10.1002/lpor.201200103 (2013).
39. C. Sonnenfeld, S. Sulejmani, T. Geernaert, S. Eve, N. Lammens, G. Luyckx, E. Voet, J. Degrieck, W. Urbanczyk, P. Mergo, M. Becker, H. Bartelt, F. Berghmans, and H. Thienpont, "Microstructured optical fiber sensors embedded in a laminate composite for smart material applications," Sensors (Basel) 11(12), 2566–2579 (2011).
40. F. Berghmans, T. Geernaert, S. Sulejmani, H. Thienpont, G. Van Steenberge, B. Van Hoe, P. Dubruel, W. Urbanczyk, P. Mergo, D. J. Webb, K. Kalli, J. Van Roosbroeck, and K. Sugden, "Photonic crystal fiber Bragg grating based sensors: opportunities for applications in healthcare," in Asia Communications and Photonics Conference and Exhibition (ACP) 8311, 1–10 (2011).
41. G. Luyckx, E. Voet, T. Geernaert, K. Chah, T. Nasilowski, W. De Waele, W. Van Paepegem, M. Becker, H. Bartelt, W. Urbanczyk, J. Wojcik, J. Degrieck, F. Berghmans, and H. Thienpont, "Response of FBGs in microstructured and bow tie fibers embedded in laminated composite," IEEE Photon. Technol. Lett. 21(18), 1290–1292 (2009).
42. L. F. M. da Silva, P. J. C. das Neves, R. D. Adams, A. Wang, and J. K. Spelt, "Analytical models of adhesively bonded joints—Part II: Comparative study," Int. J. Adhes. Adhes. 29(3), 331–341 (2009).
43. M. Goland and E. Reissner, J. Appl. Mech. Trans. Am. Soc. Eng. 66, A17 (1944).
44. C. M. Lawrence, D. V. Nelson, and E. Udd, "Multiparameter sensing with fiber Bragg gratings," in Pacific Northwest Fiber Optic Sensor Workshop 2872, 24–31 (1996).
45. C. M. Lawrence, D. V. Nelson, E. Udd
Introduction

The wide adoption of smart materials and structural health monitoring in domains such as material manufacturing, civil engineering, transport, energy production and healthcare stimulates the demand for reliable and dedicated sensors. Typical physical quantities to measure include temperature, pressure or strain. Conventional electromechanical sensors, such as electrical strain gauges, are often perfectly adequate for this task. However, an increasing number of smart sensor applications require the sensor to be read out permanently whilst being embedded in a non-invasive manner in various materials that are often subjected to harsh conditions over long lifetimes. Electromechanical sensors are not always suited for this challenge, as they are usually bulky, they exhibit intrinsic temperature sensitivity, they are vulnerable to electromagnetic interference and they can exhibit a strong signal drift. When traditional sensors fail, optical fiber sensors, and more specifically fiber Bragg grating (FBG) sensors, can provide a solution. FBG sensors feature many advantages over conventional sensors: they are small, flexible and lightweight, they can be multiplexed and allow quasi-distributed sensor configurations, they allow absolute measurements and they have a linear response over a wide temperature and mechanical strain range. These features make FBG sensors highly suitable for integration in a material for smart sensing applications [1–4].

Multi-axial (selective) strain sensing remains a challenge for structural health monitoring or tactile sensing applications. Many research efforts focused on axial strain, hydrostatic pressure and transverse strain sensors, and there are currently various methods commercially available with a high strain sensing resolution [5–12]. On the contrary, the detection of shear strain remained mostly unaddressed, and at present no adequate sensor exists that is intrinsically sensitive to shear deformation. Shear stress nevertheless plays a crucial role in the appearance of structural defects such as delamination in laminated composite materials, debonding of adhesive joints or buckling of beams [13–17]. In addition, shear stress sensing is also a key feature of tactile sensors, since this parameter provides information on (skin) friction [18,19]. The absence of shear sensing technology can be attributed to the challenging requirements for a shear stress sensor. Non-intrusive integration capabilities and flexibility are essential features of a shear sensor. Furthermore, high shear sensing resolution is indispensable, since shear stress levels in the aforementioned structures and applications are typically several orders of magnitude smaller than normal stress. Depending on the specific sensor implementation, different types of shear force sensors have been investigated. A plane-shear strain gage rosette was first demonstrated in [20]. Current developments of shear stress sensors are based on MEMS technology [21], including piezo-resistive [22] or capacitive [23,24] shear stress sensors. More recently, Missinne et al. [25] demonstrated an optoelectronic shear sensor in which a photodiode measures the optical power received from a vertical cavity surface-emitting laser facing it across a deformable transduction layer. Distributed shear force sensing was also proposed by Wang et al.
[26] using an array of optical fibers embedded in a flexible polymer foil.Shear or transverse loading of the foil induces macro bending of the fibers which can be observed through intensity attenuations. Research on shear strain sensors using FBG sensors has so far focused on 3 very different approaches.A first technique, demonstrated by Tjin et al. [27], uses a FBG in a conventional single mode fiber (SMF) that is embedded under a small tilt angle in a deformable layer.Shear loading of this layer induces an axial strain to the fiber, which can be derived from the Bragg peak wavelength shift.Candiani et al. [28] has demonstrated a shear sensing pad by embedding a ferrofluidic-infiltrated microstructured optical fiber FBG sensor and a magnet in a polymer foil.When a shear load is applied to the foil, the location of the ferrofluidic segment in the FBG sensor changes, which in its turn influences the FBG reflection spectrum.Another approach, reported by Schulz et al. [29], uses a conventional highly birefringent FBG sensor that is embedded in a material perpendicular to the direction of an applied shear load.When the fundamental optical axes of the fiber are aligned with the directions of principal stress, the optical fiber experiences the shear load as if it were a transverse load.In this manner, the applied shear load induces a change in material birefringence of the optical fiber, and hence also a change in its modal birefringence.This can be detected by monitoring the FBG reflection spectrum. The FBG sensors used in the work of Schulz et al. [29], for example, typically featured low shear strain sensitivity and significant thermal cross-sensitivity.In order to deal with these issues we propose to work with highly birefringent microstructured optical fibers (MOF).Owing to the unique features of MOFs, leveraged by the properties of FBG sensors, they have become a promising technology for optical fiber based sensing [30][31][32][33].While the cross section of a conventional step-index optical fiber is entirely made of (silica) glass, the cross section of a MOF displays a pattern of air holes that run along the entire length of the fiber.The arrangement of these holes determines the optomechanical properties of the MOF.By changing the geometry, position, shape or number of holes, fiber properties such as optical mode confinement, modal dispersion, modal birefringence and thermal sensitivity can be tuned.By exploiting stress induced changes of the modal birefringence of a highly birefringent MOF, temperature insensitive pressure and transverse strain FBG sensors have been demonstrated [34,35].When a mechanical load is applied to the cladding of such a MOF, the asymmetric air hole geometry will induce an asymmetric stress distribution in the core region, which affects the material birefringence, and hence also the modal birefringence.Temperature insensitivity can be attained by using only low GeO 2 -doping levels in the core region to limit thermal stress variations.A small concentration of GeO 2 is still required to allow for FBG inscription using conventional UV techniques [36][37][38]. 
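To make the read-out principle described above concrete, the short Python sketch below shows how a modal birefringence maps to the separation of the two Bragg peaks of a grating in a highly birefringent fiber, and how a load-induced change of that birefringence shifts the separation. All numerical values here (grating pitch, effective index, birefringence) are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch only: assumed values, not parameters of the butterfly MOF.

grating_period = 530e-9   # m, assumed grating pitch
n_eff = 1.447             # assumed mean effective index of the two polarization modes
B = 3.0e-4                # assumed phase modal birefringence (n_y - n_x)

# Bragg condition lambda_B = 2 * n_eff * pitch, evaluated per polarization mode
lambda_x = 2 * (n_eff - B / 2) * grating_period
lambda_y = 2 * (n_eff + B / 2) * grating_period
separation_pm = (lambda_y - lambda_x) * 1e12

print(f"x-polarized peak: {lambda_x * 1e9:.3f} nm")
print(f"y-polarized peak: {lambda_y * 1e9:.3f} nm")
print(f"peak separation:  {separation_pm:.0f} pm")

# A stress-induced change dB of the modal birefringence changes the
# peak separation by approximately 2 * dB * grating_period:
dB = 1.0e-6
print(f"dB = {dB:.0e}  ->  separation change ~ {2 * dB * grating_period * 1e12:.1f} pm")
```

Monitoring the peak separation rather than an individual Bragg wavelength is what makes this type of sensor largely insensitive to axial strain and, for low-doped fibers, to temperature.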
Our previous research resulted in a dedicated MOF-FBG sensor ('butterfly MOF') with unprecedented transverse strain sensitivity, combined with a very low thermal sensitivity [34,39].Here we show that the record transverse strain sensitivity of the butterfly MOF-FBG sensor is translated in an unprecedented shear strain sensitivity, exceeding that achieved with conventional highly birefringent fibers with a factor 4 (section 3).By aligning the transverse strain sensing axes of the butterfly MOF with the directions of principal stress in a shear loaded material, we can detect shear strain.In this work, we successfully demonstrate shear strain sensing by embedding the butterfly MOF-FBG in the adhesive layer of a single lap adhesive joint (SLJ).We chose to embed our sensor in a lap joint structure because the shear stress distribution in the adhesive layer is well known and can be described by analytical models (section 2).One application of the shear stress sensor is to assess the shear stress distribution in complex prototype joints.Since the shear stress distribution in an adhesive bond line can be a measure of the bond quality, the sensor could in principle also be used for monitoring adhesive bonds in large structures.However, since our sensor is intrinsically sensitive to shear stress, it can as well be implemented in other materials or structures, such as polymer [40] or composite materials [39,41]. Our paper is structured as follows.In section 2 we discuss how a highly birefringent MOF-FBG sensor can detect shear stress in a single lap adhesive joint (SLJ).In section 3 we elaborate on the experimental results that we have obtained when using butterfly MOF-FBG sensors to measure shear stress in an SLJ experiment.We also verify our experimental results with 2D and 3D finite element modeling techniques.Finally, in section 4, we conclude on the shear stress sensing opportunities offered by our dedicated selective strain sensors. Shear stress sensing with highly birefringent optical fiber sensors To determine the shear stress sensing performance of a butterfly MOF-FBG sensor (Fig. 1), we have embedded several of these sensors in a single lap adhesive joint (SLJ).A SLJ is a simple structure for which the shear stress distribution in the adhesive layer is well known and can be described by analytical models [42].The analysis of Goland-Reissner [43] is a classic two-dimensional linear elastic method to analyze SLJs and to determine not only the shear stress in the bond layer, but also the peel stress that is induced by the bending moment caused by eccentric loading of the joint.Figure 2(a) shows a SLJ configuration with additional spacer tabs placed at the ends of the adherends to ensure tensile loading along the center line y = 0. Figure 2(b) compares the shear and peel stress profile from Goland-Reissner theory, and that from 2D finite element modeling (see section 3) of the SLJ shown in Fig. 
2(a). This profile demonstrates that in the centre of the adhesive layer, at the location of the optical fiber, shear stress is more prominent than peel stress. However, at the edges of the overlap, peel stress will dominate. This peel stress may also initiate joint failure. It is worthwhile mentioning that the Goland-Reissner analysis does not include the decay of shear and peel stress near the edges of the adhesive bond. The nearly linear evolution of the shear and peel stress in the centre of the adhesive layer when the tensile loading is increased, with correlation coefficients R² > 0.999 and R² > 0.998, respectively, is shown in Fig. 2(c).

The operating principle of the butterfly MOF-FBG sensor relies on the change of modal birefringence caused by mechanical load applied to the cladding of the MOF, or to the material in which the sensor is embedded. Because of the asymmetric geometry of the air hole pattern, load applied to the fiber induces an asymmetric mechanical stress distribution in the core region. The refractive indices for the modes polarized along the x- and y-direction, n_x and n_y, are therefore affected in a different manner according to [44–46], with i = x or y for the x- or y-polarized modes. The correction terms Δn_i(x,y) represent the change in stress distribution induced by mechanical load, with C_1 and C_2 the stress-optic coefficients. The principal stresses σ_1(x,y) and σ_2(x,y) are determined from the normal stress components σ_x(x,y) and σ_y(x,y) and the shear stress component τ_xy(x,y). The influence of the shear stress component is often neglected when the transverse stress sensitivity of a MOF-FBG sensor is being considered. However, when investigating the shear stress sensitivity of these sensors, the contribution of this component is of major importance.

Fig. 1. The butterfly MOF-FBG sensor has an asymmetric air hole topology which induces large deformations in the core region and its GeO2-doped inclusion during fiber fabrication [34]. The contours of the air holes, doped region and cladding are reconstructed in a 2D geometry for FEM simulations in Abaqus.

Because of the large asymmetry in the design of the butterfly MOF, the sensitivity of the sensor to transverse load depends on the angular orientation of the MOF with respect to the direction of the load. More specifically, the butterfly MOF-FBG sensor has a sine-like angular dependence of its transverse line load [39] and transverse strain sensitivity when embedded in a material [47]. The transverse strain sensitivity is highest when the transverse load is applied along 90°, which is indicated in Fig. 1 by the y-axis and which is also called the 'slow axis'. When the fiber is loaded along 0°, which corresponds to the x-axis, or so-called 'fast axis', the magnitude of the sensitivity is still large, but it will now be negative. When the fiber is transversally loaded along ±45°, the magnitude of its transverse line loading sensitivity approaches zero. When embedded in a shear-loaded adhesive layer along ±45° (Fig. 2(a)), it will detect the shear load induced transverse strain in the fiber. At the same time, the influence of peel strain on the sensor signal will remain low.
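The displayed equations referred to above did not survive text extraction. The block below is a hedged reconstruction based on the standard photoelastic description of stress-birefringent fibers; the exact notation and sign conventions used in [44–46] may differ.

```latex
% Hedged reconstruction (standard photoelastic form; not copied verbatim from the paper).
% Effective index of the i-polarized mode (i = x, y) with a stress-induced correction:
\begin{align}
  n_i(x,y) &= n_{i,0}(x,y) + \Delta n_i(x,y),\\
  \Delta n_x(x,y) &= C_1\,\sigma_1(x,y) + C_2\,\sigma_2(x,y),\\
  \Delta n_y(x,y) &= C_2\,\sigma_1(x,y) + C_1\,\sigma_2(x,y),
\end{align}
% with C_1, C_2 the stress-optic coefficients. The in-plane principal stresses follow from
\begin{equation}
  \sigma_{1,2}(x,y) = \frac{\sigma_x+\sigma_y}{2}
    \pm\sqrt{\left(\frac{\sigma_x-\sigma_y}{2}\right)^{2}+\tau_{xy}^{2}} .
\end{equation}
% For pure shear (sigma_x = sigma_y = 0) this gives sigma_{1,2} = +/- tau_xy along the
% +/-45 degree directions, which motivates embedding the fiber with its sensing axes
% rotated by 45 degrees in the shear-loaded adhesive layer.
```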
Experimental and FEM modeling results We carry out experiments on FBG sensors embedded in SLJs that are loaded in tension up to failure and 2D and 3D structural finite element (FEM) analyses of an optical fiber embedded in a SLJ to compute and verify the sensor response.We embed two FBG sensors fabricated in a conventional SMF (referred to as sample 1A and 1B) and two sensors fabricated in a butterfly MOF (sample 2A and 2B) in a SLJ.The joint configuration is shown in Fig. 2(a) with the optical fiber located in the centre of the adhesive layer and directed along the zdirection.The adhesive is a two component methyl methacrylate (Simson Supergrip MMA 8105) that has cured at room temperature for at least 24h.The length and thickness of the adhesive layer are indicated in Table 1 for each sample.Schulz et al. [29] have demonstrated before that the presence of an optical fiber in an adhesive bond layer does not affect its strength.The adherends are made of aluminum.To ensure that tensile loading is along the central axis (y = 0) of the SLJ, spacer tabs are placed at the ends of the adherends where they are gripped.The adherends have a thickness of 3 mm and length of 100 mm, and the width of the sample is 25 mm.The FBG sensors were inscribed using UV femtosecond laser and phase mask technique, with a similar laser configuration as described in [48].The UV output power was fixed to 200 mW and focused to the core of the optical fiber with a cylindrical lens of 50 mm focal length.Short length (6-8 mm) FBGs were inscribed to ensure uniform stress distribution along the sensor when embedded centrally in the adhesive layer.We verified this with 3D FEM modeling using commercially available Abaqus FEA software [49].The modeling approach is explained later in this section.The butterfly MOF-FBG sensors feature a 2.4 mol% GeO 2 -doped inclusion, as indicated in Fig. 1.As discussed earlier in section 2, the optomechanical response of butterfly MOF-FBG sensors features an angular dependence.After testing the samples, the cross-sectional sides of the adhesive layer are polished to verify the optical fiber position and orientation (Table 1).The results show that it is perfectly possible to position an optical fiber and maintain its angular orientation within ± 9° in a SLJ without the need to adapt the SLJ fabrication procedure.Previous research [47] demonstrated that for the butterfly MOF sensor an angular misalignment of 10° from its optimal transverse strain sensing direction, leads to a sensitivity decrease of 10%.To facilitate fiber orientation in larger structures, one can consider the use of alignment tabs fixed on the optical fiber that indicate the optimal orientation.Another option is to adapt the outer cladding of the butterfly MOF to feature a flat side, also known as a D-cladding shaped fiber.The alignment tabs or the flattened side of the fiber can help to maintain the optimal fiber orientation during fiber positioning in the host material.Chehura et al. [50] demonstrated that the transverse strain sensitivity of the modal birefringence of an elliptical core fiber is not significantly affected when the circular outer cladding is changed to a D-cladding shape.The reflection spectra of the FBG sensors are recorded before and after embedding in a SLJ (Fig. 3(a) and Fig. 
4(a)).Minor spectral deformations were introduced, but this did not affect the Bragg peak detection.An average shift of 669 ± 9 pm of the Bragg peak wavelengths towards longer wavelengths was detected for all samples.This shift is likely due to axial strain induced during SLJ fabrication when fixing the fiber to maintain its position and angular orientation in the adhesive layer.Since the axial strain sensitivity of SMF-FBG sensors is known to be 1.2 pm/µε, we find a good match between 3D FEM results and experiments.The sample is placed in a hydraulic servo-controlled tensile test machine with a load capacity of 100 kN (Instron 8801 [51],) by gripping it at both ends (Fig. 5).A static tensile load is applied at a rate of 0.05 mm/min until failure of the SLJ.During loading, the FBG sensor response is recorded using an FBG interrogator (FBG scan 608 [52],) with a sample frequency of 1 Hz and peak detection resolution of 1 pm.The SMF-FBG sensor response to tensile load of sample 1B is shown in Fig. 3(b).The results of a linear regression analysis of the response of samples 1A and 1B yielding the sensitivity in pm/kN are given in Table 1.For both samples the Bragg peak wavelength shifts to shorter wavelengths because of tensile loading of the SLJ.This corresponds to an axial compression of the FBG sensor that is induced by transverse contraction of the adhesive layer.This was verified with a 3D FEM model with dimensions corresponding to that of sample 1B and a silica rod located centrally in the adhesive layer to represent an optical fiber.Constraints are applied to the adherends to reproduce a fixed support on the left and a guided support on the right end of the SLJ.A load of maximum 5 kN is applied to the right end.The mesh consists of linear, 3D stress elements.Perfect bonding is assumed at all interfaces.The elastic modulus E and Poisson coefficient ν is respectively 70.0 GPa and 0.33 for the aluminum adherends, 0.47 GPa and 0.385 for the MMA adhesive, and 72.5 GPa and 0.17 for the silica optical fiber.The actual material parameters for the two component MMA structural adhesive were not available.Therefore, an average was made over publicly available material parameters (E and ν) available for similar two-component MMA adhesives [53][54][55].Results from 3D structural FEM modeling show that the detected negative Bragg peak shift is indeed because of transverse contraction of the adhesive layer, and results in a (negative) axial strain that is transferred to the optical fiber.Figure 3(b) compares the results of the experiment with those of 3D FEM when using the well-known axial strain sensitivity of SMF-FBG sensors (1.2 pm/µε [3,39],) to calculate the corresponding Bragg peak shift.We find a very good agreement between experiments and 3D FEM modeling of SMF-FBG sensors embedded in a SLJ. The butterfly MOF-FBG sensor response of sample 2A is shown in Fig. 
4 and the results of a linear regression analysis of that response against applied load is also given in Table 1.For sample 2A, the Bragg peak separation increases because of tensile loading of the SLJ at a rate of 67.4 pm/kN.We limited the linear fit up to 2 kN, since at higher loads the sensor response is no longer linear.This can be attributed to the initiation and growth of cracks at the edge of the adhesive bond.Debonding at the edges will increase the shear stress at the location of the optical fiber.The experimental results were verified using 2D structural FEM modeling of a SLJ model using the same constraints, loading conditions and material properties as mentioned before.The dimensions of the adhesive layer and SLJ were chosen to match that of sample 2A.The 2D model of the butterfly MOF was extracted from its scanning electron micrograph (SEM) as shown in Fig. 1 [47].We have limited the simulations to 2D FEM since the sensing principle of the butterfly MOF-FBG sensor relies on the change of material birefringence which is limited to effects in the xy-plane.Moreover, we obtained a very good agreement between experiments on a SMF-FBG sensor and 3D modeling, which gives us confidence that neglecting stress along the z-direction will not affect the results for the Bragg peak separation.The result from a 2D FEM model of the peak separation is shown in Fig. 4(c) and summarized in Table 1.A sensor response of 69.9 pm/kN is obtained, which agrees well with the experimentally obtained value.For sample 2B, the Bragg peak separation increases at a rate of 71.8 pm/kN.The slightly higher response stems from the thicker bond and from the small off-centre location of the fiber.Both of these effects increase the shear stress at the location of the fiber. The increase in peak separation is due to the angular orientation of the MOF sensor, −45° instead of + 45°, and prevents peak overlapping and corresponding peak detection errors.Since both peel and shear stress are present in the adhesive layer, we cannot link the rate of change entirely to shear stress alone.However, because of the particular orientation of the MOF sensor, we can say that its sensitivity to the small amount of peel stress will be minimal.The shear and peel stress profiles shown in Fig. 2(b), which are derived from this particular 2D FEM model, show that in the centre of the adhesive layer the shear stress is 1.18 MPa when a tensile load of 1 kN is applied to the SLJ.On the other hand, the peel stress at that location is more than 3 times smaller: only −0.35 MPa.In the experiment on sample 2A, a load of 1 kN induced a peak separation increase of 70.6 pm.Hence, when neglecting peel stress, the butterfly MOF-FBG sensor would have a shear stress sensitivity of 59.8 pm/MPa.Considering the material properties of the adhesive, this sensitivity would correspond to a shear strain sensitivity of 0.01 pm/µε.It should be noted that although shear stress in an optical fiber is known to induce optical mode coupling between the fundamental modes [56], the modal birefringence of the butterfly MOF is sufficiently large to prevent the occurrence of this effect.During our experiments, we did not experience any detectable change in the reflected optical power of both fundamental modes. 
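As a sanity check on the arithmetic in the preceding paragraph, the short sketch below reproduces the quoted shear stress and shear strain sensitivities from the numbers reported in the text; the isotropic relation used to derive the adhesive shear modulus from E and ν is an assumption on our part.

```python
# Back-of-the-envelope check of the quoted sensitivities.
# Measured/modeled values are taken from the text; the isotropic shear
# modulus relation G = E / (2 * (1 + nu)) is an assumption.

peak_separation_per_kN = 70.6   # pm, sample 2A, per 1 kN of tensile load
shear_stress_per_kN = 1.18      # MPa, 2D FEM shear stress at the fiber per 1 kN

# Shear stress sensitivity (peel stress contribution neglected)
S_tau = peak_separation_per_kN / shear_stress_per_kN
print(f"shear stress sensitivity ~ {S_tau:.1f} pm/MPa")    # ~59.8 pm/MPa

# Adhesive properties quoted for the MMA adhesive
E_adhesive = 0.47e9     # Pa
nu_adhesive = 0.385
G_adhesive = E_adhesive / (2 * (1 + nu_adhesive))           # ~0.17 GPa

# Shear strain at the fiber per 1 kN, and the resulting strain sensitivity
gamma_microstrain = (shear_stress_per_kN * 1e6 / G_adhesive) * 1e6
S_gamma = peak_separation_per_kN / gamma_microstrain
print(f"shear strain sensitivity ~ {S_gamma:.3f} pm/microstrain")  # ~0.01 pm/µε
```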
To demonstrate the added value of the butterfly MOF-FBG sensor over other types of highly birefringent fiber with an outer cladding diameter of 125 µm and a doped inclusion in the core region, we have also modeled the sensitivity of a bow-tie and side-hole fiber when embedded in a similar SLJ.We have constructed 2D FEM models of these fibers based on details and dimensions provided by Guan et al. [58] and by Clowes et al. [59].An overview of the results is presented in Table 2.When embedded in a SLJ, the bow tie FBG sensor and side hole FBG sensor yield a shear stress sensitivity of 16.0 pm/MPa and 16.2 pm/MPa, respectively.These shear stress sensitivities are almost 4 times lower than that obtained with a butterfly MOF-FBG sensor, clearly indicating the added value of our dedicated MOF-FBG sensor. Fig. 2 . Fig. 2. (a) Configuration of the tested and modeled SLJ with an optical fiber embedded in the centre (x = y = 0) of the adhesive layer.The boundary and loading conditions used for 2D and 3D FEM analyses are indicated.Perfect bonding is assumed at every interface.(b) Shear and peel stress profile along the adhesive bond line (y = 0) in a SLJ according to Goland-Reissner analysis and obtained with 2D FEM modeling of a SLJ configuration as shown in (a).The addition of spacer tabs (which is not considered in the Goland-Reissner model) has a small influence on the stress profile near the edges of the adhesive overlap.(c) The evolution of shear and peel stress in the centre of the adhesive layer due to tensile loading is nearly linear (R 2 > 0.999 and R 2 > 0.998, respectively). Fig. 3 . Fig. 3. Sample 1B: (a) SMF-FBG reflection spectra before and after embedding the sensor in the SLJ show minor deformations and a shift toward longer wavelengths due to axial pre-strain.(b) Results from experiments and 3D FEM modeling demonstrate a transverse contraction due to tensile loading of the adhesive layer which transfers a negative axial strain on the fiber.Since the axial strain sensitivity of SMF-FBG sensors is known to be 1.2 pm/µε, we find a good match between 3D FEM results and experiments. Fig. 4 . Fig. 4. Sample 2A: (a) Butterfly MOF-FBG sensor reflection spectra before and after embedding the sensor in the SLJ show minor deformations and a shift toward longer wavelengths due to axial prestrain.(b) The individual Bragg peaks shift towards lower wavelengths due to tensile loading of the SLJ.From linear fitting the results up to a load of 2 kN, we find that their sensor response is respectively −136.0 pm/kN and −63.5 pm/kN for the Bragg peak 1 and Bragg peak 2. (c) The Bragg peak separation increases due to tensile loading with a sensor response of 67.4 pm/kN.Results from 2D FEM modeling of a SLJ similar to sample 2A are in very good agreement with the experimental results. Fig. 5 . Fig. 5. Picture of the SLJ sample placed in the tensile test machine.The optical fiber is also visible.
v3-fos-license
2021-05-22T00:02:48.056Z
2021-01-01T00:00:00.000
234976562
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.int-res.com/articles/esr2021/45/n045p147.pdf", "pdf_hash": "fbcb68d0292658dc314d2529a80004c16dc954ae", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42140", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "sha1": "525be3a89b2a7ec243ba7f3701c9081924597af6", "year": 2021 }
pes2o/s2orc
Social facilitation for conservation planning: understanding fairy tern behavior and site selection in response to conspecific audio-visual cues Simulated social facilitation techniques (e.g. decoys and call playbacks) are commonly used to attract seabirds to restored and artificially created nesting habitats. However, a lack of social stimuli and conspecific cueing at these habitats may limit the use of these sites, at least in the short term. Therefore, testing the effectiveness of simulated audio-visual cues for attracting gregarious birds is important for conservation planning. In this study, we (1) assessed whether call playback and decoys were associated with an increased likelihood of Australian fairy terns Sternula nereis nereis visiting potentially suitable nesting habitats; (2) tested their behavioral response to different cues; and (3) documented whether social facilitation had the potential to encourage colony establishment. A full cross-over study design consisting of all possible pairings of decoy and call playback treatments (control [no attractants], decoys, call playback, both decoys and playback), allocated as part of a random block design, was undertaken at 2 sites. Linear modeling suggested that call playback was important in explaining the time spent aerial prospecting as well as the maximum number of fairy terns aerial prospecting, although this only appeared to be the case for 1 of the 2 sites. Decoys, on the other hand, did not appear to have any effect on time spent aerial prospecting. The results from this study suggest that audio cues have the potential to encourage site selection by increasing social stimuli, but attractants may be required over several breeding seasons before colonies are established. INTRODUCTION Coastal systems, with their complex mosaic of shallow-water habitats and shorelines of varying geomorphology, support rich floral and faunal communities worldwide and are important breeding and feeding sites for an array of birds. Despite the impor-tance of these environments, growing human populations and the associated demand for housing, ports, marinas, and recreational amenities have led to significant reductions in habitat and resource availability, driving population declines among coastal bird communities (Yasué et al. 2007, Pakanen et al. 2014. The threats to coastal birds during the breeding season are numerous and include both climate-and anthropogenic-driven pressures. Managed sites, where interventions such as habitat enhancement and predator control are undertaken to improve breeding success (Greenwell et al. 2019a(Greenwell et al. , 2020, may become increasingly important within urban environments (e.g. Jenniges & Plettner 2008, Fujita et al. 2009). However, remedial works and site engineering can be costly (Hecht & Melvin 2009), and restored or artificially created sites may not be immediately utilized because of an absence of social stimuli and conspecific cueing (Boulinier et al. 1996). Given the potential for colony-and nest-site selection to affect individual fitness, birds use complex strategies to select breeding sites, including environmental and social cues (Cody 1985). Among gregarious birds, social facilitation (i.e. where the behavior of one individual increases the probability of other animals engaging in that same behavior) at colony sites is often used to assess habitat quality and an individual's chance of reproductive success, these being proximate cues for nest site selection (Gochfeld 1980, Boulinier et al. 
1996, Kress 1997, Danchin et al. 1998). Social facilitation behavior may be especially important among coastal gulls and terns, which are known to periodically shift colony sites between breeding attempts (Dunlop 1987, Gochfeld & Burger 1992. Simulated social facilitation techniques such as the use of conspecific audio-visual cues (e.g. call playback and decoys) offer a powerful opportunity to influence animal behavior, including occupancy at restored or created habitats (James et al. 2015, Friesen et al. 2017, particularly among species like seabirds that have behavioral and life-history traits that rely on strong sensory signaling (Kress & Nettleship 1988, Kress 1997, Friesen et al. 2017. James et al. (2015) showed that call playback could be used to manipulate the distribution of amphibians within pre viously unoccupied ponds, offering a habitat restoration tool for threatened species conservation. Conspecific cues have also been used to successfully restore numerous seabird colonies (e.g. roseate terns Sterna dougallii, Arctic terns S. paradisaea, and common terns S. hirundo; Kress 1983Kress , 1997. When combined with appropriate habitat management and pre dator controls, they offer great potential to improve long-term conservation outcomes. However, the effectiveness of simulated audio-visual stimuli can vary between species (reviewed by Friesen et al. 2017). Therefore, understanding the drivers of site selection for the target species and assessing the effi-cacy of simulated social facilitation techniques is an important step for conservation planning. The fairy tern Sternula nereis is listed as Vulnerable on the International Union for Conservation of Nature Red List because of decreasing population trends over much of its breeding range in recent decades (BirdLife International 2018, Commonwealth of Australia 2019). Fairy terns typically nest on sheltered bays, coastal lagoons, sand spits, or lacustrine islands (Higgins & Davies 1996, Johnstone & Storr 1998, Dunlop 2018, habitats that are also highly valued for human use. To overcome a lack of natural nesting sites and to reduce disturbance at breeding colonies, managed sites, combined with social facilitation techniques, may be an effective strategy for improving nesting success (Dunlop 2018. Social facilitation to encourage site selection at secure or managed nesting sites has been identified as a possible 'local conservation strategy' to improve population trends under the Draft National Recovery Plan for the Australian fairy tern Sternula nereis nereis (Commonwealth of Australia 2019). The first objective of this study was to determine whether audio-visual cues would increase the likelihood of attracting Australian fairy terns S. nereis nereis (hereafter fairy terns) to an area of potentially suitable nesting habitat, and if so, which cue would elicit the strongest behavioral response. It was hypothesized that a combination of auditory and visual cues would provoke the strongest response compared to either visual or audio cues used in isolation and control treatments. Call playback was expected to provide a strong initial cue and draw attention to the site, while decoys were likely to encourage settlement through visual cueing. Our second objective was to determine whether social facilitation had the potential to encourage fairy tern colony establishment and egg laying at 2 managed sites (Mandurah and Garden Island) in temperate south-western Australia (see Fig. 
1), which are historically important nesting sites for the species. Study sites The first study site was located in Mandurah (32°31' 14.24'' S, 115°43' 0.26'' E) and is managed by the local government authority, the City of Mandurah. Fairy terns have a long-known history of nesting in the lower reaches of the Peel-Harvey Estuary, Mandurah, likely because of the abundance of potential fish prey in the system and adjacent coastal waters. For many years, fairy terns nested at the mouth of the estuary, but this land has since been developed as part of the Mandurah Ocean Marina precinct (Dunlop 2016). Over the past decade, fairy terns have attempted to nest on a number of estuarine islands (Boundary, Channel, Creery, Len Howard, and Mandurah Quay Islands) and at Nairns beach, near the mouth of the Serpentine River (Fig. 1). However, these nesting attempts have failed, primarily because of increased high tide levels and summer storm surge events (Dunlop 2016). In 2015−2016 and 2016−2017, nesting attempts were made on vacant development blocks within the Mandurah Ocean Marina Precinct. However, few chicks fledged, possibly as a result of high disturbance levels or predation (Dunlop 2018, Greenwell et al. 2019a In 2017, the Mandurah fairy tern breeding site was established to overcome a lack of secure, flood-free breeding sites available to fairy terns in the region. The site (~1500 m 2 ) has a uniform elevation of ~3.0 m above sea level and is separated from the adjoining beach by a ~1.5 m high limestone rock sea wall. The perimeter of the site is fully enclosed with chain-wire fencing lined with shade cloth. A layer of shell material was added to the ground surface by land managers to enhance its attractiveness to fairy terns. Black rats Rattus rattus were not detected in the area prior to the commencement of breeding but baits were deployed along the adjoining sea wall as a precautionary measure. During the 2017−2018 breeding season (October to January), decoys were deployed at the site in an attempt to attract mature breeding adults. Ad hoc observations indicated that the birds were not interested in the site, despite the presence of the decoys. On 2 October 2018, prior to the start of our study, adult fairy terns in advanced nuptial breeding plumage (i.e. solid black head cap and bright orange bills and legs, indicating readiness for breeding), were observed landing on the seawall and beach adjacent to the fairy tern site. The second site was located at Garden Island (32°14' 31.92'' S, 115°41' 36.3372'' E), which is on a Commonwealth military base (HMAS Stirling) managed by the De partment of Defense. Garden Island is a historically important breeding site for fairy terns, with records of colonies at various locations across the island, including the Causeway, a traffic bridge that provides access to the island from the mainland (Higgins & Davies 1996, Dunlop 2016. Reproductive success varies greatly from year to year. Occasionally, entire colonies are lost through colony inundation and egg burial during summer storms, and mortality arising from vehicle strike has been recorded when birds nest on the edge of the Causeway (Dunlop 2016, G. Davies pers. comm.). Garden Island has also been identified as an important pre-breeding night roost location for fairy terns (see . In 2018, a managed site was established on Garden Island for fairy terns in an attempt to improve breeding outcomes and discourage nesting on the Causeway. 
The site (~3500 m 2 ) has a uniform elevation of 3.0 m above sea level and is separated from the adjoining beach by a ~1.5 m high, vegetated sand dune. A limestone rock wall, which adjoins a road, runs parallel to the dune along the entire length of the site on the opposite side. A layer of shell material was added to the ground surface by land managers to enhance its attractiveness to fairy terns. A baiting program was undertaken in 2019 after black rats were detected in the area, to reduce the potential for egg depredation. During the 2018−2019 breeding season (October−January), before this study commenced, decoys were deployed on the site; however, ad hoc observations indicate that no interest was shown by the terns. Study design Conspecific call playback (audio cues) and decoys (visual cues) were used at Mandurah and Garden Island to determine whether these sensory-based cues increased the likelihood of attracting fairy terns to an area of potential nesting habitat. A crossover study design was adopted to measure the behavioral response of terns to different cues. This design consisted of all possible pairings of decoy and call playback treatments (control [no attractants], decoys only, call playback only, decoys and call playback), allocated as part of a random block design ( Table 1). The study was carried out between 06:15 and 08:15 h on 5−19 October 2018 and 7−30 October 2019 at Mandurah and Garden Island, respectively, corresponding with the typical prospecting and early egglaying period of fairy terns. Six 4 d blocks were planned for each site, but on 19 October 2018, a fairy tern was observed incubating an egg at the Mandurah site. As a result, the social facilitation treatment was stopped due to the need for the egg to be incubated, which would bias further observations. As a consequence, the behavioral response of fairy terns at Mandurah was limited to 3 full blocks. The length of time that any individual from a group of terns spent (1) flying above the site or (2) on the ground was measured using 2 stopwatches to produce separate timing intervals for each activity. Timing commenced when terns either flew over or landed at the site. Stopwatches were left running for as long as any individual remained either over the site or on the ground, allowing the time interval of each landing and aerial prospecting event to be recorded separately. Observations of the maximum number of birds present and the duration of each landing or aerial prospecting event were made over a continuous 120 min observation period on consecutive days from a vantage point outside the study area. Note that a lack of distinguishing features between birds precluded individuals being counted. The cumu lative time that fairy terns spent on the ground (landing events) or in the air over the site (aerial prospecting events) each day was calculated for the 2 sites and is used as the sampling unit in this study. Decoys and audio recordings Models of least terns (Mad River Decoys, Audubon Society) were hand-painted to replicate the nuptial plumage of breeding adult fairy terns (Fig. 2). A conspecific vocalization recording, obtained from a fairy tern colony in Bunbury (~120 km south of Mandurah) during the 2017−2018 breeding season, was edited using Wavepad software to increase its am plitude, remove silver gull Chroicocephalus novaehollindae vocalizations, and delete the first and last segment of the recording to ensure that only settled colony call playbacks were used. 
The recording (~50 min long) was then loaded onto an MP3 player (Apple iPod) and set to loop. Conspecific call playback was played using a 15 W Toa broadcast megaphone modified with an input socket for MP3 attachment. The megaphone was positioned at the edge of the study site in the surrounding vegetation, and call playbacks were projected up and over the site. On the days when the decoys (n = 10) were deployed, they were spaced 1.5 m apart in a combination of singles (6) and pairs (4) (Fig. 3) to reflect natural conditions observed within a colony (Burger 1988). A social experiment on the closely related least tern Sternula antillarum showed that terns were attracted to larger groups and preferred to land where decoys were more spaced out (1.5 vs. 0.5 m), landing in the center of the group rather than the edge (Burger 1988, Arnold et al. 2011). In 2018, removal of the decoys was hampered when fairy terns remained on the ground at the Mandurah site at the conclusion of the observation period. To minimize disturbance to birds prospecting the site, decoys were left in place until early the following morning and removed prior to the commencement of the observation period, when decoy treatments were not scheduled. At Garden Island in 2019, the process of removing decoys early the following morning was repeated for consistency between the 2 sites. Wind speed and direction data, at 30 min intervals, were obtained from the Bureau of Meteorology, and both average and maximum wind speed over the 2 h sampling period were calculated. In addition, the physical behavior of terns towards call playbacks and decoys was recorded in an ad hoc manner.

Statistical analyses

All statistical analyses were performed using R version 4.0.2 (R Core Team 2020). We investigated in separate analyses the relationship between (1) the time spent aerial prospecting and the 2 treatments (decoys and call playback) and (2) the maximum number of birds aerial prospecting and the 2 treatments. In both cases, we fitted multiple linear regression models which included the 2 treatments and site as dummy variables and average wind speed and maximum wind speed as continuous explanatory variables. These models considered all possible interactions between the 2 treatment variables and site as well as interactions between site and wind speed variables. Residual plots highlighted the need to transform the time spent aerial prospecting (square root transformation, as identified using the 'boxcox' function in the 'MASS' package in R; Venables & Ripley 2002) to produce greater compliance with the assumptions of linear regression. No transformation was required for the maximum number of birds aerial prospecting. Using the previously described explanatory variables and interactions, we performed an exhaustive model search to find the models minimizing the second-order Akaike information criterion (AICc; Akaike 1974, Hurvich & Tsai 1989) for each response variable (i.e. time spent aerial prospecting, maximum number of birds aerial prospecting). The 'AICcmodavg' package for R was used in calculating AICc (Mazerolle 2020). Birds landed on the site at Garden Island on only 1 d during the observation period, and only 12 d were recorded for landing time at Mandurah before a fairy tern was observed incubating an egg.
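The exhaustive AICc model search described above was carried out in R with the 'AICcmodavg' package. As an illustration of the same idea, the hedged Python sketch below enumerates candidate ordinary least squares models and ranks them by AICc; the data frame, column names and candidate term list are placeholders, not the study data, and only one wind variable is included for brevity.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data standing in for the field observations (n = 36 sampling days).
rng = np.random.default_rng(0)
n = 36
df = pd.DataFrame({
    "sqrt_air_time": rng.gamma(2.0, 2.0, n),            # sqrt-transformed response (placeholder)
    "playback": rng.integers(0, 2, n),                   # 1 = call playback used
    "decoys": rng.integers(0, 2, n),                     # 1 = decoys deployed
    "site": rng.choice(["Mandurah", "GardenIsland"], n),
    "max_wind": rng.uniform(5, 35, n),                   # km/h
})

# Candidate terms: main effects plus the kinds of interactions considered in the study.
terms = ["playback", "decoys", "site", "max_wind",
         "playback:site", "decoys:site", "playback:decoys", "site:max_wind"]

def aicc(fit):
    """Small-sample corrected AIC; k counts all estimated parameters
    including the residual variance (the convention used by AICcmodavg)."""
    k = len(fit.params) + 1
    n_obs = int(fit.nobs)
    aic = -2 * fit.llf + 2 * k
    return aic + (2 * k * (k + 1)) / (n_obs - k - 1)

results = []
for r in range(1, len(terms) + 1):
    for subset in itertools.combinations(terms, r):
        formula = "sqrt_air_time ~ " + " + ".join(subset)
        fit = smf.ols(formula, data=df).fit()
        results.append((aicc(fit), formula))

results.sort()
for score, formula in results[:5]:                       # top 5 models by AICc
    print(f"AICc = {score:7.2f}   {formula}")
```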
Considering the small number of days for which there was any (non-zero) landing time data, our analysis of (1) landing time data and (2) maximum number of fairy terns observed on the ground and the relationship between these outcomes and the treatments is purely descriptive.

Statistical models

On average, the time fairy terns spent aerial prospecting or on landing events in treatments with call playbacks (either with or without the decoys) was greater than in the control and decoy-only treatments (Figs. 4 & 5). However, there was substantially greater variability in time spent aerial prospecting at Mandurah relative to Garden Island, particularly in the call playback-only treatment. Fairy terns began aerial prospecting at both sites within 20 min of call playbacks commencing.

An exhaustive model search based on minimizing AICc was carried out for the response of square root transformed aerial prospecting time using, as explanatory variables, the 2 treatment variables (decoys, call playback), site, average wind speed, maximum wind speed, all possible interactions between the treatment variables and site, and interactions between site and the wind speed variables. The model minimizing AICc included the call playback treatment, site, maximum wind speed, a call playback treatment × site interaction, and a site × maximum wind speed interaction (Table 2, adjusted R² = 0.827), and all of these terms, except the call playback treatment × site interaction, were in the top 5 models in terms of minimizing AICc (Tables S1 & S2 in the Supplement at www.int-res.com/articles/suppl/n045p147_supp.pdf). For this model, there was an estimated increase of 3.00 min (95% CI = 0.58, 7.29) in time spent aerial prospecting when call playback was used on Garden Island, controlling for maximum wind speed. No real call playback effect was evident for Mandurah, however, with an estimated call playback effect of −0.57 min (95% CI = −10.85, 9.70) on the square root scale, due to the call playback treatment × site interaction essentially negating the single-term call playback effect.

A similar exhaustive model search based on minimizing AICc, but using the maximum number of fairy terns prospecting as the response variable, led to selection of a model that included single terms for the call playback treatment, site, and average wind speed, a call playback treatment × site interaction, and a site × average wind speed interaction (Table 3, adjusted R² = 0.848). Site, average wind speed, and a site × average wind speed interaction were in each of the top 5 models based on minimizing AICc, with the call playback treatment appearing in 3 of these (Tables S3 & S4). For the model minimizing AICc (Table 3), there was an estimated increase of 2.41 (95% CI = 0.31, 4.51) birds aerial prospecting when call playback was used on Garden Island, controlling for average wind speed. Again, however, there did not appear to be a real call playback effect for Mandurah (estimated effect of −1.04 birds; 95% CI = −4.03, 1.94), with the call playback treatment × site interaction again essentially negating the single-term call playback effect.

Mandurah

The time spent aerial prospecting over Mandurah increased between the first and second treatment blocks for all treatments (Fig. 5a). During the third treatment block, this increase was followed by a sharp decline.
The decline in aerial prospecting co -incided with a cold front producing ~32 km h −1 winds and rainy conditions on 14 October (call playback), and there was an increase in the time spent on the ground on 16 October (playbacks and decoys). Fairy terns spent an average (±1 SE) of 65.9 ± 6.6 and 42.8 ± 21.6 min on the ground in response to the call playback plus decoy treatment and call playback treatment, respectively (Figs. 4b & 5c). In comparison, terns spent an average of 31.0 ± 13.9 and 6.9 ± 1.5 min on the ground in response to decoy and control treatments, respectively (Fig. 4b). When decoys were in situ, fairy terns were observed interacting with the models (Fig. 2), and terns were regularly observed walking towards or flying above the speaker when call playbacks were broadcast. On 19 October 2018, a fairy tern was observed incubating an egg on the Mandurah site. The colony grew steadily in size over several weeks, and the site went on to support a colony that peaked at 110 nests in late November 2018 (Greenwell et al. 2019a). While decoys remained in situ, no further call playback was used following the laying of the first egg. A second colony of terns was established on a beach ~50 m away from the managed site on 30 October and peaked at ~40 nests in late November 2018 (Greenwell et al. 2019a Table 2. Model fit for a multiple linear regression of time spent aerial prospecting (square root transformed) on the treatments of call playback, site, maximum wind speed, an interaction (×) between the call playback and site variables, and an interaction between the maximum wind speed and site variables (n = 36). Significant results are in bold (p < 0.05) Garden Island On average, terns spent more time aerial prospecting over the site when call playback was used than the control or decoy-only treatments at Garden Island (Figs. 4a & 5b). Unlike Mandurah, fairy terns only landed on the Garden Island site on a single day -birds landed amongst the decoys during the playback plus decoy treatment on 20 October 2019, when 2 or 3 birds landed on 3 occasions for 15, 21, and 40 s. However, terns landed on the adjacent beach on 6 d (14,16,18,19,24,and 29 October) when call playbacks were used, and this location was close to the speaker. No landing events were re corded on control or decoy treatment days. Terns regularly hovered over the speaker projecting the call playback and were observed making low flights over the decoys when in situ. Terns spent an average of 10.2 ± 2.32 and 6.9 ± 1.8 min flying over the site when call playback plus decoy and call playback-only treatments were used, respectively (Fig. 4a). In contrast, terns spent an average of 1.5 ± 0.8 and 1.3 ± 0.8 min over the site on decoy-only and control days, respectively (Fig. 4a). There was a general increasing trend in the time spent prospecting over the study period, except in block 5 when strong winds (33 km h −1 ) were re cor ded on 24 October (call playback plus decoy) and on 25 October during the call playbackonly treatment. A decrease in the time spent at the site was also ob served in block 6 during the decoy treatment (Fig. 4). Fairy terns did not establish a colony at the Garden Island managed site during or after the study period. Instead, the birds selected an alternative and historically important breeding site on Parkin Point, an expansive sandbar ~800 m away from the managed site. 
The first colony on Parkin Point failed, likely due to egg depredation by black rats and possibly ghost crabs Ocypode sp., as animal tracks of these species were found around fairy tern nests. However, fairy terns formed a second colony on an alternative part of the sandbar protected by coastal vegetation, and following rodenticide baiting, the site went on to support a breeding colony of an estimated ~145 pairs that peaked in mid-to late January 2020. DISCUSSION While settling decisions by prospecting fairy terns varied between the 2 study sites, the audio-visual cues elicited a strong behavioral re sponse at previously unused areas and resulted in egg laying at Mandurah. Overall, treatments with call playbacks stimulated a stronger behavioral re sponse than decoy-only or control treatments. Active colonies provide information to prospecting birds about habitat suitability and the potential for individual breeding success (Reed & Dobson 1993, Boulinier et al. 1996, Danchin et al. 1998). Therefore, the use of audio-visual cues (particularly call playbacks) that mimicked active breeding colony sounds provided an opportunity to influence fairy tern behavior (Friesen et al. 2017). For species such as fairy terns that breed in relatively ephemeral habitats, exhibit low site tenacity, and tend to periodically shift colony sites , social cues may be strong drivers of site selection (Burger 1984, Medeiros et al. 2012. The results of this study, while limited in extent, appear consistent with those of Arnold et al. (2011), who also highlighted the importance of call playbacks compared to decoys for attracting common terns in Massachusetts, USA. They suggested that 'decoys are likely to be a secondary cue signaling the presence of breeding conspecifics only in the presence of sound' (Arnold et al. 2011, p. 498). Decoys deployed at Mandurah and Garden Island in the years prior to this study failed to attract prospecting fairy terns (see Section 2.1), yet birds began actively prospecting these sites within 20 min of call playbacks being used, supporting the premise that call playbacks are the primary cues needed to attract terns. The results of the current study are in contrast to those of Jeffries & Brunton (2001) Table 3. Model fit for a multiple linear regression of maximum number of birds aerial prospecting on the treatment of call playback, site, average wind speed, an interaction (×) between the call playback and site variables, and an interaction between the average wind speed and site variables. (n = 36) decoys attracted a significant behavioral response from New Zealand fairy terns Sternula nereis davisae, with or without playbacks. The reasons for the observed be havioral differences between the 2 subspecies re main unclear, although decoy design and the origin and type of conspecific call recordings used in the experiments may be important (see below). The differences in responses to artificial social facilitation be tween subspecies highlight the importance of assessing species-specific responses when efforts to im prove reproductive success are required (reviewed by Friesen et al. 2017). Marked differences in the time spent prospecting, settling behavior, and colony site selection were observed between the 2 sites at Mandurah and Garden Island. We propose 4 possible factors that may have contributed to these differences: reproductive phase, habitat availability, past breeding experience at other suitable sites, and the influence of group adherence behavior. 
During the pre-breeding period, attachment to potential colony sites is low, and prospecting bouts may be limited to a few birds visiting for brief periods before dispersing . Over time, terns may begin alighting and engaging in site-attachment activities such as territory establishment and scraping (Dunlop 1987, Kress 1997, but the timing of breeding likely coincides with a peak in prey availability (Monaghan et al. 1989, Zuria & Mellink 2005, Paillisson et al. 2007). The rapid settling behavior at Mandurah, while unusual, may have been driven by individuals in an advanced reproductive stage, who were already utilizing a beach for courtship within close proximity (~50 m) of the site just before the study period (C. N. Greenwell pers. obs.). Conversely, at Garden Island, it is possible that terns were less advanced in their reproductive condition than those observed in Mandurah 1 yr earlier or that their fish prey was not sufficiently abundant. At Garden Island in 2019, the number of breeding pairs peaked in mid-to late January 2020 compared to a peak in late November in Mandurah in 2018. The timing of breeding varies widely between individuals, and food availability in the lead-up to the breeding pe riod has the potential to affect the timing and success of reproduction (Regehr & Rodway 1999, Zuria & Mellink 2005, Kitaysky et al. 2007. At Mandurah, the managed site is located within a historically important breeding area that has since been developed into a marina, and terns have periodically nested on empty blocks adjacent to the managed site. Therefore, strong area knowledge and historical use, along with social stimuli and conspecific cueing associated with previous experience may have contributed to a stronger response by fairy terns already prospecting in the area (Boulinier et al. 1996). On Garden Island, alternative habitat, i.e. a large sandbar (Par kin Point), located ~800 m away that is also used as a night roost, may have contributed to the terns on the island ultimately selecting this site and showing less interest in the prepared managed site. Finally, the origin of call playbacks obtained for this study may have contributed to the terns ultimately selecting the alternative site at Garden Island. Group adherence and the maintenance of strong alliances between groups of birds is, potentially, an important behavioral trait among fairy terns , as has been shown for least terns and common terns (Austin 1951, Atwood & Massey 1988. Playback experiments performed in a least tern colony in North Carolina showed that the temporal and spectral characteristics of calls varied significantly between individuals, enabling the identification of mates (Moseley 1979). The individual recognition of associates and group adherence behavior may, therefore, be an important cue in encouraging site selection, particularly during the early stages of colony formation. Further research is required to elucidate whether the behavioral response of small terns varies according to the origin of the playback call. That is, can the calls of birds from one region be used to successfully encourage settlement of birds from another region or state, and do locally sourced colony calls lead to increased settlement? It is also possible that different decoy designs, such as recently developed, 3D-printed fairy tern models (www.shaunlee.co.nz) and model eggs (visual cues), may influence the behavior of terns, which are topics for future research. 
Anecdotal observations of early colony formation in fairy terns indicate that the presence of eggs may provide a strong stimulus for prospecting individuals (C. N. Greenwell pers. obs.). Social facilitation and the stimuli acting on gregarious species such as the fairy tern have the potential to influence colony establishment but may be dependent on a range of interacting factors. While fairy terns did not select the Garden Island managed site for nesting, this site has the potential to be occupied in subsequent breeding seasons with further artificial social facilitation. It is important to note that call playback was only utilized on 2 d within a 4 d block and was limited to a 2 h period in the morning over 24 d at this site. In a restoration project involving Arctic terns and common terns, call playbacks were broadcast for 3 yr before a colony was formed (Kress 1983). Sightings of terns increased 2-fold within the first year of using call playbacks and decoys, and despite not actively nesting on the site, terns were seen interacting with decoys and made nest scrapes in the area of the decoys (Kress 1983). By the third year of using these attractants, a mixed colony of Arctic and common terns formed (80 pairs) around the decoys and speaker, with some of the early colonizers establishing nests <10 cm from decoys (Kress 1983). The behavioral response of fairy terns to artificial social facilitation over a relatively short period highlights the potential for call playback and, to a lesser extent, decoys to be used as a tool to encourage site selection by increasing social stimuli (Kress 1983). However, due to an absence of past experience at newly created sites, social facilitation may be required over several breeding seasons before colonies are established (Kress 1983, 1997). This may include the use of decoys and broadcasting call playbacks for at least several hours per day, particularly in the mornings when site prospecting activity is high (Dunlop 1987). Site selection and the associated site threat profiles should be given careful consideration before social facilitation is undertaken, to reduce the potential for terns to be attracted into ecological traps or suboptimal habitats (Battin 2004, Ward et al. 2011). While habitats may provide the fundamental conditions necessary to encourage site selection, the inability of land managers to adequately mitigate the external influences that limit reproductive success may lead to reproductive failure (Ward et al. 2011, Greenwell et al. 2019a). Increased anthropogenic pressures, including coastal development, have the potential to fundamentally change coastal processes and the habitats that support birdlife. In some locations, dedicated managed sites may offer long-term solutions for coastal birds like fairy terns. Managed sites such as North Fremantle (see Greenwell et al. 2019b) show the potential of dedicated nesting areas to maintain breeding aggregations and support reproductive success by overcoming a lack of natural habitat. However, regular monitoring and management of site threat profiles to support the target species remain critical (Commonwealth of Australia 2019). The maintenance and establishment of multiple sites, whether natural or artificially created, in areas of high human activity is important. The availability of multiple sites will allow for periodic shifting of colony locations over the years in response to changes in site suitability (e.g.
food availability, habitat stability, disturbance, predation), which is an important behavioral characteristic of fairy terns (Greenwell et al. 2020).
Exportin-1-Dependent Nuclear Export of DEAD-box Helicase DDX3X is Central to its Role in Antiviral Immunity DEAD-box helicase 3, X-linked (DDX3X) regulates the retinoic acid-inducible gene I (RIG-I)-like receptor (RLR)-mediated antiviral response, but can also be a host factor contributing to the replication of viruses of significance to human health, such as human immunodeficiency virus type 1 (HIV-1). These roles are mediated in part through its ability to actively shuttle between the nucleus and the cytoplasm to modulate gene expression, although the trafficking mechanisms, and impact thereof on immune signaling and viral infection, are incompletely defined. We confirm that DDX3X nuclear export is mediated by the nuclear transporter exportin-1/CRM1, dependent on an N-terminal, leucine-rich nuclear export signal (NES) and the monomeric guanine nucleotide binding protein Ran in activated GTP-bound form. Transcriptome profiling and ELISA show that exportin-1-dependent export of DDX3X to the cytoplasm strongly impacts IFN-β production and the upregulation of immune genes in response to infection. That this is key to DDX3X's antiviral role was indicated by enhanced infection by human parainfluenza virus-3 (hPIV-3)/elevated virus production when the DDX3X NES was inactivated. Our results highlight a link between nucleocytoplasmic distribution of DDX3X and its role in antiviral immunity, with strong relevance to hPIV-3, as well as other viruses such as HIV-1. Introduction DEAD-box helicase 3, X-linked (DDX3X) is a conserved ATP-dependent RNA helicase with various roles in RNA metabolism/gene expression, facilitated by localization in the cytoplasm or the nucleus. DDX3X is crucial in regulating innate antiviral immune responses initiated by the retinoic-acid-inducible gene I (RIG-I)-like receptors (RLRs) [1]. RLRs recognize cytoplasmic RNA derived from viruses such as hepatitis C (HCV), influenza A, human immunodeficiency virus type 1 (HIV-1) [2], and parainfluenza virus type 3 (hPIV-3) [3], a major cause of bronchiolitis, bronchitis, and pneumonia in children, the elderly, and immunocompromised, and a cause of significant mortality in hematopoietic stem cell transplant recipients [4,5]. Despite being an important respiratory pathogen DNA Transfections Plasmid transfections were performed using FuGENE HD (Promega, Madison, WI, USA) according to the manufacturer's instructions. Co-Immunoprecipitation and Immunoblotting HEK-293T cells were transfected to express mCherry or mCherry fusion proteins, then the mCherry positive populations were FACS-sorted and expanded. 1 × 10⁷ cells were scraped into microfuge tubes, then 200 µL ice-cold co-IP buffer (20 mM Tris-Cl pH 7.4, 150 mM NaCl, 0.1% v/v IPEGAL) supplemented with 10 µg mL⁻¹ RNaseA (Sigma-Aldrich, St. Louis, MO, USA) and cOmplete Ultra EDTA-free protease inhibitor cocktail tablets was added. Cells were briefly sonicated and clarified by centrifugation, then 100 µL Protein G-coupled magnetic resin (Thermo Fisher Scientific, Waltham, MA, USA) pre-bound to mCherry antibody was added to each supernatant. Protein-antibody complexation proceeded with end-over agitation for 30 min at 4 °C, then the resin was washed once with tris-buffered saline (TBS), transferred to clean tubes, and 50 µL 2× Laemmli sample buffer was added before incubating for 10 min at 95 °C. Samples were centrifuged at 16,100× g for 1 min and immediately subjected to SDS-PAGE (sodium dodecyl sulfate polyacrylamide gel electrophoresis) on 10% polyacrylamide gels.
Proteins were transferred to PVDF (polyvinylidene difluoride) and probed using specific antibodies diluted in 5% w/v skim milk + 0.1% v/v Tween-20. NanoString RNA Profiling Whole-cell lysates of freshly sorted A549 cells were prepared 24 h postinfection by thoroughly washing and resuspending the cells in 50 µL CL buffer (10 mM Tris-Cl pH 7.4, 150 mM NaCl, 0.25% (v/v) IPEGAL). Cells were homogenized and RNA hybridization reactions were performed using the 770-plex Human PanCancer Immune Profiling CodeSet (NanoString Technologies, Seattle, WA, USA) with 5 µL clarified supernatant, corresponding to approximately 4000 cells in accordance with the manufacturer's instructions. The nCounter® SPRINT system (NanoString Technologies, Seattle, WA, USA) was used to quantify captured reporter probes. Average linkage Pearson correlation heatmaps on optimally ordered data were generated using MeV software. Principal component analysis was performed using XLStat. Experimentally-validated human ISGs were interrogated using the Interferome database [24]. Enzyme-Linked Immunosorbent Assay ELISA titrations were performed in triplicate using the sandwich method employed by the LumiKine hIFN-β kit (Invivogen, San Diego, CA, USA) in accordance with the manufacturer's instructions. Measurements were performed on a ClarioStar plate reader (BMG Labtech, Ortenberg, Germany) equipped with a liquid injector using 30 flashes per well. Live and Indirect Immunofluorescence Microscopy Fixed and live cell imaging was performed using a Nikon C1 inverted confocal laser scanning microscope (Monash Micro Imaging, Monash University), equipped with a CO₂- and temperature-controlled live imaging chamber and stage, a 100× NA 1.4 oil-immersion objective, and running NIS Elements (Nikon, Tokyo, Japan) for image acquisition. Specimens were optically sliced through the maximum dimension of the nucleus using a pinhole diameter of 1.0 AU. Images were analyzed blind using Fiji/ImageJ (NIH). Pixel intensities (fluorescence) in the middle of the nucleus and cytosol were determined by sampling equally sized representative regions of interest (ROIs), free of inclusions and oversaturated pixels, as performed previously [25]. Background was calculated by defining a ROI in each image lacking cells or specific staining, and measuring the pixel intensity of an area equivalent to that used for cell sampling. This was subtracted from the nuclear and cytosolic pixel intensity values, thereby enabling the nuclear/cytoplasmic (Fn/c) ratio to be calculated. Similarly, DDX3X-exportin-1 colocalization around the nuclear membrane was determined by measuring pixel intensity (fluorescence) along a representative line bisecting the nucleocytosolic boundary as indicated. Expression and Assembly of Exportin-1-Ran-GTP GST-Exportin-1 and GST-Ran(Q69L) were expressed separately in E. coli BL21(DE3) cells at 16 °C following induction at OD600nm = 0.6 with 0.5 mM IPTG. 18 h post-induction, bacteria were harvested by centrifugation and resuspended in PBS supplemented with 5 mM DTT and cOmplete protease inhibitor tablets (Sigma-Aldrich, St. Louis, MO, USA). Proteins were extracted by sonication and applied to sepharose G4B resin (GE Healthcare, Chicago, IL, USA) for GST-affinity purification, then washed and the GST-free proteins eluted by incubation with PreScission protease.
Exportin-1 and Ran(Q69L) were then each applied to a Superdex 200 16/60 gel filtration column (GE Healthcare, Chicago, IL, USA) equilibrated in GF1 buffer (20 mM Tris-Cl pH 7.5, 100 mM NaCl, 5 mM MgOAc, 2 mM DTT). For production of Ran-GTP, 1 mM GTP was added to Ran(Q69L) and the complex was purified on a Superdex 200 16/60 column. The formation of the Ran-GTP complex was confirmed by absorbance at 260 nm. For binding studies using exportin-1-Ran-GTP, the complex was pre-formed by incubating equimolar amounts of exportin-1 and Ran-GTP in GF1 buffer at 20 °C for 30 min. Circular Dichroism Protein circular dichroism spectra were measured in 20 mM Tris-Cl pH 8.0, 500 mM NaCl, 10% glycerol, and 0.5 mM TCEP using a J-815 circular dichroism (CD) spectrometer (Jasco, Easton, MD, USA). Spectra were recorded at 0.2 mg mL⁻¹ between 190 and 250 nm in a 1 mm quartz cuvette at 20 °C. Mean ellipticity values per residue (θ) were calculated as θ = (3300 × m × ∆A)/(lcn), where l is the path length (0.1 cm), n is the number of residues, m is the molecular mass (Da), and c is the protein concentration (mg mL⁻¹). RNA-Dependent ATP Hydrolysis Assays RNA-dependent ATP hydrolysis activity was measured using the Biomol® Green phosphate detection kit (Enzo Life Sciences, Farmingdale, NY, USA). 200 nM RIG-I∆CARDS, DDX3X, or variants thereof were diluted in ATPase assay buffer (20 mM Tris-Cl pH 7.5, 1.5 mM DTT, 1.5 mM MgCl₂), 10 µM poly(I:C) (Invivogen, San Diego, CA, USA), and 20 nmol ATP (Sigma-Aldrich, St. Louis, MO, USA), then incubated for 25 min at 37 °C. Phosphate standards were serially diluted from 2 µM to 0.031 µM using 1× ATPase reaction buffer and added to the control wells. Reactions were performed in pentaplicate in a final volume of 100 µL in 96-microwell assay plates (Corning, Corning, NY, USA). Reagents were diluted using diethylpyrocarbonate (DEPC)-treated water. Following incubation, 100 µL Biomol Green reagent was added to the control and sample wells to stop the reactions. Sample absorbance was measured at 620 nm using a ClarioStar plate reader (BMG Labtech, Ortenberg, Germany) with 30 flashes per well. Analytical Ultracentrifugation Sedimentation velocity experiments on wild-type DDX3X and NES mutants, alone and in complex with exportin-1-Ran-GTP, were performed in an Optima analytical ultracentrifuge (AUC; Beckman Coulter, Brea, CA, USA) at 20 °C. Proteins were incubated individually or together at 20 °C for 30 min prior to centrifugation in GF1 buffer (20 mM Tris-Cl pH 7.5, 100 mM NaCl, 5 mM MgOAc, 2 mM DTT). 380 µL of sample and 400 µL of reference solution (GF1 buffer) were loaded into a conventional double sector quartz cell and mounted in an An-50 Ti rotor (Beckman Coulter, Brea, CA, USA). Samples were centrifuged at 40,000 rpm and data were collected continuously at 280 nm. Solvent density (1.041 g mL⁻¹ at 20 °C) and viscosity (1.0149 cp at 20 °C), as well as estimates of the partial specific volume (DDX3X: 0.7215 mL g⁻¹, exportin-1-Ran-GTP: 0.7450 mL g⁻¹ at 20 °C), were computed using SEDNTERP [26]. Sedimentation velocity data were fitted to a continuous size [c(s)] distribution model using SEDFIT [27]. Quantification and Statistical Analysis Statistical parameters are reported in the figures and figure legends. Statistical analysis was performed using GraphPad Prism software.
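To make the mean residue ellipticity formula quoted above easier to apply, a minimal sketch is given below; the numerical inputs are placeholders chosen only to show the quantities involved, not measured values from this study.

def mean_residue_ellipticity(delta_a, molecular_mass_da, path_length_cm, conc_mg_per_ml, n_residues):
    """Mean ellipticity per residue: theta = (3300 * m * deltaA) / (l * c * n)."""
    return (3300.0 * molecular_mass_da * delta_a) / (path_length_cm * conc_mg_per_ml * n_residues)

# Placeholder example: a ~73 kDa, 662-residue protein at 0.2 mg/mL in a 0.1 cm cuvette.
theta = mean_residue_ellipticity(
    delta_a=2.0e-4,             # differential absorbance at one wavelength (illustrative)
    molecular_mass_da=73000.0,  # m (Da)
    path_length_cm=0.1,         # l (cm)
    conc_mg_per_ml=0.2,         # c (mg/mL)
    n_residues=662,             # n
)
print(theta)  # mean residue ellipticity at that wavelength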
For nuclear/cytosolic fluorescence ratio measurements (Figures 1B,F, 2B and 4B,D), n represents the number of cells measured per sample and is represented as mean ± SEM, as previously [23,28,29]. Significance was calculated using Student's t-test (two-tailed) or one-way ANOVA with Tukey's, Dunnett's, or Holm-Sidak multiple comparisons post hoc analysis as indicated. For ATP hydrolysis assays ( Figure 3C), n represents the number of experimental replicates and is represented as mean ± SD. Significance was calculated using the Student's t-test with Holm-Sidak multiple comparisons post hoc analysis. For plaque assays ( Figure 5A,C) and ELISA ( Figure 5B,D), n represents the number of biological replicates and is represented as mean ± SD. Significance was calculated using one-way ANOVA with Dunnett's or Tukey's multiple comparisons post hoc analysis as indicated. For NanoString RNA profiling ( Figure 6A-C, Table S1), low (<10) count data was discarded, then the remaining data was background corrected by subtracting the maximum value of the available negative control probes and normalized to the geometric mean of 10 stable housekeeping genes across all samples, as described previously [30]. The DDX3X N-Terminus Mediates Its Nuclear Export To investigate the exportin-1-dependent nuclear export of DDX3X, we generated plasmids for mammalian expression of HA-fused full-length DDX3X, the first 168 residues (HA-DDX3X(1-168)), or DDX3X lacking the first 168 residues (HA-DDX3X(169-662)). Subcellular localization of these proteins was analyzed in HEK-293T cells by confocal laser scanning microscopy and quantitative image analysis (qCLSM). As a control, we used a GFP fusion of the well-characterized HIV-1 Rev NES (GFP-HIV-RevNES), which is exported via exportin-1 [31]. As another control, we used a GFP fusion of the Simian virus 40 T-antigen (Tag) NLS (GFP-TagNLS), which undergoes nuclear import dependent on importin α/β1 [32]. As expected, GFP-HIV-RevNES and GFP-TagNLS localized to the cytoplasm and nucleus, respectively ( Figure 1A,B). Surprisingly, despite being only 19.5 kDa in size, and small enough in principal to passively diffuse across the nuclear pore, HA-DDX3X(1-168) was predominantly cytosolic, as per the full-length protein ( Figure 1A,B). In contrast, DDX3X lacking this region, HA-DDX3X(169-662), was localized strongly within the nucleus ( Figure 1A,B), supporting the idea that the N-terminus specifically mediates nuclear export of DDX3X [20]. Additionally, because the 55.9 kDa HA-DDX3X(169-662) truncation has very limited ability to passively diffuse through the nuclear pore, its strong nuclear accumulation is likely due to active nuclear import. Thus, DDX3X residues 1-168 and 169-662 appear to harbor, respectively, at least one NES or nuclear localization signal (NLS), and these interact specifically with one or more subcellular trafficking receptors to facilitate nucleocytoplasmic shuttling of full-length DDX3X. DDX3X Harbors an Exportin-1 Recognized NES in the N-Terminus To confirm the NES (hereafter termed NESα) is functional in mediating DDX3X nuclear export, and for further use as a tool to explore DDX3X function, we substituted the four hydrophobic residues to alanine (L12A/F16A/L19A/L21A, termed qmNESα), which are required for transport of other exportin-1 cargos [33]. We compared subcellular distribution of the qmNESα variant with wild-type DDX3X expressed as mCherry fusion proteins in HEK-293T cells by live-cell qCLSM. 
The qmNESα DDX3X variant showed significantly (p ≤ 0.0001) impaired nuclear export compared to wild-type, with almost 9-fold higher levels of nuclear accumulation (Figure 1E,F). LMB treatment did not further increase the nuclear fluorescence signal, in stark contrast to wild-type which showed significantly (p ≤ 0.001) increased nuclear accumulation. As expected, these results confirmed the finding by Brennan et al. [20] that the DDX3X N-terminal NESα is functional in exportin-1-dependent nuclear export (Figure 1E,F). Consistent with this result, we successfully captured the transient receptor-cargo interaction between endogenous exportin-1 and mCherry-DDX3X, but not mCherry-DDX3X(qmNESα), by co-immunoprecipitation (Figure 1G). Collectively, these data confirm that exportin-1-mediated nuclear export of DDX3X is dependent on the N-terminal NESα of DDX3X. DDX3X's C-Terminal Tail Is Dispensable for Nuclear Export DDX3X residues 260-517, comprising a truncated portion of the helicase core (residues 211-575), were previously proposed to bind exportin-1 without dependence on a modular NES or Ran-GTP [17,21,22]. Additionally, DDX3X C-terminal residues 536-662 have been reported to mediate nuclear export by nuclear RNA export factor 1 (NXF1/TAP) [18]. To test these possibilities, we generated mCherry-fused DDX3X lacking the NXF1-binding region but harboring the wild-type NESα, termed DDX3X(1-535). Using live-cell qCLSM, we found the subcellular distribution of this protein was identical to full-length (Figure 2), indicating the C-terminal tail is dispensable for DDX3X's subcellular trafficking. Next, we introduced our qmNESα mutations into this truncated construct, termed DDX3X(1-535)(qmNESα), to test the contribution of any NES-independent binding of exportin-1. As expected, this protein was localized in an identical manner to full-length DDX3X(qmNESα) (Figure 2). Collectively, these data suggest DDX3X's bulk nuclear export occurs via exportin-1, and that this is mediated by the N-terminal NESα sequence.
(Figure legend fragment: for panels D-I, residuals from the c(s) distribution best fit, plotted as a function of radial distance from the axis of rotation, are displayed above; the presence or absence of larger-sedimenting species corresponding to complex formation is indicated by black arrows. See also Table 1.)

Invasive RNA Triggers DDX3X Nuclear Accumulation We next probed the functional significance of exportin-1-dependent nuclear export of DDX3X in innate immune signaling in the context of invasive RNA. To determine whether the subcellular distribution of DDX3X changes in correlation with immune stimulation, we challenged HeLa cells by transfection with the synthetic double-stranded RNA analog poly(I:C) and then examined the subcellular distribution of endogenous DDX3X by qCLSM. Strikingly, poly(I:C) caused rapid redistribution of DDX3X from the cytosol to the nucleus, with significantly (p ≤ 0.0001) increased (~2-fold) nuclear accumulation observed 6 h post-stimulation, with levels of nuclear protein remaining constant for at least 24 h (Figure 4A,B). To test whether the same effects were induced by an RNA virus infection model, we used hPIV-3, the most virulent hPIV subtype for respiratory illness [34], and A549 human alveolar epithelial cells. Indeed, hPIV-3 infection significantly (p ≤ 0.0001) increased (~2-fold) the nuclear localization of ectopically expressed DDX3X (Figure 4C,D). Notably the magnitude of DDX3X relocalization between poly(I:C) stimulation and virus infection was identical (~2-fold), suggesting a specific response to invasive RNA. In addition, hPIV-3 infection induced accumulation of DDX3X into cytosolic inclusions in some cells, possibly p-bodies or stress granules typically associated with translational regulation of cellular or viral RNA. HeLa cells showed identical results (data not shown). These results imply that nuclear redistribution of DDX3X may be a general, acute-phase cellular response to viral challenge, arising as a specific cellular response to invasive RNA. Overexpression of Wild-Type But Not Nuclear Export Defective DDX3X Can Protect Against hPIV-3 Infection To dissect the role of exportin-1-mediated nuclear export of DDX3X in regulating immune signaling events in the nucleus and cytosol, we infected A549 cells expressing either mCherry-DDX3X, mCherry-DDX3X(qmNESα), or mCherry alone with hPIV-3, and then measured viral replicative fitness using plaque assays. Strikingly, cells overexpressing mCherry-DDX3X were significantly (p ≤ 0.01) more resistant to infection than those expressing mCherry alone, with almost a 10-fold reduction in infectious virus production as measured by plaque assay (Figure 5A). This is consistent with the idea that DDX3X plays an important antiviral role. In stark contrast, cells overexpressing mCherry-DDX3X(qmNESα) were substantially more susceptible to infection, with 200-fold higher levels of virus production (p ≤ 0.01) than those expressing wild-type DDX3X, strongly indicating that DDX3X's ability to undergo nuclear export through the exportin-1-recognized NESα is key to its antiviral activity.
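The fold-changes in nuclear accumulation reported above derive from the background-corrected nuclear/cytoplasmic (Fn/c) ratio described in the Methods; a minimal sketch of that calculation is shown below, with an illustrative function name and invented intensity values rather than values from this study.

def fn_c_ratio(nuclear_mean, cytosol_mean, background_mean):
    """Background-subtracted nuclear/cytoplasmic fluorescence ratio (Fn/c).

    All inputs are mean pixel intensities of equally sized ROIs: one in the
    nucleus, one in the cytosol, and one in a cell-free background region.
    """
    fn = nuclear_mean - background_mean   # background-corrected nuclear signal
    fc = cytosol_mean - background_mean   # background-corrected cytosolic signal
    if fc <= 0:
        raise ValueError("Cytosolic signal is not above background; check the ROI.")
    return fn / fc

# Example: a cell with stronger cytosolic than nuclear staining gives Fn/c < 1.
print(fn_c_ratio(nuclear_mean=420.0, cytosol_mean=880.0, background_mean=95.0))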
Parallel monitoring of production of IFN-β by ELISA in response to infection indicated that DDX3X(qmNESα)-expressing cells secreted significantly (p ≤ 0.001) more IFN-β (~2-fold) than those expressing wild-type DDX3X ( Figure 5B). These results suggest that the increased hPIV-3 titer observed in nuclear export defective DDX3X-expressing cells is not due to a general defect in IFN-β production during hPIV-3 infection, and that increased IFN-β production is insufficient to inhibit hPIV-3 replication. These results strongly imply that DDX3X's antiviral role in hPIV-3 infection is dependent on its nuclear export/nuclear trafficking ability. Exportin-1 Is Important to hPIV-3 Replication Even though RNA viruses such as paramyxoviruses replicate entirely in the host cytosol, inhibition of exportin-1 by LMB has been reported to inhibit virus production in the case of Hendra virus [23], RSV [35], and Venezuelan equine encephalitis virus [36], suggesting their replication is facilitated by exportin-1-dependent nuclear export. Since hPIV-3 also replicates entirely in the cytoplasm, we tested the importance of exportin-1 mediated nuclear export by treating hPIV-3-infected cells with LMB, again monitoring virus production and IFN-β as above. Controlling for limited cytotoxic effects, we found a dose-dependent reduction in both hPIV-3 titer and IFN-β secretion with increasing LMB concentration ( Figure 5C,D), again consistent with the importance of exportin-1-dependent nuclear export of host/viral factors being central to hPIV-3 virus production fitness, as opposed to IFN-β levels. DDX3X's Nuclear Trafficking Potentiates Immune Gene Induction Since IFN-β production in response to hPIV-3 infection did not appear to be impaired by inactivation of DDX3X nuclear export, we hypothesized that altered expression of antiviral genes besides IFNB1 might be responsible for the effects on infection observed in Figure 4A. To address this directly, we profiled host gene transcription using the NanoString nCounter ® SPRINT system. We transfected A549 cells to express mCherry-fused DDX3X, mCherry-fused DDX3X(qmNESα), or mCherry alone, then sorted the mCherry-expressing populations and assayed mRNA transcript levels 24 h post-hPIV-3 or mock infection. Transcript levels were monitored using the PanCancer Immune Profiling RNA probe library. After internal normalization and discarding low-count data, we measured transcription across a total of 730 human genes relevant to immunity and cancer (Table S1). The vast majority of genes were downregulated in uninfected cells expressing mCherry-DDX3X compared to mCherry alone ( Figure 6A), suggesting DDX3X may act as a 'brake' on immune genes at steady-state. Consistent with this idea, many of the genes were upregulated in uninfected cells expressing mCherry-DDX3X(qmNESα), implying that DDX3X's ability to traffic between the nucleus and cytoplasm, dependent on its exportin-1 recognized N-terminal NESα, is central to this function. As expected, viral infection resulted in strong activation of many of these genes in cells expressing wild-type, but not in cells expressing nuclear export defective, DDX3X. This is reflected in the distant clustering of the wild-type DDX3X samples between steady-state and infection, as opposed to the much closer clustering of the DDX3X(qmNESα) samples in the absence or presence of infection ( Figure 6A. See also principal component analysis in Figure 6B). 
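The internal normalization applied to these transcript counts (discarding low counts, subtracting the maximum negative-control probe value per sample, and scaling to the geometric mean of housekeeping genes, as described in the Methods) could be sketched as follows; the function, column names and toy counts are illustrative assumptions, not the pipeline actually used.

import numpy as np
import pandas as pd

def normalize_counts(raw, neg_cols, hk_cols, min_count=10):
    """Sketch of NanoString-style normalization: one row per sample, one column per probe."""
    counts = raw.astype(float).mask(raw < min_count)             # discard low (<10) counts
    background = counts[neg_cols].max(axis=1)                    # max negative-control probe per sample
    corrected = counts.drop(columns=neg_cols).sub(background, axis=0).clip(lower=0)
    geo_mean = np.exp(np.log(corrected[hk_cols]).mean(axis=1))   # geometric mean of housekeeping genes
    scale = geo_mean.mean() / geo_mean                           # per-sample scale factor
    return corrected.mul(scale, axis=0)

# Toy example: two samples, one negative-control probe, two housekeeping genes, one gene of interest.
raw = pd.DataFrame(
    {"NEG_A": [18, 25], "HK1": [900, 1200], "HK2": [1100, 1500], "IFNB1": [60, 300]},
    index=["mock", "infected"],
)
print(normalize_counts(raw, neg_cols=["NEG_A"], hk_cols=["HK1", "HK2"]))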
The data show that the anti-hPIV-3 inflammatory response in lung tissue is overwhelmingly characterized by the induction of IFN-β and ISGs including proinflammatory cytokines and chemoattractants for neutrophils (e.g., CXCL1, CXCL2, CXCL3, IL1A, IL6, IL8, PTGS2, and SAA1) and T-cells (e.g., CCL5, CCL20, CXCL10, CXCL11, IL6, and IL8), as well as innate immune signaling proteins (e.g., MX1, IFI27, IFIT1, IFIT2, IRF7, ISG15, ISG20, STAT1, and TLR8) and inducers of apoptosis (e.g., IL1B, IFI27, and IFIT2) ( Figure 6C). Comparison of the mRNA levels for cells ectopically expressing DDX3X with or without a functional NES revealed clear differences in the subsets of ISGs expressed. In resting cells, ectopic expression of DDX3X(qmNESα) resulted in increased mRNA levels of 523 genes ( Figure 6D). Only 173 of these showed similar effects upon overexpression of wild-type DDX3X. There were an additional set of 49 genes, distinct from those impacted by DDX3X(qmNESα), showing elevated levels upon overexpression of wild-type DDX3X ( Figure 6D). Upon hPIV-3 infection, wild-type DDX3X-expressing cells distinctly upregulated 192 genes, whereas only 85 were distinctly upregulated in DDX3X(qmNESα)-expressing cells ( Figure 6D). Overall, these data highlight that nucleocytoplasmic trafficking of DDX3X is critically important in regulating gene induction during viral infection, with elevated nuclear expression of DDX3X impacting the resting state transcriptome as well as that in response to viral infection. Nuclear DDX3X Contributes to IFNB1 Transcription and Influences ISG Subset Induction Chromatin-immunoprecipitation experiments indicate DDX3X can associate with the IFNB1 promoter [10]. Our observation that expression of nuclear-localizing DDX3X(qmNESα) led to elevated levels of IFN-β secretion in response to hPIV-3 infection compared to wild-type DDX3X ( Figure 5B) correlated nicely with the fact that a large number (217) of the genes upregulated upon overexpression of DDX3X(qmNESα) were ISGs, including IFNB1 itself. For the latter, hPIV-3-infected A549 cells expressing either DDX3X or DDX3X(qmNESα) showed enhanced IFNB1 transcription versus mCherry alone (normalized induction of 0.924 and 0.942 versus 0.870, respectively), with DDX3X(qmNESα) showing the greatest overall induction (Table S1). Consistent with the idea that DDX3X's nucleocytosolic distribution modulates its role as a brake on immune induction at rest, uninfected cells overexpressing DDX3X showed lower IFNB1 transcription than cells overexpressing mCherry only, whereas cells overexpressing DDX3X(qmNESα) once again showed enhanced IFNB1 transcription (normalized induction of -0.979, -0.909, and -0.847, respectively) ( Table S1). The nuclear trafficking of DDX3X thus appears to modulate IFNB1 gene transcription, modulated by exportin-1 binding to the DDX3X N-terminal NESα in a Ran-GTP-dependent manner. To further validate the above results, we examined protein expression levels of a subset of the above genes in addition to IFNB1, representing a broad range of cellular pathways and antiviral defenses. Changes in transcriptional activity of CASP3, RIPK2, LAMP2, and TBK1 ( Figure 6E) were reflected in corresponding changes in expression of encoded proteins as determined by immunoblot ( Figure 6F), giving confidence that our overall dataset for IFN-1/ISG induction/expression is robust. 
Discussion DDX3X is a key host cellular factor in the RLR signaling cascade and is implicated in the replication strategy of a large and growing list of evolutionarily divergent pathogens of significance to human health, including hepatitis B virus [37], hepatitis C virus [38], influenza A virus [39], Japanese encephalitis virus [40], West Nile virus [41], dengue virus [42], and HIV-1 [17]. Understanding the link between DDX3X subcellular localization and the host-and pathogen-directed roles of DDX3X are central to unlocking novel strategies to target DDX3X activity in infection. Consistent with a NES-dependent interaction, we and others [8,[17][18][19][20] have shown that DDX3X's nuclear export is inhibited by the exportin-1-specific inhibitor LMB, which blocks cargo protein binding and trafficking by covalently modifying the NES-binding interface of exportin-1 [15,16]. Correspondingly, our qCLSM, co-immunoprecipitation, and analytical ultracentrifugation sedimentation velocity data confirms DDX3X harboring a nonfunctional NES is incapable of binding exportin-1 even at supraphysiological concentrations, and reciprocally, Ran-GTP is strictly required for DDX3X binding to exportin-1. Previously, DDX3X's nuclear export was attributed to a unique exportin-1-dependent mechanism requiring DDX3X helicase domain residues 260-517, but neither a recognized exportin-1 NES nor the Ran-GTP gradient [17]. However, consistent with Brennan et al. [20] our results do not support this finding. Collectively, we confirm the mechanism of DDX3X's exportin-1-dependent nuclear export is typical of other receptor-cargo interactions and aligns with that of An3, the Xenopus laevis orthologue of DDX3X, which shares 87% sequence identity overall and an identical NES within the N-terminus of DDX3X [43]. Notably, the key hydrophobic residues of the DDX3X/An3 NES are also conserved down to the Saccharomyces cerevisiae orthologue Ded1p, which also undergoes exportin-1-mediated nuclear export in a NES-and Ran-GTP-dependent manner [44]. Although exportin-1 is an exporter of DDX3X we observe residual DDX3X in the cytosol following inactivation of the NES or LMB treatment. One explanation may be that DDX3X utilizes other nuclear export pathways in addition to the exportin-1 pathway. The C-terminal region of DDX3X was previously reported to mediate binding and nuclear export by NXF1 [18]. We did not observe any contribution of the DDX3X C-terminal region to its nucleocytosolic distribution in this study, but cannot formally exclude the possibility that other nuclear export receptors may bind and traffic DDX3X in certain circumstances. We propose it is equally plausible that the nuclear import of DDX3X is weak at steady-state, but is then enhanced during specific events or stages of the cell cycle [20]. Importantly, the nuclear import mechanism of DDX3X remains unknown and warrants further study. Previous studies [20], as well as our own, have only implicated regions involved in the nuclear import of DDX3X. Our study suggests invasive RNA is a trigger for authentic nuclear accumulation of DDX3X, which supports IFN-β induction and secretion. Nearly all proinflammatory genes strongly activated during hPIV-3 infection were positively associated with expression of wild-type DDX3X, while IFN-β itself and a particular subset of IFN-I signaling and effector genes were more strongly expressed when DDX3X accumulated more strongly in the nucleus. 
This reveals that regulated trafficking of DDX3X between the nucleus and cytosol is crucial for controlling IFN-β levels, at least in response to hPIV-3 infection, as well as supporting transcription of a particular subset of IFN-I signaling and effector genes in order to amplify the IFN-I response. While we do not exclude the possibility that endogenous DDX3X expression levels may play a role, we propose the following model of DDX3X trafficking-dependent immune regulation based on our observations. In the resting state, DDX3X acts as a 'brake' on immune gene induction to prevent unnecessary immune activation. However, upon exposure to invasive RNA during acute-phase virus infection, DDX3X supports cytosolic signaling events leading to IFN-I expression, redistributing to the nucleus to help drive transcription contributing to IFN-I mediated immunity and T-cell recruitment/activation. Notably, the nuclear export of DDX3X via exportin-1 is critical for maximal gene induction, and thereby presumably results in a more effective innate and adaptive immune response to infection, and as demonstrated in our hPIV-3 infectious model. Our results indicate that DDX3X plays a hitherto unrecognized antiviral role in hPIV-3 replication that is contingent upon its export into the cytosol, and seemingly independent of its role in IFN-β induction. Consistent with this finding, IFN-α and type III IFN (IL29A, IL-28A and/or IL28B), as opposed to IFN-β, are reported to have anti-hPIV-3 action [45,46], and type III-IFN receptor deficiencies increase susceptibility to hPIV-3 infection [47]. Despite nuclear export-deficient DDX3X being permissive to hPIV-3 replication, LMB treatment, which blocks the nuclear export of all exportin-1 cargos, including DDX3X, suppressed hPIV-3 replication. This suggests that the nuclear export of unknown host and/or hPIV-3 viral proteins plays a pivotal role in hPIV-3 replication, and that compounds such as LMB specifically targeting exportin-1 in this context may be effective against hPIV-3, as reported for other viruses [23,36,48]. DDX3X subcellular localization is central to its function in antiviral immunity and hence paramount to the infectivity of microbes that exploit DDX3X as an essential host cofactor. For example, HIV-1 Rev requires nuclear DDX3X to export HIV-1 transcripts to the cytoplasm [17], whilst cytoplasmic DDX3X is required for RSV M2 translation [49]. This suggests that host-orientated agents that alter the nuclear import/export of DDX3X are likely effective antiviral agents. Indeed, inhibitors of exportin-1, such as those developed by Karyopharm ® Therapeutics [50], that inhibit nuclear export of all cargoes recognized by exportin-1 bearing a NES can be efficacious broad-spectrum antivirals (e.g., against RSV and influenza infection). Accordingly, LMB treatment inhibited hPIV-3 replication in the current study, and Hendra virus [23], RSV [35], and Venezuelan equine encephalitis virus [36] in previous studies, suggesting their replication requires exportin-1-dependent nuclear export. However, there are currently no cargo-specific nuclear export inhibitors, which are critically important in reducing the cytotoxic effects of global inhibition of exportin-1. 
We anticipate our work exploring the exportin-1 mediated nuclear export of DDX3X and understanding its functional relevance in directing antiviral immune signaling outcomes will support the pursuit of DDX3X-specific nuclear export inhibitors that will have implications for viruses of significance to human health such as HIV-1 and RSV.
Parental Reports of Infant and Child Eating Behaviors are not Affected by Their Beliefs About Their Twins’ Zygosity Parental perception of zygosity might bias heritability estimates derived from parent rated twin data. This is the first study to examine if similarities in parental reports of their young twins’ behavior were biased by beliefs about their zygosity. Data were from Gemini, a British birth cohort of 2402 twins born in 2007. Zygosity was assessed twice, using both DNA and a validated parent report questionnaire at 8 (SD = 2.1) and 29 months (SD = 3.3). 220/731 (8 months) and 119/453 (29 months) monozygotic (MZ) pairs were misclassified as dizygotic (DZ) by parents; whereas only 6/797 (8 months) and 2/445 (29 months) DZ pairs were misclassified as MZ. Intraclass correlations for parent reported eating behaviors (four measured at 8 months; five at 16 months) were of the same magnitude for correctly classified and misclassified MZ pairs, suggesting that parental zygosity perception does not influence reporting on eating behaviors of their young twins. Introduction Over the past century the Twin Method has been used to investigate genetic and environmental contributions to variation in complex human traits. Researchers have been using this methodology to examine a wide spectrum of aspects of human life accumulating in a total of 17,804 investigated traits, spanning disease, to behavior to opinion. Twin research is conducted worldwide and 14,558,903 twins are currently included in a multitude of studies (Polderman, et al. 2015). The classic Twin Method is based on comparing the correlations or concordance rates of traits between monozygotic (MZ) and dizygotic (DZ) twin pairs. MZs are genetic clones of one another, sharing 100 % of their genes, whereas DZs share on average only 50 % of their segregating genes. Importantly, both types of twins share their environments to a similar extent. For example, both types of twins are gestated together in the same uterus, and are raised together in one family. Any difference in resemblance between MZ and DZ pairs is therefore assumed to reflect genetic differences only. The univariate method can also be extended to understand if multiple traits share a common etiology, and to establish genetic and environmental contributions to trait stability and change over time (Rijsdijk and Sham 2002;van Dongen et al. 2012). One of the criticisms of parent reported measures of young twin behavior is that parents are biased by their belief about their twins' zygosity. For example, it is possible that parents score their twins more similarly if they believe them to be identical, or more differently if they believe them to be non-identical. If this is true, heritability estimates for these traits will be inflated because heritability is estimated by doubling the difference between the MZ and DZ correlations. This bias can be tested for directly by taking advantage of the fact that many parents are mistaken about their twins' zygosity-the so-called 'misclassified zygosity design'. Many parents of MZs mistakenly believe them to be DZs (van Dongen et al. 2012). This often results from parents being misinformed by health professionals based on prenatal scan observations or at the twins' birth if the MZ twins are dichorionic (Ooki et al. 2004). 
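As a worked illustration of the statement that heritability is estimated by doubling the difference between the MZ and DZ correlations, the sketch below applies the classical Falconer estimates to invented correlation values; it is not an analysis of the Gemini data.

def falconer_estimates(r_mz, r_dz):
    """Classical Falconer decomposition from MZ and DZ twin correlations.

    h2 = 2 * (rMZ - rDZ)   additive genetic (heritability) estimate
    c2 = 2 * rDZ - rMZ     shared-environment estimate
    e2 = 1 - rMZ           non-shared environment (plus error) estimate
    """
    return {"h2": 2.0 * (r_mz - r_dz), "c2": 2.0 * r_dz - r_mz, "e2": 1.0 - r_mz}

# Illustrative values only: rMZ = 0.80 and rDZ = 0.45 give h2 = 0.70, c2 = 0.10, e2 = 0.20.
print(falconer_estimates(0.80, 0.45))

This also makes the potential bias concrete: if parental beliefs inflated the MZ correlation relative to the DZ correlation, the heritability estimate would be inflated by twice that amount.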
Researchers can take advantage of parental misclassification of zygosity to examine if twin correlations differ for MZs who are correctly and incorrectly classified by parents (the same approach can be used to test for differences between correctly and incorrectly classified DZs, although this occurs much more rarely (van Jaarsveld et al. 2012)). If the correlations for correctly and incorrectly classified MZ pairs are of the same magnitude, it is unlikely that parents are biased in their reporting by their belief about their twins' zygosity. Most previous studies using the 'misclassified zygosity design' have relied on self-reported zygosity by the twins themselves in order to investigate if their perception of their zygosity shapes their environmental exposure, testing the so-called 'equal environments assumption'. Results from these studies have suggested that identical twins correlate highly on behavioral traits regardless of their believed zygosity status (Scarr and Carter-Saltzman 1979; Goodman and Stevenson 1989; Xian et al. 2000; Gunderson et al. 2006). This study uses a novel application of the 'misclassified zygosity design' to test for parental bias in reporting of a range of eating behaviors in infancy and early childhood. Sample Data came from Gemini, a population-based British birth cohort of 2402 families with twins born in 2007 in England or Wales (van Jaarsveld et al. 2010). Ethical approval was granted by the University College London Committee for the Ethics of non-National Health Service Human Research. Participants included 816 families with opposite-sex twin pairs (DZ by default), and 1586 with same-sex twin pairs. Parents of same-sex twin pairs completed a 20-item Zygosity Questionnaire at baseline (Q1), when the twins were on average 8 months old (SD = 2.1, range 4.1-16.7 months) (Price et al. 2000). In addition, 934 families (58.9 %) completed the same questionnaire on a second occasion (Q2) when the twins were on average 29 months old (SD = 3.3, range 22.9-47.6 months). A total of 1127 families had provided DNA samples for both twins, of which 81 pairs were randomly selected for zygosity testing. Parents also completed measures of infant and child eating behavior when the twins were on average 8 months (SD = 2.1, range 4.1-16.7 months) and 16 months (SD = 1.2, range 13.4-27.4 months) old respectively. Only data from same-sex twin pairs were used in the analyses in this study. Twin pairs with missing or inconclusive zygosity data were excluded. Zygosity questionnaire The items in the zygosity questionnaire relate to physical resemblance including: general similarity; similarity of specific features such as hair color and texture, eye color, ear lobe shape; timing of teeth coming through; and ease with which parents, friends and other family members can distinguish the twins. Other items ask about blood type, health professional's opinion, and the parents' own opinion on zygosity (Price et al. 2000). The zygosity questionnaire is scored by adding up the scores obtained for each question and dividing the total by the maximum possible score based upon the number of questions answered to create a value between 0 and 1. Lower scores indicate greater intrapair similarity with zero representing maximal similarity and one maximal dissimilarity. Scores <0.64 were classified as MZ, scores >0.70 were classified as DZ, and scores between 0.64 and 0.70 were coded as 'unclear' zygosity, as described by Price et al. (2000).
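A minimal sketch of the scoring rule just described (item scores summed and divided by the maximum possible score for the questions answered, then thresholded at 0.64 and 0.70) is given below; the function name and example item values are hypothetical, and this is not the Price et al. (2000) scoring code.

def classify_zygosity(item_scores, max_scores):
    """Score a physical-similarity zygosity questionnaire and classify the pair.

    item_scores: scores for the questions actually answered.
    max_scores:  maximum possible score for each of those questions.
    Returns (similarity_score, classification); lower scores indicate greater similarity.
    """
    if not item_scores or len(item_scores) != len(max_scores):
        raise ValueError("Need one maximum per answered item.")
    score = sum(item_scores) / sum(max_scores)   # value between 0 (identical) and 1 (dissimilar)
    if score < 0.64:
        label = "MZ"
    elif score > 0.70:
        label = "DZ"
    else:
        label = "unclear"
    return score, label

# Example: a pair rated as very similar on most answered items falls below the MZ cut-off.
print(classify_zygosity([0, 1, 0, 2, 1], [2, 2, 2, 3, 3]))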
DNA genotyping Hyper-variable minisatellite DNA probes are used to detect multiple tandem-repeat copies of 10-15 base pair sequences scattered throughout the human genome (Hill and Jeffreys 1985;Jeffreys et al. 1985). In MZ twins, the bands are identical, but they differ in DZ twins. 1127 families provided DNA using saliva samples for both twins. To validate the zygosity questionnaire, DNA was analyzed in a randomly selected sample of 81 twin pairs. In addition, some families elected to have their DNA used for zygosity testing (n = 118) and we tested a further 111 pairs who could not be classified using questionnaire data (or did not complete the second questionnaire) and who had provided DNA samples. Of these, 41 pairs recorded a mismatch between the two questionnaires; 59 pairs were classified as uncertain at one or both time points; and 24 pairs were missing the second zygosity questionnaire. A total of 310 pairs were therefore zygosity-tested using DNA. We also assessed the concordance between the 8-and 29-month zygosity questionnaire classification, with the DNA-classified zygosity for all of these pairs for whom DNA was available, to evaluate the relative accuracy of the 8 versus 29-month questionnaire. However, this sample largely included pairs who were not easily classified using the questionnaire. Parental beliefs about zygosity When the twins were approximately 8 months old (mean = 8.17, range 4.01-20.3) parents were asked to classify their twins as MZ or DZ, using the question: ''Do you think your twins are identical? ('yes' or 'no')''. Parental classifications were available for 1565 same-sex twin pairs. The same question was asked again when the twins were 29 months old (SD = 3.3, range 22.9-47.6 months) old, and 898 parents responded. To gain further insight into how beliefs about zygosity are formed, parents were also asked if they had ever received zygosity information regarding their twins from health professionals, using the question: ''Have you been told by a health professional that your twins are identical or non-identical?''. Baby eating behavior questionnaire The Baby Eating Behavior Questionnaire (BEBQ) (Llewellyn et al. 2011) was completed by parents when the twins were 8 months old (mean = 8.17, SD = 2.18) old. The BEBQ measures four distinct eating behaviors during the period of exclusive milk-feeding (the first 3 months after birth, before any solid food has been introduced) that have been associated with infant weight gain (van Jaarsveld et al. , 2014. Satiety Responsiveness (SR) measures an infant's 'fullness' sensitivity (e.g. how easily he or she gets full during a typical milk feed). Food Responsiveness (FR) assesses how demanding an infant is with regard to being fed, and his or her level of responsiveness to cues of milk and feeding (e.g. wanting to feed if he or she sees or smells milk). Enjoyment of Food (EF) captures an infant's perceived liking of milk and feeding in general (e.g. the extent of pleasure experienced while feeding). Slowness in Eating measures the speed with which an infant finishes a typical milk feed (e.g. his or her overall feeding pace). Parents used a 5-point Likert scale (1 = Never, 5 = Always) to report how frequently they observed their infant demonstrate a range of eating behaviors characteristic of each scale. Numbers of items per scale and example items are shown in Table 1. The BEBQ is an adaptation of the Child Eating Behavior Questionnaire (CEBQ), and has been validated in a different sample (Mallan et al. 2014). 
Mean scores for each subscale were only calculated if a minimum number of items were entered (2/3, 3/4 or 4/5). Child eating behavior questionnaire (Toddler) The Child Eating Behavior Questionnaire for toddlers (CEBQ-T) was completed by parents when their children were 16 months old (Mean = 15.8, SD = 1.2). In keeping with the BEBQ, parents used the same 5-point Likert scale (1 = Never, 5 = Always) to rate the twins for six distinct eating behaviors. The CEBQ-T measures the same four traits as the BEBQ (SR, FR, EF and SE), in relation to food rather than milk, as well as two other eating behaviors that have been associated with child weight. Food Fussiness (FF) measures a child's tendency to be highly selective about what foods he or she is willing to eat, as well as the tendency to refuse to try new food items. Emotional Overeating (EOE) captures a child's tendency to eat more in response to stress and negative emotions. The number of items per scale and example items are shown in Table 1. The CEBQ-T is a modified version of the validated CEBQ (Wardle et al. 2001), which has been validated against laboratory-based measures of eating behaviors (Carnell and Wardle 2007). The CEBQ has been widely used to establish relationships between eating behavior and pediatric weight status (Carnell and Wardle 2007; Viana et al. 2008; Webber et al. 2009; Mallan et al. 2013; Domoff et al. 2015). The CEBQ-T was modified to be appropriate for toddlers. The majority of the items between the CEBQ and the CEBQ-T are identical. However, the emotional undereating and desire to drink scales from the original CEBQ were removed, as mothers reported their children not to engage in these behaviors. Furthermore, the wording of some EOE items was modified. Words describing the child's mood were changed to make them more age appropriate ('worried', 'annoyed' and 'anxious' were replaced with 'irritable', 'grumpy' and 'upset'). One item of the SR scale was extended from 'my child always leaves food on his/her plate at the end of a meal' to 'my child always leaves food on his/her plate or in the jar at the end of a meal'. Finally, the item 'If given the chance, my child would always have food in his/her mouth' was omitted from the FR scale. Similar to the BEBQ, means for the CEBQ-T subscales were calculated if the majority of the items were answered (2/3, 3/4, 4/5 or 4/6). Researcher classification of zygosity Zygosity results from the two questionnaires were compared in 934 pairs who had data for both, to assess the test-retest correlation and percentage agreement. The questionnaire results were compared to DNA results in the random sub-sample of 81 pairs. Analyses were performed using SPSS 22 for Windows. Comparison of twin correlations for correctly and incorrectly classified pairs Concordance and discordance between parents' beliefs about their twins' zygosity and zygosity as derived from the questionnaire and DNA analyses at 8 and 29 months were used to establish four groups for comparison: (1) parents who correctly classified their MZs as MZs (MZC); (2) parents who incorrectly classified their MZs as DZs (MZI); (3) parents who correctly classified their same-sex DZs as DZs (DZC); and (4) parents who incorrectly classified their same-sex DZs as MZs (DZI). This allowed for direct comparison of twin correlations between parents who misclassified and correctly classified MZ and DZ pairs. Scores for each of the BEBQ and CEBQ scales were regressed on age, sex and gestational age of the twins.
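The subscale scoring rule described above, in which a mean is computed only when the majority of a scale's items were answered (e.g. 2/3, 3/4, 4/5 or 4/6), could be implemented along the following lines; the item names and helper function are illustrative assumptions rather than the Gemini analysis code.

import math

def subscale_mean(responses, scale_items, min_answered):
    """Mean of answered Likert items (1-5) for one subscale, or NaN if too few were answered.

    responses:    dict mapping item name -> response (None if missing).
    scale_items:  item names belonging to the subscale.
    min_answered: minimum number of answered items required (e.g. 3 for a 4-item scale).
    """
    answered = [responses[item] for item in scale_items if responses.get(item) is not None]
    if len(answered) < min_answered:
        return math.nan
    return sum(answered) / len(answered)

# Example: a 4-item scale with one missing item still yields a subscale mean.
responses = {"item1": 4, "item2": 5, "item3": None, "item4": 3}
print(subscale_mean(responses, ["item1", "item2", "item3", "item4"], min_answered=3))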
Intraclass correlations (ICCs) were calculated and compared for each of the four separate groups and for the two time points (8 and 29 months) when data on the parents' opinion regarding their twins' zygosity was collected. Parental classification of zygosity at 8 months was used to compare the ICCs for the BEBQ scales; parental classification of zygosity at 29 months was used to compare the ICCs for the CEBQ-T scales. ICCs were calculated using SPSS Version 22 for Windows. Results All opposite-sex twin pairs were classified as DZ. Zygosity questionnaire data was collected for same-sex twin pairs at 8 months (SD = 2.1; n = 1586) and 29 months (SD = 3.3; n = 934). 934 families (58.9 % of all same-sex pairs) provided questionnaire results at both time points. For the majority of pairs (n = 827, 88.5 %) zygosity assignment matched across the two questionnaires. The Spearman correlation coefficient between the zygosity questionnaire classification at 8 and 29 months (n = 934) was 0.80 (p < 0.001) and the Kappa statistic (a measure of agreement) was also 0.80 (p < 0.001), indicating a good test-retest reliability. A total of 1127 families had provided DNA samples for both twins; of these, 81 pairs were randomly selected for zygosity testing. 107/934 pairs (11.5 %), who had questionnaire data at both time points, could not be conclusively allocated using the questionnaire data: 41 pairs had a mismatch of classification between the two questionnaire time points (MZ then DZ; or DZ then MZ); 59 pairs fell into the uncertain range at either 8 or 29 months (i.e. uncertain at 8 months, then MZ or DZ at 29 months; or, MZ or DZ at 8 months, then uncertain at 29 months); 7 pairs fell into the uncertain range at both time points. Therefore, where available, DNA was used to classify the zygosity of these pairs. DNA was available for 87/107 pairs, and the genotyping process was successful for 86/87 pairs (34/41 mismatches; 46/59 pairs who were uncertain at either 8 or 29 months; 6/7 pairs who were uncertain at both time points). There were also 24 pairs for whom questionnaire data was only available at 8 months, but for whom DNA was also available; for these 24 pairs DNA was used for zygosity classification. Results from the questionnaire and the DNA testing were combined to provide the most accurate zygosity assignment for the Gemini sample. For 1239 pairs, questionnaire data only was used to allocate zygosity (n = 590 pairs with data at 8 months only; n = 636 pairs with data at both 8 and 29 months; n = 6 pairs with classification at 8 months but uncertain zygosity status at 29 months; n = 7 pairs with uncertain zygosity status at 8 months, but classified at 29 months). DNA was used to zygosity test (n = 310 pairs), including: a random sample of 81 pairs; 86 pairs for whom zygosity could not be classified conclusively using questionnaire data; 24 pairs who only had questionnaire data at 8 months; and 119 pairs whose parents requested a zygosity test. A total of 749 twin pairs (31.2 %) were classified as MZ and 1616 (67.3 %) twin pairs were classified as DZ (including 816 opposite-sex DZ twins), based on the questionnaire and DNA results. For a further 37 pairs (1.5 %) zygosity could not be established, as questionnaire results were unclear and no DNA was provided. A detailed list of the final zygosity classification in this sample can be found in Table 2.
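The intraclass correlations compared across the MZC, MZI and DZC groups were computed in SPSS; for readers who want the quantity itself, the sketch below computes a one-way random-effects ICC for twin-pair data, assuming that form of ICC, and uses invented scores rather than Gemini data.

import numpy as np

def icc_oneway(pairs):
    """One-way random-effects intraclass correlation, ICC(1,1), for twin-pair data.

    pairs: array of shape (n_pairs, 2), one row per twin pair (e.g. a residualized
    eating-behavior score for each co-twin).
    """
    pairs = np.asarray(pairs, dtype=float)
    n, k = pairs.shape                      # k = 2 twins per pair
    grand_mean = pairs.mean()
    pair_means = pairs.mean(axis=1)
    ms_between = k * np.sum((pair_means - grand_mean) ** 2) / (n - 1)   # between-pair mean square
    ms_within = np.sum((pairs - pair_means[:, None]) ** 2) / (n * (k - 1))  # within-pair mean square
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Invented example: highly similar co-twin scores give an ICC close to 1.
mz_like = np.array([[3.1, 3.0], [4.2, 4.4], [2.5, 2.6], [3.8, 3.7], [4.9, 5.0]])
print(round(icc_oneway(mz_like), 2))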
Validation of the zygosity questionnaire using DNA DNA from the random sample of 81 twin pairs was used to validate the zygosity questionnaire. DNA confirmed 43 pairs as MZ and 38 as DZ; which exactly matched the results of the questionnaires. Comparing the questionnaire results with all pairs for whom DNA was available showed high concordance between the two questionnaires with DNA. At 8 months, 279 pairs had both questionnaire classified zygosity and DNA; the 8 month questionnaire matched DNA results for 87.5 % of the sample. At 29 months, 248 pairs had both questionnaire classified zygosity and DNA; the 29 month questionnaire matched DNA results for 96.8 % of the sample. Misclassified zygosity At 8 months there were 1528 pairs of twins who had both researcher-classified zygosity (using the questionnaires and DNA) and parent-classified zygosity (i.e. parents had responded to the question ''do you think your twins are identical?''). There was high concordance between parental classification of zygosity and researcher measured zygosity (85.2 %). However 30.1 % (220/731) of parents of MZ twins mistakenly believed them to be DZ. Only six parents of same-sex DZ pairs mistakenly classified them as MZs (0.75 % of parents of same sex DZs, 6/797). At 29 months there were 898 pairs of twins who had both researcher-classified zygosity (using the questionnaires and DNA) and parent-classified zygosity (i.e. parents had responded to the question ''do you think your twins are identical?''). At 29 months 26.3 % of parents of MZs (119/453) misclassified them as DZs. Again the number of misclassified DZ twins was very low (2/445 same-sex DZ pairs). These analyses used only same-sex twin pairs; opposite-sex pairs (n = 816, 33.3 %) and pairs of unknown zygosity (n = 37, 1.5 %) were excluded. All percentages and numbers of twin pairs used in the analyses are shown in Table 3 for 8 and 29 months separately. Parental belief about zygosity was stable over time. Of the parents who responded at both 8 and 29 months, 94.9 % (852/898) held the same belief at both time points. Furthermore 1427 parents stated that they were informed by a health professional about their twins' zygosity, and the majority agreed with the health professional's opinion (n = 1375; 96.4 %). Only a few parents (n = 52, 3.6 %) disagreed with the opinion of the health professional. Comparison of intraclass correlations Intraclass correlations (ICCs) of eating behaviors measured by the BEBQ and CEBQ-T were calculated separately for the different zygosity groups, based on the parental belief at 8 months and 29 months, respectively. Baby eating behavior questionnaire Scores from the BEBQ were regressed on sex, gestational age and age of the children at questionnaire completion. Only six same-sex DZ pairs were misclassified as identical by the parents; because of the small sample size for these pairs the 95 % confidence intervals were wide and reliable ICCs could not be calculated. We therefore only report the results for three groups: MZC, MZI, and DZC. Overall there was no difference in magnitude between the size of the ICCs for correctly and misclassified identical twins for any of the four eating behaviors. For SR, EF and SE the 95 % confidence intervals overlapped, indicating that the ICCs were not significantly different for MZC and MZI. 
The 95 % confidence intervals did not overlap for the ICCs for FR; however, the difference in magnitude was very small (MZC, 0.89; MZI, 0.82) and the large sample size ensured that the 95 % confidence intervals were narrow, such that trivial differences were significant. Additionally, the ICCs for the DZC group were substantially smaller than those for the MZI group for all four eating behaviors, and none of the 95 % confidence intervals overlapped. Child eating behavior questionnaire (Toddler) A similar pattern of results was found for eating behaviors measured by the CEBQ-T at 16 months. For each of the five eating behaviors the magnitude of the ICCs for MZC and MZI was similar. For EF, SR, FR, FF and SE there was no significant difference between MZC and MZI, indicated by the overlapping 95 % confidence intervals. For EOE the 95 % confidence intervals did not overlap, but touched for the MZC and MZI groups. Again, the ICCs for the DZC group were substantially smaller than the MZI ICCs for each of the five eating behaviors, and none of the 95 % confidence intervals overlapped. All ICCs for the different zygosity groups and eating behaviors are presented in Table 4. Discussion We used the 'misclassification of zygosity' design in a novel approach to test for parental bias in reporting of similarities in infant and child eating behavior among twin pairs. We showed for the first time that parents who misclassified their MZs as DZs nevertheless scored them as similarly as the parents who correctly classified their MZs as MZs, on a range of eating behaviors. Intraclass correlations were compared for misclassified and correctly classified MZ pairs for a range of eating behaviors, measured by widely used parent-report questionnaires for infants (the BEBQ) and toddlers (the CEBQ-T). The results showed that the magnitude of the intraclass correlations was very similar across both correctly and misclassified identical twins. In addition, the intraclass correlations for the correctly classified DZs were markedly smaller than those of the incorrectly classified MZs, and none of the 95 % confidence intervals overlapped across the two groups. These results indicate that parents' perceptions of their twins' zygosity did not bias their scoring of their eating behaviors, insofar as they did not score their MZ twins less similarly if they mistakenly believed them to be DZ. The problem of parental rater bias is often raised in research with infants and children. These outcomes suggest that no parental bias was found in relation to zygosity status, and support the validity of the twin method for establishing the genetic and environmental influences on eating behaviors in infants and toddlers. Implications The twin method has been widely used to investigate the etiology of complex human behavior, and constant critical analysis of the assumptions underlying this method contributes to its ongoing success.
Previous studies used the misclassified zygosity methodology to test for violations of the equal environments assumption (EEA), confirming its overall validity (Felson 2014). This approach was also previously used to investigate the effect of self-reported zygosity on twin similarity of eating patterns in adulthood. Results showed that identical twins correlate more highly than DZ twins on healthy eating patterns, regardless of their self-reported zygosity (Gunderson et al. 2006), indicating that measures of eating behavior can also be used reliably in adult twin samples. In comparison to previous misclassified zygosity studies (Goodman and Stevenson 1989; Kendler et al. 1993, 1994; Xian et al. 2000; Cronk et al. 2002; Gunderson et al. 2006; Conley et al. 2013), this research is, to our knowledge, the first attempt to utilize the design in a sample of infant and toddler twins to test for biases in relation to parental belief about zygosity. As previously reported, parents can be misinformed about the zygosity of their children (Ooki et al. 2004). In this sample, of 749 MZ twins, 220 (29.4 %) were misclassified as DZ by parents when the twins were 8 months old. Previous research suggests that parental misclassification of MZs as DZs often stems from false information given by health professionals (van Jaarsveld et al. 2012). In this study, the majority (n = 1375, 96.4 %) of parents agreed with the health professional's opinion about their twins' zygosity. These results might be seen as an indicator that parents trust health professionals and base their own opinion on the judgement of a professional. However, many health professionals classify twin pairs as non-identical if a prenatal scan shows that they are dichorionic (each has their own placenta), regardless of the fact that approximately one third of MZ twin pairs develop with separate placentas (Hall 2003). Knowledge gaps among obstetricians and gynecologists regarding twin prenatal development have been suggested as the cause of the misinformation (Cleary-Goldman et al. 2005). Using reliable measures of zygosity determination in same-sex twins is crucial for twin research. Additionally, zygosity classifications are important for medical reasons, such as prenatal diagnosis of genetic disease or disorders and transplant compatibility, as well as the identity and social development of the children (Stewart 2000; Hall 2003). Limitations In the current sample only a small number of same-sex DZ pairs were misclassified as MZ (n = 6 at 8 months; n = 2 at 29 months). Intraclass correlations for this group were therefore often not significant and had wide 95 % confidence intervals, making them difficult to interpret; they were therefore not included in the present analysis. A previous study of parental zygosity classification in 1244 Japanese families with twins born between 1960 and 2002 found a slightly higher (but still small) number of misclassified DZ twins (31/323 DZ pairs were misclassified as MZ). However, this study found higher rates of misclassification overall (Ooki et al. 2004). Future studies using the misclassified zygosity design would benefit from increased sample sizes to include more misclassified DZs. Larger samples would enable researchers to make comparisons between correctly classified and misclassified DZ twins, to provide more evidence in support of the validity of parental reports for the twin method. For the majority of the sample, zygosity was ascertained using a zygosity questionnaire sent to parents when the twins were 8 and 29 months old.
When comparing questionnaire results collected at 8 months with all available DNA collected, zygosity ascertainment matched for 87.5 % of the sample. For data collected at 29 months the accuracy of the questionnaire was higher at 96.8 %; indicating that the questionnaire may be slightly more accurate for toddlers than infants. As children might become more distinct as they grow up, it seems reasonable that parent rated zygosity is slightly more accurate when the twins are older. Regarding these rates of accuracy overall, it is also important to acknowledge that DNA was only used to zygosity-test a subset of the sample that included twin pairs who were difficult to classify (pairs for whom there was a mismatch between the zygosity questionnaire results, and pairs whose parents requested a DNA test, implying that they were uncertain about their twins' zygosity), as well as a random sample of 81 pairs. For the random sample only there was a 100 % match between the questionnaire and DNA zygosity classification. However, although we feel confident that zygosity can be accurately classified using a parental questionnaire for most twin pairs, DNA genotyping remains the gold standard for zygosity ascertainment and should ideally be available for more twin pairs. Nevertheless, zygosity testing using DNA remains costly and the use of questionnaire is more feasible for larger cohorts like Gemini. This study only assessed parental bias in relation to eating behavior in infancy and toddlerhood. Additional studies using a similar design could investigate the parental bias on other parent rated child behaviors, such as physical activity and personality. It would also be useful to understand if parental bias starts to emerge as children mature and naturally become more different from another. Future studies using the misclassified zygosity design assessing parental bias in school-aged children would be useful. Conclusion A potential flaw in the twin method is parental bias in reports of similarities in twin behavior, related to perceived zygosity. The outcomes of this study suggest that there was no parental bias related to zygosity in the Gemini twin cohort when parents reported on a range of infant and child eating behaviors.
Permeability and Stability of Hydrophobic Tubular Ceramic Membrane Contactor for CO2 Desorption from MEA Solution Ceramic membrane contactors hold great promise for CO2 desorption due to their high mass transfer area as well as the favorable characteristics of ceramic materials to resist harsh operating conditions. In this work, a hydrophobic tubular asymmetric alpha-alumina (α-Al2O3) membrane was prepared by grafting a hexadecyltrimethoxysilane ethanol solution. The hydrophobicity and permeability of the membrane were evaluated in terms of water contact angle and nitrogen (N2) flux. The hydrophobic membrane had a water contact angle of ~132° and N2 flux of 0.967 × 10−5 mol/(m2∙s∙Pa). CO2 desorption from the aqueous monoethanolamine (MEA) solution was conducted through the hydrophobic tubular ceramic membrane contactor. The effects of operating conditions, such as CO2 loading, liquid flow rate, liquid temperature and permeate side pressure, on CO2 desorption flux were investigated. Moreover, the stability of the membrane was evaluated after the immersion of the ceramic membrane in an MEA solution at 373 K for 30 days. It was found that the hydrophobic α-Al2O3 membrane had good stability for CO2 desorption from the MEA solution, resulting in a <10% reduction of N2 flux compared to the membrane without MEA immersion. Introduction Carbon dioxide (CO 2 ) capture plays a key role in reducing CO 2 emissions. Among current technologies for CO 2 capture, amine scrubbing is considered to be the most wellestablished one, dominating industrial application in the short-to-medium terms [1]. However, the most pressing issue in this technology is the regeneration of solvent, which represents approximately two-thirds of operating cost [2]. Thus, any improvement in reducing energy usage, such as employing an advanced stripping configuration, will contribute to lowering capture costs [3]. Current challenges associated with the conventional CO 2 desorption (or solvent regeneration) process at least include two most significant ones: (I) the liberation of free CO 2 molecules from their compound form and (II) the recovery of useful heat from evaporated water vapor. Specifically, a process of CO 2 desorption from amine solutions undergoes the decomposition of unstable carbamate and/or bicarbonate species into CO 2 and amine molecules and then the release of CO 2 molecules from the liquid phase to the gas phase. Accompanied by the CO 2 desorption process, a large amount of water in a reboiler needs to be evaporated to act as a stripping vapor due to low equilibrium CO 2 partial pressure. Typically, a reboiler is operated at a high temperature allowed by solvent stability or by the available steam supply. Elevated temperature does increase CO 2 desorption flux. However, it requires high heat duty. Even though part of the vapor from the reboiler is cooled down to condense water in the stripper, the overhead vapor contains 1-5 mol of water vapor per mol of CO 2 , depending on reboiler temperature and solvent employed [4]. This situation will cause a massive loss of latent heat in the overhead condenser. If an advanced separator is developed to greatly increase the mass transfer area for CO 2 , reboiler temperature (or mass transfer driving force) could be significantly reduced. Membrane contactors are potential candidates applied for CO 2 desorption given their advantages of high specific surface area and high operational flexibility as well as easy modularization [5,6]. 
To date, far fewer studies on membrane contactors have been conducted for membrane CO2 desorption compared with membrane CO2 absorption. Overall, one of the key obstacles causing this situation is that CO2 desorption is usually carried out at elevated temperatures, e.g., at 100-120 °C for elevated-pressure desorption or at 70-100 °C for vacuum desorption [7,8]. High temperature and chemical conditions require membrane materials to exhibit excellent characteristics. In the past decades, some polymeric membranes, most notably polyvinylidene fluoride (PVDF) [5], polytetrafluoroethylene (PTFE) [9] and polypropylene (PP) [10], have been used for CO2 desorption. Although these polymeric materials exhibit the advantage of a high specific surface area for mass transfer, in general they underperform in resistance to chemical degradation and thermal aging and in mechanical strength [6]. These drawbacks of polymeric membranes make them easily susceptible to undesired variations in membrane structure and properties, such as in morphology, microstructure, hydrophobicity, etc., and even to liquid leakage after long-term exposure to the elevated-temperature chemical solution. Thus, the employment of other promising membrane materials that can withstand long-term harsh conditions is essential. Tubular ceramic membranes have higher mechanical strength and chemical and thermal stabilities than polymeric membranes as well as hollow ceramic membranes under harsh operating conditions [11]. They have been applied under various harsh conditions, such as membrane reaction [12,13], membrane distillation [14,15], membrane desorption [16], water heat recovery [17] and other applications [18,19]. They are probably more suitable than polymeric membranes for membrane CO2 desorption. However, the permeability and stability of tubular ceramic membranes used for CO2 desorption from amine solutions can rarely be found in the open literature. Generally, the materials used for membrane desorption are hydrophobic. The hydrophobic surface enables the creation of a high liquid entry pressure (LEP) to avoid the entrance of the feed solution into the pores. Consequently, only CO2 and water vapor are able to pass through the hydrophobic pores. Pores filled with gas and vapor usually have higher mass transfer performance for CO2 compared with those filled with liquid, since membrane desorption processes are driven by temperature and pressure differences. Moreover, hydrophobic pores without wetting will improve thermal and chemical resistance for long-term performance [6]. Original ceramic materials are hydrophilic because of the presence of abundant hydroxyl groups (-OH) on their surface and pores [20]. Recently, extensive studies have confirmed that ceramic membranes can be endowed with stable hydrophobicity by grafting hydrophobic groups, such as organosilanes, onto the membrane interface [21]. Advances in hydrophobic modification increase the opportunities for the industrial application of ceramic membranes for CO2 desorption. In this work, a hexadecyltrimethoxysilane (C16) ethanol solution was used for hydrophobic modification. The reasons for this are as follows. First, C16 is cheap, easy to store and less toxic compared to some commonly used modifiers, such as fluoroalkylsilanes (FAS). In addition, ethanol is a harmless and non-toxic solvent that can be considered an environmentally friendly alternative to harmful traditional grafting solvents, such as acetone, during the grafting process.
Furthermore, the C16 ethanol solution had been used for the fabrication of hydrophobic zirconia (ZrO 2 ) and alumina (Al 2 O 3 ) membranes. The grafted ceramic membranes possessed high hydrophobicity and performed well in the processes of membrane absorption for gas separation [21], water-oil separation [22] and membrane distillation for desalination [14]. In this work, a hydrophobic tubular asymmetric alpha-alumina (α-Al 2 O 3 ) ceramic membrane contactor for CO 2 desorption from an aqueous monoethanolamine (MEA) solution was investigated in terms of mass transfer performance and stability. The mass transfer performance of the hydrophobic asymmetric ceramic membrane was experimentally evaluated in terms of the N 2 flux and, more importantly, CO 2 desorption flux under various conditions, including temperature, pressure and liquid flow rate. In addition, the stability of the original and hydrophobic membranes was evaluated in terms of the N 2 flux, water contact angle and morphology before and after the immersion of the ceramic membrane in aqueous MEA solution at 373 K for 30 days. Materials The ceramic membrane, which was fabricated by coating an α-Al 2 O 3 membrane layer on the internal surface of tubular α-Al 2 O 3 support, was supplied by Membrane Industrial Park, (Jiangsu, China). Reagent grade MEA with a purity of ≥99% was purchased from Shanghai Ling Feng Chemical Reagent Co., Ltd. (Shanghai, China). Commercial grade N 2 and CO 2 were supplied by Nanjing Ning Wei Medical Oxygen, Co., Ltd., Nanjing, China. Reagent grade hexadecyltrimethoxysilane (C16) with a purity of ≥85% (GC) was purchased from Shanghai Aladdin Chemical Reagent Co. Ltd., (Shanghai, China). Preparation and Characterization of the Hydrophobic Membrane The surface modifier was prepared by mixing the concentrated C16 with ethanol and a certain amount of nitric acid (about 3 mL 0.1 mol/L HNO 3 per 1 L solution) to 0.1 mol/L C16 at room temperature for 24 h. The raw tubular membranes were dried and immersed into the modifier solution at 30 • C for 12 h. In the modification process, the -OCH 3 group in silane molecule undergoes hydrolysis reaction to form silanol (R-Si-(OH) 3 ) to possess hydrophobicity ( Figure 1). The modified membranes were taken out and rinsed with deionized water and then dried at 110 • C for 6 h for curing the silane-modified silica to improve the stability of the hydrophobic membrane. Roughly, 1 L modified solution can be used for 5 membrane tubes. The membranes were stored at room temperature. The properties of the tubular asymmetric α-Al 2 O 3 membrane to be characterized include water contact angle, gas permeation and morphology. The water contact angle of the ceramic membranes was measured by a contact angle analyzer (Dataphysics-OCA20, DataPhysics Instruments GmbH Co., Ltd., Filderstadt, Germany). The porosity of the membrane was characterized by an ellipsometry device (Complete EASEM-2000U, J.A. Woolam, Lincoln, NE, USA). The tests of gas permeation were carried out to investigate the effect of hydrophobic modification on membrane microstructure. Pure N 2 was used to investigate the gas permeation. The test module containing a ceramic membrane with 11 cm length was prepared to determine the N 2 permeance of the membrane. The upstream pressure was increased at 0.05 MPa intervals up to 0.4 MPa. The N 2 was fed into the lumen side of the module, and the permeation rates were measured at 25 • C in the shell side using a rotor flow meter. 
The morphology of the membrane was assessed using field emission scanning electron microscopy (FESEM, S-4800, Hitachi High-Tech, Tokyo, Japan). The N2 permeance flux can be calculated as follows: where JN2 is the N2 permeance flux, mol·m−2·s−1·Pa−1; G is the volume flow rate of N2 from the permeation side, L/s; Vm is the gas molar volume, 22.4 L/mol; A is the area of the membrane layer, m2; ∆P is the transmembrane pressure difference, Pa; T is the temperature, K.
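The equation itself did not survive conversion to plain text, so the short sketch below only illustrates how the listed quantities combine into a permeance under a simple reading of the definitions: the volumetric N2 flow G is converted to a molar flow with the molar volume Vm and divided by the membrane area and the transmembrane pressure difference. Any explicit temperature correction implied by the listed T is omitted here, and the numerical values are hypothetical.

```python
def n2_permeance(G_L_per_s, area_m2, dP_Pa, Vm_L_per_mol=22.4):
    """N2 permeance J_N2 in mol/(m^2*s*Pa).

    Assumes the volumetric flow G (L/s, taken at standard conditions)
    is converted to a molar flow with the molar volume Vm and divided
    by the membrane area and the transmembrane pressure difference.
    """
    return (G_L_per_s / Vm_L_per_mol) / (area_m2 * dP_Pa)

# Hypothetical reading: 0.112 L/s of N2 through 0.005 m^2 of membrane
# at a 0.1 MPa transmembrane pressure difference.
print(n2_permeance(0.112, 0.005, 1.0e5))  # ~1.0e-5 mol/(m^2*s*Pa), the order reported in the text
```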
Sample Analysis The solutions were prepared by mixing concentrated MEA with deionized water to the desired concentrations. The MEA concentration was verified by titration against 1.0 mol/L hydrochloric acid (HCl) using methyl orange as an indicator. The liquid phase CO2 loading was determined in a Chittick apparatus by the standard method presented by the Association of Official Analytical Chemists (AOAC) [23]. The CO2 concentration in the gas phase was determined by a CO2 analyzer (COZIR TM Wide Range, CO2 Meter, Ormond Beach, FL, USA). A gas flow totalizer (D07-19B, Beijing Sevenstar Electronics Co., Ltd., Beijing, China) was used to measure the accumulated flow rate of the stripping CO2. Experimental Apparatus and Procedure for Membrane CO2 Desorption The schematic diagram of the experimental setup for CO2 desorption is shown in Figure 2. A hydrophobic tubular asymmetric α-Al2O3 membrane was encapsulated in a 304 stainless-steel module to form the membrane contactor. The characteristics of the membrane contactor are presented in Table 1. The CO2-rich aqueous MEA solution in a heating tank was continuously pumped into the lumen side of the membrane contactor and then recycled back to the tank. The liquid flow rate was controlled by a rotameter (accuracy: ±2%). The temperatures and pressures of the solvent at the inlet and outlet of the membrane contactor were monitored using PT100-type thermal sensors (0-200 °C) and SIN-P300 pressure transmitters (0-0.6 MPa), respectively. In addition, the reduced pressure on the permeate side of the membrane contactor was generated by a vacuum pump and determined by a pressure transmitter (−0.1 to 0 MPa). The vaporized H2O was extracted from the membrane contactor and then condensed. The condensate was determined by a precise graduated cylinder. The stripped CO2 was measured online by a gas flow totalizer (D07-19B, Beijing Sevenstar Electronics Co. Ltd., Beijing, China), which enables converting it into the standard state by automatic temperature calibration and connecting to a computer to collect instantaneous and cumulative flow rates once per second. The CO2 permeance flux can be calculated as follows: where JCO2 is the CO2 permeance flux, mol·m−2·s−1; F is the flow rate measured by the mass flowmeter, L/s.
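As with the N2 permeance, the CO2 flux equation itself was lost in extraction; a minimal sketch under the same assumptions (the standard-state volumetric flow F converted to a molar flow with Vm and normalized by the membrane area) is given below. The membrane area and flow reading are hypothetical.

```python
def co2_desorption_flux(F_L_per_s, area_m2, Vm_L_per_mol=22.4):
    """CO2 desorption flux J_CO2 in mol/(m^2*s).

    Assumes F is the standard-state volumetric CO2 flow reported by the
    gas flow totalizer, converted to a molar flow with Vm and normalized
    by the membrane area.
    """
    return (F_L_per_s / Vm_L_per_mol) / area_m2

# Hypothetical reading: 1.3e-4 L/s of CO2 through 0.005 m^2 of membrane.
print(co2_desorption_flux(1.3e-4, 0.005))  # ~1.2e-3 mol/(m^2*s), the order reported in the conclusions
```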
Stability Study of the α-Al2O3 Membrane The thermal and chemical stability of the membranes was studied as follows: the asymmetric α-Al2O3 membranes were immersed in a 5.0 mol/L MEA solution at 373 K for 30 days, as shown in Figure 3. After the 30 days of immersion, the membranes were taken out and washed with distilled water, then dried at room temperature. Then, the membrane samples were studied via FESEM analysis and gas permeation. Characterization Results of the Hydrophobic Ceramic Membrane To characterize the hydrophobic membrane, Fourier transform infrared (FTIR) spectroscopy was first conducted to examine the bond variation. It can be observed in Figure 4 that the asymmetric and symmetric stretching vibration peaks of -CH2- appeared at 2921 cm−1 and 2853 cm−1 on the modified spectrum, indicating that the silane molecules have been successfully grafted to the surface of the ceramic membrane. Subsequently, the cross-sectional morphology and surface roughness of the original membrane and the modified membrane were determined via FESEM and AFM, as presented in Figures 5 and 6, respectively. No obvious change can be observed from the FESEM and AFM images of the membranes before and after hydrophobic modification, indicating that the effect of the hydrophobic modification using C16 on the membrane microstructure was insignificant. The hydrophobicity of the grafted ceramic membrane was tested in terms of water contact angle. The water contact angle of the original membrane decreased sharply from 40° to 0° in a few seconds due to the presence of hydroxyl groups (−OH) on the membrane surface, as shown in Figure 7.
By contrast, the contact angle of the grafted membrane remained stably greater than 130°, indicating the modifier had been satisfactorily grafted and the surface of the ceramic membrane was hydrophobic. The N2 permeances of the original and grafted ceramic membranes were measured at transmembrane pressures of N2 ranging from 0.05 to 0.40 MPa, with the highest permeation fluxes of 1.01 × 10−5 and 0.967 × 10−5 mol/(m2·s·Pa), respectively, as shown in Figure 8. This indicates that the grafted ceramic membranes exhibited high hydrophobicity without causing much reduction of gas permeation. This was likely because the grafted C16 layer on the inner surface of the pore channels was very thin. The thickness of the grafted layer was only a few nanometers (<3 nm) [24,25], which was much smaller than the pore sizes (0.1 μm for the top layer and 1.0 μm for the support layer). Therefore, the hydrophobic modification had an insignificant effect on the gas permeation. Effects of Key Operating Conditions Operating conditions are important to CO2 desorption performance. To investigate the effects of several key operating parameters on the membrane CO2 desorption performance, experiments were conducted at an MEA concentration of 5.0 mol/L, liquid temperatures ranging from 363.15 to 373.15 K, CO2 loadings ranging from 0.2 to 0.45 mol CO2/mol MEA, liquid flows ranging from 200 to 400 mL/min and permeate pressures ranging from 50 to 80 kPa. The effect of CO2 loading on the CO2 stripping flux can be seen in Figure 9. With the decrease of CO2 loading, the CO2 stripping flux decreased significantly. This is because the decrease in CO2 loading would lower the CO2 equilibrium partial pressure, reflecting the smaller driving force for CO2 mass transfer. Meanwhile, an increase in liquid flow was of great benefit to improving the CO2 stripping flux.
This is because, as the liquid velocity increased, the liquid temperature and CO2 loading were little changed and maintained at high levels, thus keeping a high mass transfer performance. In addition, an increase in the liquid flow resulted in reduced liquid phase mass transfer resistance, which had a great effect on the overall mass transfer resistance. It should be noted that a high liquid flow means a fast circulation rate for the liquid solution circulating between the membrane contactor and the reboiler, which will consume more pump energy. Therefore, it is important to choose an optimized liquid flow. An increase in liquid temperature was of great benefit to increasing the CO2 stripping flux, as shown in Figure 10. This is because temperature directly affects CO2 equilibrium solubility and diffusion coefficients. The CO2 solubility in the MEA solution decreases exponentially with temperature [26], and the diffusivity increases approximately fourfold when the temperature is increased by 10 K [27,28]. Thus, an increase in operating temperature leads to increases in both the driving force and the mass transfer coefficient for CO2 stripping. Moreover, an increase in the feed flow rate enabled reducing the temperature difference between the liquid bulk and the liquid-membrane interface, resulting in an increased transmembrane pressure difference. The 363 K curve tended to a maximum as the feed flow rate further increased. This might be because the mass transfer resistance on the feed side became negligible, and the transport in the membrane pores governed the overall mass transfer process. Regeneration pressure also impacts the CO2 desorption flux. Lowering the permeate side pressure enhanced the CO2 desorption flux, as shown in Figure 11. This can be explained by the fact that a decrease in the permeate side pressure is favorable for decreasing the CO2 partial pressure in the gas phase, thus improving the CO2 stripping driving force. However, too low a permeate side pressure will lead to considerably high energy consumption by the vacuum pump.
A moderate degree of vacuum is favorable for improving the CO2 membrane stripping performance, facilitating CO2 transport on the permeate side while not costing too much energy. Compared to other studies reported on membrane contactors for CO2 desorption using MEA solution, the hydrophobic tubular asymmetric α-Al2O3 membrane exhibited competitive mass transfer performance, as shown in Table 2.
The Stability of the Modified Ceramic Membrane In industrial applications, membrane stability is an important issue in the membrane process for CO2 desorption from amine solutions. It determines how long a membrane can be operated. Therefore, not only the permeate flux but also the thermal and chemical stability is critical for a membrane to be employed in CO2 desorption. Here, the hydrophobically modified α-Al2O3 membrane after 30 days' immersion in MEA solution was characterized and compared with that without immersion. In this work, the contact angle, gas permeance and morphology of the immersed α-Al2O3 membrane were evaluated in order to investigate its thermal and chemical stability. As shown in Figure 12, the water contact angle of the immersed membrane was very close to that of the membrane without immersion. This means that the immersed membrane maintained good hydrophobicity. In addition, the gas permeation of the unimmersed and immersed membranes was compared at transmembrane pressures of N2 ranging from 0.05 to 0.40 MPa, as shown in Figure 13. They had permeation fluxes of 0.932 × 10−5 and 0.847 × 10−5 mol/(m2·s·Pa), respectively. These results indicate that the MEA solution has only a small effect on the stability of the modified membrane, and the effect was acceptable. The microstructure of the hydrophobic membranes before and after immersion is presented in Figure 14 to observe the effect of the MEA solution on the stability of the modified membrane. As shown in Figure 14a,c, no obvious variation can be found between the surface morphology of the hydrophobic ceramic membrane before and after the immersion in MEA solution. From the cross-sectional FESEM images, it can be found that the near-surface of the ceramic membrane was partially corroded after immersion in MEA solution at 100 °C for 30 days, which explains why the N2 flux of the immersed membrane showed a small decrease.
In the membrane desorption process, a hydrophobic membrane can prevent the permeation of reactive MEA into the pores; thus, it is an effective way to reduce corrosion. Conclusions A hydrophobic ceramic membrane was fabricated via grafting a hexadecyltrimethoxysilane ethanol solution and tested in terms of water contact angle, pure N2 permeability and CO2 desorption performance. The results showed that the modification strategy enables the grafted ceramic membranes to exhibit a water contact angle higher than 130° without causing much reduction of gas permeability (less than 5%) compared to the original membrane without modification. CO2 desorption from MEA solution was conducted through the tubular asymmetric membrane. The results demonstrated that the CO2 loading, liquid flow rate, liquid temperature and permeate pressure were the key parameters affecting the CO2 desorption flux. The CO2 flux was found to be 1.17 × 10−3 mol·m−2·s−1 at a feed temperature of 373 K, a permeate side pressure of 60 kPa, an MEA concentration of 5.0 mol/L, a CO2 loading of 0.41 and a feed flow rate of 400 mL/min. Moreover, stability tests immersing the membrane in a 5.0 mol/L aqueous MEA solution at 373 K for 30 days were also performed to investigate the stability of the hydrophobic α-Al2O3 membrane. The experimental results showed that the MEA solution did affect the membrane stability; however, the effect was acceptable (a reduction in N2 flux of less than 10%).
Urban Segregation in a Nordic Small Town in the Late-Seventeenth Century: Residential Patterns in Sortavala at the Eastern Borderland of the Swedish Realm The general view of urban segregation in pre-modern towns has been that the wealthy lived near the administrative and economic center(s), while the poor were pushed to the limits of the town. This approach has been questioned by studies proving that urban spaces were socially mixed. This dilemma has been studied here by examining in detail the urban segregation in one small town, Sortavala, at the eastern borderland of the Swedish realm. The analysis shows that the town space was bipolarly segregated. The “gentry,” officeholders and the like, lived near the market square and town hall; the wealthy burghers along the main street. However, even the poorest taxpayers lived among the wealthy and those of high social rank. The segregation was relative: the proportion of the wealthy grew in the grid plan in the town center; the settlements growing “freely” outside the original grid plan were for the poor only. In his influential work analyzing and generalizing patterns in preindustrial cities (1960) from all over the world, Gideon Sjöberg postulates that the dwellings of the rich and powerful were concentrated in town centers, while the poorest and powerless were pushed to the limits near the town walls. In status-oriented pre-modern cities, the administrative center, often surrounding the market square, was, Sjöberg maintains, the place for the dwellings of those of high social rank. In capitalist modern cities, the needs of trade guided the location of the wealthy burghers. He also remarks that along with social status and wealth, ethnicity and religion (often combined) formed remarkable patterns for the socio-topographical formation of urban spaces. 3 This has been the dogma repeated in many works on segregation, but it has also been disputed by many scholars. More recent research on urban segregation has offered evidence that the social space of early modern towns was mixed rather than strictly socially segregated. However, although all socioeconomic groupings may have lived in the same part of town, there has been a tendency to suggest that the wealthier groups had more weight in the center, while the poorer were overrepresented near the limits. 4 This phenomenon can be called "relative" segregation. 5 In short, this is the view given by research on urban segregation in the early modern towns of Western Europe and Great Britain. How was it in the early modern North, where the towns were small and often very young? Were urban spaces socially segregated; were town centers socially mixed; and can we find any residential patterns to make generalizations about urban segregation? The results of studies undertaken in the Nordic countries vary. According to a Finnish group of researchers, 6 the academic dissertation of E. Brunnius from 1731 states that in seventeenthcentury Tornio, 7 the most able traders lived on the first street (i.e., the street nearest the strand and harbor), the "common people" on the second, and "the poorest" on the third. They therefore suggest that the town space of Tornio was relatively socially segregated at the beginning of the eighteenth century. However, they are unable to verify Brunnius's claim about spatial segregation with any primary sources. 
In studying the social construction of the sixteenth-and early seventeenth-century town of Nya Lodöse in Western Sweden, the Swedes Rosén and Larsson combine the results of archeological excavations and archival sources. According to them, wealthier people lived in some parts of the town more than in others, while many artisans' dwellings were found in some areas. Yet the overall picture was that the settlement was socially and economically mixed. No parts of the town were specially designated to any special groups. 8 A GIS analysis of early eighteenth-century Copenhagen by a Dane, Mads Linnet Perner, suggests that the urban space was horizontally segregated to the streets of the wealthy and to the alleys of the poor, although they could live very close to each other. 9 However, Rosén and Larsen posit that segregation in small urban centers in Sweden may have changed during the first half of the seventeenth century. New towns were founded, and old ones were moved to new places. The renaissance grid plan inspired by antiquity became the ideal townscape in Sweden at the beginning of the seventeenth century. The change from freely growing medieval urban spaces to regulated town plans took place in many Swedish towns especially in the 1640s and 1650s. 10 These processes of founding a new town, moving old towns to new places to regulate medieval town centers to grid plans, may have broken the old social spatial structures of the urban spaces developed over decades and centuries. Were the old residential patterns transferred to the new grid plans? Did the gentry and burghers settle in new towns in the conventional order: the powerful in the center, the powerless at the limits? Yet these changes in urban spaces may have paved the way for an intentional town planning, in which the social and economic needs of social groupings were taken into account. In the 1640s, when Kalmar was moved to a new place, social inequalities were inscribed in the town plan from the outset. When Jönköping was moved in the 1620s, separate areas by the seaside were allocated to German workers. On the contrary, in the 1620s, when Halmstad was rebuilt after its destruction, the tax records from the following decades show a mixed socioeconomic structure, with merchants and artisans of differing wealth living side by side. A general observation in these Swedish studies to which Rosén and Larsson refer in their article is that the wealthier merchants had plots closer to the market square and church, but no separate areas were reserved for special socioeconomic groups in early modern towns. 11 A problem with studying the socioeconomic spatial organization and verifying the patterns of spatial urban segregation of the settlement in early modern towns in the North has been the lack of cadastral sources and town plans in which the data about the socioeconomic, ethnic, or religious status of the inhabitants can be compared with and located in blocks and plots. Because some data on the social status and wealth of townspeople comparable with a contemporary geographical town plan and a list of plot-owners are available from the small town of Sortavala at the eastern borderland of the Swedish realm, we attempt to respond to this need and identify how townspeople spatially located themselves on a grid plan at the end of the seventeenth century. Sortavala-A Borderland Town and Its Town Space In the treaty of 1617 between Sweden and Russia, the provinces of Ingria and Kexholm were annexed to the Swedish realm. 
During the first half of the seventeenth century, new towns were founded in the new eastern provinces to promote trade in these remote areas. One of them was Sortavala, founded in 1643. Although the location of Sortavala by the northern bank of Lake Ladoga was favorable for shipping and commercial activities, the town never grew to be a remarkable trading center. However, it had local importance in collecting peasant products, mostly tar, for shipping via Lake Ladoga to the town of Nyen. 12 Nyen and Stockholm were the most important trading partners for the burghers of Sortavala. When the Russo-Swedish War (1656-1658), a sideshow of the Northern War (1655-1660), raged in Karelia, Sortavala was badly damaged. Almost all the burghers, most of them Orthodox Karelians, fled to Russia. After the war, in the 1660s and 1670s, the town was newly inhabited by Finns from the Savo region and some Swedes who had ended up in Finland. Only a few Karelian families continued as burghers or returned to the town. We can therefore study the spatial formation of the residential patterns on an almost uninhabited grid plan (Map 1). In 1710, the town was badly damaged again in the Great Northern War (1700-1721). The treaty of 1721 between Sweden and Russia left the town on the Russian side of the new borderline. Sortavala shrank to a village-like trading center until it regained town privileges in 1783. In 1809, Finland was annexed to the Russian Empire as a Grand Duchy. According to an order of Emperor Alexander I, the territories annexed to Russia in the treaties of 1721 and 1743 (Sortavala included) were ceded to Finland. In the nineteenth century and at the beginning of the twentieth, Sortavala was an important school and merchant town for eastern Finland. However, as a result of the Second World War, Finland lost Sortavala with the part of Karelia annexed to the Soviet Union in 1944. Today, it is a small border town with fewer than 20,000 inhabitants in Russia's western borderland. The appearance of the seventeenth-century town plan of Sortavala is known from a geographical map drawn by Erik Beling in 1697 (Map 2). Beling worked as a land surveyor in the Baltic provinces between 1688 and 1700. 13 Beling's map gives reason to believe that over the years, although the grid plan remained as it was planned in 1643, the settlement was enlarged quite freely in some corners of the urban space. 14 The town consisted of a rather small area: the north-south length of the space inside the town's customs fence is about 280 meters, and the width about 300 meters. The size of the town space inside the customs fence was between approximately five and six hectares. 15 The population of the early modern town of Sortavala is usually estimated to have been about 600. This estimate is based on the number of plots (102) and the guess that the average number of people living in each house (plot) was six. 16 However, according to the estimates of Sven Lilja, based on a large research project on Swedish early modern towns, the average family size varied greatly, but the appropriate figure is between 3.5 and 4.5. 17 The tax roll of Sortavala from 1685 lists 103 burghers, but also seventeen names of landless (bobuler) living in the town and earning their living through different kinds of work. The gentry, that is, civil servants, schoolmasters, and the clergy, are missing from the tax rolls. The list of plot-owners (Notorium explicatio 18 ) in the map of 1697 names nine.
In addition to these, the town council minutes mention several artisans like a tailor, a shoemaker, and a founder, who lived in the same houses as the burghers but are not visible in the cameral sources. With these prerequisites, we can assume that the number of households in the town of Sortavala was at least 140. Using 4.5 as an average household size gives almost the same result, 630 inhabitants, as previous researchers proposed with bigger estimated average family size but a smaller number of families. We can therefore accept 600 as the approximate population at the end of the seventeenth century. Sven Lilja has classified Swedish early modern towns according to their size. In his classification, Sortavala falls into the category of "small towns" (500-1,000 inhabitants) between the categories of "micro-towns" (less than 500 inhabitants) and "small medium towns" (1,000-2,000 inhabitants). According to Lilja, "micro-towns" were the most common category in Sweden throughout the seventeenth century. Almost all towns fell into these three categories, and only 11 out of 101 towns had more than 2,000 inhabitants. 19 According to Lilja, the median size of a seventeenth-century town in Sweden was less than 500 inhabitants. 20 We can therefore say that although Sortavala was a tiny town in the periphery, for its size, it was a very typical early modern town in the seventeenth-century Swedish realm. In Beling's map of 1697, only one street is named. The 230-meter-long 21 "Great Church Street" bisects the town horizontally. Four streets crossing Church Street in an approximate north-south direction are unnamed. In a small community, the naming of all the streets was perhaps unnecessary. The streets divide the town into the upper part (five blocks), inner part (five blocks), and lower part (three blocks) on the shore. 22 Inside the blocks, the plots are separated by borderlines and numbered from 1 to 102. The plots in the grid's regulated blocks were roughly square-shaped, each side about 22 to 23 meters, 23 and the size of the plots in the inner part of the town was about 540 square meters, and in the upper part, about 470 square meters. The plots in Sortavala were very similar in size to other Finnish towns in the seventeenth century. 24 The town did not have walls or other fortifications, but it was separated from the countryside by the customs fence. The blocks and plots near the fence and shore were very asymmetrical, and the size of the plots varied greatly. The buildings on the plots were simple one-story single houses made of timber, with the shelters required for livestock in the yards. Burgher's granaries and storehouses were situated on the shore and are numbered in the map's legend from 103 to 134. Number 135 is the place for the gardens. 25 The town was limited by Lake Ladoga in the south, and fields in the north and west. In the east, the urban space met gardens, a stream, and the churchyard. In the northwest, the outermost plots were beside a rocky hill, Kisamäki (Sw. "Leekberget," literally "Playhill"). Sortavala was also a town that did not have any medieval structures, like an old town center, affecting the segregation process. After the mid-1650s war, the urban space of Sortavala where the burghers and civil servants settled was practically an empty grid plan. As mentioned, in rebuilding seventeenth-century Halmstad after the fire, some segregation took place according to the town planning. 
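To make the population arithmetic discussed at the beginning of this passage explicit, the short sketch below reproduces the two estimates side by side. The figures of 102 plots, six persons per house, 140 households, and an average household size of 4.5 are taken from the text; everything else is merely illustrative.

```python
# Rough population estimates for late seventeenth-century Sortavala,
# reproducing the arithmetic discussed in the text.

# Older estimate: one household per plot, six persons per house.
plots = 102
persons_per_house = 6
old_estimate = plots * persons_per_house            # 612 -> "about 600"

# Revised estimate: at least 140 households (burghers, landless, gentry,
# artisans), using Lilja's average household size of 4.5.
households = 140
avg_household_size = 4.5
revised_estimate = households * avg_household_size  # 630

print(f"Plot-based estimate:      ~{old_estimate} inhabitants")
print(f"Household-based estimate: ~{revised_estimate:.0f} inhabitants")
```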
It is not known if any kind of plan existed, according to which the burghers and civil servants could claim possession of certain plots to build and settle in Sortavala. This makes Sortavala as an interesting "laboratory," as we can expect that if some segregation can be traced to the late-seventeenth-century city, then it was a result of natural segregation process produced in the social interactions of the settlers. Tracing the Residential Patterns of the Sortavala Urban Space Carl H. Nightingale has conducted a comparative study on urban segregation, beginning with Mesopotamian Ziggurats and reaching to twentieth-century cities. Nightingale postulates that the history of urban settlement is the history of segregation: segregation of some kind is an inevitable part of urban life. When not taking into account the regulations of town plans, the driving force of segregation has been, according to Nightingale, the free will of the wealthy to choose their neighbors. The perception that having the poor as one's close neighbors would bring down the value of one's property did lie behind the choices made by the wealthy. Urban segregation has been a result of choosing one's place of living in relation to wealth, ethnicity, race, religion, or other identity. 26 During the last two decades, the concept of space has been under re-evaluation. Modern human geography and sociology are looking at space as a socially produced and relative concept. Classical urban segregation research has been criticized from treating urban space as a container without interaction with its surroundings. 27 However, the early modern towns were trying to be containers. From the 1620s, the towns in the Swedish realm were surrounded by a customs fence. The traffic into and out from the town was controlled at the gates. The town had control over those who wanted to settle in the town. The town council decided if an artisan was needed in the town and, to become a burgher, one needed to apply for the right from the town council, showing a good reputation and obtaining guarantees from two burghers of one's solvency. Urban segregation is often studied using segregation indexes, the most famous being the dissimilarity index. 28 These segregation indexes have been extended to measure spatial relationships to more accurately capture the spatial aspect of urban segregation. 29 Unfortunately, the data for Sortavala are too incomplete for such analyses. Spatial social relations must therefore be described by using a partly more descriptive approach. Several studies have used fiscal records to study early modern urban space in this way. Good examples are Tim Bisschops's visualizations of the social topography of late medieval Antwerp that map a variety of sources, 30 and the analysis of segregation in three cities in Holland by Clé Lesger and Marco Van Leeuwen. 31 Lesger and Van Leeuwen study segregation on three levels, between districts (macro), between plots (meso), and within buildings (micro). Studying on these three analytical levels, they find that in smaller cities (Delft and Alkmaar), segregation was visible mainly on the meso-level. The social and economic elites were in small clusters in good locations, but always near a cluster of lower classes, often even within the same city blocks. Only in Amsterdam do they find clear macro-level segregation, which they attribute to the large size of the population and the presence of a sizable Jewish minority. 
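Although the Sortavala data are too incomplete for such measures, it may help to recall what the classical index of dissimilarity actually computes. The sketch below is a minimal, generic implementation; the group counts are invented for illustration and have nothing to do with Sortavala.

```python
def dissimilarity_index(group_a, group_b):
    """Classical index of dissimilarity D = 0.5 * sum |a_i/A - b_i/B|,
    where a_i and b_i are counts of two groups in spatial unit i and
    A, B are the city-wide totals. D ranges from 0 (identical
    distributions) to 1 (complete segregation)."""
    total_a, total_b = sum(group_a), sum(group_b)
    return 0.5 * sum(abs(a / total_a - b / total_b)
                     for a, b in zip(group_a, group_b))

# Invented example: counts of "wealthy" and "poor" households in four blocks.
wealthy = [10, 8, 1, 1]
poor = [2, 3, 9, 6]
print(f"D = {dissimilarity_index(wealthy, poor):.2f}")
```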
Micro-level analysis is used in studying vertical segregation-whether the different social classes lived in the basement or on the upper floors. Although the sources from seventeenthcentury Sortavala sometimes describe households with more than one family living in a house, micro-level vertical analysis is irrelevant in our case study. Evidently, the houses were small one-floor wooden constructions made of timber, usually consisting of only one or two rooms. In studying the urban space of tiny Sortavala, we therefore analyze the horizontal segregation on meso-(blocks and plots) and macro-(town part) levels. The socioeconomic status of townspeople is studied here in two dimensions: first, the social status given by the offices the person held; second, his wealth. We will study if the social segregation according to social status or wealth (or both) can be traced to macro-level town parts or to meso-level clusters of plots-or at all in the tiny early modern urban space that was Sortavala. Because of the smallness of the urban space-for example, the longest distance from the remotest corner of the town space to the market was about 500 meters-it was easy to go on foot everywhere. Perhaps in such a small area, it was all the same where the gentry or the wealthy burghers had their houses. The only noticeable minority in Sortavala was the Orthodox Karelians. The spatial distribution of their dwellings in the urban space is one aspect to be examined in our study. Although our data do not allow the use of segregation indexes, we can trace the patterns of residence in Sortavala by measuring the distances of the plots owned by different social and wealth groups from the urban space's possible "hot spots." These possible densifications of the dwellings of people of a socially high rank or the wealthy are identified by locating the inhabitants on whom we do have data in the town's surviving plan. These identified or suspected patterns are then tested by graphically presenting the results of measuring the distances of the dwellings to the "hot spots." The main sources in our study are the town map drawn in 1697 by Erik Beling and the lists of tax contributions paid annually by the town's burghers. The map of Beling is furnished with a list of the names of owners of 102 plots. 32 One plot is unbuilt and empty, so the list consists of 101 names that mention the profession of the most local civil servants. 33 The amount of the paid tax contribution was based on the annual valuation of the burgher's property and trade. 34 Lists of the tax paid by each burgher in Sortavala have survived from 1681, 1682, 1683, and 1685 in the state's provincial accounts. The accounts from 1684 are missing. After 1685, the lists of the paid taxes were not included in the provincial accounts. Evidently, the tax was collected and used by the tax farmer, who rented the Sortavala fief from 1686 (see below for more details). 35 Several issues must be pondered when using the taxing lists as a source. First, the mayor, civil servants, schoolmasters, and clergy were omitted from the valuation. However, those gentry who owned a plot and house in the town can be recognized in Beling's map, because their profession is mentioned. Although we lack any information about their wealth, in tracing the socioeconomic organization of the town's settlement, we can refer to their high social status in local society. 
Not only wealth may have affected the location where a person settled in the town: social status may also have been an important factor. For example, the schoolmaster had a relatively high social status in local society, but compared with the wealthy burghers, he was certainly poor. Another difficulty with the taxing lists is that they were made about fifteen years before Beling drew his map and wrote the list of plot-owners. We can therefore find data about the wealth of thirty-eight burghers owning 41 plots out of the 101 plots mentioned in Beling's map. This makes our data quite scanty, but we assume that we can obtain some good clues about the spatial organization of the settlement of Sortavala in the 1680s and 1690s. The main sources of this study are supplemented by the rolls of the town council (Sw. rådstugurätt). These minutes of the town council have survived as an almost continuous series between 1673 and 1706, and they cover the entire period studied by this article. The literature on Sortavala supplements the archival data. The history of the town of Sortavala in the seventeenth century has received little attention. There are descriptions of the outlines of the development, and some very detailed information in two town histories published in 1932 and 1970. 36 Mayors and the Gentry by the Market Square? In 1651, Sortavala town and the parish around it were given to Gustaf Adam Banér. A manor was founded on the other side of the Vakkosalmi strait from the town. Banér never visited his remote shire. A bailiff, called a "hopman," took care of the manor and the fief, and acted as the town's mayor. The largest fiefs in Sweden and Finland, including Sortavala county, were reduced to the Crown by the decision of the Swedish Diet in 1680. In Kexholm, Ingria, and Estonia provinces, the reduced fiefs were rented to the tax farmers. In Sortavala, the former "hopman" of the fief and mayor Johan Mether rented the fief from 1685. 37 During the latter half of the seventeenth century in Sweden, the rule was adopted that mayors' posts were no longer to be filled by a town's burghers but by men who had at least some academic education or experience in jurisprudence. 38 One task of a mayor was to act as the judge at the town court. Some training in jurisprudence was therefore needed. 39 Bailiffs like Mether often met this requirement. The fief owned a plot (81 in Beling's map) larger than others at the "parade place" by the market square. The legend of Beling's map states that the plot had long been owned by the man responsible for the fief. 40 Moreover, the inspector of the fief and later the tax farmer had a residence in the manor outside the town limits, as well as in the town by the market square. The house on the plot was used as the meeting place for the town's council, so it also served in a way as a town hall. We only know the locations of the residences of two mayors of Sortavala. Sven Hielmberg, who was nominated as the deputy mayor of Sortavala in 1687, evidently did not have a permanent dwelling in the town. When Hielmberg suddenly died in 1689, the burgher who had accommodated Hielmberg while he lay on his sickbed demanded compensation from his heirs for the food, beer, and spirits he had given him. 41 Mether was again forced to take care of his mayoral duties. 42 Carl Ottoson, the son-in-law of Johan Mether, replaced the deceased Hielmberg as deputy mayor of Sortavala. Finally, in 1693, he was nominated as the full mayor. 
43 Ottoson owned a house on two adjacent plots (both numbered 32 in Beling's map) on the north side of the market square. 44 Both Ottoson and Mether died in 1697, and the new tax farmer of the manor, Salomon Enberg, was nominated as mayor. 45 Enberg had his residence in the manor outside the town. Enberg came into conflict with the town's burghers, and in 1700, Benjamin Krook was nominated for the mayor's post. 46 Later, he worked as a district judge in northern Savonia county. 47 It is clear that Krook lived in the town, but the location of his dwelling is unknown. Krook was the last mayor of Sortavala before the town was destroyed in the war of 1710. Alongside the mayors, a small number of people owning a plot in Sortavala can be labeled as belonging to the "gentry." We use the concept "gentry" here for civil servants, foremen of the manor, schoolmasters, and members of families who otherwise held notable positions in the Kexholm province. The plots next to the market square seem to have been valued highly by the gentry. The manor house serving as a town hall and the plots of Carl Ottoson were by the market square, as described above. Most other plots by or near the market square were inhabited by the gentry, wealthy burghers, or artisans. The young district clerk (Sw. häradskrivare) Lorentz Frese owned a house on a plot (24) only a few steps from the market square. 48 Lorentz Frese's father was a district judge in Ingria, and Lorentz followed in his father's footsteps. Having graduated from Uppsala University (in 1700 with the name "Lorentz Freese Kexholmia Carelius") and the Royal Academy of Turku (in 1702), he first served at the Court of Appeal in Turku. In the 1710s, he was nominated as a vice district judge in the Kexholm province, and in the 1720s as district judge of the court of Karelia. 49 Plot 93 by the market square is marked in Beling's list of plot-owners for Johan Amptman. 50 The bailiffs and tax farmers of the manors were often called "inspectors," "hopmans," or "amptmans." An "amptman" was a bailiff of lower rank, 51 an overseer or a foreman. However, an "amptman" was clearly better off from the common folk. It is clear that "Johan Amptman" in Beling's map is the amptman of Sortavala manor Johan Isaksson Looman (or Loom, as he is called in the council's minutes 52 ). 53 Plot 92 on the southeast corner of the market square was owned by the "widow of Daniel Bång." The background of Daniel Bång is unknown. However, the Bång family is well known as burghers and priests in Finnish towns and parishes. The most famous was Daniel's contemporary, the Bishop of Vyborg Petrus Bång (1633-1696). However, it is unknown if they were close relatives. The family had entered Finland from Sweden proper at the beginning of the seventeenth century. Plot 94 next to mayor Ottoson's house by the market square was owned by Knut Skomakare, that is, "shoemaker." 54 Perhaps the plot beside the market square was suitable for an artisan's workshop. The vicar, curate, and to some degree even the sacristan can be counted as belonging to the literati of the tiny town's society. The church, vicarage, and curate's house were outside the town limits. However, the sacristan had a small plot (96) near the market square. The plot itself was behind plot 93, but Beling has drawn a narrow corridor between plots 92 and 93 from the sacristan's house to the market square. 55 The humble servant of the church lived in the backyard of the valued open space of the town. 
The vicar had died in 1692, 56 but his widow earned part of her living by keeping a tavern on the south side of the market square (plot 89). 57 Just beside her tavern was the tavern of tailor Staffan Sairanen (plot 90). In 1700, because of the fear of fire, the council ordered these two "huts" in poor condition to be torn down. 58 It is unknown if this was actually done. We can conclude that around the market square, there was not only the town hall and houses of the gentry, but also a concentration of plots owned by artisans: in addition to the houses of the shoemaker and tailor, there was a plot (102) belonging to the carpenter Hinrich Koistinen. Two taverns complete the picture of the heart of the town. However, not all the gentry lived near the market square. Another place which seems to have been acceptable for the lesser gentry to have their houses was block 4 in the upper part of the town plan. Carl Affleck, who owned a plot (74) in this block, was the oldest son of Simon Affleck. 59 His father Simon was in the service of Salomon Enberg, the tax farmer and mayor of Sortavala. Enberg had rented the taxes of both Sortavala and Pielisjärvi (the northernmost parish of Kexholm province) manors and fiefs, and Simon Affleck was the bailiff of the Pielisjärvi fief, famous for his heavy-handed deeds. The son seems to have had same kind of career in mind, because he is mentioned as an "amptman." 60 He very probably had the same master as his father. During the Great Northern War (1700-1721), Carl Affleck rose as an officer to the ranks of Lieutenant and Quartermaster. 61 Schoolmaster Petter Pomelius owned a house (plot 68) on the other side of the block where Carl Affleck lived. 62 Pomelius was a son of the late vicar of Puumala parish, Gabriel Carol, 63 and served as the schoolmaster in Sortavala for about twenty years in the 1680s and 1690s. 64 By the standards of Sortavala, amptman Affleck and schoolmaster Pomelius lived in wealthy company. This small cluster of gentry was complemented by three of the wealthiest burghers in the town, Anders Rautiain (plot 72), Tarasia Pukari (plot 73), and Thomas Immonen (plot 66), who owned houses in the nearest neighborhood in the same block. Wealthy Burghers by the Main Street? The mayor headed the town administration, the town court (Sw. råd), with the town councilors (Sw. rådmen), who were chosen from the well-established burghers. In Sortavala, there were usually five or six councilors at the same time. In practice, membership was a lifelong post. Only seldom was a councilor released from the duty because of his behavior, advanced age, or sickness. Membership of the town council was a mark of social status in the local town society. To compare the wealth of the burghers of Sortavala, the households were divided in wealth quartiles of equal size by the amount of contribution tax paid in copper thalers in the 1680s. To make the households easier to compare, the limits of the quartile categories for all years are based on 1681 taxes when possible. 65 The taxes for 1681-1683 were recorded as copper thalers, while 1685 taxes were in silver thalers; a standard rate of 3-1 was used to compare silver with copper money. The wealth quartiles of equal size are given in Table 1: In Table 1, we have placed the known wealth from the 1680s of the town councilors mentioned in the legend of Beling's 1697 map and the wealth of the other burghers mentioned in the legend in the wealth quartiles. 
We must remember that it was possible to connect the data about taxpaying in the 1680s with thirty-eight plot-owners of 1697 of forty-one plots. The total number of plots with houses was 101. 66 The proportions given in the following tables are therefore not exact figures of the real situation. However, they indicate the trends in the studied phenomenon. Table 2 shows clearly that the town councilors were elected from the well-established burghers: the councilors were recruited from the wealthier half of the burghers, and the wealthiest quartile (4) is very much overrepresented. 67 A long career and stable position as a burgher in society was a prerequisite for becoming a councilor. 68 The councilors were not always chosen from the wealthiest burghers, but they can usually be described as at least belonging to the "well-to-do" category. However, the town's wealthiest burgher, Anders Taskinen, for example, was never elected to the town council. 69 According to Petri Karonen, the Crown favored a practice in which towns' strongest traders were not elected to town councils. It was more profitable that they put their efforts into trading and not into town council and magistrate cases. 70 However, the Crown did not get involved in the election of councilors in Sortavala. The councilors were elected by the burghers, gathered in the town hall, or the town council voted itself for its new member. It seems that a burgher's education played some role in election as a councilor. In Sortavala, some councilors had an education of at least some degree. Councilor Martinus Canuti was teaching children before Peter Pomelius founded a school to the town at the beginning of the 1680s. 71 In 1681, Martinus Canuti is mentioned as a bridge bailiff (Sw. broofogde) in the minutes of the district court in Sortavala parish. 72 His Latinized name refers to academic studies, although we cannot find his name in the graduate registers of the Royal Academy of Turku. 73 We do not know on which plot Martinus Canuti had his house, but when he died in 1693, Brun Olofsson, who lived on plot 14-just a stone's throw from the market square-took his seat on the town council. Brun Olofsson was a son of bailiff Olof Brunsson. The minute mentions that the gathered burghers of the town wanted to elect Brun Olofsson to the town council because of his irreproachable life, and because he was a "scholar" ("Literatus"). 74 Education, wealth, and the councilor's position gave these burghers a status in the local society very close to that of the group we have labeled "gentry." One councilor who may have had some scholarly education is Jören Wallius. He was a son of the first vicar Jören Petri Wallius of Kitee parish near Sortavala town. He lived on plot 28, which was on the corner of the street heading from the market square to the main Great Church Street. Beside Jören's plot was the house of his brother Hindrich Wallius (plot 20). Jören was a wealthy man among the Sortavala burghers. In 1681, his tax contribution was thirty copper thalers, while his brother Hindrich, less successful in trade, paid only five. 75 These remarks justify an examination of whether any residential patterns were characteristic of the group of councilors among the burghers. Even a glimpse of Map 3 affords an impression that most of the councilors for whom we have data lived by the main street. Seven of the ten councilors whose plots we could locate lived along the main street. 
Two out of ten had their house only one plot away from the main street, and one had his plot in the "gentry" cluster in block 4 of the upper part of the town. We can therefore conclude that Great Church Street, which bisected the town horizontally from the common gardens in the west to the bridge leading to the church in the east, seems to have been a valuable place to own a plot. The above conclusions, that the gentry tended to live near the market square, and the wealthy burghers along or near the main street, are based on the qualitative description and the visual image obtained by mapping the data (Map 3). However, human brains tend to ascribe order and patterns in data where they in reality do not exist. To test the conclusions, we have measured the distance of the plots owned by the gentry and burghers by wealth quartiles (Q1-Q4) to the center point of the market square and to the main street. Distances were measured from the centroids 76 of the plots and calculated using a Python script, including the GeoPandas and Shapely software libraries. 77 The results are presented in Figure 1. The distributions of distances from plots to central locations are depicted using a box-and-whisker plot. A box depicts the five-number summary of each distribution, so that each horizontal line corresponds to one statistic. In ascending order, the summary includes the minimum (in this case, shortest distance), the first quartile, the median, the third quartile, and the maximum. The not available (NA) group represents the distance to plots for which no socioeconomic data are available. Figure 1 was created using the Python Seaborn visualization library. 78 Map 3. Spatial segregation of the gentry and burghers according to wealth quartiles (Q1-Q4). The figure shows that the gentry tended to have their dwellings near the market square, while the wealthiest burghers belonging to quartiles 4 and 3 owned the plots near Great Church Street (median about twenty-three meters). 79 Meanwhile, the average distance of the plots of the poorer burghers (quartiles 2 and 1) was much greater (medians over 120 and almost 60) from the main street. The plots owned by the burghers do not correlate in any way with the distance to the market square, and vice versa, there is no correlation between the location of the plots owned by the gentry and the distance to the main street. Many of the poorest taxpayers seem to have lived in the corner furthest from the market square. This is the settlement of irregular plots in the western and north-western limits of the town. On the contrary, some of the poorest lived on these irregular plots on the eastern side near the market square. However, we do find members belonging to the poorest quartile (Q1) on the grid plan in the middle of the plots of wealthy burghers. The urban space of Sortavala seems to have been segregated to some degree and quite mixed at the same time. Karelian Traders as an Ethnic Minority Finally, we must study the possibility of the segregation of the only ethnic-religious minority in the town's urban space, that of the Karelian Orthodox families. The taxing lists of the early 1680s mention at least six burghers who can be reliably identified as Orthodox Karelians. 80 In addition, the list of plot-owners on Beling's map has four Karelian or Russian names, two of which differ from the names in the taxing list. We can therefore identify at least eight Orthodox families among the Sortavala burghers of the 1680s and 1690s. 
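Before turning to the Karelian households in detail, it is worth sketching how the distance measurements described above were set up in practice. The snippet below is only an illustrative reconstruction using the same GeoPandas, Shapely, and Seaborn libraries mentioned in the text; the file name, column names, and coordinates are placeholders rather than the actual Sortavala data.

```python
import geopandas as gpd
import seaborn as sns
import matplotlib.pyplot as plt
from shapely.geometry import Point

# Placeholder inputs: a polygon layer of plots with a socioeconomic "group"
# column ("gentry", "Q1"..."Q4", "NA"), and a reference point for the market.
plots = gpd.read_file("sortavala_plots.geojson")   # hypothetical file
market_square = Point(100.0, 200.0)                # placeholder coordinates

# Distances are measured from plot centroids, as in the article.
plots["dist_to_market"] = plots.geometry.centroid.distance(market_square)

# Box-and-whisker plot of distances per socioeconomic group,
# in the spirit of Figure 1.
sns.boxplot(data=plots, x="group", y="dist_to_market")
plt.ylabel("Distance to market square (m)")
plt.show()
```

The distance to the main street can be computed in the same way by replacing the point with a Shapely LineString representing Great Church Street.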
The proportion of Karelians among the inhabitants of Sortavala during the whole period was therefore about 6 percent. Following the guidance of Sjöberg's classic work, we must ask if this small group of Karelian trader families was a segregated sect in the spatial organization of Sortavala. Because of the data's shortcomings and smallness, this analysis must again be based on a qualitative description and visual mapping of the data (see Map 3). In the 1680s and 1690s, the most remarkable of the Karelian traders in Sortavala was Staffan Klimpo. He had his house in the best place by the market square (plot 25) and according to another map made by Erik Beling, he owned most of the fields surrounding the town. Klimpo belonged to the wealthiest group of burghers in Sortavala. Although he was not a councilor, he was one of the oath-sworn men who were chosen to execute the evaluation of the contribution tax for each burgher. This shows that he was a respected member of the town's society. He was a shipowner, and from the minutes of the town council, we find that he sailed with his wares to Stockholm almost annually. Jöran Klimpo, one of the wealthiest burghers and a councilor in the town before the 1680s, was Staffan's father. With his own funds, Jöran bought church bells from Stockholm for the local Orthodox church. 81 This background may explain the excellent and valuable situation of Staffan's house. We do not know the location of the plot of Staffan Klimpo's brother, Pamphilia. In the taxing list of 1681, Pamphilia is marked as even wealthier than his brother Staffan. He had to pay a tax contribution of ninety thalers in copper, while Staffan paid eighty thalers. Only one other burgher paid more tax than these two brothers. While Staffan remained prosperous, Pamphilia had another fate. The tax contribution lists show that his property and wealth were rapidly vanishing. In the tax list of 1685, he was among the poorest burghers. The minutes of the town council make it clear that Pamphilia's downfall was his destructive alcoholism. Three other Karelians, Tarasia Pukari, Pafwila Federoff, and Ondruska Iwanoff Kuhnoinen, were also quite prosperous burghers. Pukari paid a tax contribution of twenty copper thalers in 1681, and in 1683, even thirty thalers. With this income, he belonged to the third wealth quartile of burghers (the fourth being the most prosperous). His house was in the northern part of the urban space (plot 73). The plot was not near the main street or the market square, but it was in the cluster of plots of members of the gentry and wealthy burghers. Pukari's nearest neighbors were amptman Carl Affleck (plot 74) and schoolmaster Petter Pomelius (plot 68). Pafwila Federoff was taxed five copper thalers, therefore, belonging to the poorest quartile of burghers, yet was the most well-to-do in this group. In 1681, Ondruska Iwanoff paid ten thalers and in the following two years, twenty thalers, but he is no longer mentioned in the tax list of 1685. We do not know the location of his house in the town. However, in the list of plot-owners of 1697, plot 98 on the easternmost edge of the town plan was owned by Gauril Pawilof. It is most likely that Gauril's patronym refers to Pafwila Federoff. We can conclude that although the Karelian traders in Sortavala formed a small religious minority in the town, there is no sign that where they lived had any spatial group cohesion in the urban space. 
Discussion At the beginning of this article, we saw that Finnish historians and archeologists suggested that the early modern town space of Tornio in Northern Finland was highly segregated: the wealthiest lived by the street near the shore, while the poorest were pushed to the remotest street near the town limits. However, the Swedish researchers Rosén and Larsson present the preliminary results of the excavations of the early modern Nya Lodöse, which suggest that the urban space was economically highly mixed. The poor lived side by side with the rich. They also guess that artisans' workshops were probably in some special parts of the town. Observations about such slight segregation have also been made concerning other Swedish towns. Studies of the spatial segregation in European towns show a tendency to find segregation on the macro-level (town parts) in large cities (Amsterdam and London), but social segregation in smaller towns is visible only on the meso-level (blocks). The results of examining the Sortavala urban space of the 1680s and 1690s seem to support the results from early modern small towns elsewhere in Europe and Sweden. However, the segregation found in the data collected from late-seventeenth-century Sortavala seem to suggest that the segregation was at least bipolar. The market square was the "heart" of the town, especially for the gentry. The wealthiest burghers, on the contrary, lived along the main street or in the plots near it. Block 4, or the upper part of the town, had a cluster of gentry and wealthy burgher dwellings. Sjöberg suggests that living near the administrative centers was practical and socially valued by the early modern elites. 82 However, differences in the residential patterns of pre-modern elites and wealthy burghers have been explained by the different needs of administration and commerce because of the rise of capitalism before industrialization. 83 Because we have yet to collect any data on production in Sortavala from the studied period, we cannot even guess the extent of the differences detected in the segregation patterns between the gentry and the wealthy burghers. However, in all blocks inhabited by the wealthiest, we find plots of the poorest quartile of taxpayers. Settlements in the grid plan of seventeenth-century Sortavala were relatively segregated in their social and economic distribution. The blocks furthest from the market square were irregular, and the plots near the customs fence were usually settled by the poorest taxpayers in the town. The block to the east of the market square was highly unorganized and perhaps settled by the many artisans living in the town. Although no town walls or ramparts limited the town space of Sortavala, the town was surrounded by a customs fence. Outside the fence, the town met fields owned by the vicar and the wealthy burgher, Staffan Klimpo. This was not the direction in which to enlarge the urban space. When the plots in the grid plan were built and reserved, anyone who wanted to settle in the town had to find the place for his house outside the grid plan. The only places this could be done were at the sides of the rocky Kisamäki hill and on the eastern banks of the cape where the town was built. This must be the explanation for the poor settlement and irregularly formed plots at the northern end of block 2 of the upper part of the town and on the eastern side of the market square (block 3 in the lower part of the town, behind the plots nearest the market square). 
The irregularly formed and freely grown outskirts of the urban space of Sortavala indicate that although an early modern town tried to be a container, strictly limited to selected settlers and a planned, spatially and socially closed space, it did not fully succeed in this task. People earning their living through craftsmanship, as farm hands, or through other irregular work were able to settle at the limits of the closed urban space. In conclusion, the settlement in the Sortavala urban space shows evidence of socioeconomic segregation at the meso-level. Those in socially respected positions and the wealthiest burghers occupied the plots near the market square and by the main street. At the same time, the spatial structure of this small town was socially and economically mixed. Among the wealthiest, we can find some of the poorest as neighbors. However, this rule is not valid the other way round. The wealthiest did not settle among the poorest in the rocky parts of the town. Much as in the smaller Dutch towns of the same era, the town space of Sortavala was more homogeneous on the macro-level but segregated on the meso-level. This suggests that the urban space of small towns in early modern Europe was similar in both the core and the periphery of Protestant Europe, but further research is needed to confirm this.
Author Contributions
Antti Härkönen prepared the cartographic presentations and figures, created the wealth classification, and wrote the paragraphs explaining the visualizations and the methodology of segregation studies. Kimmo Katajala wrote the rest of the text.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Hydrodynamic tearing of bacteria on nanotips for sustainable water disinfection
Water disinfection is conventionally achieved by oxidation or irradiation, which is often associated with a high carbon footprint and the formation of toxic byproducts. Here, we describe a nano-structured material that is highly effective at killing bacteria in water through a hydrodynamic mechanism. The material consists of carbon-coated, sharp Cu(OH)2 nanowires grown on a copper foam substrate. We show that mild water flow (e.g. driven from a storage tank) can efficiently tear up bacteria through a high dispersion force between the nanotip surface and the cell envelope. Bacterial cell rupture is due to tearing of the cell envelope rather than collisions. This mechanism produces rapid inactivation of bacteria in water, and achieved complete disinfection in a 30-day field test. Our approach exploits fluidic energy and does not require additional energy supply, thus offering an efficient and low-cost system that could potentially be incorporated in water treatment processes in wastewater facilities and rural communities.
The human race has fought against pathogenic microbes throughout history 1 . Waterborne pathogens have long been a threat to public health, and are associated with great pain and suffering 2 . The development of disinfection techniques including chlorination, ultraviolet radiation and ozonation has helped eliminate waterborne pathogens and improved the quality of life 3 . However, current disinfection practices rely on strong oxidants or harsh conditions 4,5 , leading to a high carbon footprint and unpredictable health risks (e.g. carcinogenic byproducts 6,7 and microbial resistance 8,9 ). Most of these technologies require a large-scale infrastructure and extensive maintenance, and therefore cannot be easily deployed in rural areas with inadequate electric power 10,11 . At present, billions of people worldwide still lack access to clean water and sanitation 12 . To provide universal access to safe and affordable drinking water, new disinfection processes that produce less secondary pollution and require less energy are urgently needed.
Recent advances in the mechano-bactericidal effects of nanomaterials provide a chemical-free approach for bacterial control [13][14][15] .It is generally believed that if enough mechanical force is exerted on a bacterium by surface contact, its cell wall can be penetrated 16 .However, bacteria have a natural resistance to mechanical shock from the environment 17 , as reported by Suo et al., who showed that a bacterial cell remained viable after repeatedly puncturing it with a sharp atomic force microscopy (AFM) probe (r ~35 nm) 18 .Previous studies have shown that mechano-bactericidal effects are more pronounced when bacteria were statically attached on the nanostructured surface to allow a sufficient disruption of cell integrity 19 .Incorporation of capillary force or surface tension at the air-liquid interface could help to achieve rapid cell deformation at the nanostructured surface, leading to its death 20 .Yet, this condition is not easily achieved in bulk water disinfection featured by a high throughput and a fluidic environment. A fundamental characteristic of water is its fluidity, and using the mechanical energy in water flow to inactivate bacteria would be an ideal way to sustainably disinfect water.In a fluidic environment, the motion of bacteria is dominated by hydrodynamic forces and Brownian motion, which lead to random collisions during water flow 21,22 (Fig. 1a).However, bacteria have evolved cell envelope to mechanically resist external forces, which were up to 2-20 nN as obtained by AFM 18,23 .Therefore, most cells experience resilient deformation without any physiological structural damage when they collide, and this does not significantly change even at a sharp and rigid nanostructured surface (Fig. 1b).The rupture of bacteria during flow was only observed when the flow was stopped so that the cells could be adhered on the surface 24 .At present, there is no reported way of destroying bacteria in a continuous flow condition using fluidic energy. London dispersion force is a basic form of attractive interaction, and it is vital to structural stability at the microscale 25,26 .It contributes to the membrane integrity 27 and determines cellular functions such as membrane permeability 28 and cell adhesion 29 .To destroy the physiological structure of bacteria, dispersion interactions between the contact surface and bacteria should not be ignored, as they play a key role in transforming the kinetic energy from water flow to the cell wall.At high flow speeds (e.g. a turbulent flow), the contact time is very short, and thus a stable London dispersion interaction between bacteria and the contact surface is not reached.However, a mild fluidic condition with a relatively low flow rate (e.g. a laminar flow) can only deliver a small kinetic energy.Facing this dilemma, it is imperative to set up a new force model incorporating dispersion interaction at the microscale, so as to increase the efficiency of energy transfer from the water flow to the cell envelope. 
Here, we use a contact surface with nanotips which substantially increases the stress delivered by the water flow to the cell envelope.With this unique structure, we demonstrate a hydrodynamicbactericidal mechanism which couples mild fluidic energy and London dispersion force between the nanotip surface and the cell envelope, leading to a dramatic bactericidal effect in water flow.We confirm that the stress produced by the hydrodynamic and dispersion forces is outward of the cell and overcomes the puncture resistance of the bacteria.Using this method, we inactivated >99.9999% of the bacteria in water and achieved continuous mechanical disinfection in a 30-day field test, demonstrating the potential of using environmental mechanical energy to destroy pathogenic bacteria. Illustration of hydrodynamic-bactericidal mechanism The basic principles of the process are presented in Fig. 1c.We set up a model nanostructured surface with a strong dispersion interaction with bacteria, which enables an efficient energy transfer from the water flow to the cell envelope.When a bacterium collides with this nanostructure in flowing water, it is transiently trapped on the surface of the nanotips due to their strong attraction.The drag force of the flow stresses the contact area and therefore induces a considerable outward tension on the cell envelope, which is strong enough to overcome the puncture resistance of the bacterium, causing it to rupture and die (Fig. 1d). To quantitatively investigate the London dispersion interaction, the well depth (ε) of the van der Waals (vdW) potential, which represents the energy of a system at the equilibrium state, was used to reflect intermolecular interactions 30 .The cell membrane was modeled with 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoethanolamine (POPE), which is the typical lipid molecule in the cell envelope 31 .Six different materials with different ε values were placed close to a POPE lipid bilayer and showed distinctly different configurations after 100-ns free molecular dynamics (MD) simulations (Supplementary Movies 1-6).As shown in Fig. 1e, materials with ε < 0.256 kJ mol −1 swung away from the lipid membrane while those with ε > 0.256 kJ mol −1 strongly interacted with the lipid molecules and were inserted into the membrane. To better understand these effects, we analyzed the physical interactions between the materials and the lipid membrane.Figure 1 f, g shows the changes with time in the interaction energy for different ε values, together with the distances of the center-of-mass of these materials from the membrane.For materials with ε < 0.256 kJ mol −1 , a high-energy plateau (approximately 0 kJ mol −1 ) indicated a weak attraction interaction in the system.A relatively constant energy value was found at ε = 0.256 kJ mol −1 where the material was absorbed at the surface of the membrane without insertion.For ε > 0.256 kJ mol −1 , the vdW interaction energy decreased rapidly when the material was inserted into the lipid membrane, corresponding to a strong dispersion interaction between the material and the lipid molecules 32 . 
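In most molecular dynamics force fields the well depth enters through a 12-6 Lennard-Jones term, so a quick way to get a feel for what a threshold of ε = 0.256 kJ mol−1 means is to evaluate that potential directly. The sketch below is only illustrative: the σ value and the pairwise-parameter choice are assumptions for demonstration, not the parameters actually used in the simulations described above.

```python
import numpy as np

def lennard_jones(r, epsilon, sigma):
    """12-6 Lennard-Jones potential U(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6].
    The well depth epsilon is the depth of the minimum, reached at
    r = 2**(1/6) * sigma."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

sigma = 0.35                      # nm, assumed value for illustration only
r = np.linspace(0.3, 1.2, 500)    # nm

for eps in (0.10, 0.256, 0.40):   # kJ/mol; 0.256 is the threshold reported above
    u = lennard_jones(r, eps, sigma)
    r_min = 2 ** (1 / 6) * sigma
    print(f"eps = {eps:.3f} kJ/mol: minimum U = {u.min():.3f} kJ/mol "
          f"at r = {r_min:.3f} nm")
```

A deeper well (larger ε) simply means a stronger attraction at the equilibrium separation, which is why materials above the threshold remain bound to, and insert into, the lipid membrane in the simulations.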
Although the POPE lipid bilayer is a simplified model for bacterial membrane, it is widely accepted and has been proved significant in analyzing molecular mechanism of bacteria-nanomaterial interactions 31,33,34 .The simulation of cell membranes with realistic components, including lipid, protein, and peptidoglycan, is the future direction, which is essential to unravel the behavior of a real cell membrane.However, due to the requirements of enhanced sampling algorithms and substantial data processing, there will be a continuing demand for simplified models containing few components 35 . Our theoretical simulations based on the POPE lipid bilayer model show an important finding that ε > 0.256 kJ mol −1 is essential for strong attraction between the surface and the cell membrane.Based on these results, there may be a large number of materials with this property.For example, sp 2 -carbon, which widely exists in nature and is chemically stable, has large numbers of delocalized electrons to produce a strong dispersion interaction with a bacterial membrane with ε of 0.293 kJ mol −1 , and is an excellent candidate. A model nanostructured surface We chose copper foam as the substrate for the production of the nanotip contact surface.This is an easily accessible material with a porous three-dimensional structure (Supplementary Fig. 1a) to allow water flow and cell collision.A high density of Cu(OH) 2 nanowires (Cu(OH) 2 NWs) with the diameter ~200 nm and length up to 5 μm was grown on the foam (Supplementary Fig. 1b, c), which provides numerous contact points on their sharp tips.Thermal treatment was used to coat these nanowires with a carbon layer to change the magnitude of the London dispersion force of the nanostructured surface.No obvious morphological change was observed after carbon coating and the sharp tips of the original Cu(OH) 2 NWs were well preserved (Supplementary Fig. 1d).X-ray diffraction (XRD) measurement was performed to investigate the crystalline structure of the modified NWs, which indicates that the Cu(OH) 2 phase was maintained after carbon coating (Supplementary Fig. 2).A slight change in surface hydrophilicity was observed (Supplementary Fig. 3), which confirms the coating of carbon on the modified NWs. A transmission electron microscopy (TEM) image shows the morphology of an individual modified NW (Fig. 2a).Using an aberration-corrected TEM (ACTEM), a layer of amorphous carbon is seen evenly covering the surface of the nanowire with a thickness of about 15 nm (Fig. 2b).The surface carbon can be discriminated from the Cu substrate by their different image contrast in a bright-field scanning TEM (BF-STEM) and a high-angle annular dark-field scanning TEM (HAADF-STEM), as by elemental mapping (Fig. 2c).The chemical composition of the carbon layer was studied by Raman and X-ray photoelectron spectroscopy (XPS).The Raman spectra of the modified NWs shows a graphitic D-band at about 1580 cm −1 (Fig. 2d), which is consistent with the main peak at 284.7 eV in XPS C 1s spectra (Supplementary Fig. 4), demonstrating that the surface carbon is dominated by sp 2 C-C bonds. We used atomic force microscopy (AFM) to confirm the attractive force between the amorphous carbon layer and the bacterial cell membrane (see Methods).An AFM tip was treated by the same coating method to modify the Cu(OH) 2 NWs (Supplementary Fig. 5).From Fig. 
2e, we see that the cantilever's retraction was hindered as a result of adhesion between the carbon layer and the cell surface, while the original AFM tip exhibited no hysteresis in cell surface detachment.In this testing condition, the adhesion force between the carbon-coated AFM tip and the bacterium was measured to be 0.9 ± 0.5 nN (Supplementary Fig. 6), while there was a negligible adhesion force for the uncoated AFM tip (Supplementary Fig. 7).This measurement confirms that the amorphous carbon surface has a strong attraction to the bacterial cell due to the enhanced London dispersion force, which was verified in the above MD simulation. Bactericidal performance of modified NWs In our tests, the copper foam has a filter-like porous geometry with an average pore size of 200 μm, which causes over 99.9999% of the bacteria to collide with a 3-mm thick membrane (Supplementary Fig. 8 and Supplementary Movie 7).To evaluate the bactericidal efficiency of the modified NWs in water flow, the Gram-negative bacterium Escherichia coli (E.coli) was used as the indicating microorganism and was suspended in sterilized water with a concentration of 10 6 -10 7 colony-forming units per milliliter (CFU mL −1 ).We first compared the bactericidal performance of modified NWs with two control materials, Cu(OH) 2 NWs and modified Cu foam (Supplementary Fig. 9), in a flowthrough cell at a flux of 2 m 3 h −1 m −2 (Supplementary Fig. 10).These control materials were used to exclude the contribution of the Cu substrate, the amorphous carbon itself or other related factors.The logarithmic removal efficiency was defined by -log(C/C 0 ), where C and C 0 represent the live bacterial concentrations in the treated and untreated water samples respectively.As shown in Fig. 2f and Supplementary Table 1, the modified NWs achieved a superior disinfection of E. coli by more than 99.9999% (>6 log removal).By contrast, the disinfection efficiency of the Cu(OH) 2 NWs and modified Cu foam were relatively low (~1 log), with large quantities of live bacteria remaining (Fig. 2g).E. coli viability was also assessed through a live/dead fluorescence assay.Bacteria with intact cell membranes were stained with SYTO 9 (green), whereas nonviable bacteria with damaged membranes were stained with propidium iodide (red).It is clear from Fig. 2h that the bacteria treated by the modified NWs were stained red, indicating severe membrane damage, while bacteria treated by the Cu(OH) 2 NWs and the modified Cu foam remained viable.These results indicate that the removal of bacteria in the flow was related to a combined effect between the nanotip structure and the surface carbon layer. To obtain insight into the physiological structure changes of the bacteria after contact with the modified NWs, we examined the morphology of E. coli by scanning electron microscopy (SEM).The initial E. coli were rod-shaped with an intact cell membrane (Fig. 2i); however, in the effluent, they were severely damaged and had many pores (Fig. 2j).The rupture sizes of the treated bacteria were measured to be 100-200 nm from the SEM image, with an average value of 122 ± 32 nm (Supplementary Fig. 11).The TEM analysis (Fig. 2k, l) also indicates that the cell envelope was ruptured, leading to partial leakage of the cytoplasmic contents.To confirm this cell damage, the E. 
coli cells treated by the modified NWs were immediately imaged in water by structured-illumination microscopy with the presence of lipophilic cyanine dye DiO (green) and propidium iodide (red).After flowing through the modified NWs, the E. coli cells had a compromised membrane that allowed the entrance of red propidium iodide (Supplementary Fig. 12).Furthermore, a number of fragments (red dashed circles) were attached to the modified NWs after a continuous disinfection test (Fig. 2m, n).As can be seen from a higher-magnification SEM image (Supplementary Fig. 13a, b), the dimension of the fragments was in the range of 100-200 nm, which agrees with the rupture sizes of the cell envelope.Considering that the bactericidal test was conducted in deionized (DI) water, these fragments are likely to be bacterial debris that were torn from the cell bodies while the damaged cells were flushed away in the flow.In contrast, the unmodified Cu(OH) 2 NWs did not show any debris on the surface after continuous disinfection (Supplementary Fig. 13c, d), which suggests that the carbon layer induced a different interaction mode between the bacteria and the nanowires due to a higher London dispersion force, causing the tearing of the bacteria during flow. We further confirmed the negligible contributions of other potential mechanisms to the removal of bacteria, including adsorption, oxidative stress, and toxicity of the released copper ion (Cu 2+ ).The optical density of the effluent water was comparable to the influent water (Supplementary Fig. 14), meaning that the density of the bacterial cells was unchanged in the effluent and the bacteria were not removed by adsorption.In addition, the elevation of intracellular reactive oxygen species (ROS) was not observed in the bacteria treated by the modified NWs (Supplementary Fig. 15).Therefore, the influence of oxidative stress is negligible.Note that the effluent Cu 2+ concentration (0.3-0.5 mg L −1 ) was far below the guideline of World Health Organization for safe drinking water (2 mg L −1 ) 36 , and the contribution of released Cu 2+ ions to bacterial removal was limited (Supplementary Fig. 16).From these observations, we conclude that mechanical destruction was the major cause of bacterial inactivation by the modified NWs. This bactericidal process is different from the reported mechanobactericidal activities in previous studies (Fig. 2o and Supplementary Table 2).As is shown in Fig. 2o, there are large variations in the contact time and the bactericidal efficiencies among different types of nanomaterials.Typically, previously reported mechano-bactericidal activities are based on a surface-contact mechanism, in which bacterial cells are deformed during static contact with the sharp nanostructures 37,38 .This mostly requires a long contact time (up to hours) to deliver sufficient stress beyond the elastic limit of the cell envelope 14,[39][40][41] .Metal nanoparticles show higher bactericidal activities due to their toxicity and induction of oxidative stress after translocating into the cell 42,43 , apart from their mechano-bactericidal behaviors.Here, we report a fluidic energy triggered tearing mechanism, in which bacteria are torn apart by an instantaneous contact with nanotips during flow.The coupling of the hydrodynamic force and London dispersion interaction between the nanotip surface and the cell envelope achieved >99.9999% inactivation of the bacteria within a short contact time (e.g. 
7 s), which is, to the best of our knowledge, the first observation of effective mechanical disinfection in bulk water. From puncturing to tearing: discussions of the cell rupture mechanism The design of a filter-like porous flow-through unit allows effective cell collision with the surface (>99.9999%)(Supplementary Fig. 8 and Supplementary Movie 7) and the formation of nanotips reduces the contact area with bacteria, leading to an enlarged stress on the cell envelope.During an instantaneous collision with a nanotip, the collision energy deforms the bacteria (Fig. 1a, b).While the high London dispersion force at the surface of the modified NWs creates a transient attachment of the bacteria to the nanotips.Subsequent flow produces tearing stress on the cell envelope (Fig. 1c, d).To determine whether the bacteria were ruptured by puncturing or tearing, we developed a biophysical model to analyze the forces exerted on the bacterial cell during flow. Before this we measured the mechanical properties of the E. coli cell.Measurements of mechanical properties of the bacterial cell are sensitive to the experimental conditions, and probing live cells under physiological conditions is therefore an ideal way to investigate the biophysical mechanism of bacteria 17,44 .Here we used AFM to directly probe live E. coli in a liquid environment.We followed the protocol of a puncture experiment 18 and obtained force versus displacement curves of bacterial cells (see Methods).A typical puncture curve is shown in Fig. 3a, and more information is provided in Supplementary Fig. 17.The Young's modulus of the bacterial cell is determined from the initial part of the loading force versus cell indentation (see inset in Fig. 3a).By fitting the data with the classic Sneddon and Hertz Model (Supplementary Table 3), we obtained an average Young's modulus of 0.5 MPa, which is comparable to the reported results measured in liquid environment (Supplementary Table 4).The critical point at which the AFM tip broke into the cell wall appeared at a maximum cell indentation of 85 nm (Fig. 3a).We calculated the stress distribution profile of the bacterial cell envelope at this critical point using the obtained Young's modulus (Fig. 3b).The maximum pressure appeared at the edge of the contact area, with a value of 0.05 MPa, which is considered the critical stress required to rupture the cell. To investigate the detailed process during the contact between the bacteria and the surface during flow, we further analyzed the bacterial movement near the surface nanostructure by a Brownian dynamics and computational fluid dynamics method using a set of cylindrical tips perpendicular to the horizontal surface to present the nanotips on the copper foam.The flow rate in the main flow was set to 5.5 × 10 −4 m s −1 corresponding to a flow rate of 2.7 mL min −1 in the experimental conditions.The velocity of the flow near the contact surface was calculated and is shown in Fig. 3c, where the flow rate was much lower than that in the main flow, with a value of 5 × 10 −5 m s −1 .Then, movement of the E. coli cell in the flow field was simulated by importing the calculated flow information into a Brownian dynamics equation 45 .We simulated 100 cells in the defined flow field and obtained eight different types of contact between a bacterium and nanotips (Fig. 3d and Supplementary Fig. 18).Because E. 
To investigate the detailed process of the contact between the bacteria and the surface during flow, we further analyzed the bacterial movement near the surface nanostructure by a Brownian dynamics and computational fluid dynamics method, using a set of cylindrical tips perpendicular to the horizontal surface to represent the nanotips on the copper foam. The flow rate in the main flow was set to 5.5 × 10 −4 m s −1 , corresponding to a flow rate of 2.7 mL min −1 in the experimental conditions. The velocity of the flow near the contact surface was calculated and is shown in Fig. 3c, where the flow rate was much lower than that in the main flow, with a value of 5 × 10 −5 m s −1 . Then, the movement of the E. coli cell in the flow field was simulated by importing the calculated flow information into a Brownian dynamics equation 45 . We simulated 100 cells in the defined flow field and obtained eight different types of contact between a bacterium and the nanotips (Fig. 3d and Supplementary Fig. 18). Because E. coli is rod-shaped, the contact with the nanotips can be either at the end (six types) or the middle (two types), and the probability of end-contact (57%) is higher than that of middle-contact (43%).

Finally, we modeled the stress distribution profile of a cell membrane based on its interaction with the nanotips. We first considered puncturing during the collision process (Supplementary Movies 8 and 9). During collision, we assumed that the work done by the tip on a bacterium equaled the loss of kinetic energy. The maximum stresses for the end-contact and middle-contact types were calculated to be 2.61 × 10 −4 and 4.54 × 10 −4 MPa, respectively (Fig. 3e, f), which are two orders of magnitude lower than the critical value (0.05 MPa). These data suggest that the collision process cannot mechanically rupture a bacterium, and explain the poor bactericidal performance of the surface of the Cu(OH) 2 NWs (Fig. 2f), where only the puncturing effect exists. When bacteria collided with the surface of the modified NWs, the water flow combined with the higher London dispersion interaction to exert a tearing effect (Supplementary Movies 10 and 11). The maximum outward stresses in the end-contact and middle-contact configurations were calculated to be 6.99 × 10 −2 and 5.28 × 10 −2 MPa, respectively (Fig. 3g, h), which exceed the rupture stress of the bacteria. In our simulation, the drag force of the flow was estimated from the minimum flow rate, and the random torque of the flow was ignored, although it would theoretically induce rotation of the cell body and exert extra tension for membrane deformation 22,46 . Hence, the stress in the tearing process was sufficient to rupture the bacterial cell. The numerical simulation results are in good agreement with the experimental results for the Cu(OH) 2 NWs and modified NWs, which allows us to demonstrate that the tearing generated by the hydrodynamic and dispersion forces, rather than hydrodynamic/Brownian collisions, is the true reason for the cell rupture.

Practical disinfection applications
Based on the hydrodynamic-bactericidal mechanism, we have produced a model disinfection system (Fig. 4a). The modified NWs were placed in a chamber, and the contaminated water flowed into the chamber for disinfection. During a short time in the chamber, the bacteria suffered destructive mechanical damage and lost cell integrity by contact with the modified NWs. This prototype continuous water disinfection system can be integrated into municipal water pipelines and easily scaled up by stacking the chamber units with the modified nanotips in series.

We first evaluated the influence of flow rate on the performance of this novel disinfection system (Supplementary Fig. 19). Limited bactericidal activity (~1.4 log) was achieved in the static condition, indicating the significance of water flow to the rupture of the bacteria. In the flow condition, the modified NWs showed remarkable inactivation of E. coli over a range of flux (0.5-6 m 3 h −1 m −2 ). Complete disinfection (>6 log removal) was observed at fluxes of 0.5 and 2 m 3 h −1 m −2 , which are the commonly adopted fluxes in filtration modules 47,48 . Yet there was a decrease in the inactivation efficiency at a higher flux of 6 m 3 h −1 m −2 . This is because at such a high flux, the contact time of the bacteria within the materials was decreased, leading to a substantial decrease in the contact probability between the bacteria and the nanotips 49 .
To confirm the robustness of the disinfection effect, we evaluated the persistence of bacterial inactivation caused by this hydrodynamic-bactericidal mechanism. We disinfected three representative Gram-negative and Gram-positive bacteria and assessed their viability using a 24-h storage experiment, including E. coli, Pseudomonas aeruginosa (P. aeruginosa), and Staphylococcus aureus (S. aureus). As shown in Fig. 4b, the three types of bacteria completely lost their viability after disinfection (>6 log inactivation), and no regrowth or reactivation was observed during storage. A significant inactivation was also observed for Gram-positive S. aureus. Similar to E. coli, the cell envelope of the S. aureus cell was damaged, with holes on the surface, after flowing through the modified NWs (Supplementary Fig. 20). Yet there was only a slight leakage of cytoplasm, suggesting that S. aureus is more resistant to mechanical damage compared with Gram-negative E. coli, due to a thicker peptidoglycan cell wall 50 . We tested the disinfection performance in real water samples using E. coli as the indicator microorganism (Fig. 4c). The characteristics of the tap water and reclaimed water are shown in Supplementary Table 5. After flowing through the modified NWs, we observed a rapid decrease in the live bacterial concentration in both water samples. The treated bacteria did not reactivate during the 24-h storage under visible light illumination. Because of the mechanical destruction, the treated cells could not recover as the membrane damage extended and cytoplasm was lost 51 , which avoids the risk of bacterial regrowth seen in conventional disinfection processes such as ultraviolet radiation 52 .

A flow-through disinfection apparatus was constructed to test the long-term performance of this model system (Fig. 4d inset). A series of stainless-steel chamber units was connected for evaluation during a 30-day continuous disinfection. A bacterial solution containing 10 3 -10 4 CFU mL −1 E. coli was used as the feed water, simulating the typical bacterial concentrations found in water purification systems 53 . During the 30-day continuous operation, no live bacteria were detected in the effluent (Fig. 4d), which corresponds to a treating capacity of over 10,000 times the effective volume of the chambers. The morphology of the modified NWs after the 30-day operation was evaluated by SEM. Bacterial debris accumulated only in the first unit (Supplementary Fig. 21a, b), while in the following units, sparse debris was found and the morphology of the modified NWs was well preserved after 30-day flushing (Supplementary Fig. 21c-f). It was estimated that this flow-through disinfection apparatus, with a chamber volume of 1 L, can support the daily water consumption of an adult for over ten years (see Methods), showing its prospects for broad applications.

The hydrodynamic-bactericidal mechanism is solely a physical process in which the surface London dispersion interaction is the critical factor, regardless of the type of surface with nanotip characteristics. This was shown by using three different nanomaterials, including ZnO nanorods, Co, Mn-layered double hydroxide (LDH) nanoneedles and titanate nanowires, in which the same carbon coating was used for all three materials (see Methods). Disinfection results show that only a weak bactericidal activity was observed in their original forms (Fig. 4e).
After surface modification, the three nanostructured surfaces showed significant bacterial inactivation of >99.9%. The difference in the bacterial killing efficiency of these nanomaterials is associated with the differences in surface geometry (Supplementary Figs. 22-24). The diameter, height, and density of these surface patterns affect the contact between the bacteria and the nanotips 54 , leading to changes in the local force distribution. For instance, the carbon-coated ZnO nanorods showed lower bactericidal efficiency than the carbon-coated Co, Mn-LDH nanoneedles and carbon-coated titanate nanowires. This is because the ZnO nanorods have larger diameters and a lower tip density (Supplementary Table 6). Typically, nanostructures with a blunter feature are expected to deliver less mechanical stress 41 , due to an enlarged contact area. Besides, the lower tip density of the ZnO nanorods also reduces the probability of bacterial contact with the nanotips during flow, which negatively impacts their bactericidal performance. Nonetheless, these results indicate that the carbon coating treatment can increase the bactericidal performance of different nanomaterials by at least three orders of magnitude, confirming that the London dispersion interaction is the critical factor determining the bactericidal efficiency of the nanotips in water.

Discussion
In this study, we demonstrate a hydrodynamic-bactericidal mechanism that couples the mild fluidic energy and the London dispersion interaction between the nanotip surface and the cell envelope, leading to superior mechanical inactivation of bacteria in water. Although the applicability of this method was verified on different nanotip surfaces, we have yet to fully identify the influence of nanostructure geometry on the rupture of the bacteria, as the nanotip structures obtained by chemical methods cannot be guaranteed to be geometrically identical. A thorough study of the effects of geometrical parameters requires precisely controlled nanofabrication. We believe that advances in nanofabrication techniques such as reactive ion etching or deep UV lithography may help to provide a more precise analysis of the role of nanostructure geometry in the future 55 .

Our method is effective against Gram-positive S. aureus, yet we observed that the rupture of E. coli is more vigorous than that of S. aureus, based on the morphology of the bacteria after disinfection. The influence of bacterial species is a complex issue. For instance, the difference in bacterial shape (e.g., rod or coccus) can affect the stress distribution profile when interacting with the nanotips 56 . In addition, the cell envelope composition not only governs the cell stiffness but also influences the level of London dispersion interaction between the bacteria and the nanotips. A thorough study incorporating the above issues should be carried out to give guidance for the design of a more reliable disinfection device. Besides, the effectiveness of this method towards other types of waterborne pathogens, such as viruses, fungi and protozoa, needs to be studied. As viruses possess a noncellular structure with a much smaller size, the current system may be limited in its ability to inactivate viruses. Fine adjustment of the nanotip geometry is thus important to produce nanotips comparable to the size of viruses.
Furthermore, studies under real-world conditions are required.Colloids, particles, dissolved organic matter, and ions coexist with pathogens in realistic water treatment conditions 57 .These substances may absorb to the modified NWs, shield the effective sites on the nanotip surface, and possibly change the level of dispersion interaction with the bacteria.Exploring the effects of these substances on the killing efficiency of the pathogens is therefore of great importance for its broad applications.For practical water treatment, the modified NWs can be combined with other conventional water treatment processes.For example, the influent water can be pretreated by an ultrafiltration module to remove most of the impurities 58 . Nevertheless, this study reports a methodology on exploiting fluidic energy to destroy pathogenic bacteria for the first time, which provides implications for the development of chemical-free disinfection technology to address global challenges in environment and healthcare.A superior inactivation of >99.9999% bacteria was achieved by simply flowing the water through the device.That is, besides the nanotip surface, flow of contaminated water is the only requirement to attain disinfection, which avoids toxic chemical byproducts and additional energy input.As a result, drinking water, municipal water and wastewater facilities as well as communities living in rural areas may benefit from this method of obtaining safe and clean water.It may also shed light on the development of future pathogenic control in other fields.Poly-L-lysine (0.01 wt%) was purchased from Sigma-Aldrich.LIVE/ DEAD BacLight Bacterial Viability Kit (L7007) and Vybrant DiO celllabeling solution (V22886) were obtained from Invitrogen, USA.Reactive Oxygen Species Assay Kit (NO.S0033) was purchased from Beyotime Biotechnology Co. Ltd., China.Deionized (DI) water was produced by Milli-Q Water System (Millipore, USA) and all the solutions were prepared by DI water unless otherwise mentioned. Fabrication of Cu(OH) 2 NWs, modified NWs and modified Cu foam Cu(OH) 2 NWs were synthesized on copper foam using chemical oxidation 59 .The copper foam with a thickness of 2 mm and an average pore size of 200 μm was cut into 3 × 4 cm 2 pieces and sequentially washed with ethanol, hydrochloric acid, and DI water to remove surface impurities.The cleaned copper foam was then immersed in 150 mL of an aqueous solution containing 2.5 M NaOH and 0.1 M (NH 4 ) 2 S 2 O 8 for 20 min at 4 °C to produce the Cu(OH) 2 NWs.It was removed from the solution, rinsed with DI water and dried in a vacuum oven. The modified NWs were prepared by a simple thermal treatment.The Cu(OH) 2 NWs were placed downwind of a tube furnace with glucose in the heating zone, which was used as the carbon precursor and pyrolyzed at 550 °C in an Ar atmosphere for 2 h.The evaporated carbon settled on the surface of the Cu(OH) 2 NWs to form the modified NWs.To prepare the modified Cu foam, the cleaned copper foam was subjected to the same treatment. 
Material characterization
The morphologies of the fabricated materials were analyzed by scanning electron microscopy (SEM, HITACHI SU8010) and transmission electron microscopy (TEM, FEI Tecnai G2 spirit). The ultrastructure of the modified NWs was examined by aberration-corrected TEM (ACTEM, JEM-ARM300) using the bright-field scanning TEM (BF-STEM) and high-angle annular dark-field scanning TEM (HAADF-STEM) modes. The crystal structures of the samples were studied by X-ray diffraction (XRD, D8 Advance). The wettability of the samples was investigated by a contact angle measuring instrument (KRUSS DSA30). The chemical compositions of the samples were analyzed by X-ray photoelectron spectroscopy (XPS, PHI 5000 VersaProbe II) and Raman spectroscopy (Horiba LabRAM HR800).

Bactericidal test
E. coli was used as a model pathogen to evaluate the bactericidal performance of the sample materials. Pure E. coli was cultured in nutrient broth at 37 °C with shaking at 150 rpm for 12 h to achieve a concentration of 10 9 -10 10 CFU mL −1 . The composition of the culture media is listed in Supplementary Table 7. The cultured bacteria were harvested by centrifugation and washed twice with sterile DI water. The prepared E. coli suspension was diluted with sterile DI water to obtain a concentration of 10 6 -10 7 CFU mL −1 .

Bactericidal tests were conducted in a flow-through Plexiglas cell with two pieces of prepared copper foam placed inside (Supplementary Fig. 10). The copper foam was 2 mm thick with an effective filtration area of 78.5 mm 2 . The flow rate of the water sample was fixed at 2.7 mL min −1 by a peristaltic pump, which corresponds to a contact time of 7 s and a flux of about 2 m 3 h −1 m −2 . Before the bactericidal test, the copper foam was washed with pure water in the flow-through cell for two hours to remove surface impurities. The fresh bacterial solution (10 6 -10 7 CFU mL −1 ) was used as the influent water and flowed into the cell. During the test, the influent water was mixed by a magnetic stirrer. Normally, the bactericidal tests were completed within one hour, and thus about 160 mL of water was treated for each test. For each type of material, three sets of flow-through cells were set up and operated independently. The bactericidal efficiency for each type of material was calculated based on the three replicates.

The live bacterial concentrations in the influent and effluent water were measured using a standard plate count method. The time to spread the bacterial solution on the plate for each sample was normalized to one hour after sampling. Each water sample was diluted serially (1:10, 1:100, 1:1000 and 1:10,000) and spread onto a sterile Petri plate (in triplicate for each dilution), to which cooled, molten nutrient agar medium had been added. Following 24-h incubation at 37 °C, the number of bacterial colonies formed on the plates was counted, and the concentration of bacteria in the original water sample was obtained by multiplying the number of colonies obtained per plate by the dilution factor.

The inactivation performance was evaluated by the logarithmic removal efficiency, defined as -log(C/C 0 ), where C and C 0 represent the bacterial concentrations in the effluent and influent water, respectively, obtained by plate count. When no colonies were formed on the plates, including the original and diluted water samples, the bacteria in the water sample were considered completely inactivated and the logarithmic removal efficiency was calculated as log(C 0 ).
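As a worked check of the numbers quoted in this subsection, the sketch below reproduces the flux, contact-time and log-removal arithmetic; the bed-volume estimate ignores the porosity of the foam, and the influent/effluent counts in the example are illustrative only.

```python
import math

area_m2 = 78.5e-6                    # effective filtration area (78.5 mm^2)
flow_mL_min = 2.7                    # peristaltic pump setting
flux = (flow_mL_min * 1e-6 * 60) / area_m2          # ~2.1 m3 h-1 m-2

bed_volume_mL = 78.5 * (2 * 2) / 1000               # two 2-mm-thick foams, ~0.31 mL
contact_time_s = bed_volume_mL / flow_mL_min * 60   # ~7 s, porosity ignored

def log_removal(c0_influent, c_effluent):
    """Logarithmic removal efficiency, -log10(C/C0), with C the effluent count."""
    return -math.log10(c_effluent / c0_influent)

print(f"flux ~{flux:.1f} m3/h/m2, contact time ~{contact_time_s:.0f} s")
print(f"log removal for 1e6 -> 1 CFU/mL: {log_removal(1e6, 1):.0f}")   # 6-log removal
```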
Live/dead viability assay
A LIVE/DEAD BacLight Bacterial Viability Kit was used to test the viability of the bacteria. The two dye components provided with the kit were mixed to achieve a concentration of 1.67 mM for SYTO 9 and 10 mM for propidium iodide, which provides good live/dead discrimination. 3 μL of the dye mixture was immediately added to 1 mL of the bacterial sample, mixed thoroughly and incubated at room temperature in the dark for 15 min. The bacterial sample was then filtered through a black polycarbonate membrane (diameter 25 mm, pore size 0.22 μm, Millipore, USA) and observed under a fluorescence microscope (Nikon, ECLIPSE Ni-U).

Bacterial sample preparation for SEM
The morphology of the bacteria was investigated by SEM. The bacterial samples before and after treatment were harvested by centrifugation at 9700 × g for 5 min and fixed with 2.5% glutaraldehyde at 4 °C for 12 h. Next, the bacterial samples were rinsed with water and dehydrated for 15 min with a series of ethanol/water solutions of increasing ethanol content (30%, 50%, 70%, 90%, 100%). The ethanol was then displaced with a series of tert-butyl alcohol solutions of increasing content (40%, 60%, 80%, 100%). Finally, the tert-butyl alcohol in the sample was removed by freeze-drying.

Bacterial sample preparation for TEM
The ultrastructure of the bacteria was assessed by TEM. The bacterial samples were first harvested by centrifugation and fixed with 2.5% glutaraldehyde at 4 °C for 12 h. After being washed with water three times and postfixed with 1% osmium tetroxide at 4 °C for 1 h, the samples were rinsed again and dehydrated for 15 min with a series of ethanol/water solutions of increasing ethanol content (30%, 50%, 70%, 90%, 100%). The samples were then infiltrated with resin and cured overnight at 60 °C to form resin blocks. The resin blocks were sectioned and picked up on a TEM grid. Finally, the samples on the grid were stained with a 3% aqueous uranyl acetate solution and a 4% lead citrate solution for 10 min.

Bacterial sample preparation for AFM
E. coli stock solution with a concentration of 10 9 -10 10 CFU mL −1 was prepared as discussed earlier. A glass slide coated with poly-L-lysine (Sigma, 0.01 wt%) was incubated in a 100 μL bacterial solution for 20 min and then rinsed with sterile DI water at least three times to remove the unattached cells. The immobilized bacteria were used immediately for AFM experiments and placed in water during the test to prevent cell dehydration.

Cell-tip adhesion measured by AFM
To prepare the carbon-coated AFM probe, an AFM tip (MLCT-bio, Bruker) was treated by the same method used for the modification of the Cu(OH) 2 NWs. It was washed with ethanol and DI water to remove surface impurities, and dried in air before use. The morphologies of the original and the carbon-coated MLCT-bio tips are shown in Supplementary Fig. 5. The adhesion force between the AFM tip and the bacterial membrane was analyzed in the force-volume mode by an Asylum AFM (MFP-3D-SA) according to the literature 18 . The cantilever deflection sensitivity was first calibrated in DI water on the surface of a clean glass slide (not covered with bacteria). The tip was then aligned at the center of an area where E.
coli cells were evenly distributed under inverted optical microscope.Force versus deflection curves were acquired from a 7 × 7 μm 2 area which was divided into 32 × 32 grids.The cantilever force constant was about 0.065 N m− 1 , and the maximum loading force was set at 1 nN.The experiments were performed at a frequency of 0.5 Hz, with the cantilever velocity around 1.98 μm s −1 .The tip with or without carbon coating was brought into contact with E. coli and the adhesion force was displayed on the retract curve. Cell mechanics measured by AFM Mechanical measurements of E. coli cell were carried out by a puncture test, which were performed in the force-volume mode as mentioned above.An MLCT-bio tip was used, and the tip radius was measured to be 40 nm using SEM.The cantilever force constant was about 0.046 N m −1 , and the maximum loading force was set at 10 nN to allow cell penetration.The experiments were performed at a frequency of 1 Hz, with the cantilever velocity around 1.98 μm s −1 .The Young's modulus (E) of the E. coli sample was obtained by fitting the indentation curve obtained from the approach curve using a Sneddon or Hertzian model 60 . Structured-illumination microscopy (SIM) The E. coli samples were stained with propidium iodide (Invitrogen, USA) for 10 min, and then mixed with a commercial lipophilic cyanine dye DiO (Vybrant cell labeling solution, Invitrogen, USA) for another 5 min.The DiO stains bacterial cell membrane while propidium iodide binds only to the DNA of cells with a compromised membrane.The bacterial solution was transferred to a glass bottom dish and immediately imaged using the Nikon N-SIM apparatus with a 100× oil objective and laser excitation for DiO (488 nm), propidium iodide (561 nm).High-resolution images were acquired by 3D-SIM and reconstructed by slice 3D-SIM. Optical density measurement of the influent and effluent water E. coli suspension with a concentration of 4 × 10 6 CFU mL −1 (determined by plate count) was used as the influent water.Before the test, the modified NWs were washed by pure water in the flow-through cell for two hours to remove surface impurities.Effluent water was collected at different flow rates (0.5, 2 and 6 m 3 h −1 m −2 ).Optical density data of the influent and effluent water were measured at a wavelength of 600 nm and an optical pathlength of 5 cm using a HACH spectrophotometer (DR3900).DI water was used as the blank.The effluent samples of DI water were collected at the same flow rates to exclude the potential influence of released particles on optical density. Measurement of reactive oxygen species (ROS) The intracellular levels of ROS in bacteria were determined by a Reactive Oxygen Species Assay Kit (Beyotime, No. 
S0033) containing the fluorescent probe DCFH-DA and a Rosup reagent. Fresh bacterial suspension (10 6 -10 7 CFU mL −1 ) was first treated with 100 μM DCFH-DA in the dark for 1 h at 37 °C and then washed three times with sterile DI water to remove the residual DCFH-DA 61 . The bacterial suspension loaded with the probe without further treatment was used as the negative control. For treatment by the modified NWs, the bacterial suspension was flowed into the flow-through cell at a flux of 2 m 3 h −1 m −2 . The bacterial suspension was also treated with 100 mg L −1 Rosup reagent for 20 min to induce ROS generation as a positive control 62 . Aliquots of 100 μL were taken from each sample and added into a 96-well black plate to determine the relative fluorescence intensity on a microplate reader (Molecular Devices, SpectraMax i3) at excitation/emission wavelengths of 488/525 nm.

Measurement of Cu concentration in the effluent
The concentration of Cu released in the effluent was measured by inductively coupled plasma mass spectrometry (ICP-MS, Thermo Fisher X Series). Briefly, a set of 15-mL aliquots was collected from the effluent at different sampling times. Before measurement, each aliquot was dosed with nitric acid to a concentration of about 1 wt% and filtered through a 0.45 μm membrane (Millipore, USA).

Bactericidal performance of Cu 2+
A series of solutions containing 0.2, 0.4, 0.6, 0.8, 1.0 mg/L Cu 2+ was prepared using CuSO 4 ⋅5H 2 O. Then the E. coli suspension (10 9 -10 10 CFU mL −1 ) was dosed into the Cu 2+ solutions to obtain a concentration of 10 6 -10 7 CFU mL −1 . These solutions were incubated for one hour and the live bacterial concentrations were measured using the plate count method. The bactericidal tests at the different Cu 2+ concentrations were repeated three times.

Bactericidal test of the modified NWs at different flow rates
For the bactericidal test in the static condition (i.e., flow rate of zero), two pieces of copper foam with the modified NWs were first washed with pure water in the flow-through cell for two hours to remove surface impurities. Then the modified NWs were taken out and immersed in a 160-mL E. coli suspension with a concentration of 10 6 -10 7 CFU mL −1 , which corresponds to the treated volume in the flow-through cell at a flux of 2 m 3 h −1 m −2 for an hour. The live bacterial concentration was measured using the standard plate count method after being incubated for one hour. The bactericidal test in the static condition was repeated three times. The bactericidal test in the flow condition was conducted as introduced above. Specifically, the flow rate was controlled at 0.65, 2.7, and 7.8 mL min −1 , corresponding to a flux of 0.5, 2 and 6 m 3 h −1 m −2 , respectively.

Bacterial storage experiment
E. coli (CGMCC 1.3373), P. aeruginosa (CGMCC 1.12483), and S. aureus (CGMCC 1.12409) were cultured at 37 °C with shaking at 150 rpm for 12 h, washed to remove the nutrient medium and diluted with DI water to obtain a bacterial suspension of 10 6 -10 7 CFU mL −1 . The bacterial solution was flowed through the disinfection cell with the modified NWs at a flux of 2 m 3 h −1 m −2 . The influent and effluent solutions were collected and stored at 25 °C under visible light illumination, which represented a typical natural aquatic environment. The bacterial concentration of each sample was measured at a series of storage times from 0 h to 24 h.

Disinfection in real water samples
The reclaimed water sample was collected from the secondary effluent of a wastewater treatment plant (Shenzhen, China). The tap water was collected from the tap water faucet in the lab. The characteristics of the water samples are shown in Supplementary Table 5. Both were filtered through a 0.22 μm membrane to remove the indigenous bacteria. The E. coli stock solution was diluted in the two water samples to obtain a concentration of 10 6 -10 7 CFU mL −1 . The prepared solutions were flowed through the disinfection cell with the modified NWs at a flux of 2 m 3 h −1 m −2 . Storage experiments were conducted to evaluate the disinfection performance in the real water samples.

Long-term disinfection
To perform the long-term disinfection test, twelve chamber units were stacked in series (Fig. 4d inset); each unit had an outer diameter of 90 mm and a thickness of 10 mm. Each chamber unit contained two pieces of copper foam with a size of 2 × 2 cm and a thickness of 2 mm. The effective filtration area for each chamber was 2.3 cm 2 and the effective filtration volume for the twelve chambers was calculated to be 10.9 mL (V 0 ). E. coli stock solution was diluted with sterilized water to prepare the feed water with a concentration of 10 3 -10 4 CFU mL −1 . During the test, the feed water continuously flowed into the system and the flow rate was fixed at 2.7 mL min −1 . The feed water was replenished with a newly prepared E. coli stock solution every two days, and the concentrations of live bacteria in the influent and effluent were measured by the standard plate count method. The time to spread the bacterial solutions on the plate was normalized to one hour after sampling. The treating capacity is defined as the ratio of the actual treated volume (V water ) to the effective volume of the chambers (V 0 ). During the 30-day test, 116.64 L of feed water was treated, which is over 10,000 times V 0 . Therefore, for a chamber volume of 1 L, an estimated 10,000 L of feed water can be purified, which can supply the drinking water consumption of an adult (2 L per day) for more than ten years.
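A back-of-the-envelope check of the treating-capacity and lifetime figures quoted above; all input values are taken from the text, and the scaling to a 1-L chamber assumes the same capacity ratio holds.

```python
# Rough check of the treating-capacity arithmetic (values taken from the text)
flow_mL_min = 2.7
days = 30
V_water_L = flow_mL_min * 60 * 24 * days / 1000   # ~116.6 L treated in 30 days
V0_L = 10.9 / 1000                                # effective chamber volume, L
capacity = V_water_L / V0_L                       # ~1.07e4, i.e. >10,000 x V0

litres_for_1L_chamber = 1.0 * capacity            # scale to a 1-L chamber
years = litres_for_1L_chamber / 2 / 365           # at 2 L per day, ~14.7 years
print(round(capacity), round(years, 1))
```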
Disinfection experiments with three different nanotip surfaces
ZnO nanorods were grown on the copper foam by a chemical precipitation method 63 . The cleaned copper foam (2 × 5 cm 2 ) was transferred into a Teflon-lined autoclave with 40 mL of an aqueous solution containing 4.5 g zinc nitrate hexahydrate, 2.1 g hexamethylenetetramine, and 2 mL ammonia, and heated at 90 °C for 15 h. The prepared sample was washed with DI water and dried in a vacuum oven.

Co, Mn-layered double hydroxide (LDH) nanoneedles were fabricated on nickel foam using a hydrothermal approach 64 . 0.72 g urea, 0.37 g NH 4 F, 0.40 g MnCl 2 •4H 2 O and 1.2 g Co(NO 3 ) 2 •6H 2 O were dissolved in 80 mL DI water and transferred into a 100 mL Teflon-lined autoclave. A piece of nickel foam (2 × 5 cm 2 ) was cleaned and soaked in this solution. The autoclave was then sealed and placed in an oven at 120 °C for 6 h. Finally, the nickel foam was washed and dried in a vacuum oven.

Titanate nanowires were synthesized on a titanium (Ti) substrate by a one-step hydrothermal method 65 . A piece of Ti foam was first cleaned and transferred into a Teflon-lined autoclave with 40 mL of 1 M NaOH solution. Afterwards, the vessel was sealed and placed in an oven at 220 °C for 4 h. Finally, the prepared sample was washed and dried in a vacuum oven.
To prepare the modified ZnO nanorods, Co, Mn-LDH nanoneedles, and titanate nanowires, the same carbon-coating method was used as for the treatment of the Cu(OH) 2 NWs. In the disinfection experiment, a bacterial suspension with 10 6 -10 7 CFU mL −1 E. coli was flowed through the disinfection cell with the modified or unmodified nanostructured surfaces at a flux of 2 m 3 h −1 m −2 . The live bacterial concentration in the effluent was measured at a normalized storage time of 5 h.

All-atom molecular dynamics (MD) simulation
The amorphous carbon materials were constructed as follows. First, carbon atoms were randomly and approximately uniformly placed in a box of 6 nm × 6 nm × 6 nm to give a density of 3 g cm −3 . An amorphous carbon cube with a side length of 5 nm was extracted from this structure. Then, parameters of sp 2 carbon atoms based on the INTERFACE force field were assigned to the amorphous carbon atoms 66 , with a van der Waals (vdW) diameter σ 0 = 0.355 nm and a well depth ε 0 = 0.293 kJ mol −1 . Finally, elastic network restraints were added to the bonds between adjacent carbon atoms to achieve a relatively rigid amorphous carbon material. CHARMM36m all-atom force field parameters 67 for 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoethanolamine (POPE) lipids, the transferable intermolecular potential 3P (TIP3P) water model and ions (Na + , Cl − ) were used. The CHARMM-GUI webserver 68 and GROMACS tools were used to set up all simulation systems 69 , and the simulations were performed using the GROMACS software (version 2019.4). For the interactions of materials with different vdW forces, we artificially adjusted the well depth (ε) of the carbon cube material (1/4ε 0 , 1/2ε 0 , 7/8ε 0 , 15/16ε 0 , 2ε 0 ). The dimensions of the initial box were 10.0 nm × 10.0 nm × 14.3 nm, and the system consisted of one amorphous carbon cube, 338 POPE lipids, 30,667 water molecules and 150 mM NaCl. The carbon cube was initially placed close to the POPE lipid bilayer and then went through a 100-ns free MD simulation. A time step of 2 fs, a temperature of 310 K and periodic boundary conditions were used for all MD simulations. System snapshots and movies were generated by Visual Molecular Dynamics (VMD) 70 .
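As a rough illustration of the construction step described at the start of this subsection (not the authors' setup script), the number of carbon atoms required to reach 3 g cm −3 in the 6-nm box can be estimated as follows.

```python
N_A = 6.02214076e23    # Avogadro constant, 1/mol
M_C = 12.011           # molar mass of carbon, g/mol

edge_cm = 6e-7                           # 6 nm box edge expressed in cm
volume_cm3 = edge_cm ** 3
n_atoms = 3.0 * volume_cm3 / M_C * N_A   # mass / molar mass * Avogadro
print(f"~{n_atoms:.0f} carbon atoms")    # on the order of 3e4 atoms
```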
Simulation of bacterial motion inside a copper foam
Bacterial motion inside a porous copper foam was simulated using the software package ANSYS Fluent (2020 R2). First, we built a 3D model of the copper foam and imported it into ANSYS Fluent to solve the continuity and momentum equations. The domain of the computing model was set to be a cuboid with a size of 520 × 420 × 420 μm 3 . The laminar flow inside the model was calculated by the Fluent computational fluid dynamics (CFD) code following the conservation of momentum and mass 71 , given by: where ρ is the density of the fluid, u is the velocity vector of the fluid, t is the time, p is the pressure of the fluid, I is the identity tensor, and F is the volume force vector. The viscous stress tensor Κ is defined by: where μ is the dynamic viscosity and T is the absolute temperature. The bacterial motion in the defined flow field was obtained by the discrete element method solver. For simplicity, the bacterium was approximated as a spherical particle with a diameter of 1 μm. The motion of a bacterium in fluid flow is described by Newton's second law: where m p is the mass of the bacterium, v is the velocity of the bacterium, and F D , F G , F b are the drag, gravity, and Brownian force, respectively. For small particles dispersed in a liquid, the gravitational effect is negligible. The drag force is defined as: where τ p is the bacterial velocity response time. The Brownian force that causes bacterial diffusion is given by: where Δt is the time interval taken by the solver, r p is the particle radius, k B = 1.380649 × 10 −23 J K −1 is the Boltzmann constant, and ζ (dimensionless) is a normally distributed random number with a mean of zero and unit standard deviation. The left side of the model was defined as the velocity inlet and the particle generation surface, while the right side was defined as the outlet boundary. Symmetrical boundary conditions were assumed for the lateral sides of the domain. Bacteria were assumed to stick to the wall of the foam model once they struck it. The influent velocity was 1 mm s −1 and the flow was allowed to evolve until it was fully developed in the entire domain. A total of 1681 bacteria were then released and allowed to move. The number of bacteria that collided with the walls of the copper foam and the number of bacteria that escaped from the outlet boundary were calculated.

Types of contact between bacteria and nanotips during flow
The movement of the E. coli cells near the nanowires was simulated by a Brownian dynamics and computational fluid dynamics method 45 , in which the flow field near the nanotips was calculated by the CFD method and imported into the Brownian dynamics equation used to simulate the movement of a bacterium.

The velocity of the flow field near the nanowires (V) was obtained by solving the Navier-Stokes equation: where ρ is the density, p is the pressure, and μ is the viscosity of the fluid. A 5 × 5 tip array was used to represent the nanowires on the copper foam. The cylindrical tips were perpendicular to the surface and the gap between two tips was set as 1 μm. The diameter of a tip was set as 200 nm and the length as 5 μm. We assumed the fluid impacted the nanotips at an angle of 45° to simplify the real conditions, and the velocity of the main flow was set as 5.5 × 10 −4 m s −1 , corresponding to a flow rate of 2.7 mL min −1 in the experimental conditions. A no-slip boundary condition was set at the walls of the tips. The calculated velocity field is shown in Fig. 3c.
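For orientation, standard textbook forms of the flow and particle-motion equations that are consistent with the symbol definitions in the two simulation subsections above are sketched here; the exact expressions used in the solvers (in particular the prefactor of the Brownian force) may differ, so this should be read as a reference sketch rather than the authors' implementation. In the stress-tensor expression the superscript T denotes the transpose, while the absolute temperature T enters through the Brownian force.

```latex
% Reference forms only; solver-specific expressions may differ.
\begin{align}
  &\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0, \qquad
   \rho\left(\frac{\partial \mathbf{u}}{\partial t}
     + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
     = \nabla\cdot\left(-p\,\mathbf{I} + \mathbf{K}\right) + \mathbf{F},\\
  &\mathbf{K} = \mu\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{T}\right),\\
  &m_p \frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t}
     = \mathbf{F}_D + \mathbf{F}_G + \mathbf{F}_b, \qquad
   \mathbf{F}_D = \frac{m_p}{\tau_p}\left(\mathbf{u} - \mathbf{v}\right),\\
  &\mathbf{F}_b = \zeta \sqrt{\frac{12\pi\, k_B\, \mu\, T\, r_p}{\Delta t}}.
\end{align}
```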
The E. coli cell was simplified using a bead-stick model with two beads to capture both translation and rotation (Supplementary Fig. 25). The velocity and location of the ith bead of the bead-stick model at a certain time were calculated by the Langevin equation: where r i is the position of the ith bead of the bacterium, V(r i ) is the velocity of the flow field at r i , and ξ is the drag coefficient of a bead. F B i (t), F S i (t), and F EV,Wall i (t) are the Brownian force, the spring force of the stick between the two beads, and the repulsive force between the bead and the walls, respectively. The simulation was terminated when the bacterium moved out of the area of the flow field or collided with the tips. The movement of 100 bacteria was simulated. The number of bacteria that collided with the tip array was counted and the types of contact were determined.

Stress distribution profile of bacterial cell envelope
The finite element method was used to explore the bacterial deformation, which was conducted using the software ABAQUS 6.11. The E. coli cell envelope was modeled as a spherocylindrical shell with a diameter of 500 nm and a length of 1 μm. A Young's modulus (E) of 0.5 MPa, obtained from the AFM measurements (Supplementary Table 3), was assigned to the entire envelope.

The critical stress of the bacterial cell envelope was simulated according to the puncture test (Fig. 3a and Supplementary Fig. 17). The half opening angle of the MLCT-bio tip was set as 30° and the tip radius was 40 nm. The displacement of the tip was set as zero and the displacement of the cell was set as 100 nm. The stress profile of the cell envelope was then calculated, and the maximum stress was taken as the critical stress required to rupture the cell.

The stress distribution profiles of a bacterium during the collision and tearing processes were calculated. The nanowire was simplified as a cylindrical tip with a diameter of 200 nm. A single-shell model was used for the E. coli cell, as in the computation of the critical stress. Two types of contact (end-contact and middle-contact) between a bacterium and a tip were simulated, and the displacement of the tip was set as zero. In the case of collision, when the work done by the tip equaled the complete kinetic energy loss of a bacterium, the cell encountered the maximum indentation from the collision process. The kinetic energy of a bacterium (E k ) originates from the flow and is given by E k = 1/2 mv 2 , where m is the mass of a bacterium and v is its velocity. The maximum kinetic energy loss was calculated to be 2 × 10 −25 J and the maximum work done by the tip on a bacterium was obtained. The stress distribution profile in this case was calculated and compared with the critical stress for failure. In the case of tearing, the drag force of the flow was calculated, which equaled the adhesive force of the tip in the attached state. In a laminar flow, the drag force (F D ) can be determined by the classic Stokes law F D = 3πμdΔu, where μ is the dynamic viscosity of the fluid, d is the diameter of the particle and Δu is the relative velocity of the fluid with respect to the particle 72 . Here, the diameter of the bacterium was assumed to be 0.5-1 μm, and F D was calculated to be 2.3-4.7 × 10 −13 N. The stress distribution profiles were simulated under this reaction force for the two types of contact and compared with the critical stress for failure.
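A quick numeric check of the Stokes-drag estimate above; the dynamic viscosity of water (~1.0 × 10 −3 Pa s) and the use of the near-surface relative velocity of 5 × 10 −5 m s −1 are assumptions consistent with values reported earlier in the text, not parameters stated explicitly in this paragraph.

```python
import numpy as np

mu = 1.0e-3                      # dynamic viscosity of water, Pa s (assumed)
du = 5.0e-5                      # relative fluid velocity near the tips, m/s (assumed)
d = np.array([0.5e-6, 1.0e-6])   # assumed bacterial diameters, m

F_D = 3.0 * np.pi * mu * d * du  # classic Stokes drag
print(F_D)                       # ~[2.4e-13 4.7e-13] N, consistent with 2.3-4.7e-13 N
```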
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Fig. 1 | General principles of the hydrodynamic-bactericidal mechanism. a, b Interaction between the bacteria and the nanostructured surface with a weak London dispersion interaction in a fluidic environment. The water flow prompts the bacterial cells to collide with the nanostructure, and the bacteria only encounter resilient cell deformation. c, d The strong dispersion interaction between the bacteria and the nanostructured surface allows cells to be trapped and torn up by water flow. a-d are produced by Photoshop CC 2022. e Initial and final configurations of the POPE lipid bilayer interacting with materials of different well depth (ε) values during a 100-ns MD simulation. The material is shown in green beads and the lipids in the membrane are shown in lines. Changes with time in the interaction energy (f) and center-of-mass (COM) distance (g) between the tested materials and the membrane. The interaction energy is in the form of van der Waals forces. Source data are provided as a Source Data file.

Fig. 2 | Bactericidal performance analysis. TEM (a) and ACTEM (b) images of the modified NWs. c AC-BF-STEM and AC-HAADF-STEM images of the modified NWs and corresponding elemental map. d Raman spectra of the Cu(OH) 2 NWs and the modified NWs. e Cell-tip adhesion analysis by AFM. The amorphous carbon coated AFM tip showed a distinctive hysteresis during retraction from the cell surface (blue line). f Bactericidal performance of the modified NWs, Cu(OH) 2 NWs, and modified Cu foam. Data in f are presented as mean ± SD with n = 3 independent experiments. g Cell-culture plates showing E. coli concentrations in the initial and treated water samples. h Fluorescence microscope images for E. coli in the initial and treated water samples (live cells are stained in green and dead cells are stained in red). SEM images showing morphologies of the initial E. coli (i) and E. coli treated by the modified NWs (j). Inset shows a single E. coli at a higher magnification. TEM images showing the ultrastructure of the initial E. coli (k) and E. coli treated by modified NWs (l). Morphologies of modified NWs before (m) and after disinfection (n). o Comparison of bactericidal efficiency between the modified NWs and comparable mechano-bactericidal activities. Observations using SEM or TEM (a-c, i-n) were repeated three times independently with similar results. Source data are provided as a Source Data file.

Fig. 3 | Cell rupture by the hydrodynamic-bactericidal effect. a A typical puncturing curve of E. coli obtained with AFM. The inset shows the fit of the data in the early part of indentation using the Sneddon model. b Finite element method simulation of the cell penetration process of the AFM tip. c The calculated velocity field of the simulated area. A right triangle domain was selected for computation, in which the hypotenuse was defined as the inlet boundary, while the other two sides were defined as the outlet boundaries. d Possibilities of the four types of contact between the bacteria and the nanotips. Stress distribution profiles of the cell membrane during the collision process by end-contact (e) and middle-contact (f). Stress distribution profiles of the cell membrane during the tearing process by end-contact (g) and middle-contact (h). The maximum stresses exerted at each contact form are denoted in red, and are compared with the critical stress (0.05 MPa). Source data are provided as a Source Data file.
Fig. 4 | Practical disinfection applications. a Schematic of the hydrodynamic-bactericidal mechanism for practical water disinfection (produced by Photoshop CC 2022). b Bacterial storage experiments under visible light for 24 h using three representative bacteria including Gram-negative E. coli, P. aeruginosa, and Gram-positive S. aureus. The lines with hollow circles represent bacteria without treatment and the lines with solid circles represent bacteria after treatment. c Disinfection in real water samples including tap water and reclaimed water. d Influent and effluent bacterial concentrations during a 30-day field test. Inset shows optical images of the model flow-through disinfection apparatus. Each unit has an outer diameter of 90 mm and a thickness of 10 mm. The treating capacity is defined as the ratio of actual treated volume (V water ) to the effective volume of the chambers (V 0 ). e Bactericidal performance of the unmodified and modified ZnO nanorods, Co, Mn-LDH nanoneedles and titanate nanowires. Data are presented as mean ± SD with n = 3 independent experiments for (b, c, e), and n = 3 independent measurements for (d). # indicates below detection limit 1 CFU mL −1 . Source data are provided as a Source Data file.
v3-fos-license
2024-03-11T06:16:43.403Z
2024-03-09T00:00:00.000
268295316
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "2b8041fd5fc34c591b53ba48101659ad23250e44", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42150", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "sha1": "3ef8c8456e3f1f964269825128577221bf935300", "year": 2024 }
pes2o/s2orc
Athlete Mental Health and Wellbeing During the Transition into Elite Sport: Strategies to Prepare the System The transition into elite-level sport can expose young athletes to risk factors for mental ill-health, including increased performance expectations, stressors associated with becoming increasingly public figures, and changes in lifestyle demands, such as diet, training loads and sleep. Successful integration into elite-level sport requires athletes to quickly adapt to these newfound challenges and the norms and culture of the new sport setting, while developing relationships with teammates, coaches, and support staff. Despite these demands, the mental health experiences of athletes transitioning into elite-level sport have been largely neglected in sport psychology literature. This is reflected in the dearth of programs for supporting mental health during this career phase, particularly relative to retirement transition programs. In this article, we offer a preliminary framework for supporting athletes’ mental health during the transition into elite-level sport. This framework is based on holistic, developmental, and ecological perspectives. Our framework outlines a range of recommendations for promoting mental health and preventing mental ill-health, including individual-level, relational, sport-level, and sociocultural-level strategies. Key recommendations include preparing athletes for the challenges they are likely to face throughout their athletic careers, highlighting athletes’ competence earlier in their careers, developing supportive relationships in the sport setting, and fostering psychologically safe sporting cultures. Supporting mental health from earlier in the athletic career is likely to promote athletes’ overall wellbeing, support enjoyment and retention in sport, and encourage help-seeking. Introduction Elite athletes-or athletes who undertake specialised training and compete at national or international levels [1,2]-experience a range of sport-related transitions over the course of their athletic careers [3].Sportrelated transitions experienced by elite athletes include sport specialisation, progression to higher levels of performance, selection and deselection, pre-and postmajor competition, injury and recovery, relocation, and retirement [4,5].While transitions to retirement have been considered from a mental health perspective, little attention has been paid to supporting mental health during the transition into elite sport settings. Transitioning out of sport due to planned or unplanned/involuntary retirement (often due to serious/chronic injury or performance decline) is associated with adjustment difficulties, loss of identity and mental ill-health [6,7].The latter includes elevated rates of depression, suicidality, anxiety, low self-esteem, and substance use [6,[8][9][10][11][12].Given the importance of athlete adjustment when transitioning out of elite sport, sporting organisations are increasingly investing in programs to support athletes during this career phase, recognising that this transition is a process rather than a single-occasion activity [7,13].Such programs are critical given the comparatively young age at which most elite athletes 'retire' relative to their community counterparts, often necessitating career and identity 'reinvention' . 
The lack of focus applied to athletes' transitions or onboarding into elite sport settings is surprising, given that this is a phase associated with a range of potentially destabilising factors.Here we advocate for a framework to better support and protect athlete mental health during the transition into elite sport environments.Based on principles of early intervention, we argue for an explicit focus on supporting elite athlete mental health via: 1. Equipping athletes with adequate knowledge about elite sport and challenges associated with transitioning into these settings; 2. Equipping athletes with mental health literacy and self-management skills to promote early recognition of, and response to, possible adjustment difficulties or mental ill-health associated with the transition; and 3. Equipping sporting organisations (including coaches and team managers) with strategies for promoting healthy athlete development, identity formation, and mental health from the earliest point of their professional elite careers. Demands Associated with the Transition into Elite-Level Sport Athlete outcomes following career transitions are impacted by the type of transition (i.e., reasons underlying the transition and its level of predictability), the way the transition is appraised (i.e., favourably vs. unfavourably), internal characteristics (e.g., resilience, optimism), available resources (e.g., social support) and the coping strategies implemented when responding to the transition [14][15][16].Transitions that are unpredictable and/or unaligned with an athlete's goals can precipitate mental ill-health, particularly in the absence of sufficiently effective coping strategies and resources [8,17,18].Transitioning to higher levels of competition likely aligns with the goals of most athletes.Such progression indicates increasing athletic proficiency, often providing access to benefits, such as greater financial and public recognition.Nonetheless, transitioning into elite competition also presents complex demands that can contribute to mental ill-health [19].Such demands include the rapid introduction to more regimented structures (e.g., changes to diet, sleep, training load) and to established teams, including coaching staff, high performance support staff and teammates.These changes occur alongside exposures to known risk factors for mental ill-health, such as relocation, extended periods of travel, reduced connection to social networks, and increased performance pressure [5,[19][20][21][22][23][24].Athletes may also experience stressors related to greater selection competition, risk of de-selection, greater living and financial independence, and newfound demands related to becoming an increasingly public figure, including public scrutiny, social media abuse, pressure to be a role model, and sponsorship requirements [20,25,26].While navigating these changes, athletes in the onboarding phase are also tasked with learning about-and quickly adapting to-the systems, processes, norms, and culture within the new sporting environment. 
Former athletes reflecting on their career progression have reported both positive and negative impacts of transitioning into higher levels of competition [23].Positive impacts included feeling more competent in their athletic abilities and greater commitment to their sport as their careers advanced.However, they also described intensive training loads, insufficient time with loved ones and challenges in the sporting context (e.g., conflict with coaches, frustration associated with lacking autonomy about selection and organisational decision-making). Increased performance expectations are significant sources of stress for many athletes transitioning to higher levels of competition [19,21,22,24,27,28].Elevated performance expectations can be internal, imposed by athletes themselves, or external, imposed by coaches, teammates and/or caregivers [27][28][29].Higher expectations can result in perceived inadequacy, including normative comparison with athlete peers and fear of not meeting others' expectations, in addition to self-doubt and increased pre-competition performance anxiety [21,22,29].Relatedly, maladaptive sporting role perfectionism may be worsened by needing to 'earn' playing time at the elite level through consistently strong performance [22].This pressure can be compounded by sponsorship and financial pressures in elite-level sport [19]. Due to feeling underprepared for the transition in terms of higher performance demands and changes associated with the new environments, athletes entering higher levels of competition describe feelings of uncertainty and lacking control [21].Some describe that unfamiliarity with the team, club and organisational culture feed into poor clarity regarding behavioural expectations [27].Relational issues within the sport setting may be experienced during this phase, with some athletes reporting conflict with coaches and difficulty coping with negative performance feedback [22,24,27,30]. As a consequence of dedicating more time and commitment towards their sporting roles, athletes' life balance in other domains (e.g., social and academic/vocational pursuits outside sport) can be compromised during this transition (e.g., [28]), which risks athlete identity foreclosure, or the commitment to the athletic role and identity at the expense of exploring other aspects of the identity [31,32].While a strong athletic identity can protect against burnout and increase enjoyment inand commitment to-the sporting role [33,34], athlete identity foreclosure is a risk factor for mental ill-health, particularly among injured, retiring or recently retired athletes [35][36][37]. Finally, the age at which most athletes transition into elite-level sport overlaps significantly with the peak age of onset for mental ill-health [38,39].Throughout this paper, 'elite youth athletes' generally includes those aged 12-17 years, as recently recommended by Walton and colleagues [40].However, age of entry into elite sport (including both professional sporting codes and Olympic level sports) can differ significantly according to sport type, and in some circumstances, athletes transitioning into elite sport may comprise those aged under 12 years or 18 years or older [41].We argue that athletes' vulnerability to mental ill-health due to their age and developmental stage, coupled with the demands of the transition into elite sport, warrants a focus on how best this transition can be supported. 
The changes and stressors experienced during the transition into elite sport can lead to adjustment difficulties, mental ill-health, loss of enjoyment in sport, emotional and physical exhaustion, overtraining, injury, and burnout [22,30,42].Despite this, barriers to help-seekingincluding mental health stigma, lack of awareness about support resources, and concerns about the consequences of seeking help (e.g., de-selection)-may be particularly salient to athletes entering into elite sporting environments [43,44]. Despite the varied challenges associated with the transition from pre-elite to elite sport and associated risk of experiencing mental ill-health, current emphasis in sport psychology literature is biased towards consideration of mental health during the athletic career and after the athletic career, rather than the entry into elite settings. The Sport Setting as a Key Context for Athletes' Development Given the time, commitment and dedication required for an athlete to reach and maintain elite status, the sport setting is a space for not only athlete talent development, but also for the development of the person as a whole, including their social relationships, character, sense of self and worldview [42,45].Elite youth athletes typically experience sport-related transitions in parallel with key developmental tasks, such as identity exploration, autonomy development, and developing future life goals [42,46,47].Athletes transitioning into elite sport often recognise that their career progression comes at the expense of other areas of life and strengthens their commitment to the athletic role and identity [19].While athlete retirement transition programs recommend preventing athletic identity foreclosure throughout the athletic career [4,48], little attention to preventing this risk has occurred during the transition into elite sport. Existing Programs for Supporting Athletes Transitioning into Elite Sport We are aware of only three programs for supporting athletes' transitions into elite sport.Larsen and Alferman [49] developed educational workshops highlighting challenges soccer players may experience when preparing to move to the professional level, assisting athletes to develop strategies for preparing for the transition (e.g., coping strategies, goal setting, and psychological skill development).A program evaluation indicated that players reported several benefits, including access to transparent information about transition-related challenges and developing relationships within the club before the transition (via incorporating role models such as senior players and coaching staff into the program).Coaches also reported benefits related to individualised goal setting activities, which helped players maintain motivation towards pursuing their career goals [49]. 
Another promising program, designed by Pummell and colleagues, was implemented and evaluated at an international high-performance tennis centre in the United Kingdom [50].Elite junior-level tennis players participated in 10 workshops, which involved discussion about the upcoming transition into senior-level elite tennis.This included equipping players with realistic expectations about the transition (e.g., higher performance expectations and requirement for greater responsibility, independence, and discipline), and videos of senior elite players reflecting on their transition experiences and helpful coping strategies.An evaluation indicated increased knowledge among players about transition demands and readiness to cope with the transition [50]. Cupples and colleagues [51] developed a program for youth Rugby League players transitioning into high-performance environments.The program aimed to support effective transitions via upskilling coping strategies, such as problem-solving, support seeking, and breathing techniques.An evaluation found increased task-based coping (e.g., problem solving, seeking coach feedback on selection and training errors) and decreased use of avoidance coping, but found no changes in wellbeing or supportbased coping.To our knowledge, no programs specifically designed to target the promotion of, and support for, mental health during the entry into elite sport exist, and there are no frameworks to inform the development of such programs. A Framework for Supporting Athlete Mental Health During the Transition into Elite Sport In the general population, favourable outcomes associated with prevention and early intervention for mental ill-health are well-established [52,53].We propose that onboarding programs that provide athletes with the knowledge and skills to navigate the transition into elite sport will lead to more favourable mental health, wellbeing, performance, and retention outcomes in the elite sport system [47,48].Athletes will be better placed to respond to both sport-related and non-sport related stressors and transitions if they are equipped with effective coping strategies and skills to build resilience, and if the sporting environment they are entering recognises these challenges and provides the necessary resources to support this transition [4,10].Figure 1 presents an overarching framework for supporting mental health and wellbeing in athletes transitioning into elite sport settings, which is accompanied by recommendations throughout Sects."Strategies for Addressing Individual-Level Mental Health Risk and Protective Factors"-"Strategies for Addressing Sociocultural Mental Health Risk and Protective Factors". 
Our framework is predicated on the duty of care that elite sporting organisations have in developing and maintaining safe and supportive athlete environments.The framework is informed by holistic and developmental [45,54] and ecological perspectives [48,55], most notably, works from Wylleman and Stambulova [42,45,54,56].The framework is also informed by a recent narrative review by Walton and colleagues [40], and Sabato and colleagues' recommendations for supporting elite youth athlete physical and emotional health [2].The holistic and developmental perspective-largely informed by Wylleman and Stambulova's works-conceptualizes the athlete as a whole person, rather than only a sportsperson [15], recognising that healthy development includes vocational/academic, psychosocial, and financial domains, and that changes in each of these occur over the course of athletes' lives and careers [42,45,54].Within an ecological perspective, a range of risk and protective factors may influence athlete mental health at the individual, social, sporting, and cultural/societal levels [48].This approach helps to build accountability for protecting mental health throughout the relevant sport ecology, rather than placing the onus on an individual athlete. This framework extends previously proposed ecological models for supporting athlete mental health (e.g., [48,57]) by acknowledging that risk and protective factors for mental health overlap with major life transitions, including developmental changes and athletic career progression.Taken together, these approaches inform a set of practical recommendations for supporting athlete mental health during the transition into elite sport.These recommendations focus on enhancing protective factors for mental health at the individual athlete level, as well as the relational, sporting/ organisational, and sociocultural levels.These strategies can target factors both within the sport system (e.g., athlete-coach relationships, individual-level coping strategies for sport-related stressors) and outside the sport system (e.g., athlete relationships with family and friends, maintaining non-athletic identities). Strategies for Addressing Individual-Level Mental Health Risk and Protective Factors Strategies to strengthen athletes' resilience to the stressors associated with the entry into elite-level sport and to promote positive mental health outcomes include providing opportunities for autonomy, highlighting competence, preventing identity foreclosure, and preparing athletes for possible sport-related stressors [4,10,48].Many of these strategies are included as key components of retirement transition programs.We argue that these protective factors for mental health can be implemented in earlier phases of athlete careers as foundational components for mental health. Developmental considerations are necessary here given the young age at which many athletes transition into elite sport (often early-to-mid adolescence, if not younger).While providing athletes with full autonomy is not always feasible, possible strategies include asking athletes for feedback about key organisational decisions or outcomes, co-developing personalised training plans based on athletes' self-rated strengths, weaknesses, and career goals, and adjusting training schedules to accommodate educational, vocational, or familial commitments. 
Assisting coaching staff to better recognise and emphasise athletes' competence during the transition into elite sport is also recommended.Highlighting competence and providing opportunities for autonomous decisionmaking from early in the elite sporting career can promote positive sport-related outcomes (e.g., enjoyment in the sporting role, motivation to engage in sport) and wellbeing outcomes (e.g., self-esteem, decision-making skills) [58][59][60]. Preventing athletic identity foreclosure early in an athlete's career is also critical [4,21,23,28,31], since this is a known risk factor for mental ill-health [35][36][37].This can be facilitated in onboarding programs via structured and individualised career guidance that encourages athletes to consider future career goals outside their athletic roles, as well as informal strategies, such as coaches and other key staff taking an interest in athletes' educational/ vocational pursuits and life interests outside of sport.Given the number of changes associated with the transition into elite sport, it is recommended that onboarding programs provide athletes with transparent information about the challenges they are likely to experience during the transition (and potentially throughout their careers) and how these can be mitigated.This should ideally involve the voices of current and/or former athletes who have experienced and managed these challenges (recognising that many challenges, such as performance pressure, negative feedback from coaches or public scrutiny are largely unavoidable).Equipping athletes with adaptive coping skills and self-management strategies to build resilience should be a critical component of these programs.Evidence-based strategies in this regard include healthy self-talk (with an emphasis on self-compassion and recognising athletes' intrinsic value as people), use of problem-solving techniques, and grounding or mindfulness techniques [48,61,62]. Despite best prevention efforts, some athletes will nonetheless experience adjustment difficulties and mental ill-health over the course of their careers [39,62].Provision of mental health literacy programs for athletes transitioning into elite sport is warranted.These should focus on (1) promoting early recognition of mental illhealth, burnout, and adjustment difficulties, (2) encouraging help-seeking, and (3) providing practical advice regarding where and how to access support.As recommended by Gorczynski and colleagues [63], these mental health literacy programs should be tailored to the athletes' stage of development and should be informed, where possible, by the organisation's contextual factors (e.g., available resources, existing support pathways, identified needs among individuals within the organisation).Further, the ongoing evaluation of mental health literacy programs is essential to ensure these programs are deemed acceptable, appropriate, and are meeting their primary aims [63]. 
Strategies for Addressing Relational Mental Health Risk and Protective Factors Meaningful and well-functioning relationships, both inside and outside sport, are another key protective factor for mental health [64,65].Sporting organisations should provide opportunities for onboarding athletes to develop supportive relationships with teammates, coaching staff, high performance staff and others in the environment, including via mentoring programs or similar.From the onboarding stage onwards, coaching and other high performance staff should engage in genuine conversations with athletes that promote connection (e.g., asking about their lives and interests outside sport) to ensure that athletes feel intrinsically valued as people, rather than only valued for their skills and performance [48,66].This can assist with building psychological safety (see Sect. "Strategies for Addressing Sociocultural Mental Health Risk and Protective Factors") in the sports environment, enabling athletes to establish a sense of belonging, while feeling safe to ask questions [67,68].It is also recommended that sporting organisations facilitate the development of social networks and opportunities for communication between coaching staff and athletes' caregivers (or other key supports), as both coaches and caregivers can be well-placed to recognise changes in an athlete's mood or behaviour that reflect early indications of mental illhealth (e.g., social withdrawal, negative comments about weight or shape, loss of enjoyment in sport) [69].Accordingly, the delivery of mental health literacy programs to caregivers prior to the athletes' transition into the new sport setting is recommended to support the recognition of transition difficulties and/or mental health symptoms and to upskill caregivers in having effective conversations about these experiences [40]. Induction programs can also provide coaching and support staff with optimal strategies for communicating with newly entering athletes.Here there is opportunity to highlight the benefits of mastery-oriented coaching styles, where there is a focus on personal improvement and displays of effort and persistence in the face of setbacks.This can be seen as distinct to ego-oriented coaching styles, focusing on social comparison with other athletes [70]. Strategies for Addressing Sport-Level Mental Health Risk and Protective Factors At the sport (or organisational) level, it is recommended that sporting organisations are responsive to their safeguarding responsibilities, offer encouragement for athlete help-seeking, and routinely monitor athlete mental health.Further, clearly identifying who in the organisation is responsible for organising and implementing specific mental health strategies is advisable to ensure these activities are prioritised on an ongoing basis.This may be facilitated by allocating mental health 'champions' in the organisation who are well-informed about the organisation's mental health strategy, can serve as points-of-contact for mental health-related questions or concerns, and hold key responsibilities in ensuring any mental health activities/programs are delivered as planned [69].Crucially, these 'champions' should be clearly identifiable among athletes and staff. 
In addition to ensuring appropriate safeguarding policies and procedures exist, information about reporting processes and pathways (in response to bullying, harassment, discrimination, or abuse) should be provided to athletes entering any elite sport system.Onboarding athletes will be less knowledgeable about the systems, processes and procedures for reporting adverse events, and likely less confident reporting and seeking support, relative to more experienced athletes [71].Onboarding programs should provide mandatory safeguarding information to ensure that athletes understand how they can report concerns [72,73].Safeguarding information should also be disseminated to athletes' caregivers and across the organisation to coaching, high performance, executive/leadership, and administration staff. Similarly, sporting organisations should ensure that athletes transitioning into the new sport setting-in addition to staff within the organisation-receive adequate opportunities for seeking support for mental ill-health, and that information about pathways for help-seeking is readily available from early in the athletic career.Given athletes transitioning into these environments may be hesitant to disclose mental health problems due to confidentiality concerns and possible ramifications of seeking support (e.g., loss of selection, being viewed as weak) [43], information concerning confidentiality should be highlighted in communications about opportunities for support. It is also recommended that athletes transitioning into elite sport are screened for symptoms of mental ill-health before, during, and after the transition to facilitate early identification and intervention.Where possible, sportsensitised tools for detecting mental ill-health, psychological distress, and perceived psychological safety should be used (e.g., [74][75][76]).Screening practices should be accompanied by clear processes for responding to elevated scores, offering pathways for discussion and support where needed [77]. Strategies for Addressing Sociocultural Mental Health Risk and Protective Factors Addressing sociocultural risk and protective factors for mental health is another essential component of a sporting organisation's overall mental health strategy that should be prioritised from the onboarding process onwards.Here we highlight the importance of developing cultures that are 'psychologically safe' [78], reducing mental health stigma, and recognising and embracing diversity within the organisation. Sport settings are often characterised as valuing toughness and stoicism.While these values can be helpful on-field, when applied rigidly to all situations, they can risk contributing to mental health stigma and barriers to help-seeking (e.g., concerns about being viewed as weak or others responding negatively to mental illhealth disclosures) [43,48].Sport teams, clubs, and organisations are responsible for developing cultures that are psychologically safe [68].Psychologically safe environments are characterised by interpersonal trust, mutual respect, and acceptance of mistakes and differences among individuals within the environment [78,79].Improving psychological safety has been shown to enhance team cohesion, learning, innovation and performance outcomes in a range of high-performance organisational contexts [80].There is increasing recognition that psychological safety can also support mentally healthy sport settings [67,68,74]. 
Sporting teams, clubs, and organisations are encouraged to normalise mental ill-health among their members, as this can promote psychological safety and facilitate help-seeking [43,81].Normalising mental illhealth can be facilitated by providing opportunities for open discussions about mental health, encouraging leadership to share mental health challenges they have experienced, encouraging individuals in the sport setting to respond to disclosures in supportive and non-judgmental ways, and framing mental health help-seeking as an important strategy for maintaining one's overall health and wellbeing.Tackling mental health stigma and promoting openness about mental ill-health-including how to initiate conversations and have safe conversations with others-can be facilitated by mental health literacy programs that offer training in how to communicate about mental ill-health (e.g., Ahead of the Game) [82]. Given that individuals from diverse backgrounds (e.g., according to race/ethnicity, sexuality, gender expression) are more likely to experience discrimination and harassment, including in sporting contexts [83], sporting organisations should ensure that the diversity of staff and athletes is recognised and valued [69].This may be facilitated by promoting meaningful conversations about diverse identities to better understand these experiences, seeking to reflect diversity across staff, providing opportunities for athletes and staff to engage in culturally meaningful practices and celebrations, and directly engaging with those from diverse backgrounds about their preferences regarding how their identities are discussed and shared among others within the organisation. On a broader scale, sports may also be able to impact the broader public on issues surrounding mental health stigma and embracing diversity.For example, reducing mental health stigma via campaigns involving prominent sportspeople talking about their personal experiences of mental ill-health can be effective, particularly since elite athletes are commonly viewed by the public (and youth athletes) as strong, resilient, successful role-models [84].Sporting organisations can also publicly address unacceptable public behaviour directed towards athletes that may contribute to mental illhealth, such as harassment, racism, or discrimination. 
Conclusion

The transition into elite sport is associated with a range of stressors. If left unchecked or unaddressed, these may contribute to psychological distress or mental ill-health, which in turn may impact an athlete's performance and ability to achieve their career goals. The youth athlete career phase is a particularly vulnerable period for mental ill-health, given the upheaval in many athletes' physical environments, social relationships, training and lifestyle demands, and performance expectations [19,22]. This paper seeks to redress the balance in attention to transitions within elite sport, shining a light on the entry into elite sport settings and the opportunities available via structured induction/onboarding programs. We argue that this career phase represents a key opportunity to provide athletes with the foundational components for supporting their mental health and building resilience to manage the demands and rigors of elite sport. We note that, given the nascency of the literature in this field, this framework will warrant revision as more evidence becomes available about athletes' mental health needs during the transition into elite sport, as well as about evidence-based strategies for supporting mental health during this career transition.

Fig. 1 Framework for supporting athlete mental health during the transition into elite sport (and beyond)
v3-fos-license
2019-03-09T04:11:20.733Z
2019-02-19T00:00:00.000
67862210
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-019-6531-9", "pdf_hash": "c86e2e1b44972cc7c8b23b27b93d3d87a8f21a1f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42151", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "c86e2e1b44972cc7c8b23b27b93d3d87a8f21a1f", "year": 2019 }
pes2o/s2orc
Socioeconomic status and improvement in functional ability among older adults in Japan: a longitudinal study Background Recovery from functionally disabled status is an important target of public health measures for older adults. This study aimed to examine socioeconomic inequalities in the improvement of functional ability among older adults stratified by the level of disability at baseline. Methods In the Japan Gerontological Evaluation Study, we conducted a mail survey of community-dwelling older adults (1937 men and 2212 women) who developed functional impairment during 2010–2014. The survey data were individually linked to the longitudinal records of changes in the levels of functional disability based on the Public Long-Term Care Insurance System. Results The mean (standard deviation) follow-up period was 316 (269) days. During follow-up, 811 participants (19.5%) showed improved functional ability. Among those with severe disabilities at baseline, men with 13 or more years of education were more likely to improve functional ability than men with 9 or fewer years of education (hazard ratio: 1.97, 95% confidence interval: 1.12–3.45). A similar association was observed among women (hazard ratio: 2.16, 95% confidence interval: 1.03–4.53). Neither income nor occupation was statistically associated with improved functional ability. Conclusions There are education-related inequalities in the improvement of functional ability, especially among older adults with severe disabilities. Health policy makers and practitioners should consider the educational background of individuals with reduced functionality in formulating strategies to improve their functional ability. Electronic supplementary material The online version of this article (10.1186/s12889-019-6531-9) contains supplementary material, which is available to authorized users. Background More than a third of older adults aged ≥60 years experienced functional disability worldwide in 2004 and the prevalence of disability has been increasing due to an aging population [1]. The improvement of functional ability is considered important because of its impact on the quality of life of older adults and the demand for long-term care (LTC) services [2]. In Japan, 6.0 million individuals were certified as eligible to use public LTC insurance (LTCI) benefits in 2015. The LTCI benefit expenses were 9.0 trillion Japanese yen (equivalent to approximately 80.0 billion US dollars) [3]. Although the majority of disabilities were progressive in the long-term, 1 in 10 individuals had improved functional ability within a year [4]. The trajectory of functional ability among older adults may improve if they receive adequate health care [2]. Factors affecting the trajectory of functional ability among older adults include physical, psychosocial, and socioeconomic factors [5][6][7][8][9]. In this study, we have focused on socioeconomic status (SES) as an important target of public health interventions aiming to achieve equity in health and longevity [2,10]. However, the evidence for the association between SES and the improvement of functional ability is inconsistent [8,[11][12][13][14]. A longitudinal study in the UK [11] reported that those with higher education had higher rates of recovery from morbidity/disability independent of comorbidity, with a follow-up interval of 2-10 years. 
In contrast, other studies from the US [8], Europe [12,13], and Taiwan [14] reported neither education- nor income-related inequalities in the improvement of functional ability after the onset of disability. These mixed results could partly be due to differences in the net and gross effects of SES on functional ability, adjustment for other dimensions of SES in the analysis, and self-reported measurements of disability [15]. Potential mechanisms to explain how socioeconomic inequalities contribute to improvements in functional ability may include material and psychosocial pathways [16,17]. Accessibility to health care services and rehabilitation programs may be limited among those with a low SES [16]. Moreover, studies have suggested that persistent psychological stress and lack of social support are likely to be greater among individuals with low SES, which are also barriers to improving physical ability [16]. Therefore, this study aimed to examine the independent contributions of three major proxy measures of SES, namely educational attainment, income, and occupation, to improving functional ability using a large Japanese population-based dataset linked to Japan's LTCI database. The LTCI database includes objective measurements of the level of disability, with dates of application for LTC service utilization and dates of changes in the level of disability.

Study population

The analyses were based on the Japan Gerontological Evaluation Study (JAGES) linked to the LTCI database of Japan. A description of the JAGES study design is reported elsewhere [22]. In brief, self-administered questionnaires were mailed to community-dwelling older adults (aged ≥65 years) without physical or cognitive disabilities in 31 municipalities in 2010 (Additional file 1). The LTCI database was obtained from 24 municipalities in which a survey was conducted in 2010 as part of the JAGES program. The LTCI database includes information on the level of disability, the dates on which the level of disability changed, and the dates of death or movement to a different municipality. The period covered by the LTCI database ranged from 14 to 46 months depending on the municipality. Among the JAGES participants living in these 24 municipalities, 4239 individuals were newly certified to receive LTCI benefits during the survey period. The follow-up period began at the initial certification date for each participant and ranged from 1 to 1318 days. Those with missing data on age and gender (n = 85) and those whose follow-up periods were 0 days (n = 5) were excluded, resulting in an analytic sample of 4149 subjects.

Measurements of the level of disability

The level of disability was evaluated at the time of LTC service application and certification according to nationally standardized criteria (Additional file 2: Table S1) [18][19][20]. The Certification Committee for Long-Term Care Need in each municipality, consisting of physicians and other health experts, assigned the level of disability based on both home-visit interviews and the opinions of the primary physicians. Following government recommendations, the level of disability was reassessed within 6 months for the second assessment and at least once a year thereafter, or when a reassessment was requested by LTC service users or their families. The level of disability was divided into seven categories: requiring support-1 and -2, and requiring LTC-1 to LTC-5.
The measurements were used in recent studies, which investigated the determinants of functional disability and their association with mortality [21][22][23][24]. The study population included only those who were assigned to the requiring LTC-1 to LTC-5 groups at the time of the initial assessment, because those assigned to the requiring support-1 and -2 groups were only eligible for LTC prevention programs and not actual physical care. We categorized the study population into three disability groups according to the initial level of disability: mild (requiring LTC-1), moderate (requiring LTC-2 or LTC-3), and severe (requiring LTC-4 or LTC-5). Improvement in the level of disability was defined as an improvement of ≥1 level(s) during the follow-up period compared with the level at the time of the initial assessment. Improvements in the level of disability included transitioning into requiring support-1 and -2.

Socioeconomic status

SES and other covariates were assessed in 2010 using the JAGES self-reported mail-in questionnaires. Educational attainment was measured as completed years of schooling (≤9, 10-12, and ≥13 years). Annual household income was reported in 15 predetermined categories (in thousands of Japanese yen). The equivalized income was calculated by dividing each response by the square root of the household size. Household income was then categorized into quartiles. The occupation engaged in for the longest period of time was reported in 7 categories and, following recent studies, we dichotomized the occupations into manual (craft and related trade workers and skilled agricultural, forestry, and fishery workers) and non-manual (professionals, managers, clerical support workers, service and sales workers, and other non-manual workers) categories [25].

Statistical analyses

We used the Cox proportional hazards model to assess the association between SES indicators (educational attainment, income, and occupation) and improvements in functional ability during the follow-up period. We evaluated the proportional hazards assumption by visual and statistical means. Time to improvement in the level of disability was measured in days. Participants who died or moved to another municipality were censored at the date of their death or move. In addition to the crude model, a second model was used with adjustments for age, gender, other SES, marital status, living status, comorbidities, depressive symptoms, and municipality. The analysis was stratified by gender and disability group, because SES has differential effects on functional transitions depending on the individual's prior functional state [34]. As a sensitivity analysis, we restricted our sample to those who were followed up for > 6 months, because the second assessment of the level of disability was mostly conducted 6 months after the initial assessment. We applied the missing indicator method (including a dummy variable for missing data in the analysis), following recent recommendations [35,36]. All statistical analyses were conducted using STATA (version 14.0; StataCorp, College Station, TX, USA). A P < .05 was considered statistically significant.
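To make the workflow above concrete, the following sketch reproduces the main ingredients of the analysis (equivalized income quartiles, the missing indicator method, and a Cox proportional hazards model) in Python with the lifelines package. The actual analyses were run in STATA; the variable names, synthetic data, and lifelines implementation here are illustrative assumptions rather than the authors' code.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in for the analytic dataset: one row per participant.
# All variable names and values are illustrative, not from the JAGES/LTCI data.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "time_days": rng.exponential(300, n).round() + 1,           # follow-up time
    "improved": rng.integers(0, 2, n),                          # >= 1-level improvement
    "educ_cat": rng.choice(["<=9", "10-12", ">=13", None], n),  # years of schooling
    "occupation": rng.choice(["manual", "non-manual", None], n),
    "income": rng.integers(500, 8000, n).astype(float),         # thousand JPY
    "household_size": rng.integers(1, 6, n),
    "age": rng.integers(65, 95, n),
    "female": rng.integers(0, 2, n),
})

# Equivalized income: household income divided by sqrt(household size), then quartiles.
df["equiv_income"] = df["income"] / np.sqrt(df["household_size"])
df["income_q"] = pd.qcut(df["equiv_income"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

# Missing indicator method: keep participants with missing SES data in the model
# by treating "missing" as its own category.
for col in ["educ_cat", "occupation", "income_q"]:
    df[col] = df[col].astype("category").cat.add_categories("missing").fillna("missing")

# Design matrix with reference categories dropped; in the paper the models are
# additionally fit separately by gender and baseline disability group.
design = pd.get_dummies(df[["educ_cat", "income_q", "occupation"]], drop_first=True).astype(float)
design[["age", "female"]] = df[["age", "female"]].astype(float)

data = pd.concat([df[["time_days", "improved"]], design], axis=1)
cph = CoxPHFitter()
cph.fit(data, duration_col="time_days", event_col="improved")
cph.print_summary()          # hazard ratios with 95% CIs
cph.check_assumptions(data)  # proportional hazards diagnostics
```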
Results

The mean ± standard deviation follow-up period was 316 ± 269 days, with a maximum of 1318 days. The mean ± standard deviation age at initial LTC certification was 81.5 ± 6.7 years. Among the study population, approximately half of the participants were women, more than half reported completed years of education of ≤9, and approximately a third reported their occupation as manual (Table 1). The proportions of individuals with improved functional ability during the follow-up period in the mild, moderate, and severe disability groups were 11.7%, 24.4%, and 25.4%, respectively. The characteristics of the cohort stratified by gender and disability group at the time of the initial assessment are shown in Additional file 3: Table S2.

Among men with severe disabilities, the hazard ratio for improved functional ability in those with ≥13 years of education versus those with ≤9 years of education was 1.91 (95.0% confidence interval [CI]: 1.17-3.12) in the crude model and 1.97 (95.0% CI: 1.12-3.45) after adjusting for other covariates (Table 2). There was no significant association between education and improved functional ability among men with mild and moderate disabilities, and neither income nor occupation was significantly associated with improved functional ability among men. The corresponding results for women are shown in Table 3. Although education was associated with the improvement of functional ability, there was no specific association between income and improved functional ability among women with severe disabilities (P for trend = .570). Occupation-related differences in improved functional ability were not significant across all 3 disability groups among women. When the analytic sample was restricted to those with a follow-up period of ≥6 months, similar associations were observed, though the 95.0% CIs were wide due to the limited sample size (Additional file 4: Table S3 and Additional file 5: Table S4).

Discussion

The results of our study show that education is positively associated with improved functional ability, especially among older adults with severe disabilities at the time of the initial assessment. Our findings are inconsistent with recent studies in other countries that have mostly shown no significant educational differences in the improvement of functional ability [8,12,14]. This inconsistency may be due to differences in the measurements of functional ability and follow-up intervals between the current study and other recently published reports [12,14]. The current study used objective measurements, which assessed the initial level of disability at the time of the application for LTC service utilization and tracked the dates when the level of disability changed. Therefore, the current study could capture temporal changes in functional disability in each of the 3 disability groups. Other studies often defined disability as severe restrictions in activities of daily living, used follow-up intervals of 1-10 years, and did not stratify the analysis by disability severity.

Several mechanisms may explain how education contributes to improvements in functional ability. First, educational attainment is a predictor of socioeconomic status in later life, including occupation and income [16,[37][38][39]. Therefore, educational attainment may reflect the material conditions contributing to recovery, such as access to medical and rehabilitation services. Although most medical and rehabilitation services in Japan are covered by public health and LTC insurance, co-payments and other ancillary costs, including transportation and other opportunity costs, may discourage those with fewer resources from utilizing these services regularly.
In this study, however, inequalities in the improvement of functional ability were not clear across levels of income and occupational class. This may be related to measurement issues for income and occupation at older ages. Previous studies have suggested that income is less predictive of health than wealth at older ages [40][41][42]. Moreover, our measure, the occupation individuals had engaged in for the longest period of time, may not be strongly linked to the current living arrangements of older adults, because the majority of subjects had already retired. Occupation may also not represent material conditions among older women in Japan, given that the majority of older households were male-breadwinner households [43]. Poor education may also alter health behaviour through limited health literacy and psychosocial stress, which may be other pathways linking educational attainment and recovery. For example, financial strain, lack of engagement in social networks, and lack of social support could impede recovery [16,37,44]. Psychosocial stress may also lead to adverse health behaviours, such as smoking and alcohol abuse, which are known to be more common among less educated people [16]. An analysis of Japan's nationally representative data revealed that low education was associated with limited health literacy, which is required to evaluate and utilize health information in critical and communicative ways [45].

Education-related inequalities in the improvement of functional ability were not observed in the mild and moderate disability groups. One possible explanation is low statistical power, because fewer individuals improved their functional ability in the mild disability group than in the severe disability group. Alternatively, differences in disease structure across the levels of disability may explain the gap. In Japan, the most common reasons for requiring care in the mild and moderate disability groups are dementia (26%), stroke (16%), and frailty (13%) [3,46]. The trajectories of dementia and frailty often decline slowly or steadily without remission [47]. Therefore, it may be difficult, even for individuals with high SES, to improve their functional ability [47]. In the severe disability group, the most common reasons for requiring care in Japan include stroke (27%), dementia (23%), and bone fractures (11%) [3,46]. The functional trajectory of organ failure-type conditions, including stroke or bone fractures, sometimes achieves remission [47]; therefore, individuals with severe disabilities who have high SES may be able to improve their functional ability more than those with low SES, because of better access to medical and LTC services [38,48].

This study has several limitations. First, improvements in functional ability may be underestimated in this study. Japan's public LTC system reassesses disability levels approximately every 6 months, and the assigned level is renewed at that point unless the user requests an earlier reassessment. LTC service users who believe their functional ability has improved have a disincentive to request an earlier reassessment, because being classified as less disabled reduces the maximum amount of LTC cost coverage. This results in a delay in capturing improvements in disability.
Second, the LTCI database used in this study does not include information concerning the disqualification from LTCI eligibility due to regain of functional independence. However, the impact of this missing information may be small because the proportion of individuals who were disqualified over 4 years was < 3%. This may lead to an underestimation of the improvements in functional ability [49]. Third, the LTCI database may not capture changes in the level of disability after hospital admission, because LTC services are not used during hospitalization. Fourth, income may not capture the socioeconomic status of older adults, given that they are likely to rely on pension, savings, and other assets they have. Therefore, other measures such as degree of wealth should also be investigated. However, the questionnaires used in the current study did not include information on wealth. Degree of wealth may be more strongly associated with the health of older adults than income, because wealth indicates the ability to meet sudden expenditures such as medical expenses; however, measurement of wealth might be more difficult than that of income because wealth includes multiple factors [41,50]. Therefore, the results of the current study, using income as a measurement of SES, might be underestimated. Fifth, the LTCI database used in this study did not include information on the major causes of functional disability. The aetiology in the course of functional disability might be different based on the cause of the initial onset of functional decline, and the estimates based on these causes may have more valuable clinical implications. The course of functional disability based on its causes warrants further study [47]. Sixth, some potential confounders that we considered, such as comorbidity, could be mediators linking SES to our health outcome. However, in our preliminary analysis, we evaluated the changes in the point estimates of the SES/outcome associations in the models including or excluding those factors step-by-step, and confirmed that changes in point estimates are not large. Finally, we applied the missing indicator method to address missing data on our explanatory variables; another approach such as multiple imputation could be an alternative, though the application of such methods to complex longitudinal merged data has been under debate [51,52].
v3-fos-license
2024-01-17T06:44:53.155Z
2024-01-13T00:00:00.000
266999580
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://iopscience.iop.org/article/10.3847/1538-4357/ad2618/pdf", "pdf_hash": "2be97e6db49afd93aa305f120c9288eb60a5cf4a", "pdf_src": "ArXiv", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42153", "s2fieldsofstudy": [ "Physics" ], "sha1": "2be97e6db49afd93aa305f120c9288eb60a5cf4a", "year": 2024 }
pes2o/s2orc
SN 2020udy: A New Piece of the Homogeneous Bright Group in the Diverse Iax Subclass We present optical observations and analysis of the bright type Iax supernova SN 2020udy hosted by NGC 0812. The evolution of the light curve of SN 2020udy is similar to that of other bright type Iax SNe. Analytical modeling of the quasi-bolometric light curves of SN 2020udy suggests that 0.08 ± 0.01 M ⊙ of 56Ni would have been synthesized during the explosion. The spectral features of SN 2020udy are similar to those of the bright members of type Iax class, showing a weak Si ii line. The late-time spectral sequence is mostly dominated by iron group elements with broad emission lines. Abundance tomography modeling of the spectral time series of SN 2020udy using TARDIS indicates stratification in the outer ejecta; however, to confirm this, spectral modeling at a very early phase is required. After maximum light, uniform mixing of chemical elements is sufficient to explain the spectral evolution. Unlike in the case of normal type Ia SNe, the photospheric approximation remains robust until +100 days, requiring an additional continuum source. Overall, the observational features of SN 2020udy are consistent with the deflagration of a carbon–oxygen white dwarf. Thermonuclear supernovae (SNe), also known as type Ia SNe, are the outcome of the explosive burning of Carbon-Oxygen (CO) white dwarfs.They are known as one parameter family and are extensively used in cosmology as standard candles (Phillips 1993;Phillips et al. 1999).With the increasing sample, diversity has been observed among type Ia SNe leading to their subclassification (Taubenberger 2017).Distinguishing different subclasses can help improve the precision of their distance measurements (Wang et al. 2009) and identify their physical origins (Wang et al. 2013). There are different subtypes of type Ia SNe having some similarities and dissimilarities.Amongst them, type Iax SNe (Li et al. 2003;Jha et al. 2006b;Foley et al. 2013) are one of the peculiar subclasses of SNe Ia, having low luminosity (M r = −12.7 mag, Karambelkar et al. 2021 to M V = −18.4mag, Narayan et al. 2011) and lower energy budget (Jha 2017, and references therein).The light curves of SNe Iax are characterized by a faster rise to the maximum and post-maximum decline in the bluer bands than normal type Ia SNe (Magee et al. 2016(Magee et al. , 2017;;Jha 2017;Li et al. 2018).Due to the lack of fairly good pre-maximum coverage for a good fraction of type Iax SNe, precise measurement of peak absolute magnitudes and rise time is difficult.This also poses a problem in putting strong observational constraints on the explosion models.The occurrence rate of SNe Iax is ∼ 5% to 30% of that of normal type Ia SNe (Foley et al. 2013;Miller et al. 2017;Srivastav et al. 2022). As a class, type Iax SNe show homogeneous spectral evolution (Jha et al. 2006a;Jha 2017) with low expansion velocities at maximum ranging from 2000 to 8000 km s −1 (Foley et al. 2009;Stritzinger et al. 2014).The early spectra of SNe Iax show Fe III features along with features due to intermediate mass elements (IMEs), similar to 1991T-like SNe.During the late phase, type Iax SNe exhibit significant differences from the type Ia class.Strong and wide P-Cygni lines dominate the optical wavelengths until 4-6 months after explosion.Even later, type Iax SNe do not enter into a fully nebular phase.Permitted spectral lines mainly of Fe co-exist with forbidden emission lines in the late time spectra of all type Iax SNe (McCully et al. 
2014a;Stritzinger et al. 2015;Foley et al. 2016).The spectral synthesis of SN 2014dt (the only example with continuous observations from its maximum to +550 days) showed that the assumption of an expanding photosphere provides a remarkable match with the observed spectral evolution during the first ∼ 100 days.At even later epochs, the approximation is capable of reproducing the P-Cygni lines formed by Fe, Ca, and Na (Camacho-Neves et al. 2023). The progenitor system and explosion scenario of type Iax SNe has been a matter of debate for many years.Being low luminosity and less energetic events, type Iax SNe hint towards a different progenitor scenario from type Ia SNe.High-resolution pre-explosion images of a few type Iax SNe, obtained with the HST, are used to identify the progenitor systems of these events.SN 2012Z is one such type Iax SNe, hosted by a nearby galaxy NGC 1309.McCully et al. (2014b) analyzed the pre-explosion images of SN 2012Z and suggested that a binary consisting of a white dwarf and Helium star is one of the most plausible progenitor systems for SN 2012Z.Another type Iax with pre-explosion images is SN 2014dt, and a similar progenitor system has been suggested as one of the possibilities (Foley et al. 2015) for this SN.However, except for a few type Iax SNe (Foley et al. 2013;Greiner et al. 2023), helium is not detected in spectroscopic studies (White et al. 2015;Jacobson-Galán et al. 2019;Magee et al. 2019). The observed explosion parameters of many of the bright type Iax SNe are successfully explained by the pure deflagration of a CO white dwarf (Jordan et al. 2012;Kromer et al. 2013;Fink et al. 2014).However, several other explosion models have been proposed to explain the observational properties of bright type Iax SNe such as pulsational delayed detonation (PDD, Baron et al. 2012;Dessart et al. 2014), deflagration to detonation transition (DDT, Seitenzahl et al. 2013;Sim et al. 2013), etc.The deflagration of CO white dwarf cannot explain the observables of faint members of this class.Instead, they can be explained by the deflagration of a hybrid Carbon-Oxygen-Neon (CONe) white dwarf (Meng & Podsiadlowski 2014;Kromer et al. 2015). SN 2020udy was spotted by Nordin & Perley (2020) using automated detection software AMPEL (Nordin et al. 2019) in association with Zwicky Transient Facility (ZTF, Bellm et al. 2019;Fremling et al. 2020) on 24 September 2020 and was classified as type Iax SN (Nordin et al. 2020a,b).The SN exploded in a spiral galaxy NGC 0812 at a redshift of 0.017222 (Falco et al. 1999).The SN was located at R.A.(J2000.0)02 h 06 m 49.35 s , Dec.(J2000.0)44 o 35 ′ 15.29 ′′ , 52 ′′ .82N and 23 ′′ .13W from the center of the host galaxy.Maguire et al. (2023) have presented an analysis of SN 2020udy.The very early detection of the SN allowed them to place strict limits on companion interaction.They ruled out the possibility of a main sequence star with mass 2 and 6 M ⊙ , to be the companion, however a helium star with a narrow range of viewing angle is suggested as a probable companion.They have shown that the light curve and spectra of SN 2020udy are in good agreement with the deflagration model of a CO white dwarf, specifically the N5-def model of Fink et al. (2014). This paper presents a detailed photometric and spectroscopic analysis of SN 2020udy.The light curve has been modeled using analytical prescription proposed by Arnett (1982) and Valenti et al. 
(2008).To confirm the line identification and expansion velocity of the ejecta, the early spectral sequence is modeled with SYNAPPS.One-dimensional radiative transfer code TARDIS has been used to perform abundance tomography modeling of the entire observed spectral sequence.Section 2 gives details about observations and methods used to reduce the data of SN 2020udy.In Section 3 we estimate the distance, the explosion time, and the line-of-sight extinction of the SN.Section 4 presents the photometric properties and the modeling of the pseudo-bolometric light curve of SN 2020udy.Spectroscopic features of SN 2020udy, the evolution of the photospheric velocity, and spectral modeling are presented in Section 5. A brief discussion of the observational features of SN 2020udy and their comparison with a few explosion scenarios proposed for the type Iax class are given in Section 6. Section 7 summarizes our results. OBSERVATIONS AND DATA REDUCTION Optical photometric follow-up of SN 2020udy began ∼ 5 days after discovery and continued up to 130 days, with the 1 m Las Cumbres Observatory (LCO) telescopes (Brown et al. 2013) under the Global Supernova Project and 80 cm Tsinghua-NAOC Telescope (TNT, Wang et al. 2008;Huang et al. 2012, National Astronomical Observatories of China).The observations were carried out in the BgVri photometric bands. The LCO photometric data were reduced using the lcogtsnpipe routines (Valenti et al. 2016), which performs point-spread-function (PSF) photometry of the stars and the SN.The instrumental BV magnitudes were calibrated to the Vega system using the APASS catalog (Henden et al. 2016) 1 and the instrumental gri magnitudes were calibrated to the AB system using the SDSS catalog (Gunn et al. 2006). The pre-processing of the photometric data, obtained with the 80 cm TNT was carried out following standard procedures using a custom Fortran program.The photometry was then performed by the automatic pipeline ZrutyPhot (Jun Mo, et al., in prep.).The pipeline utilizes the Software for Calibrating AstroMetry and Photometry (SCAMP, Bertin 2006) and the IRAF2 Daophot package.The TNT instrumental BV magnitudes of the SN were calibrated to the Vega system and the instrumental gri magnitudes were calibrated to the AB system using the PanSTARRS (Panoramic Survey Telescope and Rapid Response System3 ) catalog.The optical photometry of SN 2020udy are tabulated in Table 1.The errors mentioned in Table 1 are obtained by propagating the photometric and the calibration errors in quadrature.The spectroscopic observations of SN 2020udy spanning up to ∼ 121 days after the maximum, were obtained with the LCO 2-m Faulkes Telescope North (FTN) in Hawaii, Beijing Faint Object Spectrograph and Camera (BFOSC), mounted on the Xinglong 2.16 m telescope of NAOC (XLT; Fan et al. 2016), and Dual Imaging Spectrograph (DIS) on the 3.5 m telescope (Astrophysical Research Consortium, ARC) at the Apache Point Observatory.The 1D wavelength and flux-calibrated LCO spectra were extracted using the floydsspec pipeline4 (Valenti et al. 2014).The wavelength calibration of the BFOSC data and DIS data was done using Fe/Ar and He/Ar arc lamp spectra, respectively.The flux calibration was performed using standard star spectra which had similar airmass as that of the SN.All the spectra were scaled with the photometry to correct for slit loss.Finally, the spectra were corrected for the heliocentric redshift of the host galaxy.The log of spectroscopic observations is given in Table 2. 
Distance and extinction

There are eight different distance estimates available for the host galaxy of SN 2020udy based on the Tully-Fisher method (Karachentsev et al. 2006; Theureau et al. 2007; Tully et al. 2013; Sorce et al. 2014; Tully et al. 2016). Out of these, we have used the four most recent measurements and scaled them to H0 = 73.00 km s^-1 Mpc^-1 (Spergel et al. 2007). The average of these four estimates, 56.20 ± 12.48 Mpc (consistent with Maguire et al. 2023), is used in this work. The SN is located at the outskirts of the host galaxy, so significant reddening from the host galaxy is not expected. Also, in the spectral sequence of SN 2020udy, we do not find the Na I D feature associated with the host galaxy. Hence, for the reddening correction, we have used only the extinction within the Milky Way, which is E(B − V) = 0.067 mag (A_V = 0.210 mag) (Schlafly & Finkbeiner 2011).

Explosion epoch

The first ZTF r-band detection at JD 2459116.8, with the last non-detection in the g band 0.87 days earlier, together with the comprehensive sampling of SN 2020udy in the ZTF g and r bandpasses during the few weeks before its explosion, allows a careful estimate of the time of explosion. Assuming SN 2020udy exploded as an expanding fireball whose luminosity scales with its surface area, the flux (F) increases with the square of the time after the explosion (i.e., F ∝ (t − t_exp)^2; Arnett 1982; Riess et al. 1999; Nugent et al. 2011). Hence, we use Equation 1 to fit the extinction-corrected flux in the ZTF g and r bands separately,

F = A (t − t_exp)^n,    (1)

where A is a scale factor and the power-law index n has been fixed to 2 in the initial fitting, which gives t_exp(g) = JD 2459111.9 ± 0.6 and t_exp(r) = JD 2459113.3 ± 0.4. These estimates differ by ≈1.4 days and contradict the g-band last non-detection at JD 2459115.9 (see Figure 1). Deviation from the n = 2 expanding fireball during the early rising phase of type Iax SNe has been reported in multiple cases (Magee et al. 2016; Miller et al. 2020). In the next iteration of fitting, we therefore kept n as a free parameter. This resulted in explosion epoch estimates that differed by 0.5 day between the two bands, with n_g = 0.91 ± 0.05 and n_r = 1.09 ± 0.04. Further, we fit both light curves iteratively, varying n simultaneously for both bands such that, for the same value of n, they converge to the same explosion epoch, which is a free parameter as before. We found that both light curves converge to the same explosion epoch (t_exp = JD 2459116.2 ± 0.1) for n = 1.03. Maguire et al. (2023) quote a similar value for the explosion epoch (JD 2459115.7 ± 0.1); however, they estimated the power-law index n ∼ 1.3 using a modified functional form. In this work, we use JD 2459116.2 ± 0.1 as the explosion date.
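As an illustration of the joint fit described above, the sketch below fits a single power-law rise to g- and r-band fluxes with a shared explosion epoch and index but independent scale factors, using scipy.optimize.curve_fit. The synthetic data, starting values, and bounds are placeholders; this is a minimal sketch of the method rather than the fitting code used for the ZTF photometry.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic early-time fluxes (arbitrary units) standing in for the
# extinction-corrected ZTF g- and r-band photometry.
rng = np.random.default_rng(0)
t_exp_true, n_true = 2459116.2, 1.0
t_g = np.sort(rng.uniform(t_exp_true + 0.5, t_exp_true + 10.0, 20))
t_r = np.sort(rng.uniform(t_exp_true + 0.5, t_exp_true + 10.0, 20))
f_g = 1.2 * (t_g - t_exp_true) ** n_true + rng.normal(0.0, 0.1, t_g.size)
f_r = 1.0 * (t_r - t_exp_true) ** n_true + rng.normal(0.0, 0.1, t_r.size)

def make_joint_rise(n_g_points):
    """Return F = A * (t - t_exp)^n with independent scale factors per band
    but a shared t_exp and n, for data concatenated as [g-band, r-band]."""
    def model(t_all, a_g, a_r, t_exp, n):
        dt = np.clip(t_all - t_exp, 0.0, None)                 # no flux before explosion
        amp = np.where(np.arange(t_all.size) < n_g_points, a_g, a_r)
        return amp * dt ** n
    return model

t_all = np.concatenate([t_g, t_r])
f_all = np.concatenate([f_g, f_r])
model = make_joint_rise(t_g.size)

# Start from the fireball index n = 2 and keep n and the amplitudes positive.
p0 = [1.0, 1.0, t_all.min() - 2.0, 2.0]
bounds = ([0.0, 0.0, t_all.min() - 30.0, 0.3],
          [np.inf, np.inf, t_all.min(), 4.0])
popt, pcov = curve_fit(model, t_all, f_all, p0=p0, bounds=bounds)
a_g, a_r, t_exp, n = popt     # the paper reports n ~ 1.03 and t_exp ~ JD 2459116.2
```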
We use a cubic spline fit to estimate the time and magnitude at maximum light in the BgVri bands; the decline in magnitude from the light-curve peak to 15 days later (i.e., Δm15) is also estimated for these bands (Table 3).

In Figure 3 we compare the light curves of SN 2020udy with several other well-sampled type Iax SNe, including SNe 2002cx (Li et al. 2003), 2005hk (Sahu et al. 2008), 2008ha (Foley et al. 2009), 2010ae (Stritzinger et al. 2014), 2011ay (Stritzinger et al. 2015), 2012Z (Stritzinger et al. 2015), 2019gsc (Srivastav et al. 2020; Tomasella et al. 2020), 2019muj (Barna et al. 2021a), and 2020rea (Singh et al. 2022). The light curves are normalized to their peak magnitudes in the respective bands and plotted in the rest frame of each SN. In the B band, SN 2020udy declines more slowly than SNe 2002cx and 2011ay, and its light curve shape looks remarkably similar to that of SNe 2012Z and 2020rea. In the g band, SN 2020udy declines more slowly than all the comparison SNe. The V-band light curve evolution of SN 2020udy is faster than that of SNe 2002cx, 2005hk, 2011ay, 2012Z, and 2020rea and slower than that of the other SNe in our sample. With a slower decline in the r and i bands, SN 2020udy appears very similar to SNe 2012Z and 2020rea (Figure 3). The B-V, V-I, V-R, and R-I color evolution of SN 2020udy and its comparison with other type Iax SNe are depicted in Figure 4. All the colors are corrected for total reddening and brought to the rest frame of each SN. The color evolution of SN 2020udy follows the same pattern as the other type Iax SNe used for comparison.

Analysis of bolometric light curve

The pseudo-bolometric BgVri light curve of SN 2020udy is constructed using SuperBol (Nicholl 2018), adopting the distance and extinction discussed in Section 3.1. In SuperBol, the dereddened magnitudes and associated errors of the SN are converted to fluxes and flux errors. These fluxes are used to construct the spectral energy distribution (SED) at each epoch, which is integrated using the trapezoidal rule within the limits of the input passbands to obtain the pseudo-bolometric luminosity. The flux errors and bandwidths are used to calculate the corresponding pseudo-bolometric luminosity error. The integration spans the wavelength range between 3960 and 8120 Å. To estimate the missing flux in the UV and NIR regions, SuperBol fits blackbody functions to the SED at each epoch and applies this correction to the pseudo-bolometric luminosities to construct the total bolometric luminosity. Figure 5 shows the pseudo-bolometric light curve of SN 2020udy along with a few other bright type Iax SNe. The pseudo-bolometric light curves of all the SNe presented in Figure 5 are constructed similarly. We have added another bright type Iax SN, 2018cni (Singh et al. 2023), for comparison in Figure 5. Around maximum, SN 2020udy looks slightly fainter than SNe 2005hk and 2018cni, but after ∼10 days their pseudo-bolometric luminosities are comparable. This shows that SN 2020udy is a bright type Iax SN with a peak (BgVri) luminosity of 2.06 ± 0.14 × 10^42 erg s^-1.

To estimate the explosion parameters, such as the mass of 56Ni and the ejecta mass (M_ej), we employed the analytical model proposed by Arnett (1982) and Valenti et al. (2008). This model assumes spherically symmetric and optically thick ejecta, a small initial radius, constant optical opacity, and the presence of 56Ni in the ejected matter. We fitted the pseudo-bolometric light curve with the analytical model using the scipy.optimize.curve_fit algorithm, setting κ_opt to 0.1 cm^2 g^-1 and the photospheric velocity to 8000 km s^-1 at maximum. From the fitting, we obtained 56Ni mass = 0.08 ± 0.01 M⊙ and ejecta mass M_ej = 1.39 ± 0.09 M⊙, with the errors obtained from the covariance matrix. The bolometric rise time obtained through this fit is ∼15 days. This is consistent with the rise time estimated independently by fitting the early light curves of SN 2020udy (see Section 3.2).
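For reference, the sketch below implements the Arnett (1982)/Valenti et al. (2008) radioactive-heating luminosity with the same fixed opacity and photospheric velocity quoted above. The constants and functional form follow the standard formulation of that model; the authors' exact implementation is not given in the text, so this should be read as an assumption-laden illustration rather than the code used for the published fit.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

M_SUN = 1.989e33      # g
DAY = 86400.0         # s
EPS_NI = 3.90e10      # erg s^-1 g^-1, 56Ni energy deposition rate
EPS_CO = 6.78e9       # erg s^-1 g^-1, 56Co energy deposition rate
TAU_NI = 8.8 * DAY    # 56Ni e-folding time
TAU_CO = 111.3 * DAY  # 56Co e-folding time
BETA = 13.8           # Arnett's density-profile constant
C_CGS = 3.0e10        # cm s^-1
KAPPA = 0.1           # cm^2 g^-1, fixed as in the text
V_PHOT = 8.0e8        # cm s^-1, photospheric velocity at maximum (8000 km/s)

def arnett_lum(t_days, m_ni, m_ej):
    """Radioactively powered bolometric luminosity (erg/s) at t_days after
    explosion, for m_ni and m_ej in solar masses (Arnett 1982; Valenti+ 2008)."""
    tau_m = np.sqrt(2.0 * KAPPA * m_ej * M_SUN / (BETA * C_CGS * V_PHOT))
    y = tau_m / (2.0 * TAU_NI)
    s = tau_m * (TAU_CO - TAU_NI) / (2.0 * TAU_CO * TAU_NI)
    out = []
    for t in np.atleast_1d(t_days) * DAY:
        x = t / tau_m
        a = quad(lambda z: 2.0 * z * np.exp(-2.0 * z * y + z ** 2), 0.0, x)[0]
        b = quad(lambda z: 2.0 * z * np.exp(-2.0 * z * y + 2.0 * z * s + z ** 2), 0.0, x)[0]
        out.append(m_ni * M_SUN * np.exp(-x ** 2) *
                   ((EPS_NI - EPS_CO) * a + EPS_CO * b))
    return np.array(out)

# Model light curve for the best-fit values quoted in the text:
t_grid = np.arange(1.0, 60.0, 1.0)
L_model = arnett_lum(t_grid, 0.08, 1.39)   # peaks near ~15 days at ~2e42 erg/s

# Fitting measured data (t_obs, L_bol, L_err hypothetical arrays) would look like:
# popt, pcov = curve_fit(arnett_lum, t_obs, L_bol, p0=[0.1, 1.0], sigma=L_err)
```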
To obtain the total bolometric luminosity from the pseudo-bolometric luminosity, the contribution from the missing bands needs to be added. For type Iax SNe, a definitive contribution of the UV and NIR fluxes to the total bolometric luminosity is not known, as both UV and NIR coverage are available in the literature for only a handful of objects (Phillips et al. 2007; Yamanaka et al. 2015; Tomasella et al. 2016; Srivastav et al. 2020; Dutta et al. 2022; Srivastav et al. 2022). Based on the available estimates, if the contribution from the UV and NIR bands to the total luminosity at maximum is taken as ∼35%, the mass of 56Ni increases to 0.11 M⊙. Figure 6 compares the evolution of the blackbody temperature and radius of SN 2020udy with those estimated for several other well-studied type Iax SNe. The blackbody temperature evolves in a similar fashion for all the type Iax SNe presented in Figure 6, whereas the blackbody radius scales with the luminosity of the SN.

In Figure 7, we compare the pseudo-bolometric luminosity of SN 2020udy with the deflagration models presented in Fink et al. (2014). The model light curves N1-def, N3-def, N5-def, and N10-def are adopted from the HESMA database. The numerical value indicates the number of ignition spots of the model, which approximately scales with the strength of the deflagration. During the photospheric phase, the pseudo-bolometric luminosity of SN 2020udy lies between the N3-def and N5-def models. Around 30 days post-explosion, SN 2020udy shows a slower decrease in bolometric luminosity compared to all the models presented in Figure 7, which may indicate a higher ejecta mass for SN 2020udy than the models predict.

The early spectra of SN 2020udy show a blue continuum with features due to Ca II H&K, Fe III, Si III, Fe II, and Si II. Prominent P-Cygni profiles with a broad absorption component can also be identified. Similar to other bright type Iax SNe, SN 2020udy exhibits a rather shallow Si II absorption line at 6355 Å. We do not detect the C II feature at 6580 Å. Figure 10 presents a comparison of the earliest spectrum of SN 2020udy at −6.6 days with several other type Iax SNe observed at similar phases. Fe III features near 4400 and 5000 Å are present in all the SNe. Additionally, we note that the strength of both the Si II and C II features diminishes with increasing luminosity of type Iax SNe. For example, fainter events like SNe 2008ha and 2010ae display significantly stronger Si II and obvious C II absorption lines compared to bright events such as SNe 2012Z, 2020rea, and 2020udy. Similarly, in faint type Iax SNe, the Ca II NIR feature also emerges earlier than in the bright type Iax SNe.

Comparison of the spectral features near maximum (Figure 11) indicates similarity between SN 2020udy and other bright type Iax SNe. Near maximum, the strength of the Si II 6355 Å line increases. Features due to Fe III near 4400 Å, Fe II near 5000 Å, Si II, and the Ca II NIR triplet are prominently seen in all the SNe. In the early post-maximum phase (∼7 days), the blue part of the spectrum gets suppressed because of the cooling of the ejecta and line blanketing (Figure 8). With time, the Si II line is replaced by progressively emerging Fe/Co lines (Figure 9). The features between 5500 and 7000 Å are mostly dominated by Fe II lines. The post-maximum (from ∼20 days after maximum) spectral evolution of SN 2020udy is shown in Figure 9.
Cr II features near 4800 Å and Co II features near 6500 Å are clearly visible. The Ca II NIR triplet becomes progressively stronger. Figure 12 compares the spectrum of SN 2020udy obtained at day +18.3 with the spectra of other type Iax SNe at similar phases. The spectra of all SNe in the sample are dominated by iron group elements (IGEs). SN 2020udy exhibits remarkable similarities to bright type Iax SNe such as SNe 2005hk, 2011ay, and 2020rea. An absorption feature at ∼9000 Å due to Co II is also seen in SN 2020udy. As the inner ejecta of the SN become optically thin during the late phase, emission lines increase in strength.

In the late phase, the spectral features become narrow (Figure 9). The region around 7300 Å is composed of forbidden lines of Fe/Ni and Ca (Foley et al. 2016). The presence of both forbidden and permitted lines in the late phase spectra of SN 2020udy indicates that the spectrum is not fully nebular. Type Iax SNe possess a long-lived photosphere, with permitted lines present even at late times (Jha et al. 2006b; Sahu et al. 2008; Foley et al. 2010a, 2016). Figure 13 shows a nebular phase spectral comparison between SNe 2020udy, 2008ge (Foley et al. 2010b), and 2014dt (Singh et al. 2018) at comparable epochs. The nebular phase spectra of SN 2020udy and SN 2008ge (Foley et al. 2010b) show broad emission features, while SN 2014dt has narrow spectral features. Maguire et al. (2023) have also reported broad emission features in the spectra of SN 2020udy at ∼119 and 137 days and suggested that they might be coming from the SN ejecta. In the case of SN 2012Z, similar broad emission features were reported at a very late phase (∼190 days, Stritzinger et al. 2015). Foley et al. (2016) suggested that relatively bright type Iax SNe with higher ejecta velocities exhibit broad forbidden lines; SN 2020udy is consistent with these findings.

Evolution of photospheric velocity

Figure 14 displays the photospheric expansion velocity of SN 2020udy, traced by the evolution of the Si II line at 6355 Å, together with that measured for several other type Iax SNe. We estimate the expansion velocity by fitting a Gaussian profile to the absorption minimum of the Si II line. The expansion velocity of SN 2020udy around maximum light is ∼6000 km s^-1. Between −6.6 and +7.27 days, the photospheric velocity of SN 2020udy is similar to that of SN 2020rea and higher than that of SN 2005hk. The velocity measured from the absorption minimum of the Si II line at 6355 Å becomes unreliable after about two weeks relative to the B-band peak brightness because of the increasing blending with the emerging iron lines. Type Iax SNe with higher luminosity are known to have higher photospheric velocities (McClelland et al. 2010; Foley et al. 2013), except for a few outliers. We find that SN 2020udy is consistent with such a luminosity-velocity correlation.
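As an illustration of this velocity measurement, the sketch below fits an inverted Gaussian to the Si II 6355 Å absorption trough of a synthetic, placeholder spectrum segment and converts the wavelength of the minimum into an expansion velocity with the classical Doppler formula; the wavelength grid, noise level, and starting values are assumptions, not values from the actual data reduction.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5          # speed of light in km/s
LAMBDA_REST = 6355.0     # Si II rest wavelength in Angstrom

def gaussian(x, depth, center, sigma, cont):
    """Inverted Gaussian on a flat pseudo-continuum (absorption profile)."""
    return cont - depth * np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Hypothetical spectrum segment around the Si II absorption (wavelength in Angstrom, normalized flux)
wave = np.linspace(6050, 6350, 150)
flux = gaussian(wave, 0.35, 6228.0, 35.0, 1.0) + np.random.normal(0.0, 0.01, wave.size)

p0 = [0.3, 6230.0, 30.0, 1.0]            # starting guesses: depth, center, width, continuum
popt, _ = curve_fit(gaussian, wave, flux, p0=p0)
lambda_min = popt[1]

# Classical Doppler shift of the absorption minimum -> photospheric expansion velocity
v_phot = C_KMS * (LAMBDA_REST - lambda_min) / LAMBDA_REST
print(f"Absorption minimum at {lambda_min:.1f} A -> v_phot ~ {v_phot:.0f} km/s")
```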
Spectral modeling with SYNAPPS

Photospheric spectra of SN 2020udy at three epochs, −6.6, −0.7, and 7.3 days relative to the B-band peak brightness, are modeled using the spectrum synthesis code SYNAPPS (Thomas et al. 2011) and are shown in Figure 15. The photospheric velocity in the best-fit model falls from 11250 km s^-1 at −6.6 days to 6570 km s^-1 at 7.3 days. The outer ejecta velocity used in the modeling is 30000 km s^-1. The photospheric temperature evolves from 12200 K at −6.6 days to 8500 K at 7.3 days. The chemical species used in the modeling are Fe, Si, Ca, Mg, and S. Specifically, in the pre-maximum spectral fitting, we used S II, Si II, Si III, Mg II, Ca II, Fe II and Fe III ions. The prominent Fe II and Fe III features in the pre-maximum spectrum are reproduced very well in the models. In the model spectra at maximum and post-maximum, most of the spectral features, such as Fe, Si, Ca, etc., match well with our observed spectra.

Spectral modeling with TARDIS

We perform spectral modeling for SN 2020udy using the one-dimensional radiative transfer code TARDIS, following the principles of abundance tomography. The setting of TARDIS and the fitting strategy are almost the same as in our previous study, in which SNe 2018cni and 2020kyg were the subjects of analysis (Singh et al. 2023). The method was previously applied for abundance tomography of normal SNe Ia using Artificial Intelligence-Assisted Inversion (AIAI, Chen et al. 2020, 2022) and a fit-by-eye method (Barna et al. 2021b) as well, providing similar abundance profiles and the same goodness of fit.

The present spectral synthesis covers a longer range of time, up to ∼90 days after maximum. Camacho-Neves et al. (2023) showed that radiative transfer codes assuming a blackbody-emitting photosphere can reproduce most of the spectral features and their evolution over years after the explosion, because type Iax SNe never show a fully nebular spectrum. At the same time, spectral synthesis is less sensitive to the exact mass fractions of chemical elements at t_exp > 30 days compared to the earlier epochs. The information retrievable from late-time spectral fitting is limited to the luminosity (L), the photospheric velocity (v_phot), and the identification of the chemical elements present.

The abundance tomography presented in this study is split into two parts: before t_exp = 30 days from the date of explosion (hereafter referred to as early epochs), we follow the standard technique for synthesizing a spectral time series (Singh et al. 2023), while for the later epochs a simplified method is adopted, similar to that of Camacho-Neves et al. (2023). For both phases, the same time of explosion (T_exp = JD 2459116.3), which is within the uncertainty range of the time derived from the early light curve synthesis (see Section 3.2), and the same density profile are used for the fit of all epochs. The latter is chosen as a simple exponential function and constrained from the fitting as ρ(v, t_0) = ρ_0 exp(−v/v_0), where ρ_0 = 1.4 g cm^-3 is the core density shortly after the time of the explosion (T_exp), when the homologous expansion starts (here chosen as t_0 = 100 s), and v_0 = 2300 km s^-1 is the exponential steepness of the density structure decreasing outwards. The chemical composition of ten elements (see Figure 16) is fit in radial shells with a velocity width of 500 km s^-1. Our initial assumption for each element's abundance was the constant fit of the abundance structure of the N5-def deflagration model (Fink et al. 2014).
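For reference, the short sketch below evaluates this exponential density profile on 500 km s^-1 velocity shells and applies the (t_0/t)^3 dilution that follows from homologous expansion; the shell boundaries and the chosen epoch are illustrative assumptions rather than the exact grid of the TARDIS model.

```python
import numpy as np

RHO_0 = 1.4        # g cm^-3, core density at the reference time t0
V_0   = 2300.0     # km s^-1, e-folding velocity of the exponential density profile
T_0   = 100.0      # s, reference time after explosion (start of homologous expansion)

def density_profile(v_kms, t_sec):
    """Exponential ejecta density rho(v, t) = rho_0 * exp(-v/v_0) * (t0/t)^3."""
    rho_ref = RHO_0 * np.exp(-np.asarray(v_kms) / V_0)
    return rho_ref * (T_0 / t_sec) ** 3

# Velocity shells of 500 km/s width, as used in the abundance tomography (illustrative range)
v_inner = np.arange(4000.0, 30000.0, 500.0)
t_exp_days = 10.0
rho = density_profile(v_inner, t_exp_days * 86400.0)

for v, r in list(zip(v_inner, rho))[:5]:
    print(f"v = {v:7.0f} km/s  rho = {r:.3e} g/cm^3")
```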
During the fitting process, we aimed for simplicity and modified the abundance structure of a shell only when the quality of the match between the synthetic and observed spectra clearly improved with the change. At later epochs (t_exp > 30 days), minor changes in abundance have little or no impact on the outcome of the radiative transfer; thus, we fixed the initially assumed constant values, with minor modifications regarding the presence of lines of certain elements (see below). Note that, despite several simplifications, the number of fitting parameters is far too large to fully cover the parameter space; thus, our 'best-fit' model should be regarded as a feasible solution rather than a unique one.

The synthetic spectra are displayed in Figure 17. The dominant spectral features of SN 2020udy are reproduced by our final model at every epoch. The absorption profiles of IMEs such as Si, S, Ca, etc. are nicely fitted at all phases. As an observable weakness of the fits, the P-Cygni features of IGEs after 20 days are overestimated in general (see between 4000 and 5500 Å in Figure 17). However, fixing this issue involves either reduced Fe and 56Ni abundances or a lower v_0 to decrease the outer densities; both options would reduce the goodness of the fits at other epochs.

The best-fit density profile (upper panel of Figure 16) shows the same steepness (v_0 = 2300 km s^-1) as that of other type Iax SNe that were the subject of similar abundance tomography analyses (Barna et al. 2018; Barna et al. 2021a). The inferred ρ(v) function places the densities between those of the N3-def and N5-def models during the early epochs, which is consistent with Figure 7. The constrained abundance structure shows a strong stratification of both the IGE and IME mass fractions, which reproduces the fast evolution of the spectral lines. In further tension with the pure deflagration models, C is not allowed below 11000 km s^-1 in our model, to prevent the appearance of extremely strong C II λ4618 and λ6578. The possible detection of such stratification in the outermost region of type Iax SNe is not unprecedented in the literature (see e.g. Seitenzahl et al. 2013; Barna et al. 2018), but studies with other methodologies of spectral synthesis have argued against chemical layering (Magee et al. 2022). Note that, due to the high level of degeneracy of our fitting process, the results regarding the stratification are inconclusive. Optical spectroscopy at even earlier phases (t_exp < 4 days) and/or UV spectral observations are required to resolve this issue.

The fits of the late-time epochs (below ∼6000 km s^-1) support the existence of a uniform chemical structure in the inner ejecta (Fink et al. 2014). Similar to Camacho-Neves et al. (2023), we set a high Na mass fraction to produce the P-Cygni feature at ∼5800 Å. As a further modification, we increase the abundances of V and Cr to create the features observed around ∼4000 and ∼5800 Å, respectively. While the overabundance of Na is well-justified by its unambiguous identification, similar to the extremely late-time spectral synthesis of SN 2014dt, the presence of Cr and V with 1-2% mass fractions is not conclusive.

DISCUSSION

The relatively complete photometric coverage of the pre-maximum evolution of SN 2020udy allows us to constrain its rise time to ∼15 days in the B band, which is similar to SN 2005hk (Phillips et al. 2007).
Comparison of the pseudo-bolometric light curves of SN 2020udy with different deflagration models shows that SN 2020udy lies between the N3-def and N5-def models during the photospheric phase. The spectroscopic features and photospheric velocity evolution of SN 2020udy are similar to those of other bright type Iax SNe. The late nebular phase spectral features of SN 2020udy are broad and similar to those of SN 2012Z. To ascertain the most probable explosion scenario/progenitor system, we compare the observational properties of SN 2020udy with different proposed models for type Iax SNe.

In DDT models (Khokhlov 1991a,c; Khokhlov et al. 1993; Hoeflich et al. 1995; Hoeflich & Khokhlov 1996; Höflich et al. 2002; Seitenzahl et al. 2013; Sim et al. 2013), a deflagration flame is ignited by nuclear burning and subsequently transitions into a detonation. The three-dimensional pure deflagration of a white dwarf yields a range of synthesized 56Ni mass of 0.03-0.38 M_⊙ (Fink et al. 2014). The rise times and peak absolute magnitudes provided by these models fall in the ranges of 7.6 to 14.4 days and −16.84 to −18.96 mag, respectively. Most of the observed parameters of SN 2020udy fit well within the range of parameters predicted by such a 3D pure deflagration process. Type Iax SNe are considered a heterogeneous class, and the progenitor and explosion scenarios at the two extreme ends of the luminosity distribution have been shown to be different. However, the bright members of this class (excluding outliers) behave in a similar manner, and most of the observed properties of other bright type Iax SNe, such as SNe 2020rea (Singh et al. 2022) and 2020sck (Dutta et al. 2022), are consistent with the pure deflagration of a white dwarf in 3D (Fink et al. 2014). Based on the comparison of the bolometric light curves and the density structure constrained from the abundance tomography analysis, SN 2020udy resembles a transition between the N3-def and N5-def pure deflagration models (Fink et al. 2014).

SUMMARY

The extensive photometric and spectroscopic follow-up of SN 2020udy reveals the following features:
• SN 2020udy is a bright member of the type Iax class with M_B,max = −17.41±0.34 mag.
• The analytical modelling of the pseudo-bolometric light curve yields 0.08±0.01 M_⊙ of synthesized 56Ni with a well-constrained rise time of ∼15 days.
• The spectroscopic features of SN 2020udy show similarity with the other bright members of the type Iax class.
• Abundance tomography modeling of SN 2020udy shows a photospheric velocity of ∼8000 km s^-1 at maximum. This fits into the general trend of the Iax class, as brighter objects expand with higher velocity.
• Comparison of the proposed explosion models with the observational parameters shows that SN 2020udy is consistent with explosion models of pure deflagration of a white dwarf.
• While the earliest spectra, which sample the outermost layers of the ejecta, show some indications of chemical stratification, the post-maximum evolution of SN 2020udy is consistent with the predictions of the pure deflagration scenario.
• Thus, SN 2020udy is an addition to the bright type Iax SNe population. The observed similarities of SN 2020udy with other bright Iax SNe indicate homogeneity within the bright members of this class.

ACKNOWLEDGMENTS

We thank the anonymous referee for providing useful comments and suggestions towards the improvement of the manuscript. We acknowledge the Weizmann Interactive Supernova data REPository http://wiserep.weizmann.ac.il (WISeREP) (Yaron & Gal-Yam 2012). This research has made use of the CfA Supernova Archive, which is funded in part by the National Science Foundation through grant AST 0907903.
This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. RD acknowledges funds by ANID grant FONDECYT Post-doctorado Nº 3220449. The work of XW is supported by the National Natural Science Foundation of China (NSFC grants 12288102, 12033003, and 11633002), the Science Program (BS202002) and the Innovation Project (23CB061) of Beijing Academy of Science and Technology, and the Tencent Xplorer Prize. This work makes use of data obtained with the LCO Network. This research made use of TARDIS, a community-developed software package for spectral synthesis in supernovae (Kerzendorf et al. 2018, 2019). The development of TARDIS received support from the Google Summer of Code initiative.

Figure 1. Estimation of explosion epoch using an analytical expression for the early rise. The 1-σ deviations around the best fits are shown with the shaded regions.
Figure 4. Color evolution of SN 2020udy compared with those of other type Iax SNe.
Figure 5. Pseudo-bolometric light curve of SN 2020udy together with those of other type Iax SNe; the analytical model used to estimate the explosion parameters is also shown by a solid line.
Figure 9. Spectral evolution of SN 2020udy spanning between 27 and 121 days since maximum.
Figure 10. Pre-peak spectral features of SN 2020udy compared with those of other type Iax SNe at similar phases. The spectra are plotted in decreasing order of the B-band peak brightness of the SNe.
Figure 12. Comparison of post-peak spectral features of SN 2020udy with other type Iax SNe.
Figure 13. Comparison of nebular phase spectral features of SN 2020udy with other type Iax SNe.
Figure 15. The −6.6, −0.7, and 7.3 day spectra of SN 2020udy are shown (in grey) along with the respective SYNAPPS models (in red).
Figure 16. Top panel: the best-fit TARDIS density profile (red) compared to the pure deflagration at t_0 = 100 s. The grey lines indicate the location of the photosphere for each of the analyzed epochs. Bottom panel: the best-fit chemical abundance structure from the fitting process for the spectral sequence of SN 2020udy. The profile of the radioactive 56Ni shows the mass fractions at t_exp = 100 s.
Figure 17. Spectral synthesis of the evolution of SN 2020udy. The model spectra (in red) are produced with TARDIS within the framework of the abundance tomography analysis.
Table 1. Optical photometric measurements of SN 2020udy.
Table 2. Log of spectroscopic observations of SN 2020udy. († JD 2,459,000+; ‡ phase calculated with respect to B_max = 2459130.53.)
Table 3. Light curve parameters of SN 2020udy.
v3-fos-license
2019-09-16T23:13:16.803Z
2019-01-01T00:00:00.000
249962043
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.32604/cmes.2019.06905", "pdf_hash": "80d789f08b1a461093e2a4408bde3aeea0d49117", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:42155", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "sha1": "2708610b4e9120453e1c58151fe2e042eaff6f9e", "year": 2019 }
pes2o/s2orc
Simulation of Damage Evolution and Study of Multi-Fatigue Source Fracture of Steel Wire in Bridge Cables under the Action of Pre-Corrosion and Fatigue : A numerical simulation method for the damage evolution of high-strength steel wire in a bridge cable under the action of pre-corrosion and fatigue is presented in this paper. Based on pitting accelerated crack nucleation theory in combination with continuum mechanics, cellular automata technology (CA) and finite element (FE) analysis, the damage evolution process of steel wire under pre-corrosion and fatigue is simulated. This method automatically generates a high-strength steel wire model with initial random pitting defects, and on the basis of this model, the fatigue damage evolution process is simulated; thus, the fatigue life and fatigue performance of the corroded steel wire can be evaluated. A comparison of the numerical simulation results with the experimental results shows that this method has strong reliability and practicability in predicting the fatigue life of corroded steel wire and simulating the damage evolution process. Based on the method proposed in this paper, the fatigue life of steel wires with different degrees of corrosion under the action of different stress levels is predicted. The results show that as the degree of corrosion increases, the fatigue properties of steel wire gradually decrease, and the influence of existing pitting corrosion on fatigue life is far greater than that on mass loss. Stress concentration is the main cause of fatigue life of corroded steel wire in advance attenuation. In addition, the fracture process of steel wire with multi-fatigue sources and the effect of the number and distribution of pits on the fatigue life of steel wire are studied. The results show that, compared with a stepped pitting distribution, a planar pitting distribution has a greater impact on the damage evolution process. The fatigue life of steel wire is positively correlated with the number of pits and the angle and distance between pits. Introduction Fatigue failure is one of the typical failure modes of corroded cables in bridges because the cables are exposed to polluted natural environments and subjected to long-term alternating loads. Corrosion fatigue failure is the brittle fracture process of metal materials subjected to periodic (cyclic) or aperiodic (random) alternating stress and a corrosion medium and is essentially the result of the interaction of an electrochemical corrosion process and mechanical process. The effect of this interaction on material damage is much greater than that of the alternating stress and corrosion medium alone [Watson and Stafford (1998); Hirose and Mura (1985)]. The corrosion fatigue damage evolution process of steel wire in cables mainly includes two stages, crack initiation and crack propagation. Under the action of a corrosion environment, corrosion damage gradually accumulates in the components. When the damage reaches the critical state, a corrosion fatigue crack initiates. The corrosion crack then propagates gradually under the action of alternating stress. When the crack length reaches the critical length, the crack propagates rapidly, and the steel wire quickly breaks. Studies show that uncorroded steel wire with an intact surface has strong anti-fatigue crack initiation ability and weak anti-crack propagation ability [Li, Song and Liu (1995)]. 
However, once corrosion occurs on the surface of high-strength steel wire, tiny pitting defects can become a source of crack initiation in the steel wire, and the anti-crack initiation ability of the corroded steel wire will significantly decrease [Nakamura and Suzumura (2013)]. Therefore, the establishment of an initial pit model based on a pitting accelerated crack nucleation mechanism is a key issue in studies of the fatigue life of corroded steel wire [Huang and Xu (2012)]. Researchers have established many models for pit evolution and corrosion fatigue crack nucleation under corrosion fatigue conditions and verified these models for specific materials and corrosion environments [Liao and Wei (1999); Perkins and Bache (2005)]. To simplify the complexity of the corrosion process, most studies presuppose the pit shape and assume that the evolution rate of pitting corrosion is the same in all directions, that is, the shape of the pits does not change during their evolution process. For example, Godard [Godard (2015)] believed that pits were hemispherical in shape and had the same size change rate in all directions during the evolution process. By observing the evolution of pits in an aluminum alloy in water, the equation for the change in pit depth with time was established. However, considering the randomness and complexity of the corrosion process, corrosion morphology is often not controllable, and the shapes of pits are diverse, including wide-shallow, narrow-deep, conical, spherical, ellipsoid, groove and so on, as shown in Fig. 1 [China National Standardization Management Committee (2001)]. In general, a more realistic pitting model should include a large number of random parameters related to the aspects of material, solution, electrochemistry, and mechanics. These parameters will greatly affect the geometric size and corrosion morphology of the pit. However, pitting growth models that consider the influence of these parameters on the pit remain lacking. Therefore, in addition to these empirical methods in the literature mentioned above, a calculation model that can predict the pitting growth process and reflect the pitting corrosion morphology based on the actual corrosion mechanism of pits should be proposed [Pidaparti, Palakal and Fang (2004)]. In recent years, cellular automata (CA) technology has been gradually applied in many fields of materials science [Chowdhury, Santen and Schadschneider (2000); Wimpenny and Colasanti (1997)], especially in the field of corrosion science [DiCaprio, Vautrin and Stafiej (2011);Caprio, Vautrinul and Stafiej (2013); Stafiej, DiCaprio and Bartosik (2013)]. The CA technique is a powerful tool to describe and simulate the behavior of complex physical and chemical systems. It can simulate the system at the micro level or mesoscopic level and describe the cumulative effects on macro performance at the micro level or mesoscopic level [Wang, Song and Wang (2009)]. As an effective tool for studying complex systems, the CA method has strong universality and stability in simulating the corrosion damage evolution process [Chen and Wen (2017)]. This article therefore adopts CA technology and the practical nature of the electrochemical reaction to simulate random pitting morphology and study the fatigue damage evolution process of corroded steel wire. By defining the corrosion rules of pitting, the metal, passive film and corrosion medium are discretized into wellordered cells in the 3-dimensional CA system. 
The pitting process of the metal is simulated at the mesoscopic scale, and a random initial pitting model can be generated automatically. Previous studies have employed two main methods for damage evolution simulation: one based on the continuum damage mechanics model and one based on the fracture mechanics model [Sun (2018)]. In the damage-mechanics-based method, pits are regarded as grooves, and the pitting evolution follows Faraday's law [Amiri, Arcari and Airoldi (2015)]. Based on this method, the damage distribution in a certain analysis step can be shown; however, the damage accumulation and evolution process cannot be visualized, especially the transformation between pitting and cracking. In fracture-mechanics-based methods, pits are treated as equivalent surface cracks, and a linear superposition model [Wei (2010)] and a process competition model [Austen and Mcintyre (2013)] have been established. The method based on fracture mechanics can reveal the damage evolution quantitatively, but because many factors influence the crack propagation process and the relationship between corrosion and fatigue is complex, the superposition model and the process competition model are not convenient for engineering applications. To overcome the shortcomings of these traditional research methods and to explore the characteristics of fatigue damage of corroded steel wire, in this paper a material subroutine, UMAT, is written in the FORTRAN language, and simulations of the fatigue damage evolution of corroded material under cyclic load and environmental conditions are carried out with the life-and-death element method in ABAQUS software. Compared with the above methods, the method proposed in this paper can not only describe the distribution of fatigue damage in corroded steel wire based on the fatigue damage evolution model but also visualize the process of damage evolution based on the concept of the life-and-death element method.

The technical route of the numerical simulation method for the fatigue damage evolution of corroded high-strength steel wire in bridge cables proposed in this paper is shown in Fig. 2. It comprises three parts: corrosion fatigue damage evolution theory, simulation of corrosion fatigue damage evolution, and research on pre-corrosion fatigue life. First, for the corrosion fatigue damage evolution theory, a fatigue damage model suitable for corroded steel wire in a bridge cable is established based on continuum damage mechanics, and this model is written into a user-defined material subroutine (UMAT) using the FORTRAN language. Then, for the simulation of corrosion fatigue damage evolution, CA technology is used to generate the morphology and position of random irregular corrosion pits in MATLAB software. Using the programming interfaces between MATLAB, AutoCAD, RHINO and ABAQUS software, the data obtained by the CA method are input into these programs to produce the three-dimensional grid model, the surface model and the geometry model successively; hence, the visualization of high-strength steel wire with an initial corrosion pit is realized. In addition, based on the linear elastic finite element method and focusing on the characteristics of fatigue damage analysis, a solution process based on cyclic blocks is proposed, and the simulation of corrosion fatigue damage evolution is realized with reference to the concept of the life-and-death element method combined with the developed UMAT.
Finally, for the research on pre-corrosion fatigue life, the fatigue life of corroded steel wire is obtained, and the effects of the number of pits, the stress level and the corrosion degree on the fatigue life of steel wire are studied by using the proposed simulation method.

2 Fatigue damage evolution model

The material damage evolution process refers to the process of performance degradation caused by the initiation, evolution and coalescence of internal defects in a material under external factors (alternating load, temperature change, corrosion environment, etc.) [Yu (1988)]. The pre-corrosion fatigue damage of steel wire in a bridge cable includes two stages, i.e., corrosion damage and fatigue damage, which result in the formation of pits and cracks, respectively. Pitting corrosion appears before crack formation and promotes crack initiation. In this paper, the pitting evolution process is simulated by using cellular automata (CA) technology, which is described in detail in Section 3, while the process of fatigue damage evolution is realized by using the user material subroutine UMAT, which is described in detail in Section 4. These two processes compose the method proposed in this paper to study the fatigue damage evolution process of steel wire with corrosion damage.

In continuum damage mechanics, the macroscopic state variable D is often used to describe the distribution, characteristics and evolution process of microstructural defects [Sun (2018)]. The damage accumulated under cyclic loading is governed by an evolution law of the general form of Eq. (1), in which f(F) is the function to be determined. If only the influence of the cyclic stress range is considered, the fatigue damage accumulated in each load cycle can be expressed in the Chaboche form of Eq. (2) [Chaboche (1981)], where μ and B are parameters related to temperature and M is a material parameter related to the average stress. Based on the initial condition D = 0 at N = 0, Eq. (2) is integrated to obtain the fatigue damage evolution model, Eq. (3). Before adopting this model, the material parameters in the above model need to be fitted. In this paper, the uniaxial tensile fatigue test results of high-strength steel wire specimens in Lan et al. [Lan, Xu and Ren (2017)] are used for parameter fitting, and the median S-N curve of steel wire without initial damage is written as Eq. (4): lg N = 23.480 − 6.649 lg Δσ. Based on Eq. (4), the fitted values of B, μ and M are 3772.53, 6.649 and 1, respectively.

3 Modeling method of a high-strength steel wire model with an initial random defect

3.1 Simulation of the metal corrosion process based on 3D cellular automata (CA) technology

Cellular automata, or CA for short, is a complex dynamic system that is discrete in both time and space [Li, Packard and Langton (2017)] and is mainly composed of elements such as the cell, cell state, cell space, neighbors, and a discrete time set, among others. According to the definition of CA, each cell has one and only one specific state, which belongs to a certain finite set of specified states [Langton (1984)]. According to the specific requirements, these cells are distributed on the divided cellular space grid according to a certain law, and the state of each cell changes with the time step according to the defined local rules. In other words, the state of each cell at the next moment is determined by its state at the current moment and the states of its neighbors. The damage evolution process of metal corrosion is simulated by using the CA method.
In the three-dimensional cellular space, the metal/passive film/electrolyte system is regarded as an automata system with a specific local (transformation) rule and is discretized into a well-ordered cellular grid to obtain the corrosion morphology, size and distribution of pits on the metal surface.

Physical model

The most basic corrosion processes can be divided into three stages: 1) random diffusion of the corrosive medium in solution; 2) breakage of the passive film on the metal surface; 3) electrochemical reaction of the exposed metal surface in a humid environment, by which the metal gradually dissolves. If the reoxidation of the metal surface during the corrosion process is not taken into account, the general formula for the basic reaction of metal dissolution in a humid environment [Pidaparti, Fang and Palakal (2008)] can be expressed as

Me + H2O → MeOH_aq + ½H2 (5)

where Me represents the metal, H2O is the chemical formula of water, H2 is the chemical formula of hydrogen gas and MeOH_aq is the substance in solution after the reaction, which has no influence on the subsequent corrosion process and is not considered further.

Cellular space

A discrete space is used to characterize the metal/passive film/electrolyte solution system in the CA model, and the dynamic characteristics of metal corrosion are defined by the transition rules of the cell states in discrete time. A cylindrical coordinate system is selected to describe the geometrical shape of the steel wire. The corrosion simulation accuracy is set as 0.1 mm and 3°, and thus the metal component and the corrosion environment are divided into an array composed of cells. The numbers of cells along the three coordinate directions of the cylindrical coordinate system are represented by NR, Na and Nl, which are 70, 120 and 140, respectively. Each cell is represented by its position coordinates (i, j, k). A fixed boundary condition is selected; that is, a constant state is assigned to the cells at the boundary to simulate a finite cell space. The three-dimensional cellular space for studying the corrosion process of steel wire in a bridge cable is thus established as shown in Fig. 3. The state of each cell (i, j, k) is determined by the cell properties. As shown in Fig. 3, there are three main cell types involved in the CA system for metal corrosion:

a) Metal cell, M: it can be corroded by a corrosion cell, and its position remains unchanged during the simulation process.
b) Passive cell, P: in the initial state, passive film cells cover the outermost metal cells, and their positions are essentially fixed. However, during the simulation, passive film cells randomly develop initial defects, and in that case the passive film cell at that position is transformed into a metal cell.
c) Corrosion cell, C: corrosion cells are randomly distributed in the solution; they can corrode metal cells and move freely. In the three-dimensional cellular space, a corrosion cell can randomly choose to move along one axis direction at every time step.

The initial state of the cell at each position is determined according to the size of the metal component and the concentration of the corrosive solution. In this paper, the concentration of the corrosive solution is set to 0.1, and the distribution of corrosion cells is stochastically uniform. That is to say, each cell outside the metal region is given a random number that obeys a uniform distribution on the interval from 0 to 1; if the number is smaller than 0.1, the corresponding cell is a corrosion cell. A minimal sketch of this initialization and of the cell transformation rules defined below is given after this paragraph.
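The following Python sketch illustrates, under simplifying assumptions, the kind of cellular initialization and update step described above: it uses a small Cartesian grid instead of the cylindrical (NR, Na, Nl) grid, resolves move conflicts by visiting walkers in random order rather than by the "both stay" rule of the transformation list below, and omits the random breakage of the passive film; the grid size and number of steps are illustrative only.

```python
import numpy as np

# Cell states
EMPTY, METAL, PASSIVE, CORROSIVE = 0, 1, 2, 3
rng = np.random.default_rng(0)

# Small Cartesian grid standing in for the cylindrical (NR, Na, Nl) cell space
grid = np.full((40, 40, 40), EMPTY, dtype=np.int8)
grid[10:30, 10:30, 10:30] = METAL                       # metal block
grid[10:30, 10:30, 29]    = PASSIVE                     # passive film on the exposed face
solution = (grid == EMPTY)
grid[solution & (rng.random(grid.shape) < 0.1)] = CORROSIVE   # 0.1 corrosion-cell concentration

NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def step(grid):
    """One CA time step: random walk of corrosion cells and dissolution of metal (M + C -> C)."""
    new = grid.copy()
    ci, cj, ck = np.where(grid == CORROSIVE)
    for idx in rng.permutation(len(ci)):         # visit walkers in random order
        i, j, k = ci[idx], cj[idx], ck[idx]
        di, dj, dk = NEIGHBORS[rng.integers(6)]  # pick one of the 6 neighbors at random
        ni, nj, nk = i + di, j + dj, k + dk
        if not (0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1] and 0 <= nk < grid.shape[2]):
            continue                             # fixed boundary: stay in place
        target = new[ni, nj, nk]
        if target == EMPTY:                      # free diffusion into an empty cell
            new[ni, nj, nk], new[i, j, k] = CORROSIVE, EMPTY
        elif target == METAL:                    # dissolution: C takes the place of M (M + C -> C)
            new[ni, nj, nk], new[i, j, k] = CORROSIVE, EMPTY
        # a PASSIVE cell or another CORROSIVE cell blocks the move: the walker stays put
    return new

for _ in range(100):
    grid = step(grid)
print("metal cells remaining:", int(np.count_nonzero(grid == METAL)))
```

In the actual model, the same ingredients — random walk of corrosion cells, the reaction M + C → C, and the protective role of passive cells — are applied on the cylindrical grid until the prescribed mass loss rate is reached.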
In addition, the location of a single initial defect is set on the middle section of steel wire. Cellular transformation rules In the cell space, the cell evolution rule is based on the most basic corrosion processes that have been mentioned in 3.1.1. For the diffusion process in solution, within a time step, a cell C can selectively move to one of its neighbors. The neighbor type adopted in the CA model in this paper is a group of 6 neighbor cells as shown in Fig. 4. In Fig. 4, the dark cell represents the center cell, and the light cells represent the neighbor cells of the center cell. That is to say, in each time step, the corrosion cell C at position (i, j, k) spreads randomly and selectively jumps to one of its neighbor cells. If the selected neighbor cell is not occupied by other cell C or cell P, the cell C jumps to the neighbor cell; otherwise, it remains in the original position. Figure 4: Cellular neighbor type For the passive film breakage and the initial defect generation process, in the CA model, the cell P is randomly converted into a cell M, i.e., P M → (6) For the metal dissolution process, in the CA model, the reaction of metal dissolution in Eq. (5) can be expressed as follows: M+C C → (7) which indicates that when the metal dissolves, the current position of the cell M is occupied by the cell C of the neighboring position, and then the corresponding neighboring position becomes empty. In addition, the following space interaction is introduced to limit the random diffusion process. Within a time step, a cell C selects one of its neighbors. If the selected neighbor cell is empty or is occupied by a cell M but is chosen as a target position of another cell C, the cell C remains its original position. (1) Within a time step, cell C in initial position (i, j, k) randomly selects one of its neighbor positions and is ready to move to that position. Specifically, each cell C is given a random integer that obeys uniform distribution at the interval from 1 to 6. Each number is corresponding to one of directions, which means there is the equal-probability for all directions that cell C may move to; (2) If the selected neighbor position in Step (1) is occupied by another cell C, then cell C remains in its initial position (i, j, k), as shown in Fig. 5(a); (3) If the selected neighbor position in step (1) is empty but another cell C is also ready to move to this neighbor position, then cell C remains in its initial position (i, j, k), as shown in Fig. 5 (4) If the selected neighbor position in Step (1) is empty and no other cell C is ready to move to this neighbor position, then the cell C moves to the neighbor position, and the original position (i) becomes empty, as shown in Fig. 5(c); (5) If the selected neighbor position in Step (1) is occupied by a cell M but another cell C is ready to move to this neighbor position, then cell C remains in its initial position (i, j, k), as shown in Fig (1) is occupied by cell P and cell C is located outside the metal space, then cell C remains in its initial position (i, j, k), as shown in Fig. 5(g); (9) If the selected neighbor position in Step (1) is occupied by cell P and cell C is located inside the metal space but another cell C is also ready to move to this neighbor position, then cell C remains in its initial position (i, j, k), as shown in Fig. 
5(h); (10) If the selected neighbor position in Step (1) is occupied by cell P and cell C is located inside the metal space and no other cell C is ready to move to this position, then cell P disappears, cell C moves to a neighbor position, and the original position (i, j, k) becomes empty, as shown in Fig. 5(i). Establishment of a three-dimensional model of the pit To study the properties of corroded steel wire, a three-dimensional model of corroded wire must be established to further simulate the fatigue damage evolution of steel wire and predict its fatigue life. The corrosion morphology of a steel wire surface is simulated based on the data generated in the CA model. First, considering the irregularity and discretization of the generated data, the function of griddata() in MATLAB software is used for data interpolation in this paper, and the interpolation method is cubic based on a triangle. To import the fitted surface into AutoCAD software, a transformation program, mat2cad, is written in this paper, based on the principle of AutoCAD software drawing. That is, the AutoCAD drawing is completed through the grid (the same principle as in MATLAB). Therefore, as long as the coordinates of all points in a graph are generated, it can be imported into AutoCAD software. The data generated in MATLAB software by CA method are imported one by one into AutoCAD software, Rhino software and ABAQUS software to generate a threedimensional grid model, the surface model and the geometry model successively. The object of the whole process is to easily realize the visualization and substantialization of the corrosion pit generated by CA method. Specifically, the grid model is generated directly by data transmission between MATLAB software and ABAQUS software. Then, the grid model is inputted into Rhino software to generate the surface model that is irregular but smooth by a plugin, namely, RhinoResurf plugin. And a 3D solid model is generated in Rhino software directly by using the function of automatic solid generation. Finally, the 3D solid model is inputted into ABAQUS software to generate 3D geometry model used for damage evolution simulation. The reason for complex data transition between four softwares, rather than directly transition from MATLAB software to ABAQUS software, is that the data produced by CA model in MATLAB software is too complex to fit the surface model directly. The plug-in in Rhino software can solve the problem perfectly, and the 3D solid model is created easily in Rhino software. The generation process is shown in Fig. 6. The generated model is based on the mechanism of metal anodic dissolution and can reasonably describe the irregular morphology and distribution of pits in line with the actual situation. The termination condition of cellular transformation in CA model is that the mass loss rate of steel wire reaches 0.25%. Simulation of the damage evolution of steel wire 4.1 Structure solving process based on a cyclic block An actual loading cycle corresponds to the process in which the load increases from the minimum to the maximum and then decreases to the minimum. Due to the high number of cycles, it is difficult to analyze each cycle. 
For high-strength steel wire with high fatigue life, the material damage will change obviously only when the loading numbers are large; therefore, it is not necessary to calculate the damage during each loading cycle [Han, Huang and Gao (2014)].The cyclic block is used to represent a certain loading number and is defined as the calculation accuracy; that is, the damage value of the material remains unchanged in each cyclic block. When the analysis is carried out to the next cyclic block, the damage value of the material will be updated according to block at the end of loading. However, the solution process of ABAQUS software is based on the analysis step, and each analysis step corresponds to the analysis of a loading or unloading process. Therefore, in this paper, each cycle block is composed of a loading analysis step or a unloading analysis step in ABAQUS. In the loading analysis step, the load increases from the minimum to the maximum, while in the unloading analysis step, the load decreases from the maximum to the minimum. In short, the relationship among the loading number, cyclic block, calculation accuracy and analysis step can be expressed as follows: where, cb N is the number of cyclic blocks, T N is the total number of cyclic loads, n is computational accuracy, as N is analysis step and INT( ) is the integral function. Therefore, the relationship between the cyclic block, the analysis step and the number of cyclic loads is as shown in Fig. 7. The material damage evolution model adopted in this paper reflects the relationship among the material damage, loading number, and stress range. In the process of damage evolution analysis, the damage is mainly related to the cyclic block, and it can be theoretically Step max σ 2 Loading step A cyclic block Unloading step 1 considered that the damage accumulation only occurs at the discrete time point at the end of the cycle. In other words, when the analysis step is 1, the initial material characteristics are used to calculate the structural response, and then the accumulated damage value is calculated. For the subsequent loading analysis step, namely, the process in which the cyclic load increases from the minimum to the maximum, before calculating the structural response in this cycle, the damage accumulation increment is first calculated according to the structural response of the previous cycle. Then, the calculated damage accumulation increment is added to the total damage. Finally, according to the total damage, the structural response of this cycle is calculated by updating the material characteristics. Therefore, at the p th cycle, the total damage value p D of the structure can be expressed as follows: cycle and is calculated as follows: is the material damage evolution model used in this paper, cycle, p is the cyclic block, and n is the calculation accuracy. If the damage value D of an element in the cyclic block is greater than a value close to 1, it is judged that the element has failed. Under these circumstance, the stiffness matrix of the element is multiplied by (1-D) because the deactivated degrees of freedom in the "dead" element need to be restrained to reduce the number of solving equations and prevent error. According to the reference [Lan, Xu and Liu (2018)], the value close to 1 in this paper is set as 0.9. For the entire analysis, during each load cycle the material has accumulated the damage caused by the previous cycle, which requires the solution process of each seemingly independent cyclic block to be connected. 
Consequently, the impact of damage related to the loading number on the subsequent analysis should be considered. In this paper, a solution process suitable for corrosion-fatigue damage analysis of steel wire is established by referring to the analysis process of general nonlinear materials. Implementation of the material subroutine The user-defined material subroutine (UMAT) is a FORTRAN programming interface provided by ABAQUS for users to customize material properties, thereby allowing users to use the undefined material model in the ABAQUS material database. ABAQUS is able to call these material subroutines and analyze the mechanical properties of the structure through data exchange between them. The function of UMAT is mainly to calculate the Jacobian matrix and the corresponding stress and strain and to update other state variables. The main program of ABAQUS will form a stiffness matrix according to the Jacobian matrix in UMAT and then solve the response of the structure, including the displacement and strain increment, and UMAT will update the state of the structure according to these variables. The solving steps of UMAT based on the fatigue damage model in Section 2 are shown in Fig. 8. The subroutine consists of a matrix of four state variables STATEV, which are used to save some user-defined variables. STATEV(1) is used to save the stress components of the material in the load-analysis step in each cycle block. STATEV(2) is used to save the stress components in the unloading analysis step in each cycle block. STATEV (3) and (4) are used to save the updated elastic modulus E and damage value D in each cycle block. Update STATEV(1) with obtained stress, STATEV (2) UMAT includes three steps. The first step corresponds to the first loading step when the step is equal to 1. Under this condition, there is no accumulated damage to the material; that is, D is equal to 0, and the elastic modulus is equal to the initial elastic modulus 0 E . 0 E is used to generate the Jacobian matrix DDSDDE and calculate the stress and strain. According to the calculation results of the stress and strain, state variables STATEV(1) and STATEV(3) are updated, while other state variables remain unchanged. This analysis step corresponds to the situation of the structure from bearing the maximum load to the minimum load, and thus the structure will not accumulate damage. In other words, the accumulated damage value in the material at this analysis step is the same as in the previous analysis step (loading analysis step); that is, the elastic modulus is equal to the updated elastic modulus in the previous analysis step and is used to generate DDSDDE. The stress and strain are calculated to update the state variable STATEV (2), while the other state variables remain unchanged. The third step corresponds to the loading analysis step when the step is odd and is not equal to 1. The function of this analysis step is to increase the load borne by the structure from the minimum to the maximum. D and the stress value during this loading analysis step are calculated. Before calculating D, it is necessary to calculate the difference between the stress under the loading analysis step and unloading analysis, that is, the stress range S ∆ .The obtained S ∆ is used to calculate the cumulative damage increment D ∆ given in Eq. (9). D ∆ is added to D (stored in the state variable STATEV(4)) obtained in the previous cycle block to obtain the accumulated damage value before this loading analysis step. 
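To make this bookkeeping concrete, the following Python sketch mirrors the per-block logic of the UMAT described above. The production implementation is the FORTRAN subroutine itself, and the function damage_increment used here is only a placeholder for the Chaboche-type increment of Eq. (2)/(9); the parameter values, block size and loading levels are illustrative assumptions.

```python
# Schematic per-cyclic-block damage bookkeeping mirroring the UMAT state variables:
# statev[0] ~ STATEV(1) loading-step stress, statev[1] ~ STATEV(2) unloading-step stress,
# statev[2] ~ STATEV(3) updated elastic modulus E, statev[3] ~ STATEV(4) accumulated damage D.

E0 = 212000.0        # MPa, initial elastic modulus
D_CAP = 0.9          # an element is treated as failed ("killed") once D reaches this value
N_PER_BLOCK = 5000   # number of load cycles represented by one cyclic block (accuracy n)

def damage_increment(delta_sigma, d_current, n_cycles):
    """Placeholder for the Chaboche-type increment of Eq. (2)/(9); NOT the paper's exact law."""
    mu, B = 6.649, 3772.53
    return n_cycles * (delta_sigma / 1000.0) ** mu / (B * max(1.0 - d_current, 1e-6))

def update_block(statev, sigma_load, sigma_unload):
    """Advance one cyclic block: accumulate damage from the previous stress range, degrade E."""
    delta_sigma = sigma_load - sigma_unload
    d_new = statev[3] + damage_increment(delta_sigma, statev[3], N_PER_BLOCK)
    d_new = min(max(d_new, statev[3]), D_CAP)    # monotonic and capped at 0.9
    e_new = E0 * (1.0 - d_new)                   # stiffness reduced by the factor (1 - D)
    return [sigma_load, sigma_unload, e_new, d_new]

statev = [0.0, 0.0, E0, 0.0]
for block in range(5000):
    statev = update_block(statev, sigma_load=600.0, sigma_unload=300.0)
    if statev[3] >= D_CAP:
        print(f"element deactivated after about {(block + 1) * N_PER_BLOCK} cycles")
        break
```

The deactivation threshold of 0.9 and the (1 − D) stiffness reduction follow the description above; everything else in the sketch is schematic.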
It should be noted that when the cyclic block is equal to 2, D can be calculated directly instead of ΔD, because the ΔD generated in the first cycle is the total damage value. After calculating D, it is necessary to verify the damage value to ensure that it is not less than the damage value in the previous cyclic block and not greater than 0.9. Then, the elastic modulus is updated according to the total damage value, and the stress and strain are calculated. Finally, the obtained stress component, E and D are saved in the state variables STATEV(1), (3) and (4).

Verification of UMAT

To verify the effectiveness of the UMAT, the simulation results are compared with the test results of steel wire without initial damage under uniaxial tensile fatigue loads given in the literature [Lan, Xu and Ren (2017)]. The steel wire sample was taken from a segment of finished cable from a cable factory in China and was a high-strength steel wire with a tensile strength of 1670 MPa, a length of 250 mm and a diameter of 7 mm. During the fatigue test, both ends of the steel wire were clamped over 50 mm, and the length of the standard segment used for analysis was taken as 150 mm. The stress ratio was set at 0.5, and the stress ranges were set as 335 MPa, 418 MPa, 520 MPa and 670 MPa, respectively. The MTS Landmark 100 kN fatigue testing machine was used for the test, and the loading frequency was 10 Hz. The geometry model of the high-strength steel wire was established according to the test, as shown in Fig. 9, with the density of the model set as 7.8 g/cm3. The comparison with the test results is shown in Fig. 10 and indicates that the UMAT written in this paper achieves the expected goal and that the calculation results are consistent with the experimental results.

In addition, since the main object of this paper is to simulate the fatigue damage evolution of corroded steel wires, verification against a structure with defects needs to be made. Therefore, results in the literature [Zheng, Xie, Li et al. (2018)] are adopted for comparison with the results simulated by using the UMAT. In the literature [Zheng, Xie, Li et al. (2018)], several fatigue tests of corroded steel wires with a hemispherical pit were conducted, and a calculation method based on damage tolerance for the fatigue life of corroded steel wires was proposed. It is well known that the relationship between the crack propagation rate da/dN and the stress intensity factor range ΔK conforms to Paris' law, da/dN = C(ΔK)^m, where C and m are parameters that equal 1.39×10^-12 and 3.3, respectively, for steel wire with a diameter of 7 mm at a stress ratio R of 0.5, according to the reference [Zheng, Xie, Li et al. (2018)]. The general formula for ΔK can be expressed as

ΔK = Y ΔS √(πa) (14)

where ΔS is the range of the far-field stress, a is the crack length and Y is the crack shape factor. The evolution equation for the crack shape factor of a notched round-bar specimen is given as Eq. (15) in [Zheng, Xie, Li et al. (2018)], where D is the diameter of the steel wire and equals 7 mm in this paper. Therefore, the equation for predicting the crack propagation life can be deduced from Paris' law by integration:

N = ∫_{a0}^{ac} da / [C (Y ΔS √(πa))^m] (16)

According to the study by Mahmoud [Mahmoud (2007)], the fracture toughness K_IC of high-strength steel wire with a diameter of 7 mm is approximately 65.7 MPa·√m, which is used to calculate the critical crack length (ac). The fatigue life of steel wire with a hemispherical pit whose initial diameter is 1 mm (a0) under different stress ranges is calculated by Eq. (16), as shown in Tab. 2; a numerical sketch of this integration is given below.
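As a worked illustration of Eq. (16), the sketch below integrates Paris' law numerically from the initial pit depth to the critical depth set by the fracture toughness. Because the crack-shape-factor expression of Eq. (15) is not reproduced here, a constant Y is assumed purely for illustration, so the resulting numbers differ from those in Tab. 2.

```python
import numpy as np
from scipy.integrate import quad

C_PARIS = 1.39e-12     # Paris-law coefficient (a in m, delta_K in MPa*sqrt(m))
M_PARIS = 3.3          # Paris-law exponent
K_IC    = 65.7         # MPa*sqrt(m), fracture toughness of the 7 mm wire
Y_CONST = 0.73         # assumed constant crack shape factor (stand-in for Eq. (15))
A0      = 0.5e-3       # m, initial crack depth (hemispherical pit of 1 mm diameter)
R_WIRE  = 3.5e-3       # m, wire radius, used as an upper bound on the crack depth

def fatigue_life(delta_S, sigma_max):
    """Cycles to failure from Eq. (16): integrate da / (C*(Y*dS*sqrt(pi*a))^m) from a0 to ac."""
    a_c = (K_IC / (Y_CONST * sigma_max)) ** 2 / np.pi   # critical depth: K(sigma_max, a_c) = K_IC
    a_c = min(a_c, R_WIRE)                              # the crack cannot exceed the wire radius
    integrand = lambda a: 1.0 / (C_PARIS * (Y_CONST * delta_S * np.sqrt(np.pi * a)) ** M_PARIS)
    n_cycles, _ = quad(integrand, A0, a_c)
    return n_cycles, a_c

for dS in (335.0, 418.0, 520.0, 670.0):   # MPa, stress ranges tested at stress ratio R = 0.5
    s_max = dS / (1.0 - 0.5)              # maximum stress corresponding to R = 0.5
    n, a_c = fatigue_life(dS, s_max)
    print(f"dS = {dS:5.0f} MPa   a_c = {a_c * 1e3:.2f} mm   N = {n:.3e} cycles")
```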
The initial geometry model of high-strength steel wire with a hemispherical pit was established according to the test in literature [Zheng, Xie, Li et al. (2018)], as shown in Fig. 11. Other parameters are the same as the model in Fig. 9. The UMAT is used to calculate the fatigue life of the model under different stress ranges. The results of numerical simulation are listed in Tab. 3. The above two results are compared in Fig. 12. The comparison shows that the UMAT written in this paper can achieve the expected goal and that the calculation results are also consistent well with the results calculated by the damnification tolerance method proposed in the literature [Zheng, Xie, Li et al. (2018)]. Division of the transitional grid and simulation of cyclic loading To study the corrosion-fatigue damage evolution process of steel wire, this paper examines steel wire with initial defects. The finite element model of the corroded steel wire is established by using the method proposed in Section 2, as shown in Fig. 6. Because of the existence of the initial pit, material failure first appears near the pit, and the damage evolution process mainly occurs around the pit. The initial pit is small, and thus its element size is also small, resulting in a dense grid around the pit. The denser the mesh, the higher the calculation accuracy, but the longer the calculation time. Therefore, a transitional grid is adopted to improve the computational efficiency and effectively reduce the amount of computation, as shown in Fig. 13. To accurately simulate crack propagation, the grid size around the pitting corrosion (z=6 mm~8 mm) is set as 0.1 mm, while the grid size on both sides of the steel wire (z=0mm~4 mm & z=10 mm~14 mm) is set as 0.5 mm, and the grid size between the two areas (z=4 mm~6 mm & 8 mm~10 mm) gradually changes from 0.1 mm to 0.5 mm. The loading mode of the steel wire specimen mentioned above is uniaxial tensile fatigue load. The stress range of the cyclic load used for simulation in this paper is constant. The applied cyclic load is realized by the script program written in the PYTHON language, and the setting of the cyclic load can be realized efficiently through the script interface of ABAQUS, a commercial finite element software. where 0 m is the initial mass of the wire and 1 m is the mass of corroded steel wire after pickling. The accelerated corrosion test was carried out for 1 day, 10 days, 30 days and 60 days. The average mass loss rate of the corroded steel wire was 0.25%, 1.85%, 2.5% and 3.33%, respectively [Lan, Xu and Liu (2018)]. In view of the discrete nature of the pitting test of steel wire, most related studies focus on the fatigue performance of uniformly corroded high-strength steel wire. However, the fracture that often occurs is a pitting defect on the surface of steel wire. Therefore, to achieve the same corrosion degree as the uniform corrosion in the literature [Lan, Xu and Liu (2018)], the upper surface in the middle part of the steel wire is set as the corrosion area in this paper, which means that the initial volume of the steel wire mentioned above is part of the established model, namely, the yellow part in Fig. 14. The size of the initial defect is 0.5 mm*0.5 mm. The modeling method proposed in Section 3 is used to generate corroded steel wire models with single pit with different mass loss rates, as shown in Fig. 15. 
The density of the model is set as 7.8 g/cm3, the elastic modulus as 212,000 MPa, and the Poisson's ratio as 0.31 [China National Standardization Management Committee (2008)]. Fatigue life of corroded steel wire After the initiation of the main crack of the high-strength steel wire, the crack will continue to propagate under the fatigue load. When the stress intensity factor of the crack reaches the fracture toughness, the fracture occurs. This criterion relation can be expressed in the form of Eq. (20) [Fu (1995)]: where σ is the stress of the specimen and Y is the crack shape factor, which, for a semielliptic surface crack under tension on the cylinder, can be expressed as follows [Nakamura and Suzumura (2013)]: where x is the depth of the pit and i D is the wire diameter. In this paper, in order to unify the damage assessment system, the stress intensity factor is used to quantify the overall damage degree of high strength steel wire. Therefore, the irregular corrosion pit is equivalent to a semi-elliptic surface crack and do not make too much distinction between the corrosion pit and the crack. Based on the above assumption, the stress intensity factor can be calculated easily by using Eq. (14) and Eq. (15). According to the literature [Mahmoud (2007)], the fracture toughness of high-strength steel wire with a diameter of 7 mm is approximately 65.7 MPa m ⋅ . Based on the method proposed in Section 4, the corrosion-fatigue damage evolution of steel wire is studied, and the fatigue life of corroded steel wire with different mass loss rates is obtained. The calculation process and results are shown in Tab. 4. Compared with the test results, the fatigue life of corroded steel wire obtained by this method is generally conservative. The error between the simulation and test results increases with the increase in the corrosion degree and the stress range for three reasons: 1. The experimental data in the literature [Lan, Xu and Liu (2018)] comes from uniform corrosion tests. The number and distribution of corrosion pits cannot be controlled, which is not quite consistent with the initial model established in Fig. 15, especially when the corrosion degree is large. 2. The pitting morphology of the initial model generated by cellular automata is also random, and the randomness of the simulation and test increases the error between them. 3. The size of the steel wire model established in this paper is not consistent with that of the test specimen. The length of the steel wire in this paper is 14 mm, while the length of the steel wire in the literature [Lan, Xu and Liu (2018)] is 150 mm. This difference in length leads to error. However, both results are in the same order of magnitude, and the average error is approximately 20.24%, which demonstrates the reliability and practicability of this method in predicting the fatigue life of corroded steel wire. Fig. 16 shows the S-N curve of high-strength steel wire with different degrees of corrosion. The fatigue performance of steel wire decreases gradually with increasing corrosion degree. Specifically, for steel wire with a mass loss rate of 0.25%, its fatigue life drops by 4.25%, 13.02% and 12.82%, respectively, relative to nondestructive steel wire under each stress range. For steel wire with a mass loss rate of 1.85%, its fatigue life drops by 49.07%, 57.40% and 58.30%, respectively, compared with nondestructive steel wire under each stress range. 
For steel wire with a mass loss rate of 2.50%, the fatigue life drops by 62.11%, 67.16% and 67.40%, respectively, relative to nondestructive steel wire under each stress range, and for steel wire with a mass loss rate of 3.33%, it drops by 80.65%, 67.16% and 76.88%, respectively. These results show that the influence of pitting corrosion on fatigue life is far greater than its influence on mass loss and that stress concentration is the main reason for the attenuation of the fatigue life of the corroded steel wire.

To further analyze the impact of pitting on the fatigue performance of steel wire, the relationship between fatigue life and corrosion degree under different stress ranges is studied, as shown in Fig. 17. When the mass loss rate is less than or equal to 2.50%, the log value of the fatigue life presents a linearly decreasing trend with the mass loss rate: an increase of 0.5% in the mass loss rate leads to a decrease of 0.1 in the log of fatigue life, namely, a decrease of approximately 20.57% in the fatigue life. When the mass loss rate is equal to 3.33%, the trend in the log value of the fatigue life is not obvious, so more mass loss rates must be analyzed.

The corrosion-fatigue damage evolution process

To study the change in pitting morphology, the corroded steel wire with a mass loss rate of 0.25% under a stress range of 418 MPa is selected to analyze the corrosion-fatigue damage evolution process. Fig. 18 shows the stress and damage distribution of the steel wire specimen at the 195th analysis step (980,000 loading cycles), when it is about to fracture. Fig. 18(a) shows that the maximum stress appears at the bottom of the pit and that the high-stress zone presents a zonal distribution perpendicular to the tension direction. The minimum stress appears at the edge of the pit, and the low-stress zone is distributed along the axis. Fig. 18(b) shows that the fatigue failure mainly occurs in the high-stress zone and presents a zonal distribution perpendicular to the tension direction, which is affected by the pit morphology.

Fig. 19 shows the damage evolution process and stress distribution on the cross section z = 7 mm of the model, in which the left panels present the damaged elements (colored blue) and the right panels present the stress distribution of the steel wire. From Fig. 19, the stress concentration occurs in the middle of the pit, where damage begins to accumulate gradually. As the loading continues, damage accumulates and penetrates into the inner zone of the steel wire far from the pit, and the damage evolution rate gradually increases. In addition, the stress distribution in the steel wire changes with the loading process. Due to the stress concentration, the elastic modulus of the material around the pit gradually attenuates under the cyclic loading; consequently, the deformation energy of the material changes and its stiffness decreases. However, under this condition, the damaged material still has the ability to transfer stress and remains continuous with the adjacent material, which increases the stress borne by the adjacent material.

Figure 19: Fatigue damage evolution process and stress distribution of corroded steel wire (z = 7 mm): (a) 260,000 loading cycles (step = 51); (b) 510,000 loading cycles (step = 101).

Fig. 20 presents the time-varying curves of the stress intensity factor and the defect size.
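The fracture check behind these curves, evaluating K = Yσ√(πx) for the current equivalent crack depth and comparing it with the fracture toughness of 65.7 MPa·√m, can be sketched as follows. The shape-factor polynomial is a placeholder standing in for the Nakamura and Suzumura expression, which is not reproduced here, and the stress and depth values are illustrative only.

```python
import math

K_IC = 65.7          # MPa*sqrt(m), fracture toughness of 7 mm wire [Mahmoud (2007)]
D = 7.0e-3           # m, wire diameter

def shape_factor(x, d=D):
    """Crack shape factor Y(x/d). Placeholder expression for a shallow surface
    crack; substitute the Nakamura-Suzumura formula used in the paper."""
    r = x / d
    return 1.12 + 0.4 * r            # assumption, illustrative only

def stress_intensity(sigma, x):
    """K = Y * sigma * sqrt(pi * x); sigma in MPa, x (equivalent depth) in m."""
    return shape_factor(x) * sigma * math.sqrt(math.pi * x)

def has_fractured(sigma, x):
    return stress_intensity(sigma, x) >= K_IC

# Illustrative check at a 418 MPa stress level for increasing equivalent depths.
for depth_mm in (0.83, 1.14, 1.64, 2.20):
    x = depth_mm * 1e-3
    K = stress_intensity(418.0, x)
    print(f"x = {depth_mm:.2f} mm -> K = {K:5.1f} MPa*sqrt(m), "
          f"fracture: {has_fractured(418.0, x)}")
```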
As shown in Fig. 20, as the loading step increases, the defect size gradually increases, the damage gradually accumulates, and the rate of damage evolution gradually increases. The maximum pit depth is 0.83334, 0.9438, 1.14426 and 1.64332 mm at 260,000, 510,000, 760,000 and 960,000 loading cycles, respectively, and the evolution rates over the successive loading intervals are 4.4184 × 10⁻³ mm per 10,000 cycles, 8.0184 × 10⁻³ mm per 10,000 cycles and 0.02493 mm per 10,000 cycles, respectively.

Fracture process of steel wire with multiple fatigue sources

The fractography of corroded steel wire where fatigue fracture occurs can be divided into three zones, namely, the crack initiation, crack propagation and instantaneous fracture zones, as shown in Fig. 21 [Zheng, Xie and Li (2017)]. Fig. 21(a) shows that the fatigue crack initiates at a certain pitting defect on the surface, namely, the fatigue source of crack initiation. The crack propagation zone is the zone extending from the center of the fatigue source to the periphery, which appears very smooth after repeated friction under high-frequency fatigue load. The instantaneous fracture zone is the zone where fracture occurs when the crack propagates to the critical size, and it has an irregular stepped shape. There are two fracture types of steel wire with multiple fatigue sources. In one type, crack initiation and propagation are generally in the same plane, as shown in Fig. 21(b). In the other type, crack initiation and propagation are not in the same plane, producing a stepped fracture, as shown in Fig. 21(c). Most studies of corrosion-fatigue fracture focus only on the case of a single pit without considering the interaction between multiple pits [Sun (2018); Hu, Meng and Hu (2016)]. To study the fracture process of steel wire with multiple fatigue sources, in this section multiple pits are set up in the steel wire, and the influence of the number and distribution of pits on the pre-corrosion fatigue life of the steel wire is studied.

Steel wire with multiple planar fatigue sources

In this section, five steel wires with different corrosion morphologies are analyzed, as shown in Figs. 22-24. Case 1 is the corroded steel wire with a single pit. The location of the pit is shown in Fig. 22(a), and the models of the corroded steel wire with corresponding mass loss rates of 1.85% and 3.33% are shown in Fig. 23(a) and Fig. 24(a), respectively. Case 2 is the corroded steel wire containing two pits located in the same radial section, with the angle between the two pits set to 90°. The locations of the pits are shown in Fig. 22(b), and the models of the corroded steel wire with corresponding mass loss rates of 1.85% and 3.33% are shown in Fig. 23(b) and Fig. 24(b), respectively. Case 3 is the corroded steel wire with three pits in the same radial section, with the angle between adjacent pits set as 45°. The locations of the pits are shown in Fig. 22(c), and the models of the corroded steel wire with corresponding mass loss rates of 1.85% and 3.33% are shown in Fig. 23(c) and Fig. 24(c), respectively. Case 4 is also a corroded steel wire with three pits in the same radial section, but the angle between adjacent pits is set as 60°. The locations of the pits are shown in Fig. 22(d), and the models of the corroded steel wire with mass loss rates of 1.85% and 3.33% are shown in Fig. 23(d) and Fig. 24(d), respectively. Case 5 is also a corroded steel wire with three pits in the same radial section, but the angle between adjacent pits is set as 90°.
The locations of the pits are shown in Fig. 22(e), and the models of the corroded steel wire with mass loss rates of 1.85% and 3.33% are shown in Fig. 23(e) and Fig. 24(e), respectively.

Comparing cases 1, 2 and 5 shows that, under the same mass loss rate and angle between pits, the fatigue life of the corroded steel wire increases gradually with the number of pits. Fig. 25 shows the stress contours of the corroded steel wire with a mass loss rate of 3.33% at the 73rd analysis step (370,000 loading cycles) in these three cases. As shown in Fig. 25, as the number of pits increases, the depth of the pits decreases, which reduces the stress concentration in the pit. For example, the maximum stress of the steel wire containing a single pit is 1.22 times that of the steel wire containing two pits and 1.5 times that of the steel wire containing three pits, which leads to an increase in the fatigue life of the corroded steel wire.

Fig. 26 shows the time-varying curves of the stress intensity factor of corroded steel wires with a mass loss rate of 3.33% in cases 1, 2 and 5. As shown in Fig. 26, the variation trends of the three curves are basically consistent, which indicates that the influence of the number of pits on the damage evolution rate is slight when the angle between adjacent pits equals 90°. Therefore, the main reason for the increase in fatigue life with the number of pits is the difference in initial damage (the initial pit depth) between the cases. Points A, B and C in Fig. 26 correspond to the 73rd analysis step. The damage contours of the corroded steel wires with a mass loss rate of 3.33% in cases 1, 2 and 5 at this step are shown in Fig. 27(a). From Fig. 27(a), the corroded steel wire in case 1 has fractured. At this moment, in cases 2 and 5, the damage degree around the pit with the maximum depth is obviously greater than that around the other pits, but the damage areas around the corrosion pits evolve independently and do not affect each other. Points D and E in Fig. 26 correspond to the 101st analysis step. The damage contours of the corroded steel wires with a mass loss rate of 3.33% in cases 2 and 5 at this step are shown in Fig. 27(b). From Fig. 27(b), the corroded steel wire in case 2 has also fractured, and the damage areas of its two pits are connected. For the corroded steel wire in case 5, the damage areas gradually expand, and the damage areas around two of the pits are connected to each other. Point F in Fig. 26 corresponds to the 127th analysis step. The damage contour of the corroded steel wire with a mass loss rate of 3.33% in case 5 at this step is shown in Fig. 27(c). From Fig. 27(c), the damaged areas of the three pits on the section of the corroded steel wire are connected to each other, and the steel wire breaks. These results show that in planar multi-fatigue-source fracture, damage usually appears first at a pit with a relatively high degree of corrosion, and during damage evolution the damage areas near each fatigue source gradually connect with each other, eventually leading to fatigue failure of the steel wire specimen.

Comparing cases 3, 4 and 5 shows that, under the same mass loss rate and number of pits, the fatigue life of the corroded steel wire increases gradually with the angle between pits.
However, the smaller the angle between the pits, the higher the stress concentration in the pits and the larger the maximum stress value, which is unfavorable for the fatigue performance of corroded steel wire, as shown in Fig. 28. The mass loss rates of the steel wires in Fig. 28 are 1.85%. Fig. 29 shows the time-varying curves of the stress intensity factor of corroded steel wires with a mass loss rate of 1.85% in cases 3, 4 and 5. As shown in Fig. 29, the damage evolution rate of the steel wire in case 3 is significantly higher than that in cases 4 and 5. So, even though the initial damage of the steel wire in case 3 is less than that in the other two cases, which is caused by the random distribution of pitting defects, the fatigue life in case 3 is the lowest among the three cases. Points A, B and C in Fig. 29 correspond to the 129th analysis step. The damage contours of the corroded steel wires with a mass loss rate of 1.85% in cases 3, 4 and 5 at this step are shown in Fig. 30. From Fig. 30, the damage areas around the three pits in cases 3 and 4 have connected, which indicates that the steel wires have fractured or are about to fracture, while for the steel wire in case 5, the damage areas around the pits are still independent of each other, and further loading is needed before the steel wire fractures.

Steel wire with stepped multi-fatigue sources

In this section, four steel wires with different corrosion morphologies are studied, as shown in Fig. 31, Fig. 32 and Fig. 33. Case 1 is the corroded steel wire with a single pit. The location of the pit is shown in Fig. 31(a), and the models with mass loss rates of 1.85% and 3.33% are shown in Fig. 32(a) and Fig. 33(a), respectively. Case 2 is the corroded steel wire with two pits, which are located in different sections and 2 mm apart along the length direction of the wire. The locations of the pits in this case are shown in Fig. 31(b), and the models with mass loss rates of 1.85% and 3.33% are shown in Fig. 32(b) and Fig. 33(b), respectively. Case 3 is the steel wire with two pits, which are 4 mm apart along the length direction of the wire. The locations of the pits in this case are shown in Fig. 31(c), and the models with mass loss rates of 1.85% and 3.33% are shown in Fig. 32(c) and Fig. 33(c), respectively. Case 4 is the steel wire with three pits, which are located in different sections, with the distance d between adjacent pits set as 2 mm. The locations of the pits in this case are shown in Fig. 31(d), and the models with mass loss rates of 1.85% and 3.33% are shown in Fig. 32(d) and Fig. 33(d), respectively. The user-defined material subroutine UMAT is used to calculate the fatigue life of the corroded steel wire under the cyclic load with a stress range of 418 MPa. The method of calculating the fatigue life is the same as in Section 5.1.2, and the calculation results are shown in Tab. 6.
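A sketch of the cycle-block fatigue-life bookkeeping used here (and in Section 5.1.2) is given below. The S-N constants, the number of cycles represented by one analysis block, and the element bookkeeping are illustrative assumptions standing in for the damage model actually implemented in the UMAT.

```python
# Minimal sketch of block-wise fatigue damage accumulation with element
# deactivation ("life-and-death" elements). All values are illustrative only.
CYCLES_PER_BLOCK = 5_000      # loading cycles represented by one analysis step

def cycles_to_failure(stress_range_mpa, C=1.0e14, m=3.0):
    """Basquin-type S-N relation N_f = C * (delta_sigma)^(-m) (assumed form)."""
    return C * stress_range_mpa ** (-m)

def run_block(damage, element_stress_ranges):
    """Advance every surviving element by one cycle block.

    damage: dict element_id -> accumulated damage D in [0, 1]
    element_stress_ranges: dict element_id -> local stress range (MPa)
    Returns the set of elements that failed (D >= 1) in this block.
    """
    failed = set()
    for eid, d_sigma in element_stress_ranges.items():
        if damage.get(eid, 0.0) >= 1.0:
            continue                              # already deactivated
        damage[eid] = damage.get(eid, 0.0) + CYCLES_PER_BLOCK / cycles_to_failure(d_sigma)
        if damage[eid] >= 1.0:
            failed.add(eid)                       # stiffness set to ~0 next block
    return failed

# Example: two elements; the one at the pit bottom sees a higher stress range.
damage = {}
stresses = {"pit_bottom": 620.0, "far_field": 418.0}
step = 0
while damage.get("pit_bottom", 0.0) < 1.0 and step < 1000:
    step += 1
    run_block(damage, stresses)
print(f"pit-bottom element failed after ~{step * CYCLES_PER_BLOCK:,} cycles")
```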
Regardless of whether the mass loss caused by pre-corrosion is large or small, the fatigue life of the corroded steel wire is lowest in case 1 and highest in case 4. Specifically, when the mass loss rate and the distance between the pits are the same, the fatigue life of the corroded steel wire is proportional to the number of pits. Obviously, this result is related to the depth of the pit and is the same as the conclusion reached for the steel wire with a planar multi-fatigue source. For example, as shown in Fig. 34, when the mass loss rate is 1.85% and the analysis step is 95, the maximum pit depth in case 1 is 1.428 mm, which is 1.2 times and 2.3 times those in cases 2 and 3, respectively. As a result, the maximum stress in case 1 is 1.5 times and 2.62 times those in cases 2 and 3, and the fatigue performance of the corroded steel wire in case 1 is lower than in cases 2 and 3.

Fig. 35 shows the time-varying curves of the stress intensity factor of corroded steel wires with a mass loss rate of 1.85% in cases 2 and 4. As shown in Fig. 35, the variation trends of the two curves are basically consistent, which indicates that when the distance between pits equals 2 mm, the influence of the number of pits on the damage evolution rate is slight. Therefore, as mentioned above, the main reason for the increase in fatigue life with the number of pits is the difference in initial damage (the initial pit depth) between the cases. Points A, B, C and D in Fig. 35 correspond to the 81st, 101st, 151st and 165th analysis steps, respectively. The damage contours of the corroded steel wires with a mass loss rate of 1.85% in cases 2 and 4 at these steps are shown in Fig. 36. From Fig. 36, the number of pits has little influence on the fracture of the stepped multi-fatigue-source wire. When multiple pits are distributed along the axial direction, they are independent of each other and the damage areas around the pits evolve independently. When the damage around a pit reaches the critical state of fracture, the steel wire breaks. This process is consistent with the damage evolution of steel wire with a single pit.

The comparison of cases 2 and 3 shows that, when other conditions are the same, the larger the distance between pits, the higher the fatigue life of the corroded steel wire. Fig. 37 shows the stress contours of the corroded steel wire in cases 2 and 3 (mass loss rate equal to 3.33%): (a) case 2; (b) case 3. The stress distributions of the two cases are similar. However, as the distance between pits increases, the volume of the low-stress zone increases and the maximum stress decreases, leading to enhanced fatigue performance and an increase in the fatigue life of the steel wire.

Conclusions

1) In this paper, cellular automata (CA) technology is adopted to simulate the damage evolution process of metal corrosion in three-dimensional cellular space. A metal/passive film/electrolyte system is considered as an automata system with special local rules/transformation rules and discretized into a CA system on an ordered cellular grid to obtain the morphology of the metal corrosion pits on the surface as well as their size and distribution. Then, by using the griddata() function to interpolate the generated pitting data and compiling the conversion function, the model can be imported into AutoCAD, Rhino and ABAQUS in succession to generate the required three-dimensional geometric model of corroded steel wire with irregular pits. The model is based on the mechanism of metal anodic dissolution and can reasonably describe the irregular morphology and distribution of pits in line with the actual situation.

2) A structural solving process based on a cyclic block is presented in this paper, and the solving processes of the seemingly independent cyclic blocks are connected to form a nonlinear analysis process. At the same time, the calculation of damage is mainly tied to the cyclic block. The concept of the life-and-death element method is used to control the failure of elements.
Based on the fatigue damage model, the user-defined material subroutine is written, which can adequately simulate the material damage evolution and failure process under cyclic loads. The effectiveness of the material subroutine is verified by comparing the numerical simulation results with the test results for a steel wire specimen without an initial defect and for one with a hemispherical pit under uniaxial tensile conditions.

3) Based on the numerical simulation method proposed in this paper, the pre-corrosion fatigue life of steel wire is calculated. The results show that the fatigue life of the corroded steel wire obtained by this method is generally conservative, especially when the corrosion degree and the stress range are large. However, considering the randomness of simulations and experiments, this method can basically achieve the expected goal and has high reliability and practicability in predicting the fatigue life and simulating the damage evolution of corroded steel wire.

4) Based on the method proposed in this paper, the fatigue life of corroded steel wires with different degrees of corrosion under different stress ranges is calculated. The results show that as the corrosion degree increases, the fatigue performance of the steel wire decreases gradually; the effect of pitting corrosion on fatigue life is far greater than that of the mass loss; and the stress concentration is the main reason for the decline in the fatigue life of pre-corroded steel wire. In addition, when the mass loss rate is relatively small, the log value of the fatigue life shows a linearly decreasing trend with the mass loss rate. Moreover, the damage evolution process is studied: under fatigue load, stress concentration occurs at the bottom of the corrosion pit, where damage gradually accumulates and penetrates into the interior of the steel wire, and the damage evolution rate gradually increases.

5) The fracture process of steel wire with multiple fatigue sources and the influence of the number and distribution of pits on the pre-corrosion fatigue life of steel wire are studied. The results show that the planar pitting distribution has a great influence on the damage evolution. The fracture of steel wire with a planar multi-fatigue source is usually initiated by one of the pits with a high degree of corrosion; during damage evolution, the damage areas around the fatigue sources gradually connect with each other, which eventually leads to the fatigue failure of the steel wire specimen. For the stepped pitting distribution, the pits are independent of each other, and the damage evolution process is consistent with that of the steel wire containing a single pit. The fatigue life of steel wire is positively correlated with the number of corrosion pits and with the angle and distance between pits.
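The pit-surface interpolation step mentioned in conclusion 1) can be illustrated with SciPy's griddata; the pit data, grid spacing, and output handling below are placeholder assumptions, and the subsequent conversion into AutoCAD/Rhino/ABAQUS geometry is not shown.

```python
# Illustrative sketch: interpolate scattered CA pit-depth data onto a regular
# surface grid before converting it into a solid model. Values are made up.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Scattered (y, z, depth) points such as a cellular-automata run might produce
# for a single pit on the wire surface (coordinates in mm).
pts = rng.uniform(low=[6.5, 6.5], high=[7.5, 7.5], size=(200, 2))
depth = 0.5 * np.exp(-((pts[:, 0] - 7.0) ** 2 + (pts[:, 1] - 7.0) ** 2) / 0.05)

# Regular grid over the corrosion area with 0.1 mm spacing (matching the fine mesh).
yi, zi = np.meshgrid(np.arange(6.0, 8.01, 0.1), np.arange(6.0, 8.01, 0.1))
pit_surface = griddata(pts, depth, (yi, zi), method='cubic', fill_value=0.0)

# pit_surface holds the interpolated pit depth at each grid node and can be
# exported (e.g., as a point cloud) for the CAD/FE conversion chain.
print(pit_surface.shape, float(np.nanmax(pit_surface)))
```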
Measuring current consumption of locally grown foods in Vermont: Methods for baselines and targets

Numerous studies have measured the economic impact of increased consumption of locally grown foods, and many advocates have set goals for increasing consumption of locally grown foods to a given percentage. In this paper, we first apply previously developed methods to the state of Vermont, to measure the quantity and value of food that would be consumed if the USDA Dietary Guidelines were followed. We also assess the potential of locally grown foods to meet these guidelines, finding that meeting dietary guidelines with a local, seasonal diet would bring economic benefit, in this case US$148 million in income for Vermont farmers. A missing piece of information has been: what is the current percentage of locally grown food being consumed in a given city, state, or region? The Farm to Plate Strategic Plan, a 10-year plan for strengthening Vermont's food system, attempted to answer this question. To date, we know of no credible set of methods to precisely measure the percentage of food consumed that is locally grown. We collect data from a variety of sources to estimate current local consumption of food. We were able to measure and account for about US$52 million in local food expenditures, equal to about 2.5% of all food expenditures in Vermont. We then discuss limitations and suggestions for improving measurement methods moving forward.

a Department of Community Development and Applied Economics, University of Vermont, 205H Morrill Hall, Burlington, Vermont 05405 USA; b Center for Rural Studies, University of Vermont, 206 Morrill Hall, Burlington, Vermont 05401 USA; c 161 Austin Drive, #71, Burlington, Vermont 05401 USA; d Vermont Sustainable Jobs Fund, 3 Pitkin Court, Suite 301e, Montpelier, Vermont 05602 USA; e UVM Center for Sustainable Agriculture and Co-Chair, Sustainable Agriculture Council, 109 Carrigan Drive, Burlington, Vermont 05405 USA. * Corresponding author: Florence Becot; +1-802-656-9897; fbecot@uvm.edu

Conflict of interest statement: This material is based upon work supported by the Cooperative State Research, Education, and Extension Service, U.S. Department of Agriculture, under Award No. 2010-34269-20972. The research work was done in collaboration with all the co-authors on the article. UVM affiliates did not have a contractual relationship with the Vermont Sustainable Jobs Fund. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the U.S. Department of Agriculture.
Introduction and Literature Review

Eating locally grown food has become quite popular in recent years. In 2007, the word "locavore" was named the "Oxford Word of the Year" (Oxford University Press, 2007). The cause of eating locally is championed by well-known authors in the popular press (Kingsolver, 2007; Pollan, 2008). Scholars have also expressed interest in the potential benefits of eating locally as part of a sustainable or community-based food system (Feenstra, 2002; Hinrichs, 2003). Among the purported benefits of increasing consumption of locally grown foods are improved farm profitability and viability, farmland conservation, improved public health, and closer social ties between farmers and consumers (Andreatta & Wickliffe, 2002; Conner, Colasanti, Ross, & Smalley, 2010; Conner & Levine, 2006; Lyson, 2004). Selling locally grown food is a strategy that allows small and medium-sized farms to differentiate their products in the marketplace. These same farms also contribute to a broad array of indicators of social, economic and environmental well-being (Kirschenmann, Stevenson, Buttel, Lyson, & Duffy, 2008; Lobao, 1990; Lyson & Welsh, 2005). Community-based food systems can engage diverse stakeholders with many different motivations, although some scholars caution against associating "local" with all things virtuous (Bellows & Hamm, 2001; Born & Purcell, 2006; Conner, Cocciarelli, Mutch, & Hamm, 2008; Oglethorpe, 2008; Wright, Score, & Conner, 2008).

As interest in the social, health, environmental, and, in particular, farm- and community-based economic benefits of local food consumption has grown, the state of Vermont passed legislation to create the Farm to Plate Strategic Plan, a 10-year plan for strengthening Vermont's food system. Vermont's food system (with elements including nutrient management, farm inputs, production, processing, distribution, wholesaling, and retailing) is an important driver of economic prosperity and job creation in the state, estimated to include 57,089 jobs (16% of all private-sector jobs) at 6,984 farms and 4,104 other food-related businesses (13% of all private-sector establishments) (Vermont Sustainable Jobs Fund, 2012). Total output from food production in the state is estimated at US$2.7 billion (Vermont Sustainable Jobs Fund, 2011). The Farm to Plate Strategic Plan contracted with a consultant to conduct an economic impact analysis using the economic forecasting software REMI. The model estimated that increasing in-state production by 5% (over an assumed 5% baseline) over 10 years would result in the creation of about 1,700 new private-sector jobs in the food system, along with an additional US$213 million in economic output annually (Vermont Sustainable Jobs Fund, 2012).
This study attempts to create baseline measures for the Farm to Plate Initiative. Specifically, it measures current consumption and upper bounds for consumption under specific dietary scenarios. To be clear, it does not advocate for Vermont farmers growing exclusively for local markets. Rather, it attempts to understand the current situation around local food consumption in Vermont and to estimate how much local food could be consumed, with an eye toward informing efforts to foster more local food consumption and its concomitant community and economic benefits. We begin by asking the following questions: what quantities of foods do Vermonters eat (under two dietary scenarios); and what volumes (in dollar value and acreage) are needed to meet these diets with a locally grown, seasonal diet? Following this, we present methods and results for actual current consumption.

Many Vermonters are interested in the extent to which the state can feed itself through local food production. Many advocates have set goals for increasing consumption of locally grown foods to a given percentage. Unfortunately, no comprehensive data exist to indicate exactly how much and what types of food Vermonters are currently consuming. We lack methods for determining the current percentage of locally grown food being consumed in a given city, state, or region. One objective of this study is to quantify the amount of locally produced food that has been consumed by Vermonters, using the best available data sources.

Previous Assessments of Local Demand

Many studies of local food have focused on the demand side of the equation, identifying drivers of demand, and demographic, psychographic, and behavioral attributes of local food consumers (Bean Smith & Sharp, 2008; Brown, 2003; Conner, Colasanti, et al., 2010; Ostrom, 2005; Thilmany, Bond, & Bond, 2008; Zepeda & Leviten-Reid, 2004; Zepeda & Li, 2006). Key drivers of demand include geographic proximity, relationships with farmers, and support for local economies.

Assessments of Production

Given the magnitude of the global agri-food system, some observers, such as Meter and Rosales (2001), bemoan the lost opportunity for community economic development when food production and consumption are disconnected. In light of this, a number of studies have looked at the capacity of a given region or state to supply its own food and the potential economic impacts of increased consumption of local food under different dietary scenarios. A series of studies from Cornell University finds that New York state could provide 34% of its total food needs (with rural upstate regions predictably being more self-sufficient than New York City), and that dietary intake influences the acreage needed to meet human consumption needs (Peters, Bills, Lembo, Wilkins, & Fick, 2009; Peters, Wilkins, & Fick, 2007).
Import Substitution and Dietary Scenario Measurements

Other studies look at the economic impact of meeting local food consumption targets. Using the Impact Analysis for Planning economic impact modeling system (IMPLAN) input-output model, an Iowa State University researcher modeled the impact of meeting United States Department of Agriculture (USDA) dietary guidelines with Iowa-grown fresh produce for one-quarter of the calendar year, finding that this change would sustain, either directly or indirectly, US$462.7 million in total economic output, US$170 million in total labor income, and 6,046 total jobs in Iowa (Swenson, 2006). A similar study, which looked at potential impacts of increased fruit and vegetable production for local consumption in a six-state region of the upper Midwest, found that more than a billion dollars in income and nearly 10,000 jobs would result (Swenson, 2010). A study in Michigan used the IMPLAN model to measure job and income impacts of meeting public health dietary recommendations with locally grown fruits and vegetables (Conner, Knudson, Hamm, & Peterson, 2008). In all cases, the models suggest large increases in income to farmers and in job creation, even accounting for the opportunity costs of transitioning field crop acreage into produce production.

A key limitation of the above studies (Meter & Rosales, 2001; Peters, Bills, et al., 2009; Peters, Wilkins, et al., 2007; Conner, Knudson, et al., 2008; Swenson, 2006; 2010) is that they all measure the outcome or impact of hypothetical changes: what would happen if some consumption pattern were to change. An obvious gap in the literature is how much locally grown food is actually being consumed. One place to start this calculation is with upper and lower bounds.

Upper and Lower Bounds

Timmons, Wang, and Lass (2008) demonstrated a method for calculating the upper bound for the proportion of locally grown food in a given state or region. Their research measured the ratio of per capita production of a given crop or crop category divided by per capita consumption (i.e., disappearance). Their results for Vermont show that for some crops and products, most notably dairy, production far exceeds consumption, while for fruits and vegetables, Vermont can only produce a fraction (25% and 36%, respectively) of what is consumed in-state. Their calculations did not take into consideration dietary requirements or seasonality. This figure also omits the proportion of food that is grown in Vermont and consumed elsewhere (likely to be relatively small for produce, but very large for dairy). By comparison, using data from the Consumer Expenditure Survey and Vermont Department of Taxes, we estimate that US$2.7 billion is spent on food annually in Vermont by residents and nonresident tourists, including both at-home and away-from-home consumption (United States Department of Labor, 2010; Vermont Department of Taxes, 2010).
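The Timmons-style upper bound just described is a simple ratio; a minimal sketch follows, in which the production and consumption figures are placeholders rather than the study's data (the vegetable and fruit fractions simply echo the 36% and 25% cited above).

```python
# Upper bound on the local share of consumption for a product category:
# at most min(in-state production, in-state consumption) / in-state consumption.
def local_share_upper_bound(per_capita_production, per_capita_consumption):
    return min(per_capita_production, per_capita_consumption) / per_capita_consumption

examples = {
    "dairy":      {"production": 5.00, "consumption": 1.0},  # production >> consumption
    "vegetables": {"production": 0.36, "consumption": 1.0},
    "fruit":      {"production": 0.25, "consumption": 1.0},
}
for product, v in examples.items():
    bound = local_share_upper_bound(v["production"], v["consumption"])
    print(f"{product:11s} upper bound on local share: {bound:.0%}")
```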
A possible lower bound for the proportion of local food is the USDA National Agricultural Statistics Service (NASS) figure of food sold directly to consumers, which is available in the Census of Agriculture (USDA, 2007). This figure does not distinguish between direct sales made to Vermont residents and out-of-state residents. Also, at least one study suggests that NASS undercounts the true value of direct food purchases (Conner, Smalley, Colasanti, & Ross, 2010). Similar undercounting was found in another study: the 2008 Organic Production Survey (OPS) reported sales at a higher level than the 2007 Census, while the OPS survey reported data from fewer farms (Hunt & Matteson, 2012). Furthermore, Lev and Gwin (2010) argue that the counting of direct-marketing sales is difficult and not well understood.

Estimation of Current and Target Consumption Patterns in Vermont

This estimate uses methods developed by Conner, Knudson, et al. (2008) and Abate, Conner, Hamm, Smalley, Thomas, and Wright (2009) to measure the current consumption of fruits, vegetables, dairy, and proteins in Vermont (regardless of source), as well as the levels of consumption if USDA dietary guidelines were followed. We chose these as a dietary benchmark as they are well known and permit relatively easy replication of our methods. We recognize the dietary guidelines' contested and politicized nature and therefore make no claim, for or against, that they truly guide optimal consumption. For products that can be grown in Vermont, yield and price data (primarily from USDA, as used by Conner et al., 2008, and Abate et al., 2009) are used to calculate the number of acres that would be needed and the revenue farmers would receive. The basic questions leading the analysis are as follows:

1. How many servings of fruits, vegetables, proteins and dairy should Vermonters consume according to USDA dietary guidelines? This is subsequently called the "Recommended" diet.
2. Assuming Vermonters' consumption patterns mirror those of the United States as a whole (according to USDA consumption data), how many servings of each do they actually eat? This is subsequently called the "Average" diet.
3. If Vermonters met these two diets with locally grown foods, as much as is practical given climate and land availability, how many acres would be required to produce them at current yield levels and, given prevailing prices, how much revenue would this generate for Vermont farmers?

Next, we calculated current annual consumption of individual fruit, vegetable, protein, and dairy products (per capita consumption times state population) for the Average diet. These figures were multiplied by the Recommended-to-Average ratio in table 1 for the figures listed in the Recommended diet. We assumed that all meat (beef, pork and chicken), 20 vegetables, and 12 fruits can be grown in Vermont. Following methods developed by Conner, Knudson, et al. (2008) and Abate et al. (2009), the seasonal availability of fruits and vegetables was taken from a Michigan State University Extension (2004) publication. We assumed that locally grown fruits and vegetables are only available at these times. Given Vermont's short growing season, we assume Vermont's seasonal availability of vegetables is 80% that of Michigan's.¹
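The scenario arithmetic just described (per capita consumption scaled to the state population and the Recommended-to-Average ratio, then converted to acres and farmgate revenue with yields and prices) can be sketched as follows. All input numbers in the example are illustrative placeholders, not the USDA or study figures.

```python
# Illustrative sketch of the dietary-scenario calculation; placeholder inputs.
VT_POPULATION = 626_000          # approximate state population

def scenario_totals(per_capita_lbs, rec_to_avg_ratio, yield_lbs_per_acre,
                    price_per_lb, seasonal_share=1.0):
    """Return (acres needed, farmgate revenue) for meeting the Recommended diet
    for one product with locally grown, in-season supply."""
    total_lbs = per_capita_lbs * VT_POPULATION * rec_to_avg_ratio * seasonal_share
    acres = total_lbs / yield_lbs_per_acre
    revenue = total_lbs * price_per_lb
    return acres, revenue

# Example product (all numbers made up): apples eaten fresh.
acres, revenue = scenario_totals(per_capita_lbs=16.0, rec_to_avg_ratio=2.23,
                                 yield_lbs_per_acre=20_000, price_per_lb=0.40,
                                 seasonal_share=0.5)
print(f"~{acres:,.0f} acres, ~US${revenue:,.0f} in farmgate revenue")
```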
We used price data and yield data from Conner, Knudson, et al. (2008) and Abate et al. (2009), primarily based on USDA NASS and ERS data, to calculate the revenue generated and acres needed if current and recommended consumption levels were met, when available, with Vermont-grown foods (table 2). Note that these are total acres needed, not additional acres of production. Note also that, as assumed in Conner, Knudson, et al., 2008, if fruit and vegetable consumption were increased to Recommended levels, Vermonters would increase consumption proportionally. Specifically, for the example of fruit, in aggregate Vermonters would eat 2.23 times as many items that grow in Vermont (like apples) as well as items that do not (like bananas). This assumes that consumer tastes remain consistent: people who like apples eat more apples, and so on. Last, comparing total sales data with revenue from the Recommended diet, we find that currently Vermont is producing more fruits and dairy than the state population needs for the Recommended diet, while it does not produce enough vegetables and protein. This finding has potential economic and political implications that we will address in the discussion section.

Methods and Results for Estimating Actual Current Consumption of Local Food

We utilized secondary data from two government sources. We used U.S. Census non-employer data (United States Department of Commerce, 2009) for food manufactured in Vermont by small-scale businesses, and USDA NASS (USDA, 2007) figures measuring food sales direct to consumers. We also made direct inquiries to several types of stakeholders to fill data gaps:

• Institutional food service operations that purchase and serve locally grown foods, including K-12 schools, colleges and universities, and hospitals. This was done in a number of ways, including by direct inquiry to the food service director, via local food hubs, statewide nonprofits, and school-led buying cooperatives;
• Statewide nonprofit organizations that conduct surveys on sales to farmers' markets, community supported agriculture (CSA) operations, and restaurants;
• Produce distributors and food hubs;
• Retailers (mainstream grocery, food cooperatives and natural food stores); and
• State government.

In each case, members of the research team asked for their total 2010 sales of locally grown foods. The data were then analyzed by the team for credibility and to detect and eliminate double counting. For example, we looked at purchase figures from a hospital and subtracted out certain purchases that were characterized as "local" but had no local content (e.g., soda). In addition, we avoided double counting by looking at both reports from institutional buyers and wholesalers known to sell to them, subtracting out those figures as well and crediting these figures only to the hospital rather than the distributor.
We received no data from several key sources, including Vermont's three major retail grocery store chains. It is not clear whether these sources are unwilling (they believe the data is proprietary and confidential) or unable (they do not track local products in a way which makes reporting possible) to provide such data. In 2013, efforts will be made to collect additional data from locally owned, independent grocers, and food service companies operating in Vermont's colleges and universities. The early protocols and a report of preliminary findings were shared with the project advisory committee, consisting of scholars and practitioners well known for their interest and expertise in this area, namely Mike Hamm and Rich Pirog of Michigan State University, Christian Peters of Tufts University, and Ken Meter of the Crossroads Resource Center. Many of the ideas in the discussion were generated in conversations and communications with them. Results of our inquiries are presented in table 3.

Estimation of Current and Target Consumption Patterns in Vermont

We found that in order to meet the dietary guidelines, Vermonters need to increase their consumption of fruits, vegetables, and dairy while decreasing their consumption of meat. These dietary changes provide the Vermont agricultural sector with potential new markets. When looking at the current level of production in the state, we found that the state produces more than enough fruits and dairy to meet the Recommended diet, but not enough vegetables and protein. Our findings, particularly concerning fruit consumption and production, differ from those of Timmons et al. (2008) in part because our analysis focused on locally and seasonally available products.

Based on these findings, at least two scenarios emerge. First, a state could devote all resources only to feeding its own people, a type of autarky. In this scenario in Vermont, dairy and fruit production would need to be scaled down, leaving the state with excess capacity, and concomitant loss of revenue and employment in these sectors, while production of protein and vegetables would have to be scaled up. This scenario would require major restructuring and would likely be both politically and economically untenable.

In another scenario, each state could coordinate with others in the region, with each pursuing a more localized and regionalized diet. Such coordination would allow access to regional markets and create a smoother transition for the regional agricultural economy. It would be important for other states to conduct a similar kind of analysis in order to inform future allocation and align food system development with local communities' goals, such as economic development and nutritionally improved diets, and around those products which are best suited for the soils, climate, land base, and existing infrastructure of a given state in the region.

Though extreme, these scenarios highlight the need for collaboration between states, at least at the regional level. Collaboration should take place not only at the planning level, but also at the production, processing, and distribution levels. Suggestions for collaboration in terms of data needs and research are highlighted below.
Estimating Actual Current Consumption of Local Food

Our estimate of about US$52 million makes up a small percentage (2.5%) of Vermont's US$2 billion total food bill. We had a great deal of cooperation from many partners and agencies in this research, but still lack data of a potentially large magnitude from a few sources. Nationally, the largest purveyors of local food are distributors and retailers (Low & Vogel, 2011), so their lack of response is significant. At this time, most see too little (or no) benefit and/or too high a cost in reporting these figures. Given current food safety protocols, they are able to trace back foods to the farm of origin in case of a recall, but they may consider it too costly to measure local food sales as a routine practice. Methods must be developed which either automatically gather this information or circumvent the need for it. Below we discuss the limitations of our study and potential strategies for overcoming them.

Limitations and Strategies

Regardless of what strategies are used, we have identified many lingering issues that need to be addressed. The Michigan Good Food Charter requires 50% local ingredients (Colasanti et al., 2010). Should a single standard be used, and if so, which one? Furthermore, sourcing of products can change depending on the time of the year. How should this be addressed? Again, measurement at the farmgate level would address these issues.

• Fluid milk may be difficult to trace back to a single farm, given the degree to which it is pooled from multiple farms. How can this be counted with accuracy?
• With increased attention to the capacity and prospects for regional food systems, interstate cooperation, notably harmonization of standards and definitions, will be needed to conduct these types of studies on regional scales. Vermont's Farm to Plate Initiative and Michigan's Good Food Charter are two prominent examples from which to start.

Based on our work so far, we foresee the following opportunities and obstacles for a more comprehensive and accurate count. Potential strategies include:

• Work with agencies already collecting data from farmers to get information directly from farmers. One promising idea is to work with the state or regional National Agricultural Statistics Service, as it is capable of developing and administering surveys with high response rates at affordable rates (M. Hamm, personal communication, June 12, 2012). One method would be to ask for total farm sales revenue, and then to list the percentages sold to various market channel categories (summing to 100%). As emphasized above, care must be taken, however, to avoid putting all the data collection burden on farmers without consideration of their time for data collection. In particular, if farmers are to be the primary source, methods must compensate farmers, minimize their burden, and be feasibly implemented. Even if farmer data collection is put in place, these suggestions will serve the dual purpose of encouraging local food purchase and triangulating farmer-generated data.
• Work with local buyers to incorporate local product supply requirements into bids and requests for proposals within their procurement practices. Effective examples could be shared and tested elsewhere to develop a set of tools or lists of best practices.
• Building on the point above, work with state legislatures to require public institutions to annually report this information.
• Use the public relations power ("bully pulpit") of local food advocates to publicly praise businesses that provide data.
Conclusions

The potential economic impact of increased consumption of locally grown food is of interest to policy makers and other stakeholders, yet to date little research has been conducted that estimates current consumption, a benchmark against which progress can be measured. This paper began by estimating the quantities of food, potential farmgate income, and number of acres needed to supply Vermont's current diet, as well as a diet in line with USDA dietary guidelines. We then developed and utilized a set of methods to measure current consumption of locally grown foods, and shared and discussed outcomes with an advisory committee of national experts. We were unable to gather data from several sources, creating a significant gap in our study. We then discussed the potential to use farm-level data to address key limitations.

Our study focuses on one state, but as discussed above, collaboration among states in a region would foster a smoother transition to a more localized and regionalized agricultural economy. The Northeast region has a track record of regional collaboration through the Northeast Sustainable Agriculture Working Group (NESAWG), whose mission is to "build a more sustainable, healthy, and equitable food system for our region" (Northeast Sustainable Agriculture Working Group, 2013). Using a community of practice like NESAWG is crucial to continuing to improve the methodology to measure local consumption and the robustness of data collection. Efforts to test and build on the methods discussed in this paper, and to learn from others' work, are already underway.

The strengths of this paper include being the first attempt known to the authors to comprehensively measure local food consumption statewide, as well as the degree of cooperation from stakeholders and the project advisory committee, which led to the lessons learned above and the opportunity to improve on this pilot effort. The weaknesses are the lack of data from the likely largest sources of local food and the other barriers discussed above. We hope our study assists scholars and practitioners elsewhere in their efforts and facilitates development of sound methods to address this important but difficult question.

Table 1. Annual Consumption for Vermont: Average and Recommended

Table 2. Revenue and Acreage Required for Current and Recommended Diets. a USDA Census of Agriculture (USDA, 2009). b 1 acre = 0.40 hectare

Table 3: Summary of Results
Evaluation of Taterapox Virus in Small Animals

Taterapox virus (TATV), which was isolated from an African gerbil (Tatera kempi) in 1975, is the most closely related virus to variola; however, only the original report has examined its virology. We have evaluated the tropism of TATV in vivo in small animals. We found that TATV does not infect Graphiurus kelleni, a species of African dormouse, but does induce seroconversion in the Mongolian gerbil (Meriones unguiculatus) and in mice; however, in wild-type mice and gerbils, the virus produces an unapparent infection. Following intranasal and footpad inoculations with 1 × 10⁶ plaque-forming units (PFU) of TATV, immunocompromised stat1−/− mice showed signs of disease but did not die; however, SCID mice were susceptible to intranasal and footpad infections with 100% mortality observed by Day 35 and Day 54, respectively. We show that death is unlikely to be a result of the virus mutating to have increased virulence and that SCID mice are capable of transmitting TATV to C57BL/6 and C57BL/6 stat1−/− animals; however, transmission did not occur from TATV-inoculated wild-type or stat1−/− mice. Comparisons with ectromelia (the etiological agent of mousepox) suggest that TATV behaves differently both at the site of inoculation and in the immune response that it triggers.

Despite the similarity between VARV and TATV, only one scientific report, written in 1975, has been published that directly examines the virology of TATV [28]. TATV was isolated from an apparently healthy, wild gerbil (Tatera kempi or Gerbilliscus kempi) caught in northern Dahomey (now Republic of Benin) in 1968. The virus was noted to grow well on chorioallantoic membranes (CAMs), to produce pocks of similar sizes to those of VARV, and to have a ceiling temperature of 38 °C, also similar to VARV. Moreover, TATV produced a cytopathic effect similar to that of VARV, but distinct from ECTV, MPXV, RPXV, CPXV, or VACV, in Vero, LLC-MK2, GMK-AH, and RK-13 cell lines. Intracranial (IC) or intraperitoneal (IP) inoculation of the Mongolian gerbil (Meriones unguiculatus) with TATV suspension revealed no obvious signs of disease. No virus could be recovered from the spleens, livers, or kidneys of gerbils sacrificed on Days 14 and 19 post infection (p.i.). It was noted that one rabbit inoculated intradermally (ID) developed a local lesion which ulcerated and crusted over. In three other rabbits, reddened indurated areas occurred at the site of inoculation but resolved without ulceration or crusting. No hemorrhagic or secondary lesions were observed on any of the rabbits. One monkey (Macaca mulatta) was inoculated via the intramuscular (IM) and intranasal (IN) routes and developed a fever of 104 °F (40 °C); however, no lesions were observed and no viremia could be detected. The monkey did have increased levels of hemagglutinin inhibition antibodies to orthopoxviruses (OPVs), and the monkey resisted an MPXV inoculation at 10 weeks p.i. (route unspecified). No virus was isolated when liver, spleen, kidney or pancreas suspensions from the inoculated animals were inoculated onto CAMs or Vero cells [28]. Thus, similar to VARV, no animal model tested to date supports robust TATV replication [29]. In this report, we extend our understanding of the tropism of TATV in small animal models.
In particular, we have examined the infection and pathology of TATV in the Mongolian gerbil, a species of African dormouse (Graphiurus kelleni), and several wild-type and immunocompromised mouse strains. We found that TATV induces seroconversion in the gerbil and mice; however, in wild-type mice and gerbils the infection is unapparent. Immunocompromised stat1−/− mice did show signs of disease but experienced no mortality; however, SCID mice were susceptible to both IN and footpad (FP) infections. Furthermore, SCID mice were capable of transmitting TATV to C57BL/6 and C57BL/6 stat1−/− animals. Following an FP inoculation, TATV replicated but induced a cytokine response different from that of ECTV. This response is likely to have contributed to the failure of TATV to move to the draining lymph node. A subsequent paper will present data pertaining to in vitro studies of TATV which reveal further insights into the biology of this virus.

Animals

The Institutional Animal Care and Use Committee at Saint Louis University School of Medicine approved all experimental protocols (protocol 2082). A/Ncr and C57BL/6 mice were purchased from the National Cancer Institute (Frederick, MD, USA). Mongolian gerbils and 129, SCID (BALB/c background), SCID (SKH1 background) and SKH1 mice were acquired from Charles River Laboratories (Wilmington, MA, USA). Dormice were acquired from an in-house colony [18]. The 129 stat1−/− mouse strain was acquired from Taconic (Hudson, NY, USA); it was originally developed in the laboratory of Robert Schreiber at Washington University School of Medicine (St. Louis, MO, USA) [30]. C57BL/6 mice carrying a stat1−/− mutation were provided by Michael Holtzman (Washington University School of Medicine), who acquired them from Joan Durbin (New York University School of Medicine, NY, USA) [31]. All experimental and animal procedures were completed at animal biosafety level-3 (ABSL-3), where animals were housed in filter-top microisolator cages. A standard rodent diet (Teklad Global 18% Protein Rodent Diet, Envigo, Huntingdon, UK) and water were provided ad libitum. Corn cob bedding was provided in each cage, where no more than 5 animals were housed. All animals were acclimatized for at least one week prior to infection. Animals were 6-12 weeks in age, and experiments consisted of 4-5 animals per group (unless otherwise stated). Experiments were performed at least twice. For IN infection, mice and dormice were anesthetized by IP injection of 9 mg/mL ketamine HCl (90 mg/kg) and 1 mg/mL xylazine (10 mg/kg) at a ratio of 0.1 mL/10 g body weight. Gerbils were anesthetized as above but with 6 mg/mL ketamine HCl (60 mg/kg) and 0.5 mg/mL xylazine (5 mg/kg). IN inoculations with 5 µL/nare of virus were used to seed the upper respiratory tract as described previously [32]. For FP infections, 10 µL/pad of virus was used; animals were briefly exposed to CO₂:O₂ (4:1), followed by injection with a 29-gauge, 1/2-inch needle. TATV was a gift from Geoffrey Smith (Imperial College, London, UK). The virus was isolated from a wild gerbil (Tatera kempi) caught in Dahomey (now Republic of Benin), Africa, in 1968 [28]. Plaque-purified isolates of TATV, ECTV-MOS and VACV-COP were propagated in BS-C-1 cells.

Psoralen Inactivation

TATV was mixed in a 24-well plate with 4′-aminomethyltrioxsalen (psoralen) (Sigma, St. Louis, MO, USA) to a final concentration of 10 µg/mL in a total volume of 200 µL.
After a 10 min incubation at room temperature (RT), the cover of the 24-well plate was removed and the plate was exposed to LWUV (high-intensity ultraviolet lamp, 365 nm, 120 volts, 60 Hz, 1.05 amp; Spectronics Corp., Westbury, NY, USA) at a distance of ~7 inches for 30 min with gentle agitation every 5 min. Aliquots were prepared and stored at −80 °C. After freezing, aliquots (and controls) were thawed and titered, as above. Psoralen-treated stocks yielded no plaques.

ELISAs

For mice, direct anti-OPV ELISAs were performed using lysates from BS-C-1 cells infected with VACV-WR. Clarified cell lysate was diluted in 50 mM carbonate-bicarbonate buffer (pH 9.6) at a 1:2500 dilution and used to coat 96-well microtiter ELISA plates at 4 °C overnight. Plates were blocked with blocking buffer (PBS + 0.05% Tween 20 + 2% normal goat serum; Vector, Burlingame, CA, USA) at room temperature for 30 min, and serial dilutions of mouse sera were added to wells. Following incubation at room temperature for 1 h, wells were washed with PBS-T + 0.05% Tween 20. Bound antibody was detected by using biotin-conjugated goat anti-mouse IgG (Invitrogen, Carlsbad, CA, USA) at 1:2500 dilution, followed by streptavidin-HRP (Invitrogen) at 1:4000 and orthophenylenediamine (0.4 mg/mL) in 50 mM citrate buffer (pH 5.0) as a chromogen. Optical density was measured at 490 nm. For dormouse and gerbil serum, a modified ELISA was used with a 1:30,000 dilution of HRP-conjugated protein A/G (Thermo Scientific, Worcester, MA, USA).

Microscopy

Light microscopy images were taken with a Zeiss (Oberkochen, Germany) dissecting microscope with a 3.2× objective lens. The images were captured using an Olympus (Tokyo, Japan) 5.1 megapixel C-5060 wide zoom camera and were processed in Microsoft PowerPoint (version 15; Redmond, WA, USA).

Statistics

T-tests were used to compare means between groups of animals and to determine the mean time to death. T-tests were also used to compare changes in viral titers and cytokine levels based on sample sizes of six tissue titers or cytokine preparations from different mice. Mortality rates were compared using Fisher's exact test. Blinded images were measured qualitatively using a scoring system. Throughout the manuscript, "significant" indicates p values < 0.05.

TATV Infection of Immunocompetent Animals

TATV was originally isolated from a wild African gerbil, Tatera kempi, which is not available commercially or easily obtainable from Africa [28]. For this reason, we investigated the pathogenesis of TATV in the commercially available Mongolian gerbil, Meriones unguiculatus; gerbils were inoculated by the FP or IN route with 1 × 10⁶ PFU of TATV. Over a 60-day observation period, the inoculated gerbils lost no weight, showed no signs of morbidity and did not die, although inoculated animals seroconverted by Day 70 p.i. (Table 1). We next evaluated a second African rodent species, Graphiurus kelleni (dormouse), that has previously been used to study MPXV infections, for susceptibility to severe disease [18]. No morbidity or mortality was observed following FP and IN inoculations with 1 × 10⁶ PFU of TATV, and inoculated dormice failed to seroconvert up to 120 days p.i. We also investigated the lethality of TATV for different immunocompetent mouse strains. Infections of A/Ncr, SKH1, C57BL/6, CAST/EiJ, and 129 mouse strains with 1 × 10⁶ PFU of TATV via the FP and IN routes resulted in no weight loss, morbidity or mortality, although all inoculated mice seroconverted by 60 days p.i. (Table 1).
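The statistical comparisons described in the Statistics subsection above (t-tests on titers or cytokine levels and Fisher's exact test on mortality) can be reproduced with standard SciPy routines; the group sizes and values in the sketch below are invented placeholders, not data from this study.

```python
# Illustrative sketch of the study's statistical tests; all numbers are made up.
from scipy import stats

# Unpaired t-test comparing log10 tissue titers between two groups of six mice.
titers_group_a = [4.1, 3.8, 4.5, 4.0, 3.9, 4.3]   # log10 PFU/g, placeholder
titers_group_b = [2.9, 3.1, 2.7, 3.3, 3.0, 2.8]
t_stat, p_titer = stats.ttest_ind(titers_group_a, titers_group_b)

# Fisher's exact test comparing mortality (dead vs. surviving) in two groups.
#                 dead  alive
table = [[5, 0],          # e.g., one immunodeficient group
         [0, 5]]          # e.g., one wild-type group
odds_ratio, p_mortality = stats.fisher_exact(table)

print(f"titer comparison: p = {p_titer:.4f} (significant if < 0.05)")
print(f"mortality comparison: p = {p_mortality:.4f}")
```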
(Table 1) SKH-1 mice failed to present with rash under conditions where ECTV inoculated mice did (data not shown). Consistent with the lack of systemic disease, TATV was not detected in the livers, kidneys, spleens, and lungs of SKH1 mice sacrificed at Day 25 p.i. These studies extend the list of animal species that do not support robust TATV replication. Inactivated TATV Induces Seroconversion Except dormice, all of the immunocompetent animals seroconverted despite showing no overt signs of morbidity; furthermore, tissue titers from 129 and SKH1 mice were negative for TATV. Conventionally, seroconversion occurs following an active infection with associated viral replication; however, seroconversion can also occur when the immune system is exposed to viral antigen-reminiscent of the seroconversion observed following some vaccination protocols. To determine the cause of seroconversion we inoculated BALB/c mice via the IN or FP route with TATV (1 × 10 3 PFU) and TATV that had been made replication deficient via psoralen inactivation (TATV-psoralen). At T = 28 days mice were bled and assayed for seroconversion by ELISA. At T = 35 days, mice were inoculated with a lethal dose of ECTV (1 × 10 3 PFU) via the corresponding inoculation route and monitored for survival. As expected, mock-inoculated mice failed to seroconvert and experienced 100% mortality by eight days post ECTV inoculation ( Table 2). Mice that were inoculated via the IN or FP route with TATV experienced seroconversion and were protected against the subsequent ECTV inoculation. Only one (1 of 3) mouse inoculated via the FP route with TATV-psoralen experienced seroconversion and this animal was protected against the subsequent ECTV inoculation; however, the animals that did not seroconvert experienced mortality by Day 7 post ECTV inoculation. The TATV-psoralen inoculated animal that did seroconvert suggests some virus particles were not fully inactivated and that measurement of virus infectivity is more sensitive in vivo compared to in vitro plaque assays which yielded no plaques (this finding is not unusual as IN and FP challenges of A-strain mice are lethal at doses <1 PFU which is below the threshold of detection for an in vitro plaque assay). All mice that were inoculated via the IN route with TATV-psoralen failed to seroconvert and experienced 100% mortality by Day 9 post inoculation with ECTV. These data suggest that the seroconversion experienced by animals is a function of TATV replication and not induced simply by the exposure of the immune system to TATV antigen. 1 TATV was made replication-inactive by psoralen treatment (see methods). 2 Mice were inoculated at T = 0 days with TATV or TATV-psoralen via the intranasal (IN) or footpad (FP) routes (1 × 10 3 PFU). 3 At T = 28 days mice were bled for ELISA to determine seroconversion; − indicates negative for antibodies; + indicates positive for antibodies. Scores are given for each mouse individually. 4 At T = 35 days mice were inoculated with a lethal dose of ectromelia (ECTV) via the corresponding IN or FP route (1 × 10 3 PFU) and monitored for mortality. 5 Days of death (DOD) are indicated for each mouse post inoculation with ECTV. Individual days of death are recorded in the same order as seroconversion status is recorded (column 4). ND indicates no death. 6 One mouse seroconverted following FP TATV-psoralen inoculation and survived the subsequent ECTV inoculation on Day 35. 
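The protection data above are the kind of small-group mortality comparison that the Statistics section states was analysed with Fisher's exact test. The following is a minimal sketch of such a comparison, assuming hypothetical group counts rather than the actual numbers behind Table 2:

```python
# Minimal sketch of a Fisher's exact test on mortality counts, as named in the Statistics
# section. The group counts below are hypothetical placeholders, not the data behind Table 2.
from scipy.stats import fisher_exact

# 2x2 contingency table: rows = inoculation groups, columns = [died, survived] after ECTV challenge
mock_group = [5, 0]   # hypothetical: all mock-inoculated mice died
tatv_group = [0, 5]   # hypothetical: all TATV-inoculated mice survived

odds_ratio, p_value = fisher_exact([mock_group, tatv_group], alternative="two-sided")
print(f"odds ratio = {odds_ratio}, p = {p_value:.4f}")
# A p value < 0.05 would be reported as significant, per the threshold used throughout the manuscript.
```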
TATV Infection of Immunodeficient Mice Due to the failure to observe disease following infection of various immunocompetent species, we examined a number of immunocompromised murine strains. Murine strains lacking STAT1, a key protein involved in type 1 and type 2 interferon signaling networks, have been shown to be sensitive to a wide number of viral and bacterial infections, including infections with MPXV [24]. 129 stat1 −/− mice inoculated with 1 × 10 6 PFU of TATV via the FP route developed tail lesions from Day 16 p.i. By Days 25 and 39 p.i., 129 stat1 −/− mice had 5.5 ± 1.2 and 6.2 ± 1.8 lesions/tail, respectively. These lesions appeared along the tail, were discrete and reached a size of approximately 5 mm (Figure 1). Interestingly, C57BL/6 stat1 −/− mice failed to develop lesions; however, both strains developed severe FP swelling by Day 5 p.i. No other signs of morbidity were observed and all animals seroconverted by Day 60 p.i. Infections of 129 stat1 −/− and C57BL/6 stat1 −/− strains with 1 × 10 6 PFU of TATV via the IN route resulted in no weight-loss, morbidity or mortality, although all inoculated mice seroconverted by 60 days p.i. (Table 1). Virus infectivity was not detected in livers, kidneys, spleens, lungs and blood from mice sacrificed on Days 6, 10 and 28 p.i. (data not shown).
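The doses quoted here and in the methods (e.g., 1 × 10 6 PFU delivered in 10 µL per footpad, or 5 µL per nare intranasally) imply a simple back-calculation of the stock titer required for each route. The helper below is an illustrative sketch of that arithmetic only; the function name, the example values, and the assumption that the quoted dose is the total delivered across both nares are ours, not part of the published protocol.

```python
# Illustrative arithmetic only: back-calculate the stock concentration (PFU/mL) needed to
# deliver a target dose in a given inoculation volume. Function name and example values are
# assumptions for illustration, not taken from the published protocol.
def required_stock_titer(target_pfu: float, volume_ul: float) -> float:
    """Return the stock titer (PFU/mL) that delivers target_pfu in volume_ul microlitres."""
    volume_ml = volume_ul / 1000.0
    return target_pfu / volume_ml

# Footpad route: 1e6 PFU in 10 uL/pad -> 1e8 PFU/mL stock
print(required_stock_titer(1e6, 10))
# Intranasal route: 1e6 PFU assumed to be split over 2 x 5 uL (one per nare) -> also 1e8 PFU/mL
print(required_stock_titer(1e6, 5 + 5))
```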
Following a FP infection with 1 × 10 6 PFU of TATV, we found that SCID mice on a BALB/c background (SCID) and SCID mice on a hairless SKH1 background (SCID-SKH1) experienced 100% and 50% mortality by Day 52, respectively. Tail lesions were detected on SCID-SKH1 and SCID mouse strains starting on Days 16 and 25 p.i., respectively. Because the SCID mice presented with the greatest mortality, we decided to further investigate pathogenesis following FP and IN infections of 1 × 10 6 PFU of TATV (Figure 2). We found that IN inoculated mice experienced 100% mortality approximately 20 days earlier than those inoculated via the FP route (Table 1 and Figure 2A). To further explore the pathogenesis of IN inoculated mice, we inoculated groups of mice with TATV doses ranging from 1 × 10 6 PFU down to 1 × 10 2 PFU (Figure 2B). Mice inoculated with the lowest virus dose took until Day 127 p.i. to experience 100% mortality, whereas mice were dead by Day 29 p.i. at the highest dose (Figure 2B). Rate of weight-loss and total weight-loss (as a % of starting weight) was also a function of inoculation dose (Figure 2C). Following the 1 × 10 6 PFU IN infection, virus was recovered from the lung by Day 10 p.i. but virus could not be detected in the spleen, liver and kidney until Day 25 p.i. (Figure 2D). We found no infectious virus in the blood at any of the time points (blood could not be removed from dead, non-sacrificed animals) and tail lesions were not apparent. Histological examination of mice inoculated with 1 × 10 6 PFU and sacrificed at a moribund state revealed a reduction in splenic extramedullary hematopoiesis in TATV inoculated mice compared to mock animals. It also revealed that all TATV inoculated mice presented with fibrinopurulent rhinitis in the nasal passages and that all TATV inoculated animals had adrenal subcapsular cell hyperplasia. One TATV inoculated animal also presented with pancreatic lobular degeneration. All other observations were normal for SCID mice of the age used. Similar results were obtained following FP infection of SCID mice except that the disease course was extended, with significant weight changes by Day 31 p.i. (117 ± 4.4% and 92.3 ± 2.9%, p = 0.009 for mock and inoculated groups, respectively). As expected, we also began to detect tail lesions on inoculated mice from Day 25 p.i.; these lesions increased in number until the day of death. Virus was detected in spleens and lungs from Day 20 p.i., in the liver from Day 30 p.i., in the kidney from Day 40 p.i. and in the blood from Day 48 p.i. (data not shown). The finding of infectious virus in the blood from the FP inoculated mice is in contrast to IN inoculated mice which presented with no infectious virus in the blood; however, consideration should be given to the fact that TATV was detected in the blood of FP inoculated mice that were in groups that were moribund or had already experienced mortality. In the IN inoculated mice, the final time point for blood sampling was from groups of mice that had not yet experienced any mortality and did not appear moribund. Therefore, it is possible that infectious virus in blood only appears a few days before death. Histologic findings following FP infection were similar to those following IN infection. The presence of virus in tissues was not associated with obvious pathology following both routes of infection as the major findings in sacrificed, moribund mice were a reduction of extramedullary hematopoiesis, a reduction in acute lung congestion, an increase in adrenal subcapsular cell hyperplasia, and mesenteric lymph node hypoplasia. All other observations were normal for SCID mice of the age used. These findings do not provide any explanation for cause of death. Mortality in SCID Mice Is Not a Function of Virus Mutation TATV took significantly longer to kill SCID mice than is observed in ECTV and MPXV infections which kill all mice by Day 6 and Day 18, respectively [24]. The lack of detectable virus in tested tissues at 10 days p.i. from FP inoculated mice, and the progressive increase in tissue titers with time, was consistent with the generation and/or selection of TATV mutants that were better adapted to replication in the mouse. If this hypothesis is true, the mutant virus would make up a significant proportion of virus isolated at 48 days p.i., and infection of SCID mice with recovered virus would yield a shorter mean time to death than the original stock. Accordingly, groups of SCID mice were inoculated by the FP route with 1.7 × 10 4 PFU of the original stock virus or kidney-lysate virus or lung-lysate virus from end-point specimens. We chose to take viral lysate from FP inoculated animals because they died at a later date and thus were provided more time for the selection of TATV mutations. Mice inoculated with the original stock virus experienced 100% mortality by Day 51 p.i., and mice inoculated with the kidney and lung-lysate virus reached 100% mortality on Days 52 and 60 p.i., respectively (Figure 3A). In addition, virus titers recovered from the spleens, livers, kidneys, and lungs of dead mice were similar (no significant difference) between the inocula (Figure 3B). Thus, the extended mean time of death of TATV-inoculated SCID mice is not consistent with the evolution of a virus strain with enhanced replication and/or virulence capacity in the mouse. Taken together, these studies in SCID mice suggest very inefficient replication and spread.
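The end-point organ titer comparison above ("no significant difference" between inocula) is the kind of two-sample comparison that the Statistics section attributes to t-tests. The sketch below is illustrative only: the log10 titer values are invented placeholders, and comparing titers on a log10 scale is our assumption rather than a stated step in the paper.

```python
# Illustrative two-sample t-test on organ titers, mirroring the comparisons described in the
# Statistics section. The log10 PFU values below are invented placeholders, and the choice to
# compare titers on a log10 scale is an assumption, not a stated step in the paper.
import numpy as np
from scipy.stats import ttest_ind

stock_virus_lung = np.array([6.1, 5.8, 6.3, 6.0, 5.9])    # hypothetical log10 PFU per lung, stock inoculum
lysate_virus_lung = np.array([6.0, 6.2, 5.7, 6.1, 5.8])   # hypothetical log10 PFU per lung, lung-lysate inoculum

t_stat, p_value = ttest_ind(stock_virus_lung, lysate_virus_lung)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p >= 0.05 -> reported as no significant difference
```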
TATV Transmission Because our failure to detect robust replication of TATV in tested immunocompetent animal models may be due to the failure to examine the relevant tissue, we utilized a transmission assay as another approach to detect biologically relevant virus infectivity. This assay evaluates virus shedding and determines if enough virus is produced for transmission. We tested both immunocompetent and immunocompromised mice for their ability to transmit TATV and we used ECTV as a transmission control as we have previously shown that ECTV is efficiently transmitted from an inoculated mouse to naïve cage mates over approximately a 10 day period [35]. As determined by seroconversion of contacts, we found that wild-type 129 and C57BL/6 mice transmitted ECTV when the index mouse was inoculated with 1 × 10 6 PFU via the FP route; however, a similar experimental design with TATV-inoculated index mice did not detect seroconversion of contacts (Table 3). 129 stat1 −/− and C57BL/6 stat1 −/− mice inoculated with ECTV or TATV (1 × 10 6 PFU) via the FP route failed to transmit the viruses. We next evaluated the transmission efficiency of SCID mice inoculated by the IN route, the route that facilitates increased TATV virulence. We found that SCID mice inoculated with 1 × 10 6 PFU of TATV, but not ECTV, transmitted virus to C57BL/6 contact mice. The failure of SCID mice to transmit ECTV to C57BL/6 mice is likely due to the fact that the SCID mice die before virus titers reach a high enough level to infect contact mice. SCID mice inoculated by the IN route with ECTV or TATV successfully transmitted both viruses to C57BL/6 stat1 −/− mice, but only ECTV transmitted from the SCID mice to the A/Ncr contact mice (Table 3). Finally, we evaluated the ability of TATV to transmit from IN or FP inoculated gerbils and found that inoculated gerbils seroconverted by Day 70 p.i. but no seroconversion was detected in any of the contact animals (Table 3). These data reveal that TATV has reduced transmission capacity compared to ECTV. The ability of TATV to transmit to contact animals is only possible from inoculated, index SCID mice, and this finding is likely related to the fact that TATV replicates more efficiently in these animals, especially in lung tissue (Figure 2D).
Table 3. Transmission of TATV and ECTV between index and contact mice following 1 × 10 6 PFU challenges 1. Virus Replication and Cytokine Synthesis in the Primary Site of Infection (Footpad) and the Draining (Popliteal) Lymph Node The FP route is the most thoroughly studied inoculation route in the ECTV/mousepox model [37][38][39]. We and others have shown that the inflammatory and early immune responses in the lymph node draining the site of infection are critical for survival following ECTV infections via the FP route [38,40,41]. Accordingly, we examined virus replication and cytokine responses in the FP following inoculation of BALB/c mice with 1 × 10 6 PFU of TATV or ECTV. To examine viral replication at the site of infection, the soft tissue of the FP was assayed for viral titers at 0 (input virus), 12, 24, 48, and 72 h p.i. We found that the input viral titer was higher for ECTV; however, we found that significant replication above input viral levels was detected in the ECTV FPs from 24 h p.i. and from 48 h p.i. in the TATV FPs (Figure 4A). For histological analysis, we removed the FP at 24 and 72 h p.i. and sectioned the mid-sagittal plane. At the 24 h time point, we could not observe any obvious differences between ECTV and TATV (data not shown) and at the 72 h time point we observed multifocal, subacute inflammation in the plantar metatarsal region of both ECTV and TATV samples; however, the severity was increased in TATV samples (Table 4). At the 72 h time point we could see mixed inflammatory cell infiltrate in the superficial and deep dermis of TATV inoculated animals, whereas only scattered mononuclear cells in the edematous dermis along with the detection of marginating neutrophils were observed in the FP of ECTV inoculated mice. These findings suggest a more robust immune response to the presence of TATV at 72 h p.i. compared to ECTV at the same time (Figure 5) and may contribute to the increasing difference between ECTV and TATV titers. To further dissect virus dissemination, we looked at viral titers and viral DNA (vDNA) levels in the popliteal lymph node (PLN). Following a 1 × 10 6 PFU inoculation, titers of TATV were not detected at 12, 24, 48 and 72 h p.i. in the PLNs, whereas levels of ECTV increased over the same 12-72 h time period (Figure 4B). As determined by qPCR, vDNA loads in the PLN at 24 and 48 h p.i. were detectable, but significantly lower in mice inoculated with TATV as compared to ECTV (Figure 4C). Similar findings were observed in the PLNs of SCID, A/Ncr and C57BL/6 mice inoculated with 1 × 10 6 PFU of TATV or ECTV (data not shown), i.e., following a TATV inoculation vDNA was detected but the PLNs were negative for viral titers. In summary, these data suggest that TATV does not spread efficiently from the primary site of infection to the draining lymph node and that the innate immune response at the site of infection is far more robust in TATV inoculated animals compared to ECTV inoculated animals. The Host Response at the Draining Lymph Node The host response in the draining PLN was evaluated by measuring 32 different cytokines in mock, TATV and ECTV inoculated mice at 24 h p.i. Data are summarized in Table 5. As compared to controls, the levels of 19 cytokines were elevated, seven unchanged and one decreased in ECTV lysates, whereas three were elevated, 15 unchanged, and nine decreased in TATV lysates. The cytokine data in the PLN were consistent with a ramping up of a Th1-biased innate response to infection in the case of ECTV infections; however, TATV inoculations failed to broadly activate cytokines in the PLN. 1 BALB/c mice (N = 4) were inoculated via the footpad (FP) with 1 × 10 6 PFU of ECTV or TATV. At 24 h p.i., the popliteal lymph node (PLN) was removed into phosphate buffered saline (PBS) with 0.05% Triton X-100 and ground up. The supernatant was removed and assayed for cytokines.
2 Unchanged cytokines (IL-2, IL-4, IL-7, MIP-2, and LIX) are not shown. 3 Cytokines in green are increased and cytokines in red are decreased, compared to mock (p < 0.05). GM-CSF: granulocyte macrophage colony-stimulating factor; TNF-α: tumor necrosis factor α; MIP: macrophage inflammatory protein; IL: interleukin; IFN-γ: interferon γ; LIF: leukemia-inhibitory factor; MCP-1: Monocyte chemoattractant protein-1; VEGF: vascular endothelial growth factor. Discussion TATV was isolated in 1975, and yet, despite its close similarity to VARV, no research investigating its virology has been published in the last 42 years [28]. Two other studies have documented pox infections in gerbils. In 1971, Bradley et al. reported nodules on the tails of a gerbil, T. robusta (a close relative of T. kempi), in Uganda which presented with epidermal proliferation and a large number of intracellular poxvirus particles. Unfortunately, no further studies were published to characterize this virus [42]. In 1978, Marennikova et al. isolated a poxvirus from the kidney of a gerbil (Rhombomys opimus) from Turkmenia which was shown to cause a disease with high mortality in the gerbils as well as in Yellow Susliks (Spermophilus squirrels) via various inoculation routes; however, the virus isolate was shown to be very similar to CPXV in some lab assays and differs from TATV in its ability to infect mice at a low-dose inoculum [21]. We have conducted an investigation into the virology of TATV and can report that TATV cannot initiate robust disease in any tested immunocompetent murine strains, the Mongolian gerbil (M. unguiculatus), or the African dormouse (G. kelleni); however, as measured by seroconversion, TATV infects all these animals except for the dormouse. We confirm previous work by Lourie that the Mongolian gerbil presents with no obvious signs of disease-although Lourie et al. inoculated via the IC and IP routes [28], whereas we used the IN and FP routes. We also found that the gerbil fails to transmit TATV. A drawback to our gerbil studies was that they were conducted in the Mongolian gerbil, and TATV was isolated from an apparently healthy West African gerbil, T. kempi. These two gerbils are not closely related as they belong to different Tribes of the Subfamily Gerbillinae (Family Muridae): T. kempi is a member of the tribe Taterillini, and M. unguiculatus belongs to the tribe Gerbillini [43]. It would be interesting to investigate the virology of TATV in T. kempi but legal and logistical restrictions prevent the importation of these animals. The finding that TATV can infect and produce lesions in immunodeficient stat1 −/− mice is important because it suggests that restriction is not merely a function of a lack of host-range ORFs. Furthermore, our data indicate that type 1 and 2 interferons are at some level required for controlling TATV infection. The transmission studies revealed that stat1 −/− mice could not transmit ECTV or TATV to other stat1 −/− mice. It was not overly surprising that ECTV, which is highly virulent in stat1 −/− mice, did not transmit to other stat1 −/− animals because the index mice died rapidly and most likely before virus could reach the extremities or mucus membranes-a finding we have observed in other ECTV studies (data not shown). Moreover, it is somewhat surprising that stat1 −/− mice did not transit TATV to other stat1 −/− mice given that the index animals would have experienced some morbidity and the development of tail lesions. 
The most likely explanation for this failure of transmission is that TATV in the index animals could not replicate to high enough titers to transmit to contact animals-a finding that is supported by the absence of tissue titers in stat1 −/− mice inoculated with TATV. The only robust disease that we observed (other than tail lesions and swollen footpads) was in SCID mice on the SKH1 and BALB/c backgrounds. Of these we found the highest virulence in the BALB/c SCID mice, therefore confirming the importance of background genetics to the susceptibility of murine strains. This observation is not uncommon, for example A/Ncr and C57BL/6 mice present with different disease courses and mortality levels when infected with ECTV via the same route [38]. Furthermore, we found that the disease course was approximately 40% shorter in SCID mice infected via the IN route compared to the FP route. Again, this observation is not unusual as many strains of mice are susceptible to IN ECTV infection but are resistant to FP infections [39]. Other OPVs (CPXV, MPXV and VACV) have been evaluated in SCID mice and typically induce a highly fulminant disease with a mean time to death of less than 12 days [24,[44][45][46][47]. Following FP infection of SCID mice, there was a progressive increase of TATV titers in spleen, liver, kidney, and lungs until death at~50 days p.i., which was consistent with the generation and/or selection of TATV mutants of enhanced virulence; however, this explanation was ruled out experimentally by re-passage of kidney and lung lysates from TATV-infected SCID mice that also died at~50 days following FP inoculation. Examination of histological evidence acquired from moribund SCID mice following an IN or FP inoculation failed to identify a plausible cause of death. In the case of smallpox, interstitial pneumonitis and tubulointerstitial nephritis are thought to be the main cause of death [48]; however, neither of these were identified in the TATV-infected SCID mice. Similarities also could not be drawn to other IN models of disease such as: VACV IN challenges of mice which present with necrotizing bronchopneumonia [49,50]; IN CPXV challenges where mice present with tracheitis/tracheobronchitis, bronchiolitis/bronchopneumonia, rhinitis/sinusitis, meningitis, cranial myositis, otitis media, and eustachitis [47,51]; and in aerosolized and IN MPXV challenges of cynomolgus macaques which die due to fibrinonecrotic bronchopneumonia [52,53]. We hypothesize that TATV fails to cause systemic disease in tested immunocompetent mouse strains due to a failure to optimally replicate, spread and counteract the innate immune response to infection. As compared to ECTV, which has previously been shown to replicate at the site of infection [54,55], TATV titers were reduced at the primary site of replication and the magnitude of the difference increased throughout the 72 h period of observation. This was consistent with an in vitro replication assays that found TATV had a diminished replication capacity in tested murine epithelial (key target cell of poxviruses) and fibroblast cell lines as compared to ECTV (data not shown). These in vitro findings will be addressed in a subsequent paper. TATV infectivity was not detected in the PLN draining the primary site of replication in BALB/c or SCID mice up to 48 hours p.i. This was surprising as studies with VACV in mice [56], and ECTV in mice [54,57] have detected virus infectivity as early as 6-12 h p.i. 
The failure to detect TATV in the PLN was not due to the presumed lower TATV replication rate in skin as by 48 h p.i. the TATV skin titers were equal to that observed for ECTV at 12 h p.i., a time point when ECTV could be observed in the PLN. By 24 h p.i., dramatic upregulation of many cytokines was detected in the PLNs from ECTV inoculated mice; however, in TATV PLNs cytokines generally failed to respond or responded differently to those of ECTV infected PLNs. This lack of a change in the cytokine/chemokine pattern following TATV infection may be due to the failure of TATV to replicate robustly in the PLN or because TATV never arrives at the PLN. That said, TATV vDNA was detected at low levels in the PLN at 24 and 48 h p.i. The detection of vDNA but not viral antigen could explain the triggering of a different cytokine response compared to ECTV which replicates at the PLN with vDNA also detected. The cytokine responses at the PLN following ECTV infection are well studied. Parker et al. demonstrated that the PLNs of C57BL/6 and A-strain mice respond differently following a FP inoculation with ECTV with resistant C57BL/6 mice favoring a Th1 response compared to susceptible A-strain mice which favour a Th2 response [38]. Furthermore, Chaudhri et al. revealed a detailed breakdown of cytokine changes in the PLNs of susceptible and resistant mice following a FP inoculation with ECTV and confirm that resistance favors a Th1 cytokine response [58]. In this paper, we did not detect an obvious Th1 vs. Th2 response in BALB/c mice inoculated with TATV or ECTV via the FP route; however, our study was limited to a very early time point (up to 24 h p.i.). In conclusion, TATV in the animals that we evaluated does not provide a suitable model for smallpox or human monkeypox. In vitro and bioinformatics studies (data not shown) suggest that, similar to VARV, TATV has a very narrow host-range. Whether the gerbil is the natural host of TATV remains to be elucidated. Although TATV does not replicate well in the immunocompetent animals that we evaluated, it did induce seroconversion. This finding suggests that TATV could be used as an OPV vaccine in animals or humans.
Current and future strategies against cutaneous parasites Cutaneous parasites are identified by their specific cutaneous symptoms which are elicited based on the parasite’s interactions with the host. Standard anti-parasitic treatments primarily focus on the use of specific drugs to disrupt the regular function of the target parasite. In cases where secondary infections are induced by the parasite itself, antibiotics may also be used in tandem with the primary treatment to deal with the infection. Whilst drug-based treatments are highly effective, the development of resistance by bacteria and parasites, is increasingly prevalent in the modern day, thus requiring the development of non-drug based anti-parasitic strategies. Cutaneous parasites vary significantly in terms of the non-systemic methods that are required to deal with them. The main factors that need to be considered are the specifically elicited cutaneous symptoms and the relative cutaneous depth in which the parasites typically reside in. Due to the various differences in their migratory nature, certain cutaneous strategies are only viable for specific parasites, which then leads to the idea of developing an all-encompassing anti-parasitic strategy that works specifically against cutaneous parasites. The main benefit of this would be the overall time saved in regards to the period that is needed for accurate diagnosis of parasite, coupled with the prescription and application of the appropriate treatment based on the diagnosis. This review will assess the currently identified cutaneous parasites, detailing their life cycles which will allow for the identification of certain areas that could be exploited for the facilitation of cutaneous anti-parasitic treatment. Introduction Parasites are organisms that are dependent on their host for survival and physiological development in a malignant manner and can be classified into three specific types, helminths, ectoparasites and protozoa. Helminths are large multicellular organisms that fall within 3 subcategories, platyhelminths, acanthocephalans and nematodes. Ectoparasites are a broad classification which encompasses arthropods such as mosquitoes, but typically refers to smaller arthropods such as mites, ticks and fleas. Lastly there are protozoa which are unicellular organisms that can multiply within their human host. Protozoa can be classified into four types with respect to their migratory action: Sarcodina, mastigophoran, ciliophoran and sporozoan. The presence of the parasite increases the risk and spread of injury within the infected area, thus leading to the possibility of host mortality in the most severe of cases. The introduction of parasites into the host body can occur via a wide range of different routes such as the ingestion of contaminated food and water, transmission via an arthropod vector, intercourse, open wounds, an infected organ transplant etc. For the purposes of this review, only parasites that are transmitted via open wounds or vectors will be reviewed as these follow the route of infection via the transcutaneous pathway. Here we will discuss current treatment options and to serve as a call to action to the pharmaceutical community to translate medical technologies currently used to treat other diseases into pragmatic and cost effective treatments in the combat against cutaneous parasite infection. 
Cutaneous parasites The vast majority of cutaneous parasites are transmitted via direct skin contact of the human host with an infected vector, or transmission is facilitated by contact with a piece of contaminated material containing the parasite. The infected medium is normally a vector or a form of contaminated material, such as soil or clothing infested with the parasitic eggs. Specific types of parasites are capable of laying eggs, i.e., helminths and ectoparasites. In most instances the parasitic eggs are laid within the epidermal layer of the human skin, where they begin to hatch and mature, resulting in escalated cutaneous damage. Initial symptoms may include irritation and inflammation as the parasitic eggs are ejected into the skin. Once the eggs hatch, the larval parasites may move around the cutaneous and subcutaneous layers, resulting in lesions and minor haemorrhaging beneath the skin. There are also other instances where parasites are introduced into the human host through non-cutaneous means, for example through the ingestion of contaminated sustenance, after which the parasite develops and migrates within the host, leading to cutaneous symptoms, e.g., gnathostomiasis. Of the three classifications, protozoans are the only ones that do not lay eggs, but instead multiply within the host, which, when coupled with their specific migratory mode, can result in observable cutaneous skin symptoms. Such an example would be leishmaniasis. It should be noted that not all parasites that are transmitted via skin contact will result in cutaneous pathologies; however, for the purposes of this review we will specifically be exploring the parasites that elicit cutaneous symptoms (Table I). Leishmaniasis Leishmaniasis is caused by the bite and subsequent ejection of the Leishmania promastigotes into the human host by the female Phlebotomus sandfly. The disease itself is caused by Leishmania spp., which are unicellular eukaryotes called protozoans that multiply within the host. The initial bite induces an inflammatory response at the site of injury, recruiting professional phagocytes to the location, which then subsequently phagocytose the foreign promastigotes. Typically speaking, once a phagocyte compartmentalises foreign matter, it fuses with the lysosomes, resulting in the release of reactive oxygen species (ROS) which destroys the foreign material. In the case of leishmaniasis, the parasite utilises neutrophils as a medium to safely facilitate its undetectable entry into host macrophages (19). The utilisation of neutrophils by leishmania occurs via the protozoan interfering with the granule fusion process within the neutrophil, which in turn prevents the neutrophil from generating microbicidal granules which would kill the protozoan (20). This was observed in the first day of L. mexicana and L. major infection, whereby the protozoans were observed to be within the host neutrophils, evidencing the resistance that leishmania had developed against the neutrophil's microbicidal processes (21). Once the leishmania promastigotes have been transported into the phagocyte, they then begin to inhibit the fusion of the host phagocyte with the lysosome, via the process of phagosome maturation arrest. This then results in an isolated space devoid of ROS, which allows the promastigote to replicate within the phagocyte, thus becoming an amastigote (22).
Eventually the amastigotes replicate up to a certain point in which their numbers begin to affect the physiological stability of the phagocyte leading to cellular rupture, accompanied by the subsequent uptake of amastigotes by uninfected phagocytes. The amastigotes then begin to circulate throughout the host, eliciting responses with respect to the type of leishmania species within the host. In the specific case of cutaneous leishmaniasis, the species of Leishmania amazonensis, Leishmania braziliensis, Leishmania guyanensis, Leishmania major, Leishmania mexicana, Leishmania panamensis, and Leishmania tropica result in cutaneous responses which then escalate in accordance with the magnitude of division undertaken by the parasite. The circulation of the amastigotes within its host only represents one part of its life cycle as the Phlebotomus sandfly consumes the blood of its host, thus intaking the parasitized phagocytes within itself, which initiates the second part of the Leishmania life cycle. Once the amastigotes are taken up by the sand-fly, they differentiate back into promastigotes which then replicate and migrate through midgut and foregut of the sand-fly to the salivary glands, allowing the promastigotes to be transmitted into the next host, thus repeating the lifecycle, Fig. 1. The initial bite caused by the sand-fly instigates an acute inflammatory response which quickly subsides, however there are typically no noticeable symptoms postskin trauma for several weeks and it is this which represents the promastigote incubation phase. Prior to the manifestation of cutaneous symptoms, lymphadenopathy may occur, which can act as an early indicator of human leishmaniasis. The cutaneous symptoms presented by this disease occur as a result of amastigote metastasization which triggers leukocytic and fibroblastic activity in response to the detection of infected phagocytes. The activation of leukocytes and fibroblasts result in localised damage to the structural architecture of the native tissue, causing keratinocyte necrosis and apoptosis, which in turn leads to the occurrence of dermal tissue necrosis and therefore the visual manifestation of dermal injury. The initial dermal symptoms typically manifest as a lesion at the initial site of the sandfly bite, which occurs most commonly on the more exposed regions of the human body, including the lower legs, arms, neck and face. As the disease escalates, the lesion may progress into nodules, ulcers and various plaque formations, thus representing the ulcerative phase of disease. These cutaneous lesions often self-heal over time, however specific species of Leishmania can produce clinical symptoms of differing severity and recovery periods. I.e. L.major more than 50% of patients self-heal in 2-8 months, L.tropica self-heals after 1 year, L.aethiopica, self-heals within 2-5 years, L.mexicana often self-heals within 3-4 months and L.Braziliensis self-heals between several months to years (http:// www. emro. who. int/ negle cted-tropi cal-disea ses/ infor mation-resou rces-leish mania sis/ cl-facts heet. html) (23,24). In the case of Leishmania aethiopica, Leishmania amazonensis, Leishmania braziliensis and Leishmania panamensis, the escalation of leishmaniasis can result in the transition of cutaneous manifestations into mucocutaneous symptoms which can result damage nasal and oral tissue, with disruptions to regular respiratory function occurring in more severe cases. 
Another instance of a Leishmania species producing irregular cutaneous symptoms is Leishmania amazonensis, which produces simultaneous nodular lesions in multiple locations of differing sizes and is often identified in areas of the body away from the primary lesion caused by the bite of sand-fly. Cutaneous leishmaniasis can be represented by 7 distinct pathophysiological phases, starting with the acute phase caused by the initial bite by the infected sand-fly followed by the silent phase where the promastigotes begin to replicate slowly. The next phase is the active phase which involves the initial development of necrotic tissue due to the increased recruitment of phagocytes. This eventually escalates to the ulcerative phase which is characterised by mass tissue necrosis. Once necrosis begins to slow down, the healing phase initiates, resulting in the re-epithelialization of the damaged dermal structures, however this may result in 'over-healing' leading to the occurrence of hyperplasia which represents the chronic phase. Ultimately this develops into the final phase, non-ulcerative leishmaniasis which is represented by the minor atrophy of the epidermal layer (25). Lyme's disease Lyme's disease, which is also known as Lyme borreliosis is the disease associated with the bite from an Ixodes tick infected with the bacteria Borrelia. Typically speaking, the transmission of this disease requires the tick to be attached onto the host for a minimum of 36 h and is separated into three distinct phases, starting with the early localized disease, which is characterised by the cutaneous development of erythema migrans at the bite area, coupled with systemic symptoms such as fever, malaise and headaches. The next phase is the early disseminated, where the singular erythema migran develops into multiple erythematic lesions with possible systemic symptoms such as lymphadenopathy, cranial nerve palsies and possible cardiac abnormalities. The final phase is the late disseminated, which does not involve the escalation of cutaneous abnormalities, but instead results in the development of arthritis in major joints. In terms of the cutaneous pathophysiological development of Lyme borreliosis, the erythema migrans typically appears after one to two weeks of the initial bite and can expand to a diameter size of + 5 cm over several days, with the possible development of visible concentric rings around the bite area. This generally persists up to 3 weeks if untreated, exuding a burning or itching sensation, but some cases may be asymptomatic. The main cutaneous hallmark of Lyme borreliosis is the development of concentric rings at the bite area, which is visually comparable to a bullseye. The Borrelia infection can usually be eliminated via a course of antibiotics lasting 2-4 weeks. However, if the infection is not treated early on, it can lead to neurological damage, with some rare cases of the symptoms persisting for up to 6 months. In regard to the development of Lyme Borreliosis, it is highly dependent on the location and life cycle of the Ixodes tick. The geographical distribution of the disease is biased towards the northern regions with temperate climates that contain a consistently high level of relative humidity. This is important as it affects the survival of the bacteria and the tick vector but can also determine the mammalian host in which it feeds and develops from. 
In most cases the tick itself feeds and copulates on deer and mice during autumn or spring, whereby the female tick then detaches from the mammalian host, laying its eggs on the ground which then hatch prior to summer. During the summer season, the newly hatched larvae feed on small mammals such as rodents and birds, before becoming inactive until the following spring. Upon the arrival of the next spring, these larvae then develop into nymphs which then repeat the same feeding cycle as their larval forms, before finally developing into adult ticks in autumn (https:// wonder. cdc. gov/ wonder/ prevg uid/ p0000 380/ p0000 380. asp# head0 01003 00000 0000). Overall, the developmental cycle takes approximately 2 years prior to reach full maturation, however during this time the tick may infect domestic animals which can then be transferred onto a human host, thus resulting in human based Lyme's disease, Fig. 2. Myiasis Myiasis is the disease resulting from the infestation of the human host by certain species of fly. Different species of fly produce differing forms of cutaneous myiasis: Cordylobia anthropophaga, Cuterebra spp., Dermatobia hominis, and Wohlfahrtia vigil result in furuncular myiasis; whereas Gasterophilus spp., Hypoderma ovis and Hypoderma lineatum result in migratory myiasis; and finally Chrysomya bezziana, Cochliomyia hominivorax and Wohlfahrtia magnifica which result in traumatic myiasis. Each of these three types of cutaneous myiasis can be distinguished by their distinct visual characteristics, which can be used as a form of identification to determine the infective species in question. Furuncular myiasis is characterised by the presence of erythematic lesions in the form of single or multiple papules, which then develop in furuncles as larvae grow. Migratory myiasis in contrast is characterised by lesions that occur as a result of larval migration, thus resulting in the occurrence of an elongated wound that may take on a meander-like morphology. Finally, there is traumatic myiasis which is the most severe form of myiasis, which revolves around the parasitic larvae infecting pre-existing wounds, causing significant cutaneous and subcutaneous haemorrhaging to the host tissue. This specific form of myiasis can be life-threatening if the larvae burrow into an existing wound on the host's facial area, as it can then further penetrate into ocular and brain tissue, resulting in blindness, possible brain damage, sepsis and death. The geographical distribution of human cutaneous myiasis is typically biased towards the tropical regions with more temperate climates, with the highest occurrence in regions of extreme poverty and low levels of hygiene (26). In regards to the life cycle of myiasis causing parasites, it is specific to the species associated with a particular type of cutaneous myiasis. In the case of furuncular myiasis, the main vector is Dermatobia hominis, which deposits its eggs onto either foliage or carrier flies. Once these parasitic eggs reach the host epidermis, they hatch and penetrate into the subdermal region where they begin to grow and develop for 5-10 weeks (27). Upon maturation, the larva pierces through the epidermis where it then detaches itself from the host before pupation. Cordylobia anthropophaga is also another vector associated with furuncular myiasis, but unlike D. hominis, follows a different route of human infestation. In the case of C. 
anthropophaga, the eggs are deposited onto contaminated soil which then hatch into larvae, before burrowing into the ground for approximately 9 days. During this period, the larvae will detect the presence of a possible host based on ground vibrations and heat signatures before attaching themselves onto the host. Once the host has been parasitized, then begins to burrow into its host tissue for one and a half weeks, causing furuncular myiasis in the process. Eventually the parasite leaves its host via a lesion, where it then begins to pupate before hatching into an adult fly. In the case of migratory myiasis, the two most common parasites are Gasterophilus species, Hypoderma ovis and Hypoderma lineatum which both begin the route of infestation by having their eggs laid on domestic animals. For Gasterophilus species, the parasite first lays their eggs on horse hairs which then hatch into larvae that burrow into the epidermis of human hosts upon successful contact. The parasite itself burrows into the lower epidermis where it then begins to migrate at that level, causing raised lesions that represent the parasite migratory route. Unlike the Gasterophilus species, Hypoderma ovis and Hypoderma lineatum are both slightly more severe in terms of their migratory routes which penetrate deep into the subcutaneous tissue of the host, damaging a large variety of areas including the muscle tissue, skin and nerve fibres. Both H. ovis and H. lineatum begin by attaching their eggs onto cattle hair, which then proceed to hatch within a week. The resulting larvae then burrows into the subcutaneous region causing erythema lesions, before migrating to other areas of the host body. For traumatic myiasis the most commonly associated species are Wohlfahrtia magnifica, Chrysomya bezziana and Cochliomyia hominivorax. In the case of W. magnifica, the vector deposits larvae directly onto the pre-existing cutaneous lesion which then proceed to feed for up to a week, resulting in major injury to the surrounding tissue. After the feeding phase, the larvae detach from the host and begins to pupate. For C. Bezziana and C. hominivorax, the vector deposits its eggs on the periphery of the lesion which then hatch after 15 h of incubation, before entering the feeding phase (27). Although the feeding phase is known to cause significant tissue damage to the host, the main problem lies in the quantity of parasitic eggs deposited by the vector, ranging from 150-500 eggs which result in the occurrence of multiple infestation cases. The feeding phase persists for up to one week, before the larvae detach from the host to begin pupation, Fig. 3. Pediculosis Pediculosis is the cutaneous parasitic infestation of the human scalp by the louse Pediculus humanus var. capitis. The life cycle of this parasite can be separated into the three stages of egg, nymph and adult, in which the adult stage is responsible for infestations and the main clinical manifestation. The cycle begins with its human host being infected with an adult louse which can remain within the scalp region for up to 30 days, Fig. 4. During its infestation period, both adult male and female lice feed on their host's blood which generally does not trigger any significant clinical manifestations, however some hosts may develop an allergic reaction to the saliva of the louse and can also develop secondary bacterial infections with respect to the pathogen carried by the parasite. 
The life cycle continues with the adult female louse laying up to eight eggs daily onto a hair shaft of the host, which take roughly a week to hatch. Once hatched, the nymph goes through three stages of moulting before reaching its adult stage within the course of a week. The general mode of transmission is head-to-head contact between hosts; however, secondary forms of transmission can also occur via the transfer of the louse onto items that are in close proximity to the head of the infested host, for example combs, pillows and towels. Pediculosis is widely distributed globally, with significantly greater prevalence in children aged 3 to 11 (https://www.cdc.gov/dpdx/pediculosis/index.html).
Scabies
Scabies is the cutaneous disease associated with the migration of the burrowing mite Sarcoptes scabiei into the skin of the host organism. The primary transmission route is direct skin-to-skin contact between a host infected with fertile female mites and a new host. Secondary transmission routes can also occur via day-to-day fomites that are in close proximity to the epidermis, such as clothing, bedding and furniture. Scabies exists as a result of the immunological response to the presence of Sarcoptes scabiei within the epidermal layer of the host's skin. The parasite itself only burrows within the epidermis, never going below the stratum corneum (https://www.cdc.gov/parasites/scabies/biology.html). The immunological response to the parasite typically manifests itself in the form of pruritus with visible signs of raised lines where the mites are present. The manifestation of such symptoms may not appear for up to two months from the initial infestation period. However, during this time, infestation is perpetuated by the life cycle of the parasite, leading to the possible accumulation of delayed cutaneous symptoms. Secondary immunological responses can also arise due to the presence of specific bacterial species carried by the parasite. The initial site of infestation can become infected by skin-based commensal bacteria such as Staphylococcus epidermidis and Staphylococcus aureus, which may cause the initial wound site to degenerate into a chronic wound. The life cycle of Sarcoptes scabiei begins with the egg stage, whereby the eggs laid within the wound site begin to hatch within an approximate time frame of four days. Upon hatching, the newly formed larvae then burrow into the stratum corneum, where they form moulting pouches. Upon the successful moulting of the larvae into nymphs, further moulting then occurs, resulting in fully developed adult mites. The mating process begins with the male mite fertilising the adult female by penetrating through her moulting pouch. Once fertilised, the female leaves the initial pouch and resurfaces above the stratum corneum, where she then begins to migrate to another suitable epidermal location. The female then burrows into the epidermis, where she lays eggs for approximately one to two months (Fig. 5). Typically, the clinical manifestations of scabies are the primary concern for the host; however, due to the pathology of this condition, social stigmas may also occur, resulting in mental and psychological issues (28). In terms of its global distribution, scabies is most abundant in tropical climates, in overcrowded environments and in regions of poor healthcare.
Tungiasis
Tungiasis stems from infestation by the female sand fleas Tunga penetrans and Tunga trimamillata.
The life cycle begins with the sand flea penetrating through the host's epidermal layer before burrowing into the stratum granulosum, where it then begins to feed off the host's dermal vasculature for a blood meal. As the female flea feeds, it releases approximately 200 eggs into the external environment before it dies and is sloughed away. The eggs shed by the female hatch over a period of 3 to 4 days; the hatchlings feed on local debris before entering the larval and pupal stages of maturation, which take between 3 and 4 weeks. Upon its full maturation into an adult flea, it begins to seek out a host for a warm blood meal, thus restarting the life cycle, Fig. 6. It should be noted that whilst both male and female fleas feed off the host, it is only the pregnant female fleas that burrow into the host's skin, for approximately 4-6 weeks (29). Typically, the wounds are focused around the lower body, with the vast majority of incidences occurring on the feet, specifically the toes and the sole of the foot (30). The pathological complications resulting from tungiasis can be separated into primary and secondary causes, with the primary causes relating directly to the flea, whilst secondary causes relate to the after-effects of the wound caused by the flea. The initial wounding event causes localised inflammation, leading to erythema and pruritus, with the possibility of ulceration if the flea burrows into the affected appendage. This in itself can lead to the production and build-up of pus within the affected wound, and may also lead to the deformation of the patient's toes as well as the loss of toenails, depending on the intensity of infestation. The secondary causes arising from the initial wounding event relate to the introduction of pathogens carried by the flea, which can cause adverse effects and superinfections, depending on the bacterial strain introduced into the patient. In the case of tungiasis, the rate of superinfection was recorded at 29% in a study carried out by Feldmeier et al. (31). The primary bacterial strain carried by the flea is of the Wolbachia genus, which has an endosymbiotic relationship with the host flea. Once the flea dies, the Wolbachia are released into the wound, facilitating further inflammation. The secondary bacterial strain that is sometimes known to be associated with the flea is Clostridium tetani, which is responsible for the development of tetanus. This can occur when the flea comes into contact with soil contaminated by the Clostridium bacteria, ultimately resulting in the flea acting as a carrier between the infected soil and the host. The resulting pathological complications are exacerbated in countries with inadequate sanitation and foot protection, leading to further opportunities for the flea and pathogens to develop. The accumulation of both parasites and pathogens can lead to the necrosis of plantar tissue, beginning with localised epidermal necrosis, which can occur as a consequence of the flea's blood meals reducing the transport of oxygen and nutrients to the epidermis. The geographical distribution of tungiasis is widespread amongst sub-Saharan Africa, South America, India and Pakistan, where it is endemic (32).
Gnathostomiasis
Gnathostomiasis is the parasitic disease caused by the migratory action of third larval stage Gnathostoma nematodes through human tissue and is generally acquired through the consumption of raw freshwater fish and copepods.
The initial cycle for the Gnathostoma parasite begins with the hatching of the parasitic eggs into their first larval stage, which requires an incubation period of 7 days. The first larval stage nematodes are then ingested by copepods, which allow the nematodes to reach their second larval stage. The second larval stage nematode is then ingested by larger species such as fish (e.g., loach and eels), as well as reptiles and amphibians, which allow the nematode larvae to transition into the third larval stage, localising within the muscle tissue of the second host (33). Humans then typically ingest the infected fish in its raw or undercooked state, which allows the third larval stage nematode to migrate from the second host into its human host via penetration of the intestinal mucosa. This is most prevalent in certain regions of the world where raw fish is consumed regularly as part of the region's culture, e.g., in the form of sashimi, sushi and ceviche (34). Ingestion of the parasite does not elicit any immediate clinical cutaneous response until the third or fourth week, when visible signs of cutaneous injury may begin to surface as a result of the lesions caused by parasitic migration through the superficial tissue, Fig. 7. Due to the mild nature of the initial cutaneous symptoms, the condition is commonly overlooked, as it is not coupled with any systemic symptoms, leading to possible cases of misdiagnosis. The initial cutaneous symptoms typically dissipate within the span of one to two weeks; however, these symptoms usually recur later on, manifesting near the initial site or on the chest and abdominal region. The migratory route of cutaneous gnathostomiasis typically traverses the dermis and subcutaneous tissue; however, in some instances the parasite itself may migrate upwards towards the epidermis, resulting in the formation of a defined nodular region containing the parasite in a dormant state (35). In this instance a punch biopsy can be undertaken to remove the nodule and thus the parasite. The defining characteristic of cutaneous gnathostomiasis is the migratory pattern of the parasite, which can be identified by visible areas of irritation, pruritus and migrating lumps (35). Generally speaking, the initial site of cutaneous irritation can be anywhere on the human body; however, subsequent cutaneous manifestations will typically occur on the chest and abdominal region, with only a single migratory location undergoing cutaneous inflammation in most instances.
Onchocerciasis
Onchocerciasis is the cutaneous parasitic disease associated with infection by the nematode parasite Onchocerca volvulus via a blackfly of the genus Simulium. The life cycle begins when the female blackfly feeds on a human, resulting in the transmission of third-stage filarial larvae from the blackfly to the human host. From there, the larvae migrate down towards the subcutaneous tissue, where they begin to mature into the adult form. These adult filariae are generally found within nodules, where they can live for up to 15 years. Within that time span, female adults produce microfilariae for up to 9 years. These microfilariae are spread throughout the body, with the most common locations being the skin and connective tissue, but larvae have also been known to migrate towards the periphery and may be found in the blood and urine of the host.
When another Simulium blackfly feeds, the microfilariae within the skin of the human host are ingested and migrate towards the blackfly's midgut. From there they begin to mature into the first larval stage, and onwards to the third stage, at which point they can enter another human host, thus repeating the cycle (Fig. 8). The main cutaneous manifestation of onchocerciasis occurs as a result of immunoreactivity towards the adult filariae and microfilariae. One of these cutaneous manifestations is the development of fibrosis around the adult filariae, which induces the formation of nodules around the affected area. Cutaneous manifestations associated with microfilariae include minor tissue inflammation in the presence of live migrating parasites, whilst dead microfilariae induce more severe tissue inflammation with the possibility of necrosis (36). Overall, this results in a pruritic papular rash with hyperpigmentation and scarring on the cutaneous surface. Although the cutaneous manifestation of onchocerciasis is quite significant, the main clinical symptom is actually associated with the degradation of ocular integrity, which can ultimately lead to blindness. Initial clinical presentations include a transient rash, ocular pruritus and photophobia; however, if the infestation reaches a chronic level, the host may experience lichenification, tissue atrophy and loss of vision. Lichenification is a secondary skin lesion process which can occur as a result of chronic pruritus and is characterised by the transformation of the skin into a thick leathery texture that is often accompanied by hyperpigmentation. In terms of the geographical distribution of onchocerciasis worldwide, the majority of cases occur within Africa, with some cases also found in Latin America (https://www.who.int/onchocerciasis/distribution/en/).
Strongyloidiasis
In the case of the free-living cycle, the newly hatched rhabditiform larvae are non-infective and simply travel to the intestinal lumen, where they are excreted, leading to soil contamination. The excreted rhabditiform larvae will then mature into filariform larvae, thus restarting the entire cycle. In the case of the auto-infection cycle, the rhabditiform larvae mature into filariform larvae within the intestinal lumen; these filariform larvae then penetrate through the perianal skin, resulting in host reinfection, Fig. 9 (the life cycle of strongyloidiasis parasites). The auto-infection cycle is the most problematic as it is self-perpetuating and can therefore be extended throughout the host's lifetime unless the appropriate treatment is facilitated. One of the most significant problems associated with strongyloidiasis auto-infection is the issue of hyperinfection syndrome. This occurs when repeated auto-infection cycles escalate, typically in immunosuppressed hosts, leading to cases of sepsis as a result of gradual bacterial invasion through the damaged intestinal walls. Another issue resulting from perpetual auto-infection cycles is the positive feedback loop generated, which leads to the accumulation of filariform larvae within the body. This can ultimately lead to the dissemination of larvae within the host, resulting in the possible migration of filariform larvae towards end organs such as the brain, which can lead to host mortality. Due to the migratory nature of these parasites, other organs can be infected, leading to possible cases of human-to-human transfer via organ transplants.
In terms of the host's immune reaction to strongyloidiasis, there are generally no clear symptoms until the host reaches the hyperinfection stage or disseminated filariform larvae accumulate in other organs, thus eliciting an immune response. Hyperinfection-based symptoms are dependent on the origin of infection and can be classified as gastrointestinal, pulmonary or extraintestinal. Gastrointestinal symptoms include vomiting, nausea, abdominal pain and diarrhoea, whilst pulmonary symptoms include haemoptysis, tracheal irritation, coughing, dyspnea and wheezing. Extraintestinal symptoms can be further subdivided into skin, central nervous system, haematological and allergic responses. The main cutaneous symptoms are pruritus and petechial rashes; central nervous system symptoms include seizures, headaches and comas; haematological symptoms include chills and fevers; whilst allergic responses can result in hives or anaphylaxis (37). In rare cases strongyloidiasis can elicit an acute symptomatic response upon immediate exposure to the parasite, whereby the symptoms can persist for up to several weeks. Strongyloidiasis is most prevalent within Southeast Asia, sub-Saharan Africa, southern and eastern Europe, the Caribbean islands and Latin America. Overall, there has been a global increase in strongyloidiasis due to a variety of reasons, ranging from lack of sanitation and an insufficient supply of potable water to poor hygiene.
Comparisons between cutaneous parasites
The general consensus regarding the similarities and differences between cutaneous parasites relates to their specific associations with the host. Each parasitic scenario is distinguishable from the others based on its cutaneous symptoms, duration and possibility of reinfestation, as shown in Table II. Generally speaking, the majority of parasites that elicit significant cutaneous symptoms enter the host through trans-epidermal means, triggering a minute immune response in most cases. However, this initial response is generally a result of either the vector or the microparasite piercing into or through the epidermis, so the actual parasite-associated pathological symptoms do not take effect until later on. The dormancy period is specific to the parasite in question, whereby some cutaneous manifestations do not appear until several months have elapsed. The nature of these dormancy periods is dependent on the life cycle of the parasite and its associated behaviours with regard to its interactions with the host. For example, infestation by a microparasite such as the strongyloidiasis parasite rarely elicits any immediate major cutaneous response; the parasite lies dormant for extended periods of time before the eventual mass accumulation of filariform larvae triggers discernible cutaneous symptoms. By contrast, another microparasite, such as the one responsible for onchocerciasis, can require a couple of days before the immune system detects any significant activity relating to the adult filariae and microfilariae, which then leads to the manifestation of cutaneous symptoms. On the other side of the spectrum, there are macroparasites such as the one responsible for tungiasis, which typically elicits an immediate immune response due to the magnitude of epidermal damage caused by the infestation process.
The classification of microparasites and macroparasites does not simply depend on the size of the parasite itself, but also on its life cycle with respect to the location of its reproductive cycle. Macroparasites can typically be distinguished by the fact that they reproduce outside of the host, whereas microparasites almost always reproduce from within the host. Based on the information displayed in Table II, the parasites are quite distinguishable from one another based on the time period associated with the manifestation of cutaneous symptoms, as well as their specific pathological symptoms.
Current treatments
The general route for treatment of cutaneous parasites typically involves the use of antibiotics to combat parasite-associated pathogenesis. In most cases, the oral route of drug delivery is the most common for combatting cutaneous parasites, due to the systemic nature of parasitic circulation and distribution within the host. Whilst oral administration is the most common drug delivery pathway, other delivery methods can also be used for effective administration. Intravenous delivery is utilised for certain cutaneous parasitic conditions such as leishmaniasis, Lyme's disease and onchocerciasis, whilst intralesional drug delivery is applied specifically to leishmaniasis. Other methods include topical antibiotic delivery as well as more miscellaneous approaches such as suffocation, heat therapy and larvae removal.
Antibiotics
Anti-parasitic strategies via the use of antibiotics are the most common approach, given that antibiotics are systemically distributed, which ensures that the parasite will be affected within a certain time period before the antibiotic is excreted from the host. Currently, most of the utilised antibiotics focus on disengaging the parasite from its usual functions such as procreation and migratory movement. In this regard, most antibiotics do not actively kill the parasite, but instead reduce it to such a state that it can no longer reproduce or undertake the actions necessary to survive. The most commonly used antibiotics are outlined below. Ivermectin is a semi-synthetic anthelmintic drug with a broad spectrum of antiparasitic activity. Ivermectin works by selectively binding to chloride ion channels within the nerve and muscle cells of microfilariae, which in turn increases the permeability of the microfilarial cells to chloride ions, resulting in cellular hyperpolarisation and therefore cell death. Ivermectin is most commonly used for gnathostomiasis, myiasis, onchocerciasis, pediculosis, scabies and strongyloidiasis. In terms of its regimen, varying oral dosages are used for different parasites: for gnathostomiasis, 0.2 mg/kg for 7 days; for myiasis, one to two doses of 150-200 µg/kg; for onchocerciasis, 150 µg/kg every 6 months; for pediculosis, 200 µg/kg every 10 days; for scabies, two doses of 200 µg/kg at an interval of two weeks; and for strongyloidiasis, 200 µg/kg daily for 2 days (2, 38-42). Albendazole is an anthelmintic drug that has multiple mechanisms for the induction of anti-parasitic activity. Albendazole selectively degenerates the cytoplasmic microtubules via the inhibition of microtubule polymerisation, which prevents the parasitic cells from undergoing mitosis, ultimately killing them. The other mechanisms include the disruption of metabolic pathways, which inhibits ATP synthesis, as well as the disruption of the parasite's glycogen storage, which prevents the parasite from effectively utilising glucose.
One of the issues regarding albendazole lies in the fact that it has a very low solubility in water, so for the oral administration route it is generally suggested that albendazole be ingested alongside meals with a high fat content. Current usages include gnathostomiasis, 400 mg for 21 days; myiasis, 400 mg for 3 days; and strongyloidiasis, 400 mg twice daily for 7 days (2,38,42). Fluconazole and ketoconazole are both orally administered antifungal drugs used in the treatment of systemic and cutaneous fungal infections. Their mechanisms of action are the same: both follow the selective inhibition of the enzyme lanosterol 14-α-demethylase, which is used for the conversion of lanosterol to ergosterol. The inhibition of this enzyme ultimately prevents the formation of the fungal cell membrane, which requires ergosterol for its synthesis. In the specific case of their usage against cutaneous parasites, their primary role is in the improved healing of cutaneous lesions via the suppression of fungal growth (43). Miltefosine is an antimicrobial agent that is specifically utilised for leishmaniasis. The mechanism of action follows the disruption of normal mitochondrial function through the inhibition of cytochrome c oxidase, which results in cell death. For the treatment of leishmaniasis, 50 mg of miltefosine should be taken daily for 28 days; however, it should be noted that certain gastrointestinal side effects such as nausea and vomiting may occur (43). Amphotericin B is an antifungal drug that can produce fungicidal or fungistatic effects depending on the concentration of the dose relative to the susceptibility of the fungal target. Unlike fluconazole and ketoconazole, which target the production of ergosterol, amphotericin B targets the ergosterol itself by binding to it, thus destabilising the integrity of the cell membrane and leading to the formation of transmembrane channels, which in turn cause the contents of the cell to leak out, resulting in cell death. Amphotericin B is used in the treatment of leishmania through intravenous injection of 0.5-1.0 mg/kg per day for 20 days (43). Sodium stibogluconate is an anti-leishmania drug that can be applied through the intravenous and intralesional pathways. The mechanism of action follows the inhibition of DNA topoisomerase, which is vital for DNA replication and transcription. This is because DNA topoisomerase controls the cleavage and rejoining of the DNA strand, which, if inhibited, prevents the cell from replicating, leading to cell death. Sodium stibogluconate should be applied intravenously at 20 mg/kg for 20 days, whilst intralesional application is 0.2-5 ml every 3-7 days (43). Paromomycin is an antibiotic that inhibits bacterial protein synthesis. The mechanism of action follows the binding of paromomycin to the 16S ribosomal RNA, which then results in the formation of defective polypeptide chains during protein synthesis. This eventually leads to the build-up of defective proteins within the bacterial system, thus resulting in bacterial death. For its usage against leishmania, paromomycin should be applied topically as a 15% ointment coupled with 12% methylbenzethonium chloride for a duration of 10 days, followed by 10 days off, and finally a further 10 days of application (43). Doxycycline is a synthetically derived antibiotic used in the treatment of a wide range of bacterial infections.
The mechanism of action follows the binding of doxycycline to the 16S rRNA region of the 30S subunit of the bacterial ribosome, which is responsible for protein synthesis. Once bound, doxycycline prevents aminoacyl-tRNA from attaching to the ribosome, which ultimately prevents protein translation from occurring. Over time this prevents the bacteria from replicating, thereby producing a bacteriostatic effect. For its application against Lyme's disease, doxycycline is to be taken orally, 100 mg twice daily for two weeks, whilst the intravenous injection of doxycycline should only be used in severe cases (44). Amoxicillin is an antibiotic derived from penicillin for the treatment of Gram-positive bacteria. Amoxicillin works by inhibiting the continual cross-linkage of the bacterial cell wall through the disruption of penicillin-binding proteins. Over time the bacterial cell wall weakens due to the imbalance between enzyme-based autolytic action and cross-link maintenance, ultimately leading to the leakage of the bacterial contents and thus cell death. For its usage against Lyme's disease, amoxicillin 500 mg is taken three times per day for 2 weeks, whilst intravenous injections are used in the most severe cases (44). Cefuroxime is a beta-lactam antibiotic that covers a broad spectrum of bacterial infections, similar to that of penicillin. The antibacterial mechanism of cefuroxime follows the inhibition of the bacterial cell wall synthesis process, specifically the third and final stage. This disrupts the formation of the peptidoglycan layer that makes up the bacterial cell wall, which leads to bacterial cell death through the leakage of its internal contents. For Lyme's disease, cefuroxime is prescribed at 500 mg twice per day for 2 weeks, whilst intravenous injection is used for severe scenarios (44). Mebendazole is an anthelmintic used to treat infections by parasitic worms, and has also been applied to myiasis (44). The mechanism of action works by directly preventing the parasites from producing the microtubules that are needed to facilitate the absorption of glucose during the larval and adult stages. Mebendazole binds to tubulin, preventing it from undergoing polymerisation, which in turn prevents the formation of microtubules. As the parasite is unable to take up glucose, it eventually depletes its energy stores and dies as a result. Levamisole is an anthelmintic drug used against parasitic infestations such as myiasis (44). Levamisole specifically targets nicotinic acetylcholine receptors as a way of facilitating its mechanism of action against parasites. The specific action facilitated by levamisole is a severe reduction in copulative capacity, via inhibition of the male parasite's use of its reproductive muscles, thereby preventing copulation from occurring. Other benefits also include the stimulation of host-cell activation, coupled with improved phagocytic function; however, it has been withdrawn from the market due to a variety of adverse effects. Moxidectin is a semisynthetic antiparasitic drug that works against both endo- and ectoparasites. The mechanism of action works via specific binding to the chloride ion channels within the parasite, which are required for the normal functioning of nerve and muscle cells. Once moxidectin has bound, the ion channels become more permeable, resulting in a large influx of chloride ions within the parasite, leading to its paralysis and eventual demise.
Moxidectin is generally prescribed in 8 mg doses with varying dosage periods depending on the severity of onchocerciasis; however, it has been replaced by ivermectin in most cases (39). Despite the efficacy of antibiotics, they can also produce significant side effects, which occur as a result of their systemic distribution within the host. This issue, coupled with the development of antibiotic resistance, results in a situation whereby antibiotics can no longer be considered a sustainable anti-parasitic method, which further incentivises the development of localised anti-parasitic treatments that specifically target the parasites instead of producing unwanted systemic effects. Therefore, there is an urgent need to develop alternative cost-effective treatment methods for patients with cutaneous parasite infection.
Advanced therapies
Thermotherapy
Thermotherapy works on the basis that some parasitic species, such as the Leishmania species, are unable to multiply within the host's macrophages once the localised temperature is above 39 °C (45). The typical approaches to this method include the utilisation of infrared light, hot baths and laser therapy, all of which can generate non-localised heat that can damage the tissue surrounding the cutaneous lesion of interest (46). For such reasons, radiofrequency-based thermotherapies were developed as a means of more accurately targeting leishmanial lesions without affecting surrounding tissue, thus leading to higher quality treatment with minimal side effects. Generally speaking, radiofrequency-based thermotherapy is the most effective; some studies have shown that a single application was able to encourage re-epithelialisation of the leishmania-affected lesion, thus improving the speed of healing (47). It should also be noted that the utilisation of radio frequencies can help to stimulate increased collagen synthesis, contraction and remodelling, which ultimately results in improved cutaneous healing with significantly better cosmetic results (48). Despite the various advantages presented by radiofrequency-induced thermotherapy, the main limitation is that radio frequencies only penetrate to a depth of 4 mm, which is ideal for leishmania amastigotes but not for other cases of leishmaniasis that have penetrated deeper into the subcutaneous tissue, thereby limiting its usage to superficial leishmaniasis. Although the main reported applications of thermotherapy are in the treatment of cutaneous leishmaniasis, heat may also be used to kill other parasites such as those causing gnathostomiasis, myiasis and scabies (Sarcoptes scabiei); the use of heat in cooking fish (https://web.stanford.edu/group/parasites/ParaSites2001/gnathostomiasis/PAGES/index.html), ironing clothes (49) or direct heating (https://www.cdc.gov/parasites/scabies/gen_info/faqs.html) has been reported as a thermal mechanism for killing these parasites.
Cryotherapy
Cryotherapy is an alternative treatment for leishmaniasis, which typically utilises either liquid carbon dioxide or liquid nitrogen to kill the parasites. The suggested mechanism of action works by freezing the parasite, which causes the formation of intra- and extracellular ice crystals, which when fully expanded rupture the parasite's cell membrane (50). This has been shown to be quite effective with regard to the facilitation of amastigote-based cryonecrosis.
Despite the fact that this form of treatment has been reported to have a success rate above 95%, controlled trials have instead shown its effectiveness to be approximately 27% (51,52). This low clinical success rate may be attributed to a variety of different factors, such as the fact that the cryogen does not immediately contact the dermis due to the Leidenfrost effect, which in turn reduces the efficacy of the therapy, as immediate contact is required to eliminate the parasites without damaging the surrounding cutaneous tissue (53). Other factors include the duration and frequency of each cryotherapy session, as the duration of each liquid nitrogen application may be too short between each interval to effectively inhibit the proliferation of parasites within the affected lesions (52). Similar effects are observed when cryo-treating patients with Tunga penetrans (54). Although efficacy is observed against leishmaniasis and Tunga penetrans, cryotherapy is not recommended for other cutaneous parasitic conditions such as gnathostomiasis.
Photodynamic Therapy
Photodynamic therapy in the context of cutaneous anti-leishmanial treatment refers to the utilisation of photo-excitable dyes in conjunction with light of specific wavelengths to induce the release of reactive oxygen species (ROS), which in turn results in the photodynamic inactivation of the parasites. In the case of Leishmania, the dyes uroporphyrin and phthalocyanines are utilised to facilitate complete deactivation (55). Other dyes include methylene blue, which can serve as a low-cost alternative for photodynamic therapy (56). The main advantage of this therapy is that the dye can selectively accumulate within the parasite prior to the application of the ROS-inducing wavelengths. This allows for the effective destruction of parasites without harming the host tissue. Photodynamic therapy is also being used in the quest to eradicate Lyme's disease (https://www.klinik-st-georg.de/en/antimicrobial-photodynamic-therapy/).
Laser therapy
The application of lasers for anti-parasitic treatment is dependent on the type of laser that is used, which determines its output, oscillation form and conversion efficiency. Output refers to the strength of the laser in megawatts, whilst oscillation form refers to the motion of the laser, which can be either pulsed or continuous. Conversion efficiency refers to the balance of energy input with respect to useful energy output. Lasers are classed into three different types: gas lasers, solid-state lasers and semiconductor lasers; however, only gas and solid-state lasers are used for antiparasitic treatments. Gas lasers utilise gas as the laser medium, which in the specific context of anti-leishmanial treatment requires the use of either carbon dioxide or argon. Solid-state lasers use doped solid media as the laser medium, which in the context of anti-leishmanial treatment means neodymium-doped yttrium aluminium garnet (Nd:YAG) or erbium-doped media (57). The general consensus is that carbon dioxide is the most commonly utilised compound for leishmanial laser therapy, primarily due to its abundance in nature, coupled with its effectiveness and safety when used for leishmania. The only problems associated with carbon dioxide lasers are the minor side effects, which are generally of a cosmetic nature, such as hypertrophic scarring, erythema and hyperpigmentation (58). The other compounds also elicit similar results to carbon dioxide, but with differing levels of efficacy.
An advantage of using laser therapy lies in the fact that the power density can be varied to induce various effects upon the affected lesion, e.g., the lesion can be ablated to destroy the parasites and can then be treated again with a lower power density laser to form a protective top layer which will isolate the freshly ablated tissue from the external environment. Another form of laser therapy is the use of a pulsed dye laser, which improves the cutaneous properties of the lesion, resulting in improved pliability, reduced lesion size, reduced erythema and improved skin texture; however, one of the problems of pulsed dye lasers lies in their limited penetration depth, which constrains them to superficial applications (59). The limitation associated with laser penetration depth is primarily dependent on the laser medium and the wavelength that is used, as opposed to the oscillation form. An example is the comparison of YAG lasers at differing wavelengths, whereby 2940 nm (erbium-doped YAG) only results in partial penetration through the stratum corneum, whilst 1064 nm (Nd:YAG) results in the laser penetrating through to the dermal vasculature layer (60).
Nanoparticle based therapy
There is large scope within nanotechnology to help in the development of new platforms. These may be based on drug carriers such as liposomes, polymeric micelles or dendrimers, or incorporated into larger macromolecular structures such as hydrogels, wafers or even bandages. The standard delivery method of anti-parasitic compounds typically follows the ingestion route, which leads to the systemic circulation of the compound, resulting in lower efficacy, non-specificity and increased side effects. To combat this issue, nanoparticles have been utilised as a means of increasing the efficacy of drug delivery whilst reducing levels of toxicity. In the case of leishmaniasis treatments, it has been reported that a variety of different nanoparticles have shown significant effectiveness against the parasite. Such nanoparticles include liposomes, metal oxides and lipid nanoparticles, with iron(III) oxide magnetic nanoparticles being of particular interest due to their ease of removal from the host's body, thereby making them a good carrier medium for superficial drug delivery (61,62). For drug delivery, liposomal preparations of antimicrobials such as amphotericin B have been reported for leishmania treatments (63,64), whilst polymeric carriers have been reported loaded with primaquine (65) or amphotericin B (66). Combined delivery of antibiotics using nanotechnology delivery systems has resulted in reduced resistance (67); these findings can be used to guide the development of new anti-parasitic interventions. Whilst there is some progress in this field, there is huge scope to improve and to widen the target from leishmaniasis to other parasites. Aside from drug delivery, nanotechnology can be used topically as a local lethal dose, killing off parasite activity. Iron(III) oxide nanoparticles have displayed antileishmanial effects, in which the suggested mechanism of action occurs through the production of nitric oxide (68). Nitric oxide is one of the main molecules utilised by macrophages against leishmania, which involves the macrophage undertaking the oxidative burst mechanism. This produces high quantities of nitric oxide and ROS, which effectively facilitates the elimination of promastigotes within the macrophage, thereby limiting the population size within the host (69).
Whilst the ROS-induced mechanism of anti-parasitic activity is well understood, the same does not apply to nitric oxide, whose specific mechanism is still not fully understood. Current research indicates that nitric oxide is not directly involved in the killing of Leishmania and may instead contribute to host tissue damage (70); however, it has also been shown that downregulation of nitric oxide provides Leishmania with a form of immune escape via a reduced host response (71), suggesting that nitric oxide is needed to prevent immune escape and to initiate a host response against the parasite. Other studies have examined the use of silver nanoparticles, exploiting their inherent antimicrobial activity and localised cytotoxicity either alone or in combination with UV light (72,73). Another exciting application of nanotechnology for the treatment of cutaneous leishmaniasis is the use of iron oxide coupled with an applied magnetic field; this results in magnetic hyperthermia, which can be used to kill parasites. Berry et al. reported the use of iron oxide as heat seeds for thermal kill of amastigote cells in vitro (74). This study, in combination with the ROS generation findings, indicates that iron oxide nanoparticles may be a frontrunner in the next generation of leishmania treatments. The literature in this area is highly biased towards leishmania treatment; however, there is scope to develop therapeutics for other cutaneous parasites. The beauty of nanotechnology lies in the breadth of unique qualities each material possesses at the nanoscale, as well as the ease with which materials can be tailored towards bespoke applications. We believe that more work targeted in this area towards some of the less studied parasites may yield great reward.
Future Perspectives
The current treatments for cutaneous parasites are virtually all encompassed by the use of drugs as a general solution. Alternative treatments are effective, but are nonetheless limited to specific types of parasite. The primary issue inhibiting the development of a general-purpose, non-antibiotic therapy lies in the fact that all cutaneous parasites have different life cycle mechanisms, coupled with varying migratory routes, which may mean that the parasites do not dwell within the superficial layers of skin for an extended period of time. For a parasite such as Leishmania, which dwells within the superficial layer of skin, non-invasive treatments can be applied with a high level of efficacy, as the parasite lives within cutaneous nodules, thereby providing a viable point for exploitation. Other parasites have been shown to pass through the upper cutaneous region during their migratory routes; however, unlike leishmaniasis and possibly onchocerciasis, they are not known to live within exposed regions of the host and as such cannot be treated through alternative treatments. Perhaps a consideration that needs to be revisited is our current method of approaching parasitic treatment. Our current alternative methods aim to target the parasites based on where they reside, which presents us with a specific set of limitations, namely the depth and invasiveness of the treatment that can be applied. Instead, it may be worth developing a method which influences the migratory route of the parasite, thereby herding the parasites to a specific area where they can then be annihilated in a more efficient manner.
Whilst this method may not itself directly destroy the parasite, it would instead act as a process to facilitate the controlled movement of parasites, which would hamper their development at a bare minimum. The primary concern regarding this method would be the use of an effective antiparasitic agent that does not compromise the safety of the host. This compound would be required to fulfil two specific requirements: the first is that it be non-cytotoxic to human cells, and the second is that the compound be able to circulate systemically around the host before being safely excreted. The compound itself should also be able to exert a repulsive effect on the parasite, which would therefore allow the migratory route to be influenced. Assuming that the compound temporarily accumulates in certain regions of the host, it would act as a temporary road-block within the parasite's migratory route, forcing the parasites to undertake a different migratory path. The main issue with this method is that there are currently no clinically known compounds that have such effects, and it would also require a significant amount of time and resources to identify the changes in migratory pattern. Despite the significant problems associated with this method, it may be feasible with the use of magnetic nanoparticles, whereby their distribution within the host can be controlled through the use of a tunable magnetic field. Ultimately, there is an urgent need for new pragmatic treatment approaches to parasitic infection. Often such cases present in tropical climates or low-income countries, both of which may result in challenges for administration. Biomaterials research and expertise have grown vastly over the past two decades, with solution-based approaches to multiple clinical conditions and disease states. These platform technologies could be adapted to suit the requirements for the treatment of cutaneous parasite infections; however, greater awareness of the clinical need is required in order to leverage greater research investment for such progress to be realised.
ACKNOWLEDGEMENTS AND DISCLOSURES
The authors would like to state they have no conflict of interest.
Author contributions
EM wrote the manuscript draft with HP and CH supervising and adding to the drafts. All authors approved the manuscript before submission.
Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
The Revolt of the Upper Castes This article argues that the recent rise of Hindu nationalism in India can be seen as a revolt of the upper castes against the egalitarian demands of democracy. By and large, the upper castes have managed to retain their power and privileges in the post-independence period. Nevertheless, democratic institutions have forced them to accept some sharing of power and privilege in important spheres of public life. Some economic changes have also undermined their dominant position, at least in rural areas. The Hindutva project is a lifeboat for the upper castes, in so far as it stands for the restoration of the Brahminical social order that places them at the top. Seen in this light, the recent growth of Hindu nationalism is a major setback for the movement to annihilate caste and bring about a more equal society in India. Introduction The recent growth of Hindu nationalism in India is a huge setback for the movement to annihilate caste and bring about a more equal society. The setback is not an accident: the growth of Hindu nationalism can be seen as a revolt of the upper castes against the egalitarian demands of democracy. Hindutva and Caste The essential ideas of Hindu nationalism, also known as 'Hindutva,' are not difficult to understand. They were explained with great clarity by V.D. Savarkar in Essentials of Hindutva (Savarkar, 1923), and amplified by other early Hindutva thinkers such as M.S. Golwalkar. The basic idea is that India belongs to the 'Hindus,' broadly defined in cultural rather than strictly religious terms that include Sikhs, Buddhists, and Jains but not Muslims and Christians (because the cradle of their religion is elsewhere). The ultimate goal of Hindutva is to unite the Hindus, revitalize Hindu society and turn India into a 'Hindu rashtra.' 1 Incidentally, the arguments that were advanced to support these ideas involved startling departures from rational thinking, common sense, and scientific knowledge. Just to illustrate, consider Golwalkar's argument that all Hindus belong to one race, the Aryan race. Golwalkar did not have to contend, at that time, with the scientific evidence we have against that argument today, but he did grapple with an alleged discovery that Aryans came from somewhere north of India, in fact near the North Pole. He dealt with this claim by arguing that the North Pole itself used to be located in India: '… the North Pole is not stationary and quite long ago it was in that part of the world, which, we find, is called Bihar and Orissa at the present;… then it moved northeast and then by a sometimes westerly, sometimes northward movement, it came to its present position… we were all along here and the Arctic zone left us and moved away northwards in its zigzag march.' 2 Golwalkar did not explain how the Aryans managed to stay in place during this 'zigzag march' of the North Pole. He used similarly contrived arguments to defend the odd claim that all Hindus share 'one language. ' The Hindutva project can also be seen as an attempt to restore the traditional social order associated with the common culture that allegedly binds all Hindus. The caste system, or at least the Varna system (the four-fold division of society), is an integral part of this social order. In We or Our Nationhood Defined, for instance, Golwalkar clearly says that the 'Hindu framework of society,' as he calls it, is 'characterized by varnas and ashrams' (Golwalkar 1939, p. 54). 
This is elaborated at some length in Bunch of Thoughts (one of the foundational texts of Hindutva), where Golwalkar praises the Varna system as the basis of a 'harmonious social order.' 3 Like many other apologists of caste, he claims that the Varna system is not meant to be hierarchical, but that does not cut much ice. Golwalkar and other Hindutva ideologues tend to have no problem with caste. They have a problem with what some of them call 'casteism'. The word casteism, in the Hindutva lingo, is not a reference to caste discrimination (like 'racism' is a reference to race discrimination). Rather, it refers to situations such as Dalits asserting themselves, or demanding special safeguards like reservation. That is casteism, because it divides Hindu society. The Rashtriya Swayamsevak Sangh (RSS), the torch-bearer of Hindu nationalism today, has been remarkably faithful to these essential ideas. On caste, the standard line remains that caste is part of the 'genius of our country,' as the National General Secretary of the Bharatiya Janata Party, Ram Madhav, put it recently in Indian Express (Madhav, 2017), and that the real problem is not caste but casteism?. 4 An even more revealing statement was made by Yogi Adityanath, head of the BJP government in Uttar Pradesh, in an interview with NDTV two years ago. Much like Golwalkar, he explained that caste was a method for 'managing society in an orderly manner.' He said: 'Castes play the same role in Hindu society that furrows play in farms, and help in keeping it organised and orderly… Castes can be fine, but casteism is not… ' 5 To look at the issue from another angle, Hindutva ideologues face a basic problem: how does one 'unite' a society divided by caste? The answer is to project caste as a unifying rather than a divisive institution. 6 The idea, of course, is unlikely to appeal to the disadvantaged castes, and that is perhaps why it is rarely stated as openly as Yogi Adityanath did in this interview. Generally, Hindutva leaders tend to abstain from talking about the caste system, but there is a tacit acceptance of it in this silence. Few of them, at any rate, are known to have spoken against the caste system. Sometimes Hindutva leaders create an impression that they oppose the caste system because they speak or act against untouchability. Savarkar himself was against untouchability, and even supported one of Dr. Ambedkar's early acts of civil disobedience against it, the Mahad satyagraha (Zelliott 2013, p.80). But opposing untouchability is not at all the same as opposing the caste system. There is a long tradition, among the upper castes, of defending the caste system along with opposing untouchability, often dismissed as a recent perversion of it. Gandhi himself argued that 'the moment untouchability goes, the caste system will be purified.' 7 Uncertain Power The ideology of Hindu nationalism plays into the hands of the upper castes, since it effectively stands for the restoration of the traditional social order that places them at the top. As one might expect, the RSS is particularly popular among the upper castes. Its founders, incidentally, were all Brahmins, as were all the RSS chiefs so far except one (Rajendra Singh, a Rajput), and many other leading figures of the Hindutva movement -Savarkar, Hedgewar, Golwalkar, Nathuram Godse, Syama Prasad Mukherjee, Deen Dayal Upadhyay, Mohan Bhagwat, Ram Madhav, to name a few. 
Over time, of course, the RSS has expanded its influence beyond the upper castes, but the upper castes remain their most loyal and reliable base. In fact, Hindutva has become a kind of lifeboat for the upper castes, as their supremacy came under threat after India's independence. By and large, of course, the upper castes have managed to retain their power and privileges in the postindependence period. Just to illustrate, in a recent survey of the 'positions of power and influence' (POPI) (the university faculty, the bar association, the press club, the top police posts, trade-union leaders, NGO heads, and so on) in the city of Allahabad, we found that seventy-five per cent of the POPIs had been captured by members of the upper castes, whose share of the population in Uttar Pradesh is just sixteen per cent or so. Brahmins and Kayasthas alone accounted for about half of the POPIs. Interestingly, this imbalance was, if anything, more pronounced among civic institutions such as trade unions, NGOs, and the press club than in the government sector. Allahabad, of course, is just one city, but many other studies have brought out similar patterns of continued upper-caste dominance in a wide range of contexts -media houses, corporate boards, cricket teams, senior administrative positions, and so on. 8 Nevertheless, the upper-caste ship has started leaking from many sides. Education, for instance, used to be a virtual monopoly of the upper castes -at the turn of the twentieth century, literacy was the norm among Brahmin men but virtually nil among Dalits. 9 Inequality and discrimination certainly persist in the education system today, but in government schools at least Dalit children can claim the same status as uppercaste children. Children of all castes even share the same midday meal, an initiative that did not go down well with many upper-caste parents (Drèze, 2017). The recent introduction of eggs in midday meals in many states has also caused much agitation among upper-caste vegetarians. 10 Under their influence, most of the states with a BJP government have been resisting the inclusion of eggs in school meals to this day. The schooling system is only one example of a sphere of public life where the upper castes have had to resign themselves to some sharing of power and privilege. The electoral system is another example, even if 'adult suffrage and frequent elections are no bar against [the] governing class reaching places of power and authority,' as Dr. Ambedkar put it. 11 The upper castes may be somewhat over-represented in the Lok Sabha (lower house of Parliament), but their share of it is a moderate twenty-nine per cent, in sharp contrast with the overwhelming upper-caste dominance of POPIs in society. At the local level, too, Panchayati Raj institutions and the reservation of seats for women, scheduled castes and scheduled tribes have weakened the grip of the upper castes on political affairs. Similarly, the judicial system restrains the arbitrary power of the upper castes from time to time (for instance in matters of land grab, bonded labour, and untouchability), even if the principle of equality before the law is still far from being realised. Some economic changes have also undermined the dominant position of the upper castes, at least in rural areas. Many years ago, I had an opportunity to observe a striking example of this process in Palanpur, a village of Moradabad district in western Uttar Pradesh. 
When we asked Man Singh, a relatively educated resident of Palanpur, to write down his impressions of recent economic and social change in the village, here is what he wrote (in late 1983): 1. Lower castes are passing better life than upper castes. So there has been a great jealousy and hatefulness for lower castes in the hearts of upper caste people. 2. Ratio of education is increasing in low castes very rapidly. 3. On the whole, we can say that low castes are going up and upper castes are coming down; this is because the economic condition of lower castes seems better than higher castes people in the modern society. I could not make sense of this until I understood that by 'lower castes,' Man Singh did not mean Dalits but his own caste, the Muraos (one of Uttar Pradesh's 'other backward classes'). With that clue, what he wrote made good sense, and indeed, it was consistent with our own findings: the Muraos, a farming caste, had prospered steadily after the abolition of zamindari and the onset of the Green Revolution -more so than the upper-caste Thakurs. Even as the Thakurs were struggling to keep the appearances of idle landlords (traditionally, they are not supposed to touch the plough), the Muraos were taking to multiple cropping with abandon, installing tubewells, buying more land and -as Man Singh hints -catching up with the Thakurs in matters of education. The Thakurs did not hide their resentment. Palanpur is just one village, but it turns out that similar patterns have been observed in a good number of village studies. 12 I am not suggesting that the relative economic decline of the upper castes is a universal pattern in rural India in the postindependence period, but it seems to be a common pattern at least. In short, even if the upper castes are still in firm control of many aspects of economic and social life, in some respects they are also losing ground, or in danger of losing ground. Even when the loss of privilege is relatively small, it may be perceived as a major loss. Striking Back Of all the ways upper-caste privilege has been challenged in recent decades, perhaps none is more acutely resented by the upper castes than the system of reservation in education and public employment. How far reservation policies have actually reduced education and employment opportunities for the upper castes is not clear -the reservation norms are far from being fully implemented, and they apply mainly in the public sector. What is not in doubt is that these policies have generated a common perception, among the upper castes, that 'their' jobs and degrees are being snatched by the scheduled castes, scheduled tribes and other backward classes (OBCs). 13 As it happens, the revival of the BJP began soon after the V.P. Singh government committed itself to the implementation of the Mandal Commission report on reservation for OBCs, in 1990. This threatened not only to split Hindu society (the upper castes were enraged), but also to alienate OBCs -about forty per cent of India's population -from the BJP, opposed as it was to the Mandal Commission recommendations. L.K. Advani's Rath Yatra (chariot journey) to Ayodhya, and the events that followed (including the demolition of the Babri Masjid (mosque) on 6 December 1992), helped to avert this threat of 'casteism' and re-unite Hindus on an anti-Muslim platform, under the leadership of the BJP -and of the upper castes. 
This is a striking example of Hindutva enabling the upper castes to counter a threat to their privileges and reassert their control over Hindu society. That, indeed, seems to be one of the main functions of the Hindutva movement today. The potential adversaries of this movement are not just Muslims but also Christians, Dalits, Adivasis, communists, secularists, rationalists, feminists - in short, anyone who stands or might stand in the way of the restoration of the Brahminical social order. Though it is often called a majoritarian movement, Hindutva is perhaps better described as a movement of the oppressive minority.

One possible objection to this interpretation of the Hindutva movement (or rather, of its rapid growth in recent times) is that Dalits are supporting it in large numbers. This objection, however, is easy to counter. First, it is doubtful that many Dalits really support the RSS or Hindutva ideology. Many did vote for the BJP in recent elections, but that is not the same thing as supporting Hindutva - there are many possible reasons for voting for the BJP. Second, some aspects of the Hindutva movement may appeal to Dalits even if they do not subscribe to the Hindutva ideology. For instance, the RSS is known for its vast network of schools, and other kinds of social work, often focused on underprivileged groups. Third, the RSS has gone out of its way to win support among Dalits, not only through social work but also through propaganda, starting with the co-option of Dr. Ambedkar. Objectively speaking, there is no possible meeting ground between Hindutva and Dr. Ambedkar. Yet the RSS routinely claims him in one way or another. Finally, it is arguable that even if Hindutva does not stand for the abolition of caste, its view - and practice - of caste is less oppressive than the caste system as it exists today. Some Dalits may feel that, all said and done, they are treated better in the RSS than in society at large. As one RSS sympathiser puts it: 'Hindutva and the promise of a common Hindu identity always appealed to a large Dalit and OBC castes [sic] as it promises to liberate them from the narrow identity of a weaker caste, and induct them into a powerful Hindu community' (Singh 2019).

As mentioned earlier, the rise of Hindu nationalism should not be confused with the recent electoral success of the BJP. Nevertheless, the sweeping victory of the BJP in the 2019 parliamentary elections is also a big victory for the RSS. Most of the top posts in government (prime minister, president, vice-president, speaker of the Lok Sabha, key ministries, many governors, and so on) are now occupied by members or former members of the RSS, firmly committed to the ideology of Hindu nationalism. The quiet revolt of the upper castes against democracy is now taking the form of a more direct attack on democratic institutions, starting with the freedom of expression and dissent. The retreat of democracy and the persistence of caste are in danger of feeding on each other.

10. In fact, the state government's recent decision to add eggs to school meals is the subject of a major political battle in Chhattisgarh. BJP legislators, egged on by 'communities such as Kabir Panthi, Radha Soami, Gayatri Parivar, Jains and others,' are opposing the move in the State Assembly (India Today, 2019).
11. Ambedkar (1945), p. 208.
12. See Drèze, Lanjouw and Sharma (1998) on this literature, and also for a more detailed account of caste relations in Palanpur, including the relative decline of the Thakurs.
13. This perception is well captured in a 1990 cartoon, mentioned by K. Balagopal (1990), where SC, ST and OBC students are standing on a ship and "grinning cruelly at the forward caste students who are sinking all round with their degree certificates held high". As Balagopal observes, 'it is difficult to imagine a more atrocious caricature of reality, which is almost exactly the opposite' (p. 2231).