Identifying mutation hotspots reveals pathogenetic mechanisms of KCNQ2 epileptic encephalopathy

Kv7 channels are enriched at the axonal plasma membrane, where their voltage-dependent potassium currents suppress neuronal excitability. Mutations in Kv7.2 and Kv7.3 subunits cause epileptic encephalopathy (EE), yet the underlying pathogenetic mechanism is unclear. Here, we used novel statistical algorithms and structural modeling to identify EE mutation hotspots in key functional domains of Kv7.2, including the voltage-sensing S4, the pore loop and S6 in the pore domain, and the intracellular calmodulin-binding helix B and helix B-C linker. Characterization of selected EE mutations from these hotspots revealed that L203P at S4 induces a large depolarizing shift in the voltage dependence of Kv7.2 channels and L268F at the pore decreases their current densities. While L268F severely reduces expression of heteromeric channels in hippocampal neurons without affecting internalization, the K552T and R553L mutations at distal helix B decrease calmodulin binding and axonal enrichment. Importantly, the L268F, K552T, and R553L mutations disrupt current potentiation upon increasing phosphatidylinositol 4,5-bisphosphate (PIP2), and our molecular dynamics simulations suggest PIP2 interaction with these residues. Together, these findings demonstrate that each EE variant causes a unique combination of defects in Kv7 channel function and neuronal expression, and suggest a critical need for both prediction algorithms and experimental interrogation to understand the pathophysiology of Kv7-associated EE.

MHF algorithm identifies EE mutation clusters in Kv7.2. These variants include 10 submicroscopic and partial gene deletions, 17 splice-site mutations, 10 nonsense mutations, 25 frameshift mutations, 2 non-initiation mutations, 126 missense mutations that lead to single amino acid substitutions, and 4 mutations that result in single amino acid deletions (Fig. 1a).
These mutations were classified into three groups according to the severity of their clinical outcomes described in the RIKEE database (Fig. 1a, Supplementary Table S1). The "mild or BFNE" mutations lead to seizures but not developmental delay in patients. The "severe or EE" mutations cause neonatal encephalopathy, seizures, and developmental delays. The "uncertain severity" mutations are associated with both benign seizures and EE or have limited clinical information. In addition, 130 silent mutations (with highest allele frequencies from 0.0017% to 19%) and 25 relatively common nonpathogenic missense mutations of Kv7.2 (with allele frequency ≥ 0.01%) were identified from the Exome Aggregation Consortium (ExAC) database, which collected protein-coding genetic variation from 60,706 humans (http://exac.broadinstitute.org). In contrast to the evenly distributed silent mutations, pathogenic single amino acid mutations are concentrated at transmembrane segments S1 to S6, the pore loop, and intracellular helices A and B of Kv7.2 (Fig. 1b). To test if this trend was statistically significant, we developed a resampling algorithm titled Mutation Hotspot Finder (MHF). This algorithm was applied under the null hypothesis that pathogenic mutations are equally likely to be observed at every residue of a functional domain in the full-length Kv7.2 protein when there is no further association between the mutations and the domains. Because our MHF examines the association between pathogenic variants and functional domains, we used the 130 single amino acid mutations and excluded nonsense and frameshift mutations that truncate one or more functional domains of Kv7.2. This analysis revealed that epilepsy mutations are significantly clustered at the voltage-sensing S4, the pore loop, and S6 of Kv7.2 (p < 0.005), whereas silent and nonpathogenic mutations did not cluster at any of the functional domains (Fig. 1b, Supplementary Table S2).
Importantly, epilepsy mutations of Kv7.2 were significantly associated with the "severe or EE" group (p < 0.001) but not the "mild or BFNE" and "uncertain severity" groups (Fig. 1a-e, Supplementary Table S3). Our MHF analysis also revealed that helix B and the helix B-C linker carry significantly more pathogenic mutations (p < 0.01) than other domains within the Kv7.2 C-terminal tail, owing to the clustering of "severe or EE" mutations (p < 0.05) (Fig. 2a-d, Supplementary Tables S2-3). Since Kv7.2 binds to CaM through helices A and B (Fig. 2e) 28,32, we next tested if the clinical severity of epilepsy mutations is associated with the extent to which Kv7.2 variants bind to CaM. Both "mild or BFNE" and "severe or EE" mutations located at helices A and B decreased the CaM-binding energy of Kv7.2 (Fig. 2e,f). Furthermore, EE mutations occur at the positively charged residues in the distal portion of helix B and the helix B-C linker, away from the CaM contact site in the modeled structure.

(Figure legend fragment: mutated residues are highlighted with colored spheres on the C-alpha atoms of one subunit: mild or BFNE (blue), uncertain severity (purple), severe or EE (red). Where more than one mutation occurs at a single position, the residue is colored by the most severe phenotype. (e) Location of amino acids mutated in mild or BFNE (blue), uncertain severity (purple), or severe or EE (red) in S4, the pore loop and S6 of Kv7.2.)

Voltage-dependent activation of homomeric Kv7.2 channels is disrupted by selected EE mutations in epilepsy mutation hotspots. To test if EE variants within the mutation hotspots disrupt key functional protein domains of Kv7.2, we selected four EE mutations that have not been previously characterized: L203P at the voltage-sensing S4 23, L268F at the pore loop 26, and K552T and R553L at helix B 22,24 (Fig. 3a,b).
To determine their effects on voltage-dependent activation of homomeric Kv7.2 channels, we performed whole-cell patch clamp recording in Chinese hamster ovary (CHOhm1) cells, which display very low expression of endogenous K+ channels and a depolarized resting membrane potential of −10 mV 12,33. Application of depolarizing voltage steps from −100 to +20 mV in GFP-transfected CHOhm1 cells produces very little voltage-dependent current, which reverses around −26 mV 12,34. In contrast, the same voltage steps in cells transfected with GFP and Kv7.2 wild-type (WT) generated slowly activating, voltage-dependent outward K+ currents that reached peak current densities of 17.3 ± 1.1 pA/pF at +20 mV (Fig. 3d,e, Supplementary Fig. S1). The average V1/2 of WT channels (−26.8 ± 2.1 mV) was similar to the previously published value of −25 ± 1.9 mV 35. Consistent with increased outward K+ current, cells expressing Kv7.2 displayed hyperpolarized resting membrane potentials (−35.5 ± 1.1 mV) and reversal potentials (−38.8 ± 1.9 mV) (Supplementary Tables S4-5). Cells expressing GFP and Kv7.2-L203P produced K+ currents with a large depolarizing shift in their voltage dependence and V1/2 and increased activation time constants, decreasing peak current densities at voltage steps up to 0 mV. These cells also displayed depolarized resting membrane potentials (−26.8 ± 2.0 mV) and reversal potentials (−22.1 ± 1.2 mV) (Fig. 3d-f, Supplementary Fig. S1, Supplementary Tables S4-5). Surprisingly, their peak current density at +20 mV was larger (27.6 ± 1.88 pA/pF) than that of WT channels despite their slower activation kinetics (Fig. 3c-e, Supplementary Fig. S1). The L268F mutation in the pore loop decreased outward K+ currents through Kv7.2 channels but not their protein level (Fig. 3c-f, Supplementary Fig. S1). While the R553L mutation in distal helix B had no effect on Kv7.2 channels, the K552T mutation reduced both protein and current expression (Fig. 3c-f, Supplementary Fig.
S1). The L268F and K552T mutations did not alter the voltage dependence, activation kinetics, or reversal potential of Kv7.2 currents (Fig. 3g,h, Supplementary Tables S4-5).

PIP2-induced potentiation of Kv7.2 current is blocked by selected EE variants. PIP2 is a critical cofactor required for the opening of Kv7 channels 14,16,17,36 and is proposed to bind to the intracellular side of S4, the S2-S3 and S4-S5 linkers, and the intracellular region from pre-helix A to the helix B-C linker 11-14,28,36-38. Therefore, we next tested if the selected EE mutations alter gating modulation of Kv7 channels by PIP2. Previous studies have shown that the activation of Kv7 channels is far from saturated by the endogenous membrane level of PIP2 39 and that supplying exogenous PIP2 can enhance the single-channel open probability and whole-cell current densities of homomeric Kv7.2 channels 12,14,37,40. Indeed, inclusion of diC8-PIP2 (100 μM) in the intracellular pipette solution increased K+ currents through Kv7.2 WT channels 2-fold and caused a modest left shift in their voltage dependence (Fig. 3d-f, Supplementary Figs. S1-2), as previously shown 12,14,37. Surprisingly, all selected EE mutations abolished the diC8-PIP2-induced potentiation of Kv7.2 channels and the hyperpolarizing shift in their voltage dependence, resulting in a significant reduction in their current densities compared to WT channels in the presence of diC8-PIP2 (Fig. 3d-f, Supplementary Figs. S1-2, Supplementary Table S5). To increase cellular PIP2 levels, we transfected phosphatidylinositol-4-phosphate 5-kinase (PIP5K), which catalyzes the formation of PIP2 via phosphorylation of phosphatidylinositol-4-phosphate 41. Consistent with previous reports 11,40,42, co-expression of PIP5K increased K+ currents through Kv7.2 WT channels with a hyperpolarizing shift in their voltage dependence. Consistent with the recordings with diC8-PIP2 inclusion (Fig.
3), this effect was absent in Kv7.2 channels containing the L268F, K552T, and R553L variants (Supplementary Fig. S3), indicating that these mutations abolished current potentiation of Kv7.2 channels upon increasing cellular PIP2 levels.

Modeled Kv7.2 structure and molecular dynamics simulation suggest that selected EE mutations reside in PIP2-binding regions of Kv7.2. To investigate if the selected EE mutations are located in PIP2-binding regions, we compared our modeled Kv7.2 structure bound to CaM with the published structure of the TRPV1 channel embedded in lipid nanodiscs with phosphatidylinositol bound (PDB: 5IRZ) (Fig. 4a) 43. In the modeled Kv7.2 structure, the voltage sensor (S1-S4) and the pore domain of Kv7.2 form the hydrophobic cavity where L203 and L268 are located (Fig. 4a). Similar to the binding of phosphatidylinositol to the TRPV1 channel (Fig. 4a) 43, the fatty acid tails of amphiphilic PIP2 are most likely embedded in this hydrophobic cavity of Kv7.2 where the L203P and L268F mutations reside. Furthermore, the bottom of the voltage sensor (S1-S4) together with pre-helix A, helix B, and the helix B-C linker of Kv7.2 forms a highly basic environment favorable for binding the phosphate headgroup of PIP2 (Fig. 4a), consistent with previous studies of Kv7.1 28.

(Figure legend fragment: ... proteins. For clarity, cropped gel images are shown. Full-length gels can be found in Supplementary Fig. S7a. (d) Representative recordings after subtraction of leak currents. Leak current was defined as the non-voltage-dependent current from GFP-transfected cells. (e) Average peak current densities at all voltage steps. *p < 0.05, ***p < 0.005 based on one-way ANOVA with Fisher's test. (f) Average peak current densities at −20 mV (left) and +20 mV (right). p values are computed from one-way ANOVA with Tukey's test. (g) Normalized conductance (G/Gmax) at all voltage steps. (h) Activation time constant (τ) at +20 mV. The number of GFP-cotransfected cells recorded without diC8-PIP2: Kv7.2 WT (n = 12), L203P (n = 17), L268F (n = 17), K552T (n = 13), or R553L (n = 13). The number of GFP-cotransfected cells recorded with diC8-PIP2: Kv7.2 WT (n = 11), L203P (n = 14), L268F (n = 13), K552T (n = 11), or R553L (n = 11). Data shown represent the Ave ± SEM.)

(2020) 10:4756 | https://doi.org/10.1038/s41598-020-61697-6

To test if PIP2 interacts with K552 and R553 in distal helix B, we performed molecular dynamics (MD) simulations. We constructed a homology model of the CaM-bound, closed-state conformation of Kv7.2 using the structure of Kv7.1 (PDB: 5VMS) 28 as a template, and employed targeted MD to model the open-state conformation of Kv7.2 in explicit lipid bilayers containing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) and PIP2 lipids. To extensively sample lipid-protein interactions, we constructed two independent simulation systems, each containing seven PIP2 molecules randomly placed around Kv7.2 without CaM in both the outer and inner membrane leaflets (~2.2% PIP2) (Fig. 4b). Within the time frame of the simulations, PIP2 molecules diffused from the periphery of the Kv7.2 structure towards its central region (Fig. 4c). To examine PIP2 binding to helix B, we measured the distance between the center of mass of R553 from each monomer and that of the phosphate groups at positions 4 and 5 of PIP2 (Fig. 4d). Binding of PIP2 molecules to R553 was observed in 2 of 4 monomers in the first simulation and in 3 of 4 monomers in the second simulation (Fig. 4d). In both simulations, PIP2 molecules interacted with K552-R553-K554 within 100 ns and remained stably bound throughout the simulations (Fig. 4e, Supplementary Video 1, Supplementary Fig.
S4), consistent with previous in vitro biochemical studies and molecular docking simulations that demonstrated PIP2 binding to the corresponding residues in the C-terminal helix A-B fragments of Kv7.1 37. These findings suggest that the K552T and R553L mutations are located in a region of helix B of Kv7.2 that interacts with the phosphate head group of PIP2. To test if the selected EE mutations alter PIP2 affinity, we examined Kv7.2 current decay upon PIP2 depletion induced by activation of a voltage-sensitive phosphatase (VSP) 11,42,44. In CHOhm1 cells coexpressing Danio rerio VSP 11, 10-s depolarization steps at voltages from +40 mV decreased peak currents of Kv7.2 channels, reaching a maximal decay of 53.2 ± 4.0% at +100 mV (Supplementary Fig. S5). Current decay of Kv7.2-K552T was greater than that of WT at +40 mV but was comparable to that of WT from +60 to +100 mV (Supplementary Fig. S5), suggesting that the K552T mutation modestly decreased PIP2 affinity for Kv7.2. Interestingly, the same depolarization steps delayed current decay of Kv7.2 channels containing the L203P, L268F, and R553L mutations (Supplementary Fig. S5), indicating their reduced sensitivity to PIP2 depletion.

Selected EE variants decrease current expression of heteromeric Kv7 channels and their current potentiation by diC8-PIP2. Since KCNQ2-associated EE is an autosomal dominant epileptic syndrome, we repeated voltage-clamp recording in CHOhm1 cells transfected with plasmids for Kv7.3, wild-type Kv7.2, and mutant Kv7.2 at a 2:1:1 ratio as described 35 (Supplementary Table S5). The L268F variant also increased their activation time constant (Fig. 5e). None of the tested mutations affected total protein expression of Kv7.2 and Kv7.3 (Fig. 5f). When diC8-PIP2 was added to the intracellular pipette solution, all tested EE mutations significantly decreased current densities of heteromeric channels at +20 mV compared to WT without altering their voltage dependence (Fig.
5a-d, Supplementary Table S5), and their activation time constant was increased by the L203P and K552T mutations (Fig. 5e). Importantly, all tested EE variants abolished PIP2-induced current potentiation of heteromeric channels.

Selected EE variants variably decrease axonal surface expression of heteromeric Kv7 channels. The physiologically relevant current through Kv7 channels is controlled by both their function and their expression at the neuronal plasma membrane. Given that Kv7.2 interactions with CaM and Kv7.3 are critical for axonal surface expression of Kv7 channels 9,45, we next tested if the selected EE variants of Kv7.2 affect interaction with CaM and Kv7.3 and axonal targeting of Kv7 channels (Figs. 6-7, Supplementary Figs. S9-11). Coimmunoprecipitation assays in HEK293T cell lysates 12,45 revealed that the K552T and R553L mutations in helix B decreased Kv7.2 binding to YFP-tagged CaM, whereas the L203P mutation in S4 and the L268F mutation in the pore loop had no effect (Fig. 6a,b). None of the tested mutations affected Kv7.2 interaction with Kv7.3 (Fig. 6c,d). Total Kv7.2 expression was also reduced by the L203P and K552T variants in cells co-expressing CaM but not Kv7.3 (Fig. 6). To test if the selected EE mutations of Kv7.2 affect the surface density of Kv7 channels, we transfected dissociated rat hippocampal cultured neurons with Kv7.3 containing an extracellular HA epitope (HA-Kv7.3) and performed surface immunostaining of HA-Kv7.3 as described 9,12,45 (Fig. 7, Supplementary Fig. S11). In cultured neurons, transfection of HA-Kv7.3 alone yields negligible surface expression of HA-Kv7.3 9. However, co-transfection of Kv7.2 WT results in robust HA-Kv7.3 expression on the plasma membrane of the AIS and distal axons compared to the soma and dendrites (Fig. 7a,b) 9,12,45, resulting in a surface fluorescence "Axon/Dendrite" ratio of 3.9 ± 0.49 (Fig. 7c).
Although the L203P mutation in S4 did not affect surface and total expression of HA-Kv7.3/Kv7.2 channels (Fig. 7a-d), the L268F mutation in the pore loop abolished their preferential enrichment at the axonal surface by severely decreasing their axonal surface density (surface Axon/Dendrite ratio = 0.85 ± 0.10, Fig. 7a-c) and also reduced intracellular Kv7.2 expression in the axon (Fig. 7d). The K552T and R553L mutations in helix B significantly reduced surface expression of heteromeric channels in both distal axons and dendrites (Fig. 7f-i), resulting in surface Axon/Dendrite ratios similar to those of WT channels (Fig. 7h). Disruption of CaM binding to Kv7.2 has been shown to impair axonal enrichment of Kv7 channels by inhibiting their trafficking from the endoplasmic reticulum (ER) 45. The ability of the L268F mutation to impair axonal Kv7 surface expression without affecting Kv7.2 binding to CaM or Kv7.3 (Figs. 6, 7a-d) suggests a different mechanism. To test if the L268F mutation reduces axonal enrichment of Kv7 channels by increasing their endocytosis, we used dynamin inhibitory peptide (DIP, 50 μM), which blocks dynamin-dependent endocytosis in cultured hippocampal neurons 46. DIP treatment for 45 min induced a small increase in surface HA-Kv7.3/Kv7.2 WT and L268F mutant channels in the soma and dendrites but not in axons (Fig. 7e,j), indicating their basal endocytosis at the somatodendritic membrane. Although DIP treatment had no effect on K552T mutant channels, the same treatment modestly increased axonal surface expression of L268F mutant channels (Fig. 7e,j), although this increase did not reach the axonal level of WT channels (Fig. 7e), suggesting that increased endocytosis is not the main cause of the reduced axonal surface expression of L268F mutant channels.
Discussion

In this study, we investigated the pathogenetic mechanisms underlying de novo EE mutations of Kv7.2. Visual inspection of the Kv7.2 primary sequence has suggested enrichment of EE variants at S4, the pore domain from S5 to S6, and helices A and B 12,25,47. Clustering of epilepsy mutations in the ion transport domain of Kv7.2 has also been detected by identifying its variation-intolerant genic sub-regions 48. Our novel MHF statistical algorithm, interpreted in the context of the modeled Kv7.2 atomic structure (Figs. 1-2), supports these earlier observations. We discovered that "severe or EE" missense variants cluster at S4, the pore loop that contains the selectivity filter, S6, helix B, and the helix B-C linker of Kv7.2 (Fig. 1). A recent study by Goto et al. reported that EE missense variants cluster at the pore domain, S6, and pre-helix A of Kv7.2 49. The regional differences in mutation clusters between our study and Goto et al. could be attributed to the use of different algorithms and databases (ExAC and gnomAD) as sources of non-pathogenic mutations. Nonetheless, both studies identified the pore domain and S6 as hotspots of EE variants, supporting the functional importance of these regions. However, sequence variant interpretation from prediction algorithms should be used carefully. The presence of both gain-of-function and loss-of-function EE variants in S4 47,50-52 suggests that it is not straightforward to predict the genotype-phenotype correlation of EE. In addition, both EE and BFNE variants exist in each of the identified hotspots and even at the same codon 18,49, suggesting that different amino acid substitutions at the same codon can result in different clinical severities 49. Moreover, the in vivo impact of a mutation is difficult to predict in patients due to their variable exposure to genetic and environmental factors.
Thus, the use of multiple in silico tools and comprehensive experimental analyses of epilepsy variants is needed to understand their effects on Kv7 channels ex vivo and in vivo. Our functional characterization of de novo EE variants selected from the mutation hotspots revealed that each mutation impaired the function of its associated protein domain within Kv7.2. The L203P mutation in the main voltage sensor S4 induced a large depolarizing shift in voltage dependence and slowed the activation kinetics of homomeric Kv7.2 channels (Fig. 3) but had no effect on heteromeric channels (Fig. 5). In contrast, the L268F mutation in the pore loop decreased current densities of both homomeric and heteromeric channels without affecting their voltage dependence (Figs. 3, 5). The K552T and R553L mutations in the CaM-binding helix B decreased the interaction between CaM and Kv7.2 (Fig. 6), which has been shown to play critical roles in M-current expression and the inhibition of hippocampal neuronal excitability 53. Current suppression of homomeric channels is a common feature of EE variants of KCNQ2 54. Given the overlapping distribution of Kv7.2 and Kv7.3 throughout the hippocampus and cortex 4, the dominant-negative functional effects of the L268F and K552T variants on heteromeric channels (Figs. 3, 5) are expected to induce neuronal hyperexcitability and may underlie severe symptomatic EE with drug-resistant seizures, psychomotor delay, and profound intellectual disability 22,26. Interestingly, our modeled Kv7.2 structure revealed that the distal helix B and helix B-C linker come together with pre-helix A to form a positively charged surface close to the voltage sensor S1-S4 and the base of S6 (Fig. 2). Mutations of basic amino acid residues, including H328C, R325G, and R333W at pre-helix A and R560W at the helix B-C linker of Kv7.2, have been shown to impair the regulation of Kv7.2 currents by PIP2 11,12,14, which couples voltage sensor activation to the opening of the gate 28,36.
Our MD simulations revealed that K552 and R553 in distal helix B bind to the negatively charged head group of PIP2 (Fig. 4). Importantly, the K552T and R553L mutations impaired current enhancement of both homomeric and heteromeric channels upon acute or tonic increases in PIP2 (Figs. 3, 5, Supplementary Fig. S3), suggesting that these mutant channels cannot respond to changes in cellular PIP2. Since stable binding of CaM to Kv7.2 is crucial for PIP2 modulation of neuronal Kv7 channels 33, a decrease in CaM binding (Fig. 6) may also contribute to the loss of PIP2-induced current enhancement of K552T and R553L mutant channels (Figs. 3, 5). The impairment of PIP2-induced current enhancement of L268F mutant channels was unexpected (Figs. 3, 5-7) because it is unlikely for the hydrophobic L268 to bind the negatively charged head group of PIP2. A comparison between the modeled Kv7.2 structure and the TRPV1 structure (Fig. 4a) suggests that the amphiphilic chains of PIP2 may extend into the hydrophobic cavity created by the voltage sensors S1-S4 and the pore domain of Kv7.2. We speculate that the L268F mutation at this hydrophobic interface impairs Kv7.2 interaction with PIP2. Furthermore, the residue analogous to L268 in the bacterial KcsA structure can secure the proper opening size of the pore 55. Therefore, it is also possible that the L268F mutation disrupts PIP2-dependent coupling to pore opening 36. Several studies, including our own, have investigated PIP2 regulation of Kv7 function by including diC8-PIP2 in the intracellular pipette solution 12,14,33,37. However, caution must be exercised in interpreting these results. Potentiation of Kv7.2-L203P current by tonic elevation of cellular PIP2 upon PIP5K expression, but not by acute application of diC8-PIP2 (Fig. 3, Supplementary Fig. S3), suggests that diC8-PIP2 inclusion may not readily potentiate mutant channels that display very slow activation kinetics (Fig. 3).
Furthermore, the loss of diC8-PIP2-induced current potentiation could be due either to decreased PIP2 affinity or to a saturated level of interaction with PIP2 at low PIP2 concentrations. We found that the K552T mutation modestly weakens PIP2 affinity, whereas the other mutant channels were resistant to PIP2 depletion (Supplementary Fig. S5). Considering the multiple proposed PIP2-binding sites in Kv7.2, including the S2-S3 and S4-S5 linkers, pre-helix A, helix B, and the helix A-B and helix B-C linkers 11-15, the selected EE mutations may cause conformational changes that weaken or enhance PIP2 affinity at other regions within Kv7.2. As Suh and Hille (2008) pointed out 56, it is not straightforward to determine the PIP2 affinity of mutant channels by assessing their currents after manipulation of PIP2 levels. Nonetheless, the lack of current potentiation upon increasing cellular or exogenous PIP2 (Fig. 3, Supplementary Fig. S3) and the location of the EE-mutated residues in regions of Kv7.2 that bind the fatty acid tails or polar headgroups of PIP2 (Fig. 4) strongly suggest that there are multiple ways by which the selected EE variants may influence PIP2 interaction with Kv7 channels and reduce their currents. Our investigation of the effects of selected EE variants on neuronal expression of Kv7 channels revealed that the K552T and R553L mutations in helix B reduced enrichment of Kv7 channels at the axonal surface (Fig. 7), supporting previous observations that the degree of CaM interaction with Kv7.2 correlates with the overall amount of Kv7 channels at the axonal surface 12,45. Unexpectedly, the L268F mutation at the pore loop severely decreased both surface and intracellular expression of heteromeric channels in axons without affecting Kv7.2 binding to CaM or Kv7.3 (Figs. 6-7), demonstrating the importance of studying Kv7 expression in neurons. Decreased axonal expression of Kv7.2-L268F and the minor effects of endocytosis inhibition (Fig.
7) suggest that the severe reduction of L268F mutant channels at the axonal surface is caused by a CaM- and endocytosis-independent mechanism.

(Figure legend fragment: The number of transfected neurons analyzed in Fig. 7i: Kv7.2 WT (n = 17), K552T (n = 23), R553L (n = 14), UT (n = 13). (e,j) Background-subtracted mean intensities of surface HA fluorescence from transfected neurons treated with vehicle (CTL) control or dynamin inhibitory peptide (DIP). The number of transfected neurons analyzed in Fig. 7e: WT + CTL (n = 14), WT + DIP (n = 13), L268F + CTL (n = 8), L268F + DIP (n = 8). The number of transfected neurons analyzed in Fig. 7j: WT + CTL (n = 8), WT + DIP (n = 13), L268F + CTL (n = 6), L268F + DIP (n = 7). Data represent the Ave ± SEM (*p < 0.05, **p < 0.01, ***p < 0.005 against WT channels).)

Given that misfolded membrane proteins are retained in the ER for chaperone-assisted refolding 57, the L268F mutation may cause a folding defect that facilitates ER retention and disrupts forward trafficking of heteromeric channels to the axon. Taken together, we identified EE mutation hotspots in Kv7.2 and discovered that each variant selected from these hotspots impairs the function of its associated protein domain and displays a combination of defects in voltage- and PIP2-dependent activation and axonal expression of Kv7 channels (Fig. 8). Such combinations of defects may decrease Kv7 current and its ability to inhibit neuronal excitability in the neonatal brain 5,12, as conditional deletion of Kv7.2 during embryonic development results in hippocampal and cortical hyperexcitability and spontaneous seizures in mice 7. Continued optimization of prediction algorithms and experimental interrogation of the pathophysiology of Kv7-associated EE will aid the development of better therapeutic strategies for this disease.
Materials and Methods

The resampling statistical algorithm. A resampling algorithm titled Mutation Hotspot Finder (MHF) was developed to search for mutation clusters that localize to the functional domains of human Kv7.2 (GenBank: NP_742105.1). The complete MHF algorithm can be found in the GitHub repository (https://github.com/jerrycchen/MutationHotspotFinder). The functional domains were annotated based on multiple published sources 3,28,30,32 and the RIKEE database (www.rikee.org). Briefly, the MHF algorithm compares the observed number of mutations against the expected number of mutations, and computes the corresponding statistical significance through bootstrapping within each pre-specified protein functional domain. The following sections explain the MHF algorithm in detail. Let S denote the set of all observed single amino acid mutations for the whole sequence of protein X (e.g., the whole sequence of Kv7.2) or a subset of the sequence of protein X (e.g., the intracellular C-terminal tail of Kv7.2), and let D_j denote the number of observed mutations in functional domain j. Among the functional domains of full-length Kv7.2 (Supplementary Table S2), D_1 is the number of mutations in S1, D_2 is the number of mutations in S2, and so on. Among the 9 functional domains in the intracellular Kv7.2 C-terminal tail (Supplementary Table S2), D_1 is the number of mutations in pre-helix A, D_2 is the number of mutations in helix A, and so on. The MHF algorithm assumes that mutations are equally likely to be observed at each amino acid position within functional domains when there is no further association between the mutations and the domains. Because of this null hypothesis 58, the application of the MHF algorithm is restricted to single amino acid mutations, and it is not suitable for mutations outside of the coding sequence or for nonsense and frameshift mutations that delete one or more functional domains. Under this assumption, we can randomly draw samples (i.e., bootstrap), with size = |S|, from the sequence X to construct bootstrapped "mutation sets" in order to simulate the distribution of mutations.
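This resampling scheme can be sketched in a few lines of Python. This is a minimal illustration with hypothetical domain boundaries and function names of my own choosing, not the authors' released MHF implementation; the empirical p-value follows the right-tailed definition given in the text.

```python
import numpy as np

def mhf_bootstrap(protein_length, domains, observed_positions, n_iter=10_000, seed=0):
    """Bootstrap test for mutation clustering in pre-specified functional domains.

    domains: {name: (start, end)} residue ranges, 1-based inclusive (hypothetical here).
    observed_positions: residue position of each observed single amino acid mutation.
    Returns {name: (observed count, expected count, Bonferroni-adjusted p-value)}.
    """
    rng = np.random.default_rng(seed)
    obs = np.asarray(observed_positions)
    n_mut, n_dom = len(obs), len(domains)
    # Under the null hypothesis, each of the |S| mutations falls on every residue
    # with equal probability, so each bootstrapped "mutation set" is a uniform draw.
    boot = rng.integers(1, protein_length + 1, size=(n_iter, n_mut))
    results = {}
    for name, (start, end) in domains.items():
        d_obs = int(np.sum((obs >= start) & (obs <= end)))        # D_j
        d_boot = np.sum((boot >= start) & (boot <= end), axis=1)  # D~_j(k) for each k
        expected = float(d_boot.mean())                           # empirical expected count
        p_right = float(np.mean(d_boot > d_obs))                  # right-tailed empirical p
        results[name] = (d_obs, expected, min(1.0, p_right * n_dom))  # Bonferroni over J domains
    return results
```

As a toy check, 10 mutations confined to a hypothetical 10-residue domain of a 100-residue protein give an expected count near 1 and an adjusted p-value of effectively zero.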
For iteration k of bootstrapping, let S̃(k) denote the bootstrapped mutation set, where k ∈ {1, 2, …, K}. In the context of this paper, we ran 10,000 iterations of bootstrapping (i.e., K = 10,000). D̃_j(k) is defined as the number of mutations in S̃(k) that fall into functional domain j of protein sequence X. The empirical expected number of mutations within each domain j was constructed as the average over the bootstrap iterations, E[D_j] = (1/K) Σ_k D̃_j(k). The empirical p-values (P_j) were computed from a right-tailed test measuring the proportion of bootstrapped mutation sets that had more mutations than the observed mutation set S at each individual protein functional domain: P_j = (1/K) Σ_k I(D̃_j(k) > D_j), where I(·) is the indicator function. The computed p-values were adjusted for multiple comparisons (J tests) using Bonferroni's correction. Mutations were visualized and mapped onto the Kv7.2 and Kv7.3 primary structures with MutationMapper (http://www.cbioportal.org/mutation_mapper.jsp). Fisher's Exact Test was implemented using the standard fisher.test() function in R (https://stat.ethz.ch/R-manual/R-devel/library/stats/html/fisher.test.html).

Structure modeling and visualization. The S1-S6 sequence of Kv7.2 (R75-Q326) was threaded onto the cryo-EM structure of Xenopus laevis Kv7.1 bound to CaM (PDB: 5VMS) 28. The loops of Kv7.2 (E86-W91 and K255-T263) were rebuilt in FoldIt (https://fold.it/portal). The structure was relaxed in the Rosetta software (https://www.rosettacommons.org/software) using two rounds of rotamer sampling followed by side chain and backbone minimization, ending with minimization of all degrees of freedom while maintaining C4 symmetry. The lowest-scoring decoy with a root mean square deviation (RMSD) < 2.0 Å was chosen as the final model. The amino acid residues mutated in BFNE and EE are indicated in the Rosetta-based model.
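The fisher.test() call mentioned in the statistics paragraph above has a simple dependency-free analogue. The sketch below (function name is mine) computes a right-tailed Fisher's exact p-value for a 2x2 table directly from the hypergeometric distribution; note that R's fisher.test() defaults to a two-sided test, so this is an illustration of the idea rather than a drop-in replacement.

```python
from math import comb

def fisher_exact_right(a, b, c, d):
    """Right-tailed Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Sums hypergeometric probabilities over tables at least as extreme as
    the observed one (i.e., cell `a` equal to or larger than observed),
    with row and column totals held fixed.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    k_max = min(row1, col1)
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, k_max + 1)) / denom
```

For example, fisher_exact_right(3, 1, 1, 3) returns 17/70 ≈ 0.243, the probability of observing 3 or more counts in the upper-left cell given the fixed margins.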
To model the interaction between CaM and Kv7.2 helices A and B, the helix A sequence of Kv7.2 (E322-V367) was threaded onto the crystal structure of a chimeric Kv7.3 helix A - Kv7.2 helix B in complex with Ca2+-bound CaM (PDB: 5J03) 32. The structure was relaxed with Rosetta using two rounds of sequential rotamer, side chain, and backbone minimization, followed by rigid body minimization. Mutations were introduced into the model in Rosetta, followed by sequential rotamer, side chain, backbone, and rigid body minimization. The binding energy was calculated from 20 simulations. Structures were visualized using PyMOL 2.0 (Schrödinger, LLC).

DNA Constructs and mutagenesis. EYFP-hCaM was a gift from Dr. Emanuel Strehler (Addgene plasmid #47603). The plasmid pIRES-dsRed-PIPKIγ90 was a gift from Dr. Anastasios Tzingounis (University of Connecticut) and was previously described 42. Plasmids pcDNA3 with KCNQ3 cDNA (GenBank: NM004519) encoding Kv7.3 (GenBank: NP_004510.1), HA-Kv7.3, and KCNQ2 cDNA (GenBank: Y15065.1) encoding Kv7.2 (GenBank: CAA75348.1) have been previously described 9,12,45. Compared to the reference sequence of Kv7.2 (GenBank: NP_742105.1), this shorter isoform lacks 2 exons, which do not harbor pathogenic variants to date. However, the amino acid numbering in the manuscript conforms to the reference sequence of Kv7.2 for clarity. Epileptic encephalopathy mutations (L203P, L268F, K552T, R553L) were generated using the QuikChange II XL Site-Directed Mutagenesis Kit (Agilent).

Electrophysiology. Whole-cell patch clamp recordings in Chinese hamster ovary (CHO hm1) cells were performed as described 12. To express homomeric Kv7.2 channels, cells were transfected with pEGFPN1 (0.2 μg) and pcDNA3-Kv7.2.
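Voltage dependence of activation in such recordings is commonly summarized by the Boltzmann parameters V1/2 and slope factor k. Below is a minimal, illustrative fitting sketch that uses a logit linearization of the Boltzmann relation (an assumed shortcut for this sketch, not the authors' Origin-based fitting workflow).

```python
import math

def boltzmann(v, v_half, k):
    """Normalized conductance G/Gmax = 1 / (1 + exp((V1/2 - V)/k))."""
    return 1.0 / (1.0 + math.exp((v_half - v) / k))

def fit_boltzmann(voltages, g_norm):
    """Estimate V1/2 and k by linearizing the Boltzmann relation:
    logit(G/Gmax) = (V - V1/2)/k, then ordinary least squares on (V, logit)."""
    pts = [(v, math.log(g / (1.0 - g)))
           for v, g in zip(voltages, g_norm)
           if 0.0 < g < 1.0]  # logit is undefined at exactly 0 or 1
    n = len(pts)
    sx = sum(v for v, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(v * v for v, _ in pts)
    sxy = sum(v * y for v, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # = 1/k
    intercept = (sy - slope * sx) / n                  # = -V1/2 / k
    k = 1.0 / slope
    return -intercept * k, k
```

On noiseless Boltzmann data the linearization recovers V1/2 and k exactly; with noisy recordings, a nonlinear least-squares fit (as in Origin) is the more robust choice.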
Leak-subtracted current densities (pA/pF), normalized conductance (G/Gmax), and channel biophysical properties were computed as described 12,35, with the exception that V1/2 and the slope factor k were calculated as described 35,59 by fitting the plotted G/Gmax points with a Boltzmann equation: G/Gmax = 1/{1 + exp[(V1/2 - V)/k]}. To examine the decline of Kv7.2 current upon activation of Dr-VSP, CHO hm1 cells were transfected with pDrVSP-IRES2-EGFP (0.5 μg) and pcDNA3-Kv7.2 WT or mutant (0.5 μg). The pDrVSP-IRES2-EGFP plasmid was a gift from Yasushi Okamura (Addgene plasmid #80333). Voltage-clamp recording of Kv7.2 current upon depolarization-induced Dr-VSP activation was performed as described 60 with an external solution containing 144 mM NaCl, 5 mM KCl, 2 mM CaCl2, 0.5 mM MgCl2, 10 mM glucose, and 10 mM HEPES (pH 7.4). Patch pipettes (3-4 MΩ) were filled with an intracellular solution containing 135 mM potassium aspartate, 2 mM MgCl2, 1 mM EGTA, 0.1 mM CaCl2, 4 mM ATP, 0.1 mM GTP, and 10 mM HEPES (pH 7.2). Cells were held at -70 mV, and 10-s step depolarizations were applied in 20-mV increments from -20 to +100 mV with 2-min inter-step intervals to allow PIP2 regeneration. The extent of Kv7.2 current decay upon Dr-VSP activation during the 10-s depolarization was measured as the ratio of the current at 10 s over the peak current at each voltage step. Scientific Reports (2020) 10:4756 | https://doi.org/10.1038/s41598-020-61697-6

Molecular dynamics simulation. For modeling of the open and closed states of Kv7.2, the closed-state conformation of KCNQ2 in its calmodulin-bound form was modeled based on the recent cryo-EM structure of Kv7.1 (PDB: 5VMS) 28. Multiple sequence alignment of the template and KCNQ2 sequences was performed using the T-Coffee web server (https://www.ebi.ac.uk/Tools/msa/tcoffee/). After the alignment, the homology model of the closed-state conformation was built with MODELLER 61.
The stability of the closed-state conformation of Kv7.2 was tested by performing all-atom molecular dynamics (MD) simulations in an explicit lipid bilayer. To model the open-state conformation of Kv7.2, we performed non-equilibrium MD simulations. Starting from our stable closed-state conformation of Kv7.2, we performed 20 ns of targeted MD (TMD) 62 simulations in an explicit lipid bilayer. TMD drives conformational changes by gradually minimizing the RMSD between the S4-S5 and S6 helices of the closed-state conformation and those of the target structure, the Kv1.2/Kv2.1 chimera in the open conformation (PDB: 2R9R) 63. Because the major structural changes occur in the pore region of the channel, we applied a restraint (force constant = 250 kcal/mol/Å) to the S4-S5 and S6 helices of each monomer to drive them toward the target state, defined by the highly homologous Kv1.2/Kv2.1 channel in the open-state conformation (PDB: 2R9R) 63. The success of TMD was gauged by measuring the backbone RMSD of the S4-S5 and S6 helices with respect to the target (Supplementary Fig. S6). Upon completion of TMD, all structural restraints were released, and the stability of the obtained open-state conformation of Kv7.2 was tested by performing MD simulations in an explicit lipid bilayer (Supplementary Fig. S6). All MD simulations were performed with NAMD 2.12 65 using the CHARMM36m force field for lipids/proteins 66 and a timestep of 2 fs. Long-range electrostatic interactions were evaluated with particle mesh Ewald (PME) 67, and periodic boundary conditions were used throughout the simulations. Non-bonded forces were calculated with a cutoff of 12 Å and a switching distance of 10 Å. During the simulations, temperature (T = 310 K) and pressure (P = 1 atm) (NPT ensemble) were maintained by the Nosé-Hoover Langevin piston method 68. During pressure control, the simulation box was allowed to fluctuate in all dimensions with a constant ratio in the x-y (lipid bilayer) plane.
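The backbone-RMSD readout used to gauge TMD progress is a standard superposition calculation. The sketch below implements the Kabsch algorithm as an illustration (a hypothetical stand-alone helper; the authors presumably used the NAMD/VMD tooling rather than this code).

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Backbone RMSD after optimal superposition (Kabsch algorithm).

    P, Q: (N, 3) arrays of matched atom coordinates, e.g. the S4-S5 and
    S6 backbone atoms of the moving structure and the target structure.
    """
    P = np.asarray(P, float)
    Q = np.asarray(Q, float)
    P = P - P.mean(axis=0)          # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                     # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against improper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))
```

A TMD run that reaches the target conformation drives this value toward zero for the restrained helices; monitoring it frame by frame reproduces the convergence check shown in Supplementary Fig. S6.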
Immunoprecipitation. HEK293T cells were plated on 100-mm cell culture dishes (BD Biosciences, 2 × 10^6 cells per dish) and maintained in Minimal Essential Medium containing 10% fetal bovine serum, 2 mM glutamine, 100 U/mL penicillin, and 100 U/mL streptomycin at 37 °C and 5% CO2. At 24 hr post plating, the cells were transfected with plasmids (1.6 μg total) containing Kv7.2 and EYFP-hCaM (1:1 ratio) using FuGENE6 transfection reagent (Promega). For coimmunoprecipitation studies of Kv7.2 and Kv7.3, the cells were transfected with Kv7.2 and Kv7.3 containing an extracellular hemagglutinin epitope (HA-Kv7.3) (1:1 ratio). At 48 hr post transfection, the cells were washed with ice-cold PBS and lysed in ice-cold immunoprecipitation (IP) buffer containing (in mM): 20 Tris-HCl, 100 NaCl, 2 EDTA, 5 EGTA, and 1% Triton X-100 (pH 7.4), supplemented with Halt protease inhibitors (Thermo Fisher Scientific). Lysates containing equal amounts of protein were first precleared with Protein A/G agarose beads (100 μL, Santa Cruz) for 1 hr at 4 °C and then incubated overnight at 4 °C with Protein A/G agarose beads (100 μL) and rabbit anti-Kv7.2 antibody (2 μg). This amount of anti-Kv7.2 antibody allowed us to immunoprecipitate equal amounts of Kv7.2 protein and to analyze the effects of the mutations on the amounts of co-immunoprecipitated EYFP-hCaM and HA-Kv7.3. After washing with IP buffer, the immunoprecipitates were eluted with SDS sample buffer by incubation at 75 °C for 10-15 min and analyzed by western blotting with mouse anti-GFP (1:500 dilution), mouse anti-Kv7.2 (1:200 dilution), mouse anti-HA (1:500 dilution), and rabbit anti-GAPDH (1:1000 dilution) antibodies. Antibodies used for coimmunoprecipitation and immunoblotting included anti-Kv7.2 (NeuroMab, N26A/23), rabbit anti-Kv7.2 (Alomone, APC-050), anti-GFP, and anti-HA.

Immunocytochemistry.
All procedures involving animals were reviewed and approved by the Institutional Animal Care and Use Committee at the University of Illinois Urbana-Champaign and conducted in accordance with the guidelines of the U.S. National Institutes of Health (NIH). Primary dissociated hippocampal neurons prepared from 18-day-old embryonic rats were plated on 12-mm glass coverslips (Warner Instruments, 10^5 cells per coverslip) coated with poly-L-lysine (0.1 mg/mL). These neurons were maintained in Neurobasal medium supplemented with B27 extract, 200 mM L-glutamine, and 100 U/mL penicillin and streptomycin in a cell culture incubator (37 °C, 5% CO2). At 5 days in vitro (DIV), neurons were transfected with plasmids (0.8 μg total) containing Kv7.3 with an extracellular hemagglutinin epitope (HA-Kv7.3) and wild-type or mutant Kv7.2 using Lipofectamine LTX as described 12,45. Fluorescence and phase-contrast images of transfected neurons were viewed using a Zeiss Axio Observer inverted microscope. High-resolution grayscale images of healthy transfected neurons were acquired using a 20X objective with a Zeiss AxioCam 702 mono camera and ZEN Blue 2.6 software and saved as 16-bit CZI and TIFF files. To compare the fluorescence intensities of neurons transfected with different constructs, images were acquired using the same exposure time within each experiment. Image analyses were performed on healthy transfected neurons using ImageJ software as described 12,45, excluding transfected neurons with broken neurites or somata as well as regions with fasciculation or overlapping processes. The axon was identified as the process labeled with the AIS marker 14D4, whereas dendrites were identified as the processes lacking 14D4 labeling in transfected neurons.
ImageJ software was used to trace all major primary dendrites, the AIS (defined as the first 0-30 μm segment of the axon), and the distal axon (defined as the segment between 50 and 80 μm from the beginning of the axon) as 1-pixel-wide line segments and to obtain their mean fluorescence intensities. The perimeter of the neuronal soma was also traced to obtain background-subtracted mean fluorescence intensities of the soma.

Statistical analyses. All data are reported as mean ± SEM. Using Origin 9.1 (OriginLab), Student's t-test and one-way ANOVA with post hoc Tukey and Fisher multiple comparison tests were performed to identify statistically significant differences at an a priori significance level of p < 0.05. The number of individual transfected cells analyzed by immunostaining or electrophysiology is reported as the sample size n.

Data availability. The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
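The segment measurements described in this section reduce to background-subtracted means over traced pixels. Below is an illustrative sketch; the function names and the AIS/dendrite ratio readout are hypothetical simplifications of the ImageJ analysis, not the authors' macro.

```python
def mean_intensity(image, coords, background):
    """Background-subtracted mean fluorescence along a traced segment.

    image: 2-D array-like of pixel values (rows of columns).
    coords: (row, col) pixel indices from a 1-pixel-wide trace.
    background: mean off-cell intensity to subtract.
    """
    vals = [float(image[r][c]) for r, c in coords]
    return sum(vals) / len(vals) - background

def ais_dendrite_ratio(image, ais_px, dendrite_segments, background):
    """Ratio of AIS intensity to the average dendrite intensity, a common
    readout of axonal enrichment (hypothetical helper for illustration)."""
    ais = mean_intensity(image, ais_px, background)
    dend = [mean_intensity(image, seg, background) for seg in dendrite_segments]
    return ais / (sum(dend) / len(dend))
```

A ratio above 1 indicates axonal enrichment of the channel signal; mutants with impaired axonal targeting, such as the helix B variants discussed in this study, would be expected to shift this readout toward 1.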
Effects of rehabilitation programs on heart rate variability after stroke: a systematic review

ABSTRACT

Background: It has been shown that the autonomic nervous system can be modulated by physical exercise after stroke, but there is a lack of evidence showing that rehabilitation can be effective in increasing heart rate variability (HRV). Objective: To investigate the effectiveness and safety of rehabilitation programs in modulating HRV after stroke. Methods: The search strategy was based on PICOT (patients: stroke; interventions: rehabilitation; comparisons: any control group; outcomes: HRV; time: acute, subacute, and chronic phases of stroke). We searched the MEDLINE, CENTRAL, CINAHL, LILACS, and SciELO databases without language restrictions and included randomized controlled trials (RCTs), quasi-randomized controlled trials (quasi-RCTs), and non-randomized controlled trials (non-RCTs). Two authors independently assessed the risk of bias, and we used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology to rate the certainty of the evidence for each included study. Results: Four studies (two RCTs with low certainty of evidence and two non-RCTs with very low certainty of evidence) were included. Three of them showed significant cardiac autonomic modulation during and after stroke rehabilitation: the LF/HF (low frequency/high frequency) ratio was higher during early mobilization; better cardiac autonomic balance was observed after body-mind interaction in stroke patients; and resting SDNN (standard deviation of normal R-R intervals) was significantly lower among stroke patients, indicating less adaptive cardiac autonomic control during different activities. Conclusions: There are no definitive conclusions about the main cardiac autonomic repercussions observed in post-stroke patients undergoing rehabilitation, although all interventions are safe for patients after stroke.
INTRODUCTION

Stroke is one of the main causes of morbidity and mortality in industrialized countries and the leading cause of chronic disability in adults [1-3]. After stroke, more than 70% of individuals present alterations in motor, sensory, or cognitive systems, which can be mild and transient or severe and disabling; these alterations can be related to autonomic nervous system impairments, which can lead to changes in heart rate variability (HRV) [4-6]. HRV is the result of adaptive changes in heart rate caused by sympathetic and parasympathetic activity in response to external or internal stimuli 7. Based on this concept, HRV is defined as the changes in heart rate (HR) that occur after a stimulus, and it is a predictor of processes related to the autonomic nervous system. Studies have shown that a low HRV response is related to a high risk of stroke 8,9, greater stroke severity 10, mortality after stroke 4,5,11, low vagal modulation 12, and a poor prognosis after stroke 13. There is evidence that physical inactivity reduces cardiac autonomic modulation after stroke 14. Therefore, autonomic nervous system activity may be enhanced through physical exercise and rehabilitation programs after stroke 15. Lower HRV is a predictor of morbidity and mortality, and cardiac changes increase the risk of death after stroke 16 and may be related to unfavorable outcomes 17. Additional studies need to be conducted to elucidate the cardiac autonomic modulating mechanisms and clinical repercussions of HRV after stroke rehabilitation. Thus, it may become possible to develop specific and effective rehabilitation programs that allow greater cardiovascular stability, functional gains, and quality of life in individuals after stroke. Due to the lack of evidence that rehabilitation can be effective in modulating the autonomic nervous system after stroke, there is no consensus on this effect, and there are no systematic reviews in the literature on this topic.
Therefore, the aim of this review was to evaluate the effectiveness and safety of rehabilitation programs in modulating HRV after stroke.

METHODS

We adhered to the methods described in the Cochrane Handbook for Intervention Reviews 18 and to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guidelines 19. This review was registered in the International Prospective Register of Systematic Reviews (PROSPERO - CRD42020156527).

Eligibility criteria

The eligibility criteria were as follows:
1. Study designs: RCTs, quasi-RCTs, and non-RCTs.
2. Participants: adults over 18 years of age of either sex, with any duration of illness, severity of initial impairment, and type of stroke (ischemic or intracranial hemorrhage) diagnosed by clinical examination or radiographically by computed tomography (CT) or magnetic resonance imaging (MRI).
3. Interventions: any rehabilitation protocol for stroke recovery (early mobilization, physical exercises).
4. Comparators: any conventional stroke rehabilitation program.
5. Outcomes: heart rate variability.

Data sources and search strategy

The search strategy was based on PICOT (patients: stroke; intervention: rehabilitation; comparison: any control group; outcome: heart rate variability; time: acute, subacute, and chronic phases of stroke). We searched MEDLINE (OvidSP), the Cochrane Central Register of Controlled Trials (CENTRAL), CINAHL, the Latin American and Caribbean Center on Health Sciences Information (LILACS), and SciELO databases without language restrictions. The date of the most recent search was July 10, 2020. All searches were conducted with the assistance of a trained medical librarian. We also searched the reference lists of relevant articles and conference proceedings and contacted the authors of the included trials. The search terms included "heart rate variability" (or MeSH terms), "stroke" (or MeSH terms), and "rehabilitation" (or MeSH terms).
Other resources searched

In an effort to identify additional published, unpublished, and ongoing trials, we performed the following steps:
• screened the reference lists of the identified studies;
• contacted the study authors and experts; and
• used the Science Citation Index Cited Reference Search to track important articles.

Selection of the studies

Two pairs of reviewers independently screened all titles and abstracts identified in the literature search, obtained full-text articles of all potentially eligible studies, and evaluated the articles for eligibility. The reviewers resolved disagreements by discussion or, if necessary, with third-party adjudication. We also considered studies reported only as conference abstracts. We used the START program (State of the Art through Systematic Review), developed by the Software Engineering Research Laboratory of the Federal University of São Carlos, for data organization.

Data extraction

The reviewers underwent calibration exercises and worked in pairs to independently extract data from the included studies according to the recommendations of the Cochrane Handbook for Systematic Reviews of Interventions 20. Disagreements were resolved by discussion or, if necessary, with third-party adjudication. Reviewers collected the following data using a pretested data extraction form: study design, participants, interventions, comparators, assessed outcomes, and relevant statistical data.

Risk of bias assessment

Two authors of this review independently assessed the risk of bias for each study using the criteria outlined in the Cochrane Handbook for Systematic Reviews of Interventions 20. Disagreements were resolved by discussion or by consultation with another review author. We assessed the risk of bias according to the following domains.
We graded the risk of bias for each domain as high, low, or unclear and provided information from the study reports, together with justification for our judgments, in the "Risk of bias" tables. For incomplete outcome data in individual studies, we stipulated a low risk of bias for a loss to follow-up of less than 10% and a difference of less than 5% in missing data between the intervention/exposure and control groups.

Certainty of evidence

We summarized the evidence and assessed its certainty separately for bodies of evidence from RCTs and non-RCT studies. We used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology to rate the certainty of the evidence for each outcome as high, moderate, low, or very low. In the GRADE approach, RCTs begin with high certainty, and non-RCT studies begin with moderate certainty. Detailed GRADE guidelines were used to assess the overall risk of bias, imprecision, inconsistency, indirectness, and publication bias, and to summarize the results in an evidence profile (Table 1) 21. We planned to assess publication bias through visual inspection of funnel plots for each outcome for which we identified 10 or more eligible studies; however, we were not able to do so because there was an insufficient number of studies.

Data synthesis and statistical analysis

It was not possible to perform a meta-analysis due to the non-homogeneity of the interventions. The effects of the interventions, risk of bias, and quality of evidence for each study are reported.

RESULTS

We identified a total of 88 studies through database searches (see Figure 1 for the search results). After screening the titles and then the abstracts, we obtained full-text articles for the 22 studies that were potentially eligible for inclusion in the review.
We excluded 18 studies because they were one of the following types of articles: case report, case series, self-controlled study, review, or a study that was not relevant. The remaining two RCTs 22,23 and two non-RCTs 15,24 were included in this review.

Characteristics of the participants and groups

All participants in the included studies were diagnosed with ischemic stroke. The total sample size was 172 individuals, and the average age was 65 years; they were divided into groups, with the size of each group ranging from seven to 36 individuals. In one study 15, there was no description of a difference between intervention and control groups, since all of the participants received interventions; the participants were instead divided according to stroke severity, as assessed by the National Institutes of Health Stroke Scale (NIHSS). The other three studies 22-24 divided the individuals into intervention and control groups. Beer et al. (2018) described the control group as healthy individuals. Two studies 22,24 included only individuals with one stroke, two evaluated patients within 1 to 10 days of an ischemic stroke 15,23, one study evaluated individuals in the post-acute phase of stroke 24, and another study evaluated individuals at 15 days after a stroke 22. The characteristics of the included studies are shown in Table 1. All studies evaluated individuals based on the analysis of linear heart rate variables, as shown in Table 2.

Evaluations and interventions

The interventions reported by the studies were early mobilization 15, low-intensity activity associated with meditation 23, cycle ergometer and cognitive activities 24, and a mobilization protocol with a cycle ergometer, with loads determined individually by exercise resistance tests (cycle ergometer, walking test, and going up and down stairs) 22. All individuals in the control groups performed activities such as conventional physical therapy. In the study by Nozoe et al.
(2018) 15, the variables lnLF, lnHF, and the LF/HF ratio were evaluated with a cardiac monitor, and the LabChart Pro HRV module (ADInstruments Pty Ltd, Castle Hill, Australia) was used for the analysis. In the intervention protocol, the participants performed early mobilization in the sitting position; the evaluation comprised 5 minutes in the supine position (rest), followed by 5 minutes in the sitting position. The patients were reevaluated three months after the stroke. In the study by Chen et al. (2019) 23, the variables SDNN, LF, HF, and the LF/HF ratio were evaluated during the execution of Chan-Chuang qigong, a traditional Chinese medicine therapy that promotes body-mind interaction and relaxation. The individuals performed the technique for 15 minutes each day for 10 days; the assessment took five minutes and was performed using a portable HRV analyzer (8Z11, Enjoy Research Inc., Taiwan), the Chinese version of the Short Form-12 (SF-12) to assess quality of life, and the Hospital Anxiety and Depression Scale (HADS) to assess negative emotions. In the study by Beer et al. (2018) 24, individuals underwent a protocol in which they were first evaluated at rest for 10 minutes and then during a handgrip activity that lasted two minutes, accompanied by controlled breathing (two minutes, six cycles per minute). Afterwards, they performed a cognitive activity (serial 3s subtraction) and, finally, mobilization with a cycle ergometer in combination with a cognitive exercise. Cognitive capacity was assessed using the Montreal Cognitive Assessment Scale (MoCA), and the Barthel index was used to assess functional capacity. The variables SDNN and RMSSD were measured with a Polar Advanced Heart Rate Monitor (RS800CX). All included studies evaluated linear heart rate variables; however, the studies were heterogeneous with respect to groups, interventions, and evaluations.
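The time-domain indices reported by these studies have simple definitions over a series of normal R-R intervals. A minimal sketch (the sample-standard-deviation convention for SDNN is an assumption of this sketch; devices may differ):

```python
import math

def sdnn(rr_ms):
    """Standard deviation of normal R-R intervals (SDNN), in ms.
    Uses the sample standard deviation (n - 1 denominator)."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / (n - 1))

def rmssd(rr_ms):
    """Root mean square of successive R-R interval differences (RMSSD),
    in ms; a time-domain index of parasympathetic (vagal) modulation."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

SDNN reflects overall variability, whereas RMSSD emphasizes beat-to-beat (vagally mediated) changes, which is why the two indices can diverge in the same recording.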
Only one of the four included studies did not show significant results for the variables evaluated. All studies excluded individuals who had heart disease.

Evaluation of the effectiveness and safety of the included studies

The evaluations of the effectiveness and safety of the included studies are displayed in Table 3. In the study by Chen et al. (2019) 23, the LF/HF ratio was higher in the intervention group after early mobilization, with respect to the physical component of the quality of life (QOL) scale (SF-12), than in the control group (P = 0.02). The authors did not report effect sizes or confidence intervals, and no adverse effects were observed after the intervention. The study by Beer et al. (2018) 24 showed less adaptive cardiac autonomic control during different activities. The values described for the groups were as follows: post-stroke RR = 728.7 ± 110.1 ms versus healthy individuals RR = 847.6 ± 120.6 ms (P = 0.002); post-stroke SDNN = 32.5 ± 26.9 ms versus healthy individuals SDNN = 48.7 ± 17.9 ms (P = 0.01). The authors did not report any adverse effects after the intervention. In the study by Katz-Leurer and Shochina (2007) 22, no significant interaction effects on HRV were observed between exercises during physical therapy. The values indicated for the variables were as follows: treatment group LF = 1248 ± 1684 Hz versus control group LF = 1238 ± 1728 Hz (P = 0.93); treatment group HF = 378 ± 638 Hz versus control group HF = 667 ± 150 Hz (P = 0.33). The authors did not report any adverse effects after the intervention.

Risk of bias interpretation

All included articles were analyzed for risk of bias, as shown in Table 4. Figure 2 shows a graphical analysis of the risk of bias.

DISCUSSION

This systematic review comprised four clinical trials that assessed HRV using different methodologies, describing sympathovagal activity after specific rehabilitation protocols in patients after ischemic stroke.
Of the four studies included, two 22,24 used the cycle ergometer in the main rehabilitation program. Only the study by Beer et al. (2018) 24 showed a significant reduction in the RR and SDNN variables among post-stroke individuals compared to healthy individuals at rest, which indicates a state of sympathetic hyperactivity in the subacute phase after stroke. In this study, patients did not show a normal increase in sympathetic activity in response to the test conditions, mainly due to a hypersympathetic state at rest. During the subacute phase, according to this study and others 25,26, there is apparently a significant physiological change in the ability of the autonomic nervous system to respond adequately to the demands imposed by rehabilitation practices, so only large demands yield the expected sympathetic responses 27. These results indicate a need for rehabilitation focused on improving autonomic cardiac control. In the study by Nozoe et al. (2018) 15, patients were classified as with or without neurological deterioration (ND) using the NIHSS score (a severity scale used in the acute phase of stroke). These individuals were evaluated during hospitalization and underwent an intervention involving early mobilization with posture changes. The LF/HF ratio showed a significant increase in the ND group (a higher NIHSS score) from before to after the intervention. Since the LF/HF ratio seems to reflect sympathetic performance, according to the authors, it is likely that an increase in sympathetic activity during mobilization is associated with neurological deterioration in acute stroke patients. Xiong et al. (2018) 28 reported that autonomic dysfunction is one of the predictors of worse functional outcomes in patients in the acute phase of stroke, which may confirm the possible occurrence of increased sympathetic performance in patients with a worse NIHSS classification. Chen et al.
(2019) 23 introduced a mind-body interactive exercise (Chan-Chuang qigong practice) as an intervention for hospitalized patients after stroke to increase cardiac parasympathetic tone, mainly because the technique has relaxing effects. They concluded that the LF/HF ratio, with respect to the physical component of the quality of life (QOL) scale (SF-12), was higher in the intervention group after mobilization than in the control group. Therefore, during the hospital stay, the sympathovagal balance influenced the physical aspect of the QOL of individuals with subacute stroke. Thus, improved HRV in stroke patients after a specific rehabilitation protocol can lead to the recovery of physical functions and improve their quality of life. In the study by Katz-Leurer and Shochina (2007) 22, an individualized training protocol was used, and no significant differences in HRV were found. Despite this result, a significant improvement was found in functional parameters of post-stroke individuals, such as climbing stairs, and physical training allowed patients to significantly increase their workload. As described by other authors, autonomic impairment after stroke leads to low aerobic capacity 27. Thus, the importance of early mobilization, rehabilitation, and physical-functional training in post-stroke patients is reiterated. The authors reported sympathetic-vagal alterations in post-stroke patients subjected to physical activities. Thus, from this systematic review, it can be stated that significant autonomic modulation occurs in these individuals. Despite the methodological divergence found in the articles, only one article, which assessed HRV in the frequency domain, reported no changes in HRV between the groups evaluated 22. In the study by Beer et al.
(2018) 24, variables in the time domain were included, whereas assessments in both domains (time and frequency) were included in the other studies, which demonstrated significant changes in linear HRV variables after stroke rehabilitation. Studies on HRV demonstrate the need for flexibility in autonomic activity for individuals to maintain a good quality of life, as impaired adaptation can cause autonomic dysfunction, cardiovascular deterioration, and increased morbidity and mortality rates in patients after stroke 28. The four articles selected for this review show the need for specific therapies, early mobilization, and physical activity protocols in the modulation of HRV. This conclusion points to the importance of maintaining muscle function, strength, and activity for cardiovascular benefits, which has been widely studied for methods including cardiac rehabilitation 28-30. This study has limitations, such as heterogeneity among the selected individuals and the analyzed outcomes; because only a few studies were selected, it was impossible to perform a meta-analysis. However, this is the first systematic review addressing this topic, with the possibility of elucidating the main autonomic repercussions observed in post-stroke patients undergoing rehabilitation procedures. In conclusion, the quality of the evidence from the selected clinical trials was either low or very low; therefore, there are no definitive conclusions about the main autonomic repercussions observed in post-stroke patients undergoing rehabilitation, although all interventions are safe for these patients. The applicability of these results may be compromised, since most of the results described in this review were obtained from clinical trials with methodological differences. This review highlights the need to conduct well-designed trials in this field. Future trials should be properly designed and should include standardized measures.
It is suggested that RCTs address a heterogeneous population and include measures in the time and frequency domains, in addition to a nonlinear analysis of HR, to establish parameters of sympathetic-vagal behavior during rehabilitation protocols after stroke.
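The time- and frequency-domain HRV measures discussed in this review can be sketched computationally. Below is a minimal illustration on a fully synthetic RR-interval series (not data from any of the reviewed trials); the band limits are the conventional LF 0.04-0.15 Hz and HF 0.15-0.40 Hz:

```python
import numpy as np
from scipy.signal import welch

def time_domain(rr_ms):
    """SDNN and RMSSD (ms) from a series of RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd

def lf_hf_ratio(rr_ms, fs=4.0):
    """LF/HF ratio from the Welch PSD of the evenly resampled tachogram.
    Conventional bands: LF 0.04-0.15 Hz, HF 0.15-0.40 Hz."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                  # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)     # uniform resampling grid
    rr_even = np.interp(grid, t, rr)            # evenly sampled RR series
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, grid.size))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(pxx[lf_band], f[lf_band])
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return lf / hf

# Synthetic tachogram: a 0.1 Hz (LF-band) oscillation on an 800 ms baseline,
# so LF power should dominate and the LF/HF ratio should exceed 1.
t_beat = np.arange(300) * 0.8
rr = 800 + 50 * np.sin(2 * np.pi * 0.1 * t_beat)
sdnn, rmssd = time_domain(rr)
ratio = lf_hf_ratio(rr)
```

Real tachograms additionally require artifact and ectopic-beat handling before such metrics are meaningful; this sketch only shows the shape of the computation.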
Isomerization of antimalarial drug WR99210 explains its inactivity in a commercial stock

WR99210, a former antimalarial drug candidate now widely used for the selection of Plasmodium transfectants, selectively targets the parasite dihydrofolate reductase-thymidylate synthase bifunctional enzyme (DHFR-TS) but not human DHFR, which is not fused with TS. Accordingly, WR99210 and plasmids expressing human dhfr have become valued tools for the genetic modification of parasites in the laboratory. Concerns over the ineffectiveness of WR99210 from some sources encouraged us to investigate the biological and chemical differences of supplies from two different companies (compounds 1 and 2). Compound 1 proved effective at low nanomolar concentrations against Plasmodium falciparum parasites, whereas compound 2 was ineffective even at micromolar concentrations. Intact and fragmented mass spectra indicated identical molecular formulae of the unprotonated (free base) structures of 1 and 2; however, the compounds displayed differences by thin layer chromatography, reverse phase high performance liquid chromatography, and ultraviolet-visible spectroscopy, indicating important isomeric differences. Structural evaluations by 1H, 13C, and 15N nuclear magnetic resonance spectroscopy confirmed 1 as WR99210 and 2 as an isomeric dihydrotriazine. Induced-fit computational docking models showed that 1 binds tightly and specifically in the P. falciparum DHFR active site, whereas 2 fits poorly to the active site in loose and varied orientations. Stocks and concentrates of WR99210 should be monitored for the presence of isomer 2, particularly when they are not supplied as the hydrochloride salt or are exposed to basic conditions that can promote isomerization. Absorption spectroscopy may serve for assays of the unrearranged and rearranged triazines.
INTRODUCTION

WR99210 [4,6-diamino-1,2-dihydro-2,2-dimethyl-1-(2,4,5-trichlorophenoxypropyloxy)-1,3,5-triazine] is a folate pathway antagonist with potent activity against Plasmodium malaria parasites (1). The action of WR99210, originally designated BRL 6231, in parasites is against the bifunctional dihydrofolate reductase-thymidylate synthase enzyme (DHFR-TS), where the compound binds to the DHFR active site and blocks the production of tetrahydrofolate in the folate pathway (2). In contrast, WR99210 interacts only weakly with human DHFR (hDHFR). Episomal expression of the hDHFR coding sequence in Plasmodium falciparum produces a 4-log increase in the WR99210 half-maximal effective concentration (EC50) against transformed parasites, providing a powerful selectable resistance marker for transfection studies (3). The exciting potential of WR99210 as an antimalarial drug candidate eventually waned after preclinical and clinical trials demonstrated adverse events, including severe gastrointestinal side effects in non-human primates and humans, even at low drug doses (4)(5)(6)(7). Pursuit of a pro-drug form of WR99210 (PS-15) was also limited by a regulatory restriction on the starting material, 2,4,5-trichlorophenol, whose production is associated with dioxin, among other toxic substances (8,9). However, alternative antifolate compounds that incorporate a flexible linker as seen in WR99210 are under evaluation, including candidate P218, which is in clinical trials with the Medicines for Malaria Venture (10)(11)(12)(13).

Genetic transfections and transformations of Plasmodium have become elemental tools of malaria research. Positive selection of hDHFR-transformed parasites is commonly achieved by WR99210 exposure in vitro and in vivo with P. falciparum as well as certain other Plasmodium species (14)(15)(16). Untransformed Plasmodium spp. are sensitive to nanomolar levels of WR99210, and spontaneous development of WR99210 resistance has not been reported from exposure to the compound (17).
The usefulness of the compound for the selection of hDHFR-transformed lines thus created an ongoing demand for WR99210 for genetic manipulations of the parasites. In recent transfection experiments, we observed that WR99210 from one source (stock JP, from Jacobus Pharmaceutical Company Inc., Princeton, NJ) was effective at nanomolar concentrations, but a stock of WR99210 from another source (stock SA, from Sigma-Aldrich Corp., St. Louis, USA) did not kill untransformed P. falciparum parasites, even at micromolar concentrations. After this discovery, and learning of similar observations reported in online scientific discussion groups (18,19), we initiated investigations into potential differences between the JP and SA WR99210 stock compounds. Here we report results from drug response assays, chemical and structural evaluations, and computational modeling studies that demonstrate dramatic effects on activity from isomeric differences between the JP and SA stocks. Further, we propose a mechanism by which the inactive isomer may develop from the active WR99210 compound, and we suggest some methods for rapid detection of the inactive isomer.

P. falciparum parasites exhibit strikingly different responses to stocks of WR99210 from two sources

For more than 20 years (3), our standard EC50 assays with JP WR99210 have routinely yielded susceptibilities in the sub-nanomolar range for untransformed P. falciparum parasites. Results for two P. falciparum lines are listed in Table 1. NF54 encodes an antifolate-sensitive isoform of PfDHFR (N51, C59, S108), while Dd2 (N51I, C59R, S108N) encodes mutations at PfDHFR residues that confer resistance to certain other antifolates such as pyrimethamine, but not to WR99210. NF54 EC50 results with JP WR99210 were 0.056 nM with a 95% confidence interval (CI) of 0.029-0.103 nM, while Dd2 showed a 10-fold increase to 0.62 nM (CI: 0.580-0.671 nM).
In striking contrast to these results with JP WR99210, our EC50 results from stocks of more recently acquired SA WR99210 (Sigma catalog no. 47326-86-3, lots/batches 0000042122, 0000014239, 095M4622V, 070M4610V) were 900- to 23,000-fold higher: 1.25 µM (CI: 1.01-1.57 µM) for NF54 and 547 nM (CI: 525-571 nM) for Dd2 (Table 1). Morphological appearances of parasites exposed to stock compounds of JP WR99210 and SA WR99210 were consistent with the results of the EC50 assays. Culture samples of synchronized NF54 schizont-infected erythrocytes were exposed to 2 nM WR99210 from JP or SA, to 1 µM WR99210 from SA, or to 0.0002% DMSO (control) for 4 days. Microscopy of Giemsa-stained thin smears of these exposed cells showed dead and dying parasites only in response to 2 nM JP WR99210; the parasitized cells under all of the other exposures remained healthy in appearance, including parasites exposed to 1 µM SA WR99210.

An isomeric difference is suggested by TLC, RP-HPLC, and UV-vis spectroscopy

We next checked for evidence of a structural difference between 1 and 2 that could explain their different effects in the drug response assays. Thin-layer chromatography (TLC), reverse phase high performance liquid chromatography (RP-HPLC), and ultraviolet-visible spectroscopy (UV-vis) were performed with the samples dissolved in tetrahydrofuran:methanol (2:1), which provided solubilities of more than 3 mg/mL vs. 0.2 mg/mL with DMSO. Silica gel TLC of these samples developed with a chloroform:diethyl ether:methanol solvent system showed different retention factors (Rf) for 1 (Rf = 0.30) and 2 (Rf = 0.37) (Fig. S3). RP-HPLC experiments indicated that both compounds were more than 95% pure and eluted with different retention times (tR) of 13.7 minutes for 1 and 13.1 minutes for 2 (Fig. S4-5). Consistent with the differences between 1 and 2 detected by TLC and RP-HPLC, UV-vis spectroscopy showed the presence of two separate compounds.
These differences were particularly evident in the different absorbances in the plateau region between 230-240 nm (Fig. 1B). The results of these analyses and the drug assays pointed to the likelihood that 2 was an inactive isomer of the WR99210 structure 1.

Chemical derivatization and nuclear magnetic resonance spectroscopy clarify structural differences between compounds 1 and 2

We next sought to determine the structures of 1 and 2 by 1- and 2-dimensional 1H, 13C, and 15N NMR spectroscopy (Table S1, Fig. S6-13). Due to the lack of protons in the triazine ring and the four-bond separation between C-9 and the closest carbon, we were unable to unambiguously assign the chemical shifts for the triazine ring from 1H-13C data alone. We therefore recorded a 1H-15N HMBC (2,3JHN) experiment to assign the NH and NH2 chemical shifts and the regiochemistry of the ring. The structure of compound 1 was therefore as published for WR99210 (20,21). NMR spectroscopy of compound 2 showed that the 1H and 13C NMR data corresponding to the (2,4,5-trichlorophenoxy)propoxy moiety, extending from C-1 through C-9, were in close agreement with 1. However, the chemical shifts for the diamino-dimethyl triazine unit were absent, and the 1H spectrum showed broad peaks corresponding to two NH groups at δH 5.86 and 6.80 and one NH2 group at δH 5.61. Together, the MS data and the NMR data suggested we had observed two distinct singly charged species by HRMS; namely, a protonated and positively charged compound 2, or [2+H]+, and a positively charged compound 1, or [1]+, already present in its protonated form (Fig. S2). The NMR data therefore demonstrate that 2 is an isomer of compound 1.
We recorded 1H-15N HSQC and HMBC spectra for 2 at multiple temperatures and in different solvents; however, unlike the case for compound 1, we were unable to determine the complete structure of 2 by NMR alone (Table S2). To obtain additional NMR data, we permethylated 2 using a stoichiometric excess of methyl iodide in sodium carbonate buffer (Fig. 2B). HRMS of the product indicated a molecular formula of C19H29Cl3N5O2, which corresponded to the addition of five methyl groups (Fig. S19). These were confirmed by signals from the 1H and 13C NMR spectra (Fig. S20-21, Table S3). The chemical shifts of all 1H and 13C signals were assigned (Fig. S22-25).

Modeling of compounds 1 and 2 within the PfDHFR-TS binding pocket

Using the induced-fit docking mode of Glide, we examined the predicted binding of compounds 1 and 2 within the PfDHFR-TS pocket. Results confirmed that our model was in good agreement with the known binding geometry (Table S4) (Fig. 3B). The best-scoring docking pose with 1 had a GlideScore of -9.33 and an induced fit score of -425.81, while the best-scoring pose of 2 produced a GlideScore of -8.07 and an induced fit score of -423.44 (Fig. 3C). Furthermore, the best-scoring binding pose of 2 was in an orientation opposite to that of 1 and the crystallographic pose, with the halogenated ring placed into the interior of the enzyme rather than protruding from the pocket (Fig. 3A).

DISCUSSION

Although not suitable for clinical use, WR99210 continues to be an important tool in molecular parasitology as a selection agent for genetically modified parasites. Recent observations of greatly different efficacies of WR99210 stocks from different sources have been unexplained. Here, we show that isomerization of WR99210 accounts for these efficacy differences, and we propose a rearrangement mechanism for isomerization.
The potent antifolate activity of WR99210 that results from tight and specific binding to the Plasmodium DHFR-TS site is lost with the inactive isomer, which interacts only weakly at the DHFR active site in various, greatly different orientations. HRMS analysis of WR99210 (compound 1) and its isomer (compound 2) showed that the molecular formulae of these compounds were identical, without evidence in the stocks for degradation products of reduced size, substantial impurities, incidental polymerization, or redox changes. HRMS also confirmed the experimental concentrations of 1 and 2 in DMSO-dissolved stocks used for drug response assays, thus demonstrating that the expected exposures of these compounds to parasitized erythrocytes were the same. Having eliminated these possible explanations for the large, supplier-dependent efficacy differences between the stocks, we performed TLC, RP-HPLC, and UV-vis spectroscopy studies on stocks of 1 and 2 for evidence of structural differences. All three methods identified differences between 1 and 2. The structure of 2, demonstrating that it is a positional isomer of 1, is identical to that of a previous report, which demonstrated production of 2 from 1 in an ethanolic solution brought to pH >8 with sodium hydroxide, or with added triethylamine (21). Conversion of arylmethoxydimethyl-dihydrotriazines from their base form to isomeric dihydrotriazines had also been observed when the compounds were heated in ethanol, benzene, or partial aqueous suspension (24). Stocks of WR99210 may thus be inactivated by spontaneous rearrangement to isomer 2 under basic conditions. Isomer 2 differs from WR99210 by a repositioning of the propoxy substituent on the triazine ring. Figure 4 presents a proposed pathway of isomerization by a base-mediated ring opening, followed by ring closure at the gem dimethyl carbon with the NH2 group proximal to the propoxy linker.
First, WR99210 forms from a substituted biguanide by a pathway involving O-bonded amine attack at the dimethyl-bearing methine carbon to form the triazine ring (8,24,25). Second, the amine groups of the diaminotriazine moiety in WR99210 are stabilized by the HCl salt form. When converted to, or purified as, the free base, tautomerization of the guanidine groups can occur, allowing the prior imine nitrogen (now an NH2) to attack the dimethyl-bearing methine carbon to form the dihydrotriazine isomer 2.

Figure 4. Proposed mechanism for the isomerization of compound 1 to 2. The mechanism of isomerization relies upon the conditions for production of the free base rather than the hydrochloride salt, in agreement with the solved structures (Fig. 2). The proposed mechanism starts with base-mediated ring opening, followed by ring closing via substitution at the gem dimethyl quaternary carbon by the NH2 group proximal to the propoxy linker. The isomerization repositions the amine substituents and extends the propoxy linker between the halogenated ring and the triazine ring.

The results of our docking modeling indicate that isomer 2 binds to PfDHFR-TS with much lower affinity than WR99210, if at all. This dramatic difference in affinity further verifies the Plasmodium DHFR active site as WR99210's target of action and accounts for the compound's almost complete loss of drug efficacy after isomerization. Stocks and concentrates of WR99210 should therefore be checked for the presence of isomer 2, particularly when they are not obtained and maintained as the hydrochloride salt or are exposed to basic conditions. Among the checks reported here are analysis by UV-vis spectroscopy, determination of the RP-HPLC elution time, and verification of the TLC band and its retention factor. We also note that infrared spectroscopy in the 6.0-10.0 µm region has been reported to distinguish diagnostic differences between unrearranged and rearranged triazines (24).
These assessments alone or in combination may serve for accessible quality control of WR99210.

Dose-response assays

In vitro drug response assessments were performed employing a standard 72-hour malaria SYBR Green I assay against the lab-adapted lines Dd2 and NF54 (27)(28)(29). Two-fold serial dilutions of JP and SA WR99210 (50 µL) were added across a 96-well plate, reserving two wells per row as drug-free controls. After reaching 4-10% parasitemia with >70% ring-stage parasites, cultures were resuspended to 1% parasitemia and 1% hematocrit in CM, and 150 µL was added to each well for drug phenotype response. EC50 values were determined using the variable sigmoidal function feature of Prism 7 on four independent replicates (GraphPad Software Inc.).

HPLC with UV-vis spectroscopy

An SB-C3, 3.5 micron, 300 Å, 0.3 × 100 mm capillary column (Agilent) was used for chromatography at 6 µL/min. Typical sample amounts were 100 nanograms (ng). The column was equilibrated in 80% of 0.1% formic acid:20% acetonitrile and eluted over 15 minutes with a gradient to 100% acetonitrile. Detection was done at 290 nm with diode array spectral acquisition between 220 and 400 nm. Spectral analysis was performed with the ChemStation 2D software. For HPLC prior to mass spectrometry (further methods below), all chromatography conditions were maintained except that a flow rate of 10 µL/min was used and approximately 25 ng of solubilized sample (primary stock solution) was injected.

Mass Spectrometry

Acquisition and analysis were done with a Sciex 4000 QTrap in positive mode using either HPLC or direct infusion at 1 µg/mL of WR99210 in 85% acetonitrile:15% formic acid (0.1%) with a flow rate of 12 µL/min. Spray voltage was 4000 V, de-clustering potential was 25 V, and nebulizing nitrogen gas was 20 psi. MS2 fragmentation was done with a collision energy of 40 and MS3 with AFC values of 40 to 65.
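The variable-slope sigmoidal fit used for the EC50 determinations above can be sketched outside of Prism. This is a minimal SciPy illustration of a four-parameter logistic fit on synthetic dilution-series data (all concentrations, responses, and parameter values here are hypothetical, not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Variable-slope (four-parameter logistic) inhibition curve:
    full signal at low concentration, 'bottom' at high concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** hill)

# Hypothetical two-fold dilution series (nM) and simulated % growth signal,
# generated with a "true" EC50 of 0.5 nM plus Gaussian noise.
conc = 10.0 / 2 ** np.arange(12)
rng = np.random.default_rng(0)
signal = four_pl(conc, 0.0, 100.0, 0.5, 1.5) + rng.normal(0.0, 2.0, conc.size)

# Bounds keep EC50 and the Hill slope positive during optimization.
popt, _ = curve_fit(four_pl, conc, signal, p0=[0.0, 100.0, 1.0, 1.0],
                    bounds=([-10.0, 50.0, 1e-4, 0.1], [10.0, 150.0, 100.0, 5.0]))
ec50_fit = popt[2]   # should recover ~0.5 nM
```

Fitting log-transformed concentrations, as Prism does internally, is often more stable when the dilution series spans several orders of magnitude.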
Nuclear Magnetic Resonance

NMR spectra were recorded on Bruker Avance 500 and 600 MHz spectrometers equipped with z-shielded gradient cryoprobes. Spectra were recorded at multiple temperatures, and 1- and 2-dimensional homonuclear and heteronuclear 1H, 13C, and 15N spectra were recorded. See Supporting Information for additional details.

Computational docking studies

The three-dimensional structure of PfDHFR-TS (PDB ID 1J3I) was downloaded from the Protein Data Bank. The protein structure was readied for docking via the Protein Preparation procedure, using the Induced Fit Docking protocol 2015-2 (Glide version 6.4, Prime version 3.7, Schrödinger, LLC, New York, NY, 2015; release 2018-4). To achieve unbiased docking, the conformation of WR99210 from the PDB structure was not used. Instead, the 3D conformer of WR99210 was downloaded from PubChem using CID 121750. The SA isomer was drawn in the PubChem Sketcher and converted to 3D using Open Babel 2.3.1 (30). Both ligands were readied for docking using the LigPrep procedure in Maestro (31). Induced fit docking (IFD) was performed using the Maestro suite. IFD used Glide for ligand docking with a softened potential to increase the possible initial protein conformations by decreasing the van der Waals repulsion term, permitting closer packing, and a final re-docking after protein optimization. Prime was used to predict side-chain positions within each protein-ligand complex and to find stable conformations around the docked ligand. To evaluate the quality of the docked poses, IFD provided an energy score for each docked pose that combines both Glide and Prime. Root mean square deviation (RMSD) calculations were performed between the crystallographic structure and the poses resulting from IFD using the DockRMSD program (https://zhanglab.ccmb.med.umich.edu/DockRMSD/).
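A plain coordinate RMSD of the kind used to compare docked and crystallographic poses can be sketched as follows (note that DockRMSD additionally resolves symmetry-equivalent atom mappings, which this minimal version, assuming a known 1:1 atom correspondence, does not):

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root mean square deviation between two poses given a fixed
    1:1 atom mapping; coordinates are (n_atoms, 3) arrays in Angstroms."""
    a = np.asarray(coords_a, dtype=float)
    b = np.asarray(coords_b, dtype=float)
    assert a.shape == b.shape
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

# Hypothetical 3-atom example: a pose rigidly shifted 1 Angstrom along x
# from the reference, so every per-atom deviation is exactly 1.
ref = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
pose = ref + np.array([1.0, 0.0, 0.0])
print(rmsd(ref, pose))  # 1.0
```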
2D ligand interaction diagrams of the active site were created within Maestro and virtual reality visualization was done via UCSF ChimeraX (https://www.rbvi.ucsf.edu/chimerax/) and the HTC Vive Pro.
Growth and Collapse Dynamics of a Vapor Bubble near or at a Wall

This study investigated the dynamics of vapor bubble growth and collapse for a laser-induced bubble. The smoothed particle hydrodynamics (SPH) method was utilized, considering the liquid and vapor phases as a van der Waals (VDW) fluid and the solid wall as a boundary. We compared our numerical results with analytical solutions for the bubble density distribution and the radius curve slope near a wall, and with the experimental bubble shape at a wall, all of which showed fairly good agreement. After validation, nine cases with varying heating distances (L2 to L4) or liquid heights (h2 to h10) were simulated to reproduce bubbles near or at a wall. Average bubble radius, density, vapor mass, velocity, pressure, and temperature during growth and collapse were tracked. A new recognition method based on bubble density is recommended to distinguish the three substages of bubble growth: (a) inertia-controlled, (b) transition, and (c) thermally controlled. A new precollapse substage (Stage (d)) was revealed between the three growth stages and the collapse stage (Stage (e)). These five stages were explained by the out-of-sync behavior between the bubble radius change rate and the vapor mass change rate. Further discussions focused on the occurrence of secondary bubbles, shockwave impact on the wall, system entropy change, and energy conversion. The main differences between bubbles near and at the wall were finally concluded.

Introduction

Vapor bubbles have recently drawn intensive attention in many research fields [1], such as micro- or nanomanipulation [2], the heat transfer of two-phase heat exchangers [3,4], and medical vapor bubble cancer treatment [5][6][7]. However, vapor bubble dynamics are incredibly complex and nonlinear [8][9][10], involving bubble oscillations and interface fluctuations during growth, and shockwave impact and cavitation noise during collapse.
Understanding bubble growth and collapse mechanisms is key to successfully solving the challenges connected to these applications. Many studies have focused on vapor bubble dynamics, both experimentally and numerically. With regard to experiments, pulsed lasers are usually used to produce vapor bubbles, and high-speed cameras observe bubble dynamics under bubble-free surface or bubble-solid wall interactions [11,12]. Gonzalez-Avila et al. studied the dynamics of bubbles in a highly variable liquid gap [13], and Sun and Zachary et al. concluded that thermal effects played an important role in the entire growth and collapse of bubbles in microchannels [14,15]. Kangude et al. explained the growth mechanism of vapor bubbles on hydrophobic surfaces with the help of infrared thermal imaging measurement methods [16]. In addition to experiments, numerical simulations can help to better understand bubble dynamics and provide more details about bubble density, velocity, and heat fluctuations; common methods include volume of fluid (VOF), the lattice Boltzmann method (LBM), and smoothed particle hydrodynamics (SPH).

SPH Modeling

In our model, compressible vapor and liquid are considered a two-phase fluid with continuous density gradients. In the Lagrangian formulation, the liquid and gas phases uniformly follow the conservation equations of mass, momentum, and energy:

dρ/dt = −ρ ∇·v (1)
ρ dv/dt = ∇·M (2)
ρ dU/dt = M : ∇v + ∇·(κ∇T) (3)

Water 2021, 13, 12

where ρ is the density, v is the velocity vector, M is the stress tensor, T is the temperature, U is the internal energy, and κ is the thermal conductivity. The stress tensor M includes pressure terms, shear and bulk viscosity terms, as well as an additional Korteweg tensor term Mc of the gas-liquid diffusion interface, as shown below:

M = [−p + ηv(∇·v)] I + ηs[∇v + (∇v)ᵀ − (2/dim)(∇·v) I] + Mc (4)

where p represents pressure, dim represents the dimension of space, and ηs and ηv are the shear and volume viscosity, respectively.
The Korteweg tensor Mc can be used to simulate the capillary force on the interface due to the density gradient, expressed as:

Mc = K(ρ∇²ρ + ½|∇ρ|²) I − K ∇ρ ⊗ ∇ρ (5)

where K is the gradient energy coefficient. In order to close the momentum and energy equations, the VDW equation is chosen as the pressure state equation, which can describe a gas-liquid coexistence system. The van der Waals equation of state is:

p = ρ kb T / (1 − βρ) − αρ² (6)

where kb is the Boltzmann constant, α is the parameter of attraction, and β is related to the size of the particles. kb, α, and β are set as 1, 2, and 0.5 for the van der Waals fluid, respectively. Considering the thermodynamic relationships of the system, the thermodynamic consistency formula can be expressed as [31]:

(∂U/∂V)T = T (∂p/∂T)V − p (7)

and the total differential form of the entropy S(T, V) is:

dS = (CV/T) dT + (∂p/∂T)V dV (8)

Combining Equations (7) and (8) with Equation (6), we can obtain the total differential form of the internal energy dU:

dU = CV dT + [T (∂p/∂T)V − p] dV (9)

Integrating Equation (9) and substituting CV = dim·kb/2, U(T, V) can be expressed as U(T, ρ):

U(T, ρ) = (dim/2) kb T − αρ (10)

The former term, (dim/2) kb T, is defined as thermal energy, and the latter term, −αρ, is defined as potential energy. For the closed governing equations (1)-(3), the SPH method is used to discretize them into numerical forms. Equation (11) is used to calculate the mass equation with second-order accuracy:

dρa/dt = Σb mb (va − vb)·∇aWab (11)

where m is the particle mass, the subscript b indicates the adjacent particles around particle a, and Wab is a kernel function of the distance between particles a and b. Here, the vapor or liquid phase is determined through the critical fluid density of the VDW fluid: if the fluid density is less than the critical density, the phase is vapor; otherwise, it is liquid. In the VDW fluid, the momentum and energy equations should be divided into short-range and long-range parts, because this treatment can accurately deal with the surface tension effect.
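The equation of state and internal-energy expression can be checked numerically. A minimal sketch, assuming the standard VDW form p = ρkbT/(1 − βρ) − αρ² with the constants kb = 1, α = 2, β = 0.5 stated in the text (the critical-point expressions below are textbook VDW results, not taken from the paper):

```python
import numpy as np

KB, ALPHA, BETA = 1.0, 2.0, 0.5   # kb, alpha, beta values given in the text

def pressure(rho, T):
    """Assumed VDW equation of state: p = rho*kb*T/(1 - beta*rho) - alpha*rho**2."""
    return rho * KB * T / (1.0 - BETA * rho) - ALPHA * rho ** 2

def internal_energy(rho, T, dim=2):
    """U(T, rho) = (dim/2)*kb*T - alpha*rho: thermal plus potential parts."""
    return 0.5 * dim * KB * T - ALPHA * rho

# Critical point implied by this EOS: rho_c = 1/(3*beta), T_c = 8*alpha/(27*beta*kb).
# With the constants above, rho_c = 2/3, the value used to tell vapor from liquid.
rho_c = 1.0 / (3.0 * BETA)
T_c = 8.0 * ALPHA / (27.0 * BETA * KB)

# At the critical point dp/drho must vanish; check by central difference.
dp = (pressure(rho_c + 1e-6, T_c) - pressure(rho_c - 1e-6, T_c)) / 2e-6
```

In an SPH loop, `pressure` would be evaluated per particle from its smoothed density, and the comparison `rho < rho_c` is what flags a particle as vapor.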
Thus, the momentum and energy equations are discretized into SPH forms as Equations (12) and (13), where the long-distance terms are marked with the superscript H and the short-distance terms are unmarked. In this paper, we use the hyperbolic-shaped kernel function proposed by Yang [32]. The fluids of liquid and vapor are represented by continuous particles, compact or sparse, depending on their local density. Mirror ghost particles are used for the solid wall to ensure the no-slip boundary condition. Further information about this numerical model is provided in our previous work [33].

Validation

We first simulated the generation of a laser-induced spherical vapor bubble, in which laser heating causes the outward expansion of fluid and produces a high-temperature vapor in the central region of the flow field. In Figure 1, we show the typical characteristics of a vapor bubble under laser heating. In Felderhof's study, the density distribution along the bubble radius is as follows [34]:

ρ(r) = (ρL + ρV)/2 + [(ρL − ρV)/2] tanh[(r − R)/w] (14)

where ρL is the liquid density, ρV is the vapor density, R is the bubble radius, and w is the width of the gas-liquid interface. Our model is based on the diffuse-interface description of a two-phase liquid-vapor system endowed with thermal fluctuations. After bubble growth, we observed that the inertia-driven bubble oscillates slightly in the confined system. The average fluid density agrees well with Equation (14), as shown in Figure 2. Gallo et al. investigated the nucleation of vapor bubbles in stretched or overheated liquids and found a similar phenomenon of bubble oscillation [29].

According to the slope of the bubble radius curve with time, three control mechanisms could be observed during bubble growth, categorized into the (a) inertia-controlled stage, (b) transition stage, and (c) thermally controlled stage. Lee and Merte [17] found that the curve slope in Stage (a) is twice that in Stage (c) in logarithmic coordinates, as shown in Figure 3a.
Our simulation obtained logarithmic fitting coefficients for the radius versus time of 2.2098 and 1.1043 in Stages (a) and (c), respectively, as shown in Figure 3b. This ratio is approximately 2:1, which is consistent with the analytical results of Lee and Merte.

Set-Up

The SPH simulation started from the steady-state liquid, determined by the binodal line of the van der Waals fluid. The critical fluid density was introduced to distinguish between liquid and vapor, with ρc = 1/(3β) = 2/3. The SPH liquid particles, with mass m = 0.6, steady density ρ = 1.2029, and spacing dx = m/ρ, were initially arranged over a region of 400 dx × h dx at the bottom, where h is the liquid height, as shown in Figure 4. We tried 500 dx, 400 dx, 300 dx, and 200 dx × h dx cases, respectively; among these, the 400 dx × h dx case provided sufficiently good results. The upper and bottom walls were treated as stationary, insulated solid walls, and the left and right boundaries were periodic. Gravity was considered in the downward direction. The whole chamber size was 400 dx × 200 dx. The heating area was within a radius of 12 dx, at the heating distance L above the wall, with a superheat of ∆T = 10.33. This region, heated by a laser, is spherical and homogeneous in this paper. We conducted a series of cases by changing the liquid height h and the heating distance L.
The main characteristics of bubble growth and collapse in these cases could be classified by two deduced nondimensional parameters of heating distance and liquid height: γ = L × dx/Rx, η = h × dx/Rx. The key parameters L, h, γ, η, η−γ, and Nsecb are shown in Table 1 for each case. Here, η−γ is related to the hydrostatic pressure, and Nsecb is the number of secondary bubbles.

Bubble near the Wall

Three cases, L2, L3, and L4, were examined to determine the bubble dynamics near the solid wall. Here, the liquid height h was 100, and the heating distance L varied from 20 to 30 to 40 in units of the initial liquid particle spacing dx. Figure 5 shows representative bubble shapes at different moments for the different cases. It was found that the bubble rose noticeably during growth, and the spherical shape of the bubble was slightly distorted. We calculated the volume of the bubble and obtained the average radius of the bubble over time, as shown in Figure 6. It was found that during the entire growth process, the change of heating distance γ mainly affected the average radius of the bubble and the transition time from Stages (a) to (c) in the early growth stage.
It was found that the bubble rose obviously during growth, and the spherical shape of the bubble was slightly distorted. We calculated the volume of the bubble and obtained the average radius of the bubble with time, as shown in Figure 6. It was found that, during the entire bubble growth process, the change of heating distance γ mainly affected the average radius of the bubble and the transition time from Stages (a) to (c) in the early growth stage. The greater the heating distance, the larger the bubble radius, and the later the transition time.

In many studies, the vapor pressure, density, and temperature are considered constant, and the liquid is considered incompressible. Our study is different: the vapor, as well as the liquid, is compressible, and thus we could capture the oscillation of bubble density during bubble growth and collapse, as shown in Figure 7.

From Figure 7, we can clearly observe the transition time from Stages (a) to (b) as the first inflection point and the transition time from Stages (b) to (c) as the second inflection point. After the bubble radius reaches its maximum, there still exists a time gap before the bubble density increases continuously, which we call Stage (d). The precollapse Stage (d) extends from the point of the largest bubble radius to the next minimum of bubble density. We define the vanishing of the last vapor inflection point as collapse (Stage (e)). The secondary bubble in Case L4 only has Stages (a) and (e), as shown by the inflection points in Figure 7, because the lifetime of the secondary bubble is too short. The bubble density profile provides a clear and easy criterion for distinguishing these five substages (Stages (a)-(e)). Therefore, the recognition method based on density is better for determining bubble growth and collapse stages than the radius.

A further examination of the bubble growth process can be conducted by calculating the bubble radius changing rate ṙ = dR/dt and the vapor mass changing rate ṁ = dm_v/dt for Cases L2 to L4, illustrated in Figure 8. Both ṙ and ṁ reach a peak value in Stage (a); although the times of the peaks differ, ṁ changes first. In Stage (b), the magnitude of fluctuation in ṁ is marginally larger than that in ṙ. In Stage (c), ṙ and ṁ are both at a lower growth rate, but ṁ dominates the process. In Stage (d), R and m_v decrease, with ṁ dominating ṙ, especially at the moment of collapse.

As shown in Figure 9, we set a probe into the wall to detect the shockwave released by the bubble collapse. With the increase of the heating distance γ, the impact of the bubble on the wall is weaker. We also find that no secondary bubbles were produced in Cases L2 and L3, but there is still pressure oscillation on the wall. There is a certain threshold for overcoming hydrostatic pressure for bubble formation.
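The density-based stage recognition described above amounts to locating extrema of the bubble-average-density signal together with the radius maximum. The following sketch demonstrates the idea on a synthetic signal; the signal, the smoothing-free extremum logic, and the variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stage_markers(rho_b, R):
    """Locate the markers used in the text from time series of bubble
    average density rho_b and bubble radius R:
      * local extrema of rho_b -> candidate transitions (a)->(b), (b)->(c)
      * argmax of R            -> start of the precollapse Stage (d)
      * last rho_b minimum after max-R -> onset of collapse Stage (e)
    """
    d = np.diff(rho_b)
    # indices where the discrete slope changes sign -> local extrema
    extrema = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1
    i_rmax = int(np.argmax(R))
    after = extrema[extrema > i_rmax]
    minima = [i for i in after if rho_b[i] < rho_b[i - 1]]
    i_collapse = minima[-1] if minima else None
    return extrema, i_rmax, i_collapse

# Synthetic demo: density dips while the bubble grows, with superposed
# oscillations standing in for the density oscillations of Figure 7.
t = np.linspace(0.0, 1.0, 201)
R = np.sin(np.pi * t)                                         # grows then shrinks
rho_b = 1.0 - 0.5 * np.sin(np.pi * t) + 0.15 * np.sin(4 * np.pi * t)
extrema, i_rmax, i_collapse = stage_markers(rho_b, R)

# The rates discussed in the text, by finite differences:
r_dot = np.gradient(R, t)    # bubble radius changing rate dR/dt
print(i_rmax, i_collapse)
```

On real data some smoothing before the extremum search would likely be needed; the point is only that the density trace yields well-defined markers where the radius trace does not.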
From the above analysis, we conclude that the pressure force of the expansion wave is the driving force for bubble growth, whereas for bubble shrinkage, liquefaction occurs earlier than the visible radius contraction. Liquefaction generates a huge heat release. When near the wall, as in these cases, the released heat can be transported around the fluid by periodic mass oscillations, albeit over a relatively long time. However, for the bubble at the wall, the impact time is short, and heat transport is limited by the solid wall and the nearly stationary fluid. This helps explain why a shockwave generates more serious damage when the bubble collapses violently on the wall, which is discussed in the next section.

Bubble at the Wall

Six cases, h2, h4, h6, h7, h8, and h10, were set at the solid wall. Here, the heating distance L was 0 and the liquid heights h were 20, 40, 60, 70, 80, and 100, respectively, in units of the initial liquid particle spacing dx. Figure 10 shows the bubble density evolution and velocity vectors for the different cases.

In Case h2, the vapor bubble expands rapidly from the bottom and contacts the free surface. When the bubble exceeds the limit of the free surface, it explodes and splits into multiple parts. Some fluid is quickly splashed into the cold environment, where the hot vapor is quickly liquefied to form many small isolated drops. In Case h4, the maximum bubble diameter is slightly larger than the thickness of the fluid layer. The bubble bulges the liquid film but does not burst, and a very thin liquid bridge is formed at the top of the bubble. We believe that the surface tension of the liquid film stops the vapor bubble from breaking. Padilla-Martinez observed a similar phenomenon and concluded that surface tension inhibited vapor bubble growth when the radius was extremely small and could maintain its relative stability to prevent bursting when the vapor bubble was close to the adjacent radius [10].

In the other cases, with a greater liquid height (Cases h6 to h10), the vapor bubble remains under the free surface. The difference is that there is a secondary bubble in Cases h6 and h7 but none in Cases h8 and h10. In Figure 11, the bubble snapshots at η = 1.8 show that our numerical results are in good agreement with the experimental results of Nguyen [35]. During the growth stage, the bubbles appear elongated in the vertical direction. During the collapse stage, the bubbles become flattened. Secondary bubbles then appear, and the tip of the secondary bubble is sharp. Due to the effect of the wall, we can observe diverse bubble shapes at the collapse stage in many of the experimental snapshots, being, for example, shell-, flame-, cap-, or droplet-shaped. Although our simulations and the experiments show a similar bubble size evolution during growth and collapse, the bubble attachment on the wall is marginally different. This might be caused by the mismatch between the meniscus-covered container used in the experiments and the open-topped container in our simulations. Such a meniscus cover was used to suppress the bubble but was difficult to model in the simulation. Further validation might be conducted in the future, either with a model improvement or more precise experiments.

Figure 10. Bubble density contour (top) and flow field vectors (bottom) for Cases h2, h4, h7, and h8.

Figure 12 shows the evolution of the bubble radius R, bubble average density ρ̄, wall pressure pw, wall temperature Tw, and entropy increase ∆S with time for Cases h6, h7, and h8. It is found that pw, Tw, and ∆S of the bubble have a certain periodicity with the bubble period. It is common that the bubble radius increases in the early stage, following a coincident trajectory. When the bubble growth is close to its maximum, the bubble behavior differs due to the wall effect. For Cases h6 and h7, Stages (b)-(d) couple together.
After the bubble collapses, the changes of R, ∆S, Tw, and pw show a certain order. First, it is found that bubble collapse starts with an increase in entropy change. After the bubble collapses, there is a sudden change of Tw, followed closely by a sudden change of pw. This means that both the fluid pressure and the thermal energy increase during collapse. The emergence of the Tw peak is even earlier and longer than the pw peak. Although researchers have paid more attention to erosion by pressure, we believe that, with an increase in energy, the pressure impact would cause worse erosion damage.

Comparing Cases h6, h7, and h8, no secondary bubble forms with a liquid height greater than 80, due to the large hydrostatic pressure. Considering the hydrostatic pressure effect of the heating distance and liquid depth on the vapor bubble, we used η−γ to estimate the number of secondary bubbles. It can be concluded that if η−γ < 1, the bubble bursts through the free surface during its lifetime, with no secondary bubbles detected. If 1 < η−γ < 3, secondary bubbles may appear, with more than one emitted shockwave after the first bubble collapses. For 3 < η−γ, there are no secondary bubbles, because the higher hydrostatic pressure suppresses their appearance.

Damage to a wall from bubble collapse is common and important for cavitation and other vapor-bubble-related applications. However, a consensus has not yet been reached on the mechanism of energy conversion during collapse. Zhang et al. believe that the energy of a bubble is transformed into the wave energy of its fluid, causing an impact on a solid wall [24]. Qin found that when a bubble collapses, there is a large amount of heat transfer and energy exchange between the bubble and the outside [23]. To illustrate the energy conversion during bubble collapse, we can divide the fluid internal energy into two parts as E_U = E_h + E_u, as in Equation (10), with thermal energy E_h = C_V T and potential energy E_u = −αρ, respectively. The fluid kinetic energy E_k = Σ_i ½ m_i v_i², summed over all particles, is also accounted for to represent the mechanical energy. The transition between these three energy forms is shown in Figure 13. We found that the formation of the bubble is mainly accompanied by thermal fluctuation, which is the mutual conversion of heat energy and potential energy.

Figure 13. (a-d) Energy conversion and entropy increase with time for Cases h6 to h10.

Figure 13 clearly reveals the energy conversion relationship at this time. At the moment of collapse completion, E_k and E_u decrease, but E_U and E_h increase. It is also observed in Figure 12 that the temperature changes earlier than the pressure when the vapor bubble vanishes. We note that the kinetic energy of the bubble collapse is transformed into heat energy. The high-temperature fluid in the central area then forms a high-pressure area, which acts as a pressure source releasing shockwaves. This high thermal energy causes additional damage to the wall beside the pressure impact.

Shockwave pressure causing cavitation erosion on a solid wall is a concerning issue. Zhang showed that during the collapse of a single bubble, the greatest impact on the wall is p2ndmax, which is caused when the bubble completely collapses [24]. For laser-induced thermal cavitation bubbles, the initial input energy also causes an extremely high-pressure shock p1stmax. Calculating the ratio p2ndmax/p1stmax as the shockwave pressure of the secondary bubble collapse over the initial vapor bubble impact pressure, we find that p2ndmax/p1stmax reaches its maximum in Case h8, as shown in Figure 14. We conclude that the shockwave impact first increases and then decreases with the nondimensional liquid height, with a maximum at η = 4.5. We define the cavitation potential energy of the impact on the solid wall as Ecp = ∫p(t)dt. Over the whole process, Ecp reflects the time integral of pressure, and a higher integrated pressure causes more serious cavitation potential.

Figure 14. Ratio of the two pressure peaks recorded by the probe, p2ndmax/p1stmax, and cavitation potential Ecp for the different cases.
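The η−γ criterion and the cavitation potential Ecp = ∫p(t)dt can be expressed compactly. In the sketch below the thresholds are taken from the text, while the wall-pressure trace is synthetic and purely illustrative.

```python
import numpy as np

def secondary_bubble_regime(eta_minus_gamma):
    """Classify secondary-bubble behavior using the text's eta - gamma criterion."""
    if eta_minus_gamma < 1:
        return "bursts through free surface, no secondary bubbles"
    if eta_minus_gamma < 3:
        return "secondary bubbles possible, multiple shockwaves"
    return "no secondary bubbles (suppressed by hydrostatic pressure)"

def cavitation_potential(t, p):
    """E_cp = integral of p(t) dt over the record (trapezoidal rule)."""
    dt = np.diff(t)
    return float(np.sum(0.5 * (p[:-1] + p[1:]) * dt))

# Synthetic wall-pressure trace with two peaks standing in for p1stmax
# (initial shock) and p2ndmax (collapse shock).
t = np.linspace(0.0, 1.0, 1001)
p = np.exp(-((t - 0.1) / 0.02) ** 2) + 2.0 * np.exp(-((t - 0.7) / 0.02) ** 2)

print(secondary_bubble_regime(2.0))  # secondary bubbles possible, multiple shockwaves
print(round(cavitation_potential(t, p), 4))
```

Because Ecp is a time integral rather than a peak value, a long moderate-pressure record can carry as much cavitation potential as a short sharp spike, which is the point the text makes about "higher statistical pressure".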
Conclusions

We used the SPH numerical method to directly simulate bubble growth and collapse processes for a laser-induced bubble near or at a wall. Bubble radius, density, vapor mass, pressure, temperature, energy, entropy, and their correlations were carefully examined during the bubble's lifetime. The bubble density was found to describe the bubble behavior better than the radius. A new precollapse substage (Stage (d)) was discovered between the growth stages (Stages (a)-(c)) and the collapse stage (Stage (e)). The precollapse stage extends from the point of the largest bubble radius to the next minimum of bubble density. The mechanism behind these five stages lies in the lack of synchronization between radius change and mass phase change. Using this new recognition method based on bubble density, we found that there were different substages for bubbles near and at the wall. For the bubble near the wall, the first bubble had five complete stages, from Stages (a) to (e). However, the bubble at the wall only had two clearly defined stages: (a) the inertia-controlled growth stage and (e) the collapse stage. The other three intermediate stages (Stages (b)-(d)) were mixed due to the strong wall effect. The bubble near the wall had an almost spherical shape, being slightly distorted while floating up slowly. However, the bubble at the wall was nearly hemispherical during growth. When bubbles collapsed at the wall, they presented diverse shapes: cap-, shell-, drop-, or flame-shaped. The lifetime of the bubble near the wall was usually longer than that at the wall. Secondary bubbles occurred at a modest hydrostatic pressure of 1 < η−γ < 3, either near or at the wall, in the current study. There was a sharp increase in entropy once the bubble completed its collapse, and the kinetic energy of its fluid was converted into heat energy to release the shockwave. Instantaneous erosive damage to the solid wall caused by the shockwave of bubble collapse reached its maximum at nearly η = 4.5.
Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available because of privacy.
Selected social problems of rural areas in the political thought of Wincenty Witos

The article concerns selected social problems of rural areas, with particular emphasis on education and the situation of the rural population. The intent was also to show the living conditions and the difficulties faced by inhabitants of rural areas. Within a wide catalogue of rights, Wincenty Witos devoted much attention to education. The most conscious part of the rural population understood that one of the causes of the social impairment of peasants was their limited access to education. Therefore, the programs of peasant political parties loudly voiced demands for the democratization of education, free schooling, and support for indigent young people who wanted to receive an education. Poland inherited a large number of illiterates: according to the census of 1921, illiteracy affected about 38% of the rural population. The sources of Witos's political thought centered on several considerations. First, the situation in which the peasants and rural Poland found themselves. Second, education played a significant role in socio-political transformations: it provided people with a chance for social advancement and sparked national consciousness and a sense of nationality. With the democratization of education, peasants could consciously participate in the life of the state and educate the young generation in the spirit of patriotism and citizenship. Witos considered education and culture the basis of the strength of the state and the welfare of its citizens. This enabled an impact on the political affairs of the country and the fulfilment of obligations to the homeland.

Introduction

The title of this article comprises three levels relating to the subjective scope, the temporal (chronological) aspect, and the objective scope. The objective-scope level includes a notion that merits explanation: political thought.
Political thought is generally understood as "any form of reflection on political realities, irrespective of the degree of development, internal cohesion and systematization, and the level of theorization and concretization". Political thought comes into existence as a result of political thinking, which manifests itself, inter alia, in the creation of ideas. If the reflection on political realities is well evolved and structured, and has a certain theoretical component, it can assume three stages in its development, namely the stages of ideology, concept, and program. The temporal (chronological) plane of the article covers the end of the 19th century and the first half of the 20th century.
Education as the most important social problem

In the broad catalogue of laws, Wincenty Witos gave the most attention to education. His reflections on education stemmed from his own life experiences and the knowledge he gained as an autodidact. He understood perfectly that one of the reasons for the social handicap of peasants was their limited access to education.1 Ideological ideas in educational projects resulted directly from the situation of peasants in the Polish lands in various historical periods. At the time of the Partitions, there were three distinct socio-political mechanisms that influenced the sphere of education. In the Second Republic of Poland, the attention of the rulers was drawn to the need to elaborate and undertake work towards the intellectual revival of citizens. The political programs of many folk groups reflected demands for the democratization of education, as well as free schools and help for impecunious youth seeking to be educated.2 Witos put education in one line next to the values of land and power. He believed that only through education could the Polish countryside be reborn and experience full bloom. He referred to education as "the golden source, the fount of knowledge".3 Witos treated education as a means of socio-cultural change and a way of building a new social order, which was characteristic of the thought of agrarianism.4 Witos inextricably connected the right to receive education with the socio-political changes leading peasants into the role of Poland's host. It was education that gave the population a chance for social advancement, led to nationalization, and aroused national and state awareness. The necessity of popularizing education was seen by Witos through the prism of several strictly connected premises.
The first was the peasants' understanding of their ties with the native language, customs, culture, and national identity. The second was undoubtedly the necessity of becoming aware of the need for a national rebirth: "The whole nation must act as one to create happiness for themselves and future generations, to astonish the world. Small nations have shown that they can; the great Polish nation will do it too, if it wants to, and one should want to and must want to".5 He emphasized in his speeches that Poland had fallen as a result of the darkness and backwardness of the great majority of its people. These words undoubtedly became valid during World War One, when Witos drew a vision of the future of the Polish state and saw a chance for the Poles to revive their own state in the global conflict: "We must not waste this historic moment under the threat of being cursed for generations. The Polish people are the foundation of our future; the people must take the right place and must persevere in it".6 He stated that "Poland cannot and will not be the seat of medieval retrogression and backwardness".7 Witos repeatedly stressed that "the rights for which the people have made a bloody sacrifice cannot be reduced, they can only be extended".8 He demanded full equality for the peasants: "I can never agree that in Poland there are first- and second-class citizens, some have rights and others only obligations".9 Education was, in his opinion, the best weapon in the struggle for freedom and independence, for prosperity, citizenship, and peasants' rights. Witos was well aware that knowledge and access to education, as well as political rights and land rights, had to be made real in order for the peasants to be able to aspire to the role of the landlord and to be tied permanently to the future state.

1 Podgajna E., Myśl polityczna Wincentego Witosa 1874-1945 [Political thought of Wincenty Witos (1874-1945)], Lublin 2018, p. 216-234.
This thought was undoubtedly seen as progressive. He repeatedly stressed that he "witnessed poverty and deprivation of the Galician peasant". 10 One of the ways leading to the exit from backwardness was the development of education. He pointed out that firstly, education should be universal, giving opportunity of an equal start to all children, and secondly, emphasize the need to change curricula that did not include national elements: "Many of our teachers are diligently teaching children to read and write, and to count and even geography, grammar and other subjects, telling them about China and Japan, and not mentioning Poland. It is not enough for a student to go to school only to not be illiterate […] but it is imperative that the student leaves the school as a Pole". 11 In 1908, during a speech at the Sejm, he said: "School and education are the foundation of all progress and prosperity. " He stressed that in thousands of villages there were no schools, and in those where schools could be found, the governing Poles had done nothing "to make the peasant feel a Pole". Children learned from Austrian textbooks and books of Austrian history and not Polish. He argued that the folk schools did not teach such subjects as Polish grammar and geography, and yet they prepared for the work of future councilors, commune heads and even deputies. Witos in his reflections on education emphasized, "the present school system creates national illiterates because they do not tell children what every citizen of the should know about his history". 12 As Andrzej Zakrzewski noted, the existing system and curriculum blamed Witos for the low level of national awareness of the peasants, which many times even led them to deprivation of their national identity. 13 The peasants looked at Poland through the prism of social affairs. 
The living memory of serfdom obscured its picture, since "peasants knew about Poland as it was: indecent, villein, belonging to the nobility, vulgar, unjust, trembling with mortal fear of its return". Concerns about education became, at the end of the war, an integral part of the vision of the future system of the revived Poland. Witos preached on the pages of "Piast" that the road to free Poland led through the establishment of a democratic right to vote, agricultural reform and the popularization of education. He considered education a lever of national progress, a foundation of the moral rebirth of the state, and a means of transforming the peasants into fully-fledged decision makers on the nation's destiny. In the exposé presented to the Sejm on 24 September 1920 he stated that "the war has prevented, or at least hindered, the proper functioning of schooling, to which the government draws urgent attention, understanding that the strength and prosperity of the state depend on education". 14 These were not empty words, because Witos was the prime minister who devoted most attention to education. 15 Being a self-taught person, Witos felt very strongly about the lack of systematic education. The issues of education were treated in a special way, and attempts were made to combine them with other aspects of social and political life. Education, and especially political education, was therefore seen as highly important. Witos put forward a previously under-valued postulate of awareness-raising and popularizing activity by the political party: "A peasant must become not only an equal but the decisive factor in the state. He must acquire the right, the power and education".
16 The education of a new human being, actively contributing to political, social and cultural life at various levels and aiming to shape society in the spirit of democracy, was a necessary condition for the realization of the political and social concepts embodied in the idea of the People's Republic of Poland. 17 PSL "Piast" in its 1919 program emphasized the need to provide all citizens with adequate education. It affirmed the right "to education from the lowest to the highest, according to their abilities". The peasants demanded the introduction of universal, free, compulsory education and the possibility of continuing education at the secondary and tertiary levels. The curriculum was to be designed in such a way that every "citizen of the state could be educated generally and professionally. Science will be universal, compulsory and free, religious education compulsory". 18 Witos advocated a fair education system, so that every child had the opportunity to study at all levels regardless of wealth, background, religion or nationality, claiming that "the cost of teaching in all schools will be covered by state funds". 19 However, PSL "Piast" opposed the over-democratization of school self-government through increasing the participation of factors chosen by the public. Witos opted for solutions that would remove the barriers hindering, and often preventing, peasant children's access to knowledge. He advocated the introduction of universal, free, compulsory teaching for all children in the general school, with the possibility of continuing education at the secondary and tertiary levels. He emphasized that education shapes modern, enlightened citizens who consciously and willingly participate in the life of the state.
He also pointed to the need to educate young people in the spirit of patriotic values and love for their homeland, so that they would be the best citizens and serve the Polish state as well as possible. It was Witos who came up with the idea of promoting an educational model that would combine individual needs and values with social ones. Nurturing and developing them in everyday life was a guarantee of the rebirth of a state in which full rights, freedoms and the taking up of one's own initiatives through hard work could be realized. Thanks to education, a socio-political departure from political stagnation became possible. It was important to introduce compulsory Polish-language instruction in the Polish lands and to give Polish children the possibility of attending schools of different types. 20 The development of education and culture was of great importance for shaping national consciousness. Providing children with access to knowledge made it possible to shape modern and wise citizens consciously involved in the life of the state. The educational idea developed by W. Witos was based on the principles of patriotism. The necessity of awakening national feelings, love of and close ties with the homeland, and the inviolability of state goods were all considered priorities. He emphasized that through education it would be possible to develop and deepen moral, ethical and honest values. Witos supported shooting organizations preparing the youth for an armed struggle for independence: "special care will be provided by the PSL to shooting organizations, and it will endeavor to consolidate them in all Polish villages by permanently providing them with moral, political and material assistance".
21 He formulated the thesis that the state would endure thanks to its enlightened and self-aware citizens, through the love of freedom and respect for the law: "We will attach great importance to education, especially folk education, as the school must educate and bring up the new Polish generation, wise, industrious, persevering, civic-minded and self-sacrificing." 22 He paid particular attention to knowledge of the nation's own history, for through this knowledge it was possible to create an enlightened citizen with a deep sense of responsibility for the fate of the state, prepared for active participation in collective life, actively working for the homeland and having the most profound respect for work. He stressed that "at school a student should learn about all the great Polish men, about Poland and its greatness, about the fall and the partitions; he should feel hot love for his country and the willingness to sacrifice for it". 23 Witos emphasized the role of the school in educating young generations of citizens, also for rural and local communities, and in preparing them for participation in the process of rural transformation. He emphasized the close link between the school and the local community. The school was supposed to participate in the life of its pupils to a much wider extent than would be expected from its typical educational tasks. He believed that the school should also conduct widely understood out-of-school education for its immediate environment. The designation of such a role was related to the fact that in most rural areas the school was the only cultural and educational institution. Especially in work with the youth, Witos turned his attention to educational activities intended to promote the development of general and agricultural knowledge, owing to which young people would progress in their quest to become fully-fledged and aware citizens.
24 Education was to familiarize them with the rights and duties of an individual towards the state and society, to help them understand the values of the state, to acquaint them with its needs, and to prepare them for creative work for the state. In 1921, the PSL "Piast" program called for the establishment of a teaching and education system that, "by developing the harmonious forces of youth, would educate it in the knowledge of the country and the needs of the nation, teach creative work and prepare it for giving its strength to the homeland". 25 Witos was critical of the ideas propagated by the Piłsudski camp, aimed at shaping attitudes affirming the policy of the ruling camp. 26 W. Witos took the position of fighting against all inequality and social injustice. This stemmed from the conviction that it was necessary to eliminate the age-old backwardness of the rural population. He wanted to develop a new type of peasant: a citizen who would be aware of his rights and duties and actively participate in the political and cultural life of the country, while being attached to folk tradition, guided by the principles of Christian ethics, and knowingly assuming responsibility for the state. Witos emphasized his peasant origin and wanted peasants to be aware of the cultural distinctness of the village and its importance for preserving Polishness in the past and in the present. He promoted all forms of activity aimed at the cultural and social growth of his milieu. Knowledge, according to Witos, was the key to good, fruitful change. He claimed: "The homeland will someday be grateful for raising the masses of citizens to a new life. Many a great talent and heart awaits under peasant coats; you have to wake up those who are sleeping so that harvest time can come fast".
27 The value of out-of-school forms of acquiring knowledge, skills and qualifications was considered in the context of the citizen taking a conscious place in society and the state, able to make his own decisions and to understand the reality surrounding him. In the political thought of Witos we can see the impact of the concept of national upbringing on his educational demands. At the source of his approval of its individual parts was the conviction of the need for the development and integration of the nation, which Witos had taken from the thought of Bolesław Wysłouch. 28 The idea of the nation, a constituent part of the educational ideal of national pedagogy, was expressed in the democratic and civic direction of this educational ideology. The core of the curriculum and the organization of the school was based on Christian values. This was undoubtedly an element of the idea of national education, which held it necessary to observe the principles of Christian ethics in private, state and social life. Such a position did not arise only from national needs that consider religion an important ingredient of nationality. It was determined by the peasants' strong attachment to faith, resulting not so much from deep theoretical-philosophical reflection as from an emotional acceptance of the hopeful message that after a life full of effort and humiliation, peace, harmony and eternal life would come. 32 In the National Sejm, Witos emphasized the role and importance of the Roman Catholic Church in the life of the peasants, their deep attachment to the Catholic religion, its importance for the formation and renewal of national identity, education, peaceful and harmonious human coexistence, and the propagation of the idea of peace. He firmly emphasized the need for national unity based on the Catholic religion, claiming: "The Polish people are Catholic and deeply religious.
The Catholic faith educates in the clearest rules of humanity, which have become the basis of modern civilization, and the violation of faith can weaken in humankind the noblest ideals of love of neighbor, sacrifice for others, fraternity and social justice". 33 PSL "Piast" recognized that a leading role in the life of the nation and in its upbringing was to be played by the Church and the Catholic religion. Consequently, the stance adopted by the party regarding the Catholic religion led to education at all levels being placed under the influence of the clergy. The Concordat signed with the Holy See in 1925, which PSL "Piast" supported, enabled the Catholic clergy to significantly influence the education of young people. 34 An important role in the educational process was attributed by Witos and PSL "Piast" to the family. They emphasized that the life of a rural family was filled with religious content. They underlined that education is a process that lasts for years, and that parents are the sole and most important educators in the first several years of a child's life. Its main elements included education, comprehensive development, an in-depth exploration of religiosity, the shaping of moral attitudes and the building of one's own value system, and the evoking of love for the soil and for working on it. Witos believed that civic education in the family from an early age equips a person with the attributes needed for meaningful participation in social and political life. The family should shape a person conscious of his membership in a given community, with the desire to participate actively in its life for the general good. It was invariably important to develop love of native history, to learn about and cultivate the memory of national heroes, and to educate young people in the spirit of the needs of the nation and of the homeland.
Witos emphasized that the family has certain obligations towards the state and state power, such as taxes, military service, peace-time social services and civic education. The parents' approach to these duties is important. In the atmosphere of family life, the child learns civic virtues and the respectful, loving treatment of others, and absorbs the social and moral attitudes that build all the relationships of community life. In the family community a new generation of Poles was emerging that would build the society of the coming centuries. Only children brave in the practice of truth and sensitive to the respect of good would be able to undertake duties responsibly in every position. Children raised in this way would also be able to rebuild the authority of power in a crisis. It was important to involve children, together with their parents, in state ceremonies, explaining to them, on the scale of a child's perception, why they participate and what is being celebrated. Cultivating traditions, customs and rituals also served educational purposes. In this way an interest in history was developed, the roots of the native culture were emphasized, and a sense of pride in its achievements was fostered. It is worth noting that many of these customs, despite their pagan origins, were religious in character. 35 In the PSL "Piast" program of 1926, the family was recognized "as the basic unit of social life on which the health of the nation depends". A commitment was made to provide it with proper care from the state and society, in appreciation of its educational value. 36 A similar resolution was incorporated into the SL program of 1931, opposing all factors that could weaken family ties. The Piast supporters believed that teaching should be based on the principles of universality and compulsory schooling, within a free and unified system of education with compulsory religion classes. PSL "Piast" and Witos attached great importance to religious education.
Witos considered the Roman Catholic religion a source of morality and the foundation in matters of faith. However, he was always against infringing religious tolerance. At the 1926 Congress in Cracow, a new program of PSL "Piast" was presented. It was emphasized that the party supported the constitutionally protected religious freedom of all denominations, because "the masses of the Polish people are deeply attached to the Catholic religion". Religion defined the goals and values that a man must follow in order to be allowed to enter Paradise after death. Religion was thus an integrative link that attempted to find answers to all fundamental questions, both about temporal life in all its aspects and about death. It also exercised a degree of social control through its moral dimension. An important part of the 1920 educational program of the PSL "Piast", which Witos supported with all diligence, was the demand for adult education: "we will support the development of knowledge and organize all cultural and educational institutions (folk universities, educational societies, community centers, agricultural clubs etc.) bringing true knowledge to the people". 39 The party programs demonstrated the need to undertake various initiatives for the broad development of social life and for promoting the physical, moral and health culture of rural areas, such as setting up folk libraries, free reading rooms to promote reading, folk high schools, exhibitions, agricultural competitions, choirs, bands, sports societies, orphanages, medical clinics, and homes for the elderly and the incurably sick. 40 Witos and the Piast advocates attached great importance to the growth of cooperative societies of manufacturers and tradesmen. Their significance and role, in both economic and educational terms, could not be overestimated. They were perceived as institutions of self-reliance and of organizational and cooperative work.
In its program of November 1921, PSL "Piast" proposed the organization of vocational education, especially agricultural schools, and also called for the organization of vocational courses and general education at public schools. In addition to the educational content repeated from 1919, a new postulate was the idea of the school of work at all levels of education. The party demanded the creation of a legal basis enabling talented young people to pursue higher education without formal certification. It also demanded that the program of the last three classes of the seven-year common school be aligned with the program of the first three classes of secondary school. The Piasts, however, showed no consistency in striving for full uniformity of education at the basic level and the democratization of the whole school system. They emphasized that every citizen had the right to education of all degrees according to his aptitudes, while agreeing at the same time to the temporary preservation of a multi-track character at the primary-school level. They accepted, among other things, the maintenance of middle schools and of basic education at a junior-high-school level in seven-grade general schools enabling peasant children to continue at higher levels of the education system, at a time when the most common types of school in Poland, predominant in rural areas, were single-class schools (realizing a four-grade program) and two-class schools (mostly based on a five-grade program). 41 Witos devoted special attention to the teaching staff. The profession of a teacher was of great importance for social life; Witos therefore opted for respect, protection and the provision of the necessary privileges. He announced the necessary care for a high level of teacher education, and he defended teachers' professional interests and remuneration so that they could find fulfillment in this profession without sacrifices and renunciations.
The material situation of teachers was difficult, mainly because of low wages. Witos declared in 1914 that care would be taken of them: "particular care should be given to the layer of teachers, as the educator of the folk masses, whose fortune is organically connected with the fortune of the folk." 42 Its material rank was supposed to correspond to the social rank of the profession. Witos demanded the de-bureaucratization of education by reducing the number of curators, clerks in school boards, visitors and inspectors, and by the complete removal of deputy inspectors. The money saved in this way was intended for the development of general and vocational education. Witos pointed to the social attitude and personality of the teacher, who in his opinion played a key role in the education process. He appealed against hiring unqualified teachers in schools. At every level of teaching, teachers should have the highest qualifications and the same pay, and be covered by a uniform employment law, regardless of whether they taught at a common, secondary or higher school. He emphasized that "the school must educate and bring up the new Polish generation; we will try to ensure the existence of teachers, because it is up to us to educate our people." 43 The specific demands of civic education, especially in the area of fostering human social utility, imposed particular obligations on the school and other educational establishments, as well as on education and care institutions, i.e., on teachers and institutional pedagogues, as those responsible for shaping the pro-social attitudes of young people. Teachers were supposed to have the proper knowledge and skills for forming civic personalities: people knowingly exercising their rights and performing their duties, active and committed in their communities. Witos thought that this was their great duty and the basic task of the education system.
After the coup of Józef Piłsudski in 1926, the situation of teachers became very difficult, not only for material reasons but primarily for political ones. In the spring of 1928, a presidential decree was issued subordinating school authorities to the authorities of the political administration. Teachers were transferred to remote areas not only for personal reasons but clearly for political ones. Transfers were imposed for activity in opposition parties, as well as in provincial and municipal councils. 44 Such a policy of the authorities was unfavorable not only for education but primarily for the state. Witos and PSL "Piast", together with other parties, among them PSL "Wyzwolenie", postulated the amendment of the presidential decree and the restoration of the principle of the independence of education from the state administration. Witos paid attention to the working conditions of teachers, and to those in which children and young people gained knowledge. Among other things, he watched over the state-budget financing of the expansion and renovation of schools. Unfortunately, the deepening economic crisis in Poland caused a reduction of the money for the construction of schools, boarding houses, dormitories and apartments for teachers. The party criticized the educational policy of the ruling camp. The 1928 budget allocated 5 million PLN for the construction of common schools, while 26 million PLN was allocated for the maintenance of religious denominations. The disproportion was therefore blatant. Meanwhile, such costs would have repaid themselves very quickly, because they would have influenced the development of society and the improvement of its qualifications. Witos understood the process of acquiring knowledge to proceed, on the one hand, through the successive stages of formal education and, on the other, in an extramural way, i.e.
through work for self-government and in various social and cultural organizations, agricultural circles and cooperatives. Witos attributed a particular role to socio-economic and self-education courses. He thus promoted the idea of exploiting all possibilities of self-development and of working through various institutions, associations and organizations, since even in the thirties illiteracy had not been fully eliminated. An argument for the importance of learning was that through knowledge people can live in a community, be independent and skillfully use the help they receive from others.

Final conclusions

The analysis of the political thought of Wincenty Witos leads to the conclusion that education was an important category to which he often referred. Witos concentrated on certain social problems of rural areas, with particular emphasis on education and the situation of the rural population. The intent here was also to show the living conditions and the difficulties faced by people living in rural areas. Within a wide range of issues, Wincenty Witos devoted much attention to education. The most conscious part of the rural population understood that one of the causes of the social impairment of the peasants was their limited access to education. The knowledge gained in this way was considered an investment of the nation in its future: "this education will determine Poland's position among other countries and nations. […] education is the best way to equalize life chances". Witos directed his appeal especially to the peasants, pointing to knowledge as an indispensable factor enabling them to attain the position of co-host in the state. These concepts undoubtedly widened the agrarian conception of the role of education in building the future of Poland, in its economic and social aspects as well as in the improvement and modernization of humankind.
The education process played an important role in shaping the aspirations of individuals and of society as a whole. Educational actions led to the formation of a citizen with a deep sense of patriotism, a conscious and active participant in collective life, capable of fulfilling personal, family and work responsibilities and of taking responsibility for the fate of the state. In the opinion of Witos, these qualities translated into the organization of individuals' lives and the entrepreneurial attitudes of the whole society. Education was supposed to prepare people to adapt to changing circumstances, so that they could reasonably influence the structures that would be subject to transformation in the future. It was to awaken and maintain a strong relationship with the homeland, an attachment to land, language, culture and customs. Equalizing educational opportunities, as one of the greatest social problems of the village, was associated with minimizing social exclusion among the widest social groups.

• Abstract: The article concerns certain social problems of rural areas, with particular emphasis on education and the situation of the rural population. The intent was also to show the living conditions and the difficulties faced by inhabitants of rural areas. Within a wide range of issues, Wincenty Witos devoted much attention to education. The most conscious part of the rural population understood that one of the causes of the social impairment of the peasants was their limited access to education. Therefore, the programs of peasant political parties loudly voiced demands for the democratization of education, free schooling, and support for indigent young people who wanted to study. Poland had received a legacy of a large number of illiterates; according to the 1921 census, illiteracy concerned about 38% of the rural population. The sources of the political thought of Witos centered around several reasons. First, this was due to the situation that the peasants and rural Poland were in.
Second, education played a significant role in socio-political transformations. It provided people with a chance for social advancement and sparked national consciousness and a sense of nationality. With the democratization of education, the peasants could consciously participate in the life of the state and educate the young generation in the spirit of patriotism and citizenship. Witos considered education and culture the basis of the strength of the state and the welfare of its citizens. They enabled an impact on the political affairs of the country and the fulfillment of obligations to the homeland.

Keywords: political thought, Wincenty Witos, rural areas, peasant movements.
Key residues of the receptor binding motif in the spike protein of SARS-CoV-2 that interact with ACE2 and neutralizing antibodies

Coronavirus disease 2019 (COVID-19), caused by the novel human coronavirus SARS-CoV-2, is currently a major threat to public health worldwide. The viral spike protein binds the host receptor angiotensin-converting enzyme 2 (ACE2) via the receptor-binding domain (RBD), and thus is believed to be a major target for blocking viral entry. Both SARS-CoV-2 and SARS-CoV share this mechanism. Here we functionally analyzed the key amino acid residues located within the receptor-binding motif of the RBD that may interact with human ACE2 and available neutralizing antibodies. The in vivo experiments showed that immunization with either the SARS-CoV RBD or the SARS-CoV-2 RBD was able to induce strong clade-specific neutralizing antibodies in mice; however, the cross-neutralizing activity was much weaker, indicating that there are distinct antigenic features in the RBDs of the two viruses. This finding was confirmed with the available neutralizing monoclonal antibodies against SARS-CoV or SARS-CoV-2. It is worth noting that a newly developed SARS-CoV-2 human antibody, HA001, was able to neutralize SARS-CoV-2 but failed to recognize SARS-CoV. Moreover, the potential epitope residues of HA001 were identified as A475 and F486 in the SARS-CoV-2 RBD, representing new binding sites for neutralizing antibodies. Overall, our study has revealed the presence of different key epitopes between SARS-CoV and SARS-CoV-2, which indicates the necessity of developing new prophylactic vaccines and antibody drugs for specific control of the COVID-19 pandemic, although the available agents obtained from SARS-CoV studies should not be neglected.

INTRODUCTION

Coronavirus disease 2019 (COVID-19) is a respiratory tract infection caused by a newly emergent coronavirus, SARS-CoV-2, which was first recognized in December 2019. Globally, as of 2:00 a.m.
CEST, 25 April 2020, there have been 2,724,809 confirmed cases of COVID-19, including 187,847 deaths, reported to the World Health Organization (https://covid19.who.int/). Genetic sequencing of the virus suggests that SARS-CoV-2 is a betacoronavirus closely linked to SARS-CoV. 1,2 The research on SARS-CoV provided useful information that may be directly used in the battle against SARS-CoV-2, but the novel coronavirus also has different characteristics in some respects, which need more in-depth study. Many groups have shown that SARS-CoV-2 utilizes the homotrimeric spike (S) glycoprotein to bind to the functional receptor human ACE2 (hACE2); this mechanism for viral entry is also used by SARS-CoV. 3,4 The RBD in the S protein mediates the binding of the virus to host cells, which is a critical step for the virus to enter target cells. According to the high-resolution crystal structure information acquired thus far, 5-7 the receptor-binding motif (RBM) is the main functional motif in the RBD and is composed of two regions (region 1 and region 2) that form the interface between the S protein and hACE2. 8 The region outside the RBM also plays an important role in maintaining the structural stability of the RBD. 9 According to amino acid alignment studies, the sequence identity of the RBDs shared by SARS-CoV and SARS-CoV-2 is 73.5%. 10 However, the identity of the RBM, the most variable region of the RBD, is only 47.8%. Although the amino acid identity in the RBM region is low, the binding mechanism is similar for the two viruses. [5][6][7][11][12][13] The conservation of amino acid sequences suggests that the RBDs of the two viruses may elicit cross-reactive antibodies, which may have the potential for cross-protection. It is currently unclear whether the variable RBMs of the two viruses can induce cross-reactive antibodies.
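Percent-identity figures like the 73.5% (RBD) and 47.8% (RBM) cited above come from pairwise alignments. As a minimal sketch of that computation, assuming an alignment is already in hand (the two sequences below are short hypothetical stand-ins, not the actual RBD sequences, and definitions of percent identity vary in how gaps are counted):

```python
def percent_identity(aln_a: str, aln_b: str) -> float:
    """Percent identity over two aligned, gap-containing sequences of equal length.

    Denominator: all columns except gap-gap columns (one common convention).
    """
    assert len(aln_a) == len(aln_b), "aligned sequences must have equal length"
    pairs = list(zip(aln_a, aln_b))
    # Columns where at least one sequence has a residue.
    compared = sum(1 for a, b in pairs if not (a == "-" and b == "-"))
    # Columns where both sequences carry the same residue.
    matches = sum(1 for a, b in pairs if a == b and a != "-")
    return 100.0 * matches / compared

# Hypothetical aligned fragments, for illustration only.
print(percent_identity("NITNLCPFG-", "NITKLCPWGE"))  # 7 matches / 10 columns -> 70.0
```

Real RBD/RBM comparisons would first require a global alignment (e.g., Needleman-Wunsch) of the two domain sequences before applying this column count.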
In this study, we compared the affinities of the SARS-CoV-2 and SARS-CoV RBDs for hACE2 and explored the possibility of cross-protection by antibodies targeting these RBDs. By creating single amino acid substitution mutations in the SARS-CoV and SARS-CoV-2 RBM sequences, we demonstrated that SARS-CoV-2 has two types of amino acid residues that maintain its binding activity with hACE2: receptor binding was enhanced by introducing amino acid changes at P499, Q493, F486, A475 and L455, and receptor binding was diminished by replacing residues N501, Q498, E484, T470, K452 and R439. An animal immunization study revealed that the RBDs of SARS-CoV and SARS-CoV-2 are potential antigens that induce strong clade-specific neutralizing antibodies in mice, while the cross-neutralizing effect is much weaker. This finding was due to differences in the antigenicity of the RBDs of the two viruses, which was carefully verified with the available neutralizing monoclonal antibodies (mAbs) against SARS-CoV and/or SARS-CoV-2. Finally, the potential epitope of HA001, a newly developed SARS-CoV-2 receptor-blocking human antibody, was found to involve amino acids A475 and F486 in the SARS-CoV-2 RBD, which are newly discovered binding sites for neutralizing antibodies. Overall, our study has revealed the presence of different key epitopes between SARS-CoV and SARS-CoV-2, which indicates the necessity of developing new prophylactic vaccines and antibody drugs for specific control of the COVID-19 pandemic, although the available agents obtained from SARS-CoV studies should not be neglected.

MATERIALS AND METHODS

Cells, plasmids and antibodies

HEK293T cells (ATCC CRL-3216) were cultured at 37°C with 5% CO2 in Dulbecco's modified Eagle's medium (DMEM; Invitrogen) supplemented with 100 U penicillin per ml, 100 μg streptomycin per ml, and 10% fetal calf serum.
ExpiCHO-S mammalian cells (Invitrogen) were cultured in ExpiCHO Expression Medium (Invitrogen) supplemented with 1% penicillin-streptomycin at 37°C with 8% CO2. The 80R, m396, S230, CR3022 and N-176-15 VH and VL sequences were synthesized (GenScript) and cloned into a human IgG1 scaffold. HA001, an antibody targeting the SARS-CoV-2 RBD generated by phage display, was provided by Shanghai Sanyou Biopharmaceuticals. Plasmids encoding the full-length SARS-CoV S protein, SARS-CoV-2 S protein and human ACE2 were purchased from Sino Biological. The RBDs, encompassing residues 306-527 of the SARS-CoV S protein and residues 318-541 of the SARS-CoV-2 S protein, were cloned into the pcDNA3.1 mammalian expression vector with an immunoglobulin (Ig) heavy chain (H) signal peptide at the N-terminus and a human IgG1 Fc tag at the C-terminus.

Single amino acid substitution mutagenesis of the RBDs
The DNA sequences encoding the RBD of SARS-CoV or SARS-CoV-2 were fused in frame with an N-terminal human IgE signal peptide and a C-terminal 6×His tag and cloned into the pBudCE4.1 vector (Invitrogen). The residues selected for mutagenesis were based on an amino acid sequence alignment and the structural information of the RBD-ACE2 binding interface. Single amino acid substitution mutagenesis was performed with a commercial KOD-Plus mutagenesis kit (TOYOBO). All mutations were verified by DNA sequence analysis (Biosune). To express the wild-type and mutant RBDs, ExpiCHO-S cells were plated in six-well plates and transiently transfected with these plasmids. The supernatants were harvested 96 h after transfection.

Expression and purification of the RBD-hFc and mAbs
The mAbs and RBD-hFc were produced by transient transfection of ExpiCHO-S cells (Invitrogen). The supernatants from RBD-hFc-transfected cells were collected after 4 days and from mAb-transfected cells after 7 days.
The supernatants were affinity purified by protein G chromatography (GE Healthcare) and dialysed against PBS overnight at 4°C.

Biolayer interferometry analysis of the SARS-CoV and SARS-CoV-2 RBD binding affinity for hACE2
Biolayer interferometry was performed using an Octet RED96 instrument (ForteBio, Inc.). SARS-CoV and SARS-CoV-2 RBD-hFc at 5 μg/ml was immobilized on an anti-human IgG-Fc (AHC)-coated biosensor surface for 300 s. The baseline interference phase was recorded for 60 s in kinetics buffer (KB: 1× PBS with 0.02% Tween-20), after which the sensors were immersed for a 400-s association phase in wells containing recombinant hACE2 diluted in KB. The sensors were then immersed in KB for up to 400 s for the dissociation step. The mean Kon, Koff and apparent KD values of the SARS-CoV and SARS-CoV-2 RBDs binding to ACE2 were calculated from all binding curves by global fitting to a 1:1 Langmuir binding model with an R² value of ≥0.95.

Pseudo-typed virus infection assay
SARS-CoV and SARS-CoV-2 pseudo-typed viruses were produced as previously described. 14 Briefly, plasmids encoding the full-length S protein and pNL4-3.luc.RE were cotransfected into 293T cells in 10-cm dishes. The supernatants were harvested 48 h after transfection, diluted in complete DMEM, mixed with or without an equal volume (50 μl) of diluted serum or antibody, and incubated at 37°C for 1 h. The mixtures were transferred to HEK293T cells stably expressing human ACE2. The cells were incubated at 37°C for 48 h, lysed with passive lysis buffer and tested for luciferase activity (Promega, USA). The percent neutralization was calculated by comparing the luciferase value of the antibody or serum group to that of the virus-only control.
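The percent-neutralization calculation described above reduces to a simple ratio against the virus-only control. A minimal sketch, with hypothetical relative-light-unit (RLU) values rather than the paper's data:

```python
def percent_neutralization(sample_rlu, virus_only_rlu):
    """Neutralization (%) from luciferase readings, comparing an
    antibody- or serum-treated well to the virus-only control."""
    return (1.0 - sample_rlu / virus_only_rlu) * 100.0

# Hypothetical example: a serum-treated well reading 12,000 RLU against
# a 120,000 RLU virus-only control corresponds to 90% neutralization.
print(percent_neutralization(12_000, 120_000))
```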
Syncytia formation assay
In brief, HEK293T cells (~60-70% confluent in six-well plates) were transfected with Lipofectamine 2000 with plasmids encoding a codon-optimized full-length form of the SARS-CoV or SARS-CoV-2 S protein or a control plasmid. In parallel, another group of HEK293T cells was transfected with a plasmid encoding hACE2. Twenty-four hours after transfection, the two groups of cells were trypsinized, mixed at a 1:1 ratio and plated in 24-well plates. After 48 h of coculture, multinucleated giant cells were observed. Images were collected and analysed with an Olympus IX53 confocal microscope. 15

Animal immunization
Groups of five age- and weight-matched male C57BL/6 mice were immunized intramuscularly with 25 μg recombinant SARS-CoV RBD-hFc, SARS-CoV-2 RBD-hFc or hIgG (as a control) in the presence of the adjuvant QuickAntibody (BioDragon). A booster was subsequently administered at a 3-week interval in all cases. The animals were sacrificed 14 days after the second immunization, and serum was collected. For neutralization assays, the serum samples were heat inactivated at 56°C for 30 min.

Enzyme-linked immunosorbent assay (ELISA)
To confirm whether the antibodies recognized the SARS-CoV RBD or SARS-CoV-2 RBD, 96-well microwell plates (Nunc) were coated with 50 ng/well recombinant SARS-CoV RBD-hFc or SARS-CoV-2 RBD-hFc in 0.1 M sodium carbonate-bicarbonate buffer (pH 9.6) and incubated overnight at 4°C. After blocking at 37°C for 2 h with 2% bovine serum albumin in PBS, the ELISA plates were washed, and diluted antibodies were added for a 2-h incubation. HRP-conjugated goat anti-human Fc or HRP-conjugated goat anti-mouse Fc antibody (Sigma) was used to detect the bound antibodies. To determine the residues that contribute to the binding of the SARS-CoV and SARS-CoV-2 RBDs to ACE2 or to neutralizing antibodies, the concentration of RBD mutants in the culture supernatant was first measured by sandwich enzyme-linked immunosorbent assay.
Specifically, CR3022 and R007 (Sino Biological), which cross-react with both the SARS-CoV and SARS-CoV-2 RBDs, were used to coat the plates, and cell supernatant diluted 50-fold or 500-fold was then added and captured by the coated mAbs. Serially diluted purified SARS-CoV and SARS-CoV-2 RBDs (2-fold dilutions from an initial 200 ng/ml) were used as standards. Subsequently, the bound antigen was detected with an HRP-conjugated mouse anti-His mAb. The concentration of the RBD mutants was determined from the standard curve. Another ELISA was then performed to analyse the relative binding activity of these RBD mutants for ACE2 and the mAbs. The RBD mutants (100 and 200 ng/ml) were incubated on plates pre-coated with 500 ng mAbs or ACE2. After 2 h of incubation at 37°C, the binding of the RBD mutants to the mAbs or ACE2 was detected with an HRP-conjugated mouse anti-His mAb. The binding signals of the mutants were compared to those of the wild-type proteins.

Receptor blocking assay
To investigate the ability of the mAbs and sera from immunized mice to block spike protein binding to ACE2, serially diluted mAbs (3-fold dilutions from an initial 30 µg/ml) and sera (3-fold dilutions from an initial 1:10) were added to plates pre-coated with 100 ng/well of recombinant SARS-CoV RBD-His or SARS-CoV-2 RBD-His (Sino Biological) and incubated for 1 h at 37°C. Then, 150 ng/ml biotin-labelled recombinant hACE2-His (Novoprotein) expressed in 293T cells was added to the plates. After 2 h of incubation at 37°C, the wells were washed and developed with HRP-conjugated streptavidin (R&D Systems). The plates were incubated for 1 h, followed by the addition of TMB substrate. The percentage of receptor blocking was calculated from the reduction in S binding to ACE2 relative to that in the absence of serum. The fifty percent inhibitory concentration [IC50 (micrograms per millilitre)] was used as the inhibition value.
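An IC50 from a 3-fold dilution series like the one above is read off the dose-response curve; Prism's sigmoidal fit would normally be used, but the sketch below shows the underlying idea with a simple log-linear interpolation between the two dilutions that bracket 50% inhibition. The inhibition values are invented for illustration.

```python
import math

def ic50_interpolated(concs_ug_ml, inhibition_pct):
    """Estimate the 50% inhibitory concentration by interpolating on a
    log10 concentration scale. Expects concentrations in descending
    order with matched inhibition percentages."""
    for i in range(len(inhibition_pct) - 1):
        hi, lo = inhibition_pct[i], inhibition_pct[i + 1]
        if hi >= 50.0 >= lo:
            frac = (hi - 50.0) / (hi - lo)  # position of the 50% crossing
            log_hi = math.log10(concs_ug_ml[i])
            log_lo = math.log10(concs_ug_ml[i + 1])
            return 10 ** (log_hi + frac * (log_lo - log_hi))
    raise ValueError("50% inhibition is not bracketed by the series")

# 3-fold dilutions from an initial 30 µg/ml, as in the blocking assay;
# the inhibition percentages are hypothetical.
concs = [30.0, 10.0, 10.0 / 3, 10.0 / 9, 10.0 / 27]
inhibition = [95.0, 88.0, 60.0, 35.0, 12.0]
print(ic50_interpolated(concs, inhibition))
```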
Structure analysis
Local minimization was carried out with Prime after mutating Q498 to Y to simulate the conformational change within 5 Å of the amino acids around Q498 (PDB ID: 6LZG). The minimization shows that the aromatic ring of Y498 can form π-π stacking interactions with Y41 in hACE2, which enhances RBD binding to hACE2. Structural figures were generated using PyMOL (The PyMOL Molecular Graphics System, Version 2.0, Schrödinger, LLC).

Statistical analyses
All statistical analyses were performed using GraphPad Prism 6 software. The P values shown in the figures and figure legends were determined using unpaired two-tailed Student's t-tests (*P < 0.05, **P < 0.01, ***P < 0.001; NS, not significant).

RESULTS

Both the SARS-CoV and SARS-CoV-2 RBDs bind to hACE2 for virus entry
To confirm that the infectivity of SARS-CoV and SARS-CoV-2 is dependent on hACE2, we constructed pseudo-typed SARS-CoV and SARS-CoV-2 by cotransfection of a plasmid encoding Env-defective, luciferase-expressing HIV-1 (pNL4-3.luc.RE) and a plasmid expressing the full-length S protein of SARS-CoV or SARS-CoV-2 into HEK293T cells. HEK293T cells expressing or not expressing hACE2 were treated with the pseudo-typed virus-containing supernatants. The pseudo-typed SARS-CoV and SARS-CoV-2 showed much higher infectivity in HEK293T cells expressing hACE2 than in those not expressing hACE2, whereas there was no significant difference in pseudo-typed VSVG infectivity in HEK293T cells with or without hACE2 (Fig. 1a). These results indicate that hACE2 is a receptor used by both SARS-CoV and SARS-CoV-2 to enter cells. Because syncytium formation has been observed in cultured Vero E6 cells infected with SARS-CoV, 16 we also sought to determine whether HEK293T cells expressing the SARS-CoV-2 S protein could fuse with HEK293T cells expressing hACE2.
As expected, HEK293T cells transfected with hACE2 formed many syncytia with cells expressing the SARS-CoV S protein, and HEK293T cells expressing the SARS-CoV-2 S protein also efficiently formed syncytia with hACE2-transfected cells. In contrast, cells expressing hACE2 alone, or the S protein of SARS-CoV or SARS-CoV-2 alone, did not form syncytia (Fig. 1b). As the RBD is the key region for SARS-CoV S-hACE2 recognition, we investigated the binding affinity of hACE2 for the S protein through biolayer interferometry (BLI) and enzyme-linked immunosorbent assay (ELISA). The biotin-conjugated hACE2 protein was captured by streptavidin immobilized on a chip and tested for binding with gradient concentrations of soluble RBD from SARS-CoV and SARS-CoV-2. The equilibrium dissociation constant (KD) of SARS-CoV-2 RBD binding to hACE2 was calculated to be 5.09 nM, which is comparable to that of the SARS-CoV RBD (1.46 nM) 6 (Fig. 1d). Similar data were obtained by ELISA (Fig. 1c). Taken together, these results confirm that both SARS-CoV and SARS-CoV-2 use the RBD to bind to hACE2 for virus entry.

Distinct immunogenicity of the SARS-CoV RBD and SARS-CoV-2 RBD
Since sequence alignment indicated high conservation between the SARS-CoV and SARS-CoV-2 RBDs, with a shared identity of 76%, we sought to determine whether the RBDs induce cross-reactive immune responses that could confer protection against both viruses. To address this question, we conducted an in vivo immunization experiment to determine the cross-reactivity of the antibodies induced by the SARS-CoV RBD and SARS-CoV-2 RBD (Fig. 2a). First, we performed ELISAs to detect the cross-reactivity of sera from mice immunized with recombinant SARS-CoV-2 RBD or SARS-CoV RBD. The results showed that the SARS-CoV RBD antiserum reacted strongly with the SARS-CoV RBD, with a mean antibody titre of 1.701 × 10^4, but showed a lower titre against the SARS-CoV-2 RBD (mean antibody titre: 3.1 × 10^2).
Similarly, the SARS-CoV-2 RBD antiserum reacted strongly with the SARS-CoV-2 RBD, with a high antibody titre (7.53 × 10^4), but showed a lower titre against the SARS-CoV RBD (2.19 × 10^3) (Fig. 2b). Correspondingly, we investigated the efficiency with which the antisera cross-blocked the interaction between hACE2 and the S protein RBD. The results revealed that the SARS-CoV RBD antiserum strongly blocked the interaction between hACE2 and the SARS-CoV RBD, with a mean 50% blocking antiserum titre (BT50) of 1.68 × 10^3, but had a very low BT50 against the SARS-CoV-2 RBD (BT50: 73.6). Similarly, the SARS-CoV-2 RBD antiserum had a higher 50% blocking antiserum titre (BT50: 3.12 × 10^3) for the SARS-CoV-2 RBD-hACE2 interaction but much lower cross-blocking efficiency for the SARS-CoV RBD-hACE2 interaction (BT50: 1.65 × 10^2) (Fig. 2c). To confirm this observation, a pseudo-typed virus neutralization assay was performed to evaluate the cross-neutralization efficacy of the antisera from SARS-CoV RBD- or SARS-CoV-2 RBD-immunized mice. In agreement with the blocking assay results, the neutralization activity of the mouse antisera was almost completely clade-specific, with very low cross-neutralization levels. The 50% neutralization antiserum titre (NT50) of the SARS-CoV-2 antiserum against the SARS-CoV-2 pseudo-typed virus was calculated to be 1.49 × 10^4, and that of the SARS-CoV antiserum against the SARS-CoV pseudo-typed virus was 8.15 × 10^3 (Fig. 2d). Therefore, these results suggest that the SARS-CoV RBD and SARS-CoV-2 RBD have distinct immunogenicity, explaining the limited cross-protecting antibodies elicited by the two viruses.

Identification of key residues in the SARS-CoV and SARS-CoV-2 RBDs that determine receptor-binding levels
It is well known that the interface between the S-RBD and ACE2 plays a crucial role in their binding activity, with the RBM regions considered particularly important.
5 However, the amino acids of the RBM are quite different, with a shared identity of only 47.8%. It was not clear which mutated residues within the RBM would alter its affinity for hACE2; thus, a functional analysis was needed. Based on the sequence alignment (Fig. 3a), 19 residues were selected, and single amino acid substitutions were introduced to mutate the SARS-CoV-2 and SARS-CoV RBM regions. The results showed that in some mutants the binding affinity of the SARS-CoV-2 RBD for hACE2 was decreased, whereas in others it was increased or unaffected (Fig. 3b). According to the high-resolution crystal structure information acquired thus far, 5-7,11,12 the receptor-binding motif (RBM) is composed of two regions (region 1 and region 2) that form the interface of the S protein with hACE2, and we focused on the residues within either region 1 or region 2 (Fig. 3c). 17 Interestingly, after 9 amino acid residues were mutated in SARS-CoV-2 to their SARS-CoV counterparts (L455/Y442, F456/L443, S459/G46, Q474/S461, A475/P462, F486/L472, F490/W476, Q493/N479 and P499/T485), their binding affinity for hACE2 was abolished, in contrast to that of the WT virus (Fig. 3b, c), indicating that these residues are very important for the binding of SARS-CoV-2 to hACE2. It is worth noting that 6 of the 9 residues, L455, F456, A475, F486, F490 and Q493, have previously been reported to be SARS-CoV-2 RBD-hACE2 interacting residues based on structure analysis. 5-7 According to structure analysis, L455 and Q493 of the SARS-CoV-2 RBD have favourable interactions with hACE2 K31 and E35; upon binding, the salt bridge between the two hACE2 residues breaks, and each of the residues forms a hydrogen bond with Q493 in the SARS-CoV-2 RBM, thus enhancing the binding to hACE2 6 (Fig. 3c), a finding in agreement with our mutagenesis results (Fig. 3b). The introduction of F486 in the SARS-CoV-2 RBD enhanced the hACE2 binding affinity by creating a hydrophobic pocket involving M82 and Y83 in hACE2 6 (Fig. 3c).

Fig. 1 Both the SARS-CoV-2 RBD and SARS-CoV RBD bind to hACE2. a Receptor-dependent infection of SARS-CoV-2 and SARS-CoV pseudo-typed virus entry into hACE2+ 293T cells. 293T cells stably expressing hACE2 were infected with SARS-CoV-2 or SARS-CoV pseudo-typed viruses, and the cells were harvested to detect luciferase activity. Fold changes were calculated by comparison to the levels in uninfected cells. VSV pseudo-typed viruses were included as controls. b Syncytia formation between S protein- and hACE2-expressing cells. 293T cells transfected with the hACE2 plasmid were mixed at a 1:1 ratio with 293T cells transfected with a plasmid encoding the S protein of SARS-CoV-2 (bottom left) or SARS-CoV (bottom right). As controls, 293T cells transfected with an empty plasmid were mixed at a 1:1 ratio with 293T cells transfected with the hACE2 plasmid (top row), or with the S protein of SARS-CoV-2 (middle left) or SARS-CoV (middle right). Images were photographed at ×20 magnification. Representative images are shown. c Dose-dependent binding of the SARS-CoV-2 RBD to soluble hACE2 as determined by ELISA. The binding of both the SARS-CoV-2 RBD and SARS-CoV RBD with an Fc tag to hACE2 was tested. Human Fc was included as a control. Data are presented as the mean OD450 ± s.e.m. (n = 2). d Binding profiles of the SARS-CoV-2 RBD and SARS-CoV RBD to the soluble hACE2 receptor measured by biolayer interferometry on an Octet RED96 instrument. The biotin-conjugated hACE2 protein was captured by streptavidin immobilized on a chip and tested for binding with gradient concentrations of the soluble RBDs of the S proteins of SARS-CoV and SARS-CoV-2. Binding kinetics were evaluated using a 1:1 Langmuir binding model with ForteBio Data Analysis 9.0 software.
In addition, we report here, for the first time, that the other three SARS-CoV-2 substitution mutants, S459/G443, Q474/S461 and P499/T485, which do not directly contact hACE2, also reduced the binding affinity. These three SARS-CoV-2 residues may strengthen the structure and stabilize the hACE2-SARS-CoV-2 binding interface. In contrast, we also identified 6 substitution mutants in the SARS-CoV-2 RBD, N439/R426, L452/K439, T470/N457, E484/P470, Q498/Y484 and N501/T487, that enhanced the binding affinity, providing clues for monitoring increased infectivity arising from natural RBD mutations during transmission of the virus (Fig. 3b, d). We speculated that these residues may be critical for SARS-CoV RBD binding to hACE2. As expected, when SARS-CoV R426, K439, N457, P470, Y484 and T487 were replaced with the corresponding residues of the SARS-CoV-2 RBD, the binding activity of the SARS-CoV RBD was dramatically decreased compared with that of the WT RBD (Fig. 3e). Previous structural studies identified R426, Y484 and T487 as key residues for SARS-CoV RBD binding to hACE2, 4,18 which was confirmed by the data provided above (Fig. 3d). According to the structure analysis, replacement of SARS-CoV-2 RBD N439 with R426 increased the hACE2 binding affinity by introducing a strong salt bridge between R426 in the SARS-CoV-2 RBD and E329 in hACE2 (Fig. 3d). Molecular docking showed that substitution of SARS-CoV-2 RBD Q498 with Y484 formed π-π stacking interactions with Y41 in hACE2, and hydrogen bonds were also involved, which explains the enhanced hACE2 binding (Fig. 3f). Moreover, according to the structural data, both SARS-CoV-2 RBD N501 and SARS-CoV RBD T487 have similar interactions with Y41 and K353 in hACE2, but replacement of SARS-CoV-2 RBD N501 with T487 significantly enhanced binding to ACE2. The enhanced binding activity of the SARS-CoV-2 RBD mutant N501/T487 may be due to the increased support provided by this residue to stabilize the overall structure of the RBD or to strengthen the network of hydrophilic interactions (Fig. 3d).

Fig. 2 The antibody response induced by recombinant RBDs of SARS-CoV and SARS-CoV-2 in mice. a Schematic of the vaccine regimen. Five C57BL/6 mice per group were immunized twice (2-3 weeks apart) intramuscularly with 25 µg of the SARS-CoV-2 RBD-hFc or SARS-CoV RBD-hFc protein in combination with quick adjuvant. Mice immunized with hIgG instead of the RBD protein were included as controls. Mice were sacrificed on day 35 after immunization, and antisera were collected for subsequent tests. b Cross-reactivity of SARS-CoV-2 RBD- or SARS-CoV RBD-specific mouse sera against the SARS-CoV RBD or SARS-CoV-2 RBD as determined by ELISA. Mouse antisera were serially diluted three-fold and tested for binding to the SARS-CoV RBD or SARS-CoV-2 RBD. The IgG antibody (Ab) titres of SARS-CoV-2 antisera (red), SARS-CoV antisera (blue) and control antisera (black) were calculated as the endpoint dilution that remained positively detectable for the SARS-CoV-2 RBD or SARS-CoV RBD. The data are presented as the mean A450 ± s.e.m. (n = 5). c Cross-competition of SARS-CoV-2 RBD- or SARS-CoV RBD-specific mouse sera with hACE2 for the SARS-CoV RBD or SARS-CoV-2 RBD as determined by ELISA. The data are presented as the mean blocking (%) ± s.e.m. (n = 5). Fifty percent blocking antibody titres (BT50) against the SARS-CoV-2 or SARS-CoV pseudo-typed virus were calculated. d Cross-neutralization of SARS-CoV-2 RBD- or SARS-CoV RBD-specific mouse sera against SARS-CoV-2 or SARS-CoV pseudo-typed virus entry, measured by a pseudo-typed virus neutralization assay. The data are presented as the mean neutralization (%) ± s.e.m. (n = 5). Fifty percent neutralizing antibody titres (NT50) against the SARS-CoV-2 or SARS-CoV pseudo-typed virus were calculated.

Fig. 3 Single amino acid substitution mutagenesis of the SARS-CoV-2 RBD and SARS-CoV RBD. a Sequence differences between the SARS-CoV and SARS-CoV-2 RBDs. The RBM is in red. Previously identified critical ACE2-binding residues are shaded in green. Conserved residues are marked with asterisks (*), residues with similar properties between groups are marked with a colon (:) and residues with marginally similar properties are marked with a period (.). b ACE2 binding with reciprocal amino acid substitutions in the SARS-CoV-2 RBD. Each value is calculated as the binding relative to that of the WT (%). The mean ± s.e.m. of duplicate wells is shown for two independent experiments. The two red dotted lines represent 75% and 125% relative to the WT data, respectively. c, d Structural alignment of the SARS-CoV-2 RBD and SARS-CoV RBD binding to ACE2. The SARS-CoV RBD complex (PDB ID: 2AJF) is superimposed on the SARS-CoV-2 RBD (PDB ID: 6LZJ; grey: ACE2, wheat: SARS-CoV-2). Mutants that weaken SARS-CoV-2 RBD binding to ACE2 are highlighted in cyan (c). The corresponding residues from SARS-CoV are indicated in green and are illustrated in detail (c, left). Mutants that enhance ACE2 binding are highlighted in magenta (d). e ACE2 binding with reciprocal amino acid substitutions in the SARS-CoV RBD. Each value is calculated as the binding relative to that of the WT (%). The mean ± s.e.m. of duplicate wells is shown for two independent experiments. The two red dotted lines represent 75% and 125% relative to the WT data, respectively. f Molecular docking of the SARS-CoV-2 RBD carrying the Q498Y mutation in complex with hACE2. Q498Y forms π-π stacking with Y41 in hACE2: left, Y498; right, Q498.
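The 75%/125% dotted lines in the mutagenesis panels suggest a natural three-way binning of each substitution's effect on ACE2 binding relative to WT. A small helper sketching that classification; the cutoff interpretation is ours and the example value is hypothetical, not a measured data point:

```python
def classify_substitution(binding_relative_to_wt_pct):
    """Bin a single amino acid substitution by its ELISA binding signal
    relative to the wild-type RBD (%), using 75%/125% cutoffs."""
    if binding_relative_to_wt_pct < 75.0:
        return "diminished"
    if binding_relative_to_wt_pct > 125.0:
        return "enhanced"
    return "unchanged"

# Hypothetical readout: a mutant retaining only 20% of WT binding
print(classify_substitution(20.0))
```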
These results suggest that some residues critical for SARS-CoV RBD-ACE2 recognition, namely, R426, K439, N457, P470, Y484 and T487, differ at the corresponding positions in SARS-CoV-2. Conversely, the key residues for SARS-CoV-2 recognition, L455, A475, F486 and Q493, also differ at the corresponding positions in SARS-CoV. In summary, the overall receptor-binding modes of the SARS-CoV-2 and SARS-CoV RBDs are quite similar, but the detailed interaction patterns are substantially different, which might explain the distinct immunogenic features of the SARS-CoV-2 and SARS-CoV RBDs, which induce the production of clade-specific neutralizing Abs.

A panel of mAbs revealed limited cross-neutralization between SARS-CoV-2 and SARS-CoV
We then investigated the immunogenic characteristics of the SARS-CoV-2 RBD and SARS-CoV RBD using a panel of neutralizing mAbs against the SARS-CoV RBD, including 80R, S230, m396, CR3022 and N-176-15. 14,19-22 We also tested the human mAb HA001 against the SARS-CoV-2 RBD (Fig. 4a). All the antibodies, except for CR3022, showed only clade-specific binding activity (Fig. 4b, c). All the selected neutralizing antibodies disrupted only clade-specific RBD-hACE2 interactions (Fig. 4b, c). Further pseudo-typed virus assays confirmed that all the SARS-CoV RBD mAbs failed to neutralize SARS-CoV-2 and that HA001 did not neutralize SARS-CoV. As expected, the SARS-CoV RBD mAbs neutralized SARS-CoV with IC50 values ranging from 0.016 to 2.0 µg/ml, and HA001 neutralized SARS-CoV-2 with an IC50 value of 0.016 µg/ml (Fig. 4d). These data indicate that the tested mAbs targeting the SARS-CoV-2 RBD could not cross-protect against SARS-CoV, and vice versa, which may be due to the different RBD sequences of the two viruses. This result emphasizes the necessity of developing specific vaccines and antibodies against the SARS-CoV-2 S protein.
Substitution mutagenesis of the RBM to identify key residues for neutralizing antibody recognition
To evaluate the identified key residues for antibody recognition, a panel of neutralizing mAbs against the SARS-CoV RBD and one neutralizing mAb against the SARS-CoV-2 RBD (HA001) were selected (Fig. 4a). Each antibody showed only clade-specific binding activity (Fig. 4b). To investigate the key RBD residues for recognition by the clade-specific neutralizing antibodies, single amino acid substitution mutagenesis scanning was performed based on the reported antibody epitope positions and sequence changes within the RBM (Fig. 5a, b). In our mutagenesis assays, four SARS-CoV-2 RBD mutants, A475/P462, V483/P469, F486/L472 and S494/D480, failed to bind HA001 (Fig. 5b). Among these four amino acids, A475 and F486 were critical for the binding of the RBD to both HA001 and hACE2 (Fig. 5e). These observations demonstrate that HA001 neutralizes SARS-CoV-2 by competing for the same critical residues in the β5 loop of the RBD and thus blocking receptor binding. 5 We employed a similar strategy to test the binding affinity of the SARS-CoV mAbs for SARS-CoV RBD mutants. The data showed that the binding of 80R was significantly suppressed when SARS-CoV RBD residues D480 and Y484 were mutated to S494 and Q498, the corresponding amino acids in the SARS-CoV-2 RBD, whereas other sequence changes had no effect (Fig. 5b). Based on the crystal structure of the SARS-CoV RBD-80R complex, 19 D480 and Y484 were shown to play strategic roles in the interaction between 80R and the SARS-CoV RBD (Fig. 5c). Replacement of Y484 with Q weakened the interaction between the SARS-CoV RBD and 80R by eliminating the strong π-π stacking interactions with Y102 in the CDRH3 region of 80R.
Similarly, for m396, replacing SARS-CoV RBD residues Y484 and T487 with the Q498 and N501 residues of the SARS-CoV-2 RBD significantly reduced its binding to the SARS-CoV RBD compared with that of the other mutants (Fig. 5b). Based on crystal structure analysis of the SARS-CoV RBD-m396 complex, the m396 CDRH1 region contacts the hydrophobic residues Y484, T486 and T487 of the SARS-CoV RBD. Replacement of SARS-CoV RBD Y484 with hydrophilic Q disrupted the hydrophobic interaction with m396. SARS-CoV RBD T487 inserts into a hydrophobic pocket of m396, and replacement of this residue with SARS-CoV-2 RBD N501 may change the conformation of the hydrophobic pocket, thus weakening both SARS-CoV RBD-hACE2 binding and SARS-CoV RBD-m396 binding (Fig. 5d). In brief, we identified A475 and F486 in the SARS-CoV-2 RBD and Y484 and T487 in the SARS-CoV RBD as the key residues for recognition by both their common functional receptor hACE2 and neutralizing antibodies. Owing to the different immunogenicity of the RBMs of the two viruses, the neutralizing antibodies failed to show cross-reactivity. In addition, several mutations in the SARS-CoV RBD, mainly located in the hypervariable region A430-D463, moderately reduced its binding activity to S230 and N-176-15. All of these mutations may synergistically contribute to poor cross-reactivity by inducing conformational changes at the binding surface. We observed that K439 was important for both S230 and N-176-15 binding to the SARS-CoV RBD, and it is also a key residue for SARS-CoV RBD binding to hACE2. In conclusion, the variations in the RBMs, especially in the residues involved in ACE2 recognition, may underlie the failure of cross-neutralization by the antibodies targeting the RBDs.

DISCUSSION

The World Health Organization officially declared COVID-19 a pandemic on 11 March 2020. The pandemic has become increasingly serious worldwide.
With the deepening of research into SARS-CoV-2 and COVID-19, earlier optimistic speculation has gradually been replaced by expectations of a long-term fight against the virus. To control the pandemic, prophylactic vaccines and effective drugs are urgently required. According to published studies, SARS-CoV-2 and SARS-CoV utilize the same human receptor, ACE2, which was also confirmed in our study. Hence, the S protein, and especially its RBD, which is responsible for hACE2 binding, is the most promising target for the development of SARS-CoV-2 vaccines and antibody-based drugs. 23 Based on the newly disclosed structural information and our functional analysis, we examined the receptor recognition and antigenic features of SARS-CoV-2 and SARS-CoV. The crystal and cryo-EM structures of both the SARS-CoV-2- and SARS-CoV-hACE2 complexes revealed that the overall binding modes are quite similar, although the amino acids in the RBMs are quite different. However, how the variable parts of the RBMs of SARS-CoV-2 and SARS-CoV affect receptor recognition had not been well illustrated. In this study, we demonstrated that six single amino acid substitutions in the SARS-CoV-2 RBD, at N501, Q498, E484, T470, K452 and R439, resulted in the loss of favourable interactions with hACE2. We also demonstrated that five single amino acid substitutions in the SARS-CoV-2 RBD, at P499, Q493, F486, A475 and L455, enhanced SARS-CoV-2 RBD-hACE2 binding activity. These findings, together with the results of the other substitution mutations, confirmed hypotheses put forward in published structural studies. Our work provides evidence for the convergent evolution of the SARS-CoV-2 and SARS-CoV RBDs and offers a good example of a functional compensatory evolution mechanism.
24,25 Remarkably, our data indicated that six substitution mutations in the SARS-CoV-2 RBD, N439/R426, L452/K439, T470/N457, E484/P470, Q498/Y484 and N501/T487, led to enhanced binding affinity for hACE2, providing clues for monitoring the increased infectivity of natural S protein mutations during transmission of the virus. The difference in the RBM amino acid sequences raises a new question: are the protective antigenic sites in the RBD different between the two viruses? We found that the antigenic sites of the RBD are distinct in the two viruses. We tested a panel of neutralizing mAbs targeting the SARS-CoV RBD, and only one of these antibodies, CR3022, was able to recognize the SARS-CoV-2 RBD, but it had no neutralizing activity against SARS-CoV-2. Notably, the IC50 of CR3022 for neutralizing the SARS-CoV pseudo-typed virus was much higher than those of the other four antibodies tested. The latest report from Meng Yuan et al. 13 revealed the crystal structure of the SARS-CoV-2 RBD in complex with CR3022 and showed that the conserved epitope centred on the S protein does not overlap with the hACE2-binding RBM interface. We also tested a human neutralizing antibody against SARS-CoV-2, HA001 (purchased from Shanghai Sanyou Biopharma), which showed high binding affinity for and neutralizing activity against SARS-CoV-2 but no cross-reactivity with SARS-CoV. We identified Y484 of the SARS-CoV S protein as the key amino acid recognized by the SARS-CoV-specific mAbs m396 and 80R. This amino acid is also important for SARS-CoV S-hACE2 binding. Considering that Y484 in the SARS-CoV S protein corresponds to Q498 in SARS-CoV-2 S, this residue may be one of the key amino acids contributing to the antigenic variation. The putative epitope of HA001 was identified as two hACE2-contacting amino acids, A475 and F486, in the SARS-CoV-2 RBM region, which may be new sites for neutralizing antibody binding.
Using ELISAs, we also demonstrated that antisera from mice immunized with mammalian cell-expressed recombinant RBDs of SARS-CoV and SARS-CoV-2 showed high binding affinity for and neutralizing activity against the respective homologous virus, while the cross-binding and cross-neutralizing activity was much weaker. These results indicated that the RBD is a good immunogen to induce clade-specific neutralizing antibodies for disrupting virus-receptor engagement. Regarding the possibility of inducing cross-neutralizing antibodies by immunizing mice with the SARS-CoV RBD, several studies have indicated that natural infection with SARS-CoV or SARS-CoV-2 and immunization of animals with the SARS-CoV RBD induced very limited cross-neutralizing S protein-targeting antibody responses, 26,27 which is consistent with our observation. Overall, this study provides clues for developing intervention strategies against SARS-CoV-2. First, although it was not easy to induce cross-protective antibodies, the RBD of SARS-CoV-2 is a potential antigen that can induce abundant neutralizing antibodies against SARS-CoV-2, potentially making it a good candidate for developing subunit vaccines. Second, we demonstrated that SARS-CoV-2 RBM-specific neutralizing mAbs prevented SARS-CoV-2 infection by blocking hACE2 interactions and hence are promising passive antibody-based agents in the absence of an effective prophylactic vaccine. Moreover, the identification of SARS-CoV-2 and SARS-CoV residues important for ACE2 and neutralizing antibody recognition sheds light on the pathogenicity and immune escape mechanisms of SARS-CoV-2.
Hedgehog signaling is required for endomesodermal patterning and germ cell development in the sea anemone Nematostella vectensis
Two distinct mechanisms for primordial germ cell (PGC) specification are observed within Bilateria: early determination by maternal factors or late induction by zygotic cues. Here we investigate the molecular basis for PGC specification in Nematostella, a representative pre-bilaterian animal where PGCs arise as paired endomesodermal cell clusters during early development. We first present evidence that the putative PGCs delaminate from the endomesoderm upon feeding, migrate into the gonad primordia, and mature into germ cells. We then show that the PGC clusters arise at the interface between hedgehog1 and patched domains in the developing mesenteries and use gene knockdown, knockout and inhibitor experiments to demonstrate that Hh signaling is required for both PGC specification and general endomesodermal patterning. These results provide evidence that the Nematostella germline is specified by inductive signals rather than maternal factors, and support the existence of zygotically-induced PGCs in the eumetazoan common ancestor.
Introduction
During development, animal embryos typically set aside a group of primordial germ cells (PGCs) that later mature into germline stem cells (GSCs) and in turn give rise to gametes during adulthood (Nieuwkoop and Sutasurya, 1979;Nieuwkoop and Sutasurya, 1981;Wylie, 1999;Juliano et al., 2010). The process of PGC specification both underpins the sexual reproduction cycle and involves transitions of pluripotency, making the mechanisms that distinguish germ cells from soma of critical importance in developmental and stem cell biology (Solana, 2013;Irie et al., 2014;Magnúsdóttir and Surani, 2014). PGC specification occurs early in development and could hypothetically rely on autonomous or non-autonomous cues. 
Historically, comparative studies of germ cell development defined the core mechanisms of PGC specification as preformation and epigenesis (Nieuwkoop and Sutasurya, 1979;Nieuwkoop and Sutasurya, 1981;Extavour and Akam, 2003). For clarity, here we adopt the terms 'inherited' and 'induced' to distinguish between PGC specification mechanisms (Seydoux and Braun, 2006;Seervai and Wessel, 2013). In inherited PGC specification (e.g. Drosophila, C. elegans and Danio rerio), cytoplasmic determinants referred to as the germ plasm are maternally deposited in oocytes and then segregated into specific blastomeres through cell division (Strome and Wood, 1982;Williamson and Lehmann, 1996;Yoon et al., 1997). In contrast, in inductive PGC specification there are neither maternal germline determinants nor pre-determined PGC fates in specific blastomeres. For example, BMP signaling is required for PGC specification from precursor cells in mouse, axolotl, and cricket embryos (Lawson et al., 1999;Chatfield et al., 2014;Nakamura and Extavour, 2016). The inductive mode of PGC specification is more prevalent across the animal kingdom, and is therefore hypothesized to reflect the mechanism present in the cnidarian-bilaterian common ancestor (Extavour and Akam, 2003). Strictly categorizing PGC specification mechanisms into one of two types may fail to account for the true variety present in nature. Highly regenerative animals such as sponges, Hydra and planarians maintain multipotent stem cells that are capable of differentiating into both soma and germline in adults (Bosch and David, 1987;Fierro-Constaín et al., 2017;Issigonis and Newmark, 2019), thus representing an alternative post-embryonic mode of PGC specification. In sea urchins, a combination of inherited and inductive PGC specification mechanisms is observed (Voronina et al., 2008). 
It follows that the germline determination mechanisms of a wide variety of organisms may lie at different positions along the continuum between maternal inheritance and zygotic induction (Nieuwkoop and Sutasurya, 1981;Seervai and Wessel, 2013). Cnidarians (jellyfish, sea anemones and corals) are the sister group to bilaterians and occupy an ideal phylogenetic position for investigating likely developmental traits of the eumetazoan common ancestor (Technau and Steele, 2011;Russell et al., 2017). Among cnidarians, the sea anemone Nematostella vectensis maintains distinct adult gonad tissue and features PGC specification dynamics hypothesized to reflect an evolutionary transition from an inductive to an inherited mechanism, based on expression patterns of conserved germline genes (Extavour et al., 2005). Additionally, a well-annotated genome (Putnam et al., 2007), defined developmental stages (Fritzenwanker et al., 2007) and diverse genetic tools (Ikmi et al., 2014;Renfer and Technau, 2017;He et al., 2018;Karabulut et al., 2019) make Nematostella a genetically tractable model to elucidate developmental mechanisms controlling PGC specification. In this study, we explore mechanisms of PGC development in Nematostella and test whether the putative PGC clusters are specified by maternal or zygotic control. We first follow the development of putative PGCs and provide evidence supporting their germ cell fate in adults. We then leverage shRNA knockdown and CRISPR/Cas9 mutagenesis to interrogate the developmental requirements for the Hedgehog signaling pathway in PGC specification. From these results, we conclude that Hh signaling is either directly or indirectly required for PGC specification in Nematostella. As Hh signaling is only activated zygotically, these data indicate an inductive mechanism for Nematostella PGC specification and support the inference that the eumetazoan common ancestor likely specified PGCs via zygotic induction. 
Evidence that PGCs form in primary polyps and migrate to gonad rudiments
The localized expression of the conserved germline genes vasa and piwi suggests that Nematostella PGCs arise during the metamorphosis of planula larvae into primary polyps between 4 and 8 days post-fertilization (dpf; Figure 1A-D''; Extavour et al., 2005;Praher et al., 2017). These cells first appear as discrete clusters within the two primary mesenteries, in close proximity to the pharynx (Figure 1B-D''). To follow the development of putative PGCs at higher spatio-temporal resolution, we generated a polyclonal antibody against Nematostella Vasa2 (Vas2) and used immunohistochemistry and fluorescent in situ hybridization to confirm that Vas2 was co-expressed with piwi1 and piwi2 in the putative PGC clusters (Figure 1-figure supplement 1A-L). Further supporting their germline identity, we also found that tudor was enriched in putative PGC clusters (Figure 1-figure supplement 1M-P; Boswell and Mahowald, 1985;Arkov et al., 2006;Chuma et al., 2006). We next quantified the number of putative PGCs in primary polyps and found that Vas2+ cell numbers varied between individuals, with a median number of 10 cells per primary polyp (Figure 1E). We did not observe a significant difference in Vas2+ cell numbers between the two primary mesenteries (Figure 1-figure supplement 2). Within the same spawning batch, there was no significant difference in the number of putative PGCs in primary polyps assayed on different days. This suggests that there was neither loss nor expansion of PGCs in primary polyps prior to feeding and development of the juvenile stage. Adult Nematostella harbor mature gonads in all eight internal mesentery structures, whereas confocal imaging showed that Vas2+ epithelial cell clusters were localized only at endomesodermal septa associated with segments s2 and s8 (Figure 2A; Williams, 1975;Frank and Bleakney, 1976;He et al., 2018). 
If the two Vas2+ epithelial cell clusters are the only precursors for adult germ cells, it follows that these cells would have to delaminate and migrate to populate the eight gonad rudiments. Alternatively, new PGCs could arise within each of the six non-primary mesenteries, perhaps at a later developmental stage. To distinguish between these possibilities, we examined the localization of Vas2-expressing putative PGCs in primary polyps and later juvenile stages. In all primary polyps, putative PGCs appeared in two coherent clusters at 8 dpf (Figure 2B-B'). In older primary polyps (>10 dpf), some PGC cluster cells appeared to stretch basally through the underlying mesoglea, an elastic extracellular matrix that separates the epidermis and gastrodermis layers (Figure 2C-C'; Schmid, 1991;Shaposhnikova et al., 2005). After feeding for more than a week, primary polyps start adding tentacles and enter the juvenile stage. Interestingly, upon feeding, Vas2-expressing putative PGCs appeared to delaminate from the epidermis into the underlying mesoglea (Figure 2D-D'). Putatively delaminated Vas2-positive cells displayed a fibroblast-like morphology with pseudopodial protrusions, similar to other migratory cell types (Figure 2-figure supplement 2; Scarpa and Mayor, 2016). Further consistent with migratory potential, Vas2+ cells also expressed twist (Figure 2-figure supplement 3), a conserved regulator of mesoderm development and a marker of metastatic cancer cells (Yang et al., 2004;Kallergi et al., 2011). While specification of additional PGC clusters was not observed on the six non-primary mesenteries, we did find evidence for a process of radial cell migration between mesenteries at the level of the aboral end of the pharynx, where the mesoglea between ectoderm and endomesoderm increases in volume after the primary polyp stage (Figure 2E-E').
Source data 1. PGC numbers on 8, 12 or 16 dpf from three spawning batches. 
We next followed the localization of putative PGCs through successive developmental timepoints. The majority of 10 dpf primary polyps showed PGCs within clusters (Figure 3A-A'), while >10 dpf primary polyps showed some PGCs localized between the primary mesenteries and segment s1 (Figure 3B-B'). The direction of this initial migration toward segment s1 suggests the existence of attractive/repulsive signals for migratory PGCs. Additionally, we found that in juveniles putative PGCs migrated aborally toward the mesentery region where gonad rudiments will mature in adults (Figure 3C-D'). These migratory PGCs were proliferative, as shown by Phospho-Histone H3 labeling and EdU incorporation (Figure 3-figure supplement 1). Combining all observations, we hypothesize that in Nematostella, putative PGCs initially form in primary polyps as two endomesodermal cell clusters at the level of the aboral pharynx. During juvenile stages, we further postulate that these cells delaminate into the mesoglea layer between the ectoderm and endomesoderm via an apparent epithelial-mesenchymal transition (EMT) and then migrate to the gonad rudiments.
Evidence that putative PGCs mature and give rise to germ cells in adult gonads
To assess the germline identity of putative PGCs, we next followed the development of Vas2+ cells from juvenile to young adult stages (>2-month-old). In maturing polyps, the endomesodermal mesenteries are organized from proximal (external) to distal (internal) into parietal muscles, retractor muscles, gonads and septal filaments, with occasionally observed ciliated tracts between the gonads and septal filaments (Figure 4A-D; Williams, 1979;Jahnel et al., 2014). In juvenile polyps, Vas2+ cells were observed in the mesoglea between the septal filaments and the retractor muscles (Figure 4C), an endomesodermal region that will later form the adult gonad epidermis. 
We also occasionally observed putative PGCs between the ciliated tracts and the retractor muscles (Figure 4D). After feeding for 8 weeks, most polyps reached the 12-tentacle stage and the mesenteries progressively matured, becoming wider and thicker. At this stage we observed Vas2+ putative PGCs in the maturing gonad region, along with Vas2+ immature oocytes or sperm cysts in females and males, respectively (Figure 4E). Taken together, these observations suggest that the putative PGCs comprise a continuous Vas2-expressing lineage that proliferates and ultimately gives rise to mature germ cells. As proposed by previous work (Extavour et al., 2005), these data support the hypothesis that the germline gene-expressing cell clusters of primary polyps represent bona fide PGCs of Nematostella.
Video 1. 3D reconstruction of pharyngeal structures. PGCs (magenta) are specified on the epithelium of the two primary mesenteries, close to the pharynx (cyan cells in the center). The endomesodermal nuclei are pseudocolored in blue, and the ectodermal nuclei are in cyan. https://elifesciences.org/articles/54573#video1
Evidence supporting a zygotic mechanism for primordial germ cell specification
We next investigated whether Nematostella PGCs are specified by inheritance of maternal determinants or through induction by zygotically-expressed factors. In the inheritance-based mechanisms of other species, maternally-deposited germline determinants are segregated into specific PGC precursors during early cleavage (Nieuwkoop and Sutasurya, 1979;Nieuwkoop and Sutasurya, 1981;Extavour and Akam, 2003). In Nematostella, prior to the appearance of putative PGC clusters in developing polyps, we observed perinuclear Vas2 granules that could hypothetically serve as maternal germline determinants (Figure 5). These granules were previously identified with an independent antibody and proposed to regulate Nematostella piRNAs (Praher et al., 2017). 
However, the perinuclear Vas2 granules were distributed homogeneously around oocyte germinal vesicles (Figure 5A), in every cell of blastulae, and in most endomesodermal cells after gastrulation (Figure 5B; Praher et al., 2017). In endomesodermal cells, Vas2+ granules gradually diminished when the putative PGC cluster cells activated Vas2 expression (Figure 5B-E''), suggesting that germ cell fate was gradually specified in endomesodermal precursor cells rather than being maternally predetermined in a set of germline precursor cells. Furthermore, germline gene transcripts (i.e. vas1, vas2, nos2 and pl10) displayed a homogeneous distribution in the endomesoderm of embryos and larvae before PGC specification (Extavour et al., 2005). Endomesodermal enrichment of germline genes before Nematostella PGC formation could be consistent with the proposed germline multipotency program (GMP), where the expression of conserved germline factors underlies the multipotency of progenitor cells (Juliano et al., 2010). In line with the GMP hypothesis, we hypothesized that Nematostella PGCs are specified from a pool of multipotent endomesodermal precursors, as observed in other species that use zygotic mechanisms to induce PGCs. In primary polyps, putative PGC clusters initially form in the two primary mesenteries, which are distinguished by the presence of aborally-extended regions of pharyngeal ectoderm known as septal filaments (Figure 6A-A', Video 1; Steinmetz et al., 2017). While the mechanism of primary mesentery specification is unknown, this process likely lies downstream of Hox-dependent endomesodermal segmentation in developing larvae. Interestingly, segmentation of the presumptive primary mesenteries is disrupted in both Anthox6a mutants and Gbx shRNA-KD polyps (He et al., 2018). In both of these conditions, we observed aberrant attachment of the septal filaments and the associated induction of PGC clusters in non-primary septa (Figure 6B-D'). 
This suggests that the precise location of the putative PGC clusters can be subject to regulation, and hints at the existence of zygotic PGC-inducing signals from the pharyngeal ectoderm.
PGC specification is dependent on zygotic Hedgehog signaling activity
Previous gene expression studies have suggested that the Hh signaling pathway may be involved in patterning the endomesoderm and potentially the formation of germ cells (Matus et al., 2008). Using double fluorescent in situ hybridization to detect the expression of Nematostella hedgehog1 (hh1) and its receptor patched (ptc) in late planula larvae, we found that both ligand and receptor were expressed in reciprocal domains of ectoderm and endomesoderm associated with the pharynx (Figure 7A). Later, the PGC clusters appeared within the endomesodermal ptc expression domain, adjacent to where hh1 is expressed in the pharyngeal ectoderm (Figure 7B-D'). Because PGCs formed in association with the juxtaposed hh1 and ptc expression domains, we hypothesized that Hh signaling may direct neighboring endomesodermal cells to assume PGC identity (Figure 7E). To test functional requirements for Hh signaling in Nematostella development, we used shRNA-mediated knockdown and CRISPR/Cas9-directed mutagenesis (Ikmi et al., 2014;Kraus et al., 2016;He et al., 2018). Unfertilized eggs were injected with shRNAs targeting either hh1 or gli (a transcription factor downstream of Hh signaling) or with two independent gRNAs targeting gli. Using the expression of Vas2 protein and piwi1 transcripts as readouts for PGC identity, we found that PGC specification was significantly inhibited in both the knockdown and gRNA-injected animals. These data suggest that the Hh signaling pathway is required for normal Nematostella PGC specification. 
During Hh signal transduction, binding of Hh ligand to Ptc de-represses the transmembrane protein Smoothened (Smo), which in turn activates a cytoplasmic signaling cascade (Forbes, 1993;Alcedo et al., 1996;Stone et al., 1996;van den Heuvel and Ingham, 1996;Bangs and Anderson, 2017). To further test the involvement of Hh signaling in PGC formation, we treated developing animals with the Smo antagonists GDC-0449 (Vismodegib) or Cyclopamine (McCabe and Leahy, 2015;Sharpe et al., 2015). When early gastrulae were treated with either inhibitor, we did not observe significant developmental defects at our working concentration (Figure 8A-D). However, PGC numbers were significantly reduced (Figure 8E-H). To test Hh requirements for the establishment versus maintenance of PGC identity, we treated developing Nematostella with GDC-0449 either during PGC specification (4-8 dpf) or post-PGC specification (8-12 dpf). There were fewer PGCs in long-term control treatments (0.05% DMSO between 4-12 dpf, Ctrl-Ctrl) than in short-term controls (4-8 dpf, Ctrl). In wild-type primary polyps, there was no significant difference in mean PGC number between 8, 12 and 16 dpf primary polyps (Figure 1E). We therefore infer that the drug vehicle DMSO may have had deleterious effects on PGCs after specification. Furthermore, when we compared no-inhibition, continuous-inhibition and released-from-inhibition conditions (Figure 8-figure supplement 1B, compare Ctrl-Ctrl, GDC-GDC and GDC-Ctrl), PGC numbers did not vary significantly. These observations suggest that even though the initial PGC specification is Hh dependent, the PGC population can be dynamically replenished, potentially through cell proliferation. Additionally, we did not observe PGC migration defects in different combinations of GDC-0449 treatments. 
At 12 dpf we found that, like control polyps, more than half of treated polyps still showed the expected PGC migration away from the clusters: 18 of 29 polyps in Ctrl-Ctrl; 18 of 30 polyps in Ctrl-GDC; 19 of 30 polyps in GDC-GDC; 16 of 30 polyps in GDC-Ctrl (p values of Chi-squared tests comparing with Ctrl-Ctrl are 0.87, 0.92 and 0.5, respectively). Therefore, Hh signaling is not likely to be involved in PGC migration after the initial specification step.
Hh signaling regulates endomesodermal patterning and PGC specification
To definitively test the requirements for Hh signaling in Nematostella, we next used an established CRISPR/Cas9 methodology to mutate hh1 (Ikmi et al., 2014;Kraus et al., 2016;He et al., 2018). These efforts generated three F1 heterozygous lines carrying frame-shift mutations (Figure 9-figure supplement 2). Consistent with a defect in Hh signal transduction, homozygous mutants expressed lower levels of ptc, a conserved Hh pathway target gene (Figure 9D-E'). Primary polyps homozygous for either hh1 mutant allele developed primary mesentery-like endomesodermal septa; however, we did not observe Vas2, piwi1 or tudor expressing PGC-like cluster cells (Figure 9F-I'). Morphological analysis revealed abnormal internal tissue patterning in hh1 homozygous mutants. In wild-type animals, the pharynx and primary septal filaments are separated from the body wall ectoderm by intervening endomesodermal tissue (Figure 10A-A'). By contrast, in hh1 mutants part of the pharynx and the primary septal filaments were in direct contact with the ectoderm (Figure 10B-B'). As a result, the eight segments of the larval body plan were abnormally segregated into groups of three and five segments by the pharynx (Figure 10B). These defects were not observed in hh1 and gli shRNA knockdowns, suggesting that PGC formation may require a higher level of Hh signaling activity than endomesoderm patterning. 
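The Chi-squared comparisons of migration frequencies reported above (18 of 29, 18 of 30, 19 of 30 and 16 of 30 polyps) can be reproduced directly from the raw counts. A minimal sketch using only the Python standard library (the helper name is ours, not from the paper's analysis code; no continuity correction is applied, which matches the reported p values):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson Chi-squared test (df = 1, no continuity correction)
    for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    chi2 = sum((o - e) ** 2 / e for o, e in zip([a, b, c, d], expected))
    # Survival function of the Chi-squared distribution with 1 df:
    # P(X > x) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Polyps with vs. without PGC migration away from the clusters (counts from the text)
ctrl_ctrl = (18, 29 - 18)  # 18 of 29
treatments = {"Ctrl-GDC": (18, 30 - 18),
              "GDC-GDC":  (19, 30 - 19),
              "GDC-Ctrl": (16, 30 - 16)}

for name, (mig, no_mig) in treatments.items():
    chi2, p = chi2_2x2(ctrl_ctrl[0], ctrl_ctrl[1], mig, no_mig)
    print(f"{name} vs Ctrl-Ctrl: p = {p:.2f}")  # reproduces 0.87, 0.92 and 0.5
```

Each comparison tests one treatment group against Ctrl-Ctrl; none approaches significance, consistent with the conclusion that Smo inhibition does not affect PGC migration.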
The primary polyp-like hh1 homozygous mutants passed through gastrulation, indicating that the pharynx and the endomesoderm likely formed a continuous epithelium. Consistent with the hypothesis that a pharyngeal signal induces PGC development, hh1 mutants failed to develop PGCs even though the pharynx was associated with the endomesoderm. Nevertheless, without more sophisticated genetic tools, we cannot rule out the possibility that PGC formation was indirectly perturbed by Hh-dependent endomesodermal patterning defects. In either case, we conclude that zygotic signaling activity is required for specification of the putative PGC clusters.
PGC formation in ptc mutants may reflect a default Hh activation without the receptor
In bilaterian model systems, Ptc has been shown to serve as a receptor for the Hh ligand and to inhibit the pathway when the ligand is absent (Johnson et al., 2000). To further interrogate the mechanism of PGC specification in Nematostella, we generated four ptc heterozygous mutant lines (Figure 10C-D, Figure 10-figure supplement 1, Figure 10-figure supplement 2; see Materials and methods). Crosses between heterozygous siblings resulted in the expected 25% of homozygous progeny based on genotypic analysis, and these developed into abnormal mushroom-shaped polyps which lacked the four primary tentacles (Figure 10C-D, Figure 10-figure supplement 2). Detailed morphological examination and Vas2 immunofluorescence revealed that the ptc homozygous mutants developed a pharynx, eight endomesodermal mesenteries, and two PGC clusters (Figure 10D-E'). Combined with the requirement for hh1, gli and Smo activity during PGC specification, we propose that the presence of Hh ligand or absence of ptc activates the pathway and that zygotic Hh signaling provides permissive conditions for PGC formation within the pharyngeal domain of the Nematostella endomesoderm. 
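The expected 25% homozygote frequency from the heterozygous sibling crosses follows from simple Mendelian segregation. As an illustration (ours, not the authors' genotyping pipeline), enumerating the four equally likely gamete combinations of a ptc/+ x ptc/+ cross:

```python
from collections import Counter
from itertools import product

# Each ptc/+ parent transmits either allele with probability 1/2
gametes = ["ptc", "+"]

# All four equally likely offspring genotypes (allele order is irrelevant,
# so each pair is sorted before counting)
offspring = Counter(tuple(sorted(pair)) for pair in product(gametes, gametes))

frac_homozygous_mutant = offspring[("ptc", "ptc")] / 4
print(frac_homozygous_mutant)  # 0.25, the expected ptc/ptc frequency
```

The same enumeration gives the expected 1:2:1 genotype ratio (25% ptc/ptc, 50% ptc/+, 25% +/+), against which the observed genotyping counts were compared.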
Discussion
In this report, we confirm that Nematostella putative PGCs form in the pharyngeal endomesoderm and provide evidence that these cells delaminate via EMT and migrate through the mesoglea to populate the eight gonad primordia. We also demonstrate that putative PGCs form between the expression domains of hh1 and ptc and present evidence that Hh signaling is required for PGC specification but not PGC maintenance. Because Hh signaling transducers are only expressed zygotically (Matus et al., 2008;Lotan et al., 2014), these data indicate that Nematostella employs an inductive mechanism to specify PGC fate, which is consistent with the proposed ancestral mechanism for metazoan PGC specification (Extavour and Akam, 2003). Considering the existence of maternally-derived perinuclear Vas2 granules and vas1 and nos2 transcripts (Extavour et al., 2005;Praher et al., 2017), it remains possible that maternally inherited germline determinants still play some essential roles in PGC specification and that zygotic Hh activity serves to augment their function. In this combined maternal-zygotic scenario, the mechanism of Nematostella PGC formation would not neatly fit within either inheritance or induction, but instead falls within the continuum between either extreme, similar to sea urchin PGCs where maternal and zygotic factors cooperate in PGC determination (Nieuwkoop and Sutasurya, 1981;Voronina et al., 2008;Seervai and Wessel, 2013). We labeled the origins of Nematostella putative PGCs by the expression of Vas2 and other conserved germline marker genes.
Source data 1. PGC numbers in hh1- or gli-shRNA knockdown primary polyps. Source data 2. PGC numbers in gli-gRNA injected primary polyps. 
However, the observations do not exclude the possibility that Vas2+ cells can give rise to other somatic lineages, because these conserved germline marker genes are also expressed and functional in multipotent stem cells of many species, such as Hydra, planaria and Platynereis dumerilii (Mochizuki et al., 2001;Reddien et al., 2005;Rebscher et al., 2007;Gustafson and Wessel, 2010;Wagner et al., 2012). In line with the theory of a 'germline multipotency program', where PGCs and multipotent stem cells are sister cell types and the specification and maintenance of multipotency rely on these genes (Juliano et al., 2010), it is possible that the Vas2+ cells of Nematostella polyps contribute to both germline and soma. Alternatively, because our observations were made from static images, multiple rounds of PGC specification from other origins may exist. Other yet-unidentified pluripotent stem cells may also contribute to the Nematostella germline even after the initial PGCs have been specified. Based on our current data, we can simply propose that PGCs originate from the Vas2+ cell clusters in primary polyps. With more advanced genetic tools, we anticipate that the Nematostella PGC lineage will be revealed by live tracing methods.
Hh pathway activity in ptc mutants
In many bilaterian model organisms ptc is a transcriptional target of Hh signaling and serves as both a receptor and a negative regulator of pathway activity (Briscoe and Thérond, 2013;Bangs and Anderson, 2017). We sought to functionally dissect Hh signaling in Nematostella and leveraged CRISPR/Cas9 mutagenesis to generate both hh1 and ptc mutants. While hh1 mutants lacked putative PGC cell clusters (Figure 9), to our surprise these cells formed properly in ptc mutant animals (Figure 10). 
This finding could be consistent with three possible scenarios: (1) the existence of residual receptor activity due to allele-specific effects or potential redundancy with an unannotated paralogue elsewhere in the genome; (2) an indirect or insufficient role for Hh in PGC specification; (3) a default repressive role for Ptc in the specification of pre-patterned PGC clusters. Based on our combined data, we propose that the pharyngeal ectoderm releases Hh ligand to inhibit Ptc-dependent repression of PGC fate in the neighboring endomesoderm. This reasoning would suggest that the PGC clusters are pre-patterned by other yet-unidentified extracellular signals, and that the role of Hh activity may be to provide a spatial or temporal cue to trigger their maturation.
Direct versus indirect roles for Hh activity in PGC specification
To our knowledge, Hh signaling has not been directly implicated in PGC specification in previous studies of established bilaterian systems. Nevertheless, as summarized in Figure 11, Table 1 and Table 2, a requirement for Hh signaling during Nematostella PGC formation is supported by three lines of evidence: (1) hh1 and gli shRNA knockdowns and gli CRISPR/Cas9 mutagenesis (Figure 7); (2) Smo inhibition assays (Figure 8); and (3) hh1 mutants (Figure 9). In developing primary mesenteries, PGCs are specified in endomesoderm cells that lie in close proximity to the hh1 expression domain in adjacent pharyngeal ectoderm (Figure 7A-E). Even in the absence of primary mesenteries in Anthox6a mutants and Gbx knockdown juveniles (Figure 6; He et al., 2018), PGCs still develop from endomesodermal cells in proximity to the pharyngeal ectoderm-derived septal filaments (Steinmetz et al., 2017). Interestingly, while hh1 expression seems to be restricted to the pharyngeal ectoderm and septal filaments, we observed broad endomesodermal patterning defects in hh1 mutants (Figure 10B-B'). 
This phenotype was not seen in either knockdown experiments or inhibitor assays, where PGC specification was nevertheless inhibited (Figure 7 and Figure 8). This could suggest that the PGC defects in hh1 mutants are a direct result of aberrant endomesodermal patterning. Consistent with this, preliminary attempts to broadly overexpress Hh by injecting Ubiquitin>hh plasmid did not result in the induction of any apparent ectopic PGCs (data not shown). Still, looking forward, genetic tools that allow the discrimination between cell autonomous and cell non-autonomous mechanisms will be required to definitively rule out whether PGC formation is directly or indirectly regulated by Hh signaling.
Figure 10-figure supplement 2. Crosses between ptc1/+ and ptc1/+ or ptc1/+ and ptc2/+ heterozygous siblings.
Figure 11. A model of Nematostella PGC specification and migration. (A) In wild-type 4 dpf tentacle bud larvae, hh1 (yellow) and ptc (orange) are expressed in the pharyngeal ectoderm and endomesoderm, respectively. Between 4 to 8 dpf, when larvae metamorphose into primary polyps, Hh1 signals to neighboring endomesodermal cells and specifies Vas2/piwi1-positive PGC clusters (red) in the primary mesenteries. Meanwhile, perinuclear Vas2 granules (red dots) within endodermal cells (gray) gradually diminish. After initial specification, the PGC clusters undergo EMT and migrate to gonad rudiments during the juvenile stage. (B) hh1 KD, gli KD, hh1 mutant and gli gRNA-Cas9 injected embryos develop reduced or absent PGC clusters, indicating a requirement for Hh signaling in this process, whether direct or indirect. (C) Drug treatments inhibiting Smo activity between 4 to 8 dpf impair PGC specification. However, some polyps still form reduced numbers of PGCs at later time points, possibly due to compensatory PGC proliferation. Note that the reduced ptc expression depicted in B is supported by FISH data in hh1 mutants but is presumptive in C. 
Future perspectives

In this report, we provide an initial framework demonstrating an inductive PGC formation mechanism in Nematostella vectensis, a representative early-branching animal. To our knowledge, there is no direct evidence for the involvement of Hh signaling in PGC specification in other organisms. One report from Hara and Katow showed hedgehog expression in the small micromeres (PGC precursors) of the sea urchin Hemicentrotus pulcherrimus (Hara and Katow, 2005; Yajima and Wessel, 2011). Functional interrogation of Hh signaling did not address sea urchin PGC formation but suggests the pathway patterns mesoderm and regulates left-right asymmetry (Walton et al., 2009; Warner et al., 2016). Because complete hedgehog protein homologs appeared in the cnidarian-bilaterian common ancestor and the pathway is involved in many facets of development (Adamska et al., 2007; Ingham, 2001; King et al., 2008), it is possible that the pathway serves to distinguish germline and soma in other eumetazoans as well. Alternatively, the requirement for Hh signaling in Nematostella PGC formation could be a lineage-specific feature, which could be tested through broad sampling of PGC development in diverse anthozoan cnidarians.

Materials and methods

Key resources

EdU incorporation and Click-iT reaction

To detect proliferating cells, juveniles were incubated in 800 μM EdU (Thermo Fisher Scientific) for 30 min, and adults were incubated in 100 μM EdU for 12 hr together with artemia, followed by fixation and dehydration according to the immunohistochemical staining protocol.

Whole-mount fluorescent in situ hybridization (FISH)

To clone target genes, purified total RNA was reverse transcribed into cDNA with the ImProm-II Reverse Transcription System (Promega; Madison, WI; Cat. No. A3800). Target gene fragments were first amplified from a mixed cDNA library of planula larvae and primary polyps. Primers are listed in Table 3.
We adopted a ligation-independent pPR-T4P cloning method (Newmark et al., 2003) to generate plasmids with probe templates and confirmed the positive clones by sequencing. We then PCR amplified the DNA template fragments using the AA18 (CCACCGGTTCCATGGCTAGC) and PR244 (GGCCCCAAGGGGTTATGTGG) primers, which flank the T7 promoter and the target gene sequence. After purifying DNA templates, we synthesized DIG-labeled RNA probes with the DIG RNA labeling mix (Sigma-Aldrich; Cat. No. 11277073910) and T7 RNA polymerase (Promega; Cat. No. P2077). Sample preparation, probe hybridization and signal detection followed established protocols (Steinmetz et al., 2017; He et al., 2018). The probe working concentration was 0.5 ng/mL for all genes. For double FISH, we synthesized fluorescein-labeled RNA probes with the Fluorescein RNA labeling mix (Sigma-Aldrich; Cat. No. 11685619910) and hybridized them together with a DIG-labeled probe of another gene. After detecting the first probe signal with TSA fluorescein reagent (PerkinElmer; Waltham, MA) and several washes with TNT buffer, we quenched peroxidase activity by incubating samples in 200 mM NaN3/TNT for 1 hr. Samples were then washed six times with TNT for at least 20 min each and then subjected to second-round probe detection with either anti-DIG-POD Fab fragments (Sigma-Aldrich; Cat. No. 11207733910) or anti-Fluorescein-POD Fab fragments (Sigma-Aldrich; Cat. No. 11426346910).

Short hairpin RNA knockdown

shRNA design, synthesis and delivery followed the protocol of He et al., 2018 with the following modification: a reverse DNA primer containing the shRNA stem and linker sequence was annealed with a 20-nucleotide T7 promoter primer (TAATACGACTCACTATAGGG). The annealed, partially double-stranded DNA directly served as the template for in vitro transcription.
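The template-assembly step above can be sketched in code. This is an illustrative sketch, not the authors' pipeline: the `shrna_ivt_template` helper and its reverse-primer layout are assumptions, but they show how a 20-nt T7 promoter primer can anneal to a reverse primer whose 3' end carries the promoter's reverse complement, leaving the promoter double-stranded as T7 polymerase requires.

```python
# Illustrative sketch (not the authors' code): assembling a partially
# double-stranded in vitro transcription template from two primers.
T7_PROMOTER = "TAATACGACTCACTATAGGG"  # 20-nt T7 promoter primer from the text

_COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Reverse complement of an uppercase DNA sequence."""
    return seq.translate(_COMPLEMENT)[::-1]

def shrna_ivt_template(stem_and_linker: str) -> tuple:
    """Hypothetical helper: the reverse primer carries the shRNA stem/linker
    followed by the reverse complement of the T7 promoter; annealing it to
    the promoter primer makes the promoter region double-stranded."""
    reverse_primer = stem_and_linker + reverse_complement(T7_PROMOTER)
    return T7_PROMOTER, reverse_primer
```

Only the promoter region needs to be duplexed for transcription to initiate, which is why the rest of the template can remain single-stranded.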
We tested knockdown efficiency with shRNA produced by this modified method by targeting b-catenin and dpp shRNA, and found the same phenotypic penetrance as previously reported (He et al., 2018;Karabulut et al., 2019). To control for shRNA toxicity, we injected 1000 ng/mL eGFP shRNA and did not observe noticeable developmental defects. All shRNA working solutions were prepared at 1000 ng/mL and the sequences are listed in Table 4. By 8 dpf, primary polyps were fixed to assay PGC development. Generation of mutant lines by CRISPR/Cas9 mutagenesis hh1 and ptc mutant lines were generated using established methods (Ikmi et al., 2014;Kraus et al., 2016;He et al., 2018). In brief, to generate F0 founders, we co-injected 500 ng/ml of gRNA (sequences listed in Table 5) and 500 ng/ml of SpCas9 protein into unfertilized eggs. Mosaic F0 founders were then crossed with wild-type sperm or eggs to create a heterozygous F1 population. When the F1 polyps reached juvenile stage, we genotyped individual polyps by cutting tentacle samples; the resultant alleles are described in Figure 9-figure supplement 1 and Figure 10-figure supplement 1. Heterozygous carriers of insertion/deletion-induced frame-shift alleles were crossed to generate homozygous mutants. The phenotypes and genotypes of the F2 population followed Mendelian inheritance and were subjected to further analysis. In progeny resulting from a hh1 1 /+ cross, the observed phenotypic ratio of wild-type and mutant primary polyps was 948:343, close to the expected Mendelian ratio. Progeny from heterozygous crosses were also randomly genotyped and confirmed to follow the expected 1:2:1 ratio (+/+: hh1 1 /+: hh1 1 /hh1 1 = 6:14:8 and +/+: hh1 2 /+: hh1 2 /hh1 2 = 7:16:6). A similar strategy was used to analyze ptc mutants. ptc mutant genotypes also followed Mendelian segregation (+/+: ptc 3 /+: ptc 3 /ptc 3 = 5:17:8). These results suggest the phenotypes observed in the hh1 and ptc mutant lines result from single locus mutations. 
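The goodness-of-fit of the reported segregation counts (948:343 vs. 3:1; 6:14:8 vs. 1:2:1) can be checked with a Pearson chi-square statistic. A minimal stdlib-only sketch (the 3.841 and 5.991 thresholds are the standard 5% critical values for 1 and 2 degrees of freedom):

```python
def chi_square_stat(observed, expected_ratio):
    """Pearson chi-square statistic of observed counts against an expected ratio."""
    total = sum(observed)
    ratio_total = sum(expected_ratio)
    expected = [total * r / ratio_total for r in expected_ratio]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# hh1 intercross phenotypes vs. a 3:1 Mendelian ratio
# (critical value at alpha = 0.05, df = 1, is 3.841)
stat_pheno = chi_square_stat([948, 343], [3, 1])

# genotyped progeny vs. 1:2:1 (critical value at df = 2 is 5.991)
stat_geno = chi_square_stat([6, 14, 8], [1, 2, 1])
```

Both statistics fall well below their critical values, consistent with the paper's conclusion that the phenotypes segregate as single-locus Mendelian mutations.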
We tried to generate gli mutant lines by co-injecting two gRNAs (500 ng/ml each; Table 1 and Figure 7-figure supplement 1) with 500 ng/ml of SpCas9 protein into unfertilized eggs. However, the F0 founders were >90% lethal at the juvenile stage and the survivors were sterile. Therefore, we analyzed PGC formation in the F0 generation, as shown in Figure 7K-M.

PGC event | Developmental stage | Data
Specification on primary mesenteries | 4-8 dpf | Figure 1B-D
EMT and radial migration | juveniles with feeding | Figure 2C-E', Figure 3B-B'
Aboral migration to gonad rudiments | juveniles with feeding | Figure 3C-D', Figure 4

Table 5. Summary of experimental designs.

Experiment | Treatment duration | Data
Verify PGC specification in hh1 and gli shRNA knockdown | 0-8 dpf |
Test Hh pathway on specification and post-specification by GDC-0449 temporal treatments | 4-8, 4-12 and 8-12 dpf |
Demonstrate patterning defects in hh1 1 mutant | 0-12 dpf | Figure 10A-B'
Demonstrate the mutant morphology of ptc 3 mutant | 0-18 dpf | Figure 10C
Verify PGC specification and demonstrate the mutant morphology of ptc 4 mutant | 0-12 dpf | Figure 10D-E'
Demonstrate the mutant morphology of ptc 1 and ptc 2 mutants | 0-8 dpf | Figure 10-figure supplement 2

These stocks were diluted 1:2000 in 12 ppt filtered artificial sea water (FSW) to generate working solutions of 25 μM GDC-0449 and 5 μM cyclopamine. DMSO diluted 1:2000 (final 0.05%) was applied as a control. All treatments were protected from light, and working solutions were replaced with fresh ones every day.

Imaging and quantification

For confocal imaging, we used a Leica TCS SP5 Confocal Laser Scanning Microscope or a Nikon 3PO Spinning Disk Confocal System. Bright-field images were acquired using a Leica MZ 16 F stereoscope equipped with a QICAM Fast 1394 camera (Qimaging; Surrey, BC, Canada).
The brightness and contrast of images were adjusted in Fiji, and PGC numbers in individual polyps were quantified either manually with the Cell Counter macro or automatically by blurring and masking the Vasa2 signal to find the cluster and 3D peak finding of the DAPI nuclei within the cluster (https://github.com/jouyun/pub-2020elife; Chen, 2020; copy archived at https://github.com/elifesciences-publications/pub-2020elife). Serial z-section images of N. vectensis pharyngeal structures were reconstructed as a 3D movie (Video 1) in Imaris 8.3 (Bitplane, Concord, MA). Box plots were generated with BoxPlotR (Spitzer et al., 2014). Statistical analysis was performed by one-way analysis of variance (ANOVA), and differences were considered significant at p < 0.05. Figures of this report were generated using Adobe Illustrator 2019.
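The automated counting step described above (blur and mask the Vasa2 channel to find the cluster, then find DAPI peaks within it) can be approximated with `scipy.ndimage`. This is a rough re-sketch, not the published script; the sigma and threshold values are placeholders.

```python
import numpy as np
from scipy import ndimage

def count_nuclei_in_cluster(vasa2, dapi, blur_sigma=2.0, vasa2_thresh=0.5):
    """Rough re-sketch of the described pipeline: blur and threshold the
    Vasa2 channel to mask the PGC cluster, then count DAPI local maxima
    (nuclei) that fall inside that mask. Parameters are placeholders."""
    # cluster mask from the blurred Vasa2 channel
    mask = ndimage.gaussian_filter(vasa2.astype(float), blur_sigma) > vasa2_thresh
    # 3D local maxima of the DAPI channel (26-connected neighborhood),
    # excluding the zero background
    footprint = np.ones((3, 3, 3))
    maxima = (dapi == ndimage.maximum_filter(dapi, footprint=footprint)) & (dapi > 0)
    return int(np.count_nonzero(maxima & mask))
```

In practice a real pipeline would also smooth the DAPI channel and merge maxima closer than a nucleus diameter; this sketch only illustrates the mask-then-peak-find structure of the approach.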
Nickel Electrodeposition on Silver for the Development of Solid Oxide Fuel Cell Anodes and Catalytic Membranes

Nickel was electrodeposited on porous Ag/GDC (silver/Ce0.9Gd0.1O2-x) scaffolds and dense Ag/GDC composites for the fabrication of SOFC electrodes and catalytic membranes, respectively. To control the distribution and amount of nickel deposition on the Ag/GDC surfaces, first, a systematic cyclic voltammetry study of nickel electrodeposition from a Watts bath on silver foils was carried out to understand the influence of operating conditions on the electrodeposition process. From the cyclic voltammetry study, it can be concluded that suitable operating conditions for nickel electrodeposition into porous Ag/GDC scaffolds and catalytic membranes are: 1.1 M Ni2+ concentration in the Watts bath; deposition potential between −0.65 to −1.0 V vs. Ag/AgCl; a temperature of 55 °C; sodium dodecyl sulfate (SDS) as the surfactant; pH 4.0 ± 0.2; and an agitation rate of 500 rpm. It was observed that the nickel surface microstructure changed with the deposition current densities due to the co-evolution of H2. Pulse and continuous electrodeposition modes allow nickel to be deposited throughout porous Ag/GDC scaffolds and onto catalytic membranes. The pulse electrodeposition mode is favored as this is shown to result in an even Ni distribution within the porous scaffolds at minimum H2 pitting.

Nickel is used as a catalyst and current collector in electrochemical energy conversion systems such as solid oxide fuel cells (SOFCs) and electrolysers, and as a catalyst in catalytic membranes (Figure 1). Conventionally, Ni is incorporated into SOFC anodes by mechanically mixing NiO and ionically conductive materials such as yttria-stabilized zirconia (YSZ) and Ce0.9Gd0.1O2-x (GDC), then sintering at high temperature. However, the use of relatively large volumes of Ni (∼30 vol%) needed to achieve adequate electronic conductivity in the electrodes affects the stability of the cell microstructure under redox cycling.
Recent advances in the manufacture of SOFC anodes via infiltration of Ni nitrate solution into a porous backbone (scaffold) have made it possible to achieve an excellent performance with a significantly reduced amount of nickel. 1,2 However, repeated infiltration involving heating and cooling cycles is a lengthy and energy consuming process, presenting challenges to its industrial application. An alternative is to use electroless and electrodeposition techniques, which offer the potential to accelerate the incorporation of Ni into porous scaffolds at room or near-room temperature. The electroless deposition of Ni is well known and it is a simple process used to coat any substrate, however the use of boron-or phosphorus-based reducing agents in this technique is unsuitable for fuel cells due to adverse catalytic effects of the residues. 3 Alternatively, hydrazine has been used as the reducing agent in the past years 4-6 yet its high toxicity may not be suitable for large production. Catalytic membranes have shown their potential to reform methane to syngas (CO+H 2 ) by coupling oxygen separation from air and catalytic partial methane oxidation into one single step. 7,8 The oxygen is separated from air due to oxygen pressure difference on both sides of dense mixed ionic and electronic conducting (MIEC) membranes such as single phase La 1-x Sr x Co 1-y Fe y O 3-δ 9 and/or dual metal-ceramic composites such as Ag/GDC 10 . The incorporation of Ag (< 10 vol %) in dual phase membranes enables oxygen permeability and catalyses oxygen reduction. [11][12][13] In previous studies, Ni has been incorporated into the membranes using infiltration techniques or commercial Ni foams. 10,14 In this study, the electrodeposition of Ni into porous and dense Ag/GDC composites is explored as the process is scalable, low cost and capable of producing Ni deposits with controllable properties. 
Nickel is deposited onto porous and non-conducting (at room temperature) ceramics such as GDC by combining electroless and electrodeposition: (i) to provide a conductive surface, porous or planar surfaces were metallized with Ag using an electroless technique; and (ii) Ni was electrodeposited on the Ag-coated surfaces using a Watts bath. In this work, we report the electrodeposition of Ni on Ag using a Watts bath. In particular we aim to: (i) understand the Ni electrodeposition process on Ag, and therefore study Ag foils first; (ii) find the best conditions to deposit Ni on porous Ag/GDC scaffolds, as we are looking for alternatives to the conventional Ni infiltration method for fabricating SOFC electrodes; 15,16 and (iii) deposit Ni onto dense Ag/GDC composite membranes to provide catalytic activity for the partial oxidation of methane. 10

Experimental

Electrolyte for nickel electrodeposition.-The Watts bath was prepared from analytical grade chemicals and deionized water and consisted of 0.86 M NiSO4.6H2O, 0.25 M NiCl2.6H2O and 0.72 M H3BO3 (boric acid). All the experiments were performed at pH 4.0 ± 0.2. The bath was magnetically stirred at 500 rpm to maintain the uniformity of the solution concentration and reduce surface pitting. The temperature of the bath was controlled with a hot plate in the range of 22-70 °C and monitored with a digital thermometer (Leegoal).

Preparation of Ag electrode substrates.-The Ag substrates used as the working electrode (WE) in this study are divided into two groups: planar and porous structures. Figure 2 illustrates the substrate groups, as well as SEM images of the Ag foil and Ag-coated GDC. The substrates were prepared as follows:

(i) Ag foils: Ag foils (0.075 mm thickness, 99.97% purity; Figure 2a) were obtained from Advent Research Materials.
The working surface area of the foils used in this study was 1 cm2, with one side masked by coating with lacquer.

(iii) Dense Ag/GDC composite membranes: The dense planar Ag/GDC membranes (Figure 2c) were prepared by coating suspended GDC powder with Ag using Tollens' method, then pelletizing the Ag/GDC powder by isostatic pressing, followed by high-temperature sintering, as described in previous work. 17

Cyclic voltammetry.-The electrochemical experiments were carried out in a three-electrode cell. The WE was silver in the form of either Ag foil, Ag-coated GDC electrolyte, Ag-coated GDC porous scaffold, or a dense Ag/GDC composite membrane. In all experiments, a Ni mesh (99% pure Ni) and silver/silver chloride (Ag/AgCl) were used as counter (CE) and reference (RE) electrodes respectively. Ag foils (area 1 cm2) were used as the substrate to carry out the underpinning study of the electrochemical behavior of Ni electrodeposition on Ag at different bath conditions (temperature, additives, and bath concentrations). Two types of additives were used in this study: a non-ionic surfactant, surfynol (Surfynol 104DPM, Air Products), and an anionic surfactant, sodium dodecyl sulfate (SDS, Sigma-Aldrich). Cyclic voltammetry (CV) was started from the open circuit voltage (OCV), and two potential regions of −1.1 to +0.8 V and −0.9 to +0.5 V vs. Ag/AgCl were used (unless noted otherwise) at a scan rate of 50 mV s−1. CV on different Ag structures was carried out to understand how different Ag substrates affected the Ni deposition. The electrochemical measurements were conducted using a potentiostat/galvanostat (Autolab PGSTAT302N) with NOVA 1.9 data processing software.

Nickel electrodeposition and structural morphology analysis.-From the CV study on Ag foils, the appropriate potential/current density and operating conditions (temperature, additives, and bath concentrations) for Ni deposition were selected.
Two modes of electrodeposition, continuous constant and pulsed current/potential over time, were investigated for the deposition of Ni on Ag substrates. In pulse current/potential electrodeposition over time (PED), an on-time (T on) of 400 ms and an off-time (T off) of 900 ms were applied. In electrodeposition with continuous constant current/potential over time (CED), the current densities were varied from −0.008 to −0.83 A cm−2. Deposition current efficiencies were determined by calculating the total amount of deposited Ni using Faraday's law 18 and by elemental analysis (Agilent Technologies ASX-500 7900 ICP-MS) of samples dissolved in hot aqua regia. 19,20 The nickel deposits were analyzed by scanning electron microscopy (SEM; LEO Gemini 1525 FEGSEM), energy-dispersive X-ray spectroscopy (EDX; Phenom ProX) and X-ray diffraction (XRD; PANalytical MRD).

Electrochemical characterization.-A symmetrical cell of electrodeposited Ni/Ag/GDC|YSZ|Ni/Ag/GDC was prepared using the same procedure as our previous study. 16 Ni was electrodeposited on a Ag/GDC scaffold by applying a current density of −0.035 A cm−2 in PED mode and operating at the conditions selected from the CV study. Impedance spectroscopy was used to study the electrodeposited symmetrical cell performance from 600 to 750 °C in humidified H2 (97 vol% H2, 3 vol% H2O), in the frequency range of 10−1-106 Hz and with an AC amplitude of 20 mV. Other details follow those described in our previous work. 16

Results and Discussion

Typical cyclic voltammetric behavior of Ni on Ag foils.-The typical CVs recorded from OCV to two potential regions, −1.1 to +0.8 V and −0.9 to +0.5 V vs Ag/AgCl, are shown in Figure 3. The cathodic current commences at ∼−0.55 V vs Ag/AgCl, and a clear current peak (C1 and C1') appears at ∼−0.65 V vs Ag/AgCl, associated with Ni nucleation and reduction in the Watts bath.
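The current-efficiency calculation mentioned above follows directly from Faraday's law, m = QM/(nF), with n = 2 electrons per Ni2+ ion. A small sketch (the example numbers in the test below are illustrative, not the paper's efficiency data):

```python
FARADAY = 96485.0   # C/mol
M_NI = 58.69        # g/mol, molar mass of nickel
N_ELECTRONS = 2     # Ni2+ + 2e- -> Ni

def faradaic_ni_mass(current_a: float, time_s: float) -> float:
    """Maximum (100% efficient) Ni mass in grams for a given passed charge."""
    charge_c = abs(current_a) * time_s
    return charge_c * M_NI / (N_ELECTRONS * FARADAY)

def current_efficiency(measured_mass_g: float, current_a: float, time_s: float) -> float:
    """Fraction of the charge that deposited Ni, with the remainder lost
    to side reactions such as H2 evolution."""
    return measured_mass_g / faradaic_ni_mass(current_a, time_s)
```

Comparing the ICP-MS mass to the Faradaic maximum in this way is how a ~90% efficiency figure like the one reported later in the paper would be obtained.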
The equilibrium Ni2+ reduction potential (E Ni2+/Ni at 55 °C, 1.1 M Ni2+) is −0.47 V vs Ag/AgCl, with the offset probably due to a nucleation barrier. 21,22 The current then increases sharply as the scan proceeds toward more negative potentials (>−0.75 V vs Ag/AgCl) for both overpotential ranges, corresponding to nickel deposition with metallic gray deposits and co-evolution of H2. The co-evolution of H2 became visible with the formation of H2 gas bubbles at the edges of the deposited layer on the WE (Ag foil) beyond −1.0 V vs Ag/AgCl (Figure 3a). It is reported that adsorbed hydrogen (H ads*) is strongly bonded on the surface of fresh Ni deposits and forms Ni-hydrogen alloys, inhibiting the growth of Ni. 23-25 In some cases, the continuous co-evolution of H2 bubbles led to delamination of the deposited layer from the Ag foil. On the reverse potential scan, in the positive direction, a crossover point at which the current becomes almost zero is observed, indicating that nickel electrodeposition proceeds through nucleation and growth phenomena. 26,27 On scanning to more positive potentials, anodic peaks corresponding to oxidation reactions were observed. The existence of Ni-hydrogen alloys is shown by two anodic peaks (a1 and a2) at ∼+0.1 V and ∼−0.2 V (Figure 3b), associated with the dissolution of two separate phases, α-Ni (solid solution of hydrogen in Ni, H/Ni ∼ 0.3) and β-Ni (H/Ni > 0.6). 28-30 These peaks are clearly formed when a narrower potential window is used, indicating that a mixture of two phases of Ni-hydrogen alloy and pure Ni is deposited in the early stages of Ni deposition. When a broader potential window is used (Figure 3a), the anodic peaks appear at ∼−0.2 V (A1) followed by peaks at ∼+0.3 V (A2) and ∼+0.75 V (A3), indicating the dissolution of β-Ni and Ni metal, and oxidation of Ni species (hydroxide/oxide), respectively.
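The quoted equilibrium potential can be rationalized with the Nernst equation. The sketch below is an estimate under stated assumptions, not the paper's calculation: it takes E0(Ni2+/Ni) = −0.257 V vs. SHE and an Ag/AgCl (sat. KCl) offset of about +0.197 V, treats concentration as activity, and ignores the reference electrode's own temperature dependence, so it lands near, but not exactly on, the paper's −0.47 V.

```python
import math

R = 8.314          # J/(mol K), gas constant
FARADAY = 96485.0  # C/mol

def ni_equilibrium_potential_vs_agagcl(conc_m: float, temp_c: float) -> float:
    """Nernst estimate for Ni2+ + 2e- -> Ni, reported vs. Ag/AgCl.
    Assumed constants: E0(Ni2+/Ni) = -0.257 V vs. SHE and Ag/AgCl
    (sat. KCl) at +0.197 V vs. SHE; activity ~ concentration."""
    e0_ni_vs_she = -0.257
    e_agagcl_vs_she = 0.197
    t_k = temp_c + 273.15
    nernst_shift = (R * t_k) / (2 * FARADAY) * math.log(conc_m)
    return e0_ni_vs_she + nernst_shift - e_agagcl_vs_she

e_eq = ni_equilibrium_potential_vs_agagcl(1.1, 55.0)  # roughly -0.45 V
```

The ~20 mV gap between this estimate and the quoted −0.47 V is within what the neglected activity coefficients and reference-electrode temperature shift can account for.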
31,32 This shows that β-Ni is primarily formed when a higher negative potential is applied (>−1.0 V vs Ag/AgCl) compared to α-Ni. 33 The oxidation of Ni species to hydroxide/oxide was observed through the formation of a black deposited layer. The black layer remained when scanned for the second and third cycles, which might indicate the source of deactivation or inhibition of subsequent electrodeposition on the electrode. 34,35 The overall Ni redox reactions from a Watts bath in the cathodic and anodic regions, including the dissolution of the Ni-hydrogen phases (α or β) and the oxidation of Ni to Ni(OH)2 and NiO, 53,54 are summarized in Table I.

Cyclic voltammetry at different operating conditions.-A systematic cyclic voltammetry study of the influence of operating conditions on Ni deposition on Ag foils is shown in Figure 4. All voltammograms were scanned from OCV over several potential regions: −1.1 to +0.9 V, −1.1 to +0.8 V and −0.9 to +0.5 V vs Ag/AgCl.

Effect of Ni2+ concentration.-Figure 4a shows that the cathodic peaks shifted to more negative potentials as a result of the increase of electroactive species in the Watts bath. 36 Thus, better deposits are expected at higher Ni2+ concentrations. To minimize mass transport limitations and to obtain satisfactory Ni deposits, a Ni2+ concentration of 1.1 M was selected for the subsequent deposition studies.

Effect of bath temperature.-Figure 4b shows that the Ni reduction potentials shifted to less negative values, from −0.75 to −0.64 V vs Ag/AgCl, when the bath temperature increased from 22 to 70 °C. An increase in temperature may decrease the overpotential of both H2 evolution and nickel reduction. 37 However, high plating temperatures lead to hydrolysis reactions, which lead to the inclusion of impurities such as nickel hydride in the deposited films and develop internal stress within the deposits. 38 Consequently, the deposits may form cracked and poor coatings. Therefore a temperature of 55 °C was chosen for the subsequent deposition process.

Effect of additives.-The effects of non-ionic (surfynol) and anionic (SDS) additives in the Watts bath are illustrated in Figure 4c.
The addition of SDS causes the deposition overpotential to shift to more negative values and decreases the peak current density from −0.13 to −0.10 A cm−2 at −1.1 V vs Ag/AgCl compared to surfynol. The addition of SDS to the Watts solution also significantly reduced H2 pitting compared to surfynol. A compact and brighter Ni layer formed with SDS (Figure 5a); however, a darker Ni layer accompanied by some submicron pores due to H2 evolution was obtained with surfynol (Figure 5b). This shows that SDS is favored as it enhances the electrostatic adsorption of Ni2+ ions by increasing their positive charges and suppresses H2 evolution. 27

Effect of substrate structure.-Different Ag substrates (Ag/GDC scaffolds, planar Ag/GDC and Ag/GDC composite membranes) were used to elucidate the influence of substrate structure on the nickel electrodeposition process, as ultimately we want to deposit nickel onto these substrates for a range of applications. Figure 4d shows the cathodic region of the cyclic voltammograms of Ni deposition on the different substrates in the Watts bath. The voltammograms of Ni deposition on the planar Ag/GDC, Ag/GDC scaffold, and Ag/GDC composite membrane follow the same trends as seen on Ag foil, showing the presence of a similar charge-controlled process. A cathodic wave (C) at −0.7 V vs Ag/AgCl for Ni deposition on the porous Ag/GDC scaffold is clearly observed. This peak may indicate that the deposition of Ni in the porous scaffold involves a combination of charge- and diffusion-controlled processes with co-evolution of H2. 39,40 Furthermore, it is observed that the current densities for the Ag/GDC scaffold, planar Ag/GDC and Ag/GDC composite membrane are smaller than that obtained on the flat Ag foil, despite their increased surface area.
These observations could be due to a combination of: (i) non-ideal current distribution on the Ag-coated GDC and Ag/GDC membrane surfaces; and (ii) mass transport limitations related to the diffusion of Ni2+ ions through the porous support to the metal-electrolyte interface. 41,42 A narrow range of applied potentials was selected to deposit Ni, particularly in the porous structures, to minimize the co-evolution of H2 that can otherwise interfere with the growth and structure of Ni deposits. The operating conditions selected for Ni electrodeposition into porous Ag/GDC scaffolds and onto Ag/GDC membranes were: 1.1 M Ni2+ concentration in the Watts bath; potential range between −0.65 to −1.0 V vs. Ag/AgCl; temperature 55 °C; SDS as the surfactant; pH 4.0 ± 0.2; and agitation rate 500 rpm.

Structural and morphological properties.-Characterization of Ni electrodeposition on Ag foil.-XRD and EDX measurements confirmed the presence of Ni for all deposition regimes from low to high current densities (within the range of −0.65 to −2.5 V). Figure 6 compares X-ray diffractograms of Ni films electrodeposited on Ag foils using pulse currents of −0.008 to −0.83 A cm−2. The XRD data were normalized to the highest intensity of the Ag foil as the reference. As expected, all Ni electrodeposited films were found to be pure Ni, with no peaks of other phases observed. The Ni electrodeposits, with strong orientation along the (111) direction (ICSD No. 064989) at lower current densities (<−0.2 A cm−2), changed to the (200) and (220) directions at higher current density. This might be attributed to the existence or formation of different interfacial inhibitors. Figure 7 shows a series of SEM micrographs of Ni films electrodeposited on Ag foil prepared using pulse current plating at different current densities.
The surface morphology of Ni deposits changed from fine grains to pyramidal-shaped growth and then cauliflower-shaped morphology when the current densities increased from −0.008 to −0.83 A cm −2 . The low current densities led to very slow growth rates of Ni. This suggests that fine Ni grains form and accumulate in the early stage of Ni electrodeposition. The high current densities contributed to excess co-evolution of H 2 that caused defects in the Ni films ( Figure 8a) and even delamination of the films from the substrate. Ni dendrites were seen on the cauliflower-like Ni films at higher current densities (Figure 8b), accompanied by hydrogen evolution. For these reasons, the deposition of Ni at high current density is not recommended. From elemental analysis of the total Ni deposited dissolved in aqua regia, a current efficiency of ∼90% was found in the range of −0.008 to − 0.08 A cm −2 (∼−0.65 to −1.0 V vs Ag/AgCl). Nickel deposition using different electrodeposition modes (CED and PED ).-Ni electrodeposition on Ag/GDC scaffolds was conducted using chronoamperometry at −0.8 V vs. Ag/AgCl. The deposition was conducted in direct and pulse mode for 69 s (total applied voltage time). The pulse off-time and on-time were 400 ms and 900 ms respectively. Figure 9 shows the elemental distribution mapping of Ni deposited at −0.8 V in both CED and PED modes. Both deposition modes enable Ni deposition throughout the porous scaffold, but when using CED mode more Ni is observed deposited on the top of the porous scaffold (Figure 9b). This layer can block the transport of Ni 2+ into the innermost pores where the local ion concentration can then be depleted. However, Ni deposits at the top of the scaffold using PED mode (Figure 9a) were less evident than in CED mode. Furthermore, the distribution of Ni was more homogeneous and dense in the pores. This suggests that during the off-time, the open pores at the top of the scaffold allow the diffusion of Ni 2+ back into the pores. 
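A pulse train's duty cycle sets the time-averaged current, which is why PED deposits more slowly at equal peak current but lets Ni2+ diffuse back into the pores during the off-time. A quick sketch of that arithmetic (note the on/off values: one passage in the text gives 400 ms on and 900 ms off, another the reverse):

```python
def duty_cycle(t_on_ms: float, t_off_ms: float) -> float:
    """Fraction of each pulse period during which current flows."""
    return t_on_ms / (t_on_ms + t_off_ms)

def mean_current(peak_current_a: float, t_on_ms: float, t_off_ms: float) -> float:
    """Time-averaged current of a rectangular pulse train."""
    return peak_current_a * duty_cycle(t_on_ms, t_off_ms)

# With 400 ms on and 900 ms off, charge passes for only ~31% of the
# elapsed time, so PED deposits ~3x slower than CED at the same peak current.
```

The remaining ~69% of each cycle is the replenishment window that the text credits for the more homogeneous Ni distribution inside the scaffold pores.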
39,46 Since our interest is to engineer and control the Ni microstructure in Ag/GDC scaffolds as SOFC anodes, the PED mode of electrodeposition is preferable in order to minimize pore blocking on the top of the scaffolds. However, in the case of catalytic membranes, either PED or CED can be used to deposit Ni layers onto the dense, planar Ag/GDC. A symmetrical cell was fabricated using the electrodeposition conditions identified from the CV study, and its electrochemical performance was measured and characterized. Figure 10a shows the impedance response of the symmetrical cell operating at 600-750 °C in humidified H2 (97 vol% H2, 3 vol% H2O) over the frequency range of 10−1-106 Hz. These impedance spectra were fitted using the equivalent circuit shown in Figure 10b to estimate the polarization resistance values of the cell. Two semicircles were observed, indicating the polarization of the cell at high, intermediate and low frequencies. The intercept of the impedance spectra with the real axis at high frequencies corresponds to the ohmic resistance (R ohm). The difference between the low-frequency intercept of the impedance curve with the real axis and R ohm gives the total area specific resistance (ASR) of the electrode. These values were extracted from the fitting and then normalized by dividing by two and multiplying by the electrode area (1.15 cm2). R ohm decreases as the temperature increases, indicating that it is primarily related to the ionic conductivity of the electrolyte. 47 The ASR of the electrode (inset table in Figure 10), including polarization resistance at intermediate (R m) and low (R l) frequencies, decreased significantly from 2.31 to 0.57 Ω cm2 when the temperature increased to 750 °C. The lowest ASR obtained in this study was 0.57 Ω cm2, approximately half the value reported in our previous study (1.12 Ω cm2), 16 fabricated using a similar GDC scaffold.
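The normalization described above (subtract the ohmic intercept, divide by two for the two identical electrodes of the symmetrical cell, multiply by the electrode area) is a one-liner. A sketch, with illustrative resistance values rather than the paper's fitted data:

```python
def electrode_asr(r_low_freq_ohm: float, r_ohmic_ohm: float, area_cm2: float) -> float:
    """Per-electrode area-specific resistance (ohm cm2) from a symmetrical
    cell: the total polarization resistance (low-frequency intercept minus
    ohmic intercept) is split between the two identical electrodes and
    normalized by the electrode area."""
    return (r_low_freq_ohm - r_ohmic_ohm) / 2.0 * area_cm2
```

Dividing by two is valid only because both electrodes of the symmetrical cell are nominally identical; a full cell with different anode and cathode would need a separate reference electrode or model-based deconvolution instead.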
This result indicates that the identified electrodeposition conditions used for depositing Ni onto the Ag/GDC scaffold improved the performance of the anode, though the value remains greater than data reported in the literature for Ni/GDC and Ni/YSZ anodes fabricated using conventional and infiltration techniques (0.1-0.35 Ω cm2), 48-51 showing that more optimization is needed to further improve performance; in this regard many process variables remain to be explored.

Conclusions

The influence of the operating conditions of a Watts bath on Ni electrodeposition on Ag foils was investigated using CV. Ni started to electrodeposit on Ag substrates at ∼−0.65 V vs Ag/AgCl. More negative potentials allowed faster growth of Ni; however, this led to the co-evolution of H2 (beyond −1.0 V vs Ag/AgCl). Large amounts of co-evolved H2 resulted in defects in the deposited layers due to H2 voids. Nickel dendrites and delamination of the layers from the substrate were also observed as the current density increased. The suitable electrodeposition conditions were identified to be: 1.1 M Ni2+ in the Watts bath; a potential range of −0.65 to −0.95 V vs. Ag/AgCl; a temperature of 55 °C; SDS as the surfactant; pH 4.0 ± 0.2; and agitation at 500 rpm; these maximize the deposition of Ni into porous Ag/GDC scaffolds and membranes. SDS was selected as the additive because it limited the co-evolution of H2 and produced a compact, homogeneous and bright coating. Ni deposition using PED within the suggested potential range was shown to be preferable due to its ability to (i) deposit evenly in the pores; (ii) reduce the potential for pore blockage on the top of porous scaffolds; and (iii) allow the transport of Ni2+ into the electrode region during the off period. In the case of partial oxidation membranes, both CED and PED can be used to deposit Ni layers on dense Ag/GDC substrates.
The electrochemical performance of the electrode produced using the identified conditions improved from 1.12 Ω cm² in a previous study16 to 0.57 Ω cm² in this study at 750 °C in humidified H2 (97 vol% H2, 3 vol% H2O).
Prefrontal-posterior coupling while observing the suffering of other people, and the development of intrusive memories

Witnessing the suffering of others, for instance, in hospital emergency rooms but also through televised images in news or reality programs, may be associated with the occurrence of later intrusive memories. The factors contributing to why some people develop intrusive memories and others do not are still poorly understood. N = 121 healthy women were exposed to film scenes showing the suffering of dying, severely injured, and mourning people while their EEG was recorded. Individuals showing greater decreases of functional coupling between prefrontal and posterior cortices (greater decreases of EEG beta coherences) reported more intrusive memories of the witnessed events. This was shown for intrusions in the short term (immediately after viewing the film) as well as in the medium term (intrusive memories over 1 week). The findings illuminate brain mechanisms involved in the encoding of information in ways that make intrusive memories more likely.

Descriptors: Intrusions, Implicit memory, EEG coherence, Intrahemispheric communication, Top-down modulation

Experiencing horrifying events such as a violent assault or a severe road traffic accident oneself, but also witnessing death and suffering of others, has been associated with the occurrence of later intrusive memories. They are, for instance, frequently reported by medical and paramedical personnel who work in hospital emergency rooms, and several studies have suggested that even observing the suffering of other people through televised images can lead to the development of intrusive memories (Breslau, Bohnert, & Koenen, 2010;Durham, McCammon, & Alison, 1985;Schlenger et al., 2002;Schuster et al., 2001). Indeed, the recent Diagnostic and Statistical Manual of Mental Disorders, 5th ed.
(American Psychiatric Association, 2013) has amended the criteria for posttraumatic stress disorder (PTSD) to include the viewing of traumatic film footage in the line of work as an index traumatic event. Intrusive memories, or intrusions, are unwanted spontaneously occurring recollections of past events. They are a symptom of various psychopathologies such as PTSD as well as depression, obsessive-compulsive disorders, and other anxiety disorders (Brewin, Gregory, Lipton, & Burgess, 2010;Holmes & Hackmann, 2004), but also occur in everyday life and can vary on a continuum from only mildly burdensome forms to very distressing forms of flashbacks (Horowitz, 1975;Krans, Näring, Becker, & Holmes, 2009). In contrast to deliberately retrievable memories, which are verbally reportable with voluntarily recalled explicit memory content, intrusive memories are thought to arise from a more implicit or involuntary memory (Brewin, Dalgleish, & Joseph, 1996;Ehlers & Clark, 2000;Holmes, Brewin, & Hennessy, 2004). To date, little evidence exists as to why certain people develop intrusive memories and others do not. It has been argued that individual differences in the processing of sensory information during a distressing event may lead to differences in the proportion of explicit and implicit memories of the event, which may be crucial for the development of intrusive memories (Holmes et al., 2004). There is some evidence suggesting that these individual differences in peritraumatic processing may be related to distinct variations of the functional coupling between prefrontal and more posterior cortical areas while experiencing or witnessing a distressing event. The acquisition of explicit memories is accompanied by activity in large-scale cortical networks, including an increase of functional coupling between prefrontal and more posterior, perception-related cortical areas (McIntosh, Rajah, & Lobaugh, 1999;Rose, Haider, & Büchel, 2010;Wessel, Haider, & Rose, 2012).
A recent model concerning conscious and nonconscious processing, based on the global workspace model of Baars (1988, 1997, 2002), suggested that sensory information only enters awareness and becomes verbally reportable if bottom-up stimulus strength and top-down attentional amplification are mobilized to a sufficient magnitude (Dehaene & Changeux, 2011;Dehaene, Changeux, Naccache, Sackur, & Sergent, 2006). Specifically, it was proposed that sensory input leads to cortical feedforward activation ensuing from early extrastriate areas. Since this bottom-up activation declines depending on strength and feedforward duration, it is not thought to be sufficient for conscious processing. Orientation of attention to the incoming sensory information results in top-down amplification of the activation and ignition of a large-scale prefrontal-posterior network. Through prefrontal feedback control achieved by long-distance coupling between prefrontal cortical regions and posterior sensory areas, the activation is maintained to a sufficient degree to enable evaluation of the sensory information and its encoding in long-term memory. Sensory input that is processed in this manner becomes explicit memory content, whereas input that does not reach the threshold for conscious processing because of weak bottom-up stimulus strength or insufficient top-down modulation by the prefrontal cortex will be stored as implicit memory. In line with the model by Dehaene and colleagues (Dehaene & Changeux, 2011;Dehaene et al., 2006), it has been proposed that if insufficient attention is directed towards sensory information, this information will be encoded as implicit memory. Severe emotional distress, for example, during experiencing or witnessing horrifying events, narrows the attentional focus and therefore limits conscious processing and explicit memory encoding.
Information that is encoded as implicit memory may later "pop up" as intrusions (Brewin, 2001;Brewin et al., 1996;Cohen, Cavanagh, Chun, & Nakayama, 2012;Holmes et al., 2004). Taken together, there is evidence that the functional connectivity between prefrontal and more posterior cortical regions during the processing of incoming sensory information may be related to the formation of implicit memories and, consequently, to the development of intrusions. We seek to test this possibility in the current study. Intrusive memories are also more affect laden than explicit memories, usually implicating the feelings that the distressing event had initially evoked (Brewin, 2001;Ehlers & Clark, 2000). In the context of affective processing, the prefrontal cortex receives highly processed sensory information and in turn exerts feedback control on posterior association cortices in order to further modulate representations of affectively relevant information (Miskovic & Schmidt, 2010;Rudrauf et al., 2008; see also Vuilleumier & Driver, 2007). Recently, it was demonstrated that an increase in functional connectivity between prefrontal and more posterior cortical areas, as is also expected with explicit memory encoding, attenuates the emotional impact that an event has on the individual (Papousek et al., 2013;Reiser et al., 2012). It has been proposed that loosening of modulatory control over emotionally laden sensory input, by further opening the perceptual gate, leaves individuals relatively unprotected from becoming affected by the perception of emotional information, and that these modulatory processes may not only influence the perception and experience of affect but also the encoding and later recall of emotional content (Miskovic & Schmidt, 2010;Papousek et al., 2013;Reiser et al., 2012). 
Therefore, reduced top-down modulation of distressing sensory information, indicated by reduced prefrontal-posterior coupling, may also result in more affect-laden memories (see also Brewin, 2001). These relationships further support the notion that individual differences in prefrontal-posterior coupling during the perception of distressing information may be a strong candidate for explaining variability in the development of intrusions. Coupling and de-coupling of prefrontal and posterior cortical regions in the context of relevant modulatory processes can be assessed by electroencephalogram (EEG) coherence measures (Miskovic & Schmidt, 2010;Papousek et al., 2013;Reiser et al., 2012;Wessel et al., 2012). Increases of EEG coherences are considered to indicate increased connectivity and functional communication between two neuronal populations whereas decreases indicate a decline in functional communication (Fries, 2005;Srinivasan, Winter, Ding, & Nunez, 2007). In the current study, participants were exposed to film scenes showing dying, severely injured, and mourning people. The film content was similar to that witnessed by television viewers watching programs such as news coverage of road traffic accidents, or programs about the police or ambulance service work. We hypothesized that those individuals showing relatively greater decreases of functional coupling of prefrontal and temporoparietal cortical regions while viewing the film would report more intrusive memories of the film content. Particularly, changes of prefrontal-posterior EEG coherences in the right hemisphere were expected to be predictive of intrusive memories. Previous research on the relevance of prefrontal-posterior coupling to explicit memory encoding as well as to affective processing indicated greater importance of the right than of the left hemisphere in relevant modulatory processes (Papousek et al., 2013;Reiser et al., 2012;Wessel et al., 2012).
The occurrence of intrusive memories was assessed in the short term (directly after viewing the film) and in the medium term (during the subsequent week), in order to examine whether intrusions persisted at least for some time.

Participants

One hundred and twenty-four right-handed female university students completed the experiment with all required data. A female-only sample was chosen because previous research indicated that women are more reactive in a lab situation to negative emotional stimulation than men, particularly if the stimulation is threatening or traumatic (Whittle, Yücel, Yap, & Allen, 2011). Only women who confirmed that they had not had real traumatic experiences related to car crashes, surgery, or death of a close person within the past 12 months and did not have a neuropsychiatric disease were admitted to the study. Individuals who reported using psychoactive medication or whose scores on the Beck Depression Inventory exceeded the threshold for severe depressive symptoms were excluded from the study (n = 3). The final sample comprised 121 women aged 18 to 59 years (M = 22.5, SD = 4.9). Handedness was assessed by a standardized handedness test (performance test; Papousek & Schulter, 1999;Steingrüber & Lienert, 1971). Participants were requested to come to the study well rested and to refrain from alcohol for 12 h, and from coffee and other stimulating beverages for 2 h, prior to their lab appointment. The study was performed in accordance with the 1964 Declaration of Helsinki and the American Psychological Association's Ethics Code and was approved by the local ethics committee. Participants gave their written and informed consent to participate in the study.

Stimulus Material

Participants were exposed to a film (approximately 10 min in length) containing 11 clips that have been used in previous studies as an experimental analogue of psychological trauma.
They have been shown to induce significant levels of emotional distress (Holmes & Bourne, 2008;Holmes, James, Coode-Bate, & Deeprose, 2009;Holmes, James, Kilford, & Deeprose, 2010). EEG was recorded during the last 5 min of the film, during which there were five clips depicting several car accidents and a rampaging elephant injuring people at a circus. These clips included graphic scenes of severely injured, dying, and mourning people. The film was displayed on a 21″ computer monitor viewed at 100 cm and was presented without sound, so that the stimulation was dominated by visual information for all participants. The neutral visual display, used for obtaining the reference data, showed a green circle (diameter 90 mm) at the center of the screen.

Self-Report Measures

Intrusive memories assessed in the short term. Two minutes after viewing the film, participants were asked to rate the frequency of involuntarily appearing images from the film in their mind's eye that occurred during the 2-min rest period following the film (4-point rating scale ranging from 0 (not at all) to 3 (six times or more); M = 1.2, SD = 1.0). Participants indicated their judgment via a click with the mouse.

Intrusive memories assessed in the medium term. The intrusion subscale of the Impact of Event scale (IES-R; German adaptation, Maercker & Schützwohl, 1998) was used, adapted to refer to the film as the index event. It consists of seven items referring to the occurrence of intrusive thoughts, nightmares, intrusive feelings, and imagery associated with the event over the week. Scores have a potential range from 0 to 35 (in the present sample, M = 6.4, SD = 4.8; internal consistency reliability α = .78). Additionally, participants were asked to keep a pen and paper daily diary in which they recorded spontaneous intrusive images of the film over a period of 1 week and also described each intrusion's content for verification that they indeed matched the film content (cf.
Bourne, Mackay, & Holmes, 2013;Holmes & Bourne, 2008;Holmes et al., 2004, 2009, 2010). Only entries that passed this verification were included in the analysis. The number of diary entries qualifying as intrusive images ranged from 0 to 11 (M = 1.7, SD = 2.0).

Depression. Depressed mood was assessed using the Center for Epidemiologic Studies Depression Scale (CES-D; German adaptation, Hautzinger & Bailer, 1993). It is comprised of 20 items referring to mood and attributions over the past week and is designed for measuring subclinical depressive experiences in the general population (Wood, Taylor, & Joseph, 2010). Scores have a potential range from 0 to 60 (in the present sample, M = 11.3, SD = 6.0, α = .78).

Subjective impact of the stimulus. Participants rated the degree to which the film had affected them on a 10-cm horizontal visual analogue scale. The responses were scored in millimetres from 0 (not at all) to 100 (extremely); M = 73.1, SD = 22.3.

EEG Recording and Quantification

The EEG was recorded from 19 channels according to the International 10-20 system, using a Brainvision BrainAmp Research Amplifier (Brain Products; sampling rate 500 Hz, resolution 0.1 μV) and a stretchable electrode cap, and was rereferenced offline to a mathematically averaged ears reference (Essl & Rappelsberger, 1998;Hagemann, 2004). Impedance was kept below 5 kΩ for all electrodes. Horizontal and vertical electrooculogram (EOG) measures were obtained for identification of ocular artifacts. All data were inspected visually, in order to eliminate intervals in which ocular or muscle artifacts occurred. All participants had at least 30 s of artifact-free data in each of the recording periods and in each of the electrode positions of interest. The mean numbers of artifact-free epochs were M = 116.8 (SD = 44.8) for the baseline recording and M = 215.0 (SD = 109.1) for the film recording.
Artifact-free EEG data were submitted to fast Fourier analysis using a Hanning window (epoch length 1 s, overlapping 50%; low-cut filter 0.016 Hz). Spectral coherence (Fisher's z transformed) was obtained in the beta band (13-30 Hz) using the quotient of the cross spectrum (CS) and the auto spectra according to the following equation:

Coh(c1,c2)(f) = |CS(c1,c2)(f)|² / (CS(c1,c1)(f) · CS(c2,c2)(f))

Coh(c1,c2)(f) denotes the coherence at frequency f between electrodes 1 and 2, which can vary between 0 and 1.

Procedure

After completing the handedness test and the CES-D, participants were seated in an acoustically and electrically shielded examination chamber, and electrodes were attached. Participants were then instructed that, after a short recording period during which they should watch the green circle on the screen (2 min), they would see a film to which they should direct their whole attention. They were asked to view the film as if they were really there, like a bystander at the scene of the events, and to not close their eyes or look away. The film would be followed by another 2-min rest period. Subsequently, the short-term intrusion rating and the rating of the degree to which the film had affected the participant appeared on the screen, which the participants completed using the computer mouse. The technical equipment and the experimenter were located outside the EEG chamber. The participants were continuously monitored by a camera. Participants then were instructed to keep a daily diary for 1 week, in which they recorded their intrusions of the film scenes. On return to the laboratory 1 week after the EEG recording, participants delivered the diary and completed the IES-R intrusion scale.1 See Figure 1 for an overview of the study design.
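As an illustrative sketch (not the authors' analysis code), the band-averaged magnitude-squared coherence between two channels can be computed with SciPy's Welch-based estimator, using 1-s Hanning windows with 50% overlap as described above:

```python
import numpy as np
from scipy import signal

def beta_coherence(x, y, fs=500.0, band=(13.0, 30.0)):
    """Mean magnitude-squared coherence of two EEG channels in a band.

    Settings mirror the paper's description: 1-s Hanning-windowed
    epochs (nperseg = fs samples) with 50% overlap. Returns a value
    between 0 (no linear coupling) and 1 (perfect coupling).
    """
    f, cxy = signal.coherence(x, y, fs=fs, window="hann",
                              nperseg=int(fs), noverlap=int(fs) // 2)
    mask = (f >= band[0]) & (f <= band[1])
    return float(cxy[mask].mean())
```

A Fisher z transform (e.g., np.arctanh applied to the coherence magnitude) can then be applied before group statistics; the paper does not spell out its exact variant, so that step is left out here.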
Statistical Analysis

Following previous relevant research (Papousek et al., 2013;Reiser et al., 2012), linear regressions were conducted using the EEG beta coherence during the reference period preceding the film to predict the coherence during viewing the film, in order to calculate residualized change scores. These were used as an index of state-dependent decreases or increases of intrahemispheric coherence in response to observing the suffering of other people. This was done to ensure that the analyzed residual variability was due to the experimental manipulation, and not to individual differences in baseline levels, and to control for measurement error inherent in the use of repeated measures of the same kind (e.g., Linden, Earle, Gerin, & Christenfeld, 1997;Steketee & Chambless, 1992). In the following, the abbreviation "Δcoh" will be used for these change-of-coherence scores. Negative scores indicate a decrease in prefrontal-posterior coherence; positive scores indicate an increase. To evaluate whether interindividual differences in the coherence changes during observing the suffering of other people may predict the later occurrence of intrusive memories, multiple regression analyses were conducted, with one of the indicators of intrusive memories (short-term rating, IES-R intrusion scale, or diary) as the dependent variable and the change-of-coherence score (Δcoh) and depression as predictors. Depression was controlled because there is evidence that depressed mood can affect the likelihood of intrusive memories (Brewin, Reynolds, & Tata, 1999;Brewin, Watson, McCarthy, Hyman, & Dayson, 1998). Entering Δcoh and depression simultaneously in the model allowed us to determine the unique contribution of each. A significant semipartial correlation (sr) for Δcoh indicates that Δcoh explained a significant amount of variance of intrusive memories, independently of depression. sr² indicates the amount of unique variance and thus the size of the unique effect of Δcoh.
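A minimal sketch of the two statistical devices used here, residualized change scores and semipartial correlations; the helper names are illustrative, not the authors' code:

```python
import numpy as np

def residualized_change(baseline, during):
    """Residuals of regressing the film-period score on the baseline
    score: the part of 'during' not predictable from 'baseline'
    (the change score described in the text)."""
    slope, intercept = np.polyfit(baseline, during, 1)
    return during - (slope * baseline + intercept)

def semipartial_r(outcome, predictor, covariate):
    """Semipartial correlation: correlate the outcome with the part
    of the predictor that is independent of the covariate."""
    unique_part = residualized_change(covariate, predictor)
    return float(np.corrcoef(outcome, unique_part)[0, 1])
```

Squaring the returned value gives sr², the unique variance attributable to the predictor, as described in the text.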
As previous research suggested lateralized effects of prefrontal-posterior coupling on implicit memory encoding (Rose et al., 2010;Wessel et al., 2012) and affective processing (Papousek et al., 2013;Reiser et al., 2012;Schellberg, Besthorn, Klos, & Gasser, 1990), coherence changes were analyzed separately for the left and the right hemisphere.

Intrusive Memories Occurring Immediately After Observing the Suffering of Other People

Δcoh in the right hemisphere in response to the film predicted the incidence of intrusive images in the 2-min rest period following the film, F(2,118) = 4.2, p < .05; β = −.21, p < .05. Independently from depressed mood, a stronger decrease of prefrontal-posterior coherence during watching the film was associated with a higher number of intrusive images. Depressed mood also predicted the number of intrusive images, with higher depression scores being related to more intrusive images (β = .19, p < .05). The analogous analysis with coherence changes in the left hemisphere yielded no significant results, F(2,118) = 2.0, p = .15; Δcoh: β = −.09, p = .33. See Table 1 for a summary of the semipartial correlations.2

1. Additional data were obtained for purposes related to other, nonoverlapping research questions. These include genetic data and EEG asymmetries, reported for a larger sample (first experimental session only) in Papousek et al. (2013).
2. Analyses in other frequency bands did not reach the significance level.

Figure 1. Study design overview. Participants were exposed to a film containing several scenes that had been used in previous studies as an experimental analogue of psychological trauma. EEG was recorded during the last 5 min of the film and during the 2-min reference period preceding the film. Short-term impact in terms of intrusive images occurring during the 2 min following the film was assessed immediately afterwards. Medium-term impact was assessed over a period of 1 week with a daily pen and paper diary in which participants recorded intrusive images of the film and using the intrusion subscale of the Impact of Event Scale (IES-R), administered 1 week after the initial test session.

Note. The table shows zero-order correlations (r) of depression with intrusive memories and coherence changes, and semipartial correlations (controlling for depression, sr) of Δcoh in the right and left hemisphere with intrusive memories. Negative coherence scores indicate a decrease in prefrontal to posterior coupling. The short-term rating captured intrusions occurring immediately after viewing the film. The medium-term measures captured the occurrence of film-related intrusive memories over the week. *p < .05. **p < .01.

Intrusive Memories Occurring During the Subsequent Week

The analysis revealed an association between Δcoh in the right hemisphere and the participants' scores on the IES-R intrusion scale, F(2,118) = 9.4, p < .001; β = −.23, p < .01. Independently from depressed mood, a stronger decrease of prefrontal-posterior coherence during watching other people suffer was associated with higher scores on the IES-R intrusion scale, indicating a higher incidence of film-related intrusive memories over the week. Depressed mood was also related to higher IES-R intrusion scores (β = .34, p < .001). Δcoh in the left hemisphere did not predict intrusive memories reported in the IES-R, F(2,118) = 6.4, p < .005; Δcoh: β = −.10, p = .25; depression: β = .31, p < .001. Semipartial correlations for Δcoh are shown in Table 1. Δcoh did not predict the frequency of intrusive memories reported in the diary (right hemisphere: F(2,118) = 0.7, p = .48; Δcoh: β = −.04, p = .70; left hemisphere: F(2,118) = 0.8, p = .42; Δcoh: β = −.06, p = .52).
Figure 2 shows the scatter plot of the correlation between changes of prefrontal-posterior EEG coherence while viewing the distressing film (Δcoh, right hemisphere) and film-related intrusive memories over the week (IES-R intrusion scale).

Supplemental Analyses

To exclude the possibility that the coherence changes might have been influenced by changes in myogenic activity, we calculated correlations between the Δcohs and the respective changes of the spectral power in the range of 65-75 Hz, which is presumed to be exclusively myogenic in origin (averaged across the used electrode positions). These correlations were small and not significant (right hemisphere: r = −.12, p = .21; left hemisphere: r = −.11, p = .24). Repeating the main statistical analysis with the changes in the 65-75 Hz frequency range additionally entered into the regression models did not change the initial findings (Δcoh right hemisphere: intrusion rating β = −.19, p < .05; IES-R intrusion scale β = −.24, p < .01; diary β = −.03, p = .80). To exclude the possibility that differences in the length of the recordings might have affected the results, correlations between Δcoh and the difference in the number of used (i.e., artifact-free) epochs between the reference recording and the film recording were calculated. Correlations were r = −.03 (p = .76) for the right hemisphere and r = −.12 (p = .21) for the left hemisphere. The difference of used epochs in the two recording periods did not correlate with any of the dependent variables (short-term intrusion rating: r = −.07, p = .47; IES-R intrusion scale: r = −.04, p = .65; diary: r = .06, p = .55). Repeating the main statistical analysis with the difference in the number of artifact-free epochs additionally entered into the regression models did not change the initial findings (Δcoh right hemisphere: intrusion rating β = −.21, p < .05; IES-R intrusion scale β = −.23, p < .01; diary β = −.04, p = .70).
To further evaluate the specificity of the findings, we also tested potential effects of diagonal EEG coherences between the left prefrontal and right posterior clusters, and between the right prefrontal and left posterior clusters, on the occurrence of intrusive memories, using linear regression analyses analogous to those in the main analysis. These coherences did not predict any of the dependent variables (all srs ≤ .10), rendering unlikely the influence of a strong source affecting the signals at anterior as well as posterior electrodes and thereby producing spurious coherences between prefrontal and posterior sites of one hemisphere. On average, EEG beta coherence decreased from the reference period to viewing the film (right hemisphere: t(120) = 7.1).

To illustrate the changes of prefrontal-posterior EEG coherence while observing the suffering of other people, prefrontal-posterior EEG coherences (beta frequency band, right hemisphere) during the reference period preceding the film and during viewing the film were calculated for participants scoring one standard deviation above and one standard deviation below the sample mean on the IES-R intrusion scale using linear regression (Figure 3). Figure 3 illustrates that prefrontal-posterior coherence decreased during watching the film in individuals with high scores on the IES-R intrusion scale, whereas it did not decrease in individuals who reported few film-related intrusions over the week. A highly similar pattern was observed relating to the short-term intrusion rating immediately following the film. In addition, we show a descriptive illustration of the EEG power spectra during the neutral reference period and during the stressful film, calculated for participants scoring one standard deviation above and one standard deviation below the sample mean on the IES-R intrusion scale using linear regression (Figure 4).
The correlations of the retrospective rating of the degree to which the participants felt affected by the film with the intrusion rating briefly after viewing the film and with the IES-R intrusion scale were r = .30 (p < .001) and r = .37 (p < .001), respectively. No significant correlation was observed between the rating and the number of intrusive memories reported in the diary (r = .15, p = .10). Additionally entering the rating in the regression analyses did not change the statistical results for Δcoh (significant results remained significant and nonsignificant results remained nonsignificant), indicating that the effect of the EEG coherence changes on intrusive memories was not explained by the degree to which the participants felt affected by the film.

3. Analyses in other frequency bands did not reach the significance level.

For descriptive purposes, a topographic illustration of the highest correlations (sr > .20) between coherence changes of single electrode pairs and film-related intrusive memories over the week (IES-R intrusion scale) is given in Figure 5. As the present study follows a strictly hypothesis-driven approach, statistically evaluating theoretically motivated relations of only a few specific (aggregated) coherence data (Δcoh) to the variables referring to intrusive memories, no inferences are to be drawn from this purely descriptive illustration.

Discussion

The present study demonstrated that individuals showing greater decreases of prefrontal-posterior EEG coherences while observing the suffering of other people reported more intrusive memories of the witnessed events. This was shown for intrusions in the short term (immediately after viewing the film) as well as in the medium term, that is, for the occurrence of intrusive memories over one week, indicating that the effects persisted at least for some time.
The results are in line with the notion that individual differences in information processing directly while experiencing or witnessing a distressing event (i.e., peritraumatically) are related to the development of later intrusive memories of that event (Bourne et al., 2013;Holmes et al., 2004). In addition, the present findings provide first evidence for the relevance of individual differences in changes of prefrontal-posterior coupling in this particular context. As intrusive memories are considered at least in part to arise from implicit memories, it was argued that encoding sensory information in explicit memory would be protective against the occurrence of intrusive memories (Holmes et al., 2004). Cognitive research has shown that the transfer of sensory information to explicit memory is accompanied by the activation of a prefrontal-posterior cortical network, with prefrontal cortex exerting feedback control on more posterior cortices and thus on the representations of sensory input (Dehaene & Changeux, 2011;Dehaene et al., 2006;McIntosh et al., 1999;Wessel et al., 2012). There is also evidence that abnormal neural communication plays an important role in several brain disorders where consciousness and memory processes are impaired (Dehaene & Changeux, 2011;Uhlhaas & Singer, 2006).

Figure 4. EEG power spectra during the neutral reference period and the film depicting the suffering of other people. The figure shows estimated EEG power spectra for each electrode during the neutral reference period (gray line) and during the film (black line) in participants scoring one standard deviation below (left) and one standard deviation above (right) the sample mean on the IES-R intrusion scale, indicating film-related intrusive memories over the subsequent week. The bars below the spectra mark the EEG frequency ranges: theta (4-7 Hz): light gray bar, alpha (8-12 Hz): dark gray bar, beta (13-30 Hz): black bar.
In affective research, similar processes are assumed in the context of processing social-emotional information. Bottom-up processing of social-emotional information, which is automatically activated by perceptual input, is supposed to be modulated in a top-down fashion through an executive control component implemented in the prefrontal cortex (Decety & Moriguchi, 2007). The relevance of these proposed interactive processes has been supported by studies using magnetic resonance imaging methods as well as by studies using EEG coherences, which also demonstrated lesser emotional impact of social-emotional information when prefrontal-posterior communication during the stimulation was strong (Diekhof et al., 2011;Papousek et al., 2013;Reiser et al., 2012). The present results add to these findings, suggesting that, independently from the degree to which one immediately feels affected by the event, state-dependent changes of prefrontal-posterior coupling during distressing events influence the encoding and later recall of such events (see also Miskovic & Schmidt, 2010). Greater increases of prefrontal-posterior EEG coherences during the processing of negative social-emotional information have also been linked to lower scores in the propensity to ruminate (Reiser et al., 2012), a personality trait that is related to cognitive as well as emotional processes associated with depression (Joormann, 2006;Joormann & Gotlib, 2008;Koster, DeLissnyder, Derakshan & DeRaedt, 2011). Intrusive memories are automatically triggered by external or internal cues that may be only remotely associated with perceptual input during a traumatic event. It has been proposed, therefore, that intrusive memories arise when cue-driven activation of trauma-related representations in associative networks is poorly controlled, that is, when representations are very readily activated (Ehlers & Clark, 2000;Michael & Ehlers, 2007).
Related to that, previous research indicated that, in individuals showing prefrontal-posterior de-coupling during the processing of emotionally laden sensory input, emotion-congruent representations were more readily activated. For instance, individuals showing greater decreases of prefrontal-posterior EEG beta coherence during the observation of other people's expressions of cheerfulness rated cartoons as funnier than individuals in whom prefrontal-posterior coherence had decreased to a lesser extent or had increased. On the other hand, a decrease of prefrontal-posterior EEG coherence during exposure to negative affect expressions predicted greater difficulties in judging one's amusement, probably due to poorer inhibition of nascent negative feelings evoked by sympathy with the victims in the jokes (Papousek et al., 2013). Similarly, prefrontal-posterior functional de-coupling during perceptual processing may be linked to the ready activation of respective memory representations, which in turn is thought to be related to implicit memory encoding (Brewin, 2014; Michael & Ehlers, 2007). As expected, only EEG coherences in the right, but not in the left, hemisphere predicted the occurrence of intrusive memories. Previous studies have suggested a particular importance of prefrontal-posterior coupling in the right hemisphere for explicit versus implicit memory encoding (Rose et al., 2010; Wessel et al., 2012), although there is some inconsistency across studies (McIntosh et al., 1999). The use of exclusively visual material, which may be preferentially processed in the right hemisphere, may also play a role in this context. Viewed from the perspective of the affective research tradition, the present results may be in line with the predominant role of the right hemisphere in emotion processing, in particular in terms of the intensity of emotional arousal (Gainotti, 2000; Hagemann, Hewig, Naumann, Seifert, & Bartussek, 2005; Papousek, Schulter & Lang, 2009).
Most studies examining changes of functional coupling in affective contexts have reported stronger associations with EEG coherence changes in the right than in the left hemisphere (Papousek et al., 2013; Reiser et al., 2012; Schellberg et al., 1990; but see Miskovic & Schmidt, 2010). In contrast to intrusive memories occurring immediately after viewing the film and intrusive memories occurring over the week as assessed with the intrusion scale of the IES-R, EEG coherence changes during viewing the film did not predict the number of intrusions noted in the daily diary. Several reasons may account for this difference. The mean number of intrusive memories noted in the diary was somewhat lower than in previous studies (Holmes et al., 2010). The strong dependence on the conscientiousness and compliance of the participants gives rise to problems in the diary assessment (Bolger, Davis, & Rafaeli, 2003). It cannot be ruled out that at least some of the participants had not fully complied with the requirement to keep the diary daily, whereas the tasks administered in the lab did not have this limitation. Compliance ratings should be added in future studies. When considering the present findings, one has to keep in mind that, compared to experiencing or witnessing real traumatic events, the potential of the film used here to produce intrusive memories was relatively weak. However, it is all the more remarkable that plausible associations between individual differences in the functional coupling of prefrontal and posterior cortices and the development of intrusive memories were shown, not only immediately after exposure to the traumatic content, but also during the subsequent week.
At the same time, the findings suggest that viewing film content of the kind shown in television programs such as reality or news broadcasts can have some impact on the viewers (see also Breslau et al., 2010; Schlenger et al., 2002; Schuster et al., 2001), and indeed traumatic film footage viewed in work situations may even lead to PTSD symptoms (American Psychiatric Association, 2013). Thus, in addition to providing indications of processes that may be relevant to the development of clinical symptoms (see also Holmes & Bourne, 2008), the present experimental study may reveal neurological processes that are relevant to intrusive memories in everyday life and in the subclinical domain, which can also be burdensome (Krans et al., 2009). A limitation of the present study is that implicit (rather than explicit) memory encoding of the contents that later popped up as intrusions could not be verified empirically. In that respect, we have to rely on the theoretical background and empirical evidence outlined in the introduction (Brewin, 2001; Brewin et al., 1996; Cohen et al., 2012; Dehaene & Changeux, 2011; Dehaene et al., 2006; Holmes et al., 2004; McIntosh et al., 1999; Rose et al., 2010; Wessel et al., 2012). The study design may have introduced some error variance, because the film scenes may have been more prone to eliciting eye movements than the static image used for obtaining the reference data. However, it is important to consider that the research question of the present study did not refer to the main effect of condition on EEG coherence but to individual differences in state-dependent coherence changes, specifically to their relation to the occurrence of intrusive memories.
Thus, a greater amount of eye movements during the film than during the reference period could only have produced spurious findings if the difference in eye movements (or any other response to physical differences between the stimuli) had been systematically related to the dependent variables (i.e., individual differences in the occurrence of intrusive memories). Specifically, this would mean that participants in whom prefrontal-posterior coherence decreased would have had to show reduced eye movements while watching the film compared to the neutral display, and at the same time be more prone to intrusions. This seems very unlikely. The same applies to the different lengths of the recording periods, which could also only have influenced the findings if there had been a systematic covariation of the difference between the number of artifact-free epochs in the two recording periods with the coherence changes as well as with the dependent variables. No such correlations were present. A potential limitation of the study is that volume conduction artifacts cannot be completely excluded when exploring EEG coherences. The absence of correlations between diagonal (left-right and right-left) prefrontal-posterior coherences and the dependent variables provides some indication that the findings were not explained by a strong source producing spurious coherences by influencing the signals at anterior as well as at posterior electrodes. However, more research, preferably also using other methods to test for individual differences in intrahemispheric coupling, will be needed to clarify more exactly the brain mechanisms underlying the present findings (e.g., Nolte et al., 2004). Further, the explanatory power of the findings is limited by the modest size of the effects. Large effects are generally not to be expected in brain research, because all psychological processes always involve several brain structures and mechanisms.
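The coupling measure discussed throughout, and the volume-conduction concern raised above, can be made concrete with a short sketch. This is a minimal illustration, not the authors' analysis pipeline: the sampling rate, segment length, and band limits are assumptions, and SciPy's Welch/CSD estimators stand in for whatever software was actually used. The imaginary part of coherency (in the spirit of Nolte et al., 2004) is shown alongside ordinary magnitude-squared coherence because it is insensitive to zero-lag, volume-conducted coupling.

```python
import numpy as np
from scipy.signal import csd, welch

def coherence_measures(x, y, fs=256, nperseg=512):
    """Magnitude-squared coherence and imaginary coherency between two EEG
    channels (e.g., one prefrontal and one posterior electrode).
    fs and nperseg are illustrative assumptions, not the study's settings."""
    f, pxx = welch(x, fs=fs, nperseg=nperseg)
    _, pyy = welch(y, fs=fs, nperseg=nperseg)
    _, pxy = csd(x, y, fs=fs, nperseg=nperseg)
    coh = np.abs(pxy) ** 2 / (pxx * pyy)      # 0..1; inflated by zero-lag volume conduction
    icoh = np.imag(pxy / np.sqrt(pxx * pyy))  # imaginary coherency: blind to zero-lag coupling
    return f, coh, icoh

def band_mean(f, values, lo, hi):
    """Average a spectral measure over one EEG band, e.g. beta (13-30 Hz)."""
    mask = (f >= lo) & (f <= hi)
    return values[mask].mean()
```

A state-dependent coupling change in the sense used here would then be, for example, the beta-band `band_mean` during the film minus the same quantity during the neutral reference period, computed per participant.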
Moreover, the potential of the procedure used here to evoke intrusive memories was only moderate. Of course, ethical considerations play a major role in this context, because studying brain processes during the experience or witnessing of real trauma is clearly impossible. In conclusion, merging cognitive and emotional brain research (Isaac & Bayley, 2012), the findings of the present study illuminate novel brain mechanisms involved in the encoding of information in ways that make intrusive memories more likely. In contrast to research on hypoactivation of brain areas and its relation to deficits in cognitive and affective functioning, the importance of functional connectivity changes between cortical units to personality and psychopathology has been relatively sparsely examined to date. The present study adds to the evidence that investigating interindividual differences in cortico-cortical communication may indeed provide insight into mechanisms not fully understood so far, such as the development of intrusive memories. Given the distress they can cause, understanding such underlying mechanisms is crucial.
Revision of European species of the genus Rhabdomastix (Diptera: Limoniidae). Part 2: Subgenus Rhabdomastix s. str. The second and final part of a revision of the European species of the genus Rhabdomastix Skuse, 1890 is presented. The subgenus Rhabdomastix s. str. is revised. Seven species are redescribed: Rhabdomastix (Rhabdomastix) japonica Alexander, 1924, R. (R.) laeta (Loew, 1873), R. (R.) borealis Alexander, 1924, R. (R.) edwardsi Tjeder, 1967, R. (R.) subparva Starý, 1971, R. (R.) hirticornis (Lackschewitz, 1940) and R. (R.) beckeri (Lackschewitz, 1935). Three new synonyms are proposed. Lectotypes of four pertinent nominal species are designated. Descriptions are provided of six species, viz. R. (R.) laetoidea sp. n. (Czech Republic, Slovakia, Bulgaria, Ukraine), R. (R.) crassa sp. n. (France, Czech Republic, Slovakia), R. (R.) corax sp. n. (Bulgaria, Greece), R. (R.) eugeni sp. n. (France, Switzerland, Germany, Czech Republic, Slovakia, Italy, Romania, Bulgaria, Greece, Ukraine, Armenia), R. (R.) filata sp. n. (Bulgaria, Greece, European Russia, Turkey, Georgia, Armenia) and R. (R.) georgica sp. n. (Georgia). Male and female terminalia are illustrated for all the species, and a key to species is appended.

INTRODUCTION

In the first part of this revision published recently (Starý, 2003), the taxonomic history of the genus Rhabdomastix Skuse, 1890 was reviewed, its classification outlined and re-assessed, and a new subgenus, Lurdia Starý, 2003, was established. Nine European Lurdia species were treated, and seven of these were described as new. The second instalment, now presented, deals with the nominotypical subgenus Rhabdomastix s. str., as defined in the first part, i.e. in a broad sense, comprising the majority of species of the former subgenera Rhabdomastix s.
str., Palaeogonomyia Meunier, 1899 and Sacandaga Alexander, 1911. Generally, the same morphological terminology is used as in the first part of this revision (Starý, 2003). The following should be added or repeated: Rs length (at half of Rs length or beyond it in European species). Sc2 lacking or faintly apparent at tip of Sc1 or some distance before it. R3 very short, from one-fifth to one-eighth of R4 length, vertical or virtually so, forming a 90° angle with R4. Discal cell generally hexagonal, with proximal section of M3+4 (forming lower side of discal cell) distinctly angled near mid-length, at attachment of m-cu. Distal sections of M1+2 and M3 (beyond discal cell) considerably arched.

Abdomen. Male terminalia (cf. Fig. 17 and other relevant figures): Segment 9 generally parallel-sided, simple, with at most a small lobe dorsally at posterior margin on each side of median interruption (except in beckeri, Fig. 44). Gonostyli generally shorter than those in Lurdia; outer one terminating in curved apical spine, or broadly rounded and without apparent spine; inner one fleshy, generally conical, variously swollen, generally a little shorter than outer gonostylus. Interbase membranous, pale, mostly dilated before apex into variously shaped apical blade, not connected membranously to its counterpart at about one third of its length. Female terminalia (cf. Fig. 19 and other relevant figures) with cercus and hypogynial valve of moderate length, the former at most slightly exceeding length of tergite 10. Spermathecae two or three in number, spherical, oval, or reniform, subequal in size to each other, or, if three, one spermatheca sometimes tending to be smaller than the other two.

Discussion. The two subgenera, Lurdia and Rhabdomastix s. str., were compared in the first part of this revision (Starý, 2003). It should be noted that the venational pattern in Rhabdomastix s.
str., with its vertical (subvertical) R3, combined with the hexagonal discal cell and distinctly arched M1+2 and M3, is quite unique within the Limoniidae. The venation may vary in details even within a species, especially in aspects such as the verticality of R3 or the length ratio of R3 to R4. Rhabdomastix s. str. species, as those of Lurdia, are mainly distinguished by details in the structure of the male and female terminalia, in the latter case predominantly by the number and size of the spermathecae. Other internal structures of the female terminalia, such as the infra-anal (supravaginal) plate, sternum 9 and genital fork (vaginal apodeme), are not sufficiently differentiated, and, of these, only the vaginal apodeme may sometimes provide some species-specific peculiarities (cf. Figs 18, 25, 28). If from the same region, some species may well be separated by the body colouration. However, this only applies to dry-mounted specimens.

Distribution. Worldwide.

REVISION OF EUROPEAN SPECIES OF THE SUBGENUS RHABDOMASTIX S. STR.

Compared to the subgenus Lurdia, representatives of the European Rhabdomastix s. str. species are more diverse structurally, representing several species clusters, or evolutionary trends. However, within a cluster, the taxonomic situation parallels that in Lurdia: species may be very similar to each other in both external and genital characters that, in addition, vary to a certain degree. Moreover, some Rhabdomastix s.
str. species have, or are presumed to have, wide ranges of distribution, and some show infraspecific variation in body colouration, a trait not observed in Lurdia. Consequently, to properly recognise species limits, large series of specimens have been examined from Central Europe where, fortunately, many species occur sympatrically, even syntopically. This was combined with the examination of additional specimens from more remote areas. Without this extensive and geographically diverse material, totalling over 2500 specimens, a re-assessment of the European Rhabdomastix s. str. species would not have been possible. This revision of the European Rhabdomastix s. str. has been in progress, with interruptions, for an exceedingly long time. Many specimens from various institutions were examined as early as the late 1970s. In the course of the study, some species concepts were modified. Therefore, as many relevant specimens as possible, especially types, were re-examined quite recently (2002-2003). In all, thirteen Rhabdomastix s. str. species are treated here. Seven species are redescribed, viz. R. (R.) japonica Alexander, 1924, R. (R.) laeta (Loew, 1873), R. (R.) borealis Alexander, 1924, R. (R.) edwardsi Tjeder, 1967, R. (R.) subparva Starý, 1971, R. (R.) hirticornis (Lackschewitz, 1940) and R. (R.) beckeri (Lackschewitz, 1935). Three new synonyms are proposed: R. hilaris Edwards, 1938 and R. cunctans Tjeder, 1955 are treated as junior synonyms of R. (R.) japonica, and R. lapponica Tjeder, 1936 as a junior synonym of R. (R.) borealis. The latter synonymy had been tentatively suggested by Savchenko et al. (1992). Lectotypes of hilaris, laeta, hirticornis and beckeri are designated. Descriptions are provided for six new species, viz. R. (R.) laetoidea sp. n., R. (R.) crassa sp. n., R. (R.) corax sp. n., R. (R.) eugeni sp. n., R. (R.) filata sp. n. and R. (R.) georgica sp. n., the latter being extra-European. Some European Rhabdomastix s.
str. species have often been covered in the literature. The many species records listed below in the references sections under species headings are, however, largely suspect because of subsequently newly described species. Practically all the small, darkly coloured species had been identified as R. schistacea (Schummel, 1829), one of the species most commonly treated in the literature, until Tjeder (1967) described R. edwardsi. The taxonomic situation in Rhabdomastix at that time may best be illustrated by the fact that P. Lackschewitz, a distinguished student of Limoniidae who contributed considerably to a better knowledge of the group in Europe, had identified a series of specimens deposited in NHMW as belonging to R. schistacea (cf. Lackschewitz, 1940). Within this series, four species (edwardsi, crassa, filata, subparva) have now been differentiated (see respective Material examined sections). Schummel's description of Limnobia schistacea, based on a single female from Wrocław (Breslau), Poland, although clearly representing a Rhabdomastix s. str. (cf. Schummel, 1829, Tab. 2, Fig. 2), deals with a small species ("2½" = ~5.5 mm) having the head and thorax of slate colour ("schiefergrau"), a yellowish grey abdomen with segments seamed with yellowish, R3 more than its own length beyond the tip of R1, and A2 long, ending beyond the origin of Rs. This combination of characters is not known within the European Rhabdomastix s. str. Therefore, Limnobia schistacea is considered a nomen dubium (cf. also Starý & Rozkošný, 1970). Since literature references are as complete as possible for each species treated, the same is provided for L. schistacea at the end of this paper. Considering the material below, collected by me in the Czech Republic and Slovakia, many Rhabdomastix s.
str. species may seem to be common. The rich material available from these territories, including that of new species (laetoidea, crassa, eugeni), is, however, a result of many years' collecting activity at specific habitats to which these species are strictly confined, namely sandy or gravelly banks of streams. Actually, there is a single common and largely eurytopic species in Central Europe, R. (R.) subparva. Other regions may show different relations. The European Rhabdomastix species of the former subgenus Sacandaga were subdivided into three species groups by Savchenko (1982), the lurida, laeta and edwardsi groups. As the lurida group now represents the subgenus Lurdia (cf. Starý, 2003), and species such as R. hirticornis (formerly in the subgenus Palaeogonomyia) are treated here within Rhabdomastix s. str., the above concept had to be modified. Anyway, considering the world fauna, differences between species groups in Rhabdomastix s. str. should be considerably greater than those found between the laeta and edwardsi groups of Savchenko (1982). Based on various characters, three clusters, or species complexes, may be distinguished preliminarily within the European Rhabdomastix s. str., namely those centred around R. (R.) laeta, R. (R.) edwardsi and R. (R.) hirticornis. In addition, R. (R.) beckeri is a distinctive species considerably different from all the others, and its affinities remain in question.

R. (R.) beckeri. Antenna short, with pubescence on flagellomeres distinct (Fig. 10); palpus short; R3 more than its own length beyond tip of R1 (Fig. 3); A2 ending opposite to origin of Rs (Fig. 3); differs considerably from the species above by having milky wings, with narrow darker seams along veins, and many details in structure of male terminalia. It most probably belongs to a different species cluster.
As already noted, the classification above is preliminary, since the characters used have unequal value. The laeta and edwardsi complexes are well defined, based on several independent features, whereas the so-called hirticornis complex, although distinguished at once by the long male antennae, can hardly be assigned the same weight. The conspicuousness of the long antennae is responsible for their being generally overvalued as a taxonomic character. The length of the antennae, however, varies extensively within genera and subgenera of the chioneine Limoniidae, unlike most other characters.

Diagnosis. General colouration yellow to pale yellow, sometimes conspicuously patterned with deep dark brown on thorax, including three stripes on prescutum. Antenna short, with very short pubescence on flagellomeres. Wing broad. A2 ending beyond origin of Rs. Legs yellow throughout. Male terminalia with apical blade of interbase very broad and aedeagus long and slender. Female terminalia with three spherical, medium-sized spermathecae.

Colour. General colouration yellow to pale yellow, subshiny, with darker markings on thorax. Antenna dark brown, scape mostly yellow. Prescutum pale yellow laterally, with three broad darker stripes. Scutum and mediotergite (postscutellum) similarly darker, restrictedly patterned with yellow. Scutellum pale yellow. Pleuron mostly yellow, patterned with sulphur yellow in upper part, darker below, especially on lower portion of katepisternum and meron. Colouration of thorax practically identical in distribution of darker markings with that of the two following species (laeta, laetoidea); in contrast to these, however, considerably variable in actual hue of pattern, varying from little-distinct, yellowish brown to sharply pronounced, deep dark brown. Wing tinged with yellowish. Halter pale yellow. Legs yellowish brown to yellow throughout, femora not darkened distally. Abdomen yellowish brown to brown.

Head. Antenna (Fig.
5) comparatively short, not reaching to base of wing. Proximal three flagellomeres nearly spherical, following ones gradually narrowed and lengthened towards apex of antenna. Longest verticils on flagellomeres slightly exceeding length of their respective segments. Pubescence very short, suberect, subequal in length to half breadth of respective segments, or even shorter, distinct only on proximal four or five flagellomeres. Palpus short.

Thorax. Wing (Fig. 1) rather broad, about three times as long as broad, with comparatively short stalk. Sc1 ending before fork of Rs, at about three quarters of Rs length. Sc2 not apparent, or slightly so some distance before tip of Sc1. R3 about its own length, or less, beyond tip of R1. R4 varying in number of macrotrichia, but generally with only a few. A2 considerably sinuous, ending distinctly beyond origin of Rs. Halter moderately long, reaching to about posterior margin of abdominal tergite 2.

Abdomen. Male terminalia (Fig. 17). Segment 9 broader than long. Gonocoxite comparatively long and slender. Outer gonostylus short, less than half length of gonocoxite, gently and evenly arched, somewhat broadened before apex, with small apical spine. Inner gonostylus generally conical. Aedeagal complex as in Fig. 17. Interbase moderate in length, reaching to about half length of gonocoxite, abruptly expanded distally to form broad triangular, sometimes nearly quadrangular apical blade, microscopically serrate at distal margin. Aedeagus very long and slender, nearly twice length of vesica, the latter comparatively small and narrow. Apodeme of vesica spine-like in dorsal aspect, subequal in length to vesica. Female terminalia (Figs 19, 46). Cercus moderately broad, slightly exceeding length of tergite 10, gently upturned. Vaginal apodeme moderately broad. Spermathecae three, spherical, intermediate in size between those of R. (R.) laeta and R. (R.) laetoidea sp. n., with sclerotised parts of ducts thin, slightly shorter than spermathecal diameter. R. (R.) japonica, R.
(R.) laeta and R. (R.) laetoidea sp. n. represent a group of closely related and exceedingly similar species. They are identical in distribution of darker markings on the thorax. In the female holotype of R. (R.) japonica (and two other specimens from Japan, see Material examined), these markings are dark brown, similar to the condition described for R. cunctans from Europe. Such specimens with distinct dull or shiny markings on the thorax are sporadically represented in the material examined, from Sweden (holotype of cunctans), Switzerland, Germany, Italy and Algeria, and they show identity in structural characters with other specimens, paler in the pattern and more commonly collected, sometimes even at the same localities. This suggests that the variation is individual, independent of geographical distribution or ecological factors. This variation has not been observed in R. (R.) laeta and R. (R.) laetoidea sp. n., although it cannot be excluded (see note under laetoidea). A certain type of geographical variation may also be involved in R. (R.) japonica. British specimens (identified as hilaris) usually have a medium-dark pattern, with the markings somewhat diffuse, suffused with grey pruinosity, whereas members of Central European populations are mostly pale, with the pattern only slightly indicated. More material would be necessary from the eastern Palaearctic to decide whether specimens with the distinct pattern on the thorax are more frequent there than they are in Europe. In any case, a male from the Kuriles (see Material examined) has the pattern considerably paler than the Japanese specimens, as much as in Central European material. Wings are described as being broad in R. (R.) japonica (as they are in laeta); this, however, is also subject to a certain variation. First, females always have somewhat narrower wings than males, and, second, the smaller a specimen is, the narrower wings it has relative to its own wing length. R. (R.)
japonica is on average the largest of the species treated here. In general appearance, if not distinguished by a dark pattern on the thorax, specimens of R. (R.) japonica are exceedingly similar to R. (R.) laeta and R. (R.) laetoidea sp. n. The latter species is distinctive by a comparatively short aedeagus and broad vesica in males and large spermathecae in females, supported by markedly narrower wings in both sexes. The differences between R. (R.) japonica and R. (R.) laeta are best noticeable in the structure of the antennae, especially those of males (cf. Figs 5 and 6). In R. (R.) japonica, these are distinctly shorter, with the proximal flagellomeres more spherical, and the pubescence very short, subequal in length to at most half the breadth of the respective segments, distinct only on the proximal four or five flagellomeres. In R. (R.) laeta, the antennae are longer, with the proximal flagellomeres rather oval, and the pubescence is long, subequal in length to the entire breadth of the respective segments, distinct on almost all flagellomeres. This character, together with a slightly different shape of the interbases, had been indicated by Edwards (1938: 114, Text-figs 22e, f) as distinguishing his R. hilaris from R. laeta. Females of all three species (japonica, laeta, laetoidea) differ in the size of the spermathecae (cf. Figs 19, 21, 23, 46-48).

Distribution. The species was reported from Japan (Hokkaido, Honshu, Shikoku, Kyushu), North Korea and the Russian Far East (Savchenko et al., 1992). There are probably no authentic literature records for Kyushu and North Korea. The synonymy with hilaris and cunctans, proposed here, extends its distribution into Europe (Great Britain, Sweden, cf. Savchenko et al., 1992) where it may also be among some records of R. (R.)
laeta. Based on the material examined, the species is now recorded from Great Britain, Switzerland, Germany, Czech Republic, Slovakia, Austria, Italy, Macedonia, Albania, Bulgaria, Greece, Algeria, Azerbaijan, Russian Far East and Japan.

Colour. General colouration yellow to pale yellow, subshiny, with faintly indicated markings on thorax. Antenna dark brown, scape yellow. Prescutum pale yellow laterally, with indications of three broad darker stripes. Scutum and mediotergite (postscutellum) similarly darker, restrictedly patterned with yellow. Scutellum pale yellow. Pleuron mostly yellow, patterned with sulphur yellow in upper part, darker below, especially on lower portion of katepisternum and meron. Pattern on thorax generally pale, little-distinct, not varying to the dark condition as in R. japonica. Wing tinged with yellowish. Halter pale yellow. Legs yellow throughout. Abdomen yellow to yellowish brown.

Head. Antenna (Fig. 6) moderately long, reaching to base of wing. Flagellomeres oval proximally, gradually narrowed and lengthened towards apex of antenna. Longest verticils on flagellomeres slightly exceeding length of their respective segments. Pubescence long, suberect, subequal in length to breadth of respective segments, distinct on almost all flagellomeres. Palpus short.

Thorax. Wing rather broad, about three times as long as broad, with comparatively short stalk. Sc1 ending before fork of Rs, at about three quarters of length of the latter. Sc2 not apparent or slightly so some distance before tip of Sc1. R3 about its own length or less beyond tip of R1. R4 with variable number of macrotrichia, but mostly with only a few. A2 considerably sinuous, ending distinctly beyond origin of Rs. Halter moderately long, reaching to about posterior margin of abdominal tergite 2.

Abdomen. Male terminalia (Fig.
20). Segment 9 broader than long. Gonocoxite comparatively long and slender. Outer gonostylus short, less than half length of gonocoxite, gently and evenly arched, somewhat broadened before apex, with small, sometimes barely distinct apical spine. Inner gonostylus generally conical. Aedeagal complex as in Fig. 20. Interbase longer and much more slender than in R. (R.) japonica, slightly extending beyond half length of gonocoxite, moderately expanded distally to form roughly triangular apical blade with a few microscopic teeth at distal margin. Aedeagus very long and slender, yet not as long as in R. (R.) japonica. Vesica comparatively small and narrow. Apodeme of vesica spine-like in dorsal aspect, subequal in length to vesica. Female terminalia (Figs 21, 47). Cercus moderately broad, slightly exceeding length of tergite 10, gently upturned. Vaginal apodeme moderately broad (much as in japonica). Spermathecae three, spherical, small, with sclerotised parts of ducts thin, subequal in length to spermathecal diameter.

Distribution. As one of those most commonly treated in the literature, the species was recorded from many European countries, also from West Siberia (Altai) and Mongolia (Savchenko et al., 1992). Although it most probably does occur throughout Europe, the actual records are unreliable because it could have been confused with R. (R.) japonica or R. (R.) laetoidea sp. n. Based on the material examined, the species is confirmed in Europe for Sweden, Finland, Netherlands, Switzerland, Germany, Czech Republic, Slovakia, Austria, Slovenia, Bulgaria and Ukraine, and newly recorded for Andorra and Italy. [The record from Italy by Mannheims (1964), originally accepted by Savchenko et al.
(1992), refers to a Swiss locality (cf. Starý & Oosterbroek, 1996).]

Diagnosis. General colouration yellow to pale yellow, with faintly indicated markings on thorax, including three stripes on prescutum. Antenna moderately long, with pubescence on flagellomeres subequal in length to breadth of respective segments. Wing narrow. A2 ending shortly beyond origin of Rs. Legs yellow throughout. Male terminalia with apical blade of interbase lanceolate and aedeagus short and broad. Female terminalia with three spherical, large spermathecae.

Colour. General colouration yellow to pale yellow, subshiny, with less distinct markings on thorax compared to R. japonica and R. laeta, however, practically identical to the latter species in distribution of pattern. Antenna dark brown, scape yellow. Wing tinged with yellowish. Halter pale yellow. Legs yellow throughout. Abdomen yellow.

Head. Antenna moderately long, reaching to base of wing. Flagellomeres oval proximally, gradually narrowed and lengthened towards apex of antenna. Longest verticils on flagellomeres slightly exceeding length of their respective segments. Pubescence long, suberect, subequal in length to breadth of respective segments, distinct on almost all flagellomeres. Palpus short.

Thorax. Wing (Fig. 2) narrow compared to both R. japonica and R. laeta, about four times as long as broad, with stalk longer than in the two latter species. Sc1 ending just beyond mid-length of Rs. Sc2 slightly apparent, shortly before tip of Sc1. R3 about its own length beyond tip of R1. R4 with numerous macrotrichia both dorsally and ventrally. A2 slightly sinuous, ending shortly beyond origin of Rs. Halter moderately long, reaching to about posterior margin of abdominal tergite 2.

Abdomen. Male terminalia (Fig. 22). Segment 9 broader than long. Gonocoxite not as long as in R. (R.) japonica and R. (R.)
laeta. Outer gonostylus short, less than half length of gonocoxite, gently and evenly arched, somewhat broadened before apex, with small, sometimes barely distinct apical spine. Inner gonostylus generally conical. Aedeagal complex as in Fig. 22. Interbase subequal to half length of gonocoxite, with apical blade more or less lanceolate, more slender than in R. (R.) laeta. Aedeagus comparatively short and broad, subequal in length to vesica, the latter rather broad. Apodeme of vesica spine-like in dorsal aspect, shorter than vesica. Female terminalia (Figs 23, 48). Cercus moderately broad, slightly exceeding length of tergite 10, gently upturned. Vaginal apodeme narrow, compared to R. (R.) japonica and R. (R.) laeta. Spermathecae three, spherical, larger and paler than those of latter two species, with sclerotised parts of ducts thin, subequal in length to spermathecal diameter.

Etymology. The name of the new species, laetoidea, indicates its close relationship to R. (R.) laeta. An adjective in nominative singular.

Discussion. The new species differs from both R. (R.) japonica and R. (R.) laeta by its generally smaller size and distinctly narrower wings in both sexes. The latter character is correlated with the venation in that the veins run closer to each other and A2 is less sinuous, ending only shortly beyond the origin of Rs (Fig. 2). In R. (R.) laetoidea sp. n., the male terminalia are especially distinctive by having a comparatively short and broad aedeagus and a broad vesica (Fig. 22).

Distribution. Czech Republic, Slovakia, Bulgaria, Ukraine.

Note. This may be Rhabdomastix (Sacandaga) shardiana Alexander, 1957, described from a single male from Pakistan (Alexander, 1957: 292) [holotype ♂: North West Frontier Province, Shardi, 1.–10.viii.1953, altitude 6,130 feet (F. Schmid leg.)
(USNM)], a species with a rather distinct pattern on the thorax, generally conforming to the condition in the laeta complex. Due to distortion of the male terminalia mounted on a slide, absence of the antennae in the holotype, and considering the disjunct occurrences of R. (R.) laetoidea sp. n. and R. shardiana, a clear decision on the conspecificity of the two forms could not be reached. Therefore, it is preferred to describe here R. (R.) laetoidea sp. n., a species fairly common in Central and South Europe, until more material, including females, is available from the area relevant to R. shardiana.

Rhabdomastix (Rhabdomastix) borealis Alexander, 1924 (Figs 24-26, 49)

Rhabdomastix (Sacandaga) borealis Alexander, 1924a: 9 (description). Rhabdomastix (Sacandaga) borealis: Alexander, 1965.

Diagnosis. General colouration yellowish brown, patterned with dark brown and pale yellow on thorax. Antenna moderately long, with pubescence on flagellomeres very short. Wing moderately broad. A2 ending beyond origin of Rs. Legs with femora considerably darkened distally. Male terminalia with apical blade of interbase triangular, provided with tooth at outer margin, and aedeagus long. Female terminalia with three spherical spermathecae, somewhat smaller than those of R. (R.) laetoidea sp. n.
Colour. General colouration yellowish brown, with slight greyish pruinosity, patterned with dark brown and pale yellow on thorax. Antenna dark brown, scape yellowish brown. Prescutum dark brown, yellowed laterally. Sometimes two yellow longitudinal lines apparent, demarcating three broad dark brown stripes; yellow patch near posterior margin of prescutum. Scutum dark brown, patterned with yellow medially. Scutellum mostly yellow. Mediotergite (postscutellum) yellow anteriorly, dark brown in posterior half. Pleuron yellowish brown to brown, with slight greyish pruinosity, patterned with pale yellow in upper part, darkened on lower portions of katepisternum and meron. Wing tinged with brownish. Coxae yellowish brown. Trochanters and bases of femora yellow, the latter considerably darkened distally. Rest of legs yellowish brown. Halter pale yellow. Abdomen greyish brown.

Head. Antenna moderate in length, reaching to about base of wing. Proximal three or four flagellomeres short-oval to nearly spherical, following ones gradually narrowed and lengthened towards apex of antenna. Longest verticils on flagellomeres slightly exceeding length of their respective segments. Pubescence very short, suberect, distinct only on proximal four or five flagellomeres (much as in japonica). Palpus short.

Thorax. Wing moderately broad, more than three times as long as broad, with stalk comparatively short. Sc1 ending before fork of Rs, at about three quarters of length of the latter. Sc2, if apparent, considerably retracted from tip of Sc1, approximately opposite half length of Rs (much as in japonica). R3 less than its own length beyond tip of R1. R4 with a few macrotrichia both dorsally and ventrally. A2 considerably sinuous, ending distinctly beyond origin of Rs. Halter moderately long, reaching to about posterior margin of abdominal tergite 2.

Abdomen. Male terminalia (Fig.
24). Segment 9 broader than long. Gonocoxite comparatively long and slender. Outer gonostylus short, less than half length of gonocoxite, gently and evenly arched, parallel-sided, with small apical spine. Inner gonostylus generally conical, slender distally. Aedeagal complex as in Fig. 24. Interbase very long, considerably extending beyond half length of gonocoxite and expanded before apex to form roughly triangular apical blade terminating in large acute tooth at outer margin (at margin closer to long axis of hypopygium for crossing interbases). Aedeagus long (not as slender as in japonica and laeta), nearly twice length of vesica or less, the latter comparatively small and narrow. Apodeme of vesica spine-like in dorsal aspect, short, about half length of vesica. Female terminalia (Figs 25, 26, 49). Cercus moderately broad, slightly exceeding length of tergite 10, gently upturned. Vaginal apodeme abruptly expanded into very broad, transversally oblong or semicircular distal (caudal) portion. Spermathecae three, spherical, somewhat smaller than those of R. (R.) laetoidea sp. n., with sclerotised parts of ducts thin, shorter than spermathecal diameter.

Discussion. Although clearly belonging to the species complex comprising the three species above (japonica, laeta, laetoidea), R. (R.) borealis is very distinctive in both general appearance and the structure of the male terminalia. Its general colouration is distinctly darker, yellowish brown, with greyish pruinosity, patterned with dark brown on the thorax, extensively so dorsally. In contrast to the species above, the femora are considerably darkened distally in R. (R.) borealis (yellow throughout in japonica, laeta and laetoidea). Within the species treated here, the male terminalia of R. (R.) borealis are unique in the shape of the interbases (Fig. 24). The female terminalia are similar to those of the related species, with the spermathecae somewhat smaller than in R. (R.)
laetoidea sp. n., although they are well characterised by a very broad vaginal apodeme (Fig. 25).

Distribution. In the new concept presented here, the species appears to be Holarctic in distribution, and was recorded from the USA (Alaska) (Alexander, 1965), Norway, Sweden and the Russian Far East (Savchenko et al., 1992). Herewith confirmed for all the countries. Distribution in Canada is practically beyond any doubt, but not proved by the two wings on the slide tentatively listed here. The identification by Alexander, who must have had the rest of the specimen, is reliable insofar as made possible by a comparison of external characters. In any case, an authentic record for Canada is needed. A short diagnosis of Gonomyia schistacea from Finland by Lundström (1907: 21), accompanied by a figure of the wing showing R3 less than its own length beyond the tip of R1 (cf. Lundström, 1907, Fig. 25), suggests that a species of the laeta complex is involved, most probably R. (R.) borealis. This, however, should also be confirmed.

Rhabdomastix (Rhabdomastix) edwardsi Tjeder, 1967 (Figs 27-29, 50)

Rhabdomastix parva: Edwards, 1938: 113, 115.

Diagnosis. General colouration dark greyish brown, with bluish pruinosity on pleuron. Antenna short. Wing narrow, infuscated. A2 ending before origin of Rs. Legs brown, with coxae greyish brown. Male terminalia with outer gonostylus generally straight, with distinct apical spine, and apical blade of interbase spoon-shaped. Female terminalia with three spherical, small spermathecae.

Colour. General colouration dark greyish brown with bluish tinge, dull, without conspicuous markings on thorax, more brownish in middle of prescutum. Antenna dark brown throughout. Pleuron heavily suffused with dark bluish grey pruinosity, somewhat variable in extent and bluish hue. Wing infuscated. Halter whitish. Coxae generally dark, brown to greyish brown. Trochanters and bases of femora yellowish brown, the latter darkened distally. Rest of legs generally brown. Abdomen dark greyish brown.
Head. Antenna short, not reaching to base of wing. Flagellomeres short-oval. Longest verticils on flagellomeres slightly exceeding length of their respective segments. Pubescence indistinct. Palpus short.

Thorax. Wing rather narrow, about four times as long as broad, with stalk comparatively short. Sc1 ending before half length of Rs. Sc2 lacking. R3 more than its own length beyond tip of R1. R4 bare or with at most a few macrotrichia dorsally. A2 sinuous, ending before origin of Rs. Halter comparatively short, not reaching to posterior margin of abdominal tergite 2.

Abdomen. Male terminalia (Fig. 27). Segment 9 longer than broad. Gonocoxite sometimes rather stout, broad. Outer gonostylus comparatively short, about half length of gonocoxite, bent only at base, straight distally, generally parallel-sided, with distinct apical spine. Inner gonostylus generally conical. Aedeagal complex as in Fig. 27. Interbase moderate in length, reaching to about half length of gonocoxite, very slender near mid-length, at most very slightly bent distally to form spoon-shaped apical blade, mostly rounded at apex (pointed in some cases, cf. Fig. 27). Aedeagus slender, subequal in length to vesica, the latter broad, bulbous, with long apodeme, narrowly fan-shaped in dorsal aspect, about same length as vesica. Female terminalia (Figs 28, 29, 50). Cercus comparatively slender and rather long, longer than tergite 10, gently upturned. Spermathecae three, small, spherical, with sclerotised parts of ducts very short. One spermatheca sometimes smaller than other two.

Material examined. Holotype & (original designation): Great Britain, England, South Devon, Sidmouth, 10.v.1936 (F.W. Edwards leg.) (BMNH), labelled "S. Devon: Sidmouth. 10.V.1936. F.W. Edwards. B.M.
1936-366" (printed), "Holotype" (a red-margined circular label, printed), "Holotypus & Rhabdomastix edwardsi Tjed. Bo Tjeder 1966" ("Holotypus" printed, the rest in Tjeder's hand, red label). The specimen is micro-pinned on a celluloid slide, with only left fore and hind

Discussion. There is a certain variation in the body size and colouration. Specimens from Great Britain and South Europe are, on the average, smaller, somewhat more robust, and darker, rather dark greyish brown, with the wings strongly infuscated and with the bluish pruinosity limited to the pleuron. Members of Central European populations are larger, more slender, generally somewhat paler, with a very distinct bluish suffusion that passes from the pleuron onto the prescutum and other dorsal parts of the thorax.

The species concept of R. (R.) edwardsi has been one of the controversies of this revision, since specimens from various regions differ in various aspects (see above). Although the species had been described and illustrated adequately (cf. Tjeder, 1967: 225, Figs 1-10), it does not seem to have been well recognised, and it only was accepted as occurring in Great Britain. It has never been reported outside that country, except for records from the Czech Republic and Slovakia (Starý, 1987), later doubted (see Distribution). From the beginning of my studies on Rhabdomastix, the sympatric material of two forms was available to me from the former Czechoslovakia, differing considerably in the body colouration from both R. (R.) subparva, the most common regional species, and from each other. The one form was distinctive by a bluish pruinosity on the pleuron (these specimens may occur in various collections labelled by me as "caesia"), whereas the other form was entirely black. The sympatric occurrence of these forms supported the view that they represent valid species. The solution reached after a thorough comparison, repeated many times, is now believed to be a correct one: form No. 1 is R. (R.)
edwardsi and form No. 2 is a new species described below as R. (R.) crassa sp. n.

The bluish pruinosity, rather dark and intensive, differentiates R. (R.) edwardsi from all species treated here, except perhaps R. (R.) hirticornis, which, however, is sufficiently distinctive by its very long male antennae. In the structure of the male terminalia, R. (R.) edwardsi appears to be most closely related to R. (R.) crassa sp. n. and R. (R.) corax sp. n., both entirely black species. R. (R.) edwardsi differs from these species in having a generally straight outer gonostylus (gently and evenly arched in corax) and rather long, slender interbases (shorter and broader in crassa, more rounded at apex). Some other external and male genital features differentiating the three species are specified in the discussions of the two latter. The female terminalia of the three species are very similar to each other, having the spermathecae of approximately the same size. R. (R.) edwardsi has the cerci slightly longer and more slender than the other two.

Distribution. So far the species has been known from Great Britain only. Records from the Czech Republic and Slovakia (Starý, 1987), based on unpublished material, were later withdrawn (Starý, 1993, 1996). Records are presented here for Great Britain, Spain, France, Germany, Czech Republic, Slovakia, Austria, Italy, Slovenia, Bosnia and Hercegovina, Albania and Bulgaria.

Rhabdomastix (Rhabdomastix) crassa sp. n.

Diagnosis. General colouration black throughout. Antenna short. Wing narrow, strongly infuscated. A2 ending far before origin of Rs. Legs dark brown, including coxae. Male terminalia with outer gonostylus generally straight, with distinct apical spine, and apical blade of interbase spoon-shaped. Female terminalia with three spherical, small spermathecae.

Description. Very small species, plump in general appearance, with all body appendages (antennae, palpi, wings, legs) slightly shorter, compared to other species. Body length 3-6 mm, wing length 3-5 mm.
Colour. General colouration black, dull (deep dark brown in faded dried specimens), without conspicuous markings on thorax. Antenna almost black throughout. Pleuron heavily suffused with dark greyish black pruinosity. Wing strongly tinged with blackish. Halter infuscated, especially on stem. Coxae deep dark greyish brown. Trochanters and bases of femora brown, the latter darkened distally, deep dark brown. Rest of legs dark brown. Abdomen almost black, somewhat shiny.

Thorax. Wing (Fig. 4) rather narrow, about four times as long as broad, with stalk very short. Sc1 ending at about half length of Rs or slightly before it. Sc2 little distinct at tip of Sc1. R3 more than its own length beyond tip of R1. R4 bare or with at most a few macrotrichia dorsally. A2 slightly sinuous, ending far before origin of Rs. Halter comparatively short, not reaching to posterior margin of abdominal tergite 2. Legs rather thick and short, compared to other species.

Etymology. The name of the new species, crassa (= thick, stout), refers to its somewhat plump general appearance, with all body appendages slightly shorter than in the other species. An adjective in nominative singular.

Discussion. R. (R.) crassa sp. n. is, on the average, the smallest of the species treated here, long known from numerous localities in the Czech Republic and Slovakia. It is distinctive by its generally plump appearance, short verticils on antennae and the black body colouration. Within European species, the latter character is only shared by R. (R.) corax sp. n., which differs from R. (R.) crassa sp. n. in having a light grey pruinosity on the pleuron and more macrotrichia on R4. In the structure of the male terminalia, R. (R.) crassa sp. n. is very similar to R. (R.) edwardsi and R. (R.)
corax sp. n., differing from either or both in having the outer gonostylus generally straight (gently and evenly arched in corax), the inner gonostylus moderately broad (more slender in corax), the interbase short and comparatively broad (longer and more slender in both edwardsi and corax), and the apodeme of the vesica narrowly fan-shaped (more slender in corax, rod-like or spine-like). The female terminalia of the three species are not well distinguished, having the spermathecae of approximately the same size, but the features listed above clearly validate R. (R.) crassa sp. n. as a separate species.

Rhabdomastix (Rhabdomastix) corax sp. n.

Colour. General colouration black, dull (deep dark brown in faded dried specimens), without conspicuous markings on thorax. Antenna almost black throughout. Pleuron slightly suffused with light grey pruinosity. Wing strongly infuscated, blackish. Halter infuscated throughout. Coxae deep dark greyish brown. Trochanters and bases of femora brown, the latter still darkened distally, deep dark brown. Rest of legs dark brown. Abdomen almost black.

Head. Antenna comparatively short, not reaching to base of wing. Flagellomeres short-oval. Longest verticils on flagellomeres slightly exceeding length of their respective segments. Pubescence indistinct. Palpus short.

Thorax. Wing rather narrow, about four times as long as broad, with stalk comparatively long, longer than that of R. (R.) crassa sp. n. Sc1 ending at about half length of Rs or slightly before it. Sc2 not apparent. R3 more than its own length beyond tip of R1. R4 with about 10 macrotrichia, mostly dorsally. A2 slightly sinuous, ending before origin of Rs. Halter moderate in length, reaching to posterior margin of abdominal tergite 2.

Abdomen. Male terminalia (Fig.
32). Segment 9 longer than broad. Gonocoxite moderate in length and breadth. Outer gonostylus comparatively short, about half length of gonocoxite, considerably bent at base, otherwise gently and evenly arched, generally parallel-sided, sometimes slightly tapered distally, with barely distinct apical spine. Inner gonostylus generally conical, more slender than in other related species. Aedeagal complex as in Fig. 32. Interbase moderate in length, reaching slightly beyond half length of gonocoxite, slender, bent near two-thirds of its length to form lanceolate apical blade, similar to that of R. (R.) subparva. Aedeagus short, subequal in length to vesica, the latter broad, bulbous, with long apodeme, subequal in length to vesica, rod-like or spine-like in dorsal aspect. Female terminalia (Fig. 36). Cercus moderate in length and breadth, subequal in length to tergite 10, gently upturned. Spermathecae three, small, spherical, subequal in size to both R. (R.) edwardsi and R. (R.) crassa sp. n., with sclerotised parts of ducts practically not apparent. One spermatheca sometimes smaller than other two.

Etymology. The new species is named corax (= the rook) for its notably black body. A noun in nominative singular standing in apposition to the generic name.

Discussion. This new species is distinctive by its black body colouration, being thus similar to R. (R.) crassa sp. n., from which it differs by some external and genital characters, such as the overall slender appearance, the setosity of R4, the shape of both gonostyli (the outer gonostylus of corax somewhat suggesting that of subparva) and the structure of the aedeagal complex. For details, see the discussion of R. (R.) crassa sp. n.
Rhabdomastix (Rhabdomastix) eugeni sp. n.

Colour. General colouration dark greyish brown to light grey, dull, without conspicuous markings on thorax, more brownish in middle of prescutum and sometimes still more so on abdomen. Antenna brown throughout. Pleuron heavily suffused with grey, sometimes greyish brown pruinosity. Wing slightly infuscated. Halter dirty white to pale yellow. Coxae, trochanters and bases of femora mostly yellow to yellowish brown, otherwise legs somewhat darker, brown. Abdomen greyish brown, proximal segments sometimes paler, brown, more conspicuously so ventrally. Sometimes abdomen entirely brown.

Head. Antenna (Fig. 8) comparatively short, not reaching to base of wing. Flagellomeres short-oval to oval. Longest verticils on flagellomeres slightly exceeding length of their respective segments. Pubescence indistinct. Palpus short.

Thorax. Wing rather narrow, about four to five times as long as broad, with short stalk (Fig. 15). Sc1 ending at about half length of Rs. Sc2 not apparent. R3 more than its own length beyond tip of R1. Macrotrichia on R4 somewhat varying in number, about 10, mostly placed dorsally. A2 sinuous, ending before origin of Rs. Halter comparatively short, not reaching to posterior margin of abdominal tergite 2 (Fig. 15).

Abdomen. Male terminalia (Fig. 33). Segment 9 longer than broad. Gonocoxite moderate in length and breadth. Outer gonostylus, on average, slightly longer than that of related species (except that of filata), more than half length of gonocoxite, gently and evenly arched or nearly straight distally, generally parallel-sided, rounded at apex, with apical spine barely distinct or lacking. Inner gonostylus rather broad, generally conical. Aedeagal complex as in Fig.
33. Interbase comparatively short and slender, reaching to about half length of gonocoxite, bent shortly before apex to form short and narrow, roughly lanceolate apical blade. Aedeagus slender, subequal in length to vesica, the latter broad, bulbous, with apodeme comparatively short, fan-shaped or rounded in dorsal aspect, shorter than vesica. Female terminalia (Figs 37, 51).

Etymology. The new species is named in honour of the late Dr. Yevgeniy (= Eugen) Nikolaevich Savchenko (Kiev, Ukraine), a distinguished specialist on the Limoniidae (s. lat.) and Tipulidae, who first collected this species and who was able, under conditions inconceivable to his colleagues from the then so-called free world, to do so much on the craneflies. A noun in genitive singular.

Discussion. Whereas members of Central European populations are dark greyish brown, most resembling R. (R.) subparva, the specimens examined from the Balkans are somewhat paler, bicoloured, having a light grey thorax and brown abdomen. Surprisingly, a few specimens examined from Calabria (southern Italy) are rather dark, much as the Central European individuals.

R. (R.) eugeni sp. n., if from Central Europe, most resembles R. (R.) subparva by its dark greyish brown body colouration and yellow coxae. It differs from the latter, as do other related species, in that it is not as slender in general appearance as R. (R.) subparva, having the wing stalk and halter shorter (cf. Figs 15 and 16). In R. (R.) eugeni sp. n., R4 bears about 10 macrotrichia (more numerous, about 20, in subparva, fewer or none in edwardsi, crassa and filata). Specimens of R. (R.) eugeni sp. n. from the Balkans are paler, bicoloured, thus somewhat resembling R. (R.) filata sp. n. and the Balkan specimens of R. (R.) hirticornis, both quite different in the structure of the antennae. R. (R.) eugeni sp. n. is probably most closely related to R. (R.) filata sp. n.
in having the outer gonostylus rounded at apex, without an apparent apical spine; it differs, however, from the latter by a small apodeme of vesica in males (large, broadly fan-shaped in filata) and reniform spermathecae in females (spherical in filata).

Rhabdomastix (Rhabdomastix) filata sp. n.

Colour. General colouration grey to light grey, dull on thorax, without conspicuous markings. Antenna pale brown to brown throughout. Pleuron variably suffused with whitish grey pruinosity. Wing hyaline or slightly tinged with yellowish. Halter whitish yellow. Coxae, trochanters and bases of femora yellow, rest of legs somewhat darker. Abdomen light brown, subshiny.

Head. Antenna (Fig. 9) short, not reaching to base of wing, with pedicel large, compared to other species, and flagellum very thin, filiform. Flagellomeres mostly elongate-oval, slender, except more rounded first flagellomere. Longest verticils on flagellomeres exceeding length of their respective segments. Pubescence indistinct. Palpus short.

Thorax. Wing rather narrow, about four times as long as broad, with short stalk. Sc1 ending at about half length of Rs. Sc2 sometimes slightly apparent at tip of Sc1. R3 more than its own length beyond tip of R1. R4 with at most a few macrotrichia dorsally. A2 sinuous, ending opposite to origin of Rs or shortly before it. Halter short, not reaching to posterior margin of abdominal tergite 2.

Abdomen. Male terminalia (Fig. 34). Segment 9 longer than broad. Gonocoxite moderate in length and breadth. Outer gonostylus slightly more than half length of gonocoxite, bent at base, otherwise straight, parallel-sided, rounded at apex, with apical spine sometimes not apparent. Inner gonostylus generally conical, somewhat swollen. Aedeagal complex as in Fig.
34. Interbase reaching beyond half length of gonocoxite, bent shortly before apex to form narrow, roughly lanceolate blade. Aedeagus short, rather broad, subequal in length to vesica; the latter broad, bulbous. Apodeme of vesica large, broadly fan-shaped in dorsal aspect, subequal in length to vesica. Female terminalia (Figs 38, 52). Cercus moderately broad, slightly exceeding length of tergite 10, gently upturned. Spermathecae two, nearly spherical, medium-sized, with sclerotised parts of ducts comparatively thin, extending about one-third of spermathecal diameter.

Discussion. R. (R.) filata sp. n. is distinctive by the structure of the antennae, having the pedicel large, compared to other species treated here, and the flagellum thin, filiform (Fig. 9). Body colouration is generally pale, bicoloured, with the light grey thorax and pale brown abdomen, similar to populations of R. (R.) hirticornis and R. (R.) eugeni sp. n. from the Balkans. R. (R.) filata sp. n. is close to R. (R.) eugeni sp. n., sharing with it the overall shape of the outer gonostylus, with rounded apex, without a distinct apical spine. It differs, however, from the latter by other details in the structure of the male and female terminalia, particularly the large, broadly fan-shaped apodeme of vesica in males and the spherical spermathecae in females.

Colour. General colouration dark greyish brown, dull, without conspicuous markings on thorax. Antenna brown throughout. Pleuron heavily suffused with grey pruinosity. Wing slightly infuscated. Halter dirty white to pale yellow. Coxae, trochanters and bases of femora mostly yellow to yellowish brown, otherwise legs somewhat darker, brown. Abdomen greyish brown.

Head. Antenna of moderate length, reaching to base of wing. Flagellomeres oval. Longest verticils on flagellomeres exceeding length of their respective segments. Pubescence indistinct. Palpus short.

Thorax. Wing rather narrow, about four times as long as broad, with comparatively long stalk (Fig.
16). Sc1 ending at about half length of Rs. Sc2 slightly apparent at tip of Sc1. R3 more than its own length beyond tip of R1. R4 with numerous macrotrichia, about 20, both dorsally and ventrally. A2 sinuous, ending shortly before origin of Rs. Halter comparatively long (compared to other species), reaching to about posterior margin of abdominal tergite 2 (Fig. 16).

Abdomen. Male terminalia (Fig. 35). Segment 9 longer than broad. Gonocoxite moderate in length and breadth. Outer gonostylus comparatively short, about half length of gonocoxite, gently and evenly arched, tapered distally, with distinct apical spine. Inner gonostylus generally conical. Aedeagal complex as in Fig. 35. Interbase moderate in length, reaching to about half length of gonocoxite, generally slender, bent at two-thirds of its length to form long and slender, lanceolate apical blade. Aedeagus slender, longer than vesica; the latter narrow, with long apodeme, rod-like or spine-like in dorsal aspect, subequal in length to vesica. Female terminalia (Figs 39, 53). Cercus rather broad and short, subequal in length to tergite 10, abruptly tapered and upturned before apex. Spermathecae two, exceedingly large, irregularly short-oval to reniform, with sclerotised parts of ducts short and curved.

complex, thus actually representing the opposite of R. (R.) crassa sp. n. R. (R.) subparva has, e.g., the wing stalk and halter rather long, the latter reaching to about the posterior margin of the abdominal tergite 2 (Fig. 16) (shorter in all related species, cf. Fig. 15). The numerous macrotrichia on R4 likewise are unique within the complex. The dark greyish brown body colouration combined with yellow coxae is only present in the Central European specimens of R. (R.) eugeni sp. n. The male terminalia of R. (R.)
subparva are characterised by an evenly arched outer gonostylus tapered distally and provided with a distinct apical spine, and by a generally slender aedeagal complex with a narrow vesica and a rod-like apodeme. The female terminalia are distinctive in having rather broad, abruptly upturned cerci and, especially, two short-oval, exceedingly large spermathecae.

Distribution. The species was described comparatively recently, at the time when more attention was paid to small dark Rhabdomastix s. str. species, previously lumped under "schistacea". Therefore, it may be assumed that most of the country records published by Savchenko et al. (1992) are correct. These are as follows: Switzerland, Germany, Poland, Czech Republic, Slovakia, Austria, Ukraine, Italy, the former Yugoslavia (Slovenia, Serbia), Albania (?), Romania and Bulgaria (modified according to the present political boundaries). Based on the material examined, the species is here confirmed for Switzerland, Germany, Czech Republic, Slovakia, Austria, Italy, Slovenia, Romania, Bulgaria and Ukraine, and newly recorded for European Russia (southeast). Not confirmed from Poland, Serbia or Albania.

Diagnosis. General colouration grey on thorax, more brownish on abdomen. Male antenna very long, subequal to entire body. Wing narrow, slightly infuscated. A2 ending before origin of Rs. Legs yellowish brown to brown, with yellow coxae. Male terminalia with outer gonostylus gently arched, club-shaped, with apical spine not apparent; apical blade of interbase spoon-shaped. Female terminalia with three reniform, medium-sized spermathecae.

Redescription. Rather small species, yet somewhat larger than many others described here as "small". Body length 4-6.5 mm, wing length 5-7 mm.
Colour. General colouration grey to light grey, dull, without conspicuous markings on thorax, sometimes more brownish in middle of prescutum and still more so on abdomen. Antenna dark brown throughout. Pleuron heavily suffused with bluish grey pruinosity. Wing slightly infuscated. Halter dirty white to pale yellow, with knob faintly infuscated. Coxae yellow to yellowish brown, fore coxa more greyish. Trochanters and proximal half of femora yellow, the latter darkened distally. Rest of legs yellowish brown. Abdomen greyish brown to pale brown.

Head. Male antenna (Fig. 11) very long, subequal to entire body. Flagellomeres very long, first one rather conical, following ones cylindrical, longest near mid-length of antenna, terminal flagellomere minute. Verticils inconspicuous, about one-fourth to one-fifth length of their respective segments, distinct at base of eight proximal flagellomeres, largely getting lost among greatly developed erect pubescence subequal to one-third length of longest flagellomeres. Female antenna (Fig. 12) considerably shorter than that of male, yet distinctly longer than antenna of any other species treated here (except for male of georgica), extending beyond base of wing by about one-fourth of its length. Flagellomeres mostly elongate-oval, progressively narrowed towards apex of antenna. Verticils on flagellomeres slightly shorter than their respective segments. Pubescence indistinct. Palpus long, distinctly exceeding diameter of head; terminal palpomere nearly twice as long as penultimate (Fig. 13).

Thorax. Wing rather long and narrow, about four to five times as long as broad, with comparatively short stalk. Sc1 ending at about half length of Rs. Sc2 faintly apparent at tip of Sc1. R3 more than its own length beyond tip of R1. R4 with numerous macrotrichia, about 20, both dorsally and ventrally. A2 sinuous, ending far before origin of Rs. Halter appearing rather long, but not reaching to posterior margin of abdominal tergite 2.

Abdomen. Male terminalia (Fig.
40). Segment 9 longer than broad. Gonocoxite moderate in length and breadth. Outer gonostylus more than half length of gonocoxite, generally slender, club-shaped, gently and evenly arched, slightly broadened before apex, with apical spine mostly not apparent, concealed by expanded apical portion of gonostylus. Inner gonostylus somewhat swollen, with short obtuse point. Aedeagal complex as in Fig. 40. Interbase comparatively long, extending beyond half length of gonocoxite, expanded distally to form spoon-like apical blade, sometimes pointed at apex. Aedeagus slender, longer than comparatively narrow vesica. Apodeme of vesica short, mostly spine-like from dorsal aspect, shorter than vesica. Female terminalia (Figs 42, 54). Cercus slender, rather long, longer than tergite 10, gently upturned. Spermathecae three, medium-sized, short-oval to reniform, with sclerotised parts of ducts short, somewhat curved. One spermatheca sometimes smaller than other two.

Discussion. Members of Central European populations are distinctly darker than specimens from the Balkans (including the type series). The latter are more pronouncedly bicoloured, having a light grey thorax and a pale brown abdomen. This may be the case for other southern regions of Europe, as indicated by several specimens from southern Ukraine and a dry-mounted female from southern Switzerland (Ticino). On the other hand, the examined specimens from Algeria are still darker than those from Central Europe. There is a certain variation in the shape of the outer gonostylus; the apical spine is sometimes well apparent (cf. Lackschewitz, 1940, Tab. 3, Figs 28a, b), which, however, may be an artefact caused by compression of the hypopygium between the celluloid slides.

From all the species treated (except georgica), males of R. (R.) hirticornis may at once be separated by the long antennae (Fig.
11). The problem of long antennae in Rhabdomastix was discussed in some detail in the first part of this revision (Starý, 2003) and it is again mentioned in the discussion of the species complexes above. Due to the bicoloured appearance, females of R. (R.) hirticornis from South Europe may be confused with R. (R.) filata sp. n. or South European specimens of R. (R.) eugeni sp. n. It should, however, be emphasised that females of R. (R.) hirticornis are also clearly separable by the structure of the antennae. These are considerably shorter than those of males, yet distinctly longer than the antennae of any other species treated (Fig. 12). The long palpi in both sexes of R. (R.) hirticornis, distinctly exceeding the diameter of the head, with the terminal palpomere nearly twice as long as the penultimate, represent another distinguishing character (Fig. 13) (in the other species treated, except georgica, the palpi are short, subequal to the diameter of the head, with the terminal palpomere only slightly longer than the penultimate, cf. Fig. 14). The considerable distinctness of the male antennae in R. (R.) hirticornis is not reflected in the structure of the male terminalia, which provide no essential differences as compared to the other species and differ only in details, such as the overall shape of the outer gonostylus and the structure of the aedeagal complex. The female terminalia of R. (R.) hirticornis are well characterised by three reniform, medium-sized spermathecae. Differentiating between R. (R.) hirticornis and R. (R.) georgica sp. n. is covered in the discussion of the latter species.

Rhabdomastix (Rhabdomastix) georgica sp. n.
(Figs 41, 43, 55)

Diagnosis. General colouration dark greyish brown. Male antenna very long, longer than entire body. Wing narrow, infuscated. A2 ending before origin of Rs. Legs yellowish brown to brown, with coxae yellowish brown. Male terminalia with outer gonostylus straight, with distinct apical spine, and apical blade of interbase triangular, with sharp point at inner margin. Female terminalia with three reniform, large spermathecae.

Colour. General colouration dark greyish brown, dull, without conspicuous markings on thorax, more brownish on abdomen. Antenna brown throughout. Pleuron heavily suffused with grey pruinosity. Wing infuscated. Halter infuscated, especially on knob. Coxae yellowish brown, fore coxa more greyish pruinose. Trochanters and femora yellowish brown, the latter darkened distally. Rest of legs yellowish brown to brown.

Head. Male antenna very long, longer than entire body. Flagellomeres very long, first one rather conical, following ones cylindrical, longest near midlength of antenna, terminal flagellomere minute. Verticils indistinct. Pubescence long, erect, rather sparse, subequal to one-third length of longest flagellomeres. Female antenna considerably shorter than that of male, not reaching to base of wing. Flagellomeres oval to elongate-oval, rather thin, not noticeably changing in size towards apex of antenna. Verticils on flagellomeres shorter than their respective segments. Pubescence indistinct. Palpus long, distinctly exceeding diameter of head; terminal palpomere nearly twice as long as penultimate.

Thorax. Wing long and narrow, about five times as long as broad, with long stalk. Sc1 ending at about half length of Rs. Sc2 faintly apparent shortly before tip of Sc1. R3 more than its own length beyond tip of R1. R4 with numerous macrotrichia both dorsally and ventrally. A2 long, sinuous, ending before origin of Rs. Halter comparatively short, not reaching to posterior margin of abdominal tergite 2.

Abdomen. Male terminalia (Fig.
41). Segment 9 longer than broad. Gonocoxite moderate in length and breadth. Outer gonostylus slightly more than half length of gonocoxite, gently bent at base, otherwise straight, nearly parallel-sided, with apical spine distinct. Inner gonostylus generally conical. Aedeagal complex as in Fig. 41. Interbase comparatively long, extending beyond half length of gonocoxite, with apical blade generally triangular, drawn out into sharp long point at inner distal margin, directed laterally. Aedeagus subequal in length to comparatively narrow vesica. Apodeme of vesica short, spine-like or somewhat bulb-shaped at apex from dorsal aspect, shorter than vesica. Female terminalia (Figs 43, 55). Cercus slender, rather long, longer than tergite 10, gently upturned. Spermathecae three, large, reniform and considerably narrowed in portion closer to duct, practically without sclerotised parts of ducts.

Material examined. Holotype %: Georgia (Transcaucasia), Arsianskiy khrebet [mountain ridge], E slopes of Goderdzi Pass (1450-1500 m), 28.vi.1978 (E.N. Savchenko leg.) (SMOC). Except for a printed inscription "Transcriptio", the data on the label are hand-written in Russian (in Cyrillic). The specimen (originally papered) is glued onto a triangular cardboard point, in nearly perfect condition, with only wings somewhat crumpled and stuck together and apex of abdomen missing. Terminalia dissected and placed in a sealed plastic tube with glycerine, pinned with the specimen. Paratypes: 3%, 1&, same data as for holotype (JSO).

Etymology. The name of the new species, georgica, is derived from the name of the country of its occurrence, Georgia in Transcaucasia. The name is deemed to be and to be treated as a latinised adjective in nominative singular, in accordance with relevant provisions of Article 11.9 of ICZN (1999).

Discussion. R. (R.) georgica sp. n.
is described from Georgia in Transcaucasia, hence from outside of Europe, thus representing an extra-limital species in terms of this revision. It was included here because of its affinities to R. (R.) hirticornis. Within the Palaearctic Region, only three other Rhabdomastix species are known to be distinguished by correspondingly long male antennae, viz. R. (R.) hirticornis Lackschewitz, 1940 (Europe), R. (R.) leucophaea Savchenko, 1976 (Transcaucasia: Azerbaijan) and R. (R.) omeina Alexander, 1932 (China: Sichuan), all formerly classified in the subgenus Palaeogonomyia (cf. Savchenko et al., 1992). In the latter two species the antennae are shorter than the body. Five Oriental species with greatly lengthened male antennae (himalayensis Alexander, 1960; manipurensis Alexander, 1964; nilgirica Alexander, 1949; schmidiana Alexander, 1958; trochanterata Edwards, 1928) all belong to the group centred around R. illudens Alexander, 1914, as discussed in the first part of this revision (Starý, 2003: 590), with the male antennae several times as long as the body.

R. (R.) georgica sp. n. apparently is closely related to R. (R.) hirticornis. It differs from Central European specimens of the latter by somewhat darker body colouration as well as by having longer male antennae, which, in R. (R.) georgica sp. n., exceed the length of the entire body (subequal in length to the body in hirticornis). However, the female antennae of R. (R.) georgica sp. n. are shorter than those of R. (R.) hirticornis, not reaching to the bases of the wings (extending beyond the bases of the wings in hirticornis). The male terminalia of R. (R.) georgica sp. n.
are characterised by a generally straight outer gonostylus, nearly parallel-sided, with the apical spine distinct (evenly arched in hirticornis, club-shaped at apex, without a distinct apical spine), and by the interbases with the apical blade triangular, drawn out into a sharp long point at inner margin (apical blade of interbases generally spoon-like in hirticornis). The female terminalia are distinctive in having the spermathecae comparatively large, reniform (smaller in hirticornis, often short-oval).

Rhabdomastix (Rhabdomastix) beckeri (Lackschewitz, 1935)

Diagnosis. General colouration grey throughout. Antenna short. Wing moderately broad, somewhat milky, with narrow darker seams along veins. A2 ending opposite to origin of Rs. Legs yellow to yellowish brown, femora darkened distally. Male terminalia with outer gonostylus unusually short and broad, inner gonostylus swollen, broadly rounded at apex, and aedeagal complex generally slender. Female terminalia with three spherical, medium-sized spermathecae.

Colour. General colouration grey, dull, restrictedly tinged with brownish, without conspicuous markings on thorax. Antenna deep dark brown to black throughout. Pleuron heavily suffused with grey pruinosity. Wing slightly infuscated, somewhat milky, most veins vaguely and very narrowly seamed with darker. Halter bright pale yellow. Coxae, trochanters and proximal half of femora yellow, the latter darkened distally. Tibiae yellowish brown, tipped with darker. Tarsomere 1 yellowish brown, others darker. Abdomen a little darker than thorax, greyish brown.

Head. Antenna (Fig. 10) comparatively short, not reaching to base of wing. Flagellomeres short-oval. Longest verticils on flagellomeres slightly exceeding length of their respective segments. Pubescence rather long, suberect, subequal in length to breadth of respective segments, or slightly less so, distinct on almost all flagellomeres. Palpus short.

Thorax. Wing (Fig.
3) moderately broad, more than three times as long as broad, with short stalk. Sc1 ending at about half length of Rs. Sc2 faintly apparent shortly before tip of Sc1. R3 more than its own length beyond tip of R1. R4 with a few macrotrichia dorsally. A2 strongly sinuous, ending opposite to origin of Rs. Halter short, clearly not reaching to posterior margin of abdominal tergite 2.

Abdomen. Male terminalia (Fig. 44). Segment 9 very short, broader than long, with conspicuous, more or less triangular lobe dorsally at posterior margin on each side of median interruption. Gonocoxite stout, short and broad. Outer gonostylus unusually short and broad, somewhat flattened at base, about one-third length of gonocoxite, subequal in length to inner gonostylus, nearly straight and parallel-sided, with distinct apical spine. Inner gonostylus very broad, swollen, broadly rounded at apex. Aedeagal complex as in Fig. 44. Interbase reaching to about half length of gonocoxite, generally slender, sinuous and only faintly dilated distally. Aedeagus very slender and rather long, about 1.5 times as long as moderately broad vesica. Apodeme of vesica rod-like or spine-like in dorsal aspect, about same length as vesica. Female terminalia (Figs 45, 56). Cercus broad, subequal in length to tergite 10, generally straight, expanded and abruptly upturned before apex. Spermathecae three, rather large, spherical, with sclerotised parts of ducts thin and long, exceeding spermathecal diameter.

Material examined. The species was described from an unspecified number of males ["Banat, Orsova. Im Juni, %%, leg Th. Becker no. 61829. (Typ. in der Samml. des Zool. Museums in Berlin.)" (Lackschewitz, 1935: 13)]. I have examined two specimens that may be considered syntypes. Lectotype % (present designation): Romania, [Banat], Orsova, vi., (Th. Becker leg.) (ZMHB), labelled: "Orsova 61829. VI."
(hand-written), "Gonomyia schistacea Schumm." (hand-written in pencil), "Sac. beckeri nov. sp. det. Lacksch." (printed, orange). Accordingly labelled as lectotype ("Lectotype Rhabdomastix (s.str.) beckeri (Lacksch.) % J. Starý 2003"). The specimen is micro-pinned on a stage of plant parenchyma, with only left hind leg present; left antenna and apex of abdomen broken off. Terminalia dissected and placed in Canada balsam between celluloid slides, pinned with the specimen. Paralectotype: 1% (ZMHB) with same labels as lectotype (incl. "61829"), except for the label with the schistacea identification. Terminalia dissected and placed in a sealed plastic tube with glycerine, pinned with the specimen. The paralectotype belongs to the species described here as R. (R.) eugeni sp. n. and is also listed as paratype under that species. The specimen was examined by me in 1978 (later not traced again in ZMHB), and it was not labelled by me as the paralectotype of beckeri, nor the paratype of eugeni. Hence, the type series of R. (R.) beckeri is a mixed one, and the lectotype is designated here to maintain the current usage of the name for the species with the male terminalia as illustrated by Lackschewitz (1935, Figs 6a, b).

Discussion. Some character traits of R. (R.) beckeri indicate a more distant relationship to the other species of the European Rhabdomastix s. str. In general appearance, R. (R.) beckeri is particularly distinctive by its somewhat milky wings with narrow darker seams along most of the veins. The isolated position of R. (R.) beckeri is also indicated by some features in the structure of the generally rather robust male terminalia. Segment 9 is broad and short, broader than long, with a conspicuous triangular lobe dorsally at the posterior margin on each side of the median interruption (Fig.
44) (broader than long, but without any lobes, in the species centred around laeta; longer than broad, with only small lobes, in the other species). In contrast to all the other species, the outer gonostylus is unusually short and broad, the inner gonostylus conspicuously swollen and broadly rounded at apex, and the interbases and the aedeagus are very slender. The female terminalia, although less distinctive, are well characterised by the shape of the cerci, which generally are straight and broad, expanded and abruptly upturned before the apex.

The species had been recorded from Slovakia by Starý (1987), but later this record was withdrawn (Starý, 1993, 1996). R. (R.) beckeri is very distinctive in the structure of the male terminalia, and the above specimens from Malé Trakany, Slovakia, had readily been identified with the figures by Lackschewitz (1935, Figs 6a, b). However, some doubts arose concerning the type [at that time, I had only examined the specimen listed here as paralectotype, belonging to R. (R.) eugeni sp. n.]. The records from the former Czechoslovakia by Savchenko (1989) and Savchenko et al. (1992) are based on the same unpublished material from Slovakia sent by me to E.N. Savchenko. Here I am publishing the first documented records from Slovakia.

DOUBTFUL SPECIES OF RHABDOMASTIX S. STR.

ACKNOWLEDGEMENTS. For invaluable information and/or for the loan and gift of specimens, I am much indebted to the following: R. Contreras-Lichtenberg, P.
Medial meniscal ramp lesions in ACL-injured elite athletes are strongly associated with medial collateral ligament injuries and medial tibial bone bruising on MRI

Purpose
Medial menisco-capsular separations (ramp lesions) are typically found in association with anterior cruciate ligament (ACL) deficiency. They are frequently missed preoperatively due to low MRI sensitivity. The purpose of this article was to describe demographic and anatomical risk factors for ramp lesions, and to identify concomitant lesions and define their characteristics to improve diagnosis of ramp lesions on MRI.

Methods
Patients who underwent anterior cruciate ligament (ACL) reconstruction between September 2015 and April 2019 were included in this study. The presence/absence of ramp lesions was recorded in preoperative MRIs and at surgery. Patients' characteristics and clinical findings, concomitant injuries on MRI and the posterior tibial slope were evaluated.

Results
One hundred patients (80 male, 20 female) with a mean age of 22.3 ± 4.9 years met the inclusion criteria. The incidence of ramp lesions diagnosed at surgery was 16%. Ramp lesions were strongly associated with injuries to the deep MCL (dMCL, p < 0.01), the superficial medial collateral ligament (sMCL, p < 0.01), and a small medial–lateral tibial slope asymmetry (p < 0.05). There was also good correlation between ramp lesions and bone oedema in the posterior medial tibial plateau (MTP, p < 0.05) and medial femoral condyle (MFC, p < 0.05). A dMCL injury, a smaller differential medial–lateral tibial slope than usual, and the identification of a ramp lesion on MRI increase the likelihood of finding a ramp lesion at surgery. MRI sensitivity was 62.5% and the specificity was 84.5%.

Conclusion
The presence on MRI of sMCL and/or dMCL lesions, bone oedema in the posterior MTP and MFC, and a smaller differential medial–lateral tibial slope than usual are highly associated with ramp lesions visible on MRI.
Additionally, a dMCL injury, a flatter lateral tibial slope than usual, and the identification of a ramp lesion on MRI increase the likelihood of finding a ramp lesion at surgery. Knowledge of the risk factors and secondary injury signs associated with ramp lesions facilitates the diagnosis of a ramp lesion preoperatively and should raise surgeons' suspicion of this important lesion.

Level of evidence
Diagnostic study, Level III.

Introduction

"Ramp lesions" were first described by Strobel [47] in 1988 to define a menisco-capsular separation of the posterior horn of the medial meniscus (PHMM) from the posteromedial capsule (PMC), and they are most commonly found in association with anterior cruciate ligament (ACL) ruptures [3,6,11,27,31,32,41,45]. The PHMM is firmly attached to the PMC [12,52] and acts as a secondary knee stabiliser to resist anterior translation of the medial tibia and thereby external tibial rotation [1,37,46]. In the event of an acute ACL injury, the forceful forward displacement of the tibia and the subsequent stress on the posteromedial capsule and PHMM can result in a posteromedial menisco-capsular injury: a ramp lesion. These occur in 9-34% of patients with ACL tears at the time of ACL rupture [3,6,11,27,31,32,41,45].

The word 'ramp' refers to the appearance of the synovium/PMC that is seen to sweep proximally and anteriorly, like a ramp, to the posterior margin of the PHMM when the posteromedial recess is viewed in the flexed knee. In the extended knee, the posterior capsule tightens and is pulled proximally, thereby obliterating the posteromedial recess and making the 'ramp' disappear [46]. It is in the extended position that magnetic resonance imaging (MRI) of the knee is usually undertaken, thus compromising detection of ramp lesions since there is no 'ramp'. This phenomenon may account for the low sensitivity of MRI in identifying ramp lesions, which means ramp lesions are frequently not diagnosed preoperatively [3,11].
Intraoperative detection through a systematic arthroscopic exploration, including direct posteromedial visualization through the intercondylar notch or a direct posteromedial portal, currently remains the gold standard for detecting a ramp lesion, and it can be technically difficult. Knowledge of the associated factors and secondary injury signs of ramp lesions will facilitate the diagnosis of a ramp lesion on MRI. Increased preoperative suspicion of a ramp lesion would be invaluable to the surgeon at the time of arthroscopy so that lesions would not be overlooked at surgery.

The purpose of this study was to describe demographic and anatomical associated factors for ramp lesions in elite athletes, and to identify associated lesions on MRI and define their characteristics. The MRI findings were correlated with operative findings. It was hypothesized that a steeper medial slope is a risk factor for ramp lesions and that bone oedema at the posterior medial tibial plateau (MTP) and medial collateral ligament (MCL) injuries are associated with these injuries, given that it is logical that the injury mechanics causing the ramp lesion are those occurring with anteromedial rotatory instability (AMRI).

Materials and methods

This study was conducted according to the UK National Health Research Authority guidance and ethically approved by the institution (Fortius Clinic, London, UK) involved. This retrospective cohort study comprised a consecutive series of professional athletes who underwent ACL reconstruction between 2015 and 2019. This group was specifically chosen as they have MRI scans and surgery consistently soon after injury; therefore, the MRI and surgery would occur with little delay from injury, allowing the best contemporaneous correlation of clinical, arthroscopic and MRI findings.
Patients eligible for inclusion in the study were identified by a review of medical records, and demographic information, injury data, time from injury to MRI and surgery, as well as intraoperative findings were recorded. Patients were excluded if there was any history of previous ipsilateral knee injury or surgery, or any concurrent laxity or surgery of knee ligaments other than the ACL. To allow for high levels of accuracy in the evaluation of damage to peripheral structures, only patients with an MRI scan taken within 3 weeks of the ACL injury, and that met the minimum imaging criteria of (1) field strength of 1.5 Tesla or above, (2) three-plane (sagittal, axial and coronal) imaging using water-sensitive fat-suppressed sequences (STIR, fat-suppressed proton density or T2-weighted) and (3) slice thickness of 3 mm or less, were included.

Radiological assessment

Preoperative MRI examinations were acquired from multiple centres. Due to the consequent variation in scanning protocols, the sequences used varied; they often included T1-weighted imaging. However, for the purpose of the study, and to maintain consistency, only the fluid-sensitive sequences were used for image analysis. Two radiologists specialized in musculoskeletal imaging, with 20 and 25 years of experience, respectively, independently analysed all MRI images. Occurrence of a ramp lesion was recorded if there was fluid signal separating the PMC and the PHMM (Fig. 1). In addition, the presence or absence, and location, of bone oedema in the medial compartment was recorded. Bone oedema was defined as increased signal intensity within the bone on the fat-suppressed water-sensitive images. Injuries to the superficial and deep medial collateral ligament (sMCL and dMCL), posterior oblique ligament (POL), lateral collateral ligament (LCL), anterolateral complex including the anterolateral ligament (ALL) and Kaplan fibres (KF), and the menisci were also recorded.
The technique used in this study for measurement of the medial and lateral posterior tibial slope has been previously published and validated by other authors [22,28,33]. The posterior tibial slope was defined as the difference between the sagittal tibial joint surface orientation and a perpendicular line to the proximal anatomical tibial axis. A larger positive value indicates a steeper posterior tibial slope. The medial-lateral slope differential was calculated by subtracting the medial tibial slope from the lateral tibial slope.

Ligament laxity assessment

All patients were routinely examined under anaesthesia (EUA) at the beginning of surgery by the senior author, a specialist sports knee surgeon with over 25 years' experience. This included anterior and posterior drawer, Lachman, pivot-shift, dial, valgus and varus stress tests. The relevant stress tests were categorized according to the International Knee Documentation Committee form (grade I: 3-5 mm, grade II: 5-10 mm, grade III: > 10 mm) as laxity differences compared to the healthy contralateral knee [20]. The pivot-shift test was graded as 0 (equal), 1 (glide), 2 (clunk), or 3 (gross).

Arthroscopic assessment

Standard anteromedial and anterolateral portals were made for ACL reconstruction surgery and a routine diagnostic assessment was made of the suprapatellar pouch, patellofemoral joint, lateral gutter including popliteus tendon and hiatus, medial gutter, medial compartment, intercondylar notch and lateral compartment. The posteromedial compartment and ramp were assessed by advancing a 30° arthroscope over the anterior surface of the ACL stump into the posteromedial recess through the intercondylar notch. To aid this, the knee is held in 30° flexion with a varus stress applied (Gillquist maneuver [15,30]). Some authors recommend inserting the arthroscope via an accessory posteromedial portal to visualise the ramp region [51].
However, this is not the senior author's routine, as ramp lesions are revealed as knee flexion is increased (Fig. 2). Occasionally a posteromedial portal is used to insert a probe.

Statistical analysis

Data were analysed using SPSS statistics software version 23.0 (IBM, New York, USA). Normal distribution was confirmed by the Shapiro-Wilk test and continuous variables were expressed as mean ± standard deviation. The chi-squared test or Fisher's exact test was used to analyse for any association between ramp lesions and demographic variables, EUA and other MRI findings. Binomial logistic regression analysis was performed to evaluate the associated factors for the presence of ramp lesions. The six predictive factors, chosen on the basis of background knowledge, were medial and lateral tibial slope, and the presence of sMCL injury, dMCL injury, MRI ramp injury and bone oedema at the medial tibial plateau (MTP). Cohen's kappa was calculated for inter-rater agreement in detecting ramp lesions, and the intraclass correlation coefficient (ICC) was calculated for inter-rater reliability of the medial and lateral tibial slope measurements. A post hoc power analysis revealed an actual power of 82% for finding differences between two independent proportions (p1 = 0.25, p2 = 0.62) with a group allocation of 84:16 subjects (intact vs. ramp lesion) (G*Power 3.1). Statistical significance was set at a p value of < 0.05.

Results

One hundred and fifty-three patients underwent ACL reconstruction during the study period, and of these 100 (80 male and 20 female) with a mean age of 22.3 ± 4.9 years met the inclusion criteria. Fifty-three patients were excluded due to concomitant medial and/or lateral abnormal knee laxity or for failing to meet the minimum imaging criteria. All patients were professional athletes and included 60 soccer players, 26 rugby players and 14 players from other sports.
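The Cohen's kappa statistic used above for the two MRI readers can be reproduced with a short, self-contained sketch. The reader labels below are an illustrative reconstruction (23 MRI-positive and 77 MRI-negative reads with full agreement), not the study's patient-level data:

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters scoring the same cases."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    # observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # chance agreement from each rater's marginal label frequencies
    p_e = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in labels)
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Hypothetical reconstruction of the study's reads: both readers agree on
# every scan (23 'ramp', 77 'intact'), giving the reported kappa of 1.00.
reads = ["ramp"] * 23 + ["intact"] * 77
print(cohens_kappa(reads, reads))  # 1.0
```

With any disagreement the statistic drops below 1; for example, two raters agreeing on 3 of 4 binary calls with these marginals yield kappa = 0.5.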
The median time between injury and MRI was 2 days (0-21), and 13 days (3-100) between injury and surgery. Cohen's kappa analysis showed excellent agreement between the two readers for the assessment of ramp lesions (1.00, p < 0.001). The ICC value for reliability of MRI measurements was 0.892 (95% CI 0.43-0.925, p < 0.01) for the medial tibial slope and 0.977 (95% CI 0.966-0.985, p < 0.01) for the lateral tibial slope, indicating excellent agreement.

Incidence of ramp lesions

The incidence of ramp lesions diagnosed on preoperative MRI was 23%, compared to a 'diagnosis at surgery' rate of 16%. Of these, 15 were repaired and only one was deemed stable and left alone. Hence, the MRI sensitivity to identify a ramp lesion was 62.5%, the specificity was 84.5%, the positive predictive value was 43.5% and the negative predictive value was 92.2%. There was no difference in the timing of the MRI or surgery between the groups with and without ramp lesions.

Association with ramp lesions

Results of knee laxity EUA and preoperative MRI for the presence of intraoperative ramp lesions are summarized in Tables 1 and 2. Patients' gender, age and injury mechanisms did not correlate with the presence or absence of ramp lesions at surgery. There was no statistically significant correlation between the presence of ramp lesions and the grade of the anterior drawer test, Lachman test and pivot-shift test (Table 1). However, 12 of 16 patients (75%) with a ramp lesion had a grade III Lachman test. Of the patients with ramp lesions, 93.7% exhibited concomitant sMCL injury on MRI (the presence of oedema in the sMCL but with normal clinical laxity). Ramp lesions were strongly associated with injuries to the sMCL (p = 0.002, OR 13.000, 95% CI 1.642-102.936) and the dMCL (p = 0.006, OR 5.000, 95% CI 1.621-15.419). There was also a strong association with bone oedema in the posterior medial tibial plateau and the medial femoral condyle.
In addition, the differential medial-lateral posterior tibial slope was significantly smaller in patients with a ramp lesion compared to patients with an intact meniscus ramp (p < 0.05). Ramp lesions were not associated with oedema in the POL, LCL, Kaplan fibres, or ALL, nor with medial or lateral meniscus lesions (Table 2).

Associated factors for ramp lesions

Factors associated with the ramp lesions seen at arthroscopy were identified with logistic regression analysis with backward elimination.

Discussion

The main findings of this study were that the presence on MRI of posterior MTP and MFC oedema, sMCL and dMCL lesions, and a smaller tibial slope asymmetry (1.7° vs. 3.8°) are highly associated with ramp lesions. Additionally, with binomial logistic regression analysis, a dMCL injury, a flatter lateral tibial slope and the identification of a ramp lesion on MRI significantly increased the likelihood of finding a ramp lesion at surgery.

The medial meniscus, with its firm attachment to the tibia via the menisco-tibial ligament, and specifically the posterior horn, is a secondary restraint to anterior tibial translation and external rotation of the knee. Its function becomes even more important in ACL-deficient knees [1,2,26,35]. Biomechanical studies have demonstrated that a ramp lesion, in addition to ACL deficiency, increases anteroposterior instability and external rotational instability [1,46]. AMRI has also been found in clinical studies [6,32,49]. There is growing scientific evidence and consensus among knee surgeons that ramp lesions should be sought, identified and repaired [7,38,40,50]. Repair of ramp lesions is safe [21] and restores normal knee kinematics when combined with ACL reconstruction in in-vitro studies [46]. Therefore, it is vital to detect ramp lesions, both on MRI and at surgery, and repair them at the time of ACL reconstruction, or risk ongoing pain, instability and ACL graft failure due to overload.
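The diagnostic-accuracy figures and odds ratios reported in the Results follow from standard definitions. The sketch below reconstructs them, assuming a confusion matrix and a 2×2 sMCL table back-calculated from the reported percentages (hypothetical splits consistent with the published aggregates, not patient-level data):

```python
import math

# Confusion matrix reconstructed from the reported figures (an assumption:
# 16 ramp lesions at surgery, 23 MRI-positive reads, sensitivity 62.5%).
tp, fn = 10, 6    # MRI-positive / MRI-negative among the 16 surgical ramp lesions
fp, tn = 13, 71   # MRI-positive / MRI-negative among the 84 ramp-intact knees

sensitivity = tp / (tp + fn)   # 10/16 = 0.625
specificity = tn / (tn + fp)   # 71/84 ~ 0.845
ppv = tp / (tp + fp)           # 10/23 ~ 0.435
npv = tn / (tn + fn)           # 71/77 ~ 0.922

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Odds ratio of a 2x2 table [[a, b], [c, d]] with a Wald 95% CI."""
    oratio = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(oratio) - z * se)
    hi = math.exp(math.log(oratio) + z * se)
    return oratio, lo, hi

# sMCL injury vs ramp lesion: 15/1 in the ramp group is reported; 45/39 in
# the no-ramp group is a hypothetical split consistent with the stated OR.
print(odds_ratio_wald_ci(15, 1, 45, 39))  # ~ (13.0, 1.64, 102.9)
```

Under this reconstruction, the Wald interval reproduces the published 95% CI of 1.642-102.936 for the sMCL odds ratio, suggesting the paper's CIs were computed the same way.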
Traditionally, ramp lesions were not detected, as they cannot easily be identified when viewing the posterior medial meniscus with the arthroscope placed anteriorly in the medial compartment. Diagnosis requires viewing the ramp region with the arthroscope in the posteromedial recess, using the intercondylar approach or an additional posteromedial portal. As recommended, this inspection must be done routinely during ACL reconstruction procedures. MRI identification of a ramp lesion and its associated factors will alert a surgeon to focus on the posteromedial area. As a ramp lesion identified on MRI increased the chance of finding one at surgery by 13.6 times, this emphasises the importance of such knowledge.

This study found that MRI had a moderate sensitivity (62.5%) but high specificity (84.5%), and a high negative predictive value (92.2%), in detecting ramp lesions. The published rates for MRI sensitivity in identifying ramp lesions are 48-90% [3,11,19,27,29,49,56], with 3 T MRI possibly superior to 1.5 T MRI scans [19]. Arner et al. [3] compared the results of three MRI readers with arthroscopic findings; their MRI sensitivity varied between 53.9 and 84.6%. This is similar to the findings of the present study, which showed a much higher inter-rater agreement, highlighting the benefit of having experienced MSK radiologists and of applying specific MRI criteria for ramp lesions. MRI specificity was reported to be over 90% in several studies [3,19,27], which is similar to the finding of 84.5% in this study. Overall, these results indicate that preoperative MRI is not accurate enough to detect all ramp lesions but is an excellent modality to exclude the presence of ramp lesions. Therefore, routine inspection of the ramp region in the posteromedial recess during arthroscopy in knees with ACL injury is advocated.

The incidence of ramp lesions in ACL-injured knees at surgery in a professional athlete population in this study was 16%.
This is consistent with the published literature, in which the incidence is reported to range from 9 to 34% [3,6,11,19,27,29,31,41,56]. The difference in ramp lesion prevalence between contact and non-contact injuries was not significant in the present study, although such a difference has been described previously [41]. It is important to mention that, although good views were obtained at surgery, by simply viewing the 'ramp' region with a 30° arthroscope passed into the posteromedial recess via the intercondylar notch, without using a posteromedial portal, some ramp lesions could have been missed and the incidence therefore under-reported. In the present study, the presence of ramp lesions was strongly associated with concurrent MRI injury to the sMCL and especially the dMCL. Interestingly, ramp lesions were not associated with damage to the posteromedial capsule, which could have been expected given the close anatomical relationship [8]. This finding links injuries to the posterior menisco-capsular junction of the medial meniscus with injuries to the medial collateral ligaments [23]. One of the roles of the MCL is resisting anterior translation of the medial tibial plateau/external rotation, for which the dMCL is the primary restraint between 0° and 60° of knee flexion [4,39,53,54], as well as valgus, for which the sMCL is the primary restraint. The association of concomitant injury to the dMCL and sMCL with ramp lesions is suggestive of a specific injury mechanism causing the ramp lesion as well as the injury to the dMCL and sMCL. To injure these structures together, there must logically be significant anterior translation and/or external rotatory subluxation of the medial tibial plateau during the event of an ACL rupture, since these loads are resisted by the posterior medial meniscus, dMCL and sMCL.
This might cause the medial femoral condyle to move posteriorly and ride over the medial meniscus, thereby stretching the PMC in the posteromedial recess to the point of failure [16]. In fact, 15 of 16 patients (93.7%) with ramp lesions had a sprain of the sMCL and 62.5% a lesion of the dMCL, which is firmly bound to the mid-portion of the medial meniscus. In contrast, the POL was intact in 87.5% of patients, further reinforcing the idea that this injury is due to an anterior translation of the medial tibia/external rotation injury mechanism, as the POL resists internal rotation in the knee close to full extension. Furthermore, bone oedema at the posterior MTP and the MFC was also correlated with ramp lesions, and 87.5% of cases with proven ramp lesions had MTP bone oedema, which is in keeping with the same mechanism. This finding is in agreement with previous studies that reported posteromedial tibial bone oedema as an important secondary MRI finding in conjunction with ramp lesions [5,11,27,29]. DePhillipo et al. [11] found MTP bone bruises in 72% of their patients diagnosed with ramp lesions. In contrast, Hatayama et al. found bone contusions in only 38% of their patients with ramp lesions, and the incidence of bone contusion did not differ from patients with an intact medial meniscus [19]. The higher incidence in the present study may reflect the patient population: perhaps injuries in professional athletes are more severe. The present study also showed that a smaller posterior tibial slope asymmetry was associated with ramp lesions and, from the logistic regression analysis, that a steeper posterior lateral tibial slope decreased their risk. In contrast, an increased lateral tibial slope is associated with ACL rupture [10,14,18].
The authors of this present study believe that a higher lateral tibial slope predisposes to the lateral femoral condyle subluxing off the posterior lateral tibia, causing ACL rupture and the classic lateral-compartment distal femoral and posterior tibial bone bruises. In such cases the abnormal motion is predominantly in the lateral compartment, with the centre of axial rotation on the medial tibial plateau, leaving the ramp intact as the MFC does not move posteriorly. Conversely, in knees with less posterior slope laterally, and thus less differential medial/lateral posterior slope, anterior tibial displacement gives the femur less tendency to move posteriorly in the lateral compartment alone; it will also do so medially, hence loading the PHMM and PMC. Our results are similar to the findings of Kim et al. [27], who associated ramp lesions with a steeper medial and flatter lateral slope. Song et al. also found a higher incidence of ramp lesions in patients with an increased medial slope [44]. With regard to clinical examination, Bollen described a correlation of ramp lesions with anteromedial rotatory subluxation [6]. The author noticed an increased anterior movement of the medial tibial plateau when the foot was externally rotated in 90° knee flexion [6] (i.e. the Slocum test [42]). Ramp lesions have also been associated with a higher side-to-side difference in anterior translation examined with a KT-2000 [49] and a higher incidence of grade III pivot-shift [32]. This study could not identify a statistically significant association between the presence of ramp lesions and a higher grade of Lachman test, anterior drawer test or pivot-shift test. However, 12 of the 16 knees with a ramp lesion had grade 3 Lachman tests and the other 4 were grade 2. Failure to reach statistical significance could, therefore, be a type 2 error, as 20% of studies fail to show a difference statistically when one actually exists [13].
Unfortunately, the Slocum test for anteromedial rotatory instability was not routinely performed during the present study. This study has several limitations. All patients were professional athletes and, therefore, do not represent a typical patient cohort compared to most clinical practices, and their injury patterns and rates of ramp lesion may not be the same as in other patient groups. These patients were, however, chosen to ensure the best quality of MRI imaging, and scans sufficiently soon after injury to document the full extent of injury to intra-articular structures and the soft tissue envelope, as they have more immediate access to MRI examination. Since these patients tend to have early surgery, direct correlation of MRI findings with arthroscopic diagnosis is possible. With delay the situation can change, as time might allow ramp lesions to heal. This does bring into question the possibility that early surgery increases the diagnosis, and possible overtreatment, of ramp lesions that might otherwise heal. Furthermore, the size and category of ramp lesions (stable or unstable) were not further classified and analysed. In addition, MRIs were acquired in various institutions with different protocols, which could affect scan quality, but minimum imaging requirements were applied for inclusion in the study design to allow for reliable analysis, which was reflected by high inter-rater agreement. An obvious issue is that whilst abnormal (high) signal represents injury to tissue, the integrity or otherwise of that tissue cannot be certain. Furthermore, spread of oedema/haematoma might, by MRI criteria, imply injury to soft tissues that are, in fact, intact. Again, this risk is mitigated by the scans mainly being undertaken 2 or 3 days from injury. The results from the present study emphasise the relation between ramp lesions and damage to the medial soft tissue envelope, hence indicating a likely external rotation injury mechanism in some ACL ruptures.
This should raise awareness of possible AMRI in these patients, which needs to be carefully assessed through clinical examination and addressed in the operating room.
Conclusion
In cases of acute ACL rupture, the presence on MRI of bone oedema in the MFC and posterior MTP, sMCL and dMCL lesions, and a smaller differential medial-lateral tibial slope are highly associated with ramp lesions. Additionally, a dMCL injury, a flatter lateral tibial slope, and the identification of a ramp lesion on MRI increase the likelihood of finding a ramp lesion at surgery according to logistic regression analysis. Preoperative MRI has only moderate sensitivity, but high specificity and a high negative predictive value, for the detection of ramp lesions.
Author contributions
LW, GB and AW designed the study, performed the statistical analysis and wrote the manuscript. LW drafted the manuscript. AW was responsible for clinical examination and the surgical treatment. VP, AM and JL collected the data and performed the radiological and MRI evaluation. MJ helped to design the study, assisted with data collection, statistical analysis and data interpretation and critically reviewed the manuscript. All authors read and approved the final manuscript.
Funding
Open Access funding enabled and organized by Projekt DEAL. No funding was received for this study.
Conflict of interest
The authors did not receive financial support or other benefits from commercial sources for the work reported in this manuscript, or any other financial support that could create a potential or apparent conflict of interest regarding the work.
Ethics approval
Approval to undertake the study was given by the institution involved, in line with the UK Health Research Authority guidance.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Recent increases in global HFC-23 emissions
[1] Firn-air and ambient air measurements of CHF 3 (HFC-23) from three excursions to Antarctica between 2001 and 2009 are used to construct a consistent Southern Hemisphere (SH) atmospheric history. The results show atmospheric mixing ratios of HFC-23 continuing to increase through 2008. Mean global emissions derived from these data for 2006-2008 are 13.5 ± 2 Gg/yr (200 ± 30 × 10 12 g CO 2 -equivalent/yr, or MtCO 2 -eq./yr), ~50% higher than the 8.7 ± 1 Gg/yr (130 ± 15 MtCO 2 -eq./yr) derived for the 1990s. HFC-23 emissions arise primarily from over-fluorination of chloroform during HCFC-22 production. The recent global emission increases are attributed to rapidly increasing HCFC-22 production in developing countries, since reported HFC-23 emissions from developed countries decreased over this period. The emissions inferred here for developing countries during 2006-2008 averaged 11 ± 2 Gg/yr HFC-23 (160 ± 30 MtCO 2 -eq./yr) and are larger than the ~6 Gg/yr of HFC-23 destroyed in United Nations Framework Convention on Climate Change (UNFCCC) Clean Development Mechanism (CDM) projects during 2007 and 2008.
Introduction
[2] Trifluoromethane (HFC-23) has an atmospheric lifetime of 270 yr, a 100-yr global warming potential (GWP) of 14,800 [Forster et al., 2007], and is an unavoidable by-product of chlorodifluoromethane (HCFC-22) production. Climate concerns have prompted efforts to reduce HFC-23 emissions by optimizing conditions during production of HCFC-22 and by destroying HFC-23 before it escapes to the atmosphere, through voluntary and regulatory efforts in developed (Annex 1) countries [e.g. …]. Approved CDM projects in non-Annex 1 countries generated Certified Emission Reductions (CERs) of 5.7 and 6.5 Gg of HFC-23 (84 and 97 MtCO 2 -eq.) in 2007 and 2008, respectively [UNFCCC, 2009].
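The CO2-equivalent figures above are a simple mass × GWP conversion; a sketch of the arithmetic, using the paper's 100-yr GWP of 14,800:

```python
def hfc23_to_co2eq_mt(emission_gg_per_yr, gwp=14800):
    """Convert an HFC-23 emission rate in Gg/yr to Mt CO2-eq./yr.

    1 Gg = 1e9 g and 1 Mt = 1e12 g, so:
    Mt CO2-eq. = Gg * GWP / 1000
    """
    return emission_gg_per_yr * gwp / 1000.0

# 2006-2008 mean (13.5 Gg/yr) and 1990s mean (8.7 Gg/yr):
print(hfc23_to_co2eq_mt(13.5))  # 199.8  (~200 Mt CO2-eq./yr)
print(hfc23_to_co2eq_mt(8.7))   # 128.76 (~130 Mt CO2-eq./yr)
```
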
These CDM projects had a value during 2007 and 2008 of nearly US$1 billion annually (at US$13 per ton CO 2 -eq.), which is substantially higher than the estimated industry cost of this HFC-23 emission abatement alone [Wara, 2007]. [3] The importance of understanding the influence of HFC-23 emission abatement efforts has increased with rapid growth in recent production of HCFC-22 in developing countries for both dispersive and feedstock uses [United Nations Environment Programme (UNEP), 2009]. Atmosphere-based estimates of HFC-23 emissions are relevant to ongoing discussions under the UNFCCC and its Kyoto Protocol regarding renewing existing CDM projects and approving additional projects for HCFC-22 facilities that are not currently eligible to participate in this program. In this paper global HFC-23 emissions are estimated from measurements of HFC-23 in ambient air and air from the perennial snowpack (firn) during three separate excursions to Antarctica between 2001 and 2009. The analysis of air trapped in firn provides a robust record of atmospheric trace-gas changes during the past 50 -100 years [Bender et al., 1994;Battle et al., 1996;Butler et al., 1999]. Firn-Air Analysis [5] Flask air was analyzed using gas chromatography with mass spectrometry and sample cryo-trapping techniques [Montzka et al., 1993]. Separation was performed on a 30-m Gas-Pro column. Both HFC-23 and HCFC-22 were detected with the CHF 2 + ion (m/z = 51) eluting at different times. Calibration is based upon static HFC-23 standards at 8.53 and 25.12 ppt that were prepared with gravimetric techniques. Calibration for HCFC-22 has been discussed previously [Montzka et al., 1993]. Consistency in HFC-23 calibration was checked by periodic analyses of 4 archived air tanks. Results from these analyses showed no significant secular trend in HFC-23 mixing ratios (0.1 ± 0.1 ppt/yr) during 2007-2009. 
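The calibration-consistency check described above amounts to fitting a trend line to repeated analyses of archived tanks and confirming that the slope is indistinguishable from zero. A minimal least-squares sketch; the tank values below are invented for illustration:

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return sxy / sxx

# Hypothetical repeat analyses of one archived tank (decimal year, ppt):
years = [2007.0, 2007.5, 2008.0, 2008.5, 2009.0]
ppt = [20.10, 20.18, 20.05, 20.22, 20.15]
drift = ols_slope(years, ppt)  # ppt/yr; near zero implies a stable scale
```
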
Based on repeat analyses of ambient air and differences between simultaneously filled flasks, the uncertainty on HFC-23 measurements is estimated to be 0.3 ppt.
Firn Modeling
[6] Diffusive air movement within firn was simulated with two different firn models: the Bowdoin model for SPO'01 and WAIS-D [Mischler et al., 2009], and the UCI model for SPO'08-09 [Aydin et al., 2004]. These models allow the consistency between a given trace-gas atmospheric history and firn-air measurements to be tested. The modeled diffusivity vs. depth relationships for each of the field studies were empirically determined by optimizing the agreement between modeled and measured CO 2 depth profiles and the known Antarctic atmospheric CO 2 history [Etheridge et al., 1996; Conway et al., 2004]. [7] An initial atmospheric history for HFC-23 from the 1940s to 2009 (history C) was derived from consideration of multiple inputs: during 1943 to 1995, with an atmospheric box model [Montzka et al., 2009] in which HFC-23 emissions were derived as a constant percentage of past HCFC-22 production (Alternative Fluorocarbons Environmental Acceptability Study, data tables, 2009, available at http://www.afeas.org) and scaled to fit published measurements of HFC-23 from 40°S during the early 1990s [Oram et al., 1998]; during 1996-2006, with firn-model-based dating of HFC-23 and HCFC-22 firn data using the "effective age technique" [Trudinger et al., 2002]; and with ambient measurements made during the firn-air collections in Jan. 2001, Dec. 2005, and Dec. 2008-Jan. 2009, and constant emissions during 2006-2008. [8] Nineteen additional trial mixing-ratio histories were considered for HFC-23 (Table 1 and Text S1 of the auxiliary material). Most differed from C only in years after 1995 and were derived with an atmospheric box model incorporating HFC-23 emissions as different and variable fractions of reported HCFC-22 production (F, G, and K histories).
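The box-model construction described above — emissions taken as a fixed mass fraction of HCFC-22 production, integrated against a 270-yr atmospheric lifetime — can be sketched as below. The Gg-per-ppt conversion (~12.4 Gg of HFC-23 per ppt of global mean mixing ratio, from the atmosphere's molar content and HFC-23's ~70 g/mol molar mass) and the production series are our own illustrative assumptions, not values given in the paper:

```python
GG_PER_PPT = 12.4    # assumed: Gg of HFC-23 per ppt of global mean
LIFETIME_YR = 270.0  # atmospheric lifetime of HFC-23 (from the paper)

def run_box_model(initial_ppt, annual_emissions_gg):
    """One-box model: dC/dt = E/k - C/tau, Euler-stepped yearly.
    Returns the mixing-ratio trajectory in ppt."""
    c = initial_ppt
    trajectory = [c]
    for e in annual_emissions_gg:
        c += e / GG_PER_PPT - c / LIFETIME_YR
        trajectory.append(c)
    return trajectory

# Emissions as a constant 2% (by mass) of an invented HCFC-22
# production series (Gg/yr):
production = [300, 350, 400, 450, 500]
emissions = [0.02 * p for p in production]
trajectory = run_box_model(initial_ppt=18.0, annual_emissions_gg=emissions)
```

With a 270-yr lifetime the loss term is small, so the mixing ratio closely tracks cumulative emissions; this is why trial histories built from production fractions can be compared against firn measurements so directly.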
A constant emissions scenario was also tested (history H), as were emissions histories derived from updated Cape Grim observations [McCulloch and Lindley, 2007; Intergovernmental Panel on Climate Change (IPCC), 2005] (E histories). Histories were also derived from constant HFC-23 emission to HCFC-22 production (E 23 /P 22 ) fractions to match observed atmospheric HFC-23 mixing ratios at certain dates, and as modifications to good-fitting histories, but these trial histories gave poor fits to firn-air results (J and L histories in Text S1).
Table 1 notes: Reduced χ2 values for SPO'01 were calculated with all firn data, but for WAIS-D and SPO'08-09 with samples only from the mid-to-upper firn (see text). For the eight degrees of freedom associated with the nine samples used to assess histories at both WAIS-D and SPO'08-09 (HCFC-22 > 90 ppt), P < 0.1 for χ2 ≥ 1.67 (P < 0.05 for χ2 ≥ 1.938). For SPO'01 (degrees of freedom = 10), P < 0.1 for χ2 > 1.6 [Bevington and Robinson, 2003]. UNEPa = UNEP HCFC-22 production amounts for dispersive uses only. Fractions of 2.8, 3.0, and 3.2% of UNEPa production correspond approximately to 1.8, 1.9, and 2.0% of total UNEP HCFC-22 production. UNEP(P 22, A5 total) and UNEP(P 22, nA5 total) correspond to total HCFC-22 production reported for all uses by developing (A5) and developed (non-A5) countries, respectively (terms used as defined in the Montreal Protocol) (see Text S1).
[9] The well-known atmospheric history of HCFC-22, derived from ongoing and archived surface flask measurements [Montzka et al., 1993; 2009; Miller et al., 1998] (see Text S1), provides the basis here for deriving accurate HFC-23 histories from firn air. The consistency between trial HFC-23 histories and firn-air data was objectively assessed by calculating reduced χ2 between the modeled and measured HFC-23 vs. HCFC-22 relationship in firn air (Table 1).
Reduced χ2 is calculated as Σ[(model − observed)² / error²] / (degrees of freedom); a χ2 of 1.0 indicates that residuals and uncertainties are similar. The HFC-23 vs. HCFC-22 relationship was used to assess trial HFC-23 histories in order to minimize the influence of errors in the firn diffusivity vs. depth parameterization [Battle et al., 1996]. The accuracy of these models was validated using firn-air measurements of other compounds having well-known atmospheric histories (HCFC-22, CFC-12, HFC-134a, and CH 3 CCl 3 ). Consistent results were obtained for all these gases despite their very different histories (see Text S1). Similar conclusions regarding which HFC-23 trial histories are most consistent with the firn data are reached when trial histories are evaluated with the SH atmospheric history and firn data for CO 2 .
Results and Discussion
[10] Results from all three Antarctic firn-air samplings show tight correlations between HFC-23 and HCFC-22 mixing ratios that are nearly linear, suggesting similar relative atmospheric changes for these trace gases in the past (Figure 1). This observation is consistent with emissions of HFC-23 arising primarily from HCFC-22 production at a fairly constant yield. Yields of 1.5 to 4% (by mass) of HFC-23 are typical during the production of HCFC-22, depending upon how well this process is optimized [McCulloch and Lindley, 2007]. [11] Firn-air diffusion models provide a means to compare trial atmospheric histories with firn-air observations. A rough estimate of 20th-century changes in HFC-23 mixing ratios was initially provided with history C. This history, when modeled with the Bowdoin and UCI models, yields an expected firn profile that is highly consistent with the entire measured firn profile from SPO'01 and SPO'08-09 (χ2 = 0.7 for SPO'01 and 0.8 for SPO'08-09). This history is also reasonably consistent with Oram et al.'s [1998] results.
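The reduced χ2 metric used to rank the trial histories can be sketched directly; the model/observation values below are invented for illustration, paired with the 0.3 ppt measurement uncertainty quoted earlier:

```python
def reduced_chi2(model, observed, errors, dof=None):
    """Reduced chi-squared between modelled and measured values.

    dof defaults to n - 1, matching the paper's convention of eight
    degrees of freedom for nine samples. A value near 1 means the
    residuals are comparable to the measurement uncertainties.
    """
    if dof is None:
        dof = len(observed) - 1
    chi2 = sum(((m - o) / e) ** 2
               for m, o, e in zip(model, observed, errors))
    return chi2 / dof

# Invented firn-air mixing ratios (ppt) with 0.3 ppt uncertainties:
obs = [21.0, 19.5, 17.8, 15.2, 12.0]
mod = [21.2, 19.3, 18.0, 15.0, 12.3]
err = [0.3] * len(obs)
print(round(reduced_chi2(mod, obs, err), 2))  # 0.69
```
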
Contamination of the deepest samples collected at WAIS-D by the KNF pump prevented an assessment of the older part of history C with the WAIS-D data (see Text S1). [12] To improve our understanding of atmospheric HFC-23 changes since the mid-1990s, a set of trial histories was derived as modifications of history C in years after 1995. These histories were also assessed with the reduced χ2 metric, but only against firn samples in the mid-to-upper firn profile having HCFC-22 mixing ratios >90 ppt (>68 m depth at WAIS-D and >62 m depth at SPO'08-09) (Table 1). HCFC-22 mixing ratios of >90 ppt are representative of high-latitude SH sites since the early 1990s [Montzka et al., 1993; Miller et al., 1998]. Calculated in this way, the reduced χ2 metric reflects model-data agreement for the past two decades. [13] Among these trial atmospheric histories, only a few provided a good fit (P < 0.1 for reduced χ2 > 1.67) to results from WAIS-D and SPO'08-09 in the mid-to-upper firn (Table 1 and Figure 1). All of these best-fit histories suggest an increase in the growth rate of HFC-23 in the atmosphere after 2005. Trial history H was derived as a linear increase to match ambient mixing ratios in 2001 and at the end of 2005. This history provides a good fit to the WAIS-D firn profile collected in December 2005 (χ2 = 0.6) but, when extrapolated to January 2009, underestimates the surface mixing ratio measured during SPO'08-09 in three different flasks by ~1 ppt (Figure 1). History H also gives a poor fit to the SPO'08-09 firn results (χ2 = 2.3; Table 1), providing further evidence that the atmospheric growth rate of HFC-23 increased in recent years.
Figure 1 caption: Multiple trial histories were derived (lines) and incorporated into the firn models to assess their consistency with firn-air measurements (points) (see Table 1 and Text S1 for history descriptions). Best-fitting histories (C, F1, F2, G, K2) are shown as red lines; others are shown in gray, except history H (green line). Results from WAIS-D showing substantial pump contamination are indicated as plus symbols. Insets are expanded views of results from the upper firn. Uppermost points are ambient air samples filled through the firn-sampling apparatus.
[14] The range of trial atmospheric histories considered here leads to a wide range of past global HFC-23 emissions (Figure 2a). The atmospheric histories giving the lowest χ2 all suggest fairly constant emissions from 1990 to 2003 and increased emissions thereafter. A best-estimate HFC-23 emissions record was derived from the mean of the five best-fitting SH atmospheric histories and indicates global HFC-23 emissions of 8.7 ± 1 Gg/yr during the 1990s and 13.5 ± 2 Gg/yr (200 MtCO 2 -eq./yr) during 2006-2008 (Figure 2b). By comparison, HCFC-22 emissions during 2006-2008 averaged 610 MtCO 2 -eq./yr [Montzka et al., 2009]. The best-estimate HFC-23 emissions history is consistent with one derived from all 20 trial histories after weighting annual emissions by the sum of 1/χ2 from WAIS-D and SPO'08-09. It is also consistent with the mean emissions implied by measured HFC-23 changes in ambient air since 2001 (Figure 2b; see also Text S1). When considered with global HCFC-22 production data (including feedstocks), these results suggest a global mean E 23 /P 22 fraction of 1.7% by mass for 2003-2008, which is slightly less than observed in the 1990s (Figure 2c) [Oram et al., 1998; McCulloch and Lindley, 2007]. [15] HFC-23 emissions from Annex 1 countries reported to the UNFCCC indicate a substantial decline beginning in 1998 as a result of voluntary and regulatory efforts (Figure 2b) [UNFCCC, 2009] (see Table 2 of Text S1). The decline in Annex 1 emissions stems from reduced HCFC-22 production and a decrease in the E 23 /P 22 fraction from approximately 2% in the 1990s to 0.9% during 2003-2007 (Figure 2c).
Reported reductions in Annex 1 HFC-23 emissions and in the E 23 /P 22 fraction cannot be directly verified with our atmospheric data, because during this same period HFC-23 emissions were changing as HCFC-22 production was increasing rapidly in non-Annex 1 countries (Figure 2d). [16] The difference between the global emissions derived here and those reported to the UNFCCC by Annex 1 countries provides an estimate of HFC-23 emissions from non-Annex 1 countries, which are not reported to the UNFCCC (Figure 2b). This analysis suggests steady increases in HFC-23 emissions from non-Annex 1 countries at the same time their HCFC-22 production was increasing on average by ~50 Gg/yr (from 2000 to 2007) (Figures 2b and 2d). Mean HFC-23 emissions from non-Annex 1 countries are estimated to have been 11 ± 2 Gg/yr during 2006-2008. A mean E 23 /P 22 of 2.4 ± 0.3% is derived for this same period using total non-Annex 1 HCFC-22 production (Figure 2c). [17] UNFCCC data show that 5.7 and 6.5 Gg of HFC-23 (84-97 MtCO 2 -eq.) were destroyed in 2007 and 2008, respectively, through the execution of CDM projects approved by the UNFCCC (Figure 2d; see Table 2 of Text S1). This represents the destruction of HFC-23 emissions from 43-48% of the HCFC-22 produced in non-Annex 1 countries during these years. In the world avoided, defined by the absence of HFC-23 destruction by CDM projects, global emissions of HFC-23 would have doubled from ~9 Gg/yr to ~18 Gg/yr during the past decade as HCFC-22 production increased in non-Annex 1 countries (Figure 2b). [18] Our results indicate that 11 ± 2 Gg/yr of HFC-23 (160 ± 30 MtCO 2 -eq./yr) was emitted during 2006-2008 from non-Annex 1 countries. These emissions are associated with HCFC-22 production not covered by CDM projects and have an inferred E 23 /P 22 ratio of 3.7 ± 0.3% (Figure 2c; Table 2 of Text S1).
Figure 2 caption: Results are shown for the globe (red lines), for Annex 1 countries (blue lines) and for non-Annex 1 countries (black lines). Figure 2b includes a global best-estimate HFC-23 emissions history calculated from the mean of the best-fit trial histories in Figure 2a (bold red lines; other histories shown as different colors). Global emissions derived from surface measurements alone are indicated as shaded gray regions (Figure 2b; see Text S1). HFC-23 emissions from non-Annex 1 countries are calculated from the difference between the best-estimate global emissions and HFC-23 emissions reported by Annex 1 countries [UNFCCC, 2009] (Figure 2b). E 23 /P 22 values are derived from emissions in Figure 2b and HCFC-22 production data including unrestricted amounts for feedstocks, which accounted for 37% of global production in 2007 [UNEP, 2009]. Adding CDM-related CER quantities to the best-estimate global HFC-23 emissions shows the world avoided by CDM projects (green dot-dot-dashed lines in Figure 2b; see Table 2 in Text S1). The green dot-dot-dashed line in Figure 2c is calculated from total non-Annex 1 HFC-23 emissions divided by non-Annex 1 HCFC-22 production not covered by CDMs. Firn and ambient air results yield only a single average for 2006-2008 emissions and quantities derived from these emissions. Global quantities estimated elsewhere are also shown (red circles and lines [Oram et al., 1998]) (Figures 2b and 2c). Production and Annex 1 emission data for 2008 are projections (dashed lines in Figures 2b-2d). Uncertainties on firn-derived global emissions represent the spread of best-fit trial histories plus a modeling uncertainty of 10%. Uncertainties of ±5% are applied to production data and ±10% to reported Annex 1 HFC-23 emissions (see Text S1). Though a 100-yr GWP of 14,800 is used here to convert HFC-23 emissions to CO 2 -eq. emissions [Forster et al., 2007], the UNFCCC [2009] uses a GWP of 11,700. Annual values are plotted at mid-year.
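The bookkeeping behind these paragraphs — non-Annex 1 emissions obtained by subtracting Annex 1 reported emissions from the atmosphere-derived global total, then expressed as a yield fraction of HCFC-22 production — can be sketched with round numbers from the text (the production figure is an illustrative assumption chosen only to reproduce the quoted ~2.4% mean fraction):

```python
def non_annex1_emissions(global_gg, annex1_reported_gg):
    """Non-Annex 1 HFC-23 emissions (Gg/yr), estimated by difference."""
    return global_gg - annex1_reported_gg

def yield_fraction(e23_gg, p22_gg):
    """E23/P22: mass of HFC-23 emitted per unit mass of HCFC-22 produced."""
    return e23_gg / p22_gg

# 2006-2008 means: atmosphere-derived global total 13.5 Gg/yr; an
# Annex 1 reported total of ~2.5 Gg/yr is implied by the 11 Gg/yr
# non-Annex 1 estimate.
non_annex1 = non_annex1_emissions(global_gg=13.5, annex1_reported_gg=2.5)

# Assumed total non-Annex 1 HCFC-22 production of ~460 Gg/yr
# (illustrative), giving the ~2.4% mean yield fraction:
frac = yield_fraction(non_annex1, 460.0)
print(non_annex1, f"{frac:.1%}")  # 11.0 2.4%
```
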
This ratio is slightly higher, on average, than that inferred for non-Annex 1 countries in most other years and is substantially larger than reported by Annex 1 countries. There are uncertainties in this ratio related to the precise timing of the inferred global emission changes and the extrapolation to 2008 of the Annex 1 reported emission and HCFC-22 production magnitudes. However, these uncertainties do not appreciably affect our derived 2006-2008 emission and E 23 /P 22 estimates because these estimates represent averages over a 3-year period. The rather high yield ratio inferred for non-Annex 1 HCFC-22 production not currently covered by CDM projects explains why the global E 23 /P 22 fraction did not decrease between 2003 and 2008, even though HFC-23 emissions associated with ~30% of total global HCFC-22 production were abated by CDM projects during 2007-2008 (Figures 2c and 2d). [19] In summary, the new atmospheric and firn-air observations presented here indicate a substantial increase in global HFC-23 mixing ratios and emissions during the early 2000s. These increases are derived for a period when Annex 1 countries reported decreasing emissions to the UNFCCC, indicating that HFC-23 emissions from non-Annex 1 countries increased as they produced more HCFC-22. Although CDM projects destroyed a large fraction of HFC-23 emissions from non-Annex 1 countries during 2007-2008, both HCFC-22 production data and the non-Annex 1 HFC-23 emissions inferred here suggest that a substantial amount of HCFC-22 production and associated HFC-23 emission continued unabated during these years.
Vivaxin genes encode highly immunogenic, non-variant antigens on the Trypanosoma vivax cell surface
Trypanosoma vivax is a unicellular hemoparasite, and a principal cause of animal African trypanosomiasis (AAT), a vector-borne and potentially fatal livestock disease across sub-Saharan Africa. Previously, we identified diverse T. vivax-specific genes that were predicted to encode cell surface proteins. Here, we examine the immune responses of naturally and experimentally infected hosts to these unique parasite antigens, to identify immunogens that could become vaccine candidates. Immunoprofiling of host serum shows that one particular family (Fam34) elicits a consistent IgG antibody response. This gene family, which we now call Vivaxin, encodes at least 124 transmembrane glycoproteins that display quite distinct expression profiles and patterns of genetic variation. We focused on one gene (viv-β8) that encodes one particularly immunogenic vivaxin protein and which is highly expressed during infections but displays minimal polymorphism across the parasite population. Vaccination of mice with VIVβ8 adjuvanted with Quil-A elicits a strong, balanced immune response and delays parasite proliferation in some animals but, ultimately, it does not prevent disease. Although VIVβ8 is localized across the cell body and flagellar membrane, live immunostaining indicates that VIVβ8 is largely inaccessible to antibody in vivo. However, our phylogenetic analysis shows that vivaxin includes other antigens shown recently to induce immunity against T. vivax. Thus, the introduction of vivaxin represents an important advance in our understanding of the T. vivax cell surface. Besides being a source of proven and promising vaccine antigens, the gene family is clearly an important component of the parasite glycocalyx, with potential to influence host-parasite interactions.
Introduction
African trypanosomes (Trypanosoma subgenus Salivaria) are unicellular flagellates and obligate hemoparasites. Trypanosoma vivax is one of several African trypanosome species that cause animal African trypanosomiasis (AAT), a vector-borne disease of livestock that is endemic across sub-Saharan Africa, as well as found sporadically in South America [1][2]. Cyclical transmission of T. vivax by tsetse flies (Glossina spp.), or mechanical transmission by diverse other biting flies, leads to an acute, blood-borne parasitaemia and subsequent chronic phases during which parasites disseminate to various tissues, the central nervous system in particular [1][2][3][4][5]. AAT is a potentially fatal disease characterised by acute inflammatory anaemia and various reproductive, neural and behavioural syndromes during the chronic phase [6][7]. The impact of the disease on livestock productivity, food security and the wider socio-economic development of endemic countries is profound and measured in billions of dollars annually [8]. Thus, AAT is rightly considered one of the greatest challenges to animal health in these regions [9][10]. Strategies to prevent AAT are typically based around vector control, using insecticides, traps or pasture management, in combination with prophylaxis with trypanocidal drugs [11]. However, widespread drug resistance and the on-going cost of maintaining transnational control mean that a vaccine is the preferred, sustainable solution [12][13]. African trypanosome infections are, however, far from an ideal target for vaccination, for two reasons. First, antigenic variation of the Variant Surface Glycoprotein (VSG) enveloping the trypanosome cell leads to immune evasion, and immunization with VSG fails to protect against heterologous challenge [14]. Second, chronic infection leads to an immunosuppressive environment and ablation of memory B-cells [13].
Successful recombinant vaccines exist for other pathogens that are capable of antigenic switching, for example, hemagglutinin of influenza [15], hepatitis C [16], outer surface antigens of Borrelia [17] and the circumsporozoite protein of Plasmodium falciparum [18]. These vaccines are based on pathogen surface antigens that elicit dominant immune responses in natural infections. Thus, while antigenic variation of trypanosomes specifically precludes whole-cell vaccine approaches, recombinant vaccines might work if based on non-VSG antigens exposed to the immune system during infections. Yet, most experiments using various conserved and invariant trypanosome proteins [19][20][21] have not led to robust protective immunity, causing the very plausibility of African trypanosome vaccines to be questioned [22]. Recently, however, in a systematic screen of recombinant subunit vaccines based on T. vivax non-VSG surface antigens, we identified a T. vivax-specific, invariant flagellum antigen (IFX) that induced long-lasting protection in a mouse model. This immunity was passively transferred with immune serum, and recombinant monoclonal antibodies to IFX could induce sterile protection [23]. In this study, we continue our evaluation of T. vivax antigens using a complementary approach, beginning by analysing the naturally occurring antibody responses to T. vivax-specific surface proteins. We previously categorized genes encoding T. vivax-specific, cell-surface proteins that were not VSG ('TvCSP') into families, named Fam27 to Fam45 inclusive [24]. We showed that many of these TvCSP families (e.g. Fams 29, 30, 32, 34 and 38) are abundant and preferentially expressed in bloodstream-form parasites [25]. 
Our aim here is to identify candidates for recombinant vaccine development through four objectives: (1) to assay serum antibody from naturally infected animals using a custom TvCSP peptide array; (2) to produce recombinant protein for immunogenic TvCSP using a mammalian expression system; (3) to vaccinate and challenge with T. vivax in a mouse model; and (4) to examine the cell-surface localisation of TvCSP using immunofluorescence and electron microscopy. We show that one TvCSP family of 124 paralogous genes encoding putative type-1 transmembrane proteins is especially immunogenic in natural infections, and we name this family vivaxin. Vaccination with recombinant vivaxin proteins produces a robust, mixed immune response in mice that significantly reduces parasite burden, but without ultimately preventing infection. We show that at least one vivaxin family member is found on the extracellular face of the plasma membrane of T. vivax bloodstream-stage trypomastigotes, and therefore, aside from its utility as a vaccine candidate, vivaxin is likely to be an abundant component of the native T. vivax surface coat, alongside VSG. Ethics statement All mouse experiments were performed under UK Home Office governmental regulations (project licence numbers PD3DA8D1F and P98FFE489) and European directive 2010/63/EU. Research was ethically approved by the Sanger Institute Animal Welfare and Ethical Review Board. Mice were maintained under a 12-h light/dark cycle at a temperature of 19-24˚C and humidity between 40 and 65%. The mice used in this study were 6-14-week-old male and female Mus musculus strain BALB/c, which were obtained from a breeding colony at the Research Support Facility, Wellcome Sanger Institute. Design and production of TvCSP peptide microarray The array design included 63 different T. vivax Y486 antigens that are not VSG (42 representatives of TvCSP multi-copy families and 21 T. vivax-specific, single-copy genes with predicted cell surface expression). 
We selected these 63 proteins to ensure that the array included multiple representatives of all putative T. vivax-specific cell surface gene families, as well as single-copy genes, that were defined in our previous work and strongly expressed in mouse bloodstream infections [24][25]. The microarrays comprised 600 peptides printed in duplicate, each 15 amino acids long with a peptide-peptide overlap of 14 amino acids, and were manufactured by PEPperPRINT (Heidelberg, Germany). Each array included peptides cognate to mouse monoclonal anti-FLAG (M2) (DYKDDDDKAS) and mouse monoclonal anti-influenza hemagglutinin HA (YPYDVPDYAG), displayed on the top left and bottom right respectively, which were used as controls (12 spots per control peptide). Infected host serum Blood serum from trypanosusceptible cattle known, or suspected, to be infected with T. vivax was obtained from Kenya (N = 24), Cameroon (N = 26) and Brazil (N = 6). African samples came from naturally infected animals in endemic disease areas (although not necessarily infected at the time of sampling), while Brazilian serum came from calves experimentally infected with the Brazilian T. vivax Lins strain [26]. None of the animals had been treated with trypanocidal drugs prior to serum sampling. Samples were screened with the Very Diag diagnostic test (Ceva-Africa; [27]), which confirmed that they were seropositive for T. vivax. Negative (uninfected) controls were provided by serum from UK cattle (N = 4), seronegative by diagnostic test. A further negative control for cross-reactivity with T. congolense (commonly co-incident with T. vivax) utilised serum from Cameroonian cattle (N = 11) that were seronegative by diagnostic test for T. vivax, but seropositive for T. congolense. Immunoprofiling assay Fifteen of the 57 positive T. vivax samples to be tested in the microarrays were seropositive for T. vivax only (i.e. unique infection), while 42 were seropositive for both T. vivax and T. congolense. 
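The tiling scheme used for the array (15-mers advancing one residue at a time, i.e. a 14-residue overlap) can be sketched as follows; the function name and demo sequence are illustrative, not part of the PEPperPRINT pipeline:

```python
def tile_peptides(seq: str, length: int = 15, overlap: int = 14) -> list:
    """Cut `seq` into overlapping windows of `length` residues.

    The window advances by (length - overlap) residues per step, so a
    14-residue overlap on 15-mers means a one-residue step, as on the array.
    """
    step = length - overlap
    return [seq[i:i + length] for i in range(0, len(seq) - length + 1, step)]

# A toy 20-residue sequence yields 20 - 15 + 1 = 6 overlapping 15-mers.
peptides = tile_peptides("ACDEFGHIKLMNPQRSTVWY")
```

Because adjacent peptides differ by a single residue at each end, comparing responses across consecutive spots localizes an epitope to within a few residues.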
Before applying these to the peptide arrays, one array was pre-stained with an anti-bovine IgG goat secondary antibody (H+L) Cy3 (Jackson ImmunoResearch Laboratories) at a dilution of 1:4500 in order to obtain the local background values. Slides were analyzed with an Agilent G2565CA Microarray Scanner (Agilent Technologies, USA) using red (670nm) and green (570nm) channels independently, with a resolution of 10μm. The images obtained were used to quantify raw, background and foreground fluorescence intensity values for each spot in the array using the PEPSlide Analyzer software (Sicasys Software GmbH, Heidelberg, Germany). Immunoprofiling analysis The limma R package from Bioconductor [28] was used to identify the most immunogenic peptides in livestock serum samples by location and across all samples. The data were extracted directly from the Genepix files (.gpr) produced by the PepSlide Analyzer, using only the green channel intensity data. The "normexp" method was selected for background correction, and normalization between arrays was achieved with vsn [29]. A cut-off threshold was defined according to Valentini et al. [30] and applied to the raw response intensity (RRI) values. A filtering step was performed removing control peptides (HA and FLAG) from each array, and the RRI values from duplicate spots were averaged. After combining all samples from different locations, and both experimental and natural infections, the difference in RRI values in response to infected versus uninfected serum was assessed for each spot using limma to determine statistical significance (p-value < 0.05) and log2 fold-change. Benjamini and Hochberg's method for the false discovery rate was applied [31]. Phylogenetic analysis Phylogenies were estimated for both codon and amino acid alignments of a conserved region (221 amino acids), using maximum likelihood and Bayesian inference. Maximum likelihood trees were estimated using Phyml [34] with automatic model selection by SMS [35], according to the Akaike Information Criterion. 
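Model selection by the Akaike Information Criterion, as performed by SMS, amounts to penalizing each candidate model's maximized log-likelihood by its parameter count; a minimal sketch with hypothetical likelihood values (not the fits reported below):

```python
def aic(log_likelihood: float, n_params: int) -> float:
    """Akaike Information Criterion: AIC = 2k - 2*lnL; lower is better."""
    return 2 * n_params - 2 * log_likelihood

def best_model(fits: dict) -> str:
    """`fits` maps model name -> (maximized lnL, free-parameter count)."""
    return min(fits, key=lambda name: aic(*fits[name]))

# Hypothetical fits: the richer model wins only if its likelihood gain
# outweighs the 2-points-per-parameter penalty.
fits = {"JC69": (-5120.0, 1), "GTR+G": (-5010.0, 9)}
chosen = best_model(fits)  # "GTR+G" here: AIC 10038.0 < 10242.0
```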
The optimal models were GTR+Γ (α = 3.677) and JTT+Γ (α = 5.568) for codon and protein alignments respectively. Topological robustness was measured using an approximate log-likelihood ratio (aLRT) branch test, as well as 100 nonparametric bootstrap replicates. Raxml [36] was also used to estimate bootstrapped maximum likelihood trees, using unpartitioned GTR+FU+Γ (α = 3.134) and LG+Γ (α = 3.847) models for codon and protein alignments respectively. Bayesian phylogenies were estimated from the same alignments using Phylobayes [37], employing four Markov chains in parallel and a CAT model with rate heterogeneity. A single, divergent sequence (TvY486_0024510) was designated as outgroup because it branches close to the mid-point in all analyses. Recombinant protein expression Protein sequences encoding the extracellular domain and lacking their signal peptide were codon optimized for expression in human cells and made by gene synthesis (Geneart AG, Germany and Twist Bioscience, USA). The sequences were flanked by unique NotI and AscI restriction enzyme sites and cloned into a pTT3-based mammalian expression vector [38] between an N-terminal signal peptide to direct protein secretion and a C-terminal tag that included a protein sequence that could be enzymatically biotinylated by the BirA protein-biotin ligase [39] and a 6-his tag for purification. The ectodomains were expressed as soluble recombinant proteins in HEK293 cells as described [40][41]. To prepare purified proteins for immunisation, between 50 mL and 1.2 L (depending on the level at which the protein was expressed) of spent culture media containing the secreted ectodomain was harvested from transfected cells, filtered and purified by Ni²⁺ immobilised metal ion affinity chromatography on HisTrap columns using an ÄKTA Pure instrument (Cytiva, UK). Proteins were eluted in 400mM imidazole as described [42] and extensively dialysed into HBS before quantification by spectrophotometry at 280nm. 
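Quantification by absorbance at 280nm rests on the Beer-Lambert law, A = ε·c·l; a sketch of the conversion to mg/mL, using a hypothetical extinction coefficient and molecular weight rather than values for any vivaxin protein:

```python
def a280_to_mg_per_ml(a280: float, ext_coeff: float, mol_weight: float,
                      path_cm: float = 1.0) -> float:
    """Beer-Lambert: c (mol/L) = A / (ε * l); multiplying by MW (g/mol)
    gives g/L, which is numerically equal to mg/mL."""
    molar = a280 / (ext_coeff * path_cm)
    return molar * mol_weight

# Hypothetical protein: ε = 50,000 M⁻¹·cm⁻¹, MW = 52,000 g/mol, 1 cm path.
conc = a280_to_mg_per_ml(0.5, 50_000, 52_000)  # 0.52 mg/mL
```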
Protein purity was determined by resolving one to two micrograms of purified protein by SDS-PAGE using NuPAGE 4-12% Bis-Tris precast gels (ThermoFisher) for 50 minutes at 200V. Where reducing conditions were required, NuPAGE reducing agent and anti-oxidant (Invitrogen) were added to the sample and the running buffer, respectively. The gels were stained with InstantBlue (Expedeon) and imaged using a c600 Ultimate Western System (Azure Biosystems). Purified proteins were aliquoted and stored frozen at -20˚C until use. Vaccine preparation VIVβ11, VIVβ14, VIVβ20 and VIVβ8 recombinant proteins were each combined independently with one of three adjuvants in order to compare the types of immune response elicited. The vaccine formulation was prepared by combining 20μg purified antigen with either 100μg Alhydrogel adjuvant (Alum) (vac-alu-250; InvivoGen), Montanide W/O/W ISA 201 VG (Seppic) or 15μg saponin Quil-A (vac-quil; InvivoGen), respectively. In a second experiment (see below), the vaccine was formulated with 50μg VIVβ8 and 15μg Quil-A. Control animals were immunized with the adjuvants only, at the same concentration as the antigen-vaccinated groups. Mouse immunization and challenge with T. vivax Our approach to vaccination-challenge experiments has been described previously [23]. Male BALB/c mice were distributed in groups (n = 3) as follows for the immunization: four groups were immunized with Alum in combination with each antigen (i.e. VIVβ*/A), and four groups with each antigen co-administered with Montanide ISA 201 VG (i.e. VIVβ*/M). There was one control group for each of the VIVβ*/A- and VIVβ*/M-vaccinated groups. In addition, female mice were randomly distributed in five groups (n = 8), four of them immunized with Quil-A plus each antigen (i.e. VIVβ*/Q) and one as a control group immunized with adjuvant only. Mice from all groups were immunized on days 0, 14 and 28 subcutaneously in two injection sites (100μl/injection). 
Animals from the VIVβ*/A and VIVβ*/M groups were euthanized two weeks after the third immunization (day 42), since, by then, it was clear from post-immunization assays that Quil-A provided the preferred, balanced Th1/Th2 response. The VIVβ*/Q groups rested for 14 days prior to challenge; at day 42, they were infected intraperitoneally with 10³ bioluminescent, bloodstream-form T. vivax parasites (Y486 strain). The parasites were obtained at day 7 post infection (dpi) from previous serial passages in mice. Briefly, 10μl whole blood was collected and diluted 1:50 with PBS + 5% D-glucose + 10% heparin. After challenge, the animals were monitored daily and T. vivax infection was quantified by bioluminescent in vivo imaging. Subsequently, a second challenge was conducted to confirm the results; two groups of mice (n = 15), with equal numbers of each sex, were immunized following the same schedule as before with 50μg VIVβ8 + 15μg Quil-A or adjuvant only, respectively, prior to challenge on day 74. In vivo imaging Our approach to in vivo imaging has been described previously [23]. Briefly, animals were injected daily, starting at 5 dpi and 6 dpi for the first and second challenge respectively, with the luciferase substrate D-luciferin (potassium salt, Source BioScience, UK) diluted in sterile PBS for in vivo imaging and data acquisition. Mice were injected intraperitoneally with 200 μl luciferin solution at a dose of 200mg/kg per mouse 10 minutes before data acquisition. Animals were anaesthetized using an oxygen-filled induction chamber with 3% isoflurane, and bioluminescence was measured using the IVIS in vivo imaging system (IVIS Spectrum Imaging System, Perkin Elmer). Mice were whole-body imaged in dorsal position and the signal intensity was obtained from luciferase expressed in T. vivax. 
The photon emission was captured with a charge-coupled device (CCD) camera and quantified using Living Image Software (Xenogen Corporation, Alameda, California), and data were expressed as total photon flux (photons/second). Serum collection Blood was collected from the tail vein of each animal at day 0 (pre-immune sera), day 42 (post-immune sera for VIVβ*/A and VIVβ*/M treatment groups) and day 50 (post-immune sera for the VIVβ*/Q challenge group). Sera were isolated from blood by centrifuging the samples for 10min at 3,000rpm, and the supernatant was stored at -20˚C until used for antibody titration. Spleens were aseptically removed from the VIVβ*/A and VIVβ*/M groups at day 42 and from the VIVβ*/Q groups at day 50. Spleen tissue was used for in vitro antigen stimulation in order to quantify cytokine expression. In vitro antigen stimulation and cytokine measurement Splenocytes were isolated by collecting spleens individually in tubes containing 3ml sterile PBS. Single cell suspensions were generated, and red blood cells lysed using ACK lysis buffer. Cell density was adjusted to 5x10⁶ cells/ml per spleen in complete medium and cultured in 48-well flat-bottom tissue culture plates (Starlab, UK) by seeding 200μl/well of each suspension in triplicate. Splenocytes were stimulated with 10μg/ml of each antigen diluted in complete medium for 72h at 37˚C with 70% humidity and 5% CO₂. Likewise, cells were also incubated with 10μg/ml Concanavalin A (ConA) or complete medium only as positive and negative controls, respectively. Culture supernatants were harvested after 72h and centrifuged at 2000g for 5min at RT to remove remaining cells. The supernatant was collected and used for the quantification of interferon gamma (IFNγ), tumour necrosis factor (TNFα), interleukin-10 (IL-10) and interleukin-4 (IL-4) levels by sandwich ELISA kits (ThermoFisher Scientific). 
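Sandwich-ELISA kits of this kind report concentration by interpolating sample ODs against a standard curve; a minimal linear-interpolation sketch with made-up standards (kit software typically fits a four-parameter logistic instead):

```python
def interpolate_conc(od: float, standards: list) -> float:
    """Estimate concentration from `od` by linear interpolation between
    (concentration, OD) standards; ODs outside the curve are clamped."""
    pts = sorted(standards, key=lambda p: p[1])  # sort by OD
    if od <= pts[0][1]:
        return pts[0][0]
    if od >= pts[-1][1]:
        return pts[-1][0]
    for (c0, o0), (c1, o1) in zip(pts, pts[1:]):
        if o0 <= od <= o1:
            frac = (od - o0) / (o1 - o0)
            return c0 + frac * (c1 - c0)

# Made-up standards in pg/mL; an OD of 0.75 falls midway between 0.5 and 1.0.
standards = [(0.0, 0.05), (50.0, 0.5), (100.0, 1.0)]
conc = interpolate_conc(0.75, standards)  # 75.0 pg/mL
```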
The measurement from unstimulated splenocytes (incubated with medium only) was subtracted from that of the antigen-stimulated cultures for each adjuvant treatment. IgG-specific antibody response in mice and natural infections To identify the presence of specific antibodies against the antigens in mouse sera, a titration of IgG1 and IgG2a isotypes was performed by indirect ELISA. Briefly, 96-well streptavidin-coated plates were incubated with each antigen for 1h at RT, with VIVβ11 diluted 1:250 and VIVβ14/Q, VIVβ20/Q and VIVβ8/Q diluted 1:50 in reagent diluent (PBS pH 7.4, 0.5% BSA), as described previously [43]. The plates were washed three times with PBS-Tween20 0.05%, and two-fold serial dilutions of each serum in reagent diluent were added to each well and incubated for 1h at RT. Plates were washed as before, and 100μl/well rabbit anti-mouse IgG1 or IgG2a conjugated to HRP (Sigma-Aldrich, Germany), diluted to 1:50,000 and 1:25,000 respectively, were added to the plates and incubated as before. After washing, 100μl/well of 3,3',5,5'-tetramethylbenzidine (TMB, Sigma-Aldrich, Germany) was incubated for 5 minutes at RT in the dark. The reaction was stopped by adding 50μl/well 0.5M HCl and the absorbance was read at 450nm using a Magellan Infinite F50 microplate reader (Tecan, Switzerland). The isotype profile against each antigen was also analysed in samples from naturally and experimentally infected cattle. The ELISA protocol used was the same as above, performing two-fold serial dilutions of samples from experimental and natural infections. Bound IgG1 and IgG2 antibodies were detected by adding 100μl/well sheep anti-bovine IgG1 or IgG2 HRP (Bio-Rad, USA) at 1:5000 and 1:2500 dilutions respectively. Cellular localization Cellular localization of VIVβ8 in T. vivax bloodstream-forms was determined by indirect immunofluorescence. T. 
vivax bloodstream-forms were isolated as described previously [23], adjusted to 2.5x10⁶ cells/ml in PBS+20mM glucose, transferred to poly-L-lysine slides for 10min and fixed in 4% formaldehyde for 30min at RT. A polyclonal antibody against recombinant VIVβ8 was raised in rabbits (BioServUK, Sheffield, UK). Briefly, two rabbits were vaccinated by subcutaneous injection, receiving five injections of 0.15mg VIVβ8 antigen diluted in sterile PBS and co-administered with Freund's adjuvant every two weeks (0.75mg total immunization). IgG antibodies were purified by affinity chromatography with a protein A column from antisera collected two weeks after the last boost. The final concentration of the purified rabbit antibody was 5mg/ml. Parasite cells were washed with PBS and blocked with blocking buffer (PBS+1% BSA) for 1h at RT. Either pooled anti-VIVβ8 post-immune mouse sera or purified rabbit anti-VIVβ8 IgGs were used as primary antibody (1:1,000 dilution) in blocking buffer and incubated overnight at 4˚C. After washing, cells were incubated for 1h at RT with either secondary goat anti-mouse IgG conjugated with Alexa Fluor-555 (Abcam, UK) (1:500 dilution in blocking buffer) or secondary Alexa Fluor 555-conjugated goat anti-rabbit IgG in blocking solution. Cells were incubated in 500 ng/ml DAPI (Invitrogen, USA) and/or 1:100 mCLING unspecific staining (Synaptic Systems), washed and mounted in SlowFade Diamond antifade mounting oil. Cells were imaged using a LSM-800 confocal laser scanning microscope (Zeiss). Images were processed using Zen 3.1 (Zeiss) and ImageJ [44]. 3D renders were generated from z-stacks using ImarisViewer 9.5.1 (Imaris). Electron microscopy Bloodstream-form T. vivax parasites were obtained from 8 female BALB/c infected mice (>10⁸ parasites/mL) and enriched by centrifugation in 20mM D-glucose PBS, as described previously [23]. 
Parasites were washed in 0.1M phosphate buffer and fixed in 4% formaldehyde, 0.2% glutaraldehyde in 0.1M phosphate buffer for 1 hour at RT, and kept in fixative solution at 4˚C. Fixative was washed out and cells were pelleted, embedded in 3% gelatine and infiltrated in glucose overnight at 4˚C. Embedded cells were cut into <1mm cubes and flash frozen ready for cryosectioning with a Leica UC6 ultramicrotome. Cryosections between 60-80nm were picked up using 2% methyl cellulose/2.3M sucrose at a ratio of 1:1 and deposited on formvar/carbon coated nickel grids. Before labelling, gelatine plates were melted at 37˚C for 20 minutes with grids in place. Grids were then moved over the following solutions at RT: 20mM glycine in PBS, 4 x 1 minute; 10% goat serum in PBS, 1 x 10 minutes; 0.1% BSA, 2 x 1 minute; primary rabbit anti-VIVβ8 polyclonal antibody in 0.1% BSA (1:20 dilution), 30 minutes; 0.1% BSA, 4 x 2 minutes; 10nm gold-conjugated secondary goat anti-rabbit IgG in 0.1% BSA (1:5 dilution), 30 minutes; 0.1% BSA, 5 x 2 minutes; deionized water, 6 x 1 minute. After treatment on ice with 1% aqueous uranyl acetate for 1 minute, followed by 1.8% methylcellulose and 0.3% uranyl acetate, the grids were imaged at 100kV on a FEI Tecnai G2 Spirit with a Gatan RIO16 digital camera. The proportion of VIVβ8 localised adjacent to the cell surface relative to the cytoplasm was determined by counting gold particles in parasite cells for which the entire plasma membrane was visible in section and distinguishable from neighbouring cells, and which were stained with at least five particles (N = 51). Live immunostaining Bloodstream-form T. vivax were isolated from infected blood with three rounds of centrifugation at 2,000xg for 10 minutes at 4˚C, and incubated with primary anti-VIVβ8 (purified rabbit polyclonal, 1:200 dilution) in blocking solution (1% BSA) for 30 minutes at either 4˚C or RT. 
Cells were washed in PBS 20mM glucose by centrifugation and incubated with secondary Alexa Fluor 555-conjugated goat anti-rabbit IgG in blocking solution for 30 minutes at either 4˚C or RT. After washing as described above, all cells were fixed in 4% formaldehyde for 30 minutes at RT. Cells were then incubated with 500 ng/mL DAPI DNA counterstain, mCLING unspecific staining (1:100 dilution) and/or 5 mg/mL FITC-conjugated ConA for 15 minutes at RT. After washing, cells were mounted in SlowFade Diamond mounting oil. Immunoprofiling of naturally infected livestock serum identifies consistent T. vivax-specific antigens The immuno-reactivity of serum from natural bovine T. vivax infections in Kenya and Cameroon, as well as experimental bovine infections with Brazilian T. vivax strains, was examined using a custom peptide microarray of 63 putative T. vivax-specific antigens (S1 Fig). Consistent binding of serum antibodies to peptides in the top two rows of the array was demonstrated for all locations (Fig 1A), with Kenyan, Cameroonian and Brazilian samples displaying a spike in intensity (Fig 1B). The majority of these peptides (51/60) correspond to Fam34 proteins, previously described as a family of putative transmembrane proteins highly abundant in bloodstream-stage mouse infections [25]. UK cattle that were T. vivax-seronegative and Cameroonian cattle that were seropositive for T. congolense only lacked responses to these peptides (Fig 1). Table 1 describes the spots with the strongest RRI values (i.e. the highest 10%) and shows which spots were significantly greater than the fluorescence determined by the negative, uninfected control (see Methods; a complete dataset is provided in S1 Table). Of these 59 strongly responding spots, 45 relate to peptides derived from Fam34 proteins. 
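The multiple-test correction applied to these spot-level comparisons (Benjamini and Hochberg's step-up procedure; see Methods) can be sketched as follows:

```python
def benjamini_hochberg(pvals: list, alpha: float = 0.05) -> list:
    """Benjamini-Hochberg step-up: sort the m p-values, find the largest
    rank i with p_(i) <= (i/m)*alpha, and reject all hypotheses up to it."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    cutoff = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            cutoff = rank
    rejected = [False] * m
    for i in order[:cutoff]:
        rejected[i] = True
    return rejected

# Flags are returned in the original spot order, not sorted order.
flags = benjamini_hochberg([0.2, 0.001, 0.04])
```

Because the threshold scales with rank, the step-up rule rejects more hypotheses than a plain Bonferroni cut at the same alpha while still controlling the false discovery rate.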
29/45 relate to a single family member (hereafter 'antigen-1'), eight more relate to a second protein ('antigen-2'), three relate to 'antigen-3' and two to 'antigen-4'. Peptides from antigens 1 and 2 have the highest maximum fold-change in normalized fluorescence intensity relative to the uninfected control, e.g. peptide 46 (4.53) and peptide 35 (2.38) respectively, and four peptides from 'antigen-1' have response values that exceed the significance threshold (p < 0.05) after correction for multiple tests (Table 1). Thus, we may conclude that Fam34 is principally responsible for the peak clearly visible in Fig 1B and, in particular, that 'antigen-1' and 'antigen-2' are responsible for 63% of the strongest responses. It may be that our approach has underestimated the immunogenicity of other protein families, since the peptide array lacked any post-translational modifications that are common on T. vivax surface proteins and could contribute to antibody binding. Nevertheless, given the pre-eminence of Fam34 proteins as consistent and robust antigens in natural infections, we focused our search for vaccine targets on this gene family, which we now rename vivaxin. Vivaxin is a species-specific gene family encoding type-1 transmembrane proteins that do not display antigenic variation Analysis of vivaxin amino acid sequences with BLAST returns no matches besides T. vivax itself, and more sensitive comparison of protein secondary structural similarity using HMMER also fails to detect homologs beyond T. vivax; this confirms that the family is species-specific. Comparison with the T. vivax Y486 reference genome (TritrypDB release 46) using BLASTp returns 50 gene sequences, while a further 74 homologs are detected by HMMER, which means that vivaxin is the largest T. vivax cell-surface gene family after VSG [24,45]. These gene sequences range from 1050 to 1900 bp in length when complete; 43/124 sequences are curtailed by sequence gaps in the current assembly. 
Only six sequences are predicted to contain internal stop codons, suggesting that pseudogenes are rare. We observe that almost all BLAST matches relate to sub-telomeric loci (i.e., outside of regular core polycistrons). Previously, in silico predictions based on amino acid sequences indicated that all Fam34 genes encode a type-1 transmembrane protein with a predicted signal peptide and a single hydrophobic domain 15 amino acids from the C-terminus, orientated such that the protein is largely extracellular [24]. We carried out further analysis of antigens 1-4 with PredictProtein [46] and ModPred [47], shown in S2 Fig, which confirms this topology and suggests that the extracellular portion of vivaxin is both N- and O-glycosylated at multiple sites. We estimated a Maximum Likelihood phylogeny for 81 full-length vivaxin genes from an alignment of a 221-amino acid conserved region (see Methods). Fig 2A shows that vivaxin sequences group into three robust clades, which we term the α (41 genes), β (34 genes) and γ (5 genes) subfamilies. The genes encoding antigens 1-4 are noted, and are known as viv-β11, -β14, -β20 and -β8 respectively hereafter. The subfamilies consistently differ in length due to the N-terminal (extracellular) domain of vivaxin-α proteins being ~200 amino acids longer than vivaxin-β (Fig 2B). For each gene, the proportion of its protein sequence predicted in silico to be a human B-cell epitope is shown in Fig 2C; on average, 36.1% of a vivaxin protein sequence is predicted to be immunogenic, rising to almost 60% in some cases. Per-locus polymorphism across parasite populations is shown in Fig 2D. These data show that, far from being uniformly polymorphic, the population history of vivaxin genes is extremely variable, with some loci being well conserved, indeed almost invariant, across populations. Note that viv-β11, -β14, -β20 and -β8 (encoding antigens 1-4 respectively) are all among the least polymorphic paralogs. 
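Per-locus polymorphism of the kind compared here is commonly summarized as nucleotide diversity (π), the average pairwise proportion of differing sites; a minimal sketch over a toy alignment (not the actual population data):

```python
from itertools import combinations

def nucleotide_diversity(aligned: list) -> float:
    """π: mean pairwise difference per site across aligned sequences."""
    pairs = list(combinations(aligned, 2))
    if not pairs:
        return 0.0
    sites = len(aligned[0])
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * sites)

# Three toy haplotypes of a 4 bp locus: the single segregating site differs
# in two of the three pairwise comparisons, so pi = 2 / (3 * 4).
pi = nucleotide_diversity(["AAAA", "AAAT", "AAAA"])
```

A near-invariant locus yields π close to zero, which is the pattern described for viv-β11, -β14, -β20 and -β8.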
Most loci are predicted to be under purifying selection, and for some, including viv-α15 and -α29, as well as the gene encoding antigen-4 (viv-β8), this is stringent (i.e. dN/dS ≈ 0; Fig 2E). The dN/dS ratio exceeds 1 for only five genes, and in only one case (viv-α10) does the positively-selected gene show evidence of expression. Fig 2 indicates, first, that there is consistent variation across the family in gene function, with some loci being essential while others are less so, and second, that this variation has been stable throughout the species history, and not subject to assortment or homogenisation by recombination. Vivaxin loci display conserved variation in gene expression profiles We examined RNAseq data from multiple previous experiments to consider functional variation among vivaxin genes. Fig 2F shows transcript abundance at sequential points of an experimental goat infection ([48]; first nine columns), in two separate experimental infections in mice ([25,49]; columns 10 and 11) and, finally, in epimastigote (E) and metacyclic (M) parasite stages ([25]; columns 12 and 13 respectively). Most vivaxin genes are expressed weakly in fly stages, confirming that this is predominantly a bloodstream-stage family, although there are exceptions (see viv-α36 and -α38). Some genes are expressed rarely in all situations, such as viv-α6, -α8 and -β3-7, indicating that they may be non-functional (in the case of viv-α6 and -α8, these genes do indeed have internal stop codons). Conversely, genes such as viv-α10, -α12, -α39 and -β11-12 are expressed constitutively. These genes remain abundant across sequential peaks of bloodstream infections, across life stages and, indeed, across experiments using different parasite strains and hosts. This clearly indicates that, like many multi-copy surface antigen gene families, expression levels vary markedly among vivaxin genes; but, unlike other gene families, these differences are not dynamic. There is a cohort of vivaxin loci that are routinely active and orders of magnitude more abundant than most other paralogs. Since many vivaxin genes are expressed simultaneously, this indicates that they are not variant antigens under monoallelic expression like VSG; moreover, the presence of specific genes that are seemingly constitutively expressed in different infections and parasite strains, with minimal polymorphism, shows that some vivaxin proteins are not variant antigens at all. Most interestingly, Fig 2F shows that transcripts of viv-β11, -β14, -β20 and -β8 are among the most abundant vivaxin transcripts in all conditions, perhaps explaining why they elicit some of the strongest immune responses. In particular, viv-β8 transcripts, which encode antigen-4, are perhaps the most abundant, often two orders of magnitude more abundant than those of most other loci. Taking the results together, we see that the most immunogenic Fam34 proteins are also among the most abundant vivaxin transcripts and among the most evolutionarily conserved vivaxin genes. Hence, our decision to focus on antigens 1-4 as potential subunit vaccines was based on the balance of immunoprofiling, gene expression and polymorphism data. Recombinant expression of four β-vivaxin proteins To determine whether the vivaxin family could elicit protective immune responses in the context of a subunit vaccine and a murine model of T. vivax infection, we first expressed the entire ectodomains of four vivaxin proteins using a mammalian expression system. The four recombinant soluble T. vivax proteins were purified using their C-terminal 6-histidine tags from spent tissue culture supernatants, quantified and resolved by SDS-PAGE to check their mass and integrity (S3 Fig). 
As expected, each protein preparation resolved as a mixture of different glycoforms between 50 and 55kDa, which agreed well with their predicted molecular masses. Together, these data show we were able to express and purify recombinant vivaxin proteins corresponding to the entire ectodomain using a mammalian expression system. Immunization with VIVβ8 produces a balanced antibody response Having expressed recombinant vivaxin proteins, we examined their potential for vaccination. Initially, to establish robust seroconversion, we inoculated BALB/c mice with our four recombinant vivaxin proteins in combination with multiple adjuvants and measured serum IgG1 and IgG2a antibody titres by indirect ELISA. Independently of adjuvant or antigen, antibody titre increased upon booster immunization, indicating that these antigens were immunogenic in mouse. Antibody titres showed a significant increase (p < 0.001) in both IgG1- and IgG2a-specific antibody compared with pre-immune sera for all antigens, regardless of the adjuvant used (S4 Fig). However, adjuvant choice had a significant effect on antibody titres. Quil-A produced significantly higher IgG2a titres than either Montanide or Alum when applied with all antigens. Overall, mice vaccinated with Alum and Montanide developed higher titres of IgG1 than IgG2a (ratios = 2.11 and 1.58 respectively), suggesting a Th2-biased immune response, while Quil-A came closest to producing an equal ratio of isotype titres (ratio = 1.03), which indicates a mixed Th1/Th2-type response. To compare the antibody responses to immunization with those observed in natural and experimental infections, we measured IgG1 and IgG2 titres in livestock serum seropositive for T. vivax (see above). Naturally-infected cattle from Cameroon and Kenya displayed significantly higher IgG1-specific titres than seronegative UK cattle (p < 0.05) for all four vivaxin antigens (S5 Fig). 
Experimentally infected cattle from Brazil showed a similar pattern to natural infections, with higher IgG1 than IgG2a antibody levels. Conversely, in most cases, anti-IgG2a responses for each antigen were not significantly greater than the negative control. These results indicate that, while vivaxin is strongly immunogenic in natural infections, it elicits a largely Th2-type response, similar to that produced by immunization with Montanide or Alum, but that immunization with Quil-A using any of the recombinant vivaxin antigens can produce a more balanced effect. Cytokine expression provides further evidence for the type of immune response elicited by immunization. The concentrations of four cytokines (TNF-α, IFN-γ, IL-10 and IL-4) were measured in ex vivo mouse splenocyte cultures after stimulation with each vivaxin antigen, co-administered with one of three adjuvants. All cytokines were undetectable in splenocytes cultured in media only, but after re-stimulation with an antigen, cytokine concentrations increased significantly (p < 0.0001; S6 Fig). Immunization with each antigen, regardless of adjuvant, produced high TNF-α concentrations, with no significant differences between antigens (p > 0.05). IFN-γ concentration was greater in all animals immunized with Quil-A, which produced a similar response to the positive control group stimulated with ConA. The expression of IL-10 was also dependent on the adjuvant; it was significantly greater when antigens were co-administered with Quil-A (p < 0.001 and p < 0.0001 for all cases). IL-4 displayed the lowest expression levels of all, the only appreciable difference being for VIVβ20 co-administered with Quil-A compared to all other antigens (p < 0.001 for all cases). These results further indicate that immunization with vivaxin combined with Quil-A produces hallmarks of a Th1 response, which has been observed to be necessary for controlling trypanosome infections [51][52][53].
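The Th-bias readout above is simple arithmetic on isotype titres: an IgG1/IgG2a ratio well above 1 suggests a Th2-biased response, while a ratio near 1 suggests a mixed Th1/Th2 response. A minimal sketch of that heuristic (in Python rather than the R used for the study's statistics; the classification threshold is our illustrative assumption, not a value from the paper):

```python
def th_bias(igg1: float, igg2a: float, tol: float = 0.25) -> str:
    """Classify Th bias from the IgG1/IgG2a titre ratio.

    The +/- tol band around 1.0 is an illustrative cut-off,
    not a threshold taken from the study.
    """
    ratio = igg1 / igg2a
    if ratio > 1 + tol:
        return "Th2-biased"
    if ratio < 1 - tol:
        return "Th1-biased"
    return "mixed Th1/Th2"

# Ratios reported in the text (IgG1 relative to IgG2a, so igg2a = 1.0):
for adjuvant, ratio in [("Alum", 2.11), ("Montanide", 1.58), ("Quil-A", 1.03)]:
    print(f"{adjuvant}: ratio {ratio:.2f} -> {th_bias(ratio, 1.0)}")
```

With the reported ratios, Alum and Montanide come out Th2-biased and Quil-A mixed, matching the interpretation in the text.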
Vaccination with VIVβ8 delays parasite proliferation but does not prevent infection

To evaluate the efficacy of vaccination, mouse cohorts were each vaccinated with a single vivaxin antigen co-administered with Quil-A (chosen for its ability to stimulate a protective Th1 response), and were challenged with T. vivax bloodstream-forms (Fig 3A); parasitaemia was monitored by bioluminescent assay. In all cases, bioluminescence increased over the course of infection (Fig 3B). Before 5 dpi, all vaccinated mice showed low parasitaemia levels similar to control groups, and showed no adverse effects of infection. At 6 dpi, the VIVβ8 cohort had the lowest parasitaemia with a mean of 2.45 × 10^8 p/s, while the other cohorts showed an average luminescence of 2.8 × 10^8 p/s. At 8 dpi, when luminescence was greatest, the VIVβ20 cohort showed the highest parasitaemia of all groups, significantly greater than the VIVβ11 (p = 0.008) and VIVβ8 cohorts (p = 0.002). By 9 dpi, however, all animals were sacrificed as they approached the acceptable limits of adverse welfare effects. At the end of the experiment, parasite luminescence in control and vaccinated animals was not statistically different (p > 0.05). However, this observation belies notable variation within the VIVβ8 cohort. Three of five VIVβ8-vaccinated mice showed a delayed onset of parasite proliferation (Fig 3C), and a significant reduction in parasitaemia at 8 dpi (p = 0.045). Mean bioluminescence at 8 dpi in the partially protected mice was 3.38 × 10^8 p/s, compared to 7.17 × 10^8 p/s in the two unprotected VIVβ8 mice (and 9.8 × 10^9 p/s in control mice). Antibody titres correlated positively with this partial protection. This indicates that VIVβ8 co-administered with Quil-A inhibited parasite proliferation in some cases, although without ultimately preventing infection.
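The group comparisons above follow the workflow stated in the figure legends: confirm normality per group with a Shapiro-Wilk test, then test for differences with an ANOVA. The study ran these tests in R Studio; a minimal Python/scipy analogue on made-up luminescence values (not the study's data) might look like this:

```python
from scipy import stats

# Hypothetical bioluminescence readings (photons/s); illustrative only.
cohort_a = [3.1e8, 3.4e8, 3.6e8, 3.3e8, 3.5e8]  # e.g. partially protected
cohort_b = [7.0e8, 7.2e8, 7.3e8, 6.9e8, 7.4e8]  # e.g. unprotected

# Step 1: check normality per group (Shapiro-Wilk), as in the legends;
# p > 0.05 is consistent with normality and justifies a parametric test.
for name, g in [("A", cohort_a), ("B", cohort_b)]:
    _, p_norm = stats.shapiro(g)
    print(f"cohort {name}: Shapiro-Wilk p = {p_norm:.2f}")

# Step 2: parametric comparison. With two groups, a one-way ANOVA is
# equivalent to an unpaired t-test.
f_stat, p_val = stats.f_oneway(cohort_a, cohort_b)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_val:.2e}")
```

With group means this far apart relative to their spread, the ANOVA returns a very small p-value, mirroring the significant 8 dpi comparison reported above.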
We repeated the VIVβ8+Quil-A challenge using a larger cohort (n = 15) and an increased dose of antigen (50 μg) for vaccination (S7 Fig); this produced a similar but not improved effect. Bioluminescence from vaccinated animals was significantly lower than in the control group at 6 dpi (p = 0.016). However, there was no beneficial effect by 9 dpi, with mice from vaccinated and control groups displaying means of 1.45 × 10^9 and 1.60 × 10^9 p/s, respectively. After challenge, animals vaccinated with VIVβ14 showed a non-significant reduction in both IgG isotypes, while there was a significant reduction in IgG1 titres in VIVβ11- and VIVβ20-vaccinated mice (p < 0.01). IgG2a antigen-specific antibody levels also decreased significantly after challenge against VIVβ20 (p < 0.0001) and VIVβ8 (p = 0.0004; Fig 3D). Cytokine levels also displayed pronounced changes after challenge (Fig 3E), irrespective of the antigen involved. IL-10 expression became non-detectable after 8 dpi when compared with pre-vaccination levels (p < 0.001). IL-4 concentrations also decreased significantly after challenge (p < 0.0001). TNF-α and IFN-γ average concentrations against each antigen were reduced significantly (p < 0.0001), representing reductions of 95% and 92.8%, respectively. Unstimulated cells from the adjuvant-only control group showed high cytokine levels, indicating that Quil-A alone is able to stimulate their production. Overall, all four vivaxin antigens were immunogenic, although they differed in the precise balance of immune response elicited, but none was able to protect against acute T. vivax infection in mice. Only antigen-4, encoded by viv-β8, produced a balanced Th1-Th2 immune response after immunization with Quil-A and went on to inhibit parasite proliferation in some cases. While encouraging, this effect was not observed in all animals, and the balanced immune response was diminished after challenge with the decline of IgG2a titres relative to IgG1.

Fig 3 caption (continued): Humoral response before and after challenge with T. vivax in mice vaccinated with four antigens co-administered with Quil-A (n = 8). Comparison of IgG isotypes in fully immunized mice at day 42 with challenged mice at 8 dpi. Serum concentrations determined by ELISA. (E) Cytokine production by splenocytes stimulated in vitro after removal from fully immunized mice at day 42 (left-hand bar, full colour), compared with challenged mice at 8 dpi (right-hand bar, faded). Note that reductions in cytokine concentration post-immunization and post-challenge were significant (P < 0.001) for all antigens, but labels are omitted for clarity. Data normality was confirmed with a Shapiro-Wilk test and statistical significance was assessed using a two-tailed ANOVA in R Studio. Significance is indicated by asterisks: * (P < 0.05), ** (P < 0.01), *** (P < 0.001), **** (P < 0.0001). https://doi.org/10.1371/journal.pntd.0010791.g003

PLOS NEGLECTED TROPICAL DISEASES | Vivaxin: A novel family of T. vivax cell surface proteins

Immunofluorescent and electron microscopy localizes VIVβ8 to the whole-cell surface but suggests that it is inaccessible to antibodies

As yet, the cell-surface position of vivaxin has been predicted from amino acid sequence but not proven. If vaccination does not provide protective immunity, perhaps this is because VIVβ8 is not surface-expressed after all, or not accessible to antibodies. To explore this, we localized VIVβ8 by immunostaining T. vivax bloodstream forms with anti-VIVβ8 polyclonal antibodies (Fig 4A). When bloodstream-form cells were isolated from infected mouse blood, positive stain was associated with the margins of the cell body and flagellum, indicating a specific association with the whole cell surface (Fig 4A, second row).
Some cells also showed evidence for intracellular staining, with a noticeable posterior-to-anterior gradient in signal, and a concentrated intensity between the nucleus and kinetoplast (Fig 4A, third row). These observations are not contradictory; endosomes servicing the secretory pathway are known to accumulate at the posterior end of the cell [54], making the intracellular localisation of VIVβ8 consistent with it being trafficked to the cell surface. In fact, surface localisation was confirmed by confocal 3D reconstructions of bloodstream-form cells stained with anti-VIVβ8 antibodies, in which orthogonal views show ring-shaped signal representing the cellular periphery (Fig 4B). To corroborate this result, the post-immune sera of mice and rabbits immunised with recombinant VIVβ8 (residue-residue) were used to stain formaldehyde-fixed bloodstream-stage cells (Fig 5). In both cases, post-immune serum reacted strongly with the entire cell surface and flagellum, resembling the localization found using the polyclonal antibody. Greater resolution of the cell-surface position of VIVβ8 was achieved with transmission electron microscopy of immunolabelled bloodstream-form cells. Anti-VIVβ8 binding was observed within the cytoplasm but also around the entire cellular periphery (Fig 6), including the flagellar membrane (Fig 6, right). Almost all cells (69/72) were immunolabelled, indicating that VIVβ8 (or, potentially, a closely related protein) was expressed constitutively. For most cells (37/51) that displayed >5 anti-VIVβ8 gold particles, the majority of particles were found adjacent to the cell surface (Fig 6, inset), consistent with the final location of VIVβ8 being on or beyond the plasma membrane. Thus, the lack of protection afforded by VIVβ8 vaccination is not due to an intracellular position. Yet, it is possible that fixation of cells affects the disposition of vivaxin on the cell surface.
To assess the accessibility of vivaxin epitopes in a native setting, we performed immunostaining on live cells at room temperature (RT) and at 4˚C to arrest endocytosis (Fig 7A). Unlike in fixed cells, those probed at RT were not stained. However, 4˚C-incubated cells localised anti-VIVβ8 exclusively to the flagellar pocket, confirmed by its position next to the kDNA in 3D reconstructions (Fig 7B). As in a previous study [55], we interpret this as evidence for endocytosis. At RT, antibody-bound VIVβ8 is rapidly cleared, but at 4˚C, the antibody is not removed and accumulates where VIVβ8 epitopes are exposed. Thus, VIVβ8 is likely expressed on the plasma membrane, but there is a disparity between immunostaining of formaldehyde-fixed cells, which establishes staining of the whole cell surface, and of live cells, which indicates that VIVβ8 localises to the flagellar pocket only. This can be explained by epitope availability: if VIVβ8 is distributed across the whole cell membrane but obscured from antibody binding in its native state, perhaps by other proteins in the surface glycocalyx, its full distribution would only be revealed after epitopes are exposed by formaldehyde fixation. Finally, it is worth noting that we also observed anti-VIVβ8 staining on red blood cells after formaldehyde fixation of T. vivax-infected mouse blood (S8a Fig). Although the orthogonal view (S8b Fig) might suggest that the stain is intracellular, mature red blood cells are not thought to endocytose, so we suggest that the observed pattern reflects the concave shape of the cell, with the stain localising in the centre of the biconcave cell area. This pattern affected all red cells we examined. It is unclear whether VIVβ8 is secreted actively by T. vivax to target host erythrocytes, or transferred accidentally after parasite cell lysis.
Discussion

We examined the antibody responses of naturally and experimentally infected hosts to diverse TvCSP to identify immunogens that could become the basis for a T. vivax vaccine. Immunoprofiling of host serum showed that one particular protein family (Fam34), now known as vivaxin, consistently includes the most immunogenic proteins among the set we tested. In fact, the prominence of vivaxin was implied in a previous study by Fleming et al. [56]; they identified two related proteins (TvY486_0019690/0045500) of particular immunogenicity in T. vivax infections and demonstrated their diagnostic potential. These two proteins can now be identified as vivaxin members (VIVβ26 and VIVβ27), part of a large gene family encoding transmembrane glycoproteins with a conserved primary structure but diverse expression profiles and population genetic dynamics. Thus, not all vivaxin genes are equally good candidate antigens; we focused on one gene with minimal polymorphism (viv-β8) that was among the most immunogenic and highly expressed, and confirmed its expression across the cell body and flagellar membranes. While VIVβ8 elicits a strong, balanced humoral and cellular immune response with Quil-A and significantly reduces parasite burden in some mice by delaying peak parasitaemia, animals were not protected from acute fatal disease. In fact, the experiment followed a familiar pattern, with reduced antibody titres and pro-inflammatory cytokine concentrations after challenge underscoring the impact of infection-induced immunomodulation. Reduction of IgG1 and IgG2a antibody titres before and after challenge was observed previously in vaccinated cattle challenged with both T. vivax [57] and T. congolense [58], as well as in vaccinated mice challenged with T. brucei [21].
These observations could be explained by a decline in total circulating WBCs, which is mirrored in the spleen, where normal tissue architecture is lost and parasite-driven necrosis becomes prominent [59]. This loss of white pulp organization occurs in mice where homeostasis of both bone marrow and splenic B-cells is perturbed [60]. Another possible explanation could be the formation of immune complexes of specific antibodies with parasite antigens, making it difficult to measure IgGs solely in circulation [61][62]. The reduction of antigen-stimulated cytokine levels observed after 8 dpi is not conducive to parasite control, since IFN-γ is associated with resistance to African trypanosomes [51][52][53] and TNF-α was shown to be essential to controlling T. vivax infection in mice [63]. Importantly, parasite-driven necrosis destroying host B-cells and leading to mortality, as above, occurs independently of TNF-α [64], indicating that mortality arises from parasite pathology rather than immunopathology, and suggesting that vaccine-driven, parasite-specific cytokine responses are more protective than harmful. These changes in both the IgG1/IgG2a ratio and the cytokine profiles after challenge indicate a transition from a Th1-type to a Th2-type response, which is a typical feature of uncontrolled infections in trypanosomatids [65][66][67][68][69] and is observed in chronically, naturally-infected cattle [70]. Ultimately, vaccine-driven responses may require biasing towards the Th1 spectrum. Thus, after vaccination with VIVβ8, infection took its normal course. We should recognize that the lack of protection could be due to limitations in our approach, such as the use of a murine model in which parasite virulence is atypically high, or the expression of recombinant proteins in a mammalian cell line, although a positive result has been obtained for another T. vivax antigen using the same experimental model [23].
Assuming that the lack of protection is not an artifact, it could be that vivaxin epitopes are hidden in situ or that bound vivaxin is removed via endocytosis under physiological conditions. Immunofluorescent microscopy of fixed trypomastigotes using both purified recombinant antibodies (Fig 4) and post-immune serum (Fig 5), as well as electron microscopy (Fig 6), indicates that VIVβ8 is located across the entire cell surface, and so is a uniform component of the glycocalyx alongside the VSG. However, when antibodies were applied to live parasites they failed to bind, except when the cells were cooled, and then only to the flagellar pocket (Fig 7), indicating that these vivaxin proteins are rapidly removed from the surface by endocytosis or largely concealed in some way, though not by a VSG monolayer, since structural characterization of surface receptors in T. brucei has indicated that the VSG layer does not physically conceal other surface proteins [71][72] and, in any case, predicted vivaxin proteins are on average at least as large as a typical T. vivax VSG (~450 amino acids). If vivaxin epitopes are inaccessible in vivo, we must explain the strong and consistent serological response of both naturally and experimentally infected animals to multiple vivaxin proteins. The strength of the serological response perhaps reflects the abundance and conservation of vivaxin, since we linked the most abundant and least polymorphic transcripts to the strongest immunogens. However, vivaxin may be secreted, eliciting antibodies after cleavage of the extracellular domain from the cell surface, or else the antibody response may be directed primarily at dead and lysed parasites. Such responses would not affect live, circulating parasites if the protein remained concealed on their surface.
Although VIVβ8 may not elicit protective immunity, our phylogenetic analysis reveals that two antigens that were effective in a previous study, IFX and V31 [23], belong to the vivaxin gene family. We now realize that IFX (VIVβ1) is also among the most strongly expressed and structurally conserved vivaxin proteins (although not as much as VIVβ8). V31 (VIVα18) had a partially protective effect in mice, but is much more polymorphic [23]. Curiously, viv-β1 (encoding IFX) adopts a unique position within the phylogeny, as the sister lineage to all other β-vivaxin. This topology is highly robust but nonetheless odd, because viv-β1 is much longer than other β-vivaxin and noticeably divergent (note the length of the viv-β1 branch). It is tempting to speculate from the strong conservation and divergent structure of VIVβ1 that this protein performs a distinct, non-redundant function among vivaxin proteins, a function that evidently exposes it to antibodies, unlike many of its paralogs. Note that while VIVβ8 localizes across the cell body and flagellum, VIVβ1 was restricted to regions of the flagellar membrane [23]. Thus, among the 124 (and likely more) vivaxin paralogs there is great potential for reliably immunogenic and protective antigens; yet this study reveals substantial variability in structure and antigenic properties, even among closely related gene copies, such that not all vivaxin proteins will make good antigens. Besides its potential for subunit vaccines, the discovery of vivaxin has implications for host-parasite interactions. The protein architecture of the T. vivax cell surface is not well characterised, partly because attention is more typically focused on the human pathogen T. brucei, but also because there are few research tools (e.g., in vitro cell culture, reverse genetics, a mouse infection model) developed for T. vivax [73]. Yet, recent results and historical anecdote suggest that the T.
vivax cell surface is quite different to the uniform and pervasive VSG monolayer of T. brucei. Vickerman considered the T. vivax surface coat to be less dense than that of other species [74][75]. The T. vivax genome contains hundreds of species-specific, non-VSG genes [24][25][45]. Greif et al. (2013) showed that only 57% of surface-protein-encoding transcripts during T. vivax mouse infections encoded VSG (compared to 98% of T. brucei bloodstream-stage surface-protein-encoding transcripts) and that the remainder belonged largely to T. vivax-specific genes [49]. This study shows that vivaxin must be a major contributor to this difference between the T. vivax and T. brucei surfaces. It follows that, with a different cell surface architecture, T. vivax may interact with its hosts in a different way, depending on what the function(s) of vivaxin might be. Other trypanosome surface proteins are variant antigens (e.g. VSG [76]), immunomodulators in other ways (e.g. trans-sialidases [77]), scavenge nutrients (e.g. transferrin and HpHb receptors [78][79]) or sense the host environment (e.g. adenylate cyclases [80][81]). Various molecular evolutionary aspects (i.e. strong purifying selection, low polymorphism, maintenance of gene orthology), as well as the absence of monoallelic expression, indicate that vivaxin proteins are not variant antigens. However, other functions in immunomodulation or pathogenesis are plausible. Attachment between erythrocytes and the T. vivax cell surface has been observed in sheep and is associated with mechanical and biochemical damage to red blood cells that contributes to pathology [82]. The secretion (perhaps passively through cell lysis, or actively via exosomes) of VIVβ8 and its adhesion to erythrocytes (S8 Fig) could suggest that vivaxin contributes to cytoadhesion, possibly leading to parasite sequestration in tissue capillaries as an immune evasion strategy. During T.
brucei infections, VSG and other trypanosome surface proteins are deposited on the surface of murine erythrocytes [83]; in this case, secretion is mediated by parasite exosomes, fusion of which alters the erythrocyte cell membrane, leading to erythrophagocytosis and likely contributing to anaemia [83]. Future studies should consider whether vivaxin is actively secreted into the bloodstream in a similar way. Perhaps the only aspect of vivaxin function we can predict presently is that it will be multifarious. Differences in length among subfamilies will translate into distinct tertiary protein structures, while consistent differences in expression profile suggest that some vivaxin genes are 'major forms', other paralogs appear to be non-functional, and a few may be expressed beyond the bloodstream stage. Population genetics show that vivaxin genes evolve under a range of selective conditions, from strongly negative (i.e. functionally essential and non-redundant), through neutral (i.e. redundant), to positive (engaged in antagonistic host interactions?). Coupled with the evolutionary stability of these features (that is, individual vivaxin genes are found in orthology across T. vivax strains rather than recombining or being gained and lost frequently), this is evidence for functional differentiation and non-redundancy within the gene family. Vivaxin represents a major component of the T. vivax surface coat, quite distinct from VSG, and includes proven vaccine targets and many more potential targets. The molecular evolution of vivaxin implies that the paralogous gene copies lack the dynamic variability and redundancy of variant antigens, but instead perform multiple functions, and at least some genes may be essential. The discovery of this highly immunogenic and abundant protein family has important implications for how we approach AAT caused by T. vivax because, although it may yet be found in other Salivarian trypanosomes, it is certainly not found in T. brucei.
Thus, it challenges the adequacy of T. brucei as a model for AAT, given the different qualities of their surface architectures, while posing new therapeutic opportunities and new questions about the roles vivaxin has in host interaction, immune modulation and disease.

Supporting information

S1 Fig. Peptide microarray slide design. The diagram shows the 600 spots of the microarray (scale at edge), with each cell corresponding to a 15-mer peptide, printed in duplicate, belonging to one of 63 Trypanosoma vivax proteins, or a control peptide. The cells are shaded to identify the T. vivax cell surface phylome (TvCSP) to which each non-control peptide belongs [24]. Twenty-one proteins do not belong to multi-copy families ('Single-copy'), but are still predicted to have cell surface expression. (DOCX)

S2 Fig. Predicted secondary protein structures for six vivaxin genes. The six genes include the four encoding antigens 1-4 identified in this study and expressed in recombinant form (viv-β11, viv-β14, viv-β20 and viv-β8), as well as two others encoding candidate antigens from another study (viv-β1 and viv-α18; [23]) for comparison. Protein secondary structures were inferred from amino acid sequences using PredictProtein [41]: alpha helices (red), transmembrane helix (purple), disordered region (green). The solvent accessibility of each position is also indicated: accessible (blue) and buried (yellow). N- and O-linked glycosylation sites were predicted using ModPred [42] and are indicated by red and orange arrows, respectively. The positions of linear B-cell epitopes inferred from the TvCSP peptide microarray are indicated by grey bars at the bottom of each diagram (the range of positions in the amino acid sequence is given). (DOCX)

S3 Fig. Recombinant expression of four vivaxin proteins using a mammalian expression system. A) Normalization of antigen 1 (VIVβ11) protein using two-fold serial dilutions. B) Normalization of antigens 2-4 (VIVβ14, VIVβ20 and VIVβ8).
The concentration of biotinylated proteins was determined by ELISA. C) Purified vivaxin proteins were resolved by SDS-PAGE on a 12% NuPAGE SDS/polyacrylamide gel (under reducing conditions) and stained with SYPRO Orange. M: molecular mass marker. The gel showed a prominent band with an apparent molecular mass of 50 kDa for each recombinant protein. The antigens have predicted molecular masses of 34-39 kDa based on amino acid sequence alone, i.e. before glycosylation. Based on the extinction coefficient calculation, the purified proteins had concentrations of 4.3 mg/mL (antigen 1; VIVβ11), 5.1 mg/mL (antigen 2; VIVβ14), 9.8 mg/mL (antigen 3; VIVβ20) and 2.5 mg/mL (antigen 4; VIVβ8). Note that the weaker, higher-molecular-mass bands that were also observed for all antigens are likely due to co-purifying proteins from the tissue culture supernatant. Smearing in the bands is probably due to variation in glycosylation: almost all glycoprotein preparations are a complex mixture of different glycoforms, which vary in the precise occupation of N-linked glycosylation sites as well as the actual glycan attached at each site. (DOCX)

S4 Fig. Antibody titres after immunization. Both IgG1- and IgG2a-specific antibody titres in mice immunized with four different antigens are compared with two negative controls (pre-immune sera and adjuvant-only mice). There is a consistent response for all antigens regardless of the adjuvant used. However, adjuvant choice had a significant effect on antibody titres. Montanide produced significantly higher IgG1 levels than either Alum or Quil-A when applied with VIVβ11, VIVβ14 and VIVβ20, but there was no difference in IgG1 titre between adjuvants when VIVβ8 was used. In contrast, Quil-A produced significantly higher IgG2a titres than either Montanide or Alum when applied with all antigens. Data normality was confirmed with a Shapiro-Wilk test and statistical significance was assessed using a one-tailed ANOVA in R Studio.
Significance is indicated by asterisks: * (P < 0.05), *** (P < 0.001), **** (P < 0.0001). (DOCX)

S5 Fig. Titres of IgG1 and IgG2a isotypes in infected cattle against four antigens, measured by indirect ELISA. IgG1- and IgG2a-specific antibody titres were measured using two-fold serial dilutions in naturally infected (Cameroon and Kenya) and experimentally infected cattle (Brazil). Antibody levels were also measured in a group of UK cattle, which served as negative controls. IgG1 showed higher levels than IgG2a for both natural and experimental infections with T. vivax. Each graph shows the antibody levels of individual sera, the geometric mean of each group, and the 95% confidence interval. Data normality was confirmed with a Shapiro-Wilk test and statistical significance was assessed using a one-tailed ANOVA in R Studio. Significance is indicated by asterisks: **** (P < 0.0001). (DOCX)

S6 Fig. Cytokine expression after immunization compared for different adjuvants. Concentrations of four cytokines (TNF-α, IFN-γ, IL-10 and IL-4) were measured in ex vivo mouse splenocyte cultures after stimulation with each vivaxin antigen, co-administered with one of three adjuvants. Concanavalin A was applied as a positive control. Stimulation with adjuvant only was applied as a negative control. A cross (+) denotes that a value could not be determined. Data normality was confirmed with a Shapiro-Wilk test and statistical significance was assessed using a one-tailed ANOVA in R Studio. Significance is indicated by asterisks: ** (P < 0.01), *** (P < 0.001), **** (P < 0.0001). (DOCX)

S7 Fig. VIVβ8 vaccination and challenge experiment in BALB/c mice repeated with a larger cohort (n = 15/group) and antigen dose (50 μg); protocol as described in Methods. A. Luciferase intensity from VIVβ8+Quil-A-vaccinated animals was significantly lower than the adjuvant-only control group at 6 dpi (p = 0.016), with means of 1.32 × 10^8 and 1.71 × 10^8 p/s, respectively.
On subsequent days, there were no significant differences between luminescence values of vaccinated and control groups. B. Kaplan-Meier survival curves (%) of both groups during the course of infection. C. Bioluminescence values from VIVβ8-vaccinated and control animals compared. D. Isotype IgG profiling in challenged animals culled at 8 and 9 dpi. E. Cytokine levels in challenged animals culled at 8 and 9 dpi. There were no significant changes in TNF-α, IFN-γ and IL-10 concentrations between 8 dpi and 9 dpi. There was a significant rise in IL-4 concentration, with undetectable values at 8 dpi and an average concentration of 7.83 pg/ml at 9 dpi (p = 0.028). The comparison between 8 dpi and 9 dpi also showed pronounced changes in IL-10 and IL-4 levels in the control group stimulated with ConA (p = 3.40E-04 and p = 1.78E-04 for IL-10 and IL-4, respectively). In all cases, cytokine concentrations from splenocytes stimulated with VIVβ8 were lower than in the control group stimulated with ConA, except for IL-4 levels at 9 dpi. Data normality was confirmed with a Shapiro-Wilk test and statistical significance was assessed using a one-tailed ANOVA (panel D) or paired t-test (panel E) in R Studio. Significance is indicated by asterisks: * (P < 0.05), ** (P < 0.01), *** (P < 0.001), **** (P < 0.0001). (DOCX)

S8 Fig. Cellular localization of VIVβ8 antigen on the surface of murine erythrocytes. (A) Localization of VIVβ8 and the unspecific surface counterstain mCLING in red blood cells (RBC) from T. vivax-infected mice. Representative images of RBC stained with either pre-immune or post-immune rabbit polyclonal antisera. The middle row shows the major localization pattern of VIVβ8 in RBC; the protein accumulates on the central concave surface. The bottom row shows an example of a leaking RBC. Differential interference contrast (DIC); DAPI DNA counterstain; VIVβ8 (secondary antibody AF555-conjugated) and merged channels. Scale bars: 5 μm.
(B) 3D z-stack reconstructions of mouse erythrocyte cells and corresponding orthogonal (X-Z and X-Y) views from the stacks. Orthogonal views reflect the accumulation of VIVβ8 signal at the inner concave cell membrane. Scale bars: 5 μm. (DOCX)

S1 Table. Raw response intensity (RRI) values for peptide array spots when assayed with infected serum, and results of limma analysis. The table contains all data obtained from the peptide array assay, showing the peptide sequence and parent gene for each of 600 spots on the array. Where appropriate, the gene family is noted, or the gene is marked as single-copy ('SCG') otherwise. The mean RRI value when assayed with infected serum (averaged across two duplicate spots) is followed by a statistical comparison (t-test) with the uninfected control (log2 fold-change in RRI and adjusted P-value
Elevated plasma angiotensin converting enzyme 2 activity is an independent predictor of major adverse cardiac events in patients with obstructive coronary artery disease Background Angiotensin converting enzyme 2 (ACE2) is an endogenous regulator of the renin angiotensin system. Increased circulating ACE2 predicts adverse outcomes in patients with heart failure (HF), but it is unknown if elevated plasma ACE2 activity predicts major adverse cardiovascular events (MACE) in patients with obstructive coronary artery disease (CAD). Methods We prospectively recruited patients with obstructive CAD (defined as ≥50% stenosis of the left main coronary artery and/or ≥70% stenosis in ≥1 other major epicardial vessel on invasive coronary angiography) and measured plasma ACE2 activity. Patients were followed up to determine if circulating ACE2 activity levels predicted the primary endpoint of MACE (cardiovascular mortality, HF or myocardial infarction). Results We recruited 79 patients with obstructive coronary artery disease. The median (IQR) plasma ACE2 activity was 29.3 pmol/ml/min [21.2–41.2]. Over a median follow-up of 10.5 years [9.6–10.8 years], MACE occurred in 46% of patients (36 events). On Kaplan-Meier analysis, above-median plasma ACE2 activity was associated with MACE (log-rank test, p = 0.035) and HF hospitalisation (p = 0.01). After Cox multivariable adjustment, log ACE2 activity remained an independent predictor of MACE (hazard ratio (HR) 2.4, 95% confidence interval (CI) 1.24–4.72, p = 0.009) and HF hospitalisation (HR: 4.03, 95% CI: 1.42–11.5, p = 0.009). Conclusions Plasma ACE2 activity independently increased the hazard of adverse long-term cardiovascular outcomes in patients with obstructive CAD. Introduction Cardiovascular (CV) disease is a major cause of morbidity and mortality, [1] and is associated with activation of the renin-angiotensin system (RAS).
Within the RAS, angiotensin converting enzyme (ACE) converts angiotensin (Ang) I to the vasoconstrictor and pro-atherosclerotic peptide Ang II, [2] whilst ACE2 is an endogenous inhibitor of the RAS through its major action to degrade Ang II. [3] ACE2 is highly expressed in the heart and blood vessels [4] and is cleaved from the cell surface to release the catalytically active ectodomain [5] into the circulation through the action of tumour necrosis factor alpha converting enzyme (TACE). [6] In human myocardium, ACE2 is localized to the endothelium of the microcirculation, [7] and is also present in the media of atherosclerotic radial arteries. [8] In healthy individuals, circulating ACE2 activity levels are low [9,10] but increase in the presence of CV disease or risk factors including heart failure (HF), [11] atrial fibrillation (AF), [12] kidney disease [13,14] and type 1 diabetes. [15] To date, there is limited information on the prognostic role of circulating ACE2 activity levels and the results are conflicting. For example, increased ACE2 activity predicted adverse CV outcomes in heart failure, [16] but not in patients after emergency orthopedic surgery [17] or with chronic kidney disease. [13,14] These differences may reflect the patient population studied, its relative cardiovascular risk, or the length of follow-up. The aim of this study was to investigate the utility of plasma ACE2 activity levels to predict CV events in a high-risk cohort of patients with angiographically proven obstructive CAD with more than 10 years of follow-up. Materials and methods Consecutive patients aged >18 years were prospectively recruited between November 2004 and January 2006 after referral to a tertiary cardiovascular centre for a coronary angiogram to investigate suspected CAD. Only those with significant obstructive CAD were eligible for this study.
Patients in cardiogenic shock, with a past history of congestive heart failure or with a left ventricular (LV) ejection fraction < 30% on angiography were excluded. Ethical approval was obtained from the Human Research Ethics Committee at Austin Health, Melbourne and the study complied with the Declaration of Helsinki. All patients gave informed written consent. A standardised medical questionnaire was completed and verified with the hospital medical record. Blood pressure was measured and anthropometric measurements were taken. Obstructive CAD was defined as ≥50% stenosis of the left main coronary artery and/or ≥70% stenosis in ≥1 other major epicardial coronary artery by visual assessment on invasive coronary angiography. [18] Diabetes was diagnosed based on a documented history, treatment with glucose lowering therapy or if fasting blood glucose was >7 mmol/L. Hypertension was defined if previously diagnosed by a physician and/or current use of anti-hypertensive medication. Dyslipidaemia was defined if previously diagnosed by a physician and/or current use of lipid lowering agents. Cigarette smoking was defined as smoking within the preceding 12 months. Fasting blood samples were collected at the time of admission for measurement of kidney function, lipids, and troponin. The Access AccuTnI assay (Beckman-Coulter, Chaska, MN, USA) was used to measure plasma troponin, with the 99th percentile of a healthy reference population at 0.04 μg/L. Levels of ≥0.04 μg/L (99th percentile) were considered abnormal in this study. For plasma ACE2 measurement, blood was collected within 48 hours of presentation into lithium heparin tubes, and plasma was obtained by centrifuging blood at 3000 rpm at 4 °C for 10 minutes and stored at −80 °C until tested. Plasma ACE2 activity was measured within 2 years after all samples were collected. Samples were batched and ACE2 assays were run over a period of 2 days.
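The angiographic threshold used in this study (≥50% left main stenosis and/or ≥70% stenosis in at least one other major epicardial vessel) can be expressed as a simple predicate. This is an illustrative sketch; the function and parameter names are not from the paper:

```python
def is_obstructive_cad(left_main_stenosis_pct, other_vessel_stenoses_pct):
    """Obstructive CAD per the study criterion: >=50% stenosis of the left
    main coronary artery and/or >=70% stenosis in at least one other major
    epicardial vessel. Names and signature are illustrative assumptions."""
    return (left_main_stenosis_pct >= 50
            or any(s >= 70 for s in other_vessel_stenoses_pct))
```

For example, a patient with 40% left main stenosis and 75% stenosis of one other vessel would meet the criterion, while 40% and 60% stenoses would not.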
The catalytic activity of ACE2 was measured using a validated, sensitive quenched fluorescent substrate-based assay as previously described. [9] Briefly, plasma (0.25 ml) was diluted into low-ionic-strength buffer (20 mmol/L Tris-HCl, pH 6.5) and added to 200 μl ANX Sepharose 4 Fast-Flow resin (Amersham Biosciences, GE Healthcare, Uppsala, Sweden) that removed a previously characterized endogenous inhibitor of ACE2 activity. [9] After binding and washing, the resulting eluate was assayed for ACE2 catalytic activity. Duplicate samples were incubated with the ACE2-specific quenched fluorescent substrate, with or without 100 mM ethylenediaminetetraacetic acid. The rate of substrate cleavage was determined by comparison to a standard curve of the free fluorophore, 4-amino-methoxycoumarin (MCA; Sigma, MO, USA) and expressed as pmol of substrate cleaved/mL of plasma/min. The intra-assay and inter-assay coefficients of variation were 5.6% and 11.8%, respectively. The primary endpoint was a composite of major adverse cardiac events (MACE) defined as CV death, hospitalisation for HF or myocardial infarction (MI). The secondary endpoint was HF hospitalisation. Endpoints were described according to the 2014 American College of Cardiology/American Heart Association definitions for CV endpoints in clinical trials. [19] CV death was defined as death due to sudden cardiac death, HF, acute MI, cerebrovascular accident, CV haemorrhage, CV procedures or other CV causes, that is, death not included in the previous categories but with a specific, known CV cause such as pulmonary embolism. [19] Hospitalisation for HF was defined as an event where the patient is admitted to the hospital with a primary diagnosis of HF, where the length of stay is at least 24 hours, where the patient exhibits new or worsening symptoms of heart failure on presentation, has objective evidence of new or worsening heart failure and receives intensification of treatment specifically for heart failure.
[19] Myocardial infarction was defined as the clinical diagnosis of ST elevation or non-ST elevation myocardial infarction according to established criteria. [20,21] Clinical outcomes were collected by an experienced blinded investigator via medical records review and by contacting each patient and/or the nominated general practitioner for additional information. Statistical analysis was performed using STATA, version 14.2 (Statacorp., College Station, TX, USA). Normally distributed continuous variables are expressed as mean ± standard deviation and non-normally distributed data (plasma ACE2 activity, triglycerides, troponin and glomerular filtration rate) are expressed as the median and inter-quartile range (IQR). Student's t test or the Mann-Whitney U test (for non-normally distributed data) was used to assess differences in continuous variables between patients with above- and below-median ACE2 activity. Categorical variables are expressed as counts and percentages and compared using Fisher's exact or chi-square tests. Multiple regression analysis was used to identify variables that may independently influence plasma ACE2 activity. Plasma ACE2 activity, troponin levels and glomerular filtration rate were natural-logarithm transformed for analysis because of their skewed distribution. This rendered a more normal distribution by visual inspection of the distribution of the variables and Q-Q plots. Cumulative incidence of MACE was estimated by the Kaplan-Meier method and the log-rank test was used to evaluate differences between patients with below- and above-median plasma ACE2 activity. When multiple end-points occurred during follow-up, the time to the first event was considered for analysis of MACE. Cox proportional hazard modelling was used to estimate the adjusted hazard ratio (HR) and 95% confidence interval (CI) for MACE. Significant variables (p < 0.1) from univariate analysis were entered into the final multivariate model to identify independent predictors of MACE.
Conventional prognostic variables were used including age, history of diabetes, log troponin and treatment with statin, beta-blocker, ACE inhibitor or angiotensin receptor blocker, in addition to log ACE2. Two-tailed p-values < 0.05 were considered significant. Results We recruited 79 patients with angiographically proven obstructive CAD. No patient was lost to follow-up and the median follow-up was 10.6 years (IQR 9.6-10.9 years). The clinical and biochemical characteristics of the study population are presented in Table 1. The cohort comprised 65% males with a mean ± SD age of 66 ± 12 years and BMI of 27.4 ± 4.4 kg/m2. Patients were at significant CV risk, with 69% having a smoking history, and a history of CAD in 66%, dyslipidaemia in 60%, hypertension in 82%, diabetes in 24% and AF in 11%. With regard to pharmacological therapy at the time of presentation, 59% were on angiotensin converting enzyme inhibitors (ACEi) or angiotensin receptor blockers (ARB), 58% on beta-blockers, 72% on statins and 100% on aspirin. Patients were categorized according to plasma ACE2 activity above or below the median ACE2 level. Patients with above-median plasma ACE2 activity were more likely to be male and have AF (Table 1, both p < 0.05). Multiple regression analysis was performed to identify variables that influence plasma ACE2 activity. Male gender was the only independent predictor of higher ACE2 activity (p = 0.022). The prevalence of CAD and cardiac risk factors including dyslipidaemia, hypertension, diabetes and cigarette smoking were similar in the two groups, as were LVEF <50%, the use of pharmacological agents, low density lipoprotein cholesterol, triglyceride levels, kidney function and troponin level (all p > 0.05). Over the follow-up period, there were 18 deaths, 19 myocardial infarcts and 16 hospitalisations with HF. The primary endpoint of MACE, a composite of CV mortality, HF hospitalisation or MI, occurred in 36 patients (46%).
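The Kaplan-Meier product-limit method used for the incidence comparisons in this study can be sketched in a few lines of pure Python. This is a didactic sketch, not the STATA implementation the authors used:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator: at each distinct event time t,
    the survival estimate is multiplied by (1 - d/n), where d = events at t
    and n = subjects still at risk. events[i] is 1 for an event (e.g. MACE),
    0 for censoring. Returns (time, survival) pairs at event times."""
    data = sorted(zip(times, events))
    n_total = len(data)
    survival, curve, i = 1.0, [], 0
    while i < n_total:
        t = data[i][0]
        ties = [e for tt, e in data if tt == t]   # all subjects observed at t
        d = sum(ties)                             # events among them
        at_risk = n_total - i                     # subjects with time >= t
        if d:
            survival *= 1.0 - d / at_risk
            curve.append((t, survival))
        i += len(ties)
    return curve
```

For instance, with follow-up times [1, 2, 3, 4] years and event indicators [1, 0, 1, 1], the estimate drops to 0.75 at t = 1, 0.375 at t = 3 (the censored subject at t = 2 only shrinks the risk set), and 0 at t = 4.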
Above-median levels of ACE2 (>29.3 pmol/ml/min) were significantly associated with a higher incidence of MACE (log-rank test, p = 0.035; Fig 1A) and HF hospitalisation (p = 0.01; Fig 1B) compared with below-median ACE2. There was no significant difference in the incidence of CV death (p = 0.195) or MI (p = 0.35). In a subgroup analysis including male patients only, there was no significant difference in the incidence of MACE according to median levels of ACE2 (p = 0.124). Survival analysis using the Cox regression model indicated that age, history of atrial fibrillation, history of diabetes and log ACE2 were univariate predictors of the primary endpoint of MACE. On multivariable Cox regression analysis, log ACE2 activity remained the only significant predictor of MACE (HR: 2.4; 95% CI: 1.24 to 4.72; p = 0.009) (Table 2). Discussion The major finding of the current study was that plasma ACE2 activity independently increased the hazard for adverse cardiovascular events in patients with significant obstructive CAD. In this study of high-risk patients followed for a median of 10.6 years, elevated ACE2 activity remained an independent predictor of CV mortality and morbidity even after comprehensive multivariable adjustment in a model that included prognostically meaningful variables. The median ACE2 level in patients with CAD was 29 pmol/ml/min, which is higher than the levels we previously reported in young healthy volunteers (4.44 ± 0.56 pmol/ml/min) [9] and in elderly patients (median 19.4 pmol/ml/min). [17] We excluded patients with known HF or severe LV systolic dysfunction, as both are associated with increased circulating ACE2 levels. [11,16] Consistent with the results of other studies, [13-15,22] plasma ACE2 activity was higher in male patients, but we found no other independent predictors of plasma ACE2 activity levels.
There are conflicting findings regarding the prognostic value of circulating ACE2 levels, likely reflecting the differences in follow-up period and risk of CV events across the study populations. In a cohort of patients with HF (n = 113), 23% had an adverse CV event (death, cardiac transplant, HF hospitalisation) over a 34-month follow-up, and circulating ACE2 levels remained an independent predictor after adjustment for reduced ejection fraction and increased N-terminal pro-brain natriuretic peptide. [16] In another cohort of patients with chronic kidney disease (CKD) without prior CV disease, circulating ACE2 activity was not an independent predictor of CV mortality or events over a follow-up period of 24 months. [13,14] In concordance, our group reported no significant associations between elevated circulating ACE2 activity and adverse CV outcomes in patients with CKD stage III/IV, haemodialysis patients or kidney transplant recipients. [13] We also found that in elderly patients undergoing emergency orthopaedic surgery, elevated ACE2 levels did not predict CV events after 12 months of follow-up (p = 0.051). [17] We ascribe the significant association between increased plasma ACE2 activity and adverse CV outcomes seen in the present study to the higher rate of CV outcomes observed in the study cohort and the longer follow-up duration. Several lines of evidence suggest that plasma ACE2 activity may serve as a marker of atherosclerosis. In non-dialysis patients with CKD, circulating ACE2 activity was associated with silent atherosclerosis in carotid and peripheral vessels. [14] In patients with type 1 diabetes and a history of CAD, circulating ACE2 activity was increased. [15] The same pattern was observed in kidney transplant recipients with a history of CAD, [23] further supporting the association between raised circulating ACE2 activity and coronary atherosclerosis. In another study of patients with angiographically confirmed CAD, Ortiz-Perez et al.
demonstrated elevated levels of circulating ACE2 at baseline (24-48 h) in patients presenting with ST-elevation myocardial infarction compared to a control group of patients without known CAD. [24] It is therefore not clear from the Ortiz-Perez et al. study whether the increase in circulating ACE2 reflects acute cardiac injury or underlying atherosclerosis. Our study extends knowledge in this regard, as we included only patients with angiographically proven obstructive coronary artery disease, both with and without an acute presentation. As there was no difference in ACE2 according to presentation, our results suggest that the increase in plasma ACE2 reflects underlying atherosclerosis rather than acute myocardial injury. The importance of the RAS in the pathogenesis of atherosclerosis is well established, and indeed targeted pharmacological inhibition of the classic RAS improves outcomes in atherosclerotic disease including CAD. [25] In experimental models of atherosclerosis, we and others reported that ACE2 is expressed in vascular endothelial cells, macrophages and smooth muscle cells within atherosclerotic plaques. [26,27] We also reported that ACE2 was present in atherosclerotic blood vessels in patients with CAD undergoing coronary artery bypass surgery. [8] Experimental studies have shown that ACE2 overexpression promotes atherosclerotic plaque stability and attenuates atherosclerotic lesions. [28,29] Activation of TACE results in increased ACE2 shedding from tissue into the circulation. [6] Shedding, and hence loss of ACE2 from the tissue, is mediated by angiotensin II and results in the pro-inflammatory effects of angiotensin II being unopposed. [6] Certainly, in a rabbit model of atherosclerosis, gene silencing of TACE enhanced plaque stability and improved vascular remodelling, [30] possibly via reduced tissue ACE2 shedding.
These findings reinforce the important counter-regulatory role of ACE2 in atherosclerosis and suggest that modulation of ACE2 could offer a future therapeutic option in patients with atherosclerotic disease. The relationship between tissue and circulating levels of ACE2 is not yet understood. It has been postulated that plasma ACE2 levels may parallel tissue ACE2 expression, with a constant rate of shedding in normal physiology, [16] although no studies have concurrently measured both tissue and circulating ACE2 and TACE levels to address this hypothesis. Our findings raise the possibility that in human atherosclerosis, the increased plasma ACE2 activity in those with adverse cardiovascular outcomes reflects a persistent, albeit insufficient, counter-regulatory process to shift the balance away from the deleterious effects of sustained Ang II activation. Genetic variation in and around the gene encoding ACE2 may account for differences in ACE2 expression or activity. Indeed, the location of the ACE2 gene within the X chromosome, in an area where genes are known to escape X-inactivation, may contribute to phenotypic differences between sexes and tissue-specific differences in X-inactivation. [31] Furthermore, the rs1978124 polymorphism in the ACE2 gene has been associated with poorer outcomes in two separate CAD cohorts of Chinese Han [32] and European [33] ancestry, but there are not yet studies that combine genetic approaches with measurement of plasma ACE2 activity. [34] The study has a number of limitations, including the relatively small sample size and the use of a conventional troponin assay, as a high-sensitivity assay was not available at the time of patient recruitment. Furthermore, the finding of elevated plasma ACE2 activity and its association with adverse outcomes only suggests a possible relationship and does not determine cause or effect. However, major strengths include the detailed angiographic assessment and the long-term follow-up.
In conclusion, our study demonstrates that elevated plasma ACE2 activity is an independent predictor of MACE in patients with obstructive CAD. This study has identified ACE2 as a potential surrogate marker of CV outcomes, and possibly a target for therapeutic intervention. Whether targeting patients with increased plasma ACE2 levels for more intensive therapy would lead to improved outcomes has yet to be tested.
Economic Evaluation of Diagnosis Tuberculosis in Hospital Setting Tuberculosis (TB) is an ancient disease, but not a disease of the past. After disappearing from the world public health agenda in the 1960s and 1970s, TB returned in the early 1990s for several reasons, including the emergence of the HIV/AIDS pandemic and the increase in drug resistance. More than 100 years after the discovery of the tubercle bacillus by Robert Koch, what is the status of TB control worldwide? We review the evolution of global TB control policies, including DOTS (Directly Observed Therapy, Short course) and the Stop TB Strategy, and assess whether the challenges and obstacles faced by the public health community worldwide in developing and implementing this strategy can aid future action towards the elimination of TB (Lienhardt, Glaziou et al. 2012). The report of the Commission on Macroeconomics and Health of the World Health Organization has emphasized that tuberculosis is the most common of the infectious diseases. Tuberculosis is one of the most important health problems in the world, causing 1.4 million deaths in 2011 (WHO, 2010). Introduction Tuberculosis (TB) is an ancient disease, but not a disease of the past. After disappearing from the world public health agenda in the 1960s and 1970s, TB returned in the early 1990s for several reasons, including the emergence of the HIV/AIDS pandemic and the increase in drug resistance. More than 100 years after the discovery of the tubercle bacillus by Robert Koch, what is the status of TB control worldwide? We review the evolution of global TB control policies, including DOTS (Directly Observed Therapy, Short course) and the Stop TB Strategy, and assess whether the challenges and obstacles faced by the public health community worldwide in developing and implementing this strategy can aid future action towards the elimination of TB (Lienhardt, Glaziou et al.
2012). The report of the Commission on Macroeconomics and Health of the World Health Organization has emphasized that tuberculosis is the most common of the infectious diseases. Tuberculosis is one of the most important health problems in the world, causing 1.4 million deaths in 2011 (WHO, 2010). Most TB cases (82%) are concentrated in 22 countries around the world. In 2010, 81,946 cases were detected in Brazil, with 5,000 deaths (WHO, 2010). In Rio Grande do Sul, a state in the extreme south of Brazil, the incidence of TB in 2011 was 46.1 per 100,000, with 4,947 new cases. Porto Alegre, the capital of Rio Grande do Sul, showed an incidence of 116 in 2009 (Sul 2011; Brazil 2012). Tuberculosis is the first cause of death in patients with AIDS in Brazil. Patients with HIV/TB co-infection have a higher probability of a worse outcome during treatment of tuberculosis. Rio Grande do Sul has the highest incidence of TB/HIV co-infection. The co-infection adversely affects the lives of individuals in both biological and psychosocial aspects (Neves, Canini et al. 2012). Some factors can be considered risk factors for co-infection of TB and HIV, such as the impoverishment of the population, use of injecting drugs, the disruption of TB control services, the delay in the diagnosis of TB and the increased risk of acquiring multi-drug resistant TB (MDR-TB), essentially associated with the expansion of the disease in the world. Given the above, in recent years it became consensus that the epidemic of TB in developing countries demands the evaluation of broader approaches, described in the STOP-TB/WHO Global Plan to control TB 2006-2015.
Among them, priority has been given to the implementation of: a) improvements in users' access to the diagnostic system; b) culture for mycobacteria in every HIV-positive patient suspected of TB and in all TB patients in retreatment; c) sensitivity testing for suspected cases of resistant TB (retreatment cases, treatment failure, contacts of MDR-TB, or patients treated at a Health Unit with a high rate of MDR/XDR-TB); d) review and economic evaluation, under routine conditions, of the deployment of new technologies (phenotypic or molecular, automated or not) for the early diagnosis of TB in resistant TB, patients with paucibacillary TB, the HIV-infected or those with suspected drug-resistant TB. Early detection of tuberculosis (TB) is essential for infection control. Rapid clinical diagnosis is more challenging in patients who have co-morbidities, such as Human Immunodeficiency Virus (HIV) infection. Direct microscopy has low sensitivity and culture takes 3 to 6 weeks (Sharma, Mohan et al. 2005; WHO 2006). The appropriate and affordable use of any of these tests depends on the setting in which they are employed (Perkins 2000; Brodie and Schluger 2005). New tools for TB diagnosis, treatment and control are necessary, especially in health settings with a high prevalence of HIV/TB co-infection.
Although TB is one of the greatest causes of mortality worldwide, its economic effects are not well known, especially in Brazil. Despite the fact that families do not have to pay for medications and treatment, given that this service is offered by the State, the costs to families related to loss of income due to the disease are very high. The proportion of public service funds utilized for prevention is small. Greater investment in prevention campaigns not only might diminish the number of cases but also might lead to earlier diagnosis, thus reducing the costs associated with hospitalization. The lack of an integrated cost accounting system makes it impossible to visualize costs across the various sectors (Costa, Santos et al. 2005). To make rational decisions about the implementation of new tools in the medical routine, cost-effectiveness studies are essential (Mitarai, Kurashima et al.). A key step in cost-effectiveness analysis is to identify and value cost. The economic concept of opportunity cost is central to cost-effectiveness analysis. When a public health agency spends money to provide health care, this money is not available for housing, education, highway construction, or as a reduction in income taxes. When a health care organization spends money for bone transplantation, this money is not available, for example, for mammography outreach. When an elderly man spends time being vaccinated for influenza, this time is not available to play golf or to work. An overall conceptual goal in cost-effectiveness analysis is comprehensive identification of all costs of the intervention and its alternative, including all of the opportunity costs. Contributors to cost must be identified before the costs can be valued. The terms used to describe the contributors to cost (e.g., direct costs, indirect costs, opportunity costs) are used in different ways in different textbooks and in published cost-effectiveness analyses.
In terms of definitions: the opportunity cost is the value of resources in an alternative use; the direct cost is the value of all goods, services, and other resources consumed in the provision of an intervention or in dealing with the side effects or other current and future consequences linked to it; and productivity costs are the costs associated with lost or impaired ability to work or engage in leisure activities and lost economic productivity due to death attributable to the disease. The term 'productivity costs' has been substituted for 'indirect costs'. There are several categories of direct costs. The first category of total direct cost is direct health care cost; this category includes costs of tests, drugs, supplies, personnel, equipment, rent, depreciation, utilities, maintenance and support services. The second category of total direct cost is direct non-health care cost; these costs include, for example, the costs to patients of partaking of the intervention (e.g., transportation, child care, parking). The third category of total direct cost is the cost of informal caregiver time; this is the monetary value of the time of family members or volunteers who provide home care. The fourth category of total direct cost is the cost of the use of patient time. Such studies provide insight into the composition of different cost components, which may be the most important factor from the patient's and the health service's perspectives. Recent studies have compared the cost-effectiveness of new tools for diagnosis, treatment and control in tuberculosis. Mathematical models may be particularly useful for predicting the long-term tendency of occurrence of the infection or disease. These models can simulate epidemiological situations and preventive or curative interventions, beyond their theoretical impact in reducing the problem. Such predictive models, properly formulated and fed with consistent data, may assist the processes of planning and management in public health. Currently
several strategies have allowed the use of Multiple Logistic Regression (MLR) in the construction of predictive models. Decision tree models are also used for classification and decision making, or to provide a decision algorithm for the clinical management of infectious diseases (Aguiar, Almeida et al. 2012). For developing countries, the emergence of continuous technological innovation represents a double burden. The rapid diffusion of scientific and technical information observed today, together with the commercial activity of multinational companies, creates a local demand for innovation among health professionals, the media and the more informed portions of the population, which further strains the health care system. Many factors limit the realization of a health technology assessment (HTA) analysis, such as the lack of human resources, infrastructure or budget, or a lack of evidence or cost information. Another obvious problem is that decisions are often based on scientific evidence coming from developed countries, often from settings where the incidence of disease differs markedly from the Brazilian and Latin American scenario.
Given this scenario, health managers are often caught between two objectives: they have to incorporate new and more costly technologies to improve the health of the population, and at the same time they are responsible for the financial sustainability and equity of access of the health system (Project 2005). Beyond the suffering caused directly by the disease, TB requires significant portions of the public budget in developing countries. It is estimated that by 2015 investments of around $12 billion will be required for the control of diseases such as AIDS, TB and Malaria. The increased costs involved in the care and control of TB are due also to the increasing number of cases of bacteria resistant to different types of chemotherapy. Costs of TB diagnosis and treatment may represent a significant burden for the poor and for the health system in resource-poor countries. Costs incurred by TB patients are high in Rio de Janeiro, especially for those under DOT. The DOT strategy doubles patients' costs and increases by fourfold the health system costs per completed treatment. The additional costs of DOT may be one of the contributing factors to completion rates below the targeted 85% recommended by WHO (Steffen, Menzies et al. 2010). Even in a country with a good health insurance system that covers medication and consultation costs, patients do have substantial extra expenditures. Furthermore, patients lost on average 2.7 months of productive days. TB patients are economically vulnerable (Kik, Olthof et al. 2009). In Brazil, the real costs of TB are estimated or poorly known, and the overall costs of TB are not perceived by governments, given the fragmentation of involvement across the three governmental levels: local, state and national. The purpose of this chapter is to describe the direct and indirect costs of diagnosis and treatment of pulmonary tuberculosis in patients infected or not by HIV, admitted to a hospital unit of public health.
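The cost taxonomy described earlier in this chapter (four categories of direct cost plus productivity costs) can be captured as a simple aggregation. This is an illustrative sketch; the field names are ours, not from the text:

```python
from dataclasses import dataclass

@dataclass
class InterventionCost:
    """Cost components named in the text; field names are illustrative."""
    direct_health: float        # tests, drugs, supplies, personnel, equipment...
    direct_non_health: float    # patient transport, child care, parking...
    informal_caregiver: float   # monetary value of family/volunteer home care
    patient_time: float         # value of the patient's own time
    productivity: float         # lost work/leisure, death-attributable losses

    def total_direct(self):
        """Sum of the four direct-cost categories."""
        return (self.direct_health + self.direct_non_health
                + self.informal_caregiver + self.patient_time)

    def total(self):
        """Comprehensive cost: direct plus productivity costs."""
        return self.total_direct() + self.productivity
```

Separating the categories this way makes it easy to report the composition of costs from the patient's and the health service's perspectives, as the text recommends.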
Costs of the health system of Brazil

To describe the costs of the Brazilian health system, we evaluated the direct costs of the diagnosis and treatment involved in screening 1,000 hypothetical patients with suspected pulmonary tuberculosis, in accordance with Brazilian clinical and laboratory recommendations for treatment (Tuberculose 2004; Conde, Melo et al. 2009). The cost components for each clinical and laboratory procedure in the screening included costs incurred by the patient, laboratory costs, drugs, consumables, and equipment costs. The screening strategy was the one recommended by the Brazilian Public Health System. Clinical, radiological, and laboratory staff costs were calculated from the 2011 salary base of Rio Grande do Sul (a state in the extreme south of Brazil). For each procedure, costs were attributed based on the procedure costs of the Brazilian Public Health System. Running costs (material costs for each 1,000 tests evaluated) included all laboratory materials used in the procedures. All costs were expressed in US$, using an exchange rate of US$1 = R$1.72 (reais), the average exchange rate from 2010 to 2011. Treatment costs were evaluated for both inpatients and outpatients. To estimate the amounts spent by the Brazilian public health system on the monitoring and control of TB in a hospital and in an outpatient unit, we simulated two different scenarios: (a) TB cases diagnosed in hospital wards (hospitalized patients); (b) TB cases diagnosed in an outpatient environment (outpatients). The number of days used to calculate the costs of inpatient treatment was taken to be the same as the number of days spent on the laboratory procedure. It has been hypothesized that the time to detect Mycobacterium tuberculosis in sputum culture from patients with pulmonary tuberculosis may be a better indicator of the duration of hospitalization (Ritchie, Harrison et al. 2007). The time to detect M.
tuberculosis in culture was 30 days in this study; this cohort is the same as the one previously published by our group [20]. This value was used as the standard at which release from isolation could be permitted (Scherer, Sperhacke et al. 2007). The time spent on the laboratory procedure until the result of the laboratory technique became available was assumed to be 30 days for AFB smear plus culture, and the number of days used to calculate costs was the same as that spent on the laboratory procedure. The number of days used to calculate patient travel costs was assumed to be 2 days for AFB smear plus culture. Total treatment included clinical officer and hospital costs, assuming a cost of US$0.22 per pill, 3 pills per day, for 180 days; hospital room costs of US$7/day; clinical staff salary and clinical consultation costs of US$2.52 per patient; and a clinical nursing consultation of US$2.52 per patient. Assuming that, during outpatient treatment (6 months), 6 AFB smear tests, 6 chest radiographs, 6 nursing consultations, and 2 clinical consultations were performed, we used these parameters to estimate the costs of ambulatory care following the Brazilian recommendations for treatment (Tuberculose 2004). Assuming that, during hospitalization (30 days), 4 AFB smear tests, 4 chest radiographs, and 30 nursing and physician consultations were performed, we used these parameters to estimate the costs of inpatient care in hospital, following the Brazilian recommendations for treatment (Tuberculose 2004; Conde, Melo et al. 2009). Staff salaries for the physician, nurse, and radiologist were taken to be US$11,163 per year, and for the chest radiograph technician US$4,988 per year. Twenty working days per month were assumed for all staff.
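The treatment-cost parameters above can be combined into a simple per-patient estimate. The sketch below is illustrative only and is not from the chapter: the function and variable names are ours, test and radiograph unit prices are not given in this excerpt (so they default to zero), and we assume "30 nursing and physician consultations" means 30 of each. Amounts are kept in integer US cents to avoid floating-point rounding.

```python
# Illustrative per-patient treatment cost estimate using unit costs
# stated in the text. All amounts are in US cents; names are hypothetical.

PILL_CENTS = 22            # US$0.22 per pill
PILLS_PER_DAY = 3
CONSULT_CENTS = 252        # US$2.52 per clinical or nursing consultation
ROOM_CENTS_PER_DAY = 700   # US$7 per hospital-room day

def outpatient_cost_cents(days=180, afb_smears=6, xrays=6,
                          nurse_visits=6, physician_visits=2,
                          smear_cents=0, xray_cents=0):
    """Six-month ambulatory regimen: drugs plus follow-up visits.

    Test and radiograph unit costs are not given in this excerpt,
    so they default to zero and can be supplied by the caller.
    """
    drugs = PILL_CENTS * PILLS_PER_DAY * days
    visits = CONSULT_CENTS * (nurse_visits + physician_visits)
    tests = smear_cents * afb_smears + xray_cents * xrays
    return drugs + visits + tests

def inpatient_cost_cents(days=30, nurse_visits=30, physician_visits=30):
    """30-day hospitalization: room, drugs, and daily consultations."""
    room = ROOM_CENTS_PER_DAY * days
    drugs = PILL_CENTS * PILLS_PER_DAY * days
    visits = CONSULT_CENTS * (nurse_visits + physician_visits)
    return room + drugs + visits

if __name__ == "__main__":
    print(f"outpatient drugs+visits: US${outpatient_cost_cents() / 100:.2f}")
    print(f"inpatient 30 days:       US${inpatient_cost_cents() / 100:.2f}")
```

Such a component breakdown makes it easy to see which parameter (room-days versus drug price, for instance) dominates the inpatient/outpatient cost difference discussed below.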
The number of days of hospital admission was taken to be the same as the number of days spent on each laboratory procedure. All estimated costs reflect an estimate of the Brazilian public health system's expenses for the monitoring and control of TB. Costs were expressed per 1,000 suspects, in line with the specific bibliographic references for economic analyses, thus allowing the best investment decision to be made (Petitti 2000). Table 1A shows the costs at the health service level and Table 1B shows the costs due to laboratory investment. AFB smear plus culture requires US$39,535 for equipment. Table 1C shows the costs incurred by patients. (a) Microscope and laminar flow cabinets; other equipment was not included. (b) Patients' income loss was calculated from the monthly salary base of Brazil (US$207) and was based on the proportional number of days spent by patients until the result of each laboratory procedure became available. Patient costs were estimated assuming an average of two visits to the laboratory for AFB smear and culture procedures for outpatients; travel cost was taken as US$1.40 (one bus ticket) and food as US$10 per meal. The base salary in Brazil was taken as US$10 per day (20 working days). For inpatients, only income loss was considered. Staff costs in the laboratory were based on the proportional number of days spent on each laboratory procedure. Costs of consumables and equipment were provided by the program as well as by the manufacturer. The capital cost of the equipment was annualized over 5 years, according to the literature [25]. Building costs were not included; opportunity costs were not applicable. The health service cost analysis was based on processing 50 AFB smear slides and 14 cultures per day. AFB smear plus culture was performed by two trained staff.
Staff running costs were calculated from the investment required to examine 1,000 smears. Decisions related to the incorporation, acquisition, reimbursement, or coverage of new technologies, and those that determine the way in which they should be used, are among the most important in the health system, and should be taken both at the general level and in the management of health services in particular (Greenberg, Peterburg et al. 2005). The health systems of different countries are diverse with respect to decisions about incorporating technologies and the expectations of service users. Tough choices are faced by managers at all levels of the health system. This reality means that, with TB, it becomes more difficult every year for the system to provide the user with the theoretically most effective intervention available, given the pressures placed on the health system by rising costs, the training of human resources, the need to update certification and regulatory instruments, and investment in physical infrastructure (Newhouse 1992). Attempts to improve the acceptability of resource allocation decisions around new health technologies have spanned many years, fields, and disciplines. Various theories of decision making have been tested and methods piloted but, despite their availability, evidence of sustained uptake is limited. Since the challenge of determining which of many technologies to fund is one that healthcare systems have faced since their inception, an analysis of actual processes, criticisms confronted, and approaches used to manage them may serve to guide the development of an 'evidence-informed' decision-making framework for improving the acceptability of decisions (Stafinski, Menon et al. 2011; Kritski, Lapa e Silva et al. 1998). Multidrug-resistant tuberculosis (MDR-TB) is a major clinical challenge, particularly in patients with human immunodeficiency virus (HIV) co-infection (Nathanson, Nunn et al.; Farley, Ram et al. 2011; Arjomandzadegan, Titov et al. 2012; Jain, Dixit et al.
2012; Udwadia 2012). Diagnostic testing for tuberculosis has remained unchanged for nearly a century, but newer technologies hold promise for a revolution in tuberculosis diagnostics. Tests such as commercial and in-house nucleic acid amplification assays allow a more rapid and accurate diagnosis of pulmonary and extrapulmonary tuberculosis (Rodrigues Vde, Queiroz Mello et al. 2002; Sanchez, Rossetti et al. 2006; Scherer, Sperhacke et al. 2007; Scherer, Sperhacke et al. 2011; Hida, Hisada et al. 2012).

Table 1. Estimate of costs (in US$) of tuberculosis diagnosis in Brazil (Tuberculose 2004). Salaries considered: laboratory technician, US$2,860 per year; laboratory technologist, US$6,400 per year. Staff costs in the laboratory were based on the proportional number of days spent on each laboratory procedure. Staff salaries were considered for the clinical physician, nurse, and radiologist at US$6,400 per year, and for the X-ray technician at US$2,860 per year. (c) The number of days of hospital admission was taken to be the same as the number of days spent on each laboratory procedure. The time spent on each laboratory procedure until the result of the laboratory technique became available was assumed to be 30 days for AFB smear plus culture. Total treatment included clinical officer and hospital costs, assuming US$0.22 per pill, 3 pills per day, for 180 days; hospital room costs of US$4.16/day; clinical staff salary costs; a clinical consultation cost of US$2.52 per patient; and a clinical nursing consultation of US$2.52 per patient. It was assumed that during inpatient treatment (4 months) 4 ZN (AFB) smears and 4 chest radiographs were performed, and that during the treatment of non-hospitalized patients (6 months) 6 AFB smears and 6 chest radiographs were performed, following the Brazilian recommendations for treatment (Tuberculose 2004). (d) Travel was considered as 2 days for the AFB smear plus culture strategy; food and income loss for the AFB smear plus culture strategy were considered for 30 days. Table 2
(Taylor, Drummond et al. 2004) shows costs per 1,000 suspects. The total screening cost for AFB smear plus culture was US$9,668.815.
Tuberculosis - Current Issues in Diagnosis and Management
The total cost (in US$) of the treatment of non-hospitalized patients under the AFB smear plus culture strategy was US$2,771. The cost of the treatment of hospitalized patients under the AFB smear plus culture strategy was US$4,686, and the combined cost for non-hospitalized and hospitalized patients was US$7,456. However, in a context of advanced technologies for the diagnosis of tuberculosis, limited economic resources have always constrained the incorporation and diffusion of new technologies produced and validated by academia. This is a challenge for health systems worldwide and, in many cases, the cause of serious sustainability problems (Taylor, Drummond et al. 2004; King, Griffin et al. 2006; Mason, Weatherly et al. 2007; Hughes, Tilson et al. 2009; Weatherly, Drummond et al. 2009; Shi, Hodges et al. 2010).
ImmuneDB, a Novel Tool for the Analysis, Storage, and Dissemination of Immune Repertoire Sequencing Data

ImmuneDB is a system for storing and analyzing high-throughput immune receptor sequencing data. Unlike most existing tools, which utilize flat files, ImmuneDB stores data in a well-structured MySQL database, enabling efficient data queries. It can take raw sequencing data as input and annotate receptor gene usage, infer clonotypes, aggregate results, and run common downstream analyses such as calculating selection pressure and constructing clonal lineages. Alternatively, pre-annotated data can be imported, and analyzed data can be exported in a variety of common Adaptive Immune Receptor Repertoire (AIRR) file formats. To validate ImmuneDB, we compare its results to those of another pipeline, MiXCR. We show that the biological conclusions drawn would be similar with either tool, while ImmuneDB provides the additional benefits of integrating other common tools and storing data in a database. ImmuneDB is freely available on GitHub at https://github.com/arosenfeld/immunedb, on PyPi at https://pypi.org/project/ImmuneDB, and a Docker container is provided at https://hub.docker.com/r/arosenfeld/immunedb. Full documentation is available at http://immunedb.com.

INTRODUCTION

The study of immune cell populations has been revolutionized by next-generation sequencing. It is now commonplace to have hundreds of thousands or even millions of sequences from a single sample or individual (1, 2). With this increase in experimental data output, many tools have been created for pre-processing sequences (3), germline association and clonal inference (4-7), and post-processing analysis (8, 9). Lacking from this space, however, is a system to store fully annotated sequences, their inferred germline sequences, clonal associations, and study-specific metadata.
This paper describes ImmuneDB (10) and introduces new features added since its original publication, including: additional importing and exporting formats, a more flexible metadata system, extra clonal assignment methods, integration of a novel allele detection tool (11), and the ability to analyze other species and light chains. ImmuneDB provides an easy-to-use immune-receptor sequence database, which has been optimized for and tested with datasets of up to hundreds of millions of sequences (1). It can take as input raw FASTA/FASTQ sequence files, or import pre-annotated sequences from an array of formats including the Change-O data standard (5) and the AIRR data standard currently being implemented and further refined (2). With either method, it can infer clonal associations, calculate selection pressure, generate lineages, and make all resulting information available both from the command line and as a web interface. For interoperability with other systems, ImmuneDB can output data in AIRR, Change-O, VDJtools, and GenBank formats. ImmuneDB's usage of MySQL also allows for rapid querying and data sharing using a variety of existing tools.

MATERIALS AND METHODS

The methods below describe the ImmuneDB pipeline in the context of human B-cell heavy chain rearrangements. We then extend the methods to T cells, light chains, and other species (Figure 1).

Computer Hardware and Software Requirements

ImmuneDB is primarily written in Python and can therefore run on most common Unix-based operating systems (including macOS). Local installation of the version described in this paper (v0.24.1) requires Python 3.5+, although legacy versions support Python 2.7. The setup will automatically install all Python library dependencies. Additionally, MySQL (or a drop-in replacement like MariaDB) is required, although it need not run on the same host as ImmuneDB. Optional steps require installation of additional external tools.
Local alignment requires Bowtie 2 (12), lineage construction depends on Clearcut (13), selection pressure calculations utilize BASELINe (9), novel gene detection requires TIgGER (11), and the web frontend exists in a separate repository¹. Alternatively, a Docker image² is available with all these dependencies pre-installed along with helper scripts, and is therefore the recommended method for using ImmuneDB. Hardware requirements depend on the input data, but as a general guideline it is recommended that ImmuneDB be run on a machine with enough available memory to store at least three times the largest input sample (e.g., for a 5 GB input file, 15 GB of memory should be available). Any number of cores is acceptable, and ImmuneDB uses Python's multiprocessing library to utilize as many cores as possible.

Germline Reference Database

ImmuneDB can use any IMGT-aligned V- and J-gene database, which it accepts as a pair of FASTA files. We suggest always using the most recent IMGT/GENE-DB (14) database, including only functional germlines.

The ImmuneDB Pipeline

ImmuneDB is comprised of sequential steps, run via the command line, that generate a database with analyzed immune receptor data, as shown in Figure 1. Before running ImmuneDB, it is recommended that pRESTO (3) be used for quality control and, when applicable, paired-read assembly. ImmuneDB itself begins with V- and J-gene identification and optional local alignment. Then, duplicate sequences are identified across samples originating from the same subject. These sequences are then grouped into clones using one of three methods of clonal inference (described in section Clonal Inference). Finally, aggregate statistics are generated and results can be exported, explored in a web browser, or further analyzed with an integrated set of downstream-analysis tools. Each step of the pipeline is detailed in this section along with an example of the command to run.
In all cases, passing the --help flag will list all possible parameters and their default values (if any).

Raw Data Processing

Before running the ImmuneDB pipeline itself, raw FASTQ reads from a sequencer should be quality controlled using pRESTO. First, sequences are trimmed of poor-quality bases on the end farthest from the primer, where base-call confidence tends to degrade. Using default parameters, sequences are then trimmed to the point where a window of 10 nucleotides has an average quality score of at least 20. If reads are paired, the next step is to align the R1 and R2 reads into full-length, contiguous sequences. Short sequences, those with fewer than 100 bases, are then removed from further analysis. Finally, any base with a quality score less than 20 is replaced with an N, and any sequence containing more than 10 such bases is removed from further analysis. In the case of FASTA input, which has no quality information, only paired-end assembly and short sequence removal are recommended. A detailed script for running this process can be found in Rosenfeld et al. (15). After this process, the remaining filtered sequences are presumed to be of adequate quality for germline inference and clonal assignment.

Creating a Database

ImmuneDB allows users to separate their datasets into individual ImmuneDB projects, each with its own database. To create a properly structured MySQL database, the immunedb_admin command is used. Running this command with db_name replaced with an appropriate name will create a database named db_name and a configuration file in ~/configs with information for the remainder of the pipeline to access it. Specifically, it records a unique username and password for the database, so each project you create is separated from the others. Database names must consist of only alphanumeric characters, integers, and underscores.

FIGURE 1 | A general overview of the ImmuneDB pipeline.
To start, sequences are optionally pre-processed with pRESTO to remove poor-quality sequences and mask bases below a user-defined threshold. Next, using a conserved-region anchoring method, sequences are either assigned V- and J-genes or labeled as "unidentifiable," which optionally can be corrected by local alignment. After gene assignment, sequences are collapsed across samples and grouped into clones based on one of three methods (see text). Lastly, downstream analyses such as selection pressure and lineage construction are performed. A web interface is available to browse the resulting data, and analyzed data can be exported in a variety of formats. Inset: examples of downstream analysis: cosine similarity between inferred B-cell rearrangements in tissue samples from an organ donor; diversity (calculated as defined in Equation 1) plotted at different orders from the same tissue samples; rarefaction calculated for B-cell rearrangements amplified from colon samples.

Sample Metadata Assignment

Each ImmuneDB project is designed to house data across many samples and subjects. It is recommended that each quality-controlled FASTA/FASTQ file contain the sequences from one biologically independent sample. This implies that, if a given sequence is found in multiple independent samples, it actually occurred in multiple cells. Although not recommended, ImmuneDB will still operate normally if samples originated from multiple sequencing runs of the same PCR aliquot. However, many measures of sequence abundance and clone size break down under these conditions [see section Sequence Collapsing (Copies, Uniques, Instances) for discussion]. For the ImmuneDB pipeline, some metadata about each sample are required: a unique sample name and a subject identifier. Samples with the same subject identifier came from the same source organism. Additional custom metadata (e.g., cell subset, tissue) can be attached to each sample, which can be useful for later analysis and grouping.
To generate a template metadata file in the directory with the FASTA/FASTQ files for processing, the user runs:

$ immunedb_metadata --use-filenames

This will generate a metadata.tsv file that should be further edited with the appropriate information, and that will be used in the next step of the pipeline. The optional --use-filenames flag pre-populates the sample names with the associated filename, stripped of its .fasta or .fastq extension.

Germline Assignment (Anchoring, Local Alignment)

The first portion of the ImmuneDB pipeline infers V- and J-genes for each set (sample) of quality-filtered reads using the approach in Zhang et al. (4). This method was chosen because it is quicker than local alignment and works for the majority of sequences, which are not mutated in the conserved regions flanking the CDR3. Given a small number of restrictions detailed in the documentation, this method can accept user-defined germlines so long as they are properly IMGT numbered (16). Specifics about the numbering scheme can be found at⁴. For each sequence, the anchor method first searches for a conserved region of the J gene. If it is found, all germline J-gene sequences are compared to the same region in the sequence, and the one with the smallest Hamming distance (17) is assigned as the putative J gene. Since ImmuneDB requires sequences to have both a J- and a V-gene assignment to be included in clones, if no anchor is found the sequence is marked as unidentifiable and is excluded from V-gene assignment for efficiency. Then, a conserved region near the 3′ end of the V segment is used to position each sequence correctly relative to the IMGT-numbered germline sequences. As with J genes, each germline sequence is then compared using Hamming distance, and the one with the smallest distance is assigned as the putative V gene. If the conserved region is not found, the sequence is marked as unidentifiable and excluded from the rest of the anchoring process.
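As a rough illustration of the Hamming-distance scoring described above (a simplified sketch, not ImmuneDB's actual code; the function names and the toy germline windows are hypothetical):

```python
def hamming(a: str, b: str) -> int:
    """Number of mismatched positions between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def assign_gene(read_window: str, germlines: dict) -> tuple:
    """Pick the germline with the smallest Hamming distance to the read.

    `germlines` maps gene names to germline sequence over the same
    aligned window. Returns (best_gene, distance); in the real pipeline,
    statistically indistinguishable candidates are kept as "gene ties".
    """
    best_gene, best_seq = min(germlines.items(),
                              key=lambda item: hamming(read_window, item[1]))
    return best_gene, hamming(read_window, best_seq)

# Toy example with made-up 12-nt windows:
germlines = {"IGHJ4*01": "ACTGGTCACCGT", "IGHJ6*01": "ACGGGTCACCCT"}
gene, dist = assign_gene("ACTGGTAACCGT", germlines)
```

Here the read differs from IGHJ4*01 at one position and from IGHJ6*01 at three, so IGHJ4*01 is chosen as the putative gene.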
After every sequence is assigned a V and J gene (or marked as unidentifiable), the average mutation frequency and sequence length are calculated. For each sequence, other germline genes which are statistically indistinguishable from the putative genes are added as "gene ties." Thus, each sequence may have multiple V- and J-gene assignments. As a post-identification quality control step, ImmuneDB then marks sequences with a low V-germline identity (defaulting to 60%) as unidentifiable. Further, any sequence which has a window of 30 nucleotides with less than 60% germline identity is marked as a "potential insertion or deletion." After this command finishes, the anchoring portion of alignment is complete. Due to insertions or deletions, mutations in the conserved regions, and other anomalies, there are generally sequences which cannot be identified with this approach. To rectify such sequences, ImmuneDB can then optionally use Bowtie 2 (12) to attempt local alignment on each of these sequences. Any insertions or deletions that Bowtie 2 finds are also stored with the sequence. The command to locally align sequences is similar to identification:

$ immunedb_local_align /path/to/config.json \
    /path/to/v_germlines.fasta \
    /path/to/j_germlines.fasta

Sequence Collapsing (Copies, Uniques, Instances)

After sequences are assigned V and J genes, sequences that differ only at N positions (those which had low-quality calls from the sequencer) are collapsed within each sample, resulting in one set of unique sequences per sample. Each unique sequence maintains a count, called "copy number," of how many duplicates occurred in the sample. Then, all the sample-level unique sequences within the same subject are compared to one another, duplicates are marked, and the collapsing information is stored. After this process, each subject-level unique sequence has two fields associated with it: total copies and instances.
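The N-tolerant duplicate merging described above can be sketched as follows. This is a simplified illustration, not ImmuneDB's code: names are hypothetical, and the highest-copy sequence is kept as the representative.

```python
def matches_with_ns(a: str, b: str) -> bool:
    """True if equal-length sequences agree everywhere except where
    either has an N (a masked low-quality base)."""
    return len(a) == len(b) and all(
        x == y or x == "N" or y == "N" for x, y in zip(a, b))

def collapse(seq_copies: dict) -> dict:
    """Greedily merge N-tolerant duplicates, visiting sequences in
    decreasing copy-number order and summing copies into the kept
    representative. `seq_copies` maps sequence -> copy number."""
    kept = {}
    for seq in sorted(seq_copies, key=seq_copies.get, reverse=True):
        for rep in kept:
            if matches_with_ns(seq, rep):
                kept[rep] += seq_copies[seq]
                break
        else:
            kept[seq] = seq_copies[seq]
    return kept

sample = {"ACGT": 5, "ACGN": 2, "TTGT": 1}
collapsed = collapse(sample)   # "ACGN" folds into "ACGT"
```

Visiting sequences in decreasing copy order ensures that masked (N-containing) reads are absorbed into their most abundant unmasked match rather than the other way around.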
When samples are biologically distinct, as recommended in section Sample Metadata Assignment, the instance count of a sequence is the number of samples in which that sequence occurred (which can be interpreted as a lower bound on the number of cells that contained that sequence), and the total copies is the number of duplicates across all samples. Although the latter is subject to PCR artifacts, it can give an indication of true sequence abundance. Alternatively, when samples are not biologically independent, the instances of a sequence no longer give a bound on cell count, and the copy number of a sequence may be inflated, leading to skewed sequence and clone abundance calculations. An overview of the terms copy number, instances, and unique sequences is provided in Table 1. To run the collapsing process, run:

$ immunedb_collapse /path/to/config.json

Novel Allele Detection and Correction

The ImmuneDB gene identification process assumes that the germline alleles in the database provided, from IMGT or another repository, are indeed those present within the subjects being analyzed. Users can add or remove genes as needed by modifying the germline FASTA files input into ImmuneDB. However, in many cases it may not be known a priori which genes are present or whether the subjects have novel germline alleles. To determine which genes are present in a dataset, ImmuneDB may optionally run TIgGER (11) on sequences to identify potential differences from the standard germline database. To do so, the identification and collapsing processes above are run with a presumed germline database, followed by:

$ immunedb_export /path/to/config.json changeo \
    --min-subject-copies 2
$ immunedb_genotype /path/to/config.json \
    /path/to/v_germlines.fasta

This exports the sequences, as identified with the presumed germline genes, with at least two copies in the subject, and then runs TIgGER.
If novel alleles are found, a new set of input germlines is generated, and ImmuneDB can be re-run with these germline reference genes.

Clonal Inference

ImmuneDB incorporates three methods of clonal inference, all of which start with the same set of sequences: the subject-level unique sequences calculated previously. By default, only such sequences with a copy number of at least two are considered eligible for clonal assignment. This eliminates some of the sequences that potentially arose from sequencing error and could cause spurious construction of clones. After this process, each clone has three defined levels of size. The number of unique sequences is the number of distinct sequences that comprise a clone. Copies and instances are defined as the sum of copies and instances over the clone's constituent unique sequences. These clone size metrics are reviewed in detail in Rosenfeld et al. (15).

CDR3 Similarity

The first method of clonal inference is for B cells. It uses CDR3 similarity to group sequences from the same subject with the same gene assignments and CDR3 length into clones. Initially, an empty list of clones C is created. Let S be the set of all subject-level unique sequences. Each sequence s ∈ S is visited in order of decreasing copy number. If there is a clone c ∈ C such that every sequence already assigned to c has the same V gene, J gene, and CDR3 length in nucleotides, and has 85% CDR3 amino-acid similarity, s is added to the clone c. Otherwise, a new clone is added to C containing only s. This results in a set of clones such that all the sequences in a clone share the same gene assignments and CDR3 length, and are pairwise at least 85% similar in the CDR3. The percent similarity can be tweaked by the user as necessary. This method of clonal inference can be run with:

$ immunedb_clones /path/to/config.json \
    similarity

Lineage Separation

The newest method of clonal assignment in ImmuneDB is based on (18).
For each subject, sequences are placed into buckets based on their V gene, J gene, and CDR3 length in nucleotides. Then, a lineage is made out of each bucket. Working from the root node of the lineage (the germline), each edge is traversed until a specified number of mutations (by default four) accumulates. The subtree starting at that point is then grouped into a clone. This method, unlike similarity-based methods, is order-agnostic.

Selection Pressure

After clonal inference, ImmuneDB can optionally use BASELINe (9) to estimate clonal selection pressure. It first runs on each clone as a whole, providing an overview of selection pressure in the framework and complementary regions. Then, it runs independently on the subset of sequences that occur in each sample. This can be useful when a clone spans multiple samples with various biological features. For example, a clone may appear in samples from different tissues or cell subsets. To run BASELINe via ImmuneDB, the path to the Baseline_Main.r script must be specified.

Lineages

ImmuneDB integrates Clearcut (13) to infer clonal lineages using neighbor-joining. For each clone, a lineage is constructed, and every node maintains information about its associated sequence, as shown via the web interface in Figure 2. This process can be parameterized in different ways, including filtering sequences or mutations that occur less than a set number of times. Generally, it is recommended to run Clearcut excluding mutations that happen exactly once, with:

$ immunedb_clone_trees /path/to/config.json \
    /path/to/clearcut \
    --min-count 2

FIGURE 2 | In (A), an example clonal lineage as viewed through the web interface for ImmuneDB. Node diameter is proportional to the total number of sequence copies at that node, and the edge numbers show the number of mutations between the parent and child nodes. The colors indicate the tissue(s) that make up the sequences at the node.
In this case, green is bone marrow, red is lung, and black is a combination of both. As shown in (B), hovering over a node gives more information, such as the specific mutations, copy number, and sequence metadata.

Figure 3 shows the same clone's lineage constructed with (A) no mutation threshold, (B) a threshold requiring mutations to occur in at least 2 sequences, and (C) a threshold requiring mutations to occur in at least 5 sequences. The large expansion of nodes in Figure 3A is likely due to sequencing error. A higher threshold, as in Figure 3C, may be useful when there is high sequencing depth or when clones are extremely large (such as in some hematopoietic malignancies). In these cases it is quite likely that the same sequencing error will occur multiple times. However, thresholding mutations means the lineages may not accurately reflect recent or rare clonal events.

Web Interface

ImmuneDB comes with a web interface for browsing analyzed data. It allows users to group and filter data to generate interactive plots, view clones, and inspect sequences. It is primarily intended to explore data at a high level, visualizing individual samples or comparing different samples in various ways. The command-line tools can then be used for more fine-grained analysis. An example interface can be found at http://immunedb.com/tissue-atlas. To utilize the web interface using the Docker container, simply run the following and open http://localhost:8080 in a browser:

$ serve_immunedb.sh /path/to/config.json

Information about running the web interface without the Docker container, or with more sophisticated configurations such as hosting multiple databases, is described in the documentation.
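The greedy CDR3-similarity clustering described in section CDR3 Similarity can be sketched as a single pass over subject-level unique sequences. This is a simplified illustration with hypothetical names, not the ImmuneDB implementation:

```python
def aa_similarity(a: str, b: str) -> float:
    """Fraction of identical positions between equal-length CDR3s."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def infer_clones(seqs, threshold=0.85):
    """Greedy CDR3-similarity clustering.

    `seqs` is a list of (v_gene, j_gene, cdr3_aa, copies) tuples from one
    subject. Sequences are visited in decreasing copy-number order; a
    sequence joins the first clone whose every member shares its V gene,
    J gene, and CDR3 length and is at least `threshold` similar in the
    CDR3. Otherwise it founds a new clone.
    """
    clones = []
    for v, j, cdr3, copies in sorted(seqs, key=lambda s: -s[3]):
        for clone in clones:
            if all(cv == v and cj == j and len(c3) == len(cdr3)
                   and aa_similarity(c3, cdr3) >= threshold
                   for cv, cj, c3, _ in clone):
                clone.append((v, j, cdr3, copies))
                break
        else:
            clones.append([(v, j, cdr3, copies)])
    return clones

# Toy 7-aa CDR3s: the first two differ at one position (6/7 ≈ 0.86 ≥ 0.85).
subject_seqs = [
    ("IGHV3-23", "IGHJ4", "ARDGYWF", 10),
    ("IGHV3-23", "IGHJ4", "ARDGYWY", 4),
    ("IGHV3-23", "IGHJ4", "TTTTTTT", 1),
]
clones = infer_clones(subject_seqs)
```

Because every member of a clone must pass the similarity check, membership does not depend on which member happened to be added first, though the overall result still depends on the copy-number visiting order.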
Importing Gene Assignments and Clonal Inference From Other AIRR Tools

Although ImmuneDB has the features to fully analyze sequences from raw reads through clonal assignment, a concerted effort has been made to allow users to import both identified sequences and clonal assignments from other tools. For pre-identified sequences, ImmuneDB can import files in the Change-O data format (5) with:

$ immunedb_import /path/to/config.json \
    /path/to/v_germlines.fasta \
    /path/to/j_germlines.fasta \
    /path/to/changeo_files

Note that this requires a metadata file identical to that needed by the identification step. Clonal assignments can be imported from either ImmuneDB-identified sequences or imported sequences. First, the command below is run to output a template file with a list of clonal-assignment-eligible sequences:

$ immunedb_clone_import /path/to/config.json \
    --action export sequences.tsv

Users then fill in the clone_id column in sequences.tsv as they desire and import it back into ImmuneDB with:

$ immunedb_clone_import /path/to/config.json \
    --action import sequences.tsv

Assuming that no constraints are broken (clones must still have the same V gene and J gene and originate from the same subject), the custom clonal assignment will then be accepted by ImmuneDB. As members of the AIRR Community (19), the authors will continue to integrate data standards (2) as they are defined.

Aggregate Analysis and Data Export

ImmuneDB automatically aggregates data for some common analyses in the last step of the pipeline with:

$ immunedb_clone_stats /path/to/config.json
$ immunedb_sample_stats /path/to/config.json

This auto-generated, aggregate analysis is not exhaustive; it is meant to provide sufficient data for the web interface and to guide further investigation. To assist with this, ImmuneDB allows users to easily export all portions of the analyzed dataset in useful, common formats.
FIGURE 3 | Comparison of mutation thresholding on lineage shape. A comparison of a single clonal lineage constructed from bulk IgH V-region sequencing data where (A) all mutations are included, (B) only mutations found in at least 2 sequences are included, and (C) only mutations found in at least 5 sequences are included. The drastic elimination of leaf nodes between (A) and (B) indicates that they are most likely due to sequencing error. In very deeply sequenced samples, or in malignancies where clones are very large, an even higher threshold may be required, as in (C), since the same sequencing error may occur multiple times.

Specifically, ImmuneDB has integrated export capabilities for the Change-O (5), VDJtools (8), GenBank, and FASTA/FASTQ formats. This enables users to quickly use common downstream analysis tools, including VDJtools and those included with the Immcantation Framework, or to submit their datasets in the AIRR-compliant GenBank format. The basic template for this command is as follows, replacing the term format with changeo, vdjtools, or genbank:

$ immunedb_export /path/to/config.json \
    format

T-cells

ImmuneDB can analyze T-cell receptor sequences in addition to B-cell receptor sequences. Compared to B-cell analysis, the two changes necessary in the pipeline for T-cell analysis are to use T-cell germline sequences during germline assignment and to specify the T-cell method during clonal inference. The T-cell method groups sequences with the same V gene, J gene, and 100% CDR3 nucleotide identity into clones. Like the B-cell similarity method described in section CDR3 Similarity, the T-cell method does not take into account any mutations in the V and J genes. In the case of T cells, mutations are assumed to be experimental artifacts, as T-cell receptors do not undergo somatic hypermutation due to lack of activation-induced cytidine deaminase (AID).
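The T-cell grouping rule amounts to exact matching on three keys; a minimal sketch (the tuple layout is illustrative, not ImmuneDB's schema):

```python
from collections import defaultdict

def tcell_clones(sequences):
    """Group T-cell receptor sequences into clones: identical V gene,
    identical J gene, and 100% CDR3 nucleotide identity.
    `sequences` is an iterable of (seq_id, v_gene, j_gene, cdr3_nt) tuples."""
    clones = defaultdict(list)
    for seq_id, v, j, cdr3 in sequences:
        clones[(v, j, cdr3)].append(seq_id)
    return dict(clones)
```

Any sequence whose CDR3 differs by even one nucleotide falls into a separate clone, reflecting the assumption that such differences are experimental artifacts rather than somatic hypermutation.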
Putative T-cell clones may comprise sequences that appear to differ in their V- and J-gene sequences. Spurious intra-clonal diversification is likely due to sequencing error, whereas consistent divergence from the germline within a clone likely arises from allelic differences relative to the germline database. The latter case can be corrected with TIgGER as described in section Novel Allele Detection and Correction.

Light Chains

Because ImmuneDB does not attempt to determine D genes for sequences during germline assignment, light chains are naturally supported when a proper germline database of the V and J genes is provided. At present, the germline genes for kappa and lambda chains must be placed in separate files and run independently. This restriction will be lifted in future versions. Additionally, because of lower junctional diversity, it is recommended that clonal-assignment parameters be reconsidered. For example, when using the similarity method, it is likely appropriate to lower the default amino-acid similarity threshold to a value below 85%.

Other Species

Species other than humans are supported by ImmuneDB, but with two restrictions. First, for the built-in anchoring method for gene identification, germline genes must have conserved anchoring points as described in Zhang et al. (4) and be IMGT aligned. Second, the length of all J genes past the 3′ end of the CDR3 must be fixed, which is the case for all species currently in the IMGT database.

COMPARISON TO MIXCR

It is difficult to verify the results of clonal and germline association methods, as there is no agreed-upon gold standard. We attempt to associate different types of diversity with their underlying cause(s), but in the end, this is still just an educated guess.
Our methodology, as described above, is based on the best practices described in Yaari and Kleinstein (20): stringent pre-processing, correcting for allelic differences between subjects, identification of insertions and deletions, and multiple clonal-assignment methods for different datasets. ImmuneDB also provides the option of varying the stringency of both data filtering and clonal assignment to ensure reproducible and robust results. As a final argument for the efficacy of ImmuneDB, we show that repertoires analyzed with ImmuneDB take a form similar to those observed with other tools. In this section we compare ImmuneDB to a commonly used pipeline, MiXCR (6), on two datasets. First, we compare the germline gene assignment and clonal inference of the two methods on five samples, one each from five different tissues, all from one human organ donor. Second, we inspect how similar the overall view of a larger repertoire (19 biological replicates from a single organ donor's colon) appears with each method (1).

Germline Assignment and Clonal Inference

To determine how similarly MiXCR and ImmuneDB assign germline genes and infer clones, both pipelines were run on five samples from one human subject selected from Meng et al. (1), as listed in Table 2. The associated SRA accession information can be found in Table S1. This data set has a total of 651,988 reads. Sequences considered incorrect or misleading were discarded from both result sets: sequences had to have at least 160 bases in the V gene (at least all of CDR2), between 3 and 96 nucleotides in the CDR3, a functional V-gene assignment (no pseudogenes), and all V-gene calls (V-ties) for a given sequence had to be from the same V-gene family. For clonal comparisons, clones with only one total copy were discarded.

Germline Assignment

First, we compared which sequences each method was able to identify given this filtering.
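Overlap between two identified-sequence sets is summarised by the Jaccard index (intersection over union), which can be computed directly from set sizes; a minimal sketch with an illustrative helper name:

```python
def jaccard_from_counts(n_a, n_b, n_shared):
    """Jaccard index |A intersect B| / |A union B|, computed from the
    sizes of two sets and the size of their intersection."""
    return n_shared / (n_a + n_b - n_shared)
```

Applied to the per-method sequence counts reported below, this evaluates to roughly 0.91.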
MiXCR identified 599,930 sequences while ImmuneDB identified 611,252, and both identified the same 577,750. The corresponding Jaccard index of 0.91 indicates that the two methods identified a similar set of sequences. Next, we compared how many of the identified sequences were assigned to the same genes. Since both methods allowed multiple assignments for both V genes and J genes, we considered two sequences to have the same gene call if the intersection of their gene calls contained at least one shared gene. For V genes, the two methods agreed on 98% of the sequences; for J genes, 95%; and when considering both genes, 93%. Of the sequences that differed in either gene, less than 1% differed in their gene-family calls. Thus, overall, both methods generally agreed on which germline genes gave rise to each sequence.

Clonal Inference

Next, we compared how similarly the two methods inferred clonotypes for the 19 biologically independent colon sample replicates. The associated SRA accession information can be found in Table S2. For this process, we assigned each clone one or more labels from each method:

• If clone A from one method contained exactly the same sequences as clone B from the other, we labeled both "identical".
• If clone A from one method contained a strict superset of sequences compared to clone B from the other method, we labeled A "superset" and B "subset".
• If clone A from one method contained a portion of the sequences of clone B from the other method, we labeled both "intersecting".
• If clone A from one method was disjoint from all clones from the other method, we labeled it "disjoint".

Note that a clone could potentially have both the labels superset and intersecting simultaneously if it contained all the sequences from a clone inferred by the other method plus sequences from another clone. Further, a clone could have multiple superset labels if it contained all the sequences from multiple clones inferred by the other method.

TABLE 3 | Clonal labels when comparing methods of clonal inference between ImmuneDB and MiXCR. Identical indicates the same set of clones was identified by both methods; subset/superset means the clone constructed by the associated pipeline was a subset/superset of one assigned by the other pipeline; and intersecting means some sequences from a clone assigned by one pipeline overlapped sequences assigned by the other pipeline.

As shown in Table 3, ImmuneDB inferred 13,736 clones whereas MiXCR inferred 14,453. Of these, 10,786 were identical; that is, both methods constructed clones with exactly the same set of sequences. In 1,665 cases, an ImmuneDB clone was a subset of a MiXCR clone. There are two reasons this occurred. First, different numbers of N nucleotides in either the V- or J-region can cause sequences that are otherwise similar to be assigned different sets of gene ties and therefore be placed in different clones. Second, since ImmuneDB requires pairwise 85% similarity of CDR3 amino-acid sequences within clones, some sequences that may actually originate from the same clone are separated. Conversely, 2,819 MiXCR clones are subsets of an ImmuneDB clone. Nearly all of these are due to overly strict J-gene assignment, resulting in separation of likely clonally related sequences. For example, some sequences that are one nucleotide away from IGHJ1 and two away from IGHJ4 could easily be confused due to sequencing error (4).

Overall Repertoire Features

Repertoire analysis pipelines should reveal similar overall trends in acceptably large datasets even if the minutiae of sequence assignment and clonal inference differ. Specifically, when looking at sufficiently large clones, the overlap across samples and diversity should lead to similar conclusions. It is generally acceptable to only look at larger clones, as smaller clones have likely been under-sampled or are an artifact of sequencing error (21).
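Returning to the clonal-label comparison above: the labels reduce to elementary set comparisons over sets of sequence identifiers, as in this sketch:

```python
def label_clone(clone, other_clones):
    """Label one clone (a set of sequence IDs) against all clones inferred
    by the other method. A clone can carry several labels at once, e.g.
    superset of one clone while intersecting another."""
    labels = set()
    for other in other_clones:
        if clone == other:
            labels.add("identical")
        elif clone > other:          # strict superset
            labels.add("superset")
        elif clone < other:          # strict subset
            labels.add("subset")
        elif clone & other:          # partial overlap
            labels.add("intersecting")
    return labels or {"disjoint"}
```

Python's set operators give the strict superset/subset semantics directly, and the fallback covers clones that share no sequences with any clone from the other method.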
To compare repertoire-level metrics generated from ImmuneDB- and MiXCR-processed data, 19 biologically independent colon replicates were analyzed. We assessed the similarity of the two pipelines by comparing their clone size distributions, diversity measures, rarefaction, and clonal overlap between samples, as described in Meng et al. (1).

Clone Size Distribution

We first looked at clone size distributions from the two pipelines. In Figure 4, the left panel shows a comparison of clone sizes as measured by copy number. The overall landscape is similar with both methods, especially when looking only at clones with 10 or more sequence copies. For smaller clones, the difference in clone sizes can be attributed to the more stringent CDR3 similarity measure MiXCR uses compared to ImmuneDB. The right panel shows the same comparison but instead measures the size of clones as the number of instances comprising the clone. Both methods have nearly identical clone size distributions, especially when considering clones with at least 2 instances.

Diversity

We next considered the diversity of the clones assigned by each method. The diversity index ^qD, as defined by Equation (1), quantifies how many different clones there are:

^qD = ( Σ_{i=1..R} p_i^q )^(1 / (1 - q))    (1)

Here, R is the number of clones (richness), p_i is the fraction of the repertoire (either as copies or instances) inferred to be in clone i, and q is the order. When the order is zero, the diversity is the richness, or total number of clones. Increasing the order q gives more weight to the larger clones (21,22). Figure 5 shows the diversity at orders 1 through 15 for ImmuneDB and MiXCR, measuring clone size both as copies and as instances. It is clear that MiXCR infers more clones than ImmuneDB. However, when the order is increased (more weight is given to large clones), the diversity of the two methods converges.
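The diversity index of Equation (1) is the standard Hill-number family and can be implemented directly, including the q → 1 limit (the exponential of Shannon entropy); a minimal sketch:

```python
import math

def hill_diversity(p, q):
    """Hill diversity ^qD = (sum_i p_i^q)^(1/(1-q)), where p holds the
    fraction of the repertoire (copies or instances) in each of R clones.
    q=0 returns richness R; larger q weights large clones more heavily."""
    if q == 1:
        # the q -> 1 limit is exp(Shannon entropy)
        return math.exp(-sum(pi * math.log(pi) for pi in p if pi > 0))
    return sum(pi ** q for pi in p) ** (1.0 / (1.0 - q))
```

For a perfectly even repertoire of four clones, ^qD equals 4 at every order, while a skewed repertoire falls toward the reciprocal of the largest clone fraction as q grows, which is why the two pipelines converge at high orders.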
Rarefaction

Rarefaction gives insight into how many clones are estimated to occur given a certain number of samples (biological replicates) from the same source. A rarefaction curve that levels out indicates that fewer new clones will be found with further sampling. Figure 6 shows the rarefaction curves for ImmuneDB and MiXCR for clones with at least 2, 5, 10, and 20 instances. The x-axis shows the number of samples and the y-axis shows the normalized richness (the richness divided by the richness at 19 samples). The solid lines (up to 19) are calculated from the 19 samples being compared, whereas the dashed lines past sample 19 show the projected number of additional clones if more replicates had been acquired. As only larger clones are considered, the rarefaction curves both begin to level out, indicating that those larger-clone populations have been more adequately sampled. Both pipelines tended to agree on when clones had been sampled enough, even though the overall diversity appears to be higher with MiXCR (indicated by lower fractional richness).

FIGURE 4 | Comparison of clone size distributions between ImmuneDB and MiXCR in 19 colon samples from one donor (1). Clone size is given as copies in (A) and instances in (B). Both plots have been restricted to a maximum X-value of 50, but the trends continue beyond that.

FIGURE 5 | Comparison of diversity calculations between ImmuneDB and MiXCR in 19 colon samples from one donor (same data as in Figure 4). As order increases, more weight is given to larger clones. For both copies and instances, MiXCR inferred more diverse clonal populations at low order numbers. As order increases, however, the two methods begin to converge.

Sample Overlap

We next evaluated the amount of clonal overlap using the cosine similarity, as defined by Equation (2):

cos(A, B) = (A · B) / (|A| |B|)    (2)

In this case, A and B are vectors corresponding to two samples, both of which have a length equal to the total number of clones in the dataset. The ith value in each vector indicates the number of copies of the ith clone in the sample represented by the vector.
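The cosine similarity of Equation (2) can be sketched over two per-clone copy-number vectors as follows:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity A.B / (|A| |B|) between two per-clone copy-number
    vectors of equal length (one entry per clone in the dataset)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Two samples with proportionally identical clone compositions score 1, while samples sharing no clones score 0, regardless of the samples' total sequencing depths.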
Figure 7 shows the cosine similarity for clones with a minimum of 2, 5, 10, and 20 sequence instances. MiXCR infers less overlap between samples, but the general trend for both methods is the same: as expected, with larger clones, more overlap is discovered. Further, the distributions of cosine similarities about the median of each method are not significantly different. That is, for both methods, clones over a given instance count tend to be distributed across a similar number of samples with a similar fraction of sequences in each sample.

FIGURE 6 | Rarefaction curves for ImmuneDB and MiXCR in 19 colon samples from one donor. The Y-axis shows the number of predicted clones when the population has been sampled between 1 and 25 times. A rarefaction curve that plateaus indicates the underlying clonal population has been adequately sampled. For all cutoffs, although the overall richness varies, the conclusion drawn would likely be the same: for clones under 10 instances, more sampling is required, while larger clones have been sampled sufficiently.

DISCUSSION

ImmuneDB provides a unified method for the storage and analysis of large amounts of high-throughput immune receptor sequencing data. Like other pipelines such as Change-O (5) and MiXCR (6), it can analyze data from raw reads through clonal assignment. ImmuneDB has two methods of germline assignment, anchoring and local alignment, and provides the option of filtering the data at different QC and copy-number cutoffs, which is desirable when samples with different sequencing depths are being compared. In addition, ImmuneDB provides multiple methods of clonal assignment. Combined, these features provide a variety of ways to analyze different types of data. ImmuneDB is also flexible in that it can import pre-annotated data in a variety of formats supported by other AIRR software tools. This allows users to use custom tools for their dataset, using ImmuneDB for only a portion of the analysis.
To provide a comprehensive suite of repertoire analysis tools, ImmuneDB also integrates downstream analyses such as selection pressure via BASELINe (9), lineages via Clearcut (13), and novel allele detection via TIgGER (11), reducing the need for users to learn individual tools. Unlike most other tools, ImmuneDB stores the data in an easily queryable MySQL database and provides a web interface for easily sharing data with non-technical users. It is worth noting, however, that ImmuneDB does make some assumptions when using other tools. For example, it is assumed that sequences in a clone have the same V gene, J gene, and CDR3 length and that they come from the same organism. Although this is generally acceptable, there are certain situations where such assumptions may not hold, such as donor/recipient data where a clone may span multiple recipients. As such, it is important to consider the limitations of all tools before using them on non-traditional datasets. Additionally, since ImmuneDB calculates clones on a per-subject basis, adding new samples to a subject requires clonal inference to be re-run for that subject; however, the rest of the database will remain unchanged. Finally, in section Comparison to MiXCR, we compared ImmuneDB to MiXCR, a pipeline that similarly determines germline usage and infers clonotypes, to show that the benefits of using ImmuneDB do not come at the cost of drastically changing the conclusions one may draw from the data. Although the methods differ in their approach to clonal assignment, both yield similar clone size distributions, rarefaction plateau points, and sample overlap.

CONCLUSION AND FUTURE PLANS

In this paper we have provided a comprehensive description of ImmuneDB, a system for the analysis of large-scale, high-throughput immune repertoire sequencing data.
ImmuneDB can operate either independently, providing an integrated collection of analysis tools to process raw reads for gene usage, infer clones, aggregate data, and run downstream analyses, or in conjunction with other AIRR tools using its import and export features. Thus, ImmuneDB can be an all-in-one solution for repertoire analysis, serve as an efficient way to visualize and store annotated repertoire data, or both. In either case, the ImmuneDB web interface can be used to easily interact with the underlying dataset. ImmuneDB is regularly updated to address user needs and handle the increasing complexity of adaptive immune receptor repertoire sequencing data. In the future, we plan to add a feature that allows users to assess the quality of individual sequencing libraries (replicates) before running the entire pipeline. As pRESTO provides a per-sequence quality control step, this new feature will provide a post-identification quality control step, informing users if their samples have insufficient depth or quality. Further, the CDR3 similarity clonal inference method will receive two additional features. First, it will be extended to allow different similarity thresholds for different CDR3 lengths. Second, it will allow users to set a required minimum number of shared V-gene somatic hypermutations for sequences to be grouped into a clone.
Let's talk about it: Reframing communication in medical teams

© 2022 Elsevier Ltd. This is an open access article under the CC BY license.

Abstract

Communication is associated with a significant percentage of errors or omissions in secondary healthcare across specialities; it is also the core process in and through which medical teams manage tasks, establish a rhythm and relationship between themselves and the patient, all of which are critical components of clinical practice. Despite this, however, communication is framed in medical training and the literature in either narrow terms or in a broad and fuzzy way, and it is indicative of the issue that teamwork and team communication are perceived and treated separately.
In this paper, we draw on completed and ongoing interdisciplinary work to show how teams interact through illustrative examples from a large project on the management of obstetric emergencies. We provide a brief overview of the limitations in current tools and approaches, and we show how research under disciplines that have a long tradition in the analysis of interaction, and particularly healthcare sociolinguistics, can be translated and make a solid contribution to medical research and training. The aim of this paper is, accordingly, twofold: we (a) provide a brief overview of the limitations in current tools and approaches aiming to help healthcare professionals to develop 'communication skills' and (b) show in practice how sociolinguistic healthcare research can be translated and make a solid contribution to medical research and training. We draw on our ongoing work and we illustrate our core position through examples from a large project on the management of obstetric emergencies. The paper is organised in three parts: we start by reviewing existing communication models in the medical literature, turn to the role of sociolinguistic research in healthcare drawing on our study as an illustration, and conclude by translating our findings into teachable behaviours.

Communication models from a healthcare sociolinguistic lens

A brief review of widely used communication and information transfer models can succinctly illustrate the issues raised earlier. Contemporary attempts to systematise aspects of communication include the Relationship: Establishment, Development and Engagement (REDE) model of healthcare communication [11]; the Plain Language, Engagement, Empathy, Empowerment, Respect (PEEER) model of effective healthcare team-patient communications [12]; and the Begin with non-verbal cues, Establish information gathering with informal talk, Support with emotional channels, Terminate with positive note (BEST) communication model [13].
An increase is noted in models that attempt to codify more of the aspects involved in the way we interact, such as embodied cues. Gupta, for instance, places emphasis on non-verbal cues using the acronym SOFTEN: Smile, Open arms, Forward lean, Touch with arm, Handshake, Eye contact, and Nod [13]. Communication models draw on sound principles; however, they typically take a structural approach and do not account for the dynamics of interaction in practice, the relationship between each of their components, and the multiple forms they take in real practice. As teams interact, they manage the interactional floor in a dynamic way; each speaker creates the context and conditions for the next, and the interactants draw on their perception of what is expected, allowed and appropriate in their own setting. A holistic and nuanced approach to interactional accomplishment needs to feed into and help develop models with greater applicability in actual practice. As an illustration, recommendations such as the Establishment component of the REDE model, which includes 'build rapport' and 'negotiate and set agenda', are not interactionally straightforward; the linguistic behaviours and process by which those can be achieved are not, and cannot be, specified outside the context of specialties and of discrete individuals and teams that have their own expectations and historicities. In short, although models attempt to codify and break down aspects of interaction simultaneously, in order to achieve universal relevance (relevance across specialties), they typically remain descriptive at a high level. Healthcare sociolinguistics research has shown the importance of local factors, with the context and environment (material and social) of the teams playing a central role in the interaction process. Teams that work together over time are, and need to be, treated differently to ad hoc formations; multiprofessional teams are different to same-profession teams, and so on.
Abstract taxonomies cannot capture the dynamic nature of interaction in specific contexts and accordingly are limited in improving the understanding of processes and the in situ negotiation of good practice. Further on this, lack of discussion of the evidence that feeds into components of models, and, at times, of the accuracy of the claims, is an area where interdisciplinary work can bring immediate and direct benefits to the robustness of observations and the teachability of behaviours, as we show in this paper. Claims such as 'more than two-thirds of face-to-face conversation is based on body language' [13] perpetuate lay myths but are not supported by interactional research evidence. Note that if this claim were true, we would need a unit to measure meaning in verbal vs body language, which cannot exist, as the two are inseparable. It would also suggest that face-to-face interaction is richer than other forms, which cannot explain why our conversations on the phone or in the dark are also fully complete, without anything missing. Myth busting around communication, in all the meanings of the term, and solid evidence on how expert teams in different specialties and professional environments interact are urgently needed to improve models; more broadly, however, they are necessary in order to reframe 'communication' in the perception of medical professionals who have been trained to divorce interaction from their other practice. Detailed and systematic evidence from interaction analysis can also feed into widely known information sharing tools, notably SBAR; Introduction, Situation, Background, Assessment, Recommendation (ISBAR); Identify, Situation, Observations, Background, Agreed plan, Read back (iSoBAR); and so on, all of which aim at systematising intra-team interactions. These tools are shown to improve team performance.
However, they are not used consistently, and often not by the majority [14]; and when they are used, the gap in pinning down the exact linguistic behaviours involved in all the stages of those tools remains unaddressed. The same applies to tools that propose structures for managing the interactional floor in team interaction, such as closed-loop communication (CLC). Medical research on CLC [15] is very useful in corroborating the issues healthcare sociolinguistic research has raised. In more detail, CLC, in its basic form, proposes organising turns in stages [speaker 1 issues message; speaker 2 confirms message; speaker 1 follows up/closes the loop], and there is indeed evidence that the use of tools for structuring interaction in ritualised, and hence predictable, forms improves team performance [16]. The sequence of messages (directed, acknowledged, executed, confirmed) is also reflected in our data of good practice. Recent literature on CLC, however [17], suggests (a) that real-life CLC is substantially different to textbook CLC, with the latter being more explicit and structurally unnatural, and (b) that different groups and teams have different expectations 'regarding the content, timing, and generalised structure of information transfer and may not grasp the roles and priorities of other groups' (p. 5) [18]. These two points highlight the reason why improving team interaction needs to be context sensitive and applicable to real-life care. To sum up, we have argued that we need a different paradigm to study the nuances of team interaction and to propose models and training approaches. On that front, work in healthcare sociolinguistics and the associated methodologies has a lot to offer to medical research, medical training, and tools.
Although the medical encounter is accomplished in and through language as embodied practice [19], linguistic work and healthcare practice remain unbridged, with the former being 'conspicuously absent from the mainstream of medical education, health communication training, and even the medical or health humanities' (p. 1) [20]. There is currently a body of work moving in this direction, showing how linguistic approaches can improve our understanding of patients' lived experiences of chronic diseases [21], feed into communication training [22], and revise existing diagnostic tools [23]. Recently, Udvardi also looked at the role of linguistics in improving the evidence base of healthcare communication, underlining the importance of integrating qualitative linguistic analyses in future health communication research [24]. We return to this at the end of the paper. In closing the discussion here, all the tools we reviewed make a positive contribution by turning professionals' attention to the way they organise, communicate, and acknowledge activities in their team. However, in their current form, they remain focused on structural taxonomies, which can be improved by a nuanced and sophisticated understanding of healthcare teamwork across specialties. We show how a sociolinguistic lens can provide in-depth understanding in the next section.

The role of sociolinguistic research in providing evidence-based recommendations: a worked example

Context and methods

We report here on an observational study, which draws on a sub-set of video recordings from the Simulation & Fire-drill Evaluation (SaFE) study. The SaFE study was a randomised controlled trial of training for obstetric emergencies, which took place in six sites in the UK. The participating teams, 24 in total (with 140 participants overall), were video recorded managing eclampsia.
The teams, consisting of a senior doctor (SD), a junior doctor (JD), two senior and two junior midwives (SMs and JMs, respectively), did not know the nature of the emergency before entering the room. The scenario involved a patient-actor who was instructed to have a seizure for about 1 min, starting 1 min after the end of the first handover (for a detailed account of the SaFE design and methodology see Ellis et al., 2008; Siassakos et al., 2010) [4,25]. The data were analysed for clinical performance by medical professionals, while the interaction was analysed by healthcare sociolinguists. The researchers were blind to each other's findings. The clinical assessment of the teams was based on standard clinical criteria, the most important of which were found to be success in obtaining, preparing, and administering magnesium sulphate, and the time interval to the administration of the magnesium sulphate [4]; a six-level taxonomy was applied, differentiating between high clinical performance (magnesium administration in <5 min; 5–6 min; and >6 min) and poor clinical performance (magnesium not obtained; magnesium obtained but not drawn; magnesium drawn but not administered). In parallel, the data were analysed for the interactional dynamics and the ways in which the teams manage the interactional floor through an established sociolinguistic framework, namely interactional sociolinguistics (IS). IS focuses on the analysis of situated real-life encounters and connects the patterns to the organisational context within which professionals operate. The IS framework provides valuable methodological tools for exploring interactions between participants with varying degrees of institutional status and power. This makes it particularly appropriate for the study of ad hoc multidisciplinary obstetric teams, in which staff members with different backgrounds and from various seniority levels come together temporarily.
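The six-level taxonomy can be expressed as a simple decision rule; the function below is a hypothetical illustration of the classification logic described above, not a tool used in the SaFE study:

```python
def classify_performance(obtained, drawn, administered, minutes=None):
    """Map a team's magnesium-sulphate handling onto the six-level
    taxonomy (illustrative; levels as described in the text)."""
    if not obtained:
        return "poor: magnesium not obtained"
    if not drawn:
        return "poor: magnesium obtained but not drawn"
    if not administered:
        return "poor: magnesium drawn but not administered"
    # High clinical performance, graded by time to administration
    if minutes < 5:
        return "high: administered in <5 min"
    if minutes <= 6:
        return "high: administered in 5-6 min"
    return "high: administered in >6 min"

print(classify_performance(True, True, True, minutes=5.5))
# high: administered in 5-6 min
```

The ordering of the checks mirrors the taxonomy: the three "poor" levels are mutually exclusive stages of failure, and only fully administered cases are graded by time.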
Recent IS work also makes a case for the relevance of the framework for a critical study of professional interaction [26]. IS, in line with established approaches for interaction analysis, such as conversational analysis (CA), conceptualises space and speech as intertwined and interactively achieved. This is particularly relevant for the analysis here: interaction is understood as embodied performance and staff members use all verbal resources and the material space of the emergency room as part of doing their role. To illustrate our methodology and its appropriateness for identifying patterns and feeding into medical training, we zoom in on how teams manage tasks in their context. Task management is a key process in the emergency encounter and, more broadly, in the way in which teams deliver care and transfer responsibility and accountability in inter/intra-team handovers. For instance, task allocation is a core part of SBAR (under Recommendations) and the other widely used healthcare communication models [e.g., under Agreed plan in iSoBAR]. Previous work has indicated the significance of task management as a leadership function and demonstrated its link to performance [27,28]. This is directly relevant in our context, in which the effective management of eclampsia requires the coordination and synchronous performance of multiple tasks, including placement in the recovery position, administration of oxygen, sampling of venous blood, and the administration of magnesium sulphate [29]. Task management in the form of allocation and decision on task sequencing is critical for the management of an encounter and for decisions when clinical teams hand over responsibility. We pay particular attention to the role of the senior doctors, as they are usually (but not exclusively) the ones managing the team and initiating/coordinating the tasks. 
We show the systematicity of the patterns, the applicability of the findings, and the discrepancy with textbook approaches to team communication. We now turn to repositioning interactions in their situated context of time, place, and moment.

Interaction as embodied practice

The analysis of the video recordings through a multimodal lens has resulted in the identification of the following three core material zones in the obstetric room: (1) the area around the bed, and particularly the bedsides; (2) the equipment table; and (3) a zone out of the room. These are illustrated in Fig. 1 below, which depicts the obstetric room in which our teams work. By monitoring the position of each professional in the room every 30 s or so, as well as the key actions/tasks performed, we then mapped the use of each zone onto the various professional roles. The physical environment and the ways in which it is embodied by professionals constitute part of situational awareness, a vastly discussed concept in the medical literature under the umbrella term of human factors [30]. Despite the extensive discussion of human factors in healthcare, however, the term still remains ambiguous, with some of its aspects, such as the physical environment and interactions with equipment, being overlooked [31]; we demonstrate below our methodology for addressing some of these aspects in the data that follow. Our analysis yielded systematic patterns in regard to professionals' preferred material zones, which we have visualised in Fig. 2 below. As illustrated in Fig. 2, senior doctors control the centre of the room, positioning themselves around the bed, and primarily at the bedsides. Turning to the senior midwives, one of them acts mostly at the equipment table and the other at one of the bedsides, and, less frequently, they exit the room.
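The zone-mapping step described above, sampling each professional's position every 30 s and aggregating by role, can be sketched as a simple tally; the data format and role labels below are hypothetical:

```python
from collections import Counter

def zone_profile(observations):
    """Aggregate 30-second position samples into a per-role count of
    samples spent in each material zone (illustrative sketch)."""
    profile = {}
    for role, zone in observations:
        profile.setdefault(role, Counter())[zone] += 1
    return profile

# Hypothetical samples: (role, zone) pairs taken every 30 s
samples = [("SD", "bedside"), ("SD", "bedside"), ("SD", "bedside"),
           ("SM1", "equipment table"), ("JM1", "equipment table"),
           ("JM1", "out of room")]
profile = zone_profile(samples)
print(profile["SD"].most_common(1))  # [('bedside', 3)]
```

Each role's dominant zone then falls out of `most_common`, which is essentially how the patterns visualised in Fig. 2 can be derived from the raw position samples.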
As for the junior midwives, they exhibit a clear tendency to stay close to the equipment table and are the ones who exit the room most frequently in order to retrieve things, while one of them also maintains a bedside role, passing crucial information to the team. The junior doctors have not been included in Fig. 2, as they appear more fluid in the data; we have provided different readings on this in earlier work [32,33]. As follows, the senior doctors are the only ones consistently occupying material zones around the bed (hence closer to the patient). Positioning in the centre of the room thus also positions them in the centre of the action and is part of doing their professional role; this is illustrated by the ways in which senior leaders are expected to take this position, and deviations create interactional trouble, a term used to denote a breakdown in the management of the interactional floor. We elaborate on the significance of this trouble in light of the data below.

Looking into the work of medical teams from an interactional angle

Research in healthcare sociolinguistics has shown the systematicity in the sequence and design of linguistic structures in the process of managing a complex medical task. Teams that perform well, in terms of clinical outcomes, tend to follow the same interactional patterns across the encounters in our projects. Teams have been analysed interactionally for the use of common linguistic devices, such as questions, as well as smaller features, e.g., overlaps and interruptions. The relationship between the team leader's style and the teams' linguistic behaviour has been noted in the linguistic literature, and anecdotal observations are also found in medical studies. Our aim here is to show the implications for medical training on the basis of what IS tools can deliver. We use task management as our main angle for discussing the work of teams with high and low clinical performance.
We show the relationship between interactional trouble and overall performance in the data and make a case for the teachability of specific linguistic practices. The two examples below illustrate stable patterns in our dataset and are representative of the systematicity we note in the analysis.

Teams with high clinical performance

Excerpt 1 is drawn from a team with high clinical performance, in which staff members administered magnesium within 5–6 min. The senior doctor enters the room and, as soon as she is updated on what is happening, she manages the team by allocating and confirming tasks, as shown below. Her position at all times is in the zones that have been systematically associated with doing leadership and control in our data. In Instance 1, the senior doctor allocates/confirms the task of the magnesium's preparation by using a common and successful structure in our data, namely a yes/no question (are we getting the mag sulf sorted out[, lines 1–2). Yes/no questions are used by team leaders for issuing directives in our data, as such questions tend to restrict the respondents' possible uptake; to deviate from the senior doctor's directive here, staff members would have to produce a no-prefaced response, which would be considered a direct disagreement, which is rare and dispreferred in professional discourse. Although the senior doctor uses the collective pronoun we, she addresses certain staff members in an embodied way: in raising the question, she turns her torso and shifts towards the equipment table, which is the designated material space for the preparation of the magnesium sulphate; junior midwife 2 and senior midwife 2 are the only members standing there (not shown in Instance 1). The senior doctor's embodied behaviour successfully opens the floor to those two members, and in lines 3–6 junior midwife 2 and senior midwife 2 are the only members responding in the affirmative.
The use of 'Yeah' in turn-initial position (line 6) explicitly shows alignment with the previous turn in its canonical form. As soon as the senior doctor confirms the magnesium's preparation at the equipment table, she proceeds to allocate the task of its administration by raising, again, a yes/no question (lines 7–8). As shown in Instance 2, the senior doctor has returned to the right bedside and looks directly at the junior doctor, targeting her as the only addressee, while briefly raising her voice's volume and repeating part of the question. A brief raise in volume is an effective tool for claiming the floor, while repetition is a useful strategy for intensifying directives without this always involving full repetition of earlier turns (cf. textbook CLC examples); both strategies are consistently mobilised by senior doctors in the data. As soon as the senior doctor receives confirmation, she swiftly moves to the next task allocation, this time to junior midwife 1 (lines 11–12); the task allocation is, again, uttered with a rising intonation and targets a specific addressee both verbally (and you Zi) and in an embodied way, as she turns her gaze to junior midwife 1. In both instances, the management of the floor is successful, as in the next turn only the targeted addressees (JD in line 9 and JM1 in line 13) respond in the affirmative, without evident interactional trouble (i.e., interruptions, overlaps, delays, etc.). Moving on, Instance 3 is an illustrative case of how senior doctors mobilise the aforementioned strategies to control the floor and demonstrate leadership. In lines 22–25, the junior doctor and senior doctor overlap as the first raises an information-seeking question about the oxygen saturation (lines 22–23), while the latter asks the junior doctor to write down the magnesium sulphate count using, again, a yes/no question to allocate the task: OK can you write that[.
The junior doctor briefly continues fighting for the floor and attempts to re-introduce the topic of the oxygen with another incomplete question, in line 26; the senior doctor, however, interrupts her again, repeating her question (lines 27–28). This time, she also raises her voice's volume (CAN you write that down (.) mag sulf[) while she maintains eye contact with the junior doctor and makes a relevant gesture, pointing to the equipment table where the junior doctor should write down the count. The interruption, the yes/no question which normatively requires a positive response, the raised volume as a floor-taking mechanism, the repetition to intensify the directive, the eye contact, and the pointing gesture all contribute to the senior doctor successfully allocating the task, as the junior doctor finally quits her turn, responding in the affirmative, in line 29. Overall, what can be extracted from Excerpt 1 is the senior doctor's consistent use of questions for allocating (lines 7–8, 11–12, 25, 27–28) and confirming (lines 1–2) tasks. The format of the questions exhibits systematicity throughout the excerpt (and the whole dataset), too, allowing for the senior doctor's control of the floor and, ultimately, the situation; uttered in a yes/no format, such questions privilege a short, positive response, while, at the same time, the senior doctor briefly raises her voice's volume to manage the floor when required and uses repetition to intensify the directives. These task allocations normatively target specific members both verbally and in an embodied way (i.e., eye contact). In doing so, the senior doctor mostly positions herself at the right bedside, the team leader's identified material zone, which allows for an overseeing role (see Fig. 2), briefly moving to other material zones relevant to the requested tasks (Instance 1).
The team's uptake throughout the excerpt illustrates that team members recognise and re-affirm the senior doctor's dominance, as they swiftly correspond, addressing her requests (lines 3–6, 9, 13, 29), without evident interactional trouble. These are behaviours observed across teams with good clinical performance in the dataset. Equally, as illustrated below, teams with poor clinical performance consistently deviate from the above patterns, which further strengthens our case for the relationship between interactional and clinical performance; we elaborate on this in the discussion.

Evidence in teams with poor clinical performance

Excerpt 2 is drawn from a team with poor clinical performance, where staff members obtained the magnesium but did not prepare, and thus did not administer, it. To allow for a comparison with Excerpt 1, we focus again on the task allocation.

Excerpt 2.

Excerpt 2 begins with the senior doctor's attempt to allocate a task similar to the one in Excerpt 1 (lines 25–28): if we could write down the pressure. In contrast to the senior doctor's linguistic behaviour in Excerpt 1, however, the senior doctor here does not raise a straight yes/no question, while he also fails to target a specific member verbally or multimodally, using the collective pronoun we without making eye contact with anyone. In terms of his position in the room in relation to the identified material zones, the senior doctor stands in a peripheral zone, at the corner of the bed, maintaining some physical distance from the bed compared to the junior doctor and senior midwife 2, who occupy a central position at the bedsides (cf. Fig. 2). Note also the senior doctor's hesitation and minimisation of the impact of directives, in lines 1–2, including a string of short pauses and a hesitation marker (hm), as well as the use of 'if', which mitigates his directive. The 3-s pause in line 2 indicates the impact of the less assertive linguistic design.
Long pauses are rare in this emergency context, and here none of the present staff members takes responsibility for the requested task. In lines 3–4, senior midwife 1 steps in and allocates the task directly to junior midwife 1 in the following ways: she shifts closer to the junior midwife and points to the equipment table (zone 3 in Fig. 1), where the recording will take place, while directly talking to her and explicitly allocating the responsibility to her: you're in charge over there (lines 4–5). This attempt is successful, as junior midwife 1 immediately transitions to the equipment table to record the patient's blood pressure. Moving forward, in Instance 2, the senior doctor tries again to claim the floor by raising a question; as in the previous instance, though, his intervention includes hesitation markers (e:hm), elongated vowels (e:hm; she:), and a short pause, while he also retains his physical distance from the bed and has his arms crossed, a hand gesture prototypically associated with insecurity/defensiveness. Being away from the bed, for a senior doctor, limits the ability to monitor the centre of the action and also deviates from the professional expectations of where senior leaders stand. The combination of those factors creates the context for senior midwife 2, in line 9, to interrupt. The senior midwife gains the floor and raises an information-seeking question on the CTG; as soon as she receives the answer and confirms it, the senior doctor attempts again to re-introduce his question (line 13). Once again, his mitigation results in another interruption, again by senior midwife 2, who raises another information-seeking question, this time relevant to the foetal heart rate (line 14).
Overall, interruptions, particularly from junior to senior members, cause breakdowns in the interactional floor which, in turn, can hinder the information flow; uninterrupted information transfer is critical in the emergency context, with our findings consistently demonstrating that teams that control the interactional floor well also exhibit good task management and a strong clinical performance. The senior doctor appears to have difficulty in allocating tasks and managing his team; the lack of complete yes/no questions and the mitigation throughout his turns, combined with his body language and his position in a peripheral material zone, lead to uncertainty which filters through the team, documented in the team's long pauses (line 2), overlaps (lines 6, 9, 14), and interruptions (lines 9, 14). Zooming out from the examples, the pertinent matter is how we move from microanalysis to wider claims useful for (obstetric) teams at the frontline. First, team movement in their material space: team movement and verbalisation are inseparable ingredients of task management. Our data suggest that good teams have little movement out of their designated zones and display less agitation compared to weaker teams. Fig. 3 shows the recommended material zone for team leaders (in our case, senior doctors) handling obstetric emergencies, as emerged in the study of the SaFE data. Positioning in space is a resource for teamwork and role enactment, as we have shown; the patterns we reported here are consistent with our findings from an ongoing large ED project, providing a robustness that makes them a solid foundation from which to draw implications, moving from local to broader relevance. Moving further, the patterns we observe also translate to specific linguistic behaviours that are trainable and can enhance existing information-sharing modules and tools such as CLC.
In our work, our findings have been well received by clinical teams in obstetric and ED contexts, and the feedback shows that examples from everyday practice are powerful mechanisms for teams to relate and see the immediate difference in practice. Table 1 provides a succinct example of strategies that emerge from our data; the table provides a useful digest and can also be associated with making the use of information protocols such as SBAR more consistent. Our sociolinguistic work of the last 15 years corroborates the literature and indicates that training in structuring, sequencing, and designing interaction can provide valuable tools to professionals who operate under pressure in high-risk, high-stakes medical contexts. We discuss this further in the next and final section of the paper.

Discussion

Our research shows that teams do teamwork in and through an emplaced/embodied interactive practice and negotiate their roles and coordinate in situ. Our work [32,33] on the management of obstetric emergencies shows that teams with strong clinical performance tend to declare the emergency, do direct task allocation, maintain tight control of the floor, including only task-related (meaningful) movement, and articulate critical information for the stages of the emergency. Participants in teams with strong clinical results orient towards CLC forms of managing the encounter. However, they appear to favour shorter linguistic structures compared to textbook examples of what good communication looks like and to be more succinct. These findings extend our earlier work [29], which has shown that consistent use of tools that encourage the structuring of information in stable and hence recognisable sequences, such as SBAR and CLC, was associated with better team performance [16], also from a patient-actor perspective [34]. Further, we argued that separating clinical and interactional practice is artificial and damaging for the understanding of the latter.
The same applies to conceptualising interaction as 'soft' or different in nature to 'hard' skills. We have shown that interaction is technical and sequentially systematic; it involves detailed use of the verbal and material resources available to the interactants and can be improved through training. By controlling the interactional floor, the senior doctor in Excerpt 1 controls the team and its clinical outcomes, with the team scoring high in clinical efficacy. Equally, the senior doctor's trouble in managing the team and allocating tasks, in Excerpt 2, is an inseparable part of the team's lower clinical performance. Further on this, interactional trouble (Excerpt 2) never occurs in a vacuum; it is part of the work interactants do in a situated encounter. Although not all teams with low clinical performance go through interactional trouble, all the high-performing teams exhibit tight control of the interactional floor and smooth management of the tasks they need to distribute and carry out. We further demonstrated that the role of positioning in the material space is part of professional performance, and we made a case for shifting away from a verbal-only understanding of interaction, which is the dominant praxis in conceptualisations of 'communication' in the medical literature. As a turn to the role of the body in understanding medical teamwork is growing, it is an opportunity to reframe our understanding of medical 'communication' too. A holistic understanding of teamwork practice, involving interaction as an inseparable component, needs to become a core part of medical research and training. Current training approaches are typically based on narrow models and do not address the dynamics of interaction in situ. Models from other contexts, notably aviation, cannot provide us with a holistic understanding of the teamwork processes in obstetric emergencies, which is our focus, or in other specialties.
As we have argued elsewhere, training interventions for professionals need to draw on a systematic, ideally multimodal, analysis of interaction such as the one we have illustrated here [26]. A context-sensitive analysis of medical practice and a specific focus on the characteristics of the different specialties and settings is a necessary condition for the generalisability of models to be applied and tested. To conclude, on the basis of our current and earlier work, we propose a framework for implementing training interventions based on the analysis of performance in clinical practice, bringing together medical and healthcare sociolinguistic research (see Fig. 4). A progressive move from Department > Trust > National > Global contexts is useful in implementing findings and also in informing training programmes available for multiprofessional teams. The framework is a visual illustration of the process by which evidence-based interventions can be designed and delivered, and it also indicates the potential of the interventions to translate to training, be measured locally, and be introduced more widely. On this, the methodological framework we propose, IS, enables insights into the ways in which teams organise their work, establish a rhythm, and deliver their clinical tasks. It is a framework and a tool that is largely unknown to medical research but which has a lot to offer. As qualitative research is growing in healthcare research, innovating in methodology enables us to combine healthcare analysis with the tools of other disciplines that address relevant/complementary questions. This can provide more layers of meaning to the medical professionals' tools for organising and designing team management, ultimately enhancing clinical outcomes and, by extension, improving patient safety.

Summary

Good teamwork can improve patient safety as well as the patient experience. Unpacking the dynamics of interaction is a core part of this process.
To produce new knowledge, though, we need to examine clinical care further in real contexts (frontline, simulations, narratives) and analyse the data through various lenses to capture the complexity of team performance. We have argued that a joint medical and healthcare sociolinguistic research agenda can make a contribution to this complex phenomenon, and we hope further research will continue exploring team interaction in real life.

Declaration of competing interest

The authors have no conflicts of interest.

Practice points

Teamwork and team communication are intertwined and should not be treated separately.
The material space is a core part of interactional dynamics.
Interdisciplinary research is needed to understand the dynamics of team interaction.

Research agenda

Teamwork in real-life contexts needs to be further examined.
Healthcare sociolinguistic and medical research can complement research on teamwork.
Material place and interactional analysis need to be embedded in medical research.
Potential Production of Ethanol by Saccharomyces cerevisiae Immobilized and Coimmobilized with Zymomonas mobilis: Alternative for the Reuse of a Waste Organic

Fermentation technologies have been developed to improve the production of ethanol, and one alternative is immobilization technology, which offers the possibility of efficiently incorporating symbiotic bacteria in the same matrix. This study analyzes the potential use of immobilized and coimmobilized systems on beads of calcium alginate for ethanol production using mango waste (Mangifera indica) by Zymomonas mobilis and Saccharomyces cerevisiae, compared with free-cell culture, and evaluates the effect of glucose concentration on productivity in the coimmobilized system using a Chemostat reactor Ommi Culture Plus. For free-cell culture, the productivity was higher for Z. mobilis (5.76 g L⁻¹ h⁻¹) than for S. cerevisiae (5.29 g L⁻¹ h⁻¹), while in coimmobilized culture a higher productivity was obtained (8.80 g L⁻¹ h⁻¹) with respect to immobilized cultures (8.45–8.70 g L⁻¹ h⁻¹). The conversion of glucose to ethanol for the coimmobilized system was higher (6.91 mol ethanol) with 50 g L⁻¹ of glucose compared to 200 g L⁻¹ of glucose (5.82 mol ethanol), suggesting that the immobilized and coimmobilized cultures, compared with free cells, offer an opportunity for the reuse of organic residues and high alcohol production.

Use of agroindustrial waste in fermentation processes

Alcoholic fermentation is a process by which microorganisms convert hexoses, mainly glucose, fructose, mannose and galactose, in the absence of oxygen, into products such as alcohol (ethanol), carbon dioxide and adenosine triphosphate (ATP) molecules.
Approximately 70% of the energy is released as heat and the remainder is preserved in two terminal phosphate bonds of ATP, for use in transfer reactions such as the activation of glucose (phosphorylation) and of amino acids before polymerization. In other words, fermentation is a set of chemical reactions carried out by microorganisms in which an organic compound is partially oxidized in the absence of oxygen to obtain chemical energy; it is understood as a partial oxidation because not all the carbon atoms of the compound are oxidized to form CO₂. It is a process known since antiquity and is currently the only industrial process for the preparation of ethyl alcohol in all countries. Not only glucose is used as raw material, but also other, much cheaper, types of raw material. However, while alcoholic fermentation occurs naturally, originated by the activity of some microorganisms through their anaerobic energy cell metabolism, for a large-scale production process it is necessary for microorganisms (bacteria, fungi and yeasts) to accelerate the process of alcoholic fermentation and increase the conversion rate [1]. During the twentieth century and until the beginning of the twenty-first century, alcoholic fermentation has focused exclusively on the improvement of fermentation processes and specifically on the optimization of industrial performance through a good selection of yeast strains, which are the most used microorganisms for the production of ethanol by fermentation, due to their high productivity in the conversion of sugars and better separation of the biomass after fermentation. Yeasts are unicellular (usually spherical) microorganisms of size 2-4 μm and are present naturally in some products such as fruits, cereals and vegetables. Different species of fermentative microorganisms have been identified, among which are mainly Saccharomyces cerevisiae, Kluyveromyces fragilis, Torulaspora and Zymomonas mobilis [2]. S. cerevisiae is a unicellular organism that is able to follow two metabolic routes to obtain the energy necessary to carry out its vital processes: alcoholic fermentation and aerobic respiration. The first is characterized by the evolution of CO₂ and the production of ethanol out of contact with oxygen, obtaining the energy necessary for its vital processes from metabolizing carbohydrates. The yeast requires glucose to be catalyzed by glycolysis (the Embden-Meyerhof pathway) to obtain pyruvate, which is then converted anaerobically into ethanol and CO₂ by the action of specific enzymes. Its optimal growth temperature varies between 22 and 29°C, and it does not survive above 53°C. It ferments a sugar solution with a concentration of less than 12% and is inactivated when the sugar concentration exceeds 15%, due to the osmotic pressure of the medium on the cell. On the other hand, Z. mobilis is a facultative anaerobic gram-negative bacterium that can ferment certain sugars through a metabolic pathway producing bioethanol, sometimes more efficiently than yeasts. It has an incomplete Krebs cycle, but it is able to synthesize pyruvate from glucose or glyceraldehyde-3-phosphate. This organism also shows a high rate of sugar uptake and an ethanol yield as fuel of 97% [3]. The alcoholic fermentation processes using agroindustrial products present a great challenge, given the inconveniences that could arise when using raw material intended for human consumption or edible vegetable crops for the production of ethanol, and, on the other hand, the change in the use of land destined for the cultivation of vegetables towards producing ethanol and bioethanol, which would sometimes lead to deforestation, food shortages, an increase of desert regions and a greater inability of soils to retain water, thus disrupting the balance of the hydrological cycle [4].
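The fermentation pathway described above, one glucose molecule yielding two molecules of ethanol and two of CO₂, fixes a theoretical maximum mass yield of about 0.511 g ethanol per g glucose, against which reported conversions can be checked; the snippet below is a generic worked calculation, not taken from the study's data:

```python
# Molar masses in g/mol (standard values)
M_GLUCOSE = 180.16   # C6H12O6
M_ETHANOL = 46.07    # C2H5OH

def theoretical_ethanol(glucose_g):
    """Maximum ethanol mass (g) from complete fermentation of glucose:
    C6H12O6 -> 2 C2H5OH + 2 CO2."""
    return glucose_g / M_GLUCOSE * 2 * M_ETHANOL

# Theoretical yield factor: ~0.511 g ethanol per g glucose
print(round(theoretical_ethanol(1.0), 3))    # 0.511
# From a 50 g/L glucose medium: at most ~25.6 g/L ethanol
print(round(theoretical_ethanol(50.0), 1))   # 25.6
```

Real cultures fall below this ceiling because part of the sugar is diverted to biomass and by-products, which is why the 97% yield cited for Z. mobilis is expressed as a fraction of this theoretical maximum.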
On a global scale, the use of food raw materials for energy purposes and in the production of ethanol has led to higher prices for products such as maize and barley, as well as making ethanol production economically unviable. It is therefore important to use raw materials that do not compete with food products and that are low cost; biofuel projects must also ensure good profitability and be environmentally sustainable. In the energy sector, it has been estimated that using all of the world's food surpluses would produce only enough bioethanol to replace 1% of the oil currently consumed, so if food crops were used to produce ethanol, a chain of food imbalances would be generated that would be unsustainable [5]. An alternative for producing ethanol is the use of other, nontraditional raw materials that arise as by-products and/or waste from industrial processes. New technologies have been proposed that allow the production of ethanol from cane residues, solid waste, and materials containing cellulose and hemicellulose, permitting the revalorization of waste from various industries by converting it into raw material for the production of ethanol. At present, efforts have concentrated mainly on the search for cheap raw materials to replace the traditional ones, in order to achieve greater efficiency in the processes of fermentation, recovery and purification of the alcohol produced. The main interest in bioethanol production is competition with fossil fuels, since ethanol can be used as a motor-vehicle fuel, increasing the octane number and thereby reducing consumption and contaminants (10-15% less carbon monoxide and hydrocarbons). Ethanol can be blended with unleaded gasoline at 10 to 25% without difficulty, and some engines have even run on 100% alcohol as fuel.
Thus, ethanol could substitute for methyl tert-butyl ether (MTBE), the oxygenate with which gasolines have been reformulated in Mexico since 1989 and which has reduced CO2 emissions. This substitution is important because MTBE is a very stable compound, with low degradability and high solubility in water, which makes it a persistent contaminant. The production of bioethanol lost importance at the end of the first half of the twentieth century, being replaced by the production of synthetic ethanol from petroleum derivatives, which is cheaper but cannot be used in food preparation, alcoholic beverages or medications. The rise in oil prices turned attention back toward the fermentation route of ethanol production, and today work centers mainly on the search for cheap raw materials to replace the traditional sugary materials. Studies carried out by different researchers suggest that by-products of mango juice, cane juice and molasses are an efficient alternative for the production of ethanol that does not affect the food supply, besides increasing the productivity and concentration of ethanol in the fermentation medium and therefore reducing the costs of ethanol production [6]. Historically, the sugar industry in Mexico is one of the most important, characterized by sugarcane harvests throughout the year, with a cane production of 46,231,229 tons per year; the residue derived from sugarcane has been exploited as energy biomass and for the production of different biotechnological products by fermentation. Other raw-material alternatives are mango juice and its residues; the fruit is grown in all the countries of Latin America, with Mexico the main exporting country, producing approximately 1,452,000 tons of mango annually, of which more than 60% comes from the South-Southeast region of the country.
The alternative of using residues or products that can replace the raw materials normally used in ethanol production is now a highly promising possibility, because the cost of ethanol production is closely related to, and dependent on, the cost, volume and composition of the raw material. Mexico's existing economy around cane cultivation (experience and sugar tradition) and around the export of mango varieties offers technological alternatives that allow the fermentation of cane juice, molasses and mango juice by S. cerevisiae and Z. mobilis as viable routes for the production of ethanol, whether for the manufacture of alcoholic beverages or for the production of biofuels. Immobilization technology Research has been carried out with the aim of increasing the productivity of alcoholic fermentation processes. Productivity, expressed as grams of ethanol produced per hour per unit of fermentation volume, can be increased by optimizing the composition of the culture medium, by selecting an appropriate microorganism strain, or by adapting the design of the reactors [7]. One challenge today is to reduce ethanol production costs; one alternative is to reduce the cost of the culture media, which can represent about 30% of the final production cost of ethanol [8]. Several fermentation technologies have been developed to improve the production of ethanol and its concentration in the culture media [9,10]. Among these, immobilization technology offers advantages over free-cell cultures, such as increased retention time in bioreactors, high cellular metabolic activity, high cell load and protection of the cells from stress [10,11]. Cell immobilization technologies have been applied for different purposes, such as the production of hydrogen [12] and of compounds used commercially in the food industry [13].
Other studies have used immobilized algal cells to remove nutrients (N and P), phenol and hexavalent chromium from wastewater [14][15][16][17]. Similarly, immobilized Zymomonas and Saccharomyces have been used for bioethanol production from waste materials [7][8][9][10][18][19][20]. Immobilization technology also provides the possibility of efficiently incorporating symbiotic bacteria [21,22]. The interaction between two microorganisms in the same matrix is called coimmobilization, and this association can be positive, with higher growth and production. However, relatively few applications in ethanol production involve immobilized mixed-culture systems and/or coimmobilized cultures. In a situation of petroleum scarcity, bioethanol from yeast and bacterial fermentation has become a promising alternative fuel source. Agricultural and industrial wastes containing sugar, starch and cellulose, such as cassava peels, fruit bunches, and the effluents from sugar and pineapple cannery production, have been successfully applied to bioethanol production [23,24]. In this context, the municipality of Ciudad del Carmen, Campeche, Mexico, has about 2,868 ha under annual mango (Mangifera indica) cultivation, worked through various forms of cultivation and orchard-based technology, but the lack of a local market and poor product distribution to other locations cause much of the product to be wasted, with significant losses for the locality. Hence the need to seek alternatives that use these wastes and generate added value for the economy of the region. The aim of this study was to determine whether the association of S. cerevisiae coimmobilized with Z. mobilis improved growth and ethanol production in a culture medium equivalent to mango juice (M. indica), creating an opportunity to exploit a regional fruit in the production of ethanol.
In this study, both microorganisms were confined in small alginate beads, a practical means of using microorganisms for environmental applications. Microorganism and medium The yeast strain S. cerevisiae (ATCC® 2601) and the bacterium Z. mobilis (ATCC® 8938) were obtained from the Microbiologis® laboratory and used for fermentation in coimmobilized and immobilized systems. Both microorganisms were cultured in a medium containing (in g L−1), as described by Demirci et al. [25]: 20 g glucose, 6 g yeast extract, 0.23 g CaCl2•2H2O, 4 g (NH4)2SO4, 1 g MgSO4•7H2O and 1.5 g KH2PO4, previously sterilized by autoclave. Strains were maintained in 250 mL of culture at 30°C and pH 4.5, with manual shaking three times a day. Transfers to fresh medium were made every 24 h for three consecutive days prior to use in experiments. Preparation of immobilized and coimmobilized cells Immobilized cells were prepared using the technique described by Tam and Wong [26]. Both microorganisms were harvested by centrifugation at 3500 rpm for 10 min. The bacterial and yeast cells were resuspended in 50 mL of distilled water to form a concentrated cell suspension. The suspension was then mixed with a 4% sodium alginate solution at a 1:1 volume ratio to obtain a 2% microorganism-alginate suspension. The mixture was transferred to a 50-mL burette, and drops formed as it was "titrated" into a 2% calcium chloride solution. This method produced approximately 6500 uniform beads of approximately 2.5 mm in diameter for every 100 mL of the microorganism-alginate mixture, with a biomass content of 0.0055 g bead−1 for Z. mobilis-alginate and 0.00317 g bead−1 for S. cerevisiae (Figure 1). The beads were left to harden in the CaCl2 solution for 4 h at 25 ± 2°C and then rinsed with sterile saline solution (0.85% NaCl) and subsequently with distilled water.
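The bead figures quoted above (about 6500 beads of roughly 2.5 mm diameter per 100 mL of cell-alginate mixture) can be given a quick geometric consistency check; this is a sketch using the sphere-volume formula, not a calculation from the paper:

```python
import math

# Volume of one alginate bead of 2.5 mm diameter, in mL (1 cm^3 = 1 mL).
diameter_mm = 2.5
radius_cm = diameter_mm / 20.0                      # mm -> cm, then halve
bead_vol_ml = (4.0 / 3.0) * math.pi * radius_cm**3  # ~8.2 microlitres

# Gel volume accounted for by the ~6500 beads reported per 100 mL of mixture.
total_gel_ml = 6500 * bead_vol_ml
print(f"volume per bead: {bead_vol_ml * 1000:.1f} uL")
print(f"gel volume in 6500 beads: {total_gel_ml:.0f} mL")
```

The ~53 mL of gel in 6500 beads is the right order of magnitude for a 100 mL mixture dripped through a burette, with the remainder plausibly lost to handling and to water expelled during hardening.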
A concentration of 2.6 beads mL−1 of medium (equivalent to a 1:25 bead:medium v/v ratio) was placed in a Chemostat Ommi Culture Plus (Virtis) containing 2 L of culture medium. The reactor was kept stirred at 120 rpm and 30°C (Figure 2). A similar procedure was used for coimmobilization, except that the bacterial concentrate (25 mL) and the yeast concentrate (25 mL) were first combined and then mixed with 50 mL of alginate; this procedure kept the cell concentration the same in all experiments. Experimental setup and procedure This study was divided into two parts: (1) a batch experiment evaluating growth and ethanol production in a medium equivalent to mango juice in free-cell, immobilized and coimmobilized cultures, and (2) an evaluation of the effect of glucose concentration on ethanol production in the system selected in the first experimental part on the basis of the ethanol productivity obtained. Fermentation was performed in a Chemostat Ommi Culture Plus (Virtis) with a working volume of 2 L, with stirring adjusted to 120 rpm and the temperature maintained at 30°C. The medium equivalent to mango juice was similar to that described by Demirci et al. [25], with the composition adjusted to a glucose concentration of 200 g L−1, equivalent to that observed in mango juice (M. indica). The experimental design consisted of triplicate cultures in the Chemostat Ommi Culture Plus reactor for S. cerevisiae and Z. mobilis in free-cell, immobilized and coimmobilized culture. For each experiment, biomass and samples of the culture medium were collected every 20 h until the end of the logarithmic phase. For the determination of ash-free dry weights, five beads were dissolved and filtered through a GF-C glass fiber filter (2.5 cm diameter) previously rinsed with distilled water and incinerated at 470°C for 4 h.
The samples were dried at 120°C to constant weight for 2 h in a conventional oven and then placed in a muffle furnace at 450°C for 3 h. The soluble solids of each fermentation medium were determined every 20 h by taking a 1 mL aliquot from each reactor and measuring the Brix level with a refractometer. Ethanol content (% v/v) was obtained with an Anton Paar DMA 4100M instrument, which determines the density of the mixture relative to the OIML-STD-90 standard and from it the ethanol content of the distillate (% v/v); from the recorded ethanol density it was possible to obtain an approximate ethanol content (grams of ethanol per liter of culture) for each experiment. Prior to the determination of ethanol content, the cultures were distilled at small scale in a four-plate PS-DA-005/PE plate-column distiller. The cooling water flow was 3 L h−1 at 15°C. An aliquot of 3 L was distilled for 4 h, maintaining atmospheric pressure, no reflux, and a temperature ramp in the heating jacket from 30°C up to 80°C. STATISTICA 7.0 software was used for the statistical analysis, and the mean and standard deviation were calculated for each treatment. Analysis of covariance (ANCOVA) with P ≤ 0.05 was used to evaluate growth in free-cell, immobilized and coimmobilized cultures, and the Tukey test (P ≤ 0.05) was applied when significant differences were observed. Growth In free-cell cultures, growth was observed immediately after inoculation into the 2 L reactor. The growth kinetics show an exponential phase of 120 h for S. cerevisiae and Z. mobilis. After this culture period, both species showed a decline in biomass production, and the treatment was ended after 200 h of culture. The maximum biomass concentrations were 14.18 and 11.80 g L−1 dry weight for S. cerevisiae and Z. mobilis, respectively.
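Specific growth rates (μ) of the kind reported for these cultures are conventionally estimated from biomass readings in the exponential phase, where X(t) = X0·e^(μt). A minimal sketch of that estimate; the biomass values below are illustrative inputs, not the study's raw data:

```python
import math

def growth_rate(x0: float, x1: float, dt_days: float) -> float:
    """Specific growth rate (per day) from two exponential-phase
    biomass readings x0 and x1 taken dt_days apart:
    mu = ln(x1/x0) / dt."""
    return math.log(x1 / x0) / dt_days

# e.g. biomass rising from 2.0 to 14.18 g/L over 5 days:
mu = growth_rate(2.0, 14.18, 5.0)
print(f"mu = {mu:.3f} per day")
```

In practice μ is usually taken as the slope of ln(biomass) versus time over several exponential-phase points rather than from a single pair of readings.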
Both microorganisms grew satisfactorily under the culture conditions used in this study (Figure 3A), with a higher growth rate (μ) for S. cerevisiae (0.0547 d−1) than for Z. mobilis (0.0418 d−1). The growth rates of the two microorganisms in free-cell culture were not significantly different (P ≥ 0.05). As immobilized cells, both the yeast and the bacterium grew immediately after the beads were added to the culture medium; in both treatments the exponential growth phase peaked at 80 h. It is noteworthy that, although both microorganisms were immobilized by the same procedure, the biomass content per bead at the start of treatment was lower for Z. mobilis (0.0031 g bead−1) than for S. cerevisiae (0.0039 g bead−1). Despite these differences, both microorganisms tolerated immobilization (Figure 3B), reaching maximum biomass contents of 0.0055 and 0.0047 g bead−1 for S. cerevisiae and Z. mobilis, respectively. With respect to growth, Z. mobilis showed a higher growth rate (0.142 d−1) than S. cerevisiae (0.106 d−1), and this difference was significant (P ≤ 0.0001). Glucose-substrate removal The decrease in substrate showed significant differences (P ≤ 0.0001) between treatments with free and immobilized cells for both species (Figure 4). However, the Tukey test showed that the two species in free culture were not significantly different (P > 0.05) over 200 h of treatment, while among the immobilized and coimmobilized cultures only the immobilized Z. mobilis showed no significant difference (P = 0.245) in substrate removal from the coimmobilized system during 140 h of culture (Figure 4B). Substrate consumption was greater in free culture, with S. cerevisiae and Z. mobilis reducing glucose from 200 to 80 g L−1 (60% removal) after the 200-h treatment period (Figure 4A), compared to the immobilized system with 40% removal for S.
cerevisiae (from 200 to 120 g L−1) and 30% removal for Z. mobilis (from 200 to 140 g L−1), while in the coimmobilized cell cultures consumption went from 200 to 130 g L−1 (35% removal) (Figure 4B). Analysis of average consumption, based on removal rates determined during exponential growth, showed that in free culture S. cerevisiae and Z. mobilis reached removal rates of 2.0 and 2.7 g substrate per g biomass per day, respectively. This suggests greater productivity for the bacterium (5.76 g h−1) than for the yeast (5.29 g h−1) (Table 1). In cultures with immobilized cells, the removal rate during the exponential phase (80 h) was greater for S. cerevisiae (0.165 g substrate per g biomass per day) than for Z. mobilis (0.056 g substrate per g biomass per day), but it was greater still in coimmobilized culture (0.235 g substrate per g biomass per day), since both species contribute to reducing glucose and increasing the removal rate. Similar results were observed for productivity, where the coimmobilized cell culture showed higher values (8.80 g L−1 h−1) than the immobilized cells of S. cerevisiae (8.45 g L−1 h−1) and Z. mobilis (8.70 g L−1 h−1) (Table 1). In general, the highest productivity levels were recorded in coimmobilized and immobilized cultures rather than in free-cell cultures, because of the shorter ethanol production time (80 h versus 120 h in free culture). Effect of initial concentration of glucose on ethanol production Most fruit-based culture media may contain a high concentration of fiber solids, which causes mixing problems in the reactor and consequently poor contact between cells and substrate; mango juice is no exception. In this study, we evaluated the growth and alcohol production of Z. mobilis coimmobilized with S. cerevisiae in cultures with 200 and 50 g L−1 of substrate in the equivalent medium. For the coimmobilized Z. mobilis and S.
cerevisiae within alginate beads, an immediate increase in biomass content was observed. Although the biomass content differed between the two treatments, the alcohol content produced showed no significant differences at the P ≤ 0.05 level with respect to glucose concentration. However, the uptake rate declined as the glucose content in the reactor decreased (Table 2). The highest uptake rate occurred at a glucose concentration of 200 g L−1 (0.235 g substrate per g biomass per day) with 76.5% removal, compared to 50 g L−1 glucose (0.08 g substrate per g biomass per day). Although alcohol production was similar in both treatments, the ratio of moles of ethanol produced per mole of glucose consumed was higher in the 50 g L−1 glucose cultures, at 6.91, than in the 200 g L−1 glucose cultures, at 5.82 (Table 2). Similarly, higher productivity was obtained at the lower glucose concentration (8.85 g L−1 h−1) than in the medium with high glucose content (8.80 g L−1 h−1). Growth and productivity To evaluate their capacity for growth in free and immobilized culture, the two microorganisms, yeast and bacterium, were subjected to the same culture conditions (Figure 3A). In free culture, the yeast S. cerevisiae showed a higher cell density and specific growth rate (0.0547 d−1) than the bacterium Z. mobilis (0.0418 d−1) over a treatment time of 120 h.

Table 1. Uptake rate, productivity (Y) and moles of ethanol produced per mole of glucose for Saccharomyces cerevisiae and Zymomonas mobilis in free, immobilized and coimmobilized culture. a,b,c indicate significant differences (P ≤ 0.05).

Immobilized systems are known to have a greater capacity for cell growth and high metabolic activity [27,28], which is consistent with the results obtained in this study. The results showed higher growth rates for immobilized Z. mobilis (0.142 d−1) and S.
cerevisiae (0.106 d−1) compared with free-cell cultures, suggesting that immobilization did not impair growth in either microorganism and favorably increased biomass content. Furthermore, the high activity of the immobilized cells was reflected in a decrease of substrate over a shorter treatment time (80 h) than in free-cell cultures (120 h). The short treatment time for immobilized cells could be attributed to the increase of biomass within the beads and the consequent immediate decline of substrate; however, an increasing cell population within the beads can limit the access of cells located at the center of the beads to nutrients, causing a decrease in cellular activity [28,29]. Another factor that probably favors the rapid decline in cell density is the production of CO2 as a result of fermentation activity. Studies suggest an adverse effect of CO2 gas: if CO2 diffusion is slower than its production, the gas accumulates inside the alginate beads [30]. In this study, CO2 was observed in the reactor as bubbles attached to the surface of the beads, suggesting that the escape of CO2 gas during the first 80 h was not a factor inhibiting growth and alcohol production; after this time, however, gas saturation in the reactor was probably high, hindering CO2 diffusion. This, coupled with limited nutrient transport and the subsequent inhibition of the microorganisms, may explain why glucose consumption was lower than with free cells (Table 1). In free-cell culture in particular, the lower percentage of alcohol obtained by the yeast as the fermentation progressed is commonly related to the high concentration of ethanol in the solution, which may inhibit metabolism and decrease efficiency [31], unlike the bacterium Z. mobilis [30].
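The removal percentages and volumetric rates compared in these results are simple arithmetic on the initial and final glucose readings. A sketch of the conventional definitions, applied to the free-cell figures quoted in the text:

```python
def removal_percent(s0: float, s_final: float) -> float:
    """Percentage of substrate removed between initial (s0) and
    final (s_final) concentrations, both in g/L."""
    return 100.0 * (s0 - s_final) / s0

def volumetric_rate(s0: float, s_final: float, hours: float) -> float:
    """Grams of substrate consumed per litre per hour."""
    return (s0 - s_final) / hours

print(removal_percent(200, 80))    # free cells, 200 -> 80 g/L: 60.0 %
print(removal_percent(200, 120))   # immobilized, 200 -> 120 g/L: 40.0 %
print(f"{volumetric_rate(200, 80, 200):.2f} g/L/h over 200 h")
```

The per-biomass uptake rates in Table 1 follow the same pattern but divide the consumed substrate by the biomass present, giving g substrate per g biomass per day.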
In the present study, the lower biomass produced by the bacterium (0.0047 g bead−1) compared with the yeast (0.0055 g bead−1) may be advantageous from the standpoint of waste generation. Similar observations were reported by Amin and Verachtert [9] for Z. mobilis and Saccharomyces bayanus immobilized in carrageenan, with 5.6 and 9.9 g L−1, respectively. Ethanol production was evidently not inhibited in the immobilized or coimmobilized systems, which even showed higher productivity than free cells (Table 1), suggesting that they convert sugar more efficiently with respect to time. Krishnan et al. [19] reported lower productivity for Z. mobilis immobilized in carrageenan (1.6 g L−1 h−1) than that obtained in this study (8.7 g L−1 h−1); this difference may be attributed to the lower glucose content of their culture medium, 32 g L−1, compared with the 200 g L−1 used in the present study. Interestingly, the immobilized systems showed a higher substrate conversion, 4.42 and 6.29 moles of ethanol per mole of glucose for the yeast and the bacterium, respectively, than that obtained with free cells, 2.7 to 3.3 moles of ethanol per mole of glucose. In general, treatments with immobilized cells showed a higher ethanol output per mole of glucose than that reported by Amin and Verachtert [9] for Z. mobilis and S. bayanus immobilized in carrageenan, with values of 1.8-1.9 moles of ethanol produced per mole of glucose consumed. Gunasekaran et al. [32] and Krishnan et al. [19] suggest that Z. mobilis is a good candidate for alcohol production, with approximately 1.9 moles of ethanol per mole of glucose; similarly, Rogers et al. [33] reported that the specific productivity of ethanol (g ethanol g−1 biomass dry weight) is greater for Zymomonas than for Saccharomyces uvarum.
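The molar conversion ratios compared above follow from the gram amounts of ethanol produced and glucose consumed, via standard molar masses. A sketch of the conventional calculation (the input values in the example are illustrative, not data from the study):

```python
# Standard molar masses, g/mol.
M_GLUCOSE = 180.16
M_ETHANOL = 46.07

def mol_ratio(ethanol_g_per_l: float, glucose_consumed_g_per_l: float) -> float:
    """Moles of ethanol produced per mole of glucose consumed."""
    return (ethanol_g_per_l / M_ETHANOL) / (glucose_consumed_g_per_l / M_GLUCOSE)

# Example: 40 g/L ethanol from 78.2 g/L glucose consumed is essentially
# the stoichiometric maximum of 2 mol ethanol per mol glucose; ratios
# reported above 2 would imply a different normalisation of the figures.
print(f"{mol_ratio(40.0, 78.2):.2f} mol ethanol / mol glucose")
```

This makes the comparison between treatments independent of the working volume and of how much glucose was initially charged.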
According to the results, immobilization and coimmobilization exhibited a lower uptake rate than free cells, showing that less substrate was consumed (Table 1). Nevertheless, productivity was greater, which indicates that it is possible to obtain a high alcohol content with a lower substrate requirement, though with the disadvantage of residual glucose in the medium; this problem can be solved with sequenced systems, as suggested by Demirci et al. [25]. Another way of solving this problem is to increase the cell number or inoculum size within the reactor: a higher number of cells could take up more substrate (glucose) and eventually consume it completely. However, Siripattanakul-Ratpukdi [10] reported that different yeast cell loads give the same substrate reduction (>90%) at the end of a 10-h treatment period. The low glucose reduction observed in alginate beads in this study can be attributed to the decline in cell density, but it is likely that substrate diffusion was also hindered. Studies have reported adsorption of substrate by the matrix in the first hours of treatment, with a possible decrease of substrate diffusion into the matrix in a continuous process [34]. On the other hand, Robinson et al. [35] suggest that the diffusion rate within the alginate matrix depends on the concentration gradient between the culture medium and the matrix; that is, when the nutrient concentration in the culture medium decreases, the diffusion rate into the matrix decreases, and with it the removal rate. In this study, during the first hours of treatment the matrix probably became partially saturated with substrate (glucose), because the substrate decreased throughout the culture time and the growth of both microorganisms was continuous.
Clearly, the immobilized cell system successfully decreased glucose both by adsorption onto the matrix (immobilized glucose) and by biodegradation (bioconversion of glucose), with biodegradation being the main process. This suggests that the main factor limiting glucose removal may have been the high concentration of CO2 in the reactor. Coimmobilization of Z. mobilis and S. cerevisiae at different glucose concentrations The biomass content in the alginate beads shows that a high glucose concentration (200 g L−1) leads to a rapid decrease compared with cultures at a low glucose concentration (50 g L−1). This confirms that a high glucose concentration saturates the beads faster, reducing diffusion between the beads and the culture medium; consequently, the diffusion of the CO2 produced can also be reduced, leaving it trapped inside the bead and causing a decrease in growth and substrate consumption. Conversely, a low substrate concentration in the culture medium allows gentle transport and substrate accumulation within the matrix, permitting proper consumption and growth of the bacterium and the yeast. Therefore, a low substrate concentration may actually increase alcohol production with minimal residual glucose, reaching values of 6.91 moles of ethanol per mole of glucose, compared with a high glucose concentration (Table 2). Previous studies in our laboratories have shown that, in a coimmobilized system, the fermentation of mango juice can yield 1.4 L of alcohol (79% v/v ethanol) for every 3 L of mango juice. Conclusions The present study has shown the potential of coimmobilized systems for the production of ethanol. The association of Z. mobilis and S. cerevisiae was positive, giving a higher ethanol content and higher substrate conversion than free and immobilized cells.
In general, immobilization technology offers an alternative that increases productivity and substrate conversion compared with free-cell culture systems. In the present study, the immobilized systems showed a high conversion capacity, yielding a high alcohol content with a lower substrate requirement. Possible substrate inhibition was not a factor affecting cell growth in either organism; the immobilized cell system clearly reduced glucose both by matrix adsorption (immobilized glucose) and by biodegradation (bioconversion of glucose), with biodegradation being the main process. This suggests that the main factor limiting further growth was the high concentration of CO2 in the reactor. Furthermore, although no significant differences in alcohol content were detected in immobilized culture in diluted medium, the conversion of glucose to ethanol was greater in media with a glucose concentration of 50 g L−1. For practical purposes, it is desirable that the fermentation of organic waste be performed with dilutions, to increase the homogeneity of the alginate beads within the reactor and consequently allow the diffusion of CO2 and substrate through the beads.
Use of HIV pre-exposure prophylaxis among men who have sex with men in England: data from the AURAH2 prospective study Background Since October, 2017 (and until October, 2020), pre-exposure prophylaxis (PrEP) has only been available in England, UK, through the PrEP Impact Trial, by purchasing it from some genitourinary medicine clinics, or via online sources. Here we report changes from 2013 to 2018 in PrEP and postexposure prophylaxis (PEP) awareness and use among HIV-negative gay, bisexual, and other men who have sex with men (MSM) and assess predictors of PrEP initiation. Methods In the prospective cohort study Attitudes to, and Understanding of Risk of Acquisition of HIV 2 (AURAH2), MSM were recruited from three sexual health clinics in England: two in London and one in Brighton, UK. Men were eligible if they were aged 18 years or older and HIV-negative or of unknown HIV status. Participants self-completed a baseline paper questionnaire at one of the three clinics between July 30, 2013, and April 30, 2016, and were subsequently able to complete 4-monthly and annual online questionnaires, which were available between March 1, 2015, and March 31, 2018, and collected information on sociodemographics, health and wellbeing, HIV status, and sexual behaviours. PrEP and PEP use in the previous 12 months was obtained at baseline and in annual questionnaires. We assessed trends over calendar time in 3-month periods from first enrolment to the end of the study period (July–December, 2013, was counted as one period) in use of PrEP and PEP using generalised estimating equation logistic models. We used age-adjusted Poisson models to assess factors associated with PrEP initiation among participants who reported never having used PrEP at baseline. 
Findings 1162 men completed a baseline questionnaire, among whom the mean age was 34 years (SD 10·4), and of those with available data, 942 (82%) of 1150 were white, 1076 (94%) of 1150 were gay, and 857 (74%) of 1159 were university educated. 622 (54%) of 1162 men completed at least one follow-up online questionnaire, of whom 483 (78%) completed at least one annual questionnaire. Overall, PrEP use in the past year increased from 0% (none of 28 respondents) in July to December, 2013, to 43% (23 of 53) in January to March, 2018. The corresponding increase in PrEP use among men who reported condomless sex with two or more partners was from 0% (none of 13 respondents) to 78% (21 of 27). PEP use peaked in April to June, 2016, at 28% (41 of 147 respondents), but decreased thereafter to 8% (four of 53) in January to March, 2018. Among 460 men who had never used PrEP at baseline, predictors of initiating PrEP included age 40–44 years (incidence rate ratio [IRR] 4·25, 95% CI 1·14–15·79) and 45 years and older (3·59, 1·08–11·97) versus younger than 25 years; and after adjustment for age, recent HIV test (5·17, 1·89–14·08), condomless sex (5·01, 2·16–11·63), condomless sex with two or more partners (5·43, 2·99–9·86), group sex (1·69, 1·01–2·84), and non-injection chemsex-related drugs use (2·86, 1·67–4·91) in the past 3 months, PEP use (4·69, 2·83–7·79) in the past 12 months, and calendar year (Jan 1, 2017, to March 31, 2018 vs July 30, 2013, to Dec 31, 2015: 21·19, 9·48–47·35). Non-employment (0·35, 0·14–0·91) and unstable or no housing (vs homeowner 0·13, 0·02–0·95) were associated with reduced rates of PrEP initiation after adjustment for age. About half of PrEP was obtained via the internet, even after the PrEP Impact trial had started (11 [48%] of 23 respondents in January to March, 2018). Interpretation PrEP awareness and use increased substantially from 2013 to 2018 among a cohort of MSM in England. 
Improving access to PrEP by routine commissioning by National Health Service England could increase PrEP use among all eligible MSM, but should include public health strategies to target socioeconomic and demographic disparities in knowledge and use of PrEP.

Funding
National Institute for Health Research.

Introduction
The PROUD study, an open-label randomised controlled trial carried out at 13 sites in England, UK, in 2015, reported that daily oral pre-exposure prophylaxis (PrEP) with tenofovir-emtricitabine resulted in an 86% reduction in HIV infection in gay, bisexual, and other men who have sex with men (hereafter referred to as men who have sex with men [MSM]). 1 A subsequent modelling study has shown that the introduction of a PrEP programme for MSM in the UK would be cost-effective and possibly cost-saving in the long term. 2 In England, PrEP is freely available to people at risk of HIV only in the context of the PrEP Impact trial by Public Health England that launched across England in October, 2017. All trial participants will get National Health Service (NHS) England funded PrEP until at least October, 2020 (the end of the trial). Otherwise, people can legally purchase PrEP for their own use via the internet or from some genitourinary medicine clinics. A nationally commissioned PrEP programme for England has been agreed and should be operational by the end of the PrEP Impact trial. Among gay and bisexual men in the UK, modelling of available data suggests that the estimated annual number of new HIV infections has decreased by 71%, from 2800 in 2012, to 800 in 2018. 3 The annual number of HIV diagnoses recorded among gay and bisexual men in the UK has also decreased by 35%, from 3480 in 2014, to 2250 in 2018. 3 Based on a CD4 cell count back-calculation model, the modelled number of incident infections among gay and bisexual men in England has decreased by 65% since 2014, with the most rapid decrease occurring after 2016.
3 A combination of PrEP scale-up, a large increase in ever and repeat HIV testing, and rapid antiretroviral therapy initiation at diagnosis are most likely responsible for these steep decreases in new infections, largely among MSM. 4 Similar decreases in HIV diagnoses among MSM have also been reported in San Francisco, CA, and New York City, NY, in the USA, 5,6 and New South Wales, Australia. 7 To date, little information exists on trends in PrEP awareness and uptake and predictors of PrEP initiation in England. Such data would be helpful to further inform the unrestricted PrEP implementation programme in England in 2020, in which PrEP will be made routinely available after completion of the PrEP Impact trial. 8 The Attitudes to, and Understanding of Risk of Acquisition of HIV 2 (AURAH2) study is among the first prospective cohort studies of MSM in England. We assessed changes in awareness of, and use of, PrEP and postexposure prophylaxis (PEP), predictors of PrEP initiation, and factors associated with reporting the recent use of PrEP among initially HIV-negative MSM between 2013 and 2018.

For more on the PrEP Impact trial see https://www.prepimpacttrial.org.uk/ For more on purchasing PrEP online see https://www.iwantprepnow.co.uk/buy-prep-now/

Research in context

Evidence before this study
Pre-exposure prophylaxis (PrEP) taken daily or on-demand has been shown to be highly effective for prevention of HIV infection among men who have sex with men (MSM) in clinical trials and open-label studies. PrEP is recommended by WHO for HIV-negative individuals at substantial risk of sexually acquired HIV, including MSM. We searched PubMed for longitudinal cohort studies in English that included MSM in England, UK, published from database inception up to Jan 31, 2020, using key search terms including "pre-exposure prophylaxis", "PrEP", "HIV", "MSM", "homosexual", "men who have sex with men", "gay", "bisexual men", "longitudinal", "cohort", and "prospective". We identified 83 articles, which included articles of clinical trials, demonstration projects, and cohort studies mostly done in high-income countries. Apart from reports on PrEP uptake and associated factors, among these articles were studies about sexual behaviours, sexually transmitted infections, and HIV incidence among PrEP users; PrEP awareness, acceptability, and willingness to use; PrEP retention, engagement, adherence, and discontinuation; and characteristics of PrEP users. We identified two studies run in England, two articles from the PROUD trial, and one article from our research group (the Attitudes to, and Understanding of Risk of Acquisition of HIV [AURAH] and AURAH2 study) that measured changes in the prevalence of sexual behaviours and PrEP use, by comparing data from the AURAH cross-sectional study with baseline data from the AURAH2 prospective cohort. To our knowledge, no data have been published from longitudinal cohort studies in England, excluding those from clinical trials or PrEP demonstration projects.

Added value of this study
This study provides the first estimates of trends in PrEP use and predictors of PrEP initiation among HIV-negative MSM in England, using data from a prospective cohort, at a critical time in planning the roll-out of PrEP in England. We found that, despite free PrEP availability in England being only through the PrEP Impact trial, both awareness and use of PrEP increased substantially from 2013 to 2018 among MSM attending three sexual health clinics in south-east England. In 2018, PrEP use was almost 80% among men with multiple condomless sex partners. A substantial proportion of men accessing PrEP obtained it via the internet. PrEP use was lower among those with lower socioeconomic status than among those of higher socioeconomic status.

Implications of all the available evidence
The available evidence highly supports the addition of PrEP to the standard of care for MSM who would benefit from the preventive prophylaxis. Sociodemographic and economic barriers associated with PrEP use should be promptly addressed via routine commissioning of PrEP.

Study design and participants
The AURAH2 study was a prospective cohort study that recruited MSM who were HIV negative or of unknown HIV status from two large sexual health clinics in London (56 Dean Street and Mortimer Market Centre) and one in Brighton (Claude Nicol clinic), between July 30, 2013, and April 30, 2016. 9 Additionally, participants were eligible if they were aged 18 years or older and attending or had attended the study clinics for routine sexually transmitted infection (STI) or HIV testing. Individuals who consented to participation in the study completed a confidential baseline paper questionnaire in the clinic. During the follow-up period, participants self-completed subsequent 4-monthly and annual questionnaires that were available online from March, 2015, until March, 2018; hence, maximum follow-up was 3 years. The baseline and follow-up questionnaires gathered information on demographic and socioeconomic factors, health and wellbeing, knowledge and understanding of HIV, lifestyle, sexual health, recent sexual behaviour, and PrEP and PEP use. The study protocol has been described previously. 9 All participants provided written informed consent before taking part.

Measures
All measures were self-reported from participant questionnaires. Information on PrEP and PEP use was collected in the baseline and annual questionnaires, with information on PrEP and PEP awareness collected at baseline only. The main outcomes of interest were PrEP and PEP awareness and reported PrEP and PEP use in the previous 12 months. Study definitions and questions are shown in figure 1.
To be classified as positive for having taken PrEP or PEP in the past 12 months, at baseline or at the annual questionnaire, participants were required to have answered "yes" to the question on having ever taken PrEP or PEP, and subsequently indicated use in the past year on the question on frequency of use. Missing answers to PrEP and PEP use questions were classified as no use. Sociodemographic variables of interest included age group (<25, 25-29, 30-34, 35-39, 40-44, ≥45 years), country of birth and ethnicity (white UK born, other ethnicity UK born, white non-UK born, other ethnicity non-UK born), sexual identity (gay, bisexual or other), university education (yes, no), ongoing relationship (yes, no), employment status (employed, not employed), sufficient money for basic needs (all of the time, most of the time, sometimes or no), and housing status (homeowner, renting, unstable or other). We considered the following seven measures of HIV risk and related behaviours and activities (in the past 3 months, unless otherwise stated): condomless anal sex (condomless sex), condomless sex with two or more partners, group sex (sex involving more than two participants on the same occasion), recreational drug use classified into four groups (none; drug use but not injection or chemsex-related; use of at least one chemsex-related drug [crystal methamphetamine, γ-hydroxybutyrate (GHB), γ-butyrolactone (GBL), or mephedrone] but no injection drug use; injection drug use), STI diagnosis (in the past 12 months at the baseline questionnaire, and in the past 3 months at the annual questionnaire), PrEP or PEP use in the past 12 months, and having had a recent HIV test (in the past 6 months at the baseline questionnaire, and in the past 3 months at the annual questionnaire).
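The two-question classification rule above (a "yes" to ever-use plus an indication of use in the past year, with missing answers counted as no use) can be expressed directly. The sketch below is a hypothetical illustration: the function name and the answer codings are assumptions for the example, not the actual questionnaire coding.

```python
def used_in_past_year(ever_taken, freq_past_year):
    """Classify past-12-month PrEP/PEP use per the rule described above.

    Both answers must be affirmative: 'ever_taken' must be "yes", and
    'freq_past_year' must indicate use within the past year. A missing
    answer (None) on either question is classified as no use.
    Answer codings here are illustrative, not the study's actual coding.
    """
    if ever_taken != "yes":
        return False
    return freq_past_year is not None and freq_past_year != "not in past year"

# Missing answers are classified as no use:
assert used_in_past_year("yes", "monthly") is True
assert used_in_past_year("yes", None) is False
assert used_in_past_year(None, "monthly") is False
```

The conservative treatment of missing answers means reported prevalences are, if anything, slight underestimates.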
Mental health and lifestyle factors of interest included depressive symptoms (a score of ≥10 on the Patient Health Questionnaire-9), 10 anxiety symptoms (a score of ≥10 on the Generalised Anxiety Disorder-7 test), 11 and higher alcohol consumption (a score of ≥6 on the WHO alcohol screening tool, the Alcohol Use Disorders Identification Test-consumption [AUDIT-C] questionnaire; first two questions only). 12 Ethnicity, sexual identity, education, employment, financial status, and housing status were fixed variables derived from the baseline questionnaire, whereas age, depressive symptoms, anxiety symptoms, alcohol consumption, and HIV risk and related behaviours were time-varying variables derived from baseline and annual questionnaires. We treated missing values as "no" answers (these accounted for <5% for each variable), except for the few individuals with

Statistical analysis
In preparation for the analyses, we considered calendar year 3-month periods from the first enrolment (July, 2013) to the end of the study period (March, 2018). Information from each participant's questionnaires was ascribed to the 3-month period in which the questionnaire was completed. We combined data for the last two quarters of 2013 as one calendar period (July-December, 2013) because recruitment started on July 30, 2013, and so the third quarter of 2013 was less than 3 months; the number recruited by September, 2013, was too small to have a separate period. To describe the prevalence of PrEP and PEP awareness at baseline, we used data from all available baseline questionnaires. We assessed trends over calendar time in the proportion of participants who indicated PrEP and PEP awareness at baseline over the period from July, 2013, to April, 2016 (enrolment stage), and did a χ² test for linear trends in proportions. To examine trends in past 12-month PrEP and PEP use over the entire study period, we used pooled data from all available baseline and annual questionnaires.
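The χ² test for linear trend in proportions mentioned above can be sketched as a Cochran-Armitage-style statistic. The following is a minimal illustration on synthetic counts (the counts and group scores are invented for the example; this is not the study's analysis code).

```python
import math

def chi2_linear_trend(successes, totals, scores=None):
    """Cochran-Armitage chi-square test for linear trend in proportions.

    successes[i] / totals[i] is the proportion in ordered group i;
    scores default to 0, 1, 2, ... Returns (chi2, z), where chi2 = z**2
    is compared against a chi-square distribution with 1 df.
    """
    k = len(successes)
    if scores is None:
        scores = list(range(k))
    N = sum(totals)
    p_bar = sum(successes) / N  # pooled proportion
    # Trend statistic and its variance under the null of no trend.
    T = sum(t * (r - n * p_bar) for t, r, n in zip(scores, successes, totals))
    s1 = sum(n * t for n, t in zip(totals, scores))
    s2 = sum(n * t * t for n, t in zip(totals, scores))
    var_T = p_bar * (1 - p_bar) * (s2 - s1 * s1 / N)
    z = T / math.sqrt(var_T)
    return z * z, z

# Synthetic illustration: awareness rising across four calendar periods.
chi2, z = chi2_linear_trend([10, 20, 30, 40], [50, 50, 50, 50])
```

For these synthetic counts the statistic is χ² = 40 on 1 df, a strongly significant positive trend; with flat proportions it is exactly zero.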
We used univariate generalised estimating equation (GEE) models with a logit link and robust SEs to assess trends over calendar time during the period July, 2013, to March, 2018, in the proportion of questionnaires for which PrEP and PEP use were reported, accounting for multiple questionnaire responses from individual participants. Calendar year was fitted as a continuous variable to obtain a test for linear trend. Similarly, we investigated trends over calendar time in the proportion of men who reported condomless sex with two or more partners, and trends in PrEP use among these men specifically. We also assessed trends over time (in 6-month calendar periods) in source of PrEP, using questionnaires in which PrEP use was reported. We assessed, in a longitudinal analysis, factors associated with PrEP initiation during follow-up among those who reported not using PrEP at baseline and who had completed at least one annual questionnaire. PrEP initiation was defined as the first report of PrEP use in the past 12 months from an annual questionnaire; time to initiation was the time from baseline to the date of completion of the questionnaire in which PrEP was first reported, or from baseline to the end of follow-up if PrEP was not initiated. We considered each factor separately in age-adjusted Poisson models (using age as a continuous variable) with robust SEs. The predictors considered included sociodemographic factors, mental health, lifestyle factors, and sexual health and behaviours reported in past questionnaires associated with subsequent PrEP initiation. We present these results as age-adjusted incidence rate ratios (IRRs) with their corresponding 95% CIs. We did an additional cross-sectional analysis to examine factors associated with being on PrEP, using all available baseline and annual questionnaires. We used GEE models with a logit link function, adjusted for age (as a continuous variable).
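The age-adjusted Poisson models with robust SEs used for the IRRs can be sketched as follows. This is a minimal "modified Poisson" illustration on synthetic data, not the study's analysis code: the exposure variable, the effect size, and the sample are invented for the example, and the model is fitted by plain Newton-Raphson with a sandwich variance estimator.

```python
import numpy as np

def poisson_irr(X, y, iters=30):
    """Poisson regression (log link) via Newton-Raphson, with robust
    (sandwich) standard errors -- the 'modified Poisson' approach for
    estimating rate ratios of a binary outcome.  Returns (irr, se) where
    irr[j] = exp(beta_j) and se[j] is the robust SE of beta_j."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        mu = np.exp(X @ beta)
        H = (X * mu[:, None]).T @ X               # X' diag(mu) X
        beta += np.linalg.solve(H, X.T @ (y - mu))
    mu = np.exp(X @ beta)
    A_inv = np.linalg.inv((X * mu[:, None]).T @ X)
    B = (X * ((y - mu) ** 2)[:, None]).T @ X      # "meat" of the sandwich
    cov = A_inv @ B @ A_inv
    return np.exp(beta), np.sqrt(np.diag(cov))

# Synthetic cohort: a hypothetical binary exposure triples the
# initiation rate; age (per decade, centred) has no true effect.
rng = np.random.default_rng(1)
n = 2000
age = rng.uniform(20, 50, n)
exposure = rng.integers(0, 2, n).astype(float)
risk = 0.06 * np.exp(1.1 * exposure)              # true IRR = e^1.1 ~ 3.0
y = rng.binomial(1, risk).astype(float)
X = np.column_stack([np.ones(n), (age - 35.0) / 10.0, exposure])
irr, se = poisson_irr(X, y)                       # irr[2] ~ 3, irr[1] ~ 1
```

The robust variance is what makes the Poisson model valid for a binary outcome: the Poisson likelihood mis-states the variance of a 0/1 response, and the sandwich estimator corrects the SEs accordingly.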
Also, separately among PrEP users, we analysed factors associated with reporting non-prescribed PrEP (ie, PrEP obtained via the internet, friends, or other sources) using all available annual questionnaires in which PrEP use was reported. We present these results as age-adjusted odds ratios (ORs) with their corresponding 95% CIs. p values below 0·05 were considered to be significant. We did all analyses using Stata (version 15.1).

Role of the funding source
The funder had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.

Results
622 (54%) of 1162 men completed at least one online follow-up questionnaire, of whom 483 (78%) completed at least one annual questionnaire. A higher proportion of participants who were older, had greater financial security, and had more stable housing continued in the study than did those who were younger, who only sometimes or never had money to cover basic needs, and who had unstable or no housing (table 1). The number of follow-up questionnaires (4-monthly and annual) completed by the end of the study period was 3277. To describe PrEP and PEP awareness at enrolment by calendar period of baseline questionnaire, we used data from 1161 of 1162 participants who completed a baseline questionnaire (one questionnaire was excluded from the analysis due to missing data). Overall, at baseline, 838 (72%) of 1161 participants were aware of PrEP and 1074 (93%) were aware of PEP. The

Data are n/N (%), mean (SD), or median (IQR). Data are from the baseline paper questionnaire. Condomless sex refers to condomless anal sex.
Data are missing from the baseline questionnaire as follows: age (n=9), money status (n=4), housing status (n=15), country of birth and ethnicity (n=12), sexual identity (n=12), university education (n=3), employment (n=3), relationship (n=3), HIV test (n=3), condomless sex in the past 3 months (n=3), condomless sex with ≥2 partners in the past 3 months (n=3), group sex in the past 3 months (n=3), PEP use (n=3), PrEP use (n=3), recreational drug use (n=3), STI diagnoses (n=3), alcohol consumption (n=3), depressive symptoms (n=3), and anxiety symptoms (n=3). AURAH2=Attitudes to and Understanding of Risk of Acquisition of HIV 2. PrEP=pre-exposure prophylaxis. PEP=postexposure prophylaxis. STI=sexually transmitted infection. *Other ethnicity includes Black, Asian, Mixed, and other ethnic groups. †Renting housing includes private renting and renting from council or housing association; unstable or other housing includes temporary accommodation, staying with friends or family, other accommodation, and homeless. ‡In the past 12 months at the baseline questionnaire and in the past 3 months at the annual questionnaire. §Higher risk defined as a score of ≥6 on the WHO modified alcohol screening tool, the Alcohol Use Disorders Identification Test-consumption questionnaire. ¶Defined as a score of ≥10 on the Patient Health Questionnaire-9. ||Defined as a score of ≥10 on the Generalised Anxiety Disorder-7.

Past 12-month PrEP use increased over the study period (figure 2). By contrast, some fluctuation was seen in the trend of past 12-month use of PEP. PEP use was reported in 371 (18%) questionnaires. PEP use was consistently higher than PrEP use until the period July to September, 2016, but after reaching a peak of 28% (41 of 147 respondents) in April to June, 2016, PEP use decreased thereafter to 8% (four of 53 respondents) in January to March, 2018.

Data for condomless sex with ≥2 partners are from all baseline and annual questionnaires (ie, 1161 participants provided 2079 questionnaires; one questionnaire was excluded from the analysis due to missing data on year of enrolment). Data on PrEP use in the past 12 months are from those with ≥2 condomless sex partners at baseline and in the annual questionnaires (n=755 questionnaires; one questionnaire was excluded from the analysis due to missing data on year of enrolment). Condomless sex is condomless anal sex. p values are for linear trends. PrEP=pre-exposure prophylaxis. Q=quarter.

To examine factors associated with initiating PrEP, we restricted our analysis to the 460 participants who reported no previous PrEP use in the baseline questionnaire and who completed at least one annual follow-up questionnaire (data from 875 questionnaires). Age-adjusted IRRs for factors associated with initiating PrEP are shown in table 2. The PrEP initiation rate increased substantially from 2013 to 2018. When considering year as a continuous variable, the age-adjusted IRR per calendar year was 5·91 (3·68-9·49; p<0·0001). Compared with the category of younger than 25 years, the rate of PrEP initiation was increased in the age categories of 40-44 years and 45 years and older. In age-adjusted models, non-employment and unstable housing were significantly associated with a reduced rate of PrEP initiation compared with being a homeowner. Behavioural factors associated with higher rates of PrEP initiation within the past 12 months were having a recent HIV test, reporting condomless sex, condomless sex with two or more partners, group sex, use of non-injected chemsex-related drugs, and use of PEP in the past 12 months (table 2). To assess factors associated with reporting PrEP use in the previous year, we used data from 2080 questionnaires representing 1162 respondents (baseline and follow-up; table 2). The predictors for PrEP use in the previous year were similar to those associated with initiating PrEP. In particular, older age and later calendar year were strongly associated with increased use of PrEP, while unstable or other housing was associated with less use of PrEP than being a homeowner.
We found some evidence of a trend between money status and reporting PrEP use (p trend=0·037). Behavioural factors associated with reporting the use of PrEP were a recent HIV test, reporting condomless sex, condomless sex with two or more partners, group sex, non-injection chemsex-related drug use, and PEP use. Country of birth and ethnicity, sexual identity, education, ongoing relationship, higher alcohol use, STI diagnosis, and symptoms of depression and anxiety were not associated with initiation of PrEP or use of PrEP (table 2).

Discussion
To our knowledge, this Article is the first prospective study of PrEP use among MSM in England. In this study of MSM attending sexual health clinics in London and Brighton, UK, between 2013 and 2018, we found that use of PrEP increased substantially over the study period. The internet was the preferred source for obtaining PrEP, and online PrEP purchasing continued even after the PrEP Impact trial started. Between January and March, 2018, about 48% of men who reported PrEP use obtained it from the internet, and paid for it with their own money. A substantial increase in PrEP use was also seen among men who self-reported engaging in condomless sex with two or more partners; in the period of January to March, 2018, the proportion was 78%, compared with 0% in 2013. The increased level of awareness, use, and rates of initiation of PrEP in this study coincided first with the PROUD study (Nov 29, 2012, to April 30, 2014), 1 and then with the initiation of the PrEP Impact implementation trial in England, and the availability of PrEP through NHS sexual health clinics in Scotland and Wales, and through a pilot in Northern Ireland. The PrEP Impact trial started recruitment in 2017, and the rate of PrEP initiation among the men in our study, who at least at baseline had attended these study sites, increased more than twenty-fold in 2017-18 compared with before 2015.
The high proportion of men accessing PrEP online despite the PrEP Impact trial opening for enrolment might have been because available places were rapidly filled and recruitment was closed temporarily. 13 As a result, men in need of PrEP were being turned away and had no choice but to purchase it via the internet (Clarke A and Nwokolo N, unpublished). Substantial advocacy efforts from community-based organisations have also contributed to some men accessing PrEP online. In our analysis, older age was independently associated with being more likely to initiate PrEP, with the rate of initiation among men aged 40 years and older being four times higher than among those younger than 25 years. This finding was similar to that in a cohort in Amsterdam in which the median age among men initiating PrEP was 40 years, 14 and a cohort in Australia in which rates of PrEP initiation were highest among men aged 40 years and older. 15 We also found that indicators of socioeconomic disadvantage (eg, not being employed, having unstable housing status, and having less or no money for basic needs), were associated with a reduced rate of initiating PrEP or being on PrEP. Previous research in the UK has shown that lower socioeconomic situation is associated with worse HIV treatment outcomes among individuals living with HIV. 16 Efforts need to be made to ensure that socioeconomically disadvantaged individuals have equitable access to all effective HIV prevention strategies, including PrEP. High-risk sexual behaviours such as condomless sex, condomless sex with two or more partners, group sex, and using non-injection chemsex-related drugs were also associated with PrEP use, indicating appropriate use of PrEP by these men. 
Similar to our findings, a recent national online prospective study in Australia reported that younger age, less use of illicit party or sex drugs, and lower engagement in HIV sexual risk behaviours such as group sex or any condomless sex were independently associated with non-uptake of PrEP. 15 The study also reported an increase in the uptake of PrEP from baseline (2014-15) to 24 months of follow-up. Qualitative data from the PROUD study 17 showed that MSM who were already having frequent condomless sex added PrEP as a prevention tool. MSM with a high risk of contracting HIV through condomless sex should be offered PrEP as a matter of urgency. In our study, more than 22% of respondents in 2018 were not using PrEP when having condomless sex with multiple partners and so were still at risk. These data support the national roll-out of PrEP in England. We did not find an association between STI diagnoses and taking PrEP, except among men who reported past 12-month non-prescribed PrEP use. A 2019 meta-analysis of 20 PrEP studies and trials among MSM found high incidences of STIs among MSM taking PrEP, ranging from 33·0 per 100 person-years to 99·8 per 100 person-years. 18 However, whether PrEP use leads to increased rates of STIs remains unknown. The meta-analysis generated estimates of STI incidence among MSM who engaged in high-risk sexual behaviours, rather than comparing the rates among MSM taking PrEP versus not taking PrEP. The PROUD study found extremely high levels of STI diagnoses, but detected no difference in the occurrence of STIs between the immediate and deferred PrEP groups, 1 while the PrEPX study in Australia found that the incidence of STIs increased during PrEP use, but that this finding was partly explained by increased testing frequency.
19 Additionally, in the PrEPX study, half of the participants were not diagnosed with an STI during follow-up, STIs were highly concentrated among PrEP users with repeat STIs, and STIs were associated with number of partners and group sex. Regular STI testing should continue alongside PrEP use to ensure patients' good sexual health. Although in our study we found no significant association between anxiety or depression and reporting recent PrEP use or PrEP initiation, in a 2020 Australian study, PrEP use was independently associated with lower levels of HIV-related anxiety among PrEP-eligible men (MSM at high risk of HIV infection) than among PrEP-ineligible men (MSM at low risk). 20 Alongside the increase in PrEP use, we found a substantial decreasing trend in PEP use between 2013 and 2018, and found that the use of PEP was a predictor of future PrEP initiation. This finding suggests that a transition from PEP use to PrEP use occurred in these men. Guidelines recommend transitioning MSM who are at continuous risk of HIV from use of PEP towards use of PrEP. 21 PrEP taken daily or on-demand before possible exposure is a highly effective strategy for reducing the risk of HIV acquisition among MSM who are at high and ongoing risk of infection. 21 PEP, on the other hand, is a short-term treatment to be used in emergency circumstances after recent HIV exposure (within 72 h). 22 Both PrEP and PEP should be part of a combination HIV prevention strategy. Our study has some limitations. Men in this cohort were recruited from three sexual health clinics where the PROUD study was run and might have been better informed about PrEP than the general MSM population in England. Therefore, prevalence of PrEP and PEP use in this study might overestimate use in the MSM population nationwide.
Men in this cohort were recruited from three sexual health clinics in urban areas in south-east England, and so the sample size was relatively small and these men might not be representative of the broader MSM population in England and the UK. Trends in use and predictors of PrEP initiation might also differ among MSM who are not engaged with sexual health clinics. Additionally, the sample comprised predominantly men who were highly educated, employed, in a stable economic situation, of white ethnicity, and with access to the internet (follow-up questionnaires were only available online; therefore, participants needed an internet connection to complete them), which might not allow generalisability to all MSM living in England. However, the Australian prospective cohort study that used a more diverse sample of MSM reached findings similar to those observed in our study. 15 Further research is needed to investigate PrEP use among MSM who are more socioeconomically disadvantaged in England and the UK. Recall bias and social desirability bias might be evident in these self-reported data; however, the study collected sensitive and personal data through an online follow-up questionnaire, which might have reduced such bias. 23 Finally, the online retention of participants who initially registered in the study was lower than we hoped; however, more than 60% of participants who completed at least one online questionnaire (n=622) were followed up until the end of the study (n=400). 24 In summary, this study provides important data for PrEP implementation in England. PrEP use has increased substantially over the past 5 years, with a high proportion of PrEP being obtained via the internet. Our data suggest that men engaging in sexual behaviour related to high HIV risk, who are older, and those of higher socioeconomic status are significantly more likely to use PrEP.
A fully commissioned programme for PrEP in England has been agreed; however, implementation has been delayed (Rodger AJ, unpublished). Due to the COVID-19 pandemic, the programme might not be fully operational across England by the end of the PrEP Impact trial in October, 2020. To transition participants of the PrEP Impact trial onto the nationally commissioned programme, an interim supply of PrEP will be made available by the trial for participants with an ongoing need for PrEP and who attend services where the national programme has not yet commenced. The results of our study can inform the implementation of the national programme by highlighting patient groups who might be at increased risk of HIV infection but less likely to be aware of or using PrEP, and who could benefit most from public health outreach and advice. Improving access to PrEP via routine commissioning by NHS England could increase PrEP use among all eligible MSM and reduce socioeconomic disparities, if it is accompanied by an understanding of these disparities and tailoring of public health messages and services to address them.
Organization of microscale objects using a microfabricated optical fiber

We demonstrate the use of a single fiber-optic axicon device for organization of microscopic objects using longitudinal optical binding. Further, by manipulating the shape of the fiber tip, part of the emanating light was made to undergo total internal reflection in the conical tip region, enabling near-field trapping. Near-field trapping resulted in trapping and self-organization of long chains of particles along azimuthal directions (in contrast to the axial direction, observed in the case of large tip cone angle far-field trapping). Optical manipulation of microscopic objects using spatially sculptured optical landscapes, coupled with optical binding, has attracted considerable interest for engineering self-assembled colloidal and biological structures. While far-field binding between microscopic objects has been demonstrated using elliptical beams or two counterpropagating beams, near-field trapping and binding over a large area has been reported at the interface of total internal reflection (TIR) occurring in a prism. Except for two-fiber trapping, all other approaches have a depth limitation. The two-fiber configuration requires critical alignment of the two counterpropagating beams and therefore restricts three-dimensional (3D) manipulation of the optically bound structure. Theoretical evaluation of the trapping force exerted by the microfocused beam from an axicon-tipped single fiber, and its use for in-depth trapping of cells and low-index objects, has been reported recently. An axicon (having a conical surface) can be used to turn a Gaussian beam into a Bessel beam, with greatly reduced diffraction and the smallest optical confinement. The micro-axicon fiber can trap at a larger distance from the fiber tip compared to a tapered fiber. Here, we report trapping and organization of microscopic objects using a single fiber-optic beam.
The field beyond the tip can be obtained from the scalar diffraction integral, E(x, y, z) = (1/iλ) ∬ E(x1, y1) [exp(ikr)/r] dx1 dy1, where E(x1, y1) is the field at the base of the micro-axicon, which can be calculated using E_fund, accounting for the phase acquired along the axicon-tip region. The XY intensity distribution of the 800 nm beam transmitted through the axicon tip, calculated at two Z distances from the tip (fiber core size, 8 μm; refractive index of axicon, 1.5; cone angle, ∼30°), is shown in Figs. 1(a) and 1(b). Figures 1(c) and 1(d) show typical beam profiles measured at distances of 5 and 15 μm from the tip. The measured beam profiles showed Bessel-like profiles with a few concentric rings. The scattering force in the axial direction is minimized by the Bessel-like beam as compared to the beam from a lensed or tapered fiber. Owing to this special property of the Bessel-Gauss beam (having a small high-intensity region along the Z direction), a relatively less diverging beam can achieve single-beam optical tweezers as compared to Gaussian beam optical tweezers. In addition to an increase in propagation distance with a decrease in cone angle (data not shown), transmittance of the beam through the fiber tip decreased substantially, which was attributed to an increase in TIR at the tip [10]. The experimental setup consists of a TEM00-mode output of a cw Ti:sapphire laser beam (800 nm, Coherent Inc., USA) coupled to the microfabricated single-mode fiber. A 20× microscope objective (MO) was used for imaging. Two 1 μm polystyrene particles suspended in phosphate-buffered saline (PBS) were trapped and raised to a height of a few mm from the coverslip. Figure 2(a) shows optical binding between two particles (in the encircled region) at a distance of ∼3 μm from the tip. Analysis of images using cross-correlation techniques [13] provided positions of particles with nanometer resolution. The contrast was increased by region-of-interest selection and thresholding. The bottom right inset in Fig. 2(a) shows a 3D intensity map of the two optically bound particles.
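The Bessel-like transverse profile behind an axicon can be sketched numerically. The following is a minimal scalar approximation, not the full diffraction calculation: it uses the thin-axicon small-angle deflection β = (n − 1)α and the ideal J0² radial profile, with the parameters quoted in the text (800 nm light, refractive index 1.5, ∼30° cone angle).

```python
import numpy as np

def bessel_j0(x, m=2000):
    """J0(x) from its integral representation,
    J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt,
    evaluated with a midpoint rule (avoids external dependencies)."""
    t = (np.arange(m) + 0.5) * np.pi / m
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.cos(np.outer(x, np.sin(t))).mean(axis=1)

# Parameters from the text: 800 nm, n = 1.5, 30 degree cone angle
# (15 degree half-angle). beta = (n - 1)*alpha is the thin-axicon,
# small-angle deflection -- an assumption of this sketch.
wavelength = 800e-9
n_axicon = 1.5
half_angle = np.deg2rad(15.0)
beta = (n_axicon - 1.0) * half_angle
k_r = (2 * np.pi / wavelength) * np.sin(beta)   # radial wavevector

r = np.linspace(0.0, 5e-6, 400)                 # radial coordinate, m
intensity = bessel_j0(k_r * r) ** 2             # normalised J0^2 profile
core_radius = 2.405 / k_r                       # first zero of J0
```

The central lobe radius comes out on the order of 2 μm for these parameters, consistent with the micrometre-scale bright core and concentric rings described for Figs. 1(a)-1(d); a larger cone angle increases k_r and shrinks the core.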
The two optically bound particles remained at an almost fixed separation over the 10 min observation period. However, in the bound state, they were found to move in the axial direction within 3 to 6 μm. Translation of the fiber in three dimensions led to transportation of the (encircled) optically bound particles (data not shown). The distance between the two particles decreased as they moved away from the tip. Tracking of particles (1 and 2) as the fiber tip (dark line) was translated is shown as an inset [Fig. 2(a)]. Figure 2(b) shows the histogram of the separation between the optically bound particles measured over 30 s. Figure 3(a) illustrates how far-field single-fiber trapping and optical binding of a chain of microscopic particles could be achieved. For a cone angle of 60°, trapping of polystyrene particles (diameter, 1 μm) by the truncated Bessel beam (power, 146 mW) was observed at a distance of ∼5 μm from the tip. For a fixed cone angle (e.g., 90°), the trapping stiffness along the axial direction, measured by the equipartition theorem method [7], was found to depend on the size of the particle (2.0 pN/μm for 1 μm polystyrene versus 3.2 pN/μm for 2 μm polystyrene, at 60 mW trapping power). Similarly, the trapping stiffness was found to depend on cone angle: e.g., 1.2 pN/μm for a 60° cone angle tip versus 2.0 pN/μm for a 90° cone angle tip, for a 1 μm particle trapped at 60 mW. This is due to the longer (∼5 μm) propagation distance of the Bessel beam generated by the 60° tip, which ensured transverse trapping of more particles along the axial direction. Figure 3(b) shows arrangement of a chain of ∼20 particles along the beam propagation direction. This can be attributed to longitudinal optical binding [4], where each trapped particle acts as a lens to trap a subsequent particle near its focal point [Fig. 3(a)]. The difference between the Bessel-Gauss beam generated by the axicon-tip fiber and a conventional Bessel beam is the propagation distance. 
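The stiffness values quoted above come from the equipartition theorem, which equates the trap's potential energy to the thermal energy: (1/2) k ⟨x²⟩ = (1/2) k_B T, so k = k_B T / ⟨x²⟩. A minimal sketch with synthetic bead positions (the real analysis would use positions tracked from the video frames):

```python
import numpy as np

k_B_T = 4.11e-21  # thermal energy at ~298 K, in joules

# Synthetic bead positions (m): a trap of stiffness 2.0 pN/um gives a
# positional standard deviation of sqrt(k_B*T / k), about 45 nm.
rng = np.random.default_rng(0)
true_k = 2.0e-6                      # 2.0 pN/um expressed in N/m
sigma = np.sqrt(k_B_T / true_k)
x = rng.normal(0.0, sigma, size=200_000)

# Equipartition estimate: k = k_B T / <x^2>
k_est = k_B_T / np.mean(x**2)
print(f"estimated stiffness: {k_est * 1e6:.2f} pN/um")
```

With enough tracked positions the estimate converges to the true stiffness; in practice, tracking noise inflates ⟨x²⟩ and biases the stiffness low, so the tabulated values are lower bounds.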
A conventional Bessel beam (focused through a MO) has a large propagation distance and therefore low axial-trapping stiffness, leading to two-dimensional (2D) optical trapping [14]. The optically bound chain could be displaced by translation of the fiber. Over a 15 min period, more particles aligned along the axial direction [Fig. 3(c)]. During the transverse motion of the long chain (achieved by movement of the fiber), when an obstacle [particles adhered to the glass substrate, marked by an arrow in Fig. 3(d)] was encountered, the loose end of the optically bound chain oscillated around the obstacle [Figs. 3(d)-3(g)]. Though use of a larger cone angle (90°) led to more axially stable 3D Bessel-beam trapping, the optically bound chain was shorter than with the smaller cone angle (60°) tip. In order to achieve near-field trapping, the cone angle was made small enough (≤30°) that a high percentage of the beam underwent TIR at the tip-water interface [Fig. 3(h)]. Since the strength of an evanescent wave decays rapidly with distance from the interface where it is generated, the trapping volume is significantly reduced. The critical angle of incidence for TIR is calculated to be ∼63° (refractive indices of the tip/water, 1.5/1.33) using Snell's law, which corresponds to a tip cone angle of 54°. Assuming all the rays to be parallel, none of the laser beam should exit a 30° cone angle tip. However, imperfections in the tip, and the fact that not all rays inside the single-mode fiber travel in straight lines, lead to leakage of the beam. In our case, stable far-field trapping in the axial direction was rarely observed [Figs. 3(i)-3(k)], since only a small amount of laser power came out in the axial direction. Additionally, the propagation distance became longer (∼8 μm), adding to the instability. 
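The critical-angle figures above follow directly from Snell's law. For an axial ray, the angle of incidence on the conical surface is (90° minus the half cone angle), which links the critical angle to a maximum cone angle for TIR. A quick check (computing without intermediate rounding gives ≈62.5° and ≈55°, consistent with the quoted 63° and 54°):

```python
import math

n_tip, n_water = 1.5, 1.33

# Critical angle of incidence for TIR at the tip-water interface
theta_c = math.degrees(math.asin(n_water / n_tip))   # ~62.5 degrees

# Rays parallel to the fiber axis strike the cone at (90 - half cone angle),
# so TIR for all such rays requires a full cone angle <= 2 * (90 - theta_c).
max_cone_angle = 2 * (90.0 - theta_c)                # ~55 degrees
print(f"critical angle ~ {theta_c:.1f} deg, max cone angle ~ {max_cone_angle:.0f} deg")
```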
However, near-field trapping [5] at the interface between the tip and the water led to better trapping and thus self-organization of long, stable chains of particles along azimuthal directions. Owing to the exponentially decaying evanescent field [5,15] at the site of TIR, the closest trapped particle was found nearer to the surface of the tip, in contrast to a few micrometers away in the far-field case [Figs. 3(a)-3(g)]. Azimuthal binding of trapped particles may be affected by whispering-gallery-mode excitation in the beads. The azimuthal angles at which the optically bound chains formed were found to vary from 30° to 75° [Fig. 3(c)]. Some of the near-field-induced optically bound chains (1 and 4) lengthened over a period of time [Figs. 3(i)-3(k)] and became highly stable, while others at smaller azimuthal angles (2 and 3) shortened. Switching off the laser beam led to disorganization of the particles [Fig. 3(l)]. Metallization of the tip to enhance the evanescent field by the surface-plasmon effect resulted in heating, leading to convection and bubble formation (data not shown). In conclusion, by shaping the axicon tip cone angle, single-fiber optical trapping and binding in the far field as well as the near field were achieved, leading to organization of microscopic particles. Since the trapping force on metallic particles in the Rayleigh regime is higher than that on dielectric particles, the axicon fiber can also be used to organize metallic nanoparticles and to study optical binding [16]. The proposed noninvasive axicon-tipped fiber can be used in multifunctional mode for in-depth trapping as well as for excitation of fluorophores and detection of backreflected light/fluorescence.
Applying GC-MS based serum metabolomic profiling to characterize two traditional Chinese medicine subtypes of diabetic foot gangrene Traditional Chinese medicine (TCM) has a long history and particular advantages in the diagnosis and treatment of diabetic foot gangrene (DFG). Patients with DFG are mainly divided into two subtypes, tendon lesion with edema (GT) and ischemic lesion without edema (GI), which are suitable for different medical strategies. Metabolomics has special significance in unravelling the complexities of multifactorial and multisystemic disorders. This study acquired the serum metabolomic profiles of two traditional Chinese medicine subtypes of DFG to explore potential molecular evidence for subtype characterization, which may contribute to the personalized treatment of DFG. A total of 70 participants were recruited, including 20 with DM and 50 with DFG (20 with GI and 30 with GT). Conventional gas chromatography-mass spectrometry (GC-MS) followed by orthogonal partial least-squares discriminant analysis (OPLS-DA) was used as an untargeted metabolomics approach to explore the serum metabolomic profiles. The Kyoto Encyclopedia of Genes and Genomes (KEGG) and MetaboAnalyst were used to identify the related metabolic pathways. Compared with DM patients, the levels of 14 metabolites were altered in the DFG group; these also belonged to the differential metabolites of the GI (13) and GT (7) subtypes, respectively. Among these, urea, α-D-mannose, cadaverine, glutamine, L-asparagine, D-gluconic acid, and indole could be regarded as specific potential metabolic markers for GI, as well as L-leucine for GT. In the GI subtype, D-gluconic acid and L-asparagine are positively correlated with activated partial thromboplastin time (APTT) and fibrinogen (FIB). In the GT subtype, L-leucine is positively correlated with the inflammatory marker C-reactive protein (CRP). 
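Metabolite-to-clinical-marker correlations like the ones reported above (e.g., D-gluconic acid with APTT) are typically assessed with a rank correlation. The sketch below uses synthetic data and scipy's `spearmanr` purely as an illustration; the study's actual correlation statistic is not specified in this excerpt, and all values here are made up:

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic example only: metabolite intensities vs. a clinical marker for
# n = 20 patients (the GI-subtype sample size); a monotonic relationship
# plus noise stands in for, e.g., D-gluconic acid vs. APTT.
rng = np.random.default_rng(1)
metabolite = rng.normal(10.0, 2.0, size=20)
marker = 0.8 * metabolite + rng.normal(0.0, 0.2, size=20)

rho, p_value = spearmanr(metabolite, marker)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```

Rank correlation is a common choice for metabolomic intensities because it is robust to the skewed, non-Gaussian distributions that raw peak areas often show.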
Arginine and proline metabolism; glycine, serine and threonine metabolism; and phenylalanine, tyrosine and tryptophan biosynthesis are the most important metabolic pathways associated with GI. The main metabolic pathways related to GT include pyrimidine metabolism, glutathione metabolism, and valine, leucine and isoleucine biosynthesis and degradation. The results of this study indicate that patients with different DFG subtypes have distinct metabolic profiles, which reflect the pathological characteristics of each subtype. These findings will help us explore therapeutic targets for DFG and develop precise treatment strategies.

Diagnostic Criteria

1. The diagnostic criteria of DM: (1) typical diabetes symptoms and random blood glucose ≥ 11.1 mmol/L; (2) oral glucose tolerance test (OGTT) 2 h blood glucose ≥ 11.1 mmol/L; (3) fasting blood glucose (FBG) ≥ 7.0 mmol/L. Diagnosis can be made if any one of the above is met.

2. The diagnostic criteria of DFG

2.1 The diagnosis of DFG referred to grade 3~5 in Wagner's classification of diabetic foot ulcers.

2.2 The diagnostic criteria of DFG subtypes: The DFG subtypes were divided into GT and GI according to the clinical symptoms and were identified by two TCM experts with senior titles. The diagnosis results were included in the study after unification; if they were inconsistent, a third expert conducted syndrome differentiation, and if the results were still inconsistent, the cases were not included.

2.2.1 The diagnostic criteria of GI

2.2.1.1 Clinical symptoms: Critical limb ischemia/chronic limb-threatening ischemia (CLI/CLTI), limb pain, intermittent claudication, chronic rest pain. The pain may be worsened in an elevated leg and improved in the dependent position due to compromised blood flow. The area shows rubor, and is pale and raised in the early stage. Capillary refill is reduced, and there may be a loss of overlying hair. Ankle pulses are frequently absent. 
2.2.1.2 Physical and chemical examination: All kinds of examinations proved that there was an occlusive change of limb artery stenosis; color Doppler, CT, DSA, vascular ultrasound, and vascular electro-optical volume flow chart confirmed limb artery stenosis or occlusion. Angiography mainly focuses on arterial lesions of the lower limbs; popliteal artery lesions are most common among distal arteries, accounting for more than 80%. The morphology of vascular lesions is similar to arteriosclerosis obliterans. Due to extensive limb arteriosclerosis and diabetes, there are fewer collateral vessels, and the vessels can be tortuous, narrow, and occluded. The lower limb artery ankle-brachial ratio decreased significantly; plain X-ray film showed obvious calcification shadows in the aortic arch, abdominal aorta, or lower limb artery.

2.2.1.3 Blood-stasis syndrome: Blood-stasis syndrome can be diagnosed when the symptoms conform to 1 main criterion or 2 secondary criteria.

Main Criteria: (1) Dull red or purple or cyanosed tongue texture or with ecchymoses and petechiae, or cyanosed, purple black, varicose or coarse swelling sublingual vein. (2) Dull red or purple or cyanosed face, lips, gum, periorbital region, and finger/toe-end. (3) Varicosity or telangiectasia at any site. (4) Blood outside the meridians (causing blood stasis and accumulation in organs, tissues, subcutaneous or serosal cavity). (5) Abdominal tenderness and tightness. (6) Dark menstrual flow, or slightly dark with blood clots. (7) Vascular occlusion or moderate to severe stenosis (≥50%) shown in imaging examinations. (8) Material evidence of thrombosis, infarction, or embolism.

Secondary Criteria: (1) Fixed pain, or stabbing pain, or pain aggravated at night. (2) Limb numbness or hemiplegia, or joint swelling and deformity. (3) Dry and scaly skin (rough and thick skin with increased scale). (4) Unsmooth, intermittent or hardly perceivable pulse. (5) Pathological lumps, including organomegaly, neoplasm, inflammatory or non-inflammatory masses, and tissue hyperplasia. (6) Mild vascular stenosis (<50%) shown in imaging and other examinations. (7) Abnormal results of physicochemical examinations such as hemodynamics, hemorheology, platelet function, coagulation function, fibrinolysis function, microcirculation, chest X-ray, and ultrasonography that indicate circulation disorder, abnormal microvascular structure and function, or a concentrated, viscous, coagulated and aggregated state of blood. (8) Recent trauma, operation, or abortion.

2.2.2 The diagnostic criteria of GT

2.2.2.1 Clinical symptoms: Classical clinical signs of infection such as redness, warmth, swelling, and pain in the foot. The affected limb has no obvious ischemic symptoms: ankle-brachial index (ABI) > 0.9, no limb numbness or intermittent claudication; dorsalis pedis (DP) and posterior tibial (PT) artery pulsations were palpable. Deep infection, with the usual systemic signs of infection (e.g., fever, elevated white blood cell count). Drainage and edema in the setting of a patient with a previous foot ulcer or tissue injury secondary to diabetes.

2.2.2.2 Physical and chemical examination: (1) Diagnostic physical and chemical examination: three highs (high blood sugar, high white blood cell count, high erythrocyte sedimentation rate) and three lows (low albumin, low red blood cell count, low hemoglobin); Doppler vascular examination and vascular ultrasound: the blood flow of the dorsal foot artery and the posterior tibial artery of the affected foot is in the normal range, or there is partial stenosis or occlusion. (2) Auxiliary physical and chemical examination: X-ray examination showed foot abnormalities. Pathological examination of the tendon can be performed under experimental conditions; degeneration, edema, or necrosis of the tendon can be seen.

2.2.2.3 Dampness-heat-related symptoms: Dampness-heat-related symptoms can be diagnosed when the symptoms conform to 1 main criterion or 2 secondary criteria.

3. Inclusion Criteria: (1) Conform to the diagnostic criteria of Xi's diabetes mellitus gangrene type or ischemic type; (2) no severe liver and kidney function injury, mental illness, or other serious diseases; (3) first foot break; (4) consent of the patient or his family obtained; (5) does not violate the requirements of medical ethics.

4. Exclusion Criteria: (1) Skin ulcer caused by electrical, chemical, or radiation injury, tumors, varicose veins, or other reasons, or malignant lesions within the ulcer; (2) severe clinical infection indicated by cellulitis, fever, elevated white blood cell count, bacterial culture, or increased (high-sensitivity) C-reactive protein levels; (3) severe uncontrollable hypertension with systolic blood pressure ≥ 160 mmHg or diastolic blood pressure ≥ 110 mmHg; (4) serum albumin levels < 28 g/L; (5) hemoglobin < 90 g/L; (6) platelet count < 50 × 10^9/L; (7) severe heart, liver, or kidney injury, in case of medical treatment that may seriously affect safety and treatment; (8) pregnancy, family planning, or breastfeeding women; (9) cognitive dysfunction preventing fully informed consent; (10) allergic disposition or allergy to the ingredients of the treatment under investigation and reference drugs; (11) participation in other clinical trials during the past month; (12) in the judgment of the researcher, inability to complete the trial or comply with its requirements.
A memory-reframing intervention to reduce pain in youth undergoing major surgery: Pilot randomized controlled trial of feasibility and acceptability ABSTRACT Background Three to 22% of youth undergoing surgery develop chronic postsurgical pain (CPSP). Negative biases in pain memories (i.e., recalling higher levels of pain as compared to initial reports) are a risk factor for CPSP development. Children's memories for pain are modifiable. Existing memory-reframing interventions have reduced negatively biased memories associated with procedural pain and pain after minor surgery. However, no study has tested the feasibility and acceptability of the memory-reframing intervention in youth undergoing major surgery. Aims The current pilot randomized clinical trial (RCT; NCT03110367; clinicaltrials.gov) examined the feasibility and acceptability of, as well as adherence to, a memory-reframing intervention. Methods Youth undergoing a major surgery reported their baseline and postsurgery pain levels. Four weeks postsurgery, youth and one of their parents were randomized to receive control or memory-reframing instructions. Following the instructions, parents and youth reminisced about the surgery either as they normally would (control) or using the memory-reframing strategies (intervention). Six weeks postsurgery, youth completed a pain memory interview; parents reported intervention acceptability. Four months postsurgery, youth reported their pain. Results Seventeen youth (76% girls, mean age = 14.1 years) completed the study. The intervention was feasible and acceptable. Parents, but not youth, adhered to the intervention principles. The effect sizes of the intervention on youth pain memories (ηp² = 0.22) and pain outcomes (ηp² = 0.23) were used to inform a larger RCT sample size. Conclusions Memory reframing is a promising avenue in pediatric pain research. Larger RCTs are needed to determine intervention efficacy to improve pain outcomes. 
Postsurgical pain in youth is common, often inadequately managed, distressing, and, for 3% to 22% of youth, may become chronic. [1][2][3] Chronic postsurgical pain (CPSP; i.e., pain that persists for 3 months or longer after surgery and impacts health-related quality of life) contributes to the rising prevalence of pediatric chronic pain, which has been coined a "modern public health disaster." 4,p466 Pediatric CPSP is associated with sleep disturbances, 5 activity limitations, 6 and functional disability. 7,8 According to a conceptual model proposed by Rabbitts and colleagues, 9 the transition from acute to chronic pediatric postsurgical pain is influenced by demographic (e.g., age, sex), biological (e.g., genetic profile, inflammatory response), psychological (e.g., emotions, cognitions, behaviors), and social (e.g., parent, family) factors. Due to their modifiable nature and robust associations with outcomes, psychosocial factors are of particular interest and importance. Youth with high levels of general and pain-related anxiety, 8,10 shorter presurgery sleep duration and worse sleep quality, 11,12 and general psychosocial distress (i.e., a combination of high pain catastrophizing, pain interference, depression, and fatigue) 13 are at greater risk of developing CPSP. Another risk factor for CPSP may involve negatively biased memories for pain (i.e., recalling higher pain as compared to the initial report). In two cohorts of youth undergoing major surgery, higher postsurgical pain intensity ratings were associated with negatively biased memories for pain 5 to 12 months later. 14,15 Further, higher levels of baseline anxiety sensitivity and catastrophic thinking about pain during the first 24 to 48 hours postsurgery contributed to more negatively biased pain memories one year after surgery. 15 Children's memories are highly modifiable 16 and can be altered by the simple act of talking about past pain experiences. 17 However, the few existing psychosocial interventions aimed at preventing pediatric CPSP focus on pain in the short term and address modifiable psychological and behavioral factors (e.g., anxiety, psychological arousal, catastrophic cognitions), 18,19 but pain memories have not been targeted in the context of major pediatric surgery despite their potential importance for subsequent pain experience. 20 The existing memory-reframing interventions have been tested in the context of procedural pain (e.g., lumbar puncture, vaccine injection, dental injection) [21][22][23] and have resulted in reduced negative biases in children's memories for pain. 24 A recent randomized controlled trial (RCT) tested the efficacy of a parent-led memory-reframing intervention in a cohort of young children undergoing a tonsillectomy. 17 Parents learned three key principles of optimal reminiscing about past postsurgical pain, including (1) highlighting the positive aspects of the past painful experience and avoiding using pain-related words, (2) correcting negative exaggerations in pain memories, and (3) enhancing children's pain-related self-efficacy by talking about coping strategies. 17 Parents then used the intervention principles to reminisce with their children about the tonsillectomy. Children in the intervention group recalled their postsurgical pain in a less negatively biased way compared to children in the control group. 17 The existing research on memory reframing is limited to procedural pain and pain associated with a minor outpatient surgery, as well as samples of young children (i.e., participants aged 4 to 9 years except for Chen and colleagues' 23 sample of youth aged 3 to 18 years with cancer undergoing needle procedures). The feasibility and acceptability of a memory-reframing intervention, as well as its effect size on pain outcomes, in the context of major surgery with older children is unknown. 
The present pilot RCT aimed to fill this gap by testing the adherence to, as well as feasibility and acceptability of, a modified version of the previously used 17 memory-reframing intervention in a sample of youth undergoing spinal fusion or pectus repair. Based on previous research, 17 we hypothesized that the intervention would be feasible and acceptable. We hypothesized that parent-child reminiscing in the intervention group would be more intervention congruent compared to the control group (i.e., parents and children would more frequently use positive emotion-, coping-, and bravery-related words and less frequently use negative emotion-, pain-, and fear-related words). A secondary aim of the study was to calculate the observed effect size of the intervention on youth memory biases and pain outcomes to determine the sample size for a future definitive trial. Trial Design This pilot study is a part of a larger preregistered randomized controlled trial (NCT03110367; clinicaltrials.gov, posted on April 12, 2017). The trial had a parallel group assignment with a 1:1 allocation ratio and blinded assessment of outcomes. Participants were recruited from January 2018 to June 2019. The recruitment was stopped due to insufficient funding (see Protocol Deviations section). Parent-child dyads were recruited at the Alberta Children's Hospital. The recruitment pool was generated as follows: (1) clinic staff identified the patients scheduled for pectus repair/spinal fusion surgeries, (2) upon booking of the preop clinic visit, the administrative clinic staff obtained permission to contact from parents and share their contact details for research purposes, and (3) the study staff contacted eligible families to screen potential participants and obtain verbal consent/assent. Data were collected using a study protocol ( Figure 1). 
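As a sketch of how the pilot's observed effect size could inform the sample size of a future definitive trial, a partial eta squared can be converted to Cohen's f and fed into a standard power calculation. This is a hypothetical illustration using statsmodels; the trial's actual power analysis may have used different assumptions:

```python
import math
from statsmodels.stats.power import FTestAnovaPower

# Convert the observed partial eta squared to Cohen's f
eta_p2 = 0.22
f = math.sqrt(eta_p2 / (1.0 - eta_p2))   # ~0.53, a "large" effect

# Total N for a two-group comparison at alpha = .05 and 80% power.
# A definitive trial would plan more conservatively (attrition, a smaller
# true effect than the pilot estimate), so treat this as a lower bound.
n_total = FTestAnovaPower().solve_power(
    effect_size=f, k_groups=2, alpha=0.05, power=0.80
)
print(f"Cohen's f = {f:.2f}, total N ~ {math.ceil(n_total)}")
```

Pilot effect sizes are noisy and tend to be optimistic, which is one reason the registered trial targeted far more dyads (n = 90) than this naive calculation suggests.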
Eligible families were sent and completed consent/assent forms and baseline questionnaires using secure online survey software (i.e., REDCap) approximately 1 week prior to surgery. 25 The baseline questionnaires included measures of pain characteristics as well as multiple measures of youth functioning. For a full list of measures, please see the published trial protocol (https://clinicaltrials.gov/ct2/show/NCT03110367). On the day of the surgery and during the acute postsurgical recovery period (i.e., typically the first 1 to 3 days postsurgery), youth reported their pain characteristics (Figure 1). Four weeks postsurgery, youth and a participating parent came to the hospital for a laboratory visit. During the visit, group allocation was revealed to the interventionist (see Randomization and Blinding section), and participants received either intervention or attention control instructions (see Interventions section for more details). The same researcher, a clinical psychology graduate student (M.P.), provided the intervention and attention control instructions. Following the instructions, parents and youth completed a reminiscing task 17 during which they talked together about the youth's recent surgery and postsurgical experience (i.e., the first few days after the surgery) either as they normally would (attention control group) or using the memory-reframing intervention principles (intervention group). There was no time limit. Parent-child reminiscing narratives were video- and audio-recorded, transcribed verbatim, and coded by two blinded coders for intervention adherence using an adapted coding scheme (see Intervention Adherence section for more information). Two weeks after the laboratory visit (i.e., 6 weeks postsurgery), participants completed an established telephone memory interview 20 to assess youth memories for pain. Memory interviews were conducted by trained research assistants who were blinded to the intervention status. 
The same pain measures were used for baseline assessments and memory interviews (i.e., youth reported their memories for pain and baseline/postsurgery pain using the same scales). At the end of the memory interview, the interviewer opened a sealed envelope containing the participant's group allocation to debrief participants appropriately. Parents in the intervention group reported the intervention acceptability. Finally, 4 months postsurgery, youth reported their pain characteristics using online surveys. Participants allocated to the attention control group received a handout summarizing the intervention principles. Protocol Deviations The study was registered as a randomized clinical trial (n = 90 parent-child dyads). However, due to insufficient funding, the trial was stopped. The primary aims were modified to assess the intervention feasibility and acceptability. The trial measures remained the same. Twenty-three dyads were recruited, with the last dyad to receive intervention/control group allocation joining the study in June 2019. Twenty-five parent-child dyads were enrolled in the study; however, due to the lack of funding, the last two dyads were not randomized to receive control/intervention instructions. Thus, the registered trial criteria were not met. Instead of intervention efficacy, the collected data were used to assess the intervention feasibility, acceptability, and adherence. The following protocol change occurred after the study began: Instead of watching the Planet Earth video, in line with previous research utilizing active attention control instructions, 17 participants in the control group received information about volunteering at Alberta Children's Hospital. Additionally, we would like to acknowledge a mistake regarding the number of groups (i.e., three) in the registered protocol; the study had two groups. 
Attention control and normal reminiscing comprise one group (i.e., the control group); participants randomized to the control group received attention control instructions and reminisced as they normally would about their past surgery. Randomization and Blinding A researcher not otherwise involved in the clinical trial or in the delivery of clinical care performed block randomization (1:1) using a random number generator. 26 A different researcher blinded to the study hypotheses sealed group allocations into opaque, sequentially numbered envelopes. The interventionist and other investigators were blind to group allocation. At the start of the lab visit, the interventionist (the first author, M.P.) opened the envelope with a number corresponding to the participant number to reveal group allocation. The interventionist then delivered the instructions according to the group allocation. Other investigators remained blind to group allocation until the end of the memory interview; group allocation was revealed to the memory interviewer to debrief participants appropriately and to assess acceptability for those in the intervention group. Statistical analyses were performed by the first author (M.P.). The analyses took place after data collection; therefore, the first author, who delivered the intervention, was not blind to group allocation at the time of data analyses; the group allocation variable was labeled as "Intervention" or "Control." Participants Seventeen youth aged 10 to 18 years and one of their parents were recruited from the General Surgery and Orthopedic Surgery Clinics at Alberta Children's Hospital. Youth were eligible to participate if they were between 10 and 18 years old and scheduled to undergo a spinal fusion or pectus repair surgery. 
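A minimal sketch of the 1:1 block randomization described above (the block size and seed are assumptions for illustration, not reported details of the trial):

```python
import random

def block_randomize(n_participants, block_size=4, seed=26):
    """1:1 two-arm block randomization: each block contains equal numbers
    of Intervention and Control slots in shuffled order, which keeps the
    arms balanced throughout recruitment."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = ["Intervention"] * (block_size // 2) + ["Control"] * (block_size // 2)
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]

# e.g., allocations for the 23 dyads that reached randomization
alloc = block_randomize(23)
print(alloc.count("Intervention"), alloc.count("Control"))
```

In practice, the sequence would be generated once by the independent researcher and sealed into the numbered envelopes, so that the interventionist cannot predict upcoming allocations.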
Youth were excluded if they had severe cognitive impairment or developmental disorders, were not able to access the Internet, had serious chronic health and/or life-threatening conditions (i.e., American Society of Anesthesiologists ≥III physical status), could not speak English, and/or did not have a parent who could speak English. Ethics The University of Calgary conjoint health research ethics board approved the study (REB17-0426). Participants received standard pre- and postsurgical pain management. Surgical teams were blinded to the group allocation and followed standard anesthesia and surgery protocols. No adverse effects were reported. Control Group Similar to previous research, 17,27 active attention control instructions were provided. Previous research taught parents the principles of child-directed play 17,27; however, given the older age of the current study's participants, different information was offered. Specifically, youth and parents randomized to the control group learned about and received a handout summarizing volunteering opportunities at the Alberta Children's Hospital. On average, the control instructions lasted 12.8 minutes (SD 4.0). During the control instructions, no information about the surgical experience was mentioned or elicited. Intervention Group Youth and parents in the intervention group learned about optimal ways of reminiscing about past experiences involving pain. This standardized intervention was previously tested in a sample of children aged 4 to 7 years undergoing tonsillectomy. 
Based on efficacious 24 memory-reframing interventions for needle procedures [21][22][23] and observational data demonstrating the influence of parent-child reminiscing on children's memories for pain, 28 the present intervention focused on three key principles of pain memory reframing: (1) highlighting the positive aspects of past surgery experience while avoiding pain-related words (e.g., hurt, sore, pain), (2) identifying and correcting any exaggerated memories for pain, (3) validating youth bravery during the pain experience and discussing effective pain coping strategies; to adapt the intervention for the older age group, selfvalidation (e.g., saying "I was brave") was taught. In line with previous research, 17 participants were given a rationale for the importance of pain memories (i.e., being powerful predictors of future pain) and their malleability through reminiscing and received a handout summarizing the intervention to use while talking about the surgery. Intervention instructions lasted, on average, 18.5 (SD 4.0) minutes. Previous memory-reframing interventions [21][22][23] were delivered directly to young children by researchers; the principles of the parent-led memory-reframing intervention 17 were taught to parents to use with their young children when reminiscing. Due to the older age and cognitive capacity of the participants in the current study, we decided to teach the intervention principles to both youth and parents. Patient Engagement The study team interpreting the results included a patient partner (J.S.) in addition to pain researchers (M.N., J.K.), a pediatric surgeon (M.B.), and clinical psychology trainees (M.P., T.L.). The patient partner provided her feedback regarding the intervention (see Discussion) and was compensated to reflect her contribution, in line with best practices. 
29

Demographic Characteristics

Parents reported their age, gender, ethnicity/race, education level, and household income, as well as their child's age, gender, and ethnicity/race.

Primary Outcomes

Intervention Feasibility. In line with previous research, 17,30 intervention feasibility was assessed using recruitment statistics and parent report of how motivated they were to learn and understand the intervention.

Intervention Acceptability. The Treatment Evaluation Inventory-Short Form 31 was used to assess the intervention acceptability. The measure is reliable and valid. 31 Parents also reported whether they used the intervention principles with their children after the laboratory visit using a scale from 0 = not at all to 10 = a lot. Parents used the same 11-point scale to rate their rapport with the interventionist, as well as their understanding of, and motivation to learn, the intervention principles.

Intervention Adherence. Intervention adherence was assessed by coding parent-child reminiscing narratives that followed the intervention/control instructions and that had been subsequently transcribed verbatim. A previously adapted 17 coding scheme was used to code for intervention-congruent and incongruent language used by parents and youth (i.e., six codes: words related to positive emotions, negative emotions, anxiety/fear, pain, coping, and bravery). To account for varying narrative lengths, a proportion was calculated for each of the six codes by dividing each by the total number of codes used by each participant. Two researchers blind to group allocation coded a randomly selected 20% (n = 4) of the narratives with intercoder reliability ≥.80 (Cohen's kappa). 32 The primary coder (T.L.) coded the remaining narratives.

Secondary Outcomes

Memory Biases.
For the purposes of this pilot study, youth memory biases for pain intensity, pain unpleasantness, and pain-related anxiety on day 1 postsurgery and during acute recovery periods (i.e., an average for days 1-3) were secondary outcomes. Pain intensity, unpleasantness, and anxiety were assessed in line with previous research 33 and to capture both sensory and affective dimensions of the multidimensional pain experience. 34,35 Memory biases were analyzed and reported to determine the observed effect size of the intervention and to calculate the required sample size for a larger definitive RCT. In line with previous research, memory biases were defined as a within-person deviation between the initial and recalled pain intensity and pain-related unpleasantness/anxiety ratings. 17,20,28 Negatively biased pain memories were defined as recalling higher levels of pain intensity, unpleasantness, or anxiety compared to initial ratings. Positively biased pain memories were defined as recalling lower levels of pain intensity, unpleasantness, or anxiety than initial ratings. A trained researcher blind to group allocation conducted an established telephone interview previously used in pediatric surgical cohorts 14,15 to collect the ratings needed to calculate the memory biases 6 weeks postsurgery. Youth recalled both the sensory (i.e., pain intensity) and affective (i.e., pain unpleasantness and anxiety) aspects of their postsurgical pain at two time points when pain is typically most severe 36 : (1) on day 1 postsurgery and (2) during the acute recovery period (i.e., days 1 to 3 postsurgery); thus, acute recovery encompassed the first time point (i.e., day 1 after surgery). These time points have been used in previous postsurgical pain memory research. 14,33 Each question was anchored with a specific time frame and location (e.g., day 1 after surgery at the hospital).
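The within-person memory-bias definition above can be sketched in a few lines. This is an illustrative sketch, not the study's analysis code; the function names and the example ratings are invented, and the 0-10 NRS scale is taken from the Methods.

```python
# Illustrative sketch of the memory-bias definition: the bias is the recalled
# rating minus the initial rating, both on the same 0-10 numeric rating scale.
# Positive bias score = negatively biased recall (remembering MORE pain);
# negative bias score = positively biased recall (remembering LESS pain).

def memory_bias(initial: float, recalled: float) -> float:
    """Within-person deviation between recalled and initial pain ratings."""
    return recalled - initial

def classify_bias(initial: float, recalled: float) -> str:
    b = memory_bias(initial, recalled)
    if b > 0:
        return "negatively biased"   # recalled more pain than initially reported
    if b < 0:
        return "positively biased"   # recalled less pain than initially reported
    return "accurate"

# Hypothetical example: a youth rated day-1 pain as 6/10 but recalls it as 8/10.
print(classify_bias(6, 8))  # -> negatively biased
```

The same classification applies to the unpleasantness and anxiety ratings, since all three are compared against their own initial values.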
The same measures (described below) were used in the memory interview as well as the baseline and follow-up questionnaires (i.e., 1 week before surgery, acute recovery [1 to 3 days postsurgery], 2 weeks after surgery, 4 months after surgery).

Pain Characteristics and Outcomes

Pain intensity was assessed using an 11-point numeric rating scale (NRS) ranging from 0 (no pain) to 10 (worst pain possible). The NRS has demonstrated good psychometric properties in pediatric samples undergoing spinal fusion and pectus repair surgeries. 37 Pain unpleasantness was rated on a 5-point Likert scale assessing how much pain was bothersome over the past 7 days (ranging from 0 = not at all to 4 = very much). The scale has been previously used in a pediatric perioperative sample. 33 Pain-related anxiety was assessed using an 11-point NRS (0 = not anxious/nervous, 10 = extremely nervous or anxious). Similar scales have been used in previous research on children's pain. 38 Pain interference was assessed using the pain interference subscale of the PROMIS-25 Profile (Patient-Reported Outcomes Measurement Information System 25-item pediatric short form). 39 The subscale's four items are rated on a 5-point Likert scale and assess the extent of everyday impairment due to pain. The scale has excellent psychometric properties and has been used in youth with chronic pain. 40,41

Sample Size and Power

The initial RCT was based on a formal sample size estimation (i.e., n = 90) that was not appropriate for the present purposes given that the primary outcomes changed when the trial was modified to assess feasibility and acceptability. Sample sizes ranging from 8 to 114 participants are typical for pilot studies examining intervention feasibility and acceptability. 42 However, we acknowledge that the sample size of 17 parent-child dyads was not initially planned and is the result of the early study termination.

Statistical Methods

Data analyses were performed using SPSS (v27).
43 Descriptive statistics were used to characterize the sample and determine intervention feasibility and acceptability. Independent samples t, χ², and Fisher's exact tests compared groups on sociodemographic variables and intervention adherence. To determine the observed effect size of the intervention on youth pain memories, we conducted six one-way analyses of covariance. In line with previous research, 15,17 memory biases were defined as a relative deviation between the initial and recalled pain ratings. This was statistically modeled by including the initial pain intensity score on day 1 postsurgery as the covariate, memory for pain intensity on day 1 from the 6-week assessment as the dependent variable, and group (intervention or control) as the between-subjects factor. To calculate the observed effect size of the intervention on youth pain outcomes (i.e., pain intensity and pain interference 4 months postsurgery), we compared the intervention and control groups using a series of independent sample t tests.

Results

The RCT was conducted from January 2018 to June 2019; it was stopped due to the lack of funding. Forty-five parent-child dyads were assessed for eligibility (Figure 2). Nine dyads could not be reached prior to surgery; seven families declined to participate. Five dyads did not complete the baseline questionnaires. One dyad did not complete the lab visit and memory interview. There were no significant differences in sociodemographic parameters between participants who completed the study and those who withdrew. Data from 17 parent-child dyads were analyzed. Data were missing at random (Little missing completely at random test P = 0.97); no data imputations were performed. The sample (82% mothers, 76% girls, youth M age = 14.1 years, parent M age = 49.0 years) was mostly white and educated (73% of parents completed a college degree; Table 1). Most children presented with scoliosis (82%) and underwent spinal fusion (82%).
Control and intervention groups did not significantly differ on sociodemographic variables (Table 1) or initial and recalled levels of pain characteristics (Table 2).

Intervention Feasibility

Seventy-four percent (n = 17) of enrolled participants completed the study up to the memory interview. All participants (n = 8, 100%) allocated to the intervention group received the intervention and completed the study. All but one participant (n = 9, 90%) randomized to the control group received attention control instructions and completed the study.

Intervention Acceptability

Parents reported being motivated to learn the intervention (M = 6.9/10, SD 2.4). They understood the purpose of the intervention (M = 7.8/10, SD 1.9) and reported a good level of rapport with the interventionist (M = 7.5/10, SD 1.9). Parents reported using the intervention strategies after the lab visit as 4.4/10 (SD 2.6; 0 = not at all, 10 = a lot). The intervention was rated as highly acceptable (M = 40.8/45, SD 3.9).

Intervention Adherence

Parents allocated to the intervention group used words associated with memory-reframing principles more frequently compared to participants in the control group (Table 3). Specifically, parents allocated to the intervention group more frequently used words associated with positive emotions, t(15) = 2.7, P = 0.016, and bravery, t(7) = 3.3, P = 0.012, and less frequently used words associated with negative emotions, t(15) = −2.4, P = 0.029, and anxiety/fear, t(15) = −2.2, P = 0.042, compared to the control group. Parents did not differ in their use of words associated with pain and coping as a function of group allocation (all Ps > 0.05). Youth use of content codes did not significantly differ across the two groups except for anxiety-/fear-related words (t(11) = −2.3, P = 0.045), which were used less frequently by youth in the intervention group compared to the control group.
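The adherence analysis above (per-participant proportion scores compared across groups with independent-samples t tests) can be sketched as follows. The word counts below are invented for illustration only; they are not the study's data, and the real analysis was run in SPSS rather than Python.

```python
from scipy import stats

# Sketch of the adherence analysis: each parent's count for a code category
# (e.g., positive-emotion words) is divided by that parent's total number of
# codes, yielding a proportion; proportions are then compared between groups
# with an independent-samples t test. All counts here are hypothetical.

def proportion(code_count: int, total_codes: int) -> float:
    return code_count / total_codes

# Hypothetical (count, total) pairs per parent in each group
intervention = [proportion(c, t) for c, t in [(8, 20), (6, 18), (9, 22), (7, 19)]]
control      = [proportion(c, t) for c, t in [(3, 21), (2, 17), (4, 23), (3, 20)]]

t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```

Normalizing by each participant's total codes is what allows narratives of very different lengths to be compared on the same scale.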
The Effect of Intervention on Youth Pain Memories

At the 6-week follow-up, groups did not differ on memory biases for day 1 or acute recovery pain intensity, anxiety, or pain unpleasantness (all Ps > 0.05; Table 4). The largest effect size for the intervention was observed for youth memory for day 1 pain intensity (η p 2 = 0.22, P = 0.074), with youth allocated to the intervention group (M = 4.9/10, SD 2.8, 95% confidence interval [CI] 3.3-6.5) recalling pain in a more accurate or positively biased way compared to the control group (M = 6.9/10, SD 2.1, 95% CI 5.4-8.6). To detect an η p 2 = 0.22 effect size with a type I error of 0.05, 80% power, and two covariates (i.e., age and gender), a sample of 203 youth would be required.

The Effect of Intervention on Youth Pain Outcomes

Intervention and control groups did not significantly differ on pain intensity or interference at the 4-month follow-up (all Ps > 0.05). The largest effect size of the intervention was observed for youth pain interference (η p 2 = 0.23), such that youth in the intervention group reported lower levels of pain interference (M = 48.3, SD 4.5) than youth in the control group (M = 52.6, SD 4.5). To detect an η p 2 = 0.22 effect size with a type I error of 0.05, 80% power, and two covariates (i.e., age and gender), a sample of 186 youth would be required.

Discussion

The goal of this pilot RCT was to assess the feasibility and acceptability of a memory-reframing intervention in a sample of youth undergoing major surgery. The study also aimed to assess participants' adherence to the intervention principles. Recruitment and parent report indicated good feasibility. 30 Parents allocated to the intervention group rated the intervention as highly acceptable. The feasibility and acceptability ratings of the intervention are in line with previously reported ratings of a similar intervention tested in a sample of young children undergoing minor surgery.
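The ANCOVA described in the Methods (initial day-1 pain as covariate, recalled pain as dependent variable, group as factor) and the partial eta-squared effect size it yields can be sketched as below. The data are simulated, the variable names are illustrative, and the sketch uses statsmodels rather than the SPSS procedure actually used in the trial; the study's own power calculation may also have used different software and assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated data standing in for the trial's ratings (the real analysis used
# the 17 dyads' actual scores); a true group effect is built in for illustration.
rng = np.random.default_rng(42)
n = 40
initial = rng.integers(2, 10, n).astype(float)          # initial day-1 pain, 0-10 NRS
group = np.repeat(["intervention", "control"], n // 2)
recalled = initial + np.where(group == "intervention", -1.5, 0.5) + rng.normal(0, 1, n)

df = pd.DataFrame({"initial": initial, "group": group, "recalled": recalled})

# One-way ANCOVA: recalled pain ~ covariate (initial pain) + group factor
model = smf.ols("recalled ~ initial + C(group)", data=df).fit()
aov = anova_lm(model, typ=2)

# Partial eta-squared for the group effect: SS_group / (SS_group + SS_residual)
ss_group = aov.loc["C(group)", "sum_sq"]
ss_resid = aov.loc["Residual", "sum_sq"]
eta_p2 = ss_group / (ss_group + ss_resid)
print(f"partial eta^2 for group: {eta_p2:.2f}")
```

The partial eta-squared computed this way is the effect-size metric reported in the Results (e.g., η p 2 = 0.22 for day-1 pain intensity memory), which in turn feeds the sample-size estimates for the definitive RCT.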
17 Parents in the intervention group followed the intervention principles when reminiscing with their children about their recent surgery. In contrast, youth allocated to the intervention group did not follow intervention instructions. There were no statistically significant differences in youth pain memories or pain outcomes as a function of group membership. Nevertheless, the observed effect sizes provide an estimate of the required sample size for a definitive RCT testing the efficacy of the intervention. Intervention adherence varied among parents and youth. Similar to the previous study with young children undergoing tonsillectomy, 17 parents randomized into the intervention group used words congruent with the intervention principles (i.e., using words associated with positive emotions and bravery more frequently; using words associated with negative emotions and anxiety/fear less frequently). However, there were no significant differences between the groups in parent use of words associated with pain and coping strategies. Previous research 28 has demonstrated the role of parent reminiscing content in the development of children's memories for pain. More frequent parent use of pain-related words was associated with more negatively biased memories for postsurgical pain in young children. 28 Further, when parents attend more to pain immediately before or during acute pain experiences (e.g., needle procedures), children are observed to experience more distress and pain. 44 High frequencies of pain-related words when reminiscing about past painful experiences may increase children's distress and bring the distressing sensory aspects of past pain into focus. Thus, it would be important to further emphasize, and model, the avoidance of pain-laden language. Reminding children about successful coping skills is another key part of pain memory reframing.
23 It may be challenging for parents to recall coping strategies that worked for their children due to their own distress, 45 which may explain nonsignificant differences in use of coping language across the two groups. The intervention may be adjusted to more explicitly encourage parents and youth to recall successful coping skills. The recalled coping skills may, then, be incorporated into visual reminders to be used during and after the intervention, similar to Chen and colleagues' memory-reframing intervention. 23 In the present study, youth were present for the intervention/control instructions. Therefore, youth use of intervention-congruent and incongruent words was examined in addition to parent intervention adherence. Youth did not use any bravery-related words when reminiscing about their surgery with parents. Youth also did not differ in their use of words associated with positive/negative emotions, pain, and coping. However, youth in the intervention group used fear-/anxiety-related words less frequently compared to the control group. Previous observational research on child reminiscing patterns demonstrated that young children who used more emotion-laden language when reminiscing about past tonsillectomy had more positively biased or accurate pain memories. 28 The use of words associated with negative emotions while reminiscing may, however, depend on the levels of experienced stress. In a study of children aged 2 to 13 years who suffered a stressful injury, went to an emergency department for treatment, and recalled it later, the use of emotion-laden language differed. 46 Children who, according to their parents' ratings, found their experience to be highly distressing provided less details about any of their emotions compared to children who were less distressed. 46 The intervention instructions therefore may be tailored to children's levels of distress associated with their postsurgical experience. 
Distressed youth may need more reminders and encouragement to introduce positive emotion-related words into their reminiscing narratives. Negative emotion-focused words may need to be further discouraged. In a sample of adolescents who reminisced with their caregivers about a traumatic natural disaster event (i.e., a tornado), more frequent mentions of negative emotion words by adolescents were associated with higher levels of adolescents' anxiety. 47 Children may naturally use negative emotion words less frequently when reminiscing about past events involving pain compared to past events involving sadness. 48 The intervention did not significantly change youth memories for pain, nor did it influence youth pain outcomes 4 months postsurgery. These null results are likely due to the small, underpowered sample size of the present pilot study. Preliminary analyses were performed to calculate the sample size required for a larger RCT examining the intervention efficacy in youth undergoing major surgery. The larger RCT's preregistered sample size of 90 youth was based on medium effect sizes observed in previous memory-reframing interventions. 24 However, a recently published RCT of the intervention efficacy in young children revealed a smaller effect size of the intervention on children's memories for pain. 17 Based on the effect size observed in the current study, a sample size of at least 203 youth would be required to detect the effect of intervention on youth pain memories. Further, the potential effect size of the intervention on youth pain outcomes was hypothesized based on the memory-reframing interventions that were tested in procedural pain contexts. 21,23 To our knowledge, no studies have tested the efficacy of memory reframing in the context of postsurgical pain. Preliminary analyses were needed to examine the observed effect size and adjust sample calculations. The observed effect was largest for pain interference at 4 months postsurgery.
To detect an effect of a similar size, a sample of 186 youth would be required. Based on these preliminary analyses, the registered sample size of the larger RCT (NCT03110367) will be changed to 250 parent-child dyads with a 20% overrecruitment to account for attrition. The intervention acceptability was measured using parent report only, which does not align with the values of and initiatives for patient engagement in study design and development. Nevertheless, after the study was completed, a youth patient partner (J.S.) provided her feedback on the intervention as well as ways to improve it. Based on patient partner feedback, the following modifications of the intervention should be considered in the future trials. First, more active involvement of youth participants in the intervention design and the intervention procedure should be incorporated. Given adolescents' increasing independence and cognitive capacity, playing a more active role in treatment, as well as being given voice and space to share, construct, and reconstruct their experience of pain, is developmentally appropriate and in line with treatment benefits identified by youth with chronic pain. 49 Second, providing a more in-depth rationale for the importance of pain memories before the surgery and/or including the elements of intervention throughout the surgery preparation period would allow youth to better understand and get more invested in the intervention. Third, according to the patient partner, the intervention could be improved by explicitly validating and supporting youth during reminiscing. Indeed, interpersonal validation in pain communication was shown to be an important factor influencing affect and report of pain intensity in patients with chronic pain. 50 In conversations about chronic pain, validation may convey the listener's acceptance, understanding, and confirmation that another's pain is complex, distressing, and legitimate. 
In a study of interpersonal validation and empathy in adults with chronic pain and their partners, higher levels of validation were linked to higher levels of disclosure about pain experiences. 51 No studies have examined the levels of validation and empathic support in parent-child conversations about pain. The patient partner also highlighted the importance of asking youth postintervention whether they thought that reminiscing about their past pain was helpful and whether their parent was supportive/validating and focused on positive aspects of their past pain experience. There are limitations to this study. First, the original RCT was discontinued before the primary goals (i.e., examining the efficacy of the intervention to change youth pain memories and improve pain outcomes) were achieved. However, the importance of publishing reports of clinical trials that were discontinued and/or demonstrated null results has been emphasized. 52 Further, this study's data and preliminary results will be beneficial for future trials of the intervention. Second, the intervention acceptability assessment was limited to parent report, as well as quantitative methods. In a future RCT we are planning, both parents and youth will be invited to provide their qualitative feedback regarding the intervention. Third, the intervention was designed to take place in person. The ongoing COVID-19 pandemic highlighted the urgent need for virtually delivered interventions. Given the flexible format of the memory-reframing intervention, it can be adapted to be delivered online and practiced at home. The virtual delivery would have an advantage of greater inclusivity, because families would not need to spend time and money traveling to the laboratory visit. Further, parent-child dyads would be learning and applying the intervention principles in their usual home environment, which may increase the use of intervention principles by forming context-dependent (i.e., home) memories. 
53 Thus, simply being in the home environment where participants received the intervention would remind them about pain memory reframing. Fourth, the intervention reminders were limited to one handout summarizing the intervention principles. Additional, visually attractive reminders (as used by Chen and colleagues 23 ) may be more effective in capturing parent and youth attention and encouraging them to use the intervention principles more frequently. The intervention was limited to a single occurrence to reduce burden on families and increase intervention feasibility. However, repeated, versus one-time, memory-reframing instances may be more efficacious in changing memories for past events. 54 Future trials should consider repeated encouragement to reminisce about past pain using the intervention principles, which may be achieved using text/e-mail reminders. We have also recently argued for the inclusion of memory-reframing principles into preparation for painful procedures. 55 An abbreviated version of the intervention principles, with a focus on building up pain-related self-efficacy and reminding youth about past successful coping strategies, may be included in future trials in preparation for surgery. Finally, it was not possible to blind participants and the interventionist to group allocation, which is common for psychosocial interventions. 56

In conclusion, this pilot trial examined the feasibility and acceptability of, as well as adherence to, a memory-reframing intervention in a sample of youth undergoing major surgery. The intervention was feasible. Parents reported it to be highly acceptable. Parents, but not youth, adhered to its principles when reminiscing about past surgery. The preliminary analyses did not reveal significant effects of the intervention on youth pain memories or pain outcomes. The observed effect sizes were used to inform the sample size of a larger RCT.
Thoracic Surgery during Covid-19 Pandemic; Single Center Experience

Objective: After the World Health Organization declared the COVID-19 epidemic a pandemic, serious changes were made in the functioning of health institutions, along with restrictions in social life. The aim of this study is to investigate the operations and clinical procedures performed in a thoracic surgery clinic during the COVID-19 pandemic.

Material and Methods: In this study, the surgical procedures performed in the thoracic surgery clinic between March 2020 and June 2020, the period accepted as the first wave of COVID-19 in our country, are presented.

Results: In total, nineteen patients were operated on during this period. The average age was 44 (range 12-68) years. Forty-three COVID-19 PCR tests were performed for a total of 19 patients. Three of them were positive for COVID-19. One patient died due to septic shock during the postoperative period.

Conclusion: Malignancy and emergency surgeries can be performed by following precautions during the COVID-19 outbreak.

INTRODUCTION

On 30 January 2020, the World Health Organization (WHO) officially declared the COVID-19 epidemic a public health emergency of international concern [1]. In the following period, the first cases in our country were reported in the first week of March. To date, more than 150 thousand cases and 4 thousand deaths have been reported [2]. Although the disease seems to be under control today, the pandemic still continues. The functioning of health institutions, together with other state institutions, could not return to normal order. This study aims to evaluate the management of patients operated on at the Chest Surgery Clinic of Yıldırım Beyazıt University Faculty of Medicine, Ankara City Hospital, between March 2020 and June 2020, the most intense period of the pandemic in our country.
MATERIALS and METHODS

Emergency surgery and cancer surgery procedures were performed only in thoracic surgery clinics, in line with the recommendations of the Ministry of Health scientific committee. This study was done by examining the file records of patients who were operated on in the thoracic surgery clinic during the pandemic. Approval was obtained from the Ministry of Health, Ankara Provincial Health Directorate, and the Ankara City Hospital Local Ethics Committee.

Clinical Operation and Inpatient Care

* With the decision of the chief physician of Ankara City Hospital, the mobility of employees and patients in all clinics was reduced.
* All healthcare professionals were informed about the COVID-19 pandemic by the chief physician's training coordination center. Online seminars were organized for all healthcare professionals by the infection control committee.
* The work schedules of research assistants and specialist physicians were planned as a duty (on-call) system. Similarly, restrictions were made on the number of nurses and assistant health personnel.
* Patients were hospitalized one per clinical room. From hospitalization to discharge, only one person was allowed to remain as a companion. In addition, patient visitors were not allowed. Along with the patients, their companions were followed up for fever and symptoms.
* All healthcare workers performed patient follow-up and interventions with personal protective equipment (PPE).

Preoperative Evaluation

* In the preoperative period, all patients and their companions were informed about the use of masks, hand disinfection, and COVID-19-related hospital rules.
* A COVID-19 PCR test was performed in all patients planned to be operated on in the preoperative period. Samples from the nasopharynx and throat region were studied with a rapid test.
* Patients with negative results were evaluated with non-contrast thorax tomography before the operation.
* Patients with no signs of viral pneumonia and no ground-glass appearance in the parenchyma were operated on.
* The erythrocyte suspension, routinely prepared as 3 units for thoracic surgery in the preoperative period, was planned as one or two units due to the low blood supply in blood centers during the pandemic.
* Patients with positive COVID-19 PCR tests during the preoperative period were hospitalized in COVID clinics. Hydroxychloroquine, oseltamivir, azithromycin, enoxaparin (Clexane), and vitamin C supportive therapy were given for 5 days. Favipiravir treatment was added for those without clinical improvement. Afterwards, patients who remained in quarantine for a further 2 weeks were put on the operation list when two consecutive COVID-19 PCR tests were negative.

Intraoperative Management

* The operations were carried out in operating rooms with negative-pressure ventilation systems.
* Operating room staff and the surgical team worked with PPE. Goggles, surgical overalls, and N95 surgical masks were routinely used in all procedures.
* Procedures in lung surgery were performed primarily by specialist surgeons, since shorter surgery times were the aim. During this period, no cases were used for assistant (resident) training.
* All patients received a double-lumen intubation tube. Care was taken to avoid air leakage after surgery in lung resections. Tissue-supporting products were used in cases of lung parenchymal air leak after surgery.
* When using the videothoracoscopic method for surgical resection, the use of a CO2 insufflator was avoided. The direct thoracotomy method was used in cases where we thought that air leak control would be difficult (perforated hydatid cyst, emphysematous lung). Rapid ventilation flow and jet ventilator use were not preferred in patients requiring surgery due to severe tracheal stenosis.

Postoperative Follow-up

* The postoperative first-day follow-up of the patients was performed in single intensive care beds.
* In the postoperative period, all patients and their companions were required to use surgical masks.
* The ventilation and hygiene conditions of the patient rooms were followed closely by the nurse in charge of the ward.
* Routine antithrombotic therapy was performed in the postoperative period. The patients were mobilized early.
* During the pandemic period, pulmonary rehabilitation was performed under the observation of nurses due to the restriction of staff mobility.
* In the postoperative follow-up, patients with high fever were consulted with the infection department. A rapid PCR test was performed in those with clinical suspicion of COVID-19.

RESULTS

Between March 2020 and June 2020, 19 patients were operated on. The average age was 44 (range 12-68) years. The female-to-male ratio was 10/9. All patients had malignancy or required emergency surgery. The patients' characteristics are given in Table 1. Forty-three COVID-19 PCR tests were performed for a total of 19 patients. There was test positivity in 3 cases, and tomography findings in one of these cases. Patients who had positive test results were taken to surgery after 20 days, with 2 negative test results, after appropriate treatment. In this process, PCR tests were performed on 3 healthcare workers who were in contact with the 2 patients who were positive. PCR results were negative. In the postoperative period, 4 patients had high fever (over 38 °C) during follow-up. These cases were consulted with infectious diseases. In 3 cases, fever was associated with postoperative atelectasis. In one case, a PCR test was performed with a COVID-19 pre-diagnosis; however, the result was negative. This case was later lost due to septic shock.

DISCUSSION

[3]. In a study in which breathing was visualized with high-speed imaging techniques, it was observed that droplets that spread to the environment during coughing, speech, and sneezing could be transported more than 2 meters in a gas cloud [4].
All these studies indicate how risky patients who have undergone surgery in thoracic surgery clinics are in terms of airway contact. COVID-19 has also been shown to be transmitted by people who are asymptomatic or in the incubation period: live virus was detected in cell culture in samples taken from asymptomatic or presymptomatic PCR-positive individuals during the COVID-19 outbreak, up to 5-6 days before the onset of symptoms [5]. For this reason, we used PCR and thorax tomography examinations for the detection of asymptomatic COVID-19-positive cases in patients for whom we planned surgery. In a recent study conducted by Dr Nan-Shan Zhong's team, sampling 1099 confirmed cases, common clinical symptoms were fever (88.7%), cough (67.8%), fatigue (38.1%), sputum (33.4%), shortness of breath (18.6%), sore throat (13.9%), and headache (13.6%) [6,7]. In many studies, the symptoms were not different from those of similar viral infections. We queried these symptoms for both the patients to be operated on and the patients' relatives. Yang LI et al. divided patients with COVID-19 into 4 groups in their study [8]: Group 1 had mild clinical symptoms and no signs of pneumonia; Group 2 comprised patients with fever, respiratory and other system findings, and radiological pneumonia; Group 3 comprised patients with severe symptoms, shortness of breath, more than 30 breaths per minute, oxygen saturation less than 93%, and PaO2/FiO2 ≤ 300 mmHg. In these patients, more than 50% infiltration was seen in the lung parenchyma within 24-48 hours. The fourth group, the critical group, comprised patients with shock, respiratory failure, and other organ failure requiring mechanical ventilation. All of our cases were asymptomatic patients or patients with mild clinical findings that could be considered in the first group.
In the radiological evaluation of COVID-19 patients, ground-glass opacities and interstitial changes on tomography have been reported, especially in the peripheral areas of the lung parenchyma [9]. Pneumonic consolidated areas and, although rare, pleural effusion findings can be seen [10]. In the later period, signs of acute lung injury are observed in severe disease. The surgical protocol is not clear in COVID-19-positive cases. The absence of a guideline for the procedures performed and the lack of high-level evidence caused hesitation in practical applications in all thoracic surgery clinics. However, in many thoracic surgery clinics, a common view has been reached on performing oncological and emergency surgeries [11,12]. In a study performed in a thoracic surgery clinic during the pandemic, mortality was reported as 5 (38.5%) among 13 COVID-19-positive patients (11 lobectomies and 2 esophagectomies performed) [8]. For this reason, COVID-19 positivity is considered a serious risk factor, especially for thoracic surgeries. Smoking and chronic obstructive pulmonary disease were the most prominent risk factors in these patients. The Thoracic Surgery Outcomes Research Network classified the practical applications of thoracic surgery according to the phase of the pandemic [12]. The first phase describes a situation in which the hospital has very few COVID-19-positive patients, ample capacity, and all kinds of equipment available. Since our clinic was working in such an environment, we planned our practice in line with these recommendations. We think that our country was affected by the pandemic later than many European countries, which gave us the opportunity to adopt more accurate practices by learning from their experiences.
Emergency operations and cancer cases that could not be deferred (tumors larger than 2 cm, tumors after induction therapy, symptomatic mediastinal tumors, and invasive chest wall tumors) were operated on during this process. Tumors with a ground-glass component of more than 50%, nodules smaller than 2 cm, carcinoid and slow-progressing tumors, pulmonary oligometastases, and non-emergency bronchoscopy procedures were delayed for up to 3 months. In cases where alternative therapy such as SBRT, ablation, or endoluminal therapy (in early esophageal tumors) could be applied, surgery was not considered during this period. CONCLUSIONS Malignancy and emergency surgeries can be performed with appropriate precautions during the COVID-19 outbreak. During the pandemic period, hospital operations and thoracic surgery practice may differ for each center. The process should be managed with active planning according to hospital conditions, the phase of the pandemic, and healthcare worker and equipment capacity.
Anisotropic Diffusion Properties in Infants with Hydrocephalus: A Diffusion Tensor Imaging Study BACKGROUND AND PURPOSE: Diffusion tensor imaging (DTI) can noninvasively detect in vivo white matter (WM) abnormalities on the basis of anisotropic diffusion properties. We analyzed DTI data retrospectively to quantify the abnormalities in different WM regions in children with hydrocephalus during early infancy. MATERIALS AND METHODS: Seventeen infants diagnosed with hydrocephalus (age range, 0.13–16.14 months) were evaluated with DTI and compared with 17 closely age-matched healthy children (age range, 0.20–16.11 months). Fractional anisotropy (FA), mean diffusivity (MD), axial diffusivity, and radial diffusivity values in 5 regions of interest (ROIs) in the corpus callosum and internal capsule were measured and compared. The correlation between FA and age was also studied and compared by ROI between the 2 study groups. RESULTS: Infants with hydrocephalus had significantly lower FA, higher MD, and higher radial diffusivity values for all 3 ROIs in the corpus callosum, but not for the 2 ROIs in the internal capsule. In infants with hydrocephalus, the increase of FA with age during normal development was absent in the corpus callosum but was still preserved in the internal capsule. There was also a significant difference in the frequency of occurrence of abnormal FA values in the corpus callosum and internal capsule. CONCLUSIONS: This retrospective DTI study demonstrated significant WM abnormalities in infants with hydrocephalus in both the corpus callosum and internal capsule. The results also showed evidence that the impact of hydrocephalus on WM was different in the corpus callosum and internal capsule. Hydrocephalus is a pathologic condition in which excessive CSF accumulates in the ventricular system because of either obstruction along the CSF pathways or an imbalance between CSF production and reabsorption.
1 The enlarged ventricles and the associated increased intracranial pressure (ICP) can cause significant damage to various regions of brain, especially to the adjacent white matter (WM) tissue. [1][2][3][4] The mainstay of treatment for most hydrocephalic patients has been surgical diversion of the excess CSF to a distant location in the body. Although surgical outcomes are generally good, some patients are still at risk for cognitive, motor, and physical developmental delays. [5][6][7][8][9][10] The variability of outcomes in hydrocephalus treated in infancy may reflect the wide spectrum of injury to specific regions of the brain and the variety of recovery mechanisms in the pathophysiology in these locations. Ventricular size, CSF flow, and ICP are routinely used to guide the treatment of hydrocephalus and have been the basis for diagnostic standards and attempts to predict prognosis. However, neither these nor conventional imaging modalities have been found to be completely accurate. The pathophysiologic mechanism of brain injury has not been clearly elucidated, but measurement of diffusion parameters may provide insight into the mechanisms and/or reversibility of WM injury in childhood hydrocephalus. Diffusion tensor imaging (DTI) provides quantitative information about anisotropic diffusion properties in WM and has been applied to investigate in vivo WM damage and possible recovery in various neurologic and pathologic disorders. [11][12][13] However, to our knowledge, there are very few published articles on the use of DTI in childhood hydrocephalus. Although Assaf et al 14 studied abnormal anisotropic diffusion properties in various WM regions before and after CSF shunt surgery, they did not include patients in early childhood, the common age of hydrocephalus presentation and treatment. 
In a similar fashion, in a recent DTI study by Hasan et al 15 of children with spina bifida and hydrocephalus, the average age of participants was 12.3 ± 2.1 years, well beyond the usual age of diagnosis and treatment of childhood hydrocephalus. In our study, we used DTI to study WM integrity in infants with hydrocephalus to assess the anisotropic diffusion properties in the corpus callosum and internal capsule preoperatively. We hypothesized that 1) these WM structures would demonstrate abnormal anisotropic diffusion values (FA, MD, axial and radial diffusivity) compared with age-matched healthy children, and 2) the abnormality would be region-specific in the direction and the degree of abnormality. Patient Population We retrospectively reviewed existing clinical DTI datasets and identified 2 groups of participants for our study: a preshunt hydrocephalus group and an age-matched control group. The Institutional Review Board of Cincinnati Children's Hospital Medical Center approved the study. The preshunt hydrocephalus group consisted of 17 infants (age range, 0.03-16.14 months; age mean ± SD, 4.65 ± 4.27 months; sex ratio, 7 girls/10 boys) who were diagnosed with hydrocephalus and had MR imaging and DTI performed before shunt surgery as part of their standard clinical care. Demographics and clinical information for these patients are summarized in Table 1. Fifteen of these patients were described as initially presenting with symptoms of accelerated head growth, macrocephaly, enlarged ventricle, or ventriculomegaly. All of the patients demonstrated clinical improvement when evaluated postsurgically: all demonstrated neurologically stable examination or had normal development. The control group consisted of 17 closely age-matched children (age range, 0.20-16.11 months; age mean ± SD, 4.71 ± 4.17 months; sex ratio, 6 girls/11 boys).
These children were selected from a cohort of an ongoing project that aims to establish a frame of reference for the anisotropic diffusion properties throughout development (total n is approximately 250, 0-18 years old; n = 45 for age range 0-16 months). All of the children in the control group met the following criteria: 1) they were scanned for non-central nervous system (CNS)-related problems, 2) they had no previous record of neurologic disorder, 3) they had normal MR imaging results, and 4) they had no record of neurologic disorder for at least 4 months after the MR imaging/DTI scan. Because DTI parameters usually change with age during early childhood, 3 weeks was used as the criterion for the maximal age difference in searching for matching healthy children from the database. We were only able to find 1 match for each child in the patient group. The age difference in these 17 pairs of children ranges from 0 to 16 days (mean ± SD, 5.35 ± 4.57 days). The 2 groups were not significantly different in age (2-tailed paired t test; P = .32) or sex ratio (Fisher exact test; P = 1). The fronto-occipital horn ratio (FOHR) was used to assess ventricular size for participants in both groups. This index measures the ratio between the mean of the frontal and occipital horn width to the width of the parietal lobe and has been found to correlate well with relative ventricular size. 16 Figure 1A shows the methodology for the measurement of FOHR. Periventricular interstitial edema may be used as a radiologic sign of acute and severe hydrocephalus in children. In our study, only 3 infants with hydrocephalus were found to have abnormal periventricular T2 or fluid-attenuated inversion recovery signals. The number was not sufficient to conduct any statistical comparison and was therefore not examined further. The relatively higher water content in young infants may contribute to the low incidence of detectable interstitial edema in periventricular WM in our patient population.
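As a concrete illustration of the FOHR definition above: the index is the mean of the frontal and occipital horn widths divided by the brain width at the parietal level, all measured on the same axial image. A minimal sketch (the function name and example widths are hypothetical):

```python
def fohr(frontal_horn_width, occipital_horn_width, biparietal_width):
    """Fronto-occipital horn ratio: the mean of the frontal and
    occipital horn widths divided by the width of the brain at the
    parietal level, all measured on the same axial image."""
    return ((frontal_horn_width + occipital_horn_width) / 2.0) / biparietal_width

# Hypothetical measurements in mm; a value near 0.34 falls inside the
# control range reported below (0.285-0.374).
print(round(fohr(30.0, 32.0, 92.0), 3))  # → 0.337
```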
MR Imaging/DTI Scan All of the MR imaging/DTI scans were performed at Cincinnati Children's Hospital Medical Center between October 2003 and May 2008. Images were acquired clinically either on a 3T Magnetom Trio scanner (Siemens, Erlangen, Germany) or a 1.5T Signa Horizon LX scanner (GE Healthcare, Milwaukee, Wis). All of the DTI images were acquired with diffusion-weighted, spin-echo echo-planar imaging in the axial plane with a b-value of 1000 s/mm². In the 17 infants with hydrocephalus, 3 infants were scanned on the 3T Siemens scanner with a 12-direction DTI protocol: TR, 6000 ms; TE, 87 ms; resolution, 2 × 2 mm; and section thickness, 2 mm. The other 14 patients were scanned on a 1.5T GE scanner with a 15-direction DTI protocol: TR, 12,000 ms; TE, 81 to 101 ms; and resolution, 3 × 3 mm (n = 11) or 1.88 × 1.88 mm (n = 3). In the 17 age-matched control subjects, the parameters were even more inhomogeneous, perhaps because of the variety of protocols used for these children who were referred for MR imaging scanning for more diverse reasons. Of the 17 healthy children, 9 were scanned on the 3T Siemens scanner. Among them, 6 were scanned with a 12-direction DTI protocol: TR, 6000 ms; TE, 87 ms; resolution, 2 × 2 mm; and section thickness, 2 mm. The other 3 healthy children were scanned on the 3T scanner with a 6-direction DTI protocol: TR, 4100 or 5300 ms; TE, 84 ms; resolution, 1.56 × 1.56 mm (n = 1) or 1.72 × 1.72 mm (n = 2); and section thickness, 4 mm (n = 2) or 3 mm (n = 1). Of the 17 healthy children, 8 were scanned on a 1.5T GE scanner: gradient directions, 15; TR, 12,000 ms; TE, 66 to 97 ms; resolution, between 2.5 × 2.5 and 3 × 3 mm; and section thickness, 3 mm. A matrix of 128 × 128 was used for all patients. The differences in the DTI scan protocols were because of occasional adjustments in clinical scanning for image quality optimization.
We also conducted a variance ratio test for each pair on the basis of the observed SD and corresponding degrees of freedom, with the goal of examining whether there was a systematic image quality bias for the FA value measurement. The number of pairs that demonstrated a significant difference at a P level of .05 was found to be small (2/16, 1/7, 1/14, 0/17, and 2/17 for the 5 ROIs, respectively). An additional analysis of the effect of field strength and protocol variations on the measured diffusion values found no statistically significant differences for any of the ROIs tested and did not affect the results and conclusion of this study. This issue has been elaborated and discussed elsewhere. 17 On color-coded FA maps, 5 WM regions (Fig 2A and B) were delineated for each participant: 1) genu of the corpus callosum (gCC); 2) body of the corpus callosum (bCC); 3) splenium of the corpus callosum (sCC); 4) anterior limb of the internal capsule (ALIC); and 5) posterior limb of the internal capsule (PLIC). These are all major WM structures that can be easily identified on color-coded FA maps on the basis of their anatomic location and the knowledge of their orientation. Fibers in left-right (eg, gCC, bCC, and sCC), superior-inferior (eg, PLIC), and anteroposterior (eg, ALIC) directions are conventionally coded as red, blue, and green, respectively. We first randomly selected a healthy child and manually drew all of the ROIs on color-coded FA maps as shown in Fig 2. Then the ROIs identified in this subject were used as a guide to manually define ROIs for the other subjects as reproducibly as possible. The delineation of these ROIs followed the approach by Hermoye et al 21 and has also been described in our previous work. 22 In both ALIC and PLIC, because we did not find statistically significant differences between the left and the right side, we used the average DTI value in the analysis.
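The variance ratio test mentioned above compares two sample variances with an F statistic. A stdlib-only sketch of the statistic itself (the SDs are invented, and the critical value in the comment is the familiar two-sided 5% benchmark for roughly 20 degrees of freedom per group, not the study's exact threshold):

```python
def variance_ratio(sd1, sd2):
    """F statistic for comparing two variances, larger over smaller,
    so the ratio is always >= 1."""
    v1, v2 = sd1 ** 2, sd2 ** 2
    return max(v1, v2) / min(v1, v2)

# Hypothetical SDs of an FA measurement in a matched patient/control pair.
f = variance_ratio(0.05, 0.04)
# With about 20 df per group, the two-sided 5% critical value of F is
# roughly 2.5, so this pair would not be flagged as significantly different.
print(round(f, 4))  # → 1.5625
```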
Anatomic distortion caused by hydrocephalus made some ROIs undefinable in some patients. The corpus callosum is expected to be thinner in hydrocephalus. Because the ROIs were all defined on axial images, partial volume effects are likely to occur when CSF is included in the delineation of a very thin bCC. To minimize the potential impact, we determined that the bCC of a patient would be excluded from analysis if the corpus callosum (measured on sagittal T1 images) was thinner than the section thickness (mean ± SD, 3.6 ± 0.65 mm after excluding bCC measurements in 7 patients with hydrocephalus). Statistical Analysis We performed statistical analysis using SPSS, version 15 (SPSS, Chicago, Ill). The statistical differences between the 17 children with hydrocephalus and their age-matched control subjects were tested with the paired t test on all the DTI parameters in various ROIs. To control for the expected proportion of incorrectly rejected null hypotheses (type I error rate) in multiple comparisons, we made a correction using the false discovery rate method. 23 As reported in the literature, 21,24,25 the developmental trajectory of FA in healthy children can often be modeled by a monoexponential or a biexponential curve. The most drastic increase in values occurs in the first 24 months of life, with the values leveling off and stabilizing before 36 months. The maximal age in our study group was 16 months; the full age range over which the normal curve occurs was not represented in this cohort, and curve fitting with an exponential model was not appropriate. Therefore, the increase of FA with age was fitted with a linear model. The 95% prediction interval was also calculated.
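The false discovery rate method cited above is commonly implemented as the Benjamini-Hochberg step-up procedure; a stdlib-only sketch (the p-values are illustrative, not the study's):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: sort the p-values, find
    the largest rank k with p_(k) <= k*q/m, and reject the k smallest.
    Returns one boolean 'reject' flag per original hypothesis."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# Five hypothetical paired t-test p-values, one per ROI.
print(benjamini_hochberg([0.003, 0.03, 0.03, 0.4, 0.17]))
# → [True, True, True, False, False]
```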
For each ROI, the value of FA was objectively determined to be abnormally high, normal, or abnormally low on the basis of whether the DTI index was above, within, or below the prediction interval at the 95% confidence level as derived from the regression analysis in the normal group. The term frequency of occurrence is used to reflect how often the abnormal DTI measurement occurs in a certain cohort. It is equivalent to the percentage of individuals who can be categorized to a certain subgroup. The Freeman-Halton extension 26 for the Fisher exact test was used to evaluate the 2 × 3 contingency table and to assess the statistical significance (at a level of P = .05) of the different frequency of occurrence in abnormal FA either across ROIs or across subject groups. Comparing FOHR between Infants with Hydrocephalus and Age-Matched Control Subjects The FOHR for the control group followed a normal distribution, with a range from 0.285 to 0.374 (mean ± SD, 0.336 ± 0.023, Fig 1B). The FOHR of the patients with hydrocephalus (range, 0.395-0.648; mean ± SD, 0.530 ± 0.065) demonstrated a wider range of values, with the minimum close to the maximum ratio seen in the control group (paired t test, P < .0001). No statistically significant correlation was found between FOHR and DTI parameters in any of the 5 ROIs examined. Comparison of DTI Parameters between Infants with Hydrocephalus and Age-Matched Control Subjects FA, MD, axial diffusivity, and radial diffusivity values for the 2 study groups are presented in Table 2. FA values in the corpus callosum in children with hydrocephalus were significantly lower than those in age-matched control subjects in all 3 ROIs (2-tailed paired t test controlled for multiple comparisons, P < .003, P < .03, and P < .03 for gCC, bCC, and sCC, respectively). MD values in the 3 ROIs in children with hydrocephalus were all higher than those in age-matched control subjects.
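The three-way classification described above (abnormally high, normal, or abnormally low relative to the 95% prediction interval of the control-group regression) can be sketched as follows. This is a stdlib-only illustration that substitutes the normal quantile for the exact t quantile, so the interval is slightly narrow for small n; the data and names are hypothetical:

```python
from statistics import NormalDist, fmean

def prediction_interval(ages, fas, age0, level=0.95):
    """Fit FA = a + b*age by least squares on control data and return
    the approximate prediction interval (lo, hi) for a new subject at
    age0. Uses the normal quantile as a stand-in for the t quantile."""
    n = len(ages)
    xbar, ybar = fmean(ages), fmean(fas)
    sxx = sum((x - xbar) ** 2 for x in ages)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(ages, fas)) / sxx
    a = ybar - b * xbar
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(ages, fas))
    s = (sse / (n - 2)) ** 0.5                      # residual SD
    se = s * (1 + 1 / n + (age0 - xbar) ** 2 / sxx) ** 0.5
    z = NormalDist().inv_cdf(0.5 + level / 2)
    fit = a + b * age0
    return fit - z * se, fit + z * se

def classify(fa, lo, hi):
    """Label a patient's FA against the control prediction interval."""
    return "low" if fa < lo else "high" if fa > hi else "normal"

# Synthetic control data: FA rising linearly with age (months) plus noise.
ages = [0, 2, 4, 6, 8, 10, 12, 14, 16]
fas = [0.30 + 0.01 * a + e for a, e in
       zip(ages, [0.005, -0.005] * 4 + [0.005])]
lo, hi = prediction_interval(ages, fas, age0=8)
print(classify(0.20, lo, hi), classify(0.38, lo, hi))  # → low normal
```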
This difference was statistically significant in the gCC (P < .05) and bCC (P < .05); the difference in the splenium showed a trend but did not reach statistical significance (P = .17). No statistically significant differences in axial diffusivity were demonstrated for any ROI in the corpus callosum between the 2 groups. In contrast, all 3 ROIs in the corpus callosum showed significantly higher radial diffusivity in patients compared with age-matched control subjects (P < .03, P < .03, and P = .05 for gCC, bCC, and sCC, respectively). No statistical difference in mean DTI parameters was found between the 2 groups for either the ALIC or PLIC. Figure 3 shows the linear regression between FA and age for sCC (Fig 3A) and PLIC (Fig 3B). In general, FA values of children in the control group increased with age in all 5 ROIs, consistent with the published literature. 21,24,25 Within the age range in our study, this increase followed a linear pattern with statistical significance in healthy children for all ROIs (P < .001). The curve-fitting coefficients and the other statistical results are listed in Table 3. This significant linear correlation between FA and age was not observed in children with hydrocephalus in any of the 3 ROIs in the corpus callosum (Table 3). However, FA and age still correlated highly in children with hydrocephalus in both the ALIC and PLIC. For example, FA values of patients in PLIC (Fig 3B) correlated significantly with age (R² = 0.549; P = .002). Comparison of residuals from the linear regression found that both groups were unbiased, with a mean value of zero. No increasing or decreasing spread about the regression line was observed as the age increased for any of the ROIs. The Frequency of Occurrence of Abnormal FA Values in Children with Hydrocephalus In the corpus callosum, most patients had abnormally low FA values (13/16, 8/10, and 9/15, for gCC, bCC, and sCC, respectively).
No child with hydrocephalus was found to have an abnormally high FA value. In the internal capsule, however, only a small portion of the group had abnormally low FA values (2/17 for both ALIC and PLIC), and some had abnormally high FA values (2/13 and 6/17 for ALIC and PLIC, respectively), which was not observed in the corpus callosum. Figure 4 demonstrates the regional difference in the frequency of abnormal FA between the corpus callosum and the internal capsule. The difference is statistically significant (P < .01, with control for multiple comparisons) when comparing an ROI in the corpus callosum (gCC, bCC, or sCC) with any ROI in the internal capsule (ALIC or PLIC). On the other hand, no statistical significance was observed between any 2 ROIs within either the corpus callosum or the internal capsule. Discussion This is the first DTI study of WM abnormalities in children with hydrocephalus during early infancy. Previously published reports have used DTI to examine the effects of hydrocephalus on WM in older children or in young adults. 14,15 In our study, the study population ranged in age from 1 day to 16 months at the time of the pretreatment MR imaging/DTI study. Thus, it fills a knowledge gap and provides a radiographic reflection of the effects of hydrocephalus on developing WM. The sensitivity of DTI to assess WM diffusion properties in patients with hydrocephalus as demonstrated in our study potentially may help determine whether shunt surgery would be of benefit in borderline ventriculomegaly and benign external hydrocephalus or augment the fetal counseling process for parents. It may also aid in the management of shunted hydrocephalus, allowing a more accurate assessment of structural integrity than the volume measurements used at present.
We included a closely age-matched healthy control group to establish a frame of reference for determining the normality of diffusion properties in the patient group. The age difference between the 17 pairs of children ranged from 0 to 16 days. Such close matching was intended to eliminate the confounding factor of age because DTI parameters have been found to change dramatically during early childhood. 24,25,27 Any study of WM in pediatric patients must account for this dramatic change occurring during development, especially the first 36 months of life, to reach any conclusion about pathologic alterations. Our results demonstrate that the DTI parameters in the corpus callosum in infants with hydrocephalus are abnormal, and this deviation from the normal range is region specific. Similar to previous studies in older subjects, 14 we found that the corpus callosum in the patients had lower FA and higher MD values. We also found that radial diffusivity in all of the ROIs in the corpus callosum increased significantly, which can explain the above 2 changes. In addition, the correlation of FA with age during normal development seen in our control group and in other studies 24,25,27 was absent in the corpus callosum in children with hydrocephalus. Furthermore, no increasing or decreasing spread about the regression line was observed as the age increased for any of the ROIs, suggesting that WM development does not follow a normal trajectory in children with hydrocephalus in regions closest to the ventricles. In the internal capsule, the mean FA value and other DTI parameters for children with hydrocephalus were not found to be abnormal in either ALIC or PLIC. The trend of linearly increasing FA with age was also preserved, which may indicate a normal developmental pattern in this brain area.
However, further examination demonstrated that, though FA in most patients with hydrocephalus fell within the age-appropriate normal range, there was a greater degree of variability of values in the patient group, with some patients exhibiting abnormally high FA values and others having abnormally low FA values. In PLIC, the abnormally high FA values were found mostly in infants older than 3 months, which is in line with the study by Assaf 14 of older children with hydrocephalus. Extrapolating from the observations presented in Fig 4, we found that the frequency of occurrence of abnormalities was region specific (ie, FA in patients with hydrocephalus is more often low in the corpus callosum but high in the internal capsule). We can hypothesize that the impact of hydrocephalus is region specific and is likely related to the proximity to the enlarged ventricles. WM structures located farther away from the ventricles, such as the internal capsule, will exhibit less severe damage because the effect is dampened by the intervening compressible deep gray matter and WM structures. No statistically significant correlation was found between FA and ventricle size. It is believed that there is a wide range of variability in the association between ventricle size and outcomes. Different underlying injury mechanisms may contribute to different FA measurements. The increase of FA as seen in the internal capsules of some patients has sometimes been suggested to be the result of mechanical compression that leads to increased homogeneity in fiber orientation. The decrease of FA in the corpus callosum, on the other hand, is often regarded as a reflection of permanent WM damage (eg, myelin sheath and axonal cell membrane damage, or demyelination). The latter hypothesis correlates with our observation of increased radial diffusivity in the corpus callosum.
However, without knowing the exact underlying mechanism of injury, it is difficult to predict whether ventricle size and FA are inversely or positively correlated. Discrepancies between the results from our study and the published literature exist for various parameters and ROIs. For example, an increased diffusion coefficient in patients with hydrocephalus has been identified in previous DWI and DTI studies. 14,28-30 Our study demonstrates a similar trend of MD change in the corpus callosum but not in the internal capsule. The variability in the FA value in the internal capsule seen in our study also differs from other results. 14 These differences may be the result of variations in ROI selection. For example, the study by Ulug et al 28 defined ROIs adjacent to the ventricular horns, which do not correspond to the ROIs in our study. In addition, our cohort included extremely young patients, with rapidly changing WM water content, myelination, and axonal membrane growth in the brain. The differences one may expect during this time of WM development, compared with older children and adolescents, bear significant importance because they may reflect the diverse nature of the mechanisms of injury seen in hydrocephalus, depending on when during development the insult occurred. This study is based on retrospective analysis of existing patient data in the clinical database. To accommodate the age range and distribution of the patient group, we were only able to find 1 closely age-matched child from our database of healthy children. Although our one-to-one matching is no less valid than one-to-many matching, a larger sample size may increase the efficiency of the analysis. Like most studies of patients with hydrocephalus, we do not have histopathologic correlation of the imaging findings and analysis.
It would be ideal to validate the DTI conclusions against the current criterion standard of tissue examination, possibly by studying autopsy-acquired human brain tissue from patients with hydrocephalus or by studying brain structure in various hydrocephalus animal models. We do not have long-term (≥5 years) behavioral and neuropsychological outcome results to relate to the imaging findings in this study. This study was also limited by the heterogeneity in the causes of the patients' hydrocephalus and in the DTI scanning protocols. The current trend in neurosurgery has been to divert CSF as soon as hydrocephalus is diagnosed to prevent additional injury to the CNS. However, the injury and recovery mechanisms as well as the overall prognosis in patients with hydrocephalus warrant additional investigation. A large-scale prospective longitudinal study may be able to help address the above-noted issues. Conclusions This study demonstrated the sensitivity of DTI techniques to investigate WM integrity in pediatric patients with hydrocephalus in infancy. We found significant alterations in diffusion values throughout the corpus callosum in infants with hydrocephalus. In the internal capsule, there was a greater degree of variability in FA values, though FA in most patients fell within the age-appropriate normal range. It is anticipated that DTI may be of value in the management of hydrocephalus, possibly helping to predict long-term outcome on the basis of pretreatment WM diffusion properties.
Surveillance and outbreak reports Unlinked anonymous testing to estimate HIV prevalence among pregnant women in Catalonia, Spain, 1994 to 2009 D Carnicer-Pont (dcarnicer@iconologia.net)1,2,3, J Almeda4,3, J Luis Marin5, C Martinez5, M V Gonzalez-Soler1,3, A Montoliu1,3, R Muñoz1, J Casabona1,2,3, the HIV NADO working group6 1. Centre of Epidemiological Studies of HIV/AIDS and STI of Catalonia (CEEISCAT), Badalona, Spain 2. Department of Paediatrics, Obstetrics and Gynaecology of the Autonomous University of Barcelona (UAB), Bellaterra, Spain 3. CIBER, Epidemiology and Public Health (CIBERESP), Madrid, Spain 4. Primary Health Department Costa de Ponent, Catalan Health Institute (ICS), IDIAP-Jordi Gol, L'Hospitalet del Llobregat, Spain 5. The Catalan Neonatal Early Detection Programme, Service of Biochemistry and Molecular Genetics, Hospital Clinic, Faculty of Medicine, Barcelona, Spain 6. The members of the group are listed at the end of the article Introduction Accurate estimates of the number of individuals living with human immunodeficiency virus (HIV) infection are essential for the planning and monitoring of HIV prevention and care programmes. Studies of HIV prevalence in sentinel populations are one of the key strategies to monitor the epidemic [1], and one of the methods that has been widely used in sentinel populations is unlinked anonymous testing (UAT) [2]. By 1987, the United States and the United Kingdom (UK) had already put in place UAT programmes to improve the understanding of the evolving epidemic in their countries. Over the years, UAT in pregnant women has been substituted by regular antenatal screening programmes in most European and North American countries, and only a few countries, such as the UK and Spain, still maintain this surveillance approach.
The UAT to monitor trends of HIV infection in women giving birth in Catalonia is performed annually on blood samples collected from newborns. The presence of HIV antibodies in the newborn reflects maternal infection due to the passive transfer of maternal antibodies to the infant. Since this testing is unlinked (prior to HIV testing, the link between the specimen and the personal identifying information is removed) and anonymous (the health staff cannot identify an individual's test result), it is impossible to inform the women of the test results. The use of sentinel populations to estimate prevalence is a common practice, and UAT in these populations has been regarded since its introduction as a good tool to prevent the participation bias associated with populations at risk (the higher the risk, the lower the willingness to participate) [2]. In Catalonia, UAT has proven to be an easy and cost-effective tool to monitor prevalence because of its association with other screening programmes that provide very good coverage of the population of women of childbearing age. The objective of this study was to describe the HIV epidemic and trends in women giving birth and those terminating pregnancy as an estimation of the HIV prevalence in pregnant women in Catalonia. Methods In the period from 1994 to 2009, we used samples from newborns of women living in Catalonia collected as part of an annual cross-sectional study. In addition, we analysed blood samples from women voluntarily terminating their pregnancy in three selected clinics in Catalonia in the period from 1999 to 2006. Women giving birth The Catalan Neonatal Early Detection Programme (NEDP) has been collecting blood spot samples from all newborns since 1994. These samples are used to determine hypothyroidism, phenylketonuria and cystic fibrosis in newborns. This screening is carried out annually by the Institute of Clinical Biochemistry (Institut de Bioquímica Clínica, IBC) and covers 99% of all infants born in Catalonia [3].
For 1994, we obtained samples for HIV antibody detection from this pool of the NEDP for the period between August and December. For all subsequent years until the end of 2009, we selected samples from every second month. The total sample obtained represents half of the yearly newborns in Catalonia [4]. Before determination of HIV antibody status, the samples from women giving birth were screened for neonatal metabolic disease. The remaining dried blood spots were used for the HIV antibody detection. This is a UAT programme to estimate HIV prevalence in pregnant women. Although this meant that the women could not be informed of the result, all of them were offered HIV testing as part of their routine screening during pregnancy, and women who tested positive there were offered treatment. The annual number of samples needed to estimate a prevalence of between 1.8 and 2.8 per 1,000 with a 95% confidence interval and a precision of 0.06% is around 35,000 samples. The yearly mean of samples obtained during our period of study was 34,391 [5].

Women terminating pregnancy
The second source of information to monitor HIV prevalence in pregnant women were blood samples taken from women attending three specialised medical centres to terminate their pregnancies. Informed consent was required to obtain these samples. All dried blood spots from women terminating pregnancy were sent to the IBC for HIV antibody detection. There were at least 11,000 voluntary interruptions of pregnancy annually in the three centres participating in the study. Testing all samples from these centres, we can therefore estimate a prevalence of 2 per 1,000 with a 95% confidence interval and a precision of 0.08%.
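The stated sample sizes can be sanity-checked with the standard normal-approximation formula for the precision (95% confidence interval half-width) of a proportion. The following is an illustrative sketch, not the authors' actual calculation:

```python
import math

def ci_half_width(prevalence, n, z=1.96):
    """95% CI half-width (normal approximation) for an estimated proportion."""
    return z * math.sqrt(prevalence * (1 - prevalence) / n)

# ~35,000 newborn samples/year at the upper end of the expected
# prevalence range (2.8 per 1,000):
newborns = ci_half_width(0.0028, 35_000)
# ~11,000 samples/year from pregnancy terminations at 2 per 1,000:
terminations = ci_half_width(0.002, 11_000)

# Both round to the precisions quoted in the text (0.06% and 0.08%).
print(f"newborns: {newborns:.2%}, terminations: {terminations:.2%}")
```

Both half-widths agree with the precisions quoted in the text, which also supports reading the target prevalence range as per-1,000 rather than per-cent.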
In women terminating their pregnancy, information on age was available for those sampled in the years 1999 to 2006. Mean age comparisons between women giving birth and those terminating pregnancy were performed for this period of time. Information about country of origin was poor and was discarded in the analysis of this set of samples.

Sample analysis
Sample collection and HIV antibody detection were done using dried blood spots. Two drops of blood were collected on filter paper discs (Schleicher and Schuell no. 903TM, Dassel, Germany) and stored at 4 °C until used. HIV antibodies were determined using a modified Serodia IgG antibody-capture particle agglutination test (GACPAT) for HIV-1 (Fujirebio Diagnostics) [6]. Positive samples were sent to the Microbiological Service of the University Hospital Germans Trias i Pujol (HUGTiP) to confirm the results using an IgG antibody capture ELISA for HIV-1 and HIV-2. Until 2001, this was done using the GACELISA test (Murex, UK) [7]. In 2002, this confirmatory test was replaced with the Pasteur HIV-1/2 GenElavia Mixt ELISA (BioRad, Spain) after checking that normal and external valid values were similar for both tests [8]. Variables collected in the study were the HIV status of the pregnant women, age and country or region of origin. Confidentiality for both data sets (women giving birth and those terminating pregnancy) was ensured by using a computer-aided coding process at the NEDP. The results of HIV antibody testing could not be correlated with any patient identification number. The annual HIV prevalence among women of childbearing age was computed as the number of HIV-positive samples divided by the total number of HIV-positive and HIV-negative samples tested each year, with 95% confidence intervals. Trends were analysed using the Cochran-Armitage test. Data were analysed using Stata SE 8. For the age variable, a comparison between women giving birth and those terminating pregnancy was done by the non-parametric Mann-Whitney U-test.
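The prevalence estimate and trend analysis described above can be sketched in a few lines. This is an illustrative re-implementation (per-1,000 prevalence with a normal-approximation 95% CI, and a basic Cochran-Armitage trend z statistic), not the Stata code used in the study:

```python
import math

def prevalence_per_1000(pos, total, z=1.96):
    """Prevalence per 1,000 with a normal-approximation 95% CI."""
    p = pos / total
    half = z * math.sqrt(p * (1 - p) / total)
    return 1000 * p, 1000 * (p - half), 1000 * (p + half)

def cochran_armitage_z(pos, totals, scores=None):
    """Cochran-Armitage test for a linear trend in proportions across
    ordered groups (e.g., survey years); returns the z statistic."""
    scores = scores if scores is not None else list(range(len(pos)))
    N, R = sum(totals), sum(pos)
    pbar = R / N
    t = sum(s * (r - n * pbar) for s, r, n in zip(scores, pos, totals))
    var = pbar * (1 - pbar) * (
        sum(n * s * s for n, s in zip(totals, scores))
        - sum(n * s for n, s in zip(totals, scores)) ** 2 / N
    )
    return t / math.sqrt(var)

# Overall figures reported in the Results: 1,081 positives in 581,593 samples
prev, lo, hi = prevalence_per_1000(1081, 581_593)
print(f"{prev:.2f} per 1,000 (95% CI {lo:.2f}-{hi:.2f})")
```

For two groups with scores 0 and 1, the Cochran-Armitage z statistic reduces to the familiar pooled two-proportion z test, which provides a convenient correctness check.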
Results
Among the 581,593 blood spot samples analysed, 549,689 were from infants born during the years 1994 to 2009 and 31,904 from women terminating their pregnancy during the years 1999 to 2006. We obtained 1,081 HIV-positive results, representing a global prevalence of 1.85 per 1,000. Overall, we tested 54% of all women giving birth in Catalonia, ranging from a low of 53%.

HIV prevalence in women giving birth by country or region of origin
Country of birth information was available only for women giving birth between 2002 and 2009, with poor completion in 2002 (country of origin was unknown in 79% of records) but much better completion in 2009 (missing information in only 2% of the records). We observed an increasing trend in HIV prevalence between 2007 (1.6 per 1,000) and 2009 (3 per 1,000) among women born abroad, compared to lower prevalence rates and a decreasing trend from 1.3 per 1,000 to 1.1 per 1,000 among Spanish women in the same period. Prevalence was particularly high among those from Sub-Saharan Africa, reaching 6.9 per 1,000 in 2004 and 5.4 per 1,000 in 2009 (Figure 3).

HIV prevalence trends in women terminating pregnancy versus those giving birth
Information on women terminating pregnancy was available only for the period 1999 to 2006. We analysed samples from 31,904 women who interrupted their pregnancy in the three participating centres, representing 27% of all women who legally interrupted pregnancy in Catalonia.
Figure 2. HIV prevalence in women giving birth, by age, Catalonia, 1994-2009 (n=549,689). HIV: human immunodeficiency virus.

HIV prevalence during this time period did not differ between women terminating pregnancy and women giving birth (p=0.06), with 42 of 31,904 (0.13%) and 522 of 293,120 (0.18%) HIV-positive samples, respectively. HIV-positive women terminating pregnancy were younger than those giving birth (average age 26.6 versus 30.6 years; p<0.0001) for the same time period. A non-significant decreasing trend in HIV prevalence was observed in women who voluntarily interrupted pregnancy (p=0.066), from 2.3 per 1,000 in 1999 to 1.0 per 1,000 in 2006 (Figure 4).

Discussion and conclusion
Unlinked anonymous surveillance of newborns and women interrupting pregnancy allowed us to estimate the HIV prevalence among pregnant women as a surrogate for HIV infection prevalence in women of childbearing age. We found this method to be feasible and reliable in Catalonia. Our study provides 16 years of meaningful information, albeit limited to the variables age and country of origin.
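The comparison between the two groups (p=0.06) is consistent with a pooled two-proportion z test applied to the reported counts. The following is an illustrative check under that assumption, not necessarily the authors' exact method:

```python
import math

def two_proportion_test(pos1, n1, pos2, n2):
    """Pooled two-proportion z test; returns (z, two-sided p-value)."""
    p1, p2 = pos1 / n1, pos2 / n2
    pooled = (pos1 + pos2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p_value

# 42/31,904 positives (terminations) vs 522/293,120 positives (births), 1999-2006
z, p = two_proportion_test(42, 31_904, 522, 293_120)
print(f"z = {z:.2f}, p = {p:.3f}")  # p is approximately 0.06
```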
Data from women voluntarily interrupting pregnancy were included with the objective of identifying any potential bias due to voluntary interruption of pregnancy among women with higher rates of HIV infection [9]. However, their HIV prevalence was similar to the one found in women giving birth. Nevertheless, the small sample studied cannot guarantee representativeness for all interrupted pregnancies performed in Catalonia, because important hospitals did not contribute data. The HIV prevalence rates followed a decreasing trend between 1994 and 2002, rose in the following three years (2003 to 2005), dropped in 2006 and then increased again in the years up to 2009. This rise was observed not only in Sub-Saharan African mothers but also in mothers from other European countries and Latin America. As expected, the seroprevalence observed in this study reflected the prevalence in the regions where the study population originated. For the decade 2000 to 2010, the HIV prevalence is reported as around 50 per 1,000 in Sub-Saharan Africa, around 5 per 1,000 in Latin America and around 2 per 1,000 in other European countries [10,11]. Compared to other autonomous regions of Spain for which data are available, Catalonia has since the early 1990s had one of the highest HIV prevalence rates [12,13], after the Canary and Balearic Islands. Over the period from 1995 to 1998, the prevalence rates we observed in Catalonia decreased from 3.1 to 1.7 per 1,000. Other European countries such as Germany, Italy and the UK, where UAT has been used since the early 1990s, had different experiences in the same time period: rates did not change significantly in Italy [14,15], Scotland [15] or Germany [15]. Information available for the years 1999 to 2004 shows that HIV prevalence estimations from UAT in Catalonia followed a different trend than, for example, those in the UK [15], where the prevalence was systematically increasing over the years (Table).
HIV prevalence among pregnant women in the World Health Organization European Region [16] has been monitored using three methods: seroprevalence studies based on UAT of either newborns or pregnant women, seroprevalence studies based on multiple data sources (for other sexually transmitted diseases such as syphilis or hepatitis), and systematic collection and reporting of the results of diagnostic testing carried out among pregnant women in antenatal care or at delivery. Most of these countries are nowadays prioritising the third method because of increased accessibility to testing through antenatal care and the establishment of national registers of pregnant women, thus making UAT potentially redundant.

In Catalonia, UAT of neonatal dried blood spots taken for metabolic screening has been carried out since 1994, and the policy of universal antenatal HIV screening was introduced in 1996 [17]. However, to obtain prevalence rates through antenatal HIV screening, we would need information on the number of pregnant women tested for HIV, and in our country the systems to obtain this information are not yet in place. Therefore, UAT has been continued, mainly because data and sample collection are simple and cheap and have the added advantage of providing unbiased prevalence rates. On the other hand, UAT of blood taken from women voluntarily interrupting their pregnancy was stopped in 2007 due to small sample sizes and low representativeness.
As in other regions of Spain, pregnant women in Catalonia are offered HIV screening in the first trimester of pregnancy and, if they are at risk of exposure, also during the third trimester of pregnancy [18]. A survey of HIV testing coverage conducted in Catalonia in the year 2000 found that 89% of women were tested during pregnancy, which at the time was assessed as good coverage [19,20]. Current policy aims at 100% coverage, and there is concern regarding subpopulations that never reach antenatal care because of low educational level, low interest or arrival in the country at the time of delivery. It is worth noting that between the years 2000 and 2009, the foreign population in Catalonia increased from 2.9% to 15.9% of the total population [21]. Targeted efforts to include foreign mothers are not in place or are of dubious efficacy. Strengthening surveillance and promoting testing at voluntary counselling and testing sites may support the already existing and well-functioning antenatal care programme. Another important use of the UAT data is to produce estimates of HIV infections in order to plan and monitor HIV prevention and care programmes.

In conclusion, since routine HIV surveillance does not provide data on undiagnosed infections and there is evidence that immigrants may not have access to prenatal care until delivery, data from UAT in Catalonia are still useful to complement the epidemiological data on this infection. Moreover, UAT among pregnant women is still the best available surrogate for HIV prevalence among the sexually active female population.
Chemokines Associated with Pathologic Responses to Orthopedic Implant Debris

Despite their success in restoring mobility and quality of life, the over 1 million total joint replacements implanted in the US each year are expected to eventually fail after approximately 15-25 years of use, due to slow, progressive, subtle inflammation in response to implant debris that compromises the bone-implant interface. This local inflammatory pseudo-disease state is primarily caused by implant debris interaction with innate immune cells, i.e., macrophages. This implant debris can also activate an adaptive immune reaction, giving rise to the concept of implant-related metal sensitivity. However, a consensus of studies agrees that the dominant form of this response is innate reactivity by macrophages to implant debris danger signaling (danger-associated molecular pattern), eliciting cytokine- and chemokine-based inflammatory responses. This review covers implant debris-induced release of cytokines and chemokines due to activation of the innate (and the adaptive) immune system and how this leads to subsequent implant failure through loosening and osteolysis, i.e., what is known of central chemokines (e.g., IL-8, monocyte chemotactic protein-1, MIP-1, CCL9, CCL10, CCL17, and CCL22) associated with implant debris reactivity as related to innate immune system activation/cytokine expression, e.g., danger signaling (e.g., IL-1β, IL-18, IL-33, etc.), toll-like receptor activation (e.g., IL-6, tumor necrosis factor α, etc.), bone catabolism (e.g., TRAP5b), and hypoxia responses (HIF-1α). More study is needed, however, to fully understand these interactions in order to effectively counter cytokine- and chemokine-based orthopedic implant-related inflammation.

INTRODUCTION
Total hip and knee replacements are examples of incredibly successful medical technologies, with overall success rates of >90% at 10 years after surgery (1).
However, the rate of failure grows with increasing time after surgery, where survival rates at 15-20 years post-op are very low, at less than 50%. Currently, greater than 40,000 hip arthroplasties are revised each year in the US because of non-infection (aseptic)-related implant failure (painful implant loosening), and this is expected to increase by approximately 140% for total hip and 600% for total knee revisions over the next 25 years (1). Painful loosening is a serious long-term complication because of the clinical/surgical risks of revision surgery. Implant debris-induced biological reactions have been well established as the central cause of long-term implant failure (2,3). However, other mechanisms have also been shown to contribute to the pathogenesis of implant failure, such as high fluid pressures forcing fluid between the bone and implant, endotoxin contamination (lipopolysaccharide from Gram-negative bacterial membranes), and stress shielding, where reduced stresses imposed on bone lead to decreased remodeling (4). Various mechanical factors, such as micromotion, may play a role in the induction of aseptic loosening not only directly but also indirectly through the formation of additional implant debris such as wear particles. Aseptic implant failure due to inflammation is responsible for >70% of total hip arthroplasty revisions and >44% of total knee arthroplasty revisions (2,5). Local bone loss (or peri-implant osteolysis) is initiated by inflammatory responses to innate immune system interactions with small implant wear particles (generally <10 µm in diameter), resulting in persistent cytokine- and chemokine-induced inflammation in the peri-implant milieu (6). The focus of this review is the identification of the central chemokines and cytokines involved in these innate and adaptive inflammatory reactions to implant debris (e.g., wear particles and metal ions).
When particles activate the inflammasome pathway, cells release mature IL-1β, IL-18, IL-33, and other cytokines and chemokines. Once phagocytosed by APCs such as macrophages, particles such as asbestos and implant debris induce danger signaling through mechanisms such as lysosomal destabilization. This lysosomal destabilization then causes a cascade of NADPH (nicotinamide adenine dinucleotide phosphate) oxidase activity and an associated increase in reactive oxygen species, which then activates the intracellular multi-protein "inflammasome" complex composed of NALP3 (NACHT-, LRR-, and pyrin domain-containing protein 3) in association with ASC (apoptosis-associated speck-like protein containing a CARD domain) (17,18). This inflammasome activation then activates Caspase-1, which does not act as an apoptosis stimulus (despite its caspase nomenclature) but rather converts cytokines such as IL-1β and IL-18 (and others) from their inactive into their active form. Recent studies demonstrate a polarization toward an M1 phenotype for macrophages in response to implant debris challenge (released metal ions and particles) (Figure 1) (19). Thus, given that wear particles are biologically active and influence the innate immune pathway, the amount, appearance, rate of production, time of exposure, and antigenicity of the wear particulates (and their breakdown products) are all important factors (8,20). The macrophage M1-associated cytokines released after contact with wear debris include IL-1α, IL-1β, IL-6, IL-10, IL-11, IL-15, TNF-α, transforming growth factor α, granulocyte-macrophage colony-stimulating factor (GM-CSF), macrophage colony-stimulating factor (M-CSF), platelet-derived growth factor, and epidermal growth factor (Figure 1) (21)(22)(23).
Lymphocytes
All metal implants release implant debris through wear and corrosion (24,25), and the released metal ions, while not sensitizers on their own, can act as haptens, activating the immune system by forming complexes with native proteins (26)(27)(28). Nickel is the most common delayed type hypersensitivity (DTH) sensitizer in humans, followed by cobalt and chromium (29)(30)(31)(32). Lymphocytes have been shown to play a central role in the failure of some kinds of orthopedic implants (33)(34)(35)(36). The subtypes of T-cells that dominate implant debris-associated responses are T-helper (TH) cells (33)(34)(35)(36). These TH responses have been characterized as a type IV DTH response. The DTH response to metal implant debris is an adaptive, slow, cell-mediated type of response. Metal-antigen sensitized and activated DTH T-cells release various chemokines which recruit and activate macrophages [Figure 2; (37)], such as IL-3 and GM-CSF (promote hematopoiesis of granulocytes); monocyte chemotactic activating factor (promotes chemotaxis of monocytes toward areas of DTH activation); IFN-γ and TNF-β (produce a number of effects on local endothelial cells facilitating infiltration); and migration inhibitory factor (signals macrophages to remain in the local area of the DTH reaction). A self-perpetuating DTH response can create extensive tissue damage. Forms of metal sensitivity testing such as the lymphocyte transformation test and patch testing (for skin reactions) are the only means to predict/diagnose those individuals that will have an excessive immune response to metal exposure that may lead to premature implant failure (approximately >1-2% of patients/year) (37). TH1 cells have been implicated as mediating metal DTH responses, as characterized by production of IFN-γ and IL-2 and, to a lesser degree, IL-17. DTH response-associated chemokines fractalkine and CD40 indicate the possibility of TH17 activity (vs non-observed TH2 cell-mediated IL-10 responses) (36,38).
However, the chemokines involved in TH1 responses, such as MIG (monokine induced by gamma interferon, i.e., CXCL9) and CXCL10 (39), have not been investigated in the context of adaptive immune responses to implant debris, and greater understanding of their roles is critically needed. Specific lymphocyte responses (e.g., TH1 cells) may be underestimated and falsely attributed to innate immune responses because relatively few locally activated lymphocytes can release macrophage-associated chemokines. It has been difficult to readily identify these responses in peri-implant tissues by such signature cytokines as IL-2, interferon-γ, TNF-α, and IL-2 receptors (40). But some studies using mRNA detection instead of tissue immunohistochemistry (IL-2) have shown increased expression of these TH1 cytokines (38).

Osteoclasts
The role of osteoclasts has been purported to be central to osteolysis, as they are the primary bone-resorbing cells. RANK(L) signaling is central for the activation of osteoclasts and activates a variety of downstream signaling pathways required for osteoclast development, but cross talk with other signaling pathways also fine-tunes bone homeostasis both in normal physiology and disease (41,42). The degree to which other cells with the potential to resorb bone (e.g., macrophages) dominate implant debris-induced osteolysis remains controversial. The roles of released cytokines such as TNF-α are important to bone-related diseases

Figure 2 | Innate immune system (i.e., macrophage) interactions with implant debris produce danger signaling (inflammasome)- and pathogen (NF-κB)-associated cytokines such as IL-1β and tumor necrosis factor α (TNF-α) and increased expression of costimulatory molecules such as CD80/86, ICAM1, and HLA-DR, where the effects on chemokine receptors such as CCR2 and CCR4 are incompletely understood.
These innate responses can trigger adaptive immune responses with destructive TH1-type cytokine profiles that then require T-regulatory cells (e.g., IL-10) to control the response (courtesy of BioEngineering Solutions Inc.).

(43), but their relative contribution to bone loss due to potent macrophage activation vs that of osteoclast activation alone, in implant debris-induced osteolysis, is not completely understood. Osteoclasts (in vitro) have been shown capable of phagocytosing a wide size range of ceramic, polymeric, and metallic wear particles. After particle phagocytosis, they remain fully functional, hormone-responsive, bone-resorbing cells (44,45). However, we have reported that when fully differentiated in vitro, osteoclasts lose the ability to release inflammatory cytokines (46), thus indicating a diminished role for osteoclasts in recruiting and potentiating implant debris-induced inflammation, and perhaps osteolysis as well.

Osteoblasts
Osteoblasts have shown the potential, when stimulated in vitro by wear particles, to produce the osteoclastogenesis factors RANKL and M-CSF and cytokines such as IL-6 and IL-8, as well as VEGF. These in vitro investigations also demonstrated debris-induced decreased de novo synthesis of type 1 collagen as well as increased expression of matrix metalloproteinase 1 (MMP-1) (47)(48)(49)(50). The important caveat here is that these are in vitro studies, and thus the degree to which osteoblasts are able to transduce implant debris stimuli into inflammatory or functional effects is less well established in vivo.

Soft Tissue Responses
Fibroblasts
Soft tissue cells such as fibroblasts are also actively involved in osteoclastogenesis and bone resorption (51,52). The most prominent fibroblast responses to implant wear debris were MMP-1, MCP-1, IL-1β, IL-6, IL-8, cyclooxygenase 1 (cox-1), cox-2, leukemia inhibitory factor, transforming growth factor beta 1, and TGFβ receptor type I.
Additionally, downregulation of the bone maintenance regulator osteoprotegerin (OPG) has been reported in osteoblasts/soft tissue cells exposed to implant debris and may contribute to a regulatory RANKL/OPG imbalance in bone homeostasis, contributing to the pathogenesis of implant debris-associated aseptic loosening/bone loss (53).

Toxicity Responses
Toxicity responses are another facet of innate immune activation, where apoptosis and hypoxia responses have been found to be induced by implant debris (54). While there is a plethora of reports by us (55)(56)(57) and others (58) implicating implant metals as "toxic" at high (and possibly clinically relevant) concentrations, there is little in terms of mechanism specificity, i.e., how implant metals induce this toxicity or what type of toxicity responses happen first. Additionally confusing is the misidentification of metal ion-induced cell death as apoptosis rather than the more accurate pyroptosis (inflammatory apoptosis) when inflammatory cytokines have been identified. One specific mechanism that has been identified is that of metal-induced hypoxia-like responses (54). Soluble and particulate metal debris have been shown to induce hypoxia-like pathology resulting in HIF-1α compensatory responses to metal implant debris by promoting both the induction of hypoxia (HIF-1α) and tissue angiogenesis (VEGF), providing a specific mechanism which explains why local soft tissue growths (fibro-pseudotumors) and apoptosis responses can form in some people with certain orthopedic implants (54). The induction of apoptosis-like responses associated with implant debris has also been correlated with implant debris in vivo, such as caspase-3 associated with macrophages, giant cells, and T-lymphocytes in local tissues (capsules and interfacial membranes) of patients with aseptic hip implants (59).
But it is important not to confuse apoptosis with danger signaling and other inflammatory pathways, because early studies using pan-caspase inhibitors (which inhibit danger signaling) erroneously concluded that inhibition of apoptosis by pan-caspase inhibitors mitigates implant-induced inflammation and osteolysis (60), when in fact it was the pan-caspase inhibition of inflammation pathways that decreased inflammation (8,11). The role of apoptosis, pyroptosis, and pyronecrosis in implant-induced inflammation is still unclear and controversial.

CENTRAL CHEMOKINES IN IMPLANT DEBRIS-INDUCED INFLAMMATION
Chemokine expression by macrophages, fibroblasts, and osteoblasts exposed to implant debris is also a central innate immune effector reaction to implant debris, enhancing migration to and inhibiting migration away from the site of implant debris (23,61). The roles of chemokines relevant to the context of orthopedic implant debris include pro-inflammatory cytokine production, pyroptosis, apoptosis, angiogenesis, and collagen production, which act together to produce aseptic bone resorption around implants. However, mostly macrophages and MSCs have been implicated as the major source of these chemokines in periprosthetic tissues, induced by different types of wear particles such as titanium, CoCr, and UHMWPE (62,63). This migration of macrophages and osteoclasts to the sites around implants leads to accelerated osteolysis (64). The chemokines particular to implant aseptic loosening pathology include IL-8, MCP-1, MIP-1α, CCL17/thymus and activation-regulated chemokine (TARC), and CCL22/monocyte-derived chemokine (MDC) (64), which have been identified in peri-implant tissues and associated with implant debris reactivity (65-67).

IL-8
IL-8, a CXC chemokine, is released by peri-implant cells such as macrophages, epithelial cells, MSCs, mast cells, and endothelial cells.
It has been well established as present in periprosthetic tissues with implant debris and has been put forward as a biomarker of peri-implant osteolysis (47,68,69). Surprisingly, implant debris can induce the production of IL-8 by human osteoblasts (47,70,71). However, the main effector cells producing IL-8 are human macrophages that have migrated to the site of implant debris-induced inflammation (63). IL-8 attracts activated macrophages and neutrophils (PMNs), which together with osteoclasts act to override the balance of bone homeostasis, resulting in bone loss over time. However, the degree to which IL-8-dependent neutrophil attraction and activation affects implant-bone integrity over time is not clear. This may be due to the difficulty in modeling this system in vitro.

Monocyte Chemotactic Protein-1
Increased expression of the chemokines MCP-1 (CCL2), MIP-1α (CCL3), and MIP-1β (CCL4) was observed in local tissues around failed arthroplasties; these chemokines were also produced by macrophages in cell culture after exposure to different types of wear particles (72). In contrast to MIP-1α, an increased release of MCP-1 was also observed from fibroblasts after exposure to titanium and PMMA particles (73). MCP-1 (CCL2) potently chemoattracts monocytes but can also recruit macrophages, natural killer cells (NK cells), and T cells through the CCR2 or CCR4 receptors (74,75). MCP-1 is produced by fibroblasts, osteoblasts, monocytes, and macrophages (74,75). Thus, as expected, implant debris can induce the production of MCP-1 in human fibroblasts, osteoblasts, monocytes, and macrophages, together recruiting innate immune reactivity [i.e., monocytes and macrophages; (72,73)]. MCP-1 has been found in peri-implant tissues of failed total joint implants, highlighting the potential of MCP-1 as a biomarker of inflammation and osteolysis (72,76).
Implant debris such as PMMA or UHMWPE particles increased MCP-1 expression in RAW 264.7 macrophage cells (77,78), where supernatant from particle-challenged macrophages caused THP-1 macrophages to migrate, an effect neutralized by the addition of an antibody to MCP-1 (77,78). While there has been some controversy as to whether blocking the MCP-1/CCR2 interaction is effective at blocking macrophage recruitment in vitro (78), in vivo studies have shown that injected MCP-1 in a murine femoral implant model resulted in exogenous macrophage recruitment (RAW 264.7 cells) to the site of injection when challenged with UHMWPE particles, and that inhibiting the interaction of MCP-1/CCR2 decreased macrophage migration (22). However, while the use of injected CCR2-deficient macrophages resulted in less recruitment to the site of particle and MCP-1 challenge, there was still recruitment, demonstrating the pleiotropic nature of other CCRs and chemokines (79). However, the role of MCP-1 may be more complex. Kim et al. reported that blocking MCP-1-induced formation of TRAP(+)/CTR(+) multinuclear cells was critical to blocking bone resorption (80). These findings show that MCP-1 is a potent chemokine involved in the complex pathology of osteolysis. However, there is a lack of in vivo (human or animal) data to indicate that interruption of a single, albeit potent, chemokine receptor interaction (MCP-1/CCR2) will reverse or prevent particle-induced inflammation (that is danger signal based) and prevent any resulting osteolysis (without significant negative consequences), given the multitude of other powerful inflammatory cytokines involved in this process and detailed in the following sections.

MIP-1
Other chemokines such as MIP-1 have a less clear role in implant debris-induced inflammation.
MIP-1 (MIP-1α/CCL3 and MIP-1β/CCL4) is produced by a variety of peri-implant cell types, including adaptive (lymphocytes), innate (monocytes and macrophages), and tissue (fibroblasts and epithelial) cells (81). MIP-1α is likely a central feature of adaptive immune responses (T-cells and B-cells) to implant debris; but to date, little evidence has shown that MIP-1 is central to the adaptive (DTH) type immune responses observed in peri-implant tissues with elevated metal debris (33,35,82). However, monocytes, neutrophils, dendritic cells, and NK cells are also affected by MIP-1, fostering adaptive immune responses (83,84). In vitro, metal (titanium) and polymeric (PMMA) implant wear debris was found to increase the production of MIP-1α by primary human monocytes/macrophages, resulting in increased monocyte migration. Countering MIP-1 with a MIP-1 antibody decreased this migratory effect (72). However, these findings have been challenged by others, where RAW 264.7 cells failed to produce increased amounts of MIP-1α when challenged with wear particles. Moreover, a neutralizing antibody to MIP-1α failed to inhibit the migration of THP-1 macrophages in culture challenged with implant debris particles (78). A lack of response was also found for MSCs during MIP-1/wear debris induction. Huang et al. found that using a neutralizing antibody to CCR1 (one of the receptors for MIP-1α) failed to affect the migration of MSCs challenged with implant debris particles in vitro. However, the actions of CCR1 involve many ligands (e.g., MIP-1α, MCP-3, and RANTES), and others have found that neutralizing the actions of CCR1 in the presence of particle challenge does indeed lead to a decrease in MSC migration and differentiation into osteoblasts (22). Thus, there is currently insufficient evidence indicating a central role for MIP-1α in the pathology of implant debris-induced inflammation and osteolysis.
CCL17 and CCL22

CCL17/TARC, CCL20/MIP-3α, and CCL22/MDC all interact with the chemokine receptor CCR4 and are important chemokines for adaptive immune responses (85). They are known to be produced mainly by cell lineages closely related to osteoclasts, such as dendritic cells, and are examples of chemokines that are produced both in secondary lymphoid organs and in peripheral tissues (86). CCL22 and CCL17 are produced by macrophages, dendritic cells, and endothelial cells and act as adaptive immune chemokines affecting the TH2 population; they are associated with allergy and dermal hypersensitivity to haptens when produced by keratinocytes and Langerhans cells (39). CCL17 and CCL22 have also been shown to be induced by exposure of bone cells (osteoclasts and osteoblasts) to metal implant debris (e.g., titanium particles) (87). In addition, CCR4, the receptor for these chemokines, was shown to be increased in macrophage-like osteoclast precursor cells (87). Moreover, the expression of CCR4 was upregulated when osteoclast precursors were stimulated with titanium particles (87). Chemokines central to implant debris-induced inflammation and bone loss, and their effects, are summarized in Figure 3. Given the complexity of the multiple receptors and chemokines involved, further study is required to understand the central mediators involved in the migration of MSCs to sites of peri-implant inflammation.

CONCLUSION

Implant debris-induced chemokine expression and the interplay between resulting chemokine and cytokine expression are incompletely characterized and currently limited to a basic understanding that a few central chemokines, including MCP-1, IL-8, and MIP-1, are important. Central among these seems to be MCP-1. However, despite this centrality, it seems unlikely that interruption of only one pathway (e.g., MCP-1/CCR2) will be effective at mitigating implant debris-induced inflammation, given the numerous responses detailed in this review and the pleiotropic nature of chemokines and chemokine receptors (e.g., MCP-1 binds to both CCR2 and CCR4, Figure 3). Additionally, it is important to note that the chemokine response is essentially a downstream effect of debris-induced inflammation (i.e., cytokine induced), and thus the single-bullet strategy of inhibiting a single chemokine to address aseptic inflammatory osteolysis is unlikely to succeed clinically as a useful strategy until this interplay is understood in more detail. It is important to note that most of our current understanding of cytokines, chemokines, and bioreactivity associated with implant debris-induced inflammation and aseptic loosening comes from in vitro models that may be overly simplistic. Continuing consensus-building in vivo investigations/evidence will be required to support current models and understanding. The serious pathology of aseptic inflammation and resultant osteolysis around joint replacement implants is intimately dependent on both cytokines and chemokines released by innate and adaptive immune reactions and by local cells around implants. These types of debris-induced inflammation are dominated by innate immune cell (macrophage) secretion of TNF-α, IL-1β, IL-6, and PGE2, which together with potent chemokines such as MCP-1 causes a persistent low-grade immune reaction resulting in peri-implant bone resorption.

Figure 3 | Orthopedic implant debris act on a number of different cells around implants, inducing the release of chemokines. Different types of immune cells are recruited by different chemokines. However, there is crossover between the receptors associated with different ligands/chemokines. This schematic highlights the complexity associated with understanding which key chemokines are best targeted for mitigating implant debris-induced inflammation (88-95).
Given the increasing number of people receiving orthopedic implants, the issue of biologic reactivity is growing more critical. There is increasing need for more detailed study of implant debris-induced cytokine and chemokine interplay to mitigate this response effectively.
microRNA global expression analysis and genomic profiling of the camptothecin-resistant T-ALL derived cell line CPT-K5

The clinical use of the camptothecin (CPT) derivatives topotecan and irinotecan has had a significant impact on cancer therapy. However, acquired clinical resistance to these drugs is common, which greatly hampers their clinical efficacy. MicroRNAs (miRNAs) are an exciting novel class of endogenous non-coding RNAs that negatively regulate the expression of up to 50% of protein-coding genes at the post-transcriptional level. Abnormal expression of miRNAs is associated with the pathogenesis of cancer and is also implicated in anticancer drug-resistance phenotypes. We used global expression analysis to examine differential miRNA expression between the camptothecin-resistant cell line CPT-K5 and its parental CPT-sensitive line RPMI-8402. In the CPT-K5 cell line, 18 miRNAs were deregulated: fifteen were down-regulated and three were up-regulated. miR-193a-3p, miR-130a-3p, and miR-29c-3p were the most down-regulated miRNAs, at 205.9-fold, 33.9-fold, and 5.5-fold, respectively, while let-7i-5p was the most up-regulated, at 3.9-fold. We used subtraction BAC-based array CGH analysis to examine genomic copy number changes. Only for the three most down-regulated miRNAs was a positive correlation found with genomic loss of the chromosomal regions in which they are encoded. Potential functional targets of the differentially expressed miRNAs were examined by searching the miRBase and miRTarBase databases. Recurrent KEGG pathways that could theoretically be affected by the deregulated miRNAs are lysine degradation, the cell cycle, and the PI3K-Akt, ErbB, and p53 signaling pathways. We show that the intracellular levels of several miRNAs are significantly deregulated upon acquisition of CPT resistance in the T-ALL derived cell line CPT-K5, and that genomic copy number changes are not a major cause of this deregulation.
In addition, the most deregulated miRNAs in our study have previously been described to be involved in various types of chemotherapeutic resistance, including resistance to CPT, gefitinib, and cisplatin, in other cancer and cell types. Our study adds to the current knowledge of the mechanisms of acquired CPT resistance. Specific miRNAs may prove to be future targets to reverse or inhibit the development of CPT resistance, thereby providing the means for more effective treatment.

Introduction

Camptothecin (CPT) specifically inhibits the nuclear enzyme DNA topoisomerase I (Top1) [1,2]. Its effect is exerted by binding to the covalent complex formed by DNA and Top1, leading to persistent DNA breaks that are turned into devastating double-stranded breaks (DSBs) upon collision with the replication or transcription machinery [3]. When these DSBs remain unrepaired, apoptosis of the malignant cells ensues, and eventually eradication of the tumor [3]. The water-soluble CPT derivatives such as topotecan and irinotecan belong to a class of chemotherapeutic drugs that have been approved for treatment of various malignancies, including colorectal, ovarian, and small cell lung cancers, leukemias, and lymphomas [4]. However, their clinical efficacy is greatly hampered by the development of resistance against CPT and its derivatives [5]. Reversing or inhibiting CPT resistance may provide the means for more effective treatment. Preclinical studies have shown that "classical" cellular alterations, such as drug efflux, metabolism, Top1 down-regulation, TOP1 mutation, and the DNA damage response, contribute to CPT resistance. However, recent lines of evidence have suggested new resistance mechanisms involving microRNA (miRNA) deregulation [6].
More than 2,500 human miRNAs have been identified. They belong to a class of non-coding RNAs 20 to 25 nucleotides in length. The precise mechanisms of miRNA function have not been fully clarified yet. It is known, however, that each miRNA can regulate the expression of up to hundreds of target genes simultaneously, while a single gene can also be targeted by multiple miRNAs. miRNAs control 30-50% of protein-coding genes [7] and are therefore involved in a multitude of signaling pathways controlling normal cell differentiation, division, and apoptosis [8,9]. There is strong evidence that dysregulated miRNAs can cause cancer and its progression [10]. However, it remains to be elucidated how they affect the response of cancer cells to chemotherapeutic treatment, in particular their involvement in the acquisition of CPT resistance in leukemic cells. In one study it was shown that mRNA and miRNA expression profiles correlated with sensitivity to FdUMP [10], fluorouracil, floxuridine, topotecan, and irinotecan across a NCI-60 cell line screen [11]. In another study it was shown that 25 miRNAs were deregulated in intrinsic CPT resistance in gastric cancer derived cell lines [12].
To explore the role of miRNAs in acquired camptothecin resistance, we took advantage of the camptothecin-resistant cell line CPT-K5 and its parental line RPMI-8402 and compared their global miRNA expression. The CPT-K5 cell line was previously developed by stepwise increasing exposure of the human T-ALL derived cell line RPMI-8402 to the CPT derivative irinotecan [13]. Later we showed that the CPT resistance correlated with a mutation at amino acid residue 533 (p.D533G, Asp->Gly) in the DNA binding domain of the Top1 enzyme [14]. The mutant enzyme acquired altered biochemical properties, especially as to how it interacts with DNA. The changed properties resulted in a higher efficiency for recognition of specific sequences and a higher stability of cleavable complexes, contributing to the cellular CPT resistance [15]. In the present study we show that 18 miRNAs are differentially expressed between CPT-K5 and RPMI-8402, and that two miRNAs, miR-130a-3p and miR-193a-3p, were the most deregulated, by a magnitude of 33.9- and 205.9-fold, respectively. In addition, we explored whether the deregulated miRNA expression correlated with changes in genomic copy numbers between the cell lines.

Cell cultures and purification of nucleic acids

The cell lines CPT-K5 and RPMI-8402 were cultured as described [15]. Five million cells in exponential growth phase were used for RNA purification with the miRNeasy Mini Kit (Qiagen Nordic, Solna, Sweden) according to the manufacturer's protocol. DNA from three million cells was purified using the Gentra Puregene Blood Kit (Qiagen).
Two dual-color microarrays (version 208001V8.1) were used in a dye-swap setup, with each sample (RNA from CPT-K5 and RPMI-8402) labelled with opposite fluorophores in the two hybridizations. Three micrograms of RNA was used for each labeling reaction, and the matching labeling kit from Exiqon was used according to protocol. Upon labeling, the RNA was hybridized to the microarrays in an automated Tecan HS400 Pro hybridization station according to Exiqon's recommendations. The arrays were scanned at 532 and 635 nm in a GenePix 4000B scanner using the GenePix Pro 6.1 software (Molecular Devices, Sunnyvale, California, USA). The raw image files were analyzed in GenePix Pro 6.1 using a Gal-file annotated in compliance with miRBase release 11 [16]. Individual features were identified as irregular, and background subtraction was calculated using the "Morphologic opening" setting. Features with intensities below threshold (negative controls + 5 SD) were excluded, and only features representing known human miRNAs where at least three of the four replicated spots had passed the above-mentioned criteria were considered present and used in the subsequent analysis. The data from the two hybridizations were normalized using the global lowess algorithm in Acuity 4.0 (Molecular Devices). The normalized data from the dye-swap procedure were averaged to provide the final data. All miRNA names are reported according to the nomenclature used in miRBase 20. Exempted from this rule are data referenced from earlier publications, where miRNA names are reported according to the nomenclature used in the original publications.
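The detection-threshold rule and dye-swap averaging described above can be sketched as follows. This is a simplified illustration with hypothetical helper names; the actual analysis used GenePix Pro for feature extraction and global lowess normalization in Acuity, which are not reproduced here.

```python
import numpy as np

def filter_features(intensities, negative_controls, n_sd=5.0):
    """Keep features whose intensity exceeds the detection threshold,
    defined as mean(negative controls) + n_sd * SD, mirroring the
    paper's 'negative controls + 5 SD' exclusion rule."""
    threshold = negative_controls.mean() + n_sd * negative_controls.std()
    return intensities > threshold

def dyeswap_log_ratio(cy5_a, cy3_a, cy5_b, cy3_b):
    """Average log2 ratios from a dye-swap pair of hybridizations.
    In array A the test sample is in Cy5; in array B the dyes are
    swapped, so the sign of its log ratio is flipped before averaging."""
    ratio_a = np.log2(cy5_a / cy3_a)
    ratio_b = np.log2(cy5_b / cy3_b)
    return (ratio_a - ratio_b) / 2.0
```

Averaging with the sign flip cancels dye-specific bias that affects both hybridizations equally, which is the point of the dye-swap design.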
qPCR analysis of miRNA expression

Specific miRNA expression was investigated by qPCR using TaqMan MicroRNA Assays (Life Technologies Europe, Naerum, Denmark) according to protocol. The Mx3000P RQ-PCR System (Agilent Technologies, Santa Clara, CA, USA) was used for the PCR reactions, and output data were analyzed by the ΔCt relative quantification model using RNU6B and RNU48 (Life Technologies Europe) as reference genes for normalization.

Array-based comparative genomic hybridization analysis

Array-based comparative genomic hybridization (aCGH) analysis was done using the CytoChip BAC-array platform (BlueGnome, Cambridge, UK) as previously described [17]. CPT-K5 DNA was labeled with Cy3 and RPMI-8402 DNA with Cy5. The GenePix 4000B laser scanner (Molecular Devices) together with the GenePix Pro 6.1 software (Molecular Devices) was used to scan the microarray. The CytoChip algorithm analysis tool in the BlueFuse 3.5 software (BlueGnome) defined regions of gain or loss. The reference genome was NCBI build 36.1 (hg18). Bioinformatics analysis was performed by querying the UCSC database (http://genome.ucsc.edu).

Comparison of miRNA profiles between CPT-K5 and RPMI-8402

We examined whether differential miRNA expression between the camptothecin-resistant CPT-K5 and its parental camptothecin-sensitive RPMI-8402 cell line could be detected using a miRNA microarray platform from Exiqon. This microarray contains 5,924 probes representing 429 miRNAs, and 94 miRNAs were expressed above threshold (Supplemental Table 1). Eighteen miRNAs emerged as differentially expressed by more than a 2-fold change in the CPT-K5 cell line compared with RPMI-8402 (Table 1). Specifically, fifteen miRNAs were down-regulated and three were up-regulated in CPT-K5. miR-193a-3p and miR-130a-3p were the most down-regulated miRNAs, at 205.9-fold and 33.9-fold, respectively, while let-7i-5p was the most up-regulated, at 3.9-fold.
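The ΔCt relative quantification used above normalizes each miRNA's Ct value against the mean Ct of the two reference genes; the fold change between the resistant and parental lines then follows from the ΔΔCt. A minimal sketch with hypothetical helper names (not the vendors' software):

```python
def delta_ct(ct_target, ct_refs):
    """ΔCt: target Ct minus the mean Ct of the reference genes
    (here RNU6B and RNU48)."""
    return ct_target - sum(ct_refs) / len(ct_refs)

def fold_change(ct_target_res, ct_refs_res, ct_target_par, ct_refs_par):
    """Relative expression in the resistant vs. parental line,
    computed as 2^-(ΔCt_resistant - ΔCt_parental)."""
    ddct = delta_ct(ct_target_res, ct_refs_res) - delta_ct(ct_target_par, ct_refs_par)
    return 2.0 ** (-ddct)
```

For example, a miRNA whose ΔCt is 2 cycles higher in CPT-K5 than in RPMI-8402 comes out 4-fold down-regulated: `fold_change(26.0, [20.0, 22.0], 24.0, [20.0, 22.0])` returns 0.25.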
To validate the microarray data, four miRNAs were randomly chosen for validation by TaqMan MicroRNA assays. The selected miRNAs were up-regulated (miR-7-5p), down-regulated (miR-18a-5p and miR-130a-3p), or showed no change in expression (miR-223-3p) (Figure 1A). The expression levels of the miRNAs from real-time PCR were consistent with the results from the miRNA array analysis. A comparison of expression levels between CPT-K5 and RPMI-8402 was done by real-time PCR, confirming the observed differences in expression of miR-7-5p, miR-18a-5p, and miR-130a-3p, and the lack of change of miR-223-3p, as also determined by the microarray analysis (Figure 1B).

Comparison of differentially expressed miRNAs and copy number changes of CPT-K5 and RPMI-8402

To evaluate a possible correlation between differentially expressed miRNAs and genomic copy number changes, we performed subtractive whole-genome profiling to analyze for copy number changes between CPT-K5 and RPMI-8402 using a BAC-based aCGH microarray. Sixty-one genomic regions emerged as regions of copy number change between CPT-K5 and RPMI-8402 (Figure 2A and Supplemental Table 2). Specifically, 32 regions were gained while 29 regions showed copy number losses. We examined the copy number changes relative to the genomic positions of the differentially expressed miRNAs (Table 1, Figure 2B). In the group of up-regulated miRNAs, we observed no copy number changes between CPT-K5 and RPMI-8402 for any miRNA. In the group of down-regulated miRNAs, nine (9/18, 50%) were located in regions of copy number loss, three (3/18, 17%) in regions of copy number gain, and six (6/18, 33%) in regions with no copy number change.
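Mapping each deregulated miRNA onto the called aCGH regions, as done above, amounts to an interval-overlap lookup. A simplified sketch with hypothetical data structures (the study used BlueFuse calls against hg18):

```python
# Each called region: (chromosome, start, end, state),
# where state is "gain" or "loss"; positions outside every
# called region are treated as "no change".
def copy_number_state(regions, chrom, pos):
    """Return the copy-number state at a genomic position, or
    'no change' if the position falls outside all called regions."""
    for r_chrom, start, end, state in regions:
        if r_chrom == chrom and start <= pos <= end:
            return state
    return "no change"

def annotate_mirnas(mirnas, regions):
    """Attach a copy-number state to each (name, chrom, pos) miRNA."""
    return {name: copy_number_state(regions, chrom, pos)
            for name, chrom, pos in mirnas}
```

With the per-miRNA states in hand, the tallies reported in the text (loss vs. gain vs. no change) are a simple count over the annotation dictionary.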
For the three most highly down-regulated miRNAs (miR-193a-3p, miR-130a-3p, and miR-29c-3p) there was a correlation between their down-regulation and their location in genomic regions with copy number losses, while for the rest of the miRNAs there was no correlation between their differences in expression level and copy number changes.

miRNA targets of the changed miRNAs

To identify potential functional targets of the deregulated miRNAs, the DIANA-miRPath database (http://diana.imis.athena-innovation.gr) was searched (Table 2). Recurrent KEGG pathways that could theoretically be affected by the deregulated miRNAs are lysine degradation, the cell cycle, and the PI3K-Akt, ErbB, and p53 signaling pathways.

Discussion

In the present study we showed that 18 miRNAs are deregulated during acquisition of resistance to the camptothecin derivative irinotecan in the CPT-K5 cell line. By comparing the miRNA differences with genomic copy number changes between the two cell lines, the expression of the most down-regulated miRNAs (≥5.5-fold decrease) correlated with identified genomic losses. The most deregulated miRNAs were miR-193a-3p, miR-130a-3p, and miR-29c-3p, exhibiting a 205.9-, 33.9-, and 5.5-fold decrease in expression, respectively. For the remaining 15 deregulated miRNAs no correlation between their relative expression level and genomic copy number status was found. These findings indicate that large expression changes are more likely to be associated with copy number changes than moderate or minor expression changes. This notion is supported by the findings that massive up-regulation of MYC expression is associated with high-level amplification of the MYC oncogene [18], and that massive down-regulation is associated with gene deletions [19]. When moderate or minor expression changes are observed, factors other than copy number changes may influence the expression level [10,20]. miRNA expression is mainly regulated in a tissue-specific and disease state-specific fashion by different transcription factors, while some miRNAs are regulated
by tumor suppressor or oncogene pathways such as TP53, MYC, and RAS [21]. Deregulated miRNA expression can also result from changes in epigenetic regulation, such as the methylation status of miRNA genes [22], or may result from mutations in miRNA genes [23].

Altered intracellular levels of miRNAs interfere with the chemoresponses of a variety of cancer cells [11,12,24-26]. In a study utilizing the NCI-60 cell line panel, it was shown that mRNA and miRNA expression profiles correlated with sensitivity to FdUMP [10], fluorouracil, floxuridine, topotecan, and irinotecan [11]. In a study of the CPT-resistant colon cancer cell line SW1116/HCPT, seventy-seven miRNAs were differentially expressed compared to its parental SW1116 (30 miRNAs were down-regulated and 47 were up-regulated) [25]. Among these miRNAs, miR-548d was the most up-regulated (124.55-fold), while miR-641 was the most down-regulated (60.20-fold). The authors also performed differential gene expression analysis, showing that one gene was highly deregulated (404.1-fold) and that it is a target gene of miR-506, which has been shown to regulate some MDR proteins, which use ATP to extrude chemotherapeutic agents from the cells [27]. Intrinsic drug resistance to hydroxycamptothecin was studied in six gastric cancer cell lines, and it was shown that 25 miRNAs were deregulated in the resistant cells, including up-regulated miR-196a, miR-338, miR-126, miR-98, and let-7g, and down-regulated miR-200 family members, miR-31, and miR-7 [12]. In our study we found that let-7i-5p and miR-7-5p were the two most up-regulated and miR-193a-3p, miR-130a-3p, and miR-29c-3p the three most down-regulated miRNAs. Comparing our differential miRNA expression results with the previous studies on CPT resistance in various cell lines and cell types, only miR-7-5p is a recurrent miRNA involved in CPT resistance. We found that miR-7-5p was up-regulated, while Wu et al. found it to be down-regulated [12].
The disparate results between the different studies may be explained by variations in the molecular pathways of the different cancer cells. It should be noted that miRNAs execute their biological function via repression of many different protein-coding genes involved in a multitude of signaling pathways. Another reason could be the effect of the local tumor microenvironment, which is well known to be modulated via a variety of signaling networks [28].

The most deregulated miRNA in our study was the 205.9-fold down-regulated miR-193a-3p in the acquired CPT resistance of the CPT-K5 cell line. Its aberrant expression has been reported in all the cancer types examined, including colorectal cancer [29], non-small cell lung cancer (NSCLC) [30], myeloid leukemia [31], and Wilms' tumor blastema [32]. The transcription factors XB130 [33] and p63 [34] have been implicated in the regulation of miR-193a expression, as has the DNA methylation state of its promoter region [35]. A tumor-suppressor role of miR-193a-3p has been reported in NSCLC [30] and epithelial ovarian cancer cells [36]. Conversely, miR-193a-3p can also promote both in vivo growth and chemoresistance of hepatocellular carcinoma [35] and bladder cancer cells [37]. The bladder cancer cell line 5637 is chemosensitive when the promoter of miR-193a is hypermethylated, while the chemoresistant bladder cancer cell line is drug resistant because of a hypomethylated promoter [38]. Four direct target genes of miR-193a-3p were identified (PLAU, HIC2, SRSF2, and LOXL4), conveying bladder cancer multi-chemoresistance against various chemotherapeutic drugs, including etoposide, carboplatin, cisplatin, 5-fluorouracil, and doxorubicin. Resistance against CPT had not, prior to our study, been associated with deregulated miR-193a-3p expression.
The second-most down-regulated miRNA in our study was miR-130a-3p, exhibiting a 33.9-fold down-regulation. miR-130a plays a crucial role in tumor biology, with different functions in various cancers, acting as both an oncogene (NSCLC, cervical cancer, and colorectal cancer) and a tumor suppressor (glioblastoma, prostate cancer, and leukemia) [39]. The reasons for the apparently contradictory roles of miR-130a are not clear. Recently, it was shown that miR-130a under-expression leads to gefitinib resistance in NSCLC, whereas overexpression increases sensitivity to gefitinib [40]. The MET gene was shown to be a direct target of miR-130a, and MET amplification leads to gefitinib resistance by activating the ERBB3 signaling pathway. In ovarian cancer it was shown that under-expression conferred cisplatin resistance by targeting the X-linked inhibitor of apoptosis [41]. In hepatocellular carcinoma (HCC), miR-130a increases drug resistance by regulating RUNX3 and Wnt signaling in cisplatin-treated HCC [39]. Taken together, these findings indicate that a common drug-resistance mechanism may be ascribed to miR-130a.

We observed that miR-29c-3p was down-regulated by 5.5-fold. miR-29c has been identified as a tumor suppressor in several human cancers [42]. Low expression of miR-29c was recently shown to be positively associated with therapeutic resistance to ionizing radiation and cisplatin in 159 nasopharyngeal carcinoma cases [43]. The authors also showed that expression of the anti-apoptotic factors MCL-1 and BCL-2 in NPC tissues and cell lines was repressed by miR-29c.
Conclusions

Our present study demonstrates that the intracellular levels of certain miRNAs are significantly deregulated upon acquisition of camptothecin resistance. We show a positive correlation between genomic losses and miRNAs down-regulated by ≥5.5-fold. Interestingly, all major deregulated miRNAs described in our study have previously been described in other cancer and cell types to be involved in various types of chemotherapeutic resistance, including resistance to CPT, gefitinib, and cisplatin. Our study adds to a better understanding of which miRNAs might have a role in resistance to CPT and its derivatives. Resistance to CPT limits its clinical efficacy, and by knowing which miRNAs are involved in the resistance mechanism, future experiments with direct miRNA targets may be designed to examine whether it will be possible to reverse such resistance or to avoid its development.

Supplemental Table 1. Detailed summary of miRNA expression differences.

Figure 1. Analysis of miRNA expression. Panel A. Comparison of microarray-based and qPCR-based measurements of selected miRNAs. Log2 values for expression of miR-130a-3p, miR-18a-5p, miR-223-3p, and miR-7-5p in CPT-K5 relative to RPMI-8402 are shown. Dark and light grey bars represent data from microarray and qPCR measurements, respectively. Panel B. Fold difference in expression of indicated miRNAs measured by qPCR. Means from triplicate experiments are indicated. Dark and light grey bars represent data from RPMI-8402 and CPT-K5, respectively.

Figure 2. Analysis of genomic copy number changes. Panel A. CPT-K5 genome chart view of the BAC-based aCGH profile, with RPMI-8402 used as reference genome. Chromosomal position is specified on the x-axis and log2 ratio on the y-axis. Panel B.
Chromosome view of called regions. Blue bars next to the chromosomal ideograms indicate regions of gain, and red bars indicate regions of loss. Chromosomal positions of miRNAs that are differentially expressed in CPT-K5 relative to RPMI-8402 are indicated by open (up-regulated) or closed (down-regulated) circles, where the numbers in the circles refer to the miRNAs listed in Table 1.

Table 1. miRNA expression differences and copy number changes in CPT-K5 relative to RPMI-8402. Column headers: microRNA name (a), pre-microRNA, cytoband, genomic position (bp), mean (log2), fold change, copy number change; rows are grouped into up-regulated and down-regulated miRNAs. (a) microRNAs are named according to miRBase 20. (b) microRNAs with more than one genomic location, although it is only possible to measure the overall expression of the respective microRNAs. NC refers to no copy number change; bp refers to base pairs.
Using Intervention Mapping to Develop a Motivational Interviewing Training and Support Program for HIV Lay Counsellors to Improve ART Uptake in the Primary Health Care Setting in Gauteng, South Africa

Background

Worldwide, countries are striving to achieve universal antiretroviral treatment (ART) coverage. In South Africa, given the shortage of specialist health care professionals in the public sector, lay HIV counsellors are at the forefront of many HIV-related behavioural interventions. They have limited formal counselling training and little ongoing in-service support, leading to considerable variability in skills, knowledge, and approaches to counselling. We aimed to use the Intervention Mapping approach to develop a motivational interviewing counselling training and support program for lay HIV counsellors practising in primary health care (PHC) clinics in Gauteng, South Africa.

Methods

We applied the steps of Intervention Mapping. This included the analysis of key informant data collected among clinic managers and counsellors (the target group) and in-depth literature reviews on determinants of critical elements of target behaviours and approaches for influencing these. Extensive consultations with an expert team led to two program objectives: 1) improved general HIV counselling skills among lay HIV counsellors; and 2) sustained motivational interviewing skills for ART and HIV care demand creation. Matrices of change objectives were produced, specifying performance and change objectives as well as evidence- and theory-based training methods to achieve these.

Results

We developed a motivational interviewing counselling training and support program titled "Thusa-Thuso - helping you help". For objective one, we partnered with a seasoned psychologist and counselling trainers to recap and strengthen essential counselling skills and resilience. For objective two, we adapted the Boston University Brief Negotiated Interviewing motivational interviewing counselling training.
Adaptations included adjusting the English readability level of the training materials; translating materials into spoken Zulu and Sotho; anchoring the training around interactive sessions; and producing contextually relevant modelling videos, in-training role plays, and on-site (clinic) mentoring using the correction tool. The planned support component comprises quarterly support and mentoring sessions over 12 months.

Conclusion

The "Thusa-Thuso" motivational interviewing counselling training and support program is a contextually relevant, locally produced, scalable training and support program designed to impart sustained motivational interviewing counselling skills to lay HIV counsellors for improved ART uptake in the UTT era.

Background

Countries across the globe are striving to achieve universal antiretroviral treatment (ART) coverage among HIV-infected individuals. In 2015, South Africa, with 7.9 million people living with HIV in 2017 1-3, adopted the World Health Organization's (WHO) universal test and treat (UTT) policy and began measuring progress towards the UNAIDS 90-90-90 goals in order to increase ART coverage and reduce transmission 4,5. Guidance for ART initiation on the same day as HIV diagnosis was later provided in 2017 to help reduce loss to follow-up and decrease time to viral suppression 6,7. However, despite these efforts, just over two-thirds of HIV-infected patients had been initiated on treatment by 2017 1-3, and new approaches will be needed to achieve these targets. Plans to increase the number of patients on treatment and initiating ART on the same day as testing positive for HIV are being implemented in a context of limited specialist healthcare professionals 8-12. This has led to a greater reliance on lay health care workers to perform health promotion activities 13-15, support HIV testing, and provide adherence support and community-based HIV care and treatment 5,16,17.
Under the UTT and same-day ART initiation policies, lay health care workers will increasingly be asked to support asymptomatic HIV-infected patients (i.e. those with higher CD4 counts), who may not see the benefits of starting lifelong HIV treatment soon after diagnosis 9,18,19. Yet lay counsellors usually have limited formal training, leaving considerable variability in their skills, knowledge, and approaches to counselling 13,20-22. It is, therefore, necessary to strengthen their capacity to provide counselling in a way that improves early patient demand for ART and HIV care to support ART scale-up efforts 23,24. Motivational interviewing is a directive and collaborative counselling approach that could enable lay counsellors to effectively assist patients to navigate existing barriers to acceptance of lifelong ART and to remain in care despite existing challenges 23,25,26. While this approach has previously been used in substance abuse cessation programs, mostly by clinicians and psychotherapists in high-income countries (HIC) 20,27-29, its application in the HIV field, by lay counselling staff, in primary care settings in sub-Saharan Africa is limited 27-30. Where it has been used for HIV counselling by lay counsellors, results have been mixed, suggesting a need for greater investment to achieve any benefits 29. If motivational interviewing is to be successfully employed by lay counsellors, standard systematic approaches will need to be tailored to this population. Intervention Mapping is an iterative approach to developing 31 or adapting 32,33 tailored and context-specific interventions, following six well-defined steps.
We describe the application of Intervention Mapping to develop a contextually relevant motivational interviewing training and support program for lay HIV counsellors working in the primary healthcare setting in South Africa, targeting early ART uptake among newly diagnosed HIV patients under the same-day ART policy. Intervention Mapping Intervention Mapping provided a framework to guide the development of a training and support program for lay HIV counsellors. It is a systematic method for the development, implementation and evaluation of health interventions grounded in behavioural theory and empirical evidence 31 . The Intervention Mapping protocol outlines a six-step process, using evidence and theory, starting from identifying the health problem to systematically developing interventions to address it. It is an iterative and cumulative process, continually using outputs from preceding steps to inform subsequent steps 31 . Step 1 begins with conducting a needs assessment focusing on analysis of the health problem and development of a logic model of the problem. This involves describing the health-related problem, the at-risk population, the impact on quality of life, the environmental and behavioural factors related to the problem, and their determinants. In step 2, evidence from the needs assessment is used to select the target groups and the behavioural and environmental program outcomes based on importance and changeability. Behavioural outcomes in the target population and environmental agents are sub-divided into specific performance objectives that stipulate actions that need to be taken to change individual behaviour. Matrices of change objectives are then produced by combining performance objectives with determinants, thus creating change objectives. In step 3, theory-informed methods and practical strategies are selected and used in step 4 to produce the final intervention program.
Step 5 focuses on planning program adoption and implementation, and the last step is producing an evaluation plan to assess the effectiveness and implementation success of the intervention program 31 . The application of Intervention Mapping in designing our intervention relied on stakeholder participation and engagement as well as the incorporation of theory-based behavioural change methods. We were also mindful of the importance of the target group and implementers in the ultimate success of the intervention. Lay HIV counsellors exist and operate within communities and organisational environments, and as such we also sought to understand and incorporate the social-ecological perspective when designing the training intervention. This involved engaging with and garnering support from district and clinic managers who oversee counsellors' operational environments. Intervention Mapping is facilitated through core processes: posing questions, brainstorming with the planning group, reviewing findings from empirical literature, reviewing theories for additional constructs, assessing and addressing needs for new data, and developing a working list of solutions 31,34 . Results Step 1: Needs assessment In line with Step 1 of the Intervention Mapping process, we created an intervention planning group consisting of researchers, HIV prevention specialists from a local nongovernmental organisation (NGO) that provides HIV-specific technical support to primary health care clinics, psychologists, representatives from an advocacy group for persons living with HIV, and a local community counselling organisation. The first and final authors of the intervention planning group were formally trained in Intervention Mapping at Maastricht University (www.interventionmapping.com). Preliminary research The needs assessment process included an extensive literature review to understand the landscape of counsellor skills, practice approaches, and challenges.
The literature review was augmented by in-depth interviews with seven clinic managers and lay HIV counsellors of four primary health clinics in Johannesburg between October and December 2017, exploring current approaches to demand creation for ART 35 . We found that approaches for ART demand creation were inconsistent and counsellor dependent (Figure 1). Counsellors who had been personally affected by HIV emphasised the benefits of ART and preferred early uptake, whereas those who were less personally impacted were more concerned about preparing patients to cope with treatment challenges. The process for assessing patient readiness was poorly defined, inconsistent, and counsellor dependent. We also found that providers were unclear of the process to ensure patients who defer treatment return for ongoing counselling 35 . Figure 1. Summary of results from key informant interviews with clinic managers and lay HIV counsellors 35 . Step 2: Program goals and objectives Formative data collected among clinic managers and counsellors, as well as consultations with the planning group, confirmed and enriched evidence from extensive literature reviews. In addition to the previously described need to train lay counsellors on motivational interviewing, we identified critical counselling skills gaps and important factors to consider in the training program development, including the lack of ongoing in-service training and support. Two main program outcomes were outlined: 1) improved general HIV counselling skills among lay HIV counsellors; and 2) sustained motivational interviewing skills for ART and HIV care demand creation. The training outcomes were focused mainly on the need to improve ART uptake and retention in care among newly diagnosed HIV positive patients in South Africa.
To improve general HIV counselling skills (outcome one), we partnered with psychologists and counselling trainers and used the textbook "Elements of Counselling" 36 , which targets lay counselling staff in the South African context, to strengthen key counselling skills and develop counsellor resilience. To develop motivational interviewing skills (outcome two), we adapted and expanded on the Boston University Brief Negotiated Interview (BNI) training tools for adults. We proceeded to formulate performance objectives (POs), which are interim trainee behaviour targets needed to achieve the expected program outcomes (Table 1). These performance objectives were then combined with the evidence-based determinants of each performance objective to assist in the definition/selection of appropriate training methods to include in the program. Determinants of performance objectives were identified through a planning group brainstorming session, asking questions regarding determinants of the behavioural outcome for our target population, i.e. lay HIV counsellor motivational interviewing counselling efficacy. A preliminary list of possible determinants was drawn up, which was refined further through evidence from the literature and applicable theoretical constructs associated with the program outcome. Determinants were graded by 1) relevance (the strength of the association with the behaviour) and 2) changeability. For example, the age and gender of the counsellor are not changeable but are likely to have some influence on their efficacy in counselling patients who may differ from them in age and gender 37 . Also, low or inconsistent remuneration has been shown to negatively impact counsellor motivation, work ethic, and consequently the quality of counselling service 38,39 . Remuneration change is theoretically possible but is unlikely through a training program, as it requires intervention at the national level of government 21 .
On the other hand, knowledge and skills are critical determinants of counselling efficacy and have been found to be changeable through ongoing training, supervision, and support 13,40 . Matrices of change objectives were developed by connecting the performance objectives with identified relevant and changeable determinants. These matrices guided the development of the training program, manuals, and tools. Step 3: Program Design We then applied theory-based behaviour change methods that targeted the selected determinants to achieve the change objectives formulated in Table 1. Theories used included theories of learning 41 , information processing 42 , self-regulation, and social cognitive theory 43 . Since skills development generally relies on knowledge, practice and feedback, the behaviour change methods applied included methods to increase knowledge, increase self-efficacy, and change attitudes and outcome expectations, such as repeated exposures, reinforcement, elaboration, and providing cues. We also used behaviour change methods to increase skills, which included modelling and guided practice, applied through skills demonstration videos as well as opportunities for skills practice. To ensure contextually relevant application of the behaviour change methods used, we ensured that the training videos were conducted in the local languages and used realistic settings, model clients, and counselling scenarios to make them more relevant and meaningful to the lay HIV counsellors 44 . We also adhered to the theoretically defined conditions for the methods to be effective 44 . Step 4: Program production Description of the training intervention Evidence gathered has shown that counsellors encounter emotionally difficult situations during interactions with clients, which necessitates emotional support or debriefing incorporated into and supported through their work environment 47 .
We applied the improving physical and emotional states behaviour change method, a component of social cognitive theory 43 , by including journaling for self-care as part of our program. Self-care is taking care of one's mental, emotional, and physical health to be better able to take care of others as well. Journaling is a simple, inexpensive, and effective form of self-care which has been shown to assist in emotional healing and resilience when approached in a purposeful manner [48][49][50] . It has been effective in addressing compassion fatigue, burnout, and post-traumatic stress disorder (PTSD) among different cadres of health workers in different settings 49,50 . Branding and theme We titled the program "Thusa-Thuso -helping you help" using Sotho, one of the local languages in Gauteng. Figure 3. The "Thusa-Thuso -helping you help" branding. The theme, including the logo developed, combined the skills development and wellness support components, which aim to support the lay HIV counsellors to support their clients in deciding to adopt the health-promoting behaviours of initiating ART and remaining in HIV care. This was also the case for all illustrations included in the training materials, which used contextually relevant modelling scenarios for skill demonstrations and role-plays. We also developed locally produced training videos modelling motivational interviewing counselling using local languages and in settings similar to their work environments, with realistic model clients and counselling scenarios to make them more relatable to the trainees. The HIV treatment readiness framework We developed an HIV treatment readiness framework, an implementation support tool for the lay counsellor.
This was formulated through a theoretical model of HIV treatment readiness we developed by integrating change theory 51 . Step 5: Program Implementation Plan We presented the training program and implementation tools to key stakeholders, including decision makers at the district health management level, as well as frontline health providers who are closest to HIV testing implementation. We invited primary health care clinic managers to a workshop where we presented the training program, including implementation support tools. This consultation served three main purposes: 1) to provide further input into intervention development, 2) to gather support from the managers as gatekeepers to the implementation level, and 3) to receive input on implementation planning from managers as they oversee operations at the clinic level. The group appreciated efforts made in the bottom-up approach to the development of the training program, particularly the extensive engagement with HIV programme implementers at different levels. They strongly supported the inclusion of the emotional support component of the intervention program, highlighting that they also don't have access to any form of emotional support to help them cope with the emotional distress they are often exposed to in their profession. They also supported the need for ongoing implementation support, highlighting its importance to sustain counsellor motivational interviewing competency. Some strongly felt that the success of the program depends on it. The managers indicated that they could not authorise the release of all counsellors for the baseline training as it would disrupt HIV testing services. However, they saw the benefit of having trained counsellors and suggested that the training could be run in two rounds, with each round including half of the counselling team.
Discussion We have provided a detailed description of our application of Intervention Mapping to develop a motivational interviewing training intervention for lay HIV counsellors in the primary health care setting in Gauteng, South Africa. This resulted in the formation of the "Thusa-Thuso -helping you help" programme that provides training in motivational interviewing counselling, ongoing implementation support, and implementation support tools, as well as a scalable method for remotely counselling HIV patients to encourage retention in HIV care. We also provide support and tools for self-care journaling as a simple but effective self-care tool that can be used outside the training. The program also offers counsellors regular debriefing during quarterly refresher training sessions to ensure emotional support. We found that Intervention Mapping requires a commitment by the planning group members to the process, which demands considerable time and effort. Members of the planning group required reorientation on the key elements of Intervention Mapping, with emphasis on the importance of evidence- and theory-based decision making in the intervention development process. The Intervention Mapping steps are not necessarily sequential and therefore require a correct understanding of the process and considerable flexibility in its application. Intervention Mapping provided a systematic approach for adapting 53,54 the Boston University BNI motivational interviewing counselling training, an evidence-based intervention which has shown effectiveness in hospital emergency departments, to develop our training intervention 31,55 . We found that the majority of existing motivational interviewing training materials, including the Boston University BNI counselling training materials and videos, were created in high-income countries, mainly for substance abuse cessation programs [56][57][58] .
It was apparent that it would be difficult for lay HIV counsellors in our setting to relate to the individuals in these training videos, who do not look or sound like them or address health issues that are a priority in their setting. We used theory and evidence to adapt the training and implementation tools of the program for the local cultural context. Adaptations we made include adjusting the readability level of English training materials; translating materials to spoken Zulu and Sotho to ensure easy understanding; anchoring the training around interactive sessions to build motivational interviewing self-efficacy; using visual modelling by local trainers and locally produced videos with contextually relevant scenarios; and implementing in-training role plays. Motivational interviewing is a key aspect of our counselling intervention. It is a patient-centred, goal-oriented counselling approach that seeks to help the client resolve barriers to behaviour change 26 . With changes in HIV treatment policy, counsellors will increasingly encounter patients who may be ambivalent about starting ART. Our adaptation of the Boston University BNI algorithm and tool focused on preserving the core elements of the original program which were key to its effectiveness 33 . The algorithm provides a summary of motivational interviewing counselling 59 , and its brevity made it easily adaptable to guide lay HIV counsellors to enhance patients' motivation to take up ART treatment. We modified the tool for our setting to build a readiness framework, an implementation support tool to guide counsellors in the application of motivational interviewing counselling for ART uptake and retention in care. Through our experience in using Intervention Mapping, we came to recognise the importance of ongoing key stakeholder engagement.
This included program implementers, whose contribution is important not only because they are most knowledgeable regarding frontline conditions for implementation, but also because they are the ultimate users of the intervention. This makes getting their buy-in and support for the intervention critical for successful implementation. We organised a stakeholder workshop with managers of the clinic facilities targeted by the intervention and introduced them to the training program, including implementation support tools as well as the process we undertook to develop the program. The inclusion of an evaluation plan ensures that the effectiveness of the intervention will be measured systematically by gathering evidence from implementation. This will also provide an opportunity to make any necessary updates to the program to improve impact, as well as strengthening adoption beyond the planned sites for evaluation. Limitations Since this paper does not present evaluation data, we cannot, at this stage, draw conclusions on the effectiveness of the intervention in increasing lay HIV counsellor motivational interviewing skills or ART uptake among HIV patients. However, the face validity data obtained from primary healthcare stakeholder engagement and the various consultations indicate demand for such training, its relevance, and early satisfaction with its content. Conclusion Even with the challenges encountered during its application, Intervention Mapping offered a well-structured, evidence-based framework for developing our training and support intervention relevant to the context of the target population, with training components that reflect the program's purpose and intention to strengthen lay counsellor motivational interviewing counselling skills to improve ART uptake and retention in HIV care.
Figure: The "Thusa-Thuso -helping you help" branding
Figure: HIV treatment readiness framework
Examination of the decline in symptoms of anxiety and depression in generalized anxiety disorder: impact of anxiety sensitivity on response to pharmacotherapy. Pharmacotherapy is an effective treatment for generalized anxiety disorder (GAD), but few studies have examined the nature of the decline of anxiety and depression during pharmacotherapy for GAD, and even fewer studies have examined predictors of symptom decline. This study examined the decline in symptoms of anxiety and depression in patients with GAD during a 6-week open trial of fluoxetine. Growth curve analyses indicated that pharmacotherapy with fluoxetine led to significant declines in symptoms of anxiety and depression over the 6 weeks of treatment. However, the decay slope observed for anxiety symptoms was significantly greater than that for depressive symptoms. Further analyses revealed that the decline in anxiety remained significant after accounting for the changes in symptoms of depression. However, the effect of treatment on depression was no longer significant after controlling for the reduction in anxiety symptoms. Overall anxiety sensitivity (AS) did not moderate the level of reduction in symptoms of anxiety or depression during pharmacotherapy. However, AS specific to physical concerns demonstrated a marginal negative association with decline in anxiety and depression. AS specific to social concerns also demonstrated a marginal negative association with decline in anxiety symptoms. These findings suggest that the decline in anxiety symptoms is independent of the decline in symptoms of depression during pharmacotherapy for GAD and that specific AS dimensions may predict symptom change in GAD. The purpose of this study was to examine the influence of negative mood on body dissatisfaction.
Body dissatisfaction and negative mood consistently show positive associations in clinical (e.g., Dunkley, Masheb, & Grilo, 2010) and non-clinical samples of women (e.g., Johnson & Wardle, 2005; Santos, Richards, & Bleckley, 2007). Although a large body of research suggests that body dissatisfaction contributes to negative mood, depression also has been supported as a prospective risk and maintenance factor for body dissatisfaction in longitudinal studies (Bearman, Presnell, Martinez, & Stice, 2006; Keel, Mitchell, Davis, & Crow, 2001). Thus, some researchers have proposed a model in which negative mood increases body dissatisfaction (Griffiths & McCabe, 2000; Keel et al., 2001; Tylka & Subich, 2004). Keel and colleagues (2001) theorized that depression may cause body dissatisfaction because general negative feelings are funneled into negative feelings about body shape and weight in cultures that idealize thinness. Expanding on earlier theoretical models, Tylka and Subich (2004) posited that negative affect contributes to body image disturbance because women who experience negative affect are more likely to internalize the thin ideal and generalize negative feelings toward their bodies. Supporting these proposals, research has found that negative affect and self-esteem are unique predictors of variance in body image (Griffiths & McCabe, 2000; Tylka & Subich, 2004). If negative mood increases body dissatisfaction, we would expect changes in negative mood to precede changes in body dissatisfaction. However, correlational and longitudinal findings cannot establish whether acute changes in mood cause acute changes in body dissatisfaction. Previous research has used experimental methods to successfully manipulate mood and body satisfaction in non-clinical samples of women. Negative mood inductions led to increases in body dissatisfaction in some (Baker, Williamson, & Sylve, 1995; Cohen-Tovée, 1993; M. J.
Taylor & Cooper, 1992) but not all studies (Carter, Bulik, Lawson, Sullivan, & Wilson, 1996). Conflicting results may be due to limitations of this literature, including small sample size (N = 15; Carter et al., 1996), no control condition for comparison (Cohen-Tovée, 1993), and lack of an immediate pre-induction assessment of mood and body dissatisfaction (Baker et al., 1995). Thus, methodological limitations constrain the conclusions that can be drawn from existing experimental studies. The present study sought to examine causal relationships between mood and body dissatisfaction in a non-clinical sample utilizing a controlled experimental design with repeated assessments to evaluate changes in mood and body dissatisfaction as a consequence of negative mood induction procedures. We hypothesized that experimentally-induced increases in negative mood would cause increases in body dissatisfaction. Method Participants Participants were 45 female undergraduates recruited through campus advertisements. Eligible participants were between 18 and 25 years old, had a body mass index (BMI) in the normal range (19-24 kg/m2), and reported no prior or current eating disorder during a screening interview that covered lifetime history of eating disorder symptoms. Mean (SD) age and BMI were 20.03 (1.78) years and 21.68 (1.71) kg/m2, respectively. The sample was predominantly Caucasian (90.5%). Participants were paid $10 for their participation. This research was reviewed and approved by an institutional review board. Measures Participants completed demographic and self-report questionnaires on a separate day as part of a larger study examining factors that influence body image. Measures included global ratings of depression, body dissatisfaction, and eating pathology and were completed within one week of their participation in the current study.
In addition, participants completed assessments of their current mood, body shape satisfaction, and weight satisfaction immediately before and after experimental procedures using three visual analogue scales. Eating Attitudes Test-26 (EAT-26)-This 26-item measure assesses features commonly present in individuals with anorexia nervosa (Garner, Olmsted, Bohr, & Garfinkel, 1982). Using a scale that ranges from 1 (never) to 6 (always), participants rate how often they engage in certain thoughts and behaviors such as, "I like my stomach to be empty" and "I am terrified about being overweight." Although Garner and colleagues (1982) recommended recoding items into a 0-3 rating system, item values in the current study were summed using a continuous 1-6 rating system to counter lower variability in a non-clinical sample. Cronbach's alpha for the current sample was .92. Body Dissatisfaction subscale of the EDI (EDI-BD)- This 9-item scale measures the belief that certain body parts are too large (Garner, Olmstead, & Polivy, 1983). Because of greater sensitivity in a nonclinical population, original item values ranging from 1 to 6 (rather than recoded items) were summed to calculate a score on this measure. Cronbach's alpha for the current sample was .92. Visual Analogue Scales (VAS)-Participants' current mood, body shape satisfaction, and weight satisfaction were evaluated immediately before and after the experimental/ control procedure using three VAS. To assess mood, participants were asked, "How are you feeling right now?" with response anchors of "Extremely Unhappy" on the left end versus "Extremely Happy" on the right end. To assess weight and body shape satisfaction, participants were asked, "How do you feel about your weight right now?" and "How do you feel about your body shape right now?" with response anchors of "Extremely Unsatisfied" at the left end and "Extremely Satisfied" at the right end. 
These horizontal scales were 100 millimeters long, and participants were instructed to make one vertical mark on each line to indicate their current state. Scores were calculated by measuring the distance in millimeters from the left end of the scale to the participant's mark. Changes in VAS scores were used to evaluate changes in mood, body shape satisfaction, and weight satisfaction. VAS scores measuring satisfaction with body weight and appearance have correlated highly with the EDI-BD, demonstrating good construct validity (Heinberg & Thompson, 1995). Procedure Participants were told that the purpose of this study was to investigate body image. After providing informed consent, participants were randomly assigned by a coin toss into either the experimental (n = 21) or control condition (n = 24). Similar to methods successfully implemented by Cohen-Tovée (1993), the current study used Clark's (1983) musical mood induction method to induce a temporary increase in negative affect in the experimental group and no mood change in the control group. After completing the pre-induction VAS, participants in the experimental group received the following written instructions: "Please try to get into a sad mood. Both the statements on the cards and the music are designed to help you get into that mood. Read the statements to yourself and try to think that they are true for you." Participants then listened to an excerpt from Gabriel Fauré's "Requiem" (Op. 48, part one, "Introit et Kyrie") while viewing ten printed self-statements with negative connotations, such as "I have been dishonest" and "I do not have any true friends." Importantly, none of the negative self-statements were related to body shape or weight. Participants in the control group completed the pre-induction VAS and then received the following written instructions: "Please read the following statements and think about a time when you've observed or experienced the events described in the statements."
Control participants viewed cards with neutral statements, such as "In the mountains, the air is fresh" and "Leaves change color in the fall," while listening to an excerpt from Antonín Dvořák's "Slavonic Dances" (Op. 46, No. 1 in C major & Op. 72, No. 7 in C major). None of the neutral statements were self-statements. Musical excerpts for both control and experimental groups were approximately 10 minutes long. After the music was finished, the study administrator turned off the tape and participants completed the post-induction VAS without discussion. For both conditions, the study administrator maintained a neutral affect so as not to influence participants' responses. Data Analyses Independent samples t-tests were used to compare experimental and control participants on demographic characteristics and baseline depression, body dissatisfaction, and eating pathology. Repeated measures ANOVAs were used to examine effects of Group (experimental vs. control), Time (pre- vs. post-induction), and their interaction for influence on mood and body dissatisfaction. For each significant interaction, the simple effects of group within time and time within group were examined using a corrected p-value to control for multiple comparisons. Results There were no differences between experimental and control participants on demographic characteristics or baseline measures of global eating pathology, body dissatisfaction, or depression (see Table 1), supporting the success of randomization. Effects on Mood A repeated measures ANOVA revealed significant main effects of Group (experimental vs. control) and Time (pre-induction to post-induction) on mood. However, these main effects are qualified by a significant Group X Time interaction (Figure 1a). Both groups reported a moderately positive mood before induction procedures and did not differ from each other in the simple effects analysis (p = .41).
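The Group × Time logic described under Data Analyses can be illustrated with a small difference-in-differences sketch on VAS mood scores. The numbers below are synthetic and hypothetical, not the study's data; the sketch only shows how the interaction contrast is formed from per-group change scores, not the full repeated measures ANOVA.

```python
# Synthetic illustration (NOT the study's data) of the difference-in-
# differences logic behind a Group x Time interaction on VAS mood scores.
# Scores are millimetres on a 0-100 visual analogue scale.

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical pre/post mood ratings for a few participants per group.
experimental = {"pre": [72, 68, 75, 70], "post": [45, 40, 50, 47]}
control      = {"pre": [71, 69, 74, 73], "post": [70, 68, 72, 74]}

# Change score per group: post minus pre (negative = mood worsened).
exp_change = mean(experimental["post"]) - mean(experimental["pre"])
ctl_change = mean(control["post"]) - mean(control["pre"])

# The interaction contrast is the difference between the two change scores;
# a large nonzero value is what a Group x Time interaction test detects.
interaction = exp_change - ctl_change

print(f"experimental change: {exp_change:+.2f} mm")
print(f"control change:      {ctl_change:+.2f} mm")
print(f"interaction (diff-in-diff): {interaction:+.2f} mm")
```

In this sketch, mood drops sharply only in the experimental group while the control group is nearly flat, so the difference-in-differences is large and negative, mirroring the pattern reported for the significant Group × Time interaction.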
Control participants' mood remained relatively stable over time (p = .06), whereas experimental participants experienced a worsening of mood following the negative mood induction procedure (p < .001). Thus, results suggested that the experimental procedure was successful in inducing a negative mood. Effects on Body Dissatisfaction In addition to changes in mood, we hypothesized that experimentally induced changes in mood would cause changes in body satisfaction. There was a significant effect of Time and a significant Group × Time interaction for VAS ratings of satisfaction with weight and body shape (Figures 1b and 1c). Simple effects analyses revealed that experimental and control participants did not differ in levels of satisfaction with body shape or weight prior to induction procedures (ps > .35). Mirroring results for mood, satisfaction with body shape and weight decreased significantly in experimental participants (ps < .001) and did not change in control participants over time (ps > .17). Consistent with study hypotheses, results indicated that the negative mood induction procedure caused decreased satisfaction with body weight and shape among experimental participants. Discussion We employed a rigorous experimental design to examine causal relationships between mood and body dissatisfaction. A negative mood induction caused increased negative mood and increased body dissatisfaction, providing strong support for a causal model in which depressed mood contributes to body dissatisfaction. All participants were in a healthy weight range and, therefore, had no objective reason to experience body dissatisfaction during this study. In cultures which idealize thinness, body dissatisfaction may arise from funnelling general feelings of dysphoria into more concrete and culturally meaningful negative feelings about the body (Keel et al., 2001). 
Increases in body dissatisfaction may then lead to disordered eating behaviors, such as dieting and binge eating, that contribute to the development of eating disorders. These findings may explain why negative affect is a prospective risk factor for bulimia nervosa (Stice, 2002) and how negative affect could be a proximal trigger for disordered eating behaviors (Haedt-Matt & Keel, 2011). However, current results do not rule out the possibility that body dissatisfaction also causes increased negative mood. Indeed, previous research has found that body dissatisfaction prospectively predicts greater negative affect (e.g., Paxton et al., 2006), and there are likely reciprocal relationships in which body dissatisfaction and negative mood contribute to each other. Further, negative mood may contribute to increased body dissatisfaction, which then causes even greater negative mood, resulting in a downward spiral similar to recent research on body checking and weight/shape concerns among Caucasian women (Fitzsimmons & Bardone-Cone, 2011). Future research is needed to examine potential reciprocal relations between mood and body satisfaction. Limitations of this study included a predominantly Caucasian sample. In addition, participants were drawn from a non-clinical sample of undergraduate students, and causal relationships between mood and body dissatisfaction may differ in women with clinical eating disorders. However, depression is a significant longitudinal predictor of body dissatisfaction among women with bulimia nervosa (Keel et al., 2001). Thus, it appears that depression plays a role in the development of body dissatisfaction among a clinical sample. Future research is needed to further investigate the generalizability of these results. Participants were aware of the purpose of this study, and experimental participants were instructed to get into a sad mood. Thus, demand characteristics may have influenced who chose to participate and current findings. 
Future research is needed to examine the effects of a negative mood induction using more deceptive techniques, such as a back story or "filler" questionnaires. The current study compared a negative versus neutral mood induction condition and did not include a positive mood induction control. Future studies could investigate the impact of a positive mood induction on body satisfaction. In addition, future studies are needed to examine whether some individuals are predisposed to experiencing body dissatisfaction as a consequence of negative mood. We were precluded from examining potential moderators in the current study due to small sample size and a resulting lack of statistical power. However, trait personality characteristics and internalization of the thin ideal represent prime candidates to examine as moderators in future studies. Finally, the effects of changes in negative mood may not be specific to changes in body dissatisfaction. Increased negative mood could cause increased negative evaluations across multiple domains (e.g., feeling fat and feeling stupid). However, funnelling general negative emotions into body dissatisfaction may be specific to cultures where body shape and weight are used to evaluate one's self-worth. In conclusion, findings supported negative mood as a causal risk factor for body dissatisfaction. This research has important implications for the prevention of body dissatisfaction and, potentially, disordered eating. Most prevention programs employ a disease-specific pathway model that focuses specifically on body image, such as Student Bodies (C. B. Taylor et al., 2006) and Reflections: Body Image Program (Becker, Bull, Schaumberg, Cauble, & Franco, 2008). 
However, current results suggest that a non-specific vulnerability stressor model (Levine & Smolak, 2006), which focuses more on general mood states and sources of negative affect, such as stressful life events, lack of social support, and teasing, may also represent an effective prevention strategy. Supporting this assertion, intervention efforts targeted towards reducing depressive symptoms have been effective in reducing bulimic symptoms (Burton, Stice, Bearman, & Rohde, 2007). Importantly, current findings cannot establish whether increases in negative mood are the sole, or even primary, cause of body dissatisfaction. Thus, future studies are needed to compare the effects of programs based on a disease-specific pathway versus a non-specific vulnerability stressor model on changes in body image, disordered eating, and related mood disturbances.

Repeated measures ANOVA for VAS scores before and after mood induction procedure. *p < .05, **p < .01, ***p < .001

Note. BDI = Beck Depression Inventory; BMI = body mass index; BULIT-R = Bulimia Test-Revised; EAT-26 = Eating Attitudes Test-26; EDI-BD = Body Dissatisfaction subscale of the Eating Disorder Inventory.
Ecological competition in the oral mycobiome of Hispanic adults living in Puerto Rico associates with periodontitis

Abstract

Background: Fungi are a major component of the human microbiome that has only recently received attention. The imbalance of indigenous fungal communities and environmental fungi present in the oral cavity may have a role in oral dysbiosis, which could exacerbate oral inflammatory diseases.

Methods: We performed a cross-sectional study and recruited 88 participants aged 21 to 49 from sexually transmitted infection clinics in Puerto Rico. A full-mouth periodontal examination following the NHANES protocol defined periodontal severity (CDC/AAP). ITS2 (fungal) genes were amplified and sequenced for mycobiota characterization of yeast and environmental fungi. Environmental outdoor spore levels were measured daily by the American Academy of Allergy Asthma and Immunology San Juan station and defined by quartiles as spore scores.

Results: Our data indicate polymicrobial colonization of yeast and environmental fungi in the oral cavity. Dominant taxa associated with periodontal disease included Saccharomyces cerevisiae, Rigidoporus vinctus, and Aspergillus penicillioides, while Candida albicans was found to be ubiquitous. Fungal aerosols were found to impact the oral cavity biofilm, likely due to competition and neutralization by inhaled outdoor and indoor fungal spores.

Conclusion: To our knowledge, this is the first report showcasing the ecological competition of measured outdoor environmental fungi with the human oral mycobiota.
Introduction

Bacterial and fungal communities play an essential role in developing the oral biofilm. These communities participate in infectious dysbiotic processes of many diseases, such as dental caries, gingivitis, and periodontitis, which could lead to oral cancer. Gingivitis and periodontitis are common inflammatory diseases in which host immune responses against pathogenic microorganisms lead to inflammation and dysbiosis [1]. While gingivitis is a mild, reversible inflammation, if left untreated it can develop into periodontitis, an irreversible disease that causes chronic inflammation [2]. This chronic inflammation produces inflammatory mediators that can induce mutagenesis, cell proliferation, and cell degeneration. This process is one of the major ways pathogens take advantage of several oral surfaces, provoking tooth loss, destruction of supporting structures, and gingival recession [3]. Several factors have been associated with the exacerbation of periodontal disease, for instance smoking, heavy alcohol consumption, diet, sexual practices, and HPV infection [4].

Microorganisms cause about 20% of human cancers [5]. While microbes in saliva are not considered a direct agent of oral disease, ecological changes in gingival crevice physiological conditions may affect microbiota composition. Research has focused mainly on bacterial communities; however, fungal communities remain understudied compared to the bacterial biota [6]. For instance, fungi have been implicated in exacerbating several human diseases, but their potential role as modulators remains unexplored. It has been demonstrated that filamentous fungi and yeasts can form biofilms on both biotic and abiotic surfaces, exacerbating periodontal disease and other chronic inflammatory conditions by making them more resistant to treatments [7].
Many Candida species are considered commensal in the oral cavity [8]. However, individuals with preexisting health conditions, such as diabetes, HIV, periodontal disease, and cancer, are at a higher risk of developing oral candidiasis [7]. This infection, also known as oral thrush, often occurs when Candida overgrows in individuals with a weakened immune system, allowing the formation of biofilms by adhering to the epithelial cells in the oral cavity [7]. Although Candida albicans has been highly associated with oral thrush, many species like C. glabrata, C. krusei, C. parapsilosis, C. pseudotropicalis, and C. tropicalis have also been linked to candidiasis [9]. Formation of these biofilms is of utmost importance, as they can grow inside periodontal pockets. This process aids inflammatory responses and permits other pathogenic and opportunistic microbes to infiltrate tissues, creating more bone loss [10]. Polymicrobial associations are of clinical concern because synergistic bacterial and fungal biofilms are expected to produce infections that are more severe and more burdensome to treat with antimicrobials. Hence, understanding the relationship between pathogenic bacteria and fungi may lead to treatment strategies for periodontal disease and the prevention of oral complications.

To our knowledge, no other study before has simultaneously associated the levels of environmental outdoor fungal spores, matched daily, with oral fungi measured in the oral cavity. We investigated the composition of the oral fungal community diversity and its relationship with periodontal disease severity and periodontal risk factors among Hispanic adults. Additionally, we matched environmental spore levels and evaluated how oral mycological diversity is associated with spore counts from the recruitment days. Our study establishes possible ecological implications of outdoor fungal spores' aspiration on the resident mycobiota of the oral cavity.
Description of the study population

This cross-sectional study recruited sexually active Hispanic adults living in Puerto Rico, given the high prevalence of high-risk sexual practices in the adult population in Puerto Rico [11]. Participants with high-risk sexual practices and coming to STI clinics may have a higher burden of oral diseases such as periodontitis [12]. We hypothesized that changes in oral fungi in participants with STI-related concerns would be associated with periodontal severity status. Eligible participants were aged 21-49 years old and mentally capable of participating in the study. Exclusion criteria included characteristics that could impact the microbiome, such as HIV-positive status, pregnancy, hormonal contraceptive use, use of antibiotics in the last 2 months, postmenopausal status, a diagnosis of depression or post-traumatic stress disorder, and pre-existing heart conditions such as endocarditis, prosthetic cardiac valves, cardiac transplant, valvular heart disease, or congenital heart defect.
Recruitment and data collection procedures

This study was approved by the Institutional Review Board of the University of Puerto Rico Comprehensive Cancer Center (protocol 2018-01-01). All subjects provided written informed consent. Participants were recruited from two sexually transmitted infection (STI) clinics of the Alliance in San Juan, Puerto Rico, as well as through promotion on social media and person-to-person promotion. Study participants completed procedures at the Hispanic Alliance for Clinical and Translational Research (ALLIANCE; U54GM133807). Sociodemographic, lifestyle, and clinical information was collected through an interviewer-administered questionnaire, and measurements included sex, age, oral hygiene, smoking, alcohol consumption, marihuana usage, and oral sex practices. This questionnaire also included medical and dental history, comorbidities, and diet. Information regarding drug use and sexual behaviors was assessed using an audio computer-assisted self-interview (ACASI) [13]. Clinical procedures included saliva collection for mycobiome characterization, dental evaluation for periodontal disease assessment, and anthropometric measurements to determine body mass index (BMI, kg/m²). BMI categories were defined as underweight, normal, overweight, and obese.
Periodontal assessment

Periodontal examination was performed following the NHANES protocol, and periodontal disease was defined according to the CDC/AAP case definitions. Severity of periodontal disease was determined by clinical measurements of probing depth (PD) and clinical attachment loss (AL) at six sites (disto-buccal, mid-buccal, mesio-buccal, disto-lingual, mid-lingual, and mesio-lingual), excluding the third molars. Measurements were taken with a periodontal probe. Periodontitis severity was defined as severe (≥2 interproximal sites with CAL ≥6 mm and ≥1 interproximal site with PD ≥5 mm), moderate (≥2 interproximal sites with CAL ≥4 mm or ≥2 interproximal sites with PD ≥5 mm), and mild (≥2 interproximal sites with CAL ≥3 mm and ≥2 interproximal sites with PD ≥4 mm or ≥1 site with PD ≥5 mm). Periodontal status was categorized as without periodontitis or with periodontitis, while periodontal severity was categorized as none, mild, and moderate/severe (periodontitis). Bleeding on probing (BOP) was also calculated. About 20 s after probing, BOP was confirmed if bleeding was detected at the lingual and/or buccal surfaces. BOP was classified as high for an individual if 30% or more of buccal and/or lingual surfaces showed BOP, as previously described [14]. For the categories used in the oral mycological assessment of periodontal severity, BOP of 0-9% was classified as no disease, 10-29% as mild, and ≥30% as moderate to severe.
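The severity thresholds quoted above can be codified directly. The sketch below is a simplified toy classifier, not the study's software: the full CDC/AAP definitions additionally require the qualifying sites to be on different teeth, which this version ignores.

```python
def cdc_aap_severity(cal, pd):
    """Simplified CDC/AAP periodontitis severity classifier.

    cal: clinical attachment loss (mm) per interproximal site
    pd:  probing depth (mm) at the same sites

    Simplified sketch of the case definitions quoted in the text; the full
    definitions also require affected sites on different teeth.
    """
    n_cal6 = sum(c >= 6 for c in cal)
    n_cal4 = sum(c >= 4 for c in cal)
    n_cal3 = sum(c >= 3 for c in cal)
    n_pd5 = sum(p >= 5 for p in pd)
    n_pd4 = sum(p >= 4 for p in pd)
    if n_cal6 >= 2 and n_pd5 >= 1:          # >=2 sites CAL>=6mm and >=1 site PD>=5mm
        return "severe"
    if n_cal4 >= 2 or n_pd5 >= 2:           # >=2 sites CAL>=4mm or >=2 sites PD>=5mm
        return "moderate"
    if (n_cal3 >= 2 and n_pd4 >= 2) or n_pd5 >= 1:
        return "mild"
    return "none"

print(cdc_aap_severity([6, 6, 2], [5, 3, 3]))  # severe
```

The clauses are evaluated from most to least severe, so a mouth meeting the severe criteria is never downgraded by also meeting the milder ones.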
Measures of environmental spore levels

Fungal spore data were obtained from the American Academy of Allergy Asthma and Immunology (AAAAI) San Juan station located in the Department of Microbiology of the Medical Sciences Campus of the University of Puerto Rico. For the enumeration of the outdoor spores, we used the 12-transverse-traverse methodology proposed by the British Aerobiology Federation [15]. Airborne spores were collected using a volumetric Hirst-type sampler, specifically a Burkard (Burkard Scientific Ltd, Uxbridge, UK). This equipment was located on the rooftop of the Medical Sciences Campus of the University of Puerto Rico, 30 m above ground level (coordinates 18°23'53.7"N, 66°04'25.3"W). The Burkard 24-h trapping system worked continuously with an intake volume of 10 L of air/min. Fungal spores were impacted on a microscope slide coated with a thin layer of 2% silicone grease as a trapping surface. The slide was changed daily and mounted in polyvinyl alcohol (PVA) mounting media for microscopic examination. Counting was done on each preparation along transverse fields every 2 h for 24 h on the longitudinal traverse. Spores were identified based on their morphological differences [16]. The identification was performed utilizing a bright-field NIKON Eclipse 80i optical microscope (Nikon Manufacturing) at a magnification of 1000X. Distribution of spore counts was done by executing the quartile function in Excel, assigning defined spore abundances to scores varying from 1 to 4.
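This quartile-based scoring can be sketched with NumPy (the paper used Excel's QUARTILE function; the daily counts below are hypothetical, not the study's measurements):

```python
import numpy as np

# Hypothetical daily outdoor spore counts on recruitment days (illustrative values)
counts = np.array([12000, 35000, 41000, 48000, 52000, 60000, 80000, 99000])

# Quartile cut points (25th, 50th, 75th percentiles), mirroring Excel's QUARTILE
q1, q2, q3 = np.percentile(counts, [25, 50, 75])

# Score each day 1-4 by how many quartile cut points its count exceeds
scores = 1 + (counts > q1).astype(int) + (counts > q2).astype(int) + (counts > q3).astype(int)
print(scores)  # [1 1 2 2 3 3 4 4]
```

Each day's score then serves as a categorical metadata variable for the mycobiota analyses.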
According to these obtained values, we distributed the outside spore level ranges as Level 1 (6,771-39,243 spores/m³), Level 2 (39,489-49,461 spores/m³), Level 3 (50,015-76,854 spores/m³), and Level 4 (77,685-102,986 spores/m³). Spore count scores were added as metadata categories to the microbiota analyses. Because the oral cavity connects with the nose and the esophagus as well as with the breathing passages (trachea and lungs), sampling it detects both resident fungi and inhaled environmental spores.

Genomic DNA extractions and ITS2 gene amplification and sequencing for oral mycobiome characterization

The University of Puerto Rico Biosafety Committee (IBC, protocol # 49218) approved the applied laboratory protocols. Saliva (1.0 ml) was collected using sterile suction tubes and centrifuged at 13.2 rpm for 5 min, discarding the supernatant while the pellet was kept for DNA extraction using standard protocols of the PowerSoil Kit (QIAGEN LLC, Germantown Road, Maryland, USA). DNA was quantified using the Qubit® dsDNA HS (High Sensitivity) Assay (ranging from 5 to 100 ng/µl) at room temperature (Waltham, Massachusetts, US) and stored at −20°C until further use.

ITS2 sequence analyses

Sequences obtained from the amplified ITS2 region were deposited and processed in QIITA [19] (project ID 13193) using a Phred score above 30 (for quality control), deblurred, and clustered into operational taxonomic units (OTUs) with a 97% identity threshold. Taxonomy was assigned using the UNITE ver 8.97 database [20], and sequences with more than 1,000 reads were included for analyses. Filtering of singletons (>3) was done in QIIME2 [21]. Additional analyses were done with two additional tables: we created a species table that included only environmental fungi, with a total of 43 samples after rarefaction (Supplementary Table 1.B), and another species table of fungi/yeast related to the oral cavity (Supplementary Table 2.B), with a total of 48 samples after rarefaction.
Beta diversity distances and fungal community composition were analyzed by calculating pairwise Bray-Curtis distances between samples using the phyloseq R package [27]. Dissimilarities among samples were visualized with a Principal Coordinate Analysis (PCoA), and Permanova [22] and Permdisp [23] measures were obtained using the qiime beta_group_significance command from QIIME2 [21].

Boxplots of alpha diversity measures, including richness (Chao1 Index) and evenness (Shannon Index), were created using R's phyloseq [24] and ggplot2 [25] packages. Chao1 and Shannon Index pairwise measures, implementing Kruskal-Wallis statistical tests, were obtained using the ggpubr package to compare all analyzed categories.

The QIIME [21,26] platform was employed to obtain genus- and species-level taxonomic profiles using the mean values. Significant taxa (p-value <0.05) were selected using the group_significance.py script in QIIME [21], which identifies significantly different OTUs using the Kruskal-Wallis statistical test. Boxplots were generated using the ggplot2 [25] package in R [24]. Using the QIIME [21] core microbiome script compute_core_microbiome, a core microbiome shared by 52% of samples was calculated for healthy participants (without periodontal disease), while a core microbiome shared by 43% was obtained for participants with periodontitis. Bar plots depicting significant taxa were visualized through the R package ggplot2 [25].

Data availability statement: ITS-2 sequence data supporting the analyses presented in the paper can be found in QIITA [19], project ID 13193, and are publicly available with EBI accession number ERP126217 (https://www.ebi.ac.uk/ena/browser/view/PRJEB42371).
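The alpha and beta diversity measures used here have simple closed forms. A minimal sketch with hypothetical OTU count vectors (not the study's data; the study used phyloseq and QIIME2 rather than hand-rolled functions):

```python
import numpy as np

def shannon(counts):
    """Shannon diversity H = -sum(p * ln p) over nonzero OTU proportions."""
    c = np.asarray(counts, dtype=float)
    p = c[c > 0] / c.sum()
    return -(p * np.log(p)).sum()

def chao1(counts):
    """Classic Chao1 richness: S_obs + F1^2 / (2*F2), F1 singletons, F2 doubletons."""
    c = np.asarray(counts)
    s_obs = (c > 0).sum()
    f1 = (c == 1).sum()  # OTUs seen exactly once
    f2 = (c == 2).sum()  # OTUs seen exactly twice
    if f2 > 0:
        return s_obs + f1 * f1 / (2 * f2)
    return s_obs + f1 * (f1 - 1) / 2  # bias-corrected form when no doubletons

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two samples' OTU counts."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.abs(x - y).sum() / (x + y).sum()

a = [10, 5, 2, 1, 1, 0]
b = [0, 5, 2, 1, 1, 10]
print(shannon(a), chao1(a), bray_curtis(a, b))
```

For sample `a`, five OTUs are observed with two singletons and one doubleton, so Chao1 = 5 + 4/2 = 7; the Bray-Curtis value reflects that `a` and `b` disagree only on their two dominant OTUs.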
Clinical characteristics

A total of 88 participants were recruited to this study. Most participants (94%, 83 out of 88) reported oral sex practices; however, 70% of the participants did not have periodontal disease, 10% had mild periodontitis, and 20% had moderate/severe periodontitis (Table 1). Detailed characteristics of periodontitis severity are presented in Table 1. Most participants with moderate/severe periodontitis were males (65%), and participants who had periodontal disease were aged 31-49 years (65%) and had good oral hygiene (42%). A higher percentage of smokers had moderate/severe periodontitis (47%). Most participants with periodontal disease consumed alcohol. However, most participants with periodontal disease did not consume marihuana (Table 1).

From a total of 2,356,743 raw reads, 2,355,524 were used for analyses after removing singletons (>3) and unidentified taxa. From these reads, we obtained 5,566 OTUs from 88 samples (Table 1). After rarefying our species table, all analyses were performed using the same number of reads per sample (1,007 reads) for normalization purposes. Tables 2 and 3 depict the species tables of environmental fungi and Candida species.

Fungal communities according to periodontal severity

Mycobiota diversity was analyzed between participants without periodontitis (healthy) and participants with mild and moderate/severe periodontitis (Figure 1). We also evaluated periodontal status (healthy participants vs. some level of periodontal disease) as supplementary analyses (Supplementary Figure S1). There were no significant differences in community structure, composition, or distance between samples among periodontal categories (Figure 1(a)). Supplementary analyses on periodontal status showed no significant differences in beta diversity (Supplementary Figure S1a).
Alpha diversity was estimated using Chao1 and Shannon Indexes, yet no significant differences were observed in richness or evenness of fungi present in the oral cavity (Figure 1(b)). Although not significant, we observed that participants with mild periodontitis showed higher richness when compared with healthy and moderate/severe participants (Figure 1(b)). Likewise, periodontal status showed no significant differences in richness or diversity of species between participants with and without periodontal disease (Supplementary Figure S1b).

Taxonomic profiles were analyzed at the genus level (Figure 1(c)). The most abundant genera across all categories included Candida, Saccharomyces, Rigidoporus, Aspergillus, and Trametes. Aspergillus, an environmental fungus, was dominant in healthy participants, Pseudolagarobasidium was more abundant in participants with mild periodontal disease, while Irpex and Saccharomyces were more abundant in moderate/severe periodontitis.

A core microbiome at the species level was calculated for healthy and diseased participants. The healthy core microbiome, shared by 43% of samples, was mostly dominated by Rigidoporus vinctus and Candida albicans (Figure 2(a)). Species like Candida dubliniensis, Saccharomyces cerevisiae, and Trametes elegans were also observed at lower levels. On the other hand, the core microbiome calculated for OTUs shared by 52% of participants with some level of periodontal disease had a higher relative abundance of Saccharomyces cerevisiae, Rigidoporus vinctus, and Irpex lactus (Figure 2(b)). Candida albicans and many environmental fungi were observed across all periodontal severity categories.
We evaluated the diversity and relative abundance of Candida species across all periodontal categories (Figure 3). No significant differences were found between Candida populations across the gradient of periodontal severity. Diversity analyses regarding community structure and composition (Figure 3(a)) revealed no significant differences (p-value >0.05). Alpha diversity was also not significantly different across periodontal severity (Figure 3(b)). Candida albicans was found to be ubiquitous. Candida parapsilosis was more abundant in healthy participants, while Candida tropicalis dominated in participants with moderate/severe periodontitis (Figure 3(c)). Although Candida tropicalis seems dominant in moderate/severe participants, only 2 participants carried Candida tropicalis, and it dominated in sample ID O77. This participant is a male between 31 and 40 years of age who is overweight, consumes alcohol and marihuana, and smokes.

Changes to the mycobiota associated with alcohol consumption, marihuana usage, and smoking habits

Alcohol consumption, marihuana usage, and smoking habits were considered the most influential risk factors impacting our cohort's oral mycobiota. We discovered significant differences (Permanova: 0.014, Permdisp: 0.007) in fungal community composition and dispersion of samples between participants who consume alcohol and those who do not (Figure 4(a)). Significant differences in alpha diversity were observed when assessing alcohol consumption. Non-consumers had higher overall species diversity (Shannon Index: 0.009) than consumers (Figure 4(b)). Boxplots of significant taxa (p-value <0.05) revealed Candida albicans, Candida parapsilosis, and Hortaea werneckii to be more distinctive in non-consumers (Figure 4(c)).
Changes in the mycobiota diversity based on marihuana usage showed no significant differences (p-values >0.05) in the community composition and structure of fungi present in the oral cavity (Figure 5(a)). Although differences in richness and overall diversity were not significant (p-values >0.05), non-users had higher Chao1 and Shannon Indexes (Figure 5(b)), suggesting that they are richer and more diverse than participants who use marihuana. No distinctive taxonomic profiles were found regarding marihuana usage.

Beta diversity of smokers, compared to non-smokers, showed no significant differences (p-values >0.05) in community composition and dispersion between samples (Figure 6(a)). Diversity analyses on richness and species diversity revealed no significant differences (p-values >0.05) between smokers and non-smokers (Figure 6(b)). However, when evaluating significant fungi (p-value <0.05), we found dominance of Saccharomyces cerevisiae in smokers (Figure 6(c)).

Impact of environmental outdoor spore levels on fungi present in the oral cavity

Environmental outdoor spore levels representing atmospheric spore counts were measured for the days participants were recruited. Measured spore levels were divided into four categories, ranging from a low spore count in the outside environment (spore level 1) to a very high atmospheric spore count (spore level 4). These spore levels were associated with fungi in the oral cavity for a subset of participants: n = 43 for environmental fungi analyses (Table 2) and n = 48 for Candida species analyses (Table 3). Alpha diversity analyses of outdoor environmental fungi on the fungal communities of the oral cavity revealed no significant differences as outdoor spore levels increased. However, as atmospheric spore counts increased, these fungal communities' richness and overall diversity decreased (Figure 7(a)). Oral taxonomic profiles of environmental fungi showed a higher abundance of Cerrena unicolor, Curvularia, Apiotrichum, and
Irpex lactus with increasing outside spore levels (Figure 7(b)).

As outdoor spore levels increased, alpha diversity analyses of Candida species diversity showed no significant differences. However, there was a tendency toward decreased richness and overall diversity as spore levels increased (Figure 7(c)). Analysis of oral taxonomic profiles revealed changes in the abundance of Candida species as outside spore levels increased (Figure 7(d)).

Discussion and conclusions

Our data confirm the colonization of both yeast and filamentous fungi in the oral biofilm. Although we found no significant differences in structure, composition, or diversity between fungi in the oral cavity associated with disease progression, there were changes in composition among fungal species. The most abundant genera of fungi in the oral cavity included Candida, Saccharomyces, Rigidoporus, Aspergillus, and Trametes. Core mycobiome analyses found a higher abundance of Rigidoporus vinctus and Candida albicans in participants without periodontal disease. In contrast, Saccharomyces cerevisiae, Rigidoporus vinctus, and Irpex lactus were dominant in participants with periodontal disease. Interestingly, the environmental fungi present in the oral cavity of this population were mainly basidiomycetes, which represent over 60% of the outdoor fungal spores in Puerto Rico [28].
The ubiquity and abundance of Candida are of utmost importance for understanding oral health, as several Candida species cause oral candidiasis [29]. These yeasts are commonly found in the oral cavity, but an imbalance of this flora can lead to the development of candidiasis. Candida adheres to the epithelial cell membrane as its first step of infection, aided by C3d receptors (a protein complex and T-cell receptor), mannoproteins, and mannose present in the cell wall [30]. Other virulence factors of Candida include endotoxins, proteinases, and induction of Tumor Necrosis Factor (TNF) [30]. Several Candida species have been isolated from the oral cavity, Candida albicans being the most dominant. While Candida albicans is considered a commensal yeast in the oral cavity of healthy individuals, it can become pathogenic in immunocompromised individuals. It is mainly observed in opportunistic infections in individuals with diabetes, oral cancer, HIV, and periodontal disease [31]. Antibiotic use is also a risk factor for oral candidiasis, especially in people with HIV [32]. Aside from Candida albicans, Candida dubliniensis, Saccharomyces cerevisiae, and Trametes elegans were also detected. Like C. albicans, C.
dubliniensis is a commensal yeast in the oral cavity associated with oral candidiasis [33]. Saccharomyces cerevisiae is considered a commensal fungus, mainly used in biotechnology [34,35]. Participants with periodontal disease presented a higher abundance of Rigidoporus vinctus, Aspergillus penicillioides, and Saccharomyces cerevisiae. Aspergillus species have been related to aspergillosis, a fungal infection that can affect immunocompromised individuals and those with hematological malignancies [36]. Aspergillus penicillioides is a widespread xerophilic indoor fungus in Puerto Rico with allergenic potential [37]. Candida albicans and Rigidoporus vinctus, an environmental fungus, were found across all periodontal categories. An increase of outdoor environmental fungi in the oral cavity of participants with greater periodontal disease severity suggests an impact of the outside environment on the human microbiome and of non-indigenous taxa on immune response and disease persistence.

When evaluating risk factors for periodontal disease, alcohol consumption showed the greatest impact on fungal diversity. Non-consumers were richer and more diverse than those who consumed alcohol. Heavy alcohol consumption has been proposed as a risk factor for carcinogenesis due to the production of acetaldehyde, the first metabolite of ethanol [38]. It is believed that acetaldehyde in the oral cavity reacts with DNA, resulting in mutations that could lead to cancer development [39]. Also, studies have reported that individuals who consume high volumes of alcohol have higher levels of pro-inflammatory cytokines, along with impaired macrophage function [36]. Evaluation of taxonomic profiles showed that alcohol consumers had dramatic decreases in Candida albicans, Candida parapsilosis, and Hortaea werneckii as compared to non-consumers. However, further studies are needed to identify possible biomarkers associated with healthy individuals.
We noticed how the use of marihuana decreased the richness and diversity of fungal communities present in the oral cavity. This finding implies a possible effect of marihuana on the fungal community biofilm and might suggest that medicinal cannabis could positively impact oral fungal infections such as candidiasis. Nonetheless, this inhibition is not specific, and other studies have indicated mycosis due to marihuana usage [40]. While no significant differences were observed in diversity or community composition by smoking status, Saccharomyces cerevisiae was more abundant in smokers than in non-smokers [41]. Even though Saccharomyces cerevisiae is considered a ubiquitous yeast and used as a probiotic [35], infections have been reported in immunocompromised individuals, ranging from fungemia to skin infections and esophagitis [42,43].

Due to the high abundance of environmental fungi in the oral cavity, we then examined the effect of outdoor spore levels on fungi present in the oral cavity, using spore counts corresponding to the participant recruitment dates. This analysis revealed a decrease in both the richness and overall diversity of fungi in the oral cavity as outside spore levels increased. We believe that this may be due to competition between environmental spores and the mucosal yeast and filamentous fungi of the oral cavity, ultimately reducing their abundance. For instance, Candida must adhere to epithelial surfaces to stay in the oral cavity [44,45], and environmental spores entering the oral cavity as we breathe and speak may interfere with this adhesion mechanism. Additionally, we may speculate that inhaled outdoor fungi may induce an inflammatory response inhibiting the oral mucosa biofilm.
The ability of fungi, including those acquired from the environment, to successfully adhere to the oral cavity and form hyphae could increase surface hydrophobicity and the secretion of hydrolytic enzymes and toxins, enabling attack on host cells and likely impacting bacterial composition by increasing inflammation and leading to oral dysbiosis.

Our data show that associated risk factors such as alcohol consumption, marihuana usage, smoking, and the outdoor environment impact the oral mycobiome; however, these findings are not generalizable to the population level. Limitations included the restricted age range of 21-49 years, the recruitment setting of STI clinics, and the consequent limited representation of healthy young adults. Even though this study reduced confounding variables and batch effects that could mask biological signals, we recognize that a lack of mycobiome signal associated with periodontitis could be caused by low power rather than a true absence of biological signal. Other pilot studies on the oral mycobiome and on bacterial communities recruited small samples with no clear exclusion criteria [46,47], a difficulty inherent to many microbiome studies, especially in low-resource settings. Although not generalizable, this study still adds to our knowledge of oral microbiome dynamics. The coexistence of fungi with bacterial periodontopathogens has been demonstrated before, with Candida albicans shown to cohabit with Porphyromonas gingivalis and other strictly anaerobic bacteria [48]. No other study has co-investigated oral and environmental microbes and shed light on the possible ecological competition of environmental spores with local oral epithelial microbiomes.

Disclosure statement

No potential conflict of interest was reported by the author(s).
Funding

This project was funded by an award from the NIH National Institute of Dental and Craniofacial Research, 1R21DE027226-01A1 and R21 DE027226 02S (NIDCR Diversity Supplement). Partial funds were provided by the National Institute on Minority Health and Health Disparities, U54-MD007600; the National Institute of General Medical Sciences Institutional Development Award (IDeA), grant number 5P20GM103475; and the Hispanic Alliance for Clinical and Translational Research (Alliance), National Institute of General Medical Sciences (NIGMS), U54GM133807.

Table 1. Clinical and behavioral characteristics by periodontal disease severity. Rarefaction level remains the same; samples with 1000 or more sequences were kept.

Figure 1. Fungal community profiles including beta diversity (a), alpha diversity (b), and taxonomic profiles at genus level (c), according to periodontal severity levels.

Figure 2. Comparison of the fungal core microbiome at species level between healthy and diseased participants (some level of periodontitis, mild or severe).

Figure 3. Diversity of Candida populations. Beta diversity (a), alpha diversity (b), and taxonomic profiles (c) showcase the relative abundance of Candida species according to periodontal severity.

Figure 4. Fungal community profiles, including beta and alpha diversity, along with significant taxa at species level, according to alcohol consumption.

Figure 5. Diversity analyses comparing marihuana usage among participants, regardless of periodontal disease.

Figure 6. Fungal community profiles, including beta diversity plot (a), alpha diversity (b), and significant taxa at species level (c), between smokers and non-smokers.

Figure 7. Plots depicting richness and evenness (a, c), along with relative abundance of environmental fungi (b) and Candida species in relation to outside spore levels (c).

Table 2. Environmental species table: statistics on numbers of sequences and OTUs according to metadata categories. Rarefaction level remains the same; samples with 1000 or more sequences were kept.

Table 3. Candida species table: statistics on numbers of sequences and OTUs according to metadata categories. Rarefaction level remains above 1000; samples with 1049 or more sequences were kept.
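The rarefaction criterion stated in the table captions (keeping samples with at least 1000 sequences and subsampling to a common depth) can be sketched as follows. This is an illustrative sketch only; the function name and toy counts are invented, not taken from the study's pipeline:

```python
import random

def filter_and_rarefy(otu_tables, depth=1000, seed=42):
    """Keep samples with >= depth reads, then subsample each to exactly depth reads."""
    rng = random.Random(seed)
    rarefied = {}
    for sample, counts in otu_tables.items():
        # Expand the count table into one entry per read.
        reads = [otu for otu, n in counts.items() for _ in range(n)]
        if len(reads) < depth:
            continue  # drop shallow samples, as in the captions' criterion
        sub = rng.sample(reads, depth)  # sample without replacement
        out = {}
        for otu in sub:
            out[otu] = out.get(otu, 0) + 1
        rarefied[sample] = out
    return rarefied

# Toy example (hypothetical counts, not the study's data).
tables = {
    "S1": {"Candida_albicans": 700, "Saccharomyces_cerevisiae": 500},  # 1200 reads: kept
    "S2": {"Candida_albicans": 300},                                   # 300 reads: dropped
}
result = filter_and_rarefy(tables)
print(sorted(result))              # prints ['S1']
print(sum(result["S1"].values()))  # prints 1000
```

Rarefying all retained samples to the same depth makes richness and diversity comparable across samples with unequal sequencing effort.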
Towards A Relational Model for Emerging Urban Nature Concepts: A Practical Application and an External Assessment in Landscape Planning Education

Abstract: The increasing interest in urban nature and its connection to urban sustainability and resilience has promoted the generalized use of new concepts such as green infrastructure, ecosystem services and nature-based solutions. However, due to their heterogeneous origins and interpretations, the usage and understanding of these concepts may vary considerably between different academic and professional groups, affecting their coordinated and synergistic use in integrative planning education and emphasizing the need for the exploration of clearer syntaxes and articulations between them. Accordingly, the main aim of this research was to develop a relational model and to investigate, through an external evaluation process, the benefits that these types of models can provide in higher education and in professional practice. This article presents the background theory and process that led to the development of the relational model, the outcomes of its academic implementation and the results of the assessment of both the model and the students' work by different types of planners, researchers and practitioners. The findings show the potential of the defined relational model to integrate different concepts operating in complex socio-ecological systems and the benefits of developing, testing and validating models by linking research, education and professional practice.

Introduction

As presented in this introductory section and according to the literature, the development of novel urban nature concepts has generated conceptual and operational challenges affecting the understanding of their mutual interconnections and their combined use in education and practice.
Accordingly, this article elaborates on the following three research questions: (i) Can emergent urban nature concepts be included in a relational model supporting their integrated use in planning? (ii) How could this proposed model support urban and landscape planning education? and (iii) How would this model and the results of its academic implementation respond to the needs and expectations of decision makers, professionals and specialists from different fields?

A preliminary study of new urban nature concepts and their interconnections revealed that the hybridization between different disciplines and fields of knowledge is generating new concepts, framing the approach to urban nature in landscape and urban planning. However, despite the numerous definitions of urban green infrastructure (UGI), ecosystem services (ESS), nature-based solutions (NBS), urban sustainability and urban resilience, the formation of durable terms and a consistent grammar between them often escapes the boundaries of conventional academic and professional disciplines. This situation becomes particularly noticeable in complex socio-ecological systems such as urban areas, where planning increasingly needs to deal with an expanding number of drivers.

Regarding the evaluation of conceptual models, previous research has proposed surveying practitioners and identifying "attributes or factors important to professionals for evaluating the quality of conceptual models". In this specific case, the survey was distributed amongst different types of users and researchers, who were also asked their occupation from a predefined list [31]. Similarly, Maes and Poels (2007) highlight the lack of practical evaluation frameworks for conceptual modelling scripts and develop an empirical test for a proposed model based on four quality dimensions from a user perspective (semantic quality, ease of understanding or systemic quality, usefulness and user satisfaction), structured around two experiments with students.
The results supported the suggested model and provided relevant inputs for both theory and practice [32].

The possibility of framing the development of a relational model about urban nature concepts and testing it within the landscape architecture field opens up specific challenges and possibilities. Thus, from a disciplinary point of view, urban nature has been a central element in many disciplines, in which the landscape concept has usually been perceived as an integrative platform combining the spatial, functional, dynamic, formal, ecological, socio-cultural and economic aspects of nature and linking them with other dimensions of socio-ecological systems. In this endeavour, landscape architecture has been supported by its classical branches, landscape planning and design, and has been accompanied by other disciplines, such as landscape urbanism, ecological urbanism, landscape ecology and urban ecology, amongst others [33][34][35][36]. Moreover, the landscape architecture contribution to urban nature planning and management could specifically be placed in the integration of different types of knowledge, in the exploration of their mutual intersections, in the generation of new synergies and, consequently, in the design of schemes in which space and function, as well as pattern and process, display high levels of multifunctionality [37]. Accordingly, if human knowledge constitutes a densely interconnected web [38], landscape architecture, because of its highly integrative and transversal nature, could be seen as an extremely connected node with a particular sensitivity to changes in peripheral or close disciplines. Furthermore, this strategic or nodal location imposes additional challenges in landscape architecture education, especially when those related disciplines undertake significant methodological or theoretical changes or when new values or planning paradigms arise [37].
This highly connected character of landscape architecture in particular [39,40], and of the landscape concept in general [33,41], has been widely recognized and reveals some clear parallelisms with general systems theory [42] and with systems properties such as sustainability or resilience. In conclusion, this literature review suggests the pertinence of advancing the development of conceptual and relational models combining different urban nature concepts for more integrated use in education and practice (research question 1). In addition, it reveals the potential of education and planning studios to bridge research and innovative practice, raising at the same time the necessity of exploring how the development and testing of such relational models can be integrated in planning education (research question 2). Finally, the selected literature highlights the importance of validating these kinds of models and their results through a reliable evaluation method, in which the features and concepts included in the model, and their combined use, are assessed by the final users (research question 3). The development of this research within the field of landscape planning education and practice generates specific opportunities and challenges based on its highly transversal and interdisciplinary character.

Methods

Due to their very different natures, the three formulated research questions required different research methods. Firstly, the definition of a relational model integrating the studied urban nature concepts was initiated through a selective literature review that revealed some of the most relevant texts dealing with the connection between different urban nature concepts and their respective definitions. This selection was implemented by using different combinations of the following keywords in Google Scholar: GI, UGI, ESS, NBS, models, conceptual models, relational models, landscape, urban sustainability, urban nature and planning.
Sustainability 2020, 12, 2465

The identified texts were filtered in order to select those with a clearer focus on relational models covering two or more concepts or including a systematic comparison of several concepts. The selected concepts correspond to those most often mentioned in existing models for urban nature and planning. Some other concepts appeared frequently in many texts, either as key characteristics of the urban nature system (e.g., biodiversity) or as key goals (human well-being or quality of life), and were retained in the elaboration of the final relational model. The ongoing discussion on how the biodiversity concept relates to the ecosystem services concept is highly connected to how the former concept is approached [43], either as a benefit for humans, hence as an ecosystem service [44], or as a goal in itself. On the other hand, existing literature analysing the connections between the sustainability and biodiversity concepts suggests that biodiversity is a precondition for ecosystem function and for the generation of some ESS, and is therefore directly connected with the economic, social and environmental domains of sustainability [45,46].

The selected relational models were analysed to understand the criteria used in each of them to connect the addressed concepts. Different typologies of models emerged from this process. Simultaneously, the definitions used for the studied concepts in the selected texts, together with some definitions found in seminal or authoritative documents (e.g., European guidelines), were compared in order to produce an internal glossary of terms supporting the subsequent development of a new relational model. This glossary was accompanied by a categorization of the urban nature concepts according to their main potential and complementary roles in urban nature planning.
Secondly, the proposed relational model was used to redefine the goals, contents, learning methods and structure of a seven-credit compulsory course at the Aalto University Master's Programme in Landscape Architecture (studio course MAR-1025). The selection of one academic studio course to test the application of the model was based on the reasons presented in the introduction, namely the importance of developing conceptual and relational skills in future planners and the possibilities that higher education provides for flexible and solid speculation and for linking research with innovative practice [18,20,23,24,26]. In particular, this new version of the course assigned a pivotal role to the studied concepts in the definition of strategic visions for urban nature systems in a set of Baltic and Finnish cities (Helsinki, Espoo, Tampere, Turku, Oulu, Lahti and Mikkeli in Finland, Gävle and Söderham in Sweden, Riga and Jelgava in Latvia and Tartu in Estonia). The proposals, produced during the academic years 2016-2017, 2017-2018 and 2018-2019, were synthesized and displayed by each group of students in a set of posters that were later evaluated by external experts according to a set of criteria and objectives. In particular, the studio course was structured around the following assignments: (1) morphological, social, functional and historical analysis of the studied cities; (2) analysis and diagnosis of the cities using the studied urban nature concepts (urban blue-green infrastructure, ecosystem services, nature-based solutions and their potential connections to urban sustainability); (3) definition of city strategies and pilot actions based on the proposed relational model and aimed at improving urban sustainability in the studied cities.
Thirdly, and following one of the methods presented in the introduction to validate conceptual models [31,32], the potential of the suggested relational model to respond to real urban nature planning challenges and to support innovative education was evaluated through a questionnaire delivered to different types of stakeholders or users, who assessed both the model and the outcomes produced by the students. The questionnaire included three types of questions. The first type included open questions to determine the job sector, former education and professional field of the respondents. The second type assessed the respondents' familiarity with, and level of use of, the different urban nature concepts included in the relational model (UGI, ESS, NBS, etc.), as well as their acquaintance with any integrative model or framework connecting the concepts in question. The third type of questions requested the opinion of the respondents on the effectiveness and utility of the proposed integrative model to support sustainable urban planning, both at a general level and in the specific works produced by the students for their respective Baltic or Finnish city. Questions included in the second and third types were formulated as closed-ended questions using a Likert scale in which a score of 1 indicated strong disagreement or a low grading, and 5 strong agreement or a high grading. The survey was sent to 75 people, including 51 civil servants working in the studied cities, 12 teachers/researchers from different disciplines working with some of the studied urban nature concepts and 12 practitioners from landscape architecture offices, as that was the discipline in which the course was organized. The composition of the sample was also defined in order to include respondents with different academic backgrounds or professional foci (e.g., from integrative to specialized planning or from spatial design to city management).
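A minimal sketch of how such Likert responses could be summarized per respondent group follows; the scores are invented for illustration and the group labels merely mirror the respondent categories described above:

```python
from statistics import mean, stdev

# Hypothetical Likert responses (1-5) to one survey item, grouped by respondent type.
# These are illustrative values only, not the survey's actual data.
responses = {
    "civil servants": [4, 5, 3, 4, 4, 5, 3, 4],
    "researchers":    [5, 4, 4, 5],
    "practitioners":  [3, 4, 4, 3],
}

# Per-group descriptive statistics: sample size, mean score, standard deviation.
for group, scores in responses.items():
    print(f"{group}: n={len(scores)}, mean={mean(scores):.2f}, sd={stdev(scores):.2f}")
```

With group sizes this small, formal significance tests have little power, so descriptive summaries of this kind are often the most that can be reported reliably.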
The results were analysed to assess the general effectiveness and utility of the proposed relational model to support sustainable urban planning and its potential to help students articulate the studied concepts. In addition, this quantitative study permitted the detection of possible convergences and divergences between respondents with different academic backgrounds, professional fields or job sectors, as well as possible differences between distinct groups regarding their level of knowledge and use of the studied concepts. However, due to the small sample size, statistical analyses of the data had serious limitations, which hindered the detection of significant differences, correlations or connections in the analysed variables (e.g., the effect of the level of knowledge of some concepts, or of academic background, on the evaluation of the students' works).

Results

The presented results correspond to the three research questions originating from the conducted research and were explored through the three research methods explained in the previous section. The analysis of the studied models connecting different urban nature concepts revealed the existence of some general model typologies. The diversity found in the studied models is highly dependent on the different objectives, criteria and ways of conceptualizing themes connected to urban nature. Thus, as detected for other abstract concepts such as urban environmental quality, human well-being, quality of life and sustainability [5], different authors assign different meanings to the same terms. Moreover, they also have dissimilar approaches to some fundamental questions, such as the meta-concepts framing the model (e.g., sustainability), the domains that are used to address these meta-concepts or the applicable spatial or temporal scales or scopes. Following these findings, we decided to start the definition of the proposed relational model by developing a glossary of the considered concepts.
As displayed in Table 1, this glossary included the most widely accepted definitions and, preferably, those that also referred to other concepts. The key selected meta-concept framing the definition of the relational model was urban sustainability and its different environmental, socio-cultural and economic domains. The typological classification of the studied relational models was initiated by the differentiation between topological and chorological approaches proposed for analysing the landscape by Ahern (1999). Thus, "topological analysis is a parametric approach which describes and analyzes the 'vertical' relationships between many factors that occur at a given location . . . In landscape ecological planning, the topological approach is complemented, not replaced by, a chorological approach which describes and analyses horizontal relationships and flows" [33] (p. 179).

Table 1. Definitions, main roles and mutual relations of the studied urban nature concepts.

Urban green infrastructure (UGI). Definition: "A strategically planned network of natural and semi-natural areas with other environmental features designed and managed to deliver a wide range of ecosystem services.... This network of green (land) and blue (water) spaces can improve environmental conditions and therefore citizens' health and quality of life. It also supports a green economy, creates job opportunities and enhances biodiversity" [47]. Main role: spatial. Relations: currently, the use of the UGI concept in urban planning as a spatial network is relatively well-established [3]. However, the connection with other concepts is sometimes unclear. UGIs can support a more systemic and delimited approach to the generation of ESS and to the use of NBS.

Ecosystem services (ESS). Definition: ESS are defined as the benefits people obtain from ecosystems and are usually divided into supporting, regulating, provisioning and cultural services [48]. According to Daily et al. (2011), if natural capital is the stock of nature's assets, ESS can be understood as the benefits resulting from those assets [49]. Main role: functional (benefits), linking processes, services and benefits. Relations: ESS can support the definition, implementation and management of UGI and NBS by revealing and systematizing the benefits provided by nature [3]. Although the ESS concept fosters a holistic approach to nature with a focus on benefits for humans, the subdivision of the concept into categories and independent services can generate partial approaches or the inconsistent aggregation of different services.

Nature-based solutions (NBS). Definition: according to the European Commission (2015), NBS are "solutions that are inspired and supported by nature, which are cost-effective, simultaneously provide environmental, social and economic benefits and help build resilience. Such solutions bring more, and more diverse, nature and natural features and processes into cities, landscapes and seascapes, through locally adapted, resource-efficient and systemic interventions" [50]. Maes and Jacobs (2015) connect the NBS concept with the natural capital concept by associating NBS with "any transition to a use of ecosystem services with decreased input of non-renewable natural capital and increased investment in renewable natural processes" [51]. Main role: instrumental (tools and solutions) for multiple and often systemic problems. Relations: according to Haase (2016), NBS can mediate the interaction between human activities and ecosystem processes in cities and, if adequately designed and managed, they can mitigate human impact. NBS should consider the full spectrum of land uses in cities as well as the co-existence and potential synergic interaction between built, grey, brown, green and blue systems [52].

Sustainable drainage systems (SUDS). Definition: SUDS consist of different approaches to manage storm water and runoff, with special attention to water quantity, water quality, biodiversity and amenity (e.g., flooding, pollution, wildlife and recreation) [53]. Conceptually, SUDS can be embedded within sustainable storm water management and are highly connected to other water-related terms. Main role: instrumental or mediating (tools and solutions) for storm water management and for other linkable ESS. Relations: SUDS can play a key role in sustainable storm water management, promoting at the same time the generation of different water-related or water-dependent ESS, as well as the connectivity of GIs, by connecting them with the continuous character of water flows.

Socio-ecological system (SES). Definition: according to Glaser et al. (2008), a socio-ecological system (SES) "consists of 'a bio-geo-physical' unit and its associated social actors and institutions". SES are complex, adaptive systems and their boundaries respond to the spatial or functional limits of the affected ecosystems and their context [54]. Socio-ecological thinking (SET) could be defined as a way of thinking especially concerned with the sustainable interaction between bio-geophysical systems and humans [37]. Main role: systemic and cognitive framework. Relations: SES provides a new, systemic and integrative lens to address the relationship with nature in all types of landscapes. In fact, cities can be approached as a family of socio-ecological systems with specific conditions for the use of GI, ESS and NBS imposed by the urban condition [37].

Urban metabolism (UM). Definition: there are multiple definitions and methodological approaches to the metabolism of socio-ecological systems. Because of its generalized use and potential connections to planning, the approach proposed in the field of industrial ecology can be particularly relevant to investigate connections with urban nature concepts. Thus, according to Baccini (2007), urban metabolism provides a model and metaphor to describe and analyse material and energy flows within cities and to investigate the interactions of natural and human systems [55]. In the same vein, Kennedy et al. (2007) define UM as the sum of the "technical and socioeconomic processes that occur in cities, resulting in growth, production of energy, and elimination of waste" [56] (p. 44). Main role: systemic and methodological framework (a type of model to approach, analyse, plan and manage socio-ecological systems like cities). Relations: UM provides a quantitative framework for modelling and planning socio-ecological systems like cities, usually from a material and energy point of view. This framework can support the analysis and planning of GI, ESS and NBS in order to deepen the connections between social and ecological systems and to increase the performative character of urban nature for sustainable urban planning and key urban flows (e.g., water, energy, waste, biota, etc.).

Natural capital (NC). Definition: NC can be defined as a stock of natural assets that can produce a sustainable flow [57]. This stock can yield direct or indirect benefits to humans and can include living and non-living components of the natural system [4,57]. The concept of NC can be extended, and some researchers include the information stored in natural systems [58], or the services and benefits provided by them [59], although the latter are usually kept separate [4]. Main role: integration of nature-based assets and dimensions sustaining sustainable flows. Relations: in its widest and most integrative sense, NC can be defined as a stock of resources embedded in natural systems and allowing sustainable flows and processes. Accordingly, urban NC could comprise the spatial (UGI), human benefits-based (ESS) and mediating or instrumental (NBS, SUDS) components of nature.

This differentiation between overlapping or layered dimensions of the same phenomenon or landscape, and the horizontal connections between different components of a given system or landscape, was used to define three types of models and relationships between concepts (Figure 1). The first type is an "embedding" model (nesting dolls) in which concepts are nested within each other.
This type of model is highly hierarchical, and the external concepts absorb conceptually and/or spatially the internal ones. The second type is a horizontally "complementary" model, in which the concepts deal with different physical elements, spaces or issues. In this complementary model, some concepts might partially overlap, generating a new conceptual space (e.g., intersections in a Venn diagram). The third type is a "layered" model in which the concepts express or contain a different dimension of the same space, system or phenomenon. This type of model can also be perceived as one that works with vertical and highly interconnected complementarities or interdependencies. In a way, the well-known diagrams used to differentiate strong and weak sustainability can illustrate the difference between an embedded and complementary model for the same concepts (social, economic and environmental sustainability). On the other hand, the different attributes associated to a given spatial unit in a geographical information system exemplify how a layered system works and acknowledges the possible interdependencies between different overlapping attributes.

The relational models included in the selected literature were often generated to support a particular theoretical or operational framework proposed by the authors. In many cases, this situation affected the types of connections explored in the models. In addition, rather than focusing on urban nature, many of the models grew around one specific concept that often gained a particular relevance and in many cases was perceived as an umbrella concept embedding the others. The study of the selected relational models was systematized considering the following aspects: existence of a meta-concept; considered domains; scope; existence of a central concept; inclusion of a terminological analysis; main purpose and variables considered to explore connections between concepts; type of model according to Figure 1. The results of the studied relational models from a typological perspective reveal that the model developed by Pauleit et al. (2017) can be described as a combination of an embedded and complementary model in which the NBS concept was perceived as an umbrella concept integrating the spatial dimension of urban green infrastructures (UGI) and the benefits provided by nature (ESS) [3]. Likewise, the model proposed by Nesshöver et al. (2017) included nature-based solutions as an umbrella concept or framework, amongst others, to promote more sustainable socio-ecological systems [4]. Similarly, Tidball and Krasny (2010) considered sustainability to be an overarching framework and defined GI as a biophysical component and ESS as an interface with the socio-cultural component [13]. Conversely, Lafortezza et al. (2013) established an embedded model in which the GI framework includes different concepts and functions [10], whereas Hansen and Pauleit (2014) defined a layered model in which the spatial dimension of the GI is connected to its functional dimension in terms of benefits for human well-being (ESS) [6]. In their model, Tzoulas et al. (2007) followed a similar approach and layered arrangement [12], whereas Galan and Perrotti (2019) explained the metabolic performance of a regional socio-ecological system through a set of layered meta-systems (social and physical) and their embedded systems [15].

An overall review of the typological arrangements of concepts in the studied models shows that one of the key variables is the definition of an overarching concept, which, depending on the model, concentrates on the goal or purpose (e.g., sustainability) or on the prevalence of one concept over the others. In addition, the results show that the relationships between concepts are highly dependent on how they are defined, since these definitions generate different associations: insertion (embedded model), complementarity or juxtaposition (layered models). Regarding the terminological research displayed in Table 1, a systematic review of the new urban nature concepts used in the studied literature and relational models reveals that they usually operate at, or emphasize, different purposes or semantic levels (e.g., spatial, functional, instrumental, cognitive or guiding). In order to advance in the production of a relational model for urban nature concepts, specific definitions were chosen for the considered concepts and their main aims and potential use in urban nature planning were inferred from the studied literature.
The selection of definitions was based on their broad acceptance by researchers, practitioners and decision-makers, while the identification of their main aims was based on the definitions themselves and on the search for complementary aims between concepts, to avoid duplications. The resulting relational model is a combination of a layered model in which space and function, in this case GI and the benefits generated within it (ESS, human well-being and biodiversity if understood as an ecosystem service), are layers of the same entity (urban nature), and in which other concepts have the capacity to affect the generation of benefits in the spatial infrastructure (NBS, SUDS, etc.). In addition, other concepts embed the urban nature concept, giving purpose or direction to its planning or management (e.g., urban sustainability, resilience, quality of life, or biodiversity when this last concept exceeds the pure anthropocentric perspective and is understood as a precondition for sustainability), or approaching it from a specific perspective (e.g., natural capital). As presented in Figure 2, urban sustainability and resilience, together with biodiversity and well-being, could ideally operate as overarching and guiding frameworks, setting dynamic goals or processes for the evolution of urban socio-ecological systems and supporting systemic and transdisciplinary ways of thinking. The proposed model emphasizes the spatial dimension of urban green-blue infrastructures that, following some of the most widespread and recent definitions of the term [47], are perceived as spatial networks where different types of nature and nature-associated processes take place in cities. These infrastructures are understood as basic components for the functioning of the urban socio-ecological system and, from an anthropocentric perspective, have the capacity to increase human well-being by delivering an extensive range of benefits or ESS [11,12,47,52].
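As a purely illustrative aid (the class names and labels below are shorthand introduced here, not terminology from the study), the layered-plus-embedded arrangement just described can be encoded as a small data structure: urban nature as a layered entity combining a spatial network (GI) and a functional dimension (ESS and associated benefits), with performance-adjusting tools (NBS, SUDS) acting on it and overarching frameworks embedding the whole.

```python
from dataclasses import dataclass, field

@dataclass
class UrbanNature:
    """Layered core of the model: space and function as layers of one entity."""
    spatial_layer: str = "green-blue infrastructure (GI/UGI)"
    functional_layer: list = field(default_factory=lambda: [
        "ecosystem services (ESS)",
        "human well-being",
        "biodiversity (when understood as an ESS)",
    ])
    # Tools that adjust how much benefit the spatial layer generates.
    performance_tools: list = field(default_factory=lambda: [
        "nature-based solutions (NBS)",
        "sustainable storm water management (SUDS)",
    ])

@dataclass
class RelationalModel:
    """Concepts that embed urban nature, giving it purpose or perspective."""
    overarching_frameworks: list = field(default_factory=lambda: [
        "urban sustainability", "resilience",
        "quality of life", "biodiversity (as a precondition)",
    ])
    perspective: str = "natural capital"
    core: UrbanNature = field(default_factory=UrbanNature)

model = RelationalModel()
```

The nesting mirrors the typology of Figure 1: the layered relation lives inside `UrbanNature`, while the embedding relation is expressed by `RelationalModel` wrapping it.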
In fact, the adequate and rigorous consideration of those benefits can be used to objectively assess the performance of green-blue infrastructures and support their improvement. Conversely, the structure, composition, characteristics and functioning of each green-blue infrastructure greatly determine its capacity to generate ESS, and this capacity or performance can be managed or adjusted by using features, structures or elements assisted by nature and natural processes (e.g., NBS, SUDS, etc.). Furthermore, in accordance with the suggested relational model, the combination of the benefits and spatial-functional dimensions of urban nature, together with the instruments or tools facilitating its normal or amplified performance, could be understood as overall urban nature capital [37]. All these concepts can operate under the umbrella of urban sustainability and resilience and support the approach to the city as a socio-ecological system in which humans and the "other nature" are deeply intertwined and can achieve higher levels of synergy and hybridity. In addition, this model considers the connection between highly performative urban nature systems and the transition towards more sustainable urban metabolisms by analysing the potential contributions of urban nature in material, water and energy flows.

Students' Application of the Proposed Relational Model in a Set of Baltic/Finnish Cities (How Could a Proposed Relational Model for Urban Nature Concepts Support Urban and Landscape Planning Education?)

The role of the proposed relational model in the Aalto University Green Area Planning course was to promote sustainability transitions in Finnish and Baltic cities through the transformation and improvement of their urban nature systems in general, and of their blue-green infrastructures and associated ecosystem services in particular. The model provided a methodological and conceptual framework combining different urban nature concepts, aimed at increasing the performance of urban nature and its capacity to generate well-being and influence urban metabolisms, urban morphology and urban ways of living [37]. In addition, the model tried to help the students understand and explore the connections between the concepts and how they can be used effectively and synergistically in urban and landscape planning. The course was organized in different phases in which students progressively explored the considered concepts, analysed their potential linkages and familiarised themselves with various quantitative and qualitative methods supporting the analysis and upgrading of urban nature systems for the studied cities and their functional units (urban landscape types, urban functional areas or typological urban areas). The definition of functional urban units challenged the dichotomist division between green and urban, and approached the city as a collage of different landscape types formed by particular combinations of green types, building types and land uses, at the same time allowing for the inclusion of new types of unplanned and dispersed elements of nature. Overall, the outcomes and deliverables produced during the course revealed an adequate articulation amongst the studied urban nature concepts and an increased capacity of the students to think both conceptually and systemically. Moreover, the results of the course displayed a remarkable level of exportability and scalability, as well as a good connection between the upgraded urban nature systems and the overarching goal of increasing the levels of sustainability and resilience in urban socio-ecological systems. In a wider sense, the acquired skills and knowledge were expected to reinforce the students' capacity to become engaged in broader urban discussions, both as part of their studies and of their future professional or research practice (Figures 3 and 4).

Figures 3 and 4 show some examples of work produced by the students. The matrix located in the upper part of Figure 3 displays how, in the city of Oulu (Finland), different urban green types (columns) can generate different types of ESS (rows with pie charts presenting regulating, provisioning and cultural ESS, as well as their overall aggregation). The urban green-blue infrastructure was approached as a combination of different green types, and the city as a collage of landscape types characterized, amongst other factors, by the particular qualities of their green-blue infrastructure and the specific ESS provided by it. Figure 3 also includes a map in its lower half, displaying the location of each urban green type and suggesting, together with other physical, functional and perceptual factors, the main landscape types in the city of Oulu (Finland). In particular, the two sections at the top of Figure 4 show how adjustments in two prototypical urban transects in Espoo (Finland) can have a significant effect on the performance of its blue-green infrastructure and its constituent green types. The pie charts on the left display the proportions of each green type in the blue-green infrastructure of each urban or landscape type and are accompanied by a second pie chart explaining the type of ownership (public or private). The proposed improvements would produce a significant increase in the performance or quality of the addressed green types, as well as of the total blue-green infrastructure. These results are aligned with the strategic proposals developed in some European cities to make densification and greening compatible and to generate more ESS without increasing the overall area of the blue-green infrastructure [60]. It is important to remark that, arguably, for the purpose of the development of these proposals, the quality of a particular green area was connected with the intensity and variety of the ESS produced within it.
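The matrix logic behind Figures 3 and 4 (green types in columns, ESS categories in rows, aggregated into an overall score, then area-weighted per landscape type) can be sketched in a few lines. All green types, scores and area shares below are hypothetical placeholders for illustration, not values from the students' deliverables.

```python
# ESS categories used in the course matrices.
ESS_CATEGORIES = ["regulating", "provisioning", "cultural"]

# Rows x columns of the matrix: hypothetical mean ESS scores (0-1)
# per urban green type and service category.
ess_matrix = {
    "urban_forest":     {"regulating": 0.9, "provisioning": 0.4, "cultural": 0.6},
    "park":             {"regulating": 0.6, "provisioning": 0.2, "cultural": 0.9},
    "street_trees":     {"regulating": 0.5, "provisioning": 0.1, "cultural": 0.4},
    "community_garden": {"regulating": 0.3, "provisioning": 0.8, "cultural": 0.7},
}

def aggregate_ess(green_type: str) -> float:
    """Overall ESS provision of one green type (simple mean of categories)."""
    scores = ess_matrix[green_type]
    return sum(scores[c] for c in ESS_CATEGORIES) / len(ESS_CATEGORIES)

def infrastructure_performance(shares: dict) -> float:
    """Area-weighted ESS performance of a blue-green infrastructure,
    given the share of each green type in the total green area."""
    return sum(share * aggregate_ess(gt) for gt, share in shares.items())

# A hypothetical landscape type where parks dominate the green area:
shares = {"park": 0.5, "street_trees": 0.3, "urban_forest": 0.2}
print(round(infrastructure_performance(shares), 3))
```

Adjusting the shares or the per-type scores mimics the transect interventions of Figure 4: performance can rise without enlarging the total green area, by upgrading the quality mix.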
In the end, this approach promoted higher levels of vertical or site-related multifunctionality but lacked a more systemic character, which was partially achieved by introducing other strategies usually based on horizontal properties, flows and connectivity. As displayed in the lower part of Figure 4, the same principles were applied in different urban areas of the city of Turku (Finland), generating similar types of outcomes.

External Assessment of the Proposed Relational Model and Its Application by the Students in a Set of Baltic/Finnish Cities (How Would the Proposed Relational Model and the Results of Its Academic Application Respond to the Needs and Expectations of Decision Makers, Professionals and Specialists from Different Fields?)

Planners, researchers and practitioners assessed the proposed relational model and its implementation by the students of the green area planning course after reviewing the posters produced by the students (see examples in Figures 3 and 4). Table 2 displays the results of that assessment in relation to the capacity of the model and the corresponding proposals to respond to sustainable urban planning challenges in general, and to green urban planning challenges in particular. In addition, Table 2 reveals the main merits and shortcomings of the model and shows how familiar different job sectors, professional fields and academic backgrounds are with the studied concepts, how much they use them, and their perception of the model and the students' works. As indicated in Table 2, 24 of the 75 contacted persons answered the questionnaire. By job sector, 62% of the respondents were employees of the cities where students applied the proposed model, 21% were teachers or researchers and 17% were practitioners.
By academic background, 42% of the respondents were landscape architects, 17% were architects, 17% were environmental scientists, 8% were engineers, 8% were geographers and 8% were social scientists or licentiates in public administration. Regarding the professional field of practice, 25% of the respondents were working in environmental planning, 25% in city and urban planning, 21% in green area planning/landscape planning and 21% in landscape design, while the remaining 8% were involved in civil engineering or public administration. Analysis of the results shows that the best known urban nature concepts across all sectors, academic backgrounds and professional fields of practice are "ecosystem services", "sustainable storm water management" and "green infrastructure". The "nature-based solution" concept is less known, except among the teachers/researchers (job sector), geographers (academic background) and green area planners (professional field) groups. The "urban metabolism" concept is only well known amongst academics (job sector) and architects and geographers (academic background). The level of use of the studied urban nature concepts is always lower than the level of familiarity amongst respondents. The most used concept is "ecosystem services", especially in the academic/research sector, amongst environmental scientists, geographers and social scientists (academic background) and in the professional field of environmental planning. The "green infrastructure" concept is also frequently used by city, green-area, landscape and environmental planners (professional field), by respondents with an academic background in environmental sciences, architecture, landscape architecture and geography, and by people working in the academic research sector or in local administration.
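As a quick sanity check on the survey figures, the reported job-sector percentages can be converted back into headcounts of the 24 respondents; the rounded counts add up to the full sample.

```python
# Converting the reported job-sector percentages of the 24 respondents
# into whole-person headcounts (figures taken from the text above).
n_respondents = 24
job_sector_pct = {
    "city employees": 62,
    "teachers/researchers": 21,
    "practitioners": 17,
}

counts = {group: round(n_respondents * pct / 100)
          for group, pct in job_sector_pct.items()}
print(counts)  # roughly 15, 5 and 4 respondents
```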
The "nature-based solution" concept is not frequently used by the sampled academics/researchers but is used by city employees (sector) and by people with technical academic backgrounds, including engineers. Surprisingly, the "sustainable storm water management" concept loses relevance across the addressed job sectors, professional fields and academic backgrounds, and this situation becomes extreme in the case of the "urban metabolism" concept. Regarding the combination of the studied urban nature concepts, only teachers/researchers and environmental scientists confirmed that they have and use an integrative or relational framework or model. The proposed integrative or relational model for the studied urban nature concepts was perceived by all job sectors, academic backgrounds and professional disciplines as a promising model to support "green area planning", "sustainable city planning", "environmental urban sustainability" and "sustainable storm water management", and for the "integration of the studied concepts". The potential of the model to increase "the role of urban nature in sustainable urban metabolisms" and to support "your professional or academic activity" generated very different opinions between the sampled groups and, in general, the model was perceived as less useful to support "social and economic urban sustainability". Regarding the responses of the different groups, the proposed model was perceived as highly effective and useful by practitioners from the private sector and, to a lesser degree, by teachers and researchers. Regarding the groups with different academic backgrounds, the model received a high score from architects, engineers and geographers, and an average score from landscape architects and environmental scientists, the latter being the only group that had previously used a relational model for the studied concepts.
Finally, amongst the studied professional fields, the respondents working in city and urban planning, landscape design and environmental planning rated the model positively, whereas professionals working in green area planning and landscape planning gave the model an average score, despite the high score that the same groups had given to the relational model to support green area planning and management. The evaluation of the students' works in the selected Baltic and Finnish cities was highly aligned with the assessment of the relational model explained above. Thus, the works produced by the students were perceived as particularly useful to support "sustainable green area planning and management", "environmental urban sustainability" and "sustainable storm water management" and, to a lesser degree, "sustainable city planning" and the "role of urban nature in sustainable urban metabolisms". Again, the works were not perceived as particularly effective in dealing with "social and economic urban sustainability". Amongst the studied groups, practitioners from the private sector, architects and landscape designers assessed the produced works very positively, whereas civil servants from the case study cities and respondents working in city and urban planning gave a positive assessment. Due to the small number of respondents in the social scientist, civil engineering and public administration groups, their positive assessment was not considered significant.

Discussion

This discussion is based on a critical reflection on the results produced in this research and how they can inform or support landscape planning education and urban planning practice in Finland.
This critical reflection is connected with the overall potential of relational models to more effectively integrate different urban nature concepts into planning and, more specifically, with the process that led to the generation of the relational model proposed in this article and with the results of its use in higher education, as assessed by a wide range of external experts and potential users.

About Relational Models for Urban Nature Concepts

Concerning the general need for relational models connecting urban nature concepts, and as revealed by the results of the survey, few groups of users, regardless of their job sector, academic background or professional activity, have or use such models, the exceptions being academics and environmental researchers. Some of the studied urban nature concepts are quite well known amongst both academics and practitioners (e.g., GI, ESS, NBS and sustainable storm water management), but the level of knowledge decreases for other concepts such as urban metabolism. However, the level of use of these concepts is much lower than their level of familiarity amongst almost all types of users, suggesting that there is a gap between theory and practice. In accordance with these results, and with the selected literature advocating the use of conceptual or relational models as a prerequisite to holistically analyse, plan and manage complex systems in general, and urban nature systems in particular [5], it would be advisable to promote their use in education in order to increase the capacity of future practitioners and decision makers to use or develop frameworks in which they can coherently integrate different concepts emerging from research.

About the Relational Model as a Tool in (Landscape) Planning Education and Planning Practice

Regarding the potential of the relational model to support landscape and urban planning education, teachers and researchers found it to be a promising tool, both in general and in their specific academic fields.
This opinion was confirmed by the positive assessment that the same group of people, together with private practitioners and city employees, gave to the works of the students using the model. Likewise, both city employees and private practitioners found the relational model adequate for integrating the concepts, especially in sustainable city/urban planning, green area planning, storm water management and environmental planning in Finnish cities. The suggested relational model combines various urban nature and sustainable-planning concepts (e.g., GI, ESS, NBS, natural capital, socio-ecological systems, etc.) and offers a conceptual framework for more performative natures in more sustainable cities. Additionally, the external assessment suggests potential ways to improve the model and its use in higher education, as well as directions for future research. In contrast to other conceptual models that focus on one particular urban nature concept and its relationship with other related concepts, or to studies focused on comparative terminological investigations, this model aims to offer an operational framework strictly focused on the relationship between urban nature concepts and their integrated use in urban sustainability and planning. Because of this focus, some specific aspects or meanings of the concepts might have been partially overlooked and some particular details of their methodological use might not have been presented. However, such aspects would be perfectly compatible with the proposed model, for example the green infrastructure planning principles proposed by Hansen and Pauleit (2014), the practical and governance implications of using ESS in urban landscapes as defined by Haase et al. (2014) [6,9], or a more consistent integration of the biodiversity concept, either as an ecosystem service or as an overarching goal per se [43,44].
Somehow, this was an assumed and logical consequence of focusing on integration and on the definition of a more consistent and operational syntax between existing and emerging urban nature concepts, rather than on independent concepts. From a content and typological point of view, and compared to other models, the proposed relational model aims to include a much wider number of concepts and to further elaborate their complementarities and connections (Figure 2). The result is a model that formally and conceptually combines a layered approach to urban nature and an embedded approach to the concepts framing and directing its planning and management. As a result of this approach, urban sustainability and resilience, together with human well-being and biodiversity, are perceived as overarching goals, and the urban socio-ecological system as the system where urban nature is inserted and as the all-encompassing thinking and epistemological paradigm. Moreover, the formulation in the model of urban nature as a layered combination of a spatial network and its associated cascade of processes, functions, services and benefits [11] has clear connections with landscape ecology theory and the proposed interdependence of structure, function and change, or pattern and process [61,62]. These analogies can be extended to the perception of sustainability as a paradigm driving the evolution of socio-ecological systems. The methodology followed for the development of the relational model was effective and rigorous but, as revealed during the semantic review (Table 1), some additional concepts might have been included in the initial search in order to integrate them more clearly in the model (e.g., quality of life, human well-being or biodiversity). The parameters defined by Maes and Poels (2007) offer a solid framework for the analysis of the quality of a conceptual or relational model [32].
Thus, in terms of semantic quality, the proposed model starts with a semantic alignment of concepts in order to avoid duplication and misunderstanding. However, this semantic work could have been evaluated and endorsed by different types of users after checking whether the proposed definitions conform to their own principles, experiences and expectations. Secondly, in terms of systemic quality (ease of understanding) and of usefulness, the model was found highly useful for integrating the concepts. This positive feedback was provided by all the types of users who responded to the survey, regardless of their job sector, academic background or professional field. Finally, regarding the user satisfaction factor, the students who used the model to develop their proposals found it extremely useful for combining different concepts in a coordinated and complementary manner. Nevertheless, despite the overall positive assessment of the model, this research is perceived as an initial step towards a more consistent definition and validation of relational models serving the needs of both research and practice.

About the Academic Results Produced with the Proposed Relational Model

From an educational point of view, the work produced by students on the basis of the relational model shows a high level of integration of the considered urban nature concepts, as confirmed through the external and quantitative evaluation of the deliverables produced in the green area planning course. From a cognitive perspective, the model was aimed at promoting systems thinking and relational skills in an increasingly complex world, in which one of the main challenges for both students and practitioners is to develop their capacity to integrate existing and emerging concepts in a coherent and operational framework.
Thus, complex systems theory is gaining relevance in a wide range of professions, where there is a real need to "understand the relationship of general complex systems principles to domain-specific features of such systems" [63] (p. 23). This challenge and goal have also been recognized in the landscape architecture field [64]. Following these pedagogical findings, it can be concluded that the elaboration of relational models is a key exercise in itself to promote systems and relational thinking. Consequently, in the 2020 version of the green area planning course, the presented model was used simply as an activator for the generation of relational models by the students themselves. The outcomes from this course showed an even deeper level of understanding of the addressed concepts and of their potential in urban and landscape planning and management. As displayed in Table 2, the scores obtained by the students' works were slightly lower than those obtained by the relational model itself. This can be explained by the limited amount of time, resources and expertise of the students, and suggests the need to deepen the connections between conceptual integrative models, education and practice in order to realize the full potential of the models. Moreover, the decision to apply and test the model in a studio course of a master's programme, instead of in a professional practice or in a city office, provided a particularly flexible and speculative framework and was also perceived as a way to generate new and innovative modes of practice and to educate future gamechangers. Nevertheless, the close collaboration with the cities where the proposals were developed ensured a smooth flow of information between cities and students during the whole process.
About the External Assessment or Validation of the Model and the Produced Outcomes by Different Types of Stakeholders in the Landscape and Urban Planning Process

The external assessment of both the relational model and the results produced by the students was accomplished through a survey providing quantitative feedback about the utility of the model and its capacity to respond to different urban and landscape planning challenges. This approach is aligned with other initiatives aimed at validating conceptual models in a more consistent and objective way [43,44]. The inclusion of different types of respondents, organized according to job sector, academic background and professional field of practice, gave the survey additional value by revealing how familiar different types of users are with some urban nature concepts, and how much they could benefit from the development of more clearly articulated connections between them. Due to the limited size of the surveyed sample, the results of this work mainly have a qualitative value. However, they disclose patterns and potential links that deserve further and more comprehensive investigation. Despite these limitations, the results suggest new lines of action to improve the relational model in order to address social and economic urban sustainability more clearly and to facilitate the integrated and articulated use of urban nature concepts in education, research and practice. Moreover, the results can also activate a critical and constructive diagnosis of the current usage of urban nature concepts and integrative models in different academic disciplines, job sectors and professional fields, challenging existing boundaries and promoting systemic and interdisciplinary thinking.

Conclusions

Overall, this research shows the benefits of defining relational models in order to work, in research, education and practice, with different and interconnected concepts operating in complex systems.
The methodology used to define this kind of model emerged, in this case, from a specific theme (urban nature in sustainable urban planning), but it has the potential to be applied to other fields or topics. Additionally, the definition and use of these kinds of models in education can open new possibilities to strengthen the necessary linkage between research and education, whereas the assessment by external evaluators of the model itself and of the subsequent academic results might define a promising path to reinforce dialogue between academia and practice. In a more specific sense, the conducted research reveals that the level of transfer from theory to practice for the studied concepts is still at an early stage and that emphasis has been placed on the generation of concepts rather than on their articulation and their connection to integrative landscape and urban planning. Moreover, the presented results show the potential of the proposed relational model to support landscape and urban planning education by promoting an integrative and coordinated use of the novel urban nature concepts emerging from research and feeding professional practice. In a broader sense, and as revealed by the critical assessment of the students' works, the development and application of this kind of model promotes conceptual, relational, critical and systemic thinking, which are considered essential cognitive skills for working in complex socio-ecological systems. From the perspective of sustainable landscape and urban planning, the proposed model and the subsequent students' proposals were found highly relevant and useful by different types of users, hinting at the need to make clear syntaxes between urban nature concepts more widely available and suggesting the potential of the proposed relational model to be disseminated, used in practice and perfected on the basis of the detected weaknesses and strengths.

Funding: This research received no external funding.
Determinants of Energy-Based CO2 Emissions in Ethiopia: A Decomposition Analysis from 1990 to 2017 Ethiopia, among the fastest growing economies worldwide, is witnessing rapid urbanization and industrialization that is fueled by greater energy consumption and high levels of CO2 emissions. Currently, Ethiopia is the third largest CO2 emitter in East Africa, yet no comprehensive study has characterized the major drivers of economy-wide CO2 emissions. This paper examines the energy-related CO2 emissions in Ethiopia, and their driving forces between 1990 and 2017, using the Kaya identity combined with the Logarithmic Mean Divisia Index (LMDI) decomposition approach. Main findings reveal that energy-based CO2 emissions have been strongly driven by the economic effect (52%), population effect (43%), and fossil fuel mix effect (40%), while the role of the emission intensity effect (14%) was less pronounced during the study period. At the same time, energy intensity improvements have slowed down the growth of CO2 emissions by 49%, indicating significant progress towards reduced energy use per unit of gross domestic product (GDP) during 1990-2017. Nonetheless, for Ethiopia to achieve its 2030 targets of a low-carbon economy, further improvements through reduced emission intensity (in the industrial sector) and fossil fuel share (in the national energy mix) are recommended. Energy intensity could be further improved by technological innovation and promotion of energy-frugal industries. Introduction Global warming and the accumulation of carbon dioxide (CO2) and other greenhouse gas (GHG) emissions have attracted global attention. Most recently, an increasing number of countries have embarked on the road to industrialization and economic restructuring, and this has consequently led to higher levels of energy-related CO2 emissions. In addition, rapid population growth and urbanization have been huge contributors to the changes in CO2 levels around the globe [1].
Emissions of CO2 are also a strong environmental consequence of economic development around the world [2]. Africa, with the lowest Human Development Index (HDI) in the world, has an obvious need to strive for economic prosperity at all costs in the years ahead [3]. Around the globe, there is ample evidence of increasing CO2 emissions, with Africa ranked the most susceptible to global warming. With the rising share of fossil fuels as an energy resource, two major challenges have emerged, especially for developing economies: increasing CO2 emissions and lowering efficiency of energy consumption [4]. This region has set sustainable development targets of reducing emissions. Meanwhile, many studies have confirmed the Kaya identity (an extended IPAT approach, where "I" stands for environmental impact, "P" for population, "A" for affluence and "T" for technology) as a tool to assess a variety of determinants (population, economic growth, energy intensity, fossil fuel mix and emission intensity), with the flexibility of adding more factors to investigate drivers of environmental impacts. Previously, different variables have been introduced into this Kaya framework, such as the labor input effect [13], industrial structure effect [14], and fuel mix component effect [15]. The application of the Kaya identity has been quite global, with several studies for China, Europe and the United States [16], G20 countries [17], Cameroon [18], and other parts of Africa [19][20][21][22]. Among different variations of the Kaya identity, some groups of researchers have used the Autoregressive Distributed Lag (ARDL) and the Stochastic Impacts by Regression on Population, Affluence, and Technology (STIRPAT) models in decomposition-related studies. The STIRPAT model has been previously used for multi-country comparisons [23], and for different countries such as Ghana [24], Tunisia [25], the United States [26], China [27][28][29], and for several developing economies and economic sectors [30][31][32].
According to [33], within the global CO2 emissions in 2018, East Africa is represented by 65.7 MtCO2, of which Ethiopia accounts for only 15 MtCO2. Although Ethiopia is ranked 94th in the world, and is thus very far from being one of the world's largest emitters of CO2, it is currently the third largest emitter in East Africa. Following the Paris Agreement, Ethiopia is set to reduce its GHG emissions by 64% below the business-as-usual levels by 2030.
To succeed with this high emission reduction target in a low-carbon growth economy, changes in the country's CO2 emissions and their determinants must be assessed quantitatively in order to devise informed policy decisions. To quantitatively determine the drivers of environmental impact, such as energy-based CO2 emissions, the Logarithmic Mean Divisia Index (LMDI) is a well-regarded decomposition analysis approach [34]. The main strength of this method is that it can be applied to more than two factors and gives a perfect decomposition; it creates a link between the multiplicative and additive decomposition, thereby giving estimates of an effect at the subgroup level [35][36][37][38]. In recent years, this approach has been applied at various levels, both national and subregional [27,39], such as in China [40,41], Latin America [42], the United States [30], Iran [43], India [44], Pakistan [45], the Philippines [46], the European Union [47], Greece [48], Spain [49], Ireland [50], South Korea [51], the United Kingdom [52], Brazil [53], and Turkey [54]. To date, there has not been any study on the decomposition of CO2 emissions in Ethiopia which considers all five determinants selected in this study (population, economy, energy intensity, fossil fuel mix, and emission intensity). As per the literature review, this is the first study to assess five factors using the Kaya identity and LMDI for CO2 emissions in Ethiopia. The various determinants of carbon emissions studied can provide more indicators that would expand the existing mitigation strategies to curb GHG emissions, and therefore help to attain the mitigation targets set by the country. Secondly, the data used in this study are the most recent available (including data up to 2017). In addition, the study of the determinants of CO2 emissions in Ethiopia could help strengthen carbon mitigation practices in Ethiopia as well as at the African regional level.
Given these circumstances, this study aims to achieve the following research objectives: (i) examine the determinants of Ethiopia's CO2 emissions from 1990-2017; (ii) assess the effect of each determinant with its effect coefficient factor; (iii) elaborate on policy implications for Ethiopia towards achieving low-carbon and sustainable economic development. The study analyses the effect of five determinants on Ethiopia's CO2 emissions from 1990-2017: population, economic growth, energy intensity, fossil fuel mix and emission intensity, together with their effect coefficients, for the very first time. An extended Kaya identity and LMDI decomposition are used to explain the various determinants of Ethiopia's CO2 emissions. Africa being most susceptible to global warming issues and Ethiopia being among the top CO2 emitters in East Africa make this study very pertinent to curbing emissions in developing African countries. Additionally, Ethiopia is emerging as a manufacturing hub of Africa with increasing consumption of total primary energy, which requires close attention with respect to CO2 emissions. The rest of the article is organized as follows: Section 2 presents the materials and methods used in the study, Section 3 presents the results, and Section 4 presents the policy implications and recommendations. Finally, Section 5 presents the conclusion of the study. Materials and Methods The overall methodological framework applied in this study is illustrated in Figure 2. Firstly, the Kaya identity was integrated with the LMDI approach to decompose changes in CO2 emissions in Ethiopia from 1990-2017. As shown, the extended Kaya identity and LMDI approach served as the basis of analysis, in which the activity effect was analyzed for five different drivers of CO2 change.
The extended Kaya identity and LMDI approach were used to analyze the population effect, economic growth effect, energy intensity effect, fossil fuel mix effect and emission intensity effect. The Kaya identity is a renewed version of the IPAT identity postulated previously [55,56]. Next, a policy analysis was performed to better understand the energy policy developments in Ethiopia based on official reports and policy documents, in relation to changing carbon emissions in the country. This analysis was carried out to provide an overview of the situation from a regional perspective. Socio-Economic Status of Ethiopia Ethiopia is located in Northeast Africa, in the Horn of the continent, bordered by Sudan to the west, Eritrea to the north, Kenya to the south, and Djibouti and Somalia to the east [57]. In 2018, Ethiopia ranked second in SSA in terms of population (107.5 million) and fifth in economic status in Africa (GDP of 80.3 billion USD) [58]. During the last two decades, the country has undergone huge structural and economic changes, and experienced high economic growth, averaging 10.9% a year from 2005-2015, according to official data, compared to its regional average of 5.4% [59,60]. As of 2018, the share of industry in national GDP was 28.1%, which was considerably lower than that of services (40.0%) and agriculture (33.3%). Later, the Ethiopian economy recorded 9% growth in 2018-2019, with 12.6% growth by the industrial sector. With a shift from agriculture to manufacturing in recent years, Ethiopia is fast becoming the manufacturing hub of Africa, with enormous progress made especially following policies such as the Growth and Transformation Plan (GTP) [61]. Socio-economic statistics for Ethiopia from 1990-2017 are presented in Table 1.
Under this policy framework, Ethiopia has registered GDP growth rates averaging slightly above 10% [62]. Poverty levels have been reduced substantially and Ethiopia is on track to meet most of the Millennium Development Goals (MDGs). Ethiopia is now embarking on GTP II, with the goal of moving towards a low-carbon growth economy and middle-income status by 2025 [63].
The government of Ethiopia intends to curb its GHG emissions to 145 MtCO2e by 2030, a reduction of 255 MtCO2e relative to projected business-as-usual emissions, with the integration of the CRGE and GTP II, the latter aiming at achieving a carbon-neutral economy. Currently at 1.8 tCO2e, Ethiopia's per capita GHG emissions are not high compared to the global average, but achieving its target of reducing them to 1.1 tCO2e by 2030 is a priority concern [64]. Kaya Identity Approach The Kaya identity has been applied in many fields, including energy, energy economics, environmental science, climate change, and resource metabolism; examples include [50,[65][66][67]. Here, assumptions about population growth, economic factors, and energy technology, as well as the carbon cycle itself, play an important role in predicting the growth of CO2 emissions. The conventional approach to developing a series of emission scenarios depends on those factors and the use of those scenarios to drive mathematical models of how the atmospheric and climate systems will react to these inputs. At the level of treatment possible in this short section, we cannot begin to approach such complicated models. However, we can perform some simple calculations to at least give some meaning to some important factors. One way to establish simple models of environmental problems is to start with the notion that impacts are driven by population, affluence and technology, also called the IPAT equation. The following application of IPAT to carbon emissions from energy sources is often referred to as the Kaya identity, which is a more concrete form of IPAT in this case. The Kaya identity, a modified/extended form of the IPAT equation, is often used to study carbon emissions related to energy resources [68]. In this study, we have used the Kaya identity framework to calculate the environmental impact of energy consumption for Ethiopia during 1990-2017, as given by Equations (2)-(4).
The factors in Equation (2) represent ratios which are part of the Kaya identity, and these factors showcase the relationship between anthropogenic CO2 emissions and their determinants. Here the environmental impact "C" is represented by carbon emissions; the other factors include population "P", affluence "A" expressed as GDP per person, technology "T" expressed as energy consumption per unit of GDP, and finally fossil fuel consumption "FFC", which is the fraction of TPES supplied by fossil fuels. In this study, we have extended both the IPAT equation and the Kaya identity as given by Equations (3) and (4). Incorporating the fossil fuel consumption (FFC) per unit of TPES (fossil fuel mix effect) gives Equation (3), or its simplified version in Equation (4): It = Pt × At × Et × Ft × Ct, where I = CO2 emissions (Mt), P = national population, A = affluence considered in terms of GDP per capita measured in constant US dollar prices of 2010, and T = technology. In Equation (4), Pt and At are the population and affluence at time t, Et represents energy intensity in terms of TPES per unit of GDP, Ft represents the fossil fuel mix effect in terms of fossil fuel consumption (FFC) per unit of TPES, and Ct represents emission intensity in terms of CO2 emissions per unit of FFC. All five impact categories in Equation (4) will be used to analyze their relative impacts on CO2 emissions during 1990-2017 in Ethiopia. Decomposition and Effect Coefficient Analysis The LMDI decomposition analysis proposed in [34] is popular in evaluating the determinants of carbon emissions in various case scenarios. Based on the Kaya identity, the following equations illustrate the universal form of the LMDI decomposition analysis. Between the start year (t = 0) and end year (t = 27), the change in the total environmental impact is calculated using Equation (5): ∆It = IT − I0 = ∆Pt + ∆At + ∆Et + ∆Ft + ∆Ct.
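The extended Kaya identity of Equation (4) can be sketched directly in code. The input values below are illustrative placeholders (not the paper's data series), chosen only to show that the factorization is exact by construction:

```python
import math

# Extended Kaya identity (Equation (4)): I = P * A * E * F * C.
def kaya_factors(population, gdp, tpes, ffc, co2):
    """Split CO2 emissions into the five Kaya factors."""
    affluence = gdp / population          # A: GDP per capita
    energy_intensity = tpes / gdp         # E: TPES per unit of GDP
    fossil_fuel_mix = ffc / tpes          # F: fossil share of TPES
    emission_intensity = co2 / ffc        # C: CO2 per unit of fossil fuel
    return population, affluence, energy_intensity, fossil_fuel_mix, emission_intensity

# Illustrative (assumed) national totals, not actual Ethiopian statistics.
P, A, E, F, C = kaya_factors(population=106.4e6, gdp=80.3e9, tpes=43.0e6, ffc=4.7e6, co2=15.0)

# The identity is exact: the five factors multiply back to total emissions.
assert math.isclose(P * A * E * F * C, 15.0)
```

Because each factor is a ratio whose numerator cancels the next factor's denominator, the product telescopes back to total emissions, which is what makes the subsequent decomposition residual-free.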
where ∆Pt represents the population effect, ∆At represents the economy (or income) effect, ∆Et represents the energy intensity effect, ∆Ft represents the fossil fuel mix (or substitution) effect, and ∆Ct represents the emission intensity effect. Each of the activity effect parameters is calculated using Equations (6)-(10), respectively; in the LMDI-I form, each effect takes the shape ∆Xt = L(IT, I0) × ln(XT/X0), where L(a, b) = (a − b)/(ln a − ln b) is the logarithmic mean. Following the decomposition approach, we also used effect coefficient analysis to further study the changing impact of the drivers of CO2 emissions over time. The effect coefficient of each driving force (effect) was calculated using Equation (11): effect coefficient = ∆Xt / I_Abs, where I_Abs = |∆P| + |∆A| + |∆E| + |∆F| + |∆C|. Data Collection The CO2 emissions data for Ethiopia were compiled using national emission records for the years between 1990 and 2017 and were complemented by energy use data from the International Energy Agency (IEA) database [12], where required. Country population and GDP were acquired from national economic reports and global databases, such as the World Bank database. For the policy analysis, publicly available policy documents were used, such as the CRGE and GTP (2010-2015 and 2015-2020), as described in Section 2.1. Some of the official reports by the Government of Ethiopia and the IEA were also analyzed. Results This section presents the outcomes of this work based on the Kaya identity and LMDI decomposition. The activity effect and its coefficient analysis are also discussed in this section. Kaya Identity Analysis Results for the five parameters considered in the Kaya identity analysis, namely the changes in population, economy, energy intensity, fossil fuel mix, and emission intensity during the study period, are illustrated in Figure 3. Most parameters increased steadily over the study period (1990-2017), while only energy intensity was seen to be declining over the same period.
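The additive LMDI-I decomposition and the effect coefficient of Equation (11) can be sketched as follows; the start/end factor values are illustrative assumptions, not the paper's series:

```python
import math

def log_mean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with the a == b limit."""
    return a if math.isclose(a, b) else (a - b) / (math.log(a) - math.log(b))

def lmdi_additive(factors_0, factors_t):
    """Additive LMDI-I: split Delta I = I_t - I_0 into one effect per factor."""
    i0 = math.prod(factors_0.values())
    it = math.prod(factors_t.values())
    w = log_mean(it, i0)  # common logarithmic-mean weight
    return {k: w * math.log(factors_t[k] / factors_0[k]) for k in factors_0}

def effect_coefficients(effects):
    """Equation (11): each effect divided by I_Abs, the sum of absolute effects."""
    i_abs = sum(abs(v) for v in effects.values())
    return {k: v / i_abs for k, v in effects.items()}

# Illustrative (assumed) factor values for the start and end year.
f0 = {"P": 47.9e6, "A": 208.1, "E": 1.81e-3, "F": 0.045, "C": 2.46}
f1 = {"P": 106.4e6, "A": 548.1, "E": 0.72e-3, "F": 0.089, "C": 3.22}
effects = lmdi_additive(f0, f1)

# Perfect decomposition: the effects sum exactly to the total change, no residual.
assert math.isclose(sum(effects.values()),
                    math.prod(f1.values()) - math.prod(f0.values()))
```

With these assumed inputs the energy intensity effect comes out negative (its end value is below its start value), mirroring the paper's finding that energy intensity was the only factor slowing emissions growth.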
As per the statistics for the period 1990-2017, the population grew by 122.1% (from 47.9 million in 1990 to 106.4 million in 2017), while per capita GDP rose by 163.4% (from 208.1 USD in 1990 to 548.1 USD in 2017). This is indicative of the significant economic prosperity achieved by Ethiopia during the study period. Population is a strong factor for CO2 emissions, as there is a linear relationship; as it grows, human consumption patterns also swell, increasing the need for fuel and the anthropogenic contribution to global emissions [69]. With the implementation of the second-phase GTP policy (2015-2020), several industrial parks are being developed throughout the national territory, which plays an important role in boosting economic growth in Ethiopia [70]. Economic growth, asset consumption and financial affluence do affect CO2 emissions and can cause high consumption of non-renewable energy resources [71]. Endowed with a higher economic growth rate, Ethiopia was able to rapidly develop urban and industrial infrastructures, transforming some of the industrial parks into eco-industrial parks, and thereby uplifting the living standards of the growing population [20]. Considering the technological advancement and foreign direct investments in the industrial, agricultural and service sectors, a rising fuel mix share of 49.4% was observed (0.045 in 1990 to 0.89 in 2017). This indicates a high rate of consumption of fossil fuels in the country. The rising fuel mix was accompanied by rising economic growth and industrialization, indicating a rapid carbonization of the local economy. Similarly, emission intensity has also been on the rise in Ethiopia, as evidenced by rising energy consumption. From 1990 to 2017, emission intensity increased from 2.46 to 3.22 (Mt CO2 per ktoe of fossil fuels), indicating higher emissions being released.
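The percentage changes quoted above can be checked directly from the endpoint values:

```python
def pct_change(start, end):
    """Percent change from start to end."""
    return (end - start) / start * 100

# Population: 47.9 -> 106.4 million; per capita GDP: 208.1 -> 548.1 USD.
pop_growth = pct_change(47.9, 106.4)
gdp_growth = pct_change(208.1, 548.1)

assert round(pop_growth, 1) == 122.1  # matches the reported 122.1%
assert round(gdp_growth, 1) == 163.4  # matches the reported 163.4%
```

Both reported figures are consistent with the endpoint values given in the text.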
The country is, however, considered energy insecure because of rising emission intensity, which is mainly due to changing industrial structures and rising CO2 emissions from non-fossil fuel resources, such as biomass and wood, as well as inefficient use of fossil fuel resources and the lack of high-efficiency energy conversion technologies, such as power plants, industrial boilers and steam generators [21]. Moreover, the early 1990s saw a drop in emission intensity, mainly attributable to the industrial restructuring efforts in Ethiopia, whereas the years 2015-2017 showed rising emission intensity as the industrial and GDP growth rates also increased sharply in those years.
The energy intensity of Ethiopia is, surprisingly, the only factor that decreased over the study period, from 1.81 toe per 1000 USD in 1990 to 0.72 toe per 1000 USD in 2017, indicative of a significant drop. As Ethiopia relies heavily on biomass and waste for energy, improved cook-stoves, universal electrification, and efficient lighting, as measures put in place by Ethiopia, have gone a long way to improve energy intensity in recent years [22].
Decomposition Analysis The five determinants of CO2 emissions for Ethiopia are analyzed from 1990 to 2017. With an interval of five years, and for 2016-2017, the determinants of carbon emissions and their relative contributions are presented in Table 2 (the effect during the entire study period is also given in the last column). As shown, during the first half of the 1990s, CO2 emissions were largely driven by emission intensity, because the rising population growth was pushing for increasing use of fossil fuels. This was closely followed by higher population growth, which also greatly affected the agricultural patterns and hence the carbon emissions in the country. It is well known that an increase in the population puts pressure on rising energy consumption patterns and hence has a higher impact on carbon emissions. The role of energy intensity in Ethiopia was less pronounced, yet it is important enough to be considered in national-level policies. During the very same period, the fossil fuel mix effect and affluence played a significant role in slowing down CO2 emissions in Ethiopia. This can be attributed to the popular use of biomass, promotion of low-carbon energy sources, and improved methods of cooking (with environmentally friendly stoves) during that period. However, this was the only period when the fossil fuel mix effect and economy effect were relatively low, and this helped to slow down the rate of carbon emissions significantly. During the latter part of the 1990s, change in CO2 was mainly driven by the fossil fuel mix effect and population growth. Meanwhile, energy intensity and emission intensity played the smallest role in changing the carbon emissions.
As the years went by, the economy effect, population factor and emission intensity caused a rise in CO2 emissions from this point onwards. During 2001-2005, CO2 emissions more than doubled from previous periods, indicating rising carbonization from national economic development; the impacts that played a positive role were the economy effect, population increase, and emission intensity. The period from 2006 to 2011 also saw a substantial increase in carbon emissions, driven by the economy effect, fossil fuel mix effect, and population effect. Thus, this period was highly responsible for increased CO2 emissions in Ethiopia, and apparently no effective effort was made towards carbon emission mitigation. This is usually the case with developing countries, which always consider improvement in energy efficiency with the aim of reducing energy consumption patterns relative to output [19]. However, between 2011 and 2017, emission intensity improved and the population effect moderated, both of which slowed down the rising CO2 emissions. On the whole, during the entire study period, the major driver of CO2 emissions was found to be affluence (promoting higher consumption patterns), followed by population influx (higher resource demand per capita), the fossil fuel mix effect (rising fossil fuel shares), and emission intensity (higher CO2 emissions per unit of FFC). The only negative driver of CO2 emissions during 1990-2017 was found to be energy intensity (more economic output per unit of TPES). Thus, in order to promote low-carbon growth, energy intensity could be focused on in the future to further slow down the growth in carbon emissions [72]. Population Effect and Its Coefficient As shown in Table 2, population played a significant role in increasing carbon emissions during the study period.
This is a clear indication that population growth is directly proportional to CO2 emissions, and in the future, population growth and urban demographic patterns will directly increase carbon emissions. Rising population is also an indication of increases in household size, high levels of urbanization, emerging infrastructure, increased transport facilities, increased levels of energy consumption, changes in lifestyle patterns and an ever-increasing exploitation of natural resources. Results for the population effect and its coefficient for CO2 emissions in Ethiopia from 1990-2017 are shown in Figure 4. As shown in Figure 4, the population effect (colored red) has been fluctuating upward. Within the study period from 1995 to the early 2000s, the population effect was relatively stagnant, but with a rather high coefficient effect; the share of the population effect was somewhat at a standstill while other determinants in the study were becoming relatively stronger drivers of CO2 emissions. For the entire study period, the population effect coefficient was reduced from 0.15 in 1990 to 0.10 in 2016, a 33% drop, implying an overall drop in its impact share. Studies have proven that population growth contributes enormously to CO2 emissions in both developed and developing countries [69], and with a steady rise in both population and CO2 emissions in Ethiopia, much attention is needed to reduce CO2 emissions per capita. In this regard, some efforts made by the government of Ethiopia to promote low-carbon economic development should be appreciated. However, more efforts are required to protect the population from adverse effects of climate change, such as extreme droughts, through responsive action against climate change. Economic Growth Effect and Its Coefficient Economic growth is synonymous with affluence, standards of living and the socio-economic performance of a country.
As Ethiopia has made good economic progress during the study period, this has heavily impacted its CO2 emissions as well. So far, Ethiopia has witnessed relatively fast growth in per capita GDP levels, higher consumption of finished goods, material-intensive living patterns, and increased overall energy consumption. As given in Table 2, rising affluence was the major driver of carbon emissions in the country during 1990-2017. This can be seen at its peak between 2006 and 2011. Results for the economy effect and its coefficient for CO2 emissions in Ethiopia from 1990-2017 are shown in Figure 5.
As Ethiopia has made a good economic progress during the study period, it has heavily impacted its CO 2 emissions as well. So far, Ethiopia has witnessed relatively fast growth in per capita GDP levels, higher consumption of finished goods, material intensive living patterns, and increased overall energy consumption. As given in Table 2, rising affluence was the major driver of carbon emissions in the country during 1990-2017. This can be seen at its peak between 2006-2011. Results for the economy effect and its coefficient for CO 2 emissions in Ethiopia from 1990-2017 are shown in Figure 5. As seen in Figure 5, the economic growth effect was fluctuating in the early and late 1990s but assumed a sharp rise from the year 2000 onwards. Especially in the year 2003, the Ethiopian economy experienced a economic boom, and this had a bearing on economic growth, and by extension, on the standards of living. This did not come without a spinoff in CO2 emission levels. However, the economic growth somehow experienced another fluctuation between 2004 till 2011 before having a dramatic increase until date. This also strongly accounted for the changes in CO2 emissions. For the period 1990 to 2017, economic growth effect coefficient increased from −0.44 in 1990 to 0.26 in 2017, indicating a large rise in its overall impact share. This highlights the fact that desirable economic prosperity will invite unwanted environmental implications along the way. As a way forward, Ethiopia, and the countries alike, could achieve sustainable economic growth by promoting clean energy technologies, and by incorporating the concepts of material circularity in their urban, regional, and industrial development as part of their sustainable development strategy. Energy Intensity Effect and Its Coefficient In this study, energy intensity was represented by TPES per unit of GDP. 
It expresses the energy requirement of an economy with increasing values indicating higher energy demand from the economic processes. Moreover, as energy intensity rises, carbon emission intensity also rises indicating a direct relationship and a recoil effect of economic growth and higher energy demand. As shown in Table 2, energy intensity was the only driver of carbon emissions with the most negative values (apart from 1990-1991) and it helped slow down rising CO2 emissions in Ethiopia. The trend for energy intensity in Ethiopia was a downward slope, indicating an improvement of the efficiency of energy use. The results for energy intensity effect and its coefficient for CO2 emissions in Ethiopia during 1990-2017 are shown in Figure 6. As seen in Figure 5, the economic growth effect was fluctuating in the early and late 1990s but assumed a sharp rise from the year 2000 onwards. Especially in the year 2003, the Ethiopian economy experienced a economic boom, and this had a bearing on economic growth, and by extension, on the standards of living. This did not come without a spinoff in CO 2 emission levels. However, the economic growth somehow experienced another fluctuation between 2004 till 2011 before having a dramatic increase until date. This also strongly accounted for the changes in CO 2 emissions. For the period 1990 to 2017, economic growth effect coefficient increased from −0.44 in 1990 to 0.26 in 2017, indicating a large rise in its overall impact share. This highlights the fact that desirable economic prosperity will invite unwanted environmental implications along the way. As a way forward, Ethiopia, and the countries alike, could achieve sustainable economic growth by promoting clean energy technologies, and by incorporating the concepts of material circularity in their urban, regional, and industrial development as part of their sustainable development strategy. 
Energy Intensity Effect and Its Coefficient In this study, energy intensity was represented by TPES per unit of GDP. It expresses the energy requirement of an economy with increasing values indicating higher energy demand from the economic processes. Moreover, as energy intensity rises, carbon emission intensity also rises indicating a direct relationship and a recoil effect of economic growth and higher energy demand. As shown in Table 2, energy intensity was the only driver of carbon emissions with the most negative values (apart from 1990-1991) and it helped slow down rising CO 2 emissions in Ethiopia. The trend for energy intensity in Ethiopia was a downward slope, indicating an improvement of the efficiency of energy use. The results for energy intensity effect and its coefficient for CO 2 emissions in Ethiopia during 1990-2017 are shown in Figure 6. As shown in Figure 6, energy intensity effect has dwindled over the last few decades. Especially during 2004-2005 and 2014-2015 when energy intensity effect decreased significantly, indicating its slowing effect on CO 2 emissions during these periods. Moreover, the energy intensity effect coefficient has been coincidental with the energy intensity effect, indicating its fluctuating relative impact on net carbon emissions has remained somehow similar. This means the share of energy intensity in 1990 has not changed much in 2017 as well. For the entire study period, the energy intensity effect coefficient decreased from 0.32 in 1990 to −0.26 in 2017, a substantial change in its overall impact share. With heavy reliance on biomass and waste for energy, and the lack of up-to-date energy technologies, Ethiopia needs to further improve its energy intensity to curb rising carbon emissions. To this end, hydropower is could be an important source of clean renewable energy in Ethiopia. 
Acknowledgement is made of the improved cook-stove initiative, efficient lighting systems and the universal electrification, which are promising efforts in Ethiopia to improve energy intensity [22]. Other measures to further improve energy intensity in Ethiopia could be to change the light bulbs to those with lower voltage (use of LED bulbs in lightening), consumption and minimization of energy waste, capacity building and public awareness towards energy savings. As shown in Figure 6, energy intensity effect has dwindled over the last few decades. Especially during 2004-2005 and 2014-2015 when energy intensity effect decreased significantly, indicating its slowing effect on CO2 emissions during these periods. Moreover, the energy intensity effect coefficient has been coincidental with the energy intensity effect, indicating its fluctuating relative impact on net carbon emissions has remained somehow similar. This means the share of energy intensity in 1990 has not changed much in 2017 as well. For the entire study period, the energy intensity effect coefficient decreased from 0.32 in 1990 to −0.26 in 2017, a substantial change in its overall impact share. With heavy reliance on biomass and waste for energy, and the lack of up-todate energy technologies, Ethiopia needs to further improve its energy intensity to curb rising carbon emissions. To this end, hydropower is could be an important source of clean renewable energy in Ethiopia. Acknowledgement is made of the improved cook-stove initiative, efficient lighting systems and the universal electrification, which are promising efforts in Ethiopia to improve energy intensity [22]. Other measures to further improve energy intensity in Ethiopia could be to change the light bulbs to those with lower voltage (use of LED bulbs in lightening), consumption and minimization of energy waste, capacity building and public awareness towards energy savings. 
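The effect terms discussed in these sections come from a Kaya-identity decomposition of emissions, C = P · (G/P) · (E/G) · (F/E) · (C/F), where P is population, G is GDP, E is TPES, F is fossil fuel supply, and C is CO2. As a rough illustration of how such additive effects can be computed, the sketch below implements a log-mean Divisia index (LMDI-I) decomposition; the input numbers are made up for demonstration and are not Ethiopia's actual data:

```python
import math

def lmdi_effects(y0, y1):
    """Additive LMDI-I decomposition of the change in CO2 emissions
    between two years, based on the Kaya identity
        C = P * (G/P) * (E/G) * (F/E) * (C/F).
    y0, y1: dicts with keys P (population), G (GDP), E (energy supply),
    F (fossil fuel supply), C (CO2 emissions).
    Returns each factor's contribution; contributions sum exactly to C1 - C0.
    """
    def factors(y):
        return {
            "population":         y["P"],
            "affluence":          y["G"] / y["P"],
            "energy_intensity":   y["E"] / y["G"],
            "fossil_fuel_mix":    y["F"] / y["E"],
            "emission_intensity": y["C"] / y["F"],
        }
    f0, f1 = factors(y0), factors(y1)
    # Logarithmic mean weight L(C0, C1) = (C1 - C0) / ln(C1 / C0)
    L = (y1["C"] - y0["C"]) / math.log(y1["C"] / y0["C"])
    return {k: L * math.log(f1[k] / f0[k]) for k in f0}

# Purely illustrative data for two years (not Ethiopia's statistics):
y1990 = {"P": 48.0, "G": 12.0, "E": 15.0, "F": 1.0, "C": 3.0}
y2017 = {"P": 105.0, "G": 80.0, "E": 40.0, "F": 5.0, "C": 15.0}

eff = lmdi_effects(y1990, y2017)
total = sum(eff.values())
print({k: round(v, 3) for k, v in eff.items()})
# The additive contributions sum exactly to the total change, ΔC = 12.0
print(round(total, 3))
```

A negative contribution (here, energy intensity) means the factor offset part of the emissions growth, which is exactly how the energy intensity effect behaves in the study's decomposition.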
Fossil Fuel Mix Effect and Its Coefficient

The fossil fuel mix effect refers to the proportion of fossil fuels in TPES, which is an important factor in determining the changing impact of fossil and non-fossil resources on carbon emissions. Over the study period, Ethiopia's share of fossil fuels has been unsteadily rising, as seen in Figure 7. As presented in Table 2, the fossil fuel mix effect has been the second-largest driver (after the population effect) of rising CO2 emissions. Results for the fossil fuel mix effect and its coefficient for CO2 emissions in Ethiopia for 1990-2017 are shown in Figure 7.

As shown in Figure 7, the fossil fuel mix effect mostly fluctuated during the study period, with a peak in 2014. This indicates that the fossil fuel mix effect has been a uniform driver of carbon emissions in the country, and extraordinarily little structural change has occurred to minimize it. Although there were periods when the fossil fuel mix effect helped slow down CO2 emissions, its overall impact has been positive. Moreover, the fossil fuel mix effect coefficient has followed a trajectory similar to the energy intensity effect, indicating a strong coupling of the two factors. In the near future, alternative energy resources such as wind, solar, bioenergy, and geothermal could be developed to support the existing hydroelectric resources. Short-term measures could include the use of fuel-efficient on-road vehicles and reduced travelling per person per car (e.g., carpooling and sharing) in order to minimize the country's CO2 emissions from fossil fuel combustion.

Emission Intensity Effect and Its Coefficient

The emission intensity effect refers to CO2 emissions per unit of fossil fuels consumed, which reflects the changing energy mix and technological advancement. With increasing demand for fossil fuels in Ethiopia as the population grows, CO2 emissions per unit of fossil fuel consumed have increased, indicating higher emissions now compared to previous years. This can be partly attributed to the increased use of coal and petroleum fuels compared to natural gas. Moreover, ageing energy infrastructure and mobile sources (such as vehicles) also have a negative effect on Ethiopia's emission intensity. As shown in Table 2, the impact of the emission intensity effect on CO2 emissions was negative during the late 1990s and positive during the rest of the period. Results for the emission intensity effect and its coefficient for CO2 emissions in Ethiopia during 1990-2017 are shown in Figure 8. Its impact (Table 2) on rising carbon emissions is less prominent in comparison with the other positive drivers. This means that current emission intensity levels are less harmful to CO2 emission levels than the economy, population, and fuel mix effects. Nonetheless, attention must be paid to minimizing emission intensity through innovative structural changes. With increasing carbon emissions from fossil fuels, the emission intensity effect coefficient dropped slightly during 1990-2017, indicating a diminishing impact share for this determinant.

Policy Implications and Recommendations

In view of the present results and Ethiopia's target of limiting its net GHG emissions to 145 Mt CO2e by 2030, it is pertinent to draw up some policy insights based on this study and make key recommendations for the future. At the country level, rising CO2 emissions and air pollution issues have made it necessary for Ethiopia to draw up strategies to combat these environmental adversities. In addition, although the Government of Ethiopia has put in place a number of strategies and programs aimed at enhancing adaptive capacity against climate change, reducing the country's vulnerability to CO2 emissions remains a great challenge. Policy initiatives such as CRGE and GTP now focus strongly on agriculture, forestry, renewable energy, and advanced technologies to develop a green economy. In addition, issues related to the environment, forests, and climate change are being actively discussed at the national level. During the last two decades, the major sources of emissions have been shifting: formerly, emissions came mainly from the agricultural sector (including livestock, soils, and forestry), but currently a large portion comes from the industrial sector (including manufacturing and building construction). Some of the important policy implications based on the results of the study are outlined below.

• From the population standpoint, training and capacity-building programs could be implemented regarding green issues and carbon emissions.
These can be complemented by increasing public awareness of energy savings and conservation to curb the rising carbon emissions and poor air quality issues currently faced by the country.
• Economic growth must be sustainable in nature. This means that renewable energy resources should be promoted at the national level and low-carbon economic growth should be part of the national economic development agenda.
• The use of clean and renewable energy needs to be encouraged at all levels of society. For example, the use of efficient cook-stoves instead of fuelwood could be a good initiative, especially for regional communities and suburban populations.
• Leapfrogging to modern, energy-efficient technologies in the transport and industrial sectors could support the achievement of the 2030 carbon mitigation targets if adequate policy decisions are taken.
• From the energy intensity viewpoint, more effort could be directed at generating higher GDP per unit of energy consumed. This could be done by phasing out energy-intensive sectors and promoting high-end production of finished goods and services. This, however, could involve multinational and regional cooperation with industrialized economies so that technology transfer materializes.
• From a fossil fuel mix effect perspective, energy-efficient strategies in industry (such as industrial symbiosis and waste-to-energy), housing (such as LED lighting, smart lighting, and green construction), transportation (such as clean fuels and emission control systems in vehicles), and agriculture (such as solar-powered grids and rainwater harvesting) could be encouraged from a policy perspective. In addition, alternative sources of energy (such as geothermal, wind, and solar) could also greatly help curb GHG emissions at the national level.
• From an emission intensity perspective, improvements could be achieved in all sectors of the economy.
For instance, in the agricultural sector, best farm practices for improving crop yield and livestock production could create co-benefits such as higher food security and reduced carbon emissions.

Conclusions

With a fast-growing economy such as Ethiopia's, there are bound to be some adverse effects on the environment. Ethiopia has so far achieved plausible economic growth, especially in the last two decades, but the environmental cost of this economic progress is also quite high. This study examined the major drivers, based on the Kaya identity, of rising CO2 emissions from fossil fuel consumption in Ethiopia from 1990 to 2017. Important outcomes of this study are presented below.
• From an analysis of the results obtained, the population of Ethiopia grew by 122% during 1990-2017 and its GDP grew by 385%, while CO2 emissions increased by 450%, portraying a true picture of economic buoyancy at the cost of massive carbonization.
• Based on the decomposition analysis, the major influencers of rising CO2 emissions in Ethiopia were the economy effect (49.1%), followed by the population effect (42.7%) and the fossil fuel mix effect (40.3%). However, the emission intensity effect (14.5%) was roughly four times less harmful than the economy effect.
• The only negative driver of CO2 emissions was the energy intensity effect, which played the greatest role in mitigating rising carbon emissions in the country during 1990-2017.
• Based on the effect coefficient analysis, the shares of the energy intensity and emission intensity effects have been declining in recent years, while the impact shares of the population, economy, and fossil fuel mix effects have been on the rise, meaning they could further increase carbon emissions unless mitigation strategies are adopted.
These results, and the policy implications discussed in this article, could well be used as an instrument to promote low-carbon and sustainable economic growth in Ethiopia and other emerging countries of the world.
The Level of Preparedness and Response of Nonprofit Organizations in a Pandemic Crisis: An Exploratory Qualitative Research

A crisis can occur with little or no warning, anywhere, and at any time. Crisis management seeks to help organizations cope with specific, unexpected, and non-routine events that create high levels of uncertainty and threat. Although the crisis management literature is plentiful regarding the business sector, little has been written on this subject when it comes to nonprofit organizations (NPOs). This article aims to identify the level of crisis management preparedness and response of northern Portuguese NPOs in the context of the COVID-19 pandemic, with enormous numbers of infections and deaths, mainly in the senior segment. A qualitative approach is developed through an exploratory and explanatory study based on six semi-structured interviews carried out in May 2020. The findings suggest that NPOs were not prepared and had no planning in place to face a crisis; they simply reacted by following the guidelines of official bodies and creating contingency plans oriented by these entities. Further, this study argues that NPOs need to actively engage in the everyday maintenance and updating of crisis prevention activities to build organizational security, transparency, and accountability. The study adds to previous research on crisis management in NPOs by proposing the identification and exploration of a set of activities that enables an accurate assessment of crisis management strategies within the context of NPOs during an emergency event.

Introduction

Organizations operate in highly volatile environments (Spillan and Crandall, 2002). As the environment becomes increasingly complex, the crises that organizations face will also increase, not only in extent but also in impact (Spillan, 2003). As Mitroff and Anagnos (2001, p. 3) state, "crises have become an inevitable, natural feature of our everyday lives".
The demands of day-to-day operations and crisis management are particularly important and challenging, and organizations need to implement crisis management plans and create teams to achieve business continuity (Spillan, 2003). The level of crisis preparedness, as well as the ability to detect crises at an early stage, is crucial (Schwarz and Pforr, 2011). For decades, planning and crisis management have been recognized as important areas of management by both practitioners and academics (Spillan, 2003). According to Caponigro (2000) and Spillan (2003), crisis management is the function that works to minimize the impact of unexpected, unfortunate, and catastrophic events and helps the organization gain control of the situation. Crisis management, the impact planning process, and crisis mitigation are important strategic concerns that must be incorporated into an organization's overall planning process (Spillan and Crandall, 2002). The ability to manage a crisis can mean the difference between survival and disaster (Spillan and Crandall, 2002); however, some organizations consider crisis planning and management important, others less so (Spillan, 2003). The extent to which NPOs are strategically prepared to deal with crises remains unknown (Schwarz and Pforr, 2011). The literature on crisis management is abundant, but little has been written on the subject regarding NPOs, given that few studies have explored crisis management in NPOs (Spillan, 2003). Similarly, the crisis communication preparedness of NPOs has received little research attention (Schwarz and Pforr, 2011): although crisis communication remains a hot topic in the public relations literature, strategic responses to crises in NPOs have drawn little attention. This may indicate that many managers of these organizations are not aware of, or ignore, the risks and vulnerabilities that exist in their organizations (Spillan and Crandall, 2002).
Most of the crisis management literature addresses the profit sector. However, NPOs must also plan for the unthinkable (Spillan and Crandall, 2002). Addressing this literature gap, the present qualitative study aims to improve understanding of crises in NPOs. Specifically, it intends to capture the critical issues of concern to leadership, the level of preparedness for crises, and, where NPOs lack preparedness, the reasons or motives for the lack of preparation and planning. Finally, this study explores the activities and practices that can expand crisis management planning and prevention. The article proceeds as follows. First, with the research background in mind, it separates crisis management in general from crisis management planning in the NPO sector. Doing so allows the knowledge of this field to be adapted to this specific context, covering themes such as types of crises and their phases. Second, it presents the methods carried out in this phase of the study and clarifies the data collection process. It then examines and discusses the preliminary results obtained in this exploratory field study. Finally, conclusions and implications for further research are drawn.

Crisis Management

With the perspective that time allows, today one can easily identify the 1960s as the beginning of the literature on organizational crises (Mendes and Pereira, 2006). One of the first authors to write about this subject was Charles Hermann, in 1963, and his concern was to analyze the consequences that certain disruptive phenomena, which he called crises, had on the viability of organizations. This author offered an early definition of the concept (Hermann, 1963). Fink (2002, cit. in Wrigley, Salmon and Park, 2003), one of the leading authors in the field, conceptualizes a crisis as something that impacts an organization, positively or negatively, and as something (a time, phase, or event) that is decisive or crucial. Devlin (2007, p.
5) refers to a crisis as "an unstable time for an organization, with a distinct possibility for an undesirable outcome. This undesirable outcome could interfere with the normal operations of the organization, it could damage the bottom line, it could jeopardize the public image, or it could close media or government scrutiny". Seeger et al. (2003, cit. in Jordan, Upright and Tice-Owens, 2016, p. 162) add to the definition of a crisis as "a specific, unexpected and nonroutine organizationally based event or series of events which creates high levels of uncertainty, and the threat or perceived threat to an organization's high priority goals". A crisis begins like the onset of an illness that gradually worsens: it draws the organization's attention, disrupts the business routine, and consequently threatens the organization's reputation and financial viability (Fink, 2002, cit. in Wrigley, Salmon and Park, 2003). Pauchant, Mitroff, and Ventolo (1992) and Ulmer (2001) also share the crisis trilogy developed by Fink (2002, cit. in Wrigley, Salmon and Park, 2003): disruption, threat, and potentially negative consequences for the organization. The authors highlight that a crisis is a serious and critical phase in the evolution of things or situations. It is a rupture, a disturbance in the organization's balance. It is a turning point characterized by great instability that can result in undesirable consequences, affect the organization's reputation, and hence produce negative public notoriety. Although crises can arise in infinite sizes, shapes, intensities, complexities, uncertainties, and magnitudes (Eriksson and McConnell, 2011), in the opinion of Marcus and Goodman (1991), different types of crises can be distinguished, such as accidents, scandals, and product safety and health incidents.
According to Devlin (2007), there is a panoply of types of crises, which can range from fires, floods, tornadoes, bombings, and earthquakes to product failures, product market shifts, product safety issues, incidents that result in a poor image or negative reputation, financial problems, and so on. Regarding the types of crises, it should be noted that any organization is sensitive to an endless number of crises. Accordingly, there is an immense number of classifications of crisis events in the literature. Crisis management researchers have classified crises into 2x2 matrices (e.g., Meyers and Holusha, 1986; Marcus and Goodman, 1991; Coombs and Holladay, 1996), through cluster analysis (Pearson and Mitroff, 1993), and by categories (Spillan, 2003; Devlin, 2007; Crandall, Parnell and Spillan, 2014). Also, recommendations for crisis management usually take a developmental-stage approach (Jordan, Upright, and Tice-Owens, 2016), as models range from three to five stages or phases. Because the world, and in particular the North of Portugal, is still in a pandemic situation, this study focuses on the first two phases of Coombs (2012) and Coombs and Laufer (2018): the pre-crisis phase and the crisis phase. In this sense, the pre-crisis phase involves prevention of, and preparation for, crises to minimize damage to the organization (Coombs and Laufer, 2018). This stage includes activities such as signal detection and prevention, which include knowing potential areas of vulnerability, evaluating potential crisis types, preparing a crisis management team, selecting and training an organizational spokesperson, developing a crisis management plan, and preparing organizational communication systems (Coombs, 2012). The crisis management plan (CMP) developed during the pre-crisis stage, as some advocate (Coombs, 2012), should explain whom to contact (Jordan, Upright, and Tice-Owens, 2016).
Coombs (2012) agrees that social media has implications for crisis management and plays a role in it, ranging from pre-crisis monitoring of warning signs to post-crisis communication. The crisis phase, in turn, represents the response to the crisis by the organization and its stakeholders (Coombs and Laufer, 2018). The crisis response is what management does and says after the crisis hits (Coombs, 2012) to limit its effects. Pearson and Mitroff (1993, p. 53) call this phase "damage containment". Effective management of this phase must involve plans to prevent a localized crisis from affecting other uncontaminated parts of the organization or its environment, for example, through evacuation plans, procedures for neutralization, or damage containment mechanisms and activities (Pearson and Mitroff, 1993).

Crisis management planning in the NPOs sector

Despite being among the least understood segments of society, NPOs are among the most crucial (Waters, 2014), and they are not immune to crises (Spillan, 2003; Wrigley, Salmon and Park, 2003; Schwarz and Pforr, 2011; Sisco, 2012; Jordan, Upright and Tice-Owens, 2016). NPOs play an increasingly influential role in economies and societies as a catalyst for new approaches and service provision, and as a crucial actor in social and economic life (Salamon and Anheier, 1992; Spillan, 2003; Crandall, Parnell and Spillan, 2014). NPOs are extremely vulnerable organizations in times of crisis (Sisco, 2012). Patterson and Radtke (2009, cit. in Jordan, Upright and Tice-Owens, 2016) mention that nonprofit organizations face one of two types of crises: emergencies and controversies. On one hand, an emergency may result in injury to individuals, damage to or loss of facilities, financial loss, or lapses in services (Patterson and Radtke, 2009, cit. in Jordan, Upright and Tice-Owens, 2016).
Emergencies differ from routine events in terms of critical and timely information requirements and a high level of uncertainty (Kapucu, 2007). On the other hand, controversies may damage an organization's reputation because of fraud or accusations, legal difficulties, or challenges to the integrity and effectiveness of organizational leadership (Patterson and Radtke, 2009, cit. in Jordan, Upright and Tice-Owens, 2016). According to Jordan, Upright, and Tice-Owens (2016), during crises, NPOs face increased pressure for accountability through different organizational practices, as the organization, its stakeholders, and the community at large try to survive the crisis. Furthermore, these entities should be prepared for common potential crises, considering that time spent preparing helps leaders navigate through a crisis (Coombs, 2012).

Methods

This investigation uses a two-part data collection process. Qualitative and quantitative data will be connected during the phases of research, following a mixed-method approach (Creswell, 2003). Based on the literature review, it started with exploratory qualitative data collection, whose analysis and results will inform the quantitative phase developed subsequently. This study is the beginning of a larger research project. The qualitative study included six in-depth, semi-structured, face-to-face online interviews with key informants of six northern Portuguese NPOs, covering 36 different social facilities, carried out in May 2020 (see Table 1). The interviews were semi-structured; thus, questions prepared in advance were mixed with questions that emerged during the interview. Semi-structured questions were chosen since they have proven efficient when using a case study approach (Creswell, 2003).

Lara SANTOS and Luísa LOPES, Journal of Administrative Sciences and Technology, DOI: 10.5171/2021.472658
The interview duration was approximately 40 minutes per interview. Both authors were present in all interviews. All interviews were taped, transcribed, and analyzed. Data were categorized, and transcripts were repeatedly read during this analysis. The sampling for the interview process was carried out by convenience, encompassing a total of six executive directors of NPOs in the North of Portugal, according to the list in Table 2. The respondents are familiar with the context given the extent of their experience. The NPOs they represent deal with a variety of social issues, such as drug dependency, the elderly, childhood, and homelessness, among others.

Results and Discussion

Based on the script (see appendix) and considering the research objectives, it was possible to outline an analysis grid (Table 3) for the interviews carried out with the six executive directors. The presentation of the data will be structured and illustrated by explanatory excerpts of the positions taken by the interviewees about each of the categories and sub-categories inspired by the work of Coombs and Laufer (2018):

• Crisis definition
• Types of crises
• Critical themes before and at the beginning of a crisis
• Level of preparedness
• Motives for the lack of preparation and planning
Crisis phase (response)
• Reactive or proactive actions
• Contingency planning practices
• Procedures and activities of crisis management planning and prevention

The following section of the article presents, in a structured way, some of the data and information obtained through the interviews, for each of the categories under analysis.

Crisis definition

When people think about crises, they think about unprecedented events that result in catastrophic damage and mass casualties. However, not all crises fall into this category. The problem with associating only catastrophic events with crises is that they sound so dramatic that most leaders assume an "it can't happen to us" mentality (Crandall, Parnell & Spillan, 2014).
The interviewees mention this clearly: "We think it only happens to others and it doesn't happen to us." (CR)

Consistent with the interviews, the literature reveals that researchers, when defining a crisis, use words such as 'surprise, insufficient information, event escalation, loss of control, intense scrutiny, panic, urgency, loss, disruption of normal activities, and threat to financial stability, creditability, and reputation' (Coombs, 2002, cit. in Jordan et al., 2016). For instance: "Crisis is everything that differs from what is normally functioning and always causes a moment of adaptation." (PA) "This crisis that fell on us ... has a very heavy burden ... so hell fell on the institution, literally overnight." (JR)

Some respondents stated that this crisis was unpredictable; they felt that it constrained and changed the course of events, organizational models, and practices, as follows: "A crisis is always something that can condition it, so it's always in the negative aspect. The crisis always changes the course of events, changes the organizational models and changes practices, therefore, changes the regular functioning of the institutions, forces us to, in a very short period, reformulate everything and get out of the box, so we cannot be in a functionalist structure, we have to rethink all the strategies, all the organizational models, the practices, the intervention models and adapt them to the moment of the crisis." (JB)

Types of crises

Although crises can arise in infinite sizes, shapes, and magnitudes, there is no agreement on a universal classification of crisis events (Devlin, 2007; Marcus & Goodman, 1991; Pearson & Mitroff, 1993; Spillan & Crandall, 2002). Some respondents mentioned unpredictable crises: "We have an unpredictable crisis, and it is catastrophes, disasters, sabotages and other things that can happen." (CR) "Financial crisis, health or public health crisis (...)."
(AG) "The types of crises that can negatively impact a nonprofit organization are those of a political dimension, an economic-financial dimension and a disease dimension." (JB)

Critical issues before and at the beginning of Covid-19

Regarding previous critical issues, it is important to remember that issue management represents a valuable proactive method of potential threat analysis, understanding external issues potentially affecting the organization and its stakeholders (Coombs, 2012; Mitroff, 1994, cit. in Jordan et al., 2016). For the respondents of this study, previous critical issues are related to low economic capacity and scarce financial resources, high dependence on the supervisory entity, establishment of partnerships with support institutions and networking, lack of equipment and human resources, unemployment, demographic aging and changes, as well as ignorance and devaluation of risk and planning. Some interviewees clearly highlighted this: "Critical issues before this?... any situation that affects the well-being of the population… unemployment, of course, but has to do with a financial crisis. There is an aspect that harms us immensely, which is the desertification of the interior, the demographic issue (...). The lack of revenue, of course, is reflected in the financial health of the institution." (AG) "(...) in an initial phase, there was a sharp devaluation of risk, perhaps associated with ignorance." (JB) "(...) we are doing what the social security institute asks us to do." (CR) "(...) we can solve or help to solve because the institution alone does not solve the problem." "We have partnerships (...)." (AC) "We have a series of partnerships (City Council, Fire Department, Civil Protection, Social Security, Health Delegate) that in such situations, they are always available."
(PA)

Also, respondents mentioned that at the beginning of the crisis, they were exposed to public opinion; attacks on the institution's good name and reputation arose, and they depended on the goodwill of employees, confirming that NPOs are extremely vulnerable in times of crisis (Sisco, 2012). Literature suggests that the more an organization is held responsible for the crisis, the more accommodative a reputation repair strategy must be to protect the NPO's reputation (Coombs & Holladay, 1996). "But right now, everyone is looking at institutions." "(...) some attacks that we have been feeling that jeopardize our ability as managers and even as an organization, to deal with this." (JR) "(...) we are not omnipotent, and we do not have infinite resources." "Human resources are never enough. But we tried to manage (...) we formed two teams to work in a mirror regime (...) and we managed to articulate all these people. Of course, it depends on their goodwill." (AC)

Level of preparedness for the crisis

Concerning emergency preparedness, crisis management involves four interrelated factors: prevention, preparation, response, and revision (Coombs, 2015, cit. in Coombs & Laufer, 2018). All respondents stated that they were not prepared at all; no one could plan for a pandemic crisis like Covid-19, given its size and seriousness. Interviewees stated this clearly: "Predict a situation like this, with this magnitude? Nobody is prepared for a hurricane. (…) What is normally required, we try to comply and try to have minimally legal things." (PA) "Nowhere, a situation like this was predicted (…) it never crossed my mind, going through the situation I had (...) in March, I had very bad days! (…). Before Covid-19, we had no plans." (AG) "We were neither prepared nor mentalized."
(AC)

Reasons for the lack of preparation and planning

According to Pearson and Mitroff (1993), the preparation/prevention stage includes the creation of a crisis team as well as crisis training and simulation exercises. In this study, interviewees point out the following reasons for the lack of preparation and planning: issues of leadership, non-professional management, lack of human resources, insufficient training in management and planning, as well as historical reasons, among others. "Have I delegated the technical staff responsible for each valence (...) to specify it in detail? What are the steps? It doesn't pass me by." (PA) "We have "big ships" coping with non-professional management for years. (…) We had contingency plans; we followed the guidelines that came from different public institutions. (…) We had some human resources that allowed us to organize, creating offices/teams." (JR) "We are talking about structures that have an administrative and bureaucratic system that is often very slow, which obeys very strict criteria that are framed in specific legislation. (…) There is also a devaluation of planning, on one hand, and on the other, there is little training and awareness in the areas of planning. (…) NPOs do not think like market companies (...) are dependent on subsidies and state funding. (…) They are controlled by the State that gives NPOs guidance. Not receiving guidelines, these structures consider that there are no reasons to do so, they are not obliged to do so, and therefore, they do not do it. (…) structures that work a lot from volunteering and with people's solidarity. (…) People have no training in management and planning."
(JB)

Crisis phase (response)

In this phase, the purpose is "damage containment", to limit the effects, with detailed plans for preventing a localized crisis from affecting other uncontaminated parts of the organization through an evacuation plan, procedures, mechanisms, or activities (Pearson & Mitroff, 1993, p. 53).

Reactive or proactive actions and contingency planning practices

Spillan and Crandall (2002) and Spillan (2003) state that there are two ways to face a crisis: ignore the signals and react to the crisis, or prepare to prevent or manage it. From the results obtained, it is possible to confirm that the NPOs in this sample tend to respond and react to crisis events, simply following the guidelines of official bodies and creating contingency plans oriented by these entities. The following sentences demonstrate this: "It was reactive in the sense that we were reacting, we started adapting what were the guidelines that were left by the official and public organisms." (JR)

Procedures and activities of crisis management planning and prevention

As for this phase, the respondents mentioned that there is a need to create entry and exit circuits, operationalize mirror teams, reintroduce hygienic-sanitary practices, implement internal and external communication teams and offices, create segmented routing, reception, and transport protocols, and list and prepare different volunteer platforms (of health care and support staff) along with recruitment mechanisms and partnerships to provide an emergency human force. "Regardless of the contact with the family and communication with the population in general, it must be well defined who does this, how they do it, what information should be conveyed and how often, in each type of crisis." (JR) "Updated phone book"; "signage"; "routines"; "mirror teams." (PA) "Complaints management"; "A specific plan for each social facility."
(CR)

Conclusions and Future Research

In summary, the data collected and discussed here echo a reality, even if only partially, shared by researchers who focus on the subject under study. Despite the exploratory nature of these results, most of the arguments of recent literature are confirmed. Pre-crisis activities were suggested, namely: a plan for each social facility, and the preparation of different crisis teams and offices (e.g., internal, external, and digital communication, recruitment, and volunteering). It must be well defined who does what, how it is done, what information should be conveyed, and how often. Other processes should be practiced continuously, such as systems of alert and trend awareness for a proper risk evaluation. For crisis preparation, specific plans should include complaints management mechanisms, in- and outflows, alternative teams, reception, routing and transport protocols, proper signage, routines, and the reintroduction of hygienic practices. Specifically for the pre-crisis phase, future research suggests two fundamental areas: risk assessment, diagnosing crises' vulnerabilities, and crisis management plans as a primary tool for crisis managers (Coombs & Laufer, 2018). The lack of preparedness can be further analyzed with regard to its reasons and explanatory mechanisms. Further research could also confront leadership perspectives with the perception of the technical staff and its operational lens, as they have a fundamental role to play and are immersed in each specific context.
Vaccination With a FAT1-Derived B Cell Epitope Combined With Tumor-Specific B and T Cell Epitopes Elicits Additive Protection in Cancer Mouse Models

Human FAT1 is overexpressed on the surface of most colorectal cancers (CRCs) and, in particular, a 25 amino acid sequence (D8) present in one of the 34 cadherin extracellular repeats carries the epitope recognized by mAb198.3, a monoclonal antibody which partially protects mice from challenge with human CRC cell lines in xenograft mouse models. Here we present data in immune-competent mice demonstrating the potential of the D8-FAT1 epitope as a CRC cancer vaccine. We first demonstrated that the mouse homolog of D8-FAT1 (mD8-FAT1) is also expressed on the surface of the CT26 and B16F10 murine cell lines. We then engineered bacterial outer membrane vesicles (OMVs) with mD8-FAT1 and showed that immunization of BALB/c and C57bl6 mice with engineered OMVs elicited anti-mD8-FAT1 antibodies and partially protected mice from challenge with the CT26 and EGFRvIII-B16F10 cell lines, respectively. We also show that when combined with OMVs decorated with the EGFRvIII B cell epitope or with OMVs carrying five tumor-specific CD4+ T cell neoepitopes, mD8-FAT1 OMVs conferred robust protection against tumor challenge in C57bl6 and BALB/c mice, respectively. Considering that FAT1 is overexpressed in both KRAS+ and KRAS− CRCs, these data support the development of anti-CRC cancer vaccines in which the D8-FAT1 epitope is used in combination with other CRC-specific antigens, including mutation-derived neoepitopes.

INTRODUCTION

Human FAT atypical cadherin 1 (FAT1) is a type 1 transmembrane protein carrying 34 cadherin repeats, five EGF-like repeats, a laminin A-G domain in the extracellular region, and a cytoplasmic tail (1).
The protein undergoes proteolytic cleavage by Furin and is predicted to be further cleaved by γ-secretase so that its intracellular domain (ICD) can translocate into the nucleus and directly activate cell signaling. FAT1 ICD also interacts with Ena/VASP and Scribble, promotes actin-mediated cell migration, and inhibits YAP1-mediated cell proliferation (2). In addition, FAT1 ICD also interacts with β-catenin and prevents its translocation to the nucleus (3). Alteration of FAT1 expression and function has been associated with several human cancers. Although its role in tumorigenesis is still controversial, in some cancers such as acute myeloid leukemia (AML), pre-B acute lymphoblastic leukemia (ALL), T-ALL, and hepatocarcinoma, FAT1 has been described to act as a tumor promoter (4, 5). FAT1 up-regulation is an unfavorable prognostic factor for precursor B-cell acute lymphoblastic leukemia patients (4), and recent studies in melanoma and pancreatic cancer have demonstrated that FAT1 undergoes aberrant processing and altered localization compared to normal cells (6). Recently, Pileri et al. (7) discovered that FAT1 is overexpressed in a large fraction of early and late stage colorectal cancers (CRCs). Moreover, a murine monoclonal antibody (mAb198.3), recognizing an epitope present within a 25 amino acid region of cadherin domain 8 (hereinafter D8-FAT1), was shown to selectively bind the surface of different FAT1-positive CRC cell lines. Moreover, all CRC-derived liver metastases tested so far are highly positive for mAb198.3 staining. Interestingly, using extensive immunohistochemistry analysis, the same authors demonstrated not only that mAb198.3 stained 93% of 642 CRC samples tested but also that the antibody did not recognize a large panel of human healthy tissues. This strongly suggests that FAT1 can be exploited as a novel target for CRC immunotherapy.
Indeed, it was demonstrated that in immunocompromised mice challenged with human colon CRC cells, mAb198.3 accumulated at tumor sites and partially inhibited tumor growth (7). In our laboratories we have been exploiting bacterial outer membrane vesicles (OMVs) as a vaccine platform (8). OMVs, 20-300 nm closed spheroid particles (9), are particularly attractive for vaccine applications for three main reasons. First, they carry many Microbe-Associated Molecular Patterns (MAMPs), which work synergistically and stimulate potent Th1-skewed immune responses (10-12). Second, OMVs can be easily decorated with foreign antigens/epitopes by properly manipulating the OMV-producing strains (13-17). Third, OMVs can be efficiently purified from bacterial culture supernatants (18, 19). Indeed, OMVs have already been exploited in human vaccination and represent a key component of the anti-Meningococcus B vaccine (Bexsero) currently available in Europe and the USA. The recent data demonstrating that vaccines constituted by mutation-derived CD4+/CD8+ T cell neoepitopes induce anti-tumor immune responses in both preclinical and clinical settings (20, 21) prompted us to test whether the unique properties of OMVs could be exploited in cancer immunotherapy. We already showed that immunization with OMVs engineered with the EGFRvIII tumor-specific B cell epitope (22) and with M30, a mutation-derived CD4+ T cell epitope expressed in B16F10 murine melanoma cells (23), fully prevented tumor growth in C57bl6 mice challenged with B16F10 cells expressing EGFRvIII (24). We also showed that protection was associated with the elicitation of both anti-EGFRvIII antibodies and M30-specific T cells. In this work we have investigated whether OMVs could be engineered with the D8-FAT1 domain and whether D8-FAT1-decorated OMVs could induce anti-tumor immune responses against FAT1-positive tumors.
Here we show that several murine cancer cell lines, including the colon cancer cell line CT26, expose FAT1 on their surface. We also show that OMVs can be efficiently decorated with the D8-FAT1 epitope and that D8-FAT1-OMVs induce high levels of anti-FAT1 antibodies. Furthermore, immunization with D8-FAT1-OMVs partially prevents tumor growth in BALB/c mice challenged with the CT26 cell line. Finally, we show that when combined with other cancer-specific epitopes, D8-FAT1 provides an additive effect and potentiates the overall anti-tumor immune response. Taken together, these data strengthen the association of FAT1 expression with CRC and other tumors and pave the way to the use of the D8-FAT1 epitope in cancer immunotherapy, particularly in association with other tumor-specific epitopes.

FAT1 Is Expressed in Murine Cancer Cell Lines

As already pointed out, FAT1 is over-expressed in most human CRCs and the 25 amino acid hD8-FAT1 domain recognized by mAb198.3 is exposed on cancer cells but not on healthy human tissues. Therefore, hD8-FAT1 represents a novel tumor-specific epitope which could potentially be exploited in immunotherapeutic vaccines. A prerequisite to test this hypothesis in tumor models of immunocompetent mice is that D8-FAT1 is expressed on the surface of syngeneic murine cancer cell lines. Murine FAT1 (mFAT1) shares 88% identity with hFAT1 and, in particular, 21 out of the 25 amino acids of hD8-FAT1 are conserved in murine D8-FAT1 (mD8-FAT1) (Figure 1A). However, to the best of our knowledge, little was known about FAT1 expression and compartmentalization in mouse cancer cells. To investigate this, we first purified total mRNA from CT26, B16F10, LLC, and Tramp C1 cells, and the presence of FAT1-specific transcripts was analyzed by qRT-PCR. As shown in Figure 1B, FAT1-specific mRNA was present in all cancer cells and, interestingly, FAT1 mRNA was particularly abundant in the CT26 cell line, a colon cancer cell line derived from BALB/c mice.
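The fold differences in Figure 1B are relative FAT1 mRNA levels normalized to the B16F10 line. The authors do not state their exact quantification scheme; a minimal sketch of the standard 2^-ΔΔCt relative-quantification method, using purely hypothetical Ct values and a hypothetical housekeeping gene, would be:

```python
def fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative expression by the 2^-ddCt method (an assumption here;
    the paper does not specify its quantification formula).

    ct_target / ct_ref: Ct values of the gene of interest (e.g., FAT1)
    and a housekeeping reference gene in the sample of interest.
    ct_target_cal / ct_ref_cal: the same values in the calibrator sample
    (here, hypothetically, the B16F10 line used as the baseline).
    """
    d_ct_sample = ct_target - ct_ref        # normalize sample to reference gene
    d_ct_cal = ct_target_cal - ct_ref_cal   # normalize calibrator likewise
    dd_ct = d_ct_sample - d_ct_cal
    return 2.0 ** (-dd_ct)

# Illustrative numbers only: a sample whose normalized FAT1 Ct is 2 cycles
# lower than the calibrator's comes out ~4-fold enriched.
print(fold_change(22.0, 18.0, 24.0, 18.0))  # 4.0
```

Because amplification roughly doubles the product each cycle, every one-cycle reduction in normalized Ct corresponds to about a two-fold higher starting transcript level.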
We next investigated the presence of the mD8-FAT1 domain on the surface of B16F10 and CT26 cells by flow cytometry analysis using mAb198.3. Neither cell line was recognized by the monoclonal antibody (Figure 1C). This negative result could be attributed either to a difference in cellular surface expression of FAT1 between human and mouse cancer cells or to the inability of mAb198.3 to bind mD8-FAT1 due to the four amino acid difference present in the D8-FAT1 sequences of the two species. To discriminate between the two possibilities, polyclonal antibodies against a synthetic peptide corresponding to the 25 amino acid sequence of mD8-FAT1 were generated in rabbits, and the serum was used to detect mD8-FAT1 surface expression by flow cytometry. As shown in Figure 1C, the anti-mD8-FAT1 antibodies specifically bound both B16F10 and CT26 cells, FAT1 surface expression being higher in CT26, in line with the RNA data.

Figure 1. (A) Comparison between the 25 amino acid sequence of the human cadherin domain 8 (hD8-FAT1) recognized by mAb198.3 (7) and the corresponding sequence in murine FAT1 (mD8-FAT1). (B) Quantitative analysis of FAT1 mRNA in mouse cancer cell lines. mRNA was purified from different cancer cell lines and qRT-PCR was carried out to quantify FAT1-specific mRNA. Data are reported as fold differences with respect to FAT1 mRNA from the B16F10 cell line. The bars represent the means ± SD of three independent experiments. (C) Surface exposition of the mD8-FAT1 domain in B16F10 and CT26 cell lines. Cancer cells were incubated with either the mAb198.3 monoclonal antibody or with polyclonal antibodies raised against the KLH-conjugated synthetic peptide corresponding to mD8-FAT1 (A). Cells were subsequently incubated with fluorescently labeled secondary antibodies and analyzed by flow cytometry.

OMVs Can Be Decorated With mD8-FAT1

The demonstration of FAT1 expression and D8-FAT1 surface exposition in cancer cell lines prompted us to produce OMVs
decorated with mD8-FAT1, with the aim of using mD8-FAT1-decorated OMVs in mouse immunogenicity studies. To load OMVs with mD8-FAT1, a synthetic minigene encoding three copies of the 25 amino acid mD8-FAT1 domain was fused to the 3' end of the genes encoding the E. coli periplasmic Maltose Binding Protein (MBP) (25) and the Staphylococcus aureus FhuD2 lipoprotein (26) (Figure 2A). The two gene fusions were inserted into a pET plasmid under the control of the IPTG-inducible T7 promoter, and the resulting plasmids pET_MBP-mD8-FAT1 and pET_FhuD2-mD8-FAT1 were used to transform E. coli BL21(DE3)ΔompA, a strain featuring an OMV over-producing phenotype (16). After a 2-hour induction of protein expression, OMVs were purified from the culture supernatants and the accumulation of the fusion proteins in the OMVs was analyzed by SDS-PAGE. As shown in Figure 2B, protein bands corresponding to the expected molecular masses of MBP-mD8-FAT1 (51 kDa) and tri-acylated FhuD2-mD8-FAT1 (approx. 45 kDa) were clearly visible on the gel. In the case of MBP-mD8-FAT1-OMVs a second band of ∼45 kDa is also visible. This protein is likely to correspond to a degradation product, which is however still recognized by mD8-FAT1 antibodies (data not shown). MBP is a periplasmic protein, while FhuD2 is a lipoprotein which is expected to reach the outer membrane. Therefore, C-terminal fusions to MBP and FhuD2 should reside in the luminal and in the membrane compartments of OMVs, respectively. The different compartmentalization of the two fusion proteins in the OMVs was indirectly confirmed by solubilizing the OMVs with 1% Triton X-114 at 4 °C and by following the partition of the fusion proteins into the aqueous and detergent phases, which separate upon temperature shifting to 37 °C. Under these conditions, membrane proteins and lipoproteins typically partition into the Triton X-114 hydrophobic phase while periplasmic proteins partition into the hydrophilic one (17).
As shown in Figure 2C, FhuD2-mD8-FAT1 compartmentalized in the detergent phase while the MBP-mD8-FAT1 fusion partitioned into the aqueous phase. The localization of the two fusions was also analyzed by flow cytometry of BL21(DE3)ΔompA(pET_MBP-mD8-FAT1) and BL21(DE3)ΔompA(pET_FhuD2-mD8-FAT1) cells, using mD8-FAT1-specific antibodies. As shown in Figure 2C, while the antibodies bound the cell surface of the strain expressing the FhuD2-mD8-FAT1 fusion, no appreciable fluorescence shift was observed in the strain expressing the MBP-mD8-FAT1 fusion. These data also indicate that the FhuD2-mD8-FAT1 fusion not only resides in the outer membrane of E. coli but also protrudes out of the cell surface, thus making the mD8-FAT1 epitope accessible to antibody binding. This is an interesting observation, since E. coli does not expose most of its outer membrane lipoproteins, and this is often attributed to the absence of specific "flippases" that allow lipoproteins to move from the inner to the outer leaflet of the outer membrane. The fact that the FhuD2 lipoprotein is surface-exposed supports our previous observations that in Gram-negative bacteria many lipoproteins, in the absence of still poorly characterized retention signals, are "by default" destined to cross the outer membrane (17).

mD8-FAT1-OMVs Immunization Inhibits Tumor Growth in CT26-Challenged Mice

We next asked whether immunization with mD8-FAT1-decorated OMVs could elicit anti-mD8-FAT1 antibodies in mice. To this aim, BALB/c mice were immunized three times (Figure 3A) with either MBP-mD8-FAT1-OMVs (20 µg/dose supplemented with Alum) or FhuD2-mD8-FAT1-OMVs (20 µg/dose), and 1 week after the third immunization sera from each group were pooled and analyzed by ELISA using plates coated with the synthetic mD8-FAT1 peptide. As shown in Figure 3B, both immunizations induced high titers of mD8-FAT1-specific antibodies.
In line with previously published work (16), no appreciable difference was observed between titers elicited by OMVs carrying D8-FAT1 on the surface or in the lumen. Immunized animals were subsequently challenged with CT26 cells and tumor growth was followed over a period of 25 days. Both immunizations inhibited tumor progression in a statistically significant manner, and 25 days after challenge tumor volumes were ∼50% smaller than those measured in mice immunized with "empty" OMVs (Figure 3C). We also analyzed the immune cell populations in tumors from control mice and from mice immunized with mD8-FAT1-decorated OMVs. As shown in Figure 3D, tumor inhibition in mice immunized with mD8-FAT1-OMVs was accompanied by the accumulation of infiltrating CD8+ and CD4+ T cells and by a concomitant reduction of regulatory T cells (CD4+/Foxp3+) and myeloid-derived suppressor cells (MDSCs).

mD8-FAT1-OMVs Immunization Cooperates With OMVs Decorated With Other Cancer-Specific B Cell Epitopes

Because of the heterogeneity of the cancer cell population and of the immune-editing mechanisms that allow cancer cells to escape immune surveillance, to be effective cancer vaccines should be formulated with more than one tumor-specific/associated antigen. Therefore, we first tested whether mD8-FAT1 could be utilized in combination with other B cell epitopes selectively expressed in cancer cells. Several human cancers express EGFRvIII, a variant of EGFR in which a large deletion in its extracellular domain generates a 14 amino acid sequence not found in healthy tissues (22). A vaccine based on an EGFRvIII peptide was tested in glioblastoma patients with promising results, even though EGFRvIII-negative tumor cells ultimately escaped vaccine-induced protection (27). We previously demonstrated that OMVs decorated with the EGFRvIII peptide elicited specific antibodies which could inhibit the growth of a B16F10 cell line derivative expressing EGFRvIII in syngeneic C57bl6 mice (24).
Since EGFRvIII-B16F10 cells, like their progenitor B16F10, express mD8-FAT1 on their surface (Figure 4A), we tested whether the combination of mD8-FAT1-OMVs and EGFRvIII-OMVs could further enhance the anti-tumor activity of EGFRvIII-OMVs immunization in mice challenged with EGFRvIII-B16F10. Mice were immunized three times with either mD8-FAT1-OMVs (20 µg/dose), EGFRvIII-OMVs (20 µg/dose), or mD8-FAT1-OMVs + EGFRvIII-OMVs (10 µg each/dose). One week after the third immunization mice were given 10⁵ EGFRvIII-B16F10 cells and tumor growth was followed over a period of 25 days. In line with our previous results (27), at day 25 after challenge, EGFRvIII-OMVs immunization elicited a 70% reduction of tumor growth as compared to immunization with "empty" OMVs. Immunization with mD8-FAT1-OMVs elicited a protection of ∼25% (average tumor volume 630 mm³ as opposed to 850 mm³ in the control group). Such protection was lower than that observed in the BALB/c/CT26 model, likely because B16F10 cells express less mD8-FAT1 than CT26 (Figure 1C). Finally, immunization with the OMV combination almost totally prevented tumor growth, suggesting that anti-mD8-FAT1 and anti-EGFRvIII antibodies cooperate in inhibiting tumor cell proliferation and in promoting tumor cell killing.

mD8-FAT1-OMVs Synergize With OMVs Decorated With Cancer T Cell Epitopes in Protecting Mice From Tumor Challenge

Kreiter and co-workers recently reported a list of mutation-derived, CT26-specific CD4+ and CD8+ T cell "neoepitopes" and showed that immunization with RNA vaccines encoding such neoepitopes elicited robust protection in BALB/c mice challenged with CT26 tumor cells (23). We took advantage of these data to address the question of whether anti-D8-FAT1 immune responses could potentiate the cell-mediated protective activity elicited by T cell neoepitopes. To this aim, we first tested whether OMVs carrying five of the neoepitopes described by Kreiter et al.
could protect BALB/c mice from CT26 challenge. Synthetic peptides (20 µg each) corresponding to neoepitopes M03, M20, M26, M27, and M68 (23) were mixed with 20 µg of "empty" OMVs and, after challenging mice with 2 × 10^5 tumor cells, the mixture was used to immunize mice every 3 days for a total of 7 injections (Figure 5A). Tumor growth was followed over a period of 25 days. As shown in Figure 5B, immunization with peptide-absorbed OMVs inhibited tumor growth in a statistically significant manner, the average tumor size being 500 ± 94 mm^3, as opposed to 1,200 ± 103 mm^3 in control mice. To test whether the observed protection could be at least partially attributable to the elicitation of neoepitope-specific T cells, a group of five naïve mice was immunized twice, 1 week apart, with peptide-absorbed OMVs, and 5 days after the second vaccine dose the frequency of epitope-specific IFN-γ-producing T cells was measured in splenocytes stimulated with the peptide mixture. As shown in Figure 5C, which reports the analysis of IFN-γ-producing CD4+ and CD8+ T cells from two of the five immunized mice, immunization elicited both epitope-specific CD4+ and CD8+ T cells. The average frequency of CD4+ T cells and CD8+ T cells in the five mice was 0.82 ± 0.29 and 0.48 ± 0.21, respectively.

FIGURE 3 | (A) BALB/c mice were immunized three times (2 weeks apart) with OMVs from either BL21(DE3) ompA(pET_MBP-mD8-FAT1) or BL21(DE3) ompA(pET_FhuD2-mD8-FAT1) strains, and 1 week after the third immunization the animals were challenged with 2 × 10^5 CT26 cells. Tumor growth was followed over a period of 25 days. As control, a group of mice was also immunized with "empty" OMVs. (B) Anti-mD8-FAT1 titers from mice immunized with mD8-FAT1-OMVs. The day before challenge, sera from immunized mice were pooled (triangles: mice immunized with MBP-mD8-FAT1-OMVs; squares: mice immunized with FhuD2-mD8-FAT1-OMVs; circles: mice immunized with "empty" OMVs) and the anti-mD8-FAT1 titers were determined by ELISA using plates coated with synthetic mD8-FAT1 peptide. (C) Anti-tumor activity of mD8-FAT1-OMV immunizations. After challenge, tumor growth was followed by measuring tumor volume with a caliper. Animals were sacrificed 25 days after challenge. Means ± SEM are indicated. ***Indicates that the difference in tumor size between the immunized group and the control group is statistically significant with P < 0.001, while *indicates P < 0.05. (D) Analysis of immune cell populations in tumors. At day 25 from challenge, tumors were collected from sacrificed mice and the frequencies of CD4+ T cells, CD8+ T cells, MDSCs, and CD4+/Foxp3+ T cells in tumors were determined by flow cytometry. The data reported in the figure represent the means ± SD of cell populations from four tumors collected from mice immunized with either "empty" OMVs or with mD8-FAT1-OMVs (two mice from MBP-mD8-FAT1-OMVs and two from FhuD2-mD8-FAT1-OMVs) (*P < 0.05).

Having demonstrated that the five neoepitopes described by Kreiter et al. were partially protective when absorbed to OMVs, we next set up an immunization/challenge experiment involving three groups of mice (Figure 6A). Two groups received three doses (2 weeks apart) of either "empty" OMVs or mD8-FAT1-OMVs (20 µg/dose). Ten days after the last immunization the groups were challenged with 2 × 10^5 CT26 cells and tumor growth was followed over a period of 25 days. A third group was first immunized with mD8-FAT1-OMVs (20 µg/dose), challenged with 2 × 10^5 CT26 cells, and subsequently repeatedly immunized with the mixture of the five M03, M20, M26, M27, and M68 synthetic peptides (20 µg each) absorbed to "empty" OMVs (20 µg). As shown in Figure 6B, the prophylactic mD8-FAT1-OMV immunization followed by the therapeutic immunization with peptide-absorbed OMVs resulted in a 70% tumor inhibition, the average tumor size at day 25 being 312 ± 76.4 mm^3.
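The percent tumor inhibition values quoted above are simple ratios of mean tumor volumes; a minimal sketch of the calculation (the function name is ours, and the example uses the mean day-25 volumes reported for the pentatope-OMV experiment):

```python
def tumor_growth_inhibition(treated_mm3, control_mm3):
    """Percent tumor growth inhibition: TGI = (1 - T/C) * 100,
    where T and C are mean tumor volumes of treated and control groups."""
    return (1.0 - treated_mm3 / control_mm3) * 100.0

# Mean day-25 volumes for pentatope-OMV vs. control mice (500 vs. 1,200 mm^3)
tgi = tumor_growth_inhibition(500.0, 1200.0)
print(round(tgi, 1))  # prints 58.3
```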
The protection data obtained in mice immunized with D8-FAT1-OMVs indicate that, although tumor growth was markedly reduced in most of the mice, in a few mice immunization was poorly protective (Figures 6B, 3C). By contrast, the tumor size in mice immunized with the combination of D8-FAT1-OMVs and pentatope-absorbed OMVs was on average not only smaller but also more homogeneous among mice. To explain this difference, we speculated that while in D8-FAT1-OMVs-immunized mice protection was exclusively dependent on anti-D8-FAT1 antibody titers (the higher the titers, the better the protection), in mice treated with the OMV combination the antibody titers should have been less critical for protection owing to the contribution of cell-mediated immunity. To test this hypothesis, at the end of the experiment described in Figure 6, sera were collected and anti-mD8-FAT1 antibody titers were measured in each individual mouse. As shown in Figure 6D, most protected mice (tumor volume <750 mm^3) immunized with D8-FAT1-OMVs had antibody titers >1:3,500. By contrast, in mice treated with the OMV combination the same protection was achieved even when anti-D8-FAT1 antibody titers were below 1:3,500.

DISCUSSION

FAT1 was originally reported as a tumor suppressor linked to the E-cadherin and Wnt/β-catenin pathways. Previous evidence from clinical samples showed that, in the presence of wild-type FAT1, β-catenin is held at the cell membrane, whereas in several tumors, where FAT1 is inactivated by mutation or deleted, an excess of β-catenin is present in the cytoplasm. This results in the inability of the GSK3β/axin/Wtx/Apc complex to completely degrade cytoplasmic β-catenin, allowing active β-catenin to enter the nucleus. Here β-catenin functions as an activator of T-cell factor (TCF) and lymphoid enhancer factor (LEF), leading to a subset of cellular effects involving cellular adhesion, tissue morphogenesis, and tumor development (3).
However, as already pointed out, in some tumors FAT1 is up-regulated, suggesting that its Wnt/β-catenin-dependent tumor-suppressive mechanism is counterbalanced by a still poorly characterized role as a tumor-promoting factor. FAT1 was reported to be overexpressed in breast cancer (28), in melanoma (29), in leukemia (4), and in pancreatic cancer (6). Interestingly, in pancreatic cancer FAT1 was shown to be overexpressed on the surface of cancer cells together with the ADAM10 metalloprotease, which mediates FAT1 ectodomain shedding. Although the biological significance of the shed FAT1 ectodomain is unknown, it is possible that it promotes carcinogenesis by disrupting cell junctions and by promoting the up-regulation of metalloproteases, similarly to what has been proposed for the shed ectodomain of E-cadherin (30,31). Pileri et al. (7) reported that FAT1 is overexpressed on the surface of most human CRCs and of CRC-derived metastatic hepatocarcinomas. Interestingly, the same authors provided evidence of an ADAM10-dependent FAT1 shedding in the HCT15 colon carcinoma cell line, as demonstrated by the accumulation of FAT1 on the cell surface upon siRNA-mediated silencing of ADAM10 mRNA. While for many tumors the opposite roles of FAT1 as tumor suppressor and tumor promoter are at present difficult to reconcile, in the case of CRC there might be a mechanistic explanation. In virtually all colon carcinomas β-catenin degradation is hampered by defects in the Apc subunit of the GSK3β/axin/Wtx/Apc complex (32). Therefore, even if FAT1 overexpression should reduce the concentration of free cytoplasmic β-catenin, the inability to degrade it should allow enough β-catenin to reach the nucleus and activate genes involved in tumorigenesis. At the same time, the abundance of surface FAT1 and its shed extracellular ectodomain should promote carcinogenesis. In this work we wanted to investigate whether FAT1-based cancer vaccines could be potentially exploited in CRC immunotherapy.
The rationale behind this work stems from a number of experimental observations. First, a monoclonal antibody (mAb198.3) specific for a conserved amino acid sequence in the D8 and D12 FAT1 domains could bind the surface of over 90% of CRCs with affinities in the low nM range. Second, cancer cell recognition by mAb198.3 appeared rather specific: IHC analysis of 33 normal human tissues showed limited recognition of any of the tissues by mAb198.3 and, when present, the staining was confined to the intracellular compartment. Third, when used in xenograft mouse models with HCT15 and HT29 cell lines, mAb198.3 passive immunization could reduce tumor growth in a statistically significant manner.

FIGURE 5 | Protective activity of OMVs "absorbed" with synthetic CD4+ T cell epitopes. (A) Schematic representation of the tumor protection experiment. BALB/c mice were challenged with 2 × 10^5 CT26 cells and the day after were immunized with 20 µg of OMVs mixed with five synthetic peptides (20 µg each) corresponding to CT26-specific CD4+ T cell epitopes ("pentatope"). Immunizations were repeated at a frequency of 3 days and tumor growth was followed over a period of 25 days. (B) Protection of BALB/c mice immunized with "pentatope" OMVs. The figure reports the tumor volumes measured with a caliper at day 25 after the first immunization (***P < 0.001). Means ± SEM are indicated. (C) Analysis of pentatope-specific CD4+ and CD8+ T cells in mice immunized with "pentatope"-absorbed OMVs. BALB/c mice were immunized twice, 1 week apart, with 20 µg of OMVs mixed with five synthetic peptides (20 µg each) corresponding to CT26-specific CD4+ T cell epitopes ("pentatope"). Five days after the second immunization splenocytes were collected and stimulated with either five irrelevant peptides (control) or with the "pentatope" peptide mixture. Induction of IFN-γ expression in CD4+ and CD8+ T cells was analyzed by flow cytometry.
However, for a FAT1-based vaccine to be effective and safe, a fundamental requisite is the absence of FAT1, and in particular of the D8-FAT1 epitope, from the surface of healthy tissues. Only under these circumstances can the central clonal deletion of FAT1-specific naïve B cells be avoided, together with the risk of inducing immune responses that could be detrimental to immunized patients. While the effectiveness and safety of a FAT1 vaccine can ultimately be demonstrated only in humans, robust preclinical and safety data are required to move to the clinic. Starting from the assumption that mice and humans share a similar FAT1 expression profile, we decided to test FAT1-based vaccines in an immune-competent mouse model. First, we analyzed whether the mouse homolog of human FAT1 was expressed in some of the cell lines most frequently utilized in mice. Indeed, as judged by quantitative RT-PCR analysis of FAT1 mRNA, we found that FAT1 is overexpressed in a number of cancer cell lines, including B16F10 and, particularly, CT26. This was encouraging, considering that the CT26 cell line derives from a spontaneous colon cancer of BALB/c mice.

FIGURE 6 | (A) Two groups of BALB/c mice received three doses (2 weeks apart) of either "empty" OMVs or mD8-FAT1-OMVs (20 µg/dose). Ten days after the last immunization the groups were challenged with 2 × 10^5 CT26 cells and tumor growth was followed over a period of 25 days. A third group was first immunized with mD8-FAT1-OMVs (20 µg/dose), challenged with 2 × 10^5 CT26 cells, and finally repeatedly immunized with 20 µg of empty OMVs "absorbed" to the mixture of the five M03, M20, M26, M27, and M68 synthetic peptides (20 µg each). (B) Protection of BALB/c mice immunized with mD8-FAT1-OMVs and with the combination of mD8-FAT1-OMVs with "pentatope" OMVs. Mice were immunized with mD8-FAT1-OMVs (MBP-mD8-FAT1-OMVs: open circles; FhuD2-mD8-FAT1-OMVs: closed circles) or with the combination of MBP-mD8-FAT1-OMVs + "pentatope" OMVs (open triangles) and FhuD2-mD8-FAT1-OMVs + "pentatope" OMVs (closed triangles) as described in A. The figure reports the tumor volumes measured with a caliper at day 25 after the first immunization (*P < 0.05; ***P < 0.001). Means ± SEM are indicated. (C) Analysis of anti-mD8-FAT1 antibody titers in immunized mice. At the end of the immunization experiments described in A and B, sera were collected and anti-mD8-FAT1 antibody titers were measured by ELISA. Titers (x axis) are plotted against tumor volumes (y axis). Open and closed circles correspond to sera of mice immunized with MBP-mD8-FAT1-OMVs and FhuD2-mD8-FAT1-OMVs, respectively. Open and closed triangles correspond to sera of mice immunized with MBP-mD8-FAT1-OMVs + "pentatope" OMVs and FhuD2-mD8-FAT1-OMVs + "pentatope" OMVs, respectively.

We next aligned the sequences of hFAT1 and mFAT1 and selected the 25 mFAT1 amino acids (mD8-FAT1) corresponding to the human epitope (hD8-FAT1) recognized by mAb198.3. Interestingly, mD8-FAT1 differs from hD8-FAT1 in 4 out of 25 amino acids, and this difference is sufficient to abrogate the binding of mAb198.3 to mD8-FAT1. We then asked whether the mD8-FAT1 epitope could induce FAT1-specific antibody responses in mice. This was a critical question since, as said above, if mFAT1 were sufficiently expressed on the surface of normal tissues, the administration of mD8-FAT1-containing vaccines should be poorly immunogenic and/or potentially harmful. Interestingly, mD8-FAT1 immunization elicited high titers of specific antibodies and, although we did not carry out a pathological/histopathological analysis of immunized mice, the animals showed no severe signs of malaise and/or alteration of physiological functions throughout the experiment.
This result is in line with the immunohistochemistry data, which indicate the absence of surface expression of the hD8-FAT1 domain in normal human tissues (7), and suggests that in mice FAT1 has a topological organization and an expression profile similar to those observed in humans. Next, we investigated whether the immune response induced by mD8-FAT1 vaccination was potent enough to inhibit tumor growth in BALB/c and C57bl6 mice challenged after vaccination with CT26 and B16F10 cells, respectively. Our data indicate that mD8-FAT1 immunization did reduce the kinetics of tumor development in both animal models, even though it was not capable of fully abrogating tumor formation. Protection was more pronounced in the BALB/c-CT26 model, likely because FAT1 expression is fourfold higher in CT26 than in B16F10 (Figure 1). Interestingly, when we analyzed the cell populations in tumors from immunized BALB/c mice we found an increase in infiltrating CD4+ and CD8+ T cells and a concomitant decrease in Tregs and MDSCs with respect to tumors from mock-immunized mice (Figure 3). This is in line with one of the expected mechanisms of action of anti-tumor antibodies, whereby ADCC-mediated killing of cancer cells creates an inflammatory environment and favors the infiltration of effector T cells specific for cancer epitopes released by dead cells. While ADCC is likely to play an important role in the observed anti-tumor activity, other mechanisms may be involved, including direct cell killing or growth inhibition mediated by antibody binding to target cells. To investigate the involvement of this latter mechanism, we carried out in vitro experiments in which CT26 cells were incubated for 72 h with different concentrations of affinity-purified anti-mD8-FAT1 polyclonal antibodies, and we followed cell proliferation using the MTT assay (Promega).
As shown in Supplementary Figure 1, the addition of affinity-purified anti-mD8-FAT1 polyclonal antibodies partially inhibited cell growth (∼20%) in a dose-dependent manner. No growth inhibition was observed when the cells were incubated with pre-immune serum or with similar concentrations of purified polyclonal antibodies against an unrelated peptide. Finally, we investigated whether the tumor-inhibiting activity elicited by mD8-FAT1 immunization could be potentiated by combination with other tumor-specific antigens. Targeting single antigens would hardly be effective in cancer immunotherapy, and therefore the capacity of antigens to synergize with others is an important prerequisite in the final selection of the proper vaccine combinations. Our data indicate that when mD8-FAT1 is combined with other protective B and T cell epitopes the anti-tumor immune response is potentiated. In particular, immunization with mD8-FAT1 combined with EGFRvIII, a B cell epitope expressed in a variety of tumors in which the EGF receptor undergoes the deletion of its ectodomain (22), almost fully abrogated tumor development in C57bl6 mice challenged with B16F10-EGFRvIII, a cell line expressing both mD8-FAT1 and EGFRvIII. Our data not only point to the effectiveness of antibody-mediated immunotherapies targeting more than one tumor-specific B cell antigen/epitope but also suggest that the combination of D8-FAT1 and EGFRvIII might find practical applications in CRC patients, since EGFRvIII expression has been described in at least a subset of human colorectal cancers (33). In this work we also show the additive protective activity of mD8-FAT1 when combined with cancer-specific T cell epitopes. In the last few years, CD4+/CD8+ T cell neoepitopes originating from cancer mutations have emerged as key targets for cancer immunotherapy (34). This has been proved in the clinical setting for both adoptive cell transfer therapy (ACT) (35) and cancer vaccines (36).
In the case of cancer vaccines, the combination of more than one cancer-specific T cell neoepitope was critical for the effectiveness of the vaccines (20,21). As already pointed out in a recently published work from our laboratories (27), the fact that the potency of multi-T cell epitope vaccines can be further potentiated by the addition of protective B cell epitopes expands the potential of future cancer vaccines. It has to be pointed out that our data on B and T cell epitope combinations should be taken as a "proof-of-concept" study that needs further optimization. According to the protocol utilized in this work, before the challenge with CT26 cells the animals are first immunized with mD8-FAT1 to elicit sufficiently high FAT1-specific antibody titers. The challenge is then followed by repeated immunizations with T cell epitopes absorbed to OMVs. The reason why we followed this protocol in our "proof-of-concept" experiments is that previously described work with Rindopepimut, an anti-glioblastoma EGFRvIII peptide vaccine, highlighted the importance of implanting mice with high doses of cancer cells only in the presence of sufficiently high antibody titers (37,38). It would be interesting to follow protection when B and T cell epitopes are combined and given to mice simultaneously after tumor challenge. Preliminary data seem to indicate that such a schedule is not as effective as the one described here, but before drawing any conclusions further experiments involving different antigen dosages, formulations, and timings should be carried out. A final important comment concerns the adjuvant/formulation used in this study. Several adjuvants/delivery systems have been proposed for cancer vaccines, including DNA and RNA encoding cancer antigens/epitopes, synthetic peptides combined with Hiltonol, and viral vectors expressing cancer antigens (36). We tend to believe that OMVs are a valid and promising alternative.
As already pointed out, OMVs have a few interesting properties. They carry many Microbe-Associated Molecular Patterns (MAMPs), which can work synergistically, thus providing a strong built-in adjuvanticity to OMVs (11,12). Furthermore, OMVs can be easily decorated with foreign antigens/epitopes by manipulating the OMV-producing strains (13)(14)(15)(16)(17). Finally, OMVs can be rapidly and easily purified from bacterial culture supernatants using either detergent treatment of bacterial cells (18) or hyper-vesiculating strains (19). We had previously shown that OMVs engineered with the EGFRvIII peptide and a CD4+ T cell epitope fully protected C57bl6 mice from challenge with the B16F10-EGFRvIII cell line and that protection strongly correlated with the elicitation of both humoral and cell-mediated immunity. The data described in this work further support our motivation to exploit OMVs in cancer immunotherapy.

MATERIALS AND METHODS

Bacterial Strains, Cell Lines, and Mice

The E. coli HK100 strain was used for cloning experiments with the PIPE method. The E. coli BL21(DE3) ompA strain used for OMV production was previously described (16). CT26 and B16F10 were obtained from ATCC (Manassas, VA, USA) and cultured under recommended conditions. The B16F10 melanoma cell line that stably expresses the human EGFRvIII variant was generously provided by Prof. Sampson (Department of Neurosurgery, Duke University, Durham, NC). Cells were tested for mycoplasma before animal injection. BALB/c and C57bl6 female 4-week-old mice were purchased from Charles River Laboratories and kept and treated in accordance with the Italian policies on animal research at the Toscana Life Sciences animal facility (Siena, IT).

Construction of Plasmids

Three copies of mD8-FAT1 were fused to the C-terminus of the S. aureus FhuD2 lipoprotein. The mD8-FAT1 minigene was constructed, taking into consideration BL21 E.
coli codon usage, by assembling six complementary oligonucleotides, the sequences of which are reported in Table 1, and the assembled DNA fragment was amplified with the fat1ms-FhUD2 F/fat1ms-FhUD2 R primers (Table 1). These primers were designed to generate extremities complementary to the pET-FhuD2 plasmid. This vector, which carries the fhuD2 gene fused to the lpp leader sequence (39), was linearized by PCR amplification using the divergent primers nohis flag F/FhuD2-V-R, according to the PIPE method (40). Finally, the PCR products were mixed together and used to transform HK-100 competent cells, obtaining the pET_FhuD2-mD8-FAT1-3x plasmid. Similarly, to express the mD8-FAT1 peptide in the lumen of OMVs, the maltose-binding protein (MBP) was used as a carrier and the FAT1 minigene was cloned as an in-frame fusion to the 3′ end of the MBP gene. For this purpose, the pET-MBP plasmid (41) was used as template for a PCR reaction carried out with primers pET21-MBPF and pET21-MBPR (see Table 1) to generate a linear fragment. Then, the linear fragment was ligated to the mD8-FAT1 3x gene, which was assembled as previously described and subsequently amplified with primers MBPmFa-F and MBPmFa-R. The DNA mixture was used to transform HK-100 competent cells, and clones carrying the pET_MBP-mD8-FAT1 plasmid were selected on LB agar plates supplemented with 100 µg/ml of ampicillin. The correctness of the mD8-FAT1 fusions in one of the ampicillin-resistant clones was verified by DNA sequencing. The construction of the pET-Nm-fHbp-vIII plasmid expressing the Neisseria meningitidis fHbp fused to three repeated copies of the EGFRvIII peptide was previously described (24).

Preparation of Bacterial Total Lysates and OMVs

Plasmids containing the genes of interest were used to transform the E. coli BL21(DE3) ompA strain.
Recombinant clones were grown in 200 ml LB medium (starting OD600 = 0.05) and, when the cultures reached an OD600 value of 0.5, protein expression was induced by addition of 1 mM IPTG. After 2 h, OMVs were collected from culture supernatants by filtration through a 0.22 µm pore size filter (Millipore) followed by high-speed centrifugation (200,000 × g for 2 h). Pellets containing OMVs were finally re-suspended in PBS. Total bacterial lysates were prepared by suspending bacterial cells from 1 ml cultures (centrifuged at 13,000 × g for 5 min) in sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) Laemmli buffer and heating at 100 °C for 5 min. Proteins were separated by 4-12% or 10% SDS-PAGE (Invitrogen), run in MES buffer (Invitrogen) and finally stained with Coomassie Blue.

Flow Cytometry Analysis

Twenty milliliters of LB medium supplemented with 100 µg/ml ampicillin were inoculated at OD600 = 0.05 with overnight cultures of the BL21 ompA(pET_Empty), BL21 ompA(pET_MBP-mD8-FAT1), and BL21 ompA(pET_FhuD2-mD8-FAT1) strains. The cultures were then grown and IPTG-induced as described above. The BL21 ompA(pET_Empty) strain was used as negative control. Bacterial cells from 1 ml were harvested by centrifugation at 10,000 × g for 5 min at 4 °C and re-suspended in 1% BSA in PBS to obtain a cell density of 2 × 10^7 CFUs/ml, and 50 µl were then dispensed in a round-bottom 96-well plate. Anti-mD8-FAT1 peptide rabbit antibodies were added at a concentration of 5 µg/ml and incubated 1 h on ice. After three washes with 1% BSA in PBS, 20 µl of FITC-labeled anti-rabbit secondary antibodies (1:200 dilution) (Life Technologies) were added and incubated 30 min on ice. Each well was then washed twice with 200 µl of 1% BSA in PBS and the plates were centrifuged at 4,000 × g for 5 min. Samples were then re-suspended in 2% formaldehyde solution, incubated 15 min at RT and centrifuged again at 4,000 × g for 5 min.
Finally, samples were re-suspended in 200 µl of PBS and data were acquired using a BD FACSCanto II cell analyzer (BD) and analyzed with FlowJo software.

Triton X-114 Protein Separation From OMVs

One hundred micrograms of OMVs (10-15 µl) were diluted in 450 µl of PBS, then ice-cold 10% Triton X-114 was added to a 1% final concentration and the OMV-containing solution was incubated at 4 °C for 1 h under shaking. The solution was then heated at 37 °C for 10 min and the aqueous phase was separated from the detergent phase by centrifugation at 13,000 × g for 10 min. Proteins in both phases were then precipitated by the standard chloroform/methanol procedure, separated by SDS-PAGE electrophoresis and stained with Coomassie Blue.

Vaccine Immunogenicity and Tumor Challenge

OMV Immunizations

BALB/c mice were vaccinated on days 0, 14, and 28 with 20 µg of either "empty" OMVs [derived from BL21 ompA(pET21_Empty)], MBP-mD8-FAT1-OMVs supplemented with Alum, or FhuD2-mD8-FAT1-OMVs. At day 35, 2 × 10^5 CT26 cells were subcutaneously (s.c.) injected into each animal and tumor growth was measured with a caliper every 3 days over a period of 30 days. For ethical reasons, mice were euthanized when tumors reached a size of 1,500 mm^3.

Analysis of Anti-mD8-FAT1 Antibodies in Immunized Animals

Anti-mD8-FAT1 antibodies were measured by ELISA. Amino plates (Thermo Fisher) were coated with synthetic mD8-FAT1 peptide (0.5 µg/well) and incubated overnight at 4 °C. The day after, plates were saturated with a solution of 1% BSA in PBS (200 µl per well) for 1 h at 37 °C. Mouse sera were threefold serially diluted in PBS supplemented with 0.05% Tween (PBST) and 0.1% BSA. After 3 washes with PBST, 100 µl of each serum dilution were dispensed into the plate wells. As positive control, anti-mD8-FAT1 rabbit serum from animals immunized with the KLH-conjugated IQVEATDKDLGPSGHVTYAILTDTE peptide was used.
After 2 h of incubation at 37 °C, wells were washed three times with PBST and then incubated 30 min at 37 °C with mouse anti-rabbit alkaline phosphatase-conjugated antibodies at a final dilution of 1:2,000. After 3 washes with PBST, 100 µl of alkaline phosphatase substrate (Sigma Aldrich) were added to each well and the plates were kept at room temperature in the dark for 30 min. Finally, absorbance was read at 405 nm using an M2 SpectraMax plate reader.

T Cell Analysis

At the end of the tumor challenge studies described above (30 days from tumor cell administration), mice were sacrificed and spleens collected in 5 ml DMEM high glucose (GIBCO). Spleens were then homogenized and splenocytes filtered using a 70 µm cell strainer. After centrifugation at 400 × g for 7 min, splenocytes were suspended in PBS and aliquoted in a 96-well plate at a concentration of 1 × 10^6 cells per well. Cells were stimulated with 10 mg/ml of an unrelated peptide (negative control) or with 10 mg/ml each of a mix of the five peptides that make up the "pentatope" (M03, M20, M26, M27, and M68). As positive control, cells were stimulated with phorbol 12-myristate 13-acetate (PMA, 0.5 mg/ml) and ionomycin (1 mg/ml). After 2 h of stimulation at room temperature, Brefeldin A [Becton Dickinson (BD)] was added to each well and the cells were incubated for 4 h at 37 °C. After 2 washes with PBS, Near-IR dead cell staining reaction mixture (Thermo Fisher) was incubated with the splenocytes for 20 min at room temperature in the dark. After two washes with PBS and permeabilization and fixing with Cytofix/Cytoperm (BD) following the manufacturer's protocol, splenocytes were stained with a mix of the following fluorescent-labeled antibodies: anti-CD3-APC (BioLegend), anti-CD4-BV510 (BioLegend), anti-CD8-PECF594 (BD) and anti-IFN-γ-BV785 (BioLegend). Samples were analyzed on a BD LSR II FACS using FlowJo software. Graphs were processed with Prism 5.0 software (GraphPad).
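Endpoint titers from a threefold serial dilution series like the ELISA described above are conventionally read as the last dilution whose signal stays above a cutoff; a sketch of that bookkeeping (the cutoff rule, starting dilution, and OD values are illustrative assumptions, not data from this study):

```python
def endpoint_titer(od_values, start_dilution=100, fold=3, cutoff=0.2):
    """Reciprocal endpoint titer: the last dilution in a serial series
    (start, start*fold, start*fold**2, ...) whose OD still exceeds the
    cutoff. Returns 0 if even the first dilution is below the cutoff."""
    titer = 0
    dilution = start_dilution
    for od in od_values:
        if od <= cutoff:
            break
        titer = dilution
        dilution *= fold
    return titer

# ODs falling with dilution, for dilutions 1:100 through 1:24300
series = [1.8, 1.2, 0.6, 0.25, 0.12, 0.05]
print(endpoint_titer(series))  # prints 2700
```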
Differences in means between two groups were compared by unpaired, two-tailed Student's t-test (n.s.: P > 0.05; *P < 0.05; **P < 0.01; ***P < 0.001).

Analysis of Tumor Infiltrating Lymphocytes

Tumor-infiltrating lymphocytes were isolated from subcutaneous CT26 tumors taken from sacrificed mice. At least two tumors per group were collected and minced into pieces of 1-2 mm in diameter using a sterile scalpel. Tumor samples were then transferred into 15 ml tubes containing 5 ml of collagenase solution (collagenase type 3, 200 U/ml; collagenase type 4, 200 U/ml) diluted in HBSS with 3 mM CaCl2 and incubated under agitation for 2 h at 37 °C. The resulting cell suspensions were filtered through a 70 µm cell strainer and washed twice with PBS, and 1 × 10^6 cells were dispensed in a 96-well plate. Cells were then incubated with the Near-IR dead cell staining kit (Thermo Fisher) for 20 min on ice in the dark. After two washes with PBS, samples were stained with the following mixture of fluorescent-labeled antibodies (BD): anti-GR1-BV605, anti-CD11b-BV480, anti-CD45-BV786, anti-CD4-PE, and anti-CD8-PECF594. The samples were then incubated 1 h at RT. After 2 washes with PBS, Cytofix/Cytoperm (BD) was added to each well and incubated 20 min on ice in the dark. After 2 washes with PBS, cells were stained with anti-Foxp3-A488 (BD) antibodies diluted in 1X Perm/Wash buffer for 20 min at RT in the dark. Finally, samples were washed 2 times with 1% BSA in PBS and analyzed on a BD LSR II FACS as described above.

RNA Extraction and qRT-PCR Analysis

RNA was extracted from cell lines using the RNeasy mini kit (QIAGEN) and 500 ng were reverse transcribed using SuperScript III Reverse Transcriptase (Life Technologies) with oligo(dT).
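The unpaired two-tailed Student's t-test used above for group comparisons reduces, in its equal-variance form, to a pooled-variance statistic; a self-contained sketch (in practice a library routine such as scipy.stats.ttest_ind, which also returns the two-tailed P value, would typically be used):

```python
import math

def unpaired_t(sample_a, sample_b):
    """Two-sample Student's t statistic with pooled variance
    (the equal-variance, unpaired form of the test)."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    # Unbiased sample variances (divide by n - 1)
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    # Pooled variance weighted by degrees of freedom
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    return (mean_a - mean_b) / math.sqrt(pooled * (1 / na + 1 / nb))
```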
Triplicate cDNA samples from each cell line (equal to 50 ng RNA/sample) were subjected to qRT-PCR to assess the relative FAT1 transcript levels (QuantiTect Primer Assay for mouse FAT1, QIAGEN) using the QuantiTect SYBR Green PCR kit (QIAGEN). Actin and MAPK (QuantiTect Primer Assays, QIAGEN) were used as internal normalization controls. Data were analyzed on the StepOnePlus qRT-PCR instrument (Applied Biosystems).

Cytotoxicity Assay

5 × 10^4 CT26 cells were plated in triplicate in a 96-well plate in RPMI medium + 10% FBS (GIBCO). Cells were incubated for 72 h at 37 °C with three different concentrations (10 µg/ml, 5 µg/ml, and 1 µg/ml) of affinity-purified rabbit anti-mD8-FAT1 polyclonal antibodies. As controls, CT26 cells were incubated with PBS, with pre-immune serum from the same rabbit used for D8-FAT1 immunization (1:1000 final dilution), or with three concentrations of rabbit polyclonal antibodies against an unrelated peptide. Cell proliferation was followed using the CellTiter 96 Non-Radioactive Cell Proliferation Assay (Promega) according to the manufacturer's protocol. Finally, absorbance was read at 750 nm using an M2 SpectraMax plate reader.

ETHICS STATEMENT

Mice were monitored twice per day to evaluate early signs of pain and distress, such as respiration rate, posture, and loss of weight (more than 20%), according to humane endpoints. Animals showing such conditions were anesthetized and subsequently sacrificed in accordance with experimental protocols, which were reviewed and approved by the Animal Ethical Committee of the Toscana Life Sciences Foundation and by the Italian Ministry of Health.

AUTHOR CONTRIBUTIONS

AlG and MP: cloning, expression and purification of OMVs, challenge studies, flow cytometry, and manuscript revision. SV: mouse models. ST and CS: flow cytometry. LaF, CI, RC, EC, and SS: cloning and OMV preparation, antigen compartmentalization in OMVs.
MT and IZ: flow cytometry, T cell analysis, and tumor challenge. LuF, LG, EK, SI, and AsG: WB, protein analysis, and ELISA. GG: experimental design, project coordination, and manuscript preparation.

FUNDING

This work was supported by the Advanced ERC grant OMVac 340915 and by the PoC ERC grant OMCRC 780417 assigned to GG.
Haptic Feedback Experiments for Improved Teleoperation of a Robotic Arm

The paper presents a robotic arm which is operated by means of a sensorial interface mounted on the hand and arm of the human operator. The novelty of the research is the application of devices similar to those used for movement detection in virtual reality applications in order to command a robotic system. Depending on the precision required by the intended application and on the number of degrees of freedom, motion detection for the human hand and arm was approached at different levels of complexity. The data processing and action command methods were developed in correlation with the structure of the robotic arm, starting from the monitoring of the movements of the human arm and hand. The sensorial interface was conceived on the premise that the robotic arm should be able to realize movements similar to those of a healthy human hand, as requested by the application. Therefore, the sensorial interface that monitors the movement of the hand was implemented to command a robotic arm having 5 degrees of freedom, with an anthropomorphic robotic hand at its end. The joints of the system allow rotations of 30-180 degrees (depending on the utility and position). The experimental testing of the robotic system verified the ability of the robotic arm to replicate the movements of the human hand. The operator executed a sequence of movements, with the sensorial interface on, and the robotic arm reproduced the movements (the response was analyzed qualitatively and quantitatively).

Introduction

In recent years, different groups of researchers and several companies have designed a large variety of anthropomorphic mechanical arms with ever-growing performance. Some of the products available now have a flexibility comparable to that of a healthy human hand, which allows a large variety of handling operations for different types of objects.
However, more research is still necessary to conceive and develop the algorithms that make specific sets of complex movements possible for such anthropomorphic robotic arms. The methods most frequently used for remote control of robotic arms include joysticks, preset sets of commands, and the replication of the movements of a human operator's arm. The last of these methods is the most efficient in terms of performance-cost ratio. The method implies the use of several devices: electromagnetic sensors, movement sensors, bending sensors, gloves with bending and pressure sensors, accelerometers, etc. It can be used both with wired command devices and with a wireless connection to the command unit. A significant limitation of the command systems of teleoperated robotic arms is set by the feedback provided by the dedicated circuits implemented on the robotic arm. The feedback can provide a supervision mechanism for the operator, in order to monitor the operations performed by the robotic arm. This is one of the main reasons why almost all of the most performant robotic arms built recently implement several artificial intelligence features or functions, being able to make decisions regarding, for instance, the force of the … or the speed of the movement, in order not to break the manipulated objects or the objects in the surrounding environment [1]. For such functions, the command and control block may require a network of microprocessors, each dedicated to one specific action. Except for fully autonomous robots, the implementation of one or several feedback levels is indispensable in order to efficiently command the movements of a teleoperated robotic arm [2]. Without several types of sensors, it would not be possible to duplicate the movements of the human hand.
The applications that require remote control of a robotic arm belong to the category of operations in dangerous, remote, or inaccessible (for the human operator) environments; in such situations, a frequently used solution is to duplicate the movements of the hands/arms of a human operator. In some situations, it is cheaper and more efficient to program and teleoperate a robot placed in such an environment than to mount a risky and/or very expensive human operator intervention. In addition, due to the advances and technological development of the robotics and automation industry, the performance and abilities of robotic arms may nowadays surpass human abilities (in terms of strength, precision, reproducibility, etc.). Such abilities, combined with adequate sensorial systems and intelligent data processing, will allow precise control of robotic systems that may function, when necessary, as efficient and safe extensions of the human locomotor system. There are several fields and applications where robots are already considered more efficient than humans, and the domain of these applications is constantly extending, as it is much more adequate to use specialized robots than human operators in situations where robots can efficiently perform complex operations, handling objects of different sizes and weights with high precision. While several robots are programmed to repeat specific operations, other robotic arms are controlled by human operators with specific devices that allow remote control: joysticks, spacebars, motion detection (and duplication of the human movements). Of all these types of commands, the last requires the most information processing: the sensors are placed on the joints of the human hand, or the operator uses a glove with sensors, which acquires information that will be processed to obtain an intelligent command and control of the robotic hands and arms.
Methods That Are Used to Detect the Movements of the Hands and Arms of the Human Operator In 1998, researchers of the Systems Lab at Southern Methodist University developed an exoskeleton system that operates with pneumatic actuators. The term exoskeleton (external skeleton) refers to a system of levers, motors, actuators, and sensors mounted on the hands and arms of the operator (such systems also have applications in prosthetic and rehabilitation medicine). In the robotic system developed by the Systems Lab, the exoskeleton is made of aluminum mobile elements, each having four joints that detect the movement of the shoulder and elbow of the operator's arm. Figure 1 shows the mounting point of the exoskeleton (behind the operator's shoulder); the aluminum elements may be adjusted depending on the physical characteristics of the operator, to fit the right arm. In order to command the metal structure (which has large dimensions), the human operator drives the motors associated with each part of the system through the movements of the fingers, arm, and forearm. The robot named Mahru, developed by the Korea Institute of Science and Technology (KIST) and Samsung Electronics, uses the same method. The robot implements a large selection of complex movements for arms and legs, stored in its library. Figure 2 represents the exoskeleton that commands the arms of the robot. It consists of levers, sensors, and actuators, and it detects and decomposes the movements of the operator; the corresponding movement is searched for in the library of movements for the hand, arm, and forearm. In both examples presented so far, the devices mounted on the joints of the hand and arm measure, memorize, and process several movement parameters for different parts of the human arm, which are put in correspondence with the moving elements of the robotic arm.
In order to replicate the human movements, the movement parameters are detected with high precision, and therefore the trajectory, speed, and acceleration of each mobile segment are determined. A different category of equipment developed for hand movement detection is the "data glove", with variations also for other parts of the body: locomotion interfaces, specialized devices for different types of feedback, and interactive suits. Interactive data suits like the one in figure 3 are nowadays used not only in virtual reality computer games, but also in training simulators for divers, soldiers, parachute jumpers, etc. Another real-time method for detecting the motion of the human hand was developed by Elliptic Labs [6]. The method is based on an ultrasonic system with micro-electro-mechanical microphones and transmitters that detect and emit sonic waves in a spectrum above 20 kHz. The acoustic waves are then processed and transformed into different commands for the computing circuits. Data gloves (and systems that include data gloves) are the most used devices for real-time detection of the movements of the hands (including the movements of the arms) of the operators, since such data gloves offer a complete solution for high-precision detection of the movements of all joints. Data gloves are mostly used for virtual reality applications. They have sensors inside the fingers of the gloves, tracking devices, accelerometers, and force-feedback systems (that measure the resistance force when grasping objects and during different types of touch and interaction). The movements of the hand and fingers are accurately detected and transmitted to the PC, as input for specific applications. The accelerometers allow detection of two types of movements: dynamic and static.
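The static/dynamic distinction mentioned above is typically made by comparing the magnitude of the measured acceleration vector with gravity; a minimal sketch follows, where the tolerance value is hypothetical, not taken from the paper:

```python
import math

G = 9.81         # gravitational acceleration, m/s^2
THRESHOLD = 0.5  # hypothetical tolerance for "only gravity acting", m/s^2

def classify_motion(ax, ay, az):
    """Classify one accelerometer sample as 'static' (the hand holds a
    pose, so only gravity acts on the sensor) or 'dynamic' (the hand is
    actively accelerating)."""
    magnitude = math.sqrt(ax**2 + ay**2 + az**2)
    return "static" if abs(magnitude - G) < THRESHOLD else "dynamic"

print(classify_motion(0.0, 0.0, 9.81))  # hand at rest
print(classify_motion(3.0, 1.0, 12.0))  # hand in motion
```

A real glove firmware would smooth the readings over a short window before classifying, but the threshold test above is the core of the distinction.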
The data gloves available on the market have different features, and their performance depends on the number of sensors mounted on each finger, the sensors' resolution, and the force-feedback functions. They can detect complex movements, decomposed into up to 22 degrees of freedom; the accelerometers have up to 3 movement axes and are able to measure the rotation and elevation of the hand, in addition to the movements along the three axes. CyberGlove is the market leader for these products and produces various types of data gloves. In certain applications that make use of mobile robots, the command and control of the movements of the mobile elements of the robot's arms are based on client-server computing systems. These systems use dedicated architectures and specialized software for the real-time processing of the information captured by the sensorial interfaces of the robot and its internal processing and memory blocks. An example is the mobile platform PowerBot with the Power Cube Manipulator, which is controlled with a client-server type of architecture. The architecture of the control block of this robot contains several microcontrollers that give the control signals for orientation in a complex environment with high unpredictability and also control the interaction of the robot with this environment. This robot is manufactured by the MobileRobots company, which also developed software implementing high-level control for obstacle avoidance, trajectory design, location, navigation, and operation of the robotic arm [7]. Conceptual Modeling of the Replication of the Movements of the Human Hand by Artificial Robotic Hands The robotic systems developed so far have a large variety of construction solutions, functional facilities, and working conditions. Typical features refer to the number of fingers and their segmentation, the number of segments of the arm, degrees of freedom, types of actuation, and working conditions (temperature, pressure, etc.).
However, regardless of the solution implemented, a robotic arm will always be designed to drive the effector (the robotic hand) into the required position, at the required angle (in order to perform a specific operation). The interconnection of the human movements with the movements of the robotic arm implies taking into consideration the fact that the latter is usually actuated by more articulations (joints) than the human hand, typically one for each degree of freedom. In other words, except for axial rotations of some segments (for instance, pronation-supination of the forearm), the other double or triple rotations will be implemented with either two or three articulations for each human articulation (like those of the wrist or of the shoulder). The detection of the movements of the human hand can be regarded as a technical and computing problem with different levels of complexity, depending on the number of degrees of freedom monitored and the precision needed in different applications. Starting from the monitoring of the movements of the human hand and arm, in direct correlation with the structure of the robotic arm to be controlled, adequate methods were conceived and selected for data processing and for commanding the actuation systems [8]. In order to have a robotic arm and hand perform actions that are specific to humans, engineers often take an empirical approach, specific to each type of articulated structure and its features (architecture of active elements, degrees of freedom). The movements of the human hand, and hence the movements of the device that "copies" the human hand, imply a certain number of articulated segments, each articulation having a limited number of degrees of freedom.
The larger the number of segments and the larger the number of degrees of freedom, the greater the diversity of gestures and the higher the complexity of the movements; therefore, the hand has more expressivity and more elaborate abilities. However, in the case of a robotic arm, the larger the number of articulations, the lower the resistance (the robot will be fragile and vulnerable). The "robot" is the whole system that performs an action/function, and it includes more blocks than the robotic arm alone. There is a direction of research and development in robotics oriented toward the "reproduction" of the natural human arm and hand in models that copy more or fewer details of the latter. However, there are many different solutions for robotic arms that are able to grasp and manipulate objects with simple to complex actions. In the scientific literature, the main types of robotic hands and arms may have 2, 3, or 4 "fingers", each of them having 2 or 3 segments. Even those models try to "copy" some features of the human hand. In order to replicate the movements of the human arm and forearm, we must take into consideration the degrees of freedom of the joints: shoulder, 3 degrees of freedom, of which 2 in orthogonal vertical planes (frontal and sagittal) and 1 for the rotation around its own axis; elbow, only 1 degree of freedom (flexion/extension); wrist, 3 degrees of freedom, of which 2 in orthogonal vertical planes and 1 for the rotation around its own axis (pronation/supination).
For the hand, the articulations have the following flexibility: for the thumb, the main joint (carpometacarpal) has two degrees of freedom in vertical planes, while the metacarpophalangeal joint and the joint between phalanges have only one degree of freedom each; for the other four fingers, the carpometacarpal articulations have only one degree of freedom (allowing small movements that modify the form of the palm), the metacarpophalangeal articulations have two degrees of freedom each, and all articulations between phalanges have one degree of freedom. Summing up, the arm and forearm have 7 degrees of freedom altogether, while the palm with the fingers has 24 degrees of freedom, of which 20 are explicit and large, the other 4 allowing only small movements to adjust the form of the palm to the form of the object handled and grasped. In conclusion, a precise and complete mathematical representation of the position and movement of the arms should take into consideration all 27 explicit degrees of freedom, and the corresponding parameters should be monitored in order to replicate the movement of the human hand. The Implementation of the Processing and Control Method for the Artificial Arm and Hand The data processing required for our application implies large computing power, as real-time processing of a large quantity of data is needed. As classic computing models (i.e. with a central computing block) may not handle such processing, our solution is based on a distributed and hierarchical computing architecture. The block scheme of the computing architecture is represented in Figure 4. The data acquired by the sensors are processed locally, with only the results being transmitted to the superior level. The same hierarchy is implemented for the processing of the robotic system's movement parameters. The superior computing level will therefore not be overloaded by data and data transmission tasks.
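The joint inventory described above (7 degrees of freedom for the arm and forearm, 24 for the hand, of which 20 are explicit) can be tallied in a short sketch; the dictionary names are illustrative, not taken from the paper's software:

```python
def total_dof(joints):
    """Sum the degrees of freedom over a dictionary of joints."""
    return sum(joints.values())

arm = {"shoulder": 3, "elbow": 1, "wrist": 3}            # 7 DOF total
thumb = {"carpometacarpal": 2,
         "metacarpophalangeal": 1,
         "interphalangeal": 1}                            # 4 DOF
finger = {"carpometacarpal": 1,   # small palm-shaping movement only
          "metacarpophalangeal": 2,
          "proximal_interphalangeal": 1,
          "distal_interphalangeal": 1}                    # 5 DOF per finger

hand_dof = total_dof(thumb) + 4 * total_dof(finger)       # 24
# The four small carpometacarpal movements of the fingers are not
# counted as explicit, leaving 7 + 24 - 4 = 27 explicit DOF.
explicit_dof = total_dof(arm) + hand_dof - 4
print(total_dof(arm), hand_dof, explicit_dof)  # 7 24 27
```

A monitoring system would attach one sensor channel (or a derived estimate) to each of these 27 explicit parameters.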
Therefore, the scheme does not contain large-capacity data buses, but a standard microcontroller-processor communication system. The detection of the movements of the human hand may be solved by different solutions of different complexity, all aiming at the replication of the movements of the human hand, but taking into consideration a different number of parameters, in order to obtain the required precision and ability. The goal is to control the robotic hand through the movement of the operator's hand. To do this efficiently, a first level of computing is dedicated to high-precision detection of the movements of the operator's hand (for which "segments of interest" may be defined accordingly, depending on the application) and to processing the information in order to command the robotic arm. Figure 5 is a picture of the prototype of the robotic arm, showing its mechanical structure. In the picture in figure 8, all sensorial interfaces and processing elements are mounted on the prototype. As one can see, the model resembles the human arm and hand (or a sketch of it), but there are significant differences regarding the kinematics and functionality. The data processing is divided into two main sections: the command section, in which the sensorial input from the hand of the operator is processed, transmitted, and converted into appropriate commands; and a local section, for local control of movements and stability, responsible, for instance, for the adaptation of the actions to the form, weight, and other characteristics of the manipulated objects. In our hierarchical approach to the computing, the local section is subordinated to the central level of command [9,10]. The effective actuation elements of the robotic hand and arm are controlled both by central commands and by local controls. Specific feedback is transmitted to the operator in order to have efficient remote control.
In addition to the interfaces with the user and the robotic arm, several computation units are included in the architecture in order to adapt the functionality to complex operations. The design of the sensorial interface that monitors the movements of the operator took into consideration all the mobility possibilities of a healthy human hand. The robotic arm was designed accordingly, for the largest capacity and variety of movements; therefore it has 5 degrees of freedom. The robotic hand has an anthropomorphic structure (i.e. it imitates the human hand, having 5 fingers with a number of phalanges similar to the human fingers). Figure 6. Interconnection of sensorial (data acquisition) systems and information processing blocks. However, since the robotic arm/hand has more articulations than the human one, the interconnection of the movements is not a direct replication of commands. The processing system converts the movement of the human hand into commands for the robotic hand. Except for the axial rotation of some segments (like the pronation-supination of the forearm), the movements that imply double or triple rotations are realized by means of two or three articulations for each human articulation. Being more flexible than the human physical body, the articulations of the robotic arm allow rotations in the range of 30-180 degrees, depending on the position and utility of the articulation. The sensorial interface for the human arm (figure 7) consists of a data glove and a set of flexion sensors placed on the joints of the arm and hand, data acquisition systems, and a processing unit. The result of the movement detection is transmitted to the central unit, equipped with the main microprocessor, which computes and issues the commands for the robotic arm. The algorithms for data processing that generate the commands in order to replicate the human movements with the artificial structure were presented by the authors in [7].
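The conversion from a human joint reading to a robot articulation command can be sketched as a linear mapping onto the 30-180 degree mechanical range stated above; the sensor calibration values below are hypothetical, not the paper's:

```python
def flexion_to_joint_angle(raw, raw_min, raw_max,
                           joint_min=30.0, joint_max=180.0):
    """Linearly map a raw flexion-sensor reading onto a robot joint's
    mechanical range. The 30-180 degree range follows the articulation
    limits stated in the text; raw_min/raw_max come from a per-operator
    calibration step (values here are illustrative)."""
    # Normalize the reading to [0, 1], clamping out-of-range samples
    # so a noisy sensor can never command an angle outside the range.
    t = (raw - raw_min) / (raw_max - raw_min)
    t = min(max(t, 0.0), 1.0)
    return joint_min + t * (joint_max - joint_min)

# Example with a hypothetical 10-bit ADC reading:
print(flexion_to_joint_angle(512, 0, 1023))  # mid-flexion, ~105 degrees
```

Where one human articulation drives two or three robot articulations, the same mapping would be applied per robot joint, with the human angle split among them according to the kinematic correspondence.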
The experiments performed so far took an empirical approach, testing the capacity of the robotic system to follow the movements of the human hand and the reproducibility of the actions performed. The replication of the operator's movements was verified for several movements, at different speeds and accelerations, involving complex trajectories and rotations. A set of basic movements was repeated for different parameters, and the robotic arm repeated the movements. The measurements of the amplitude and trajectory of the movements showed small deviations (in the alignment) between the human and artificial arm. However, the robotic arm was able to grasp and lift objects and even to play a simple tune on a piano. Figure 9. In this experiment, the operator tries to give the appropriate commands for grasping a small ball, based on the haptic feedback. All mobile units acted as designed, according to the corresponding segment of the operator's arm and hand. The experiments certified the efficiency of the methods used for data acquisition and data processing. Conclusions This paper presents an innovative conception for the teleoperation of a robotic arm: using data gloves and additional (similar) sensorial units for the joints of the arm, as in virtual reality applications. The movements of the operator are duplicated by the robotic arm, although the structure is different in the number of articulations and different motion components are involved. The prototype has an anthropomorphic artificial hand (with 5 fingers) and was trained with basic operations of lifting, grasping, and rotation [11]. The experiments had promising results, with the robotic arm replicating the human movements, although there are small alignment errors between the two. The limitations of the system are due to the performance of the mechanical structure of the robotic anthropomorphic arm, including the fixed and mobile segments and their joints.
For this phase of the project, the aim of the research was to solve the interconnection and processing issues, generating the commands for teleoperation; future applications will imply haptic feedback and 3D visual contact, therefore offering the possibility to correct the alignment with further feedback. The experiments certified the efficiency of the architecture and algorithms and will be followed by further experiments with more complex and dynamic movements.
v3-fos-license
2023-10-22T06:18:09.553Z
2023-10-20T00:00:00.000
264378101
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-023-45075-6.pdf", "pdf_hash": "d9d3689e284587725867ff0af1e676b33c8578c1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45181", "s2fieldsofstudy": [ "Environmental Science", "Chemistry", "Materials Science" ], "sha1": "a5badb20b3dc1aea157f85fcd51f34af6eef1b27", "year": 2023 }
pes2o/s2orc
Enhanced removal of organic dyes from aqueous solutions by new magnetic HKUST-1: facile strategy for synthesis A novel magnetic HKUST-1 MOF based on MgFe2O4-NH2 was designed, synthesized in two steps, and applied to the effective removal of malachite green (MG), crystal violet (CV), and methylene blue (MB) from water samples. Characterization of the newly synthesized MgFe2O4-NH2-HKUST-1 was performed by various techniques such as Fourier transform infrared spectroscopy, X-ray diffraction, field emission scanning electron microscopy, Brunauer-Emmett-Teller analysis, thermal gravimetric analysis, and vibrating sample magnetometry. Malachite green, crystal violet, and methylene blue are toxic and mutagenic dyes that can be released into water in different ways and cause many problems for human health and the environment. The removal of malachite green, crystal violet, and methylene blue from aqueous solutions was investigated using the magnetic HKUST-1 in this research. The effect of various parameters such as pH, amount of sorbent, dye concentration, temperature, and contact time on dye removal has been studied. The results showed that more than 75% of the dyes were removed within 5 min. Adsorption isotherms, kinetics, and thermodynamics were investigated. The results of this study show that the adsorption capacity of the magnetic adsorbent is 108.69 mg g−1 for MG, 70.42 mg g−1 for CV, and 156.25 mg g−1 for MB. This study demonstrates a good strategy for the synthesis of the functionalized magnetic form of HKUST-1 and its capability to increase the efficiency of the removal of malachite green, crystal violet, and methylene blue from aqueous solution. Removing these dyes requires an efficient sorbent. In the present study, the efficient removal of malachite green, crystal violet, and methylene blue was achieved using an efficient synthetic sorbent based on a magnetic MOF. The dyes studied in this research are presented in Table 1.
Metal-organic frameworks (MOFs) are porous structures created from coordination bonds between metal ions and organic linkers or bridging ligands 10. The specific structural characteristics of MOFs include a wide range of particle sizes, a high surface-to-volume ratio, a high adsorption tendency, controllability of particle size, and high adsorption capacity 11. Due to their high surface area, tunable structural properties, and thermal stability, MOFs are suitable for a wide range of applications, including catalysis 12, gas storage 13, enzyme carriers 14, and sensors 15, and their use as sorbents in the adsorption process has also been studied 4. The capacity of MOFs to adsorb dyes is significant. One of the significant advantages of MOFs compared to other sorbents is their structural diversity and the possibility of tuning the pore size and properties by choosing different metal ions and organic ligands in the synthesis stages 16. The HKUST-1 MOF contains Cu2+ units coordinated by four carboxylate groups, creating a highly porous cubic structure with a 3D network 17,18. An important point in manufacturing HKUST-1 is the easy and reproducible synthesis of this MOF. MOFs can be used to treat dye wastewater. MOFs are suitable materials for dye adsorption from wastewater, with favorable performance compared to conventional sorbents 19. Recently, for better performance, MOF composites have been used, and in this study a magnetic composite has been synthesized. Among the sorbents synthesized so far for the removal of pollutants, magnetic sorbents have received much attention for adsorbing dyes 20. Magnetic nanoparticles have attracted the attention of many researchers due to their extraordinary properties. They can be bound to MOFs by functionalization to form a composite with magnetic properties. Composites are combinations of two or more separate materials that have different properties than the individual parts 21.
HKUST-1 has attracted the attention of researchers in recent years due to its easy and cost-effective synthesis. On the other hand, it is difficult to separate MOFs from a mixed solution, which causes secondary pollution in the environment, and a centrifugation step is needed to separate them from the aqueous solution; the use of a magnetic MOF adsorbent therefore allows easy separation with an external magnetic field, easy product recovery, and a more affordable operating cost. Also important in this study are the stabilization of the adsorbent in aqueous environments through composite formation, the short time of the removal process, and the acceptable adsorbent capacity. Therefore, the magnetic HKUST-1 has been considered an accepted and efficient adsorbent because of its easy separation. In this study, the synthesized MgFe2O4 nanoparticles were functionalized with the -NH2 functional group to bind to the carboxylic acid groups of HKUST-1 by hydrogen bonding. The new magnetic adsorbent with the structure MgFe2O4-NH2-HKUST-1 was studied for the removal of the dye pollutants methylene blue, crystal violet, and malachite green. This compound shows adequate adsorption capacity, speed, and removal percentage for removing dyes from wastewater. The effective factors, isotherms, kinetics, and thermodynamics of the adsorption removal process were investigated.
Materials and instruments Malachite green (MG), crystal violet (CV), methylene blue (MB), iron(III) nitrate nonahydrate Fe(NO3)3·9H2O. X-ray powder diffraction (XRD) measurements were performed using a Philips X'pert diffractometer with monochromatic Cu-Kα radiation. A glass pH electrode (Metrohm 713 pH meter) was used for pH measurements. The morphology of the samples was studied by field emission scanning electron microscopy (FE-SEM). For the synthesis of the composite, the MgFe2O4-NH2 nanoparticles and 2.4 mmol of copper nitrate were dissolved in 25 mL of ethanol and placed in an ultrasonic bath for 30 min. Then 1 mmol of BTC was dissolved in 25 mL of ethanol and added to the previous solution at a rate of 0.5 mL/min under mechanical stirring for 1 h. The final product (a green precipitate) was washed several times with ethanol, separated by an external magnet, and then dried at 50 °C for 4 h in a vacuum oven (Scheme 2). Adsorption process Different solutions of the dyes were prepared by dissolving appropriate amounts of MG, CV, and MB in distilled water, in the ranges of 0.18-9.27 mg L−1, 0.81-6.94 mg L−1, and 0.63-5.43 mg L−1, respectively. The calibration curves were obtained by measuring the absorbance at 620 nm for MG, 590 nm for CV, and 664 nm for MB. The adsorption process was carried out by adding 10 mg, 5.5 mg, and 1 mg of sorbent separately to 5 mL solutions with different concentrations of MG, CV, and MB, respectively (Scheme 3). Scheme 1. The magnetic nanoparticle synthesis process. Scheme 2. The magnetic composite synthesis process. The solutions containing the sorbent were mixed with a magnetic stirrer for 5 min for all dyes. Subsequently, the sorbent was separated by an external magnet, and the remaining MG, CV, and MB removal was calculated by the following equation: Removal (%) = ((A0 − A)/A0) × 100, where A is the absorbance of the analyte solution after adding the sorbent and A0 is the initial absorbance of the solution.
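The removal-efficiency calculation from absorbance before and after treatment can be sketched in a few lines; the numeric readings below are hypothetical, not the paper's measurements:

```python
def removal_percent(a0, a):
    """Dye removal efficiency from the initial absorbance (a0) and the
    absorbance after adding the sorbent (a). Absorbance is proportional
    to concentration (Beer-Lambert), so the ratio of absorbances equals
    the ratio of concentrations."""
    return (a0 - a) / a0 * 100.0

# Example with hypothetical absorbance readings at a dye's lambda_max:
print(removal_percent(0.80, 0.18))  # 77.5 % of the dye removed
```

In practice, a would be read at each dye's calibration wavelength (620 nm for MG, 590 nm for CV, 664 nm for MB) after magnetic separation of the sorbent.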
In the FT-IR spectrum of HKUST-1, the bands at 1447 cm−1 and 1639 cm−1 are attributed to -O-C-O- groups, and the bands at 1375 cm−1 and 1565 cm−1 are attributed to the C=C stretching vibration of the BTC ligand. The band at 680 cm−1 is related to the Cu-O bond 23. The broad peak at 3420 cm−1 can be attributed to the -OH of water molecules 24. In the FT-IR spectrum of MgFe2O4-NH2, the bands at 2923 cm−1 and 2856 cm−1 are attributed to the stretching vibration of the C-H bond in ethanolamine or ethylene glycol. The band at 1054 cm−1 can be attributed to the overlap of the C-O bond with the C-N stretching vibration, which is a sign of the binding of amine groups on the MgFe2O4-NH2 nanoparticle 22; the band at 570 cm−1 is related to the Fe-O bond. Also, the bands at 1383 cm−1, 1630 cm−1, and 3420 cm−1 are related to the C-N stretching vibration, the NH2 scissor bending vibration, and the N-H stretching vibration, which indicate the presence of ethanolamine molecules on the nanoparticle surface 25. Therefore, the peaks appearing in the spectra of HKUST-1 and MgFe2O4-NH2 can both be seen in the spectrum corresponding to the synthesized composite. BET analysis The surface properties of MgFe2O4-NH2-HKUST-1, such as the surface area and pore diameter, were also examined using BET surface area analysis. The specific surface area of the MgFe2O4-NH2-HKUST-1 composite was evaluated at 297.13 m2 g−1. The average pore diameter distribution was investigated using the Barrett-Joyner-Halenda (BJH) method. The average pore diameter of the synthesized composite was 4.26 nm (Table 2). The nitrogen adsorption-desorption curve for the synthesized composite shows a mixed type I/IV isotherm, which means the MgFe2O4-NH2 nanoparticles have microporous and mesoporous structures at the same time, as shown in Fig. 3a.
TGA analysis The thermal stability of the MgFe 2 O 4 -NH 2 -HKUST-1 composite was estimated via TGA.The TGA profile depicted three weight loss steps in the tested temperature range of 50-700 °C (Fig. 3b).The first weight loss %20 appeared in the temperature range of 80-150 °C, which probably indicates the removal of surface water or residual solvent and physisorbed and chemisorbed H 2 O molecules in the sample.In the second stage, in the temperature range of 150-420 °C, organic binders begin to degrade and eventually lead to the complete collapse of the composite, and the remaining mass in this stage reaches 45% of the initial mass.In the third stage, with an increase in temperature from 420 °C, an increase in mass of about 2% by weight is observed.This increase in mass can be due to the formation of some oxides that are not stable at higher temperatures and gradually decompose.Also, the approximate stability of the sample mass in the range of 40% of the initial mass can be attributed to the presence of CuO and Fe 3 O 2 compounds 23,[26][27][28] . Table 2. Information about the porosity of the synthesized composition.www.nature.com/scientificreports/ VSM analysis To study the magnetic behavior of MgFe 2 O 4 -NH 2 nanoparticles and MgFe 2 O 4 -NH 2 -HKUST-1 composite, magnetization measurements were performed by VSM.As seen in (Fig. 3c (1), the value of saturation magnetic (Ms) was 28.48 emu g −1 for MgFe 2 O 4 -NH 2 nanoparticles and according to Fig. 
3c (2), the maximum saturation magnetization of the MgFe2O4-NH2-HKUST-1 composite was obtained as 8.13 emu g−1, which is enough for it to be quickly collected by a strong magnet from a large volume of water. The saturation magnetization of the magnetic composite decreased compared with the bare magnetic nanoparticles, which can be caused by the increased thickness of the non-magnetic component. Also, from the reduction of the coercivity (HC), it is concluded that the synthesized sorbent has superparamagnetic properties.

Zeta potential
The zeta potential was measured to check the surface charge of the synthesized composite; the zeta potential of a sorbent is one of the factors that can affect its adsorption capacity. According to Fig. 3d, the surface charge of the synthesized sorbent is negative, equal to −20.5 mV, so the cationic dyes MG, CV, and MB are adsorbed on the surface of the sorbent.

Optimization of experimental conditions for dye removal
To obtain more effective adsorption of MG, CV, and MB, the effect of the adsorption conditions was investigated and optimized. These parameters are the initial solution pH, the amount of sorbent, and the contact time.

Initial solution pH
In this procedure, adsorption experiments for MG, CV, and MB were done within the solution pH range of 3.0 to 10.0, and the removal percentage was calculated (Fig. 4a-c). The adsorption capacities of MgFe2O4-NH2-HKUST-1 for MG, CV, and MB increased as the solution pH rose to 5.5, 5, and 7, respectively; however, the adsorption capacity decreased as the pH rose further toward 10. Given the pKa values and cationic nature of these dyes at the mentioned pH, and based on the zeta potential analysis showing that the sorbent surface is negative, it can be concluded that these cationic dyes are protonated at the tested pH and are probably adsorbed electrostatically by the sorbent.
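The removal percentage and equilibrium capacity used throughout these optimizations follow the standard mass-balance relations, %R = (C0 − Ce)/C0 × 100 and qe = (C0 − Ce)V/m. A minimal Python sketch (function names and example values are ours, purely illustrative):

```python
def removal_percent(c0: float, ce: float) -> float:
    """Percentage of dye removed, from initial (c0) and equilibrium (ce) concentrations."""
    if c0 <= 0:
        raise ValueError("initial concentration must be positive")
    return (c0 - ce) / c0 * 100.0

def adsorption_capacity(c0: float, ce: float, volume_l: float, mass_g: float) -> float:
    """Equilibrium capacity qe (mg/g) for solution volume V (L) and sorbent mass m (g)."""
    return (c0 - ce) * volume_l / mass_g

# Example: 5 mL of dye solution and 10 mg of sorbent, concentrations in mg/L
print(removal_percent(10.0, 2.5))                     # 75.0 (% removed)
print(adsorption_capacity(10.0, 2.5, 0.005, 0.010))   # 3.75 (mg/g)
```

In practice C0 and Ce would come from absorbance readings via a calibration curve; the concentrations above are made up to show the arithmetic.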
Amount of sorbent
The amount of sorbent can directly affect the adsorption capacity. To obtain the optimal adsorption conditions, experiments were carried out by adding 1-12 mg, 1-10 mg, and 0.25-5 mg of sorbent to a series of 5 mL portions of 1.3 × 10−5 mol/L solutions of MG, CV, and MB, respectively. The optimum sorbent weights for the removal of MG, CV, and MB were obtained as 10 mg, 5.5 mg, and 1 mg, respectively, as shown in Fig. 4d-f.

Adsorption time
The adsorption rate is an important factor. Absorbance spectra for MG, CV, and MB were recorded over time in the presence of the sorbent. The optimal time for all three dyes was 5 min, with a high removal percentage, which is one of the excellent features of the synthesized sorbent.

Stability and reproducibility
According to the tests, the synthesized adsorbent can be used for at least 5 cycles without a significant decrease in its efficiency. To check reproducibility, adsorbents were synthesized at different times and removal experiments were performed under the mentioned optimal conditions. The reproducibility of the adsorbent for the three dyes is reported in Table 3.

Adsorption capacity and isotherm
To describe the adsorption mechanism for MG, CV, and MB, the values of 1/qe versus 1/Ce were plotted for the Langmuir isotherm, and the values of ln qe versus ln Ce were plotted for the Freundlich isotherm. Equations (2) and (3) express the linear forms of the Langmuir and Freundlich isotherms, respectively:

1/qe = 1/qmax + 1/(KL qmax Ce)  (2)

ln qe = ln KF + (1/n) ln Ce  (3)

where Ce (mg L−1) represents the equilibrium concentration of MG, CV, and MB; qe (mg g−1) is the equilibrium adsorption capacity; qmax (mg g−1) is the maximum adsorption capacity; KL (L mg−1 or L mol−1) is the Langmuir constant representing the adsorption energy; and KF and n are the Freundlich constants 29. Figure 5 and Table 4 show the Langmuir and Freundlich isotherm diagrams and results for MG, CV, and MB.
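The linearized Langmuir fit described above (1/qe against 1/Ce) recovers qmax from the intercept and KL from the slope. A hedged sketch with synthetic data — the parameter values below are illustrative, not the paper's measurements:

```python
# Linearized Langmuir: 1/qe = 1/qmax + (1/(KL*qmax)) * (1/Ce)
# so a straight-line fit of 1/qe vs 1/Ce gives intercept = 1/qmax, slope = 1/(KL*qmax).

def linear_fit(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

q_max_true, k_l_true = 100.0, 0.5          # mg/g and L/mg, chosen for the demo
ce = [0.5, 1.0, 2.0, 4.0, 8.0]             # equilibrium concentrations (mg/L)
qe = [q_max_true * k_l_true * c / (1 + k_l_true * c) for c in ce]  # Langmuir curve

slope, intercept = linear_fit([1 / c for c in ce], [1 / q for q in qe])
q_max = 1 / intercept                      # intercept = 1/qmax
k_l = 1 / (slope * q_max)                  # slope = 1/(KL*qmax)
print(round(q_max, 2), round(k_l, 3))      # recovers 100.0 and 0.5
```

The Freundlich fit proceeds identically on (ln Ce, ln qe), with intercept ln KF and slope 1/n.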
Adsorption kinetics
The adsorption rate is an important characteristic for the evaluation of sorbents. Fast adsorption onto MgFe2O4-NH2-HKUST-1 occurred in the initial adsorption stage. For example, more than 75% of the dyes were removed within 5 min when the initial MG concentration was 10−5 mol/L with 10 mg of sorbent, the CV concentration was 1.3 × 10−5 mol/L with 5.5 mg of sorbent, and the MB concentration was 1.3 × 10−5 mol/L with 1 mg of sorbent. To further analyze and calculate the kinetic parameters, the experimental data were fitted by two kinetic models, the pseudo-first-order (Eq. 4) and pseudo-second-order (Eq. 5) models 30:

ln(qe − qt) = ln qe − k1 t  (4)

t/qt = 1/(k2 qe²) + t/qe  (5)

where qt (mg g−1) and qe (mg g−1) are the amounts of MG, CV, and MB adsorbed at time t (min) and at adsorption equilibrium, respectively, and k1 (min−1) and k2 (g mg−1 min−1) are the pseudo-first-order and pseudo-second-order rate constants, respectively. The fitting results (Table 5) show that the adsorption of MG, CV, and MB onto MgFe2O4-NH2-HKUST-1 could be well described by the pseudo-second-order kinetic model.
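The pseudo-second-order fit behind Table 5 amounts to a straight line of t/qt against t, with slope 1/qe and intercept 1/(k2 qe²). A sketch with synthetic data (the capacity and rate constant below are illustrative, not the paper's values):

```python
# Pseudo-second-order (linear form): t/qt = 1/(k2*qe^2) + t/qe
# Integrated model used to generate demo data: qt = k2*qe^2*t / (1 + k2*qe*t)

def linear_fit(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

qe_true, k2_true = 50.0, 0.02              # mg/g and g mg^-1 min^-1, demo values
t = [1.0, 2.0, 5.0, 10.0, 20.0]            # minutes
qt = [k2_true * qe_true**2 * ti / (1 + k2_true * qe_true * ti) for ti in t]

slope, intercept = linear_fit(t, [ti / q for ti, q in zip(t, qt)])
qe_fit = 1 / slope                         # slope = 1/qe
k2_fit = 1 / (intercept * qe_fit**2)       # intercept = 1/(k2*qe^2)
print(round(qe_fit, 2), round(k2_fit, 4))  # recovers 50.0 and 0.02
```

With real measurements, the quality of this linear fit (R²) versus the pseudo-first-order fit is what justifies the model choice reported in Table 5.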
Adsorption thermodynamics
The thermodynamic parameters related to the adsorption, namely the standard free energy change (ΔG°, kJ mol−1), enthalpy change (ΔH°, kJ mol−1), and entropy change (ΔS°, J mol−1 K−1), were determined. These parameters can be calculated with the following equations (Eqs. 6, 7):

ΔG° = ΔH° − TΔS°  (6)

ln K = ΔS°/R − ΔH°/(RT)  (7)

where T (K) is the temperature in Kelvin, R (8.314 J mol−1 K−1) is the universal gas constant, and K (L mol−1) is the thermodynamic equilibrium constant for the adsorption process, calculated as the ratio of the equilibrium adsorption capacity (qe) to the equilibrium concentration of the solution (Ce). The thermodynamic parameters were determined as 21.116 and 18.217 kJ mol−1 for ΔH° and 75.493 and 67.856 J mol−1 K−1 for ΔS° for MG and CV, respectively. The results confirm an endothermic nature for MG and CV adsorption: as the temperature increases, the removal percentages of MG and CV increase, and the adsorption of these two dyes is a feasible and spontaneous process. For MB, the thermodynamic parameters were determined as −7102.3 J mol−1 for ΔH° and 6.05 J mol−1 K−1 for ΔS°. Unlike MG and CV, the adsorption of MB decreased with increasing temperature, and the adsorption process was exothermic.
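Spontaneity follows from the standard relation ΔG° = ΔH° − TΔS°. A minimal sketch, assuming the MG parameters above expressed in J mol−1 (function name is ours):

```python
R = 8.314  # universal gas constant, J mol^-1 K^-1

def gibbs(delta_h: float, delta_s: float, temp_k: float) -> float:
    """Standard free energy change (J/mol): dG = dH - T*dS, with dH in J/mol, dS in J/mol/K."""
    return delta_h - temp_k * delta_s

# MG: dH = 21.116 kJ/mol = 21116 J/mol, dS = 75.493 J/mol/K
dg_room = gibbs(21116.0, 75.493, 298.15)
print(dg_room < 0)   # True -> spontaneous at room temperature despite the endothermic dH
```

This also illustrates why endothermic adsorption of MG and CV becomes more favorable at higher temperature: the −TΔS° term grows with T, driving ΔG° more negative.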
The mechanism of the adsorption process
The organic dye adsorption process occurs because the particles at the adsorbent's surface are not in the same environment as the particles within the bulk. Inside MgFe2O4-NH2-HKUST-1, all of the forces acting between the particles are mutually balanced, but at the surface the particles are not surrounded by atoms or molecules of their own kind, and thus they carry unbalanced, residual attractive forces. These forces are responsible for attracting dyes to the surface. Some studies have shown that dye adsorption in MOFs is controlled by electrostatic interactions 31. Therefore, the pH of the solution is expected to affect the amount of dye adsorbed, as a result of the presence or absence of electrical charges on the dye molecules and the adsorbent surface. These dyes carry a positive charge at the mentioned optimal pH, and given the zeta potential results showing a negative surface charge on the adsorbent, the adsorption process occurs through electrostatic interaction between the cationic dye and the adsorbent surface.

Sorbent efficiency in removing other dyes
Industrial effluents and aqueous solutions are mixtures of various pollutants, including organic dyes. The selectivity of a sorbent, that is, its ability to separate a group of specific dyes from other pollutants, is important and is one of the features considered in the design of sorbents. In this study, the magnetic composite MgFe2O4-NH2-HKUST-1 can adsorb cationic dyes such as malachite green, crystal violet, and methylene blue, while two anionic dyes, methyl orange and methyl red, are not adsorbed on the sorbent surface. This feature is an advantage of the studied adsorbent. Figure 6 shows the high adsorption capacity of the sorbent for the cationic dyes, while its adsorption capacity for the two anionic pollutants is very small.
Comparison with other studies
The synthesized magnetic adsorbent was compared with some other adsorbents for the removal of MG, CV, and MB, as shown in Table 6. As can be seen, the synthesized adsorbent shows a good adsorption capacity and an extremely short time of 5 min for the adsorption process. Also, the ability to easily separate the adsorbent with an external magnet is a special feature of the synthesized adsorbent compared with non-magnetic adsorbents.

FE-SEM analysis
FE-SEM was also carried out to observe the morphology of MgFe2O4-NH2-HKUST-1. Figure 2a shows FE-SEM images of the MgFe2O4-NH2 metal oxide nanoparticles; these images show that the synthesized nanoparticles are spherical with an approximate diameter of 40 nm. Figure 2b shows the MgFe2O4-NH2-HKUST-1 nanocomposite, representing the synthesized magnetic composite framework.

Conclusion
In this research, the magnetic adsorbent MgFe2O4-NH2-HKUST-1 was synthesized as an efficient adsorbent for malachite green, crystal violet, and methylene blue. The results showed that the synthesized adsorbent has a negative surface charge that creates an electrostatic attraction between the cationic dyes and the adsorbent surface. The adsorption capacity of the magnetic adsorbent was found to be 108.69 mg g−1 for MG, 70.42 mg g−1 for CV, and 156.25 mg g−1 for MB. Malachite green, crystal violet, and methylene blue were removed by more than 75% at pH 5.5, 5, and 7, respectively, at the optimum time of 5 min with adsorbent amounts of 10, 5.5, and 1 mg. The synthesized adsorbent thus has a high potential to remove dyes within a very short contact time. In addition, thanks to its magnetic properties, the adsorbent is collected very quickly and easily from the working environment by an external magnet, which eliminates the difficult and time-consuming step of separating the solid phase from the solution by centrifugation. The adsorption process with this adsorbent is therefore efficient, fast, and economical, and the MgFe2O4-NH2-HKUST-1 composite is a promising magnetic adsorbent for environment-based processes.

Scheme 3. The process of removing organic dyes and separating the sorbent from the solution.

Figure 4. The effect of pH on the removal of (a) MG, (b) CV, and (c) MB by the sorbent in the range of 3.0-10.0, and the removal percentage of (d) MG, (e) CV, and (f) MB from the solution according to the weight of the sorbent.

Table 1. Chemical characteristics of the organic dyes studied in this research.

Table 4. Langmuir and Freundlich isotherm constants for MG, CV, and MB.

Table 5. Pseudo-first-order and pseudo-second-order kinetic model parameters.

Table 6. Comparison of the adsorption capacities of MG, CV, and MB using various MOFs obtained in the present work and reported in the literature.
‘The farm that became a great problem’: Epworth Mission Station and the manifestation of mission in crisis in post-independence Zimbabwe

The researcher sought to uncover the latent sources and nature of the mission's problems, ending by suggesting new approaches that can be used to salvage the respectability of mission in a post-colonial era. These include missional orientation and decolonisation of the African mind.

Introduction 1
Epworth Mission Station, 2 like other mission stations, was created to be an 'active centre, from which would radiate' (Carpenter 1960:192; Gondongwe 2011:53; Kollman & Smedley 2018) the light of Christian teaching to the surrounding area, and thereby model the new way of life brought about by the church. In the words of Sitshebo, mission stations were supposed to be beacons of light in the midst of darkness. Everything done on them, especially by the Africans, was to be 'Christian', so that the distinction between them and the villages was clearly apparent (Sitshebo 2000:90). These stations were created to bring about change by providing model communities. However, the current mission farm and station situation is far from that ideal; instead, the mission farm has become a place of land disputes, poverty and squalor. Banana (ed. 1991:144) has made the following telling statement about Epworth: 'there was one farm, however, that remained a great problem, that of Epworth near Harare'. Epworth speaks of everything from overcrowding and unregulated settlements to poor infrastructure, poor service delivery and a lack of social facilities. Chirisa (2010) has described Epworth as 'a complex humanitarian crisis driven by institutionalised poor governance, corruption and politics'. Epworth is one of the most challenging settlements for both the church and the state; it has the highest crime levels, as well as high prostitution and poverty levels.
Epworth has become a harbour for criminals and has seen a rise in immoral behaviour such as commercial sex work, which exposes young children to sex and alcohol abuse. As Msindo and others articulate, '[c]riminal activities appear to be a cancer in Epworth squatter settlements' (Msindo, Gutsa & Choguya 2013:178; Mbanje 2017). Scholars' attention has been drawn to this settlement for various reasons, ranging from geographical to sociological and religious ones (Butcher 1986; Gundani 2002; Tawodzera & Chigumira 2019). Butcher (1986:12), for example, identified four major critical characteristics of Epworth: land speculation, subregional pollution threat, lack of services and community facilities, and sites unsuitable for proper housing. According to research conducted by Tawodzera and Chigumira (2019), Epworth is considered one of the poorest urban areas in Zimbabwe: about 70% of its residents live in informal conditions, where access to key city-provided services such as energy, water and sanitation is limited or absent; housing structures range from self-built brick structures to shacks; and poverty is endemic. Approximately 78% of households and 82% of individuals in the area live below the poverty datum line (Tawodzera & Chigumira 2019:2). Epworth was one of the settlements defined as a squatter settlement by the Zimbabwean government in 2005, resulting in Operation Murambatsvina (Operation Restore Order) (Msindo et al. 2013:177; Chenga 2010). This operation was meant to clear squatters from around and within Harare; it affected many families and was heavily criticised by the United Nations.

1. Prof. S.T. Kgatla assisted Rev. R. Ncube with missiological formulations and argument.
2. Epworth is named after John Wesley's (the founder of Methodism) village of birth, a small town in North Lincolnshire, England.
Using qualitative inquiry, this study sought to investigate the cause of these challenges at mission stations, that is, why these stations have suddenly become a thorn in the flesh of the church. Using desk research, Methodist archives, the available literature and ethnographic methods, the authors examined the causative factors underlying the church's situation in its post-colonial existence and made recommendations. The research is inspired by David Bosch's paradigm shift theory, which challenges mission to adapt and reformulate in response to changes in society. What can be inferred from this study is that mission stations represented a western missionary approach based on Christendom, conquest and economic interest, a method that was condescending in nature and completely disregarded the culture and context of the converts. This approach lacked depth and authenticity, and as a result, instead of creating sustainable, lasting communities, it gave birth to 'experimental communities' (Sanneh 2010:222), which wilted and continued to struggle as missionaries handed over the churches to local leadership, in what was called autonomy. As Africans gained political independence and became more assertive, challenges began to manifest. Epworth thus brings together the dynamics of a mission struggling in the post-independence and post-autonomy era.

Historical background of the establishment of Epworth Mission
Epworth is a shanty town situated about 12 km to the south-east of Harare, the capital of Zimbabwe (Msindo et al. 2013:172), and currently has an estimated population of 167 462. According to Butcher (1986:11), Epworth is Harare's largest recognised informal settlement. In terms of population density, Epworth comes fifth after Mutare, Chitungwiza, Bulawayo and Harare (ZimStats 2019). Epworth was established as a Methodist mission station in 1892. It was established by Rev. Isaac Shimmin, who was accompanied by Rev.
Owen Watkins, the chair of the Transvaal District of South Africa, as part of the expansion of the Methodist church's Southern Africa mission to the north (Gondongwe 2011:45). The Methodist mission in South Africa had been established much earlier, having reached the Cape in 1795, four years after the death of the founder of Methodism, John Wesley. The mission was formally introduced by Rev. Barnabas Shaw in 1816, and in 1889 the Methodist church in South Africa became independent from the British conference. Rev. Isaac Shimmin was a little-known cleric, but a very popular young man with Rhodes and his soldiers (Thorpe 1951:39). Shimmin had successfully negotiated an offer of support from Rhodes, who granted him enough land for mission work and a grant (Zvobgo 1991:18). The establishment of the Epworth Mission was not an isolated event (Mazarire 2007:2; Ndile 2018:51); it formed part of an influx of missionary activity after the signing of the Rudd Concession by Lobengula with an untidy cross in 1888, the entry of the Pioneer Column on 12 September 1890 and the raising of the Union Jack in Salisbury on 13 September 1890 (Graaf 1988:14; Thorpe 1951:32). It represented a collaboration of the crown and the cross after the Berlin Conference of 1884/1885, which conveniently partitioned Africa for European interests (Goto 1994:14; Njoku 2005:220; Nkomazana & Setume 2016:29). Epworth was the second mission station to be established by Methodists in Zimbabwe; others included Fort Salisbury (1891), Hartleyton (1891), Nenguwo (1892) and Kwenda (1892) (ed. Banana 1991). It was a mission with much promise, scoring immediate successes in terms of enrolment of children in school and church attendance. Because of a number of challenges and problems that developed after independence, a large part of Epworth was handed over to the government in 1981.
Epworth farm is naturally scenic, housing the famous balancing rocks, a popular tourist attraction (Vumbunu & Manyanhaire 2010:244). Geographically, Epworth is a combination of three plots acquired by the Methodist Church between 1891 and 1908: Epworth, Glenwood and Adelaide. Epworth is a grant 3 measuring 2520 acres given to Rev. Owen Watkins and Rev. Isaac Shimmin by Cecil John Rhodes on their arrival, for the purpose of their mission work (Thorpe 1951:44). Glenwood (purchased in 1904) and Adelaide (purchased in 1908) were bought later by Rev. John White, as the mission work at Epworth was growing and the people multiplying such that Epworth alone no longer offered enough space. Glenwood was bought by the Methodist Church through a loan paid off by the rentals of tenants. From these three plots, two major villages were created over the years: Chiremba (which consists of the Muguta and Makomo families) and Chizungu (consisting of the Chinamano and Zinyengere families) (Chitekwe-Biti et al. 2012:132). Because of expansion, Epworth is now made up of nine villages: Chiremba, Chinamano Extension, Zinyengere Extension, Chizungu, Jacha, Overspill, Magada, Makomo and Domboramwari. These altogether constitute seven administrative wards. There are seven primary schools and two secondary schools, all of which are run by the community and the local board. There are only three clinics serving the entire population, one of which still falls under Methodist administration. The entire establishment now falls under the Epworth Local Board, a body established in 1986 to oversee the development of Epworth (Chirisa 2010:42).

3. Cecil Rhodes was one of the leading figures in British imperialism at the end of the 19th century, pushing the empire to seize control over vast areas of southern Africa. He annexed Southern Rhodesia in the 1890s.
The extent and complexity of the problems at Epworth became insurmountable after Zimbabwean independence, and the church's capacity to deal with these challenges was tested. The Methodist Church in 1981 handed over much of the Epworth land to the government and remained with a small piece of land called Lot 2. The lot houses the church, the minister's residence, a clinic, a women's centre, a children's home and a theological college. The primary school is in the hands of the community, whilst much of the land close to the United Theological College and the Matthew Rusike Children's Home has been invaded by squatters. The church has been struggling to evict land invaders for a long time, and instead of the problem abating, it is increasing, with much of the land now fully occupied. Recently, the conference commissioned a team to look into the future prospects of mission land in the context of these issues; part of their findings and recommendations will be discussed at a later stage. The story of Epworth brings together the various aspects of the manner in which mission crises have manifested. It is an exemplification of a mission in crisis. It represents the challenges of a church in mission in a post-colonial context and the question of whether the church is able to respond appropriately to the new and emergent challenges or whether it is stuck in the privilege of the colonial and missionary era. Whilst the missionaries had the support of their mother countries and colonial governments, churches in the post-independence era have no such luxuries. The terrain has changed, with growing poverty, rural-urban migration and competing perceptions about land and authority, whilst the colonial church has retreated.

Epworth developments post-independence
The challenges at the Epworth Mission Station represent a fourfold manifestation of mission in crisis in a post-colonial and post-missionary setting.
The first is the church's lack of capacity to handle emerging issues and problems at Epworth; the second is the growing overcrowding, landlessness and indiscriminate illegal sale of land; the third is a crisis of development, with health and other social ills emerging, including poor water and sanitation, crime and prostitution; and the fourth is a crisis of identity amongst the people of Epworth in the post-independence and post-missionary era.

Lack of capacity of the church to handle the issues and problems at Epworth Mission Station
What was intended to be a model community espousing a Christian standard of community and values has deteriorated into an urban menace (Msindo et al. 2013:172; Zindoga & Kawadza 2014). The Methodist Church in Zimbabwe, after independence in 1980, struggled to handle a plethora of challenges at the Epworth Mission Station, culminating in the church handing over the greater part of the mission land to the government and remaining with a small portion called Lot 2 and the institutions, namely the mission station (church, manse and primary school), the Matthew Rusike Children's Home and the United Theological College. The problems at Epworth, however, were not new to the period after autonomy and independence; they have their roots in the colonial period and only manifested after the war (1980). Contestation over tenure has always been there. As early as the inception of the Land Apportionment Act of 1930 (Gundani 2019:4), the church grappled with issues of tenancy at Epworth, especially security of tenure, what could be done on the land and the problem of inheritance. After the Zimbabwean war, the problems crystallised further.

Events leading to the donation (surrender) of the land to the new government
The Methodist Church in Zimbabwe Conference in August 1982 voted to hand over a part of the Epworth land to the government (MCA 1982).
This donation of Epworth land to the government must be seen in the context of the wider crisis, which revolved around the sustainability of the mission station in its original form. As shown by Mujinga (2018:291), in 1998 the Methodist Church in Zimbabwe Conference again voted to cede part of Kwenda Mission (another mission station of the Methodist church in Zimbabwe) to the government. The resolution to donate (surrender) Epworth's land to the government was taken by conference through a recommendation from the Standing Committee; the proposer was the government, which made the suggestion to the church. The church was battling the squatter problem and the crime that was brewing in Epworth owing to the exponential, unplanned population growth there after the war. Epworth's population grew during the war period, as people from the rural areas escaping the war found refuge in Epworth, which was less regulated than the urban townships. Out of hospitality, the church allowed its tenants to welcome visitors fleeing the war. The influx of people continued unabated after the war, with more people searching for a better life in the city, and many tenants began to sell parts of their land as these people sought some permanent shelter. Because of the increase in unregulated settlement, not only did the administration of the mission become extremely difficult, but so did the provision of social amenities. Crime and lawlessness also became prevalent. This meant that the church had to seek political help in easing the pressure and reducing crime. The government suggested that the church donate the land. A draft of proposals was made by the Ministry of Local Government and National Housing on 06 October 1981. The government undertook to prepare a plan for the improvement of basic services through a squatter upgrading programme, by improving water and sanitation infrastructure and protecting the land rights of original tenants.
In addition, the government would then seek donor support towards the improvement of the Epworth farm (MCA 1981). The government cited the rapidly deteriorating public health, lack of infrastructure and crime as reasons. The government also acknowledged responsibility for the occupiers of the land and characterised the challenges as urban housing problems (MCA 1981). The conference in 1982 voted on the resolution to hand over part of Epworth to the government, with the following result: 55 for, 7 against and 12 neutral. The votes show that a quarter of the members of conference did not agree, demonstrating that there was no unanimity in arriving at this decision. Furthermore, there seems to have been considerable pressure on the church to hand over the land rights as quickly as possible, judging by the chronology of the events. Either it came from those remaining missionaries who would have been keen to please the government at any cost to improve their image, or it was political pressure from the new government, which stood to gain significant mileage from being seen to act on human plight. There was plenty of support for such a trajectory, considering that the new president was a Methodist cleric who would be seen to bring the church and government into a closer working relationship towards the betterment of the people's lives. Why would a steering committee meet and make decisions even before the tenants had met? The farm was donated without any payment. The government used the Deeds Registries Act Chapter 139 to revest the deeds in itself, something which was done in 1994. The government undertook a programme to develop the area for eventual incorporation into Harare City, and directed a freeze on all non-permitted developments (Chirisa 2010; Chitekwe-Biti et al. 2012:132). The Epworth Local Board was created in 1986 to run the affairs of Epworth.
However, as Mhanda (2018) observes, not much has been achieved: these boards have limited capacities in terms of funding models, and much of the decision-making power is concentrated in the central government. The transfer of land to the government, however, did not ease the problems at Epworth; the problems actually increased. Challenges in the donated sections could not be isolated. The people who live on the land donated to the government remained in the Methodist fold, as practising Methodists and/or as descendants of Methodists, and the land they occupied lay in close proximity.

The growing overcrowding and landlessness, and the indiscriminate illegal sale of land
Epworth has become a harbour for sporadic squatter settlements in Zimbabwe, with an impact on the local board's and the church's ability to provide necessary and adequate services and thereby improve the quality of life. The dynamics move faster than the authorities' ability to react appropriately. The Government of Zimbabwe in 2005 embarked on an operation to restore order, arguing that it sought to deal with crime, squalor and lawlessness, and to rebuild and reorganise urban settlements and small and medium enterprises (SMEs). '... It was a follow-up to the anti-corruption drive started by the government in early 2004 to cleanse the financial services economy, which had become the centre of speculative activities …' (Sachikonye 2006:16). When the government took over Epworth in 1982, it directed a freeze on all new settlements so that it could focus on those already settled. But a combination of factors militated against the government directive, and instead of abating, more settlements emerged. Msindo et al. (2013) made general observations pertaining to this phenomenon: the unprecedented, exponential rural-urban migration after the war, family growth, the need for accommodation and the slow allocation processes for stands led to more and more settlements sprouting.
Epworth had a number of advantages: besides being a cheaper place to rent, it was strategically positioned with regard to Harare's biggest market, Mbare, and to industry. More people entered the cities in search of better opportunities, and Epworth was a better-priced destination (Chirisa 2010). The increase in population at Epworth affected the capacity of both the land donated to the government and that which remained in the hands of the church. As Mhanda (2013:92) observes, the lack of resources had a negative impact on the local board's ability to deliver its mandate. The council could not service enough stands to meet demand, and where it managed to, the cost was beyond the reach of most Epworth dwellers, who were largely either self-employed or lowly paid. The local board made several attempts to control unauthorised development and to evict all those in illegal settlements on the land, without success (Chitekwe-Biti et al. 2012:133). Msindo et al. (2013:176) show that political meddling at election time and corruption had a significant effect in undermining the local board's and the church's efforts to manage the land. The uncontrolled influx and settlement of people at Epworth meant that the available adjacent land belonging to the church could not remain immune to land grabs. Inevitably, the land has now all been invaded. Several reasons account for why the church's land at Epworth has been taken over by land-hungry people and parcelled out, despite the government being pro-church as far as land acquisition is concerned (Chitando 2005; Gundani 2003). Firstly, the church did not immediately use the land that it retained, owing to its limited capacity in a post-missionary era. Although the church had plans for projects, it took long to implement them.
Secondly, the church itself had serious gaps in the administration of its land at Epworth, as evidenced in a report by a commission set up to investigate a demonstration by Epworth residents who were accusing the church, especially its Bishop, Rev. F. Chirisa, of abusing land for his own personal gain. The findings of the report indicated that there was no clarity at the lower echelons as to who had the right to sell or distribute land, with instances where the local minister would give rights to people who wanted to settle on the land (Report of the Commission of Inquiry into the Epworth Demonstrations, MCA 1996). Thirdly, there is a lack of political will to remove the unregulated settlers from the church's land. The church has tried several times to evict the illegal settlers, but has failed. There is always a promise by government to assist, but both legal notices of eviction and political pressure have ended in smoke. The encroachment of illegal settlers is affecting the church's ability to utilise its own land, and it further threatens already existing institutions like the Theological College, the Matthew Rusike Children's Home, the clinic and the mission house itself. The recent farms' report commissioned by the conference found that expansion of the church's institutions is now impossible until there is a way of clearing the squatters.

Crisis of development
A third crisis presented by the Epworth Mission scenario is the crisis of development. Development in Epworth is lagging behind in all its facets. Infrastructure, housing, provision of social services, water and sanitation are way below expectation (Kadirire 2017). In a community with a population of over 168 000 (ZimStats 2019), it is ironic that there are only seven primary schools, two secondary schools and three clinics, of which only one is a Methodist clinic (Slum Dwellers International 2009). 
Epworth lacks proper housing: most of the houses do not meet standards, as they were never planned or supervised; some lie in wetlands and often flood during the rainy season. The limitations of the Epworth Local Board militate against its ability to plan, service and supervise proper housing (Chirisa 2010:12). Poverty in Epworth has many dimensions, ranging from limited income levels owing to the capacity of its residents, largely because of the kinds of skills and jobs they undertake (Chirisa 2010). The church had plans for the few original tenants, and the institutions and facilities were able to cater to the few residents. The church never anticipated the illegal influx of people, and never planned for the post-war pressure on the land. The influx of people during the 1970-1980 war of liberation overwhelmed the capacity of the church, as most of those who settled illegally never had a say in planning. The residents, therefore, became informal, and their jobs were mostly not high paying, making matters worse. The informal nature of the settlement affected land tenure and residents' ability to attract funding or investment. Most settlements in Epworth have not been regularised, and therefore residents do not have title to the land on which they are settled. Whilst it is true that the current scenario is a product of the colonial legacy, it is also true that the post-missionary church has not been able to grasp the extent of the needs of Epworth or to anticipate them. Colonial- and missionary-era development was based on a colonial ideology focused on separate, managed development guided and controlled by a select few people (Kgatla 2016:122). The onset of independence changed the dynamics. Selective development was no longer possible, as the majority poor now demanded universal development. At the root of the Epworth misery is, therefore, a legacy of colonialism and the limited power of missionaries and the church to arrest it. 
The United Nations Development Programme's (UNDP) Human Development Report (2002) describes human development in the following words: Fundamental to enlarging human choices is building human capacities: the range of things that people can do or be. The most basic capacities for human development are leading a long and healthy life, being educated, having access to the resources needed for a decent standard of living and being able to participate in the life of one's community. As this Report emphasizes, assuring people's dignity also requires that they be free, and able, to participate in the formation and stewardship of the rules and institutions that govern them. (p. 13) Amartya Sen (1999:14), in his seminal contribution to development discourse, argues that development is freedom. He suggests that poverty, properly defined, should not be seen in terms of income levels only, but as a deprivation of capabilities. What he calls 'unfreedoms' are a result of inadequate processes and inadequate opportunities. Unfreedom can arise either through inadequate processes or through inadequate opportunities that some people have for achieving what they minimally would like to achieve (Sen 1999:17). He says, 'development has to be more concerned with enhancing the lives people lead and the freedoms they enjoy' (Sen 1999:14). This kind of development is what has been lacking in mission stations and has given rise to growing poverty. Epworth as a mission station has not been able to provide the benchmark for human development because of a culture and an era that deprived the African resident of the necessary involvement in the processes and opportunities for development. Tarus and Lowery (2017:305) define identity as 'both individual and personal traits as well as social aspects acquired from the groups one belongs to'. In an African society, a person belongs to a community (Battle 2000; Menkiti 2001) of the past, the present and the future. 
An individual is part of a larger collective which includes the dead and those yet to be born. Identity in this sense incorporates all that makes up a person and his or her community. Deng (1997) states that African societies functioned through an elaborate system based on the family, the lineage, the clan, the tribe and ultimately a confederation of groups with ethnic, cultural and linguistic characteristics in common. He further states that society is backed by 'values, institutions, and patterns of behaviour, a composite whole representing a people's historical experience, aspirations, and world view' (Deng 1997:1). The people of Epworth existed before Epworth was formed, and who they were was thus defined.

Crisis of identity
Before the onset of the missionary era, Epworth belonged to this larger community, ruled by chiefs and connected by family lineage and by the customs and values that constituted their sense of being. In the course of the colonial and missionary era, Epworth assumed a new identity as a mission community, created for people who had become Christians. The old structures, norms and value systems were uprooted, and new ones centred on the church and the missionary emerged (Madhiba 2010:59). For over 90 years, this had made the new society. When the missionaries departed, and when, in 1982, the Methodist Church handed over Glenwood and Adelaide to the government, the entire community was thrown into disarray. What had become a bona fide identity was dismantled without negotiation or the offer of a new form of identity. With the challenges enumerated above, particularly after independence, Epworth people found themselves struggling to understand who they were. Court cases and commissions of inquiry show clearly a community struggling to redefine its identity. The following commission of inquiry and two court cases will be discussed to expound on the nature of the identity crisis. In 1996, there were several demonstrations at Epworth. 
These were the culmination of a number of concerns and disgruntlements ever since the church had donated part of the land it previously held to government. Even though the church had donated the land, residents still identified themselves as part of the church, and the church as part of them; the relationships seemed not to have been severed by the donation. There are reasons to acknowledge that Epworth residents were united by their history, the church and their graves. For the past century, they had been identified as believers under the church; they knew no other identity. Between 1982 and 1996, they were now to assume a new identity under the Epworth Local Board, a scenario undefined. Yet they still congregated at church on Sundays and buried their dead at the church cemetery. It must be highlighted that most of these processes had not involved wide consultation. The creation of Epworth had occurred without consultation, and the dismantling was now being done without consultation as well. This recalls Lamin Sanneh's lament that converts were 'dislodged' from their cultural system and that missionaries were deaf to local voices in assembling 'experimental communities'. He bemoans the fact that Christianity dispossessed Africans of their natural ties without giving them a real stake in missionary culture (Sanneh 2010:222). The handing over of the mission is an example of an experimental community being dismantled. Amongst the series of demonstrations, the one held on 16 July 1996 caught the church's attention, leading to a commission of inquiry being instituted by the then presiding Bishop of the Methodist Church in Zimbabwe, Rev. Farai J. Chirisa (MCA 1996). The commission sought, amongst other objectives, to identify the problem, to collect and assess evidence and to recommend solutions to the standing committee (MCA 1996). The demonstrators had demanded the removal of Rev. 
Farai Chirisa as the bishop, accusing him, amongst other things, of carrying out projects without consulting the community and of wanting to build flats at Epworth and bring people to squat in the area; above all, they claimed that their ancestors had bought the land through their rents. The commissioners, upon investigation, found many of the accusations to be false. However, they observed that the central problems at Epworth were the ownership of the land and communication. They went on to show that the High Court had settled the issue of ownership in the 3 July 1996 case of Chiremba Residents Association and Epworth Local Board. What is clear from this analysis is, firstly, that the Epworth residents did not understand who owned the land and what their role was. They still believed that they and the church had a relationship and that the church had to continue consulting them on issues pertaining to the land. The relationship was, however, confusing: they expected the church to account to them on land that belonged to the church, that is, Lot 2. Secondly, the Epworth residents did not understand their new place under the Epworth Local Board. They had limited understanding of leases and of what rents meant; they still believed in traditional tenure, where one was entitled to land on account of genealogy. They challenged the church on the claim that their forefathers had contributed to the purchase of Epworth. The second indicator of the crisis is the court cases that Epworth residents lodged against the church and the local board, claiming rights to the land at Epworth, both the section that had been donated to the government and the remaining Lot 2. From the commission's findings, it is clear that residents in some sense believed that they had a right to the land owned by the church, and this led to continued invasion despite appeals and courts directing otherwise. 
In July 1996, the residents had approached the courts challenging the Epworth Local Board's jurisdiction over the land that used to belong to the church. The residents argued that their forefathers had paid for the land through rent, and that they had title to the land, which the church had taken to England (MCA 1996). The court ruled against the residents, citing lack of evidence for the ownership the original residents claimed on account of the rents they had paid. The court also dismissed the title claim, confirming that the Methodist Church had held the title, which it had voluntarily ceded to the government. Three years later, a further case was lodged at the High Court, now pitting the church against Mr. Solani and residents of Epworth, in which Mr. Solani had begun to erect buildings on the church's land (MCA 1999). This was the beginning of further encroachment on the church land that had remained in the church's hands. The disagreements and the battles did not stop, as evidenced by media attention in the following headlines: 'Epworth the forgotten "suburb"' (Zindoga & Kawadza 2014) and 'Epworth land wrangle hots up' (The Zimbabwean 2014). Demonstrations and court battles continue to this day. These conflicts and court appearances were, on the one hand, attempts to salvage some identity and a right to self-determination out of a murky, unpredictable terrain; they all, however, proved illusory. The church, on the other hand, failed to manage the pressure, and encroachments continued. Currently, all land that belonged to the church is occupied by illegal tenants, some of it on land not suitable for settlement. There are no facilities and services; it is a health risk. Epworth has struggled to define itself since independence. It has been characterised by conflict as it sought to redefine itself as a community. 
Epworth is a pale shadow of its former self. It used to be the mission area built up in hope at its inception, of which Isaac Shimmin had said, 'what Kilnerton is to Pretoria, Epworth will be to Harare'. Today, Epworth is a symbol of the struggle of a failed suburb, harbouring crime and prostitution, an example of the worst kind of squatter area. Some of the commission's findings included the following: inconsistency in policy implementation and weak enforcement; antiquated rules and regulations which are difficult to enforce; high leadership turnover, which is not compatible with project implementation, thus creating a disconnect in the project cycle; and the church's sensitivity in not asserting its legitimate authority and resorting to empty threats, thus widening the rift with farm tenants. On land tenure, it was noted that there is a general feeling of entitlement over church mission land; that the tenants and their children have concerns regarding their future on the church farms, arising from traditional expectations of land inheritance; and that lack of resources has made it difficult for the church to implement land use programmes and farm development activities (Farms Commission Report 2018:27). Amongst the recommendations, the critical ones related to the church's policy on Christian villages. The commission observed that the concept of Methodist Christian villages was no longer sustainable and does not serve any functional and practical purpose given modern multi-religious, multi-denominational settlement trends (Farms Commission Report 2018:27). It advocated a redefinition of this concept in light of the changing role and mission of the church. The church is coming to terms with the challenges it is facing because of the missionary paradigm and is making attempts at addressing them. This is also an invitation to a redefinition of mission in the current and future context. 
Reimagining mission for the future
David Bosch has argued for a paradigm shift in the conceptualisation and practice of mission (Bosch 1991:4). By this, he argues that as society changes, an equivalent change must happen in the understanding of mission and how it is practised. Methodist Church mission in Zimbabwe was based on the mission station method (Thorpe 1951:81; Zvobgo 1991:26), a phenomenon adopted by missionaries on the mission field without proper missiological reflection on its implications and ramifications for the future. As may have been observed, mission in the 19th century was an overflow of European Christianity, culture and civilisation. Mission emerged in the glory of Christendom, where the church occupied the centre of the community and commanded authority and respect (Sales 1972:24). Mission imposed a western cultural blanket on Africa without reflection. It was condescending, rode on conquest and ignored the value of African knowledge systems and worldviews. As a result, it destroyed and dislocated communities rather than building them. Missionaries were, therefore, complicit in the colonisation of the minds of Africans. The new challenges are a cry for redress and a redefinition of mission in a post-colonial, post-missionary paradigm. They are a call for authentic mission approaches that address the past and current context holistically, whilst building sustainable ground for future mission. A relevant missional ecclesiology must define mission and locate it appropriately, and address the issues of sustainability, identity and poverty at a practical and structural level. 'Missional' is a relatively new term in the field of missiology, only appearing in the middle of the 20th century through the work of Bishop Lesslie Newbigin and later blossoming through the Gospel and Our Culture Movement in the United States of America (Goheen 2002:367; Guder 1998:9). 
It represents a departure from the old term 'missionary', which has increasingly become associated with colonialism, subjugation and exploitation of people of colour by the west, and which carries the baggage of guilt on the part of missionaries from the west (Bosch 1991:3; Goto 2006:57). Missional thinking seeks to be true to cultural sensitivity and is dynamic, responding relevantly to a given paradigm, as opposed to the missionary approach, which was static, rigid, paternalistic and condescending. The term 'missional' signifies a departure from us-them and North-South to a sense of glocality (Henry 2016:6; Fagerli et al. 2012:27), a balance between local and global orientation. It unearths the biblical imperative of witnessing (Ac 1:6-8): beginning in Jerusalem, to all Judea and Samaria, and to the ends of the earth. Missional ecclesiology is grounded in missio Dei, an affirmation that God is the sender and authority in mission. God the Trinity is in a continuous relationship of sending: through the Son and the Holy Spirit, God has not only descended but continues to reach out to the margins, to the ends of the earth. This places mission where it belongs, at the margins. The church is placed in a perichoretic relationship with the triune God (Niemandt 2012:2) that is dynamic, accommodative and adaptive to a changing context. It is biblical, and in the Methodist context, missional ecclesiology relates well with Wesleyan mission spirituality. Sheridan and Hendriks (2013:7) point out that understanding the relationships between gospel, church and culture is of primary significance; they view a missional shift as the process of learning to 'break the code' of the local cultural context in which the church finds itself. Missional thinking, therefore, has to do with a contextual understanding of mission, mission as from the margins and to the margins. 
It is an affirmation of God as the author; the church and its missionaries respond to his guiding Spirit and participate in the mission of the triune God in humility. It is, according to Bosch (1991:423), an affirmation that God has turned towards the world, towards the poor and the marginalised, in an incarnational way. A Zimbabwean missional ecclesiology should be rooted in the unique and fertile African social, cultural and religious worldview. Pobee and Ositelu (1998:9) reiterate that sociologists and anthropologists have demonstrated that Homo africanus is homo religiosus radicaliter: the African is a radically religious person, religious at the core of his or her being. Mission practice must therefore begin with a paradigm shift that recognises the value of the African social and cultural worldview and leverages it, building resilient and sustainable communities. Missional ecclesiology and practice address the challenges of poverty, deprivation and vulnerability in Zimbabwe holistically. They go beyond missioning to engaging in issues of development, whether in advocacy or in practical ways. The story of Epworth represents poverty and deprivation. When the church leaves communities, they are left at the mercy of systems of government that are equally unable to help. Banana (1987:2) advocates a relevant approach which addresses present realities rather than a 'pie in the sky' approach. Similarly, Kaulemu (2010:52) suggests that churches must strengthen the capacities of communities to contribute towards the improvement of the social conditions that facilitate the human development potential of every Zimbabwean. Decolonisation should be central to the doing of mission; it involves the recovery of African confidence and the reaffirmation of values, culture and worldview. 
Missionally, the decolonisation of the mind should be seen as the affirmation of the black African experience of God: the Africans' response through their own cultural and traditional experiences. It has to do with identification with the poor (Wa Said 1971:520). Decolonisation has implications for an African theological and contextual orientation, which must pervade both theological institutions and churches, and shape how these engage their local contexts. The challenges of mission stations such as Epworth raise issues of African realities, which theology needs to appreciate and address. Epworth Mission can be seen as a crisis or a problem, but on the other side, it awakens current missional practitioners to the limitations of a colonial mission in addressing African realities. A decolonised mission must affirm the African struggle for recognition, and fight for economic and political justice.

Conclusion
The study has highlighted the origins of the current challenges of the mission station. It notes that the problems in the mission station, particularly at Epworth, have their antecedents in the colonial and missionary era. The challenges are rooted both in the systems and structures imposed and extensively used by the colonial administration, and in the close proximity between the colonial administration and the missionaries, and therefore the church. The colonial government and missionaries, according to Gundani (2019:15), believed that Africans were backward and lacked the capacity to self-determine. This led to the modelling of mission stations that proved unsustainable as times changed, leading to crises such as that at the Epworth Mission Station. The church lacked the capacity to address the needs and challenges after independence, which resulted in the abandonment of much of the farm and its people. The study concluded by highlighting new trends in mission which are grounded in missional thinking and ecclesiology, with a strong sense of decolonisation of the mind. 
These need to be adopted if the church is to retain relevance in a post-colonial context. This is what David Bosch suggests as a paradigm shift in mission theology: a turn towards the margins and the poor.
Reduced Neurosteroid Exposure Following Preterm Birth and Its Contribution to Neurological Impairment: A Novel Avenue for Preventative Therapies

Children born preterm are at an increased risk of developing cognitive problems and neuro-behavioral disorders such as attention deficit hyperactivity disorder (ADHD) and anxiety. Whilst neonates born at all gestational ages, even at term, can experience poor cognitive outcomes due to birth complications such as birth asphyxia, it is becoming widely known that children born preterm in particular are at significant risk for learning difficulties, with an increased utilization of special education resources when compared to their healthy term-born peers. Additionally, those born preterm have evidence of altered cerebral myelination, with reductions in white matter volumes of the frontal cortex, hippocampus and cerebellum evident on magnetic resonance imaging (MRI). This disruption to myelination may underlie some of the pathophysiology of preterm-associated brain injury. Compared to a fetus of the same post-conceptional age, the preterm newborn loses access to in utero factors that support and promote healthy brain development. Furthermore, the preterm ex utero environment is hostile to the developing brain, with a myriad of environmental, biochemical and excitotoxic stressors. Allopregnanolone is a key neuroprotective fetal neurosteroid which has promyelinating effects in the developing brain. Preterm birth leads to an abrupt loss of the protective effects of allopregnanolone, with a dramatic drop in allopregnanolone concentrations in the preterm neonatal brain compared to the fetal brain. This occurs in conjunction with reduced myelination of the hippocampus, subcortical white matter and cerebellum; thus, damage to neurons, astrocytes and especially oligodendrocytes of the developing nervous system can occur in the vulnerable developmental window prior to term as a consequence of reduced allopregnanolone. 
In an effort to prevent preterm-associated brain injury a number of therapies have been considered, but to date, other than antenatal magnesium sulfate and corticosteroid therapy, none have become part of standard clinical care for vulnerable infants. Therefore, there remains an urgent need for improved therapeutic options to prevent brain injury in preterm neonates. The actions of the placentally derived neurosteroid allopregnanolone on GABAA receptor signaling have a major role in late gestation neurodevelopment. The early loss of this intrauterine neurotrophic support following preterm birth may be pivotal to the development of neurodevelopmental morbidity. Thus, restoring the in utero neurosteroid environment for preterm neonates may represent a new and clinically feasible treatment option for promoting better trajectories of myelination and brain development, and therefore reducing neurodevelopmental disorders in children born preterm.

INTRODUCTION
Preterm birth is the leading cause of death and neurodevelopmental-related disability in early life (Goldenberg et al., 2008). In resource-rich nations such as Australia, moderate-late preterm birth specifically now accounts for ∼80% of all preterm births (Cheong and Doyle, 2012; Frey and Klebanoff, 2016). These neonates have a high survival rate and a low incidence of gross neuroanatomical damage on routine clinical imaging; however, there is increasing evidence of microcystic white matter injury when assessed using MRI. 
Even amongst those infants who appear well at the time of hospital discharge and are free of gross neuroanatomical lesions, there remains a high burden of later cognitive difficulties and neurodevelopmental disorders such as anxiety and attention deficit hyperactivity disorder (ADHD) (Ananth and Vintzileos, 2006; Chyi et al., 2008; Moster et al., 2008; Petrini et al., 2009; Loe et al., 2011; Baron et al., 2012; Cheong and Doyle, 2012; Potijk et al., 2012). The long-term individual, familial and socio-economic burden of these issues remains profound; with rates of preterm birth at around 10%, and with increasing numbers of children surviving, there is an urgent need to develop novel therapeutic options to mitigate, or prevent, the ongoing neurological burden of preterm birth. Myelination of white matter tracts continues throughout late gestation and following birth in areas such as the hippocampus and cerebellum: reductions in the volumes and functionality of these brain regions are evident in children that were born preterm (Rivkin, 1997; Rees and Inder, 2005; Rees et al., 2008; Volpe, 2008). In particular, myelination by mature oligodendrocytes is ongoing throughout this late gestation stage and is vulnerable to insults and excitotoxic damage associated with early exposure to the ex utero environment (Arnold and Trojanowski, 1996; Back et al., 2002; Matsusue et al., 2014). In utero, the fetal neurosteroid allopregnanolone is responsible for protection from neurological insults, modulating fetal behavior leading to the onset of a 'sleep-like' state, and promoting myelination through its action on the inhibitory GABAA receptors of the central nervous system (CNS) (Nicol et al., 1998; Nguyen et al., 2003; Herd et al., 2007). Importantly, recent studies suggest that behavioral and cognitive outcomes are tightly linked with gestational age. 
Any decrement in gestation, even across 'early term' (37-38 weeks' gestation), is associated with, on a population basis, impaired cognitive and developmental outcomes compared to outcomes found in children born at full term (39-40 weeks' gestational age). Birth is necessarily associated with the separation of the fetus from the placental-maternal unit, and therefore from any trophic factors derived from either mother or placenta. Preterm birth results in the premature loss of placentally supplied allopregnanolone during a period when it is critical for optimal neurodevelopment (Kelleher et al., 2013). Whilst neurosteroid therapy has been evaluated for the treatment of traumatic brain injury (TBI) and epilepsy (Nohria and Giller, 2007; Wright et al., 2007; Xiao et al., 2008; Reddy and Rogawski, 2012), therapeutic use of neurosteroids following preterm birth requires further evaluation.

NEUROLOGICAL OUTCOMES OF PRETERM BIRTH
Despite comprising only 10% of births, preterm birth is the leading cause of death and neurodevelopmental-related disability in neonates, accounting for up to 50% of neonatal deaths (Simmons et al., 2010). Furthermore, the ongoing morbidity risks of preterm birth remain unacceptably high, with up to 50% of survivors developing some form of long-term neurodevelopmental disability (Mathews et al., 2002; Ananth and Vintzileos, 2006). Cerebral white matter injury in the preterm infant varies based on gestational age at the time of birth. Historically, injury following early preterm birth was characterized by intraventricular hemorrhage and/or periventricular leukomalacia (PVL) (Volpe, 2001, 2009). In survivors of early preterm birth weighing <1,500 g, approximately 10% develop cerebral palsy as a result of these gross insults and necrosis (Volpe, 2003). 
However, with improvements in perinatal care these gross structural lesions are now far less common, whereas diffuse white matter injury (DWMI), demonstrable on MRI but not on routine screening cranial ultrasound, is increasingly recognized as the key contributor to the pathophysiology of preterm-associated brain injury. It is now established that impaired cognition, sensory and psychological functioning in children born preterm is associated with DWMI (van Tilborg et al., 2016). Furthermore, DWMI is a recognized risk factor for the development of neurobehavioral disorders such as autism-spectrum disorders and ADHD (van Tilborg et al., 2016). The underlying pathophysiological mechanisms of DWMI are poorly understood but are suggested to involve immature oligodendrocyte arrest resulting in impaired myelination. In infants that were born <32 weeks' gestation, it has recently been shown that reductions in white matter volume in areas such as the fornix and the cingulum observed by MRI at the time of birth remained present until 19 years of age and were associated with impairments in memory functions (Caldinelli et al., 2017). The Stockholm Neonatal Project has also recently published the results of a longitudinal trial following infants born <36 weeks' gestation up until 18 years of age, when they undertook psychological assessment including measures of general intelligence and executive functioning. Significantly poorer outcomes were observed for preterm children in areas such as IQ, attention, working memory, and cognitive flexibility (Vollmer et al., 2017). Most importantly, however, the executive functioning deficits did not correlate with reductions in white matter or gray matter volumes evident on MRI following birth; rather, the microstructure of white matter tracts was altered at adolescence.
Thus, this study found that following preterm birth, and in the absence of obvious perinatal brain injury, the alterations observed in white matter microstructure during adolescence correlate with executive function and general cognitive abilities. Furthermore, it suggested that disruption to neural pathways, as opposed to reductions in brain volume, is involved in the impairment of neurodevelopment following preterm birth. In addition to established preterm birth related disorders, such as cerebral palsy, there is now a growing body of evidence suggesting that preterm infants from moderate-late preterm pregnancies are much more likely to develop neurodevelopmental morbidities and learning disorders that become apparent at school age, with anxiety and ADHD being the most commonly diagnosed (Linnet et al., 2006;Chyi et al., 2008;Moster et al., 2008;Petrini et al., 2009;Lindstrom et al., 2011;Loe et al., 2011;Baron et al., 2012;Cheong and Doyle, 2012;Potijk et al., 2012;Berry et al., 2018). Attention deficit hyperactivity disorder is characterized by deficits in behavioral inhibition, inattention, impulsivity and social difficulties, and in a Norwegian cohort of preterm/low birth weight children it was more commonly diagnosed in males at 5 years old (Elgen et al., 2014). In the same cohort, the females were more likely to be diagnosed with anxiety (Elgen et al., 2014), highlighting that the behavioral outcomes of preterm birth occur in a sex-dependent manner. In a large Danish cohort, children born at 34-36 weeks' gestation (moderate-late preterm range) had an 80% increased risk of being diagnosed with ADHD compared to children born after 37 weeks' gestation; a larger percentage of these were also male (Linnet et al., 2006).
Furthermore, in a Swedish cohort, the amount of ADHD medication purchased for ex-premature school-aged males was more than three times as much as for females, and the amount purchased increased by degree of immaturity at birth (Lindstrom et al., 2011). In addition to anxiety and ADHD, incidences of autism and depression are also increased following preterm birth. Children in the United States that were born moderate-late preterm have been reported to have twice the incidence of autism at 10 years of age (Schendel and Bhasin, 2008). Parent-reported mental health problems in the United States are also more frequent for ex-preterm children and adolescents than for the general pediatric population, with a prevalence of 22.9% compared to 15.5% (Singh et al., 2013). This study also revealed that ex-preterm children have a 61% higher risk of having serious emotional/behavioral problems; specifically, a 33% higher chance of developing depression and a 58% higher chance of developing anxiety in childhood and adolescence (Singh et al., 2013). School-related problems also arise in children following preterm birth, with those born preterm needing more special educational support, having an increased risk of repeating a grade and lower overall reading and mathematics scores compared to children born at full term (Chyi et al., 2008). These findings appear to be consistent internationally, with numerous cohort studies observing that moderate-late ex-premature children have a 1.3- to 2.8-fold increased risk of requiring special education, and a 1.3- to 2.2-fold increased risk of repeating grades at ages 5-10 (Huddy et al., 2001;Morse et al., 2009;van Baar et al., 2009;Gurka et al., 2010). Furthermore, another study identified reading, writing, and spelling difficulties in 9- to 11-year-old ex-premature children compared to those born full term (Kirkegaard et al., 2006).
Even at just 3-4 years of age, impairments to visuospatial processing, spatial working memory, and sustained attention have been documented following preterm birth where major neurological deficits were not present at the time of birth (Vicari et al., 2004). The direct impact of preterm birth on cognitive function is hard to quantify as it is confounded by many of the complex socio-economic, environmental and other factors that precipitated preterm birth in the first place. Additionally, given the plasticity of the developing brain, the timing of cognitive assessment needs to recognize the prognostic limitations of early assessment, especially for those born at extremes of gestational age. Studies comparing cognitive delays in toddlers at 2 years of age do not find any significant difference between preterm and term infants when corrected for prematurity (Cheatham et al., 2006;Darlow et al., 2009;Romeo et al., 2010;Woythaler et al., 2011). In contrast, studies based in Swedish, American, and French cohorts found that 5- to 10-year-old ex-premature children are twice as likely to score <85 on an IQ (intelligence quotient) test than term-born children, and that this correlates with gestational age at birth, with those born more preterm at the highest risk of severe cognitive impairments (Schermann and Sedin, 2004;Marret et al., 2007;Talge et al., 2010). However, a much larger and more comprehensive longitudinal American study of ex-preterm 4- to 15-year-olds born late preterm found no significant differences in IQ based on 11 different cognitive tests at every age group (Gurka et al., 2010). These results suggest that intellectual disability may not be as prevalent following late-preterm birth as other negative outcomes such as behavioral disorders and poor school performance, and that poor school performance may reflect behavioral disruptions that impair the ability to pay attention and learn during class rather than reduced cognitive capacity.
EXPOSURE TO THE ex utero ENVIRONMENT AND ASSOCIATED DAMAGE Preterm birth abruptly removes the newborn from the supportive in utero environment experienced by a fetus of the same postconceptional age. Organ maturation and function throughout the body change tempo dramatically in response to this premature separation from the maternoplacental unit. Brain development requires neurotrophic and gliotrophic support during this time and so is vulnerable following preterm birth as it loses placental steroid support, the supply of precursors for fetal neurosteroid production, and other placentally supplied nutrients. In addition, premature loss of these steroids exposes the developing brain to increased stimulation and excitotoxic damage. Damage to oligodendrocytes of the developing nervous system can occur during this vulnerable developmental window prior to term gestation. The oligodendrocyte development lineage is sensitive to premature exposure to the external environment, leaving it susceptible to chemical and mechanical injury. This involves increased levels of reactive oxygen species following the rise in excitation after preterm birth and early exposure to the ex utero environment (Antony et al., 2004;Blasko et al., 2009). Demyelinated regions in relapsing-remitting multiple sclerosis undergo remyelination, but residual impaired motor coordination may remain (Dutta et al., 2011). Similarly, after preterm birth myelination continues, and animal studies have shown less marked deficits at the equivalent of childhood compared to the reduced myelination seen at term equivalent age. Despite this 'catch up' ex utero myelination, these children experience impaired learning ability and motor coordination, suggesting a similar causal pathway (Rees and Inder, 2005).
Furthermore, reductions in myelination are apparent in a rat model of ADHD (Lindahl et al., 2008), and decreases in the white matter volumes of vulnerable regions such as the hippocampus and cerebellum are evident on magnetic resonance imaging (MRI) comparing term and preterm neonates (Counsell et al., 2003). TREATMENT OPTIONS FOR PREVENTING POOR NEUROLOGICAL OUTCOMES In an effort to ameliorate or prevent preterm-associated brain damage a number of therapies have been adopted. However, despite the increasing body of evidence highlighting the increased neurodevelopmental vulnerability at all gestational ages below full term (39-40 weeks' gestation), no targeted therapies are available in the perinatal period for those infants born late preterm (34-36 weeks). For the less mature infants, maternal magnesium sulfate has been shown to reduce cerebral palsy in extremely preterm neonates, but the number needed to treat remains high, highlighting the need for other therapeutic approaches. A Cochrane systematic review of five large trials comprising 6,145 babies found that the incidence of cerebral palsy in preterm neonates dropped from 5 to 3.4% following antenatal magnesium sulfate therapy (Doyle et al., 2009). A recent study in which magnesium sulfate was given between 6 days and 12 h before unilateral hypoxic ischemia in neonatal rats identified that maximal neuronal protection was achieved by treatment 24 h before the insult (Koning et al., 2017), a lead time that may be available in some instances of preterm birth. Although promising, a limitation of this therapy is the need for antenatal rather than postnatal treatment, especially considering that more than 50% of preterm births are spontaneous and thus antenatal therapy cannot be initiated with appropriate timing.
Human and animal studies have demonstrated a lack of neurological improvement following postnatal magnesium sulfate therapy in the context of chorioamnionitis-induced preterm birth and asphyxia associated with preterm labor (Kamyar et al., 2016;Galinsky et al., 2017). Thus, although magnesium sulfate offers some therapeutic benefit, it is not, in itself, sufficient to reduce preterm-related morbidity, and may instead be suitable as an adjunct therapy. The antioxidant melatonin has also been investigated for its neuroprotective benefits in animal models due to its roles in modulating neuroinflammation and reducing reactive oxygen species (Colella et al., 2016). In neonatal stroke and hemorrhage rat models, pre-treatment with melatonin reduced the neuroinflammation and damage associated with stroke, whilst post-treatment reduced the amount of tissue death and improved cognitive and sensorimotor outcomes (Lekic et al., 2011;Villapol et al., 2011). However, despite entering clinical trials there are few conclusive results available, with a Cochrane systematic review finding no randomized trials published as yet (Wilkinson et al., 2016). Therefore, the long-term benefit of this treatment for neurobehavioral outcomes awaits the result of further randomized control trials. Controlled therapeutic hypothermia in late preterm and term infants with hypoxic ischaemic encephalopathy (HIE) has demonstrated well-established benefits, such as reduced mortality and decreased long-term neurodevelopmental disability, if implemented within 6 h of the insult occurring (Jacobs et al., 2013;Laptook, 2017). The physiological instability and vulnerability of the preterm infant means that therapeutic hypothermia is unlikely to be an appropriate intervention in this cohort.
Even a small decrement in gestational age at initiation of hypothermia (to infants with HIE born at 34-35 weeks' gestation) was associated with an increase in over-cooling (Laptook, 2017) and other hypothermia-associated complications in 90% of the preterm group versus 81.3% in the term cohort (Rao et al., 2017). In this small retrospective cohort study, 66.7% of the preterm neonates that received hypothermia therapy had evidence of white matter injury, whilst just 25% of the term neonates with HIE showed signs of white matter injury following an asphyxial insult managed with therapeutic hypothermia. These results are difficult to interpret, however, due to the lack of a normothermic preterm control group. Importantly, all deaths following the hypothermia therapy were in the preterm group, highlighting their increased vulnerability compared to the term neonates. Similarly, a small retrospective cohort analysis between 2007 and 2015 of preterm infants born at 33-35 weeks' gestation who received whole body hypothermia revealed that 50% experienced mortality or moderate to severe neurodevelopmental impairment as a result of the therapy (Herrera et al., 2018). Currently there is an ongoing clinical trial implementing whole-body cooling in American preterm neonates born at 33-35 weeks' gestation with moderate to severe neonatal encephalopathy, but as it is still in the recruiting stage results are not yet available (ClinicalTrials.gov Identifier: NCT01793129). The American Academy of Pediatrics committee advises that hypothermia should not be undertaken in preterm neonates due to the associated risks, unless it is performed in a research setting (Committee et al., 2014). These findings suggest that hypothermia may limit key developmental processes in the immediate postnatal period, and this may limit its use in all but late preterm neonates, pending the outcome of the current clinical trial.
Thus, the development of further adjunct therapies seems essential to improving neurodevelopmental outcome in the preterm infant. PLACENTAL CONTRIBUTION TO in utero BRAIN DEVELOPMENT The placenta plays an essential role in ensuring fetal neurodevelopment occurs correctly by secreting growth regulating factors, including neurosteroid hormones, throughout pregnancy (Figure 1). Neurosteroids are endogenous steroids that rapidly alter neuronal excitability through interaction with ligand-gated ion channels and other cell surface receptors. In late gestation, the fetus is maintained in a 'sleep-like' state, characterized by low levels of arousal-like activity. This ensures that excitation of the brain is limited, affording protection from excessive excitation and ultimately allowing sufficient energy for demanding developmental processes such as myelination to occur (Nicol et al., 1998;Nguyen et al., 2003). This fetal 'sleep' state is maintained by an elevated level of the neurosteroid allopregnanolone, and decreasing the synthesis of this neurosteroid has been shown to increase the excitation of the brain, potentially disrupting or delaying brain developmental processes (Yawno et al., 2007;Kelleher et al., 2011b). A reduction in the normal fetal neurosteroid environment is thus associated with adverse outcomes, such as the occurrence of potentially damaging seizures, which can lead to destructive and permanent alterations in neurodevelopment (Yawno et al., 2011). Following preterm birth there is a premature reduction in the supply of neurosteroids, including progesterone and its neuroactive metabolite allopregnanolone, resulting in an already vulnerable premature neonate being exposed to the ex utero environment without neuroprotection.
Importance of the Fetal Neurosteroid Allopregnanolone for Brain Development Reductions in white matter are suggested to be a key component in the development of neurobehavioral disorders in children that are born preterm (Rees and Inder, 2005) and may stem from the birth-associated loss of allopregnanolone, as the pro-myelinating effects of this neurosteroid are evident in vitro in rat cerebellar slice cultures (Ghoumari et al., 2003). Allopregnanolone-induced protection against cell death has been demonstrated in an in vivo mouse model of neurodegeneration (Liao et al., 2009) and in a sheep model of acute fetal hypoxia, in which allopregnanolone is also important in maintaining levels of mature myelinating oligodendrocytes (Yawno et al., 2007). Allopregnanolone is synthesized from progesterone via the rate-limiting enzymes 5α-reductase types 1 and 2 (5αR1 and 2) (Figure 1) (Martini et al., 1996;Mellon and Griffin, 2002). In addition to the allopregnanolone supplied by the placenta to the fetus, the fetal brain is also capable of synthesizing allopregnanolone from placentally derived precursors including progesterone and 5α-dihydroprogesterone (5α-pregnane-3,20-dione); thus there is also a high level of allopregnanolone locally produced and maintained within the fetal brain (Stoffel-Wagner, 2001;Nguyen et al., 2004). However, we have previously shown in the developmentally relevant guinea pig (Morrison et al., 2018), a precocial species with a hormonal profile similar to humans throughout pregnancy, that following the loss of the placenta at birth both progesterone and allopregnanolone levels decline rapidly within 24 h, highlighting the necessity of the placenta for the supply of steroidogenic precursors (Kelleher et al., 2013).
Both of the rate-limiting enzymes 5αR1 and 2 are expressed in the placenta, and sheep and rat studies show that the 5αR2 isoform is most strongly expressed on neurons and glia within the developing fetal brain in late gestation (Martini et al., 1996;Nguyen et al., 2003). Birth-associated loss of gestational allopregnanolone concentrations occurs earlier than normal in neonates that are born preterm, leading to a damaging increase in excitation. Recent studies by our group have shown there is a dramatic drop in brain allopregnanolone concentrations following term and preterm birth compared to fetal levels (Kelleher et al., 2011a, 2013). Furthermore, preterm delivered animals also had significantly decreased myelination (evidenced by reduced MBP expression) in the CA1 region of the hippocampus and adjacent subcortical white matter 24 h after delivery compared to animals delivered at term (Kelleher et al., 2013). We have also shown that preterm male and female neonates at term equivalence age exhibit deficits in MBP immunostaining of the CA1 region, subcortical white matter and posterior lobe of the cerebellum (Kelleher et al., 2013;Palliser et al., 2015), and that juvenile offspring present with lasting deficits in myelination of these regions in male and female guinea pigs (Shaw et al., 2016, 2017). Likewise, reduced allopregnanolone supply as a result of intrauterine growth restriction also impairs myelin development of the CA1 in male fetuses (Cumberland et al., 2017b). We have found that the late developing cerebellum is particularly vulnerable to the insults associated with preterm delivery. In addition to the CA1 region of the hippocampus, reductions in myelination of the posterior lobe of the cerebellum were evident in preterm guinea pig neonates at postnatal day 1 (PND1).
Furthermore, at term equivalence age we have demonstrated that not only is the expression of mature oligodendrocytes reduced, but also that reductions are present throughout the oligodendrocyte lineage, thereby lessening the potential for catch-up growth to occur. By juvenile age we further observed that there were sex-dependent alterations in myelination of the posterior lobe of the cerebellum as well as in components of the GABAergic pathway (Shaw et al., 2017).

FIGURE 1 | Neurosteroidogenesis in the placenta and fetal brain. Cholesterol is metabolized into progesterone by the enzymes cholesterol side-chain cleavage (P450scc) and 3β-hydroxysteroid dehydrogenase (3β-HSD). The rate-limiting enzymes 5α-reductase type 1 and 2 (5α-R) facilitate the conversion of progesterone into 5α-dihydroprogesterone (5α-DHP). Allopregnanolone is then synthesized from this precursor by 3α-hydroxysteroid dehydrogenase (3α-HSD). This process can occur both within the placenta, and de novo within the fetal brain.

Functional imaging studies suggest that the posterior lobe of the cerebellum is particularly involved in cognition and emotion (Stoodley, 2012), as it is interconnected with the prefrontal cortex, association cortices, and the limbic system, which allows for its involvement in higher order executive functioning (Stoodley and Schmahmann, 2010). Therefore, the altered development of this area may play a role in some of the neurobehavioral disorders that are more common following premature birth, such as ADHD and autism. Our studies indicate that juvenile males show a hyperactive phenotype following preterm birth (Shaw et al., 2016). Additionally, they exhibit behavior similar to that observed in mouse models of ADHD where, as with our study, within open field test conditions the spontaneous distance traveled and time spent mobile are markedly higher for the affected mice compared to the controls (Kim et al., 2014).
This hyperactive behavior has parallels with clinical studies where ex-preterm male children show an increased incidence of hyperactivity disorders (Linnet et al., 2006;Lindstrom et al., 2011). Taken together, these data emphasize the importance of allopregnanolone for myelination and optimal development of the GABAergic system in fetal and neonatal life. We therefore speculate that the changes in neurodevelopmental and behavioral function we see following preterm birth may be accounted for by the loss of allopregnanolone supply. Pharmacological Reduction of the in utero Neurosteroid Environment The deficits in myelination seen following preterm birth can be mimicked by the administration of a 5α-reductase inhibitor, finasteride, directly to the fetal circulation, preventing the metabolism of progesterone to allopregnanolone within the fetal brain. This intervention results in an increase of damaging excitation in the brain of fetal sheep due to reduced suppression by allopregnanolone (Nicol et al., 2001). As a result of this excitation, cell death is increased in areas such as the hippocampus, cerebellum, and white matter tracts. In another study in fetal sheep, allopregnanolone synthesis was reduced through inhibition of progesterone production by trilostane (a 3β-hydroxysteroid dehydrogenase inhibitor). This resulted in reduced fetal sleep-like behavior but increased arousal-like activity (Crossley et al., 1997), leading to increased brain excitability and damaging seizures (Mirmiran, 1995;Nicol et al., 1997). Furthermore, when progesterone was replaced by exogenous supplementation, the occurrence of sleep-like behavior returned to normal fetal patterns (Crossley et al., 1997). Exposure to finasteride has also been shown to increase apoptotic cells in the CA1 and CA3 regions of the hippocampus, and the cerebellar molecular and granular layers in fetal sheep, as well as increasing the number of dead Purkinje cells in the cerebellum (Yawno et al., 2009).
Importantly, co-infusion of finasteride and the allopregnanolone analog alfaxolone completely ameliorated the deleterious effects of finasteride treatment. Similarly, allopregnanolone itself has been shown to protect the fetal brain when insults occur: in a sheep model, the introduction of brief asphyxia in the presence of finasteride induced cell death in the hippocampus, whereas when allopregnanolone was present at normal concentrations this asphyxia-induced damage did not occur (Nicol et al., 2001). In utero administration of finasteride to guinea pigs has also highlighted the key role of allopregnanolone in myelination, as a reduction in myelination in the subcortical white matter was present following inhibition of allopregnanolone synthesis (Kelleher et al., 2011b). Interestingly, administration of the allopregnanolone precursor, progesterone, to rat cerebellar slices in vitro increased both the proliferation of myelinating oligodendrocytes and the rate of myelination (Ghoumari et al., 2003). Follow-up studies then revealed that this effect was achieved by neurosteroids acting on the GABA A receptors (Ghoumari et al., 2005). Together these studies emphasize the important role of allopregnanolone not just in the development of the brain, but also in protection from hypoxia (Figure 2). A reduction in allopregnanolone concentrations during pregnancy can also have long-lasting effects on the offspring. In guinea pigs, maternal administration of finasteride in late gestation resulted in an anxiety-like phenotype in female offspring, along with reductions in components of the GABAergic pathway within the amygdala (Cumberland et al., 2017a). Furthermore, there was also decreased expression of neurosteroid-sensitive GABA A receptors and increased astrocyte activation within the cerebellum of these animals (Cumberland et al., 2017c).
In a similar study, finasteride treatment of pregnant rats during late gestation resulted in increased serum corticosterone concentrations in their juvenile offspring, decreased hippocampal allopregnanolone levels and impaired performance in memory tasks (Paris et al., 2011). Studies inhibiting the production of allopregnanolone in adult rats highlight the importance of allopregnanolone for the prevention of neurodevelopmental disorders throughout life, as reductions in the concentration of allopregnanolone within the hippocampus (Frye and Walf, 2002) or the amygdala (Walf et al., 2006) increased anxiety-like behaviors in these animals. Furthermore, multiple neurological conditions are characterized by a reduced level of circulating allopregnanolone in adults, including post-traumatic stress disorder (Rasmusson et al., 2006), major depressive disorder (Strohle et al., 1999), and premenstrual dysphoric disorder (Monteleone et al., 2000;Lombardi et al., 2004). Combined Effect of Reduced Neurosteroid Exposure and Increased Cortisol An underlying factor involved in the development of hyperactivity and anxiety following preterm birth may be increased cortisol. In our studies we have observed increased circulating cortisol levels in preterm offspring at birth, PND1, and juvenility (Shaw et al., 2016). In humans, one study has found that as birth weight and gestational age decrease, there is an increase in circulating cortisol (Kajantie et al., 2002), and early life stress has also been shown to negatively impact hippocampal development, with long-term effects into adolescence (Hodel et al., 2015). Interestingly, at juvenility we found that male preterm offspring had increased baseline concentrations of circulating cortisol that were unaffected by exposure to foreign situations (in the form of behavioral testing).
Meanwhile, juvenile females born preterm experienced a substantial rise in cortisol in response to foreign situations compared to term-born females, suggesting that they have an anxious phenotype and increased fear response (Shaw et al., 2016). These data highlight the sexually dimorphic effects that preterm birth has on programming of the hypothalamic-pituitary-adrenal (HPA) axis, with a blunting of the stress response following preterm birth in males, but an increased response in females. Previous studies in guinea pigs suggest that prenatally increased cortisol may program adverse behavior in childhood; for example, maternal stress exposure in pregnancy was shown to result in increased anxious behaviors in juvenile female offspring (Bennett et al., 2015). This is consistent with studies showing that prenatal stress 'programs' the HPA axis (Matthews, 2005, 2008;Kapoor et al., 2006). This results in a greater postnatal sensitivity of the HPA axis to stressful stimuli, in turn contributing to behavioral disorders. The programming mechanism has been shown to be mediated by changes at the level of the hypothalamus (Matthews, 2005, 2008;Kapoor et al., 2006). Therefore, even in the absence of a parallel change in postnatal cortisol concentrations, early exposure to increased cortisol concentrations can program an altered behavioral response to stress-inducing situations. These behavior-altering effects of cortisol may also involve interactions between cortisol and allopregnanolone. Glucocorticoids, such as cortisol, are known to adversely affect allopregnanolone production.

FIGURE 2 | Proposed cascade of events following preterm birth that lead to ongoing neurological impairments.
Frontiers in Physiology | www.frontiersin.org
Studies in guinea pigs have previously demonstrated that repeated administration of betamethasone (a synthetic glucocorticoid) to pregnant dams reduced the allopregnanolone synthesizing capacity of both the placenta and the fetal brain, as demonstrated by a reduction in the expression of the rate-limiting enzyme 5α-reductase type 2 in both tissues (McKendry et al., 2009). Interestingly, expression of this enzyme is also decreased in the brain of preterm guinea pig neonates (Kelleher et al., 2013), possibly as a result of exogenous glucocorticoid exposure, part of the gold standard treatment to reduce short-term morbidity and mortality following preterm birth. Our studies have also shown that both late gestation maternal stress and pharmacological inhibition of allopregnanolone synthesis by finasteride result in a reduction of allopregnanolone concentrations in the fetus, with development of an anxious phenotype in female juvenile offspring (Bennett et al., 2015;Cumberland et al., 2017a). In light of these data and the findings of the studies presented here, we suggest that, in addition to the lack of protection by allopregnanolone against excitotoxic damage and the raised levels of cortisol present following early exposure to the ex utero environment, cortisol hinders the synthesis and action of any offspring-derived allopregnanolone in the preterm neonate, and that this has lasting implications for neurodevelopment and behavior (Figure 2). NEUROSTEROIDS AND THE EXTRA SYNAPTIC GABA A RECEPTOR Allopregnanolone exerts inhibitory effects throughout the brain to suppress excessive excitation. This suppression is achieved by increasing GABAergic inhibition (Herd et al., 2007).
Allopregnanolone is an allosteric agonist of the GABA A receptors and specifically enhances GABA A receptor mediated inhibition, which results in anxiolytic, anti-convulsant, anesthetic, analgesic, and sedative effects (Harrison and Simmonds, 1984;Harrison et al., 1987;Lambert et al., 1987;Majewska, 1992;Paul and Purdy, 1992;Olsen and Sapp, 1995;Belelli and Lambert, 2005). These effects are achieved by activation of the extra synaptic receptors, which are known to be particularly sensitive to allopregnanolone. GABA A receptors form a gated chloride ionophore with specific binding sites for benzodiazepines, barbiturates, and anesthetics; neurosteroids, however, are thought to bind to a separate allosteric steroid-binding site (Delaney and Sah, 1999;Macdonald and Botzolakis, 2009). GABA A receptors exhibit inhibitory effects in response to neurosteroid stimulation in adult animals and from mid gestation onward in the fetus; however, they are also capable of exhibiting excitatory actions in early gestation, and these excitatory actions are known to stimulate glial cells and neuronal outgrowth (Owens and Kriegstein, 2002;Represa and Ben-Ari, 2005). Whether the effect is inhibitory or excitatory is determined by the chloride gradient across the receptor-ionophore, which is set by the intracellular chloride concentration. This in turn is primarily regulated by the K + /Cl − co-transporter-2 (KCC2) (Rivera et al., 1999, 2005). The expression and activity of this integral co-transporter is regulated by the phosphorylation of its Ser940 residue, with dephosphorylation resulting in downregulation of the co-transporter, increasing the intracellular chloride concentration, and switching to excitatory GABA actions (Lee et al., 2007, 2011). GABA A receptors are involved in a broad range of functions including controlling the excitability of the brain, modulation of anxiety, as well as cognition, memory, and learning (Sieghart et al., 1999).
In addition to neurons, extra synaptic neurosteroid-sensitive receptors are highly expressed on glial cells, including oligodendrocytes (Arellano et al., 2016), throughout the fetal brain from mid-gestation onward (Williamson et al., 1998; Crossley et al., 2000; Hirst et al., 2008). The expression of GABA A receptors in the fetal brain increases as gestation advances, reaching the highest levels of expression by full-term gestation in most areas, such as the cerebral cortex and hypothalamus (Crossley et al., 2000, 2003; Nguyen et al., 2003). GABA A receptors exist in a pentameric formation of 5 subunits with a central selective chloride anion channel. The five subunits come from a pool of 19 different subunits, α1-6, β1-3, γ1-3, δ, ε, π, θ, and ρ1-3, and subunit composition varies greatly depending on the function of the receptor (Barnard et al., 1998; Belelli et al., 2009). Synaptic receptors, which are responsible for fast transmission, usually feature the α1-3, β1-3, and γ2 subunits (Essrich et al., 1998), whilst the extra synaptic receptors that contribute to tonic inhibition (Belelli et al., 2009) possess the α4-6 and δ subunits (Burgard et al., 1996). Rather than producing an increase in the amplitude of miniature inhibitory postsynaptic currents (mIPSCs), neurosteroids have been shown to increase the duration of the amplification by altering the kinetics of the GABA A-gated ion channels. This increase in duration is neuron specific, with different brain regions requiring different concentrations of neurosteroids to induce the same effect. Specifically, the CA1 neurons of the hippocampus, cerebellar granule cells, and Purkinje cells appear to be more sensitive to neurosteroids, requiring only low nanomolar concentrations to increase the duration of amplification (Harney et al., 2003; Cooper et al., 2004), and this is primarily due to subunit composition.
Receptor subunit composition plays an important role in determining receptor affinity for various ligands. Benzodiazepines, for example, are known to be attracted to receptors containing a γ subunit, whilst those featuring α6 are unresponsive to benzodiazepines (Delaney and Sah, 1999). Whilst there is a specific binding site for 3α-hydroxyneurosteroids such as allopregnanolone, the composition of subunits affects the sensitivity of the receptor to stimulation (Belelli et al., 2002; Hosie et al., 2007). Regional specificity also exists for these receptors; for example, in a mouse knockout of the δ subunit, tonic conductance was significantly reduced in the cerebellum, whereas in the CA1 region of the hippocampus there was no effect on conductance (Stell et al., 2003). This regional specificity is due to differences in the expression of various subunits throughout the brain: whilst the α6 and δ subunits, which are co-expressed in many receptors, are highly expressed in the cerebellum, tonic conductance in the hippocampus is controlled primarily by receptors containing the α4 and α5 subunits, in addition to those containing the δ subunit. The role of specific neurosteroid-sensitive subunits in behavior has been revealed in knockout mouse models. For example, global deletion of the δ subunit significantly reduces the anxiolytic and anti-convulsant effects induced by the allopregnanolone analog ganaxolone, confirming that neurosteroids bind to δ subunit-containing GABA A receptors to exert their inhibitory functions (Mihalek et al., 1999). Increased anxiety-like behavior was also present in α4 subunit knockout mice, as demonstrated by an increased preference for dark enclosed spaces in a T-maze (Loria et al., 2013). Seizure susceptibility has also been shown to increase following this knockout (Chandra et al., 2008).
Similarly, it has been demonstrated that pro-epileptic behavior is increased in mice lacking the δ subunit (Mihalek et al., 1999; Spigelman et al., 2002, 2003). Taken together, these data indicate the importance of the configuration of the GABA A receptors and the necessity of the expression of key subunits for neurosteroid binding and for their effects on behavior. Of particular importance to preterm-associated neurodevelopmental impairment is the ability of allopregnanolone to promote GABA A receptor-mediated maturation of oligodendrocytes. Administration of progesterone to rat cerebellar slice cultures increased the expression of the mature myelinating oligodendrocyte marker, myelin basic protein (MBP) (Ghoumari et al., 2003). The enhancement of myelination was achieved by allopregnanolone, the neuroactive metabolite of progesterone, acting via the GABA A receptors located on oligodendrocytes, as a selective GABA A receptor antagonist inhibited this promyelinating effect.

GABA A RECEPTORS AND PRETERM BIRTH

In guinea pigs born preterm, altered GABAergic pathway development is evident in the cerebellum at juvenility. Intriguingly, despite reduced expression of both subunits in the preterm neonatal cerebellum (Figure 3), mRNA expression of the allopregnanolone-sensitive GABA A receptor subunits α6 and δ is not altered in these preterm-born animals at juvenility (Shaw et al., 2017). These observations suggest that sometime between birth and juvenile age in the guinea pig (PND28) there is either a 'catch-up' in the expression of these key GABA A receptor subunits, or conversely, that levels in the brain of term-born animals have dropped to lower levels. Subunits of the GABA A receptor are reported to go through age-related changes in expression, with early development often a period of high expression, followed by downregulation in adulthood (Yu et al., 2006).
This age-related change in expression follows the maturation profile of the brain; therefore, if the neurosteroid-sensitive receptors in the preterm brain do not undergo any 'catch-up' between birth and juvenility, this may contribute to preterm-associated changes in neurodevelopment. An additional vulnerability that has been reported for preterm neonates is an observed lack of a birth-related adaptive increase in the cerebellar expression of the α6 and δ GABA A receptor subunits after birth (Figure 2). This potentially reduces the effect of allopregnanolone postnatally, exposing the immature brain to damaging excitotoxicity. Knockout studies of the δ subunit, which is known to commonly group with the α6 subunit, have shown a link between a lack of these subunits and the manifestation of multiple neurodevelopmental phenotypes such as anxiety-like behavior and pro-epileptic behavior (Mihalek et al., 1999; Spigelman et al., 2002, 2003).

FIGURE 3 | Relative mRNA expression of the GABA A receptor (A) δ and (B) α6 subunits in guinea pig cerebellum. Fetal tissue was obtained at GA69 (term) and GA62 (preterm) ages, and neonatal tissue from 24 h after term or preterm birth. (*p < 0.05, n = 11-16). Adapted and reprinted with permission from Shaw et al. (2015).

Interestingly, receptor changes are present in human brain tissue in disorders that primarily affect myelination, such as multiple sclerosis (Luchetti et al., 2011), and whilst their precise role in disease progression is unknown, the neurosteroid-sensitive GABA A receptors present a common link between initial myelination, 'catch-up' and remyelination processes, and behavioral state. Conversely, the hippocampal GABA A neurosteroid-sensitive receptor subunits appear to be largely unaffected by preterm delivery, with the exception of a decrease in the expression of the α5 subunit mRNA at juvenility (Shaw et al., 2016).
This particular subunit is known to mediate tonic inhibition in the CA1 of the hippocampus, is required for associative learning, and furthermore is known to be reduced in response to increased levels of cortisol (Crestani et al., 2002; Verkuyl et al., 2004; Glykys and Mody, 2006). Thus, a reduction in α5 subunit expression in childhood may reduce tonic inhibition, thereby increasing excitation in the hippocampus, which in turn may contribute to the risk of hyperactivity disorders in male children born preterm.

POTENTIAL OF NEUROSTEROIDS AS A PROTECTIVE THERAPY

Steroid hormones, including progesterone, allopregnanolone, and potentially other neuroactive metabolites, can exert neuroprotective effects following damage to neurons and glia by preventing excitation, apoptosis, and inflammation, as well as by regenerative mechanisms (Schumacher et al., 2004). Studies in adult rats have demonstrated the therapeutic effect of progesterone injections on TBI, where progesterone administration reduced neuronal loss (Roof et al., 1994, 1996; He et al., 2003). Similarly, allopregnanolone administration was shown to reduce memory deficits and loss of neurons in the frontal cortex of these rats following bilateral injury by stimulating trophic effects (He et al., 2003). Importantly, in rat astrocyte and oligodendroglial progenitor primary cell cultures, progesterone exposure upregulated expression of the promyelinating factor insulin-like growth factor 1 (Chesik and De Keyser, 2010) and, in organotypic slice cultures of rat cerebellum, myelination was stimulated by progesterone following its metabolism into allopregnanolone, with its trophic actions mediated by GABA A receptors (Ghoumari et al., 2003). Both progesterone and allopregnanolone have been shown to be effective at reducing the pro-apoptotic activity of caspase-3, reducing astrogliosis as evidenced by GFAP staining, and improving performance in both the spatial learning task and memory function in adult male rats (Djebaili et al., 2005). Furthermore, rat studies have identified reductions in the inflammatory cytokines TNF-α and IL-1β following TBI and subsequent progesterone or allopregnanolone administration (He et al., 2004).

FIGURE 4 | Myelination, the process of surrounding nerve axons with a myelin sheath, is achieved by oligodendrocytes. Placentally derived allopregnanolone (ALLO), the neuroactive metabolite of progesterone (PROG), promotes maturation of oligodendrocytes in utero via action on GABA A receptors in (A); premature loss of the placenta due to preterm birth results in an arrest of this process in (B); and reinstating GABA A receptor signaling by neurosteroid-replacement therapy may restore oligodendrocyte maturation leading to correct myelination in neonates born preterm, thus improving neurological function in (C).

Following the potential benefits of progesterone therapy observed in animal studies, a randomized phase III clinical trial of progesterone (ProTECT) for treatment of acute TBI in adults was performed. This showed that progesterone treatment resulted in a lower 30-day mortality risk, and that patients were more likely to have a moderate to good outcome than those receiving placebo (Wright et al., 2007). Likewise, a large clinical trial in China is showing similar therapeutic benefits following progesterone therapy (Xiao et al., 2008).
The role of progesterone as a precursor of allopregnanolone, and the number of positive studies relating to the use of progesterone, led us to examine the use of progesterone-replacement therapy in preterm guinea pig neonates. In contrast to the earlier finding of effects on TBI in rats, we observed detrimental effects on postnatal neurodevelopment, particularly in the male offspring. From this preliminary study, it appears that progesterone is metabolized differently by the male neonates: instead of producing allopregnanolone, much of the steroid is converted to cortisol. These males, with high plasma and salivary cortisol concentrations, also had reductions in myelination of the cerebellum and subcortical white matter, highlighting the vulnerability of these male neonates to increased cortisol as a result of increased postnatal progesterone. Previous studies have also investigated the potential use of allopregnanolone to restore neurosteroid deficits. Preliminary findings, however, suggested that allopregnanolone had limited effectiveness due to its very short half-life, or other possible metabolic conversion, making therapeutic concentrations difficult to achieve. To avoid both of these issues with allopregnanolone therapy, as well as potential conversion of allopregnanolone to its less active isomers, we explored a possible postnatal therapy with ganaxolone.

Ganaxolone

Ganaxolone is a 3β-methylated synthetic analog of allopregnanolone, initially developed by Edward Monaghan at CoSensys in 1998; in 2004, Marinus Pharmaceuticals Inc. acquired the development and commercialization rights (Nohria and Giller, 2007). Marinus Pharmaceuticals then carried out a number of clinical trials using ganaxolone, some of which are still ongoing. Ganaxolone features a methyl group that prevents metabolism into other active steroids (Carter et al., 1997), and has a half-life of 12-20 h in humans (Monaghan et al., 1997).
Ganaxolone acts in a very similar manner to allopregnanolone and binds to the neurosteroid-binding site of GABA A receptors, producing similar anxiolytic and anti-seizure effects. The addition of the methyl group markedly improves oral pharmacokinetics; in addition, ganaxolone is not readily metabolized to other steroids that may bind elsewhere and produce unwanted effects (Carter et al., 1997). Allopregnanolone can, for example, be metabolized into the 3β-isomer that is either inactive or, at higher doses, may block the steroid site on the GABA A receptor. Animal pharmacokinetic studies demonstrate that ganaxolone has a large volume of distribution, as administration of radioactively labeled ganaxolone has shown wide distribution, and due to its lipophilic nature it becomes concentrated in the brain, with a brain-to-plasma concentration ratio of between 5 and 10 (Nohria and Giller, 2007; Reddy and Rogawski, 2012). In addition to pharmacokinetic studies, there have been a number of animal studies relating to the use of ganaxolone in behavioral disorders. In an adult mouse model of Angelman syndrome (which is characterized by severe developmental delay, motor impairments, and epilepsy), treatment with ganaxolone over a period of 4 weeks was shown to ameliorate behavioral abnormalities (Ciarlone et al., 2017). Other mouse models of neurodevelopmental disorders have highlighted the therapeutic benefits of ganaxolone, including an adult mouse model of autism where ganaxolone reversed the autistic phenotype (Kazdoba et al., 2016), and an adult post-traumatic stress mouse model where, again, ganaxolone therapy improved behavioral changes such as aggression and anxiety (Pinna and Rasmusson, 2014). Despite numerous animal models of behavioral disorders demonstrating the therapeutic potential of ganaxolone in ameliorating disease states, there is limited information regarding its effects on neurodevelopment or myelination in these models.
There has been one model in which administration of ganaxolone to Niemann-Pick Type C diseased adult mice identified protection against Purkinje cell death, which is similar to the previously reported protective mechanisms of allopregnanolone (Mellon et al., 2008). Furthermore, there has been only one neonatal animal study using ganaxolone therapy, in a rat model of infantile spasms where the onset, number, and duration of spasms were reduced by ganaxolone therapy (Yum et al., 2014). An additional study examining the neuroprotective effects of ganaxolone following neonatal seizures in sheep is ongoing but shows promise (Yawno et al., 2017). A number of phase 2 clinical trials have examined the use of ganaxolone for epilepsy and infantile spasms, as well as for post-traumatic stress disorder, migraine, and the developmental problems associated with fragile X syndrome (Nohria and Giller, 2007; Reddy and Rogawski, 2012). Daily drug doses of up to 1,875 mg in adults and 54 mg/kg in children have been trialed, and it has been shown that a single oral dose of 1,600 mg can result in peak plasma concentrations of up to 460 ng/mL. Recently, a randomized phase 2 trial of ganaxolone as an add-on therapy for severe seizure disorders took place in 147 adults (Sperling et al., 2017). The subjects received 1,500 mg/day spread over three doses for 8 weeks. The treatment resulted in an 18% decrease in mean weekly seizure frequency, compared to a 2% increase in the placebo group. The treatment was reported as safe and well tolerated, with similar rates of discontinuation due to adverse effects in the placebo and ganaxolone groups (ganaxolone 7.1% versus 6.1% for placebo). The most common side effects were classified as mild to moderate and included dizziness (16.3% versus 8.2% in placebo), fatigue (16.3% versus 8.2%), and somnolence (13.3% versus 2.0%).
In the context of preterm birth, we have recently reported that ganaxolone neurosteroid-replacement therapy given to preterm guinea pigs between birth and term 'due date' improved myelination of the CA1 region of the hippocampus and the overlying subcortical white matter, in addition to reducing hyperactive behavior (Shaw et al., 2018). This was the first study to show that neurosteroid-replacement therapy can replicate the in utero neurosteroid environment and that this restores neurodevelopment to a normal, term-born trajectory (Figure 4). By combining our recent studies on pregnancy compromises in the developmentally relevant guinea pig (Morrison et al., 2018) with those on the impact of disturbances in allopregnanolone levels on the developing fetus, the preterm neonate, and the long-term effects on the juvenile, we now suggest that re-establishment of neurosteroid action in the period between birth and term equivalence is a prospective therapy for future clinical use. Whilst more studies are required, particularly on optimal dosing and longer-term outcomes, we suggest this study provides the impetus and a path for future preclinical trials using neurosteroid-replacement therapy following preterm birth. Furthermore, this therapy may be useful following the other pregnancy compromises discussed previously, where a major contributing factor to deficits in neurodevelopment is a lack of allopregnanolone exposure.

CONCLUSION

Until recently, the risk of neurodevelopmental impairment in children born moderate-late preterm who required little to no clinical intervention was thought to be minimal; however, data from large international cohorts clearly demonstrate that this is not the case.
Although the effect size is not as great as for those born at the extremes of gestational age, the significantly larger number of children born at moderate-late preterm gestations means that this is an increasingly large public health issue, with important implications for the provision of educational and other resources throughout childhood. Currently, there are no targeted therapies available to prevent the development of these neurodevelopmental problems, and as such therapy is limited to symptom management for the most affected children. Through studies in our guinea pig model of preterm birth we have begun to address these gaps in the knowledge of neurodevelopment following preterm birth. We suggest key pathways involved, targets for intervention, and a therapy for the prevention of preterm-associated neurodevelopmental disorders. These studies are in their preliminary stages, and whilst we have identified a target for improving outcomes, there are many aspects of this therapy that we are yet to investigate. Our pilot studies are primarily focused on identifying an optimal dose that promotes oligodendrocyte maturation but minimizes adverse side effects. Once we identify an ideal dose, we can then determine whether there are interactions with other therapies that the preterm neonate may be exposed to, such as synthetic glucocorticoids, and, potentially in the future for asphyxiated preterm infants, therapeutic hypothermia as a co-therapy.

AUTHOR CONTRIBUTIONS

JS: primary author. MB, RD, and GC: revisions and edits. JH and HP: co-senior authors, revisions and edits, and concept design.
Comprehensive Analysis of Immune Prognostic-Related Genes in the Tumor Microenvironment of Hepatocellular Carcinoma

Background: The percentage of deaths resulting from hepatocellular carcinoma (HCC) remains high worldwide, despite surgical and chemotherapy treatment. Immunotherapy offers great promise in the treatment of a rapidly expanding spectrum of HCC. Therefore, further exploration of the immune-related signatures in the tumor microenvironment, which plays a vital role in tumor initiation and progression, is currently needed for immunotherapy. Methods: In this present research, 866 immune-related differentially expressed genes (DEGs) were identified by integrating the DEGs between TCGA HCC and normal tissue with the immune genes from databases (InnateDB; ImmPort), and 144 candidate prognostic genes were defined through weighted gene co-expression network analysis (WGCNA). Results: Seven prognostic immune-related DEGs were determined with the LASSO Cox PH model, and an ImmuneRiskScore model was constructed based on these prognostic immune-related DEGs. The prognostic index of the ImmuneRiskScore was then validated in an independent dataset. Patients were divided into high- and low-risk groups according to the ImmuneRiskScore. The difference in ImmuneRiskScore and infiltration of immune cells between groups was detected, and the correlation analysis for immunotherapy biomarkers was further explored. Conclusion: The ImmuneRiskScore of HCC could be a prognostic signature reflecting characteristics within the tumor microenvironment. Furthermore, it may also provide a novel immunotherapy predictive biomarker for HCC patients in the near future.

Introduction

Hepatocellular carcinoma (HCC) is one of the most common malignancies [1,2]. With 5-year survival being 18%, HCC is the second most lethal tumor after pancreatic cancer [3] and the fourth leading cause of cancer-related mortality worldwide [4,5].
Hepatocellular carcinoma is the major cancer type among primary liver cancers, and its increase in deaths is a growing concern. However, general therapies such as radiotherapy and chemotherapy do not prolong overall survival (OS) significantly in HCC [6]. Thus, new strategies are required. Immunotherapy with checkpoint inhibitors is also emerging as an important treatment option. Immune checkpoint inhibitors (ICIs) are revolutionizing the clinical treatment landscape of multiple tumors, most notably advanced melanoma [7][8][9][10][11][12][13], non-small-cell lung cancer [14,15], and renal cell carcinoma [16,17]. Since HCC seems to be amenable to programmed cell death protein 1 (PD1) pathway blockade [18], ICIs will likely gain approval for additional HCC indications in the near future [6,19,20]. Despite the remarkable progress that has been achieved, only a limited number of patients can benefit from ICIs [21]. Therefore, there is an urgent need for new, immune-based biomarkers distinguishing HCC patients more likely to have a better prognosis and further benefit from immunotherapy. As an important element of the HCC tumor microenvironment, immune cells show clinicopathological significance in predicting prognosis and therapeutic efficacy [22][23][24]. Investigating how the characteristics of the tumor microenvironment functionally impact immunotherapy is an active field. In this study, we make full use of TCGA data and a priori immune-related genes to construct a prognostic immune risk score via WGCNA and a lasso Cox model. We also analyzed the correlation between the ImmuneRiskScore and different immune cells to elucidate possible mechanisms for the formation of the microenvironment. Last, we explored the correlation with other immune biomarkers and the potential to identify patients eligible for immunotherapy to improve therapeutic effects. The flowchart is shown in Fig. 1.
Data download and processing

We downloaded the RNA sequencing expression profiles (count and RPKM format) and clinical data of TCGA-HCC from the UCSC Xena data portal (https://xena.ucsc.edu/), which contains 50 normal samples and 374 tumor ones. The immune-related gene set contains 1052 immune genes downloaded from InnateDB (https://www.innatedb.ca/) (Tables S1) and 1811 immune-related genes downloaded from ImmPort [25] (https://www.immport.org) (Tables S2). The expression profiling dataset GSE14520, as well as its clinical data, was obtained with the GEOquery [26] package of Bioconductor in R-3.5.2. GSE14520 contains 162 tumor samples after removing the normal samples. Microarray probe IDs were mapped to gene symbols based on the GPL3921 platform (Affymetrix HT Human Genome U133A Array) and incorporated in the dataset matrix. Finally, the average of multiple probes corresponding to a single gene was computed for each dataset individually using in-house R scripts. The tumor mutation burden data was downloaded from PanCancerAtlas (https://www.cell.com/consortium/pancanceratlas).

Differential Expression Analysis

We identified differentially expressed genes (DEGs) with the limma [27] package in R between normal (n = 50) and cancer (n = 367) patients, based on the raw counts of HCC gene expression data from TCGA. The empirical Bayes method of the "limma" package, used in the standard comparison mode, was applied to select significant DEGs, and the threshold was set at p < 0.05 and |log2 fold change| >= 1.5.

Gene Ontology (GO) and pathway enrichment analysis of DEGs

As an ontology-based R package, clusterProfiler not only automates the process of biological-term classification and the enrichment analysis of gene clusters, but also provides a visualization module for displaying analysis results [28].
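The DEG cut-offs above (p < 0.05 and |log2 fold change| >= 1.5) amount to a simple joint filter over per-gene statistics. A minimal Python sketch of that selection step, using made-up toy values rather than actual limma output:

```python
def select_degs(results, p_cut=0.05, lfc_cut=1.5):
    """Keep genes whose p-value and |log2 fold change| pass both cut-offs."""
    return {gene for gene, (log2fc, pval) in results.items()
            if pval < p_cut and abs(log2fc) >= lfc_cut}

# toy results: gene -> (log2 fold change, p-value); names are placeholders
toy = {
    "GENE_UP":   ( 2.1, 0.001),   # passes both cut-offs (up-regulated)
    "GENE_DOWN": (-1.8, 0.010),   # passes both cut-offs (down-regulated)
    "GENE_FLAT": ( 0.2, 0.001),   # significant p but tiny fold change
    "GENE_NS":   ( 3.0, 0.200),   # large fold change but not significant
}
print(sorted(select_degs(toy)))  # → ['GENE_DOWN', 'GENE_UP']
```

The same filter, applied to the 424 TCGA samples, is what yields the up- and down-regulated DEG lists reported in the Results.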
In this present research, the clusterProfiler package was used to identify and visualize the GO terms (biological process, cellular component, and molecular function) and KEGG pathways enriched by DEGs. We set P-value < 0.01 as the cut-off criterion; the significance adjustment method was BH, and the cut-off criterion for the q-value was also set to 0.01.

WGCNA

We used the FPKM format of the TCGA dataset to construct the WGCNA analysis [29]. First, basic data preprocessing was performed to handle missing data and remove outliers. Second, the soft-thresholding power was chosen with pickSoftThreshold, which calculates the scale-free topology fit index for several powers and provides the appropriate soft-thresholding power for network construction. Third, a one-step automatic network was constructed and modules were detected. Adjacency was turned into topological overlap, which measures the network connectivity of a gene as the sum of its adjacency with all other genes, for network generation. A hierarchical clustering function was used to classify genes with similar expression profiles into modules [30]. Next, the key modules related to OS and OS time were selected and visualized by Cytoscape [31]. In this present study, modules were chosen and visualized. The correlation between MEs and the clinical traits "survival" and "survival time" was calculated to identify the related modules. Then, in the linear regression between gene expression and clinical information, we defined gene significance (GS) as the log10 transformation of the P-value (GS = lg P). In a module, module significance (MS) was defined as the average GS over all the genes. Hub genes were identified as those with high clinical trait significance (> 0.1) and high intramodular connectivity (> 0.5) in modules of interest, and were selected as the candidates to be further analyzed and validated.
Gene set enrichment analysis of hub genes

We used gprofiler2 (https://CRAN.R-project.org/package=gprofiler2) to perform over-representation analysis on the input HCC hub gene list. Based on high-quality, up-to-date data across different evidence types, g:Profiler provides a reliable service [32]. It maps these immune genes to known functional information sources and detects statistically significantly enriched terms. We included pathways from KEGG (https://www.genome.jp/kegg/), Reactome (https://reactome.org/), and CORUM (http://mips.helmholtz-muenchen.de/corum/). This was done with the hypergeometric test followed by FDR correction for multiple testing.

Construction and Validation of an Immunoscore Prognostic Model

With the univariable Cox proportional hazards regression model in the 'survival' package, we calculated the hazard ratios of the DEGs in the HCC cohort. DEGs with significance p value < 0.05 were analyzed, and we re-fitted these survival-associated DEGs in Cox regression models with the glmnet package [32] to select the more important prognostic genes among the DEGs. As an R package, glmnet fits a generalized linear model via penalized maximum likelihood. The regularization path was computed for the lasso by setting the mixing parameter alpha to 1. To predict patient survival, a formula for the ImmuneRiskScore model was established as follows: Immune Risk Score = Σi (Cox coefficient of gene i × expression of gene i).

Estimation of Immune Cell Type Fractions and ESTIMATE score

CIBERSORT [33] can characterize the cell composition of complex tissues from their gene expression profiles. In this study, we used CIBERSORT and the LM22 reference gene expression matrix to quantify cell composition in diverse HCC samples. Analysis of normalized gene expression data was carried out using the CIBERSORT algorithm, running with 1,000 permutations.
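Over-representation with a hypergeometric test, as used by g:Profiler, asks how surprising it is to find k annotated genes in a list of n, drawn from a universe of N genes of which K carry the annotation. A self-contained sketch with arbitrary toy numbers (not counts from this study):

```python
from math import comb

def hypergeom_enrich_p(N, K, n, k):
    """Upper-tail hypergeometric probability P(X >= k): the chance of
    observing at least k annotated genes in a random sample of n genes
    from a universe of N genes containing K annotated ones."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# toy numbers: universe of 100 genes, 10 carry the term,
# and 5 of the 10 hub genes in the list are annotated
p = hypergeom_enrich_p(N=100, K=10, n=10, k=5)
print(f"p = {p:.2e}")  # a small p-value: this overlap is unlikely by chance
```

In practice each tested term yields one such p-value, and the collection is then corrected for multiple testing (BH/FDR) before applying the significance cut-off.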
Calculation of immune and stromal scores was performed with ESTIMATE, an algorithm that provides scores for the level of stromal cells present and the infiltration level of immune cells in tumor tissues [34].

Statistical Analysis

The survival curves for the hub immune genes were created by the Kaplan-Meier method, and the statistical significance of differences was judged by the log-rank test. The receiver operating characteristic (ROC) curve was applied to describe the sensitivity and specificity of survival prediction based on the Immune Risk Score, and the pROC package was utilized to quantify the area under the curve (AUC). The nonparametric Mann-Whitney-Wilcoxon test was used to compare data from different groups, and Pearson's chi-square test was performed to measure the significance of associations among variables. All statistical analyses were conducted using R software. All of the reported P values were two-tailed, and p < 0.05 was considered statistically significant.

Identification of immune-related DEGs

From the TCGA database, we obtained the expression profiles of 417 HCC samples, comprising 367 tumor samples and 50 non-tumor ones after data preprocessing. In total, 7194 genes were identified as DEGs with the threshold of P < 0.05 and |log2FC| > 1.5, comprising 3657 upregulated genes and 3537 downregulated genes (Fig. 2A, Tables S3). The samples could be well clustered into normal and tumor groups when the top 200 DEGs were selected for unsupervised hierarchical clustering (Fig. 2B). To obtain the immune-related DEGs for the HCC samples, we selected the genes that were both immune-related and differentially expressed in HCC. In total, we collected 2542 human immune-related genes, composed of the 1811 immune-related genes from ImmPort and the 1052 innate immune-related genes from InnateDB. Finally, there were 866 genes within all the DEGs when intersecting with the immune genes (Fig.
1C, Tables S4), and these immune-related DEGs were then chosen for further analysis. The further GO enrichment analysis of these 866 immune-related DEGs showed that 1008 GO terms, including 891 biological processes (BPs), 39 molecular functions (MFs), and 78 cellular components (CCs), were significantly enriched (Tables S5). As shown in Fig. 3A, the top 10 GO terms are displayed by category. Leukocyte migration, positive regulation of cytokine production, cell chemotaxis, leukocyte cell-cell adhesion, and T lymphocyte activation are the top enriched biological processes; the enriched cellular components include collagen-containing extracellular matrix, external side of plasma membrane, collagen trimer, secretory granule membrane, and extracellular matrix component; the enriched molecular functions include structural constituent of extracellular matrix, carbohydrate binding, receptor ligand activity, cytokine activity, and glycosaminoglycan binding. More details of the top GO enrichment results, with an overview of their genes, are shown in Fig. 3B, C, D. These circle plots could significantly help us understand the function of the DEGs in these enriched terms, and it is often more meaningful to further represent genes together with their various functions. Furthermore, analysis of KEGG pathway enrichment (Tables S6) indicated that significantly enriched pathways include cytokine-cytokine receptor interaction, viral protein interaction with cytokine and cytokine receptor, cell adhesion molecules (CAMs), chemokine signaling pathway, and malaria (Fig. 3E). The top 10 pathways and their assigned genes are shown in Fig. 3F. The KEGG pathway analysis results differed from the GO enrichment results, indicating that the immune microenvironment of HCC is sophisticated.

Weighted Co-expression Network Construction and Key Module Identification

The 866 immune-related DEGs and 367 HCC tumor samples were used to construct the gene co-expression network.
After checking for missing values, we detected outlier samples by hierarchical clustering (Figure S1); the dendrogram shows 5 outlier samples, which were removed from the analysis. In this study, β = 4 (scale-free R2 = 0.85) was selected as the soft-thresholding parameter to ensure a scale-free network. As shown in Fig. 4A, we performed network topology analysis for thresholding powers from 1 to 20, and 4 was the lowest power at which the scale-free topology fit index reached 0.85. We obtained a gene clustering tree by hierarchical clustering of TOM-based dissimilarity and identified 6 modules (Fig. 4B, Table 1). To select the clinically significant modules, we used WGCNA to calculate the correlations between the external clinical information and the gene modules. As shown in Fig. 4C, the green module showed the highest association with overall survival, and the green and brown eigengenes were highly related (Figure S2). We visualized the green module as a network in Cytoscape and selected the top 100 gene pairs by weight. As shown in Fig. 4D, genes drawn with larger size, such as PDLIM7, EHHADH, DMGDH, and CYP8B1, have higher node degree. Finally, we screened 144 immune hub genes with high gene significance for OS and OS time (> 0.1) and high membership in the green module of interest (> 0.5) (Table S7). Statistical gene set enrichment analysis of the 144 immune hub genes was performed to find over-representation of functions from biological pathways such as KEGG and Reactome and complexes in CORUM, using the hypergeometric test followed by correction for multiple testing. The results showed that the immune hub genes were significantly enriched in 52 pathways and complexes (p < 0.05) (Fig.
4E; Table S8), including KEGG pathways such as the TNF signaling pathway and metabolic pathways, Reactome pathways such as phenylalanine and tyrosine metabolism, and CORUM complexes such as the PLAUR-PLAU complex, IGF2R-PLAUR-PLAU complex, MAK-ACTR-AR complex, and IGF2R-PLG-PLAU-PLAUR-LTGFbeta1 complex. These findings show that the hub genes not only affect metabolism, apoptosis, cell survival, inflammation, and immunity in HCC, but also play a pivotal role in the protein complexes of immune cells.

Establishment of the lasso Cox-based prognostic gene signature

We subsequently found that 108 of the 144 immune hub genes were significantly related to OS by univariate Cox regression analysis (results in Table S9). We then performed lasso-penalized Cox analysis to further narrow down the hub genes (Fig. 5A, B). Seven genes were identified and used to construct an ImmuneRiskScore model to evaluate the prognosis of HCC patients. The seven genes and their Cox coefficients are shown in Table 2. GO enrichment analysis showed that these seven genes were enriched in several molecular functions, such as asparaginase activity, beta-aspartyl-peptidase activity, 6-phosphofructokinase activity, glucose-6-phosphatase activity, and sugar-terminal-phosphatase activity. The formula for the ImmuneRiskScore model is described in Materials and Methods. Next, we divided the HCC patients into high- and low-score groups based on the lasso Cox hub genes and the ImmuneRiskScore, according to the optimal cut-off obtained from the survminer package. The results indicated that five genes (PLBD1, ETV4, PFKP, GNAZ, ASRGL1) are risk factors, with high-score samples having worse OS than low-score ones, while SPP2 and G6PC are protective factors (Fig. 5C). The last panel in Fig. 5C shows the prognostic accuracy of the ImmuneRiskScore (HR 0.48, 95% CI 0.33-0.68, log-rank test p < 0.0001).
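The exact ImmuneRiskScore formula is deferred to Materials and Methods; as a hedged sketch, a lasso-Cox signature of this kind is conventionally the sum of each signature gene's expression weighted by its Cox coefficient, with risk factors carrying positive weights and protective genes negative ones. The gene names below are from the paper, but the coefficient and expression values are invented placeholders, not the fitted model.

```python
def immune_risk_score(expression, coefficients):
    """Sum of coefficient * expression over the signature genes."""
    return sum(coefficients[g] * expression[g] for g in coefficients)

# Hypothetical coefficients: positive for risk factors, negative for protective.
coefs = {"PLBD1": 0.21, "ETV4": 0.15, "PFKP": 0.18, "GNAZ": 0.12,
         "ASRGL1": 0.09, "SPP2": -0.11, "G6PC": -0.08}

# Hypothetical normalized expression values for one patient.
sample = {"PLBD1": 2.0, "ETV4": 1.0, "PFKP": 0.5, "GNAZ": 1.5,
          "ASRGL1": 0.8, "SPP2": 3.0, "G6PC": 2.5}

score = immune_risk_score(sample, coefs)
# Patients are then split into high- and low-score groups at an optimal
# cut-off (the paper derives it with survminer); 0.5 here is arbitrary.
high_risk = score > 0.5
```

The stratification into two groups then feeds the Kaplan-Meier and log-rank comparisons shown in Fig. 5C.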
Additionally, multivariate Cox regression analysis showed that the predictive value of the ImmuneRiskScore for HCC patients is independent of common clinical variables (Table S10).

Validation of the ImmuneRiskScore in an independent GEO cohort

To further investigate the prognostic value of the ImmuneRiskScore, we conducted a validation analysis in an independent GEO cohort (GSE14520). The dataset was categorized into two groups based on the ImmuneRiskScore; the results indicate a significant prognostic association between the ImmuneRiskScore and both OS and recurrence, with the high-score group having a worse OS rate than the low-score one, just as in the TCGA cohort (Fig. 6A, B). Figure 6C shows the prognostic accuracy of the ImmuneRiskScore in the GEO dataset, where it was treated as a continuous variable. The areas under the ROC curves (AUC) of the prognostic model for OS were 0.608 at 1 year, 0.614 at 3 years, and 0.620 at 5 years. These results indicate that the ImmuneRiskScore is a good model for predicting survival.

Stromal and immune cell infiltration landscapes between high and low ImmuneRiskScore groups

Using the ESTIMATE algorithm, we estimated the infiltrating cells and tumor purity of the tumor tissue. The stromal score represents the presence of stromal cells in tumor tissue, the immune score indicates the infiltration of immune cells, and their combination yields a measure of tumor purity (Table S11). We then examined differences in stromal and immune scores between high- and low-risk HCC patients. As shown in Fig. 7A, B, the immune and stromal scores in high-score patients were both significantly (Wilcoxon p < 0.05) higher than in low-score HCC patients, indicating that the overall levels of immune and stromal infiltration are associated with the ImmuneRiskScore. We further estimated the proportions of 22 immune cell categories in the HCC patients using the CIBERSORT method; the results are shown in Table S12.
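The group comparisons above (Wilcoxon p < 0.05) rest on the rank-sum statistic. The paper computed these tests in R; as an illustration only, a minimal pure-Python version of the Mann-Whitney U statistic (midranks for ties) is sketched below on invented toy scores. A p-value would then follow from the exact distribution or the normal approximation of U.

```python
def mann_whitney_u(x, y):
    """U statistic for sample x versus sample y, using midranks for ties."""
    n1 = len(x)
    allv = sorted(list(x) + list(y))

    def midrank(v):
        lo = allv.index(v)                 # first position of v (0-based)
        hi = lo + allv.count(v) - 1        # last position of v
        return (lo + hi) / 2 + 1           # 1-based midrank

    r1 = sum(midrank(v) for v in x)        # rank sum of sample x
    return r1 - n1 * (n1 + 1) / 2          # U = R1 - n1(n1+1)/2

# Toy immune scores for two hypothetical patient groups:
high_group = [1820.0, 2100.5, 1990.0, 2300.2]
low_group = [900.0, 1200.5, 1100.0, 1350.7]
u = mann_whitney_u(high_group, low_group)  # 16.0 = n1*n2, complete separation
```

With complete separation of the two samples, U reaches its maximum n1*n2, corresponding to the smallest attainable p-value for these sample sizes.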
The proportions of immune cells were heterogeneous not only between the high- and low-score ImmuneRiskScore groups but also among the HCC samples.

Correlations between the ImmuneRiskScore and immune biomarkers

The discovery of broad immune biomarkers of the TME could effectively predict clinical benefit from ICIs. We next examined several important immune biomarkers, including PDL1, PD1, PDL2, CTLA4, CYT, and IFN-gamma. Among these, the immune checkpoint genes PDL1, PD1, PDL2, and CTLA4 deserve particular attention, since immune checkpoint inhibitors are revolutionizing the clinical treatment landscape. Cytolytic activity (CYT) focuses on cytotoxic T cells (CTLs) and natural killer (NK) cells for their powerful ability to lyse tumor cells [35]; CYT is measured as the geometric mean of the expression of granzyme A (GZMA) and perforin (PRF1). The interferon-γ (IFN-gamma) signature reflects a key cytokine that activates the PD-1 signaling axis by directly upregulating the ligands PD-L1 and PD-L2, mainly in tumor cells; IFN-γ is produced by activated T cells, NK cells, and NKT cells [36,37]. Tumor mutational burden (TMB) is defined as the number of nonsynonymous mutations detected per megabase sequenced. TMB has been shown to be associated with improved responses to checkpoint blockade in some tumors, such as melanoma [38] and non-small-cell lung cancer [39,40]. An experimentally determined pan-fibroblast TGF-β response signature (Pan-F-TBRS) showed elevated mean expression in non-responders and decreased overall survival, particularly in patients with mUC [41]. Transforming growth factor β (TGF-β) signaling in fibroblasts is documented as a pleiotropic cytokine pathway associated with poor prognosis in multiple tumor categories [42,43], and it is thought to be vital in advanced cancers in promoting immunosuppression, angiogenesis, metastasis, tumor cell epithelial-to-mesenchymal transition (EMT), fibroblast activation, and desmoplasia [44][45][46].
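Two of the biomarker definitions above reduce to simple formulas: CYT is the geometric mean of GZMA and PRF1 expression, and TMB is nonsynonymous mutations per megabase sequenced. A small sketch with invented toy values (the paper's computation was done in R on real expression and mutation data):

```python
import math

def cytolytic_activity(gzma, prf1):
    """CYT: geometric mean of GZMA and PRF1 expression values."""
    return math.sqrt(gzma * prf1)

def tumor_mutational_burden(nonsynonymous_mutations, megabases_sequenced):
    """TMB: nonsynonymous mutations detected per megabase sequenced."""
    return nonsynonymous_mutations / megabases_sequenced

cyt = cytolytic_activity(4.0, 9.0)       # sqrt(4 * 9) = 6.0
tmb = tumor_mutational_burden(150, 30)   # 150 / 30 = 5.0 mutations/Mb
```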
We further explored the relationship between the ImmuneRiskScore and these immune biomarkers (Tables S13, S14). All of the significant correlation p values were smaller than 0.001, suggesting that the ImmuneRiskScore might be a potential biomarker for immunotherapy, especially for immune checkpoint inhibitors. In addition, there was no correlation between the ImmuneRiskScore and TMB, indicating that TMB is an independent factor mediating the TME in these two groups.

Discussion

Increasing evidence indicates that immune-related biomarkers are associated with prognosis in various cancer types [47][48][49]. Moreover, prognostic biomarkers that effectively guide cancer therapy, especially immunotherapy, are still needed. Therefore, we constructed an ImmuneRiskScore that contributes to predicting overall survival by investigating the tumor microenvironment of HCC. Based on the gene expression data of HCC from TCGA, we identified a prognostic signature of seven immune hub genes (SPP2, G6PC, PLBD1, ETV4, PFKP, GNAZ, ASRGL1) by combining WGCNA, univariate Cox regression analysis, and the LASSO PH model. We then constructed a seven-gene risk scoring system and classified HCC patients into two risk groups with significantly different survival rates. The prognostic performance of the risk scoring model was successfully validated in an independent set from GEO. This shows that the seven-gene ImmuneRiskScore is a promising prognostic biomarker for HCC and may play an important role in the TME of HCC. Generally, the ImmuneRiskScore mainly reflects the state of the TME, which comprises complex interactions between cancer cells, stromal cells, and immune cells. Given this, we aimed to further explore the relationship between them.
We found that the relative or absolute infiltration levels of memory B cells, plasma cells, activated memory CD4 T cells, regulatory T cells (Tregs), resting NK cells, M0 macrophages, resting dendritic cells, resting mast cells, and neutrophils are significantly correlated with the ImmuneRiskScore, indicating that these cells are likely to have prognostic value. For example, infiltration of macrophages in solid tumors is associated with poor prognosis and correlates with chemotherapy resistance in most cancers [50]. It is noteworthy that these cell types do not include most of the T cell compartment, on which clinical response strategies have largely focused. In fact, other immune cells may also contribute to anti-tumor immunity [51][52][53]; memory B cells in particular may play a role in the response to ICB treatment [54]. With the aim of effectively predicting clinical benefit from checkpoint inhibitor strategies, we performed a wider exploration of active innate and adaptive immune responses within the tumor microenvironment by gene expression profiling. The predictive value of our immune-related score is supported by its positive correlations with PD-L1, PD-1, PD-L2, CTLA-4, CYT, IFN-gamma, and Pan-F-TBRS. These positively correlated biomarkers are involved in the proinflammatory-cytokine-related inflamed tumor microenvironment [55,56] and the TGF-β-signaling-related excluded tumor microenvironment [41]. Consistent with the preceding infiltration results, the inflammation of the TME can be measured by the cellular content of the tumor, for example infiltrating T and B cells. Inflamed tumors also contain a broad proinflammatory cytokine profile and a type I interferon signature indicating activation of the innate immune response, whereas TGF-β can drive an excluded phenotype in the TME through its impact on stromal cells, preventing T cell penetration into the center of the tumor [41].
These results indicate that the anti-tumor immunological effect is a bidirectional and dynamic system in the tumor microenvironment (TME). This biomarker analysis should help unravel the complexities of the interactions and molecular mechanisms between cancer and the host immune system. In general, we gained a comprehensive insight into the TME of HCC and created a prognostic immune-related score that might serve as a potential prognostic and predictive biomarker. Comparisons of statistical differences between the two groups were performed with the Wilcoxon rank-sum test. Each boxplot is labeled with asterisks for p values (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001).

Availability of data and materials

The datasets generated and analyzed during the current study are available in TCGA (https://xena.ucsc.edu/) and the GEO repository under accession GSE14520 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE14520).

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Figure legend (beginning truncated): ... and KEGG (B). The top 10 terms by adjusted p-value were selected for display. The x-axis of the plot represents the z-score; the -log(adjusted p-value) is displayed on the y-axis, corresponding to the significance of the term. The area of each plotted circle is proportional to the number of genes assigned to the term. Each circle is colored according to its category and labeled with its ID. The yellow line represents a threshold for the displayed labels (adjusted p-value < 0.05).

Figure 8. Correlation scatterplots between the ImmuneRiskScore and immune checkpoint blockade (ICB)-associated biomarkers, combined with density plots of expression distributions. The ICB biomarkers include PDL1, PD1, PDL2, CTLA4, CYT, IFN-gamma, Pan_F_TBRS, and TMB.
Workforce Development, a Mechanism for Stimulating Economic Growth in Albania

This article aims to explore the concept of workforce development as an effective approach with great impact on economic growth. The literature has shown that workforce development creates substantial benefits for economic growth, especially at times of economic constraint. Although more than 20 years have passed since the collapse of the communist system, Albania still faces difficult economic conditions. Recent government reforms for improving the quality of higher education and promoting vocational training will not have a sustainable impact without coordination between different stakeholders. Workforce development aims at creating such cooperation between stakeholders across a wide range of activities, policies and programs in order to sustain and retain a viable workforce that can support current and future industry requirements. Workforce development contributes to lifelong learning and to the development of proper skills and expertise, which can benefit the country's economic development and social welfare, especially considering the globalization phenomenon and Albania's potential in the region.

1. Introduction

Human capital development is key to the economic success of a country, as it contributes to enhancing the potential for growth and improving productivity; human development strategies are therefore placed increasingly under focus with the growing effect of globalization. Workforce development can be strongly linked to the lifelong learning philosophy. In a global economy, demands shift and many individuals need to reconsider their area of expertise and upgrade or learn new skills; the workforce development approach therefore comes in handy for greater impact.
Although the concept of the workforce is not new, the concept of workforce development was first used in 1995, when Harrison, Weiss and Gant drew a distinction between 'employment training', which emphasizes the skills supply side, and 'workforce development', which aims to explore the nature of employer demand. The concept of workforce development directs attention to the way in which companies collaborate with other organizations, particularly labor market intermediaries, and the ways in which they source, recruit and develop labor (Sutton, 2001). The aim of workforce development encompasses a wide range of activities, policies and programs intended to sustain and retain a viable workforce that can support current and future industry requirements. The workforce development approach contributes to achieving a skilled and productive workforce by reducing the gap between the skills demanded by employers and those supplied by the workforce.

2. Research Objectives

It has become increasingly clear that the well-being of nations, considered from both an economic and a social perspective, depends in large measure on human resources (Ashton & Green, 1999). The economic challenges that Albania faces, especially relating to employment, education and economic growth, raise the dilemma of how far the quality provided by higher education and vocational education matches what the labor market demands in terms of knowledge, skills and capacities. International experience has shown that if the provision of skills is not appropriately linked with market needs, the impact on growth and development is not maximized. The workforce development approach has made a positive contribution to human capital development, especially at times of economic difficulty; such an approach can therefore be beneficial for Albania, considering its level of economic development and challenges for growth.

Mediterranean Journal of Social Sciences, ISSN 2039-9340 (print)

This article
aims to raise awareness of workforce development among academics and policy makers in Albania by exploring the concept of workforce development and the different models implemented in other countries. The article also explores the project 'Rritje Albania', which placed a special focus on workforce development in four export-oriented industries in the country. The approach used by this project is unique for Albania, considering the methodology used and the scale of intervention. Workforce development contributes to lifelong learning and the development of proper skills and expertise, which can benefit the country's economic development and social welfare, especially considering the globalization phenomenon and Albania's potential in the region. This article was prepared on the basis of secondary research, primarily in the form of a literature review. Different sources such as articles, reports, surveys, policies and strategic documents were consulted in the process of preparing this article.

3. Workforce Dynamics in Albania

The transition from a state-run to an open market economy, following the collapse of the communist system in 1991, confronted Albania with a number of social and economic problems due to changes in the economic structure, the failure of state-supported industry, rising unemployment and an unstable political environment. These changes were accompanied by high internal and external migration, school drop-out among youth and children, women staying at home, changes in the workforce structure, loss of or outdated job skills, and uncertainty about the future.
Internal and external migration: Internal migration rates were very high, with patterns of movement from rural to urban areas. In 2001 the rural population was 53%, while by 2011 it had fallen to 46% of the total population (INSTAT, 2012). Albania has faced high rates of emigration, estimated at 1.1 million in total since 1990. The population decreased to 2.82 million in 2011 (INSTAT, 2012) and the average age increased from 30.6 years in 2001 to 35.3 years in 2011. The main causes of increased emigration were poverty, lack of employment, low wages, poor labor conditions, and unpromising perspectives, especially regarding economic development. Albanian emigrants have mainly been young males with high levels of education. The destinations chosen have mainly been Greece (50%) and Italy (25%), due to geographical proximity and cultural bonds, as well as the USA, the UK and other western European countries (25%) (Siar et al., 2008). Emigration has also affected a significant group of talented and successful Albanian students who remained abroad after finishing university or post-graduate studies there.

Employment: The workforce participation rate for the population between 15 and 64 years old in 2013 was 59.9%. During that year, employment rates decreased by 11.2% compared to the previous year, and total employment was dominated by the agricultural sector, with about 44.6%, and the services sector, with 37.9%. The structure of employment by status showed that employees accounted for 40.2% of total employment in 2013, while the self-employed accounted for 25.6%. Contributing family workers accounted for about one-third of total employment, with females 1.8 times more likely than males to work as contributing family workers. Informal employment accounted for 43% of total employment in non-agricultural sectors.
Unemployment: According to INSTAT (2014), the unemployment level in Albania in 2013 was 15.6%, an increase of 2.2% compared to the previous year. The unemployment rate for youth aged 15 to 24 was 30.2%, an increase of 2.3% over the previous year, while the biggest share of unemployment (72.8%) was accounted for by individuals who had been unemployed for more than a year. In 2013, although the unemployment rate of people with university education decreased by 1.8% compared to the previous year, and the proportion of unemployed individuals with only primary and secondary education increased, the unemployment rate remained higher for individuals with university education (about 14.7%) than for those with primary (10.2%) and secondary education (14.2%).

Employment is considered a major contributor to economic growth. Policy development, reforms and investment to improve workforce knowledge and skills, creating access to the job market, and improving the entrepreneurial environment are key interventions for increasing employment, people's welfare and economic growth. Since Albania faces high rates of unemployment among youth, the political agenda has recently shown increased attention to vocational education and training, as well as reforms to improve the quality of higher education. However, a number of questions need to be considered regarding these reforms:
- How far do the curricula and quality provided by vocational training and higher education match the needs of the labor market, especially regarding technological improvement and innovation?
- How far is the private sector part of the dialogue with educational institutions in deciding on training curricula, and how can the quality of the curricula be improved in order to provide the capacities the labor market requires?
- How can individuals choose a proper education considering their career development options?

As there is limited research on the labor market, both individuals searching for educational services and entities providing training and education services lack the proper information that would help them make the right choice. Based on other countries' experience, the workforce development perspective can help to overcome such difficulties. Albania has the potential to learn from best practices in this field and to create successful models which will, in return, increase the impact on education and employment and support policy development in an integrated way by bringing together different actors, such as employers, training institutes, vocational schools, universities and governmental structures, in order to achieve a bigger impact.

4. Workforce Development Concept

Workforce development is frequently misunderstood, as many people think of it as only job training. The literature shows that there are different definitions of workforce development; however, they all aim at human capital development based on labor market needs and access to employment. Harrison and Weiss (1998) defined workforce development as the "constellation of activities from orientation to the work world, recruitment, placement and mentoring to follow-up counseling and crisis intervention". Workforce development does not relate only to education and training. Training is only one component of workforce development; the workforce development approach, however, contributes to a lifelong learning process. The Government of South Australia (2003) defines workforce development as 'those activities which increase the capacity of individuals to participate effectively in the workforce throughout their whole working life, and which increase the capacity of firms to adopt high performance work practices that support their employees to develop the full range of their potential skills and values'.
Workforce development has evolved to describe a relatively wide range of national and international policies and programs related to learning for work (Jacobs & Hawley, 2007). The approach does not focus solely on the education and training needs of the individual, but adopts a system-based approach that considers the wider issues that need to be understood and addressed to enable the workforce to be developed appropriately (Staron, 2008). According to Jacobs and Hawley (2007), the emergence of workforce development as a new concept comes from the contemporary intersection of five interrelated streams: globalization, technology, the new economy, political change, and demographic shifts. Globalization increases competition between countries due to open markets and is supported by technology, especially information technology, which shortens the geographical distances between countries. The new technologies being introduced create more complexity in the work process, requiring employees to adapt frequently to diverse production requirements. Workforce development is linked to entry and/or re-entry into the labor market, adapting to changes in the labor market, and lifelong learning.
According to Haralson (2010), the workforce development approach can be viewed from different perspectives:
- From the individual perspective, workforce development is a combination of services, community support, job training and education that positions an individual for success in the workforce. Individuals cannot make a contribution to society without access to training and education.
- From the societal perspective, workforce development consists of initiatives that train and educate individuals to meet the current and future needs of the labor market, in order to maintain a sustainable, competitive economic environment.
- From the organizational perspective, workforce development is defined as training programs that provide existing and potential workers with the skills to complete tasks that support organizations in remaining competitive in a global marketplace.

In order to achieve greater impact, workforce development requires a high degree of cooperation between training or education providers, industry, government and individual companies. Buchanan and Hall (2003) argue that it is critical to focus on industry needs to demand, deploy and develop higher skills.

5. Different Models for Workforce Development

The workforce development approach has been widely used in different countries, such as the United States, the United Kingdom, Singapore, Australia and Germany. The concept of workforce development in the United States includes a wide range of activities, policies and programs, many of which are indistinguishable from vocational education and training (Jacobs, 2002). The idea of the concept is to ensure individuals a sustainable livelihood through the development of training and skills policy, alongside ensuring that employers' skill demands can be met.
The concept of workforce development in the United Kingdom consists of activities which increase the capacity of individuals to participate effectively in the workforce, thereby improving their productivity and employability. Workforce development is seen as broader in scope than training, but narrower than education, as an element of lifelong learning focusing on labor demand. The scope of workforce development in the United Kingdom is to stimulate the development of new skills in order to increase productivity, foster social inclusion and prepare the economy for the future. Different models of workforce development have been developed in different countries at the industry, regional and national levels. This article explores four models of workforce development.

'Sector Panels'

This model aims at giving employers a guiding role. It was developed by King County in Washington State in the United States. The county population in 2010 was 1.93 million people, which accounts for about 28% of the Washington State population according to the United States Census Bureau. In order to contribute to workforce development, the county developed a model which relies on the engagement of employers, whose input is very important for the effective preparation of workers for job openings that will exist in the near future. The Workforce Development Council of the county developed an effective instrument to build close collaboration with employers, the so-called 'Sector Panel', which convenes a group of employers in one particular industry to focus on the workforce needs of that industry. The panel comprises employers and industry leaders, who occupy at least 50% of the seats, as well as training and education agencies, labor unions, economic development agencies and community-based organizations.
A sector panel runs for anywhere between 6 and 18 months and begins with a detailed labor market analysis. The panels contribute to a labor market survey for the industry, training curricula for preparing workers for future jobs, and a career pathway for entry-level workers. The county has established sector panels in several industries, such as healthcare, green construction, interactive media and life sciences. By means of this instrument, the Workforce Development Council of King County has been able to raise financial support from the U.S. Department of Labor for training and education in specific fields.

'Strategic Pillars'

This model was developed by the New Zealand Ministry of Health. It addresses systems and organizational strategies through five strategic pillars for workforce development, on which the model is based. The model is applied through projects and programs under each pillar.

Infrastructure development: The focus of the model is placed on creating a national and regional infrastructure which supports stakeholders in progressing workforce development in an efficient and effective manner. Such infrastructure aims to avoid the risk of delivering inefficient, fragmented and replicated services due to a lack of coordination among stakeholders. This aim is achieved through the development of cross-sector functional networks with good lines of communication and coordination; the development of improved funding mechanisms to support new models of care and training; and the development of programs to monitor the progress and success of workforce development activities and the implementation of workforce policy and regulation.

Organizational development: The aim is to assist the sector in developing the organizational culture and systems necessary to sustain its workforce. Positive organizational cultures and strong leadership contribute to attracting and retaining employees, as well as to achieving high levels of productivity, efficiency and
customer satisfaction. The model is based on the philosophy that, during economic difficulties, when organizations cannot afford to pay higher salaries, they can promote healthy workplace practices, positive and flexible workplace environments, and other health, career and personal benefits to attract and retain employees.

Training and development: This pillar is built around coordinating disparate elements of the sector into a framework that is relevant for all parties, with a qualifications framework that meets service provider requirements and takes into account existing competencies. The need for alignment between educational program providers, professional models of practice and changing service delivery needs has been identified as a central issue in this model; changes to education and training therefore provide the key to successfully implementing new models of practice and the new roles that will be needed to increase workforce productivity. In this way, as demand changes over time, the workforce retains core sets of highly transferable competencies.

Retention and recruitment: One of the pillars of the model is to develop a national and regional response to issues of retention and recruitment, providing medium- to long-term solutions and thereby reducing the reactive, crisis-driven approach. Good recruitment policies ensure that people with the right capacities, and the right mix of people, are employed within an organization. However, recruitment policies can be effective only when there is a pool of appropriately qualified workers to recruit from. Recruitment policies therefore rest on the foundation of a well-functioning training and education system, and are closely linked to organizational development activities. Retention is key to the effective use of workforce skills. The reasons for staff turnover vary, but they usually include such
things as incentives, rewards and environment, family commitments, retirement, performance, and alternative careers and opportunities. High turnover leads to increased recruitment costs, loss of institutional knowledge, workplace stress, and reductions in the quality of care provided, which in turn affect productivity. In order to promote recruitment and retention, the model uses different strategies, such as branding and career promotion in schools; career pathways and professional development; and support for new staff through the transition from training to practice. Research and evaluation - This pillar aims to ensure that information is available to the sector to inform workforce development and to seek a better understanding of the effectiveness of workforce development expenditure. It defines research and evaluation as they relate to the capacity and capability of the workforce, the work produced, and the environment or context in which work is carried out.

Singapore Workforce Development Agency

The Singapore Workforce Development Agency is an example of a national agency established to enhance the competitiveness and employability of workers and jobseekers by helping them adapt to a changing economy. To achieve this, the agency works with industry leaders, labor unions, employers, economic agencies, professional associations and training organizations to develop and implement programs that support Singapore's economic development. The agency supports the growth of Singapore's industries by building a pipeline of competent workers through the constant upgrading of workers' skills and the raising of industrial performance standards (WDA, 2008).
The Singapore Workforce Development Agency was established in September 2003, during the early-2000s economic crisis, to help the workforce cope through training and skills upgrading. Its mission is to "lead, drive and champion workforce development, enhancing the employability and competitiveness of Singapore's workforce". Some of the programs managed by this agency are: SDF EasyNet: a website application that allows all Skills Development Fund transactions to be made via the internet. The system links up all its users, such as companies, training providers and the Skills Development Fund, for more efficient services. Lifelong Learning Endowment Fund: its objective is to enhance employment and employability through initiatives that promote and facilitate the acquisition of skills. The fund can be used to support employer-based, individual-based or community-based training; priority is given to programs targeting those who face greater challenges related to structural changes in the economy and labor market.

'Rritje Albania'

'Rritje Albania' literally means 'Grow Albania'. It is a USAID project which was implemented during 2009-2014. Although there have been several projects and programs in the past 20 years in Albania focusing on the improvement of the educational system, vocational training, employment, economic growth, and so on, little was done in terms of an overarching workforce development approach, as previous initiatives managed to cover only some elements of workforce development. 'Rritje Albania' created an integrated methodology which contributes to economic growth through, among other things, a workforce development component. This is a novelty for development program interventions in Albania, given the methodology used and the scale of the intervention.
The main objective of the project was to 'Increase sales and create new and better jobs by strengthening the competitiveness of non-agricultural enterprises', thereby contributing to the country's economic growth. Workforce development was one of the components of this project that would contribute to the achievement of the overall goal. The project focused its activities on four key export-oriented industries: tourism, garment, footwear, and information and communications technology. The main components of the project were: a) strengthening trade and investment capacity; b) increasing enterprise productivity; and c) improving workforce development. The scope of this article is to explore the workforce development concept; therefore, only a description of the intervention under this component is included. The project directly assisted 141 companies, which by the end of the project recorded an overall 51% increase in total sales over their annual baseline (taken before receiving project assistance) and a 5% increase in jobs. The intervention of the project was carried out at different levels and through the establishment of certain structures. In addition to direct assistance to individual companies in terms of operations and training, identifying labor demand, increasing jobs and sales, and on-the-job training, the project played a catalytic and influential role in shaping policy debates and creating new and effective working relations among key industry stakeholders. A working group was established per industry in order to maximize the project's impact and to help sustain growth in the target sectors after the project's end. These four groups were established: the Garment and Footwear Working Group; Intellectual Property Rights; the Tourism Group, the Western Balkans GeoTourism Stewardship Council; and the PROTIK ICT Resource Center. These working groups served as key public-private mechanisms for policy consultation. The project also identified willing and high-potential stakeholders,
beneficiaries, intermediaries, collaborators, and partnered with more than 75 public and private institutions. In doing so, the project aimed to leverage resources and capitalize on synergies to achieve more positive outcomes and greater impact. In order to sustain the results realized in industry and firm-level competitiveness, 'Rritje Albania' undertook a number of 'key legacy initiatives'. The ones that contribute to workforce development are related to improving the quality of the workforce through training and education: Hotel Certified Programs: 'Rritje Albania' worked to improve professional standards in Albania's hospitality sector through improved training and education programs for tourism schools and universities. The project established partnerships between the American Hotel & Lodging Educational Institute and an Albanian NGO to offer certified hospitality training programs for working tourism professionals, and a partnership with a private university in Albania to establish diploma programs in tourism. Responding to industry needs through university programs, vocational education and industry partnerships: the project paired foreign experts with the Textile and Fashion Department at the Polytechnic University of Tirana (the only one of its kind in Albania) to increase the ability of the university faculty to conduct consultancies; promote graduates of this program; improve testing and laboratory facilities; strengthen internship and career counseling programs; introduce technical curriculum enhancements, including the addition of footwear and leather topics in diploma programs; and develop lifelong learning courses.
With the assistance of the project, the university cooperated with three vocational schools to develop a new garment design course using computer-aided design and computer-aided manufacturing (CAD/CAM) technology. With the assistance of the project, Lectra Modaris, a world leader in CAD/CAM, provided 16 software licenses free of charge, while garment firms in Albania donated used sewing machines to vocational schools that previously had no production equipment. The project helped to establish partnerships between the private and educational sectors, thereby contributing to reducing the gap between workforce skill supply and demand which had previously constrained industry competitiveness. Career offices: although the law on higher education requires the establishment of career offices at universities, in most cases they exist only formally. 'Rritje Albania' led a successful initiative to create new career development offices at five large universities. These centers provide students with career counseling, internships, and help with job placement, thereby creating sustainable links between schools and employers.

Conclusions

The economic success of a country is strongly linked to the skills, talent and capacities of its human capital. The levels of education, the ability to access the job market, and the growing need for continuous learning throughout a lifetime have become imperatives for economic growth. Globalization and fast changes in technology, especially in information technology, require that the workforce undergo a continuous learning and training process. However, training and education programs do not always fully comply with labor market needs, and therefore the impact of training and education on economic growth is reduced.
A workforce development approach would maximize the contribution of different stakeholders to reduce the gap between the level of skills provided by the workforce and the ones demanded by the labor market. However, workforce development is not only based on skill supply, but also on improving the mechanisms by which employers' needs for skills can be effectively communicated to training and education providers. Workforce development includes multiple interventions in different aspects of legislation, policy, recruitment and retention, and different support mechanisms for implementation. Successful workforce development is characterized by effective communication mechanisms and partnerships between public and private education and training providers, employers and industry representatives, labor unions, employment agencies and governmental agencies. International experience has shown that workforce development creates greater impact on economic growth, especially at times of economic constraint. Given the current dynamics of the Albanian economy, an integrated workforce development approach would create better impact for policy and reforms in terms of education and employment growth. An integrated approach would maximize the impact of investment, since coordination amongst stakeholders would avoid replication and fragmented services. Workforce development provides proper mechanisms to match workforce skills with current and future labor market demand, and therefore contributes to economic growth; without such mechanisms, individuals are left to invest in their education without being certain about what the labor market really needs.
Skills Development Fund: This program provides funds to encourage employers to upgrade the skills of the workforce. Funds are accumulated through a skills development levy, and the Skills Development Fund is managed by the Workforce Development Agency. It offers assistance as an incentive for companies to develop training programs for employees. Incentives are offered on a cost-sharing basis; the training must be relevant to the economic development of Singapore, and the amount of incentives that a company can obtain is not tied to its levy contribution. Skills Development Levy: It is a statutory requirement for employers to make monthly contributions for their employees. All the levy funds accumulated are channeled to the Skills Development Fund, which is used to support workforce skills-upgrading programs and to provide training grants to employers who send their employees to attend training under the national Continuing Training System. The Skills Development Levy is managed by the Workforce Development Agency. These funds can support employer-based, individual-based or community-based training for increased employment and employability; programs are developed in partnership with industry/trade or employer associations, community organizations, and similar bodies.
Impact of Boron Acceptors on the TADF Properties of Ortho-Donor-Appended Triarylboron Emitters We report the impact of boron acceptors on the thermally activated delayed fluorescence (TADF) properties of ortho-donor-appended triarylboron compounds. Different boryl acceptor moieties, such as 9-boraanthryl (1), 10H-phenoxaboryl (2), and dimesitylboryl (BMes2, 3) groups have been introduced into an ortho donor (D)–acceptor (A) backbone structure containing a 9,9-diphenylacridine (DPAC) donor. X-ray crystal diffraction and NMR spectroscopy evidence the presence of steric congestion around the boron atom along with a highly twisted D–A structure. A short contact of 2.906 Å between the N and B atoms, which is indicative of an N → B nonbonding electronic interaction, is observed in the crystal structure of 2. All compounds are highly emissive (PLQYs = 90–99%) and display strong TADF properties in both solution and solid state. The fluorescence bands of cyclic boryl-containing 1 and 2 are substantially blue-shifted compared to that of BMes2-containing 3. In particular, the PL emission bandwidths of 1 and 2 are narrower than that of 3. High-efficiency TADF-OLEDs are realized using 1–3 as emitters. Among them, the devices based on the cyclic boryl emitters exhibit pure blue electroluminescence (EL) and narrower EL bands than the device with 3. Furthermore, the device fabricated with emitter 1 achieves a high external quantum efficiency of 25.8%. 
INTRODUCTION

Thermally activated delayed fluorescence (TADF) compounds have recently received great attention as efficient emitters in organic light-emitting diodes (OLEDs) because TADF-OLEDs can theoretically achieve nearly 100% internal quantum efficiency (η int) via the upconversion of triplet excitons into emissive singlet excitons through a thermally activated reverse intersystem crossing (RISC) process (Uoyama et al., 2012; Zhang et al., 2012, 2014; Dias et al., 2013; Tao et al., 2014; Hirata et al., 2015; Kaji et al., 2015; Im et al., 2017; Wong and Zysman-Colman, 2017; Yang et al., 2017; Cai and Su, 2018; Kim et al., 2018). To date, various types of TADF emitters have been reported, and those containing boron acceptor moieties have recently attracted special interest because of their excellent TADF properties. Boron acceptors such as triarylborons possess strong electron-accepting properties owing to their sp2-hybridized, tri-coordinate boron atom, which has a vacant p(B) orbital (Hirai et al., 2015; Hatakeyama et al., 2016; Turkoglu et al., 2017; Matsui et al., 2018). In addition to this electron deficiency, p(B)-π* conjugation between the boron atom and the linked π systems can lead to stabilization of the LUMO level. Thus, in combination with suitable donors, boron acceptors may form donor-acceptor (D-A) emitters exhibiting TADF. In fact, boron-based emitters have been successfully employed in blue and green OLEDs (Kitamoto et al., 2015, 2016; Numata et al., 2015; Suzuki et al., 2015; Liu et al., 2016; Lien et al., 2017; Chen et al., 2018a,b; Liang et al., 2018; Tsai et al., 2018; Meng et al., 2019; Wu et al., 2019). For example, TADF-OLEDs based on diboron and oxaborin emitters show excellent device performance with a very high external quantum efficiency (EQE) of over 38% (Ahn et al., 2019). Hatakeyama et al.
recently demonstrated that boron- and nitrogen-doped polycyclic aromatic hydrocarbons exhibit deep-blue emission with a very narrow band (full width at half maximum, FWHM = 18 nm) and achieve a high EQE of 34% when incorporated in OLEDs (Kondo et al., 2019). Our group also reported that ortho-donor-appended triarylboron compounds have strong TADF character because of their highly twisted D-A structure. We have shown that electronic modification of the donor and/or boryl acceptor moiety in the ortho compounds allows facile tuning of the HOMO and LUMO levels, which in turn tunes the emission color over the visible region (Kumar et al., 2019). In particular, TADF-OLEDs having ortho compounds as emitters display a high EQE of above 30% in blue OLED devices. These results demonstrate that boron compounds having an ortho-D-A scaffold constitute highly efficient TADF emitters due to their rigid backbone structure. Between the acyclic BMes2 group and cyclic boryl groups, the latter is beneficial for attaining a rigid structure with adjustable electronic effects (Hirai et al., 2019). To elucidate the impact of the boron acceptors on the TADF properties in more detail, we set out to explore a series of ortho-donor-appended triarylboron compounds (1-3), which contain different boryl acceptor moieties, such as BMes2 and cyclic boryl groups, and a fixed donor. The photophysical properties of these boron-based emitters are examined along with theoretical considerations. We demonstrate that the performance of TADF-OLEDs fabricated with these emitters varies with the type of acceptor, and a high EQE of 25.8% is realized in pure blue devices with the emitter containing a cyclic boryl acceptor.

Synthesis and Characterization

Triarylboron compounds (1-3) in which a 9,9-diphenylacridine (DPAC) donor (D) is linked to a boryl acceptor (A) at the ortho position of the phenylene ring were prepared (Scheme 1).
The cyclic boryl groups [9-boraanthryl (1) and 10H-phenoxaboryl (2)] and a dimesitylboryl group (BMes2, 3) were introduced as acceptors. Buchwald-Hartwig amination between 9,9-diphenyl-10H-acridine and 2-bromo-1-iodo-3-methylbenzene produced an ortho-DPAC-substituted bromobenzene intermediate, DPACoBr. The lithium salt of DPACoBr was then subjected to reaction with the corresponding boryl halides to afford the final ortho-DPAC-appended triarylboron compounds 1-3. As expected, compound 3, bearing a bulky BMes2 group, was very stable in air and water. In addition, compounds 1 and 2, having cyclic boryl groups, were stable under ambient conditions, presumably due to the steric protection of the boron center by the ortho-DPAC and -Me groups. Consequently, all compounds exhibited high thermal stability, as judged by their high decomposition temperatures (Td5) of over 320 °C. The compounds were characterized by multinuclear NMR spectroscopy, elemental analysis, and single-crystal X-ray crystallography. The 1H NMR spectra of 1 and 2 exhibit sharp signals for the cyclic boryl groups, whereas all the methyl and CMes-H protons on the two Mes groups in 3 give rise to separate singlets. This feature indicates the highly restricted motion of the bulky BMes2 moiety of 3 in solution, which can be attributed to the severe steric hindrance from the ortho-DPAC and -Me groups on the phenylene ring (Supplementary Figures 2-4). In particular, while the 11B NMR spectrum of 3 shows a broad signal at δ 84 ppm, typical of base-free triarylboron compounds (Wade et al., 2010; Lee et al., 2017), the 11B signals of 1 and 2 are observed at more upfield positions (δ 58 ppm for 1 and δ 65 ppm for 2). This can be mainly ascribed to the π-donation effect from the ipso-carbon atoms to the empty p(B) orbital due to the planar structure of the cyclic boryl moieties in 1 and 2 (Zhou et al., 2012).
X-ray diffraction studies conducted on DPACoOB (2) confirmed that it adopts an ortho D-A structure in which both the DPAC and OB rings are almost orthogonal to the phenylene ring (dihedral angles: DPAC-Ph = 86.4° and OB-Ph = 83.7°), thus facing each other (Figure 1). The distance between the N1 and B1 atoms is short (2.906 Å), lying within the sum of the van der Waals radii of the two atoms. This may indicate the presence of an N→B nonbonding electronic interaction. Interestingly, the DPAC ring is puckered at the 9-position, because of the sp3 character of the 9-carbon atom, in such a way that one peripheral Ph ring protrudes right above the OB ring. Although the nearest contact between the Ph and OB rings (C42···O1) is relatively long (3.462 Å), this feature might indicate the occurrence of a π-π interaction between the Ph and OB planes. These structural aspects suggest that the boron atom in 2 is sterically and electronically protected, which contributes to its chemical and thermal stability. In the boryl moiety, the boron atom possesses a trigonal planar geometry with a Σ(C-B-C) of 359.7°. It is noteworthy that the two B-C(OB) bond lengths (1.532 and 1.528 Å) are much shorter than that of the B-C(Ph) bond (1.580 Å), the latter being within the range usually found for triarylborons such as Ph3B (Zettler et al., 1974) and Mes3B (1.57-1.59 Å) (Blount et al., 1973). As similarly noted for the upfield shifts of the 11B signals, this finding can be attributed to the presence of p(B)-π interactions in the oxaborin ring, which shorten the B-C(OB) bond lengths.

Photophysical and Electrochemical Properties

The photophysical properties of the compounds were investigated by UV/vis absorption and photoluminescence (PL) measurements in toluene (Figure 2 and Table 1). The intense high-energy absorptions observed at ca. 290 nm can be mainly attributed to the local π-π* transitions centered on the DPAC donor, and the broad bands or shoulders ranging from ca.
300 to 350 nm are assignable to the boryl-centered π-π* transitions with π(Ar)-p(B) CT character (see the DFT results below) (Hudson and Wang, 2009; Wade et al., 2010; Zhou et al., 2012). The weak low-energy band for 1, and the tailing found for 2 and 3 above ca. 350 nm, can be ascribed to the intramolecular CT (ICT) transitions from the DPAC donor to the boryl acceptor moieties. To elucidate the ICT transition, the electrochemical properties of the compounds were investigated by cyclic voltammetry (Figure 3 and Table 1, Supplementary Table 2; in Table 1, singlet (ES) and triplet (ET) energies are taken from the fluorescence and phosphorescence spectra at 77 K, and the calculated ΔEST values are from TD-DFT at PBE0/6-31G(d,p)). All compounds exhibit DPAC-centered oxidation with similar potential values (Eox = 0.51-0.54 V), probably because they have the same donor moiety and orthogonal D-A arrangement. However, minor peaks are concomitantly observed at ca. 0.15 V, which might originate from side reactions, such as possible dimerization of radical cations derived from the DPAC moieties. As for the reduction, all compounds undergo similar boron-centered reduction with slightly different reduction potentials. Unlike usual triarylboron-based reductions, the reduction process is not completely reversible but rather quasi-reversible. Oxaborin-containing 2 shows the most negative value, presumably due to an electron-donating effect of the oxygen lone pairs through the planar structure, which raises the LUMO level. Although the reduction of boraanthracene-containing 1 occurs at a more positive position, it was measured in DMSO instead of THF because the reduction of 1 was unclear in the latter solvent. It seems that the DMSO solvent stabilizes the boryl moiety, resulting in a more facile reduction. Hence, the band gap (Eg) of 1 is smaller than that of 2 despite the similar ICT absorption wavelengths of both compounds in toluene (Table 1).
Next, the emission properties of all compounds were examined in toluene (Figure 2). The PL spectra show broad emission bands typical of an ICT transition. Compounds 1 and 2 exhibit sky-blue emission at similar wavelengths (λPL = 490 nm for 1 and 485 nm for 2), whereas 3 displays green emission at 516 nm. In particular, the PL emission bandwidths of 1 and 2, with cyclic boryl moieties (λFWHM = 64-66 nm), are narrower than that of 3 with a BMes2 acceptor (λFWHM = 73 nm). This indicates a small structural deformation between the ground and excited states of 1 and 2 in solution, presumably due to the rigid cyclic boryl groups. All compounds are highly emissive in oxygen-free toluene, with high PL quantum yields (PLQY, ΦPL) of over 90%; the PLQYs of 2 and 3 are close to 99%, and that of 1 is ca. 91%. In sharp contrast, the PLQYs in air-saturated toluene drop drastically to ca. 5-6% (Table 1 and Supplementary Figure 6). This result suggests that efficient T1-to-S1 RISC takes place in oxygen-free toluene, thus pointing to the occurrence of strong delayed fluorescence. In fact, the transient PL decay curves exhibit intense delayed components with microsecond lifetimes (τd) along with prompt components (Figure 4 and Supplementary Figure 6). The temperature-dependent PL decay also confirms that the delayed component is assignable to TADF (inset in Figure 4) (Uoyama et al., 2012). The ΔEST values are very small, below 0.02 eV, supporting a fast equilibration between the S1 and T1 states. The delayed fluorescence lifetimes (τd) of 1-3 are in the microsecond range, and the emission properties were also examined in doped host films (Supplementary Figure 8). Although the PL wavelengths show the same trend as that found in solution, all compounds display substantial rigidochromic blue-shifts of ca. 21-34 nm. We attribute this result primarily to the high rigidity of 1-3 in the film state due to the steric effects of the ortho-DPAC and -Me groups. Moreover, the shifts observed for compounds 1 and 2 are greater than that of 3.
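The claim that a ΔEST below 0.02 eV supports fast S1/T1 equilibration can be illustrated with a simple Boltzmann estimate. The sketch below is a back-of-the-envelope calculation, assuming a threefold-degenerate triplet and full thermal equilibrium at room temperature (an idealization of the actual RISC kinetics), not a model used in this work:

```python
import math

def singlet_fraction(delta_e_st_ev, temp_k=300.0):
    """Equilibrium S1 population fraction, assuming Boltzmann statistics
    between S1 and a threefold-degenerate T1 separated by delta_e_st_ev."""
    k_b = 8.617e-5  # Boltzmann constant in eV/K
    w_s1 = math.exp(-delta_e_st_ev / (k_b * temp_k))
    return w_s1 / (3.0 + w_s1)

# A small singlet-triplet gap keeps a sizable S1 population available for
# delayed fluorescence; a large gap freezes excitons in T1.
print(singlet_fraction(0.02))  # roughly 0.13 for a 0.02 eV gap
print(singlet_fraction(0.50))  # vanishingly small for a conventional fluorophore
```

With a 0.02 eV gap, over a tenth of the excitons sit in S1 at equilibrium, which is consistent with the strong delayed fluorescence observed here.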
This is most likely due to the additional rigidity endowed by the cyclic boryl moieties. As similarly found in solution, the PL emission of 1 and 2 is slightly narrower than that of 3 in the film state (λFWHM = 62-71 nm for 1 and 2 vs. 74 nm for 3). In particular, the PLQYs in the host films (ΦPL = 90-97%) are very high and comparable to those obtained in solution, again with the value for 1 being lower than those for 2 and 3. The delayed fluorescence, having long lifetimes (ca. 6.5-8.8 µs) and large portions of the delayed components, indicates that the strong TADF character of 1-3 is well retained in the film state. It is noteworthy that the portion of the delayed fluorescence follows the order 3 > 1 > 2, as identically observed in the solution state.

Theoretical Calculations

To gain a deeper understanding of the geometric structures and photophysical properties of compounds 1-3, computational studies based on density functional theory (DFT) were performed at the PBE0/6-31G(d,p) level. Optimization of the ground-state (S0) and excited-state (S1 and T1) geometries was carried out by DFT and time-dependent DFT (TD-DFT) methods, respectively (Figure 5 and Supplementary Table 3). The short N···B contacts (ca. 2.90-2.94 Å) in 1 and 2 are comparable to that found in the crystal structure of 2 (2.906 Å), whereas that in 3 shows an increased distance of 3.09 Å. All compounds exhibit high dihedral angles close to 90°, also comparable to the experimental value for 2 (86.4°), between the DPAC donor and the phenylene ring in the ground state. As a result, the HOMOs and LUMOs are spatially separated and located on the DPAC donor and boryl acceptor moieties, respectively. In particular, the LUMOs of 1 and 2 are almost exclusively contributed by the cyclic boryl moieties, whereas 3 has a substantial LUMO contribution from the phenylene ring (ca. 19%). This finding can be mainly attributed to the strong p(B)-π* electronic conjugation in the cyclic boryl moieties.
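The "portion of the delayed component" discussed above can be quantified from a biexponential fit of the transient PL decay: each exponential term contributes photons in proportion to amplitude times lifetime. A minimal sketch, with illustrative amplitudes and lifetimes rather than the fitted values from this work:

```python
def delayed_fraction(a_p, tau_p, a_d, tau_d):
    """Fraction of total emitted photons carried by the delayed component for
    I(t) = a_p*exp(-t/tau_p) + a_d*exp(-t/tau_d); each term integrates to
    amplitude * lifetime."""
    prompt, delayed = a_p * tau_p, a_d * tau_d
    return delayed / (prompt + delayed)

# Illustrative values: a 20 ns prompt decay and a 7 us delayed tail whose
# amplitude is 100x weaker can still dominate the photon budget.
print(delayed_fraction(1.0, 20e-9, 0.01, 7e-6))  # ~0.78
```

This is why microsecond TADF lifetimes translate into large delayed portions even when the delayed signal looks weak on a linear intensity scale.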
Moreover, unlike the propeller-like conformation of the PhBMes2 moiety in 3, the cyclic boryl rings in 1 and 2 are nearly orthogonal to the phenylene ring. This may weaken the LUMO conjugation between the two rings, which results in the cyclic boryl moieties dominating the LUMOs. The resulting LUMO level is slightly lowered for 3 compared with those of 1 and 2, as shown by the electrochemical reduction. The computed ΔEST values are in the range of ca. 0.04-0.05 eV for all compounds, similar to the experimental values. These very small ΔEST values support the observed strong TADF properties. Although the HOMO-LUMO band gaps of all compounds in the ground state are very similar, the S1-state energies and TD-DFT calculations predict that the lowest-energy absorption and emission energies follow the order 2 > 1 > 3, which is in agreement with the experimental results and corroborates the blue-shifted emission of cyclic boryl-containing 1 and 2 (Supplementary Tables 3-5). According to the current density-voltage-luminance (J-V-L) characteristics and the external quantum efficiency-luminance (EQE-L) characteristics of the devices shown in Figures 6C,D, all the devices exhibit very good performance with high EQE values. In fact, device D1 achieves a very high maximum EQE of 25.8% without any light-outcoupling enhancement. We attribute these high performances to the efficient TADF properties of the emitters, with high PLQYs and small ΔEST. However, substantial efficiency roll-off is observed for all devices. This can be mainly attributed to the long delayed fluorescence lifetimes of the emitters, which may increase the probability of exciton quenching processes such as triplet-triplet annihilation (TTA) and triplet-polaron annihilation (TPA) in the devices. Interestingly, the EQE values obtained in this study deviate somewhat from the trend found in the PLQYs of the host films.
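As a rough consistency check on the reported EQEs, the textbook four-factor model η EQE = (charge balance) × (exciton utilization) × ΦPL × (outcoupling) can be sketched numerically. The outcoupling value of 0.25 below is a commonly assumed figure for planar bottom-emitting OLEDs, not a number measured in this work:

```python
def eqe(phi_pl, eta_out=0.25, exciton_util=1.0, charge_balance=1.0):
    """Upper-bound external quantum efficiency from the standard four-factor
    model; for an ideal TADF emitter, exciton utilization approaches 1."""
    return charge_balance * exciton_util * phi_pl * eta_out

# With film PLQYs of 0.90-0.97 and ~25% outcoupling, EQEs in the low-to-mid
# twenties are expected, consistent with the 25.8% maximum reported here.
print(eqe(0.97))  # 0.2425, i.e. ~24% EQE
```

That the measured 25.8% slightly exceeds this estimate would, under these assumptions, point to an outcoupling factor a little above 0.25, e.g. due to preferential horizontal emitter orientation.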
Device D1, which is based on the less emissive DPACoBA (1), exhibits higher EQEs than devices D2 and D3, although the PLQY of 1 is sufficiently high to account for the observed EQEs; further optimization of the device structure would probably reveal a more accurate trend in the device efficiency but is beyond the scope of this work. Nevertheless, the results obtained for the devices in this study suggest that the cyclic boryl groups function as good acceptors for TADF emitters, being capable of exhibiting narrow-bandwidth emission and high device efficiency.

CONCLUSION

We have demonstrated the impact of boron acceptors on the TADF properties of ortho-donor-appended triarylboron compounds, which consist of an ortho D-A backbone structure containing cyclic boryl (1 and 2) or BMes2 (3) groups as acceptors and a fixed DPAC donor. The compounds possess a twisted structure and are sterically congested around the boron atom. All compounds showed strong TADF properties with high PLQYs in both solution and solid state. Blue-shifted fluorescence with narrower bandwidths was observed for the compounds bearing cyclic boryl acceptors (1 and 2) compared with the BMes2-containing 3. TADF-OLEDs fabricated with 1-3 as emitters exhibited high device performance, and those based on the cyclic boryl emitters showed pure blue emission and narrower EL bands than the device with 3. A high EQE of 25.8% was also achieved for the device fabricated with emitter 1. The findings of this study suggest that cyclic boryl groups may be useful for designing TADF emitters with narrow-bandwidth emission and high device efficiency.

Cyclic Voltammetry

The redox behavior of the compounds was examined by cyclic voltammetry using a three-electrode cell consisting of platinum working and counter electrodes and an Ag/AgNO3 (0.01 M in CH3CN) reference electrode.
Oxidation curves were recorded in CH2Cl2 solutions (1 × 10−3 M), while reduction curves were obtained from THF (2 and 3) or DMSO (1) solutions (1 × 10−3 M). Tetra-n-butylammonium hexafluorophosphate (TBAPF6, 0.1 M) was used as the supporting electrolyte. The redox potentials were recorded at a scan rate of 100-200 mV s−1 and are reported against the Fc/Fc+ redox couple. The HOMO and LUMO energy levels were estimated from the electrochemical oxidation (E1/2) and reduction (Eonset) peaks of the cyclic voltammograms.

Photophysical Measurements

UV/vis absorption and photoluminescence (PL) spectra were recorded on Varian Cary 100 and FS5 spectrophotometers, respectively. Solution PL spectra were obtained from oxygen-free (N2-filled) and air-saturated toluene solutions in a sealed cuvette (typically 50 µM). PL spectra and PLQYs of doped host films were obtained on quartz plates. PLQYs of the samples were measured on an absolute PL quantum yield spectrophotometer (Quantaurus-QY C11347-11, Hamamatsu Photonics) equipped with a 3.3-inch integrating sphere. Transient PL decays were recorded on an FS5 spectrophotometer (Edinburgh Instruments) equipped with an OptistatDN cryostat (Oxford Instruments).

Fabrication of Electroluminescent Devices

OLED devices were fabricated on 25 × 25 mm glass substrates with half-patterned ITO layers (AMG). Glass substrates with pre-patterned ITO electrodes were cleaned by a sequential wet-cleaning process in an ultrasonic bath (Song et al., 2018). After drying in a vacuum oven for a day, the substrates were subjected to UV-plasma treatment for 1 min in a plasma cleaner (CUTE-MP, Femto Science). As a hole-injection layer, an aqueous dispersion of PEDOT:PSS (Clevios P VP AI 4083, Heraeus) was spin-coated (2,500 rpm for 30 s) onto the plasma-treated substrates and annealed on a hot plate (100 °C for 10 min).
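The conversion from redox potentials to frontier orbital energies described above can be sketched numerically. A widely used convention places the Fc/Fc+ couple at 4.8 eV below vacuum; that reference value and the example potentials below are assumptions for illustration, not data from this work:

```python
def frontier_levels(e_ox_v, e_red_v, fc_level_ev=4.8):
    """Estimate HOMO/LUMO energies (eV vs. vacuum) from oxidation and
    reduction potentials (V) referenced to the Fc/Fc+ couple."""
    homo = -(e_ox_v + fc_level_ev)
    lumo = -(e_red_v + fc_level_ev)
    return homo, lumo

# Illustrative: an oxidation at +0.52 V and a reduction at -2.40 V vs. Fc/Fc+
homo, lumo = frontier_levels(0.52, -2.40)
print(round(homo, 2), round(lumo, 2))  # -5.32 -2.4
print(round(lumo - homo, 2))           # electrochemical gap of 2.92 eV
```

Because all three compounds share nearly identical oxidation potentials, differences in their electrochemical gaps are dominated by the boron-centered reductions, which is exactly the acceptor effect this study isolates.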
Other organic and metal layers were sequentially deposited in a vacuum chamber (HS-1100, Digital Optics & Vacuum) at less than 1.5 × 10−6 torr. The current density-voltage-luminance (J-V-L) and angle-resolved electroluminescence (EL) intensity characteristics of the fabricated devices were obtained with a source-measure unit (Keithley 2400) using a calibrated photodiode (FDS100, Thorlabs) and a fiber optic spectrometer (EPP2000, StellarNet) mounted on a motorized goniometer. The EQE (ηEQE) and PE (ηPE) of the devices were estimated from the measured full angular characteristics without the Lambertian simplification. All device fabrication and measurement steps, except for the PEDOT:PSS coating, were carried out in a nitrogen (N2)-filled glove box, and all device characteristics were measured at room temperature.

DATA AVAILABILITY STATEMENT

All datasets generated for this study are included in the article/Supplementary Material.
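The EQE estimation from full angular characteristics described in the device-characterization paragraph above can be sketched numerically as follows. This is a minimal sketch, not the authors' analysis code: the function name and the sample angular profile are assumptions, and the measured angle-resolved photon rate would in practice come from the calibrated photodiode and spectrometer data.

```python
# Hedged sketch of EQE estimation from angle-resolved EL data without the
# Lambertian simplification: the angular photon rate L(theta), in photons
# per second per steradian, is integrated over the forward hemisphere and
# divided by the electron injection rate I/e.
import math

E_CHARGE = 1.602176634e-19  # elementary charge (C)

def eqe_from_angular(photon_rate_per_sr, current_a, n_angles=901):
    """EQE = photons emitted into the forward hemisphere per injected electron."""
    # Photon flux = 2*pi * integral_0^{pi/2} L(theta)*sin(theta) dtheta
    thetas = [i * (math.pi / 2) / (n_angles - 1) for i in range(n_angles)]
    vals = [photon_rate_per_sr(t) * math.sin(t) for t in thetas]
    h = thetas[1] - thetas[0]
    flux = 2 * math.pi * h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return flux / (current_a / E_CHARGE)

# Sanity check with a Lambertian profile L(theta) = L0*cos(theta), whose
# hemispherical integral is pi*L0; choosing L0 so that pi*L0 equals the
# electron injection rate should give an EQE of 1.
electrons_per_s = 1e18
current = electrons_per_s * E_CHARGE
l0 = electrons_per_s / math.pi
eqe = eqe_from_angular(lambda t: l0 * math.cos(t), current)
```

For a non-Lambertian device the measured profile simply replaces the cosine in the callable, which is the point of avoiding the Lambertian simplification.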
Effects of β-Adrenergic Blockade on Metabolic and Inflammatory Responses in a Rat Model of Ischemic Stroke

Ischemic stroke provokes an inflammatory response concurrent with both sympathetic nervous system activation and hyperglycemia. The crosstalk between these responses and its consequences for stroke outcomes are currently of clinical interest. We have provided experimental evidence showing the suppressive effects of the nonselective β-adrenoreceptor antagonist propranolol on hyperglycemia, inflammation, and brain injury in a rat model of cerebral ischemia. Pretreatment with propranolol protected against postischemic brain infarction, edema, and apoptosis. The neuroprotection afforded by propranolol was accompanied by a reduction in fasting glucose, fasting insulin, glucose tolerance impairment, plasma C-reactive protein, plasma free fatty acids, plasma corticosterone, brain oxidative stress, and brain inflammation. Pretreatment with insulin alleviated, while glucose augmented, postischemic brain injury and inflammation. Additionally, impairment of insulin signaling in the gastrocnemius muscles was noted in rats with cerebral ischemia, with propranolol improving the impairment by reducing oxidative stress and tumor necrosis factor-α signaling. The anti-inflammatory effects of propranolol were further demonstrated in isoproterenol-stimulated BV2 and RAW264.7 cells through its ability to decrease cytokine production. Despite their potential benefits, stroke-associated hyperglycemia and inflammation are commonly linked with harmful consequences. Our findings provide new insight into the anti-inflammatory, neuroprotective, and hypoglycemic mechanisms of propranolol in combating neurodegenerative diseases such as stroke.

Introduction

Stroke is a leading cause of adult long-term disability, mortality, and morbidity worldwide.
Currently, thrombolytic therapy with tissue plasminogen activator (tPA) remains the first-line treatment for ischemic stroke, although only a small proportion of patients see any clinical benefit because of its narrow therapeutic window after stroke onset [1]. Additionally, the increased risk of intracerebral hemorrhagic transformation and its accompanying inflammation further limit the beneficial effects of tPA. Clinical findings indicate that hyperglycemia is critical in counterbalancing the therapeutic benefits and increasing the adverse complications of tPA through a mode of action related to inflammation exacerbation [2][3][4][5]. Despite their protective and regenerative potential, overwhelming hyperglycemia and neuroinflammation have been implicated in the pathogenesis of stroke. Rodent studies have also revealed the crosstalk among tPA, hyperglycemia, and neuroinflammation in cerebral ischemic brain injury, as well as the therapeutic benefits of targeting hyperglycemia and neuroinflammation [6][7][8][9][10]. These phenomena underscore the importance of exploring the underlying mechanisms of stroke-accompanied hyperglycemia and neuroinflammation, and highlight their therapeutic potential in combating stroke and its related complications. There is bidirectional communication between the immune system and the central nervous system (CNS) [11]. Upon CNS injury, the activated immune system exerts cytotoxic effects in the early phase but removes debris and promotes tissue regeneration during the late phase. Conversely, immunosuppression increases the risk of infectious complications and worsens the outcome. The involvement of the sympathetic nervous system and the hypothalamic-pituitary-adrenal (HPA) axis in stroke-accompanied immune suppression and immune activation has been described [12][13][14][15][16][17][18].
Because activation of sympathetic tone and the HPA axis is seen in acute ischemic stroke, and because both play dual roles in immune activities, the sympathetic nervous system and the HPA axis represent alternative targets for intervention in the treatment of stroke. Previously, we detected elevated circulating levels of adrenaline, noradrenaline, and corticosterone in a rat model of cerebral ischemia, and reported the ameliorative effects of the nonselective β-adrenoreceptor antagonist propranolol on adipose inflammation, hepatic inflammation, and hepatic gluconeogenesis [14][15][16][28]. Luger et al. [22] further demonstrated that propranolol is able to ameliorate stroke-associated impairment of glucose tolerance and brain ceramide accumulation. Clinically, despite some variability, the use of β-adrenoreceptor antagonists is advisable for patients at risk of stroke [29]. To extend the scope of propranolol's biological implications, its anti-inflammatory and neuroprotective properties were further explored in a rat model of cerebral ischemia.

Cerebral Ischemia and Treatments

The animal study protocols were reviewed and approved by the Animal Experimental Committee of Taichung Veterans General Hospital and strictly adhered to the Institute's guidelines (La-1071584, 1 August 2018). Adult male Sprague-Dawley rats weighing 300-350 g (n = 108 in total, with the specific numbers of rats indicated in the corresponding experiments) were anesthetized with isoflurane (2-4%), and body temperatures were maintained at 37.0 ± 0.5 °C. Focal cerebral ischemia was produced by clamping the two common carotid arteries and the right middle cerebral artery, as described previously [14]. For the sham operation, all surgical procedures were performed except for arterial occlusion.
A single bolus of normal saline, propranolol (2 mg/kg), insulin (2 U/kg), or glucose (2 g/kg) was intraperitoneally delivered to ischemia or sham rats 30 min prior to surgery. All the ischemia and sham rats receiving treatments were euthanized for analyses 24 h after completion of surgery. A schematic diagram of the animal study is shown in Figure 1.

Figure 1. Schematic diagram of the animal study design. Adult male Sprague-Dawley rats were intraperitoneally administered saline, propranolol (2 mg/kg), insulin (2 U/kg), or glucose (2 g/kg) 30 min prior to sham operation or cerebral ischemia for a course of 24 h. During the last eight hours, rats were deprived of food but allowed drinking water. At the end of the experiments, rats were allocated to the indicated analyses.

Neurological Evaluation

A modified six-point neurological deficit severity scoring system was applied to evaluate sensorimotor performance by technicians who were blind to the treatments (n = 6 per group) [27].
Quantification of Ischemic Infarction

Rats (n = 6 per group) were anesthetized with isoflurane (2-4%) and then decapitated. The dissected brains were placed in a Brain Slicer Matrix and cut into serial coronal sections at 2 mm intervals. The brain sections were then immersed in 2% triphenyltetrazolium chloride (TTC) solution at 37 °C for 30 min, followed by fixation in 10% phosphate-buffered formalin for 45 min [27]. The areas of brain infarction appeared white, and the infarct volume was measured with a computer image analysis system (IS1000; Alpha Innotech Corporation, San Leandro, CA, USA).

Brain Edema

Rats (n = 6 per group) were anesthetized with isoflurane (2-4%) and then decapitated. The dissected brains were separated into contralateral and ipsilateral hemispheres for the isolation of cortical tissues. The obtained contralateral and ipsilateral cortical tissues were dried in an oven at 110 °C for 24 h. The water content was calculated by the wet/dry weight method, as described previously [27].

Measurement of Lipid Peroxidation

Rats (n = 6 per group) were anesthetized with isoflurane (2-4%) and then decapitated. The dissected brains were separated into contralateral and ipsilateral hemispheres for the isolation of cortical tissues. The obtained contralateral and ipsilateral cortical tissues and gastrocnemius tissues were subjected to the measurement of lipid peroxidation using a thiobarbituric acid-reactive substance (TBARS) assay kit (Abcam, Cambridge, UK). TBARS is expressed as malondialdehyde (MDA) equivalents.

Caspase 3 Activity Assay

Rats (n = 6 per group) were anesthetized with isoflurane (2-4%) and then decapitated. The dissected brains were separated into contralateral and ipsilateral hemispheres for the isolation of cortical tissues. The obtained contralateral and ipsilateral cortical tissues were subjected to the measurement of caspase 3 activity using a commercial fluorometric protease assay kit (BioVision, Mountain View, CA, USA).

Glucose Tolerance Test

Prior to an intraperitoneal glucose tolerance test (IPGTT) (n = 6 per group), rats were deprived of diet for 8 h. The IPGTT was performed through the intraperitoneal administration of glucose solution (2 g/kg body weight). Blood was then collected from the tail veins over time, and glucose levels were measured using a hand-held Accu-Check glucometer (Roche Diagnostics, Indianapolis, IN, USA). The total area under the curve (AUC) for the IPGTT was calculated using the trapezoidal (trapezium) rule.
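The trapezoidal-rule AUC calculation used for the IPGTT can be sketched as follows. The sampling times and glucose values in the example are illustrative assumptions, not data from this study.

```python
# Hedged sketch of the trapezoidal (trapezium) rule used to compute the
# total AUC for the IPGTT glucose-time curve.

def ipgtt_auc(times_min, glucose_mg_dl):
    """Total area under the glucose-time curve (mg/dL x min)."""
    auc = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        # Each segment contributes the average of its endpoints times its width.
        auc += 0.5 * (glucose_mg_dl[i] + glucose_mg_dl[i - 1]) * dt
    return auc

# Example: hypothetical tail-vein sampling at 0, 15, 30, 60, and 120 min
# after the 2 g/kg glucose load.
times = [0, 15, 30, 60, 120]
glucose = [90, 180, 220, 160, 110]
total_auc = ipgtt_auc(times, glucose)  # 18825.0 mg/dL x min
```

The same routine handles unevenly spaced sampling times, which is why the trapezoidal rule is the usual choice for IPGTT data.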
Blood Sample Analyses

Rats (n = 6 per group) were anesthetized with isoflurane (2-4%); blood was withdrawn from the left femoral artery and the plasma samples were kept at −80 °C until analysis. The plasma levels of insulin (Shibayagi, Gunma, Japan), C-reactive protein (CRP), free fatty acids, and corticosterone (R&D Systems, Minneapolis, MN, USA) were measured using enzyme-linked immunosorbent assay (ELISA) kits, according to the manufacturer's instructions.

Measurement of Tissue Cytokines

Rats (n = 6 per group) were anesthetized with isoflurane (2-4%) and then decapitated. The dissected brains were separated into contralateral and ipsilateral hemispheres for the isolation of cortical tissues. The obtained contralateral and ipsilateral cortical tissues and gastrocnemius tissues were subjected to the measurement of tumor necrosis factor-α (TNF-α) protein content using ELISA kits (R&D Systems, Minneapolis, MN, USA).

Cell Cultures

The murine microglia BV2 cell line and macrophage RAW264.7 cell line were maintained in Dulbecco's Modified Eagle Medium (DMEM) containing 10% fetal bovine serum (FBS) [30]. BV2 and RAW264.7 cells were pretreated with vehicle or propranolol (10 µM) for 30 min before being incubated with isoproterenol (0 and 10 µM) for an additional 24 h. The cell culture supernatants (100 µL) were subjected to the measurement of TNF-α and interleukin-6 (IL-6) using commercial ELISA kits (R&D Systems, Minneapolis, MN, USA). For nitric oxide (NO, nitrite/nitrate) determination, the samples were measured using a Griess reagent kit (Thermo Fisher Scientific, Waltham, MA, USA).

Statistical Analysis

All statistical results are presented as mean ± standard deviation. A one-way analysis of variance was performed to evaluate experimental values between groups, followed by Dunnett's test or Tukey's post-hoc test for comparison.
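The one-way ANOVA underlying the between-group comparisons can be sketched in a few lines. This is a minimal sketch with illustrative group values (not study data); a real analysis would follow the F test with Dunnett's or Tukey's post-hoc comparisons, for example via a statistics package.

```python
# Hedged sketch of a one-way ANOVA F statistic for between-group
# comparisons. Group values are illustrative, not study data.

def one_way_anova_f(groups):
    """Return the F statistic and degrees of freedom (between, within)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), (df_b, df_w)

# Illustrative data for three treatment groups (n = 6 each):
sham = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8]
ischemia = [2.4, 2.1, 2.6, 2.3, 2.5, 2.2]
ischemia_propranolol = [1.5, 1.6, 1.4, 1.7, 1.5, 1.3]
f_stat, (df_between, df_within) = one_way_anova_f(
    [sham, ischemia, ischemia_propranolol]
)
```

A large F relative to the F distribution with (df_between, df_within) degrees of freedom indicates at least one group mean differs, which is what licenses the subsequent post-hoc tests.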
Differences were considered statistically significant when the p value was less than 0.05.

Propranolol Alleviated Postischemic Brain Injury

To investigate its neuroprotective potential against cerebral ischemic brain injury, propranolol was delivered to rats 30 min prior to ischemia. Permanent cerebral ischemia caused neurological deficits (Figure 2A), brain infarction (Figure 2B), brain edema (Figure 2C), elevation of MDA (Figure 2D), and increased caspase 3 activity (Figure 2E). Propranolol alleviated these postischemic changes (Figure 2), implying that pretreatment with propranolol protects the brain against cerebral ischemic injury.

Propranolol Alleviated Postischemic Inflammation

To explore the inflammatory changes, parameters of inflammation were determined in both blood samples and brain tissues. The circulating levels of CRP (Figure 3A), free fatty acids (Figure 3B), and corticosterone (Figure 3C) were elevated in rats with cerebral ischemia, and these increments were alleviated by propranolol. Elevated COX-2 (Figure 4A) and TNF-α (Figure 4B) protein levels were detected in ipsilateral cortical tissues after cerebral ischemia. Parallel elevation was noted in astrocyte-associated GFAP, macrophage/microglia lineage-associated CD68, and activated microglia-associated IRF8. On the contrary, the levels of neuron-specific microtubule-associated protein 2 (MAP-2) and tight junction ZO-1 protein were downregulated by ischemia. These changes in cerebral ischemia were reversed by propranolol (Figure 4A).
Intriguingly, the expression of alternatively activated microglia-accompanied CD163, Nrf2, and Sirt1 was increased by propranolol (Figure 4A). These findings indicate that propranolol has a negative effect on cerebral ischemia-activated systemic and brain inflammation.

Propranolol Improved Postischemic Hyperglycemia

Parameters of glucose metabolism were determined in fasting rats. Cerebral ischemia brought hyperglycemia (Figure 5A) and hyperinsulinemia (Figure 5B) upon the rats, while the reverse was observed in propranolol-treated rats. Next, the effects of propranolol on postprandial glucose dynamics were evaluated. Postischemic rats had higher postload glucose levels after the intraperitoneal glucose injection, while the postload glucose levels were decreased by propranolol (Figure 5C,D). These findings suggest a beneficial effect of propranolol against postischemic hyperglycemia, hyperinsulinemia, and impaired glucose tolerance.

Insulin and Glucose Had Opposite Effects on Postischemic Changes

To further explore the outcomes of hyperglycemia on postischemic brain injury, insulin and glucose were predelivered to the rats prior to ischemia.
Pretreatment with insulin alleviated, but glucose augmented, postischemic brain infarction (Figure 6A), caspase 3 activity (Figure 6B), and TNF-α protein (Figure 6C). These findings suggest that hyperglycemia augments postischemic apoptosis, inflammation, and brain injury, while propranolol possesses ameliorative effects.

Figure 6. Insulin reduced but glucose augmented cerebral ischemia injury. Rats receiving a normal saline vehicle, insulin (2 U/kg), or glucose (2 g/kg) by intraperitoneal injection were subjected to permanent cerebral ischemia for 24 h. (A) Representative photographs show the histological examination of brain infarction by TTC staining. The average percentage of infarction volume in the ipsilateral hemisphere is depicted. (B) Proteins were extracted from the contralateral and ipsilateral cortical tissues and subjected to an enzymatic assay of caspase 3 activity. (C) Proteins were extracted from the contralateral and ipsilateral cortical tissues and subjected to ELISA for the measurement of TNF-α. * p < 0.05 vs. saline or the contralateral tissues of the vehicle group and # p < 0.05 vs. the ipsilateral tissues of the vehicle group, n = 6.

Cerebral Ischemia Impaired Insulin Action in Gastrocnemius

Skeletal muscles are the main peripheral organs/tissues for the uptake and utilization of glucose, and their impairment results in hyperglycemia and insulin resistance [31].
In the gastrocnemius muscles, cerebral ischemia caused a reduction in Akt phosphorylation and an increase in IRS1 phosphorylation at serine residue 307, JNK phosphorylation, p38 phosphorylation, and TNFRI. Propranolol reversed the altered protein content and protein phosphorylation in cerebral ischemic rats (Figure 7A). There were elevated levels of TNF-α (Figure 7B) and MDA (Figure 7C) in the gastrocnemius muscles of cerebral ischemic rats. The altered parameters in the postischemic gastrocnemius muscles were alleviated by propranolol. In conclusion, the impaired insulin signaling in the gastrocnemius muscles represents an alternative mechanism for the induction of postischemic hyperglycemia and insulin resistance, with propranolol improving the impairment.
Propranolol Decreased Isoproterenol-Induced Cytokine Production

To directly evaluate the effects of propranolol on cytokine production, the murine BV2 microglial cell line and the RAW264.7 macrophage cell line were stimulated with the adrenergic agonist isoproterenol. Isoproterenol treatment caused increased production of NO, TNF-α, and IL-6 in BV2 (Figure 8A) and RAW264.7 (Figure 8B) cells. Concurrently, propranolol alleviated the production of NO, TNF-α, and IL-6 in isoproterenol-stimulated BV2 (Figure 8A) and RAW264.7 (Figure 8B) cells. These findings indicate that adrenergic activation is able to induce cytokine production by macrophages/microglia, and show an inhibitory effect of propranolol on isoproterenol-provoked cytokine production.
Discussion

Our group has described a state of hyperglycemia and insulin resistance, along with elevated circulating levels of adrenaline and noradrenaline, in rat models of cerebral ischemia. The postischemic hyperglycemia and insulin resistance are closely linked with adipose inflammation, hepatic inflammation, and hepatic gluconeogenesis. Our study, along with other relevant studies, further indicates that pretreatment with the nonselective β-adrenoreceptor antagonist propranolol improves postischemic hyperglycemia, impaired glucose tolerance, and insulin resistance [14][15][16][22][28]. The studies presented here extend these earlier findings by showing that propranolol possesses anti-inflammatory, neuroprotective, and hypoglycemic effects in vitro and in vivo. In rat models of cerebral ischemia, pretreatment with propranolol offered protection against brain infarction, edema, and apoptosis. The neuroprotection afforded by propranolol was accompanied by a reduction in fasting glucose, fasting insulin, glucose tolerance impairment, plasma CRP, plasma free fatty acids, plasma corticosterone, brain oxidative stress, and brain inflammation. Pretreatment with insulin alleviated, while glucose augmented, postischemic brain injury and inflammation. Additionally, the impairment of insulin signaling in the gastrocnemius muscles was noted in rats with cerebral ischemia, as was its improvement by propranolol. The anti-inflammatory effects of propranolol were further demonstrated in isoproterenol-stimulated BV2 and RAW264.7 cells through decreased cytokine production.
These findings provide new insight into the anti-inflammatory, neuroprotective, and hypoglycemic mechanisms of propranolol in combating neurodegenerative diseases such as stroke. It was long believed that hyperglycemia in critical illness is an adaptive and protective response for patients combating stress. However, stress hyperglycemia becomes a pathogenic factor carrying a high risk of mortality and morbidity after acute stroke. Tight glycemic control in the management of acute ischemic stroke remains controversial due to its hypoglycemic consequences [32]. Despite the debate, controlling stress hyperglycemia and returning it to the normal range benefits ischemic stroke subjects, as seen in experimental studies [4,33,34]. The homeostatic regulation of circulating glucose levels is strictly counterbalanced by gluconeogenesis, glycogenolysis, and glucose uptake occurring in the liver, adipose tissues, skeletal muscles, and kidney. The liberation of free fatty acids and adipokines, along with oxidative stress and inflammation in the adipose tissues, interferes with insulin action and glucose uptake. Hepatic oxidative stress and inflammation both facilitate hepatic glucose output as a result of their negative effects on insulin-inhibited gluconeogenesis and insulin-promoted glycogenesis. The hyperglycemic contribution of dysregulated adipose and liver tissues, along with the intervention by propranolol in cerebral ischemic rats, has been reported in our previous studies [15,16]. The skeletal muscles are the largest organs/tissues fulfilling the peripheral action of insulin, namely the uptake of circulating glucose [31]. Akt plays a dominant role in the execution of insulin actions and is under the control of the insulin receptor and IRS1 phosphorylation cascade. Otherwise, TNF-α/TNFRI/JNK/p38 represents an alternative cascade that negatively impacts Akt through the targeting of IRS1 [31][35][36][37].
The impaired insulin action in the gastrocnemius muscles was highlighted by the reduction of Akt phosphorylation after cerebral ischemia. The decreased Akt phosphorylation in the gastrocnemius muscles was accompanied by oxidative stress, elevated TNF-α/TNFRI/JNK/p38 signaling, and inhibitory IRS1 serine-307 hyperphosphorylation. Since inflammatory cytokines and oxidative stress contribute substantially to the impairment of insulin action [31][35][36][37], the positive effects of propranolol on gastrocnemius insulin action and hyperglycemia could be attributed to its suppressive effects on cytokine production and oxidative stress. Besides regulating glucose metabolism, insulin also displays neurotrophic actions that help protect against neurodegenerative injury. Our previous study demonstrated a reduction of IRS1/Akt signaling in the brains of cerebral ischemic rats [14]. Thus, the neuroprotective effects of insulin against cerebral ischemia may involve boosting IRS1/Akt-mediated neurotrophic activity. In parallel, glucose injection exacerbated postischemic inflammation, apoptosis, and brain injury. A growing body of evidence suggests a proinflammatory effect of hyperglycemia, even in the CNS [2][5][6][7][8][9][10]. Here, the inhibition of brain TNF-α production implies that the neuroprotective effects of insulin against cerebral ischemic injury may be secondary to its hypoglycemic effect. However, its pleiotropic effects on postischemic alterations and brain injury warrant further investigation. Cerebral ischemia in rats is closely linked with the development of peripheral and CNS inflammation, along with the activation of sympathetic tone, the HPA axis, and stress hormones [14][15][16][28]. Consistent with these studies, plasma levels of CRP and corticosterone, along with the brain and gastrocnemius content of TNF-α, were elevated in cerebral ischemic rats.
This study further demonstrated that propranolol reduced these elevations. Although activation of the sympathetic nervous system has been described in stroke-associated spleen atrophy and immune suppression [17,18], many studies also indicate a proinflammatory effect of adrenergic action. Activation of the β-adrenergic receptor promotes proinflammatory responses in the microglia, primes microglia for immune challenge, and induces neuroinflammation in vitro and in vivo [23,25,26,38]. Regarding stroke, augmented β2-adrenergic signaling increases stroke size, while β-adrenoreceptor antagonists provide neuroprotection against cerebral ischemia [19][20][21]. In this study, the anti-inflammatory consequences of propranolol in cerebral ischemic rats were evidenced by a reduction in inflammatory mediators in the brain, blood, and gastrocnemius muscles. Apart from macrophages/microglia, β-adrenoreceptor agonists also induce cytokine production by the skeletal muscles [39,40]. Therefore, intraperitoneally administered propranolol can travel through the circulation to reach the gastrocnemius muscles and the CNS, where it abrogates cytokine production in macrophages/microglia, skeletal muscles, or yet-to-be-identified cell types. Data from the BV2 and RAW264.7 cell studies revealed the proinflammatory potential of the β-adrenoreceptor agonist isoproterenol, along with the immunosuppressive effect of propranolol against adrenergic activation. Independent of the β-adrenergic system, an anti-inflammatory effect of propranolol is observed in trauma, sepsis, and infection [41][42][43]. These phenomena suggest that the anti-inflammatory effect of propranolol is broad and offers the opportunity for it to act as an anti-inflammatory agent. Macrophages/microglia can be categorized into two phenotypes: proinflammatory and anti-inflammatory.
Neuroinflammation can arise from an imbalance between proinflammatory and anti-inflammatory phenotypes favoring the former, and a reversal of this balance ameliorates disease progression, including in stroke [44]. The presence of IRF5, IRF8, P2X4R, P2X7R, and P2Y12R promotes microglia polarization towards proinflammatory phenotypes, while CD163, CD206, arginase 1, Ym-1, Nrf2, Sirt1, and Heme Oxygenase-1 (HO-1) shift microglia to anti-inflammatory phenotypes [27,[45][46][47][48]. We found that cerebral ischemia-associated neuroinflammation was accompanied by the activation of microglia and astrocytes, along with a reduction of neurons and compromise of the blood-brain barrier tight junction. Propranolol reversed the microglia polarization switch, suppressing IRF8 expression and promoting Nrf2, Sirt1, and CD163 expression. Adrenaline enhances the response of macrophages under lipopolysaccharide (LPS) stimulation and communicates with the toll-like receptors to establish proinflammatory phenotypes [49,50]. However, under endotoxemia and acute lung injury, the β2-adrenergic receptor favors the M2 regulatory macrophages [51]. The conflicting effects of adrenergic systems on macrophage/microglia polarization, immune activation, and immune suppression complicate their specific roles in immunity. Although the current findings suggest that the macrophage/microglia polarization switch is associated with a reduction in inflammatory responses, the detailed anti-inflammatory mechanisms of propranolol against cerebral ischemia still require additional investigation. Despite potential adaptive benefits, stroke-associated hyperglycemia and inflammation are commonly linked with harmful consequences. A human study revealed a clinical benefit in patients taking β-blockers before stroke onset, resulting in improvement of poststroke hyperglycemia [52].
It has been reported that systemic adrenergic blockade before or after stroke normalizes extracellular ionic dynamics and facilitates recovery from acute ischemic stroke [53]. Additionally, renal ischemia/reperfusion injury also causes hyperglycemia along with elevated catecholamines [54]. These studies highlight a role for adrenergic blockade against ischemic insults. Through this study, we have provided experimental evidence outlining the suppressive effects of propranolol on hyperglycemia, inflammation, and brain injury in a rat model of cerebral ischemia. The neuroprotective capabilities of propranolol are closely linked with its actions on macrophages/microglia, switching their polarization from proinflammatory towards anti-inflammatory phenotypes while reducing TNF-α production. The anti-inflammatory effects of propranolol were also duplicated in isoproterenol-stimulated microglia and macrophage cell lines. Propranolol improved postischemic hyperglycemia by mitigating oxidative stress and TNF-α-mediated impairment of insulin action in the gastrocnemius muscles. It should be noted that hemodynamic change is another target for the action of β-blockers. Our previous study found a negligible difference in blood pressure between sham-operated and ischemic stroke rats [16]. Therefore, the hemodynamic effect of propranolol in the current study appeared to be minor. However, this assumption warrants caution because only ischemic stroke rats were used; the effects of propranolol in hemorrhagic stroke rats and ischemia/reperfusion rats should also be taken into consideration. Although limitations remain in our experiments, we propose the nonselective β-adrenoreceptor antagonist propranolol as an anti-inflammatory and neuroprotective candidate for the treatment of neuroinflammation-accompanied neurodegenerative diseases such as stroke.
Before this theory can be translated into clinical practice, however, deeper investigative insight into its anti-inflammatory actions is still required.
Towards Health System Resiliency: An Agile Systems Modelling Framework for Bed Resource Planning During COVID-19 Background: We describe the development of a dynamic simulation modelling framework to support agile resource planning during the COVID-19 pandemic. The framework takes into consideration the dynamic evolution of the pandemic and the rapidly evolving policies and processes to deal with the ever-changing outbreak scenarios. Methods: A specific use case based on short-term bed resource planning is described within the proposed framework. The simulation model was calibrated against historical data for the Singapore COVID-19 situation. The time period for model calibration was from 1st April till 30th April 2020. The model was used to project bed resource needs over the period from 1st May 2020 till 31st May 2020. Multivariate sensitivity analysis was also conducted for ICU and general isolation bed demand, length-of-stay (LOS), and age-adjusted conversion rates across different care needs. The unmet needs under various scenarios were also evaluated for planning purposes. Results: Several variants of the agile resource planning model were developed to adapt to the fast-changing COVID-19 situation in Singapore. The use case demonstrated an agile adaptation of the model to account for previously unexpected scenarios. The rapid evolution of the pandemic locally revealed streams of new infections that arose from two distinct sources. The model projections were calibrated with the latest data for short-term projections. The agility in flexing plans, together with collaborative management structures that rapidly deployed human and capital resources to surge the level of care during the COVID-19 pandemic, proved useful in guiding the allocation of scarce healthcare resources and supported system resiliency.
Conclusions: The rapidly evolving COVID-19 pandemic in Singapore has necessitated the development of an agile and adaptable modelling framework that can be quickly calibrated to changes in both demand and supply. The modelling framework is able to deploy systems modelling concepts in a holistic manner. This facilitates the evaluation of complex cause-and-effect relationships. A robust collaborative framework, coupled with in-depth domain knowledge and accurate, up-to-date data, ensures a model is realistic, timely and useful. Introduction With more than 21 million cases worldwide in August 2020 (1), the COVID-19 pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is leading to substantial healthcare, economic, social and psychological impacts. Even though the first cases of COVID-19 were confirmed in December 2019, the scientific world has only begun to better understand the health problems caused by this virus (2). Apart from the obvious respiratory issues, the virus is known to attack other organ systems, in some cases resulting in catastrophic damage. The many unknowns in such emerging infectious diseases mean that governments and health systems around the world have to react rapidly in the face of new information. The sufficiency of healthcare capacity is the cornerstone of access to care and outcomes for COVID-19. Many healthcare facilities around the world are seeing a surge in demand for hospital and intensive care unit (ICU) beds due to the pandemic. The case fatality rates (CFR) of COVID-19 patients have been shown to be significantly worse in overstretched health systems (3). Apart from containment and mitigation strategies that are important for "flattening" the infection curve, there is a critical need to ensure a resilient health system that can withstand unpredictable shocks resulting from the pandemic.
Well-coordinated public and private sector policies and initiatives are essential to maintain high-quality care outcomes for the population (4). The United States and several European as well as Asian countries have all reported commendable efforts in planning and implementing surge capacities to cope with the on-going COVID-19 pandemic (5-7). Singapore has proactively ramped up bed capacities for intensive care units (ICUs), isolation and quarantine beds for suspected and confirmed cases within the health system to prepare for a surge in COVID-19 infections. Local health authorities have moved rapidly since the start of the pandemic to ensure that there were sufficient and sustainable healthcare resources to deal with the fast-growing surge in bed resource needs within a short timeframe (24). A wide variety of programs and purpose-built facilities were established within a relatively short time frame. The purpose-built in-hospital and out-of-hospital (ex-hospital) COVID-care facilities included in the model were deployed in phases as demand increased after the first large clusters were detected on 30th March 2020. The public healthcare system has managed to pivot nimbly to deal with a fast-evolving situation as the pandemic evolved, in part due to an ability to leverage accurate ground data to develop agile planning models. Despite the unique epidemiological evolution of the disease in Singapore, the CFR for COVID-19 in Singapore has been one of the lowest in the world (8). Most of the traditional epidemiological models developed for COVID-19 have focused on the demand side (or flattening of the epidemic curve) (9). Such epidemiological models may not fully capture nuanced disease outbreak scenarios together with the high-resolution policies and processes that are important to consider during pandemics (10). The projections from traditional epidemiological models are fraught with uncertainties during the initial phases of a pandemic.
This has been shown to be the case when potential hotspots cannot be identified a priori to guide the best resource modelling efforts (11). Such was the case in Singapore, as most of the earlier demand projections were based on importation and secondary local transmission models in the community and were not able to predict the massive clusters of positive cases that were discovered in migrant workers' dormitories (12). On the other hand, systems modelling techniques (e.g., system dynamics, or SD, and discrete events simulation, or DES, models) (13,14) have seen many applications for both predictable demand patterns (15)(16)(17) and situations where the surge in demand is less predictable (e.g., disaster planning and pandemics due to emerging infectious diseases) (18,19). These simulation methods have the ability to capture both the detailed behaviour of the system (dynamic complexity) and its structure (causal relationships) to provide a risk-free virtualized experimentation platform for evaluating strategies in scenarios that are subject to significant uncertainties (20). Such platforms allow for the rapid incorporation of new scientific information and of structural and behavioural assumptions on both the demand and supply sides within the model, and dynamically adapt to fast-evolving outbreak scenarios (21). We describe the development of a dynamic simulation modelling framework to support agile resource planning efforts during the COVID-19 pandemic. This modelling framework can efficiently take into consideration the dynamic evolution of the pandemic, which is much less predictable in the initial phases (22), and the rapidly evolving resource management policies and processes that deal with unexpected scenarios.
Critical ingredients for the effective development of such models, and for deploying rapid adaptation capabilities, include a robust and effective data capture infrastructure within health systems, effective data governance policies that ensure the efficiency of data utilization for modelling, and experienced modelling expertise with deep domain knowledge embedded within the health system. The development process leveraged the data science and health services research expertise that had been embedded within our health system since before the pandemic. The dynamic simulation model developed by the team was able to support the indexed hospital (IH) with improved resource management strategies. The model was further expanded to incorporate the national response, which included rapid flexing of bed capacity beyond the IH, and to inform the other resource planning needs of the IH and national healthcare authorities. Data and Methods The development of the modelling framework leverages the Singapore COVID-19 scenario. Singapore is a city-state with a population size of approximately 5.7 million (of which 4 million are citizens and permanent residents) (23) and a land area of approximately 724 square km (24). Relevant to the context of this study, Singapore had approximately 300,000 migrant workers working in the construction sector at the time of the study (30). In addition, Singapore had about 12,000 acute care beds in public and private hospitals (25) and approximately 400 ICU beds. In 2018, Singapore had a trained doctor-to-patient ratio of 2.4 doctors per 1,000 patients (26). The Academic Medical Centre (AMC) in which the study was conducted is made up of the Duke-NUS Medical School and the Singapore Health Services (SingHealth). SingHealth is one of the three public healthcare clusters in Singapore, with the IH being the largest and oldest comprehensive hospital in Singapore.
A core resource of the AMC that developed and implemented the modelling framework is a small embedded data science unit in the Health Services Research Centre. The health services research and data science resources are strategic resources established since 2015 within the AMC. Together with these resources, the modelling team comprises domain experts in infectious disease, intensive care, emergency medicine, anaesthesiology and surgical specialties. The bed management unit was also involved in the development of the model. The modelling framework was conceived to provide estimates of mortality, morbidity and the impact on waiting times for specialist appointments and elective surgeries, length of stay (LOS) in the emergency department (ED), bed demands and inpatient LOS, and the utilization of critical hospital facilities (e.g., operating theatres, hospital and intensive care beds, isolation wards, diagnostic equipment, laboratory services). The systems modelling methodology based on system dynamics (SD) simulation (17,18) was used to develop the inpatient bed resource management module. Where more detailed modelling was required, for instance for Emergency Department (ED) and surgical resource planning, discrete events simulation (DES) (19) and agent-based modelling (ABM) (20) approaches were used. The entire modelling framework has segments that are structurally linked: infectious disease, primary care, ED, hospital inpatient care (including isolation wards and intensive care), surgical resources and specialist outpatient clinics. The high-level schematic of the systems modelling framework is shown in Figure 1, which describes how the models derive real-world data from the existing operational and clinical data available through the enterprise data warehouse, electronic medical records (EMRs) and other sources. For the use case, the primary healthcare resource that we considered was the number of beds.
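The core logic of a system dynamics bed module of this kind can be sketched as a stock-and-flow update: occupied beds are a stock fed by admissions and drained by LOS-driven discharges. The sketch below is illustrative only; the function name and all parameter values are assumptions, not the paper's implementation.

```python
# Minimal stock-and-flow sketch of an inpatient bed module (illustrative):
# occupied beds are a "stock" fed by daily admissions and drained by
# LOS-driven discharges, in the spirit of a system dynamics model.

def simulate_bed_stock(daily_admissions, avg_los_days, capacity):
    """Track occupied beds and unmet demand day by day.

    daily_admissions: new patients needing a bed each day
    avg_los_days: average length of stay; discharge flow = stock / LOS
    capacity: number of beds available
    """
    occupied = 0.0
    history, unmet = [], []
    for demand in daily_admissions:
        discharges = occupied / avg_los_days       # outflow proportional to stock
        admitted = min(demand, capacity - (occupied - discharges))
        occupied = occupied - discharges + admitted
        unmet.append(max(0.0, demand - admitted))  # demand that found no bed
        history.append(occupied)
    return history, unmet

# Illustrative run with made-up numbers (not the paper's data):
occ, unmet = simulate_bed_stock([30] * 20, avg_los_days=10, capacity=200)
```

Because the discharge flow is proportional to the stock, sustained demand above capacity/LOS gradually fills the ward and surfaces as unmet need, which is the behaviour the outcome measures below track.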
Key outcome measures were: (1) the number of beds required for in-hospital and ex-hospital demands (demands that can be addressed by facilities outside of hospitals); (2) unmet needs considering in-hospital and ex-hospital capacities; and (3) overall in-hospital and ex-hospital mortality rates. The types of beds that we considered were: (1) beds in isolation facilities for confirmed cases; (2) beds in quarantine facilities for suspect cases; and (3) beds in the ICUs. ICU beds are in-hospital critical care facilities, whereas isolation and quarantine facilities can be set up outside of the hospital. The model was calibrated against historical data. The time period for calibration was from 1st April till 30th April 2020. The model was then used to project the demand for the period from 1st May 2020 till 31st May 2020. The COVID-19 pandemic in Singapore has turned out to be a fitting use case for the development of an agile and adaptable systems modelling framework that can be quickly calibrated to changes on both the demand and supply side. On the demand side, the dominant stream of dormitory infection from day 74 (since the 1st case was detected on 23 January 2020) grew rapidly. Fortunately, the massive foreign worker dormitory clusters were contained and separated from the larger community, thereby preventing widespread community transmission. There have been several dynamic adjustments made in terms of the detection, diagnosis and disposition policies for COVID-19. These include the controlled rates of swabbing in the dormitories and community, heightened surveillance and targeted testing for vulnerable groups (27), and the agile flexing of in-hospital facilities together with the swift ramp-up of external isolation facilities (28). The various structural interventions and purpose-built facilities are summarized as follows. (1) Fever Screening Areas (FSA) dedicated to the screening, triaging and disposition of COVID-related cases were established within the hospital compound.
These facilities were planned and went operational on 20th March 2020 in the IH. The FSA provides much-needed screening services for those who meet the suspect case definition but do not yet require the acute and resuscitation care facilities available in the hospital ED. The redirection of suspect cases to FSAs within the IH effectively kept all suspect cases within reach of acute care facilities, and additional interventions would be readily available should the health conditions of suspect cases deteriorate. In addition, the dedicated FSA reduced the risk of cross-contamination from positive patients to uninfected patients who require acute or emergency care at the ED. The FSA also played a key role in the implementation of the Swab and Discharge Programme. (2) Swab and Discharge Programme (SDP). The hospital's first points of contact (ED and FSA) take in patients with COVID-19 symptoms and those who meet the suspect case definition. The SDP allows some of the suspect cases who are at lower risk to be sent home for quarantine after the respiratory swab for the SARS-CoV-2 PCR (polymerase chain reaction) test has been performed, while awaiting result confirmation. (3) Types of beds for suspected and confirmed COVID-19 patients. The types of beds designated for the care of suspected/confirmed COVID-19 patients are the Acute Respiratory Infection (ARI) wards, the Isolation Wards (ISO) and the ICUs. ARI beds are dedicated to suspect cases as well as other acute respiratory conditions. Some of these beds were reconfigured from wards with 6 or 8 beds to house fewer beds per room (3 to 4 beds) to ensure sufficient distancing between beds. Policies requiring mandatory use of surgical masks in the wards were also instituted. Patients without other respiratory illnesses who tested negative can be moved out of these ARI beds. ISO beds are isolation care facilities for confirmed cases.
Patients with acute respiratory illnesses who are not infected remain in the ARI beds, whereas positive patients are moved to ISO wards. Both ISO and ARI beds have external community facilities for the decanting of patients to ensure hospital facilities are not overstretched. (4) Other surge capacities. In the hospital, the flexing of bed capacity and the corresponding reduction of non-COVID care loads were activated from 7 February 2020, when the national outbreak alert system level was raised to "Orange". ARI beds deployed to admit suspects and pneumonia patients awaiting lab results were consolidated from non-respiratory wards and ring-fenced. With the rapid case rise from community and migrant outbreaks, the isolation capacity was also expanded rapidly within the IH. Outside the hospital, partnerships with private hospitals and external large-scale facility operators, such as exhibition centres, military camps and port facilities, accelerated the significant expansion of community isolation facilities (CIF) from 500 to more than 40,000 beds progressively in phases. Plans were put in place to deploy Community Recovery Facilities (CRF) for the further stepdown care of patients who are well and asymptomatic but remain SARS-CoV-2 PCR positive. Historical data from the IH and the national health authorities used for estimating the model parameters largely consisted of 3 main sources: (1) daily hospital reports from the IH on cumulative numbers of hospitalized, discharged, transferred to external facilities, and deceased patients, and the daily census of admissions; (2) situation reports and data consolidated by the disease outbreak task forces and institutional command centres; and (3) public domain data released by the Ministry of Health. These open and transparent statistics provided valuable data related to suspect and confirmed cases, test results and molecular lab workload to inform the modelling efforts.
Data with missing records and incomplete statistics were excluded from the analysis. While the first case of migrant worker infection was reported in early February, it is worth noting that the large scale of the COVID-19 outbreak was not apparent until around the end of March. As such, the tracking of the dormitory flows only started from 6th April 2020. Records containing data entry errors or duplicates were removed. All data were derived from government agencies and public healthcare institutions for reporting purposes in management platforms, and further data veracity checks revealed no outliers in either the raw or aggregate data. These multiple layers of data cleaning and veracity checks ensured that data were sufficiently clean and robust for the development of the models. For resource planning purposes, projections of daily cases were made for Best-Case, Base-Case and Worst-Case scenarios. The underlying infection dynamics governing the rise in cases were assumed to be the same across all the cases; the key differentiating factor amongst the three cases is the time when the apex of the infection curve is reached. The various scenarios were estimated from a compartmental model that included the migrant dormitories (29), with the further assumption of a finite dormitory population of approximately 300,000 migrant workers at the time of the study (30). For the Best-Case scenario, the apex was assumed to have been reached in the third week of April 2020. For the Base Case, the apex was projected to be reached at the start of May, and for the Worst Case, the apex was estimated to be towards the end of May 2020. The projections started when the surge in infections occurred at the end of April 2020 (see Figure 2 for both the projected number of daily cases and the cumulative number of cases across the 3 scenarios) and have been constantly updated with available data as the pandemic evolved over time.
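The scenario construction described above, where the same infection dynamics over a finite dormitory population yield different apex timings, can be illustrated with a toy compartmental model. All rates below are illustrative placeholders, not the parameters of the cited model (29).

```python
# Toy SIR compartmental model over a finite dormitory population
# (~300,000 workers, per the text). Scenario peak timing shifts with the
# transmission rate; all parameter values here are illustrative.

def sir_daily_cases(n_pop, beta, gamma, i0, days):
    """Return new daily infections from a discrete-time SIR model."""
    s, i = n_pop - i0, float(i0)
    daily = []
    for _ in range(days):
        new_inf = beta * s * i / n_pop   # force of infection on susceptibles
        new_rec = gamma * i              # recoveries/removals
        s -= new_inf
        i += new_inf - new_rec
        daily.append(new_inf)
    return daily

# Best/base/worst-style scenarios: a higher transmission rate peaks earlier.
fast = sir_daily_cases(300_000, beta=0.40, gamma=0.15, i0=10, days=120)
slow = sir_daily_cases(300_000, beta=0.25, gamma=0.15, i0=10, days=120)
peak_fast = fast.index(max(fast))
peak_slow = slow.index(max(slow))
```

Sweeping the transmission rate in this way gives a family of curves with apexes at different dates, which is the mechanism behind the Best/Base/Worst scenarios above.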
Given the evolving pandemic, changing ground operations and dynamic policies in case detection and surveillance, the model projections can be regularly calibrated for short- to medium-term projections of what would be required for the next week or month ahead. Having both the Base and the Worst cases offers alternative scenarios for developing robust strategies that can deal with unpredictable surges in confirmed or suspect cases. To adapt to the dynamic COVID-19 situation over time, several variants of the fast-response resource planning models were developed. Three of these variants, which evolved across the initial phases of the pandemic, are described as follows: • Variant 1: This variant captured all the hospital flows of COVID patients across the multiple inpatient ward classes and external decanting facilities. However, this model did not take into account the different flows of incoming patients from dormitories and the community, given that the tracking of these two flows only started on 6 April 2020. The variant considered the presence of both in-hospital care facilities across ARI, ISO and ICU, as well as external care facilities with the same capabilities. External surge ARI and ISO facilities were specially set up in dedicated facilities to care for COVID cases and suspects. External ICU facilities refer to the pool of ICU facilities available in both public and private hospitals in Singapore. • Variant 2: Variant 1 was extended to differentiate the incoming streams from the foreign worker dormitories and the community. The model explicitly accounts for the fact that patients with mild symptoms, lower risk, no existing comorbidities and age below 40 years will be decanted to external ISO facilities. On the other hand, patients who do not satisfy these conditions will be cared for in the ISO wards in the hospital.
Consequently, the transition rates to more severe cases were assumed to be lower for cases transferred to external ISO facilities (see Table 1). To better reflect the situation on the ground, we allowed for the transition of care from external ISO facilities to inpatient ISO and ICU wards (see Figure 3). This essentially accounted for mild cases that were earlier decanted returning to the hospital for higher levels of care. Consequently, there could be recirculatory flows between the hospital and external isolation facilities. • Variant 3: Variant 2 was extended to consider the different age segments present in the incoming dormitory flows. The consideration of age segments allows for age-stratified risks of mild and asymptomatic cases turning symptomatic and severe for the dormitory cases, thus providing a high-resolution representation of the actual risks of over-stretching the ICU capacities in the hospital. Data were not yet readily available at the time of model building in Variant 1 and Variant 2 to confidently estimate parameters such as the length of stay (LOS) statistics, the ICU admission rate and the CFR. Regardless, the model was quickly built with the available data, with statistics found in peer-reviewed journal publications complementing the limited local data availability. Based on these aforementioned limitations when Variant 1 was developed, we assumed that out of all infected patients, 5% would be critically ill, 15% would be moderately ill and the remaining 80% would be mildly ill (31,32). Furthermore, prevailing research has also established that the majority of cases would be asymptomatic (33). These assumptions were also incorporated in the model for the community cases in Variant 2. Given the unique situation that Singapore was facing, with a surge primarily arising from the migrant worker population in the foreign workers' dormitories, the risk factors had to be adjusted accordingly by age group.
The age-adjusted risks of ICU admissions for the dormitory population and the community were then separately estimated based on the age profiles of these groups in Variant 3. These age-adjusted risks were accounted for in the demand projections across the facilities for in-hospital and ex-hospital ARI, ISO and ICU beds in Variant 3. The dual input streams from migrant dormitories and the community resulted in a bi-agent model that was deployed for Variant 2. For conciseness, the detailed model structure for Variant 2 is presented in Figure 3. This variant demonstrates the differentiated inflows for patients from the community and the migrant dormitories and captures the structural flows between the various inpatient facilities as well as external decanting facilities. Variant 3 considered the following sub-groups according to the age distribution of patients from the migrant dormitories and the community: (1) less than 45 years old; (2) 45-49 years old; (3) 50-54 years old; (4) 55-59 years old; and (5) 60 years old and above. For Variant 3, the age distribution for the confirmed cases was estimated from historical data. Other parameters were estimated from the literature and from available data and information provided by the IH (see Table 1): (2) a median in-hospital ISO LOS of 10-21 days, and (3) a median LOS of in-hospital ISO between 10-21 days and EISO of 14-24 days. In consideration of the interventions, the unmet needs under various scenarios were evaluated, and dynamic perturbations to the resource capacities were evaluated through sensitivity analysis. Sensitivity analyses were also conducted on key parameters that would have a significant impact on the capacity projections: the average length of stay (ALOS) for COVID patients in the ICU and ISO wards, and the proportion of the daily national COVID demands that come to the IH.
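A sensitivity analysis of this kind can be sketched as Monte Carlo sampling over the uncertain parameters (here ICU conversion and ICU LOS, using the 4-8% and 7-18 day ranges mentioned later for Figure 6), combined through a Little's-law-style approximation: beds ≈ daily cases × conversion × LOS. The daily case count and the uniform sampling choice are illustrative assumptions, not the paper's method.

```python
# Monte Carlo sketch of a bed-demand sensitivity analysis: ICU bed demand
# under an uncertain ICU conversion rate (4-8%) and ICU LOS (7-18 days),
# using a Little's-law-style approximation. The daily case number is
# illustrative, not the paper's projection.
import random

def icu_bed_demand_samples(daily_cases, n_draws=10_000, seed=42):
    rng = random.Random(seed)  # seeded for reproducibility
    samples = []
    for _ in range(n_draws):
        conversion = rng.uniform(0.04, 0.08)   # fraction needing ICU care
        los = rng.uniform(7.0, 18.0)           # days spent in ICU
        samples.append(daily_cases * conversion * los)
    return sorted(samples)

samples = icu_bed_demand_samples(daily_cases=500)
p05 = samples[int(0.05 * len(samples))]   # lower percentile of bed demand
p95 = samples[int(0.95 * len(samples))]   # upper percentile of bed demand
```

Reporting percentile bands rather than point estimates is what allows planners to size capacity against the worst plausible, not just the expected, demand.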
Validation of the results was conducted with the management representatives who were involved in the COVID planning and operations command in both the IH and the national health authorities. Results As of 21 April 2020, there were 3,566 COVID-19 patients in nationwide hospital isolation wards and 27 in intensive care, translating to about 31.7% of inpatient hospital beds being used for COVID-19 patients. The IH had around 1,785 beds and 18 ICU beds in 2019. However, the bed occupancy rate (BOR) at the IH was over 70% most of the time. As the surge of cases from the dormitories evolved into the dominant stream of confirmed cases, tracking for dormitory and community cases started only from Day 74 (see Figure 4). The number of cases from the dormitories increased steadily after the first cases were tracked from 6 April 2020, peaked at 1,426 cases on 20th April 2020, and declined thereafter (35). This stream of cases remained stable due to controlled active surveillance policies deployed for the dormitories, frontline workers and at-risk populations in the community. The use case demonstrated the rapid evolution of the model to account for previously unexpected scenarios and new policies. The unfolding of the pandemic revealed that the streams of new infections arose from two distinct sources: the migrant dormitory population and community/imported cases. Given the dominant streams of positive cases from the dormitories and the distinct demographic characteristics between these two streams, a bi-agent model was deemed necessary. The later variant of the model also took into account the rapidly changing resource management policies for in-hospital and external isolation facilities, as well as swab and discharge policies. Within the hospital, the operations in a satellite FSA located separately from the ED were ramped up. The bi-agent model incorporated the two entry points into the IH: the ED and the FSA.
Decisions were then made through these areas for either hospitalization or swab and discharge (SDC) under the Swab and Discharge Programme (SDP). ED attendees were referred to the FSA if they had mild conditions and required simple examination and consultation, while the FSA would send patients with more severe symptoms to the ED. During the period from 1st till 20th April 2020, 1,589 SDCs were carried out from a total of 2,189 suspect assessments. Of the 600 cases admitted to the IH, 404 (37.10%) and 196 (17.82%) admission decisions were made at the ED and FSA respectively. During the same period, it was noted that higher positive swab rates corresponded with a higher number of hospitalizations. In the data collection period, 242 cases of confirmed COVID-19 were detected from the 600 hospitalized subjects. The differences in the positive swabs across the two premises revealed the severity of cases seen at each facility. As of 12 May 2020, approximately 3,900 tests per hundred thousand people in Singapore had been conducted (26). Based on an analysis of 766 COVID-19 cases, it was determined that the duration of viral shedding through polymerase chain reaction (PCR) tests via nasopharyngeal swabs could be longer than 33 days for approximately 5% of all confirmed cases (36). Further external evidence has also shown that viable viral replication drops rapidly after 7-10 days from the onset of symptoms (37). Consequently, it was determined that de-isolation and discharge policies should not depend solely on viral ribonucleic acid (RNA) detection via PCR tests (36). More aggressive discharge of patients, based on the evidence presented on the time course of infectiousness and other clinical parameters rather than PCR results, was subsequently instituted.
This led to a higher discharge rate of patients from Day 110, as shown in Figure 4, and a better resource focus on patients with early presentations and those with acute respiratory symptoms, which positively impacted the timeliness of public health intervention and containment (36). Age-dependent risk factors for severe COVID-19 have been reported in existing research (14,31,38). An unpublished study by the National Centre for Infectious Diseases (NCID) in Singapore estimated the age-adjusted risks of step-up care for Singapore COVID-19 patients (from non-ICU to ICU beds and from EISO to ISO) based on data from 1,481 patients. The risk of patients requiring ICU care in this limited empirical study of Singapore's COVID-19 patient profile ranged from 0% for those below 30 years old to 19.45% for patients aged 65 and above. The age-adjusted ICU conversion rate was determined to be 1.55% for the dormitory stream and 4.95% for the community stream. The age profile of dormitory cases also differs from that of community cases (see Figure 5). The bi-agent model was re-calibrated with the new age-adjusted risk assumptions. The projected national ICU and non-ICU caseloads were validated to track epidemiological and interventional changes, supporting policy decisions on whether to expand or reduce the number of ICU, in-hospital and external isolation beds. For the use case presented, model calibration was performed against historical data from 1 to 30 April 2020, and the model was then used to project demand for the period from 1 to 31 May 2020. The calibration curves closely matched the historical trends and tracked the rapidly evolving dynamics, from stabilising community infection to the surge arising predominantly within the dormitories. More than 90% of the cases were projected, and observed, to be from the migrant dormitories.
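The core of the bi-agent recalibration can be sketched as a weighted sum over the two streams, using the age-adjusted ICU conversion rates quoted above (1.55% dormitory, 4.95% community); the daily case counts in the example are hypothetical:

```python
# Minimal sketch of the bi-agent idea: expected ICU load is each stream's
# caseload times its age-adjusted ICU conversion rate. The rates come from
# the text; the case counts below are hypothetical placeholders.

ICU_RATE = {"dormitory": 0.0155, "community": 0.0495}

def expected_icu_cases(cases_by_stream):
    """Expected ICU conversions summed across the two agent streams."""
    return sum(n * ICU_RATE[stream] for stream, n in cases_by_stream.items())

demand = expected_icu_cases({"dormitory": 1000, "community": 50})
print(demand)
```

The sketch illustrates why a single blended conversion rate would mislead: a large dormitory-dominated caseload converts to ICU demand at roughly a third of the community rate.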
The projections based on the bi-agent model (including the time to peak ICU and ISO bed needs for the IH and for national demand), with percentiles of the sensitivity analyses for the baseline, best, base and worst cases, are listed in Table 2. Sensitivity analysis results for the bed demands, assuming ICU conversion rates of 4-8% and a median ICU LOS of 7-18 days, are shown in Figure 6 for the best and base case parameters for IH ICU bed requirements. Discussion During the early stages of the pandemic, the lack of scientific knowledge regarding the SARS-CoV-2 betacoronavirus made accurate prediction, and the choice of precautionary measures (e.g., the wearing of face masks), difficult. Because of these inherent uncertainties, any predictive model must be able to learn from new data, and the health system must have the capability to assimilate the new knowledge, plan and respond effectively. Health system resilience, the capacity of health actors, institutions and populations to prepare for and respond effectively to crises whilst maintaining core functions when a crisis hits (39), is crucial to the sustainable delivery of care during such a novel pandemic. This study is a first step towards realizing a framework to achieve the goals of a resilient healthcare system, with a use case that demonstrated the ability of the health system to prepare, respond and strengthen health services delivery during the COVID-19 pandemic (41,42).
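A hedged sketch of how such a sensitivity sweep can be set up, approximating average ICU census with Little's law (census ≈ daily admissions × conversion rate × LOS). The 4-8% conversion range and 7-18 day LOS range are from the text; the 100 admissions/day input is hypothetical:

```python
import itertools

def icu_census(admissions_per_day, conversion, los_days):
    """Little's-law approximation: average census = arrival rate x stay length."""
    return admissions_per_day * conversion * los_days

conversions = [0.04, 0.05, 0.06, 0.07, 0.08]  # 4-8% ICU conversion (from text)
los_values = range(7, 19)                     # median ICU LOS of 7-18 days

# Sweep the full parameter grid and sort to read off the demand envelope.
grid = sorted(icu_census(100, c, l)
              for c, l in itertools.product(conversions, los_values))
best, worst = grid[0], grid[-1]   # best-case and worst-case ICU census
print(best, worst)
```

The published analysis additionally samples scenario percentiles (e.g., the 97.5th) rather than just the envelope, but the grid construction is the same idea.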
To achieve the goals of a resilient system, the main challenges the proposed modelling framework has dealt with are: (1) demand side uncertainties related to the uncertain pandemic scenarios and evolving detection and disposition policies (43); (2) supply side uncertainties related to the dynamic resource management policies, including the bed capacities of in-hospital and external isolation facilities; (3) parametric uncertainties related to the lack of precise and accurate estimates of key parameters required for the modelling framework; and (4) the need for a collaborative framework that facilitates the development of the model and the assimilation of modelling insights by key stakeholders. The proposed modelling framework is agile and adaptable, and is supported by a strong ecosystem that facilitates the assimilation of new data and knowledge for evidence-based decision support. The presence of robust communication channels throughout the health system further ensured the timely and accurate dissemination of modelling insights. With the ability to continuously learn and calibrate the responses with new data and knowledge, the framework facilitated the rapid reorganization and adaptation needed to achieve a resilient health system (39,40). With the dynamic evolution of the pandemic in Singapore, the model was rapidly adapted to deal with demand side uncertainties brought about by new outbreak scenarios and evolving response strategies. The surge in dormitory cases happened in the initial stages of the heightened pandemic preparedness period, when the national disease outbreak response level was "Orange" (44). The dominant stream of infections in migrant dormitories began to surge rapidly from day 74 (the first case having been detected on 23 January 2020), despite the additional measures introduced in the heightened preparedness period. Fortunately, the massive dormitory cluster was contained and separated from the community, thereby preventing widespread community transmission.
After Day 74, the model quickly evolved into a bi-agent model that treated the stream of infections from the dormitory clusters separately from the community cluster. With the updated model, we were able to differentiate the distinct stream from the dormitories, which grew from 6 April to 6 May 2020 and then stabilised at an average of 98.8% of cases (IQR: 0.11%) from 7 to 31 May 2020. There were also a number of dynamic adjustments in the detection, diagnosis and disposition policies for COVID-19, including controlled rates of swabbing in the dormitories and community, heightened surveillance and targeted testing for vulnerable groups (27), the agile flexing of in-hospital capacities, and the swift ramp-up of external isolation facilities (28). The agile modelling framework allowed for these adaptations, and for the incorporation of age-adjusted risk modifiers based on the most up-to-date datasets. On the supply side, the modelling framework supported high-resolution resource planning decisions over the following month. In anticipation of the surge in demand prior to the migrant worker outbreak, the IH progressively increased its bed capacities and healthcare resources, with surge plans to expand its ICU capacity from a pre-COVID level of around 40 beds to 200 beds. Across the various phases of surge, the additional ICU capacities required setup times ranging from 1 day to 15 days, so plans had to be established at least two weeks in advance to prepare for any potential surge in infections. By working in tandem with external isolation facilities run by the government and private hospitals, the modelling results showed that the swift decanting of COVID-19 patients with mild symptoms to external CIFs and CRFs further prevented the over-congestion of hospital capacity seen in other jurisdictions.
These external isolation facilities are manned by trained medical and nursing staff who ensure that all patients receive adequate care and health monitoring services. Model results showed that the planned capacities for IH and national ICU beds were sufficient to deal with the worst-case demand projections at the 97.5th percentile level over the uncertain parameters. For the non-ICU beds, the planned capacity appeared sufficient to deal with the 97.5th percentile of the best-case scenario for IH bed demands, and of the base-case scenario for national ISO bed demands (which included the external isolation facilities). By maintaining sufficient healthcare capacity with a respectable safety margin, Singapore was able to keep its CFR amongst the lowest in the world (8). Data transparency and information sharing are important for developing robust policies under parametric uncertainties. The database architecture within the AMC was well developed prior to the pandemic, and the core data science team embedded in the health system has the domain knowledge needed to pool real-world data from the various source systems and the enterprise data warehouse (EDW) so that the model can be updated with the latest projections. Based on patient demographics, comorbidities, and laboratory and radiological test results, the bed types were generalized into high-needs patients (ICU), patients with minor symptoms who nevertheless require isolation in external isolation facilities, and those who require isolation and higher levels of in-hospital care. Clinical risk factors considered in patient disposition included age, chronic comorbidities (diabetes mellitus, heart, lung and kidney diseases), supplementary oxygen needs, clinical features (e.g., dyspnoea, respiratory rates and SpO2 levels), chest X-rays and laboratory results.
The data showed limited risks of patients deteriorating to require higher levels of care (e.g., in-hospital beds and ICUs). To achieve more robust insights, sensitivity analyses were performed to evaluate the resource needs (ICU and external-versus-in-hospital conversion rates, and the LOS in the different facilities) under the best, base and worst-case scenarios of the pandemic. The projections and sensitivity analyses, given the uncertainties in the risks of ICU conversion and LOS, provided useful inputs for policy makers and have been used to guide the evaluation and improvement of pandemic preparedness plans. Rapid health systems modelling during pandemics requires a strong collaborative effort amongst a wide variety of stakeholders (45,46). The modelling team in this study comprised senior clinicians from the emergency medicine, critical care medicine, infectious disease, epidemiology and health services research domains, while the modelling expertise came from industrial engineers, computer scientists and biostatisticians within the AMC. The health services research and data science team within the AMC maintained a robust collaborative infrastructure that coordinates key technical, clinical and operational partners, and was able to quickly adapt the model with access to high-resolution clinical and operational data. The formal organizational structures established since the beginning of the outbreak (e.g., the health system's disease outbreak task force, the critical care and ICU planning team, and the operating theatre and bed management unit) facilitated the rapid dissemination of study results. Clear channels of communication with healthcare policy-makers and stakeholders, together with the ready availability of robust and credible data sources, ensured that the models were realistic and credible for use by decision-makers.
All these factors are critical considerations for the realization of a resilient healthcare system during pandemics (41). The main limitation of the modelling framework is that the model was built around the bed management policies of a single public hospital, demonstrated in a use case involving a sudden surge in infections from a well-defined source (migrant workers' infections in the dormitories). Nonetheless, the modelling framework can be customized to consider different types of bed resources, patient demands, and process flows. Despite the ability of Singapore's health system to adapt and respond swiftly to the migrant workers' clusters, there is a need for constant vigilance, and robust data sharing and collaborative mechanisms must be maintained to manage bed capacities effectively in a constantly evolving situation. Moving forward, the dynamic hypotheses captured by the current models have to continue to evolve over time. Even with declining cases in Singapore, sporadic community and imported cases have been detected through active surveillance and the screening of targeted groups (26). As countries gradually exit lockdowns in phases and transnational travel is revived, allowing cross-border flows of people (48), the risk of new waves of infection is a realistic concern (49). Given that Singapore is an open economy and has proceeded to reopen in phases, future resurgence of the pandemic remains possible until effective vaccines or drugs are developed. Conclusion This study showcases a modelling framework that was successfully deployed in a use case in Singapore for healthcare resource planning during the COVID-19 pandemic. The rapidly evolving pandemic and growing clinical and scientific knowledge of the disease necessitated the development of an agile modelling framework.
The framework provides a platform for decision-makers to quickly evaluate complex cause-and-effect relationships, internal feedback and delays to support resource planning in a holistic manner. The study has also shown that the tightly integrated nature of the Singapore healthcare system is important to enable close coordination and timely information sharing across diverse groups of stakeholders and decision makers.
The Effect of Injection Parameters on Fuel Consumption and Emissions in a PFI Small Spark Ignition Engine

Research Article

Introduction The tightening of emission standards and the desire to lower fuel consumption dictate the direction of research on internal combustion engines.
Mixture formation affects fuel consumption and harmful emissions. One of the common problems in spark-ignition engines running on liquid fuels is evaporating the fuel for mixing with air, because the fuel is stored in liquid form. In port fuel injection engines the intake valve is hot, so fuel directed at the hot surface of the intake valve evaporates. The injection pressure, injection timing, injection angle and the number of injections per cycle all affect mixture formation [1-7]. The effect of different injectors and injection methods was investigated by Kim et al.: HC emissions were reduced when the fuel was injected at the back of the intake valve with a smaller droplet size. Open valve injection produces higher HC emissions than closed valve injection during cold start, as the fuel droplets cannot evaporate in the first cycles of engine start-up because the intake manifold is cold. Mixture formation improves when turbulence is created in the intake manifold [8]. Meyer et al. investigated the fuel film formed in the intake port of spark ignition engines. Closed valve injection forms a thick fuel film in the intake port during the first fifteen seconds of cold running; as the engine warms up, the film thickness decreases because of the hot intake valve and port, and the droplet diameter changes with increasing port temperature [9]. The injection system must provide excellent atomization of the fuel to release the lowest amount of HC emissions. When the fuel is injected at the back of the intake valve, wetting of the intake port is decreased [10,11]. Anand et al. examined the importance of injection timing and location: when the injector angle and position are not adjusted properly, a fuel film forms on the intake port and valve, and the lowest HC emission is obtained when fuel is injected at the back of the intake valve during open valve injection [12]. The port fuel injection (PFI) system is widely used in gasoline engines.
A PFI injector generally operates at pressures between 3 bar and 5 bar and its operating temperature rises to 80 °C; such injectors are therefore significantly cheaper to produce [13]. A regular PFI injector produces droplets with a Sauter mean diameter between 70 and 150 µm. If the fuel droplets are too large, they can hit the port walls, so mixture formation becomes less dependent on droplet size and more dependent on the transient behavior of the fuel film [14]. Open valve injection produced smaller mean diameters as fuel passed through the valve gap. Kato et al. [15], in an experimental and numerical study, showed that cyclical variations in combustion affect the formation of the mixture in the combustion chamber and around the spark plug. Arcoumanis et al. [16] investigated droplet velocity/size and mixture distribution in a single-cylinder spark-ignition engine. Laser Doppler velocimetry, phase Doppler anemometry, and Mie scattering were carried out with a transparent liner and piston. The research covered lean air/fuel ratios of 17.5 and 24, and tumble-flow droplet size and velocity distributions during the intake and compression strokes. Pressure analysis was carried out using the mixture distribution and flame images obtained with two injection strategies. As a result, combining open valve injection with tumble proved advantageous: the engine operated more stably and efficiently, with faster flame growth at an air/fuel ratio of 24. Lang and Cheng [17] focused on the extent to which the interaction with the intake port gas flow in a port-fuel-injection engine facilitates the mixture preparation process, and whether this interaction improves HC emissions. The result was a slight improvement (compared to closed valve injection) under cold valve conditions with a second fuel pulse of 25%: a 6% reduction in specific HC emissions and a 4.5% increase in the fuel delivery fraction. Hushim et al.
[18] investigated the effects of the intake manifold angle of a PFI retrofit kit on engine performance and emission characteristics. In their experimental study, the engine was operated at wide open throttle with variable dynamometer loads at two angles, 90° and 150°. The 150° angle was found to be optimal for brake power (BP), brake mean effective pressure (BMEP), brake specific fuel consumption (BSFC) and hydrocarbon (HC) emissions. From the literature review, the effect of the injection parameters of a PFI gasoline spark ignition engine on performance and emissions has not been clearly researched, so more studies are needed to fill these gaps in the literature. The aim of this study is therefore to investigate the effects of injection start angle, injection pressure and number of injections per cycle on engine parameters and emissions. Experimental Study Experiments were conducted in the engine laboratory of Istanbul Technical University using a single-cylinder research engine, originally a four-stroke compression ignition Antor 3LD 450 engine. Further details about the engine are given in Table 1. The engine was converted to a spark ignition engine by adding a throttle valve and an electronic control unit (ECU) [19]. The spark plug was relocated to the injector location, which is near the center of the piston head. The ECU, designed and manufactured as part of a master's thesis, controls the start of injection, its duration and the ignition timing. The hardware board used in the ECU is an Arduino Mega 2560. Fuel injection pressure and timing could be changed, and the dwell duration was set to 5 ms for ignition [20]. The maximum original engine power and torque are 10 HP at 3000 rpm and 30 Nm at 1800 rpm, respectively. The injector used in the experiments is a Bosch EV 6.2 L with four holes for the injection of gasoline fuel.
The injection angle is 25 degrees at 300 kPa pressure. The fuel injection pressure could be set to the desired value with the fuel supply system, and the start of injection was changed via the electronic control unit. Fig. 1 shows the experimental apparatus. The engine was loaded by an eddy current dynamometer, with the load measured using a strain gauge load sensor (accuracy ±0.02%). An inductive pickup speed sensor measured the engine speed (accuracy ±3 rpm). Fuel consumption was measured with an AVL 733S fuel consumption measurement and conditioning system (accuracy ±0.08 kg/h). The exhaust emissions CO2, THC and CO were sampled directly from the exhaust pipe; emission concentrations and the excess air coefficient were measured and calculated by exhaust gas analyzers (Horiba Mexa 7500). A laboratory automation system produced by OTAM collected all data from the sensors, such as exhaust gas, lubricating oil and other temperatures, the position of the throttle valve, and intake and exhaust pressures. During the tests, the temperature of the cooling water was kept constant at around 72 °C. The experimental data were recorded for 90 seconds using the automation system. The ignition advance for maximum brake torque (MBT) was obtained at each experimental point, and the excess air coefficient (λ) was set to 1 (stoichiometric mixture) for all experiments. The intake manifold temperatures were about 44 °C and 37 °C for 1 and 5 bar of MEP at 1200 rpm, respectively. At 1500 rpm, the air inlet temperatures were 42, 41 and 35 °C at 1, 3 and 5 bar engine load, respectively. Results and Discussions The BSFC, THC and CO values are shown in Fig. 2 for different injection numbers per cycle at 1200 and 1500 rpm.
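The reported BSFC values follow from the dynamometer torque, engine speed and measured fuel flow. A minimal sketch of that calculation (the 30 Nm at 1800 rpm rated point is from the engine specification above; the 1.5 kg/h fuel flow is a hypothetical input, not a measured value):

```python
import math

def brake_power_kw(torque_nm, speed_rpm):
    """Brake power from dynamometer torque and engine speed: P = 2*pi*N*T/60."""
    return 2 * math.pi * speed_rpm * torque_nm / 60 / 1000

def bsfc_g_per_kwh(fuel_kg_per_h, power_kw):
    """Brake specific fuel consumption in g/kWh: fuel mass flow / brake power."""
    return fuel_kg_per_h * 1000 / power_kw

# At the engine's rated torque point (30 Nm at 1800 rpm, from Table 1):
p_kw = brake_power_kw(30, 1800)      # about 5.65 kW
bsfc = bsfc_g_per_kwh(1.5, p_kw)     # hypothetical 1.5 kg/h fuel consumption
print(round(p_kw, 2), round(bsfc, 1))
```

Any liquid fuel entering the cylinder unevaporated raises the measured fuel mass flow at a fixed load, which is why BSFC and THC move together throughout the results.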
The number of injections per cycle depended on whether the encoder was connected to the crankshaft or the camshaft: single injection was obtained with the encoder coupled to the camshaft, and double injection with the encoder connected to the crankshaft. In the figures, single injection is labelled "cam" and double injection "crank". With single injection, fuel was injected while the intake valve was open; with double injection, fuel was injected during the intake and expansion strokes. The BSFC and THC values are lower for single injection at all speeds and loads. Since there is airflow in the intake manifold at the moment of injection, the fuel is carried by the air and does not reach the wall with single injection. With double injection, there is no airflow during the expansion stroke, so fuel can reach the manifold wall and enter the cylinder as droplets. Fuel entering in the liquid phase increases THC emission and the BSFC value. CO emission mostly depends on the excess air coefficient, and varied with small uncontrolled differences in its value. The BSFC, THC and CO values are depicted in Fig. 3 for different injection pressures (1, 2 and 4 bar) at different loads (1, 3 and 5 bar of MEP) and a constant engine speed of 1500 rpm. The fuel was injected once per cycle in all injection pressure experiments. When the fuel was injected at 4 bar, engine loads below 1.20 bar of MEP could not be obtained, so the BSFC value for 4 bar injection is lower than the others at 1 bar load. With increasing injection pressure, fuel reaches the intake manifold wall and enters the cylinder in liquid form, so the BSFC and THC values increased with rising injection pressure at higher loads. CO emission values changed according to the excess air coefficient and could be lower or higher for different injection pressures at different loads. The electronic control unit (ECU) set different injection start angles. Fig.
4 shows the injection start angles (-343, -243, -143, 150 and 250 °CA) relative to the crank angle (CA). Injections were made during the intake and exhaust periods; the intake valve is open during the intake period and closed during the exhaust period. The experimental points were chosen roughly at the start, middle and end of the intake period, and at the start and middle of the exhaust period. The spark advance was set to the same value for all injection starts at the same load. The behavior of fuel injected in liquid form into the intake port depends on many parameters such as temperature and pressure, and one of the most critical parameters governing how fuel enters the cylinder is the injection start angle. The fuel can be injected while airflow is present or absent: airflow can carry fuel into the cylinder, whereas without airflow the fuel can reach the manifold wall. The temperature of the intake manifold also affects the state of the fuel, since high temperatures evaporate liquid fuel [21,22]. Intake manifold pressures for 1 and 5 bar of MEP are depicted in Fig. 5. The lowest intake manifold pressures occurred around (-260) to (-180) °CA in the intake stroke; injection in these lowest-pressure regions therefore gave positive results for mixture formation. There is no airflow when the intake valve is closed, and the intake manifold temperature rises as the load decreases. The change of BSFC, THC and CO with injection start angle is given in Fig. 6. Conclusion The results of the experiments showed that connecting the encoder to the camshaft decreases brake specific fuel consumption and THC emission.
- When the encoder was mounted on the camshaft (one injection per cycle), the BSFC value was reduced by about 9.5% and 5% at 1200 and 1500 rpm, respectively.
- In this running condition, THC emission was reduced by about 10% at all loads and speeds.
- Rising injection pressure increased the BSFC and THC values.
- Reducing the injection pressure from 4 bar to 1 bar decreased BSFC and THC emission by about 7% and 3.5%, respectively, at loads above one bar and at all engine speeds.
- At low engine load (1 bar of MEP), the injection start angle strongly affects the BSFC and THC values; an injection start of -243 °CA decreases these parameters by roughly 5%.
- At 5 bar load, the injection start does not strongly affect the BSFC and THC values, because the intake manifold temperature is higher than at lower load and therefore evaporates more liquid fuel. The lowest BSFC and THC values were found for the -243 °CA injection start at 5 bar load, as at 1 bar.
- CO emission strongly depends on the excess air coefficient; changing the injection conditions did not affect this emission value.
As a recommendation for future studies, researchers can study the footprints of liquid fuel on the intake manifold and intake valve. In addition, the test engine can be run at higher loads and engine speeds to gain more knowledge about these running conditions. Acknowledgment The authors thank the supporters of this project: Istanbul Technical University Scientific Research Project Unit, Turkey Bosch auto parts, OTAM and the team of the automotive laboratory.
Ecosystem Services Assessment of Urban Forests of Adama City, Ethiopia Background: The recent urban challenges due to climate change and deterioration of the urban environment require proper planning and inventories of urban forests. In this paper, tree and shrub information was used to estimate leaf area/biomass, carbon storage, carbon sequestration, pollution removal, volatile organic compound (VOC) emissions, and the hydrological and functional values of the Adama city urban forest. This study was conducted to assess and quantify the ecosystem services of the urban forests of Adama city, Central Ethiopia. Results: The results of the i-Tree Eco model indicated that tree species such as Azadirachta indica, Eucalyptus globulus, Carica papaya and Delonix regia sequester a high percentage of carbon, approximately 14.7%, 7.4%, 7.3% and 6.2% of all annually sequestered carbon, respectively. The urban forest of the city was estimated to store 116,000 tons of carbon; most carbon was stored by species such as Eucalyptus globulus, Azadirachta indica, Carica papaya and Delonix regia, which store approximately 22.1%, 12.3%, 9.5% and 4.2% of all stored carbon, respectively. Trees in the Adama urban forest were estimated to produce 19.93 thousand tons of oxygen per year. It was estimated that trees and shrubs remove 188.3 thousand tons of air pollution due to O3, CO, NO2, PM2.5 and SO2 per year. In the city, 35 percent of the urban forest's VOC emissions came from Eucalyptus cinerea and Eucalyptus globulus. The monetary value of the Adama urban forest in terms of carbon storage, carbon sequestration and pollution removal was estimated at 16,588,470 ETB/yr, 118,283 ETB/yr and 12,162,701,080.9 ETB/yr, respectively.
Conclusion: Introduction In our world, human population growth and urbanization have adverse environmental impacts such as elevated temperatures, increases in air pollution and stormwater quantity, and decreases in stormwater quality, which pose major environmental and public health problems in cities (Rydin et al. 2012; Seto and Shepherd, 2009). In this regard, the urban forest ecosystem plays an important role in providing multiple services and environmental benefits to the urban environment (Forrest et al. 1999; Strohbach & Haase, 2012). Ethiopia has one of the largest urbanization rates (about 4-5%) in the world, and its urban population is expected to keep increasing; rapid urbanization is a present reality (Rama, 2013). This phenomenon has been associated with environmental problems in most Ethiopian cities, chiefly urban sprawl; solid and liquid waste management; water, air and noise pollution; illegal settlements; and the degradation of open green areas (Thomas, 2013). The urban population share is projected to grow from 15% in 2000 to almost 30% in 2030 (UN Population Division, 2004). Ethiopia is experiencing the effects of climate change, such as an increase in average temperature and changes in rainfall patterns. Several techniques and models have been developed to help quantify ecosystem services, such as i-Tree Eco and i-Tree Streets (i-Tree, 2010a). In this work, the i-Tree Eco software suite was used for the analysis. i-Tree Eco was designed to use standardized field data from randomly located plots, as well as local hourly air pollution and meteorological data, to quantify urban forest structure, ecological function, and the associated value (Nowak et al. 2008a; McPherson 2010b).
The main aim of this study is to assess the ecosystem services of the urban forest of Adama city in terms of climate change mitigation. Specifically, the study was intended i) to assess the carbon storage and sequestration potential of Adama city trees, ii) to estimate oxygen production and pollution removal by different species of Adama city trees, and iii) to assess the hydrological and functional values of trees in Adama city.

Study Area

This study was conducted in Adama city of Oromia Regional State, Central Ethiopia. Adama city is geographically situated at 8° 32′ 24″ N latitude and 39° 16′ 12″ E longitude, at an altitude of 1,712 m a.s.l. (Fig. 1). The total area of the city is about 13,366.5 hectares, and it lies 99 km from Addis Ababa, the capital city of Ethiopia. The annual average minimum and maximum temperatures of the study area are 13 °C and 27 °C, respectively. The annual average rainfall is 837-1005.7 mm, and the climate varies due to the great variation in altitude (BoFED, 2012). The total population of Adama is about 303,569, of which 150,228 are males and 153,341 are females. Currently, the city contains 18 kebele administrations.

Research Design and Sampling

A reconnaissance survey was conducted (from October to December 2018) by a team of 5 people. A site assessment was done to record the general plot information used to identify the plots and their general characteristics. In this work, tree and shrub information were used to estimate leaf area/biomass, pollution removal, and volatile organic compound (VOC) emissions. Finally, tree information was used to estimate the forest ecosystem value, carbon storage, carbon sequestration and hydrological functions of the Adama city urban forest. In this study, a total of 214 sample plots (27 percent of the city) were established using a simple random sampling method.
As a general rule, 200 plots (one-tenth acre each) yield a standard error of about 10% for an estimate of the entire city. As the number of plots increases, the standard error decreases, giving more confidence in the estimate for the population. With regard to plot size, the standard plot for an Eco analysis is a 0.1-acre circular plot with a radius of 11.16 m, or 0.0407 hectares. The sample plots were created directly in the Eco application using the random plots generator via the Google Maps function (Fig. 2). The diameters of all identified trees and shrubs were measured at breast height (1.3 m above ground) using a diameter tape (5 m length). Diameters of individual trees were recorded to calculate the basal area and relative basal area of plant species. Heights of all sampled trees and shrubs were measured with a Silva hypsometer. Field data collection crews typically located field plots using maps indicating plot locations; aerial photographs and digital maps were also used to locate plots and features. During the random distribution of plots in the city, the researchers faced the challenge of misplacement of some plots; for example, some plot centers fell on buildings, private land, or the border between different land ownerships and land-use types. As a result, professional judgment was used to shift those plot centers to appropriate locations.

Data collection and analysis

In this study, data were collected from sample plots of 0.0407 ha (1/10 ac) randomly laid out across the city, and the data were analyzed using the i-Tree Eco (formerly Urban Forest Effects, UFORE) model (Nowak et al., 2008). The plots were based on the Forest Inventory and Analysis (FIA) national program plot design, and data were collected as part of pilot projects testing FIA data collection in urban areas (Cumming et al., 2008).
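The plot-geometry and basal-area calculations described above can be sketched in a few lines. This is an illustrative check using the standard circle-area relations rather than anything specific to i-Tree Eco; the function names are my own.

```python
import math

ACRE_M2 = 4046.86  # square metres per acre

def plot_radius_m(plot_acres: float) -> float:
    """Radius (m) of a circular sample plot of the given area in acres."""
    return math.sqrt(plot_acres * ACRE_M2 / math.pi)

def basal_area_m2(dbh_cm: float) -> float:
    """Basal area (m^2) of a stem from its diameter at breast height (cm)."""
    radius_m = dbh_cm / 200.0  # diameter in cm -> radius in m
    return math.pi * radius_m ** 2

# A full 0.1-acre plot works out to a radius of about 11.35 m; the paper
# quotes 11.16 m, presumably a slightly smaller nominal plot boundary.
print(round(plot_radius_m(0.1), 2))   # 11.35
print(round(basal_area_m2(30.0), 4))  # 0.0707 m^2 for a 30 cm DBH stem
```

Summing `basal_area_m2` over all stems of one species in a plot, divided by the plot area, gives the per-species basal area used for the relative basal area comparison.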
For each tree found in the sample plots, carbon storage, annual sequestration, oxygen production, pollutant removal and hydrological functions were estimated using biomass and growth equations. To carry out national estimates of carbon storage and sequestration, the carbon data were standardized per unit of tree cover.

Results

The results of this study come from a complete tree inventory and i-Tree Eco analysis of the 214 plots from Adama city, Central Ethiopia. In this section, the structure, carbon storage, carbon sequestration, volatile organic compound (VOC) emissions, air pollution removal and hydrological functions of the Adama city urban forest are analyzed and presented in detail.

Structure of tree species of Adama

During data collection, trees were identified to the most specific taxonomic classification possible. Field data were collected during the leaf-on season to properly assess tree canopies. Typical data collection includes land use, ground and tree cover, and individual tree attributes: species, stem diameter, height, and crown width. In this work a total of 86 woody species were identified, and the height, crown area and DBH of 806 trees and shrubs were measured in the field. Leaf area of trees was assessed using measurements of crown dimensions and the percentage of crown canopy missing; where these variables were not collected, they were estimated by the model. Many tree benefits relate directly to the amount of healthy leaf surface area of the plant. Trees cover about 20 percent of Adama city and provide 8.871 square miles of leaf area. In the Adama urban forest, the most dominant species in terms of canopy cover and leaf area are Acacia albida, Casimiroa edulis, and Eucalyptus cinerea. The attributes of 20 species are presented in Table 1.
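Standardizing carbon per unit of tree cover amounts to dividing a total by the canopy area. A minimal sketch under that reading, plugging in figures reported in this paper (13,366.5 ha city area, about 20% tree cover, and the 116,000 t of stored carbon from the abstract); the function name is my own.

```python
def carbon_per_canopy_ha(total_carbon_t: float,
                         city_area_ha: float,
                         canopy_fraction: float) -> float:
    """Carbon storage density per hectare of tree canopy (t/ha)."""
    return total_carbon_t / (city_area_ha * canopy_fraction)

# Figures reported in the paper: ~116,000 t stored carbon,
# 13,366.5 ha city area, ~20% tree cover.
print(round(carbon_per_canopy_ha(116_000, 13_366.5, 0.20), 1))  # 43.4 t/ha
```

Expressing storage this way makes the Adama estimates comparable with the per-tree-cover densities reported for other cities in the Discussion.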
Carbon Storage and Sequestration

Trees reduce the amount of carbon in the atmosphere by sequestering carbon in new growth every year. The amount of carbon annually sequestered increases with the size and health of the trees. The gross sequestration of Adama city trees is about 8,291 thousand tons of carbon per year, with an associated value of ETB 1.18 million. Net carbon sequestration in the urban forest is about 7,474 thousand tons. The species accounting for the greatest share of carbon sequestration in the Adama urban forest are listed in Table 2. In particular, tree species such as Azadirachta indica, Eucalyptus globulus, Carica papaya and Delonix regia sequester the largest percentages of carbon, approximately 14.7%, 7.4%, 7.3% and 6.2% of all annually sequestered carbon, respectively (Fig. 3).

Air Pollution Removal by Urban Trees

Pollution removal by trees and shrubs in Adama city was estimated using field data and the most recent pollution and weather data available. Removal was greatest for sulfur dioxide (Fig. 4). It is estimated that trees and shrubs remove 188.3 thousand tons of air pollution (ozone (O3), carbon monoxide (CO), nitrogen dioxide (NO2), particulate matter less than 2.5 microns (PM2.5), and sulfur dioxide (SO2)) per year, with an associated value of ETB 26.2 billion.

Volatile Organic Compound Emission

In 2018, trees in Adama city emitted an estimated 51.44 tons of volatile organic compounds (VOCs) per year (33.81 tons of isoprene and 17.63 tons of monoterpenes). Emissions vary among species based on species characteristics (e.g., some genera such as Grevillea robusta are high isoprene emitters) and the amount of leaf biomass. In Adama city, 35 percent of the urban forest's VOC emissions came from Eucalyptus cinerea and Eucalyptus globulus. These VOCs are precursor chemicals to ozone formation.
Eco benefits of the Adama urban forest

A summary of the ecosystem values, including the number of trees, carbon storage and sequestration, pollution removal, and structural value of the woody species of the Adama urban forest, is presented in Table 5.

Discussion

This study quantified the carbon stored and sequestered by urban trees in Adama city, Central Ethiopia. The carbon sequestration and storage of Adama city appeared higher than in carbon assessments conducted in cities such as Padua, Bolzano and Florence (Italy), Lisbon (Portugal), and Zurich (Switzerland) (Crema 2008; Paoletti et al. 2011; Wälchli 2012). The amount of carbon stored and sequestered by Adama urban trees in the current study was also higher than in the study of Pace et al. (2018) on ecosystem services modeling for urban trees in Munich, Germany, which was estimated at 6,225 tons and 214 tons per year, respectively. Furthermore, the carbon storage and sequestration figures in the current study were compared with results for three North American cities: the carbon storage and sequestration estimates for New York, Chicago and Jersey City were 1,225,200 & 38,400 t C yr⁻¹, 854,800 & 40,100 t C yr⁻¹ and 19,300 & 800 t C yr⁻¹, respectively (Nowak and Crane, 2002). This comparison showed that the annual carbon storage and sequestration of these cities were higher than those of Adama city, except for the annual carbon sequestration of Jersey City, which was lower than that of Adama city. The carbon storage and sequestration results from this study are difficult to assess for accuracy and to compare with other studies because of differences in estimation methodologies, climatic conditions, species composition, and urban forest structure (Jo & McPherson 1995; Strohbach & Haase 2012).
The pollution removal indicated in this study was lower than the 860 tons/year reported for the City of Baton Rouge. Nowak et al. (2014) analyzed the effects of urban forests on air quality and human health in the United States and found that in highly vegetated areas, trees can improve air quality by as much as 16% (Kroeger et al. 2014). Baumgardner et al. (2012) pointed out that around 2% of the ambient PM10 in Mexico City is removed within their study area. In a study carried out in Barcelona (Spain), Barò et al. (2014) reported that urban forest services reduce PM10 air pollution by 2.66%. Moreover, in the Mediterranean city of Tel-Aviv, Cohen et al. (2014) observed that an urban park significantly mitigated nitrogen oxide (NOx) and PM10 concentrations, with a greater removal rate observed in winter, and increased tropospheric ozone levels during summer. In this study, the amount of annual volatile organic compound (VOC) emission was lower than that reported for Scotlandville's trees, which yearly produce 8.91 tons of monoterpenes, 125.53 tons of isoprene, and 134.43 tons of total volatile organic compounds (VOCs) that may contribute to ozone formation (Nowak & Dwyer 2007). In the Adama urban forest, trees such as Acacia tortilis, Azadirachta indica and Ficus elastica have higher potential evapotranspiration and transpiration (Table 4). Similarly, Xiao and McPherson (2016) reported that trees in urban areas can increase the return of runoff to the atmosphere through transpiration, providing associated air cooling benefits. Furthermore, a study of the Gwynns Falls watershed in Baltimore indicated that heavily forested areas can reduce total runoff by as much as 26% and increase low-flow runoff by up to 13% compared with non-treed areas under existing land cover and land use conditions (Neville, 1996).
Studies have also reported that tree cover over pervious surfaces reduced total runoff by as much as 40%, while tree canopy cover over impervious surfaces had a limited effect on runoff. The monetary value of the Adama urban forest is presented in the results section (Table 5). The outcome of the current study was compared with the study conducted in the City of Baton Rouge, where the annual monetary values of urban forest services were lower: carbon storage ($6.2 million/year), carbon sequestration ($41.0 million) and pollution removal ($1.1 million/year). In general, this work has tried to quantify the ecosystem service value of Adama city, Ethiopia, which will support further urban forest development work and government intervention in terms of policy and awareness creation. Further research should be conducted to assess and evaluate the ecosystem service value of urban trees across several types of Urban Green Infrastructure (UGI) and to compare different cities in the country. This will sensitize cities to learn and compete in urban forest development to enhance the ecosystem value of trees.

Conclusions

Urban forests are a significant and increasingly vital component of the urban environment that can impact human lives. Trees and forests have a positive effect on human health and well-being by improving air quality and reducing greenhouse gases, mainly through reducing air temperatures and energy use and through direct pollution removal and carbon sequestration. Understanding the value of an urban forest can give decision makers a better understanding of urban tree management. These results provide baseline information for management recommendations to maximize the ecological benefits provided by trees. By understanding the effects of trees and forests on the atmospheric environment, urban forest managers and policy makers can decide on the policy and strategic planning of urban greening.
Subsequently, it will help in designing appropriate and healthy vegetation structures in cities to improve air quality and, consequently, human health and well-being for current and future generations.

Fig. 3. Estimated annual gross carbon sequestration (points) and value (bars) for urban tree species with the greatest sequestration, Adama city.
Fig. 4. Annual pollution removal (points) and value (bars) by urban trees, Adama city.
Improvement of Creative Thinking Ability through Problem-Based Learning with Local Culture Based on Students' Gender and Prior Mathematics Ability

*Correspondence: rahmiramadhani3@gmail.com

The purpose of this study was to determine the increase in the creative thinking of students taught using problem-based learning with local culture (PBL-Local Culture). This study also examined the interaction between students' gender and students' prior mathematics ability. This is quasi-experimental research using a pretest-posttest control group design. Data were analyzed using SPSS 25 through Two-Way ANOVA. The results show that the increase in the creative thinking abilities of students taught using problem-based learning with local culture is significantly higher than that of students taught using classical learning. Based on the results, we found that the use of problem-based learning with local culture (PBL-Local Culture) offered students new experience in solving real problems from their daily lives, primarily related to their local culture. Students can describe how to solve a daily-life problem with mathematical modelling. This learning model ultimately facilitates students in improving their creative thinking ability: students build new models of problem solving and can finally solve the problem with their own model. This study also found that students' gender and prior mathematics ability had no effect on students' creative thinking ability, meaning that there is no gender gap and no decisive contribution of prior mathematics ability to students' academic skills. Based on this study, we recommend using this model in other subjects and for improving other learning abilities, not limited to creative thinking ability.

Introduction

Mathematics is one of the subjects that must be taught in the curriculum in Indonesia.
Learning mathematics is important in education because mathematics is a systematic science: the study of problems by finding solutions in a systematic and organized way. The language of mathematics is presented with unique symbols and characters, and the use of these symbols and characters aims to convey systematic solutions (Ramadhani, 2018). This statement is in line with the National Council of Teachers of Mathematics (Gordah & Astuti, 2013), for which one of the goals of mathematics learning is learning to solve problems (mathematical problem solving). In addition, the application of mathematics must be integrated into and support the learning system in schools, including the 2013 curriculum implemented in Indonesian schools (Abdurrahman et al., 2020). In the 2013 curriculum (Azis, 2012), one of the important mathematical abilities in school learning is creative thinking. Creative thinking ability is needed to master and create the technologies of the future, which means that mathematics needs to be given to all students, from elementary school to college, to equip them with logical, analytical, systematic, critical and creative thinking and the ability to cooperate. In line with this, Nur'aeni (2008) also said that creativity is essential in human life: it is needed to overcome difficulties, find a way out of complications, break stagnation and achieve desired goals. Without creativity, a person will often hit a deadlock, which inhibits and may even reduce the spirit of achievement. Based on the explanation above, it can be concluded that mathematics is essential for student learning in school.
However, this is inversely proportional to the facts obtained from TIMSS 2007, TIMSS 2011 and PISA 2009, namely that Indonesian students answered mathematics questions below the international standard (Ramadhani, 2018). Not very different results were also obtained by Hans Jellen of the University of Utah, United States, and Klaus Urban of the University of Hannover, Germany: of the eight countries studied, Indonesian children's creativity was the lowest (Rahman, 2012). He continued that the lack of creative thinking ability contributes to students' low achievement. One cause of students' low creative thinking ability is a school learning process that is less than optimal. Teachers are more dominant than students in explaining the material, while students are merely recipients of information. As a result, students are solely concerned with the solution steps given by the teacher. Consequently, students have no alternative ways to solve problems and ultimately lack flexibility, while flexibility is one important component of creative thinking ability (Munandar, 2009). This is because the learning applied so far tends to use conventional learning models. In classical learning, students tend to be passive and are not free to be creative in learning the teaching materials and solving mathematical problems. Student-centered learning models need to be applied to improve students' ability to be creative in the learning process. Student-centred learning models are also promoted in the 2013 curriculum for application in the classroom. One of them is problem-based learning. Problem-based learning focuses on real, non-routine problems, which is expected to help students be creative when solving real problems (Ramadhani & Narpila, 2018). Problem-based learning is characterized by students working with each other.
They cooperate and motivate one another to engage continuously in complex tasks, increasing opportunities for shared inquiry and dialogue and for developing social and thinking skills (Ramadhani, 2016). The application of problem-based learning to improving creative thinking ability will be more effective if it is integrated with the local culture. Freudenthal (Van den Heuvel-Panhuizen, 1996) stated that mathematics should be linked with existing realities, remain close to the students and be relevant to society. This perspective treats mathematics not only as a subject but as a human activity, which is very close to the local culture. The same opinion was expressed by Bishop (Tandailing, 2013): mathematics is a part of culture, integrated into all aspects of human life. Thus, mathematics for a person is shaped by that person's cultural background, because everything they do is based on what they see and feel. Culture-based learning (ethnomathematics) is one alternative that can bridge culture and mathematics. Pannen (Sutama, Mulyaningsih, & Lasmawan, 2013) also described culture-based learning as a strategy of creating learning environments and learning experiences that integrate culture into the learning process. The culture integrated into mathematics learning in this study is the local culture of Medan, Indonesia. Based on the elaboration above, the researchers hypothesized that the application of problem-based learning with local culture would improve students' creative thinking ability, especially for high school students in Medan, Indonesia.

Types of research

This research is a quasi-experiment. The independent variables are problem-based learning with local culture (PBL-Local Culture) and classical learning. The dependent variable is students' creative thinking ability.
The control variables in this study were students' gender, classified into two categories (female and male), and students' prior mathematics ability, classified into three categories (high, medium and low). The design is a pretest-posttest control group design with two groups: an experimental class and a control class.

Research Procedure

This study used a quasi-experimental design of the pretest-posttest control group type, with two classes: an experimental class (which received the PBL-Local Culture treatment) and a control class (which did not receive the treatment). The research began by preparing a research instrument consisting of questions related to students' creative thinking abilities. Each item covers the elements and indicators of creative thinking ability, including fluency, flexibility, originality, and elaboration. In addition, each item reflects daily problems that contain elements of local culture. The next stage was to pilot the research instrument with a group of students outside the research sample. The instrument testing consisted of content and construct validity tests and reliability tests. After the research instrument showed acceptable validity and reliability, it was used in the field. The next stage was to carry out the learning process using the problem-based learning model based on local culture. At this stage, students in each learning class (experimental and control) were divided randomly into several groups based on gender category and level of prior mathematics ability.
This was done so that the study could provide significant and objective results on the controlling factors, namely gender and students' prior mathematics ability. The learning process lasted for 8 face-to-face meetings, beginning with a pre-test and ending with a post-test on the mathematics material. During the learning process, the researcher used the stages of the problem-based learning model: presenting daily problems that contain elements of local culture, grouping students, giving students the opportunity to investigate the given problem, presenting the results of the investigation, and reviewing the presented results to conclude the discussion. The learning process emphasizes a student-centered approach and minimizes the role of the teacher, who acts only as a facilitator and referee in the discussion and problem-solving process. The teacher also provides scaffolding so that students can use prior knowledge to solve the problem.

Data Collection Techniques

Data were collected through essay tests consisting of a pre-test and a post-test. Validity and reliability were tested using the Product-Moment correlation test and the Cronbach's alpha test. The next step was the prerequisite tests: data normality using the Kolmogorov-Smirnov test and homogeneity using Levene's test. These tests were carried out to determine which hypothesis tests were appropriate for the data.

Data analysis technique

Data were analyzed using Two-Way ANOVA. All statistical tests used a significance level of 0.05. All tests were run in SPSS 25.
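The paper runs Levene's homogeneity-of-variance test in SPSS. As an illustration of what that test computes, here is a minimal pure-Python sketch of the mean-centred Levene W statistic; the toy data and function name are my own. In practice W is compared against an F distribution with (k-1, N-k) degrees of freedom to obtain the p-value.

```python
def levene_w(groups):
    """Levene's W statistic (mean-centred variant) for homogeneity of variance.

    groups: list of lists of observations, one inner list per group.
    A large W (small p-value) indicates unequal group variances.
    """
    k = len(groups)
    n = [len(g) for g in groups]
    big_n = sum(n)
    # absolute deviations of each observation from its group mean
    z = [[abs(x - sum(g) / len(g)) for x in g] for g in groups]
    zbar_i = [sum(zi) / len(zi) for zi in z]          # per-group mean deviation
    zbar = sum(sum(zi) for zi in z) / big_n           # grand mean deviation
    between = sum(ni * (zi - zbar) ** 2 for ni, zi in zip(n, zbar_i))
    within = sum((x - zi) ** 2 for grp, zi in zip(z, zbar_i) for x in grp)
    return (big_n - k) / (k - 1) * between / within

# Two small groups with clearly different spreads
print(levene_w([[1, 2, 3], [2, 4, 6]]))  # 0.8 for this toy data
```

SPSS's default Levene test uses this mean-centred deviation; the Brown-Forsythe variant replaces the group mean with the group median for robustness.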
The results of the analysis will be interpreted and used as a guide in determining which statistical hypotheses are accepted. Table 1 provides general information about students' creative thinking ability according to the factors involved. Overall, across the gender categories and levels of prior mathematics ability, the experimental group (treated with PBL-Local Culture) obtained better results than the control group (treated with conventional learning). Data processing using the Kolmogorov-Smirnov test and Levene's test showed that the samples were taken from a normally distributed population and have homogeneous variance. Table 2, Table 3 and Table 4 present the description of students' creative thinking ability based on N-Gain scores by learning group, gender group and prior mathematics ability group. Table 2 shows that the minimum, maximum and average N-Gain values in the two learning classes differ: students taught using the local culture-based problem-based learning model (PBL-LC) obtained higher scores than students taught using the classical learning model. This is due to the influence of the new learning model: students felt enthusiastic and motivated in following the learning process, and the experimental class also showed high self-confidence in solving the given problems. On the other hand, the standard deviations of the two learning classes also differ, but the values in both classes are normally spread, and no student has a value significantly different from the others; both classes fall into the medium category at the standard deviation value.
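The N-Gain scores summarized in the tables are presumably Hake's normalized gain, though the paper does not state the formula. A minimal sketch under that assumption, taking a maximum score of 100 and the commonly used category cut-offs (g ≥ 0.7 high, 0.3 ≤ g < 0.7 medium, g < 0.3 low); the example scores are hypothetical, not taken from the paper.

```python
def n_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Hake's normalized gain: fraction of the possible improvement achieved."""
    return (post - pre) / (max_score - pre)

def n_gain_category(g: float) -> str:
    """Common cut-offs: g >= 0.7 high, 0.3 <= g < 0.7 medium, g < 0.3 low."""
    if g >= 0.7:
        return "high"
    return "medium" if g >= 0.3 else "low"

g = n_gain(pre=40, post=70)          # hypothetical pre/post scores
print(g, n_gain_category(g))         # 0.5 medium
```

Because the gain is normalized by each student's remaining headroom, it lets the pretest-posttest improvement be compared fairly across students who start at different levels.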
The results in Table 3 show differences between the average N-Gain scores of groups of students by gender category, but the difference between male and female students is small. Practically, Table 3 indicates that students' gender does not affect the increase in creative thinking ability. In addition, the fact that the numbers of male and female students are similar may also explain why the average N-Gain values of the two groups do not differ significantly. Table 4 shows another result: the prior mathematics ability factor also has no practical effect on the increase in students' creative thinking abilities. The scores of student groups at the high, moderate and low levels of prior mathematics ability do not differ significantly. This is because prior knowledge does not by itself guarantee that students can understand the teaching material and solve everyday problems properly and appropriately. Prior knowledge is indeed needed in learning mathematical material, because mathematics is a hierarchical science; however, it is not a major supporting factor in increasing students' creative thinking abilities. From the explanation above, it can be concluded that, practically, gender and prior mathematics ability play no role in increasing students' creative thinking abilities. The researchers then carried out further statistical tests on the research data, preparing three hypotheses:

Hypothesis 1: There is an increase in the creative thinking ability of students taught using problem-based learning with local culture (PBL-Local Culture).
Hypothesis 2: There is no interaction between the increase in creative thinking ability of students taught using problem-based learning with local culture (PBL-Local Culture) and students' gender.

Hypothesis 3: There is no interaction between the increase in creative thinking ability of students taught using problem-based learning with local culture (PBL-Local Culture) and students' prior mathematics ability.

The Two-Way ANOVA results on the N-Gain of students' creative thinking ability for the experimental and control groups are presented in Table 5. For the learning group factor, the F value is 8.100 with a significance value of 0.006. Because the significance value is smaller than the 0.05 level, H0 is rejected and H1 is accepted. Thus, it can be concluded that the increase in creative thinking ability of students who received problem-based learning with local culture (PBL-Local Culture) is higher than that of students who received conventional learning. Table 5 also shows that, for the interaction of students' gender and learning group, the F value is 0.455 with a significance value of 0.503. Because the significance value is greater than the 0.05 level, H0 cannot be rejected. Thus, it can be concluded that there is no significant interaction between gender and learning in increasing students' creative thinking ability: the average gain in creative thinking ability of male and female students taught with local culture problem-based learning (PBL-Local Culture) did not differ significantly from that of students taught with classical teaching. The interaction chart can be seen in Figure 1 below:

Figure 1.
Interaction between students' creative thinking ability and students' gender

Another factor tested for involvement in increasing creative thinking ability is students' prior mathematics ability. The Two-Way ANOVA results for this interaction can be seen in Table 6: for the learning group and students' prior knowledge, the F value is 1.279 with a significance value of 0.287. Because the significance value is greater than the 0.05 level, H0 cannot be rejected. Thus, it can be concluded that there is no significant interaction between prior mathematics ability and learning in increasing students' creative thinking ability: the average gain in creative thinking ability of students with high, medium and low prior knowledge taught with local culture problem-based learning (PBL-Local Culture) did not differ significantly from that of students taught with classical teaching. The interaction chart can be seen in Figure 2 below:

Figure 2. Interaction between students' creative thinking ability and students' prior mathematics ability

Based on the explanation above, it can be concluded that students taught with the local culture problem-based learning model (PBL-Local Culture) show a greater improvement in creative thinking ability (the average scores obtained in this class are higher than those obtained in the conventional class), and that there is no interaction between the learning model and either students' gender or students' prior ability in improving students' creative thinking abilities.
Further analysis of students' prior mathematics ability was conducted using the post-hoc LSD test, the results of which can be seen in Table 7 below. According to Table 7, there is no difference in the N-Gain of students' creative thinking ability at any level of prior knowledge (high, medium, and low). It can be concluded that students' prior ability did not give a further effect in improving the creative thinking ability of senior high school students. This is in accordance with the statement of Mellin-Olsen (Ramadhani & Narpila, 2018): "It is increasingly acknowledged that the cognitive level of student response in mathematics is determined not by the 'ability' of the student, but the skill with which the teacher can engage the student in mathematical 'activity'." From this opinion, it can be concluded that the cognitive level of students in mathematics is determined not by the ability of the students, but by the skill of the teacher in engaging students in mathematics learning activities. This is also due to the advantages of problem-based learning, which is supported by the theories of constructivism and constructionism. Constructivist theory (Perkins, Piaget, and Vygotsky) explains that individuals can build knowledge through their environment, that is, by way of investigation, conversation, or activities (Grant, 2002). Similar results were obtained by previous researchers, who showed that the final test results of students who used problem-based learning were higher than those of students using conventional teaching (Ajai, Imoko, & O'kwo, 2013). Their results state that the mathematical problem-solving ability of students who use problem-based learning is higher than that of students who use direct learning. Problem-based learning (PBL) also has the effect of increasing students' cognitive abilities. 
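The N-Gain compared across groups throughout this analysis is conventionally computed as Hake's normalized gain; the study does not print its formula, so the sketch below assumes the standard definition g = (post − pre) / (max − pre), with illustrative scores.

```python
def n_gain(pre, post, max_score):
    """Hake's normalized gain: the fraction of the possible improvement
    (from pretest score to the maximum score) actually achieved."""
    return (post - pre) / (max_score - pre)

# Illustrative scores (not from the study): pretest 40, posttest 70, out of 100.
g = n_gain(40, 70, 100)  # half of the possible improvement was achieved
```

Group means of such per-student gains are the quantities tested with the two-way ANOVA above.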
Furthermore, previous studies state that the application of problem-based learning (PBL) is very useful in improving student performance in the classroom. Through the application of PBL, students gain a new experience, namely solving real issues close to their everyday lives (Bahri, Putriana, & Idris, 2018); students can analyze a problem and apply deductive and inductive processes to understand it and find a solution (Tarmizi & Bayat, 2010); PBL is also a meaningful learning model, practical and relevant to students' everyday problems (Albanese & Mitchell, 1993); and through the application of PBL, students experience learning as enjoyable, stimulating, and easily applied in solving real problems (De Vries, Schmidt, & de Graaff, 1989; Wijnen, Loyens, Smeets, Kroeze, & van der Molen, 2017). The application of a student-centred learning model (such as PBL) integrated with the local culture indicates that students' mathematical ability in the process of solving real problems can develop. Thus, environmental factors such as the local cultural context can have a positive impact on the development of students' mathematical abilities. As explained previously, student-centred learning, including learning and teaching based on local culture, supports a meaningful learning process and improves students' ability to think creatively. This is the factor that distinguishes this research from other studies. Research that uses only non-routine problems in problem-based learning still leaves students with difficulties in applying them in everyday life, because such non-routine problems are not close to the students' living environment and are not problems that students often encounter. 
The use of non-routine problems has so far focused only on non-routine problems that are abstract in nature (Hendriana & Fadhillah, 2019; Ratnaningsih, 2017; Saptenno et al., 2019; Sihaloho et al., 2017). This is the main reason why this research focuses on the use of non-routine problems that are close to the students' living environment through the application of local culture. Applying the concept of local culture provides students with a way to find their own version of a non-routine problem-solving model, and this has an effect on increasing students' creative thinking ability (Rahmawati et al., 2019). Improving students' creative thinking ability in the school learning process requires a high commitment between students and teachers. Another very important element in increasing students' creative thinking ability is a learning model that implements active learning, namely the problem-based learning model. The problem-based learning model invites students to contribute to the learning process, as in group investigation, which aims to create a meaningful learning atmosphere. Collaboration between students, teachers, and an appropriate learning model can create an atmosphere conducive to learning that improves students' mathematical abilities, one of which is creative thinking ability. In addition to students' cognitive factors, affective factors can also develop well. From this study, we can conclude that problem-based learning based on local culture can be used in mathematics learning: students can solve their daily non-routine problems in their own way and can use their local culture to solve those problems. The use of local culture concepts in mathematics problems helps students to know their culture and to apply it to problems in their lives. This study also found that the gender gap and prior knowledge cannot be used as main factors in improving students' academic skills. 
Students' creative thinking ability can be improved through the contribution of the learning model and a joyful learning environment. However, this study still has limitations, especially in sample size. We suggest that future researchers develop this research with a larger sample.
v3-fos-license
2018-04-03T04:31:42.045Z
2017-01-27T00:00:00.000
18788842
{ "extfieldsofstudy": [ "Environmental Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0171100&type=printable", "pdf_hash": "6d8984aa80040d01cacc6bbeb00ca70ab1670e19", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45201", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "6d8984aa80040d01cacc6bbeb00ca70ab1670e19", "year": 2017 }
pes2o/s2orc
Detailed Distribution Map of Absorbed Dose Rate in Air in Tokatsu Area of Chiba Prefecture, Japan, Constructed by Car-Borne Survey 4 Years after the Fukushima Daiichi Nuclear Power Plant Accident A car-borne survey was carried out in the northwestern, or Tokatsu, area of Chiba Prefecture, Japan, to make a detailed distribution map of absorbed dose rate in air four years after the Fukushima Daiichi Nuclear Power Plant accident. This area was chosen because it was the most heavily radionuclide contaminated part of Chiba Prefecture and it neighbors metropolitan Tokyo. Measurements were performed using a 3-in × 3-in NaI(Tl) scintillation spectrometer in June 2015. The survey route covered the whole Tokatsu area which includes six cities. A heterogeneous distribution of absorbed dose rate in air was observed on the dose distribution map. Especially, higher absorbed dose rates in air exceeding 80 nGy h-1 were observed along national roads constructed using high porosity asphalt, whereas lower absorbed dose rates in air were observed along local roads constructed using low porosity asphalt. The difference between these asphalt types resulted in a heterogeneous dose distribution in the Tokatsu area. The mean of the contribution ratio of artificial radionuclides to absorbed dose rate in air measured 4 years after the accident was 29% (9–50%) in the Tokatsu area. The maximum absorbed dose rate in air, 201 nGy h-1 was observed at Kashiwa City. Radiocesium was deposited in the upper 1 cm surface layer of the high porosity asphalt which was collected in Kashiwa City and the environmental half-life of the absorbed dose rate in air was estimated to be 1.7 years. Introduction The environmental radiation levels in eastern Japan were dramatically changed after the Fukushima Daiichi Nuclear Power Plant (F1-NPP) accident in March 2011. 
According to the UNSCEAR 2013 report [1], the total released amounts of artificial radionuclides were estimated to be 6-20 PBq of 137 Cs and 100-500 PBq of 131 I, which are about 20% and 10%, respectively, of the estimated amounts emitted in the 1986 Chernobyl accident. Since the F1-NPP accident, distributions of absorbed dose rates, affected by artificial radionuclides, have been observed by public officials and researchers [2][3][4][5][6][7]. In the most extensive surveys, the Japanese government has carried out air- and car-borne surveys centered on Fukushima Prefecture at regular intervals, and the distribution maps of dose equivalent rate have been made available on the website of the Nuclear Regulation Authority, Japan [8]. The Tokatsu area is located in the northwestern part of Chiba Prefecture (Fig 1) and it includes the six cities of Noda, Nagareyama, Kashiwa, Abiko, Matsudo and Kamagaya. Within Chiba Prefecture, this area received the most radionuclide contamination from the F1-NPP accident [8]. Initially, the Japanese government did not provide support for environmental surveys in these cities because they were located outside Fukushima Prefecture; support was begun 6 months after the accident. Therefore, in response to requests from their residents, the local city governments established a new organization named the "Conference on Radiation Countermeasures in the Tokatsu area (CRCT)" 3 months after the accident, and they officially surveyed the dose equivalent rates at public facilities such as schools and parks [9]. Information about the rates has been available on the websites of each local government office. 
According to the results obtained from air-and car-borne surveys by the Japanese government measured in September 2011 [8], the Tokatsu area, excluding the north area of Noda City (#1 in Fig 1B) and the south area of Kamagaya City (#6 in Fig 1B), had dose equivalent rates (absorbed dose rate in air) roughly ranging from 0.2-0.5 μSv h -1 (267-668 nGy h -1 , dose conversion factor: 0.748 Sv Gy -1 [10]). In the latest result measured in November 2015, Kashiwa and Abiko Cities had dose equivalent rates in the range of 0.1-0.2 μSv h -1 (134-267 nGy h -1 ), and the dose equivalent rates in the other four cities were below 0.1 μSv h -1 (134 nGy h -1 ) which is the minimum level [8]. A detailed dose rate distribution map to estimate the impact from the F1-NPP accident on Tokatsu area has not been obtained. Additionally, while the fixed-point observations for absorbed dose rate in air have been carried out by local governments, the general public is often ill-informed about the presence of natural radiation sources such as terrestrial gamma-rays and cosmic-rays and their contribution to the absorbed dose rates. According to the report from CRCT [11], as of December 2011, the average dose equivalent rate (absorbed dose rate in air) for the Tokatsu area measured with a CsI(Tl) portable scintillation survey meter was 0.18 ± 0.07 μSv h -1 (224 ± 99 nGy h -1 ) at 1 m above the ground surface; however, the measured values are mixed dose rates from the natural and artificial radionuclides. Researchers at the National Institute of Radiological Sciences (NIRS) carried out a nationwide survey of absorbed dose rate in air from natural radiation in the 1960s-1970s [12]. In this survey, done well before the F1-NPP accident, measurements were made on school grounds in Noda (n = 1), Kashiwa (n = 2) and Matsudo (n = 3) Cities for the Tokatsu area, but measurements were not made in the other three cities. Sugino et al. 
[13] also measured the terrestrial gamma ray dose rate by fixed-point observation for the Kanto district which included Chiba Prefecture, but the whole Tokatsu area was not covered by the measurement points. Therefore, no accurate estimation has been made for the impact of the F1-NPP accident on the Tokatsu area. In this study, a car-borne survey for the whole Tokatsu area was carried out to make the detailed dose rate distribution map. Additionally, the gamma-ray pulse height distribution by the fixed-point observation was determined using a NaI(Tl) scintillation spectrometer to estimate the contribution ratio of artificial radionuclides for the dose rate. Survey route The absorbed dose rates in air (nGy h -1 ) from both natural radionuclides ( 40 K, 238 U series and 232 Th series) and artificial radionuclides ( 134 Cs and 137 Cs) were measured on June 5 and 10-14, 2015, in the Tokatsu area of Chiba Prefecture, Japan (Fig 1). The survey route is shown in Fig 2. Main roads including national routes (NRs) 6 and 16 were selected to the extent possible, primarily centered on residential areas. No expressways were included in this survey. The survey route was 669 km long. The weather condition was sunny or cloudy throughout the survey. This route map was drawn using the Generic Mapping Tools (GMT) created by Wessel and Smith [14]. Car-borne survey A car-borne survey technique is a convenient method for the evaluation of radiation dose in a wide area in a short period [15]. A 3-in × 3-in NaI(Tl) scintillation spectrometer (EMF211, EMF Japan Co., Osaka, Japan) with a global positioning system was used for the present carborne survey. This spectrometer was positioned inside the car. Measurements of the counts inside the car were carried out every 30 s along the route. Latitude and longitude at each measurement point were measured at the same time as the gamma-ray count rates (50 keV-3.2 MeV) were recorded. Car speed was kept around 40 km h -1 . 
The photon peaks of 40 K (E γ = 1.464 MeV) and 208 Tl (E γ = 2.615 MeV) were used for the gamma-ray energy calibration (relating channel number to gamma-ray energy) before the measurements. Measured count rates inside the car were corrected by multiplying them by a shielding factor to estimate the unshielded external dose rates. The shielding factor of the car body was estimated by making measurements inside and outside the car at 43 locations (Fig 2). Those measurements were recorded over consecutive 30-s intervals during a total recording period of 2 min. The shielding factor was obtained from the slope of the regression line relating count rates inside and outside the car. The gamma-ray pulse height distributions were also measured outside the car for 10 min at 43 locations (Fig 2). These observations were carried out on private land after obtaining specific permissions from the land owners, and it was also confirmed that the field studies did not involve endangered or protected species. The NaI(Tl) scintillation spectrometer was positioned 1 m above the ground surface. Measured gamma-ray pulse height distributions were then unfolded using a 22 × 22 response matrix for the estimation of absorbed dose rate in air. The detailed method has been reported by Minato [16]. These calculated dose rates were used to estimate the dose conversion factor (nGy h -1 /cps) because it is difficult to obtain the photon peak for each gamma-ray energy in a 30-s measurement. In this study, the dose conversion factor was obtained from the slope of the regression line relating corrected inside count rates and calculated absorbed dose rates in air, and inside count rates were multiplied by the dose conversion factor to convert them to external absorbed dose rates in air. 
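Both calibration constants above (the shielding factor and the dose conversion factor) are obtained as slopes of least-squares regression lines. A minimal sketch with fabricated count pairs (the 43 real calibration points are not listed in the text) might look like this:

```python
import numpy as np

# Hypothetical (inside, outside) count-rate pairs in cps; the real survey
# used measurements at 43 locations.
inside = np.array([100.0, 200.0, 300.0, 400.0])
outside = 1.42 * inside  # fabricated so the true shielding factor is 1.42

# Slope of the least-squares regression line = shielding factor.
# np.polyfit returns the coefficients highest degree first: (slope, intercept).
shielding_factor, intercept = np.polyfit(inside, outside, 1)
print(round(shielding_factor, 2))
```

The dose conversion factor is fitted the same way, with corrected inside count rates on one axis and response-matrix dose rates on the other.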
Based on all of the calculated external absorbed dose rates in air, the detailed dose distribution map in the Tokatsu area was plotted using GMT [14] and the plotted data on the map were interpolated using a minimum curvature algorithm. The absorbed dose rates in air calculated using the 22 × 22 response matrix method were separated into natural radionuclides ( 40 K, 238 U series and 232 Th series) and artificial radionuclides ( 134 Cs and 137 Cs). In this study, the energy bins were set to 1.39-1.54 MeV for 40 K, 1.69-1.84 MeV and 2.10-2.31 MeV for 214 Bi ( 238 U series), 2.51-2.72 MeV for 208 Tl ( 232 Th series), 0.55-0.65 MeV and 0.75-0.85 MeV for 134 Cs and 0.65-0.75 MeV for 137 Cs to unfold the gamma-ray pulse height distribution, and the contribution ratio of artificial radionuclides to absorbed dose rate in air was observed in the spectra. These energy intervals for the bins were given by Minato [16]. Additionally, external effective dose (mSv) was estimated based on the measured absorbed dose rate in air. Shielding and dose conversion factors The correlation between count rates inside and outside the car measured at 43 locations is shown in Fig 3A, and the shielding factor and its standard uncertainty [17] were found to be 1.42 and 0.08, respectively. Although the shielding factor is influenced by the type of car, number of passengers and dosimeter position inside the car, this factor has been reported in previous reports as ranging from 1.3-1.9 [2, 5-7, 15, 18-20]. The presently obtained factor was in this range. Fig 3B shows the correlation between absorbed dose rate in air (nGy h -1 ) calculated using the 22 × 22 response matrix method and count rate outside the car (cps) (i.e., corrected count rate inside the car). The dose conversion factor and its uncertainty were found to be 0.14 nGy h -1 /cps and 0.01, respectively. The coefficients of determination (R 2 ) for the shielding and dose conversion factors were 0.867 and 0.950, respectively (Fig 3A and 3B). 
A lower coefficient of determination for the shielding factor was observed compared to previous measurements made in Aomori, Japan (R 2 = 0.973, n = 73) [7], Kerala, India (R 2 = 0.964, n = 34) [15] and Brunei (R 2 = 0.97, n = 16) [21], which were places not contaminated by artificial radionuclides. It seemed that the lower coefficient of determination might be affected by heterogeneously deposited artificial radionuclides. Based on these results, the absorbed dose rates in air (D out ) outside the car 1 m above the ground surface at each measurement point were calculated using Eq (1): D out = 0.14 × 1.42 × C in (1), where C in is the count rate inside the car (cps) obtained by the measurements of the car-borne survey. Contribution ratio of artificial radionuclides The statistically analyzed absorbed dose rates in air measured by the car-borne survey are shown in Fig 4A. The outliers were defined as: < lower quartile − 1.5 × distance from upper quartile to lower quartile (IQD) or > upper quartile + 1.5 × IQD (KaleidaGraph, Synergy Software, USA). The mean absorbed dose rate in air in the whole Tokatsu area was 68 ± 20 nGy h -1 (25-201 nGy h -1 ). The mean and range of absorbed dose rate in air for each of the six cities are shown in Table 1. The most contaminated city was Kashiwa, whereas the least contaminated was Noda. Absorbed dose rates in air observed from all radionuclides, natural radionuclides ( 40 K, 238 U series and 232 Th series) and artificial radionuclides ( 134 Cs and 137 Cs) at 43 locations are shown in Table 2. The mean dose rates from natural and artificial radionuclides were 46 ± 10 nGy h -1 (24-82 nGy h -1 ) and 24 ± 19 nGy h -1 (0-78 nGy h -1 ), respectively. According to Abe et al. [12], the absorbed dose rates in air measured in the 1960s to 1970s in Noda, Kashiwa and Matsudo Cities were 55, 51 and 59 nGy h -1 , respectively, and these values were higher than those of the present study. 
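The dose conversion of Eq (1) and the quartile-based outlier rule described above can be sketched as follows; the constants 1.42 and 0.14 nGy h-1/cps are taken from the text, while the sample count rate is illustrative.

```python
SHIELDING_FACTOR = 1.42   # car-body shielding correction (dimensionless)
DOSE_CONVERSION = 0.14    # nGy/h per cps, from the response-matrix calibration

def dose_rate_outside(c_in_cps):
    """Eq (1): absorbed dose rate in air 1 m above ground (nGy/h)
    from the count rate measured inside the car (cps)."""
    return DOSE_CONVERSION * SHIELDING_FACTOR * c_in_cps

def iqr_outlier_bounds(q1, q3):
    """Outlier fences used above: values beyond 1.5 x IQD from the quartiles,
    where IQD is the distance from the upper to the lower quartile."""
    iqd = q3 - q1
    return q1 - 1.5 * iqd, q3 + 1.5 * iqd
```

With the rounded factors given in the text, an in-car rate of about 342 cps corresponds to roughly the 68 nGy/h mean reported for the whole Tokatsu area.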
Because the pavement ratio increased from 43% in 1973 to 91% in 2011, the shielding effect of asphalt against terrestrial gamma-rays has increased accordingly. Additionally, the effects of the nuclear tests performed in the 1950s to 1980s and the Chernobyl NPP accident in 1986 cannot be ignored. According to reporting from the Meteorological Research Institute [22], the integrated deposited activity of 137 Cs with decay from 1954 to just before the F1-NPP accident was estimated to be 2 kBq m -2 , whereas that for 2011 (i.e., after the F1-NPP accident) was 25 kBq m -2 /y, a 12.5-fold difference. Although the above values are simple integrated deposited activities, it was estimated that the dose rate measured in this study included a contribution of a few nanogray from these earlier events. The mean contribution ratio of artificial radionuclides to the absorbed dose rate in air measured 4 years after the F1-NPP accident in the six cities was 29% (9-50%). The authors previously obtained the contribution ratio of artificial radionuclides measured in December 2014 in metropolitan Tokyo's 23 wards, located to the southwest of the Tokatsu area, as 13% (0-36%; n = 26) [19]. The contribution ratio in the Tokatsu area was thus 16 percentage points higher than that of Tokyo's 23 wards. (Table 2: Absorbed dose rate in air from natural and artificial radionuclides in the Tokatsu area of Chiba Prefecture, by municipality; municipality numbers refer to Fig 1B.) In car-borne surveys during August 2011 to May 2012 [3], higher dose areas were observed for the southern area of Ibaraki Prefecture (Fig 1) and they subsequently extended toward the southwest (i.e., the Tokatsu area). Thus, this shift might have influenced the absorbed dose rate in the Tokatsu area. A heterogeneous distribution of absorbed dose rates in air was seen. 
In particular, higher absorbed dose rates in air of over 80 nGy h -1 were observed along NRs 6 and 16, which pass through Matsudo (#5 in Fig 1B), Kashiwa (#3 in Fig 1B) and Abiko (#4 in Fig 1B) Cities. The highest absorbed dose rate in air (i.e., 201 nGy h -1 ) was observed at Akebono-cho, Kashiwa City (Fig 1B), where these two routes cross each other. These higher absorbed dose rates in air seemed to be related to rates in parts of metropolitan Tokyo [19]. In particular, for Katsushika Ward (Fig 1B), where the highest absorbed dose rate in air in metropolitan Tokyo was observed, absorbed dose rates in air exceeding 70 nGy h -1 were measured in 2014 all along NR 6 and other main roads [5], and their tendency was similar to that of the present study. According to the results of the air-borne survey performed in September 2011 [8], a homogeneous dose rate distribution was observed for the Tokatsu area excluding Noda City. A similar result was also obtained from a car-borne survey performed by Andoh et al. [3]. The dose distributions measured in September 2011 [8] and in the present study thus showed different tendencies, homogeneous versus heterogeneous. Contamination of artificial radionuclides on asphalt surfaces For a more detailed evaluation, asphalt samples were collected from Akebono-cho (Fig 1B), which had the highest absorbed dose rate in air, and they were imaged using autoradiography. In this study, this sample (i.e., asphalt spoil) was received from Kumagai Gumi Co. Ltd. (Tokyo, Japan) with their permission in association with roadwork the company was carrying out. Fig 5A shows the color, autoradiography, and merged images of one asphalt sample. (Fig 5 caption: The autoradiography image (A) was obtained by exposing a phosphor-imaging screen to beta and gamma rays emitted from an asphalt sample for 1 week. The energy spectra of the surface (B) and deep (C) layers of this asphalt sample were measured by a high-purity germanium semiconductor detector for 30,000 s.) The 
autoradiography image was obtained by exposing a phosphor-imaging screen to beta- and gamma-rays emitted from an asphalt sample for 1 week and scanning the phosphor-imaging screen using a FLA-7000 scanner (Fujifilm Co., Ltd., Tokyo, Japan). Higher image intensities were observed in the upper 1 cm asphalt surface layer. The energy spectra of the surface and deep layers are shown in Fig 5B and 5C, respectively. Dominant peaks were detected for radiocesium, such as 134 Cs (E γ = 605 and 796 keV) and 137 Cs (E γ = 662 keV), and for natural radionuclides, such as 40 K (E γ = 1461 keV), 228 Ac (E γ = 911 keV), 214 Pb (E γ = 352 keV) and 214 Bi (E γ = 609 keV), from the asphalt surface layer, but no dominant peaks were detected for radiocesium ( 134 Cs and 137 Cs) from the deep layer. The authors collected samples of porous asphalt with a coarse aggregate diameter of more than 2.36 mm. This type of asphalt has a high drainage function, and as a result it has recently been widely used for highways and main roads, including NRs as shown in Fig 4B, to provide improved visibility for drivers in the rain. According to a supplier of such asphalt, it can be quickly clogged by dust depending on the amount of traffic. Thus, the deposited radiocesium remained within 1 cm of the asphalt surface. Additionally, the deposited radiocesium was firmly attached to the dust particles near the asphalt surface [23]. NRs 6 and 16, with road widths of 20 m, are heavily traveled roads (55,000 cars per day for each NR) compared to local roads, and it was expected that the amount of dust on their surfaces was extremely high. Thus, higher absorbed dose rates in air were observed along the NRs. On the other hand, radiocesium deposited on the low porosity asphalt (fine aggregate diameters from 0.075 mm up to, but not including, 2.36 mm) that is used for local roads is easily washed out by rainfall compared to high porosity asphalt. The low porosity asphalt has a water repellency effect. 
Additionally, local roads tend to have a gentle center crown, and drainage ditches are placed along the sides of the roads to carry away the rainfall. Therefore, the low porosity asphalt surface accumulates hardly any dust compared to high porosity asphalt because of the difference in natural weathering processes on the road surface. Thus, the degree of radiocesium contamination on the low porosity asphalt surface was low. In fact, most of the radiocesium was held in only the upper 1 mm layer of low porosity asphalt in the tests on mechanical decontamination measures performed in the 1990s after the Chernobyl accident [23]. Thus, the heterogeneous dose distribution measured 4 years after the F1-NPP accident (Fig 4) arose from the differences in asphalt type and traffic volume (i.e., dust volume on the asphalt surface). External effective dose estimation The external effective doses for the six cities of the Tokatsu area were estimated using the following equation: E = D out × DCF × T × (Q in × R + Q out ) × 10 -6 (2), where E is the external effective dose (mSv y -1 ), D out is the average absorbed dose rate in air (nGy h -1 ), DCF is the dose conversion factor from dose rate to external effective dose for adults (0.748 ± 0.007 Sv Gy -1 ) [10], T is 8,760 h (24 h × 365 d), and Q in and Q out are the indoor (0.9) and outdoor (0.1) occupancy factors [24], respectively. R is the ratio of indoor dose rate to outdoor dose rate (0.4) for 1- and 2-story wooden houses [25]. The estimated external effective doses (mSv y -1 ) for the six cities are shown in Table 1. The average value for the Tokatsu area was 0.20 mSv y -1 . This value is 60% of the Japan average before the F1-NPP accident (0.33 mSv y -1 ) [26] and 42% of the worldwide average (0.48 mSv y -1 ) [27]. In addition, the average value for the Tokatsu area is only 9% of the reported annual medical exposure dose from CT examinations in Japan, which is 2.3 mSv y -1 /person [28]. 
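The effective-dose estimate of Eq (2) can be sketched directly from the parameter values given in the text (DCF = 0.748 Sv/Gy, occupancy factors 0.9 indoor and 0.1 outdoor, indoor/outdoor ratio 0.4):

```python
def external_effective_dose_mSv(d_out_nGy_h, dcf=0.748,
                                q_in=0.9, q_out=0.1, r=0.4):
    """Eq (2): annual external effective dose (mSv/y) from the average
    absorbed dose rate in air (nGy/h). The 1e-6 factor converts nSv to mSv."""
    T = 24 * 365  # hours per year (8,760 h)
    return d_out_nGy_h * dcf * T * (q_in * r + q_out) * 1e-6

# The mean Tokatsu dose rate of 68 nGy/h gives roughly the reported 0.20 mSv/y.
dose = external_effective_dose_mSv(68)
```

Note that the combined occupancy weight (0.9 × 0.4 + 0.1 = 0.46) is what scales the outdoor dose rate down to a whole-year effective dose.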
Environmental half-life on asphalt pavements The dose equivalent rate for all of Kashiwa City has been regularly observed since October 1, 2012 by the Kashiwa City Office using a CsI(Tl) scintillation spectrometer (Mobile G-DAQ, Keisokugiken Co., Tochigi, Japan) [29]. Fig 6 shows the change of absorbed dose rate in air at Akebono-cho (Fig 1B). The dose conversion factor used for converting to absorbed dose rate in air was 0.748 Sv Gy -1 [10]. The absorbed dose rate in air from artificial radionuclides was calculated by subtracting the background absorbed dose rate in air observed in this study (i.e., 43 nGy h -1 in Table 2). The origin of the horizontal axis was set to March 21, 2011, when the F1-NPP radioactive plume was observed around the Tokatsu area [30]. Here, the decay constant and the environmental half-life due to artificial radionuclides were calculated with the following equation to estimate the future change of absorbed dose rate in air: D = D L exp(−λ L t), where D is the absorbed dose rate in air from artificial radionuclides, D L is the initial absorbed dose rate in air due to long half-life radionuclides ( 134 Cs and 137 Cs), λ L is the decay constant, t is the time elapsed after the date that the radioactive plume reached the Tokatsu area and T environ is the environmental half-life (year). In the present study, the environmental half-life was defined as an "apparent half-life" to distinguish it from the physical half-life. Thus, the calculated half-life includes the effects of both physical decay and mechanical wear of the asphalt surface. In some previous reports, this apparent half-life was described as the ecological half-life or environmental half-life [31,32]. As a result, D L , λ L and T environ were 223 nGy h -1 , 0.034 and 1.7 y, respectively. This environmental half-life is shorter than the physical half-life of 137 Cs. 
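The half-life follows from the fitted decay constant via T = ln 2 / λ. The reported λ_L = 0.034 only reproduces T_environ = 1.7 y if λ_L is expressed per month (ln 2 / 0.034 ≈ 20.4 months ≈ 1.7 y); the sketch below makes that unit assumption explicit, since the text does not state it.

```python
import math

# lambda_L = 0.034 as reported; ASSUMED here to be per month, since only
# that unit reproduces the reported environmental half-life of 1.7 y.
decay_const_per_month = 0.034

half_life_months = math.log(2) / decay_const_per_month  # ~20.4 months
half_life_years = half_life_months / 12                 # ~1.7 years
```

This apparent half-life folds physical decay and mechanical wear of the asphalt surface into a single exponential, which is why it is much shorter than the 30.1-year physical half-life of 137Cs.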
In Katsushika Ward (Fig 1B), near the Tokatsu area, the environmental half-life 3 months after the accident was estimated to be 1.9 y based on changes of the absorbed dose rates in air at 1 m above the ground surface [20]. In addition, after the Chernobyl accident, the environmental half-life of 137 Cs was calculated to be 3-4 years for lichen species [31,33]. The environmental half-life calculated at Kashiwa City was shorter than these reported values because surface contamination on asphalt is easily washed away by rainfall compared to bare ground or lichen-covered areas. (Fig 6 caption: drawn using data published by the Kashiwa City Office [29]; these data were measured by the car-borne survey technique using a CsI(Tl) scintillation spectrometer.) Additionally, the environmental half-life on asphalt might be highly dependent on the traffic volume (which relates to the amount of dust) and the type of asphalt. Combined relative standard uncertainty for the car-borne survey in the Tokatsu area The standard uncertainties of a one-time measurement (30 s) can be calculated from the measured value. The obtained range of counts inside the car was 3600-28770 (counts per 30 s) in the present study. The standard uncertainty depending on measured counts was calculated to be 120-339 counts. The range of relative standard uncertainty for the 30-s measurements was also calculated to be 1.2-3.3%. Here, the relative standard uncertainties for the shielding factor, dose conversion factor, traceability of the dose rate (calibrated by Pony Industry Co., Ltd., Osaka, Japan), and the dose calculation procedure by the response matrix method (software developed by EMF Japan Co., Osaka, Japan) were given as 7.5%, 0.5%, 4.1% (k = 2), and 5.0%, respectively. 
The maximum combined relative standard uncertainty of the estimated absorbed dose rate in air in this study was calculated to be 11.6%. Conclusion The car-borne survey with a NaI(Tl) scintillation spectrometer was carried out in the Tokatsu area, located in northwestern Chiba Prefecture, Japan, to make a detailed distribution map of absorbed dose rate in air 4 years after the F1-NPP accident. While the absorbed dose rate in air just after the accident had shown a homogeneous distribution in the Tokatsu area, it now shows a heterogeneous distribution. Higher absorbed dose rates in air of over 80 nGy h -1 were observed along NRs 6 and 16. The type of asphalt and the traffic volume strongly affected the dose distribution. Radiocesium ( 134 Cs and 137 Cs) was deposited within a 1 cm layer from the asphalt surface. The environmental half-life of radiocesium on asphalt was estimated to be 1.7 y. The mean absorbed dose rate in air from radiocesium and the contribution ratio of radiocesium to the absorbed dose rate in air for the Tokatsu area were 24 ± 19 nGy h -1 and 29% (9-50%), respectively. It was estimated that the measured dose rate included a contribution of a few nanogray from the nuclear tests performed in the 1950s to 1980s and the Chernobyl NPP accident in 1986. The external effective dose calculated from the dose rates from natural and artificial radionuclides for the Tokatsu area was 0.20 mSv y -1 , which is 42% of the worldwide average (0.48 mSv y -1 ).
Lightweight, flexible, and multifunctional anisotropic nanocellulose-based aerogels for CO2 adsorption CO2 adsorption is a promising strategy to reduce costs and energy use for CO2 separation. In this study, we developed CO2 adsorbents based on lightweight and flexible cellulose nanofiber aerogels with monolithic structures prepared via freeze-casting, and cellulose acetate or acetylated cellulose nanocrystals (a-CNCs) were introduced into the aerogels as functional materials using an impregnation method to provide CO2 affinity. The microstructure of the adsorbent was examined using scanning electron microscopy, and compression tests were performed to analyze the mechanical properties of the adsorbents. The CO2 adsorption behavior was studied by recording the adsorption isotherms and performing column breakthrough experiments. The samples showed excellent mechanical performance and had a CO2 adsorption capacity of up to 1.14 mmol/g at 101 kPa and 273 K. Compared to the adsorbent that contains cellulose acetate, the one impregnated with a-CNCs had better CO2 adsorption capacity and axial mechanical properties owing to the building of a nanoscale scaffold on the surface of the adsorbent. Although the CO2 adsorption capacity could be improved further, this paper reports a potential CO2 adsorbent that uses all cellulose-based materials, which is beneficial for the environment from both resource and function perspectives. Moreover, the interesting impregnation process provides a new method to attach functional materials to aerogels, which has potential for use in many other applications. Introduction Adsorption is an energy-efficient and low-cost method for capturing CO2 (Pino et al. 2016; Yu et al. 2012). A suitable CO2 adsorbent should have a high specific surface area, a high porosity with proper pore sizes, a favorable surface chemistry, and a high chemical/mechanical stability. Many CO2 adsorbents based on different materials, such as zeolites (Banerjee et al.
2009; Cavenati et al. 2004; Kongnoo et al. 2017; Ojuva et al. 2013), silica gels (Arellano et al. 2016), activated carbon (Fujiki and Yogo 2016; Roberts et al. 2017; Singh et al. 2017), and metal-organic frameworks (An and Rosi 2010; Millward and Yaghi 2005; Salehi and Anbia 2017), have been investigated in recent decades. However, adsorbents based on these materials have some drawbacks, such as their high cost, complex preparation methods, toxicity, and the use of corrosive materials during their preparation. Therefore, researchers continue to look for new materials from which to produce CO2 adsorbents that could be used on an industrial scale. Biomass-derived materials have attracted attention owing to their abundant resources, renewability, and biodegradability (Jonoobi et al. 2014; Khalil et al. 2012; Oksman et al. 2003). Nanocellulose materials, including cellulose nanofibers (CNFs) and cellulose nanocrystals (CNCs), which can be extracted from various types of biomass, have attracted considerable interest owing to their specific physical and chemical properties (Geng et al. 2018; Herrera et al. 2017; Nissilä et al. 2019). Recently, several studies on the preparation of highly porous nanocellulose-based aerogels using ice-templating have been reported, and some have shown interesting adsorption/absorption capabilities. In an adsorption process, gas or liquid molecules are captured on the surface of the adsorbent, while in an absorption process, those molecules enter the bulk of the absorbent and form strong interactions there (Yu et al. 2012). Yang et al. (2018) prepared aerogels containing CNFs and sodium alginate using unidirectional and bidirectional ice-templating. They found that, after silane modification, the obtained aerogel was superoleophilic, with an oil absorption capacity of up to 34 times its own weight. Using similar techniques, Zhang et al. (2019) also reported a CNF-based aerogel for selective oil/organic solvent absorption.
The aerogels showed outstanding compression flexibility and high hydrophobicity, which allowed them to absorb oils up to 99 times their own weight. Gorgieva et al. (2019) reported a unidirectionally ice-templated cationic dye remover prepared from carboxymethyl cellulose and CNFs, with a maximum dye adsorption capacity of approximately 1.8 g/g adsorbent. Valencia et al. (2019) prepared hybrid aerogels consisting of CNFs, gelatin, and zeolite for CO2 adsorption via unidirectional freeze-casting. The aerogel containing 85 wt% zeolite showed a CO2 adsorption capacity of approximately 1.2 mmol/g at 308 K and 101 kPa. Although these previous studies illustrate that nanocellulose materials can form highly porous structures with large surface areas, which is beneficial for adsorption, pure nanocellulose does not have good affinity for CO2 molecules (Gebald et al. 2011). Thus, the sorbent-sorbate affinity needs to be improved to produce a nanocellulose-based CO2 adsorbent that can adsorb CO2 from a mixed gas. It has been reported that some polar groups, such as nitriles (-CN), carbonyls (-C=O-), acetates (-COO-), and amides (-NHCO-), contribute to a high CO2 solubility and CO2/N2 selectivity (Park and Lee 2008). There have been several studies on the functionalization of cellulose with amine groups to improve the cellulose-CO2 affinity. Gebald et al. (2011) synthesized amine-based CO2 adsorbents composed of CNFs and N-(2-aminoethyl)-3-aminopropylmethyldimethoxysilane, where the highest adsorption capacity achieved was 1.39 mmol/g after 12 h. Subsequently, they found that amine-functionalized nanocellulose provided a stable CO2 adsorption capacity (approximately 0.9 mmol/g) in ambient air during adsorption/desorption cycling tests (Gebald et al. 2013). However, in addition to an enhanced CO2 adsorption capacity, amine-based materials require a relatively large amount of energy to regenerate adsorbed CO2 (Gebald et al.
2011), which is an obstacle to using this kind of adsorbent on a large scale. Under these circumstances, acetyl groups could be an alternative. Kilic et al. (2007) experimentally confirmed that acetate-functionalized polymers are more CO2-soluble than polymers with only ether groups because the addition of an ester oxygen provides a much stronger attractive site for CO2 interactions than the ether group does. Moreover, the oxygen of carbonyl groups is more electron-rich and more favorable for CO2 binding. Karimi et al. (2016) functionalized silica membranes with acetyl groups and reported that CO2 adsorption was increased compared to that of the unmodified membranes. Inspired by the previous studies on nanocellulose-based aerogels and acetate-based materials for CO2 adsorption, in this study we demonstrate a novel method to prepare lightweight and flexible nanocellulose-based CO2 adsorbents with functional acetate groups and anisotropic structures. The adsorbents consist of a freeze-cast CNF aerogel and acetylated CNCs, in which the acetylated CNCs were attached to the anisotropic porous CNF aerogel by an impregnation process driven by capillary forces. The acetylated CNCs carry a certain amount of acetyl groups on their surface while the integrity of the CNCs is maintained; they are therefore expected to build nanoscale scaffolds together with the CNF aerogel, which is beneficial for the CO2 adsorption capacity. Cellulose acetate, impregnated in different amounts, was attached to the CNF aerogel in the same way and used as the reference. The structure and morphology of the materials and the prepared adsorbents were investigated, and the mechanical properties and CO2 adsorption capacity of the adsorbents were examined and discussed.

Preparation of the CNF aerogel

First, 62 g of 1 wt% CNF aqueous suspension was mixed with 0.03 g of BTCA (as a crosslinker) by stirring for 1 h at room temperature.
The mixture was then frozen unidirectionally in a freeze-casting setup at a freezing rate of 5 K/min (Deville et al. 2006) and freeze-dried for 72 h at a pressure of 0.064 mbar using a freeze dryer (Alpha 1-2 LD plus, Martin Christ GmbH, Germany). All the freeze-dried samples were placed in a vacuum oven at 393 K for 3 h to crosslink the aerogel via esterification between the CNFs and BTCA. Finally, the crosslinked CNF aerogel was obtained for further processing.

Acetylation of cellulose nanocrystals

The acetylation method used in this study is based on a heterogeneous process that uses iodine as a catalyst (Abraham et al. 2016). Briefly, solvent exchange of the CNC suspension from water to acetic anhydride (Ac2O) through acetone was carried out prior to the modification. A CNC/Ac2O suspension with a concentration of 0.02 g/mL of CNC in Ac2O was prepared and heated to 373 K for 1 h. Iodine (0.001 g/mL of I2 in Ac2O) was added as a catalyst, and the reaction ran for another 30 min. Subsequently, the suspension was cooled to room temperature, and saturated sodium thiosulphate solution was added dropwise until the color of the reactant changed from brown to transparent, stopping the reaction. The mixture was poured into ethanol/water (weight ratio of 2:1) and stirred for 30 min to precipitate the acetylated CNCs (a-CNCs). The precipitated a-CNCs were then washed further with ethanol (70 vol%) and distilled water. Finally, the a-CNCs were collected and dispersed in acetone to make a homogeneous suspension for impregnation.

Impregnation of the aerogels

First, 40 g of the a-CNC/acetone suspension at a concentration of 0.25 wt% was used for impregnation, and the same amounts of cellulose acetate/acetone solution with 0.25 wt% and 1.25 wt% solid contents, respectively, were prepared as references.
The crosslinked CNF aerogel was placed vertically in the suspension/solution until the suspension/solution was completely absorbed by the aerogel owing to capillary forces (Movie S1, Supporting Information, SI). Because all 40 g of the solution/suspension was impregnated into the CNF aerogel, the amount of a-CNCs/cellulose acetate added is controlled by the concentration of the solution. The impregnated aerogel was then placed in a vacuum oven at 368 K overnight to remove the acetone. The weight of the aerogel was measured before and after the impregnation process to calculate the amount of a-CNCs or cellulose acetate that had been attached to the sample. The sample coding and the compositions of all samples prepared for this study are shown in Table 1.

Atomic force microscopy (AFM)

AFM was used to characterize the morphology of the CNFs used in this study. To prepare the AFM sample, the CNF suspension was diluted to 0.001 wt% solid content, and a droplet was then deposited on freshly cleaved mica and dried at room temperature. The sample was scanned using a Veeco MultiMode scanning probe microscope (Santa Barbara, USA) in tapping mode. The width of the CNFs was measured from the heights in the AFM height images. More than 80 nanofibers were analyzed, and the average fiber width was then calculated.

Fourier transform infrared spectroscopy (FT-IR)

FT-IR was used to study the crosslinking of the CNF aerogel and the acetylation of the CNCs. To prepare the FT-IR samples, the crosslinked CNF aerogel was washed thoroughly with distilled water to remove any unreacted BTCA and then ground and dried in a vacuum oven at 393 K. The filtered a-CNCs were also dried in the vacuum oven overnight prior to the FT-IR test. The uncrosslinked CNF aerogel and unacetylated CNCs were prepared in the same manner for comparison.
Then, 30 mg of the sample was carefully mixed and pressed with 270 mg of KBr and tested using a Vertex 80v FT-IR spectrometer (Bruker, USA). For each experiment, 128 scans were run at a resolution of 4 cm-1.

Titration

The degree of substitution (DS) of the prepared a-CNCs was measured by titration according to ASTM D871-96 (Zhou et al. 2016). Briefly, 0.1 g of dried a-CNCs was dispersed in 40 mL of 70 vol% ethanol at 60 °C for 30 min; then, 40 mL of 0.1 N NaOH was added. The mixture was stirred for 15 min and then left to stand at room temperature for 48 h with occasional shaking. Phenolphthalein was used as a pH indicator, and 0.5 N HCl was used to titrate the mixture until the color changed from pink to faint pink. Unacetylated CNCs were titrated as a blank using the same method, and the DS of the a-CNCs was calculated using the equations of Ramírez et al. (2017), where Acyl% denotes the acyl group content, V_b and V_s represent the volumes of HCl added to the blank and to the a-CNC sample, respectively, N_HCl corresponds to the normality of the HCl solution (0.5 N), and W is the mass of the sample used.

Porosity measurement

The porosity of the prepared aerogels was calculated from the ratio of the aerogel density to the solid density (Sehaqui et al. 2011), where the density of the aerogel, ρ*, was calculated by dividing the weight of the aerogel by its volume. The density of the solid material, ρ, was calculated from the weight percentages x_c, x_A, and x_B of the CNFs, the impregnated material (cellulose acetate or a-CNCs), and BTCA, respectively, and the corresponding solid densities ρ_c, ρ_A, and ρ_B, which were 1.46 g/cm3, 1.28 g/cm3, and 1.65 g/cm3 (Sehaqui et al. 2011).

Scanning electron microscopy (SEM)

The morphology of the aerogels was investigated using a scanning electron microscope (JSM-IT300 InTouchScope, JEOL, Japan).
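The titration and porosity relations referenced above are not reproduced in this excerpt. The sketch below uses standard ASTM D871-style forms for the acyl content and DS, and the usual inverse rule of mixtures for the solid density; these are our assumptions about the omitted equations, with solid densities taken from the text and the mass fractions purely illustrative.

```python
def acyl_percent(v_blank_ml, v_sample_ml, n_hcl, mass_g):
    """Acyl content (%) from back-titration: moles of NaOH consumed by
    the ester times 43 g/mol (acetyl), relative to the sample mass.
    Standard ASTM D871-style form, assumed rather than quoted."""
    return (v_blank_ml - v_sample_ml) * n_hcl * 43.0 * 100.0 / (mass_g * 1000.0)

def degree_of_substitution(acyl_pct):
    """DS per anhydroglucose unit (162 g/mol); each acetyl substitution
    adds 42 g/mol to the unit mass."""
    return 162.0 * acyl_pct / (4300.0 - 42.0 * acyl_pct)

def solid_density(weight_fractions, densities):
    """Pore-wall density via the inverse rule of mixtures:
    1/rho = sum(x_i / rho_i) for mass fractions x_i."""
    return 1.0 / sum(x / rho for x, rho in zip(weight_fractions, densities))

def porosity_percent(aerogel_density, rho_solid):
    """Porosity (%) = (1 - rho*/rho) * 100."""
    return (1.0 - aerogel_density / rho_solid) * 100.0

# Consistency check: an acetyl content of ~30% gives the reported DS of 1.6.
print(round(degree_of_substitution(30.0), 1))  # 1.6
# Solid densities from the text (g/cm^3): cellulose 1.46, cellulose
# acetate 1.28, BTCA 1.65; the mass fractions here are illustrative only.
rho = solid_density([0.85, 0.10, 0.05], [1.46, 1.28, 1.65])
```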
Samples were cut with a sharp blade perpendicular and parallel to the freezing direction, respectively. The cut surfaces were coated with gold using a Leica EM ACE200 coater (Wetzlar, Germany) prior to observation to avoid charging, and secondary electron images were captured.

Brunauer-Emmett-Teller (BET) surface area

The BET surface area was measured using a BET analyzer (Gemini VII 2390a, Micromeritics Instrument Corp., Norcross, USA). The samples were degassed at 368 K for 24 h under vacuum before the BET surface area measurements were performed.

Compression testing

The mechanical properties of the aerogels were examined by compression testing using a Q800 dynamic mechanical analyzer (TA Instruments, USA) with the compression configuration. Samples were cut into 1-cm cubes and tested in both the axial and radial directions. The experiments were carried out after equilibration at 303 K with a 0.01 N preload and a strain rate of 10%/min. The elastic modulus (E) of each sample was calculated from the slope of the initial linear part of the stress-strain curve, and the specific elastic modulus (E_s) was calculated by normalizing E by the aerogel density.

CO2 adsorption measurement

The CO2 adsorption capacity of the impregnated aerogels was studied using a BET analyzer (ASAP 2020 Plus, Micromeritics Instrument Corp., Norcross, USA) as a function of pressure. Approximately 20 mg of the sample was loaded and degassed at 393 K for 24 h, and the degassed sample was precisely weighed and transferred back to the analyzer. The CO2 isotherm was then measured at 273 K in the pressure range of 0-101 kPa. The CO2 forward-step-change breakthrough curve of the aerogels was determined by loading the sample into a steel column (15 mm inner diameter, 300 mm length). The sample was first treated under a 0.5 L/min nitrogen flow at 368 K for 24 h.
After cooling, an N2/CO2 mixture with 10% CO2 (AGA Gas AB, Sweden) was fed at a flow rate of 0.3 L/min at room temperature. The CO2 concentration was monitored by an IR 1507 fast-response CO2 infrared transducer (CA-10 Carbon Dioxide Analyzer, Sable Systems International, USA).

Results and discussion

The morphology of the CNFs used to prepare the CNF aerogel was characterized by AFM, as illustrated in Fig. 1a. Their width distribution is shown in Fig. 1b, and the average width was determined to be 8.5 ± 6.1 nm. The CNCs used for acetylation were fully characterized in our previous study, with a width of 5.0 ± 1.5 nm and a length of 122.6 ± 53.3 nm (Butylina et al. 2016). As reported by Kilic et al. (2007), polymers containing acetate groups have a considerably higher CO2 solubility than those with only ether functionalities because the acetate groups have three binding modes that can interact with CO2 molecules, i.e., two with a carbonyl oxygen and one with an ester oxygen, as shown in Fig. 1c. Thus, acetylated CNCs were prepared in this study as the functional material for CO2 adsorption. Successful acetylation was confirmed using FT-IR, as illustrated in Fig. 1d. Three peaks are commonly used to identify the acetate group: the carbonyl stretching vibration at 1751 cm-1 (C=O), the C-H in-plane bending of -CO-CH3 at 1377 cm-1, and the stretching of C-O in the acetyl group (-COO-) at 1232 cm-1 (Hu et al. 2011; Uschanov et al. 2011). The prominence of these peaks in the a-CNC sample indicates that the heterogeneous acetylation was successful. No other absorption was observed between 1840 and 1760 cm-1, so no unreacted acetic anhydride remained. The absence of the carboxylic group peak at 1700 cm-1 indicated no acetic acid byproducts (Hu et al. 2011). The degree of substitution of the a-CNCs was determined by titration to be 1.6.
After preparation of the CNF aerogel via freeze-casting and freeze-drying, the CNF aerogel was crosslinked with BTCA and then impregnated with the a-CNC/acetone suspension or cellulose acetate/acetone solution to prepare the CO2 adsorbents. The entire procedure is shown schematically in Fig. 2. The crosslinking step was performed to strengthen the CNF aerogel so that it remained intact during the impregnation process. Figure 3a shows the FT-IR spectra of the CNF aerogel before and after crosslinking. A new absorption peak appeared in the crosslinked sample at 1730 cm-1, corresponding to carbonyl stretching (C=O) in the ester group. This suggests that ester groups were formed between the CNFs and BTCA. The impregnation process is a simple yet robust method that we developed in this study to attach functional materials to aerogels. Movie S1 shows a representative impregnation experiment. Driven by capillary forces, the material to be impregnated can be absorbed by the anisotropic CNF aerogel within a few seconds, depending on the length of the aerogel and the viscosity of the impregnating medium. Moreover, we demonstrate that not only polymer solutions (cellulose acetate) but also nanoparticle suspensions (a-CNCs) can be impregnated, which provides a unique method to produce porous materials with special functionalities. Finally, a CO2 adsorbent as long as 10 cm can be obtained after drying, and there is no obvious shrinkage in the aerogels after impregnation with 0.1 g CA, as illustrated in Fig. 3b. The aerogel impregnated with 0.5 g CA shrank somewhat more owing to the increased viscosity of the impregnating solution; however, the cylindrical shape of the aerogel was retained. The density, porosity, and specific surface area of the prepared adsorbents are shown in Table 2. The porosities of all the adsorbents are higher than 98%.
The specific surface area of CNF-X-a-CNC reached 21.04 m2/g, higher than that of CNF-X-0.1CA and that of CNF-X-0.5CA; the latter is the lowest, likely because the relatively large amount of impregnated cellulose acetate (0.5 g) filled part of the pores in the CNF aerogel. The microstructure of the adsorbents was investigated by SEM and is shown in Fig. 4. It is obvious from Fig. 4a(i)-d(i) and Fig. 4a(ii)-d(ii) that all the adsorbents have a distinct monolithic structure with anisotropic pores, which forms during the freeze-casting process owing to the preferred growth of ice crystals when frozen unidirectionally (Lee and Deng 2011).

Fig. 2 Process of preparing the acetate-functionalized nanocellulose-based CO2 adsorbents used in this study. Fig. 3 a FT-IR spectra of the native CNF aerogel and crosslinked CNF aerogel and b illustration of the aerogels after freeze-drying, crosslinking, and the impregnation process.

Typically, in the CO2 adsorption process, gas containing CO2 passes through columns filled with adsorbents. Therefore, the microstructure of the adsorbent is important to ensure a rapid and smooth adsorption process. It has been reported that the monolithic structure has potential advantages in CO2 adsorption applications because of improved mass transfer, a low pressure drop at high flow rates, and a uniform flow distribution (Rezaei and Webley 2010; Svec and Huber 2006). In Fig. 4, no collapse of the pores was observed in the impregnated adsorbents, indicating that the impregnation process did not damage the anisotropic structure. However, the width of the pores in CNF-X-0.5CA (Fig. 4c(ii)) showed a decreasing trend compared to that of the pores in CNF-X (Fig. 4a(ii)) and CNF-X-0.1CA (Fig. 4b(ii)). This phenomenon may be a result of the increased viscosity of the impregnation solution.
With an increase in the amount of cellulose acetate dissolved in a given amount of acetone, the impregnation solution containing 0.5 g of cellulose acetate became more viscous than the one containing 0.1 g. Therefore, CNF-X-0.5CA experienced a larger pressure gradient when dried in the oven, which led to a decrease in the pore width. Comparing the magnified images of CNF-X (Fig. 4a(iii)) with those of the impregnated adsorbents, the cell walls of the aerogels impregnated with cellulose acetate (Fig. 4b(iii) and Fig. 4c(iii)) were more compact than those of CNF-X, suggesting that small pores on the cell walls of CNF-X may be filled with cellulose acetate during the impregnation process. Such a filling effect is much less prominent in CNF-X-a-CNC (Fig. 4d(iii)), which corresponds to the specific surface area results (Table 2). Instead of filling the small pores on the cell walls, the a-CNCs build a nanoscale scaffold together with the CNF aerogel owing to their integrity, which is beneficial for the CO2 adsorption properties. The mechanical properties of the adsorbents were characterized by compression tests. The resulting stress-strain curves in both the axial and radial directions are illustrated in Fig. 5, and the E, E_s, and yield strength data of the adsorbents in the axial direction are given in Table 3. The stress-strain curves in the axial direction (Fig. 5a) display the typical compression behavior of foams, including linear elastic, plateau, and densification regions. Owing to equipment limitations (maximum load of 18 N), the CNF-X-0.5CA and CNF-X-a-CNC experiments could not reach the same strain as those of CNF-X and CNF-X-0.1CA. Compared to those of CNF-X, both E and the yield strength of the aerogels were enhanced after impregnation with cellulose acetate: E was 9% and 60% higher after impregnation with 0.1 g and 0.5 g of cellulose acetate, respectively.
This enhancement is attributed to the higher integrity of the cell walls in the impregnated aerogels. Interestingly, CNF-X-a-CNC demonstrated the highest E (328.9 kPa) and E_s (19.8 kNm/kg) among all the samples despite a relatively low amount of impregnated a-CNCs (0.07 g), indicating that the a-CNCs provided a remarkable reinforcing effect owing to the very high rigidity of the CNCs. Moreover, the E_s of CNF-X-a-CNC outperformed previously reported cellulose-based aerogels (4-18 kNm/kg) (Fan et al. 2018; López Durán et al. 2018; Zhang et al. 2019) and was in the range of expanded polystyrene and polyurethane foams (10-100 kNm/kg) (Gibson and Ashby 1999). Figure 5b shows the compression behavior of the samples in the radial direction, where no significant difference was observed. All the samples showed very high flexibility, and their elastic strain could reach 60%. The CO2 adsorption properties of the adsorbents were first evaluated by adsorption isotherms measured at 273 K. Figure 6a shows the experimental data (points) and polynomials fitted to the data (curves). As shown in Fig. 6a, the modified CNF-X-0.1CA and CNF-X-a-CNC adsorbed much more CO2 than CNF-X did at a given pressure. The CO2 adsorption capacities at 101 kPa were 1.14 mmol/g and 1.05 mmol/g for CNF-X-a-CNC and CNF-X-0.1CA, respectively. The slightly higher CO2 adsorption capacity of CNF-X-a-CNC may be due to the nanoscale scaffold formed by the a-CNCs, which helps the aerogel retain more surface area after impregnation and provides more effective physisorption sites for CO2 on the surface; this corresponds to the BET surface area data in Table 2. However, CNF-X-0.5CA showed only a slightly better specific capacity than CNF-X, which can be attributed to the simultaneous decrease in surface area (fewer adsorption sites) and increase in density.
The decrease in surface area can be due to the filling effect of CA on the cell walls of the aerogels, as discussed above, and may also result from the loss of some delicate porous structure during the impregnation process. CO2 forward-step-change breakthrough experiments were also conducted to evaluate the performance of the adsorbents in an adsorption column. As shown in Fig. 6b, the measurements were conducted by feeding a 90 kPa/10 kPa N2/CO2 gas mixture to the column under atmospheric pressure and detecting the CO2 concentration at the outlet. The amount of adsorbed CO2 estimated from the breakthrough curves, as illustrated in Fig. 6c, should correspond to the adsorption capacity obtained from the adsorption isotherms at 10 kPa. CNF-X is considered a carrier material for the functional materials because it adsorbs a very low amount of CO2 (0.007 mmol/g) at 10 kPa, as determined from the fitted polynomial in Fig. 6a. Therefore, the specific capacity from the adsorption isotherms (q_iso) at 10 kPa was calculated from Q_ad and Q_CNF-X, the measured adsorption capacities of the corresponding adsorbent and of CNF-X, respectively, and m, the mass of the functional material in the adsorbent.

Fig. 6 a Experimental adsorption isotherms (points) and fitted polynomials (curves) of CNF-X-a-CNC, CNF-X-0.1CA, CNF-X-0.5CA, and CNF-X measured at 273 K in the pressure range of 0 to 101 kPa. b Schematic illustration of the CO2 forward-step-change breakthrough equipment. c Forward-step-change breakthrough curves of CNF-X-a-CNC, CNF-X-0.1CA, CNF-X-0.5CA, and CNF-X.

The specific capacity from the breakthrough experiments (q_bt) was calculated from v_gas, the molar flow rate of the mixed N2/CO2 gas (mmol/s), c_0 and c_1, the concentrations of CO2 (vol%) at the inlet and outlet, and t_ads, the time at which c_1/c_0 reaches 1. Figure 7 summarizes the specific CO2 adsorption capacity of the functional materials, i.e.,
cellulose acetate and a-CNCs, in the related adsorbents, calculated from both the adsorption isotherms and the breakthrough measurements. The CO2 adsorption capacities of the functional materials from the two experiments follow similar trends, in which the a-CNCs in CNF-X-a-CNC had an approximately 45% higher CO2 adsorption capacity than the cellulose acetate in CNF-X-0.1CA. This indicates that using nanomaterials to form a nanoscale scaffold on the surface of the aerogel provided more physisorption sites for CO2, which contributed to the higher adsorption efficiency of the a-CNCs. The cellulose acetate in CNF-X-0.5CA had the lowest specific capacity, which implies that the thicker layers formed by the larger amount of cellulose acetate may not be fully accessible to CO2. Notably, although Fig. 6a shows that the entire CNF-X-a-CNC adsorbent had a similar adsorption capacity to that of CNF-X-0.1CA at 10 kPa, the difference in mass between the impregnated a-CNCs (0.07 g) and the impregnated cellulose acetate (0.1 g) caused the difference in their specific capacities.

Conclusions

Lightweight and flexible CNF-based aerogels with monolithic structures were prepared via freeze-casting. The obtained aerogels were further functionalized by impregnation with an a-CNC/acetone suspension or a cellulose acetate/acetone solution. Owing to the anisotropy of the monolithic structure, the aerogels were soft and flexible in the radial direction, while the specific elastic modulus was as high as 19.75 kNm/kg in the axial direction. Introduction of acetate groups increased the affinity of the aerogels for CO2, which allowed the functionalized aerogels to be used as CO2 adsorbents. Unlike the cellulose acetate, which formed a compact layer on the surface of the aerogel, the a-CNCs built a nanoscale scaffold on it.
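The exact expression the authors used for q_bt is not reproduced above. A common generic way to evaluate a specific capacity from a breakthrough curve, assumed here for illustration, integrates the CO2 deficit (c_0 minus the outlet concentration) over time by the trapezoidal rule and normalizes by the functional-material mass:

```python
def breakthrough_capacity(times_s, c_out_pct, c_in_pct, v_gas_mmol_s, mass_g):
    """Specific capacity (mmol/g) from a breakthrough curve: trapezoidal
    integration of the CO2 mole-fraction deficit at the column outlet.
    Generic evaluation, not necessarily the authors' exact formula."""
    adsorbed = 0.0
    for k in range(1, len(times_s)):
        deficit0 = (c_in_pct - c_out_pct[k - 1]) / 100.0
        deficit1 = (c_in_pct - c_out_pct[k]) / 100.0
        adsorbed += v_gas_mmol_s * 0.5 * (deficit0 + deficit1) * (times_s[k] - times_s[k - 1])
    return adsorbed / mass_g

# Hypothetical curve: full CO2 capture for the first minute, then
# linear breakthrough to the 10 vol% feed concentration.
q = breakthrough_capacity([0, 60, 120], [0.0, 0.0, 10.0], 10.0, 0.2, 1.0)
```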
This delicate nanoscale scaffold not only significantly improved the mechanical performance of the adsorbents but also provided more physisorption sites on the aerogel; therefore, CNF-X-a-CNC had a better CO2 adsorption capacity than CNF-X-0.1CA. Even though CNF-X-a-CNC had the highest CO2 adsorption capacity among the adsorbents prepared in this study (1.14 mmol/g at 273 K and 101 kPa), more work is needed to make the most of this potential.

Fig. 7 CO2 adsorption capacity of a-CNCs and cellulose acetate in the corresponding aerogels estimated from the forward-step-change breakthrough curves and from the adsorption isotherms.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Environmental Factors Contributing to the Spread of H5N1 Avian Influenza in Mainland China Background Since late 2003, highly pathogenic avian influenza (HPAI) outbreaks caused by infection with the H5N1 virus have led to the deaths of millions of poultry and more than ten thousand wild birds, and as of 18 March 2008, at least 373 laboratory-confirmed human infections, with 236 fatalities, had occurred. The unrestrained worldwide spread of this disease has caused great anxiety about the potential for another global pandemic. However, the effect of environmental factors on the spread of the HPAI H5N1 virus is unclear. Methodology/Principal Findings A database including incident dates and locations was developed for 128 confirmed HPAI H5N1 outbreaks in poultry and wild birds, as well as 21 human cases, in mainland China during 2004-2006. These data, together with information on wild bird migration, poultry densities, and environmental variables (water bodies, wetlands, transportation routes, main cities, precipitation and elevation), were integrated into a Geographical Information System (GIS). A case-control design was used to identify the environmental factors associated with the incidence of the disease. Multivariate logistic regression analysis indicated that the minimal distance to the nearest national highway, annual precipitation, and the interaction between the minimal distances to the nearest lake and wetland were important environmental predictors of HPAI risk. A risk map was constructed based on these factors. Conclusions/Significance Our study indicates that environmental factors contribute to the spread of the disease. The risk map can be used to target countermeasures to stop further spread of HPAI H5N1 at its source. Introduction The H5N1 subtype of the influenza A virus was initially detected in poultry on a farm in Scotland, UK, in 1959 [1].
The highly pathogenic avian influenza (HPAI) virus reappeared in 1997 and caused an outbreak in chicken farms and live bird markets in Hong Kong, where 18 human cases were reported with 6 deaths [2]. The recent chain of outbreaks caused by H5N1 started among poultry in South Korea in December 2003, and has affected 61 countries in Asia, the Middle East, Africa and Europe, leading to the deaths of millions of poultry and more than 10,000 wild birds [3,4]. Even worse, the HPAI H5N1 virus appears to have gained the ability to cross the species barrier and induce severe disease and death in humans as well as other mammals. As of 18 March 2008, there have been 373 laboratory-confirmed human infections, of which 236 have died [5]. The worldwide spread of the disease is providing more opportunities for viral re-assortment within a host (genetic shift) and mutation over time (genetic drift). These factors may lead to a viral strain that is more efficient at person-to-person transmission, raising the potential for another pandemic to occur [6][7][8][9]. Since January 2004, HPAI H5N1 outbreaks in poultry and wild birds, and occasional trans-species transmission to humans, have been reported throughout mainland China [5,10]. Surveillance studies suggested that poultry movement and wild bird migration may have contributed to such rapid spread [7,11,12]. However, the processes, including environmental factors, influencing the spread of the HPAI H5N1 virus are not clearly understood. In this study, we explore environmental factors associated with such outbreaks in mainland China to provide essential information for developing effective and appropriate countermeasures.
Results Since the emergence of HPAI H5N1 infections in mainland China in January 2004, a total of 128 outbreaks of HPAI H5N1, spanning a large geographic area of mainland China, have occurred in poultry and wild birds at the village/township level in 26 of 31 provinces, municipalities or autonomous regions by the end of 2006 [10]. The spatial distributions of HPAI H5N1 outbreaks in poultry and wild birds, and human cases, in mainland China are displayed in the thematic map (Figure 1), with poultry density as the background. The generalized migration routes of birds are overlaid on the map. In a case-control study, minimal distances to the nearest lake, wetland, national highway and main city, as well as annual precipitation, appeared to be significantly associated factors in the univariate analysis (Table 1). Multivariate logistic regression demonstrated that three variables, minimal distance to the nearest national highway, annual precipitation and the interaction between minimal distance to the nearest lake and wetland, were significantly associated with HPAI H5N1 outbreaks (Table 1). Goodness of fit for the logistic regression model was evaluated using the Hosmer-Lemeshow test, indicating good agreement between observed and predicted outcomes at outbreak sites and "control" areas (χ² = 4.305, P = 0.829). We have also investigated possible overestimation of effects due to clustering of neighboring outbreaks. Within clusters, transmission of H5N1 from one location to another could have occurred directly, instead of through the investigated environmental factors. Using the criterion of a distance <20 km and a time interval <3 weeks, we could identify 6 clusters in a total of 30 outbreaks. We repeated the multivariate logistic regression using only one randomly selected outbreak site from each cluster (i.e., 104 cases and 520 controls), but this did not lead to significantly different results.
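The structure of such a logistic risk model can be illustrated with a short sketch. The per-km slope below is back-derived from the adjusted OR of 0.825 per 10 km of highway distance reported in the text (β = ln(OR)/10); the intercept and covariate values are hypothetical placeholders, since the fitted SPSS coefficients are not given.

```python
import math

def odds_ratio(beta, unit=1.0):
    """Odds ratio for a `unit` increase in a covariate: exp(beta * unit)."""
    return math.exp(beta * unit)

def predicted_risk(intercept, betas, covariates):
    """Logistic-model probability p = 1 / (1 + exp(-(b0 + sum(bi * xi))))."""
    z = intercept + sum(b * x for b, x in zip(betas, covariates))
    return 1.0 / (1.0 + math.exp(-z))

# Back-derive the per-km slope from the reported adjusted OR of 0.825 per 10 km
beta_highway = math.log(0.825) / 10.0
```

With this slope, a grid cell 10 km from the nearest national highway has its odds of an outbreak multiplied by 0.825 relative to a cell on the highway; the absolute probability additionally depends on the (unreported) intercept.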
The adjusted OR and P-value for the three factors, minimal distance to the nearest national highway, annual precipitation and the interaction between minimal distance to the nearest lake and wetland, were 0.825 (P = 0.005), 0.915 (P = 0.001), and 0.970 (P < 0.001), respectively. Based on the predictive model derived from the logistic regression analysis, a predictive risk map of HPAI H5N1 infections was established for mainland China by GIS technologies (Figure 2). On the risk map, the locations of HPAI H5N1 outbreaks in poultry from January 2007 to March 13, 2008 were also plotted (8 outbreaks). The overlapping analysis revealed that 87.5% (7/8) of outbreaks of HPAI H5N1 in poultry occurred in the predicted highest or high-risk areas (Figure 2). Discussion The results of our case-control study demonstrate that the interaction between minimal distance to the nearest lake and wetland, minimal distance to the nearest national highway, and the annual average precipitation are the principal environmental variables contributing to the spread of HPAI H5N1 virus in mainland China. These findings indicate that bird populations in close proximity to a body of water are in danger of becoming infected, and provide further evidence for the role of waterfowl in the transmission of avian influenza. HPAI is mainly characterized by rapid spread and a high mortality rate in poultry. However, except for some species [13], waterfowl are typically asymptomatic reservoirs for H5N1 [14], perhaps shedding virus through their salivary and nasal secretions and feces into the bodies of water that they inhabit [11]. It is estimated that the virus can survive in water for up to 4 days at 22 °C and up to 30 days at 0 °C [15]. People may unknowingly carry the virus to a body of water from contaminated surfaces or infected birds.
Birds and other animals may also transport the virus on their feathers or fur to a water source after coming into contact with an infected animal or contaminated surface on a farm. The spread of the virus is facilitated when the body of water, such as a lake, is stagnant and adjacent to a farm or community [16]. Interestingly, as the precipitation in a region increases, the risk of HPAI H5N1 outbreaks is reduced. One explanation for this result may be that lower precipitation levels lead to a higher concentration of birds in a reduced number of wetlands, thus increasing the chances of birds becoming infected through contact with the virus. Proximity to a national highway is another risk factor contributing to HPAI infection. National highways in mainland China are funded and constructed by the central government and are vital connections between provinces, without toll collection. As there are many restrictions on railway transportation of poultry in mainland China [17], and tolls on freeways are quite high, national highways are usually the first choice for transporting poultry and their products throughout the country. During long-distance transportation, a variety of birds and animals from various origins are caged on top of each other, perhaps providing an easy route for cross-infection with avian influenza. In addition, many open live poultry markets are established along or near the national highways, which may further increase the chance of virus transmission. These findings suggest that trade and mechanical movement of poultry may facilitate the spread of HPAI H5N1 virus, supporting laboratory evidence as demonstrated by Li et al. [9]. The logistic regression analysis in this study demonstrates that the risk of HPAI H5N1 infection is not increased with poultry density, as is usually presumed.
This may be due to the fact that poultry, especially chickens in areas with high population densities, are usually bred on industrialized farms with good animal husbandry practices and are properly vaccinated [18]. Although their importance as an ecological reservoir is uncertain, migratory birds may spread H5N1 viruses to new geographic regions. Usually migratory birds cannot fly the full distance to their annual migratory destination. Instead, they usually interrupt their migration to rest and refuel [19]. Avian influenza may be spread between wild and domestic birds when migratory birds search for food, water and shelter. Infected wild birds can carry the influenza virus for long distances during migration [20]. Migratory birds are loyal to their annual migratory destinations and their stopover points, such as water bodies, wetlands and forests [21]. As wetlands and forests are destroyed, however, due to increased human activities, especially land utilization practices, migratory birds may be forced to search for shelter and food in other places such as farms. This may result in increased contact between wild and domestic birds, thus facilitating the transmission of the virus to domestic bird populations. In conclusion, the analyses of the spatial distribution and underlying environmental determinants reveal that the spread of HPAI H5N1 probably follows two different but interlinked patterns. Transportation of poultry and their products along highways may contribute to the long-distance nationwide spread of the disease. Contact with infected birds, trade and mechanical movement of poultry may be responsible for local transmission. The two spread patterns can exist simultaneously, and HPAI H5N1 outbreaks can take place near national highways, near relatively stagnant bodies of water such as lakes and wetlands, and in particular when there is reduced rainfall.
The predictive risk map of HPAI H5N1 infections established for mainland China on the basis of the above contributing factors may be useful for identifying the areas where surveillance, vaccination and other preventive interventions should be targeted. Data collection and management The data on HPAI H5N1 outbreaks in animals were obtained from monthly reports, the official veterinary bulletins of the Ministry of Agriculture of China, and from updates on HPAI in animals from the World Organization for Animal Health (OIE) [3,22]. All the outbreaks were confirmed with laboratory-based virological methods and officially reported in mainland China. We developed a database including information on the incident dates (rather than the dates of reporting) and locations of outbreaks in birds. Information on human cases with H5N1 infection in mainland China was also included in the database. Each of the HPAI H5N1 outbreaks in birds, as well as human infections, was geo-coded at the village/township level and linked to a digital map at the scale of 1:100,000 using geographical information system (GIS) technologies. Point-type information (single pairs of coordinates) was created for each outbreak site, while line-type information was generated for migration routes of migratory birds, based on detailed bird banding records from mainland China [23]. Polygon-type information for water bodies was derived from digital maps. These three types of information were overlaid for the analyses in our study. Water bodies included lakes with a surface area ≥1.0 km² and reservoirs with a surface area ≥1.0 km²; both were used as polygon-type map layers. In addition, information on transportation, main cities and elevation was directly obtained from digital maps (provided by the co-author Dr. Peng Gong from the State Key Laboratory of Remote Sensing Science). According to the definition suggested by the U.S.
Fish and Wildlife Service [24,25], wetlands in the current study included only swamps, water meadows and wading lakes with a surface area ≥1.0 km², and excluded rivers, reservoirs and deep lakes. The data on wetlands were obtained from the National Geographical Resource Center, and were derived from wetland census data collected in mainland China. These data were digitized as point-type information. Transportation routes, including railways, freeways and national highways, were used as line-type information. Main cities included 31 provincial capitals and 305 prefecture-level cities in mainland China and were used as point-type information. The layer of elevation was used as line-type information and was preprocessed into a raster-type layer for this study. The raster-type map layer of precipitation, extrapolated by the kriging technique using 700 weather stations in mainland China, was obtained from the Institute of Geographical Sciences and Natural Resources Research, Chinese Academy of Sciences. Poultry density information was obtained from the FAO [26] as a raster-type layer giving the predicted poultry density, corrected for unsuitability and adjusted to match observed totals. GIS spatial analysis The spatial distribution of outbreaks in birds and human cases was studied through overlapping analysis. A thematic map was established on which the poultry density was taken as the background. To understand the role of bird migration in the spread of H5N1 virus, a map layer of bird migration was created and overlaid on the map of the spatial distribution of outbreaks in birds and human cases. Analysis of environmental factors associated with H5N1 outbreaks A case-control study design was used to clarify the environmental factors associated with the spread of HPAI H5N1. The 128 outbreak sites were taken as "cases".
All other villages and townships in mainland China, except for those affected by HPAI H5N1 outbreaks from 2004 to 2006, were defined as non-epidemic areas. Then 640 "control" sites (5 controls/case) were randomly selected from the non-epizootic areas in mainland China and geo-coded (see Figure 3 for the location of cases and controls). Eight environmental factors (bodies of water, wetlands, transportation routes, migration routes, main cities, precipitation, elevation and poultry density), involving twelve variables, were considered in the study. The minimum distances to the nearest bodies of water, including lakes, reservoirs and rivers (polygon-type information), were measured using a proximity function of spatial analysis that calculated the minimal distance from each case and control site to the edge of its nearest water body. The minimal distance from each "case/control" site to its nearest points or lines was calculated using point-type or line-type information (i.e., wetlands, transportation routes including railways, freeways and national highways, migration routes and main cities). Furthermore, using a zonal statistical calculation technique, an 8 km mean buffer zone (the outbreak area of 3 km plus a risk area of 5 km around the outbreak site) was calculated for the raster-type variables, annual precipitation, elevation and poultry density, for each case and control site. Statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS Inc, Chicago, IL, USA). Unconditional logistic regression was performed, and odds ratios (ORs), their 95% confidence intervals (CIs) and P-values were estimated using maximum likelihood methods. ORs of the variables involving minimal distances to the nearest body of water, wetlands, transportation routes, bird migration routes and main cities were calculated for a ten-kilometer difference.
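The site-to-feature proximity calculation can be mimicked in plain code. This is a simplified sketch assuming projected (planar) coordinates, not the actual GIS proximity function used in the study:

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Distance from point p to the line segment ab (planar coordinates)."""
    p, a, b = (np.asarray(v, dtype=float) for v in (p, a, b))
    ab = b - a
    denom = float(ab @ ab)
    # Parameter of the orthogonal projection, clamped to the segment
    t = 0.0 if denom == 0.0 else float(np.clip((p - a) @ ab / denom, 0.0, 1.0))
    return float(np.linalg.norm(p - (a + t * ab)))

def min_distance_to_polyline(p, vertices):
    """Minimal distance from a case/control site to a polyline feature
    (e.g. a national highway represented as line-type information)."""
    return min(point_segment_dist(p, vertices[i], vertices[i + 1])
               for i in range(len(vertices) - 1))
```

For polygon features such as lakes, the same segment routine applied to the polygon boundary gives the distance to the water-body edge, matching the description above.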
ORs for annual precipitation, elevation and poultry density were estimated for a 100-millimeter, 100-meter and 1000-birds-per-square-kilometer difference, respectively. Univariate analyses were conducted to examine the effect of each variable separately. Quadratic and logarithmic transformations of each variable were also tested, but these did not perform significantly better than a linear association for any of the variables. Multivariate analysis was then performed using the variables with a P-value <0.1 from the univariate analyses as covariates. Possible interactions between the covariates were also included in the multivariate analysis. Collinearity between covariates in the case-control study was quantitatively assessed. Correlations between the minimal distance to the nearest lake, the minimal distance to the nearest wetland and the minimal distance to the nearest river were identified. Models were also optimized by comparing the −2 log likelihood and Hosmer-Lemeshow goodness of fit when correlated variables were added or removed. More accurate models could be derived by removing the variable of minimal distance to the nearest river. Variables were selected using the backward-LR method, with a P-value <0.05 considered statistically significant. Goodness of fit for the logistic regression model was evaluated using the Hosmer-Lemeshow goodness-of-fit test. To predict the risk of HPAI H5N1 occurrence, a grid map was created using GIS techniques. The values of the above predictive variables were determined for each grid cell with an area of 100 km² (10 × 10 km) based on the predictive model derived from the multivariate logistic regression analysis. By interlinking all the grids, a predictive risk map of HPAI H5N1 infections was established for mainland China.
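The Hosmer-Lemeshow statistic used above for model checking can be sketched as follows. This is an illustrative reimplementation (the study used SPSS): observations are sorted by predicted probability, split into risk strata, and observed versus expected event counts are compared; the statistic is referred to a chi-square distribution with (groups − 2) degrees of freedom.

```python
import numpy as np

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow statistic over `groups` risk strata.

    y : 0/1 outcomes, p : model-predicted probabilities.
    Returns (chi-square statistic, degrees of freedom = groups - 2).
    """
    order = np.argsort(np.asarray(p, dtype=float))
    y = np.asarray(y, dtype=float)[order]
    p = np.asarray(p, dtype=float)[order]
    chi = 0.0
    for idx in np.array_split(np.arange(len(y)), groups):
        n = len(idx)
        obs, exp = y[idx].sum(), p[idx].sum()
        # Group term: (O - E)^2 / (n * pbar * (1 - pbar)), with pbar = E / n
        chi += (obs - exp) ** 2 / (exp * (1.0 - exp / n))
    return chi, groups - 2
```

A small statistic with a large p-value, as reported in the Results (χ² = 4.305, P = 0.829), indicates no evidence of poor calibration.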
The risk of HPAI occurrence for each grid was calculated and classified as the highest risk, high risk, medium risk and low risk according to quartile levels for predicted prevalence in the predictive map.
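The quartile-based classification of grid risks can be expressed compactly. A minimal sketch, assuming the cutoffs are the quartiles of predicted prevalence across all grid cells, as described above:

```python
import numpy as np

def classify_risk(predicted, labels=("low", "medium", "high", "highest")):
    """Bin per-grid predicted prevalence into quartile risk classes."""
    q1, q2, q3 = np.quantile(np.asarray(predicted, dtype=float), [0.25, 0.5, 0.75])
    out = []
    for p in predicted:
        if p <= q1:
            out.append(labels[0])      # low risk
        elif p <= q2:
            out.append(labels[1])      # medium risk
        elif p <= q3:
            out.append(labels[2])      # high risk
        else:
            out.append(labels[3])      # highest risk
    return out
```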
v3-fos-license
2021-04-04T06:16:31.444Z
2021-03-26T00:00:00.000
232773155
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2075-1729/11/4/280/pdf", "pdf_hash": "07ddf6261bbfd70e52abfc91fad07cef4a6a5c3d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45207", "s2fieldsofstudy": [ "Biology", "Agricultural And Food Sciences" ], "sha1": "31f2fee7b44ab8463c76b9dae33f56cfe0e51d6a", "year": 2021 }
pes2o/s2orc
Analysis of Soil Fungal and Bacterial Communities in Tianchi Volcano Crater, Northeast China High-altitude volcanoes, typical examples of extreme environments, are considered of particular interest in biology as a possible source of novel and exclusive microorganisms. We analyzed the crater soil microbial diversity of Tianchi Volcano, northeast China, by combining molecular and morphological analyses of culturable microbes with metabarcoding based on Illumina sequencing, in order to increase our understanding of high-altitude volcanic microbial community structure. One hundred and seventeen fungal strains belonging to 51 species and 31 genera of Ascomycota, Basidiomycota and Mucoromycota were isolated. Penicillium, Trichoderma, Cladosporium, Didymella, Alternaria and Fusarium dominated the culturable fungal community. A considerable number of isolated microbes, including filamentous fungi, such as Aureobasidium pullulans and Epicoccum nigrum, yeasts (Leucosporidium creatinivorum), and bacteria (Chryseobacterium lactis and Rhodococcus spp.), typical of high-altitude, cold, and geothermal extreme environments, provided new insights into the ecological characterization of the investigated environment, and may represent a precious source for the isolation of new bioactive compounds. A total of 1254 fungal and 2988 bacterial operational taxonomic units were generated from metabarcoding. Data analyses suggested that the fungal community could be more sensitive to environmental and geographical change than the bacterial community, whose network was characterized by more complicated and closer associations. Introduction Microorganisms are considered an essential component of natural environments [1]. They are ubiquitous in nature owing to characteristics such as their small size, flexible capacity to exploit nutrients, and adaptability to unfavorable and extreme environmental conditions [1].
Microbial communities in volcanic environments are of particular interest for research on the emergence and evolution of life due to the unique extreme conditions that characterize these natural habitats [2]. Extremophilic microorganisms inhabiting volcanic environments can be a source of important biotechnological products such as antibiotics, biofertilizers or bio-control agents [3][4][5]. New extremophiles have been isolated from different, very peculiar environments, which are often considered "empty" in the biological sense [6]. Novel microbial species have recently been described from barren alpine environments [7,8] and high-altitude volcanic soils [4,5]. Both morphological culture-based and molecular approaches are generally used to characterize environmental microbial communities. Culturable microorganisms provide more detailed information for identification at the species level, and for understanding their role in environmental processes, but less than 5% of the microorganisms on Earth are culturable under laboratory conditions, due to the selectivity of media and culture conditions [9,10]. To overcome this limitation, diverse methods have been developed to obtain genetic information on the presence of microorganisms living in natural environments. These methods allow the culture-independent analysis of the total microbial genomes, called the "metagenome", of a particular environment [2,[11][12][13] (Life 2021, 11, 280). The development of molecular biological techniques (particularly high-throughput sequencing) represents a powerful tool to access a much larger proportion of microbial communities, compared to traditional culture-based methods, by means of environmental sample total DNA extraction [14]. Changbai Mountain Nature Reserve is located in the northeast of China. It is considered one of the most well-protected and conserved natural ecosystems in China [15].
Many new microbial species have been described in this area, which is particularly predisposed to harbor uncommon microorganisms due to its distinctive landforms and complex climatic conditions [16]. For instance, a novel bacterial strain, Bacillus methylotrophicus, was isolated from the soil near the roots of Pinus koraiensis plants at an altitude of 2749 m a.s.l. [4], while a novel thermophilic anaerobic bacterium, Fervidobacterium changbaicum, was collected from a mixture of water and mud from a hot spring in Changbai Mountain [17]. A number of novel macro-mycete taxa were also described from the highly diverse fungal community of this Nature Reserve [18,19]. Tianchi Volcano is located at the highest point of the Changbai Mountain ranges [20]. It is an active volcano with the highest potential eruption risk in China [21,22]. Water accumulates at the top of the volcanic cone and forms Tianchi lake, the highest volcanic lake in China, at an altitude of 2189 m a.s.l. The lake is surrounded by 16 peaks, the highest reaching an altitude of 2749.5 m a.s.l. [23,24]. The crater of Tianchi Volcano is the accumulation area of volcanic lavas and debris, which are mainly composed of Ti, Fe, Mn, Si, Al, Ca, Na, K and Mg oxides, trace elements, including a high concentration of rare-earth elements, and heavy metals, such as Zn, Pb and Cu [23,25]. To date, few studies on microbial diversity have been performed in Changbai Mountain Nature Reserve, mainly focusing on the structure and function of the soil microbial communities in vertical vegetation zones, with little attention to microbial communities in the high crater area of Tianchi Volcano. For instance, the fungal and bacterial diversity and community composition along an elevation gradient (2000-2500 m a.s.l.) on the northern alpine tundra belt of Changbai Mountain were analyzed using Illumina sequencing [26,27]. Other studies focused on specific soil microbial diversity associated with vegetation zones [28][29][30][31].
Zhao et al. characterized microbes in rhizosphere soils of Cowskin Azalea (Rhododendron aureum) using culture-dependent methods [32]. The diversity of culturable forest micro-fungi in different vegetational belts from 700 to 2600 m a.s.l. was described by Yang et al. [16]. In this study, we focused on the mountaintop area of Tianchi Volcano, which was expected to harbor a peculiar microbial community adapted to the particularly extreme environmental conditions characterizing the highest elevation montane zones, including colder temperatures and higher exposure to wind and solar radiation. We performed a comprehensive analysis of the soil fungal and bacterial diversity colonizing the crater margin of Tianchi Volcano, using a combination of molecular and morphological analyses of culturable microbes, and metabarcoding based on Illumina sequencing, in order to shed light on the structure of the microbial community inhabiting this unexplored extreme environment and increase our understanding of the high-altitude volcanic microbial network. Study Area and Sampling The study plots were located along the crater margin of Tianchi Volcano, which is part of the Changbai Mountain Nature Reserve [32]. This Nature Reserve, at an altitude ranging from 500 m to 2691 m a.s.l., is characterized by a typical mountain climate, with low temperature and heavy precipitation, the mean annual temperature being 4.9-7.3 °C and the mean annual precipitation over 800 mm [33]. The Changbai Mountain high-altitude zone environment is constituted by a tundra belt, which is distributed between 1950 m and 2650 m a.s.l., and includes volcanic, glacial and periglacial landforms. The specific mean annual temperature (−4.8 °C) and precipitation (1154 mm) of this area are typical of a tundra-periglacial climate. The tundra belt is covered with snow for about 6 months every year, from mid-October to mid-May [26].
The Tianchi Volcano crater area, around Tianchi lake, belongs to the tundra belt. During 26-27 October 2019, soil stone mixture samples from the northern and western crater areas of Tianchi Volcano were collected. Six plots of 20 m diameter were randomly selected at the very top of the mountain, along the crater margin, on each of the two analyzed sides of the crater, to collect a total of 12 soil stone mixture samples. All plots were completely devoid of vegetation. The number of plots was chosen based on the accessibility of the crater site and following the sampling restrictions in the protected area. Each sample was collected from the center of the plot at shallow depth (1-5 cm), after removing the surface layer (ca. 1 cm). Figure 1 shows the geographical distribution of sampling locations, while specific information on coordinates and altitudes is listed in Table 1. The samples were immediately stored on ice in insulated containers. After returning to the laboratory, each sample was divided into two subsamples, one stored at 4 °C for isolation of culturable microbes, the other stored at −80 °C for metabarcoding analysis. The pH of soil samples was measured by adding distilled water to ground material at a ratio of 1:2.5 (w/v). Cultivable Microbe Isolation, Microscopy and Molecular Analyses Microbes were isolated following a modified dilution plate method [35]. Soil stone mixture suspensions were prepared as follows: samples were first evenly ground in sterilized mortars, then 10 g of each sample were suspended in 40 mL sterile water to obtain the diluted suspension. 1 mL of the uniform diluted suspension was taken and further diluted in 9 mL sterile water. All diluted sample suspensions were finally shaken at 220 rpm for 5 min. 100 µL of suspension from each of the two dilutions were plated on Potato Dextrose Agar (PDA) amended with 100 mg/L penicillin and streptomycin to reduce the growth of bacteria to a minimum level and favor the isolation of fungi. Petri dishes were sealed and incubated at room temperature (25 °C) in the dark.
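The dilution series described above implies a simple conversion from colony counts to microbial abundance per gram of soil. The sketch below is purely illustrative (the study selected morphotypes for isolation rather than reporting CFU counts) and assumes the soil volume is negligible relative to the diluent:

```python
def cfu_per_gram(colonies, soil_g=10.0, water_ml=40.0,
                 extra_dilution=10.0, plated_ml=0.1):
    """CFU per gram of soil from a dilution-plate colony count.

    Defaults follow the protocol in the text: 10 g soil in 40 mL water,
    a further 1:10 dilution (1 mL into 9 mL), and 100 uL plated.
    """
    # Grams of soil per mL of the plated suspension
    grams_per_ml = soil_g / water_ml / extra_dilution
    return colonies / (plated_ml * grams_per_ml)
```

With the default protocol, each plate receives 0.0025 g of soil, so 25 colonies correspond to roughly 1.0 × 10⁴ CFU per gram.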
After microbial colonies appeared, the different morphotypes were accurately selected for strain isolation, based on characters such as texture and pigmentation. Single colonies were picked and transferred to new Petri dishes containing the same medium to obtain pure cultures. A modified CTAB (cetyl trimethyl ammonium bromide) method was used to extract fungal DNA from mycelia of pure cultures [38]. A total of 500 mg of fungal mycelium was added to a 2.0 mL microcentrifuge tube with 0.2 g zirconia beads, then 100 µL of sterile lysis buffer (1 M Tris-HCl (pH 8.0) 10 mL, 0.5 M EDTA (pH 8.0) 4 mL, 5 M NaCl 28 mL, CTAB 2 g, ddH2O added to 100 mL) was added. The mixture was homogenized for 30 s at 60 Hz for 2 cycles using a high-throughput tissue grinder (SCIENTZ-48, SCIENTZ, Ningbo, China). 400 µL of lysis buffer was added after homogenization, and the mixture was incubated at 60 °C for 30-60 min in a water bath, inverting the tubes at 15 min intervals. 500 µL of SEVAG (chloroform:isoamyl alcohol, 24:1) was added to the tubes, mixed evenly, then centrifuged at 12,000 rpm for 15 min. The supernatant was transferred to a new clean 2.0 mL tube, and an equal volume of SEVAG was added. The resulting solution was mixed evenly and centrifuged at 12,000 rpm for 15 min. The last step was repeated twice, then the supernatant (225-300 µL) was transferred to a new clean 1.5 mL microcentrifuge tube, 600 µL isopropyl alcohol was added, and the tubes were inverted gently several times, making the precipitated DNA "ropes" visible. The tubes were kept at −20 °C for 1 h and then centrifuged at 12,000 rpm for 5 min. The supernatant was removed carefully from each tube and the pellets were washed with 400 µL of 70% ethanol, then centrifuged at 12,000 rpm for 3 min. After repeating the last step twice, the aqueous phase was discarded and the tubes were placed in a clean bench or fume hood to dry. The DNA was redissolved in 50 µL of sterile ddH2O for further study.
For bacteria, DNA was obtained by the boiling method [39]. Bacterial samples were put into 100 µL sterile ddH2O in 1.5 mL microcentrifuge tubes and heated in boiling water (100 °C) for 15 min, so that the DNA dissolved in the liquid phase. This step was followed by centrifugation (14,000 rpm for 1 min) to recover the supernatant, and the tubes containing supernatant were stored at −20 °C. The internal transcribed spacer (ITS) region of fungal rDNA was amplified using primers ITS1 and ITS4, whereas the V3 to V4 hypervariable region of bacterial 16S rRNA was amplified using the primers U341F and U806R (Table 2). PCR amplifications were performed in 25 µL reaction mixtures prepared with 1 µL of genomic DNA as template, 12.5 µL of 2× Vazyme Rapid Taq Master Mix, 1 µL of each primer (10 µM), and double-distilled H2O to a total volume of 25 µL, using the following reaction conditions: 95 °C for 3 min; 30 cycles of 95 °C for 15 s, 55 °C for 15 s and 72 °C for 15 s; and a final extension at 72 °C for 5 min. PCR products were Sanger sequenced at Tianjin Tsingke Biological Technology Co., Ltd. (Tianjin, China). Sequences were assembled using BioEdit [40]. All obtained sequences were searched with BLASTn in NCBI and assigned to potential genera and species. The nomenclature followed Index Fungorum (indexfungorum.org) [41]. Sequences were deposited in GenBank under the accession numbers MW582315-MW582428 (fungi) and MW577449-MW577451 (bacteria). All fungal colonies isolated in this study were inoculated both on slanted medium in glass tubes and in strain preservation tubes containing double-sterilized ultra-pure water, and stored at 4 °C for long-term preservation. Bacterial strains were preserved in 30% glycerol at −80 °C. All strains were deposited in the LP Culture Collection (personal culture collection held in the laboratory of Prof. Lorenzo Pecoraro), at the School of Pharmaceutical Science and Technology, Tianjin University, Tianjin, China.
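When many samples are amplified, a per-reaction recipe like the 25 µL mixture above is usually scaled into a master mix. A minimal sketch; the 10% pipetting overage and the 9.5 µL ddH2O figure (25 µL minus the listed components) are assumptions for illustration, not stated in the text:

```python
def master_mix(n_reactions, overage=0.1):
    """Scale the 25 uL per-reaction PCR recipe to n reactions plus an overage."""
    per_rxn = {                          # uL per 25 uL reaction, from the text
        "2x Rapid Taq Master Mix": 12.5,
        "forward primer (10 uM)": 1.0,
        "reverse primer (10 uM)": 1.0,
        "template DNA": 1.0,
        "ddH2O": 9.5,                    # assumed: brings total to 25 uL
    }
    scale = n_reactions * (1 + overage)
    return {k: round(v * scale, 2) for k, v in per_rxn.items()}
```

In practice the template is usually added per tube rather than to the mix, but the arithmetic is the same.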
Assessment of Microbial Community Using Illumina Sequencing

The total genomic DNA was extracted from 0.5 g of soil-stone mixture for each sample using a FastDNA® Spin Kit for soil (MP Biomedicals, Solon, OH, USA) according to the manufacturer's instructions. The products were checked on 1% agarose gel and quantified with a NanoDrop 2000 UV-vis spectrophotometer (Thermo Scientific, Waltham, MA, USA). The fungal ITS2 rDNA region was amplified using the fungus-specific primer pair ITS3F

Bioinformatics of Fungal and Bacterial Sequences

Raw sequences were quality-filtered by Fastp v0.19.6 [42] according to the following rules: (1) bases in the tail of the reads with a quality score below 20 were filtered out; (2) 50 bp sliding windows were set on the reads, back-end bases were cut off from a window if its average quality value was lower than 20, and reads shorter than 50 bp after quality control were removed; (3) reads containing ambiguous nucleotides (N) were eliminated; (4) barcode mismatches were not allowed, and the maximum number of primer mismatches was 2. Paired-end reads were subsequently merged by FLASH v1.2.11 [43] with a minimum overlap of 10 bp and a maximum allowable mismatch ratio of 0.2. Quality-controlled sequences were clustered into operational taxonomic units (OTUs) at a 97% similarity threshold using UPARSE v7.1 [44] (Life 2021, 11, 280). Chimeric sequences were identified and removed using UCHIME [45]. The sequence with the highest abundance was selected as the representative sequence for each OTU. Taxonomic assignment of the representative sequence of each OTU was based on RDP Classifier v2.2 [46] using the UNITE Database v8.0 (for fungal sequences) [47] and the SILVA 16S rRNA Database (release 138, for bacterial sequences) [48,49].
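The windowed quality-control rule above (rule 2 in particular) can be sketched in a few lines. This is an illustrative re-implementation of the described filtering logic, not the actual Fastp code, and the exact trimming point within a failing window is an assumption:

```python
def quality_trim(seq, quals, window=50, min_q=20, min_len=50):
    """Sliding-window quality trim, following the rules described in the text:
    scan 50-bp windows from the 5' end; once a window's mean quality drops
    below 20, cut the read at the start of that window; discard reads shorter
    than 50 bp or containing ambiguous bases (N). Illustrative only."""
    cut = len(seq)
    for start in range(0, len(seq) - window + 1):
        win = quals[start:start + window]
        if sum(win) / window < min_q:
            cut = start  # trim from the start of the failing window
            break
    seq = seq[:cut]
    if len(seq) < min_len or "N" in seq:
        return None  # read removed by quality control
    return seq
```

For example, a 200 bp read whose last 100 bases are low quality is trimmed at the point where the window mean first falls below 20, while a uniformly high-quality read passes through unchanged.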
Volcanic fungal and bacterial OTU tables were rarefied to 43,002 and 36,117 reads per sample, respectively (the lowest sequence depths obtained across all samples), and used for downstream analyses, including calculation of the relative abundance of different taxonomic groups, α-diversity and β-diversity analyses, and network analyses.

Statistical Analysis

α-diversity indexes, including richness (number of OTUs), Shannon, Chao1, and Good's coverage, were calculated in Mothur to estimate the richness, diversity, and coverage of microbial communities. Rarefaction curves [50] and rank abundance curves [51] were drawn in R to evaluate the sufficiency of sequencing depth and to assess community richness and evenness. A between-groups Venn diagram was plotted in R to identify unique and common OTUs. Principal co-ordinate analysis (PCoA) and analysis of similarity (ANOSIM) with 999 permutations were performed to assess dissimilarities between samples based on taxonomic Bray-Curtis [52] and phylogenetic weighted and unweighted UniFrac distance matrices [53].

Network Analysis

The interactions between microbial taxa were analyzed through a network structure to assess the complexity of fungal and bacterial communities in the Tianchi crater. Rarefied fungal and bacterial OTU tables were used for network analyses of crater soils. Only the 100 most abundant OTUs were retained for the analysis, to reduce the complexity of the data sets. Spearman's rank correlation coefficients (r) between OTUs with a magnitude ≥0.8 or ≤−0.8 and statistical significance (p < 0.05) were included for network construction. Gephi (version 0.9.2) was used for estimation of topological properties, visualization, and modular analysis of the network [54]. The nodes in the constructed network represent OTUs (different taxa), whereas the edges correspond to significant positive or negative correlations between taxa.
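As a minimal sketch of the core computations named above: rarefying samples to a common depth, the Shannon and Chao1 α-diversity indices, and the Bray-Curtis dissimilarity used for PCoA. This is illustrative Python, not the Mothur/R code actually used in the study:

```python
import math
import random

def rarefy(otu_counts, depth, seed=0):
    """Subsample one sample's OTU counts to a fixed read depth without
    replacement, as done when normalizing all samples to the shallowest one."""
    pool = [otu for otu, n in otu_counts.items() for _ in range(n)]
    if depth > len(pool):
        raise ValueError("depth exceeds sample size")
    sub = random.Random(seed).sample(pool, depth)
    rarefied = {}
    for otu in sub:
        rarefied[otu] = rarefied.get(otu, 0) + 1
    return rarefied

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over OTU proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def chao1(counts):
    """Chao1 richness: S_obs + F1^2 / (2 * F2), where F1 and F2 are the numbers
    of singleton and doubleton OTUs (bias-corrected form when F2 == 0)."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two samples' OTU count vectors."""
    return sum(abs(a - b) for a, b in zip(x, y)) / sum(a + b for a, b in zip(x, y))
```

Identical samples give a Bray-Curtis dissimilarity of 0 and completely disjoint samples give 1, which is why the index is a natural input for the PCoA ordinations described below.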
Microbial Diversity by Illumina Sequencing

From metabarcoding analyses, a total of 4,371,622 raw reads with 772,186 effective sequences were obtained by fungal ITS2 gene sequencing using the Illumina MiSeq PE300 platform. The number of sequences per sample ranged from 43,002 to 70,583. After normalization by resampling 43,002 reads from every sample, 1254 OTUs could be classified into the fungal kingdom (Table S3). From bacterial 16S V3-V4 region sequencing, 3,105,944 raw reads with 507,467 effective sequences were obtained, and 2988 bacterial OTUs were subsequently clustered after the OTU table was rarefied to 36,117 reads per sample (the number of sequences ranged from 36,117 to 49,230) (Table S4). In terms of relative abundance, the class Leotiomycetes (49.13%), followed by Dothideomycetes (12.90%), dominated the Tianchi crater soil fungal community (Figure 4A), while the bacterial community was dominated by Gammaproteobacteria (26.30%), Actinobacteria (26.04%), and Alphaproteobacteria (17.29%) (Figure 4B). At the genus level, the relative abundance of Leucosporidium was the highest (2.30%) among Basidiomycota, while the most abundant ascomycete genera could not be clearly discriminated beyond the class Leotiomycetes (Figure 5A). Genera from other phyla had very low relative abundances (<1%) and were merged as "others" in the bar plot (Figure 5A). Among bacteria, the genus Sphingomonas (5.50%) was the most abundant in the analyzed crater soil (Figure 5B). From a comparison of OTU diversity between northern and western crater soil samples, a total of 566 fungal and 1816 bacterial OTUs, accounting for 45.14% and 60.78% of the total OTUs, respectively, were found to be common to both sides of the crater (Figure 6). Some of the common OTUs exhibited a relatively high abundance among all OTUs.
For instance, fungal OTU1055 (unclassified Leotiomycetes) and OTU1230 (unclassified Helotiales) were the most abundant OTUs detected, showing relative abundances of 11.89% and 11.87%, respectively. Similarly, the common bacterial OTU11 (unclassified Erwiniaceae), with a relative abundance of 5.07%, and OTU541 (unclassified Pseudarthrobacter, 4.62%) dominated the bacterial community. The variation of fungal and bacterial OTUs was analyzed by principal co-ordinate analysis (PCoA). Fungal communities in northern and western crater soil were clearly distinguished based on Bray-Curtis distance (ANOSIM p = 0.011) (Figure 7A), weighted UniFrac distance (ANOSIM p = 0.035), and unweighted UniFrac distance (ANOSIM p = 0.003) (Figure 7B,C). The separation of communities was more significant using unweighted UniFrac distance, which is more sensitive to taxa presence or absence, regardless of abundance, than Bray-Curtis and weighted UniFrac distances. For bacterial communities, on the contrary, no clear separation was found based on the different dissimilarity matrices (Figure 8).

Network Complexity for Bacterial and Fungal Communities

Separate co-occurrence network analyses were performed for the fungal and bacterial communities to explore and evaluate the complex interactions between the microbial taxa detected in the analyzed crater soils. All interactions comprised in the networks were strongly correlated and statistically significant (r ≥ 0.8 or ≤ −0.8 and p < 0.05). In general, there were considerable differences in network topology and structure between fungal and bacterial communities (Figure 9, Table 5). The bacterial network was more compact and complex than the fungal network, as reflected by the average path lengths (2.661 vs. 4.982) and average degrees (10.143 vs. 3.487) (Figure 9, Table 5). Contrasting correlations between OTUs within fungal and bacterial communities were observed.
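The networks summarized here come from thresholding Spearman rank correlations between OTU abundance profiles across samples. A minimal sketch of that construction follows (pure Python for illustration; the paper's additional p < 0.05 significance filter and the Gephi-based topology metrics are omitted):

```python
def ranks(xs):
    """Average ranks (ties share the mean rank), as used by Spearman's rho."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def network_edges(otu_table, threshold=0.8):
    """Edges between OTU pairs with |rho| >= threshold. otu_table maps an OTU
    name to its abundance vector across samples. Significance testing omitted."""
    names = list(otu_table)
    edges = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            r = spearman(otu_table[names[i]], otu_table[names[j]])
            if abs(r) >= threshold:
                edges.append((names[i], names[j], r))
    return edges
```

Positive rho values become the "positive interactions" counted below and negative values the "negative interactions"; node degree is simply the number of edges incident to an OTU.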
Seventy-seven nodes were included in the fungal network, with 136 edges (108 positive and 28 negative interactions), while the bacterial network included 84 nodes and 426 edges (302 positive and 124 negative interactions); the network metrics of the fungal community were thus considerably lower than those of the bacterial community (Figure 9, Table 5). Both networks were dominated by positive interactions (79.41% and 70.89%), indicating that mutualistic effects dominate the microbial communities in the crater. In the fungal community, Leucosporidium sp. (OTU348), Paraphoma fimeti (OTU398), Alatospora sp. (OTU952), unclassified Chaetothyriales (OTU955), and unclassified Eurotiomycetes (OTU987) had the most abundant interactions with other nodes (degree: 9) (Figure 9A). Pedobacter sp. (OTU2266) had the most abundant interactions in the bacterial network (degree = 26) (Figure 9B).

Discussion

In this study, we provided a comprehensive characterization of fungal and bacterial communities along the crater margin of Tianchi Volcano, in Changbai Mountain Nature Reserve, by combining molecular and morphological analyses of culturable microbes with metabarcoding analyses based on Illumina sequencing. A few previous studies have been conducted on microbial diversity at relatively lower altitudes of the Changbai Mountain northern slope, mostly using high-throughput sequencing methods alone [14,33,55]. To the best of our knowledge, this study represents the first microbial community analysis of different sides of the Tianchi Volcano crater margin using integrated culture-dependent and metabarcoding approaches. Many microbial species isolated in this study have been frequently reported from high-altitude or cold environments and have been extensively applied in biotechnological fields, showing particular research significance due to their peculiar ecology.
For instance, Aureobasidium strains isolated from Tianchi northern crater sediments closely matched A. pullulans isolated from saxicolous lichens growing on the north side of Taibai Mountain in China, at an altitude of 2614 m a.s.l. [56], and A. pullulans found on the glacier surface snow of the Tibetan Plateau in China (>99% similarity) by Shao [57]. In addition, A. pullulans has previously been isolated from high Italian glaciers [7], high Arctic glaciers [58], and Antarctic soils [59]. It is well documented that A. pullulans is a psychrophilic fungus able to produce various useful bioproducts (enzymes, pullulan, single-cell proteins, and siderophores) for waste treatment, chemical industry materials, food industries, and biocontrol [59]. Our finding of Candida tropicalis in the western crater of Tianchi Volcano is in agreement with previous studies in which Candida is regarded as a common Antarctic yeast genus [8]. Candida tropicalis has been described as capable of degrading high concentrations of phenol [60] and as tolerant to copper [61]. This species may play an important functional role in the studied crater area of Tianchi Volcano, which is characterized by high heavy-metal concentrations. The ascomycete Epicoccum nigrum, isolated from both sides of the analyzed crater, has been described as a xerotolerant species in a study conducted in the Annapurna Mountains of Nepal [62]. This fungal species has been found to produce antibacterial compounds [3] and has been described as a biocontrol agent [63]. Among the few bacterial isolates belonging to the genera Chryseobacterium and Rhodococcus, the Chryseobacterium strain showed high similarity with C. lactis (accession number MT065804) isolated from the Qinling high-altitude area in China by Men X. in an unpublished study, while Rhodococcus strains closely matched R. degradans (accession number LR216744) isolated from sodic soil in Hungary by Krett G., and R.
qingshengii (accession number MT632489) isolated from an oil well in Russia by Borzenkov et al., in unpublished studies. Species of Rhodococcus have previously been found to be psychrotolerant in a study conducted in Antarctica [64]. This considerable number of microbes isolated from Tianchi Volcano, which have previously shown a preference for high-altitude, cold, and geothermal extreme environments, and have been described as psychrotolerant and metal-tolerant microorganisms, provides new insights into the ecological features of the investigated environment and may represent a valuable source for the isolation of new bioactive compounds [65,66]. The use of molecular and highly sensitive metabarcoding methods allowed the detection and identification of both common and rare microbial taxa, thus providing detailed and accurate information on fungal and bacterial communities in the investigated high-altitude volcanic habitat. The high relative abundances of unclassified genera detected in this study may indicate the presence of a significant number of undiscovered and possibly endemic taxa in the analyzed communities. Recent studies have described the fungal diversity of apparently barren (plant-free) zones of high alpine environments, including the Himalayas (Nepal, 5146 and 5509 m a.s.l.) [67], Colorado (USA, 3660-3800 m a.s.l.) [68,69], and the Llullaillaco (6030 and 6330 m a.s.l.) [8] and Socompa (5235 m a.s.l.) volcanoes [70] in the Andes, along the Chilean-Argentinian border. A major group of basidiomycetes found in these barren high-altitude soils (mostly Colorado) was constituted by members of the Microbotryomycetes, including Leucosporidium antarcticum. Accordingly, our results revealed that Leucosporidium was the most abundant Basidiomycota genus in the high-throughput sequencing data, and two sequences obtained from isolated strains showed 100% similarity with L.
creatinivorum isolated from the soil of King George Island, in the sub-Antarctic region. Leucosporidium creatinivorum is known as a psychrotolerant yeast that has been frequently isolated from cold extreme environments, such as glaciers in Argentina, Russia, Iceland, and the Italian Alps [71,72], and has also been reported from soil, marine sponges, and lichens in Antarctica [73][74][75]. Members of Leucosporidium have been described as sources of cold-active enzymes [73,76,77] and anti-freeze proteins [78], and have been shown to possess the ability to degrade phenol and phenol-related compounds [79]. The high relative abundance of Leucosporidium in the high-altitude volcanic soil-stone mixtures of Tianchi Volcano confirms the peculiar preference of this genus for extreme cold environments. Up to now, to our knowledge, the presence of Leucosporidium in China had only been detected by metagenomic analysis [80,81]. Our results therefore represent the first isolation of L. creatinivorum in China, and these strains deserve further attention as potential producers of metabolites for biotechnological applications. Moreover, in our study, Leucosporidium was also found to have the most complex interactions in the fungal network, which may indicate a fundamental ecological role of this taxon in maintaining the stability of the fungal community in high-elevation volcanic environments. The bacterial community was dominated by Proteobacteria, a result in agreement with previous studies in which this taxon has been described as the most dominant soil bacterial phylotype across continents [82]. Gammaproteobacteria was the most abundant class within Proteobacteria, followed by Alphaproteobacteria, which is consistent with previous findings from Deception Island Volcano in Antarctica [83].
Sphingomonas, the most abundant bacterial genus in Tianchi Volcano, has previously been reported from other high-altitude volcanic areas, including the active volcano Socompa in Argentina [84] and El Chichón volcano in Mexico [85]. Members of this genus play prominent functional roles in the remediation of contaminated environments. For instance, several Sphingomonas species have been extensively associated with the chelation of heavy metals and the degradation/biotransformation of aromatic hydrocarbons [86]. Sphingomonas strains have also been found to produce highly beneficial phytohormones [86]. The dominant presence of Sphingomonas in the high-altitude, metal-enriched, and carbon-poor sediments of the Tianchi crater suggests an important contribution of this bacterial group to the functionality of the studied ecosystem, and confirms the metal tolerance of these microorganisms, which probably possess special nutritional strategies to adapt and survive in barren habitats under extreme conditions. Further studies focusing on the metabolic activity and nutritional mode of Sphingomonas species may clarify the mechanisms allowing extremotolerant microbes to thrive in environments that are unsuitable for the majority of life forms. The clear variation of soil fungal communities between the two sides of the Tianchi crater suggests that geographical and environmental factors may have a stronger influence on the diversity of fungi than of bacteria, which, in contrast, did not show any remarkable community difference between the north and west sampling areas. Different community diversity patterns may result from the mixed effects of multiple factors, such as solar radiation, temperature, and soil properties, that may be more relevant to fungi than to bacteria [87]. However, our analysis of sample pH showed highly homogeneous values, with an average pH of 7.76, which could not provide useful information to explain the observed variation in community diversity.
Further studies are needed to analyze the environmental factors affecting bacterial and fungal community diversity in high-elevation volcanic extreme environments. The results of PCoA were consistent with the co-occurrence network analyses. Compared to the fungal network, the bacterial network was composed of more numerous and closer associations, in terms of numbers of nodes, edges, average degree, and average path length, indicating that bacterial communities possess rapid responsiveness to perturbation [88] and much stronger resistance to environmental changes than fungal communities. However, the relevant environmental factors remain to be studied. The detailed characterization of Tianchi Volcano microbial diversity provided in this study may constitute an important reference for future long-term monitoring aimed at tracing the effects of global warming on this delicate environment. Indeed, previous studies have shown that increasing global average temperatures can drive successions of microbial communities, for instance by changing the quality and quantity of genes potentially available for horizontal gene transfer, and can lead to increasingly divergent succession, with a possibly higher impact on fungi than on bacteria [89,90]. Moreover, the impact of climate change is predicted to be very pronounced at high elevations and in tundra ecosystems [91,92]. The area along the margin of the Tianchi crater represents a typical example of a high-altitude tundra ecosystem in China. It has been reported that the mean annual temperature in Changbai Mountain Nature Reserve increased at a rate of 0.392 °C/10 years from 1958 to 2015, significantly higher than the national warming rate (0.22 °C/10 years) [93].
Because global warming may particularly alter the biogeochemistry and ecology of cold soil ecosystems [94][95][96], the increasing warming registered in the Changbai Mountain region deserves attention, and further studies are needed to verify the response to environmental change of the particularly sensitive and fragile high-altitude tundra microbes unveiled by our study of Tianchi Volcano.

Conclusions

Our results represent an unprecedented comprehensive microbial community analysis along the high-altitude crater margin of Tianchi Volcano, combining culture-dependent and metabarcoding analyses. We observed that Tianchi Volcano hosts a combination of taxonomic groups characteristic of high-altitude, cold, and geothermal environments, with a considerable number of isolated microbes being of particular research significance due to their rarity and peculiar ecology. Our study suggests that the structure and diversity of the fungal community is more sensitive to environmental and geographical changes than the bacterial community in the analyzed area. Our findings may represent an important starting point for future studies to explore the valuable metabolite resources of the isolated microbes and to elucidate the effect of different environmental factors on community structure and dynamics in high-altitude volcanic environments.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/life11040280/s1, Figure S1: Number of fungal genera, species and strains in different phyla obtained from Tianchi crater soil. Figure S2: Number of fungal species and strains from most abundant genera observed in Tianchi crater soil. Table S1: Fungal and bacterial diversity molecularly detected in Tianchi Volcano soil, from DNA extracted from isolated microbes. Table S2: An overview of fungal strains isolated from the crater soil samples.

Author Contributions: X.W. and L.P. conceived the study; samples were collected by X.W.
and L.P.; the experiments were designed and supervised by L.P.; laboratory experiments and analysis were performed by X.W.; results were analyzed by X.W. and L.P.; X.W. prepared the original draft while L.P. critically revised the manuscript. All authors have read and agreed to the published version of the manuscript.
Learning Curve and Septorhinoplasty

The learning curve as a concept has been considered and discussed in medical education and surgical practice. Rollin Daniel stated that rhinoplasty is the most difficult of all cosmetic operations for three reasons: (a) nasal anatomy is highly variable, (b) the procedure must correct form and function, and (c) patients' expectations. With this in mind, a study on the learning curve in septorhinoplasty was planned, based on a surgeon questionnaire. The aims of the study were to extract the idea of the learning curve in septorhinoplasty from surgeons of different levels of experience, to calculate certain parameters of the learning curve in rhinoplasty, and to prepare a roadmap for an early-career rhinoplasty surgeon. The conclusion derived from the study was that the concept of the learning curve in rhinoplasty should not be generalised, as certain factors, for example the minimum number of procedures to achieve proficiency, have a wide range. Each type of rhinoplasty should be dealt with separately and its learning curve calculated accordingly. A roadmap for a novice surgeon is hereby charted out.

Introduction

The concept of the learning curve was first described in 1885 by Hermann Ebbinghaus (Figure 1). It is a graphical representation in which the vertical axis depicts increase in learning and the horizontal axis depicts experience. It is also referred to as the experience curve or productivity curve. In medical terms it can be described as improvement in one's technical performance over time secondary to increased experience and training [1][2][3][4][5]. The learning curve in a surgical procedure (Figure 2) has four phases: Phase I is the commencement of training: residency (post-graduation, senior residency). Phase II is a stepwise ascent in which the individual's performance improves; this may take a lot of time (fellowship programme, working at a high-volume centre and working under a senior mentor).
Phase III is when a procedure is performed independently and with competence (medical university and private practice). This is followed by a plateau, where experience improves progress by only smaller fractions, and then by a downward-sloping curve due to advancing age. Therefore, a study was formulated which would help to plan a learning curve in the speciality of septorhinoplasty. The aims were:

1. To extract the idea of the learning curve in septorhinoplasty from surgeons of varying experience.

2. To calculate the learning curve in rhinoplasty as for other surgical procedures, that is, the minimum number of procedures to attain proficiency, surgical time, and accelerators of the learning curve.

3. To suggest a roadmap for the early-career rhinoplasty surgeon.

Methodology

A clinical questionnaire was prepared. The study was executed at a workshop attended by rhinoplasty surgeons from around the globe. Participation was completely voluntary, and no personal details pertaining to name and place of practice were obtained. Seventy questionnaires were distributed; a total of 30 completely filled questionnaires were obtained and analysed further.

Observations and Results

A total of 30 surgeons participated, including resident surgeons, assistant surgeons, specialist surgeons and private practitioners (plastic/ENT/maxillofacial). Exclusively open rhinoplasty was practised by 20 surgeons, and both open and closed approaches by 10 surgeons. The participants were distributed into four groups according to level of experience (Table 1). The lowest number of procedures was 0 (resident doctor) and the highest was 3000. The number of procedures required to achieve proficiency in open rhinoplasty was studied. Surgeons with less experience thought about 86 procedures are needed to achieve proficiency (i.e. the operation is difficult). The mid-level surgeons placed the figure in the range of 48-65 procedures. The masters with more experience inarguably stated that a minimum of 100 procedures is required to achieve proficiency.
This once again implied the intricacies and complexities of the surgery. The change in operation time as surgical experience increases was also studied: all four groups thought that as experience grows, one can perform the procedure about 60 min faster. Learning curve accelerators are methods by which one can accelerate the learning curve; participants were asked to rate the importance of each on a scale of 0-10 (10 being most important) (Table 3).

Conclusion

The minimum number of procedures required to achieve proficiency ranged from 20 to 100 (mean 76.66) for open rhinoplasty and from 40 to 200 (mean 106) for closed rhinoplasty. It was uniformly opined that as experience grows the surgical time of the procedure reduces by about an hour. The most important accelerators of learning were observership under an expert, a well-structured fellowship, and the number of procedures one performs.

Discussion

The surgery of septorhinoplasty is difficult to master even with adequate knowledge, because it relies on understanding the patient's various expectations and on delivering consistent results. The surgical techniques used by different surgeons are unique and sometimes not reproducible. It being a highly individualised surgery, no single technique works all the time. A good photographic analysis helps one prepare a surgical plan, be it structural rhinoplasty or surface rhinoplasty. The surgical procedure addresses various issues: dorsal hump, dorsal deviation, tip, radix and other deformities. The surgical time remains static over one's career, or even increases a little with experience owing to additional graft work. Factors like the temperament and personality of the surgeon might also affect the results. According to W. Gubisch, one can become a good rhinoplasty surgeon only if one can address the nasal septum effectively. In his book on Advanced Caucasian and Mediterranean Rhinoplasty, P.J.F.M.
Lohius stated that the learning curve depends on genes, exercise and also on luck [6]. Senior surgeon Rollin Daniel has described a few accelerators, namely: detailed preoperative analysis of photos in various views, a well-written surgical workflow, use of good-quality instruments, intraoperative photography and self-explanatory diagrams, analysis of postoperative photos, revision surgeries of one's own patients, reading the subject, attending meetings for paper presentations, and publishing articles [7]. The formulated path in septorhinoplasty:

A. To learn good photography techniques in standardised angles using a good lens (macro/telephoto) in a standard lighting environment.

B. To perform self-analysis of patients' preoperative photographs and devise a plan of surgery.

F. To independently perform functional septoplasty or submucous resection of the septum, a minimum of 100 procedures, to better understand septal anatomy, and to perform preoperative nasal endoscopy to understand intranasal anatomy, mainly the inferior turbinates and posterior septum.

G. To watch live operations by experts and surgical videos by stalwarts.

H. To perform cadaver dissections.

I. To use plastic or cardboard cutouts in the form of different grafts for practice during cadaveric dissections.

J. To classify cases according to difficulty, that is, easy, intermediate and difficult rhinoplasty.

K. In difficult rhinoplasty cases, not to hesitate to involve another experienced colleague in the surgery. Cases to be avoided by a rhinoplasty surgeon with experience of fewer than 10 cases are: extracorporeal septoplasty, tip plasty, cleft nose, extreme bony deviations, saddle nose, revision cases, pure aesthetic cases, ethnic nose, multiple deformities and unrealistic expectations. Cases to be avoided by a surgeon with experience of more than 10 but fewer than 100 rhinoplasties are: saddle nose, cleft nose, ethnic nose, revision cases and secondary skinny nose.

L.
To follow the surgical techniques of one mentor repeatedly over 100 operations, improving with each operation, to incorporate inputs from other surgeons, and to devise one's own plan for each subtype of rhinoplasty. In my honest opinion, one should assist a rhinoplasty surgeon for a minimum of 25 cases, along with cadaveric dissections and attending courses at established centres around the globe, like the ones at Stuttgart, Bergamo, Milan, Chicago, London, Singapore and the European and American congresses. This should be followed by doing simple cases in the presence of your mentor or an experienced surgeon (Figures 3 and 4). A long-term follow-up of cases should be maintained, and photographic analysis should be compared to the pre-op photos at 6 months and at yearly intervals. This should be discussed with a senior colleague who will critique and point out your mistakes. This shall help you to undertake surgeries of intermediate difficulty, which refines your surgery further. Never hesitate to ask for help from seniors, as they themselves have been through similar phases and can guide you better rather than leaving you lost (Figures 5-8). With this approach one can attempt difficult cases. A happy surgeon-happy patient combination is the one you should strive for (Figures 9-12).
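The chapter describes the learning curve qualitatively. As an illustration only (this model and its parameters are assumptions, not findings of the survey), a Wright-style power law is one common way to express how operative time falls with cumulative case number:

```python
import math

def operative_time(n, t_first, learning_rate):
    """Wright's power-law learning curve: each doubling of cumulative case
    count multiplies operative time by `learning_rate` (e.g. 0.95 means 5%
    faster per doubling). Illustrative model only; the survey reports an
    overall improvement of roughly 60 min with experience."""
    b = math.log(learning_rate, 2)  # negative exponent for rates < 1
    return t_first * n ** b
```

For example, with a hypothetical first-case time of 240 min and a 95% learning rate, the modeled time falls by roughly 60 min (to about 180 min) by the 50th case, broadly in the spirit of the time savings reported by all four groups.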
Potential changes in the distribution of Carnegiea gigantea under future scenarios

Over the last decades several studies have identified that the directional changes in climate induced by anthropogenic emissions of greenhouse gases are affecting the ecology of desert ecosystems. In the Southwest United States, the impacts of climate change on plant abundance and distribution have already been reported, including in the Sonoran Desert ecosystem, home of the iconic saguaro (Carnegiea gigantea). Hence, there is an urgent need to assess the potential impacts of climate change on the saguaro. The goals of this study are to (1) provide a map of actual habitat suitability, (2) describe the relationships between abiotic predictors and the saguaro distribution at regional extents, and (3) describe the potential effect of climate change on the spatial distribution of the saguaro. Species distribution modeling (SDM) was used to investigate the relationships between abiotic variables and the saguaro distribution. SDMs were calibrated using presence records, 2,000 randomly generated pseudo-absences, and ten abiotic variables. Of these, annual precipitation and maximum temperature of the warmest month were found to have the greatest relative influence on saguaro distribution. SDMs indicated that 6.9% and 8.1% of the current suitable habitat is predicted to be lost by 2050 and 2070, respectively. Therefore, predicted changes in climate may result in a substantial contraction of the suitable habitat for the saguaro over the next century. By identifying the drivers of saguaro distribution and assessing potential changes in habitat suitability due to climate change, this study will help practitioners design more comprehensive strategies to conserve the saguaro in the face of climate change.

INTRODUCTION

Predicting climate change effects on biodiversity is one of the most important challenges that researchers face (Parmesan, 2006).
Over the last decades, several studies have identified that the changes induced by anthropogenic emissions of greenhouse gases are affecting the ecology of desert ecosystems (Parmesan & Yohe, 2003; Kimball et al., 2010). Climate change researchers predict that changes to desert ecosystems will alter nutrient cycles, fire regimes, and the genetic diversity of populations, with implications for evolutionary change, and will cause species range shifts (Loik et al., 2004; Huxman et al., 2004; Bickford et al., 2010; SDNIMP, 2010). In the Southwest United States, the impacts of climate change on plant abundance and distributions have already been reported, including in the Sonoran Desert (Munson et al., 2012). The saguaro (Carnegiea gigantea) represents one of the most noticeable patterns of plant distribution in the Sonoran Desert (Hutto, McAuliffe & Hogan, 1986). The saguaro is a large columnar cactus that grows to a height of 12 m or more. The main stem, which can reach up to 75 cm in diameter, has 12 to 25 vertical ribs (Turner, Bowers & Burgess, 1995; Anderson, 2006). This species inhabits rocky and outwash slopes and grows on sandy flats on or near alluvium (Turner, Bowers & Burgess, 1995). The saguaro is very important to the people of the Tohono O'odham Nation, who rely on this species for food (Beckwith, 2015). The spatial distribution of the saguaro extends through the Sonoran Desert in Arizona, California, and Mexico, but most of its population occurs in Sonora, Mexico (USFR, 2016). Saguaro disease (e.g., bacterial necrosis), air pollution, cattle grazing, wood-cutting, land use changes, urbanization, freezing and drought are significant threats to the saguaro (NPS, 2010; Burquez-Montijo et al., 2013). However, despite its importance as one of the signature species of the Sonoran Desert, the giant saguaro has been largely ignored by biogeographers.
There is limited evidence that the spatial distribution of the saguaro is driven mainly by climate in the northernmost part of its range (Hutto, McAuliffe & Hogan, 1986; Turner, Bowers & Burgess, 1995; Arundel, 2005). However, the limiting factors affecting the growth of the saguaro in the eastern Sonora State of Mexico have not yet been identified (Turner, Bowers & Burgess, 1995). Hence, there is an urgent need to review our current understanding of the effects of climate on the saguaro distribution. Species Distribution Models (SDMs) are correlative models built from the relationships between environmental variables and incomplete presence records (Guisan & Zimmermann, 2000). They have been used to provide insight into detailed ecological relationships between abiotic predictors and species distributions, and to predict species' distributions across space and time (e.g., Guisan & Zimmermann, 2000; Araújo & Rahbek, 2006; Elith et al., 2006). The outcome of an SDM is a habitat suitability map, useful for assessing species invasion and proliferation, designing ecogeographic regions, modeling species richness and composition, and supporting conservation planning and spatial prioritization, among other applications (Ferrier et al., 2002; Franklin, 2010; Benito, Cayuela & Albuquerque, 2013; Guisan et al., 2013). When forecasting the effect of climate change on species' geographical ranges, it is important to consider multiple climate change scenarios (Sala et al., 2000; Parmesan, 2006; Araújo & Rahbek, 2006; Beaumont et al., 2007; Beaumont, Hughes & Pitman, 2008; Bellard et al., 2012), here based on the four Representative Concentration Pathways (RCPs; IPCC, 2013). RCPs describe scenarios based on assumptions about socio-economic development and greenhouse gas and air pollutant emissions, providing trajectories for the major agents of climate change (Van Vuuren et al., 2011).
In this paper we apply SDMs to investigate how habitat suitability for the saguaro may respond to a range of climate change scenarios. Our results provide guidance on the potential impacts of climate change on the saguaro's geographical range, while increasing our understanding of the impacts of climate change on the ecology of the Sonoran Desert.

MATERIALS & METHODS

The study area comprised the Sonoran Desert (Fig. 1), which extends from the Southwestern United States into Northern Mexico, including the states of Arizona, California, Baja California, and Sonora. It is rich in both habitat and biodiversity, and encompasses biotic communities representing all of the world's biomes, such as tundra, forest, grassland, chaparral, desert and riparian communities (Arizona-Sonora Desert Museum, 2018). Sonoran Desert lifeforms include more than 350 bird species, 100 reptile species, and more than 2,000 plant species, including the iconic columnar cactus, the saguaro (NPS, 2017).

Data preparation

We obtained 824 records of the saguaro distribution from GBIF (Global Biodiversity Information Facility; URL: http://www.gbif.org), the SEINet Portal Network (http://swbiodiversity.org/seinet/index.php) and the TROPICOS database (Missouri Botanical Garden; URL: http://www.tropicos.com). To prepare a reliable presence dataset, we cleaned the data by (1) removing records with wrong latitudes and longitudes (e.g., records located in the Pacific Ocean); (2) deleting duplicated records; and (3) reducing spatial aggregation by imposing a minimum distance among nearby presence records (Benito, Cayuela & Albuquerque, 2013). We generated a set of 2,000 random points not overlapping the presence data to be used as pseudo-absences to fit the SDMs.
Because data are often collected at easily accessed areas, and bias in the selection of sampling sites can affect model quality (Phillips et al., 2009), we used four target distances (1 km, 4 km, 7 km and 10 km) to estimate the optimal minimum distance between consecutive presence and background records. To do so, we first created a regular grid of cells for record sampling, with cell sizes equal to the target distances, and randomly selected one record per grid cell. We used a stratified random split to divide presence and pseudo-absence records into a training dataset (30%) and a testing dataset (70%). We included two classes of variables in the models: (1) topographic variables derived from WorldGrids (2018), such as elevation, slope, topographic wetness index, topographic openness index, and potential incoming radiation (mean and range); and (2) climate variables for the present time and future climatic projections from WorldClim (http://worldclim.org/; Hijmans et al., 2005), including annual and seasonal means, extremes, and ranges of temperature and precipitation. A list of all variables used is available in Table S1. We used future climatic projections produced from two global climate models (GCMs), CCSM4 and HadGEM2-ES. The Community Climate System Model (CCSM4) is a coupled global climate model simulating Earth's atmosphere, ice, land, and ocean from the past into the future (Gent et al., 2011). The Hadley Global Environment Model (HadGEM2-ES) is an earth systems model incorporating terrestrial, oceanic, and atmospheric conditions (Buisson et al., 2010; Naujokaitis-Lewis et al., 2013).
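The grid-based thinning described above — one randomly chosen record per grid cell, with cell size set by the target distance — can be sketched in a few lines. This is an illustrative Python re-implementation, not the authors' R workflow, and the function name is ours:

```python
import random

def grid_thin(points, cell_size, seed=0):
    """Reduce spatial aggregation: keep one randomly chosen record per grid cell.

    points: iterable of (x, y) coordinates in the same units as cell_size.
    """
    rng = random.Random(seed)
    cells = {}
    for p in points:
        key = (int(p[0] // cell_size), int(p[1] // cell_size))
        cells.setdefault(key, []).append(p)
    # one random record per occupied cell
    return [rng.choice(members) for members in cells.values()]
```

Larger cell sizes (target distances) collapse more nearby records into one cell, so the thinned sample shrinks as the target distance grows — the trade-off the authors evaluate via model performance.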
The data target two time periods, 2050 (average for 2041-2060) and 2070 (average for 2061-2080), at approximately 1 × 1 km resolution, and were generated following each of the four Representative Concentration Pathways (RCPs) described in the Intergovernmental Panel on Climate Change's Fifth Assessment Report (IPCC, 2013; Bosso et al., 2016). Each RCP (2.6, 4.5, 6.0 and 8.5) assumes a different set of socioeconomic, technological, and political scenarios, representing optimistic to pessimistic greenhouse gas concentration trajectories.

Species distribution modeling

The calibration of SDMs requires the following steps: obtaining relevant presence records; selecting relevant predictors; selecting an appropriate numerical model; fitting and evaluating the model from training and test data; and mapping predictions onto geographical space (Elith & Leathwick, 2009).

Variable selection

Following Benito, Cayuela & Albuquerque (2013), we computed the correlation matrix among predictors and used a hierarchical cluster analysis (the hclust R function) to group predictors according to their mutual correlation, setting the maximum within-group correlation at a Pearson's index of 0.5. We identified nine strongly-correlated groups: one related to potential radiation, and eight groups associated with measures of precipitation, temperature, and elevation (Fig. S1). We generated biserial correlation models (Kraemer, 2006), a special case of Pearson correlation in which one variable is quantitative and the other binomial, to investigate relationships between environmental predictors and the saguaro distribution. For each group identified by hclust, we selected the predictor that best correlated with the saguaro distribution. Finally, we used variance inflation factor (VIF) analysis to minimize collinearity among predictors. We considered VIF values above five as evidence of collinearity (Heiberger & Holland, 2004).
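The VIF screen reduces to regressing each predictor on the remaining ones: VIF_i = 1 / (1 − R²_i). The Python helper below (ours, assuming NumPy, in place of the authors' R code) flags VIF > 5 as collinear:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of predictor matrix X.

    VIF_i = 1 / (1 - R^2_i), where R^2_i comes from regressing column i
    on the remaining columns (with an intercept). VIF > 5 flags collinearity.
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for i in range(k):
        y = X[:, i]
        others = np.delete(X, i, axis=1)
        A = np.column_stack([np.ones(n), others])   # intercept + other predictors
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - float(resid @ resid) / float(((y - y.mean()) ** 2).sum())
        out.append(1.0 / (1.0 - r2) if r2 < 1.0 else float("inf"))
    return out
```

Uncorrelated predictors give VIF ≈ 1, while a predictor that is a near-linear combination of the others gives a VIF well above the cutoff of 5 used in the study.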
The selected variables were: annual mean temperature, max temperature of the warmest month, mean temperature of the wettest quarter, annual precipitation, precipitation seasonality, topographic wetness index, topographic openness index, and potential incoming radiation.

Model fitting

We used the training data and the selected environmental variables to fit boosted regression tree models (BRT; Elith, Leathwick & Hastie, 2008), an ensemble algorithm that combines the strengths of two techniques: decision trees and boosting. The former is known for its ability to (1) handle several types of response variables (e.g., numeric, categorical, multivariate), (2) handle complex interactions, and (3) deal with missing values with minimal loss of information (De'ath, 2007). Boosting is an optimization technique for minimizing a loss function (in this case, deviance). The general idea is to generate a sequence of trees, where at each successive step a tree is built using the residuals of the previous iterations as input (De'ath, 2007; Elith, Leathwick & Hastie, 2008), until the residuals stop decreasing. The resulting BRT model is the combination of all the fitted trees, and the prediction is computed as the sum of the outputs of the individual trees (Elith, Leathwick & Hastie, 2008). BRT models were calibrated with the function gbm.step of the R package dismo (Hijmans et al., 2017; R Core Team, 2017). BRT requires the specification of five main parameters: bag fraction (bf), learning rate (lr), tree complexity (tc), step size (ss), and number of trees (nt). Bag fraction is the percentage of the data randomly selected to build the next tree. Learning rate sets the weight applied to individual trees; smaller lr values increase the number of trees required. Tree complexity represents the number of nodes in a tree.
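The boosting idea — each new tree fit to the residuals of the ensemble so far, with the learning rate shrinking every tree's contribution — can be illustrated with a toy Python version using one-split stumps. This is only a sketch of the mechanism: the study itself used dismo::gbm.step in R, with stochasticity via the bag fraction and deviance (not squared error) as the loss:

```python
import numpy as np

def fit_stump(x, r):
    """Best single-split regression stump for residuals r (squared-error loss)."""
    best = None
    for s in np.unique(x)[:-1]:
        left, right = r[x <= s], r[x > s]
        pl, pr = left.mean(), right.mean()
        err = ((left - pl) ** 2).sum() + ((right - pr) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, s, pl, pr)
    _, s, pl, pr = best
    return lambda q: np.where(q <= s, pl, pr)

def boost(x, y, n_trees=50, lr=0.1):
    """Sequence of stumps: each fits the current residuals, and lr shrinks each
    tree's contribution -- hence the lr/nt trade-off noted in the text."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    f0 = y.mean()
    pred = np.full(len(y), f0)
    trees = []
    for _ in range(n_trees):
        tree = fit_stump(x, y - pred)   # fit the residuals of the ensemble so far
        pred = pred + lr * tree(x)      # add the shrunken contribution
        trees.append(tree)
    return lambda q: f0 + lr * sum(t(np.asarray(q, float)) for t in trees)
```

With a small lr, each tree corrects only a fraction of the remaining residual, so many trees are needed — exactly why the authors' choice of lr = 0.005 demanded a large number of trees.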
Model evaluation

For each model, we used a 10-fold cross-validation procedure, splitting the training data into ten random subsets, to estimate the area under the receiver operating characteristic curve (AUC; Fielding & Bell, 1997). We also considered the deviance explained by the model as reported by the gbm.step function output. We compared model performance across target distances and BRT parameters, and selected the model with the highest AUC and deviance explained to determine the optimal bf, lr, tc and ss parameters. We analyzed the relative influence of each variable provided by the gbm.step function for the best BRT model, and used the function gbm.plot to produce partial dependence plots (Hastie, Tibshirani & Friedman, 2001) showing the relationships between predictor variables and the distribution of the saguaro.

Model prediction

The best BRT model was used to forecast habitat suitability in the present time, and over every combination of time period, GCM, and RCP, producing eight future presence range maps. We used maximization of the sum of sensitivity and specificity to transform the habitat suitability estimated by the best model into a binary prediction (Liu et al., 2005; Lawson et al., 2014). We followed Hatten et al. (2016) to identify potential range expansion, contraction, or consistency under the four RCPs. For each RCP, we summed the binary GCM maps, resulting in a value of 0 where both SDMs predicted absence and a value of 2 where both predicted presence. We computed potential changes in saguaro habitat suitability between the present and the future by subtracting the composite range map for each period from the present habitat suitability map. We also computed maps of the differences in temperature and rainfall between today and 2070, and calculated the match between the expansion and contraction areas and the maps of differences in temperature and rainfall.
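The sensitivity-plus-specificity thresholding rule (equivalent to maximizing Youden's J) reduces to a small search over candidate cutoffs. The Python sketch below is illustrative, not the authors' code:

```python
def best_threshold(scores, labels):
    """Binarize habitat-suitability scores at the threshold that maximizes
    sensitivity + specificity (i.e., maximizes Youden's J statistic)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        sens = sum(s >= t for s in pos) / len(pos)   # true positive rate
        spec = sum(s < t for s in neg) / len(neg)    # true negative rate
        if sens + spec > best_j:
            best_j, best_t = sens + spec, t
    return best_t
```

Cells with suitability at or above the returned threshold become presence (1) and the rest absence (0); summing the two GCMs' binary maps then yields the 0/1/2 agreement maps described above.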
RESULTS

From the 360 models produced with different combinations of target distance and parameters, models with a target distance of 1 km had the best performance. Among them, three models had the same tc (4), bf (0.5), and AUC (0.87). We selected the model with the smallest learning rate (lr = 0.005). According to this model, annual precipitation and max temperature of the warmest month were found to have the greatest relative influence on the saguaro's habitat suitability (24.7% and 22.8%, respectively; Fig. 2). Also, the mean annual temperature showed a significant contribution (15.7%).

[Figure 2: Variable importance measures as produced by boosted regression trees, for annual precipitation, max temperature of the warmest month, annual mean temperature, mean temperature of the wettest quarter, precipitation seasonality, topographic openness index, topographic wetness index, and potential incoming radiation (mean). Full-size DOI: 10.7717/peerj.5623/fig-2]

According to the partial dependence plots obtained from the best BRT model, the relationships between the saguaro's habitat suitability and the environmental predictors are non-linear. The partial contributions of the individual predictors to the model fit (Fig. 3) indicate a preference of the species for warmer areas with high summer precipitation and open landscapes. For max temperature of the warmest month, the logit of the probability of presence displayed a constant response up to about 35 °C and then showed a steep increase (Fig. 3). The eastern and southern areas of Arizona, USA, and Sonora, Mexico, showed the largest concentrations of areas with high suitability, with a secondary concentration in central Sonora State (Fig. 4). Cells with high habitat suitability were also concentrated in the northernmost part of the Sonoran Desert (Fig. 4).
Model forecasts under the different RCPs indicated significant habitat suitability reductions across the species' presence range and few opportunities for range expansion (Fig. 5). Under all RCPs, the models predicted a strong contraction of suitable habitat. By 2050, the RCPs predict a loss of 6.9%, on average, of the currently suitable habitat, with values ranging from 5.6% (RCP 2.6) to 8.6% (RCP 4.5). Much of the contiguous loss of suitable habitat is on the western edge of the saguaro's range in Arizona, with a sizeable loss in a central patch of Arizona. This pattern continues into Mexico, with contractions on the western edge of Sonora, jutting into the mainland, and receding north from Sinaloa. The models projected few habitat suitability increases, most notably within the Sonora State range. By 2070, this pattern of habitat suitability loss is projected to continue and worsen, with an additional 1.2% contraction from 2050. The greatest expected change appears to be the enlargement of inland patches of unsuitable habitat from Arizona to Mexico. Our models suggest a moderate growth in habitat suitability northward of the saguaro's current range. We also observed an expansion in the center of Sonora State. The models also suggest that temperature has a low impact on defining expansion and contraction areas for RCPs 2.6 and 4.5, where increased rainfall is the main factor explaining the increase in habitat suitability (Fig. 6). For RCPs 6.0 and 8.5, expansion areas seem to be mostly explained by an increase in both temperature and rainfall (Fig. 6).

DISCUSSION

For the first time, we used BRT models to describe the relationships between environmental variables and the habitat suitability of the iconic saguaro across the Sonoran Desert, and predicted habitat suitability change under climate warming for four different RCPs. In general, the predictive performance of boosted regression trees depends on the model parameters, such as lr and tc.
BRT models required a large number of trees (nt) to achieve a minimum predictive error. After testing different combinations of parameters, three models emerged as the best candidates: they shared the same tc (4), bf (0.50) and AUC (0.87). We selected the model with the smaller lr (0.005), because small lr values result in slower learning and require a higher number of trees to improve the predictive error (De'ath, 2007). Also, small lr values shrink the contribution of each tree and reliably estimate the response (Elith, Leathwick & Hastie, 2008). BRT models were also expected to be affected by target distance, as was indeed the case: models with smaller target distances, and therefore larger sample sizes, produced higher AUC values than models with larger target distances, which also produced lower explained deviance values. We found that the habitat suitability of the saguaro is strongly related to climate variables, which agrees with previous studies performed at local extents (Parker, 1993; Drezner & Balling, 2002). According to BRT's variable importance measures, annual precipitation had the strongest influence on the habitat suitability of the saguaro. Overall, the probability of occurrence increased with annual precipitation, rising steeply and uniformly up to 300 mm, followed by a steep decrease and a stationary phase. Several other studies have identified precipitation as a key factor in the demography of saguaros. Turner, Bowers & Burgess (1995) observed that the saguaro grows in areas of the Sonoran Desert where summer rainfall is substantial. Drezner (2006) reported that the reproductive success of the saguaro is closely related to global and regional-scale variations and increases in rainfall. Precipitation is also related to patterns in saguaro establishment and survival.
The lack of water has been identified as a major factor affecting cacti mortality, probably because water limitation can reduce the survivorship of young and juvenile individuals (Pierson & Turner, 1998). Furthermore, desert ecosystems of the western United States and Northern Mexico are particularly susceptible to climate variability, and specifically to drought (Archer & Predick, 2008). According to our model, the max temperature of the warmest month had a strong influence on habitat suitability for the saguaro. Overall, habitat suitability dramatically increased when the maximum temperature of the warmest month went beyond 36 °C. This relationship may occur because the saguaro is well adapted to the harsh temperature conditions of the Sonoran Desert (Franco & Nobel, 1989). Temperature has been identified as one of the most important factors for the regeneration and population viability of the saguaro (Turner, Bowers & Burgess, 1995; Drezner, 2006), since it plays a key role in driving the establishment and survival of young saguaros, maintaining its distribution over time (Turner et al., 1966; Nobel, 1982). On the other hand, saguaros are sensitive to extended periods of subfreezing temperatures (Nobel, 1982), and catastrophic freeze events have been reported to increase saguaro mortality (Orum, Ferguson & Mihail, 2016). Our models show potential impacts of climate change on the saguaro's habitat suitability in the Sonoran Desert, a result consistent with previous analyses of climate change in desert ecosystems (SDNIMP, 2010; Munson et al., 2012). All models projected onto the different RCPs predict a reduction in habitat suitability for the saguaro. Specifically, the results indicate that the eastern and central parts of Mexico, and especially Sonora State, are more sensitive to changes and face large decreases in habitat suitability. The impacts of climate change on the distribution of the saguaro have recently been reported at Saguaro National Park, Arizona.
Climate change seems to affect the saguaro directly, through increased drought and the occurrence of extended, extreme freezing events, and indirectly, because warmer winter temperatures may enhance the spread of exotic species such as buffelgrass (Cenchrus ciliaris; Swann et al., 2018). Also, drought directly promotes the decline of saguaro density and growth, and reduces perennial shrub and tree cover (nurse plants), which helps to protect saguaros from extreme temperatures (Archer & Predick, 2008). We add to previous studies by showing, for the first time, the potential changes in the habitat suitability of C. gigantea under future climate change scenarios for the whole Sonoran Desert area. Although much work remains to be done to evaluate the effect of climate change on the distribution of the saguaro across the Sonoran Desert, our findings provide a strong reason to engage in that work. Because the saguaro distribution is so poorly documented, conservation planners need reliable assessments to monitor the reduction in suitability across the Sonoran Desert.

CONCLUSION

In this study, we used boosted regression trees to investigate the effects of climate change on saguaro habitat suitability, and to explore the complex relationship between environmental factors and the saguaro distribution in the Sonoran Desert ecosystem at a regional extent. Based on our results, we reached three conclusions: (1) the performance of the BRT algorithm varied with the selection of BRT parameters. Overall, BRT models performed well, which reinforces their use for typical ecological analyses (Elith, Leathwick & Hastie, 2008). As indicated by the cross-validation analysis, BRT is a useful algorithm for analyzing and predicting ecological data (De'ath, 2007). (2) BRT models identified precipitation and temperature as the main drivers of habitat suitability for the saguaro in the Sonoran Desert.
(3) Although previous studies have reported impacts of climate change on the saguaro, this study is the first attempt to identify potential impacts of climate change on the saguaro's habitat suitability across its whole range. Previous studies on the possible effects of climate change on the saguaro distribution have mostly focused on local scales (Turner et al., 1966; Pierson & Turner, 1998; Archer & Predick, 2008; Swann et al., 2018), while this study focuses on a regional scale. Regardless of the RCP used, the models predict a decrease in the saguaro's habitat suitability across the study area. Our results also allow us to conclude that, under warming conditions, an increase in precipitation is required to maintain high habitat suitability for saguaros. Because saguaros are much more resistant to extended drought than many other species, we suggest that elucidating the patterns and drivers of species distribution change under climate warming can provide key ecological knowledge necessary to conserve species in the Sonoran Desert.

ADDITIONAL INFORMATION AND DECLARATIONS

Funding

The authors received no funding for this work.
Do older adults respond to cognitive behavioral therapy as well as younger adults? An analysis of a large, multi-diagnostic, real-world sample

Older adults (OA; ≥55 years of age) are underrepresented among patients receiving cognitive-behavioral therapy (CBT). This study evaluates mental health outcomes for OA compared to younger adults (YA; <55 years of age) receiving CBT.

| INTRODUCTION

Cognitive-behavioral therapy (CBT) is a proven treatment modality for anxiety and mood disorders in adults. [1][2][3] Previous studies have reported that CBT is effective in older adults (OA). Despite this, there is widespread ageism and a perception that OA will not benefit from CBT as much as younger adults (YA), and that OA are not as willing or able to discuss their mental health issues. [4][5][6] Thus, OA are less likely to be referred for therapy by their physicians, and more likely to be prescribed antidepressants. 7 Several studies have shown that older adults benefit from CBT, especially for depression and anxiety disorders (mainly generalized anxiety disorder). 4,8,9 Unfortunately, OA represent less than 10% of all CBT referrals. 5 There is still a large gap in the outcome literature on older adults, as most previous studies involve younger adults, and studies with community-dwelling older adults have small sample sizes or are often limited to specific diagnoses. [10][11][12][13][14][15][16][17] While these studies suggest that older adults do as well as younger adults, there is a need for larger studies exploring real-world outcomes of CBT treatment in community-dwelling older adults. With that aim, we analyzed data collected over 20 years from a CBT service located in a university-affiliated tertiary care hospital. We compared the effectiveness of CBT for older adults and younger adults. We hypothesized that "real world" older patients would benefit less from CBT than younger patients, as a way to understand the disparity in referral rates.
| METHODS

Data analyzed in this study were collected between July 2001 and July 2021, as part of an ongoing prospective observational study of consecutive referrals for short-term CBT in the McGill University Health Centre (MUHC) CBT Unit. The MUHC CBT Unit is a specialized teaching unit located in a tertiary care hospital. Patients are referred by physicians both within the hospital and from the general community. The research ethics board of the McGill University Health Centre approved the study and reviewed the application annually.

| Participants

The sample for the present study consisted of 1500 patients referred for CBT for any diagnosis during the study period. The sample was then divided into a YA group (less than 55 years of age) and an OA group (55 years of age and older).

| Procedure

All patients underwent an initial telephone triage where preliminary diagnoses were recorded. They were then asked to fill out a set of validated symptom measures (BDI-II, BAI, SCL-90-R, SCID-II self-report), [18][19][20][21] as well as diagnostic questionnaires specific to their preliminary diagnoses. For example, patients referred for obsessive-compulsive disorder were asked to fill out the Yale-Brown Obsessive-Compulsive Scale 22 in addition to the above measures. A subsequent 2-h clinical interview with a psychiatrist ascertained the precise clinical diagnoses using the prevailing Diagnostic and Statistical Manual of Mental Disorders (i.e., DSM-IV, DSM-IV-TR, or DSM-5). [23][24][25] All primary, secondary, and exploratory data were collected at baseline and after CBT treatment. Inclusion criteria were: (1) being over 18 years of age, (2) willingness to engage in CBT, and (3) having a diagnosis for which there was evidence of CBT effectiveness. Exclusion criteria included requiring emergency or alternate psychiatric services. The CBT administered in this study was problem-focused and employed standard, evidence-based techniques.
Therapy was not manualized but was guided by an individualized case conceptualization approach. 26 Patients were told therapy was short term, usually between 12 and 20 sessions, but all therapy endpoints were decided collaboratively. Trainee CBT therapists came from all mental health disciplines and included psychiatry residents, psychology interns, psychiatry fellows, and allied mental health professionals who were already experienced therapists obtaining additional training in the Center. All trainees received close individual supervision, and all therapy sessions were video-recorded for supervision and treatment integrity purposes. Selected videotapes were evaluated in their entirety to ensure competent CBT delivery, using the Cognitive Therapy Rating Scale 27 or the Cognitive Therapy Scale Revised. 28 Complete protocol details have been previously published. 29

| Primary outcome measure

The primary outcome measure of this study was the Reliable Change Index (RCI). The RCI is a pre-to-post treatment change score, corrected for the reliability of a given diagnosis-specific measure. This provides a single continuous measure of outcome across a range of diagnoses. A value greater than 1.96 indicates that an observed change is statistically reliable and not due to measurement error. 30 The full list of diagnosis-specific measures used is available upon request.

| Secondary outcome measures

Clinical significance

A secondary outcome measure was a dichotomous variable indicating whether clinically significant change had occurred in the individual patient. If the patient's RCI was >1.96 (indicating statistically reliable change), and if the post-treatment test score was below the "cut score" established for the given diagnostic measure, then clinically significant change was said to have occurred. This meant the patient no longer met the threshold for diagnosis of their condition.
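The paper does not print the RCI formula, but the standard Jacobson–Truax computation it cites (ref. 30) can be sketched as follows; the numeric values and the sign convention (positive RCI = symptom decrease) are our illustrative assumptions:

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson-Truax reliable change index.

    S_diff = sqrt(2) * SE, with SE = sd_pre * sqrt(1 - reliability).
    |RCI| > 1.96 indicates change beyond measurement error. Here we compute
    (pre - post), so a positive RCI means symptom scores went down (improvement).
    """
    se = sd_pre * math.sqrt(1.0 - reliability)
    s_diff = math.sqrt(2.0) * se
    return (pre - post) / s_diff

def clinically_significant(pre, post, sd_pre, reliability, cut_score):
    """Reliable improvement AND crossing below the diagnostic cut score."""
    rci = reliable_change_index(pre, post, sd_pre, reliability)
    return rci > 1.96 and post < cut_score
```

For example, a drop from 30 to 10 on a scale with SD 8 and reliability 0.90 gives an RCI well above 1.96, and counts as clinically significant if the post score also falls below the scale's cut score.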
RCI by diagnostic categories

To explore diagnosis-specific effects, RCI values for patients within different diagnostic categories were compared. The categories were: anxiety disorders, mood disorders, obsessive-compulsive (OCD) and related disorders, psychotic disorders, and other disorders. This last category captured any disorder not included in the previous categories.

Clinical global impression and improvement

The Clinical Global Impression (CGI) assesses the overall severity of each patient's symptoms before and after treatment. Possible severity scores range from 1 (normal, not at all ill) to 7 (among the most extremely ill patients). The CGI Improvement measure indicates the extent of change over the course of therapy, ranging from 1 ("very much improved") to 7 ("very much worse"). 31 Initial CGI values were the means of the assessment team members' scores at intake. Post-treatment severity and improvement values were entered by the treating therapist.

| Exploratory measures

Participants also completed the Symptom Checklist-90-Revised (SCL-90-R), a 90-item self-report inventory whose items are rated on a 5-point Likert scale of distress. 20 The Global Severity Index (GSI) from the SCL-90-R was used as a measure of general psychopathology.

| Statistical analyses

Data were analyzed using SPSS release 24 for Mac. The sample size for each analysis includes only those participants with complete data on the main outcome measure. In this study, we were interested in comparing the mean difference in RCI, as well as the change in the CGI and SCL-90 subscales, over time and between OA and YA. This was done using ANOVA, a statistical test of whether two or more population means differ significantly.
32

A one-way ANOVA with age group (older, ≥55, vs. younger, <55) as the single between-group factor was done to assess group differences in RCI. A two-way ANOVA was done to assess group differences between OA and YA within each diagnostic category. Repeated-measures ANOVAs were used to test for group differences over time (Table 1), with no significant difference in the proportion of trainees treating older versus younger adults (Table 4, Table 5).

| DISCUSSION

In this study, we compared CBT outcomes in adults ≥55 years of age with those in adults <55 years of age after their CBT treatment. The CGI severity and improvement scores also indicated improvement with time in both groups after treatment. However, both before and after treatment, CGI severity scores for older patients had lower values (i.e., OA had "milder" illnesses). Despite this, there were no significant interactions. This could reflect a selection bias in our sample, in which only patients with a milder illness were selected. Given that older adults are less likely to be referred for psychotherapy, 5 it could reflect that only "extremely good" older candidates were referred by their physicians. This is consistent with previous findings. 5 While this could have helped with therapy adherence, it also limited potential improvement by excluding the cases with more improvement to be gained. In any case, our CBT treatment effect remains robust for adults of all ages.

[Table 3: Reliable change indices by diagnostic categories in older versus younger adults (older n = 99; younger n = 601), comparing the Younger (<55) and Older (≥55) groups with mixed between-within ANOVAs (F) for category × age, category, and age. Accompanying note: reports the GSI total score, a severity measure derived from the subscales of the SCL-90-R (higher scores = more psychopathology); both younger and older patients improved over time (lower GSI scores), with no significant difference between groups and no significant interaction effect.]
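The between-group comparison above is ordinary one-way ANOVA machinery: an F statistic comparing between-group to within-group mean squares. The study itself used SPSS; the hand-rolled Python helper below is ours, as a minimal illustration of what the test computes (here, e.g., RCI values for the older vs. younger groups):

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group mean square divided by
    within-group mean square, with k - 1 and n - k degrees of freedom."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Groups with identical means give F = 0 (no between-group variance), while well-separated groups give a large F; SPSS then converts F and the degrees of freedom into the reported p-value.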
Our findings corroborate and extend the existing literature on CBT outcomes in older adults. To our knowledge, there has never been such a large non-veteran comparative study between older and younger adults. Most studies to date are limited by small sample sizes and have focused on single diagnoses (such as depression). [10][11][12][13][14][15][16][17] Our work adds to the literature on the potential effectiveness of CBT in older adults for less-studied disorders, such as obsessive-compulsive and related disorders. In this paper, we show how CBT appears to benefit a diagnostically heterogeneous sample of patients aged 55 years and older in the same way it helps those younger than 55 years of age.

| Limitations and strengths

There are some limitations to this study. Much of the data collected comes from self-reported symptom questionnaires, and a selection bias could have occurred such that only the most motivated patients completed them; this would, however, apply to both groups. The questionnaires themselves reflect symptom burden and do not directly address quality of life, which is a major concern for the elderly and for all individuals with mental health disorders. While it is possible that quality of life is unchanged or worsened with CBT, a collaborative, problem-focused therapy is more likely to improve it. Another limitation is that most of the patients were treated by trainee therapists, so results might be expected to be lower than for CBT practiced by professionals in the real world. However, these trainees are closely supervised, and treatment integrity checks ensure that structured CBT is being administered as described. This might lead to comparable results in the real world, where therapist "drift" is a known phenomenon. 33

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available on request from the corresponding author.
The data are not publicly available due to privacy or ethical restrictions.
Leptin Signaling in the Carotid Body Regulates a Hypoxic Ventilatory Response Through Altering TASK Channel Expression Leptin is an adipose-derived hormone that plays an important role in the regulation of breathing. It has been demonstrated that obesity-related hypoventilation or apnea is closely associated with leptin signaling pathways. Perturbations of leptin signaling probably contribute to the reduced sensitivity of respiratory chemoreceptors to hypoxia/hypercapnia. However, the underlying mechanism remains incompletely understood. The present study tests the hypothesis that leptin signaling contributes to modulating the hypoxic ventilatory response. Respiratory function was assessed in conscious obese Zucker rats and in lean littermates treated with injections of leptin. During exposure to hypoxia, the change in minute ventilation was lower in obese Zucker rats than in chow-fed lean littermates or high-fat-diet-fed littermates. This change was abolished in all groups after carotid body denervation. In addition, the expression of phosphorylated signal transducer and activator of transcription 3 (pSTAT3), as well as of the putative O2-sensitive K+ channels TASK-1, TASK-3 and TASK-2, in the carotid body was significantly reduced in obese Zucker rats compared with the other two phenotype littermates. Chronic administration of leptin in chow-fed lean Zucker rats failed to alter basal ventilation but markedly increased tidal volume, respiratory frequency, and therefore minute volume during exposure to hypoxia. Likewise, carotid body denervation abolished this effect. In addition, systemic leptin enhanced the expression of pSTAT3 and TASK channels. In conclusion, these data demonstrate that leptin signaling facilitates hypoxic ventilatory responses, probably through upregulation of pSTAT3 and TASK channels in the carotid body. These findings may help to better understand the pathogenic mechanism of obesity-related hypoventilation or apnea.
INTRODUCTION

Leptin, a peptide hormone secreted mainly by adipocytes, regulates multiple physiological functions including metabolism, cardiovascular activity, and breathing (Grill et al., 2002; Bassi et al., 2015). Recent studies have implicated leptin in the control of breathing, including its participation in sleep-related breathing disorders such as obesity hypoventilation syndrome (OHS) and obstructive sleep apnea (Malhotra and White, 2002). In animal models, leptin-deficient mice exhibit impaired ventilatory responses to CO2 which can be rescued by leptin replacement therapy, supporting a facilitatory role of leptin in breathing (O'Donnell et al., 2000). Leptin receptors (ob-Rs) comprise six isoforms, termed ob-Ra to ob-Rf, with the long form ob-Rb mediating the majority of leptin's intracellular signal transduction (Tartaglia, 1997). Although the role of leptin signaling pathways in mediating various physiological actions has been investigated intensively, the molecular mechanism underlying its action on breathing remains incompletely understood. The peripheral respiratory chemoreflex serves as a homeostatic regulatory mechanism by which sufficient oxygen is supplied to the organism when challenged by hypoxia, through alteration of respiratory amplitude and frequency. The carotid body (CB) chemoreceptors, located near the bifurcation of the carotid artery, are activated shortly after exposure to hypoxia and then send information to the nucleus tractus solitarius and higher integrative centers, producing adaptive ventilatory responses (Gonzalez-Martin et al., 2011; Ciriello and Caverson, 2014). Accumulated evidence indicates the presence of ob-Rb in CB cells (Porzionato et al., 2011; Messenger et al., 2013), and that leptin signaling contributes to CB-mediated ventilatory responses (Olea et al., 2015; Ribeiro et al., 2017).
However, it remains controversial whether the CB mediates the acute effect of leptin on the hypoxic ventilatory response (HVR), because leptin's actions may involve changes in gene expression and protein synthesis, requiring hours or even days for full effects (Hall et al., 2010). We therefore predicted that the stimulatory effect of leptin on the HVR may require chronic activation of the CBs, but this has yet to be confirmed. In the CBs, leptin signaling pathways involve the ob-R downstream signaling proteins signal transducer and activator of transcription 3 (STAT3), suppressor of cytokine signaling 3 (SOCS3), and extracellular signal-regulated kinase 1/2 (ERK1/2) (Messenger et al., 2013; Moreau et al., 2015), consistent with modulatory effects of these molecules on carotid chemoreceptor sensitivity. Emerging evidence has shown that the chemosensitivity of glomus cells in CBs requires two-pore K+ channels, including TWIK-related acid-sensitive K+ (TASK)-1 channels, and acid-sensitive ion channels (Trapp et al., 2008; Tan et al., 2010). However, very little is known about whether activation of the ob-R and downstream signaling molecules modulates the sensitization of CB chemoreceptors through effects on these ion channels. We sought to address herein whether the leptin signaling pathway in the CB contributes to regulating the HVR and the possible mechanism involved. We utilized whole-body plethysmography (WBP) to assess the HVR in obese Zucker rats (ob-R deficient) and in lean littermate controls treated with injections of leptin. The main findings suggest that chronic application of leptin contributes to facilitation of HVRs, probably through upregulation of phosphorylated STAT3 (pSTAT3) and TASK channel expression.

Animals

The experiments were carried out in 12∼20-week-old male obese Zucker rats (OZR) and lean littermates (LZR) obtained from the Charles River Laboratories (USA).
Animals, synchronized to a 12:12 h light-dark cycle (lights on at 8 am, lights off at 8 pm), were housed individually and allowed to move freely in standard plastic cages in a climate-controlled room (22 ± 1 °C). Food and water were provided ad libitum for LZRs and OZRs. In some cases, a group of LZRs was placed on a high-fat diet (HFD, 45% kcal/g fat, Research Diets D12451) for 8 weeks and used as a simple obesity control (LHZR). The LHZR and OZR groups were weight-matched to determine the effect of simple obesity-induced mechanical resistance on ventilation. Body weight was measured once a week (n = 20 for each phenotype). All experiments were performed in accordance with the ethical guidelines of the Animal Protection Association and were approved by the Animal Care and Ethical Committee of Hebei Medical University. When the animal experiments were completed, an intraperitoneal overdose of sodium pentobarbital (>200 mg/kg) was administered for euthanasia.

Breathing Measurement

Breathing was studied by WBP in conscious, freely moving rats (EMKA Technologies, France) as described previously (Kumar et al., 2015; Fu et al., 2017). In brief, rats were placed in the WBP chamber on the day before the testing protocol (2 h acclimation period). For acute hypoxia, rats were exposed to 10% O2 (balance N2) for up to ∼7 min using a gas-mixing device (1,500 ml/min, GSM-3, CWE, USA). Ventilatory flow signals were recorded, amplified, digitized and analyzed using IOX 2.7 (EMKA Technologies) to determine breathing parameters over sequential 20 s epochs (∼50 breaths) during periods of behavioral quiescence and regular breathing. Minute volume (V E; ml/min/g) was calculated as the product of respiratory frequency (f R, breaths/min) and tidal volume (V T), normalized to rat body weight (g). To further confirm the CB-mediated effect of leptin, breathing parameters were also measured in rats with carotid body denervation (CBD).
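The weight-normalized minute volume described above reduces to a one-line product. A minimal sketch follows; the input values are illustrative placeholders, not measurements from the study:

```python
def minute_volume(tidal_volume_ml, respiratory_frequency_bpm, body_weight_g):
    """V_E (ml/min/g) = f_R (breaths/min) x V_T (ml), divided by body weight (g)."""
    return (respiratory_frequency_bpm * tidal_volume_ml) / body_weight_g

# Illustrative values for a ~380 g lean rat breathing room air
v_e = minute_volume(tidal_volume_ml=2.0, respiratory_frequency_bpm=100, body_weight_g=382)
print(round(v_e, 3))
```

Normalizing to body weight in this way is what allows ventilation to be compared across the lean and obese phenotypes, which differ in mass by more than 100 g.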
The carotid sinus nerves were sectioned as described previously (Kumar et al., 2015). Briefly, anesthesia was induced with 4% halothane in 100% O2 and maintained by reducing the inspired halothane concentration to 1.5∼1.8%. The depth of anesthesia was assessed by the absence of the corneal and hindpaw withdrawal reflexes. Body temperature of all rats was maintained at 37 °C using a temperature-controlled heating pad. To prevent any functional regeneration of chemosensory fibers, the carotid sinus nerves were removed completely from the cranial pole of the CB to the branch of the glossopharyngeal nerve. The wound was carefully sutured and disinfected with 10% povidone-iodine. Conscious chemodenervated rats were exposed to the ventilatory challenge 5-7 days after recovery. No significant weight loss was observed. All three groups of rats (LZR, LHZR and OZR) underwent surgery. To examine whether hypoventilation resulted in retention of CO2 in obese rats, arterial blood gases were measured using an OPTI-CCA blood gas analyzer (OPTI Medical Systems, USA) at a steady state in halothane-anesthetized, paralyzed rats. General anesthesia was induced with 4% halothane in room air as described above. Arterial blood (200 µl per sample) was drawn from the femoral artery in the three animal groups. Arterial blood measurements of interest included partial pressure of arterial O2 (P a O 2), partial pressure of CO2 (P a CO 2) and pH.

Plasma Leptin Levels and Subcutaneous Leptin Injections

Measurements of plasma leptin levels were performed under room air (21% O2) in anesthetized rats. After general anesthesia as described above, whole blood samples were taken by cardiac puncture. Blood samples were drawn into collection tubes containing the anticoagulant EDTA (Sigma-Aldrich, USA) and kept on ice.
After centrifugation, the plasma was stored at −80 °C for leptin analysis with an ELISA kit (#ab100773, Abcam, USA), an in vitro enzyme-linked immunosorbent assay for quantitative measurement, as previously described (Panetta et al., 2017). The assay was read using a PowerWave XS2 plate reader (BioTek Instruments, USA). To confirm whether chronic activation of leptin signaling pathways played a part in the HVR, subcutaneous injections of leptin (60 µg/kg) or an equal volume of vehicle (saline) were carried out once daily for 7 days in CB-innervated (CBI) LZRs (n = 8 for each group), and breathing parameters were measured after the 7-day injections during exposure to room air or hypoxia. To further confirm the CB's role, subcutaneous injections of leptin or saline were also performed 7 days after the carotid sinus nerves were sectioned in each group (n = 8 for both).

Data Acquisition and Processing

Data are expressed as mean ± SEM. Unless indicated otherwise, two-tailed unpaired t-tests, one-way ANOVA with Dunnett's or Tukey's post-hoc test, and two-way ANOVA with Bonferroni post-hoc test were used to compare differences between groups. Differences within or between groups with P-values < 0.05 were considered significant.

Reduced Basal Ventilation in OZRs

Adult OZRs exhibit many abnormal physiological attributes due to the deficiency of the ob-R, representing a good animal model for studying obesity-related hypoventilation as observed in humans. We thus addressed leptin's role using this phenotype and the littermate control rats. First, we measured the body weight of the three groups of rats (n = 20 for each group). The average body weight was greater in LHZRs (492 ± 30 g) and OZRs (512 ± 49 g) than in LZRs (382 ± 14 g, P < 0.01 vs. LHZR or OZR). In addition, weight gain was accompanied by higher plasma leptin levels in OZRs (15.28 ± 0.55 µg/L) relative to LHZRs (7.45 ± 0.50 µg/L, P < 0.01 vs. OZR) or LZRs (7.34 ± 0.38 µg/L, P < 0.01 vs. OZR; P > 0.05 vs.
LHZR). Baseline breathing parameters were measured in the three groups of animals while breathing room air (21% O2). Compared with LZRs, V E and V T were considerably lower in OZRs and LHZRs (P < 0.01 for both vs. LZRs, Figures 1A,C), whereas OZRs had a faster f R than the other two groups (P < 0.01 for both). Of interest, although no remarkable difference in basal V E was observed between OZRs and LHZRs, V T and f R differed between them (P < 0.01 for both, Figure 1B), indicative of a different breathing pattern. To evaluate whether hypoventilation resulted in hypercapnia or respiratory acidosis in obese animals, arterial blood gases were measured at a steady state. Hypercapnia, acidosis and normal P a O 2 were observed in LHZRs and OZRs (Table 1). Therefore, both OZRs and LHZRs exhibited basal hypoventilation, overweight and hypercapnia, with hyperleptinemia present in OZRs but not LHZRs.

Impairment of HVR in OZRs

To address whether ob-R deficiency yielded a diminished HVR, hypoxia was induced by inhalation of 10% O2 to activate the peripheral respiratory chemoreflex. When acutely challenged with hypoxia, all three phenotypes displayed robust increases in f R and V E but not V T, with the smallest change in the HVR in OZRs (Figures 1A-C). In addition, the hypoxia-stimulated increment of V E was far smaller in OZRs compared to the other two groups (P < 0.01, Figure 1E).

FIGURE 1 | The OZR exhibits impaired HVR. (A-C) Effect of hypoxia on V T, f R, and V E in the three groups of rats. (D) The stimulatory effect of hypoxia on V E in three groups of CBD rats. (E) Changes in V E during exposure to 10% O2 between CBI and CBD rats. n = 20 for each group, **P < 0.01, two-way ANOVA with Bonferroni post-hoc test. V T, tidal volume; f R, respiratory frequency; V E, minute ventilation; CBI, carotid body innervation; CBD, carotid body denervation.

Interestingly, LZRs and LHZRs displayed a similar change in V E during hypoxia
(P > 0.05, Figure 1E). In spite of a similar degree of body weight, OZRs exhibited far more severe hypoventilation in response to hypoxia compared to LHZRs (P < 0.01, Figures 1A-C). However, exposure to hypoxia caused no significant difference in the increments of V E among the three groups of rats after sectioning of the carotid sinus nerves (P > 0.05, Figures 1D,E), supporting the involvement of the CBs in this effect. Collectively, ob-R deficiency (OZR), rather than simple obesity (LHZR), plays the predominant role in the impaired HVR.

Downregulation of pSTAT3 and TASK Channels in OZR CBs

pSTAT3/STAT3 signaling has been implicated in mediating major effects of the ob-R. To determine the expression levels of ob-R and STAT3, quantitative analysis was performed by Western blot in the three groups (n = 4 for each group). Compared with LZRs and LHZRs, pSTAT3/STAT3 (Figures 2C,D) was markedly downregulated in the CBs of OZRs (P < 0.01), reliably correlating pSTAT3/STAT3 expression with ob-R deficiency (Figures 2A,B). Several lines of evidence have demonstrated that chemosensitivity is associated with TASK-1 and TASK-3 in CB glomus cells and with TASK-2 in retrotrapezoid nucleus neurons (Trapp et al., 2008; Gestreau et al., 2010; Wang et al., 2013). The reduced sensitivity of CBs to hypoxia in OZRs was therefore probably associated with these ion channels. Indeed, the expression levels of TASK-1, TASK-2 and TASK-3 in OZR CBs were lower relative to the other two groups (P < 0.05∼0.01, Figures 2E-H). However, no statistically significant difference in the expression of these channels was found between LZRs and LHZRs (P > 0.05 for all, Figures 2E-H). Hence, ob-R deficiency contributes to reduced expression of pSTAT3 and TASK channels.
Facilitation of HVR by Chronic Application of Leptin

To examine the effect of activation of ob-Rs on the HVR, subcutaneous injections of leptin (60 µg/kg) or an equal volume of normal saline were carried out once daily for 7 days in LZRs (n = 8 for each group), and breathing parameters were measured at different time points separated by 7 days. As shown in Table 2, compared with the vehicle control (8.2 ± 0.4 µg/L), plasma levels of leptin were raised to 13.1 ± 0.6 µg/L (P < 0.01) over the 7-day treatment and returned to the control level after 1 week. Chronic administration of leptin for 7 days produced no significant change in body weight (P > 0.05, data not shown) or basal breathing parameters (V T, f R and V E) relative to the vehicle control (Figures 3A-C). Neither P a CO 2 nor blood pH changed significantly in leptin-injected rats (data not shown). During exposure to 10% O2, V T, f R, and V E all increased in both leptin- and saline-injected LZRs (P < 0.05∼0.01), but the change in V E was greater in leptin-injected rats relative to the vehicle controls (Figure 3E). The stimulatory effect of leptin on the HVR persisted for at least 2 weeks (Table 3). After bilateral sectioning of the carotid sinus nerves, the leptin-induced increase in V E was abolished (Figures 3D,E). Collectively, chronic administration of leptin potentiated the HVR.

Effect of Leptin on Expression of pSTAT3 and TASK Channels

To investigate the possible mechanism underlying the action of exogenous leptin on the HVR, we examined the expression of ob-R and the downstream pSTAT3 and TASK-1, TASK-2 and TASK-3 channels in CBs after injection of leptin for 7 days. The findings indicated that leptin administration caused significant upregulation of ob-R and pSTAT3 (P < 0.01, n = 4 for each group, Figures 4A,B). Furthermore, leptin also enhanced the expression of TASK-1, TASK-2 and TASK-3 (P < 0.01, n = 4 for each group, Figures 4C-E).
The results suggest that the stimulatory effect of leptin on the HVR is closely associated with enhanced expression of pSTAT3 and TASK channels, which may contribute to the regulation of the CB's chemosensitivity.

DISCUSSION

We demonstrate herein that ob-R deficiency, rather than simple obesity, not only reduces baseline ventilation but also inhibits the HVR, with decreased pSTAT3 expression in the CBs. Chronic administration of leptin has no marked effect on basal ventilation but considerably enhances the HVR, accompanied by increased expression of pSTAT3. Additionally, both ob-R deficiency and leptin administration are reliably associated with changes in the expression of TASK-1, TASK-2, and TASK-3 channels. These findings suggest that leptin signaling in the CB contributes to potentiation of the HVR, probably through enhanced expression of pSTAT3 and TASK channels. The obese Zucker rat represents a good model of ob-R deficiency and manifests relatively early-onset obesity (Bray and York, 1971). One line of early evidence showed that respiratory system compliance was significantly lower in the OZR compared with the lean phenotype, and that resting ventilatory parameters (uncorrected for body weight) were similar between obese and lean animals, with a similar ventilatory response to hypoxia between the two phenotypes (Farkas and Schlenker, 1994). In the present study, we compared the obese and lean phenotypes using a previously described method (Kumar et al., 2015; Fu et al., 2017) to normalize breathing parameters to body weight. Interestingly, with this analysis method, the OZRs exhibited a fast f R, reduced V T and thus a lower V E during exposure to room air. This finding explains the hypercapnia and respiratory acidosis observed in OZRs. Based on these attributes, the OZR resembles an animal model of leptin resistance.
The LHZR, a simple obesity control carrying the genotype of the LZR, displayed hypercapnia, respiratory acidosis, relatively normal serum leptin, basal hypoventilation and a moderate response to hypoxia, probably representing a model of simple obesity rather than leptin resistance. In obese patients, higher levels of leptin are associated with an increase in basal ventilation related to excess body mass, with OHS patients exhibiting even higher serum leptin levels than eucapnic individuals matched for body mass index (Phipps et al., 2002). In animal models, HFD rats exhibit an unchanged (Olea et al., 2014) or enhanced basal V E (Ribeiro et al., 2017). Importantly, Ribeiro et al. found that 3 weeks of HFD blunted leptin responses to hypoxia in the CB, probably owing to the development of CB leptin resistance, suggesting that at least 3 weeks are required for the establishment of leptin resistance. In the present study, however, hyperleptinaemia and leptin resistance did not occur in the LHZR, most likely because of its genotype; its hypoventilation thus appears to reflect a restrictive ventilatory pattern. Another explanation is supported by previous findings suggesting that the elevation of leptin levels is a consequence of hypoxia and not of fat accumulation (Tatsumi et al., 2005). Taken together, our findings support the conclusion that the impaired basal ventilation and HVR in OZRs are ascribed mainly to ob-R deficiency rather than mere obesity. We did not measure chest wall mechanics or compare chest wall impedance between the two obese phenotypes, although any obesity-induced increase in chest wall impedance would be expected to play a relatively small part in these effects. This also helps explain why the hypoxia-induced change in V E did not differ between LZRs and LHZRs. Leptin's actions can have fast or slow onset, requiring minutes, several hours, or even days before major changes occur.
Its acute effects on breathing have been described in prior studies (Chang et al., 2013; Olea et al., 2015; Pye et al., 2015; Ribeiro et al., 2017). For example, Olea et al. found that acute application of leptin in anaesthetized animals augmented basal V E and potentiated the ischemic hypoxia-induced V E in a dose-dependent manner (Olea et al., 2015). More recently, Ribeiro et al. reported that leptin increases V E under both basal and hypoxic conditions in control rats, but these effects were blunted in high-fat-diet-fed rats (Ribeiro et al., 2017). In contrast, acute application of leptin to isolated CB type I cells failed to significantly alter the resting membrane potential, and acidification-induced depolarization was unaffected by leptin, suggesting that acute leptin stimulation does not alter the CB's chemosensitivity (Pye et al., 2015). In addition to acute effects, chronic treatment with leptin in vivo has also been shown to potentiate the respiratory chemoreflex (Bassi et al., 2014). Acute and chronic actions may be mediated by different leptin signaling pathways. Chronic administration of leptin in the present study did not augment basal ventilation but potentiated the HVR in conscious rats, an effect persisting for ≥7 days. This appears to play an essential role in reinforcing ventilation to supply more oxygen when challenged by hypoxia. The plasma levels of leptin after 7 days of treatment were quite similar to those quantified in OZR plasma; taken together with the enhancement of the HVR by 7-day leptin administration and the impairment of the HVR in OZRs, this indicates that 7 days of leptin stimulation did not result in leptin resistance. The absence of an effect of chronic leptin on basal ventilation here may be attributable to the animals' state (anesthetized vs. conscious) and to the dose of leptin administered.
The dose of leptin applied to our animals was chosen based on doses chronically applied previously (Wjidan et al., 2015), and was lower than those used in prior reports examining the acute effects of leptin on cardiovascular (Rahmouni et al., 2002; Rahmouni and Morgan, 2007) and respiratory functions (O'Donnell et al., 2000; Bassi et al., 2012). Moreover, for potential therapeutic applications, this dose would be expected to exert specifically respiratory, but not excessive cardiovascular, effects. Higher concentrations of leptin would be expected to saturate plasma carrier molecules, and some of the unbound leptin would be degraded. Furthermore, since the amount of leptin that crosses the blood-brain barrier depends on a receptor-mediated transport mechanism (Morris and Rui, 2009), access to the brain should be somewhat limited. Leptin's intracellular signal transduction has been extensively investigated, with the exception of the molecular mechanisms underlying its effect on respiratory chemosensitivity. It remains poorly understood how the activation of leptin signaling affects the O2-sensitive channels that may determine the CB's chemosensitivity. In line with recent studies indicating that the chemosensitivity of CBs is closely associated with TASK-1 and TASK-3 channels (Trapp et al., 2008; Tan et al., 2010), TASK-2 channels have also been shown to set central respiratory CO2 and O2 sensitivity (Gestreau et al., 2010). In addition, activation of ob-Rs appears to regulate ion channels including ATP-sensitive K+ channels and voltage-gated K+ channels (Gavello et al., 2016). Although STAT3, SOCS3, and ERK1/2 may mediate leptin's role in the CBs, critical information is lacking to date concerning the modulatory effects of these molecules on TASK channels.
In the present study, we did not directly address how the activation of ob-Rs and downstream signaling proteins regulates TASK-1, TASK-3, and TASK-2 channels; notably, however, the altered expression levels of these channels would be expected to be attributable to leptin signaling and to contribute to leptin-stimulated facilitation of the HVR. Future work is required to reveal these mechanisms. In summary, leptin signaling participates in setting the CB's O2 sensitivity, probably through the modulation of TASK-1, TASK-3, and TASK-2 channels, and thus contributes to the potentiation of the HVR. This line of cellular evidence extends our understanding of the molecular mechanism of leptin's action on breathing, shedding light on the etiology of obesity-related hypoventilation or apnea.

AUTHOR CONTRIBUTIONS

FY, JF, and HW acquired the data; FY, JF, and ZW analyzed and interpreted data; FY, XZ, and HY drafted the manuscript; FY, SW, and YZ were responsible for study concept and design; YZ and SW obtained research funding.
Flight tone characterisation of the South American malaria vector Anopheles darlingi (Diptera: Culicidae) BACKGROUND Flight tones play important roles in mosquito reproduction. Several mosquito species utilise flight tones for mate localisation and attraction. Typically, the female wingbeat frequency (WBF) is lower than that of the male, and stereotypic acoustic behaviors are instrumental for successful copulation. Mosquito WBFs are usually an important species characteristic, with female flight tones used as male attractants in surveillance traps for species identification. Anopheles darlingi is an important Latin American malaria vector, but we know little about its mating behaviors. OBJECTIVES We characterised An. darlingi WBFs and examined male acoustic responses to immobilised females. METHODS Tethered and free-flying male and female An. darlingi were recorded individually to determine their WBF distributions. Male-female acoustic interactions were analysed using tethered females and free-flying males. FINDINGS Contrary to most mosquito species, An. darlingi females are smaller than males. However, the male's WBF is ~1.5 times higher than the female's, a common ratio in species with larger females. When in proximity to a female, males displayed rapid frequency modulations that decreased upon genitalia engagement. Tethered females also modulated their frequency upon male approach, and this modulation differed depending on whether the interaction ended in copulation or only contact. MAIN CONCLUSIONS This is the first report of An. darlingi flight acoustics, showing that its precopulatory acoustics are similar to those of other mosquitoes despite the uncommon male:female size ratio, suggesting that WBF ratios are a common communication strategy rather than a physical constraint imposed by size. Malaria is one of the most important vector-borne diseases worldwide, with 219 million cases reported in 2017.
(1) In South America, Anopheles darlingi is a major malaria vector, (2,3) being the primary neotropical malaria vector in the Amazon region of Brazil, Colombia, Peru and Venezuela. (4) An. darlingi is an efficient malaria vector, able to maintain high levels of transmission even when present at low densities. (3) Further, genetic differentiation among An. darlingi populations is suggested to enable adaptation of this species to a range of habitats. (4,5) However, little is known about the basic biology of this species, and its reproductive behaviors have not been reported to date, making the control of this vector challenging. Disruption of pre-mating reproductive behaviors has been proposed as a means to control mosquito populations by exploiting important mating-specific cues to prevent male-female interaction. (6,7) One area of focus is precopulatory behavioral interactions between males and females. Prior to copulation, male and female mosquitoes must locate each other and interact. Males are attracted to tones produced by the female wing beat, (8,9) although additional cues are likely to aid male-female attraction. (10) The mating encounter site for some species is around the host (e.g., Aedes aegypti) -males intercept females as they attempt to blood-feed. (11,12) Males of some anopheline species form swarms -females penetrate the swarm to find a mate; (13) it is unknown if An. darlingi males form swarms. Male and female Ae. aegypti, Anopheles gambiae, Anopheles albimanus and Culex quinquefasciatus interact acoustically pre-copula. (14,15,16,17,18) One such interaction is rapid frequency modulation (RFM), the rapid increase of the male wing beat frequency (WBF), followed by tone oscillation, terminating with a decrease in tone frequency. (19,20) This phenomenon occurs when males approach females during a mating attempt (19,20) and appears to be a common male acoustic behavior during courtship, having been described in Ae. aegypti, Cx. quinquefasciatus, An. 
gambiae, An. coluzzii, and An. albimanus. (17,19,20,21) Further, in Ae. aegypti, Cx. quinquefasciatus, and An. gambiae, males and females modulate their WBFs to match in a shared harmonic during courtship, a phenomenon known as harmonic convergence. (14,16,18,22) Mosquito WBFs can be an important characteristic of a species, (23,24,25) as they mediate mating, although closely related species can have similar flight tones. (10) In addition, WBFs are influenced by factors such as temperature, humidity and age. (22,26) Intraspecific body size has also been suggested to influence WBF, (22,27) although other reports find no such effect. (17,28) Thus, how body size influences the flight tones of individuals remains unclear. In addition, males and females of most species usually exhibit different wingbeat frequencies, with males producing higher frequencies. It has long been assumed that this frequency difference is associated with the smaller size of males. (9,12,29) However, in An. darlingi, males and females collected in the field are similarly sized. (30) This species therefore presents an ideal opportunity to evaluate how the male-female size ratio influences mosquito acoustic interactions. Furthermore, evaluation of the acoustic mating behaviors of An. darlingi, an understudied species, will contribute to our knowledge of mosquito mating behaviors. To examine precopulatory behavioral interactions in an understudied malaria vector, we characterised An. darlingi WBFs and examined acoustic behaviors upon exposure to the opposite sex. We find that lab-reared males are larger than females, unlike in other mosquito species. Although larger, males broadcast a significantly higher WBF than females, showing that the WBF ratio between males and females is not determined by body size, contrary to what has been assumed in the past. (9,12,29) Upon exposure to tethered females, males displayed RFM when approaching the female. 
This behavior is observed in other mosquito species, showing that the communication dynamic during mating is preserved in An. darlingi despite the unusual relative sizes of males and females of this species. These findings support the idea that the acoustic dynamic of mosquito mating behavior is a process of communication rather than a byproduct of motion. Moreover, as flight tones are currently being used in the development of novel surveillance strategies, (6,23,25,31,32,33) insight into An. darlingi reproductive behaviors and WBF characterisation will aid their surveillance and control. MATERIALS AND METHODS Mosquitoes - An. darlingi from the Universidad Peruana Cayetano Heredia - ICEMR insectary (Iquitos, Peru) were used in our experiments. This colony has been maintained since 2012. (34,35) Pupae were individualised in 5 mL tubes to ensure virginity, and adults were separated by sex and transferred to sex-specific cages upon eclosion. Mosquitoes had access to a 15% honey-water solution ad libitum. All recordings were conducted at 26°C and 80% relative humidity (RH). Five- to seven-day-old mosquitoes were used in all experiments. Audio recording set up - Mosquitoes were recorded in a 4 L plastic cage using a particle velocity microphone (NR-23158-000, Knowles; Itasca, IL, USA). A USB audio interface (M-Track Quad Four Channel Audio; M-Audio, Cumberland, USA) was used to amplify and digitise recordings at a sample rate of 11025 Hz/24 bits. Experimental procedures - We first recorded mosquitoes individually to determine WBFs when in free flight and when tethered. As similarly aged individuals were difficult to obtain from this colony (i.e., we could not synchronise hatch rates and experienced high mortality subsequent to individualisation), the same mosquitoes were utilised to determine WBF under both conditions. Tethered mosquitoes were immobilised as in Pantoja-Sánchez et al. (17) and placed 1 cm above the microphone to record their WBFs. 
Free flying mosquitoes were recorded using a rod with an adhered microphone; a researcher manually followed their flight, maintaining a distance of 5-10 cm. We next recorded free flying males and tethered females to examine acoustic interactions during a mating interaction - an immobilised female was placed ~5 cm from the microphone to allow movement around her [Supplementary data (Figure)]. Ten to fifteen males were then introduced into the cage. Females were replaced upon copulation; spermathecae were subsequently dissected to determine insemination status. We recorded all trials with a camera (FLIR-FLEA 3 1.3 MP Color USB 3 Vision with a Fujinon FF125HA-1B 12.5 mm lens) and identified and timed behaviors in real time. Mosquito wings were measured as in van den Heuvel et al. (36) to estimate body size. Signal and statistical analysis - Flight-tone audio recordings were analysed using spectrograms (Fast Fourier transform-based, length of 4096 points, Hamming window of 80 ms and 50% overlap). To evaluate acoustic interactions, audio segments occurring during observed mating attempts were analysed. From the spectrograms, we used the male second harmonic and the female third harmonic to examine flight tones during male-female behavioral interactions, due to frequency resolution, but present our results in terms of their fundamental WBFs for simplicity, as done previously. (17) We report the time of the interaction as the length of the male's flight tone. Male and female responses were assessed by evaluating the extent of frequency modulation in the second and third harmonic, respectively. Male measurements were divided by two (ΔF = (Fmax - Fmin)/2) and female measurements by three (ΔF = (Fmax - Fmin)/3) to express results in terms of the fundamental frequency equivalent to the WBF. We further assessed female behavior by examining the rate of increase (ΔF/Δt) after male detection. 
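The spectrogram settings and the harmonic-to-fundamental conversion described above can be sketched in Python. The paper used Matlab; `scipy` is substituted here as a stand-in, and the synthetic 450 Hz tone below is purely illustrative, not real recording data:

```python
import numpy as np
from scipy import signal

FS = 11025  # recording sample rate (Hz), as stated in the Methods

def wbf_spectrogram(audio):
    """FFT-based spectrogram matching the paper's settings:
    4096-point FFT, ~80 ms Hamming window, 50% overlap."""
    nperseg = int(0.08 * FS)  # ~80 ms window -> 882 samples
    f, t, Sxx = signal.spectrogram(
        audio, fs=FS, window="hamming",
        nperseg=nperseg, noverlap=nperseg // 2, nfft=4096)
    return f, t, Sxx

def fundamental_delta_f(f_max, f_min, harmonic):
    """Express the modulation extent measured on a tracked harmonic as its
    fundamental-frequency equivalent: dF = (Fmax - Fmin) / harmonic
    (harmonic = 2 for males, 3 for females in the paper)."""
    return (f_max - f_min) / harmonic

# synthetic 450 Hz tone as a stand-in for a female flight tone
t = np.arange(0, 1.0, 1 / FS)
tone = np.sin(2 * np.pi * 450 * t)
f, _, Sxx = wbf_spectrogram(tone)
peak = f[np.argmax(Sxx.mean(axis=1))]  # dominant frequency across the recording
```

With a 4096-point FFT at 11025 Hz the frequency resolution is ≈2.7 Hz per bin, which is why tracking the higher harmonics (whose modulations are proportionally larger) is easier than tracking the fundamental, as the authors note.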
To compare frequencies between flight conditions (tethered and free flight), and to discriminate differences between sexes, we used t-tests. Normality of the WBF variable was determined with a Shapiro-Wilk test. A linear regression was used to determine the relationship between wing size and WBF. Residuals were tested for normality, homogeneity of variance and independence using Shapiro-Wilk, Bartlett and Durbin-Watson tests, respectively. We compared male-female contact and copulation interactions using the non-parametric Mann-Whitney U test to assess similarities and differences in the distributions of the variables analysed. Results are reported as mean ± standard deviation for variables that followed a normal distribution, and as median (interquartile range - IQR) for variables that did not. Signal processing was performed using Matlab (R2016a, Mathworks Inc., Natick, USA). Statistical analysis was performed using the car package (37) of R (Vienna, Austria). (38) RESULTS Body sizes of An. darlingi adults - Using dry weight to examine body size in field-collected An. darlingi, Lounibos et al. (30) reported that males and females of this species were similarly sized. As An. darlingi dry weight is highly correlated with wing length, (30) we measured wing lengths to determine male and female size of our lab-reared specimens. Female An. darlingi (2518 ± 88 μm) were significantly smaller than males (2635 ± 14 μm) (t-test: t(29) = 3.82, p < 0.01; Fig. 1A) when reared under standard laboratory conditions, unlike what has been observed in many species across different genera of Culicidae. (9,12) The male/female size ratio in An. darlingi (1.046) is distinct compared to other species in the same genus, such as An. albimanus (size ratio: 0.95), (17) or in a different genus, such as Ae. aegypti (size ratio: 0.81) (39) (Fig. 1A). 
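A minimal Python sketch of the statistical workflow described above. The paper used R's car package; `scipy` is substituted here, the Durbin-Watson statistic is computed directly from its definition rather than via a package, and all data below are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical WBF samples (Hz) for two conditions (e.g., tethered vs free flight)
tethered = rng.normal(440, 15, 30)
free_fly = rng.normal(470, 15, 30)

# normality check before the t-test, as in the paper
_, p_norm = stats.shapiro(tethered)
t_stat, p_val = stats.ttest_ind(tethered, free_fly)

# wing size vs WBF regression with the residual diagnostics used in the paper
wing = rng.normal(2600, 50, 30)                    # hypothetical wing lengths (um)
wbf = 900 - 0.15 * wing + rng.normal(0, 10, 30)    # hypothetical WBFs (Hz)
res = stats.linregress(wing, wbf)
resid = wbf - (res.intercept + res.slope * wing)
_, p_resid_norm = stats.shapiro(resid)                 # residual normality
_, p_var = stats.bartlett(tethered, free_fly)          # homogeneity of variance
dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)  # Durbin-Watson statistic
```

The Durbin-Watson statistic is bounded in [0, 4], with values near 2 indicating uncorrelated residuals; `statsmodels.stats.stattools.durbin_watson` computes the same quantity if that library is available.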
Male-female acoustic interactions during a mating attempt -To investigate acoustic interactions between sexes, we exposed free flying males to a tethered-flying female. We analysed all interactions where the male's flight-tone was detected. Two types of interactions were observed: (1) male-female leg contact and (2) copulation, defined as visible genitalia engagement. However, sperm transfer was never detected upon female dissection, possibly due to the inability to achieve a proper angle for insemination. (41) Male signal was detected from 40 distinct interactions using 15 females (11 copulations, 29 contacts). In all interactions, when nearing the female, RFM of the male flight tone occurred followed by modulation of the female frequency (Fig. 2). In copulation interactions, the median time of male interaction was 2.35 s (n = 11, IQR 1.95 -4.82 s), male flight-tones were characterised by RFMs with a modulation extent (F max -F min ) of 396.51 Hz (IQR 296.84 -460.69 Hz). RFM was followed by a decrease in the modulation ( Fig. 2A) or by male wing beat cessation; in both cases while engaged with the female genitalia. In contact interactions, the median time of male interaction was 2.36 s (n = 29, IQR 2.13 -3.78 s), male flight-tones were characterised by RFMs until departure from the female (Fig. 2B), with a modulation extent of 321.99 Hz (IQR 279.347 -383.14 Hz). No differences in the interaction time (Mann-Whitney U-test: U 11,29 = 129.00, Z = 0.92, p = 0.35; Fig. 3A) or modulation extent (Mann-Whitney U-test: U 11,29 = 113.00, Z = -1.41, p = 0.16; Fig. 3B) were detected between interaction types. Female flight-tones were also analysed. From the 40 interactions described above, 12 were excluded from our analysis as females stopped their wing beat upon male contact. Despite being tethered, female signal modulation was detected in the remaining 28 interactions (11 copulations, 17 contacts). 
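The copulation-versus-contact comparisons above rely on the Mann-Whitney U test. A brief sketch follows; the sample sizes mirror the n = 11 and n = 29 reported, but the modulation extents are simulated, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical RFM modulation extents (Hz) for the two interaction outcomes
copulation = rng.normal(396, 60, 11)  # n = 11 copulation interactions
contact = rng.normal(322, 55, 29)     # n = 29 contact-only interactions

# non-parametric comparison of the two distributions, as in the paper
u_stat, p_val = stats.mannwhitneyu(copulation, contact, alternative="two-sided")

# medians with IQR, the summary used for non-normal variables
median_cop = float(np.median(copulation))
q1, q3 = np.percentile(contact, [25, 75])
```

The Mann-Whitney U test is appropriate here because the modulation extents did not follow a normal distribution, which is also why the paper reports these variables as median (IQR) rather than mean ± SD.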
Females reacted to the male approach by modulating their flight-tone frequency - we observed an initial WBF increase similar to descriptions in other mosquito species. (19,21) The magnitude of the frequency increase in contact interactions had a median of 130.38 Hz (IQR 73.18 - …).

[Fig. 1: wing beat frequencies and sizes of male and female Anopheles darlingi. We used wing length as a proxy for body size. (A) Distribution of female (red; n = 30) and male (blue; n = 30) sizes; sizes for male and female Aedes aegypti (39) and An. albimanus (17) …]

DISCUSSION Mosquito mating has been widely used as a target to develop novel control strategies. (42) The effectiveness of such strategies, however, requires a profound understanding of the behaviors associated with mosquito reproduction, (42) in which flight tones play a major role. (21,43) Increasing our knowledge of the mating behavior of various species provides new insights into different aspects of acoustic interactions that influence mating. This study investigates the mating behavior of a particular mosquito species: An. darlingi. We show that An. darlingi bioacoustics are similar to those of other species despite their uncommon intraspecific male/female size ratio. (9,12,29) When exposed to a tethered female, male RFM occurred when approaching and contacting the female. Similar observations in Cx. quinquefasciatus, An. gambiae, An. coluzzii and Ae. aegypti indicate the importance of this behavior. (19,20,21) RFM duration was similar in all interactions but varied greatly during male-female contact in the absence of copulation. In Ae. aegypti, RFM cessation coincides with formation of the ventral-ventral copulation position. (21) Similarly, An. darlingi RFM ends when a pair engage their genitalia; the male decreases the frequency modulation of his tone or ceases flying. This might be a general anopheline behavior, as mating pairs fall to the ground after couple formation. (44,45) Tethered An. darlingi females also modulated their flight tone. 
When males neared or made contact, tethered females increased their WBF at a higher rate than when the interaction resulted in copulation. This might be reflective of active female acceptance or rejection, although more experiments are necessary to test this hypothesis. Tethered Culex females modulate their WBF as a result of physical contact by the male. (19) Whether WBF modulation by An. darlingi females initiates during the male approach or only upon physical contact will require higher resolution videos.

[Fig. 2: Female (f) and male (m) fundamental frequency and its harmonics are shown. Colours indicate the power of each frequency component; red is the most powerful, blue the least powerful, and white indicates the noise floor. Notable events are indicated for males (black numbers) and females (red); the male signal is marked with a bracket in (A) and (B). (A) Male rapid frequency modulation (RFM) begins (black 1) and ends (black 2); in this interaction, the male flight tone was stable until disengagement from the female (black 3). (B) RFM begins (black 1) and ends (black 2), with male departure shortly afterwards. In both outcomes, the female increased her wing beat frequency (WBF) upon male detection (red 1), quickly reaching peak frequency (red 2).]

Contrary to what has long been assumed, (9,12,29) this study shows that the male-female WBF distributions of medically relevant mosquito species, and the acoustic interactions that occur in mating attempts, are not constrained exclusively by the intraspecific male/female size ratio. Moreover, our findings support the hypothesis that the characteristics of the acoustic dynamic between males and females correspond to a communication strategy rather than a byproduct of motion. By describing the precopulatory acoustic behaviors and WBF distributions of An. darlingi, this study contributes to the overall knowledge of the flight acoustics of this vector and of mosquitoes in general. 
We hope this study will promote future investigations on species of medical relevance for Latin America and aid their surveillance.
Chiranthodendron pentadactylon Larreat (Sterculiaceae), a Potential Nephroprotector against Oxidative Damage Provoked by STZ-Induced Hyperglycemia in Rats Background: An infusion of the flowers of Chiranthodendron pentadactylon, known in Mexico as the "tree of the little hands", is used to treat kidney failure associated with diseases such as diabetes. The aim of this work is to evaluate the antioxidant effect of the methanolic extract of its flowers on oxidative damage in kidneys caused by streptozotocin in rats. Methods: The phytochemical profile of the extract was obtained by HPLC. Antioxidant potential in vitro was determined with DPPH and total phenolic tests; the antioxidant evaluation in vivo was performed in diabetic rats administered daily via the intragastric route (100 and 200 mg/kg) for 6 weeks; serum glucose/creatinine, food/water consumption, and urinary volume were measured. Relative weight, protein/DNA ratios and oxidative stress were measured in renal tissue. Results: The extract showed 20.53% total phenolic content and an IC50 of 18.05 µg/mL in the DPPH assay, and this was associated with ferulic acid, phloretin and α-amyrin. Both doses showed a moderate decrease in the protein/DNA ratio in renal tissue, and the same behavior was observed for total urinary protein loss and serum creatinine, while the best antioxidant effect was exerted by the lower dose, which increased catalase activity and decreased lipid peroxidation in the kidneys. Conclusions: The results demonstrated that the methanolic extract of C. pentadactylon flowers improves renal function through antioxidant mechanisms during experimental diabetes. 
Plants 2023, 12, 3572

Introduction Diabetic nephropathy (DN) is one of the most common consequences of diabetes; in the last 15 years, its prevalence has risen from 19% to 30%, with a global prevalence of 1016 per million diabetics [1]. In this context, the prevalence and incidence of DN continue to rise globally, since it is one of the leading causes of death associated with this disease [2,3]. Hyperglycemia during the early stages of diabetes promotes oxidative stress (OS) and a chronic inflammatory state that is initially of low impact but whose contribution increases progressively. Both mechanisms are key to the systemic damage produced by this disease. In the kidneys, hyperglycemia damages the microvasculature, impairing glomerular filtration and causing proteinuria in most patients, with the latter being the first clinical manifestation of DN. A total of 40% of patients with DN progress to end-stage renal disease; even though glycemic control, among other therapeutic actions, improves the patient's life expectancy, end-stage renal failure continues to be one of the main complications associated with mortality due to diabetes. Current allopathic therapy available to treat diabetes focuses only on reducing hyperglycemia, and last-generation drugs intended to restore kidney function during diabetes have poor accessibility and high cost, and can generate adverse side effects such as fluid retention and heart failure [4]. 
In this context, the Mexican population still continues to use medicinal plants as an alternative treatment for these illnesses due to their easy access and low cost; various studies have shown the antioxidant and anti-inflammatory effectiveness of compounds obtained from medicinal plants, which have been studied as a cost-effective treatment for managing patients with chronic illnesses in most developing countries, as they are easily accessible and affordable [5-7]. In traditional Mexican medicine, Chiranthodendron pentadactylon Larreat - commonly known as the "Devil's or monkey's hand tree" or "Mexican hand tree" in English, the "árbol de las manitas" (tree of the little hands) in Spanish, and in the native languages of Mexico as "mācpalxōchitl" (palm flower) in Nahuatl and "Canak or Canac" in the Mayan language - is distributed in the nation's southwest states such as Guerrero, Oaxaca and Chiapas, but it is also cultivated in the central states of Morelos and Michoacán [8]. It has been extensively used since pre-Hispanic times in Meso- and Central America in the treatment of secondary adverse physiological consequences provoked by chronic diseases in vital organs, for example after heart strokes and during metabolic disorders such as diabetes, mostly as a remedy for renal end-stage complications: reducing edema (water retention), regulating high blood pressure as a diuretic, and decreasing serum cholesterol levels, all of which are related to low-impact oxidative and inflammatory processes [9-12]. Chiranthodendron pentadactylon Larreat flowers can still be found sold in markets in Mexico and Guatemala as a common herbal remedy for diabetes affectations such as kidney failure, where they are prepared as an infusion using a tablespoon of four sliced flowers (approximately 10 g) in 500 mL of hot water without boiling, for 10 min, then filtered and taken twice a day for a week [13]. However, to date, the possible beneficial effects of C. 
pentadactylon flower extract as an antioxidant agent in DN during experimental diabetes in laboratory animals have not been evaluated; therefore, in this study, the antioxidant effect of the methanolic extract of C. pentadactylon flowers (MECP) was evaluated on oxidative damage in kidneys caused by streptozotocin (STZ)-induced diabetes in rats.

Of all identified secondary metabolites (SMs), which were compared with pure standards through HPLC, the total quantity summations of each type showed that MECP obtained from the flowers has a higher amount of terpenoids (124.04 mg/g MECP), followed by phenolic acids (19.16 mg/g MECP) and flavonoids (6.10 mg/g MECP).

Antioxidant Potential In Vitro Total phenolic content (TPC) results showed values of 205.27 mg eq GA/g of extract at the 0.1 mg/mL tested concentration, which represents up to 20.53% of the total dry weight of the extract, while the extract exerted a significant scavenging capacity on the DPPH radical, with an IC50 value of 18.05 µg/mL for MECP, compared to an IC50 value of 5.92 µg/mL for pure quercetin. These results are consistent with those shown in the chemical composition analysis of MECP, since SMs such as phenolic acids and flavonoids were identified.

MECP Effect on Glycemia, Total Body Weight and Urinary Volume All altered parameters of the clinical evaluation of the animals, as well as the blood biochemistry analytes, showed no improvement caused by the treatments during the development of experimental diabetes, and are summarized in Table 1. Before STZ induction, all animals presented a normal glycemic state of ≈90 mg/dL. Seventy-two hours after STZ injection, the diabetic-induced groups showed a significant (four-fold) increase in plasma glucose (≈400 mg/dL) compared to the values shown by the healthy rats of the vehicle and E200 groups (≈100 mg/dL). At the end of the experiment, hyperglycemia remained, with no observable differences between the treated groups and the untreated DG group on the final day (≈500 mg/dL), compared to both groups of healthy rats (79 ± 4.63 mg/dL). A consistent total body weight (BW) was seen for the non-hyperglycemic animals of the vehicle and E200 groups (≈360 g) at the experiment endpoint. Nonetheless, total BW showed a significant decrease of almost ≈25% in all four diabetic-induced groups, even in those that received treatments (Table 1). Food and water ingestion increased statistically four-fold in all the diabetic rat groups (≈40 g/24 h and ≈150 mL/24 h, respectively), including those treated with vitamin E and MECP, compared to the results shown by both healthy animal groups (≈10 g/24 h and ≈25 mL/24 h, respectively). Also, urinary volume was robustly and significantly increased ten-fold in the groups with experimental diabetes (≈90 mL/24 h), even in those administered the antioxidant reference and MECP, compared to the data shown by the healthy control rat groups (≈10 mL/24 h) (Table 1). 
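An IC50 such as the 18.05 µg/mL reported above for MECP in the DPPH assay is typically interpolated from a concentration-inhibition curve. A minimal sketch follows, with entirely hypothetical scavenging data, not the study's measurements:

```python
import numpy as np

def ic50_from_curve(conc, inhibition):
    """Linearly interpolate the concentration giving 50% DPPH inhibition
    from a measured dose-response curve (both arrays in ascending order)."""
    return float(np.interp(50.0, inhibition, conc))

# hypothetical DPPH scavenging data for an extract
conc = np.array([5, 10, 20, 40, 80], dtype=float)      # ug/mL
inhibition = np.array([20.0, 35.0, 53.0, 72.0, 88.0])  # % scavenging

ic50 = ic50_from_curve(conc, inhibition)  # interpolated between 10 and 20 ug/mL
```

Linear interpolation between the bracketing points is the simplest approach; published assays often fit a four-parameter logistic curve instead, which gives a smoother estimate from the same data.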
Renal Relative Weight and Protein/DNA Ratios during Experimental Diabetes Kidney weight/body weight index results showed a significant increase in diabetic-untreated animals (4.91 ± 0.25) compared to healthy vehicle-treated rats (2.81 ± 0.03) and to healthy rats administered MECP at 200 mg/kg (2.86 ± 0.1). In contrast, diabetic animals treated with vitamin E exerted a significant reduction in this index (4.27 ± 0.19); for diabetic rats that received MECP treatments at 100 and 200 mg/kg, no differences were observed (4.67 ± 0.08 and 4.85 ± 0.06, respectively), these being statistically similar to the DG group (Figure 4A). On the other hand, a significant increase in the protein/DNA ratio was also shown in the DG group (3.49 ± 0.24) versus the vehicle (2.38 ± 0.06) and E200 (2.62 ± 0.13) healthy rat groups. Treatments (vitamin E and MECP) given to diabetic animals did not change these values, as there were no significant differences between them and the other three groups (3.10 ± 0.24) (Figure 4B). 
Kidney Function in Diabetic Rats Proteinuria results showed a significant increase in total urinary protein loss/24 h (53.84 ± 3.54 mg/24 h) in the DG group compared to the results observed in the vehicle group (14.38 ± 1.19 mg/24 h), with no differences relative to the E200 healthy animal group (17.71 ± 4.89 mg/24 h). Moreover, in diabetic rats treated with VE (250 mg/kg) and MECP at 100 and 200 mg/kg, a significant reduction in proteinuria/24 h of almost ≈38% occurred compared to the diabetic-untreated group (53.84 ± 3.54 mg/24 h) (Figure 5A). 
Regarding serum creatinine, the DG group showed a higher concentration (10.76 ± 1.23 mg/dL) compared to the vehicle and E200 healthy animal groups (5.8 ± 0.42 and 4.95 ± 0.23 mg/dL, respectively), with no statistical differences between these two. On the other hand, the diabetic-treated rat groups showed a significant reduction in serum creatinine levels of almost ≈40% compared to the results of diabetic-untreated rats (10.76 ± 1.23 mg/dL); however, values such as these (5.8 ± 0.42 mg/dL) were exerted by healthy rats that only received vehicles (Figure 5B). Antioxidant Enzyme Activity in Diabetic Rats' Renal Tissue The activity of three enzymes (CAT, SOD and GSH-Px) in diabetic-untreated rats decreased significantly compared to the healthy rat group administered only with vehicles, and there were no statistical differences between this group and healthy rats that received MECP at 200 mg/kg (Figure 6). In the case of the CAT enzyme, the DGE100 experimental diabetic animals presented almost three-fold increased activity after the treatment administration period compared to the results shown by diabetic-untreated rats (195.58 ± 25.5 IU/mg), restoring activity to the values observed in healthy rats from the vehicle (455.45 ± 35.87 IU/mg) and E200 (459.6 ± 76.6 IU/mg) groups. On the other hand, diabetic rats treated with vitamin E at 250 mg/kg and MECP at 200 mg/kg both showed a statistically different increase in CAT activity in renal tissues of almost ≈two-fold compared to the DG-untreated group; however, both treatments were unable to restore it to normal values (Figure 6A). In addition, SOD activity was also significantly lower in diabetic rats administered only with the vehicles (73.45 ± 6.33 IU/mg) vs. non-induced rats treated with the vehicles (146.6 ± 7.3 IU/mg); there were no differences between this latter group and the E200 group (142.23 ± 8.7 IU/mg). In this context, treatment with MECP in the diabetic groups DGE100 and DGE200 restored this antioxidant enzyme activity ≈two-fold, even reaching the values exerted by healthy rats in both control groups; SOD function in the kidneys of diabetic rats was upregulated with vitamin E administration by ≈60% compared to diabetic rats without treatment, although its renal tissue concentrations were not normalized to the basal levels shown by healthy rats (Figure 6B). 
Finally, as with the other two antioxidant enzymes, GSH-Px activity significantly decreased in the diabetic-untreated animal group (23.75 ± 2.96 IU/mg) compared to the healthy rats of both the vehicle (49.32 ± 4.63 IU/mg) and E200 (64.52 ± 5.46 IU/mg) control groups, with no statistical differences between the latter two. In the renal tissues of diabetic animals, the treatments increased GSH-Px activity by 58% in those administered vitamin E, and in a dose-dependent manner in the diabetic rats treated with MECP at 100 and 200 mg/kg, with increments of 90% (p < 0.05) and 120% (p < 0.01), respectively (Figure 6C).
LPO Rate in Renal Tissue

The LPO rate measured in these renal samples corroborated all the previous modifications observed in the antioxidant environment of the diabetic rats' renal tissues. MDA determination showed a significant increase in LPO in the kidneys of diabetic-untreated rats (300.17 ± 7.8 nmol/mg) compared to healthy rats administered only the vehicle (53.16 ± 11.38 nmol/mg), with no statistical difference between the vehicle and E200 control groups (62.02 ± 4.69 nmol/mg) (Figure 7). Meanwhile, the treated groups showed a significant decrease in the renal LPO rate compared to the diabetic-untreated control group: 65% for those administered vitamin E and 53% for diabetic rats administered MECP at 200 mg/kg. Notably, the diabetic animals that received MECP at 100 mg/kg showed a statistically significant 71% decrease in the renal LPO rate compared to the diabetic-untreated group, reaching values comparable to those of the healthy rats in the vehicle group (Figure 7).
Discussion

In this work, the antioxidant effect of Chiranthodendron pentadactylon flowers on kidney function altered during an STZ-induced type 1 diabetes model in male Wistar rats was assessed. This species was chosen for evaluation because of its wide use in traditional Mexican medicine for treating the late adverse sequelae caused, in organs such as the kidney, by the oxidative damage characteristic of the pathophysiology of chronic degenerative diseases such as diabetes [17-20]. However, this is the first evidence of the beneficial antioxidant contribution of this species in DN, one of the greatest complications suffered by patients with diabetes, through the regulation of renal function by decreasing the OS generated by hyperglycemia. This supports its ethnomedicinal use for regulating blood pressure and suggests that it may act on diabetes through mechanisms of action other than hypoglycemia [11,13,21].

After chemical fractionation of MECP, preliminary chemical composition screening was positive for phenolic compounds, the most important and well-studied SMs with antioxidant properties [22], which are well known for their direct inhibition of the chemical structures of free radicals. The beneficial mechanism proposed for MECP may be similar to those described in previous studies of other medicinal plants with high TPC, such as Nelumbo nucifera (lotus flower) [5] and Camellia sinensis (green tea) [23], where the nephroprotective effect was correlated with the antioxidant effect against the oxidative damage provoked by experimental diabetes in vivo. For in vitro antioxidant potential assays, the lower the IC50 value, the greater the reducing capacity. However, it must be considered that the positive controls (quercetin and gallic acid), against which MECP was compared in both in vitro techniques, are pure phenolic compounds whose antioxidant capacity has been widely verified [24], whereas MECP is a complex mixture of not only polar compounds but also SMs of an apolar nature. Furthermore, these results are consistent with those reported by Ibarra-Alvarado et al. [13], who demonstrated a positive correlation between the TPC of the direct aqueous extract of C. pentadactylon flowers (221.4 ± 0.6 mg eq GA/g of extract) and its in vitro antioxidant potential, such as the direct inhibition of DPPH free radicals (IC50 = 140.6 ± 1.0 µg/mL). Although lower TPC values were obtained for MECP in this work, which included a degreasing pretreatment with hexane, the DPPH IC50 value for MECP was nevertheless higher than that of the direct aqueous extract reported by those authors. In the same context, Villa-Ruano et al.
[20] also reported a DPPH IC50 value of 102.60 ± 6.60 µg/mL for the direct ethanolic extract of this species' shoots and leaves, which is higher than that observed for MECP.

In relation to the in vivo results, STZ, due to its similarity to the glucose molecule, enters pancreatic β cells through the GLUT-2 transporter and produces cellular necrosis in the pancreas. Consequently, insulin production by these cells is impaired and lost over time, causing hyperglycemia [31]. One of the most important inclusion parameters when studying experimental diabetes in rodents is a blood glucose level of >300 mg/dL at 72 h after STZ administration, provoking hyperglycemia maintained throughout the 6-week experiment, along with all the classic manifestations of this illness, comprising polydipsia, polyphagia, polyuria and loss of body weight [32]; these were not modified by daily MECP administration over the 6-week treatment period in this study.

By analyzing the protein/DNA ratio, a mild improvement could be attributed to the MECP and vitamin E treatments during experimental diabetes, as they reduced this preliminary indicator of possible renal hypertrophy. Although the kidney's relative weight and its protein/DNA ratio, together with the dysregulation of renal function demonstrated by serum creatinine elevation and proteinuria in STZ-induced diabetic rats, are important indicators of possible incipient renal hypertrophy, as described in previous studies [33], specific histological analyses of renal tissue are needed in subsequent experiments to confirm that treatment with MECP can reduce renal hypertrophy and prevent the appearance of kidney edema associated with a decrease in the organ relative weight ratio [6].
Among other aspects, DN develops concomitantly with hyperglycemia, and the severity and progression of the disorder to a chronic stage are reflected in the biochemical parameters of proteinuria and serum creatinine. Both are useful in experimental models of STZ-induced diabetic animals and in clinical studies, helping in the early diagnosis of kidney injury in patients with type 1 diabetes [31], and they serve as indicators of improvement when reversed by new nephroprotective molecules [21].

It is well known that chronic hyperglycemia depletes kidney function and favors the appearance of renal oxidative damage caused by a sustained OS microenvironment. Late-stage diabetes is also characterized by the activation of oxidative inflammatory pathways, altered renal antioxidant enzyme function, and exacerbated LPO of renal tissue [34-36].

As evidence of the relationship between OS and kidney damage, in this work, proteinuria, high serum creatinine and an increase in the relative weight ratio were documented in diabetic-untreated rats as possible signs of established renal OS and inflammation. Nevertheless, specific biomarkers of inflammation must be evaluated to elucidate this issue. Although the STZ model is used to experimentally emulate diabetes in the search for new hypoglycemic agents, previous work has described its use to evaluate the antioxidant activity of new molecules against the OS associated with these chronic degenerative diseases and their adverse sequelae, and how biological systems such as the neuronal and renal systems are affected [36-39].
For this reason, authors who have studied the antioxidant effect of various substances, such as vitamins E and C [33], did not use a hypoglycemic reference drug, since hypoglycemia is not the mechanism through which a protective effect on other tissues against hyperglycemia-induced OS is expected. In this context, Niu et al. [40] demonstrated, in a study of experimental STZ-induced diabetes in rats, that the administration of Eucommia ulmoides root extract, although it did not decrease hyperglycemia, helped to improve renal integrity and function, without using a hypoglycemic agent as a control.

These nephroprotective effects of various medicinal plants have even been shown to occur independently of hypoglycemic activity [5], pointing to phenolic compounds as responsible for inhibiting the formation of advanced glycation end products (AGEs) [41] and reactive oxygen species (ROS) during DN development in STZ-induced experimental diabetes in murine models [42].

These renal structural and functional alterations were proportional to the decrease in antioxidant enzyme activities and the higher quantity of MDA in the kidney homogenates of diabetic-untreated rats. However, renal function parameters improved markedly in diabetic rats treated with MECP (decreased proteinuria and serum creatinine); this may be driven by MECP's high content of ferulic acid, which can regulate renal creatinine clearance values by also decreasing OS damage in renal tissue [21].
Given the lack of hypoglycemic activity of MECP, it can be inferred that any nephroprotective effect observed with the treatments is unrelated to the catalytic pathway of glucose metabolism, and might instead work through a diuretic effect of one of its main SMs identified in this work, such as syringic acid [43], vanillic acid [44], ferulic acid [21] and α-amyrin [45], which can inhibit the NF-kB pathway, also related to high blood pressure (this being another ethnomedicinal use of the species), with a beneficial impact on the renal system during diabetes, as with the other SMs identified in previous works by other authors, such as (-)-epicatechin [16,46].

It is worth highlighting, relative to the 6-week treatment in this work, the results obtained by Kędziora-Kornatowska et al. [6], who demonstrated the progressive improvement of the kidneys' endogenous antioxidant defense parameters by testing the effects of vitamins E and C on STZ-induced experimental DN from week 6 to week 12 of treatment; at day 42, the authors identified an increase in antioxidant enzyme function. However, prevention of renal hypertrophy and complete improvement in kidney function were only significant after 12 weeks of both vitamin treatments. Similar results regarding antioxidant enzyme activities were also reported previously for C. pentadactylon extract by Segura-Cobos et al. [33] in an STZ-induced DN model in rats.

In this context, this work showed that MECP had a protective effect in the course of DN associated with STZ-induced diabetes, mostly through the antioxidant effect exerted by some of the phenolic compounds identified in C.
pentadactylon flowers in previously published studies, such as (-)-epicatechin [16], a catechin that could exert nephroprotection against the OS provoked by diabetes in chronic stages through direct structural inhibition of free radicals, as also demonstrated for other medicinal species such as Camellia sinensis in chronic degenerative models in rats [47], as well as ferulic acid, a major SM identified in MECP, which can improve antioxidant enzyme activities and prevent LPO of renal tissue [21]. Moreover, the enhancing effect that certain SMs, such as syringic acid [43], phloretin [48] and α-amyrin (an inhibitor of cytochromes P450), exert on the activity of antioxidant enzymes (CAT, SOD and GSH-Px) reduces the amount of ROS produced in tissues where these enzyme systems are abundant, such as the kidneys [49], as demonstrated in previous works, and the effect generated by MECP can be attributed to this. Therefore, it is likely that the renal regulation mechanism of MECP is triggered by its antioxidant effect, due to its high phenolic compound content, thus preventing LPO and reestablishing the normal function of antioxidant enzymes at the renal level during experimental STZ-induced diabetes, even more effectively than treatment with vitamin E.
The efficiency and mechanisms of action of vitamin E as an antioxidant agent have been widely documented; it works by stimulating the activity of antioxidant enzymes and preventing the translocation of inflammatory factors, such as NF-kB, to the nucleus [7,33,50,51]. MECP possibly has an anti-inflammatory effect due to its NF-kB-inhibiting SMs, such as syringic acid [43] and ferulic acid [21], which show this activity in in vitro tests, as well as vanillic acid [44] and α-amyrin [45], which achieve NF-kB inhibition in acute and chronic inflammation in in vivo mouse models. Furthermore, although phloretin does not inhibit NF-kB, it decreases the TNF-α-stimulated gene expression of vascular adhesion proteins, preventing leukocyte migration [52], thereby limiting the oxidative burst of inflammation as well as the oxidative damage in renal tissue during experimental diabetes; however, other possible mechanisms of action should be investigated further.

In the past, interest in new therapies for the treatment of diabetes was limited to hypoglycemic agents. Currently, it is known that, although the main source of damage in diabetic patients is sustained hyperglycemia, restoration of the organism's endogenous antioxidant defenses is not achievable with glycemia normalization alone, and that there are benefits in complementing glycemic management with antioxidant co-adjuvant therapies, which counteract the oxidative damage mechanisms related to chronic complications of diabetes such as DN [21]. The foregoing provides consistent evidence of the benefit of antioxidant molecules as a complementary treatment to the pharmacological control of the complications of this disease.

Plant Specimen

Chiranthodendron pentadactylon Larreat flowers were obtained from Mercado Pasaje Catedral, located at República de Guatemala street #10, Col.
Centro, PC 06000, Mexico City (coordinates 19°26′3.403″ N, 99°7′31.173″ W), in March 2018. Flowers that were complete, fully formed, not in bud, without color changes, and free of spots or pest contamination were cut and selected for this study. Complete dried plant samples were deposited at the Herbario Iztacala herbarium, identified as Chiranthodendron pentadactylon Larreat by the biologist Ma. Edith López Villafranco, and registered with the batch specimen number 2949 IZTA. Based on data described in the International Plant Names Index, accessed through http://www.theplantlist.org/ (accessed on 20 July 2023), it was determined that this species belongs to the taxonomic family Sterculiaceae (https://www.ipni.org/n/822543-1, accessed on 20 July 2023).

Obtention of the Methanolic Extract of Chiranthodendron pentadactylon Flowers

Selected flowers were air-dried at room temperature (25 ± 5 °C) in dark conditions for three days, and then crushed with a mechanical mill into a powder. Extraction of the flower powder (135 g) was performed with Soxhlet equipment ("IMPARLAB" brand, 50 mL, nozzles 55/50 and 24/40) [19] using 300 mL of hexane at 30 °C, and subsequently the same volume of methanol at 60 °C for 24 h. The residue was filtered and concentrated at 40 °C using a rotary evaporator (Buchii RE-111, Buchii, Meierseggstrasse 40, 9230 Flawil, Switzerland) coupled to a vacuum system (Buchii Vac V-153, Buchii, Meierseggstrasse 40, 9230 Flawil, Switzerland) and a cooling system (ECO20, Atlas Copco Group, Sweden)
and finally refrigerated at 4 °C in dark conditions until use. The extraction was carried to dryness under reduced pressure for the total elimination of alcohol. For the in vivo experiment, MECP was dissolved in propylene glycol (10%) in water, because previous studies have shown that a possible nephrotoxic effect of this solvent in rats occurs only with oral administration of concentrated solutions (45%), at doses of 1000 mg/kg, for periods of 28 to 90 days [53-55]. Although the ethnomedicinal use of this species is described as a hydroalcoholic preparation or a water infusion, a methanolic extract was chosen to obtain greater chemical complexity in the extract and thereby identify and quantify the greatest number of secondary metabolites present in the flowers, mainly flavonoids, which are highly polar and have shown greater antioxidant potential and effect in previous studies.

The experimental conditions used for the phenolic acid search were a Macherey-Nagel Nucleosil column (5 µm, 125 × 4.0 mm i.d.), with a gradient of (eluent A) water acidified to pH 2.5 with TFA (trifluoroacetic acid) and (eluent B) CH3CN (acetonitrile). Other experimental parameters included a flow rate of 1 mL/min, an injection volume of 20 µL, a temperature of 30 °C, peak detection at a wavelength of 280 nm, and an analysis time of 23 min. Caffeic, gallic, chlorogenic, vanillic, ferulic, p-coumaric and syringic acids were used as pure standards [56].
The experimental conditions used for terpenoid identification were a ZORBAX Eclipse XDB-C8 column (5 µm, 125 × 4.0 mm i.d.). Isocratic analysis was performed using CH3CN 80% as eluent A and H2O 20% as eluent B, with the following experimental parameters: flow rate of 1 mL/min, injection volume of 20 µL, temperature of 40 °C, peak detection at wavelengths of 215 and 220 nm, and an analysis time of 21 min. The terpenoids carnosol, ursolic acid, stigmasterol, oleanolic acid, α-amyrin and β-sitosterol were used as pure standards [56].

The Folin-Ciocalteau method was executed using gallic acid as the reference standard [57,58]. Each experiment was carried out in triplicate, and the results were expressed as mg equivalents of gallic acid (GA) per g of dried extract (mg eq GA/g dried MECP extract). A calibration curve was constructed using pure gallic acid diluted in deionized water over a concentration range of 0.02 to 0.12 mg/mL (R² = 0.99, y = 13.398x + 0.0013). The MECP sample was evaluated using a stock solution of 0.2 mg/mL in deionized water: 250 µL was taken and deionized water was added to a final volume of 1 mL, after which 500 µL of Folin-Ciocalteau reagent and 1.5 mL of sodium carbonate were added. The reaction was then left to incubate at room temperature for two hours. All absorbances were measured with a spectrophotometer (Shimadzu Double Beam Scanning UV-Vis, Model UV-1700) at 750 nm.

In Vivo Models

4.5.1.
Experimental Animal Conditions

Thirty-six adult male Wistar rats, with a body weight (BW) of 240 ± 10 g, were obtained from the Facultad de Estudios Superiores Iztacala vivarium. For the purposes of this experimental model, the influence of biological sex on the results is negligible. The animals were housed in plastic cages during a 7-day conditioning period in the Pharmacology Laboratory of the Interdisciplinary Research Unit in Health Sciences and Education; prior to and during the experiments, the rats were maintained under the following laboratory-controlled room conditions: temperature of 25 ± 2 °C, automated 12 h/12 h light/dark cycles, and humidity saturation of 55-80%, with daily changes of food (RodentChow®, PURINA Company, France) and filtered water. Food and water consumption were measured during the experimental protocol.

Procedures conducted on the laboratory animals followed the statutes of the International Committee for the Care and Use of Laboratory Animals (IACUC), as well as the following animal research guidelines: Reporting of In Vivo Experiments (ARRIVE), EU Directive 2010/63/EU for animal experiments, the National Research Council's Guide for the Care and Use of Laboratory Animals, and the procedures described in the Mexican Official Norm (NOM-062-ZOO-1999, modified in 2001) [60], revised in 2023 and entitled "Technical specifications for the production, care and use of laboratory animals". The experimental protocol of the project was approved by the Research Ethics Committee of the National School of Biological Sciences (ENCB), with registration number CEI-ENCB-ZOO-02-2022.

Two days before the end of the experiment, all experimental animals were placed in metabolic cages for 24 h to measure water and food consumption, and to collect and register urinary volume. Urine samples were kept frozen at −4 °C for later biochemical evaluations.

Rats were anesthetized with pentobarbital (45 mg/kg) via the i.p.
route, and a laparotomy was performed to obtain blood samples from the right renal artery without anticoagulant to obtain serum, and afterwards the kidney tissues. The right kidneys were decapsulated, weighed, and separated into cortex and medulla to be stored in an ultra-freezer at −80 °C.

Determination of Renal Relative Weight and Protein/DNA Ratios

The right kidney was weighed to calculate the organ weight/BW ratio (both measures in mg); afterwards, a 500 mg sample was homogenized as described for the protein/DNA ratio measurement with the TRIzol reagent method (Invitrogen, Grand Island, New York, NY, USA; TRIzol, cat. no. 15596-018, lot 50563207), according to Segura-Cobos et al. [33] and Amato et al. [62].

Tissue Preparation

Renal cortex (100 mg) was homogenized in Eppendorf tubes with 1 mL of cold phosphate buffer (50 mM, pH 7) and a protease inhibitor mixture (Complete Mini Tablets, Roche; one tablet per 10 mL of buffer solution [SKU 11697498001, Sigma-Aldrich Chem. Co., St. Louis, MO, USA]) at 4 °C and 10,000 r.p.m., in three cycles. Afterwards, 1 mL of each homogenate was centrifuged at 10,500 r.p.m. and 4 °C for 15 min, and supernatant aliquots were subsequently taken for each evaluation. The remaining volume of uncentrifuged homogenate was kept at −80 °C until further use.

Statistical Analysis

GraphPad Prism ver. 8 software was used for the analysis of results and graph preparation. Data were presented as mean ± standard error of the mean (SEM) for quantitative results. Multiple comparisons between groups were performed using one-way analysis of variance (ANOVA) followed by Tukey's post hoc test; values of p < 0.05 were considered statistically significant.
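The group comparisons described above (one-way ANOVA followed by Tukey's post hoc test, with p < 0.05) can be sketched in Python. The group values below are invented for illustration, not the study data; the hand-computed F statistic is cross-checked against SciPy's implementation.

```python
import numpy as np
from scipy import stats

# Illustrative enzyme-activity values (IU/mg) for three groups;
# these numbers are invented for the example, NOT the study data.
groups = {
    "vehicle":  np.array([140.0, 150.2, 148.1, 143.5, 151.0, 146.8]),
    "diabetic": np.array([70.1, 75.3, 68.9, 77.0, 72.5, 74.8]),
    "treated":  np.array([130.4, 138.2, 141.0, 128.7, 135.5, 139.9]),
}
data = list(groups.values())

# One-way ANOVA by hand: partition total variability into
# between-group and within-group sums of squares.
all_vals = np.concatenate(data)
grand_mean = all_vals.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in data)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in data)
df_between = len(data) - 1
df_within = all_vals.size - len(data)
F_manual = (ss_between / df_between) / (ss_within / df_within)
p_manual = stats.f.sf(F_manual, df_between, df_within)

# Cross-check against SciPy's built-in one-way ANOVA.
F_scipy, p_scipy = stats.f_oneway(*data)

# Tukey's HSD pairwise comparisons would follow once F is significant
# (e.g. with statsmodels.stats.multicomp.pairwise_tukeyhsd).
print(f"F = {F_manual:.2f}, p = {p_manual:.3g}")
```

A significant omnibus F (p < 0.05) justifies the pairwise Tukey comparisons; the manual computation makes the between/within variance partition explicit.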
Conclusions

The results of this project indicated that the Chiranthodendron pentadactylon flower extract and its constituents are promising nephroprotective supplements that restored renal function in rats with experimental diabetes, in the treatment of complications derived from diabetes such as nephropathies, mainly through their antioxidant mechanism, stimulating endogenous enzymatic defense and preventing lipid peroxidation in kidney tissues.

4.3. Chemical Characterization of MECP with HPLC

Phytochemical analysis for the identification and quantification of the main secondary metabolites (SMs) contained in MECP was carried out by high-performance liquid chromatography (HPLC) on a Hewlett Packard liquid chromatograph (model 1100), equipped with an Agilent Technologies automatic injector (model 1200), a Hewlett Packard diode array detector (model 1100), and a Hewlett Packard quaternary pump (model 1100). Management and programming of the equipment, and data registration and processing, were performed using the Agilent Technologies 2006 B.02.01 program (ChemStation Family Software Products, version Rev B.02.01, 2006). All standards used showed 90 to 99% purity and were purchased from Sigma-Aldrich, St. Louis, MO, USA.

4.4.
In Vitro Antioxidant Potential Techniques

4.4.1. Total Phenolic Content (TPC)

Author Contributions: E.S.-B.: writing-original draft preparation, methodology of in vitro assays and in vivo models, formal statistical analysis and data curation; D.S.-C.: methodology of in vitro assays; R.P.-P.-B. and G.A.C.-C.: conceptualization, visualization, resources, supervision, project administration and funding acquisition; G.A.G.-R.: writing-reviewing and editing and discussion investigation; M.E.G.-A. and R.S.M.-C.: methodology, validation and formal analysis of HPLC chromatograms; J.M.C.-L. and E.M.-S.: resources and methods. All authors have read and agreed to the published version of the manuscript.

Funding: This work was supported by the National Council of Science and Technology (CONACyT) through the financing obtained for the project registered with the number A1-S-5231; the authors also acknowledge the support granted to Santiago-Balmaseda through a postgraduate scholarship (CVU number 963347).

Table 1. Effect of MECP on glycemia, body weight, food and water consumption and urinary volume after 6 weeks of treatment in diabetic rats.
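As a worked illustration of the TPC quantification described in the Methods (Folin-Ciocalteau calibration line y = 13.398x + 0.0013, R² = 0.99, absorbance versus mg GA/mL), absorbance readings can be converted back to gallic-acid equivalents. This is a hypothetical sketch: the absorbance values and the dilution accounting (0.05 mg extract in a ~3 mL reaction, inferred from the stated volumes) are illustrative assumptions, not the study data.

```python
# Hypothetical sketch of the TPC calculation; assumes the calibration
# line y = 13.398x + 0.0013 (absorbance vs. mg GA/mL) reported above.

def ga_equivalents(absorbance, slope=13.398, intercept=0.0013):
    """Invert the gallic-acid calibration line: absorbance -> mg GA/mL."""
    return (absorbance - intercept) / slope

# Illustrative triplicate absorbance readings at 750 nm (NOT study data).
absorbances = [0.045, 0.047, 0.046]
conc = [ga_equivalents(a) for a in absorbances]  # mg GA/mL in the cuvette

# Extract present in the reaction (assumption inferred from the Methods):
# 250 uL of a 0.2 mg/mL stock = 0.05 mg extract, in a ~3 mL final volume
# (1 mL diluted sample + 0.5 mL Folin-Ciocalteau + 1.5 mL carbonate).
extract_mg_per_ml = (0.250 * 0.2) / 3.0

# mg GA per g of dried extract (the factor 1000 converts mg -> g extract).
tpc = [1000.0 * c / extract_mg_per_ml for c in conc]
mean_tpc = sum(tpc) / len(tpc)
print(f"TPC ~ {mean_tpc:.1f} mg eq GA/g dried extract")
```

With these illustrative readings the result lands near 200 mg eq GA/g, in the same order of magnitude as the TPC values discussed for C. pentadactylon extracts.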
Assessing the Accuracy of GEDI Data for Canopy Height and Aboveground Biomass Estimates in Mediterranean Forests

The Global Ecosystem Dynamics Investigation (GEDI) satellite mission is expanding the spatial bounds and temporal resolution of large-scale mapping applications. Integrating the recent GEDI data into Airborne Laser Scanning (ALS)-derived estimations represents a global opportunity to update and extend forest models based on area-based approaches (ABA), considering temporal and spatial dynamics. This study evaluates the effect of combining ALS-based aboveground biomass (AGB) estimates with GEDI-derived models by using temporally coincident datasets. A gradient of forest ecosystems, distributed through 21,766 km² in the province of Badajoz (Spain), with different species and structural complexity, was used to: (i) assess the accuracy of GEDI canopy height in five Mediterranean ecosystems and (ii) develop GEDI-based AGB models using ALS-derived AGB estimates at GEDI footprint level. In terms of Pearson's correlation (r) and rRMSE, the agreement between ALS and GEDI statistics on canopy height was stronger in the denser and more homogeneous coniferous forests of P. pinaster and P. pinea than in sparse Quercus-dominated forests. The GEDI-derived AGB models using relative height and vertical canopy metrics yielded a model efficiency (Mef) ranging from 0.31 to 0.46, with an RMSE ranging from 14.13 to 32.16 Mg/ha and an rRMSE from 38.17 to 84.74%, at GEDI footprint level by forest type. The impact of forest structure confirmed previous studies' findings, since GEDI data showed higher uncertainty in highly multilayered forests. In general, GEDI-derived models (GEDI-like Level 4A) underestimated AGB over the lower and higher ALS-derived AGB intervals. The proposed models could also be used to monitor biomass stocks at large scale by using GEDI footprint-level data in Mediterranean areas, especially in remote and hard-to-reach areas for forest inventory.
The findings from this study serve to provide an initial evaluation of GEDI data for estimating AGB in Mediterranean forests.

Introduction

Based on the general FAO definition of forests, there were an estimated 88 million ha of forest area in Mediterranean countries in 2015, occupying 10.04% of the total area. [...] Many researchers could consider using GEDI data as the only source on which to rely for forest structure estimation, in the absence of multi-temporal ALS in the near future. Hence, developing GEDI-based AGB models for Mediterranean areas using existing ALS-based AGB estimates, towards the domain of new spaceborne datasets (specifically those designed for ecosystem structure observation, i.e., GEDI, ICESat-2, NASA's NISAR, ESA's BIOMASS), is a relevant exercise, since satellite-derived data offer unique opportunities to produce large-scale estimates of 3D forest structure. Several studies have assessed the accuracy of GEDI- and ICESat-2-derived canopy heights [10,24,25,28] and AGB estimates [15,20,21,29] in different ecosystems around the world. However, canopy height was validated using temporally non-coincident or simulated ALS data [10,24,25,28], and simulated GEDI and ICESat-2 data were used to calibrate global AGB models [15,20,21,29]. Hence, it is important, first, to evaluate the accuracy of post-launch GEDI data and products, since it might differ from that of the simulated GEDI data, and second, to calibrate/validate local and regional spaceborne LiDAR-AGB models. The dynamics of forest ecosystems are especially challenging in sparse ecosystems, for which the interpretation of laser echoes remains more uncertain than in full-cover conditions, where the upper canopy layer captures the energy from laser beams more uniformly. The case of Mediterranean forests could provide valuable insights into how forest horizontal complexity determines the performance of GEDI-based estimates when compared to ALS-based estimation.
The study analyzes a gradient of forest ecosystems from sparse to dense forest cover conditions to show the performance of ABA models when applied using recent ALS surveys (2018-2019) and recent GEDI scanning shots (2019). The spatial and temporal co-registration between ALS and GEDI shots was specifically considered in this investigation. To the best of our knowledge, no studies have been conducted in Mediterranean areas focused on developing AGB models using ongoing satellite LiDAR missions. This is a crucial step to better understand forest AGB distribution and spatial changes in terrestrial carbon fluxes. Therefore, the main goal of this work is to assess the main capabilities of the GEDI sensor for estimating canopy height and AGB over five different Mediterranean forest ecosystems in south-west Spain. To fulfill this goal, two specific objectives were defined: (1) assess the accuracy of GEDI-derived canopy height, and (2) quantify the performance of the GEDI-derived models based on canopy metrics (height and cover) in predicting AGB. To do that, the ALS-derived canopy height and AGB computed for the study area were used as reference data.

Study Area

The study area was located in the province of Badajoz (region of Extremadura, southwest Spain). Badajoz is the largest province in Spain, covering 21,766 km2 (Figure 1). The Spanish Forest Map (SFM) and the sampling design of the SNFI-4 were used in this research to cover a wide range of Mediterranean forest types. The latest version of the SFM, at 25-m resolution, was used to select the forest areas corresponding to the most representative species in Extremadura. We selected five forest ecosystems: (i) Dehesas: an agro-forestry-pastoral ecosystem that contains scattered tree cover (60-100 trees per ha) dominated by even-aged old-growth evergreen oaks (Quercus spp.)
usually with an absence of natural regeneration due to the presence of pastures and agricultural fields as undercover; (ii) Encinares: uneven-aged sparse oak forest (Quercus ilex subsp. ballota (Desf.) Samp.); (iii) Alcornocales: even-aged multilayered forest dominated by the cork oak (Quercus suber L.); (iv) Pinaster: even-aged forests of Pinus pinaster subsp. mesogeensis Aiton; and (v) Pinea: even-aged forests of Pinus pinea. According to the SFM, the analyzed forest stands cover a total of 822,623 ha distributed as follows: 664,529.97 ha (Dehesas), 98,950.11 ha (Encinares), 14

Airborne Laser Scanning Acquisition and Processing

ALS campaigns can be considered temporally coincident with the GEDI track, since there was a maximum of 1 year between the ALS acquisition date and the set of analyzed GEDI full-waveform (FW) beam footprints. Two sets of ALS point clouds were processed in this study: (i) Extremadura North (EXT-N, collected from October 2018 to March 2019) and (ii) Extremadura South (EXT-S, collected from October 2018 to July 2019). Both datasets correspond to the second round of countrywide ALS measurements, which are publicly available in Spain through the PNOA project (Plan Nacional de Ortofotografía Aérea). Squared ALS blocks of 2-km side, covering the whole region of Extremadura, were obtained from the CNIG ("Centro Nacional de Información Geográfica", http://centrodedescargas.cnig.es/CentroDescargas/index.jsp, accessed on 1 June 2020) to cover the province of Badajoz. The scanning sensors involved in collecting the ALS data in the study area were a RIEGL LMS-Q1560 for the EXT-N dataset and a LEICA ALS80 for the EXT-S dataset. The nominal laser pulse density varied between 2 points m−2 in EXT-N and 1 point m−2 in EXT-S. The vertical accuracy of the scanning survey was 0.15 m for both ALS datasets.
The processing workflow comprised the following steps. Firstly, the thindata command implemented in the FUSION software [30] was used to reduce the nominal pulse density to 1 point m−2 in order to homogenize the results from both datasets (EXT-N, EXT-S). Secondly, the ALS datasets were processed using the LAStools software [31]. A detailed description of the software parametrization and processing workflow is provided in Pascual et al., 2020 [4]. Briefly, lasheight was used to normalize the classified point cloud of ALS echoes. The lascanopy command was used to extract the metrics from the normalized ALS point cloud using a buffer of 12.5 m radius around the center of each GEDI footprint (Table 1). Finally, the above-ground height of ALS echoes was used to distinguish tree canopies (echoes above 2 m) from the shrub layer (echoes below 2 m) when computing the ALS height statistics (lascanopy parameters: height_cutoff = 2, cover_cutoff = 2) [32] (Table 1). Table 1. Set of statistics derived from ALS data computed for the ground-footprint location of GEDI laser beams across the training area.
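The footprint-level statistics just described (height percentiles of canopy echoes and canopy cover, using the 2-m cutoff passed to lascanopy) can be sketched in a few lines. This is an illustrative re-implementation, not LAStools code; the function name and return layout are invented for the example:

```python
import numpy as np

def footprint_metrics(heights, cutoff=2.0):
    """Height percentiles and canopy cover for one 25-m GEDI footprint.

    heights: above-ground heights (m) of ALS echoes inside the footprint
    buffer. Echoes above `cutoff` are treated as canopy and echoes below it
    as shrub/ground, mirroring height_cutoff = cover_cutoff = 2 in the text.
    """
    h = np.asarray(heights, dtype=float)
    canopy = h[h > cutoff]
    cover = 100.0 * canopy.size / h.size            # canopy cover CC_ALS (%)
    if canopy.size == 0:
        return {"p95": 0.0, "p98": 0.0, "p99": 0.0, "cover": 0.0}
    p95, p98, p99 = np.percentile(canopy, [95, 98, 99])
    return {"p95": p95, "p98": p98, "p99": p99, "cover": cover}
```

Applied to every GEDI shot center with a 12.5-m radius buffer, this yields the p95/p98/p99 and CC_ALS values benchmarked against the GEDI rh metrics below.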
GEDI Data Acquisition and Processing

GEDI-derived Level 2A (L2A) and Level 2B (L2B) data were used in this study (Table 2). GEDI L2A data contain the latitude, longitude, elevation, canopy height, and surface energy metrics (rh0, rh10, ..., rh90, rh95, rh98, rh99, rh100) extracted from return waveforms of the various reflecting surfaces located within each laser footprint [33].
The GEDI L2B standard data product adds vertical profile metrics: the canopy cover (CC GEDI), plant area index (PAI), estimated vertical canopy directional gap probability for the selected L2A algorithm (PGP_THT), and foliage height diversity index (FHD) for each laser footprint located on the land surface [34]. The bounding box of the Badajoz province was used to select all the GEDI beams within the study area. There were 99 GEDI orbit tracks in HDF5 format available as of 15 July 2020 (Figure 1). The GEDI laser shots considered in this research were collected in 2019 between 21 April and 31 October. The Quality Flag (QF) attribute was used to disregard all GEDI shots classified as 0, meaning that the technical and quality attributes for a given shot number (identifier) did not meet the standards. A QF value of 1 indicates that a given shot number meets quality criteria based on energy, sensitivity, amplitude, and real-time surface tracking, and therefore these shots were used for further analysis. See details of the interpretation of the L2A and L2B QF in [33,34]. The rGEDI package [35] was used to retrieve and process the GEDI data under version 3.6 of the R statistical software [36] (R Core Team 2020). The benchmark between ALS and GEDI was carried out within the boundaries of SFM polygons intersecting GEDI ground tracks. First, we selected GEDI shots completely contained within the forest type-specific SFM polygons (Dehesas, Encinares, Alcornocales, Pinaster, and Pinea). Second, GEDI shots located further than 30 m from the edge of SFM boundaries were selected [27]. A final filter disregarded GEDI shots above the 99th percentile values observed in SNFI-4 plots in Badajoz. The total set comprised 63,135 shots: 38,983 for Dehesas, 15,958 for Encinares, 3026 for Alcornocales, 1534 for Pinaster, and 3634 for Pinea.
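The shot-screening steps above (QF = 1, more than 30 m from SFM polygon edges, below the SNFI-4 99th-percentile height cap) can be sketched as a simple filter. The dictionary keys used here are illustrative stand-ins, not the actual L2A HDF5 attribute names:

```python
def select_shots(shots, max_height=None):
    """Keep only GEDI shots that pass the three screening rules in the text.

    shots: iterable of dicts with illustrative keys 'quality_flag',
    'dist_to_edge' (m, distance to the SFM polygon boundary) and 'rh98' (m).
    max_height: optional cap, e.g. the 99th percentile of SNFI-4 plot heights.
    """
    kept = []
    for s in shots:
        if s["quality_flag"] != 1:          # fails mission quality standards
            continue
        if s["dist_to_edge"] <= 30.0:       # too close to SFM polygon edge
            continue
        if max_height is not None and s["rh98"] > max_height:
            continue                        # above the SNFI-4 percentile cap
        kept.append(s)
    return kept
```

In the real workflow these attributes come from the L2A/L2B HDF5 granules via rGEDI, and the edge distance from a GIS overlay with the SFM polygons.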
Finally, the matching between the GEDI- and ALS-derived canopy height metrics was evaluated in terms of Pearson's correlation coefficient (r, Equation (1)), the overall root mean square error (RMSE, Equation (2)), the relative root mean square error (rRMSE, Equation (3)), bias (Equation (4)), and rBias (%) (Equation (5)):

r = Σ(xi − x̄)(yi − ȳ) / √(Σ(xi − x̄)2 · Σ(yi − ȳ)2) (1)
RMSE = √(Σ(yi − xi)2 / n) (2)
rRMSE = 100 × RMSE / x̄ (3)
Bias = Σ(yi − xi) / n (4)
rBias = 100 × Bias / x̄ (5)

where n is the number of GEDI shots, xi is the ALS elevation metric in m for the 25-m diameter GEDI footprint of shot i, yi is the corresponding elevation metric estimated from the 25-m GEDI footprint in the GEDI Level 2A product, x̄ is the mean of the ALS-estimated values at GEDI footprint level, and ȳ is the mean of the GEDI-estimated metric at GEDI footprint level.

Field Data Acquisition

The field data used for this study were obtained from the SNFI-4 in Extremadura. A total of 508 plots, georeferenced with high-end positioning equipment, were used to calibrate forest type-specific ALS-derived AGB models (Table 3). The field measurements of the SNFI-4 campaign in Extremadura were carried out during the year 2017. The uncertainty in the co-registration between ALS and field measurements was mitigated using high-performance global navigation satellite systems (GNSS) to improve positioning information. A handheld data collection system (TRIMBLE Juno 5B handheld, Trimble Inc., Sunnyvale, CA, USA) was used to determine the coordinates (1-2 m positioning error after postprocessing) during field measurements. For further details of the procedure conducted to obtain the field data, see the protocol outlined in Álvarez-González et al., 2014 [6]. Table 3. Summary of ground data collected in the 4th National Forest Inventory (SNFI-4) for the five forest ecosystems. Plot-level estimates are presented for aboveground biomass (AGB, Mg ha−1), stand basal area (G, m2 ha−1), and tree density (N, trees ha−1).
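The agreement statistics follow their standard definitions, with bias computed as GEDI minus ALS so that negative values indicate GEDI underestimation of the ALS reference. A minimal sketch (the function name and dict keys are illustrative):

```python
import math

def agreement_stats(x, y):
    """Pearson r, RMSE, rRMSE (%), bias and rBias (%) between an ALS
    reference metric x and its GEDI estimate y, per footprint."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    r = sxy / math.sqrt(sxx * syy)
    rmse = math.sqrt(sum((yi - xi) ** 2 for xi, yi in zip(x, y)) / n)
    bias = sum(yi - xi for xi, yi in zip(x, y)) / n   # GEDI minus ALS
    return {"r": r, "RMSE": rmse, "rRMSE": 100 * rmse / mx,
            "bias": bias, "rBias": 100 * bias / mx}
```

Run once per forest type over the p98 and rh98 pairs, this reproduces the kind of per-ecosystem summary reported in Table 4.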
ALS-Derived AGB Models

We used (multiplicative) power-function models to establish empirical relationships between field measurements and ALS variables. The general expression is as follows (Equation (6)):

y = a · X1^b · X2^c + ε (6)

where y is the estimated AGB from ALS; X1, X2, ..., Xn are potential explanatory ALS-derived variables related to metrics of height distributions or measurements related to canopy density (Table 1); a, b, c are the parameters to be estimated by non-linear regression analysis; and ε is the additive random error. The models were fitted using the nls function implemented in the BASE package of the R software [36]. Forest type-specific ALS-based AGB models for Dehesas, Encinares, Alcornocales, Pinaster, and Pinea were calibrated. The first step in the modeling phase was to select the optimal set of predictor variables to be used in the estimation of AGB. The leaps package, available for R [38], was used to select the significant predictors of the regression. In this study, we proposed the use of two predictors to estimate the parameters of the models. Collinearity between regressors was prevented by checking the variance inflation factor (VIF); regressors with a VIF above 10 were disregarded [39]. In addition, a leave-one-out cross-validation (LOOCV) was performed for each potential regression model using programming routines in R [36]. The goodness of fit was assessed in terms of the model efficiency (Mef) and the RMSE:

Mef = 1 − [(n − 1) Σ(yi − ŷi)2] / [(n − p) Σ(yi − ȳ)2]
RMSE = √(Σ(yi − ŷi)2 / (n − p))

where n is the number of plots, yi is the field-estimated AGB in plot i, ȳ is the mean observed value of the field-estimated AGB, ŷi is the AGB estimate derived from the non-linear regression model, and p is the number of parameters in the model.

GEDI-Derived AGB Models

Firstly, the set of ALS-derived biomass models was applied to estimate AGB for the different forest ecosystems (Dehesas, Encinares, Alcornocales, Pinaster, and Pinea forests) at laser footprint level (~25 m), i.e., by using the ALS-derived metrics extracted from the extent of the GEDI shots.
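This first step can be illustrated with a minimal, self-contained sketch on synthetic data: fit the power function of Equation (6) by non-linear least squares (scipy's curve_fit standing in for R's nls), cross-validate it with LOOCV, and apply the calibrated model to metrics extracted at footprint extent. The unadjusted model efficiency is used here for simplicity, and all variable values are invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_model(X, a, b, c):
    """AGB = a * X1^b * X2^c (two-predictor form of Equation (6))."""
    x1, x2 = X
    return a * x1**b * x2**c

rng = np.random.default_rng(0)
x1 = rng.uniform(2, 20, 40)              # e.g. an ALS height percentile (m)
x2 = rng.uniform(0.1, 0.9, 40)           # e.g. a canopy-density metric
y = 2.0 * x1**0.8 * x2**0.5              # noise-free synthetic plot AGB

params, _ = curve_fit(power_model, (x1, x2), y, p0=(1.0, 1.0, 1.0))

# Leave-one-out cross-validation and (unadjusted) model efficiency.
pred = np.empty_like(y)
for i in range(y.size):
    keep = np.arange(y.size) != i
    p, _ = curve_fit(power_model, (x1[keep], x2[keep]), y[keep],
                     p0=(1.0, 1.0, 1.0))
    pred[i] = power_model((x1[i], x2[i]), *p)
mef = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

# Apply the calibrated plot-level model at GEDI footprint extent.
x1_fp, x2_fp = 11.5, 0.4                 # hypothetical footprint-level metrics
agb_fp = power_model((x1_fp, x2_fp), *params)
```

With real SNFI-4 plots, predictor selection (leaps), VIF screening, and noisy field estimates would all matter; this sketch only mirrors the model form and the validation loop.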
Secondly, the forest type-specific ALS-derived AGB estimates at footprint level were used as the response variable to develop GEDI-derived AGB models for each forest type, using the upper rh metrics (rh60, rh70, rh80, rh90, rh95, rh98, rh99) from L2A and the canopy profile metrics CC GEDI, PAI, PGP_THT, and FHD from L2B as explanatory variables (Table 2). For the GEDI-derived AGB models, the method was similar to that of the ALS-derived AGB models. The empirical relationship between ALS characteristics and stand-level forest biomass suggests that common models based on metrics of height distributions, or a combination of height-distribution metrics and measurements related to the vertical canopy structure, may be widely applicable to diverse forest types [40][41][42]. In addition, the combination of the upper metrics rh90, rh95, rh98, and rh99 with CC GEDI, PAI, PGP_THT, and FHD was also tested. Then, we computed the performance of these forest type-specific models at GEDI footprint level in terms of Mef, RMSE, rRMSE, bias, and rBias as described in Section 2.5. Models were compared using Mef, RMSE, rBias, and a graphical inspection of the model residuals at the end of each model procedure. A leave-one-out cross-validation (LOOCV) was performed for each potential regression model using the R software [36].

GEDI-ALS Metrics Accuracy

The relationship between p98 and rh98 was the best in terms of Pearson's r for the five forest ecosystems, except for Dehesas, where p99-rh99 was slightly better (Table 4, Figures 2 and 3). The p98-rh98 comparison for the five forest ecosystems yielded Pearson r values ranging from 0.49 to 0.65 for Dehesas, Encinares, and Alcornocales, and 0.71 for Pinaster and Pinea forests. In terms of rRMSE values, the error was slightly lower for the comparison between p98 and rh98 than for p95-rh95 and p99-rh99, except for Dehesas and Pinaster.
The RMSE values of p98-rh98 for Dehesas, Encinares, Alcornocales, Pinaster, and Pinea were 2.05, 2.17, 1.95, 3.96, and 2.37 m, respectively; the rRMSE values were 29.39%, 38.68%, 31.14%, 28.63%, and 28.29%, respectively; while the bias values were −0.50, 0.39, −0.06, −0.97, and 0.27 m, respectively. In terms of bias and bias%, the p99-rh99 relationships were slightly better than p98-rh98 for Dehesas and Pinaster. Finally, Figures 2 and 3 depict the mean difference between the rh98 and p98 metrics when the values are classified based on CC ALS for all the forest ecosystems. For the Dehesas, Alcornocales, and Pinaster formations, we found, on average, negative differences between rh98 and p98 when canopy cover is <50% (Figure 2b,f and Figure 3b), indicating that GEDI underestimates canopy height (rh98) when compared with ALS p98. In contrast, a mean positive difference between rh98 and p98 was observed for both the Encinares and Pinea forest types, indicating that the GEDI canopy heights were higher than the ALS data across all the canopy cover classes, except when canopy cover is >90%, for which the mean differences turn negative (Figures 2d and 3d). This could indicate GEDI limitations in penetrating dense canopy cover conditions, such as in the Encinares and Pinea forest types.

ALS AGB-Derived Models

The performance of the models for each forest type in terms of Mef, RMSE, and rRMSE is shown in Table 5. Non-linear regression models for AGB in the Dehesas, Encinares, and Alcornocales forest ecosystems yielded Mef values ranging from 0.27 to 0.84, and from 0.76 to 0.86 for Pinaster and Pinea, respectively. In terms of rRMSE, the values were slightly higher in Dehesas and Encinares (49.75% and 51.48%) than in Alcornocales (31.01%), Pinaster (37.01%), and Pinea (27.22%).
In general, the ALS-based models were better in terms of Mef, RMSE, and rRMSE in the more closed Pinus and Alcornocales forests than in the Encinares and Dehesas, which are characterized by more open canopies. Table A1 (Appendix A) shows the AGB estimation accuracies from the LOOCV procedures when applying the best ALS-based AGB model, summarized by Mediterranean formation. There was no appreciable bias from the models throughout the observed AGB range using the best ALS-derived AGB models.

Table 6 reports the model performance of the GEDI-derived AGB models for the five analyzed forest types. Scatterplots of ALS-observed vs. GEDI-estimated AGB at GEDI footprint level are shown in Figure 4 for the best forest type-specific model in terms of Mef. Table A2 (Appendix A) shows the AGB estimation accuracies from the LOOCV procedures when applying the best model, summarized by Mediterranean formation.
The negative and positive mean values of bias (Mg/ha) and rBias (%) indicate that the GEDI-derived AGB estimates systematically underestimate (Dehesas, Pinaster, and Pinea) or overestimate (Encinares and Alcornocales) the ALS-based AGB estimates. Non-linear regression models for the five formations yielded Mef values ranging from 0.31 to 0.46. In terms of rRMSE, the values were slightly lower for Dehesas (38.17%) and Encinares (57.87%) than for Alcornocales (84.74%), Pinaster (48.19%), and Pinea (63.97%). In general, based on the histogram results for the best model (Figure 4b-j), the models slightly underestimated AGB over the lower and higher intervals, corresponding to low and high canopy cover conditions. The models fitted with the best combination of one upper metric and one measurement related to vertical canopy structure were less biased than the models using stepwise selection methods throughout the observed ALS-AGB range. Models fitted with the combination of the rh95, rh98, rh99, CC GEDI, and FHD variables proved to be the most accurate and least biased models for Dehesas, Alcornocales, Pinaster, and Pinea, respectively. Although PAI and PGP_THT were also significant variables in the models, the proportion of variation explained by the regressions was lower than for the models fitted with upper metrics in combination with CC GEDI and FHD, except for Encinares, where PGP_THT was included in the best model. The model for the pure homogeneous Pinea forest yielded the best performance throughout the observed AGB range (Figure 4i,j).

Discussion

The usability of previously developed AGB models from ALS surveys and expensive fieldwork campaigns is especially relevant under region-wide sampling designs. The NASA GEDI mission brings an opportunity to update estimates of biomass across vast forest landscapes. The laser technology applied in FW GEDI differs from the discrete-return scanning carried out under mainstream ALS applications.
Therefore, it is important to evaluate and understand the potential of GEDI data for its integration into ALS-based workflows in forest inventory and forest management.
Our study evaluated the ALS and GEDI performances using robust AGB models under the same temporal co-registration between the ALS and GEDI datasets, and over five forest types with different complexities in terms of the vertical and horizontal structure of the canopies. The bias in the biomass estimates predicted using GEDI observations as independent data depends on the model structure and the predictor variables included. Although some recent studies in the literature have assessed the accuracy of real [10,25] or ALS-simulated GEDI data [20,21,43,44], our study is, to the best of our knowledge, the first to evaluate the performance of on-orbit GEDI L2A and L2B (Version 1) products in obtaining AGB estimates at footprint level (GEDI-like Level4A) by comparing them with spatially and temporally coincident, discrete-return ALS data across vast areas of diverse Mediterranean forest types. Regarding the accuracy of the GEDI canopy height estimates, our results are in accordance with previously published studies that analyzed simulated GEDI data from the pre-launch GEDI mission (e.g., Silva et al., 2018 [43] (p. 3517, p98 = rh98, RMSE = 2.99 m, bias = 0.47 m, bias(%) = 1.50, n = 2987) and Hancock et al., 2019 [45] (p. 306, p98 = rh98, RMSE = 4.78 m, bias = 0.22 m)). However, it is important to evaluate the accuracies of post-launch GEDI data and products, since they might differ from the accuracies of the ALS-simulated GEDI data [15,36,38] due to possible geolocation inaccuracies or spatiotemporal variations in atmospheric attenuation [26]. The use of the 95th percentile resulted in larger negative and positive bias and less accurate estimates than the 98th (RMSE increased from 2.03 m in Alcornocales to 4.17 m in Pinaster). In terms of bias and bias%, the p99-rh99 relationship was slightly better than p98-rh98 for Dehesas and Pinaster.
The results confirm that the accuracy of GEDI-FW estimates of canopy height depends on the complexity of the horizontal and vertical structure of Mediterranean vegetation. The GEDI footprint estimates were better in the more closed and homogeneous coniferous forests of the P. pinaster and P. pinea species (r = 0.71) than in open-canopy Quercus-dominated forests, with r values ranging from 0.50 (sparse Dehesas) to 0.61 (multi-layered Alcornocales). The performance of the metrics was similar in terms of RMSE and bias to recently published research at the European level [28] (RMSE = 2.5 m, rRMSE = 45%, and positive bias = 0.70), although the rRMSE values from our study, ranging from 28.29% to 38.68%, were slightly better [10,28]. Potapov et al., 2020 [10] compared GEDI relative height metrics (rh90, rh95, and rh100) to the 90th percentile of the ALS-derived height distribution (p90). The authors found that rh90 underestimated canopy height compared to p90 (mean difference −2.3 m) and rh100 overestimated it compared to p90 (mean difference +2.7 m). Our results showed that rh95 tends to underestimate canopy height compared to ALS-derived canopy height estimations for Dehesas (bias = −1.37), Pinaster (bias = −1.69), Pinea (bias = −0.53), and Alcornocales (bias = −0.80), while rh99 tends to overestimate canopy height compared to ALS-derived estimations in forest ecosystems such as Alcornocales (bias = 0.80), Encinares (bias = 0.40), and Pinea (bias = 0.70). The results confirmed that the GEDI height metric rh95 tends to underestimate canopy height when compared with ALS data in Mediterranean areas of sparse tree cover, as reported by Potapov et al., 2020 [10]. Regarding the effects of canopy cover on the accuracy of the GEDI height estimates, our results revealed that canopy heights were most accurate in the 50-90% canopy cover range and tend to present higher errors under dense cover (>90%) conditions, as documented by [45] (see Figure 7d with rh90 in [45]).
Neuenschwander et al., 2020 [29], in a study focused on assessing the accuracy of ICESat-2 data for canopy height estimates, also found this pattern. This confirms that, under low canopy cover (<50%) conditions, both ICESat-2 and GEDI full-waveform (FW) energies are more likely to be reflected from the terrain surface rather than the canopy, which precludes an accurate estimation of canopy height [33,34]. On the contrary, under dense canopy cover conditions (CC > 90%), the terrain-reflected signal received by the GEDI and ICESat-2 sensors is weaker than the canopy signal, leading to errors in canopy height measurements [28,29,33]. Therefore, rh metrics may be biased particularly in extreme (low and high) canopy cover conditions. The purpose of this study was to assess the performance and usefulness of the first release of the GEDI data (Version 1), which has a systematic geolocation error of around 10-20 m [33,34]. As such, the accuracy of this data version was assessed without performing any geolocation error correction. Hence, the lower performance of the GEDI-derived canopy height in low-density tree cover conditions, such as in Dehesas, can also reflect the impact of the GEDI (Version 1) geolocation errors. In this type of ecosystem, the spatial fuzziness caused by the tree density variability can preclude a true comparison between GEDI measurements and the observed measurements on the ground. In a scattered-tree ecosystem such as Dehesas, a horizontal offset of between 10 and 20 m can result in several meters of height error, affecting model calibration and validation at the GEDI footprint level [10]. The GEDI mission was specifically designed to retrieve vegetation structure and AGB under a large range of environmental conditions sufficient to meet AGB mapping requirements.
Regarding the exercise of developing GEDI AGB models using ALS AGB equations, similar to what GEDI's footprint-level AGB product (Level 4A) will produce, the results suggested that existing ALS-AGB estimates could be used to generate robust GEDI-derived AGB models to predict AGB at footprint level in Mediterranean areas. For any spaceborne biomass estimate, validation using reference data is challenging, given that almost all reference data will have errors [19]. The results of the present study showed that GEDI-derived AGB models based on an upper rh metric, CC GEDI, FHD, and PGP_THT represent a sufficient quantitative description of the Mediterranean structure analyzed at the 25-m diameter GEDI footprint level using ALS-derived AGB estimates as reference. In terms of RMSE and rRMSE for the five forest ecosystems (RMSE = 15.25, 14.13, 22.06, 1.87, and 27.95 Mg/ha, and rRMSE = 37.85%, 57.87%, 84.74%, 47.75%, and 63.02% for Dehesas, Encinares, Alcornocales, Pinaster, and Pinea, respectively), the precision of the GEDI-derived AGB models was similar or better (except in Alcornocales and Pinea) than the values reported by Duncanson et al., 2020 [20] (using GEDI simulations and a locally trained biomass model calibrated against the ALS 30-m reference map in Sonoma County, US; p. 111779, Table 2, rRMSE = 57.1%). For the Dehesas, Encinares, and Pinaster models, the rRMSE values achieved were also similar to or better than those reported by Silva et al., 2021 [21], who obtained an rRMSE of 54% for Sonoma County (US) with GEDI and ICESat-2 fused AGB calibration on a regular grid, using the GEDI AGB models from Duncanson et al., 2020. However, the more complex and multilayered forest of Alcornocales (rRMSE = 84.74%) was the least accurately modeled of the Mediterranean forests.
Our GEDI-derived AGB models, using a combination of canopy height and vertical canopy structure metrics from the L2A and L2B products, respectively, were less biased in terms of bias and rBias (bias = −0.08, 0.14, 0.71, −0.45, and −0.56 Mg/ha; rBias = −0.20%, 0.65%, 2.73%, −0.67%, and 1.27% for Dehesas, Encinares, Alcornocales, Pinaster, and Pinea, respectively) than the values reported by Duncanson et al., 2020 [20] (bias = −26.3 Mg/ha and bias% = −18.7%) using US-wide GEDI AGB models based only on rh metrics and simulated AGB estimates at footprint level (GEDI-like Level4A, as in our study). In terms of rBias, the values were also slightly better than those reported by Silva et al., 2021 [21] (bias% = −5.60%) using GEDI, ICESat-2, and NISAR fusion. In general, the GEDI-based AGB models were slightly negatively biased at the lower and higher intervals, meaning that the GEDI-derived AGB models underestimated AGB under low and dense canopy cover conditions, as in previous studies using simulated GEDI data [20]. There was no appreciable bias from the models throughout the observed AGB in pure homogeneous Pinea forests (Figure 4i,j), in comparison with more complex vegetation such as Alcornocales and Encinares. Our models yielded slightly lower values of Mef (0.31 to 0.46) (similar to adj. R2) than those obtained by Silva et al., 2021 [21] (adj. R2 ranging from 0.46 to 0.51). The GEDI-derived models from our study captured AGB variations slightly worse, probably due to the following: (i) a wide variety of vegetation was analyzed in the AGB modeling from the same dataset by Duncanson et al., 2020; (ii) an ALS reference map was used for calibration instead of applying the trained model to ALS point cloud metrics derived at GEDI footprint level, as in our study; (iii) Duncanson et al., 2020 and Silva et al., 2021 used ALS-simulated GEDI data, and their ALS point cloud density was higher than in our study; and (iv) the influence of the GEDI (Version 1) geolocation errors on the models.
Compared with the ICESat-2 mission, the values obtained in the present study were also worse, in terms of rRMSE, than those reported by Narine et al., 2020 [46], who used a simulated ICESat-2 vegetation product and ALS-estimated AGB as reference (RMSE = 28.90 Mg/ha, or rRMSE = 37% of a mean value of 79 Mg/ha with the training dataset) in south Texas (US) (approximately 58% of the region, or 80% of its forested area, predominantly coniferous forests). Our results demonstrated that metrics derived from the L2A and L2B products at GEDI footprint level, such as canopy height (rh99, rh95, rh90) and vertical canopy structure metrics (CC_GEDI, FHD), were significant explanatory variables for predicting AGB. We suggest that the foliage height diversity index (FHD) [37], which measures the complexity of canopy structure, should be included as an explanatory variable in the Alcornocales formation. The use of canopy height metrics alone may omit information in profiles with more vertical and horizontal heterogeneity, such as natural Mediterranean forest. Conversely, vertical canopy structure metrics contributed to most models for estimating AGB. The variable PGP_THT, based on the model from [47], in combination with the rh90 upper metric, resulted in less biased models in Encinares than GEDI-derived models using canopy cover (CC_GEDI) and plant area index (PAI). Our results also demonstrate that a specific second metric related to canopy cover (CC_GEDI) from LiDAR waveforms is potentially useful for improving most of the models (Table 6). According to the results obtained, the set of models strengthened the idea that the combination of mean height and vertical canopy structure metrics provides a sufficient and concise quantitative description of a homogeneous vertical structure in Mediterranean areas such as Pinea forest.
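A model of the form discussed here, an upper rh metric plus a vertical canopy structure metric, can be fitted by ordinary least squares. The sketch below uses synthetic data and illustrative variable names (`rh95`, `fhd`); it is not the study's actual model or coefficients:

```python
import numpy as np

def fit_agb_model(rh, structure, agb):
    """Fit AGB = b0 + b1*rh + b2*structure by ordinary least squares.

    rh is an upper relative-height metric (e.g. rh95) and structure a
    vertical canopy metric (e.g. FHD or canopy cover); both illustrative.
    """
    X = np.column_stack([np.ones_like(rh), rh, structure])
    coef, *_ = np.linalg.lstsq(X, agb, rcond=None)
    return coef  # b0 (intercept), b1, b2

# Synthetic example: AGB grows with height and structural complexity.
rng = np.random.default_rng(0)
rh95 = rng.uniform(5, 30, 200)                          # m
fhd = rng.uniform(1, 3, 200)                            # unitless index
agb = 4.0 * rh95 + 10.0 * fhd + rng.normal(0, 5, 200)   # Mg/ha
b0, b1, b2 = fit_agb_model(rh95, fhd, agb)              # recovers ~4 and ~10
```

In practice the response is often log-transformed before fitting; the linear form is used here only to keep the sketch short.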
However, the results also showed the limitations of the GEDI spaceborne LiDAR mission in differentiating AGB and characterizing vegetation structure, especially under sparse canopy cover [22,29]. The trade-off between model accuracy when building AGB models and their use to predict AGB estimates from alternative data sources is an interesting research topic embraced under data fusion methods in forest monitoring and assessment. The results from this study would allow forest managers and scientists to better understand AGB dynamics in forest ecosystems, while adding value and use to existing ALS-AGB models developed in the past from valuable fieldwork data. The study by Adam et al., 2020 also showed the influence of low canopy conditions when validating GEDI estimates using multitemporal ALS as a benchmark. The assessment that we performed controlled the temporal and spatial co-registration uncertainty, as our filters were less tolerant of the effects of edges and fragmentation. In the study by Adam et al., 2020, the use of ALS data collected in 2014, 2018, and 2019 somewhat restricted the accuracy of the benchmark, as only part of the ALS surveys overlapped in time with the start of the GEDI mission. The integration of laser satellite products from the GEDI mission might be regarded as a first take before the launch of the BIOMASS satellite mission. Therefore, understanding the technology of satellite missions to address co-registration problems seems vital in order to combine the rich array of sources that forest managers have nowadays, such as national forest inventories, ALS campaigns, small-scale photogrammetry, and ongoing satellite image missions [17,29,48,49]. Upcoming versions of GEDI products addressing geolocation could improve the results obtained in the Extremadura Region. In addition, further work is recommended to introduce the uncertainty of the ALS-AGB models and of the forest species map, both of which were ignored in the modeling approach.
Conclusions This study used real GEDI data for assessing canopy forest height and determining AGB in different Mediterranean forest ecosystems. Firstly, the results showed the accuracy of canopy height estimates from the L2A products using upper metrics from ALS. Secondly, GEDI calibration equations for Mediterranean forest were developed as an exercise similar to generating L4A products at footprint level (~25-m diameter). Our study highlighted the difficulty of differentiating AGB and characterizing vegetation structure under the sparse forest cover that is characteristic of Mediterranean forests. The findings provide an initial evaluation of the ability of GEDI to estimate AGB and serve as a basis for further upscaling efforts. As future challenges, reference canopy height, wall-to-wall ALS-AGB maps, and NFI plots from the Region of Extremadura will be used to validate the upcoming L4A and gridded Level 4B products, where these footprints are used to produce mean AGB and its uncertainty in cells of 1 km. Further research could evaluate the robustness of GEDI Version 2 to quantify the part of the uncertainty in height/AGB caused by the mismatch between GEDI footprint geolocation and ALS point cloud data, and how the new orbit corrections have improved the conditions for conducting ground-versus-ALS-versus-GEDI studies. Funding: This work was partially supported by the 'National Programme for the Promotion of Talent and Its Employability' of the Ministry of Economy, Industry, and Competitiveness (Torres-Quevedo program) via postdoctoral grant PTQ2018-010043 to Dr. Juan Guerra Hernández. This research was supported by the project "Extensión del cuarto inventario forestal nacional mediante técnicas LiDAR para la gestión sostenible de los montes de Extremadura" from the Extremadura Forest Service. The authors also thank the Forest Research Centre, a research unit funded by Fundação para a Ciência e a Tecnologia I.P. (FCT), Portugal (UIDB/00239/2020).
Acknowledgments: We gratefully acknowledge Vicente Sandoval and Elena Robla from the National Forest Inventory Department for supplying the NFI inventory databases and the latest version of the FMS. Conflicts of Interest: The authors declare no conflict of interest.
Equilibrium and Kinetic Studies of Phosphate Removal from Solution onto a Hydrothermally Modified Oyster Shell Material Phosphate removal studies onto a hydrothermally modified fumed silica and pulverized oyster shell material, intended for use in wastewater treatment, were made. Sorption data modeling (pH 3–11, P concentrations of 3, 5, 10, 15, 20, and 25 mg/L, at an ambient temperature of 23 °C) indicates that optimal removal of P occurs at pH 11. Three kinetic models were also applied (a pseudo-first-order Lagergren model, a pseudo-second-order (PSO) model, and the Elovich model) and indicate that the PSO model best describes P-removal. In addition, application of the Weber and Morris intra-particle diffusion model indicates that external mass transfer and intra-particle diffusion were both involved in the rate-determining step. Langmuir and Freundlich modeling of the sorption data further indicates that the heterogeneous Freundlich sorption-site model best describes the data, although the Langmuir model also fit, with tailing suggesting the data are not linear. The data collected indicate that the hydrothermally modified fumed silica and pulverized oyster shell material is suitable for use in wastewater treatment, with P-removal to the solids being preferential and spontaneous. Introduction Contamination of potable ground water with phosphate, and its removal in water treatment, has become an increasing focus worldwide. Many dephosphorization studies have been made for wastewaters, including biological, chemical precipitation, and adsorption processes [1]. Of all phosphate removal techniques, adsorption is receiving increasing attention and becoming an attractive technology because of its simplicity, low cost, ease of operation and handling, sludge-free operation, and the capacity to regenerate and re-use solids.
In this regard, many adsorbents have been explored, such as zeolite; modified bentonite (Phoslock); bauxite refinery residues (red mud; Bauxsol™); calcined dolomite; fly ash; and ferric iron oxides [2–9]. With the rapid expansion of oyster cultivation in many coastal areas, the excess oyster shell from shucking cannot be used, which often leads to direct dumping at shorelines, roadsides, etc., as a waste material that causes serious environmental pollution. However, full utilization of the calcium resource in oyster shell and the development of a highly efficient P-removal material can not only reduce the environmental impact, but also turn waste into a valuable resource. Moreover, oyster shells consist of three separate layers, a cuticle, a prismatic layer, and a nacreous layer, in a particular configuration. The prismatic layer is dominant, having a foliated texture that contains a great number of micropores [1,10–12]. These natural pores give the oyster shell fairly strong absorbability, exchange capacity, and catalytic surface area that can be used for phosphorus removal from wastewaters. We have previously reported on a calcined and hydrothermally annealed material, shaped as a hollow cylinder, derived from oyster shell, which exhibited excellent phosphate removal [1]. We also characterized this material using XRD, SEM, and EDS techniques to identify the crystalline phases and the microstructure evolution pre- and post-calcination, hydrothermal annealing, and phosphate removal. It has been shown that CaSiO₃ is produced during calcination, which forms hydrated calcium silicates during hydrothermal annealing; these hydrated calcium silicates react with the soluble phosphate in wastewaters to precipitate a calcium phosphate. The SEM results also show that an open microstructure was formed after the calcination and hydrothermal annealing process, which benefited adsorption [1].
In this study, the factors affecting phosphate adsorption and detailed information on the equilibrium and kinetics of phosphate removal were investigated in order to optimize the adsorption process. Materials Oyster shells intended for waste disposal at Xiyangxincun market, Fuzhou City, were collected, cleaned, dried, and ground for use, and the fumed silica was purchased from XIBEI Iron Alloy Company, China. Oyster shell powder (<200 mesh particle size) and fumed silica were combined at a CaO/SiO₂ molar ratio of 5:6, which provided an optimum weight ratio of 1:1.39 g. After mixing as a homogeneous powder, cylindrical specimens (with a dry sample weight of 2.0 g) were formed [1], and were calcined at 800 °C for 1 h before hydrothermal annealing at 150 °C for 12 h. Solution phosphate concentrations were determined using the ammonium phosphomolybdate blue method [13], with the absorbance measured with an SP-721E spectrophotometer at 960 nm. The mass loading of the solids to solution was 1 g to 40 mL (25 g/L), and all experiments were conducted at an ambient temperature of 23 °C. To allow for any adsorption to the container surface, several control experiments without adsorbent were made, and showed that no adsorption occurred. Equilibrium and Kinetic Experiments Equilibrium and kinetic experiments were performed at pH 7, with initial concentrations fixed at 3, 5, 10, 15, 20, and 25 mg/L. The time to reach equilibrium was taken as the time after which the solution concentration did not change significantly, and was determined by a kinetic adsorption study using sampling times of 3, 6, 9, 12, 36, 48, 72, and 192 h, respectively. In order to generate an adsorption isotherm for P, the adsorption capacity of the oyster shell adsorbent (mg of P per g of adsorbent) was determined by calculating the mass of P adsorbed (mg) and dividing it by the weight of the adsorbent (g) for each initial concentration (mg/L).
The equilibrium sorption capacity (q_e, mg/g) and the removal ratio (R, %) for P were determined using Equations (1) and (2):

q_e = (C_o − C_e)·V/m (1)

R = (C_o − C_e)/C_o × 100% (2)

where C_o is the initial P concentration and C_e is the concentration of P at the equilibrium time (mg/L); m is the mass of adsorbent (g); and V is the volume of the phosphate solution (L). Effects of the Initial Concentrations and pH Values on the P Adsorption Figures 1 and 2 show the mass loading (mg/g) with time and the removal ratio from solution for the experiments. As expected, the mass loading of P initially increased rapidly and then continued increasing at a slower pace until equilibrium was achieved; moreover, these mass loadings show that the higher the concentration of P, the greater the mass loading on the solids (Fig. 1). For initial concentrations of 3, 5, 10, 15, 20, and 25 mg/L, the equilibrium adsorption capacities for P were 0.119, 0.198, 0.388, 0.576, 0.768, and 0.910 mg/g, respectively. However, although the adsorbed P increased from 0.119 mg/g to 0.910 mg/g, the removal ratio decreased from 99.17% to 91.00% (Fig. 2), which may be due to the ratio between P ions and the available binding sites [14,15]. When the initial P concentration was low, the availability of binding sites was relatively higher, as described previously [14–18]. The removal ratio exceeded 90% for all concentrations, indicating that the material can provide significant P-removal for wastewaters across a wide concentration range. Furthermore, removal equilibrium at each concentration was attained within 48 h, and the slope of the plots (Fig. 1) represents the initial removal rate. Figure 1 also shows that as the removal times lengthened, surface loading continued to increase, although at a much slower rate than the more rapid initial adsorption. This continual increase may be attributed to the abundant pores in the solids [1], which allow water and P infiltration within the pellet, thereby contributing to the adsorption capacity.
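Equations (1) and (2) can be applied directly to the reported data; a minimal sketch, assuming the 25 g/L solid loading stated in the Materials section, reproduces the reported equilibrium capacity for the 3 mg/L run:

```python
def sorption_capacity(c0, ce, solid_loading_g_per_L=25.0):
    """Equilibrium capacity q_e (mg/g), Eq. (1): q_e = (C0 - Ce) * V / m,
    written here with the solid loading m/V (g/L) of 1 g per 40 mL."""
    return (c0 - ce) / solid_loading_g_per_L

def removal_ratio(c0, ce):
    """Removal ratio R (%), Eq. (2): R = (C0 - Ce) / C0 * 100."""
    return 100.0 * (c0 - ce) / c0

# Check against the reported values for C0 = 3 mg/L (R = 99.17%):
ce = 3.0 * (1 - 0.9917)
q_e = sorption_capacity(3.0, ce)  # ≈ 0.119 mg/g, as reported
```

The agreement between the computed 0.119 mg/g and the reported capacity confirms the two equations are consistent with the 25 g/L loading.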
Furthermore, wastewater with higher initial P concentrations provided more P to meet the highly dynamic removal conditions in the initial removal stage; the very sharp P removals in these initial stages would suggest precipitation may be a key removal mechanism. The P-removal results for 10 mg/L at different pH are provided in Figure 3, and show that pH had a substantial effect on the P-removal capacity. Under acidic conditions, the removal capacity is substantially lower than for alkaline conditions, and it reaches a maximum value at pH 11. With the pH raised continuously up to 13, the mass loading reached 0.971 mg/g at pH 11, a pH at which HPO₄²⁻ is the dominant species. This relationship between solution pH and mass loading correlates well with calcium phosphate precipitation [1]. In our previous work [1], we reported that during calcination at 800 °C, CaCO₃ was converted to CaO, which partially fused with SiO₂ to form CaSiO₃, contributing active calcium ions distributed in the crystal lattice particles that could react with free phosphate in wastewater. Under neutral or alkaline conditions, direct precipitation of Ca₃(PO₄)₂ and Ca₅(PO₄)₃OH is readily achieved (Equations 3 and 4). Moreover, Ca₅(PO₄)₃OH is the most thermodynamically stable and the most difficult to solubilize [19]. However, under more acidic conditions, CaHPO₄, Ca₄H(PO₄)₃, and Ca₃(PO₄)₂ are thermodynamically more stable. The precipitation of Ca₃(PO₄)₂ and Ca₅(PO₄)₃OH most likely follows as:

3Ca²⁺ + 2PO₄³⁻ → Ca₃(PO₄)₂↓ (3)

5Ca²⁺ + 3PO₄³⁻ + OH⁻ → Ca₅(PO₄)₃OH↓ (4)

where increases in OH⁻ allow the chemical precipitations to occur more readily, resulting in the higher removals. However, a large excess of OH⁻ would appear to impair Ca₅(PO₄)₃OH precipitation, allowing dissolution back into solution with increased reaction times (Fig. 3). Hence, the calcined and hydrothermally annealed oyster shell material is most effective at P immobilization in the pH range of 9–11.
Adsorption Isotherm The two most common models used to investigate and describe solution removal processes and mechanisms are the Langmuir and Freundlich models. The Langmuir isotherm model assumes a completely homogeneous surface, where sorption onto the surface has the same activation energy [20], whereas the Freundlich isotherm model is suitable for highly heterogeneous surfaces [11]. 1) Langmuir isotherm. The linear form of the Langmuir equation can be expressed as:

C_e/q_e = 1/(K_L·q_m) + C_e/q_m (5)

where C_e is the equilibrium concentration (mg/L), q_e is the amount removed to the solid (mg/g), q_m is the maximum saturation capacity at the isotherm temperature (mg/g), and K_L (L/mg) is the sorption equilibrium constant related to the energy of adsorption. K_L and q_m can be determined from the slope and the intercept of a plot of C_e/q_e against C_e. A dimensionless constant separation factor (R_L) is defined in Equation (6) [21,22]:

R_L = 1/(1 + K_L·C_o) (6)

where C_o is the initial concentration of adsorbate (mg/L); R_L is considered a more reliable indicator of the adsorption. There are four possibilities for the R_L value: (i) for favorable adsorption, 0 < R_L < 1; (ii) for unfavorable adsorption, R_L > 1; (iii) for linear adsorption, R_L = 1; (iv) for irreversible adsorption, R_L = 0 [22]. The linear regression of the Langmuir equation (Fig. 4) and the calculated Langmuir constants are shown in Table 1, and suggest that the maximum saturation capacity that could be reached at pH 7 (typical of wastewaters) is 0.722 mg/g, which is lower than the 0.971 mg/g seen at the sorption maximum at pH 11. A correlation coefficient R² of 0.993 indicates that the removal process fits well with the Langmuir model, i.e., the adsorption behavior corresponds to single-layer adsorption. Moreover, the value of R_L ranged from 0.005 to 0.041, indicating that the adsorption was a favorable process (Fig. 5).
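The linearized Langmuir fit and the separation factor described above can be sketched as follows. The data here are synthetic, generated from a known isotherm to show that the slope/intercept procedure recovers q_m and K_L; the study's actual data are in Fig. 4 and Table 1:

```python
import numpy as np

def langmuir_fit(ce, qe):
    """Fit the linearized Langmuir isotherm Ce/qe = 1/(KL*qm) + Ce/qm.

    From the plot of Ce/qe against Ce: slope = 1/qm, intercept = 1/(KL*qm).
    """
    slope, intercept = np.polyfit(ce, ce / qe, 1)
    qm = 1.0 / slope
    kl = 1.0 / (intercept * qm)
    return qm, kl

def separation_factor(kl, c0):
    """Dimensionless separation factor R_L = 1 / (1 + KL*C0);
    0 < R_L < 1 indicates favorable adsorption."""
    return 1.0 / (1.0 + kl * c0)

# Synthetic check: data generated from a known Langmuir isotherm
# (qm = 0.72 mg/g and an assumed KL = 8.0 L/mg for illustration).
qm_true, kl_true = 0.72, 8.0
ce = np.array([0.1, 0.5, 1.0, 2.0, 4.0])
qe = qm_true * kl_true * ce / (1.0 + kl_true * ce)
qm, kl = langmuir_fit(ce, qe)  # recovers qm_true and kl_true
```

Because perfectly Langmuirian data are exactly linear in this coordinate system, any systematic curvature or segmented behavior in the real plot (as noted below for this material) signals heterogeneous sites or a second removal process.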
However, despite the strong linear correlation, the Langmuir data indicate that the curve consists of two straight-line segments, with an intersection at about 5 (1/C_e, L/mg), which would suggest that the surface sites are not homogeneous, or that two competing processes, e.g., sorption and precipitation, are occurring. 2) Freundlich isotherm. The Freundlich isotherm equation in its linear form can be expressed as:

ln q_e = ln K_F + (1/n) ln C_e (7)

where K_F, the intercept, and n, derived from the slope, are the Freundlich constants representing the adsorption capacity and the adsorption intensity, respectively. In general, the greater K_F, the greater the heterogeneity, and the larger the value of 1/n (1/n > 1), the more spontaneous the adsorption process. The linear fitting curve (Fig. 6) and the calculated parameters are given in Table 1. (Figure 8. Plots of the pseudo-second-order model for the removal of P to the modified oyster material. Strong correlations for the data (Table 2) and agreement between experimental and calculated Q_e data suggest that this model represents P-removal extremely well. doi:10.1371/journal.pone.0060243.g008) Kinetic Models In general, kinetic models are classified into two groups: reaction models and diffusion models [23]. 1) Reaction models. Three models were used to describe the removal of P from solution: a pseudo-first-order (Lagergren) kinetic model, a pseudo-second-order (PSO) model, and the Elovich model. The linear forms of these three models can be represented by Equations (8), (9), and (10), respectively:

ln(q_e − q_t) = ln q_e − k₁·t (8)

t/q_t = 1/(k₂·q_e²) + t/q_e (9)

q_t = (1/b)·ln(a·b) + (1/b)·ln t (10)

where q_t (mg/g) and q_e (mg/g) are the solid loadings at time t and at equilibrium, respectively; k₁ (min⁻¹) is the pseudo-first-order rate constant for removal; k₂ (g·mg⁻¹·min⁻¹) is the PSO rate constant; and a and b are the Elovich constants.
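The pseudo-second-order linear form lets k₂ and q_e be recovered from a plot of t/q_t against t. A sketch with synthetic kinetic data (generated from the integrated PSO equation, with illustrative parameter values) shows the procedure:

```python
import numpy as np

def pso_fit(t, qt):
    """Fit the pseudo-second-order model t/qt = 1/(k2*qe^2) + t/qe.

    From the plot of t/qt against t: slope = 1/qe, intercept = 1/(k2*qe^2).
    """
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe = 1.0 / slope
    k2 = 1.0 / (intercept * qe ** 2)
    return qe, k2

def initial_rate(k2, qe):
    """Initial sorption rate h = k2 * qe^2."""
    return k2 * qe ** 2

# Synthetic check: qt generated from the integrated PSO equation
# qt = k2*qe^2*t / (1 + k2*qe*t); parameter values are illustrative.
qe_true, k2_true = 0.91, 0.05
t = np.array([3.0, 6.0, 9.0, 12.0, 36.0, 48.0])
qt = k2_true * qe_true ** 2 * t / (1.0 + k2_true * qe_true * t)
qe, k2 = pso_fit(t, qt)  # recovers qe_true and k2_true
```

The same slope/intercept logic underlies the Q_e,cal values in Table 2: a calculated q_e close to the measured plateau is the evidence that the PSO model summarizes the removal well.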
Figures 7, 8, and 9 show the linear fitting curves for the three kinetic models, respectively, where the data used to generate the kinetic reaction plots were obtained from Figure 1 (see Section 3.1.1); Table 2 lists the parameters and regression coefficients (R²). It can be seen from the three plots, and from the data in Table 2, that the PSO model provided better correlation coefficients than the other two models, with regression coefficients R² > 0.999 for all initial concentrations investigated. Meanwhile, the equilibrium removal capacity calculated from the PSO rate model (Q_e,cal) was much closer to the experimental data (Q_e,exp; Table 2), indicating that the PSO rate model provides a very good summary of P removal from solution to the oyster shell material. This is significant because the PSO rate model assumes a chemically rate-controlling removal process [24]. However, the PSO model encompasses the whole removal procedure, such as precipitation, co-precipitation, external film diffusion, surface adsorption, and intra-particle diffusion, being compatible with the analysis results below [25]; the data presented here are in good agreement with this theory. For the PSO model, the initial sorption rate can be obtained from Equation (11) [20]:

h = k₂·q_e² (11)

The PSO constants k₂ (Table 2) were obtained from plotting t/q_t against t, and the value of h increased with increasing initial concentration. The data indicate that observations similar to those made from the plot slopes (Fig. 1; Section 3.1) hold for the kinetic models: there appears to be a two-stage process, with an increasing rate of removal with increased concentration in the initial stage of removal. 2) Diffusion models. The sorption process can be described by four consecutive steps [18,20,23,24,26–29]: 1. transport in the bulk of the solution; 2. diffusion through the solution to the external surface of the adsorbent (also called film mass transfer or boundary-layer diffusion of solute molecules); 3.
particle diffusion in the liquid contained in the pores, and in the sorbate along the pore walls; 4. sorption and desorption within the particle and on the external surface. Generally, steps 1 and 4 occur rapidly, so that the rate-controlling step becomes step 2, step 3, or a combination of them. The PSO model cannot identify the diffusion mechanism; hence, to determine the rate-controlling step for P-removal to the oyster shell material, the Weber and Morris intra-particle diffusion model was introduced. The Weber and Morris intra-particle diffusion model was derived from Fick's second law of diffusion and is expressed as [15,28,29]:

q_t = K_id·t^(1/2) + C

where K_id (mg·g⁻¹·min⁻¹/²) is the intra-particle diffusion constant, derived from the slope of the plot of q versus t^(1/2), and C (mg/g) is the intercept of the plot, often referred to as the thickness of the boundary layer [30]; a large C value is indicative of external mass transfer being significant in the sorption, thereby acting as the rate-controlling step. In addition, when the data fitting is linear, intra-particle diffusion is involved in the sorption process, and when the fit passes through the origin (C = 0), intra-particle diffusion is the rate-limiting step [24,28]. Curve fitting of the Weber and Morris intra-particle diffusion model (Fig. 10) shows that all the plots were similar, and that the curves have two parts. The first, steeper, curved portion up to the break in slope at 3–3.5 h^(1/2) is regarded as rapid external mass transfer; the second, oblique linear portion is regarded as the intra-particle diffusion portion; a third section (not clearly evident in our data) would be a final plateau, where the removal process tends to equilibrium. Linear fitting of the oblique linear section (Fig. 10) provides reliable K_id and C estimates (Table 3), with high correlation coefficients (R²). These data (Fig.
10, Table 3) indicate that intra-particle diffusion was involved in rate-limiting P-removal; however, no fits pass through the origin, and therefore intra-particle diffusion was not the sole rate-controlling step during the gradual adsorption (Fig. 1), but rather acted in combination with external mass transfer. The intra-particle diffusion rate constant K_id shows the adsorption rate increasing with increasing initial concentration. These data are consistent with diffusion theory [15,28,29] and are internally consistent with the data collected. Diffusion is driven by the concentration gradient that develops between the surface and intra-particle sorption sites within the crystal lattice. Figure 1 shows that surface loadings are highest at higher solution loadings, and hence a greater concentration gradient develops to drive diffusion from the surface to deeper crystal lattice positions, which typically leads to greater irreversibility of binding (see Clark et al. [31,32]). Moreover, the decrease in P-removal efficiency (Fig. 2) shows that there is an increased residual solution concentration, which sets up a second diffusion gradient between solution and surface, further encouraging diffusion to occur. However, at the highest P concentration, there is a significant decrease in P-removal at 8 h, indicating that dissolution/desorption occurs. This is particularly evident at pH 13, and this dissolution/desorption lowers K_id at the higher concentrations (Table 3). Conclusion The experimental results of this study indicate that the oyster shell material is an effective adsorbent for phosphate removal from wastewater. Equilibrium is obtained rapidly, within 48 h, with removal ratios exceeding 90% for all initial concentrations. P removal is highly pH- and concentration-dependent, with alkaline conditions (pH 9–11) more conducive to removal; the P-removal maximum is at pH 11.
The Langmuir and Freundlich isotherm models both give high R² values; however, the competing removal processes suggest that the heterogeneous surface-site distribution of the Freundlich model is the better fit. Moreover, the kinetics of P-removal are best described by the pseudo-second-order rate model, with the implied rate-controlling process described by the Weber and Morris intra-particle diffusion model. Consequently, the P-adsorption process is a complex mixture of intra-particle diffusion and external mass transfer that have a combined impact on the rate-controlling process.
DIFFERENT DIGITAL IMAGING TECHNIQUES IN DENTAL PRACTICE Different imaging techniques are used to pick up the signal of interest in digital sensors, including charge-coupled devices (CCD), complementary metal-oxide semiconductors (CMOS), photostimulable phosphor plates (PSP), and tuned-aperture computed tomography (TACT). Digital radiography sensors are divided into: storage phosphor plates (SPP), called photostimulable phosphor plates (PSP), and silicon devices such as charge-coupled devices (CCD) or complementary metal oxide semiconductors (CMOS). A relatively new type of imaging that may hold advantages over current radiographic modalities is tuned-aperture computed tomography (TACT). INTRODUCTION Direct digital imaging was first presented in 1984 by Dr. Francis Mouyen. Since then, digital radiography, as a new technology in dental imaging practice, has been successfully advanced for almost twenty years. Different imaging techniques have been used to pick up the signal of interest in digital sensors, including charge-coupled devices (CCD), complementary metal-oxide semiconductors (CMOS), photostimulable phosphors (PSP), and tuned-aperture computed tomography (TACT) (1,2). There are two types of digital sensor array designs: area and linear. Area arrays are used for intraoral radiography, while linear arrays are used in extraoral imaging. Area arrays are available in sizes comparable to size 0, size 1, and size 2 film. The sensors are rigid and thicker than radiographic film and have a smaller sensitive area for image capture.
Area-array CCDs have two primary formats: fiber-optically coupled sensors and direct sensors. Fiber-optically coupled sensors utilize a scintillation screen coupled with the CCD: X-rays interact with the screen material, and the light photons generated are detected and stored by the CCD. Direct-sensor CCD arrays capture the image directly. A further development in direct digital sensor technology is the introduction of the complementary metal oxide semiconductor. CMOS sensors are less expensive to produce, use an active-pixel technology, and have low power requirements. However, CMOS sensors have more fixed-pattern noise and a smaller active area of image acquisition. Three basic types of operations are improved by digital image processing: analysis, which produces numeric information based on the acquired image; enhancement, which subjectively or objectively modifies the appearance or qualities of the image; and encoding, which codes the image into a new format. The most common analysis operation is the histogram. The image histogram is a graphic representation of the number of pixels with a specific gray value. Brightness, contrast, and dynamic range data can readily be obtained from this analysis. This is the starting point for determining the appropriate enhancement operations that will produce the desired result. Density analysis is the determination of the intensity of the gray value at a specific point in the image. Dimensional analyses, such as length, width, angle, area, or perimeter, are facilitated by digital imaging. The most commonly used enhancement operations are contrast manipulation, spatial filtering, subtraction, and pseudo-color. Digital image subtraction reduces the structured noise of normal anatomic detail and therefore increases the signal-to-noise ratio. Increasing the signal-to-noise ratio makes the pathology more evident to a human observer. Digital image subtraction has been applied to almost every disease of dental hard tissues.
With this application it is possible to monitor, for example, the healing process of an apical radiolucency, marginal bone retraction, or the progress of caries decay. Prerequisites for digital subtraction are that the projections are identical at the different examinations; proper alignment of the two images, which is referred to as registration; and the ability to correct variations in exposure and processing that may obscure the changes in radiographic density associated with the pathology. These prerequisites limit the clinical application of the technique. Image coding permits either faster transmission of image data or better utilization of a storage device. There are two basic types of encoding schemes: those in which no information is lost are called lossless algorithms, while those in which information is lost are called lossy algorithms. Tuned-aperture computed tomography The TACT presents a new method for creating three-dimensional radiographic displays. This system uses digital radiographic images: software collates individual images of a subject and forms a layering of images that can be viewed as slices. The result is a reconstructed image made from a series of eight digital radiographs that are assimilated into one. Preliminary studies show that this system may have advantages over conventional film in the visualization of root canals (3). It has also proved to be an effective diagnostic tool for the evaluation of dental caries and simulated osseous defects (4,5). The system consists of a standard radiographic unit, a digital image acquisition device, and the software necessary for reconstruction of the acquired images. TACT and digital subtraction radiography, as more sensitive techniques, might be recommended as important diagnostic procedures for the early detection of bone changes. The clinical application of these techniques is still being explored.
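The histogram analysis and digital subtraction operations described above can be sketched with NumPy. The sketch assumes 8-bit grayscale images and takes registration of the two exposures as given; it is an illustration of the operations, not any vendor's implementation:

```python
import numpy as np

def gray_histogram(image):
    """Number of pixels at each of the 256 gray values (analysis operation).
    Brightness and dynamic range can be read off this distribution."""
    counts, _ = np.histogram(image, bins=256, range=(0, 256))
    return counts

def subtract_images(follow_up, baseline):
    """Digital subtraction of two registered 8-bit radiographs.

    Identical anatomy cancels to mid-gray (128); density changes appear
    lighter or darker, raising the signal-to-noise ratio of the pathology.
    """
    diff = follow_up.astype(np.int16) - baseline.astype(np.int16)
    return np.clip(diff + 128, 0, 255).astype(np.uint8)

# Two identical registered images subtract to uniform mid-gray:
a = np.full((4, 4), 90, dtype=np.uint8)
assert np.all(subtract_images(a, a) == 128)
```

The mid-gray offset is one common display convention for subtraction images; the cancellation of unchanged anatomy is what reduces the structured noise mentioned above.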
DIGITAL RADIOGRAPHY IN DENTAL PRACTICE - RESEARCH FINDINGS
An imaging system that allowed better visualization of caries, marginal bone retraction, and fine files at the apex, and enhanced early detection of periapical lesions involving the lamina dura, cancellous bone, and cortical bone, would be clinically desirable. Digital radiography may offer some advantages over the film radiography conventionally used in diagnostics and in monitoring healing processes.
Advantages and Disadvantages
Advantages include 50-70% less radiation exposure to the patient, reduced time between exposure and image generation, the ability to manipulate and produce a clear diagnostic image, elimination of chemical processing of radiographs, and the ability to store patient records electronically. Disadvantages include the size, shape, thickness, and rigidity of the sensor, lower image resolution, greater initial cost, unknown life expectancy of the sensor, and difficulty of infection control in direct digital imaging (6,7). CCD sensors cannot be sterilized, so direct saliva contact with the receptor and electrical cable must be avoided to prevent cross-contamination (8). During CCD procedures, patient discomfort may result in a greater number of retakes (9). Versteeg et al. demonstrated substantial horizontal placement errors, especially in molar areas, and vertical angulation errors in the anterior regions, where the incisal edges were cut off and not viewable; 28% of CCD images were unacceptable and required retakes, compared to 6% for films (9). A direct digital image is the original image captured in digital format, made up of picture elements called pixels. Indirect digital imaging, on the other hand, implies that an image is taken in analog format and then converted into a digital one. This analog-to-digital conversion results in a loss and alteration of information.
The original indirect digital imaging technique involved optical scanning of a conventional (analog) film image to generate a digital image. It required an optical scanner capable of processing transparent images and software for producing the digital images. Later, more sophisticated conversion techniques were developed. Photostimulable phosphor (PSP) radiographic systems were first introduced in 1981 by the Fuji Corporation in Tokyo (10). The image is captured on a phosphor plate as analog information and then converted into digital format when the plate is processed. The PSP consists of a polyester base coated with a crystalline halide emulsion that converts X-radiation into stored energy. The crystalline emulsion is made of a europium-activated barium fluorohalide compound. The energy stored in these crystals is released in the form of blue fluorescent light when the PSP is scanned with a helium-neon laser beam. The emitted light is captured and intensified by a photomultiplier tube and then converted into digital data. PSP images have a limited resolution of approximately 6 lp/mm (line pairs per millimeter), significantly lower than can be achieved with conventional film (~20 lp/mm). The PSP receptor is approximately the same size as conventional film, somewhat flexible, and easy to place. PSP is used for both intraoral and extraoral imaging techniques (11). In endodontics, researchers have examined the effects of enhancement on periapical lesion detection and the application of measurement algorithms for dimensional assessment. Research in digital radiography has not reached consensus on the volume and type of bone loss that must be present for bony lesions to be detected. Some have concluded that lesions can be detected only if perforation or erosion of the bone cortex is present (12,13). Other researchers reported that cancellous lesions, or lesions that involved the lamina dura, were evident (14). Yokota et al.
(15) found no difference between films and digital images of lesions that involved cortical bone. Kullendorff et al. (16) found no differences in diagnostic accuracy between conventional film and digitally acquired images. Wallace et al. (17) found that sensitivity and specificity calculations revealed higher values for film, followed by digital images, both with high specificity and low sensitivity values. Film had the highest PSR score, followed by PSP- and CCD-based images, in that order. The use of statistical methods such as receiver operating characteristic (ROC) curve analysis supports the visualization of bony lesions. Enhancement of digital images through histogram equalization or contrast adjustment has proven valuable for the detection of periapical lesions in low-density images (18). Others have shown that although contrast and brightness adjustments produce preferred images, image processing does not improve diagnostic accuracy (19,20). Recently, color-coding has been proposed for detecting differences between sequential images in periodontics, by means of image addition, to detect marginal bone changes. Color image displays may be superior to achromatic or monochromatic displays, as they provide a perceptual dimension that enhances observer information processing and heightens the ability to interpret the different types of data present in a particular image (21). William et al. (22) found that color-coded image processing applied to digital images had limited value in estimating the dimensions of periradicular lesions.
CONCLUSION
Using solid-state sensors, Norwegian dental practitioners found preparation and placement of the sensors significantly more difficult than with films (23). They reported that technical problems and repairs were common; however, they also reported that processing, viewing, and archiving were easier than for film-based systems (23). Sommers et al.
found a greater number of technical errors and unsatisfactory images in CCD imaging compared with film (24). The errors in periapical CCD imaging were vertical angulation and cone cutting, while the errors in periapical film were placement and horizontal angulation (24). CMOS sensors have more fixed-pattern noise and a smaller active area for image acquisition (10). More sensitive techniques, such as TACT and digital subtraction radiography, may be recommended for the early detection of bone changes. The clinical application of these techniques is still being explored. Generally, the findings are consistent and demonstrate that film and digital imaging modalities do not differ in their ability to record dental disease conditions (1,10,17).
Genome Sequences of Gordonia Bacteriophages Obliviate, UmaThurman, and Guacamole We describe three newly isolated phages—Obliviate, UmaThurman, and Guacamole—that infect Gordonia terrae 3612. The three genomes are related to one another but are not closely related to other previously sequenced phages or prophages. The three phages are predicted to use integration-dependent immunity systems as described in several mycobacteriophages. Gordonia and Mycobacterium are members of the taxonomic order Corynebacteriales. The hundreds of sequenced mycobacteriophages display considerable genetic diversity (1), but few Gordonia phage genome sequences are available, and their diversity and relationships to the mycobacteriophages are ill defined (2)(3)(4)(5)(6)(7). Isolation and characterization of Gordonia phages in the Science Education Alliance-Phage Hunters Advancing Genomics and Evolutionary Science (SEA-PHAGES) program assists in addressing these questions (1,8). Phages Obliviate, UmaThurman, and Guacamole were isolated by direct plating of soil samples from Pittsburgh, Pennsylvania, USA, on lawns of G. terrae 3612. They were then plaque-purified and amplified, and their DNA was extracted. All three phages have similar virion morphologies, with 50-nm diameter isometric heads and long flexible tails approximately 250 nm long. Each genome was sequenced using the Illumina MiSeq platform, and 140-bp single-end reads were assembled into major single contigs with lengths of 49,286 bp, 50,127 bp, and 49,894 bp, with 619-fold, 1,434-fold, and 809-fold coverage for Obliviate, UmaThurman, and Guacamole, respectively. All have defined ends with 10-base 3′ extensions (Obliviate and Guacamole: 5′-TCGCCGGTGA; UmaThurman: 5′-TCTCCGGTGA). The GC contents of the genomes are 67.2, 67.5, and 67.0%, similar to that of G. terrae (67.8%). The three phages share extensive nucleotide sequence similarity, with pairwise 96% nucleotide sequence identity spanning 72 to 82% of their genome lengths.
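GC content figures like those above are straightforward to compute from a sequence. A minimal sketch, using the reported 10-base 3′ extension as a stand-in input rather than an actual genome:

```python
# GC content sketch: percentage of G and C bases in a DNA sequence.
def gc_content(seq: str) -> float:
    seq = seq.upper()
    return 100.0 * sum(seq.count(base) for base in "GC") / len(seq)

fragment = "TCGCCGGTGA"  # the 10-base 3' extension reported for Obliviate
print(round(gc_content(fragment), 1))  # 70.0
```

Applied across a whole ~50 kb contig, the same calculation yields genome-wide values such as the 67.2% reported for Obliviate.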
The greatest similarities are within the virion structural and assembly genes, with interspersed segments of similarity within the nonstructural genes. The phages do not share extensive nucleotide sequence similarity with other phages or predicted prophages, although there are two small segments (~1.5 kb) with similarity to putative capsid assembly protease and lysis genes of a potential prophage in Gordonia sp. KTR9 (9). Using GeneMark and GLIMMER (10, 11) we identified 80, 83, and 78 protein-coding genes in Obliviate, UmaThurman, and Guacamole, respectively; none of the genomes encode tRNAs. All the predicted genes are transcribed rightward, with the exception of five to seven leftward-transcribed genes near the genome centers that include putative immunity repressors and integrase genes. The genome left arms contain the virion structure and assembly genes, and the right arms contain nonstructural genes, including recET recombinases. The lysis genes are unusually located amid the phage tail gene cluster, and two genes, Obliviate 19 and 20, encode endolysin functions: the peptidase and glycoside hydrolase domains, respectively. Obliviate is predicted to use an integration-dependent immunity system as described for some mycobacteriophages (12,13), characterized by the location of the phage attachment site (attP) within the repressor gene (38), and a degradation tag (-DAA) at the C-terminus of gp38. Obliviate is predicted to integrate at an attB site overlapping a Gordonia tRNA-Thr gene (KRT9_RS04270). Guacamole encodes a distantly related repressor (gp40; 41% amino acid [aa] identity with Obliviate gp38), although it contains the same attP core and is predicted to integrate at the same attB site. The UmaThurman repressor (gp36) is more distantly related to the Obliviate/Guacamole repressors (<40% aa identity) and contains a different attP corresponding to an attB site overlapping a tRNA-Arg gene conserved in many actinobacterial strains.
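Percent amino acid identity figures like the 41% reported for the Guacamole/Obliviate repressors come from comparing aligned sequences position by position. A toy sketch with invented aligned strings (not the real gp38/gp40 sequences):

```python
# Percent identity sketch: fraction of aligned positions where both
# sequences carry the same residue (gap columns never count as matches).
def percent_identity(a: str, b: str) -> float:
    assert len(a) == len(b), "sequences must be aligned to equal length"
    matches = sum(x == y and x != "-" for x, y in zip(a, b))
    return 100.0 * matches / len(a)

print(percent_identity("MKTA-LQS", "MKSA-LQT"))  # 62.5
```

Real comparisons would first produce the alignment itself (e.g., with a pairwise aligner); this sketch only covers the scoring step.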
Nucleotide sequence accession numbers. The Obliviate, UmaThurman, and Guacamole genomes are available from GenBank under accession numbers KU963254, KU963251, and KU963259, respectively.
FUNDING INFORMATION
This work, including the efforts of Graham F. Hatfull, was funded by the Howard Hughes Medical Institute (HHMI) (54308198).
CD52 is a prognostic biomarker and correlated with immune features in breast cancer Background: Breast cancer (BRCA) is the most commonly diagnosed cancer in women; it is an aggressive cancer with a high mortality rate. CD52 and its monoclonal antibody (Alemtuzumab) play a critical role in inflammatory diseases, but the relationship between CD52 and BRCA is not clear. Methods: We first used the random forest algorithm to find the genes most critical to the prognosis of BRCA patients. Then, based on analysis of RNA-sequence and clinical data from the TCGA dataset, we explored the relationship between CD52 and immune response-related pathways and immune metagenes. A pan-cancer analysis shows the importance of CD52 in a variety of tumors. Results: CD52 was related to the prognosis of BRCA patients (p < 0.001). Subsequent analysis based on RNA-seq and clinical data from the TCGA dataset revealed that CD52 is positively correlated with immune response-related pathways and immune metagenes. TIMER analysis showed that CD52 expression was positively correlated with the immune infiltrating levels of B cells, CD4+ T cells, CD8+ T cells, macrophages, neutrophils, and dendritic cells (DCs) in BRCA (r = 0.466, r = 0.645, r = 0.483, r = 0.149, r = 0.542, r = 0.665, respectively; p < 0.001). CpG sites (cg16068833, cg19743891, cg19743891, cg16664472, cg19677267, cg22517705, and cg27430637) were negatively correlated with CD52 expression (r = -0.662, r = -0.629, r = -0.598, r = -0.519, r = -0.492, r = -0.445, respectively; p < 0.001). Furthermore, the expression of CD52 was significantly correlated with pathological stage (T stage, N stage, and survival state; p = 0.024, p = 0.047, and p = 0.007, respectively). The results of the pan-cancer study suggest that CD52 may play an important role in the occurrence, development, and prognosis of multiple tumors. Conclusions: These findings suggest that CD52 is a promising immunotherapy target with prognostic prediction value for BRCA.
Background
Breast cancer (BRCA) is the most commonly diagnosed cancer in women and the second-deadliest cancer after lung cancer. The overall breast cancer death rate increased by 0.4% per year for two decades after 1975, although by 2017 total fatalities had declined rapidly by 40%. About 13% of women are likely to be diagnosed with aggressive breast cancer during their lifetime, according to the American Cancer Society's 2019 estimate [1]. Invasive breast cancer accounts for a significant proportion of cases [2,3]. Early-stage disease (stages I and II) carries a favorable prognosis, with 5-year survival rates of 98% and 92%, owing to the popularization of mammography and progress in targeted therapy. Nevertheless, the prognosis and five-year survival rate of breast cancer vary considerably with scale, district, age, clinical stage, molecular phenotype, and local immune infiltration; as a result, poor prognosis is not rare [1,4]. Thus, more accurate and useful prognostic biomarkers are still needed. Tumor-associated macrophages (TAMs) are among the most critical components of tumor-infiltrating immune cells in the tumor microenvironment [5]. TAMs have two principal functional states: proinflammatory M1 macrophages, which are regarded as protective factors that destroy tumor cells, and alternatively activated M2 macrophages, which are considered unfavorable factors that promote tumor proliferation [3,6,7]. Previous research has established that macrophages can decrease the expression of estrogen and progesterone receptors while increasing the expression of urokinase-type plasminogen activator receptor and Ki67 in breast cancer; these results also demonstrated a significant positive association between TAMs and poor prognosis in breast cancer patients [8].
In this study, we first performed univariate Cox proportional hazards analysis to select prognostic macrophage-related gene signatures. A random forest was then used to build a 13-gene signature for BRCA, and the variable importance suggested that CD52 was the most critical gene for further analysis. CD52 (CAMPATH-1 antigen) is a glycosylphosphatidylinositol (GPI)-anchored protein of 12 amino acids present on the cell surface of immune cells, including monocytes/macrophages [9,10]. Piccaluga PP et al. found that CD52 was up-regulated in peripheral T-cell lymphoma, and that estimation of CD52 expression might provide a theoretical basis for the efficacy of treatment response [11]. Moreover, Alemtuzumab, an anti-CD52 monoclonal antibody, has been investigated as a molecular target for immunotherapy to treat acute myeloid leukemia [12]. Nevertheless, the relationship among CD52 expression, prognostic value, and immune infiltration in BRCA is not clear. Therefore, we analyzed the clinical and molecular data of CD52 in BRCA samples from the TCGA dataset to explore the expression of CD52 and its relationship with immune-related molecules. This may provide a possible basis for the use of Alemtuzumab in the treatment of BRCA patients.
Data source and download
We downloaded the available RNA-sequence and clinical data of invasive breast cancer patients from the TCGA database (https://portal.gdc.cancer.gov). The RNA-seq results were combined into a gene expression matrix. We obtained all methylation information from patients with BRCA and from normal tissue controls from the UCSC Xena browser (https://xenabrowser.net/).
Extraction of the macrophage-related gene matrix and selection of the target gene
We extracted macrophage-related gene expression patterns according to the gene signatures of M1 and M2 macrophages in the literature [13].
We performed univariate Cox proportional hazards regression to identify the differentially expressed macrophage-related genes associated with overall survival time (P < 0.05 was considered statistically significant). Then we used a random forest to establish a prognostic model. The most important gene, according to the variable importance, was selected as the target gene for further analysis. The Kaplan-Meier (KM) method was used to evaluate survival differences, and a receiver operating characteristic (ROC) curve was used to assess the accuracy of the model's predictions. We used the STRING database (https://string-db.org/), version 11.0, to assess protein-protein interaction network information for the target gene.
Relationship between CD52 expression and clinical symptoms
The "ggstatsplot" package was used to examine the relationship between CD52 expression in the TCGA database and six clinical variables (age, survival state, stage, T stage, M stage, N stage).
GSEA-based KEGG enrichment analysis
To detect pathways differentially activated in BRCA, we performed GSEA (Gene Set Enrichment Analysis)-based KEGG (Kyoto Encyclopedia of Genes and Genomes) enrichment analysis between the low and high CD52 expression phenotypes using GSEA software. An enrichment score (ES) > 0.4 was used as a filter, and a false discovery rate (FDR) value < 0.05 was considered statistically significant.
ssGSEA analysis of the immune features of CD52 in BRCA
We downloaded 16 signatures from the literature [14], including immune-relevant, stromal-relevant, and mismatch-relevant signatures. The list of these genes is shown in Supplementary Table 1. We performed ssGSEA (single-sample GSEA) analysis to determine the enrichment scores of immune features using the GSVA package in R [15]. We calculated Pearson correlation values between CD52 expression and immune features.
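The gene-selection step described above might be sketched as follows. This is an illustrative Python version with synthetic data, not the authors' actual R analysis; the univariate Cox screen (which needs survival times) is omitted, and the gene names other than CD52 are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_samples = 200
genes = ["CD52", "GENE_A", "GENE_B"]  # hypothetical screened gene panel
X = rng.normal(size=(n_samples, len(genes)))  # synthetic expression matrix
# Synthetic binary outcome driven almost entirely by the first column
# ("CD52" here), so its variable importance should dominate.
y = (X[:, 0] + 0.1 * rng.normal(size=n_samples) > 0).astype(int)

# Fit a random forest and rank genes by variable importance.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = sorted(zip(genes, rf.feature_importances_), key=lambda t: -t[1])
print(ranked[0][0])  # the most "important" gene in this toy setup
```

In the study itself, the top-ranked gene by importance (CD52) was then carried forward to survival, enrichment, and infiltration analyses.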
Association between CD52 expression and immune metagenes
We downloaded seven immune metagenes from the literature [16]: IgG, hemopoietic cell kinase (HCK), MHC-I (major histocompatibility complex I), MHC-II (major histocompatibility complex II), LCK (lymphocyte-specific kinase), STAT1 (signal transducer and activator of transcription 1), and Interferon. The list of these genes is shown in the Supplementary material.
Correlation of CD52 expression and methylation
We obtained DNA methylation data from the UCSC Xena browser and calculated the correlation between CD52 expression and the methylation of CpG sites using Pearson's correlation analysis, with 0.4 as the filter value for the correlation coefficient.
Assessment of the expression and prognostic importance of CD52 in pan-cancers
We used the TIMER database to evaluate the expression of CD52 in pan-cancers. Moreover, the present study assessed the prognostic importance of CD52 in pan-cancers using the TCGA analysis database and the Kaplan-Meier Plotter database (http://kmplot.com/analysis/).
Results
Based on the variable importance in Fig. 1B, CD52 was the most crucial gene, and the survival analysis indicated that the prognostic value of CD52 was significant (Fig. 1D). The PPI network of the CD52 protein showed its value (Fig. 1E). High expression of CD52 was associated with low risk and was a protective factor. These results indicated that CD52 is a prognostic marker suitable for further analysis.
Relationship between expression of CD52 and clinical symptoms
Exploring the association between clinical symptoms and expression of CD52 in the TCGA database, we found a significant correlation between CD52 expression and the T stage, N stage, and survival state (p = 0.024, p = 0.047, and p = 0.007) (Fig. 2A-C). Age, M stage, and stage were not significantly correlated with CD52 expression (Supplementary Figure 3).
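The methylation screen above (Pearson correlation with a 0.4 cutoff on the coefficient) can be sketched as follows, with synthetic data and hypothetical CpG site names standing in for real TCGA/UCSC Xena matrices:

```python
import numpy as np

# Methylation-expression screen sketch: correlate CD52 expression with
# per-site CpG values and keep sites passing the |r| > 0.4 filter.
rng = np.random.default_rng(2)
expression = rng.normal(size=100)  # synthetic CD52 expression values
cpg_sites = {
    # hypothetical site names; the first is built to be anti-correlated
    "cg_hypothetical_neg": -expression + 0.3 * rng.normal(size=100),
    "cg_hypothetical_null": rng.normal(size=100),
}

kept = {}
for site, beta in cpg_sites.items():
    r = float(np.corrcoef(expression, beta)[0, 1])
    if abs(r) > 0.4:                # the 0.4 filter from the methods
        kept[site] = round(r, 2)

print(sorted(kept))  # only the anti-correlated site survives the filter
```

The negative coefficients that survive such a filter are what the paper reports for the six CpG sites, consistent with methylation suppressing CD52 expression.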
GSEA analysis of CD52-related pathways
GSEA results showed that the B cell receptor, chemokine, NOD-like receptor, Toll-like receptor, and T cell receptor signaling pathways were significantly enriched in the CD52 high-expression group, all of which are strictly related to tumor immunity. In contrast, glycosylphosphatidylinositol (GPI)-anchor biosynthesis and metabolism-related pathways were significantly enriched in CD52 low-expression samples (Fig. 2D).
Assessment of the expression and prognostic importance of CD52 in pan-cancers
We used the TIMER and Kaplan-Meier Plotter databases to evaluate the expression and prognostic value of CD52 in pan-cancers.
Discussion
Based on the expression of macrophage-related genes in TCGA BRCA patients, univariate Cox proportional analysis and the random forest algorithm were used to build a prognostic model. The variable importance suggests that CD52 is the most critical gene; moreover, patients with high CD52 expression have a better prognosis. CpG methylation typically results in abnormal gene expression [19]. In our study, six CpG sites (cg16068833, cg19743891, cg19743891, cg16664472, cg19677267, cg22517705, and cg27430637) were negatively correlated with CD52 expression (r = -0.662, r = -0.629, r = -0.598, r = -0.519, r = -0.492, r = -0.445, respectively; p < 0.001). DNA methylation occurs most commonly at CpG dinucleotides and is related to the clinicopathological features of BRCA patients, including stage and histological grade [20,21]. Furthermore, CD52 expression was significantly correlated with pathological variables (T stage, N stage, and survival state). These results suggest that CD52 is important for the prognosis of BRCA patients. CD52 epitopes are expressed on the surface membrane of peripheral lymphocytes, monocytes, and macrophages, and on the epithelial membrane of the male reproductive system [22].
Rashidi M et al. have demonstrated that CD52 can inhibit the activation of NF-κB by suppressing Toll-like receptor and tumor necrosis factor receptor signaling, thereby inhibiting the production of inflammatory cytokines by macrophages, monocytes, and dendritic cells [23]. As the GSEA results show, CD52 was significantly enriched in a variety of immune-related pathways, such as the B cell receptor, chemokine, NOD-like receptor, Toll-like receptor, and T cell receptor signaling pathways. These pathways are closely related to immune infiltration in cancer [24,25]. Previous studies have demonstrated that immune cell infiltration is associated with activation of the immune response, contributing to anti-tumor effects and a better prognosis in BRCA [26][27][28]. To clarify the correlation between CD52 expression and immune features, based on ssGSEA analysis we found that CD52 expression is related to T cell- and macrophage-related pathways and functions. Previous studies have confirmed that immune infiltration is widespread in breast cancer tissues and affects patient prognosis [29,30]. We further investigated the correlation of CD52 expression with multiple levels of immune infiltration in BRCA. Our results indicated a significant positive correlation between the infiltration levels of CD8+ T cells, CD4+ T cells, B cells, DCs, and neutrophils in BRCA and the expression of CD52. Thus, a large body of data confirms that CD52 plays a role in the tumor immunology of BRCA. Alemtuzumab, an anti-CD52 monoclonal antibody, has been used in the treatment of various immune-related diseases, including multiple sclerosis and inflammatory myopathy [31,32]. We found that CD52 expression is abnormal in breast cancer patients and plays a role in regulating tumor immunity.
Meanwhile, the results of the pan-cancer study suggest that CD52 is differentially expressed in multiple tumors and may play an essential role in the occurrence, development, and prognosis of multiple tumors. These results may indicate the possibility of expanding the application of Alemtuzumab in tumor immunotherapy. However, the current study was limited by the absence of experimental evidence. Our PPI results suggested protein-protein interactions of CD52-related proteins, which may provide the basis for further mechanistic study.
Conclusion
These findings deepen our understanding of CD52 expression, prognosis, and immune-related features in BRCA. CD52 is a promising immunotherapy target for most cancer patients, and the drug Alemtuzumab may also bring new hope for cancer immunotherapy.
Clinical guidelines for the management of craniofacial fibrous dysplasia Fibrous dysplasia (FD) is a non-malignant condition caused by post-zygotic, activating mutations of the GNAS gene that results in inhibition of the differentiation and proliferation of bone-forming stromal cells and leads to the replacement of normal bone and marrow by fibrous tissue and woven bone. The phenotype is variable and may be isolated to a single skeletal site or multiple sites and sometimes is associated with extraskeletal manifestations in the skin and/or endocrine organs (McCune-Albright syndrome). The clinical behavior and progression of FD may also vary, thereby making the management of this condition difficult with few established clinical guidelines. This paper provides a clinically-focused comprehensive description of craniofacial FD, its natural progression, the components of the diagnostic evaluation and the multi-disciplinary management, and considerations for future research. Definition Fibrous dysplasia (FD) is a non-malignant condition in which normal bone and marrow are replaced by fibrous tissue and haphazardly distributed woven bone [1,2]. Patients may exhibit involvement of one bone (monostotic FD; MFD), multiple bones (polyostotic FD; PFD) or they may have McCune-Albright syndrome (MAS), which has been classically defined by the triad of PFD, café-au-lait skin macules and endocrinopathies, including among others, precocious puberty [3]. FD is caused by somatic activating mutations in the α subunit of the stimulatory G protein encoded by the gene GNAS [4,5]. A related disorder, cherubism, is manifest by expansile, multiloculated, radiolucent fibro-osseous lesions with multiple giant cells located bilaterally and symmetrically in the jaws. Cherubism is genetically distinct from FD and will be discussed elsewhere in the Proceedings of this meeting. 
Prevalence
MFD is reported to be the most common manifestation of the disease; in some references it is estimated to occur four times more often than PFD [6]. However, in other series PFD is reported to be more common than MFD [7,8]. While the prevalence of MFD is probably greater than that of PFD, in none of the studies that define the relative prevalence of MFD versus PFD have the subjects undergone thorough skeletal and/or endocrine screening to determine the full extent of the skeletal and/or endocrine involvement. The most common locations are the craniofacial bones, proximal femur, and ribs [2,8-11]. In MFD, the zygomatic-maxillary complex is reported to be the region most commonly involved (Figure 1A&B) [12]. In the less prevalent PFD and MAS, the craniofacial region is involved in 90% of cases and the anterior cranial base in over 95% of cases (Figure 2) [13]. Depending on the type and location of FD, the signs and symptoms vary and include facial deformity and asymmetry, vision changes, hearing impairment, nasal congestion and/or obstruction, pain, paresthesia, and malocclusion. Many patients are asymptomatic, and the diagnosis is made when a family member, friend, or health care provider who has not seen the patient for a period of time notices asymmetry, or when an incidental abnormality is noted on dental or panoramic x-rays or on a head and neck computed tomogram (CT).
Figure 1. An 11-year-old female with monostotic fibrous dysplasia of the left zygomatic-maxillary region. A-C) Clinical photographs demonstrating the appearance. The lesion was quiescent and asymptomatic; it had grown slowly over a period of years. D) Her dentist noted delayed eruption of her teeth (*) on that side, as well as mild facial asymmetry, and obtained the panorex that identified the lesion. E-I) CT images demonstrate the appearance pathognomonic of FD for her age: a homogeneous, "ground-glass" lesion.
J) The reconstructed CT image gives a sense of the three-dimensional shape of the lesion that accounts for the clinical appearance.
Natural progression and clinical behavior
FD most commonly behaves as a slow-growing, indolent mass lesion. The facial deformity and the distortion of adjacent structures such as the optic nerve, eye/globe, nasal airway, cranial nerve VII, middle ear ossicles, and teeth are gradual and insidious. Uncommonly, in young children and pre-pubertal adolescents, the lesions may demonstrate rapid growth, cortical bone expansion, and displacement of adjacent structures such as the eye and the teeth. In some patients, rapid growth is associated with other pathological lesions such as aneurysmal bone cysts (ABCs) or mucoceles (Figure 3) [13-15], or, more rarely, with malignant transformation. Malignant change to osteosarcoma or other forms of sarcoma has been reported in less than 1% of cases of FD [16-22]. When rapid enlargement occurs, adjacent vital structures such as the optic nerve, globe, auditory canal/structures, and nasal airway may be invaded or compressed, resulting in functional deficits. For these reasons, some authors have advocated aggressive surgical resection to avoid potential blindness or hearing loss [23-26]. Rapid enlargement of FD in the nasal bones, maxilla, or mandibular symphysis may result in airway obstruction through obliteration of the nasal cavity or posterior displacement of the tongue. However, it has recently been demonstrated that such aggressive behavior with rapid expansion is the exception, and that a conservative, expectant approach is more prudent [13,14,27]. In MFD and PFD, progression of the lesions appears to taper off as patients approach puberty (defined as skeletal maturity throughout this article) and beyond. Although continued active disease and symptoms into adulthood are uncommon, they have been reported [28-30].
In addition, the NIH Screening and Natural History Study of Fibrous Dysplasia (SNHFD, protocol 98-D-0145) has documented persistent active disease and pain into adulthood in some patients. Based on >25 years of observation at the NIH, it appears that MFD does not progress to PFD, and that neither progresses to MAS [31]. In MAS, while growth of the lesions may also diminish after puberty, the overall degree of bony enlargement and deformity is often more severe and disfiguring than in patients with PFD. Data in the literature and observations from the NIH SNHFD indicate that the most severe deformities and symptoms occur in patients who have poorly controlled growth hormone excess [32-34]. It is recommended, therefore, that growth hormone excess in patients with PFD and MAS be aggressively managed. In a retrospective study of 266 serial bone scans from 66 patients with extensive PFD or MAS followed for up to 32 years, Hart et al. demonstrated that 90% of FD lesions, regardless of the site, were present prior to 15 years of age [31]. In the craniofacial region, 90% of all lesions were detectable by bone scan by age 3.4, and no new lesions in the craniofacial region were reported beyond the age of 10.

Figure 3 A) Worsening asymmetry of the left eye and face was noted, and on examination the patient was found to have vertical dystopia of the orbit in the preoperative photograph. He was found to have a rapidly growing ABC within FD and underwent immediate resection and decompression of the ABC. B) The asymmetry and symptoms resolved after surgery. Note the classic café au lait spots of the left face and neck region as part of the triad of MAS. C&D) Preoperative CT images of the patient in A showing the FD lesion and associated ABC. Note the fluid/fluid level diagnostic of an ABC (arrows). The association of an ABC often results in aggressive behavior and rapid enlargement of the FD lesion with displacement of adjacent structures, in this case, the eye.
Medical history and examination

A thorough history and physical examination are necessary to determine the extent of disease and whether the FD is isolated or one of multiple lesions associated with PFD or MAS. Documentation of the onset, type, and duration of symptoms and the presence of functional impairments is imperative. Inquiries should include the onset of menarche in females (to rule out precocious puberty), other endocrine abnormalities or pathologies (such as hyperthyroidism, pituitary abnormalities, and renal phosphate wasting), growth abnormalities (review of growth charts), history of fractures (to rule out the presence of other FD lesions in the extremities), and the presence of skin lesions (café-au-lait lesions). These questions are particularly critical in young patients in whom underlying endocrine abnormalities may not have been detected and aggressive management is warranted. If there are any positive responses to the above inquiries, referral to an endocrinologist is strongly recommended to rule out PFD or MAS. A skeletal survey or bone scan may be indicated if there is a suspicion of PFD or MAS, particularly in a patient who is not skeletally mature. Additional FD lesions beyond the craniofacial region require further evaluation by an orthopedic surgeon. If the symptoms include rapid expansion, new onset of pain, visual change or loss, hearing change or loss, evidence of airway obstruction, or new onset of paresthesia or numbness, referral to a surgical specialist should be made immediately. Appropriate specialists that may be consulted include neurosurgeons, craniofacial surgeons, oral & maxillofacial surgeons, otolaryngologists, neuro-ophthalmologists, audiologists, and dentists, depending on the site of involvement or symptoms. In institutions where a craniofacial anomalies team is available, this may be an alternative referral that would assist the patient in further comprehensive evaluation.
Imaging

CT imaging is recommended to define the anatomy of individual lesions and to establish the extent of disease. A standard craniofacial CT, without contrast and with slice thickness no greater than 3.75 mm (from the top of the head to the thyroid region), is used to evaluate for the presence of FD in the skull base and facial bones. Historically, plain films of the craniofacial region were used, but because of the overlapping of adjacent structures, involvement of the skull base was often underreported. For similar reasons, plain radiographs are not recommended for diagnostic purposes for cranial or facial lesions. Dental radiographs (i.e. panorex and dental films) or a cone-beam CT are appropriate to examine and help manage lesions around the dentition. Depending on the site of involvement, the appropriate referrals should be made for further analysis. The most common radiographic characteristic of craniofacial FD is a "ground-glass" appearance with a thin cortex and without distinct borders [35]. In an ongoing study at the NIH [36], it was demonstrated that the typical characteristics of FD on CT and the natural radiographic progression may vary from a "ground-glass" or homogenous appearance to a mixed radio-dense/radio-lucent lesion as the patient ages (Figure 4). In pre-pubertal patients with PFD or MAS, the lesions most often appear as homogenous, radio-dense lesions on CT. As these patients enter the second decade of life, the FD lesions progress to a mixed appearance, which stabilizes in adulthood but does not resume a homogenous appearance. While the change to a mixed radiographic appearance alone does not require biopsy or further investigation, we recommend careful monitoring and intermittent craniofacial CT during the pubertal phase of the young patient.
This period of change in CT appearance coincides with case reports of increased activity of the FD lesions, whether through rapid growth, worsening facial asymmetry, malignant transformation, or association with other pathologic, radiolucent lesions such as an ABC and accelerated expansion [15]. Additionally, in our collective experience, there have been young patients who have the clinical and histologic diagnosis of a monostotic fibro-osseous lesion, are Gsα mutation negative, yet demonstrate a rapidly enlarging and predominantly multi-loculated radiolucent appearance on CT rather than the typical indolent growth. The exact pathophysiologic mechanism and its relationship to the variable genotype (i.e., is this a false-negative gene test or another entity?) has yet to be determined. If the patient is experiencing new onset of symptoms or rapid enlargement at any age, an updated CT is recommended, as well as immediate referral to the appropriate specialist for further investigation and management.

Biopsy

A bone biopsy, by the appropriate surgical specialist, should be obtained to confirm the diagnosis of FD if the site is amenable to biopsy. Unfortunately, the histology does not predict the biological behavior of these lesions [37,38]. Biopsy of FD does not specifically induce growth of the lesion. However, FD lesions may be quite vascular and bleeding can be brisk; the surgeon should be prepared to deal with this. If the lesion is quiescent or asymptomatic, and/or in the cranial base, a biopsy may not be possible or necessary. History, clinical examination, and the classic radiographic presentation are often adequate to establish the diagnosis of FD.

Facial bones

Asymmetry and swelling are the most common complaints when FD is found in the bones of the facial skeleton. Secondary deformities due to slow-growing FD include vertical dystopia (difference in the vertical position of the eyes), proptosis, frontal bossing, and facial and jaw asymmetries or canting.
The degree of facial deformity varies, but those with MAS are the most severely affected, particularly when associated with untreated or inadequately treated growth hormone excess (Figures 5 & 6). The diagnosis and management of facial lesions is at least in part based on the patient's age and stage of skeletal maturity, i.e., pediatric versus adult (skeletally mature). In the pediatric population, of all the patients who present for evaluation of facial swelling and asymmetry, more than half of all jaw tumors encountered are of mesenchymal cell lineage, and of these tumors nearly 50% are fibro-osseous lesions, a significant proportion of which are FD [37,39]. Thus, FD must be high on the differential diagnosis for children with facial swelling and asymmetry. The management of FD in young and older patients is dictated by the clinical and biological behavior of the lesion, as the histology does not provide reliable prognostic or predictive information. There are currently no biomarkers to predict the behavior of these fibro-osseous lesions [37]. This is particularly concerning in pediatric patients because of the potential for active growth, malignant transformation, and association with other tumors. The FD lesions of the face may be described as quiescent (stable with no growth), non-aggressive (slow growing), or aggressive (rapid growth +/- pain, paresthesia, pathologic fracture, malignant transformation, or association with a secondary lesion). In the case of a quiescent FD lesion in which the patient does not complain of facial deformity, observation and monitoring for changes is an acceptable treatment modality. Annual evaluations may be adequate. The patient's concerns and symptoms, clinical assessment including sensory nerve testing in the region of involvement, photographs, and facial CT should be obtained at each visit. An annual CT may be necessary for the first 2 years; however, the interval may be lengthened based on the clinical findings.
Surgical contouring by a maxillofacial or craniofacial surgeon is indicated if the patient is bothered by facial disfigurement. While complete resection may be possible in monostotic lesions, it is unlikely to be possible in PFD or MAS, and the surgeon must weigh the reconstruction options that will provide the patient with the best outcome as well as preserve the function of adjacent nerves and structures. These patients may also require orthognathic surgery to correct a concurrent malocclusion or facial/dental canting [40]. There is no documented contraindication to orthognathic surgery so long as the lesions are quiescent. Bone healing appears to be normal with conventional rigid fixation [40]. Regular follow-up with the surgeon is necessary to confirm that there is no recurrence or further deformity. In patients with non-aggressive but active FD, it is ideal to wait until the lesion becomes quiescent and the patient has reached skeletal maturity before performing an operation. However, in cases where the patient's psychosocial development may be impaired due to the facial deformity, surgical contouring and/or resection may be warranted. The patient and family must be aware of the potential for regrowth if the lesion cannot be resected completely, which is often the case. In cases of PFD or MAS where the disease is extensive, the lesions are often not resectable.

Figure 4 Variations in CT appearance of fibrous dysplasia based on age. A) FD in the young patient most often appears as homogenous, radiodense lesions often described as having a ground-glass appearance on CT. B) As these patients enter adolescence, the FD lesions progress to a mixed appearance which stabilizes in adulthood (C) but does not necessarily resume a homogenous appearance. This may explain the numerous radiographic descriptions of FD in the literature such as "ground-glass", "pagetoid", "lytic", and "cystic".
Repeat surgical contouring and extensive debulking may be necessary to achieve acceptable facial proportions [41]. In the future, improvements in CT imaging and software will allow for accurate surgical simulation, and intraoperative navigational tools may guide the surgeon throughout the contouring. Advanced CT software is useful for superimposition of pre- and postoperative images. These can then be compared to follow-up CT scans to determine the stability of the result or the presence of regrowth. Despite these new imaging technologies, there is no therapy or technology that can predict and/or prevent regrowth. Patients with aggressive and rapidly expanding FD occasionally complain of new onset pain or paresthesia/anesthesia [15]. Based on the site of involvement, the patient may also report visual disturbances, epiphora, impaired hearing, nasal congestion or obstruction, sinus congestion and pain, and malocclusion. We recommend immediate evaluation by a maxillofacial surgeon, otolaryngologist, or craniofacial surgeon, together with CT imaging. The etiology of this change in behavior may not be readily identified, but documented causes include associated expansile lesions such as ABC or mucocele, malignant transformation, and osteomyelitis. A biopsy of the area of growth is necessary prior to surgical management. Treatment may range from contour resection to en bloc resection depending on the diagnosis. In cases of an associated lesion, the management is based on that associated lesion, e.g., an ABC with FD would warrant curettage of the ABC and contouring of the underlying FD. The diagnosis may be difficult, particularly in cases of low-grade osteosarcoma [46,47]. In such cases, immunohistochemical analysis with MDM2 and CDK4 may assist in distinguishing FD from a malignancy, as malignancies will often express MDM2 or CDK4 while FD will not [48,49]. The treatment is based on the management of the malignancy, and resection with adequate margins is necessary.
Osteomyelitis must be treated with prolonged antibiotic therapy and consultation with an infectious disease specialist. The limited literature and our collective experience indicate that osteomyelitis in the setting of FD is difficult to diagnose and to treat successfully [50-54]. We have managed patients who developed osteomyelitis of the jaws after attempts at exposure and orthodontic movement of impacted teeth. It may resolve with prolonged antibiotic treatment and pain management; however, en bloc resection of the FD lesion may be required for refractory pain and persistent infection.

Sinuses

The sinuses may be affected by FD, with the most frequent site being the sphenoid sinus, followed by the ethmoid and maxillary sinuses (Figure 7) [55]. This is not surprising, as the anterior cranial base is often affected in patients with craniofacial PFD [13]. The entire sinus can be completely obliterated by FD, yet surprisingly the incidence of sinusitis in these patients is not greater than in the general population. This may be explained by the loss of air space and Schneiderian membrane in an obliterated sinus and the elimination of a source of infection. Patients typically complain of nasal congestion (>34% of those with symptoms and sinus involvement), headaches or facial pain, recurrent sinusitis, and hyposmia. This appears to be associated with FD in the inferior turbinate and the subsequent hypertrophy. There appears to be a correlation between nasal congestion and hyposmia and the severity of disease, but a history of sinusitis and facial pain/headaches does not correlate with the amount of craniofacial disease [55]. The findings by DeKlotz and Kim also note that growth hormone excess is associated with more significant involvement of the sinonasal region [55]. The management of sinus and nasal congestion includes nasal saline spray, nasal steroid spray, antihistamines for those with seasonal allergies, and antibiotics for suspected bacterial sinus infections. Consultation with an otolaryngologist may be necessary for persistent congestion and chronic sinus infections. Though there is very little literature on the effectiveness of sinus surgery in patients with FD sinus disease and sinus obliteration, if surgery is indicated, we recommend waiting until the adjacent FD is quiescent and the patient is at least in the late teens and skeletally mature to minimize the possibility of regrowth and the necessity for re-treatment. Endoscopic sinus surgery with and without image-guided systems has become a popular approach [56-58], although it may be necessary to combine endoscopy with a traditional external approach [59,60]. The extent of resection should be based on the location of the pathological bone and its proximity to important sinus structures, as radical or complete resection may not be necessary or possible. The effectiveness of endoscopic surgery for FD is undetermined, as sinus surgery is not commonly done in patients with FD. The association of other expansile lesions such as a mucocele or ABC with sinus FD may result in rapid growth of the combined lesion [61,62]. This is particularly concerning in areas adjacent to the skull base and brain, such as the sphenoid, ethmoid, and frontal sinuses, where access may be limited. The symptoms depend on the adjacent involved structures such as the eye, optic nerve, crista galli, and brain. A referral to a multidisciplinary skull base surgery center is necessary for further evaluation and treatment.

Figure 7 Fibrous dysplasia involving the right maxillary sinus and turbinate. A) Normal facial CT without any FD for comparison. B) FD in the right maxilla and extension into the maxillary sinus. There is also FD involvement of the right turbinate (*) that may explain the patient's nasal congestion.
Teeth

The dental variations in FD and the management of dental problems in patients with FD are poorly characterized. Due to the lack of information, the dental community is wary of treating patients with FD or MAS out of concern for potential post-procedure complications and exacerbation of the FD lesions around the teeth [63]. Akintoye et al [64] examined 32 patients with craniofacial FD who were enrolled in the SNHFD Study. Twenty-three patients had PFD/MAS and 9 had monostotic disease; this population reflected the NIH study population with more extensive disease. In this study, 41% of the patients had dental anomalies in general, and 28% of the patients had a dental anomaly within FD bone. The most common anomalies included tooth rotation, oligodontia, displacement, enamel hypoplasia, enamel hypomineralization, taurodontism, retained deciduous teeth, and attrition (Figure 8). There was no correlation between any endocrine dysfunction or renal phosphate wasting and enamel hypoplasia or hypomineralization, attrition, or any of the other tooth anomalies. However, taurodontism, a condition noted on dental radiographs characterized by enlargement of the pulp chamber in multi-rooted teeth, has been described in patients with syndromes including growth hormone excess [65,66] but never in FD/MAS. Taurodontism was noted only in the FD patients who had 1 or more endocrinopathies. While taurodontism does not require special dental care, it may be an indicator of an underlying endocrinopathy associated with MAS. The caries index scores were higher among FD patients (Table 1).

Figure 8 Dental anomalies seen in patients with fibrous dysplasia of the jaw bones. In a study by Akintoye et al [64], 41% of the patients with FD had dental anomalies in general and 28% of the patients had a dental anomaly within FD bone. Adapted from reference [64].

Lee et al. Orphanet Journal of Rare Diseases 2012, 7(Suppl 1):S2 http://www.ojrd.com/content/7/S1/S2
This may be attributed to the increased enamel hypoplasia and hypomineralization or to the limited dental care these patients receive. There were no histological abnormalities in the extracted wisdom teeth that might explain the increased caries index scores. We recommend more frequent dental visits, every 3-4 months. Additionally, no patients reported any complications or exacerbation of their FD lesions after dental restorations, tooth extractions, orthodontic therapy, odontoma removal, maxillary cyst removal, or biopsy of the jaws. Among the 10 patients who received orthodontic therapy, the duration of treatment appeared somewhat longer than conventional cases (2-4 years in duration), the results were less than satisfactory, and there was relapse. We recommend careful monitoring of the post-orthodontic results in patients with FD. Despite the extensive disease in and around the dentition in some of the patients, the arch form was predominantly maintained without significant displacement of the teeth, as compared to other benign growths. While this may describe the natural progression of most FD, there is clearly a subset of patients with the clinical and histologic diagnosis of FD who have rapid growth of the facial lesions, radiolucent changes on CT, and displacement of teeth from the natural arch form. While some of these lesions have tested Gsα mutation negative, many patients in this subset have not been genetically characterized to determine whether the absence of the Gsα mutation in the presence of a fibro-osseous lesion increases the risk of aggressive behavior and aberrant growth. Further studies are necessary to discern the implications of the mutation or the lack thereof. For patients with missing teeth, dental endosseous implants may be considered [67]. Bone healing and integration of the implants occur, though integration may be slower, and the quality of bone is consistent with grade 3 or 4 bone, as the cortex is often thin or nonexistent.
In a reported case of a 32-year-old female with MAS, successful integration and loading of dental implants in the maxilla and mandible occurred. The maxillomandibular lesions had been quiescent for 3 years. The dental implants were at least 15 mm in length and were functional after 5 years. The literature is limited, and it is unclear whether there is an increased risk of implant failure. There is also the concern that osteomyelitis may occur in the setting of a failed implant. If implant treatment is considered, we recommend that the implant be placed once growth of the FD lesion has subsided. Additionally, we would recommend following the principles of implant placement and placing the dental implants after a young patient has completed growth to avoid submerged implants and revision of the prosthesis [68].

Skull base disease

Orbit/optic nerve/sphenoid bone

Common findings associated with PFD around the eye include proptosis, dystopia, and hypertelorism due to involvement of the frontal, sphenoid, and ethmoid regions [30,69]. Less common findings include optic neuropathy, strabismus, lid closure problems, nasolacrimal duct obstruction and tearing, trigeminal neuralgia, and muscle palsy with skull base involvement [70,71] (FitzGibbon, unpublished data). There has been significant controversy regarding the management of FD of the sphenoid bones that encase the optic nerve, particularly in patients whose vision is normal (Figure 9). Clinicians have assumed that such encasement seen on CT will cause blindness because of the proximity and compression of the optic nerve by FD, and because of reported cases of acute loss of vision. In one study, vision loss was reported to be the most common neurologic complication of this disease [72]. With such concerns in mind, prophylactic decompression of the optic nerve ("unroofing") has been recommended by many surgeons [23-26].
Unfortunately, decompression may result in no improvement of vision (reported in 5-33% of cases) or, worse, postoperative blindness. In addition, the abnormal bone tends to grow back in most cases. The first case-control study, conducted by Lee et al [13] to evaluate a cohort of patients with extensive cranial base FD, determined that observation with regular ophthalmologic examinations in patients with asymptomatic encasement was a reasonable treatment option and that optic nerve decompression was not warranted. Though there was statistically significant narrowing of the optic canal in patients with FD, this did not result in increased vision loss, and there was no correlation between the findings on CT and the neuro-ophthalmologic exam. These findings were confirmed by Cutler et al. in a study that included an analysis of the same group of subjects after longer follow-up together with an initial analysis of additional subjects [33]. A recent meta-analysis that included, in addition to the most recent analysis of the NIH SNHFD cohort, an analysis of all the published cases of optic nerve decompression surgery came to the same conclusions [73]. Based on these results, we recommend that FD in the skull base around vital structures, including the optic nerve, be managed according to the clinical examination and regular diagnostic imaging; observation is appropriate in asymptomatic patients [13,27,33,73,74]. Once it is determined that there is FD surrounding the optic nerve(s) and orbit, a comprehensive neuro-ophthalmologic examination should be done to establish a baseline. This should be followed by comprehensive annual exams. The exam should concentrate on assessing for optic neuropathy and include visual acuity, visual-field exam, contrast sensitivity, color vision, and a dilated fundus exam.
Additional examination should include pupillary examination for an afferent pupillary defect, extraocular movements, proptosis measurement with exophthalmometry, lid closure, hypertelorism, and tear duct and puncta exam. The diagnosis of optic neuropathy should be reserved for those with a visual field defect or if 2 of the 3 exams (contrast sensitivity, color vision, and fundus/disc exam) are abnormal. A new diagnostic modality, optical coherence tomography (OCT), uses high-resolution cross-sections of the optic nerve to determine the thickness of the retinal nerve fiber layer (RNFL) [75-79]. A thin RNFL correlates with visual field changes and evidence of optic neuropathy. This modality may be useful for examining patients who cannot undergo a visual field exam (such as children) or may predict visual recovery after surgery. In cases where the RNFL is thin prior to surgery, it is unlikely that surgery will improve vision, while a patient with a normal RNFL may have some improvement after surgical treatment (either decompression or proptosis correction). A representative case of the utility of the combination of OCT, clinical examination, and imaging is shown in Figure 10. The etiology of the visual changes and vision loss in patients with craniofacial FD remains unclear. However, patients with abnormal findings are more likely to have an associated endocrinopathy, most commonly growth hormone excess, which typically results in gradual loss of vision, if vision loss is observed. In the case of other lesions such as an aneurysmal bone cyst or mucocele, vision loss can be much more rapid.

Figure 9 Lee et al [13] demonstrated that statistically significant narrowing of the optic canal by FD did not result in vision loss. Thus, observation with regular ophthalmologic examinations in patients with asymptomatic encasement was a reasonable treatment option and optic nerve decompression was not warranted. Adapted from reference [13].
A study by Cutler et al [33] demonstrated that 12% of patients with relatively severe craniofacial PFD had evidence of optic neuropathy, and that patients with GH excess had a higher relative risk for complete encasement of the optic nerve (4.1-fold) and a higher relative risk for optic neuropathy (3.8-fold) compared to patients without GH excess. Preliminary findings by Glover et al demonstrated that patients with an early diagnosis and treatment of GH excess had no optic neuropathy (0 of 14 patients diagnosed and treated by age 18), while 4 of 7 patients diagnosed and treated for growth hormone excess after age 18 had optic neuropathy [80]. We strongly recommend that patients with craniofacial PFD be evaluated for growth hormone excess or MAS and that, if endocrinopathies are present, they be aggressively managed.

Figure 10 Photographs also demonstrated subtle temporal pallor of her left optic disc. There were no objective changes in visual acuity. She has been followed clinically with neuro-ophthalmologic examination approximately every three months to assess for any significant progression, which would be an indication for surgical intervention. The findings on the OCT study confirm the clinical impression of a left optic neuropathy and are particularly useful when visual fields are not obtainable or particularly reliable (usually due to age-related inability to perform the test), as well as an objective measure for longitudinal follow-up. The nerve fiber layer findings on OCT can also be used to predict what visual outcome one might expect after a successful decompression surgery. If one were to find a field defect on examination but the corresponding optic nerve retinal nerve fiber layer was preserved on OCT testing, it would be reasonable to expect full recovery of vision after surgery. However, if there were nerve fiber layer loss, recovery of vision would be unlikely, as the findings most likely represent axons that have died back.
Patients with acute visual change or vision loss should undergo a CT of the cranial base and immediate referral to a neurosurgeon or craniofacial surgeon and a neuro-ophthalmologist. Several case reports have noted the association with a new, expansile lesion near the optic nerve, typically an aneurysmal bone cyst; in such cases, high-dose glucocorticoids with immediate decompression and resection are indicated [15]. Unfortunately, the success of surgical treatment is unknown due to the limited number of cases of acute vision loss.

Auditory canal/temporal bone/cranial nerves

The temporal bone is frequently involved (>70%) in patients with craniofacial PFD or MAS [81], while temporal bone involvement is uncommon in monostotic disease [82,83]. In a recent analysis by DeKlotz et al., despite the high incidence of disease of the temporal bone in PFD, nearly 85% of patients had normal or near-normal hearing; 10% had conductive hearing loss due to PFD, approximately 4% had sensorineural or mixed hearing loss (both conductive and sensorineural), and the remainder had hearing loss due to other, non-PFD-related causes. In most cases, the degree of hearing loss was mild (77%) and did not correlate with the amount of disease involvement of the temporal bone. The common causes of hearing loss appeared to be narrowing of the external auditory canal due to the surrounding FD (Figure 11) and fixation of the ossicles within the epitympanum from adjacent involved bone (Figure 12). The narrowing of the external auditory canal may result in significant cerumen buildup. Therefore, it is recommended that regular otolaryngology exams be performed to maintain patency in patients in whom the external auditory canal is particularly narrowed. A rare but potentially concerning complication is the development of a cholesteatoma, an obstruction of the canal with cerumen and desquamated skin [83,84]. This complication typically requires surgical intervention to relieve the obstruction and chronic infection [82,85].
In the case of PFD or MAS, there is concern that contouring and excision of the surrounding FD may exacerbate regrowth of the lesion; however, only case reports have documented this possibility. We recommend a comprehensive audiology examination and ear evaluation once the temporal bone is found to be involved with FD. Annual hearing/audiology exams are recommended during active bone growth. For external auditory canal stenosis, regular exams under microscopy by the otolaryngologist are usually required. Surgery on the external auditory canal is recommended for complications such as cholesteatoma or near-total ear canal stenosis; however, it may be beneficial to wait until growth has slowed and the patient has progressed beyond puberty. Temporal bone involvement may also result in facial nerve weakness or paralysis, as CN VII exits the cranium through the petrous temporal bone. This finding is quite rare and is likely caused by compression of the cranial nerve within the Fallopian canal and/or the internal auditory canal [71,83,86,87]. Unfortunately, the location of the compression may be extremely difficult to access. In cases of sudden facial weakness, a high-resolution cranial base or temporal bone CT is indicated. If an expanding mass within the FD is noted, referral to a skull base surgeon is warranted for consideration of surgical decompression.

Nonsurgical and adjuvant management of craniofacial FD

While pain is common among FD patients [88], there are very few studies with a detailed assessment of the symptoms, and there is a need for more data relating pain to the location and activity of disease and the effectiveness of various treatment modalities. Kelly et al [11] examined 78 patients (35 children and 43 adults) and found that 67% complained of pain. It was not uncommon for the pain to be undertreated; some patients required NSAIDs with and without narcotic treatment, and others were treated with bisphosphonates.
Interestingly, the pain scores did not correlate with disease burden, and adults were more likely to have pain, and more severe pain, than children, suggesting an age-related increase in the prevalence of pain in FD. They also noted that, despite the high prevalence of craniofacial FD, less than 50% had pain in the craniofacial region, whereas at least 50% of patients with lower extremity disease, another high-prevalence site, complained of pain. In the same study, approximately 20% of the patients were managed with bisphosphonates, and nearly 75% reported pain relief or improvement with this class of drugs. The use of bisphosphonates such as alendronate, pamidronate, or zoledronic acid for craniofacial FD has been considered for pain reduction and to reduce the rate of growth of the lesion. In general, clinical studies of the efficacy of bisphosphonates for FD-related pain have demonstrated mixed results, with small sample sizes and with most studies examining all skeletal regions, not just the craniofacial sites. Plotkin et al [89] examined 18 children and adolescents with PFD or MAS who were started on IV pamidronate therapy. They found that pain seemed to decrease (not quantified) and that serum alkaline phosphatase and urinary N-telopeptides decreased. There were no serious side effects from bisphosphonate use; however, they noted no radiographic or histomorphometric change or improvement of the FD lesions. Matarazzo et al [90] reported on 13 patients with MAS who were treated with pamidronate for 2-6 years and found a decrease in long bone pain, lowered fracture rate and bone turnover markers, and an increase in bone density on DEXA scan. Chan et al [91] followed 3 children with MAS, aged 2.5-5 years at the start of pamidronate treatment, for 8-10.5 years.
They too noted a decrease in long bone pain and fracture rate; however, the long bone lesions continued to expand and grow, while the facial lesions did not expand, and there was no encroachment on the optic nerve throughout the follow-up. Chao et al [92] noted that oral alendronate over a 6-month course reduced intractable headaches and relieved the 3 patients from analgesic dependence. They reported no tumor progression; however, the 3 patients were adults and may not have shown progression even without bisphosphonate treatment. Further studies are necessary to determine the efficacy of osteoclast inhibitor therapies such as bisphosphonates or denosumab in slowing the growth of craniofacial FD and reducing intractable craniofacial FD pain. The variation in response between children and adults with FD and the safety of prolonged bisphosphonate use in children also require more investigation. New therapies are emerging that include RANK ligand inhibition (i.e. denosumab); however, at this time their role in the treatment of FD-related pain or reduction in growth remains to be determined [93].

Figure 11: Narrowing of the external auditory canal due to fibrous dysplasia (FD). (A) A CT image of a coronal slice through the temporal bone shows a narrowed external auditory canal (arrow), which has resulted in hearing loss. (B) The canal narrowed by FD, compared with a normal external auditory canal in (C).

Conclusion

We have provided the current understanding of the biologic and clinical characteristics of FD and recommendations for clinical management in the craniofacial region. Most importantly, each patient may present with variable symptoms and clinical findings; thus the care of these patients must be customized to their needs and sites of involvement.
Recommendations

1. Aggressively screen for and manage endocrinopathies (particularly growth hormone excess).
2. Active disease (rapid growth, new onset of pain or paresthesia, visual or hearing changes) warrants an immediate surgical referral and evaluation.
3. A bone biopsy should be obtained if there is any doubt about the diagnosis. If the lesion is in a site that cannot be biopsied due to unacceptable risks, history, clinical examination and radiographic diagnosis may be adequate for diagnosis.
4. Postpone surgical treatment of lesions until after skeletal maturity when the lesion is quiescent.
5. Surgical resection or contouring may be warranted prior to skeletal maturity if there are symptoms or rapid change in the lesion; however, patients must be aware of the risk of regrowth.
6. Potential use of adjuvant therapy such as bisphosphonates may be considered for refractory pain at the FD site.
7. Management of patients with FD, particularly PFD and MAS, requires a comprehensive evaluation and multidisciplinary involvement for optimal care.

Research questions

1. What are the mechanisms for changes in FD that occur as patients age?
2. What is the mechanism and effect of growth hormone excess on the growth rate and activity of FD?
3. What are potential targeted therapies and mechanisms that can be used to treat FD?
4. What biomarkers might be useful to predict biological behavior and growth of FD lesions?
5. What potential biomarkers or predictors of transformation and associated pathologies can be developed?
6. What combined therapies will prevent recurrence and regrowth (e.g. an operation with adjuvant bisphosphonates, interferon)?
7. What pharmacologic or molecular therapies may reverse the effects of the abnormal gene products in FD?
8. Does the detectability of a Gs mutation in a fibroosseous lesion predict clinical behavior?
9. Is mutation testing a necessary component of FD evaluation?
Moments of the superdiffusive elephant random walk with general step distribution

We consider the elephant random walk with general step distribution. We calculate the first four moments of the limiting distribution of the position rescaled by $n^\alpha$ in the superdiffusive regime where $\alpha$ is the memory parameter. This extends the results obtained by Bercu.

Introduction and results

The elephant random walk (ERW) is a one-dimensional discrete-time random walk with memory. With probability $\alpha$ the walker repeats one of its previous steps chosen uniformly at random, and with probability $1-\alpha$ the next step is sampled independently from the past, where $\alpha \in [0,1]$ is the memory parameter. The ERW was first introduced in 1993 by Drezner and Farnum [DF93] as a correlated Bernoulli process with Bernoulli step distribution and time-dependent memory parameter. For the case of a time-homogeneous memory parameter and Bernoulli step distribution, it was proved in [Hey04] that the behaviour shows a phase transition in the value of the memory parameter $\alpha$. In the diffusive regime ($\alpha < 1/2$) asymptotic normality is proved after diffusive scaling; in the critical regime ($\alpha = 1/2$) normality remains valid with a logarithmic correction in the scaling. In the superdiffusive regime ($\alpha > 1/2$), after scaling with $n^\alpha$, the limiting distribution is found to be non-degenerate. It was also stated without proof that the limiting distribution is different from the normal distribution. The proof uses the martingale which naturally appears in the problem. In the case of a general time-dependent memory parameter, sufficient conditions for the law of large numbers, central limit theorem and law of iterated logarithm were given in [JJQ08] using the martingale method. The same model with $+1$ and $-1$ jumps was first named the elephant random walk in [ST04], where the probability distribution of its position after $n$ steps was analysed.
The connection of the ERW with Pólya-type urns was exploited in [BB16] to prove process convergence of the ERW trajectory using known results on urns. The fact that the limiting distribution of the superdiffusive ERW is not Gaussian was first proved rigorously in [Ber17] by computing its first four moments using martingales. New hypergeometric identities are obtained in [BCR19] by computing these moments in two different ways. The number of zeros in the elephant random walk is analysed in [Ber22b]. The generalization where zero jumps are also allowed is called the delayed ERW, see [GS21,Ber22a]. In [Bus18] the steps of the ERW are sampled from the $\beta$-stable distribution with parameter $\beta \in (0,2]$ and the phase transition in the memory parameter is proved to happen at the value $\alpha = 1/\beta$ using the connection with random recursive trees. In the superdiffusive regime the fluctuations after subtracting the non-Gaussian limit are proved to be normal in [KT19]. In the present note we consider the ERW with general step distribution, which is defined as follows. Let $\alpha \in [0,1]$ be the memory parameter of the ERW. Let $\xi_1, \xi_2, \dots$ be an arbitrary i.i.d. sequence of random variables with certain moment conditions imposed later. We denote by $X_n$ the $n$th step of the random walk. We suppose that the random walk starts from the origin, i.e., $S_0 = 0$. The first step is $X_1 = \xi_1$. Every further step is defined as
$$X_{n+1} = \begin{cases} X_K & \text{with probability } \alpha, \\ \xi_{n+1} & \text{with probability } 1-\alpha, \end{cases} \qquad (1.1)$$
where the index $K$ has uniform distribution on the index set $\{1, 2, \dots, n\}$, that is, with probability $\alpha$ one of the previous steps is repeated and otherwise the step is an independent new sample from the step distribution. Note that the steps $X_1, X_2, \dots$ are not independent but the walk has a long memory. The position of the ERW is denoted by $S_n = \sum_{i=1}^n X_i$. Let
$$m_k = E(\xi_1^k) \qquad (1.2) \qquad \text{and} \qquad M_k = E\big((\xi_1 - m_1)^k\big) \qquad (1.3)$$
for $k = 1, 2, \dots$ denote the moments and centered moments of the step distribution.
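The step rule (1.1) can be simulated directly. Below is a minimal sketch, with a standard normal step distribution as an illustrative choice (any distribution with the required moments works); the function name and seed are arbitrary:

```python
import random

def elephant_walk(n, alpha, step_sampler, seed=0):
    """Simulate n steps of the elephant random walk with memory parameter alpha.

    With probability alpha the walker repeats a past step chosen uniformly
    at random; otherwise it draws a fresh i.i.d. step from step_sampler.
    """
    rng = random.Random(seed)
    steps = [step_sampler(rng)]  # X_1 = xi_1
    for _ in range(1, n):
        if rng.random() < alpha:
            steps.append(rng.choice(steps))   # X_{n+1} = X_K, K uniform on the past
        else:
            steps.append(step_sampler(rng))   # X_{n+1} = xi_{n+1}, a fresh sample
    return steps

# superdiffusive regime alpha > 1/2, Gaussian steps (illustrative choice)
walk = elephant_walk(10_000, alpha=0.75, step_sampler=lambda r: r.gauss(0.0, 1.0))
S_n = sum(walk)  # position of the walk after n steps
```

Repeating the simulation over many seeds gives an empirical picture of the $n^{-\alpha}$-rescaled position whose limit the paper studies.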
For the ERW with general step distribution, the same phase transition appears in the value of the memory parameter $\alpha$ as for the original model, since the martingale method used in the majority of the previous literature on the ERW extends naturally to our model, as we explain below. We believe that the proof of the law of large numbers and central limit theorem in the diffusive and critical regimes survives for the case of general step distribution after appropriate modifications, with the same Gaussian limits. Hence we focus on the most exciting superdiffusive regime in the present note, where the limiting distribution is different from Gaussian. Our main results for the superdiffusive ERW with general step distribution are the following.

Theorem 1.1.
1. Let $(S_n)$ denote the elephant random walk with memory parameter $\alpha$. Assume that $\alpha \in (1/2, 1]$, that is, we consider the superdiffusive regime. Suppose that the step distribution has finite variance, that is, $m_2 < \infty$. Then almost surely
$$\frac{S_n - m_1 n}{n^\alpha} \longrightarrow Q \qquad (1.4)$$
with some non-degenerate random variable $Q$.
2. Let $p$ be a positive even integer. Assume that the $p$th absolute moment of the step distribution is finite, that is, $m_p < \infty$. Then the above convergence is also true in $L^p$, which means that
$$\lim_{n\to\infty} E\left( \left| \frac{S_n - m_1 n}{n^\alpha} - Q \right|^p \right) = 0. \qquad (1.5)$$

Theorem 1.2. Assume that the step distribution of the elephant random walk $(S_n)$ has finite fourth moment, that is, $m_4 < \infty$. Then the first four moments of the random variable $Q$ which arises as the limit in (1.4)-(1.5) are given by (1.6)-(1.9).

Theorem 1.1 follows from the application of the martingale method to the case of general step distribution. The almost sure convergence in (1.4) was already proved in Theorem 1.1 of [Ber21a]. The $L^p$ convergence was established for $p = 2$ in Theorem 3.2 of [Ber21c] and for general $p$ in Theorem 2.2 of [BCR19] for the standard elephant random walk. See also [Ber21b] for other generalizations of these convergence results.
We provide a simple and elementary proof of the almost sure and $L^p$ convergence results of Theorem 1.1 in Section 2, which relies on proving the $L^p$ boundedness of the natural martingale when the step distribution has a finite $p$th absolute moment. In particular, we prove the $L^p$ boundedness of a sequence of martingale differences in Lemma 2.1. Theorem 1.2 is proved in Section 3 by solving the recursions for the mixed moments of the centered ERW. The moments in (1.6)-(1.9) generalize the formulas found in [Ber17] in the case of a symmetric first step. We mention that higher moments of $Q$ could in principle be determined using the method presented here, but the recursions are much more complicated beyond the fourth moment.

Martingale method and convergence

We assume that the first two moments of the step distribution are finite. Let
$$\widehat{S}_n = S_n - m_1 n \qquad (2.1)$$
denote the centered ERW. Then by the definition (1.1) we have for any $n = 1, 2, \dots$ that
$$E(X_{n+1} \mid \mathcal{F}_n) = \alpha \frac{S_n}{n} + (1-\alpha) m_1, \qquad (2.2)$$
where $\mathcal{F}_n = \sigma(X_1, \dots, X_n)$ is the natural filtration. As a consequence,
$$E(\widehat{S}_{n+1} \mid \mathcal{F}_n) = \left(1 + \frac{\alpha}{n}\right) \widehat{S}_n \qquad (2.3)$$
holds and the process
$$Q_n = a_n \widehat{S}_n \qquad (2.4)$$
is a martingale with respect to $\mathcal{F}_n$, where the sequence $(a_n)$ is given by
$$a_n = \frac{1}{\Gamma(1+\alpha)} \prod_{k=1}^{n-1} \frac{k}{k+\alpha} = \frac{\Gamma(n)}{\Gamma(n+\alpha)} \sim n^{-\alpha} \qquad (2.5)$$
as $n \to \infty$, with the empty product understood to be equal to $1$ in the definition of $a_1 = \Gamma(1 + \alpha)^{-1}$. We mention that our definition (2.5) of $a_n$ compared to the literature is simplified by a factor $\Gamma(1 + \alpha)$, see e.g. [Ber17]. The martingale $(Q_n)$ can be written as
$$Q_n = \sum_{k=1}^n a_k \varepsilon_k, \qquad (2.6)$$
where $\varepsilon_1 = X_1 - m_1$ and for all $k = 2, 3, \dots$,
$$\varepsilon_k = X_k - E(X_k \mid \mathcal{F}_{k-1}). \qquad (2.7)$$

Lemma 2.1. Let $p$ be a positive integer and assume that the $p$th absolute moment of the step distribution is finite. Then the martingale differences $(\varepsilon_n)$ are bounded in $L^p$ and
$$E(|\varepsilon_n|^p) \le 2^p \, E(|\xi_1|^p). \qquad (2.8)$$

Proof of Lemma 2.1. We first use induction to see that $E(|X_n|^p) = E(|\xi_1|^p)$. The statement is clear for $n = 1$, and for $n = 2, 3, \dots$ one can write by the law of total expectation that
$$E(|X_{n+1}|^p) = \frac{\alpha}{n} \sum_{k=1}^n E(|X_k|^p) + (1-\alpha) E(|\xi_{n+1}|^p),$$
which is equal to $E(|\xi_1|^p)$ by the induction hypothesis.
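Assuming the normalization $a_n = \Gamma(n)/\Gamma(n+\alpha)$ (my reconstruction of (2.5) from the stated value $a_1 = \Gamma(1+\alpha)^{-1}$), the key cancellation $a_{n+1}(1 + \alpha/n) = a_n$, which together with (2.3) makes $Q_n$ a martingale, and the decay $a_n \sim n^{-\alpha}$ can both be checked numerically:

```python
from math import exp, lgamma, gamma, isclose

alpha = 0.75  # any memory parameter in (1/2, 1] works here

def a(n):
    """a_n = Gamma(n) / Gamma(n + alpha), computed via log-gamma to avoid overflow."""
    return exp(lgamma(n) - lgamma(n + alpha))

# a_1 = Gamma(1 + alpha)^{-1}, as stated in the text
assert isclose(a(1), 1.0 / gamma(1.0 + alpha))

# the cancellation a_{n+1} * (1 + alpha/n) = a_n behind E(Q_{n+1} | F_n) = Q_n
for n in range(1, 1000):
    assert isclose(a(n + 1) * (1.0 + alpha / n), a(n))

# polynomial decay: a_n * n^alpha -> 1 as n -> infinity
assert abs(a(10_000) * 10_000**alpha - 1.0) < 1e-3
```

The log-gamma formulation is just a numerical convenience; the ratio of gamma functions itself overflows double precision for moderate $n$.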
On the other hand, Jensen's inequality implies that |E(X n |F n−1 )| p ≤ E(|X n | p |F n−1 ), which after taking expectation yields that E(|E(X n |F n−1 )| p ) ≤ E(|ξ 1 | p ). Then by applying the Minkowski inequality for ε n = X n − E(X n |F n−1 ) from (2.7), we have that which proves (2.8). Proof of Theorem 1.1. 1. It is clear from the representation (2.6) and from Lemma 2.1 that the expectation of the predictable quadratic variation process can be bounded as which remains finite in n exactly in the superdiffusive regime α ∈ (1/2, 1]. As a consequence, the increasing limit lim n→∞ Q n is an almost surely finite random variable and the martingale (Q n ) converges almost surely to its limit Q = ∞ k=1 a k ε k . 2. The conditional expectation of the pth power of Q n+1 using Q n+1 = Q n + a n+1 ε n+1 from (2.6) can be written as (2.12) Note that the k = 1 term above vanishes since E(ε n+1 |F n ) = 0. The absolute value of the expectation of the random variable which appears in the k = 2, . . . , p terms on the right-hand side of (2.12) can be upper bounded as where we used Hölder's inequality in the second inequality above and Jensen's inequality for conditional expectations in the last one. By taking expectation in (2.12) we get that (2.14) where we used (2.13) and the fact that a n+1 ∈ (0, 1] in the first inequality above and the upper bounds (E(|ε n+1 | p )) k/p ≤ 1 + E(|ε n+1 | p ) and (E(Q p n )) (p−k)/p ≤ 1 + E(Q p n ) in the second one. Note also that since p is even, we have E(Q p n ) = E(|Q n | p ). By Lemma 2.1, we have E(|ε n+1 | p ) ≤ 2 p E(|ξ 1 | p ) where the upper bound does not depend on n. By Lemma 2.2 below with β = 2α and c = 2 p (1 + 2 p E(|ξ 1 | p )), the expectations E(Q p n ) remain bounded in n, that is, the martingale (Q n ) is bounded in L p , hence it converges to its limit Q also in L p . holds for all n = 1, 2, . . . . The upper bound on the right-hand side of (2.16) is increasing in n and its n → ∞ limit is finite since β > 1. 
Limiting moments We give the proof of Theorem 1.2 in this section. For this we introduce We define the mixed moments Note that the moments M k given in (1.3) can be expressed in terms of the moments m k as The idea to compute the moments of the limit Q in Theorem 1.2 is to use the convergence in L p from Theorem 1.1 with p = 4 and to write down and solve recursions for the mixed moments of the elephant random walk, see Propositions 3.1 and 3.2 below. Proposition 3.1. The mixed moments of S n , T n and U n satisfy the following recursions: n. Proof of Proposition 3.1. We start by writing We use these formulas on the left-hand side of the recursions (3.9)-(3.15) and we expand the products under the expectation. Then we get the sum of several expectations involving products with combinations of S n , T n , U n multiplied by powers of X n+1 . The expectation of such a product is computed by taking the conditional expectation of the factor involving X n+1 with respect to F n first and then by taking expectation, e.g. (3.26) There are two types of terms in the resulting expressions: mixed terms including powers of X n+1 multiplied by an expression of S n , T n or U n under the expectation (k = 1, 2, 3, 4 terms in (3.26)) and pure terms being the expectation of a polynomial of X n+1 only (k = 0 term in (3.26)). In order to compute the expectation appearing in mixed terms, we use the conditional expectation of powers of X n+1 given in (3.27)-(3.32) below. For the pure terms, the computation of the conditional expectation of the appropriate polynomial is not needed, the expectations given in (3.36) are enough to get the recursions (3.9)-(3.15) for the expectations. Further using the definitions (3.8), (3.4), (3.3) and (3.5), we can see by induction on n the equality of expectations (3.36) For the expectation of the recentered sums holds. Then we are ready to verify the recursions (3.9)-(3.15). 
We rewrite the (n + 1)st terms on the left-hand side by For the proof of Proposition 3.2 about the solutions of recursions in Proposition 3.1 one uses the following two lemmas. The first one provides the general solution of recursions which the moments of the elephant random walk satisfy; the second one contains two useful identities about sums of gamma ratios. Lemma 3.4. Let $a$ and $b$ be two arbitrary non-negative real numbers such that $b \ne a + 1$. Then for all $n = 1, 2, \dots$, the following identities hold n j=1 Γ(j + a) where we used the solutions (3.17) and (3.18) in the second equality above and the identity $M_{1,2} - 2m_1 M_2 = M_3$ in the last one. With this value of $c_n$, the summation on the right-hand side of (3.39) is − Γ(n + 1) Γ(n + 3α) (3.44) with the use of (3.41) from Lemma 3.4 in the last equality. Substituting it into the right-hand side of (3.39) one arrives at (3.19) after the simplification of the leading term. The proof of (3.22) is similar. We have $\beta = 3\alpha$, $b_1 = M_{1,1,2}$ and can be given as follows where the asymptotic equality above follows since (3.48) holds as $n \to \infty$ and from three other similar asymptotic equalities corresponding to the summations in further terms of (3.47). These asymptotics can be seen from Lemma 3.4 by neglecting the terms vanishing in the $n \to \infty$ limit. By substituting (3.47) into (3.39) we see that as $n \to \infty$, $E(\widehat{S}_n^4)$ is asymptotically equal to a constant times $\Gamma(n + 4\alpha)/\Gamma(n) \sim n^{4\alpha}$. The value of the constant is obtained by adding $b_1/\Gamma(4\alpha + 1) = M_4/\Gamma(4\alpha + 1)$ to the expression in (3.47). This verifies that vanishing terms in (3.47) can be disregarded. Straightforward simplification of the sum of $M_4/\Gamma(4\alpha + 1)$ and the right-hand side of (3.47) yields the coefficient of $\Gamma(n + 4\alpha)/\Gamma(n)$ on the right-hand side of (3.23), which completes the proof.
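The displays of Lemma 3.4 did not survive extraction, but the closed form for such gamma-ratio sums is classical: for $b \ne a + 1$, telescoping $g(j) = \Gamma(j+a)/\Gamma(j+b-1)$ gives $\sum_{j=1}^n \Gamma(j+a)/\Gamma(j+b) = \big[\Gamma(n+1+a)/\Gamma(n+b) - \Gamma(1+a)/\Gamma(b)\big]/(a+1-b)$. A numerical sanity check of this reconstruction (which may differ cosmetically from the paper's exact statement):

```python
from math import gamma, isclose

def gamma_ratio_sum(n, a, b):
    """Left-hand side: sum_{j=1}^n Gamma(j+a)/Gamma(j+b)."""
    return sum(gamma(j + a) / gamma(j + b) for j in range(1, n + 1))

def gamma_ratio_closed_form(n, a, b):
    """Closed form obtained by telescoping; requires b != a + 1."""
    return (gamma(n + 1 + a) / gamma(n + b) - gamma(1 + a) / gamma(b)) / (a + 1 - b)

# agreement over several n and parameter pairs (all with b != a + 1)
for n in (1, 5, 40):
    for a_, b_ in ((1.5, 0.75), (2.0, 3.25), (0.3, 1.9)):
        assert isclose(gamma_ratio_sum(n, a_, b_),
                       gamma_ratio_closed_form(n, a_, b_), rel_tol=1e-9)
```

The excluded case $b = a + 1$ is exactly where the denominator $a + 1 - b$ vanishes (the sum then telescopes to a digamma difference instead).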
Conditioning Regimens for Hematopoietic Cell Transplantation in Primary Immunodeficiency

Purpose of Review
Hematopoietic cell transplantation (HCT) is an established curative treatment for children with primary immunodeficiencies. This article reviews the latest developments in conditioning regimens for primary immunodeficiency (PID). It focuses on data regarding transplant outcomes according to newer reduced toxicity conditioning regimens used in HCT for PID.

Recent Findings
Conventional myeloablative conditioning regimens are associated with significant acute toxicities, transplant-related mortality, and late effects such as infertility. Reduced toxicity conditioning regimens have had significant positive impacts on HCT outcome, and there are now well-established strategies in children with PID. Treosulfan has emerged as a promising preparative agent. Use of a peripheral stem cell source has been shown to be associated with better donor chimerism in patients receiving reduced toxicity conditioning. Minimal conditioning regimens using monoclonal antibodies are in clinical trials with promising results thus far.

Summary
Reduced toxicity conditioning has emerged as the standard of care for PID and has resulted in improved transplant survival for patients with significant comorbidities.

Introduction
Primary immunodeficiency (PID) comprises a large, heterogeneous group of disorders that result from defects in immune system development and/or function. Long considered rare diseases, recent studies show that one in 2000-5000 children younger than 18 years is thought to have a PID. There are now around 350 recognized single-gene inborn errors of immunity, and the underlying phenotypes are as diverse as infection, malignancy, allergy, autoimmunity, and autoinflammation. Therefore, presenting features, severity, and age of diagnosis vary immensely. Hematopoietic cell transplantation (HCT) is a well-recognized curative therapy for many of these PIDs.
Since the first transplant took place in 1968, the utility of HCT was initially limited by high rates of graft failure and transplant-related morbidity and mortality; however, transplant survival and graft outcomes have significantly improved, particularly since 2000 [1,2]. Many factors have contributed to this improvement, including earlier diagnosis, a detailed graft selection hierarchy, superior HLA matching technology, improved methods for graft manipulation, greater availability of grafts, improved supportive care, vigilant infection surveillance and pre-emptive treatment, and more effective antimicrobial therapy. In the modern era, graft engineering, additional cellular therapy, and pharmacokinetic-guided conditioning regimens enable precise personalized transplant care, including prescription of graft components, better cell-dosed grafts, and a patient-tailored conditioning regimen [3, 4•, 5••]. Short-term transplant survival outcomes must be carefully distinguished from long-term disease outcomes and late effects of transplant. As survival from transplant has improved, more attention is now given to long-term disease outcomes and quality of life. Therefore, the goal of conditioning is to give the least toxic regimen with minimal short- and long-term side effects that still achieves cure of the underlying condition. This review will focus on newer conditioning regimens, how they have changed, and possible future directions. It is important to note that success does not simply depend on which conditioning chemotherapeutic agents are employed but on a combination of factors such as additional serotherapy, timing and dosage, and stem cell source. In almost all cases, preparative conditioning with a combination of chemotherapeutic agents, with or without monoclonal antibodies, is required for successful engraftment and stable, robust long-term immune reconstitution.
Definition

The intensity of the conditioning regimen can vary substantially and has been classified, in decreasing order of intensity, as myeloablative conditioning (MAC), reduced toxicity conditioning (RTC), reduced intensity conditioning (RIC), and minimal intensity conditioning (MIC) (Fig. 1). MAC, consisting of alkylating agents with or without total body irradiation (TBI), is expected to myeloablate the recipient's hematopoiesis, which does not allow for autologous hematological recovery. This aims to prevent rejection by the use of supralethal chemotherapy to remove host-versus-graft reaction and create marrow niche space for donor stem cells. Newer myeloablative chemotherapy agents are being explored to reduce toxicity and enable safer HCT. These reduced toxicity conditioning (RTC) regimens, including pharmacokinetic-targeted busulfan-fludarabine (Bu-Flu) and treosulfan-fludarabine, have a myeloablative effect comparable with conventional MAC but reduced organ toxicities. Compared to MAC, RIC has been traditionally characterized by reversible myelosuppression in the absence of stem cell rescue, reduced regimen-related toxicity, and a higher incidence of mixed chimerism. MIC is strictly non-myeloablative, does not eradicate host hematopoiesis, and allows relatively rapid autologous hematopoietic recovery without a transplant, but adequately myelosuppresses the recipient to enable at least partial donor engraftment.

Myeloablative Conditioning Regimens in PID

Historically, conditioning therapy prior to HCT in PID was based on the combination of the alkylators busulfan and cyclophosphamide. However, many children with PID have significant comorbidities at the time of HCT, and these conventional myeloablative preparative regimens are associated with significant toxicity and a relatively high incidence of transplant mortality, as well as long-term sequelae.
While initial results may have been acceptable, appreciation of acute conditioning toxicities and recognition of long-term sequelae mean that few centers now approach transplantation of PID patients with conventional myeloablative preparative regimens (Table 1) [6][7][8][9].

RTC Regimens in PID

The use of reduced toxicity conditioning regimens is now generally preferred for patients with PID, as there is no malignant disease to eradicate, stable mixed chimerism achieves cure for many diseases, and many patients enter HCT with chronic infections and end-organ comorbidities. Additionally, many patients are infants at the time of transplant and may be more susceptible to toxicity [10]. Less toxic regimens may reduce early and late adverse effects, particularly infertility [4•]. There are several reduced toxicity regimens that have been utilized by investigators in PID (Table 2) [14•, 49, 50].

Fludarabine and Treosulfan

Treosulfan (L-treitol-1,4-bis-methanesulfonate) is a prodrug and a water-soluble bifunctional alkylating agent which has been used for many years as treatment for various neoplasms, but more recently as part of conditioning for HSCT. In addition to myeloablative properties, it has marked immunosuppressive properties which contribute to the achievement of stable engraftment posttransplant. It causes relatively low organ toxicity compared to high-dose busulfan and cyclophosphamide, leading to fewer complications such as veno-occlusive disease of the liver. The first successful allogeneic transplant in a child using treosulfan was performed in 2000, and since then many reports have confirmed its efficacy and safety in both malignant and non-malignant disorders [11••, 12•, 13, 14•, 15-18]. Slatter et al.
first published results of 70 children with PID who received treosulfan in combination with either cyclophosphamide (n = 30) or fludarabine (n = 40), with an overall survival of 81% (median follow-up 19 months), equivalent in those aged less than or greater than 1 year at the time of transplant [13]. Toxicity was low but worse after cyclophosphamide, and T cell chimerism was significantly better after fludarabine [18]. Slatter et al. more recently reported 160 patients who had received conditioning with treosulfan and fludarabine, achieving a probability of 2-year survival of 87.1% with a high level of complete or stable mixed chimerism in the diseased cell lineage, sufficient to cure disease [11••]. There was a high survival rate in children transplanted under 1 year of age, in whom toxicity can be a problem with conventional and other reduced intensity conditioning regimens [24,25]. A 100-day survival of 94% demonstrated the low toxicity of this regimen, making it suitable for patients with PID who often have infection and organ damage prior to HCT. In this series, a higher level of myeloid chimerism was found in recipients of PBSC compared to CB and BM, without an increased risk of grade III/IV acute or chronic graft-versus-host disease (GvHD). This highlights the importance of the whole transplant package, including stem cell source and serotherapy, when tailoring therapy [26]. Excellent results were reported by Lehmberg et al. in 19 patients with hemophagocytic lymphohistiocytosis (HLH) following HCT with treosulfan, fludarabine, and alemtuzumab, with or without thiotepa, all of whom survived with a median follow-up of 16 months [16]. Haskologlu et al. reported 15 patients with PID who had a high risk of developing transplant-related toxicity due to previous lung and liver damage and were given treosulfan-based conditioning [27].
At 32 months follow-up, the overall survival was 86.7%, with excellent chimerism and low conditioning-associated morbidity despite the high-risk population. Mixed chimerism is sufficient to achieve cure in some non-malignant disorders, but the specific diagnosis and level of chimerism needed to achieve cure must be taken into account when balancing the need for increased myeloablation against short- and long-term toxicities from the conditioning regimen. The addition of thiotepa is common in order to increase the intensity of the regimen, but there are few reports comparing outcomes of treosulfan and fludarabine with or without additional thiotepa. Dinur-Schejter et al. reported 44 patients with non-malignant diseases: 19 received treosulfan with fludarabine, 66.7% of whom achieved complete engraftment, compared to 94.7% of 20 patients who received additional thiotepa, but this did not translate into any significant difference in overall or event-free survival [15].

Fludarabine and Busulfan

Traditionally, busulfan (Bu) was used in combination with cyclophosphamide (Cy) as the standard myeloablative conditioning regimen for HCT for both malignant and non-malignant disorders in both adult and pediatric patients. Cyclophosphamide is increasingly being substituted with fludarabine (Flu), a nucleoside analogue with immunosuppressive properties, to provide a less toxic but equally effective regimen [19,21,28]. Harris et al. compared 1400 children who received Bu-Cy to 381 who received Bu-Flu. Busulfan doses were comparable between the 2 groups, and the majority had pharmacokinetic monitoring. Eight hundred and three had non-malignant disorders, including 195 with PID who received Bu-Cy and 86 who received Bu-Flu. Nine hundred and seventy-eight had malignant disorders. Children receiving Bu-Flu for non-malignant conditions experienced less toxicity than those receiving Bu-Cy, but survival was comparable.
Children with malignancy had shorter post-relapse survival with Bu-Flu than Bu-Cy, although transplant-related mortality and relapse were similar [29]. The pharmacokinetics of busulfan have been studied extensively, and the use of a lower target area under the curve (45-65 mg/L × h) combined with fludarabine has been pioneered by Tayfun Güngör and colleagues in Zurich. Particularly impressive results have been seen using this regimen for patients with chronic granulomatous disease (CGD). Fifty-six children and young adults with CGD were reported, many of whom had high-risk features such as intractable infections and autoinflammation. Twenty-one HLA- alone or in combination with fludarabine or thiotepa in 10 patients with severe combined immunodeficiency. All the patients survived, one patient required a second HCT, and 3 had no B cell reconstitution [19].

Fludarabine and Melphalan

Increasing recognition of the significant toxicities associated with conventional doses of busulfan and cyclophosphamide, particularly in very young infants and especially in those with pre-existing end organ damage, led to the adoption of immunosuppressive-based, rather than myeloablative-based, regimens with fludarabine and melphalan. The results, principally in those with significant preexisting comorbidities, were striking, with significantly improved early survival [22,23,30,31,49]. However, donor chimerism was not always optimal, and there was a high incidence of late viral reactivation and late-onset acute GvHD. Furthermore, toxicities in infants < 1 year of age remained significant [25]. Melphalan in particular has been associated with cardiac toxicities [32]. Good results have been reported for patients with hemophagocytic lymphohistiocytosis [33]. Patients with X-linked inhibitor of apoptosis protein (XIAP) deficiency, which is difficult to transplant, also have good outcomes reported using fludarabine and melphalan-based regimens [34].
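Pharmacokinetic targeting of busulfan exposure works, in outline, by measuring drug levels after a dose, estimating the area under the concentration-time curve (AUC), and scaling subsequent doses toward the target window (here the cumulative 45-65 mg/L × h exposure mentioned above, divided over four doses). The sketch below is purely illustrative: the function names, sampling times, and concentrations are hypothetical, linear kinetics are assumed, and this is not a dosing protocol.

```python
def trapezoid_auc(times_h, conc_mg_per_l):
    """Estimate AUC (mg/L x h) from concentration-time points by the trapezoidal rule."""
    pts = list(zip(times_h, conc_mg_per_l))
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for (t1, c1), (t2, c2) in zip(pts, pts[1:]))

def adjust_dose(current_dose_mg, measured_auc, target_auc):
    """Scale the next dose proportionally toward the target exposure (linear PK assumption)."""
    return current_dose_mg * target_auc / measured_auc

# Hypothetical numbers: target cumulative AUC 55 mg/L x h split over 4 doses
target_per_dose = 55.0 / 4                  # 13.75 mg/L x h per dose
times = [0, 1, 2, 4, 6]                     # hours after start of infusion (illustrative)
levels = [0.0, 3.8, 2.9, 1.6, 0.9]          # mg/L, hypothetical assay results
measured = trapezoid_auc(times, levels)     # observed exposure for this dose
next_dose = adjust_dose(40.0, measured, target_per_dose)
```

In practice, centers use validated PK software and per-protocol sampling schedules; the point here is only the proportional-adjustment arithmetic behind "pharmacokinetic-targeted" dosing.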
Fludarabine and melphalan have also been used in adults with PID with good transplant survival [23]. While the approach remains attractive in terms of reduced toxicities, concerns remain regarding late graft failure and high mortality in infants aged < 12 months.

Minimal Intensity Conditioning for PID

Fludarabine and Low-Dose TBI

Burroughs et al. from the Seattle group reported the transplant outcome of using fludarabine and low-dose TBI in 14 PID patients with significant pre-existing organ dysfunction and infections. All received post-transplant GvHD prophylaxis with cyclosporin and mycophenolate mofetil but no serotherapy. Overall survival at 3 years was 62%, but there were high rates of acute (79%) and extensive chronic GvHD (47%) [35]. One patient had graft failure, and an additional three patients required a second procedure for decreasing chimerism. Of 10 evaluable patients, 8 had correction of their immune deficiency with stable chimerism. However, the high rate of GvHD has limited the broader use of this conditioning regimen in children with PID [35,36].

Antibody-Based

While conditioning regimens have undoubtedly become less toxic, the ability to achieve donor chimerism without the use of chemotherapeutic agents, particularly in patients with non-malignant disease, is extremely attractive. Furthermore, in some primary immunodeficiencies the nature of the molecular defect causes significant toxicities from the administration of alkylating agents, leading to serious long-term effects or early mortality [37][38][39]. A number of different strategies have been employed to minimize exposure to chemotherapeutic agents by using antibodies to aid stem cell engraftment, with or without adjunct chemotherapy.

Anti-CD45 Antibodies

CD45 is selectively expressed on all leucocytes and hematopoietic progenitors but is absent from non-hematopoietic tissues.
Straathof and colleagues studied 16 patients with PID who were less than 1 year of age or had significant pre-existing comorbidities and were felt unsuitable for conventional reduced-intensity conditioning [24]. The conditioning regimen comprised alemtuzumab (0.2 mg/kg daily for 3 days for unrelated donors, or 0.1 mg/kg daily for 3 days for matched sibling donors, on day −8 to day −6), clinical-grade rat anti-CD45 antibodies (YTH24.5 and YTH54.12; 0.4 mg/kg on day −5 to day −2), fludarabine (30 mg/m2 daily for 5 days, day −8 to day −4), and cyclophosphamide (300 mg/m2 daily for 4 days, day −7 to day −4). Twelve patients were alive and well at the end of the study; one failed to engraft and was successfully re-transplanted, and three died, none of conditioning toxicity. Donor chimerism was variable but high-level and sufficient to cure disease in the survivors.

Radioimmunotherapy

Radioimmunotherapy (RIT) is an attractive concept for conditioning of patients with PIDs, as it exploits the physical cytotoxic effect of radiation while reducing toxicity to other organ systems through its internal application and the conjugation of radioisotopes to specific antibodies [40]. Radioisotopes emitting α-, β-, or γ-radiation of calculated intensity can be brought into direct proximity to the cells of interest. This enables malignant cells to be eradicated or benign hematopoietic cells to be depleted as part of conditioning before autologous or allogeneic HSCT. The method was developed to allow better and more specific control of malignant cells in the setting of HSCT without an increase in non-relapse mortality. Considerable clinical data were accumulated with conjugates of 90Y (yttrium-90) or 131I (iodine-131) to anti-CD20 antibodies in the treatment of patients with refractory or recurrent B cell non-Hodgkin lymphoma (B-NHL). The drugs were used in combination with chemotherapy to prepare patients for autologous and allogeneic stem cell transplantation.
This experience resulted in the approval of two drugs (Zevalin® and Bexxar®) by the FDA at the beginning of the century [40]. To date, the use of RIT for the treatment of leukemias or for myeloablation in non-malignant disease has been limited to clinical studies. A conjugate of 131I to an anti-CD45 antibody was explored in the treatment of patients with AML and high-risk MDS; again, a combination of RIT with conventional myeloablative or immunosuppressive drugs was used for conditioning before allogeneic HSCT [41,42]. CD45 is expressed on most AML and ALL blasts as well as on virtually all developing and mature cells of normal hematopoiesis. Radiolabeled anti-CD45 antibody doses of up to 43 Gy to the bone marrow were administered in combination with RIC and allogeneic transplantation, with good tolerance and without additional toxicity, in younger adult patients with AML and MDS [43]. For children, limited published data exist on the use of RIT for pre-transplant conditioning. A conjugate of 90Y to an antibody targeting CD66 was used in combination with melphalan and fludarabine or TBI for the treatment of children with considerable comorbidities and with malignant and non-malignant disease. 90Y emits pure β-radiation with a maximum range of 11 mm and a half-life of 2.7 days [44]. With these qualities, no isolation of the pediatric patients was necessary, but dosimetry had to be performed with another isotope emitting γ-radiation, detectable by a γ-camera. CD66 is abundantly present on mature myeloid cells but usually not expressed on malignant blasts. The therapeutic principle of RIT with this antibody in malignant disease therefore relies on the so-called cross-fire effect, which describes the indirect depletion of blasts through binding of the antibody to cells in close proximity [40]. In order to avoid graft rejection in unrelated or mismatched grafts, recipients received serotherapy with ATG in this setting.
Fifteen of 16 children with non-malignant disease survived the procedure, 13/15 with complete donor chimerism. The Kaplan-Meier estimate of disease-free survival at 24 months was 94%. This clearly documented the feasibility of, and reliable myeloablation by, RIT in children and young adults with non-malignant disease.

Anti-CD117 Antibodies

The molecule CD117 (c-Kit receptor) is expressed on hematopoietic stem cells at all stages of development. Interactions with the ligand of CD117, stem cell factor, are crucial for hematopoietic stem cell survival, and this signaling pathway plays a critical role in the homing, adhesion, maintenance, and survival of hematopoietic stem cells in the hematopoietic niche. Preclinical studies demonstrated that using an antibody against CD117 to impede CD117-stem cell factor signaling selectively depleted hematopoietic stem cells, with no effect on differentiated progenitor or mature cell lineages, and enabled engraftment of donor cells [45]. A clinical trial is currently in progress using an anti-CD117 antibody alone to treat patients with primary immunodeficiencies (AMG 191 Conditioning/CD34+CD90 Stem Cell Transplant Study for SCID Patients, ClinicalTrials.gov Identifier: NCT02963064). The early results of this dose-finding study show that some donor stem cell chimerism, leading to donor T and B lymphocyte chimerism, can be achieved [46]. These preliminary data are extremely exciting and potentially lead the way to a step change in approaches to conditioning in patients with PIDs.

Conditioning for Haploidentical Donor Transplant

As the outcomes of HCT using newer T cell depletion methods have improved, an increasing number of haploidentical transplants are performed for both SCID and non-SCID PID. Various non-myeloablative conditioning regimens have been used in T-deplete and T-replete haploidentical transplants (Table 3) [5••, 47, 48, 51].
The Great North Children's Hospital (GNCH) group in Newcastle has used fludarabine, treosulfan, ATG (Grafalon), and rituximab for patients who received TCRαβ/CD19-depleted peripheral blood stem cells. Patients with non-SCID PID received additional thiotepa.

Pharmacokinetic Studies

Although busulfan levels have been measured for many years, to target the narrow myeloablative therapeutic window, minimize toxicity from supra-therapeutic levels, and avoid sub-myeloablation and rejection, it is only recently that the importance of pharmacokinetic monitoring of the other agents in the conditioning cocktail has been appreciated.

Fludarabine Pharmacokinetics

Ivaturi et al. prospectively studied the pharmacokinetics and pharmacodynamics of fludarabine in 133 children undergoing HCT for a variety of disorders with a variety of conditioning regimens, all of which included fludarabine. Young age and renal impairment were found to lead to increased exposure. In the setting of malignancy, disease-free survival (DFS) at 1 year after HCT was highest in subjects achieving a systemic fludarabine plasma (f-ara-a) cumulative area under the curve (cAUC) greater than 15 mg·h/L compared with patients with a cAUC less than 15 mg·h/L (82.6% versus 52.8%, p = 0.04) [52]. Further development of model-based dosing may minimize toxicity and maximize efficacy, resulting in superior outcomes for patients with malignant and non-malignant disease.

Treosulfan Pharmacokinetics

The relatively high variability of treosulfan pharmacokinetics in pediatric patients may raise the need for implementing therapeutic drug monitoring and individual dose adjustment in this group. Van der Stoep et al. and Mohanan et al. recently published the first results on the relationship between treosulfan exposure and early toxicity, as well as clinical outcome, in children undergoing conditioning prior to HSCT.
In the former study, patients with an AUC > 1650 mg·h/L demonstrated a statistically higher incidence of mucosal and skin toxicity than those with an AUC < 1350 mg·h/L (odds ratios 4.4 and 4.5, respectively). The odds of developing hepato- and neurotoxicity were also higher in the former group, but the difference did not reach statistical significance. No association was found between treosulfan exposure and early clinical outcomes, i.e., engraftment, donor chimerism, acute graft-versus-host disease, treatment-related mortality, and overall survival. PK parameters were shown to be age-dependent, with higher AUC values in younger children (< 1 year old) and correspondingly lower treosulfan clearance. A challenge in therapeutic monitoring of treosulfan within conditioning prior to HCT is the very brief course of treatment, consisting of three doses administered on 3 consecutive days. This allows personalization of only the second and third doses of the prodrug, unless a test dose is applied before starting the actual regimen. Since pharmacokinetic studies of treosulfan began, it has been assumed that plasma (serum) concentrations of the prodrug are a good representation of the alkylating activity of its epoxy transformers. However, a correlation between treosulfan concentrations in plasma and levels of specific DNA adducts in tissues (for example, the bone marrow), or clinical effects, has not been investigated. Therapeutic drug monitoring of not only the prodrug but also its active epoxide might be needed. In addition, blood pH, body temperature, and intravenous fluid delivery may influence glomerular filtration, tubular reabsorption, and non-enzymatic epoxy transformation of the prodrug [53].
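The exposure metric used in these threshold analyses, the area under the concentration-time curve, can be illustrated numerically. The sketch below (not clinical software; the sampling times and plasma levels are hypothetical, not data from the cited studies) computes a per-dose AUC by the linear trapezoidal rule and sums it over a multi-dose course to give a cumulative AUC of the kind compared against the 15 mg·h/L (fludarabine) or 1650 mg·h/L (treosulfan) cut-offs.

```python
def auc_trapezoid(times_h, conc_mg_per_l):
    """Area under the concentration-time curve (mg*h/L) via linear trapezoids."""
    if len(times_h) != len(conc_mg_per_l) or len(times_h) < 2:
        raise ValueError("need matched time/concentration series of length >= 2")
    auc = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        auc += dt * (conc_mg_per_l[i] + conc_mg_per_l[i - 1]) / 2.0
    return auc

# Hypothetical single-dose profile: sampling times (h) and plasma levels (mg/L)
times = [0, 0.5, 1, 2, 4, 6, 8]
conc = [0.0, 4.0, 3.2, 2.0, 0.9, 0.4, 0.1]

dose_auc = auc_trapezoid(times, conc)
# Cumulative exposure over a hypothetical 3-dose course: sum of per-dose AUCs
cumulative_auc = 3 * dose_auc
```

In practice, population-PK models rather than raw trapezoids are used for dose individualization, but the trapezoidal sum is the basic quantity the reported thresholds refer to.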
Serotherapy Levels

It is now well recognized that the type of serotherapy, its dose, and its timing in relation to the transplant all affect transplant outcome in terms of occurrence of GvHD, immune reconstitution (importantly, viral reactivation and clearance of infection), and chimerism. Marsh et al. collected data from 105 patients to examine the influence of peri-transplant alemtuzumab levels on acute GvHD, mixed chimerism, and lymphocyte recovery. Lower alemtuzumab levels at day 0 were associated with significantly higher rates of aGvHD, but also with higher levels of donor chimerism, higher lymphocyte counts at day +30, and higher T cell counts at day +100 [54]. In a recent report, the clearance of the active components of the two widely used types of ATG (Fresenius/Grafalon and Genzyme) was studied in 38 children with malignant hematological disorders. ATG Fresenius was cleared rapidly and uniformly from the circulation whether patients received 60 mg/kg or 45 mg/kg, but patients who received a high dose of ATG Genzyme (10 mg/kg) had significantly slower reconstitution of CD3, CD4, and CD8 T cells compared with patients who received a low dose of ATG Genzyme (6-8 mg/kg) or ATG Fresenius [55].

Stem Cell Source in Non-MAC Conditioning

Historically, bone marrow has been the preferred stem cell source for HCT in children, owing to concerns that peripheral blood stem cell products led to an increased risk of GvHD. In Slatter et al.'s report of 160 PID patients who received uniform conditioning with treosulfan and fludarabine, a higher level of myeloid chimerism was found in recipients of PBSC compared with CB and BM, without an increased risk of grade III/IV acute or chronic GvHD [26]. This is an important finding, particularly for patients with diseases where a high level of chimerism is required to achieve complete cure.
Conclusions

The use of RTC and RIC has been a major paradigm shift in HCT for PID and may have contributed to improved survival through a reduction in early post-HSCT toxicities. Almost certainly, long-term toxicities will also be reduced, although further data are required to confirm this. The use of antibody-based conditioning regimens, however, is likely to transform the field in the future. The driver for this is that PID can be completely cured by HCT and, as malignancy is rarely a feature of the disease, toxicity from the curative procedure should be minimized. More recently, newborn screening for severe combined immunodeficiencies has meant that these patients are now being identified by 2-3 weeks of age [56]. Rapid transplantation is preferred, as survival and neurological outcomes are best in patients with no pre-existing infection [57,58]. As gene therapy approaches become mainstream treatment, a non-toxic conditioning approach followed by an autologous gene-corrected stem cell procedure should almost eliminate short- and long-term treatment-related morbidities for patients with SCID [59,60]. These conditioning approaches will have to be modified for combined immunodeficiencies and gain-of-function diseases where high-level or complete donor chimerism is required to abolish disease manifestations [61][62][63][64]. However, combinations of antibody-based regimens and pharmacokinetically targeted, reduced-toxicity agents may help resolve these issues. The future for patients with PID looks extremely encouraging.

Compliance with Ethical Standards

Conflict of Interest The authors declare no conflicts of interest relevant to this manuscript.

Human and Animal Rights and Informed Consent This article does not contain any studies with human or animal subjects performed by any of the authors.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
The Silence at the Stands: Agony in the Portuguese Market for Taxis

Author: Murta, Daniel
Published by: Imprensa da Universidade de Coimbra
Persistent URL: http://hdl.handle.net/10316.2/34609
DOI: http://dx.doi.org/10.14195/2183-203X_39_2

Taxis cannot come cheap. You are driven by a professional, with all the flexibility of cars minus several inconveniences. And yet … why aren't they? Drivers will grumble about prohibitive gas prices not covering lousy fares. The root, and truth, of the matter lies in service: clients, miles… and the dearth of them. The market fits the description of a free-entry cartel, run by associations and tolerated by a captured regulator, where neither entry nor exit budge price. For consumers, it is a raw deal. Meanwhile, monopoly profits are squandered among the maximum of deadbeats, who barely get by. Better if regulation evolved from capture by the drivers' organizations towards price and licenses set according to traffic levels, after a clearing negative price shock.

Does that matter? Taxis can be precious in bridging the gap between public transport and the environmentally fragile and expensive transport by car.

Taxi is a unique transport medium, overshadowed in land transportation by both the car's leadership and the mass public transportation modes, bus/coach and metro/train. Nonetheless, it can play a pivotal role in an efficient transportation system, bridging gaps between various modes, including the aerial and maritime, mostly as a complement but also, at times, as a substitute or, again, as a complement to walking.
Taxis naturally form a market, in terms of distance to related services, geographically dominated by the urban component with a residual, dwindling regional element. And yet, their market identity notwithstanding, demand ought to be sensitive to the quality of other public transportation, such as the metro or dedicated-lane express buses: their reach, working hours, frequency, comfort, and … cost. Clients should bear in mind the costs of motoring which, even if most drivers narrowly define them as the everyday operating costs, still encompass fuel, a hot commodity nowadays, and parking. Also, taxis being a somewhat premium solution to most urban transport needs, business surely varies with general living conditions, in terms of real wages and unemployment. Historically, and to a great extent to the present day, bar a few deregulation experiences scattered around the world, it has been a business of rents: naturally, protected, regulated rents. They have generally stood on two pillars: a fixed, limited supply of licenses, often a tradable and valuable rent, on occasion a source of auctioning income for municipalities; and a regulated level of fares. These usually comprise a fixed part that allows for the first x meters to be covered, and a variable part, charging y for each additional z meters. Extras for luggage or special cases are common 1. Nightly duty can have either a higher parallel fare structure, as it has in Portugal, or involve a fixed extra. Henceforth, the fare structure will be referred to, in spite of the simplification involved, simply as the price.
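The metered structure just described, a fixed part covering the first x meters plus y charged for every additional z meters, with a parallel night tariff, can be written down as a one-line formula. The sketch below uses hypothetical parameter values chosen for illustration only, not actual Portuguese tariffs.

```python
import math

def metered_fare(distance_m, flag_fall=3.25, included_m=500,
                 increment_price=0.10, increment_m=100, night_multiplier=1.0):
    """Fare for a trip of distance_m meters under a fixed + per-increment tariff.

    flag_fall covers the first included_m meters; each further increment_m
    meters (rounded up, as a meter ticks in whole increments) adds
    increment_price. A night tariff is modeled as a simple multiplier.
    All default values are hypothetical.
    """
    extra_m = max(0, distance_m - included_m)
    increments = math.ceil(extra_m / increment_m)
    return round((flag_fall + increments * increment_price) * night_multiplier, 2)
```

For example, under these made-up parameters a 1.5 km trip costs the flag fall plus ten 100 m increments, while any trip within the first 500 m costs the flag fall alone; the night variant simply scales the whole fare.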
How the source of the rents is divided between the two restrictions, price and quantity, depends on the individual markets around the world: New York is (also) famous for having fewer permits now than in the 1930s, suggesting busy drivers earning their companies dollars essentially through quantity control 2, whereas in other cities/countries, like Portugal, the rides are expensive relative to both purchasing power and alternatives, and the rent rests more on price than quantity. The literature and experience on deregulation address these two restrictions: the first by freeing entry, (the lack of) which hurts consumers and excluded professionals; the second by deregulating, that is, freeing, price, intending to benefit consumers. Before the mixed results stemming from these actions are presented, a summary of the issues involved. The first one (entry) poses problems in being set, and the second one (price) creates costs/distortions of its own. Essentially, freeing entry removes the rent from its owners, which should give way to compensation. There is usually no appetite for that, both because of the scarcity of public funds and the reluctance to use today's or tomorrow's taxpayers' money for giveaways to renters. Freeing price, although financially positive to the consumer in terms of tumbling prices, can cause problems through loss of clarity and transparency, giving way to bargaining. In a market well loaded with casual and uninformed customers 3, the end user can often be the loser. Absent social consent for "shopping around" practices for the best rate, a collapse of an arranged and well-publicized fare structure can be bad for the consumer. Either of those two deregulations, hurting profits and attracting new players, can also endanger safety, comfort, and quality of service, further downgrading the pool of drivers to those modest enough to put up with less pay; but these concerns can be dealt with by direct regulation of standards.
Several factors, some more unique to the country than others, conspire to put the Portuguese market for Taxis in a terrible situation. Explaining it, placing it within the context of developments elsewhere in the world, and examining possible clues for improvement will form the core of this paper. First, a brief review of the literature beckons. Beyond major references in Industrial Organization, which will help formalize some of the subsequent description of the Portuguese situation, three surveys of different focus and date will be referred to and discussed, as they provide a framework and arguments on the performance and regulation of the Taxi industry. Starting with the most recent, Aquilina (2011) covers ten case studies in England, following the Office of Fair Trading's (OFT) 2003 recommendation to Licensing Authorities that they should lift quantity restrictions on Taxi licenses. The studies cover cities where the restrictions were lifted and others where they were not. In all cases there were two periods of observation 4; data were collected on waiting times, both of customers and drivers, fares, the number of vehicles, and a survey of customers' perception of service quality. Crucially, in both periods of observation fares were always regulated 5, and were generally raised during the time of the study. Furthermore, the biggest city where de-regulation of licenses took place was affected by major construction work. These mishaps dampen the conclusions. They amount to the finding that the cities which de-regulated were the ones where waiting times were longer, and thus where those were cut the most; there, too, was where the fleets increased in the biggest numbers. A much-feared loss of quality in service, in association with license de-regulation, was not reported by the relevant survey in either sub-group of towns.
The author downplays such fear, noting that standards of quality can be imposed by a regulator, instead of letting high fares deliver such an outcome. In any case, in this study fares remained regulated and rose. The similar performance of the cities that did not de-regulate led to the conclusion that such a move is not necessary to achieve a good performance by the industry, but it can nonetheless act as a substitute when the regulator cannot find the adequate balance in terms of licenses. The background for Taxi regulation consists of the prerogative of Licensing Authorities to refuse new licenses if they are satisfied there is no unmet demand (1985), and the widespread power of Licensing Authorities to set maximum fares, which they generally use. In a typically British succinct fashion, vehicle and driver quality standards are those that assure they are 'fit for purpose'. The above-mentioned Office of Fair Trading 2003 report, besides endorsing license reform and linking reduced availability to reduced quality, stresses the importance of quality and safety regulations and, critically, sanctions fare regulation (in a market where maximum fares are always adopted by drivers), since it protects vulnerable customers, or those found in vulnerable situations, from overcharging. Vulnerability is detailed to encompass the lack of price competition stemming from the inability (or costliness) to shop around for prices, both on the street (hailing) and at Taxi stands (if, as is often the case, it is not socially established); also, tourists lacking the skills and/or information to negotiate a fare, or disabled people lacking alternatives. Aquilina (2011) focuses on the street and rank hiring segment 6 to state that people hire randomly, either hailing or through the 'first in, first out' custom. So hiring is done in the absence of competition, with consumers forming their expectations on the basis of the market as a whole.
This provides drivers with monopoly power, which may lead to monopoly pricing in spite of multiple suppliers, a very important point in the subsequent description of the Portuguese market. Drivers, the author goes on to say, would not gain from lowering fares or investing in improving quality, since the customer considers the market as a whole, and the driver would not capture either market share or loyalty through any of those actions. The observed reality of Taxi markets, including the adherence to maximum fares, seems to bear these statements out. The work by Liston-Heyes and Heyes (2005), also cited by Aquilina (2011), seeks to provide economic background to the debate and momentum around de-regulating the Taxi industry. They trace the foundations and rationale for regulation to market failure, agreeing with the established consensus on the optimal performance [in economic Welfare] of competitive markets. This market's failures, relative to textbook perfectly competitive ideals, start with the inherently interdependent supply and demand schedules: a parametric increase in supply decreases waiting times, thereby increasing demand in a price-quantity framework, a feature Beesley and Glaister (1983) note, citing Harold Demsetz: "[…] effective regulation depends on […] suitable information. In markets in which demand cannot be kept analytically separate from supply, this is not easy." Then there is the above-mentioned lack of price competition. Diamond (1971) established that monopoly pricing can prevail even with numerous suppliers, due to search costs.

4 In one case there were three. 5 Whether there was quantity de-regulation or not. 6 As opposed to the smaller, and rather different, phone or internet hired segment.
Here, these costs comprise lost time, the gamble in turning a sure offer down, and the lack of information/ability of infrequent or non-local customers, aggravated in cases of disability, heavy luggage, odd hours, or odd destinations. Often, there is also the strong social convention of taking the first car of the rank, or shyness towards price comparisons (Liston-Heyes and Heyes, 2005). The prevalence of frequent customers, and of booked rides, can curtail these information asymmetries, but predicting the market outcome, and its performance in terms of Welfare, is very difficult. The authors proceed to describe waiting times and excess capacity [to keep them acceptably low] as public goods, going back to the interdependency of supply and demand, and noting that a competitive environment [under the classic assumptions] will under-provide such a [public] service. Important models, like that of Chamberlin (1933), while anticipating excess capacity, fail to appraise its value to customers. Liston-Heyes and Heyes (2005) mention economies of scale and the prospect of excess entry due to replication of fixed costs, but argue they are low. This is a point of central importance for the present predicaments of the Portuguese market, which will be discussed later. In any case, nobody suggests fixed costs are absent from this market. Some, like Fingleton et al. (1997), accept the evidence of their small size to recommend free entry. Regarding quality of service 7 beyond waiting times, Liston-Heyes and Heyes (2005) find, again in agreement with Aquilina (2011), that direct regulation is the way to go. In cataloging regulatory instruments, they start with price controls, usually under a fare formula based on distance and time, plus some fixed cost, calculated by meters 8. Then there is entry regulation, with the evidence of low fixed costs and the interdependency of service quality and demand (through waiting times) both arguing for de-regulation.
That same oddity is referred to by Arnott (1996), who pushes for subsidizing Taxi travel to correct the under-supply of vacant capacity. Such an instrument, a subsidy, is also plausible if outcomes in related markets are sought, for instance, congestion reduction in personal transportation by car. In a further insight relevant to the Portuguese market, Liston-Heyes and Heyes (2005) say the Taxi market is prone to regulatory capture 9. Taxi consumers (visitors, businessmen on paid accounts, low-income people without a car, or your regular person in an unusual situation) typically will not be the ones lobbying the fare-setting municipalities for lower fares. Taxi associations, on the other hand, generally will engage in canny lobbying, both for higher fares and for stagnant licensing. Optimal independent regulation would be difficult and would require much more information than the regulator will normally have. The authors conclude by arguing for transparency in policy objectives, accepting there is low risk of excess entry, and some instances of the contrary; they find the case for fare regulation ambiguous, its balance depending on the characteristics of the [individual] local market 10. Finally, in terms of the main foreign references centered on the Taxi market, the somewhat older work by Cairns and Liston-Heyes (1996) ploughs away against deregulation of fares and entry. It starts with the demonstration that the very existence of equilibrium depends on the regulation of fares. The (by now, oft-cited) interdependence of supply and demand is presented here as a negative externality: one man's ride increases another's wait. Under their assumptions, a socially optimal price, equal to the cost of adding capacity, will yield negative profits. Attaining zero profits will result in a zero value for the license. The price that achieves them, if waiting is very costly for intra-marginal consumers, can be higher than the [freely optimized] monopoly price.
With high search costs, examples of which have been mentioned before, and risk-averse customers, the customers themselves may prefer a somewhat higher price to bargaining. On a full stand, a choosing, and choosy, customer may have the advantage and drive the price down, in a Bertrand-like fashion. In such a case, drivers would prefer a fixed fare to bargaining with a 'bad hand'. In periods of high demand, a regulated fare will protect the client; in the opposite times, both client and driver may face high search costs for another Taxi/customer, and bargaining will be difficult: an established fare is again better. Once price is regulated, according to the authors, the market will become like an open-access resource, a commons, where too much entry can take place, hence where constraining entry can increase Welfare. Optimum Welfare is thus compatible with positive profits and the resulting valuable medallions 11. Such a valuable (as opposed to worthless) license can also be seen as a bond posted by its holder, driver or Taxi owner, towards the (municipal) authority, the stripping of which can prevent morally hazardous behavior, such as the deliberate choice of longer/congested routes, or fare-gouging. The paper concludes by restating that price regulation is necessary for equilibrium to occur. Since the intensity of use may be difficult to monitor, a regulator can improve Welfare by limiting the number of Taxis as well. Humbly, they stress that regulatory capture can clearly take place, that no system of regulation is universally superior, and that regulations' effectiveness is a matter for empirical judgment. Portugal is now commemorating 40 years of democracy. Remnants of both the long-lasting preceding political system of Fascist-inspired authoritarian rule (1920s to 1974) and the turbulent, short, communist-leaning revolutionary period (1974-1976) linger to this day in the economy.
However, it is the first, corporatist, pre-democratic economic environment that helps characterize the market for Taxis, up until the late eighties. Government competition policies were designed to hinder competition, instead requiring for each new establishment a permit - which could be refused - as well as for any expansion of economic activity on the part of existing firms. Although watered down amidst a liberalization phase of the regime, this law and environment produced a culture of coziness that fitted perfectly with the classical way of functioning and regulating the Taxi industry - regulated fares and entry. This long, idyllic phase delivered what is expected and observed in most markets for Taxis thus regulated: modest profits, which attract enough people and capital; stable, transparent and dependable prices - published and metered; enough quality of service, absence of price competition, bidding and other pressures or risks to the consumer. Quantity, in this quiet, almost secretive industry, and with so much time gone by, is for no-one to measure. Price, from which quantity can be qualitatively inferred - for the sake of comment - real relative price would have to take into account a very poor peasant-turning-urban society, where walking and cycling were mass transportation 'systems' and the public ones were limited, in scope and performance, and otherwise privilege only of the biggest cities.
Daniel Murta, The Silence at the Stands: Agony in the Portuguese Market for Taxis
Taxis were there for the few - few richer civil servants, few tourists and very few occasions. Next is the second phase, and its cutoff point from the one just described. To place it a quarter of a century ago, in the late eighties, has nothing to do with a concrete shock. Following the entrance into the European Union (E.U.), Portugal experienced rapid growth in per capita income, in itself a good thing for Taxis, but even quicker, steeper growth in car ownership.
The general access to a car - your own, a close relative's, your neighbour's - forever altered the prospects for Taxis. Of course, this is nothing new to the rich industrialized world, but decisively changed the landscape for professional drivers. The car, which in Portugal as elsewhere in Europe or America is credited with a 90% share of personal land-based transportation, hurts the alternative transport modes. All public transportation - urban, regional and long range, which, in Portugal, is typically under 500 kilometers or just over 300 miles - struggled to cope with the ascent of the car on Portuguese choices. (For a description of the Portuguese transport system, see Murta, 2010.) Other negative shocks for the Taxi industry have since occurred: -The capital's underground system has undergone continuous development, with new lines, connection to the main national railway line (1998) and, most recently, much more interconnectedness between lines, improving accessibility; less than two years ago, a long-awaited connection to the (very central) airport was opened; to convey a sense of the importance of Lisbon, the Metro and the airport, it is enough to say greater Lisbon houses 4 million people out of 10 million Portuguese; the airport, besides being the country's biggest, is the capital's only such structure for civil flights; -The second city - Oporto - has, since 2003, a brand new mixed light rail/underground system, fully connected to its only, rather busy, airport; -A stagnant economy, with stagnant personal disposable incomes and high personal indebtedness, with real growth this century zero or negative in five of thirteen years, and at or over 2% only twice (Table 2); -Cuts in State-paid, Health-related contracted transportation, which fell severely on those it contracted with Taxis 13; industry sources assuredly say that prices contracted in bulk, of €0,30 per km or less in some cases, were considerably lower than those practiced by Fire Department-related firms which, although also hit by cuts in transportation, were not as
drastically affected; this development has brought demonstrations of Taxi drivers to the streets and, sources say, has caused hundreds of professionals to leave the industry. In light of the two described phases - a steady one, before the late eighties, and a declining one, until the present - how did the market use to work, and how does it now? The sector produces very little data, and is dominated by individually owned, one-Taxi firms. It has a strong, but incomplete, membership of two drivers' associations - Antral and FPT 14. Small firms employing drivers are to be found in the two major cities, in the Algarve tourist hub, and marginally in Coimbra. Five years ago, both claimed there were around 30,000 drivers, between their associates and the rest. Price is regulated by conventions, which are signed between the biggest association, or the two main associations, and the relevant Ministry. The latest one, still in place, was signed in December 2012, to run until the end of 2014, between the two associations and a branch of the Ministry for the Economy and Employment 15, now simply the Ministry for the Economy. Portugal, in this respect, follows many countries and cities and what many authors (e.g. Cairns and Liston-Heyes, 1996) recommend in having a regulated price, to which drivers fully commit. However, if one analyses the various conventions and the projected length of their being in effect, one notices several gaps, in different years. Asked about these aspects, sources in the Associations say when drivers fail to see the point in raising fares, they do not press for a new convention, leaving the last one to stand in force.
This candid truth, thus exposed, allows two major aspects of the market to be ascertained: -There is regulatory capture, as all cited authors say the sector is prone to have, whereby prices are set in accordance with the drivers' interests; -The economics of the sector, namely its contracting demand relative to its existing supply, conspire to make nominal price hikes unattractive to the drivers, with them settling for real price decline instead. This practice of keeping prices still, at times, is rather recent, and could not happen in the inflation-plagued decades of the seventies and eighties. Before, either because of inflation or also in view of less dire conditions on the demand side, the associations complained about the need to raise prices. One can interpret this as evidence that the regulatory capture was less complete in those days - it is true that government, while fighting inflation, tried to delay price hikes in which it had a say. Or, it can be noted that complaints are not as strong as actions, and the whole scheme of signing conventions with the government, thereby splitting the blame for higher prices, is very nice for the industry. Anyhow, whereas formerly drivers could be heard complaining about prices and perhaps taxes, the last decade has seen a consensual claim emerge: slow business, bankrupting, nerve-racking and slow business; and anecdotal evidence of drivers over four hours into their shift without a single customer. The number of licenses for Taxis has always been regulated, inasmuch as Municipalities are responsible for them, and auction new ones, which drivers then validate with the IMT institute 16, from the Ministry for the Economy. Lately, there have not been any new licenses distributed 17. 13 It also contracted with firms related to Fire Departments. 14 Antral is the acronym for 'Associação Nacional dos Transportadores Rodoviários em Automóveis Ligeiros'.
It is bigger and much older than FPT, which stands for 'Federação Portuguesa do Taxi', created in 2003. They are reckoned to split the associated drivers' numbers 90% and 10%. 15 Direção Geral das Atividades Económicas, Ministério da Economia e Emprego. 16 IMT stands for Instituto da Mobilidade e dos Transportes Terrestres, I.P., and regulates transportation. 17 Confirmed by the official responsible at Coimbra Municipality. Industry sources are certain that there are too many Taxis everywhere, including in Lisbon, and have evidence of licenses being dropped, given away or sold for symbolic amounts. This fact - it will be taken as such, since it makes sense, is reported by entities whose interest is contrary to it, and there is no evidence against it - has huge implications for the market - in effect, it turns regulated entry, with positively valued medallions, into free entry, in terms of ceasing to be an active/costly constraint. To grasp the consequences on equilibrium of free entry, with monopoly prices, a curious example is evoked. Lipsey et al. (1990), in their economics textbook, tell a story about price fixing and profits 18. If barbers in a town, unhappy with their miserly earnings, teamed up and set a price, whilst allowing for free entry, such a price would indeed be higher, and revenues would go up, for a time. Afterwards, either they would raise costs to try and gain market share (plusher service) and/or entry would take place until profits were wiped out. Even in the presence of monopoly pricing. As noted above, Aquilina (2011) states that monopoly pricing is compatible with multiple suppliers, since one driver would be unable to gain meaningful market share from a unilateral cut on price.
To present the market equilibria as they seem to be developing, the classical framework found, for example, in Martin (2002), will be followed.

P = a - bQ (1)

is the standard linear demand. Linear costs with small, but non-zero, fixed costs are given by

C(q) = cq + F, (2)

yielding the benchmark competitive total quantity

Qc = (a - c)/b (3)

and the freely optimized monopoly price, in which optimal quantity is half of the competitive one:

Pm = (a + c)/2 ; Qm = (a - c)/(2b) (4)

The number of firms, which stop entering or leaving when variable profits equal fixed costs, is given by

(Pm - c)Qm/n = F (5)

n = (a - c)²/(4bF) (6)

Finally, welfare from this zero-profit equilibrium:

W = (1/2)(a - Pm)Qm = (a - c)²/(8b) (7)

This is, naturally, a far cry from an ideal omnipresent sole Taxi, incurring F in losses, and charging P = c:

W* = (a - c)²/(2b) - F (8)

Notice that, apart from the value of one fixed cost, a stylized market of a firm charging the competitive price would yield four times more Welfare; the largest part of the difference - fully two thirds of it - lies not on the smaller quantity - half - that the (regulatory captured) monopoly pricing envisions, but on the squandering of monopoly profits on the replication of fixed costs, by a large, excessively large, number of firms, as shown by the following expression:

nF = (a - c)²/(4b) = (2/3) [(a - c)²/(2b) - (a - c)²/(8b)] (9)

The market arrives at this clearly underperforming equilibrium through a doubly convoluted path: -It is the experience of exit, and worthlessness of licenses, that demonstrates there is non-constrained free entry, for practical purposes; -It is not raising prices - letting price conventions 'expire', or be left to outlast their set period - that points to prices being already 'captured' at a level drivers don't want to raise, the hallmark of 'monopoly' pricing. A glance at other developments abroad, before turning attention to possible improvements on this sorry state of affairs.
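Under this linear model, both numerical claims - four times more Welfare apart from one fixed cost, and two thirds of the gap owed to replicated fixed costs - can be checked directly. The sketch below uses illustrative parameter values, not estimates for the Portuguese market.

```python
# Free entry under monopoly pricing in a linear-demand Taxi market.
# Model as in the text: demand P = a - b*Q, costs C(q) = c*q + F.
# Parameter values are illustrative only.
a, b, c, F = 10.0, 1.0, 2.0, 0.5

Q_comp = (a - c) / b                 # competitive quantity (P = c)
P_mon = (a + c) / 2                  # freely optimized monopoly price
Q_mon = (a - c) / (2 * b)            # half the competitive quantity
n = (P_mon - c) * Q_mon / F          # entry stops when variable profit = F

# Welfare with monopoly price and free entry: consumer surplus only,
# since all variable profits are dissipated in replicated fixed costs.
W_capture = 0.5 * (a - P_mon) * Q_mon
# Ideal: one omnipresent Taxi charging P = c, absorbing a single F.
W_ideal = 0.5 * (a - c) * Q_comp - F

print(W_ideal + F, 4 * W_capture)    # equal, apart from one fixed cost
gap = (W_ideal + F) - W_capture
print(n * F / gap)                   # two thirds of the gap is n*F
```

With these parameters, 32 firms split monopoly variable profits of (a - c)²/(4b) = 16 into zero-profit shares, reproducing the two-thirds decomposition claimed in the text.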
Europe has seen a fair share of Taxi drivers' demonstrations, from the most recent in France, against unregulated entry by allegedly less qualified competitors (February 2014), to others in Athens (September 2011), Rome (July 2006) and Milan (January 2003), all contrary to licensing reforms. In Portugal, they cannot demonstrate against themselves, or against a number that bad economic fundamentals, rather than a reform-minded government, has turned excessive. New York is a case apart. A city of millions, a Mecca for tourism 19, it is notorious for its tight grip on drivers' licenses, especially their meager number, and has just auctioned several more for prices that exceeded $1M (USD) per Taxi (November 2013)! In New York, there are no demonstrations to be seen. The drivers are employees of investors who have the capital to spend on such a 'license to rent', and they are mostly quiet foreigners 20. Customers of a possibly steeply fared service are not organized, and come a lot from out of town. Does it matter? Is there a better way to run a city Taxi market? Well, going back to the beginning, Taxis can play a pivotal role in an efficient transportation system, and if they stray too much from a competitive solution, efficiency is bound to suffer. Such a high value for a license highlights the rent-seeking nature of the market that even Cairns and Liston-Heyes (1996) acknowledge. Any change on this equilibrium is bound to affect the economic value of those rents, implied in the values they fetch at the auctions. If New York collects that money, it stands in an awkward position to 'take it away'. That is a reason for not rushing into issuing many licenses, as it is also for not pressing for fare reductions. From a regulator's perspective, it is humble to start thinking about improvements on the status quo by recalling the Hippocratic oath's first commandment: 'first do no harm'.
It is common to mistake this for doing nothing, but that's surely a good way to start. If nothing is done, the market will continue to bleed some drivers, with the remaining struggling to get by on the lowest revenues that cover their expenses; consumers will not flock to the stands, as the price is rather heavy on their lean wallets. A word on price sensitivity. Given the high prevalence of casual/rare travel, aggravated with the demise of Health-related contracted transportation, demand is not elastic. From most of the studies covered above, one infers there is some price sensitivity, not so much as it would tempt drivers into a price war. A thorough study on demand, out of the scope of this text, would be most welcome, and the author would be glad to participate. Acting on the number of licenses, when they are worthless, is pointless or tricky - expanding is pointless; reducing enough for them to become valuable is tricky, because it would create random winners, giving a plausible claim for compensation on the part of the numerous losers, who saw their license 'confiscated'. With high public debt, who would accept the State spending money on people who, to some extent, overcharge customers? That leaves price as the instrument of change. It seems a one-off sizable reduction on prices, say a third off global fares, keeping the way they are calculated and transparently shown to the customers, would achieve a lot in the short run, and could have a proper follow-up later on. It would speed up the (already existing) exit from the market by drivers who wouldn't be able to earn enough to cover costs - fixed costs such as: -a fixed tax take on revenues, which can be later partially recovered, if there aren't enough profits; -the renewal of the driver's license; -etc. Therefore, a cut on price, when variable profits barely cover fixed costs - zero profits, negative for those who leave - would force the departure of more drivers.
It would also favor every consumer, and bring a few extra ones, dampening the exodus of motorists. The remaining Taxis would be busier, charging less, which is tantamount to a better deal on efficiency, or Welfare, in the microeconomic sense. This is the static result. The dynamic implications depend on whether this one-off gesture was the 'single bullet'. If it was, it would 'scar' the market, deterring investment for fear of another round of price cuts and, in general, a perception of utter lack of sympathy for drivers and obsession with the consumer. A more coherent and balanced approach, inspired on the, much commended and practiced, forward guidance by central bankers, would be to offer a path to future regulation, both in terms of fares and licenses. The price cut would have to be put into a context of returning the market to growth, and adding value for the economic activity of Taxi driving. Bringing extra customers, and getting drivers busier. During the painful process to get there, some pledges should be made: -Once licensed driver or Taxi numbers stabilized to a given degree, for example a variation in number under 5%, a 5-year stay on new licenses would ensue; -Some agreed measure(s) of traffic, like 'average number of trips per shift', that the Nevada Taxi Cab Authority collects for Las Vegas, or 'average number of miles driven per shift', that the New York City Taxi and Limousine Commission collects 22, would be good proxies to base a discussion either on fare increases, along with data on inflation, of course, or (new) license issuance. The balance to be found would keep the transparency on prices, as de-regulation throws actors into opportunism-prone bidding. It would seek to return to regulated entry - now, it is in effect free. Keep an eye on prices on behalf of the consumer, but gladly welcome profits for Taxi drivers, who cannot remember having them. Taxis have never been cheap, nor can they be.
High petrol prices, and a protracted economic crisis in Portugal, have left them as expensive as ever. Further, public transport has improved, a lot in the two major cities. Taxis have lost their piggy-bank, in the form of Health-related transportation, paid by taxpayers. Fares and entry have always been regulated, and pricing does conform to drivers' wishes. But the permanent, sharp contraction of demand has left too many of them, dividing among themselves just enough trips to cover costs, many of them not making it and leaving. This free-entry, monopoly-priced market delivers a poor result in Welfare, both for consumers and drivers. In the short run, a big price cut, demanded by the regulator, would deliver speedier exit, more work for those staying, growth, and better prices for customers. In the long run, it should be placed within a framework measuring traffic, freezing license issuance and prices until demand and growth allowed, evenly, both to rise. Profits and the inherent positive value for licenses would be welcomed. Consumers would remain protected stakeholders, but not more.
The efficacy and safety of anti-PD-1/PD-L1 antibody therapy versus docetaxel for pretreated advanced NSCLC: a meta-analysis Antibodies against the immune checkpoint proteins PD-1 and PD-L1 are novel therapeutic drugs for the treatment of advanced non-small cell lung cancer (NSCLC). Many clinical trials involving these drugs achieved breakthroughs in patients previously treated for advanced NSCLC. However, the results of these clinical studies are not consistent. In this report, we performed a meta-analysis to assess the efficacy and safety of anti-PD-1/PD-L1 antibodies compared with docetaxel treatment for advanced NSCLC patients from 5 randomized clinical trials. We demonstrated that the patients in anti-PD-1/PD-L1 antibody therapy groups had significantly longer overall survival (OS) (HR = 0.69, 95% CI 0.63–0.75, P < 0.05) and progression-free survival (PFS) (HR = 0.76, 95% CI 0.63–0.92, P < 0.05) than those in chemotherapy groups, especially PD-L1 positive patients. Anti-PD-1/PD-L1 antibodies improved the objective response rate (ORR) compared with docetaxel (OR = 1.64, 95% CI 1.19–2.26, p < 0.05). In addition, the anti-PD-1/PD-L1 antibody therapy had fewer treatment-related adverse events (AEs) (OR = 0.33, 95% CI 0.28–0.39, P < 0.05) than docetaxel, especially the grade ≥3 AEs (OR = 0.18, 95% CI 0.12–0.28, P < 0.001). In conclusion, our study revealed that, compared with docetaxel, anti-PD-1/PD-L1 antibody therapy improved clinical efficacy and safety in previously treated advanced NSCLC patients. This therapy may be a promising treatment for advanced NSCLC patients. 
INTRODUCTION
Lung cancer is one of the most common malignancies and is the leading cause of cancer-related deaths worldwide [1]. Each year, 1.8 million new cases of lung cancer are diagnosed, and 1.6 million people die as a result of this disease [1, 2]. Non-small cell lung cancer (NSCLC) accounts for approximately 85% of all lung cancers. When diagnosed, about two-thirds of NSCLC patients are at an advanced stage. Patients with advanced NSCLC have a very poor prognosis, and the mean overall survival is less than one year [3]. The primary treatment for advanced NSCLC is chemotherapy or targeted therapy. Platinum-based chemotherapy is the first-line treatment for patients with stage IIIB-IV NSCLC [4], but patients often suffer from severe adverse events and limited drug efficacy [3]. Docetaxel is one of the most commonly used second-line regimens for NSCLC. It prolongs survival of patients and relieves symptoms of the disease. However, it also causes some severe side-effects, such as neutropenia, anemia, and asthenia [4, 5]. Therefore, scientists and doctors are constantly investigating new treatments for advanced NSCLC. In the past few years, targeted therapies, such as epidermal growth factor receptor (EGFR) and anaplastic lymphoma kinase (ALK) receptor tyrosine kinase inhibitors, have achieved great success in the treatment of NSCLC. They effectively control tumor growth in patients harboring specific genetic mutations and rearrangements. Unfortunately, many patients cannot benefit from targeted therapy because they do not have the driver mutation [6]. In addition, in NSCLC patients who have undergone effective chemotherapy or targeted therapy, tumor progression may occur due to drug resistance, resulting in limited treatment options. Therefore, it is necessary to explore a new way of treating these patients in order to prolong their survival time and improve their quality of life.
Immunotherapy is emerging as a promising therapeutic strategy for the treatment of NSCLC. Cancer immunotherapy aims to restore the immune responses of CD4+ and CD8+ T cells, enabling them to function in an anti-tumor manner [7]. Immunotherapy for NSCLC involves two types of therapeutic agents: allogeneic vaccines (e.g., Liposomal BLP25, MAGE-A3, EGF, Belagenpumatucel-L, Tergenpumatucel-L, and TG4010) and immune checkpoint inhibitors (e.g., anti-CTLA-4 and anti-PD-1/PD-L1 antibodies) [7]. However, almost all phase II or phase III clinical trials involving vaccines failed to prolong the overall survival for vaccinated patients. In contrast, many clinical trials involving anti-PD-1/PD-L1 antibodies achieved breakthroughs for previously treated patients with advanced NSCLC.

Programmed death protein-1 (PD-1) receptor is expressed on activated T cells (especially on T Reg cells), which is engaged by the tumor-expressed ligands PD-L1/L2 to inhibit T-cell activation and promote tumor immune escape [8]. Anti-PD-1/PD-L1 antibodies block the interaction of PD-1 with its ligand PD-L1 to activate T cells and reverse immune escape. To date, numerous clinical trials have validated the efficacy of the treatment of various malignant tumors, such as melanoma, non-small-cell lung cancer, and renal-cell carcinoma [8, 9]. The outcomes of the clinical trials for NSCLC demonstrate that these antibodies can prolong patients' survival and improve their quality of life, thus providing a promising therapeutic strategy for NSCLC patients.
Although several phase II/III randomized clinical trials have been conducted to assess the efficacy and toxicity of anti-PD-1/PD-L1 antibodies for previously treated patients with advanced NSCLC, outcomes such as progression-free survival (PFS) seem to be controversial. Several previously published meta-analyses have analyzed the efficacy and toxicity of anti-PD-1/PD-L1 antibodies [6, 10, 11], but none of them compared anti-PD-1/PD-L1 antibodies with the second-line chemotherapy, docetaxel, for pretreated advanced NSCLC patients. In addition, the importance of PD-L1 expression should also be analyzed in the treatment of NSCLC with anti-PD-1/PD-L1 antibodies [11]. Therefore, we performed this meta-analysis systematically utilizing data from the published literature to evaluate the efficacy and safety of anti-PD-1/PD-L1 antibodies versus docetaxel in previously treated advanced NSCLC patients.

Summary of included studies

Two investigators independently identified the articles eligible for further review by screening titles and abstracts. As a result, a total of 3228 records were identified according to the primary search strategy; 2800 records remained after removing the duplicates; 2698 records were removed after screening; 53 were excluded after screening the titles and abstracts; and 44 studies were excluded after reviewing each publication. Finally, we enrolled 5 published clinical trials involving a total of 3025 patients. The flow chart of our study is shown in Figure 1.
The characteristics of the 5 included studies are listed in Table 1 [12-16]. Of the 5 studies enrolled, two articles were published in 2015, and the other three were published in 2016. All the trials were randomized, controlled, open-labeled clinical trials. The POPLAR study was in phase II; the KEYNOTE-010 was in phase II/III; and the remaining 3 studies were all in phase III. The POPLAR [14] and OAK [16] studies involved the anti-PD-L1 antibody (atezolizumab) versus the second-line chemotherapy docetaxel for previously treated advanced NSCLC, while anti-PD-1 antibodies (nivolumab and pembrolizumab) were involved in the CheckMate-057 [12], CheckMate-017 [13] and KEYNOTE-010 [15] studies. Table 1 summarizes the characteristics of the included studies and agents. In addition, as the participants of the randomized clinical trial KEYNOTE-010 were assigned (1:1:1) with a central interactive voice-response system to receive pembrolizumab at 2 mg/kg or 10 mg/kg or docetaxel at 75 mg/m², the KEYNOTE-010 analysis included two studies with different doses of treatment agents compared with the docetaxel group [15]. We assessed the quality of each study included in this analysis according to the Jadad score, which mainly focuses on the randomization, blinding, and follow-up.

Anti-PD-1/PD-L1 antibodies prolonged overall survival compared with docetaxel

All trials reported the overall survival (OS) data. The median overall survival (OS) and the 95% confidence interval (95% CI), hazard ratio (HR) and the 95% CI for the treatment group versus control group were retrieved from the published edition as well as the supplementary materials (Table 2). The pooled HRs with 95% CIs for OS were calculated using Review Manager 5.3. The pooled HR showed a significant improvement in OS for anti-PD-1/PD-L1 antibody therapy over docetaxel (Figure 2A; HR = 0.69, 95% CI: 0.63-0.75, P < 0.001).
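Pooling of this kind is a standard fixed-effect, inverse-variance calculation on the log hazard-ratio scale, the approach Review Manager applies. The sketch below illustrates it in Python with hypothetical per-trial hazard ratios and 95% CIs, not the actual values extracted from the five trials.

```python
import math

# Fixed-effect, inverse-variance pooling of hazard ratios on the log scale.
# The per-trial (HR, lower CI, upper CI) triples are hypothetical placeholders.
trials = [
    (0.73, 0.59, 0.89),
    (0.59, 0.44, 0.79),
    (0.69, 0.56, 0.85),
]

log_hrs, weights = [], []
for hr, lo, hi in trials:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # back out SE from the CI
    log_hrs.append(math.log(hr))
    weights.append(1.0 / se ** 2)                    # inverse-variance weight

pooled_log = sum(w * x for w, x in zip(weights, log_hrs)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
pooled_hr = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))
print(round(pooled_hr, 2), round(ci[0], 2), round(ci[1], 2))
```

Each trial's weight is the reciprocal of the variance of its log HR, so larger, more precise trials dominate the pooled estimate; a random-effects model would additionally inflate each variance by a between-trial heterogeneity term.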
PD-L1 is a potential biomarker that is expressed on tumor cells and tumor-infiltrating immune cells. The PD-L1 expression level plays a crucial role in the prognosis of cancer patients [11, 17]. Therefore, we performed a subgroup analysis to assess the influence of PD-L1 expression level on the efficacy of anti-PD-1/PD-L1 antibody therapy. KEYNOTE-010 only enrolled patients whose biopsy and archives showed a PD-L1 tumor proportion score of 1% or greater (PD-L1 positive), but the remaining four RCTs included patients with different PD-L1 expression levels. To better analyze the importance of PD-L1 expression, we redefined positive PD-L1 as more than 1% or TC1/2/3 or IC1/2/3 based on the included 5 RCTs and analyzed the OS/PFS in the subgroups according to PD-L1 expression. We also defined PD-L1 negative as less than 1% or TC0 and IC0.

Anti-PD-1/PD-L1 antibodies prolonged progression-free survival compared with docetaxel

The progression-free survival (PFS) remains controversial in several randomized clinical trials (Table 2). In the CheckMate-057, POPLAR and OAK studies, progression-free survival was similar between the treatment groups in the intention-to-treat population. However, in the CheckMate-017 and KEYNOTE-010 studies, PFS was improved after anti-PD-1/PD-L1 antibody treatment, which showed superior efficacy to docetaxel. Thus, we calculated the pooled HRs for PFS in this study.
Anti-PD-1/PD-L1 antibodies improved the objective response rate compared with docetaxel

All the studies included in this meta-analysis reported the partial or complete overall response rate according to RECIST (version 1.1). We compared the overall response rate of anti-PD-1/PD-L1 antibodies (nivolumab, pembrolizumab and atezolizumab) with docetaxel for advanced NSCLC patients. The pooled odds ratio (OR) for overall response rate (ORR) was 1.64 (Figure 4; 95% CI 1.19-2.26, P < 0.05), which suggested a higher clinical response rate for anti-PD-1/PD-L1 antibodies than for docetaxel in advanced NSCLC patients.

DISCUSSION

Programmed death protein 1 (PD-1) is a co-inhibitory molecule expressed by activated T cells. When it binds its ligands, PD-L1 or PD-L2, T-cell activation is inhibited and antitumor immune response is dampened [18, 19]. PD-L1 is expressed on tumor cells as well as tumor-infiltrating T cells in many kinds of cancers. Therefore, the PD-1/PD-L1 pathway plays an important role in tumor immunologic escape [8].
In recent years, antibodies targeting the PD-1/PD-L1 pathway have been widely explored in clinical trials and have exhibited satisfactory results [20]. Nivolumab, an IgG4 monoclonal antibody that targets the PD-1 receptor, is now being used in the clinical trials of non-small-cell lung cancer, metastatic melanoma, renal-cell carcinoma [21], ovarian cancer and Hodgkin's lymphoma [9]. It was approved in 2015 by the Food and Drug Administration (FDA) for the treatment of previously treated advanced or metastatic NSCLC [22]. Pembrolizumab is an IgG4-engineered humanized antibody that targets the PD-1 receptor. It is now being used in clinical trials for advanced melanoma, advanced urothelial cancer, and NSCLC. The US FDA granted accelerated approval to pembrolizumab for the treatment of metastatic NSCLC patients whose tumors expressed high levels of PD-L1 [22, 23]. Atezolizumab (MPDL3280A) is a humanized engineered IgG1 monoclonal antibody against PD-L1. Several clinical trials have also been designed to evaluate the efficacy and safety of atezolizumab in the treatment of many tumors, including NSCLC [13, 16]. These PD-1/PD-L1 inhibitors are breakthroughs in the treatment of NSCLC [23]. Some clinical trials proved the safety and efficacy of anti-PD-1/PD-L1 antibodies, and other studies compared the treatment effects of PD-1/PD-L1 inhibitor therapy and chemotherapy. A few meta-analyses on PD-1/PD-L1 inhibitors in the treatment of NSCLC patients have been published. For example, Jiaxing Huang et al. [7] wrote a meta-analysis about the efficacy and safety of PD-1 inhibitors in previously treated advanced NSCLC patients. However, this study only enrolled clinical trials involving nivolumab, and most of the trials included were single-arm treatments without a control group. Guo-Wu Zhou et al.
[18] conducted a similar meta-analysis comparing anti-PD-1/PD-L1 antibody therapy with chemotherapy for pretreated NSCLC patients, but they only included three randomized clinical trials enrolling 1141 patients who received treatment with nivolumab or atezolizumab. Additionally, because of the time of publication, recently published high-quality literature was not included. Some phase II/III clinical trials published recently provided more information about the safety and efficacy of anti-PD-1/PD-L1 antibody therapy [13, 16]. In our meta-analysis, we included 5 randomized clinical trials to evaluate the efficacy and safety of anti-PD-1/PD-L1 antibody therapy compared with docetaxel in previously treated advanced NSCLC patients. In these clinical trials, all patients with stage IIIB or IV NSCLC had previous treatment, such as surgical resection, radiation therapy or platinum-based chemotherapy, and these patients had tumor recurrence or progression during or after the regular treatment. All patients enrolled in the experimental groups received the anti-PD-1/PD-L1 antibodies intravenously at an appropriate dose identified by the previously conducted phase I clinical trials. In the control groups, the participants received docetaxel intravenously at a dose of 75 mg/m². The expression of PD-L1 in tumor specimens was detected by immunohistochemistry (IHC). All clinical trials were conducted under the guidance of previously designed protocols, and all participants were followed up regularly during the clinical trials. Our meta-analysis demonstrated that immune checkpoint inhibitors significantly improved efficacy in previously treated advanced NSCLC patients using OS/PFS/ORR as the primary or secondary endpoints.
PD-L1 is a potential biomarker for anti-PD-1/PD-L1 antibodies, but a positive status was defined differently across these clinical trials. In CheckMate-017 and CheckMate-057, more than 1% of cancer cells with positive IHC staining was defined as PD-L1 positive. The KEYNOTE-010 clinical trial only enrolled patients with PD-L1 expression ≥1%. In POPLAR and OAK, IHC staining of PD-L1 expression was assessed on both tumor cells (TC) and tumor-infiltrating immune cells (IC); TC1/2/3 or IC1/2/3 was defined as PD-L1 positive, and TC0 and IC0 were defined as PD-L1 negative. To better reflect the role of PD-L1 expression in PD-1/PD-L1 inhibitor treatment, we redefined PD-L1 positivity as more than 1% staining, or TC1/2/3 or IC1/2/3, based on the 5 included RCTs, and analyzed OS/PFS in subgroups according to PD-L1 expression. In the PD-L1-positive subgroup, anti-PD-1/PD-L1 antibody therapy showed significantly improved OS compared with chemotherapy (P < 0.001) and significantly prolonged PFS (P < 0.001). However, the improvement in OS between the two treatments in the PD-L1-negative subgroup (P = 0.02) was not as large as that in the PD-L1-positive subgroup (P < 0.001), and there was no significant difference in PFS between the two groups (P > 0.05). We found that PD-L1 expression might be an important prognostic factor for the efficacy of PD-1/PD-L1 inhibitors in advanced NSCLC. However, we did not compare the objective response rate (ORR) or adverse events (AEs) in subgroups according to PD-L1 expression because of a lack of data.
Consistent with previous findings in clinical trials of different phases, our study demonstrated a more favorable safety profile for PD-1/PD-L1 inhibitors than for second-line docetaxel chemotherapy. Treatment-related adverse events and severe adverse events (grade ≥3), including fatigue, decreased appetite, nausea, diarrhea, and anemia, were identified in all trials. The side effects of anti-PD-1/PD-L1 antibody therapy were fewer than those in the docetaxel groups. This finding might be related to the damage to epithelium-derived cells and renewing cell populations caused by docetaxel. Although anti-PD-1/PD-L1 antibodies caused few chemotherapy-related adverse events, immune-mediated adverse events, including inflammatory pneumonitis, interstitial nephritis, hyperthyroidism, and hypothyroidism, occurred more frequently at pulmonary, endocrine, mucocutaneous and renal sites and even at immunologically privileged sites such as the eye. Most of these immune-mediated adverse events were moderate and could be controlled by following guidelines. Occasionally, the side effects were life threatening, such as severe inflammatory pneumonitis, and required cessation of therapy and treatment with immunosuppressants such as corticosteroids [24]. It was rare that severe toxic events led to the discontinuation of treatment or the death of a patient. Therefore, the immune-mediated adverse events were relatively tolerable and acceptable. Our study demonstrates that anti-PD-1/PD-L1 antibody therapy is safer and more effective than docetaxel, which supports future clinical applications of anti-PD-1/PD-L1 antibody-based immunotherapy.
However, our study has some limitations. First, we extracted data from published articles without individual patient data, which might introduce bias into the data analysis. Second, the definition of PD-L1 expression on tumor and tumor-infiltrating cells remains inconsistent across clinical trials. For this reason, we formulated a uniform definition of PD-L1 expression for patients across all these clinical trials. Third, we only included RCTs using docetaxel because it is the most common second-line chemotherapy drug in advanced NSCLC. Therefore, the number of studies included in this meta-analysis is small. Because of the above limitations, further studies based on information from ongoing trials are needed to verify the efficacy and safety of anti-PD-1/PD-L1 therapy versus docetaxel in patients with advanced NSCLC. In conclusion, our study indicates that anti-PD-1/PD-L1 antibody therapy improves PFS, OS and ORR and shows less toxicity in patients with advanced or metastatic NSCLC. Despite some limitations, our study suggests that immune checkpoint inhibitors may provide a promising therapeutic strategy for patients with advanced NSCLC.
Literature search strategy

We searched relevant databases, including PubMed (Medline), EMBASE, the Cochrane Library, clinicaltrials.gov, and ASCO meeting abstracts (until April 20, 2017), to select corresponding clinical trials. The following terms were used to select trial publications or presentations: non-small cell lung cancer, NSCLC.

Inclusion and exclusion criteria

The eligible literature was confined to randomized clinical trials written in English. The studies included met the following criteria: (1) published studies comparing anti-PD-1/PD-L1 antibodies with docetaxel for patients with pretreated advanced non-small cell lung cancer; (2) the outcomes of the trials were available: overall survival (OS), progression-free survival (PFS), objective response rate (ORR), adverse events (AEs), and hazard ratio (HR). The exclusion criteria were (1) phase I trials and (2) studies with no available outcomes.

Data extraction and quality assessment

Two investigators conducted the literature search and reviewed the studies independently to avoid bias. Disagreements were resolved by discussion and adjudicated by a third investigator. For the included studies, we extracted the following data: authors, year of publication, abbreviations of the trials, registration number, trial phase, dose of drugs, number of enrolled patients, tumor histology, PD-L1 expression level, as well as the outcomes mentioned above. The quality of the included studies was assessed using the method reported by Jadad et al. [25]. We scored the papers by answering the following questions: (1) Was the study described as randomized? (0-2 points); (2) Was the study described as blinded? (0-2 points); and (3) Was there a description of withdrawals and dropouts? (0-1 point). If a trial scored fewer than 3 points, it was considered to be of low quality; trials that scored ≥3 points were considered to be of high quality.
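The Jadad scoring rule described above is simple enough to express in code. The following Python sketch is illustrative only (the function name and inputs are ours, not from the paper); it takes the points awarded for each of the three questions and returns the total score and the quality classification.

```python
def jadad_quality(randomization_pts, blinding_pts, withdrawal_pts):
    """Simplified Jadad quality assessment (illustrative sketch).

    randomization_pts, blinding_pts: 0-2 points each;
    withdrawal_pts: 0-1 point. Total < 3 -> low quality, >= 3 -> high.
    """
    assert 0 <= randomization_pts <= 2
    assert 0 <= blinding_pts <= 2
    assert 0 <= withdrawal_pts <= 1
    total = randomization_pts + blinding_pts + withdrawal_pts
    return total, ("high" if total >= 3 else "low")
```

For example, a trial described as properly randomized (2), described as blinded without method details (1), and reporting withdrawals (1) would score 4 and be classified as high quality.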
Statistical analysis

We used Review Manager 5.3.5 to perform the statistical analysis under the guidance of the Cochrane Library. The pooled HRs (hazard ratios) with 95% CIs for OS and PFS, and the ORs (odds ratios) with 95% CIs for ORR and AEs, were calculated using Review Manager 5.3.5. HRs > 1 favored the docetaxel arm while HRs < 1 favored the anti-PD-1/PD-L1 antibody arm. ORs > 1 for ORR and AEs indicated a higher response rate and higher toxicity, whereas ORs < 1 indicated a lower response rate and fewer adverse events. P < 0.05 was considered statistically significant. The I² statistic and the Q statistic were used to test the statistical heterogeneity of the included studies, with a predefined significance threshold of I² < 50% or p > 0.1. If the I² was ≤50%, the trials were considered homogeneous and a fixed-effect model was used; otherwise, a random-effect model was used.

Figure 2: The forest plot of the overall survival (OS) in advanced NSCLC patients who received anti-PD-1/PD-L1 antibody therapy compared to docetaxel. (A) total; (B) subgroup analysis of OS based on PD-L1 expression level.

Figure 3: The forest plot of the progression-free survival (PFS) in advanced NSCLC patients who received anti-PD-1/PD-L1 antibody therapy compared to docetaxel. (A) total; (B) subgroup analysis of PFS based on PD-L1 expression level.

Figure 4: The forest plot of the objective response rate (ORR) in advanced NSCLC patients who received anti-PD-1/PD-L1 antibody therapy compared to docetaxel.

Figure 5: The forest plot of the adverse events (AEs) in advanced NSCLC patients who received anti-PD-1/PD-L1 antibody therapy compared to docetaxel. (A) treatment-related AEs; (B) severe treatment-related AEs (Grade ≥ 3).
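The fixed-effect pooling and heterogeneity testing described in the statistical analysis can be illustrated with a short sketch. This is not the Review Manager implementation; it is a minimal inverse-variance pooling of log hazard ratios with Cochran's Q and I², assuming each study reports an HR with a 95% CI (function and variable names are ours).

```python
import math

Z95 = 1.959964  # two-sided 95% normal quantile

def pool_hr_fixed(studies):
    """Inverse-variance fixed-effect pooling of hazard ratios.

    studies: list of (hr, lower95, upper95) tuples (illustrative input).
    Returns (pooled HR, (lower95, upper95), I2 percentage).
    """
    logs, weights = [], []
    for hr, lo, hi in studies:
        # back out the standard error of log(HR) from the 95% CI
        se = (math.log(hi) - math.log(lo)) / (2 * Z95)
        logs.append(math.log(hr))
        weights.append(1.0 / se ** 2)
    wsum = sum(weights)
    pooled = sum(w * l for w, l in zip(weights, logs)) / wsum
    se_pooled = 1.0 / math.sqrt(wsum)
    # Cochran's Q and I-squared heterogeneity statistic
    q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, logs))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    ci = (math.exp(pooled - Z95 * se_pooled),
          math.exp(pooled + Z95 * se_pooled))
    return math.exp(pooled), ci, i2
```

Under this scheme, identical studies pool to the same HR with I² = 0, and I² above the predefined 50% threshold would signal switching to a random-effect model.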
Carbon Nanotube/Alumina/Polyethersulfone Hybrid Hollow Fiber Membranes with Enhanced Mechanical and Anti-Fouling Properties

Carbon nanotubes (CNTs) were incorporated into alumina/polyethersulfone hollow fibre membranes to enhance their mechanical properties and water-treatment efficiency. Results show that the incorporation of CNTs can greatly limit the formation of large surface pores, decrease the void size in support layers and improve the porosity and pore connectivity of alumina/polyethersulfone membranes. As a result of these changes in morphology and pore size, both improved flux and improved rejection were achieved in such CNTs/alumina/polyethersulfone membranes. Moreover, the CNTs/alumina/PES membranes show higher antifouling ability, and the flux recoveries after being fouled by bovine serum albumin (BSA) and humic acid were improved by 84.1% and 53.2%, respectively, compared to the samples without CNT incorporation. Besides the improvement in water-treatment performance, the incorporation of CNTs enhanced the tensile properties of the inorganic/polymer membranes. Therefore, such CNTs/alumina/PES hollow fibre membranes are very promising candidates as filter media in industry, considering their high efficiency and high mechanical properties.

Introduction

As fresh water shortage and water contamination become increasingly prevalent, a significant amount of research interest and focus has been directed towards applications of membrane separation technology to improve both water quality and treatment efficiency [1][2][3]. Compared to other technologies, membrane technologies have advantages such as ease of operation, minimal impact on permeate quality, little or no chemicals required, low energy consumption, and moderate capital costs [4]. According to a market report by Acmite Market Intelligence, global demand for membranes in water treatment and industrial uses was valued at approximately U.S.
$15.6 billion in 2012, and the market is expected to reach U.S. $21.22 billion by 2016 [5]. Therefore, the development of high-performance membranes for water and waste treatment has been placed under the spotlight of scientific research. Membranes can be classified based on the type of materials as polymeric, inorganic, and hybrid membranes. Polymeric membranes have advantages including low costs, ease of fabrication, high efficiency for the removal of particles, and high flexibility [6][7][8]. However, because most polymers have a hydrophobic nature and generally poor mechanical properties, polymeric membranes are liable to be fouled and physically damaged, especially hollow fibre membranes [9,10]. Inorganic membranes, on the other hand, have better chemical and thermal stability than polymeric membranes and higher antifouling property due to the hydrophilic nature of most inorganic materials. Therefore, the combination of inorganic and polymeric materials to make hybrid membranes has become a key innovation allowing researchers to tackle the weaknesses of polymeric membranes. Previous studies on inorganic/polymer hybrid membranes for water treatment have mainly focused on improvements in flux and antifouling properties in comparison with polymeric membranes [11][12][13][14][15][16][17]. The incorporation of hydrophilic inorganic additives usually facilitates non-solvent intrusion during the phase inversion process, leading to the formation of more surface pores with larger surface pore size and thus improved flux. Moreover, such hybrid membranes usually have a highly hydrophilic membrane surface due to the surface aggregation of inorganic additives; as a result, improved antifouling property is achieved compared to pristine membranes [18].
However, despite the improvements achieved via the incorporation of inorganic additives into polymeric membranes, rejection is usually sacrificed as a result of the formation of large surface pores [15,19]. This is even more severe when high loadings of micron-size inorganic additives are used. Moreover, due to the stiffness of inorganic materials, elongation and tensile toughness are usually sacrificed after the incorporation of inorganic particles [20][21][22]. Therefore, there is a need to tackle these problems and further improve the efficiency of current inorganic/polymer membranes to meet ever-growing residential, environmental, and industrial requirements. Since they were first reported in 1991, carbon nanotubes (CNTs) have attracted wide attention due to their high aspect ratio and unique mechanical, optical, and electrical properties [23,24]. So far, CNTs have been incorporated into polymer membranes or dense inorganic membranes mainly to enhance mechanical properties, with few papers focusing on hybrid membranes [25][26][27][28][29]. As discussed above, inorganic/polymer membranes combine the advantages of both polymeric and inorganic membranes and are becoming more and more attractive for industrial application; how to further improve their water-treatment efficiency is therefore significant and worth studying. In this work, we chose CNTs as an additive and incorporated them into alumina/polyethersulfone hollow fibre membranes, aiming to achieve hybrid membranes with high mechanical properties and high water-treatment efficiency. Figure 1 shows SEM images of the membrane surface and the cross-section of alumina/polyethersulfone hollow fibre membranes with and without CNTs. For membranes without CNT incorporation, many alumina particles intruded from the membrane surface, which makes the membrane surface rough (Figure 1a).
Additionally, many large voids surrounding the intruding particles were observed. In comparison, with CNT incorporation, the surface appears smoother and the number of large surface pores was greatly reduced (Figure 1b). This might be due to the increased viscosity of the casting solution resulting from the incorporation of CNTs (the viscosity of the casting solution with 0, 0.2, 0.5, and 1.0 wt % CNT loading is 2.32, 2.56, 2.72, and 2.58 Pa s, respectively), which would result in the formation of a denser separation layer and the slow migration and out-diffusion of alumina particles from the polymer [30,31]. It should be noted that when the CNT loading further increased to 1.0 wt %, the viscosity decreased, which might result from agglomeration of CNTs reducing their effectiveness. In terms of cross-section, all the samples show a similar asymmetric structure (Figure 1c,d). The inner diameter is about 1.5 mm and the outer diameter is about 2.1 mm. Despite the similarity, the porosity and pore size changed after the incorporation of CNTs, which will be discussed in the next section. Table 1 shows the maximum surface pore size of the membranes obtained via the bubble point test, whereas Figures 2 and 3 give the porosity and pore size distribution. From Table 1, it can be seen that the maximum pore size decreased from 178 nm to 145 nm as the CNT loading increased from 0.0 wt % to 1.0 wt %. This is consistent with the above hypothesis that CNT incorporation can reduce the surface pore size. Figure 2 shows the porosity of the membranes with different CNT loadings obtained via the mercury intrusion test. As a general trend, the porosity increased with increasing CNT loading, and the porosity of the 1.0 wt % CNTs/alumina/polymer membrane is approximately 10% higher than that of the sample without CNT loading.
Due to their high aspect ratio, CNTs might act as "bridges" and intertwine among the alumina and polymer (Figure 1e), thereby improving the pore interconnectivity. Moreover, the total surface area of the membranes (obtained by mercury intrusion) was improved from 13.0 m²/g to 17.1 m²/g with 1.0 wt % CNTs; thus, better pore connectivity in CNTs/alumina/PES membranes is expected compared to the membranes without CNT incorporation. From Figure 3, it can be seen that for pure alumina/PES hollow fibres and 0.2 wt % CNTs/alumina/PES hollow fibres, many pores of tens of microns in size were detected. When the CNT loading was higher than 0.2 wt %, the number of pores larger than 10 µm was greatly reduced, whereas the number of small pores about 3-4 µm in size increased. The decrease of large voids should play a positive role in improving the mechanical properties, because large voids usually act as crack initiators and lead to continuous cracks under stress.

Mechanical Properties

Due to their special configuration, hollow fibre membranes are liable to breakage and deformation; therefore, higher tensile strength and Young's modulus are required. Figure 4 shows the tensile strength, Young's modulus, elongation at break and toughness of the pristine membranes and the membranes with CNTs incorporated. It is obvious that the incorporation of CNTs improved the mechanical properties of the alumina/polymer hollow fibres. Specifically, the tensile strength was improved by 25.4% with 0.5 wt % CNTs, while the Young's modulus was enhanced by 30.7% with 1.0 wt % CNT loading compared to the pristine membranes (the tensile strength and Young's modulus of the pristine membranes are 1.73 MPa and 201.39 MPa, respectively). The well-dispersed CNTs intertwined among the polymer, acting as bridges, which improved the connection among polymers, particles, and the large voids.
Additionally, as discussed above, the incorporation of CNTs decreased the size of the voids in the support layers and the large surface pores. These voids usually serve as stress concentrations; therefore, tensile strength and Young's modulus were improved by the incorporation of CNTs. In addition to the improved tensile strength and Young's modulus, the incorporation of CNTs enhanced the toughness (Figure 4d). It is believed that crack deflection and the bridging and pulling-out effects of CNTs are the major contributors to this improvement. Due to the intertwined CNTs in the matrices, multiple cracks occurred and no single crack could propagate freely; therefore, more energy was required to break the samples. Moreover, the bridging and pulling-out of CNTs from the matrices contributed to the work of fracture, since work must be done to pull the fibre ends out of the matrix against the bonding forces, as illustrated in Figure 5. Therefore, toughness was greatly improved due to these effects resulting from the incorporation of CNTs. From Figure 4, 1.0 wt % CNT loading is considered better than the other loadings in terms of tensile strength, toughness, and Young's modulus.

Flux, Rejection and Antifouling Properties

Despite the decrease of surface pore size in CNTs/alumina/polymer membranes, the flux was slightly increased in comparison with the alumina/polymer membranes (Figure 6a). As discussed above, the intertwined CNTs inside the polymer improved the pore connectivity and the total surface area; therefore, less resistance to water flow was expected in CNTs/alumina/polymer membranes, which might have contributed to the improvement in flux. In terms of rejection, due to the decrease of the maximum and average surface pore sizes via the incorporation of CNTs, the CNTs/alumina/polymer membranes show higher rejection for BSA and humic acid, and the rejection ratios peaked at 0.5 wt % CNT loading (Figure 6b).
However, because the maximum surface pore size of our membranes is in the range of 140 nm to 180 nm (Table 1), all the membranes show good rejection for humic acid (>90%) but poor rejection for BSA (<40%). Compared to other membranes reported in the literature, although the maximum pore size of our CNTs/alumina/polymer membranes is in the microfiltration range, their rejection for humic acid is comparable to that of reported ultrafiltration membranes, while their water flux is 2-3 times as high as those ultrafiltration hollow fibre membranes (the flux of reported ultrafiltration membranes is normally less than 150 LMH with humic acid rejection higher than 95%) [32][33][34]. For microfiltration membranes reported in other studies for humic acid removal, because the pore size is usually larger than 0.2 µm, the rejection for humic acid is lower than that of the CNTs/alumina/polymer membranes in this study [35]. Therefore, these inorganic/polymer membranes are good filter media for the removal of humic acid from water. In addition to flux and rejection, the antifouling property is another important consideration during operation. The incorporation of hydrophilic alumina particles can improve the hydrophilicity of the membrane surface via surface aggregation of particles at the interface of polymer and nonsolvent; thus, higher antifouling property would be expected. However, due to the random detachment of alumina from the polymer, a rough surface was obtained in alumina/polymer membranes, as shown in Figure 1. This would greatly limit the antifouling property of the membranes. For example, the flux recoveries of the alumina/polymer membranes in this study after being fouled by BSA and humic acid are only 34.5% and 50%, respectively (Figure 7).
In comparison, for CNTs/alumina/polymer membranes, the flux recovery after BSA fouling was improved by 84.1% with 1.0 wt % CNT loading, whereas the flux recovery after humic acid fouling was enhanced by 53.2% with 0.5 wt % CNT loading, in comparison with the samples without CNT incorporation. As discussed above, the incorporation of flexible CNTs increased the viscosity of the casting solution and thus slowed down the migration of alumina particles during the phase inversion process; as a result, more alumina particles might be kept at the water/film interface without intruding from the polymer, and a smoother surface was formed (Figure 1b). Moreover, because of these effects, a more hydrophilic membrane surface was observed with the incorporation of CNTs (the contact angles of the 0.0 wt %, 0.2 wt %, 0.5 wt % and 1.0 wt % CNTs/alumina/polymer membranes are 45°, 36°, 30° and 35°, respectively). Therefore, all these effects resulting from the incorporation of CNTs contributed to the improved flux recovery and antifouling property. Despite the improvement in the antifouling property of CNTs/alumina/PES membranes for both BSA and humic acid compared to the membranes without CNTs, such membranes show a higher antifouling property for humic acid than for BSA, as shown in Figure 7. This might be because BSA has a smaller size, so more BSA would pass through the skin layer and cause more severe internal fouling. Therefore, lower flux recovery was obtained for BSA than for humic acid.

Sample Preparation

The inorganic/polymer and CNTs/inorganic/polymer hollow fibre membranes were prepared via the nonsolvent-induced phase inversion method at room temperature [36]. Specifically, CNTs were first dispersed into NMP via ultrasonication, followed by the addition of PES polymer and alumina particles. The ratio of PES:NMP:Al2O3 powders was 7:46:47 (wt %) (3.5 g PES, 23 g NMP and 23.5 g Al2O3 powders).
The CNT loading was varied as 0.2 wt % (47 mg), 0.5 wt % (118 mg) and 1.0 wt % (235 mg) based on the weight of alumina. The obtained CNTs/alumina/PES suspensions were then ball-milled at a speed of 20 rpm for at least 2 days to obtain a homogeneous mixture, followed by degassing overnight. The suspensions were then extruded through a tube-in-orifice spinneret (outer diameter 2.6 mm, inner diameter 1.6 mm) using pressurized nitrogen gas. Double de-ionized (DDI) water was used as the inner and outer coagulant, and the air gap was set at 4 cm. The obtained hollow fibre precursors were kept in the outer coagulant until use.

Morphology and Surface Hydrophilicity

The cross-sections of the membranes were prepared by fracturing the membranes in liquid nitrogen and were then examined using scanning electron microscopy (Nova Nano SEM, FEI Company, Hillsboro, OR, USA); the top surfaces of the membranes were characterized using scanning electron microscopy (Magellan SEM, FEI Company, Hillsboro, OR, USA). All the SEM work was performed at an accelerating voltage of 5 kV with the secondary electron (SE) detector, and all samples were coated with Pt. The total porosity, total surface area, and pore size distribution of the samples were determined via mercury intrusion (AutoPore III, Micromeritics, Norcross, GA, USA). The viscosity of the casting solution was measured via rheometer (HAAKE MARS Rheometer, Thermo Electron Corporation, Waltham, MA, USA). The hydrophilicity of the hollow fibre surface was measured via the captive bubble method, and the contact angle was recorded and measured via a video-based optical contact angle measuring instrument (OCA-15EC, Dataphysics, Filderstadt, Germany).

Mechanical Properties

The mechanical properties of the hollow fibres were measured using a mini-Instron (Micro Tester 5848, 100 N load cell, Instron Calibration Laboratory, Buckinghamshire, UK).
Before the tensile test, Torr Seal (low vapour pressure resin, Varian, Jefferson Hills, PA, USA) was used to seal both ends of the hollow fibres to keep the configuration of the fibres at both ends and to ensure that cracks did not occur at the fixing points. Tensile strength and elongation were measured with a 30 mm gauge length and a constant elongation velocity of 0.5 mm/min. The tensile Young's modulus was calculated based on the stress-strain curve in the range of 0.5%-1.0% tensile strain. The toughness was calculated based on the area under the stress-strain curve. For every sample, at least five specimens were tested.

Flux, Rejection and Antifouling

The filtration test was carried out in an HP4750 cell (Sterlitech, Kent, WA, USA) with compressed nitrogen gas to control the feed pressure [37]. To fix the fibre membrane, a non-porous stainless steel supporting disc with a circular hole in the centre was used. The disc has a diameter of 50 mm and a thickness of 2 mm, while the hole in its centre has a diameter of 2 mm. The hollow fibre membrane was placed perpendicularly to the supporting disc in the hole. An epoxy resin sealant (Varian Vacuum Technologies, Jefferson Hills, PA, USA) was used to seal the top end of the membrane and the space between the membrane and the supporting disc. The permeate water was accumulated in a beaker sitting on top of an electronic balance, and its mass change was automatically recorded. During the flux test, 150 kPa was used to precompact the membrane, and the flux was tested and recorded at a pressure of 100 kPa (denoted as Jw1). For the rejection and antifouling tests, 1.0 mg/mL BSA/PBS buffer (pH = 7.4) and 10 ppm humic acid were used as foulants, respectively. The rejection ratio (R) is calculated using the following equation: R = (1 − Cp/Cf) × 100%, where Cp and Cf are the foulant concentrations of the permeate and feed solutions, respectively.
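The rejection and flux-recovery calculations used in this section reduce to two one-line formulas, assuming the usual membrane-performance definitions R = (1 − Cp/Cf) × 100% and FRR = (Jw2/Jw1) × 100% (function names below are ours); a minimal sketch:

```python
def rejection_ratio(c_permeate, c_feed):
    """Rejection ratio R (%), assuming R = (1 - Cp/Cf) x 100%."""
    return (1.0 - c_permeate / c_feed) * 100.0

def flux_recovery_ratio(jw1, jw2):
    """Flux recovery ratio FRR (%), assuming FRR = (Jw2/Jw1) x 100%,
    where Jw1 is the pure-water flux before fouling and Jw2 after cleaning."""
    return (jw2 / jw1) * 100.0
```

For instance, a permeate concentration of 2 ppm against a 8 ppm feed gives 75% rejection, and a cleaned flux of half the initial flux gives a 50% flux recovery.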
The concentrations of BSA were determined based on the absorbance at 280 nm, and the concentrations of humic acid were determined at 308 nm, using a UV spectroscope (UV mini-1240 spectrophotometer, Shimadzu, Kyoto, Japan). After fouling, the membranes were cleaned with double deionized (DDI) water; the cell was then emptied and the pure water flux was measured again (denoted as Jw2). To evaluate the antifouling property of the membranes, the flux recovery ratio (FRR) is calculated using the following equation: FRR = (Jw2/Jw1) × 100%.

Conclusions

The incorporation of CNTs can greatly limit the formation of large surface pores and decrease the void size in support layers, yet improve the porosity and pore connectivity of alumina/polymer hybrid membranes, by increasing the viscosity of the casting solution and slowing the migration of alumina particles during the phase inversion process. As a result of these morphology changes, both improved flux and improved rejection were achieved in CNTs/alumina/polymer hollow fibre membranes compared to the samples without CNT incorporation. Moreover, due to the smoother yet more hydrophilic membrane surfaces, the CNTs/alumina/polymer membranes show higher antifouling property. In terms of the mechanical properties, all the tensile properties (strength, Young's modulus, elongation, and toughness) were enhanced after the incorporation of CNTs. Taking the mechanical and filtration performance into consideration, 0.5 wt % CNT loading is optimal in this study, and such CNTs/alumina/polymer hollow fibre membranes are very promising filter media for practical industrial applications such as the removal of humic acid.
Adapting protocol to the evolving context of practice: A grounded theory exploration of the strategies adopted by emergency nurses to address situations of uncertainty and change during the management of emerging infectious diseases

Background: During an epidemic event, emergency care settings are fraught with urgency, uncertainty and changes to the clinical scenario and practice. Such situations challenge the capability of emergency nurses to perform their duties in a well-planned and systematic manner. To date, little is known about the coping strategies adopted by emergency nurses during an epidemic event. The present study explored the behaviours and strategies developed by emergency nurses to handle uncertainty and practice changes during an epidemic event.

Methodology and methods: A qualitative design based on the Straussian grounded theory approach was established. A total of 26 emergency nurses from Hong Kong were recruited by purposive and theoretical sampling strategies. Semi-structured, face-to-face, individual interviews were conducted for data collection. The data were transcribed verbatim and analysed using grounded theory coding procedures. The Consolidated Criteria for Reporting Qualitative Research guidelines were followed.

Results: Adapting protocol to the evolving context of practice was revealed as the core category. Four interplaying sub-categories were identified: (1) completing a comprehensive assessment, (2) continuing education for emerging infectious disease management, (3) incorporating guideline updates and (4) navigating new duties and competencies. The nurses demonstrated the prudence to orientate themselves to an ambiguous work situation and displayed the ability to adapt and embrace changes in their practice and duties.

Conclusions: These findings explain how emergency nurses must adapt and adjust their practice and behaviours to the evolving nature of an epidemic event.
These findings also offer insights on the need for education and training schemes that allow emergency nurses to acquire and develop the necessary decision-making and problem-solving skills to handle a public health emergency.

Over the recent decades, the appearance of emerging infectious diseases (EIDs) with pandemic potential has been accelerating because of intensifying global commerce activities as well as the widespread disruption of ecological systems [1]. Despite the enormous efforts channelled into advancing mechanisms and technologies in infection prevention and control, EIDs continue to represent a substantial threat to public health and pose a serious challenge to both developed and developing countries [2]. To address the long-lasting pandemic threats towards public health worldwide and to ensure global health security, it is crucial that international communities and organisations collaborate and coordinate the global endeavours in enhancing public health surveillance and response capacities [3]. Local healthcare systems and healthcare institutions must also align with the global endeavours to combat the spread of EIDs and the occurrence of epidemics. To improve the capacity of healthcare facilities to overcome the possible challenges faced during an EID outbreak, the World Health Organization (WHO) formulated a preparedness checklist for potential outbreaks [4]. This checklist details the structure of the incident command system, which assists healthcare administrators in achieving efficient functioning in a hospital-based response to an outbreak. To further strengthen the preparedness and response planning of local healthcare systems, this checklist offers advice on how to establish a network of healthcare facilities consisting of public hospitals, private hospitals, professional medical organisations and other non-governmental organisations [5]. This network would coordinate the multi-sectoral collaboration of healthcare system units and departments in preparing for and responding to an EID event in terms of disease surveillance, infection control, quarantine, treatment and prophylaxis [6].
The capacity of healthcare institutions and facilities to respond to potential outbreaks involves not only implementing contingency plans to meet the precautionary needs but also preparing and equipping frontline healthcare providers for impending EID events [7]. Within the healthcare system, accident and emergency departments (AEDs) are directly accessible to the public and serve as the main gateway for the delivery of healthcare services; therefore, emergency nurses are often at the forefront of an outbreak response [8]. Indeed, emergency nurses have various duties during the management of an epidemic event, such as early recognition of suspected or infected patients, implementation of proper infection control measures and coordination of patient logistics [9]. Emergency nurses can be confronted with various challenges during EID management. One major challenge faced by emergency nurses amid an EID outbreak is the elevated but unpredictable risk of infection. For example, the infectious status of most of the attendees in the AED is unknown; this uncertainty increases the risk of exposure to frontline emergency nurses compared with that of other healthcare professionals during an epidemic event [10]. In addition to the occupational risks associated with EIDs, a well-reported challenge encountered by emergency nurses is adapting to the changes in the existing guidelines and recommendations for addressing a particular EID [11][12][13]. Studies have highlighted that emergency nurses are required to frequently and rapidly adjust their practice in accordance with amendments to infection control guidelines, even if such amendments are subtle and/or implicit [11,12]. 
The literature suggests that the antecedents of these challenges could be consequential to the uncertainty and changes experienced by emergency nurses in the workplace during an outbreak: changes and uncertainty within the work environment thus seem to be major and primary barriers to emergency nursing practice [13]. Although these clinical uncertainties and changes to working practice encountered by emergency nurses have been reported in the literature [14,15], little is known about the strategies that emergency nurses adopt in response to them amid an epidemic. Addressing such gaps might offer insights into the development of plans and interventions for promoting the preparedness and response of emergency nurses in prevailing EID events. The present study aimed to explore the behaviours and strategies adopted by emergency nurses to overcome the challenges of uncertainty and practice changes during an EID event.
Design This article presents data from a larger doctoral project that used grounded theory to study the phenomenon and process of how emergency nurses engage in EID management. The present article provides data regarding the behaviours and strategies adopted by emergency nurses in addressing challenges posed by uncertainty and changes at the workplace during an EID event. The Consolidated Criteria for Reporting Qualitative Research guidelines were followed for reporting the findings of this study [16]. A qualitative study was designed to explore the perceptions and experiences of emergency nurses during epidemic events. The grounded theory approach [17] was selected to facilitate the interactive process of data collection and analysis. This approach belongs to the qualitative research paradigm that emphasises the exploration of participants' perceptions and experiences in understanding a particular social situation or event about which very little is known [18]. Grounded theory also attempts to understand the involvement of individuals through the exploration and interpretation of their perspectives and meaning within a social phenomenon [19]. This approach prioritises the interpretation of interactions among individuals within the embedded realities of a phenomenon, which offers insight into the elements that shape their beliefs and actions [18]. By adopting the grounded theory approach to guide the data collection and analysis scheme, the interactions between emergency nurses and their work environment during EID management could be evaluated. The findings of this study offer a substantive explanation of how emergency nurses address uncertainty and changes during EID management, and improve the understanding of the actions and strategies emergency nurses adopt in overcoming the associated challenges at work. This study adopted the Straussian framework of grounded theory data collection and analysis that was developed by Anselm Strauss and Juliet Corbin [17]. 
Rather than maintaining objectivity and neutrality, this framework recognises the influence of researchers' interpretation of the data, valuing the involvement of the researchers' reflection as an inevitable component in understanding the phenomenon being studied [19]. In practice, the Straussian framework offers explicit and well-defined analytical steps for data analysis with the provision of a coding paradigm. This paradigm facilitates the discovery of associations among conditions, interactions and consequences [17]. The detailed framework of data interpretation supports the procedural operations of the data analysis process, which might help to establish the plausibility and completeness of the findings while preserving the intertwined and dynamic nature of the data [20]. This study thus used the Straussian framework to help gain an in-depth comprehension of the strategies adopted by emergency nurses to address uncertainty and changes in EID management. Participants In line with the grounded theory approach, the participants in this study were recruited using a combination of purposive and theoretical sampling strategies. A purposive sampling strategy was used initially to recruit the first 10 participants. This strategy allowed participants to be selected from the relevant population according to an initial set of criteria [21]. For inclusion, the participants had to be a full-time registered nurse in an emergency department in Hong Kong, Cantonese- and English-speaking, and willing to participate in the study and share their experiences. After initial recruitment, additional participants were solicited using a theoretical sampling strategy. This sampling strategy uses a systematic and cumulative participant recruitment method and is considered to be the major impetus to the progression of data collection and analysis in grounded theory studies [17].
Theoretical sampling is an iterative process wherein the participant recruitment and data collection processes are driven and performed according to the concepts alluded to by the preceding data [21]. It enables the ongoing development and elaboration of preliminary concepts and categories that emerge in the data analysis process and thus grants greater representativeness to the findings [22,23]. In general, participant recruitment by theoretical sampling is continued until theoretical saturation is achieved, i.e. when no additional relevant information emerges from the data analysis and the concepts and categories are amply unfolded to display patterns of properties and dimensions [24]. In the present study, theoretical saturation was considered achieved after data had been collected from and analysed for 26 participants, at which point no additional concepts or categories were yielded. The participant demographic information is summarised in Table 1. Data collection Data were collected from semi-structured, face-to-face, individual interviews held between the participants and the first study author (SKKL). Once eligible individuals agreed to participate, they were given an information sheet that explained the rationale and objectives of the study. The details of their involvement in the study were also indicated and described in the information sheet. After the participants confirmed that they understood the nature of the study and agreed to participate, an interview was scheduled with each participant at their preferred location and time. Prior to the interview, demographic information of the participant, such as age range, ranking and years of work experience, was collected using a demographic data sheet. The participants gave permission for the interviews to be audiotaped.
An interview guide, which included broad and general open-ended questions, was adopted in the larger study to stimulate the thoughts and opinions of the participants regarding their experiences during epidemics. The questions in the interview guide of the larger study that pertained to the experiences of emergency nurses in addressing uncertainty and changes at work are presented in Figure 1. For participants who were recruited by theoretical sampling, areas of particular interest to the study were highlighted in the interview guide and modified in each interview to address the progress of data analysis and category development. A total of 26 interviews were conducted and the length of the interviews ranged from 55 minutes to 3 hours. Data analysis The theoretical sampling, data collection and data analysis processes were performed simultaneously and were thus characterised as a concurrent procedure. The insights gleaned from the interviews during the data analysis helped navigate and inform the directions for subsequent data collection and, consequently, theoretical sampling [19]. Prior to data analysis, each audiotaped interview was transcribed verbatim by the first author (SKKL). The transcripts were then checked against the interview tapes to ensure the accuracy of the transcription. Once the transcription was completed, data analysis commenced as per the three-phase coding framework suggested by Corbin and Strauss [17], namely open coding, axial coding and selective coding. The first step of the data analysis process was open coding. The primary goal in this phase was to discover preliminary codes and categories from the data. To start, the textual content of the interviews was read and reread several times by the first author (SKKL) to capture a general understanding of the participants' points of view. Each transcript was then scrutinised by examining the content line-by-line and paragraph-by-paragraph. 
The concepts and meanings expressed in each passage considered to be of analytical relevance were coded. The codes were further interpreted using a constant comparative method wherein the existing codes were compared with the codes that emerged from the preceding analyses [20]. Codes that showed similarities in features were collated to form categories. In the axial coding phase, the established categories were refined by examining their connections with one another. In addition to the constant comparative method, the coding paradigm reported by Corbin and Strauss [17] was used as an analytic tool to explore the relationships among categories. This coding paradigm highlights four main components for the establishment of connections among categories, namely phenomena, conditions, interactional strategies and consequences. Related categories were connected and further developed into more sophisticated categories that comprised clusters of categories and sub-categories. In the selective coding phase, the core category that could underpin the essence of the phenomenon of inquiry was identified. The core category is characterised by several properties, such as frequent appearance in the data, a considerable degree of abstraction, and extensive connections with all other categories and codes [17]. In this study, the determination of the core category was discussed among the authors and a consensus was reached to use Adapting protocol to the evolving context of practice as the core category that could represent the whole phenomenon under study. Results Adapting protocol to the evolving context of practice was identified as the core category that delineates how emergency nurses overcome the uncertainty and change in various areas of their practice during epidemics.
Four interplaying sub-categories were identified: (1) completing a comprehensive assessment, (2) continuing education for EID management, (3) incorporating guideline updates and (4) navigating new duties and competencies. These categories represent the strategies adopted by the emergency nurses to address uncertainty and changes during EID management. Adapting protocol to the evolving context of practice During an EID outbreak, emergency nurses work in an environment characterised by changes and uncertainties in different aspects of emergency care provision. To address the diverse needs arising from the evolving context of practice, emergency nurses are required to showcase their capacity to adapt and embrace changes, depending on the situation. The following comment illustrates how an experienced emergency nurse valued the importance of being adaptable when responding to untoward incidents during EID management: "Various unexpected issues that demand our action come all of a sudden, and we are unable to stop or control them. At this moment, it is time to examine our ability to stand the test of these challenges. It tests our leadership, our problem-solving skills, and our ability to improvise. In addition, it challenges our critical thinking skills and decision-making abilities. These are all crucial as we work in the accident and emergency department, especially in the midst of unpredictable and unforeseen events." (P16) This view was echoed by another participant, who indicated that technical solutions were inadequate and unavailable for emergency nurses to handle unexpected issues while performing EID duties.
The participant remarked that it was crucial for emergency nurses to be capable of swiftly adjusting to peculiar situations by identifying alternatives on an impromptu rather than a prepared basis. This incident showcases a situation in which emergency nurses encountered an unpredictable and unexpected event that, as described by some participants, 'stirred up troubles'. Instead of following the established protocol, the emergency nurses were required to develop their adaptive capacity to acclimate to the evolving context of practice amid EID management. (1) Completing a comprehensive assessment The findings revealed that a major challenge encountered by emergency nurses in managing EIDs was the uncertainty surrounding the patient and disease context. Such an uncertain situation could create ambiguity among emergency nurses in achieving the goals and objectives of their practice. In addition, participants stated that they doubted whether they had been well-prepared for handling an epidemic and questioned the relevance of their prior knowledge and skills in managing EIDs. Some participants expressed the belief that the most pertinent way to resolve uncertainty was to obtain relevant information on how to address any erratic situation. Indeed, gathering up-to-date information was considered crucial by emergency nurses to acquire a general picture of the nature and progress of an EID scenario. This strategy enabled them to comprehensively assess their workplace to orientate themselves to the circumstances. One participant succinctly highlighted the importance of obtaining relevant information when trying to gain familiarity with an EID scenario: "It is of the utmost importance that you know what is happening. As long as you understand the situation, you realise the problem. You have to acquire the latest information and maintain an up-to-date understanding of the situation."
(P16) One of the major concerns raised by the participants surrounded the quality of the information, as some of them pointed out that the information they received was not standardised. Two participants stated that the information provided by their colleagues, which included disease information, infection control guidelines and patient logistics protocols, was sometimes inconsistent, leading to confusion. Although they worked in different hospitals, these two participants held similar opinions about the inconsistency of the information they received. One of the participants described the problem as follows: "The information could sometimes be regarded as 'hearsay'. Perhaps one staff member had said something about the disease, then others started to discuss and circulate the information. However, no one had confirmed the credibility or sources of that piece of information. The information might be distorted, exaggerated or even misleading. However, we do not have an official and standardised source for obtaining information, and therefore, hearsay persists among staff." (P20) Many participants highlighted that instead of depending entirely on the provided information, which could be inconsistent, personal alertness and vigilance were also required in addressing the unclear situations they were facing. In their everyday work, emergency nurses serve as gatekeepers who are closely connected to the community. Their frontline position helps emergency nurses to collect clues on disease trends and progression and perform a comprehensive and first-hand assessment of the general disease situation. The comment below illustrates how one participant recognised the outbreak of H1N1 influenza by engaging in routine practice: "You know about the disease situation and progress at work, especially if you are the triage nurse. There were a large number of patients attending AED and eight out of 10 had similar flu-like symptoms.
You would then realise and be able to tell, there was something wrong, it was the influenza that was causing this -you experienced it and sensed it. This sense did not merely improve your alertness, but also provided you with the whole picture of the outbreak, including the severity, the magnitude and the extent." (P17) (2) Continuing education for EID management To respond to an epidemic event, the participants highlighted the importance of acquiring relevant knowledge and skills to bolster their preparedness both theoretically and practically. Indeed, EID management requires emergency nurses to demonstrate proficiency in various skills and techniques. Several participants reported that specific skill sets, such as clinical assessment skills and precautionary measures, enabled them to accomplish various unforeseeable tasks in an effective and appropriate manner during EID management. One advanced practice nurse highlighted the necessity for emergency nurses to develop the skill of rapid and accurate clinical assessment for patient surveillance: "Sometimes there are junior colleagues making mistakes in simple tasks while handling EID cases. The main reason is that they are not familiar with this type of knowledge. I often ask them to do some infectious diseases revision. This is basic for emergency nurses. For example, being able to identify the signs and symptoms of an EID is the most important task in EID management, but if the nurse did not have the related knowledge, how can one differentiate infected patients from the others?" (P17) Because of the importance of obtaining pertinent knowledge and skills, various resources, such as workshops and drills, are available to emergency nurses to facilitate learning on the required techniques to optimally engage in EID management. The participants described that such training courses offered opportunities to AED staff to familiarise themselves with the process of managing epidemic events that were likely to occur. 
One participant shared their experience of an Ebola drill. (3) Incorporating guideline updates Guideline changes for an EID situation also impact emergency service delivery. Some participants did not feel confident about their readiness to adhere to guideline changes because of a lack of practice, even though instructions were provided. They commented that there were often distinct differences between new recommendations and the practices they had been accustomed to following. (4) Navigating new duties and competencies During an EID event, the scope of emergency healthcare services is broadened such that emphasis is placed on infection prevention and control, in addition to the usual life-saving practice of emergency care provision. Although all participants acknowledged the participation of emergency nurses in an epidemic event response, some encountered difficulties incorporating their extended duties into practice. Several participants commented that performing the extended range of responsibilities was challenging because of a lack of clarity surrounding their scope of practice during an epidemic event. For instance, one participant, who was relatively new to the emergency care setting, expressed concerns about performing the responsibilities of an emergency nurse during an H7N9 avian influenza epidemic. Discussion The emergency nurses in this study demonstrated the prudence to orientate themselves to an ambiguous work situation and displayed the flexibility to embrace changes in their routine and practice. According to the findings, the uncertainties surrounding the workplace environment amid an epidemic created obstacles for emergency nurses, preventing them from performing and adopting the skills and tactics required to handle EID management tasks in a fully prepared manner. To address this problem, emergency nurses attempted to obtain information regarding the encountered situation, including information about the disease and the specific guidelines issued in response.
Acquiring precise information during an EID event is essential for nurses as it provides them with the relevant facts, such as disease identification, management and prevention. In addition, effective information provision could strengthen nurses' capacity to offer health promotion and health education in the community, which could help calm public fears about EIDs [25]. Despite the importance of the swift provision of epidemic information, the emergency nurses included in this study noted that the disease information they receive might be temporary or incorrect: such erroneous information can lead to confusion and conflict in practice. In fact, similar problems regarding the issuing of unreliable information during epidemics have been reported in previous studies [26,27]. These findings indicate the need for healthcare facility administration and management to review and revise the effectiveness of current information dissemination strategies and systems. The findings of this study suggest that healthcare administrators should not merely disseminate information among nurses across services but should also appropriately streamline information to facilitate its integration into routine practice. In addition to obtaining information from official sources, the emergency nurses in this study reported that they often gain an overall impression of a disease situation through observation and clinical encounters, i.e. by evaluating a disease situation in terms of the number of patients presenting with similar symptoms and the severity of the disease. Although this strategy is seemingly useful to nurses for obtaining first-hand information on an EID event, it may result in inaccurate estimations of the disease situation and adversely affect their awareness of the situation [28]. For instance, Sridhar et al.
[29] reported that healthcare workers might underestimate the likelihood of Ebola infection because, in their practice experience, its incidence and seriousness appear comparatively lower than those quoted in the existing data. Consequently, this underestimation might undermine their awareness of the disease. The findings of the present study underscore the importance of maintaining effective communication between healthcare facilities and frontline healthcare workers, particularly emergency nurses, in improving the estimation of the magnitude of an EID situation. Apart from the uncertainties surrounding the workplace amid epidemics, the participants of this study considered changes as another major barrier to fulfilling their duties. Such changes include changes in the disease situation, in the information provided and in emergency nursing practice. As frequently reported in the literature, changes in the workplace typically create tension for an organisation's stakeholders, including those who decide to initiate the changes and those who are required to implement the changes [30]. Changes in the general clinical context of EID management can have a considerable impact on the usual practice, expectations and work practices of the nurses. For instance, a change in the disease situation might induce structural changes in the workplace, which in turn might require nurses to change a well-adapted and accepted behaviour or working style to a new and unfamiliar practice. These changes might induce insecurity and create further uncertainty among nurses [31]. Other workplace changes might pose challenges to nurses' practice. For example, an increase in workload might increase the likelihood of mistakes being made during practice [32]. This issue may partially explain the reluctance of some of the participants included in the present study to accept changes made in different aspects of emergency care provision during EID management.
The findings highlight that some emergency nurses exhibit a willingness to adapt to changes despite the possible difficulties as they realise the importance of those changes in addressing the new EID management challenges. However, some participants stated that the time and support provided to frontline emergency nurses to adjust their routine and incorporate the changes into their practice was insufficient. This finding is in line with those of previous studies, showing that changes made within healthcare facilities might not always be in line with the ability of healthcare workers to adjust [33,34]. Discrepancies may exist between the expectations of hospital administration on the renewal of existing practices and the actual preparedness and capacity of the staff to adopt the new practices [11]. We propose that healthcare facility administrators implement a prudent approach that is sensitive to the overall preparedness of nurses in learning new practices or standards, to ensure that nurses are adequately trained. Although key knowledge and skills in public health and infection control are integrated as a compulsory part of most current nursing curricula [35], the findings of the present study indicate that emergency nurses are frequently assigned new duties and are required to perform unfamiliar tasks during an epidemic event. These altered duties and new tasks are often considered by emergency nurses to be far beyond their originally perceived scope of practice. For instance, nurses might be required to shoulder the responsibility of public health surveillance during an EID event, including case ascertainment and contact tracing, which could be perceived by emergency nurses as an extra duty outside of their usual domain of practice.
To strengthen the capability of emergency nurses in subsequent epidemics, education and training should be provided to equip them with the relevant skills, knowledge and attitudes required to effectively perform their duties in the unprecedented circumstances of an EID outbreak. The training and education provided in hospitals to prepare emergency nurses for EID management is often focused on instilling the technical knowledge and skills required to implement infection control measures, such as hand hygiene practices or personal protective equipment (PPE) use [36]. What might get overlooked is the provision of training and practice in the acquisition and augmentation of decision-making and problem-solving abilities. Thus, in addition to technical skills, we propose that education and training should place equal emphasis on developing nurses' cognitive skills, such as critical thinking. Such training will help equip nurses with the core skills required to process and apply knowledge in chaotic and complicated conditions. Limitations The major limitation of the present study is that a conceptual theory could not be developed from the findings. In general, the aim of grounded theory research is considered to be the discovery of a theory from data [37]. In the present study, the findings established a conceptual ordering of well-developed and plausible categories that delineates the properties of emergency nurses' experiences, perceptions and actions [17]. Although such descriptions are not adequate to establish an integrated theoretical scheme, the findings of the present study offer a substantive explanation of how emergency nurses address uncertainty and changes at work during epidemics by providing a comprehensive conceptual description of the situations encountered by emergency nurses.
The core categories and the associated sub-categories aptly illustrate the descriptive details, which are empirically grounded in research, pertaining to the behaviours and strategies adopted by emergency nurses in overcoming uncertainty and changes in their workplace. These findings are expected to provide practical and relevant explanations regarding the phenomenon under study, and to offer useful insights regarding the future preparedness and competence of emergency nurses in public health responses.
Conclusion
This study found that emergency nurses are required to adapt and adjust to the evolving context of practice during an epidemic event. In addition to factual information, emergency nurses are often required to gather first-hand information through everyday practice to assist them in comprehensively assessing the situations they encounter. While addressing their duties and responsibilities, it is important for emergency nurses to demonstrate critical thinking, flexibility and adaptability. To reinforce the preparedness of emergency nurses, learning by practical experience, which preserves the essence of clinical wisdom, should be taken into account as it is an efficient approach to train emergency nurses in EID management.
Ethics approval and consent to participate
Ethical clearance of the larger study was granted by the Human Ethics Committee of the Hong Kong Polytechnic University (no reference number, approved November 2013). Complete information about the nature of the research and participation was provided to participants. All emergency nurses who participated in the study provided written informed consent regarding their involvement in the study and gave permission for their interviews to be audiotaped.
Throughout the study, participant anonymity and confidentiality were guaranteed by various strategies, such as removing any personal information and identifiers from the transcripts, masking identities by replacing names with unique codes, and protecting the digital recordings and documents in encrypted files to prevent unauthorized access.
35. Clark M, Raffray M, Hendricks K, Gagnon AJ. Global and public health core competencies for nursing education: a systematic review of essential competencies.
Figure 1: The interview guide.
Clear and Consistent Imaging of Hippocampal Internal Architecture With High Resolution Multiple Image Co-registration and Averaging (HR-MICRA)
Magnetic resonance imaging of hippocampal internal architecture (HIA) at 3T is challenging. HIA is defined by layers of gray and white matter that are less than 1 mm thick in the coronal plane. To visualize HIA, conventional MRI approaches have relied on sequences with high in-plane resolution (≤0.5 mm) but comparatively thick slices (2–5 mm). However, thicker slices are prone to volume averaging effects that result in loss of HIA clarity and blurring of the borders of the hippocampal subfields in up to 61% of slices, as has previously been reported. In this work we describe an approach to hippocampal imaging that provides consistently high HIA clarity using a commonly available sequence and post-processing techniques; the approach is flexible and may be applicable to any MRI platform. We refer to this approach as High Resolution Multiple Image Co-registration and Averaging (HR-MICRA). This approach uses a variable flip angle turbo spin echo sequence to repeatedly acquire a whole brain T2w image volume with high resolution in three dimensions in a relatively short amount of time, and then co-register the volumes to correct for movement and average the repeated scans to improve SNR. We compared the averages of 4, 9, and 16 individual scans in 20 healthy controls using a published HIA clarity rating scale. In the body of the hippocampus, the proportion of slices with good or excellent HIA clarity was 90%, 83%, and 67% for the 16x, 9x, and 4x HR-MICRA images, respectively. Using the 4x HR-MICRA images as a baseline, the 9x HR-MICRA images were 2.6 times and 16x HR-MICRA images were 3.2 times more likely to have high HIA ratings (p < 0.001) across all hippocampal segments (head, body, and tail). The thin slices of the HR-MICRA images allow reformatting in any plane with clear visualization of hippocampal dentation in the sagittal plane.
Clear and consistent visualization of HIA will allow application of this technique to future hippocampal structure research, as well as more precise manual or automated segmentation.
INTRODUCTION
The hippocampus is one of the most studied subcortical structures in the brain. It has been linked to the pathobiology of epilepsy (Sloviter, 1987; De Lanerolle et al., 1989; Wieser, 2004), Alzheimer's disease (Jack et al., 1992), schizophrenia (Lahti et al., 2006; Kraguljac et al., 2013, 2016), PTSD (Smith, 2005; Shin et al., 2006; Wang et al., 2010), and TBI (Ariza et al., 2006). For many years, hippocampal imaging research focused largely on volumetric measurements and surface morphometry, but in recent years there has been increasing interest in studying specific hippocampal subfields (Mueller et al., 2007; Van Leemput et al., 2009; Yushkevich et al., 2010, 2015a; Pluta et al., 2012; Wisse et al., 2012, 2016). A number of protocols for manual subfield segmentation have been proposed and widely discussed (Zeineh et al., 2001; Mueller et al., 2007; La Joie et al., 2010; Malykhin et al., 2010; Wisse et al., 2012; Winterburn et al., 2013; Yushkevich et al., 2015a; Steve et al., 2017), and several automated subfield segmentation software tools are available (Van Leemput et al., 2009; Pipitone et al., 2014; Iglesias et al., 2015; Yushkevich et al., 2015b). Precise subfield segmentation requires direct visualization of the hippocampal internal architecture, defined by apposing layers of gray and white matter that create the characteristic spiral appearance of Ammon's horn in coronal section. Specifically, the strata radiatum, lacunosum, and moleculare (SRLM) together have a hypointense (dark) appearance on T2w scans, while the pyramidal cell layer (CA1-4) is more hyperintense and is isointense with cortical gray matter (Figure 1).
However, the dark band of the SRLM is often not clearly or consistently seen in images acquired with conventional MRI sequences. We have previously demonstrated that hippocampal internal architecture (HIA) is seen clearly in only 39% of slices through the body of the hippocampus using common high-resolution T2-weighted coronal images (Ver Hoef et al., 2013b), and in fact even adjacent slices from a good-quality scan may show internal architecture with drastically different clarity. As a consequence, manual subfield segmentation in some slices must rely on inferring the boundaries of Ammon's horn based on "fuzzy" image features or expected boundary location as opposed to direct, clear visualization of the SRLM in each slice. Likewise, many of the automated methods must rely heavily on atlas/template-based approaches and somewhat less on the boundary information contained in the image itself (Wisse et al., 2014). Consequently, the resulting automated segmentation may reflect the template as much as, or more than, the target image itself. Standard high-resolution sequences with adequate in-plane resolution (<0.5 mm) typically require a slice thickness of 2-3 mm or greater to have adequate signal-to-noise ratio (SNR). These relatively thick slices result in volume averaging of any features that vary through the thickness of a single slice. In some cases, the gray and white matter layers may be very consistent across the slice thickness, resulting in clear visualization of internal architecture, while in other slices the layers may vary or undulate through the thickness of the slice, resulting in blurring of these features (i.e., volume averaging effects). Sequences with relatively thinner slices, like common T1-weighted volumetric sequences with ∼1 mm isotropic resolution, lack the resolution in the coronal plane to fully depict the contours of the SRLM and Ammon's horn, which may be less than 1 mm thick.
If we simply modify a typical 2D T2w TSE sequence to have thinner slices, the time of acquisition (TA) doubles for each reduction of slice thickness by half. So a 6-min scan with 3 mm thick slices would become a 12-min scan with 1.5 mm thick slices, which dramatically increases sensitivity to movement. Indeed, experience teaches us that even highly motivated control subjects have a hard time remaining still enough for a 10-min scan, so simple extension of a 2D TSE sequence cannot work in this case. Further, the SNR, which is directly related to the quality of the image, drops by half when slice thickness is cut in half, resulting in a noisy image that precludes confident delineation of subtle image features such as the margins of the SRLM and hippocampal subfields. Poor SNR can be improved by acquiring multiple samples and averaging them (Bronen et al., 2002), but SNR only improves with the square root of the increase in the number of samples averaged, so to double the SNR requires increasing the number of samples by a factor of 4, and tripling the SNR requires increasing the samples by a factor of 9. The dramatically increased time this takes would necessitate compensating for the inevitable movement associated with long scanning sessions. From these considerations, we can conclude that any sequence that will show hippocampal internal architecture clearly must have the following characteristics: sufficient resolution in the coronal plane to show the SRLM and boundaries of Ammon's horn clearly; sufficiently thin slices (i.e., resolution in the A-P direction) to minimize volume averaging; and the TA of a single acquisition must be reasonable so as to keep movement artifacts to a minimum in most subjects. In this work, as proof-of-principle, we describe a flexible approach to clearly and consistently image HIA.
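The square-root relationship between averaging and SNR described above can be illustrated with a short simulation. This is a sketch under a simplifying assumption (purely additive, independent Gaussian noise on a constant signal), not part of the paper's methods; the signal and noise values are arbitrary:

```python
import random
import statistics

random.seed(0)
SIGNAL = 100.0   # arbitrary "tissue" intensity
SIGMA = 20.0     # noise standard deviation of a single acquisition
N_TRIALS = 5000  # repetitions used to estimate the SNR empirically

def snr_of_average(n_scans):
    """Empirical SNR of an n-scan average under additive Gaussian noise."""
    averaged = [
        statistics.fmean(random.gauss(SIGNAL, SIGMA) for _ in range(n_scans))
        for _ in range(N_TRIALS)
    ]
    return statistics.fmean(averaged) / statistics.stdev(averaged)

snr1 = snr_of_average(1)
for n in (4, 9, 16):
    # Gain over a single scan approaches sqrt(n): ~2x, ~3x, ~4x
    print(f"{n:2d} averages: SNR gain ~ {snr_of_average(n) / snr1:.2f}")
```

Running this shows the gains clustering near 2, 3, and 4 for 4, 9, and 16 averages, matching the sqrt(N) rule that motivates the 4x/9x/16x protocols below.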
Unlike some advanced approaches that require ultra-high field strength (e.g., 7T), special research pulse sequences, or complicated postprocessing pipelines, we use a first-generation clinical 3T platform with a basic 8-channel parallel coil and a commonly available sequence. The approach uses a variable flip-angle turbo spin echo sequence (BrainView, Philips Healthcare) to acquire a 3D volumetric T2-weighted image with parameters adjusted for the shortest possible TA while preserving gray-white contrast, but with relative disregard for SNR. By acquiring an image volume in as short a TA as possible, we minimize movement artifact in the individual scan; then we repeat the scan many times and co-register each scan to the first scan in post-processing using freely available software. The co-registered scans are then averaged to improve SNR. We refer to this approach as HR-MICRA (High Resolution Multiple Image Co-registration and Averaging). While the concept of co-registering and averaging multiple scans to improve SNR is nothing new (Bronen et al., 2002), the innovation of this work lies in the combination of the specific application to HIA visualization, the counter-intuitive use of low-SNR base images, and the lack of need for sophisticated hardware or proprietary software. We report the performance of the HR-MICRA approach in regard to depicting HIA clarity at three different levels of averaging (4x, 9x, and 16x), using a single acquisition of a common high-resolution 2D T2-weighted scan as a baseline reference. The purpose of this work is to demonstrate that it is possible to show HIA with good to excellent clarity in a high proportion of slices using relatively basic equipment and methods with this approach.
Subject Demographics
After obtaining approval from our Institutional Review Board, 20 healthy control subjects were recruited, consented, and enrolled. Among study subjects, 11 were female, and ages ranged from 21 to 57 with a mean of 28.25 years.
Four participants self-identified as African-American, 15 as white, and one as "other." Years of formal education ranged from 12 to 22 years with a mean of 16.
Imaging Sequences
All scans were performed on a Philips Achieva 3T platform with an 8-channel head coil. The HR-MICRA scans were based on a variable flip angle turbo spin echo sequence [BrainView on the Philips platform, similar to SPACE (Sampling Perfection with Application optimized Contrasts using different flip angle Evolution) on Siemens platforms or CUBE on GE platforms]. By using a prescribed evolution of refocusing pulses with variable flip angles, this sequence preserves nearly constant T2 contrast between gray and white matter over a prolonged echo train duration allowing for over 100 echoes in a single TR (Busse et al., 2006). Since almost all of these refocusing pulses are less than 180°, long echo trains can be used without exceeding SAR limits (Busse et al., 2006), allowing for efficient acquisition of 3D T2-weighted data. While most structural imaging sequences are designed to balance TA, resolution, and SNR, this sequence was designed specifically to minimize TA for a specific target resolution while preserving contrast as opposed to SNR. A typical T2 sequence uses a moderately long TE (100-140 ms) and a very long TR (3000-4000 ms) to maximize longitudinal relaxation prior to the next excitation pulse. One can easily obtain high resolution within a long TR by extending the echo train length further into the TR, but this has the consequence of prolonging the effective TE, which ultimately decreases the T2 contrast between gray and white matter. This results in an image in which tissue is dark and CSF is bright, which works very well for high-resolution imaging of CSF-filled structures like the semicircular canals and cochlea of the inner ear, but gray-white borders become indistinct.
Variable flip angle sequences help with this effect dramatically by maintaining a shorter equivalent TE for a given effective TE (Busse et al., 2006), but there is a limit to this ability. In light of this, we designed our sequence with an equivalent TE that provides good gray-white contrast (143 ms), but shortened the TR dramatically from a typical TR of 3200 ms to 1750 ms. This results in decreased SNR due to the fact that the transverse magnetization has not fully relaxed into the longitudinal axis, but it also shortens the scan time to a reasonable amount that most subjects can remain still for. Conversely, using a more typical TR (e.g., 3200 ms), each scan would be over 11 min, but few subjects can remain still for that long. The time-optimized sequence has an in-plane resolution of 0.5 mm in the coronal plane and a slice thickness of 0.75 mm (A-P) with a scan time of 6:02 min. The resolution of 0.5 mm in the coronal plane was chosen to ensure that HIA was visible throughout the length of the SRLM given that the thickness of this band may be slightly less than 1 mm. The slice thickness of 0.75 mm was secondarily derived as the minimum resolution that could be acquired with a target time of acquisition of approximately 6 min to minimize the likelihood of intra-scan movement. We chose to use this non-isotropic resolution with lower resolution in the A-P direction instead of a larger isotropic voxel size because curvature of the SRLM in the A-P direction is generally less than in the coronal plane, and we wanted to maintain high resolution in the plane in which HIA is visualized. This is a whole-brain sequence with parameters as follows: FOV (mm) = FH 200/RL 178/AP 219, TR (ms) = 1750, TEeff/TEequiv (ms) = 348/143, TSE factor = 100, Echo spacing (ms) = 6.0, and Echo train duration (ms) = 643. This sequence was repeated 16 times over one long session for most subjects.
For two subjects, the scan was broken into two shorter sessions due to subject and scanner availability. There was no perceptible difference in the scan-to-scan variation between individual scans acquired across two sessions as compared to those acquired within a single session. The total scan time for all 16 iterations of the sequence is 96.5 min, not including brief pauses between scans to check on patient comfort. In post-processing, the scans were co-registered using FSL-FLIRT (Jenkinson et al., 2002) and averaged. A shell script to perform co-registration and averaging along with detailed instructions are included in the Supplementary Material. To assess the effect of the number of scans averaged on image quality, mean images were generated from a subset of 4 and 9 scans as well as the total 16 scans (HR-MICRA 4x, HR-MICRA 9x, and HR-MICRA 16x), which reflect a theoretical improvement in SNR by a factor of 2, 3, and 4, respectively, over a single base scan. In a few subjects, one to four scans were discarded due to obvious, marked intra-scan movement artifact such that inclusion in the average would degrade image quality. Scans with minor movement artifact were not excluded. One subject had one scan removed, two subjects had two scans removed, one subject had three scans removed, and three subjects had four scans removed. The removed scans tended to be at the end of the scanning session, presumably because some subjects were becoming restless during the long session. In cases where scans were removed due to movement, the remaining scans were averaged and used in place of the full set of 16 scans in reporting the proportions of slices with each HIA score, because removing these subjects could artificially inflate the proportion of good or excellent slices, misrepresenting what was available for the investment of time required to acquire all 16 scans.
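The co-register-then-average step is provided as a shell script in the paper's Supplementary Material; a rough equivalent of the idea can be sketched in Python. This is a hypothetical sketch, not the authors' script: the filenames are invented, and it assumes FSL's `flirt` and `fslmaths` command-line tools. It only builds the command lists; to execute them one would pass each to `subprocess.run(cmd, check=True)`:

```python
# Sketch of the HR-MICRA post-processing idea: rigidly co-register every
# repeated scan to the first, then average.  Filenames are hypothetical.

def build_commands(scans, mean_out="hrmicra_mean.nii.gz"):
    """Return FSL command lists to co-register scans to scans[0] and average."""
    ref = scans[0]
    cmds = []
    registered = [ref]
    for i, scan in enumerate(scans[1:], start=2):
        out = f"reg_{i:02d}.nii.gz"
        # 6-dof (rigid-body) registration of each repeat onto the first scan
        cmds.append(["flirt", "-in", scan, "-ref", ref, "-out", out, "-dof", "6"])
        registered.append(out)
    # Sum the co-registered volumes and divide by N to form the mean image
    sum_cmd = ["fslmaths", registered[0]]
    for img in registered[1:]:
        sum_cmd += ["-add", img]
    sum_cmd += ["-div", str(len(registered)), mean_out]
    cmds.append(sum_cmd)
    return cmds

scans = [f"scan_{i:02d}.nii.gz" for i in range(1, 5)]  # a 4x average
for cmd in build_commands(scans):
    print(" ".join(cmd))
```

Dropping motion-corrupted repeats, as described above, amounts to removing those filenames from `scans` before building the commands.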
However, the statistical model used the exact number of scans averaged to estimate the relationship between HIA clarity and number of scans averaged. The presence of a shortened dataset due to movement was included in our multivariate logistic regression model and did not significantly affect the results when correcting for the number of scans used.
Image Scoring
After generating separate mean images (HR-MICRA 4x, HR-MICRA 9x, and HR-MICRA 16x), each slice through the hippocampi was scored on each side for HIA clarity according to a previously published and validated rating scale (Ver Hoef et al., 2013a). This rating scale is based on clarity of visualization of the hypointense band of the SRLM allowing delineation of hippocampal subfields and is described in brief in Figure 2 (Ver Hoef et al., 2013a). Scoring was performed by one of two experienced raters (LWV, JC), both of whom were involved in a previous study describing the rating system and reporting its inter-rater reliability. Because inter-rater reliability was already established and published including these two raters (Ver Hoef et al., 2013a), it was not deemed necessary to repeat it within this dataset. The anatomic differences of the head, body, and tail make imaging HIA more or less challenging in different segments, particularly the digitations of the head. Therefore, each segment (head, body, and tail) was scored separately. Similar to other published hippocampal segmentation schemes (Boccardi et al., 2015), the landmark for the boundary between head and body was the first slice in which the hippocampus shows the characteristic c-shaped appearance and is not a double-layered structure, and the boundary between body and tail was the first coronal slice in which the quadrigeminal cistern can be seen in the midline (i.e., the plane of the superior and inferior colliculi).
In the most anterior slices through the head, the SRLM appears as a single horizontal line without distinct subfields, so scoring in the head started six slices posterior to the first slice in which the SRLM is visible. In the tail, scoring continued until the hippocampal gray matter abuts the splenium of the corpus callosum. Scores in the head, body, and tail are reported separately and together.
Statistical Analysis
We used generalized estimating equations (GEE) for ordinal data (Heagerty and Zeger, 1996) to examine the association between HIA ratings of hippocampal architecture clarity and image type (4x HR-MICRA, 9x HR-MICRA, and 16x HR-MICRA) and side (left, right) for each slice. GEE was employed as it utilizes data from all participants and provides more robust estimates in the presence of missing data (Schafer and Graham, 2002). All analyses modeled the odds of higher ratings (better clarity) with the 4x HR-MICRA images used as the reference category. Since SNR increases with the square root of the number of averages, we modeled the relationship between number of averages and improvement in HIA clarity using the square root of the number of averages rather than as a direct linear function of the raw number of averages. An exchangeable correlation structure was used to model the correlation of scores among images obtained from the same participant. Analyses were performed using the geepack package (Halekoh et al., 2006), version 1.2-1, in the R statistical environment, version 3.5.0 (R Core Team, 2017).
RESULTS
As expected, an obvious reduction in noise is visible with the increased number of samples averaged using the HR-MICRA approach. Figure 3 illustrates how image quality and HIA clarity improve from a single scan to HR-MICRA 4x, HR-MICRA 9x, and HR-MICRA 16x.
Examples of HIA clarity in representative slices of each hippocampal segment are shown in Figure 4, depicting their markedly different cross-sectional appearances. The distributions of HIA scores for the head, body, tail, and total hippocampus are shown in Figure 5. An HIA score of 3 or 4 indicates that all subfields can be directly visualized; therefore, for purposes of accurate subfield segmentation, the proportion of slices rated 3 or greater is the best metric of performance. In the body of the hippocampus, the proportion of slices scoring 3 or greater is 90%, 83%, and 67% for the 16x, 9x, and 4x HR-MICRA images, respectively. Using the 4x HR-MICRA images as a baseline, the 9x HR-MICRA images were 2.6 times, and 16x HR-MICRA images were 3.2 times, more likely to have high HIA ratings (p < 0.001). As expected, due to differences in the complexity of anatomy across the three segments of the hippocampus, the data show that the head has 58% more low ratings (2 or less) than the body (p < 0.001) across all HR-MICRA images. The ratings for the tail segment were also lower, but this was not statistically significant (p = 0.06). No significant differences in HIA scores were seen between left and right sides (p = 0.627) across all image types. Key to demonstrating HIA clearly is obtaining slices that are thin enough to minimize volume averaging effects. A common 2D coronal T2w sequence used at our institution has an in-plane resolution of ∼0.25 mm and slices that are 3 mm thick, versus the HR-MICRA images that are 0.75 mm thick; thus four HR-MICRA slices cover the same 3 mm thick slab of tissue represented by a single conventional T2 slice. As such, any feature of the image that is consistent across all four HR-MICRA slices will be represented well in a single conventional T2w slice, but any features that vary across those four slices will be blurred in the conventional T2w slice due to volume averaging. This is illustrated in Figure 6.
Specifically, note that the lateral inferior portion of the SRLM in the conventional T2 slice is blurred, but the four HR-MICRA images that correspond to the same slab show that the contour of the SRLM is changing markedly in this area from slice to slice, even with a slice thickness of only 0.75 mm. By contrast, on the right side the SRLM has a more consistent appearance across three of the four HR-MICRA slices in the CA1 region, which translates to good HIA clarity in the corresponding conventional T2w image. It is also important to note that the HR-MICRA images in Figure 6 show the SRLM more clearly despite the fact that the in-plane resolution of the HR-MICRA images (0.5 mm) is significantly less than that of the conventional T2w images (0.25 mm). Because the HR-MICRA approach is a 3D acquisition with a sub-millimeter slice thickness, the 3D volumes can be reformatted in any plane with good image clarity. This allows for clear visualization of hippocampal dentation, a morphologic feature of the human hippocampus that varies dramatically across healthy individuals and correlates with measures of memory (Beattie et al., 2017). While some individuals have rather "smooth" contours on the inferior aspect of the hippocampus, other individuals show a prominently dentated (tooth-like) appearance of the inferior aspect of the hippocampus as shown in Figure 7. Note that both of the individuals in this figure are healthy controls, so the variation in the degree of complexity of the hippocampal contours is not due to pathology. In contrast, all information about dentation is lost when conventional 2D approaches with thicker slices are used as shown in Figure 7. While diagrams of the hippocampus and exemplary MRI images almost universally show SRLM to have a perfect C-shape (Figure 1), thin-sliced images can show a great deal of variation in the appearance of the SRLM, particularly in subjects with prominently dentated hippocampi. 
Figure 8 demonstrates how the SRLM can vary from being a thin band to a very thick band, or even thin discontinuous segments, in adjacent slices, depending on how it cuts through the folds in CA1 that create hippocampal dentation.
FIGURE 6 | (B1-4) Four contiguous 0.75 mm thick HR-MICRA slices that together cover the same 3.0 mm thick coronal slab as (A). The thin arrows show that the contour of the SRLM on the left varies significantly from slice to slice in the inferolateral aspect of the hippocampus, which results in blurring of that portion of the SRLM in (A) due to volume averaging. The SRLM of the right hippocampus is generally consistent across slices in (B1-4) and is not blurred in (A).
FIGURE 7 | Sagittal views of a prominently dentated hippocampus (top) and a minimally dentated hippocampus (bottom) using HR-MICRA 16x (left) and conventional T2 (right). The difference in dentation between the upper and lower images is obvious in the HR-MICRA images, but the thick slices of the conventional T2 image do not allow differentiation between the two. Images are not interpolated.
FIGURE 8 | Dramatic changes in SRLM shape can be seen in consecutive slices in prominently dentated hippocampi. The top row shows coronal slices through a left hippocampus and the middle row shows identical slices with the SRLM highlighted in light blue and index points marked with colored dots. The bottom row shows a sagittal slice through the same hippocampus. Solid red lines indicate the slice location of the images above, the dashed red lines in the bottom left image indicate the relative location of the other three slices, and the colored dots indicate the location of the corresponding dots in the middle row. In the first and last columns, the SRLM has a simple curvilinear shape, but in the second column the SRLM is cut en face as it dips down into a dente (fold in CA1) and widens in appearance. In the third column, the SRLM is split into two parts as it weaves in and out of the plane of section.
DISCUSSION
Many reports have been written about subfield segmentation, but rarely is it stated how clearly the imaging sequence used showed HIA; where it is mentioned, the ability of a particular sequence to show HIA is not examined rigorously. Instead, it is generally merely presumed that HIA is shown well enough to outline subfields, or at least to infer their location from surrounding landmarks. Similarly, automated subfield segmentation algorithms tend to rely heavily on atlas templates to apply probabilistic estimates of subfield boundaries to fill in gaps when the source image is not clear, and may even segment a subfield that is not visible in the source image. In this work we evaluate HR-MICRA, an approach that allows direct visualization of the hippocampal internal architecture and hippocampal subfields, and we rigorously examine its performance. We show HR-MICRA's ability to delineate all subfields clearly in the hippocampal body in up to 90% of individual slices. Furthermore, the location of subfields in most of the other, suboptimal slices may be reasonably estimated from adjacent slices, because those slices are likely to have good clarity, with minimal interpolation error due to the small slice thickness. Not only does this approach provide greater clarity of each slice, it provides many more slices and correspondingly much more data. Because the coronal slice thickness of HR-MICRA images is less than 1 mm, the image volumes are amenable to reformatting in any plane. This is best illustrated in visualizing the complex contours of hippocampal dentation in the sagittal plane, which would be impossible to visualize with common 2D approaches (Figure 7).
Angulation of the plane of section such that it is orthogonal to the long axis of the hippocampus is important to optimize HIA clarity when slices are relatively thick, but for many subjects, particularly those with prominent dentation, slice thicknesses in the range of 2-4 mm are insufficient to avoid volume averaging effects, as seen in Figure 6. Ideally, the slice thickness should be relatively small compared to the contour of the structure to be visualized for volume averaging effects to be ameliorated, and contour information of features that vary significantly across a thick slice cannot be seen even with many averages. Dentation can be visualized to a limited degree in other 3D volumetric sequences, such as a T1w MPRAGE, but only when HIA is seen with sufficient contrast in the coronal plane can distinct gray and white matter layers be visualized in the sagittal plane. So, while dentation may be appreciated as bumps on the inferior surface in common T1w images, HR-MICRA images allow visualization of all the layers and the full extent of in-folding of CA1. Thinner slices will also allow for more precise measurements of subfield volumes and surface contours, particularly in cases with prominent dentation where the subfield pattern varies significantly from slice to slice. Consistent and clear visualization of HIA should also allow automated or semi-automated subfield segmentation algorithms to more accurately define these small, but distinct, and important regions of interest by relying more on the features of the image and less on an atlas template. In this study, averaged images were created in the same resolution as the source images using linear registration methods for simplicity's sake. However, image quality potentially could be further improved by upsampling images to an even higher resolution and/or employing non-linear registration techniques, as has been demonstrated elsewhere (Shaw et al., 2019).
While the ability to show HIA clearly with this approach is very good, it is not perfect. In fact, one in ten slices through the body shows suboptimal HIA clarity, as do one in four in the head and approximately one in five in the tail. However, these limitations should be considered in context. First, even with relatively thin slices of 0.75 mm, volume averaging effects are still at play for layers that dramatically undulate through the plane of section, as seen in the second column of Figure 8. This phenomenon, particularly frequent in subjects with prominent dentation, is a common cause of blurring of HIA in the body and tail. This will always be a factor for visualizing HIA until slices become extremely thin (e.g., much less than 0.5 mm), which will be very challenging at 3T. Second, using a sub-millimeter slice thickness provides many more slices, which minimizes error that comes from inferring the location of an ambiguous subfield boundary. For example, if only one in ten or one in five slices shows suboptimal HIA out of 60 slices through a typical 4.5 cm long hippocampus, the contours of the subfield boundaries can be reasonably estimated from surrounding slices with minimal error. Furthermore, most of the suboptimal slices still show some degree of internal architecture [HIA rating of 2 (Figure 2)], and no slices in the body (for HR-MICRA 16x), and only a very small percentage in the head and tail, show no internal architecture at all. This means that using adjacent slices to infer the location of the SRLM, which defines HIA, is only necessary for part of the SRLM in almost all cases, which further minimizes error. With HR-MICRA, subfield definition is excellent in the body, the area most often focused on in the subfield segmentation literature.
While HIA clarity is not as consistent in the head and tail as it is in the body, which is nearly always the case in in vivo hippocampal imaging, our finding of good HIA clarity in 75%+ of slices in the head or tail is remarkable compared to standard T2 sequences with thick slices that virtually never show HIA clearly in the head and tail. The obvious limitation of this approach is the amount of time it takes to acquire a full dataset. The 16x HR-MICRA scans take well over 1 h. However, several factors are important to consider. First, the approach is flexible and can be adapted to however much time is available. Even the 4x HR-MICRA images showed HIA significantly better than conventional T2 images. The repetition of the sequence up to 16 times here was not intended to be the most common implementation of this approach, but rather to demonstrate how good the performance can be if enough time is committed. The improvement in the percentage of scans showing HIA clearly is roughly linear from 4 to 9 to 16 averages, which is consistent with the theoretical linear relationship between SNR and the square root of the number of images averaged. As such, if the scan time did not allow acquiring four scans, one could extrapolate the SNR of averaging fewer scans, which would be proportional to the ratio of the square roots of the numbers of scans averaged [e.g., SNR3/SNR4 = sqrt(3)/sqrt(4) = 0.87]. This yields an estimated 13% decrement in clarity using a three-scan average and a 29% decrement using a two-scan average compared to a four-scan average. Likewise, the clarity for any higher number of scans could be estimated in a similar fashion.
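The sqrt(N) extrapolation described above can be written out directly; a minimal sketch (Python), with the reference fixed at the four-scan average used in the text:

```python
import math

def relative_snr(n_scans: int, n_reference: int = 4) -> float:
    """Relative SNR of an n_scans average versus an n_reference average.

    Averaging N co-registered acquisitions improves SNR by sqrt(N), so the
    ratio of two averaging schemes is sqrt(n_scans) / sqrt(n_reference).
    """
    return math.sqrt(n_scans) / math.sqrt(n_reference)

# Figures from the text: a three-scan average retains ~87% of the SNR of a
# four-scan average (a ~13% decrement), a two-scan average ~71% (~29%).
print(round(relative_snr(3), 2))   # 0.87
print(round(relative_snr(2), 2))   # 0.71
print(relative_snr(16))            # 2.0 -> 16x averaging doubles SNR vs. 4x
```

The same ratio also recovers the roughly linear 4-to-9-to-16 progression noted above, since sqrt(4), sqrt(9), and sqrt(16) are 2, 3, and 4.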
Second, the current experiments were performed on a first-generation clinical 3T scanner (installed 2004) with only an 8-channel parallel coil to demonstrate that high-resolution 3D data sets can be generated without the necessity of ultra high field scanners (7T+) or state-of-the-art high-performance 3T scanners, which makes access to such high quality images far more widespread. Given enough time and repetitions, virtually any scanner worldwide could generate very high quality images with this approach, even with field strengths lower than 3T. We have, however, implemented a similar sequence on a Siemens Prisma platform (Siemens Healthcare, Erlangen, Germany) with a high-performance gradient set and 64-channel head coil and are able to generate a 6x average image that is similar to the 16x HR-MICRA images in this study, for a time savings of almost two-thirds (data not shown). Higher channel-count coils (e.g., 32, 48, and 64) that are commonly available now allow both higher SNR and better parallel acceleration. Third, at this point this approach is not intended to be used as part of a routine clinical study, but rather as a research tool for investigating the hippocampus or other brain structures requiring high resolution in all three dimensions. That said, combining this approach with other highly accelerated MRI acquisition techniques, such as CAIPI (Controlled Aliasing in Parallel Imaging Results in Higher Acceleration) imaging (Breuer et al., 2006; Bilgic et al., 2015), other forms of simultaneous multi-slice imaging (Barth et al., 2016), or compressed sensing (Lustig et al., 2007; Toledano-Massiah et al., 2018), may further compress the total study time to something reasonable for a research protocol.
In any case, we want to be very clear that this approach is best suited for applications that place a premium on hippocampal internal architecture clarity and can justify investing significant scanner time to obtain high-resolution hippocampal images; it is not intended for routine clinical scans. Numerous factors affect SNR, including TE, TR, acceleration factor, acquisition scheme (2D vs. 3D, constant refocusing flip angle vs. variable flip angle), and many others. As described in the "Materials and Methods" section, we deliberately chose a short TR in order to fit more TRs into the limited time available to scan the whole volume, but the short TR limits the relaxation between excitations and consequently decreases the SNR because there is less longitudinal magnetization to be excited at the beginning of the next TR. However, this short TR allows scanning of a large, high-resolution volume in a reasonable period of time (which minimizes movement) with good gray-white contrast, at the expense of SNR. SNR can be compensated for with co-registration and averaging of additional acquisitions, but the intrascan movement and poor gray-white contrast that come with longer, higher-SNR sequences cannot. There is a slight smoothing effect that comes from interpolation in resampling a co-registered image volume. This effect is more prominent when the source image and target image have different resolutions. Since our images were all of identical resolution, we did not consider this to be a significant factor, and any smoothing that may be present was not apparent in visual comparison of the original reference image to the co-registered images. A nearest-neighbor interpolation scheme would avoid any smoothing, but is not commonly used. Moreover, since the approach relies heavily on averaging anyway, the effect of smoothing from interpolation is expected to be comparatively small.
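The smoothing-from-interpolation point can be illustrated in one dimension; a minimal sketch (NumPy), using a hypothetical binary intensity profile standing in for a sharp gray/white boundary:

```python
import numpy as np

# A toy 1-D "image": a sharp gray/white boundary (intensities 0 and 1 only).
edge = np.array([0.0, 0.0, 1.0, 1.0])
x_src = np.arange(edge.size)              # source sample positions
x_new = np.linspace(0, edge.size - 1, 8)  # resampled (finer) grid positions

# Linear interpolation, as used in typical resampling, blends across the
# boundary and introduces intensities that never existed in the source.
linear = np.interp(x_new, x_src, edge)

# Nearest-neighbor keeps only the original intensities (no smoothing),
# at the cost of blocky boundaries.
nearest = edge[np.round(x_new).astype(int)]

print(np.unique(nearest))                         # only the original 0 and 1
print(bool(((linear > 0) & (linear < 1)).any()))  # True: blended values appear
```

When source and target grids coincide, as for the identical-resolution volumes here, the interpolated sample positions fall on (or very near) the source positions and this blending is correspondingly small.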
CONCLUSION

High Resolution Multiple Image Co-registration and Averaging is a flexible approach that provides consistently clear visualization of HIA that is notably inconsistent in common 2D TSE scans. Using thin coronal slices not only minimizes volume averaging effects but also allows reformatting in any plane of section, which allows for study of hippocampal morphologic features such as dentation. The improved clarity of HIA visualization allows more precise and accurate segmentation of hippocampal subfields as well as more sensitive detection of pathologic alterations of HIA. This approach was demonstrated using a basic 3T imaging platform without advanced head coils, advanced pulse sequences, or proprietary processing software, making it widely accessible to any imaging laboratory. Furthermore, the approach is generally applicable to and will further benefit from advanced imaging techniques like ultra high field imaging, high channel-count head coils, and accelerated pulse sequences and reconstruction schemes.

DATA AVAILABILITY STATEMENT

Data used in the study will be made available to investigators upon request after receipt of an IRB-approved protocol and a signed data use agreement from the receiving institution.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Institutional Review Board, The University of Alabama at Birmingham, Birmingham, AL, United States. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

LV: conceptualization, formal analysis, funding acquisition, investigation, methodology, project administration, supervision, roles/writing - original draft, and writing - review and editing. HD: data curation and investigation. JC: formal analysis. GS: data curation and writing - review and editing. JB: data curation, investigation, and writing - review and editing. REK: formal analysis, methodology, software, and writing - review and editing.
RCK: conceptualization, methodology, and writing - review and editing. JS: conceptualization and writing - review and editing. All authors contributed to the article and approved the submitted version.

FUNDING

This work was supported by the National Institutes of Health (K23EB008452 and R01NS094743).
Physiological Stress Reactions in Red Deer Induced by Hunting Activities

Simple Summary

Game hunting is an activity largely practiced all over the world. Understanding its consequences on wildlife is crucial for the proper management and development of hunting directives. In this study, we examined stress levels in hunted wild red deer by assessing cortisol levels and its metabolites in multi-temporal biological samples. Overall, we found evidence for an influence on stress levels of red deer caused by repeated exposure to hunting events, which could have important implications for the sustainability and conservation of wild populations. Furthermore, our results highlight the use of hair samples as a useful long-term stress indicator.

Abstract

Hunting activity is usually seen as a factor capable of causing an intense stress response in wildlife that may lead to short- but also long-term stress. In the Lousã Mountain, Portugal, the population of red deer (Cervus elaphus) is the target of intensive seasonal hunting. We collected and measured cortisol (and its metabolites) in three tissue types (blood, feces and hair) from red deer hunted during two hunting seasons to evaluate the stress levels at different time windows. We also assessed the immunological and physical condition of the animals. We predicted that the hunting activity would act as a stressor inducing increased short- and long-term stress levels in the population. Results showed an increase in hair cortisol levels during the months of harvesting. Surprisingly, the tendency for plasma cortisol levels was to decrease during the hunting season, which could be interpreted as habituation to hunting activity, or be due to the hunting duration. Contrary to our predictions, fecal cortisol metabolites did not show any clear patterns across the months. Overall, our results suggest an influence of hunting activities on the physiological stress in red deer.
In addition, hair seems to be useful to measure physiological stress, although more studies are required to fully understand its suitability as an indicator of long-term stress. Methodologically, our approach highlights the importance of simultaneously using different methods to assess short- and long-term effects in studies on physiological stress reactions.

Introduction

Stress responses occur when an animal perceives an external noxious stimulus (stressor) such as predation, adverse weather, habitat change, or anthropogenic disturbances. Effects on physiological state, behavior and general health highlight the importance of management and regulation of hunting activities [5,27,33]. The main aim of this study was to investigate the impact of hunting activities on a wild population of red deer using measurements of glucocorticoids in plasma and hair and its metabolites in feces, all collected from hunted animals. We thus indirectly evaluated the stress levels of each hunted animal from a few months before the hunting event until its death. Given the wide effect that a stress response can have on the body, the physical and immunological condition of the animals was also evaluated.

Study Area and Red Deer Population

The study took place at Lousã Mountain (40° 3′ N, 8° 15′ W) and surrounding hunting areas, located in Central Portugal. The climate in this Mediterranean area is characterized by hot and dry summers and rainy winters [34]. With an altitude that ranges from 100 to 1205 m, this mountainous region is predominantly composed of plantations of coniferous and broadleaf trees, and large shrubland areas [26]. The red deer population in this region is the result of a reintroduction program that occurred between 1995 and 1999 with the release of 96 animals. Currently, this species occupies around 435 km² of the Lousã Mountain and surrounding areas, with an estimated density of 5.6 deer/km² during the rut season [35,36].
The calving season of this population occurs in May/June and rutting from mid-September to the end of October. With females being 37 ± 3% (mean ± SE) smaller than males, this species shows a very marked sexual dimorphism in body size. In addition, the sexual segregation outside the breeding season results in a matriarchal society wherein the females adopt a more philopatric behavior and males tend to disperse [35,37]. Since there are no natural predators, red deer are mainly preyed upon by feral dogs. This species is one of the most hunted big game species of the Lousã Mountain region, which includes 12 hunting areas, with red deer hunted in seven of them since the start of hunting in the region in 2006/2007 [26].

Data Collection

The study was performed during two hunting seasons (2013/2014 and 2014/2015), enabling the collection of samples from 80 red deer (38 adults: 26 females and 12 males; 30 sub-adults: 14 females and 16 males; and 12 young). The samples were collected during autumn (October and November) and winter (January and February) from red deer hunted in six "montarias" in three contiguous hunting areas: ZCM (Municipal Hunting Area) of Lousã (three hunting events, 34 animals), ZCM of Vila Nova (one hunting event, 7 animals) and ZCA (Associative Hunting Area) of Miranda do Corvo (two hunting events, 39 animals), located in the Lousã Mountain area. No hunting events were conducted during the month of December. Post-mortem examination was made in situ, one to four hours after the animals were killed, and included the collection of blood, feces directly from the rectum, hair from the dorsal region and the metatarsus, and the recording of the sex and age class (based on animal size, body conformation and characteristics of antlers) of each animal.
In addition, in November, fecal samples (n = 11) were also collected non-invasively during behavioral observations in the non-hunting area (i.e., the area where hunting is not allowed) located in the central part of the Lousã Mountain, to use as a control. Blood samples were taken directly from the heart into EDTA tubes and centrifuged at 2000× g for 5 min. Plasma was collected and frozen at −20 °C in multiple aliquots. Feces, hair, and the metatarsus were frozen and stored at −20 °C for subsequent analyses.

Steroid Extraction and Quantification

Five mL of diethyl ether was added to each plasma sample (0.5 mL). After being shaken and centrifuged (2500× g, 15 min), the samples were frozen. Afterwards, the liquid component was transferred to a glass vial and dried under a stream of nitrogen (40 °C). This procedure was performed twice to increase the recovery of the extraction process. The combined and dried-down extracts of each sample were dissolved in 0.5 mL of assay buffer and an aliquot analyzed by a cortisol enzyme immunoassay (EIA), as described in detail before [38]. From each dried fecal sample, 0.2 g was taken and mixed with 4 mL of methanol (100%) and 1 mL of water. The samples were shaken for 30 min and centrifuged at 2500× g for 15 min [39]. A group (with a 3α-11-one structure) of fecal cortisol metabolites (FCMs) was measured using an 11-oxoetiocholanolone EIA. The utilized biotinylated label and the antibody (including its cross-reactivity) have been previously described in detail [40]. The assay has been successfully validated for use in red deer [41]. Hair samples were cut into fragments (<0.5 cm), washed with 5 mL 100% n-hexane to remove any lipids and potential external contamination, and air dried. Hair samples (0.05 g) were mixed with 5 mL of 100% methanol and incubated for 72 h for glucocorticoid extraction. After transferring the supernatant into a new glass vial, the organic solvent was evaporated at 40 °C using a stream of nitrogen.
The extracts were dissolved in 0.5 mL of assay buffer and analyzed with a cortisol EIA [38,42].

Physical and Immunological Conditions

The physical condition of animals was assessed using bone marrow fat (BMF). The BMF was determined using the metatarsus, from which the bone marrow was extracted and weighed (±0.0001 g). The bone marrow samples were then oven-dried at 60 °C and reweighed. BMF was determined as BMF = (weight of oven-dried marrow/weight of fresh marrow) × 100 [43]. To evaluate the immunological state of an individual, we used the blood collected in the field to make blood smears in the lab, followed by staining [44]. White blood cell counts and identification (100 cells per smear) were performed using a Nikon Eclipse Ni microscope (Nikon Instruments Europe B.V., Amsterdam, Netherlands). The examination and classification of blood cells were made based on morphological criteria and staining properties, allowing the identification of five types of white blood cells: lymphocytes, neutrophils, eosinophils, monocytes and basophils [45].

Statistical Analysis

The correlation between the concentrations of GC in the different sample matrices (i.e., plasma, feces and hair) was tested using Pearson's correlation. To analyse the physiological stress reactions, general linear models (LM) were used to test the effects of sex, age class and month (independent variables) on the different GC levels (cortisol in plasma and hair, FCMs; dependent variables). Additionally, to evaluate the influence of the physical condition and the immunological status of the animals on GC levels, BMF and the percentage of lymphocytes (the most abundant white blood cells (WBC)) were included in this analysis as independent variables. The concentrations of GC or its metabolites were log transformed. BMF and WBC were logit transformed to achieve an approximation of a normal distribution and to reduce heterogeneity [46].
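The BMF index and the logit transform applied to the proportion data can be computed directly; a minimal sketch (Python) with hypothetical marrow weights (not values from the study):

```python
import math

def bone_marrow_fat(dry_weight_g: float, fresh_weight_g: float) -> float:
    """BMF index: dry marrow mass as a percentage of fresh marrow mass."""
    return (dry_weight_g / fresh_weight_g) * 100.0

def logit(p: float) -> float:
    """Logit transform used on proportion data (BMF, WBC percentages)."""
    return math.log(p / (1.0 - p))

# Hypothetical marrow sample: 0.85 g fresh, 0.68 g after oven-drying at 60 C.
bmf = bone_marrow_fat(0.68, 0.85)
print(round(bmf, 1))               # 80.0 (percent)
print(round(logit(bmf / 100), 2))  # 1.39, the value entered into the LM
```

The logit maps proportions in (0, 1) onto the whole real line, which is what brings these bounded percentages closer to the normality assumed by the general linear models.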
Since no significant interaction effects between the independent variables were found, only the main effects were used in the final models. The year had no significant effect on the concentrations of cortisol in plasma (F (1,48) To evaluate the physical and immunological conditions of red deer, we tested the effects of sex, age class and month (independent variables) on BMF and WBC (dependent variables) using general linear models (LM). The results are expressed as mean ± SE and 95% confidence intervals (CI). All the statistical tests were considered significant when p < 0.05. The statistical analyses were performed using IBM SPSS®, version 22 (IBM Corporation, New York, NY, USA).

Physiological Stress Reactions

Concentrations of plasma and hair cortisol and its metabolites in feces were not correlated (Table 1). There was only a weak, but not significant, correlation of cortisol in hair with fecal cortisol metabolites (FCMs).

Physical and Immunological Conditions

In terms of physical condition, measured by the BMF index, we found significant differences between the sexes (F(1,72) = 43.370; p < 0.001), age classes (F(2,72) = 6.842; p = 0.002) and months (F(3,72) = 3.288; p = 0.025). Females had higher BMF values than males in all months (Figure 2). The differences obtained between age classes were mainly due to the poorer physical condition of calves (79.6

Discussion

Our results confirmed that concentrations of cortisol (or its metabolites) in plasma, feces and hair samples taken at the same time are not correlated. These results were expected since each sample matrix provides information about the endocrine state at different times. GC levels in plasma reflect an immediate physiological state [12], while feces provide information about the endocrine state a certain time before the sample collection [11] and hair is supposed to reflect the status of an accumulated period of some weeks to a few months [22]. Therefore, each biological sample matrix reflects distinct time-windows and thus complementary information can be gained. The levels of cortisol and its metabolites did not differ between sex or age classes.
These results were in agreement with those obtained by Huber et al. [17], who did not find differences in the concentration of FCMs between female and male red deer. However, other studies described sex-specific differences in glucocorticoid levels for some ruminant species [47,48] as well as age variations [49]. Regarding the influence of the reproductive state on stress levels, changes in cortisol levels were reported in red deer, with older females having higher cortisol levels in late gestation than non-pregnant females [50]. However, for reindeer (Rangifer tarandus) no differences were found in plasma cortisol concentrations between adult males, barren, and pregnant females [51]. Although the reproductive state of the females was not assessed in the present study, previous long-term data from the same study area have shown that during the sampling period more than 80% of females were usually pregnant (unpublished data), which means that probably more than 80% of the females sampled for this study were also pregnant. This fact, together with the absence of differences between males and females or age classes, suggests that all animals were exposed to the same levels of stress during the sampling period. In fact, considering the type of hunting process used in our study area, which does not target any particular sex or age class, the obtained results are in agreement with our predictions. We found a trend in plasma cortisol concentrations to decrease during the hunting season from November to February. A decrease of GC concentrations after regular and frequent occurrences of a stressor is often an indication of acclimation [2], as observed in some studies with Brahman cattle (Bos taurus indicus) and Magellanic penguins (Spheniscus magellanicus) [52,53].
However, besides the fact that we are dealing with a major stressor in our study, we need to consider the possible influence that this specific hunting method may have on our results. Since we have no information on how long deer were chased by the dogs before being killed, the cortisol values in plasma could reflect different chasing periods. Furthermore, the observed trend for plasma may also be influenced by other factors, such as food intake and/or any environmental disturbance [54-56], making interpretation more difficult. Although an influence of circadian rhythm on cortisol levels has been reported [55], the hunting events in the study area occurred within the same period of the day, decreasing the possible effect of the daily circadian rhythm on our results. Besides the trend obtained for plasma cortisol concentrations, the results also showed a positive correlation between plasma cortisol levels and the percentage of lymphocytes. We expected the opposite, namely a decrease of the percentage of lymphocytes with an increase of stress levels, mainly due to the effect of stress as an immune suppressor [57]. Instead, our results were more in line with the immune-enhancing character of acute stress, which promotes the passage of leukocytes from the blood to other parts of the body, while chronic stress induces immune suppression [57-59]. Although this interpretation requires caution given the existence of some unevaluated health-related factors, it points to the important relation between white blood cells and cortisol levels, and the need for more studies approaching this interaction. We did not find differences in FCM concentrations across the sampled months. However, a seasonal pattern of GC levels in cervids is suggested by some authors who documented higher values in colder months than in warmer months of the year [17,60].
The minor influence of mild Mediterranean winters [34], in which the occurrence of snow is uncommon and food availability is not significantly affected, may have contributed to the lack of a seasonal pattern in our FCM levels during the hunting season. In fact, some studies in Mediterranean red deer [61] and roe deer (Capreolus capreolus) [62] suggested summer as the season with the most energetic constraints, owing to decreased food quality and quantity caused by hydric stress. These results point, although weakly given the small size of the control sample, to an influence of hunting on stress levels. These results are in agreement with those reporting higher FCM levels in chamois (Rupicapra rupicapra tatrica) in areas with high disturbance than at low-disturbance locations [16]. Hair cortisol concentrations differed significantly across months, with an increase from the beginning (October) to the end (February) of the hunting season. Cortisol levels recorded in February were significantly higher than the ones obtained in October. Taking into account that the molt from the summer to the winter coat is gradual, especially in adults, where the development of new hair can occur before shedding of the old one [63], our samples from October included new hair, which began to grow in September and October, and hair from the summer months which had not been shed yet. Furthermore, since hair follicle activity is reduced in February, because the end of the winter season is approaching and the winter coat is fully grown [63], cortisol measured in hair sampled in this month should largely reflect the conditions from the previous three months [64]. Therefore, the increase in cortisol levels across months may be an indication of a period of prolonged stressful conditions induced by hunting activity, which is supported by the higher FCM levels we found in individuals from impact areas than from control sites. Similarly, Caslini et al.
[65] found that hair cortisol levels in the same species (Cervus elaphus) were higher in higher-density areas associated with more difficult environmental conditions and higher levels of anthropogenic disturbance (such as tourism). Our results are in agreement with those reported in that study, which suggested that long-term HPA axis activity and allostatic load, as a consequence of higher densities, anthropogenic disturbances and/or environmental conditions in red deer populations, can be evaluated using hair cortisol levels as an index [65]. In addition, Bryan et al. [66] also documented higher hair cortisol levels in heavily hunted wolves (Canis lupus) than in wolves with lower hunting pressure. Hair cortisol seems to be a good indicator of long-term stress and has gained importance as a novel method to assess stressful conditions [19,22]. Hair can often be collected without capture and handling of the animals (i.e., hair traps). Collection of hair for hormone analyses may thus be a useful, non-invasive tool to monitor prolonged stressful conditions. Moreover, the fact that cortisol levels in hair may provide a long-term endocrine profile [67,68] can be extremely useful to study chronic stress and animal welfare [69]. On the other hand, taking into account our results regarding plasma cortisol levels and FCM concentrations, hair might not be suitable for capturing short-term stress levels. Based on our results, and previous work, increased hair cortisol concentrations in our red deer population seem to be a consequence of hunting activities. However, considering that our study was focused on a wild population, there are other factors that may be contributing to the GC levels obtained in hair, like temperature, food availability or season [17,60,70].
As the energetic balance is a crucial factor in the ability of the animal to respond effectively to certain stressors [4], body condition, a measure of long-term energetic reserves [26,71], can have an influence on cortisol levels. Therefore, considering this bidirectional interaction, not only can stress affect body condition, but energetic parameters could also be important in dealing with stress. Our results did not show any correlation between cortisol levels and BMF, which is a measure of physical condition, making an influence of season and/or food availability on cortisol levels unlikely. Moreover, the source of cortisol accumulated in hair is unclear, and some possible explanations have emerged. Keckeis et al. [72] reported a local production of GC in the hair follicles of guinea pigs; however, how this mechanism is modulated is still unknown. Recently, experimental evidence was provided in domestic sheep (Ovis aries) that mechanical irritation of the skin significantly increased hair cortisol concentrations [73]. Another study suggested the existence of a cutaneous HPA axis, able to synthesize and secrete cortisol, with negative feedback regulation by cortisol on corticotropin-releasing hormone (CRH) expression [74]. The uncertainty about the origin of cortisol in hair calls for additional caution in the interpretation and analysis of GC levels in these types of samples [75,76]. To decrease the influence of confounding factors in our study, hair samples were taken from the dorsal region in all individuals. However, further investigation would be very important to clarify whether cortisol concentrations are affected by the level of hair pigmentation as well as body area and hair type, or whether there is any pattern along the hair shaft, as suggested by some studies [18,20,21].
In addition, it is also relevant to emphasize the importance of the combined use of different indicators of stress to obtain more complete and precise information about the stress conditions of wild populations [10]. Plasma, feces and hair are complementary tools, which can provide information from different time-windows, allowing a better evaluation of the effects of human activities, like hunting, on the physiological stress response. In terms of physical condition, our results showed that females were in better physical condition than males. This could be due to differential costs of reproduction for each sex, with males going through a phase of hypophagia and high activity levels during the rut [26,37]. Young individuals also had lower BMF indices than adults, which may be the result of a greater investment of these animals into growth [77]. Despite the observed differences in physical condition, and contrary to our predictions, the stress parameters we measured were not associated with physical condition. However, Cabezas et al. [78] reported lower values of body condition in animals with high GC levels in wild rabbits (Oryctolagus cuniculus). The absence of an association between stress levels and physical condition may indicate that the studied red deer population had enough fat reserves to cope with the stress induced by hunting activities.

Conclusions

The ability of plasma, feces and hair to provide multi-temporal information about physiological state proved to be very useful in the present study. Although the use of different biological samples increased the difficulty of interpreting our results, it allowed a broader panorama, more complete and reliable for understanding such a complex topic as stress reactions. We found evidence that repeated exposure of our red deer population to game hunting activity had an impact on stress levels, which can have important consequences for the sustainability and conservation of this species.
Specifically, stress can affect population dynamics, by changing foraging and breeding behavior, animal welfare, and, ultimately, the evolutionary processes, by changing individual fitness and selection [2,5,6]. Thus, exploring these topics is crucial to understanding the implications of hunting for the conservation of this species and to improve hunting management activities. Our study highlights the fundamental and broad role of stress in wildlife, emphasizing the need for more studies capable of clarifying how different biological matrices may be useful to evaluate the impacts of human pressure on wildlife, both in terms of stress level and stress processes.
Efficacy of Non-Soy Isoflavone (Pueraria phaseoloides) on Symptom Severity Scoring by Kupperman Index in Menopausal Women Introduction: Menopause is marked by a decline in estrogen levels, which causes various symptoms. Treatments based on foods or supplements enriched in phytoestrogens, notably isoflavones, plant-derived compounds with estrogenic effects, have recently become quite popular. This study aims to determine how effective non-soy isoflavone supplementation is for menopausal symptoms in women. Methods: In an analytical double-blind, randomized clinical trial (RCT), 26 menopausal women were given 67.5 mg of non-soy isoflavone and 25 menopausal women were given a placebo daily for 12 weeks. Inclusion criteria were (1) RCT, (2) perimenopausal or postmenopausal women experiencing menopausal symptoms, and (3) intervention with an oral non-soy isoflavone. The Symptoms Severity Score (SSS), based on the Kupperman Index (KI) questionnaire, was administered to the patients before starting and at the end of the study. The Statistical Package for the Social Sciences (SPSS) software was used to analyze the responses. Results: The difference in SSS between the treatment and control groups was significant (p = 0.000). Women receiving 67.5 mg of non-soy isoflavone daily showed reductions in myalgia, fatigue, and hot flushes of 92.3%, 77%, and 53.8%, respectively. Relative risk reduction (RRR) was used to rate clinical significance. Significant RRR values resulted for myalgia (76.6%), fatigue (55.7%), hot flushes (39.2%), and SSS (68.7%). Conclusion: Isoflavones did not bring a significant change in the Kupperman Index compared to placebo but did significantly improve Symptom Severity Scoring in menopausal women. INTRODUCTION Menopause is the permanent cessation of menstruation resulting from the loss of follicular activity in the ovaries, which produce the menstrual hormones. 
It typically affects women around the age of 50 and is diagnosed after a 12-month period of amenorrhea with no apparent physiological or pathological causes [1]. Hot flushes, night sweats, sleep disturbances, sexual discomfort, depression, changes in sex drive, vaginal dryness, dry skin, weight changes, hair loss, and urinary incontinence are all symptoms of menopause [2]. These symptoms, associated with estrogen deprivation due to loss of follicular activity, affect quality of life and require therapeutic intervention. While hormone replacement therapy (HRT) substantially improves menopausal symptoms, it is linked to an increased risk of heart disease and breast cancer [3]. As a result, alternative treatments are needed to alleviate menopausal symptoms while minimizing side effects. Phytoestrogens, specifically isoflavones, are plant compounds considered chemoprotective, with estrogen-like properties due to their conformational similarity to 17β-estradiol, and can be used as an alternative therapy for hormonal disorders [4,5]. Isoflavones are phenols that enable the activation of estrogen receptors (ER) and the regulation of gene expression in the cell nuclei of target tissues [4]. Isoflavone dietary supplementation may reduce the frequency of hot flashes (10-20%) in menopausal women [6]. Isoflavones have also been isolated from non-soy Pueraria species. Compounds from the genus Pueraria are structural analogs of the estrogen 17β-estradiol in the human body and are useful as an alternative therapy to treat menopausal symptoms [4,7]. Tropical kudzu or Tunggak bean, Pueraria phaseoloides, has been used in Chinese herbal medicine for centuries. However, no research on the effects of non-soy (Pueraria phaseoloides) isoflavone supplementation in menopausal women has been performed. Therefore, this study aimed to evaluate the effects of a local non-soy (Pueraria phaseoloides) isoflavone on menopausal symptoms. 
Study Design This was an experimental study with a double-blind, randomized clinical trial (RCT) design applied to menopausal women in Malang, with samples obtained from each of Malang's ten subdistricts. The research was conducted over seven months, from April to November 2003. Participants Menopausal women from Malang were the subjects of this research. The inclusion criteria of this study were (1) RCT, (2) perimenopausal or postmenopausal women experiencing menopausal symptoms as measured by the Kupperman Index Symptom Severity Score (SSS), and (3) intervention with an oral non-soy (Pueraria phaseoloides) isoflavone. Exclusion criteria were women who received hormone replacement therapy (HRT), had chronic diseases or breast and/or endometrial cancer, and subjects suspected of being hypersensitive to estrogenic supplementation. Randomization and Interventions A double-blind, randomized sample of menopausal women was divided into control and treatment groups. The twenty-five women in the control group were given placebo milk every day for 12 weeks, while the other 26 women in the treatment group were given skim milk with non-soy (Pueraria phaseoloides) extract containing 67.5 mg of isoflavone, once a day for 12 weeks. The Kupperman Index (KI) was used to grade menopausal symptoms in all subjects using the Symptoms Severity Scoring (SSS) method. Ethics All techniques in this study were carried out in compliance with the appropriate manuals and regulations and were approved by the Health Research Ethics Committee, Faculty of Medicine, Brawijaya University, Malang, Indonesia. Statistical analysis We used SPSS Version 11.0 for Windows to conduct the statistical analysis. We used ANCOVA (analysis of covariance) to examine the differences between the two groups. Results were considered significant at a p-value <0.05. RESULTS A total of 51 samples were collected in this study. 
Based on the results of this study, it was found that after treatment with non-soy isoflavones, the symptoms of hot flushes with sweating, arthralgia and myalgia, fatigue, and headache were reduced by 53.8%, 92.3%, 77%, and 50%, respectively, compared with before treatment (Table 1). Relative risk reduction (RRR) was used for clinical significance scoring. Significant RRR values resulted for myalgia (76.6%), fatigue (55.7%), hot flushes (39.2%), and SSS (68.7%). There was a significant difference in SSS between the treatment (n = 26) and control groups (n = 25), with a p-value of 0.000 (Table 2). DISCUSSION Isoflavones are being used for the alternative 'natural' management of menopausal symptoms; they are analogs of 17β-estradiol and can bind both estrogen receptors α (ERα) and β (ERβ) [7]. One of the plant-derived isoflavone sources is the non-soy Tropical kudzu or Tunggak bean, Pueraria phaseoloides, which contains compounds with estrogenic properties, such as miroestrol, puerarin, deoxymiroestrol, kwakhurin, and members of the coumestrol class [5]. This study evaluated the effects of a local non-soy (Pueraria phaseoloides) isoflavone on menopausal symptoms. The Kupperman Index's Symptoms Severity Scoring (SSS) was used to establish a composite score that multiplied the number of menopause symptoms by their severity at baseline and after 12 weeks of treatment with a non-soy extract (Pueraria phaseoloides) containing 67.5 mg of isoflavone daily. The SSS differed significantly between the treatment and placebo groups (p = 0.000) in this research. In women receiving 67.5 mg of non-soy (Pueraria phaseoloides) isoflavone daily, hot flushes with sweating were reduced by 53.8%, arthralgia and myalgia by 92.3%, fatigue by 77%, and headache by 50% compared with before treatment. Clinically meaningful rating was done using relative risk reduction (RRR) scoring. 
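The relative risk reduction figures follow the standard definition RRR = (risk in control − risk in treatment) / risk in control. The study does not report the raw event proportions behind each RRR, so the sketch below uses purely hypothetical numbers to illustrate the calculation:

```python
def relative_risk_reduction(risk_control: float, risk_treatment: float) -> float:
    """Proportional drop in event risk in the treated group relative to control."""
    return (risk_control - risk_treatment) / risk_control

# Hypothetical illustration only (NOT the study's raw data, which are not reported):
# if 80% of controls report a symptom vs. 20% of treated women,
rrr = relative_risk_reduction(0.80, 0.20)
print(f"RRR = {rrr:.1%}")  # RRR = 75.0%
```

An RRR near 1 means the treatment nearly eliminated the symptom relative to placebo; an RRR near 0 means no relative benefit.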
Significant RRR values resulted for myalgia (76.6%), fatigue (55.7%), hot flushes (39.2%), and SSS (68.7%). The relative decrease in circulating estrogen in menopausal women causes dysfunction of the thermoregulatory nucleus, which produces the symptom of hot flashes with sweating. In this study, hot flashes in the treatment group were reduced by 53.8% after receiving 67.5 mg of non-soy (Pueraria phaseoloides) isoflavone daily for 12 weeks. This is in line with a meta-analysis of 17 trials, which revealed that respondents receiving 54 mg of isoflavones for six weeks to 12 months had a significantly reduced frequency of hot flashes (20.6%) [8]. Furthermore, isoflavone effectively reduced hot flashes and the Kupperman Index at 6 and 12 weeks compared to baseline in a randomized, double-blind study [9]. Another trial found no difference in the frequency or intensity of hot flashes between a treatment group receiving 90 mg of isoflavone daily for 12 weeks and the placebo group [10]. In this study, the symptoms of arthralgia, myalgia, and fatigue were also reduced. Isoflavones have been shown to improve bone health in several trials. Bone-specific alkaline phosphatase (BALP) and osteocalcin (OC) levels were significantly increased, by 20.3%, with 75 mg of isoflavone daily for 12 weeks [8]. This indicates that, like estradiol, isoflavone compounds increase bone formation, promote recruitment and differentiation of osteoblastic precursors, and stimulate osteoblastic activity by activating estrogen receptors [9]. The Kupperman Index (KI) did not differ significantly (p = 0.270) in this study, but the KI-derived Symptoms Severity Scoring (SSS) was significantly improved in the treatment group relative to the placebo group (p-value = 0.000), with an RRR of 68.7%. 
After 4 and 6 months of treatment, a previous RCT study found that isoflavone significantly reduced menopause symptoms on the Kupperman Index (p = 0.0265) in the treatment group compared to placebo [11]. Some limitations of this study should be taken into account: there was a lack of standardization in the dosages utilized and a lack of data on treatment adverse effects. CONCLUSION Compared to placebo, phytoestrogens, namely isoflavones, did not result in a significant change in the Kupperman Index, but did result in a substantial improvement in Symptom Severity Scoring in menopausal women. While isoflavones do not alleviate all menopausal symptoms, they do help reduce sweating, hot flashes, arthralgia and myalgia, fatigue, and headache. Isoflavones should be studied further for alleviating menopausal symptoms, as should their potential long-term side effects.
Levulinic acid from corncob by subcritical water process The production of levulinic acid from corncob was carried out by a subcritical water process in a temperature range of 180-220 °C and reaction times of 30, 45, and 60 min. Acid-modified zeolite was used as the catalyst in the subcritical water process. The ratios between the mass of zeolite and the volume of hydrochloric acid in the modification process were 1:5, 1:10 and 1:15. The optimum values of the process variables in the subcritical water process for the production of levulinic acid from corncob were: a temperature of 200 °C; a 1:15 zeolite-to-acid ratio; and a reaction time of 60 min. The maximum levulinic acid concentration obtained in this study was 52,480 ppm, or 262.4 mg/g dried corncob. Introduction Levulinic acid (4-oxopentanoic acid or γ-ketovaleric acid) is an organic compound, a short-chain fatty acid containing both a ketone carbonyl group and a carboxylic acid group. Levulinic acid is an important platform chemical for the production of various organic compounds. It can be used for the production of polymers, resins, fuel additives, flavors, and other high-value organic substances. This chemical can be produced through several routes [1][2][3][4][5][6][7], and one of the most promising processes is the dehydrative treatment of biomass or carbohydrates with various kinds of acids. Biomass can be used as the precursor to produce levulinic acid and other organic chemicals. The use of biomass as the raw material for the production of levulinic acid on a commercial scale was developed by Biofine Renewables [3,7]. The Biofine process consists of two stages: the first stage is the production of 5-hydroxymethylfurfural (HMF), while the second stage is the production of levulinic acid [3]. 
Several studies have reported that various types of homogeneous as well as heterogeneous catalysts have been used for the preparation of levulinic acid from lignocellulosic biomass [2][3][4][7][8][9]. Usually, homogeneous catalysts are more effective than some heterogeneous catalysts; however, the drawbacks of using homogeneous catalysts for levulinic acid production are associated with corrosion of the equipment, environmental problems, and re-use of the catalyst. One of the advantages of using a heterogeneous catalyst for the production of levulinic acid is that it can be easily recovered and reused [3]. Zeolites have been used as catalysts or catalyst supports in many reaction systems. The properties of zeolites, such as porosity, the type and amount of surface acidity, and the type of structure, greatly influence the selectivity and catalytic performance of these materials. A number of synthetic zeolites have been used as catalysts for levulinic acid production; however, zeolites with low acidity and porosity gave poor catalytic performance in the conversion of sugars into levulinic acid [3]. Zeolite-type materials, such as faujasite and mordenite, have been used for the synthesis of levulinic acid from C6 sugars and cellulose [6,8,10,11]. Some agricultural wastes and other lignocellulosic materials have potential application as precursors for levulinic acid production [12]. The production of levulinic acid from agricultural waste materials involves two critical process steps: the first is hydrolysis, in which the hemicellulose and cellulose are converted into C5 and C6 sugars; the second is dehydration, in which the C5 and C6 sugars are dehydrated into levulinic acid and furan derivatives [12]. 
In this study, the production of levulinic acid from corncob was conducted under subcritical water conditions using acid-modified zeolite as a heterogeneous catalyst. The subcritical water (SCW) process is an environmentally friendly method, which can be applied in various applications, such as extraction, hydrolysis, and wet oxidation of organic compounds. Subcritical water is defined as hot compressed water (HCW), or hydrothermal liquefaction, at a temperature between 100 and 374 °C under high pressure to maintain water in the liquid form [13]. Under these subcritical conditions, water acts as both solvent and catalyst for the hydrolysis of cellulose and hemicellulose in the corncob. The use of acid-modified zeolite increases the acidity of the system, which increases the hydrolysis and dehydration reaction rates and subsequently the yield of levulinic acid. To the best of our knowledge, no study has used the subcritical water process combined with acid-modified zeolite as the catalyst for the production of levulinic acid from a lignocellulosic waste material (corncob). The optimum conditions for the production of levulinic acid from corncob were determined by response surface methodology (RSM). Experimental Materials Corncobs used in this study were obtained from a local market in Surabaya, East Java, Indonesia. Prior to use, the corncobs were repeatedly washed with tap water to remove dirt. Subsequently, the corncobs were dried in an oven (Memmert, type VM.2500) at 110 °C for 4 h. The dried corncobs were pulverized into powder (20/60 mesh) using a JUNKE & KUNKEL hammer mill. The ultimate analysis of the corncob was determined using a CHNS/O analyzer model 2400 from Perkin-Elmer, while the proximate analysis was conducted according to ASTM procedures. The results of the ultimate and proximate analyses of the corncob are summarized in Table 1. The natural zeolite used in this research was obtained from Ponorogo, East Java, Indonesia. 
The purification of the natural zeolite was conducted using hydrogen peroxide solution (H2O2) at room temperature (30 °C) to remove organic impurities. The purified zeolite was then pulverized to a particle size of 40/60 mesh. The chemical composition of the purified natural zeolite was SiO2 (60.14%), Al2O3 (12. All chemicals used in this study, such as sodium hydroxide (NaOH), hydrochloric acid (HCl), hydrogen peroxide (H2O2), the standard reference of levulinic acid, etc., were purchased from Sigma-Aldrich Singapore and used directly without any further purification. Natural zeolite modification The natural zeolite was modified using hydrochloric acid solution (2 N). The ratios between the zeolite powder and hydrochloric acid were 1:5, 1:10, and 1:15 (weight/volume). Thirty grams of zeolite powder were mixed with the corresponding volume of HCl solution and transferred into a round-bottom flask. Subsequently, the mixture was heated at 70 °C under reflux and continuous stirring at 500 rpm for 24 h. After the modification was complete, the acid-modified zeolite was separated from the mixture by vacuum filtration. The solid was repeatedly washed with distilled water to remove excess HCl. The acid-modified zeolite was dried in an oven at 110 °C for 24 h to remove free moisture. The modified zeolite was then calcined in a furnace at a temperature of 400 °C for 4 h. Delignification process The delignification process was carried out by soaking corncob powder in 20% NaOH solution. The ratio between solid and solution was 1:10 (weight/volume). The delignification process was conducted at a temperature of 30 °C under constant stirring (500 rpm). After the process was complete (24 h), the treated corncob was separated from the liquid by vacuum filtration. The biomass was repeatedly washed with distilled water until the pH of the washing solution was around 6.5-7. Subsequently, the treated corncob was dried at 110 °C for 24 h. 
Conversion of corncob to levulinic acid The preparation of levulinic acid from corncob was conducted in a subcritical reactor system. The system consists of a 150 ml high-pressure stainless-steel vessel, a pressure gauge, an external electrical heating system, a type K thermocouple, and M8 screws for tightening the reactor to its cap. The maximum allowable temperature and pressure of the vessel are 250 °C and 100 bar, respectively. The reaction experiments were conducted at a pressure of 30 bar and three different temperatures (180, 200, and 220 °C). A typical reaction experiment is briefly described as follows: 20 g of corncob powder were mixed with 100 ml of distilled water, and subsequently 0.5 g of acid-modified zeolite was added to the mixture. The mixture was heated until the desired temperature was reached, and during the heating process nitrogen gas was introduced into the system to maintain the water in the liquid state. During the reaction, the mixture was stirred at 300 rpm. After the hydrolysis time was reached (30, 45, or 60 min), the reactor was rapidly cooled to room temperature. The solid was separated from the liquid by centrifugation at 3000 rpm. The concentrations of levulinic acid and other organic substances, such as sugars, organic acids and HMF, were determined by high performance liquid chromatography (HPLC) analysis. Characterization of corncob and zeolite The chemical compositions of the corncob and delignified corncob were determined using thermal gravimetric analysis (TGA). The analysis was performed on a TGA/DSC-1 star system (Mettler-Toledo) with ramping and cooling rates of 10 °C/min from room temperature to 800 °C under continuous nitrogen gas flow at a flowrate of 50 ml/min. The mass of the sample in each measurement was 10 mg. 
The surface topography of the corncob and zeolite catalysts was characterized using a field emission scanning electron microscope (SEM), JEOL JSM 6390, equipped with a backscattered electron (BSE) detector, at accelerating voltages of 15 and 20 kV and a working distance of 12 mm. Prior to SEM analysis, an ultra-thin layer of conductive platinum was sputter-coated on the samples using an auto fine coater (JFC-1200, JEOL, Ltd., Japan) for 120 s in an argon atmosphere. The X-ray powder diffraction (XRD) analysis of the samples was performed on a Philips PANalytical X'Pert powder X-ray diffractometer with monochromated high-intensity Cu Kα1 radiation (λ = 1.54056 Å). The XRD was operated at 40 kV and 30 mA, with a step size of 0.05°/s over a 2θ angle between 5 and 90°. The surface acidity of the natural and acid-activated zeolite was determined by amine adsorption analysis. A brief description of the method is as follows: a known amount (50 mg) of air-dried zeolite or acid-activated zeolite was added to a series of test tubes. Subsequently, different volumes (20-50 ml) of n-butylamine solution in benzene (0.01 M) were added to the test tubes. The test tubes were then tightly stoppered and stored at 30 °C. After equilibrium was achieved, the remaining n-butylamine in the solution was determined by titration with 0.016 M trichloroacetic acid solution in benzene, with 2,4-dinitrophenol as the indicator. HPLC analysis The organic compounds in the aqueous phase of the product from the subcritical water process were analyzed using a Jasco chromatographic separation module consisting of a model PU-2089 quaternary low-pressure gradient pump, a model RI-2031 refractive index detector, and a model LC-NetII/ADC hardware interface system. Prior to injection into the HPLC system, all of the liquid samples were filtered through a 0.22 μm PVDF syringe filter. 
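The amine-adsorption acidity measurement described above reduces to a mass balance: the amount of n-butylamine taken up by the solid equals the amount initially added minus the amount remaining in solution, found by titration. A minimal sketch of that back-calculation, assuming a 1:1 amine/titrant reaction; the numerical inputs below are illustrative, since the actual titration volumes are not reported:

```python
def adsorbed_amine_mg_per_g(c0_M, v_solution_L, v_titrant_L, c_titrant_M,
                            mass_solid_g, mw_butylamine=73.14):
    """n-Butylamine adsorbed per gram of zeolite, from a titration mass balance.

    Assumes remaining amine reacts 1:1 with the trichloroacetic acid titrant
    (2,4-dinitrophenol endpoint), as in the method described in the text.
    """
    n_initial = c0_M * v_solution_L          # mol amine added to the tube
    n_remaining = c_titrant_M * v_titrant_L  # mol amine still in solution
    n_adsorbed = n_initial - n_remaining     # mol taken up by the solid
    return n_adsorbed * mw_butylamine * 1000 / mass_solid_g  # mg amine per g solid

# Illustrative call using the study's stated conditions (0.01 M amine, 20 ml aliquot,
# 0.016 M titrant, 50 mg solid) with a hypothetical titrant volume of 12.3 ml:
print(adsorbed_amine_mg_per_g(0.01, 0.020, 0.0123, 0.016, 0.050))
```

With zero titrant consumed (all amine adsorbed), the function returns the upper bound of roughly 292.6 mg/g for these conditions; real titrant volumes bring the result down to the sub-mg/g range reported in the paper.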
The analysis of monomeric sugars was conducted with an Aminex HPX-87P sugar column (Bio-Rad, 300 × 7.8 mm) using degassed HPLC-grade water flowing isocratically at a rate of 0.60 ml/min. The column was operated at 85 °C. For the analysis of organic compounds, a Bio-Rad Aminex HPX-87H column (300 × 7.8 mm) was used as the separating column. Isocratic elution with aqueous sulfuric acid solution (5 mM) was used as the mobile phase at a flow rate of 0.6 ml/min. The column oven was set at 55 °C. Details of the procedure can be seen elsewhere [12]. Results and discussion To determine the chemical composition of corncob and sodium hydroxide-treated corncob, thermal gravimetric analysis (TGA) was conducted under a nitrogen environment. The TGA curves of both samples are given in Fig. 1. At temperatures between 50 and 200 °C, the weight loss of the corncob and the pretreated corncob was mainly due to the evaporation of both free moisture and bound water. From Fig. 1 it can be seen that a gradual thermal decomposition process with a significant weight loss for both samples (more than 60%) is observed in the temperature range from 250 to 400 °C. This significant weight loss of the biomasses is mainly due to the thermal decomposition of hemicellulose (200-300 °C) and cellulose (300-360 °C) into smaller molecular weight compounds, such as water, carbon dioxide, carbon monoxide, methane, and other organic compounds. Some of the lignin also degraded in this temperature range, mainly due to the breakdown of chemical bonds with low activation energy [12,14]. The breakdown of more stable bonds in the lignin occurred in the temperature range from 400 to 500 °C. At higher temperatures (above 500 °C), the weight loss of both biomasses was insignificant, as seen in Fig. 1. The chemical compositions of the corncob and its pretreated form, determined by the TGA method, are listed in Table 2. 
Because corncob has a high cellulose content, this material is suitable as a raw material for levulinic acid production. The SEM images of the natural zeolite and acid-modified zeolite are shown in Fig. 2. The modification using acid did not change the surface morphology of the zeolite, as indicated in Fig. 2. XRD analysis was used to determine the crystalline structure of the zeolite. In general, the modification using hydrochloric acid did not alter the crystalline structure of the zeolite, as shown in Fig. 3. The total surface acidity of the natural zeolite was 0.517 mg n-butylamine/g, and after modification using hydrochloric acid solution, the total surface acidity increased to 0.815 mg n-butylamine/g. The increase in surface acidity of the acid-modified zeolite is due to the removal of some exchangeable cations (Ca2+, Fe3+ and Al3+) from the framework of the zeolite and their replacement by H+. The production of levulinic acid from lignocellulosic materials involves several complex reaction mechanisms, which also produce several intermediate products. In the hydrolysis process, the cellulose is converted into glucose, while the hemicellulose is converted into hexoses (glucose, mannose, and galactose) and pentoses (xylose and arabinose). In the dehydration process, hexoses are converted into 5-hydroxymethylfurfural (HMF) and pentoses are converted into furfural. The decomposition of HMF produces levulinic acid and formic acid. A byproduct produced during the process is humin, a black insoluble polymeric material. The subcritical water process has unique behavior and has been known as a green process for several applications [13,15,16]. Under high temperature and pressure, water dissociates into H3O+ and OH− ions, and the presence of these excess ions means that the water can act as an acid or base catalyst. The subcritical water hydrolysis of pretreated corncob was conducted either with or without solid acid catalyst addition. 
The subcritical water hydrolysis products are summarized in Table 3. Without the addition of solid acid catalyst, the breakdown of cellulose and hemicellulose into monomeric sugars was significantly low, as indicated in Table 3. At subcritical conditions the ion products (H3O+ and OH−) make the water slightly acidic, and under these conditions water becomes a good solvent for converting cellulose and hemicellulose to sugar monomers. The yield of monomeric sugars (calculated as the amount of monomeric sugar/L solution) in subcritical water hydrolysis without the presence of catalyst increased with the increase of temperature from 180 to 220 °C (from 1.54 to 2.62 g/L), as seen in Table 3. At constant pressure, an increase in temperature decreases the dielectric constant of water and increases the ionization of water into H3O+ and OH−, making the system more acidic. The H3O+ (hydroxonium) ion represents the nature of the proton in aqueous solution; this proton subsequently attacks the β-1,4-glycosidic bonds that link the monomeric D-glucose units in the long-chain polymer of cellulose, yielding C6 sugars as the product. The attack of hydroxonium ions on the linking bonds of the hemicellulose chain yields C5 sugars as the product. With increasing temperature, the amount of hydroxonium ions also increases, and therefore so does the breakdown of these linking bonds. The addition of solid acid catalyst (modified zeolite) into the system significantly enhanced the breakdown of cellulose and hemicellulose into monomeric sugars (clearly seen in the temperature range of 180-220 °C). The addition of the acid-modified zeolite increased the number of protons (hydroxonium ions from the subcritical water and H+ 
from the surface of the acid-modified zeolite); with this excess of protons in the solution, the breakdown of the linking bonds of the cellulose and hemicellulose increased significantly, and as a result the yield of monomeric sugars also increased, as seen in Table 3. In the levulinic acid production process, the C6 sugars were dehydrated to HMF; this intermediate product was subsequently converted into LA and formic acid. The C5 sugars were converted to furfural, and the latter was further degraded into formic acid and other insoluble products [17]. In the first step of the dehydration of glucose, the glucose-fructose isomerization reaction occurs; the fructose is subsequently dehydrated to HMF, which is rapidly converted to LA and formic acid. Temperature plays an important role in the dehydration of glucose into LA: since all the reactions are endothermic, an increase in temperature increases the reaction rates and the yield of products. At temperatures above 180 °C, the glucose-fructose isomerization reaction occurs much faster and more HMF is produced during the process; however, based on the kinetic parameters for the hydrolysis of sugarcane bagasse proposed by Girisuta et al. [17], the formation of LA (the dehydration of HMF) is much faster than the other reactions. As soon as HMF is formed, it is almost instantaneously converted to LA. To obtain the optimum process parameters for levulinic acid production from corncob using the catalytic subcritical water process, response surface methodology (RSM) was employed to analyze the experimental data. The following full quadratic polynomial was fitted to the response by the least squares method (LSM): Y = ao + Σ aiXi + Σ aiiXi² + ΣΣ aijXiXj, where Y is the concentration of levulinic acid (CLA) in the product, ao is a constant coefficient, the ai are the linear coefficients, the aij are the interaction coefficients, and the aii are the quadratic coefficients. Xi and Xj are the coded values of the variables. 
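The full quadratic model above can be fitted by ordinary least squares on the coded factors. A minimal NumPy sketch with synthetic data (the paper's actual design matrix is not reproduced here, and the authors used Minitab rather than Python):

```python
import numpy as np

def quadratic_design(X):
    """Build the full quadratic RSM design matrix for coded factors:
    intercept, linear, squared, and two-factor interaction columns."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                                     # linear
    cols += [X[:, i] ** 2 for i in range(k)]                                # quadratic
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]  # interactions
    return np.column_stack(cols)

# Synthetic example: 3 coded factors (R, T, t) sampled on [-1, 1],
# with a known noiseless response surface to recover.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 3))
y = 40 + 5 * X[:, 0] + 8 * X[:, 1] + 3 * X[:, 2] - 6 * X[:, 1] ** 2

A = quadratic_design(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 3))  # ≈ [40, 5, 8, 3, 0, -6, 0, 0, 0, 0]
```

Because the synthetic response is noiseless, least squares recovers the generating coefficients; with real experimental data, an ANOVA on the fitted terms (as in Table 4) would then separate significant from insignificant effects.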
The independent variables used in this study were the ratio of zeolite to acid (R), temperature (T, °C), and reaction time (t, min). The regression model was calculated using Minitab 16.1.1 statistical software to estimate the response of the dependent variable. Analysis of variance (ANOVA) was employed to confirm the adequacy of the model parameters. The suitability of the model to represent the data was determined by the value of R². The full quadratic model describing the relationship between the effects of the ratio of zeolite to acid (R), temperature (T, °C), and reaction time (t, min) on the concentration of levulinic acid is given as Eq. (2); the p value of the quadratic model (<0.0001) was significant at the probability level of 5% (R² = 0.9614). The first-order effects of the variables R, T, and t on the output parameter (CLA) were significant at the confidence level of 95%. However, the second-order effects of R and t, as well as the interactions between R and t, R and T, and T and t, were insignificant, as indicated in Table 4. Re-arranging Eq. (2) to include only the significant parameters gives the reduced model. The effects of the ratio of zeolite to acid (R), temperature (T) and time (t) of subcritical water hydrolysis on the concentration of levulinic acid are plotted as surface plots in Figs. 4, 5 and 6. All of these parameters have positive effects on the yield (concentration) of levulinic acid. As mentioned before, temperature plays an important role in both the hydrolysis and dehydration processes; with increasing temperature, the formation of levulinic acid (the dehydration of HMF) is much faster than the other reactions. However, if the temperature is too high and the activation energy of humin formation is reached, the degradation of HMF into humin becomes faster than the dehydration of HMF into levulinic acid, and this phenomenon decreases the yield of levulinic acid. 
By increasing the subcritical hydrolysis time, the contact between cellulose/hemicellulose and the ionic products of water (H3O+ and OH-) became more intense and prolonged, so more cellulose and hemicellulose molecules were hydrolyzed into monomeric sugars and subsequently dehydrated into HMF and levulinic acid. The ratio of zeolite to hydrochloric acid also had a positive effect on the levulinic acid concentration: increasing the acid ratio increased the ion exchange between metal cations and H+. The resulting increase of H+ on the surface of the zeolite catalyst also increased the number of protons in the solution, enhancing the breakdown of the linking bonds of cellulose and hemicellulose to produce monomeric sugars. Under acidic conditions and high temperature, these monomeric sugars were dehydrated into levulinic acid. The experimental results of the effects of temperature, reaction time, and zeolite-to-acid ratio (zeolite activation) on the yield of levulinic acid are given in Table 5. Obtaining the maximum yield or concentration of levulinic acid is an important goal of this study in order to establish an efficient process, and it can be achieved by setting all significant parameters at their optimum values. The optimum condition for the production of levulinic acid from corncob through the subcritical water process is depicted in Fig. 7. RSM indicates that the optimum conditions correspond to a coded value of 1 for the zeolite-to-acid ratio, 0.1111 for the hydrolysis temperature, and 1 for the hydrolysis time. To test the validity of the optimum condition obtained from the RSM, an experiment was also conducted using the process variable values from the RSM, and a levulinic acid concentration of 53,989.7 ppm (269.9 mg/g) was obtained.
Since the difference between the experimental and the RSM-optimized value was only 2.8 %, the theoretical optimum values obtained from RSM are considered appropriate. The stability and reusability of a heterogeneous catalyst are crucial issues for industrial application. To examine the stability and reusability of the acid-modified zeolite, the catalyst was recovered from the reaction mixture, re-calcined at 400 °C for 4 h, and reused five times. A reaction temperature of 200 °C, a reaction time of 60 min, and a zeolite-to-acid ratio of 1:15 were used as the reaction parameters for the reusability study. The reusability results for the spent catalyst are depicted in Fig. 8. This figure clearly shows that the yield of levulinic acid gradually decreased after the first run, indicating that the catalyst was gradually deactivated during the reaction cycles. The deactivation of the catalyst during the reaction cycles was due to the leaching of surface acid sites (the acidity of the fresh catalyst was 0.815 mg n-butylamine/g, and after the 5th cycle it was 0.423 mg n-butylamine/g) and the formation of humin on the active sites of the catalyst. Conclusion Corncob was successfully used as a new raw material for levulinic acid production. The production of levulinic acid was conducted under subcritical conditions in the presence of acid-modified zeolite as catalyst. The yield of levulinic acid in the final product was strongly influenced by the zeolite-to-acid ratio, the reaction temperature, and the reaction time. The optimum yield of levulinic acid was 262.4 mg/g dried corncob, obtained at a temperature of 200 °C, a reaction time of 60 min, and a zeolite-to-acid ratio of 1:15.
Comparison of CTX-M encoding plasmids present during the early phase of the ESBL pandemic in western Sweden Plasmids encoding blaCTX-M genes have greatly shaped the evolution of E. coli producing extended-spectrum beta-lactamases (ESBL-E. coli) and add to the global threat of multiresistant bacteria by promoting horizontal gene transfer (HGT). Here we screened the similarity of 47 blaCTX-M-encoding plasmids, from 45 epidemiologically unrelated and dispersed ESBL-E. coli strains, isolated during the early phase (2009–2014) of the ESBL pandemic in western Sweden. Using optical DNA mapping (ODM), both similar and rare plasmids were identified. As many as 57% of the plasmids formed five ODM-plasmid groups of at least three similar plasmids per group. The most prevalent type (28%, IncI1, pMLST37) encoded blaCTX-M-15 (n = 10), blaCTX-M-3 (n = 2) or blaCTX-M-55 (n = 1). It was found in isolates of various sequence types (STs), including ST131. This could indicate ongoing local HGT, as whole-genome sequencing only revealed similarities with a rarely reported IncI1 plasmid. The second most prevalent type (IncFII/FIA/FIB, F1:A2:B20), harboring blaCTX-M-27, was detected in ST131-C1-M27 isolates and was similar to plasmids previously reported for this subclade. The results also highlight the need for local surveillance of plasmids and the importance of temporospatial epidemiological links, so that detection of a prevalent plasmid is not overestimated as a potential plasmid transmission event in outbreak investigations.
ODM-method Plasmids were linearized using CRISPR/Cas9. For targeting blaCTX-M group 1 genes, crRNA with the sequence 5′-CCGTCGCGATGTATTAGCGT-3′ was used, and for targeting blaCTX-M group 9 genes, the sequence 5′-AGAGAGCCGCCGCGATGTGC-3′ was used 1. gRNA was obtained by mixing equimolar amounts of crRNA and tracrRNA (0.5 nmol each, Dharmacon Inc., Lafayette, CO, USA) in the presence of 1× NEB-3 buffer (New England Biolabs, Ipswich, MA, USA) and 1× bovine serum albumin (BSA, 0.1 µg/mL), and incubating at 4 °C for 30 min. To this mixture, Cas9 protein (600 ng, Sigma-Aldrich, St. Louis, MO, USA) was added, and the sample was incubated at 37 °C for 15 min to form Cas9-gRNA complexes. Plasmid DNA (60 ng), together with NEB-3 buffer and BSA, was then added to the tube containing the Cas9-gRNA mixture to a final volume of 30 µl. The mixture was incubated at 37 °C for 1 h to let Cas9 linearize the plasmids containing the gene of interest. In the next step, the barcodes were formed by letting netropsin (Sigma-Aldrich) and YOYO-1 (Invitrogen) bind to the plasmids. Plasmid DNA was mixed with λ-DNA (48,502 bp, New England Biolabs), used as a size reference, plus YOYO-1 and netropsin in a 1.8:1:70 ratio (DNA:YOYO:netropsin) in 0.5× Tris-Borate-EDTA buffer (TBE, Sigma-Aldrich). The mixture was incubated at 50 °C for 30 min. Next, the sample, together with Milli-Q water (total volume 500 µl), was filtered using a 3 kDa Amicon Ultra-0.5 centrifugal filter unit (Millipore) in a tabletop centrifuge (Eppendorf MiniSpin®) set to 13,400 rpm for 17 minutes. The filtrate was removed and the procedure repeated two more times, the last time with 0.05× TBE containing 2 % (v/v) β-mercaptoethanol (BME, Sigma-Aldrich) to protect the DNA from photodamage during imaging. The nanofluidic devices and the fabrication procedure are discussed in detail elsewhere 2,3.
Briefly, the devices consist of four loading reservoirs, with microchannels connecting the reservoirs in pairs. The microchannels, in turn, are connected by 200 parallel nanochannels that are 150 nm wide, 100 nm deep, and 500 µm long. The nanochannels enable stretching of the DNA molecules due to the nanoconfinement. The chip was mounted on a chuck, custom-made to fit on top of an epi-fluorescence microscope (Zeiss AxioObserver.Z1). Images were collected using either a 100× oil immersion objective (Zeiss, NA = 1.46), a FITC filter (488 nm excitation/530 nm emission), and an sCMOS camera (Photometrics Prime 95B), or a 63× oil immersion objective (Zeiss, NA = 1.46) with an additional 1.6× magnifier, a FITC filter (488 nm excitation/530 nm emission), and an EMCCD camera (Photometrics Evolve, 512×512 pixels). The chuck was designed with one pressure inlet to each of the four reservoirs, enabling pressure-driven flow of the DNA molecules. During an experiment, the sample is loaded into one reservoir while the other reservoirs are loaded with buffer. Using pressure, DNA is moved through one of the microchannels and concentrated at the entrance of the nanochannels. With a short pulse of higher pressure, the DNA is pushed into the nanochannels, where it stretches due to the confinement. An image stack of 20 frames (100 ms each) is collected before the DNA is flushed out and a new set of DNA molecules is pushed into the nanochannels.
The images are analyzed using custom-made MATLAB scripts 1. Each DNA molecule is detected, its length (in µm) is determined, and the intensity variation along the molecule is obtained. Images of DNA molecules of similar length are grouped together, and the intensity patterns are compared. Identical cut-sites among several molecules confirm Cas9 restriction and hence the presence of the resistance gene on the plasmid. If several molecules have the same pattern and Cas9 cut-site, the average pattern is calculated and hereafter called the plasmid's barcode. λ-DNA (48,502 bp) is used as an internal size reference to retrieve the length of the plasmids in kilo base pairs (kb). The barcodes from the different plasmids are then compared pairwise to investigate whether some are similar. In this comparison we allow up to 10 % stretch of the plasmid length. If the p-value is 0.01 or lower, the two barcodes are considered similar 4. When a group of barcodes (n ≥ 3) all have pairwise p-values of 0.01 or lower, the barcodes/plasmids are grouped together into an ODM-plasmid group. When comparing an experimental barcode to WGS data, we first create a theoretical barcode from the WGS data with the help of a custom-made MATLAB script 5. The theoretical barcode and the experimental barcode are then compared in the same way as described above for two experimental barcodes 6.
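The pairwise barcode comparison, allowing up to 10 % stretch of the plasmid length, can be illustrated with a simple resampling-and-correlation sketch. This is a simplified stand-in for the published MATLAB analysis: the null-model p-value used to declare similarity is not reproduced here, and a plain Pearson correlation is used instead:

```python
import numpy as np

def best_correlation(b1, b2, max_stretch=0.10, n_steps=11):
    """Best Pearson correlation between barcode b1 and barcode b2,
    allowing b2 to be stretched/compressed by up to +/- max_stretch
    and testing both orientations (a molecule can enter a
    nanochannel either way)."""
    grid = np.linspace(0.0, 1.0, len(b1))
    xp = np.linspace(0.0, 1.0, len(b2))
    best = -1.0
    for s in np.linspace(1.0 - max_stretch, 1.0 + max_stretch, n_steps):
        # sample b2 on a rescaled coordinate grid (clipped to its ends)
        stretched = np.interp(np.clip(grid * s, 0.0, 1.0), xp, b2)
        for cand in (stretched, stretched[::-1]):
            best = max(best, np.corrcoef(b1, cand)[0, 1])
    return best
```

Two experimental barcodes (or an experimental and a WGS-derived theoretical barcode) would be fed in as intensity vectors; in the paper, a pair is called similar only if the comparison clears a p-value threshold of 0.01 against unrelated barcodes.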
Whole-genome sequencing methods Genomic DNA was extracted as previously described according to Marmur et al. 7. DNA samples were quantified with the Qubit® 2.0 fluorometer and the Qubit™ dsDNA BR kit (Thermo Fisher Scientific, Waltham, MA, USA). Quality was determined by analysis of the 260/230 and 260/280 ratios on a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). The distribution of DNA fragment sizes was estimated using a TapeStation 2200 (Agilent Technologies, Santa Clara, CA, USA). DNA samples were sequenced as an external service using an Illumina NovaSeq 6000 S4 (read mode 2×150 bp) (Eurofins Genomics, Germany). In-house MinION Mk1B long-read sequencing (Oxford Nanopore Technologies, United Kingdom) was performed using a rapid barcoding sequencing kit (SQK-RBK004) and a FLO-MIN106 vR9.4 flow cell, during 72 hours on MinKNOW software v4.3.12 (Oxford Nanopore Technologies, United Kingdom). The reads were thereafter base-called and demultiplexed using Guppy v6.0.1 (Oxford Nanopore Technologies, United Kingdom), and the quality was determined using NanoPlot v1.32.1 8. Hybrid assembly of Illumina and Nanopore data was performed using Unicycler v0.4.8 9. Basic quality parameters were determined with the Quality Assessment Tool (QUAST) 10. The genomes were annotated with the Prokaryotic Genome Annotation Pipeline (PGAP) for submission to GenBank 11. Conjugation of plasmids The strains carrying the blaCTX-M-15 and blaCTX-M-27 plasmids 15-1 and 27-1 served as donors, and the tetracycline-resistant E. coli strain CAG18439 served as recipient in filter mating assays.
Equal biomasses of the donor and recipient were mixed on 0.22 µm filters placed on Mueller-Hinton agar (MHA). Filters with the respective donor alone and with the recipient alone were prepared as controls. After incubation at 37 °C overnight, MHA supplemented with either tetracycline plus cefotaxime or with tetracycline alone was used to select for transconjugants and recipients, respectively. Retrieved transconjugants were checked for carriage of the expected CTX-M gene by PCR. Figure S2. Comparison of the 15-1 plasmid with the theoretical barcode of the pEK204 plasmid described by Woodford et al. 21.
Alternative Birthing Positions Compared to the Conventional Position in the Second Stage of Labor: A Review The position in which a woman delivers has a great impact on the ease of delivery. Women's satisfaction with their birthing experience and the care they receive is significantly affected by the fact that giving birth is frequently a challenging experience. Birthing positions refer to the various postures a pregnant woman can assume at the time of delivery. Currently, the majority of women give birth either lying flat on their backs or in a semi-sitting position. Upright positions, which include standing, sitting, or squatting, along with side-lying and hands-and-knees, are less common birth positions. Doctors, nurses, and midwives are among the most important healthcare professionals, having a significant influence in deciding which position the woman will give birth in and on the physiological and psychological effects of the experience of a woman in labor. There is not much research to back up a single best position for mothers during the second stage of labor. This review article aims to review and compare the advantages and risks of common birthing positions and to assess knowledge of alternative birthing positions among pregnant women. Introduction And Background Birthing positions refer to the various postures a pregnant woman can assume at the time of delivery. Delivering a baby is a lot of hard work and a little uncomfortable too. However, the position in which the patient delivers has a great impact on the ease of delivery. Certain positions can make the process of birthing easier during labor. There are a variety of good birthing positions a patient can be in when it is time to push, and it does not necessarily have to be the supine position. Studies have shown that when given the option, women will use a variety of postures, both supine and non-supine [1][2][3].
In Western nations until the 17th century, giving birth while upright was common [4]. When obstetric tools like delivery forceps were developed in the 18th century, women gradually began to use supine positions like the lithotomy position [5][6][7]. Women who have given birth in the past few years report frequently using supine positions for labor and birth [8], even though assisted vaginal births are now considerably less common [9]. Alternatives to the supine position became somewhat more common in the last few decades of the 20th century [10]. After reviewing various studies, the objective of this study is to determine which birthing position is the best and which is the most commonly used, as well as the benefits and risk factors associated with alternative birthing positions. Review Methodology We performed searches in electronic databases via PubMed, Google Scholar, and Cochrane Library. The electronic database search was conducted using the following MeSH terms and keyword combinations: ("Alternative Birthing Positions"[Title/Abstract] (("birth s"[All Fields] OR "birthed"[All Fields] OR "birthing" [All Fields] OR "parturition"[MeSH Terms] OR "parturition"[All Fields] OR "birth"[All Fields] OR "births"[All Fields]) AND ("patient positioning"[MeSH Terms] OR ("patient"[All Fields] AND "positioning"[All Fields]) OR "patient positioning"[All Fields])) AND "labor stage, second"[MeSH Terms]). A hand search was also carried out. In addition, we searched the reference lists for additional studies that might be relevant. The relevant references included in the bibliographies of the studies retrieved from these electronic searches were reviewed. Based on the inclusion and exclusion criteria, 42 studies were finally included in this review for the synthesis of evidence (Figure 1).
Alternative birthing positions, which include squatting, reclining, sitting, or side-lying, have definite psychological and physiological advantages over the conventional posture. Most women currently give birth either flat on their backs (supine), accounting for 68% of births, or in a semi-sitting position (23%). Upright positions (standing, sitting, or squatting) (4%), side-lying (3%), and hands-and-knees (1%) are less common birth positions (Table 1) [8]. Research has indicated that, compared to a supine position, the duration of the second stage of labor is shorter in an upright position (squatting, sitting on a birth stool or in a chair, or kneeling). The descent of the fetus is aided by gravity, and the dimensions of the pelvic outlet are also increased in an upright position, reducing the chance of labor dystocia [11,12]. The need for episiotomies and assisted deliveries was also seen to be reduced with the upright position [13]. A spontaneous vaginal birth is facilitated by hip flexion, such as that experienced during squatting, which dramatically increases the fetal head angle of advancement via the pelvic axis, cervix, and pelvic floor [14]. When labor and delivery occur in a supine position, the likelihood of cesarean section was also seen to be increased [13]. Although frequent position changes relieve fatigue, boost comfort, and improve maternal blood circulation, the certainty of these findings remains ambiguous. The preparation of the birth canal is the primary focus of the first stage of labor, which includes the cervix's dilation and effacement as well as the full formation of the lower uterine segment. After the first stage of labor, the second stage begins, which includes complete dilatation of the cervix and the expulsion of the fetus through the birth canal [15]. The average duration of the second stage of labor is approximately 50 minutes in primigravidae and approximately 20 minutes in multiparous women, but it is highly variable [16].
Let us consider the birthing position assumed by a mother for a vaginal delivery. It is said that women who are moving around tend to have less pain than those who stay in bed. According to the recommendations of the World Health Organization, pregnant women should be given the opportunity to choose the position they want to be in during labor [17]. Changing positions during labor and birth is important for both the mother and the baby, and also makes the mother as comfortable as possible (Table 2; e.g., the dorsal supine position: lying flat on the back with the head and shoulders slightly elevated). Supine Position The most common position assumed worldwide by a mother during childbirth is the supine position, despite evidence against its use [18]. In this position, the woman gives birth on her back; variants include dorsal (lying flat on her back) (Figure 2), lateral (lying on her side), semi-recumbent, and lithotomy. Due to its prevalence, neither medical professionals nor laboring women consider the supine position to be an intervention anymore. Additionally, the presence of a delivery bed in labor rooms subtly informs women that lying flat is "normal" [17]. These results support research which found that midwives thought the supine position was the best, most advantageous, and most well-known birthing position [19]. The supine position was associated with a rise in episiotomies [20]. Second-degree tears did tend to decline; however, this was not statistically significant. When episiotomies and second-degree tears were combined to indicate perineal injury requiring suturing, the rate was greater in the supine position [10]. The rate of instrumental deliveries was higher in the supine position than in the other positions. Estimated blood loss was lower in the supine position, and postpartum hemorrhage incidence was likewise lower [10].
When a woman is in the supine or lithotomy position during labor, her back mostly supports her weight [15]. This forces the woman to fight gravity and puts the fetus at an unfavorable driving angle with respect to the pelvis [21,22]. According to observational studies, lying on one's back when giving birth may have a negative impact on uterine contractions, as the contractions occur frequently but are less effective [21,23], may slow down labor, and, in certain cases, may limit placental blood flow [23]. Lithotomy Position The lithotomy position is used by doctors in many hospitals for both spontaneous and assisted vaginal deliveries. The lithotomy position involves lying on the back with the knees bent, positioned above the hips, and spread apart in stirrups [5]. The lithotomy position provides the doctor with good access to the mother and the fetus during childbirth. However, it may not be the most comfortable position for the patient. It was long the most commonly used birthing position, but recently other options like squatting, birthing stools, and birthing beds are being used more often. Studies suggest that a woman delivering in the lithotomy position can experience more pain in the second stage of labor compared to alternative birthing positions [24]. Complications associated with the lithotomy position include an increased need for episiotomy and an increased chance of forceps delivery or cesarean section [18]. The lithotomy position lowers blood pressure and can increase pain during contractions. It is also associated with an increased risk of perineal injury and more abnormal fetal heart rate patterns [13]. While it is convenient for midwives and obstetricians to monitor the progress of labor and perform hands-on interventions as needed while the woman is in the lithotomy position, questions remain regarding the hazards of such positions (Figure 3) [25,26].
Lateral Position Side-lying positions, often known as lateral positions, include pure side-lying and the exaggerated Sims position (semi-prone) [21]. In the pure side-lying posture, the woman lies on her side, either with her hips and knees flexed and a pillow between the legs or with her legs lifted and supported [21]. In the exaggerated Sims position, the woman lies on her side with her lower arm behind (or in front of) her trunk, her lower leg extended, and her upper hip and knee flexed 90 degrees or more, rolling partially toward her front [21]. Additionally, a variation of the lateral position is the Sims position, which is also referred to as the left lateral position [27]. When a woman is in the second stage of labor, French midwives prefer lateral positions for both epidural analgesia-treated and non-epidural-treated patients [28]. Squatting Position Squats are among the popular birthing positions and are also helpful for the induction of labor. In the squatting position, a woman's feet carry the majority of her weight, yet her knees are visibly bent. She may lean on or pull at a support [5]. The squatting position is frequently viewed as the most natural position, resembling the way chimpanzees rest and possibly many of us do as well [13]. In this position, gravity plays a role during labor as well as delivery. However, maintaining the squatting position for a long period is difficult for pregnant women and is thus considered one of its major drawbacks [29]. During the bearing-down phase and delivery, it is quite a challenge for the laboring woman to maintain a squatting posture, despite research suggesting that it is a natural and, thus, ideal position [29]. This position can put a lot of pressure on the knees and back of the mother and is not easy to maintain. Therefore, the creation of supporting tools may be able to address this issue.
According to the findings of a study carried out in Taiwan on the efficacy of an ergonomic ankle-support aid for the squatting position during the second stage of labor, pushing puts less stress on the calf muscles of the laboring woman when she squats with the help of an ankle supporter [29]. Additionally, using a device to aid with squatting decreases the duration of the second stage of labor, decreases pain, and improves perceived pushing efficiency [29]. By widening the pelvis, this position gives the baby greater space to move. It makes pushing easier by letting the body weight press down on the uterus (Figure 4). Birthing Stool There are two types of sitting positions: semi-sitting and sitting upright. In the latter, the pregnant woman sits straight up on a bed, chair, or stool, as opposed to the former, where she sits with her trunk at an angle greater than 45 degrees from the bed [21]. Some published research indicates that certain Western developed nations appear to favor sitting positions more than Asian ones [30,31]. Sitting on a birth seat was the most typical maternal position during the second stage of labor, according to a French study [30]. However, even if they want to, women from various Asian countries have few options for choosing to give birth while sitting down, because these cultures frequently practice lying on one's back during birthing [31]. The upright position on the birthing stool uses gravity to stimulate the baby's downward progress, and the low height of the stool flexes the legs and increases the size of the pelvis. Use of a birthing stool was associated with a higher risk of blood loss greater than 500 ml (Figure 5). FIGURE 5: Birthing stool Author's own creation Birthing Bar During the pushing phase, squatting bars that arch over the bed near the foot and are secured on each side can be useful.
Most labor beds can have a birthing bar attached to them to make it easier to get into a squatting position. The squatting position uses gravity to encourage the baby's downward progress while also expanding the size of the maternal pelvis. When a woman feels a contraction coming, she can lean forward, grab the bar, and pull herself into a squatting position (Figure 6). FIGURE 6: Birthing bar position Author's own creation. Kneeling Position Various kneeling positions are possible, including hands-and-knees and upright kneeling [13]. The woman kneels, leans forward, and balances herself on her fists or the palms of her hands [21]. In comparison to other positions, kneeling positions are less frequent in some Asian countries [31]. If the woman experiences back pain during labor, the kneeling posture may be very helpful because it encourages the baby's movement. Since there is no external pressure on the pelvis, the woman can move more freely while kneeling (Figure 7) [32]. The benefits of giving birth when upright have been well documented. The risk of aorto-caval compression is decreased, the fetus is better aligned, contractions are more effective, and the pelvic outlet is expanded while the woman is in a squatting or kneeling position [10]. Upright positions have been linked to psychological advantages such as decreased pain perception, an increased sense of control, more equitable communication with the delivery attendant, and increased partner involvement [33,34]. During delivery, the use of a particular birthing position also varies with parity. The semi-sitting birth position in bed is more frequently used by multiparous women than by primiparous women [35]. Regional block analgesia frequently restricts a laboring woman's capacity to move into a different position without help [36]. A meta-analysis of the advantages and risks of various positions during the second stage of labor has been done [37].
The authors concluded that any upright or lateral posture was related to a shorter second stage of labor, less intense reported pain, fewer instrumental deliveries, fewer abnormal fetal heart rate patterns, and fewer episiotomies compared to supine or lithotomy positions [10,37]. The lateral birthing position also had the highest percentage of intact perineum (66.6% intact, 28.3% lacerations requiring suturing), while squatting was linked to the largest percentage of lacerations (53.2% lacerations requiring suturing, 41.9% intact perineum), as concluded in an Australian retrospective study that examined the impact of six distinct delivery positions on perineal outcomes, including episiotomy [38]. The lateral recumbent position, with its advantage of avoiding compression of the aorta, the inferior vena cava, or both, is also being used for delivery [13]. Problems for both the mother and the fetus increase when the second stage of labor is prolonged [39,40]. The experience of giving birth is often difficult, and this has a big impact on how satisfied women are with their experience and the care they receive. When a woman is in labor, doctors, nurses, and midwives are among the most important healthcare professionals, having a significant influence on the physiological and psychological effects of the experience. The mother should be helped to find out which birthing position is best suited for her [13]. A study that took place in India shows that around 92% of the nurses working in labor and delivery rooms were aware of upright birthing positions, and most of them, about 83%, believed that women should be given the choice of whether to deliver in an upright or supine position. However, all of the nurses (100%) said that the most commonly used birthing position was the lithotomy position because of the ease and convenience of the doctors and healthcare providers [17].
The understanding of several positions, including standing, squatting, lateral, sitting, and hands-and-knees, was clearly lacking. Among the different alternative categories of birthing positions, about 50% of the nurses were familiar with the squatting position, 37% with sitting, 23% with lateral, 23% with hands-and-knees, and 13% with standing [17]. Some evidence-based guidelines encourage and support women to move and take any position they feel most comfortable with throughout labor and delivery, as opposed to supine or semi-supine positions [41][42][43]. Conclusions There is strong evidence that the second stage of labor should not be conducted with the mother in the supine position. Supine positions are linked to more fetal heart rate abnormalities and fewer spontaneous vaginal deliveries than upright or side-lying positions. When the second stage is prolonged or an expedited birth is necessary, squatting or sitting may be advantageous, while side-lying or hands-and-knees positions may help prevent lacerations. Despite the proven advantages of giving birth in an upright position, most women deliver vaginally lying on their back in the lithotomy, semi-sitting, or semi-recumbent position, and only a small proportion of women use alternative birthing positions. As it is more convenient for healthcare providers to deliver in supine or semi-sitting positions, it is thought that they are the ones who encourage mothers to give birth in these positions. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work.
Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Alpha-Synuclein Pathology in the Submandibular Gland of LRRK2 p.G2019S Mutation Carriers. Background. The presence of intraneuronal aggregates of phosphorylated alpha-synuclein (pAS), the histological hallmark of Parkinson disease (PD), has already been demonstrated in the autonomic nerve fibres that innervate the submandibular gland in approximately 75% of living PD patients. The presence of pAS in the peripheral autonomic nervous system in carriers of LRRK2 mutations has not been studied so far. The objective of the current study is to evaluate the presence of abnormal pAS aggregates in the submandibular gland tissue of LRRK2 p.G2019S mutation carriers. Methods. This is a prospective observational study conducted between 2014 and 2015 at Hospital Clínic de Barcelona, Spain. A random sample of nine asymptomatic LRRK2 (aLRRK2) and 11 LRRK2-associated PD (LRRK2-PD) patients were recruited among a cohort of LRRK2-PD patients and their relatives already identified at our centre. All the participants underwent transcutaneous needle core biopsy of the submandibular gland under ultrasound guidance. The presence of pAS was assessed in all the participants by immunohistochemistry using an anti-Serine 129-phosphorylated AS antibody. Results. Submandibular biopsy material containing glandular parenchyma was obtained in 4 (44.44%) aLRRK2 and in 6 (54.55%) LRRK2-PD patients. Aggregates of pAS were detected in the glandular parenchyma in one of the four (25%) aLRRK2 subjects and in none (0%) of the LRRK2-PD patients. Conclusions. Our study shows that pAS aggregates obtained by needle core biopsy of the submandibular gland are infrequent in LRRK2 mutation carriers but may be detected in asymptomatic mutation carriers.
The low rate of pAS-positive biopsies suggests either a different physiopathology between LRRK2-related and idiopathic PD or that a one-time unilateral submandibular gland biopsy is not the optimal procedure for the study of synuclein aggregation in LRRK2 mutation carriers.
Background
Abnormal aggregates of phosphorylated alpha-synuclein (pAS) are the major component of the intraneuronal inclusions known as Lewy bodies and Lewy neurites, the histological hallmark of Parkinson disease (PD) 1 . Recently, pAS deposits were detected in the submandibular gland in living patients with idiopathic PD 2,3 , and also in the prodromal phase of this condition 4 , providing biological evidence of ongoing disease at its earliest stage. We have assessed the presence of pAS aggregates in the submandibular gland of LRRK2 p.G2019S mutation carriers, both in those with manifest PD (LRRK2-PD) and in asymptomatic carriers (aLRRK2). As LRRK2-related CNS neuropathology is not necessarily associated with pAS aggregates 5 , the detection of pAS in peripheral biopsies in LRRK2-PD cases would identify those with an underlying Lewy-type pathology, which could be important for implementing potential anti-synuclein therapies. In aLRRK2, the presence of peripheral pAS could reflect an underlying synucleinopathy and constitute a risk marker for the later development of manifest PD.
Methods
This is a prospective observational study conducted between July 2014 and May 2015 at the Hospital Clínic de Barcelona, Spain.
The ethical committee at our institution approved the study and all participants gave their written informed consent.
Participants
At our institution, five groups of subjects underwent biopsy of the submandibular gland for research: aLRRK2, LRRK2-PD patients, idiopathic PD (iPD), idiopathic REM sleep behaviour disorder (IRBD) and controls. Results from the IRBD, iPD patients and controls were previously reported 4 . Herein we present the results from the aLRRK2 and LRRK2-PD participants. Asymptomatic LRRK2 mutation carriers and LRRK2-PD patients: nine aLRRK2 carriers of the p.G2019S mutation and eleven LRRK2-PD patients were selected among a cohort of LRRK2-PD patients and their relatives. Participants had already been identified at our centre, had been screened for LRRK2 mutations as previously described 6 , and were positive for the LRRK2 p.G2019S mutation. Participation in the study was proposed to all who were alive and accessible. PD was diagnosed according to United Kingdom PD Society criteria, with the exception that a positive family history was not considered an exclusion criterion 7 . Exclusion criteria in all participants were current or past medical history of disorders affecting the salivary glands (e.g., chronic sialadenitis, abscesses, active neoplasms, Sjögren syndrome), coagulation disorders, and anticoagulant or antiplatelet drug intake. Echogenicity of the substantia nigra was assessed by means of transcranial sonography as previously described 15 . Dopamine transporter imaging with 123 I-2β-carbomethoxy-3β-(4-iodophenyl)-N-(3-fluoropropyl)-nortropane single photon emission computed tomography (DaT-SPECT) was performed as previously described 16 .
Procedures
Submandibular gland biopsy and alpha-synuclein immunohistochemistry of the submandibular gland were assessed as described previously 4 . Briefly, a core needle biopsy was performed in all participants using a commercial 16-gauge needle.
Biopsies were performed unilaterally under ultrasound guidance after subcutaneous infiltration of local anaesthesia. Between two and four punctures were performed by an experienced radiologist who was masked to the clinical status of the participants. After each procedure, specimens were immediately processed and serial 4 µm sectioning of the whole specimens was performed. Three slides were selected from the first, middle or final third of the blank slide set from each subject and were stained with haematoxylin-eosin. Immunohistochemistry was performed on the third set with the most submandibular parenchyma and/or nervous tissue on an automated Dako Autostainer, using an anti-Serine 129-phosphorylated alpha-synuclein antibody (Wako clone pSyn#64, Pure Chemical Industries, Osaka, Japan), as described previously 4 . pAS immunoreactivity in nervous structures within or outside the submandibular gland parenchyma was assessed as present or absent. Neuropathologists were blinded to the clinical condition of each individual at the time of histopathological examination.
Results
Demographic and clinical data of participants are summarized in Tables 1 and 2. Biopsy specimens contained submandibular gland parenchyma in 6 of the 11 (54.55%) LRRK2-PD patients and in 4 of the 9 (44.44%) aLRRK2. The remaining samples contained periglandular connective tissue with a variable amount of vessels and nerve fibres or muscle. Aggregates of pAS were detected in the glandular parenchyma in one of the four (25%) aLRRK2 subjects ( Fig. 1) and in none (0%) of the six LRRK2-PD patients with available glandular tissue. pAS aggregates in the aLRRK2 individual were identified in nerve structures of the connective tissue within the gland, rarely surrounding individual glands. This type of aggregate has been shown to co-distribute with tyrosine hydroxylase-positive sympathetic nerve fibres 4 . None of the aLRRK2 or the LRRK2-PD showed extraglandular pAS aggregates.
Asymptomatic LRRK2 p.G2019S carriers (Tables 1 and 2): the mean age of the aLRRK2 was 48.89 ± 8.43 years. The mean UPSIT score was 33.56 ± 2.19 points, and none of the nine aLRRK2 had hyposmia. None of the aLRRK2 had depression, constipation or RBD by the RBDSQ. Seven of the nine aLRRK2 underwent a DaT-SPECT, which was normal in all of them. Three out of seven (42.9%) individuals who underwent transcranial sonography had hyperechogenicity of the substantia nigra. The asymptomatic LRRK2 mutation carrier who showed pAS pathology in the submandibular glandular parenchyma was a 60-year-old woman. Her score on the NMSQ was 14, with all points coming from the sleep subscore due to insomnia. Her neurological examination was normal, without evidence of parkinsonian signs. An interview detected no history of dream-enacting behaviours, constipation or depression. She had no hyposmia. The echogenicity of the substantia nigra and the DaT-SPECT were normal. At the age of 65, she is still asymptomatic.
Discussion
To the best of our knowledge, this is the first study to assess the presence of pAS pathology in the submandibular gland of LRRK2 mutation carriers. pAS aggregates were found in one of the four aLRRK2 in whom glandular parenchyma was available after transcutaneous needle core biopsy. None of the LRRK2-PD patients had pAS aggregates in the submandibular gland or extraglandular tissues. Submandibular gland parenchyma was obtained in only around 50% of the LRRK2-PD and aLRRK2 individuals. The absence of pAS positivity in manifest LRRK2-PD was not expected, since pAS accumulation in the peripheral autonomic nervous system is thought to reflect an ongoing synucleinopathy, which occurs in the majority of, but not all, LRRK2-G2019S-PD cases. Still, in a study of alpha-synuclein aggregation in cerebrospinal fluid by real-time quaking-induced conversion (RT-QuIC) 17 , the percentage of positive cases in LRRK2-PD was also much lower than in iPD (40% vs 90%).
The pleomorphic pathology of LRRK2-PD linked to the p.G2019S mutation may in part explain the results. Other possible explanations include the possibility that, unlike in iPD, peripheral autonomic nervous system Lewy-type pathology is less prominent or even absent in p.G2019S LRRK2-PD patients. This could be supported by the notion that dysautonomia seems to occur less frequently in p.G2019S LRRK2-PD than in iPD 18 , although no definitive data are available. Also possible is that synuclein pathology might have been present in the peripheral autonomic system at an early disease stage and later migrated centripetally to the central nervous system, becoming undetectable in manifest LRRK2-PD, as has been speculated for idiopathic PD 19 . Finally, a sampling bias with under-representative tissue samples cannot be excluded. The number of submandibular glands with pAS positivity among aLRRK2 was low (25%). Of interest, the percentage of positive cases is close to the proportion of aLRRK2 reported to have misfolded synuclein in the cerebrospinal fluid (18.8%) assessed by RT-QuIC 17 . Factors such as the relatively young age of the patients may also influence these results, since parkinsonism generally appears late in LRRK2-PD. However, even if our patients possibly destined to develop PD might have been disease-free at the time of the biopsy, manifest LRRK2-PD patients were also pAS-negative. The main limitations of our study are the small sample size, which precludes generalization of the results, and the low frequency of glandular tissue obtained with unilateral transcutaneous needle biopsy of the submandibular gland despite the use of ultrasound guidance. In previous studies in living PD patients using a similar biopsy methodology but without ultrasound guidance, submandibular glandular tissue was not obtained in 20-24% of participants 2,3 .
Bilateral transcutaneous needle biopsies of the submandibular gland in PD patients recently seemed feasible and safe, showing better tissue acquisition 21 . Technical refinement of the procedure is needed to improve the ability to obtain glandular parenchyma in living subjects. In conclusion, our study shows that pAS aggregates obtained by needle core biopsy of the submandibular gland may be detected in aLRRK2, suggesting that Lewy-type pathology is already ongoing in a subset of subjects. However, the low rate of submandibular gland tissue obtained by the biopsies and the fact that none of the LRRK2-PD patients showed pAS aggregates suggest that a one-time unilateral submandibular gland biopsy may not be the optimal procedure for the study of pAS in LRRK2 p.G2019S mutation carriers. Other peripheral tissues such as skin or minor salivary glands that are easily accessible, as well as emerging synuclein amplification methods in cerebrospinal fluid 16 , may prove better approaches for the study of synuclein aggregates in LRRK2 disease.
Ethics approval and consent to participate: The ethical committee at Hospital Clínic de Barcelona approved the study and all participants gave their written informed consent.
Figure 1 A: Representative histological image of the submandibular biopsy (haematoxylin-eosin stain). B: Immunohistochemistry for phosphorylated alpha-synuclein shows small aggregates surrounding glandular structures (brown signal, arrows) in areas corresponding to the course of autonomic nerve fibres. Scale bars: 20 μm.
A Single-Ended Transmitter with Low Switching Noise Injection and Quadrature Clock Correction Schemes for DRAM Interface
This paper presents a transmitter with a phase controller for low switching noise injection and a quadrature clock corrector (QCC) for correcting both phase error and duty cycle distortion of the divided quadrature clocks. The phase errors and the duty cycle distortions of the quadrature clocks determine the quality of the output DQS. The proposed QCC simultaneously runs phase correction and duty adjustment of the quadrature clocks for a fast correction time. In order to reduce the power switching noise induced by output drivers, the proposed transmitter transfers the data at different timings using the phase controller, which generates the interpolated quadrature clocks for the even and odd channels. Since the even channel is synchronized with the reference quadrature clocks and the odd channel is synchronized with the interpolated quadrature clocks, the peak switching currents consumed by the output drivers are spread. The proposed circuit has been designed in a 180-nm CMOS process using a VDD of 1.8 V and a VDDQ of 0.6 V, and the target data rate is 3.2 Gbps. The corrected quadrature clocks have a duty cycle distortion of 0.2% and a phase error of 1.18˚ with input clock distortion. The output DQS of the transmitter shows a peak-to-peak jitter of 30.55 ps in the low switching noise injection mode with a phase offset of 122˚, which is improved by 28.8% as compared to the normal mode.
I. INTRODUCTION
The development of the Internet, mobile devices, Internet-of-Things (IoT) technologies and big-data processing has given rise to high-performance computing systems. With the development of computing technologies, the performance requirements of dynamic random access memory (DRAM) have also increased.
So various advanced technologies are adopted in the DRAM interface, such as write and read training, periodic ZQ calibration, and low voltage swing terminated logic (LVSTL) with a separate low supply [1], [2]. In the DRAM interface, parallel communication is more effective than serial communication since a large amount of data should be transferred at high speed. When the same amount of data is transferred between a memory controller and a DRAM, parallel communication can reduce the frequency by the number of data pins. Therefore, in high-frequency operation, parallel communication has an advantage in relaxing design conditions. However, the parallel output drivers operate simultaneously to transmit internal data off-chip and so instantaneously consume a large output current. The large output current induces a supply voltage drop that causes power switching noise and electromagnetic interference (EMI) problems. In order to reduce the switching noise and EMI, a conventional slew-rate control scheme is commonly used in DRAM, which increases the rising and falling times of output signals [3]. But slew-rate control reduces the eye opening of the data, so it should be carefully applied in high-speed interfaces such as DDR4, LPDDR4, DDR5, and LPDDR5. Therefore, an alternative solution is required to decrease the maximum level of switching currents without any reduction in the eye opening. In high-speed DRAM interfaces, an internal quadrature clock architecture can be used to mitigate the internal bandwidth limitation of a DRAM process. But the phase difference among quadrature clocks should be preserved in the quadrature clock architecture because the phase difference determines the data width and the duty cycle of the strobe signal. There are many non-ideal factors resulting in phase error and duty cycle distortion of quadrature clocks, such as input clock distortion, input offset voltage, and process variations.
Therefore, a duty cycle corrector and a clock phase corrector are required in the clock path to minimize the phase error and the duty cycle distortion, which degrade write/read timing margins between a memory controller and DRAMs [4]- [10]. This paper presents a transmitter (TX) with a phase controller for low switching noise injection and a quadrature clock corrector (QCC) for correcting the quadrature clocks. The paper is organized as follows. Section II describes the overview of the proposed transmitter. Section III explains the operation of the phase controller. The QCC for the divided clocks is explained in Section IV. The measurement results of the proposed circuit are shown in Section V. Section VI summarizes this paper.
II. OVERVIEW OF TRANSMITTER
The block diagram of the proposed transmitter is shown in Fig. 1. The proposed transmitter consists of a CK buffer, a divider (DIV), a phase controller, quadrature clock correctors (QCCs), serializers (SERs), and output drivers (DRVs). The transmitter of the DRAM interface has four channels, and each channel consists of 8 data paths and 1 data strobe path [11]. The output driver of each data path consumes a large amount of current because it transmits the data through a back-plane channel, which causes a loss of signal power. In a conventional transmitter for the DRAM interface, since each output driver is synchronized with the single clock that comes from the memory controller, the output drivers simultaneously switch output nodes, which causes switching noise injection and EMI problems. In the proposed transmitter, to solve this problem, each output driver is synchronized with different clocks that are generated by the phase controller. The phase controller generates two 4-phase clock groups (EVEN_CLK and ODD_CLK). The reference phases of the two groups are different from each other. For example, if the reference phase of EVEN_CLK is 0˚, the reference phase of ODD_CLK becomes (0 + Φ)˚.
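The interpolated phase Φ can be pictured with a simple first-order model: two inverters driven by adjacent quadrature phases and merging onto a shared node produce an edge that lands between the two input phases, weighted by their drive strengths. The sketch below is my own idealisation (a phasor average), not the paper's circuit; real interpolation is slew-rate limited, which is why the silicon value drifts with PVT (34-52˚ for a 1:1 ratio rather than exactly 45˚).

```python
import math

def interpolated_phase_deg(w1, w2, phase1_deg=0.0, phase2_deg=90.0):
    """Idealised phase interpolator: mix two clock phases with drive
    strengths w1:w2 and return the resulting phase (vector average)."""
    p1, p2 = math.radians(phase1_deg), math.radians(phase2_deg)
    x = w1 * math.cos(p1) + w2 * math.cos(p2)
    y = w1 * math.sin(p1) + w2 * math.sin(p2)
    return math.degrees(math.atan2(y, x))

# A 1:1 inverter ratio ideally lands midway between 0 deg and 90 deg,
# i.e. 45 deg, consistent with the reported 34-52 deg spread over PVT.
print(interpolated_phase_deg(1, 1))
```

Skewing the ratio (e.g. `interpolated_phase_deg(1, 2)`) pulls the edge toward the stronger input phase, which is how the phase difference between EVEN_CLK and ODD_CLK is set by inverter sizing.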
As a result, the even-channel and odd-channel drivers switch output nodes at different timings, which reduces the switching noise injection. Even if there is a timing skew between the even and odd channels, the data sampling margin is ensured in the receiver because the DRAM interface employs source synchronous signaling, so the data and the DQS are synchronized and aligned in the same channel.
III. PHASE CONTROLLER
The detailed circuit of the phase controller is shown in Fig. 2. Transmission gates (TG) are added to a conventional phase interpolator for on/off control using the external signal (EN). Dummy transmission gates are also added to reduce the delay mismatch of clock paths. Fig. 3 shows the operation of the phase controller according to the EN. The EN controls the phase of ODD_CLK (ICLK_O, QCLK_O, ICLKB_O and QCLKB_O). If the EN is high, each clock of ODD_CLK has the phase interpolated from two adjacent phases of the EVEN_CLK clocks, so the phases of the ODD_CLK and EVEN_CLK clocks are different from each other. If the EN is low, the transmission gates are opened and one input path to ODD_CLK is cut off, so the clock phases of ODD_CLK and EVEN_CLK are the same, as in a conventional scheme. Fig. 4 shows the timing diagram of the output DQ and DQS signals. DQS EVEN/ODD signals are generated by serializers with quadrature clocks. If the proposed circuit is used as a conventional transmitter or the EN is low, the output timings of DQ EVEN /DQS EVEN and DQ ODD /DQS ODD are the same. If the EN is high, the output timings of the even and odd channels are different since the switching timings of EVEN_CLK and ODD_CLK are different. Thus, the switching currents of the output drivers are dispersed and the maximum current peak value of the output drivers is reduced.
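The current-spreading effect can be illustrated with a toy model (my own simplification, not the paper's simulation): treat each channel group's driver switching as a short rectangular current pulse at every quadrature clock edge and sum the two groups over one period. With zero offset the pulses coincide and the peaks add; with an interpolated offset they interleave and the peak supply current drops.

```python
def peak_supply_current(phase_offset_deg, pulse_frac=0.1, samples=10000):
    """Toy model: each channel group draws a unit rectangular current
    pulse at every quadrature edge (every quarter of the clock period,
    normalised to 1). Returns the peak of the summed supply current."""
    def pulse(t, shift):
        # 1.0 while t is within `pulse_frac` of a quarter-period edge
        return 1.0 if ((t - shift) % 0.25) < pulse_frac else 0.0
    shift = phase_offset_deg / 360.0
    return max(pulse(i / samples, 0.0) + pulse(i / samples, shift)
               for i in range(samples))

print(peak_supply_current(0))    # 2.0: both groups switch together
print(peak_supply_current(45))   # 1.0: switching events interleave
</```

With a 45˚ offset the odd-group pulses fall entirely between the even-group pulses, so the summed peak halves, which is the mechanism behind the reported reduction in maximum VDDQ current.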
The maximum peak current reduction using the phase controller does not degrade the rising and falling times of output signals, whereas the conventional slew-rate control scheme increases the rising and falling times of output signals, which reduces the eye opening of the data. However, in a low-speed mode, the slew-rate control scheme is effective from a power consumption point of view; thus, the phase controller can be turned off to reduce the power consumption. The phase difference between EVEN_CLK and ODD_CLK is changed by the size ratio of the inverters in the phase controller. The simulated interpolated phase (Φ) variation according to process and temperature variation and the maximum current consumption of VDDQ (0.6 V) according to Φ are shown in Fig. 5. The VDDQ is the separate supply that feeds only the output drivers, for reducing power consumption as in [1]. According to simulation results with PVT variations, if the inverter ratio is 1:1, Φ varies from 34 to 52 degrees. In this range, the reduction in maximum current consumption is at least 42.3% compared to the OFF mode. A control block could be added to set the optimal Φ, but it is not effective since the power and area consumption would increase. Φ obviously changes with PVT variations if a fixed inverter ratio is used; however, the peak current reduction achieved by shifting the data switching timing remains effective. Applying this ratio, the total maximum current consumption of the 4-channel TX, including the proposed clocking circuits, data serialization circuits and the output drivers supplied by VDD and VDDQ, is reduced by 23% in the low noise injection mode as compared with the normal mode, as shown in Fig. 6.
IV. QUADRATURE CLOCK CORRECTOR
The block diagram of the QCC is shown in Fig. 7. The quadrature clock corrector consists of four duty cycle adjusters (DCA), two duty cycle detectors (DCD), a phase error detector (PED) and cross-coupled latches.
The DCA corrects the duty cycle and phase of the clocks. The duty cycle adjuster in the proposed QCC can correct phase error and duty cycle distortion at the same time. The output DQS is distorted by both the phase errors and the duty cycle distortions of the quadrature clocks. Therefore, the DCD uses the quadrature clocks to generate a control voltage V CTRL for the duty cycle correction of the 4-phase clock, and the PED uses the output DQS signal to generate control voltages V CP and V CN for the phase error reduction. For the IN0 and IN180 clocks, the constant phase control signals V REFP and V REFN are applied to provide reference phases, so these clocks are not affected by the phase control loop.
A. DUTY CYCLE ADJUSTER
The unit cell of the DCA is shown in Fig. 8 (a). The duty cycle adjuster consists of 8 unit cells. In the unit cell, transistors are added to the conventional structure to correct the duty cycle of the input clock [6]. The waveform of the unit cell of the duty cycle adjuster is shown in Fig. 8 (b). As shown in Fig. 8 (b), according to the phase control signals V CP and V CN , the rising and falling times of the signal at the internal node V INT are controlled differently to correct the duty cycle of the clock, according to the duty cycle control signal V CTRL applied to both pull-up and pull-down transistors. For example, if V CTRL is high, the NMOS current I NMOS is larger than the PMOS current I PMOS , so the rising time of V INT is increased and the falling time is decreased. As a result, the duty cycle of the output signal OUT is increased and wider than that of the input signal IN. If V CTRL is low, I NMOS is smaller than I PMOS , so the rising time is decreased and the falling time is increased. As a result, the duty cycle of the output signal OUT is decreased relative to the input signal IN. Fig. 9 shows the schematic of the DCD and the output voltage when the input duty is distorted.
The DCD uses a charge pump structure and consists of two charge pumps [12]. The DCD receives the clock signals passing through the DCA as inputs and generates V CTRL according to the duty cycle of the input signals. If the duty cycle of the input clock IN is greater than 50%, V CTRL is increased, whereas if the duty cycle of IN is less than 50%, V CTRL is decreased, as shown in Fig. 9 (b). The generated V CTRL is applied to the DCA to correct the duty cycle of the quadrature clocks. Fig. 10 shows a schematic of the PED and a timing diagram of the quadrature clocks and the output DQS. The PED consists of the DCD structure and a bias-translation circuit. The PED corrects phase errors between the quadrature clocks using the output DQS signals (DQSp, DQSn). The output DQS signal is distorted if the quadrature clocks have a phase error, as shown in Fig. 10 (b). In the PED, the DCD generates the phase control voltage V C according to the duty cycle of the output DQS, and then the bias-translation circuit generates V CP and V CN for the PMOS and NMOS controls, respectively. Fig. 11 shows the simulated voltage profiles of V CP and V CN in various process corners. V C is translated to an appropriate control voltage insensitive to process variation by the bias-translation circuit. The simulated waveforms of the duty cycle and phase control signals are shown in Fig. 12. The duty cycle control signal is locked at 5 ns and the phase control signal is locked at 36 ns, so the maximum locking time is 58 cycles, which is 4.74 times faster than the conventional QCC. According to the simulation results, when a duty cycle distortion of ±20% and a phase error of ±30˚ are applied to the input quadrature clocks of the QCC, the maximum duty cycle error of the corrected clocks is 2.6%, and the maximum phase error between OUT0 and OUT90 is 3.9˚, as shown in Fig. 13.
V. MEASUREMENT RESULTS
The proposed transmitter circuit has been implemented in 180-nm CMOS technology.
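The DCD/DCA pair forms a negative-feedback loop: the detector integrates the error between the measured duty cycle and 50%, and the adjuster shifts the clock edges accordingly until the loop locks. A behavioural sketch follows; it is my own discrete-time abstraction, and the loop polarity, gain and linear edge-shift model are illustrative assumptions (the real loop is an analogue charge pump acting on transistor currents).

```python
def correct_duty(duty_in, gain=0.5, target=0.5, iters=40):
    """Discrete-time sketch of a duty-correction feedback loop.

    duty_in: uncorrected duty cycle of the input clock (0..1)
    v_ctrl:  integrated control value standing in for the charge-pump
             output voltage (sign convention chosen for illustration)
    """
    v_ctrl = 0.0
    duty = duty_in
    for _ in range(iters):
        v_ctrl += gain * (target - duty)  # detector: pump up/down on error
        duty = duty_in + v_ctrl           # adjuster: edge shift ~ v_ctrl
    return duty

# A clock arriving with 70% duty settles back to ~50% once the loop locks.
print(round(correct_duty(0.70), 3))  # 0.5
```

With this gain the error halves every iteration, a geometric settling that loosely mirrors the finite locking time (58 cycles) reported for the real loop.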
The die micrograph with the magnified layout is shown in Fig. 14. The prototype chip consists of a clock input buffer, a clock divider, a phase controller and 4 channels (2 EVEN channels and 2 ODD channels). The output drivers have been designed using LVSTL with a 0.6-V supply voltage and the other circuits have been implemented with a 1.8-V supply voltage. The whole circuit occupies 0.893 mm 2 including the modeled clock distribution network, while the phase controller and the QCC occupy 0.004 mm 2 and 0.071 mm 2 , respectively. Fig. 15 (a) shows the measured waveform of the quadrature clocks corrected by the QCC. When an input clock with a duty distortion of 10% is applied, the output quadrature clocks show a maximum duty cycle distortion of 5.82% and a maximum phase error of 9.8 degrees without the QCC. But when the QCC is enabled, the duty cycle distortion and the phase error are reduced to 0.2% and 1.18˚, respectively. The measured waveform of the DQS EVEN/ODD is shown in Fig. 15 (b). The output DQS signals are distorted by the reflection caused by the impedance mismatch of the pull-up NMOS transistor with the 0.6-V supply for the LVSTL driver. In the low switching noise mode, the phase controller generates EVEN_CLK and ODD_CLK with different quadrature phases. When the phases of EVEN_CLK are 0˚, 90˚, 180˚, and 270˚ and the interpolated phases of ODD_CLK are 63˚, 153˚, 243˚, and 333˚, the phase difference of DQS EVEN and DQS ODD becomes 126˚. The measured jitters of DQS EVEN/ODD are shown in Fig. 16. In the low switching noise injection mode, the jitter characteristics are improved by 28.8% as compared to the normal mode. The peak-to-peak jitter and the RMS jitter of DQS EVEN/ODD are 30.55 ps and 8.95 ps, respectively. The comparison results between the conventional QCC and the proposed QCC are shown in Table 1.
VI. CONCLUSION
In this paper, a single-ended transmitter has been proposed to minimize the switching noise injection.
The transmitter uses the phase controller to generate two groups of quadrature clocks with different reference phases. In order to spread the peak switching currents of the parallel output drivers, the even channel is synchronized with the reference quadrature clocks and the odd channel is synchronized with the interpolated quadrature clocks. Since the pulse widths of the output DQ and DQS are determined by the quality of the quadrature clocks, the QCC has been implemented in the proposed circuit. The proposed QCC uses a dual-loop structure to achieve a fast correction time, simultaneously adjusting both phase error and duty cycle distortion. The proposed circuit has been fabricated in 180-nm CMOS technology. According to the experimental results, the quadrature clocks show a duty cycle distortion of 0.2% and a phase error of 1.18˚ with the QCC. In the low switching noise injection mode, the peak-to-peak jitter and the RMS jitter of DQS EVEN/ODD are improved by 28.8% as compared to the normal mode.
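As a back-of-envelope consistency check (assuming "improved by 28.8%" means the low-noise-mode peak-to-peak jitter is 28.8% below the normal-mode value), the reported 30.55 ps implies roughly 42.9 ps of peak-to-peak jitter in the normal mode:

```python
low_noise_pp = 30.55        # ps, measured peak-to-peak jitter, low-noise mode
improvement = 0.288         # 28.8% improvement vs the normal mode
normal_pp = low_noise_pp / (1 - improvement)
print(round(normal_pp, 1))  # 42.9 -> implied normal-mode jitter in ps
```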
Pain science and practice as a ‘threshold concept’ within undergraduate and pre-registration physiotherapy education: a jewel of the curriculum?
Keith M. Smart 1,2,3*
Background Threshold concepts describe learning experiences that transform our understanding of a concept. Threshold concepts are variously: troublesome, transformative, irreversible, integrative and bounded.
Purpose The aim of this narrative review is to consider the case for characterising pain science and practice as a threshold concept within undergraduate and pre-registration physiotherapy education.
Summary This article considers the underlying tenets of threshold concepts as they relate to teaching and learning, and the relative merits and limitations of characterising pain science and practice as a threshold concept within undergraduate and pre-registration physiotherapy education from both pedagogical and epidemiological perspectives. By evaluating pain, as it relates to physiotherapy education and practice, according to the five defining characteristics of a threshold concept, then presenting data related to the epidemiology and impact of pain, the worthiness of characterising pain science and practice as a threshold concept will be discussed and further debate invited.
Background
Threshold concepts have been defined as "concepts that bind a subject together, being fundamental to ways of thinking and practising in that discipline" [1]. They essentially describe learning experiences that transform our understanding of a concept. Understanding threshold concepts begins with their five defining characteristics:
1. Troublesome: involving knowledge that is conceptually difficult, counter-intuitive or alien;
2. Transformative: involving knowledge that, once understood, causes a significant shift in the understanding of a concept;
3. Irreversible: involving knowledge that is unlikely to be forgotten or unlearned;
4. Integrative: involving knowledge that reveals and demonstrates the interrelatedness of concepts;
5. Bounded: involving knowledge that has frontiers bordering with thresholds into new conceptual areas, or knowledge that defines a particular conceptual field, creating a specific space of expertise within each discipline [5].

In addition to the pedagogical requirements of threshold concepts, the epidemiological data concerning the prevalence, impact and cost of pain/chronic pain from a societal perspective may usefully inform the discussion as to the relative merits of characterising pain as a threshold concept within undergraduate and pre-registration physiotherapy education. 'Pain', in this context, refers to the sensory and emotional experience of clinically encountered pain [6] as experienced by people attending for healthcare.

Purpose To the best of the author's knowledge, the merits of characterising pain science and practice as a threshold concept in physiotherapy education have not been previously discussed. The aim of this debate article is to consider the case for characterising pain science and practice as a threshold concept within undergraduate and pre-registration physiotherapy education. Pedagogical and epidemiological perspectives will be considered.

Pain science and practice as a threshold concept: the pedagogical perspective Pedagogically, the appropriateness and worthiness of designating pain as a threshold concept can be determined by the extent to which it satisfies the five defining characteristics of a threshold concept.

The troublesome nature of pain science and practice The complex and troublesome nature of the science and management of pain has been succinctly characterised as 'The Puzzle of Pain' [7]. Pain science and practice could be considered troublesome to study and understand from a number of perspectives. Firstly, underlying its complexity is the multidimensionality of the pain experience, with its perceptual (i.e. sensory-discriminative, affective-motivational, and cognitive-evaluative), as well as ontological (i.e.
to do with its nature), epistemological (i.e. to do with ways of knowing), linguistic (i.e. to do with language as a conduit to human interaction) and existential (i.e. to do with meaning and purpose) dimensions [8][9][10][11]. These dimensions can be incredibly challenging to understand and reconcile educationally and clinically.

Secondly, the perception and experience of pain is inherently individual and subjective, and conflicts can exist between patients' self-reported experiences and clinicians' understanding of the clinical significance of pain [12]. In short, it is not possible for a clinician to 'know' another person's pain.

Thirdly, clinical presentations of pain can be complex and troublesome to understand and treat. For example, interpretations of pain based on the biomedical model, which assumes that all pain is caused by injury or pathology, that the severity of pain is proportionate to the underlying cause, and that treating the injury/pathology should be accompanied by relief of pain, do not explain either the complexity or variability inherent within many clinical presentations of pain where [10,13]: i) pain is reported in the absence of any clearly identifiable pathology; ii) pain is reported to persist after healing or resolution of pathology; iii) pain is, paradoxically, absent despite evidence of injury or pathology; iv) the severity of self-reported pain appears, from the clinician's perspective, to be at odds with the severity of injury or pathology; v) patients' reports of pain severity in response to similar severities of injury differ greatly; vi) relationships between pain, impairment and disability are unpredictable and incongruous; vii) pain severity is discordant with medical investigations (e.g., radiological imaging); viii) patients' pain responses to identical interventions for the same injury or pathology are highly variable.
Encountering such variations in clinical presentations of pain could be seen as troublesome for students (and clinicians) to understand [13].

Fourthly, both the International Association for the Study of Pain (IASP) and the European Pain Federation (EFIC) recommend that physiotherapists should assess pain from a biopsychosocial perspective (i.e. the biological, psychological and social dimensions of the pain experience) [14,15]. However, evidence suggests that some physiotherapists have difficulty applying the biopsychosocial model of pain in clinical practice, in part because of its perceived complexity [16][17][18][19]. Others have sought to demonstrate the complex and challenging nature of pain by deconstructing the biopsychosocial and other contemporary models of pain and inviting the acceptance of pain as potentially insoluble [10].

Together, these assertions highlight the troublesome nature of clinically encountered pain as a focus for teaching and learning.

Knowledge and understanding of pain science and practice is transformative Pain science and practice has evolved significantly over the last few decades and these developments, it has been suggested, have had a significant impact on physiotherapy theory and practice [20]. Parker and Madden [20] argue that developments in pain sciences have shifted physiotherapists' understanding of pain and approaches towards its assessment and treatment, expanded physiotherapy practice, elevated the status of the physiotherapy profession and led to the development of interdisciplinary teams in which physiotherapists play a prominent role.
Theoretically, learning about contemporary approaches to understanding, assessing and managing pain, such as the biopsychosocial model of pain and pain mechanisms-based approaches, may be transformative for undergraduate/pre-registration physiotherapy students as they encounter clinical presentations of pain. Such approaches may provide more sophisticated explanations for pain, and its assessment and treatment, beyond those offered by the more reductionist biomedical model [13].

A recent systematic review and meta-analysis found evidence of improved pain-related knowledge and attitudes among students and qualified health care professionals (including physiotherapists), and an increased likelihood of clinical behaviour more in keeping with evidence-based practice, in response to biopsychosocially focused education strategies [21]. And a recent qualitative evidence synthesis found evidence that biopsychosocial-oriented training for qualified physiotherapists changed the way some considered musculoskeletal pain and its management, shifted parts of their practice to a more biopsychosocial framework, improved their confidence in managing musculoskeletal pain and made their work more rewarding [17].

An expanding body of evidence shows that a brief pain neuroscience education session can improve undergraduate physiotherapy students' pain knowledge in the short term and positively shift their pain attitudes towards people in pain in the short and medium term [22][23][24][25][26].

Collectively, these findings suggest that modern pain education is capable of at least changing, if not transforming, physiotherapists' pain-related clinical knowledge and practice.
Knowledge and understanding of pain science and practice is irreversible While there is some evidence to show that appropriate pain education can improve pain-related knowledge and attitudes associated with the biopsychosocial aspects and neurophysiology of pain among undergraduate physiotherapy students (as described above) in the short and medium term, the extent to which such knowledge and attitudes are 'irreversible', that is, unlikely to be forgotten or unlearned, is not known. Previous studies investigating pain neurophysiology [22][23][24][25][26] or biopsychosocial-based pain education [21] rarely, if ever, employ long-term follow-up.

Evidence from qualified physiotherapists shows that for some, their clinical reasoning (and practice) remains, to some extent, grounded within the biomedical model of pain despite education in, and knowledge and experience of, biopsychosocial approaches. The reasons for this are not fully known and could reflect either difficulties in applying such knowledge, as has previously been shown [16][17][18][19], or the loss or replacement of knowledge. Further research could explore this.

Currently, the extent to which physiotherapists' knowledge and understanding is irreversible is not known.

Pain science and practice is integrative Clinically encountered pain is ubiquitous across healthcare-related settings, disciplines, specialisms and conditions. Therefore, knowledge and understanding associated with pain science and clinical practice has the potential to reveal and demonstrate to physiotherapy students its interrelatedness with a myriad of other health-related constructs, concepts and body systems [27]. The literature is replete with evidence of how the sensory and emotional experience of pain is interconnected with emotions [28], cognitions [29], disability [30,31], the social environment [32], risk factors [33], as well as the nervous [34], immune [35], endocrine [36], stress [37,38] and cardiovascular systems [39].
A knowledge and understanding of pain developed through integrative pain education could help physiotherapy students begin to appreciate the interrelatedness of pain and develop meaningful connections between pain and broader health-related concepts. Future research could explore this.

Pain as a bounded concept At the same time as being highly integrative, pain science and practice is internationally recognised as a scientific and clinical discipline, as represented by the International Association for the Study of Pain (see https://www.iasp-pain.org/advocacy/iasp-statements/desirable-characteristics-of-national-pain-strategies/). Furthermore, the assessment and management of pain are bounded by core outcome sets [40], clinical guidelines [41] and standards of care [42], and contribute to an international disease classification system that considers chronic pain to be a condition in its own right and not solely a symptom of diseases and injuries [43].

Collectively, these findings confirm pain as a particular clinical specialty that has created a specific space of expertise within and between medical and scientific professions and disciplines.

Pain science and practice as a threshold concept: the epidemiological perspective Understanding pain and its clinical presentations is vital given its prevalence and adverse personal and socioeconomic impact. Approximately 20-30% of the adult populations of Europe and the United States of America are affected by chronic (typically ≥ 3 months in duration) pain [44][45][46][47]. Pain and pain-related conditions (e.g., low back and neck pain) are leading causes of disability and disease burden globally [48]. Two pain-related conditions (arthritis and back pain) are included within the top 10 most common conditions for which consultations are sought in primary care globally [49].
Chronic pain can have a profound adverse impact on the daily activities, quality of life and mental health of those who suffer with it, together with wider consequences for home, work and social life [50,51]. The economic costs arising from healthcare expenditure, lost work productivity, absenteeism and early retirement secondary to chronic pain can be enormous to nations, running into billions annually [44].

In light of these findings, chronic pain is becoming increasingly viewed as a public health concern [52][53][54]. Data demonstrating the extent and impact of pain/chronic pain globally could be used to support the case for characterising pain science and practice as a threshold concept within undergraduate/pre-registration physiotherapy education.

Pain science and practice education within physiotherapy programmes Given that (chronic) pain is common and costly, and that trainee physiotherapists, regardless of clinical speciality and setting, are frequently confronted with clinical presentations of pain as they undertake clinical placements, it is incumbent on physiotherapy educators to ensure that student physiotherapists acquire the necessary knowledge and clinical skills required to understand, assess and manage it optimally.

Evidence suggests that pain education within undergraduate healthcare training programmes, including physiotherapy, has long been insufficient and could be improved in order to meet best practice standards [55][56][57][58][59][60].

Guidelines for incorporating pain education into undergraduate curricula for healthcare professionals have been published [61,62], and reforming physiotherapy curricula to support students to develop clinical competencies based on current pain neuroscience has been advocated [63].
Pain curricula to improve pain education within undergraduate/pre-registration physiotherapy programmes and across the professional lifespan have been developed by the IASP [13] and EFIC [14] respectively. The IASP curriculum has subsequently informed the development and revision of undergraduate physiotherapy pain education in the United States [63] and Australia [64].

Recognition of pain science and practice as a threshold concept could provide the impetus to improve pain-focused teaching and learning within undergraduate and pre-registration physiotherapy programmes and encourage others to implement guidelines and revise the nature and extent of pain education and content within their curricula.

Limitations This article presents the perspective of one academic and clinical physiotherapist with a special interest in pain science and practice. It is hoped that this article might stimulate debate within the profession, among physiotherapy students, educators, clinicians, researchers, managers and professional regulators, as to the relative merits of characterising pain science and practice as a threshold concept within undergraduate and pre-registration physiotherapy education and, potentially, across the professional lifespan. It may also stimulate wider debate regarding the identification of those threshold concepts upon which physiotherapy education and training could or should be based.

Understanding of threshold concepts continues to evolve and there is currently no agreement on how they should be identified or designated. For example, questions such as 'How many of the aforementioned five characteristics should a concept possess to be considered a threshold concept?' or 'Are some characteristics more important than others?'
remain unanswered [65]. Interpretations of the criteria vary, and additional characteristics associated with threshold concepts, such as being 'discursive' and 'reconstitutive', have also been described [65]. As such, methods for the identification and designation of threshold concepts remain ambiguous and somewhat arbitrary [66,67], although various research methods, such as consensus-building processes, could be employed to facilitate this.

Also, there are no known and accepted ways to 'measure' or 'judge' the extent to which a concept satisfies the defining criteria of a threshold concept, i.e. the extent to which a given concept is troublesome or transformative. Consequently, the identification or designation of threshold concepts remains problematic, although a framework to assist educators to identify and embed threshold concept knowledge into their programmes has recently been described [68].

Furthermore, having knowledge of pain science and practice in no way guarantees that patients' care and outcomes will be enhanced [69].
Summary The relative merits of characterising pain science and practice as a threshold concept within undergraduate and pre-registration physiotherapy education have been considered. Pedagogically, it appears that pain science and practice is, to varying degrees, troublesome, transformative, integrative and bounded. The extent to which it is irreversible is not known. Epidemiologically, it could be argued that the prevalence, impact and costs associated with pain/chronic pain from a societal perspective are of sufficient magnitude to support the characterisation of pain science and practice as a threshold concept. In the absence of accepted methodologies with which to identify threshold concepts, this paper presents the perspective of a single author. Recognition of pain science and practice as a threshold concept could provide the impetus to improve pain education within undergraduate and pre-registration physiotherapy. Wider debate regarding the relative merits of characterising pain science and practice as a threshold concept would be welcome.
Fully undrained cyclic loading simulation on unsaturated soils using an elastoplastic model for unsaturated soils Several researchers have reported that Bishop's mean effective stress decreases in unsaturated soils under fully undrained cyclic loading conditions, and unsaturated soils are finally liquefied in a similar manner as saturated soils. This paper presents a series of simulations of such fully undrained cyclic loading on unsaturated soils using an elastoplastic model of the unsaturated soil. This model is formulated using the Bishop's effective stress tensor incorporating the following concepts: the volumetric movement of the state boundary surface containing the normal consolidation line and the critical state line due to the variation in the degree of saturation, a soil water characteristic curve model considering the effect of specific volume and hysteresis, the subloading surface model, and Boyle's law. Comparisons between the simulation results and the experimental ones show that the model agreed well with the unsaturated soil behavior under cyclic loading. Finally, the typical cyclic behavior of unsaturated soils under fully undrained conditions, such as the mechanism of liquefaction of unsaturated soils, the compression behavior, and an increase in the degree of saturation, are described through the proposed simulation results. 
Introduction Soils are often subjected to cyclic loading under unsaturated conditions in actual fields, such as in the deformation of embankments and reclaimed grounds during an earthquake. In Japan, the Sanriku-Minami earthquake triggered a landslide in the town of Tsukidate on May 26, 2003. An artificial fill in this disaster area, classified as a volcanic sandy soil, lost its effective stress under cyclic loading although the degree of saturation was about 70% [1]. In 2011, the landfills along the northeastern shorelines of Tokyo Bay liquefied because of the Tohoku earthquake, which caused soil subsidence over an area of 42 km² [2]. Until now, questions have been raised about the liquefaction potential of unsaturated soils.

Recently, several researchers conducted cyclic shear tests on unsaturated soils to investigate their cyclic shear behavior. Ishihara et al. [3] studied the effects of relative density and the degree of saturation on the undrained behavior of near-saturated sand through multiple series of monotonic and cyclic triaxial tests. Altun and Goktepe [4] conducted a torsional shear test on unsaturated silty clay to explore the small- and large-strain behavior of unsaturated soils. Unno et al. [1,5] conducted a series of strain-controlled cyclic triaxial tests on unsaturated sand under fully undrained conditions, namely unexhausted air and undrained water, to study the general liquefaction state of unsaturated soils. Okamura et al. [6,7] observed the influence of air and suction pressures on the liquefaction resistance of unsaturated soils through a series of cyclic triaxial tests on a fine clean sand and a non-plastic silt under fully undrained conditions. Tsukamoto et al. [8] conducted a series of undrained stress-controlled cyclic triaxial tests on unsaturated sand in order to examine the changes in the cyclic resistance of silty sand with different grain compositions.
All the studies reviewed here support the hypothesis that the mean value of the Bishop's effective stress of the unsaturated soil gradually decreases under fully undrained cyclic loading conditions and that the soil is finally liquefied in a similar manner as the saturated soil. The decrease in the effective stress under fully undrained cyclic loading is caused by an increase in pore-air and pore-water pressures, and liquefaction of the unsaturated soil will occur when both air and water pressures reach the initial value of the total confining pressure [5]. It can be observed that the liquefaction resistance of the unsaturated soil depends on the initial confining pressure; the higher the initial confining pressure, the higher the liquefaction resistance will be [5,6]. The degree of saturation has also been recognized to have a significant effect on the cyclic behavior of the unsaturated soil: the cyclic shear strength of the unsaturated soil increases with decreasing degree of saturation [1,[3][4][5][6][7][8]. Some results showed that the cyclic stress ratio almost doubled as the degree of saturation decreased from 100% to 90% [4,6]. However, the effect of the degree of saturation on cyclic shear strength is reduced if a low confining pressure is exerted on the unsaturated soil [6]. The compressibility of the unsaturated soil, which depends on the density and soil particle structure, is another important factor that enhances liquefaction. An unsaturated soil with a low relative density (loose) or a highly compressible soil structure may easily lose its effective stress under cyclic loading [3,5]. Moreover, the difference in the development of pore-air and pore-water pressures, which causes the liquefaction, varies with the volume change characteristics dependent on the grain size compositions [8]. A change in the volume of the unsaturated soil during undrained cyclic shearing is directly influenced by pore air. According to the hypothesis that the individual soil
particles and water are incompressible in comparison with the soil skeleton, a change in the volume of the soil skeleton is assumed to be equal to that of the pore air under fully undrained cyclic loading [5]. The pore air absorbs the generated excess pore pressures, resulting in a reduction in the pore-air volume [6,7]. A reduction in the volume of unsaturated soils during fully undrained cyclic loading causes an increase in the degree of saturation [1,5]. Suction is a mechanism that affects the cyclic behavior of unsaturated soils [7]. It is usually indicated that suction increases the stiffness of the soil. During cyclic loading, suction decreases because the development of air pressure is less than that of water pressure [1,[5][6][7]. It should be noted that liquefaction does not occur when suction becomes zero [5].

The main purpose of this paper is to present a series of simulations of fully undrained cyclic loading on unsaturated soils using an elastoplastic constitutive model for unsaturated soils [9]. This model is formulated using the Bishop's effective stress tensor incorporating the following concepts: the volumetric movement of the state boundary surface containing the critical state line due to the variation in the degree of saturation, a soil water characteristic curve model considering the effects of specific volume and hydraulic hysteresis, the subloading surface model, and Boyle's law. Comparisons between the simulation and experimental results show that the model agrees well with the unsaturated soil behavior under cyclic loading. Finally, the typical cyclic behavior of unsaturated soils under fully undrained conditions, such as the mechanism of liquefaction of unsaturated soils, the compression behavior, and an increase in the degree of saturation, is described through the proposed simulation results.
Basic concepts This section describes the basic concepts applied to formulate a model for unsaturated soils [9], which is used to predict the cyclic behavior of unsaturated soils under fully undrained conditions.

The Bishop's effective stress In this paper, the effective stress of unsaturated soils is defined based on Bishop's effective stress [10] as shown in equation (1):

σ' = σ_net + χ s 1,  σ_net = σ − u_a 1,  s = u_a − u_w,  (1)

where σ, σ_net, u_a, u_w, S_r, and s represent the Cauchy total stress tensor, Cauchy net stress tensor, air pressure, water pressure, degree of saturation, and suction, respectively, and 1 is the identity tensor. χ is a variable given as a function of S_r and assumed to be equal to S_r in this study for simplicity.

Soil water retention curve model considering the effects of density and hydraulic hysteresis As described earlier, the degree of saturation, suction, and a volume change affect the cyclic behavior of the unsaturated soil. Therefore, a proper water retention curve model is necessary to formulate an elastoplastic constitutive model for the unsaturated soil. This study uses a modified water retention curve considering the effects of density and hydraulic hysteresis [9].

To consider the effects of density, the van Genuchten model [11] is modified by introducing a parameter that controls the effect of density (equation (2)).

A water retention curve is generally described as the relationship between suction and the degree of saturation dependent on the drying and wetting histories. We define the drying and wetting curves as in equation (4), where subscripts d and w denote the main drying and wetting curves, respectively. Figure 1 illustrates the differences between the main drying and wetting paths, which are the highest and lowest boundaries of the degree of saturation. The current degree of saturation is the locus of a point between the two main curves, represented by I_w. I_w is defined as the ratio of interior division of the current state between the two reference states on the main curves, expressed by equation (5):

I_w = (S_r − S_r^w) / (S_r^d − S_r^w),  (5)

where S_r^d and S_r^w are the degrees of saturation on the main drying and wetting curves at the current suction.
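As a concrete illustration of equation (1), the mean Bishop's effective stress can be computed from total stress, pore pressures, and degree of saturation. The following is a minimal Python sketch (illustrative only, not the authors' code), adopting the simplification χ = S_r stated in the text; variable names are hypothetical.

```python
def bishop_mean_effective_stress(p_total, u_a, u_w, S_r):
    """Mean Bishop's effective stress p' = p_net + chi * s,
    with p_net = p_total - u_a, s = u_a - u_w, and chi = S_r
    (the simplification adopted in the text)."""
    p_net = p_total - u_a   # mean net stress
    suction = u_a - u_w     # matric suction s
    chi = S_r               # chi assumed equal to S_r
    return p_net + chi * suction

# Example: 100 kPa total stress, u_a = 20 kPa, u_w = 5 kPa, S_r = 0.8:
# p_net = 80 kPa, s = 15 kPa, so p' = 80 + 0.8 * 15 = 92 kPa
print(bishop_mean_effective_stress(100.0, 20.0, 5.0, 0.8))  # -> 92.0
```

As both pore pressures rise toward the total confining pressure during undrained cycling, the net stress and the suction term both shrink, driving p' toward zero.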
Substituting equation (4) in (5) and solving for S_r, we obtain equation (6), in which a material constant controls the effect of suction histories.

Elastoplastic stress-strain relationship for unsaturated soil This section presents an elastoplastic constitutive model for the unsaturated soil [9], formulated with the subloading surface model and the volumetric movement of the state boundary surface containing the critical state line due to the variation in the degree of saturation.

The state boundary surface of the saturated soil, which is the loosest state, can be expressed by

v_NC = N − λ ln(p / p_a),  (7)

where v_NC is the specific volume on the state boundary surface, N is the reference specific volume of saturated, normally consolidated soil under atmospheric pressure, λ is the compression index, p is the mean effective stress, and p_a is the atmospheric pressure. As the unsaturated soil shows a relatively high stiffness and retains a larger specific volume than the saturated soil, the state boundary surface is assumed to shift upward (or downward) with the variation in S_r in the direction of the specific volume axis. The effect of S_r on the volumetric movement of the state boundary surface is represented by a state variable (≥ 0), whose concept is shown in Figure 2. This variable can be expressed as a function of S_r through a material parameter representing the vertical distance between the state boundary surfaces for dried and saturated samples in the compression plane.

To consider the behavior of an overconsolidated soil, the subloading surface concept is applied to the model. According to the subloading surface concept [12], a soil exhibits elastoplastic deformation even in an overconsolidated state and then gradually approaches the normal consolidation plane with the increase in the stress level. An arbitrary specific volume can then be represented with a second state variable r (≥ 0), whose concept is also shown in Figure 2.
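The compression-plane geometry described above can be sketched numerically. Below is an illustrative Python fragment (not the paper's code) using the standard Cam-clay form v_NC = N − λ ln(p/p_a) for the state boundary surface; the upward shift for unsaturated states is represented here by a generic offset psi ≥ 0, whose exact functional form in terms of S_r is given in the original model [9]. The parameter values are assumptions for demonstration.

```python
import math

def v_state_boundary(p, N=1.98, lam=0.09, p_a=101.325, psi=0.0):
    """Specific volume on the state boundary surface:
    v_NC = N + psi - lam * ln(p / p_a).
    psi >= 0 is the upward shift for unsaturated states
    (psi = 0 recovers the saturated surface); N and lam are
    illustrative values, p in kPa, p_a = atmospheric pressure."""
    return N + psi - lam * math.log(p / p_a)

def state_variable_r(v_current, p, **kw):
    """State variable r = v_NC - v: distance of the current state
    below the state boundary surface; r tends to zero as plastic
    deformation develops."""
    return v_state_boundary(p, **kw) - v_current

# An overconsolidated (denser) state sits below the surface, so r > 0:
print(state_variable_r(v_current=1.80, p=101.325))  # r = v_NC - v, here about 0.18
```

Raising psi (lower S_r) lifts the surface, which is how the model lets an unsaturated soil retain a larger specific volume at the same stress.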
State variable r is the difference between the specific volume of the current state and that of the state on the state boundary surface under the same stress. As r may decrease with the development of plastic deformation and finally converge to zero, an evolution law of r is introduced, in which a parameter a controls the effect of density.

A monotonically increasing function of the stress ratio based on the modified Cam-clay model [13] is then applied, and we obtain equation (13), where Γ is the reference specific volume of the saturated soil under atmospheric pressure at the critical state and η is the stress ratio, which is equal to M at the critical state. By substituting the initial state values v_0, r_0, p_0, and q_0 in equation (13), we obtain equation (15). Total volumetric strain is given by equation (16). By substituting equations (13) and (15) in equation (16), we obtain the total volumetric strain in terms of the current and initial states. Elastic volumetric strain can be obtained from the usual elastic relationship

ε_v^e = (κ / v_0) ln(p / p_0),

where κ is the swelling index. Plastic volumetric strain can be determined by subtracting the elastic volumetric strain from the total volumetric strain, as in equation (19). From equation (19), the yield function can be written as equation (20), in which equation (14) is applied. Equation (20) can finally be rearranged to give the yield function in its final form.

The fully undrained condition, namely unexhausted air and undrained water, is the condition in which air and water are unable to drain out of the soil. In other words, the masses of water and air are constant.

In order to simulate the unexhausted air condition, we assume that air is an ideal gas and the temperature is constant. Therefore, Boyle's law, which states that the pressure of a given mass of an ideal gas is inversely proportional to its volume at a constant temperature, can be used as shown in equation (22):

u_a V_a = u_a0 V_a0 (with u_a taken as the absolute air pressure),  (22)
where V_a is the volume of air. A classic equation for solving problems involving three-phase relationships (solid, water, and air) can be used to satisfy the undrained water condition:

S_r e = w G_s,  (23)

where w, e, and G_s are the water content, void ratio, and specific gravity of the soil, respectively.

Finally, an elastoplastic model for unsaturated soils [9] can be formulated by applying these basic concepts in order to predict the cyclic behavior of unsaturated soils under fully undrained conditions.

Simulations A series of simulations of cyclic triaxial tests on unsaturated soils under fully undrained conditions is performed here. The analysis has been carried out using parameters for Tsukidate volcanic sand (a non-plastic sand), which has a specific gravity of 2.478, as shown in Tables 1 and 2. The cyclic triaxial tests, conducted by Unno et al. [14], were performed on two types of unsaturated samples (initial degrees of saturation of 78.9% and 73.5%, with the same initial void ratio of 0.93). In the simulation, the initial state of the cyclic shearing simulation is first set as shown in Table 3. Cyclic axial strain with a loading frequency of 0.005 Hz, as shown in Figure 3, is then applied to the specimens under the unexhausted air and undrained water conditions at a constant confining pressure. Finally, the simulation results, i.e., the time histories of suction, mean effective stress, air pressure, water pressure, and void ratio, have been obtained as shown in Figures 4 and 5. Figures 4 and 5 show the comparison between the experimental results for cases c-2 and c-3 conducted by Unno et al. [14] and their corresponding calculations.
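The two undrained constraints lend themselves to a compact numerical update. The following is a hedged Python sketch (illustrative, not the authors' implementation): Boyle's law conserves the product of absolute air pressure and air volume, while the constant water mass ties the degree of saturation to the shrinking void ratio through S_r e = w G_s. Function names and the example numbers derived from case c-2 (S_r0 = 78.9%, e_0 = 0.93, G_s = 2.478) are used for demonstration only.

```python
def air_pressure_after_compression(u_a0, V_a0, V_a, p_atm=101.325):
    """Boyle's law at constant temperature and air mass:
    (u_a0 + p_atm) * V_a0 = (u_a + p_atm) * V_a,
    with u_a expressed as gauge pressure (the paper reports
    gauge pressures excluding atmospheric pressure)."""
    return (u_a0 + p_atm) * V_a0 / V_a - p_atm

def degree_of_saturation(w, e, G_s):
    """Three-phase relation S_r * e = w * G_s: with water content w
    and specific gravity G_s fixed (undrained water), S_r rises as
    the void ratio e decreases."""
    return w * G_s / e

# Halving the air volume roughly doubles the absolute air pressure:
print(air_pressure_after_compression(u_a0=0.0, V_a0=1.0, V_a=0.5))  # about 101.3 kPa gauge

# Undrained compaction from e = 0.93 toward e = 0.85 raises S_r:
w = 0.789 * 0.93 / 2.478  # water content back-calculated from S_r0, e_0, G_s
print(degree_of_saturation(w, 0.93, 2.478))  # about 0.789 (initial state)
print(degree_of_saturation(w, 0.85, 2.478))  # about 0.863 (after contraction)
```

This is exactly the mechanism by which the simulated specimens both compress (air volume shrinks as air pressure rises) and gain saturation during fully undrained cycling.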
According to Figures 4 and 5, the proposed model can precisely describe the cyclic behavior of unsaturated soils under fully undrained cyclic loading conditions. The unsaturated soil lost its effective stress because of the development of pore-air and pore-water pressures and a decrease in suction, the suction decreasing because the development of air pressure is less than that of water pressure during cyclic shear.

The proposed model can illustrate the fact that liquefaction does not occur when suction becomes zero. Based on the Bishop's effective stress equation, liquefaction of the unsaturated soil will occur when the suction and the net stress both become zero. This situation means that the air pressure, water pressure, and total confining pressure must be equal. Moreover, the proposed model incorporating Boyle's law can capture the compression behavior of unsaturated soils under fully undrained conditions: as the air pressure increases during cyclic loading, the air volume automatically decreases following Boyle's law. The magnitude of the decrease in the void ratio was also predicted accurately.

The proposed model for the water retention curve can predict the increase in the degree of saturation due to volumetric contraction, as shown in Figure 6.
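The liquefaction condition stated here (net stress and suction simultaneously zero) can be expressed as a small check. A hedged Python sketch follows (function names and the tolerance are illustrative, not from the paper); it also demonstrates why zero suction alone does not imply liquefaction.

```python
def mean_effective_stress(sigma_cell, u_a, u_w, S_r):
    """Mean Bishop's effective stress with chi = S_r:
    p' = (sigma_cell - u_a) + S_r * (u_a - u_w)."""
    return (sigma_cell - u_a) + S_r * (u_a - u_w)

def is_liquefied(sigma_cell, u_a, u_w, S_r, tol=1e-6):
    """Liquefied when p' vanishes; for non-negative net stress and
    suction this requires u_a = u_w = sigma_cell, i.e. both pore
    pressures reach the total confining pressure."""
    return abs(mean_effective_stress(sigma_cell, u_a, u_w, S_r)) < tol

# Suction reaching zero alone does NOT liquefy the soil: net stress remains.
print(is_liquefied(100.0, 60.0, 60.0, 0.9))    # -> False (p' = 40 kPa)
# Liquefaction requires both pore pressures to reach the confining pressure:
print(is_liquefied(100.0, 100.0, 100.0, 0.95)) # -> True
```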
Conclusion

An elastoplastic constitutive model for unsaturated soil, which is able to predict the cyclic behavior under fully undrained conditions, has been developed. This model is formulated using Bishop's effective stress tensor and incorporates the following concepts: the volumetric movement of the state boundary surface containing the critical state line due to the variation in the degree of saturation; the soil water characteristic curve model considering the effects of specific volume and hydraulic hysteresis; the subloading surface model; and Boyle's law. The validity of the proposed model is checked through a series of cyclic triaxial tests on unsaturated soils under fully undrained conditions. It is indicated through the simulations that the proposed model properly describes the fully undrained cyclic behavior of unsaturated soils, such as liquefaction phenomena, compression behavior, and an increase in the degree of saturation.

Figure 2. Modeling of the volumetric behavior of the unsaturated soil considering the effects of state variables.
Figure 3. Time history of axial strain during a cyclic shearing process.
Figure 4. Comparison between the simulation results and the experimental results of case c-2 (S_r = 78.9%, s = 6.0 kPa).
Figure 5. Comparison between the simulation results and the experimental results of case c-3 (S_r = 73.5%, s = 14.8 kPa).
Figure 6. Water retention curve during a fully undrained cyclic triaxial test.
Table 2. Parameters for water retention curve.
Table 3. Initial state of cyclic shearing simulation. Note: the mean effective stress is calculated based on Bishop's effective stress equation, and the pressures are gauge pressures, which exclude atmospheric pressure.
Onward migration: An introduction
McCarthy in this special issue). Thanks to his status of Italian citizen, Mukul was able to visit London, to reactivate his family networks and to assess his opportunities in both the labour and housing markets (on the role of "transnational explorative practices" see Dimitriadis in this issue). Once he realized that they could start again, but not from scratch, they moved in 2015:

I chose to come to England because I thought first of all of the future. The future - not mine and of my wife, but the future of my children. Looking around in Italy … knowing that there is a crisis … I could not see any future for them in Italy, even young people [Italians] see no future in Italy … I was afraid for the future of my children. So I came to England for them, because I feel that there are better chances here. (Della Puppa & King, 2018: 1940)

Although the current job situation is not much better than in Italy and the salary is not higher than his factory income, Mukul is currently working as a mini-cab driver and Rokeya as a cultural facilitator in state schools. He has a stronger sense of security and feels that his children have greater opportunities in a more open and tolerant country than Italy (see also Mapril in this special issue). However, after his relocation to the UK and London, the European and British scenario has changed because of Brexit. Unless Mukul and his family apply for settled status before the deadline set by the government, their status of EU citizens may no longer be enough and they will be turned into EU migrants, with the risk of having to make their stay regular through a resident permit. This could also have negative repercussions on the possibility of accessing welfare - a driving force behind the onward migration to the UK of many Italian-Bangladeshis (Della Puppa & King, 2018). The outcome of the referendum of 2016 seems to have accelerated the mobility projects to the UK of many European citizens, afraid of not being able to move there after the country
leaves the EU.¹ In a third vignette, we move with a highly skilled migrant worker to Spain, where football champion Luis Suárez lives and works. Born in Uruguay in 1987, instead of moving to Spain as many South American migrants often do (see McCarthy and Ramos in this special issue), Suárez moved to the Netherlands in 2006, where he won several titles and was named Dutch footballer of the year. In 2011, Suárez transferred to Liverpool; in 2014 he migrated to Barcelona in a transfer worth €82.3 million (£64.98 million), making him one of the most expensive players in football history. In September 2020, Suárez's manager and the Juventus football club were about to agree a further transfer. However, according to FA regulations, football clubs are not allowed to enrol more than two non-EU players per season; because the Italian team had already recruited two extra-EU players, Suárez needed Italian citizenship to make the transfer happen.

As far as the Spanish football association is concerned, Suárez is an EU citizen because his Uruguayan wife is the daughter of Friulians and has an Italian passport, as do their three children - but this is not enough for the Italian football association. Although he has been living in an EU member state for nearly 15 years and his wife is an Italian citizen, Suárez needed a language exam to become a national citizen to play with Juventus. The different regulations between Italy and Spain forced Suárez's entourage to race against time, as the deadline for the transfer was in early October. Suárez had two paths to becoming an EU citizen: the years of work in Catalonia gave him the right to Spanish citizenship, but this possibility was abandoned for reasons of time. The other option was the Italian path, which should have been quicker, as he did not need to start from scratch, having started language courses some years beforehand. On 22 September, Suárez flew to the university for foreigners in Perugia to take a B1 Italian language examination. Although he
passed the examination,² Juventus withdrew from their intention to buy the Uruguayan football player and the transfer was blocked.

These three vignettes are all examples of onward migration and exemplify the different characteristics and issues that this type of migration raises (see also Schapendonk in this special issue). First, they show the different paths and outcomes that onward migration can take. Maslax's case is a typical secondary movement of asylum seekers and migrants who arrive by sea through the different Mediterranean routes. Once in Europe, many do not want to stay in their country of arrival, be it Italy, Greece or Spain, but prefer to move on and reach other countries where they have links and can rely on them to build a new life (Benedikt, 2019; De Genova, 2017; Picozza, 2017). Most of the time, they manage to reach the country aimed at,³ but the outcome of this attempted movement is often a forced return to the first country in which they were fingerprinted following the Dublin Regulation, as happened with Maslax. In this case, onward migration fails as it collides with EU migration policy and its constraints (De Genova et al., 2017; Montagna & Grazioli, 2019). The second case shows an alternative variant of onward migration, which is successful thanks to an instrumental use of European citizenship (see also McCarthy, as well as Morad and Sacchetto, in this special issue). Mukul and Rokeya managed to reach the UK and settle in London, where they feel there are more opportunities for them and their children, notwithstanding that the Bangladeshi community is one of - if not the - most disadvantaged in Britain and is always at the lowest level of social mobility in the country (Gardner, 2010; Peach, 2006; Redbridge Borough, 2004). As several studies show (Della Puppa & King, 2018; King & Della Puppa, 2020, and Della Puppa et al.
in this special issue), since the 2008 economic crisis onward migration from the hardest-hit southern European countries to continental Europe and Britain has been a growing phenomenon, involving thousands of migrants with regular status (see also Cillo in this special issue). As in the case of Mukul and Rokeya's family, they also use European citizenship instrumentally and take up the chances provided by freedom of mobility. Finally, the third case is typically a form of labour migration, aborted by a strict interpretation of the law and the time Suárez would have needed before being granted European citizenship. Although he had the advantage of being married to an Italian citizen and is a highly skilled worker who had lived in EU member states for nearly 15 years, this was not enough for him to move freely across the continent. Italian citizenship is not automatic. Marriage may facilitate it, but those who apply for it have to show that they have command of the language, and tests still have to be successfully passed. The bureaucratic time required to be granted Italian citizenship caused Juventus to withdraw from the transfer.
These different paths depend on policy arrangements across the EU. Therefore, onward migration interrogates EU migration policies and their role in facilitating or preventing mobility. In particular, the opportunity to move onwards and continue with the migratory project is linked to policy measures such as EU citizenship and the Dublin Regulation (Della Puppa and Sredanovic, 2016). EU citizenship is one of the key achievements in terms of intra-EU mobility since the Maastricht Treaty and maintains that all people holding nationality of any of the 28 EU member states are also EU citizens with extra rights and responsibilities (Geddes et al., 2020), including the possibility of intra-EU mobility. Of the three cases looked at here, the only successful movement was made possible because Italian citizenship entailed the right to move across EU member states. For Mukul and his family, as well as for many other migrants who take EU citizenship, the opportunity to move on is a form of protection from the unfriendly attitudes of public authorities, welfare discrimination, inadequate conditions of social reproduction, precarity, unemployment and a lack of social mobility (Della Puppa and Sredanovic, 2016; Kofman & Raghuram, 2018). In this sense, citizenship can become instrumental and be turned into a resource, a form of capital providing migrants with more useful resources for their onward movement (on mobility capital see Della Puppa, Montagna, and Kofman in this special issue). Neither Maslax nor Luis Suárez possessed such capital when they were refused mobility.

As we have seen, onward migration also interrogates the Dublin Convention, with its aim of containing asylum seekers' mobility.⁴
Maslax was prevented from moving freely to another country by this piece of EU policy, which does not allow asylum seekers to choose the country where they apply for asylum. Instead of being allowed to travel to where they can rely on already-existing social networks, asylum seekers are forced to remain in the country where they have been fingerprinted and wait until their application is processed. This irrational mechanism, which was elaborated in order to prevent what has been dubbed "asylum shopping," forces thousands of asylum seekers to illegally cross internal EU borders, increasing the risks to their health and security, including being trapped in smuggler networks. Maslax, who could not rely on family ties,⁵ was one of thousands of people trying to circumvent the Dublin Convention and risking being deported or losing their lives. While many do manage, others face deportation or are stranded in one of the many camps set up across the EU.

Onward migration, whether successful or unsuccessful, is very much an outcome of social capital (Bourdieu, 1986; Coleman, 1990). As the case of Mukul's family shows, friendship and familial ties, as well as knowledge of fellow countrymen who have already migrated to the "new" destination country and their community organizations, constitute an important support for new migratory projects. Similarly, employers may play a role in connecting the different dots of the network, as was supposed to happen for Luis Suárez. They are bridgeheads in the first period of asylum seekers' arrival, "social guides" who help newcomers with the "new" social context, smoothing their access to local and national welfare, public benefits, the labour and housing markets, and the aforementioned associations (Coletto and Fullin, 2019; Dimitriadis et al., 2019; Jokinen et al.
2008). Within the rather broad category of social capital, we can also add the "mobile commons" (Montagna & Grazioli, 2019; Papadopoulos and Tsianos, 2013), that is, all those forms of knowledge, information, mutual care, social relations and solidarity that facilitate migrants' mobility. Maslax's attempt to move onwards relied on the knowledge and information shared at the Baobab Experience camp, where he resided between his arrival in Rome and his departure to northern Europe. Maslax used these resources in order not to be stuck in Italy and to look for new opportunities elsewhere. They constitute what Kaufmann identifies as "motility," that is, those assets that increase people's capacity to be mobile in social and geographical space and that are activated, as in Maslax's case, or re-activated, as in Mukul's and Suárez's cases, to move onwards in the migratory project (Kaufmann et al., 2004).

Finally, the three cases illustrate another recurrent theme in research on onward migration: the search for opportunities as a driving factor (see Dimitriadis as well as Salamońska and Czeranowska in this special issue).
In all three cases, our subjects are aiming for better opportunities, regardless of the huge differences in terms of starting conditions, socio-economic background, resources, etc. (see also Dimitriadis in this special issue). Maslax's aim was to find his friends, who would have given him support in the process of settlement in the EU. He thought Italy could not provide the kind of life chances that other countries could, so he decided to move. There is a mix of agency, networks and pulling factors; they merge in a way that makes it difficult to grasp the prevailing factor (see also Schapendonk in this special issue). Mukul and his family also thought that a different EU country (as the UK was at that time) would give them, and especially their children, more opportunities, better welfare and a friendlier society. Even Suárez's attempted migration was driven by similar factors, although these were mostly economic (better pay) and symbolic (playing in a different team) rather than social (access to better welfare).

This special issue aims to address and explore in more depth some of the themes that emerge in these three vignettes: the role of social and mobility capital, the importance of the policy framework, migration as a multiple path, family and economic constraints, and the role of agency. It stems from a panel we organized at the Migration Conference in 2019 under the title "Onward Migration in a Changing Europe." Not all the panel participants from that conference feature in this special issue, and not all the authors included here were present at the conference. Nevertheless, the panel represents the first important moment when we began to collect case studies and insights into a phenomenon that is not necessarily new historically (Bhachu, 1985), but which has emerged, with disruption, in new forms, shapes and trends in Europe, profoundly modifying its social, economic, political, demographic and cultural balances.
When we thought about this special issue, our interest was not just in the mechanics of onward migration as another form of movement. Our focus is on it as a strategic movement, in which agency - migrants' decision to move either forward or back and forth, as in the case of posted workers, that is, workers who are posted from one EU country to another member country - plays a major role (see also Cillo in this special issue). As the three cases show, we look at the strategic decisions that migrants take to circumvent constraining policies, to challenge and resist unfriendly socio-economic environments, or simply to look for better opportunities.

More specifically, this special issue is structured as follows. In the first contribution, Francesco Della Puppa, Nicola Montagna and Eleonore Kofman give a theoretical introduction to onward migration and provide a review of empirical research in the EU on this topic. Existing studies in this growing area of research have looked at different, often overlapping, dimensions of this type of migration and how it may be an effect of mobility capital or influenced by variables as diverse as economic crises, gender, country of origin, age and skills. In the first part of their review, the authors show how ongoing migration and different types of mobility have been conceptualized. These include transit migration, secondary migration, stepwise international migration, multiple and serial migration, and posted migration. In the second part, the authors examine what has been written on onward migration and identify some common issues emerging from the research, such as the role of socio-economic factors and the importance of different forms of capital, including "migration capital," of which EU citizenship is one. In the concluding section, they highlight some missing areas and discuss how Brexit may impact on onward migration in the EU (see also Sredanovic in this special issue).
Justyna Salamońska and Olga Czeranowska focus in their article on a specific aspect of onward migration. Arguing that migration is a more complex phenomenon than a one-off movement, their article aims to shed light on diverse migration patterns, including those encompassing more than one destination country within the migration trajectory. The authors aim to bridge the gap between the different literatures on repeat and multiple migrations. These strands of literature developed separately, and each took one-off migrations as a frame of reference. In their empirical analysis, based on the European Internal Movers' Social Survey (EIMSS), Salamońska and Czeranowska quantify the volume of one-off, repeat and multiple movement between selected EU countries. Referring to information on migration patterns, they construct three migration types - those of one-off, repeat and multiple migrants - and show how these types are socially structured.

Joris Schapendonk's article takes a critical look at how onward migration is conceptualized and how the lexicon surrounding it frames migratory processes as a staged process, involving a south-north directionality and hinting at a gradual progress for the migrants in question. While wondering why scholars also adopt this conceptual frame and look solely at south-north secondary movement, he argues that this "grand narrative" is politicized and reproduced by the EU's overarching policy frameworks. Based on a qualitative research project that longitudinally followed African trajectories inside Europe, Schapendonk's article provides analytical space to discuss African mobilities in Europe that deviate from the conventional notion of "onward migration" from the peripheries of Europe to the self-declared core of western Europe. Its aim is to offer a counter-narrative of im/mobility dynamics involving zig-zag routes, circulations and shifting horizons.
In her article, Helen McCarthy shows how Latin Americans who had been living in Spain have taken the opportunity of dual citizenship to move onwards to the UK to escape the unemployment caused by the economic and financial crisis of the late 2000s. Drawing on qualitative interviews, the article explores how families seek to negotiate within the opportunity structures afforded to them to make the most of the possibilities for both physical and social mobility. She shows that, while the initial migration from Latin America to Spain enabled Latin American migrants to gain some economic stability and, in some cases, gain citizenship or residency, it often involved periods when families had to cross different countries. Onward migration adds a further layer of complication to this: different legal statuses combined with family dynamics create situations in which some family members become trapped in immobility while they await residency or citizenship papers. McCarthy's article therefore aims to explore how Latin American families negotiate the citizenship constellations they find themselves in.
Christina Ramos' article also looks at Latin Americans migrating to Spain and how they reconfigured their futures after the 2008 economic crisis in that country and its impact on their employment and livelihoods. The crisis initiated a new migratory cycle in Spain, involving increased departures and decreased arrivals. Many of those leaving the country either returned to their countries of origin or migrated onwards. According to Ramos, the new mobilities were not just the result of the 2008 economic crisis, but should be understood within a broader context in which multiple moves are not uncommon. Relying on qualitative data collected in Madrid and London with Ecuadorians and Colombians who migrated to Spain and onwards from Spain, she shows that migration is a continuously evolving process in which migrants adapt to structural changes by showing individual agency, but where the options available are also determined by class and gender constraints.

Using in-depth interviews with EU27 citizens residing in the UK and Britons residing in Belgium, Djordje Sredanovic examines the role of Brexit in both triggering and obstructing onward and return migration. As is well known, among the consequences of Brexit will be a reduction in the freedom of movement and settlement for both EU and British citizens, an increase in xenophobia, and potential economic instability, particularly in the UK. In this context, both EU27 citizens in the UK and Britons in Belgium may consider onward or return migrations, although Brexit may complicate this. The author argues that the realization of migration plans is therefore mediated both by individual resources and by imaginations about the future of the UK and the EU.
In his article on Bangladeshi-Portuguese migrants in London, Jose Mapril aims to investigate why well-off Bangladeshi migrants living in Portugal decide to move on, and what their expectations and the consequences of their new migratory project are. Relying on a longitudinal ethnography with Bangladeshis in Lisbon and London, Mapril examines how the temporalities of kinship and intergenerational reciprocities are mobilized through onward migration, and the ways in which these are connected with care and the future in uncertain social and economic contexts. He shows that the decision among some Bangladeshis to migrate again was connected with the education of their children, namely to allow them the possibility of studying in British secondary or higher education. Thus, these "new beginnings" become part of a larger horizon of expectations in which the redistribution logic associated with kinship and domestic units becomes the core of the new migratory movement.

The 2008 economic crisis and its consequences are the context of Iraklis Dimitriadis' contribution. His focus is on the aspirations of Albanians living in Greece and Italy to move on, and the transnational practices they need to activate if they want to turn aspirations into reality. Drawing on qualitative data, Dimitriadis examines how the desire to migrate again emerges as a reactive strategy to cope with poor working conditions and discrimination. Of particular importance are what Dimitriadis calls "explorative transnational practices," which are social spaces between Italy or Greece and a range of different countries and comprise occasional visits or short trips to work abroad. These practices can reconfigure aspirations when migrants grasp that new destinations cannot meet their needs and desires.

Mohammad Morad and Devi Sacchetto also consider Bangladeshi migrants who move from Italy to the UK. They investigate the driving forces of onward migration for Bangladeshi migrants with Italian citizenship in Europe, and the factors associated with their intention to relocate to the UK. Their fieldwork, based on 50 in-depth interviews with Bangladeshi-Italian migrants living in the UK and Italy, illustrates three main points. First, Italian-Bangladeshis move to the UK because they see the destination as more appropriate for the cultural reproduction of Bengali traditions and the religious upbringing of their children. Second, the colonial legacy, on the one hand, and the political climate, on the other, are crucial in the selection of the UK as an onward migration destination. Finally, employment is a strong motivating factor for remaining in Italy for those Bangladeshis not interested in onward migration.

The special issue concludes with Rossana Cillo's article on another phenomenon related to onward migration. Her focus is on posted migrant workers and their use in the construction of the Rive Gauche shopping centre in Charleroi (Belgium). These workers are mostly from Albania, Egypt, India, Kosovo and Romania and are used by Italian companies to increase flexibility, particularly in terms of labour contracts, and to reduce their costs. As the author argues, the expansion of posted migrant work is favoured by a number of conditions, including the reorganization of the construction sector's production model: the uncontrolled expansion of subcontracting, the transformation of the EU labour market's stratification, and the need to maintain high profitability rates by lowering labour costs. The article shows how these workers are forced to accept conditions of extreme exploitation because of the lack of job opportunities and the impossibility of emigrating to other EU countries due to their migration status.

Finally, despite the wide range of issues, perspectives and critical views focused on the phenomenon of onward migration and collected in this special issue, we must point out some shortcomings here and, more generally, in the debate on onward migration. Among these, for example, we may highlight the lack of quantitative studies - for example, cross data on education and the decision to undertake an onward migration - as well as the paucity of gender or generational perspectives (but see Della Puppa et al. in this special issue and Ortensi & Barbiano di Belgiojoso, 2018) from which to observe intra-European mobilities and capture their complexity.

Moreover, at the time of putting together this special issue, we were taken by surprise by the intensity of the pandemic and its profound consequences for policy and for international and intra-European mobility. As has recently been written, while the long-term consequences for mobility are still unknown,

Behind the sharp decline in global mobility in 2020 lies a complex story of travellers stranded abroad and awaiting repatriation, migrant workers getting locked out of destination countries where they might have performed seasonal or temporary work, displaced people facing severe difficulty in fleeing conflict and disaster zones across borders, and asylum seekers struggling to access the procedures to apply for international protection. (Benton et al., 2021: 23)

Among those who will suffer the most from the situation created by COVID-19 are the huge number of people who had planned or were about to plan an onward movement. If policy discussion on the future of mobility has to contend with the issue of whether travel restrictions contain the virus (ibidem), any discussion of health policy must consider the complex stories of migrants, including those who aim to start a new onward migratory project, and their rights. These themes pinpoint the scientific limits of our work, providing, along with the new global scenario transformed by the COVID-19 crisis and its policy management, important vectors for further research on onward migration in an increasingly changing Europe.

ENDNOTES

1. The Italian National Institute of Statistics reports that, in 2016, of the 29,000 Italians with a TCN background who left Italy (an increase of 19% compared to the previous year), over 2,500 were of Bangladeshi origin, while 92% of Italians of Asian origin who migrated abroad moved to the UK.
2. Some members of staff at the University of Perugia are now under investigation for allegedly telling Suárez the answers to the examination questions.
3. According to the Italian Institute for International Political Studies (Ispi), in 2018 the number of "Dubliners" pushed back to Italy was around 6,400. According to Eurostat, around 72,000 applications were submitted for implementation of the Dublin Regulation in the EU in 2016.
4. The Dublin Convention aimed to harmonize the asylum procedure across the EU. By establishing which Community member state should process an asylum seeker's application, the Convention means that asylum seekers are allowed to enter the EU and apply for asylum but not to move freely. Its aim was to prevent migrants from making applications in more than one Member State, in the hope that limited opportunity of movement would reduce the number of applicants.
5. According to Article 10 of the Dublin Regulation, family members are eligible for a Dublin transfer if they can prove that they are dependent on the assistance of their child, sibling or parent in another European country. The child, sibling or parent must be "legally present" in the country where they are living (British Red Cross, 2018).
The use of grey alder Alnus incana by foraging Black Woodpeckers Dryocopus martius during winter

Spillkråkans Dryocopus martius vinteranvändning av gråal Alnus incana för födosök

Citation: Olsson C. 2020. The use of grey alder Alnus incana by foraging Black Woodpeckers Dryocopus martius during winter. Ornis Svecica 30: 60–72. https://doi.org/10.34080/os.v30.22397.

Copyright: © 2020 the author(s). This is an open access article distributed under the CC BY 4.0 license, which allows unrestricted use and redistribution, provided that the original author(s) and source are credited.

RESEARCH PAPER

Introduction

Most European woodpecker species have specialised foraging habits and can only be found in abundance in forests that are not too affected by anthropogenic factors (Cramp 1985, Angelstam & Mikusiński 1994). Among them, the Black Woodpecker Dryocopus martius is the largest resident woodpecker species in Fennoscandia that feeds exclusively on insects. It forages mostly on ants (Formicidae) and a variety of other insects living in dead wood (Cramp 1985). The European breeding population is large, estimated at more than 740,000 pairs (Burfield & van Bommel 2004). The Swedish population is estimated at 29,000 pairs (Ottosson et al. 2012) and has declined during the last decades. Since 2015 it has been listed as Near Threatened in the Swedish Red List (SLU Artdatabanken 2020).

It has been suggested that the scarcity of snags in managed forests affects the ability of the Black Woodpecker to survive harsh winter conditions in northern Sweden (Mikusiński 1995). Compared to other European woodpeckers, the Black Woodpecker is large (body mass 200–350 g) and has a massive bill (Cramp 1985), which allows it to utilize food sources in hard wood that are unavailable to other woodpeckers. The Black Woodpecker can make deep carvings when feeding or preparing nesting holes in coniferous trees, such as Norway spruce Picea abies and Scots pine Pinus sylvestris (Johnsson 1993). Several studies have investigated Black Woodpeckers feeding in stands dominated by coniferous trees, mainly Norway spruce, where the carpenter ant Camponotus herculeanus is the bulk food item (Cramp 1985, Haila & Järvinen 1977, Rolstad & Rolstad 1995). This is an abundant and energy-rich food source for woodpeckers, found in snags and basal parts of live spruce trees. It is reasonable to assume that carpenter ants are a main energy source for Black Woodpeckers in most of their Scandinavian range. However, very little attention has been paid to the Black Woodpecker's utilization of deciduous trees as a feeding substrate. One reason for this might be that deciduous trees occur in low abundance in managed forests and therefore are of less importance as feeding substrate; this, however, implies that the relatively few that remain are of greater importance. The feeding of the Middle Spotted Woodpecker Dendrocoptes medius in a contiguous alder forest tract in Brandenburg, NE Germany, was surveyed in a major study (Weiss 2003), showing that areas without trees of > 21 cm diameter at breast height (DBH) were avoided by the woodpeckers. Forest tracts with standing snags of > 35 cm DBH were clearly preferred by the Middle Spotted Woodpecker.

The feeding of the White-backed Woodpecker Dendrocopos leucotos, a relatively large-sized and strong-billed Fennoscandian woodpecker species relying almost entirely on trunk-living insects in deciduous trees, was studied in central Sweden (Aulén 1988). The results suggest that silver birch Betula pendula, downy birch B. pubescens, Eurasian aspen Populus tremula, goat willow (great sallow) Salix caprea, European oak Quercus robur, common alder Alnus glutinosa, and grey alder Alnus incana are of importance for feeding White-backed Woodpeckers. Among these deciduous trees, the grey alder is abundant in the areas of this study, the province of Västerbotten, with proximity to the Gulf of Bothnia. When this study was carried out, in 2007, it seemed likely that the White-backed Woodpecker was extinct from all of northern Sweden. However, when the species was still found in small numbers in Västerbotten, Olsson & Wiklund (1999) indicated that grey alder is significant for its energy supply in winter.

My primary hypothesis in this study was that the grey alder could be a significant alternative to Norway spruce for feeding Black Woodpeckers in winter in northern Sweden. The grey alder is an abundant tree associated with both fresh and salty waters in coastal forests in Västerbotten. Large-sized, decaying grey alders are a potential winter-food substrate, because alders are locally common and potential prey insects can be found throughout the trunk. An important objective of the study was to analyse the composition and abundance of potential prey, specifically the abundant alder wood-wasp Xiphydria camelus, and the characteristics of grey alders used by feeding Black Woodpeckers. In the province of Västerbotten the common alder appears very scarce, but the grey alder is a characteristic tree in its typical habitats; it is even considered a pioneering species in isostatic landscapes. In addition, I wanted to address whether there exists a relationship between increasing feeding activity in grey alder and decreasing temperature and/or increasing snow depth. Very little research on woodpecker feeding has been based on carving remains. This is therefore a pilot study to test a complementary method to field observation of feeding woodpeckers, which requires lots of time, radio tracking of several birds, and movements over long time spans. Similar studies could be made on the Black Woodpeckers' use of Norway spruce, Scots pine, birches and goat willow.

IDENTIFICATION OF LITTER FROM WOODPECKER FORAGING

Because of a much more powerful bill, the Black Woodpecker produces debris of a significantly larger size than other woodpeckers. Many years of field studies have made me aware of there being a significant difference in size between the debris from a carving in a grey alder of the same decaying stage made by Black Woodpecker and by Three-toed Woodpecker Picoides tridactylus. It is much more difficult to confidently assess whether carvings have been made by Black or White-backed Woodpecker, but at the period of the initial studies in 2004–2005, the White-backed Woodpecker was extinct in the province of Västerbotten. Grey-headed Woodpecker Picus canus and Lesser Spotted Woodpecker Dryobates minor are less challenging, because they produce clearly smaller-sized litter than the Three-toed Woodpecker (personal observations). In my experience, the risk for misidentification of litter is confined to cases when a Black Woodpecker has quickly abandoned a tree and left only small amounts of debris on the ground. In those instances, it can be difficult to tell apart from that of Three-toed Woodpecker, but by excluding such ambiguous instances I have erred on the side of caution.

To quantify debris size, I searched for an area where several species of woodpeckers co-occurred in the same grey alder stand. Close to Holmsund, approximately 15 km south of Umeå, on 4 November 2004, Grey-headed, Three-toed, Lesser Spotted and Black Woodpeckers were all observed feeding in a 1-ha large grey alder stand. For the Lesser Spotted Woodpecker, no piece of litter was longer than 12 mm; for the Three-toed Woodpecker, litter size did not exceed 23 mm; and for the Black Woodpecker, most litter was in the range 4–80 mm. From these observations one can conclude that there is a significant differentiation in size of grey alder litter produced by the different woodpecker species. Beneath alders visited by Grey-headed Woodpeckers I could not find any litter from hard wood at all, just bark that had been removed.

STUDY AREA

The main study was then carried out in an area in the south coastal part of the province of Västerbotten in northern Sweden, where I had previously identified four established Black Woodpecker home ranges through surveys from 1985 and onwards (Table 1).

Line surveys of Black Woodpecker feeding activity

Each home range was monitored by walking a fixed route covering 1.5 km (in total 6 km) each Monday morning from 29 November 2004 to 7 March 2005. Snow depth was recorded on each occasion. Each route was in a habitat seemingly optimal for feeding Black Woodpeckers, with a high proportion of grey alders, the focal tree species of this project (Table 1). While walking, I carefully searched for new woodpecker carvings in grey alder. Roughly estimated, 20 ha were covered by each route. Each route was in areas with contiguous stands of grey alder, which were clearly delineated against other habitat types (e.g., open riverine habitats or stands dominated by coniferous trees). Approximately 50–100 meters on both sides of the walking path, depending on the density of trees, were scrutinized for woodpecker marks. Trees with new markings were examined carefully to assess whether they were made by a Black Woodpecker or by other woodpecker species, or if the carving was of another origin (e.g., by human activity, wind, or other animals). A recently made carving by a Black Woodpecker catches the eye by its cavity hole entrance size, its shape, the presence of large woody debris on the ground, and clearly visible amounts of yellowish wood at the entrance hole. Only carvings made in hardwood grey alders were included in this study.
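The litter-size thresholds observed at Holmsund can be expressed as a rough decision rule. The sketch below is illustrative only: the function name and the hard cutoffs are my own framing of the reported sizes (no piece over 12 mm for Lesser Spotted, none over 23 mm for Three-toed, mostly 4–80 mm for Black Woodpecker), not a procedure from the paper.

```python
def candidate_species(longest_piece_mm: float) -> list[str]:
    """Rough decision rule from the litter sizes reported at Holmsund:
    Lesser Spotted <= 12 mm, Three-toed <= 23 mm, Black mostly 4-80 mm.
    Larger pieces progressively exclude the smaller-billed species."""
    if longest_piece_mm > 23:
        # Only the Black Woodpecker produces pieces this large.
        return ["Black Woodpecker"]
    if longest_piece_mm > 12:
        # Too large for Lesser Spotted; Three-toed or Black remain possible.
        return ["Three-toed Woodpecker", "Black Woodpecker"]
    # Small pieces alone cannot separate the species.
    return ["Lesser Spotted Woodpecker", "Three-toed Woodpecker",
            "Black Woodpecker"]
```

As the text stresses, small amounts of debris remain ambiguous, which is exactly what the last branch encodes; such cases were excluded in the study.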
Typical carvings of Black Woodpecker were marked with red spray paint, the tree was tagged with a number attached to a steel wire, and all the litter found beneath the tree was collected and put in a numbered plastic bag. For each feeding tree the following data were recorded: DBH; life stage (based on visual inspection of the branches and the crown, and by putting slight pressure with a car key on exposed parts of the trunk with no covering bark, to determine whether the wood was penetrable); traces of older woodpecker activity; and abundance of visible insect trails in the bare carving. Furthermore, the height of the carving was measured, and the orientation of the carving was assessed with a compass. I also measured the distance to the closest tree.

Presence of wood-living insects

Collection of litter was made meticulously, so as not to lose smaller dust hidden in the snow or on the ground. An area of at least 20 m² around each tree was scrutinized for litter. In a few cases, when there were many carvings, I had to return to continue the collection the following day. The litter samples were examined in a laboratory to find out whether any invertebrate larvae or imagoes of insects remained. I also examined the insect trails in the samples, because they can sometimes reveal what kind of insects have caused them (Ehnström & Axelsson 2002). One of the insects that one is most likely to find in dying but still hard-wooded grey alders is the alder wood-wasp (Roger Pettersson, pers. comm.), which leaves characteristic patterns in the infested wood, with very regular trails of 3–4 mm diameter (Ehnström & Axelsson 2002). Even if one does not find any imagoes or larvae, one can say with a high degree of certainty whether the alder wood-wasp has been present, because its larvae live in symbiosis with fungi of the genus Daldinia, whose spores dye the walls of the trails black. The blacker the walls are, the more of the infecting fungus there is.
These traces of Daldinia are diagnostic for the presence of the alder wood-wasp (Ehnström & Axelsson 2002). Litter samples were brought in cotton bags to the laboratory to dry and to extract hatched imagoes. The collected litter and two 50-cm trunk samples from each area were stored in a box in the laboratory and checked twice a week for as long as imagoes hatched (up to three weeks). This was done in order to find out what kind of insects were incubating in the trees recently used by Black Woodpeckers. After two to three weeks, I placed the litter samples into aluminium boxes in a heat chamber, after which I recorded their dry weight.

Quantification of alder stands in Black Woodpecker home ranges

Eight transect areas were surveyed, i.e., two per investigated home range. The locations of the transects, each 4,000 m² (100 meters long and 40 meters wide), were randomly designated in areas where grey alders were abundant within each home range. In this choice no consideration was given to tree size, age, or other parameters. The diameter of all the grey alder trunks was measured at breast height.

Weather

I measured the snow depth at ten randomly selected points along the transects in the survey areas each Monday, and recorded the daily temperature at Umeå airport (63°47'30"N, 20°17'00"E) at 13.00 throughout the study period.

Statistical analyses

Statistical analyses were performed in Microsoft Excel, including the use of the Data Analysis Toolpak.

Results

During the 14 weeks of line surveys, a total of 56 grey alders were found to have been visited by feeding Black Woodpeckers in the four areas (Appendix 1). Out of the 56 studied trees, 41 were found in a non-exposed position, with many covering trees around them within a distance of at least 10 meters over an arc of at least 180 degrees.
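As a quick check of the transect geometry described above (eight transects, each 100 m long and 40 m wide), note that the stated dimensions give 4,000 m² per transect and 3.2 ha surveyed in total:

```python
transects = 8
length_m, width_m = 100, 40  # dimensions given for each transect

area_each_m2 = length_m * width_m             # 4,000 m2 per transect
total_ha = transects * area_each_m2 / 10_000  # surveyed area in hectares

print(area_each_m2, total_ha)  # 4000 3.2
```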
Feeding marks were on average 75 cm high (standard deviation [SD] = 67 cm; range 10 cm–2.95 m) and situated with their lower edge on average 3.96 m above the snow cover (SD = 2.5 m; range 0.35–9.55 m; n = 63 feeding marks on the 56 trees; Appendix 1). The feeding marks had a strong directional bias and were found in the southerly sector of the trees (Figure 1; Appendix 1). Each week, litter from Black Woodpecker carvings on between 1 and 16 grey alders was collected, with an average dry weight per tree of 143.5 g (Appendix 2). In a limited part of the Rovsundet area, 16 trees were intensively carved by Black Woodpeckers between 10 and 17 February 2005. Unfortunately, an early spring flood that week, followed by some cold nights, caused the majority of the grey alder litter from woodpecker feeding to freeze firmly into the ice, which covered an area exceeding 1 ha. For practical reasons it was impossible to retrieve all litter, but comparing the size of the carvings with the weight of earlier litter collections gives a rough estimate of 2.0–3.0 kg for that week, corresponding to 125–187.5 g per tree (Appendix 2). In the transect study, 474 grey alders were classified as being in living stages, among which five (1.1%) bore carvings made by Black Woodpeckers during the winter of the study, and six (1.3%) bore similar carvings from previous years. Another 359 grey alders were classified as being in dying or dead stages with hard wood, among which 37 (10.3%) bore Black Woodpecker carvings made during the winter of the study and 119 (33.1%) carvings from previous years. If the last measurement is typical for optimal areas with grey alder, it implies that 1,480 suitable trees of this species in dead or dying stages are available for feeding Black Woodpeckers. Furthermore, this means that 324 grey alders per hectare can be used during one single winter by a feeding Black Woodpecker.
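The carving frequencies reported for the transect study can be reproduced directly from the counts:

```python
living, dead = 474, 359  # grey alders in living vs dying/dead (hard wood) stages

def pct(part, whole):
    """Percentage rounded to one decimal, as reported in the text."""
    return round(100 * part / whole, 1)

print(pct(5, living), pct(6, living))  # 1.1 1.3  (this winter / previous years)
print(pct(37, dead), pct(119, dead))   # 10.3 33.1
```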
Discussion

This seems to be the first study where the quantification of woodpecker foraging in winter has been made with a focus on the grey alder. I conclude that this tree species, where it is abundant, can be a very significant feeding tree during winter for Black Woodpeckers, predominantly in its dying stages. I hypothesized that when winter conditions became harsher, Black Woodpeckers would switch from the well-described feeding on basal parts of Norway spruce (Cramp 1985, Haila & Järvinen 1977, Rolstad & Rolstad 1995) to feeding on bare parts of grey alders, where potential prey can be abundant. However, I found no correlation between weather and feeding intensity on grey alder (Figure 2, showing litter dry weight (g) from Black Woodpecker carvings against average snow depth (cm)). Instead, the results show that also during the mild winter of 2004/2005, with little snow, Black Woodpeckers spent significant time feeding in grey alders, a poorly described food source. Since the snow cover was below average during most of the study period, I could not assess the potential effect of significantly deeper snow cover on feeding. One might argue that this study could not conclusively test the potential effect of snow depth or temperature on the feeding behaviour of Black Woodpeckers, as the winter of 2004/2005 saw less snow and higher temperatures than usual. It would therefore be beneficial to repeat this study during another winter with harsh conditions, when the study should also start before the appearance of the first snow cover and continue until the snow starts to melt. The present study possibly started a little too late, as in one study area (Tidesviken) several grey alders had been recently carved by Black Woodpeckers at my first visit on 29 November.
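The weather-versus-feeding analysis (performed in Excel in the original study) amounts to correlating weekly litter dry weight with snow depth or temperature. A minimal stdlib-only sketch, with purely hypothetical weekly values standing in for the Appendix 2 data:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical weekly values (g of litter, cm of snow), for illustration only.
litter_g = [120, 310, 90, 400, 150, 220]
snow_cm = [5, 12, 8, 10, 15, 7]
print(round(pearson_r(litter_g, snow_cm), 2))
```

A value of r near zero, as the study reports, means feeding intensity on grey alder did not track snow depth over the surveyed weeks.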
I also found at least six grey alders near Tavlefjärden beneath which there was a lot of recent litter from Black Woodpeckers two weeks after finishing this study. One can speculate about how Black Woodpeckers cope with the harsh winter climate at northern latitudes, especially when the snow cover is so deep that trunks and snags may be covered. It has been suggested that they have problems satisfying their daily energy requirements during winter days even in southern and central Scandinavia (Nilsson et al. 1992, Mikusiński 1995), where the climate is less harsh. This begs the question whether Black Woodpeckers in northern Sweden have adapted to other food sources in winter, when their main food source, carpenter ants, may be unavailable or very scarce due to their substrates (stumps and basal parts of spruce trees) being entirely buried in deep snow. It is known that Black Woodpeckers can still access these substrates at a snow depth of 30–40 cm (Rolstad & Rolstad 2000), but when the snow depth exceeds one meter, quite normal for many parts of northern Fennoscandia, prey such as wood-wasps in grey alders can be a substitute or complement to carpenter ants. Alder wood-wasps generally deposit their eggs in the upper parts of grey alders in early decaying stages, when the trees are still hard-wooded (Ehnström & Axelsson 2002). These upper parts of the trees are never covered even by the deepest snow (personal observation), implying that Black Woodpeckers can feed on alder wood-wasp larvae also during very snowy winters. Rolstad & Rolstad (1995) estimated that a Black Woodpecker in mid-Norway spends 3–4 hours feeding on a midwinter day. In this study, a female Black Woodpecker fed continuously for one hour in the same grey alder tree in the Tavlefjärden study area on 7 March 2005, producing 396 g (dry weight) of litter.
Based on these figures, and if just a single Black Woodpecker was responsible for all the litter, it can be estimated that this individual spent approximately 30% of its total daily feeding time in this limited stand. Despite the widespread occurrence of wood-wasp trails in the grey alder wood, few alder wood-wasps were retrieved from the debris investigated in the laboratory. One explanation can be that the Black Woodpecker feeds very efficiently, leaving almost no larvae in the litter. Wood-wasp females oviposit exclusively in the hard wood of recently decaying grey alders, and the development from egg to adult takes from one year up to a couple of years (Ehnström & Axelsson 2002). I found very few deciduous trees, other than grey alders in their first decaying stages, that could serve as alternative feeding substrates: only some scattered goat willow trees (with a few larger-sized holes that could possibly have been made by a Black Woodpecker) and grey alders in late stages of decay, which were totally damaged by age or weather conditions. The finding that 41 of 56 studied feeding trees were in non-exposed positions suggests that Black Woodpeckers search for potentially energy-rich grey alder trunks in protective cover from potential flying predators. Ryrholm (1996) showed that in the southernmost part of Sweden some wood-living insects use the full 360 degrees of tree trunks, but in the northernmost part of the country the insects could only be found in the parts of tree trunks facing southwards, where sunlight increases the temperature. This exposure gradient was corroborated by the findings in this study, namely that Black Woodpeckers forage almost exclusively on the southerly sector of grey alders (Figure 1). Areas abundant in grey alders in their earlier decaying stages hold a great attraction for feeding Black Woodpeckers, even if the trunks are of smaller dimensions.
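The roughly 30% figure follows from the numbers above: one hour of continuous feeding set against the 3–4 h daily midwinter feeding time from Rolstad & Rolstad (1995):

```python
daily_feeding_h = (3 + 4) / 2  # midpoint of the 3-4 h midwinter estimate
observed_h = 1.0               # continuous feeding observed on 7 March 2005

share = observed_h / daily_feeding_h
print(f"{share:.0%}")  # 29%, i.e. roughly 30 % of the daily feeding time
```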
Decaying grey alders that seem alive to the naked eye are over-represented among trees with carvings recently made by Black Woodpeckers. It thus seems as though grey alders provide wood quality that is optimal for the Black Woodpecker's main insect prey for only a few years. This suggests that a specific life stage, rather than a certain trunk diameter, of grey alders is crucial for feeding Black Woodpeckers. The habitat seems favourable for Black Woodpeckers if stands of grey alders are quite dense. As grey alders are weak competitors against, for example, Norway spruce, such dense stands are more often found in areas that are regularly flooded by water, which is unfavourable for spruces. Two such terrains are plain areas close to rivers with large altitudinal differences in water level, and areas close to seashores that are exposed to varying water levels, both of which are found in the study areas. To maintain a favourable habitat for Black Woodpeckers through conservation measures, emerging Norway spruce or pine trees could continuously be removed manually from stands of grey alders. In the province of Västerbotten, areas in proximity to the Gulf of Bothnia host significant amounts of grey alder, occurring as an important pioneer tree species along coastal uplift areas. The grey alder has a short life cycle, approximately 30 years (cf., e.g., Uri et al. 2014), thereby providing Black Woodpeckers a food resource of relatively short duration. Close to the Gulf of Bothnia, one can find grey alder stands of uniform age bearing Black Woodpecker carvings of similarly uniform age, while nearby, younger stands lack any signs of feeding Black Woodpeckers. This indicates that the grey alders need to reach a certain age before they become host trees for insect larvae such as the alder wood-wasp. From the aspect of conserving important forest landscapes for Black Woodpeckers, I would argue that the grey alder is a significant feeding tree in the study areas.
This implies that the north Swedish grey alder habitats described here ought to be protected for the good of the Black Woodpecker, a species classified as Near Threatened in the Swedish Red List of threatened species (SLU Artdatabanken 2020). This paper further discusses the importance of the alder wood-wasp Xiphydria camelus as a complementary winter food source to the Black Woodpecker's well-described favourite food, the carpenter ant Camponotus herculeanus.

Ornis Svecica (ISSN 2003-2633) is an open access, peer-reviewed scientific journal published in English and Swedish by BirdLife Sweden. It covers all aspects of ornithology, and welcomes contributions from scientists as well as non-professional ornithologists. Accepted articles are published at no charge to the authors. Read papers or make a submission at os.birdlife.se.
Lactobacillus rhamnosus CNCMI-4317 Modulates Fiaf/Angptl4 in Intestinal Epithelial Cells and Circulating Level in Mice

Background and Objectives: Identification of new targets for the treatment or prevention of metabolic diseases is required. In this context, FIAF/ANGPTL4 appears as a crucial regulator of energy homeostasis. Lactobacilli are often considered to display beneficial effects for their hosts, acting on different regulatory pathways. The aim of the present work was to study the effect of several lactobacilli strains on Fiaf gene expression in human intestinal epithelial cells (IECs) and in mouse tissues, and to decipher the underlying mechanisms.

Subjects and Methods: Nineteen lactobacilli strains were tested on HT-29 human intestinal epithelial cells for their ability to regulate Fiaf gene expression, assessed by RT-qPCR. In order to determine regulated pathways, we analysed the whole-genome transcriptome of IECs. We then validated bacterial effects in vivo using mono-colonized C57BL/6 mice fed normal chow.

Results: We identified one strain (Lactobacillus rhamnosus CNCMI-4317) that modulated Fiaf expression in IECs. This regulation relied potentially on bacterial surface-exposed molecules and seemed to be PPAR-γ independent but PPAR-α dependent. Transcriptome functional analysis revealed that multiple pathways, including cellular function and maintenance, lymphoid tissue structure and development, as well as lipid metabolism, were regulated by this strain. The regulation of the immune system and of lipid and carbohydrate metabolism was also confirmed by overrepresentation analysis of Gene Ontology terms. In vivo, circulating FIAF protein was increased by the strain, but this phenomenon was not correlated with modulation of Fiaf expression in tissues (except a trend in the distal small intestine).

Conclusion: We showed that Lactobacillus rhamnosus CNCMI-4317 induced Fiaf expression in human IECs, and increased circulating FIAF protein level in mice.
Moreover, this effect was accompanied by transcriptome modulation of several pathways, including immune response and metabolism, in vitro.

Introduction

Over the last decades, increased obesity has been associated with increased metabolic syndromes characterized by type 2 diabetes (T2D), cardiovascular diseases (CVD) or low-grade inflammation. Given the increased prevalence of these diseases, scientific interest has emerged in developing new therapeutic approaches. The recognition of the FIAF (Fasting Induced Adipose Factor) protein as a central regulator of energy homeostasis makes it a strong candidate for the treatment and/or prevention of obesity-associated disorders. FIAF, also known as ANGPTL4 (angiopoietin-like 4), is an adipokine expressed in several tissues including adipose tissue, liver, intestine and heart. With increasing studies on FIAF, it seems that its physiological effects are tissue dependent. FIAF inhibits lipoprotein lipase (LPL) and promotes lipolysis, resulting in increased serum triglyceride (TG) levels and decreased uptake of free fatty acids (FA) and cholesterol into different tissues [1,2].
Although a direct interaction has been established, the exact mechanism of LPL inhibition is still not fully elucidated [3–5]. Conflicting data on the role of FIAF in glucose and lipid metabolism have been reported. Even though transgenic mice showed impaired glucose tolerance, overexpression of the Fiaf gene in diabetic mice improved hyperglycemia and glucose tolerance [1,6]. In vivo, rodent experiments have associated FIAF with hyperlipidemia, caused by decreased very low-density lipoprotein (VLDL) clearance [7]. These data were supported by human genetic studies, which revealed lower plasma TGs in carriers of the E40K variant [8,9]. However, other studies failed to correlate FIAF levels with plasma TG levels [10]. Beneficial effects of FIAF have been reported against inflammation induced by a high-fat diet (HFD), by limiting macrophage lipid overload [11], and in protection against no-reflow after cardiac infarcts [12]. Recently, a higher circulating FIAF level has been described in people characterized by a low gene count (LGC) microbiome and associated with a marked inflammatory phenotype and adiposity [13]. Thus, FIAF displays a critical role in lipid and glucose metabolism, even if more knowledge about its mechanisms of action is required to better understand the physiological effects of FIAF regulation. The Fiaf gene is considered a target gene of peroxisome proliferator-activated receptors (PPARs), but several other regulators, including glucocorticoids and, recently, bile acids, have been described as Fiaf mediators [14–16]. Mice harbouring a conventional microbiota but with intestinal Fiaf gene suppression are not protected against HFD-induced obesity, unlike their germ-free (GF) counterparts [17], showing microbiota-driven Fiaf gene regulation. Increasing evidence reveals that some probiotics up-regulate intestinal FIAF expression through reactive oxygen species (ROS) or short-chain fatty acid (SCFA) release [18,19].
Recently, a transcriptome analysis of murine jejunum revealed the induction of Fiaf after Lactobacillus rhamnosus (L. rhamnosus) HN001 administration [20]. Lactobacillus paracasei (L. paracasei) F19 induced Fiaf gene expression in a PPAR-γ- and PPAR-α-dependent manner and decreased fat storage under HFD. This effect seemed mediated by a non-identified secreted compound [21]. Thus, the molecular mechanisms and microbial effectors regulating its expression are still poorly understood. Lactobacilli, largely used in daily food and especially in fermented dairy products, can be delivered in amounts of up to 10^12 live bacteria into the digestive tract. Thus, being in direct contact with the intestinal mucosa, lactobacilli represent a large source of potential regulators of host physiology. In this context, we assessed the ability of 19 bacterial strains of the L. paracasei and L. rhamnosus species to modulate Fiaf gene expression in IECs. In order to dig into the biological mechanism involved, we performed a whole-genome transcriptome analysis of epithelial cells in contact with different bacterial strains. Finally, we used mono-colonized mice to validate Fiaf regulation in an in vivo model and to determine the impact of its modulation on host physiology.

Epithelial cell culture and reagents

The human intestinal epithelial cell line HT-29 was obtained from the American Type Culture Collection (ATCC, Rockville, MD). HT-29 cells were cultured in DMEM supplemented with 10% heat-inactivated fetal calf serum (FCS), 2 mM L-glutamine (Sigma), 1X non-essential amino acids (Invitrogen), penicillin (50 IU/ml) and streptomycin (50 μg/ml) in a humidified atmosphere containing 10% CO2 at 37°C. After seeding, cells were grown for 48 h in 6- or 12-well plates in antibiotic-free medium at 3.25×10^5 and 6.5×10^5 cells per well, respectively. The medium was changed just before the addition of bacteria or reagents for 6 h.
Rosiglitazone (used as a positive control), GW9662, GW6471 and GW7647 (Cayman Chemicals) were dissolved in DMSO following the manufacturer's instructions and diluted at 100 μM in antibiotic-free DMEM. They were used at a final concentration of 10 μM, except for GW6471 at 1 μM. The antagonists (GW9662, GW6471) were added 1 h before challenging with rosiglitazone or GW7647, respectively.

Bacterial strain culture and screening

Bacteria from the Danone collection (Table A in S1 File) were cultivated in MRS (de Man, Rogosa and Sharpe medium; Oxoid CM0359) at 37°C under pseudo-aerobic conditions. Bacterial cultures (stationary phase) were centrifuged at 5,000×g for 10 min. Conditioned media (CM) were then collected and filtered on 0.2 μm PES filters. Bacterial pellets were washed twice in PBS and resuspended in antibiotic-free DMEM at OD600 = 0.1 (corresponding to a mean multiplicity of infection ranging from 23 to 113 bacteria per cell). Cells were stimulated with 20% final volume of bacterial culture. To respect the same bacteria/cell ratio, heat-inactivated (HI) bacteria, prepared at OD600 = 1 and heated at 80°C for 20 min, were added at 10% of final volume. Conditioned media (CM) were used at 10% of final volume to limit the presence of lactic acid. Transwell™ permeable supports (Corning) were used to separate bacterial strains from cells in the contact-dependency test. In those assays, HT-29 cells were grown in the bottom of 24-well plates; transwells were then added and bacteria were seeded in the transwell, preventing direct contact.

RNA extraction and quantitative real-time PCR (RT-qPCR) of the Fiaf gene in HT-29 cells

Total RNA was extracted according to the manufacturer's recommendations. RNA concentration was measured by using a NanoDrop spectrophotometer (NanoDrop Technologies, Wilmington, USA), and the RNA integrity number (RIN) was assessed by using a 2100 Bioanalyzer (Agilent Technologies Inc., Santa Clara, USA). All samples had a RIN above 9.6.
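The stated MOI range (23–113 bacteria per cell) can be related to the OD600 of the suspension. The conversion factor below (1 OD600 unit ≈ 1×10^9 CFU/ml for lactobacilli) and the well volume are assumptions for illustration only, not values taken from the paper:

```python
CFU_PER_ML_PER_OD = 1e9    # assumed OD600-to-CFU conversion for lactobacilli
od600 = 0.1                # density of the bacterial suspension
bacterial_fraction = 0.20  # bacteria added at 20 % of the final well volume
well_volume_ml = 1.0       # hypothetical final volume per well
cells_per_well = 6.5e5     # seeding density stated for 12-well plates

bacteria_added = od600 * CFU_PER_ML_PER_OD * well_volume_ml * bacterial_fraction
moi = bacteria_added / cells_per_well
print(round(moi, 1))  # ~30.8, within the reported 23-113 range
```

Strain-to-strain variation in the OD-to-CFU relationship would plausibly explain the spread of the reported 23–113 range.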
Briefly, cDNA synthesis was performed from 1 μg of RNA using the High Capacity cDNA Reverse Transcription kit (Applied Biosystems, USA) according to the manufacturer's instructions. cDNAs were diluted to 20 ng/ml. RT-qPCR was carried out with Taqman probes (Life Technologies, France; Table B in S1 File) according to the manufacturer's instructions using an ABI Prism 7700 (Applied Biosystems, USA) thermal cycler in a reaction volume of 25 μl. For each sample and each gene, PCR runs were performed in triplicate. To quantify and normalize the expression data, we used the ΔΔCt method with the geometric mean Ct value of β-Actin and Gapdh as the endogenous reference [22].

Microarray analysis of HT-29 cells

Raw microarray data have been deposited in the GEO database under accession no. GSE62311. A total of 28 microarrays were analysed: 8 replicates of HT-29 cells at different passage numbers for the L. rhamnosus CNCMI-4317 treatment, rosiglitazone (positive control) and DMEM (negative control), and four replicates for L. rhamnosus CNCMI-2493. We used Illumina human genome microarrays (HumanHT_12 v4 Expression BeadChip Kit, San Diego, USA). For each sample, 750 ng of labelled cDNA was synthesized from total RNA using the Ovation PicoSL WTA System v2 and Encore BiotinIL Module kits (Nugen Technologies, Inc., Leek, The Netherlands). The slides were scanned with an iScan Illumina scanner and data were recovered using GenomeStudio Illumina software (version 1.0.6). All microarray analyses, including pre-processing, normalization and statistical analysis, were carried out using Bioconductor packages in the R programming language (version 3.0.2) (for more detail concerning data normalization, see S1 File). The list of differentially expressed (DE) genes was uploaded into Ingenuity Pathway Analysis software (IPA; version 5.5, Ingenuity Systems, Redwood City, CA) to identify relevant molecular functions, cellular components and biological processes using a right-tailed Fisher's exact test.
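The right-tailed Fisher's exact test used here for term enrichment reduces to a hypergeometric tail sum; a minimal pure-Python sketch (function name and arguments are illustrative, not from IPA):

```python
from math import comb

def fisher_right_tail(k, K, n, N):
    """P(X >= k): N genes on the array, K annotated to a term,
    n differentially expressed, k of those n carry the annotation."""
    return sum(
        comb(K, i) * comb(N - K, n - i)
        for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

# Toy example: all 5 DE genes fall inside a 5-gene term drawn from 10 genes.
p = fisher_right_tail(k=5, K=5, n=5, N=10)  # 1/252, about 0.004
```

The right tail (enrichment at least as extreme as observed) is what matters for over-representation, which is why a one-sided rather than two-sided test is used.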
IPA computed networks and ranked them following a statistical likelihood approach [23]. All networks with a score of at least 25 and at least 30 focus genes were considered biologically relevant. Additionally, the ErmineJ software program, which is based on overrepresentation of Gene Ontology (GO) terms, was used as a complementary method to relate changes in gene expression to functional changes. GO terms were considered significant at an FDR < 5%. To technically validate the data generated in the microarray study, quantitative RT-qPCR was carried out on 12 selected candidate genes (Table B in S1 File; see S1 File for more detail). This set of genes was analysed using a linear effect model, including the treatment of interest as a fixed effect. Differences were considered significant at P < 0.05.

In vivo experiment

All experiments were handled in accordance with institutional ethical guidelines. The "Comité d'Ethique en Expérimentation Animale of the Centre INRA of Jouy-en-Josas and AgroParisTech-COMETHEA" ethics committee approved the study. Seven- to eleven-week-old germ-free (GF) C57BL/6 mice (CNRS-CDTA, Orléans, France) were maintained in sterile isolators at the INRA ANAXEM germ-free animal facility, 3 to 5 per cage, on ad libitum irradiated normal chow (R 03-40, SAFE) under a 12-h light cycle. Temperature and humidity were carefully controlled. Mice were observed once a day to ensure their welfare. Mice were divided into three groups according to bacterial gavage. Mice were colonized by a one-time gavage of L. rhamnosus CNCMI-4317 (n = 7) or L. rhamnosus CNCMI-2493 (n = 6) prepared at 1×10^9 CFU/ml in PBS, or received PBS as control treatment (n = 5). Body weight was recorded twice a week after bacterial or PBS gavage. After eleven days, mice were sacrificed by cervical dislocation and all tissues (intestine, adipose tissues and liver) were removed and flushed with ice-cold PBS within the next 30 minutes.
Tissues were immediately snap-frozen in liquid nitrogen and stored at −80°C until processed. Colonization was confirmed by bacterial counts in feces and API 50CH (Biomérieux, France).

Statistical analysis of RT-qPCR data and metabolites

All data were normally distributed. The values presented herein are expressed as means ± standard deviation (SD). Data were analysed using one-way ANOVA followed by Tukey's multiple-comparison post-hoc test in GraphPad Prism (version 5). Differences were considered significant at P < 0.05. A linear regression was conducted to evaluate the association between RT-qPCR and microarray expression.

Fiaf is upregulated by the L. rhamnosus CNCMI-4317 strain in IECs

To evaluate the potential of the two Lactobacillus species (L. rhamnosus, L. paracasei) in regulating host metabolism, we tested 19 bacterial strains for their ability to modulate Fiaf gene expression in IECs by RT-qPCR. HT-29 human epithelial cells were exposed to each strain at OD600 = 0.1 for 6 h before RNA extraction. Among the 10 L. paracasei and 9 L. rhamnosus strains tested (detailed in Table A in S1 File), L. rhamnosus CNCMI-4317 showed the most effective activation of Fiaf gene expression (P < 0.001) (Fig 1), suggesting a strain-specific effect. This activation corresponded to about 65% of that induced by rosiglitazone, a selective PPAR-γ ligand (used as positive control). We therefore focused our mechanistic analysis on L. rhamnosus CNCMI-4317, using L. rhamnosus CNCMI-2493 as a bacterial negative control. It is noteworthy that activation of PPAR-α resulted in a stronger regulation of Fiaf than PPAR-γ in our cellular model.

The L. rhamnosus CNCMI-4317 strain might act via a surface-exposed molecule in IECs

To determine the bacterial effector(s) involved in the activation of Fiaf by the CNCMI-4317 strain, several bacterial fractions were tested.
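The one-way ANOVA used in the statistical analysis above compares between-group to within-group variance; a minimal sketch of the F-statistic (the post-hoc Tukey comparisons and p-values would normally come from a statistics package such as Prism, as used here):

```python
def one_way_anova_f(*groups):
    """F = between-group mean square / within-group mean square."""
    mean = lambda g: sum(g) / len(g)
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Toy triplicates: well-separated group means give a large F.
f_stat = one_way_anova_f([1, 2, 3], [7, 8, 9])  # F = 54.0
```

A large F relative to the F-distribution with (df_between, df_within) degrees of freedom yields a small p-value, after which Tukey's test identifies which group pairs differ.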
Conditioned medium (CM) (Fig 3a) and heat-inactivated (HI) bacteria (Fig 3b) were not effective at upregulating Fiaf, suggesting that the effector was not a secreted product and was heat-sensitive (Fiaf relative expression was 50 ± 12.53 and 21.33 ± 14.22 for CM and HI, respectively, vs. 92 ± 17.69 and 87.67 ± 4.16 for the bacterial strain; P < 0.001). The requirement for direct bacteria-cell contact was further assessed. To do so, HT-29 cells were separated from the bacteria using Transwell™ permeable supports (Corning) into which the bacteria were added (Fig 3c). In this setting, the ability of the lactobacilli to induce Fiaf expression was lost, indicating that direct contact is required.

L. rhamnosus CNCMI-4317 modulated gene expression, cell death and survival, cellular growth and proliferation, immune response and lipid metabolism in IECs

We performed a whole-genome transcriptome analysis of IECs in response to the bacterial strains. HT-29 cells were incubated for 6 hours either with the bacterial strain of interest (L. rhamnosus CNCMI-4317), a control bacterium that did not induce Fiaf gene expression (L. rhamnosus CNCMI-2493), culture medium as negative control, or rosiglitazone. We performed eight independent cultures of HT-29 cells at different passage numbers for L. rhamnosus CNCMI-4317, the negative control and the rosiglitazone control, and four replicates for L. rhamnosus CNCMI-2493. In view of the strong effect of the cell culture (S1 Fig), we decided to include it as a covariate in the statistical model. We failed to detect genes significantly differentially expressed (DE) between L. rhamnosus CNCMI-2493 and the negative control (without bacterial strain), and we detected hardly any significant differences between the two bacteria L. rhamnosus CNCMI-4317 and CNCMI-2493 (data not shown). However, when comparing the L. rhamnosus CNCMI-4317 strain and rosiglitazone to the negative control, 63 and 21 genes were modulated, respectively (P < 0.05).
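When screening thousands of genes at P < 0.05, false discovery rate control matters (the ErmineJ analysis in this study reports FDR < 5%); a minimal sketch of the Benjamini-Hochberg step-up procedure, offered as a generic illustration rather than the exact routine used here:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return indices of hypotheses rejected at FDR level `alpha`."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    n_reject = 0
    for rank, i in enumerate(order, start=1):
        # Step-up rule: compare the rank-th smallest p to alpha * rank / m.
        if pvals[i] <= alpha * rank / m:
            n_reject = rank  # keep the largest passing rank
    return sorted(order[:n_reject])

rejected = benjamini_hochberg([0.001, 0.2, 0.04, 0.9])  # only gene 0 survives
```

Note that 0.04 passes the raw 0.05 cutoff but fails the FDR-adjusted threshold, which is exactly the multiplicity problem the procedure addresses.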
A Euler diagram visualization of these results highlighted that only the Fiaf gene was shared between the two treatments (Fig 4a), strongly supporting the hypothesis that bacterial strain CNCMI-4317 acted in a PPAR-γ-independent manner. As presented in Table 1 (1.63, q-value < 0.00089). To explore the molecular functions modified in response to L. rhamnosus CNCMI-4317, we analysed the subsets of DE genes between treatments using the core analysis function included in the IPA software. Most biological functions found to be significantly enriched (P < 0.05) by L. rhamnosus CNCMI-4317 were related to the gene expression machinery, cell death/survival, cellular growth/proliferation, cell-mediated immune response and lipid metabolism categories (Table 1). Interestingly, those functions included canonical pathways associated with PPAR signalling and HIF1α signalling (P < 0.05) (S2 Fig). Four networks were identified, with scores ranging from 19 to 41. The Fiaf gene was found to play a role in the regulatory network involved in putative functions such as neurological disease, cell cycle and cell development (Fig 4b). In contrast, most of the genes regulated by rosiglitazone were involved in lipid or carbohydrate metabolism functions (Table 1). In this context, it is not surprising that the Fiaf gene was found to play a role in the regulatory network involved in the putative functions of energy production and lipid metabolism (Fig 4c). To technically validate the microarray gene expression data, IEC RNA in response to L. rhamnosus CNCMI-4317 was analysed by RT-qPCR for 12 genes (Table B in S1 File). RT-qPCR results confirmed the microarray expression levels, with most genes having high r² values (Fig 4d). For physiological relevance, microarray data were also analysed with ErmineJ at the level of gene sets that together encode particular differentially expressed functional GO terms (Fig 5).
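The agreement between RT-qPCR and microarray expression reported above is summarized by the r² of a simple linear regression; a minimal pure-Python sketch:

```python
def r_squared(x, y):
    """Squared Pearson correlation between two expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Perfectly proportional measurements give r^2 close to 1.0.
r2 = r_squared([1.0, 2.0, 3.0], [2.1, 4.2, 6.3])  # ~1.0
```

A high r² across the 12 candidate genes indicates that the two platforms rank and scale expression changes consistently, which is all this technical validation requires.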
Notably, this analysis revealed that gene sets involved in immune system signalling pathways and in the regulation of lipid and carbohydrate metabolism GO terms were enriched by L. rhamnosus CNCMI-4317 (P < 0.05). Among the metabolic GO pathways regulated by our bacterial strain, 7 were shared with those induced by the rosiglitazone treatment (data not shown).

In vivo mono-colonization of germ-free mice with L. rhamnosus CNCMI-4317 increased plasma IL-7 and FIAF and tended to modulate Fiaf gene expression in the intestine

Germ-free mice were colonized with the L. rhamnosus CNCMI-4317 strain, L. rhamnosus CNCMI-2493 (control strain) or PBS for 11 days and then sacrificed. Mice colonized with L. rhamnosus CNCMI-4317 presented an increased concentration of plasma FIAF as compared to control mice (Fig 6a). Across tissues, Fiaf gene expression tended to increase in the distal small intestine in the presence of L. rhamnosus CNCMI-4317 (P = 0.14), but no significant differences could be observed for colonic expression (Fig 6b). Furthermore, the circulating FIAF level was not correlated with Fiaf gene expression in adipose tissues or in the liver (S3a and S3b Fig). Cytokine levels in the serum of mice mono-colonized with the bacterial strains were investigated. Only IL-7 was significantly higher (P < 0.05) when comparing animals colonized with the L. rhamnosus CNCMI-4317 strain versus GF mice or mice colonized with the control strain (Fig 6c). The other 8 cytokines were not significantly modified by the colonization (S3c Fig).

Discussion

It is now well established that the human gut microbiota is composed of 10^14 bacteria and thus represents a dynamic organ. Several bacteria are reported to play a role in the regulation of host energy metabolism [17,21,24]. Several lactobacilli isolated from the human gut are widely used in dairy products.
Assessment of their involvement in energy intake and storage appears crucial at a time when obesity is continuously increasing worldwide. FIAF, also known as ANGPTL4, has been identified as a key metabolic regulator. Intestinal epithelial cells (IECs), being the first line of contact between bacteria and the host, represent an important interface for the regulation of host physiology by the microbiota. In this context, we identified the L. rhamnosus CNCMI-4317 strain as able to upregulate Fiaf gene expression in IECs. In order to identify the bacterial effector(s) responsible for modulating Fiaf expression, we tested several bacterial fractions. We showed that Fiaf regulation was not caused by a secreted compound and required the presence of live bacterial cells. Since the effect was abrogated by heat treatment, we hypothesized that a surface-exposed protein could be involved. Several beneficial metabolic effects have been reported under lactobacilli treatment. These effects were linked to secreted compounds such as conjugated linoleic acids from L. rhamnosus P60 and L. plantarum P62 [24,25] or to unknown molecules [21]. One study mentioned the requirement of live L. rhamnosus GG cells to decrease serum glucose levels in a diabetic mouse model [26]. However, our work showed for the first time that the L. rhamnosus CNCMI-4317 strain could play a role in host metabolism through the regulation of Fiaf expression in epithelial cells via direct contact. PPAR isotypes play an important role in Fiaf regulation [14,27,28]. A recent study provided evidence that L. paracasei F19 upregulates Fiaf expression in IECs in a PPAR-α- and PPAR-γ-dependent manner [21]. In our study, the tested L. paracasei strains did not regulate Fiaf expression, highlighting a strain-specific effect, which was also seen for our L. rhamnosus strains.
Our results suggest a PPAR-α dependency, but rule out a role for PPAR-γ. In contrast to a recent study by Alex et al. (2013), and in agreement with Aronsson et al. (2010), our experiments showed that PPAR-α regulates Fiaf and even induced a stronger activation than PPAR-γ in HT-29 cells. In order to determine the mechanism of action involved, we performed a whole-genome transcriptome analysis of IECs in contact with different bacterial strains. We detected a strong effect of the independent cell culture passages, driving us to include it as a covariate in our statistical model. Unfortunately, the low number of replicates probably prevented us from identifying differentially regulated genes. However, a total of 63 annotated genes were revealed as significantly different between L. rhamnosus CNCMI-4317 and the negative control. The IPA analysis of these genes disclosed that they encode molecular functions involved in the PPAR and HIF1 pathways. In agreement, published data provide evidence for Fiaf regulation by PPAR and hypoxia [29]. However, the absence of an effect of conditioned media excluded two known major potential regulators, namely H2O2 [18] and SCFAs [19]. Interestingly, genes affected by L. rhamnosus CNCMI-4317 were mainly involved in cellular growth/proliferation, cell death, immune response and lipid metabolism. In agreement with our findings, other strains of L. rhamnosus have been described as modulators of cellular growth and proliferation in vivo, suggesting a potential effect shared among lactobacilli [30,31]. Moreover, L. rhamnosus CNCMI-4317 regulated several transcription factors involved in gene expression and in neurological diseases, cell cycle and cellular development. In this context, Fiaf did not appear in the network of energy production and lipid metabolism as it did for rosiglitazone, confirming a different mechanism of action between the two treatments.
However, a few unregulated but intermediate genes in the neurological diseases, cell cycle and cellular development network were related to metabolism (Ldl, Erk1/2, Map2k1/2, Creb, Mek), especially through the PPAR pathway. This underlines a potential role of the L. rhamnosus CNCMI-4317 strain in host metabolism, as revealed by the regulation of expression of Slc2a3 (solute carrier family 2, member 3) and Gdf15 (growth differentiation factor 15). The latter, known for its role in the cell cycle, has recently been implicated in type 2 diabetes (T2D) [32,33]. Despite evident in vitro regulation of Fiaf driven by PPAR-α, the transcriptome analysis showed that the majority of genes were not regulated by PPAR-α. These results suggest that our bacterial strain could modulate multiple cellular functions through complex and diverse mechanisms. In order to validate in vivo the cellular regulation of Fiaf observed in vitro, we colonized C57BL/6 mice with L. rhamnosus CNCMI-4317. We observed a higher level of circulating FIAF and a tendency toward increased Fiaf gene expression in the small intestine, although not statistically significant. However, Fiaf was not regulated in the colon. These data are consistent with Korecka et al. (2013), who showed different levels of Fiaf expression along the gastrointestinal (GI) tract upon bacterial administration, due to differing microbial populations and fermentation in a conventional model [19]. In our case, this may be explained by lactobacilli colonization of the upper GI tract in the GF model and the absence of SCFA release (a potent Fiaf activator) in the colon. Additionally, no modulation of Fiaf gene expression in the liver or adipose tissues was observed upon colonization with the L. rhamnosus CNCMI-4317 strain. Neither serum lipoprotein levels nor body weight were affected in comparison with control GF mice. Taken together, our data suggest that upregulation of circulating FIAF was not associated with lipoprotein levels.
This is in disagreement with Aronsson et al. (2010), who showed a correlation between plasma FIAF and VLDL triglyceride levels in mono-colonized mice [21]. However, Grootaert et al. (2011) suggested that specific FIAF isoforms matter for specific physiological effects [18]. Thus, the discrepancies with our results may come from technical differences (Western blot vs. ELISA) targeting different isoforms. Furthermore, the L. rhamnosus CNCMI-4317 strain induced serum IL-7, suggesting a role in immune cell development/regulation. This is in agreement with a recent ex vivo human transcriptome analysis showing the ability of an L. plantarum strain to induce IL-7 in the duodenum, suggesting a potential common property of lactobacilli strains [31]. Finally, our in vivo study failed to identify strong Fiaf regulation in the different tissues or an impact on host metabolism, but we may expect Fiaf to exert a greater physiological effect in a more complex environment, for example in rodent models with challenged metabolic profiles (i.e., high-fat diet). In a context where bacterial regulation of Fiaf appears to play a central role in fat storage, we provide evidence for the potential role of one particular L. rhamnosus strain as a Fiaf regulator in vitro. It is noteworthy that the effect is strain-specific. To deepen our understanding of Fiaf involvement in host metabolism and of the strain specificities involved in this phenomenon, it will be important to study the impact of this bacterial strain on the physiology of conventional mice exposed to a high-fat diet and in a human tissue set-up.

S1 File. Materials and methods. Table A in S1 File: list of tested bacteria. Table B in S1 File: list of Taqman probes. (DOCX)
The bisulfite genomic sequencing protocol

The bisulfite genomic sequencing (BGS) protocol has gained worldwide popularity as the method of choice for analyzing DNA methylation. It is so popular because it is a powerful protocol that may be coupled with many other applications. However, users often run into a slew of problems, including incomplete conversion, overly degraded DNA, sub-optimal PCR amplifications, false positives, uninformative results, or altogether failed experiments. We pinpoint the reasons why these problems arise and carefully explain the critical steps toward accomplishing a successful experiment, step by step. This protocol has worked successfully (>99.9% conversion) on as little as 100 ng of DNA derived from nearly 10-year-old DNA samples extracted from whole blood stored at −80°C, and resulted in enough converted DNA for more than 50 PCR reactions. The aim of this article is to make learning and usage of BGS easier, more efficient and standardized for all users.

INTRODUCTION

The bisulfite genomic sequencing protocol (BGS) has gained worldwide popularity as the method of choice to analyze DNA methylation. DNA methylation was the first epigenetic mark to be discovered in mammalian cells [1], and while many other epigenetic marks are known and even more are currently being discovered [2], the role that DNA methylation plays in the regulation of gene expression is now widely accepted. Bisulfite was first used to convert 5-methylcytosines to uracils in 1970 [3], but the BGS protocol used today to determine the methylation status of CpG dinucleotides in the genome was first published in 1992 [4]. Sodium bisulfite and metabisulfite ions are used to convert unmethylated cytosines to uracils in three steps (Figure 1).
Conversion can only occur in single-stranded DNA (ssDNA). Importantly, bisulfite does not affect 5-methylcytosines: the methyl group at the 5 position of the cytosine base renders 5-methylcytosines non-reactive to bisulfite ions. The end result following PCR amplification of bisulfite-converted DNA is that 5-methylcytosines in the original sample are read as C, whereas cytosines are read as T (Figure 2). Optimal bisulfite conversion conditions are those that ensure that target molecules remain single-stranded throughout the chemical transformation steps. Paying particular attention to solvent molarities, pH, temperature and the timing of denaturing steps is critical to a successful experiment. Unfortunately, false positives, uninformative results, or altogether failed experiments are often encountered, for three main reasons: 1) the deamination of all unmethylated non-CpG-cytosine residues in the target sequence under investigation is not achieved; 2) the conditions for successful conversion are in themselves highly degradative to DNA; 3) frequently, the target sequence has inherent characteristics, such as unusually CG-rich regions, that make it difficult to sequence. This article carefully describes the key steps of the BGS protocol, which has worked successfully on as little as 100 ng of starting DNA, resulting in enough converted DNA for more than 50 PCR reactions. The aim of this article is to make learning and usage of BGS easier, more efficient and standardized for all users.

PROTOCOL

2.1. Prepare the DNA for Alkaline Denaturation and Bisulfite Conversion

STEP 1.
Extract the DNA from the sample using a standard technique appropriate for the sample type, such as the phenol-chloroform technique [5], followed by Proteinase K treatment to ensure that all nuclear proteins have been removed. Protocols and kits using chaotropic salts (such as the AllPrep DNA and RNA kit from QIAGEN) do not necessarily require Proteinase K treatment. See Table 1 for suggested reagents and equipment.

STEP 2. Digest between 0.1 and 5 µg of DNA with a suitable restriction enzyme such as PstI, BamHI or another frequent cutter according to the supplier's protocol in order to arbitrarily fragment the DNA into smaller, more manageable sizes. Take care to ensure that the chosen restriction enzyme does not have a restriction site in the sequence of interest. Sonication, a somewhat less reliable method as it may vary from experiment to experiment, may also be used. Test the efficiency and uniformity of fragmentation across samples by loading a fraction of the fragmented DNA (200 to 500 ng) onto an agarose gel, if possible.

Note: Fragmentation is an important step that reduces the possibility of double-strandedness, which prevents conversion. Using more DNA (e.g. up to 5 µg) is better, as the highly oxidative conditions of the bisulfite conversion reaction cause high rates of DNA degradation. If using between 0.1 µg and 1.0 µg, make sure to increase all following precipitation times to 8 h or more (overnight) and spin times to 1 h or more at 13,000 ×g at 4˚C.

STEP 3. Clean up and re-concentrate the DNA using a DNA purification column, such as provided in the QuickClean 5M PCR Purification Kit (GenScript), and elute in 60 µl of 2 mM Tris, pH 8.5. Use 1-2 µl to evaluate the concentration of starting material by standard spectrophotometry or NanoDrop before proceeding to bisulfite conversion. Store DNA at −20˚C to 4˚C for further use.
Note: While distilled water may be used, we have found that using a common DNA solvent such as 2 mM Tris, pH 8.5 gives more consistent results. This may be explained by the fact that de-ionized or distilled water can vary in pH.

STEP 4. Before beginning the protocol, ensure that all required reagents are on hand, including 3N NaOH, freshly prepared 3.6 M sodium bisulfite, pH 5.0, and 10 mM hydroquinone.

1) To make 3N NaOH, weigh out 6.67 g NaOH pellets and dissolve in 50 ml water.

Note: NaOH solutions are stable at room temperature indefinitely.

2) To make 3.6 M sodium bisulfite, pH 5.0, weigh out 7.49 g sodium bisulfite in 15 ml water and mix for 10 minutes on a nutator protected from light. Then adjust the acidity to pH 5.0 using about 12 to 13 drops of 10N NaOH. Use a long glass pipette to slowly add the 10N NaOH. Remove potential contaminants by passing the sodium bisulfite solution through a 0.22 micron syringe-driven filter.

Note: Sodium bisulfite solutions should always be freshly prepared.

3) To make 10 mM hydroquinone, weigh out 22 mg hydroquinone in 10 ml water and invert until dissolved. Remove possible contaminants by passing the hydroquinone solution through a 0.22 micron pore filter.

Note: Hydroquinone solutions may be preserved in 1 ml aliquots in foil-wrapped microcentrifuge tubes at −20˚C for up to 2 months.

STEP 5. Add 1 ml hydroquinone to 15 ml sodium bisulfite and adjust to 20 ml total with distilled water. This solution may be stored at 4˚C for up to 2 days in a foil-wrapped 50 ml polypropylene tube.

Denature the DNA Using NaOH and Heat

STEP 6. Proceed by denaturing 54 µl of the digested DNA with 6 µl of 3N NaOH at 37˚C for 15 min (for a final concentration of 0.3N NaOH). Place the sample on ice immediately following denaturation if the next step is not performed immediately.

Note: Ensure that the volume of 3N NaOH added is exactly 1/10th of the final volume.

Convert the DNA Using Bisulfite

STEP 7.
Sulfonate unmethylated cytosines in ssDNA.

1) Add 430 µl of the freshly prepared sodium bisulfite-hydroquinone solution to 0.1 to 5 µg of the digested DNA from STEP 3 and carefully mix 5 to 10 times by inversion for minimal aeration.

Note: If you are planning on sequencing a sample several times, to assess the various copies or alleles present in a same sample for example, it will be necessary to use several tubes per sample. Alternatively, several PCR reactions may be set up from as many aliquots of bisulfite-treated DNA. These precautions ensure that sequencings are not derived from a same template molecule in the PCR amplification step.

3) Incubate in a thermal cycler using the following program: (95˚C for 4 minutes and then 55˚C for 4 hours) × 2 cycles; (95˚C for 4 minutes and then 55˚C for 2 hours) × 1 cycle. Store temporarily at 4˚C. Using a thermal cycler rather than a water bath saves time, lowers the risk of contamination, and ensures that reactions will be kept in the dark without any hassle. The incubations at 95˚C ensure single-strandedness.

Note: It is not necessary to cover the solutions with mineral oil, as there is very little aeration in 200 µl capacity PCR tubes. However, overlaying solutions with a drop of mineral oil is highly recommended when conversions are performed in 1.5 ml capacity microcentrifuge tubes using other types of incubators such as water baths.

4) Desalt the DNA using DNA purification columns such as those from GenScript used before in STEP 3.

STEP 8. Add 6 µl of 3N NaOH to 54 µl of bisulfite-treated DNA from the previous step and incubate for 15 min at 37˚C. Immediately place the tube on ice.

Note: Ensure that the volume of 3N NaOH is exactly 1/10th of the final volume.

STEP 9. Desulfonate uracil-6-sulfonates to uracils.
1) Precipitate and desulfonate the DNA simultaneously by adding 2 µl glycogen (1 µg/µl) as a carrier, 26 µl of 10M ammonium acetate, pH 7.8, and 500 µl of 95% ice-cold ethanol, in this order, to each reaction. Place the tube in an ice-water bath for 10 minutes or at −20˚C for 1 hour (or at −20˚C overnight if using less than 1 µg of starting material DNA).

2) Spin the DNA in a benchtop centrifuge at 13,000 ×g for 30 minutes at 4˚C (or at 13,000 ×g for >1 h at 4˚C if using less than 1 µg of starting material DNA).

3) Wash the precipitate with 200 µl of 70% ice-cold ethanol, spin at 13,000 ×g for 5 minutes at 4˚C and air-dry. Ensure that all traces of ethanol have completely evaporated.

4) Resuspend the bisulfite-converted DNA in enough TE, pH 8.0 to make a final concentration of 25 ng/µl, ideal for subsequent PCR amplification steps.

Note: Calculate the volume of TE, pH 8.0 required based on the starting amount of pre-digested DNA used for bisulfite conversion (see STEP 3). Bisulfite-converted DNA concentration cannot be accurately assessed. If using less than 1 µg of genomic DNA, use 50-100 µl of elution buffer to ensure optimal elution conditions, but upwardly adjust the number of PCR cycles accordingly.

5) Make multiple aliquots to prevent too many freeze-thaw cycles. Store aliquots at −20˚C for short-term use or at −80˚C for up to 6 months.

PCR Amplify the Bisulfite-Converted DNA

STEP 10. Design primers to the bisulfite-converted DNA sequence such that each primer is 25 to 30 bases long, has a melting temperature (Tm) of approximately 60˚C, and, importantly, does not hybridize to any CpG-cytosines, if possible. Aim for an amplicon length of no more than 500 bp, with as few bases in the primers hybridizing to non-CpG-cytosines in the target sequence as possible. Software tools such as MethPrimer [6] work well for most sequences.
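Primer design works on the in-silico converted sequence (every non-CpG C reads as T), and the conversion threshold check described later counts how many non-CpG cytosines were actually converted. Both can be sketched in a few lines, assuming fully methylated CpGs for illustration:

```python
import re

def bisulfite_convert(seq):
    """In-silico conversion: non-CpG cytosines read as T;
    CpG cytosines are kept as C (fully methylated assumption)."""
    return re.sub(r'C(?!G)', 'T', seq.upper())

def conversion_efficiency(reference, read):
    """Fraction of non-CpG cytosines in `reference` observed as T in `read`."""
    positions = [m.start() for m in re.finditer(r'C(?!G)', reference.upper())]
    converted = sum(read[i].upper() == 'T' for i in positions)
    return converted / len(positions)

template = "ACGTCCG"
converted = bisulfite_convert(template)           # "ACGTTCG"
rate = conversion_efficiency(template, converted) # 1.0, passes a 98% threshold
```

A real design tool such as MethPrimer additionally handles Tm, primer length and CpG avoidance; this sketch only illustrates the sequence transformation the primers must match.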
Note: Re-amplifying a smaller region within the first amplicon using nested primers is a trick often used to overcome poor amplification due to the highly oxidative conditions of BGS. However, it has recently been suggested that detection of non-CpG methylation may be impaired by two rounds of PCR [7].

STEP 11. Optimize PCR conditions, including annealing temperature (Ta) and extension time, using a gradient thermal cycler. Recommended PCR conditions: 1 cycle at 95˚C for 5 min; 35 cycles at 94˚C for 1 minute, the target annealing temperature (Ta) for 2.5 minutes, and 72˚C for 1 minute; and finally 1 cycle at 72˚C for 5 minutes.

Note: Ensure that temperatures 1 to 5 degrees below and 1 to 5 degrees above the Ta are tested in a gradient thermal cycler in order to determine the temperature at which a single, specific PCR product is amplified. Using more cycles may be necessary if starting with less than 1 µg of genomic DNA.

STEP 12. Load a fraction of the PCR amplification onto an agarose gel of adequate concentration (i.e. use a 1.5% agarose gel for products between 200 and 500 bp), allow electrophoretic migration and view using standard viewing methods. If the amplified product is of the correct molecular weight and free of non-specific by-products, proceed to traditional cloning and sequencing, pyrosequencing, or another paired application. Determine the total number of non-CpG-Cs in the target sequence and set a successful conversion threshold. For example, with a threshold of 98% conversion, a sequence containing 50 non-CpG-cytosines must have 49 of its non-CpG-Cs converted to T in the PCR product for the data from that sample to be included in analyses.

DISCUSSION

The three main problems that users of the BGS protocol run into are incomplete conversion, degradation of modified DNA and sub-optimal PCR amplification.
Incomplete Conversion

Incomplete conversion arises mainly from inadequate solution alkalinization at steps 2.1 and 3.2.1, which blocks conversion either because double-stranded regions persist in the target DNA molecules (step 2.1) or because desulfonation is sub-optimal (step 3.2.1). This problem can most often be resolved by measuring the pH directly, or by including one mock sample so that the pH can be measured without wasting precious sample material. A less frequent cause of incomplete conversion is insufficient incubation at 55˚C or the lack of a melting step at 95˚C at step 3.1.3. We find it necessary to allow the reaction to proceed for a minimum of 6-8 hours and to melt the DNA prior to the last 2-3 hours of the reaction.

Degradation of Modified DNA

Degradation of the modified end-product results in low yield and cannot be overcome unless more starting material is used. Bisulfite conversion conditions are very harsh, causing depurination and degradation of 75%-90% of the DNA being modified. We have found it necessary to start with at least 2 to 4 µg of genomic DNA when possible, or to spike the starting material with 5 µg of salmon sperm DNA or tRNA when not, in order to buffer the depurinating conditions mentioned above. These measures ensure that there is enough bisulfite-converted DNA for multiple PCR reactions and for long-term use.
Sub-Optimal PCR Amplification

Sub-optimal PCR results, including absent signal, weak signal, or a poor signal-to-noise ratio, are often encountered after a single bisulfite DNA PCR experiment. This is most often due to insufficient yield (i.e. a low concentration of the modified target) and/or the presence of many other homologous modified products in the same solution. A practical workaround is to use a fraction (e.g. 1/50th) of the first PCR reaction volume to amplify a smaller region within the first amplicon using nested or semi-nested primers; however, this may not always lead to the desired outcome. In fact, it has been shown that detection of non-CpG methylation may even be impaired by two rounds of PCR [7]. Lack of signal even after a second round of PCR suggests complete degradation of the starting material, while a smear of non-specific products suggests sub-optimal primer design. Designing longer primers (up to 35-40 bp) or designing different primers (i.e. against another stretch of DNA within the target region) can often overcome this challenge.

CONCLUSIONS

Following these steps with 2 µg of starting DNA (before conversion) will yield sufficient converted DNA for at least 40 successful PCR reactions and related experiments such as pyrosequencing (Figure 3), assuming that 2 µl of 25 ng/µl bisulfite-converted DNA is used per PCR reaction.

The BGS protocol is a powerful protocol that has gained worldwide popularity for analyzing DNA methylation. It can be used alone, in clonal bisulfite sequencing [4] and direct bisulfite sequencing with GENESCAN [8], or it can be paired with many other techniques, including methylation-specific PCR [9], pyrosequencing [10], HRM [11], Ms-SNuPE [12], EpiTYPER MassARRAY [13], and Illumina methylation arrays [14]. Particular care must therefore be taken to ensure efficient conversion.

Figure 2.
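The arithmetic behind the 40-reaction figure is straightforward; a quick Python sketch of the nominal bookkeeping (assuming the 25 ng/µl resuspension target is met, i.e. losses are absorbed by resuspending in proportionally less TE):

```python
# Nominal bookkeeping behind "at least 40 successful PCR reactions".
starting_ng = 2000            # 2 ug genomic DNA before conversion
resuspension_ng_per_ul = 25   # protocol target concentration after STEP 9
per_reaction_ul = 2           # converted DNA used per PCR reaction

te_volume_ul = starting_ng // resuspension_ng_per_ul   # 80 ul of TE, pH 8.0
reactions = te_volume_ul // per_reaction_ul            # 40 reactions
```

Each reaction thus receives 50 ng of (nominally quantified) converted DNA, consistent with the yield claim above.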
Schematic representation of an arbitrary sequence following bisulfite conversion. Note that only one CpG cytosine is methylated (uppercase), whereas the other CpG cytosine and the two non-CpG cytosines are not. Following PCR amplification, 5-methylcytosines in the original DNA sample will be read as C, whereas unmethylated cytosines will be read as T.

Table 1. Suggested reagents and equipment required.
Tangles, generalized Reidemeister moves, and three-dimensional mirror symmetry

Three-dimensional N = 2 superconformal field theories are constructed by compactifying M5-branes on three-manifolds. In the infrared the branes recombine, and the physics is captured by a single M5-brane on a branched cover of the original ultraviolet geometry. The branch locus is a tangle, a one-dimensional knotted submanifold of the ultraviolet geometry. A choice of branch sheet for this cover yields a Lagrangian for the theory, and varying the branch sheet provides dual descriptions. Massless matter arises from vanishing-size M2-branes and appears as singularities of the tangle where branch lines collide. Massive deformations of the field theory correspond to resolutions of singularities, resulting in distinct smooth manifolds connected by geometric transitions. A generalization of Reidemeister moves for singular tangles captures mirror symmetries of the underlying theory, yielding a geometric framework where dualities are manifest.

The (2, 0) superconformal field theories in six dimensions, in particular the theory of N parallel M5-branes, are among the most important quantum systems, and yet they remain poorly understood. Their importance stems not only from the fact that they represent the highest possible dimension in which superconformal field theories can exist [1], but also from the observation that their compactifications to lower dimensions yield a rich class of quantum field theories whose dynamics are encoded by geometry. For example, four-dimensional N = 2 theories arise upon compactification on a Riemann surface [2][3][4][5], and provide a geometric explanation for Seiberg-Witten theory [6,7].
It is natural to expect that more general compactifications will provide more information about these mysterious six-dimensional theories. One way to do this is to increase the dimension of the compactification geometry. Thus, the next cases of interest would be compactifications on geometries of dimension d ≥ 3, resulting at low energies in effective quantum field theories in 6 − d dimensions. The aim of this paper is to focus on the situation where d = 3 with N = 2 supersymmetry. Examples of this type have been recently considered in [8][9][10] for the situation where 2 M5-branes wrap some ultraviolet geometry. In such constructions, as advocated in [10], the infrared dynamics of the system is described by a single recombined brane, similar to the situation studied in [11], that can be viewed as a double cover of the original compactification manifold. This infrared geometry is captured by describing the branching strands for the cover, which in general are knotted. When the branching strands collide, the cover becomes singular, and on that locus an M2-brane of vanishing size can end on the M5-branes, leading to massless charged matter fields. The goal of this paper is to clarify and extend the rules discussed in [10] and find the correspondence between the knotted branch locus encoding the geometry of the double cover and the underlying N = 2 quantum field theory. With this background we can phrase more precisely what we wish to do: we would like to uncover the relationship between three-dimensional N = 2 supersymmetric conformal field theories and a class of mathematical objects called singular tangles. In words, a tangle is a generalization of a knot that allows for open ends, and a singular tangle is the situation where the pieces of string are permitted to merge and lose their individual identity. Examples are illustrated in figure 1. The class of three-manifolds M where the infrared M5-brane resides are defined as double covers of R^3 branched along a singular tangle.
The reduction of the theory of a single M5-brane along M will result in the three-dimensional quantum field theories under investigation. The simplest class of examples is associated to non-singular tangles. In this situation M is a smooth manifold, and a single M5-brane on M constructs a free Abelian N = 2 Chern-Simons theory in the macroscopic dimensions. Light matter, appearing in chiral multiplets in three dimensions, arises in the theory from M2-brane discs which end along M. When such matter becomes massless, the associated cycle shrinks and M develops a singularity. The collapsing of this cycle can be described by the geometry of a singular tangle.

A conceptual slogan for the program described above is that we are investigating a three-dimensional analog of Seiberg-Witten theory. In the ultraviolet, one may envision an unknown non-Abelian three-dimensional field theory arising from the interacting theory of two M5-branes on R^3 with suitable boundary conditions at infinity. Moving onto the moduli space of this theory is accomplished geometrically by allowing the pair of M5-branes to fuse together into a single three-manifold M. The long-distance Abelian physics can then be directly extracted from the geometry of M. The situation we have described should be compared with the case of four-dimensional N = 2 theories whose infrared moduli space physics can be extracted from a Seiberg-Witten curve. In that case, charged matter fields are described by BPS states and can be constructed in M-theory from M2-branes. The case of an interacting conformal field theory can arise when the M2-brane particles become massless and the Seiberg-Witten curve develops a singularity, directly analogous to the three-dimensional setup outlined above.
An important feature of the constructions carried out in this paper, familiar from many constructions of field theories by branes, is that non-trivial quantum properties of field theories are mapped to simpler geometric properties of the compactification manifold. In the case of N = 2 Abelian Chern-Simons matter theories, the quantum features which are apparent in the geometry are the following.

• Sp(2F, Z) Theory Multiplets: the set of three-dimensional theories with N = 2 supersymmetry and U(1)^F flavor symmetry is naturally acted on by the group Sp(2F, Z) [12,13]. This group does not act by dualities. It provides us with a simple procedure for building complicated theories out of simpler ones by a sequence of shifts in Chern-Simons levels and gauging operations.

• Anomalies: in three dimensions, charged chiral multiplets have non-trivial parity anomalies. This means that upon integrating out a massive chiral field the effective Chern-Simons levels are shifted by half-integral amounts [14].

• Dualities: three-dimensional N = 2 conformal field theories enjoy mirror symmetry dualities. Thus, distinct N = 2 Abelian Chern-Simons matter theories may flow in the infrared to the same conformal field theory. In the case of three-dimensional Abelian Chern-Simons matter theories there are essentially three building-block mirror symmetries which we may compose to engineer more complicated dualities.

- Equivalences amongst pure CS theories. These theories are free and characterized by a matrix of integral levels K. It may happen that two distinct classical theories given by matrices K_1 and K_2 nevertheless give rise to equivalent correlation functions and hence are quantum mechanically equivalent.

- A gauged U(1) at level 1/2 with a charge-one chiral multiplet is mirror to the theory of a free chiral multiplet [13].

- Super-QED with one flavor of electron is mirror to a theory of three chiral multiplets, no gauge symmetry, and a cubic superpotential [15,16].
One way non-trivial dualities appear stems from the fact that the M5-brane theory reduced on M does not have a preferred classical Lagrangian. To obtain a Lagrangian description of the dynamics requires additional choices. In our context such a choice is a Seifert surface, a Riemann surface whose boundary is the given tangle. For any given tangle there exist infinitely many distinct choices of Seifert surface, each of which corresponds to a distinct but equivalent Lagrangian description of the physics. This fact is closely analogous to the choice of triangulation appearing in the approach of [8] for studying the same theories, as well as the choice of pants decomposition required to provide a Lagrangian description of M5-branes on Riemann surfaces [5].

Throughout the paper, our discussion of duality will be guided by a particular invariant of the infrared conformal field theory, the squashed three-sphere partition function

Z_b(x_1, · · · , x_F).  (1.1)

This is a complex-valued function of a squashing parameter b (which we frequently suppress in notation) as well as F chemical potentials x_i. It is an invariant of a field theory with prescribed couplings to U(1)^F background flavor fields. This partition function gives us a strong test for two theories to be mirror, and as such it is useful to build into the formalism techniques for computing Z. One method of explicit computation is provided by supersymmetric localization formulas. At the classical level, an Abelian Chern-Simons matter theory coupled to background flavor fields is determined by a level matrix K, charge vectors q_a for the chiral multiplets, and a superpotential W. Given such data, the three-sphere partition function for the infrared conformal field theory can be presented as a finite-dimensional integral [17,18]

Z(x_i) = ∫ d^G y exp[−πi (y x)^T K (y x)] ∏_a E(q_a · (y x)),  (1.2)

where (y x) denotes the combined vector of gauge and flavor variables. In the above, E(x) denotes a certain transcendental function, the so-called non-compact quantum dilogarithm, which will be discussed in detail in section 3.
The superpotential W enters the discussion only insofar as it restricts the flavor symmetries of the theory. The real integration variables y appearing in the formula can be interpreted as parameterizing fluctuations of the real scalars in the N = 2 vector multiplets. We will be interested in computing Z up to multiplication by an overall phase independent of all flavor variables. Physically this means in particular that throughout this work we will ignore all framing anomalies of Chern-Simons terms. We will see that the partition function in (1.2) can be usefully viewed as a wavefunction in a certain finite-dimensional quantum mechanics, and we develop this interpretation throughout. This connection of three-dimensional partition functions to quantum mechanics has been previously studied in [8,19-21].

One important test of the ideas that we develop can be found in their application to a class of three-manifolds M of the form Σ_t × R_t, where the Riemann surface Σ varies in complex structure along the line parameterized by t. These examples are closely connected to four-dimensional quantum field theories. At a fixed value of t, the situation is that of an M5-brane on Σ, which can be interpreted as a Seiberg-Witten curve for a four-dimensional N = 2 field theory. As t varies, this field theory moves in its parameter space and hence describes a kind of domain wall in four dimensions. When equipped with suitable boundary conditions, this geometry can engineer a three-dimensional N = 2 theory. Moreover, in such a construction the physical significance of the finite-dimensional quantum mechanics governing the partition function becomes more manifest. It is the quantum mechanics whose operator algebra coincides with the algebra of line defects of the parent four-dimensional theory [8,9,22]. In the context of such examples, one may utilize the machinery of BPS state counting to determine the resulting three-dimensional physics.
When the variation of Σ takes a particularly natural form, known as R-flow, the spectrum of three-dimensional chiral multiplets is in one-to-one correspondence with the BPS states of the underlying four-dimensional model in a particular chamber. As the moduli of the four-dimensional theory are varied, one may cross walls of marginal stability and hence find distinct spectra of chiral multiplets in three dimensions. Remarkably, the resulting three-dimensional theories are mirror symmetric. In this way, the geometry provides a striking confluence between two fundamental quantum phenomena: wall crossing of BPS states and mirror symmetry.

The organization of this paper is as follows. In section 2 we explain how free Abelian Chern-Simons theories arise from tangles, and how their partition functions are encoded in a simple quantum mechanical setup. In section 3 we show how the data of massless chiral fields is encoded in terms of singular tangles where branch loci collide. Each such singularity can be geometrically resolved in one of three ways, matching the expected deformations of the field theory. Upon fixing a Seifert surface, a surface with boundary on the tangle, we are able to extract a Lagrangian description of the theory associated to the singular tangle, including superpotential couplings. In section 4 we generalize to arbitrary singular tangles and explore physical redundancy in the geometry. As a consequence of mirror symmetries, distinct singular tangles can give rise to the same superconformal theory. These equivalences of field theories can be described geometrically by introducing a set of generalized Reidemeister moves acting on singular tangles. On deforming away from the critical point by activating relevant deformations of the field theory, we find that the generalized Reidemeister moves resolve to the ordinary Reidemeister moves familiar from elementary knot theory.
The appearance of Reidemeister moves clarifies the relationship between quantum dilogarithm functions and braids first observed by [23]. In section 5 we describe how three-dimensional mirror symmetries can be understood from the perspective of four-dimensional N = 2 parent theories via R-flow. Finally, in section 6 we describe three-dimensional U(1) SQED with arbitrary N_f.

Abelian Chern-Simons theory and tangles

In this section we explore the simplest class of examples: Abelian N = 2 Chern-Simons theories without matter fields. Such theories are free and topologically invariant. Thus, in particular, they are (rather trivial) conformal field theories. We find that such models are usefully constructed via reduction of the M5-brane on a non-singular manifold which is conveniently viewed as a double cover of R^3 branched over a tangle, and we describe the necessary geometric technology for elucidating their structure. In addition we describe a finite-dimensional quantum mechanical framework for evaluating their partition functions. Throughout, we study theories with U(1)^F flavor symmetry and couple them to F non-dynamical vector multiplets. The set of such theories is acted upon by Sp(2F, Z), and we describe this action from various points of view.

Chern-Simons actions, Sp(2F, Z), and quantum mechanics

Consider a classical N = 2 Abelian Chern-Simons theory. Let G denote the number of U(1) gauge groups, and F the number of U(1) flavor groups. The Lagrangian of the theory coupled to F background vector multiplets is specified by a symmetric (G + F) × (G + F) matrix of levels

K = ( k_G  k_M ; k_M^T  k_F ),  (2.1)

where k_G denotes the ordinary Chern-Simons levels of the U(1)^G gauge group, k_M the G × F matrix of mixed gauge-flavor levels, and k_F the F × F matrix of flavor levels. The action for the theory is

S = (K_{αβ}/4π) ∫ A^α ∧ dA^β + · · · ,  (2.2)

where the terms "· · ·" indicate the supersymmetrization of the Chern-Simons Lagrangian. K_{αβ} is integrally quantized with minimal unit one.
The first G vector multiplets are dynamical variables in the path integral, while the last F are non-dynamical background fields. It is worthwhile to note that one might naively think that the matrix K does not completely specify an N = 2 Chern-Simons theory. Indeed, since such theories are conformal they contain a distinguished flavor symmetry, U(1)_R, whose associated conserved current appears in the same supersymmetry multiplet as the energy-momentum tensor. One might therefore contemplate Chern-Simons couplings involving background U(1)_R gauge fields. However, such terms violate superconformal invariance [25]. Thus, as our interest here is superconformal field theories, we are justified in ignoring these couplings; we expand upon this point further in the following analysis.

Already in this simple context of Abelian Chern-Simons theory, we can see the action of Sp(2F, Z), specified as operations on the level matrix K defined in equation (2.1). For later convenience, it is useful to use a slightly unconventional form of the symplectic matrix J. In this basis, the integral symplectic group is conveniently generated by 2F generators σ_n with n = 1, 2, · · · , 2F. To define an action of the symplectic group Sp(2F, Z) on this class of theories, it therefore suffices to specify the action of the generators σ_n. The action of the generators with odd labels, σ_{2n−1}, preserves the number of gauge groups and shifts the levels of the n-th background field. The action of the even generators, σ_{2n}, is more complicated: it performs a change of basis in the flavor symmetries while at the same time increasing the number of gauge groups by one.

(The normalization of the Chern-Simons levels appearing in (2.2) indicates that these are spin Chern-Simons theories [24] whose definition depends on a choice of spin structure on spacetime. Since all the models we consider are supersymmetric and hence contain dynamical fermions, this is no restriction.)
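For F = 1 the group Sp(2, Z) = SL(2, Z), and the structure of the generators can be checked by direct matrix arithmetic. A small numpy sketch (using the standard symplectic form rather than the unconventional J of (2.3); the level-shift and gauging generators are written in their usual SL(2, Z) form):

```python
import numpy as np

# Standard symplectic form for F = 1 (the text uses a different basis).
J = np.array([[0, 1], [-1, 0]])
S = np.array([[0, -1], [1, 0]])   # "gauging"-type generator
T = np.array([[1, 1], [0, 1]])    # unit shift of a Chern-Simons level

def is_symplectic(M):
    """Check M^T J M = J, i.e. M preserves the symplectic form."""
    return np.array_equal(M.T @ J @ M, J)

I = np.eye(2, dtype=int)
assert is_symplectic(S) and is_symplectic(T)

# Defining relations of SL(2, Z): S^4 = 1 and (ST)^3 = S^2.
assert np.array_equal(np.linalg.matrix_power(S, 4), I)
assert np.array_equal(np.linalg.matrix_power(S @ T, 3),
                      np.linalg.matrix_power(S, 2))
```

The point made in the text survives this check: words in S and T that are equal in the group act identically, even though intermediate level matrices along the two words differ.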
Explicitly, σ_{2n} can be factored as σ_{2n} = g_n • c_U, where c_U is a change-of-basis operation determined by an F × F matrix U, and g_n is a gauging operation. Straightforward calculation using Gaussian path integrals may be used to verify that these operations satisfy the defining relations of Sp(2F, Z). Notice that, while these relations are simple to prove, they nevertheless involve quantum field theory in an essential way. If w is any word in the generators σ_i which is equal to the identity element by a relation in the symplectic group, then the action of w on a given matrix of levels K produces a new matrix w(K) which in general is not equal, as a matrix, to K. Nevertheless, the path integrals performed with the matrices K and w(K) produce identical correlation functions. Thus, the relations in Sp(2F, Z) provide us with elementary, provable examples of duality in three-dimensional conformal field theory.

Let us now turn our attention to the partition function Z for this class of models. Since Abelian Chern-Simons theory is free, an application of the localization formula (1.2) reduces the computation to a simple Gaussian integral (2.9), which is a function of an F-dimensional vector x of chemical potentials for the U(1)^F flavor symmetry. The integral is trivially done. From the resulting formula we see that the partition function is labeled by two invariants: the effective flavor coupling τ = k_F − k_M^T k_G^{-1} k_M and det(k_G). The possibility that the matrix τ may have infinite entries is included to allow for non-invertible k_G. In that case, the associated vector in the kernel of k_G describes a massless U(1) vector multiplet, and the flavor variable coupling to this multiplet is interpreted as a Fayet-Iliopoulos parameter. At the origin of this flavor variable the vector multiplet in question has a non-compact cylindrical Coulomb branch.
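For invertible k_G, the Gaussian integration over the gauge variables produces the Schur complement of k_G in the level matrix K as the effective flavor-flavor coupling; this is the standard result of completing the square. A numerical sketch (the example level matrix is hypothetical):

```python
import numpy as np

def effective_tau(k_G, k_M, k_F):
    """Effective flavor-flavor coupling after Gaussian integration over
    the gauge variables: the Schur complement k_F - k_M^T k_G^{-1} k_M.
    Assumes k_G is invertible; a kernel of k_G would instead signal an
    FI direction and a delta-function partition function."""
    return k_F - k_M.T @ np.linalg.inv(k_G) @ k_M

# G = 1, F = 1: a U(1) gauge theory at level 2 with a unit mixed
# ("BF"-type) gauge-flavor coupling and no bare flavor level.
k_G = np.array([[2.0]])
k_M = np.array([[1.0]])
k_F = np.array([[0.0]])
print(effective_tau(k_G, k_M, k_F))   # [[-0.5]]
```

Note that as det(k_G) → 0 the inverse blows up, which is the "infinite entries of τ" limit discussed below.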
This flat direction is not lifted when computing the path integral on S^3 because the R-charge assignments do not induce conformal mass terms. This implies that the partition function Z has a divergence. Meanwhile, away from the origin the non-zero FI parameter breaks supersymmetry and Z vanishes. In total, then, the partition function is proportional to a delta function in the flavor variable, and the narrow-width limit of the Gaussian when entries of τ are infinite, with infinite coefficient det(k_G) → 0, should be interpreted as such a delta function.

The partition function formula (2.10) provides another context in which to illustrate the symplectic group Sp(2F, Z) on conformal field theories, in this case via its action on the invariants (2.11). A general symplectic matrix can be usefully written in terms of F × F blocks A, B, C, D, up to conjugation by a certain invertible matrix R which transforms the standard symplectic form to our choice (2.3) and whose precise form is not important. Then the action of symplectic transformations on τ is simply the standard action of the symplectic group on the Siegel half-space,

τ → (Aτ + B)(Cτ + D)^{-1},

while det(k_G) transforms as a modular form. Thus the symplectic action on field theories reduces, at the level of partition functions, to the more familiar symplectic action on Gaussian integrals.

Before moving on to additional methods for studying these theories, let us revisit the issue of Chern-Simons couplings involving a background U(1)_R gauge field. As remarked above, such couplings are forbidden by superconformal invariance. Nevertheless, to elucidate the physical content of Z(x), as well as the partition functions of the interacting field theories appearing later in this paper, it is useful to examine exactly how such terms would enter the result.
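For F = 1 the Siegel half-space reduces to the upper half-plane and the symplectic action becomes the familiar fractional linear transformation. A quick numerical check in Python (standard SL(2, Z) conventions, not the R-conjugated basis used in the text):

```python
def moebius(M, tau):
    """Action of an SL(2, Z) element [[a, b], [c, d]] on the upper
    half-plane: tau -> (a*tau + b) / (c*tau + d)."""
    (a, b), (c, d) = M
    return (a * tau + b) / (c * tau + d)

S = ((0, -1), (1, 0))
T = ((1, 1), (0, 1))

tau = 0.3 + 1.7j
assert abs(moebius(T, tau) - (tau + 1)) < 1e-12   # T: tau -> tau + 1
assert abs(moebius(S, tau) - (-1 / tau)) < 1e-12  # S: tau -> -1/tau
assert moebius(S, tau).imag > 0                   # half-plane preserved
```

The T action is the unit level shift, and the S action is the inversion produced by gauging, matching the action on the invariants described above.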
The squashed three-sphere partition functions under examination are Euclidean path integrals on the squashed three-sphere S^3_b. This geometry is labelled by a parameter b, a positive real number; the symmetry under b → 1/b allows us to restrict our attention to the invariant combination c_b = i(b + b^{-1})/2. In this geometry preservation of supersymmetry requires one to turn on background values for scalars in the supergravity multiplet. While these fields are normally real, like the real mass variables x_i coupling to the ordinary flavors, in this background they are imaginary and proportional to c_b. As a result, R−R Chern-Simons levels and R-flavor Chern-Simons levels appear as Gaussian prefactors in the partition function. From the above, we note that the R−R Chern-Simons levels appear as multiplicative constants independent of the flavor variables x. Since we are interested in computing partition functions up to overall multiplication by phases, such terms are not relevant for this work. On the other hand, the R−F Chern-Simons terms appear as terms linear in x in the exponent. One can easily see why such terms violate superconformal invariance. The round three-sphere partition function for the conformal field theory in the absence of background fields is given by evaluating Z(x) at vanishing x and c_b = i. The first derivative with respect to x, evaluated at the round three-sphere and vanishing x, therefore computes the one-point function of the associated current. As the three-sphere is conformal to flat space, conformal invariance means that this one-point function vanishes, implying that k_{RF} must also vanish. Quite generally throughout this paper we encounter examples of partition functions of interacting CFTs where the naive value of k_{RF}, as extracted from the first derivative of Z(x) evaluated at the conformal point, does not vanish.
Superconformal invariance can always be restored in such examples by explicitly including ultraviolet counterterm values for k_{RF} to cancel the spurious contributions [25]. Thus, from now on we write expressions for partition functions with non-vanishing first derivatives, always keeping in mind that the true physical partition function of the conformal theory is only obtained by including suitable counterterms.

Quantum mechanics and partition functions

The partition function calculations described in the previous section can be phrased in a useful way in elementary quantum mechanics. In this context, the associated action of Sp(2F, Z) is known as the Weil representation [26]. We consider the Hilbert space of complex-valued functions of F real variables and aim to interpret Z(x) as a wavefunction. First, introduce position and momentum operators acting on wavefunctions, consistent with the symplectic matrix J introduced in (2.3). We use Dirac bra-ket notation for states, and let |y⟩ denote a normalized simultaneous eigenstate of the position operators. For convenience we also note that the wavefunction of a momentum eigenstate takes the form

⟨y|p⟩ = exp[2πi (y_1 p_1 + y_2 (p_1 + p_2) + · · · + y_F (p_1 + p_2 + · · · + p_F))].  (2.21)

On this Hilbert space there is a natural unitary representation of Sp(2F, Z), defined using the generators (2.4). One important feature of this representation is that its action by conjugation on position and momentum operators produces quantized canonical transformations. Explicitly, if M is any symplectic transformation, conjugation by the operator representing M implements the linear action of M on the position and momentum operators. This fact underlies the significance of this representation in all that follows.

We now wish to show that we may interpret the partition function of a theory Ψ as the wavefunction of an associated state |Ψ⟩,

Z_Ψ(x) = ⟨x|Ψ⟩.
(2.24)

Of course both wavefunctions and partition functions are complex-valued functions of F real variables x_i, so we are free to make the identification appearing in (2.24). The non-trivial aspect of this identification is that the Sp(2F, Z) action on quantum field theories, defined by the operations appearing in (2.5)-(2.6), can be achieved at the level of the partition function by the action of the operators of the same name defined by the representation given in (2.22). To see that these quantum mechanics operators behave correctly, note that if a state |Ψ⟩ corresponds to a quantum field theory with partition function ⟨x|Ψ⟩, then the integral definition of the partition function given in equation (2.9) implies that σ_{2j−1} shifts the background Chern-Simons level for the j-th flavor by one unit, as expected. We can similarly see that the quantum mechanical σ_{2j} operator acts as required: the S generator acts to gauge the flavor symmetry and introduces a new flavor which is dual to the original symmetry.

For a single U(1) flavor symmetry, the relevant quantum mechanics is single-variable, with standard commutation relations. A simple class of theories is defined starting from the trivial theory Ω. This theory has no gauge groups and vanishing flavor Chern-Simons levels. Its partition function is unity,

Z_Ω(x) = ⟨x|Ω⟩ = 1.  (2.32)

More interesting theories can be generated by starting with the trivial theory Ω and acting with S and T.
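The statement that gauging a flavor symmetry (at zero FI parameter) integrates the flavor variable against a plane-wave kernel can be made concrete for the simplest wavefunction: up to phase conventions, the S generator acts as a Fourier transform, and the Gaussian state with τ = i is self-dual under it. A numerical sketch (conventions are illustrative):

```python
import numpy as np

# Discretized flavor variable; the Gaussian decays fast enough that a
# plain Riemann sum on [-10, 10] is effectively exact.
y = np.linspace(-10, 10, 20001)
dy = y[1] - y[0]
psi = np.exp(-np.pi * y**2)   # Gaussian wavefunction, tau = i

def gauge(x):
    """Gauge the flavor symmetry: integrate against exp(2*pi*i*x*y),
    introducing the dual flavor variable x (a Fourier transform)."""
    f = psi * np.exp(2j * np.pi * x * y)
    return f.sum() * dy

for x in (0.0, 0.5, 1.3):
    # Self-dual point: the transform reproduces exp(-pi x^2).
    assert abs(gauge(x) - np.exp(-np.pi * x**2)) < 1e-8
```

This also illustrates why gauging maps Gaussian partition functions to Gaussian partition functions, with τ transforming as in the Siegel-half-space action discussed earlier.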
For a general SL(2, Z) element O we have a closed-form result for the partition function (2.33). The answer takes the general form (2.10) with associated invariants; as in the case of a single flavor symmetry discussed above, the resulting quantum field theory and partition function depend only on the element O in Sp(2F, Z), while a particular Lagrangian realization of the theory requires a choice of word in the generators σ_n which represents O.

This quantum mechanical setup naturally suggests additional quantities to compute. Rather than considering the wavefunction of O acting on the trivial state |Ω⟩, we may instead double the flavor variables and compute the complete matrix element of O,

Z^Op_O(x_1, · · · , x_F, y_1, · · · , y_F) ≡ ⟨x_1, · · · , x_F|O|y_1, · · · , y_F⟩.  (2.36)

The construction of (2.36) is not limited to the case of symplectic operators. Indeed, in section 5 we will see that an interesting class of non-symplectic operators O have matrix elements which are identified with partition functions of interacting three-dimensional conformal field theories coupled to 2F flavor fields. In general, such matrix element partition functions have the following features.

• In the physical interpretation we have developed, the integration over the y variables is the gauging of the associated flavor symmetries at vanishing values of the associated FI parameters.

• More generally, the quantum-mechanical operation of operator multiplication can be interpreted in field theory. A product of operators can always be decomposed into a convolution by inserting a complete set of states; again, the integration is physically interpreted as gauging. We consider the two theories whose partition functions are given by the matrix elements of O_i, identify flavors as indicated in (2.38), and gauge with no FI term.

• Z^Op_O(x, y) is the partition function of a theory coupled to 2F background flavor fields.
A general theory of this type is acted on by the symplectic group Sp(4F, Z), however a matrix element is acted on only by the subgroup which does not mix the x and y variables. The geometrical and physical interpretation of this splitting will be explained in section 5. Tangles Our goal in this section is to give a geometric counterpart to the field theory and partition function formalism developed in the previous analysis. A natural way to develop such an interpretation is to engineer the Abelian Chern-Simons theory by compactification of the M5-brane on a three-manifold M . In six dimensions, the worldvolume of the M5-brane supports a two-form field B with self-dual three-form field strength [27]. When reduced on a three-manifold, the modes of B may engineer an Abelian Chern-Simons theory. We review aspects of this reduction and explain the three-dimensional geometry required to understand the Sp(2F, Z) action. Reduction of the chiral two-form Consider the free Abelian M5-brane theory reduced on a three-manifold M . To formulate the theory of a chiral two-form, M must be endowed with an orientation which we freely use throughout our analysis. The effective theory in the three macroscopic dimensions is controlled by the integral homology group H 1 (M, Z). The simplest way to understand this fact is to note that a massive probe particle in the theory arises from an M2-brane which ends on a one-cycle γ in M . In particular the homology class of γ ∈ H 1 (M, Z) labels the charge of the particle. In the effective theory in three dimensions, massive charged probes are described by Wilson lines. Let C denote a one-cycle in the non-compact Minkowski space. A general Wilson line can be written as If the theory in question has G gauge fields and F flavor fields, then the charge vector q α has G + F components and integral entries. However, in the presence of non-vanishing Chern-Simons levels, the charge vector q is in general torsion valued. 
Thus, distinct values of the integral charge vector q may be physically equivalent. The allowed distinct values of the charge vector are readily determined by examining the two-point function of Wilson loops in Abelian Chern-Simons theory coupled to background vectors. The results are summarized as follows. Let Z^G ⊂ Z^{G+F} be the subset of charges uncharged under the flavor group U(1)^F. By restricting to this subset we may view the level matrix as specifying a map K : Z^G → Z^{G+F}, and those charge vectors in the image of this map are physically equivalent to no charge at all. Since we have determined that possible Wilson lines encode the homology of M, it follows that

H_1(M, Z) ∼= Z^{G+F}/Im(K). (2.42)

Equation (2.42) encodes the appropriate generalization of Kaluza-Klein reduction to the case of torsion valued charges. The fact that we study Chern-Simons theories up to possible framing anomalies (equivalently, overall phases in the partition function) means that the entire theory is characterized by the group (2.42). However, the homology of M, and hence the underlying physics, has no preferred description via a classical Lagrangian. Indeed, as we will illustrate in the remainder of this section, distinct classical theories with the same group of Wilson line charges can in fact arise from compactification on the same underlying manifold M. Thus, already in this elementary discussion of reduction of the two-form we see the important fact that compactification of the M5-brane theory produces a specific quantum field theory, not, as one might naively expect, a specific Lagrangian presentation of a classical theory which we subsequently quantize. It is for this reason that our geometric constructions of field theories are powerful: dualities are manifest. Finally, before moving on to discuss explicit examples, we remark on the geometry associated to flavor symmetries. These arise when the manifold M is allowed to become non-compact.
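The counting of inequivalent Wilson line charges is mechanical: for a pure Chern-Simons theory (F = 0) with a non-degenerate square level matrix K, the quotient Z^G/K(Z^G) is a finite group of order |det K|. A minimal self-contained sketch of this count (our own illustration, not code from the paper):

```python
from itertools import permutations

def int_det(M):
    """Determinant by Leibniz expansion; fine for the tiny matrices here."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        # permutation sign from the inversion count
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if perm[i] > perm[j])
        sign = -1 if inv % 2 else 1
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def wilson_line_count(K):
    """|Z^G / K(Z^G)| = |det K| when det K != 0: the number of distinct
    Wilson line charges of the pure Chern-Simons theory with level matrix K."""
    return abs(int_det(K))

# U(1)_k Chern-Simons has k distinct Wilson lines (charges mod k).
assert wilson_line_count([[2]]) == 2
# A U(1) x U(1) theory with level matrix [[2, 1], [1, 2]] has |det| = 3 lines.
assert wilson_line_count([[2, 1], [1, 2]]) == 3
```

The full group structure (not just its order) follows from the Smith normal form of K, which also handles the rectangular case with flavor charges.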
Suppose that M develops cylindrical regions near infinity which take the form of R × R_+ × S^1. Then on the asymptotic S^1 cycle we may reduce the two-form field to obtain another gauge field

(2.43)

However, unlike the compact cycles in the interior of M, the cycle S^1 has no compact Poincaré dual and hence A is a non-dynamical background field; it provides the effective theory in three dimensions with a U(1) flavor symmetry. Moreover, since the boundary behavior of A must be specified to obtain a well-defined theory in three dimensions, the resulting theory is of the type we have considered in the introduction: a theory with flavor symmetries and a specified coupling to background gauge fields. As a result, the partition function Z(x) is a well-defined observable of the theory. The number of flavor variables on which the result depends is the number of homologically independent cylindrical ends of M. In the context of the examples constructed in section 2.2.2, for F flavors we will require F + 1 cylindrical ends.

Double covers from tangles

The specific class of geometries that we will study are conveniently presented as double covers over the non-compact space R^3, branched over a one-dimensional locus L. Topologically, L is simply the union of F + 1 lines; however, its embedding in R^3 is constrained. On the asymptotic two-sphere at the boundary of three-space, we mark 2F + 2 distinct points p_1, · · · , p_{2F+2}. The 2F + 2 ends of L at infinity are the points p_i. Meanwhile, in the interior of R^3 the components of L may be knotted. Such an object is known as an (F + 1)-tangle. An example in the case of F = 1 is illustrated in figure 2. Two distinct tangles L_1 and L_2 are considered topologically equal when one can be deformed to the other by isotopy in the interior of R^3 which keeps the ends at infinity fixed. In the following we will also need to be more precise about the behavior near the asymptotes p_i.
Let B_r ⊂ R^3 denote the exterior of a closed ball of radius r centered at the origin. We view B_r topologically as S^2 × I, where I is an open interval. For large r, the portion of the tangle L ∩ B_r contained in B_r consists of 2F + 2 arcs. We constrain the behavior of these arcs by requiring that the pair (B_r, L ∩ B_r) is homeomorphic to the trivial pair (S^2 × I, {p_1, p_2, · · · , p_{2F+2}} × I), where the p_i are points in S^2. This constraint implies that the knotting behavior of the tangle eventually stops as we approach infinity. In practice it means that any planar projection of the tangle L appears at sufficiently large distances as 2F + 2 disjoint semi-infinite line segments which undergo no crossings. For most of the remainder of this section, we will argue that the class of three-manifolds obtained as double covers branched over tangles have exactly the correct properties to engineer the Abelian Chern-Simons theories, coupled to background flavor gauge fields, which we have discussed in the previous section.

Figure 2. The four endpoints of L extend forever towards the points at infinity.

As a first step, observe that such geometries do indeed support F flavor symmetries. Group the asymptote points into F + 1 pairs {p_{2i−1}, p_{2i}}. The double cover of R^3 branched over the two straight arcs emanating from {p_{2i−1}, p_{2i}} yields the anticipated cylindrical ends of M required to support flavor symmetry. To see this more explicitly, note that the boundary of the base may be viewed as a two-sphere. The boundary of the double cover is a double cover of the two-sphere branched over 2F + 2 points and is therefore a Riemann surface of genus F. The flavor cycles are the homology classes in this boundary Riemann surface which remain non-contractible in the three-manifold. It is easy to convince oneself that there are exactly F such flavor cycles.
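The genus count quoted above follows from the Riemann-Hurwitz formula for a double cover of the sphere with 2F + 2 simple branch points; spelled out (a standard computation, added here for completeness):

```latex
\chi(\partial M) \;=\; 2\,\chi(S^2) - \#\{\text{branch points}\}
\;=\; 2\cdot 2 - (2F+2) \;=\; 2 - 2F,
\qquad
\chi(\partial M) = 2 - 2g
\;\;\Longrightarrow\;\;
g = F .
```

Each simple branch point of a double cover contributes deficiency one to the Euler characteristic, which is the middle equality above.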
For instance, the simplest class of such three-manifolds consists of handlebodies defined by choosing F non-intersecting cycles on the boundary and filling them. In section 2.3 we explain how to extract a Lagrangian for an Abelian Chern-Simons theory from the geometric data of a tangle. As we have previously described, the M5-brane on M does not provide a preferred Lagrangian. Consistent with this fact, we find that a Lagrangian description of the field theory associated to a particular tangle requires additional geometric choices. In this case the choice is a Seifert surface, a surface whose boundary is the given tangle. For any fixed L there are infinitely many such surfaces, each giving rise to a distinct Lagrangian presentation of the same underlying physics. Finally, we argue that tangles, and hence the class of three-manifolds described as double covers branched over tangles, enjoy a natural action by Sp(2F, Z). To illustrate this action, we draw a generic tangle with F + 1 strands as in figure 3. Then, the action of the symplectic group is defined by the generators σ_j, where j = 1, · · · , 2F, which act on the tangles by braid moves in a neighborhood of the asymptotes p_i. Several examples are illustrated in figure 4. Again we can understand this three-dimensional geometry by examining the boundary at infinity of M. As we described above, this is a Riemann surface of genus F. The action defined in figure 4 is a surgery on M which in general changes its topology. This surgery is induced by mapping class group transformations in a neighborhood of the boundary of M.

Figure 3. A generic tangle L in R^3. The ellipses indicate that the strands continue to infinity with no additional crossings. In the interior of the box, the strands are in general knotted in an arbitrary way.
In particular, as is clear from the illustrations, what we have defined is not, a priori, an action of the symplectic group, but rather an action of the braid group B_{2F+1} on 2F + 1 strands [28].^10 The braid group and the symplectic group are related by a well-known exact sequence

1 → T_{2F+1} → B_{2F+1} → Sp(2F, Z) → 1,

where T_{2F+1} is the Torelli group, and the last map in the above arises because the braid group B_{2F+1} acts on the boundary Riemann surface preserving its intersection form. To make contact with our discussion of field theories, we wish to illustrate that the action of the braid group defined by figure 4 reduces to an action of the symplectic group on the associated field theories. This implies that any two elements of B_{2F+1} that differ by multiplication by a Torelli element must give rise to equivalent actions on the field theories extracted from an arbitrary tangle. More bluntly, the Torelli group generates dualities. One of the outcomes of this section is a proof of this fact.

Seifert surfaces

To understand the physics encoded by a tangle we need control over the homology of the cover manifold M. The appropriate tool for this task is a Seifert surface. In general, given any knot,^11 a Seifert surface Σ for the knot is a connected Riemann surface with boundary the given knot. An example is illustrated in figure 5. In the mathematics literature it is common to impose the additional requirement that Σ be oriented. In our context there is no natural orientation for Σ, and hence we proceed generally, allowing possibly non-orientable Seifert surfaces.

^10 The 'last strand' appearing at the bottom of the diagram in figure 3 is stationary under all braid moves. Alternatively one may work with the spherical braid group and impose additional relations. For simplicity we stick with the more familiar planar braids.

^11 In this paper the term knot will be used broadly to include both knots and multicomponent links.
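The map B_{2F+1} → Sp(2F, Z) sends each braid generator to a transvection along a one-cycle of the boundary surface. For F = 1 this is easy to check by hand; the sketch below (our own illustration, with one standard choice of cycles a, b satisfying ⟨a, b⟩ = 1) verifies symplecticity and the braid relation numerically:

```python
import numpy as np

# Symplectic intersection form on H_1 of the genus-1 boundary, basis (a, b).
J = np.array([[0, 1],
              [-1, 0]])

def transvection(v):
    """x -> x + <v, x> v with <v, x> computed via J; an integral
    symplectic map (sign conventions here are illustrative)."""
    v = np.asarray(v)
    return np.eye(2, dtype=int) + np.outer(v, v @ J)

Ta = transvection([1, 0])   # image of sigma_1: twist about the a-cycle
Tb = transvection([0, 1])   # image of sigma_2: twist about the b-cycle

# Both images preserve the intersection form: M^T J M = J.
for M in (Ta, Tb):
    assert np.array_equal(M.T @ J @ M, J)

# Braid relation sigma_1 sigma_2 sigma_1 = sigma_2 sigma_1 sigma_2.
assert np.array_equal(Ta @ Tb @ Ta, Tb @ Ta @ Tb)
```

Elements of the braid group mapping to the identity matrix under this assignment represent the Torelli group of the text.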
For any knot, there exist infinitely many distinct Seifert surfaces, and given a knot diagram a number of simple algorithms exist to construct a Σ [29]. We describe one useful algorithm in section 2.3.1. The reason that Seifert surfaces are relevant for our discussion is that if one wishes to construct a double cover branched over a knot, then a choice of Σ is equivalent to a choice of branch sheet. As such, features of the homology of the branched cover M can be extracted from a knowledge of a Seifert surface. However, the resulting three-manifold M depends only on the branch locus L, and hence the homology, and ultimately the associated physical theory, are independent of the choice of Σ. In the following we explain how any fixed choice of Seifert surface allows us to extract a set of gauge and flavor groups and a matrix of Chern-Simons levels from the geometry. To begin, for simplicity we assume that we are dealing with a knot in S^3, as opposed to the non-compact tangles in R^3 needed to support flavor symmetry. The generalizations to the present non-compact situation will then be straightforward. The detailed statements that we require are as follows. Any cycle in H_1(M, Z) can be thought of as a cycle on the base S^3 which encircles Σ. This can be viewed as a direct parallel with the theory of branched covers of the two-sphere.^12 Thus, we deduce that there is a surjective map

Meanwhile, there is a linking number pairing between cycles in H_1(S^3 − Σ, Z) and cycles in H_1(Σ, Z). This linking number pairing is perfect and hence we may extend (2.46) to

Our task is thus reduced to determining which cycles on the Seifert surface correspond to trivial cycles in the homology of M. To this end, we define a symmetric bilinear form, the so-called Trotter form

(2.48)

Our choice of notation is intentional: we will see that the Trotter form defines the Chern-Simons levels.
To extract K, we let α ∈ H_1(Σ, Z) and set α̂ to be the cycle in S^3 obtained from locally pushing α off of Σ in both directions. The cycle α̂ is a two-to-one cover of α. If Σ is orientable then α̂ consists of two disconnected cycles, each on a given side of Σ (as determined by the orientation); however, in general α̂ is connected. The definition of the Trotter form is

K(α, β) = lk_#(α̂, β),

where lk_# denotes the linking number pairing of cycles in S^3. A simple calculation illustrates that K is symmetric. A slightly less trivial argument shows that the image of K is exactly the set of cycles on Σ which are trivial in M. Thus, the completion of the sequence (2.47) is

In particular we conclude that

Double covers of S^3 branched over knots are exactly the geometries we expect to engineer Abelian Chern-Simons theories without flavor symmetries, and we may relate theorem (2.50) to physics as follows:

• A choice of Seifert surface Σ and a set of generators of homology α_1, · · · , α_G determines a set of G Abelian gauge fields.

• The Trotter form pairing on cycles in H_1(Σ, Z) is equal to the Chern-Simons level matrix on the associated gauge fields.

Distinct choices of Seifert surfaces are physically related by duality transformations. This fact is easy to verify directly. For example, distinct choices of Σ which differ by gluing in handles or Möbius bands add new gauge cycles and compensating levels to keep the underlying physics unmodified. Finally, we generalize our discussion of Seifert surfaces and homology to the case of non-compact geometries required to discuss flavor symmetries. Let L denote a tangle in R^3. We introduce non-compact Seifert surfaces Σ, again defined by the condition that they are connected surfaces with boundary L. However, now to compute flavor data we must fix a compactification of both L and Σ. We achieve this by identifying the points p_i in pairs and gluing in arcs near infinity as illustrated in figure 6.
Let δ indicate the union of the arcs at infinity, and Σ_c the compactified Seifert surface including δ. The surface Σ_c should be viewed as embedded inside S^3, the one-point compactification of R^3, and calculations of linking numbers etc. take place inside S^3. For simplicity, in future diagrams we often leave the compactification data of the Seifert surface implicit by setting the convention that whenever a non-compact Seifert surface consists of strips extending to infinity in R^3, the intended compactification is the one where the strips are capped off with arcs as in figure 6.

Figure 6. The asymptotic geometry of a Seifert surface for a generic tangle. The shaded blue region indicates the interior of Σ. The arcs at infinity indicate the compactification of L and Σ. The non-compact cycles on Σ give rise to flavor symmetries.

With these preliminaries about compactifications fixed, we may now state the required generalization of the sequence (2.50)

Note that in addition to the boundaryless cycles in Σ_c which give rise to gauge groups, H_1(Σ_c, δ, Z) also contains F cycles with boundary in δ. In the uncompactified Seifert surface these cycles are non-compact and illustrated in figure 6. They correspond physically to the U(1)^F flavor symmetry. To complete the construction it thus remains to extend the definition of the Trotter form. For boundaryless cycles in Σ_c the definition is as before. Meanwhile, to evaluate the Trotter form on cycles with boundary, we again push them out locally in both directions from Σ_c and compute the local linking number from the interior of Σ. Alternatively, one may simply think of the pair of points in the boundary of a flavor cycle in Σ_c as formally identified. In this way we obtain a closed cycle in S^3 and we compute its Trotter pairings as before.
In this way we obtain a bilinear form K defined on H_1(Σ_c, δ, Z), and the image of this form restricted to the boundaryless cycles in H_1(Σ_c, δ, Z) defines the term Im(K) appearing in (2.51). To summarize, given any tangle L in R^3, we extract a Lagrangian description of the effective Abelian Chern-Simons theory as follows:

• A choice of Seifert surface Σ and a set of generators α_1, · · · , α_{G+F} of the relative homology H_1(Σ_c, δ, Z) determines a set of Abelian vector fields. Generators corresponding to boundaryless one-cycles correspond to gauged U(1)'s, while those corresponding to one-cycles with boundary in δ are background flavor fields.

• The Trotter form pairing on cycles in H_1(Σ_c, δ, Z) is equal to the Chern-Simons level pairing on the associated vector fields. We denote by Im(K) the image of this pairing restricted to the subset of boundaryless cycles in Σ_c, and we have

Checkerboards

The previous discussion of Seifert surfaces is complete but abstract. In practice we need a concrete method for computing linking numbers and hence extracting a set of Chern-Simons levels from geometry. One such method, described in this section, is provided by so-called checkerboard Seifert surfaces. To begin, fix a planar projection of the tangle L ⊂ R^3. In such a planar diagram, the information about the knotting behavior of L is contained in the crossings in the diagram. Each crossing locally divides the plane into four quadrants. We construct a Seifert surface for L by coloring two of the four quadrants at each crossing in checkerboard fashion and extending consistently to all crossings. The colored region then defines Σ. Note that each crossing c in the diagram is endowed with a sign ζ(c) = ±1 depending on whether the cross-product of the over-strand with the under-strand through Σ at c is in or out of the plane, as shown in figure 7. To compute the Trotter form, we first assume that Σ_c appears compactly in the plane.^13
Then, there is a natural basis of boundaryless cycles in Σ_c associated to the compact uncolored regions of the plane. We orient these cycles counterclockwise. Similarly, in the diagram of Σ, non-compact white regions may be associated to flavor cycles. These cycles are again canonically oriented "counterclockwise," i.e. the cross-product of the tangent vector to the cycle with the outward normal pointing into the associated non-compact uncolored region must be out of the plane.^14 The Trotter pairing on these cycles is determined by summing over crossings involving a given pair of cycles, weighted by the sign of the crossing. Explicitly, for α and β a pair of generators as defined above we have

(2.53)

Equation (2.53) provides a convenient way to read off Chern-Simons levels for a given tangle and will be utilized heavily (although often implicitly) throughout the remainder of this work.

^13 This assumption cannot in general be relaxed. Indeed, when Σ_c is non-compact in the plane, one must take into account the fact that in the compactification procedure the plane becomes an embedded S^2 inside S^3 and hence may endow Σ_c with additional topology.

^14 There is one linear relation among the flavor cycles obtained in this way. So a given Σ will have F + 1 non-compact uncolored regions and F independent flavor cycles.

The action of σ_2 on L.

Figure 8. The action of braid moves on linking numbers. In (a), all linking numbers are unmodified except for that of the flavor cycle α_1, which runs from δ_{F+1} to δ_1 and is illustrated in red; its self-linking number is increased by one. In (b), we first change basis of flavor cycles to β_j, which runs from δ_j to δ_{j+1}. Then we gauge β_1, shown in green, and introduce a new flavor cycle, shown in red, linked with the gauged cycle.

The Torelli group of dualities

We are now equipped to investigate the symplectic action on tangles.
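In code, the checkerboard recipe is pure bookkeeping: list the crossings, record which pair of uncolored regions meets at each one, and accumulate signs into a symmetric matrix. The sketch below is our own schematic; the region labels are hypothetical and the sign convention is an illustrative assumption rather than a verbatim transcription of (2.53):

```python
def trotter_form(crossings, regions):
    """Accumulate a symmetric level matrix from a checkerboard diagram.

    crossings: list of (region_a, region_b, zeta), where region_a and
    region_b are the two uncolored regions meeting at a crossing and
    zeta = +1 or -1 is its sign.
    Convention (illustrative): each crossing contributes zeta to K[a][b].
    """
    idx = {r: i for i, r in enumerate(regions)}
    n = len(regions)
    K = [[0] * n for _ in range(n)]
    for a, b, zeta in crossings:
        i, j = idx[a], idx[b]
        K[i][j] += zeta
        if i != j:
            K[j][i] += zeta   # keep the form symmetric
    return K

# Hypothetical example data: a twist region with k positive crossings
# between the same two uncolored regions gives an off-diagonal entry k.
k = 3
K = trotter_form([("r1", "r2", +1)] * k, ["r1", "r2"])
assert K == [[0, k], [k, 0]]
assert all(K[i][j] == K[j][i] for i in range(2) for j in range(2))
```

The symmetry of the output mirrors the symmetry of the Trotter form established in the previous subsection.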
In particular, we wish to prove that the action of the braid group B_{2F+1} on tangles reduces to an action of the symplectic group Sp(2F, Z) when considered as an action on the corresponding physical theories. To prove this statement, we proceed in the most direct way possible. We compute the action of the braid group generators σ_n, illustrated in figure 4, on the Chern-Simons levels extracted from any Seifert surface associated to the tangle. We show that this action matches exactly the previously defined action (2.5)-(2.6). Since the latter action is symplectic, this implies that the former is as well. In particular, this suffices to prove that the Torelli group acts trivially on the underlying quantum field theory. To begin, we fix a Seifert surface with definite compactification data δ. As we have previously described, δ is a union of F + 1 arcs δ_i with i = 1, · · · , F + 1. We draw diagrams such that the arcs are ordered down the page, with δ_1 appearing at the top, δ_2 next, and so on. A basis of flavor cycles in H_1(Σ_c, δ, Z) is given by F cycles α_i, each of which begins at δ_{F+1} and terminates at δ_i. This geometry is shown in figure 6. With these conventions, the braid moves act as in figure 8. Consider first the odd braid moves σ_{2j−1} illustrated in figure 8a. According to formula (2.53), the effect of such a move is to modify the Trotter form by increasing K(α_j, α_j) by one while leaving all other entries invariant. This is exactly the expected action given by (2.5) on Chern-Simons levels for this transformation. Similarly, we may consider the braid moves with even index σ_{2j} illustrated in figure 8b. To understand this transformation we first change basis on flavor cycles to β_i, which run from δ_i to δ_{i+1}. The transformation from the basis α_i to the basis β_i is facilitated by the U matrix of equations (2.6)-(2.7). Then, the braid move σ_{2j} gauges β_j and introduces a new flavor cycle β̂_j.
Finally, we update the Trotter form to account for the new linking numbers apparent in figure 8b:

δK(β_j, β_j) = −1, δK(β̂_j, β̂_j) = −1, δK(β_j, β̂_j) = 1. (2.54)

This is exactly the gauging operation of equation (2.8). Thus we have completed the verification of the symplectic action. As a result of this analysis we conclude that the Torelli group T_{2F+1} acts via dualities on Abelian Chern-Simons theories. Given any tangle, one may act on it with a Torelli element to obtain a new geometry. Fixing Seifert surfaces, the two geometries in general will have distinct classical Lagrangian descriptions, yet their underlying quantum physics is identical. Moreover, as we see in section 3 and beyond, the technology of this section generalizes immediately to the more complicated geometries required for constructing interacting field theories. In particular, the symplectic action we have described arises from braid moves near infinity and hence is enjoyed by any geometry with the same asymptotics.

Geometric origin of quantum mechanics

To conclude our discussion of Abelian Chern-Simons theories, we briefly comment on the origin of the quantum mechanical framework for partition function calculations discussed in section 2.1.1. We fix an Abelian Chern-Simons theory T(M) engineered by reduction of the M5-brane on a three-manifold M. In this section we are only interested in the modes of this theory which descend from the six-dimensional chiral two-form, and throughout we ignore scalars and fermions. The three-sphere partition function of such a theory then has an underlying six-dimensional origin as the M5-brane partition function on the product S^3 × M. Thus far, we have viewed M as small and interpreted the long-distance physics as an Abelian Chern-Simons theory coupled to flavors, which we subsequently compactify on S^3.
However, an alternative point of view is to consider S^3 to be small, and obtain another effective three-dimensional description which is subsequently compactified on M. As S^3 has vanishing first homology, the resulting three-dimensional description is one with no Wilson line observables, and hence from the point of view of this paper, which studies partition functions on compact manifolds up to multiplication by overall factors, we cannot distinguish the result from the trivial theory. However, a standing conjecture is that in fact the reduction on S^3 gives rise to a U(1) Chern-Simons theory at level one. Assuming the veracity of this statement, we then arrive at a beautiful physical interpretation of the quantum mechanical calculations in section 2.1.1. Recall that M is not a compact manifold, but rather has non-compact cylindrical ends required to support flavor symmetry. One may equivalently view M as a manifold with boundary at infinity and with specified boundary conditions supplied by the background flavor gauge fields. On general grounds, the path-integral of U(1) level one Chern-Simons theory on M produces a state in the boundary Hilbert space determined by the quantization of Chern-Simons theory on ∂M. In this case, as a consequence of the conjecture, one is quantizing a space of U(1) flat connections on a Riemann surface with 2F independent cycles. The Hilbert space thus consists of wavefunctions of F real variables x_1, · · · , x_F, which are interpreted as the holonomies of a flat connection around a maximal collection of F non-intersecting homology classes in ∂M. The symplectic action is then the standard action in this Hilbert space induced by the action on the homology of the genus F Riemann surface ∂M. Thus, the quantum mechanical framework which emerged abstractly from supersymmetric localization formulas in section 2.1.1 takes on a natural physical interpretation when the associated field theories are geometrically engineered.
In particular, the viewpoint of the partition function Z^{T(M)}_{S^3}(x) as a wavefunction in a Hilbert space is a simple consequence of the six-dimensional origin of the computation and leads to a correspondence of partition functions

This identification is reminiscent of the one studied in [30] and was obtained in the case of three-manifolds from different perspectives by [31,32].

Particles, singularities, and superpotentials

In this section we exit the realm of free Abelian Chern-Simons theories and enter the world of interacting quantum systems. We study conformal field theories described as the terminal point of renormalization group flows from Abelian Chern-Simons matter theories. Thus, in addition to the vector multiplets describing gauge fields, our field theories will now have charged chiral multiplets. We will find that, in close analogy with the study of N = 2 theories in four dimensions, such theories can be geometrically encoded by studying the M5-brane on a singular manifold. In the context of three-manifolds branched over tangles, the natural class of singularities are those where strands of the tangle collide and lose their individual identity. We refer to such objects as singular tangles. Our main aim in this section is to give a precise description of these objects and explain how they encode non-trivial conformal field theories. In the process we will also describe how the geometry encodes superpotentials. A summary of results in the form of a concise set of rules for converting singular tangles to physics appears in section 3.4.

Singularities and special Lagrangians

We begin with a discussion of the geometric meaning of chiral multiplets and their associated wavefunctions in the three-sphere partition function. In our M-theory setting, the three-manifold M is embedded in an ambient Calabi-Yau Q, and massive particles arise from M2-branes which end along M on a one-cycle.
In the simplest case of a spinless BPS chiral multiplet, supersymmetry implies that M is a special Lagrangian and the M2-brane is a holomorphic disc, as illustrated in figure 9 [33,34]. The mass of the BPS particle is proportional to the area of the disc, and hence in the massless limit the cycle on which the M2-brane ends collapses. Thus, when a particle becomes massless the three-manifold M develops a singularity. A local model for this geometry is a special Lagrangian cone on T^2 in C^3. Such a cone is defined to be the subset L_0 in C^3 obeying [35]

L_0 = { (z_1, z_2, z_3) ∈ C^3 : |z_1|^2 = |z_2|^2 = |z_3|^2, Im(z_1 z_2 z_3) = 0, Re(z_1 z_2 z_3) ≥ 0 }. (3.1)

When the mass of the M2-brane is restored, the singularity is resolved. This can be done in three distinct ways [35]. Let m > 0; then the resolutions are

The resulting spaces are special Lagrangian three-manifolds in C^3 [34] diffeomorphic to S^1 × R^2. They differ by the orientation of a closed holomorphic disc in C^3 with area πm which represents the M2-brane. In the case of L^1_m this disc is given by

The other cases, D^2_m and D^3_m, are analogous. We see that the boundary of the disc is an oriented S^1 in L^1_m whose homology class generates H_1(L^1_m, Z) ∼= Z. In the other cases the boundary is given by an oriented circle around the origin of z_2 and z_3 respectively. One can thus see that the difference between the three ways the disc appears is determined by the orientation of its central axis in C^3. To make contact with our discussion of tangles, we view this local model for the singularity as a double cover over R^3. The special Lagrangians L^a_m are acted on by the involution

The quotient space is parametrized by the triple (x̂_1, x̂_2, x̂_3). Locally the x̂_i provide coordinates on L^a_m, but the global structure of the special Lagrangian is a double cover. The branch locus is the fixed points of (3.4) and is composed of two strands explicitly given by

where t ∈ R provides a coordinate along the strands.
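A quick numerical sanity check of the local model is possible if one assumes the standard Harvey-Lawson form of the first resolution, L^1_m = {|z_1|^2 − m = |z_2|^2 = |z_3|^2, Im(z_1 z_2 z_3) = 0, Re(z_1 z_2 z_3) ≥ 0} (quoted here from the literature on [35], since the displayed resolution equations are elided above); it is our own illustrative sketch, not a computation from the paper:

```python
import cmath
import math

m = 2.0  # resolution parameter; any positive value works

def in_L1(z1, z2, z3, tol=1e-9):
    """Membership test for the resolved special Lagrangian L^1_m, assuming
    the Harvey-Lawson form stated in the lead-in."""
    p = z1 * z2 * z3
    return (abs(abs(z1)**2 - m - abs(z2)**2) < tol
            and abs(abs(z2)**2 - abs(z3)**2) < tol
            and abs(p.imag) < tol
            and p.real >= -tol)

# The holomorphic disc {(z, 0, 0) : |z|^2 <= m} has boundary |z| = sqrt(m);
# every boundary point lies on L^1_m, so the disc ends on the resolved cone
# and its boundary circle generates H_1(L^1_m, Z).
for k in range(12):
    z1 = math.sqrt(m) * cmath.exp(2j * math.pi * k / 12)
    assert in_L1(z1, 0j, 0j)

# The cone point is NOT on L^1_m once m > 0: the singularity is resolved.
assert not in_L1(0j, 0j, 0j)
```

In the m → 0 limit the membership conditions reduce to those of the cone (3.1), matching the collapse of the disc described in the text.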
One way to see that the branched cover is an equivalent description of the original topology is to slice R^3 into planes labelled by a time direction. The coordinate t on the branch lines in (3.5) provides such a foliation, and increasing time defines a notion of flow. Each slice is a Riemann surface which is a double cover of the plane branched over two points and is thus a cylinder. Therefore, including time, we see that topologically the cover is R^2 × S^1. We pursue this perspective on local flows in M and connect them to four-dimensional physics in section 5. Returning to our analysis of the special Lagrangian cone, we note that when viewed as a double cover it is easy to see how the three different resolutions L^a_m are realized in terms of the configurations of the branch lines (3.5). We fix a planar projection of the geometry by declaring x̂_3 to be the oriented perpendicular direction. Then, we can depict the geometry as in figure 10. Note that figure 10c only shows the overcross. The other choice, where the strand from upper left to lower right goes under the second strand, called the undercross, does not occur. This is an artifact of the planar projection which we use to visualize the configuration. Indeed, exchanging the oriented normal x̂_3 for −x̂_3 exchanges the overcross for the undercross. By contrast, changing the normal direction from x̂_3 to x̂_1 or x̂_2 permutes the resolutions appearing in figure 10 but leaves the triple, as a set, invariant. In the limit m → 0 the branch lines collide and we recover the singularity (3.1). In R^3, this appears as four branch half-lines all emanating from the origin. These half-lines approach infinity in four distinct octants and hence specify the faces of a tetrahedron. In this way, we see the tetrahedral geometry of [8] emerge from the structure of special Lagrangian singularities. In particular, any triangulation of the three-ball into tetrahedra gives rise to one of the tangles described here.
Having thoroughly analyzed the local model, we may now introduce a precise definition of the concept of a singular tangle. It is simply a tangle where we permit pairs of strands to touch at a finite number of points. The local structure of the cover manifold M at each such point is that of the singular special Lagrangian cone discussed above, and the global identification of strands in the tangle indicates how these local models are glued together.

JHEP05(2014)014

Figure 11. Two different singularities. In (a) we see how an overcross singularity resolves after applying figure 10c. In (b) the corresponding resolution is shown for the undercross singularity. In both cases the two other resolutions of figure 10 are also present but not depicted.

In specifying the gluing we must keep track of additional pieces of discrete data.

• We draw singular tangles in planar projections of R^3. Hence each singularity is equipped with an oriented normal vector ±x̂_3. Varying the sign of the normal vector changes whether the overcross or undercross appears upon resolution.

• Fix a sheet labeling, 1 and 2, at each singularity. Then in the gluing we must specify whether the identified sheets are the same or distinct. Varying between these two choices alters the relative signs of the charges of the particles, as determined by the orientation of the M2-branes.

Both of the data described above have only a relative meaning: for a single singularity they are convention dependent, while for multiple singularities they may be compared. All told, if we draw singular tangles in a plane, each singularity is one of four possible types. We encode the four possibilities graphically with a thickened arrow on one of the strands passing through the singularity, as in figure 11. The thickened strand always resolves out of the page, while the direction of the arrow encodes the charge of the massless M2-brane residing at the origin of the singularity.
In general we expect that double covers branched over singular tangles may be realized as singular special Lagrangians embedded inside noncompact Calabi-Yau three-folds. However, aside from the specific case of the geometry defined in (3.1) by [35], no examples are known. We view this as an interesting problem for future work.

Wavefunctions and Lagrangians

Our next task is to explain in general how to extract a Lagrangian description of the physics defined by a singular tangle. As in the case of the free Abelian Chern-Simons theories studied in section 2, there is no unique Lagrangian; rather, for each choice of Seifert surface we obtain a distinct dual presentation. In the case of singular tangles, we will see that these changes in Seifert surfaces are related by non-trivial mirror symmetries. To begin, let us recall the data associated to a chiral multiplet in an Abelian Chern-Simons matter theory.

• A charge vector q_α ∈ Z^{G+F} indicating its transformation properties under U(1)^G × U(1)^F gauge and flavor rotations. In all of our examples the vector q_α will be primitive, meaning that the greatest common divisor of the integers q_α is one.

• A parity anomaly contribution. If a chiral multiplet is given a mass m, it may be integrated out, leaving a residual contribution to the Chern-Simons levels of fields. The shift in the levels is given by

k_αβ → k_αβ + (1/2) q_α q_β sign(m).    (3.6)

For primitive charge vectors the above shift has at least one non-integral entry. This implies that the ultraviolet levels are subject to a shifted half-integral quantization law. We take the associated shift to be part of the definition of the chiral multiplet.

• An R-charge indicating the scaling dimension of the associated chiral operator in the conformal field theory. This data is fixed by a maximization principle once a superpotential is specified, and hence is not additional data in the geometry [17]. This will be addressed in section 3.3.
To encode the partition function of such chiral multiplets we must introduce a new class of wavefunctions depending on these data. Each is given by a non-compact quantum dilogarithm

E_±(z; R),    (3.7)

an expression built from the imaginary constant c_b given in (2.16) and the function s_b(x), defined as

s_b(x) = ∏_{m,n ≥ 0} (mb + n b^{-1} + Q/2 − ix) / (mb + n b^{-1} + Q/2 + ix),   Q = b + b^{-1},    (3.8)

which was obtained through a localization computation on the squashed three-sphere in [36]; the numerator and denominator come from vortex partition functions on the two half-spheres [37]. The physical interpretation of this function is read from the variables as follows.

• The subscript of E_± encodes the fractional ultraviolet Chern-Simons level ±1/2 assigned to the particle.

• The variable z indicates the linear combination of gauge and flavor fields under which the chiral multiplet is charged. For E_± the charge is z = ±q · (y, x).

• The variable R denotes the R-charge.

Thus, we see that the physical data of a chiral multiplet is completely encoded by the wavefunctions (3.7). It follows that to assign a definite matter content to a singular tangle, as well as to extract the associated contributions to the partition function Z, it suffices to assign a quantum dilogarithm to each singularity.

To proceed, we introduce a singular Seifert surface Σ for a singular tangle L. As explained in section 2.3, from the homology of Σ we extract a basis of gauge and flavor cycles under which particles may be charged. Let α be such a cycle. Utilizing the sequence (2.51), we may view α equivalently as a cycle in the cover M. An M2-brane disc D ending on M has a charge determined by its linking numbers,

q_α = lk_#(α, ∂D).    (3.9)

The extension of this formula to the case of singular M is depicted in our graphical notation in figure 12. These dilogarithm assignments completely determine the matter content of a singular tangle. However, the assignments require a choice of Seifert surface. This surface is a choice of branch sheet for the double cover, and varying it does not alter the underlying geometry.
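The infinite-product representation of s_b(x) only converges after regularization, but two structural properties survive any symmetric truncation and can be checked numerically: each factor has unit modulus for real x, and the factors for x and −x are mutually inverse, so s_b(x) s_b(−x) = 1. A minimal sketch (the truncation level N and the function name are our choices):

```python
def sb_truncated(x, b=1.0, N=40):
    """Truncated double-sine product
    prod_{m,n=0}^{N-1} (mb + n/b + Q/2 - ix)/(mb + n/b + Q/2 + ix), Q = b + 1/b.
    The full product requires zeta-regularization, so this truncation is only
    illustrative; for real x every factor has unit modulus."""
    Q = b + 1.0 / b
    p = 1.0 + 0.0j
    for m in range(N):
        for n in range(N):
            a = m * b + n / b + Q / 2
            p *= (a - 1j * x) / (a + 1j * x)
    return p

p_plus = sb_truncated(0.7)    # truncated s_b(0.7)
p_minus = sb_truncated(-0.7)  # truncated s_b(-0.7); factor-by-factor inverse
```

The reflection property p_plus * p_minus = 1 holds exactly (up to float roundoff) because corresponding factors cancel pairwise, and |p_plus| = 1 reflects the unitarity of each factor for real argument.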
As a consequence, our rules are subject to a crucial test: the underlying quantum physics must be independent of the choice of Seifert surface. Given the dualities between free Abelian Chern-Simons theories already described in section 2, independence of the choice of Seifert surface is ensured provided we have the equality shown in figure 13. There, we see that one and the same singularity may make different contributions to an ultraviolet Lagrangian depending on the choice of Seifert surface. At the level of partition functions, this means that a singularity which contributes as E_+(x_α) with one choice of branch sheet can contribute as E_−(x_β) with a different choice.

Figure 13. A duality results from changing the Seifert surface. In (a) a singularity contributing E_+(x_α) to the partition function. In (b) the Seifert surface is changed and the same singularity contributes E_−(x_β).

Figure 14. The duality between a free chiral multiplet and a U(1) gauge field with a charged chiral field. In (a) we see the free chiral field coupled to the flavor cycle α. In (b) we see the gauge cycle β and flavor cycle α of the dual theory.

Thus, we see that consistency of our analysis requires a mirror symmetry implying that the same underlying conformal field theory may arise from ultraviolet theories with distinct matter content. To understand the nature of the duality implied by figure 13 we analyze its impact on the local model of a singular tangle involving a single singularity. Equality in more complicated examples follows from the locality of our constructions. The singular tangle together with its dual choices of Seifert surface and fixed compactification data δ_i is shown in figure 14. The ultraviolet field content in each case is as follows.

• Figure 14a: there is a background U(1) flavor symmetry associated to the cycle α and no propagating gauge fields.
Associated to the singularity there is a chiral multiplet with charge 1 under the flavor symmetry. This particle contributes +1/2 to the Chern-Simons level. The scalar x_α in the background U(1) multiplet is the real mass of the chiral field.

• Figure 14b: there is a U(1) flavor symmetry associated to the cycle α and a U(1) gauge symmetry associated to the cycle β. Associated to the singularity is a chiral multiplet uncharged under the flavor symmetry but with charge −1 under the gauge symmetry. The level matrix (3.10), including classical contributions from the Trotter pairing as well as the fractional contributions of the particles, has a non-vanishing off-diagonal entry pairing α and β. This off-diagonal level implies that the scalar x_α is the FI-parameter of the gauged U(1).

These two field theories are indeed known to form a mirror pair [13]. At the level of partition functions this equivalence is represented by a quantum dilogarithm identity (3.11), known as the Fourier transform identity [23]. The fact that our geometric description of conformal field theories provides a framework where this duality is manifest is a satisfying outcome of our analysis.

To gain further insight into this duality we now study resolutions of the singularity in both theories and interpret these from the viewpoint of three-dimensional physics. These resolutions correspond to motion onto the moduli space of the conformal field theory. From the perspective of the ultraviolet Lagrangians, the various branches of the moduli space can be described as Coulomb or Higgs branches, and the effect of the mirror symmetry is to exchange the two descriptions. The three different resolutions of (3.5) have the following effect on the geometry of branch lines; see figure 15. Let us start with case (c). One can clearly see that the self-Chern-Simons level of the field α, as determined by the Trotter pairing, is one. This has a simple explanation from the point of view of field theory.
Resolving the singularity means making the M2-brane massive, with a mass m ≠ 0. The IR physics is then obtained by integrating out this massive field, which according to (3.6) gives rise to a shift

k_αα → k_αα + (1/2) sign(m).

Thus, as the ultraviolet Chern-Simons level was already one-half, the effective level is one, exactly as the geometry of resolution (c) predicts. There is yet another way to see this, from the limiting behavior of the quantum dilogarithm at large mass.

The analogous analysis for resolution (b), with the opposite sign of the mass, results in an effective Chern-Simons level k_αα = 0. This is in complete accord with the geometry, as the cycle α has no self-linking after push-off in figure 15b. Equivalently, this can again be seen in the limiting behaviour of the quantum dilogarithm in the corresponding regime. The two resolutions we have studied thus correspond to motion onto the Coulomb branch of the theory, parameterized by the real mass m.

Now let us come to resolution (a), which is of a different nature. In order to understand what is happening we follow a path in the moduli space of the Joyce special Lagrangian starting from a point corresponding to a resolution of type (b) or (c) and ending at a point of resolution type (a). Along such a path the absolute value of the mass of the particle shrinks, as the volume of the M2-brane disc shrinks, until the field becomes massless at the singularity. As long as the field is massive it is not possible to turn on a vacuum expectation value for the scalar φ of the chiral multiplet, as this would lead to an infinite energy potential. However, when we sit at the CFT point and the field is massless, we can deform the theory onto the Higgs branch by activating an expectation value for φ. We draw the three branches of the theory schematically in figure 16. We claim that motion onto the Higgs branch corresponds to resolution (a) of the geometry. In order to see how this comes about we flip the Seifert surface to obtain the resolutions of the dual description of the theory, as shown in figure 17.
In this dual theory resolution (a) arises from choosing x_β ≫ 0, as can be seen from the limiting behavior of the negative-parity quantum dilogarithm.

Figure 17. Resolutions of the theory dual to a free chiral field.

Thus in the dual channel this resolution is obtained by giving a vev to the scalar part of a vector multiplet, and therefore corresponds to a point on the Coulomb branch of the dual theory. But then the D-term equation of the dual theory requires that x_α be set to zero, due to the Chern-Simons coupling of the two fields. Translating back to the original theory we indeed see that m = x_α = 0 and that we have a propagating massless field; we are thus capturing the correct effective description of the physics on the Higgs branch. For completeness we note that the dual theory is on the Higgs branch for resolution (b) and on the Coulomb branch for resolution (c). This can easily be seen by noting the limiting behavior of the negative-parity quantum dilogarithm for x_β of the opposite sign. The fact that resolutions of singular tangles capture motion onto the moduli space of the corresponding conformal field theories is a general feature of our constructions which will be pursued in more detail in section 4.2.

Superpotentials from geometry

There is one more ingredient in defining a three-dimensional theory with N = 2 supersymmetry that we have yet to address: the superpotential. In this section we fill this gap. As explained below, the existence of a superpotential can be described in terms of the intrinsic geometry of our three-manifolds. However, the precise form of the superpotential as an explicit expression involving fields depends on the choice of Seifert surface used to construct a Lagrangian description. In our context, the existence of an interaction described by a superpotential can readily be seen in terms of M2-brane instantons, as described in [10]. Here we briefly review that discussion. Consider some collection of massless chiral fields X_i.
Our M5-brane resides on a three-manifold M, which is a double cover of R^3 branched over a singular tangle L. Meanwhile, the entire construction is embedded in an ambient Calabi-Yau Q. As studied above, each of the particles X_i corresponds to a singularity of the tangle L. Given this setup, a superpotential interaction for the chiral fields X_i may arise from an instanton configuration of an M2-brane. This is a three-manifold C in Q whose boundary ∂C is a two-cycle in M that intersects the particle singularities X_i. Consider the projection of the instanton M2 to one sheet of the double cover, ∂C_±. This must be a polygon bounded by the tangle L, with vertices given by the singularities of the X_i. A volume-minimizing configuration of this three-cycle will correspond to an interaction generated by a supersymmetric M2 instanton. This object is precisely of the correct geometric form to generate a superpotential term of the schematic form W = ∏_i X_i. To sharpen this discussion, there are several further considerations.

• The coefficient of the interaction is controlled by the instanton action, which is proportional to e^{−V}, where V is the volume of the supersymmetric three-manifold C. To generate a non-zero interaction, we need the three-manifold to have finite volume. Since our framework allows a non-compact manifold M with L going off to infinity, we must restrict our superpotential polygons on ∂C_± to be compact.

• The instanton action receives a contribution of exp(i ∫_{∂C} B) from the boundary of the M2 ending on the M5-brane. If [∂C] = 0, that is, if the boundary of the M2 is a trivial two-cycle, then this term is irrelevant. However, in general [∂C] is a non-trivial homology class and we find a contribution involving the dual photon, where γ is the scalar field dual to the photon. This indicates the presence of a monopole operator M_j = exp(σ + iγ) in the superpotential. So in this situation we find a superpotential W = M_j ∏_i X_i.
Of course, more generally [∂C] is some integer linear combination of homology basis elements, and so we might find multiple monopole operators in the superpotential.

• The invariance of W under all gauge symmetries apparent in the homology of the Seifert surface implies a compatibility condition on the discrete data living at the singularities bounding the associated polygonal region. To analyze the charge, we make use of the exact quantum-corrected charge of the monopole operator (3.19), where k_αβ is the Chern-Simons level including both the integral part from the Trotter form and the fractional contribution from particle singularities.

Given the above discussion, the next step is to analyze the explicit geometry of supersymmetric M2-brane instantons and determine which possible contributions in fact occur. This problem is important, but beyond the scope of this work. For our purposes we simply take as an ansatz that every possible gauge invariant contribution to the superpotential, present in the geometry as a polygon bounded by singularities, in fact occurs. With this hypothesis, to extract the superpotential in complete generality we analyze a candidate contribution by expressing the boundary two-cycle ∂C in a basis of two-cycles. We include such a term in the superpotential provided it is gauge invariant, as dictated by the charge formula (3.19). The full superpotential is then a sum over all gauge invariant terms associated to all polygonal regions present in the tangle diagram of L. Although it may seem cumbersome to explicitly calculate which polygons yield gauge invariant contributions to W, in practice there is a simple sufficient, but not necessary, graphical rule which ensures gauge invariance. It applies to the simplest class of contributions to the superpotential, namely polygons which lie entirely in the plane of a given projection of the Seifert surface.
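The linking numbers entering charge formulas such as (3.9) and (3.19) can be approximated numerically via the Gauss linking integral, lk = (1/4π) ∮∮ (r_2 − r_1) · (dr_1 × dr_2) / |r_2 − r_1|^3. The discretization below is our own illustrative sketch; for a Hopf link it returns ±1.

```python
import math

def gauss_linking(c1, c2, N=200):
    """Approximate the Gauss linking number of two disjoint closed curves,
    each given as a callable t -> (x, y, z) with period 2*pi (midpoint rule)."""
    def pts(c):
        P = [c(2 * math.pi * k / N) for k in range(N)]
        mids = [tuple((P[k][i] + P[(k + 1) % N][i]) / 2 for i in range(3)) for k in range(N)]
        segs = [tuple(P[(k + 1) % N][i] - P[k][i] for i in range(3)) for k in range(N)]
        return mids, segs
    m1, d1 = pts(c1)
    m2, d2 = pts(c2)
    total = 0.0
    for a in range(N):
        for b in range(N):
            r = tuple(m2[b][i] - m1[a][i] for i in range(3))
            cx = (d1[a][1] * d2[b][2] - d1[a][2] * d2[b][1],
                  d1[a][2] * d2[b][0] - d1[a][0] * d2[b][2],
                  d1[a][0] * d2[b][1] - d1[a][1] * d2[b][0])
            total += (r[0] * cx[0] + r[1] * cx[1] + r[2] * cx[2]) \
                     / (r[0] ** 2 + r[1] ** 2 + r[2] ** 2) ** 1.5
    return total / (4 * math.pi)

# Hopf link: unit circle in the xy-plane, and a unit circle in the xz-plane
# threaded through it. The linking number is +-1 (sign depends on orientation).
lk = gauss_linking(lambda t: (math.cos(t), math.sin(t), 0.0),
                   lambda t: (1.0 + math.cos(t), 0.0, math.sin(t)))
```

The two curves stay at distance ≥ 1 from each other, so the midpoint rule converges rapidly, with error of order (2π/N)^2.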
This rule is simply that the arrows on the singularities must all circulate in one direction around the gauge cycle in question. It may easily be derived from formula (3.19) together with the charge assignments of particles dictated by figure 12. Examples of this type are shown in figure 19. We encounter more general 'non-planar' superpotential terms in our analysis of examples in section 6.1. We shall provide non-trivial evidence for the consistency of these superpotential rules by using them to reproduce known mirror symmetries in section 6.2. We leave for future work the interesting problems of deriving our prescription from first principles in M-theory and further proving that our combinatorial rules are consistent with all possible dualities to be described in section 4.

Figure 19. Projections of BPS M2-brane instantons, with the singular tangle in black: (a) superpotential without monopole; (b) superpotential with monopole. The particles X_i are indicated by the location of the black arrows, the Seifert surface is shaded in blue, and the projection of the instanton is shown in green. In (a), the M2 instanton projects to a trivial 2-cycle in M and therefore has no monopole contribution; we find W = X_1 X_2 X_3. In (b), the M2 projects to the non-trivial 2-cycle dual to the 1-cycle y shown on the Seifert surface. This contributes a monopole operator, yielding W = M_y X_1 X_2 X_3.

Physics from singular tangles: a dictionary

To conclude our discussion of singularities, we briefly summarize the algorithm for extracting an ultraviolet Lagrangian description of the physics associated to a singular tangle L.

• Pick a Seifert surface Σ. The homology H_1(Σ_c, δ, Z) specifies a basis of gauge and flavor cycles. Boundaryless cycles are dynamical gauge variables, while cycles with boundary are background flavor fields.

• Compute the Chern-Simons levels by evaluating the Trotter form on the homology H_1(Σ_c, δ, Z).
In this procedure the singularities make fractional contributions to linking numbers: singularities of plus type, illustrated in figures 12a and 12b, contribute 1/2, while singularities of minus type, illustrated in figures 12c and 12d, contribute −1/2.

• Assign to each singularity a chiral field X_i. The field is charged under cycles on Σ passing through the singularity. The charge is +1 (−1) if the singularity is of plus type and the cycle is oriented with (against) the arrow at the singularity. The charge is −1 (+1) if the singularity is of minus type and the cycle is oriented with (against) the arrow at the singularity.

• Compute the superpotential by summing over gauge invariant contributions from closed polygonal regions in L. Each monomial entering W contains a product of chiral fields dictated by the vertices of the polygon, and possibly various monopole operators determined by expressing the polygon in a basis of two-cycles dual to H_1(Σ_c, δ, Z). Gauge invariance of the contribution of a given polygon is determined by application of the quantum-corrected charge formula for monopole operators (3.19).

The physical theory associated to L is the infrared fixed point determined by this ultraviolet Lagrangian data. Varying the choice of Seifert surface provides mirror ultraviolet Lagrangians, but does not alter the underlying infrared dynamics. In general the resulting theory is a strongly interacting system which enjoys a U(1)^F flavor symmetry. The action of Sp(2F, Z) on this conformal field theory is determined geometrically by the braid group action studied in section 2.4. The three-sphere partition function Z is an invariant of the theory which is extracted from this ultraviolet Lagrangian by generalizing the quantum-mechanical framework of section 2.1.1 and assigning to each singularity the quantum dilogarithm wavefunctions dictated by figure 12.
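The charge and fractional-level rules of the dictionary above are simple enough to encode directly. The following is a schematic transcription; the data layout and function names are ours, not from the text.

```python
def chiral_charge(sing_type, oriented_with_arrow):
    """Charge of the chiral field under a cycle through the singularity:
    +1 (-1) for a plus-type singularity with the cycle oriented with
    (against) the arrow, and the opposite signs for a minus-type singularity."""
    q = 1 if oriented_with_arrow else -1
    return q if sing_type == '+' else -q

def fractional_level(sing_types):
    """Total fractional contribution of a list of singularities to a linking
    number: +1/2 per plus-type and -1/2 per minus-type singularity."""
    return sum(0.5 if s == '+' else -0.5 for s in sing_types)
```

For example, two plus-type singularities on a cycle contribute a total of +1, so the half-integral shifts can combine into an integral Chern-Simons level.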
In the remainder of this paper we apply these rules to further analyze the geometric description of mirror symmetries, and explore applications of the framework.

Dualities and generalized Reidemeister moves

In the previous sections we have developed a technique for extracting conformal field theories from singular tangles. However, there is still non-trivial redundancy in our description: as a consequence of mirror symmetry, two distinct singular tangles may give rise to equivalent quantum field theories. In this section we determine the equivalence relation implied on singular tangles by mirror symmetries, and explore its geometric content.

In searching for such relationships one may take inspiration from the case of non-singular tangles, where the basic relations are the three Reidemeister moves, given as graphical identities of tangle diagrams. These moves are local and may be applied piecewise in any larger tangle diagram. Further, these moves are a generating set for equivalences: any two tangles which are isotopic may be related to one another by a sequence of Reidemeister moves.

In the case of singular tangles we find a similar structure. Basic mirror symmetries determine relations on singular tangles which take the form of generalized Reidemeister moves. They are related to the moves presented above by replacing some crossings by singularities. Further, each of these equivalences is local, and hence they may be applied piecewise in a larger singular tangle to engineer more complicated relations. It is natural to conjecture that these generalized Reidemeister moves, together with the Torelli dualities of section 2.4, provide a complete set of quantum equivalence relations on singular tangles.

In section 4.1 we present a detailed description of the generalized Reidemeister moves, as well as the associated quantum dilogarithm identities that result from application of
In section 4.2 we show how deformations away from the conformal fixed point result resolve generalized Reidemeister moves into the ordinary Reidemeister moves. Generalized Reidemeister moves In this section we present the list of generalized Reidemeister moves. Each takes the form of a graphical identity involving two singular tangles. The precise form of these equalities depends on the discrete data living at the singularities. There are two things to note about this dependence which follow immediately from our analysis of the local model in section 3.1. • If we flip all arrows by 180 degrees on both sides of an identity, it still holds. Indeed, such a flip is equivalent to reflecting the sign of all U(1) gauge and flavor groups. Geometrically, this is equivalent to globally changing the labeling of sheets from 1 to 2 in the double cover. • Given any identity, if we exchange all overcross and undercross of non-singular crossings in the diagram, while at the same time exchanging all overcross vs. undercross singularities, the identity still holds. This is true because each of our diagrams is drawn in a fixed projection with oriented normal vectorx 3 . Globally reflectinĝ x 3 → −x 3 generates the indicated transformation on diagrams, as shown for example in figure 20. In the following, we take these two principles into account and thereby present a reduced set of generalized Reidemeister moves. Additional dualities may be generated by changing the discrete data at the singularities as above. Rules descending from move 1 Here, we consider a singular version of the first Reidemeister move. Populating the singular tangles with a Seifert surface generates partition function identities. We will look at two such choices of Seifert surface differing by black-white duality. The first choice does not contain a gauge group whereas the second choice does and is yet another version of the Fourier transform identity. 
The singular version of the first Reidemeister move is a graphical identity between two singular tangles. With a choice of planar Seifert surface it has the following two interpretations.

In quantum mechanics language, the first is equivalent to starting with a quantum dilogarithm and applying a T-transformation. This does not involve any integrals, as the quantum dilogarithm is an eigenstate of the T-operator. Hence there is also no gauge group in the 3d gauge theory interpretation. The only effect on the gauge theory is a change in the background Chern-Simons levels: they are decreased by one unit.

The second represents a duality containing a U(1) gauge field on one side but no gauge field on the other. This rule is equivalent to the Fourier transform identity discussed in section 3.1, and is another singular-tangle representation of that duality. Here, the theory of one U(1) gauge field at level one-half together with a charged chiral particle is mirror to a free chiral field.

Rules descending from move 2

The second Reidemeister move can be generalized to give rise to identities between singular tangles where neighbouring singularities cancel pairwise, such that on the other side of the identity there is no singularity at all. We therefore refer to these identities as pairwise cancellation of singularities. We will also examine a partition function identity inherited from the tangle identity for one choice of Seifert surface. The relevant singular tangle identities are graphical equalities between tangle diagrams.

From the perspective of the 3d gauge theory these can be understood as follows. We have a closed polygonal region bounded by two singularities. As discussed in section 3.3 this gives rise to a superpotential coupling the two chiral fields. Thus the particles are given mass and make no contribution to the infrared physics. The dual theory then contains no particles but, depending on the UV Chern-Simons levels, it can contain background Chern-Simons levels. Picking a Seifert surface, these rules translate to quantum dilogarithm identities.
From this perspective, the underlying identity of pairwise cancellation of singularities is equation (16) in the appendix of reference [23].

Rules descending from move 3

The most important rule arises from singularization of the third Reidemeister move. This rule is called the 3-2 move and encodes a non-trivial three-dimensional mirror symmetry. In this section we clarify its relation to the third Reidemeister move by singularizing all crossings on one side of the identity and only two on the other side. Apart from the 3-2 move, the third Reidemeister move can also be singularized by adding only one singularity on both sides. This application follows from the previously identified Fourier transform identity and hence does not represent an independent mirror symmetry. Nevertheless, the simple application is useful when moving between Seifert surfaces in the examples of sections 5 and 6. We turn to this simple application first and then discuss the 3-2 move.

Change of branch sheet. Applying the Fourier transform identity of figure 13 locally, we obtain a generalization of the third Reidemeister move. On one side of the duality we have a theory with a chiral particle charged under a U(1) gauge field, which in turn couples to two background gauge fields. The duality relates this theory to one with no gauge group, a chiral multiplet, and two flavor fields. The partition function equality is again an application of figure 13.

The 3-2 move. The relevant singular tangle identity is depicted below. We clearly see that this identity relates a theory with three chiral fields to one with just two chiral fields. Such theories are known to come in mirror pairs in three dimensions [13,15,16,38]. Examining the left-hand side we notice the presence of a closed polygonal region bounded by three singularities, and hence the existence of a superpotential. To extract the physical content we choose Seifert surfaces as shown below.
The physical theories are then read off:

• left-hand side: a theory with three chiral fields X, Y, Z, no gauge symmetry, and a cubic superpotential W = XYZ, known as the XYZ-model;

• right-hand side: a theory with a gauged U(1) with vanishing self-Chern-Simons level and two oppositely charged chiral fields Q and Q̃, known as U(1) super-QED with N_f = 1.

These theories are known to form a mirror pair [16]. At the level of partition functions this duality is the pentagon identity for quantum dilogarithms [23].

Resolutions of dualities

In this section we make the connection between generalized Reidemeister moves and ordinary Reidemeister moves precise. We show that motion onto the moduli space of the conformal field theories appearing on both sides of a generalized Reidemeister move resolves them into ordinary Reidemeister moves. To achieve this we choose a particular Seifert surface such that all the resolutions in question are obtained as motion onto the Coulomb branch. In general such a deformation gives masses to all chiral fields, and in the infrared they can be integrated out. Generically, this leads to a fractional shift in the Chern-Simons levels of the form [16]

(K_IJ)_eff = K_IJ + (1/2) Σ_{a=1}^{N_f} (q_a)_I (q_a)_J sign(m_a) ∈ Z,   I, J = 1, ..., G + F,    (4.1)

where we have noted that the effective levels are integral in order to ensure gauge invariance. These effective levels are depicted in figure 21, as applied to the single singularity studied in section 3.2. In applying this logic to study resolutions of singular tangles, one must take care to remain in a supersymmetric vacuum: in other words, the F- and D-term equations have to be satisfied. This is dealt with next.

F- and D-term equations

Let us elaborate the Coulomb branch resolutions from the viewpoint of the 3d gauge theory. The singular tangle describes the CFT at the origin of the Coulomb and Higgs branches.
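Equation (4.1) is straightforward to evaluate. The sketch below is our own helper (names and data layout are ours); the accompanying check reproduces the single-chiral example of section 3.2, where a UV level of 1/2 flows to 1 or 0 depending on sign(m).

```python
def effective_levels(K, charges, masses):
    """(K_IJ)_eff = K_IJ + 1/2 * sum_a (q_a)_I (q_a)_J sign(m_a), eq. (4.1).
    K is the (G+F)x(G+F) UV level matrix, charges a list of integer charge
    vectors q_a (one per chiral field), masses the list of real masses m_a."""
    sign = lambda m: (m > 0) - (m < 0)
    n = len(K)
    return [[K[I][J] + 0.5 * sum(q[I] * q[J] * sign(m)
                                 for q, m in zip(charges, masses))
             for J in range(n)] for I in range(n)]
```

For a single chiral field of charge 1 at UV level 1/2, `effective_levels([[0.5]], [[1]], [m])` gives [[1.0]] for m > 0 and [[0.0]] for m < 0, matching resolutions (c) and (b) respectively.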
If we discuss only resolutions which remain at the origin of the Higgs branch, then the resulting resolutions correspond to different leaves of the Coulomb branch, parameterized by Fayet-Iliopoulos parameters and scalar fields in vector multiplets. In order to determine which resolutions are possible in a complicated singular tangle we need to solve the D- and F-term equations of the relevant 3d gauge theory. The potential V of the theory is a sum of D-term and F-term contributions,

V = V_D + V_F.    (4.2)

In a supersymmetric vacuum this potential must vanish. As both V_D and V_F are non-negative, both must vanish separately. Let us first consider the F-term potential, which reads

V_F = Σ_a |∂W/∂φ_a|^2,    (4.3)

where W is the superpotential of the theory and φ_a is the scalar component of the chiral field X_a. In our geometric examples W arises from a sum over polygons, and hence each monomial in W has degree larger than one. It follows that if we remain at the origin of the Higgs branch, φ_a = 0, the F-term potential is trivially minimized.

Let us next turn to the D-term potential. In the following we drop the subscript "eff" from all Chern-Simons levels and assume that the IR limit has been taken. The D-term potential is given by equation (4.4), where the summation is over i, j = 1, ..., G for the gauge indices and λ = 1, ..., F for the Fayet-Iliopoulos parameters x_λ. The associated D-term equation then reads as in (4.5). On the Coulomb branch we have φ_a = 0, which simplifies this equation considerably. Defining the vector v = (σ_1, ..., σ_G, x_1, ..., x_F) of vector-multiplet scalars and FI-parameters, it is possible to write equation (4.5) in the compact form

(K v)_i = 0,   i = 1, ..., G.    (4.7)

Equation (4.7) is our desired result. It implies that, provided we are interested only in Coulomb branch deformations, we can determine which are allowed by searching for null-vectors of the effective level matrix K.
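Searching for null-vectors of K, as required by (4.7), is a small linear-algebra exercise. A self-contained sketch follows; the 2×2 level matrix at the end is hypothetical, chosen only for illustration.

```python
def null_vectors(K, tol=1e-9):
    """Basis of the null space of a square matrix via Gauss-Jordan elimination.
    In the context of eq. (4.7), null vectors of the effective level matrix
    label the allowed Coulomb-branch deformations."""
    n = len(K)
    A = [row[:] for row in K]          # work on a copy
    pivots, free = [], []
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, n) if abs(A[i][c]) > tol), None)
        if piv is None:
            free.append(c)
            continue
        A[r], A[piv] = A[piv], A[r]    # move pivot row up
        A[r] = [a / A[r][c] for a in A[r]]
        for i in range(n):
            if i != r and abs(A[i][c]) > tol:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    basis = []
    for f in free:                     # one null vector per free column
        v = [0.0] * n
        v[f] = 1.0
        for row, c in enumerate(pivots):
            v[c] = -A[row][f]
        basis.append(v)
    return basis

K = [[1.0, -1.0], [-1.0, 1.0]]         # hypothetical effective level matrix
vs = null_vectors(K)                   # one-dimensional null space, v = (1, 1)
```

Here the single null vector signals a one-parameter family of allowed Coulomb-branch deformations; a level matrix with trivial null space would forbid them entirely.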
Resolution of move descending from rule 1

Here, we examine how a particular resolution on the two sides of our first generalized Reidemeister move gives back the ordinary Reidemeister move of first kind. In order to proceed, we need to pick a particular Seifert surface which allows us to obtain the relevant resolution as motion onto the Coulomb branch. We will pick the second Seifert surface corresponding to the dilogarithm identity (4.8). The limit we take results in z − y = 0. (4.12) As this is consistent with the limit taken, we are indeed looking at a valid resolution satisfying the equations of motion of the gauge theory. The pictorial representation is shown in figure 22. We clearly see that the resolution reproduces the ordinary first Reidemeister move as claimed.

Resolution of moves descending from rule 2

Next, we look at resolutions of the second generalized Reidemeister move. This rule consists of two parts and we shall examine both of them. Again we have to pick a Seifert surface, which we choose to be the same as in section 4.1.2. The relevant quantum dilogarithm identity for the first subrule admits the limit (4.14). As the limit gives the right-hand side of the identity trivially, there is nothing to be checked. Therefore, this resolution does not involve any Reidemeister moves. Let us now move to the second subrule. The relevant quantum dilogarithm identity is considered in the limit x → ∞, in which the left-hand side becomes (4.16). The pictorial representation of this resolution is the second Reidemeister rule, as shown in figure 23.

Resolution of move descending from rule 3

Let us now come to our last and most involved case, namely the 3-2-move. For the relevant identity we will consider the limit c_i → 0 for i = 1, 2, 3. Setting w ≡ c_3 ensures that we have the required effective Chern-Simons levels. The D-term equation (4.7) thus gives (4.22) and hence confirms that we are on the Coulomb branch.
The pictorial representation of the limit discussed is the third Reidemeister move as shown in figure 24.

R-flow

We have seen how singular tangles capture the content of a 3d conformal field theory with four supercharges and that resolutions of such objects describe dynamics on the moduli space of the same theory. This is very similar to how Seiberg-Witten theory describes the Coulomb branch of 4d gauge theories with eight supercharges. In fact, the similarity goes even further. In the Seiberg-Witten case the multi-cover of a complex curve with punctures captures all the information about the BPS states of the four-dimensional gauge theory [2][3][4][5]. In our case a multi-cover (more specifically, a double cover) of R³ with specified boundary conditions captures the content of a three-dimensional theory. The connection of these two descriptions can be made precise by looking at a specific class of examples where the three-manifolds in question arise from flows of a Seiberg-Witten curve of a 4d theory. By this we mean that there exists a slicing of the three-manifold along a time direction such that each slice represents a SW-curve. It turns out that such a flow indeed exists and is known as R-flow [10,39]. This section is devoted to the definition and properties of R-flow. It is defined on the space of central charges of certain 4d N = 2 theories and describes a domain wall solution which has the interpretation of a 3d N = 2 theory [40][41][42].

Figure 25. R-flow for an example with three central charges.

Definition of the flow

R-flow is a motion in the space of central charges of four-dimensional theories with eight supercharges. In theories which are known to be complete [43], deformations in the space of central charges are locally equivalent to deformations of branch points of the Seiberg-Witten curve. We define the flow to be of the form ∂_t Z_i = iα Re(Z_i), where Z_i is the central charge of the i-th charge in the N = 2 4d theory.
This tells us that the central charges flow along straight lines, preserving their real parts, while their imaginary parts move at a rate which is proportional to their real parts. As a consequence of this flow equation, the phase ordering of central charges is preserved and hence the entire evolution takes place in a fixed BPS chamber. In summary, we can say that phase ordering is time ordering, and depict this in a graph shown in figure 25. This describes a three-dimensional theory as a domain-wall solution of the four-dimensional parent theory, where each 4d BPS state gives rise to a 3d BPS state whose mass is given by the real part of Z_i.

A_n flow and the KS-operator

In this paper we are in particular interested in flows of 4d gauge theories which arise from wrapping an M5-brane on a Riemann surface of the type A_n describing Argyres-Douglas CFTs [5,44]. These are Riemann surfaces which are double covers of the C-plane, with branch points a_i satisfying a_{n+1} = Σ_{i=1}^{n} a_i. The Seiberg-Witten differential is given by the square root of the quadratic differential, i.e. λ_SW = √φ. Having established the above definitions, it is straightforward to write down the central charges of the theory (5.4). Now, choosing a specific ordering of the phases of the central charges one arrives in a particular chamber of the moduli space where a specific number of BPS particles is stable. For the choice

arg Z_1 < arg Z_2 < · · · < arg Z_n, (5.5)

we obtain the so-called minimal chamber with exactly n stable particles. On the other hand, the maximal chamber is defined for the configuration arg Z_n < arg Z_{n−1} < · · · < arg Z_1. Here the number of stable BPS particles is n(n + 1)/2 [45]. There will also be intermediate chambers with fewer particles, and we shall refer to the number of states in a given chamber by N. Note that for each of these states there is a corresponding central charge which in general is a linear combination of those given in (5.4).
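The phase-ordering property quoted above can be checked numerically. The sketch below uses the parametrization ∂_t Z = iα Re Z with α = 1 (our normalization) and verifies that arg Z_1 < arg Z_2 persists along the flow for a sample pair of charges:

```python
import math

def r_flow(Z0, t):
    """R-flow with alpha = 1: real parts are frozen, imaginary parts
    grow at a rate proportional to Re Z."""
    return complex(Z0.real, Z0.imag + t * Z0.real)

# Two central charges with positive real parts; the phase is set by Im/Re,
# and the common shift Im/Re -> Im/Re + t cannot change the ordering.
Z1, Z2 = complex(1.0, 0.2), complex(0.5, 0.6)
for t in (0.0, 1.0, 5.0, 50.0):
    z1, z2 = r_flow(Z1, t), r_flow(Z2, t)
    a1 = math.atan2(z1.imag, z1.real)
    a2 = math.atan2(z2.imag, z2.real)
    assert a1 < a2   # arg Z1 < arg Z2 at every time: same BPS chamber
print("phase ordering preserved along the flow")
```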
We next assign to each central charge ordering a Kontsevich-Soibelman operator of the form K(q) = Π_{i=1}^{N} E_+(γ̂_i) [46][47][48], where E_+ is the non-compact quantum dilogarithm while the γ̂_i label the stable BPS states and can be interpreted as phase space variables of the quantum Hilbert space, which differ by actions of Sp(n, Z) if n is even and Sp(n−1, Z) if n is odd. From the point of view of the A_n curve the γ̂_i represent cycles determined by two branch points a_k and a_l. In particular, from the point of view of the quantum mechanics description of section 2.1.1, they are linear combinations of x̂_i and p̂_i and are mapped to each other by actions of the generators.

Figure 26. For each KS-operator there is an associated singular braid B_K.

We can assign to each KS-operator a quantum mechanical matrix element of the form Z_K = ⟨x|K(q)|y⟩, which has an interpretation as the partition function of a 3d theory, as discussed in section 2.1.1. These partition functions enjoy a Sp(n, Z) × Sp(n, Z) action which has the interpretation of the braid group action on the two ends of a braid with n+1 strands. In our case we can thus assign a singular braid B_K to the matrix element Z_K. This is depicted schematically in figure 26. As also indicated there, the braid naturally defines a time direction which we can understand as follows. Each line of the braid describes the flow of a branch point of the A_n-curve along the time direction, and at the singularities these branch points come close to each other and actually touch, thereby losing their individual identities. Let us zoom into the braid B_K to see how the strands approach each other for an isolated singularity.
To this end, we rewrite the partition function as a gluing of three braids according to the formalism developed in section 2.1.1,

Z_K = ∫ dx′ dy′ ⟨x|···|y′⟩ ⟨y′|E_+(γ̂_kl)|x′⟩ ⟨x′|···|y⟩, (5.10)

where γ̂_kl represents the contribution of the 4d BPS state whose central charge is given by the period of λ_SW over the corresponding cycle. Zooming into the braid we then have the local representation for an isolated singularity shown in table 1. Resolving the singularity means turning the points at which the branch points touch into near misses. As we have seen, for each singularity there are exactly three ways to do this. R-flow, as a flow of branch points of the Seiberg-Witten curve, is equivalent to choosing the resolution of figure 10 (b) for all singularities. Said differently, the singular braid B_K is obtained from the flow defined by equation (5.1) in the limit in which all near misses are replaced by singularities.

Table 1. Braid realization of a local singularity. The relevant branch points come close to each other until they collide in the singularity and lose their individual identities. After that they depart again until they reach their original positions in the braid.

Let us now come to the justification of this picture. The initial condition of R-flow is determined by the chamber in which the flow starts. Furthermore, as the flow continues one stays in the initial chamber due to the phase-preserving property of the flow. As central charges cross the real axis something special happens. Recall that a 4d BPS hypermultiplet has an interpretation as a geodesic on the complex plane between branch points of the Riemann surface [49,50]. These geodesics obey equation (5.12), where θ_m, m = 1, ..., N, is the phase of the m-th BPS state, i.e. θ_m = arg Z_m. (5.13) There are two remarks in order here. First, R-flow describes a motion on the Coulomb branch (including mass parameters) of the four-dimensional gauge theory.
On the other hand, the flow equation (5.12) is a flow on the C-plane at a fixed point in the moduli space. The Seiberg-Witten curve, being a double-branched cover of the C-plane, is not subject to change under the flow (5.12). Therefore, in order to relate the two motions, we have to choose a fixed angle θ_m corresponding to a line in the complex plane of central charges. Secondly, the geometry of R-flow predestines exactly such a line, namely the real axis, which defines a mirror axis for the flow. Thus we see that each time a central charge crosses the real axis there is a geodesic solution with minimal length. Thus at such points the pair of branch points corresponding to the BPS bound state whose central charge crosses the real axis are closest.

Examples

In this section we present some examples of R-flow. We start with the simplest case and proceed to increasing complexity. Already in the very first example, the A_1 flow, we will find that R-flow gives insight into the behavior of branch lines near local singularities. As a first example we will consider the simplest case of R-flow. This is the theory corresponding to the curve

y² = x² + ε, (5.14)

with a single central charge, denoted by Z_1. We will find that this theory has significant importance for the resolution of arbitrary singular tangles, as it predicts the possible local resolutions of an isolated singularity by turning on different values of Fayet-Iliopoulos parameters. Let us describe how this comes about. First of all, note that we can parametrize ε = −(2/π)(−im + t), with m real and positive, so that (5.16) obeys the flow equation (5.1). The motion of the branch points of the curve is then given by the law x_±(t) = ±α √(−ε(t)), (5.17) where α is a proportionality constant. We can now view this motion from two perspectives. The first is as a motion on the C-plane which forms the base of the double cover.
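For this A_1 example the near-miss structure can be made quantitative: with ε(t) = −(2/π)(−im + t), the separation of the two branch points is minimized at t = 0 and scales like √m, collapsing to a genuine singularity only at m = 0. A quick numerical sketch (taking α = 1 in (5.17), our choice):

```python
import cmath, math

def separation(m, t):
    # Branch points of y^2 = x^2 + eps sit at x = ±sqrt(-eps); here
    # eps(t) = -(2/pi)(-i m + t), so -eps = (2/pi)(t - i m).
    x = cmath.sqrt((2.0 / math.pi) * complex(t, -m))
    return abs(2 * x)

m = 0.5
ts = [(i - 100) / 100 for i in range(201)]           # t in [-1, 1]
d_min = min(separation(m, t) for t in ts)
print(round(d_min, 6))                               # minimum at t = 0 ...
print(round(2 * math.sqrt(2 * m / math.pi), 6))      # ... equals 2*sqrt(2m/pi)
print(separation(0.0, 0.0))                          # m = 0: strands collide, 0.0
```

The mass parameter m thus controls how close the two branch lines come: a near miss for m > 0, a singular touching point for m = 0.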
The second perspective is obtained by looking at the motion of the two branch points as giving rise to branch lines in C × R, where R is the time direction parametrized by t. As the square-root behaviour of (5.17) is fairly simple, we can depict the two perspectives easily, as shown below in figure 27. We see that this exactly mirrors two of the three possible resolutions described in section 3.1, namely resolutions (b) and (c) of figure 10. Note that resolution (a) cannot be obtained in this formalism, as it breaks time-flow or, equivalently, keeps the mass parameter m at zero but deforms the theory onto the Higgs branch. We now turn to our next example, the A_2 curve. It is, apart from the A_1 case, the most important flow example as it provides insight into three-dimensional mirror symmetry in terms of flows of four-dimensional theories. In order to illustrate this we consider the two central charge orderings of this theory, which provide two BPS chambers with different particle content. More precisely, we have a 2-particle chamber:

arg Z_1 < arg Z_2 < 0, (5.18)

and a three-particle chamber

arg Z_2 < arg Z_1 < 0, (5.19)

where the third state is the one with charge Z_1 + Z_2. Looking at the Kontsevich-Soibelman operator we see that in the first case it is given by (5.20), while in the second case one has (5.21). The crucial point here is that these two operators are actually equal if we impose the commutator

[x̂, p̂] = i/(2π), (5.22)

as was first proven in [23]. This is the underlying equality leading to the 3-2-move discussed in section 4.1.3. Therefore, the 3-2-move can actually be thought of as arising from R-flow of the A_2 curve. However, note that the 3-2 move is obtained by looking at matrix elements ⟨x|K|p⟩, that is position/momentum matrix elements, whereas R-flow is equivalent to matrix elements of the form ⟨x|K|y⟩, namely position/position matrix elements.
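The operator equality behind the 3-2-move has a well-known semiclassical shadow: the five-term (pentagon) relation of the Rogers dilogarithm. As an independent sanity check, the sketch below (helper names ours) verifies the five-term relation numerically from the defining series of Li₂:

```python
import math

def Li2(x, terms=400):
    """Dilogarithm via its defining series, valid for |x| < 1."""
    return sum(x**k / k**2 for k in range(1, terms + 1))

def L(x):
    """Rogers dilogarithm on (0, 1)."""
    return Li2(x) + 0.5 * math.log(x) * math.log(1.0 - x)

# Five-term identity, the classical limit of the quantum dilogarithm
# pentagon identity underlying the equality of the two KS-operators:
x, y = 0.3, 0.5
lhs = L(x) + L(y)
rhs = L(x * y) + L(x * (1 - y) / (1 - x * y)) + L(y * (1 - x) / (1 - x * y))
print(abs(lhs - rhs) < 1e-9)   # True
```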
Furthermore, there are many braid realizations of these matrix elements, differing by the various other dualities discussed in section 4.1. In this section we will look at representations which are obtained from the prescription described in table 1. That is, we will now look at the above KS-operators and their braid realizations from the perspective of branch-point flow. Let us start with the minimal particle chamber. Using the appropriate identity we obtain (5.24). This way we have rewritten the partition function in terms of the σ_i which describe actions of the braid group. The braid representation of the right-hand side of the above identity is shown in figure 29 (footnote 17). The single integration variable in (5.24) corresponds to a U(1) gauge group, manifest as a compact white region in figure 29. Furthermore, we have used that σ_1 and σ_1^{−1} commute with E_+(x̂) and therefore cancel each other. Note that the theory described by the braid of figure 29 is related to U(1) SQED by changing the branch sheet as discussed in section 4.1.3 (footnote 18). We will not discuss this here and rather turn our attention to a particular resolution of the singular braid. Applying resolution rule (b) of figure 10 to all singularities we obtain figure 31. It is also possible to explicitly solve equation (5.1) and compute the flow of branch points in the minimal chamber. The result is shown in the second part of figure 31. We see that the resolved braid and the flow of branch points are topologically equivalent and just differ by a change of projection plane. That is, the location of particles is represented in both pictures by cusps at which the same strands come closest. Next, we turn to the maximal chamber. Here, we need a further identity, which allows us to rewrite the partition function accordingly. We depict the corresponding braid representation in figure 30.

Footnote 17: We have suppressed the R-charges of the singularities as these are not relevant for the present discussion.
Footnote 18: We also need to apply an S-transformation to the boundary condition in order to switch from position boundary to momentum boundary.

The KS-operator corresponding to the minimal particle chamber is given by E_+(x̂ + c) E_+(p̂) E_+(x̂). A partition function can be formed from this operator by considering the wave-function

Z_K = ⟨x|E_+(x̂ + c) E_+(p̂) E_+(x̂)|y⟩. (5.29)

Footnote 19: We have chosen here a different commutator between x̂ and p̂ compared to the A_2 case. This is merely a convention; we could also have worked with the former commutator.

This partition function now represents a singular braid. In order to extract the braid, we have to rewrite it as a gluing of simple partition functions containing no gauge groups. This is done by using the identity

E_+(p̂) = e^{−iπx̂²} e^{−iπp̂²} e^{−iπx̂²} E_+(x̂) e^{iπx̂²} e^{iπp̂²} e^{iπx̂²}, (5.30)

which allows us to rewrite Z_K in the form

Z_K = ∫ dx′ ⟨x|E_+(x̂ + c) e^{−iπx̂²} e^{−iπp̂²} e^{−iπx̂²}|x′⟩ ⟨x′|E_+(x̂) e^{iπx̂²} e^{iπp̂²} e^{iπx̂²} E_+(ŷ)|y⟩. (5.31)

This partition function can be represented by the singularized braid shown in figure 33. We see again a U(1) gauge group corresponding to the one compact white region. Furthermore, a chiral field is charged under this gauge group while the two other chiral fields are gauge neutral. Applying duality rules we can transform this picture into different ones with more or fewer gauge groups. Applying resolution rule (b) of figure 10 to all singularities we obtain figure 34. This resolved braid can again be reproduced by letting the central charges of the A_3 curve R-flow as depicted in figure 25. One can carry out the flow procedure by inverting the central charges as functions of the branch points locally along the flow. The resulting flow of branch points for the minimal chamber is depicted in figure 35. Comparing figure 34 with figure 35 we find that the two are topologically identical, in that the strands which come closest at the location of particles are the same in both pictures, i.e.
first γ_3 contracts, then γ_2, and at last γ_1. They merely differ by a change of the projection plane. We find that this behavior generalizes. That is, associated to the KS-operator corresponding to the A_n theory in a particular chamber, there exists a resolution which arises as R-flow of the branch points. The prescription for finding the resolution corresponding to R-flow is as follows. Start with the partition function

Z_{A_n} = ⟨x|K(q)|x⟩. (5.32)

Associate to this matrix element the particular braid representation which contains all particles as black dots within the Seifert surface, where by "within" we mean that the Seifert surface goes horizontally through the dot as depicted in table 1. Apply resolution rule (b) of figure 10. Note that it is not possible to obtain other resolutions for the singular braids, such as the one of figure 33, from R-flow. The reason is that a local flip of the corresponding central charge, as described in the case of A_1, changes the KS-operator and will thus lead to a completely different picture.

Applications

In this section we study some further applications of the developed rules. As a first example we examine a more complicated geometry arising from the R-flow prescription. The particular geometry contains a closed non-planar polygon, i.e. a superpotential, which is only partly shaded and thus gives rise to a monopole operator. We will establish that this monopole operator appears in the superpotential. As a second example for the application of the methods developed in this paper we will look at U(1) SQED with N_f > 1. This example does not arise from R-flow. However, we will find that the rules presented in section 4.1 are powerful enough to establish mirror symmetry even for these more complicated models geometrically.

Superpotentials from R-flow

In this section we look at an example of a 3d gauge theory which arises from R-flow of an intermediate chamber of the A_4 theory.
This example was already analyzed to some extent in [10]. The relevant KS-operator is E_+(x̂_1) E_+(x̂_2) E_+(p̂_1 + x̂_2) E_+(x̂_2) E_+(p̂_2), and the 3d partition function associated to the KS-operator is

Z_K = ⟨x|E_+(x̂_1) E_+(x̂_2) E_+(p̂_1 + x̂_2) E_+(x̂_2) E_+(p̂_2)|x⟩. (6.3)

Its representation in terms of a singular braid is depicted in figure 36. We can clearly see four U(1) gauge groups, represented by the four white regions in the braid. Applying the Fourier-transform identity twice and the T-transform rule of section 4.1, we obtain the simpler braid depicted in figure 37. This braid represents a dual description of the same quantum field theory. In this description, there is a U(1) gauge group under which two chiral multiplets, denoted by X_3 and X_2, are charged oppositely. Furthermore, one can clearly see a compact polygonal region bounded by three chiral singularities. This corresponds to a superpotential in the effective 3d gauge theory to which all three chiral multiplets contribute. This theory contains a monopole operator which also participates in the superpotential term. One way to see this is through the white region contained within the bounded polygonal region. One can check, using the formula (3.19) for the charge of the monopole operator discussed in section 3.3, that the monopole operator M is invariant under the U(1) gauge group. This immediately tells us that we can write down a superpotential of the form

W = M X_2 X_3 X_4, (6.4)

which is gauge invariant. Furthermore, this superpotential breaks exactly one U(1) flavor symmetry, which is consistent as there are five chiral fields but only four non-compact white regions in the geometry.

6.2 U(1) SQED with N_f > 1

Here, we will demonstrate that our rules for the singular tangles provide a convenient geometric way of encoding general mirror symmetries of 3d N = 2 gauge theories. The example we will use to demonstrate this is the generalization of U(1) SQED/XYZ mirror symmetry.
Start with a 3d N = 2 gauge theory with U(1) gauge group and N_f > 1 charged hypermultiplets. This theory has an RG fixed point with a mirror dual description as a (U(1)^{N_f})/U(1) gauge theory with N_f charged hypermultiplets (consisting of chiral multiplets q_i and q̃_i) and N_f neutral chiral multiplets S_i, together with a superpotential W [16]. The charge assignments are as follows. The aim will now be to translate both theories into geometric tangles and transform them into each other by using ordinary as well as singularized Reidemeister moves, thereby proving that they are mirror pairs.

6.2.1 U(1) SQED with N_f = 2

We will start with the geometry corresponding to U(1) SQED and specialize to the case N_f = 2. The relevant diagram describing this gauge theory is depicted in figure 38. The interior white region represents the U(1) gauge group and each pair of singularities corresponds to a hypermultiplet whose constituents have opposite charges under the U(1). Let us next apply the second Reidemeister move to this diagram. The result is depicted in figure 39. Here we see that there are two extra U(1)'s and that two singularities are charged under the first one, whereas the second pair is charged under the second. We are now in a position to apply the generalized Reidemeister move known as the 3-2 move. This move can be applied twice, once to the upper white triangle and once to the lower white triangle, resulting in figure 40. This diagram simply shows a U(1) gauge theory with two chiral fields charged positively under it and two fields charged negatively. Moreover, we observe two superpotential terms, each combining a neutral field with two oppositely charged fields. These data exactly match those of the mirror dual, which confirms the duality.

U(1) SQED with N_f = 3

As a second and last example we will consider the more complicated case of U(1) SQED with N_f = 3.
The relevant diagram is shown in figure 41. We can see six chiral multiplets charged under a U(1) gauge group, with the charges of the particles adding up to zero pairwise. The overcross and undercross singularities are arranged such that the net self-Chern-Simons level of the U(1) is zero. We can add a T-transform to turn one type of singularity into another, as shown in figure 42. Next, we do a second Reidemeister move to create a white region. Performing the 3-2 move we end up with a superpotential and an extra U(1), shown in figure 44. We now perform the Reidemeister move a second time to create a third white region with two charged fields. Application of the 3-2 move for a second time leads to the second superpotential term. As should by now be obvious, we again perform the Reidemeister move, with the result shown in figure 47. The last step is again a 3-2 move, leading to the final result depicted in figure 48. As one can clearly see, the above picture is the diagram describing the mirror dual of our original theory. We have three superpotentials, each containing one neutral field, and we have three U(1)'s under each of which two chiral fields are charged. Note that the white region in the interior, under which no particle is charged, ensures that the sum of all U(1)'s acts trivially, in accordance with the (U(1)^{N_f})/U(1) gauge group of the mirror theory.

Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
EGN: a wizard for construction of gene and genome similarity networks

Background: Increasingly, similarity networks are being used for evolutionary analyses of molecular datasets. These networks are very useful, in particular for the analysis of gene sharing, lateral gene transfer and for the detection of distant homologs. Currently, such analyses require some computer programming skills due to the limited availability of user-friendly freely distributed software. Consequently, although appealing, the construction and analyses of these networks remain less familiar to biologists than do phylogenetic approaches.

Results: In order to ease the use of similarity networks in the community of evolutionary biologists, we introduce a software program, EGN, that runs under Linux or MacOSX. EGN automates the reconstruction of gene and genome networks from nucleic and proteic sequences. EGN also implements statistics describing genetic diversity in these samples, for various user-defined thresholds of similarities. In the interest of studying the complexity of evolutionary processes affecting microbial evolution, we applied EGN to a dataset of 571,044 proteic sequences from the three domains of life and from mobile elements. We observed that, in Borrelia, plasmids play a different role than in most other eubacteria. Rather than being genetic couriers involved in lateral gene transfer, Borrelia's plasmids and their genes act as private genetic goods, that contribute to the creation of genetic diversity within their parasitic hosts.

Conclusion: EGN can be used for constructing, analyzing, and mining molecular datasets in evolutionary studies. The program can help increase our knowledge of the processes through which genes from distinct sources and/or from multiple genomes co-evolve in lineages of cellular organisms.

Background

Genomic and metagenomic projects provide an increasing amount of molecular data with a considerable genetic diversity.
A portion of these nucleic and proteic data is amenable to standard computationally expensive phylogenetic analyses, through the use of multiple alignments and the construction of individual or concatenated gene phylogenies [1][2][3][4]. However, evolutionary analyses of many of these sequences can be carried out using other, less time consuming and more inclusive, approaches [5][6][7][8]. Typically, phylogenetic reconstruction is suited for analyzing subsets of homologous genes that can be aligned with confidence. Distant homologs are thus generally absent from these analyses [2]. Moreover, even though classic phylogenetic analyses only require that sequences to be compared are alignable, they often focus on the genealogical relationships between entities from the same level of biological organization (e.g. viruses, plasmids or cellular organisms) resulting from the process of vertical descent. However, gene trees affected by processes of introgressive descent such as lateral gene transfer (LGT) [9][10][11][12] pose significant challenges to the reconstruction of a universal tree [13][14][15] or phylogenetic network of life [16][17][18][19]. In particular, it is difficult to study the incongruence between the histories of gene families with uneven distribution among microbial genomes. In addition, it is difficult to represent the transfer of DNA between donors and hosts, while including the vectors responsible for these genetic exchanges on a single representation [8,9]. Viruses (and other mobile genetic elements) are indeed most often not considered to be related to cellular beings, and their evolution as well as that of their genes is generally not described along the organismal species tree [20][21][22]. Thus some evolutionary information contained in genomic and metagenomic data is not readily exploited in standard phylogenetic analyses. 
Consistently, a new suite of methods is becoming increasingly popular, in order to handle more of the complexity of such data. Network-based approaches, that display similarity in a wealth of molecular sequences, have started to offer a valuable complement to improve our evolutionary knowledge on the processes responsible for LGT, as well as on the sources of genetic diversity. They provide useful tools to analyze mosaic sequences [23], genomes harbouring sequences from multiple origins [24][25][26][27] and the migration of DNA across metagenomes [28]. Network-based approaches also provide an additional framework in which the genetic diversity of sequences, genomes, or metagenomes can be compared and quantified using graph estimators, even for highly divergent sequences [29]. In general terms, we describe an evolutionary (or similarity) network (to distinguish it from phylogenetic networks) as any graph connecting nodes representing individual sequences, individual genomes or metagenomes, by edges, when these objects present some similarity according to various combinations of operational criteria (e.g. a significant level of similarity between two sequences, as indicated by a BLAST score and/or percentage of similarity; the presence of shared gene families between two genomes; the presence of identical sequences between metagenomes). For the moment, due to the lack of user-friendly freely distributed dedicated software, the construction and analysis of sequence similarity networks requires a certain amount of computer programming skills, and remain less accessible (and therefore less familiar) to biologists than standard phylogenetic approaches. Here, we introduce a simple but powerful software program, EGN (for Evolutionnary gene and genome network), for the reconstruction of similarity networks from large molecular datasets that may expand the toolkit of evolutionary biologists. 
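The operational definition above (nodes for sequences, an edge whenever a similarity criterion is met) translates directly into code. A minimal illustration in Python, with a toy identity measure standing in for BLAST/BLAT hits (names are ours, not EGN's):

```python
from itertools import combinations

def identity(a, b):
    """Toy pairwise identity for equal-length sequences; a real pipeline
    would derive this from BLAST/BLAT hit statistics instead."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def similarity_network(seqs, threshold=0.7):
    """Nodes are sequence ids; an edge joins two ids whenever their
    similarity passes the operational criterion (here: % identity)."""
    edges = [(u, v) for (u, a), (v, b) in combinations(seqs.items(), 2)
             if identity(a, b) >= threshold]
    return set(seqs), edges

seqs = {"s1": "ATGCATGC", "s2": "ATGCATGA", "s3": "TTTTCCCC"}
nodes, edges = similarity_network(seqs)
print(edges)   # [('s1', 's2')]: only the near-identical pair is connected
```

Varying the threshold changes the topology of the resulting graph, which is exactly what makes such networks useful for comparing highly divergent sequences at several stringencies.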
EGN is programmed in Perl v5.10.1; it is fast, portable, and runs on Linux and OSX systems. EGN automates the construction of gene and genome networks from nucleic and proteic sequences, coupled to simple statistics describing genetic diversity in these samples, for various user-defined thresholds of similarities. We illustrate some of the options available in EGN, and the novel type of data it exploits. Then, as a proof of concept, we show how EGN can be used to study the complexity of evolutionary processes affecting microbial evolution. We tested whether plasmids were always used as genetic couriers, moving DNA from one lineage to another. Our null hypothesis was therefore that plasmids should always connect to more than one lineage in a gene-sharing network in our dataset of 571,044 proteic sequences, sampled in genomes from the three domains of life and mobile genetic elements. Our network approaches were able to reject this null hypothesis by identifying a set of plasmids in Borrelia that is not being used as such couriers. In this case, plasmids appear to have a different function, that of an "evolutionary sandbox", which contributes to the creation of genetic diversity within their bacterial host lineage.

Implementation

EGN is implemented in Perl v5.10.1. The script and a user guide are freely available under the GNU GPL license as Additional file 1 or at http://evol-net.fr. Network construction steps are presented in a simple contextual menu. EGN handles massive datasets of nucleic and/or proteic sequences from FASTA files in DEFLINE format. It automates the identification of homologous sequences using user-defined homology search software (BLAST [30] or BLAT [31]).
In short, the identification of similar sequences relies on parameters defining relevant hits (based on e-value, identity thresholds in the aligned regions, and minimal hit length), and on parameters tagging the hit quality (such as best-reciprocal hit, and the minimal length coverage represented by the hit over each of the compared sequences). In EGN, these parameters can be defined by the user. After a step of all-against-all comparison, clusters of sequences with significant similarities are identified using the exhaustive simple-link (single-linkage) algorithm [32,33], so that any sequence in a cluster presents at least one significant similarity with another sequence of the cluster, and no similarity with any sequence outside the cluster. Graph-wise, these clusters are called connected components. EGN provides several statistics for each network as an output file: the average percentage of sequence identity, the size (in number of sequences) and number of connected components, and a global estimate of the clustering within each component, called graph density, computed as G = 2E / (N(N − 1)), where E is the number of edges and N the number of nodes in the component. Graph density lies between 0 and 1 (i.e. G reaches 1 when the nodes in the component are maximally connected to one another, forming a clique). The distribution of these connected components across species/samples is also compiled in a tabulated text outfile. Moreover, EGN produces files that can be imported into the popular Cytoscape [34] and Gephi [35] network visualization software programs, in which gene and genome networks can be further analyzed. EGN also generates FASTA files of the sequences in each connected component. These files can be used to generate alignments and standard analyses of selection or phylogenetic analyses. For details, we refer to the User Guide.

EGN analytical workflow

EGN is a script implemented in Perl v5.10.1 for the Linux and MacOSX platforms for generating evolutionary gene and genome networks from molecular data (protein and/or nucleic sequences).
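The single-link clustering and graph-density computation described above can be sketched in a few lines of pure Python. This is an illustrative toy, not EGN's actual Perl implementation; the sequence ids and edge list are made up.

```python
from collections import defaultdict

def connected_components(edges):
    """Single-link clustering: group sequences into connected components.

    `edges` is an iterable of (seq_a, seq_b) pairs that passed the
    similarity thresholds. Returns a list of sets of sequence ids.
    """
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:  # iterative depth-first traversal
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

def graph_density(component, edges):
    """G = 2E / (N(N-1)): 1.0 for a clique, near 0 for sparse components."""
    n = len(component)
    if n < 2:
        return 0.0
    e = sum(1 for a, b in edges if a in component and b in component)
    return 2 * e / (n * (n - 1))

# Toy edge list: s1-s2-s3 form a triangle (a clique), s4-s5 a pair.
edges = [("s1", "s2"), ("s2", "s3"), ("s1", "s3"), ("s4", "s5")]
comps = connected_components(edges)
```

Here both components have G = 1.0; a divergent homolog attached to the triangle by a single edge would lower its density below 1.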
A simple menu allows users to easily manage the step-by-step procedure and set relevant parameters for their analyses. However, BLASTall (v ≥ 2.2.26) [30] or BLAT [31] must be installed on the computer where EGN is executed, and their directory locations properly specified in the system path. Once EGN is installed, it takes as input one or many files of sequences (in FASTA format) located in a working directory chosen by the user (e.g. /myEGNanalysis/). The extension of these files must be either .fna, for DNA and RNA sequences, or .faa, for protein sequences. When the dataset contains a single sequence type, the user can choose between BLAST and BLAT homology searches to compare these sequences. If the dataset is composed of both nucleic and protein sequences, BLAST will be chosen and EGN will automatically run BLASTN for nucleic sequence comparisons and BLASTP for protein sequence comparisons, while comparisons between nucleic and protein sequences will be performed by BLASTX and TBLASTN. To this end, EGN must simply be invoked using the command line 'perl egn.pl'. The software then proposes several analyses, organized in a stepwise fashion (Figure 1). First, EGN parses the FASTA infiles present in the working directory to i) check that their format is correct (i.e. all sequences have a unique identifier, etc.), ii) extract useful information about the sequences that will figure in the nodes of the networks under reconstruction (e.g. the sequence/sample/organism names), and iii) assign a local EGN identifier to each sequence in order to speed up the next calculation steps. Once this step of creating properly formatted input files (option 1 in the EGN Main Menu) is successfully achieved, the user can perform two kinds of similarity searches between the sequences in these files (option 2 in the EGN Main Menu), either by selecting BLASTall [30], or BLAT [31], a faster software program that speeds up the analysis of very large sequence infiles but may be less accurate.
The user can edit the egn.config file to modify the search parameters of each of these programs, or use the wizard implemented in EGN. On multi-core computers, the speed of the BLAST search can be enhanced by increasing the number of processors (parameter -a, set to 2 by default). Likewise, we implemented the parallelization of multiple BLAT processes in EGN (default value = 2 in egn.config). In order to reconstruct a comprehensive similarity network, all sequences can be compared against all. To this end, a third step of EGN (option 3 in the EGN Main Menu) parses the results of the similarity search according to a set of properties that any pair of sequences showing similarity must satisfy to be included in a gene or genome network. This parsing step can be optimized, since the user can select between two algorithms depending on the amount of memory available on his/her computer. EGN offers a 'quicker' parsing step (option 1 in the Prepare Edge File Menu), requiring a maximal amount of memory, and a 'slower' parsing step (option 2 in the Prepare Edge File Menu), needing less memory but more free disc space. Under both parsing options, the same set of conditions is available to select relevant sequences and similarity relationships in the Edges File Creation Menu. Option 1 of this menu allows the user to set a maximal e-value threshold to discard sequence pairs that show too little similarity to be further considered. This decision is facilitated by a simple text interface (the default value is also editable in egn.config). Moreover, important additional levels of stringency concerning features of the hit (i.e. the matching sequence segment between two similar sequences) can likewise be specified. Option 2 can be used to impose a minimal percentage of identity over the hit (by default set to 20%).
In addition, option 3 can be applied to filter out edges between sequences when the hit length represents less than a minimal percentage of the shortest sequence length (by default 20%). Option 4 performs a similar operation, eliminating edges in which the hit is shorter than a minimal absolute length (set by default to 75 nucleotides and 25 amino acids, respectively). Finally, two other properties can also be used to label pairs of similar sequences satisfying the above conditions, and to build gene and genome networks. Option 5 evaluates the strength of the similarity between pairs of sequences. In order to determine whether two sequences a and b are best-reciprocal hits, EGN compares the e-value of sequence b in the BLAST (or BLAT) search where sequence a is used as a seed with the e-value of sequence a in the BLAST (or BLAT) search where sequence b is used as a seed. Sequences a and b may not be each other's first best hits in these searches, but when their e-values are no more than a certain user-defined percentage away from the top-scoring hit in these searches, then sequences a and b are considered best-reciprocal. By default, this percentage is set to 5%. This distinction matters for reconstructing networks based on "best-reciprocal" similarity edges vs networks based on any similarity edges, be they "best-reciprocal" or not. Option 6 provides a second qualifier for pairs of similar sequences. It allows the user to filter on the extent of the pairwise similarity, i.e. whether a hit spans a large or a small portion of each of the similar sequences. By default, pairs of sequences for which the hit covers ≥ 90% of each sequence length are considered to be connected by "homology" edges. Such similarity is not limited to a fragment of either sequence, which can happen when significant similarity occurs only for a short region of the sequence, e.g. partial similarity caused by the sharing of a domain.
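The edge filters (options 1-4) and the best-reciprocal tagging (option 5) can be sketched as follows. This is a hedged illustration, not EGN's internals: the hit field names and the use of similarity scores (rather than raw e-values, whose "percentage away from the top hit" is implementation-defined) are assumptions made for the example.

```python
def keep_edge(hit, max_evalue=1e-5, min_identity=20.0,
              min_cov_pct=20.0, min_hit_len=25):
    """Apply EGN-style edge filters (options 1-4) to one BLAST/BLAT hit.

    `hit` is a dict with keys: evalue, pct_identity, hit_len, len_a,
    len_b (sequence lengths). Defaults mirror those in the text
    (25 residues minimum for protein hits).
    """
    shortest = min(hit["len_a"], hit["len_b"])
    return (hit["evalue"] <= max_evalue
            and hit["pct_identity"] >= min_identity
            and hit["hit_len"] >= min_hit_len
            and 100.0 * hit["hit_len"] / shortest >= min_cov_pct)

def best_reciprocal(hits_by_query, a, b, pct=5.0):
    """Tag (a, b) as best-reciprocal if each scores within `pct` percent
    of the top-scoring hit in the other's search (higher score = better).
    """
    def near_top(query, subject):
        scores = hits_by_query.get(query, {})
        if subject not in scores:
            return False
        top = max(scores.values())
        return scores[subject] >= top * (1 - pct / 100.0)
    return near_top(a, b) and near_top(b, a)

# 120 aligned residues cover 40% of the shorter (300 aa) sequence: kept.
hit = dict(evalue=1e-20, pct_identity=45.0, hit_len=120, len_a=300, len_b=400)
# a's top hit is c (score 100), but b (95) is within 5%; a is b's top hit.
hits = {"a": {"b": 95.0, "c": 100.0}, "b": {"a": 98.0, "c": 60.0}}
```

With these toy data, keep_edge(hit) is true and (a, b) is tagged best-reciprocal even though b is not a's single best hit, which is exactly the relaxation option 5 allows.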
After these criteria have been set and the parsing has been carried out, EGN can generate outfiles for two kinds of networks: gene and genome networks (option 4 of the EGN Main Menu). This network reconstruction step also gives the user the option of setting additional selection criteria concerning the edges that will be retained. By default, edges corresponding to hits with a maximal BLAST e-value of 1E-05, at least 20% identity, and longer than 20% of the smallest similar sequence will feature in gene networks, and be used to reconstruct clusters of similar sequences (see Methods). If two samples/genomes contain sequences belonging to the same cluster, EGN will produce an edge between these samples/genomes in the genome network. In this last construction step, EGN not only generates network outfiles in Cytoscape [34] and/or Gephi [35] format, it also optimizes the content of these outfiles. Typically, gene networks may contain hundreds of thousands of subgraphs, corresponding to clusters of similar sequences, or connected components in graph terms (see next section). Thus, to ease the visualization of these connected components in Cytoscape and Gephi, EGN distributes them in files called (i) cc_#x_to_#y.txt, organizing these subgraphs by decreasing number of edges, and (ii) att_cc_#x_to_#y.txt, in which all important attributes describing the nodes and edges (e.g. their sample/genome of origin, the weight of edges indicated by the homology searches, whether edges are "full or partial homology" edges, etc.) have been automatically summarized. These attribute files provide useful information for coloring the nodes and edges in the network visualization tools. Finally, EGN also provides some statistics about the sequences and the connected components comprising similar sequences. These text outfiles are created in subdirectories with explicit names describing the exact parameters retained to perform the network reconstruction (e.g.
GENENET_1e-05.50.50.0.0, for the reconstruction of a gene network at a 1E-05 e-value threshold, at least 50% hit homology, 50% homology on the smallest sequence, no best-reciprocal condition, and no minimal coverage condition). In particular, the gpcompo.txt outfile indicates, for each cluster of similar sequences, how many representative sequences it contains, and from which sample/genome these sequences originate. The gpstat.txt outfile provides further information about the genetic divergence of these sequences: how much they cluster with one another in the network, the mean and standard deviation of hit % identity between sequences in the cluster, the mean and standard deviation of % identity between the hit and the shortest sequence considered in each pairwise comparison, and the mean and standard deviation of the e-value between sequences of the cluster. As the user is guided along the various steps of the intuitive EGN menus, this wizard provides a tool with which users will be able to analyze their data within the framework of similarity networks.

EGN automates the analysis of a significant data type

EGN produces a useful network-based type of data for evolutionary analyses. This data type is different from the usual phylogenetic trees, for at least two reasons. First, while trees are always acyclic graphs, networks are generally cyclic graphs. Second, while phylogenetic trees usually aim at inferring the relationships between homologous sequences and their hypothetical ancestors, sequence similarity networks instead display significant resemblances between any sequences (in gene and protein networks) or any entities (in genome or sample networks), in a topologically less constrained, and in practice much more inclusive, framework. The basic data type used in phylogenetics is a tree (or a grouping on a tree); in a sequence similarity network, it is a connected component. In these latter networks, no explicit orthology relationship needs to be assumed.
It is important to establish the distinction between these two data types, because it would be a logical mistake to evaluate connected components by the standards of phylogenetic trees, i.e. as if they were trees, which they are not. Sequence similarity networks are founded on a different theoretical background from phylogenetic analyses, which implies that splits and edge lengths have different meanings than they do in a phylogenetic tree or network. In fact, connected components are better understood by reference to "family resemblances" (a concept brilliantly heralded by Ludwig Wittgenstein in his posthumously published book Philosophical Investigations of 1953 [36]). Just as human family members present various overlapping and criss-crossing resemblances, i.e. in their build, features, and eye colour, organized in such ways that it is eventually possible and useful to distinguish different families, connected components in sequence similarity networks group sets of sequences whose members show significant similarity according to a criterion (or a set of criteria), so that these sequences cannot be mistaken for other sequences presenting a different pattern of "family resemblance". For instance, sequences coding for the translation initiation factor SUI1 and for restriction-modification type I endonucleases fall into distinct connected components in gene networks [6]. In this regard, it is interesting to note that phylogenetic gene trees are a display of one very particular instance of 'family resemblance'. Such trees group sequences that are sufficiently similar to be aligned together, because they come from a single last common ancestor. However, sequences can also display significant similarities that do not meet the particular criteria retained in phylogenetics.
For instance, sequences resulting from fusion or recombination events will show bona fide similarities introduced by processes of introgressive descent [2]. Sequences evolving by vertical descent from a single ancestor can also become too divergent to be aligned with their homologs, and therefore to be included in a gene tree. Such distant similarities, and resemblances originating from processes of introgressive descent, can however be analyzed through the definition of connected components, as automated by EGN. For example, relaxing the parameters selected for the reconstruction of the gene networks, i.e. not requiring that the hit between sequences covers a high percentage of their length, or that the similar sequences show both a stringent e-value and a high % identity, allows divergent homologs to be included in connected components. Unlike conserved homologs, which will all be connected together (forming a pattern of maximal density known as a clique in the connected component), divergent homologs will only connect to some of the sequences within the component; i.e. divergent bacterial genes will only connect to some of their bacterial homologs, while less divergent bacterial genes will all be joined to one another. Consistently, the data type obtained by structuring molecular data into connected components of sequences in sequence similarity networks (or into connected components of genomes sharing similar sequences in genome networks) contributes, in a different way than phylogenetics, to extending the scope of evolutionary analyses. Evolutionary biologists can take advantage of this additional data type to explore and explain the various causes of these 'family resemblances'. Processes of introgressive descent (e.g. recombination, lateral gene transfer, gene or domain sharing, etc.) and vertical descent can be investigated simultaneously through these graphs. Phylogenetic relationships, however, will generally still require the reconstruction of a tree.
Furthermore, this novel data type also provides an original comparative framework, which must not be confused with the phylogenetic framework. More precisely, EGN networks make it possible to compare sequence similarities for sequences of interest in connected components, i.e. by quantifying the distances and topological properties of sequences from two genomes in the network. This comparison cannot be equated with the phylogenetic resolution required to identify where a particular sequence (or organism) should be placed in a gene (or organismal) tree, but it can be useful in other situations. Among the most recent examples, a comparative analysis of the behavior of sequences in gene networks was carried out by Bhattacharya et al. [29] to investigate the modular genomic structure of a novel marine cyanophage in a protist single-cell metagenome assembly. Sequences from these novel mosaic viruses presented a pattern of connection typical of that presented by sequences from mosaic cyanophages in the gene network. The use of a network proved particularly well suited, offering much more detail concerning the complex evolution of such mosaic objects than would be allowed by proposing a single branching point for the novel virus in a viral tree. One main interest of network studies is therefore that they can employ an additional, very inclusive, relevant (although non-phylogenetic) data structure for evolutionary analyses.

Application to real data

We used EGN to illustrate how its various options are useful for devising and testing evolutionary hypotheses, while taking into account a large amount of data structured according to this data type. We tested whether plasmids were always used as genetic couriers, moving DNA from one lineage to another, in a dataset of 571,044 protein sequences (see Implementation).
We first used EGN (e-value ≤ 1E-20, hit identity ≥ 30%) to reconstruct a genome network of 131 cellular organisms, 2,211 plasmids and 3,477 viral genomes, harboring either cellular chromosomes or genomes of mobile genetic elements at its nodes, connected when they shared sequences from the same similarity cluster, as in Halary et al. [25]. In the genome network, some plasmids displayed markedly distinct behaviors and patterns of connections, identifying two extreme sorts of plasmids. On the one hand, many plasmids had a broad range of connections with a diversity of distantly related genetic partners. These plasmids act as genetic couriers [37], contributing to exchanges of DNA material. On the other hand, some other plasmids were very isolated in the network, showing very limited, and sometimes even no, genetic partnerships beyond a limited gene sharing with the plasmids or the chromosomes of their host lineage. Plasmids of this second type typically use a closed DNA pool, and seem to rarely transit between different host cells and lineages, and even to rarely exchange genetic material with the chromosome of their hosts. Rather than being mobile vessels of genetic exchange, our network suggests that these non-promiscuous plasmids may fulfill a functional role of evolutionary significance distinct from that of the plasmids that are key players in lateral gene transfer. The best examples are offered by the plasmids of the bacterial genus Borrelia, which display a very low conductance (C = 0.015). Indeed, the corresponding nodes are extremely isolated in the genome network, and are linked to it only by edges with the nodes of the Borrelia chromosomes (Figure 1).
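A genome network of this kind can be derived mechanically from the gene clusters: two genomes are linked whenever they harbour sequences belonging to the same cluster. The sketch below is a toy with invented genome names, not the actual 5,819-node network analyzed here.

```python
from collections import defaultdict
from itertools import combinations

def genome_network(clusters, genome_of):
    """Connect two genomes whenever they harbour sequences from the same
    gene cluster; the edge weight counts the shared clusters.

    `clusters`: list of sets of sequence ids (connected components of the
    gene network); `genome_of`: maps sequence id -> genome/sample name.
    """
    weights = defaultdict(int)
    for cluster in clusters:
        genomes = {genome_of[s] for s in cluster}
        for g1, g2 in combinations(sorted(genomes), 2):
            weights[(g1, g2)] += 1
    return dict(weights)

# Toy data: a plasmid gene clusters with a chromosomal gene of its host,
# and that chromosome shares another cluster with an unrelated genome.
genome_of = {"p1": "plasmid_B31", "c1": "chrom_B31", "c2": "chrom_Ecoli"}
net = genome_network([{"p1", "c1"}, {"c1", "c2"}], genome_of)
```

In this toy network the plasmid node is connected only to its host chromosome, the pattern that characterizes the isolated Borrelia plasmids above.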
Borrelia's plasmids do not share a single gene family with any other plasmids outside these hosts (Figure 2a), and harbor only six genes (oligopeptide ABC transporter, vlp protein alpha and gamma subfamily, arginine-ornithine antiporter, putative lipoprotein, and type I restriction enzyme R protein) that are also found on Borrelia's main chromosome (Figure 2b), consistent with the literature [38]. In agreement with Tamminen et al. [39], our genome network shows that the flow of DNA material into and out of Borrelia plasmids is lower than for many other plasmids. This remarkable genetic isolation may be explained by biological considerations, prompted by the detection of this network structure. Borrelia is an obligate pathogen [40]. This lifestyle entails that these bacteria have fewer opportunities to meet a diversity of genetic partners (be they mobile elements or other bacteria) than the majority of bacteria growing in biofilms [41]. Borrelia's genetic diversity must come from within the lineage, rather than from adaptive gene transfer from other microbes, even though Borrelia's plasmids are able to transfer [42,43]. Moreover, Borrelia's lifestyle imposes a strong selective pressure on these parasitic cells, which must constantly evolve to escape their host's immune system. Plasmids within Borrelia play a role in this evasion process [44][45][46][47], and we hypothesize that this is because they provide a genetic compartmentalization inside the cells that allows Borrelia to partition DNA into two distinct kinds of molecules with distinct evolutionary regimes [48,49]. Most of the genes are located on a slow-evolving linear chromosome, heavily constrained in its structure, while other genes are stored on the more flexible, fast-evolving, and heavily recombining plasmids [43,49,50]. We propose that this partition helps Borrelia cells survive in a hostile environment.
The chromosomes are highly streamlined and optimized to support Borrelia's parasitic life, while the plasmids would be the locus of substantial rearrangements, recombination, and gene conversion, producing necessary variation in genes coding for outer surface proteins (osp), genes that repress the cytolytic activity of the host's serum (Erp and CRASP), and genes coding for antigenic variation (vlp, vsp, vlsE) that help escape Borrelia's host immune system [40,[44][45][46][47]. To further test this hypothesis (that some plasmids act as compartments of DNA material and intracellular organs of genetic innovation rather than as vectors of mobile DNA), we used the hit coverage option of EGN's gene network reconstruction, which we set to > 90%. This option allowed us to distinguish two types of edges in the sequence network. We used EGN to detect and quantify "full homology" edges between Borrelia's sequences. Sequences connected by "full homology" edges have probably diverged progressively through the combined effects of small mutations, natural selection, and drift, because their sequences greatly overlap and can be aligned along their whole length. By contrast, when sequences do not evolve only in a tree-like fashion, i.e. when segments of divergent sequences fuse to form a single gene, or when segments within genes recombine through illegitimate recombination, sequences are connected by edges that are not necessarily "full homology" edges [23]. These sequences do not derive entirely from a single ancestral gene copy; rather, various segments of these sequences have a diversity of sources. Such sequences, produced by processes more complex than vertical descent alone, do not align neatly along their whole length, but are at best connected through local regions of similarity. Such similar segments, as opposed to similarity over the whole sequence, are also detected in EGN analyses: they constitute a second type of edge in gene networks (Figure 2c).
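The ≥ 90% hit-coverage rule separating these two edge types is simple to express. A minimal sketch (function and variable names are illustrative, not EGN's own):

```python
def edge_type(hit_len, len_a, len_b, threshold=90.0):
    """Classify an edge as 'full' homology when the hit spans at least
    `threshold` percent of BOTH sequences, else 'partial'."""
    cov_a = 100.0 * hit_len / len_a
    cov_b = 100.0 * hit_len / len_b
    return "full" if min(cov_a, cov_b) >= threshold else "partial"

edge_type(285, 300, 310)  # "full": 95% and ~92% coverage
edge_type(120, 300, 400)  # "partial": e.g. a shared domain only
```

A "partial" edge thus flags a pair whose similarity is confined to a local region, the signature of fusion, recombination, or domain sharing discussed above.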
Interestingly, we observed that the connected components of Borrelia's chromosomes were largely connected by "full homology" edges (and thus likely evolve by vertical descent), but that connected components on Borrelia's plasmids were largely connected by "partial homology" edges, and therefore seem to be subjected to more complex evolutionary processes. These processes result in a large amount of genetic diversity in the plasmids. This result, based on a gene network (Figure 2b), strengthens our hypothesis that a structural partition of DNA within Borrelia's cells (observed in the genome network) is coupled with a "partition" of the processes affecting this DNA, in this case contributing to the recruitment of Borrelia's plasmids as "organs of genetic innovation". In other words, Borrelia's plasmids and their genes can be seen as private goods of the Borrelia lineage [7]: they benefit this lineage but are not shared with others. Of course, quite a few other prokaryotic genera contain plasmids with low conductances, such as Sodalis (C = 0.16), Coxiella (C = 0.17), and Buchnera (C = 0.23) (Additional file 2: Table S1 & Additional file 3: Figure S2, see Implementation). We do not wish to elaborate here on whether the lifestyle of these bacteria may explain this relative genetic isolation. Sodalis are intra- and intercellular symbionts, Buchnera are obligate intracellular symbionts, and Coxiella are obligate intracellular pathogens. However, we want to underscore the fact that sequence similarity networks can be a great tool for fostering this type of hypothesis.

Conclusions

The use of similarity networks appears to be a compelling complement to standard phylogenetic analyses for performing comparative analyses of an increasing amount of molecular sequences from genomic and metagenomic projects. Several publications have already benefited from this analytical framework [2,6,25,29,51,52].
[Figure 2 caption, continued: b. Borrelia's plasmids are directly connected only to Borrelia's chromosomes. c. Schematic connected components, same color code as above. "Full homology" edges are indicated by solid lines; other similarity edges are indicated by dashes. Components with a majority of nodes corresponding to genes on chromosomes were significantly richer in "full homology" edges than connected components with a majority of nodes corresponding to genes on plasmids.]

However, such network analyses still require more programming skills than are usually necessary to carry out phylogenetic analyses, for which users can rely on a diversity of user-friendly software. By contrast, few (if any) user-friendly software programs running on desktop computers, explicitly designed to reconstruct distinct kinds of similarity networks from nucleic and/or protein data, have yet been made available to the biology community. We introduce EGN in the hope that it might constitute a timely opportunity to provide network construction tools to a broader audience. We are confident that software like EGN will enhance the exploitation of the evolutionary signal of genomic and metagenomic projects.

Genomic datasets and analysis parameters

We sampled 571,044 protein sequences from the chromosomes of 70 eubacterial complete genomes, 54 archaebacterial complete genomes, and 7 eukaryotic genomes, covering the diversity of cellular life, as well as from the genomes of two types of mobile genetic elements: 228,040 protein sequences from all the plasmids and phages available from the NCBI at the time of this analysis (see Additional files). We first used EGN (e-value cutoff 1E-05, identity threshold 30%) to construct a genome network, harboring either cellular chromosomes or genomes of mobile genetic elements at its nodes, connected when they shared sequences belonging to the same cluster of similar sequences, as in Halary et al. [25].
To test whether the plasmids hosted in a bacterial lineage were connected to genomes in multiple other lineages, we estimated the conductance (C) of their nodes in the genome network. For instance, for the plasmids of Borrelia, we estimated C as the number of edges connecting Borrelia's nodes to non-Borrelia's nodes divided by the number of edges connecting Borrelia's nodes to any node [53]. We assessed whether the observed value of C was significantly lower than the conductance obtained by chance for the same number of nodes in the genome network by shuffling node labels on the same network topology for 1,000 replicates, which estimates the conductances expected by chance alone in a network of the same size and topology. In order to test whether genes within Borrelia's chromosomes and plasmids had been affected by different evolutionary processes, we then reconstructed the similarity network of Borrelia's genes using the same parameters, but setting the hit coverage condition to > 90% (i.e. edges were tagged as positive when the hit was longer than 90% of each gene's length, and negative otherwise). The numbers of 'positive' and 'negative' edges linking plasmidic gene to plasmidic gene, plasmidic gene to chromosomal gene, and chromosomal gene to chromosomal gene were quantified. Over- or under-representation of such edges was also estimated by shuffling node labels on the same network topology for 1,000 replicates.
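The conductance estimate and its label-shuffling null model can be sketched in a few lines. This is a hedged toy example with invented node names, not the real genome network; shuffling labels on a fixed topology is equivalent here to drawing random node sets of the same size.

```python
import random

def conductance(edges, members):
    """C = edges crossing the boundary of `members` / all edges incident
    to `members`. Low C means the node set is isolated in the network."""
    incident = [e for e in edges if e[0] in members or e[1] in members]
    boundary = [e for e in incident if (e[0] in members) != (e[1] in members)]
    return len(boundary) / len(incident) if incident else 0.0

def conductance_null(edges, k, n_reps=1000, seed=0):
    """Null distribution: keep the topology, draw k random nodes as the
    'lineage', and recompute C for each replicate."""
    rng = random.Random(seed)
    nodes = sorted({n for e in edges for n in e})
    return [conductance(edges, set(rng.sample(nodes, k))) for _ in range(n_reps)]

# Toy genome network: two Borrelia-like plasmids linked only to their
# host chromosome, which has a single edge out to the rest of the network.
edges = [("b_pl1", "b_chr"), ("b_pl2", "b_chr"), ("b_chr", "ecoli"),
         ("ecoli", "phage1"), ("phage1", "salm"), ("salm", "ecoli")]
obs = conductance(edges, {"b_pl1", "b_pl2", "b_chr"})  # 1 boundary / 3 incident
null = conductance_null(edges, k=3)
p = sum(c <= obs for c in null) / len(null)  # one-sided empirical P value
```

Here obs = 1/3: only one of the three edges touching the Borrelia-like nodes leaves the lineage, and the permutation replicates give the distribution of C expected for an arbitrary three-node set on the same topology.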
Pomegranate Metabolites Impact Tryptophan Metabolism in Humans and Mice

ABSTRACT

Background: We showed that pomegranate juice (PomJ) can help to maintain memory in adults aged >50 y. The mechanism for this effect is unknown, but might involve Trp and its metabolites, which are important in brain function.

Objectives: We aimed to test the hypothesis that PomJ and its metabolites ellagic acid (EA) and urolithin A (UA) affect Trp metabolism.

Methods: Stool and plasma from a cohort [11 PomJ, 9 placebo drink (PL)] of subjects enrolled in our double-blind, placebo-controlled trial (NCT02093130) were collected at baseline and after 1 y of PomJ or PL consumption. In a mouse study, cecum and serum were collected from DBA/2J mice receiving 8 wk of dietary 0.1% EA or UA supplementation. Trp metabolites and intestinal microbiota were analyzed by LC-MS and 16S rRNA gene sequencing, respectively.

Results: In the human study, the change in the plasma Trp metabolite indole propionate (IPA) over 1 y was significantly different between the PomJ and PL groups (P = 0.03). In serum of experimental mice, we observed a 230% increase of IPA by EA but not UA, a 54% increase of indole sulfate by UA but not EA, and 43% and 34% decreases of kynurenine (KYN) by EA and UA, respectively. In cecum, there was a 32% decrease of Trp by UA but not EA, and an 86% decrease of KYN by EA but not UA (P < 0.05). The abundance of 2 genera, Shigella and Catenibacterium, was reduced by PomJ in humans as well as by UA in mice, and their abundance was negatively associated with blood IPA in humans and mice (P < 0.05).

Conclusions: These results suggest a novel mechanism involving the regulation of host and microbial Trp metabolism that might contribute to the health benefits of ellagitannins and EA-enriched food, such as PomJ.

Introduction

Trp is an essential amino acid, and Trp metabolism is involved in many aspects of host metabolism and physiology (1,2).
Both the host and the gut microbiota are involved in Trp metabolism. The majority of Trp is metabolized through the kynurenine (KYN) pathway by host cells, which generates many bioactive metabolites important for immune regulation (3). Trp is also metabolized into many neuroactive metabolites, such as serotonin by host cells, and indole derivatives by the gut microbiota (3,4). Although host cells catabolize Trp into serotonin, indigenous spore-forming bacteria from the gut microbiota promote serotonin biosynthesis in colonic enterochromaffin cells by regulating other bacterial metabolism (4). Serotonin and indole derivatives, such as indole propionate (IPA), also have a regulatory role in immune responses as well as host metabolism (5)(6)(7)(8). We previously reported that daily consumption of 237 mL of pomegranate juice (PomJ, n = 15) for 1 mo improved memory performance and altered brain neural activity in older subjects with mild memory complaints, compared with daily intake of a taste-matched placebo juice (PL) with high fructose content but no polyphenols (n = 13) (9). In addition, we recently evaluated the effects of long-term PomJ consumption on the cognitive ability of healthy middle-aged and older adults. We found that subjects in the PomJ group who consumed 237 mL of PomJ daily for 12 mo (n = 98) experienced stabilization of performance on a memory score involving visual-spatial learning, compared with subjects in the PL group (n = 102) who consumed the PL and showed a decline on that learning score (10). The underlying mechanism of the improvement in memory performance and metabolic markers associated with PomJ consumption remains largely unknown (9)(10)(11)(12). PomJ contains a variety of bioactive compounds such as phenolics [ellagitannins (ETs) and ellagic acid (EA)] and flavonoids (anthocyanins, etc.) (13,14). After oral consumption, ETs are hydrolyzed to EA in the intestine.
EA can be absorbed into the bloodstream, or remain in the intestine to be further transformed into urolithins, such as urolithin A (UA), by gut microbiota. ETs and EA are poorly bioavailable, but UA has 10-fold better bioavailability compared with EA (15)(16)(17)(18). The beneficial effects of ETs, EA, and their microbial metabolite UA have been reported (17,(19)(20)(21). PomJ intake was found to acutely regulate the concentrations of the Trp metabolite melatonin in both healthy and insulin-resistant (IR) subjects (11). EA supplementation was associated with neuroprotective, analgesic, antiamyloidogenic, and antidiabetic effects (22)(23)(24). Metabolism of Trp into serotonin was found to be critical for the antidepressant-like activity of EA (25). We therefore hypothesized that PomJ and its bioactive component EA likely induce alterations of the microbial and host metabolism of Trp, which could be a potential mechanistic link underlying the observed health benefits of PomJ. In humans, ∼25-80% of study subjects produce UA from ETs/EA after consuming ETs/EA-containing food (26,27). Individuals differ in their ability to convert ETs/EA to UA after ETs/EA intake, resulting in large interindividual differences in the concentrations of EA and UA present in the gut and circulation (26,27). Results from previous studies suggest that EA and UA share certain similar biological activities, but also have distinct functions (20,28,29). The health benefits of UA in aging and inflammatory bowel disease (IBD) are of great interest. A recent clinical trial has reported improved mitochondrial and cellular health in humans supplemented with UA (30). 
The aims of this study included: 1) investigating the possible mechanism of PomJ on memory by evaluating the effect of PomJ intake on Trp metabolism; 2) investigating the individual effects of EA and UA on Trp metabolism by supplementing EA and UA individually in mice lacking the ability to produce UA; and 3) investigating the effect of PomJ, EA, and UA intake on gut microbiota and the association with Trp metabolism. For this purpose, we first analyzed plasma Trp metabolites and fecal microbiota from a small cohort of subjects enrolled in our recent randomized controlled trial comparing memory performance of 1-y PomJ compared with PL consumption in healthy older adults (10). We recently showed the differential metabolic effects of EA and UA in mice with high-fat/high-sucrose (HFHS) diet-induced IR (31). The antidiabetic and neurotropic effects of EA were reported in diabetic rats, and a contribution of Trp metabolism to the EA-mediated neurotropic effects has previously been reported (24,25). We therefore used mice with HFHS diet-induced IR as a model to study the individual effects of EA and UA on Trp metabolism. However, whether dietary macronutrients modulate these effects requires further investigation.

Human study

We recently evaluated the memory performance in subjects consuming 237 mL/d PomJ (n = 98) or PL (n = 102) for 1 y in a randomized, double-blind, 2-arm, parallel-design study (10). The study was carried out at the Semel Institute for Neuroscience and Human Behavior and Center for Human Nutrition, David Geffen School of Medicine at the University of California, Los Angeles, CA in accordance with the guidelines of the Human Subjects Protection Committee of the University of California, Los Angeles. All subjects gave written informed consent before the study began. This study was registered at clinicaltrials.gov as NCT02093130. Subjects (aged 50-70 y) were randomly allocated to consume 237 mL of PomJ (Wonderful Company, LLC) or PL every day.
The PL contained matched constituents to PomJ (37 g sugar from high-fructose corn syrup, flavor, and acidity level) except for phenolic compounds (3). A subset (11 PomJ and 9 PL) of 20 subjects who provided stool and blood samples at baseline and final visit were selected for the analysis. Stool samples were used for microbiota analysis. Plasma samples were used for Trp metabolite analysis. Data from 1 subject from the PL group were excluded due to detection of Pom metabolites in the blood. Data about the cognitive outcomes from this clinical trial were published previously (3).

Animal study

All mouse procedures were approved by the UCLA Animal Research Committee in compliance with the Association for Assessment and Accreditation of Laboratory Care International. Twenty-four male DBA/2J mice aged 5-6 wk were purchased from the Jackson Laboratory. After 1 wk of acclimation, PomJ was used to replace drinking water for 4 d, and stool samples were collected every day for 4 d to confirm the lack of UA production capability. Mice were then switched to regular water and fed an HFHS diet (42% energy from fat, 30% energy from sucrose) for 8 wk; they were then randomly assigned to 1 of 3 groups, and fed either an HFHS diet, an HFHS diet supplemented with 0.1% EA (94% EA; Ecological Formulas), or an HFHS diet supplemented with 0.1% UA (97% UA; Feitang) (Supplemental Table 1). A 237-mL serving of PomJ provides ∼120 mg EA when completely hydrolyzed (32). EA and UA were supplemented at 0.1% to HFHS-fed mice in this study, which is similar to a daily intake of 780 mg in humans. This dose is about 7-fold higher than a daily consumption of 237 mL of PomJ in humans (31). However, the mouse study was a short-term experiment compared with the 1-y human study, and it has been shown that humans can tolerate EA and UA at 500-1000 mg/d (16.7 mg/kg) via oral administration (30,32).
After EA or UA feeding for 8 wk, mice were killed, and blood and cecum contents were collected, weighed, and stored at −80°C until analysis (33).

Measurement of Trp and its major metabolites

Fifty microliters of human plasma or mouse serum samples were precipitated with 500 μL methanol and centrifuged at 10,000 × g for 10 min at 4°C. Supernatant was dried using a SpeedVac evaporator (ThermoFisher Scientific) and then resuspended in 50% methanol for analysis. Samples with a percentage of recovery between 80% and 120% were included in the results.

Sample preparation for blood amino acid HPLC analysis

Twenty-five microliters of human plasma or mouse serum samples were mixed with 500 μL of methanol, vortexed for 1 min, and then centrifuged at 10,000 × g for 10 min at 4°C. Supernatant was dried by SpeedVac and reconstituted in 10 μL water. Seventy microliters AccQ Fluor Borate buffer and 20 μL reconstituted AccQ Fluor Reagent were added to the sample tubes following the manufacturer's manual (AccQ Fluor Reagent kit, Waters Corp.). Samples were vortexed for 1-2 min and incubated at 55°C for 10 min. Samples were then cooled at room temperature and centrifuged, and the supernatant was used for HPLC analysis as described above.

16S rRNA gene sequencing and taxonomic analysis

DNA from stool or cecum content was extracted using the DNeasy PowerSoil Kit (Qiagen). Sequencing of the V4 variable region was performed at MR DNA (www.mrdnalab.com) on a MiSeq sequencer (Illumina). Sequence data were processed using the MR DNA analysis pipeline as previously described (34). Final operational taxonomic units were taxonomically classified using BLASTn against a curated database derived from Greengenes 12_10 (35), RDPII (http://rdp.cme.msu.edu), and the National Center for Biotechnology Information (www.ncbi.nlm.nih.gov) as previously described (34).
All taxonomic analyses were conducted in R (version 3.5.2; R Foundation) (36) with the phyloseq (37), ggplot2 (38), vegan (39), and DESeq2 (40) packages as previously described (34). α-Diversity indices (Chao1 and Shannon) were estimated using count values after rarefaction. β-Diversity was measured using Bray-Curtis dissimilarity. The relations of samples across groups were determined by permutational multivariate analysis of variance using the Adonis command provided by vegan in R and were displayed via principal coordinate analysis (PCoA) ordination. DESeq2 was used to identify abundance changes at the genus level that occurred differentially between the PomJ and PL groups. Null and test models were constructed, and an interaction between "time (baseline and final)" and "intervention (PL or PomJ)" was the differentiating term between the 2 models as previously described (34). A likelihood ratio test was used to identify differentially abundant genera between the PL and PomJ groups. The negative binomial Wald test provided in DESeq2 was used to identify genera of differential abundance between groups or between baseline and final within a group as previously described (34). P values were adjusted for multiple testing using the Benjamini-Hochberg false discovery rate correction in DESeq2. Because the Bonferroni correction is often considered overly conservative, we listed genera with P values < 0.05 and marked those with adjusted P < 0.2 using an asterisk (*).

Statistical analysis

Statistical analysis was performed using the Statistical Package for the Social Sciences version 8.0 software (SPSS Inc.). Summary statistics (mean, SD, and SEM) were calculated. The sample size was determined based on a previous study showing that 200 mL PomJ intake for 4 wk effectively changed phenolic metabolites and gut bacteria (n = 12) (41).
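The Benjamini-Hochberg false discovery rate adjustment mentioned above can be sketched in a few lines of Python. This is a minimal illustration of the procedure, not the DESeq2 implementation, and the raw p values are hypothetical:

```python
# Minimal Benjamini-Hochberg FDR adjustment (illustrative only).
# Mirrors the step-up procedure DESeq2 applies to raw p values.
def benjamini_hochberg(pvals):
    m = len(pvals)
    # Sort p values while remembering their original positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted

# Hypothetical raw p values for five genus-level tests.
raw = [0.001, 0.02, 0.03, 0.4, 0.9]
adj = benjamini_hochberg(raw)
```

With these inputs, the smallest raw p value of 0.001 is adjusted to 0.005 and the middle two collapse to the same adjusted value, which is the characteristic monotone behavior of the step-up procedure.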
For the human intervention study, the Mann-Whitney test and Fisher exact test were used to analyze differences in baseline characteristics between groups, and the Wilcoxon signed rank test was used to analyze the differences in measures (final − baseline) within groups. The Mann-Whitney test was also used to assess whether the changes in dependent variables in the 2 groups were different, including BMI, body weight, and Trp metabolites. For animal data, 1-factor ANOVA was used when data were normally distributed. The Tukey-Kramer multiple comparison procedure was used for post hoc comparisons. The Kruskal-Wallis test with Bonferroni correction was used when data were not normally distributed. P values < 0.05 were considered statistically significant.

Effects of PomJ on plasma concentrations of Trp metabolites in humans

Subject characteristics are shown in Table 1. There were no differences in baseline demographic characteristics between the PL and PomJ groups. The change in plasma IPA over time in the PL group was significantly different (P = 0.03) from that in the PomJ group. In the PL group, IPA concentrations decreased, whereas in the PomJ group the IPA concentrations were stable over 1 y of PomJ consumption (Figure 1D). Plasma concentrations of IS increased significantly in the PL group (P = 0.008), whereas IS concentrations remained stable over 1 y of PomJ consumption (Figure 1F). Trp and other major Trp metabolites, including serotonin, KYN, and IAA (Figure 1A-C, E) did not change significantly in either group.

Effects of PomJ on gut microbiota

PomJ and PL drink intake did not significantly change α-diversity indices (Chao1 and Shannon; Supplemental Figure 1A, B). The β-diversity measure Bray-Curtis dissimilarity was calculated and visualized via PCoA. No distinct separation between baseline and final visits in the PL or PomJ groups was observed (Supplemental Figure 1C).
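The Mann-Whitney comparison of change scores (final − baseline) between the two groups can be illustrated with a minimal pure-Python computation of the U statistic. The change values below are hypothetical, not study data, and the study itself used SPSS:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples.

    U counts, over all (x, y) pairs, how often an x value exceeds a
    y value (ties count one half). Rank-based software reports the
    same statistic and derives a p value from it.
    """
    u = 0.0
    for xi in x:
        for yi in y:
            if xi > yi:
                u += 1.0
            elif xi == yi:
                u += 0.5
    return u

# Hypothetical change scores (final - baseline) for plasma IPA.
pomj_delta = [0.1, 0.0, 0.2, -0.1, 0.3]
pl_delta = [-0.4, -0.2, -0.3, -0.1, -0.5]
u_stat = mann_whitney_u(pomj_delta, pl_delta)
# With n1 = n2 = 5, U ranges from 0 to 25; values near 25 indicate
# that the PomJ changes tend to exceed the PL changes.
```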
Comparing final with baseline in the PL group, the fecal abundance of 3 genera (Shigella, Rothia, and Eggerthella) was increased, and that of 2 genera (Fusobacterium and Barnesiella) was decreased (P < 0.05, Figure 2A). Comparing final with baseline microbiota composition in the PomJ group, 3 genera (Acetitomaculum, Faecalicoccus, and Kopriimonas) were increased and 7 genera (Tyzzerella, Turicibacter, Parasutterella, Catenibacterium, Haemophilus, Thermotoga, and Lactococcus) were decreased (P < 0.05, *P adjusted < 0.2; Figure 2B). Compared with abundance changes between final and baseline in the PL group, 1 y of PomJ intake increased the abundance or reversed the decrease of 2 genera (Kopriimonas and Fusobacterium), as well as decreased or reversed the increase of 7 genera (Dorea, Catenibacterium, Intestinibacter, Shigella, Parasutterella, Eggerthella, and Lactococcus) (Figure 2C). Some associations were identified between changes in abundance of genera and changes in the Trp microbial metabolites (Figure 2D). For example, the genera Catenibacterium and Sutterella were negatively and positively associated with IPA, respectively (Figure 2D).

Effects of dietary EA and UA supplementation on Trp and its metabolites in mice

Serum and cecum Trp and its metabolites were evaluated in experimental mice fed an HFHS diet or HFHS diets supplemented with 0.1% EA or UA for 8 wk (Figures 3 and 4). In serum, concentrations of the Trp microbial metabolite IPA were significantly increased by EA but not UA, whereas IS was significantly increased by UA but not EA (Figure 3B, C). The KYN pathway is the major route of host-mediated Trp metabolism, and we observed a significant decrease of serum KYN in mice with EA or UA supplementation (Figure 3E). EA and UA did not change serum Trp concentrations (Figure 3A). In cecum content, UA but not EA significantly reduced Trp concentrations, whereas EA but not UA reduced KYN concentrations (Figure 4A, E).
Other Trp metabolites with detectable concentrations in cecum, including indole, IPA, and IAA, were not altered by EA or UA (Figure 4B-D). Some associations were identified between relative abundance of cecal genera and Trp and its microbial metabolites in both serum and cecum (Figure 5). For example, the abundances of the genera Lactobacillus and Eubacterium were positively associated with serum and cecum IPA, whereas the abundance of Alistipes was negatively associated with serum and cecum IPA (Figure 5). Other amino acids in serum were not altered by dietary EA or UA supplementation (Supplemental Figures 2 and 3).

Discussion

This study investigated the effect of PomJ and its bioactive constituent EA and microbial metabolite UA on Trp metabolism. We first showed that plasma concentrations of the microbial Trp metabolite IPA were maintained in individuals who drank PomJ, compared with a decrease observed in the PL group, in a small cohort from our recent clinical trial (10). We further demonstrated the effect of EA and UA supplementation on Trp metabolism in a mouse study. Dietary supplementation is a powerful tool in shaping the gut microbial composition as well as microbial metabolism of nutrients. Our data present an example of how phytochemical intake can alter the host and microbial metabolism of nutrients. UA is a microbial metabolite of ETs/EA, but its potential in regulating microbial composition and metabolic activity has not been documented (26,27). In this study, we investigated the individual effect of oral supplementation with EA and UA not only on Trp metabolism but also on the microbial composition in DBA/2J mice fed a well-defined HFHS diet and lacking the ability to produce UA. Serum concentrations of IPA and IS were significantly increased by supplementation of EA and UA, respectively (Figure 3).
However, IPA concentrations in cecum were similar among experimental groups, and IS concentrations in cecum were below detection limits (data not shown). The KYN pathway in the liver accounts for the majority of dietary Trp degradation (42). The KYN pathway also exists extrahepatically and accounts for minimal Trp degradation, but becomes quantitatively more significant under conditions of immune activation (43). In our mouse study, KYN was reduced by both EA and UA in blood and by EA only in cecum, suggesting potential effects of EA and UA on the Trp-KYN pathway. Our data overall suggest a potential novel mechanism by which dietary EA and UA affect host physiology by regulating Trp microbial and host metabolism. However, this observation is limited to the mouse model with HFHS-induced IR. Future evaluations of the impact of macronutrient composition as well as host physiological status on EA/UA-mediated Trp metabolism are warranted. We also evaluated the effects of PomJ intake on fecal microbiota in the small cohort from our recent clinical trial (10). Because the effect of PomJ intake on microbial composition and metabolism was not the primary outcome of this clinical trial, the dietary background of human subjects was not controlled or recorded (10). We observed large variation when analyzing human gut microbial composition due to the complexity of study participants and the lack of dietary control during the 1-y intervention (Supplemental Figure 1). The fecal microbial composition was significantly altered in both groups by daily consumption of the placebo (sugar-matched) drink or PomJ for 1 y (Figure 2). Changes in the abundance of many genera in human study participants consuming PomJ differed from changes in the mouse microbiota induced by EA or UA supplementation.
Our mouse study showed that dietary UA modulated the gut microbiota more potently compared with EA supplementation, as indicated by an increased α-diversity richness index (Chao1) as well as significant distinct clustering by β-diversity analysis (Figure 6). In spite of many uncontrolled variables in the human study, 2 genera, Shigella and Catenibacterium, were found to be decreased by PomJ intake in humans as well as by UA supplementation in mice. Shigella is a well-known pathogenic Gram-negative bacterium that causes inflammatory destruction of the intestinal epithelial barrier and has been associated with IBD (44). The exact role of Catenibacterium is not well known, but it is positively correlated with the dietary intake of animal fat and is reduced in the gut of colorectal cancer patients (45,46). In addition, the abundances of Catenibacterium and Shigella were negatively associated with blood IPA in humans and mice, respectively. Trp microbial metabolism and its pathways are of great interest and widely explored (1,6,47). A recent in silico analysis showed that Trp metabolism pathways that produce neuroactive metabolites are enriched in the 5 genera Clostridium, Burkholderia, Streptomyces, Pseudomonas, and Bacillus (47). Gut bacteria involved in Trp metabolism include species belonging to Clostridium, Bifidobacterium, Lactobacillus, and Escherichia (1,6). Here we identified a variety of bacterial genera that were positively or negatively correlated with blood and/or cecum Trp metabolites (Figure 2D and Figure 5). For example, Lactobacillus was positively correlated with both serum and cecum IPA in mice. Future in vitro and in vivo studies are needed to investigate the cause-effect relation between identified bacterial genera and Trp metabolism, and how EA and UA regulate Trp metabolism. There are several limitations of the present study. The most important limitation is the small sample size for this subset of the clinical cohort.
Due to the small sample size, the between-group difference in the primary cognitive outcome did not reach significance (P = 0.27; Supplemental Table 2). The second limitation is that PomJ contains a variety of bioactive compounds in addition to ETs and EA, but we only evaluated the impact of EA in the mouse experiment. It would be interesting to study whether other constituents in PomJ also affect Trp metabolism. The third limitation is that although we show the impact of PomJ, EA, and UA on Trp metabolism in humans and mice, no analysis of the host and microbial genes or enzymes involved in Trp metabolism was performed. The fourth limitation is that it is unclear how the impact of PomJ, EA, and UA on Trp metabolism subsequently contributes to the health benefits of their intake. Additional research is necessary to provide this link. The present data include analyses of a small cohort of a double-blind, placebo-controlled trial and a mouse feeding study to analyze changes in Trp metabolites in response to consumption of PomJ and EA/UA, respectively. Our data show that dietary PomJ, EA, and/or UA supplementation not only affected the metabolism of the essential amino acid Trp but also altered the gut microbiota composition. In addition, to the best of our knowledge, our data show for the first time that UA is not only a postbiotic generated from microbial EA metabolism but also significantly affects microbial composition and metabolism. Manipulation of the complex interplay between diet, microbiota, and host represents a powerful strategy for altering the physiological status of the host.
Efficacy and Safety of Topical Calcipotriol Plus Betamethasone Dipropionate Versus Topical Betamethasone Dipropionate Alone in Mild to Moderate Psoriasis

Psoriasis is one of the prototypic papulosquamous skin diseases, characterized by erythematous papules or plaques. The disease is chronic in nature with a tendency to relapse. Betamethasone dipropionate binds to specific intracellular glucocorticoid receptors and subsequently binds to DNA to modify gene expression. The study duration was 6 weeks. A total of 60 study participants of both sexes diagnosed with mild to moderate psoriasis were included in the study. They were randomized into two groups: Group A, the topical calcipotriol plus betamethasone treatment group, consisting of 30 patients, and Group B, topical betamethasone alone, also consisting of 30 patients; 28 patients in the topical calcipotriol plus betamethasone group and 28 patients in the topical betamethasone group completed the study. A total of 98 patients were screened for this study, of whom 38 were excluded: 11 patients refused to participate and 27 patients did not meet the inclusion criteria. The results show a statistically significant percentage reduction in PASI score after 2 weeks (p = 0.01) and 4 weeks (p < 0.001) of treatment in both groups. In conclusion, the combination therapy of topical calcipotriol plus betamethasone provides a promising strategy for the treatment of mild to moderate psoriasis.

Original Research Article. Sindhuja and Muthiah; JPRI, 33(23A): 28-38, 2021; Article no.JPRI.66785

INTRODUCTION

Psoriasis, a common autoimmune skin disease, takes its name from the Greek word "psora," meaning "itch" [1]. It is characterized by round, circumscribed, dry, scaling plaques of varying sizes covered by greyish white or silvery white scales; it affects approximately 2% of the population and leads to considerable impairment of the quality of life of affected patients [1][2].
The most commonly affected sites are the scalp, tips of fingers and toes, palms, soles, umbilicus, gluteus, under the breasts and genitals, elbows, knees, shins, and sacrum [3]. Psoriasis affects both sexes and can occur at any age. It can occur in people of all races, with whites affected more often than blacks [4]. Psoriasis has a genetic basis, and studies clearly signify a genetic association, with the incidence being greater amongst first-degree and second-degree relatives of patients [4]. The genetics of psoriasis are known to be complex, with ten or more susceptibility loci, and these probably interact with various environmental factors that act on the skin or immune system. It is a T-lymphocyte-mediated autoimmune disease. T-lymphocytes, both CD4+ and CD8+ cells, are activated (HLA DR+ and CD25+). In the dermis the CD4+ cells predominate, while CD8+ cells prevail in the epidermis. One of the earliest events in the psoriatic plaques is the influx of activated CD4+ cells. In resolving plaques an influx of CD8+ cells predominates, while there is a decrease of CD4+ cells [5]. The induction of T-cell activation by psoriatic epidermal cells is highly dependent on the population of CD1a-DR+ dendritic cells, while CD1a+ Langerhans cells, HLA-DR+ keratinocytes, and dermal dendrocytes might also be relevant APCs in psoriasis. Activated T-lymphocytes produce two different patterns of cytokines: (1) Th1 cells produce IL-2 and IFN-γ; (2) Th2 cells produce IL-4, IL-5, and IL-10. Abnormal activation of leukocytes leads to the accumulation of T cells and other immune cells in developing skin lesions. T cells secrete proinflammatory cytokines which cause keratinocyte hyperproliferation and altered differentiation [6]. The turnover time of the epidermis speeds up significantly and leads to the characteristic psoriatic lesions [7].
The active form of vitamin D3 is known to play an important role in the stimulation of cellular differentiation, inhibition of proliferation, and immunomodulation [10]. This makes vitamin D3 a potential candidate for the treatment of psoriasis. However, oral administration of parent vitamin D3 might not be suitable for treating psoriasis due to the potential for hypercalcemia. Hence, several vitamin D3 analogues have been developed for the treatment of psoriasis. Vitamin D analogues bind to the vitamin D receptor, thus exerting biological actions on both corneocytes and immunocompetent cells in the skin [11]. Calcipotriol is a synthetic vitamin D3 analogue formulated as a cream and scalp solution. Calcipotriol regulates the proliferation and differentiation of keratinocytes [12]. Calcipotriol cream has been shown to be significantly more effective in treating psoriasis than placebo alone [13]. For decades, topical corticosteroids, particularly high-potency steroids, have been the mainstay of the topical treatment of psoriasis. Psoriatic patients with thick plaques often require treatment with the highest potency corticosteroids and are prone to multiple side effects with long-term use [14]. Safe and effective therapeutic options for plaque psoriasis are limited and the results are not entirely satisfactory. Hence the treatment plan should aim at obtaining rapid control of the disease and maintaining that control. Combination therapy has synergistic or additive effects, thereby showing superior efficacy, and is better tolerated than monotherapy. The purpose of the present study was to compare the efficacy and safety of calcipotriol combination therapy with betamethasone dipropionate against betamethasone monotherapy.

MATERIALS AND METHODS

The study was conducted in Sree Balaji Medical College and Hospital, Chennai during the period from March 2016 to September 2016 in accordance with the declaration of Helsinki and ICH-GCP guidelines.
The drug therapy was given free of cost to the patients, and they were given assurance that any withdrawal from the study would not affect their future treatment in the same hospital. Patients diagnosed with mild to moderate psoriasis by the physician based on the Psoriasis Area and Severity Index score (PASI score), who met the inclusion criteria and were willing to give consent for the study, were selected. A total of 98 patients were screened, of whom 11 refused to participate and a further 27 did not meet the inclusion criteria. All 60 patients selected were randomized (using a computer-aided randomization chart) with the help of the statistical software SPSS version 16 and allotted a treatment group. Participants received one of the study drugs for a period of 4 weeks. The baseline features like demographic data and general, systemic, and local examination findings were carefully noted in the case report form. Contact numbers of the investigators and emergency physicians were provided to all the study participants for any queries during the study period and for reporting of any adverse events. There were four scheduled visits during the study: baseline visit, after 2 weeks, after 4 weeks, and after 6 weeks (end of study visit).

Randomization

All 60 subjects were randomized to two treatment groups in a 1:1 ratio using computer-generated randomization with the Statistical Package for the Social Sciences (SPSS) version 16.

Laboratory Investigations

The following basic laboratory investigations (such as vitals and other parameters) were done during screening, i.e. the baseline visit ("0" weeks): blood biochemistry (complete blood profile was tested).

Statistical Analysis

Data analysis was done using the Statistical Package for the Social Sciences (SPSS) version 16. The "intent to treat" principle was employed, meaning all volunteer participants who had received at least one dose of study medication were included in the statistics. Independent "t" test was done between the two study groups.
Paired "t" test was done for comparing measurements within each group. Statistical significance was reported based on the p value, where a value <0.05 was considered significant.

RESULTS AND DISCUSSION

The selected population of 60 patients was randomized to two groups and the treatment started as and when they reported to the hospital. Two patients from the topical betamethasone group and 2 patients from the topical calcipotriol plus betamethasone group failed to complete the study. One patient from the betamethasone group could not be reached from the first week. One patient from the calcipotriol plus betamethasone group requested removal from the study during his 1st visit (2 weeks) and withdrew his consent. There was no discontinuation or withdrawal due to an adverse event. All statistical analysis was done in SPSS version 16 and the intent to treat principle was employed for analysis. Results were distributed in demographics, treatment comparison, and adverse event profile.

Age

The mean age was 43.83 years with a standard deviation of 13.16 years (mean ± SD = 43.83 ± 13.16), with a minimum age of 20 years and a maximum of 63 years.

Past History

The majority, N = 26 (86.7%), did not have any past history. Two of them (6.7%) had the same complaint and discontinued treatment 3 months before. One patient (3.3%) had the same complaint and discontinued treatment 4 months before. One patient (3.3%) had the same complaint and discontinued treatment 6 months before.

Completed Treatment and Follow Up

28 patients had completed treatment (93.3%) and follow up after 2 weeks of the study whereas 2 patients discontinued the treatment (6.7%).

Age

The mean age was 42.70 years with a standard deviation of 11.72 years (mean ± SD = 42.70 ± 11.72), with a minimum age of 26 years and a maximum of 61 years.

Past History

Two of them (6.7%) had the same complaint and discontinued treatment 4 months before. One patient (3.3%) had the same complaint and discontinued treatment 6 months before.
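The two comparisons named in the statistical analysis (an independent "t" test between groups and a paired "t" test within a group) can be sketched in pure Python. The PASI values below are hypothetical; the study itself ran these tests in SPSS version 16:

```python
import math
from statistics import mean, stdev

def independent_t(a, b):
    """Two-sample t statistic with pooled variance (equal-variance form)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

def paired_t(before, after):
    """Paired t statistic on within-subject differences (after - before)."""
    d = [y - x for x, y in zip(before, after)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Hypothetical PASI scores for four patients in one group.
week4 = [3.1, 3.6, 4.0, 3.4]
baseline = [9.0, 9.5, 10.0, 9.2]
t_within = paired_t(week4, baseline)  # tests the within-group change
```

A very large within-group t statistic here simply reflects that every hypothetical patient improved by a similar amount; the software then converts the statistic to a p value using the t distribution.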
PASI at 0 weeks

The mean PASI score of 30 patients at 0 weeks was 9.447 with a standard deviation of 0.768 (mean ± SD = 9.447 ± 0.768). The minimum score was 7.2 and the maximum score was 10.8.

PASI at 2 weeks

The mean PASI score of 29 patients (one patient discontinued) at 2 weeks was 5.786 with a standard deviation of 0.769 (mean ± SD = 5.786 ± 0.769). The minimum score was 4.2 and the maximum score was 7.4.

PASI at 4 weeks

The mean PASI score of 28 patients (two patients discontinued) at 4 weeks was 3.732 with a standard deviation of 0.520 (mean ± SD = 3.732 ± 0.520). The minimum score was 2.4 and the maximum score was 4.8.

Completed Treatment and Follow Up

28 patients had completed treatment (93.3%) and follow up after 2 weeks of the study whereas 2 patients discontinued the treatment (6.7%).

Adverse Reactions

There were no adverse reactions in the 30 patients.

Baseline Investigations

Baseline investigations remained similar in both groups and were within normal limits.

PASI score at 0 weeks

This result shows that the between-group difference in PASI score at 0 weeks (p = 0.44) is not significant.

Effect of Drug on PASI Score after 2 Weeks

This result shows that the between-group difference in PASI score after 2 weeks of treatment (p = 0.01) is statistically significant.

Effect of Drugs on PASI Score after 4 Weeks

These results show a statistically significant reduction in PASI score from baseline after 4 weeks of treatment in both treatment groups (p < 0.001). The significant reduction in PASI score in Group I is mainly due to the combined action of calcipotriol and betamethasone dipropionate. Betamethasone produces prolonged anti-inflammatory, antipruritic, vasoconstrictive, and immunosuppressive effects without curing the underlying condition. Calcipotriol induces differentiation and suppresses proliferation of keratinocytes, thus reversing the abnormal keratinocyte changes in psoriasis.
Thus, the combination of calcipotriol plus betamethasone not only produces symptomatic relief but also helps address the underlying condition, thereby leading to normalization of epidermal growth (Group I: topical calcipotriol plus betamethasone; Group II: topical betamethasone alone). Psoriasis is a chronic disease for which treatment is often needed throughout life, so patients' quality of life is often affected. The availability of studies on the management of psoriasis is limited relative to the prevalence of its complications. Topical therapies are the mainstay of treatment for chronic plaque psoriasis; these include keratolytics, coal tar, corticosteroids, topical PUVA, and calcipotriol alone or in combination with topical steroids. The antipsoriatic effects of betamethasone and of calcipotriol have been individually reported in a number of studies. Calcipotriol, a vitamin D analogue, has proven highly efficacious in limited chronic plaque psoriasis. There are few trials comparing its efficacy with other topical agents such as steroids and coal tar. Sharma et al. reported >50% reduction in ESI score at week 4 in 60% of lesions treated with calcipotriol compared to 23.3% of lesions treated with coal tar (p < 0.01) [15] (Fig. 8 shows the Psoriasis Area and Severity Index (PASI) percentage reduction in Group I and Group II after 2 weeks and 4 weeks). Another study found the efficacy of calcipotriol/betamethasone formulations to be better than calcipotriol alone at 2- and 4-week follow-up and showed a greater reduction in mean PASI in the combined formulation groups (68.6% in the once-daily group, 73.8% in the twice-daily group) than in the twice-daily calcipotriol-alone group (58.8%) and the vehicle group (26.6%). The two-compound formulation of calcipotriol with betamethasone has been found to be superior to either component used alone [16].
In this study, effectiveness in reducing the Psoriasis Area and Severity Index (PASI) was compared between treatment with topical calcipotriol plus betamethasone (combined therapy) and topical betamethasone alone. Both groups, topical calcipotriol plus betamethasone (Group I) and topical betamethasone (Group II), were well matched in terms of pretreatment characteristics. The mean age was 43.83 ± 13.16 years in Group I and 42.70 ± 11.72 years in Group II. There were 15 males (50%) and 15 females (50%) in Group I, and 13 males (43.33%) and 17 females (56.66%) in Group II. A family history of the same disease was found in 10 patients (33.3%) in Group I and 12 patients (40%) in Group II. The current study shows that both topical calcipotriol plus betamethasone (Group I) and topical betamethasone (Group II) significantly reduced PASI. The mean PASI scores of Group I and Group II at baseline were 9.293 ± 0.7697 and 9.447 ± 0.7682, respectively. After 2 weeks of treatment, the mean PASI score was reduced to 5.314 ± 0.711 in Group I and 5.786 ± 0.7694 in Group II; this mean difference between the groups was statistically significant (p = 0.01). After 4 weeks of treatment, scores further decreased to 2.989 ± 0.5087 and 3.732 ± 0.52 in Group I and Group II, respectively, with patients treated with topical calcipotriol plus betamethasone showing a highly significant (p < 0.001) reduction in PASI compared with patients treated with betamethasone alone. In our study, the mean percentage PASI reduction after the 4th week of treatment was 67.83% in the topical calcipotriol plus betamethasone group and 60.49% in the betamethasone group (p < 0.001). These results show topical calcipotriol plus betamethasone to be more effective than betamethasone; combination therapy is therefore more effective in reducing psoriatic lesions and thereby the psoriasis score index. These findings are consistent with other studies, such as Dahri [19].
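The percentage PASI reductions quoted above follow directly from the reported group means; a minimal sketch of the arithmetic is shown below. Because it uses the rounded means reported in the text, the last decimal can differ slightly from the paper's figures, which were presumably computed on unrounded data:

```python
# Percentage PASI reduction from baseline to week 4, using the rounded
# group means reported in this study.
def pct_reduction(baseline: float, week4: float) -> float:
    """Percent fall from baseline: (baseline - week4) / baseline * 100."""
    return (baseline - week4) / baseline * 100

group_i  = pct_reduction(9.293, 2.989)  # calcipotriol + betamethasone
group_ii = pct_reduction(9.447, 3.732)  # betamethasone alone

print(f"Group I:  {group_i:.1f}%")   # ~67.8%; the paper reports 67.83%
print(f"Group II: {group_ii:.1f}%")  # ~60.5%; the paper reports 60.49%
```

Recomputing from the rounded means reproduces the reported reductions to within a few hundredths of a percentage point, confirming the internal consistency of the quoted figures.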
Adverse events have been reported more frequently with calcipotriol than with betamethasone, suggesting that calcipotriol plus betamethasone therapy is safer, producing fewer adverse effects than calcipotriol alone. Calcipotriol exerts its antipsoriatic action through inhibition of epidermal proliferation and inflammation and enhancement of normal keratinization [20]. Vitamin D analogues can also affect the local immune system by triggering apoptosis in inflammatory cells, inhibiting T helper (Th) 1 cytokine production, and inducing a Th1-to-Th2 switch. Topical corticosteroids have anti-inflammatory and antiproliferative effects [20]. Betamethasone inhibits the production of cytokines and reduces mediators of inflammation. The calcipotriol plus betamethasone two-compound formulation provides better patient compliance than monotherapy, combining the action of vitamin D3 analogues on keratinocyte differentiation with the anti-inflammatory effect of steroids. Thus, the combination of calcipotriol plus betamethasone has a more rapid onset of action than betamethasone alone, leading to significantly faster clinical improvement and greater patient satisfaction. The sample size is the main limitation of the present study; the heterogeneous nature of the data could be reduced by increasing the sample size. CONCLUSION This study revealed that topical calcipotriol plus betamethasone is more efficacious than betamethasone alone in patients with mild to moderate psoriasis. Both treatment groups provided symptomatic relief of psoriasis, but the reduction in PASI score was greater with topical calcipotriol plus betamethasone than with topical betamethasone alone. No serious adverse effects were noted in either group. CONSENT AND ETHICAL APPROVAL The study protocol was reviewed and approved by the Institutional Ethics Committee; all trial participants were informed about the study procedures and written informed consent was obtained.
Prevention of Alcohol Consumption Programs for Children and Youth: A Narrative and Critical Review of Recent Publications

Background: Youth substance use is a public health problem globally; alcohol is one of the drugs most consumed by children, and youth prevention is the best intervention for drug abuse. Objective: To review the latest evidence on alcohol use prevention programs in empirical research, covering all fields of action among children and youth. Methods: A narrative and critical review was carried out within international databases (PsychInfo, Pubmed, Web of Science, and Scopus) in August 2021 and was limited to empirical studies published in the last five years (2017–2021). A flow diagram was used according to the PRISMA statements. Empirical research articles in English with RCTs or quasi-experimental designs that included alcohol and children and young people up to 19 years of age (universal, selective, or indicated programs) were included. The authors examined the results and conceptual frameworks of the prevention programs by field of action. Results: Twenty-two articles were found from four fields of action: school (16), family (2), community (2), and web-based (2), representing 16 alcohol prevention programs. School-based alcohol prevention programs are clinically relevant [Theory of Planned Behavior, Refuse, Remove, Reasons, Preventure, The GOOD Life, Mantente REAL, Motivational Interviewing (BIMI), Primavera, Fresh Start, Bridges/Puentes]; they are effective in increasing attitudes and intentions toward alcohol prevention behavior, while decreasing social norms and acceptance of alcohol, reducing intoxication, and increasing perceptions of the negative consequences of drinking.
Discussion This narrative and critical review provides an updated synthesis of the evidence for prevention programs in the school, family, community, and web-based fields of action, where a more significant number of programs exist that are applied within schools and for which would have greater clinical relevance. However, the prevention programs utilized in the other fields of action require further investigation. INTRODUCTION Youth substance use represents a public health problem globally (Somani and Meghani, 2016;Stevens et al., 2020). The neurological development that occurs during childhood and adolescence combined with the onset of substance use (between the ages 15 and 19) (Blanco et al., 2018) becomes a particularly vulnerable stage that must be studied (Thorpe et al., 2020). Alcohol is one of the drugs most consumed by adolescents and young adults (Johnston et al., 2020). Particularly in the United States, 62.5% of underage alcohol users are binge alcohol users (Substance Abuse and Mental Health Services Administration [SAMHSA], 2018). Use and misuse of alcohol are associated with poor cognitive and executive functioning (Lees et al., 2020), increased risk of injury, death, and physical and sexual violence (Centers for Disease Control and Prevention [CDC], 2020), poor academic performance (Bugbee et al., 2019;Chai et al., 2020), and increased exposure to social risks and early sexual activity (Boisvert et al., 2017). Moreover, young people who drink alcoholic beverages are more likely to use tobacco and other drugs and develop risky sexual behaviors (Lee et al., 2018). Currently, alcohol abuse is characterized by high relapse rates, around 70-80% within a year (Dousset et al., 2020). In 2017, a systematic review found that children are aware of and able to recognize alcohol and its effects, suggesting the importance of starting prevention as soon as possible (Jones and Gordon, 2017). 
For this reason, the National Institute on Drug Abuse (National Institute on Drug Abuse [NIDA], 2020a) considers prevention the most cost-effective intervention for drug abuse. Unfortunately, there is no single accepted concept to define "drug use prevention". The European Monitoring Center for Drugs and Drug Addiction (European Monitoring Centre for Drugs and Drug Addiction [EMCDDA], 2015) defines "prevention" as any policy, program, or activity to (at least partially) delay or, directly or indirectly reduce drug use, including the possibility of minimizing drug use, limiting the negative consequences for health and social development or the progression of problematic drug use. As well it states that preventive actions among young people should be initiated early in their lives (European Monitoring Centre for Drugs and Drug Addiction [EMCDDA], 2021). In addition, substance use prevention also emphasizes protection against the initiation, progression, and maintenance of drug use, training in healthier coping strategies and social skills, or the development of social policies that reduce the availability and accessibility (such as prices) of alcohol (Becoña, 2007;Caywood et al., 2015). Overall, evidence-based prevention programs are encouraged (Harrop and Catalano, 2016;Funk et al., 2020). Drug Use Prevention Programs Most prevention programs seek to reduce the number and type of drugs consumed, delay the age of onset of drug use, eradicate the impact of negative consequences among those who already use drugs or have abuse/dependence problems, as well as reduce risk factors and enhance protective factors by providing healthy alternatives to consumption (Becoña and Cortés, 2011;National Institute on Drug Abuse [NIDA], 2020b). Most programs are based on three essential components (Reno et al., 2000;Tobler et al., 2000): reducing supply (reducing access and availability of drugs), reducing or delaying drug demand, and limiting health and social consequences. 
Prevention, conceptualized as an intervention that occurs before the onset of the disorder, is usually classified into three types: universal, selective, or indicated (Griffin and Botvin, 2010). Universal prevention programs are aimed at the general population. These are less intense and expensive than the other two types (for example, this would include school-level preventive activities that promote skills to refuse drug offers, improve self-esteem, and other factors that protect against substance abuse) (Espada et al., 2003;Griffin and Botvin, 2010). Selective prevention programs are aimed at high-risk groups within the general population and indicated prevention strategies are aimed at a specific subgroup of the community, which are usually consumers who show premature signs of danger for the development of addictive disorders (Griffin and Botvin, 2010;Becoña and Cortés, 2011). That is, indicated prevention targets those who already show early signs of substance use problems, engage in substance abuse, or other high-risk behaviors associated with drug consumption (Griffin and Botvin, 2010). In addition, prevention programs can be developed in different fields of action, such as family-based, that encourage positive interaction between parents and children in connection with different developmental milestones (Van Ryzin et al., 2016); school-based, that provide a safe space for children and adolescents to discuss their problems with their friends and peers, and allow for regular supervision, as children spend a significant amount of time each day at school (Spanemberg et al., 2020); community-based, that refers to the community's efforts to prevent consumption by its members (Hafford-Letchfield et al., 2020); and recently, mindfulness-based intervention (MBI), that includes paying attention in the present moment in a particular way: on purpose and without judgment (Korecki et al., 2020). 
Recent systematic reviews of prevention programs have focused solely on either family-based (Van Ryzin et al., 2016; Ballester et al., 2020), school-based (Tremblay et al., 2020), or community-based fields of action (Melendez-Torres et al., 2016; Hafford-Letchfield et al., 2020). However, most programs are included within other broader programs whose objective is to improve the school climate and prevent bullying (Spanemberg et al., 2020) or are specific micro-interventions, such as interventions based on mindfulness (Korecki et al., 2020). This study aims to critically review the latest empirical evidence on alcohol prevention programs in children and youth. MATERIALS AND METHODS A narrative and critical review was carried out in international databases (PsychInfo, Pubmed, Web of Science, and Scopus) in August 2021 and was limited to empirical studies published in the last five years (2017–2021). The keywords used were "alcohol", "child*", "young adults", and "prevent*", combined with the Boolean connector AND. The selection criteria were as follows: empirical research articles with randomized controlled trials (RCTs) or quasi-experimental designs that included alcohol as a variable, a target group of children and young people up to 19 years of age (universal, selective, or indicated programs), and publication in English-language journals of high quality and impact factor. Although this is not a systematic review, a flow chart according to the PRISMA statements (Moher et al., 2009; Page et al., 2021) was used for this narrative and critical review (Figure 1). Records were removed before screening in the identification stage because they were duplicates or unrelated to the intervention; papers were eliminated in the first stage of screening (records screened) because the prevention did not concern substance use.
The authors examined the results and conceptual frameworks of the prevention programs by fields of action in children and young people up to 19 years: Do these interventions reduce the amount and/or frequency of use? Does the intervention influence other variables such as attitudes, intentions, perceptions, or social norms about alcohol consumption? The evidence reviewed along with the conceptual frameworks and key results of the reviewed articles are described in Supplementary Table 1. Description of the Programs Supplementary Table 1 summarizes basic information of the 16 prevention programs reviewed, the intervention, the conceptual framework, and their results. The school prevention programs found were: the Triad; Primavera; Bridges/Puentes; Mantente REAL; Preventure; Refuse, Remove, Reasons program (RRR); Fresh Start; based in Motivational Interviewing program (BIMI), Unplugged (Tamojunto); The GOOD Life; pragmatic prevention, and a program based in Theory of Planned Behavior. The family prevention programs found were Media Detective Family and Effekt. The community prevention programs found were Öckerö Method and a program based on the Theory of Planned Behavior. Finally, the web-based prevention program was RealTeen. Almost all reviewed alcohol prevention programs were universal; that is, they intervened before the initiation stage, except one (Lammers et al., 2017), which was a selective prevention program. The fields of action ranged from school (16 studies, 72.7%), family (2 studies, 9.1%), community (2 studies, 9.1%), to web-based (2 studies, 9.1%) prevention programs. Some of these programs were aimed at preventing the use of other drugs in addition to alcohol. All studies explicitly explained subject randomization and pooling in their analyses, mainly involving subjects, groups, or clusters (classes or schools). The studies showed heterogeneous sample sizes, ranging from N = 45 to 6,658; and n = 23 to 3,340 participants in the experimental group. 
Two studies (Schwinn et al., 2017; Park et al., 2021) applied their programs exclusively to girls, while the remaining investigations were developed for both boys and girls. The ages of the children and youth ranged from 10 to 19 years. Outcomes ranged from immediately post-prevention to 5-year follow-up assessment periods. Conceptual Framework of School-Based Prevention Programs All the programs were universal programs (except Lammers et al., 2017, who studied adolescents with previous drinking experience) applied to students in a longitudinal design, regardless of their risk of alcohol consumption. The programs focused on social skills, intention and motivation, personality traits, and risk and protective factors for alcohol use. Considering the stage of development, children and young people begin to consume alcohol due to social and psychological pressure from peers, family, culture, and the media, since they lack or do not yet have all the skills and knowledge to recognize and resist such pressure. In other words, the programs seek to avoid alcohol consumption by resisting external pressure and increasing coping skills, considering personality traits, and also by allowing children and young people to analyze their negative emotional reactions, irrational thoughts, and behavioral intentions while maintaining a negative attitude toward alcohol consumption to promote healthy behavior. Several programs seek to develop social skills to reduce the effects of the social influence of alcohol consumption, including Unplugged, evaluated by Sanchez et al. (2017, 2018). Primavera (Diaz et al., 2021) uses health promotion as a reference basis (Dudley et al., 2015) and is mainly based on experiential learning (Potvin and Jones, 2011) via the development of psychosocial skills for preventing adolescent alcohol and tobacco use. Among the programs that are based on behavioral intention are Kim et al. (2021) (web-based) and Onrust et al.
(2017) (Fresh Start program), based on the Theory of Planned Behaviour, which states that behavioral intention is the direct determinant of changing to healthy behavior and that people with solid intentions strive to achieve the goal of not drinking and are more easily motivated to change their behavior (Ajzen and Madden, 1986). Mantente REAL (Kulis et al., 2020) (which uses ecological Risk and Resiliency Theory, Communication Competence Theory, and Narrative Theory), a Spanish-language version of keepin' it REAL (kiREAL), increases the use of culturally accepted drug resistance skills and promotes non-permissive norms and attitudes about substance use (Gosin et al., 2003). Motivational Interviewing (BIMI) (Reyes-Rodríguez et al., 2019) seeks to identify a present or latent problem with consumption and from there motivate the person to make a change (Pilowsky and Wu, 2013). Bridges/Puentes (Gonzales et al., 2018) emphasizes risk reduction (prevention) as well as positive youth development (promotion) in multiple domains (family, school, and peers) (Koning et al., 2013); Hodder et al. (2017a) used a pragmatic intervention to implement available programs and resources targeting individual and environmental 'resilience' protective factors. Finally, Preventure is a selective prevention program based on Cognitive Behavioural Therapy with a personality-targeted approach (Lammers et al., 2015). Sanchez et al. (2017) found that the Unplugged program (culturally adapted to Brazil) seemed to increase alcohol use initiation (9-month follow-up). Three studies based their results on the intervention performed by Sanchez et al. (2017). Sanchez et al. (2018) did a 21-month follow-up and found an increase in alcohol use in both the intervention and control groups. Sanchez et al. (2019) showed that the program's effect on drug use via normative beliefs was not statistically significant. Valente et al.
(2019) found that the impact of the intervention is unlikely to be conditioned by parenting style dimensions. Moreover, Vigna-Taglianti et al. (2021) applied Unplugged in Nigeria (culturally adapted) and found that the program significantly reduced the prevalence of recent alcohol use; furthermore, the program prevented regression across stages of alcohol use. Outcomes of School-Based Prevention Programs Several programs reduced alcohol consumption. Diaz et al. (2021) used the Primavera prevention program and showed that children from the control group were less likely to report current alcohol use at the end of the first year of the intervention. Gonzales et al. (2018) evaluated Bridges/Puentes. Preventure (Lammers et al., 2017) found significant intervention effects on reducing alcohol use within the anxiety sensitivity group and on reducing binge drinking and binge drinking frequency within the sensation-seeking group. Conceptual Framework of Family-Based Prevention Programs Two universal family-based prevention programs (Scull et al., 2017; Tael-Öeren et al., 2019) focused on parent-child dyads. They seek to develop parental control skills, parenting behaviors, and the establishment of clear limits or rules, as well as their children's peer and social resilience skills, and to maintain parental restrictive attitudes toward adolescents' alcohol use over time. Tael-Öeren et al. (2019) applied Effekt (previously known as the Örebro Prevention Program), which sought to delay and reduce adolescents' alcohol use by maintaining parental restrictive attitudes toward adolescents' alcohol use over time (Koutakis et al., 2008). The Media Detective Family was an online media literacy education substance abuse prevention program that parents and their children complete together, whose goals are to enhance the message interpretation process skills of both parents and children and to reduce children's use of alcohol and tobacco (Scull et al., 2017).
Outcomes of Family-Based Prevention Programs The Effekt prevention program (Tael-Öeren et al., 2019) positively affected parental attitudes, but it failed to delay or reduce adolescents' alcohol consumption. The Media Detective Family prevention program, applied by Scull et al. (2017), found that children reported a significant reduction in their use of substances over time. Conceptual Framework of Community-Based Prevention Programs Two universal community-based prevention programs (Park et al., 2021;Svensson et al., 2021) focused on strengthening the community as a more protective environment from alcohol use for children and youth. They provided information and offered education about alcohol and its associated risks, reduced access to alcohol, promoted health, improved advocacy for the media, strengthened restrictions, attitudes, and approaches to youth alcohol use among parents, other adults, and the community. The study carried out by Park et al. (2021) used the Theory of Planned Behavior explained above. Öckerö Method was a program whose goal was delaying the onset of alcohol use and reducing alcohol consumption among youths by strengthening restrictive attitudes and approaches to youth alcohol consumption among parents and other adults (Svensson et al., 2021). Outcomes of Community-Based Prevention Programs The results of both studies were heterogeneous. Conceptual Framework of Web-Based Prevention Programs Although some programs from different fields of action use the web as a tool (online), two studies have been found that do not fit into any of these fields and are described simply as web-based and gender-specific interventions (girls). RealTeen prevention program [used by Schwinn et al. (2017) and Schwinn et al. (2019)] is based on Social Learning Theory. It is aimed at helping girls navigate the risks associated with peer and social influences to use alcohol. 
This intervention focuses on goal setting, decision making, puberty, body image, coping, drug knowledge, and refusal skills. Schwinn et al. (2017) found that girls reported less binge drinking, higher alcohol refusal and coping skills, and lower rates of peer drug use at one-year follow-up. On the other hand, Schwinn et al. (2019), based on data from Schwinn et al. (2017), did not find reductions in binge drinking at the 2- and 3-year follow-ups. DISCUSSION In this research, the latest evidence on alcohol use prevention programs in empirical research, covering all fields of action in children and youth, has been reviewed using data from the last five years (2017–2021). Programs aimed at children and young people were reviewed because of the importance of prevention at these stages of development. Twenty-two studies were identified, representing 16 prevention programs. The fields of action ranged from school (16 studies), community (2 studies), and family (2 studies) to web-based (2 studies) prevention programs. Despite the significant heterogeneity of the programs (in both sample size and follow-up) and the difference in the number of studies for each field of action, most prevention programs are clinically relevant, given their results. The effects of universal prevention programs are generally minimal (Onrust et al., 2016), which may be attributed to the inconsistency of program content and the diversity of theoretical frameworks, as well as to the boomerang effect, whereby attempts to correct exaggerated perceptions of overall prevalence increase consumption rather than protect against it (Hopfer et al., 2010).
School-Based Prevention Programs Beginning with the school-based prevention programs based on the Theory of Planned Behavior (Onrust et al., 2017) and Bridges/Puentes (Gonzales et al., 2018): all are effective in increasing attitudes and intentions toward alcohol prevention behavior, decreasing social norms and acceptance of alcohol, reducing intoxication, and increasing perceptions of the negative consequences of drinking. In contrast, the prevention program Unplugged not only failed to show effectiveness in the study by Valente et al. (2019), but even seemed to increase alcohol use initiation in the studies by Sanchez et al. (2017) and Sanchez et al. (2018); however, it was effective in Nigeria (Vigna-Taglianti et al., 2021). The "pragmatic prevention" (Hodder et al., 2017a) was not effective either, possibly because the school staff selected the type, manner, and order of implementation of curriculum resources and programs; such interventions are less likely to be effective than non-pragmatic approaches (Yoong et al., 2014). The Triad (Beckman et al., 2017) did not affect the likelihood of drinking alcohol, having applied only one of the program's three components. Other systematic reviews and meta-analyses have found similar results on school-based prevention programs. For example, the systematic review by Tremblay et al. (2020) found that 70% of the programs demonstrated reductions in substance use, including both alcohol and drugs, and the systematic review and meta-analysis by Melendez-Torres et al. (2018) concluded that this type of intervention was broadly effective for reducing specific alcohol and drug use. However, opposite results have also been found: the systematic review conducted by Hodder et al. (2017b) found that universal school-based interventions that address adolescent 'resilience' protective factors as part of any intervention approach are ineffective for reducing adolescent alcohol use.
The school-based prevention programs that are most likely to be successful are those that combine the practice of social skills with the transmission of educational knowledge (Tobler et al., 2000; Botvin and Griffin, 2007), as well as those that target their interventions at more than one risk factor (Griffin and Botvin, 2010; Hale et al., 2014). Among the components that increase the effectiveness of the programs are the strengthening of social, emotional, behavioral, cognitive, and moral competencies; the increase in self-efficacy; the improvement of social relationships with adults, peers, and younger children; and longer interventions (Catalano et al., 2004; Cairns et al., 2014). However, research is lacking on universal alcohol prevention programs with primary and lower-grade students that promote personal and social life skills, including self-control, promotion of self-esteem, and problem-solving skills (Onrust et al., 2016), supplemented with the offer of healthy alternatives, work with parents, and peer education (MacArthur et al., 2016; Onrust et al., 2016). According to a systematic review (Jones and Gordon, 2017), children's attitudes toward alcohol become more positive as they get older. For this reason, early interventions must be applied to delay or prevent the formation of positive attitudes, perceptions, and social norms toward alcohol, and to follow alcohol consumption prevention guidelines that allow students to withstand the pressures toward alcohol consumption, delaying its onset. Among the programs that target their intervention at multiple risk factors is Unplugged, which supports the development of life skills (communication, assertiveness, critical thinking, coping strategies, goal setting, decision making, and problem-solving) and skills to resist the pressure to use drugs (Kreeft et al., 2009).
The program seeks to strengthen adolescents' personal and interpersonal skills that reduce the effects of social influence by modifying attitudes, beliefs, and normative perceptions (Sussman et al., 2004; Giannotta et al., 2014). The change in drinking behavior, which did not decrease but rather increased after nine months (Sanchez et al., 2017) and at the 21-month follow-up (Sanchez et al., 2018) in Brazil, could be due to the context and was probably influenced by many factors, such as the age of the pupils, prevalence of use, social pressure, and, not least, fidelity of implementation. In addition, adaptations have to ensure that the intervention content, language, examples, and delivery methods are culturally appropriate, relevant, and acceptable to the new population (Castro et al., 2004). Some research has found that the effectiveness of preventive interventions in schools may depend on implementation parameters such as acceptance of the building blocks, the scope of the intervention, and the mode of delivery (Cuijpers, 2002; Perkins and Craig, 2006; Domitrovich et al., 2008). In other words, students' attention increases if the intervention is attractive to them, facilitating their ability to retain the central messages (Domitrovich et al., 2008; Durlak and DuPre, 2008). Vallentin-Holbech et al. (2019) (The GOOD Life) studied these variables and found no significant effects on binge drinking for any level of exposure, satisfaction, or recall. Further research is required to determine the impact of these variables on other prevention programs. Students with anxiety-sensitive traits have shown higher levels of alcohol use and drinking problems in previous research (Sher et al., 2000; Krank et al., 2011), and Lammers et al. (2017) (Preventure) found significant intervention effects on reducing alcohol use within the anxiety sensitivity group, reducing binge drinking and binge drinking frequency.
Anxiety sensitivity is one of four personality profiles at higher risk of developing alcohol problems (sensation seeking, impulsivity, anxiety sensitivity, and negative thinking) (Comeau et al., 2001). The evaluation design of the programs must also be taken into account. Although most were randomized controlled trials, three were quasi-experimental (Beckman et al., 2017; Mogro-Wilson et al., 2017; Kim et al., 2021). A limitation of the quasi-experimental studies is that identification of a causal program effect rests on the assumption that the intervention and control schools would have shown the same trend in alcohol consumption without the intervention, which is impossible to test.

Family, Community, and Web-Based Prevention Programs

Similarly, prevention programs based on the family and the community do not allow conclusions about their effectiveness, since only Scull et al. (2017) (family-based) found a reduction in alcohol consumption among children. Park et al. (2021) (community-based) found that the program improved alcohol-related knowledge and converted individuals' positive expectations of alcohol into negative ones. Two systematic reviews (Allen et al., 2016; Kuntsche and Kuntsche, 2016) and a meta-analysis (Van Ryzin et al., 2016) analyzed the effectiveness of family-oriented alcohol prevention offerings and concluded that these programs may have preventive effects on alcohol consumption in young people. For the most part, they aimed to strengthen parental behavior and self-efficacy in order to improve alcohol-related family communication. In family programs, both parents and youth worked on their life skills and leisure activities. Van Ryzin et al. (2016) found that the overall impact across different programs was small to moderate.
Moreover, two systematic reviews of community mentoring programs to prevent or reduce alcohol use found a significant overall effect on alcohol consumption (Thomas et al., 2013; Tolan et al., 2014), and Toomey and Lenk (2011) found that programs that change the community environment can reduce alcohol use and related problems among youth. Strategies that lead to a general increase in the price of alcoholic products, increased regulation, control, and penalties for providing alcohol to minors, and restrictions on alcohol advertising could be recommended (Paschall et al., 2009). On the other hand, two web-based prevention programs applied to girls (Schwinn et al., 2017, 2019) likewise showed their clinical importance as gender-specific prevention, reporting less binge drinking and higher alcohol avoidance and coping skills, even at 1-year follow-up. From these two results alone, no general conclusions can be reached beyond the value of gender-specific prevention; however, a web-based prevention program applied to first-year college students also showed a reduction in alcohol consumption (Gilbertson et al., 2017), so more research is required on this type of program. The results obtained by Tael-Öeren et al. (2019) using Effekt are possibly due to it being an adaptation aimed at 11-year-old children, while additional versions were designed for 13-year-old children (Koutakis et al., 2008), which led to the choice of different measures to address the initiation of alcohol consumption. Beckman et al. (2017) applied only the "Fight against drugs" intervention of The Triad, not the other interventions addressing other behavioral issues. Using all the themes may be more effective, as the entire program addresses various risk behaviors.
Limitations

Among the limitations of this research is the study of alcohol consumption in populations that include young age groups who do not yet drink or are only starting to do so, so the evaluations and results should be analyzed with caution. Furthermore, this is a narrative rather than a systematic review, restricted to the findings of the last five years. Surveying the most current evidence on prevention programs for children and youth across different fields of action implies comparing varying program interventions, conceptual frameworks, and results, which limits the generalization of results and conclusions.

CONCLUSION

Individual studies are certainly not sufficient to conclude for or against the large-scale implementation of, for example, family, community, or web-based alcohol prevention programs in the clinical setting. In light of how alcohol use can be countered in the population, prevention science can support practice and policy by providing reliable knowledge for addiction prevention oriented to children, adolescents, and youth. Research and clinical practice must be evidence-based, and implementation must take into consideration accumulated practical knowledge and the particularities of the target group and the local context. Only in this way can a consensus be reached on the methods by which the causal connection between alcohol-related issues and consumer behavior can be established. Future research should continue to seek evidence on the most effective programs but also expand into new, under-studied fields, such as technology-based substance use prevention programs (Stinson et al., 2020) and mindfulness-based programs (MBP), which should be systematically tested in this population (Riggs and Greenberg, 2019). In addition, studies are needed to assess the quality of investigations and reviews that employ prevention programs, for example by standardizing follow-ups, to reach more effective conclusions (Shea et al., 2017).
Given the individual and social costs of alcohol use in youth, and increasingly in children, as a public health problem, it is the responsibility of the family, the school, the community, and the state to know the most current evidence on alcohol prevention programs. To this end, this narrative and critical review provides an updated synthesis of the evidence for prevention programs in the school, family, community, and web-based fields of action; the largest number of programs has been applied in schools, which ultimately carries the greatest clinical relevance. The prevention programs used in the other fields of action require further investigation.

AUTHOR CONTRIBUTIONS

PR, CL-Z, and RS-P: record review, evaluation of full-text studies for inclusion, and data extraction. RS-P: writing-original draft preparation. PR, CL-Z, and SV-G: writing-review and editing the final version. All authors have read and agreed to the published version of the manuscript.

FUNDING

The publication of this research was funded by the Particular Technical University of Loja (Ecuador). Additional funding was provided by the European Union-Next Generation EU through the Grant for the Requalification of the Spanish University System for 2021-2023 at the Public University of Navarra (Resolution 1402/2021). The funders had no role in the study design, data collection, analysis, decision to publish, or manuscript preparation.
Gravitational Waves from Phase Transitions in Scale Invariant Models

We investigate the properties of the gravitational waves (GWs) generated during a strongly first order electroweak phase transition (EWPT) in models with the classical scale invariance (CSI). Here, we distinguish two parameter space regions that correspond to the cases of (1) light dilaton and (2) purely radiative Higgs mass (PRHM). In the CSI models, the dilaton mass, or the Higgs mass in the PRHM case, in addition to some triple scalar couplings are fully triggered by the radiative corrections (RCs). In order to probe the RC effects on the EWPT strength and on the GW spectrum, we extend the standard model by a real singlet to assist the electroweak symmetry breaking and an additional scalar field $Q$ with multiplicity $N_Q$ and mass $m_Q$. After imposing all theoretical and experimental constraints, we show that a strongly first order EWPT with detectable GW spectra can be realized for the two cases of light dilaton and PRHM. We also show the corresponding values of the relative enhancement of the cross section for the di-Higgs production process, which is related to the triple Higgs boson coupling. We obtain the region in which the GW spectrum can be observed by different future experiments such as LISA and DECIGO. We also show that the scenarios (1) and (2) can be discriminated by future GW observations and measurements of the di-Higgs productions at future colliders.
Introduction

The standard model (SM) in particle physics is a successful theory to explain results at the Large Hadron Collider (LHC) [1]. On the other hand, the SM has several experimental and theoretical problems. On the theoretical side, radiative corrections (RCs) to the Higgs boson mass cause quadratic divergences in the SM, which is the origin of the so-called hierarchy problem [2]. In relation to this problem, extended Higgs models with classical scale invariance (CSI) have often been considered [3-16]. The CSI requires that all mass terms in the Lagrangian be forbidden. Therefore, the electroweak symmetry breaking (EWSB) does not occur at the tree level. However, the CSI is violated by quantum effects of new particles, so the EWSB can be realized radiatively. This mechanism is often called the Coleman-Weinberg mechanism [17]. An extension of the Coleman-Weinberg mechanism to the case with multiple scalar fields has also been discussed by Gildener and Weinberg [18]. The phenomenology of CSI extensions of the SM has been investigated in the literature, including the testability at current and future collider experiments [3-15].
Among the SM's unsolved questions, the baryon asymmetry of the Universe is an important problem for both particle physics and cosmology. The observed baryon asymmetry is expressed by [19]

η_B = (n_B − n_B̄)/n_γ,    (1.1)

where n_B, n_B̄ and n_γ refer to the number densities of baryons, antibaryons and CMB photons, respectively. In order to explain the ratio η_B, three conditions need to be satisfied simultaneously in the early Universe, known as the Sakharov conditions [20]. These conditions can be summarized as the existence of interactions that violate the baryon number, violate the C and CP symmetries, and occur out of thermal equilibrium. One of the most interesting scenarios for baryogenesis is electroweak baryogenesis (EWB), where the third Sakharov condition is satisfied via a strongly first order phase transition at the electroweak scale [21]. However, it has been shown that this scenario cannot be realized in the SM, since the CP-violating source in the Yukawa interactions is too small to produce η_B and the electroweak phase transition (EWPT) is not first order. Therefore, going beyond the SM is mandatory. In order to realize the EWB scenario, the sphaleron processes should decouple after the first order EWPT. This condition can be approximately expressed by [22-26]

υ_c / T_c ≳ 1,    (1.2)

where T_c is the critical temperature at which the potential minima (the false and true ones) are degenerate, and υ_c is the vacuum expectation value (VEV) of the SU(2)_L doublet scalar field at T = T_c. Once the condition (1.2) is satisfied, the triple Higgs boson coupling hhh significantly deviates from the SM prediction [27]. Therefore, precise measurements of the hhh coupling at future colliders are important to test the EWB scenario. In order to realize a large deviation in the triple Higgs boson coupling, large quantum corrections from new particles play an important role. Such large quantum corrections may also appear in other Higgs boson couplings like hγγ [28-35].
In addition to the hhh coupling measurement, gravitational waves (GWs) generated during a strongly first order EWPT can be used to explore new physics models [36-39]. A first order phase transition in the early Universe occurs via the nucleation and expansion of bubbles of the broken vacuum. When the broken-vacuum bubbles collide with each other, detectable GWs can be produced. For a first order EWPT, the typical peak frequency of the GW spectrum is around 10^{-3}-10^{-1} Hz [40]. Such a GW spectrum can be observed by future space-based interferometers like LISA [41] and DECIGO [42], which means that new physics models can be explored using GWs. Predictions for the GW spectrum in extended Higgs models with the CSI have been discussed in the literature [8,43-50], and the dynamics of the EWPT in the CSI models has also been discussed [8,51-55]. Recently, it has often been discussed that the first order EWPT can also be tested by primordial black hole observations [56,57]; when the first order EWPT forms primordial black holes (PBHs), those masses are around 10. The CSI implies that a CP-even scalar should be massless at tree level and becomes massive due to the quantum corrections that trigger the EWSB. In Ref.
[14], a generic model has been considered to investigate the RC effects on the EWSB, the dilaton mass, the scalar mixing, and other observables. In this model, the SM is extended by a real singlet and an extra singlet Q with multiplicity N_Q and couplings (α_Q, β_Q) to the Higgs doublet and the real singlet, respectively. The RCs are then quantified by N_Q, α_Q and β_Q in addition to the singlet VEV. For large (N_Q, α_Q, β_Q), the dilaton mass can reach 125 GeV due to the large RCs; this scenario corresponds to the case of a purely radiative Higgs mass (PRHM) [14]. Although large RCs (with large values of N_Q, α_Q, and β_Q) are beneficial for vacuum stability, they may introduce tension with the triviality bound [62,63]. This aspect needs to be investigated within the viable parameter space of this model. In this work, we investigate the effect of quantum corrections on the EWPT dynamics and on the corresponding GW spectrum using this generic CSI model with scalar mixing [14]. The triple scalar couplings are very sensitive to the quantum corrections, which makes the di-Higgs production cross sections at both the LHC and the ILC useful observables to probe the viable parameter space. According to the full LHC Run 2 data with 139 fb^{-1}, the cross section for the non-resonant di-Higgs production process is required to be lower than 3.4 times the SM prediction [64]. At the High Luminosity LHC (HL-LHC), the cross section will be probed down to 0.7 times the SM prediction [64,65]. At the International Linear Collider (ILC), it is expected that the di-Higgs production cross section can be limited to less than 2 or 3 times the SM prediction [66]. The di-Higgs production processes in extended Higgs models have been investigated in the literature [67-74]. In this paper, we show that the light dilaton and PRHM cases can be distinguished by exploiting the complementarity between collider experiments and GW observations.
The structure of this paper is as follows. In Section 2, we introduce the CSI models and discuss the EWSB and the scalar mass spectrum. In Section 3, we identify the theoretical and experimental constraints on these CSI models. In Section 4, we define the parameters characterizing the first order EWPT and the corresponding GW spectrum. Our numerical results for GW observations and collider experiments are shown in Section 5. Our conclusion is given in Section 6.

Models with Classical Scale Invariance

The CSI in quantum field theories is defined by x^µ → κ^{-1} x^µ, ψ_i → κ^a ψ_i with a = 1, 3/2 for bosons and fermions, respectively. This invariance implies that the scalar quadratic terms vanish. Therefore, the general representation of the potential with the CSI is given by [18]

V(Φ) = Σ_{i,j,k,l} λ_{ijkl} Φ_i Φ_j Φ_k Φ_l,    (2.1)

where Φ_i represents all the scalar representations. Obviously, the EWSB cannot take place at tree level, because the scalar potential in Eq. (2.1) contains only quartic terms; it requires contributions from quantum corrections that break both the electroweak and CSI symmetries at the same time. In order to achieve the EWSB in this class of models, the SM is extended by a real singlet S, in addition to other bosonic and fermionic degrees of freedom (dof) with multiplicities n_i and couplings (α_i, β_i) to the Higgs doublet and the scalar S, respectively. Here, the Higgs doublet field H and the singlet field S are written as

H = (χ^+, (υ + H + iχ^0)/√2)^T,  S = υ_S + s,

where χ^+ and χ^0 are the Goldstone bosons, and ⟨H⟩ = υ and ⟨s⟩ = υ_S are the VEVs of the doublet and singlet, respectively. In this model, the two CP-even eigenstates h_{1,2} (with m_2 > m_1) are obtained using the 2 × 2 mixing matrix with angle α. We distinguish two possibilities for the observed SM-like Higgs boson with mass m_h = 125 GeV: (1) h_2 ≡ h, and h_1 ≡ η is a light dilaton (light dilaton scenario); and (2) h_1 ≡ h, and h_2 ≡ η is a heavy scalar (PRHM scenario) [14].
The full one-loop effective potential in terms of the CP-even fields (H, s) can be written as

V_eff(H, s) = (λ_H + δλ_H) H^4 + (λ_S + δλ_S) s^4 + (ω + δω) H^2 s^2 + Σ_i n_i G(m_i^2(H, s)),    (2.3)

where δλ_H, δλ_S, δω are the counter terms, and n_i and m_i^2(H, s) are the multiplicities and field-dependent masses of the particles running in the loops, respectively. Here, the function G(r) is defined à la DR scheme, i.e.,

G(r) = r^2/(64π^2) [log(r/Λ^2) − 3/2],

with the renormalization scale taken to be Λ = m_h = 125.18 GeV. In this setup, we can eliminate the couplings λ_H, λ_S and ω in favor of the tadpole conditions and the tree-level Higgs mass, and the corresponding counter terms δλ_H, δλ_S, δω can also be eliminated using the one-loop tadpole and Higgs mass conditions, as shown in Refs. [14,15]. In the case where all field-dependent masses can be written in the form m_i^2(H, s) = a_i H^2 + b_i s^2, the counter terms simplify accordingly [14,15]. The light dilaton and PRHM cases are identified via complementary conditions involving δω(υ^2 + υ_S^2)/m_h^2 [14,15], while the condition a + b = 1 + δω(υ^2 + υ_S^2)/m_h^2 corresponds to the special case of degenerate eigenmasses m_1 = m_2 = m_h. This case is of great interest and deserves an independent study.

In our analysis, we consider a generic model where the SM is extended by a scalar singlet S to assist the EWSB, together with another boson Q with multiplicity N_Q and squared mass m_Q^2(H, s) = α_Q H^2 + β_Q s^2. Clearly, the quantum correction from the boson Q is proportional to the field multiplicity N_Q, the couplings (α_Q, β_Q) to H and S, and/or the singlet VEV υ_S.

Constraints and Predictions

The different constraints on this model have been discussed in detail in Ref.
[14]. Here, we mention the constraint coming from the total Higgs signal strength modifier µ_tot = c_α^2 × (1 − B_BSM) ≥ 0.89 at 95% CL [75], which requires the scalar mixing to satisfy s_α^2 ≤ 0.11 in the absence of invisible and undetermined Higgs decays (B_BSM = 0). Here, the RCs play a crucial role in satisfying this bound. The mixing sine can be decomposed into a tree-level part and a one-loop part, s_α = s_α^(0) + s_α^(1). In the light dilaton case, the tree-level part is suppressed for υ_S ≫ υ, so small RCs keep the constraint s_α^2 ≤ 0.11 fulfilled. In the PRHM case, however, the tree-level mixing alone makes the constraint s_α^2 ≤ 0.11 explicitly violated for υ_S ≫ υ. Interestingly, the large RCs in the PRHM case that are responsible for pushing the light CP-even eigenmass from its vanishing tree-level value to the measured 125 GeV are also responsible for generating large negative contributions to the mixing s_α^(1), which makes the condition s_α^2 ≤ 0.11 fulfilled. This point has been discussed in detail in Ref. [14] for this model, and in Ref. [15] for the SI scotogenic model.

The counter terms δλ_H, δλ_S, δω cannot take arbitrary numerical values, since they are constrained by the one-loop perturbativity conditions λ_H^{1-ℓ}, λ_S^{1-ℓ}, |ω^{1-ℓ}| < 4π. Here, the one-loop quartic couplings are defined as the fourth derivatives of the full one-loop scalar potential in Eq. (2.3) at the broken vacuum. In what follows, we consider the one-loop value of the Higgs-dilaton mixing angle, since it has been shown that the role of the RCs is crucial in driving the mixing towards the experimentally allowed region, in both the light dilaton and the PRHM cases [14]. In the CSI models, the leading term in the one-loop scalar potential is φ^4 log φ rather than φ^4, where φ can be any direction in the H-s plane. Thus, the vacuum stability conditions differ from those used in the literature, and are given by Σ_i n_i α_i^2 > 0 ∧ Σ_i n_i β_i^2 > 0 [14,15].
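The loop function G(r) and the one-loop sum over field-dependent masses can be sketched numerically. This is a minimal illustration assuming the DR-style definition of G(r) quoted in Section 2 with Λ = m_h = 125.18 GeV; the field content and multiplicities in the example are hypothetical placeholders, not the paper's benchmark points.

```python
import math

LAMBDA = 125.18  # renormalization scale Lambda = m_h in GeV, as quoted in the text

def G(r, scale=LAMBDA):
    """Loop function G(r) = r^2/(64 pi^2) * (log(r/Lambda^2) - 3/2)."""
    return r ** 2 / (64.0 * math.pi ** 2) * (math.log(r / scale ** 2) - 1.5)

def one_loop_sum(fields, H, s):
    """Sum_i n_i * G(m_i^2(H, s)) for field-dependent masses m_i^2 = a_i H^2 + b_i s^2."""
    return sum(n * G(a * H ** 2 + b * s ** 2) for (n, a, b) in fields)

# Hypothetical field content: one boson Q with N_Q = 6, alpha_Q = 0.5, beta_Q = 0.3
fields = [(6, 0.5, 0.3)]
V1 = one_loop_sum(fields, 246.0, 1000.0)  # one-loop piece at an example field point (GeV^4)
```

Note that G(r) vanishes at r = Λ² e^{3/2} and is negative below that, which is why the loop sum can lower the potential near the origin relative to the broken vacuum.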
In addition to the perturbativity and vacuum stability conditions, we discuss here the triviality bound as a theoretical constraint on our model. The triviality bound plays a significant role in setting upper bounds on couplings [76]. In particular, it can give strong constraints on models with the CSI or with a strongly first order EWPT [62,63]. Generally, the Landau pole scale is defined as the energy scale at which the perturbative description of the model breaks down, which suggests the need for a more fundamental theory or modifications at higher energy scales. In this work, we define the Landau pole scale Λ_Lan as the approximate scale Λ_Lan = min({Λ_i}) with λ_i(µ = Λ_i) = 4π, where the λ_i(µ) are the couplings of our model at the scale µ. In what follows, we impose the condition Λ_Lan < 10 TeV as the triviality bound.

In Appendix A, we present the β-functions for the renormalization group equations (RGEs) in our model in the ordinary scheme. However, these functions do not decouple the effects of heavy new particles in the RGE flow, even when our focus is on physics at a low energy scale. These non-decoupling effects are commonly known as threshold effects. Recently, it has been confirmed that such threshold effects can be naturally taken into account by employing mass-dependent β-functions [77]. In the following, we utilize the mass-dependent β-functions when discussing the RGE analysis.
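As a toy illustration of the triviality criterion, the Landau pole for a single quartic coupling with a one-loop β-function dλ/d ln µ = b λ²/(16π²) can be located analytically. The coefficient b and the boundary value below are hypothetical placeholders, not the model's actual β-functions from Appendix A.

```python
import math

def landau_scale(lam0, mu0, b):
    """Scale where lambda(mu) reaches 4*pi, given d(lambda)/d(ln mu) = b*lambda^2/(16 pi^2).

    One-loop solution: 1/lambda(mu) = 1/lambda0 - b/(16 pi^2) * ln(mu/mu0),
    so lambda = 4*pi at mu = mu0 * exp[(16 pi^2 / b) * (1/lambda0 - 1/(4 pi))].
    """
    return mu0 * math.exp((16.0 * math.pi ** 2 / b) * (1.0 / lam0 - 1.0 / (4.0 * math.pi)))

def lam_running(mu, lam0, mu0, b):
    """One-loop running coupling for the same beta-function."""
    return 1.0 / (1.0 / lam0 - b / (16.0 * math.pi ** 2) * math.log(mu / mu0))

# Hypothetical input: lambda = 2.0 at mu0 = 125 GeV with b = 18
Lam = landau_scale(2.0, 125.0, 18.0)
passes_triviality = Lam < 1.0e4  # the text's condition Lambda_Lan < 10 TeV (in GeV)
```

Larger boundary couplings push Λ_Lan down exponentially fast, which is the qualitative reason the large-(α_Q, β_Q) corner of the PRHM region is cut by the 10 TeV bound.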
As is well known, the triple Higgs boson coupling, λ_hhh = λ^SM_hhh (1 + Δ_hhh), is an important quantity to probe the strongly first order EWPT [27,78,79]. In order to obtain information about the relative triple Higgs coupling enhancement Δ_hhh, it is necessary to measure the di-Higgs production processes at the LHC or the ILC. However, in models where the SM Higgs doublet mixes with a singlet, as in our model, the di-Higgs production process involves an extra triangle Feynman diagram mediated by the extra scalar η. In this case, the di-Higgs production cross section has three independent contributions: (1) the Feynman diagrams involving only the triple scalar couplings λ_hhh and λ_hhη (σ_λ), (2) the diagrams with only pure gauge couplings (σ_G), and (3) the interference contribution (σ_Gλ) [14,80]. In our model, the di-Higgs production cross section scaled by its SM value, R, can be expressed as a combination of σ_λ, σ_G and σ_Gλ weighted by coefficients ξ_i (i = 1, 2, 3), which are defined at the CM energy √s in Refs. [14,80] in terms of the measured Higgs total decay width Γ_h, the estimated heavy scalar total decay width Γ_η, and the triple Higgs boson coupling λ^SM_hhh in the SM. We take the value of λ^SM_hhh as in Refs. [29,30]. Here, we consider the di-Higgs production processes pp → hh and e+e− → Zhh at the LHC with 14 TeV and the ILC with 500 GeV, respectively. In the following, we use the notations R_LHC and R_ILC for R(pp → hh @ LHC 14 TeV) and R(e+e− → Zhh @ ILC 500 GeV), respectively. The values of σ_λ, σ_G and σ_Gλ for these two processes are given in Table 1.
One has to notice that, in the PRHM scenario, the heavy scalar η decays into all the SM Higgs final states in addition to a di-Higgs channel. This means that the negative searches for a heavy resonance at both ATLAS and CMS can be used to constrain our model, namely: (1) a heavy CP-even resonance in channels with a pair of leptons, jets or gauge bosons, pp → H → ℓℓ, jj, V V [82-84]; and (2) a resonance in the di-Higgs production, pp → H → hh [85-87]. It has been shown in Ref. [14] that almost the full parameter space region remains allowed even when the heavy CP-even resonance searches are included. Thus, these constraints will not be considered in our work.

Gravitational Waves from a First Order Phase Transition

In order to analyze the GW spectrum generated during a strongly first order EWPT, we need to define the parameters α and β, which characterize the GWs from the dynamics of vacuum bubbles [40]. These parameters represent the latent heat and the inverse of the EWPT time duration, respectively. An accurate description of the EWPT dynamics is mandatory, which requires exact knowledge of a key physical quantity: the full one-loop effective potential at finite temperature [91]. This potential consists of the zero-temperature one-loop potential, the thermal one-loop corrections, and the so-called daisy (or ring) contribution, which represents the leading term of the higher order thermal corrections [92]. The daisy contribution can be taken into account by replacing the scalar and longitudinal gauge field-dependent masses in the zero-temperature and thermal one-loop terms by their thermally corrected values, i.e., m_i^2(H, s) → m_i^2(H, s) + Π_i(T). The thermal self-energies Π_i(T) are proportional to T^2, with coefficients involving the SU(2)_L and U(1)_Y gauge couplings g and g′ [94]. Here, λ_Q is the self-coupling constant of the boson Q. Since the thermal corrections related to λ_Q only enter through the thermal mass of the new boson Q, this correction is negligible; thus, for simplicity, we take λ_Q = 0 in our analysis.
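The thermal one-loop corrections enter through the standard bosonic thermal function J_B(m²/T²). A simple numerical sketch follows; the rectangle-rule quadrature, cutoff and step are chosen for illustration only.

```python
import math

def J_B(y2, n=4000, x_max=25.0):
    """Bosonic thermal function J_B(y^2) = int_0^inf dx x^2 ln(1 - exp(-sqrt(x^2 + y^2))),
    with y^2 = m^2/T^2, evaluated by a right-endpoint rectangle rule."""
    dx = x_max / n
    total = 0.0
    for i in range(1, n + 1):  # the integrand vanishes at x = 0
        x = i * dx
        total += x * x * math.log(1.0 - math.exp(-math.sqrt(x * x + y2))) * dx
    return total

def daisy_mass2(m2, pi_T):
    """Daisy replacement m^2(H, s) -> m^2(H, s) + Pi(T) used in the ring resummation."""
    return m2 + pi_T

# Massless limit check: J_B(0) = -pi^4/45
```

In the high-temperature (massless) limit this reproduces the free-boson result J_B(0) = −π⁴/45, and |J_B| shrinks as m²/T² grows, reflecting the Boltzmann suppression of heavy states in the plasma.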
The EWSB takes place during the transition from the symmetric vacuum (⟨H⟩ = 0) to the broken one (⟨H⟩ ≠ 0) at the nucleation temperature T_n ≤ T_c. A strongly first order EWPT occurs through tunneling between the symmetric and broken vacua, which corresponds to the nucleation of vacuum bubbles at random points in space. These bubbles expand and collide with each other until the Universe is filled by the broken vacuum (⟨H⟩ ≠ 0). When a bubble wall passes through a region of unbroken symmetry (where ⟨H⟩ = 0) in which a net baryon asymmetry has been generated by the B + L and CP violating processes, thermal equilibrium will wash out this asymmetry unless the B number violating process is suppressed inside the bubble (broken vacuum, ⟨H⟩ ≠ 0). This condition is often called the sphaleron decoupling condition and is approximately expressed by Eq. (1.2). It has been shown that the singlet plays an important role in the EWPT dynamics even though its VEV ⟨S⟩_c is absent from the condition in Eq. (1.2) [23,24]. The precise evaluation of the sphaleron decoupling condition in extended Higgs models has also been discussed in Refs. [25,26].
In what follows, we analyze the GW spectrum from a first order EWPT in this model by estimating the above-mentioned parameters α and β [40]. The parameter α is the latent heat normalized by the radiation energy density,

α = Δρ(T) / ρ_rad(T),

with ρ_rad(T) = (π²/30) g_* T⁴ the radiation energy density, where g_* is the number of relativistic degrees of freedom in the thermal plasma, and Δρ(T) = ΔV_eff − T ∂(ΔV_eff)/∂T is the released energy density, evaluated between the symmetric vacuum and the broken vacuum configuration (H(T), s(T)) at temperature T. The parameter β, which describes approximately the inverse of the time duration of the EWPT, is defined as

β = − dS_E/dt |_{t=t_n},

where S_E and Γ are the 4d Euclidean action of a critical bubble and the vacuum bubble nucleation rate per unit volume per unit time, evaluated at the time of the EWPT t_n. In the following, we use the parameter normalized by the Hubble parameter,

β/H_n = T_n d(S₃(T)/T)/dT |_{T=T_n},

where S₃(T) is the 3d action for the bounce solution. The transition temperature T_n is defined by Γ/H⁴ = 1, with the bubble nucleation rate [95]

Γ(T) ≃ T⁴ (S₃(T)/(2πT))^{3/2} exp(−S₃(T)/T).

If Γ/H⁴ cannot become larger than unity (Γ/H⁴ ≪ 1), the first order phase transition does not complete by today; thus, this condition is used as another theoretical constraint on our model. Here, we use the public code CosmoTransitions to obtain the bounce solutions [96].
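The definitions of T_n and β/H_n can be illustrated with a toy bounce action. The functional form of S₃(T)/T below is purely hypothetical (the paper uses CosmoTransitions for the real bounce solutions), and the nucleation criterion is approximated by the common electroweak-scale estimate S₃(T_n)/T_n ≈ 140 rather than solving Γ/H⁴ = 1 exactly.

```python
def s3_over_T(T, Tc=100.0, A=15.0):
    """Toy model of the bounce action S_3(T)/T, diverging as T -> Tc from below."""
    t = T / Tc
    return A * t / (1.0 - t)

def nucleation_temperature(target=140.0, lo=1.0, hi=99.9):
    """Solve S_3(T)/T = target by bisection (S_3/T is monotonically increasing here)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if s3_over_T(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def beta_over_H(Tn, eps=1e-5):
    """beta/H_n = T_n * d(S_3/T)/dT at T = T_n, by central differences."""
    return Tn * (s3_over_T(Tn + eps) - s3_over_T(Tn - eps)) / (2.0 * eps)

Tn = nucleation_temperature()  # close to Tc for this toy action
```

Because S₃/T blows up as T → T_c, nucleation happens only after some supercooling (T_n < T_c), and the steepness of S₃/T at T_n directly sets β/H_n, i.e., how brief the transition is.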
The GWs from a first order phase transition can be produced via three mechanisms: (1) collisions of bubble walls and shocks in the plasma, Ω_φ h² [39]; (2) compressional waves (sound waves), Ω_sw h² [97]; and (3) magnetohydrodynamic (MHD) turbulence in the plasma, Ω_tur h² [98]. Therefore, the stochastic GW background can be approximately expressed by

Ω_GW h² ≃ Ω_φ h² + Ω_sw h² + Ω_tur h².    (4.10)

The importance of each contribution depends on the EWPT dynamics, especially on the bubble wall velocity v_b. In this work, we take v_b = 0.95 as a free parameter and focus on the contribution from the sound waves in the plasma, which is the dominant one among the GW sources. According to the numerical simulations, a fitting function for the GW spectrum from the sound waves is given by [99]

Ω_sw h²(f) = Ω_sw^peak h² (f/f_sw)³ [7/(4 + 3(f/f_sw)²)]^{7/2},

where the peak amplitude is given by

Ω_sw^peak h² = 2.65 × 10⁻⁶ (β/H_n)⁻¹ (κα/(1 + α))² (100/g_*)^{1/3} v_b,

and the peak frequency is expressed as

f_sw = 1.9 × 10⁻⁵ Hz (1/v_b) (β/H_n) (T_n/100 GeV) (g_*/100)^{1/6},

with κ the efficiency factor, which characterizes how much of the vacuum energy is converted into the fluid motion [100].

Numerical Results

In our analysis, we consider the values N_Q = 6, 12, 24 for the multiplicity and υ_S = 500 GeV, 1 TeV, 3 TeV for the singlet VEV, while allowing the couplings to lie within the perturbative regime. In addition, we consider the constraints: (1) vacuum stability; (2) the mixing angle s_α² ≤ 0.11; (3) the one-loop perturbativity constraints λ_H,S^{1-ℓ}, |ω^{1-ℓ}| < 4π; and (4) the completion condition of the phase transition. As is well known, the GWs from a first order EWPT are useful to probe the structure of the Higgs potential [8,43]. Since the RCs play the key role in this model, one expects that future space-based interferometers would be able to detect the GWs generated during the first order EWPT. In order to understand the impact of the quantum corrections on the EWPT strength, we show the ratio υ_c/T_c in the palette of Fig. 1 for different values of (α_Q, β_Q), N_Q and υ_S. In all panels of Fig.
1, the upper parameter space region corresponds to the PRHM scenario, and the lower region to the light dilaton case. The regions colored in cyan represent the parameter region where the GW spectra from the first order EWPT may be detected at DECIGO [42]. One notices that not only the dilaton case but also the PRHM scenario predicts testable GW spectra. The magenta region in Fig. 1 is excluded by the completion condition of the first order EWPT. Interestingly, in Fig. 1 there is a parameter space region where the GW spectra can be tested at future space-based interferometers even if the EWPT is not strongly first order, i.e., even if the condition in Eq. (1.2) is not satisfied. One remarks that satisfying the condition in Eq. (1.2) requires small α_Q and β_Q. Furthermore, the triviality bound (Λ_Lan < 10 TeV) excludes a portion of the parameter space corresponding to large values of α_Q and β_Q in the PRHM case. This observation has also been noted in extended scalar sector models, where more parameter space is excluded when the triviality bound is imposed at a higher scale [101,102]. On the other hand, as we will explain later, the di-Higgs production cross section can be large for relatively large β_Q values. This indicates a complementarity between the GW observations and collider experiments in testing our model; the details of the di-Higgs production will be discussed later. According to Fig. 1, as N_Q gets large, the parameter space region satisfying the condition in Eq. (1.2) becomes large in the PRHM scenario, whereas in the light dilaton case such a parameter space region is narrowed down, which means that the PRHM scenario is preferred in order to realize the EWB scenario within the CSI models. From the results in Fig. 1, one also learns that the scenario with degenerate masses m_1 = m_2 = m_h is possible; we do not consider this special case in our analysis for the sake of simplicity. As shown in Fig.
3, the peak height of the GW amplitude lowers as the value of β_Q gets smaller in general. Here, one has to mention that for small υ_S values, the light dilaton scenario is more likely to be detectable than the PRHM one. The qualitative reason is as follows. The strength of the first order phase transition is generally related to the potential difference at zero temperature between the symmetric vacuum (H, s) = (0, 0) and the broken vacuum (H, s) = (υ, υ_S). This potential difference is approximately given by Eq. (5.1), where the loop corrections from the SM particles are neglected. According to Eq. (5.1), the parameters N_Q, α_Q and β_Q should be small to make a high potential barrier, which means that the EWPT is strongly first order in the parameter space region with small N_Q, α_Q and β_Q. Generally, in models with singlet scalar fields, a non-trivial vacuum (0, s ≠ 0) can be preferred over the origin (0, 0). If such a vacuum exists, a strongly first order EWPT can be easily realized [23]. In order to avoid the existence of such a non-trivial vacuum, the condition in Eq. (5.2) should be satisfied. Since our model has a Z_2 symmetry s → −s even at the quantum level, the condition in Eq. (5.2) is always satisfied. Therefore, the approximation in Eq. (5.1) is meaningful in our model. We note that the typical values of α and β we found are around 0.1-1 and 100-10000, respectively. It is well known that the bubble collision contribution can be dominant when α ≫ 1 [99]. Our results indicate that the dominant source of the GWs in our scenario is the sound wave contribution.

As noted above, the parameter space region satisfying the condition in Eq. (1.2) may imply a large deviation in the cross section for the di-Higgs production from the SM prediction. To confirm this, we present the di-Higgs production cross sections at the LHC and the ILC in Fig. 4 and Fig.
5, respectively. These figures show the ratios R_LHC and R_ILC, as defined in Eq. (3.1), for different values of N_Q and υ_S, and are produced by taking into account the different theoretical and experimental constraints mentioned above. According to Fig. 4 and Fig. 5, large α_Q values are preferred to realize a large deviation in the cross section of the di-Higgs production from the SM prediction. As mentioned earlier, in the light dilaton case, the R ratios cannot be large due to the constraints from the mixing angle and the completion condition for the phase transition. This means that we can distinguish the PRHM and the light dilaton scenarios by combining the measurements of R_LHC and R_ILC with the GW observations. Comparing Fig. 4 with Fig. 5, the value of R_ILC is larger than R_LHC. One has to mention that recent negative LHC searches for the di-Higgs signal established an upper bound on the di-Higgs production cross section, σ_hh^LHC < 112 fb [64], which excludes significant regions of the parameter space, especially in the case with υ_S = 3 TeV. These regions are presented in Fig. 5 as the black dashed regions. Thus, it is expected that the ILC may impose more severe constraints on the parameter space region with a large deviation in the R ratios. In Fig. 6, we show the constraints on the model from current and future collider experiments such as the LHC, HL-LHC and ILC. The orange and gray regions can be explored by the LHC [64] and HL-LHC [65], respectively. The yellow regions are within the reach of the ILC with √s = 500 GeV [66], while the blue regions cannot be explored by the di-Higgs production measurement at the LHC, HL-LHC or ILC. On the other hand, the GW observations can be used to probe the models within the blue region.
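Since the sound-wave fitting function referenced earlier (from the work cited as [99]) is not reproduced in the text, the sketch below evaluates the numerical fit commonly used in the literature for the sound-wave contribution. All parameter values here (α, β/H, T*, g*, κ) are illustrative assumptions, not the paper's benchmark points.

```python
import math

def omega_sw_h2(f, alpha=0.1, beta_over_H=100.0, T_star=100.0,
                v_b=0.95, g_star=100.0, kappa=0.5):
    """Sound-wave GW spectrum Omega_sw h^2(f) from the standard numerical
    fit; f is the present-day frequency in Hz, thermal parameters are
    illustrative placeholders."""
    # Peak frequency, red-shifted to today (Hz)
    f_peak = 1.9e-5 * (1.0 / v_b) * beta_over_H \
             * (T_star / 100.0) * (g_star / 100.0) ** (1.0 / 6.0)
    # Peak amplitude, controlled by the efficiency factor kappa
    amp = 2.65e-6 * (1.0 / beta_over_H) \
          * (kappa * alpha / (1.0 + alpha)) ** 2 \
          * (100.0 / g_star) ** (1.0 / 3.0) * v_b
    # Spectral shape, maximal at f = f_peak and falling off on both sides
    x = f / f_peak
    return amp * x ** 3 * (7.0 / (4.0 + 3.0 * x ** 2)) ** 3.5
```

Plotting this function over a frequency grid reproduces the characteristic single-peaked spectra compared against the LISA/DECIGO sensitivity curves in Fig. 3; larger β/H pushes the peak to higher frequencies and lowers its height, consistent with the β_Q dependence discussed above.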
Even if the PRHM scenario cannot be distinguished from the dilaton scenario by the di-Higgs production measurements, we can determine which scenario is preferred by combining collider and GW results. For instance, we focus on the two BPs with N_Q = 12 and υ_S = 3 TeV shown in Fig. 2. The predicted GW spectra of these BPs are shown in the panel with N_Q = 12 and υ_S = 3 TeV in Fig. 3. One remarks that the peak positions of the GWs are close; nevertheless, the GW peak position in the PRHM scenario differs from that in the light dilaton scenario. This implies that we are able to determine which scenario is preferred by observing the GW spectra even if a large cross section di-Higgs signal is not observed at future collider experiments. Therefore, one can determine whether the PRHM scenario or the light dilaton scenario is realized by utilizing the complementarity of collider experiments and GW observations.

We provide a commentary on the Landau pole scale in our scenario. It has been numerically confirmed that the typical Landau pole scale in our model is around O(1) TeV when using the beta functions outlined in Appendix A. In Table 2, the Landau pole scales for the two BPs considered in Fig. 3, obtained with mass-dependent beta functions, are presented. The results in Table 2 indicate that the typical values of Λ_Lan in our model are around O(10) TeV. This finding suggests that our phenomenological analyses satisfy perturbativity. For the PRHM scenarios with υ_S = 500 GeV, the Landau pole scale can be in the O(1) TeV range. However, such BPs are constrained by the triviality bound Λ_Lan < 10 TeV.
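As a rough illustration of how a Landau pole scale such as those in Table 2 arises, the sketch below solves a generic one-loop running dλ/d ln μ = b λ²/(16π²) both in closed form and by direct integration. The coefficient b and the input coupling are hypothetical placeholders, not the model's actual (mass-dependent) beta functions.

```python
import math

def landau_pole(lam0, b, mu0=1000.0):
    """Closed-form Landau pole of dlam/dlnmu = b*lam^2/(16*pi^2):
    lam(mu) = lam0 / (1 - b*lam0*ln(mu/mu0)/(16*pi^2)), which diverges at
    mu = mu0 * exp(16*pi^2 / (b*lam0)). mu0 and the result are in GeV."""
    return mu0 * math.exp(16.0 * math.pi ** 2 / (b * lam0))

def landau_pole_numeric(lam0, b, mu0=1000.0, lam_max=1e6, steps=200000):
    """Forward-Euler integration of the same RGE, stopping once the
    coupling blows up; cross-checks the closed-form estimate."""
    t, lam = 0.0, lam0                      # t = ln(mu / mu0)
    dt = math.log(landau_pole(lam0, b, mu0) / mu0) / steps * 1.5
    while lam < lam_max:
        lam += b * lam ** 2 / (16.0 * math.pi ** 2) * dt
        t += dt
    return mu0 * math.exp(t)
```

With the illustrative inputs lam0 = 4 and b = 10 at μ0 = 1 TeV, both estimates give a pole of order tens of TeV, the same order as the Table 2 entries; larger couplings or multiplicities drive the pole down toward the O(1) TeV triviality bound.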
Before closing this section, we would like to comment on the possibility of projecting our results onto other renormalizable new physics models. In this work, we considered a generic CSI model in which the quantum effects that trigger the EWSB lead to the EWPT dynamics and GWs discussed above. These quantum corrections lead to the same behavior whether they arise from single or multiple field contributions that are purely bosonic, or from both bosonic and fermionic contributions. However, in renormalizable new physics models, additional model-dependent theoretical and experimental constraints should be considered, and the viable parameter space is significantly affected. For instance, if such a new physics model suggests a DM candidate and/or proposes a solution to the smallness of neutrino masses, the new masses and couplings in the model would be severely constrained not only by the motivating requirements (the DM relic density and/or neutrino mass smallness) but also by other constraints such as lepton flavor violating processes and the Higgs invisible and di-photon decays. For instance, in the current generic model, if the singlet Q is considered to be U(1)_Y charged, a significant part of the parameter space (α_Q, β_Q, N_Q, υ_S) would be excluded by the bound from h → γγ. Therefore, one expects that the parameter space regions where the light dilaton or PRHM scenarios can be realized are, in general, narrowed down when specific new physics models are considered.
Conclusion

In this work, we have investigated the possibility of detectable GWs produced during a strongly first order EWPT within a class of models with the CSI. In order to obtain model independent results, we have considered the SM extended by a real scalar singlet that assists the EWSB, and an additional scalar field with multiplicity N_Q and couplings α_Q and β_Q that accounts for the RCs. By scanning over the model's free parameters (N_Q, α_Q, β_Q and υ_S), the impact of the RCs on the EWPT and GW observables has been estimated. In particular, we have focused on the difference between the light dilaton and PRHM scenarios. We have analyzed the properties of the GWs from a strongly first order EWPT in this generic CSI model and have shown that the peaked GW spectrum might be detected at future space-based interferometers such as DECIGO even if the condition υ_c/T_c > 1 is not satisfied. As a result, a wide parameter space region in the CSI model may be tested by utilizing the GW observations. Also, we have analyzed the cross section of the di-Higgs production processes at the LHC and the ILC. As we have shown, the cross section can be more than three times larger than the SM prediction in the PRHM scenario. As shown in Fig. 6, the parameter regions with such a large deviation in the di-Higgs production process can be explored at the LHC, HL-LHC and ILC. In addition, we have shown that GW observations are useful to distinguish the light dilaton scenario from the PRHM scenario even if collider experiments cannot observe the di-Higgs production signal. This fact indicates that we may be able to determine whether the light dilaton scenario or the PRHM scenario is preferred by utilizing the complementarity of collider experiments and GW observations.
A Renormalization Group Equations

In this appendix, the renormalization group equations for the couplings in our model are discussed. The beta functions for each coupling at the one-loop level carry the conventional 16π² normalization; here, g′, g and g_3 are the gauge couplings of the U(1)_Y, SU(2)_L and SU(3)_C gauge groups, respectively, and y_t is the Yukawa coupling of the top quark. It is confirmed that these equations are consistent with previous work if we take N_Q = 1 [107].

Figure 2. The relative difference between the critical and the nucleation temperature values as a function of the couplings α_Q (left) and β_Q (middle) and the singlet VEV υ_S (right). The phase transition in our model clearly exhibits supercooling.

Figure 3. The predicted GW spectra for the two BPs with different values of the multiplicity N_Q and the singlet VEV υ_S. The red and green lines correspond to the light dilaton scenario and the PRHM scenario, respectively, shown as the green and red stars in Fig. 1. We also show the sensitivity curves of LISA [41], DECIGO [42], TianQin [103] and Taiji [104].

Figure 4. The parameter dependence of the value of R_LHC. The cyan region can be explored by the GW observations at DECIGO. The magenta region is excluded by the completion condition for the phase transition in Eq. (4.8). A large deviation in the cross section for the di-Higgs production prefers large α_Q. The black region is constrained by the triviality bound Λ_Lan < 10 TeV.

Figure 5. The parameter dependence of the value of R_ILC. The cyan region can be explored by the GW observations at DECIGO. The magenta region is excluded by the completion condition for the phase transition in Eq.
(4.8). The black shaded region shows the current constraint on the di-Higgs production from the LHC results. The black region is constrained by the triviality bound Λ_Lan < 10 TeV.

Table 2. The Landau pole scale values with the mass-dependent beta functions. Here, (R) and (G) denote the values for the BPs considered in Fig. 3.

N_Q | υ_S = 500 GeV             | υ_S = 1 TeV               | υ_S = 3 TeV
 6  | (R) 31.0 TeV (G) 7.05 TeV | (R) 19.8 TeV (G) 26.0 TeV | (R) 14.9 TeV (G) 53.0 TeV
12  | (R) 34.0 TeV (G) 6.66 TeV | (R) 37.6 TeV (G) 40.3 TeV | (R) 17.4 TeV (G) 92.8 TeV
24  | (R) 27.5 TeV (G) 6.19 TeV | (R) 29.7 TeV (G) 46.6 TeV | (R) 19.3 TeV (G) 96.9 TeV

Figure 6. The parameter space regions explored by the LHC, HL-LHC and ILC. The gray, orange and green regions are explored by the HL-LHC, LHC and ILC, respectively. DECIGO can explore the cyan shaded region. The magenta region is constrained by the completion condition for the phase transition. The blue regions are the unexplored parameter space regions. The black region is constrained by the triviality bound Λ_Lan < 10 TeV.

−5 times the solar mass. This implies that we can test the first order EWPT by observing such PBHs at current and future microlensing experiments like Subaru HSC [58], OGLE [59], PRIME [60] and the Roman telescope [61].
Do consumers really recognise a distinct quality hierarchy amongst PDO sparkling wines? The answer from experimental auctions

Purpose – Consumer likeability and willingness to pay (WTP) for two Italian sparkling wines (Conegliano Valdobbiadene Prosecco DOCG and Prosecco DOC) are evaluated through a non-hypothetical Becker-DeGroot-Marschak (BDM) auction during a wine-tasting experiment. The purpose of this paper is to estimate individual WTP and relate it to likeability for both wines, with and without supplying additional information on their features.
Design/methodology/approach – Data were collected in May-June 2019 from a sample of 99 consumers in Northern Italy. A non-hypothetical BDM auction in a wine-tasting experiment was implemented.
Findings – The results show that the additional information plays a significant role in widening the WTP gap between the two geographical indications (GIs), while the blind tasting narrows this gap. The "superiority" of the Conegliano Valdobbiadene Prosecco DOCG is confirmed but relies more on its better reputation than its better taste.
Research limitations/implications – The authors are aware of two main limitations in the study. The first is the territorial composition of the consumer sample. The second is the selection of the Prosecco bottles used in the experiment. The results are considered pioneering and need to be verified by additional experiments with different consumer and bottle samples.
Practical implications – Promotional suggestions for the Tutelary Consortia of the two GIs stem from these results.
Originality/value – To the best of the authors' knowledge, no previous study has related likeability and WTP for similar GI wines produced in contiguous areas. Moreover, the current research has applied a non-hypothetical BDM auction in a wine-tasting experiment.

Introduction

Today, the European Union's wine producers are facing severe worldwide competition, in which the increased number of Protected Denominations of Origin (PDOs), sub-appellations and the range of wines (i.e.
blend and varietal wines, types and versions) play crucial roles in shaping wine markets (Johnson and Bruwer, 2007). Over the last 20 years, the number of EU wine quality labels has risen noticeably, and this trend is expected to continue in the coming years. While the creation of new PDOs can generate both consumers' preference heterogeneity and producers' competitive advantages (Porter, 2008; Caracciolo et al., 2016), the introduction of new appellations can also produce conditions in which consumers cannot distinguish different schemes and fail to attain value-added wines, at least in the short run (Aprile et al., 2009; Resano et al., 2012).

Experimental auctions (EA) are widely used to measure consumers' preferences in a non-hypothetical scenario, with participants facing real economic incentives to disclose their real preferences (Corrigan and Rousu, 2006; Gracia et al., 2011; Lewis et al., 2016; Lusk, 2003; Shogren et al., 1994). Some authors (Lusk and Hudson, 2004) have demonstrated the usefulness of EA as a valuable tool to support policymakers in their marketing decisions, whether in public institutions or private firms.

The paper is structured as follows. In the first section of the article, the literature background is given. The second section illustrates materials and methods, while in the third section the major findings from the data analysis are presented. Finally, the concluding discussion of the work is reported.

Literature review

2.1 Consumers' preference for sparkling wines

The previous literature has largely shown that consumers have heterogeneous preferences with respect to sparkling wine (Caracciolo and Furno, 2020): factors such as consumer demographic characteristics and psychological attitudes may largely shape preferences and consumption behaviour for sparkling wines (Zepeda and Deal, 2009).
For example, as concerns demographic characteristics, young Korean wine consumers (aged 20-29 years) have shown a stronger preference for sparkling wines than older consumers (Lee et al., 2005). These results have been confirmed by more recent research conducted in the North American market, where some authors have shown differences in sparkling wine consumption in Canada between genders and generations (i.e. millennial and older consumers) (Bruwer et al., 2011, 2012). The study by Charters et al. (2011) confirmed the effect of gender on consumer preferences, showing transcultural similarities. Sparkling wines were considered women's drinks, in line with previous findings by Hoffman (2004), who considered that women are more likely to drink sparkling wines than men. Indeed, in the purchasing decision-making process, socio-demographic and cultural characteristics may play a mediating role between consumer psychological attitudes and the consumption of sparkling wines (Zepeda and Deal, 2009). For instance, a study targeting four English-speaking countries (United States, United Kingdom, Australia and New Zealand) has provided empirical evidence that the cultural differences of young consumers may influence the perception of Champagne and sparkling wines (Velikova et al., 2016). Using socio-demographic covariates and attitudinal scores, Thiene et al. (2013) identified WTP patterns concerning differences between Prosecco PDOs and PGI, while the research of Olarte et al. (2017) identified the role of social norms in shaping the purchasing intentions of sparkling wine consumers, although with less importance than other factors such as sensory characteristics and price. Social norms as well as other psychological factors are important drivers, as argued in a recent study conducted in Australia by Verdonk et al. (2017), which showed that positive social image, reputation and symbolism are particularly relevant for Champagne consumption.
The above-mentioned literature shows the influence of consumer characteristics on consumer purchasing behaviour for sparkling wines. However, product characteristics are also becoming increasingly relevant, since the sparkling wine market is becoming increasingly differentiated in terms of intrinsic and extrinsic attributes, quality, complexity and price range. Along these lines, Culbert et al. (2017) and Vecchio et al. (2019) focussed on the role of production methods (Charmat or traditional method) in affecting the sensory profile and therefore the quality perception of sparkling wines. Similarly, Thiene et al. (2013) highlighted the importance of certification of origin and type of production (sparkling and semi-sparkling) in affecting Prosecco choices. A few studies have highlighted the role played by region or country of origin and their supporting certifications for sparkling wines. Rossetto and Gastaldello (2018), for instance, identified a positive effect of the PDO certification reputation in influencing Prosecco consumers' loyalty. Similarly, Chamorro et al. (2015) considered multiple origin designations of Cava sparkling wines and demonstrated that consumers' preference structure is largely influenced by the area of origin. In particular, the study showed that when consumers recognise the differentiation in denominations of origin, they show higher involvement with the product in terms of consumption frequency and purchasing value. The consumers' recognition of quality attributes is a crucial issue since, as the next paragraph will illustrate, consumers may often misperceive the information signal associated with GIs.

Consumers' perception towards geographical indications

The "quality perception gap" between consumers and producers represents a crucial issue to be investigated for the well-functioning of any market (Steenkamp, 1990).
With respect to wine, this topic mainly involves how the GIs are recognised and perceived as credible by consumers. In this regard, Caracciolo et al. (2016) empirically proved that consumers' appreciation of wines increases as the level of origin designation rises from lower (PGI) to higher quality (DOCG). This result is largely consistent with the findings of other studies (Cembalo et al., 2014), although there are many exceptions: for instance, Saenz-Navaja et al. (2014) showed that while wine geographical origin is one of the main quality cues for less-involved consumers, more-involved consumers may use a wider range of cues for identifying the highest quality wines. In general terms, the longer the GI history, the higher its awareness and its positive impact on consumers' choice (Costanigro et al., 2017). However, reforms of GI systems have regularly occurred with the final aim of increasing vertical differentiation, while consumers may be confused or misled, since only a smaller proportion of them are generally prepared for an ever-increasing variety of wine choices (Johnson and Bruwer, 2007; Teuber, 2011). For instance, Costanigro et al. (2019) demonstrated that the introduction of an upper-tier quality geographic classification (i.e. Chianti Classico's Gran Selezione wines) increased the perceived quality of the new product while, at the same time, it decreased the quality perception of lower-tier wines. Similarly, a change in the hierarchical levels within the Burgundy GIs might generate significant changes in consumers' perception of the quality levels. In particular, the promotion of medium-tier wines to higher quality wines is beneficial for those wines, while the loss is limited for wines at the other levels (Saidi et al., 2020). These results were confirmed by Gokcekus and Finnegan (2017) by analysing the impact of introducing new sub-divisions within Oregon's Willamette Valley American Viticultural Area.
By means of a non-hypothetical Becker-DeGroot-Marschak (BDM) auction in a wine-tasting experiment, this paper contributes to this debate (Galletto, 2005; Scarpa et al., 2009) by investigating whether the quality signalling within the different Prosecco sparkling intra-PDO certifications is effective or, in other words, whether consumers appreciably recognise a distinct quality hierarchy amongst the Prosecco PDOs introduced by the 2009 reform.

Materials and methods

Consumers' preferences for the different Prosecco PDOs are elicited here using an EA within a non-hypothetical experiment. In particular, preferences are measured in monetary metrics, in terms of individuals' WTP, by using the BDM mechanism in a mixed within/between-subjects experimental design (Charness et al., 2012; Rousu et al., 2004). By using monetary metrics and observing effective purchases, consumers are fully incentivised to truthfully reveal their real preferences. Within the BDM procedure, participants simultaneously presented an offer price in a closed envelope. Then, a sale price was randomly drawn from a uniform distribution ranging from three to ten euros in increments of €0.50. This range was unknown to the participants. Any participant who provided an offer price greater than the sale price received the product by paying the sale price. Because the sale price was drawn at random, participants were informed that it was in their interest to offer the real price that they were willing to pay for the products. This mechanism is incentive-compatible, since bidders have no reason to underestimate their real WTP: the sale price is determined by a random drawing and not by the participants themselves (Shogren et al., 1994; Becker et al., 1964; Corrigan and Rousu, 2006).
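The incentive property of the BDM rule described above can be verified directly: for any grid of possible sale prices, bidding one's true value is (weakly) optimal in expectation. The sketch below uses the three-to-ten euro grid in €0.50 steps; the true value of €6 is a hypothetical example.

```python
def bdm_expected_surplus(bid, value, prices):
    """Expected consumer surplus under the BDM rule: the sale price is
    drawn uniformly from `prices`, and the bidder buys (paying the sale
    price) only when bid >= sale price."""
    return sum(value - p for p in prices if bid >= p) / len(prices)

prices = [3.0 + 0.5 * k for k in range(15)]       # €3.00 ... €10.00 grid
value = 6.0                                        # hypothetical true WTP
surpluses = {b: bdm_expected_surplus(b, value, prices) for b in prices}

# Truthful bidding is weakly optimal: no other bid yields higher surplus.
assert surpluses[value] == max(surpluses.values())
```

Overbidding risks buying above one's value, underbidding forfeits profitable purchases, so the participant has nothing to gain from misreporting; this is exactly the argument the experimenters conveyed to the bidders.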
The BDM procedure is widely used by food researchers (Caracciolo et al., 2019) and is easily implemented, particularly within a real market (such as shops and restaurants), which allows the experimenters to carry out a study applying a random sampling method (Lusk, 2003). Amongst the main disadvantages is that participants in BDM auctions may tend to deviate from their true WTP (Bohm et al., 2004) due to the so-called anchoring effect. The distortion occurs because participants may refer to other people's WTP, as participants can use this information to adjust their own evaluations. Therefore, as mentioned above, the researcher should try to avoid reference prices (Drichoutis et al., 2008; Harrison and List, 2004). Moreover, consumers' expected liking and informed liking for the different Prosecco PDOs are collected. Following the criteria established in Harrison and List (2004), the main characteristics of the experiment will be illustrated in the next sub-paragraph, including the characteristics of the participants and of the evaluated wines, the rules of the BDM auction, the activities carried out by participants and the information provided.

Participant characteristics

Overall, 99 sparkling wine consumers were recruited to participate in an EA at the University of Padova's sensory analysis room (UNI EN ISO 8589:2014) in Conegliano (a small town to the north of Treviso), excluding non-wine drinkers (Depositario et al., 2009). The sample was almost entirely formed of participants living in the Veneto region, split between the PS-DOCG (49%) and P-DOC (51%) areas (Table 1). The EA was conducted in 2019. Amongst the participants, there was a higher proportion (39% of the total) of young people (aged 18-25 years) compared to the other age groups, at 29% (aged 25-50 years) and 31% (older than 50 years), respectively. In the sample, there were more men (60%) than women.
The cohort of frequent drinkers was the most prominent: 23% consumed wine habitually (every day) and 56% two or three times a week, whereas the occasional drinkers (17%) and infrequent drinkers (3%) accounted for lower percentages. Most of the participants (81%) had not received specific training on wine.

Product characteristics

In accordance with the Consortium's sparkling specifications and their wine production features, 14 Prosecco PDO producers with similar characteristics (i.e. own vineyards and a considerable direct sales share) and wines were selected to represent the wide heterogeneity of Prosecco production: seven sparkling wines from the P-DOC and seven sparkling wines from the PS-DOCG. The selection of the wines for the two Prosecco PDOs was planned by considering different producers within each Prosecco PDO level, to avoid the producer effect (Saidi et al., 2020). Each producer provided 15 bottles of sparkling wine (standard format of 0.75 l) made from Glera grapes (100%). In particular, the sparkling brut (non-vintage) line was used in the experiment, given consumers' higher familiarity with this specific product line, which is characterised by low residual sugar (between 6 and 12 g per litre) (Lusk and Shogren, 2007; Plott and Zeiler, 2005; Zhao and Kling, 2004). Table 2 summarises the characteristics of the 14 wines chosen for the experiment. Although, on average, the residual sugar of the P-DOC and PS-DOCG wines is the same, the latter has an average price that is 56% higher. The difference in price between the two typologies reflects the current market [2]. The front label, indicating the Prosecco PDO certification and the producer brand, and the back label were hidden during the auction.

The study design

The study was based on a mixed within/between-subjects design without any deceptive practice or unfaithful communication of information.
As discussed in the second paragraph, the study assumes that, holding all other things constant, consumers' preferences can be influenced by taste likeability and PDO information (Lange et al., 2002). For this reason, participants joined three consecutive rounds, in each of which they were asked to bid on two Prosecco wines, a P-DOC and a PS-DOCG. However, the rounds differed in the amount and type of information participants received. In the first round, the participants were asked to indicate their bid based only on their background knowledge of Prosecco PDO wines and without tasting the product (round 1: no taste, no additional information). In the second round, the participants received more information on the two products but did not taste them (round 2: additional information) and then indicated their WTP. In the third and last round, participants blind-tasted the wines (round 3: blind tasting) and then indicated their offer price for the two wines and their overall liking for each wine. To summarise, each participant submitted six bids (2 wines × 3 rounds) and, when they tasted the wine, also reported a hedonic rating in terms of overall liking. The detailed information provided to the participants on the two Prosecco denominations is shown in Table 3 (Roth et al., 1995), while overall liking was measured through a 9-point hedonic categorical scale with the following anchors: "I find it extremely unpleasant" (= 1), "I find it very unpleasant" (= 2), "I find it unpleasant" (= 3), "I find it slightly unpleasant" (= 4), "It leaves me indifferent" (= 5), "I find it slightly pleasant" (= 6), "I find it pleasant" (= 7), "I find it very pleasant" (= 8), and "I find it extremely pleasant" (= 9). Participants received cash compensation of €10. However, as this might overestimate WTP values, participants were asked to indicate how they would spend the cash in the near future.
This approach minimises the windfall or house money effect (Lombardi et al., 2019). Half of the sample played the last two rounds in reverse order (round 3, blind tasting, before round 2, information) to control for any potential order effects. At the beginning of the experiment, participants were given a questionnaire to collect essential consumer socio-demographic information (age, gender, origin), wine knowledge, and consumption attitudes and habits. Specifically, consumers were asked about their specific training in wine, extracurricular knowledge, where they usually buy wine and their intention to improve their wine knowledge and tasting skills. To prevent collusion between participants, no form of communication was permitted amongst the bidders during the auction. To avoid the affiliation effect, price feedback was not provided to participants in the three rounds (Lusk and Shogren, 2007). Moreover, the wealth effect was controlled for by randomly drawing only one wine in one round as binding. The careful explanation that the money provided to the bidders in the auction represents a fee linked to the cost of participation helped minimise the windfall effect (Carlsson et al., 2013). Problems concerning the order of presentation of the wines were avoided by randomisation (List et al., 2011). The procedure was repeated seven times, involving a total of 99 participants. As shown in Figure 1, each of the seven sessions was divided into seven major phases, requiring approximately 45 min of participation.

Results

Table 4 exhibits the mean WTP of the two Prosecco PDO alternatives. The comparison of means showed that, in the reference round, when participants did not receive any information in addition to what they already knew about the GIs and had not tasted the wines, the PS-DOCG was the most preferred. After receiving the additional information, participants still reasonably preferred the PS-DOCG, accompanied by a systematic decrease in the WTP for the P-DOC.
In the blind tasting round, the mean WTP remained higher for the PS-DOCG than for the P-DOC, but participants' bids switched back to higher means for the latter than in the first round. The representation of participants' preferences across rounds makes these outcomes explicit (Figure 2). The seven phases shown in Figure 1 were as follows:
(1) Selection of wines (No. 14): the Prosecco PDO ("Treviso") and Conegliano Valdobbiadene Prosecco Superior PDO wine samples were identified for both production areas.
(2) Recruitment and selection of participants (No. 99): administration of a short questionnaire; socio-demographic characteristics were considered in sampling people.
(3) Preparation of the BDM auction: administration of the consent form and the economic incentive (10 euros) as a compensatory fee for participating in the auction (45 minutes); preparation of the general procedure, the wines and a uniform format for the bidders.
(4) BDM auction, training: the general procedure for the elicitation of WTP was explained to the participants so that they were fully informed. The procedure and field context were made familiar to the individuals through a trial with a chocolate bar. Participants were asked not to communicate with each other and to be honest in their judgements, while the given responses would be checked during the auction.
(5) BDM auction, rounds. First round: participants were asked to indicate their maximum WTP for the two PDO sparkling wines. Second round (following the between-subject and within-subject design): each participant received more information about the two products; the WTP for the two denominations was requested. Third round (following the between-subject and within-subject design): participants then tasted the two wines without any sort of information; the WTP and hedonic likings for the two denominations were requested.
(6) BDM auction, randomisation: one round, one product and one price were randomly drawn as binding.
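The randomisation steps above (half of the sample reversing the last two rounds, and a single binding round and wine drawn at the end) can be sketched as follows; the function name and the id-parity rule used to assign the reversed order are illustrative assumptions, not the authors' actual procedure.

```python
import random

ROUNDS = ["no information", "additional information", "blind tasting"]
WINES = ["P-DOC", "PS-DOCG"]

def session_plan(participant_id, rng=None):
    """Round order for one participant plus the binding (round, wine) draw.

    Every participant starts with the no-information round; half of the
    sample (here: odd ids, an illustrative rule) plays the last two rounds
    in reverse order to control for order effects. Only the randomly drawn
    binding bid is actually settled, which controls the wealth effect.
    """
    rng = rng or random.Random(participant_id)
    if participant_id % 2 == 0:
        order = [ROUNDS[0], ROUNDS[1], ROUNDS[2]]
    else:
        order = [ROUNDS[0], ROUNDS[2], ROUNDS[1]]
    binding = (rng.choice(order), rng.choice(WINES))
    return order, binding
```

Settling only one randomly drawn bid keeps each of the six bids incentive-compatible on its own, since a participant cannot hedge one bid against another.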
BDM auction, assignment of the incentives: the economic incentives were allocated to participants and the bottle of wine to the winners. The effects of information and wine tasting on the Prosecco wines exert varying influences on preferences, leading, in the first case, to an increase in the price differential between the PS-DOCG and the P-DOC, from 60.2% in round one to 77.7% in round two, while in round three the gap was reduced to 15%. This result means that, on average, participants are willing to pay almost the same price for a bottle of the PS-DOCG when additional information is supplied, while its WTP is reduced by 21.8% when the first round is compared with the blind-tasting round (Table 5). In contrast, the values of the P-DOC WTP with the additional information decrease compared to those in the first round (−10%); however, the blind tasting shows a substantial recovery of the P-DOC WTP, with higher values compared to the first round (+8.9%), thereby reaching the closest average value, amongst the rounds, to that of the PS-DOCG (Table 6). The hedonic liking (HL) mean scores for the two GIs show a higher level for the PS-DOCG, although the gap is less significant than that for the WTP. The WTP percentage difference is slightly less than double the percentage difference in terms of HL score. Figure 3 shows box-and-whisker plot differences in the mean WTP for the PS-DOCG and the P-DOC across rounds. In particular, in round two, where the difference in ΔWTP is larger, the PS-DOCG shows less heterogeneity in participants' judgement compared to the third round. In contrast, in round three, the whiskers extend between €−2 and €5.5, 87.5% more than in round two, as there is even more dispersion in the negative ΔWTP. (Table note: blind tasting (HL) 5.62 vs 6.10**, +8.6%; significance levels according to the paired t-test: *p < 0.1, **p < 0.05, ***p < 0.01; n = 99 participants; HL = hedonic liking assessment on a 1–9-point Likert scale.) More interestingly, in the second round, ΔWTP's interquartile range goes from €3 to €4, 1.5 times less heterogeneous than in the third round. Discussions and conclusions This study focussed on the consumers' perceived differences between the two Prosecco PDO designations and how the sensory characteristics, as well as the provision of additional information, may influence consumers' preferences in terms of WTP. A first observation arises from comparing the WTP difference in the first round and the price gap shown for the tasted wines. Of course, the two mean WTP values are higher than the two mean values of the Prosecco sample, given that they are maximum values and not market prices. However, the two percentage differences are quite similar (60 and 57%), reflecting the quality and reputation information used in participants' past purchasing experience of sparkling Prosecco (Landon and Smith, 1997), which is largely due to the subset of information on the two PDOs that was available and used in their purchasing decision (Rosen, 1974). A second point to be stressed is the significant widening of the WTP gap following the introduction of additional information, which is exclusively due to a decreased WTP for the Prosecco P-DOC and not to an improved WTP for the PS-DOCG. As with the first point, this fact can be explained mostly by collective reputation variables as relevant elements in the consumers' information set. We can argue that, for consumers, the regional reputation of Prosecco sparkling wine is essentially that of the PS-DOCG, which is strongly rooted in their minds and for which they know both the production methods and the history and tradition associated with it. Therefore, the additional information on this wine did not change its reputation and WTP.
In contrast, the additional information about the Prosecco PDO gave most participants the awareness of a less-demanding production technique, a somewhat newer tradition and a less appealing vine-growing landscape, all factors that contributed to its reduced WTP. The third round, though still showing a highly significant WTP predominance of the PS-DOCG, rehabilitates the P-DOC, especially in terms of HL, reducing the gap between the two GIs. This result should not be considered unfavourable for the PS-DOCG. Previous blind wine-tasting tests (Goldstein et al., 2008) using non-wine-experts have shown a substantial inability of testers to distinguish positive relationships between wine sensory attributes and price. The WTP difference (€0.80 per bottle, or 15%) can be interpreted as reflecting the minimum quality gap between the two sparkling wines, which is determined exclusively by experience attributes (taste). At the opposite end, the widest WTP difference (€3.44 per bottle, or 77.7%), found in the second round, can be viewed as the expression of the overall quality when consumers are fully cognisant of all the credence attributes. However, the effect of the latter on the appreciation of the two Proseccos is divergent: for the PS-DOCG, they increase the WTP by almost 28% above the bottom WTP level defined by its likeability; for the P-DOC, they reduce the WTP by 21% below the top WTP level defined by its likeability. Although limited by a somewhat narrow sample constituted mainly of consumers living almost exclusively in the Veneto region, our analysis once again underlines the "superiority" of the PS-DOCG in comparison with the P-DOC. In addition, it supplies useful indications for a promotion strategy for the two collective brands.
Even if we cannot infer general conclusions on the influence of information, we would add that employing a set of ad hoc communication strategies could lead to a learning effect that raises the possibility of discriminating the ranking at the wine-tasting step and identifying the heterogeneity between the two PDOs (Combris et al., 2009). Indeed, for the PS-DOCG, it appears crucial to effectively communicate the values associated with the production land, tradition, history and any other aspect that increases its image and reputation. This communication is particularly important for markets in which it competes with the other GI and must be differentiated from the latter. At the same time, both the Consortium and producers have to continuously improve the wine's sensorial quality to avoid disappointing feedback following consumption, especially for consumers who already know the other P-DOC. The P-DOC promotion strategy should be based on two aspects to be communicated. The first is its main strength, i.e. the high value for money of its likeability (the mean price of one HL score is €0.86, while for the PS-DOCG it is €1.25), arising from a quite good sensorial level. The second, which is particularly useful for markets in which it competes with the PS-DOCG, consists of stressing all the credence features that are shared with the latter (region, Glera variety, winemaking method, etc.) to avoid a negative effect by emphasising the main differences between them. This study suffers from some limitations that are frequent in research on WTP and likeability. First, the sample size limits the representativeness concerning the broader population, leading to restrictions in the generalisation of results. Second, despite the attention paid to the sampling characteristics used in the auctions (i.e.
limitation to wine drinkers and mainly to non-expert drinkers, consumer sociodemographic information), the recruitment method relied on the population who were conveniently available to participate in the study, considering inclusion criteria restricted to regional cohorts inside and outside the two Prosecco PDO areas. Third, although the wines were accurately selected, we can argue that a different set of brands belonging to both GIs might change their evaluation in terms of likeability, leading to other WTPs in the blind tests; therefore, these wines were not representative of the whole intra-PDO heterogeneity of the two Prosecco PDO alternatives. These reasons suggest the necessity of performing further BDM auctions during wine-tasting experiments in different locations and with other wine sets. Fourth, it is also recognised that the choice of wine when operating in a laboratory environment can prompt different evaluations from those given in the context of a real market (Harrison and List, 2004). Further research might investigate, in other Italian regions and foreign countries, the importance of factors such as consumer socio-demographic characteristics, cultural features and psychological attitudes in affecting both consumer preferences for Prosecco PDO and their purchasing behaviour. In this context, from the producer's perspective, the question arises: how important is the role of sensorial characteristics in highlighting a distinctive quality hierarchy across Prosecco PDO sparkling wines for consumers? Notes 1. In wine tasting, Prosecco wines have numerous similar elements that represent a source of unclear information for the Prosecco consumer, who is not able to perceive intra-PDO sensory differences. 2.
According to the annual report on the Prosecco PDO wine market, the average ex-cellar price of the PS-DOCG wines was mainly concentrated in the super-premium range (71%), while the P-DOC wines showed a price positioning between popular premium (57%) and premium wines (43%; Boatto et al., 2018).
Scale Factor Calibration for a Rotating Accelerometer Gravity Gradiometer
Rotating Accelerometer Gravity Gradiometers (RAGGs) play a significant role in applications such as resource exploration and gravity-aided navigation. Scale factor calibration is an essential procedure for RAGG instruments before use. In this paper, we propose a calibration system for a gravity gradiometer that obtains the scale factor effectively, even in the presence of surrounding mass disturbances. In this system, four metal-spring-based accelerometers with good consistency are orthogonally assembled onto a rotary table to measure the spatial variation of the gravity gradient. By changing the approach pattern of the reference gravity gradient excitation object, the calibration results are generated. Experimental results show that the proposed method can efficiently and repeatedly detect a gravity gradient excitation mass weighing 260 kg within a range of 1.6 m, and the scale factor of the RAGG is obtained as (5.4 ± 0.2) E/μV, which is consistent with the theoretical simulation. Error analyses reveal that the performance of the proposed calibration scheme is mainly limited by the positioning error of the excitation and can be improved by applying higher-accuracy position rails. Furthermore, the RAGG is expected to perform more efficiently and reliably in field tests in the future. Introduction The gravity gradient signal is very important data about the Earth. Gravity gradient measurements have attracted considerable attention in the fields of resource exploration, gravity-aided navigation, etc. [1][2][3]. The Hungarian physicist Roland von Eötvös invented the first gravity gradiometer, using a torsional pendulum, in 1890, and it was applied to oil and gas field exploration. However, it took several hours to measure a single point and required a quiet measurement environment [4].
The first gravity gradient measurement based on a moving platform was accomplished, through design, manufacture, and testing, in 1982 by Bell Aerospace Inc. (Buffalo, NY, USA) [5,6]. In the 1990s, BHP Billiton from Australia cooperated with Lockheed Martin (formerly Bell Aerospace) and developed a partial gradient component measurement system for geological exploration using the Rotating Accelerometer Gravity Gradiometer (RAGG) technology. It was completed in 1997 and named the Falcon TM Airborne Gravity Gradiometer (AGG), with a noise performance of 3.3 E achieved in a 0.18 Hz bandwidth, and it came into service after a two-year flight test [7,8]. Moreover, Lockheed Martin upgraded the system from a marine Full Tensor Gradiometer to an airborne Full Tensor Gradiometer named Air-FTG TM, with noise power densities of 7–8 E²·km in a Cessna C208 and 5–6 E²·km in a Basler BT-67. Since 2014, it has flown two million kilometers of test lines, and has shown many advantages and excellent success in navigation and commercial applications [9,10]. Starting from the 1990s, rapid progress has been made in atomic interference, superconducting, and other technologies. Atomic interferometer gravity gradiometers (such as the Stanford AI), superconducting gravity gradiometers (such as VK1), and other gravity gradiometers based on late-model technologies have come into public sight [11][12][13]. The RAGG is one of the few airborne gravity gradiometers that has been applied for navigation and resource exploration. E.H. Metzger, who led the first RAGG development, reported the status of the RAGG development program and gave the instrument structures and self-generated noise, including thermal noise and electronic noise. He also reported that the RAGG output fluctuation was less than 2 E in an 18 h stability experiment, and that the instrument bias trend was about 2.2 × 10⁻³ E/h [14][15][16]. However, the scale factor calibration had not been reported, and the RAGG output was expressed directly in E.
Hofmeyer et al. focused on the intrinsic noise of RAGGs and demonstrated an achievable sensitivity below 3 E/√Hz for stationary measurements using an eight-accelerometer gravity gradiometer, and the performance has been improved through optimization of the gradiometer measurement [17][18][19]. Although the sensitivity has been tested and quoted in E/√Hz instead of V/√Hz, the scale factor calibration of the RAGG has still not been released. Cai et al. calculated and simulated a calibration method for an RAGG using centrifugal gradients, and provided detailed procedures and mathematical formulations for calibrating scale factors and other parameters in their model [20]. However, this particular and detailed RAGG calibration method needs to be further verified in experiments. In this paper, we propose a calibration system for a gravity gradiometer to obtain the scale factor of an RAGG. In this calibration system, the RAGG measures the spatial variation of the gravity gradient caused by an approaching reference mass body. In order to alleviate the influence of surrounding masses and human disturbance, we change the direction along which the gravity gradient excitation moves towards and away from the RAGG, and the calibration results can finally be demodulated. The process shows the repeatability of the scale factor calibration and yields a stable scale factor. Additionally, the theoretical analysis results are highly consistent with the experimental results. Principle By the law of superposition, the gravitational field potential can be generalized using the concept of mass density, as shown in Equation (1): U(r) = G ∫ ρ(r′)/|r − r′| dV′, (1) where r denotes the location where the potential is determined, r′ indicates the location of the differential volume element dV′, ρ is the mass density, and G is the gravitational constant. The gravitational acceleration g is the first-order derivative of the spatial gravitational potential, and the gravity gradient is the second-order derivative of the spatial gravitational potential.
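The statement that the gravity gradient is the second-order derivative of the potential can be checked numerically for the point-mass special case of Equation (1). This is a sketch; the 260 kg value merely anticipates the test mass used later in the experiment.

```python
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def potential(r_vec, m=260.0):
    """Point-mass special case of Equation (1): U(r) = G m / |r|."""
    return G * m / np.linalg.norm(r_vec)

def gradient_xx(r_vec, h=1e-4):
    """Second derivative d^2U/dx^2 by central finite differences,
    i.e. the Gamma_xx component of the gravity gradient tensor."""
    e = np.array([h, 0.0, 0.0])
    return (potential(r_vec + e) - 2 * potential(r_vec) + potential(r_vec - e)) / h**2

r = np.array([1.0, 0.0, 0.0])
numeric = gradient_xx(r)
analytic = 2 * G * 260.0 / 1.0**3          # Gamma_xx = 2Gm/r^3 on the axis
print(numeric / 1e-9, analytic / 1e-9)     # in Eotvos units (1 E = 1e-9 s^-2)
```

Both values agree to several digits, and a 260 kg mass at 1 m produces a gradient of roughly 35 E, which illustrates how weak the signal is.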
The spatial gravity gradient tensor is given by Γ_ij = ∂²U/∂x_i∂x_j (i, j = x, y, z). (2) Due to the continuity of the derivatives of the potential, and considering Poisson's field equation, the gravity tensor is symmetric and its trace is zero. Generally, the gravity gradient signal is extremely weak; its unit, 1 E, is defined as a gravity difference of approximately 0.102 ng (1 g ≈ 9.8 m/s²) between two points separated by 1 m. The RAGG employs mechanical rotation modulation and synchronous electrical demodulation to extract the gravity gradient signal, as in a lock-in amplifier, as shown in Figure 1. In order to measure the gravity gradient at point O, the four accelerometers are assembled onto a disc whose center is point O. The directions of the sensitive axes of the accelerometers, indicated by the black arrows, are along the tangent of the disc in a clockwise manner. Therefore, the sensitive axes of two adjacent accelerometers are orthogonal to each other. The disc is then driven by a precision motor at a constant angular rate, ω. The output model of the accelerometers that we used is U = K_I a_I, where U, K_I, and a_I are the output voltage, scale factor, and acceleration along the direction of the sensitive axis of the accelerometer, respectively. By expanding the output of the i-th accelerometer (i = 1, 2, 3, 4) in a Taylor series about point O, the summed output of the four accelerometers can be expressed by Equation (3): U_1234 ≈ K_Is R [(Γ_yy − Γ_xx)/2 · sin 2ωt + Γ_xy cos 2ωt] + error terms in K_Id, g_ox, g_oy, and ∂ω/∂t, (3) where K_Id = (K_I1 + K_I2) − (K_I3 + K_I4), K_Is = K_I1 + K_I2 + K_I3 + K_I4, R is the disc radius, g_ox and g_oy are the components of the acceleration of gravity along the x and y axes, respectively, and ∂ω/∂t is the rotational angular acceleration of the rotary table. According to Equation (3), the gravity gradient signals at point O, Γ_yy − Γ_xx and Γ_xy, are successfully modulated into the 2ω frequency domain, which is double the rotational frequency.
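The lock-in extraction described above (modulate at 2ω, multiply by the reference signal, low-pass) can be sketched numerically. The gradient values, rotation rate, and noise level below are illustrative, not the instrument's specification.

```python
import numpy as np

# Simulated lock-in extraction of (Gamma_yy - Gamma_xx) and Gamma_xy from a
# signal modulated at twice the disc rotation frequency.
fs, f_rot, T = 1000.0, 0.25, 400.0        # sample rate (Hz), rotation (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
w = 2 * np.pi * f_rot
d_gamma, gamma_xy = 50e-9, 20e-9          # 50 E and 20 E, in s^-2

rng = np.random.default_rng(0)
signal = (0.5 * d_gamma * np.sin(2 * w * t) + gamma_xy * np.cos(2 * w * t)
          + 5e-9 * rng.standard_normal(t.size))   # additive wideband noise

# Multiply by each reference and average (the simplest low-pass) to demodulate.
d_gamma_est = 2 * np.mean(signal * np.sin(2 * w * t)) / 0.5
gamma_xy_est = 2 * np.mean(signal * np.cos(2 * w * t))
print(d_gamma_est / 1e-9, gamma_xy_est / 1e-9)    # recovered, in Eotvos
```

Averaging over many rotation periods suppresses the wideband noise, which is the reason the rotation modulation makes such a weak signal measurable.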
By demodulating U_1234 using the reference signals sin 2ωt and cos 2ωt, the gravity gradient tensor components can be obtained through a low-pass filter. In the frequency domain, the horizontal component of the gravitational acceleration couples into the measured value of the gravity gradient through the trigonometric functions. Likewise, the 2ω component of the rotary speed contributes to the output result of the gravity gradient, although the angular rate noise is a relatively small error. Furthermore, the static surrounding masses cause a gravity gradient, which also contributes to the zero bias of the RAGG output. Since the relative variation of the gravity gradient, rather than the absolute value, is measured by the RAGG, little consideration is given to these zero-bias issues in the calibration experiments. Assume that the RAGG is placed in the coordinate frame O-XYZ (coordinate B) with the direction of the baseline O-A1 along the X axis, so that the RAGG measures the gravity gradient with respect to O-XYZ. At the same time, create the coordinate frame O-X′Y′Z′ (coordinate A) for the movement pattern of the reference mass object, letting the Z and Z′ axes coincide; the angle between the X and X′ axes is β, as shown in Figure 2. Then, the gravity gradient tensor Γ_B is obtained by using the tensor transformation principle: Γ_B = C_A^B Γ_A (C_A^B)^T, (4) where C_A^B is the direction cosine matrix from coordinate A to B: C_A^B = [cos β, sin β, 0; −sin β, cos β, 0; 0, 0, 1]. (5) So the gravity gradient that the mass cylinder excites in O-X′Y′Z′ is measured by the RAGG as the gravity gradient in O-XYZ. When the direction of the reference mass object's movement makes an angle β with the RAGG's baseline O-A1, the RAGG's output can be expressed by: Γ_yy − Γ_xx = (Γ_Y′Y′ − Γ_X′X′) cos 2β, Γ_xy = (1/2)(Γ_Y′Y′ − Γ_X′X′) sin 2β. (6) In the coordinate frame O-X′Y′Z′, the gravity gradient produced as the mass body moves towards and away from the RAGG always stays the same. From Equation (2), the gravity gradient tensor component Γ_X′Y′ is equal to zero and Γ_Y′Y′ − Γ_X′X′ is non-zero, and can be taken as the excitation signal when the excitation pattern follows the X′-axis. In the coordinate frame O-XYZ, however, the RAGG measurement output varies with sin 2β and cos 2β. For example, by demodulating U_1234 with the reference signals, the calibration signal appears in the sin 2ωt demodulation when β = 0, but in the cos 2ωt demodulation when β = π/4. Additionally, the way the results vary with sin 2β and cos 2β for a 50 E step change is shown in Table 1.
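The way the calibration signal moves between demodulation channels as β changes follows from the tensor rotation about the shared Z axis. A short numerical sketch with illustrative gradient values (not measured data):

```python
import numpy as np

def rotate_gradient(gamma_a, beta):
    """Transform a gravity gradient tensor from frame A (excitation axes)
    to frame B (gradiometer axes), rotated by beta about the shared Z axis:
    Gamma_B = C Gamma_A C^T."""
    c, s = np.cos(beta), np.sin(beta)
    C = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
    return C @ gamma_a @ C.T

# Excitation along the X' axis: Gamma_X'Y' = 0, Gamma_Y'Y' - Gamma_X'X' != 0.
gamma_a = np.diag([100.0, 50.0, -150.0])    # illustrative values in E, trace zero
for beta in (0.0, np.pi / 4, np.pi / 2):
    g = rotate_gradient(gamma_a, beta)
    # (Gamma_yy - Gamma_xx) feeds the sin(2wt) channel, Gamma_xy the cos(2wt) one
    print(round(np.degrees(beta)), round(g[1, 1] - g[0, 0], 1), round(g[0, 1], 1))
```

At β = 0 the signal sits entirely in Γ_yy − Γ_xx, at β = π/4 it moves entirely to Γ_xy (with half the amplitude), and at β = π/2 it returns to Γ_yy − Γ_xx with the opposite sign, which is the pattern the calibration exploits.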
When the gravity gradiometer is calibrated, we can set the angle β = 0 first and obtain a calibration step signal. Then, by changing the angle to β = π/4 and β = π/2, the influence of the surrounding masses can be reduced, because that influence does not change with the angle β during the calibration. Therefore, disturbances from surrounding masses, such as human movement, can be reduced by multiple measurements of the reference object at different angles β. Experimental Results In the scale factor calibration experiment, we chose lead as the mass body material due to its high density and relatively low price. In order to obtain a gravity gradient excitation varying from ~0 E to ~500 E over distances of ~0.5 m to ~1.6 m, a lead cylinder with a height of 33 cm and a diameter of 32 cm was used. However, several systematic errors should be discussed: the volume error of the lead cylinder (in particular, the errors in height and diameter), its density error, the positioning accuracy, the misalignment of the cylinder's central line with respect to point O, and the volume change caused by temperature fluctuation (the coefficient of thermal expansion of lead is 2.9 × 10⁻⁵/°C). Moreover, these systematic errors vary from one location to another. The processing error of each term and its contribution to the gravity gradient are shown in Table 2 and Figure 3. As shown in Figure 3, the positioning error of the lead cylinder is the main error source. Furthermore, the gravity gradient Γ_Y′Y′ − Γ_X′X′ that the lead cylinder excites in the coordinate frame O-X′Y′Z′, together with the positioning error at each location, is calculated as shown in Figure 4. The calibration experiment setup was built in a laboratory kept at 23 ± 1 °C and shielded against electromagnetic interference. The gravity gradient instrument was placed at point O. A lead cylinder weighing 260 kg, acting as the gravity gradient excitation, was built and mounted on a rail. In order to eliminate the disturbance of ground vibration from people passing by, the RAGG was placed on a vibration-isolation foundation and the lead cylinder was suspended along the rail, whose supporting feet were outside the vibration-isolation foundation, as shown in Figure 5. As can be seen from Figure 6, the noise floor of the four accelerometers was ~50 ng/√Hz at 0.25 Hz. The data extraction of the gravity gradient measurement at point O is shown in Figure 7. After a band-pass and amplifying stage, Γ_yy − Γ_xx and Γ_xy can be extracted by demodulation with the reference signals sin 2ωt and cos 2ωt, respectively. Without loss of generality, the lead cylinder was pushed towards and away from the RAGG at distances of 153 cm, 91 cm, 76 cm, . . . , 52.5 cm, and 50 cm along the X-axis.
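As a rough order-of-magnitude check (a point-mass sketch; the computation behind Figure 4 uses the full cylinder geometry), the excitation Γ_Y′Y′ − Γ_X′X′ produced by a 260 kg mass at the experimental distances can be estimated as:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
mass = 260.0    # kg, the lead cylinder treated as a point mass (approximation)

def excitation_eotvos(d):
    """|Gamma_Y'Y' - Gamma_X'X'| for a point mass at distance d on the X' axis:
    Gamma_X'X' = 2Gm/d^3 and Gamma_Y'Y' = -Gm/d^3, so the difference is 3Gm/d^3."""
    return 3 * G * mass / d**3 / 1e-9   # in Eotvos

for d in (0.50, 0.76, 0.91, 1.53):      # distances used in the experiment (m)
    print(f"{d:.2f} m -> {excitation_eotvos(d):7.1f} E")
```

The point-mass estimate spans roughly 15 E to 400 E over 0.5–1.6 m, consistent with the stated ~0 E to ~500 E excitation range once the finite cylinder geometry is taken into account.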
For the repeatability to measure the scale factor of the RAGG, the movement direction was changed by the Y-axis and 45 degrees line in the first quadrant. A step change of 50 E in the β = 0, and a 100 E change in the β = π/4 or β = π/2, were designed. The measured results of the gravity gradient at point O are shown in Table 3. a band-pass and amplifying stage, and can be extracted by demodulation, with reference signals of sin2 and cos2 , respectively. Without the loss of generality, the lead cylinder was pushed towards and away from the RAGG with various distances of 153 cm, 91 cm, 76 cm, …, 52.5 cm, and 50 cm along the X-axis. For the repeatability to measure the scale factor of the RAGG, the movement direction was changed by the Y-axis and 45 degrees line in the first quadrant. A step change of 50 E in the = 0, and a 100 E change in the = /4 or = /2, were designed. The measured results of the gravity gradient at point O are shown in Table 3. As is shown in Table 3, this RAGG system is able to detect a gravity gradient excitation (260 kg) within a nearly 1.6 m range. From Equation (2), the gravity gradient tensor Γ X Y is equal to zero and Γ Y Y − Γ X X is non-zero, which can be an excitation signal when the excitation pattern follows the X -axis, which is shown in Table 1 when β = 0. From Equation (6), when β = π/4, the demodulated result has the calibration curve that displays the same trend as β = 0. Additionally, for β = π/2, the sin 2ωt demodulated result has the calibration curve which displays the opposite trend as β = 0, and it is matched in the theoretical analysis, as shown in Table 1. In order to analyze the scale factor of RAGG, the calibration step results are line-fitted with the X and Y error bar, as is shown in Figure 8. The scale factor of RAGG is (5.6 ± 0.2) E/µV, (5.2 ± 0.2) E/µV, and (5.5 ± 0.2) E/µV, while the angles are β = 0, β = π/4, and β = π/2, respectively. Finally, it is (5.4 ± 0.2) E/µV through the error analysis. 
Table 3. Calibration of RAGG with excitation at different toward-away direction patterns (outputs demodulated by sin2ωt and by cos2ωt).

As is shown in Table 3, this RAGG system is able to detect a gravity gradient excitation (260 kg) within a nearly 1.6 m range. From Equation (2), one gravity gradient tensor component is equal to zero while the other is non-zero and can serve as an excitation signal when the excitation pattern follows the X'-axis, as shown in Table 1 for a toward-away direction angle of 0. From Equation (6), when the direction angle is π/4, the cos2ωt-demodulated result has a calibration curve that displays the same trend as at 0. Additionally, for π/2, the sin2ωt-demodulated result has a calibration curve that displays the opposite trend, which matches the theoretical analysis shown in Table 1.

In order to analyze the scale factor of the RAGG, the calibration step results are line-fitted with X and Y error bars, as shown in Figure 8. The scale factor of the RAGG is (5.6 ± 0.2) E/μV, (5.2 ± 0.2) E/μV, and (5.5 ± 0.2) E/μV at direction angles of 0, π/4, and π/2, respectively; combining these through error analysis gives (5.4 ± 0.2) E/μV. Since the accuracy of the results is mainly dependent on the lead cylinder positioning error, there is still great potential for improvement in the calibration experiment.

Figure 8. Fitting curve of scale factor of static-platform state calibration of the RAGG.
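The scale-factor estimation described above, line-fitting the calibration steps and then combining the per-angle results through error analysis, can be sketched in a simplified form. The fit below is an unweighted least-squares slope (a simplification of the paper's fit with X and Y error bars), and the excitation/output numbers in the fit test are illustrative assumptions; only the three per-angle scale factors are the values quoted in the text.

```python
# Hypothetical sketch of the RAGG scale-factor analysis.
# fit_slope is a plain least-squares slope (the paper fits with
# X and Y error bars; this is the simplified, unweighted analogue).

def fit_slope(x, y):
    """Least-squares slope b of the line y = a + b*x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx  # slope in E per uV if x is in uV and y in E

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean, one way to combine the
    three per-angle scale factors into a single figure."""
    w = [1.0 / s ** 2 for s in sigmas]
    mean = sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
    sigma = (1.0 / sum(w)) ** 0.5
    return mean, sigma

# Per-angle scale factors reported in the text (E/uV):
combined, err = weighted_mean([5.6, 5.2, 5.5], [0.2, 0.2, 0.2])
print(round(combined, 1))  # prints 5.4, matching the reported value
```

With equal uncertainties the weighted mean reduces to the plain average, (5.6 + 5.2 + 5.5)/3 ≈ 5.4 E/μV; the paper's own error analysis keeps the more conservative ±0.2 E/μV uncertainty.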
In order to analyze the accuracy of the gravity gradient measurement, the excitation motion is adjusted until the tensor Γxy remains constant. Figure 9 shows the results measured while changing the excitation distance from 50 to 155 cm, and compares them with theoretical calculations. The curves show good consistency when the excitation pattern of approaching/retreating point O is repeated.

Conclusions

In this paper, we proposed a static-platform calibration system for a gravity gradiometer to obtain a stable scale factor of the RAGG. The gravity gradient signal extraction technique, based on rotating accelerometers with lock-in amplification (mechanical modulation and electrical demodulation), was studied. Then, a gravity-gradient field verification platform for gravity gradiometer calibration was built. The direction of the gravity gradient excitation moving towards and away from the RAGG was changed to alleviate the effect of the surroundings, and the demodulated calibration outputs accordingly showed different results. The process demonstrated the repeatability of the scale factor calibration and yielded a stable scale factor of the RAGG, (5.4 ± 0.2) E/µV. The theoretical analysis results were highly consistent with the experimental results. We have established an experimental setup of the RAGG for calibration use; the experimental results were mainly limited by the positioning error of the excitation mass body and could be further improved by using a higher accuracy horizontal positioning guide rail. Furthermore, we acknowledge that the scale factor calibration can be improved by considering a more accurate gravity gradient excitation model including higher-order effects, a spherical lead ball, a higher precision accelerometer, multiple independent RAGG measurements, and so on.
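The lock-in extraction summarized above, mechanical modulation at the rotation frequency followed by electrical demodulation at 2ω, can be illustrated with a minimal sketch: the modulated accelerometer signal is multiplied by sin 2ωt and cos 2ωt references and averaged over an integer number of periods, which rejects the bias and the 1ω disturbance. The signal model and all parameters below are illustrative assumptions, not the paper's data.

```python
import math

def lockin_2w(signal, t, w):
    """Return the (sin2wt, cos2wt) demodulated amplitudes of the
    2w component: mix with each reference, then average (a crude
    low-pass filter over full periods)."""
    n = len(signal)
    s = sum(v * math.sin(2 * w * ti) for v, ti in zip(signal, t)) / n
    c = sum(v * math.cos(2 * w * ti) for v, ti in zip(signal, t)) / n
    # factor 2 converts the mixed-down mean back to the amplitude
    return 2 * s, 2 * c

w = 2 * math.pi * 0.25                # assumed 0.25 Hz rotation, so 2w = 0.5 Hz
t = [i / 100.0 for i in range(4000)]  # 40 s at 100 samples/s (full periods of all terms)
# synthetic signal: a 2w "gradient" term plus a bias and a 1w disturbance
sig = [0.8 * math.sin(2 * w * ti) + 0.3 * math.cos(2 * w * ti)
       + 0.5 + 0.2 * math.sin(w * ti) for ti in t]
s_amp, c_amp = lockin_2w(sig, t, w)
print(round(s_amp, 2), round(c_amp, 2))  # prints 0.8 0.3
```

Only the 2ω amplitudes survive the averaging, which is why the demodulated outputs in Table 3 track the gravity gradient excitation rather than the much larger accelerometer bias.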
Local rooting and creativity within the fashion industry in Beirut

Purpose – The purpose of this paper is to analyze the factors that make Beirut a fashion hub by studying the characteristics of creativity and the role of the different stakeholders in setting an environment that encourages creativity in Beirut. Design/methodology/approach – The methodology of this research is based on a literature review and information collected through semi-structured interviews with the different stakeholders of the sector. Findings – The research reveals three results. First, the dynamism of fashion design in Beirut is explained by the international success of some Lebanese fashion designers. Second, as there is an absence of any form of governmental intervention, the development of the sector is totally based on private business initiatives. Third, the research demonstrates the importance of the local culture, knowledge exchanges and lifestyle in shaping creativity and designers' careers in Beirut. Originality/value – These findings contribute to the clarification and critical analysis of the current state of fashion design in Beirut, which would have several policy implications.

increasingly important role (DDFC, 2016). According to Business of Fashion and McKinsey Institute (2019), China is expected to overtake the USA as the largest fashion market in the world in 2019. In addition, India is becoming a focal point for the fashion industry as its middle-class consumer base grows and its manufacturing sector strengthens. In the Middle East and North Africa (MENA) region, creative industries have played a prominent role in the cultural and economic development of the region for centuries (art, calligraphy, music, etc.). In 2014, the MENA fashion retail market was estimated at $75bn in terms of retail sales. The United Arab Emirates leads the region with 28.3 percent of apparel revenues.
In terms of growth, the MENA fashion market has outpaced global industry growth by over 4.7 times since 2010, with a CAGR of 15.7 percent vs a global CAGR of 3.3 percent. However, a number of challenges and barriers remain within the MENA region: the lack of design institutions, the poor enforcement of Intellectual Property protection, the lack of educational facilities and regional talents, and the lack of production facilities (DDFC, 2016). Lebanon is a small country in the Middle East, with an estimated population of 4.65m inhabitants. Classified by the World Bank as an upper middle-income country with a GDP of $47.1bn in 2015 (Harake et al., 2016), the Lebanese economy is often described as having an open, liberal and modern outlook with minimal state intervention (Marseglia, 2004; Leenders, 2012), and is mainly driven by services such as banking and tourism. One of the key issues facing Lebanon is the political instability of the country and the region, which has both internal and external sources. In 2013, according to a study by BankMed (2013), the Lebanese fashion industry grew at an average annual rate of 3.9 percent, reaching $606mn (in terms of spending). The demand for fashion in Lebanon mainly leads to imports, from Europe in particular, as the local production capacity does not meet the local demand. Currently, Beirut leads fashion design in the MENA region, with a number of successful luxury labels launched by Lebanese designers. In fact, Beirut is a well-known regional hub with a particularly strong reputation in fashion. Despite the present economic and political obstacles, fashion is an important industry in Beirut and our research shows that it continues to progress. However, the reasons for this success have not been fully explored and we do not know much about the present challenges.
It appears that some local factors related to creativity or knowledge exchanges are important to explain business success in this industry, but this remains to be shown, and this is what we aim to do here. The main objective of our research was to explain the business success of Beirut's local fashion designers by analyzing the factors that have contributed to making these designers and Beirut a fashion hub. In order to explain this success, we studied the fashion business's characteristics and development over the years. We analyzed the characteristics of creativity in the fashion industry and the role of the different stakeholders in Beirut. As mentioned above, our results show the importance of the local culture, knowledge exchanges and lifestyle in shaping creativity and designers' careers in Beirut, including in explaining the international success of some of them. Before we present the results, we offer a brief literature review, and then our methodology.

Literature review

The starting point of our research had to do with the relationship between geography and creativity. In recent years, creativity has moved to the center of the agenda of urban development, and has given birth to a number of new concepts: creative economy, creative class, creative city, creative industries, place-making, etc. (Florida, 2002, 2005; Markusen, 2006; Pancholi et al., 2015). The creative economy refers to a set of creative and cultural activities such as advertising, architecture, design, fashion, crafts, filming, music, publishing and arts (Chaston and Sadler Smith, 2012). The literature on the geography of the creative industries has widely demonstrated that those activities often concentrate in specific cities and metropolitan areas that provide a particularly fertile environment for producers and for consumers of creative goods and services (Landry, 2000; Scott, 2000, 2006).
Those spaces play a crucial role in the growth of the creative economy (Florida and Gertler, 2003;Scott, 2006). A number of different approaches have attempted to explore the non-market factors that make some places or cities more attractive to creative workers or to the "creative class." The first approach addresses the cultural and social characteristics of those cities. Florida (2002) indicated that coolness, tolerance, talent and diversity, are significant assets in attracting creative people and capitals. Cultural and ethnic diversity is seen to have a positive impact on creativity (Audretsch et al., 2010). For Storper and Scott (2009), the accumulation of high levels of human capital in "tolerant" or open-culture regions, can be successful both in terms of attracting the creative class and for assimilating recent immigrants. Certain urban districts function as contexts for the production of symbolic meanings and could encourage the collective process of cultural production (Hauge and Hracs, 2010). The second approach analyzes the role that quality of life plays in attracting talent to cities, and examines the role of cultural or service amenities. Some urban scholars and economic geographers underline the role of entertainment and lifestyle in cities, indicating it can act as a magnet in attracting human capital and business (Bocock, 1992;Clark and Lioyd, 2000). This might be more important in creative or cultural sectors, as it might prove successful in turning creativity into commercially exploitable knowledge. Piergiovanni et al. (2012) show that the presence of a rich variety of amenities, which make life pleasant, attracts more educated, talented and creative workers who, in turn, contribute significantly to the growth of the city or the region. 
According to Scott (2010), urban milieus have many aspects that can affect creativity and attract creative workers such as a system of leisure opportunities and amenities that provide relevant forms of recreation and distraction. Third, the local environment can be a source of information and knowledge through the different networks found locally. The creation process appears as the fruit of a collective action, which leads the creator to activate numerous networks of affinities, to multiply contacts and relations of cooperation, and to diffuse his work toward various groups (Becker, 2006). In fact, creativity is now widely acknowledged to be a social process (Rantisi and Leslie, 2015), where new ideas are generated through interactions, exchanges and observations of the various actors of creativity (Grandadam et al., 2013). The greater the environmental uncertainty, the more likely that entrepreneurs rely on social relations for acquiring a competitive advantage (Peng and Luo, 2000). Growing emphasis on its "design-intensive" nature and its high brand visibility in the public sphere has led the fashion industry to be considered, in both academic and policy circles, as a key component of the creative economy (Evans and Smith, 2006;Business of Fashion and McKinsey Institute, 2016). According to a study by the World Intellectual Property Organization, in 2014 for the textiles, apparel and leather products, including fashion, 29.9 percent of the value of manufactured products sold around the world comes from "intangible capital," such as branding, design, and technology (WIPO, 2017). Nevertheless, fashion is a volatile market and fashion requires constant change (Khan, 2003). Ephemerality, ambivalence and ambiguity make fashion design a highly uncertain and risky business (Tremblay, 2012;Yagoubi and Tremblay, 2016). As a matter of fact, the notion of risk is omnipresent in fashion design (Yagoubi and Tremblay, 2016). 
However, the network and intermediary organizations' support can be seen as a risk reduction factor (Lupton, 1999; Klein et al., 2007; Tremblay and Yagoubi, 2014). Rantisi and Leslie (2010) demonstrate that the public spaces and the neighborhood in which fashion designers locate are important to them as this can have an impact on the chance encounters and interactions between creative workers, which can also be a source of cooperation and risk reduction. For example, Hauge and Hracs (2010) demonstrated that, in Toronto, the growing prevalence of independent production is making the long-standing connections between musicians and fashion designers more crucial to their success. Fashion designers are using musicians to promote their brands and clothing lines and musicians are getting fashion designers to enhance the visual components of their stylistic portfolios. Rieple et al. (2015) demonstrated that, even in a world in which ideas are accessible globally via the internet, location and proximal resources are important to a significant subset of fashion design firms in the UK. Tremblay (2012) shows that fashion designers in Montreal appreciate some support from intermediary support organizations or government programs, and those sources of information and knowledge help them develop their careers and businesses. The support from intermediary organizations is also shown to be important in the work by Yagoubi and Tremblay (2016). Also, He (2013) demonstrates that social capital embedded in Guanxi (traditional Chinese networks) is a valuable and unique resource that gives creative entrepreneurs advantages for successful venture creation. Moreover, from an entrepreneurship standpoint, policy interventions, including incubator and workspace initiatives, financing opportunities, creative clusters and hubs, as well as skills training and business support, play an important role in the production of creative spaces (Foord, 2009).
Despite the importance of fashion design in Beirut, considered one of the main fashion hubs in the MENA region, there are very few studies about the geographical aspects of this industry and its links to culture and locally based resources. Most of the existing studies are reports from international or local organizations, which focus on some aspects of the fashion industry. For example, the ESCWA (2003) report focuses on the productivity and competitiveness of small- and medium-sized enterprises (SMEs) in the apparel-manufacturing sector in Lebanon. The report of the American University of Beirut (2007), entitled "Mapping the creative industries," analyzed the characteristics of the main creative industries in Lebanon. It sheds light on the characteristics of these sectors, their structure and specific dynamics and inter-relationships with other sectors/industries. However, the report did not analyze the dynamics of creativity in the fashion industry. In addition, its results are somewhat out of date, especially with the new challenges facing the fashion industry: e-commerce, social media, etc. The study of Hill (2008) also offers a general overview of the creative sectors in Lebanon and presents recommendations on how to improve the state of those sectors. The report of Endeavor Lebanon (2015) offers an interesting analysis of the dynamics of entrepreneurship within the fashion sector in Lebanon. The report concludes that Lebanon's fashion design ecosystem has strengths and weaknesses, which sometimes depend on the subsector: couture, ready-to-wear or accessories. Strengths include talent and strong cultural support while weaknesses relate to failures in the supply chain, difficulty accessing local and foreign markets, scarcity of support organizations and lack of funding. We notice here the absence of any scientific study on the geographical aspects of the fashion industry in Beirut.
Given these elements, we sought to fill this gap by analyzing the geographical aspects of the fashion industry in Beirut. We show the importance of the local culture, knowledge exchanges and lifestyle in shaping business and designers' careers in Beirut through creativity, as well as in explaining the international success of a certain number of designers.

Methodology

The methodology of this research is based on a literature review and information collected through semi-structured interviews. After the literature review on theoretical dimensions, presented above, the empirical research started with a literature review on the Lebanese fashion industry, with written documents, governmental reports, websites, newspapers, etc. This step gave us a general overview of the fashion industry in Beirut, in order to determine the most important stakeholders and the characteristics of this industry. The second step was a qualitative investigation on the basis of two semi-structured questionnaires that included two rounds of interviews, conducted during the months of July and August 2016. The first series of interviews was conducted with ten experts from organizations involved in the fashion sector, such as Lebanon for Entrepreneurs, Creative Space Beirut (CSB), Starch Foundation, ESMOD and The Investment Development Authority of Lebanon (IDAL). The average length of the interviews was 1 h 08 min. The questions (Questionnaire 1) covered four areas: (1) General information about the organization: programs and services, role in the development of the industry, etc. (2) Dynamics of the local network: the network of partners, role in the network, the key leaders of the network, the level and type of relationships and interactions, barriers, challenges, limits, results, etc. (3) The government policies and regulations: the relationship with governmental actors: the nature of relationships, the degree of government involvement in the development of the industry, etc.
(4) Challenges of the fashion industry in Lebanon: strengths, weaknesses, opportunities, threats, etc. The second series of interviews was held with 15 fashion designers located in Beirut. The average length of the interviews was 1 h 20 min. For our study, we focused on two aspects of the fashion industry: the design of women's wear and ready-to-wear. It needs to be mentioned here that there are no official statistics on the number of fashion houses in Beirut. Furthermore, as mentioned by the American University of Beirut (2007) report, there is no organization that collects statistics on the Lebanese fashion industry. The 15 designers were randomly selected based on three criteria: the size of the company, the level of maturity of the designer's career (early career or established designers) and the location within the different neighborhoods of Beirut. We established a list of 20 potential interviewees based on the list provided by the Endeavor (2015) report, trying to get some diversity but also a certain representativity, although this cannot be totally ensured in a qualitative process such as ours. We then contacted these persons by phone and asked for interviews. Most designers accepted and we had only five refusals. The questions (Questionnaire 2) covered six areas: (1) Basic information about the company and the designer: history, evolution, career, activities, etc. (3) Dynamics of creativity: local culture, quality of life, etc. (4) Dynamics of the local network: the network of partners, role in the network, the key leaders of the network, the level and type of relationships and interactions, barriers, challenges, limits, results, etc. (5) The government policies and regulations: the relationship with governmental actors: the nature of relationships, the degree of government involvement in the development of the industry, etc. (6) Challenges of the fashion industry in Lebanon: strengths, weaknesses, opportunities, threats, etc.
Our analysis began with the processing of the data. To facilitate the processing of the information collected, we transcribed the interviews. Subsequently, we conducted a content analysis of the interviews, and finally, we classified the information and elements of the interviews, using an analytical framework that incorporated the main research themes (Table I).

Results: the strength of fashion design in Beirut

At the moment, Beirut leads the fashion design sector in the MENA region, with a number of successful luxury labels launched by Lebanese designers. In fact, Beirut is a well-known regional hub with a particularly strong reputation for the production of haute couture dresses and wedding dresses. Beirut is frequently designated as the "fashion capital" of the Middle East. This recognition seems to be expanding, particularly in response to shopping visits to Beirut by customers from the Gulf countries; also, the international reputation of many Lebanese haute couture designers is reinforcing the trend (ESCWA, 2003). According to our interviews, while the fashion industry in Beirut is not clearly supported by the State, it does benefit from a certain number of advantages, which we will present in the following pages: the local culture and quality of life, the dynamics of other related sectors such as tourism, and also the leadership or "engine" role of the international and regional success of some Lebanese fashion designers.

Culture and quality of life

When we asked the fashion designers why they chose Beirut to locate their business, most of them mentioned the fact that Lebanon is their home country and they do not really have other options. This may appear somewhat blunt, but other factors are important in explaining fashion designers' success in this city. Indeed, other reasons were mentioned such as freedom (in comparison with the rest of the Middle East region), the culture, the diversity and the general quality of life.
Unlike many other Middle East countries, Lebanon is an open and pluralistic society. Lebanon has long been considered one of the most cosmopolitan and progressive countries in the region. This openness is believed to have fostered creativity in various domains such as fashion (Endeavor Lebanon, 2015). First, our interviews indicate that Lebanese fashion is shaped by the country's characteristics, in particular, its cultural diversity. The cultural heritage of Lebanon is a historical melting pot of multiple civilizations and cultures: from the Phoenicians, Greeks, Romans, Islam, the Crusaders, Ottoman Turks to the French (Plourde Khoury and Khoury, 2009). "Their remnants are still clearly visible today not just in the ruins and tourist sites, but in the culinary style, the language, architecture, folklore and crafts, fashion, literature and performing arts of the country" (American University of Beirut, 2007, p. 5): I lived all my life here so I know Beirut very, very well. I really like the fact that we can work with craftsmen so it's handmade, very old techniques that can be reworked in a modern way. In addition, I make jewels sometimes, objects, it can be something portable or not. So it's rather a studio of experimentation. Therefore, I like that I can work with the guy that makes fiber glass or the carpenter or, you see the encrusted pearl that we can see in the traditional Lebanese furniture. So I think it's very interesting to take this very old know-how and put it in a more modern context. The modern culture of Lebanon was mainly influenced by the French mandate. Quickly, it became a hub for business, culture and fashion. There was a need for the Lebanese people to mix tradition and a "French style" of dress (Sharif, 2017). Consequently, fashion design in Beirut started with talented tailors who imported haute couture models from French fashion houses and executed them locally (Endeavor Lebanon, 2015).
During the "Golden Age" period, Beirut flourished as a trading hub with a tolerant multicultural and multilingual society (Hill, 2008). Beirut was designated as the "Paris of the Middle East," "With its French Mandate architecture, its world-class cuisine, its fashionable and liberated women, its multitude of churches on the Christian side of town, and its thousand-year-old ties to France, it fit the part" (Totten, 2013). During this period, a rising number of designers/tailors were hired by bourgeois families to design custom-made dresses. During the civil war (1975-1990), many designers moved their workshops outside Lebanon. After the civil war, tailors started developing their individual and original designs. Second, all the persons we interviewed said that Lebanese people are a sort of a mix between Arab people and western society. In fact, Lebanon is an open, pluralistic society with 18 different religious groups and a parliament split equally between Christians and Muslims. Lebanon's historically fragmented culture is the substance of "what Lebanese design is, because that is what the whole country is" (Schellen, 2017). Indeed, Lebanese designers are considered as trendsetters of regional fashion, and the country is traditionally a major shopping destination for visitors from the Middle East (Endeavor Lebanon, 2015). Furthermore, the Lebanese culture can be designated as "fashion-friendly." In general, Lebanese people are very interested in fashion and appearance. For them, image and appearance are really important and therefore they are very aware of their physical appearance (Daoud and Högfeldt, 2012). They love designer clothes and care about their appearance and the impression they make. In fact, Lebanese are well-known for their elegance; even men are well-groomed and well-dressed (Global Affairs Canada, 2017).
The average Lebanese person is very proud of their appearance and very conscious of their image, which they so rightfully earned (BloomInvest Bank, 2013). Third, a recurrent description of Beirut mentioned in interviews is a "mix of freedom and chaos." Beirut is often described as a space full of paradoxes, reactions and contradictions (Harb, 2014). The contrast between the two images of Beirut, the worldly one and the disorderly one, led fashion designers to develop their relationships with Beirut through different forms of interaction that affect the city's image and its artistic presentation. This process is for many designers a source of inspiration and creativity. One of the designers provides a good overview of Beirut as a source of inspiration: Living in Beirut day-to-day is inspiring, because of the chaos but it's like fun, beautiful city where I like partying but then you have a disaster. It's kind of an inspiration because your mind goes crazy. Which is what happens when you're getting inspired and you want to create something, you have days where you're up, you have days where you're down. Because when the city disappoints me, I really want to have fun, to explode, and when I'm happy, I want to show how much I like Beirut. What exactly is inspiring about Beirut? I think it's the city itself with the people, the streets, the buildings, etc. even if Beirut, sometimes, it can be very boring, and embarrassing, it always remains my source of inspiration. (Fashion designer No. 4, Questionnaire 2, Interview, 2016) Most of the fashion designers are attached to Beirut and its "chill" and "Bohemian" side. Asked about the possibility of moving to Dubai (the other fashion capital in the region), a well-known fashion designer answered: I'm still more of a Beirut person, this lifestyle. I love visiting Dubai. However, I can't see myself living there because I'm more of a chill person. Dubai is just too much for me. Beirut is a very chill city.
In Beirut, you can do whatever you want. You can decide to just get out and meet friends for a drink in any bar. In Dubai, people like to show off and when they want to party they want to get dressed from head-to-toe and I'm not this kind of person at all. I go out just to really have fun, not to show off. (Fashion designer No. 1. Questionnaire 2, Interview, 2016) Fourth, another interesting dimension of the fashion industry in Beirut, according to our interviewees, is the role played by material factors, such as historic buildings, mixed-use zoning, and public space, in nurturing and supporting creativity in some specific neighborhoods in Beirut. In our research, we studied the locations chosen by the fashion designers in Beirut. As we mentioned above, there are no official statistics on the number of fashion houses in Lebanon, nor is there any organization that collects statistics on the Lebanese fashion industry. Our sample was based on the list provided by the Endeavor (2015) report. In total, we had a list of 36 fashion designers that we succeeded in locating, as shown in the table below. We noticed that the two main locations for fashion designers are Achrafieh and Mar Mikhael, both located in the inner city (Table II). The neighborhood of Achrafieh is located in the eastern part of Beirut and it is a dense neighborhood. Today, Achrafieh is a prime location for banks, business, restaurants, etc. (El-Achkar, 2011). With its historic buildings, restaurants, cafés and its central location, Achrafieh is considered one of the main fashion designers' hubs in Beirut, as mentioned by one of the respondents: "My family is from Achrafieh. So, when I started my business, I took our apartment and I transformed it into my atelier. Actually, I like it here, the neighborhood is vibrant, young, etc. It's easy for my clients to come here" (Fashion designer No. 7, Questionnaire 2, Interview, 2016). The second neighborhood is Mar Mikhael.
Since 1990, Mar Mikhael has seen a commercial transformation. Attracted by the architectural typology of local buildings, several arts, crafts and design industries settled in the area. Simultaneously, Mar Mikhael became a spillover basin for nightlife and the "hip" place-to-be, targeting a specific clientele that appreciates "authenticity" and the proximity of art and design (Krijnen, 2016). "The area has a bohemian character defined by its numerous art galleries and small locally owned bars. While you walk, you cross old and big staircases, a rare sight in Beirut" (Harb, 2014, p. 22). Asked about the reasons why he is located in Mar Mikhael, a fashion designer mentions: "Why Mar Mikhael? I love it here. Its mixture of ancient shops and new businesses and it's so cool. Don't miss the narrow streets around, where you can visit some nice spots. It is a nice area packed with great restaurants and cafés. At night there are nice clubs to have a drink and enjoy good music" (Fashion designer No. 3, Questionnaire 2, Interview, 2016). However, since the end of the civil war, Beirut has faced an urban transformation with a strong gentrification movement. Several of its neighborhoods, such as Achrafieh and Mar Mikhael, are experiencing numerous upscale real-estate developments coupled with a change in the resident population (El-Achkar, 2011; Krijnen, 2016). As a result, "The cost of space, particularly in creative hotspots such as Mar Michael and Ashrafia, has increased dramatically in the last few years, so creative start-ups are now competing against more established businesses for centrally located retail premises" (Thelwall, 2012).

Tourism and fashion

Another source of business activity is the tourism industry. Indeed, tourism has long been one of Lebanon's leading economic sectors.
The World Travel and Tourism Council's latest report ranked Lebanon 36th worldwide in terms of travel and tourism's total contribution to GDP, which maintained its level at 19.4 percent in 2016, around $9.2bn (The Investment Development Authority of Lebanon, 2014). Tourists in Lebanon spend on average $3,000 per visit, one of the highest averages in the world. Arab tourists account for the highest share of tourists coming to Lebanon, reaching 458,069 visitors and representing 33.5 percent of the total number of tourists in 2012 (The Investment Development Authority of Lebanon, 2014). The Lebanese domestic market is small and tourism is a booster for the fashion industry in Beirut. In fact, tourists are the main consumers of recreational and cultural services as well as a variety of creative products such as crafts and music. In fashion design, the majority of clients are from the region. The Gulf market is the most important one and is the key market of Lebanese haute couture (around 40 percent of haute couture exports). It provides great opportunities for local designers. The strong spending power of these clients makes this group of women among the few in the world who can afford to buy exclusively from haute couture designers. Approximately one-third of the global haute couture clientele stems from the Middle East (DDFC, 2016). However, with the decrease in oil prices and the political turmoil the Gulf has been witnessing, this purchasing power has decreased significantly (Rahhal, 2017). Most Lebanese fashion designers try to take advantage of this market. For our sample, the average percentage of exports to the Gulf region is approximately 60 percent, against only 30 percent for the Lebanese market (Table III): I work a lot more for the Gulf, that's for sure. I would say, maybe 80%, it's for the Gulf countries, so it's for Qatar, it's for Kuwait. Being in Beirut makes things easier.
I can meet with people from Dubai or from Qatar, we can meet in Beirut, during their vacations here, it's so easy for them, so […] And I go to Dubai from time to time, if someone can't come to Beirut, I meet them in Dubai, it's easier for everyone. (Fashion designer No. 1, Questionnaire 2, Interview, 2016)

The main fashion designers

Another important variable in explaining the success of the fashion sector as a whole is the presence of a few "stars" in the fashion design ecosystem. Indeed, when it comes to fashion design, names like Elie Saab, Zuhair Murad, Georges Chakra, Rabih Kayrouz and Abed Mahfouz are internationally acclaimed fashion designers and role models for the young generation, who through their international fame saw fashion design as a viable and prestigious career path (Rahhal, 2017). In fact, as mentioned by Endeavor Lebanon (2015, p. 16): "Inspiration is a key aspect of prosperous entrepreneurial ecosystems. Success stories inspire would-be entrepreneurs and drive their ambition to get to the top." Those prominent Lebanese fashion designers have made their mark on the international scene. They hosted fashion shows in the top fashion capitals of the world and dressed many celebrities worldwide (Endeavor Lebanon, 2015). In our interviews, Elie Saab is the most frequently cited local entrepreneur to have inspired the younger generation. Also, the fashion designers that we met repeatedly highlighted how inspirational it was to see a local designer with international recognition and a global brand, as confirmed by one of them: "Elie Saab, I take him as a model, because he is a success story and because he made it. He's now an international designer, he's all over the place, and we're very proud in Beirut of having him on the map" (Fashion designer No. 6, Questionnaire 2, Interview, 2016).
Also, as one of the fashion designers notes, Elie Saab offers learning opportunities for young designers, mostly through offering them internships and access to important knowledge on the sector and the dynamics of a career in fashion: I did an internship and then I continued working for a long time at Elie Saab. Elie Saab, it was a very interesting experience for me because it's a very large company, which is complete, from that time, in 2006, with ready-to-wear, sewing, etc. There was a fairly high customer service too so I really had the opportunity to discover the world of fashion, in all its small details. I also had the chance to go to all of Elie Saab's workshops, I worked live with the clients, I worked live with Elie, and I also worked in the studio. (Fashion designer No. 4, Questionnaire 2, Interview, 2016)

Social networks

All the respondents strongly insisted that their personal and professional networks influenced their business success. Most of the persons that we met mentioned the critical importance of relationships and networks for doing business in Beirut. In fact, a recurrent advantage of Beirut mentioned in interviews is the social capital and the availability of contacts, which offers access to knowledge. For most of the designers, the fact that Lebanon is a "small country" made things "easier" for them, in particular regarding setting up their company, contacting suppliers and having access to tailors. Personal networks and personal social capital through family, friends and close acquaintances have provided financial support and moral support, as mentioned above, and have also contributed to access to knowledge and to establishing a client base. Social capital is important, particularly for young emerging designers. Indeed, a strong social network for knowledge-sharing often helps them in their start-up period: Also, another benefit is that the market is small here in Beirut. It's a very small community so we all know each other.
So, through friends and family, you are immediately recognized, especially if you do something different, something new, people recognize you right away. So they recognize your work, your talent, and that opens up a lot of doors in the market here. So that's really important, speaking of a very local side. (Fashion designer No. 4, Questionnaire 2, Interview, 2016)

5. Main challenges in the Beirut fashion industry

While many fashion designers have been successful, as mentioned above, many challenges nonetheless make it difficult for the fashion industry to progress in Beirut. First and foremost, many actors indicate the lack of sufficient government support. While this is not always necessary, it can clearly be useful, as many international cases have shown for Europe or North America (Yagoubi and Tremblay, 2016).

Government and fashion

Unlike in other countries, there is little institutional support for fashion designers in Beirut and for the creative sectors in general. The majority of the persons interviewed highlighted the problem of the absence of a clearly stated and government-endorsed national strategy for creativity in Beirut. Most of the fashion designers that we met repeat that "the Lebanese Government has other priorities." In fact, the Lebanese economy is often described as open and liberal, with a modern outlook and minimal state intervention (Leenders, 2012; Marseglia, 2004). This has been considered an advantage, since the private sector has been the driving force behind Lebanon's economic development (Ahmed and Julian, 2012). The Lebanese Government is hindered by two main issues. First, over the last 30 years, Lebanon has been considered a fragile country that has faced many internal and external shocks: the civil war, the war of 2006, the Syrian civil war since 2011, a series of events that weakened the government (Raphaeli, 2009, p. 124).
Second, Lebanese society is organized along sectarian lines of 18 recognized religious communities that each have their own political leaders and social institutions. Accordingly, citizens have historically depended on sectarian leaders more than on any national government (Welsh and Raven, 2006). Hindered by those issues, the Lebanese state has a relatively weak capacity. The post-civil war Lebanese state has been able to accomplish little in the way of rebuilding public services or dealing with socio-economic problems (Nagel and Staeheli, 2016). Public action in favor of creativity has thus developed very timidly. In fact, in the government's view, the creative sectors, and especially fashion design, are successful and do not need government support, as mentioned by one of the government officers interviewed: Fashion is a very strong sector in Lebanon. Beirut is a big shopping destination for the Gulf countries. We have also important international fashion designers like Elie Saab or Zuhair Mourad. And we think they are making good work and good money. Our government has other priorities especially with the Syrian civil war and the refugees. Honestly, fashion design doesn't need our help. (Organization No. 3, Questionnaire 1, Interview, 2016) Asked if the Lebanese Government is supportive of the sector, a fashion designer answered: "I don't think they do. The government […] we don't have a government, so […] that doesn't work with Lebanese designers" (Fashion designer No. 1, Questionnaire 2, Interview, 2016).

Access to finance

Access to finance is another challenging issue for many fashion designers in Beirut. Most respondents have relied on their own funds or their families' resources. As a matter of fact, Lebanese entrepreneurs rely on family members to establish, develop and grow their enterprises (Fahed-Sreih et al., 2010). Most of the sector is supported only by personal funds: To be able to exist in fashion, firstly, it's not easy.
You need a lot of money to have a team, to be able to have the equipment, a workshop, to be able to sell enough, to be able to survive. It's very difficult to exist, so when the designer takes his diploma, there are several directions in which the creator goes. There will be many obstacles. But it depends on the strength of this person. He should rely on his own financing and resources. For me, without my parents, I would not be here at all. They helped me emerge, get started, etc. (Fashion designer No. 4, Questionnaire 2, Interview, 2016) We should mention here that some financing funds exist in Beirut, such as Kafalat, but most of the respondents consider that this fund is not made for their type of business. Kafalat is a Lebanese financial company with a public concern that assists SMEs in accessing commercial bank funding. Kafalat helps SMEs by providing loan guarantees based on business plans/feasibility studies that show the viability of the proposed business activity. But Kafalat targets mostly SMEs and innovative startups that belong to one of the following economic sectors: industry, agriculture, tourism, traditional crafts and high technology, and there are no specific funds for the creative sectors or for the fashion industry. In addition, "The focus of Venture Capital-based approaches is necessarily on high growth businesses, so with the exception of games and media it would seem that there are relatively few of these in the Lebanese creative sector" (Thelwall, 2012).

Absence of trade organizations

Another main difficulty or obstacle mentioned in the interviews is the absence of a trade organization or fashion council in Beirut that could represent the national fashion industry, organize events and promote young designers. This absence obstructs the development of the sector. The Lebanese Syndicate of Fashion Designers, while existent, seems to be dormant, with a lack of activities and of support for young designers.
In addition, internationally acclaimed Lebanese fashion designers such as Elie Saab prefer to be members of an international federation or syndicate such as the Fédération de la haute couture et de la Mode of Paris, because it is more interesting for their image and prestige. Moreover, there is a lack of concerted efforts from well-known designers to promote a common cause (Endeavor Lebanon, 2015). For some designers, this can be explained by the individualistic culture in Lebanon: The Lebanese is always very protective of what he has, he is not at all generous in what he knows, he does not want to share his knowledge, he is very jealous of someone else's success. Also, we have a high competition in the fashion design. So I think that this is also why we do not have a federation of fashion here in Lebanon. This is one of the reasons why the great creators of Lebanon have not been able to agree to create a federation. (Fashion designer No. 4, Questionnaire 2, Interview, 2016) This quote indicates that knowledge transfers appear to be difficult in Beirut, although fashion designers also indicate that these knowledge exchanges are crucial to developing a career, especially for the young designers, who need such access to knowledge. As indicated by Chakour (2001), the Lebanese perceive Lebanon as a place where there is a high value placed on self-sufficiency, individualism and personal initiative. Because of this, the Lebanese do not generally rely on the government or any organization to provide for their well-being. Limited state capacity might be reflected in the booming social entrepreneurship (Doumit and Chaaban, 2012), and this may have contributed to developing a civil society consisting of NGOs (Nagel and Staeheli, 2016), the "NGO-ization" of the country that was often referred to in our interviews.
This situation started during the civil war, which was a period of proliferation of NGOs and associations in response to the weakening governmental institutions and state and the rise in international development funding (Chaaban and Seyfert, 2012). Experts also mention the fact that the Lebanese market is too small and successful designers prefer to organize fashion shows in more international and prestigious locations. Fashion weeks are organized locally by private event planning firms or by NGOs such as Beirut Fashion Week. Also, the MENA Design Research Center initiated Beirut Design Week in 2012. The MENA Design Research Center is a non-profit organization based in Beirut. Founded in 2011, it remains one of the region's few institutions that focus on design as a multidisciplinary tool for social development and research. It promotes all Lebanese designers (fashion or others) by organizing workshops, talks and exhibitions for one week once a year. Some support is also offered by international organizations such as the British Council or la Maison Méditerranéenne des Métiers de la Mode. Other organizations that repeatedly came up in interviews are CSB and the Starch organization. In fact, the limited institutional support for fashion designers and for the creative sectors in general encouraged some fashion designers to launch projects to help and support the sector. The two main projects are CSB and the Starch organization.

Creative Space Beirut (CSB)

One of the issues pointed out by some of the young designers is the high cost of fashion education in Lebanon. In fact, all the graduate and undergraduate diplomas in fashion design existing in Lebanon are offered by private schools or universities such as ESMOD or the Lebanese American University, and the fees are expensive. "From Elie Saab to Zuhair Murad and Reem Acra, it seems that expensive design schools were essential for Lebanon's renowned fashion designers" (Aboulkheir, 2015).
In addition, being a Lebanese fashion designer has traditionally been restricted to those who could afford to get their education and careers started abroad, or who have the networks within Lebanon to begin their hometown fashion houses with an established clientele base. A substantial percentage of the talent could be going to waste because of the lack of free design education in Lebanon (Rahhal, 2017). In an effort to break the rule and allow "underprivileged" talents to follow in the footsteps of these style giants, in 2011 Sarah Hermez, a Lebanese designer, decided to co-launch Beirut's first free fashion school with her New York-based former professor, Caroline Simonelli. CSB describes itself on its website as "a free school in fashion design providing quality creative design education to talented individuals who lack the resources to pursue a degree at increasingly costly institutions of higher learning" (www.creativespacebeirut.com). CSB was created in order to provide highly talented but resource-limited young people of all backgrounds with the means, the knowledge and the place to grow their skills and passion for design.

Starch foundation

Starch is a non-profit organization founded by Rabih Kayrouz, an internationally renowned Lebanese fashion designer, and Tala Hajjar, PR and marketing manager in various fashion and jewelry houses, in collaboration with Solidere, the Lebanese corporation responsible for the reconstruction of downtown Beirut after the end of the civil war in 1990. Starch is an incubation program that helps shape and promote the work of young emerging Lebanese designers. It is an annual program and a rotation of debut collections where four to six young designers are selected each year. The designers are guided through the process of developing their collections, as well as promoting them (communication, marketing, branding and press). These collections are then presented for a period of one year at the Starch boutique.
Throughout their one year at Starch, the designers also get the chance to participate in design-related workshops, seminars and collaborations. Most of the designers who participated in the Starch program acknowledged that it was important for their careers, especially the mentoring; also, the fact that they were showcased in the Starch boutique helped them acquire marketing skills and brand recognition: Starch was a very good experience because I realized the difference between working for a designer and working on your own. So already, the fact of existing in the market, it helped me to better understand my customers' needs, to know how to sell my product. Also, I had the chance to meet the press, the press is very important, too, especially in Lebanon. It's a very small market, so everyone knows each other. So, Starch, was really a school, as such. (Fashion designer No. 7, Questionnaire 2, Interview, 2016)

6. Conclusion and limits to the research

The main objective of our research was to explain the success of the fashion sector in Beirut and analyze the factors that make Beirut a fashion hub. As shown above, various characteristics explain this success (Beirut's open and diversified culture in comparison with the rest of the Middle East, a few "stars" but also a network of designers, tourism, etc.). Also, the role of the different stakeholders in setting an environment that encourages creativity in Beirut has been shown to support the fashion ecosystem. Our research thus highlights the importance of the local culture, knowledge exchanges and lifestyle in shaping creativity and designers' careers in Beirut, and also in explaining the international success of a certain number of these designers. Our research results show that the fashion industry in Beirut benefits from many advantages: the local culture, the quality of life, and the dynamics of other related sectors such as tourism, including particularly knowledge exchanges with these sectors.
These knowledge exchanges feed into the fashion designers' careers and act as the "engine" of the international and regional success of some Lebanese fashion designers, which in turn has an impact on local and regional development (Secundo et al., 2015). In fact, our interviews indicate that Lebanese fashion is largely shaped by the country and its cultural diversity. Consequently, the findings of our study confirm the importance of non-market factors, mentioned in the literature on the geography of the creative industries, for the development of the fashion industry, including knowledge exchanges between individuals and firms. Furthermore, our results confirm that creativity is strongly related to local culture. Consequently, our study enhances the existing knowledge on creativity, geographic location and the fashion industry in a non-western context. In addition, in our research, we focused on the product design step, rather than on manufacturing and commercialization, which have tended to be the focus of past research on the fashion industry. Our results also show that the future development of this industry is limited by the absence of governmental or institutional support as well as by the absence of intermediary organizations such as professional associations, trade organizations or a fashion council, which could represent the national fashion industry, organize events, encourage more knowledge exchanges and promote young designers. In these difficult times (civil war in Syria, brain drain, etc.), there is a need to focus on the regeneration of Lebanon and to plan for the role it will play in the new globalized economy, especially in light of the recent emergence of the creative sectors as an important economic sector. Consequently, we recommend the creation of a fashion organization in Lebanon, chaired by a Government Minister and including the main Lebanese fashion designers.
The organization would be responsible for developing a detailed action plan to support the sector, a plan focusing on the young generation of designers, creating a common branding identity and opening more markets. We need to mention a few limits of this research. The main limit is the fact that we may not have accessed all fashion designers active in Beirut and Lebanon, nor a representative sample, as this is quite difficult to attain. Amongst the interesting dimensions of the fashion industry in Beirut that would require more attention is the role played by material factors, such as historic buildings, low-cost rents, mixed-use zoning and public space. These elements were put forward as contributing to nurturing and supporting creativity in some specific neighborhoods in Beirut, such as Achrafieh or Mar Mikhael, but more research would be needed to show the exact role of these various factors. Finally, it would be interesting to compare the situation of fashion design in Beirut and Lebanon more systematically with that of other cities and countries, something which we hope to be able to do in future work.
Association of Major Adverse Cardiac Events and Beta-Blockers in Patients with and without Atherosclerotic Cardiovascular Disease: Long-Term Follow-Up Results of the T-SPARCLE and T-PPARCLE Registry in Taiwan

Beta-blockers are widely used, but the benefit is now challenged in patients at risk of atherosclerotic cardiovascular disease (ASCVD) in the present coronary reperfusion era. We aimed to identify the risk factors of a major adverse cardiac event (MACE) and the long-term effect of beta-blockers in two large cohorts in Taiwan. Two prospective observational cohorts, including patients with known atherosclerosis cardiovascular disease (T-SPARCLE) and patients with at least one risk factor of ASCVD but without clinically evident ASCVD (T-PPARCLE), were conducted in Taiwan. The primary endpoint is the time of first occurrence of a MACE (cardiovascular death, nonfatal stroke, nonfatal myocardial infarction, and cardiac arrest with resuscitation). Between December 2009 and November 2014, with a median 2.4 years follow-up, 11,747 eligible patients (6921 and 4826 in T-SPARCLE and T-PPARCLE, respectively) were enrolled. Among them, 273 patients (2.3%) met the primary endpoint. With multivariate Cox PH model analysis, usage of beta-blockers was lower in patients with MACE (42.9% vs. 52.4%, p < 0.01). In patients with ASCVD, beta-blocker usage was associated with lower MACEs (hazard ratio 0.72; p < 0.001), but not in patients without ASCVD. The event-free survival of beta-blocker users remained higher during the follow-up period (p < 0.005) of ASCVD patients. In conclusion, in ASCVD patients, reduced MACE was associated with beta-blocker usage, and the effect was maintained during a six-year follow-up. Prescribing beta-blockers as secondary prevention is reasonable in the Taiwanese population.

Introduction

Risk factors predicting major adverse cardiac events (MACEs) are similar in patients with and without atherosclerotic cardiovascular disease (ASCVD) [1,2].
By controlling hypertension, hyperlipidemia and diabetes mellitus, and through lifestyle modification, physicians aim to lower MACE occurrence. Several new medications are proven to lower MACE in primary and secondary prevention, such as sodium-glucose cotransporter 2 (SGLT2) inhibitors [3], glucagon-like peptide-1 (GLP1) receptor agonists [4], and statins, while others have been shown to be non-beneficial. The role of beta-blockers is now under challenge. Beta-blockers were recommended to lower mortality or cardiovascular events in patients with acute coronary syndrome (ACS) [5][6][7], silent ischemic heart disease [8], stroke [9,10], and peripheral artery disease (PAD) [8]. However, most of those studies were conducted in the pre-percutaneous coronary intervention era [11,12], and newer cohort studies have shown limited benefit in decreasing mortality in ACS patients without heart failure [13,14]. Long-term benefit in survival and cardiovascular events was also non-significant in post-MI patients in a three-year follow-up [15]. A recently published meta-analysis showed no association between beta-blockers and all-cause mortality in the present coronary reperfusion era [16]. For primary prevention, the usage of beta-blockers was also suggested in some guidelines [17], since lower blood pressure per se could be beneficial for stroke prevention, heart failure, and CVD [8,18]. However, there have been few studies targeting the association between composite MACE and beta-blockers in both primary and secondary prevention. We aimed to identify the risk factors of MACE and the long-term effect of beta-blockers in two large cohorts in Taiwan.

Inclusion and Exclusion Criteria (the T-SPARCLE and T-PPARCLE Registry)

This study was conducted by the Taiwan Clinical Trial Consortium for Cardiovascular Diseases (TCTC-CVD), using the Taiwanese Secondary Prevention for patients with AtheRosCLErotic disease (T-SPARCLE) and Taiwanese Primary Prevention for AtheRosCLErotic disease (T-PPARCLE) Registry.
The study design was published elsewhere [19]. Briefly, these two registries were initiated at 14 hospitals (eight medical centers and six regional hospitals) in order to recruit and follow a large population with or without atherosclerotic cardiovascular disease (ASCVD). These two cohorts included men and women aged >18 years. The T-SPARCLE Registry enrolled patients with evidence of ASCVD, which included (1) coronary artery disease (CAD, evidenced by cardiac catheterization examination, having a history of myocardial infarction, or with angina showing ischemic electrocardiogram changes or positive response to stress test); (2) cerebrovascular disease: cerebral infarction or intracerebral hemorrhage (excluding intracerebral hemorrhage associated with other diseases); (3) transient ischemic attack (TIA) with carotid artery ultrasound confirming atheromatous change with more than 70% blockage; or (4) peripheral atherosclerosis (symptoms of ischemia and confirmed by Doppler ultrasound or angiography). The T-PPARCLE Registry enrolled patients with no evidence of ASCVD but with at least one of the following risk factors: diabetes mellitus (DM), dyslipidemia, hypertension, chronic kidney disease (CKD), smoking, elder age (men > 45 years old, women > 55 years old), family history of premature CAD (men < 55 years old, women < 65 years old), and obesity (waist circumference: men > 90 cm, women > 80 cm). Patients were defined as having dyslipidemia if one of the following criteria was met: total cholesterol (TC) > 200 mg/dL; LDL-C > 130 mg/dL; TG > 200 mg/dL; men with HDL-C < 40 mg/dL or women with HDL-C < 50 mg/dL; or under lipid-lowering therapy. Hypertension and diabetes were diagnosed following conventional definitions and confirmed by the physicians who recruited the study participants. Chronic kidney disease was defined as an estimated glomerular filtration rate (eGFR) < 60 mL/min/1.73 m².
The exclusion criteria were patients with (1) other serious heart diseases; (2) ≥ New York Heart Association functional class III heart failure; (3) life-threatening malignancy; (4) treatment with immunosuppressive agents; (5) other atherosclerotic vascular diseases with unknown disease type; (6) two or more statins at enrollment; (7) chronic dialysis; or (8) any condition or situation which, in the opinion of the investigators, might not be suitable for this registry study. Eligible patients who fulfilled the enrollment criteria at the screening visit were followed at six- and twelve-month intervals and every year thereafter, and those with follow-up time <1 year but without MACE were also excluded. Written informed consent was obtained from each patient included in the study. The study protocol conformed to the ethical guidelines of the 1975 Declaration of Helsinki and was approved by the Taiwan Joint Institutional Review Board for each participating hospital (JIRB number 09-S-015).

Data Collection

The baseline characteristics, laboratory data and medication use were collected at the time of enrollment. During each follow-up, clinical endpoints, vital signs, concurrent medication, laboratory data, and other relevant clinical information were recorded. The patient's demographic data, major vascular risk factors, previous disease history, and medications were collected according to a predetermined protocol. The body mass index (BMI) was calculated as the body weight divided by the square of the body height (kg/m²). Laboratory test results, including creatinine levels, total cholesterol (TC), triglyceride (TG), LDL-C and HDL-C, were obtained after enrollment. Non-high-density lipoprotein cholesterol (non-HDL-C) levels were calculated by subtracting the HDL-C from the total cholesterol (TC) levels. Serum creatinine was used to calculate eGFR by using the Modification of Diet in Renal Disease equation.
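As an illustration (not taken from the registry protocol), the four-variable MDRD study equation mentioned above, together with the study's CKD cutoff, can be sketched in Python. The coefficient set used here is the IDMS-traceable re-expression (leading factor 175); this is an assumption, since the paper does not state which version of the equation was applied:

```python
def egfr_mdrd(scr_mg_dl: float, age_years: float, female: bool, black: bool = False) -> float:
    """Estimated GFR (mL/min/1.73 m^2) via the 4-variable MDRD study equation.

    Assumed IDMS-traceable coefficients for illustration:
    175 * Scr^-1.154 * age^-0.203, * 0.742 if female, * 1.212 if Black.
    """
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr


def has_ckd(egfr: float) -> bool:
    """CKD as defined in this study: eGFR < 60 mL/min/1.73 m^2."""
    return egfr < 60.0
```

For example, a 60-year-old man with serum creatinine 1.0 mg/dL gets an eGFR of roughly 76 and would not meet the study's CKD criterion, whereas the same creatinine in a 60-year-old woman yields roughly 57 and would.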
Outcome The primary endpoint was defined as the time to the first MACE event after enrollment. MACE is a composite endpoint, including (1) cardiovascular death; (2) nonfatal stroke (ischemic stroke, hemorrhagic stroke, TIA or VBI); (3) nonfatal myocardial infarction (non-ST elevation acute coronary syndrome and ST-elevation myocardial infarction); and (4) cardiac arrest with resuscitation. Statistical Analysis Patients were classified into two groups: with and without ASCVD. Qualitative variables were summarized by count and percentage and compared with the chi-square test. Quantitative variables were presented as mean ± standard deviation and analyzed with Student's t-test. In order to identify independent risk factors for MACE occurrence, risk factors with a p-value less than 0.1 were included in the multivariate Cox proportional hazards model. Kaplan-Meier survival analysis was conducted separately in patients with and without ASCVD, and the difference was analyzed by the log-rank test. For missing data, we used multiple imputation (PROC MI procedure in SAS). The predictor variables in the imputation model included BMI, HDL-C, non-HDL-C, eGFR, heart failure and diabetes, as well as important non-missing variables such as age, gender, and MACE. In the study cohort, the history-of-heart-failure variable had missing values in less than 5% of patients, while BMI, HDL-C, non-HDL-C, eGFR and history of diabetes had missing values in more than 5%. The imputation step resulted in 20 complete data sets, each of which contains different estimates of the missing values for all 11,747 patients. After imputation, we used SAS/PROC PHREG to fit a Cox proportional hazards (PH) model for each dataset and then used SAS/PROC MIANALYZE to combine the results from the 20 Cox PH models. Risk Factors Associated with MACE in All Patients A total of 12,224 patients were enrolled in the T-SPARCLE and T-PPARCLE registries between December 2009 and November 2014.
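What PROC MIANALYZE does under the hood is combine the per-imputation coefficient estimates by Rubin's rules: the pooled estimate is the mean across imputations, and the total variance adds the mean within-imputation variance to an inflated between-imputation variance. A minimal stand-alone sketch (the 20 Cox fits themselves are assumed to have been run elsewhere):

```python
def pool_rubin(estimates, variances):
    """Combine m per-imputation estimates by Rubin's rules.

    estimates: one point estimate (e.g., a log hazard ratio) per
    imputed dataset; variances: the matching squared standard errors.
    Returns (pooled_estimate, total_variance), where the total
    variance is T = W + (1 + 1/m) * B, with W the mean
    within-imputation variance and B the between-imputation variance.
    """
    m = len(estimates)
    qbar = sum(estimates) / m                        # pooled estimate
    w = sum(variances) / m                           # within-imputation
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between
    return qbar, w + (1 + 1 / m) * b
```

For three imputations with estimates 0.5, 0.6 and 0.7 and within-imputation variance 0.01 each, the pooled estimate is 0.6 and the total variance is 0.01 + (4/3)(0.01) ≈ 0.0233, noticeably larger than any single fit's variance because of disagreement between imputations.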
After excluding patients with other serious heart disease or functional class III or higher heart failure (n = 310), on dialysis (n = 131), or taking two statins (n = 17), 11,747 patients were included in the final analysis (6921 and 4826 in T-SPARCLE and T-PPARCLE, respectively). The median follow-up duration was 2.4 years, the same as the mean follow-up duration. Of all the enrolled patients, the mean age was 64.7 ± 11.8 years, and 62.8% were male. Among all the participants, 52.2% took beta-blockers. Multivariate Cox Proportional Hazards Model of Risk Factors Associated with MACE The results of the multivariate Cox PH model analysis are shown in Table 4. Although statistically non-significant, there was a trend of increased HR in the patients with a history of heart failure or under antiplatelet therapy. Kaplan-Meier Survival Analysis of Long-Term Follow-Up The Kaplan-Meier curves for MACE occurrence in patients with and without ASCVD are plotted in Figures 1 and 2. During the follow-up period in the ASCVD group, with a maximal interval of six years, the primary event rate was significantly lower in patients with beta-blocker use (p = 0.0024). The trend was not significant in patients without ASCVD (p = 0.8747). Discussion The T-SPARCLE and T-PPARCLE are two large registry cohorts initiated in several hospitals in order to analyze MACE in patients with and without ASCVD in Taiwan. During a median 2.4-year follow-up, beta-blocker use was the only significant independent protective factor for MACE in secondary prevention, but it was not associated with a reduced MACE in primary prevention.
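The Kaplan-Meier curves reported above are built from the product-limit estimator: at each distinct event time, survival is multiplied by (1 − events/at-risk), and censored patients simply leave the risk set. A minimal stdlib sketch (not the SAS code the study used):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times: follow-up time per patient; events: 1 = MACE observed,
    0 = censored. Returns a list of (t, S(t)) pairs at each distinct
    time where at least one event occurred.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths, j = 0, i
        while j < len(data) and data[j][0] == t:  # group ties at time t
            deaths += data[j][1]
            j += 1
        if deaths:
            s *= 1 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= j - i  # both events and censorings leave the risk set
        i = j
    return curve
```

With four patients, events at t = 1 and t = 3 and censorings at t = 2 and t = 4, the curve drops to 0.75 at t = 1 and to 0.375 at t = 3 (the second drop is larger because the censored patient at t = 2 shrank the risk set).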
It was also shown that the protective effect was maintained during the follow-up period. In both cohorts, older age and CKD were associated with MACE, and additional independent risk factors, including low BMI, DM, CHF, and high non-HDL-C, were found only in ASCVD patients. The role of beta-blockers in secondary prevention is controversial in the contemporary era, although it is still recommended in guidelines [5,7,17,20]. Since reperfusion therapy can limit the infarct size, the risks of recurrent MI, arrhythmia, and heart failure have declined [21]. For post-myocardial infarction patients without heart failure, some previous studies showed that prolonged beta-blocker treatment was not associated with reduced mortality [14,22]. In a propensity score-matched analysis from the Reduction of Atherothrombosis for Continued Health (REACH) registry, beta-blockers were not associated with a lower risk of MACE among patients with known prior MI, known CAD without MI, or with CAD risk factors [23]. Although lower mortality was observed in patients receiving beta-blockers within one year after MI in the CLARIFY registry [14], the effect was not significant during further follow-up of up to five years. In contrast to these previous studies, the T-SPARCLE registry suggested that fewer MACEs were associated with beta-blocker usage. The major difference between these registries was the study populations, of which Asians contributed less than 20% in the REACH registry. In addition, the T-SPARCLE registry enrolled a relatively high percentage of patients with a history of myocardial infarction (74.8%) or diabetes mellitus (49.2%). Beta-blockers are known to be protective in ASCVD patients with diabetes [24]. Another registry of Asian people with previous myocardial infarction, the Korea Acute Myocardial Infarction Registry-National Institutes of Health (KAMIR-NIH), also suggested a beneficial effect of beta-blocker use on the one-year risk of cardiac death [25].
The multivariable analysis of the T-SPARCLE registry showed that beta-blocker use was the only independent protective factor, rather than statins, ACEI or ARB, or antiplatelet therapy. It was also shown that a significant event-free survival benefit remained during the follow-up period. It is well known that beta-blockers decrease oxygen demand through reductions in heart rate, blood pressure, and contractility. They also prolong the diastolic phase and increase coronary perfusion. Additionally, their antiarrhythmic effect and reduction of myocardial oxidative stress both contribute to their benefit [26]. Therefore, beta-blockers are recommended in the guidelines [5,7,17,20]. For patients with suspected CAD and stable angina symptoms, the benefit of beta-blockers has not been demonstrated in randomized controlled trials [2], but they are still recommended in current guidelines due to their anti-ischemic effect. In hypertensive patients, beta-blocker use has also been downgraded in current guidelines [8,17,18], since meta-analyses of previous studies failed to show a benefit in all-cause mortality, MI, or coronary heart disease [27,28]. Consistently, in the T-PPARCLE population, this study showed that beta-blocker use was not associated with decreased MACE in patients without ASCVD either. These hypertension treatment guidelines may even have led physicians in Taiwan to under-prescribe beta-blockers for patients with ASCVD; despite this, the present study could still show a protective effect of beta-blockers for ASCVD patients. There might be several explanations for the neutral effect of statins, ACEI/ARB, and antiplatelet therapy for secondary prevention of MACE in the T-SPARCLE registry. In univariate analysis, those with MACE were less likely to take statins. Although statistically non-significant, there was a trend of decreased HR in the patients under statin therapy in multivariate analysis. These findings indicate that statins still had a protective role.
However, since the LDL and non-HDL levels were already low in the T-SPARCLE population, the protective effect of statins might be fading out. Because ACEI/ARBs are independently associated with decreased mortality in all ischemic heart disease patients, current practice guidelines support the use of ACEI/ARB in patients with coronary artery disease without heart failure. However, a number of the cited trials were performed prior to the era of prevalent statin use. When accounting for statin use in those without heart failure, the additive effects of ACEI/ARBs in reducing CV mortality appear to be nullified. Using data from the REACH registry, it has been shown that the use of ACEI/ARBs was not associated with better outcomes in stable CAD outpatients without HF [29]. A meta-analysis of ten ACEI trials and five ARB trials showed that in CAD patients without HF, ACEIs, but not ARBs, decreased the risk of nonfatal MI, cardiovascular mortality and all-cause mortality, while both ACEIs and ARBs decreased the risk of stroke [30]. In the T-SPARCLE registry, patients with advanced heart failure were excluded, and nearly 70% of the patients had received statin therapy. These might explain why there was no significant effect of ACEI/ARB on MACE in the T-SPARCLE patients. Antiplatelet therapy is also indicated for secondary prevention of MACE, and its neutral effect in this study might be due to the high prevalence (85%) of usage in ASCVD patients. In contrast, more than one-third of T-PPARCLE patients were under antiplatelet therapy. Although statistically non-significant, there was a trend of increased HR in the patients under antiplatelet therapy. This implies that many clinicians in Taiwan still over-used antiplatelet medication in primary-prevention patients, although recently published guidelines do not recommend routine prescription of antiplatelets for primary prevention [31]. Several limitations of this study should be mentioned.
This study was a prospective observational study in the Taiwanese population, so the findings are not as powerful as those of a randomized controlled trial, and the study population was limited to an Asian population sample. We did not analyze the type and dose of beta-blockers. Among the enrolled people with ASCVD, CAD composed the main population (90%), followed by CVA (15%), while PAD contributed only 2%. Thus, the results might not reflect patients with all kinds of ASCVD. There were also some limitations of the T-SPARCLE Registry, which were discussed elsewhere [24]. Mainly, the limited detailed history of each patient and the exclusion of patients on dialysis or with advanced heart failure (NYHA Class III-IV) mean that the results should be extrapolated beyond the study population with caution. Conclusions In conclusion, in ASCVD patients, beta-blockers were associated with a lower rate of MACE occurrence, and the effect remained in long-term follow-up even in the post-reperfusion and statin era. However, the benefit of beta-blockers was not significant in non-ASCVD patients. Prescribing beta-blockers for secondary prevention coincides with the recommendation in current guidelines. The Ministry of Health and Welfare (project code: MOHW111-TDU-B-211-134002) has been the major contributor to this work since 2012. The Taiwan Society of Lipids and Atherosclerosis and the Taiwan Association of Lipid Educators also sponsored this project. However, the funders had no role in study design, data collection and analysis, the decision to publish, or preparation of the manuscript. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and approved by the Taiwan Joint Institutional Review Board for each participating hospital (JIRB number 09-S-015). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: Data available on request due to privacy and ethical restrictions.
Assessment of monthly physico-chemical properties and fish yields of two micro dams of Tigray Region, Northern Ethiopia This study was conducted to assess some physico-chemical properties and yields of two micro dams (Korrir and Laelay Wukro) located in the eastern zone of Tigray region (Northern Ethiopia). In each location, water samples and fish yield were examined once per month from October 2013 to March 2014. The examined physico-chemical parameters were dissolved oxygen (DO), electrical conductivity (EC), total dissolved solids (TDS), water temperature, pH and transparency. The results showed that the monthly recorded values of DO, EC and TDS decreased while water temperature and transparency increased over the studied months. The mean monthly pH value fluctuated across months. The monthly mean values of DO, pH, temperature, EC, TDS and transparency ranged 3.20-7.70 mg/L, 8.23-9.14, 15.3-22.2°C, 284-353 μS/cm, 1.47-2.11 mg/L and 19.00-39.00 cm in Korrir dam, while in Laelay Wukro they were 3.44-5.70 mg/L, 7.93-8.61, 19.4-24.2°C, 209-462 mg/L, 110-270 μS/cm and 18.00-40.20 cm, respectively. Fish yield was assessed by morphoedaphic index empirical models. There was no significant (p<0.05) difference in fish yield, and the average estimated productivity of Korrir and Laelay Wukro dams was 13.99 and 14.47 quintal/ha/year, respectively. Therefore, the physico-chemical properties of these micro dams were good for fish production, and fish farmers should practice utilization of the micro dams through good management of the water bodies.
INTRODUCTION Water quality refers to all physical, chemical and biological characteristics of the water and plays an important role in the growth and survival of aquatic organisms. Understanding the relationship between water quality and aquatic productivity is a pre-requisite for obtaining optimum growth and production of aquatic organisms including fish. The water used for aquaculture and fish production will not give the desired production unless the prevailing water quality parameters are optimum for the organism under culture (Bisht et al., 2013). Some of the physico-chemical parameters that are regularly measured within water bodies such as ponds include temperature, dissolved oxygen (DO), alkalinity, hardness, pH, electrical conductivity (EC), turbidity, total dissolved solids (TDS) and biological oxygen demand (USDA, 1996). Dissolved oxygen is required for respiration by most aquatic animals. Apart from this, dissolved oxygen combines with other important elements such as carbon, sulphur, nitrogen and phosphorus, which could have been toxicants in the absence of oxygen in the water bodies, to form carbonate, sulphate, nitrate and phosphate, respectively, which constitute the compounds required by aquatic organisms for survival (Araoy, 2009). Each species of fish has a preferred or optimum temperature range where it grows best. At temperatures above or below the optimum, fish growth is affected, and mortalities may occur at extreme temperatures (Piper et al., 1982). Turbidity determines light penetration in water. This in turn affects the temperature of the water and the amount of vegetation and algae that will grow in the pond, thus affecting the rate of photosynthesis and primary productivity (USDA, 1996; Environmental Review, 2008).
The annual fish production potential of Ethiopia, based on empirical methods using individual lake surface area and mean depth of major water bodies, was estimated to be 30,000 to 51,000 tons (FAO, 2003). The yield of fish, seasonal changes in fish density and the factors which influence them are important for the sustainable management of fisheries in reservoirs (Tadesse et al., 2015). Currently, the government of Ethiopia is engaged in harvesting rain water for the purpose of irrigation. Thus, in Tigray, more than 70 micro dams have been constructed for enhancing food production through integrated irrigation and fish farming (Tadesse et al., 2008). The government has a strategy to make the fishery and aquaculture sector one of the methods to reduce poverty in the country. Considerable information exists on the limnology of the natural lakes of Ethiopia, but little is known about the country's dams, rivers and reservoirs (Zenebe et al., 2012). Atakilt and Tadesse (2012) studied population dynamics and condition factor of Oreochromis niloticus L. in Korrir and Laelay Wukro dams. Tadesse et al. (2015) also reported the condition factor and yield of O. niloticus L. in five tropical small dams, with samples collected in four months (December 2012 and March, April and May 2013). Araoy (2009) and Ovie et al. (2011) reported that monthly physico-chemical variations in water have an impact on fish growth and production potential. Therefore, recent information on monitoring physico-chemical properties and estimating potential yields of these water bodies is useful to understand fish production potential, status and sustainability, and to select the most suitable way of area utilization. In this study, the main physico-chemical water parameters (DO, pH, temperature, EC, TDS, transparency) and the potential fish yields of the Korrir and Laelay Wukro micro dams were examined.
Physico-chemical properties The main physico-chemical parameters were measured from October 30, 2013 to March 30, 2014, once per month, at three representative stations in the earthen micro dams, and the average of the recorded readings was used for each month. The pH was determined with an Elmetron pH meter (Model: CP-411), calibrated using a buffer solution of pH 7. Electrical conductivity and total dissolved solids were determined using an Elmetron conductivity meter (Model: CC-411). Dissolved oxygen was measured using an Elmetron oxygen meter (Model: CCO-401). Surface water temperature was measured by the Elmetron oxygen meter installed with a temperature electrode. The pH, conductivity, oxygen and temperature electrodes were immersed 6 cm below the water surface for about 4 to 5 min until a stabilized reading was obtained. Transparency was measured by lowering a Secchi disk (black and white) by hand into the water to the depth at which it vanished from sight. The depth at which the disk disappeared (A) and the depth at which it reappeared (B) were recorded, and transparency was calculated as their mean, (A + B)/2. Fish yield estimation from morphoedaphic index (MEI) The MEI has been used to estimate fish yield and production in reservoirs and lakes. Potential fish yields were estimated from three different empirical models developed by Henderson and Welcomme (1974), Toews and Griffith (1979) and Marshall (1984). The surface area (A) and mean depth of each dam were taken from the literature and used in the empirical yield models.
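The two computations in this section reduce to one-liners. In the sketch below, the Secchi and MEI definitions follow standard limnological practice (MEI = TDS over mean depth, after Ryder), while the yield-model coefficients `a` and `b` are placeholders, since the coefficients of the three cited models are not reproduced in this text:

```python
def secchi_transparency(depth_vanish_cm, depth_reappear_cm):
    """Secchi transparency: mean of disappearance (A) and reappearance
    (B) depths, (A + B) / 2, in cm."""
    return (depth_vanish_cm + depth_reappear_cm) / 2

def morphoedaphic_index(tds_mg_l, mean_depth_m):
    """MEI: total dissolved solids (mg/L) divided by mean depth (m)."""
    return tds_mg_l / mean_depth_m

def potential_yield(mei, a, b):
    """Generic empirical yield model of the form Y = a * MEI**b.

    a and b are model-specific coefficients (placeholders here;
    each of the three cited models publishes its own pair).
    """
    return a * mei ** b
```

For example, a disk vanishing at 30 cm and reappearing at 26 cm gives a transparency of 28 cm, matching the magnitude of the values reported in the Results.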
Statistical analysis One-way ANOVA was used to test for significant variations between means of monthly physico-chemical parameters and fish yields using SAS (SAS Institute, Cary, NC, version 9.0). Duncan's multiple range test was used to identify significant differences at p < 0.05. Dissolved oxygen Mean DO values of 4.77 and 4.85 mg/L were observed in Korrir and Laelay Wukro dams, respectively, and monthly DO decreased from October to March in both dams (Table 1). Tadesse et al. (2015) reported a higher average DO value of 7.03 mg/L from five tropical small dams (Korrir, Laelay Wukro, Mai Nigus, Mai Sessea and Mai Seye dams). Atakilt and Tadesse (2012) also reported an average DO value of 7.19 mg/L for Korrir and Laelay Wukro dams based on one year of data, which is higher than the current results. Mustapha (2008) observed low DO concentrations due to high temperature and a high rate of decomposition in the dry season, which is similar to the current findings. Boyd (1979) reported that DO concentrations of 3 to 12 mg/L promote the growth and survival of fish in reservoirs. A minimum DO of 5.0 mg/L was reported for tropical fishes by Saloom and Duncan (2005). Singh (2007) reported that a minimum DO concentration of 4 mg/L should be maintained in fish ponds at all times. The water quality for any fish cultured in a tropical region must be such that the DO concentration is not less than 3 mg/L (Robert, 2007). Brian (2006) and Ita et al. (1995) noted that an increased level is needed to support an increase in metabolic rates and reproduction. Dastagir et al. (2014) reported that the DO requirements of any fish species vary with the age, temperature and concentration of minerals in water. pH The mean pH values recorded in the present study were 8.19 and 8.57 in Laelay Wukro and Korrir dams, and the values fluctuated across months (Table 1). This mean value, which was within the pH range of 6.5 to 9.0 reported by Boyd and Tucker (1998) and Ali et al.
(2000) is suitable for diverse fish production. Any variation beyond the acceptable range could be fatal to many aquatic organisms (Iqbal et al., 2004). Depending on the species, fish can generally be vulnerable or resistant to high or low pH (Ovie et al., 2011). Warm waters develop increased pH levels during daylight through photosynthesis, and pH declines through respiration at night (King, 1970). Water temperature The mean temperature values (20.07 and 21.98°C in Korrir and Laelay Wukro dams, respectively) increased across months and were within the range (20-30°C) reported by Boyd (1990). Atakilt and Tadesse (2012) found an average temperature of 18.88°C for Korrir and Laelay Wukro dams based on one year of data. The above temperature range can still be considered good for growth and body metabolism of the fishes in the water body. This is because the intensity of metabolism of warm-water fish is closely associated with water temperature; that is, the higher the temperature, the higher the metabolic rate (Ovie et al., 2011). Variations in water temperature in the dry season can be attributed to intensified heat radiation and the effect of harmattan (Olele and Ekelemu, 2008). Bisht et al. (2013) observed wide fluctuations in temperature in earthen ponds, cemented ponds and open water bodies, which might be due to variation in the volume of water. EC EC, with mean values of 284 and 358 µS/cm in Korrir and Laelay Wukro dams, was consistent with the range of 20 to 1500 µS/cm reported by Boyd and Tucker (1998) and Ali et al. (2000) for natural waters, but higher than the value of 201.56 µS/cm reported by Tadesse et al. (2015). The observed EC values decreased over the studied months. Roshinebegam and Selvakumar (2014) reported that higher EC values in some water bodies may be due to low flow, reduced volume and stagnation of water. TDS Mean TDS values of 169 and 194 mg/L were obtained in Korrir and Laelay Wukro dams, respectively (Table 1). Tadesse et al.
(2015) reported a lower average TDS value of 101.28 mg/L from five tropical small dams (Korrir, Laelay Wukro, Mai Nigus, Mai Sessea and Mai Seye dams). A maximum TDS value of 400 mg/L is permissible for a diverse fish population (Alikunhi, 1957). TDS concentrations below 200 mg/L promote even healthier spawning conditions (Deepak and Singh, 2014). Imoobe and Oboh (2003) and Olele and Ekelemu (2008) stated that higher TDS values in water bodies during the dry season may be due to the evaporation of water, leaving a higher concentration of salt within a smaller volume of water. Transparency The mean transparency values were 28.17 and 29.55 cm in Korrir and Laelay Wukro dams, respectively, which are lower than the value of 39.75 cm reported by Tadesse et al. (2015) in five tropical small dams. Boyd (1982) reported a range of 15-45 cm for fish growth. Mustapha (2008) reported a range of Secchi disc visibility between 62 and 162 cm, which is higher than the current results. These values reflect the depth of light penetration, which is good for a shallow reservoir, as plankton growth makes food available to fish. Monthly transparency values increased from October to March in the dams. Mustapha (2008) and Sachinkumar Patil et al. (2013) reported that the increase in transparency during the dry season in water bodies may be due to gradual settlement of suspended particles at the bottom of the reservoir. Table 2 shows the potential yields estimated for the dams. The lowest production yields (8.81 and 11.24 quintal/ha/year) were obtained with the Henderson and Welcomme (1974) model, while the highest values (20.76 and 18.92 quintal/ha/year) were obtained with the Marshall (1984) model for Korrir and Laelay Wukro, respectively. The averages of the three empirical yield models were 13.99 and 14.47 quintal/ha/year, which are higher than the fish yield values (4.35 and 4.53 quintal/ha/year) obtained with the Schlesinger and Regier (1982) model reported by Tadesse et al.
(2015) in the Korrir and Laelay Wukro dams, respectively. Conclusion The water physico-chemical properties of the dams were good for fish production and were within the ranges already documented, although they showed variations across months. Therefore, fish farmers should practice utilization of the dams through good management of the water bodies. Figure 1. Map of the study areas. Table 1. Monthly physico-chemical variations of the dams in the studied months. Mean values in a column with the same letter are not significantly different (p < 0.05). Table 2. Estimation of fish yields of the dams using different empirical models in the studied months.
Gut microbiome reflect adaptation of earthworms to cave and surface environments Background Caves are special natural laboratories for most biota, and cave communities are unique. Establishing populations in caves is accompanied by modifications in adaptability for most animals. To date, little is known about the survival mechanisms of soil animals in cave environments, even though they play vital roles in most terrestrial ecosystems. Here, we investigated whether and how gut microbes contribute to the adaptation of earthworms by comparing the gut microbiome of two earthworm species from the surface and caves. Results Two dominant earthworm species inhabited the caves, i.e., Allolobophora chlorotica and Aporrectodea rosea. Compared with their counterparts on the surface, A. rosea significantly decreased in population in the cave, while A. chlorotica did not change. Microbial taxonomic and phylogenetic diversities between the earthworm gut and the soil environment were asynchronous with functional diversity: functional gene diversity was always higher in the earthworm gut than in soil, whereas species richness and phylogenetic diversity were lower. In addition, the earthworm gut microbiome was characterized by higher rrn operon numbers and lower network complexity than the soil microbiota. Conclusions The different fitness of the two earthworm species in the cave is likely to coincide with their gut microbiota, suggesting that interactions between host and gut microbiome are essential for soil animals in adapting to new environments. The functional gene diversity provided by the gut microbiome is more important than taxonomic or phylogenetic diversity in regulating host adaptability. A stable and highly efficient gut microbiome, including microbiota and metabolism genes, encoded potential functions required by the animal hosts during the processes of adapting to and establishing in the cave environments.
Our study also demonstrates how the application of microbial functional trait analysis may advance our understanding of the animal-microbe interactions that aid animals in surviving in extreme ecosystems. Supplementary Information The online version contains supplementary material available at 10.1186/s42523-022-00200-0. Background Terrestrial caves differ from surface habitats and are regarded as "natural laboratories" [1]. Organisms in caves are subjected to strong selective pressures that are rather different from those on the surface, such as constant darkness and static climatic conditions [2]. Further, food resources for animals are depleted in caves compared to the surface, which relies on photosynthesis for primary production [3]. Instead, food webs in caves are based on microbes, leaving the interactions in cave soil food webs hitherto unknown [4]. Cave-dwelling animals therefore have to modify their feeding strategies in ways different from surface animals to cope with food deficiency [5]. On the surface, earthworms are common ecosystem engineers which provide a variety of ecosystem functions and services [6]. Some of them are able to establish stable populations in subterranean caves [7,8]. Species that survive in caves need to adjust to a rather different environment compared with those living in other ecosystems. Compared to cave arthropods, which are usually pale and blind [9], cave earthworms appear similar in external morphological characters to their relatives inhabiting surface layers. However, the adaptive strategies of earthworms in cave environments and the manner in which different earthworm species sustain populations in these resource-limited environments remain obscure. Gut-associated microbes are considered "nutrient factories" that increase host fitness in many ways [10][11][12]. With the help of diverse functional genes encoded by gut microbes, the hosts may digest a wide range of compounds, thereby surviving in unfavorable environments [13].
Reciprocally, gut microbiome are regulated by the physiological conditions and feeding diets of the hosts [14,15]. For example, the community structure and functional potential of the gut microbiome differ among gut compartments and are associated with the feeding strategies of the hosts [16]. Besides, species-specific effects of the hosts on gut microbiome have also been discovered for soil animals [17]. Therefore, cave earthworms are presumed to harbor specific gut microbiome which may help them adapt to the cave environment, and the same earthworm species may reshape their gut microbiome when living in a different environment. Here, we asked whether the gut microbiome of cave earthworms differs from that of the respective earthworm species inhabiting the surface, and whether the gut microbiome may provide functions for earthworm adaptation to the optimal environments (i.e., surface or cave). We explored patterns of gut bacterial communities and predicted functional genes encoded by gut bacteria, as well as the copy numbers of the 16S rRNA gene, to reflect the ecological strategies of bacterial communities in nutrient exploitation [18]. We also examined networks of gut bacterial communities, which suggest interactions and stability of the gut microbiome [19]. We hypothesized that (1) gut bacterial communities differ between cave and surface earthworm populations in taxonomic, functional and phylogenetic diversity, and the difference depends on the species' adaptability to environments; (2) the assembly processes of the gut microbiome are more deterministic than those of the soil microbiota, since cave earthworms are likely to select bacteria of certain functions for adaptation to the specific environment; and (3) the gut microbiome is characterized by fewer fast-growing species and more stable networks when the earthworm hosts inhabit favorable environments.
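The "copy numbers of the 16S rRNA gene" used as a growth-strategy proxy are typically summarized as a community-weighted mean: each taxon's rrn copy number (e.g., from a database lookup) weighted by its relative abundance. A minimal sketch of that summary statistic (the input values below are hypothetical, not from this study):

```python
def community_weighted_rrn(abundances, copy_numbers):
    """Abundance-weighted mean 16S rRNA (rrn) operon copy number.

    abundances: relative abundance per taxon (need not sum to 1);
    copy_numbers: the matching per-taxon rrn copy counts.
    Higher values suggest a community skewed toward fast-growing,
    copiotrophic taxa, which tend to carry more rrn operons.
    """
    total = sum(abundances)
    return sum(a * c for a, c in zip(abundances, copy_numbers)) / total
```

A community split evenly between a 2-copy and a 6-copy taxon scores 4.0; shifting abundance toward the high-copy taxon raises the score, which is the direction of the gut-versus-soil contrast reported in the Results.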
Study sites and sampling The study sites are located in two interconnected cave systems, the Amatérská Cave and the Sloupsko-Šošůvské Caves, in the Moravian Karst Protected Landscape Area in the south-east part of the Czech Republic ( Fig. 1 and Table 1). The cave system is associated with the streams Sloupský potok and Bílá voda, which later merge into the Punkva River. The water mainly flows underground, sinking into the limestone bedrock massif. The gallery-like caves were originally formed by streams and have been connected with the surface layers via water flow. Nowadays they are situated in the vadose zone far from the streams, with the only water supply being infiltration of rainwater through soil and bedrock rifts, except during extreme flooding events. The two sampled cave systems are separated by a series of water siphons and are accessible only through an artificial corridor. No bats are present here, and input of organic matter is possible only via floods. Only the corridors of the Sloupsko-Šošůvské Caves (Site 1; Fig. 1 and Table 1) are connected directly with the surface. The sampling took place in the center of the caves, far away from the water siphons. Soils at all sites are of grey rendzina type, affected by nearby permanent or semi-permanent water streams. Vegetation of the surface sites is very similar, represented by Stellario nemorum-Alnetum glutinosae. Soil pH values of the cave and soil substrates ranged between 8.5 and 7.0, and soil organic carbon (SOC) between 7 and 17.1 g/kg [20]. Earthworms and soils from each site were sampled during 3-10 May 2016, i.e., in spring at the sampling sites and corresponding to the growing season for the earthworms, as evidenced by the casts they produced in the field. The sampling sites were surveyed for common soil animals, e.g., mesofauna (springtails) and macrofauna (earthworms), and the present study focused on the earthworms. In the cave, the soil was mixed with earthworm casts (Fig.
1A), thus the sampled cave soil was a mixture of soil and earthworm casts. On the surface, soil sampling was conducted in an area dominated by a specific earthworm species. When sampling, five 5 × 5 m quadrats were randomly set up at each site, with a distance of at least 10 m between them (Fig. 1B, C), and eight soil cores were collected and mixed into one composite sample for each quadrat. The soil cores were taken with a 2.5 cm diameter cylinder to a depth of 10 cm, or until reaching the rock. This method yielded at least 100 g of soil per core. After the soils were collected, the earthworms were dug out from each quadrat. In each quadrat, earthworms were hand-sorted, preserved in their living soils and transported with ice to the lab. In the lab, the earthworms were fixed and stored in absolute ethanol and then identified to species level prior to molecular gut content analysis. The abundance of earthworms per site was calculated as the mean earthworm density across the quadrats. Only two earthworm species were found in the cave systems, i.e., Allolobophora chlorotica and Aporrectodea rosea in the Sloupsko-Šošůvské Caves ( Fig. 1, site 1) and the Amatérská Cave ( Fig. 1, site 2), respectively. A. chlorotica and A. rosea belong to the same family, i.e., Lumbricidae, and are both widespread in Europe [21]. Molecular gut content analysis Five individuals of earthworms from each site were dissected and used separately for DNA extraction during the molecular analysis. The earthworms were dissected aseptically under a stereomicroscope. An incision was made longitudinally along the body wall and the whole gut, from the clitellum to the anus, was removed and placed in a 1.5 mL Eppendorf tube.
Thereafter, total DNA of the gut contents as well as the soils was extracted using the FastDNA Spin Kit for Soil and the FastPrep Instrument (MP Biomedicals, Santa Ana, CA, USA). All steps were carried out following the manufacturer's instructions. The quality and quantity of the extracted DNA were verified with 1% agarose gel electrophoresis and a Nanodrop-2000 spectrophotometer (NanoDrop Technologies Inc., Wilmington, DE, USA), respectively. The V4 hypervariable region of the bacterial 16S rRNA gene was amplified and sequenced with a MiSeq sequencer at the University of Illinois-Chicago using the 16S rRNA gene V4 region primers (FWD: 5'-GTGYCAGCMGCCGCGGTAA-3'; REV: 5'-GGACTACNVGGGTWTCTAAT-3'), following the EMP protocol (https://earthmicrobiome.org/protocols-and-standards/16s/). Negative controls, in which the DNA template was replaced with sterilized water, were included in the amplification step. The raw sequences were deposited in the NCBI Sequence Read Archive under the accession number PRJNA400302. Sequence data processing Paired-end sequence data were joined, demultiplexed and analyzed using the QIIME 1.9.1 pipeline [22]. Briefly, sequences shorter than 200 bp, with an average quality score < 20 or with ambiguous characters were discarded. After chimeras and singletons were removed, closed-reference operational taxonomic units (OTUs) were clustered on the basis of 97% similarity. Taxonomy of bacterial OTUs was assigned using Greengenes v13_8. A phylogenetic tree was generated using "make_phylogeny.py" with the default settings of the "FastTree" method. The resulting OTU table was then rarefied to 9800 sequences per sample before further analysis. Table 1 (caption): Abundance of earthworms (Allolobophora chlorotica and Aporrectodea rosea) and soil properties in caves and on the surface. Values are presented as mean ± SD; locations of sampling sites as shown in Fig. 1.
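Rarefying an OTU table means randomly subsampling each sample's reads, without replacement, to a common depth (9,800 sequences here). A minimal Python illustration of the idea (the `rarefy` function and toy counts are ours for illustration, not part of the QIIME pipeline):

```python
import random

def rarefy(counts, depth, seed=0):
    """Subsample a sample's OTU counts to a fixed sequencing depth without
    replacement -- the core idea behind rarefying an OTU table."""
    rng = random.Random(seed)
    # Expand the count table into a pool of individual sequence "reads".
    pool = [otu for otu, n in counts.items() for _ in range(n)]
    if len(pool) < depth:
        raise ValueError("sample has fewer sequences than the target depth")
    rarefied = {otu: 0 for otu in counts}
    for otu in rng.sample(pool, depth):   # draw `depth` reads without replacement
        rarefied[otu] += 1
    return rarefied

sample = {"OTU_1": 5000, "OTU_2": 4000, "OTU_3": 3000}   # 12,000 reads in total
out = rarefy(sample, 9800)                               # down to 9,800 reads
```

Samples with fewer reads than the target depth are dropped rather than upsampled, which is why a common, attainable depth is chosen first.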
Statistical analysis The 16S rRNA gene copy numbers as well as functional gene abundances were calibrated and predicted by PICRUSt [23]. The abundance of functional genes was predicted using the script predict_metagenomes.py implemented in PICRUSt, according to the recommended protocol. The predicted genes were then grouped at the first KEGG level using the script categorize_by_function.py implemented in PICRUSt. Other statistical analyses were performed in R 4.0.0 [24]. Student's t-test was used to compare the mean abundance of earthworms between cave and surface. The standardized effect size of the abundance-weighted mean phylogenetic distance of the bacterial community was quantified using the function ses.mpd implemented in the R package "picante" [25]. OTU numbers, diversities and phylogenetic relatedness of microbial communities were compared between treatments using ANOVA followed by Tukey's HSD test. Community-weighted means (CWM) of the 16S rRNA gene copy numbers were calculated with the equation CWM_operon = Σ_{i=1}^{n} (P_i × m_i), where P_i and m_i are the proportion and operon number of each bacterial OTU, respectively. Pairwise correlations of the bacterial OTUs within treatments were calculated using the command sparcc with 1000 bootstraps in the program mothur v.1.35.0 [26]. Significant correlations were set at R² > 0.7 and P < 0.01. Topological properties of the bacterial community network of each treatment included (I) numbers of nodes and edges, (II) average degree, which measures network complexity, and (III) average path length (i.e., distance between any two nodes). Network properties were calculated using the "igraph" package in R [27]. Microbial diversity Functional diversity, inferred from the predicted functional gene richness, was greater in the gut of earthworms than in the soils (Fig. 2A). Gut bacteria of both A. chlorotica and A. rosea were functionally more diverse in the caves than on the surface ( Fig. 2A).
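The CWM calculation above is a simple abundance-weighted average. A small Python sketch of the formula (the toy values are ours; real rrn copy numbers would come from the PICRUSt prediction):

```python
def cwm_operon(rel_abundance, operon_counts):
    """Community-weighted mean rrn copy number:
    CWM_operon = sum_i P_i * m_i, with P_i the relative abundance of OTU i
    and m_i its 16S rRNA gene (rrn) copy number."""
    assert abs(sum(rel_abundance.values()) - 1.0) < 1e-9, "P_i must sum to 1"
    return sum(p * operon_counts[otu] for otu, p in rel_abundance.items())

# Toy three-OTU community with illustrative copy numbers.
P = {"OTU_a": 0.5, "OTU_b": 0.3, "OTU_c": 0.2}
m = {"OTU_a": 7, "OTU_b": 2, "OTU_c": 4}
cwm = cwm_operon(P, m)   # 0.5*7 + 0.3*2 + 0.2*4 = 4.9
```

Because P_i sums to one, the CWM is directly comparable across communities of different sizes, which is what allows the gut-versus-soil comparison reported below.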
Regarding the eight categories of predicted functions, both A. chlorotica and A. rosea harbored more genes related to metabolism (Additional file 1: Figure S1). The gut microbiome held a greater abundance of functional genes than the soils, but the difference between cave and surface was not significant (Fig. 2A). Taxonomic and phylogenetic diversities of bacterial communities, however, were lower in the gut of earthworms than in the soils (Fig. 2B and C). Regardless of gut or soil microbiota, taxonomic and phylogenetic diversity was lower in the caves than on the surface for both A. chlorotica and A. rosea ( Fig. 2B and C), with a greater reduction for A. rosea (F = 4.93 and 14.77 for A. chlorotica and A. rosea, respectively; P < 0.05). Phylogenetic relatedness Except for the bacterial communities in the gut of A. chlorotica from the surface, which exhibited a random pattern of phylogenetic relatedness, the bacterial communities of all other treatments showed phylogenetic clustering (Fig. 3). The standardized effect size of the mean phylogenetic distance of soil bacteria was significantly lower than that in the gut of A. chlorotica irrespective of the habitat (i.e., surface or cave; P < 0.05). However, bacterial communities in the gut of A. rosea were more phylogenetically clustered than the soil bacterial communities when collected from the surface but not from the cave. For A. chlorotica, the mean phylogenetic distance of the gut bacterial communities was greater on the surface than in the cave, while for A. rosea the reverse was true (Fig. 3B). Community weighted mean operon numbers For both earthworm species in general, the community-weighted mean of the 16S rRNA gene (rrn) copy numbers of bacteria was significantly higher in the gut of earthworms than in the soils (mean values 3.8 and 2.5, respectively; Fig. 4). The CWM operon numbers did not differ between the surface and cave soils (P > 0.05). However, in the gut of A.
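The standardized effect size of the mean phylogenetic distance (MPD) was computed in the study with picante's ses.mpd in R; the underlying null-model idea can be sketched in Python as a simplified, unweighted analogue (the toy taxa and distances are ours; negative values indicate phylogenetic clustering):

```python
import itertools, random, statistics

def mpd(taxa, dist):
    """Mean pairwise phylogenetic distance among the taxa of one community."""
    return statistics.mean(dist[frozenset(p)]
                           for p in itertools.combinations(taxa, 2))

def ses_mpd(community, pool, dist, n_null=999, seed=0):
    """Standardized effect size of MPD against a taxa-shuffling null model,
    (obs - mean(null)) / sd(null); a simplified analogue of picante's ses.mpd."""
    rng = random.Random(seed)
    obs = mpd(community, dist)
    null = [mpd(rng.sample(pool, len(community)), dist) for _ in range(n_null)]
    return (obs - statistics.mean(null)) / statistics.stdev(null)

# Toy tree distances: taxa A and B are close relatives; all other pairs are distant.
pool = ["A", "B", "C", "D"]
dist = {frozenset(p): (1.0 if set(p) == {"A", "B"} else 10.0)
        for p in itertools.combinations(pool, 2)}
z = ses_mpd(["A", "B"], pool, dist)   # clustered community -> negative SES
```

A community of close relatives sits far below the null distribution of randomly drawn taxa, giving the negative SES values interpreted as phylogenetic clustering above.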
chlorotica, the value was significantly greater on the surface than in the cave, while the opposite pattern was found in the gut of A. rosea (P < 0.05). Co-occurrence networks of microbial communities Networks mainly consisted of the most abundant phyla and comprised highly connected OTUs forming densely connected groups of nodes ( Fig. 5 and Table 2). The networks of bacterial communities in the gut of both earthworm species exhibited lower degrees. The degree of the network in the gut of A. rosea was reduced by ~50%, while that of A. chlorotica increased by 18%, when living in the cave as compared to the surface layers. Gong et al. Animal Microbiome (2022) 4:47 Discussion A. chlorotica and A. rosea exhibited different variations in abundance in caves. Presumably, they might consume more forms of food with the aid of their gut microbes. Adaptation of animals to different environments usually requires physiological adjustments, including changes in biochemical activities. However, physiological or genetically based adaptation of earthworms usually takes generations, longer than the lifespan of individual earthworms. The gut microbiome, as part of the holobiont, may facilitate the adaptation of animal hosts to unfavorable environments through a diverse set of encoded genes [28,29]. The gut microbiome of earthworms could therefore be recruited instantly by the host and help them to utilize more diverse food compounds, thereby increasing their fitness in different environments [30]. Functional diversity of microbiota in the earthworm gut We found that cave earthworms harbor more diverse functional genes in their gut, despite lower taxonomic and phylogenetic diversities, supporting our first hypothesis. Both earthworm species showed enrichment of metabolism-related genes in their gut microbiomes, which is likely to provide essential metabolites to the hosts and increase their survival rates [31].
Higher proportions of metabolism-related genes, compared to other functions, were found in the gut microbiota of soil animals. There is evidence that soil animals are highly dependent on the metabolism genes of their gut microbiome for their nutrient needs [32,33]. As the cave environment is deficient in food sources, when earthworms live in caves their gut microbiome might be stimulated to serve as a more efficient "nutrient factory" [12]. Notably, a large proportion of the genes, especially from the gut microbiome of earthworms in the caves, was unclassified. This suggests that the gut microbiome of cave earthworms is more functionally diverse than has been recognized so far. More studies integrating new technologies should be conducted to uncover their roles. Deterministic community assembly in the earthworm gut Our results show that the assembly processes of the earthworm gut microbiome were deterministic, supporting our second hypothesis. The oxygen and water contents differ along the digestive tract of earthworms, and thus the profile of the microbiota is shaped by the digestive environment [34][35][36]. In addition, studies have revealed that the food source is another deterministic factor shaping the gut microbiome [37][38][39][40]; both the phylogeny and food preference of the host may deterministically shape the gut microbiome of soil animals [41]. To survive in caves, earthworms therefore might select microbiota encoding more metabolic activities. Community features of gut microbiome The fact that the gut microbiome holds greater CWM copy numbers of the 16S rRNA gene than the soil microbiota suggests a higher nutrient demand in the gut microbiome. The multiplicity of rRNA genes is an indicator of the ecological strategy of bacteria for nutrient exploitation [18]. For example, during the exponential growth phase, the number of rRNA operons of Escherichia coli may increase from 7 to 36 [42,43].
Therefore, communities dominated by bacteria with fewer rrn copy numbers usually have a higher nutrient use efficiency than high-rrn-dominated communities [44]. In caves, where soil organic matter differs from that in surface systems, earthworms might benefit from the help of a highly efficient gut microbiome supporting higher metabolism [45]. This may be the reason why A. chlorotica established a more stable population than A. rosea. Conclusions The present study demonstrates the functional roles of the gut microbiome in contributing to host adaptation of two earthworm species in surface and cave environments. Our results reflect the tight interactions between host earthworms and their gut microbiome. The gut microbiome exhibits a higher functional diversity in caves, which can be interpreted as evidence of stronger food limitation. A more stable and highly efficient microbiota providing metabolites is needed for the earthworms to survive in the resource-limited cave habitat. Together, the gut microbiome-host crosstalk is of pivotal importance in facilitating the physiological adaptation, and even the population expansion, of animal hosts. Figure caption (rrn copy numbers): Copy numbers were estimated using PICRUSt, and weighted values were obtained by multiplying the copy numbers by the relative abundance of each operational taxonomic unit and taking the sum of these values for each community.
Coliform pyosalpinx as a rare complication of appendicectomy: a case report and review of the literature on best practice Introduction Coliform pyosalpinx is a rare entity. We report a case that occurred three months after appendicectomy for gangrenous appendicitis. There follows a literature review on best practice for the treatment of pyosalpinx. Case presentation A seventeen-year-old girl presented with an acute abdomen three months after an appendicectomy for gangrenous appendicitis. Intraoperative findings were bilateral pyosalpinx, treated by aspiration, saline and Betadine irrigation, and intravenous antibiotics. Conclusion Microbiological analysis of the pus revealed Escherichia coli and anaerobes. Chlamydia and Candida were not isolated. This is the first known reported case of coliform pyosalpinx following appendicectomy. The best treatment does not necessarily involve salpingectomy, especially in women of reproductive age in whom fertility may become compromised. Introduction Pyosalpinx, in the majority of cases, is a sequela of pelvic inflammatory disease. The ramifications of this condition are important and include tubal infertility and ectopic pregnancy [1]. There have been cases where a non-sexually transmitted cause of pyosalpinx has been described. Notable examples are pyosalpinx following in vitro fertilization [2] and infection by Streptococcus pneumoniae [3] and coliforms [4]. Only one case of spontaneous coliform pyosalpinx has been published; that case involved a nine-year-old girl [5]. We report a case of coliform pyosalpinx in a seventeen-year-old girl following a recent appendicectomy. The best treatment for pyosalpinx in pre-menopausal females is discussed. Case presentation A seventeen-year-old girl presented as an emergency with a two-day history of lower abdominal and back pain. She experienced rigors and appetite loss but no nausea, vomiting, dysuria, cystitis or vaginal discharge.
Three months previously, she had undergone immediate appendicectomy for a gangrenous retrocaecal appendix. Other intraoperative findings at the time were a macroscopically normal right ovary and fallopian tube. There was no history of recent sexual activity or pelvic inflammatory disease. Menstrual cycles were regular, every 28 days, and the patient was mid-cycle at the time of presentation. On examination, she had a temperature of 38.5°C, a pulse of 100 beats per minute and a blood pressure of 114/59. Lower abdominal rebound tenderness, guarding and absent bowel sounds were present. The patient had a leucocytosis of 16.4 × 10⁹ l⁻¹ and a C-reactive protein concentration of 322 mg l⁻¹. A pregnancy test was negative and an emergency computerized tomographic scan showed a complex pelvic mass associated with or near to the right ovary and overriding, but not connected to, the uterus ( Figure 1). She subsequently underwent an emergency laparotomy. The right fallopian tube was found in the midline above the uterus. It was grossly enlarged, measuring 10 × 5 cm, with multiple necrotic areas oozing pus. The fimbrial end was oedematous, with a radius of 2 cm. The left fallopian tube was slightly enlarged and was found posterolateral to the uterus, adherent to the sigmoid colon by fibrinous adhesions. There was no visible enterosalpinx fistula and no appendicular stump leak. The left salpinx was released by blunt dissection and pus was drained from both fallopian tubes by retrograde "milking". Both tubes were irrigated generously with a 0.9% saline and Betadine mixture. Microbiological analysis of the pus revealed Escherichia coli and anaerobes but not Chlamydia or Candida spp. A postoperative Gastrografin enema did not reveal an occult fistula (Figure 2). The patient was treated postoperatively with intravenous Co-Amoxiclav and Metronidazole for a week and made an uneventful recovery.
However, she now faces the long-term sequelae of potential infertility, ectopic pregnancy and chronic pelvic pain. Discussion Coliform pyosalpinx is very rare, and coliform pyosalpinx following gangrenous appendicitis treated by appendicectomy has not previously been reported in the literature. This is the first report of this disease entity. Pyosalpinx following appendicectomy may be one explanation for the small association between perforated appendicitis and sterility [6,7]. When encountered, it is vital for the trainee surgeon to be aware of the best treatment, with the least morbidity. This encompasses a wide range of interventions varying from intravenous antibiotics, laparoscopic aspiration or laparoscopic salpingotomy with saline irrigation, and image-guided aspiration and/or drainage [8,9] to salpingectomy. The latter should be considered a last resort in premenopausal females. Repeat laparoscopy in patients who have undergone irrigation has shown no recurrence [10]. (Figure 1 legend: Emergency computerized tomographic scan: a right ovarian mass is visualized.) A randomized trial has shown that transvaginal sonographic drainage with intravenous antibiotics produces a faster resolution of symptoms than intravenous antibiotics alone; hospital stay and the need for surgery were also lower in the study cohort. The role of transvaginal drains and the effect of intra-fallopian antibiotic instillation on fertility still remain unclear. One possible way to assess fertility is by performing a repeat diagnostic laparoscopy. This may demonstrate tubal features (e.g. occlusion, adhesions) that are linked to infertility [11,12]. The ideal time for the procedure varies, ranging from two to 33 weeks [13,14]. Tubal function may also be assessed by salpingography and/or salpingoscopy. A "cobblestone" appearance of the tubal mucosa is suggestive of patchy loss of and damage to ciliated mucosal cells [13].
In premenopausal females, salpingectomy or laparotomy is not encouraged, as subsequent infertility is said to be high [14]. In summary, coliform pyosalpinx may be a complication of acute gangrenous appendicitis and/or may follow appendicectomy. If diagnosed preoperatively, sonographic or laparoscopic drainage is advocated. The small risk of infertility following open appendicectomy for perforated or gangrenous appendicitis may also be one argument for all premenopausal females to undergo a laparoscopic procedure for this condition. Conclusion This is the first documented case of coliform pyosalpinx following appendicectomy for gangrenous appendicitis. It may be one reason for the association between perforated appendicitis and sterility [5,6]. In order to decrease the risk of infertility, minimally invasive treatment options should be used which endeavour to preserve the fallopian tubes in young females. Tubal patency and mucosal architecture can be assessed subsequently by salpingography and salpingoscopy. Repeat diagnostic laparoscopy may also be useful in the assessment of premenopausal females who have had an appendicectomy but who are unable to conceive.
Robust Incremental Outlier Detection Approach Based on a New Metric in Data Streams Detecting outliers in real time from multivariate streaming data is a vital and challenging research topic in many areas. The recently introduced incremental Local Outlier Factor (iLOF) approach and its variants have received considerable attention as they achieve high detection performance in data streams with varying distributions. However, these iLOF-based approaches still have some major limitations: i) poor detection in high-dimensional data; ii) the difficulty of determining the proper nearest-neighbor number k; iii) assigning each sample a score that indicates its probability of being an outlier, instead of an explicit outlier label; iv) inability to detect a long sequence (small cluster) of outliers. This article proposes a new robust outlier detection method (RiLOF) based on iLOF that can effectively overcome these limitations. In the RiLOF method, a novel metric called Median of Nearest Neighborhood Absolute Deviation (MoNNAD) has been developed that uses the median of the local absolute deviation of the samples' LOF values. Unlike previously reported LOF-based approaches, RiLOF is capable of achieving outlier detection in different data stream applications using the same hyperparameters. Extensive experiments performed on 15 different real-world data sets demonstrate that RiLOF remarkably outperforms 12 different state-of-the-art competitors. I. INTRODUCTION The rapid development of computer technology has led to the emergence of many scientific and commercial applications that generate high-speed, large-volume data streams today. These applications must produce precise and accurate data to provide useful and reliable information to the user.
However, dynamic environmental conditions and abnormal patterns (outliers) that do not conform to the expected behavior, caused by hardware malfunction, aging equipment, concept drift and sensor measurement errors, may be observed during the data stream [1], [2]. Because outliers are critical and actionable, their discovery remains one of the most important research topics in many real-time applications. Recently, local outlier detection methods have attracted great attention since they do not make any assumptions about the distribution of the data set. In these methods, a degree of being an outlier is assigned to each instance, indicating how isolated the instance is with respect to the surrounding neighborhood. This degree is called the Local Outlier Factor (LOF) of an instance [3]. The LOF depends on the local density, which is usually computed based on the Euclidean distance between the instance and its k nearest neighbor points. Generally, data with high LOF and low density are considered outliers. This strategy has been applied in many areas to detect outliers and has yielded very successful results [4], [5]. Therefore, various extensions and improvements of the LOF have been proposed, e.g., COF [6], LOCI [7], INFLO [8], LDOF [9], LoOP [10], ABOD, fastABOD [11], CARE [12], and DLC [13]. The outlier detection methods mentioned above operate in batch mode, so they are not suitable for use in real-time applications generating large data streams. To overcome this limitation, Pokrajac et al. introduced the instance-incremental LOF (iLOF) approach [14]. It only updates the LOFs of existing samples affected by the newly arrived sample. Since only a small portion of the data set is affected by the arrival of a new sample, the processing time required to compute its LOF is significantly reduced.
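Before turning to the incremental variants, it helps to see the batch LOF scheme in action; a minimal example using scikit-learn's LocalOutlierFactor (the toy data and parameter choices are ours, not taken from the paper's benchmarks):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))   # dense inlier cloud
X = np.vstack([X, [[8.0, 8.0]]])          # one clearly isolated point

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)               # -1 = outlier, 1 = inlier
scores = -lof.negative_outlier_factor_    # larger score = more isolated
```

The isolated point receives a LOF score far above 1 (its local density is much lower than that of its neighbors), which is exactly the "high LOF, low density" criterion described above.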
Hence, iLOF has become a popular approach, and incremental versions of other methods have also been proposed, such as i-COF [15], i-LoOP [16], i-LOCI [17], i-ABOD, and i-fastABOD [18]. The existing LOF-based approaches have made significant improvements in detecting outliers from streaming data; however, there are still some major limitations. For example, anomalous points are detected depending on the relative density of data points. This means that as the dimensionality of the data increases, the computational efficiency gradually decreases. Another disadvantage of LOF-based approaches is that, even if the data distribution is known, it is difficult to accurately determine the nearest-neighbor number parameter k. Depending on k, more local or more global outliers can be identified. Generally, local outliers are detected at small k values, while global outliers are detected as k increases. Considering that the data distribution in many real-time applications changes rapidly over time, it is obvious that the parameter k is even more difficult to determine correctly [19]. Also, the value of the parameter k is closely related to the processing time: larger k values result in higher processing times. The essence of LOF is to characterize the anomaly level of each data point. This indicates whether the data points are distributed over high-density regions. However, when outliers occur sequentially, the density of the region containing the outliers inevitably increases, resulting in an outlier cluster. Such outliers then become undetectable in the data stream. In order to overcome these limitations of the LOF-based approaches, a new metric named the Median of Nearest Neighborhood Absolute Deviation (MoNNAD) has been developed to determine the outlier score.
The MoNNAD score indicates whether the incoming sample is an outlier according to the specified threshold (M_T) value; if it is detected to be an outlier, it is discarded from the data set. The combination of the new MoNNAD metric and the iLOF method, called Robust iLOF (RiLOF), is well suited for outlier detection on different streaming data with proper (fixed) k and M_T parameters, without requiring hyperparameter adjustment from one application to another. Extensive experiments are conducted on 15 different real-world data sets to measure the effectiveness of the proposed system. The experimental results show that the proposed system can handle outliers that arise from different factors. The main contributions of this paper are summarized as follows: 1) A new incremental outlier detection system, RiLOF, is proposed whose computational efficiency is not affected even as the dimension of the data increases. 2) To minimize the negative impact of the k parameter, a new robust metric, MoNNAD, has been developed that uses the median of the local absolute deviation of the incoming sample from its nearest neighborhood instead of using all samples. 3) Most outlier detection methods in the literature either assign an outlier score or perform labeling, but the proposed RiLOF method both labels outliers and assigns a degree of outlier probability, thanks to the MoNNAD metric. 4) Since the proposed method removes outliers immediately after detecting them, it prevents the outliers from forming small outlier clusters and being identified as inliers. The remainder of this paper is organized as follows. Section II provides a summary of the literature on studies specifically on the use of incremental LOF-based methods for outlier detection. Section III gives information on LOF and iLOF, and then the proposed unsupervised and incremental robust outlier detection method is detailed.
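The exact MoNNAD formula is given in the paper's methodology section; based only on the description above (median of the local absolute deviation of the incoming sample's LOF over its nearest neighborhood, thresholded at M_T), a plausible sketch of the score and thresholding step is shown below. The function names and the precise deviation formula are our assumptions, not the paper's definition:

```python
import statistics

def monnad_score(lof_new, neighbor_lofs):
    """Illustrative MoNNAD-style score: median absolute deviation of the
    incoming sample's LOF from the LOF values of its k nearest neighbors.
    The paper's exact formula may differ; this only mirrors the description."""
    return statistics.median(abs(lof_new - l) for l in neighbor_lofs)

def is_outlier(lof_new, neighbor_lofs, m_t):
    """Label the incoming sample an outlier (so it can be removed at once,
    preventing small outlier clusters) when the score exceeds M_T."""
    return monnad_score(lof_new, neighbor_lofs) > m_t

neighbor_lofs = [0.9, 1.0, 1.1, 1.05, 0.95]   # LOFs of the k nearest neighbors
```

With such a median-based deviation, a single aberrant neighbor LOF cannot inflate the score, which is the usual motivation for median statistics in robust detectors.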
In Section IV, experimental results on the real-world data sets are reported and performance is compared with the benchmark algorithms. In the last section, concluding remarks and future directions are drawn. II. RELATED WORK In recent years, many approaches have been developed to detect outliers, and comprehensive surveys have been presented [20], [21]. Outlier detection methods can be roughly classified into two categories based on their labeling information: supervised and unsupervised methods. Unsupervised methods are more often preferred as they do not require labeled data. When deciding whether a point is an outlier, it is called a global outlier if the entire database is used, and a local outlier if only part of the database (a subset) is used [22], [23]. It is not possible to store a large data stream entirely in memory, so it is challenging to detect outliers in real time using global approaches. Therefore, this study focuses on unsupervised local outlier detection strategies. Many unsupervised local outlier detection techniques are available in the literature, but among them, density-based techniques have been widely used as they outperform their competitors, such as statistical-based and distance-based approaches. With recent advances and studies in the field, new approaches have been proposed to detect outliers. Janssens et al. proposed a Stochastic Outlier Selection (SOS) method based on affinity relations in the data [24]. Samples with weak affinity to all other samples in the data set are more likely to be outliers. In this technique, a single point in the training set has a huge impact on the outlier scores in its neighborhood. Therefore, a falsely labeled sample may adversely affect the decision rule. Almardeny developed a Rotation-based Outlier Detection (ROD) method in which the feature space is divided into 3D subspaces and the 3D vectors representing data points are rotated around the geometric median [25].
The outlier score of each sample is computed with a median-based statistical method and the volumes of the rotations. Although the ROD method gives good results on low-dimensional data, its performance degrades in the high-dimensional case. Liu et al. proposed the Single-Objective Generative Adversarial Active Learning (SO-GAAL) method, which uses a single generator, and the Multiple-Objective Generative Adversarial Active Learning (MO-GAAL) method, an extended version using multiple generators [26]. Experimental results on synthetic data sets show that it can easily handle various data set types and a high rate of unrelated variables. However, there is still more work to be done, such as addressing the difficulty of discovering true outliers for the human analyst, providing better insights into and interpretations of outlier scores, and developing active learning algorithms for data streams, which remain challenging. The methods mentioned above are developed for static data sets and are not suitable for data streams. For this purpose, the LOF approach based on sliding windows has been widely used for outlier detection recently. Salehi et al. developed a fixed-memory incremental local outlier detection method (MiLOF) to decrease the memory requirement of iLOF [27]. MiLOF first clusters the data with k-means and then uses the center point of each cluster, thus saving time and memory. However, since only the cluster center points are used, the overall anomaly detection accuracy decreases. Na et al. proposed the density-summarizing incremental LOF algorithm called DiLOF [28]. Since the density distribution of the data is not fully taken into account during the data extraction phase, the extracted data cannot represent the historical data well, which leads to a decrease in accuracy. Most recently, Yang et al. proposed the Extract Local Outlier Factor (ELOF) [29]. The success of ELOF depends on many parameters.
Therefore, very poor performance can occur with improperly adjusted parameters. Pevny proposed an unsupervised ensemble learning method, the Lightweight Online Detector of Anomalies (LODA), to identify anomalies [30]. It combines one-dimensional histograms created from arbitrary projections of the data, treated as weak classifiers, to obtain a powerful detector. However, because the projections are randomly selected, they cannot be guaranteed to isolate anomalies well. Wang et al. developed a multiple-instance-triggered incremental outlier detection method [31]. Instead of an instance-incremental process, an inserted-bag-based algorithm is used. Although experimental results show good performance on both synthetic and real data sets, it requires a long processing time. There are many other similar methods [32]-[35]; their accuracy is generally closely tied to the selected data set. As these studies show, LOF-based approaches provide good accuracy in detecting outliers without needing to know the underlying distribution exactly. The RiLOF method introduced in the next section proposes specific solutions to LOF's limitations: reducing the negative effect of the k parameter, maintaining performance even as the data dimension increases, labeling and deleting outliers, and preventing the occurrence of small outlier clusters.
III. METHODOLOGY
This section is divided into three parts. First, the LOF approach is briefly described; then the incremental version of LOF is given; finally, the proposed method is presented in detail.
A. LOCAL OUTLIER FACTOR (LOF)
LOF measures the degree of outlierness of each instance based on the density distribution of the data set. As the LOF value increases, so does the probability that the instance is an outlier.
After the LOFs of all instances are computed, instances with a LOF higher than the predefined threshold value are identified as outliers. The LOFs of the instances can be measured by following the steps below; interested readers are directed to the original article for more detailed descriptions of these processes [3].
• Determine the k-distance: let x_i be the i-th sample in the data set X, let d(x_i, x_j) be the distance between the i-th and j-th samples (i ≠ j), and let k, the neighborhood size, be defined by the user. The k-distance of a sample x_i, denoted k-distance(x_i), is the distance between x_i and a sample x_j such that at least k samples of X lie within distance d(x_i, x_j) of x_i and at most k − 1 samples lie strictly closer; in other words, it is the distance between x_i and its k-th nearest neighbor. The k-distance neighborhood N_k(x_i) contains the samples within this distance. The k-distance of an outlier sample is higher than that of an inlier sample, because the k-th nearest neighbor of an outlier is more distant.
• Determine the reachability distance: in the second step, the reachability distance of a sample x_i with respect to a sample x_j is computed as
reach-dist_k(x_i, x_j) = max{k-distance(x_j), d(x_i, x_j)}.
• Determine the local reachability density: in the third step, the local reachability density of a sample x_i is denoted lrd_k(x_i) and defined by
lrd_k(x_i) = |N_k(x_i)| / Σ_{x_j ∈ N_k(x_i)} reach-dist_k(x_i, x_j).
• Determine the LOF: in the last step, the LOF value of each sample is computed. The LOF is the average ratio of the local reachability densities of the k nearest neighbors to the local reachability density of the sample:
LOF_k(x_i) = (1 / |N_k(x_i)|) Σ_{x_j ∈ N_k(x_i)} lrd_k(x_j) / lrd_k(x_i).
B. INCREMENTAL LOF (iLOF)
In the incremental LOF approach, an insertion algorithm is used to compute the LOF of each incoming data point and to update the LOFs of the affected points. According to the computed LOF value, it is determined whether the incoming sample is an outlier or not. The iLOF algorithm starts after k samples have been loaded into the system, where k is the user-defined number of nearest neighbors.
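The four LOF steps above can be sketched in a small batch-mode implementation. This is a minimal illustration, not the paper's code: the function name `lof_scores` is hypothetical, and Euclidean distance is used for simplicity even though the paper's iLOF variant uses the Mahalanobis distance.

```python
import numpy as np

def lof_scores(X, k=3):
    """Compute batch-mode LOF scores for every sample in X
    (a minimal sketch of the four steps: k-distance, reach-dist, lrd, LOF)."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    # Pairwise Euclidean distances; the diagonal is excluded from neighbor search.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    knn = np.argsort(D, axis=1)[:, :k]        # indices of the k nearest neighbors
    k_dist = np.sort(D, axis=1)[:, k - 1]     # k-distance of each sample
    # lrd(x_i) = k / sum of reach-dist(x_i, x_j) over the k nearest neighbors,
    # where reach-dist(x_i, x_j) = max(k-distance(x_j), d(x_i, x_j)).
    lrd = np.empty(n)
    for i in range(n):
        reach = np.maximum(k_dist[knn[i]], D[i, knn[i]])
        lrd[i] = k / reach.sum()
    # LOF(x_i) = mean lrd of the neighbors divided by lrd(x_i).
    return np.array([lrd[knn[i]].mean() / lrd[i] for i in range(n)])
```

For a tight cluster plus one distant point, the cluster members obtain LOF values near 1, while the distant point's LOF is much larger, matching the thresholding rule described above.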
After the algorithm starts, the steps of the LOF approach are performed for each incoming instance. The steps of the iLOF algorithm used in this study are as follows.
• The kNN of the incoming instance is determined using the Mahalanobis distance, and the indices of the k nearest samples are recorded. Subsequently, affected samples are identified through the k nearest neighbors (kNN) and reverse k nearest neighbors (RkNN). Then, the reach-dist, lrd, and LOF values of the incoming instance are computed, and these values are updated for the affected instances in the data set. All these operations are performed for each incoming point.
• Since their reach-dist, lrd, and LOF values, as well as their kNN and RkNN indexes, do not change, the unaffected instances are not updated. Hence, the processing time of the iLOF algorithm is lower than that of batch-mode LOF (which requires running LOF from scratch for each incoming instance) while achieving the same performance.
C. PROPOSED METHOD
In the literature, statistical techniques such as Standard Deviation (SD), Inter-Quartile Range (IQR), Generalized Extreme Studentized Deviation (GESD), Median Absolute Deviation (MAD), Z-score, and Robust Z-score are employed to separate outliers from inliers in univariate outlier detection. In the SD technique, the lower and upper thresholds are set to mean ± α * SD [36], where α is commonly chosen as 2 or 3. In the IQR technique, the lower and upper thresholds are determined as Q1 − 1.5 * IQR and Q3 + 1.5 * IQR, respectively [37]; samples outside this range are considered outliers. In the GESD technique, the number of possible outliers is determined by the user, and R_i = max_i |x_i − mean(X)|/SD is computed [38]. Samples whose scores exceed the critical value at the chosen significance level are considered outliers and deleted, and the same process is performed on the remaining samples iteratively.
In the MAD technique, samples with a score of 2.5 or higher are determined to be outliers [39]. It is computed as MAD = α * median(|x_i − median(X)|), where α is a constant directly related to the data distribution; it is set to 1.4826 for a Gaussian distribution and 1.0 for a Cauchy distribution. In the Z-score technique, samples with a Z-score higher than 3 are possible outliers, where Z-score = (x_i − mean(X))/SD. The robust version of the Z-score uses the median and MAD instead of the mean and SD [40]. It is defined as Robust Z-score = (x_i − median(X))/MAD, and samples with a score higher than 2.5 are determined to be outliers. The statistical techniques mentioned above may fail due to the masking effect of a cluster of outliers, because the LOF values of such outliers get closer to those of the inliers. This can be addressed in two different ways: decreasing the threshold T or increasing the k value. However, setting the threshold is not easy because there is a trade-off between T and performance: lowering T raises the success of predicting outliers but increases the false positive rate. On the other hand, increasing the k value can reduce the impact of outlier clusters but decreases the accuracy of detecting local outliers; moreover, it leads to a higher computational time. In this work, to overcome the disadvantages of these outlier detection techniques, a new MoNNAD metric has been developed. The MoNNAD score of an incoming sample is computed as the median of the absolute differences between the LOF value of the incoming sample and the LOF values of its k nearest neighbors:
MoNNAD(x_q) = median(|LOF(x_q) − LOF(x_j)|), x_j ∈ kNN(x_q),
where LOF(x_q) is the incoming sample's LOF value and LOF(x_j) is the LOF value of a kNN sample. The outlier labeling and scoring process for the query sample (for k = 3) in the proposed RiLOF method is shown in Fig. 1.
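The MoNNAD computation just defined reduces to a few lines. This is an illustrative sketch, assuming the LOF values are already available; the function name `monnad_score` is hypothetical.

```python
import numpy as np

def monnad_score(lof_query, lof_neighbors):
    """MoNNAD score of an incoming sample: the median of the absolute
    differences between its LOF value and the LOF values of its kNN."""
    diffs = np.abs(np.asarray(lof_neighbors, dtype=float) - lof_query)
    return float(np.median(diffs))

# An inlier whose LOF resembles its neighborhood scores near 0, while an
# outlier with an elevated LOF scores far above the threshold M_T = 0.5.
inlier_score = monnad_score(1.02, [0.98, 1.01, 1.05])
outlier_score = monnad_score(9.0, [1.0, 1.1, 0.9])
```

Using the median rather than the mean of the deviations is what gives the metric its robustness: a single aberrant neighbor LOF value cannot dominate the score.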
The RiLOF method determines whether each incoming sample is an inlier or an outlier using the MoNNAD score. Samples with a MoNNAD score higher than or equal to the specified threshold value are determined to be outliers. While equal emphasis is given to the query sample and its nearest neighbor samples in statistics-based techniques, more emphasis is given to the query sample in the RiLOF method. This makes the distinction between inliers and outliers clearer, causing samples with a higher probability of being outliers to obtain higher scores, which is the most important advantage of the RiLOF method. The statistical techniques SD, IQR, GESD, MAD, Z-score, and Robust Z-score are suitable for univariate outlier detection problems; therefore, multivariate data instances need to be transformed into univariate form by moving to another domain. This transformation is performed by computing the LOF of the multivariate data samples. Fig. 2 shows a synthetic data set containing global and local outliers, together with visual graphs of the outliers detected using the above-mentioned statistical techniques. The scatter plot in Fig. 2a shows the cluster of normal samples (C 1 ), the small outlier cluster (C 2 ), and the global outlier points p 1 , p 2 , and p 3 . As is known, LOF-based algorithms compute an outlier score instead of labeling; outliers can then be recognized with a threshold value determined by considering the scores of the samples. If the threshold value is chosen as 2, the global outliers shown in red can be identified due to their high LOF, while the outliers in the small cluster C 2 cannot, because the samples in C 2 form an outlier cluster and their LOF values become similar to those of the inliers. In addition, a point inside the cluster C 1 is incorrectly detected as an outlier. Results for the GESD technique are shown in Fig. 2c. The global outliers are successfully detected, but 5 points inside the cluster C 1 are falsely detected as outliers, and the outlier cluster C 2 is not detected at all.
The threshold plane is displayed for visual interpretation, passing through the sample with the smallest LOF value among the detected outliers. According to the results obtained with the IQR technique in Fig. 2d, the global outliers are determined precisely, while 6 different inliers in cluster C 1 are falsely determined to be outliers; the outlier cluster C 2 is determined to be inliers. Fig. 2e shows the results of the SD technique: although the global outliers are recognized, the outlier cluster C 2 is not identified. The same can be seen in the Z-score graph shown in Fig. 2f. The results of the MAD technique are given in Fig. 2g: samples p 1 and p 2 are considered outliers, while sample p 3 and the outlier cluster C 2 are not recognized. Robust Z-score results are shown in Fig. 2h. While accurately detecting the global outliers, it incorrectly detects a sample in the cluster C 1 as an outlier; it also fails to detect the C 2 outlier cluster, as in Fig. 2f. Finally, the outlier detection results of the proposed MoNNAD metric are shown in Fig. 2i. While none of the other techniques could recognize the outlier cluster C 2 , MoNNAD correctly detected all outliers in the data set, including the outlier cluster C 2 . This is because the proposed RiLOF method uses the median of the local absolute deviation from the incoming sample, instead of using all samples, to detect and delete outliers, thus avoiding the formation of small outlier clusters. Determination of the parameters in the proposed RiLOF method is critical, as in other unsupervised methods. The optimum parameter determination strategy for kNN and the threshold is presented in Section IV. In the RiLOF method, samples with a MoNNAD score of 0.5 or higher are considered to be outliers. Detected outlier samples are deleted during incremental learning, which leads to lower memory usage.
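The label-and-delete rule described above can be sketched as a streaming loop. This is an illustrative toy, not the paper's Algorithm 1: it recomputes LOF from scratch on the retained window instead of performing the incremental iLOF update, uses Euclidean rather than Mahalanobis distance, and the names `rilof_stream` and `_lof` are hypothetical.

```python
import numpy as np

def _lof(X, k):
    # Plain batch LOF over the retained window (a stand-in for the
    # incremental iLOF update, kept short for illustration).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    knn = np.argsort(D, axis=1)[:, :k]
    k_dist = np.sort(D, axis=1)[:, k - 1]
    lrd = np.array([k / np.maximum(k_dist[knn[i]], D[i, knn[i]]).sum()
                    for i in range(len(X))])
    lof = np.array([lrd[knn[i]].mean() / lrd[i] for i in range(len(X))])
    return lof, knn

def rilof_stream(stream, k=3, m_t=0.5):
    """Label each incoming sample (0 = inlier, 1 = outlier); detected
    outliers are deleted immediately so they cannot accumulate into a
    small outlier cluster that would mask later outliers."""
    window, labels = [], []
    for x in stream:
        window.append(list(x))
        if len(window) <= k:          # warm-up: keep the first samples
            labels.append(0)
            continue
        lof, knn = _lof(np.asarray(window, dtype=float), k)
        q = len(window) - 1           # index of the incoming (query) sample
        monnad = np.median(np.abs(lof[knn[q]] - lof[q]))
        if monnad >= m_t:
            labels.append(1)          # outlier: label it and delete it
            window.pop()
        else:
            labels.append(0)          # inlier: retain it in the window
    return labels
```

On a stream of clustered points with an isolated spike, only the spike is flagged; because it is deleted on detection, later inliers near the cluster are still scored against a clean window.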
The intuition behind this approach is to avoid the formation of outlier clusters and to prevent outlier points from being determined to be inliers. Outlier clusters cause outlier points to have a lower LOF score and degrade algorithm performance in both batch and incremental mode. When samples with a high probability of being outliers are deleted during incremental learning, the LOF scores of new samples that show similar characteristics to the deleted outliers remain high, because they lie in less dense regions; this yields a performance increase. The implementation of the proposed RiLOF method is demonstrated in Algorithm 1.
[Algorithm 1 (recovered outline): for each x_i ∈ kNN(x_q), compute reach-dist(x_q, x_i) using Equation 1; update lrd(x_i) using Equation 2 and LOF(RkNN(x_i)) using Equation 3 for the affected samples; compute lrd(x_q) using Equation 2 and LOF(x_q) using Equation 3; finally, compute the MoNNAD score of x_q.]
IV. RESULTS AND DISCUSSION
This section presents the effectiveness of the proposed RiLOF method and compares it to other unsupervised outlier detection methods. All experiments are conducted on a computer with an Intel Core i7-6900K CPU@3.2 GHz processor, 64 GB of RAM, and the Windows 10 operating system. The data sets and evaluation criteria used in the study are explained in detail below. In addition, the effects of the parameters used in the proposed RiLOF method on performance are discussed in detail, and the observed findings are reported.
A. DATA SETS
In the experimental evaluation, only real-world data sets are taken into account, as synthetic data sets cannot fully reveal the behavior of outliers. To this end, 15 different real-world data sets are used [42]. Information on the data sets, such as the number of instances and features and the inlier/outlier inclusion conditions, is detailed in Table 1.
B. EXPERIMENT SETUP
The proposed RiLOF method is implemented using Python version 3.7.7. In the implementation of LODA and ROD, the Python Outlier Detection (PyOD) toolbox is utilized [43].
For the SOS, So-GAAL, and MO-GAAL methods, publicly available source code is employed. The iLOF [14], i-LOCI [17], and i-fastABOD [18] methods are implemented based on the original articles. Since there is no reliable Python code for the INFLO and LDOF methods, our own implementations based on the authors' articles [8], [9] are used. In the proposed RiLOF method, a singular matrix error is observed in some data sets due to the use of the Mahalanobis distance metric in the nearest neighbor search. This problem is solved by increasing the starting index in incremental learning.
C. PERFORMANCE MEASURES
Since the data sets used for outlier detection contain both inlier and outlier samples, outlier detection can be considered a binary classification problem [26]. Metrics such as accuracy, recall, and F-score, used as success criteria for binary classification methods, are sensitive to the data distribution [44]. Therefore, using them as performance criteria for outlier detection may cause erroneous evaluations, because the classes are unbalanced: the number of outliers is very low compared to the number of inliers. The Receiver Operating Characteristic (ROC) curve, which is not affected by the data distribution, is frequently used as a performance evaluation criterion in the literature. The ROC curve depicts the trade-off between sensitivity and specificity; moreover, it allows visual comparison of different methods on the same graph. The Area Under the ROC Curve (ROC-AUC) summarizes the ROC curve as a scalar value and makes it easy to compare different methods; a higher ROC-AUC score indicates better performance.
D. PARAMETER SELECTION
Determining the optimum parameters in any machine learning algorithm is crucial to the algorithm's performance. Optimum parameter selection is even more difficult in unsupervised outlier detection algorithms, because the number of features and the number of outliers are not known in advance and differ from one application to another.
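The ROC-AUC criterion described above can be computed directly from ranks via the Mann-Whitney identity, without any plotting. This is a stdlib-only sketch; the helper name `roc_auc` is hypothetical, and label 1 denotes the outlier class.

```python
def roc_auc(labels, scores):
    """ROC-AUC of continuous outlier scores against binary labels
    (1 = outlier), using average ranks so that ties are handled."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # average 1-based rank of the tie group
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    pos = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos), len(labels) - len(pos)
    # Mann-Whitney U statistic normalized to [0, 1].
    return (sum(pos) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

A score of 1.0 means every outlier outranks every inlier; 0.5 corresponds to random ordering, which is why ROC-AUC is insensitive to the class imbalance discussed above.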
Therefore, two parameters need to be adjusted in order for the proposed RiLOF method to perform well in different applications. Table 2 presents the ROC-AUC scores for different k and M T values in the Breast_O data set; the highest, smallest, and average ROC-AUC scores are 0.94, 0.58, and 0.84, respectively. Table 3 shows the ROC-AUC scores for different k and M T values in the WBC data set; here, the highest, smallest, and average ROC-AUC scores are 0.98, 0.87, and 0.95, respectively. Table 4 gives the results for the Hepatitis data set, where the highest, smallest, and average values are 0.88, 0.56, and 0.79, respectively. Finally, the ROC-AUC scores for different k and M T values in the Glass data set are given in Table 5; the highest, smallest, and average values are 0.97, 0.55, and 0.83, respectively. As can be seen from these detailed k and M T analyses, there is no linear relationship between the k and M T values, even on the same data set. High ROC-AUC scores can be obtained at low M T and high or low k values in one data set but not in another [3]. Also, according to the experimental results of previous scientific studies, the effect of k cannot be fully predicted even in data with a Gaussian distribution, and it is even harder to predict in real-world data whose distribution is unknown. It is inevitable that this also affects the scores of the developed MoNNAD metric. One of the most important aims of this study is to minimize the negative impact of the user-defined parameters (k and M T ) and to quickly obtain the best (or closest to best) results in different data stream applications. According to the results on the Hepatitis data set in Table 4, the ROC-AUC score is above the average (0.79) for k > 9 only at 0.50 ≤ M T ≤ 0.65. In the Glass data set in Table 5, the ROC-AUC score is above the average (0.83) with 0.20 ≤ M T < 0.65 for all k values.
In the Breast_O data set in Table 2, the ROC-AUC score is above the average (0.84) with 0.50 ≤ M T ≤ 0.80 for k ≥ 9. On the other hand, in the WBC data set in Table 3, the effect of an increase or decrease in M T at different k values on the ROC-AUC scores is negligible. Based on these observations, it is concluded that M T = 0.5 is an acceptable value for all data sets in terms of the robustness and accuracy of the developed MoNNAD metric.
E. DETAILED ANALYSES OF k
In this section, the behavior of the proposed RiLOF method for M T = 0.50 against various k values is analyzed. For this purpose, Fig. 3 plots the ROC-AUC curves of RiLOF on the real-world data sets for different k values at a constant M T = 0.50. Based on the results, increasing the k value in 8 of these data sets (Breast_P, Connectionist_B, Glass, WBC, Ecoli, Musk, Page Blocks, and Shuttle) either does not affect the performance or causes only slight changes. This is due to the use of the Mahalanobis distance metric, which computes distance based on the data distribution instead of the Euclidean distance, and to the fact that the proposed MoNNAD metric is robust against small fluctuations in LOF values. In 7 of the data sets (Hepatitis, Biomed, Heart Disease, Boston_HP, Breast_O, SpamBase, and Satimage), the ROC-AUC score rises as k increases. On the other hand, according to the experimental results in [3], LOF values stabilize after k = 10. Therefore, the value of k should be chosen to be at least 10 in order to be less affected by fluctuations in the LOF value caused by k. In addition, the processing time should be taken into account, since choosing too large a k value requires a high processing time (cost) [45], [46]. In view of these facts, a good option is to choose the smallest k value greater than 10 that, at M T = 0.50, obtains the highest (or nearly the highest) ROC-AUC score; k = 11 is a suitable choice considering the trade-off between processing time and accuracy.
For M T = 0.50 and k = 11, the highest ROC-AUC score is obtained in the Breast_O data set (Table 2), while results closest to the highest ROC-AUC score (with negligible differences) are obtained in the WBC (Table 3), Hepatitis (Table 4), and Glass (Table 5) data sets.
F. PERFORMANCE COMPARISON
In this section, the RiLOF method is compared with 12 different unsupervised outlier detection methods that can be divided into two main groups according to their operating mode: incremental-mode algorithms (RiLOF, iLOF, i-fastABOD, i-COF, i-LOCI, and LODA) and batch-mode algorithms (INFLO, LDOF, LoOP, SOS, ROD, SO-GAAL, and MO-GAAL). For a fair comparison of the proposed RiLOF with the benchmarking outlier detection methods, the same parameters are used as much as possible. iLOF, i-fastABOD, i-COF, INFLO, LDOF, and LoOP take the k parameter, which is set to 11. However, i-LOCI uses α and k σ instead of the k parameter to detect outliers; in the reference study [7], these parameters are defined as α = 0.5 and k σ = 3, so the same values are used in this study. In the implementation of the SOS method, the perplexity parameter h is set to 4.5, as in the reference paper [24]. Similarly, in the So-GAAL and MO-GAAL methods, the parameter values recommended in the reference article are used [26]. ROD is a parameterless method, so it does not need any user-defined parameters [25]. The ROC-AUC scores of the proposed RiLOF and the comparative methods with the hyperparameters described above are presented in Table 6, where bold ROC-AUC scores indicate the highest performance for the particular data set. Due to the processing time of i-LOCI, it is not applied to data sets larger than 4500 samples. Moreover, ROD could not be run on the Musk data set owing to an out-of-memory error.
The performances of the LODA, SO-GAAL, and MO-GAAL methods vary between runs, so each is repeated 10 consecutive times on each data set to obtain more reliable and scalable results, and the average ROC-AUC scores are given in Table 6. The experimental results in Table 6 demonstrate that the proposed RiLOF method achieves the best performance in twelve out of the fifteen data sets. In the remaining three data sets, it ranks third, yet very close to the top two. Specifically, RiLOF improves performance by approximately 10% on the Hepatitis, Breast_P, Heart Disease, Ecoli, and SpamBase data sets, and by at least 40% on the Connectionist_B data set, compared to the other methods. It also shows optimal outlier detection performance on the Musk data set. The results in Table 6 show that the performance of the benchmarking techniques varies depending on the specific data set. For example, i-LOCI's maximum ROC-AUC score (0.99) is closest to the best in the Musk data set, but it performs rather poorly on other data sets such as Biomed (0.…). It can also be seen from Table 6 that RiLOF produces more robust results by combining the iLOF algorithm, which can detect outliers in data streams without considering the data distribution, with the developed MoNNAD metric. Finally, as can be seen from the last column of Table 6, the RiLOF method performs well above average in each of the 15 different data sets. Furthermore, from the last row of Table 6, the average ROC-AUC score (0.8431) achieved by RiLOF over the 15 data sets is also quite remarkable; for example, RiLOF outperforms the closest method (ROD) by 17% in terms of average performance. Considering in particular the ROD, LODA, and i-fastABOD methods, which show the closest performance, it is clear that the proposed RiLOF method can be easily applied to data stream applications in different fields.
For a more comprehensive comparison, ROC-AUC charts showing the performance of the proposed RiLOF and the other outlier detection methods as a function of the k parameter should be considered. Fig. 4 shows the ROC-AUC scores for k values ranging from 5 to 35 across the 15 different real-world data sets for the benchmarking algorithms, with the exception of the i-LOCI algorithm. For the i-LOCI method, the ROC-AUC scores of different k σ values are analyzed; due to space limits, and to make the comparison on the same graph, the k σ values 1, 2, and 3 of the i-LOCI method are indicated with star markers at the k values 5, 7, and 9, respectively. The performances of the SOS, LODA, ROD, SO-GAAL, and MO-GAAL methods are not shown in Fig. 4, since they do not take k as an input parameter. From Fig. 4, it can be seen that the proposed RiLOF method outperforms the comparative methods in 12 out of 15 data sets for almost all k values in terms of the ROC-AUC metric. Moreover, in the Breast_P, Connectionist_B, Glass, WBC, Boston_HP, and Page Blocks data sets, increasing k hardly changes the ROC-AUC score achieved by RiLOF. Furthermore, RiLOF shows the best outlier detection performance for all k values in the Musk and Shuttle data sets. In the remaining data sets, increasing k slightly increases the ROC-AUC score obtained by the RiLOF method. As can be deduced from these graphs, the proposed RiLOF method behaves more robustly than the other nearest-neighborhood-based methods, even as k changes. Extensive experiments on real-world data sets show that the proposed RiLOF method performs much better than both batch-mode and incremental-mode algorithms across different data sets. In particular, considering that fixing k = 11 and M T = 0.5 gives very effective results with the RiLOF method, it is clear that RiLOF is well suited and efficient for data streams in different fields. V.
CONCLUSION
In this article, a robust outlier detection method (RiLOF) is presented to detect outliers in data streams in real time. RiLOF has several advantages over iLOF-based algorithms: (1) a high detection rate even in high-dimensional data; (2) it is hardly affected by the number of nearest neighbors, k; (3) the ability both to label outliers and to assign each sample a score indicating its probability of being an outlier; and (4) the ability to detect a long sequence (small cluster) of outliers. The success of RiLOF is due to the newly developed MoNNAD metric. The MoNNAD score indicates whether the incoming sample is an outlier according to the specified k and M T values; if a sample is detected to be an outlier, it is deleted from the data set. The intuition behind deleting possible outliers is to increase the effect of inlier points on the computation of the LOF and MoNNAD scores, while reducing the potential adverse side effects of outliers and preventing the formation of outlier clusters. A series of experiments is performed on 15 different real-world data sets to analyze the effects of k and M T ; based on these results, the values k = 11 and M T = 0.5 are adopted. The proposed RiLOF method performs better than both the incremental-mode algorithms (iLOF, i-fastABOD, i-COF, i-LOCI, and LODA) and the batch-mode algorithms (INFLO, LDOF, LoOP, SOS, ROD, SO-GAAL, and MO-GAAL). Consequently, the RiLOF method is well suited and effective for fast, large data streams of varying dimension in different fields. In the future, the proposed RiLOF method's performance can be improved in the following respects. Although the proposed RiLOF method requires less memory than the iLOF method, it is open to memory improvements for large data streams. The computation of the nearest neighbors and the LOF score can be implemented on Graphics Processing Units to decrease the processing time.
Between Dialogue, Conflict, and Competition: The Limits of Responsive Judicial Review in the Case of the Romanian Constitutional Court

A response to Rosalind Dixon's Responsive Judicial Review (Oxford University Press 2023) assessing her theory's prospects and caveats in the Romanian constitutional context. The piece analyses recent case law from the Romanian Constitutional Court and highlights three important shortcomings that limit the applicability of Dixon's framework: the tendency toward formalism in constitutional interpretation, an impoverished rights review culture, and the persistent conflictual positioning of the Constitutional Court vis-à-vis other constitutional actors. The article ends by speculating on developments that may yet render responsive judicial review more of a reality in Romanian constitutionalism than present conditions may allow.

Democratic backsliding in Central and Eastern Europe (cee) and elsewhere has not infrequently used the guise of legality to hide its attacks on the constitutional edifice. Be it in the form of wholesale constitutional replacement and (rcc) would struggle to engage in rjr as proposed by her. Neither of these contradict her preconditions for courts' ability to engage in rjr, namely judicial independence, political and civil society support, and remedial power. Rather, they have to do with the style of judicial reasoning and the judicial culture prevalent at the rcc. This makes them more elusive to pinpoint as well as remedy. Nor is the problem one of individual capacity, i.e.
of rcc judges simply lacking, as an individual failing, "the requisite mix of legal and political skills necessary to identify relevant democratic blockages and determine how and when they can most effectively be countered by judicial intervention".8 Instead, the limitations discussed below are systemic and long-standing. In my concluding remarks, however, I revisit rjr's potential in the Romanian context and explore possible sources of change to the rcc's jurisprudence and working methods that offer reasons for optimism.
A Brief Summary of the Theory of Responsive Judicial Review
Dixon's rjr theory has a clear aim: to increase the likelihood that judicial review exercised by apex and constitutional courts increases democratic responsiveness in any given jurisdiction. It is grounded in Elyan ideas insofar as it starts from a defence of judicial review as representation-reinforcing (rather than anti-democratic), but it updates and adds to Ely's ideas considerably. Dixon's objective is to add at least two components that were missing from Ely's theory: an investigation into the preconditions for effective judicial intervention, as well as a broadening of the thesis beyond the United States and its particular institutional arrangements and forms of judicial review. In both respects, certainly, her account succeeds in widening and deepening the scope of analysis and represents a significant contribution to the field. Dixon's rjr also makes two additional contributions, one acknowledged by the author explicitly and one that is up to her readers to note and appreciate. On the one hand, it is an account that is nuanced and deeply contextual, carefully circumscribing the scope and application of its different elements to the nature of the jurisdictions and courts it discusses. It is in this sense that she calls hers a "sometimes view of the promise of judicial review":9 not one that can be assessed in the abstract and for all time, but always tentatively and based on conditions
on the ground. She is the rare constitutional theorist who exhibits modesty about the reach of her own theory, such as when she admits that rjr may not always be achievable in practice.10 Dixon is addressing courts primarily, seeking to offer judges (who she assumes are well-intentioned and eager to 'get it right') guidance for how to proceed in thorny scenarios, such as when constitutional meaning is ambiguous and the priority among constitutional values unclear.11 Dixon's focus on understudied and undertheorised aspects of judicial practice such as authorship, tone, and narrative in judicial decision-making12 therefore adds to the growing literature on judicial statecraft.13 On the other hand, Dixon's theory also has the virtue of offering a useful new vocabulary. Her discussion of antidemocratic political monopoly, legislative blind spots, and burdens of inertia arms us with discrete and carefully constructed notions that capture different facets of what, exactly, is going wrong in the legislative (and sometimes executive or administrative) arena and what ills courts are well-placed to address. These are the specific manifestations of democratic dysfunction her theory seeks to tackle. Bringing conceptual clarity in this area - especially given the proliferation of scholarship addressing democratic backsliding which does not always distinguish between them - is another virtue of the book.
What Dixon means by antidemocratic political monopoly includes both institutional and electoral measures designed to entrench partisan bias and skew democratic competition.14 Such measures may be public and overt or less visible and indirect, but their overall effect will be to diminish the capacity of the opposition to perform its accountability role and to stand a chance at elections, as well as to disempower other accountability mechanisms that could call the executive to account. Legislative blind spots, in turn, refer to those gaps or oversights in legislation that may be due to anything from limited foresight about how it may operate in practice to unintended consequences or knock-on effects.15 Finally, by legislative burdens of inertia, Dixon seeks to capture the variety of ways in which democratic legislatures may be unresponsive to their public, whether it be due to shifting priorities, coalition bargaining, the complexity of operationalising certain commitments (for example, giving effect to social rights protections), or simply state weakness.16 As can be seen from this brief summary, she is careful not to impute nefarious motives to legislatures whose democratic responsiveness is faulty. Instead, Dixon makes room for the possibility that judicial intervention may well be needed as a way to complement, nudge, and improve the actions of the legislature.17

One final note is useful here, before proceeding to test Dixon's ideas as against the Romanian constitutional experience. She herself admits that, while aiming at universal applicability, her rjr theory may operate differently between common law and civil law jurisdictions.18 An example discussed below is that of the different role played by precedent and doctrines of stare decisis across the two traditions. There are, however, other relevant distinctions. For example, Dixon cites with approval the concreteness of judicial review in common law systems, insofar as it allows judges to consider legislation in the
context of a concrete case and, therefore, to assess its effects (including indirect ones).19 She considers specialised, Kelsenian courts - which typically have powers of abstract review - possibly better positioned to identify legislative burdens of inertia, while ordinary courts may be better placed to spot legislative blind spots.20 However, these observations are more speculative and tentative, which is understandable given that Dixon's main objective is not necessarily to explain how her theory maps onto different types of legal systems. Nevertheless, her claim to offer a universal theory of judicial review bears testing against not just civil law jurisdictions in general, but those steeped in particular traditions, pathologies, and habitus, as the Central and Eastern European ones are. The Romanian example discussed here, and the other contributions to this special issue, are attempts to work through what Dixon's theory may look like as applied to this context.

Formalistic Legal Reasoning

The rcc is not the only court in the region to exhibit a high level of formalism in its judicial reasoning.21 Inherited from the communist legal method, this formalism manifests itself in judgments that are dry and couched in the language of applying objective norms rather than solving complex conflicts. These courts often do not perceive themselves as discursive courts, meaning they do not perceive their audience as including the general public and do not necessarily identify a duty incumbent upon themselves to justify their reasoning to this broader public. With the rise of longer judgments, separate and dissenting opinions, the publication of decisions online, the translation of judgments, etc., this detachment from the public at large is beginning to change.22 However, formalism in legal reasoning remains difficult to overcome.
It is important to note that the type of legal formalism I refer to here is an extreme and pernicious version of the appeal to law's inherent normativity and of the rejection of extra-legal sources for its interpretation. It is not necessary to rehash old debates between defenders of legal formalism and its critics in order to show that the appeal to legal formalism in cee has often resulted in an abdication of the judicial role. Legal theorists defending legal formalism, such as Ernest Weinrib, are concerned with safeguarding the law's autonomy and internal intelligibility (the law as "intelligible as an internally coherent phenomenon") against the encroachment of political readings of legal normativity.23 Often, such scholars react to what they perceive as the oversimplification - to the point of caricature - of the appeal to form that is the bread and butter of the legal method. In the words of Brian Leiter, accounts of formalism that equate it with "nothing more than mechanical deduction on the model of the syllogism" misinterpret it and do not represent views shared by anyone today.24 He terms this view "vulgar formalism" and contrasts it with the more sophisticated versions that are actually in use.25

The problem in the cee context is that, in many instances, adjudication there has exhibited the features of such "vulgar formalism". This was often the case in the early years of democratic transition, when new constitutional courts were created with unprecedented powers of constitutional review but were also lacking in experience of navigating the more political waters they found themselves in. Examples below illustrate how, still today, the Romanian Constitutional Court struggles to move beyond the mechanical appeal to legal rules, even in cases where the law may have exhausted its ability to provide clear-cut answers. Again, however, this is not solely a Romanian pathology. For instance, Serbia's Constitutional Court has remained notoriously ineffective, particularly at holding political
branches to account. Its "judicial dormancy in political disputes" has been attributed, at least in part, to "judges' past habits and extensive ideology of legal formalism".26 The anti-politics stance adopted by the Serbian judges, combined with a long-standing attitude of judicial deference to political authority, together make the Serbian court distinctly ineffective.27

A final concession here is that the charge of legal formalism against cee courts needs to be contextualised and substantiated. Zdenek Kühn's account of the continuities between communist and postcommunist legal ideology and method was a necessary addition to our understanding of cee judiciaries.28 Nevertheless, it is important to accompany such general remarks with in-depth case studies and empirical analysis to trace how, precisely, such formalism has operated in practice and over time.29 My aim is not to levy a blanket accusation, either against legal formalism as a method and philosophy of adjudication or against cee or Romanian judges as a whole. Instead, I wish to illustrate how, even in highly sensitive rights disputes, and decades into the postcommunist transition, the Constitutional Court of Romania continues to exhibit some of the worst traits of Leiter's "vulgar formalism". This in turn raises the prospect that Dixon's theory of responsive judicial review may be a long way away from capturing the reality of Romanian constitutional adjudication.
Let us take as an example the 2016 decision that validated the citizens' initiative on constitutionally redefining the family as between a man and a woman.30 Its proponents had argued that, given that the Romanian Civil Code only recognised heterosexual marriage, the Constitution needed to be brought in line and a possible expansion of the right to marry needed to be prevented via explicit amendment. They also invoked international human rights law to support their position, arguing that both Article 16 of the Universal Declaration of Human Rights and Article 12 of the European Convention on Human Rights recognised the foundational societal role of the (in their view, heterosexual and procreative) family and mandated the state to protect it as an institution. Here, then, was a preemptive, rights-restricting initiative couched in the language of human rights, casting the right to marry as one that pertains exclusively to heterosexual couples and that would stand to be negatively impacted by any extension of its scope.
The rcc's approach was formalistic, even while paying lip service to various theories of judicial interpretation and to an engagement with international human rights law (more on the latter below).It engaged in a thin textualist reading of 'family' , checking for its ordinary, dictionary meaning, before engaging in what could be termed an originalist reading of Article 48 which stood to be amended.It concluded that, because at the time of the 1991 constituent assembly, the general, accepted meaning of the family was as between a man and a woman, that is the meaning that was enshrined in the constitutional text.This ignored the language of Article 48 itself, which guarantees equality between the spouses in gender-neutral language.It also ignored the fact that the Civil Code at the time of the Constitution's adoption was similarly ambiguous and only when revised in 2009 clearly stipulated the sexes of the spouses.While I would not go so far as to argue that this meant de facto constitutional recognition for same-sex marriage since 1991, it is nevertheless the case that the constitutional text was far more elastic and capacious than the rcc had allowed. 
Moreover, the rcc also answered the question of whether the proposed amendment would contravene Article 152(2) of the Romanian Constitution, part of the Constitution's eternity clause, incompletely and formalistically. Article 152(2) contains a typical non-retrogression clause on rights, stipulating that "no revision shall be made if it results in the suppression of the citizens' fundamental rights and freedoms, or of the safeguards thereof." The Court largely sidestepped the substantive question before it, however: whether an explicit constitutional reference to the heterosexual family/marriage inserted in the Constitution would violate the constitutional guarantee against discrimination. Instead, the Court reasoned convolutedly that the proposed amendment merely specified the content of an existing right to marry; as that right had never been extended to same-sex couples, it could not be the case that their rights were being suppressed here. It said nothing whatsoever on equality and non-discrimination. The substantive part of the judgment, moreover, was disposed of in a single paragraph. Nowhere did the Court admit that, given that same-sex marriage was barred by legislation, the citizens' initiative could only be read as seeking to future-proof this prohibition by enshrining it at the constitutional level.31

To be clear, I am not claiming that the rcc should have de facto recognised same-sex marriage in its review of the citizens' initiative. Nor am I here criticising the outcome of its review for its conservative nature (though I do discuss its limited conception of rights in the next section). Instead, the analysis above is meant to show how the appeal to legal formalism obscured an overly rigid style of judicial reasoning, one that even by its own measure failed to engage with the complexity of the case before it. Whether this was incidental or a purposeful obfuscation meant to remove the rcc from the line of fire regarding a highly controversial social issue is
irrelevant. What remains instead is an unconvincing appeal to formalist method and the purported limits of positive law.

This formalistic (others have called it unprofessional,32 or worse) style of judicial reasoning has broader implications for the suitability of the rjr framework to the rcc. This style of reasoning makes the rcc, as it currently operates, still ill-suited to the type of value judgment rjr demands. It is not just that the rcc lacks a coherent conception of the democracy or minority rights it views itself as the guardian of, but rather that it fails even to raise the question in the first place. In the case discussed, it was not that the Court failed to sanction legislative in/action,33 nor that it prioritised other constitutional values or even constitutional non-responsiveness,34 nor that it developed a responsive approach to the protection of the rights in question in the face of widespread disagreement over their content.35 Instead, the formalist method applied, coupled with a dubious understanding of what rights protection actually required, led the rcc down a path where it could serenely sidestep the crucial substantive question before it. As we will see shortly, more recent rcc activity does show a willingness to move beyond this type of analysis, but this remains exceptional, while the bulk of the Court's output remains formalistic.
An Impoverished Rights Review Culture and Transnational Engagement

Briefly revisiting the 2016 decision on the constitutional definition of the family, we are reminded that the rcc accepted the amendment proponents' linking of the family to the right to marry, subsuming both to a narrow, ultraconservative procreative function. It bears repeating that what is impugned here is not that the Court did not recognise same-sex marriage. Indeed, that was not the question before it. Instead, it is both the style and the robustness of its legal reasoning that are disputed and whose suitability to responsive judicial review is being called into question. The Court went further, however, no doubt attempting to buttress its credentials as a rights-protecting court. It sought to justify an artificial distinction between the family as tied to marriage under Article 48 and the protection of the family and private life elsewhere in the Constitution (Article 26(1)), arguing that only the former and not the latter was in question in the case. Nor did the Court resort to international human rights law as a source for a richer analysis, despite numerous amici curiae raising comparative case law to argue that the law on same-sex marriage recognition had advanced considerably. This omission is even more glaring when considering that Article 20 of the Romanian Constitution explicitly makes international human rights treaties directly applicable where they offer a level of protection higher than domestic law. They have been recognised as part of a "constitutionality block" meant to enshrine constitutional supremacy and compliance with international human rights standards, whereby the latter would prevail in case of conflict but be interpreted in line with the fundamental principles of the Romanian Constitution.36 Put differently, once duly ratified, an international treaty is to be interpreted as being in compliance with national law and any apparent inconsistencies are to be reconciled
through interpretation.37 As seen in the case of the 2016 popular initiative, however, this requirement to harmonise constitutional and international human rights norms was sidestepped when the Court framed the issue as not involving rights limitations at all.

One might be tempted to view a subsequent, 2018 decision as more robust in terms of its rights analysis on account of its outcome.38 The decision accepted the right of same-sex couples married in the EU to have their marriage recognised in Romania for the purposes of entitling the foreign spouse to residence. Unlike in 2016, the 2018 decision recognised that same-sex relationships were protected under the constitutional (Article 26) and European rights to family and private life (Article 8 echr and Article 7 of the EU Charter of Fundamental Rights). Both the judgment and the Chief Justice in extrajudicial statements were unambiguous that the decision did not amount to a de facto recognition of same-sex marriage in the country. Instead, the outcome was presented as the rcc merely falling in line with European law.39 This after it had itself seized the European Court of Justice, seeking a preliminary ruling on the legal question at the heart of the case. The ecj recognised residence rights of same-sex partners whose marriage had been legally performed in the EU, even where the Member State does not otherwise formally recognise same-sex unions.40

For five years after the Coman judgment, Romanian immigration authorities continued to refuse to issue a residence permit to Coman's spouse, despite threats of infringement proceedings by EU authorities. No administrative procedure was initially created by the Romanian Government to facilitate recognition of same-sex unions completed abroad; nor did the Romanian Parliament amend legislation in order to mandate this. It was only in September 2023 that the Ministry of Interior finally introduced the needed amendments to the relevant legislation.41 Its justification for doing so was
tailored in the narrowest of terms, making it clear that it was not aiming at recognition of either same-sex marriage or civil partnerships and that the changes were meant to give effect to European law and avoid the country being subject to infringement proceedings. Coman and others had in the interim initiated proceedings before the European Court of Human Rights (ECtHR), arguing that non-implementation of the ecj decision violated their Convention rights.43 Given the direction of travel in Strasbourg, it was only a matter of time until the ECtHR would add another layer to this saga and find the Romanian state in breach of the Convention. The inevitable indeed happened in May 2023, when the Strasbourg court - in a case involving twenty-one same-sex couples - found Romania in breach of its Convention obligations, notably Article 8, for failing to provide the applicants with any form of legal recognition of their relationships.44 It also dismissed the Romanian authorities' arguments that there were public interest reasons for failing to act or that their approach was within their margin of appreciation under the Convention.
Returning to responsive judicial review theory, is this an example where the multi-actor dialogue that should have followed judicial intervention failed to materialise? In the first instance, the clear position of neither the domestic nor the supranational courts seemed to suffice to incentivise national authorities to change the de facto situation of same-sex partners. Is the five-year timespan between judicial pronouncements and legislative change realistic or problematic? Is the change in the law to be seen as a victory, despite its very narrow contours (only the right to residence of foreign spouses where the marriage was concluded within the European Union being recognised)? Or, in rjr terms, is this to be assessed as a burden of inertia finally pierced, if not overcome entirely?

Dixon herself is somewhat ambivalent about defining success in her theory of judicial review. The main measure of that success appears to be whether actual legislative change ensues, certainly where legislative blind spots or burdens of inertia are concerned.45 However, where the effects of judicial intervention are less direct, and also where the on-the-ground impact of legislative change is not straightforward, gauging success will be a complex equation. Dixon suggests empirical work and large-n comparisons as the methods needed to assess such impacts.46 The Romanian example here also adds another dimension: the difficulty of determining good faith when evaluating legislative change. One could say that, even belated and incomplete, legislative reform should be welcomed. However, obfuscation, delays, and minimal impact by design can be tactics used to diminish the impact of otherwise positive legal change.
Can one argue that, at least insofar as the courts involved are concerned, the Coman case represents a successful instance of judicial dialogue? I believe that would overestimate the good faith of the rcc. Rather, I would argue this was an instance in which it sought to pass the buck on a controversial decision to the European Court. The rcc had been notoriously reluctant to refer cases to the European Court of Justice, interpreting the mechanism of seeking ecj preliminary rulings on the interpretation of EU law as voluntary rather than compulsory.47 In some instances, in fact, courts went further and used the preliminary ruling mechanism as a means to settle domestic scores, including intra-judicial conflict.

For example, in 2019, the Romanian High Court of Cassation and Justice asked for a preliminary ruling on whether a decision of the rcc on the composition of judicial panels in corruption cases - resulting in the need to redo the panels, and the trials, in 800 cases - was in violation of EU law. This was part of a larger package of reforms to the judicial system, including the creation of a new institution to hold magistrates criminally liable for disciplinary offences, that seriously called into question judicial independence in the country. The Romanian Constitutional Court had controversially found these changes in line with the Constitution and positioned itself as both defiant of and acting within the bounds of European rule of law standards.48 It thereby also came into conflict with ordinary courts, which had sought to comply with such standards and disapply the problematic reforms as contravening EU law.49 The ecj judgment reinforced the primacy of EU law, including the standards set in the Cooperation and Verification Mechanism to which Romania was still subject.50 However, in the aftermath of the ecj's decision, the rcc put a stop to this dialogue by prohibiting domestic courts from disapplying national law already found to be constitutional (thus also the impugned justice
reforms).51 The rcc invoked national constitutional identity in a sovereigntist way, as a shield against the encroachment of EU law, and interpreted the Constitution in isolation from it.52 I acknowledge that this form of dialogue between the domestic and the supranational is not the main object of the rjr theory, which focuses instead on dialogue with the legislature and acknowledges its lack of focus on international or regional courts.53 Nevertheless, the good faith and commitment to responsiveness that inform the book's approach could be extrapolated to courts' engagement internationally. The Romanian case, however, represents a cautionary tale on this front. Rather than aiming at better, more robust decision-making and more responsive outcomes, dialogue in the examples presented here was resorted to as a means to eschew responsibility for controversial decisions or else to shift the arena for score-settling to the international plane.

A counterexample came in 2020, when we find a Constitutional Court with a partially different composition issuing a much more comprehensively reasoned decision in a case involving a bill seeking to ban the teaching of gender studies.54 The judgment was certainly an improvement on prior ones, insofar as it directly engaged with the substantive question before it. The Court proceeded to distinguish between biological sex and socially constructed gender and to defend that distinction in law. It also emphasised the need for Romania to comply with its obligations under international human rights law and European law, both of which have a status superior to the Constitution within the hierarchy of norms and both of which protect the principle of equality. The rcc found the proposed ban on gender studies to conflict with a number of constitutionally guaranteed rights, as well as to contravene the principle of the rule of law insofar as it introduced confusion into Romanian law. This is probably one of the closest instances in which the rcc has come to operating
as a court that could embrace rjr. Gaps remained in the judgment, as it did not engage in a proportionality analysis when assessing conflicts with qualified rights, nor in a 'best interests of the child' analysis when considering the right to education and the protection of children and youth.55 The link between embracing proportionality analysis and giving rise to a so-called "culture of justification" has long been discussed in the scholarship on judicial review and need not be entered into here.56 Suffice it to note that the close examination of legislative aim and means that proportionality review entails is seldom exhibited in the judgments of the rcc. If and when the Court develops this aspect of its reasoning, it will be a contributing factor to its rapprochement with the tenets of rjr.

A Conflictual Rather than Dialogical Disposition vis-à-vis other Constitutional Actors

The opposition between the rcc and the High Court mentioned above has not been limited to contexts in which the European Court could be brought in as umpire. As Bianca Selejan-Gutan has documented, the rcc underwent a gradual and painful process of becoming established in the country's constitutional arena.57 According to Selejan-Gutan, its role as Kelsenian negative legislator rendered it both alien to the constitutional architecture initially and prone to political capture subsequently. The Court's incremental transformation into a positive legislator brought it into conflict with the legislature the more it trespassed on the latter's constitutional competence.
In some instances, this assertiveness reached into the realm of informal constitutional amendment. For example, the Court went from accepting the constitutionality of a quorum requirement in referendums on presidential impeachment, left to the discretion of parliament, to viewing a turnout quorum as constitutionally required to ensure an 'authentic' expression of sovereign will, to again retreating behind deference to the legislature, all within a span of six years.58 Case law inconsistency is not unheard of, of course, but in this instance it occurred in highly politically charged contexts (two presidential impeachments with referendums pending at the time of the decisions) and on the basis of judicial reasoning that mainly ignored the inconsistency rather than seeking to reconcile or at least acknowledge it. In Dixon's account of rjr, such inconsistencies are discussed in terms of the role the doctrine of stare decisis may play. She acknowledges the limited weight of precedent in civil law systems, though even there a consistent line of decisions will have weight.59 Even in common law jurisdictions, however, when it comes to constitutional case law, the ultimate authority rests with the constitutional text rather than precedent, thus weakening the import of stare decisis.60 Interestingly, Dixon discusses the possibility of courts exploiting the flexibility of the stare decisis doctrine to widen their room for manoeuvre. Thus, how strongly they insist on the doctrine of precedent will be part of the choices at their disposal in calibrating judicial review, and they may indeed choose to weaken the scope of the doctrine where they wish to leave more room for dialogue with the legislature.61 However, the Romanian example above may be a better fit among the scenarios Dixon acknowledges in which the case for weakening stare decisis, and thus weakened judicial review, is not warranted.62 If the main aim of such judicial modulation is to increase democratic responsiveness, a court adopting
blatantly contradictory judgments on highly politically salient questions within a short timespan cannot be said to serve that aim.

Other instances of conflict with the legislature have included the rcc declaring unconstitutional legislative omissions and also so-called 'abrogatory norms', such as: the decriminalisation of insult and slander; the abrogation of the special retirement benefits for magistrates (including constitutional judges); and the abrogation of procedural remedies. In addition to striking these down, the rcc also brought the norms they were meant to replace back into force, thereby arguably undermining the principle of legal certainty.63 Moreover, the rcc's competence was expanded legislatively in 2010 to include powers of review over parliamentary standing orders and other resolutions. The exercise of this power further politicised the Court. An attempt to claw back this power via constitutional amendment was later also struck down by the rcc on grounds of its breaching the principle of judicial independence, entrenched in the constitutional eternity clause (more specifically, access to constitutional justice as an unamendable fundamental right).64 Here was the rcc, therefore, deploying its powers of review to protect its own jurisdictional turf, by "blocking the removal of additional competences it had only gained via legislation and which had never been constitutionalized directly."65

This last case could be viewed as partially analogous to the Indian Supreme Court's njac case and therefore to judicial overreach, in the name of unamendability, beyond the democratic minimum core.66 In that case, the Indian Supreme Court struck down a legislative attempt to modify the judicial appointment process on the grounds that, by bringing in political elements, the new model undermined judicial independence, itself a component of the constitutional basic structure and as such unamendable. In practice, the change would have removed the veto power of a collegium of
justices, including the Chief Justice and other judges, and brought India in line with other jurisdictions' mixed models of judicial appointments. Dixon's analysis of the Indian case, though sympathetic to the Court's sensitivity to possible attacks on its independence, notes the misfire of the basic structure doctrine. The Indian court wrongly elevated the country's particular, longstanding model of judicial appointments to the rank of a necessary element of democratic constitutionalism.67

I would argue that the problem is compounded in the case of the rcc by the fact that it has never developed a comprehensive unconstitutional constitutional amendment doctrine, preferring instead to issue limited findings of breaches of the constitutional eternity clause on a case-by-case basis. In some instances, it did so at a macro level, without differentiating between elements of a massive amendment package to indicate clearly which one violated which unamendable feature.68 Whatever benefits in combatting democratic dysfunction an unconstitutional constitutional amendment doctrine may have, they will be opaque and hard to predict in the Romanian constitutional context, given the rcc's ad hoc, under-reasoned, and formalistic deployment of unamendability. Furthermore, as we saw in the previous section, the rcc has moved to embrace a sovereigntist doctrine of constitutional identity to curtail the reach of EU law in the domestic sphere, thereby aligning itself with "defiant constitutional and supreme courts" in Europe.69

Concluding Remarks

The three axes of analysis above - interpretive formalism, an impoverished rights review culture, and the conflictual relationship with other constitutional actors - would suggest there is no hope for the rcc to operate in line with Dixon's rjr theory. Indeed, the critical lens adopted here is meant to caution against too easy an extrapolation from the contexts in which rjr is an easier fit. Unlike the discursive courts often found in common law systems,
the rcc and many of its counterparts in cee have very different approaches to constitutional interpretation and to solving constitutional conflicts. As discussed, they often approach questions before them formalistically, in an effort to be seen to merely apply the law rather than mould it. This occurs even though the constitutional controversies raised are deep and no less ripe for reasonable disagreement. This weakness is compounded by the rcc's tendency to either ignore or else underplay the rights conflicts before it. The examples given here may have focused on gender equality and judicial independence, but the fact that the rcc continues to be unable to deploy a comprehensive proportionality analysis is problematic beyond these specific contexts. Ultimately, what I hope to have highlighted is how much rjr depends on courts discharging their role in good faith. Not only has the rcc not engaged in dialogue and cooperation with other constitutional and supranational actors constructively, but it has consistently found itself in open conflict with both ordinary courts and the High Court. This has resulted in delays and ambiguities in solving legal questions as well as in tarnishing the rcc's standing as a judicial actor.
The picture is not entirely bleak, however. As we have seen in the case of its 2020 gender studies decision, the rcc has shown itself capable of more robust substantive reasoning on rights issues, of departing from dry formalism, as well as of resisting the regional tide of apex courts embracing the fight against so-called 'gender ideology'.70 It is surely not a coincidence that there were intervening changes to the judicial bench, with one new judge bringing expertise on gender equality.71 I would argue that another development has improved the quality of rcc output, even if not necessarily changing the outcome of individual judgments. I refer here to the growing use of dissenting opinions in a court that initially rejected them entirely. Dissents have the virtue of revealing the complexity of a legal case and may even indicate the future direction of travel for the court.72 They have grown in popularity among European courts,73 even though the civil law tradition had long sought to retain the impersonal, stylised language of nameless collective judgments as a guarantee of their legitimacy.74 For instance, the two dissenting judges in the controversial rcc Decision 390/2021, which pitted constitutional identity against EU law supremacy, emphasised the principle of sincere cooperation with the European legal order and its protection of the rule of law and judicial independence. As such, they can be said to have shown the alternative path the rcc could have taken - and maybe could still take - when confronted with similar conflicts in the future.
Finally, we could speculate whether the conflictual positioning of the different constitutional actors in Romania could itself be a guarantee against democratic backsliding. Persistent clashes and even competitiveness, including between courts at different levels and the rcc, are certainly not desirable in their own right. Paradoxically, however, this cacophony of voices may well be preferable to unanimity and falling in line to rubberstamp the undoing of rule of law safeguards as we have seen courts do elsewhere in the region. Thus, while rjr may not be a frame of analysis that appears immediately pertinent to the Romanian context, its prospects may well improve as Romanian constitutionalism itself evolves.
Covalent Functionalization of Nanodiamonds by Ruthenium Porphyrin, and Their Catalytic Activity in the Cyclopropanation Reaction of Olefins

Detonation nanodiamonds (DNDs) were functionalized by ruthenium porphyrins and used as catalysts in the cyclopropanation reaction of olefins. The heterogeneous catalyst was characterized by transmission electron microscopy (TEM), scanning electron microscopy (SEM), and X-ray photoelectron spectroscopy (XPS). XPS was used to confirm the binding of the ruthenium porphyrin to the DNDs' surface. This catalyst was used in the cyclopropanation reactions of simple olefins and was reused with no loss of activity in four consecutive cycles, after being recovered each time by simple centrifugation.

Introduction

The cyclopropyl ring is present in a number of interesting natural products. Recently, this ring has been found in drugs active in vitro against leukemia. [1] Several methods have been proposed in the past for synthesizing this ring, using copper, rhodium and osmium compounds as efficient catalysts for obtaining cyclopropanes from diazocompounds and olefins. [2] Synthetic iron, rhodium and osmium porphyrins have also been reported as catalysts for the cyclopropanation reaction of simple olefins by ethyldiazoacetate (EDA). [3][4][5][6][7] Compared to copper salts, like CuCl, which gives the anti isomers, some porphyrin catalysts are able to reverse the syn/anti ratio of the products, depending on the nature of the metal. The mechanism of metalloporphyrin-catalyzed cyclopropanation reactions has been studied in depth but not completely elucidated in all of its aspects, because of the lability of the bond between the metal and the acetate residue in the reaction intermediate. [8] Recently, in our laboratory we have been involved in researching the catalytic oxidation of alkenes, the cyclooligomerization of alkynes, and the cyclopropanation of olefins catalyzed by metalloporphyrins [9][10][11].
This last reaction was extensively investigated, developing iron [12] and rhodium [13] meso-tetraphenylporphyrin catalysts used in the homogeneous phase. Recently, we have also reported the immobilization of metalloporphyrins on a Merrifield resin, and their use as catalysts for the cyclooligomerization of alkynes and the cyclopropanation of olefins [14,15]. Several papers have also reported the cyclopropanation reactions of α-substituted styrenes, 1,3-dienes and some terminal alkenes by diazoacetates, catalyzed by metalloporphyrin complexes. [16][17][18][19][20] The choice of the metal, and the possibility of varying both its electronic properties and the steric hindrance of meso-tetraphenylporphyrins (introducing substituents on phenyl groups and/or on β positions of the tetrapyrrolic rings), allow us to modulate both the reaction selectivity and the stereochemical ratio of the reaction products. The chemical robustness of the rhodium and ruthenium porphyrins, and their high selectivity in cyclopropanation reactions, have encouraged many efforts to obtain the recovery and reusability of these catalysts. The binding of 5,10,15,20-tetraarylporphyrin carbonyl ruthenium complexes to poly(ethyleneglycol) gives a soluble polymer-supported catalyst that can be easily removed from the reaction media by addition of diethyl ether to the solution or on cooling [19]. In this paper, we report the covalent functionalization and characterization of nanodiamonds (DNDs) with a ruthenium porphyrin, and their usage as catalysts for the cyclopropanation of olefins by EDA. DNDs have exceptional physicochemical properties and present a broad range of potential applications in nanotechnology research [21][22][23][24][25][26]. Moreover, this nanomaterial exhibits great potential as a green catalyst in itself, or as catalyst support for a variety of chemical reactions [27].
We also decided to use DNDs in the place of other carbon supports because the cyclopropanation involves double bonds, and unsaturation in the support matrix would give side reactions. The ability of DNDs in these applications is still under investigation, and this research would add value to the field of DND porphyrin-based catalysts.

Results

The DNDs obtained by the detonation technique are available from commercial sources, but for our purposes, it was necessary to increase the content of hydroxyl groups on the surface, by a reduction of all of the oxygen-containing groups, i.e., ketonic, carboxylic and epoxydic. This was accomplished by a preliminary H2 plasma treatment by PECVD [28], followed by a chemical reduction by means of a boron hydride reagent under dry conditions, as reported in the literature [23]. Morphological characterization of the plasma-reduced DNDs was performed by transmission electron microscopy (TEM) analysis (Figure 1). The investigation revealed that DNDs are agglutinated in the form of aggregates of several tens of nm, making the study of single particles difficult. Nevertheless, the particles maintain their individuality, so that, by using two different approaches (by taking pictures at different focus values and by the inverse Fourier transform of the HREM pictures diffractogram with one reflection at a time), we were able to measure a remarkable uniformity of the particle size between 5 and 10 nm.
The reduction of the DNDs surface was evaluated by Raman spectroscopy analysis. Figure 2 reports the spectra of the DNDs sample, as received and after the H2 plasma treatment. In the Raman spectra of untreated and treated DNDs, the diamond peak is well detectable and located at about 1324 cm−1. The plasma-enhanced chemical vapor deposition (PECVD) treated sample exhibits an increase in diamond peak intensity, indicating that plasma treatment is effective in eliminating the non-diamond phase. The broad feature in Raman scattering at 1400-1800 cm−1 can be assigned to the contributions from the sp3/sp2 amorphous carbon phase, the sp2 graphitic phase and some surface functional groups. The accurate interpretation of the bands occurring in this spectral range is still under debate in the literature. However, although Raman spectroscopy appears to not be very sensitive to surface terminations such as C-H and C-OH, the 1640 cm−1 band is commonly attributed to the sp2 carbon phase [28].
In order to better compare the spectra of treated and untreated sample DNDs, the difference spectrum is reported (as the blue line in Figure 2). The main differences are detected at approximately 1330 cm−1, 1640 cm−1 and 1750 cm−1, indicating that the treatment is effective in reducing the sp2 phase. The shoulder signals at approximately 1750 cm−1 could be related to the C=O bonds located on the DNDs surface. The decrease of these bands for the treated DNDs could indicate a reduction of this oxygenated functional group on the DND surface after the plasma treatment [29]. The plasma-treated and subsequently chemically reduced DNDs were functionalized by the use of (3-aminopropyl)trimethoxysilane, as reported in Scheme 1 [30].
The functionalization with Ru-TF5-tetraphenylporphyrin has been obtained using the well-known nucleophilic aromatic substitution of the activated fluorine atom of the phenyl rings by the amine group of the silane, as reported in Scheme 2 [31].

Scheme 1. Silanization of the DNDs.

The functionalization of the DNDs was checked by X-ray photoelectron spectroscopy (XPS). The measurements were carried out on the silanized nanodiamonds functionalized with the metalloporphyrin, in order to investigate the resulting chemical composition and the covalent binding of the silane, and consequently of the Ru-porphyrin, on the surface of the DNDs. XPS allows us to effectively distinguish the porphyrins from the porphyrinate. The XPS analysis performed on the functionalized DNDs ruthenium porphyrin catalyst showed a unique N 1s peak in the imino-pyrrolic region, typical of the porphyrinate nitrogens (399 eV). This is, therefore, diagnostic of the presence of a porphyrin and, thus, of the successful coordination of ruthenium in the porphyrin core, demonstrating its presence on the surface of the nanodiamonds.
The sample also has a clear additional component, probably due to the amino group of the linker (3-aminopropyltriethoxysilane), which is found in a 1:4 ratio with the nitrogen peak from the porphyrinate, confirming its anchoring bond to the hydroxylated nanodiamonds. However, the peak of Ru 3d (which is located very close to the peak of C 1s) was not detected, because it reasonably represents a minimal percentage with respect to the large quantity of carbon in the nanodiamond. In Figure 3, the XPS spectrum of the N 1s peak is reported.
The analysis of the morphology of the functionalized DNDs was performed by a local analysis at the nanoscale, by means of the HAADF-STEM technique. Under proper conditions, the contrast intensity of the images is highly sensitive to variations of the atomic number of the elements in the sample under investigation. In Figure 4a, a typical high-angle annular dark field (HAADF) image of the sample is reported, and we can observe that the sample appears as approximately 200 nm sized aggregates, given by a high density of homogeneously distributed DNDs particles. Figure 4b reports

Discussion

Our heterogeneous catalyst was tested in the cyclopropanation reaction, with different olefins such as styrene, 4-methoxystyrene, 4-chlorostyrene, norbornene, 1-methylcyclopentene and 2,2,4-trimethylpentene.
All of the reactions were carried out under the same conditions, with ethyldiazoacetate (EDA) and the olefin dissolved in dry CHCl3, following the standard procedure as described in the experimental section. At the end of the reaction, an internal standard was added, and the reaction products were analyzed by gas chromatography (GC). The data resulting from the reactions are reported in Table 1.

Table 1. Results for the cyclopropanation of olefins catalyzed by Ru-porphyrin functionalized DNDs. The yields were calculated from the EDA. (Columns: Entry, Substrate, Yield%, Syn/Anti Ratio.)

We also performed some further control experiments using non-functionalized DNDs. These experiments did not give any cyclopropanation products, giving us further proof of the functionalization of the nanodiamonds. The yields and the syn/anti ratios for styrene, 4-chlorostyrene and 4-methoxystyrene appear basically comparable, demonstrating that the electronic nature of the substituent on the phenyl has a small effect. Since the catalyst contains a high concentration of fluorine atoms, we think that, during the catalytic reaction, aromatic olefins can approach the core of the macrocycle through a π-π interaction with the fluorinated rings, stabilizing the transition state which leads to the anti isomer [3]. On the other hand, no cyclopropane formation was observed in the reactions performed on the aliphatic olefins 1-methylcyclopentene and 2,4,4-trimethylpentene, even when varying the reaction conditions. In our opinion, this is probably due to a different carbene transfer mechanism from the metal to the double bond of the substrate.
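The internal-standard quantification used to obtain the yields in Table 1 (the standard is added after the reaction and product moles are derived from the product/standard peak-area ratio, relative to the EDA charged) can be sketched as follows. This is a minimal illustration: the peak areas and the response factor below are hypothetical values, not data from the paper.

```python
# Hedged sketch of GC yield determination via an internal standard
# (decane or dodecane), relative to EDA as the limiting diazo reagent.

def yield_percent(area_product: float, area_standard: float,
                  mmol_standard: float, response_factor: float,
                  mmol_eda: float) -> float:
    """Percent yield relative to EDA.

    response_factor is the calibrated (area/mol) ratio of product vs.
    standard; all numeric inputs here are illustrative, not measured.
    """
    mmol_product = (area_product / area_standard) * mmol_standard / response_factor
    return 100.0 * mmol_product / mmol_eda

# Example with 1.9 mmol EDA (0.2 mL) and hypothetical areas / RF = 1.0
print(yield_percent(38.0, 40.0, 1.0, 1.0, 1.9))  # -> 50.0
```

In practice the response factor would be calibrated from standard mixtures of the authentic cyclopropane product and the internal standard.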
The excellent anti-selectivity observed with norbornene is noteworthy. The presence of steric hindrance at the catalytic sites probably affects the products' distribution. The lower yields of the reactions compared with those reported in a previous investigation [14] could be due to the smaller amount of ruthenium porphyrin molecules bound onto the nanodiamond surface.

General

The UV-vis spectra were recorded with a Varian Cary 10 spectrophotometer (Varian, Australia) and 1H NMR spectra were recorded on a Bruker AM 400 spectrometer (Karlsruhe, Germany) as CDCl3 solutions. Chemical shifts are given in ppm from tetramethylsilane (TMS) and are referenced against residual solvent signals. Mass spectra (FAB) were recorded on a VG-Quattro spectrometer (Manchester, UK), using 3-nitrobenzyl alcohol (NBA) as a matrix. The FT-IR spectra were routinely recorded on a Perkin Elmer Spectrum One spectrometer (Waltham, MA, USA), and XPS spectra were obtained on a modified Omicron NanoTechnology MXPS system (GmbH Taunusstein, Germany). TEM studies were carried out with a JEOL 2200FS (Akishima, Tokyo, Japan) field emission microscope operating at 200 kV, with a point-to-point resolution of 0.185 nm, equipped with an in-column Ω filter, an X-ray microanalysis system, and two HAADF detectors for chemical imaging. DNDs powders dispersed in chloroform and treated with ultrasound for 15 min were deposited on carbon grids for the analysis. Raman spectroscopy was performed with a Horiba micro-Raman spectrometer XploRA ONE (HORIBA, Ltd, Hakata Fukoku Building, Tenyamachi, Hakata-ku Fukuoka, Japan), equipped with a 532 nm laser. Excitation spectra were recorded and analyzed in the 1000-2000 cm−1 wavenumber region. The products' yields and the isomeric ratios for all of the reactions were measured by GC analyses, with a Focus Thermo Fisher instrument (Waltham, MA, USA), using helium as the carrier gas (35 cm/s).
A fused silica Supelco capillary column SPB-5 (30 m × 0.25 mm i.d.; 0.25 µm film thickness) was used. The gas chromatographic conditions were as follows: initial temperature, 70 °C for 1 min; temperature increase rate, 20 °C/min; final temperature, 200 °C; injector temperature, 150 °C; detector temperature, 230 °C. The chemical yields were determined by adding a suitable internal standard (decane or dodecane) to the reaction mixture at the end of each experiment and were reproducible within ±2% for multiple experiments. The analytical data for the cyclopropane derivatives of styrene, 4-chlorostyrene, 4-methoxystyrene and norbornene are reported in the literature [14].

Chemicals

All of the reagents and solvents (Aldrich Chemical, Merck KGaA, Darmstadt, Germany) were of the highest analytical grade and were used without further purification. Silica gel 60 (70-230 and 230-400 mesh, Merck KGaA, Darmstadt, Germany) was used for column chromatography. High-purity grade nitrogen gas was purchased from Rivoira. The free base H2-TF5-tetraphenylporphyrin was purchased from Aldrich. The nanodiamonds used for this work were purchased from Altai Technical State University, Barnaul, Russian Federation. The ruthenium TF5-tetraphenylporphyrin carbonyl was obtained as reported in the literature [32] and was compared with a commercially available sample from Porphyrin Products.

Reduction of DNDs

DNDs powders were subjected to a hydrogen plasma-assisted CVD treatment, to remove the non-diamond carbon and reduce the oxygen species at the NDs surface. The process was carried out by using a custom-designed plasma-enhanced chemical vapour deposition (PE-CVD) reactor, where the gas phase was excited by a dual-mode MW/RF plasma [28]. The parameters adopted for the reduction experiment were MW and RF powers of 100 W, a H2 flux of 100 sccm, and a reduction time of 10 min. The sample temperature was fixed at 550 ± 10 °C.
Five hundred milligrams of the CVD-treated DNDs was then suspended in 30 mL of dry THF, and 5 mL of 1 M BH3·THF was added dropwise under nitrogen. Following this, the mixture was refluxed for 24 h and, after cooling to room temperature, diluted HCl was added until the hydrogen evolution ended. The obtained solid product was isolated by centrifugation, and then subjected to water washing and centrifugation cycles, until the supernatant liquid presented a neutral pH. The sample, after drying in a vacuum, appeared as a grey powder (400 mg).

Covalent Functionalization of DNDs with (3-Aminopropyl)Trimethoxysilane (APS)

Two hundred and twenty-five milligrams of reduced DNDs was added to 31.5 mL of a 5% solution of APS and stirred for 48 h at 60 °C. The sample was isolated by centrifugation, then washed with 10 mL of acetone and isolated by centrifugation, yielding 186 mg of a grey powder after drying in a vacuum.

Functionalization of NDs with Ru(TF5PP)CO

One hundred and twenty milligrams of silanized DNDs in 30 mL of toluene was added to 42 mg of Ru-TF5-tetraphenylporphyrin carbonyl. The mixture was stirred at 120 °C for 4 h. After centrifugation, the precipitate was subjected to 3 consecutive washing/centrifugation cycles with 10 mL of chloroform each, and finally dried in a vacuum, yielding 108 mg of a brownish powder.

Typical Cyclopropanation Reactions

0.2 mL (1.9 mmol) of ethyldiazoacetate and 0.7 mL (6.64 mmol) of styrene were dissolved in 3 mL of dry CHCl3. Then, 10 mg of DNDs catalyst were added, and the resulting solution was refluxed for 12 h. At the end of the reaction, decane was added as an internal standard, and the mixture was analyzed by GC (Focus Thermo Fisher instrument, Waltham, MA, USA).

Recycling of the Catalyst

At the end of the reaction, the solvent was evaporated under vacuum, and the solid was washed with three portions of fresh chloroform, each time using centrifugation to obtain the solid.
The solution remained clear, and UV-vis analysis showed the absence of any free catalyst in the solution. The catalyst was dried under vacuum and reused for a new reaction. The yield of the reaction did not change under the experimental conditions after four recyclings, within the experimental error.

Conclusions

In this paper, we showed the possibility of the covalent functionalization of DNDs with a metalloporphyrin, and the potential use of this adduct for catalyzing the cyclopropanation reaction of olefins. The catalyst can be recovered by a simple centrifugation and can be reused several times without losing its activity. This functionalization, and the use of the DNDs, opens up new fields of research for these new materials, which can be used in several applications.
Social Contacts and Transmission of COVID-19 in British Columbia, Canada

Background

Close-contact rates are thought to be a driving force behind the transmission of many infectious respiratory diseases. Yet, contact rates and their relation to transmission, and the impact of control measures, are seldom quantified. We quantify the response of contact rates, reported cases and transmission of COVID-19 to public health contact-restriction orders, and examine the associations among these three variables in the province of British Columbia, Canada.

Methods

We derived time series data for contact rates, daily cases and transmission of COVID-19 from a social contacts survey, reported case counts and by fitting a transmission model to reported cases, respectively. We used segmented regression to investigate impacts of public health orders; Pearson correlation to determine associations between contact rates and transmission; and vector autoregressive modeling to quantify lagged associations between contact rates, daily cases, and transmission.

Results

Declines in contact rates and transmission occurred concurrently with the announcement of public health orders, whereas declines in cases showed a reporting delay of about 2 weeks. Contact rates were a significant driver of COVID-19 and explained roughly 19 and 20% of the variation in new cases and transmission, respectively. Interestingly, increases in COVID-19 transmission and cases were followed by reduced contact rates: overall, daily cases explained about 10% of the variation in subsequent contact rates.

Conclusion

We showed that close-contact rates were a significant time-series driver of transmission and ultimately of reported cases of COVID-19 in British Columbia, Canada, and that they varied in response to public health orders. Our results also suggest possible behavioral feedback, by which increased reported cases lead to reduced subsequent contact rates.
Our findings help to explain and validate the commonly assumed, but rarely measured, response of close contact rates to public health guidelines and their impact on the dynamics of infectious diseases. INTRODUCTION A wide variety of infectious respiratory diseases, including influenza, measles, plague, tuberculosis and the ongoing Coronavirus Disease 2019 (COVID-19), are transmitted largely through close contact, and spread based on the social contacts and mixing patterns of the host population (1)(2)(3). Effective contacts (interactions that allow pathogen transfer between individuals) typically involve inhalation of infectious secretions from coughing, sneezing, laughing, singing or talking, but may also include touching contaminated body parts or surfaces followed by ingestion of the pathogen (4). Control strategies against such infections are based on contact avoidance measures, including isolation of those who are ill, use of personal protective equipment such as gloves and face masks, and physical distancing (5,6). In this study, we examine the relations between self-reported social contact patterns, public health control measures, and the dynamics of COVID-19 in the province of British Columbia (BC), Canada. The history and epidemiological features of COVID-19 have been documented by several studies, including (7)(8)(9)(10)(11)(12)(13)(14), and we present a summary of these as well as conventional COVID-19 transmission control measures in Appendix 1. A small number of studies, including (15)(16)(17)(18), have analyzed population patterns of social contacts and their connection to the dynamics of close-contact infectious diseases. Overall, the studies show that disease incidence and effective reproduction number (average number of newly infected individuals per case) increase with contact rates. 
However, contact rates and their effects on infection dynamics may vary over time and with factors such as geographical location, sex, age, household size, occupation and other socio-economic factors. In our study, we explore and quantify associations between social contact patterns, public health orders, transmission, and reported cases of COVID-19, in BC and in the two most populous BC regional health authorities: Fraser Health Authority (FHA) and Vancouver Coastal Health Authority (VCHA) (19). We make use of detailed contact survey data and estimate transmission using a model-based metric of the time-varying reproductive number, R t . We specifically consider data from autumn of 2020 onward, during which a series of regional and provincial public health orders were introduced to reduce the number of close contacts and curb transmission. METHODS We studied the association between close-contact rates [based on the BC Mix COVID-19 Survey data, which is summarized in Appendix 2 and described in detail in (20)], daily new confirmed COVID-19 cases [obtained from BC COVID-19 data, which is provided by the BC Centre for Disease Control (21), and also available at (22)] and R t [derived by fitting the covidseir transmission model of (7), where R t was computed using the Next-Generation matrix method (23,24), to the reported case data] in BC, from September 13, 2020 to February 19, 2021, a period in which three public health contact-restriction orders were introduced (October 26, November 7 and November 19). Further details of the public health orders are provided in Appendix 3. For each successive four-day period, we calculated (i) population rates of contact as the average number of self-reported close contacts made by an individual in a day (average daily contacts); (ii) the average number of newly reported COVID-19 cases per day (average daily cases or new cases); and (iii) transmission rate of COVID-19 as the average daily value of our model-based estimate of R t . 
We used segmented linear regression [described in Appendix 4 and in (25)(26)(27)] to investigate the impact of public health orders on the three variables. We used Pearson correlation [summarized in Appendix 5 and described in detail in (28)(29)(30)(31)] to assess the instantaneous relationship between contact rates and R t . Finally, we used vector autoregressive (VAR) models [described in Appendix 6 and in (32)(33)(34)(35)] to quantify lagged associations between contact rates, new cases and R t . All analyses were performed using R version 3.6.3. We used α = 0.05 for all statistical tests. Effects of Public Health Orders on Average Daily Contacts, Average Daily Cases and Transmission Provincially, rising contact rates and transmission (R t ) reversed shortly after the first health order on October 26, 2020 (Figures 1A,G); for contacts, this declining trend lasted only until the second public health order (13 days later, on November 7), whereas for R t , the decline continued to at least the third order (25 days later, on November 19). Both contact rates and R t were relatively stable after the third order until the end of our study period (February 19, 2021). As expected, the trend in new cases mirrored that of our transmission indicator but was shifted about 2 weeks later, corresponding to the delay from transmission to symptom onset, diagnosis, and case reporting (Figures 1D-G). The same patterns were generally apparent in both of the regional health authorities we studied, although declines in contact rates and R t appeared to start roughly 1 week before the first public health order in FHA, and roughly 1 week after the first order in VCHA (Figures 1B-I). 
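The segmented regression used here fits a trend line whose slope is allowed to change at each order date, which can be expressed with hinge terms in an ordinary least-squares design matrix. A minimal numpy sketch on synthetic data (the breakpoint, slopes and values below are invented for illustration, not the study's):

```python
import numpy as np

def segmented_fit(t, y, breakpoints):
    """Continuous piecewise-linear fit:
    y = b0 + b1*t + sum_k c_k * max(t - bp_k, 0),
    where c_k is the change in slope at breakpoint bp_k."""
    cols = [np.ones_like(t), t]
    for bp in breakpoints:
        cols.append(np.maximum(t - bp, 0.0))  # hinge term for this breakpoint
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [b0, b1, c_1, ..., c_K]

# Hypothetical contact-rate series: rising (slope +0.2) until an order
# at t = 10 changes the slope by -0.3 (i.e., slope -0.1 afterwards).
t = np.arange(20, dtype=float)
y = 2.0 + 0.2 * t - 0.3 * np.maximum(t - 10.0, 0.0)
b0, b1, c1 = segmented_fit(t, y, breakpoints=[10.0])
print(round(b1, 3), round(c1, 3))  # slope before the order, slope change after it
```

In the study's framing, a significant negative c_k after an order date is the evidence that the order bent the contact-rate trend downward.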
Simple comparison of overall contact rates and R t before and after the introduction of public health orders indicated that in BC, FHA and VCHA, contact rates declined by 30.1, 29.2, and 29.9%, while R t declined by 17.9, 25.0, and 5.4%, respectively, following the first public health order onwards. Our segmented linear regression models showed that in BC, FHA and VCHA, the slope of the contact rate regression line was positive before the first public health order, turned substantially negative thereafter and slightly increased, but remained negative or close to zero through all other health orders (Table 1). The changes in contact rate slope after the first public health order (i.e., in the segment between the first and second orders) were statistically significant in the province and in VCHA (p < 0.05), but not in FHA. Provincially and in the two regional health authorities, the changes in contact rate slope following the second and the third health orders were not statistically significant (p > 0.05). Provincially and in the two regional health authorities, the slope for transmission (R t ) was positive before the first public health order, turned negative after this order, decreased further following the second public health order, and stabilized after the third health order (Table 1). Changes in transmission slope following all public health orders were statistically significant (p < 0.05), except after the second health order in FHA. Pearson Correlation of Average Daily Contacts and Transmission Our correlation analysis showed that high contact rates and high transmission tended to occur at the same time. Provincially, and in both regional health authorities, transmission (average daily R t ) was significantly positively correlated with average daily contacts (r BC = 0.64, p < 0.001; r FHA = 0.53, p < 0.001; r VCHA = 0.34, p = 0.033). Based on these values, the magnitude of the correlation was about 50% stronger in FHA compared to VCHA (r FHA ≈ 1.56 × r VCHA). 
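The instantaneous association is a plain Pearson correlation between the two 4-day-averaged series. A small sketch (the series values are hypothetical, not the survey's):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical 4-day averages: contacts and Rt declining together
contacts = [3.4, 3.6, 3.1, 2.7, 2.3, 2.1, 2.0, 2.2]
rt       = [1.15, 1.22, 1.08, 0.97, 0.92, 0.88, 0.86, 0.90]
r = pearson_r(contacts, rt)
```

A value of r near the reported 0.64 (BC) would indicate that contacts and transmission rise and fall together within the same period, with no lag imposed.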
VAR Models of Average Daily Contacts and Average Daily Cases, and Average Daily Contacts and Transmission The notations BC contacts,t , BC cases,t and BC transmission,t represent the (stationary) time series of average daily contacts, cases, and transmission, respectively, in BC. The corresponding notations for FHA and VCHA are similarly defined. Our time series models showed that variation in new cases and transmission of COVID-19 were significantly attributable to past values of average daily contacts, whereas variation in average daily contacts was explained largely by its own past values (Figure 2). Each panel of the FEVD plots shown in Figure 2 illustrates the proportion of variation in cases, contacts or transmission that is explained by that variable's own past values vs. the past values of other variables. Provincially, on average, about 19% of the variation in average daily cases, and about 20% of the variation in COVID-19 transmission, was explained by previous rates of daily contact (Figures 2A,B). In FHA, previous average daily contacts contributed up to 22% of the variation in average daily cases (Figure 2C) and up to 61% of the variation in transmission (Figure 2D). In VCHA, up to 30% of the variation in average daily cases was explained by average daily contacts, whereas contact rates explained up to 36% of the variation in transmission (Figures 2E,F). Supplementary Table S13 in Appendix 6.5 shows numerical representations of all FEVD plots in Figure 2. Granger causality testing confirmed that provincially and for VCHA, previous daily contacts were a significant time series driver of average daily cases (BC: p = 0.006, VCHA: p = 0.011), but the same did not hold for FHA (see Table 2). Supplementary Figure S4 in Appendix 6.5 provides a visual description of the Granger causality testing results in Table 2. 
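The Granger test behind Table 2 asks whether past contact rates improve prediction of cases beyond the cases' own history. A bare-bones F-test version of that idea (not the paper's full VAR machinery), on simulated data in which the lag-1 effect of contacts on cases is invented for illustration:

```python
import numpy as np
from scipy import stats

def granger_f_test(x, y, lags):
    """F-test: do past values of x improve prediction of y beyond y's own past?
    (A bare-bones version of the Granger causality test.)"""
    n = len(y)
    rows = range(lags, n)
    Y = np.array([y[i] for i in rows])
    own = np.column_stack([[y[i - k] for i in rows] for k in range(1, lags + 1)])
    other = np.column_stack([[x[i - k] for i in rows] for k in range(1, lags + 1)])
    ones = np.ones((len(Y), 1))

    def rss(X):  # residual sum of squares of an OLS fit
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return float(resid @ resid)

    rss_r = rss(np.hstack([ones, own]))           # restricted: y's past only
    rss_u = rss(np.hstack([ones, own, other]))    # unrestricted: plus x's past
    df1, df2 = lags, len(Y) - (2 * lags + 1)
    F = ((rss_r - rss_u) / df1) / (rss_u / df2)
    return F, stats.f.sf(F, df1, df2)

rng = np.random.default_rng(42)
n = 200
contacts = rng.normal(0.0, 1.0, n)
cases = np.zeros(n)
for i in range(1, n):
    # Hypothetical lag-1 dependence of cases on contacts (0.6 is invented)
    cases[i] = 0.3 * cases[i - 1] + 0.6 * contacts[i - 1] + rng.normal(0.0, 0.5)

F, p = granger_f_test(contacts, cases, lags=2)
print(p < 0.05)  # True: past contacts significantly predict cases
```

The FEVD step reported in Figure 2 additionally apportions each series' forecast-error variance among the shocks of all series in the fitted VAR; dedicated implementations (e.g., R's vars package or statsmodels) handle that decomposition.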
Our time series models also showed that some variation in average daily contacts was explained by previous average daily cases and transmission of COVID-19. Provincially, average daily cases and transmission explained up to 13% (or 10% on average) and up to 18%, respectively, of the variation in average daily contacts (Figures 2A,B). In FHA, past average daily cases contributed up to 55% of the variation in the contact rates (Figure 2C), whereas previous transmission rates contributed up to 7% of the variation in average daily contacts (Figure 2D). In VCHA, the reverse was true, with previous average daily cases explaining little (up to 6%) variation in average daily contacts, but transmission explaining up to 35% of the variation in average daily contacts (Figures 2E,F). The impact of previous case counts on average daily contacts was significant at the provincial level and in FHA (BC: p = 0.049; FHA: p = 0.001), but not significant for VCHA. Past values of average daily contacts did not significantly impact transmission provincially or in FHA; however, these two variables were significantly associated in VCHA. DISCUSSION The primary approach to prevent the spread of many infectious diseases transmissible through close person-to-person contact is reduction or avoidance of such contacts altogether. Yet, few studies have quantified the impact that such contact restrictions have on rates of "effective" contact (those actually involved in transmission) and on transmission itself. In our study, we explored time series relationships between close contact patterns and the dynamics of the ongoing COVID-19 pandemic in British Columbia, Canada and in its two most populous regional health authorities, FHA and VCHA, from mid-September, 2020 to mid-February, 2021. During this period, three public health contact-restriction measures were introduced (on October 26, November 7 and November 19) to control rising numbers of cases. 
We used data from the BC Mix Survey, which specifically captures rates of close contacts that are likely to underlie transmission. We analyzed contact rates in relation to the timing of contact-restriction measures and assessed their impact on COVID-19 transmission (average daily number of new infections generated per case, R t ) and reported new cases. We found that in BC, FHA and VCHA, all three public health orders reduced contact rates and transmission, or helped to maintain lowered rates. Overall, declines in contact rates and transmission occurred concurrently with the announcement of public health orders, whereas declines in newly reported cases were, as expected due to reporting delays, lagged by roughly 2 weeks. The decline we observed in contact rates in FHA about 1 week prior to the public health orders could have resulted from public anticipation and early media reporting of the upcoming restriction orders and/or from reports of rising numbers of new cases of COVID-19. Contact rates declined by roughly 30% overall after the first public health order. Transmission similarly declined in response to these orders, although this effect varied by region (R t reduced by 17.9, 25.0, and 5.4% in BC, FHA and VCHA, respectively). This observation suggests that compliance with public health orders, by limiting the frequency of person-to-person contacts, played an important role in reducing the transmission of COVID-19. In all regions, transmission curves mirrored, and were highly correlated with, those of contact rates, suggesting that these self-reported rates of close contact were directly and concurrently related to spread of COVID-19. Through time series analysis, we showed that lagged daily contacts significantly predicted subsequent new cases, explaining roughly 19% of their variation at the provincial level. 
Interestingly, we also found evidence of behavioral feedback at the population level, whereby increased reported cases led to reduced subsequent rates of contact: overall, previous daily cases explained about 10% of the variation in subsequent daily contacts in the province. The interdependence of previous contact rates, new cases and transmission of COVID-19 varied by region. It is important to note that our time series analysis only assesses the impact of previous or lagged contacts on transmission and new cases, i.e., it does not include the impact of concurrent contacts. Hence, we find that previous contacts primarily impact numbers of new cases, where there is naturally a delay due to reporting, rather than rates of transmission (where the impact is expected to largely occur concurrently). However, we show through our correlation analysis that contacts and transmission are significantly concurrently related. A few studies have quantified variation in transmission or cases of an infectious disease as a function of contact rates. For instance, in (16), the authors analyzed United Kingdom contact survey data during periods before and after the March 2020 lockdown due to the COVID-19 pandemic, and found that a model-derived effective reproduction number declined by 75% as a response to a 74% reduction in average daily contacts. In (15), the authors studied contact survey data from Belgium during different stages of intervention against COVID-19 and found that an 80% decline in the average number of contacts during the first lockdown period reduced the effective reproduction number to below one, leading to fewer reported new cases. In (36), the authors studied United Kingdom population mixing patterns during the 2009 H1N1 influenza epidemic and found that a 40% reduction in contacts among school children during school holidays resulted in about a 35% decline in the reproduction number of influenza. 
These studies confirm a relation between self-reported contact rates and infectious disease transmission, but also show variation that may be due to epidemiological factors such as differences in the transmission environment (e.g., use of personal protective equipment) and the types of contacts being measured. Other studies that have explored the control of COVID-19 by management of social contacts include (37,38), which indicated that the relatively low transmission rate of COVID-19 in India in early 2020 was attributable to public compliance with a strict government-imposed lockdown on social gatherings. The possibility of a feedback mechanism in which contact rates decrease as a result of increasing transmission and new cases has been documented in some previous studies. For instance, during the 2014 Ebola outbreak in Sierra Leone, self-reported prevention practices, such as avoidance of contact with corpses, were found to have increased with rising disease prevalence (39). During the early stages of the COVID-19 pandemic, the practice of cautious social contacts by the Singaporean population increased with rising rates of infection due to behavioral drivers such as fear and perceived risk of infection (40). Similarly, the decline of close contacts in Hong Kong during the first quarter of 2020 is thought to have resulted from increasing messaging and spread of information about the prevalence of COVID-19 (41). Thus, widespread public awareness of increasing numbers of new cases, through public health and various information media, may help to explain population reductions in contact rates. In our study, we found that contact patterns and the related dynamics of COVID-19 varied with the geographies considered. A number of previous studies have also identified variation in contact rates by geography, and by factors that themselves vary geographically. 
In (17), the authors analyzed and compared social contact survey data for eight European countries in 2005 and 2006, and found that contact rates varied by geographical location, but also by sex, age and household size. In (42), the authors reviewed contact survey data across several countries from varying economic brackets and found that, in general, high contact rates were associated with densely populated settings and large household sizes, which characterized most low- to middle-income countries. This is consistent with the general expectation that close-contact infectious diseases are more likely to impact densely populated regions and settings with large household sizes. Geographic variation in our results, particularly the higher contact rates, transmission and numbers of new cases in FHA compared to VCHA, may reflect the generally higher population density and larger household sizes in FHA (19). Related to the above factor is the evidence that the geographic spread of COVID-19 cases is connected to the local economic structure of a location relative to neighboring regions: in Italy, COVID-19 hit economic core locations (which were also characterized by higher population densities) harder than regions with lower economic activity (43). Variations in close contact, case counts and transmission of COVID-19 can offer guidance for shaping or relaxing public health restrictions (44). For instance, control measures can be deployed more rapidly in densely populated regions reporting high contact rates and cases than in sparsely distributed populations; and control measures can be tailored to capture population heterogeneity and other infection risk factors such as age groups. Our analysis has several important limitations. We relied on case surveillance data to determine the number of new cases and the transmission indicator of COVID-19 over time. 
This means we did not account for asymptomatic infection, which may be a strong driver of COVID-19 transmission, and could have impacted the conclusions of our study. Relying on case surveillance data may also underestimate the actual number of new cases in settings where symptomatic individuals did not seek testing or where testing capacity is constrained by inaccessibility or shortage of resources. Three regional health authorities were not included in the assessment of regional associations of contact rates to COVID-19 dynamics: the Northern, Interior and Vancouver Island Health Authorities. These health authorities have relatively smaller population sizes, are more sparsely populated and have many rural communities (19). In these health authorities, self-reported contact rate data were too sparse for us to explore relations with reported cases and transmission. As a result, this study may not be representative of patterns in more rural populations. Limitations of the self-reported contact rates that may affect our analysis are provided in (20). For instance, some population groups, including the economically marginalized, the under-housed, and those in immigration detention or incarceration, are likely underrepresented in the survey. In this study, we compared time series of means (averages) of daily contacts, cases and transmission of COVID-19, and did not consider other measures of central tendency, which may be crucial when analyzing skewed data. For instance, in the early stages of the COVID-19 pandemic, contact rates were possibly higher during social gatherings over holidays, while more cases of COVID-19 tended to be reported on days after weekends and on days following holidays (45). 
Our conclusions may also be impacted by the choice of the time series analysis methods employed: in (46), the authors showed how the choice of the best time series analysis method can depend on factors such as the stage of an outbreak and the granularity of the geographic level explored. This is the first study analyzing extensive and novel data on person-to-person contacts collected continuously throughout the province of British Columbia, Canada, to understand the role of close contacts in transmission and control of infectious diseases. The study provides a quantitative approach to measuring the temporal associations among self-reported close contact rates, public health contact-restriction orders, and transmission dynamics of COVID-19. The observed impacts of person-to-person contacts on COVID-19 dynamics, as well as the capability of public health measures to modify these contact rates, are likely to prevail, although with varying magnitudes, in other jurisdictions and for other infectious diseases with similar modes of transmission. These findings support the quantitative study of population contact rates, which can inform infectious disease control strategies. DATA AVAILABILITY STATEMENT The raw COVID-19 case data used in this article was extracted from a line list generated by BCCDC Public Health Reporting Data Warehouse (PHRDW). The contact rate data used in this study was retrieved from the BC Mix COVID-19 survey and may be available upon reasonable request. ETHICS STATEMENT The study was approved by the University of British Columbia Behavioral Research Ethics Board (No: H20-01785). AUTHOR CONTRIBUTIONS NR and MO developed this concept along with NJ. All authors reviewed and agreed on the final submission.
v3-fos-license
2022-06-22T06:17:08.830Z
2022-06-20T00:00:00.000
249886638
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/s41598-022-14476-4.pdf", "pdf_hash": "7a6608bd3b244120c3927c80c7cf71d8d8d80a36", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:45297", "s2fieldsofstudy": [ "Medicine" ], "sha1": "44f2a3a3204a173080858b85fa0d442d5de80433", "year": 2022 }
pes2o/s2orc
Magnified endoscopy with texture and color enhanced imaging with indigo carmine for superficial nonampullary duodenal tumor: a pilot study This pilot study aimed to investigate the utility of texture and color enhancement imaging (TXI) with magnified endoscopy (ME) for the preoperative diagnosis of superficial nonampullary duodenal epithelial tumors (SNADETs). We prospectively evaluated 12 SNADETs. The visibility of ME-TXI, ICME-WLI (ME with indigo carmine (ICME) under white-light imaging (WLI)), and ICME-TXI, compared with ME-NBI (narrow-band imaging), was scored from +2 to −2 (ME-NBI was set as score 0) by 3 experts. Scores +2 and +1 were defined as improved visibility. The intra-observer and interobserver agreement for improved visibility of surface structure (SS) was evaluated. Sensitivity, specificity, and positive predictive value (PPV) for Vienna Classification (VCL) C4/5 associated with the preoperative diagnosis of ICME-TXI were analyzed. The SS visibility score of ICME-TXI was significantly higher than that of ME-NBI, ME-TXI, and ICME-WLI (P < 0.001, respectively). The kappa coefficients of reliability for intra-observer and interobserver agreement for the SS visibility improvement with ICME-TXI were 0.96, 1.00, 1.00 and 0.70, 0.96, 0.96, respectively. All endoscopists preferred ICME-TXI for visualizing SS mostly for all lesions. The sensitivity, specificity, and PPV (%) of ICME-TXI for VCL C4/5 were 80, 66.7, and 63.2, respectively. ICME-TXI enhances the visibility of the SS of SNADETs and may contribute to their preoperative diagnosis. The incidence of identified superficial nonampullary duodenal epithelial tumors (SNADETs) has gradually increased owing to advancements in endoscopic technology 1 . Pancreaticoduodenectomy (PD) is a standard treatment for duodenal cancers. However, PD is an invasive treatment with a mortality rate ranging from 1 to 4% 2,3 . 
Therefore, it is very important to accurately diagnose SNADETs to allow less invasive management such as underwater endoscopic mucosal resection [4][5][6] or endoscopic submucosal dissection [7][8][9][10] . The accuracy of preoperative biopsy for SNADETs is unsatisfactory, ranging from 68 to 71.6% 11,12 . In addition, small biopsy bites for SNADETs can cause severe fibrosis, which may later become an obstacle to endoscopic resection 11 . A growing number of studies have examined the utility of image-enhanced endoscopy (IEE) for SNADETs. Magnified endoscopy (ME) with narrow-band imaging (NBI) has been shown to be useful for SNADETs [13][14][15] . Further, the accuracy of diagnosis for the Vienna Classification (VCL) C3, C4 or C3, C4/5 16,17 has been reported to range from 65.1 to 87% [13][14][15] , suggesting that endoscopic diagnosis is at least comparable to biopsy. In 2020, Olympus Medical Systems Corporation (Tokyo, Japan) introduced a new IEE system for texture and color enhancement imaging (TXI). In TXI, an image usually obtained by white-light irradiation is divided into a texture image and a base image. After enhancing the texture and correcting the color tone and brightness, the images are combined 18 . We previously reported the usefulness of TXI for visualizing gastric mucosal atrophy and gastric neoplasms 18 . However, its utility with ME for SNADETs has not yet been clarified. www.nature.com/scientificreports/ Thus, this pilot study aimed to investigate the utility of TXI with ME for the preoperative diagnosis of SNADETs. Methods Study design and participants. This prospective study was conducted at Chiba University Hospital (Japan) between March and July 2021. Patients diagnosed with sporadic SNADETs were prospectively enrolled in this study before diagnostic endoscopy. After endoscopic diagnosis, all patients underwent endoscopic resection or an operation. 
This study was reviewed and approved by the institutional review board of Chiba University School of Medicine and was registered at the University Hospital Medical Information Network (UMIN000041436). Informed consent was obtained from all participants. All methods were performed in accordance with the relevant guidelines and regulations. Instruments. We used the CV-1500 light source equipped with a TXI system and the GIF-XZ1200 and GIF-H290Z endoscopes (Olympus Medical Systems Corporation, Tokyo, Japan). There are two types of TXI: mode 1 and mode 2. In TXI mode 1, the color tone changes are more coordinated than in TXI mode 2. In this study, we used only TXI mode 1. For the structure enhanced mode, "A7" was selected for NBI and WLI (white-light imaging), while "strong" was selected for TXI 18 . Magnified endoscopy. ME with WLI (ME-WLI), NBI (ME-NBI), and ME with TXI (ME-TXI) were performed for each lesion. For ME-WLI and ME-TXI, indigo carmine (IC) dye was also used (ICME-WLI, ICME-TXI, respectively). During the preoperative endoscopic diagnosis of SNADETs, the lesions were assessed for size (mm), location in the duodenum, macroscopic findings, endoscopic diagnosis according to VCL 16,17 , and the deposition of white opaque substance (WOS) 19 . The macroscopic findings were assessed according to the Paris endoscopic classification 20 . VCL endoscopic diagnosis was based on the presence of WOS, size of the lesion, and surface structure (SS) (closed- or open-loop) as described in a previous report 15 . Closed-loop structures were defined as oval-shaped mucosal structures that could be interpreted as connecting the starting point of the mucosa to the endpoint. Open-loop structures were defined as linear mucosal structures 15 . Endoscopic diagnosis with ICME-TXI was performed separately by each endoscopist. Outcomes. The primary endpoint of this study was the visibility of the SS for SNADETs with ICME-TXI. We also assessed the visibility of the SNADETs' blood vessels. 
The visibility of ME-TXI, ICME-WLI, and ICME-TXI compared with ME-NBI (ME-NBI was set as score 0) was scored by three experts (each an endoscopist with >5 years of IEE experience) in five stages, a modification of previous reports [21][22][23] . The details of the score are as follows: +2 (remarkably improved visibility), +1 (improved visibility), 0 (unchanged visibility), −1 (worsened visibility), −2 (remarkably worsened visibility). Scores +2 and +1 were defined as improved visibility. The intra-observer and interobserver agreements for the improved visibility of SS were evaluated. For intra-observer agreement, the first and second evaluations were spaced 6 months apart. In addition, the examiners' favored modality for the evaluation of the SS and blood vessels was assessed. The sensitivity, specificity, and positive predictive value (PPV) for VCL C4/5 associated with the preoperative diagnosis of ICME-TXI and preoperative biopsy were also investigated. Sample size calculation. There were no data regarding ICME-TXI for SNADETs. As this was a pilot study, 12 lesions were enrolled based on criteria from a previous report 24 . Statistical analysis. Baseline data were presented as mean ± standard deviation (SD). The differences in visibility scores were analyzed using Friedman's test with Bonferroni correction. Intra-observer and interobserver agreement for the visibility improvement was assessed with Cohen's Kappa analysis. The kappa coefficient of reliability was classified as follows: 0.0-0.2 (slight agreement), 0.21-0.40 (fair agreement), 0.41-0.60 (moderate agreement), 0.61-0.80 (substantial agreement), and 0.81-1.0 (almost perfect or perfect agreement). All statistical analyses were performed using the Statistical Package for the Social Sciences software version 26 (SPSS Inc., Chicago, IL, USA). P values < 0.05 were considered statistically significant. Results Patients and lesions. In total, 11 patients and 12 lesions were evaluated. 
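Cohen's kappa and the interpretation bands described above can be sketched in a few lines; the two rating vectors below (12 lesions scored "improved" vs. "not" by two hypothetical raters) are invented for illustration:

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters scoring the same items."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n                   # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)   # chance agreement
    return (po - pe) / (1 - pe)

def kappa_band(k):
    """Interpretation bands used in the study (0.61-0.80 = substantial, etc.)."""
    for hi, label in [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
                      (0.80, "substantial"), (1.00, "almost perfect or perfect")]:
        if k <= hi:
            return label

# Hypothetical ratings for 12 lesions ("imp" = improved visibility)
rater1 = ["imp"] * 10 + ["not"] * 2
rater2 = ["imp"] * 9 + ["not"] * 3   # disagrees with rater1 on one lesion
k = cohen_kappa(rater1, rater2)
print(round(k, 2), kappa_band(k))  # 0.75 substantial
```

Note that kappa discounts chance agreement, which is why 11/12 raw agreement maps to a kappa of only 0.75 here.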
The characteristics of the patients and lesions are shown in Table 1. The median tumor size was 10 mm. The macroscopic findings were IIa dominant. Seven lesions (58.3%) were VCL C3, and 5 lesions (41.7%) were VCL C4/5. WOS was identified in 8 lesions (66.7%). Representative cases of endoscopic observation in each modality for SNADETs are shown in Fig. 1a, b. Visibility score. The mean visibility scores (± SD) in each modality are shown in Fig. 2a, b. As described previously, the mean visibility score of ME-NBI was set as 0. For visibility of SS, the mean visibility scores for ME-TXI, ICME-WLI, and ICME-TXI were −0.08 ± 0.81, 0.22 … , respectively. For visibility of blood vessels, the mean visibility scores for ME-TXI, ICME-WLI, and ICME-TXI were −0.78 ± 0.80, −1.19 ± 0.71, and −0.58 ± 0.55, respectively. The P values of the visibility scores of SNADETs in ME-TXI, ICME-WLI, and ICME-TXI compared to ME-NBI are shown in Table 2a, b. In summary, the visibility score of SS in ICME-TXI was significantly higher than those of ME-NBI, ME-TXI, and ICME-WLI (P < 0.001, respectively). The visibility scores of blood vessels in ME-TXI, ICME-WLI, ICME-TXI were significantly lower than ME-NBI (P = 0.001, P < 0.001, P = 0.028, respectively). In addition, the visibility score of blood vessels in ICME-TXI was significantly higher than ICME-WLI (P = 0.003). Preference of modality for surface structure and blood vessels. For visualizing SS, all endoscopists preferred ICME-TXI mostly for all lesions. On the other hand, for visualizing blood vessels, all endoscopists preferred ME-NBI mostly for all lesions. Intra-observer and interobserver agreement for the improvement of visibility for surface structure with ICME-TXI. The kappa coefficients of reliability for intra-observer agreements for the visibility improvement for SS with ICME-TXI were 0.96, 1.00, and 1.00. Those for interobserver agreement for the visibility improvement for SS with ICME-TXI were 0.70, 0.96, and 0.96. 
Intra-observer agreements for SNADETs were "almost perfect or perfect agreement," and interobserver agreements were "substantial agreement" or "almost perfect or perfect agreement."

Sensitivity and PPV for VCL C4/5 in association with preoperative diagnosis by ICME-TXI. The sensitivity, specificity, and PPV (%) for VCL C4/5 associated with preoperative diagnosis by ICME-TXI (calculated for 36 lesion evaluations, 12 for each endoscopist) were 80, 66.7, and 63.2, respectively. Preoperative biopsy was performed in 9 of 12 lesions; its sensitivity, specificity, and PPV (%) for VCL C4/5 were 40.0, 75.0, and 66.7, respectively. The sensitivity of preoperative diagnosis with ICME-TXI for SNADETs thus tended to be higher than that of preoperative biopsy.

Discussion

This study is the first to report the usefulness of ME-TXI, including ICME-TXI. Conventionally, NBI has been the main modality for ME [13][14][15]. The visibility of the SS with ICME-WLI and ME-TXI did not differ from that with ME-NBI, but with ICME-TXI the SS was more visible than with ME-NBI; that is, the visibility of the SS improved with neither IC nor the texture enhancement of TXI alone. We consider that IC accumulated in the concave parts of the SNADET surface makes the outline of the ductal structure stand out through color emphasis, making the SS easy to recognize. The kappa coefficients for interobserver agreement on the improved visibility of the SS with ICME-TXI, and the proportion of endoscopists preferring ICME-TXI for recognizing the SS, were both high, suggesting that the reproducibility is high. The clinical implications of these results are as follows. First, the better visibility of the SS with ICME-TXI might allow easier preoperative diagnosis of SNADETs, even for non-experts. Second, a new diagnostic system might in the future be constructed on the basis of clearly identified SS. Akazawa et al.
reported that SNADETs show distinct endoscopic features according to mucin phenotype 25 ; for instance, the gastric phenotype showed a dense pattern with dilatation of the intervening part on ME-NBI 25 . Although this study did not analyze mucin phenotype, it may become possible to distinguish the mucin phenotype preoperatively by observing the SS with ICME-TXI.

Figure 1. Representative cases of SNADETs with each modality. SNADET superficial nonampullary duodenal epithelial tumor, ME-WLI magnified endoscopy with white-light imaging, ME-NBI magnified endoscopy with narrow-band imaging, ME-TXI magnified endoscopy with texture and color enhancement imaging, ICME-WLI indigo carmine dye with ME-WLI, ICME-TXI indigo carmine dye with ME-TXI, VCL Vienna classification, IMC intramucosal carcinoma. (a) 13 mm, IIa, pathological diagnosis of VCL C3. A. ME-NBI. B. ME-TXI. C. ICME-WLI. D. ICME-TXI. An open-loop pattern was observed. (b) 10 mm, IIa, pathological diagnosis of VCL C4/5 (IMC). A. ME-NBI. B. ME-TXI. C. ICME-WLI. D. ICME-TXI. A closed-loop pattern was observed.

In the previous report on ME-NBI diagnosis of SNADETs that was referred to for diagnosis in this study, the sensitivity and specificity for VCL C4/5 in preoperative diagnosis were 90.5% and 52.4%, respectively 15 , comparable with our results. The sensitivity of preoperative diagnosis with ICME-TXI tended to be superior to that of biopsy in this study; because ICME-TXI makes the SS easy to recognize, preoperative diagnosis may be made more efficiently. NBI uses narrow-band wavelengths matched to the absorption of hemoglobin in blood vessels: capillaries in the superficial mucosal layer are emphasized by the 415 nm light and displayed in brown, whereas deeper mucosal and submucosal vessels are visualized by the 540 nm light and displayed in cyan 26 . Therefore, ME-NBI remained superior to every other modality in this study with respect to vascular visibility.
Interestingly, the visibility scores for blood vessels with ME-TXI tended to be higher than those with ICME-WLI, and those with ICME-TXI were significantly higher than those with ICME-WLI. The outline of the blood vessels is emphasized by the texture enhancement of TXI as well as by contrast with IC, so the blood vessels were easier to identify than with ICME-WLI. Although the diagnostic method used in this study does not include evaluation of blood vessels, TXI may be useful for diagnosis that includes assessment of the vascular structure.

Figure 2. Mean visibility scores (mean ± SD) for each modality. The mean visibility score of ME-NBI was set as 0. SD standard deviation, ME-WLI magnified endoscopy with white-light imaging, ME-NBI magnified endoscopy with narrow-band imaging, ME-TXI magnified endoscopy with texture and color enhancement imaging, ICME-WLI indigo carmine dye with ME-WLI, ICME-TXI indigo carmine dye with ME-TXI. (a) Mean visibility scores for surface structure: ME-TXI − 0.08 ± 0.81, ICME-WLI 0.22 ± 0.87, ICME-TXI 1.58 ± 0.60. (b) Mean visibility scores for blood vessels: ME-TXI − 0.78 ± 0.80, ICME-WLI − 1.19 ± 0.71, ICME-TXI − 0.58 ± 0.55.

Table 2. P values of the visibility scores of SNADETs with ME-TXI, ICME-WLI, and ICME-TXI compared to ME-NBI (ME-NBI set as score 0). P values were calculated using Friedman's test with Bonferroni correction. SNADETs superficial nonampullary duodenal epithelial tumors, ME-WLI magnified endoscopy with white-light imaging, ME-NBI magnified endoscopy with narrow-band imaging, ME-TXI magnified endoscopy with texture and color enhancement imaging, ICME-WLI indigo carmine dye with ME-WLI, ICME-TXI indigo carmine dye with ME-TXI.
The table reports the pairwise comparisons ME-NBI vs ME-TXI, ME-NBI vs ICME-WLI, ME-NBI vs ICME-TXI, ME-TXI vs ICME-WLI, ME-TXI vs ICME-TXI, and ICME-WLI vs ICME-TXI, for (a) surface structure and (b) blood vessels.

This study had several limitations. First, it was a pilot study with a sample of only 12 lesions; the results therefore only suggest the potential of ICME-TXI in the preoperative diagnosis of SNADETs. The findings that showed clear, significant differences from the other modalities (the surface-structure visibility score of ICME-TXI and the blood-vessel visibility score of ME-NBI) were considered credible, whereas the blood-vessel visibility score of ICME-TXI needs to be analyzed in a larger sample. Second, the diagnostic method referred to in this study was originally developed for ME-NBI; a large prospective study using a method specific to TXI for the preoperative diagnosis of SNADETs is needed to assess the true value of ICME-TXI. In conclusion, ICME-TXI improves the visibility of the SS of SNADETs and may contribute to their preoperative diagnosis.

Data availability

The datasets generated and/or analyzed during the current study are not publicly available due to the protection of personal information.