added (string) | created (string) | id (string) | metadata (dict) | source (string) | text (string) | version (string)
---|---|---|---|---|---|---
2018-04-03T04:03:56.858Z
|
2012-01-17T00:00:00.000
|
2991436
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://flore.unifi.it/bitstream/2158/848696/1/No%20proinflammatory%20signature%20in%20CD34+%20hematopoietic%20progenitor%20cells%20in%20multiple%20sclerosis%20patients.pdf",
"pdf_hash": "bda171abdd9025adbc5f30bdec7b87246908ddba",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:861",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "649dfea13280229d544facf37d7234fd5474a07a",
"year": 2012
}
|
pes2o/s2orc
|
No proinflammatory signature in CD34+ hematopoietic progenitor cells in multiple sclerosis patients
Autologous hematopoietic stem cell transplantation (aHSCT) has been used as a therapeutic approach in multiple sclerosis (MS). However, it is still unclear if the immune system that emerges from autologous CD34+ hematopoietic progenitor cells (HPC) of MS patients is pre-conditioned to re-develop the proinflammatory phenotype. The objective of this article is to compare the whole genome gene and microRNA expression signature in CD34+ HPC of MS patients and healthy donors (HD). CD34+ HPC were isolated from peripheral blood of eight MS patients and five HD and analyzed by whole genome gene expression and microRNA expression microarray. Among the differentially expressed genes (DEGs) only TNNT1 reached statistical significance (logFC=3.1, p<0.01). The microRNA expression was not significantly different between MS patients and HD. We did not find significant alterations of gene expression or microRNA profiles in CD34+ HPCs of MS patients. Our results support the use of aHSCT for treatment of MS.
Introduction
Intense immunosuppression followed by autologous hematopoietic stem cell transplantation (aHSCT) is a potential treatment for patients suffering from aggressive multiple sclerosis (MS). 1 aHSCT is able to induce a long-lasting remission of inflammatory disease activity, which can persist years beyond complete immune reconstitution. The rationale for aHSCT in MS is based on the concept that lympho-/myeloablative conditioning eliminates pathogenic autoreactive immune cells and facilitates the regeneration of a new and tolerant immune system from CD34+ hematopoietic progenitor cells (HPC). In fact, thorough analysis of the T cell repertoire in the regenerating immune system after aHSCT in MS supports that a new and antigen-naïve T cell repertoire develops from the HPC compartment via thymic regeneration. 2 To date, it remains unresolved whether autoimmunity in MS is merely a consequence of loss of peripheral immune tolerance or whether it results from immune dysregulation that is already predetermined in HPC. To approach this key point we compared the global gene and miRNA expression profiles of CD34+ and CD34− cells collected from MS patients and healthy donors (HD).
Patients and controls
MS patients (n=8) with relapsing-remitting (RRMS) or secondary-progressive (SPMS) disease (mean disease duration 10 years, range 6-16 years) were treated with aHSCT at the University of Hamburg, Germany (four female SPMS) and the Haematology Unit, Careggi Hospital of Florence, Italy (two male RRMS and two female SPMS). All patients had previously received immunomodulatory and/or immunosuppressive therapy. Control HPC samples were obtained from five age-matched HD (three female). All patients provided written informed consent and all study protocols were in accordance with the Declaration of Helsinki and approved by Institutional Review Boards at each centre.
Mobilization and collection of CD34+ cells
Before collecting HPC from peripheral blood by leukocytapheresis, the Hamburg MS cohort and the five HD were mobilized with subcutaneous injections of a granulocyte colony-stimulating factor (G-CSF) analogue (2 × 5 µg/kg/day) for 5-8 days. The Florence cohort was mobilized with intravenous cyclophosphamide (Cy, 4 g/m²) and G-CSF (5 µg/kg/day) until cell harvest by leukocytapheresis. Cell collections were frozen in liquid nitrogen according to standard procedures. 3,4 All samples were thawed and processed at one centre by a standardized protocol, and CD34+ HPC were purified by magnetic bead separation using the autoMACS system (Miltenyi). The control samples consisted of the remaining CD34-negative cell fraction after magnetic bead separation, i.e. a population of peripheral blood mononuclear cells. Purity and viability of CD34+ cells were analyzed by FACS and revealed a mean of 84.8% (range 73.5-89.7%) viable CD34+ cells. There was no difference in the purity or viability of cells between MS patients and HD (see supplemental methods).
Microarray analysis
Whole genome gene expression was analyzed with the Human 4x44K Design Array (Agilent Technologies). Differentially expressed genes (DEGs) of interest were confirmed by quantitative RT-PCR. miRNA profiling was performed with the Human miRNA Array V2.0 (Agilent Technologies). The microarray data were generated conforming to the MIAME guidelines and are deposited in the Gene Expression Omnibus database (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE27694).
Statistics and bioinformatics
Standard microarray analysis methods were used for processing intensity data and normalization (see supplemental methods). 5 Individual genes were considered differentially expressed above a fold-change of 1.7 (logFC>0.7). P-values were corrected for multiple testing. 6 miRNA data were analyzed in an analogous way.
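The selection rule described here (|logFC| > 0.7, i.e. roughly a 1.7-fold change, combined with multiple-testing-corrected p-values) can be sketched in Python. The Benjamini-Hochberg step-up procedure is used below purely as an illustrative correction; the study's actual processing pipeline is specified in its supplemental methods.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest rank downwards
    adj = np.minimum(np.minimum.accumulate(ranked[::-1])[::-1], 1.0)
    out = np.empty(m)
    out[order] = adj
    return out

def select_degs(logfc, pvals, fc_cutoff=0.7, alpha=0.05):
    """Flag genes with |logFC| > 0.7 (~1.7-fold) and BH-adjusted p <= alpha."""
    adj = benjamini_hochberg(pvals)
    return (np.abs(np.asarray(logfc)) > fc_cutoff) & (adj <= alpha)
```

For example, a gene with logFC = 3.1 and a small adjusted p-value (like TNNT1 in this study) passes both filters, while a gene below the fold-change cutoff is excluded regardless of its p-value.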
Gene expression analysis
Principal component analysis (PCA) confirmed separation between CD34+ and CD34− samples and showed a clear clustering of CD34+ cells according to the mobilization regimen (Cy/G-CSF versus G-CSF; Figure 1). Accordingly, comparing MS patients mobilized with G-CSF only versus Cy/G-CSF, we found 2801 DEGs in CD34+ cells and 9440 DEGs in CD34− cells (adj. p-value ≤ 0.05 in each case).
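Sample clustering of this kind can be sketched with a plain SVD-based PCA; the toy matrix below merely mimics two mobilization groups offset in expression space and is not the study's data.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project samples onto the top principal components (SVD of centered data)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# toy expression matrix: two groups of 5 samples x 50 genes, offset along
# every gene (a stand-in for the G-CSF vs Cy/G-CSF clusters, not real data)
rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, size=(5, 50))
group_b = rng.normal(3.0, 1.0, size=(5, 50))
scores = pca_scores(np.vstack([group_a, group_b]))
```

With such a between-group offset, the first principal component separates the two groups, which is the pattern Figure 1 reports for the mobilization regimens.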
Comparing CD34− cells between MS and HD we found 167 DEGs (logFC>0.7); however, none reached statistical significance.
miRNA expression analysis
miRNA expression was analyzed in samples obtained from MS patients and HD, both mobilized with G-CSF only. None of the miRNAs showed statistically significant differential expression between these two groups.
Discussion
The immunologic rationale for aHSCT as a treatment for autoimmune diseases like MS has been discussed intensively among basic and clinical immunologists in recent years. 2,7,8 A key issue has been the question whether replacement of the autoreactive immune system by autologous HPC is able to stop the autoimmune process durably, or whether the autoaggressive immunity will rebound after hematologic reconstitution. If the latter occurred, it would indicate that the autoimmune process is pre-programmed in HPCs of genetically predisposed individuals rather than evolving at the stage of mature T cells in the peripheral immune system. In this study we approached this question by comparing the gene expression profile of CD34+ HPCs collected from MS patients before autologous transplantation with that of CD34+ HPCs from HDs. To the best of our knowledge, this is the first study to analyze the gene expression profile of CD34+ HPC in an autoimmune disease.
The results of this study support the view that HPC of MS patients are not pre-conditioned towards autoimmunity. We did not find significant alterations in the gene expression profile of CD34+ HPC in MS. Only one DEG (TNNT1) maintained statistical significance after correction for multiple comparisons (Table 1). TNNT1 encodes a subunit of the troponins involved in contraction of slow skeletal muscle. Of note, the TNNT1 gene is located on chromosome 19q13, which carries predisposing loci for several autoimmune diseases, but with conflicting results in MS. [9][10][11] A recent genome-wide association study did not find SNPs (single nucleotide polymorphisms) associated with MS in the TNNT1 gene. 12 Comparison of miRNA expression profiles of CD34+ HPC between MS and HD did not reveal statistically significant differences, corroborating the absence of substantial alterations in CD34+ cells in MS seen in our DEG results. There were also no statistically significant DEGs in CD34− cells comparing MS and HD. The interpretation of our results must consider that the mobilization regimen with G-CSF provides a strong stimulus to the peripheral immune compartments and the stem cell niche and might thereby overshadow more subtle differences in the gene expression pattern of CD34− and CD34+ cells. Currently, experts recommend that the mobilization regimen for HSCT in MS should include G-CSF and cyclophosphamide, which precludes any comparison with HD. Since the mobilization regimen clearly influenced gene expression and HD are always mobilized with G-CSF only, our patient cohort provided a unique opportunity to directly compare the gene and miRNA expression profiles of highly purified CD34+ cells from MS patients with those from HD. Consistent with our results, it has been shown that both gene and miRNA expression differ depending on the stem-cell source and the mobilization regimen used.
[13][14][15] Studies analyzing gene and miRNA expression in hematopoiesis or hematological malignancies mainly used HPC obtained by bone marrow aspiration or from in-vitro cultured cells, precluding a direct comparison with our results. A caveat in the interpretation of our study is the small number of samples, which leaves the possibility of a false negative result.
In summary, we did not find significant alterations of gene expression or miRNA profiles in CD34+ HPCs of MS patients. Thus, we provide evidence that the immune deviation seen in the peripheral immune system in MS patients is probably not at the CD34+ precursor cell stage. One must consider that the immune changes seen in MS may represent a secondary response to a primary CNS pathology. Nevertheless, we feel that the lack of significant alterations of gene expression or miRNA profiles in CD34+ HPCs of MS patients supports the use of autologous HPC for HSCT in MS.
|
v3-fos-license
|
2017-08-17T15:40:37.661Z
|
2018-07-17T00:00:00.000
|
28702815
|
{
"extfieldsofstudy": [
"Medicine",
"Engineering",
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/s12859-018-2264-5",
"pdf_hash": "85aef41225d70a0dcd33d250323b98a907f14e1c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:862",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"sha1": "5e214e2141b9c47a8d058484967a05490e952c3a",
"year": 2018
}
|
pes2o/s2orc
|
Random forest versus logistic regression: a large-scale benchmark experiment
Background and goal
The Random Forest (RF) algorithm for regression and classification has considerably gained popularity since its introduction in 2001. Meanwhile, it has grown into a standard classification approach competing with logistic regression (LR) in many innovation-friendly scientific fields.
Results
In this context, we present a large-scale benchmarking experiment based on 243 real datasets comparing the prediction performance of the original version of RF with default parameters and LR as binary classification tools. Most importantly, the design of our benchmark experiment is inspired by clinical trial methodology, thus avoiding common pitfalls and major sources of bias.
Conclusion
RF performed better than LR according to the considered accuracy measure in approximately 69% of the datasets. The mean difference between RF and LR was 0.029 (95%-CI = [0.022, 0.038]) for the accuracy, 0.041 (95%-CI = [0.031, 0.053]) for the Area Under the Curve, and −0.027 (95%-CI = [−0.034, −0.021]) for the Brier score, all measures thus suggesting a significantly better performance of RF. As a side result of our benchmarking experiment, we observed that the results were noticeably dependent on the inclusion criteria used to select the example datasets, thus emphasizing the importance of clear statements regarding this dataset selection process. We also stress that neutral studies similar to ours, based on a high number of datasets and carefully designed, will be necessary in the future to evaluate further variants, implementations or parameters of random forests which may yield improved accuracy compared to the original version with default values. Electronic supplementary material: The online version of this article (10.1186/s12859-018-2264-5) contains supplementary material, which is available to authorized users.
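The three performance measures compared in the benchmark (accuracy, AUC, Brier score) can all be computed from predicted class-1 probabilities. A minimal numpy-only sketch with hypothetical predictions (not the benchmark's data or its mlr-based implementation):

```python
import numpy as np

def accuracy(y, prob):
    """Fraction of correct predictions at the 0.5 probability threshold."""
    return float(np.mean((prob >= 0.5) == y))

def auc(y, prob):
    """AUC as the probability that a random positive outranks a random negative."""
    pos, neg = prob[y == 1], prob[y == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))

def brier(y, prob):
    """Brier score: mean squared error of predicted probabilities (lower is better)."""
    return float(np.mean((prob - y) ** 2))

# hypothetical test-set labels and predicted probabilities for two classifiers
y = np.array([0, 0, 1, 1])
prob_rf = np.array([0.2, 0.4, 0.6, 0.9])  # assumed RF probabilities
prob_lr = np.array([0.3, 0.6, 0.4, 0.8])  # assumed LR probabilities
```

Note the sign convention the abstract uses: higher accuracy and AUC favor RF, while a *negative* mean difference in Brier score also favors RF, since the Brier score is an error measure.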
Additional file 3: Results on partial dependence
We further investigate the behavior of logistic regression (LR) and random forest (RF) based on a few interesting example datasets from OpenML by considering partial dependence plots, as we did in subsection 2.3 for simulated datasets. More precisely, the aim of these additional analyses is to assess whether differences in performance (between LR and RF) are related to differences in partial dependence plots. After getting a global picture for all datasets included in our study, we inspect three interesting "extreme cases" more closely. For this purpose we need a measure that quantifies the difference between the partial dependence plots of two methods (here, LR and RF). Since we did not find such a measure in the literature, we suggest a simple approach in the next section.
Measuring differences in partial dependences
For feature X_j (j = 1, …, p), let u_{i,j}, i = 1, …, 10, denote the uniform grid on which the partial dependence is computed, with u_{1,j} = min(X_j) and u_{10,j} = max(X_j). Let PD^{RF}_{i,j} and PD^{LR}_{i,j} denote the corresponding values of the partial dependence at point u_{i,j} for RF and LR, respectively. Our ad-hoc measure is based on the absolute difference |PD^{RF}_{i,j} − PD^{LR}_{i,j}| between these two quantities. To give more importance to ranges of X_j with many observations, these differences are weighted by the proportion W_{i,j} of observations of feature X_j that are closer to point u_{i,j} than to any other grid point (note that Σ_{i=1}^{10} W_{i,j} = 1).
Finally, to obtain a measure of the difference of partial dependence plots over the p features, each feature is weighted by its relative importance R j in order to give more weight to informative features. The relative importance R j is defined as the variable importance of feature X j (or 0 if this variable importance is negative) divided by the sum of the variable importances of all features.
Our simple measure of the differences between partial dependences for RF and LR for a dataset of interest is thus defined as

∆PartialDependence = Σ_{j=1}^{p} R_j Σ_{i=1}^{10} W_{i,j} |PD^{RF}_{i,j} − PD^{LR}_{i,j}|.

Difference in accuracies vs. difference in partial dependences for the 243 datasets
When displaying the scatterplot of ∆acc vs. ∆PartialDependence for the 243 datasets included in our study, no clear trend can be identified. We subsequently select three "extreme" cases from OpenML and inspect them more closely.
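The measure defined above can be sketched directly from its ingredients; the array shapes and argument names below are assumptions made for illustration, not the authors' code.

```python
import numpy as np

def delta_partial_dependence(pd_rf, pd_lr, X, importances):
    """Ad-hoc distance between RF and LR partial dependence curves as described
    above: per-grid-point absolute differences, weighted by the share of
    observations nearest each grid point (W_{i,j}), then summed over features
    weighted by non-negative, normalized variable importance (R_j).

    pd_rf, pd_lr : (10, p) arrays of partial dependence values on the grid
    X            : (n, p) data matrix
    importances  : length-p variable importances (negatives are set to 0)
    """
    n, p = X.shape
    r = np.clip(np.asarray(importances, dtype=float), 0.0, None)
    r = r / r.sum()
    total = 0.0
    for j in range(p):
        # uniform grid u_{1,j} = min(X_j), ..., u_{10,j} = max(X_j)
        grid = np.linspace(X[:, j].min(), X[:, j].max(), 10)
        # W_{i,j}: proportion of observations nearest to each grid point
        nearest = np.abs(X[:, j][:, None] - grid[None, :]).argmin(axis=1)
        w = np.bincount(nearest, minlength=10) / n
        total += r[j] * np.sum(w * np.abs(pd_rf[:, j] - pd_lr[:, j]))
    return total
```

A sanity check on the definition: with a single feature of full importance, the measure reduces to the W-weighted mean absolute gap between the two curves, so a constant gap of 0.5 yields exactly 0.5, and identical curves yield 0.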
As a first extreme case (Case 1), we select a dataset with low |∆acc| and high ∆PartialDependence. The second extreme case (Case 2) shows both low |∆acc| and low ∆PartialDependence. The third extreme case (Case 3) shows a very high ∆acc and a high ∆PartialDependence. These three datasets are investigated in detail below.

Case 1: here p is large and no feature has a relative importance exceeding 1.8%. It seems that the dataset does not contain enough useful information for classification, hence the relatively poor accuracies with both RF and LR. It can be seen from Figure 1 (top-right panel) that the two main features are highly correlated and insufficient to separate the two classes (depicted as blue and red points, respectively). LR does not converge and yields incoherent partial dependence patterns. RF seems to be more robust to this lack of information and to better extract information from the two best features, which is however insufficient to improve accuracy, hence the similar accuracies of RF and LR.

Case 2: here the two models are very close. This is due to the linearity of the problem, as can be seen from Figure 2 (top-right panel). In this easy scenario, both algorithms perform equally well, close to perfect classification. It can be seen from Figure 2 (top-left and bottom-right panels) that the RF and LR partial dependences are nearly indistinguishable for the two main features 'northing' and 'isns'.

Case 3: here p = 2, so we can visualize the whole dataset as a 2D representation in Figure 3 (top-right panel). ∆acc is large, i.e. RF performs substantially better than LR. We can clearly see a dependency in Figure 3 that explains the better performance of RF. This dependency can also be seen in the difference between the partial dependences of RF and LR, especially for feature V2.
This extreme case illustrates the better behaviour of RF in case of non-linear dependency structures (as also previously outlined through our simple simulation in Section 2.3).
|
v3-fos-license
|
2022-12-21T16:30:09.922Z
|
2022-12-01T00:00:00.000
|
254907944
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1422-0067/23/24/16167/pdf?version=1671361480",
"pdf_hash": "ab5abb5e37ed99eace3eb2f76abacacdc5ed3f60",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:864",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"sha1": "5b80c077d97c0f36f7ed29a2f20b3825779d1038",
"year": 2022
}
|
pes2o/s2orc
|
Effects of Phytochelatin-like Gene on the Resistance and Enrichment of Cd2+ in Tobacco
Phytochelatins (PCs) are class III metallothioneins in plants. They are low molecular-weight polypeptides rich in cysteine residues which can bind to metal ions and affect the physiological metabolism in plants. Unlike other types of metallothioneins, PCs are not the product of gene coding but are synthesized by phytochelatin synthase (PCS) based on glutathione (GSH). The chemical formula of phytochelatin is a mixture of (γ-Glu-Cys)n-Gly (n = 2–11) and is influenced by many factors during synthesis. Phytochelatin-like (PCL) is a gene-encoded peptide (Met-(α-Glu-Cys)11-Gly) designed by our laboratory whose amino acid sequence mimics that of a natural phytochelatin. This study investigated how PCL expression in transgenic plants affects resistance to Cd and Cd accumulation. Under Cd2+ stress, transgenic plants were proven to perform significantly better than the wild-type (WT), regarding morphological traits and antioxidant abilities, but accumulated Cd to higher levels, notably in the roots. Fluorescence microscopy showed that PCL localized in the cytoplasm and nucleus.
Introduction
In recent decades, heavy metal pollution has become a severe environmental problem [1]. Cadmium (Cd) is a heavy metal element with substantial biological toxicity and poses a severe threat to human health since metals can enter the food chain and bioaccumulate [2][3][4][5]. Therefore, Cd-contaminated soil has raised widespread concerns. Phytoremediation is an effective and environmental alternative to physical and chemical remediation methods. Generally, plants are tolerant to low concentrations of heavy metals. However, once a certain threshold is exceeded, heavy metals have adverse effects on plant growth and development, such as the inactivation of enzymes, disruption of cell membrane integrity, inhibition of photosynthesis, root rot, and the wilting and death of plants [6][7][8][9].
Plants have evolved several detoxification mechanisms to minimize the damage caused by heavy metals, including an antioxidant system, chelating agents, and transporters [10]. Phytochelatins (PCs) were first discovered in plants and subsequently found in fungi and other organisms [11]. They are cysteine-rich polymers with the general structure (γ-Glu-Cys)n-Gly (n = 2-11) [12]. PCs are synthesized by phytochelatin synthase (PCS), using glutathione (GSH) as a substrate; their synthesis, therefore, is not a direct product of gene expression [13]. PCs can combine with heavy metal ions to form stable high molecular weight complexes that are sequestered in vacuoles, thus reducing their adverse effects on plants [13,14]. Introducing the exogenous BnPCS1 gene (from Boehmeria nivea) into tobacco significantly improved Cd endurance and antioxidant capacity under Cd stress [15]. In addition, the overexpression of MnPCS1 and MnPCS2 in Arabidopsis and tobacco enhanced Zn/Cd resistance [16].
Although some studies have shown that the overexpression of the PCS gene can increase Cd accumulation in plants [17], other studies have demonstrated the opposite results. Overexpression of AtPCS1 in Arabidopsis leads to increased arsenic resistance and cadmium hypersensitivity [18]. Moreover, the heterologous expression of Arabidopsis AtPCS1 in tobacco was associated with high sensitivity to Cd [19]. It has been speculated that the increase in intracellular PCs might cause a decrease in GSH, which may lead to a strong imbalance between the biosynthesis of GSH and PCs, resulting in a disruption of cellular metabolism and reduced resistance to Cd [18,19].
In this study, a phytochelatin-like (PCL) gene, encoding a peptide with the sequence Met-(α-Glu-Cys)11-Gly, was synthesized using the amino acid sequence of the plant PCs, (γ-Glu-Cys)11-Gly, as a reference. The PCL gene, driven by the CaMV 35S promoter, was introduced into tobacco, and the Cd endurance and accumulation ability of the transgenic tobacco plants were investigated.
Transgenic Tobacco with PCL High Expression
The expression of the PCL gene in transgenic tobacco lines was analyzed by quantitative real-time PCR (qRT-PCR), and lines with high expression were selected for future experiments. The results ( Figure 1) showed that PCL expression differed in the 13 transgenic lines. The expression levels of lines L9, L11, and L12 were about five to six times higher than that of the L1 line, which displayed the lowest expression level. In this study, the three selected lines with overexpression were designated as OE9, OE11, and OE12, and their homozygous plants were obtained from the T2 generation (Supplementary Figure S1b).
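Relative expression levels of this kind are typically derived from qRT-PCR Ct values; the 2^-ΔΔCt method sketched below is a standard approach offered as an assumption, since the paper's normalization details are in its methods, and the Ct values used are hypothetical.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ddCt: expression of a target gene normalized to a reference gene and
    expressed relative to a calibrator sample (here, the lowest-expressing line).

    ct_target / ct_ref         : Ct values for the sample of interest
    ct_target_cal / ct_ref_cal : Ct values for the calibrator sample
    """
    ddct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-ddct)
```

For example, a line whose normalized Ct is two cycles lower than the calibrator's comes out at 2² = 4-fold higher expression, which is the scale on which the five- to six-fold differences above would be read.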
PCL Gene Improved the Endurance of Tobacco to Cd
We measured the root length of seedlings after germination under 0 or 200 µM Cd 2+ . Seeds of wild-type (WT) and the three homozygous transgenic lines (OE9, OE11, and OE12) were sown under 0 or 200 µM Cd 2+ for 3 d. The results showed that the effect of Cd on root growth was evident. Under control conditions, there was no apparent difference between WT and transgenic tobacco (Figure 2a). When treated with 200 µM Cd 2+ , the root and leaf growth of transgenic and WT plants was inhibited, but the inhibition of transgenic plants was much lower than that of WT (p < 0.01; Figure 2b,c).
OE9, OE11, and OE12: three homozygous transgenic tobacco lines (T2). Transgenic and WT seedlings were grown in Hoagland nutrient solution containing 0 or 200 µM Cd2+ for 3 d. Asterisks indicate the significant difference between OE lines and WT (***, p < 0.001).
To further investigate the effect of the PCL gene on the growth of tobacco under Cd stress, 4-week-old transgenic and WT plants were treated with 0 or 200 µM Cd2+ for seven days under hydroponic conditions. The results showed that the growth of transgenic and WT plants had no noticeable difference under control conditions (Figure 3a,c), whereas the growth of WT tobacco was slow after Cd2+ treatment (Figure 3b,c). The root length of transgenic plants under Cd treatment was significantly longer than that of WT (Figure 3b). The leaves of WT tobacco were wilted and yellow, and the old leaves withered and dropped under Cd treatment (Figure 3c). The green loss of transgenic tobacco leaves was less severe than that of the WT, and there were no wilting symptoms (Figure 3c). Under 0 µM Cd2+, all chlorophyll-related parameters and fresh weight were maintained at relatively high levels, which were all significantly reduced under 200 µM Cd2+ conditions, including chlorophyll a, chlorophyll b, and carotenoid contents (Figure 3d-g). However, these contents in transgenic lines were higher than in WT plants (Figure 3d-g). These results demonstrated that PCL could alleviate the impacts of Cd2+ stress and promote endurance to Cd2+.
PCL Gene Increases Cd Accumulation in Tobacco
To investigate the effects of PCL expression on Cd concentration, we determined the Cd content in the leaves and roots of transgenic and WT tobacco. Under control conditions, the Cd concentration level in transgenic and WT tobacco was very low and showed no significant difference (Figure 4a,b). Under 200 μM Cd 2+ , the leaf accumulation in WT plants was 24.1-31.6% less (OE9 and OE11, p < 0.01; OE12, p < 0.001) than in transgenic lines ( Figure 4a). The Cd accumulation in the roots of transgenic plants was about 1.8-2.2 times that of WT (p < 0.0001; Figure 4b). The results showed that transgenic plants had a superior capacity to chelate Cd, and accumulated Cd in roots to higher levels than in aboveground parts.
Asterisks indicate a significant difference between the transgenic lines and the WT (**, p < 0.01; ***, p < 0.001; ****, p < 0.0001).
PCL Increases Antioxidant Enzyme Activities in Tobacco
Under heavy metal stress, plant cells strengthen the antioxidant system to deal with excessive reactive oxygen species and reduce cell membrane damage. Therefore, we analyzed the catalase (CAT), peroxidase (POD), superoxide dismutase (SOD), and ascorbate peroxidase (APX) activities of transgenic and WT tobacco under Cd stress.
The results showed no significant difference in antioxidant enzyme activity between transgenic and WT plants grown under control conditions. Under 200 µM Cd 2+ stress, the CAT activity in the leaves and roots of transgenic tobacco was 31.1-39.2% (OE9 and OE12, p < 0.01; OE11, p < 0.001) and 27.1-31.9% (OE11 and OE12, p < 0.05; OE9, p < 0.01) higher than corresponding values in WT plants (Figure 5a,b). The SOD activity in leaves and roots of transgenic tobacco was 20.7-24.4% (OE9, p < 0.05; OE11 and OE12, p < 0.01) and 28.6-35.7% higher than in WT plants (Figure 5c,d). POD activity in the leaves and roots of transgenic tobacco was 53.3-55.2% (p < 0.05) and 27.5-31.7% (OE9, p < 0.01; OE11 and OE12, p < 0.05) higher than in WT plants (Figure 5e,f). The APX activity in the leaves and roots of transgenic lines was about 1.5 (p < 0.05) and 1.2 (OE11 and OE12, p < 0.05; OE9, p < 0.01) times that of WT (Figure 5g,h). These results showed that the PCL gene enhanced the antioxidant capacity of tobacco in response to Cd stress.
PCL Transgenic Tobacco Was More Tolerant to Osmotic and Oxidative Stress
Under heavy metal stress, ROS levels increase, damaging various components of cells, including cell membranes, nucleic acids, and proteins [20]. Hydrogen peroxide (H2O2) is one of these ROS. H2O2 can react with 3,3′-diaminobenzidine (DAB), producing brown precipitates, and with nitrotetrazolium blue chloride (NBT), producing blue precipitates. To study the oxidative stress level in tobacco leaves under Cd stress, we measured the content of H2O2 and performed histochemical staining with DAB and NBT.
According to the results, transgenic and WT tobacco did not differ significantly in H2O2 content under control conditions. Under Cd treatment, the H2O2 content of the three transgenic lines in the leaves and roots was 28.4-32.2% (p < 0.0001) and 22.1-26.4% (p < 0.01) lower than that of the WT (Figure 6a,b). DAB and NBT histochemistry produced only weak staining in both WT and transgenic plants under control conditions. The staining intensity appreciably increased upon exposure to 200 μM Cd, with a stronger response in WT plants, reflecting higher levels of H2O2 (Figure 6c,d). These results showed that PCL expression reduced the content of H2O2 in tobacco leaves under Cd stress.
Oxidative damage under heavy metal stress includes membrane lipid peroxidation. Malondialdehyde (MDA) is the final decomposition product of membrane lipid peroxidation [21]. The results showed no significant difference in MDA content between transgenic tobacco and WT under control conditions. Under 200 µM Cd 2+ , the MDA content in the roots of WT was 33.5-48.9% (p < 0.001) higher than in transgenic lines (Figure 7a). Likewise, the MDA content in the leaves of transgenic plants was about half that of WT plants, and only OE13 reached a level comparable with WT (p < 0.05; Figure 7b). These results showed that Cd treatment aggravated membrane lipid peroxidation in tobacco and that the damage was lower in transgenic than in WT plants.

In addition to being indispensable for protein synthesis, proline (Pro) also acts as an osmoprotectant and contributes to heavy metal resistance. Pro helps maintain a correct water balance, prevents membrane distortion, and acts as a hydroxyl radical scavenger [22]. The results showed that under Cd 2+ treatment, Pro accumulation in the leaves and roots of transgenic tobacco was 2.0-2.3 (p < 0.001) and 1.7-1.9 (p < 0.01) times that of WT (Figure 7c,d).
Subcellular Localization of PCL
The PCL was fused with the enhanced green fluorescent protein (EGFP) gene and controlled by the CaMV 35S promoter (Supplementary Figure S2a). The transgenic tobacco overexpression of 35S::PCL-EGFP was cultured (Supplementary Figure S2b), and the protein localization was observed under laser confocal microscopy. Fluorescence showed that PCL localized both in the nucleus and the cytoplasm (Figure 8).
Discussion
Many studies have confirmed that the PC-dependent pathway is a key mechanism for plants to resist heavy metals [11,18,19,23]. PCs have the general formula (γ-Glu-Cys) n -Gly (n = 2-11) and are synthesized from GSH (γ-GluCysGly) by PCS [14]. The whole synthesis process includes multistep enzymatic reactions and is thus influenced by many factors. Some studies indicated that the substrate (GSH) for PCs might limit the synthesis rate. The addition of a specific GSH synthase inhibitor (buthionine sulfoximine, BSO) diminished GSH and PC accumulation in the Dianthus carthusianorum L. [24]. Expressing AtPCS1 in Arabidopsis did not comparably affect the Cd accumulation [25]; however, expressing GSH1 and AsPCS1 accumulated Cd to a high level [26]. It was obvious that the presence of GSH or PCs alone was not enough to confer Cd accumulation and resistance. To construct a more efficient expression system, we synthesized PCL referring to the amino acid sequence of PCs to obtain a stable polypeptide encoded directly from a gene. Then, we expressed the PCL gene from the CaMV 35S promoter in transgenic tobacco to clarify its biological function.
Some studies hypothesized that expressing the PCS in plants could lead to heavy metal resistance and accumulation, but these studies presented contradictory results. Although AtPCS1 overexpression in Escherichia coli [17] and Saccharomyces cerevisiae [27] increased Cd resistance, it resulted in hypersensitivity to Cd in Arabidopsis [25] and tobacco [19]. The overexpression of AtPCS1 in tobacco (Nicotiana tabacum) led to increased cadmium sensitivity, while tobacco transformed with CePCS from Caenorhabditis elegans was more tolerant to Cd [19]. The functional differences between enzymes and changes in cellular thiol concentrations are possibly related to the distinct sensitivity to Cd [25]. However, no plausible explanation for these Cd resistance disparities has yet been found.
In addition, those types of PCS transformants did not show substantially increased cadmium accumulation. Similar results have been reported in other studies. Tobacco expressing NtPCS1 from Nelumbo nucifera exhibited increased resistance to arsenic (As) and Cd, but changes in the accumulation of As and Cd were not significant [28]. Transgenic tobacco expressing CdPCS1 showed severalfold increased Cd accumulation after Cd exposure, but growth was significantly inhibited [29]. The Cd content of BnPCS transgenic lines was significantly increased in shoots but not in roots, so the total Cd accumulation remained low [15]. In this study, heterologous expression of PCL in tobacco increased Cd accumulation to a high level. The Cd content of transgenic tobacco was 1.3-1.5 times that of WT in roots and 1.8-2.2 times in leaves, and root accumulation was much higher than that of the aboveground part (Figure 4). Rapid Cd chelation by PCL at the root might alleviate the impacts of Cd toxicity on plants. Transgenic tobacco plants exhibited increased root length, fresh weight, chlorophyll content, Pro content, and antioxidant enzyme activities but decreased H 2 O 2 and MDA levels. These results indicated that the PCL gene in tobacco could improve Cd accumulation and resistance.
It appears that no single PCS gene is suitable for transforming all plant species for phytoremediation, and further research is needed on this problem. However, we have demonstrated that the PCL gene in tobacco promotes Cd resistance and accumulation. In the future, we can investigate whether PCL works in other species.
Previous studies showed that AtPCS1 [30] and VsPCS1 [31] localize in the cytoplasm. Beyond cytoplasmic localization, PCS proteins have shown other patterns: SpPCS1 from Schizosaccharomyces pombe localized to the mitochondria, and AtPCS2 [32] and BnPCS1 [15] localized in both the cytoplasm and the nucleus. These studies used protoplasts as material rather than intact tissue. Moreover, PCS proteins are enzymes that catalyze the synthesis of PCs; therefore, those results only indicate possible sites of PC synthesis. In this study, subcellular localization analysis of PCL overexpressed in transgenic tobacco showed that PCL localized in the cytoplasm and nucleus (Figure 8). This subcellular localization indicates that PCL might be involved in other physiological processes and may merit further investigation.
Experimental Materials and Reagents
Wild species of bare-stem tobacco (Nicotiana nudicaulis Watson, 2n = 2x = 24) were provided by our laboratory. PCR primers were synthesized by Shenzhen Huada Biotechnology Co., Ltd. (Shenzhen, China). Taq DNA polymerase and DNA markers were from TaKaRa. RNA extraction reagent was purchased from Dalian Bao Biological Company. The real-time PCR kit was purchased from Bio-Rad. The reverse transcription kit, gel recovery kit, and plasmid extraction kit were purchased from Tiangen and Omega. Escherichia coli str. Top10, Agrobacterium tumefaciens str. LBA4404, kanamycin (Kan), streptomycin (Str), rifampicin (Rif), zeatin (ZT), and other antibiotics and hormones were purchased from Beijing Dingguo Changsheng Biotechnology Co., Ltd. (Beijing, China).
Generation of Tobacco Plants Expressing PCL
The leaves of tobacco plants grown in a sterile agar medium were used for leaf disc transformation. A sequence containing the PCL gene and CaMV 35S promoter (35S-PCL; Supplementary Figure S1a) was transformed into Agrobacterium tumefaciens strain LBA4404 by the freeze-thaw method. Tobacco leaf discs were transformed with A. tumefaciens [33,34].
The total RNA from tobacco leaves, isolated with RNAiso Plus (TaKaRa, Dalian, China) as described in the manufacturer's instructions, was treated with DNase I (TaKaRa, Dalian, China). DNase-treated RNA was reverse-transcribed using a PrimeScript TM RT Reagent Kit (TaKaRa, Dalian, China). qRT-PCR was performed using a CFX96 Real-Time PCR System (Bio-Rad Laboratories, Hercules, CA, USA) with Eva Green S (Bio-Rad Laboratories, Hercules, CA, USA). The relative expression of the detected gene was calculated using the 2 −∆∆Ct method.
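The 2 −∆∆Ct method works by normalizing the target gene's Ct to a reference gene within each sample and then comparing the treated sample with the control. A minimal sketch with made-up Ct values (the function name and numbers are illustrative, not from the study):

```python
# Illustrative sketch of the 2^-ddCt relative-expression calculation.
# All Ct values used below are hypothetical examples.

def relative_expression(ct_target_treat, ct_ref_treat,
                        ct_target_ctrl, ct_ref_ctrl):
    """Fold change of the target gene by the 2^-ddCt method."""
    d_ct_treat = ct_target_treat - ct_ref_treat  # normalize to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_treat - d_ct_ctrl               # compare treated vs. control
    return 2.0 ** (-dd_ct)

# e.g. a target gene whose dCt drops from 7 to 4 cycles is 8-fold up-regulated
fold = relative_expression(22.0, 18.0, 25.0, 18.0)
```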
Growth Conditions and Treatments
WT and transgenic homozygous tobacco seeds were sterilized with 20% sodium hypochlorite for 10 min and then germinated in an incubator at 28 °C. After germination, the seeds were arranged in Petri dishes (each containing 30 seeds, with three replicates) with a layer of filter paper and 20 mL of Hoagland nutrient solution [35] containing 0 or 200 µM CdCl 2 . The Petri dishes were placed vertically for 3 days to allow the roots of the tobacco seedlings to grow vertically. Root growth and differences between tobacco lines were observed and statistically recorded.
Adult plants were treated differently from seedlings. The WT and the homozygous lines of the transgenic tobacco were sown in vermiculite. After germination, the seedlings were transferred to Hoagland solution at 26 °C with a light cycle of 16 h daylight (250 µmol m −2 s −1 ) and 8 h night. When the plants were 4 weeks old, transgenic tobacco plants and WT plants with the same growth status were transferred to Hoagland nutrient solution containing either 0 or 200 µM CdCl 2 . After 7 days, leaf and root samples were collected for analysis of physiological indexes, with at least three replicate samples per group.
Determination of Cadmium in Plants
After being treated with 0 or 200 µM Cd 2+ for seven days, the samples were briefly washed with distilled water and then dried separately at 60 °C. The content of Cd in the dried specimens was determined by atomic absorption spectrometry [36].
Determination of Chlorophyll Content in Plants
Fresh samples (0.1 g) were cut, placed in 5 mL of 95% ethanol, and soaked in the dark for 24 h. The absorbance of the pigment extract was measured at 665 nm, 649 nm, and 470 nm [37].
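As a hedged illustration, the three absorbances can be converted to pigment contents with Lichtenthaler-type equations commonly applied to ~95% ethanol extracts; the coefficients below are an assumption on my part, since the exact equations of reference [37] are not reproduced in the text:

```python
# Sketch: pigment concentrations (mg/L) from absorbances of a ~95%
# ethanol extract, converted to content per gram fresh weight (mg/g).
# Coefficients are the commonly cited Lichtenthaler-type values and
# are an assumption; check them against reference [37] before use.

def pigment_content(a665, a649, a470, v_ml=5.0, fw_g=0.1):
    chl_a = 13.95 * a665 - 6.88 * a649                      # mg/L
    chl_b = 24.96 * a649 - 7.32 * a665                      # mg/L
    car = (1000 * a470 - 2.05 * chl_a - 114.8 * chl_b) / 245  # mg/L
    to_content = lambda c: c * (v_ml / 1000.0) / fw_g       # mg per g FW
    return {"chl_a": to_content(chl_a),
            "chl_b": to_content(chl_b),
            "carotenoids": to_content(car)}
```

With the extraction volume (5 mL) and sample mass (0.1 g) given in the text, a concentration in mg/L simply scales by 0.05 to give mg per gram fresh weight.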
Measurement of Antioxidant Enzyme Activity
Fresh leaf and root samples (0.2 g) were harvested and thoroughly ground in 1.6 mL of precooled 0.05 mol L −1 phosphate buffer (pH 7.8) with some quartz sand. The homogenate was centrifuged at 12,000 rpm for 15 min at 4 °C, and the supernatant was retained.
The total SOD was estimated from the increase in NADH oxidation and measured at 560 nm using its molar extinction coefficient of 6.22 × 10 3 M −1 cm −1 [38].
The activity of CAT was determined by measuring the decomposition of H 2 O 2 at 240 nm (molar extinction coefficient 39.4 M −1 cm −1 ) [40].
Determination of MDA and Pro Content in Plants
Pro and MDA were measured by acid-ninhydrin and colorimetric methods, respectively [41]. The MDA content was determined in 0.1 g of sample using 4 mL of thiobarbituric acid (TBA) reagent [0.67% (w/v) TBA in 10% (w/v) trichloroacetic acid (TCA)]. Absorbance was measured at 450, 535, and 600 nm, and the MDA concentration was calculated using an extinction coefficient of 155 mM −1 cm −1 . Proline was extracted with 3% sulfosalicylic acid from 0.1 g of plant tissue and determined with ninhydrin reagent. The absorbance was measured at 520 nm (extinction coefficient 8.7 × 10 6 M −1 cm −1 ).
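A minimal sketch of the MDA calculation implied by the stated extinction coefficient, assuming a 1-cm light path; the sugar correction sometimes made with the A450 reading in variants of this protocol is omitted here:

```python
# Sketch of the Beer-Lambert step for the TBA assay: the A535-A600
# difference divided by the 155 mM^-1 cm^-1 coefficient gives the MDA
# concentration, which is then scaled by extract volume and sample
# mass. The 1-cm path length is an assumption.

def mda_umol_per_g(a535, a600, v_ml=4.0, fw_g=0.1, path_cm=1.0):
    conc_mm = (a535 - a600) / (155.0 * path_cm)  # mM, i.e. umol per mL
    return conc_mm * v_ml / fw_g                 # umol per g fresh weight
```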
Determination of H 2 O 2 Content
H 2 O 2 content was determined by homogenizing samples (0.5 g) in 2.5 mL of 0.1% (w/v) TCA and centrifuging at 13,000× g for 30 min. The reaction mixture consisted of 0.5 mL of supernatant, 0.5 mL of 0.1 M potassium phosphate buffer (pH 7.6), and 2 mL of 1 M KI. After 1 h of incubation in the dark, the absorbance was measured at 390 nm, and the H 2 O 2 content was calculated using a standard curve for H 2 O 2 [41].
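A hedged sketch of the standard-curve step: fit a calibration line through the A390 readings of known H 2 O 2 standards, then invert it for samples. All numbers below are illustrative, not measured values from the study:

```python
import numpy as np

# Hypothetical calibration data for the KI-based H2O2 assay.
std_conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0])   # umol/L standards
std_a390 = np.array([0.02, 0.12, 0.22, 0.42, 0.82])  # absorbance at 390 nm

# Least-squares line A390 = slope * conc + intercept.
slope, intercept = np.polyfit(std_conc, std_a390, 1)

def h2o2_conc(a390):
    """Invert the calibration line to get H2O2 concentration (umol/L)."""
    return (a390 - intercept) / slope
```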
DAB and NBT Staining
Leaf samples were collected after seven days of Cd treatment, washed with distilled water, and immersed in either 1% DAB (50 mM Tris-HCl, pH 3.8) or 0.5 mg mL −1 NBT (25 mM HEPES, pH 7.8) for 6 h in the dark. The leaves were then transferred to anhydrous ethanol and heated in a boiling water bath for about 10 min until the chlorophyll was completely removed [42].
Subcellular Localization of PCL Peptide
The PCL gene was fused with the EGFP gene and controlled by the CaMV 35S promoter in a binary vector (Supplementary Figure S2a). Transgenic tobacco plants with the integrated 35S::PCL-EGFP gene were obtained (Supplementary Figure S2b). Localization of PCL-EGFP was observed by laser confocal microscopy.
Statistical Analysis
All the tests were repeated three times independently. The measured data were statistically analyzed using Microsoft Excel (version 2019) and graphically displayed using GraphPad Prism (version 8.0.2, GraphPad Software, San Diego, CA, USA).
Conclusions
We synthesized PCL according to the amino acid sequence of PCS and examined its function in the Cd resistance of tobacco. We found that the PCL localized in both the nucleus and cytoplasm and that the heterologous expression of the artificial PCL significantly improved plant Cd resistance and accumulation.
Reference phantom selection in pediatric computed tomography using data from a large, multicenter registry
Background Radiation dose metrics vary by the calibration reference phantom used to report doses. By convention, 16-cm diameter cylindrical polymethyl-methacrylate phantoms are used for head imaging and 32-cm diameter phantoms are used for body imaging in adults. Actual usage patterns in children remain under-documented. Objective This study uses the University of California San Francisco International CT Dose Registry to describe phantom selection in children by patient age, body region and scanner manufacturer, and the consequent impact on radiation doses. Materials and methods For 106,837 pediatric computed tomography (CT) exams collected between Jan. 1, 2015, and Nov. 2, 2020, in children up to 17 years of age from 118 hospitals and imaging facilities, we describe reference phantom use patterns by body region, age and manufacturer, and median and 75th-percentile dose–length product (DLP) and volume CT dose index (CTDIvol) doses when using 16-cm vs. 32-cm phantoms. Results There was relatively consistent phantom selection by body region. Overall, 98.0% of brain and skull examinations referenced 16-cm phantoms, and 95.7% of chest, 94.4% of abdomen and 100% of cervical-spine examinations referenced 32-cm phantoms. Only GE deviated from this practice, reporting chest and abdomen scans using 16-cm phantoms with some frequency in children up to 10 years of age. DLP and CTDIvol values from 16-cm phantom-referenced scans were 2–3 times higher than 32-cm phantom-referenced scans. Conclusion Reference phantom selection is highly consistent, with a small but significant number of abdomen and chest scans (~5%) using 16-cm phantoms in younger children, which produces DLP values approximately twice as high as exams referenced to 32-cm phantoms. Supplementary Information The online version contains supplementary material available at 10.1007/s00247-021-05227-0.
Introduction
The rapid rise over the last few decades in computed tomography (CT) imaging and consequent population exposure to ionizing radiation, a known carcinogen, have raised concerns about the levels and variability of radiation doses across patients, institutions and countries, as well as the need for dose optimization [1][2][3][4][5][6][7][8]. Diverse organizations and campaigns, such as Choosing Wisely and Image Gently, promote improving the safety and effective imaging care of children worldwide to optimize and reduce patient radiation dose exposures [9,10].
Dose optimization tools like diagnostic reference levels use metrics such as the volume CT dose index (CTDI vol ), reflecting the average dose (per slice) over the total volume scanned for the selected CT conditions of operation, and the dose-length product (DLP), reflecting the total dose imparted to the patient. While these metrics reflect scanner output and not patient absorbed dose, they correlate closely with absorbed doses and help physicians and imaging practices compare their doses to a uniform standard [11].
CTDI vol values are reported directly from the scanner and must be referenced to a calibration reference phantom for reporting. By convention, 16-cm diameter cylindrical polymethyl-methacrylate phantoms are used for head imaging and 32-cm diameter phantoms are used for body imaging in adults. Accuracy (validity) of the estimated dose to reflect the true patient absorbed dose depends on the closeness of fit between the volumes of the imaged body section and the reference phantom, as well as the kilovoltage peak (kVp) setting and bow-tie filter. The 32-cm body phantom corresponds to a patient with a 47-in. (~120 cm) waistline. Therefore, a dose estimate for very small patients based on a 32-cm phantom at 120 kVp will underestimate the true patient absorbed dose by approximately a factor of 2, and vice versa; a CTDI vol of 8 mGy from a 16-cm phantom vs. a CTDI vol of 4 mGy from a 32-cm phantom would indicate the same CT output [12,13].
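The approximate factor-of-2 relationship described above can be sketched as follows. This is a rough illustration, not part of the paper's analysis: the fixed 2.0 conversion factor is only an approximation near 120 kVp and varies with tube voltage and filtration:

```python
# Rough sketch: re-expressing a CTDIvol reading against the other
# reference phantom. The factor of ~2 between 16-cm and 32-cm
# phantom references is an approximation valid near 120 kVp.
PHANTOM_FACTOR = 2.0  # 16-cm-referenced dose / 32-cm-referenced dose

def to_32cm_reference(ctdi_vol_16cm):
    """Approximate 32-cm-referenced CTDIvol for the same scanner output."""
    return ctdi_vol_16cm / PHANTOM_FACTOR

def to_16cm_reference(ctdi_vol_32cm):
    """Approximate 16-cm-referenced CTDIvol for the same scanner output."""
    return ctdi_vol_32cm * PHANTOM_FACTOR
```

This reproduces the example in the text: 8 mGy referenced to a 16-cm phantom and 4 mGy referenced to a 32-cm phantom indicate roughly the same scanner output.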
An underappreciated challenge in pediatric dosimetry concerns the choice of phantom for dose reporting, as pediatric phantom selection may be inconsistent [14]. The source of this variation may be that some manufacturers follow adult conventions, while other manufacturers choose the smaller 16-cm phantoms for reporting abdomen and chest doses in children, as this more closely reflects actual patient size [15,16]. The reported dose will vary considerably between the two phantom sizes, even when the technical parameters are identical [17]. This inconsistency in reporting can result in patient distress and confusion when they undergo scans on machines with different reporting conventions [18].
Several investigators have created ad hoc corrections, for example suggesting that CTDI vol and DLP values estimated from 32-cm diameter phantoms should be multiplied by a factor of 2 to obtain "correct" values in pediatric body scans [19]. This problem is not only important when understanding an individual patient's dose, but also when trying to optimize protocols because the applicability of a benchmark will vary depending on what phantom was used. Some pediatric reference values have been explicitly reported using only one specific size reference phantom, but unless dose comparisons use the same size phantom, it is easy to unknowingly introduce errors [20]. Similarly, the Alliance for Radiation Safety in Pediatric Imaging created conversion factors for normalizing CTDI vol and DLP to patient size to estimate actual absorbed doses and specified that these values be consistently calculated with the 32-cm phantom [21].
Despite recognition of the importance of phantom selection in pediatric dosimetry, we lack representative data on what phantoms are used in actual practice, how these selections vary by manufacturer, and how the reported doses vary by phantom size in actual practice. Using data from a large multicenter CT dose registry, this study describes variations in practice and differences in estimated doses that result from the differential use of 16-cm (head) and 32-cm (body) phantoms in young patients.
Registry
The University of California San Francisco (UCSF) International CT Dose Registry includes 6.65 million CT exams assembled from across 160 hospital and imaging facilities [6,7]. The registry was created with funding from the University of California Office of the President, the Centers for Disease Control and Prevention, the National Institutes of Health and the Patient Centered Outcomes Research Institute, and includes data from health care institutions that used Radimetrics Radiation Dose Management Solution (Bayer HealthCare, Whippany, NJ) and expressed interest in collaborating with UCSF on radiation-related research. The UCSF Institutional Review Board approved the registry study and waived informed consent. Collaborating institutions either approved the study locally or relied on UCSF approval.
Study population
We included 106,837 pediatric diagnostic CT examinations obtained in 118 U.S. facilities for children under 18 years of age performed between Jan. 1, 2015, and Nov. 2, 2020, that included imaging of the head, cervical spine (c-spine), chest, or abdomen and pelvis (abdomen). We divided head scans into brain and skull imaging (including sinus, facial bones and temporal bones); neck and c-spine exams are included in a single category. These body regions reflect 87% of all exams during the study period. We excluded CTs in categories with insufficient numbers for analysis or that covered multiple body parts (n=15,849, or 13% of all scans), as well as those performed as part of radiation oncology guidance, surgical or interventional procedures, or combined positron emission tomography (PET)-CT and single-photon emission CT (SPECT) imaging.
Variables
We report DLP, which reflects the total scanner emitted radiation, defined as the product of CTDI vol and the scan length, reflecting the total radiation output received by the patient for a CT scan and measured in mGy·cm. Each dose metric is referenced to a 16-cm or 32-cm phantom. Results are shown for complete CT examinations including all irradiating events (excluding scouts, localizers and boluses). A CT examination including a scan with and a scan without contrast is considered a single examination. Exam-level DLP is calculated as the sum of all constituent series-level DLP values. For simplicity, we excluded multiphase examinations that were referenced to more than one phantom (n=7,204). We categorized patients into the five mutually exclusive age groups used by the Leapfrog Group [22]: <1 year, 1-4 years, 5-9 years, 10-14 years, 15-17 years.
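The DLP bookkeeping described above can be sketched as follows: series-level DLP is CTDI vol times scan length, and the exam-level DLP is the sum over all irradiating series, excluding scouts, localizers and boluses. The field names and series-type labels below are illustrative, not the registry's actual schema:

```python
# Sketch of exam-level DLP aggregation. DLP is in mGy*cm
# (CTDIvol in mGy, scan length in cm). Dict keys are hypothetical.
NON_IRRADIATING = ("scout", "localizer", "bolus")

def series_dlp(ctdi_vol_mgy, scan_length_cm):
    return ctdi_vol_mgy * scan_length_cm

def exam_dlp(series):
    """Sum series-level DLPs, skipping scouts, localizers and boluses."""
    return sum(series_dlp(s["ctdi_vol"], s["length_cm"])
               for s in series
               if s["kind"] not in NON_IRRADIATING)
```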
Statistical analysis
For each body region, we report the number and percent of examinations that used 16-cm and 32-cm phantoms, stratified by body region, patient age and manufacturer. We calculated the number and proportion of exams using the "expected" phantom, based on predominant usage patterns across all manufacturers (16 cm for brain and skull; 32 cm for chest, abdomen and c-spine), by body region, age group and scanner manufacturer.
We calculated the median and 75th percentiles for each dose metric, stratified by body region, patient age and manufacturer, and calculated the relative median dose (i.e. ratio) between phantom sizes (16 cm vs. 32 cm) to measure the magnitude of difference due to reference phantom selection when there were at least 5 CT examinations performed by age and body region using each phantom. The Radimetrics dose tracking platform was employed to extract all patient, scanner and exam variables (see [6,7] for details), and SAS (version 9.3; SAS Institute, Cary, NC) and R (version 3.6.3; R Foundation for Statistical Computing, Vienna, Austria) were used for all analyses.
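A minimal sketch of these summary statistics, written in Python/NumPy rather than the SAS and R actually used in the study; the minimum-of-5 rule follows the text:

```python
import numpy as np

# Sketch: per-stratum median and 75th percentile for each phantom
# group, plus the 16-cm vs. 32-cm relative median, reported only
# when each group contains at least 5 exams (per the paper's rule).

def dose_summary(doses_16cm, doses_32cm, min_n=5):
    if len(doses_16cm) < min_n or len(doses_32cm) < min_n:
        return None  # too few exams to compare
    m16, m32 = np.median(doses_16cm), np.median(doses_32cm)
    return {
        "median_16cm": m16,
        "p75_16cm": np.percentile(doses_16cm, 75),
        "median_32cm": m32,
        "p75_32cm": np.percentile(doses_32cm, 75),
        "relative_median": m16 / m32,
    }
```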
Results
Overall, 54.6% of the exams were comprised of males, 59.2% of exams used 16-cm phantoms and the most common body region imaged was the head, including the brain (n=48,680, 45.6%) and skull (n=12,929, 12.1%). A total of 44.1% of exams were performed at pediatric-specific hospitals ( Table 1). Across all body regions, the number of scans generally increased with age (Table 2).
Phantom selection varied by body region, and for most patients the phantom choice was the same as in adults ( Table 2). The 16-cm phantom was used in more than 98.0% of examinations for brain and skull CT examinations regardless of patient age. The 32-cm phantom was used for most chest examinations (95.7%) and abdomen examinations (94.4%), and use of the 32-cm phantom increased with increasing patient age. The 32-cm phantom was used for 100% of c-spine CT examinations.
We observed consistent use of the 16-cm phantom for brain and skull imaging across manufacturers with few exceptions, while greater differences in phantom selection by manufacturer were observed for chest and abdomen CT examinations (Table 3). Philips and Siemens used the 32-cm phantom in more than 99% of children for both chest and abdomen CT. For chest CT, Canon used the 32-cm phantom uniformly above age 5, and GE used the 32-cm phantom in more than 95% of children above age 10; in the younger age groups, GE more often used the 16-cm phantom for reporting abdomen and chest CT (Fig. 1). For example, for GE, use of the 32-cm phantom was as low as 45.3% in children <1 year old.

Most of the CT examinations are reported using a consistent phantom choice. Nonetheless, 0.4% of head scans (n=243) used 32-cm phantoms and 5.2% of chest and abdomen scans (n=1,877) used 16-cm phantoms. The use of 32-cm phantoms for head/skull scans is difficult to explain, though we suspect the use of "body" protocols could play a role. The use of 16-cm phantoms in chest and abdomen scans, on the other hand, could indicate intentional efforts to select a best size match, or manufacturer-specific rules related to scanning parameters such as field of view.
The use of different phantoms has a large impact on reported dose metrics. The median DLP by body region, patient age and manufacturer reported when using each phantom is shown in Table 4. Note that we omitted all c-spine and all Philips combinations from the table because none had the minimum number of five scans of each phantom size to allow comparison. The relative median DLP is approximately twofold higher when using the 16-cm phantom (range: 0.7-4.9). While DLP generally increases with advancing age in the pediatric population (not necessarily in adults), these data show that the relative DLP (between 16-cm and 32-cm phantoms) generally declines with advancing age, though inconsistently. For example, the relative dose for chest exams in GE scanners actually increases from 2.8 in patients <1 year old to 4.9 in patients 1-4 years old, before decreasing thereafter for reasons we cannot explain. Results are similar when comparing the 75th percentiles of DLP and relative DLP, with an average 1.9-fold higher dose (range: 0.8-6.0) when reported using the 16-cm phantom (Online Supplementary Material 1). Table 5 shows the same comparisons for CTDI vol , which partially removes scan length as a confounding factor. In almost all cases, the comparable ratios of relative dose exceed the values for DLP (Table 4). Results for the 75th percentiles of CTDI vol are similar to the medians, with relative doses two- to threefold higher (range: 1.0-5.5) when reported using the 16-cm phantom (Online Supplementary Material 2).
Discussion
Using a large multicenter CT dose registry, we report phantom selection by body region, patient age and scanner manufacturer, and its impact on reported dose. Our findings demonstrate that most scans are reported consistently: 99% of head scans are reported using the 16-cm phantom and 95% of chest and abdomen scans are reported using the 32-cm phantom. Nonetheless, the overall consistency masks notable differences in phantom selection by manufacturer, most notably that GE frequently uses the 16-cm phantom for abdomen CT in younger children. We found, as expected, that the reported DLP values are approximately twice as high when the 16-cm vs. the 32-cm phantom is selected. With growing interest in CT dose documentation, reflected in annual hospital surveys of pediatric doses performed by the Leapfrog Group [22] and regulatory requirements of radiation dose reports in the medical record [23], it is important to use consistent standards across all patients so that physicians, radiology technologists, patients and researchers can clearly and accurately understand the results and know that they were calculated consistently.
Fig. 1 Percent of brain and skull exams referenced to the 16-cm phantom and percent of chest and abdomen exams referenced to the 32-cm phantom, by age group and manufacturer

While our research highlights both similarities and differences in phantom selection by manufacturer, even consistency in reporting might not reflect best practice. For example, while all manufacturers used the 32-cm phantom for c-spine exams, because of the large difference in size between the 32-cm phantom and child (or adult) neck sizes, reported DLP values for neck scans will markedly underestimate absorbed doses unless some adjustment factor is employed. Similarly, GE frequently uses the 16-cm phantom when reporting chest and abdomen doses in small children, a sensible decision when the 16-cm phantom more closely approximates their size than the 32-cm phantom does. Yet this will result in reporting of a significantly higher dose for a given child than had that child been scanned on a device that used the 32-cm phantom. The impact on a patient scanned on devices that report differently can be substantial [19]. Watson and Coakley [14] reported the inconsistent rules that the manufacturers used for phantom selection over a decade ago.
There is a robust discussion in the radiology and medical physics literature regarding how best to estimate patient absorbed dose using scanner output combined with information on patient size, in order to understand how well scan settings have been tailored to the patient [14]. As an objective way to adjust the CTDIvol to a closer representation of the actual dose delivered to the patient, and hence partially correct for the mismatch between phantom and patient dimensions, size-specific dose estimates (SSDE) were developed [24, 25]. Nonetheless, how accurately reported dose reflects patient absorbed dose when the phantom is poorly matched to patient size remains an important question, because patient doses will be underestimated when the 32-cm phantom is used in smaller patients. This paper does not address that question, but instead focuses on how the basic interpretation of CT scanner dose output is highly dependent on which phantom is used for reporting. The scanner output must be understood in terms of the actual phantom selected; on average, all else equal, the same scanner output will be reported as an approximately twofold higher DLP if it is scaled to a 16-cm rather than a 32-cm phantom. Our purpose was to demonstrate the magnitude of typical differences in dose that may be obscured by existing pediatric reference value studies and individual clinical applications.

Table 4 Median dose-length product (DLP) by body region, patient age, manufacturer and phantom, and relative DLP comparing 16-cm with 32-cm phantoms. Values are not shown when there were fewer than 5 computed tomography examinations performed by age and body region using each phantom (numbers of scans can be derived from the n and percent values of Table 3).

This study has limitations. The sample includes data filtered through a single dose-management software vendor.
However, all metrics come directly from either the radiation dose structured report or from the dose report images (via optical character recognition). Consequently, this convenience sample should not affect the phantom-derived dose differences we found. These analyses are limited to 41 scanner models from 4 manufacturers. The current sample size is insufficient to stratify phantom usage patterns by type of facility (e.g., pediatric vs. adult hospital or academic vs. community setting); however, this would be an important and worthwhile area of future study. Ideally, one would stratify and determine optimal pediatric dosing by patient size rather than age, which is a relatively poor predictor of patient diameter [26]. However, actual patient size is usually missing from Digital Imaging and Communications in Medicine data, and we were not able to generate tables by patient size. Similarly, we do not report SSDE values as they are frequently missing in the Radimetrics-derived data, unlike DLP and CTDIvol. In addition, we did not attempt to control for kVp setting, which is known to impact conversion of CTDIvol from 16-cm to 32-cm phantoms. Lastly, this paper did not explore manufacturer rules and algorithms for phantom selection, though this is an important question for future study.
Conclusion
These analyses empirically elucidate reference phantom selection patterns by body region, patient age and scanner manufacturer, and also demonstrate the substantial differences in scanner-reported DLP that arise from reference phantom selection in clinical studies. Without specifying or stratifying by phantom size, any reporting of aggregate DLP values will unwittingly show a weighted summary that depends on the (unspecified) mixture of scanner manufacturers, patient ages and sizes, and phantoms used. While the use of SSDE avoids some of these problems, standardization of both phantom selection and phantom reporting would improve clinical, research and monitoring applications.

Table 5 Median volume computed tomography (CT) dose index (CTDIvol) by body region, patient age, manufacturer and phantom, and relative CTDIvol comparing 16-cm with 32-cm phantoms. Values are not shown when there were fewer than 5 CT examinations performed by age and body region using each phantom (numbers of scans can be derived from the n and percent values of Table 3).
Hypermethylation of the SEPT9 Gene Suggests Significantly Poor Prognosis in Cancer Patients: A Systematic Review and Meta-Analysis
Background: Aberrant hypermethylation of the Septin 9 gene (SEPT9) is an early event in several human cancers, and increasing numbers of studies have reported good performance of methylated SEPT9 (mSEPT9) in cancer diagnosis. Recent studies have further focused on its value in cancer prognosis, but the results have not been clearly elucidated. Methods: A comprehensive search to identify relevant studies on the association between mSEPT9 and cancer prognosis was conducted through the EMBASE, PubMed, and Web of Science databases (up to January 2019). The main outcomes were overall survival (OS) and disease-free survival (DFS). The hazard ratio (HR) and 95% confidence interval (CI) for OS and DFS were extracted from each included study and pooled using a random-effects model. Results: Ten eligible studies comprising 1,266 cancer patients were included. Results demonstrated that mSEPT9 was associated with poor OS (HR = 2.07, 95% CI = 1.40–3.06). Specifically, mSEPT9 detected in preoperative plasma predicted worse OS in cancer patients (HR = 3.25, 95% CI = 1.93–5.48). In addition, we identified a significant association of mSEPT9 with decreased DFS (HR = 3.24, 95% CI = 1.81–5.79). Conclusion: Our meta-analysis supports that mSEPT9 is associated with reduced OS and DFS in cancer patients. Moreover, detection of mSEPT9 in plasma appears to be a convenient and promising way to predict long-term survival of cancer patients.
INTRODUCTION
Septins are a conserved group of GTP-binding proteins that play a crucial role in cytokinesis, cytoskeleton, and cell cycle control (Hall and Russell, 2004;Russell and Hall, 2011). As a star member of the Septin gene family, Septin 9 (SEPT9) is located at chromosome 17q25.3 and demonstrates both oncogenic and tumor-suppressive impacts on human cancers (Connolly et al., 2011;Verdier-Pinard et al., 2017). Previous studies have uncovered that methylated SEPT9 (mSEPT9) is associated with tumorigenesis based on transcriptionally silencing due to aberrant hypermethylation of the CpG island within the SEPT9 promoter (Connolly et al., 2011;Wasserkort et al., 2013;Wang et al., 2018). Detection of mSEPT9 has been reported in several cancers, including colorectal cancer (CRC), head and neck squamous cell carcinoma (HNSCC), and gastric cancer (GC) (Lee et al., 2013;Schrock et al., 2017;Song et al., 2018).
Nowadays, the diagnostic significance of mSEPT9 has been elucidated in several cancers; notably, the mSEPT9 assay (Epi proColon) became the first blood-based test approved by the U.S. FDA for CRC screening. Some researchers have further examined the prognostic performance of mSEPT9 in cancer. In 2013, Dietrich et al. detected malignant pleural effusions from 58 cases with various cancers and found that mSEPT9 indicated poor survival (Dietrich et al., 2013). Subsequently, the association of mSEPT9 with cancer prognosis was investigated in CRC (Lee et al., 2013; Tham et al., 2014; Freitas et al., 2018; Song et al., 2018), GC (Lee et al., 2013), HNSCC, and other cancers (Kuo et al., 2014; Angulo et al., 2016; Branchi et al., 2016; Jung et al., 2016).
To date, however, the prognostic value of mSEPT9 in cancer patients has not yet been methodically elucidated. Herein, we performed a systematic review and meta-analysis to summarize the published data and evaluate the prognostic impact of mSEPT9 on human cancers.
MATERIALS AND METHODS
Our meta-analysis was conducted based on the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) (Moher et al., 2009). The PRISMA 2009 checklist is shown in Supplementary Table S1.
Search Strategy
A comprehensive electronic search was performed via the EMBASE, PubMed, and ISI Web of Science databases through January 2019, without any language or other restriction. The search terms were combinations of "SEPT9," "mSEPT9," "septin 9," "prognosis," and "survival."
Criteria of Inclusion and Exclusion
Two independent authors conducted the literature search and study selection. Discrepancies were resolved by discussion. Studies were considered eligible if they met the following criteria: (1) cohort studies for evaluating the prognostic role of mSEPT9 in cancer patients; and (2) studies reporting hazard ratios (HRs) and 95% confidence intervals (CIs) or providing information to estimate HRs. The exclusion criteria were as follows: (1) reviews, meta-analyses, opinion, abstracts, and cellular or animal experiments; and (2) studies with overlapping data. If studies had overlapping data, we kept the one with the larger sample size.
Data Extraction
Two independent authors extracted the following items from each included study: first author, publication year, country, patient number, sampling time, follow-up, cancer type and stage, detection method, and prognostic outcomes. Outcome measures included overall survival (OS), disease-free survival (DFS), disease-specific survival (DSS), and progression-free survival (PFS).
Quality Evaluation
Two authors independently conducted quality evaluation, and discrepancies were resolved by discussion. We used the Newcastle-Ottawa scale (NOS) to assess the quality of each included study, with quality scores ranging from 0 to 9 (Supplementary Table S2) (Stang, 2010). Quality evaluation was not an exclusion criterion for eligible studies.
Statistical Analysis
Multivariate-adjusted HRs and 95% CIs were preferentially extracted from each included study, if available. If a study did not report the HR and 95% CI, these measures were extrapolated by the methods of Parmar and Tierney (Parmar et al., 1998; Tierney et al., 2007). We used the random-effects model (DerSimonian and Laird) to pool these HRs and 95% CIs, and examined heterogeneity with Cochran's Q test and the I² statistic (Higgins et al., 2003; Harris et al., 2008). P < 0.10 or I² > 50% indicates considerable heterogeneity (Higgins and Thompson, 2002). We also performed subgroup analyses to further evaluate the prognostic effects of mSEPT9 by sample type, sampling time, and cancer type. To assess the stability of pooled results, we applied one-way sensitivity analysis, excluding one study at a time. In addition, publication bias was examined by Begg's and Egger's tests (Begg and Mazumdar, 1994; Egger et al., 1997). All P values were two-sided, and P ≤ 0.05 was considered significant unless otherwise specified. All statistical analyses were carried out with Stata 12.1 software (College Station, TX, USA).
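The random-effects pooling described above can be sketched in a few lines (a simplified illustration, not the authors' Stata code; the study values below are invented). Each standard error is recovered from the width of the reported 95% CI on the log scale:

```python
import math

def pool_random_effects(hrs_with_cis, z=1.96):
    """DerSimonian-Laird random-effects pooling of hazard ratios.

    hrs_with_cis: list of (HR, ci_low, ci_high) per study.
    Returns (pooled_HR, ci_low, ci_high, I_squared_percent).
    """
    y = [math.log(hr) for hr, lo, hi in hrs_with_cis]            # log-HRs
    se = [(math.log(hi) - math.log(lo)) / (2 * z)                # SE from CI width
          for hr, lo, hi in hrs_with_cis]
    w = [1 / s**2 for s in se]                                   # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar)**2 for wi, yi in zip(w, y))         # Cochran's Q
    df = len(y) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                # between-study variance
    w_star = [1 / (s**2 + tau2) for s in se]                     # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se_mu = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0          # I² statistic
    return (math.exp(mu), math.exp(mu - z * se_mu), math.exp(mu + z * se_mu), i2)

# Hypothetical studies (HR, 95% CI low, 95% CI high):
studies = [(2.0, 1.2, 3.3), (1.8, 1.0, 3.2), (2.6, 1.4, 4.8)]
hr, lo, hi, i2 = pool_random_effects(studies)
```

When the between-study variance tau² is zero, the weights reduce to the fixed-effect case.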
Study Characteristics
Our search strategy initially retrieved 275 records from the PubMed, EMBASE, and Web of Science databases. By title and abstract review, we removed 114 duplicates and 146 records; this large proportion of excluded records consisted of reviews, opinions, conference abstracts, diagnostic studies, in vitro studies, and nonhuman studies. Of the remaining 15 full-text publications, five studies were further excluded because they focused on lymph node metastasis (Nagata et al., 2017), had overlapping data, or provided insufficient information to estimate HRs and 95% CIs (Perez-Carbonell et al., 2014; Villanueva et al., 2015; Chang et al., 2017). Finally, a total of 10 eligible studies were included in this meta-analysis (Dietrich et al., 2013; Lee et al., 2013; Kuo et al., 2014; Tham et al., 2014; Angulo et al., 2016; Branchi et al., 2016; Jung et al., 2016; Schrock et al., 2017; Freitas et al., 2018; Song et al., 2018) (Figure 1).
Association Between mSEPT9 and DFS in Cancer Patients
Two included studies comprising three datasets of 371 cancer patients reported the association of mSEPT9 with DFS (Lee et al., 2013; Tham et al., 2014). The heterogeneity test showed no heterogeneity among these studies (P for heterogeneity = 0.866, I² = 0%). The pooled HR of the aforementioned studies was 3.24 (95% CI = 1.81–5.79), indicating that mSEPT9 predicted worse DFS in cancer patients (Figure 2B). Subgroup analysis could not be performed because of the limited number of relevant studies.
Sensitivity Analyses and Publication Bias
Sensitivity analyses suggested that our pooled results were quite stable for both OS (Supplementary Figure S1A) and DFS (Supplementary Figure S1B). We observed borderline significant publication bias in the meta-analysis for OS (P = 0.048 by Egger's test; P = 0.063 by Begg's test). Therefore, we conducted a trim-and-fill analysis and found that, despite publication bias, the adjusted pooled HR consistently demonstrated a significant association between mSEPT9 and OS (HR = 1.61, 95% CI = 1.09–2.38; Supplementary Figure S2). There was no obvious publication bias in the meta-analysis for DFS (P = 0.443 by Egger's test; P = 0.296 by Begg's test).
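The one-way sensitivity analysis amounts to re-pooling with each study omitted in turn. A minimal sketch (simplified here to fixed-effect inverse-variance pooling rather than the random-effects model actually used, with invented inputs):

```python
import math

def pooled_hr_fixed(studies, z=1.96):
    """Fixed-effect inverse-variance pooled HR.

    studies: list of (HR, ci_low, ci_high); weights are 1/SE^2 with
    SE recovered from the 95% CI width on the log scale.
    """
    y = [math.log(hr) for hr, lo, hi in studies]
    w = [(2 * z / (math.log(hi) - math.log(lo)))**2 for hr, lo, hi in studies]
    return math.exp(sum(wi * yi for wi, yi in zip(w, y)) / sum(w))

def leave_one_out(studies):
    """Re-pool with each study omitted in turn; a stable set of results
    means no single study drives the pooled estimate."""
    return [pooled_hr_fixed(studies[:i] + studies[i + 1:])
            for i in range(len(studies))]

studies = [(2.0, 1.2, 3.3), (1.8, 1.0, 3.2), (2.6, 1.4, 4.8), (2.2, 1.3, 3.7)]
print(leave_one_out(studies))  # four pooled HRs, one per omitted study
```

If all leave-one-out estimates stay close to the full pooled HR, the result is considered robust.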
DISCUSSION
Several studies have investigated the association between mSEPT9 and prognosis in human cancers, but results are uncertain due to limited sample sizes and varied cancer types. Herein, we conducted a systematic review and meta-analysis and found that mSEPT9 significantly predicted worse cancer prognosis.
By systematic literature search, rigorous screening, and analysis, we identified that mSEPT9-positive cancer patients suffer a roughly two-fold risk of decreased OS. Further subgroup analysis supported this result, and sensitivity analysis and trim-and-fill analysis guaranteed the robustness of our results. Specifically, mSEPT9 detected in preoperative plasma significantly indicated worse OS, implying a convenient and promising way to predict long-term survival of cancer patients. In addition, our meta-analysis also supported that mSEPT9 was significantly associated with poor DFS. Sensitivity analysis suggested that this result was stable, and Cochran's Q test and the I² statistic did not indicate considerable heterogeneity. These results all suggest that mSEPT9 could be a good prognostic biomarker for cancer patients. Traditionally, serum tumor markers (e.g., CEA, CA19-9) are used for screening and prognosis prediction, but their performance is still unsatisfactory. Previous studies have confirmed the excellent property of mSEPT9 in early diagnosis of several cancers and have clearly elucidated the potential mechanisms (Church et al., 2014; Koch et al., 2018; Pan et al., 2018). We now provide evidence that mSEPT9 could also be a promising biomarker for cancer prognosis, which can be combined with traditional tumor biomarkers to improve prognosis prediction in the future.

There were several limitations in our work. First, our results strongly supported that mSEPT9 could be a prognostic indicator of OS and DFS for human cancer, but there were not enough studies for subgroup analyses to fully clarify its impact across different cancer types, sampling times, and pathological stages. Second, there were only two included studies about DSS and PFS. The limited number of studies prevented us from conducting a meta-analysis to evaluate the impact of mSEPT9 on DSS and PFS.
Last, some included studies did not provide multivariate-adjusted HRs, so we used unadjusted HRs instead. These unadjusted HRs were possibly influenced by potential confounders in the original studies. When we pooled them into a meta-analysis, the influence might be magnified and lead to a risk of bias on the pooled results.
More studies with elaborate design should be conducted to verify our results and further explore more detailed impacts of mSEPT9 on cancer prognosis.
CONCLUSION
Our meta-analysis suggests that mSEPT9 predicts worse OS and DFS in cancer patients. Specifically, patients with mSEPT9 detected in preoperative plasma have significantly decreased OS. To the best of our knowledge, this is the first meta-analysis providing robust evidence that mSEPT9 could be a promising biomarker for cancer prognosis.
DATA AVAILABILITY
All datasets analyzed for this study are included in the manuscript and the Supplementary Files.
AUTHOR CONTRIBUTIONS
HX and YL designed the study and revised the manuscript. NS designed the study, summarized the data, and wrote the manuscript. TW, DL, and YZ performed literature search, collected data, and performed some analysis. All authors read and approved the final manuscript.
Anthropogenic drivers for exceptionally large meander formation during the Late Holocene
Large-amplitude meanders may form in low-energy rivers despite the generally limited mobility of these systems. Exceptionally large meanders, which even extend beyond the valley sides, developed in the Overijsselse Vecht river (the Netherlands) between ca. 1400 CE (Common Era) and the early 1900s, when channelization occurred. Previous studies have attributed the enhanced lateral dynamics of this river to changes in river regime due to increased discharges, reflecting climate and/or land-use alterations in the catchment. This paper focuses on local aspects that may explain why exceptionally large meanders developed at specific sites. Through an integrated analysis based on archaeological, historical, and geomorphological data along with optically stimulated luminescence dating, we investigated the relative impact of three direct and indirect anthropogenic causes of the local morphological change and enhanced lateral migration rates: (1) lack of strategies to manage fluvial erosion; (2) a strong increase in the number of farmsteads and related intensified local land use from the High Middle Ages onwards; and (3) (human-induced) drift-sand activity directly adjacent to the river bends, causing a change in bank stability. Combined, these factors locally led to meander amplitudes well beyond the valley sides. Lessons learned at this site are relevant for management and restoration of meandering rivers in similar settings elsewhere, particularly in meeting the need to estimate the spatial demands of (restored) low-energy fluvial systems and manage bank erosion. © 2020 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Introduction
Low-energy meandering rivers often show relatively little lateral migration (Kuenen, 1944; Eekhout, 2014; Makaske et al., 2016; Candel, 2020) because of their low specific stream power (< 10 W m⁻²) (Nanson and Croke, 1992). Nevertheless, meanders with high amplitude may occur in low-energy rivers, with relatively high lateral migration rates compared to other reaches of the same river (e.g. Hooke, 2007). Generally, the lateral migration rates of rivers strongly depend on local bank strength (Schumm, 1960; Hickin and Nanson, 1984; Ferguson, 1987; Nicoll and Hickin, 2010). For example, Hudson and Kesel (2000) compared sections of the Mississippi River and showed that the lowest lateral migration rates occurred in sections where erosion-resistant deposits were present (e.g. clay plugs).
Additionally, anthropogenic effects on river morphodynamics specifically deserve attention, as their influences are much more varied and intense than previously thought (Gibling, 2018). Many large rivers worldwide have been subject to significant anthropogenic pressure during the Late Holocene through land use changes, partly explaining increased fluvial activity at the scale of the entire river (Kondolf et al., 2002; Macklin et al., 2010; Notebaert and Verstraeten, 2010; Brown et al., 2018; Candel et al., 2018; Gibling, 2018; Notebaert et al., 2018). More locally, humans have stabilized many river channels with bank protection, groynes, dikes and other engineering works (Hudson et al., 2008; Dépret et al., 2017). The potential direct and indirect role of humans in locally destabilising river banks has received little attention in the literature, and is the main topic of this paper.
Formation of exceptionally large meanders extending beyond valley sides has previously been linked to major climate changes in temperate regions (Alford and Holmes, 1985; Vandenberghe, 1995). At the transition from the Pleniglacial to the Late Glacial, the climate became warmer and wetter and vegetation re-established. Consequently, sediment availability decreased and river discharge increased, resulting in large incising meandering rivers (Vandenberghe and Bohncke, 1985; Vandenberghe and Van Huissteden, 1988; Vandenberghe, 1995). Large meanders from this period are still visible in many river valleys, such as the Dommel, Roer and Niers valleys in the Netherlands (Kasse et al., 2005, 2017; Candel et al., 2020), the Tisza valley in Hungary and Serbia (Vandenberghe et al., 2018) and the Murrumbidgee valley in Australia (Schumm, 1968).
Exceptionally large meander bends also occur locally in the Dutch Overijsselse Vecht river valley, reaching well beyond the valley sides with a maximum amplitude of almost 1.5 km (Fig. 1). This is more than twice as large as would be expected from the empirical estimations for the Overijsselse Vecht given by Hobo (2006), whose calculations are based on discharge regime and sediment characteristics. Recent geochronological research revealed that these remarkably large meanders formed between ca. 1400 and 1900 CE (Quik and Wallinga, 2018a, b). During this period meander amplitudes increased at a relatively steady rate of 1–3 m y⁻¹. After ca. 1900 CE, meander migration was halted as the river course was straightened and channelized. Factors that might explain the exceptional meander growth between 1400 and 1900 CE include regime shifts, bedload changes, high-discharge events, varying erodibility of bank sediments, and (indirect) human interference with the river system. Candel et al. (2018) demonstrated that the Overijsselse Vecht experienced a discharge regime change around the 15th century, resulting in a shift from a laterally stable to a meandering channel pattern and marking the onset of meander formation. The change in palaeodischarge, characterized by increased peak discharges, may have resulted from climatic fluctuations during the Little Ice Age and large-scale land use change in the catchment (i.e. peat reclamation). This catchment-scale change does not, however, explain the exceptional meander expansion observed locally, and the historical changes in bankfull discharge and bedload reconstructed by Candel et al. (2018) could not account for the ongoing lateral migration of the large meanders during the 19th and early 20th century.
The Overijsselse Vecht river valley predominantly consists of aeolian coversand deposited during the Late Pleniglacial, overlying older fluvial deposits (Huisink, 2000). Several meanders of the river seem to have been confined in their expansion by the sides of the river's Late-Pleistocene valley (Fig. 1a; Wolfert and Maas, 2007), whereas locally some meanders have expanded beyond the valley sides. It has been suggested that (human-induced) drift-sand complexes that developed on river banks may have locally enhanced bank erodibility (Wolfert et al., 1996; Wolfert and Maas, 2007). Alternatively, large meander formation may be linked to increased settlement density and anthropogenic pressure since the High Middle Ages. Additionally, river management may have played a role in local prevention and/or acceleration of bank erosion.
Due to the excellent preservation of some cut-off meanders, the availability of detailed geochronological information on the development of two meander bends (Quik and Wallinga, 2018a, b) and a previous palaeohydrological reconstruction (Candel et al., 2018), we consider the Overijsselse Vecht an ideal case to study local factors influencing lateral meander migration. To gain insight into these factors and their degree of influence, we address the following research questions: (1) What is the character of historical river management during the period of meander expansion (i.e. the Modern period), and does this indicate direct human interference with the river system that resulted in the exceptional meander formation observed locally? (2) How did habitation density in the direct vicinity of the floodplain change through time (i.e. prior to and during meander expansion), and could related land use changes from the Modern period onwards have caused enhanced local meander growth? (3) What was the timing and spatial distribution of (human-induced) drift-sand activity in the study region during the period of meander expansion, and is there evidence for interaction of aeolian and fluvial dynamics resulting in the local formation of exceptional meanders?
To answer these research questions we performed an integrated analysis of archaeological, historical and geomorphological information and optically stimulated luminescence (OSL) dating.
Study area
The Overijsselse Vecht (Fig. 1) is a low-energy sand-bed river originating west of Münster in North Rhine-Westphalia (Germany) and entering the Netherlands south of the city of Coevorden. It is a rain-fed river with a catchment of 3785 km². The river has its outlet in the Zwarte Water near the city of Zwolle, which debouches into the IJsselmeer (Lake IJssel). Before 1932, the IJsselmeer was still an inland sea (Zuiderzee), with a small tidal range of about 0.2 m (Makaske et al., 2003). Characterizations of the water levels in 1850 demonstrate that the Overijsselse Vecht did not experience tidal influences before closure of the Zuiderzee (Middelkoop et al., 2003; their figures 4 and 5 show no tidal signal at Kampen and Katerveer). In the Dutch part the valley gradient is fairly uniform at 1.4 × 10⁻⁴ (Wolfert and Maas, 2007). According to measurements over the period 1995–2015 from a discharge station in the investigated section of the river, the average discharge and mean annual flood discharge are 22.8 and 160 m³ s⁻¹, respectively. The area is characterized by an average annual rainfall of 800–875 mm and an average maximum temperature of 4.9–5.4 °C in January and 24.3–24.7 °C in July (KNMI, 2019a, 2019b). Through large-scale engineering works between 1896 and the 1930s, the original river length of 90 km in the Netherlands has been reduced to 60 km by cutting off 69 meanders (Wolfert and Maas, 2007). Revetments fix the position of the river banks and the water level is controlled by weirs.
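As a plausibility check (our illustration, not part of the original study), these figures can be combined into the specific stream power ω = ρgQS/w, using the valley gradient as a proxy for channel slope and the historical channel width of about 40 m reported for 1848:

```python
RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def specific_stream_power(discharge, slope, width):
    """omega = rho * g * Q * S / w, in W/m^2."""
    return RHO_WATER * G * discharge * slope / width

# Values from the study-area description: mean annual flood discharge
# 160 m^3/s, valley gradient 1.4e-4, channel width ~40 m (1848 survey).
omega = specific_stream_power(160, 1.4e-4, 40)
print(round(omega, 1))  # 5.5
```

The result, roughly 5.5 W m⁻², falls below the 10 W m⁻² threshold of Nanson and Croke (1992), consistent with classifying the river as low-energy.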
The Dutch part of the Overijsselse Vecht was subdivided into three river reaches by Wolfert and Maas (2007) based on fluvial style. The central reach, stretching from the city of Hardenberg (east) to the city of Dalfsen (west), is characterized by several conspicuous meanders (Fig. 1a). For this study we focus on two presently cut-off meander bends with exceptional amplitudes reaching outside the valley sides, named 'Junner Koeland' and 'Prathoek' (Fig. 1b; cf. Quik and Wallinga, 2018a, b; Candel et al., 2018), and their wider environment (an area of circa 4 × 4 km). These meanders were not cut off naturally, but through channelization in the early 1900s.
The river banks consist of aeolian coversand on top of fluvio-periglacial deposits. Near the studied bends, the river channel had a width of about 40 m in 1848 CE, and the elevation difference between the river banks and the deepest part of the channel was approximately 2.3 m (Staring and Stieltjes, 1848). In the vicinity of the floodplain between Hardenberg and Ommen, drift-sand complexes developed that consist of eroded and re-deposited coversand. Nearly all these drift-sand areas are now stabilized by forests that have been planted since the mid-nineteenth century. Currently several parts of the floodplain and former drift-sand areas are protected nature reserves.
Methods
To identify potential factors for the formation of large meanders in the Overijsselse Vecht we used a combination of data from different disciplines, drawing on methods from archaeology, historical geography, geomorphology and geochronology. The methodology is divided into three parts, consisting of analyses of (1) historical river management, to understand the type and level of direct interference with the river system, (2) habitation history and land use change, to detect changing pressures on the landscape, and (3) the occurrence and activity period of (human-induced) drift-sands near the two investigated meander bends, which may have altered the stability of the river banks. All parts are described below; further details are available in the Supplementary Material. For methods regarding the reconstruction of meander formation (lateral migration rates), palaeodischarge and meander cross sections we refer to earlier publications (Quik and Wallinga, 2018a, b; Candel et al., 2018).
Historical river management
Following a general literature review (Wieringa and Schelhaas, 1983; Coster, 1999; Neefjes et al., 2011), several governmental levels were selected for closer study: the higher authorities, being the Dutch government and the province of Overijssel, followed by dike districts (after 1879 continuing as water boards) and marks (local commons, Dutch 'marken': Late Medieval and Early Modern farmer collectives). To reconstruct the character and intensity of historical river management, two successive methods were applied. First, we studied general trends in river management for the Dutch part of the Overijsselse Vecht catchment by analysing activities at different governmental levels. Information was obtained from the archives of the various governmental institutions. Second, we focused on river management activities in one of the marks in the study area (i.e. the mark of Arriën), to gain detailed insight into local management. Arriën was chosen because the archives of the mark of Junne are lost, whereas those of the mark of Stegeren are obscured by low readability.
Habitation history and land use development
To identify possible land use related drivers for increased meander expansion we reviewed various archaeological and historical geographical sources. Late prehistoric, Roman and Medieval archaeological sites from the study area were inventoried using the national Dutch archaeological database (Archis III) and published literature. Historical sources provide information on habitation development from the Middle Ages onwards (see Supplementary Materials for further details).
Within the scope of the present study, highly detailed archival research on the age of individual farmsteads and the numbers of farm animals per unit surface area was not feasible. Instead, we used (1) the number of farmsteads as a proxy for land use intensity, and (2) a generic retrospective method to date the farmsteads (see Supplementary Materials), based on the historical layering of Medieval property rights (Spek et al., 2010; Neefjes et al., 2011). We are aware that the relation between habitation density and increasing land use intensity is not necessarily linear, but assume a positive correlation, as corroborated by e.g. Bieleman (2008).
Additional information on collective land use was obtained from the archives kept by the marks. Toponymical and etymological publications were used to explain and date field name types from the scroll-bar complexes in the Dutch part of the river valley, to interpret former local land use (e.g. Schönfeld, 1950, 1955; Van Berkel and Samplonius, 1989; Malinckrodt, 1974).
Drift-sand activity
Recent integrative analyses by Pierik et al. (2018) indicated that human pressure on the landscape was the predominant facilitating condition for Late-Holocene drift-sand activity in the Netherlands. Analyses of drift-sands near the Overijsselse Vecht could therefore be considered as part of investigations on habitation and land use development (section 4.2). However, as the geomorphological and geochronological methods that we applied to analyse drift-sand activity diverge from the methods applied in section 4.2, we present them separately here (similarly in the Results and Discussion). To investigate whether a chronological overlap between drift-sand activity and meander expansion exists and to what extent drift-sand deposition may have destabilized river banks we: (1) analysed subsequent historical maps to determine the size of the area covered by active drift-sands through time; (2) conducted a lithological survey of two distinct drift-sand dunes near the two meander bends (location in Fig. 1a, detailed view in Fig. 3a and 3b) and (3) performed optically stimulated luminescence (OSL) dates on selected drift-sand samples to determine the onset of local drift-sand deposition. These steps are described below.
Use of historical maps to estimate drift-sand extent
We analysed the area covered by active drift-sand based on five historical maps dating from 1720 to 1884 CE that were used in the geochronology developed by Quik and Wallinga (2018a). More details on these maps are available in the Supplementary Materials. We used the area currently classified as arenosols (Dutch: 'duinvaaggronden' and 'vlakvaaggronden') in the Dutch soil classification system (Alterra, 2014) as validation to compare with the historically indicated drift-sand surface, as these young soils predominantly formed in stabilized drift-sand areas (Jongmans et al., 2013). Further details on the historical maps and procedure are provided in the Supplementary Materials.
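The relative drift-sand cover reported through time is a simple area fraction per map year. The sketch below illustrates this computation; the polygon areas, map years and study-area extent are placeholders (in practice these would come from digitizing the georeferenced historical maps), not data from this study.

```python
# Sketch of the relative drift-sand area computation.
# All numbers below are hypothetical placeholders, NOT the study's data.

drift_sand_area_ha = {   # map year (CE) -> digitized drift-sand polygon area (ha)
    1720: 95.0,
    1784: 130.0,
    1832: 160.0,
    1851: 170.0,
    1884: 60.0,
}
STUDY_AREA_HA = 1000.0   # total extent of the analysed map window (assumed)

# Relative cover (%) per map year
relative_area = {year: 100.0 * area / STUDY_AREA_HA
                 for year, area in drift_sand_area_ha.items()}

for year, pct in sorted(relative_area.items()):
    print(f"{year} CE: {pct:.1f}% drift-sand cover")
```

With these placeholder values the cover peaks in 1851 and then drops, mimicking the stabilization by afforestation described in the results; comparison against the present-day arenosol extent would be a separate spatial overlay step.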
Lithological survey drift-sand areas
A lithogenetic survey of two former drift-sand areas adjacent to the two meander bends was performed based on corings covering the width of a distinct dune. The drift-sand deposits are located on top of coversand deposits. Both sediments are often clearly distinguishable by a palaeo-podzol that formed in the top of the coversand prior to burial by drift-sand. In some places drift-sand is found directly on top of coversand parent material, indicating that the podzol eroded prior to drift-sand deposition. The two dunes (Fig. 1a, 2a and 2b) were selected based on (1) their proximity to the investigated meander bends and position such that drift-sand would have blown towards the river channel under the dominant SW–NE wind direction (Koster, 2010), and (2) presence of a palaeo-podzol in the underlying coversand deposits (at least at the lee side of the dune), to maximize chances that the base of the drift-sand deposit represents the age of first drift-sand activity. Further information on lithogenetic interpretation is available in the Supplementary Materials. Corings were performed using an extended Edelman auger to a depth of 1.2–3.6 m, which was at most points sufficiently deep to reach the in-situ podzol (if present). Five corings were done between the abandoned channel of Junner Koeland and the adjacent drift-sand dune to exclude presence of fluvial deposits underneath the drift-sand covered area (also visible in Fig. 2a). The location and elevation of the corings were determined with a Topcon Global Navigation Satellite System (GNSS) receiver, with a horizontal precision of ~10 mm and vertical precision of ~15 mm.
Optically stimulated luminescence (OSL) dating of drift-sands
To determine the onset of drift-sand deposition near the two river bends, six OSL samples were collected in drift-sand areas 1 and 2 directly above the coversand podzols, aiming to determine the age of first drift-sand deposition and podzol burial (sample locations: see Fig. 2a and b and Supplementary Materials for more details).
After augering to the desired depth, OSL samples were collected in a PVC pipe extension attached to the auger head, which was carefully pressed down the auger hole. At one location (sample NCL-2415164) the pipe was pressed into a vertical exposure. Upon retrieval of the PVC pipe both ends were immediately covered with plastic caps and light-impermeable black tape. For OSL measurements and dose rate determination we followed the procedures described by Quik and Wallinga (2018a). Statistical analysis of the dating results was done using the bootstrapped Minimum Age Model (Galbraith et al., 1999; Cunningham and Wallinga, 2012) with an assumed overdispersion of 0.15 ± 0.03.
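The bootstrapped Minimum Age Model of Galbraith et al. (1999) and Cunningham and Wallinga (2012) is a likelihood-based model; the toy sketch below only illustrates the underlying bootstrap idea (resample the equivalent doses, summarise a minimum-dose statistic), not the full model, and it omits the overdispersion term the real model includes. All dose and dose-rate values are hypothetical.

```python
import random
import statistics

def bootstrap_minimum_dose(doses, n_boot=2000, seed=1):
    """Toy bootstrap of a minimum-dose estimate.

    NOT the full Minimum Age Model likelihood: we simply resample the
    equivalent doses with replacement, take the minimum of each resample,
    and summarise the bootstrap distribution by its mean and spread.
    """
    rng = random.Random(seed)
    mins = []
    for _ in range(n_boot):
        resample = [rng.choice(doses) for _ in doses]
        mins.append(min(resample))
    return statistics.mean(mins), statistics.stdev(mins)

# Hypothetical equivalent doses (Gy) for one well-bleached-plus-outliers sample:
doses_gy = [0.52, 0.55, 0.58, 0.60, 0.66, 0.71, 0.80, 0.95]
dose_rate = 0.40  # Gy/ka, hypothetical environmental dose rate

palaeodose, palaeodose_sd = bootstrap_minimum_dose(doses_gy)
age_ka = palaeodose / dose_rate  # burial age = palaeodose / dose rate
```

The minimum-dose logic reflects why the model suits drift-sand: only the best-bleached grains record the burial event, so the low end of the dose distribution carries the age information.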
Historical river management
Archival study demonstrated that the Overijsselse Vecht was scarcely mentioned in archives of various institutions on multiple governmental levels, indicating that river management was limited and poorly coordinated. This is most evident from reported conversations between the Dutch government and the province of Overijssel, including a request from the government in 1853 CE to the province to investigate which party was concerned with river management. Inevitably, the province of Overijssel concluded that no party was concerned with river management and that land owners locally applied river management practices (e.g. placement of groynes) without governmental coordination (HCO, 2018a). Additionally, willingness of water boards and dike districts to develop regional river management was limited. Archival material of the dike districts (HCO, 2018b, 2018c, 2018d) indicates that these institutions were solely concerned with water safety and related reparation works. The river is also barely mentioned in the constitutions of the local water boards (HCO, 2018f, g). Additionally, the province of Overijssel refused to contribute financially, upon which the government discontinued its financial support for river improvements. Complaints about troublesome water levels downstream of our study area at the municipality of Dalfsen were disregarded by the province, eventually causing the municipality to directly address the king in search of help in 1863 CE (HCO, 2018h).
The mark books of Arriën (1549–1826 and 1765–1835 CE) also indicate scant attention to river management. Some erosion problems and prevention discussions were recorded, which were solely directed at protection of the mark's greenlands (see 4.2.2). Interestingly, the expansion of the Junner Koeland meander, which migrated northward, eroding land in the Arriën territory, was not discussed in the Arriën mark books (HCO, 2018i, j).
Spatiotemporal patterns in habitation
The study area includes four rural villages and their territories named Arriën, Junne, Stegeren and Beerze (Fig. 3). These were first mentioned in Late Medieval written sources. However, the frequent occurrence of manorial property rights indicates that all four villages have existed since at least the Early Middle Ages (Neefjes et al., 2011). They are situated at the lower slopes of coversand ridges near the Overijsselse Vecht (Fig. 3b) and may well be the successors of earlier settlements that were situated on the higher parts of the coversand ridges. These areas were inhabited since late prehistory. This is corroborated by the distribution pattern of the approximately 20 archaeological sites in the area, including late prehistoric, Roman period and Early Medieval finds (Fig. 3a). A Pleistocene terrace remnant directly east of the Junner Koeland meander was inhabited from late prehistoric to Roman times as well, but has been deserted since then (compare Fig. 3a, letter T, with Fig. 3b). The cultural landscape patterns of the four village territories are rather similar. However, some local differences occur. For example, the oldest nucleus of Stegeren is situated close to the river floodplain on the valley margin (Fig. 3a, letter S), whereas Arriën, Junne and Beerze are slightly further away from the river. Fig. 4 shows the estimated number of farmsteads at four dates between the end of the Early Middle Ages and 1832 CE. Starting from a very low number of farms around 1000 CE, there is a clear increase until 1300 CE, after which growth slows or stagnates until 1500 CE. The Early Modern period shows a renewed, marked increase in the number of farmsteads. The foundation of various new farmsteads in (especially) the High and Late Middle Ages resulted in a more compact settlement pattern, as new farms were built near older ones and in similar landscape settings (Fig. 3b). In the Early Modern period this densification process accelerated. Additionally, new groups of farmsteads were built at some distance from the older settlement nuclei, most prominently at the edge of the heathland zone (Fig. 3b).

[Fig. 5 caption, partially recovered: "... Table 2). For the map sheet of the Hottinger atlas the survey covered 3 years; we indicated the middle year in (b). Low-water channel centrelines were derived from the analysis by Quik and Wallinga (2018a). Dotted areas in (a–e) indicate the area covered by drift-sand as displayed on historical maps. Grey shading indicates the area currently classified as arenosols in the Dutch soil classification ('duinvaaggronden' and 'vlakvaaggronden' combined). The graph in (f) shows the relative area covered by drift-sand through time. Source Dutch soil map: Alterra (2014)."]
Spatiotemporal patterns in land use
In late prehistory and the Roman period, the arable fields were situated nearby the settlements. Both were located on the higher parts of the large coversand ridges alongside the river valley (Van Beek, 2009;Van Beek and Groenewoudt, 2011). The position of the arable fields did not change much through time; in the High Medieval period the tops of the coversand ridges were reclaimed into open field complexes (arable land; Dutch: 'essen'), which stayed in use for agriculture until the present day (Neefjes et al., 2011;Van Beek and Groenewoudt, 2011). The settlements had gradually moved to their present-day position, at the lower slopes of the coversand ridges, from the Medieval period onwards. This important change of settlement location led to a larger fixation or place continuity of different landscape elements, most notably the hamlets and their open field complexes.
Not much is known about the appearance and exploitation of the river floodplain in late prehistory and the Roman period. Highquality archaeobotanical evidence is lacking. With regard to the High Middle Ages, archaeobotanical data (macro remains and pollen) were collected at the archaeologically investigated settlement site of Dalfsen-Gerner Marke, situated approximately 15 km downstream of our study area in a similar landscape setting (Van Haaster, 2006). It was demonstrated that the investigated coversand ridge along the Overijsselse Vecht had lost most of its original woodland vegetation before the Middle Ages. In the High Middle Ages it was largely in use as arable fields, where rye, flax and probably barley and oats were grown. Two different types of grasslands were present in the nearby river valley: one type related to relatively dry soils, that probably was exploited as pasture, and a type linked to wetter soils that was probably used as hayland (Van Haaster, 2006). After the harvest these areas may temporarily have been used for grazing as well. Heather probably grew on the large coversand plains at some distance from the river and may have been used for sheep grazing (Neefjes et al., 2011).
The evidence obtained at Dalfsen corresponds well with information derived from historical sources, which indicate the Medieval reclamation of floodplains for use as haylands (Bakker, 1989). Grasslands were essential for local communities. In his research on farming in the province of Drenthe in the period 1600–1910 CE, Bieleman (1987) analysed old land-tax registers. These distinguish several types of 'greenland' (Dutch: 'groenland') occurring in stream and river valleys, which were used as either hayland and/or pasture. Even though the various greenland types were taxed differently, the overall value of greenlands was substantial (1.5–2.5 times higher) compared to the value of arable lands. Hay formed an indispensable crop, providing winter fodder for draught animals that were used to cultivate the arable fields (Franklin, 1953; Dirkx, 1997).
The Overijsselse Vecht formed the administrative boundary between marks north and south of the river (Fig. 3b). The typically large floodplain parcels directly bordering the river were owned by the marks and consisted of high-quality grasslands that were exceptionally suited for grazing by cattle, which have higher demands regarding food quality than sheep. In these common pastures every mark member had rights to graze a specific number of cattle (Van Engelen van der Veen, 1924). These greenlands are often indicated with the terms 'mars' or 'maat/maten'. 'Mars' is a toponym for 'land by the water' (Van Berkel and Samplonius, 1989) or 'marshland' (De Vries and De Tollenaere, 1995). 'Mat' stems from the word 'dagmaat' (Schönfeld, 1950), an old land measure indicating the area that could be mown by one man in one day (Bieleman and Brood, 1980). By custom, pastures were named after the livestock type grazing there (Schönfeld, 1950). For instance, 'maat' also occurs in the eastern Netherlands combined with the Dutch word for cow ('koe'), as in 'Koemaat' (Ter Laak and Groenewoudt, 2005). The two scroll-bar complexes investigated here are situated in the marks of Junne and Stegeren. The name 'Junner Koeland', literally translates as 'cowland of Junne'. 'Prathoek' is a combination indicating the shape of the land ('hoek' means corner, Schönfeld, 1950) and its vegetation ('prat' probably originates from the Latin pratum, meaning grassland, Malinckrodt, 1974). Based on the combination of archaeological, historical and toponymical information it is highly likely that both scroll-bar complexes were intensively used for cattle grazing for centuries, and that this practice goes back to at least the High Middle Ages.
Drift-sand covered area through time
The drift-sand covered area derived from each of the five historical maps is displayed in Fig. 5a to 5e, combined with the position of the river as indicated by each map (following Quik and Wallinga, 2018a). The graph in Fig. 5f shows the relative surface area of the drift-sand through time. The area covered by drift-sand increased over time to approximately 17% in 1851 CE; subsequent large-scale afforestation led to a quick drop in the drift-sand covered area. The former drift-sand areas largely overlap with present-day arenosols, while at a few locations podzols have developed in the drift-sand deposits after stabilization (Alterra, 2008, 2014). For instance in the southern areas in Fig. 5c, where there is no overlap with arenosols, podzols have developed over time. As the onset of drift-sand activity took place by at least 1500 CE (see below), the initial development of the drift-sands could not be derived from the maps. The spatial patterns in Fig. 5 clearly show the development of drift-sands directly adjacent to the meanders of Junner Koeland and Prathoek. The land use pattern in Fig. 3b corresponds well with the indicated drift-sand area in Fig. 5c.
Intruding sands formed a nuisance for the inhabitants of the Overijsselse Vecht valley. The first records of defence measures against the drift-sand in mark books from the Dutch part of the river valley date from the 16th century (Bruins, 1981). Sand-drifting was controlled e.g. by construction of tree girths (visible on the historical map of 1720 CE in Fig. 6) or dykes. Consequently drift-sands could reach the river only locally. For instance, the drift-sand west of Junner Koeland at location 1 probably caused relatively few problems, because the arable fields of Arriën lay upwind of the predominant SW–NE wind direction (Koster, 2010). As human-induced barriers were absent, the drift-sand could influence the meander at Junner Koeland. A large part of the drift-sand coming from south of the river was probably caught in the tree girths and dykes surrounding the arable fields of Junne. Drift-sand blowing towards the apex of Prathoek was probably not limited in this way, because no arable fields were present directly south of this meander.
Lithological survey drift-sand areas
The drift-sand dune at location 1 has a diameter of about 30 m and varies in elevation from circa 6–9 m above sea level (a.s.l.). At the coring locations the thickness of the drift-sand layer varies between 10 and 180 cm. At location 2 the drift-sand forms a linear structure bordering the arable fields of Junne (Fig. 2b). This drift-sand ridge has a width of about 60 m and a length of circa 1 km. It varies in elevation from circa 8 m a.s.l. at its borders to about 16 m a.s.l. at its highest point. Corings showed that the thickness of the drift-sand layer varies between 120 and 350 cm (here the maximum thickness of the drift-sand layer is at least 350 cm; augering depth was not sufficient to reach the coversand deposits at the centre of the ridge). The five corings that were placed between the abandoned channel of Junner Koeland and the adjacent drift-sand dune (Fig. 2a) demonstrated that these drift-sands are underlain by coversand, indicating absence of Holocene fluvial deposits underneath the drift-sands. The two corings at the west end of this transect proved that the boundary between fluvial deposits and the drift-sand area is abrupt, which matches geomorphological observations based on the DEM (Fig. 2a) and in the field.
Optically stimulated luminescence (OSL) dating results
The OSL dating results are listed in Table 1. At drift-sand location 1 the OSL results show that the sample taken in the middle of the dune is the oldest (1837 ± 20 CE). The sample from the lee side is somewhat younger (1919 ± 13 CE) and the stoss side sample is youngest (1983 ± 10 CE). A similar pattern was found for location 2, where the sample collected near the middle of the dune is oldest (1500 ± 31 CE), the lee side sample is younger (1620 ± 48 CE) and the stoss side sample is youngest (1820 ± 45 CE). These ages indicate that growth of the dunes was directed against the dominant wind direction, perhaps with some sediment blowing over the dune to the lee side during stormy weather. At location 1, where some bare patches are present today and the dating indicated very recent sedimentation, an increasing importance of NE winds in currently active drift-sands (Jungerius and Riksen, 2010) could also have played a role. According to the cadastral map (Fig. 3b) drift-sand location 1 was covered by heather in 1832 CE; however, the OSL dates indicate that the drift-sand remained active until 1983 CE. The largest part of the drift-sand ridge at location 2 had been stabilized by a forest cover in 1851 CE (Fig. 5d). One OSL date was available from earlier work (Reimann et al., 2016; Rotthier and Sýkora, 2016) and is denoted as drift-sand location 3 (Fig. 1b, Fig. 2c, listed at the bottom of Table 1).

[Table 1 caption: Optically stimulated luminescence (OSL) dating results for the drift-sand samples. The column 'Podzol' refers to presence or absence of a spodic horizon in the coversand that underlies the drift-sand. RD = Dutch coordinate system, NA = not available. For all samples the palaeodoses and ages are based on the bootstrapped Minimum Age Model (Cunningham and Wallinga, 2012). Source of sample NCL-2112028: Reimann et al. (2016); Rotthier and Sýkora (2016).]
This sample was collected at a depth of 0.28–0.43 m below the surface in aeolian deposits found on top of a Pleistocene terrace remnant in Junner Koeland. It was dated at 1463 ± 28 CE (Reimann et al., 2016), matching the ages found at drift-sand location 2, where the oldest sample was dated at 1500 ± 31 CE.
River management
The consulted archival material demonstrated that there was no specific authority concerned with regional coordination of river management of the Overijsselse Vecht. Locally, farmers and land owners concerned with protection of their property applied small-scale practices such as placement of groynes. However, this happened only to a limited degree, as appears from the low number of records. The aloofness of higher authorities and resulting lack of regional management strategies provided free rein for local land use and drift-sand activity to affect meander development.
Settlement pattern and land use
The chronological development of the estimated number of farmsteads (Fig. 4, Fig. 7) follows the common trend in Northwest Europe (Bieleman, 2008; Persson and Sharp, 2015). A gradual increase in settlement size during the Early Middle Ages was followed by strong population growth and reclamation activity between c. 1100 and c. 1350, followed by a stabilization phase or period of reduced growth between c. 1350 and 1500, and strong renewed growth after 1500 CE. The farmstead numbers for 1832 are exact, as they were derived from the first national Cadastre. The numbers for all other time points should be considered conservative estimates, since there may have been yeomen and peasants owning farms that were unrecorded in the Late Medieval sources used.
Historical sources indicate that drift-sands became exceedingly problematic near the river valley from at least the 16th century onwards, and that different measures were taken to restrain them (Bruins, 1981). This trend coincides with the strong growth in farmstead numbers from 1500 CE onwards (Fig. 4). It is highly likely that intensified land use led to increased drift-sand activity (Castel et al., 1989; Pierik et al., 2018), as corroborated by the OSL dating results indicating drift-sand deposition from the beginning of the 16th century CE onwards (Table 1). We previously showed that the shift of the Overijsselse Vecht from a laterally stable to a meandering channel pattern took place during the Late Middle Ages, caused by an increase of peak discharges (1400–1500 CE, Fig. 7; Candel et al., 2018). The period of large meander formation locally (roughly 1400–1900 CE; Quik and Wallinga, 2018a) overlaps with the strong growth in farmstead numbers and the activity period of drift-sands directly adjacent to the meanders of Junner Koeland and Prathoek (Fig. 7). It seems reasonable to assume that the increase in farmstead numbers resulted in new reclamations and higher land use intensity, and indirectly affected the activity and expansion of drift-sands.
The grasslands on the scroll-bar complexes (Fig. 3b) were used for grazing (and possibly hay-making) from (at least) the High Middle Ages onwards (cf. Van Haaster, 2006). It remains difficult, however, to assess whether related land use effects had a significant impact on meander development, as land use on scroll-bar complexes of meanders expanding beyond the Pleistocene valley may have been similar to that in meanders that remained confined by the valley side. Detecting land use changes at this scale would consequently require archival study at the level of individual farms. We hypothesize that (1) cattle may have enhanced local bank erosion through trampling (Trimble and Mendel, 1995), with shallow water levels in the Overijsselse Vecht (Staring and Stieltjes, 1848) potentially aiding access of cattle to the concave river bank; and (2) the high value of the grasslands and the function of the river as administrative border between the marks suggest an economic incentive for deliberate human-induced bank disruption or actions promoting erosion of the concave bend and point-bar development along the convex bend. The mark affected by erosion would lose only drift-sand covered territory of low value. We have, so far, not been able to identify historical sources to test these hypotheses. Highly detailed archival studies may shed light on the relevance of these processes.
Drift-sand
Our dating results indicate a chronological conformity between lateral meander expansion (ca. 1400–1900 CE; Quik and Wallinga, 2018a) and nearby drift-sand activity (Fig. 7). The oldest drift-sand sample was dated at 1500 ± 31 CE. This date indicates the start of the formation of the drift-sand dyke surrounding the arable fields of Junne. Activity of drift-sands must have started even earlier, as construction of this dyke signifies a response to the drift-sands threatening the arable land.
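The chronological conformity argued here amounts to an overlap of dated activity intervals. A minimal sketch, using the rounded periods from this study (meander growth ca. 1400–1900 CE; drift-sand location 2 from the 1500 CE onset to map-based stabilization by 1851 CE; location 1 from map evidence in 1720 CE to the youngest OSL date of 1983 CE) and ignoring dating uncertainties:

```python
def overlap(a, b):
    """Return the overlapping (start, end) of two CE intervals, or None."""
    start, end = max(a[0], b[0]), min(a[1], b[1])
    return (start, end) if start < end else None

meander_growth = (1400, 1900)   # ca. period of lateral meander expansion (CE)
drift_sand_loc2 = (1500, 1851)  # OSL onset to map-based stabilization
drift_sand_loc1 = (1720, 1983)  # earliest map evidence to youngest OSL date

print(overlap(meander_growth, drift_sand_loc2))  # (1500, 1851)
print(overlap(meander_growth, drift_sand_loc1))  # (1720, 1900)
```

A fuller treatment would propagate the OSL uncertainties (e.g. ± 31 a on the location 2 onset) rather than using point estimates.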
Previous studies suggested that bank stability of outer banks may decrease as they become covered by drift-sand (Wolfert et al., 1996; Wolfert and Maas, 2007). Deposition on the banks may cause riparian vegetation to die back, diminishing the bank's erosion resistance. Additionally, the drift-sand cover itself consists of very non-cohesive material that is prone to fluvial erosion. Historical maps show that the drift-sands were situated directly adjacent to the meanders of Junner Koeland and Prathoek (Fig. 5), and oriented such that sand would blow towards the river under the predominant wind direction (Fig. 6). Protection measures such as the dyke of Junne (drift-sand location 2) locally inhibited drift-sands from reaching the river. In the absence of such protective structures, the apex of Prathoek was fully exposed; consequently drift-sand deposition will have affected bank stability at this meander's apex. Additionally, as Prathoek is located directly upstream of Junner Koeland, blown-in sands may have affected Junner Koeland as well, prior to drift-sand activity west of this meander (at drift-sand location 1). According to our OSL dates drift-sands were active here from 1837 ± 20 CE onwards. However, historical maps point towards drift-sand activity at this site from as early as 1720 CE (Fig. 5a).

[Fig. 7 caption, partially recovered: "... Fig. 4). Upper middle: period of activity of drift-sand areas 1 and 2 as determined with OSL dating and relative drift-sand area as derived from historical maps (see Fig. 5). Lower middle: shift from a laterally stable to a meandering channel pattern (circa 1400–1500 CE, set to 1450 in the graph; data derived from Candel et al., 2018) and development of meander amplitude of Prathoek and Junner Koeland (data derived from Quik and Wallinga, 2018a). Bottom: reconstructed mean bankfull discharge (data derived from Candel et al., 2018). For the sake of clarity uncertainties are not shown here."]
Drift-sand activity at location 1 overlaps partly with meander formation at Junner Koeland, and fully with formation of the skewed apex. Drift-sands that were present north of Junner Koeland, as visible on the historical map of 1720 (Fig. 5a), will gradually have been eroded by the river. Drift-sand in this position would not blow towards the river under the predominant wind direction, but the drift-sand cover probably resulted in lower stability of the northern bank and hence was prone to erosion by the expanding Junner Koeland meander. In addition, coring evidence from drift-sand location 1 shows a sharp boundary between fluvial and drift-sand deposits which points towards fluvial erosion of drift-sand-covered terrain (Fig. 2a).
Interactions between fluvial and aeolian geomorphology are widespread but are often not studied in combination; hence the underlying mechanisms are less well understood (Liu and Coulthard, 2015). Our observations support a reduction of bank stability following drift-sand deposition, as discussed by Wolfert et al. (1996) and Wolfert and Maas (2007), as an active geomorphic process. Additionally, drift-sands can act as an extra sediment supply to the river, altering its morphodynamics by enhancing the rate of scroll bar growth and therefore the rate of bank erosion (Ferguson, 1987; Nanson and Croke, 1992). The collapse of drift-sand-covered banks upon fluvial erosion will also lead to additional sediment supply to the river, which may affect river morphodynamics. Cross-sections by Candel et al. (2018) showed that the channel deposits did not incise since 1400/1500 CE, but that the river bed in fact slightly aggraded during the meandering phase. This may point towards high sediment supply due to the drift-sands.
Reflection on relative importance of drivers
Formation of the Junner Koeland meander started at 1433 ± 92 CE (Quik and Wallinga, 2018a, b). Following the channel pattern change from laterally stable to meandering, the initial formation of meanders was part of the natural regime of the river (Candel et al., 2018). This initial meander formation was not caused or influenced by local allogenic drivers, but by a catchment-scale discharge regime change. However, the continuous growth of meanders like Junner Koeland and Prathoek, which migrated beyond the river's valley sides, reaching amplitudes over twice their expected size, is remarkable. Lack of regional river management strategies led to a situation where the river could meander freely. Rising numbers of farmsteads in the study area created land use pressures that were hitherto unprecedented. A consequence of this high land use intensity was the formation of local drift-sands (Castel et al., 1989; Pierik et al., 2018), and drift-sand deposition on river banks will have lowered their resistance to fluvial erosion (Wolfert et al., 1996; Wolfert and Maas, 2007). Our data show that drift-sand influence was present during the entire period of meander growth at Prathoek, and from 1720 CE onwards for Junner Koeland. Based on our multidisciplinary analysis we consider drift-sand activity the most prominent factor causing the exceptional meander expansion observed at Junner Koeland and Prathoek. Sand-drifting is in itself a consequence of the high population density and land use pressure. Lacking fluvial management left the boundary conditions for meandering unchanged and allowed drift-sands that locally reached the river to exert their influence on river dynamics.
Implications
During the Late-Holocene, fluvial activity of rivers generally increased due to catchment-scale land use changes by humans, changing rivers into more actively laterally migrating rivers due to enhanced sediment load and discharges (e.g. Notebaert and Verstraeten, 2010;Brown et al., 2018;Gibling, 2018;Notebaert et al., 2018). Here we show that human influence did not only occur at the catchment-scale, but also contributed to exceptional morphodynamics at the level of individual meander bends.
The lessons learned at this site are relevant for management and restoration of meandering rivers in similar settings elsewhere, particularly considering the need to estimate spatial demands of (restored) low-energy fluvial systems and to adequately manage bank erosion and related hazard risks (e.g. Piégay et al., 2005). In many parts of the world, low-energy rivers are presently being restored from their channelized state to rivers that are allowed to freely erode their banks (Wohl et al., 2005). Restoration goals are often based on the channel planform preceding channelization, aiming to resemble the river course from historical maps. Consequently palaeochannels are reconnected to redesign the river channel (Kondolf, 2006). However, meandering activity strongly relates to stream power (Candel et al., 2018) and local land use, which change over time. Hence, rivers should not be restored to a certain historical reference, as the conditions that allowed this planform to develop may no longer be valid and may be impossible to return to (Dufour and Piégay, 2009). Instead, priority should be given to characterizing the current and future morphological conditions prior to setting goals for river restoration. We have shown that local changes of morphological conditions may result in exceptional changes of river dynamics. This may lead to unwanted and unexpected erosion of land and infrastructure. Hence a further development of our understanding of small-scale human-landscape interactions in fluvial environments could be of great practical value for the restoration of low-energy rivers and for predicting future change.
Conclusion
We identified potential direct and indirect anthropogenic drivers for the development of exceptionally large meanders in lowland rivers. Our multidisciplinary and in-depth analysis of the Overijsselse Vecht river: (1) indicates a lack of regional management of the river system throughout the Early and Late Modern period, which created a situation in which local land use and drift-sand deposition could interfere with river dynamics; (2) shows a strong increase in the number of farmsteads and a related intensification of local land use starting in the High Middle Ages and continuing through the Early and Late Modern period, an increase in habitation density that matches the period of meander growth; (3) reveals a chronological conformity between the lateral migration of two exceptionally large meanders and (human-induced) drift-sand deposition on their outer banks. Our results indicate that this interaction may have caused exceptional meander expansion beyond the valley sides.
Data availability
All data from this study are available under CC-BY 4.0 license at the 4TU.Centre for Research Data; see Quik et al. (2020). Details of the OSL dates of the meander bends are available in Wallinga (2018a, 2018b) and additional information on the palaeohydrological reconstruction can be found in Candel et al. (2018).
Author contributions
BM, GM, JW and CQ proposed the initial outline of the research and selected the field sites. CQ performed the corings in the drift-sand areas followed by OSL sample collection by CQ, JC, and BM. CQ assisted with preparation of the OSL samples in the laboratory, OSL results were analysed by JW. RvB collected and analysed the archaeological data. MP, RvB and TS performed the analyses of Medieval farmsteads. MV analysed historical river management based on archival study under the supervision of RvB, MP and JC. CQ wrote the initial manuscript, with main additions by JC, RvB, and MP. The draft was finalized by all authors.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
COVID RADAR app: Description and validation of population surveillance of symptoms and behavior in relation to COVID-19
Background Monitoring of symptoms and behavior may enable prediction of emerging COVID-19 hotspots. The COVID Radar smartphone app, active in the Netherlands, allows users to self-report symptoms, social distancing behaviors, and COVID-19 status daily. The objective of this study is to describe the validation of the COVID Radar. Methods COVID Radar users are asked to complete a daily questionnaire consisting of 20 questions assessing their symptoms, social distancing behavior, and COVID-19 status. We describe the internal and external validation of symptoms, behavior, and both user-reported COVID-19 status and state-reported COVID-19 case numbers. Results Since April 2nd, 2020, over 6 million observations from over 250,000 users have been collected using the COVID Radar app. Almost 2,000 users reported having tested positive for SARS-CoV-2. Amongst users testing positive for SARS-CoV-2, the proportion of observations reporting symptoms was higher than that of the cohort as a whole in the week prior to a positive SARS-CoV-2 test. Likewise, users who tested positive for SARS-CoV-2 showed above average risk social-distancing behavior. Per-capita user-reported SARS-CoV-2 positive tests closely matched government-reported per-capita case counts in provinces with high user engagement. Discussion The COVID Radar app allows voluntarily self-reporting of COVID-19 related symptoms and social distancing behaviors. Symptoms and risk behavior increase prior to a positive SARS-CoV-2 test, and user-reported case counts match closely with nationally-reported case counts in regions with high user engagement. These results suggest the COVID Radar may be a valid instrument for future surveillance and potential predictive analytics to identify emerging hotspots.
Introduction
The world is in the throes of the coronavirus-disease-2019 (COVID-19) pandemic with more than 100 million cases and over 2 million confirmed deaths worldwide as of December 2020 [1]. In the Netherlands, the first case of COVID-19 was diagnosed in February 2020 and since then over one million cases and 17,500 deaths have been confirmed [2]. To date more than 60,000 COVID-19 patients have been admitted to Dutch hospitals, with over 12,000 of these eventually admitted to intensive care [2]-this in a country with just over 1,000 intensive care beds [3]. The strategies of Test Trace and Isolate (TTI), and of measures intended to reduce social contact, have been widely adopted to "flatten the curve" [4,5]. An important limitation of the TTI strategy is transmission of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) by COVID-19 carriers without symptoms. Given their lack of symptoms, they may not be tested and remain unidentified by the TTI process despite being a possible source of viral transmission [6]. Recent studies show that this subpopulation may account for as much as half of COVID-19 transmissions [6,7]. An instrument to continuously monitor social-distancing behavior and symptoms in the population at a local level may support and improve the TTI process by decreasing the delay in identification of risk areas and populations. Research using voluntary symptom self-reporting apps performed in the United Kingdom, the United States of America, and Israel show promising results in the local prediction of COVID-19 using symptom-based tracking [8][9][10]. However, we find no apps using voluntary social-distancing behavior-reporting to track local COVID-19 hotspots.
During the first COVID-19 wave in the Netherlands, the Leiden University Medical Center (LUMC) and the tech company ORTEC developed and introduced the COVID Radar app. This questionnaire-based app allows individuals to anonymously report COVID-related symptoms and social-distancing behaviors on a regional and population level. The app provides users with direct feedback on, and peer comparison with, their reported social-distancing behavior and symptoms. Our theory is that tracking of symptom and social-distancing behavior data at a population level can be used to identify regions where more COVID-19 cases will subsequently occur, allowing (regional) policy makers and healthcare professionals to effect changes to regulations earlier, and thus more effectively.
In this first descriptive study, our aim is to observe the associations between self-reported symptoms, social-distancing behavior, and self-reported COVID-19 infection by the app's users (i.e. criterion validity), and the associations between these variables and state-reported COVID-19 infections by the National Institute for Public Health and the Environment (i.e. external validation).
COVID radar app
The COVID Radar app was released on the 2nd of April 2020 following a short publicity campaign in the local and national media [11]. The app is free to download and allows for multiple user accounts from the same household on one smartphone. The app is not age-limited, meaning children are allowed to download and use the app. Over 85% of the households in COVID Radar's user population with minors under 18 years of age are linked to an adult smartphone. Upon first use of the app, users are asked to provide informed consent to share the following information with the research institution, as stipulated by the conditions of the European General Data Protection Regulation. Users may opt out by either removing the app or by requesting the data manager to remove all data collected from that individual. Users are asked to register by entering the four digits of their postal code, gender (Male/Female/Other/Not Specified), age category (0-5, 6-11, 12-18, 19-29, ten-year increments from 30-80 and a category for 80+), and occupation (healthcare, education, catering industry, or other occupation with high risk of close contact). Following the initial setup, users are asked to report their symptoms and behavior daily via a questionnaire. A push reminder is sent every other day to remind users to do so. Fig 1 shows screenshots of the app and S1 Table shows a list of the questions users are asked.
PLOS ONE
Each observation comprises questions assessing symptoms, social-distancing behavior, whether or not the user has been exposed to an individual with COVID-19 in the past 2 weeks, and the user's COVID-19 test history. The questions asked were periodically updated, with the addition/removal dates of each question detailed in the online supplement. Via maps displayed within the app, users are presented with regional incidences of symptoms and personal feedback on their social-distancing behavior compared to regional and national means (Fig 1).
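As an illustration, one daily observation as described above might be modeled like this; the field names, categories, and validation rules are assumptions for the sketch, not the app's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Age categories as described in the paper: 0-5, 6-11, 12-18, 19-29,
# ten-year increments from 30-80, and 80+.
AGE_CATEGORIES = ["0-5", "6-11", "12-18", "19-29", "30-39", "40-49",
                  "50-59", "60-69", "70-79", "80+"]

@dataclass
class Observation:
    """One daily questionnaire entry (illustrative schema)."""
    postcode4: str                    # four digits of the postal code
    gender: str                       # Male / Female / Other / Not specified
    age_category: str                 # one of AGE_CATEGORIES
    occupation: str                   # e.g. healthcare, education, catering
    date: str                         # ISO date the questionnaire was filled in
    symptoms: dict                    # binary symptom answers, e.g. {"cough": True}
    people_within_1p5m: int           # a continuous behavior variable
    exposed_past_2w: bool             # contact with a COVID-19 case?
    tested_positive_past_2w: Optional[bool]  # None if no test reported

    def is_valid(self) -> bool:
        # Minimal sanity checks mirroring the registration options.
        return (len(self.postcode4) == 4 and self.postcode4.isdigit()
                and self.age_category in AGE_CATEGORIES
                and self.people_within_1p5m >= 0)

obs = Observation("2333", "Female", "30-39", "healthcare", "2020-04-02",
                  {"cough": False, "fever": False}, 2, False, None)
assert obs.is_valid()
```

Modeling each entry as a small typed record like this makes the downstream cleaning rules (valid postcode, known age category) easy to express.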
Data are transferred daily to a safe data environment within the Information Technology system of the LUMC (S2 Fig). Following importation of the daily data, we exclude observations from users who had requested to opt out, observations listing nonexistent postcodes, and double measurements within one user. Given that users are asked if they have tested positive for SARS-CoV-2 within the past two weeks, we considered users SARS-CoV-2 positive/negative if they indicated a SARS-CoV-2 test result at least twice in the app, with the date of the first report used as day zero. More details on the development of the app, selection of the chosen questions, (external) data sources, and data cleaning are available in the supplement. Ethical approval was provided by the Medical Ethical Board of the LUMC (dossier number N20.067), which gave permission to refrain from obtaining consent from parents or guardians as data collection was anonymous. Only data on age category, profession and four-digit postal code were collected, rendering the data untraceable to an individual.
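The daily cleaning and case-definition steps described here could be sketched with pandas as follows; the column names, the postcode lookup, and the toy data are illustrative assumptions, not the actual pipeline:

```python
import pandas as pd

# Toy daily import: "user9" has opted out, postcode "9999" does not exist,
# and u1 submitted the same questionnaire twice on 2020-05-01.
raw = pd.DataFrame({
    "user":            ["u1", "u1", "u1", "u2", "user9"],
    "date":            ["2020-05-01", "2020-05-01", "2020-05-08",
                        "2020-05-01", "2020-05-01"],
    "postcode":        ["2333", "2333", "2333", "9999", "1012"],
    "tested_positive": [True, True, True, False, False],
})
VALID_POSTCODES = {"2333", "1012"}   # assumption: lookup of existing postcodes
OPTED_OUT = {"user9"}

clean = raw[~raw["user"].isin(OPTED_OUT)]                 # opt-out requests
clean = clean[clean["postcode"].isin(VALID_POSTCODES)]    # nonexistent postcodes
clean = clean.drop_duplicates(subset=["user", "date"])    # double measurements

# A user counts as SARS-CoV-2 positive only if a positive test is reported at
# least twice; the date of the first report is day zero.
pos = clean[clean["tested_positive"]]
reports = pos.groupby("user")["date"].agg(["count", "min"])
day_zero = reports.loc[reports["count"] >= 2, "min"]
```

On this toy input, u1 is confirmed positive with day zero 2020-05-01, while u2's single negative entry and user9's opted-out entry are excluded.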
Comparison of included/excluded observations
Following the data cleaning process detailed in the online supplement, we compared the available data in the excluded cohort with that of the included cohort. For each of the binary (symptom) variables collected by the app, we compared the proportion of excluded and included observations reporting this symptom. For each of the continuous social-distancing behavior variables, we compared the mean values for the included/excluded cohorts.
Descriptive statistics
To describe participant characteristics, we used histograms to explore age distributions of the app users, the number of times the app was used each day, and the number of times individual users used the app. We further compared age, gender, and profession for users ever having tested positive with those never having tested positive for SARS-CoV-2.
Validation testing
Given the eventual goal of the COVID Radar app is to predict emerging hotspots, we tested the expected associations between symptoms/behavior and SARS-CoV-2 test outcome. We used user-reported test results as our outcome measure for criterion validity testing and cases reported by the National Institute for Public Health and the Environment (RIVM) as our outcome measure for external validation [2].
Criterion validity
As a measure of criterion validity, we explored associations between the binary symptom variables (e.g., cough, sore throat, loss of smell/taste) and the continuous social-distancing behavior variables (e.g., number of hours outside the house, number of people within 1.5m) within the cohort of users ever reporting a SARS-CoV-2 test. For users within this ever-tested cohort, we used the date of the test as day 0 and observed the 21 days before and after the test. We calculated the daily mean or proportion for each variable for the entire user-cohort. We then calculated the difference between ever-positive or ever-negative users' reported values and the mean values for the entire user-cohort on that day. By comparing data from the same days, we eliminated bias introduced by variations in time due to the various lock-down measures implemented during the observation window, as well as seasonal effects on symptoms. The mean values and 95% confidence intervals for these differences were then plotted to show how the ever-positive and ever-negative cohorts compared to the cohort as a whole with regard to these variables in the days surrounding a test. Given the formulation of the question ("Have you tested positive/negative for SARS-CoV-2 in the past two weeks"), the date of the test cannot be determined for those answering this question in the 14 days following the implementation of the question about testing in the app. Given this and the fact that this analysis involved looking at the 14 days prior to a test, users reporting a SARS-CoV-2 test in the 14 days following implementation of the question about testing were not included in this analysis.
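A minimal sketch of this day-0 alignment, assuming a tidy table of observations and known test dates (toy data; column names are illustrative):

```python
import pandas as pd

# Toy observations for three users on two calendar days; users "a" and "b"
# tested positive on 2020-10-05, user "c" never tested.
obs = pd.DataFrame({
    "user":  ["a", "a", "b", "b", "c", "c"],
    "date":  pd.to_datetime(["2020-10-01", "2020-10-05"] * 3),
    "cough": [0, 1, 0, 1, 0, 0],
})
test_date = {"a": pd.Timestamp("2020-10-05"), "b": pd.Timestamp("2020-10-05")}

# Whole-cohort daily baseline (proportion of observations reporting a cough).
cohort_mean = obs.groupby("date")["cough"].mean()

# Align ever-positive users on day 0 = test date and take the difference from
# the cohort mean on the same calendar day, as in the analysis above.
pos = obs[obs["user"].isin(test_date)].copy()
pos["rel_day"] = (pos["date"] - pos["user"].map(test_date)).dt.days
pos = pos[pos["rel_day"].between(-21, 21)]
pos["diff"] = pos["cough"] - pos["date"].map(cohort_mean)
curve = pos.groupby("rel_day")["diff"].mean()   # plotted with 95% CIs in the paper
```

Differencing against the same-calendar-day cohort mean is what removes the lockdown-timing and seasonal effects mentioned above.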
External validation
As a measure of external validation, we compared per-capita user-reported COVID-19 status among the 12 Dutch provinces with per-capita rates as reported by RIVM over the course of the pandemic [2]. Within each province, we plotted 7-day backward-looking moving averages of the daily proportion of users reporting each symptom variable alongside the daily nationally reported COVID-19 case counts, and the weekly proportions of users reporting each symptom variable alongside the number of Rhinovirus cultures reported by Dutch laboratories [12]. We further plotted daily means and 7-day backward-looking moving averages of each social-distancing behavior variable and qualitatively observed how well they reflect nationally applied lockdown measures and holidays.
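The 7-day backward-looking moving average per province might be computed like this (synthetic data; column names are assumptions):

```python
import pandas as pd

# Ten days of the daily proportion of users reporting a cough in one province.
daily = pd.DataFrame({
    "province":   ["Zuid-Holland"] * 10,
    "date":       pd.date_range("2020-09-01", periods=10),
    "prop_cough": [0.02, 0.03, 0.02, 0.04, 0.05, 0.04, 0.06, 0.05, 0.07, 0.06],
})

# Backward-looking 7-day window: each value averages the current day and the
# six days before it; the first six days have no complete window (NaN).
daily = daily.sort_values(["province", "date"])
daily["ma7"] = (daily.groupby("province")["prop_cough"]
                     .transform(lambda s: s.rolling(window=7).mean()))
```

Grouping by province before rolling keeps each province's window from bleeding into its neighbors when all provinces share one table.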
Sensitivity analyses
We repeated the above-described analyses for (a) the cohort of users using the app an above-median number of times during the observation period, (b) the cohort excluding healthcare professionals, and (c) the cohort excluding inhabitants of the province 'Zuid-Holland', the home province of the LUMC, where the app was created and users were most exposed to COVID Radar app-related media and advertisements. All statistical analyses were performed in STATA 16.1 (StataCorp, College Station, USA). STATA syntaxes for all analyses are provided in the online supplement.
Results
In the period 2 April, 2020 to 31 January, 2021 (305 days), the COVID Radar app was down-
Comparison of included/excluded observations
The data for the 102,445 (1.65%) excluded observations were fairly representative of the included observations' data in terms of symptoms and behavior. However, excluded observations were less often from a health professional and showed a slightly different age distribution (i.e. older age groups are over-represented in the excluded cohort) (see S2 Table).
Descriptive statistics
The age distribution of the app's users showed a fairly consistent distribution of users 18-69 years old, and an under-representation of young (<18) and old (>70) users. Female users were overrepresented compared to national figures (See S5 Fig). The number of observations (questionnaires answered) per day dropped from over 100,000 in the first week of the app to a steady-state of around 10,000 observations per day during the course of the observation window (2 April, 2020 to 31 January, 2021) (See Fig 2).
The effect of the push reminder sent every other day to all users is seen in the periodicity of the number of observations between even and odd days. The number of daily observations was highest in the province Zuid-Holland, the home province of the LUMC where the app was conceived and advertised (see Fig 3).
Criterion validation
From a total of 278,523 unique users, 1,981 (0.71%) reported ever testing positive and 1,214 (0.44%) ever testing negative for SARS-CoV-2. Ever-positive users were more likely to be women, older than 40 years of age, and healthcare professionals (Table 1).
The proportion of users reporting the eight symptom variables increased beginning approximately 7 days prior to a positive test. This increase was smaller in the cohort of users who tested negative (Fig 5a and 5b).
The continuous social-distancing behavior-based variables likewise showed above-mean values in this ever-positive cohort until approximately 7 days prior to a positive test, at which point they sharply decreased to remain below-mean in the week before and after a positive test. These fluctuations were not seen in users testing negative for SARS-CoV-2 (see Fig 6a and 6b).
External validation
As of early January 2021, almost one million cases of COVID-19 had been reported in the Netherlands by the National Institute for Public Health and the Environment (RIVM). The RIVM-reported daily case counts varied from 0 to over 13,000 cases per day. Plotting positive SARS-CoV-2 tests reported in the COVID Radar app alongside the case counts reported by the RIVM for each province shows that the association between the two is highest in provinces with a higher number of users, especially Zuid-Holland (Fig 7).
Symptoms and social-distancing behavior varied over time, with both showing a clear temporal association with RIVM-reported case counts over time (Figs 8 and 9).
Plotting the RIVM-reported number of reported positive cultures of Rhinovirus alongside our symptom data suggests variables 'fever', 'pain in the chest' and 'loss of smell' are associated with COVID-19 case count while variables 'coughing' and 'sore throat' correlated more closely with Rhinovirus cultures (Fig 10).
The daily mean number of people within 1.5 meters declined sharply around the middle of September, reflecting the national lockdown measures introduced, and showed peaks during national holidays (Fig 11).
The variable 'number of visitors' likewise showed peaks in the period around Christmas and New Year's Eve (Fig 12).
Sensitivity analyses
These analyses were repeated using (a) only users reporting an above-median number of observations (referred to as 'faithful' users), (b) only users outside the province Zuid-Holland, and (c) only non-healthcare professionals. Differences in the results for these three sensitivity analyses were minimal and none of the trends seen here were reversed (data shown in supplements).
Discussion
Since April 2020, the COVID Radar app has collected over 6 million user-provided questionnaires detailing COVID-related symptoms and social-distancing behaviors from over 275,000 unique users within the Netherlands. Symptom and behavior data were temporally associated with user-reported SARS-CoV-2 tests. A correlation between in-app reported case counts and nationally reported case counts was likewise seen, especially in provinces with high user engagement. Social-distancing behavior variables showed the expected pattern in relation to nationally applied lockdown measures and holidays.
Criterion validity
Our qualitative (visual) association testing showed clear associations between both user-reported symptoms and user-reported social-distancing behavior, and user-reported SARS-CoV-2 test results. While not quantified here, some variables (e.g. 'fever', 'pain in the chest' and 'loss of smell') were more closely associated with case count than others (e.g. 'coughing' and 'sore throat'), which seemed as strongly associated with Rhinovirus as with SARS-CoV-2. These associations are supported by prior research [13][14][15][16]. The pattern of social-distancing behaviors within the cohort of users who eventually reported a positive SARS-CoV-2 test was particularly interesting. This cohort showed above-mean risk social-distancing behavior (e.g. more people within 1.5m, more visitors at home) between 20 and 10 days prior to a positive test (i.e. the period during which transmission likely occurred), at which point their social-distancing behavior quickly dropped to below-mean values as they became symptomatic and decided to be tested. The extent of above-mean risk behavior was lower in users eventually testing negative.
External validity
Comparing COVID Radar data to external data sources showed logical (temporal) associations in symptoms, social-distancing behavior, and test results. The strongest associations were observed in regions with high user engagement. Given that the symptoms tracked by the app are common to both SARS-CoV-2 and other respiratory tract infections, future efforts directed at prediction will need to correct for Rhinovirus and other viruses using viral surveillance data from Dutch laboratories. The extent and types of restrictions imposed on the Dutch population varied during the observation period and their effects were clearly visible in the social-distancing variables reported by users.
Comparison of excluded and included observations showed slight differences in age distribution but relative consistency in other variables. The small size of the excluded cohort minimized the risk of bias being introduced via this exclusion step. There was a large variance in the number of observations per user, with some users answering questionnaires daily while others filled in the app only once during the observation period. While it is reasonable to assume more faithful users may provide more accurate data, sensitivity analyses performed using data from users with an above-median number of app entries show no significant differences as compared to our primary analyses. The lack of a clear difference in the results when analyzing users of different engagement-levels suggests any bias introduced by differences in the reporting habits of these users was small.
There was an overrepresentation of users from the province Zuid-Holland in our data, due to Zuid-Holland being the home province of the LUMC, the hospital in charge of app design/analysis. This also likely explains the overrepresentation of healthcare professionals, to whom the app was thoroughly advertised within the environment of the LUMC. Despite this overrepresentation, our sensitivity analyses excluding Zuid-Holland users and healthcare professionals showed similar results, suggesting any bias introduced by their overrepresentation is minimal. COVID Radar users were more often female and middle-aged. This was due to the overrepresentation of healthcare workers (who were more often female and middle-aged). However, the sensitivity analysis excluding healthcare workers did not change the conclusions.
Noteworthy too is the fact that fully 30% of those users reporting a positive SARS-CoV-2 test reported no symptoms on the day of the positive test (data shown in S8 Fig). This is in line with the estimated number of COVID-19 carriers without symptoms, as reported by other studies [6,7]. Our analysis likewise showed loss of smell and cough may continue for weeks following the positive SARS-CoV-2 test, as also confirmed in previous studies [17].
Limitations
All data in the app were self-reported and thus subject to differences in personal interpretation of the questions. However, we do not expect differential misclassification, as we see logical trends in symptoms and behavior at both individual and national levels. State-reported case counts were those reported by RIVM, whose data should include tests performed in private practices, as these are required to be forwarded to RIVM. However, as there is no oversight for this process, the RIVM-reported case counts likely represent underestimates of the number of confirmed cases [18].
COVID Radar additionally provided direct feedback to users on how their symptoms and behavior compared to those of their peers, which likely has an effect on user behavior. This may bias the generalizability of COVID Radar data, especially behavioral data. The effect of this feedback loop on users' behavior would be expected to lead to an overly conservative estimate of the behavior of the population. Despite this, expected changes in reported behavior in the periods following national holidays and changes to social-distancing policies are observed in COVID Radar data. Additionally, altered behavior due to app feedback would be expected to be observed in more loyal users of the app. Our sensitivity analysis on loyal users showed no significant difference in reported behavior. Given these realities, while we accept that app feedback altering user behavior has the potential to bias our results, we feel any bias introduced has been shown here to be small.
Testing capacity in the Netherlands was low during the developmental stage of the app and increased during the study period. In the final months of 2020, testing was expanded to include those without symptoms. As a result, the prevalence of COVID-19 in the Netherlands could be underestimated. Because of this change in testing policy, the question regarding negative tests was implemented at a later date, resulting in less data on negative tests, collected over a shorter period, than on positive tests. Nonetheless, we were able to show that the association between symptoms and a negative test is less apparent than their association with a positive test, suggesting our conclusions remain valid. Also, testing of children (<12 years) was rare during the study period, resulting in a relatively old SARS-CoV-2-positive cohort in this study.
Future implications
Having validated the expected associations between symptoms, social-distancing behavior, and COVID case-count, our next steps will involve attempted prediction of emerging hotspots by combining symptom and social-distancing behavior data to quantify risk of COVID-19 cases. Such predictions could be used to help guide COVID-19 policy. Our study indicated the quality of the submitted data is best where user-engagement is high. Prediction-based goals will thus be aided by increasing user count. Regional predictions may additionally be improved through incorporation of data from general practitioners, more detailed demographic data, and mobility data using a machine learning based approach. Another possibility for further research is testing of associations between regional SARS-CoV-2 cases, symptoms, behavior and other regional data related, for example, to the physical environment.
Conclusion
The COVID Radar app successfully collects anonymous, user-reported data on COVID-19-related symptoms and social-distancing behavior. Initial validation showed symptoms and behavior reported within the app are correlated with in-app reporting of a SARS-CoV-2 test. The predictive potential of the COVID Radar is demonstrated as external validation showed in-app reported positive SARS-CoV-2 tests track well with state-reported case counts. Future research will focus on regional predictions using these data.
Large-scale structures in a stratified open channel flow
We present the results of direct numerical simulation of stably stratified open channel flow. The simulation was conducted in a frame of reference moving with the mean flow. The chosen setup allowed us to apply temporal filtering without affecting the long-living large-scale laminar and turbulent patches to see the dynamics of the processes on their boundaries. The main goal of the paper was to investigate the vorticity balance in different parts of the flow. It was shown that boundaries between laminar and turbulent patches contain weak large-scale vortex structures sustained by the baroclinic generation.
Introduction
Stably stratified flows are known for their fascinating behaviour, with large intermittent turbulent and laminar patches. The most common examples of such phenomena are hot airflow over a cooler ocean or the night atmospheric boundary layer, where the ground cools faster than the air above it. Stratified flows have attracted increasing attention in the scientific community over the past decade [1][2][3].
Stably stratified channel flows are a computationally low-cost model for studying intermittency caused by thermal stratification. Depending on the ratio between the shear and negative buoyancy the flow may become fully turbulent, laminar, or intermittent, with the dynamic alternation of turbulent and laminar spots.
There are two main regions in the stratified boundary layer: the outer region, far from the wall, where the shear is low and the flow dynamics is mostly governed by gravity waves [4][5], and the inner region, close to the wall, where the shear is high and the turbulent production opposes the damping effect of negative buoyancy. The outer region is extensively studied and overall well understood, while the inner region and its dynamics are much more complex and less understood, especially the dynamic equilibrium of long-lived turbulent and laminar patches.
The complexity of the phenomenon lies in the large number of energetic interacting vortices in a strongly anisotropic environment. Not only is the flow anisotropic because of the effect of the wall, but it also becomes anisotropic in the horizontal directions due to intermittency effects. Such a highly dynamic configuration is very hard to grasp as a whole, so the hope is that statistical processing of the data can simplify the group effects of many vortex interactions in some deterministic way.
In this short paper we investigate the processes at the boundaries between the turbulent and laminar zones, with a focus on vorticity dynamics. The vorticity dynamics, and especially the large-scale structures that are masked by the numerous smaller eddies generated by the shear stress at the wall, may play an important role in sustaining the dynamic equilibrium of the intermittent flow. One way to make these larger structures visible is to change the reference frame in order to stop the advection, and to filter out the small-scale eddies.

Computational details

The stratification parameter (defined in terms of the thermal expansion coefficient β, the gravitational acceleration g, and the temperature difference ΔT between the top and bottom boundaries) was large enough to produce an intermittent turbulent/laminar flow pattern. The Prandtl number was equal to 0.71. The computational domain dimensions were 8π × 1 × 4π with 512 × 50 × 512 grid points, respectively. In the longitudinal and transversal directions the grid was uniform, while in the vertical direction the grid was clustered toward the wall, with the grid step ranging from 0.1 to 5.5 in wall units. We used a DNS solver based on the open-source CFD package OpenFOAM, which has been verified for a number of scientific and engineering applications [6,7].
The Navier-Stokes equations of incompressible fluid flow with Boussinesq approximation for buoyancy force were solved numerically using the finite-volume method with second order accuracy in space and time. No-slip and free-slip conditions were set at bottom and top boundaries respectively while at other boundaries the periodic conditions were prescribed. The simulation was conducted in a reference frame moving with the bulk velocity of the flow. This allowed us to investigate the evolution of laminar and turbulent patches as they appeared almost stationary in the chosen reference frame.
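The effect of the moving reference frame can be sketched numerically: subtracting the spatially uniform bulk velocity is a Galilean transformation, so it freezes the mean advection without altering the vorticity field. A minimal NumPy illustration with synthetic fields (this is not the OpenFOAM solver; grid sizes and amplitudes are arbitrary assumptions):

```python
import numpy as np

# Sketch: a Galilean change of frame, u' = u - U_bulk, removes the mean
# advection so that laminar/turbulent patches appear (nearly) stationary,
# while leaving the vorticity unchanged.

def bulk_velocity(u):
    """Volume-averaged streamwise velocity."""
    return u.mean()

def vorticity_z(u, v, dx, dy):
    """Spanwise vorticity omega_z = dv/dx - du/dy on a uniform grid."""
    dvdx = np.gradient(v, dx, axis=0)
    dudy = np.gradient(u, dy, axis=1)
    return dvdx - dudy

rng = np.random.default_rng(0)
nx, ny, dx, dy = 64, 32, 0.1, 0.05
u = 1.0 + 0.1 * rng.standard_normal((nx, ny))   # mean flow + fluctuations
v = 0.1 * rng.standard_normal((nx, ny))

u_moving = u - bulk_velocity(u)                 # moving reference frame

# Vorticity is frame-invariant under the uniform shift:
w_lab = vorticity_z(u, v, dx, dy)
w_mov = vorticity_z(u_moving, v, dx, dy)
assert np.allclose(w_lab, w_mov)
```

The same invariance is what makes time-averaging of vorticity in the moving frame meaningful: the frame change only relabels positions, it does not modify the quantity being averaged.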
Results and discussion

The mean velocity and temperature profiles (together with the Reynolds stress and turbulent heat flux components, not shown here) were compared with the DNS data from [3] (Fig. 1a,b). The profiles show good agreement with the literature. Fig. 1c shows a typical pattern of laminar and turbulent patches in the lower part of the channel. In our simulations the laminar spots were relatively stationary due to the chosen reference frame. This allowed us to time-average the turbulent fields without smearing out the spot structure. Using this procedure it was possible to find weak large-scale vortex formations that were hidden behind the turbulence. Fig. 1d shows the result of time-averaging the longitudinal vorticity component over 6 t_l (t_l = L/U_bulk, where L is the streamwise domain length). The dotted ovals show the location of the laminar spots. It is clearly seen that after time-averaging there are opposite signs of vorticity on the opposite boundaries of both laminar zones. This vorticity would induce a velocity field that lifts the flow up in the turbulent region and pulls it down in the laminar region, thus contributing to their stability. Fig. 2a shows a transverse cross-section of the flow intersecting the laminar patch; it is clearly seen that after averaging a pattern emerges with positive and negative vorticity maxima at the boundaries between the laminar and turbulent regions.
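The averaging argument can be illustrated with synthetic data: a weak stationary vorticity pattern buried under zero-mean turbulent noise emerges after averaging many snapshots, because the noise contribution decays roughly as 1/√N. A toy sketch (all amplitudes and snapshot counts are invented, not taken from the simulation):

```python
import numpy as np

# Sketch: in the moving frame the laminar spots are quasi-stationary, so
# averaging many snapshots suppresses zero-mean turbulent fluctuations while
# the weak persistent vorticity pattern at the spot boundaries survives.

rng = np.random.default_rng(1)
nx, nz, nsnap = 128, 128, 2000
x = np.linspace(0, 2 * np.pi, nx)
z = np.linspace(0, 2 * np.pi, nz)
X, Z = np.meshgrid(x, z, indexing="ij")

weak_structure = 0.1 * np.sin(X) * np.cos(Z)   # hidden large-scale vorticity

avg = np.zeros((nx, nz))
for _ in range(nsnap):
    snapshot = weak_structure + rng.standard_normal((nx, nz))  # noisy field
    avg += snapshot
avg /= nsnap

def corr(a, b):
    """Pearson correlation between two fields."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

print(corr(avg, weak_structure))       # high: noise has averaged down
print(corr(snapshot, weak_structure))  # near zero for a single snapshot
```

The contrast between the two correlations is the essence of why the structures in Fig. 1d only become visible after the time average.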
To investigate the source of this longitudinal vorticity we computed separately the components of the vorticity balance equation

∂ω_x/∂t + (u·∇)ω_x = (ω·∇)u_x + ν∇²ω_x − gβ ∂T/∂z,  (1)

where the right-hand side contains the vortex bending (tilting) term, the viscous term, and the baroclinic term. From Fig. 2c it is clearly seen that the vorticity is at least partially accumulated at the boundary due to advection. Further analysis shows that the main component of the advection is longitudinal, which means that longitudinal vorticity of the required sign is generated somewhere upstream. The vortex bending term (Fig. 2d) has a minimum in the laminar patch and large maxima in the turbulent areas; at the boundary between the laminar and turbulent regions its direction is opposite to the existing vorticity sign, so it acts as a sink, meaning that longitudinal vorticity is being bent away into the other components. The last term in Eq. (1) is the baroclinic term, which describes the production of vorticity by buoyancy gradients. Fig. 2b shows the time-averaged temperature over the transversal cross-section. It is evident that the laminar patch is cooler than the turbulent spots and thus generates negative buoyancy. The existing temperature gradients therefore contribute to the vorticity distribution of Fig. 2a.

One interesting effect observed in the moving reference frame is that the small velocity fluctuations inside the laminar spots are advected at a lower speed than the fluctuations in the turbulent patches. Thus, small eddies from the turbulent patches frequently drift into the boundaries of the laminar zones, where they are slowed down and dissipated rapidly. This process creates a constant flux of vorticity toward the boundaries between the laminar and turbulent regions. We speculate that this vorticity flux could be connected with the described large-scale structures.
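The budget terms above can be evaluated on a gridded field with simple finite differences. The sketch below computes the advection, tilting, and baroclinic contributions for ω_x; the values of g and β are illustrative, not the simulation's parameters, and coordinates follow the paper's convention (x streamwise, y vertical, z transversal):

```python
import numpy as np

# Sketch of evaluating the omega_x budget terms (Eq. 1) on a uniform grid.
# Gravity acts along y, so the baroclinic source of omega_x comes from the
# transversal temperature gradient dT/dz.

def grad(f, d, axis):
    return np.gradient(f, d, axis=axis)

def budget_terms(u, v, w, T, dx, dy, dz, g=9.81, beta=1.0e-3):
    # Vorticity components on the grid
    wx = grad(w, dy, 1) - grad(v, dz, 2)
    wy = grad(u, dz, 2) - grad(w, dx, 0)
    wz = grad(v, dx, 0) - grad(u, dy, 1)
    # Advection of omega_x (moved to the right-hand side, hence the minus)
    advection = -(u * grad(wx, dx, 0) + v * grad(wx, dy, 1) + w * grad(wx, dz, 2))
    # Vortex bending/tilting: (omega . grad) u_x
    tilting = wx * grad(u, dx, 0) + wy * grad(u, dy, 1) + wz * grad(u, dz, 2)
    # Baroclinic production: -g * beta * dT/dz
    baroclinic = -g * beta * grad(T, dz, 2)
    return advection, tilting, baroclinic

# Trivial sanity check: a uniform flow with uniform temperature has no sources.
shape = (8, 8, 8)
u, v, w, T = np.ones(shape), np.zeros(shape), np.zeros(shape), np.ones(shape)
adv, tilt, baro = budget_terms(u, v, w, T, 0.1, 0.1, 0.1)
assert all(np.allclose(t, 0.0) for t in (adv, tilt, baro))
```

In practice each term would be time-averaged in the moving frame, exactly as done for ω_x itself, before being compared across the laminar/turbulent boundary.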
Conclusion
The simulation of a stratified open channel flow in a frame of reference moving with the mean velocity allowed us to apply statistical tools to the laminar and turbulent patches in the flow separately. Weak but large-scale vortex structures were found at the boundaries between the laminar and turbulent regions. These structures are sustained mostly by baroclinic generation, and they may take part in supporting the dynamic balance between laminar and turbulent patches.

This work was supported by the Russian Science Foundation (grant №16-19-00119).
Human Protein N-terminal Acetyltransferase hNaa50p (hNAT5/hSAN) Follows Ordered Sequential Catalytic Mechanism
Background: Nα-Acetylation is catalyzed by N-terminal acetyltransferases (NATs). The reaction mechanisms of NATs are unknown. hNaa50p is a member of the human NAT family. Results: Kinetic parameters and product inhibition patterns were determined. Acetyl-CoA binding induced conformational changes facilitating peptide binding. Conclusion: hNaa50p most likely utilizes the Theorell-Chance mechanism. Significance: Bisubstrate inhibitors, mimicking a ternary complex, should function as specific inhibitors of human NATs. Nα-Acetylation is a common protein modification catalyzed by different N-terminal acetyltransferases (NATs). Their essential role in the biogenesis and degradation of proteins is becoming increasingly evident. The NAT hNaa50p preferentially modifies peptides starting with methionine followed by a hydrophobic amino acid. hNaa50p also possesses Nϵ-autoacetylation activity. So far, no eukaryotic NAT has been mechanistically investigated. In this study, we used NMR spectroscopy, bisubstrate kinetic assays, and product inhibition experiments to demonstrate that hNaa50p utilizes an ordered Bi Bi reaction of the Theorell-Chance type. The NMR results, both the substrate binding study and the dynamic data, further indicate that the binding of acetyl-CoA induces a conformational change that is required for the peptide to bind to the active site. In support of an ordered Bi Bi reaction mechanism, addition of peptide in the absence of acetyl-CoA did not alter the structure of the protein. This model is further strengthened by the NMR results using a catalytically inactive hNaa50p mutant.
Acetylation is one of the most common covalent modifications, occurring on the majority of eukaryotic proteins, in which an acetyl group is transferred from acetyl coenzyme A either to the α-amino group of protein N termini (Nα-acetylation) or to the ε-amino group of specific lysine residues (Nε-acetylation) (1, 2). So far, six different N-terminal acetyltransferase (NAT) complexes have been identified in eukaryotes (NatA-NatF) (3, 4), and interestingly, subunits of the human NatA complex, i.e. hNaa10p and hNaa15p, have been increasingly linked to cancer development and prognosis. For example, the genes encoding hNaa10p and hNaa15p are up-regulated in several types of cancer (5-7). Functional studies indicate that hNaa10p and hNaa15p are essential for growth and survival of cancer cell lines (8-11). Antitumorigenic roles have also been proposed (12, 13). Naa50p is physically associated with NatA (14, 15) but has its own catalytic activity defined as NatE (16, 17). From fruit flies to humans, Naa50p is essential for proper sister chromatid cohesion and chromosome resolution (18-20).
These observations highlight the biological significance of the human NATs and mark them as potential therapeutic targets in cancer (21). Detailed knowledge of the catalytic and kinetic mechanisms of the NATs will undoubtedly aid our efforts to develop inhibitors targeting these enzymes. Two types of kinetic mechanisms are observed for acetyl transfer reactions: a ping-pong mechanism (22) and the ternary complex/sequential mechanism (23-26). The former mechanism is typically associated with an acetyl-enzyme intermediate, whereas the latter mechanism requires that both substrates bind to the enzyme to form a ternary complex prior to acetyl transfer. Our data demonstrate that the acetyl transfer reaction catalyzed by hNaa50p involves the formation of a ternary complex. Thus, bisubstrate inhibitors, mimicking a ternary complex molecule, should function as highly specific and efficient inhibitors of hNaa50p and possibly other human NATs.
EXPERIMENTAL PROCEDURES

Prokaryotic Expression and Purification of Recombinant Proteins-GST-hNaa50p and the H112A mutant were expressed and purified as described previously (16).
In Vitro Acetylation/Kinetic Assays-All enzyme kinetic experiments were performed essentially as described previously (27). The steady-state kinetic parameters and the enzyme inhibition patterns were analyzed by global fit analysis using GraFit 7 software (Erithacus Software). See supplemental material for detailed information. The peptide substrate used was 1MLGPEGGRWGRPVGRRRRPVRVYP24 (denoted 1MLGP-RRR24), as it is the optimal in vitro substrate for hNaa50p (17). The C-terminal 17 amino acids of this peptide correspond to the sequence of ACTH, except that all Lys residues have been replaced with Arg to minimize the potential interference from Nε-acetylation. The positively charged Arg residues also facilitate peptide solubility.
NMR Spectroscopy-Resonances were assigned for hNaa50p samples (250 μM in 600 μl, 15N- and 13C-labeled) in 90% H2O, 10% D2O, 1.0 mM acetyl-CoA, 100 mM NaCl, and 50 mM NaH2PO4 (pH 7.4). A two-dimensional 1H-15N heteronuclear single-quantum correlation spectrum (28, 29) was collected at 600.13 MHz (1H) on a Bruker BioSpin AV600 spectrometer equipped with a superconducting actively shielded magnet. A 5-mm triple-resonance (1H, 13C, 15N) inverse cryogenic probe head with z-gradient coils and cold 1H and 13C preamplifiers was used. The sample temperature was kept at 310 K. The data were processed using Bruker BioSpin TopSpin 1.3 software. Sequential backbone assignment was achieved using the standard three-dimensional experiments (HNCA, HN(CO)CA, HNCO, HN(CA)CO, HNCACB, and CBCA(CO)NH) (30) on a Varian Inova 800 NMR spectrometer with a 5-mm triple-resonance (1H, 13C, 15N) probe at 310 K (Swedish NMR Centre, University of Gothenburg). The spectra were processed with NMRPipe (31) and analyzed using Cara (32). To further reduce side chain ambiguity, three-dimensional CC(CO)NH (33, 34) and several two-dimensional 1H-15N MUSIC (multiplicity selective in-phase coherence transfer) (35-37) experiments were conducted on the Bruker BioSpin AV600 spectrometer. The 1H and 13C chemical shifts were referenced to 4,4-dimethyl-4-silapentane-1-sulfonic acid as an internal standard. The 15N chemical shifts were calculated from the adjusted 1H frequency (38). In spectra without 4,4-dimethyl-4-silapentane-1-sulfonic acid, the signal of the solvent HDO was set to 4.68 ppm, as this was the measured shift value of HDO in the 4,4-dimethyl-4-silapentane-1-sulfonic acid sample. Furthermore, 4.68 ppm is the calculated shift for HDO at this temperature, pH, and ionic strength (39). All experiments with important acquisition parameters are listed in supplemental Table S1.
To determine the substrate binding order, two-dimensional 1H-15N SOFAST-heteronuclear multiple-quantum correlation (HMQC) experiments (40) were collected at 600 MHz on the following samples: 100 μM 15N-labeled hNaa50p, hNaa50p with 500 μM 1MLGP-RRR24, hNaa50p with 500 μM acetyl-CoA, hNaa50p with 500 μM CoA, hNaa50p with 500 μM CoA and 500 μM 1MLGP-RRR24, and hNaa50p with 500 μM CoA and 500 μM acetylated 1MLGP-RRR24. To study the interaction between the enzyme and both substrates simultaneously, HMQC spectra of ~100 μM 15N-labeled hNaa50p H112A mutant with 500 μM acetyl-CoA and mutant with 500 μM acetyl-CoA and 1 mM 1MLGP-RRR24 were collected. The temperature was lowered to 298 K to further slow down the enzyme reaction. All samples were dissolved in the buffer that was used for the resonance assignment, with the exception of the H112A mutant, for which the pH of the buffer was raised to 8.0. 15N T1 and T2 relaxation rates and 15N{1H} heteronuclear NOEs (41) at 298 K were measured using two-dimensional 1H-15N heteronuclear single-quantum correlation-based methods at 600 MHz. T1 and T2 values were obtained using three parallel series of 10 randomized delays in the range of 50-1850 ms for T1 and 0-123.2 ms for T2. Four interleaved heteronuclear NOE spectra were recorded. In the spectra with NOE, a proton saturation time of 3 s and a recycling delay of 7 s were used, whereas in the spectra without NOE, the recycling delay was 10 s. No saturation was included in the latter case. All spectra were processed using the NMRPipe software package. Peak heights were measured in all spectra of a relaxation series and fitted with a two-parameter single-exponential function to extract the relaxation rates, except for the heteronuclear NOE experiments, in which the result was determined by the intensity ratio from the spectrum with NOE and the reference spectrum without NOE, averaged for the four experiments.
Errors in T 1 and T 2 were obtained by Monte Carlo simulations (42), whereas the errors in heteronuclear NOE are the root mean squares of ratios derived from the four sets of measurements. The T 1 , T 2 , and heteronuclear NOE values were used to calculate the order parameter (S 2 ) for a completely anisotropic model in the TENSOR2 program (43).
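The relaxation-rate extraction described above amounts to a two-parameter exponential fit of peak heights followed by Monte Carlo resampling for the error estimate. A minimal sketch with synthetic data (the delay range mirrors the T1 series; intensities, noise level, and the "true" T1 are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fit I(t) = I0 * exp(-R * t) to peak heights, then estimate the
# uncertainty on T = 1/R by refitting noise-perturbed synthetic datasets.

def decay(t, I0, R):
    return I0 * np.exp(-R * t)

rng = np.random.default_rng(2)
delays = np.linspace(0.05, 1.85, 10)          # s, as in the T1 series above
true_I0, true_T1, noise = 100.0, 0.8, 1.0     # illustrative values
heights = decay(delays, true_I0, 1.0 / true_T1) + noise * rng.standard_normal(10)

popt, _ = curve_fit(decay, delays, heights, p0=(90.0, 1.0))

# Monte Carlo error: refit many datasets perturbed with the same noise level.
samples = []
for _ in range(200):
    fake = decay(delays, *popt) + noise * rng.standard_normal(10)
    p, _ = curve_fit(decay, delays, fake, p0=popt)
    samples.append(1.0 / p[1])                # T1 from each refit
T1_fit, T1_err = 1.0 / popt[1], float(np.std(samples))
print(T1_fit, "+/-", T1_err)
```

The same machinery applies to T2 with the shorter delay range; the heteronuclear NOE, as stated above, is a simple intensity ratio and needs no fit.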
RESULTS
Bisubstrate Kinetics-Because the hNaa50p reaction involves two substrates, acetyl-CoA and peptide, bisubstrate kinetic experiments were performed to determine kinetic parameters and to begin to distinguish between a mechanism requiring an acetyl-enzyme intermediate (ping-pong) and a direct transfer mechanism (ternary complex). A summary of the inhibition patterns and values of kinetic parameters is presented in Table 1. Initial velocity experiments were performed by varying the concentration of acetyl-CoA at several fixed concentrations of peptide (supplemental Fig. S1). The data were globally fit to the ternary complex model and the ping-pong model. Double-reciprocal plots of the initial velocity experiments generated an intersecting line pattern, indicating that hNaa50p follows a ternary complex mechanism (supplemental Fig. S1).
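The global-fit comparison can be sketched as follows: synthetic initial velocities generated from a sequential (ternary-complex) rate law, v = V[A][B]/(KiA·KB + KB[A] + KA[B] + [A][B]), are fit both to that model and to the ping-pong law, which lacks the constant term. All kinetic constants and concentrations below are invented for illustration; the paper's actual fits used GraFit 7:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: data from a sequential mechanism are described markedly better by
# the sequential rate law than by the ping-pong law.

def v_sequential(AB, V, KiA, KA, KB):
    A, B = AB
    return V * A * B / (KiA * KB + KB * A + KA * B + A * B)

def v_pingpong(AB, V, KA, KB):
    A, B = AB
    return V * A * B / (KB * A + KA * B + A * B)

A = np.repeat([1.0, 2.0, 5.0, 10.0, 20.0], 5)     # acetyl-CoA (uM), assumed
B = np.tile([5.0, 10.0, 20.0, 50.0, 100.0], 5)    # peptide (uM), assumed
rng = np.random.default_rng(3)
v = v_sequential((A, B), 10.0, 3.0, 2.0, 20.0)    # invented constants
v_obs = v * (1 + 0.02 * rng.standard_normal(v.size))

p_seq, _ = curve_fit(v_sequential, (A, B), v_obs, p0=(5.0, 1.0, 1.0, 10.0))
p_pp, _ = curve_fit(v_pingpong, (A, B), v_obs, p0=(5.0, 1.0, 10.0))

sse_seq = np.sum((v_obs - v_sequential((A, B), *p_seq)) ** 2)
sse_pp = np.sum((v_obs - v_pingpong((A, B), *p_pp)) ** 2)
print(sse_seq < sse_pp)   # sequential model should fit markedly better
```

In double-reciprocal coordinates the KiA·KB term is what makes the lines intersect rather than run parallel, which is the diagnostic pattern reported in supplemental Fig. S1.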
Product Inhibition-To further investigate the kinetic mechanism, product inhibition experiments were performed using several fixed concentrations of the reaction product CoA and the peptide substrate while varying the concentration of acetyl-CoA. Double-reciprocal plots of acetyl-CoA versus the initial velocities at different concentrations of CoA produced an intersecting line pattern consistent with CoA functioning as a competitive inhibitor for acetyl-CoA (supplemental Fig. S2A). KI for the competitive inhibition of CoA was calculated to be 2.27 ± 0.16 μM. To verify the product inhibition observations, the experiments were repeated with desulfo-CoA, a dead-end analog inhibitor of acetyl-CoA. Desulfo-CoA showed the same competitive inhibition pattern as CoA (supplemental Fig. S2B), with a KI of 67 ± 9 μM.
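The competitive-inhibition analysis corresponds to fitting v = V[S]/(Km(1 + [I]/Ki) + [S]) across several fixed inhibitor concentrations. A hedged sketch with invented constants (the simulated Ki is placed near the reported CoA value only to show that the fit recovers it):

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: global fit of a competitive inhibition law to synthetic rates
# measured at several fixed [I] while varying [S]. Not the authors' data.

def v_competitive(SI, V, Km, Ki):
    S, I = SI
    return V * S / (Km * (1 + I / Ki) + S)

S = np.tile([1.0, 2.0, 5.0, 10.0, 20.0], 4)    # varied substrate (uM), assumed
I = np.repeat([0.0, 2.0, 5.0, 10.0], 5)        # fixed inhibitor (uM), assumed
rng = np.random.default_rng(4)
v_true = v_competitive((S, I), 8.0, 4.0, 2.3)  # invented V, Km, Ki
v_obs = v_true * (1 + 0.02 * rng.standard_normal(v_true.size))

(V, Km, Ki), _ = curve_fit(v_competitive, (S, I), v_obs, p0=(5.0, 1.0, 1.0))
print(Ki)   # should land near the simulated 2.3 uM
```

The non-competitive case (CoA versus peptide, below) only changes the rate law, with the (1 + [I]/Ki) factor multiplying both Km and [S]; the fitting machinery is identical.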
Next, CoA was used as product inhibitor in experiments in which the concentration of acetyl-CoA was maintained at a constant level and the concentration of the peptide substrate was varied (supplemental Fig. S3). A non-competitive inhibition pattern was obtained (KI = 27.7 ± 1.7 μM), well in line with a ternary complex mechanism.
In a typical ternary complex mechanism, the reaction product, acetylated peptide, should also show an inhibitory effect on both substrates. To study the possible product inhibition patterns of acetylated peptides, experiments were performed using fixed levels of N-terminally acetylated peptide as the product inhibitor and acetyl-CoA or peptide as the variable substrate. Several experiments using saturating or subsaturating concentrations of substrates were performed. Surprisingly, even very high concentrations (up to 1 mM) of acetylated peptide failed to show any inhibition against any of the substrates (supplemental Fig. S4, A and B, and data not shown). This indicates that the acetylated peptide is an extremely weak product inhibitor. Unfortunately, a dead-end analog of the acetylated product peptide does not exist; thus, additional dead-end analog inhibition experiments could not be performed to further refine the kinetic mechanism of hNaa50p. Nevertheless, these data indicate that hNaa50p forms a very unstable ternary complex and suggest that hNaa50p does not follow a classical ternary complex mechanism.
NMR Resonance Assignment and Secondary Structures-140 of the 164 amino acid backbone NH correlations (~85%) could be assigned to the hNaa50p sequence using three-dimensional heteronuclear NMR experiments combined with amino acid-specific techniques (Biological Magnetic Resonance Data Bank accession number 18202) (supplemental Table S2). Missing residues are due to the protein size, leading to signal broadening and spectral overlap in some regions, and to conformational exchange in flexible parts of the protein, like the unstructured C terminus.
The 13C shift deviations from their random coil values (39, 44) were used to determine the secondary structural elements of hNaa50p (supplemental Fig. S5). The results are overall in agreement with available information on the secondary structures obtained from x-ray crystallography of hNaa50p in complex with acetyl-CoA (Protein Data Bank code 2OB0).
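The chemical-shift-index logic can be sketched in a few lines: subtract residue-specific random-coil 13Cα values from the observed shifts, and label sustained positive deviations as helix and negative ones as strand. The random-coil table (approximate literature-style values) and the observed shifts below are illustrative placeholders, not data from this study:

```python
# Sketch of secondary-structure classification from 13C-alpha secondary
# shifts. Values are approximate/illustrative, not the study's data.

RANDOM_COIL_CA = {"A": 52.5, "L": 55.1, "V": 62.2, "G": 45.1, "E": 56.6}  # ppm

def secondary_shift(residue, observed_ca):
    """Deviation of the observed CA shift from its random-coil value."""
    return observed_ca - RANDOM_COIL_CA[residue]

def classify(seq, shifts, threshold=0.7):
    """Label each residue H (helix), E (strand), or C (coil)."""
    labels = []
    for res, obs in zip(seq, shifts):
        d = secondary_shift(res, obs)
        labels.append("H" if d > threshold else "E" if d < -threshold else "C")
    return "".join(labels)

# A short hypothetical stretch: helical residues show +2..+3 ppm deviations,
# the final Val sits below its random-coil value (strand-like).
print(classify("ALEV", [55.0, 58.0, 59.0, 60.5]))  # -> "HHHE"
```

Real CSI implementations smooth over consecutive residues before assigning an element, which is why the text refers to regions rather than single residues.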
Determining Order of Substrate Binding by NMR-The changes in the backbone NH correlations were used to investigate the order of substrate binding. First, a HMQC experiment was performed on hNaa50p in the absence of substrates. Next, recordings were made in the presence of substrates and products in various combinations. The regions of the spectra presented in Fig. 1 comprise the majority of peaks that were consistently and strongly affected by substrate or product binding.
The spectral positions of Val-29, Thr-76, Leu-77, Ser-116, Phe-127, and Ile-142 show significant changes upon adding acetyl-CoA to hNaa50p (Fig. 1B), whereas most of the remaining NH signals in the spectrum were either unaffected or only slightly altered (Fig. 1, A and B). This spectral repositioning indicates that acetyl-CoA is capable of binding to hNaa50p in the absence of peptide substrate and that probably a conformational change occurs upon binding this substrate.
Because the product inhibition experiments (supplemental Fig. S2A) indicated that CoA is a competitive inhibitor of acetyl-CoA, NMR spectroscopy was used to determine whether or not CoA binds directly to hNaa50p in the absence of acetyl-CoA (Fig. 1C). The spectrum of hNaa50p in the presence of CoA shows several similarities to the spectrum of hNaa50p in complex with acetyl-CoA (Fig. 1B); both spectra differ from that of the substrate-free protein (Fig. 1A). The spectral positions of Thr-76, Leu-77, Phe-127, and Ile-142 change similarly to that observed in the spectrum with acetyl-CoA. On the other hand, the NH shifts of Val-29, Ile-115, and Ser-116 (all situated close to the acetyl group of acetyl-CoA) differ between the two spectra (Fig. 1, B and C). This indicates that CoA binds free hNaa50p at the same structural moiety and has a similar structural effect on hNaa50p as acetyl-CoA, but with a differing "induced fit" response.
We further used NMR spectroscopy to investigate whether the peptide substrate interacts with hNaa50p in the absence of acetyl-CoA. Evidence of such an interaction would support a random ternary complex mechanism (random Bi Bi). When hNaa50p was mixed with a saturating concentration of peptide, the only residue that appeared to be affected was Ile-142 (supplemental Fig. S6A), suggesting that the peptide substrate does not bind significantly to free enzyme to form a stable complex. Thus, a random ternary complex mechanism is unlikely. The lack of observable inhibition by acetylated peptide (supplemental Fig. S4) was further investigated. Acetylated 1MLGP-RRR24 was added to the hNaa50p-CoA complex, and a HMQC spectrum was recorded (supplemental Fig. S6, B and C). The spectrum did not differ from the one with hNaa50p and CoA (Fig. 1C). This result is consistent with a model in which acetylated 1MLGP-RRR24 does not form a stable interaction with the hNaa50p-CoA complex, concordant with the lack of product inhibition observed in the kinetic experiments (supplemental Fig. S4).
The different ternary complexes of hNaa50p and both substrates were investigated using the wild-type hNaa50p-CoA complex in the presence of peptide and the enzymatically inactive but natively folded hNaa50p H112A mutant (45). Asp-38, Tyr-73, Leu-77, Ala-81, and Tyr-110 showed small chemical shift changes.

(Figure 1 caption, fragment: C, overlay of the HMQC spectra of hNaa50p (black) and hNaa50p with CoA (green). For clarity, the complete peak identity is shown only in A, whereas peaks with significant chemical shift changes upon acetyl-CoA or CoA binding are shown in B and C. The side chain signals of Asn and Gln present in the expansions in the left panels (severely overlapped peaks) are not assigned. Negative signals (red) are either noise or side chain signals of Arg; there was no effect observed on these peaks. Peaks with unknown identity are indicated with red asterisks. Note that the extraction in the right panel in B includes one of the autoacetylated lysine side chains, the peak marked K-NH (not assigned).)
Protein Dynamics-To elucidate the flexible parts of hNaa50p, the dynamics of the protein in the presence of acetyl-CoA were analyzed by NMR spectroscopy. The experimental T1, T2, and heteronuclear NOE values were used to calculate the order parameters (S2) (Fig. 2) in TENSOR2 (43) utilizing the simple Lipari-Szabo approach (46, 47). The most flexible parts of the structure, defined by an S2 value below 0.8 (Fig. 2), are the regions comprising the first two helices in the molecule (residues 17-26 and 33-38) in addition to the stretches 73YIMTLG78 and 87GIGTKML93. According to the crystal structure of hNaa50p in complex with acetyl-CoA, the former stretch is close to the cysteamine moiety of acetyl-CoA and appears to be involved in enzyme catalysis. The direct involvement of Tyr-73 in catalysis is strongly supported by a recent structural (Protein Data Bank code 3TFY) and functional investigation (45). The latter stretch, which is close to the pantothenic acid moiety, is part of the hNaa50p variant of the conserved motif A responsible for acetyl-CoA binding, (Q/R)XXGX(G/A), common to the GCN5 superfamily (48, 49). Overall, our NMR results suggest that acetyl-CoA, in accordance with the kinetic data (supplemental Figs. S1-S3), is the first substrate to enter the active site, causing structural changes required for peptide substrate interaction, as indicated by cross-peak splittings and the dynamic data.
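The Lipari-Szabo model-free form behind the S² analysis can be written down directly: the spectral density J(ω) mixes overall tumbling (τm) with fast internal motion (τe), weighted by S². A small sketch with illustrative correlation times (TENSOR2 performs the actual anisotropic fitting; this is only the simple isotropic form):

```python
import numpy as np

# Sketch of the simple Lipari-Szabo "model-free" spectral density. Rigid
# residues (S2 -> 1) concentrate density at the overall tumbling time tau_m;
# internal motion on tau_e redistributes it. Times below are illustrative.

def J(omega, S2, tau_m, tau_e):
    """Model-free spectral density; omega in rad/s, times in seconds."""
    tau = 1.0 / (1.0 / tau_m + 1.0 / tau_e)
    return 0.4 * (S2 * tau_m / (1 + (omega * tau_m) ** 2)
                  + (1 - S2) * tau / (1 + (omega * tau) ** 2))

omega = 2 * np.pi * 600e6          # 1H frequency on a 600 MHz spectrometer
tau_m, tau_e = 10e-9, 50e-12       # assumed overall/internal correlation times

# A fully rigid site reduces to single-Lorentzian tumbling:
assert np.isclose(J(omega, 1.0, tau_m, tau_e),
                  0.4 * tau_m / (1 + (omega * tau_m) ** 2))
# Internal flexibility (lower S2) lowers the low-frequency density:
assert J(0.0, 0.6, tau_m, tau_e) < J(0.0, 1.0, tau_m, tau_e)
```

The measured T1, T2, and NOE values constrain J(ω) at a handful of frequencies, from which S² (and, where needed, τe) is fit per residue.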
DISCUSSION
In this study, we have presented enzyme kinetic and NMR data that point to an ordered ternary complex mechanism for the human Naa50p NAT activity. The initial velocity experiments, as well as the competitive product inhibition pattern observed for hNaa50p using CoA as product inhibitor and acetyl-CoA as variable substrate (supplemental Fig. S2A), rule out a ping-pong type of mechanism. Also, results from experiments using desulfo-CoA as a dead-end acetyl-CoA analog (supplemental Fig. S2B) are inconsistent with a ping-pong mechanism. The non-competitive inhibition pattern observed when CoA was tested against peptide as variable substrate and at fixed concentrations of acetyl-CoA also rules out a ping-pong mechanism (supplemental Fig. S3). In a ping-pong mechanism, one expects the acetyl group to be transferred to an enzyme residue, with the subsequent release of CoA. However, in the crystal structure of the hNaa50p-acetyl-CoA complex, the acetyl group remains covalently bound to CoA. Thus, the enzyme kinetic experiments strongly support a ternary complex mechanism, which is consistent with other GCN5-like acetyltransferases (23, 50-52). Although the results of our kinetic experiments cannot completely distinguish between a random and an ordered sequential mechanism, the data are inconsistent with both the steady-state and equilibrium ordered mechanisms (see Table 1 for a summary of inhibition patterns and kinetic parameters).
From the NMR studies, it appears that the residues that show the greatest chemical shift changes upon addition of acetyl-CoA/CoA and 1MLGP-RRR24 are all either involved in the binding of the substrates or part of flexible structures of hNaa50p undergoing conformational changes similar to an induced fit response. Leu-77, the amino acid closest to the cysteamine moiety of acetyl-CoA, shows the greatest difference, shifting 2 ppm upfield in NH and 0.8 ppm upfield in HN upon acetyl-CoA binding (Fig. 1B). The change in the NH chemical shift of Leu-77 is less pronounced upon binding of CoA (Fig. 1C). The preceding amino acid, Thr-76, shows a similar effect. Remarkably, the peak strength of Leu-77 increases significantly when 1MLGP-RRR24 is added to either the hNaa50p-CoA or H112A mutant-acetyl-CoA complex, indicating a more stable structural conformation in the region of Leu-77 upon binding of the second substrate. In the dynamic data, the S2 values for hNaa50p in complex with acetyl-CoA indicate that amino acid region 73-78 is more flexible than expected for a β-strand (Fig. 2). This flexibility might be necessary for the second substrate, the peptide, to enter and bind the active site. In accordance with this suggestion, it was observed that Arg-71, Leu-72, and probably Ala-81 change spectral positions when both substrates are added to the hNaa50p H112A mutant.
Val-29 and Ile-142 show very interesting chemical shift changes compared with all of the other amino acids. These residues are positioned opposite to each other in two loops that, according to recent results (45), are part of the hydrophobic pocket in which the N-terminal methionine of the peptide substrate binds (supplemental Fig. S9). Thus, it is likely that these loops alter positions relative to each other in an induced fit response upon acetyl-CoA binding.
It was possible to assign Val-29 only in the spectrum of the hNaa50p-acetyl-CoA complex at 310 K, suggesting that Val-29 undergoes intermediate-rate conformational exchange both in the absence of acetyl-CoA and at lower temperatures, leading to signal broadening. This α-helix has already previously proven to be of special interest: we demonstrated that Lys-34 and Lys-37 are autoacetylated and important for catalytic activity and specificity (17). Introducing conservative K34R and K37R mutations results in a 4-fold decreased Nα-acetyltransferase activity and specificity toward 1MLGP-RRR24. In addition, the dynamic data (Fig. 2) suggest that amino acid region 20-40 is more flexible than expected for a sequence that contains two α-helices, whereas the chemical shift indices indicate that the second helix is shorter than observed in the crystal structure (Protein Data Bank code 2OB0) (supplemental Fig. S5). This region is likely to be prone to conformational changes that are important for optimal catalytic activity and specificity. This notion is supported by the fact that Asp-38 is one of the amino acids that change spectral position upon addition of peptide to hNaa50p in complex with both CoA and acetyl-CoA.

FIGURE 2. Order parameters (S2) of hNaa50p determined from NMR relaxation data. The order parameters (S2) with their S.D. are plotted against the amino acid positions. Highly ordered residues have S2 > 0.8 (solid line), whereas residues below this border belong to more flexible parts of the protein. Missing residues in the plot are either unassigned or severely overlapping. The unstructured C terminus of the protein was excluded from the S2 calculation. The arrows and boxes above the plot indicate β-strands and α-helices, respectively. The secondary structures are according to the crystal structure of hNaa50p in complex with acetyl-CoA.

Enzyme Kinetic Mechanism of hNaa50p NAT
The NH signal of Ile-142, on the other side of the hydrophobic pocket, is split in two in the spectrum of free hNaa50p, indicating a slow conformational change between two states, where the strongest peak corresponds to the dominating conformation (Fig. 1A). Surprisingly, this splitting is not observed in the spectrum of hNaa50p in combination with 1MLGP-RRR24 (supplemental Fig. S6A). After binding of acetyl-CoA/CoA, the ratio between the two states changes, and the former dominating signal almost disappears; however, the chemical shifts of Ile-142 in the two spectra are not identical (Fig. 1, B and C). Because both Val-29 and Ile-142 appear to take part in peptide binding, the chemical shift differences of these two residues upon addition of acetyl-CoA or CoA indicate that binding of acetyl-CoA leads to the formation of a more rigid hydrophobic pocket for 1MLGP-RRR24 than CoA does. This idea is supported by the observation that Ile-142 changes spectral position again upon binding of the peptide to the mutant-acetyl-CoA complex. Also, the dynamic data and the 13C chemical shifts suggest that Ile-142 in the hNaa50p-acetyl-CoA complex is situated in a more rigid environment than is expected for a loop (Fig. 2).
No major chemical shift changes are observed in the sequence 84RRLGIG89, which is the hNaa50p variant of the acetyl-CoA-binding motif A, (Q/R)XXGX(G/A), common to the GCN5 superfamily (48, 49), upon binding of the first substrate. However, the dynamic data presented in Fig. 2 strongly indicate that amino acid stretch 87-93 is far more flexible than is expected for the start of a long α-helix. Again, this flexibility might be important for the binding of the peptide as the second substrate: both Gly-87 and Thr-90 show small chemical shift changes upon addition of peptide to the mutant-acetyl-CoA complex.
Interestingly, Tyr-124 (supplemental Fig. S8A) shows chemical shift changes in the spectrum of the hNaa50p-acetyl-CoA complex, which may indicate that Tyr-124 is involved in substrate binding. On the basis of the crystal structure (supplemental Fig. S8B), we hypothesize that a water molecule mediates contact between the hydroxyl group of Tyr-124 and the nitrogen atom of the cysteamine moiety of acetyl-CoA. The same water molecule may also be coordinated to the backbone of Leu-77, which presents significant chemical shift changes upon substrate binding (as discussed above and in Fig. 1B). The Tyr-124 chemical shift change is consistent with our previous observation that Tyr-124 is essential for both the Nα-acetylation activity and Nε-autoacetylation function of hNaa50p (17). Furthermore, the mutant shows chemical shift changes of the neighboring Phe-123.
Recent results (45) suggest that the residues likely to be involved in enzyme catalysis are Tyr-73 and His-112. Tyr-73 shows chemical shift changes upon binding of CoA to the protein, whereas this chemical shift is not affected by acetyl-CoA binding (data not shown). Interestingly, the spectrum of hNaa50p-CoA in complex with 1MLGP-RRR24 contains both peaks (supplemental Fig. S7A). On the other hand, the addition of peptide to the hNaa50p H112A mutant-acetyl-CoA complex changes the intensity distribution of the three Tyr-73 peaks that are found in the HMQC spectra of the mutant (supplemental Fig. S7B). Asn-108, Tyr-110, Leu-111 (supplemental Fig. S8A), Ile-115, and Ser-116 all present changes upon acetyl-CoA/CoA and peptide binding, whereas the catalytically important residue His-112 (supplemental Fig. S9) could not be assigned. Similar to Tyr-73, Tyr-110 shows chemical shift changes only upon binding of CoA, but not acetyl-CoA (data not shown), and the spectra (Tyr-110) of hNaa50p-CoA-1MLGP-RRR24 and hNaa50p H112A mutant-acetyl-CoA-1MLGP-RRR24 differ slightly from the spectra of both the hNaa50p-CoA and mutant-acetyl-CoA complexes with or without acetylated peptide (supplemental Fig. S7). In hNaa50p H112A, Asn-108 and Leu-111 are also affected. The latter appears to show conformational exchange between two states, in both the presence and absence of 1MLGP-RRR24. On the other hand, Ile-115 and Ser-116 are detected only in spectra of hNaa50p in complex with CoA (Fig. 1C). Combined, these data all support an ordered Bi Bi sequential mechanism in which the peptide is the second substrate to enter the active site. Significant NMR signal shifts indicating an interaction between acetylated peptide and enzyme were not identified.
Finally, two residues change spectral position only upon addition of peptide to the mutant-acetyl-CoA complex: Gly-104, which shows conformational exchange between two states, and Phe-129. Neither of these amino acids is in direct contact with the substrates, indicating that some long-range effects occur upon substrate binding.
Taken together, the NMR results indicate an essential difference in interactions between the protein and the substrate acetyl-CoA and the product CoA around the active site. Furthermore, the affinity between the enzyme and the peptide substrate is very small in the absence of acetyl-CoA. Overall, the NMR data suggest that hNaa50p utilizes an ordered mechanism of substrate binding, with acetyl-CoA binding to the enzyme prior to the peptide substrate. Given that the product and dead-end analog inhibition studies ruled out both the steady-state and equilibrium ordered mechanisms, the sum of the data suggests that hNaa50p utilizes a special type of Bi Bi sequential mechanism, possibly of the Theorell-Chance type as outlined in Fig. 3. This type of reaction mechanism has been reported for the human lysine acetyltransferase p300 (26) and, as such, differs from other protein acetyltransferases (25). A Theorell-Chance mechanism is supported by the following observations. (a) Only one chemical shift in the HMQC spectrum of hNaa50p slightly changes when the enzyme is incubated in the presence of peptide, which indicates that the affinity of the free enzyme for the peptide substrate in the absence of
acetyl-CoA is very low (supplemental Fig. S6A). At the same time, close to 20 NH correlations change when the peptide is added to the hNaa50p H112A mutant incubated with acetyl-CoA. Similar but more subtle effects are observed when the peptide is added to the wild-type hNaa50p-CoA complex. (b) Upon binding of either acetyl-CoA or CoA, the enzyme undergoes a dramatic conformational change, which is observable in different NMR experiments. It is likely that this change is required for the formation of the hNaa50p-acetyl-CoA-1 MLGP-RRR 24 complex. (c) There is no observable effect on the NMR spectrum when acetylated peptide product is added to the enzyme-CoA complex. This indicates that the affinity between the hNaa50p-CoA complex and acetylated peptide is significantly lower than would be expected for a classical ordered Bi Bi kinetic mechanism. This work represents the first mechanistic study of a eukaryotic NAT and provides important information that will potentially aid the generation of specific inhibitors of this group of GCN5 enzymes.
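As a point of reference (a textbook expression, not derived in this article), the steady-state initial-rate law for an ordered Bi Bi mechanism, which the Theorell-Chance mechanism shares in form, can be written in Cleland nomenclature with A denoting acetyl-CoA (the first substrate to bind) and B the peptide:

```latex
% Ordered Bi Bi initial-rate law (Cleland form); the Theorell-Chance
% mechanism is the limit in which the ternary complex does not accumulate.
% A = acetyl-CoA (binds first), B = peptide substrate.
v = \frac{V_{\max}\,[A][B]}{K_{ia}K_{b} + K_{b}[A] + K_{a}[B] + [A][B]}
```

Here K_ia is the dissociation constant of the enzyme-acetyl-CoA complex and K_a, K_b are the Michaelis constants; the ordered entry of A before B matches the NMR observation that peptide alone barely perturbs the free enzyme.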
MiR-613 inhibits proliferation and invasion and induces apoptosis of rheumatoid arthritis synovial fibroblasts by direct down-regulation of DKK1
Background This study aimed to investigate the effects of miR-613 on the proliferation, invasion and apoptosis of rheumatoid arthritis synovial fibroblasts (RASFs). Methods Synovial tissue samples were collected from 20 rheumatoid arthritis (RA) patients and 10 patients with joint trauma undergoing joint replacement surgery. The RASFs were isolated and cultured. MiR-613 and DKK1 expression in both synovial tissues and cells was detected using quantitative real-time PCR (qRT-PCR). Dual luciferase reporter gene assay was employed to evaluate the effect of miR-613 on the luciferase activity of DKK1. Then RASFs were transfected with miR-613 mimics, si-DKK1 and pcDNA-DKK1. Changes in cellular proliferation, invasion and apoptosis were detected through BrdU assay, Transwell invasion assay and flow cytometry analysis, respectively. Results MiR-613 was significantly down-regulated in RA tissues and RASFs compared to normal tissues and cells, whereas DKK1 was up-regulated in RA tissues and RASFs. Dual luciferase reporter gene assay showed that miR-613 could specifically bind to the 3′UTR of DKK1 and significantly inhibit the luciferase activity. Moreover, miR-613 significantly reduced the expression of DKK1. Overexpression of miR-613 or knockdown of DKK1 suppressed proliferation and invasion of RASFs, and induced RASF apoptosis. The reverse results were observed when DKK1 was up-regulated in miR-613-overexpressing RASFs. Conclusions MiR-613 can inhibit proliferation and invasion and induce apoptosis of RASFs by directly targeting DKK1 expression.
greatly increase the disease and social burden in RA patients, which necessitates early diagnosis and sufficient treatment in RA management [5].
DKK1 is a Wingless (Wnt) signaling pathway inhibitor, and it has been considered a master regulator of joint remodeling [6]. It was reported that joint erosions and inflammation were positively correlated with serum levels of DKK1 in RA [7]. Indeed, higher DKK1 levels were observed in the serum of patients carrying a genetic variant of DKK1, and these patients showed more progressive joint destruction [8], suggesting a fundamental role for DKK1 in the pathogenesis of RA. The loss of bone in murine models of arthritis can be restored by treatment with antibodies against DKK1 [6], indicating that DKK1 has promise as a novel therapeutic target. In this study, to determine the role of DKK1 in the pathogenesis of RA, we analyzed, for the first time to our knowledge, the expression of DKK1 in RA synovial fibroblasts (RASFs) compared with fibroblasts from patients with joint trauma undergoing joint replacement surgery (healthy controls). Knockdown of DKK1 significantly suppressed proliferation and invasion of RASFs, and induced RASF apoptosis.
Recently, it has been increasingly reported that microRNAs (miRNAs) play an important role in the pathogenesis of RA. It is widely known that miRNAs negatively regulate gene expression by binding to the 3′-untranslated region (3′-UTR) of their target mRNAs, resulting in degradation of mRNAs or down-regulation of the corresponding proteins. For example, Xu et al. reported that miR-650 can inhibit proliferation, migration and invasion of RASFs by targeting AKT2 [9]. Moreover, a recent study demonstrated that miR-126 affects RASF proliferation and apoptosis through the PI3K-AKT signaling pathway by targeting PIK3R2 [10]. In this study, we particularly focus on miR-613 and its role in the pathogenesis of RA. We thus speculated that miRNA-613 might modulate RA development by binding to its target mRNAs, which thereby leads to the loss of the functions mediated by the corresponding proteins. For the first time, we found that DKK1 down-regulation could significantly inhibit the proliferation and invasion and induce apoptosis of RASFs. Moreover, miR-613 overexpression also suppressed proliferation and invasion and induced apoptosis of RASFs by directly targeting DKK1.
Tissue specimen collection
Synovial tissue samples from 20 RA patients (12 male and 8 female, 33-67 years old, mean 51) were obtained during joint surgery at Cangzhou Central Hospital from 2016 to 2017. All RA patients fulfilled the American College of Rheumatology criteria for classification of disease [11]. Healthy control specimens (6 male and 4 female, 31-65 years old, mean 48) were obtained from patients with joint trauma undergoing joint replacement surgery at Cangzhou Central Hospital from 2016 to 2017. Healthy control specimens were free of other diseases such as autoimmune disease, infectious disease and cancer. This study was approved by the Ethical Committee of Cangzhou Central Hospital (2016021621) and complied with the guidelines and principles of the Declaration of Helsinki. All participants signed written informed consent.
Cell line and cell culture
Human RASFs were isolated and cultured as previously described [12]. Synovial tissues were taken intraoperatively and immediately cut into pieces under sterile conditions. The minced synovial tissues were digested with 2.5 g/L trypsin at 37°C for 2 h. The digested tissues were then centrifuged to obtain RASFs. RASFs at passages 3-8 were used for experiments. RASFs were maintained in Dulbecco's modified Eagle's medium (DMEM, Invitrogen, USA) supplemented with 10% fetal bovine serum (FBS, Gibco, USA) and penicillin and streptomycin (P/S, Gibco) at 37°C in 5% CO2. Cells were grown in 6-well plates to 75% confluence 24 h before transfection.
Transient transfection
The miR-613 mimic, miR-negative control (miR-NC), siRNA for DKK1 (si-DKK1, 5′-TGATAGCCCTGTACAATGCTGCT-3′) and siRNA-negative control (si-NC) were synthesized and purified by GenePharma (Shanghai, China). The DKK1-overexpression plasmid was generated by inserting DKK1 cDNA into a pcDNA3.1 vector. The sequence of this plasmid was confirmed by GenePharma. The miR-613 mimic, miR-NC, si-DKK1 and DKK1-overexpression plasmid were transfected into the RASFs according to the instructions of the Lipofectamine RNAiMAX transfection kit (Invitrogen, USA). The detailed procedures were as follows: (1) one day before transfection, the RASFs were plated into 6-well plates at a density of 1 × 10^6 cells per well; (2) on the day of transfection, the Lipofectamine RNAiMAX transfection reagent was mixed evenly with Opti-MEM culture medium and the synthesized miR-613 mimic, miR-NC or si-DKK1, and the mixtures were incubated for 5-10 min at room temperature before being added to the cell culture medium; (3) 48 h after transfection, cells were digested with trypsin, rinsed once with PBS, and preserved for further experiments.
ELISA-BrdU assay
To investigate the effects of the miR-613 mimic and si-DKK1 on the proliferation of RASFs, cell proliferation was detected with the Cell Proliferation ELISA-BrdU Kit (Roche Applied Science, Mannheim, Germany) following the manufacturer's instructions. Briefly, 5 × 10^3 cells were seeded in a 96-well plate (Corning, USA) and allowed to grow overnight in complete medium. The medium was then removed and the cells were transfected with the miR-613 mimic or miR-NC for 48 h at 37°C. After the 48 h incubation, cells were additionally treated with BrdU labeling solution for a further 16 h. After that, the culture medium was removed, cells were fixed and DNA was denatured. Cells were incubated with anti-BrdU-POD solution for 90 min, and unbound antibody conjugates were then removed by washing three times. After incubation with TMB substrate for 15 min, absorbance at 405 and 490 nm was measured to quantify the immune complexes.
RNA extraction and real-time quantitative PCR
Total RNA was extracted from cell lines and clinical samples using TRIzol Reagent (Invitrogen, Carlsbad, CA, USA) according to the operating instructions. RNA was quantified using UV absorbances at 260 and 280 nm (A260/280). The RNA was then reverse-transcribed into cDNA using a reverse transcription system (Thermo Scientific, CA, USA). The level of miR-613 was detected on the ABI PRISM 7500 Sequence Detection System (ABI) using TaqMan MicroRNA assay kits (Applied Biosystems, California, USA). U6 small nuclear RNA (snRNA) was used as the internal control. The mRNA expression levels of DKK1, MMP-2, and MMP-9 were analyzed by SYBR Green and normalized to GAPDH. Primer specificity was verified from the dissociation curve, and the 2^-ΔΔCt method (Ct, cycle threshold) was used to calculate relative gene expression levels. Primer sequences are shown in Table 1.
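The 2^-ΔΔCt calculation used for relative quantification can be sketched as follows. This is a minimal illustration of the standard Livak method, not the authors' analysis script, and the Ct values below are invented:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Livak 2^-ddCt method: fold change of a target gene, normalized to a
    reference gene (e.g. U6 or GAPDH) and expressed relative to a
    calibrator sample (e.g. control tissue)."""
    d_ct_sample = ct_target - ct_ref              # dCt of the sample
    d_ct_calibrator = ct_target_cal - ct_ref_cal  # dCt of the calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2 ** (-dd_ct)

# Hypothetical Ct values: the target amplifies 3 cycles earlier in the
# sample than in the calibrator at equal reference-gene Ct,
# i.e. 2^3 = 8-fold up-regulation.
fold = relative_expression(25.0, 20.0, 28.0, 20.0)
```

A later Ct in the sample than in the calibrator gives a fold change below 1, i.e. down-regulation, as reported here for miR-613.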
Transwell invasion assay
Cell invasion was determined using a Transwell Matrigel invasion assay with Transwell chambers (8-μm pore size; Millipore) precoated with Matrigel (BD Biosciences, Franklin Lakes, NJ) containing extracellular matrix proteins. In brief, 1 × 10^5 cells in 100 μl of DMEM containing 1% FBS were seeded in the upper chamber, and 600 μl of DMEM containing 10% FBS was added to the lower chamber. After 6 h of incubation at 37°C in a 5% CO2 atmosphere, cells remaining in the upper chamber were removed with cotton swabs, and penetrating cells were fixed in methanol and stained with 0.1% crystal violet. At least five fields per membrane were imaged. The membranes were then rinsed with 30% glacial acetic acid, and the wash solution was examined at 540 nm to quantify the invading cells. All assays were independently repeated three times.
Western blot analysis
RASFs were washed twice in cold PBS, and then lysed in RIPA lysis buffer (Beyotime Institute of Biotechnology Jiangsu, China). The protein concentration of cell lysates was
Measurement of MMP-2 and MMP-9 levels
Enzyme-linked immunosorbent assay (ELISA) kits (USCN Life Science, Wuhan, China) were used to determine the levels of MMP-2 and -9 in the culture supernatants based on the manufacturer's instructions.
Luciferase reporter assay
RASFs were seeded in 24-well plates and incubated for 24 h before transfection. The pGL3-DKK1-3′UTR wild-type or mutant plasmid was cotransfected with the miR-613 mimic or miR-NC, and pRL-SV40 Renilla plasmid (Promega, USA) into RASFs. After transfection for 48 h, both firefly and Renilla luciferase activities were detected by a dual-luciferase reporter system (Promega, USA) following the manufacturer's protocols. All experiments were performed in triplicate.
Statistical analysis
All statistical analyses were performed using GraphPad Prism 5.0 (GraphPad Software, Inc., USA). Data from each group were expressed as mean ± standard error of the mean (S.E.M.) and statistically analyzed by Student's t test. Differences were considered statistically significant at a p value of < 0.05.
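As an illustration of the two-sample Student's t test applied here, a pure-Python sketch of the pooled-variance formula follows. The authors used GraphPad Prism; the function and the data below are invented for illustration only:

```python
import math

def student_t(a, b):
    """Two-sample Student's t statistic with pooled variance, plus the
    degrees of freedom."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    # unbiased sample variances
    s1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    s2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    # pooled variance assumes equal variances in the two groups
    sp2 = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# invented measurements for two groups (e.g. miR-NC vs. miR-613 mimic)
t_stat, dof = student_t([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

The resulting t statistic is compared against the t distribution with the returned degrees of freedom to obtain the p value (p < 0.05 taken as significant in this study).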
Results
DKK1 is up-regulated and miR-613 is down-regulated in synovial tissues and RASFs
It has been reported that the level of DKK1 was significantly up-regulated in synovial fibroblasts from RA patients [13]. However, the role of DKK1 in synovial fibroblasts remains unknown. In this study, we also found that the expression of DKK1 in synovial tissues from RA patients was significantly increased in comparison to the adjacent normal tissues (Fig. 1a). Next, we further confirmed the enhanced expression of DKK1 in RASFs (Fig. 1b).
Knockdown of DKK1 significantly inhibited cell proliferation and invasion and promoted apoptosis in RASFs
To study the effects of DKK1 on RASFs, cell proliferation, invasion and apoptosis were estimated in RASFs after transfection with si-NC or si-DKK1 for 48 h. Western blot and qRT-PCR analysis showed that DKK1 expression was significantly decreased in RASFs after transfection with si-DKK1 for 48 h compared to the si-NC group (Fig. 2a). The BrdU-ELISA assay indicated that knockdown of DKK1 could significantly suppress the proliferation of RASFs (Fig. 2b). Furthermore, the Transwell assays suggested that decreased DKK1 expression inhibited the invasive ability of RASFs (Fig. 2c). Finally, knockdown of DKK1 promoted apoptosis of RASFs (Fig. 2d). For further study, the online database microRNA.org predicted that miR-613 might directly target DKK1. Our data confirmed that the miR-613 level in synovial tissues from RA patients was markedly lower than that in the adjacent normal tissues (Fig. 3a). In support of this result, we also demonstrated that the miR-613 level was significantly decreased in RASFs, as shown in Fig. 3b. To determine whether DKK1 expression was associated with miR-613 in synovial tissues from RA patients, we performed Pearson's correlation analysis, which revealed a significant inverse correlation between DKK1 and miR-613 (Fig. 3c).
According to the online database microRNA.org, we identified a miR-613 binding site in the 3′UTR of DKK1 (Fig. 3d). To validate whether DKK1 is a direct target of miR-613, luciferase plasmids containing the potential DKK1 miR-613 binding sites (WT) or a mutated DKK1 3′UTR were constructed (Fig. 3d). Overexpression of miR-613 inhibited WT DKK1 reporter activity but not the activity of the mutated reporter construct in RASFs, demonstrating that miR-613 could specifically target the DKK1 3′UTR by binding to the seed sequence (Fig. 3e). Next, we confirmed that introduction of miR-613 could significantly decrease the expression of DKK1 (Fig. 3f ). These data indicated that miR-613 directly regulated DKK1 expression in RASFs through 3′-UTR sequence binding.
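The kind of seed-site prediction performed by tools like microRNA.org can be illustrated with a short sketch: a canonical site is the reverse complement of the miRNA seed (nucleotides 2-8) occurring in the 3′UTR. The sequences below are invented, not the real miR-613 or DKK1 sequences:

```python
def seed_match_sites(mirna, utr):
    """Find canonical 7-mer seed-match sites: the reverse complement of
    the miRNA seed (nucleotides 2-8, 5'->3') searched in the 3'UTR
    (RNA alphabet). Returns (site_sequence, list_of_0_based_positions)."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:8]  # nucleotides 2-8 of the miRNA
    site = "".join(comp[b] for b in reversed(seed))  # reverse complement
    positions, start = [], utr.find(site)
    while start != -1:
        positions.append(start)
        start = utr.find(site, start + 1)
    return site, positions

# invented sequences -- NOT the real miR-613 or DKK1 3'UTR
mirna = "AUGGCAUCGGAAUCCGAAUU"
utr = "CCCGAUGCCAAAA"
site, hits = seed_match_sites(mirna, utr)
```

Mutating the predicted site in the reporter construct, as done for the mutant DKK1 3′UTR plasmid, abolishes this match and hence the repression.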
Effects of miR-613 overexpression on cell proliferation, cell cycle and apoptosis in RASFs
Next, we evaluated whether miR-613 could affect cell proliferation, cell cycle progression and apoptosis of RASFs. After transfection with the miR-613 mimic or miR-NC, we found that the level of miR-613 in RASFs was significantly increased in the miR-613 mimic group compared to the miR-NC group (Fig. 4a). BrdU-ELISA results demonstrated that up-regulation of miR-613 had an anti-proliferative effect on RASFs (Fig. 4b).
Since the miR-613 mimic significantly suppressed RASF proliferation, we speculated that introduction of miR-613 could arrest the cell cycle of RASFs. Our flow cytometry results demonstrated that overexpression of miR-613 dramatically increased the percentage of cells in G1 phase in RASFs compared with cells transfected with miR-NC (Fig. 4c). Therefore, overexpression of miR-613 might inhibit proliferation of RASFs by hindering the transition of the cell cycle from G1 phase to S phase.
In order to further study whether the miR-613 mimic exerted its anti-proliferative effect through induction of cell apoptosis, the total apoptosis rates of RASF cells were also detected by flow cytometry analysis. We confirmed that the apoptotic rate of RASFs was higher in the miR-613 mimic group than in the miR-NC group (Fig. 4d).
To confirm these effects at the molecular level, related proteins including PCNA (a proliferation marker), p21 protein (a cyclin-dependent kinase inhibitor), CDK4 (a cyclin-dependent kinase), cyclin D1 (a cell cycle protein), and Bax (a pro-apoptotic protein) were determined by Western blot analysis. We found that the expression of PCNA, CDK4 and cyclin D1 displayed obvious down-regulation in the miR-613 mimic group compared to the miR-NC group (Fig. 4e). However, expression of p21 and Bax proteins was significantly up-regulated by overexpression of miR-613 (Fig. 4e). These findings suggested that introduction of miR-613 might be associated with down-regulation of PCNA, CDK4 and cyclin D1, and up-regulation of Bax and p21 in RASFs.
Introduction of miR-613 inhibited invasion and expression of related molecules in RASFs
To determine the function of miR-613 in invasion of RASFs, we evaluated the invasive capacities of RASFs transfected with the miR-613 mimic by Transwell invasion assays. The data showed that the invasion capability of RASFs was significantly inhibited in the miR-613 mimic group compared to the miR-NC group (Fig. 5a). These results showed that miR-613 might play a critical role in suppressing invasion of RASFs.
MMPs may be responsible for the impaired invasion of miR-613 mimic-transfected cells. To test this hypothesis, we used an ELISA kit to detect the levels of MMP-2 and MMP-9 in the culture supernatants. Our data indicated that secretion of MMP-2 and -9 into the culture supernatants was evidently decreased in miR-613-overexpressing RASFs (Fig. 5b). Additionally, we detected the expression of MMP-2 and -9 at the mRNA level by RT-PCR assay. After transfection with the miR-613 mimic, MMP-2 and -9 expression at the mRNA and protein levels was distinctly reduced (Fig. 5c, d). Our results suggested that down-regulation of MMP-2 and -9 might be one of the mechanisms contributing to the inhibitory effect of the miR-613 mimic on the invasive capacities of RASFs. Consequently, miR-613 overexpression had similar effects to DKK1 silencing on RASFs.
Up-regulation of DKK1 partially blocked the effect of miR-613 overexpression on RASFs
To confirm whether miR-613 affected RASFs by directly down-regulating DKK1, we cotransfected RASFs with the miR-613 mimic and pcDNA-DKK1. Overexpression of DKK1 significantly restored the DKK1 expression inhibited by the miR-613 mimic (Fig. 6a). Results from the BrdU-ELISA assay showed that introduction of DKK1 significantly increased cell proliferation of RASFs transfected with the miR-613 mimic (Fig. 6b). Furthermore, the Transwell assay showed that increased DKK1 expression could reverse the inhibitory effect of the miR-613 mimic on invasion of RASFs (Fig. 6c). Moreover, overexpression of DKK1 inhibited the apoptosis of RASFs induced by miR-613 overexpression (Fig. 6d). Therefore, the effects of the miR-613 mimic were partially reversed by DKK1 overexpression. Our data clearly showed that miR-613 inhibited cell proliferation and invasion and promoted apoptosis in RASFs by directly down-regulating DKK1 expression.
Discussion
Previous studies have reported that DKK1 directly impairs osteoblast differentiation and indirectly enhances bone destruction by promoting RANKL-induced osteoclastogenesis [6,14]. In established RA, expression of DKK1 within the synovium localizes to synovial fibroblasts ex vivo [6] and is tightly regulated by glucocorticoid metabolism in vitro [15], supporting a role for Wnt signaling inhibition in RA bone destruction. Juarez et al. demonstrated that differential expression of DKK1 was detected in resolving and early RA [13], which suggested that increased DKK1 expression could be a key event in progression to RA and occurs early in the disease process. They also confirmed that Wnt signaling inhibition by DKK1 may therefore be an as yet undefined pathway through which synovial fibroblasts influence bone destruction in early RA [13]. In this study, the expression of DKK1 was significantly increased in RA tissues and cells. Moreover, we found that silencing DKK1 could inhibit RASF proliferation and invasion and promote RASF apoptosis. Altogether, these results suggested that DKK1 had a critical role in RA pathogenesis. Previous reports have demonstrated that miRNAs play a multifunctional role in several biological processes such as cell proliferation, differentiation, apoptosis, migration, and invasion during inflammation and abnormal innate immune responses [16][17][18], which makes them potential targets in the treatment of numerous autoimmune diseases. Increasing evidence has indicated that miRNAs are closely associated with the pathological progression of RA [19]. Previous reports have demonstrated that the levels of miR-522 [20], miR-140-5p [21], miR-20a [22], miR-338-5p [23], miR-155 [24], miR-125b [25], and miR-29a [26] were altered in synovial fibroblasts and FLSs from RA patients, indicating their roles as RA agonists or suppressors in RA pathogenesis.
Therefore, alterations of miRNA expression could serve as markers and shed light on novel therapeutic strategies in the treatment of RA [19]. In this study, miR-613 was found to be decreased in RASFs, suggesting that miR-613 might be involved in the regulation of RA pathogenesis.
Generally, miRNAs mediate multiple biological processes through different target sites and regulate the expression of their downstream mRNA targets [27][28][29]. Previous studies have identified many target mRNAs of miR-613, including protein tyrosine phosphatase non-receptor type 9 (PTPN9), the sex-determining region Y-box 9 (SOX9), and sphingosine kinase 1 (SphK1) [30][31][32]. For example, miR-613 suppressed laryngeal squamous cell carcinoma progression by regulating PDK1 [33], impeded the proliferation and invasion of glioma cells by targeting cyclin-dependent kinase 14 [34], functioned as a tumor suppressor in hepatocellular carcinoma by targeting YWHAZ [35], inhibited cell migration and invasion by down-regulating Daam1 in triple-negative breast cancer [36], and suppressed CXCR4-mediated osteosarcoma growth and pulmonary metastasis [37]. Using the microRNA.org database, DKK1 was predicted to be a potential target of miR-613. We conducted a luciferase reporter assay to test whether miR-613 binds to the 3′UTR of the DKK1 gene. Our results showed that DKK1 is a target of miR-613. We also showed that DKK1 mRNA and protein levels in RASFs transfected with the miR-613 mimic were lowered compared to those in control cells. According to our results, overexpression of miR-613 significantly inhibited RASF proliferation and invasion and promoted apoptosis of RASFs. Moreover, overexpression of DKK1 significantly reversed the effects of the miR-613 mimic on RASFs. It was reported that excess RASFs promote joint destruction [38]. Recent evidence suggests that RASFs secrete excess matrix-destructive enzymes as well as proinflammatory cytokines/chemokines, which then promote joint destruction [39,40]. Taken together, our findings indicated that miR-613 plays a protective role in RA by promoting the death of excess RASFs. The elimination of excess RASFs could lead to alleviation of RA progression.
Conclusions
In conclusion, we assessed and reported, for the first time, the effect of miR-613 and its target gene DKK1 on RASFs and elucidated the possible pathological progression of RA. Although we consider miR-613 and DKK1 to have great potential to serve as effective therapeutic targets for RA in clinical treatment, a lot more work must first be done. Our study contributed to the understanding of the RA pathogenesis mechanism, yet the full mechanism that involves various pathways needs further investigation.
Genomic Analysis Confirms Population Structure and Identifies Inter-Lineage Hybrids in Aegilops tauschii
Aegilops tauschii, the D-genome donor of bread wheat, Triticum aestivum, is a storehouse of genetic diversity, and an important resource for future wheat improvement. Genomic and population analysis of 549 Ae. tauschii and 103 wheat accessions was performed by using 13,135 high quality SNPs. Population structure, principal component, and cluster analysis confirmed the differentiation of Ae. tauschii into two lineages; lineage 1 (L1) and lineage 2 (L2), the latter being the wheat D-genome donor. Lineage L1 contributes only 2.7% of the total introgression from Ae. tauschii for a set of United States winter wheat lines, confirming the great amount of untapped genetic diversity in L1. Lineage L2 accessions had overall greater allelic diversity and wheat accessions had the least allelic diversity. Both lineages also showed intra-lineage differentiation with L1 being driven by longitudinal gradient and L2 differentiated by altitude. There has previously been little reported on natural hybridization between L1 and L2. We found nine putative inter-lineage hybrids in the population structure analysis, each containing numerous lineage-specific private alleles from both lineages. One hybrid was confirmed as a recombinant inbred between the two lineages, likely artificially post collection. Of the remaining eight putative hybrids, a group of seven from Georgia carry 713 SNPs with private alleles, which points to the possibility of a novel L1–L2 hybrid lineage. To facilitate the use of Ae. tauschii in wheat improvement, a MiniCore consisting of 29 L1 and 11 L2 accessions, has been developed based on genotypic, phenotypic and geographical data. MiniCore reduces the collection size by over 10-fold and captures 84% of the total allelic diversity in the whole collection.
INTRODUCTION
World population is projected to reach 9.7 billion by 2050, increasing pressure on the food system and challenging food security (FAO et al., 2014). Wheat, among other major food crops, is currently at an estimated genetic gain of 1% per year. This must more than double to achieve the estimated 2.4% per year needed to meet the projected production levels that will provide enough calories and protein to billions around the world in the coming decades (Ray et al., 2013). However, the limited genetic diversity present in elite wheat cultivars poses a serious threat to this goal (Akhunov et al., 2010). To mitigate this genetic diversity problem, the use of crop wild relatives and progenitors, such as goat grass (Ae. tauschii Coss.), presents a promising solution and the best available resource.
Aegilops tauschii originated as the result of hybridization between diploid A and B genome progenitors (Marcussen et al., 2014), and became the diploid D-genome donor of bread wheat (Triticum aestivum L.). Ae. tauschii is native throughout the Caspian Sea region and into central Asia and China. Natural hybridization of tetraploid wheat and Ae. tauschii about 8,000-10,000 years ago (Renfrew, 1973; Bell, 1987) led to the formation of hexaploid wheat, with Ae. tauschii contributing many genes that expanded the climatic adaptation and improved the bread-making quality (Kihara, 1944; McFadden and Sears, 1946; Yamashita et al., 1957; Kerber and Tipples, 1969; Lagudah et al., 1991). However, during bread wheat evolution, only a handful of Ae. tauschii accessions from a small region hybridized with wheat, leading to the narrow genetic base of the wheat D genome (Lagudah et al., 1991). Multiple studies have corroborated this, showing that the D-genome of wheat has the least genetic diversity compared to its counterparts, the A and B genomes (Kam-Morgan et al., 1989; Lubbers et al., 1991; Akhunov et al., 2010). However, much greater genetic diversity is present in this wild donor of the D-genome (Naghavi et al., 2009).
With a pressing need to develop better-yielding wheat varieties to feed a growing population and adapt to a changing climate, Ae. tauschii is a valuable source of novel alleles for wheat improvement (Kihara, 1944; Lagudah et al., 1991). Aegilops tauschii harbors considerable genetic diversity for disease resistance and abiotic stress tolerance relative to the wheat D-genome, and is split into two subspecies known as Ae. tauschii ssp. tauschii (Lineage 1; L1) and ssp. strangulata (Lineage 2; L2). The L2 ssp. strangulata is known to be the D-genome donor (Jaaska, 1978; Nakai, 1979; Nishikawa et al., 1980; Jaaska, 1981). Ssp. tauschii is further split into three varieties: typica, anathera, and meyeri, whereas ssp. strangulata is monotypic. Phenotypic classification of these subspecies, and especially of the varieties, is challenging; therefore, phenotypic data often correlate poorly with genetic classification (Lubbers et al., 1991; Dvorak et al., 1998).
Genetic diversity present in Ae. tauschii has been utilized via synthetic hybridization of tetraploid wheat and wild Ae. tauschii (McFadden and Sears, 1945; Kihara and Lilienfeld, 1949), and introgressed into bread wheat through direct crossing (Gill and Raupp, 1987). However, considerable untapped genetic diversity remains in this species. In this study, we characterized the full Ae. tauschii collection held at the Wheat Genetics Resource Center at Kansas State University in Manhattan, KS, United States, with the main objectives of genetically characterizing the collection, studying the population structure within Ae. tauschii, and developing a genetically diverse MiniCore set to facilitate the use of Ae. tauschii for wheat improvement. In conclusion, we present a strategy to utilize the genetic diversity of Ae. tauschii to broaden the genetic base of the D-genome of hexaploid wheat.
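The notion of a core set "capturing" a fraction of the total allelic diversity can be made concrete with a toy calculation. This is a simplified sketch with made-up genotypes, not the WGRC MiniCore pipeline:

```python
def allele_coverage(genotypes, subset):
    """Fraction of distinct alleles per SNP in the full panel that are
    also observed in a chosen subset of accessions (a toy 'core set'
    metric). genotypes maps accession name -> list of alleles, one per
    SNP position; subset is a list of accession names."""
    n_snps = len(next(iter(genotypes.values())))
    total = captured = 0
    for i in range(n_snps):
        panel_alleles = {alleles[i] for alleles in genotypes.values()}
        core_alleles = {genotypes[acc][i] for acc in subset}
        total += len(panel_alleles)
        captured += len(panel_alleles & core_alleles)
    return captured / total

# invented mini-panel: three accessions scored at two SNPs
panel = {"acc1": ["A", "G"], "acc2": ["T", "G"], "acc3": ["A", "C"]}
coverage = allele_coverage(panel, ["acc1", "acc2"])  # core misses the C allele
```

Selecting a subset that maximizes such a coverage score over thousands of SNPs is the basic idea behind reducing 569 accessions to a 40-accession MiniCore while retaining most alleles.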
Plant Material
This study included 569 Ae. tauschii accessions from the Wheat Genetics Resource Center (WGRC) at Kansas State University (K-State) in Manhattan, KS, United States. Most of the Ae. tauschii accessions were collected in the 1950s and 1960s from 15 different countries by several explorers; however, a more recent exploration was carried out by WGRC scientists in 2012 in Azerbaijan to fill geographical gaps in the collection and sample more genetic diversity (Supplementary Figure S1 and Supplementary Table S1). Passport data, including the longitude and latitude of the collection site, were available for most of the accessions and were plotted on a map to visualize the distribution (Figure 1). To study the relationship between Ae. tauschii and hexaploid wheat (T. aestivum L.), 103 wheat varieties from a panel of diverse United States winter wheat accessions were also included in the study (Grogan et al., 2016) (Supplementary Table S1).
Plant Tissue Collection and Genotyping-by-Sequencing
A single plant per accession was grown in a 2 × 2 pot in the greenhouse. About five centimeters of leaf tissue from single 2-3-week-old seedlings was collected in 96-well tissue collection boxes and stored at −80°C until DNA extraction. Tissues were lyophilized in the lab for 24-36 h, followed by genomic DNA extraction using the Qiagen BioSprint 96 DNA Plant Kit (QIAGEN, Hilden, Germany). Extracted DNA was quantified with the Quant-iT PicoGreen dsDNA Assay Kit (Thermo Fisher Scientific, Waltham, MA, United States). One random well per plate was left blank for quality control and library integrity. DNA samples were genotyped using genotyping-by-sequencing (GBS) (Poland et al., 2012a). GBS libraries were prepared at 96-plex using two restriction enzymes, a rare cutter PstI (5′-CTGCAG-3′) and a frequent cutter MspI (5′-CCGG-3′), with a common reverse adapter ligated. The full protocol is available at the KSU Wheat Genetics website 1 . GBS libraries were sequenced on 10 lanes of the Illumina HiSeq2000 (Illumina, San Diego, CA, United States) platform at the University of Missouri (UMC; Columbia, Missouri) or the McGill University-Génome Québec Innovation Centre (Montreal, Canada).
SNP Genotyping and Data Filtering
Single nucleotide polymorphism (SNP) discovery and genotyping were performed in a single step with the Tassel 5 GBSv2 pipeline (Glaubitz et al., 2014), using the Ae. tauschii genome assembly (Aet v4.0; NCBI BioProject PRJNA341983) as the reference. Tassel was run with the Bowtie2 aligner for tag mapping in a Linux HPC environment via shell scripts. Genotypic data were processed in the R statistical programming language (R Core Team, 2015) using custom R scripts. Population-level SNP filtering was performed, and SNPs with minor allele frequency (MAF) less than 0.01 and missing data more than 20% were removed. Further, SNPs with heterozygosity greater than 5% were removed because Ae. tauschii accessions are highly inbred. Fisher's exact test at alpha 0.001 with Bonferroni correction was performed to determine if the putative SNPs were from allelic tags as described in Poland et al. (2012b). Individual samples with more than 80% missing SNP calls and more than 5% heterozygosity were also removed. Retained markers and samples were used for further analyses.

FIGURE 1 | Geographical distribution of Aegilops tauschii accessions. Red circles represent Lineage 1 (L1), blue triangles Lineage 2 (L2), and gold plus signs (+) putative hybrids. Green circles and triangles represent MiniCore accessions, with shape indicating lineage.
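The population-level filtering thresholds above can be expressed compactly. The following is an illustrative Python sketch, not the study's actual R code; the 0/1/2 dosage coding with -1 for missing, and the function name, are our own assumptions.

```python
import numpy as np

def filter_snps(geno, max_missing=0.20, min_maf=0.01, max_het=0.05):
    """Filter a SNP x sample matrix coded 0/2 (homozygous), 1 (het), -1 (missing).

    Returns a boolean mask of SNPs passing the population-level thresholds
    described in the text: MAF >= 0.01, <= 20% missing, <= 5% heterozygosity.
    """
    geno = np.asarray(geno)
    n_samples = geno.shape[1]
    missing = geno == -1
    called = ~missing
    n_called = called.sum(axis=1)

    miss_frac = missing.sum(axis=1) / n_samples
    het_frac = np.where(n_called > 0,
                        (geno == 1).sum(axis=1) / np.maximum(n_called, 1), 1.0)

    # frequency of the "2" allele among called genotypes (dosage / 2N)
    alt = np.where(called, geno, 0).sum(axis=1) / np.maximum(2 * n_called, 1)
    maf = np.minimum(alt, 1 - alt)

    return (miss_frac <= max_missing) & (maf >= min_maf) & (het_frac <= max_het)
```

A monomorphic SNP fails on MAF, a SNP with three of ten calls missing fails on missingness, and a SNP with two heterozygous calls in ten samples fails on heterozygosity.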
Population Structure and Ancestry Analysis
Population structure and ancestry analysis was performed with the fastSTRUCTURE software (Raj et al., 2014), cluster analysis, and principal component analysis (PCA). Initially, fastSTRUCTURE was run with all filtered SNPs at K = 2 using a 'simple' prior to partition all Ae. tauschii accessions into the L1 and L2 lineages. Per the developer's recommendation for computational efficiency, fastSTRUCTURE was run with the 'simple' prior and a random seed for K = 2 to K = 8 with three replications each to detect the optimum value of K. Once the optimum K was determined, the final fastSTRUCTURE analysis was performed using a 'logistic' prior with all the SNPs. Only accessions with available passport information were used in this analysis, and passport information was used to group and order accessions. To ensure label collinearity across multiple iterations of each K run, fastSTRUCTURE results were processed using the CLUMPAK package (Jakobsson and Rosenberg, 2007; Kopelman et al., 2015) and plotted using the Distruct program (Rosenberg, 2004). The optimal K value was determined using the 'chooseK' utility provided with fastSTRUCTURE.
Phylogenetic cluster analysis was performed in R. Genetic distances were computed using the 'dist' function with the Euclidean method. The distance matrix was converted to a phylo object using the 'ape' package (Paradis et al., 2004). Using the 'phyclust' package (Chen, 2011), a neighbor-joining unrooted tree was plotted to indicate subpopulation clusters and identify tentative cryptic outliers that were not identified phenotypically. Cluster analysis was performed using default parameters in 'dist', 'ape', and 'phyclust'. Principal component analysis was also performed in R. Eigenvalues and eigenvectors were computed with the 'eigen' function using the 'A' matrix output of the rrBLUP package (Endelman, 2011). The first three eigenvectors were plotted as three principal components to observe clustering. All analyses were performed separately for Ae. tauschii only to detect subpopulations, and together with wheat to study the wheat-Ae. tauschii relationship. L1 and L2 accessions were identified from the fastSTRUCTURE partitioning of the two lineages at K = 2 and projected onto the PCA. To find the variables best explaining the differentiation within lineages, correlation coefficients were computed for PC2 and PC3 vs. longitude, latitude, and altitude.
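The PCA step above (eigendecomposition of a genomic relationship matrix) can be sketched as follows. The study used R's eigen on the rrBLUP 'A' matrix; this Python sketch substitutes a simplified centered cross-product relationship matrix, so the scaling is an assumption, not the exact rrBLUP formula.

```python
import numpy as np

def pca_from_relationship(geno):
    """PCA via eigendecomposition of a sample x sample relationship matrix.

    `geno` is samples x SNPs coded 0/1/2. Columns are centered, a
    relationship matrix is formed, and its leading eigenvectors serve
    as principal components (illustrative sketch only).
    """
    X = np.asarray(geno, dtype=float)
    X -= X.mean(axis=0)                  # center each SNP
    A = X @ X.T / X.shape[1]             # sample x sample relationship matrix
    vals, vecs = np.linalg.eigh(A)       # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1]       # reorder to descending
    vals, vecs = vals[order], vecs[:, order]
    pct_var = 100 * vals / vals.sum()    # percent variation per component
    return vecs, pct_var
```

With two genetically distinct clusters of accessions, PC1 separates them and captures essentially all of the variation, mirroring how PC1 separates L1 and L2 in the text.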
Genetic Diversity Analysis
As a measure of average heterozygosity over multiple SNPs in a given population, Nei's diversity index (Nei, 1973) was computed for the whole population and separately for L1, L2, wheat, and L1 and L2 combined. Additionally, pairwise FST between subpopulations and lineage-wise minor allele frequency (MAF) were computed and plotted using custom R scripts. Pairwise FST was computed among L1, L2, and wheat in all combinations. MAF distributions were plotted separately for L1 and L2.
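For biallelic SNPs, Nei's (1973) gene diversity at a locus reduces to 1 - p² - q² = 2pq, averaged over loci. A minimal sketch (the function name is ours), assuming per-SNP allele frequencies have already been computed for the population of interest:

```python
import numpy as np

def nei_diversity(freqs):
    """Nei's (1973) gene diversity averaged over biallelic loci.

    `freqs` holds one allele frequency per SNP; per-locus diversity
    is 2p(1-p), and the index is the mean across loci.
    """
    p = np.asarray(freqs, dtype=float)
    return float(np.mean(2 * p * (1 - p)))
```

A SNP at p = 0.5 contributes the maximum (0.5), and fixed SNPs contribute nothing, which is why populations with more intermediate-frequency alleles score higher.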
Lineage-Specific Allelic Contribution to Putative L1-L2 Hybrids and Wheat D-Genome
Lineage-specific private alleles are those segregating in one lineage but fixed in the other. To determine a lineage-specific allele at a SNP site, the dataset was split into L1 and L2 accessions. SNP sites where MAF was zero in one lineage but greater than zero in the other were filtered, and the segregating lineage-specific allele identified. L1 and L2 private alleles were assigned different colors and plotted for each putative hybrid separately. For each hybrid, lineage-specific contribution was determined as the percentage of alleles contributed by each lineage. Using private-allele SNPs, allele matching was performed as described in Singh et al. (2019) to find the putative parents of each hybrid from both L1 and L2. For the wheat D genome, a consensus of lineage-specific alleles was determined, and lineage-specific alleles were plotted across all wheat D-genome chromosomes. SNP sites where more than one wheat line carried an L1-specific allele were considered putative introgressions from L1. Lineage-specific contribution was determined as the percentage of alleles contributed by each lineage across the consensus.
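The private-allele rule above (segregating in one lineage, fixed in the other) can be expressed compactly. A sketch assuming per-SNP minor-allele frequencies have been computed within each lineage; the function name is ours:

```python
import numpy as np

def private_alleles(maf_l1, maf_l2):
    """Flag SNPs with lineage-specific private alleles.

    A SNP is private to L1 when its minor allele segregates in L1
    (MAF > 0) but is fixed in L2 (MAF == 0), and vice versa.
    Returns two boolean masks (private to L1, private to L2).
    """
    maf_l1, maf_l2 = np.asarray(maf_l1), np.asarray(maf_l2)
    return (maf_l1 > 0) & (maf_l2 == 0), (maf_l2 > 0) & (maf_l1 == 0)
```

SNPs segregating in both lineages, or fixed in both, are flagged by neither mask and carry no lineage information for the hybrid and D-genome analyses.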
Genetically Diverse Representative Core-Set Selection
All SNPs were used to select a representative core set from the Ae. tauschii collection. The core set was selected in two steps. First, the software package PowerCore was used with default settings (Kim et al., 2007); it selects lines that retain the most diverse alleles by implementing the advanced M (maximization) strategy. The number of selected accessions was then further reduced by phenotypically guided selection using the available phenotypic data for resistance to leaf rust composite, stem rust race TTKSK (Rouse et al., 2011), and Hessian fly biotype D. The diversity captured by the MiniCore was assessed as the percentage of segregating SNPs present in the selected accessions relative to the whole collection.
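The M (maximization) strategy selects accessions so that as many observed alleles as possible are retained. A greedy sketch in that spirit follows; PowerCore's actual heuristic differs in detail, and the dictionary data layout is an assumption for illustration.

```python
def greedy_core(geno):
    """Greedy allele-coverage core selection (illustrative of the
    M strategy; not PowerCore's exact algorithm).

    `geno` maps accession name -> list of allele calls (one per SNP).
    Accessions are added, largest marginal gain first, until every
    (SNP, allele) combination observed in the collection is covered.
    """
    all_alleles = {(i, a) for calls in geno.values() for i, a in enumerate(calls)}
    covered, core = set(), []
    while covered != all_alleles:
        # pick the accession contributing the most not-yet-covered alleles
        best = max(geno, key=lambda acc: len(
            {(i, a) for i, a in enumerate(geno[acc])} - covered))
        core.append(best)
        covered |= {(i, a) for i, a in enumerate(geno[best])}
    return core
```

Accessions whose alleles are fully redundant with the selected core are never added, which is how the collection shrinks by an order of magnitude while retaining most segregating alleles.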
Geographical Distribution of Ae. tauschii
Aegilops tauschii is mainly found around the Caspian Sea and in central Asia, but occurs as far west as Turkey (Lon: 26.327362, Lat: 40.009735) and as far east as eastern China (Lon: 111.048058, Lat: 34.059486). Geographical origin data were known for most of the accessions (Figure 1). The majority of the accessions come from Afghanistan, Iran, and Azerbaijan (Supplementary Figure S2). L1 is spread across the entire Ae. tauschii geographical range, whereas L2 is present only in Transcaucasia and around the Caspian Sea region (Figure 1). However, we did find one L2 accession in Uzbekistan, the first report of an L2 accession outside the lineage's natural habitat.
Genomic Profiling
Genotyping-by-sequencing (GBS) generated 318,639 putative single nucleotide polymorphisms (SNPs) from a total of 672 samples consisting of 569 Ae. tauschii and 103 wheat lines. Filtering the SNPs based on missing data, MAF, heterozygosity, and Fisher's exact test resulted in 13,582 SNPs. Additionally, poor samples were removed based on the amount of missing data and heterozygosity: 20 Ae. tauschii samples with more than 80% missing SNP calls and more than 5% heterozygosity were removed, resulting in a dataset of 13,582 SNPs for 652 samples consisting of 549 Ae. tauschii and 103 wheat samples. Finally, after removing 447 SNPs that were private to wheat, a total of 13,135 high-quality SNPs were retained and used for further analyses.
Population Structure Analysis
All SNPs were used to infer the ancestry of the filtered samples using the variational Bayesian inference algorithm fastSTRUCTURE. The global analysis was run for Ae. tauschii and wheat together for K ranging from two to eight with three iterations for each K (Figure 2). Samples were pre-assigned labels based on their geographical origin, and this information was used for plotting the membership coefficients. At K = 2, L1 and L2 split from each other within Ae. tauschii, and wheat remained clustered with L2 of Ae. tauschii. Nine accessions showed a very distinct structural differentiation as an admixture of L1 and L2 (Figure 2; group 15). These nine accessions were hypothesized to be possible hybrids between L1 and L2 and were analyzed separately. Using the 'chooseK' utility provided with fastSTRUCTURE, K = 6 was determined to be optimal, where the marginal likelihood of the data was maximized. We also found that K values ranging from 2 to 6 were informative and gave biologically and geographically interpretable groupings. At K = 3, L1, L2, and wheat were completely separated, with over half of the Iranian and a few Azerbaijani accessions from lower altitudes showing admixture. At K = 4, L1 showed sub-population differentiation, where accessions from Armenia, Azerbaijan, Georgia, Russia, Syria, and Turkey clustered separately from accessions originating in Afghanistan, China, Kyrgyzstan, Pakistan, Tajikistan, Turkmenistan, and Uzbekistan. Accessions from Iran showed a mixture of accessions from these two groups. Putative hybrids showed clear similarity with accessions from the western side of the Caspian Sea in L1 and with Iranian accessions in L2. At K = 5, L2 accessions showed some differentiation, where more than half of the accessions from Iran occurring at lower altitudes differentiated from those from Armenia, Azerbaijan, Georgia, and Turkey.
For K > 5, no further information was provided by the population structure analysis in terms of population differentiation within L1 and L2; however, the putative hybrids formed their own cluster. Wheat showed no subpopulation differentiation at all. Therefore, we determined K = 5 to be a secondary optimal stratification level after the optimal K = 3.
Population structure analysis was also run on Ae. tauschii alone to determine the impact of the wheat outgroup on the pattern of Ae. tauschii grouping (Supplementary Figure S3). The marginal likelihood of the data was maximized at K = 5. At K = 2, L1 and L2 differentiated strongly, and the same group of nine accessions was evidenced as an admixture of L1 and L2. At K = 3, L1 showed the same population differentiation as in the global analysis: accessions from the eastern side of the Caspian Sea differentiated from those on the western side. At K = 4, L2 Iranian accessions showed admixture and differentiated from other accessions. At K = 5, the putative hybrids differentiated to form their own cluster. At K > 5, no additional useful information was provided by the population structure analysis.
Principal Component and Cluster Analysis
Principal component analysis was run as a second approach to cluster accessions and detect subpopulations. The same set of 13,135 Ae. tauschii-specific SNPs was used for PCA. The lineages inferred for Ae. tauschii individuals by the population structure analysis were used to color the accessions in the PCA (Supplementary Figure S4) and the phylogenetic cluster analysis (Figure 3). Principal component analysis was performed separately for two datasets: Ae. tauschii with wheat, and Ae. tauschii only. As expected, the population differentiation observed by fastSTRUCTURE was confirmed with PCA, as three distinct groups (L1, L2, and wheat) were observed in the first two components (Supplementary Figure S4). PC1 explained 55% of the variation, separating L1 and L2. PC2 explained 7% of the variation and separated wheat from L2 of Ae. tauschii. Corroborating previous reports, wheat was observed to be more closely related to the L2 accessions.

FIGURE 2 | Global population structure analysis for Ae. tauschii L1, L2, putative hybrids, and wheat for K = 2 to K = 6. An additional color is added with each increase in the value of K. Each vertical bar represents an individual, with the proportion of each color representing the membership coefficient for the corresponding subpopulation. A bar with only a single color represents ancestry from a single population, and a mixture of colors represents admixture from different populations.
Principal component analysis with only the Ae. tauschii accessions also confirmed the strong population differentiation between the two Ae. tauschii lineages, L1 and L2. In this analysis, PC1 explained 53% of the variation in the dataset (Figure 4 and Supplementary Figure S4). When analyzed in the absence of wheat, L1 showed strong within-lineage differentiation on the second principal component, explaining 4% of the variation, and L2 on the third principal component, also explaining 4% of the variation (Figure 4). To find the variables explaining the most variation within lineages along PC2 and PC3, correlation coefficients were computed against geographic variables. Correlation analysis showed that the L1 differentiation was strongly correlated with the longitude of accessions, following an east-west gradient relative to the Caspian Sea, and L2 with altitude relative to sea level (Supplementary Figure S5). After removing outlier accessions, plotting the longitudes of L1 accessions against PC2 clearly separated the accessions east and west of Tehran, Iran (Figure 5). On the third principal component, population differentiation was also observed, corresponding to the altitude of origin of the L2 accessions relative to sea level (r = 0.61). The PC3 vs. altitude plot also shows a clear trend, with PC3 separating the accessions according to their altitude, although a few outliers are present at both ends (Supplementary Figure S6). Generally, lower-altitude accessions clustered separately from higher-altitude accessions. We found that the strongest differentiation between L2 clusters was at around 150 m above sea level. Overall, the PCA results were in strong agreement with the population differentiation observed with fastSTRUCTURE.
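The PC-geography screen described above amounts to Pearson correlations between PC scores and per-accession geographic variables. A minimal sketch with assumed array layouts (rows are accessions; environment columns might be longitude, latitude, and altitude):

```python
import numpy as np

def pc_env_correlations(pcs, env):
    """Pearson correlation between each principal component (column of
    `pcs`) and each geographic variable (column of `env`). Returns a
    components x variables matrix of correlation coefficients."""
    pcs, env = np.asarray(pcs, float), np.asarray(env, float)
    out = np.empty((pcs.shape[1], env.shape[1]))
    for i in range(pcs.shape[1]):
        for j in range(env.shape[1]):
            out[i, j] = np.corrcoef(pcs[:, i], env[:, j])[0, 1]
    return out
```

A strong positive or negative coefficient for, say, PC2 vs. longitude is the kind of signal that identified the east-west gradient in L1 and the altitudinal gradient in L2.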
As a final assessment of population structure, cluster analysis was performed by computing genetic distances among accessions using the Euclidean method. The unrooted tree from this cluster analysis split the samples into three distinct clades: L1, L2, and wheat (Figure 3). Wheat and L2 were more closely related than either wheat and L1 or L1 and L2. L1 and L2 each further showed two internal clades that could again be attributed to longitudinal variation relative to the Caspian Sea and to altitude, respectively. Wheat showed essentially no internal differentiation.
Admixed Ae. tauschii Accessions Are L1-L2 Hybrids, or Possibly a New Lineage

Nine accessions showed up in the fastSTRUCTURE, PCA, and cluster analyses as admixtures of Ae. tauschii lineages L1 and L2. To test their origin as hybrids between L1 and L2 accessions, private alleles in both lineages were filtered and tested in the hybrid samples. A total of 4,711 L1 and 4,700 L2 private alleles were identified in the whole collection. Based on the total number of SNPs assayed in the putative hybrids, lineage-specific alleles contributed by L1 ranged from 48 to 70%, and by L2 from 30 to 52%. Of the nine putative hybrid samples, only TA3429 was confirmed as a typical bi-parental recombinant inbred line between L1 and L2 accession(s), in which the chromosomal segments from L1 and L2 were clearly demarcated without any overlap (Figure 6). The other eight putative hybrids showed no such clear pattern, but rather an ambiguous distribution of private alleles (Supplementary Figure S7). Private alleles were also visualized for one randomly selected accession from each of L1 and L2, which showed no contribution from the other lineage (top row, Supplementary Figure S7).

FIGURE 3 | Neighbor-joining tree showing the relationship between L1, L2, possible L1-L2 hybrids, and wheat. Red branches represent L1 accessions, blue L2, gold L1-L2 hybrids, and green wheat. Wheat is closely related to L2 of Ae. tauschii. Putative hybrids cluster separately and appear between the two lineages.
Seven of the eight unclear putative hybrids originated in Georgia and one in Turkmenistan. A total of 2,098 SNPs with private L1 or L2 alleles were assayed in these hybrids. Exploring further, we found that 1,988 of these SNPs were fixed (private alleles contributed by L1 or L2), and only 110 SNPs were segregating, mostly in the single accession from Turkmenistan (Supplementary Figure S8). Removing that accession left only six segregating SNPs and 1,768 fixed SNPs in the seven putative hybrids from Georgia. Failing to construct their expected hybrid haplotypes, we hypothesized that these putative hybrids from Georgia are an isolated lineage that probably resulted from a single hybridization event between an L1 and an L2 accession. To test this, we filtered the SNPs for alleles private to these hybrids and found 713 SNPs with alleles private to the hybrids from Georgia. Of these 713 SNPs, only 29 were segregating within these hybrids, and the remainder were fixed.
To find potential L1 and L2 parents of each putative hybrid, allele matching was performed. SNPs with lineage-specific private alleles were used to find the closest accession from each lineage. The lowest and highest percent identities between a hybrid and an L1 accession were 76.96 and 85.02%, respectively. Similarly, the lowest and highest percent identities between a hybrid and an L2 accession were 74.2 and 77.62%, respectively. These low identity coefficients confirm that the potential parental accessions of these putative hybrids were not present in this collection. The putative hybrids with their highest-matching accessions are summarized in Table 1.

[Figure caption fragment] Lineage 1 (L1) is colored based on the longitudinal gradient and Lineage 2 (L2) with an altitudinal gradient relative to sea level. Empty circles represent L1 (yellow-red gradient) and empty triangles represent L2 (blue-green gradient). Gold plus signs (+) represent putative L1-L2 hybrids. Legends for the color gradients are shown on the right-hand side.
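The allele matching above reduces to a percent identity over the lineage-private SNP sites, skipping missing calls. A sketch following Singh et al. (2019) in spirit only; the call encoding (None for missing) and function name are our assumptions.

```python
def percent_identity(hybrid, candidate, informative):
    """Percent identity between a putative hybrid and a candidate parent,
    computed only over the informative (lineage-private) SNP indices.
    Calls are per-SNP values; None marks a missing call and is skipped."""
    match = total = 0
    for i in informative:
        if hybrid[i] is None or candidate[i] is None:
            continue  # missing in either sample: not comparable
        total += 1
        match += hybrid[i] == candidate[i]
    return 100 * match / total if total else float("nan")
```

Scanning this score across all L1 (or L2) accessions and keeping the maximum yields the "highest-matching accession" reported in Table 1; identities well below 100% imply the true parent is absent from the collection.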
Lineage-Specific Private Allelic Contribution to Wheat D-Genome
All wheat lines had a similar distribution of lineage-specific alleles across all chromosomes, with minor differences (data not shown); therefore, to determine the lineage-specific contribution of Ae. tauschii to the wheat D genome, a consensus of the private-allele distribution was determined. As L2 of Ae. tauschii has been shown to have contributed the D genome of wheat, all private alleles were assumed by default to be contributed by L2. However, if at least two different wheat lines carried the same private allele from L1 at a given SNP site, it was considered a putative introgression from L1 in the consensus. We observed that the D-genome consensus carried only 68 (2.7%) alleles from L1 and 2,406 (97.3%) alleles from L2. Two chromosomes, 1D and 6D, carried 54% of the total L1 alleles, with the majority of the introgressions present in the distal regions (Figure 6).
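The consensus rule above (a site counts as an L1 introgression only if at least two wheat lines carry the L1-private allele; otherwise it is assigned to L2 by default) can be sketched per SNP; the function name and call encoding are ours:

```python
def consensus_lineage(wheat_calls, l1_allele):
    """Consensus lineage assignment at one SNP across wheat lines.

    `wheat_calls` lists the allele call of each wheat line at this SNP
    (None = missing); `l1_allele` is the L1-private allele there. The
    site is called an L1 introgression only when at least two wheat
    lines carry the L1-private allele; otherwise it is assigned to L2.
    """
    n_l1 = sum(1 for c in wheat_calls if c == l1_allele)
    return "L1" if n_l1 >= 2 else "L2"
```

Requiring two independent lines guards against a single genotyping error being counted as an introgression; applying this call at every private-allele site yields the 68 L1 vs. 2,406 L2 consensus reported above.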
Genetic Diversity
Nei's diversity index was computed using all SNPs separately for Ae. tauschii L1, L2, possible hybrids, and wheat, and for the Ae. tauschii collection combined. The highest Nei's diversity index was observed for L2 (0.1326), followed by L1 (0.0872) and wheat (0.0158); higher values of the index indicate greater allelic diversity in a given population. The combined Nei's index for Ae. tauschii was 0.2382, and for the whole dataset including wheat, 0.2597. To evaluate population differentiation between the different pairs of Ae. tauschii lineages and wheat, pairwise FST statistics were computed. The highest FST was observed between L1 and wheat, followed by wheat and the L1-L2 hybrids, and wheat and L2 (Table 2). The strong differentiation between L1 and wheat also supports the large number of novel alleles found in this lineage that are absent from the wheat pool. Minor allele frequency was computed and plotted separately for L1 and L2 and jointly for both lineages (Supplementary Figures S9, S10). Individually, the MAF spectra for L1 and L2 showed the expected distribution, with the majority of alleles present at very low frequency (Supplementary Figures S9A,B). The joint distribution of L1 and L2 MAF revealed that the majority of alleles segregating in one lineage were close to fixation in the other (Supplementary Figure S9C). A chromosome-wise map of MAF revealed that the majority of polymorphic markers were present at the distal ends of the chromosomes (Supplementary Figure S10), and that L2 had a higher proportion of polymorphic markers, as indicated by the density and height of the L2 bars.
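Pairwise FST from per-SNP allele frequencies can be sketched with a Nei-style estimator (one minus the ratio of summed within-population to total diversities). This is one of several common estimators and not necessarily the exact formula used in the study's R scripts.

```python
import numpy as np

def pairwise_fst(p1, p2):
    """Nei-style F_ST between two populations from per-SNP allele
    frequencies, weighting the two populations equally (illustrative
    ratio-of-sums estimator)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    hs = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2   # mean within-pop diversity
    pbar = (p1 + p2) / 2
    ht = 2 * pbar * (1 - pbar)                         # total (pooled) diversity
    keep = ht > 0                                      # skip SNPs fixed in both pops
    return float(1 - hs[keep].sum() / ht[keep].sum())
```

Fully fixed differences give FST = 1, identical frequencies give 0, so high FST between L1 and wheat reflects the many L1 alleles absent from the wheat pool.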
Core-Set Selection
A genetically diverse core set was selected using the software package PowerCore, which implements the advanced M (maximization) strategy to select diverse accessions by reducing allelic redundancy while keeping the allele frequency spectrum similar. Initially, 107 Ae. tauschii accessions were selected using the advanced M strategy implemented in PowerCore (Supplementary Table S2). These accessions were then plotted on a phylogenetic tree and further selected using known phenotypic information on disease and insect resistance to bring the core to a manageable size. This selection was guided by phenotypic data for resistance to leaf rust composite, stem rust race TTKSK, and Hessian fly biotype D. Other factors, such as available geographical origin and the history of previous use in genetic mapping, were also taken into account when picking representative accessions. Finally, 40 accessions were selected to comprise a MiniCore that is distributed uniformly across the WGRC Ae. tauschii collection (Supplementary Figure S11). Nei's diversity index computed for the MiniCore (0.2235), compared with the whole collection (0.2382), indicates the allelic richness of the MiniCore. The MiniCore also retained 11,041 of the 13,135 segregating SNPs in the whole Ae. tauschii collection. By reducing the collection size by over 10-fold, we were still able to capture ∼84% of the segregating alleles present in the whole WGRC collection. The MiniCore consists of 29 accessions from L1 and 11 accessions from L2 of Ae. tauschii.
Geographical Distribution of Ae. tauschii
The Caspian Sea region is thought to be the center of origin of Ae. tauschii, and most of the accessions in our collection were sampled from this region (Figure 1). Consistent with the current literature, we observed that L2 of Ae. tauschii is spread over a narrow longitudinal range from northeastern Syria to northeastern Iran, spanning a distance of 1,625 km, whereas L1 is found from southern Turkey to northwestern China, spanning over 4,000 km. However, we did find one L2 accession, TA10124, originating in Uzbekistan. It is possible that the passport data for this accession were recorded incorrectly, but if correct, it points to the possibility of L2 migrating out of its natural habitat and extending eastward; however, more sampling is required to make any further claims. Most of the accessions were acquired from other genebanks; to fill the geographical gaps, a recent exploration was conducted in 2012 by WGRC researchers (blue dots, Supplementary Figure S1). Multiple accessions from both lineages overlap at similar altitudes, with L1 accessions generally inhabiting higher altitudes than L2 (Supplementary Figure S12A). The majority of L1 and L2 accessions fall within a similar latitude range, but some L1 accessions were more widely spread (Supplementary Figure S12C).
SNP Discovery and Ascertainment Bias
Using the Ae. tauschii genome assembly Aet v4.0 as the reference, GBS produced 13,135 high-quality SNP markers useful for assessing genetic diversity in the collection. We expected some bias between the two lineages because the reference genome (Aet v4.0) represents Ae. tauschii ssp. strangulata. However, as we did not use any prior SNP information to call SNPs, we expect the ascertainment bias to be minimal. Splitting the two lineages and computing MAF separately revealed that both lineages had a similar MAF distribution (Supplementary Figure S9), but with elevated MAF in L2 (Supplementary Figure S10). Because the goal of this project was not to assess any specific genomic region, using the Aet v4.0 reference genome should not pose a problem.
Population Structure Analysis
The global population structure analysis showed the expected Ae. tauschii subpopulations (L1 and L2), each with two subgroups, and the wheat D genome forming a third group that clustered most closely with L2. These findings are largely in agreement with the known population structure of Ae. tauschii, confirming the utility of our genotyping approach. In addition to these five groups, we unexpectedly found the putative hybrids clustered together (Figure 2). This small group of nine accessions showed up as an admixture of L1 and L2. At K = 3, wheat split from L2, sharing ancestry with most Iranian and a few Azerbaijani accessions. One common feature of these accessions is that they all occur at low altitudes relative to sea level. This points to the possibility that these accessions or their ancestors could have been involved in the origin of wheat.
Aegilops tauschii L1 showed intra-lineage population differentiation according to position east or west of the Caspian Sea. This was also clear in the principal component analysis, where L1 was differentiated by PC2 along the longitudinal gradient (Figure 4). Iranian accessions did not differentiate clearly into the eastern or western group but rather showed admixture. Iran is at the center of origin for Ae. tauschii and could be seen as a transition region between the east and west clades of L1. The majority of the L2 accessions occur in Azerbaijan and Iran, both on the same side of the Caspian Sea with Iran extending to the eastern side; therefore, a longitudinal gradient did not explain much of the weak population structure within L2 at K = 5. However, we found that this population differentiation could be attributed to the altitude of origin of the L2 accessions, where accessions originating at less than 150 m above sea level clustered separately from those originating at more than 150 m above sea level (Supplementary Figure S6).
The admixed accessions clustered separately from all other accessions and showed unique ancestry. These admixed putative hybrids shared ancestry with L1 accessions from Turkey and Transcaucasia, and with L2 accessions from Iran and Azerbaijan occurring at lower altitudes. This suggests that their original parents may have come from these geographical regions.
Inter-Lineage Hybridization and the Origin of a New Lineage

Aegilops tauschii is a highly self-pollinated species; therefore, natural hybrids between L1 and L2 are rare and have been the subject of limited reports. Wang et al. (2013) reviewed that, collectively, only 1.4% of accessions have been classified as L1-L2 intermediates across several studies. They also found two intermediate accessions falling between L1 and L2. Based on the similarity of their haplotype distributions and the close geographical proximity of their origins, they speculated that these two accessions could have originated from the hybridization of a single L2 plant with an L1 plant.
In the present study, we found nine such intermediate accessions that fall between L1 and L2 in the fastSTRUCTURE, PCA, and cluster analyses. Using the SNPs with private alleles, allele matching of the putative hybrids against L1 and L2 accessions did not yield a perfect match, which suggests that the true parental accessions may be missing from our collection. This, in turn, suggests that natural hybridization of L1 and L2 accessions is indeed rare and that these hybrids possibly originated from one or a few such rare events. These findings are in alignment with Wang et al. (2013), who suggested that a single hybridization event could have produced the two intermediate individuals in their data. Seven of the hybrids identified in our study were found in Georgia, one in Turkmenistan, and one had missing passport data. Both lineages co-exist in Georgia and Turkmenistan; therefore, they are not isolated by distance. It is possible that they are reproductively isolated given the species' inbreeding nature. A similar pattern of reproductive isolation and rare hybridization has been reported in rice landraces (Huang et al., 2010) and switchgrass (Mizuno et al., 2010; Sohail et al., 2012; Grabowski et al., 2014).
The distribution of L1 and L2 private alleles in these admixed accessions supports our hypothesis that they arose from hybrids between L1 and L2 (Figure 6 and Supplementary Figure S7). One hybrid, TA3429, showed a typical recombinant inbred pattern, different from the other hybrids. This accession was received from a Japanese collection along with a few other germplasm lines and was labeled as 4× (tetraploid). However, when tested phenotypically and cytologically, it was diploid like normal Ae. tauschii. Therefore, it is possible that this accession was in fact an artificially created diploid hybrid between an L1 and an L2 accession.
All other admixed accessions appear to be derived from a rare hybridization event between an L1 and an L2 accession, followed by isolation and possibly multiple intercrossing events. We found that the majority of the L1 and L2 private SNP alleles assayed in these putative hybrids were fixed, and only 110 were segregating, with the majority of the hybrids carrying the same private alleles. Exploring further, we identified 713 SNPs with alleles private to the admixed accessions from Georgia; of these, only 29 were segregating among the hybrids. Together, this supports the possibility that these accessions resulted from a single hybridization event. Though based on a limited sample, this points to the possible development of an unreported lineage as a result of a rare hybridization event between an L1 and an L2 accession; however, more samples are needed from these areas to shed new light on the nature of hybridization between the two lineages.
Genetic Diversity
Wheat had the lowest Nei's index, which is expected given its domestication and polyploidization relative to its wild progenitor, Ae. tauschii. A reduction in genetic diversity following a change in ploidy level has also been reported in cotton (Iqbal et al., 2001). The wheat lines in our study also represent a relatively narrow collection of United States winter wheat, contributing to the lowest Nei's index. The highest Nei's index was observed for L2, followed by L1. This can be attributed to differences in the distribution of L1 and L2 across their natural habitat: L1 is distributed across a longitudinal gradient, whereas L2 is distributed across an altitudinal gradient. Latitude is known to affect temperature, with cooler temperatures away from the equator (Rind, 1998), but the latitude distributions of L1 and L2 were similar for the majority of accessions except for a few outliers (Supplementary Figure S12C); therefore, the expected effect of latitude should be minimal. The longitude distribution of L1 was more extensive than that of L2 (Supplementary Figure S12B). As shown in Figure 1, the majority of L2 accessions are distributed around the Caspian Sea, compared with very few L1 accessions; the longitude effect is therefore more pronounced in L1 than in L2. Moreover, the altitude distributions of L1 and L2 also differed (Supplementary Figure S12A), with more L2 accessions growing at lower altitudes. Because altitude is known to affect temperature (Körner, 2007), L2 accessions might have selected alleles to survive at different temperatures. The combined Ae. tauschii collection had a higher Nei's index than either single lineage, which is expected because all the allelic diversity is assayed in the whole collection.
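Nei's gene diversity used throughout this comparison is computed per locus as 1 − Σ p_i² over the allele frequencies p_i, then averaged across loci. A minimal sketch with made-up allele counts:

```python
# Nei's gene diversity: H = 1 - sum(p_i^2) per locus, averaged over loci.
# The allele counts are illustrative, not from the actual dataset.

def nei_diversity(loci):
    """loci: list of dicts, each mapping allele -> count for one locus."""
    per_locus = []
    for counts in loci:
        total = sum(counts.values())
        h = 1.0 - sum((n / total) ** 2 for n in counts.values())
        per_locus.append(h)
    return sum(per_locus) / len(per_locus)

# Two biallelic loci: one balanced (H = 0.5), one nearly fixed (H = 0.18).
loci = [{"A": 50, "T": 50}, {"C": 90, "G": 10}]
print(round(nei_diversity(loci), 3))  # 0.34
```

A collection pooling two divergent lineages tends to score higher than either lineage alone, since alleles fixed for different states between lineages become segregating in the pooled sample.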
Ae. tauschii Contribution to the Wheat D-Genome
We assayed wheat D-genome chromosomes for lineage-specific introgressions from Ae. tauschii. A majority of the introgressions mapped to L2, which is consistent with the current and past literature. Calculating the percentage of lineage-specific alleles, we observed that L1 contributed only 2.7% of the total Ae. tauschii introgressions, compared with 97.3% from L2. This supports previous reports and points to the need to use L1 accessions to broaden the genetic base of hexaploid wheat and harness the untapped genetic diversity present in Ae. tauschii L1. With this goal, we developed a small set of Ae. tauschii accessions (MiniCore) consisting of 29 L1 and 11 L2 accessions to facilitate wheat improvement.
Genetically Diverse Representative MiniCore
Accessing the genetic diversity present in wild relatives can be a challenging task for breeders due to the large number of accessions and the confounding physiology of wild plants. Wild accessions with an overall poor phenotype can still be the source of agronomically important alleles. Efficient use of germplasm collections can often be facilitated through a targeted subset of the total accessions that is optimized to capture the maximum amount of total diversity in a minimum number of accessions. To facilitate the use of Ae. tauschii accessions in wheat breeding, we selected only 40 accessions to develop a small MiniCore set that captures 84% of the segregating alleles in the whole collection. The MiniCore was carefully selected from both lineages of Ae. tauschii, with the main focus on L1, because L1 is a reservoir of untapped genetic diversity that has not been leveraged by breeders. L2 accessions were chosen because this lineage is the source of many disease and insect resistance genes. These accessions can be utilized to bring in novel genetic variation for resistance to wheat rusts and insects and for heat and drought tolerance, to produce climate-resilient wheat varieties. This MiniCore of genetically diverse accessions was selected with the objective of broadening the genetic base of the wheat D-genome. In the future, however, the selection can be optimized based on the recombination rate and the distribution of Ae. tauschii regions already introgressed in the wheat D-genome.
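Selecting a small subset that captures a maximum share of segregating alleles is a maximum-coverage problem, for which a common heuristic is greedy selection: repeatedly add the accession contributing the most alleles not yet covered. The sketch below illustrates the idea with hypothetical accession/allele sets; it is not necessarily the procedure used to build the MiniCore:

```python
# Greedy maximum-coverage selection of accessions over segregating alleles.
# Accession names and allele ids are hypothetical.

def greedy_minicore(accessions, k):
    """accessions: dict name -> set of alleles carried. Pick up to k accessions
    that greedily maximize the number of distinct alleles covered."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(accessions, key=lambda a: len(accessions[a] - covered))
        gain = accessions[best] - covered
        if not gain:            # no accession adds new alleles; stop early
            break
        chosen.append(best)
        covered |= gain
    return chosen, covered

accs = {
    "L1_a": {1, 2, 3}, "L1_b": {3, 4}, "L2_a": {5, 6}, "L2_b": {1, 5},
}
chosen, covered = greedy_minicore(accs, 3)
print(chosen, sorted(covered))  # ['L1_a', 'L2_a', 'L1_b'] [1, 2, 3, 4, 5, 6]
```

The greedy heuristic gives no optimality guarantee in general, but for coverage objectives it is a standard, well-behaved approximation and is easy to extend with constraints such as a minimum number of L1 accessions.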
Future Work and Strategy to Utilize Genetic Diversity in Ae. tauschii
Untapped genetic diversity in Ae. tauschii is of great interest to breeders and geneticists for wheat improvement and for broadening the narrow D-genome (Kihara, 1944; Lagudah et al., 1991; Lubbers et al., 1991; Akhunov et al., 2010). Aegilops tauschii has been utilized via synthetic bridge crossing and direct crossing (McFadden and Sears, 1945; Kihara and Lilienfeld, 1949; Gill and Raupp, 1987); however, both of these strategies have drawbacks. Synthetic bridge crossing involves a tetraploid parent that also brings in genetic diversity in the A and B genomes, which makes it a difficult and time-consuming process to remove undesirable diversity from those genomes (Figure 7). Direct crossing of Ae. tauschii with wheat, in contrast, generally results in high F1 sterility, rendering it less attractive to researchers (Olson et al., 2013; Cox et al., 2017). A third strategy, which adopts beneficial steps from both, is "octo-amphiploid bridge"-mediated direct genetic transfer, which has rarely been reported in the literature. Using this strategy, Zhang et al. (2018) recently identified 18 QTLs for three agronomic traits: thousand kernel weight, spike length, and plant height. Briefly, this strategy involves crossing Ae. tauschii directly with wheat to produce a haploid F1 (n = 28; ABDDt; Supplementary Figure S13), followed by colchicine doubling to produce an octo-amphiploid (2n = 8x = 56; AABBDDDtDt) (Figure 7). This octoploid can then either be self-fertilized for several generations to develop a recombinant inbred line (RIL) population or backcrossed with hexaploid wheat to develop near-isogenic lines for genetic mapping. Since there are four copies of the D-genome chromosomes, the progeny will follow tetrasomic inheritance for any given trait, with five expected genotypes: nulliplex, simplex, duplex, triplex, and quadriplex.
The presence of a range of genotypes differing by a single allele presents an opportunity to study dosage effects in addition to genetic mapping. Extending the disomic inheritance model to this octoploid, the typical RIL-like 96% homozygosity should be achieved after 20 generations of selfing, compared with six generations under disomic inheritance (Supplementary Figure S14). However, theoretically, moderate frequencies of individuals homozygous for each allele (nulliplex and quadriplex) are achieved at F5 or F6 and can be used for genetic mapping. Once an associated genetic marker is identified for a trait, it can be used to identify a homozygous line for that trait, which can then be backcrossed with wheat to recover euploid wheat (2n = 6x = 42) with the desired gene introgressed (Supplementary Figure S15). Our initial results indicate that, depending on the hexaploid wheat used, euploidy can be achieved after as little as one or two backcrosses.
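The contrast between disomic and tetrasomic fixation rates follows from per-generation heterozygosity decay factors of 1/2 (disomic) and, assuming random chromosome segregation with no double reduction, 5/6 (tetrasomic, the classical autotetraploid selfing result). A small sketch of the expected number of selfing generations, starting from a fully heterozygous F1:

```python
# Generations of selfing needed to reach a target expected homozygosity,
# given a per-generation heterozygosity decay factor:
#   disomic selfing:    heterozygosity halves each generation (factor 1/2)
#   tetrasomic selfing: factor 5/6, assuming random chromosome segregation
#                       and no double reduction.

def selfing_generations(target_homozygosity, decay):
    """Selfing generations after the fully heterozygous F1 until the
    expected homozygosity 1 - decay**n reaches the target."""
    n, het = 0, 1.0
    while 1.0 - het < target_homozygosity:
        het *= decay
        n += 1
    return n

disomic = selfing_generations(0.96, 1 / 2)
tetrasomic = selfing_generations(0.96, 5 / 6)
print(disomic, tetrasomic)  # 5 18
```

Counting the F1 itself, 5 selfing generations corresponds to roughly F6, and 18 to roughly F19-F20, consistent with the six versus ~20 generations contrasted above.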
CONCLUSION
Studying genetic diversity in Ae. tauschii is very important to wheat improvement in the wake of unpredictable climate and evolving biotic stresses. In this study, we confirmed that Ae. tauschii L1 harbors an immense amount of untapped genetic diversity that can be used for wheat improvement. We also provided evidence of natural Ae. tauschii L1-L2 hybrids, which opens the door to new genetic variation. Finally, the selection of forty genetically diverse accessions will facilitate the use of Ae. tauschii for wheat improvement for abiotic and biotic stresses via octo-amphiploid-mediated bridge crossing, which will ultimately result in higher genetic gains and faster wheat improvement.
DATA AVAILABILITY
Sequence reads generated using genotyping-by-sequencing are available from NCBI SRA under accession SRP141206. R code and other scripts are available at the GitHub repository https://github.com/nsinghs/Code_Ae_tauschiiDiversity.
AUTHOR CONTRIBUTIONS
NS analyzed the data and wrote the manuscript. SW performed the GBS. VT and SS developed and contributed to the idea. JR acquired, managed, and provided Ae. tauschii accessions. DW collected and provided phenotypic data. MA provided Ae. tauschii accessions. BG and JP conceived and developed the idea. JP wrote the manuscript.
FUNDING
This is contribution #19-042-J from the Kansas Agricultural Experiment Station, Manhattan. This study was conducted under the auspices of the Wheat Genetics Resource Center (WGRC) and Industry/University Collaborative Research Center (I/UCRC) through support of industry partners and partially funded by NSF grant contract (IIP-1338897).
|
v3-fos-license
|
2024-06-18T15:13:09.337Z
|
2024-06-17T00:00:00.000
|
270557114
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://biotechforenvironment.biomedcentral.com/counter/pdf/10.1186/s44314-024-00003-4",
"pdf_hash": "f72e4b5d1717aac3b38a4693f6633605f6954ddc",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:886",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "76993432859889ce7970c6d8903bd602ac48d34f",
"year": 2024
}
|
pes2o/s2orc
|
Microbial cell factories in the remediation of e-wastes: an insight
Electronic waste, also known as e-waste, consists of the discarded or by-products of electronic appliances, constituting a major percentage of the total solid waste produced globally. Such e-waste is mostly composed of plastics, various heavy metals, azo dyes, and xenobiotic components, which are mostly non-biodegradable or poorly degradable in nature. As a result, they increase environmental toxicity, preventing the growth of crops and causing health issues for humans and other animals. On the other hand, recycling e-waste may also lead to the consumption of heavy metals through water or the inhalation of polluted air after combustion, which may cause various health issues such as asthma; nerve, respiratory, kidney, and liver diseases; and even cancer. Hence, microbial degradation of e-waste has become a new trend in managing such solid wastes. However, the modes of action involved remain less explored. Microbes degrade various components of e-waste through a number of mechanisms, such as bioleaching, biosorption, biotransformation, bioaccumulation, and biomineralization. Some microorganisms release enzymes such as reductases, laccases, esterases, carboxylesterases, catalases, and dioxygenases for the bioconversion of various components of e-waste into their less toxic forms. This review provides insight into the role of microbes in the conversion of various components of e-waste, such as polyaromatic hydrocarbons (PAHs), azo dyes, and heavy metals, and their modes of action.
Introduction
With the global demand for electronic goods on the rise, effective management of electronic waste (e-waste) has emerged as a pivotal issue within the realm of solid waste management (Ghulam et al., 2023). This concern extends across developed, transitioning, and emerging nations, forming an intricate web of interconnected challenges [159]. The shipment of thousands of electronic products across borders is vital for global trade, yet once their usage lifecycle terminates, they transform into hazardous waste consisting of harmful substances such as toxic chemicals, heavy metals, and non-biodegradable plastics. This transformation results in pollution and the onset of severe health ailments [159]. E-waste is a complex mixture of metals and heavy metals, all of which are deadly and represent significant threats to the environment and its ecosystems [99]. Lead, mercury, cadmium, nickel, copper, zinc, and other metallic compounds typically found in electrical gadgets are considered hazardous [40]. Furthermore, e-waste disposal adds an array of plastic components to the environment, including polyethylene terephthalate esters, polystyrene, polyvinyl chloride, and polypropylene, as well as ceramics, printed circuit boards, plywood, and a variety of other materials [119]. Organic substances found in e-waste include polycyclic aromatic hydrocarbons (PAHs), polychlorinated dibenzo-p-dioxins (PCDDs), polybrominated dibenzo-p-dioxins (PBDDs), dechlorane plus (DP), and polychlorinated biphenyls (PCBs), which are also toxic to the environment [137].
Directly or indirectly, there is no doubt that electronic waste pollutes the environment and its natural resources such as soil, air, water, and land surfaces (Raj et al., 2023). These wastes are dangerous to the health of both plants and animals, as they are mainly carcinogenic, consisting of heavy metals, acids, and non-biodegradable polymers [116]. Because of their ability to biomagnify through the food chain, appropriate management from collection through disposal is necessary [9]. Around 75% of this waste remains within homes, offices, or industries, destined to become discarded materials. The non-recyclable waste undergoes processes such as dismantling, shredding, and even burning, releasing significant volumes of toxic smoke laden with heavy carcinogenic compounds. These emissions contribute to health deterioration, leading to skin and respiratory issues [33].
Microbes, in particular, have shown exceptional effectiveness in dealing with environmental contaminants. Both fungi and cyanogenic microbes are categorized as organotrophs [35]. Fungi are responsible for producing organic acids, while cyanobacteria produce hydrogen cyanide when organic carbon is present [65]. This interaction of organic acids and hydrogen cyanide aids the bioleaching process [161]. Improving the employment of microorganisms as bioremediation agents is critical for furthering the cause of a sustainable environment (Akinsemolu et al., 2018). This review places special emphasis on the role of various microbes in the remediation of the major biodegradable components present in e-waste, such as heavy metals, PAHs, and azo dyes; their modes of action; and the challenges associated with the process. Additionally, it provides a brief snapshot of the role of various microbial enzymes in the conversion of e-waste components.
Major sources and the components of e-waste
Currently, e-waste stands as the fastest-growing waste source, experiencing an exponential increase in volume [99]. Globally, millions of metric tonnes of e-waste are generated annually, with an expected yearly rise of 4-5% [173]. This remarkable expansion can be attributed to several critical factors, including urbanization, industrialization, and our dependence on electronic and electrical components [12]. Both domestic consumption and foreign export have contributed to the demand surge for a wide array of electronic products [89]. Notably, within the Indian industry landscape, the electronics sector has emerged as one of the fastest-growing segments [140].
E-waste encompasses a spectrum of over 1000 different materials, with composition varying based on the manufacturer, equipment type, and age [117]. Comprising approximately 38% ferrous metals, 16.5% non-ferrous metals, and 26% plastics, e-waste predominantly contains iron and steel, constituting over 50% of the ferrous metal fraction, followed by various other elements (Moyen Massa et al., 2023). Metals are commonly found in e-waste in elemental form or as alloys of various elements [170]. In an era of increasing innovation, modern gadgets boast an astounding variety of up to 60 components, thereby complicating these devices [171]. With heightened complexity comes an upsurge in the number of metals with luminous, conductive, and alloying capabilities [171].
A multitude of metals can be found in varied combinations and concentrations in diverse electrical and electronic devices [168]. Precise quantities of elements are requisite for manufacturing components like printed circuit boards (PCBs), which are ubiquitous in laptops, personal computers, mobile phones, and similar devices. These components may encompass hazardous elements such as chromium, zinc, lead, nickel, and copper, whether in elemental state or alloyed form [78]. Electrical steels are widely employed in electronics due to their low iron loss and maximum magnetization capacity (Hayakawa et al., 2020). Display technologies like cathode ray tubes (CRTs), liquid crystal displays (LCDs), and light-emitting diodes (LEDs) are prevalent in TV monitors owing to their availability and high resolution (Ciftci et al., 2017) as well as their permanent magnetism (Bloodworth et al., 2014). Rechargeable batteries (lithium-ion/lithium polymer), extensively utilized in laptops and mobile phones, incorporate elements such as lithium oxides, lithium cobalt oxides, and rare earth metals such as lanthanum (La) [5,103]. Additionally, heavy metals such as lead, mercury, cadmium, barium, beryllium, chromium, lithium, nickel, zinc sulfide, selenium, yttrium, europium (rare earth elements), and arsenic constitute essential parts of electrical components [99]. Furthermore, halogenated substances such as CFCs, polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs), polybrominated biphenyls (PBBs), and brominated flame retardants (BFRs) are also present in some electronic appliances such as ACs and refrigerators (Harrad et al., 2012; Birnbaum et al., 2004).
Environment and health effects
In recent years, concerns regarding the presence and distribution of organic contaminants, including heavy metals, within the environment have intensified [32]. Any method of garbage disposal, whether in landfills or bodies of water, has serious effects on both human health and the ecosystem [157]. Various hazardous electrical components and their health consequences are depicted in Table 1. Most e-waste contains heavy metals such as Pb, Cd, Hg, Zn, and Li, which exhibit adverse health effects on the central nervous system, kidneys, blood, lungs, and skin, among others (Table 1). Additionally, components such as barium, beryllium, and dibenzofurans may cause various lung and skin diseases and even cancer (Table 1). The health effects of e-waste can result from direct exposure in recycling sites, consumption of heavy metals through water, or inhalation of polluted air after combustion [128]. This escalation of concern is largely due to substantial evidence indicating that a significant number of chemical groups have demonstrated carcinogenic properties in experimental animals, thereby posing a potential hazard to human health [129]. These chemicals are usually classified into three types:
• Primary contaminants include heavy metals and halogenated chemicals like lead, cadmium, barium, nickel, and zinc [38].
• Secondary contaminants, including by-products of incorrect recycling operations, contain chemicals such as dioxins, PAHs, and PHAHs [188].
• Tertiary contaminants include reagents used in hydro- and pyrometallurgical processes [38].
Recycling processes such as chlorination, thermal treatment, adsorption, chemical extraction, membrane separation, and ion exchange release heavy metals that directly infiltrate the soil surface, posing a threat to the soil ecosystem [153]. Consequently, this waste can contaminate water sources, threatening marine life [16]. Biomagnification can also be triggered; for instance, cadmium pollution in groundwater systems that surpasses the normal threshold has a negative impact on aquatic species and sets off a biomagnification process [84]. Plants absorb and store heavy metals in their tissues when this water is used for irrigation, endangering both plant and human health if ingested [191]. A study conducted in Vietnam confirmed the presence of dioxins in e-waste recycling facilities as the outcome of open burning and storage practices, resulting in polluted land and rivers [81]. The concentrations reported surpassed WHO guidelines [163]. For instance, inhalation of a heavy metal like cadmium can cause potential lung illness and kidney damage [40]. The overall effect of e-waste on the environment is depicted in Fig. 1.
Three commercial forms of PBDEs (penta-, octa-, and deca-PBDEs) are banned in Europe, Canada, and America [68] due to their ability to biomagnify through food chains, thereby slowing brain development in animals and causing other health issues [66]. Birnbaum and Staskal [28] examined the use of brominated flame retardants, such as penta-, octa-, and deca-PBDE, in plastics used in numerous electronics. These substances possess the capability to induce significant health concerns, including the disruption of thyroid gland function. Additionally, heavy metals like mercury, often found in electronic components like fluorescent tubes, switches, and LED screens, exert negative effects on health, leading to sensory impairment, dermatitis, memory loss, etc. [18]. Polyvinyl chloride (PVC), widely used as an insulating material in electrical cables, has the propensity to bioaccumulate [167].
E-waste management practices
According to the United States Environmental Protection Agency (USEPA), the United States generates more garbage than many other countries, averaging an estimated 2.0 kg of municipal solid waste per person each day [174]. Notably, electronic waste has emerged as a significant issue in the United States [93]. Every year, over 3.2 million tons of electronic waste, including computers, displays, and TVs, find their way into US landfills [174]. This waste often gets incorrectly disposed of or repurposed without adequate consideration for environmental effects or worker health and safety [26]. Similarly, the European Commission has proposed updates to regulations governing electrical and electronic devices to enhance sustainability and mitigate environmental impacts. This initiative aims to reduce electronic waste by implementing various recommendations. These recommendations primarily focus on waste reduction, emphasizing the design of products to be more durable and repairable to extend their lifespan, encouraging reuse through takeback programs where customers can return old products for repair and resale, promoting recycling to recover and reuse valuable materials, and ensuring appropriate disposal of electronic equipment [61]. The amendment is designed to tackle the growing volume of waste in this category while acknowledging the environmental and health risks associated with improperly managed e-waste [61]. Efforts such as the Restriction of Hazardous Substances (RoHS) in Electrical and Electronic Equipment have been initiated in California, Norway, China, South Korea, and Japan. Additionally, many countries, including Australia, New Zealand, Thailand, Malaysia, and Brazil, are taking significant steps to restrict hazardous substances such as PAHs, PBDEs, and PCBs [41].
For example, Australia introduced the National Television and Computer Recycling Scheme (NTCRS) in 2011 to provide recycling services for TVs, computers, printers, and related equipment [53]. The Product Stewardship Act 2011 mandates producers and importers to responsibly handle the disposal of their goods, including electronic waste, after the product's lifespan. In New Zealand, extended producer responsibility (EPR) schemes for electronic items and the Waste Minimization Act 2008 provide a framework for electronic waste management and encourage waste reduction initiatives [172,177]. Thailand has implemented the Hazardous Substance Act to regulate the production, import, export, and use of hazardous compounds found in electronic goods. The country also employs various e-waste management measures, including collection and reusability of waste [165]. Malaysia has strengthened its regulatory framework for electronic waste
management and hazardous substances. The Environmental Quality Act 1974 governs the production, storage, export, treatment, and disposal of dangerous wastes, including electronic waste. Similarly, Brazil has adopted the National Policy on Solid Waste (PNRS) and the National Solid Waste Plan (PNSR), which include measures for electronic waste management. Brazil actively participates in international agreements and alliances aimed at addressing the challenges posed by e-waste and hazardous substances [31].
In India, e-waste management operates in a comparable manner, wherein urban families engage in informal recycling activities like collecting, sorting, repairing, and disassembling outdated devices to secure employment opportunities (https://www.wastechindia.com/challenges-for-e-waste-management-in-india/). However, unlike in developed nations, there is no tradition in India of customers willingly giving unwanted devices to professional e-waste disposal centers (https://hindrise.org/resources/e-waste-management-in-india/). According to the National Research Development Corporation (NRDC), recyclable electronic items find their way to recycling facilities predominantly located in Asian and African countries [126]. For example, India, predominantly Delhi and Bengaluru [14], as well as Pakistan, notably Karachi and Lahore [90], and China [186,187], serve as major destinations. Other countries like Uganda, Peru, and Brazil also play a great role in generating massive amounts of e-waste [179].

Fig. 1 The schematic diagram illustrates the comprehensive impact of e-waste on human health and the environment, including soil toxicity, biomagnification, air pollution, and other factors
Microbial degradation of e-waste
Biodegradation refers to the chemical breakdown of materials by living organisms, occurring either aerobically or anaerobically [164]. This process significantly impacts the breakdown of organic compounds [164]. Most microbes release biosurfactants to initiate the degradation process of PAHs. Biosurfactants are extracellular surfactants secreted by some microorganisms that accelerate the biodegradation process [29]. Biosurfactants are seen as promising options for bioremediation due to their ionic properties, low toxicity, strong emulsifying capabilities, multifunctionality, surface activity, and compatibility with the environment (Mishra et al., 2021). Additionally, biosurfactants exhibit a diverse range of chemical structures and a broad spectrum of metal selectivity and binding capacity, giving them a greater ability to remove contaminants (Mishra et al., 2021).
E-wastes are mostly composed of heavy metals (e.g., Ni, Cd, Al, Cu, Mn, Zn, Au, Fe, Ag, Pb, Hg, Cr, and Sn), polychlorinated biphenyls (PCBs), and polyaromatic hydrocarbons (PAHs). Certain microbes have a diverse catabolic capacity that allows them to degrade, transform, or accumulate a wide range of compounds, including hydrocarbons such as oil, polychlorinated biphenyls (PCBs), polyaromatic hydrocarbons (PAHs), pharmaceuticals, pesticides, and metals (Table 2) [11,36,44,94,98,112,145,147,154,182]. Although heavy metals are not biodegradable, they can potentially be converted from one chemical state to another, making them less hazardous to the environment [63]. Microbes aid in the transformation of contaminants into end products such as carbon dioxide and water, as well as other intermediate metabolic chemicals, during mineralization. Similarly, immobilization is the process of converting chemicals into a state that makes them inaccessible in the environment [138]. E. asburiae and B. cereus have been found to play a role in immobilizing heavy metals that contribute to pollution [63]. Immobilization can be accomplished in situ or ex situ [138]. The ex situ method comprises transferring polluted soils from the pollution site to another place, where a microbiological technique is used to immobilize the metal ions responsible for the contamination [15]. In contrast, the in situ technique entails treating pollution at its source [37].
A more effective approach to improving the efficacy of bioremediation processes in specific locations involves designing microbial methodologies that take into account factors such as regulatory mechanisms, microbial growth dynamics in contaminated areas, metabolic capabilities, and responses to varying environmental conditions [8]. While exposure to certain organic solvents can lead to the disruption of cell membranes, microbes have developed defense mechanisms [83]. These include hydrophobic modifications and solvent efflux pumps that serve as defensive barriers for the outer cell membrane [55,83].
Among the various modes of action of microbes, bioleaching, bioaccumulation, biotransformation, biosorption, biomineralization, reduction, and bio-oxidation are the key processes by which microbes contribute to the bioremediation of e-waste. The detailed mechanisms involving microbes in e-waste degradation are discussed below.
Biodegradation of PAHs
A number of polychlorinated biphenyls (PCBs), polyaromatic hydrocarbons (PAHs), and volatile organic compounds (VOCs) are found in e-wastes. A diverse range of microbes have the potential to release biosurfactants, which reduce the surface tension of these oily substances and convert them into smaller particles so that they can be absorbed by the cells for further metabolism [112]. Polyaromatic hydrocarbons (PAHs) are complex organic pollutants primarily produced through incomplete combustion processes [76]. These pollutants, released into the environment by both human activities and natural processes, disperse globally through air and water currents. They contaminate air, plants, and food, accumulating in organisms as they move up the food chain (Ghosal et al., 2017). Escalating levels of PAHs, notably from improper e-waste disposal, raise concerns about potential health risks such as cancer (Shengtao et al., 2022). Prolonged exposure to PAHs also increases the risk of asthma and cardiovascular diseases [72].
Certain bacteria have been identified for their ability to degrade high molecular weight PAHs. Key bacterial genera involved in PAH degradation include Bacillus sp., Mycobacterium sp., Rhodococcus sp., and Pseudomonas sp. Some bacteria struggle to digest PAHs effectively, and simultaneous degradation of different PAH types is challenging due to factors like bioavailability and metabolic interactions [79]. Cometabolism, however, plays a crucial role in breaking down PAHs synergistically, making it easier for specific bacteria to degrade a wider range of PAHs, particularly those with high molecular weights [79]. Furthermore, a significant challenge hindering PAH bioremediation is the understanding of their dynamics in soil and marine ecosystems. Most emitted PAHs get trapped under coal tar and black-clayish carbon particles, significantly reducing their bioavailability [132]. Addressing these challenges requires further research and attention. The overall process of microbial degradation of PAHs is depicted in Fig. 2.
Biodegradation of azo dye components of e-waste
When discussing e-waste, it is crucial to address the significant impact of azo dyes. Azo dyes are the most widely manufactured type of dye worldwide, accounting for approximately 80% of all dye production [149]. These dyes, produced through a straightforward process of diazotization and coupling, play a pivotal role in the dyeing and printing market [23]. Recently, there has been a rise in functional dyes tailored for high-tech applications, such as optoelectronics (e.g., photochromic materials, dye-sensitized solar cells, liquid crystal displays), electronic materials (e.g., organic semiconductors), and imaging technologies (e.g., electrophotography, thermal printing) [58]. Various electronic devices, including thermal transfer printers, lasers, nonlinear optical devices, and fuel cells, utilize these dyes [23]. Moreover, new azo-cyanine dyes with high molar absorptivity have been investigated for their potential as cyanine photosensitizers in the development of novel photodynamic therapy (PDT) agents [58]. However, there is growing concern about the use of azo dyes in these sectors due to health risks and severe environmental consequences [111]. Many studies advocate for bioremediation approaches to address the remediation of azo dyes [149] (El-Rahim et al., 2021). Microorganisms, particularly bacteria, have garnered global attention for their ability to efficiently degrade a wide range of dyes under anaerobic or aerobic conditions [107]. For example, commonly used dyes like Congo Red in sectors such as printing have been effectively degraded by microorganisms; for instance, microbes like Dichotomomyces cejpii MRCH 1-2 and Phoma tropica showed a 95% degradation rate (Krishnamoorthy et al., 2017). However, the degradation pathways of azo dyes used in electronics and their specific environmental impact warrant further investigation, as there is limited literature exploring azo dyes in the electronics sector.

Fig. 2 The illustration demonstrates the initial degradation of PAHs and VOCs by microbial biosurfactants, followed by their internalization and subsequent breakdown by various microbial enzymes. This initial degradation occurs through peripheral metabolic pathways before the compounds enter the tricarboxylic acid (TCA) cycle, ultimately resulting in the release of simpler and less toxic byproducts
Bioleaching
Bioleaching involves the use of acidophilic microorganisms to aid the solubilization of heavy metals bound within a solid sediment matrix [150]. This method is particularly effective for contaminants associated with iron or sulfur [24, 162]. Bioleaching processes may be of two types: "direct" and "indirect." In direct leaching, electron transfer occurs directly from the metal sulfide to the cell attached to the mineral surface. Indirect leaching, by contrast, proceeds through the action of metal ions such as iron(III). These ions are produced by iron(II)-oxidizing bacteria, which can be free-floating or attached to the mineral surface, and act as metal sulfide-oxidizing agents [114]. In the realm of bioleaching, specific organisms are commonly employed for their metal extraction abilities [162]. Bacteria such as Thiobacillus thiooxidans, T. ferrooxidans, Leptospirillum ferriphilum, and Acidithiobacillus ferrooxidans, as well as fungi such as Aspergillus niger and Penicillium simplicissimum, have found extensive use in extracting metals from electronic waste materials (Brandl et al., 2000; [2]). Autotrophic bacteria (e.g., Thiobacilli sp. and Sulfobacillus benefaciens), heterotrophic bacteria (e.g., Pseudomonas sp. and Bacillus sp.), and heterotrophic fungi (e.g., Aspergillus sp. and Penicillium sp.) represent the three principal categories of microorganisms active in the bioleaching of metals [150]. These microorganisms are capable of extracting metals from sulfide- or iron-containing ores and mineral concentrates (Gokul et al., 2019).
Among them, the fungus Aspergillus niger stands out for its ability to produce organic acids such as citric, gluconic, oxalic, and malic acids (Biswal et al., 2023). These organic acids act as strong chelating agents in the bioleaching process, allowing metals to be recovered from materials such as lithium-ion batteries (Biswal et al., 2023). Studies show that Chromobacterium violaceum is capable of detoxifying cyanide with the help of the beta-cyanoalanine synthase enzyme [13]. This species is potentially useful in the biological recovery of gold from e-waste. Additionally, it has been discovered that Chromobacterium violaceum can also participate in the leaching of gold and copper from waste mobile phone printed circuit boards (PCBs), showcasing its potential in metal recovery processes [2, 44]. On the other hand, Pseudomonas fluorescens is capable of catabolizing cyanide via the action of cyanide oxygenase. P. fluorescens proved more efficient in the bioleaching of gold than C. violaceum, even though it produces more cyanide than C. violaceum in the absence of electronic waste (Annamalai et al., 2019; [101]). An extensive literature survey identifies Thiobacillus ferrooxidans as one of the most well-studied organisms for the microbial leaching of iron and sulfur, with promising future applications [130]. Although bioleaching is a promising and eco-friendly method, it is time-consuming; toward large-scale application, the slow rate of the process and metal toxicity toward microorganisms remain significant setbacks [21]. Hence, there is scope for further improvement of this method. Recent research has demonstrated that bioleaching can be improved by maintaining optimal pH, O2 and CO2 levels, temperature, and mineral substrate supply to favor maximum microbial growth, as well as by promoting the formation of bacterial biofilms [67, 175].
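The slow process rate noted above can be made concrete with a simple first-order dissolution model, a common coarse approximation for leaching curves. The sketch below is purely illustrative: the rate constants are hypothetical and do not come from the cited studies.

```python
import math

def leach_recovery(t_days, k_per_day):
    """First-order approximation of metal recovery during bioleaching.

    Returns the fraction of metal solubilized after t_days, assuming the
    dissolution rate is proportional to the metal remaining in the solid
    phase: dR/dt = k * (1 - R). The rate constant k lumps together the
    microbial, pH, and temperature effects discussed above.
    """
    return 1.0 - math.exp(-k_per_day * t_days)

# Hypothetical rate constants: a pH/temperature-optimized run versus an
# unoptimized baseline (values are illustrative only, not measured data).
for label, k in [("baseline", 0.02), ("optimized", 0.06)]:
    print(f"{label}: {100 * leach_recovery(30, k):.1f}% recovered in 30 days")
```

Even a threefold increase in the lumped rate constant still implies weeks-long leaching times, which is consistent with the time-consumption drawback discussed above.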
Biosorption
The absorption and binding of ionized hazardous metals onto the cell surface is the basis of the biosorption process [155]. In the presence of ATP, metabolism-dependent biosorption occurs through processes such as chelation, a mechanism in which metal ions are bound through two or more coordinate bonds between a polydentate ligand and a single central atom. Additionally, physical adsorption, a surface phenomenon, creates a film of the adsorbate on the surface of the adsorbent [155]. In the absence of ATP, biosorption occurs through mechanisms such as adsorption, ion exchange, and covalent bonding, governed by the chemiosmotic gradient potential [19]. Based on the cell's metabolic requirements and the nature of the metal contamination, biosorption pathways may be classified as metabolism-dependent or metabolism-independent [27]. Physicochemical interactions between metals and functional groups on the bacterial cell surface occur through metabolism-independent pathways, involving chemical sorption, physical adsorption, and ion exchange [139]. The carbohydrates, lipids, and proteins of microbial cells carry metal-absorbing groups such as phosphate, sulfate, amino, and carboxyl groups [3].
Because of their capacity to bind metals from e-waste in aqueous solutions, microbes are referred to as biosorbents [7]. It is critical to examine the stability of microbial biosorbents by analyzing their nature, including sorption kinetics, regeneration, maximal sorption capacity, and recovery of the associated metals [95]. Dead biomass, live cells, or polymers derived from their metabolic processes are utilized as biosorbent materials in the biosorption of heavy metals (Fomina et al., 2014; [48]).
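The "maximal sorption capacity" mentioned above is typically estimated by fitting an equilibrium isotherm such as the Langmuir model, in which uptake saturates as the binding sites on the biomass fill. The sketch below assumes hypothetical parameter values for an unspecified biosorbent; it illustrates the shape of the model, not data from the cited studies.

```python
def langmuir_uptake(c_eq, q_max, k_l):
    """Langmuir isotherm: equilibrium metal uptake q (mg metal per g of
    biosorbent) as a function of the equilibrium solution concentration
    c_eq (mg/L). q_max is the maximum sorption capacity (saturation of
    binding sites); k_l reflects the binding affinity."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

# Hypothetical parameters for a dead-biomass sorbent (illustrative only):
q_max, k_l = 40.0, 0.05   # mg/g and L/mg
for c in (1, 10, 100, 1000):
    print(f"C_eq = {c:4d} mg/L -> q = {langmuir_uptake(c, q_max, k_l):.1f} mg/g")
```

At high concentrations the uptake approaches q_max, which is why the maximal sorption capacity is the key figure of merit when comparing biosorbents.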
Yeasts are also considered attractive biosorbents owing to the polysaccharides in their cell walls. The yeasts Candida tropicalis and Saccharomyces cerevisiae, along with the bacterium Streptomyces longwoodensis, show potential for adsorbing heavy metals including cadmium (Cd), chromium (Cr), copper (Cu), nickel (Ni), zinc (Zn), and lead (Pb) [47]. Saccharomyces cerevisiae, commonly referred to as baker's yeast, has proved convenient for retaining metal ions such as cobalt and copper [151]. Yeasts such as S. cerevisiae can also serve as bioremediation agents via processes such as ion exchange [106]. Bacteria and fungi are attractive biosorbents for e-waste remediation because of their capacity to grow under a variety of environmental conditions.
Algae have remarkable biosorption capacity and, owing to their substantial biomass, are highly efficient compared with other microbes [9, 50]. Their biosorption acts mainly through an ion exchange mechanism. Brown marine algae (e.g., Fucus vesiculosus), bearing functional groups such as COO−, SO3−, SH, and NH2, effectively remediate metals such as cadmium, nickel, and lead (Mustapha et al., 2015).
Biosorption is widely used as a biological tool for accumulating environmentally hazardous heavy metals through physico-chemical uptake pathways, owing to its suitability for different environmental conditions (Errasquin et al., 2003). However, its effectiveness depends on the biosorbent materials used and the associated costs [151]. Various microorganisms, including the fungi Allescheriella sp., Botryosphaeria rhodina, Phlebia sp., and Stachybotrys sp. and the bacterium Klebsiella oxytoca, have demonstrated high metal-binding capability [49]. Additionally, gram-positive bacterial strains such as Cellulosimicrobium sp. have shown tolerance to xenobiotics and heavy metals such as Cd, Hg, Cr, and Pb (Bhaiti et al., 2019). In a study conducted by Thatoi and his team in 2014, a bacterial strain known as Bacillus sp. SFC 500 was documented to reduce chromium to a less toxic form through biotransformation. Furthermore, research has shown the efficacy of microorganisms such as the bacterium Rhodobacter sphaeroides in removing hydrophobic toxic metals like zinc and cadmium from soil [135].
Biotransformation
Metal biotransformation can be categorized as direct or indirect (Balfourier et al., 2023). Direct biotransformation, also known as enzymatic biotransformation, uses microbial enzymes to change metal oxidation states, resulting in the reduction of toxic multivalent metals [160]. In indirect biotransformation, by contrast, metal-reducing microbes immobilize metals in sedimentary and subsurface settings, stabilizing multivalent hazardous metals (Tabak et al., 2005).
Bioaccumulation
Bioaccumulation harnesses the microbial capacity to absorb toxic metals and store them within cellular vacuoles through an active detoxification mechanism [85]. It requires energy for metal absorption and detoxification within the vacuoles (Errasquin et al., 2003). As a result, metals are taken up from the environment and sequestered inside living cells, resulting in remediation (Das et al., 2012). Plants and microorganisms efficiently eliminate metals through accumulation when used for bioremediation of metal-contaminated environments [54]. When paired with techniques such as phytodegradation, this approach delivers improved heavy metal removal [127]. Through bioaccumulation, metals are incorporated into living biomass [45].
Tolerance to metals such as arsenic (As), mercury (Hg), cobalt (Co), iron (Fe), and chromium (Cr) was tested in several native strains of Bacillus sphaericus, along with the assessment of bioaccumulation in live biomass, where it was shown that both living and dead cells showed immense capacity of metal bioaccumulation (Velásquez et al., 2009).
Gram-positive bacteria such as Tsukamurella paurometabola and Gram-negative bacteria such as Pseudomonas aeruginosa have been used for cadmium (Cd) and zinc (Zn) bioaccumulation [127]. Studies have also examined the removal of lead (Pb), cadmium (Cd), arsenic (As), and mercury (Hg) by S. cerevisiae, Pseudomonas putida, and Fusarium flocciferum [109, 127, 141]. Another study compared the bioaccumulation of copper (II), lead (II), and chromium (VI) by Aspergillus niger, which was shown to be extremely susceptible to all tested chromium (VI) concentrations [57]. These findings suggest that A. niger could serve as an effective living biosorbent for the removal of heavy metal ions [57]. Bacteria such as Bacillus circulans, Bacillus megaterium, and Deinococcus radiodurans, and fungi such as Aspergillus niger and Monodictys pelagica, are also reported to accumulate Cr, U, and Pb from electronic waste (Patel et al., 2014).
Recombinant E. coli has also been reported to contribute to cadmium bioaccumulation by expressing metallothionein (MT) in the cytosol [105]. Another study reported a twofold increase in cadmium bioaccumulation when MT expression was combined with glutathione and phytochelatin synthesis (González et al., 2021).
This technology depends largely on the growth rate of the microorganisms used and on their ability to accumulate the heavy metal. Moreover, the success rate in field trials still lags far behind in vitro findings. Nevertheless, the cost-effectiveness of this method cannot be ruled out.
Biomineralization
Biomineralization entails the microbial synthesis of specific inorganic substances using substrate molecules, benefiting the biological system (Kim et al., 2013).This process includes microorganisms accumulating anions or ligands, which then bind to hazardous metal contaminants and precipitate (Patel et al., 2014).It is a frequently employed method for treating e-waste components such as hazardous heavy metals and polymers via degradation or precipitation [190].As a result, polluting metals transform into moderately stable forms, while organic molecules fracture into less hazardous and more stable states (Ayangberno et al., 2017).
There are two types of biomineralization, viz., biological induced mineralization (BIM) and biological controlled mineralization (BCM) [122].In some situations, BIM causes mineral production inside cells or on cell surfaces [56].On the other hand, BCM includes extracellular mineral production due to the metabolic capacities of microbes [4].
The metallophilic bacterium Cupriavidus metallidurans can aid in cellular detoxification, making it a potential candidate for accumulating Au(III) [144]. Additionally, bacterial strains such as Bacillus fusiformis and Sporosarcina ginsengisoli, along with Cupriavidus metallidurans, are well known for their role in the biomineralization process, effectively eliminating heavy metals such as cadmium, arsenic, and lead [4]. Another study, led by Achal (2012), demonstrated the excellent capability of Sporosarcina ginsengisoli for biomineralization of arsenic (As III).
Although biomineralization has received much attention in recent years, its performance is limited by factors such as the efficiency of the microbes employed and the degree of contamination in the affected area.
Enzymatic degradation of e-waste
As an environmentally friendly biotechnological approach, bioremediation employs biological agents such as plants, bacteria, and their enzymes to transform hazardous pollutants into less toxic or non-harmful chemicals via various metabolic pathways [17]. Scientists have discovered that numerous enzymes originating from microorganisms (bacteria and fungi) and plants play an important role in the bioremediation of pollutants [120]. The enzymatic actions of important enzymes such as oxidoreductases, dioxygenases, and hydrolases have been extensively studied (Fig. 1) (Karigar et al., 2011). Microbial enzymes such as reductases, laccases, esterases, carboxylesterases, catalases, dismutases, and dioxygenases can convert various heavy metals and PAHs into less toxic forms (Table 3) [60, 92, 120, 166]. Microbial enzymatic pathways play an important role in many stages of bioremediation, interacting with hazardous contaminants and converting them into harmless substances [25]. Enzymes offer benefits such as substrate specificity, independence from microbial growth rates, uniformity, and simplicity of handling and storage, minimizing dependency on toxic chemicals [39].
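Because these enzymatic transformations are substrate-specific, their rates are conventionally described by Michaelis-Menten kinetics. The sketch below is a generic illustration of that rate law; the kinetic constants are hypothetical and are not values reported for any enzyme listed above.

```python
def michaelis_menten_rate(s, v_max, k_m):
    """Michaelis-Menten rate v = Vmax * S / (Km + S) for the enzymatic
    transformation of a pollutant at substrate concentration s. The rate
    is nearly linear at low s and saturates at Vmax when the enzyme's
    active sites are fully occupied."""
    return v_max * s / (k_m + s)

# Hypothetical kinetic constants for a pollutant-degrading enzyme
# (illustrative only, not measured values from the cited studies).
v_max, k_m = 5.0, 20.0   # e.g., umol/min and mg/L
for s in (2, 20, 200):
    print(f"S = {s:3d} mg/L -> v = {michaelis_menten_rate(s, v_max, k_m):.2f} umol/min")
```

Note that at s equal to k_m the rate is exactly half of v_max, which is how Km is usually read off experimental degradation curves.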
Various enzymes exhibit diverse capabilities when it comes to degrading heavy metal pollutants from e-waste [120]. Enzymes such as cytochrome P450, nitrilases, dihydrodiol dehydrogenase, esterases, amidases, laccases, proteases, manganese peroxidase (MnP), glucose oxidase, and glyoxal oxidase play essential roles in breaking down different classes of contaminants [25, 97, 142]. Natural enzymes are generally preferred because of their cost-effectiveness. However, emerging technologies such as genetic engineering, recombinant techniques, and nanotechnology offer promising opportunities to produce more efficient enzymes [120], since the amino acid sequences of enzymes can be tailored to achieve specific pH and temperature tolerance, stress resistance, and other metabolic properties necessary for the bioremediation of heavy metals [25]. Enzyme production can also be enhanced by genetic engineering through the transfer of coding genes for expression [75].
The overall molecular mechanisms involved in bioleaching, bioaccumulation, biotransformation, biosorption, biomineralization, reduction, and bio-oxidation are depicted in Fig. 3. Microbial competition, post-inoculation decline, temperature, pH, oxygen levels, moisture, and other environmental conditions all influence bioremediation [77, 104]. Pollutant solubility increases as temperature rises [136]. A lack of in-depth knowledge of physiology, microbial ecology, gene expression, and site-specific variables is a further barrier. Developing advanced bioremediation technologies suitable for complexly polluted sites with diverse toxic pollutants remains a challenge [46].
Challenges associated with bioremediation
There is no consensus on bioremediation acceptance criteria, and no widely accepted definition or standard treatment technique exists (Sharma, 2021). Assessing bioremediation potential is complex, as the inhibition of microorganisms by toxic heavy metals depends on factors such as metal ion concentration, redox potential, and environmental conditions [85]. The effectiveness of metal-microbe complex stabilization depends on parameters such as sorption sites and the cell wall structure of the microorganism [85]. Overall, the efficiency of bioremediation is determined by the substrate subjected to treatment and the specific environmental circumstances at hand (Anekwe et al., 2022).
Potential roles of genetically modified organisms (GMOs) in degrading e-waste components
Genetic engineering presents promising opportunities for mitigating various heavy metals and pollutants, including polyaromatic hydrocarbons (PAHs), which are often challenging to address through conventional bioremediation methods (Verma et al., 2019). Genetically modified organisms (GMOs) offer significant advantages for bioremediation owing to their environmentally friendly nature and reduced health risks compared to physicochemical methods, which are less eco-friendly and pose potential dangers to life [91].
For instance, E. coli JM109 modified with pCLG2 (M5) and pGPMT (M4) plasmids demonstrated enhanced absorption of Cd 2+ due to the expression of the cadmium transport system and metallothionein (MT) in M4, effectively doubling the strain's original absorption capacity [52].In another study, Huang et al. [82] utilized a genetically modified Bacillus subtilis strain 168 to methylate and volatilize arsenic (As) with the CmarsM gene from heat-resistant algae Cyanidioschyzon merolae, potentially aiding in the cleanup of As-contaminated compost.Li et al. [102] employed a novel approach, STAR, using CRISPR-ddAsCpf1 to enhance the electron transfer capacity of Shewanella oneidensis MR-1, leading to improved bioreduction of heavy metals like chromium.
Furthermore, certain enzymes can transform heavy metals (HMs) into less toxic forms.For example, when the mercury resistance gene merA from Deinococcus radiodurans is expressed in E. coli BL308, it enables the bacterium to tolerate higher concentrations of Hg (II) and convert it into less toxic Hg (0) [34].Researchers have identified metal-binding peptides responsible for capturing heavy metals (HMs), such as cadB for cadmium (Cd) (II) and zinc (Zn) (II), pbrT and pbrD for lead (Pb) (II), and copM for copper (Cu) (II), while metallothioneins with cysteine and sulfhydryl groups are utilized for HM binding [70,181].
In addition to heavy metals, genetically modified organisms have shown effectiveness in degrading polyaromatic hydrocarbons (PAHs). The breakdown of PAHs by genetically engineered microorganisms (GEMs) relies on specific enzymes such as dioxygenases, monooxygenases, hydroaldolases, and dehydrogenases [43]. Changes in degradation pathways and efficiency often depend on variations in the enzymes encoded by functional genes [43]. These functional genes are frequently used to construct GEMs capable of degrading PAHs. For example, Mohtashami et al. [118] inserted the laccase gene (poxa1b) from Pleurotus ostreatus into E. coli BL21, achieving 17% oxidation of benzo[α]pyrene. They co-expressed pdoAB with plasmid pBRCD, achieving oxidation of phenanthrene, pyrene, anthracene, and benzo[α]pyrene, facilitated by electron transfer components from plasmid pBRCD. However, to the best of our knowledge, no literature reports successful field implementation of GMOs for degrading e-waste components.
Future prospective
The importance of addressing the current pace and quantity of e-waste, as well as its environmental effects, cannot be overstated.The current scenario highlights that inadequately managed e-waste recycling processes result in the release of enduringly hazardous substances like PBDEs and PCDDs into the atmosphere, residual ash, airborne particles, soil, water, and the nearby environment.Furthermore, as shown by Miller et al. [113], these hazardous elements eventually make their way into both oceanic and terrestrial ecosystems, sparking a process of bioaccumulation and biomagnification.
As the accumulation of such hazardous chemicals continues to rise, the availability of extractable elements diminishes. Scientists have therefore developed environmentally appropriate methods for recycling and recovering toxic substances from waste to avert disastrous repercussions. These measures not only protect human health but also have significant environmental benefits, both now and in the future [152]. Additionally, bioremediation methods have gained substantial traction for the purification of landfills and groundwater reservoirs [137].
Despite the array of techniques available for waste management, their appropriate implementation remains deficient in both developed and developing countries (Ferronato et al., 2019). The pressing necessity for well-defined regulations, maintenance protocols, and comprehensive policies to monitor the health and environmental issues stemming from toxic metals cannot be overstated. This need is particularly relevant in the current context and remains a priority for the future.
Conclusions
The rapid increase in e-waste poses a significant challenge that requires immediate attention.This problem has global ramifications, impacting regions worldwide with a wide range of difficulties associated with e-waste disposal.Addressing this challenge is crucial as we strive for a sustainable future.As research progresses, new technologies are emerging to confront this impending disaster, each with its own advantages and disadvantages.However, the practicality of any advancement lies in its ability to serve humanity in a cleaner and more environmentally friendly manner.Microorganisms offer a promising solution to this issue through various mechanisms such as biosorption, bioleaching, biotransformation, bioaccumulation, and enzymatic pathways.It has been found that microorganisms can effectively remediate a wide range of e-waste, including hydrocarbons like polychlorinated biphenyls (PCBs), pharmaceuticals, oil, and polyaromatic hydrocarbons (PAHs), in an eco-friendly, reliable, and economically feasible manner.Furthermore, certain microbes have been observed to facilitate the leaching process, potentially opening up new avenues in metallurgy and metal extraction from ores.However, it is important to note that microbial degradation processes are often more time-consuming compared to physical and chemical methods.Nonetheless, there is significant potential for improving microbial degradation processes through modern biotechnological interventions in the future.
Mycobacteria, a group of actinomycetes, possess intrinsic resistance to adverse conditions and are particularly adept at remediating heavy metals and decomposing polychlorinated phenol derivatives and various PAHs (Azadi et al., 2020). Bacteria typically degrade PAHs using enzymatic activities such as oxygenases and peroxidases; examples include AlkB from Pseudomonas sp., naphthalene monooxygenase from P. putida, and cytochrome P450 from yeast species such as C. maltosa and C. tropicalis [59] (Das et al., 2011). Various fungi, including basidiomycetes, deuteromycetes, and white-rot fungi, have also demonstrated PAH degradation capabilities [134]. Unlike bacteria, fungi utilize PAHs alongside other carbon sources, producing oxidized products including carbon dioxide (CO2) [132]. White-rot fungi such as Phanerochaete chrysosporium are particularly efficient at removing PAHs owing to their production of extracellular ligninolytic enzymes such as lignin peroxidase, manganese peroxidase (MnP), and laccase (Lac) [1, 100]. Despite the potential of bacteria and fungi in degrading PAHs, challenges exist.
Biotransformation, in the context of e-waste remediation, refers to the chemical alteration of metals by microbes, or changes in their oxidation state caused by the addition or removal of electrons by microbial agents; it plays an important role in transforming chemical pollutants into more environmentally benign compounds (Karigar et al., 2011; Das et al., 2012).
Despite technological breakthroughs and cost-effectiveness compared with older processes such as incineration or landfilling, bioremediation confronts several challenges. Certain e-waste components, such as chlorinated organic chemicals and strongly aromatic hydrocarbons, resist bacterial decomposition (Viswakarma et al., 2020). The type and amount of contaminants, soil texture, geographical location, and adsorption by soil particles all affect bioremediation effectiveness (Temitope et al., 2022). The selectivity of bioremediation necessitates particular microbial species, proper growth conditions, and sufficient nutrient availability (Philip et al., 2005).
Fig. 3 Various modes of action of microbes involved in the biodegradation of e-waste
Table 1 Hazardous electrical components and their health consequences
Sl. no | Types of pollutants | Electrical components | Health consequences | Reference
Table 2 Microorganisms involved in e-waste degradation

Numerous studies have demonstrated that Aspergillus niger generates gluconic acid, which can chelate and dissolve substantial amounts of different metals, including Li, Cu, Mn, Al, Ni, and Co (Horeh et al., 2018; Biswal et al., 2023). Some research has found that Aspergillus niger may also leach zinc oxide, while Penicillium sp. is often used in gold recovery bioleaching approaches (Trivedi et al., 2021; [148]). Metals such as Al, Zn, Cu, and Cd have been efficiently recovered from fly ash by Aspergillus niger (Annamalai et al., 2019).
Table 3 Microbial enzymes involved in e-waste degradation
Research on Quantitative Analysis Method of Combined Maintenance Task
In this paper, delay time theory is used to establish a mathematical model of the combination of condition monitoring and periodic replacement tasks, and the influence of the monitoring and replacement intervals is discussed. Through an optimization solution, the optimal cycle of the combined maintenance tasks is obtained, minimizing maintenance cost while ensuring reliability. Finally, an example is given to demonstrate the feasibility of this method.
Introduction
RCM is the most widely used maintenance program for nuclear power plants. Reliability-centered maintenance [1] is a systems engineering method commonly used internationally to determine equipment maintenance requirements and to formulate and optimize maintenance strategies. It originated in the U.S. aviation industry and has since been applied to military equipment, electric power (including nuclear power), railways (including subways), petroleum and petrochemicals, processing and manufacturing, shipping, and other industries. John Moubray, the founder of Aladon, a UK company engaged in the promotion and application of RCM, published the monograph RCM II [2]. Although RCM is widely used and can improve equipment reliability, it tends toward qualitative analysis and lacks a quantitative analysis process. From 2002 to 2014, although the U.S. nuclear power industry continued to improve safety and reliability, its average power generation cost (including nuclear fuel, capital, and operation and maintenance) increased by 28%. Therefore, in 2016 the U.S. industry launched the DNP industry plan, "Delivering the Nuclear Promise: Advancing Safety, Reliability and Economic Performance," with the goal of reducing costs by 30% by 2020. The action was initiated by the Nuclear Energy Institute (NEI, the agency responsible for communication on behalf of nuclear power companies on general nuclear safety issues), coordinated by INPO, with EPRI (the Electric Power Research Institute) responsible for the technical part. Under the DNP plan, the NEI industry working group issued a series of efficiency bulletins. In the DNP action plan, EB 17-03A [3] is VBM (value-based maintenance), value-oriented maintenance, which mainly addresses the current problem of ensuring equipment reliability regardless of cost.
At present, maintenance program optimization techniques mainly include system fault modeling (using Monte Carlo simulation) and the delay-time maintenance model [4][5]. System fault modeling investigates equipment failure modes through fault distributions and resource availability to determine the optimal strategy, while delay-time theory determines the optimal monitoring cycle of equipment by considering fault consequences. Maintenance task combination refers to a maintenance method that implements two or more types of preventive maintenance. It has practical application value and is widely used in maintenance practice. For example, to reduce "temporary repairs" and eliminate breakdowns, the maintenance of railway diesel locomotive gearboxes often adopts several minor repairs within one overhaul cycle: minor repairs focus mainly on condition monitoring, with measures taken according to the performance status of the equipment, while an overhaul generally renovates or replaces the equipment to restore it completely to a new state. Although, by definition, a maintenance task combination can comprise any set of different tasks, in maintenance practice two typical PM tasks, condition monitoring and periodic replacement, are usually combined. The advantage of condition monitoring is finding hidden faults in time and avoiding serious fault consequences through regular or irregular evaluation of the equipment's condition. The advantage of periodic replacement is renewing the equipment before it reaches the wear-out period, restoring its original resistance to failure. By combining these two typical PM tasks, the advantages of both can be effectively integrated.
Aiming at the implementation of maintenance task combination, this paper adopts the delay-time maintenance model to explore the equipment maintenance process and its characteristics under this maintenance strategy, and establishes mathematical models of the task cycle according to different decision objectives, so as to provide a reference basis for analyzing and comparing the applicability and necessity of the task combination strategy. On this basis, typical equipment is selected to verify the above decision-making model.
Research Questions
In the DNP action plan of American nuclear power plants, eb17-03a [1] is VBM (value based maintenance), i.e., value-oriented maintenance, which mainly addresses the current practice of ensuring equipment reliability somewhat blindly, regardless of cost. This problem is also severe in domestic nuclear power plants. Besides reducing unnecessary maintenance of economic equipment, the maintenance cost of important equipment is often high, so how to address this issue is a research topic. In a power plant, there are usually maintenance tasks of both regular maintenance or regular replacement and condition monitoring, and the two maintenance tasks have their own advantages. After maintenance tasks are combined, whether the required maintenance cost is reduced, and whether the invested preventive maintenance cost yields greater benefits, needs to be calculated scientifically and accurately, and the maintenance process needs to be described and quantified through a mathematical model. From the perspective of economy, this paper establishes a mathematical model of the maintenance cost per unit time after combining condition monitoring and periodic replacement. At present, the maintenance strategy for the dynamic and static rings of a typical steam-driven feed pump in a nuclear power plant includes a 1-month condition monitoring task and a 36-month periodic replacement task.
Research Model
The main work of this paper is to demonstrate the economic impact of combining the two tasks by establishing an economic model of the condition monitoring and periodic replacement task combination, so as to find the optimal combination of task cycles.
Model Basis
The combination of condition monitoring and periodic replacement is implemented as follows: condition monitoring is conducted after a certain time; if the product is found to be in a potential fault state during monitoring, a preventive replacement is carried out; if there is no potential fault and the condition is good, no measures are taken and use continues. If a functional failure occurs between two monitorings, the equipment is immediately stopped for repair. When the (k-1)-th condition monitoring has been completed and the time of the k-th monitoring is reached, a preventive replacement is carried out regardless of the product's state (as shown in Figure 1). The notation is as follows: T_n is the monitoring interval of the combined task and T_r is the replacement interval; U is the service time until a potential fault appears, also known as the initial time, with density g(u) and distribution G(u); H is the service time from potential fault to functional fault, also known as the delay time, with density f(h) and distribution F(h); C_r is the cost of regular product replacement; C_n is the cost of product condition monitoring; C_p is the cost of preventive maintenance for a potential fault detected by monitoring; C_f is the cost of repairing the product after a functional failure; T_pr is the time required for regular product replacement. According to the P-F interval concept of condition monitoring, the formation of an equipment failure is divided into two stages, which are assumed to be independent of each other: the first stage is the time from putting the equipment into use until a potential fault; the second stage is the time from the potential fault to the functional failure (as shown in Figure 2 below).
Model Establishment
Assume the initial defect appears at time u (the potential fault point) with density g(u) and distribution G(u), and the fault delay time is h (so the functional fault occurs at time u + h) with density f(h) and distribution F(h). If the item is monitored after time u, the defect is found and the item is updated; if the defect is not found, it leads to a failure after the delay time h and the equipment is then updated. Figures 3 and 4 describe the condition monitoring process with interval T_n, where iT_n denotes the i-th monitoring point (i = 1, 2, 3, ...).

For a defect arising at u in the interval ((i-1)T_n, iT_n), the probability of a failure update before the i-th monitoring is F(iT_n - u), and the probability of a monitoring update at the i-th monitoring is 1 - F(iT_n - u). Integrating over u, the probability of a failure update (the failure risk) in the i-th interval can be expressed as

P_f(i) = \int_{(i-1)T_n}^{iT_n} g(u) F(iT_n - u) \, du.    (1)

In order to obtain the best replacement cycle and minimize the average cost per unit time, the renewal-reward theorem is applied: the expected cost per unit time equals the expected cost of one renewal cycle divided by the expected cycle length,

C(T_n, k) = E[C_{cycle}] / E[L_{cycle}].    (2)

Within a replacement cycle T_r = kT_n, three mutually exclusive cases are distinguished.

Case 1: during the entire replacement cycle T_r, the product neither fails nor is preventively replaced because of a potential fault found at monitoring; that is, no update event occurs, and the maintenance cost is (k-1)·C_n plus the scheduled replacement cost C_r. There are two possibilities for this event: (A) no potential fault occurs before the replacement cycle T_r, i.e., U ≥ T_r; (B) a potential fault occurs between the last monitoring and the regular replacement, (k-1)T_n < U < kT_n, but no functional failure occurs before the regular replacement, i.e., U + H > kT_n. The probability of Case 1 is therefore

P_1 = 1 - G(kT_n) + \int_{(k-1)T_n}^{kT_n} g(u)[1 - F(kT_n - u)] \, du.    (3)

Case 2: the cycle ends with a preventive update because the potential fault is found at the i-th monitoring (i = 1, ..., k-1); in this case, the maintenance cost is i·C_n + C_p.
For the defect to be found at the i-th monitoring, two conditions must hold: the defect appears at some u in ((i-1)T_n, iT_n), i.e., no defect exists before the (i-1)-th monitoring; and the delay time exceeds the remaining time to the i-th monitoring, H > iT_n - u. The probability of a preventive update at the i-th monitoring is therefore

P_2(i) = \int_{(i-1)T_n}^{iT_n} g(u)[1 - F(iT_n - u)] \, du,  i = 1, ..., k-1.    (4)

Case 3: the cycle ends with a failure update at some moment x in ((i-1)T_n, iT_n); in this case, the maintenance cost is (i-1)·C_n + C_f, and the probability is P_f(i) as given by Equation (1). The expected maintenance cost per unit time of the maintenance task combination is then obtained by substituting the expected cycle cost and expected cycle length, built from Equations (1), (3) and (4), into Equation (2). Finally, by optimizing the monitoring cycle or the replacement cycle, the minimum maintenance cost and the optimal solution of the corresponding maintenance cycle are calculated.
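The renewal-reward computation above can be sketched numerically. The snippet below is a minimal illustration rather than the paper's implementation: the integrals use a midpoint rule, the time of a failure update is approximated by the midpoint of the interval in which it occurs (the exact cycle-length term requires a double integral over u and h), and the distributions and parameter values are illustrative placeholders.

```python
import math

def integrate(f, a, b, n=2000):
    """Midpoint-rule numerical integration of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (j + 0.5) * h) for j in range(n)) * h

def cost_rate(Tn, k, g, G, F, Cn, Cr, Cp, Cf):
    """Expected maintenance cost per unit time for the combined policy:
    condition monitoring every Tn, scheduled replacement at Tr = k*Tn."""
    Tr = k * Tn
    # Case 1: no update before Tr (k-1 monitorings plus the scheduled replacement)
    P1 = (1 - G(Tr)) + integrate(lambda u: g(u) * (1 - F(Tr - u)), (k - 1) * Tn, Tr)
    cost, length = P1 * ((k - 1) * Cn + Cr), P1 * Tr
    for i in range(1, k + 1):
        a, b = (i - 1) * Tn, i * Tn
        if i < k:
            # Case 2: defect arises in ((i-1)Tn, iTn) and survives to monitoring i
            P2 = integrate(lambda u: g(u) * (1 - F(i * Tn - u)), a, b)
            cost += P2 * (i * Cn + Cp)
            length += P2 * i * Tn
        # Case 3: defect arises in the interval and fails before the next monitoring
        Pf = integrate(lambda u: g(u) * F(i * Tn - u), a, b)
        cost += Pf * ((i - 1) * Cn + Cf)
        length += Pf * (a + b) / 2  # crude midpoint approximation of failure time
    return cost / length

# Illustrative distributions (placeholders, not fitted values):
g = lambda u: math.exp(-u / 14) / 14                   # exponential initial time
G = lambda u: 1 - math.exp(-u / 14)
F = lambda h: 1 - math.exp(-((max(h, 0) / 6) ** 1.5))  # Weibull delay time

print(cost_rate(Tn=12, k=3, g=g, G=G, F=F, Cn=1000, Cr=10000, Cp=8000, Cf=50000))
```

Scanning `cost_rate` over a grid of (Tn, k) values then yields the optimal task cycles for the chosen cost structure.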
Demonstration of the Implementation
We collected and summarized the relevant maintenance data of the dynamic and static rings of the mechanical seal of a steam-driven feed pump (hereinafter referred to as the dynamic and static rings of the mechanical seal on the high-pressure side). First, the monitoring update, fault update and other data of the dynamic and static rings of the turbine seal were extracted from the maintenance records of the turbine-driven feed pump, and the relevant information was entered into a standardized table, as shown in Table 1. From the table, the high-pressure-side dynamic and static rings of the turbine seal underwent 9 monitoring updates and 3 fault updates. The following specific information can be obtained from the table (time unit: month): three fault updates occurred at times Tj = (6, 15.5, 33), with the latest monitoring before each update at (0, 13, 21); nine monitoring updates occurred at times Tk = (41, 2.5, 2, 21.5, 2.5, 20, 16.5, 9.5, 25), with the latest monitoring before each update at (23, 0, 0, 11.5, 0, 9, 11.5, 8.5, 2); at the end of monitoring, 3 items were still working normally.
If the functional forms of the initial time and delay time are difficult to determine a priori, the Weibull distribution is used for parameter estimation, and the distribution form is then judged from the shape parameter in the estimation results. The maximum likelihood function is constructed from the above information and maximized. Here, the results show that the initial time of a dynamic/static ring defect of the mechanical seal follows an exponential distribution with a mean of 14 months, and the delay time follows a Weibull distribution with shape parameter 1.5 and scale parameter 6 months. Through data collection, the maintenance times and costs of the dynamic and static rings of the mechanical seal were obtained as follows: under normal circumstances, condition monitoring takes 0.5 days and costs 1000 yuan; regular replacement takes 2 days and the corresponding maintenance cost is 10000 yuan. If a defect is found through condition monitoring during operation, the item can be restored as new, taking 1 day and costing 8000 yuan; if an unexpected failure occurs, repair is conducted immediately, taking 5 days and costing 50000 yuan.
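Under the fitted distributions, the implications of the current policy (1-month monitoring, 36-month replacement) can be checked directly. The short sketch below follows only from the fitted parameters stated above; it uses the standard Weibull mean formula, and the variable names are ours.

```python
import math

beta, eta = 1.5, 6.0   # fitted Weibull shape / scale (months) for the delay time
mean_initial = 14.0    # fitted exponential mean (months) for the initial time

# Mean of a Weibull(beta, eta) variable: eta * Gamma(1 + 1/beta)
mean_delay = eta * math.gamma(1 + 1 / beta)

# With a 1-month monitoring interval, a defect arising just after a monitoring
# is caught at the next one provided its delay time exceeds 1 month:
p_caught = math.exp(-((1.0 / eta) ** beta))  # P(H > 1) = 1 - F(1)

print(f"mean delay time: {mean_delay:.2f} months")
print(f"P(delay > 1 month): {p_caught:.3f}")
```

Because the mean delay time is several months while the monitoring interval is 1 month, most defects are likely to be intercepted by monitoring before they grow into functional failures under this policy.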
Discovering Geographical Flock Patterns of CO2 Emissions in China Using Trajectory Mining Techniques
Carbon dioxide (CO2) emissions are considered a significant factor that results in climate change. To better support the formulation of effective policies to reduce CO2 emissions, specific types of important emission patterns need to be considered. Motivated by the flock pattern that exists in the domain of moving object trajectories, this paper extends this concept to a geographical flock pattern and aims to discover such patterns that might exist in CO2 emission data. To achieve this, a spatiotemporal graph (STG)-based approach is proposed. Three main parts are involved in the proposed approach: generating attribute trajectories from CO2 emission data, generating STGs from attribute trajectories, and discovering specific types of geographical flock patterns. Generally, eight different types of geographical flock patterns are derived based on two criteria, i.e., the high–low attribute values criterion and the extreme number–duration values criterion. A case study is conducted based on the CO2 emission data in China on two levels: the province level and the geographical region level. The results demonstrate the effectiveness of the proposed approach in discovering geographical flock patterns of CO2 emissions and provide potential suggestions and insights to assist policy making and the coordinated control of carbon emissions.
Introduction
Climate change is one of the biggest challenges faced by humankind, and its impact is becoming increasingly significant [1,2]. It can greatly affect the environment, production system, survival and development of humankind [3][4][5], hence typical extreme events and disasters (e.g., storms, droughts, heatwaves, fires, and floods) have become stronger and more frequent [6]. The 2022 Intergovernmental Panel on Climate Change (IPCC) annual report demonstrates that the world faces unavoidable climate hazards over the next two decades with global warming of 1.5 °C (2.7 °F), and urgent actions are required to deal with increasing risks [7]. One commonly acceptable solution is to make rapid and deep cuts in greenhouse gas emissions, particularly carbon dioxide (CO2) [8].
The majority of countries in the world have been seeking effective ways to reduce CO2 emissions. According to previous research, China is currently the largest emitter of CO2, accounting for 31% of global CO2 emissions in 2020 due to its rapid economic development and urbanization [9]. To achieve net-zero or near-zero CO2 emissions, the Chinese government has launched a long-term mitigation goal, which indicates that China would reach peak emissions by 2030 and achieve carbon neutrality by 2060 [10]. Motivated by this national strategy, emerging research regarding carbon emissions in China has attracted much attention in various domains. Typical research focuses include the modelling, prediction, trading and policy evaluation of carbon emissions [11][12][13][14][15][16]. The establishment of effective policies to reduce carbon emissions is critical. Nevertheless, the long-term reduction of carbon emissions remains a key policy challenge for China and the world [17]. To meet the requirements of establishing effective policies, specific, precise, and flexible policies must be proposed for different levels of geographical units (e.g., county, city, province, region, and country). Understanding the trends and trajectories of carbon emissions remains challenging in light of uncertainty about world economies and technological breakthroughs [18]. In this regard, the discovery of potentially important emission patterns must be performed on historical carbon emission data. The discovery of emission patterns plays a significant role in guiding the formulation of specific, precise, and flexible policies and the coordinated control of carbon emissions. Therefore, this paper focuses mainly on discovering important patterns from carbon emission data.
Carbon emission data are usually time series data, and time series can be classified into three types, i.e., univariate time series, bivariate time series, and multivariate time series [19,20]. In this paper, we are particularly interested in bivariate time series, because they provide more abundant information than conventional univariate time series, while the related costs (e.g., computation time) are lower than for complex multivariate time series. As a bivariate time series integrates the information of two different attributes into one coordinate system, it can be fully transformed into an attribute trajectory, which was introduced in [21] as a novel kind of trajectory. On this basis, traditional trajectory mining techniques can be adopted to discover desired movement-related patterns.
Nowadays trajectory data are ubiquitous, having benefited from the proliferation of location aware techniques such as the global navigation satellite system (GNSS), Bluetooth, radio frequency identification (RFID), and Wi-Fi. The discovery of movement patterns has an important place in the domains of trajectory data mining as these patterns can exhibit the rules of individuals' movements and their interactions. Typical movement patterns include the flock pattern [22][23][24][25][26], convoy pattern [27][28][29], leadership pattern [30][31][32], moving cluster [33,34], and crew [35]. Among these, the flock pattern has received much attention, and it commonly plays an important role in many application fields. A flock can be informally depicted as "a group of spatially close objects staying together for a specific time duration". Correspondingly, assuming geographical units are considered as objects and their attribute values stay close for a specific time duration, these geographical units can be regarded as forming a flock. In this way, the traditional flock existing among moving objects can be extended to the geographical flock existing among geographical units. Given the importance of discovering flock patterns in traditional trajectory data and the shortage of flock pattern discoveries from the new attribute trajectory data, this paper aims to discover geographical flock patterns from carbon emission data.
To achieve the discovery of geographical flock patterns, we propose a spatiotemporal graph (STG)-based approach. The proposed approach includes three main parts: first, the carbon emission data are transformed to attribute trajectory data; second, the STGs are generated from attribute trajectory data; third, specific types of geographical flock patterns are discovered from STGs. We adopt two criteria (i.e., the high-low attribute values criterion, and the extreme number-duration values criterion) to derive various types of geographical flock patterns. In short, four corresponding but different types of geographical flock patterns can be derived according to each criterion. A case study is conducted to verify the usefulness and applicability of the proposed approach, in which the original province-level CO2 emission data in China are employed. The geographical flock patterns of CO2 emissions are discovered at two levels, i.e., the province level and the geographical region level. The findings of this study may provide us with potential suggestions to assist policy making and the coordinated control of carbon emissions. The framework of this study is shown in Figure 1. According to this framework, important patterns (i.e., the geographical flock patterns in this paper) can be discovered from carbon emission data, and effective policies for reducing CO2 emissions may thus be formulated with the support of the discovered important patterns.
Basic Concepts
To facilitate understanding, we introduce three basic concepts in the following, namely bivariate time series, attribute trajectory and geographical flock.
Bivariate Time Series
Definition 1 (Time series). Given an object O, assume the value of a specific attribute of O at timestamp t i is v i (where 1 ≤ i ≤ n); then the sequence TS = {(v 1 , t 1 ), (v 2 , t 2 ), . . . , (v n , t n )} is called a time series.
Definition 2 (Bivariate time series).
Given an object O, assume the values of two specific attributes of O at timestamp t i are u i and v i (where 1 ≤ i ≤ n), respectively; then the two sequences TS 1 = {(u 1 , t 1 ), (u 2 , t 2 ), . . . , (u n , t n )} and TS 2 = {(v 1 , t 1 ), (v 2 , t 2 ), . . . , (v n , t n )} in the same coordinate system constitute a bivariate time series.
Note that in Definition 3, (x, y) denotes the spatial location (e.g., latitude and longitude) of a moving object. In addition to spatial location, (x, y) can denote other attributes as well. If the 2D spatial locations are replaced by two other attributes in a corresponding attribute space, then an attribute trajectory can be generated. The definition of attribute trajectory is given in Definition 4.
An illustration of a trajectory and an attribute trajectory is shown in Figure 2. Note that Figure 2a illustrates a traditional trajectory, and Figure 2b illustrates a novel attribute trajectory. According to Figure 2, one of the most significant differences between the two kinds of trajectories is the plane space of the coordinate systems: for the traditional trajectory, the plane space is usually a geographical space, whereas for the novel attribute trajectory, the coordinate system is generally an attribute space, in which each of the two axes denotes a corresponding attribute. Given the high similarity between the structures of a traditional trajectory and an attribute trajectory, traditional trajectory mining techniques can be adapted to mine information from attribute trajectories.
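The transformation from a bivariate time series to an attribute trajectory can be sketched as follows. This is a minimal illustration with our own variable names; the sample values are invented.

```python
def to_attribute_trajectory(ts1, ts2):
    """Merge two time series TS1 = [(u, t), ...] and TS2 = [(v, t), ...]
    sharing the same timestamps into an attribute trajectory [(u, v, t), ...],
    where (u, v) plays the role of the 2D location in attribute space."""
    assert [t for _, t in ts1] == [t for _, t in ts2], "timestamps must match"
    return [(u, v, t) for (u, t), (v, _) in zip(ts1, ts2)]

# Invented sample data: one unit's total emissions and growth rate per year
emissions   = [(10.0, 1998), (11.5, 1999), (12.1, 2000)]
growth_rate = [(0.05, 1998), (0.15, 1999), (0.05, 2000)]
traj = to_attribute_trajectory(emissions, growth_rate)
# traj[0] == (10.0, 0.05, 1998)
```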
Geographical Flock

Definition 5 (Flock). Given a set of n trajectories of n moving objects, an (r, m, k)-flock F during a time interval I = [t i , t j ] (where j − i + 1 ≥ k) consists of at least m objects such that for each discrete timestamp within I there exists a disk of radius r containing all m objects.

Definition 6 (Geographical flock). Given a set of n attribute trajectories of n geographical units, an (r, m, k)-geographical flock F during a time interval I = [t i , t j ] (where j − i + 1 ≥ k) consists of at least m geographical units such that for each discrete timestamp within I there exists a disk of radius r in the attribute space containing the attribute values of all m units.

Figure 3a illustrates a flock, wherein three objects (i.e., O1, O2 and O3) form a flock during the time period from t2 to t4, and Figure 3b illustrates a geographical flock, which indicates that four geographical units (i.e., G1, G2, G3 and G4) can form a geographical flock (assuming their attribute values at the same timestamp during a time duration are within a given threshold). To assist understanding, a geographical flock can be informally understood as "a group of geographical units with similar attribute values lasting for a specific time duration."
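Definition 5 can be checked per timestamp. Deciding whether a disk of radius r covers all points is a smallest-enclosing-circle problem; the sketch below uses the common pairwise-distance relaxation (all points within 2r of each other), which is necessary but not strictly sufficient for the disk condition. The function and data names are ours.

```python
import math
from itertools import combinations

def within_disk_approx(points, r):
    """Pairwise relaxation of the disk test: every pair within distance 2r."""
    return all(math.dist(p, q) <= 2 * r for p, q in combinations(points, 2))

def is_flock(trajs, r, m, k, start):
    """Check an (r, m, k)-flock for the given objects over timestamps
    start .. start+k-1; trajs maps object id -> list of (x, y) per timestamp."""
    if len(trajs) < m:
        return False
    return all(
        within_disk_approx([trajs[o][t] for o in trajs], r)
        for t in range(start, start + k)
    )

# Three objects moving right together (invented coordinates):
trajs = {1: [(0, 0), (1, 0), (2, 0)],
         2: [(0.5, 0), (1.5, 0), (2.5, 0)],
         3: [(0.2, 0.1), (1.2, 0.1), (2.2, 0.1)]}
print(is_flock(trajs, r=0.5, m=3, k=3, start=0))
```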
Study Area and Data
China includes 23 provinces, five autonomous regions, four municipalities, and two special administrative regions. Due to the problem of missing data from the provinces of Tibet, Taiwan, Hong Kong (special administrative region) and Macao (special administrative region), we take only the other remaining provinces/autonomous regions/municipalities as the study area. To facilitate interpretation, we group all of these under the term "provinces". Therefore, 30 provinces are involved in the study area. As we aim to discover geographical flock patterns to support the coordinated formulation of potential policies in controlling carbon emissions, we also conduct our case study on the level of geographical region, which includes seven altogether. The detailed information of the 30 provinces and the seven geographical regions are listed in Table 1, in which the names and IDs of all the provinces and geographical regions are included. The visualization of the study area is shown in Figure 4, in which Figure 4a denotes the 30 provinces and Figure 4b the seven geographical regions. Note that in Figure 4a, the white color indicates that the corresponding province is not included in the study area.
The data used in this study were acquired from the Carbon Emission Accounts and Datasets for emerging economies (CEADs) [36], which provides datasets related to carbon emissions in China at either province level, prefecture-level city level, or county level with different time spans (year as the unit) to the public. The data of province-level CO2 emissions adopted in this study include the exact information of CO2 emissions of the 30 provinces. The time span of the data is from 1998 to 2019, and the temporal resolution is one year. Specifically, the data contain the sectoral CO2 emissions inventory for all 30 provinces. We use the total consumption of all sectoral CO2 emissions inventory as the value of the total CO2 emissions for each province.
Note that we adopt the linear interpolation method to derive the corresponding values for the two missing data points. Two important attributes, i.e., the total CO2 emissions per year and the growth rate of total CO2 emissions per year, which were studied in a previous work [37], are used to generate the corresponding attribute trajectory of each province and geographical region.
Methodology
We develop an STG-based approach to discover geographical flock patterns from bivariate time series data (e.g., CO2 emission data). The methodology includes three main parts: first, the CO2 emission data are transformed to attribute trajectory data; second, the STGs are generated from attribute trajectory data; third, specific types of geographical flock patterns are discovered from the generated STGs. In the following, each part will be described in more detail.
Generating Attribute Trajectory Data
As introduced in Definition 2, a bivariate time series has two different attributes. The core of generating a corresponding attribute trajectory from a bivariate time series can be seen in Definition 4. Note that to make effective comparisons between the values of the two attributes (i.e., the total CO2 emissions, and the growth rate of total CO2 emissions), it is necessary to consider a normalization operation. The adopted normalization method is the Z-Score method, which is represented in Equation (1):

v_norm = (v − E) / σ    (1)

where v_norm is the normalized value of the original attribute value v, and E and σ are the mean value and the standard deviation of all the original attribute values, respectively. According to this method, the attribute trajectory data of all geographical units (i.e., provinces and geographical regions) can be fully generated.
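Equation (1), applied to each attribute before building the attribute trajectories, can be sketched as below (a minimal illustration using the population standard deviation over all observed values; the sample numbers are invented):

```python
import math

def z_score(values):
    """Normalize a list of attribute values with the Z-Score method:
    v_norm = (v - mean) / std."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / std for v in values]

norm = z_score([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
# mean = 5, population std = 2, so the first value maps to (2 - 5) / 2 = -1.5
```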
Generating STGs
The STGs are generated based on regularly sampled attribute trajectory data. Similar to other types of graphs, an STG can be represented as G = (V, E), where V and E denote the set of vertices and the set of edges, respectively. Two steps are required to construct an STG: the generation of vertices and the construction of edges.
As mentioned in Definition 6, three essential parameters (i.e., r, m and k) are required in a geographical flock. The parameter r is considered when generating vertices. If the "locations" of geographical units in the attribute space at the same timestamp are within a circle of radius r whose value is user-defined, then the corresponding geographical units are considered to be involved in the same vertex. It should be noted that the selection of the base geographical unit (i.e., the geographical unit whose location at a timestamp is regarded as the center of the corresponding circle) is important. In this approach, we propose that for each timestamp, the geographical unit whose distance in the attribute space is closest to the centroid of all geographical units is considered as the base geographical unit. In this way, the vertices at each timestamp can be automatically generated. When constructing the edges, two basic principles are adopted: (1) any two vertices at the same timestamp are not allowed to be connected, and (2) any two vertices at two consecutive timestamps have to be connected if at least one common geographical unit is involved in both vertices. Thus, the edges can be fully constructed.
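The vertex and edge construction described above can be sketched per timestamp as follows. This is our illustrative reading of the procedure: in particular, grouping the remaining units iteratively (recomputing the centroid after each vertex is formed) is an assumption about how multiple vertices per timestamp arise, and the helper names are ours.

```python
import math

def vertices_at_timestamp(locations, r):
    """locations: dict unit_id -> (x, y) in attribute space at one timestamp.
    The base unit is the one closest to the centroid; every unit within a
    circle of radius r around the base joins that vertex. Remaining units
    are grouped the same way iteratively."""
    remaining = dict(locations)
    vertices = []
    while remaining:
        cx = sum(p[0] for p in remaining.values()) / len(remaining)
        cy = sum(p[1] for p in remaining.values()) / len(remaining)
        base = min(remaining, key=lambda o: math.dist(remaining[o], (cx, cy)))
        members = {o for o, p in remaining.items()
                   if math.dist(p, remaining[base]) <= r}
        vertices.append(members)
        for o in members:
            del remaining[o]
    return vertices

def edges(vertices_t, vertices_t1):
    """Connect vertices of consecutive timestamps sharing at least one unit;
    returns index pairs (i, j) into the two vertex lists."""
    return [(i, j) for i, a in enumerate(vertices_t)
                   for j, b in enumerate(vertices_t1) if a & b]

print(vertices_at_timestamp({1: (0, 0), 2: (0.1, 0), 3: (5, 5)}, 0.5))
```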
Discovering Specific Types of Geographical Flock Patterns
Once the STGs of the geographical units are generated, the geographical flock patterns can then be discovered based on the STGs. Before the discovery operation, the STGs whose time durations are less than a user-defined value of k have to be deleted. For the remaining STGs, an iteration method is adopted to find all groups of geographical units, including the IDs of geographical units and the corresponding time interval in each group. Note that for each group, the number of involved geographical units has to be at least m (which is user-defined) and the time duration has to be at least k. Thus, the groups of geographical units which meet the above conditions are considered to be geographical flocks. A geographical flock is represented as {IDs of geographical units}|[start time, end time]. For example, the geographical flock {1, 2, 3}|[0, 5] indicates that it includes three geographical units whose IDs are 1, 2 and 3, respectively, and lasts for six continuous timestamps which are 0, 1, 2, 3, 4 and 5, respectively.
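Given the per-timestamp groups of one vertex chain of the STG, the iteration that extracts flocks of at least m units lasting at least k timestamps can be sketched as below. This is a simplified version under our own assumptions: it tracks the running intersection of a single group sequence and reports one maximal flock per start timestamp.

```python
def extract_flocks(groups, m, k):
    """groups: list (indexed by timestamp) of sets of geographical-unit ids
    belonging to one vertex chain of the STG. Returns flocks represented as
    (frozenset_of_ids, (start, end)) with >= m members and >= k timestamps."""
    flocks = []
    n = len(groups)
    for start in range(n):
        common = set(groups[start])
        end = start
        # Extend forward while the intersection still has at least m members
        while end + 1 < n and len(common & groups[end + 1]) >= m:
            common &= groups[end + 1]
            end += 1
        if len(common) >= m and end - start + 1 >= k:
            flocks.append((frozenset(common), (start, end)))
    return flocks

# Invented groups: units 1, 2, 3 stay close over timestamps 0-2
groups = [{1, 2, 3}, {1, 2, 3, 4}, {1, 2, 3}, {2, 5}]
print(extract_flocks(groups, m=3, k=3))
```

Here the result corresponds to the flock {1, 2, 3}|[0, 2] in the paper's notation.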
The geographical flock patterns can be further classified into different types. Based on more refined information, specific types of geographical flock patterns can be derived, according to which more elaborate insights may be provided in practice. In this paper, we propose to use two specific criteria to derive various types of geographical flock patterns, i.e., the high-low attribute values criterion, and the extreme number-duration values criterion. In detail, the high-low attribute values criterion is adopted to derive types of geographical flocks whose attribute values meet certain conditions (e.g., higher or lower than a pre-defined threshold represented by the corresponding parameter of high_threshold and low_threshold), and the extreme number-duration values criterion is utilized to derive types of geographical flocks whose number of members (i.e., geographical units) and time duration have extreme values (e.g., maximum/minimum/longest/shortest). As for the high-low attribute values criterion, since there are two attributes, and the value of each attribute may be higher than or lower than a threshold, four (i.e., 2 × 2) specific types of geographical flock patterns can be derived. For the extreme number-duration values criterion, the extreme number can be either maximum or minimum, and the extreme duration can be either longest or shortest, thus, four (i.e., 2 × 2) specific types of geographical flock patterns can be derived in either case. The derived types of geographical flock patterns are listed in detail in Table 2. To avoid confusion, we use two different encoding methods (i.e., A, B, C, D and I, II, III, IV) to distinguish the types of geographical flock patterns, and each type of geographical flock pattern is represented by a corresponding type ID. For types A, B, C and D, a threshold is defined to distinguish the cases of high and low values. 
We take the percentile method, in which we only give a desired percentile value for distinguishing the high case and the low case, so that the real attribute values for the corresponding high case and low case can be automatically determined. The real attribute values are considered the thresholds. This can effectively avoid the difficulty of giving real attribute values as thresholds in reality. Based on this, the corresponding types of geographical flock patterns meeting the conditions of high and low cases can be extracted. For types I, II, III and IV, we first calculate the number of members and time duration for each discovered geographical flock, we then find the maximum value and minimum value for the number of members and time duration, respectively, and finally extract the geographical flocks meeting the corresponding conditions. Thus, all the specific types of geographical flock patterns can be extracted. Nevertheless, in reality it cannot be guaranteed that all specific types can be discovered simultaneously for each criterion under the same combination of parameter values. One can select the optimal results by adjusting parameter values and considering his/her specific desires.
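The two classification criteria above can be sketched in code. Everything here is our assumption rather than the paper's implementation: flocks are dicts with hypothetical keys `members`, `start`, `end`, `amount` and `growth` (the last two being one representative attribute value per flock), the percentile is nearest-rank, and the pairing of extreme member counts with extreme durations for types I–IV follows the usage in the Results section.

```python
def classify_high_low(flocks, pct_high=70, pct_low=30):
    """Derive types A-D with the high-low attribute values criterion.

    Thresholds are given as percentiles of the observed values, so the
    user never has to supply real attribute values directly.
    """
    amounts = sorted(f["amount"] for f in flocks)
    growths = sorted(f["growth"] for f in flocks)

    def percentile(values, p):
        # simple nearest-rank percentile over the observed values
        idx = max(0, min(len(values) - 1, round(p / 100 * (len(values) - 1))))
        return values[idx]

    hi_a, lo_a = percentile(amounts, pct_high), percentile(amounts, pct_low)
    hi_g, lo_g = percentile(growths, pct_high), percentile(growths, pct_low)

    types = {"A": [], "B": [], "C": [], "D": []}
    for f in flocks:
        if f["amount"] >= hi_a and f["growth"] >= hi_g:
            types["A"].append(f)   # high amount, high growth rate
        elif f["amount"] >= hi_a and f["growth"] <= lo_g:
            types["B"].append(f)   # high amount, low growth rate
        elif f["amount"] <= lo_a and f["growth"] >= hi_g:
            types["C"].append(f)   # low amount, high growth rate
        elif f["amount"] <= lo_a and f["growth"] <= lo_g:
            types["D"].append(f)   # low amount, low growth rate
    return types

def classify_extremes(flocks):
    """Derive types I-IV with the extreme number-duration values criterion."""
    n = lambda f: len(f["members"])            # number of members
    d = lambda f: f["end"] - f["start"] + 1    # time duration
    n_max, n_min = max(map(n, flocks)), min(map(n, flocks))
    d_max, d_min = max(map(d, flocks)), min(map(d, flocks))
    return {
        "I":   [f for f in flocks if n(f) == n_max and d(f) == d_max],
        "II":  [f for f in flocks if n(f) == n_max and d(f) == d_min],
        "III": [f for f in flocks if n(f) == n_min and d(f) == d_max],
        "IV":  [f for f in flocks if n(f) == n_min and d(f) == d_min],
    }

# Example: classify three hypothetical flocks.
flocks = [
    {"members": {1, 2}, "start": 0, "end": 3, "amount": 90, "growth": 10},
    {"members": {1, 2, 3}, "start": 0, "end": 2, "amount": 20, "growth": 80},
    {"members": {4, 5}, "start": 1, "end": 3, "amount": 50, "growth": 50},
]
print([t for t, fs in classify_high_low(flocks).items() if fs])   # → ['A', 'B', 'C']
print([t for t, fs in classify_extremes(flocks).items() if fs])   # → ['II', 'III', 'IV']
```

Note that, as the text observes, not all types need be populated for a given parameter combination (type I is empty in this example).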
Results and Discussion
The results are presented on two levels: the province level and the geographical region level. As introduced in Definition 6, three essential parameters (i.e., r, m and k) are involved in geographical flock, and different combinations of parameter values may lead to different results. Therefore, it is necessary to determine suitable parameter values when discovering geographical flock patterns. An important principle for determining suitable parameter values is that a suitable number of geographical flocks (neither too large nor too small) has to be discovered. This is because, if the number of discovered geographical flocks is too large (or too small), it may provide too much (or insufficient) information, which can lead to corresponding difficulties in interpretation. Based on this, for parameter r, the smaller its value, the better, because a small value indicates that the changes of attributes are in a small fluctuation range; for parameters m and k, larger values are preferred, because large values indicate that more meaningful geographical flock patterns may be discovered. Following these strategies for selecting potential parameter values, we tested a large number of different combinations of parameter values. Due to space limitations, only the geographical flock patterns we consider significant are presented in detail in the following.
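The parameter-selection principle just described can be automated as a simple grid search. In this sketch, `discover` is a hypothetical callable returning the flocks found for one parameter combination (not a function from the paper); the "fit number" window and the preference for small r and large m, k follow the text.

```python
def sweep_parameters(discover, stgs, r_values, m_values, k_values,
                     min_count=2, max_count=20):
    """Grid-search parameter combinations and keep the ones that yield
    a fit number of flocks (neither too many nor too few).

    `discover(stgs, r, m, k)` is assumed to return the list of flocks
    for one parameter combination; this function only implements the
    selection principle described in the text.
    """
    suitable = []
    for r in r_values:
        for m in m_values:
            for k in k_values:
                count = len(discover(stgs, r, m, k))
                if min_count <= count <= max_count:
                    suitable.append((r, m, k, count))
    # Rank: prefer small r (small fluctuation range), then large m,
    # then large k (more meaningful patterns), as argued in the text.
    suitable.sort(key=lambda t: (t[0], -t[1], -t[2]))
    return suitable

# Example with a stub discovery routine (purely illustrative):
fake = lambda stgs, r, m, k: [None] * max(0, r - m * k)
best = sweep_parameters(fake, [], [5, 10, 15], [2, 3], [2, 3],
                        min_count=2, max_count=10)
print(best[0])  # → (10, 3, 2, 4)
```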
Geographical Flock Patterns on the Province Level
As mentioned in Section 2.3.3, two criteria are proposed to derive specific types of geographical flock patterns, therefore, the results based on each criterion at the province level will be presented below.
The High-Low Attribute Values
Four representative geographical flocks were discovered based on the criterion of the high-low attribute values under the combination of parameter values for r = 15, m = 3, k = 3, high_threshold = 70, low_threshold = 30. The detailed information of the four geographical flocks can be seen in Table 3. From Table 3 we can see that, among all the geographical flocks, one belongs to type C and the other three belong to type B, while none were discovered for types A and D. This is reasonable, because it cannot be guaranteed that all types of geographical flock patterns can be discovered simultaneously under the same combination of parameter values. According to the four geographical flocks, we can see that three groups of provinces had both a high amount and a low growth rate of total CO 2 emission during three continuous years (corresponding to type B). Specifically, the three groups and the corresponding continuous years are the provinces of Zhejiang, Hunan and Guangdong from 2012 to 2014, the provinces of Shanxi, Zhejiang and Guangdong from 2013 to 2015, and the provinces of Liaoning, Zhejiang, Hubei and Sichuan from 2014 to 2016. Generally speaking, the provinces in the same geographical flock performed well in controlling the growth rate of total CO 2 emission in the corresponding years, which indicates that the measures and policies adopted have been effective. However, in the corresponding years, their total amounts of CO 2 emissions were still very high, which demonstrates that more effective measures may be taken to control the total amount of carbon emissions. Secondly, we can see that one group of provinces had both a low amount and a high growth rate of total CO 2 emission (corresponding to type C). The specific provinces and the corresponding years are Hainan, Ningxia and Xinjiang from 2002 to 2004. 
It can be inferred that the three provinces have performed well in controlling the total amount of CO 2 emission, but that the effects of controlling the growth rate of total CO 2 emissions were not that satisfactory.
To further explore the spatial relations of the provinces in each geographical flock, we visualize each geographical flock on the map by setting a different color. The visualization is shown in Figure 5, in which each figure corresponds to a geographical flock, and the provinces involved in the same geographical flock are shown in the same color. From Figure 5, we can see that there appears to be a stronger spatial relation for the provinces in the geographical flock type B (Figure 5a-c) than type C (Figure 5d), as the provinces involved in the geographical flock type B (Figure 5a-c) are relatively close to each other in space, while the provinces involved in the geographical flock type C (Figure 5d) are geographically further apart. Additionally, an interesting finding is that the provinces involved in type B (Figure 5a-c) generally have stronger comprehensive strength than those involved in type C (Figure 5d). This indicates that the related factors (such as economy, population and industry) of the provinces in the same type of geographical flock pattern may be similar, while an obvious difference in the related factors may exist between the provinces involved in different types of geographical flock patterns. However, this still needs further exploration. In summary, the geographical flock patterns discovered based on this criterion reveal several interesting findings, which can be fully considered when conducting inter-provincial collaborations and when making coordinated policies to effectively control the amount and growth rate of carbon emissions.
Int. J. Environ. Res. Public Health 2023, 20, 4265
The Extreme Number-Duration Values
Two significant geographical flocks were discovered based on the criterion of the extreme number-duration values under the combination of parameter values for r = 10, m = 5, and k = 3. The full information of the two geographical flocks is shown in Table 4, from which we can observe that one belongs to type II and the other belongs to type III. As for types I and IV, none has ever been discovered under this specific combination of parameter values. The results show that a maximum number of ten provinces have had similar evolution patterns in both the amount and the growth rate of total CO 2 emission in a shortest duration of three continuous years. The specific provinces and the corresponding years are Beijing, Shanxi, Zhejiang, Anhui, Jiangxi, Henan, Hubei, Guangxi, Gansu and Xinjiang from 1998 to 2000. This demonstrates that the ten provinces formed a maximum group which had similar CO 2 emissions and lasted for three continuous years. This finding would be applicable to meet the needs of detecting the largest number of provinces with similar CO 2 emission so that closer inter-provincial cooperation may be carried out. Secondly, a minimum number of five provinces had similar evolution patterns in both the amount and the growth rate of total CO 2 emission in a maximum duration of four continuous years. The specific provinces and the corresponding years are Heilongjiang, Zhejiang, Anhui, Hubei and Sichuan from 2014 to 2017. This shows that the five provinces have had a similar evolution pattern of CO 2 emission in a maximum duration of four years. Therefore, if one would like to know the provinces which lasted for the longest duration, this finding can provide the ideal answer. In our view, the results can provide useful suggestions to related governmental departments. Furthermore, one can detect the very groups of provinces which have had similar evolution patterns in carbon emissions by adjusting the values of the three parameters to meet his/her specific demands.
The visualization of the two geographical flocks can be seen in Figure 6, which gives an overview of the spatial relations of the provinces involved in the same geographical flock. From Figure 6 we can see that Figure 6a exhibits an overall strong spatial relation for all the involved provinces, and Figure 6b presents a strong spatial relation for most of the involved provinces. Therefore, we can infer that geographical locations may have strong effects on the potential groups of provinces which can form a specific type of geographical flock, but other factors can also have particular effects on the final formulation of potential geographical flocks. The findings indicate that further and finer explorations may be conducted to gain further insight on why the provinces with relatively weak spatial relations can form particular geographical flocks so that more scientific, precise and flexible policies and/or strategies can be made in the future.
Geographical Flock Patterns on the Geographical Region Level
The discovered geographical flock patterns based on each criterion on the geographical region level will be presented in detail in the following.
The High-Low Attribute Values
Two significant geographical flocks were discovered under the criterion of the high-low attribute values. The exact information of the two geographical flocks is listed in Table 5. From Table 5 we can see that between the two geographical flocks, one belongs to type B and the other belongs to type C, and none have been discovered for types A and D under this specific combination of parameter values (i.e., r = 10, m = 2, k = 3, low_threshold = 40, high_threshold = 60). The two geographical flocks reveal that one group of geographical regions has had both a high amount and a low growth rate of total CO 2 emission during three continuous years (corresponding to type B). The specific geographical regions and corresponding continuous years are Northeast China and Central China from 2015 to 2018. Secondly, one group of geographical regions has had both a low amount and a high growth rate of total CO 2 emission (corresponding to type C). The corresponding geographical regions and continuous years are Northeast China and South China from 2005 to 2007. Note that a common geographical region involved in both groups is Northeast China. According to the results, it can be inferred that Northeast China may have taken effective measures in controlling the growth rate of CO 2 emissions, as it has been keeping a steady low growth rate in recent years (2015~2018) while the growth rate was relatively high in years prior to that (2005~2007). Based on the results, potential suggestions may be provided to related national governmental departments to carry out effective regional cooperation to work out more targeted policies in better controlling carbon emissions. Figure 7 exhibits the visualization of the two geographical flocks. From Figure 7 we can see that the geographical regions in geographical flock type B (Figure 7a) have relatively stronger spatial relations than those in type C (Figure 7b), which coincides well with the corresponding finding in Section 3.1.1. 
This may provide us additional useful clues on how to produce more scientific strategies to better control carbon emissions in the future.
The Extreme Number-Duration Values
Three typical geographical flocks were discovered under the criterion of the extreme number-duration values. The specific information of all the geographical flocks can be seen in Table 6, from which we can see that, among the three geographical flocks, two belong to both type I and type III, and the other belongs to both type II and type IV. An obvious difference of these results from the previous results is that all types of geographical flock patterns have been discovered under this specific combination of parameter values (i.e., r = 5, m = 2, k = 3).
The main findings based on the three geographical flocks can be summarized as follows. (1) Two groups of geographical regions have had similar evolution patterns in both the amount and the growth rate of total CO2 emission in a maximum duration of four continuous years. The number of geographical regions involved in each group is two, which is both the maximum (corresponding to type I) and the minimum (corresponding to type III) among all the discovered geographical flock patterns; the corresponding geographical regions and continuous years are Northeast China and South China from 2005 to 2008, and Northeast China and Southwest China from 2013 to 2016, respectively. (2) One group of geographical regions has had similar evolution patterns in both the amount and the growth rate of total CO2 emission in a minimum duration of three continuous years. Similar to the other two geographical flocks, the number of geographical regions involved in this geographical flock is both the maximum and the minimum, thus it belongs to both type II and type IV. The detailed information of this geographical flock is the geographical regions of Central China and Northwest China from 2014 to 2016. According to the results, the abovementioned geographical regions involved in the same geographical flock may carry out closer cooperation to determine more scientific measures to better control and reduce carbon emissions. For example, Northeast China may cooperate closely with other regions, such as South China and Southwest China, to explore the specific reasons why similar emission patterns have appeared.
The corresponding visualization of the three geographical flocks is shown in Figure 8, from which the spatial distribution of the geographical regions involved in the same geographical flock can be clearly seen. An interesting finding is that the geographical flock of type II/IV (Figure 8c) exhibits strong spatial relations, while the spatial relations of the geographical flock of type I/III (Figure 8a,b) appear relatively weak. By considering the similar findings in spatial relations in Section 3.1.2, related departments may gain additional insights to propose more elaborate plans so that carbon emissions can be controlled in a more scientific way in the future.
Conclusions
Climate change has become one of the greatest global challenges, and one which can greatly affect humankind. A significant solution for mitigating climate change is to reduce greenhouse gas emissions, particularly CO 2 . To better support the establishment of effective policies for reducing CO 2 , it is crucial to consider specific sorts of important emission patterns that exist between provinces and/or geographical regions. This paper treats geographical flock patterns as an important kind of emission pattern that deserves further investigation. We propose an STG-based approach to effectively discover geographical flock patterns. The approach mainly includes three steps, i.e., generating attribute trajectories from CO 2 emission data, generating STGs from attribute trajectories, and discovering specific types of geographical flock patterns. In general, eight different types of geographical flock patterns are derived based on two different criteria (i.e., the high-low attribute values criterion and the extreme number-duration values criterion). A case study was conducted on two levels, i.e., the province level and the geographical region level, based on CO 2 emission data in China. The results of the case study demonstrate that the proposed approach is effective in discovering the different types of geographical flock patterns, and potentially useful suggestions and insights can be provided to related departments to assist in policy making and in the coordinated control of carbon emissions in the future.
Although it is important to investigate the evolution patterns of CO 2 emission between different geographical units, we only used province-level carbon emission data in the case studies. While the results based on the province-level CO 2 emission data appear effective, finer results are still needed. Therefore, fine-granularity CO 2 emission data (e.g., data at the city level) can be adopted as new datasets for further studies to obtain more precise insight. In addition, other attributes which are meaningful to carbon emission can be used to generate attribute trajectories so that the insights and findings can be further extended. Furthermore, new criteria can be developed and adopted to derive specific types of geographical flock patterns according to one's specific needs, from which new findings might be acquired.
|
v3-fos-license
|
2011-04-11T09:26:46.000Z
|
2011-04-11T00:00:00.000
|
115152099
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "pd",
"oa_status": "HYBRID",
"oa_url": "https://www.ams.org/proc/2013-141-06/S0002-9939-2013-11675-8/S0002-9939-2013-11675-8.pdf",
"pdf_hash": "6c17ada8f9cc548eb4f313df408a63c5cc2ad510",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:889",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "6c17ada8f9cc548eb4f313df408a63c5cc2ad510",
"year": 2011
}
|
pes2o/s2orc
|
A sharp lower bound for the scalar curvature of certain steady gradient Ricci solitons
We show that the scalar curvature of a steady gradient Ricci soliton satisfying that the ratio between the square norm of the Ricci tensor and the square of the scalar curvature is bounded by one half, is bounded from below by the square of the hyperbolic secant of one half the distance function from a fixed point.
Introduction
A Ricci soliton is a Riemannian manifold (M, g) that admits a smooth vector field X on M such that
$$\mathrm{Rc} + \tfrac{1}{2}\,\mathcal{L}_X g = \lambda g, \qquad (1)$$
where $\mathcal{L}_X$ is the Lie derivative in the direction of the vector field X, Rc denotes the Ricci tensor and λ is a constant. When the vector field X can be replaced by the gradient of some smooth function f on M, called the potential function, (M, g) is said to be a gradient Ricci soliton. In such a case the equation (1) becomes
$$\mathrm{Rc} + H_f = \lambda g, \qquad (2)$$
where $H_f$ denotes the Hessian of the function f. A Ricci soliton (1) is said to be shrinking, steady or expanding according to λ > 0, λ = 0 or λ < 0.
[1] is a very interesting paper for recent information on this topic. For steady gradient Ricci solitons it is well-known ([1], for example) that $R + |\nabla f|^2 = C$, where C is a positive constant, unless the steady soliton is Ricci flat. We scale the metric so that the constant equals 1.
Very recently, a lower bound for the scalar curvature of a steady gradient Ricci soliton was given in [2] in terms of the dimension of the manifold and the potential function, under additional assumptions. However, the behaviour of the potential function of a steady gradient soliton is not well understood, and thus the bound cannot be expressed in an explicit way in terms of the distance function. In this paper we show the following

Theorem 1. Let (M, g) be a complete gradient steady Ricci soliton satisfying $|\mathrm{Rc}|^2 \le \frac{R^2}{2}$ and normalized as before. Then
$$R(x) \ge k\,\operatorname{sech}^2\frac{r(x)}{2},$$
where r(x) is the distance from a fixed point O ∈ M and k ≤ 1 is a constant that only depends on O and R(O).
Remark 1. Note that on the scaled Hamilton's cigar soliton $\left(\mathbb{R}^2,\ \frac{4(dx^2+dy^2)}{1+x^2+y^2}\right)$ one has $R + |\nabla f|^2 = 1$ and $R(x) = \operatorname{sech}^2\frac{r(x)}{2}$, where the distance r(x) is measured from the only point where the scalar curvature attains its maximum. This shows that our lower bound is sharp in dimension two. In higher dimensions, if we consider the product of Hamilton's cigar soliton and any complete Ricci flat manifold, we also have that our bound is sharp. Indeed, note that we actually have equality when moving in the direction of the cigar, where the distance r(x) is measured again from the only point where the scalar curvature attains its maximum.
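For concreteness, the sharpness on the scaled cigar can be checked directly. The following computation is our addition (writing $s$ for the Euclidean radius), not part of the source:

```latex
% Scaled cigar: g = \frac{4\,(dx^2 + dy^2)}{1 + x^2 + y^2}, with s^2 = x^2 + y^2.
% Distance from the origin:
r(s) = \int_0^s \frac{2\,dt}{\sqrt{1 + t^2}} = 2\,\operatorname{arcsinh}(s)
\quad\Longrightarrow\quad \cosh\frac{r}{2} = \sqrt{1 + s^2}.
% The unscaled cigar has R_0 = \frac{4}{1 + s^2}; scaling the metric by 4
% divides the scalar curvature by 4, hence
R = \frac{1}{1 + s^2} = \operatorname{sech}^2\frac{r}{2},
% and with potential f = -\log(1 + s^2):
|\nabla f|_g^2 = \frac{1 + s^2}{4}\cdot\frac{4 s^2}{(1 + s^2)^2}
              = \frac{s^2}{1 + s^2},
\qquad R + |\nabla f|_g^2 = 1 .
```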
It is also possible to give a lower bound for the scalar curvature assuming that the Ricci tensor is nonnegative. In such a case we do not know if such a bound is sharp, because we do not know of any example where the equality is achieved.
Corollary 2. Let (M, g) be a complete gradient steady Ricci soliton with nonnegative Ricci curvature and normalized as before. Then where r(x) is the distance from a fixed point O ∈ M and k ≤ 1 is a constant that only depends on O and R(O).
Proofs of the results
By assumption we have that $|\mathrm{Rc}|^2 \le \frac{R^2}{2}$, and using the normalization $R + |\nabla f|^2 = 1$ along a unit-speed geodesic $\gamma$ issuing from $O$, it is a straightforward computation to get that
$$R(\gamma(t)) \ge \frac{4c}{c^2 e^{t} + 2c + e^{-t}},$$
for a constant $c \ge 1$ depending only on $O$ and $R(O)$. Now, since $c \ge 1$, we have that $c^2 e^{t} + 2c + e^{-t} \le c^2\,(e^{t} + 2 + e^{-t})$, and hence
$$R(\gamma(t)) \ge \frac{1}{c}\,\operatorname{sech}^2\frac{t}{2}.$$
Since the geodesic $\gamma$ and $t$ are arbitrary we have finished the proof. q.e.d.

Proof of Corollary 2. Proceeding as in the proof of the theorem and using the inequality $|H_f|^2 = |\mathrm{Rc}|^2 \le R^2$, the result is obtained following the same steps as in the theorem. q.e.d.
|
v3-fos-license
|
2023-03-05T16:19:04.351Z
|
2023-03-01T00:00:00.000
|
257344044
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2304-8158/12/5/1063/pdf?version=1677754774",
"pdf_hash": "f39332521622c503850cc80d4354aff4a148ebb8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:890",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Agricultural and Food Sciences"
],
"sha1": "7f516761c3d13205c5f06688a4dac4af649750a7",
"year": 2023
}
|
pes2o/s2orc
|
Design of Lactococcus lactis Strains Producing Garvicin A and/or Garvicin Q, Either Alone or Together with Nisin A or Nisin Z and High Antimicrobial Activity against Lactococcus garvieae
Lactococcus garvieae is a main ichthyopathogen in rainbow trout (Oncorhynchus mykiss, Walbaum) farming, although bacteriocinogenic L. garvieae with antimicrobial activity against virulent strains of this species have also been identified. Some of the bacteriocins characterized, such as garvicin A (GarA) and garvicin Q (GarQ), may show potential for the control of the virulent L. garvieae in food, feed and other biotechnological applications. In this study, we report on the design of Lactococcus lactis strains that produce the bacteriocins GarA and/or GarQ, either alone or together with nisin A (NisA) or nisin Z (NisZ). Synthetic genes encoding the signal peptide of the lactococcal protein Usp45 (SPusp45), fused to mature GarA (lgnA) and/or mature GarQ (garQ) and their associated immunity genes (lgnI and garI, respectively), were cloned into the protein expression vectors pMG36c, which contains the P32 constitutive promoter, and pNZ8048c, which contains the inducible PnisA promoter. The transformation of recombinant vectors into lactococcal cells allowed for the production of GarA and/or GarQ by L. lactis subsp. cremoris NZ9000 and their co-production with NisA by Lactococcus lactis subsp. lactis DPC5598 and L. lactis subsp. lactis BB24. The strains L. lactis subsp. cremoris WA2-67 (pJFQI), a producer of GarQ and NisZ, and L. lactis subsp. cremoris WA2-67 (pJFQIAI), a producer of GarA, GarQ and NisZ, demonstrated the highest antimicrobial activity (5.1- to 10.7-fold and 17.3- to 68.2-fold, respectively) against virulent L. garvieae strains.
Introduction
Bacteriocins produced by lactic acid bacteria (LAB) have been largely valued as potential food preservatives, and the LAB producers of bacteriocins (bacteriocinogenic strains) have been valued as potential starter, protective, probiotic, paraprobiotic and postbiotic cultures [1][2][3]. Moreover, concerns regarding the increase in antimicrobial resistances (AMRs) confer on bacteriocins and the bacteriocinogenic LAB unlimited possibilities for applications in the food industry, human and veterinary medicine and the animal production field [4][5][6]. Microbial-derived biotics, including bacteriocins, are recognized as functional components of natural and bioengineered probiotic, paraprobiotic and postbiotic cultures [2,7,8]. Bacteriocins, including nisin A (NisA) and nisin Z (NisZ), drive the apoptosis of cancer cells and show low toxicity toward normal cells, making them promising anticancer candidates to replace or be combined with conventional therapeutic agents [9][10][11]. NisA, a 34 amino acid pentacyclic peptide naturally produced by L. lactis, exhibits antimicrobial activity against several Gram-positive and Gram-negative bacteria [42], and the bacteriocin exerts its antimicrobial activity by both pore formation and the inhibition of cell wall synthesis through specific binding to lipid II, which is an essential precursor of the bacterial cell wall [43].
The cloning and heterologous expression of bacteriocins by LAB, particularly Lactococcus lactis, has proven to be a promising approach for obtaining microbial cell factories with a potent antimicrobial activity [44][45][46][47][48]. Moreover, the simultaneous production of bacteriocins of different classes and/or subclasses and distinct modes of action may not only improve their antimicrobial activity and spectrum in a synergistic fashion but may also reduce the presence of bacteria that are resistant to their antagonistic activity [49][50][51]. In this study, we have proceeded to the design and expression, in different L. lactis strains, of up to three different bacteriocins with antimicrobial activity against virulent L. garvieae. These bacteriocins, namely, GarA, GarQ and NisA/NisZ, show different modes of action and other well-described beneficial effects, such as the anticarcinogenic effect of NisA/NisZ and its ability to modulate the microbiota and regulate the immune system of its host. Thus, synthetic genes that encode the signal peptide of the lactococcal secreted protein Usp45 (SP usp45), fused to either mature GarA (lgnA) with its putative immunity gene (lgnI) and/or to mature GarQ (garQ) with its immunity gene (garI), were cloned into the protein expression vector pMG36c, which carries the constitutive P 32 promoter, and into pNZ8048c, which places the cloned genes under the control of the inducible P nisA promoter. Recombinant L. lactis strains were then obtained, and their antimicrobial activity against virulent L. garvieae was determined.
Bacterial Strains, Plasmids and Growth Conditions
The bacterial strains and plasmids used in this study are listed in Table 1. The L. lactis strains were grown at 30 • C in M17 broth (Oxoid Ltd., Basingstoke, UK) supplemented with 0.5% (w/v) glucose (GM17). Pediococcus damnosus CECT4797 was grown in MRS broth (Oxoid Ltd.) at 30 • C. Escherichia coli JM109 (Promega, Madison, WI, USA) was grown in Luria-Bertani (LB) broth (Oxoid Ltd.) at 30 • C with shaking. Chloramphenicol (Sigma-Aldrich, St. Louis, MO, USA) was added at 20 µg/mL to select growth of E. coli and at 5 µg/mL for the selection of the recombinant lactococcal strains. The cell dry weights of the late exponential phase cultures were determined gravimetrically. Agar plates were made by the addition of 1.5% (w/v) agar (Oxoid) to the liquid media. Table 1. Bacterial strains and plasmids used in this study.
Basic Genetic Techniques and Enzymes
Synthetic gene fragments were designed from the described amino acid sequences of the bacteriocins GarA (lgnA) and GarQ (garQ), as well as those of their putative immunity proteins GarAI (lgnI) and GarQI (garI), respectively. In addition, the leader peptide of the native bacteriocins was replaced by the signal peptide of the secreted protein Usp45 (SP usp45), a Sec-dependent protein produced by L. lactis MG1363 [45,57]. Similarly, additional sequences containing the SacI cleavage site and the P 32 ribosome binding site (RBS), as well as the SacI/HindIII or the BspHI/HindIII restriction cleavage sites, were added at the 5′ and 3′ ends, respectively, of the designed synthetic gene fragments. Codon usage was adapted for expression in L. lactis. GeneArt® supplied the synthetic genes cloned into the carrier plasmid pMA-T (Life Technologies S.A., Madrid, Spain). The protein expression vectors pMG36c and pNZ8048c were purified from E. coli JM109 by using the NucleoSpin Plasmid Kit (Macherey-Nagel, Düren, Germany). DNA restriction enzymes were supplied by New England Biolabs (Beverly, MA, USA). Ligations were performed with T4 DNA ligase (Invitrogen, Waltham, MA, USA). Electrocompetent L. lactis subsp. cremoris NZ9000, L. lactis subsp. cremoris WA2-67, L. lactis subsp. lactis DPC5598 and L. lactis subsp. lactis BB24 cells were obtained after successive growth in SGGM17 medium, which consisted of M17 (Oxoid Ltd.) supplemented with 0.5 M sucrose, glucose (0.5%; w/v) and glycine (2%; w/v). The cultures were centrifuged and resuspended in a cold wash buffer containing glycerol (20%; v/v) and 0.5 M sucrose. Aliquots of 50 µL were stored at −80 °C until further use.
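The codon-adaptation step mentioned above can be sketched in a few lines; the preferred-codon table below is an illustrative assumption (one codon per residue), not the table actually used for the synthetic genes:

```python
# Sketch: back-translate a mature peptide into a codon-adapted gene.
# PREFERRED maps each amino acid to one codon; the choices below are
# illustrative assumptions, not a published L. lactis codon table.
PREFERRED = {
    "M": "ATG", "K": "AAA", "G": "GGT", "S": "TCA", "W": "TGG",
    "A": "GCA", "T": "ACA", "C": "TGT", "V": "GTT", "N": "AAT",
}

def back_translate(peptide: str) -> str:
    """Return a DNA sequence using one preferred codon per residue."""
    return "".join(PREFERRED[aa] for aa in peptide)

# Round-trip check: the inverse table translates the gene back.
CODON_TO_AA = {codon: aa for aa, codon in PREFERRED.items()}

def translate(dna: str) -> str:
    """Translate a DNA sequence codon by codon."""
    return "".join(CODON_TO_AA[dna[i:i + 3]] for i in range(0, len(dna), 3))
```

In practice, whole-gene optimization also balances GC content, avoids internal restriction sites (SacI, HindIII and BspHI here) and limits mRNA secondary structure, which a one-codon-per-residue table does not capture.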
Recombinant Plasmids Derived from pMG36c and Transformation into L. lactis Hosts
The primers and PCR products used for the construction of the pMG36c-derived vectors are listed in Table S1. PCR amplifications were performed in 50 µL reaction mixtures that contained 20 ng of the synthetic gene fragments included in the carrier pMA-T vectors, 70 pmol of each primer, 1 U of Velocity DNA polymerase (Bioline Reagents, Ltd., London, UK), 10 µL of Hi-Fi buffer and 1.5 µL of 30 mM dNTP mix in an MJ Mini Gradient Thermal Cycler (BioRad Laboratories). PCR cycling conditions were as follows: denaturation at 98 °C (2 min), 35 cycles of denaturation-annealing-extension (98 °C for 30 s, 60 °C for 30 s and 72 °C for 30 s, respectively) and a final extension step at 72 °C (5 min). The PCR-generated fragments were purified using a NucleoSpin® Gel and PCR clean-up kit (Macherey-Nagel) for cloning and nucleotide sequencing. When required, PCR amplifications were sequenced using the ABI PRISM® BigDye® Terminator cycle sequencing reaction kit and the automatic DNA sequencer ABI PRISM, model 377 (Applied Biosystems, Foster City, CA, USA), at the Unidad de Genómica (CAI Técnicas Biológicas, UCM, Madrid, Spain). Digestion of the amplified PCR products with SacI/HindIII permitted the ligation of the resulting restriction fragments into pMG36c, which was digested with the same enzymes. The resulting pMG36c-derived vectors were electrotransformed into competent lactococcal hosts with a Gene Pulser™ and Pulse Controller apparatus (Bio-Rad Laboratories, Hercules, CA, USA), according to a previously described procedure [58]. Transformed cells containing the pMG36c-derived vectors pJFAI, pJFQI, pJFAIQI and pJFQIAI (Table 1) were selected for their growth with chloramphenicol and evaluated for their bacteriocinogenicity. The total bacterial DNA from the transformed lactococcal strains was purified using the InstaGene Matrix (BioRad Laboratories). It was then submitted to PCR using the primers MGPJ-F and MGPJ-R.
The sequencing of the generated PCR products was performed at the Unidad de Genómica (CAI Técnicas Biológicas, UCM).
Recombinant Plasmids Derived from pNZ8048c and Transformation into L. lactis Hosts
The primers and PCR products used for the construction of the pNZ8048c-derived vectors are listed in Table S1. The initial PCR amplification of the designed gene fragments in the carrier pMA-T vectors was performed with primers GARF-BSPHI and GARAIM-R, which were designed to provide the restriction cleavage sites for BspHI/HindIII. The digestion of the amplified PCR products with BspHI/HindIII permitted the ligation of the resulting restriction fragments into pNZ8048c, which was digested with NcoI and HindIII. However, it should be highlighted that the construction of the pNZ8048c-derived vectors carrying GarQ, or GarQ and GarA, and their respective immunity proteins was achieved using a novel, PCR-based, restriction-enzyme-free cloning (ABC cloning) method [59]. Briefly, the procedure involves the PCR amplification of three overlapping fragments, two from the pNZ8048c vector and one from the previously designed synthetic gene fragments, to generate a single, circular, pNZ8048c-derived vector by using a pair of overlapping primers. For the amplification of the appropriate gene fragments, 50 µL PCR reactions containing 100 ng of plasmid pNZ8048c or the synthetic gene fragments included in the carrier pMA-T vectors, 0.5 µmol of each primer and 25 µL of Phusion Hot Start II High-Fidelity PCR Master Mix (Thermo Scientific, Waltham, MA, USA) were used. PCR cycling conditions were as follows: one initial denaturation step at 98 °C (30 s), 30 cycles of denaturation-annealing-extension (98 °C for 10 s, 49.1-65.8 °C for 20 s and 72 °C for 25 s, respectively) and a final extension step at 72 °C (5-10 min). Overlapping PCRs were carried out using the three corresponding fragments as templates. Specifically, 1.5 × 10^10 copies of fragments derived from the amplification of pNZ8048c and 3 × 10^10 copies of fragments obtained from the amplification of the designed synthetic genes were used.
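Fragment copy numbers of the kind quoted above (e.g., 1.5 × 10^10 copies) follow from a measured DNA mass via the standard conversion of roughly 650 g/mol per double-stranded base pair; a minimal sketch (function name and example values are ours):

```python
AVOGADRO = 6.022e23  # molecules per mole
BP_MW = 650.0        # approximate g/mol per double-stranded base pair

def dna_copies(nanograms: float, length_bp: int) -> float:
    """Approximate number of dsDNA molecules in a sample of given mass."""
    grams = nanograms * 1e-9
    moles = grams / (length_bp * BP_MW)
    return moles * AVOGADRO

# Example: 100 ng of a 3,000 bp fragment is roughly 3.1e10 copies.
```

Rearranging the same formula gives the nanograms of each purified PCR fragment needed to hit a target copy number before the overlapping PCR is assembled.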
Agarose gel electrophoresis, visualization and sequencing of the generated PCR products were performed essentially as described for the construction of the pMG36c-derived vectors. The resulting pNZ8048c-derived vectors were transformed into competent lactococcal hosts, and the transformed cells containing the pNZ8048c-derived vectors pNJFAI, pNJFQI and pNJFQIAI (Table 1) were selected for their growth with chloramphenicol and evaluated for their bacteriocinogenicity. Bacterial DNA from the transformed lactococcal cells was submitted to PCR amplification with the primers NZPJ-F and NZPJ-R. The sequencing of the generated PCR products was performed at the Unidad de Genómica (CAI Técnicas Biológicas, UCM).
Antimicrobial Activity of the Recombinant L. lactis Strains
The direct antimicrobial activity of colonies from the recombinant lactococcal strains was examined by a stab-on-agar test (SOAT) as previously described [60]. When appropriate, cultures were induced with nisin A (Sigma-Aldrich) at a final concentration of 10 ng/mL for the production of the cloned bacteriocins. Cell-free culture supernatants (CFS) were obtained by the centrifugation of cultures at 12,000× g at 4 • C for 10 min, adjusted to pH 6.2 with 1 M NaOH, filtered through 0.22 µm pore-size syringe filters (Sartorius, Göttingen, Germany) and stored at −20 • C until further use. The antimicrobial activity of the supernatants was determined by an agar diffusion test (ADT). It was further quantified by a microtiter plate assay (MPA) as previously described [60]. For the MPA, the growth inhibition of sensitive cultures was measured spectrophotometrically at 620 nm with a FLUOstar OPTIMA (BMGLabtech, Ortenberg, Germany) plate reader. One bacteriocin unit (BU) was defined as the reciprocal of the highest dilution of the bacteriocin that caused a growth inhibition of 50% (50% of the turbidity of the control culture without bacteriocin).
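The BU definition above amounts to scanning a dilution series for the highest dilution that still halves the indicator's growth relative to the bacteriocin-free control; a minimal sketch with invented OD620 readings:

```python
def bacteriocin_units(control_od: float, od_by_dilution: dict) -> int:
    """Return BU: the reciprocal of the highest dilution causing >=50% growth inhibition.

    od_by_dilution maps the dilution factor (2, 4, 8, ...) to the OD620 of the
    indicator culture grown with that dilution of supernatant.
    """
    inhibited = [
        factor
        for factor, od in od_by_dilution.items()
        if (1.0 - od / control_od) >= 0.5
    ]
    return max(inhibited) if inhibited else 0

# Invented example: inhibition drops below 50% between the 1/8 and 1/16 dilutions.
ods = {2: 0.10, 4: 0.20, 8: 0.40, 16: 0.55, 32: 0.80}
```

With the invented readings above, the 1/8 dilution is the last one giving at least 50% inhibition, so the supernatant scores 8 BU.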
Purification of Bacteriocins
Bacteriocins were purified using a multi-chromatographic procedure, as previously described [60]. Briefly, 1 L supernatants from early stationary-phase cultures of the recombinant lactococci were precipitated with (NH4)2SO4 (50%; w/v), desalted by gel filtration (PD-10 columns) and subjected to cation-exchange chromatography (SP Sepharose Fast Flow), followed by hydrophobic-interaction chromatography (Octyl-Sepharose CL-4B) and reverse-phase chromatography (RP-FPLC) in an ÄKTA purifier Fast Protein Liquid Chromatography system, using the PepRPC HR 5/5 column. Fractions exhibiting the highest bacteriocin activity were pooled and re-chromatographed on the same column until chromatographically pure bacteriocin peptides were obtained. All chromatographic columns and equipment were obtained from GE Healthcare Life Sciences (Barcelona, Spain).
Mass Spectrometry (MS) and Multiple Reaction Monitoring (MRM) Analysis of Purified Peptide Fractions from Supernatants of the Recombinant L. lactis Strains
Purified RP-FPLC fractions from the supernatants of the recombinant lactococcal strains were subjected to matrix-assisted laser desorption-ionization time-of-flight mass spectrometry (MALDI-TOF MS) and multiple reaction monitoring liquid chromatography-electrospray ionization tandem mass spectrometry (MRM-LC-ESI-MS/MS) analyses at the Unidad de Proteómica (CAI Técnicas Biológicas, UCM). Briefly, 1 µL of each eluted fraction was spotted onto a MALDI target plate and allowed to air-dry at room temperature. Then, 0.8 µL of a sinapic acid matrix (Sigma-Aldrich) in 30% acetonitrile and 0.3% trifluoroacetic acid was added and allowed to air-dry at room temperature. MALDI-TOF MS analyses were performed using a 4800 Plus Proteomics Analyzer MALDI-TOF/TOF mass spectrometer (Applied Biosystems/MDS Sciex, Toronto, Canada).
For the identification of bacteriocins, the MRM method evaluates a complex mixture of tryptic peptides that can be selectively detected by liquid chromatography coupled to electrospray MS. Briefly, the purified RP-FPLC fractions of interest were dried in a Speed-vac and resuspended in 20 µL of 8 M urea. The samples were reduced by adding 10 mM dithiothreitol for 45 min at 37 °C and alkylated with 55 mM iodoacetamide for 30 min in the dark. The urea was then diluted with 25 mM ammonium bicarbonate to a final concentration below 2 M. Once the pH reached 8.5, digestion was performed by adding recombinant sequencing-grade trypsin (Roche Molecular Biochemicals, Branchburg, NJ, USA) at 1:20 (w/w) and incubating at 37 °C. After 60 min, an aliquot was taken as a partial digest of the sample; the rest was incubated overnight. The resulting peptides were dried in a Speed-vac and resuspended in 2% acetonitrile and 0.1% formic acid. Skyline (64-bit), version 20.1, was used to build and optimize the MRM for the detection of the peptides of interest [61].
All analyses were performed on an LC-MS/MS Eksigent Nanoflow LC system coupled to a hybrid triple quadrupole/ion trap mass spectrometer, 5500 QTRAP (AB Sciex, Foster City, CA, USA), equipped with a nano-electrospray interface operating in the positive ion mode. The MS/MS data were analyzed using Protein Pilot 4.5 software (AB Sciex) or MASCOT 2.3 (MatrixScience, London, UK) to identify the peptides against an in-house database containing the fasta sequences of the targeted proteins. The searches were performed assuming a digestion with trypsin with a maximum of 2 missed cleavages, a fragment ion mass tolerance of 0.6 Da and a parent ion tolerance of 0.15 Da. Peptide identifications based on the MS/MS data were accepted if they could be established at greater than 95% confidence (p < 0.05). Data were then processed against the MRM library in Skyline to ensure consistency between the transitions detected and the sequences of the peptides searched.
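The in silico side of this search, enumerating tryptic peptides with up to two missed cleavages, can be sketched as follows, using the common rule that trypsin cleaves after K or R except when the next residue is P (helper name and example sequences are ours):

```python
def trypsin_fragments(seq: str, max_missed: int = 2):
    """List tryptic peptides of seq, allowing up to max_missed missed cleavages."""
    # Indices just after each cleavage site: after K or R, unless followed by P.
    cuts = [0] + [
        i + 1
        for i, aa in enumerate(seq)
        if aa in "KR" and (i + 1 == len(seq) or seq[i + 1] != "P")
    ]
    if cuts[-1] != len(seq):
        cuts.append(len(seq))  # the C-terminal peptide may not end in K/R
    peptides = []
    for i in range(len(cuts) - 1):
        # j - i - 1 is the number of missed cleavages inside the peptide.
        for j in range(i + 1, min(i + 2 + max_missed, len(cuts))):
            peptides.append(seq[cuts[i]:cuts[j]])
    return peptides
```

Running this over a candidate bacteriocin sequence yields the pool of target peptides from which MRM transitions are then selected; the K/R scarcity noted in the Discussion directly shrinks this pool.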
Genetic Design and Cloning of Synthetic Genes That Drive the Heterologous Production of GarA and/or GarQ by Recombinant L. lactis Cells
In this work, synthetic genes encoding the protein SP usp45 fused to mature GarA (lgnA) and its putative immunity protein GarAI (lgnI) (AI), as encoded by L. garvieae 21881 [34], synthetic genes encoding SP usp45 fused to mature GarQ (garQ) and its putative immunity protein GarQI (garI) (QI), as encoded by L. garvieae BCC43578 [39], and synthetic genes containing the genetic fusions SP usp45::lgnA+lgnI+garQ+garI (AIQI) and SP usp45::garQ+garI+lgnA+lgnI (QIAI) were designed for cloning into the protein expression vectors pMG36c, which carries the constitutive P 32 promoter, and pNZ8048c, which carries the inducible P nisA promoter. PCR-based amplifications of the synthesized gene fragments allowed for the generation of the PCR products shown in Table S1. Cloning the PCR products A, B, C and D in pMG36c resulted in the pMG36c-derived vectors pJFAI, pJFQI, pJFAIQI and pJFQIAI, and cloning the PCR product E in pNZ8048c resulted in the pNZ8048c-derived vector pNJFAI (Table 1). Similarly, the use of a novel, PCR-based, restriction-enzyme-free cloning method [59] for cloning fragments in plasmid pNZ8048c allowed for the construction of the pNZ8048c-derived vectors pNJFQI and pNJFQIAI (Table 1). In this study, no attempt was made to remove potentially redundant genes encoding putative immunity proteins from the designed synthetic gene fragments.
Antimicrobial Activity of the Recombinant L. lactis Strains as Determined by Their Direct Antagonistic Effect (SOAT) and the Antimicrobial Activity (ADT) of Their Cell-Free Supernatants
The transformation of recombinant plasmids into L. lactis subsp. cremoris NZ9000 showed that while the control NZ9000 (pMG36c) and NZ9000 (pNZ8048c) cells showed no antimicrobial activity against L. garvieae CF00021 or P. damnosus CECT4797, all the recombinant NZ9000-derived strains exhibited a measurable antagonistic effect against L. garvieae CF00021 (Table 2). Interestingly, results from both the SOAT and ADT tests showed that the recombinant strains NZ9000 (pJFAI) and NZ9000 (pNJFAI), which only encode the production of GarA, showed no antimicrobial activity against P. damnosus CECT4797.
Antimicrobial Activity of Recombinant L. lactis Strains against Different L. garvieae Strains
The antimicrobial activity of supernatants from all lactococcal strains was quantified against different virulent L. garvieae strains by using a more sensitive microtiter plate assay (MPA). Again, L. lactis subsp. cremoris NZ9000 (pMG36c) showed no antimicrobial activity against any of the L. garvieae strains evaluated, while the recombinant NZ9000-derived strains transformed with the constitutive pMG36c-derived vectors showed a weak antimicrobial activity against the virulent L. garvieae strains (Table 3). However, the NisZ-producing L. lactis subsp. cremoris WA2-67 (pMG36c) showed a much higher antimicrobial activity against L. garvieae. Remarkably, the antimicrobial activity determined for the recombinant WA2-67-derived strains transformed with the pMG36c-derived vectors to generate the WA2-67 (pJFAI), WA2-67 (pJFQI), WA2-67 (pJFAIQI) and WA2-67 (pJFQIAI) bacteriocin producers was 1.0- to 1.6-fold, 5.1- to 10.7-fold, 0.9- to 1.6-fold and 17.3- to 68.2-fold higher, respectively, than that of the control strain WA2-67 (pMG36c) (Table 3). The antimicrobial activity of the NisA producer L. lactis subsp. lactis DPC5598 recombinants was 0.5- to 1.2-fold higher than that of the control DPC5598 (pMG36c) strain against L. garvieae, while the antagonistic activity of the NisA producer L. lactis subsp. lactis BB24 recombinants was 1.3- to 3.7-fold higher than that of the control BB24 (pMG36c) strain against the same L. garvieae indicator strains (Table 3). When the antimicrobial activity of the lactococcal strains that were transformed with the inducible pNZ8048c-derived vectors was determined, L. lactis subsp. cremoris NZ9000 (pNZ8048c) showed no antimicrobial activity against any of the L. garvieae strains, while the recombinant NZ9000 (pNJFAI), NZ9000 (pNJFQI) and NZ9000 (pNJFQIAI) strains showed a 12.8- to 18.9-fold, 7.4- to 18.1-fold and 6.9- to 14.4-fold higher antimicrobial activity, respectively, than the cells transformed with the pMG36c-derived vectors (Table 4).
However, the L. lactis subsp. cremoris WA2-67 (pNZ8048c) cells and their recombinant WA2-67 (pNJFAI), WA2-67 (pNJFQI) and WA2-67 (pNJFQIAI) strains showed a 0.7- to 0.9-fold, 0.5- to 0.8-fold, 0.1- to 0.2-fold and 0.01- to 0.08-fold lower antimicrobial activity, respectively, against the L. garvieae strains than the cells transformed with the pMG36c-derived vectors (Table 4). The antimicrobial activity of the NisA-producing L. lactis subsp. lactis DPC5598 transformed with the pNZ8048c-derived vectors was only slightly (0.9- to 2.0-fold) higher than the antimicrobial activity of the cells transformed with the pMG36c-derived vectors. Similarly, the antagonistic activity of the NisA producer L. lactis subsp. lactis BB24 recombinants transformed with the pNZ8048c-derived vectors was only slightly (0.6- to 1.9-fold) higher than that of the cells transformed with the pMG36c-derived vectors (Table 4). The results regarding the purification to homogeneity of the bacteriocins in supernatants of the selected L. lactis subsp. cremoris recombinants are summarized in Table 5. The evaluation of the most active antimicrobial fractions after the first reversed-phase chromatography step (RP-FPLC) permitted the identification of two active fractions during the purification of the bacteriocins produced by L. lactis subsp. cremoris WA2-67 (pJFQI) and three active fractions during the purification of the bacteriocins produced by L. lactis subsp. cremoris WA2-67 (pJFQIAI). Although the antimicrobial activity of the eluted fractions was low, a significant increase in their specific antimicrobial activity was observed.
Mass Spectrometry (MS) and Multiple Reaction Monitoring (MRM) Analysis of the Purified Bacteriocin Fractions
MALDI-TOF MS analysis of fraction 8 and fraction 7, eluted during the RP-FPLC step of the purification of supernatants from L. lactis subsp. cremoris WA2-67 (pJFQI) and L. lactis subsp. cremoris WA2-67 (pJFQIAI), showed major peaks of 3331.4 Da and 3331.3 Da (Figure S1), respectively, matching the molecular mass described for NisZ. In addition, second peaks of 3349.3 Da and 3348.8 Da, respectively, likely correspond to the oxidation of the lanthionine ring of NisZ (Figure S1).
However, MALDI-TOF MS analysis of the eluted fraction 14 from the L. lactis subsp. cremoris WA2-67 (pJFQI), which encoded GarQ and NisZ, in addition to an analysis of fractions 9 and 12 from the L. lactis subsp. cremoris WA2-67 (pJFQIAI), which encoded GarA, GarQ and NisZ, could not identify the presence of the bacteriocins GarA and GarQ with predicted molecular masses of 4645.2 Da and 5340 Da, respectively, in the eluted fractions. Since this was a totally unexpected result, the fractions were subjected to MRM-LC-ESI-MS/MS analysis to determine the presence of the expected bacteriocins in the samples. In the MRM method, a series of target tryptic peptides and their associated transitions (fragments m/z) were predicted from the molecular masses and amino acid sequences of the bacteriocins GarA and GarQ. Each targeted peptide has a set of accompanying transitions which are then selectively detected in the second stage of the MS. A summary of the results obtained is shown in Table 6. MRM transitions were established and validated by tandem mass spectrometry (MS/MS). For each bacteriocin, two encrypted peptides were confidently detected in duplicate runs.
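Assigning observed MALDI-TOF peaks to expected bacteriocin masses (allowing for an oxidation adduct of the kind discussed above) reduces to a tolerance search; in this sketch the GarA and GarQ masses are the predicted values quoted in the text, the NisZ value is taken from the observed ~3331 Da peak, and the +16 Da shift and 1 Da tolerance are our assumptions:

```python
# Expected masses (Da): GarA and GarQ are the predicted values from the text;
# the NisZ value is set from the observed ~3331 Da peak (an assumption).
EXPECTED = {"NisZ": 3331.4, "GarA": 4645.2, "GarQ": 5340.0}
ADDUCTS = {0.0: "", 16.0: " (+16 Da, oxidation)"}

def assign_peak(mz: float, tol: float = 1.0):
    """Return 'Name' or 'Name (+16 Da, oxidation)' for a matching peak, else None."""
    for name, mass in EXPECTED.items():
        for shift, label in ADDUCTS.items():
            if abs(mz - (mass + shift)) <= tol:
                return name + label
    return None
```

Under these assumptions, the absence of any peak within tolerance of 4645.2 Da or 5340 Da is what motivated the switch to the far more sensitive MRM approach described next.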
Discussion
Virulent L. garvieae are the etiological agents of a hyperacute hemorrhagic septicemia in fish, known as lactococcosis. They are also responsible for human pathologies due to their zoonotic character and potential presence in foods [31,33]. Bacteriophages and bacteriocins have potential as complementary strategies for combating L. garvieae in foods and fish [26,27], while bacteriocinogenic LAB could be evaluated for their potential use as probiotics, paraprobiotics and postbiotics in food, feed and other biotechnological applications [7,53,62]. The optimization of bacteriocin gene synthesis, expression and production helps the development of LAB as cell factories for the production and delivery of multiple bacteriocins [45,46,63]. The use of synthetic genes that match the codon usage of the producer organisms has a significant impact on gene expression levels and protein folding [47,64].
In this work, the transformation of L. lactis subsp. cremoris NZ9000 with pMG36c- or pNZ8048c-derived vectors demonstrated that the NZ9000 (pJFAI) and NZ9000 (pNJFAI) recombinant cells, which encode GarA and its immunity protein GarAI, showed antimicrobial activity against L. garvieae CF00021 but not against P. damnosus CECT4797 (Table 2), confirming their production of GarA and previous observations that this bacteriocin is only active against L. garvieae [34]. However, differences in the antimicrobial activity of recombinants derived from the NisZ producer L. lactis subsp. cremoris WA2-67, when transformed with the pMG36c-derived but not the pNZ8048c-derived vectors, were observed against both indicator strains. The obtained results showed that the WA2-67 (pJFQI) cells exhibited larger halos of inhibition than the WA2-67 (pJFAI) cells, and that the WA2-67 (pJFQIAI) cells displayed the largest observed halos (Figure 1). These results suggest that the constitutive expression of GarQI is higher than that of GarAI or that the specific antimicrobial activity of GarQ is higher than that of GarA. Perhaps transcription, processing and secretion from genes encoding GarQI+GarAI are more effective than from genes encoding GarAI+GarQI. On the other hand, no remarkable differences were found in the antimicrobial activity of the recombinants derived from the NisA producers L. lactis subsp. lactis DPC5598 and L. lactis subsp. lactis BB24 transformed with the pMG36c-derived or the pNZ8048c-derived vectors (Table 2).
Due to the increasing interest that L. garvieae is attracting as not only a relevant bacterial pathogen but also as a zoonotic agent [29,65,66], the antimicrobial activity of the L. lactis recombinants was further evaluated and quantified against different virulent L. garvieae strains by using a more sensitive microplate assay (MPA). The obtained results showed that the L. lactis subsp. cremoris NZ9000 cells transformed with the pNZ8048c-derived vectors showed a 6.9-to 18.9-fold higher antimicrobial activity than the recombinant cells bearing the pMG36c-derived vectors (Tables 3 and 4). The enhanced antimicrobial activity in cells with the nisin-inducible constructs may be due to copy number differences between pNZ8048c and pMG36c, but is more likely caused by the promoters used to drive gene expression [67,68]. Plasmid pNZ8048c contains the high-copy number heterogramic replicon of the lactococcal plasmid pSH71 with a unique NcoI cleavage site downstream of the nisA ribosome binding site (RBS), which is used for translational fusions inducible by NisA [23,69]. To optimize protein production, inducible systems are usually considered superior to constitutive expression systems since the former allow for the achievement of a sufficient biomass before the initiation of target protein expression [70]. The increased antimicrobial activity observed with the NisA-induced cells may also be ascribed to the short induction time for the production of GarA and/or GarQ (3 h), which most likely prevented the secreted bacteriocins from attaching to cell walls to form aggregates and/or to undergo protease degradations.
Moreover, and quite remarkably, the pMG36c-derived WA2-67 (pJFAI), WA2-67 (pJFQI), WA2-67 (pJFAIQI) and WA2-67 (pJFQIAI) strains showed a 1.0- to 1.6-fold, 5.1- to 10.7-fold, 0.9- to 1.6-fold and 17.3- to 68.2-fold higher antimicrobial activity, respectively, than the control WA2-67 (pMG36c) strain (Table 3). These results also indicate that the expression of QI increases the antimicrobial activity of the producer cells; however, the expression of AI has a much lower effect. Additionally, no increase in antimicrobial activity is observed when QI is expressed as the second of the two modules (AIQI). However, when QI is expressed as the first module (QIAI), a synergistic effect of both modules seems to occur regarding the very high antimicrobial activity of the producer cells. Thus, the AI in the AIQI module appears to prevent the QI from becoming active. However, when QI is expressed first in the QIAI module, the AI appears to synergistically increase the QI activity (Table 3). The pMG36c vector is a shuttle vector. It is based on the low-copy replication origin of pWV01 and is able to replicate in Escherichia coli, Bacillus subtilis and LAB, whereas the strong P 32 promoter drives the constitutive transcription of genes inserted into the multicloning site (MCS) of pUC18 [56]. The results obtained suggest that, as previously proposed, the specific antimicrobial activity of GarA against L. garvieae is lower, and/or its production and stability are poorer, than those of GarQ. Additionally, besides the choice of vectors and promoters, other factors such as the activation of quality control networks involving folding factors and housekeeping proteases, the oxidation of methionine to methionine-sulfoxide, bacteriocin self-aggregation and mRNA stability may affect bacteriocin production and activity in the recombinant hosts [46,71].
The coexpression of putative immunity genes may also increase the production of bacteriocins in heterologous hosts. These immunity proteins can act by either affecting bacteriocin pore formation or by perturbing the interaction between the bacteriocin and a membrane-located bacteriocin receptor, thereby preventing producer cells from being killed [16]. The expression in AIQI of LgnI before the expression of GarI could also affect producer protection against GarQ, thereby affecting growth and bacteriocin production by the producer cells. Importantly, L. lactis subsp. cremoris WA2-67 (pJFQI) and L. lactis subsp. cremoris WA2-67 (pJFQIAI) showed the highest antimicrobial activity against all virulent strains of L. garvieae evaluated (Table 3).
However, the transformation of L. lactis subsp. cremoris WA2-67 with the pNZ8048c-derived vectors produced WA2-67 derivatives that showed a 0.01- to 0.9-fold lower antimicrobial activity than the cells transformed with the pMG36c-derived vectors (Table 4). This was an unexpected result since, as previously described, inducible systems are often considered superior to constitutive expression systems for the optimization of protein production. Perhaps NisK and NisR, the two-component signal transduction system for the regulation of NisZ synthesis in L. lactis subsp. cremoris WA2-67, do not fully activate transcription from the P nisA present in pNZ8048c. It is also possible that the levels of phosphorylated NisR are not sufficient to drive the activation of two independent P nis promoters which, in addition, derive from two different Lactococcus lactis subspecies: cremoris and lactis, respectively. Alternatively, NisZ may not be as efficient as NisA in interacting with the NisK produced by L. lactis subsp. cremoris WA2-67, with the blocking of NisZ by NisI and NisEFG further constraining the induction of transcription from P nisA in pNZ8048c.
The antimicrobial activity of the NisA producers L. lactis subsp. lactis DPC5598 and L. lactis subsp. lactis BB24 was slightly higher for the cells transformed with the pNZ8048c-derived than the pMG36c-derived vectors, suggesting that NisA is a better inducer than NisZ for the activation of the transcription of P nisA in pNZ8048c (Tables 3 and 4). However, the antimicrobial activity of the DPC5598-derived recombinants under constitutive or inducible conditions was lower during multi-bacteriocin production. This was probably due to the high energy and metabolic cost linked to plasmid maintenance and replication, to the secretion stress associated with bacteriocin overproduction and/or to the synthesis of proteinases for the elimination of misfolded proteins [72][73][74]. Differences in the antimicrobial activity of these strains and those of the L. lactis subsp. cremoris WA2-67 recombinants transformed with the pMG36c-derived vectors may also be ascribed to as-yet-unknown genetic and/or metabolic differences between the strains. L. lactis subsp. lactis DPC5598 was selected as a potential multi-bacteriocin-producing host because it is a plasmid-free derivative of an industrial strain that is extensively used in fermented dairy products due to its phage insensitivity and fast acid-producing ability [54]. L. lactis subsp. lactis BB24 is a fermented-meat isolate widely used as an efficient host for the heterologous production of bacteriocins [45,55]. Both multi-bacteriocin producers should be considered for their potential evaluation as probiotics, paraprobiotics and/or postbiotics for reducing the increasing presence of virulent and zoonotic L. garvieae in selected milk and meat substrates, respectively [31,75].
The MALDI-TOF MS analysis of purified eluted fractions from supernatants of the most active antimicrobial strains, L. lactis subsp. cremoris WA2-67 (pJFQI) and L. lactis subsp. cremoris WA2-67 (pJFQIAI) (Table 5), allowed for the detection of NisZ in supernatants of the producer strains (Figure S1), suggesting that this bacteriocin is appropriately processed and transported out of the producer cells. However, the presence of GarA and GarQ could not be detected, suggesting interactions of the bacteriocins with unknown biological compounds, or a low amount and recovery of the bacteriocins during their purification to homogeneity [47,64]. However, bacteriocins in the purified fractions were suitable for MRM evaluation and data analysis, an emerging targeted proteomics workflow and a highly selective and sensitive method for detecting peptides at low ng/mL to sub-ng/mL concentrations [64,76,77]. When the purified fractions were subjected to MRM-LC-ESI-MS/MS analysis, two encrypted peptides per bacteriocin were confidently (99%) detected. The detected peptides were confirmed by MS/MS, and at least four transitions were identified for each (Table 6). The peptide fragments covered 44% of the sequence of GarQ, while the coverage was 21% for GarA. These relatively low coverage percentages reflect the scarcity of lysine and arginine residues (trypsin cleavage sites) in GarA and GarQ, which significantly reduces the number of potentially identifiable target peptides.
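Sequence coverage of the kind reported here (44% for GarQ, 21% for GarA) is simply the fraction of residues spanned by at least one detected peptide; a generic sketch with invented inputs:

```python
def sequence_coverage(protein: str, peptides) -> float:
    """Fraction of protein residues covered by at least one detected peptide."""
    covered = [False] * len(protein)
    for pep in peptides:
        start = protein.find(pep)
        while start != -1:  # mark every occurrence, including overlaps
            for k in range(start, start + len(pep)):
                covered[k] = True
            start = protein.find(pep, start + 1)
    return sum(covered) / len(protein)

# Invented example: two peptides covering 5 of 10 residues give 50% coverage.
```

Because trypsin only cuts after K or R, a K/R-poor sequence yields few distinct peptides, which caps the achievable coverage regardless of instrument sensitivity.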
Previous studies by our research group identified probiotic features of the native or wild-type NisZ producer L. lactis subsp. cremoris WA2-67, such as a potent antimicrobial activity against ichthyopathogens, survival in fresh water and the gastrointestinal tract of trout, resistance to bile and low pH, and an improved colonization ability with respect to the intestinal trout mucosa [26,53,78]. Further in silico analyses of the whole-genome sequence (WGS) of this strain also identified other potential probiotic traits, such as the production of vitamins and amino acids, adhesion/aggregation and stress resistance factors, and the absence of transferable antibiotic resistance determinants and genes encoding detrimental enzymatic activities or potential virulence factors [79]. Other studies performed by our group demonstrated the effectiveness of L. lactis subsp. cremoris WA2-67 in protecting rainbow trout in vivo against infection by virulent L. garvieae and the relevance of NisZ production as an anti-infective mechanism [53].
The work described in this study constitutes the first report on the design of multi-bacteriocinogenic L. lactis subsp. cremoris WA2-67 strains with a high antimicrobial activity against virulent L. garvieae and a promising role as probiotics, paraprobiotics and/or postbiotics in food, feed and other biotechnological applications. The evaluation of bioengineered strains as probiotics is subject to approval by regulatory authorities and is performed under strict biological conditions. However, the number of reports on the evaluation of bioengineered bacterial strains as probiotics (live cells), paraprobiotics (dead, non-viable cells) and postbiotics (physical, chemical or enzymatic lysates of probiotic cells) is increasing [7,80,81]. Accordingly, experiments are being planned to evaluate the in vitro effect of L. lactis subsp. cremoris WA2-67 (pJFQI) and L. lactis subsp. cremoris WA2-67 (pJFQIAI) on rainbow trout intestinal epithelial cells (RTgutGC) for a transcriptional analysis of several immune, intestinal, barrier-integrity and homeostasis genes and the induction of antimicrobial peptides (AMPs), as well as their effect on the in vivo modulation of the intestinal microbiota and immune response of rainbow trout (Oncorhynchus mykiss, Walbaum) and turbot (Scophthalmus maximus).
Conclusions
The design of synthetic genes and their cloning into protein expression vectors bearing constitutive or inducible promoters has allowed for the production and functional expression of GarA and/or GarQ by L. lactis subsp. lactis and L. lactis subsp. cremoris strains. Most importantly, transformation of L. lactis subsp. cremoris WA2-67 with the pMG36c-derived vectors yielded L. lactis subsp. cremoris WA2-67 (pJFQI), a producer of GarQ and NisZ, and L. lactis subsp. cremoris WA2-67 (pJFQIAI), a producer of GarA, GarQ and NisZ, with a much higher antimicrobial activity (5.1- to 10.7-fold and 17.3- to 68.2-fold, respectively) against virulent L. garvieae than the rest of the L. lactis strains evaluated. The concerted use of a sensitive microtiter plate assay (MPA) for the quantification of the antimicrobial activity of supernatants, a multi-chromatographic procedure for the purification of bacteriocins to homogeneity, and MALDI-TOF MS and multiple reaction monitoring (MRM-LC-ESI-MS/MS) analyses of the purified bacteriocins are indispensable tools for the identification and characterization of the bacteriocins produced by L. lactis subsp. cremoris WA2-67 (pJFQI) and L. lactis subsp. cremoris WA2-67 (pJFQIAI).
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/foods12051063/s1, Table S1: Primers and PCR products used in this study; Figure S1: MALDI-TOF MS analysis of purified NisZ from Lactococcus lactis subsp.
|
v3-fos-license
|
2023-01-17T14:17:02.204Z
|
2017-06-06T00:00:00.000
|
255870608
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1186/s12935-017-0430-x",
"pdf_hash": "47893e9467176740f336914eb64808ed44538199",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:891",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "47893e9467176740f336914eb64808ed44538199",
"year": 2017
}
|
pes2o/s2orc
|
Suppression of endothelial cell migration by tumor associated macrophage-derived exosomes is reversed by epithelial ovarian cancer exosomal lncRNA
To study the mechanism by which epithelial ovarian cancer (EOC)-derived exosomes restore the migration of endothelial cells that is suppressed by TAM-derived exosomes. Exosomes were isolated from TAMs in the ascites of patients with EOC. The effect of exosomes on the expression of endothelial cell miRNA was monitored by PCR. The miRNA mimics were transfected to explore their effects. Microarray data and literature searches were used to predict target genes and the impact of target gene pathways, and small interfering RNA was used to target these genes. We used migration assays to determine whether ovarian cancer cell-derived exosomes participate in the regulation of TAMs and endothelial cells. We used microarray data to identify the target lncRNA, and we constructed target lncRNA expression plasmids to validate targets by Western blotting. We separated TAMs from the ascites of patients with EOC and isolated exosomes from TAM supernatants. After co-culture with HUVECs, these exosomes were efficiently incorporated into HUVECs. The migration of HUVECs was suppressed significantly in the exosome group compared with blank controls (P < 0.05). The miRNA mimic transfection and target gene prediction found that TAM-derived exosomes targeted the miR-146b-5p/TRAF6/NF-κB/MMP2 pathway to suppress endothelial cell migration; this result was supported by PCR and Western blotting analyses. The expression of exosomal miR-146b-5p isolated from serum in the EOC group was significantly increased compared to healthy individuals. Finally, TAM-derived exosomes and EOC SKOV3-derived exosomes in combination stimulated HUVEC cells and overcame the inhibition of endothelial cell migration caused by TAM-derived exosomes. Two lncRNAs that were carried by SKOV3-derived exosomes were identified as NF-κB pathway-associated genes by Western blotting. TAM-derived exosomes can inhibit the migration of endothelial cells by targeting the miR-146b-5p/TRAF6/NF-κB/MMP2 pathway.
However, EOC-derived exosomes can transfer lncRNAs to remotely reverse this effect of TAMs on endothelial cells.
Background
Epithelial ovarian cancer (EOC) is considered the most malignant gynecological tumor; its mortality rate is the highest of all gynecological malignancies. The overall 5-year survival rate is approximately 30% [1]. A remarkable feature of advanced EOC is the presence of widespread peritoneal metastases at the time of initial diagnosis. However, the mechanisms of peritoneal seeding, spreading, and progression remain elusive.
Our previous studies suggested that various stromal cells, including activated endothelial cells, tumor-associated macrophages (TAMs), fibroblasts, and bone marrow-derived cells, infiltrated the EOC peritoneum [2,3]. More than 75% of the mononuclear immune cells in the peritoneum near a tumor implant were TAMs, which mimic chronic inflammation [2] and are associated with tumor progression.
In a previous study, we demonstrated that CD68+ macrophages are in close contact with CD31+ endothelial cells in the peritoneum in the presence of EOC. We found that 53% of CD68+ cells and CD31+ endothelial cells display high levels of VCAM1 adhesion molecule expression, in contrast to 3.6% of CD3+ T cells [2]. Furthermore, it was suggested that the interaction of ovarian cancer cells and tumor-associated macrophages enhances the ability of endothelial cells to promote the progression of ovarian cancer [3]. However, how EOC cells regulate the interaction between TAMs and endothelial cells in the tumor microenvironment remains unknown.
Most recently, exosomes derived from multiple cells have been shown to play important roles in mediating communication between cells. Exosomes are released into the extracellular matrix after the fusion of multivesicular endosomes with the cell membrane, and have a diameter of approximately 30-100 nm. The exosomes carry microRNA (miRNA), long non-coding RNA (lncRNA), and other biologically active substances. One of our studies showed that exosomes derived from EOCs could regulate the polarization of tumor-associated macrophages by transferring miR-223p [4].
In this work, the in vitro co-culture of TAM-derived exosomes with endothelial cells suppressed the migration of the endothelial cells. Thus, it seems that TAMs would not promote endothelial migration to participate in angiogenesis in the tumor microenvironment. However, when EOC-derived exosomes were added into the co-culture system, the migration of endothelial cells was restored, which indicated that EOC-derived exosomes play a central role in regulating the interaction of TAMs and endothelial cells.
Here, we investigate the mechanism and possible pathway by which EOC-derived exosomes restore the migration of endothelial cells that is inhibited by TAM-derived exosomes (Additional file 1).
Identification of tumor-associated macrophages separated from epithelial ovarian cancer
CD206 was used as a specific marker to identify TAMs. HLA-DR protein expression is decreased in TAMs, but it is expressed at much higher levels in CD14+ monocytes. We isolated the cells from the ascites of EOC patients with CD14+ magnetic beads. To identify whether these cells were TAMs, CD206 and HLA-DR protein expression was detected by flow cytometry. The proportion of CD206+ cells was significantly higher (approximately 66.4% vs. 3.43%), and the rate of HLA-DR+ cells was significantly lower (approximately 0.900% vs. 86.2%), in TAMs isolated from the ascites of EOC patients compared with CD14+ macrophages (Fig. 1a, b).
Exosomes are characterized by their conserved size and density and the presence of specific protein markers [5,6]. To ensure that exosomes were recovered and intact, Cryo-TEM images were used to confirm the secreted exosomes from TAMs. As previously reported [7][8][9], both samples contained small (30-nm diameter) and large (80-nm diameter) spherical exosomes (Fig. 1c). We then examined the expression of exosomal marker proteins in exosomes by Western blotting (Fig. 1d).
Exosomes derived from tumor-associated macrophages are ingested into HUVECs
To examine the potential internalization of exosomes by other cells, we labeled TAM-derived exosomes with the fluorescent dye PKH67. PKH67-labeled exosomes were incubated with HUVECs for 24 h, and the localization of exosomes was examined by fluorescent microscopy (Fig. 2). The internalization of PKH67-labeled exosomes (green) was observed as endosome-like vesicles in the cytoplasm of HUVEC cells. These studies indicate that TAM-derived exosomes can be ingested by other cells.
TAM-derived exosomes suppress HUVEC cell migration
To investigate whether HUVECs were affected by exosomes, the migration of HUVECs was evaluated with 8.0-µm pore Transwell assays. Migrated cells (the ratio of cells that migrated to the bottom side) were reduced by approximately one half for HUVECs cultured with exosomes derived from TAMs (P < 0.05, Fig. 3b). These findings indicate that exosomes derived from TAMs suppress HUVEC migration. miR-146b has been shown in a variety of tumors, including glioma, breast cancer, and thyroid cancer, to inhibit tumor metastasis or improve tumor radiosensitivity [10,11]. Studies have also demonstrated that bone marrow stromal stem cells can transport miR-146b via exosomes to inhibit the proliferation of glioma cells [12].
To demonstrate the effect of miR-146b-5p on the migration of HUVECs, miR-146b-5p mimics or the miRNA negative control were transfected into HUVECs. Compared to the negative control, we found that the transfer of miR-146b-5p into HUVECs suppressed migration (P < 0.05, Fig. 3c). Collectively, the exogenous, TAM exosome-derived miR-146b-5p suppressed endothelial cell migration.
Fig. 1 (caption) Identification of TAM-secreted exosomes. a, b TAMs were isolated from the ascites of EOC patients, and the percentages of HLA-DR- and CD206-positive cells were determined by FACS. c Photomicrographs of exosomes derived from tumor-associated macrophages (TAMs) separated from epithelial ovarian cancer (EOC), fractionated by ExoQuick. d Exosomal marker proteins in isolated exosomes were quantified by immunoblotting; surface levels of the exosomal marker CD63 on the purified exosomes were measured.
Fig. 3 (caption) The expression of endothelial cell miRNA after co-culture with exosomes; miR-146b-5p can inhibit the migration of endothelial cells. a Quantification of individual miRNAs in HUVECs with or without exosomes derived from tumor-associated macrophages. The y-axis represents the relative miRNA expression level; the results are presented as the mean ± SD. When HUVECs were co-cultured with exosomes derived from tumor-associated macrophages, a modest but statistically significant increase of miRNA expression was observed for miR-146b (*P < 0.05), miR-21 (*P < 0.05), miR-24 (*P < 0.05), and miR-132 (*P < 0.05). b, c Migration assay: the number of cells that migrated through the membrane to the lower chamber was measured with calcein-AM (green); cells in the lower chamber were counted in three random microscopic fields using an inverted microscope. The ability of cells to migrate is significantly suppressed by the addition of exosomes derived from TAMs or by mimics of hsa-miR-146b-5p.
Exosomal miR-146b-5p suppresses TRAF6 expression in HUVECs, and the inhibition of TRAF6 suppressed the migration in HUVECs
TargetScan software predicted TRAF6 as a target gene of miR-146b-5p. To test this prediction, a luciferase reporter assay was performed. miR-146b-5p decreased the luciferase activity of Luc-TRAF6-3′ UTR and had a minimal effect on the negative control (Fig. 4a, b). Next, we set up experimental groups in which HUVECs were co-cultured with exosomes derived from TAMs or transfected with miR-146b-5p mimics. For control groups, HUVECs were treated with PBS or transfected with miR-negative control mimics. The expression of TRAF6 was detected in the experimental and control groups by real-time PCR and Western blotting. The results showed that in the experimental groups, the expression of TRAF6 was significantly suppressed, indicating that exosomal miR-146b-5p suppresses TRAF6 expression in HUVECs (Fig. 4c, d).
Furthermore, when TRAF6 was depleted in HUVECs using siRNA (Fig. 5a, b), a significant reduction in migration (Fig. 5c, d) was found. The migration of HUVECs decreased by approximately 50 percent in the presence of siTRAF6-1 and by approximately 35 percent in the presence of siTRAF6-2. Overall, these data show that TRAF6 plays a causal role in HUVEC migration.
Exosomal miR-146b-5p suppresses the migration of HUVECs via TRAF6/NF-κB/MMP2
TRAF6 is a signal transducer in the NF-κB pathway. To explore whether TAM-derived exosomes could modulate the migration of HUVECs through the NF-κB pathway, NF-κB phosphorylation was assessed in HUVECs treated with exosomes (60 µg/ml). The activation of NF-κB phosphorylation was observed in HUVECs when they were incubated with TAM-derived exosomes (Fig. 5e). Matrix metalloproteinase 2 (MMP-2) is a member of a family of proteolytic enzymes that can enhance endothelial cell migration. We detected decreased expression of MMP-2 in HUVECs when they were incubated with TAM-derived exosomes (Fig. 5e). Western blotting revealed that the expression of TRAF6 was inhibited upon treatment with TAM-derived exosomes and that NF-κB phosphorylation was decreased in HUVECs; additionally, MMP-2 expression was decreased. NF-κB phosphorylation and MMP-2 expression were also measured after treatment with miR-146b-5p mimics or siTRAF6, with the same results as for exosome incubation. These findings indicate that exosomal miR-146b-5p can suppress the migration of HUVECs via TRAF6/NF-κB/MMP2.
EOC-derived exosomes transfer lncRNAs that restore the migration of endothelial cells suppressed by TAM-derived exosomes
Our previous work suggested that the co-culture of tumor-associated macrophage supernatant with EOCs could promote endothelial cell migration [3]. Here, we further showed that exosomes secreted from TAMs in the absence of EOC stimulation could suppress the migration of endothelial cells. To investigate whether macrophage function changes in the presence of exosomes derived from the EOC microenvironment, SKOV3-derived exosomes were added into this co-culture system. Interestingly, the inhibition of endothelial cell migration was reversed remarkably (Fig. 6a, b). Additionally, the phosphorylation of NF-κB was inhibited (Fig. 6c).
lncRNAs are associated with tumor cell proliferation, angiogenesis, and metastasis [13,14]. We verified two lncRNAs as potential NF-κB pathway-associated genes in a previous study analyzing SKOV3-derived exosome arrays (Fig. 6d). These two lncRNAs were overexpressed and associated with NF-κB phosphorylation in HUVECs (Fig. 6e). Based on Fig. 6e, we deduced that these lncRNAs may contribute to the restoration of endothelial cell migration.
Discussion
It is known that widespread implantation and distant metastasis along the peritoneum of EOC are major causes of the poor long-term survival [15]. Tumor angiogenesis depends on the proliferation and migration of vascular endothelial cells after stimulation by various factors, thereby promoting tumor invasion and metastasis. Considerable studies show that the tumor microenvironment plays an important role in tumor progression and metastasis. Studies showed that when macrophages are exposed to a tumor microenvironment that overexpresses IL-4 and IL-10, the macrophages are polarized and differentiate into M2 macrophages, which are also known as tumor-associated macrophages (TAMs). TAMs do not exert anti-tumor activities and are involved in tumor progression and angiogenesis [16]. Our previous work suggested that TAMs and endothelial cells interact with one another; under EOC stimulation, these TAMs could promote endothelial cell proliferation and migration to establish angiogenesis [3]. However, the exact mechanism of these three cell types' interactions and the crosstalk among them remains unknown.
Fig. 4 (caption) Exosomal miR-146b-5p targets the TRAF6 gene of endothelial cells. A luciferase reporter revealed that miR-146b-5p could regulate TRAF6 expression. a, b Combinations of predicted miRNA recognition sites (MREs) for each putative target transcript of miR-146b-5p were cloned into the luciferase reporter vector and transfected into HUVECs along with the indicated miRNA mimics. The mean ± SD of three independent experiments is shown, and statistical significance is indicated by *(P < 0.05). c, d The expression of TRAF6, measured by qRT-PCR and Western blots, was significantly suppressed by the addition of exosomes derived from TAMs or the mimics of hsa-miR-146b-5p.
Fig. 5 (caption, partial) …siTRAF6-1 and siTRAF6-2. c, d TRAF6 depletion by siTRAF6-1 and siTRAF6-2 reduced the migration of HUVECs. The mean ± SD of three independent experiments is shown, and statistical significance is indicated by *(P < 0.05). e A representative immunoblot of TRAF6, phosphorylated (p-)NF-κB, total NF-κB, phosphorylated (p-)IκBα, total IκBα, and MMP-2 in HUVECs treated with TAM-derived exosomes, miR-146b-5p mimics, or siTRAF6.
Fig. 6 (caption, partial) …The inhibition of endothelial cell migration by TAM-derived exosomes was reversed, however, by the direct effect of SKOV3 exosomes in promoting endothelial cell migration. c NF-κB phosphorylation was inhibited after incubation with exosomes derived from SKOV3 cells. d Two lncRNAs identified as potential NF-κB pathway-associated genes in EOC SKOV3 cell exosomes. e A representative immunoblot of phosphorylated (p-)NF-κB and total NF-κB in HUVECs overexpressing the two lncRNAs.
Most recently, exosomes have been identified as a very important mechanism for crosstalk between various cells. In the tumor microenvironment, tumor cells and TAMs are the most important cellular sources of exosomes. Our study suggested that TAM-derived exosomes can inhibit the migration of endothelial cells via the miRNAs they carry. MicroRNAs (miRNAs), small (21-25 nucleotides in length), non-protein-coding RNA transcripts, effectively regulate gene expression after gene transcription. Reports show that mature miRNAs account for 41.7% of all RNA in exosomes [17]. In 2007, Valadi et al. [18] first reported that mouse and human mast cell line-derived exosomes carried miRNA that could be transferred from one cell to another to impact important biological functions in the recipient cell. Since then, another study found that macrophage-derived exosomes can transfer miRNA into other recipient cells to mediate a regulatory role [19]. Gallo et al. [20] confirmed that exosomes are rich collectives of miRNA that can be used as reliable carriers for miRNA research.
We assessed the effect of exosomes on the expression of endothelial cell miRNA after co-culture with exosomes and found increased expression of 7 miRNAs (Fig. 3a). miR-146b has been shown in a variety of tumors, including glioma, breast cancer, and thyroid cancer, to inhibit tumor metastasis or improve tumor radiosensitivity [10,11], and miRNA mimic transfection showed that miR-146b-5p could inhibit the migration of endothelial cells (Fig. 3c). Through complementary base pairing to a specific target in the 3′ UTR of mRNA, microRNAs degrade mRNA or inhibit its translation to reduce the expression level of target genes, thus playing a role in regulating cell growth, differentiation, proliferation, apoptosis, and other physiological processes [21]. miR-146b-5p targeted the TRAF6 gene of endothelial cells, and the decreased expression of TRAF6 influenced endothelial cell migration (Fig. 5a-d) through matrix metalloproteinase-2 (MMP2). MMP2 is a member of the proteolytic enzyme family. Multiple cells, including fibroblasts, macrophages, endothelial cells, and malignant cells, secrete inactive MMP2 that can be activated by specific activators. MMP2 can degrade the extracellular matrix, which increases the spaces between cells and provides a favorable environment for the formation of tumor blood vessels, and it can also enhance endothelial cell migration. We found that TAM-derived exosomes targeted the miR-146b-5p/TRAF6/NF-κB/MMP2 pathway to suppress endothelial cell migration (Fig. 5e).
Tumor necrosis factor receptor-associated factor 6 (TRAF6) is a signal transducer in the nuclear factor-κB (NF-κB) pathway. Upon the stimulation of cells by various agonists, such as tumor necrosis factor α (TNFα) and interleukin 1β (IL-1β), IκB proteins are rapidly phosphorylated by an IκB kinase (IKK) complex and then degraded by the ubiquitin (Ub)-proteasome pathway. Following IκB degradation, NF-κB translocates into the nucleus where it regulates the expression of a wide spectrum of genes involved in immunity, inflammation, apoptosis, and other cellular processes [22]. Some researchers have found that TRAF6 is closely related to tumor development. As a very important ubiquitin E3 ligase, TRAF6 may induce the ubiquitination of the AKT oncogene. In a nude mouse tumorigenicity model, stable TRAF6 knockdown cells had lower tumorigenic potential than control cells, which suggested that TRAF6 is an oncogene [23]. Meanwhile, the inhibition of TRAF6 expression can significantly inhibit the proliferation and invasion of pancreatic cancer, breast cancer, lung cancer, esophageal cancer, multiple myeloma, and other cells [24,25].
Some studies have found that TAM culture supernatant can promote endothelial cell migration, but our study found that exosomes isolated from TAMs in the ascites of epithelial ovarian cancer (EOC) inhibit the migration of endothelial cells. This finding shows that macrophage function can be changed in the presence of tumor cells. Ovarian cancer cell-derived exosomes participate in the regulation of TAMs and endothelial cells, as revealed through migration assays. TAM-derived exosomes and SKOV3 cell exosomes stimulate HUVEC cells. The inhibition of endothelial cell migration by TAM-derived exosomes was reversed, however, by the direct effect of SKOV3 exosomes in promoting endothelial cell migration (Fig. 6b). We speculate that certain substances in EOC-derived exosomes reverse the suppression of HUVEC migration by TAMs in the tumor microenvironment. Long noncoding RNAs (lncRNAs) are RNA transcripts that are more than 200 nt long but have little protein-coding potential. Within the last few years, thousands of lncRNAs have been implicated in biological processes. For example, lncRNA-p21 has been shown to act in trans or in cis to regulate target gene expression [26]. In our study, we found two lncRNAs as potential NF-κB pathway-associated genes (Fig. 6d). This observation reveals the important role of exosomes from EOC cells in the tumor microenvironment and offers a glimpse of the complicated network of stromal cells within it. We hypothesize that before tumor cells are seeded, TAMs suppress angiogenesis and release immune factors that cause chronic inflammation, in line with other studies. When tumor cells arrive, endothelial cells become more active and supply more nutrients to the tumor cells. Because we lack the technology to confirm direct contact between these lncRNAs and NF-κB pathway-associated genes, we could not determine how the two lncRNAs regulate the NF-κB pathway.
However, as genetic techniques develop, we will study the mechanism by which these two lncRNAs control the phosphorylation of NF-κB in vivo, and seek a biological tool to block the NF-κB pathway specifically in endothelial cells within the tumor environment.
Conclusion
We demonstrated that the lncRNA carried by exosomes derived from SKOV3 cells could activate the phosphorylation of NF-κB in HUVECs. Perhaps this finding helps explain why EOC-derived exosomes reverse the suppression of HUVEC migration by TAMs.
Patients
Between October and December 2014, ascitic fluid was obtained from 5 patients with EOC undergoing debulking surgery at Shanghai First Maternity and Infant Hospital, Tongji University. Serum was prepared from blood donated by 5 volunteers and 5 patients with EOC. The study protocol was approved by the Institutional Review Board of Shanghai First Maternity and Infant Hospital, Tongji University according to the Ethics Committee of Shanghai First Maternity and Infant Hospital. Informed consent was obtained from patients or their guardians.
Patient samples
Blood samples from 5 healthy controls and 5 pre-therapy EOC patients were collected in a Vacutainer blood collection tube (BD, USA). The tubes were centrifuged at 1500g for 10 min; then, the supernatants were delivered to new tubes and stored at −80 °C until processing. Specimens were obtained from the Shanghai First Maternity and Infant Hospital, Tongji University (Shanghai, China) according to the Ethics Committee of Shanghai First Maternity and Infant Hospital. Informed consent was obtained from patients or their guardians.
The separation and identification of tumor-associated macrophages
TAMs were separated from the ascites of epithelial ovarian cancer via CD14 magnetic beads and then cultured in RPMI 1640 (Gibco) supplemented with 10% FBS, 100 U/ml penicillin, and 100 U/ml streptomycin at 37 °C in 5% CO2.
TAMs were isolated, and the percentages of CD206+ and HLA-DR+ cells were analyzed by FACS. TAMs exhibited higher expression of CD206 and lower expression of HLA-DR.
Exosome isolation
To isolate exosomes derived from TAMs associated with epithelial ovarian cancer and from human EOC SKOV3 cells, which were obtained from FuHeng BIO (Shanghai, China), cells were first cultured in RPMI-1640 for 48 h. We centrifuged the cell supernatants twice (2000g for 10 min, then 2500g for 30 min) to deplete cells and fragments, added the Total Exosome Isolation kit (Life Technologies) overnight, and then centrifuged at 10,000g for 1 h. Exosomes were resuspended in PBS (Gibco) and stored at −80 °C. The exosome concentration was determined by BCA protein assay.
Flow cytometry analysis
TAMs were isolated and used to analyze the cytomembrane expression of CD206 and HLA-DR by flow cytometry. The data are expressed as the percentages of immunocytes with positive markers.
Immunofluorescence microscopy for the detection of HUVEC ingestion of exosomes derived from tumor-associated macrophages
Exosomes were labeled with PKH67 (Sigma-Aldrich, St. Louis, MO, USA), a green fluorescent dye with long aliphatic tails that localizes in lipid regions of the exosome membranes, for 5 min at 37 °C. Labeled exosomes were washed 3 times with PBS and centrifuged at 80,000g for 2 h. Cells were cultured for 24 h, and HUVECs were collected for microscopy to detect exosomes secreted by TAMs.
Transmission electron microscopy
Exosome pellets were dissolved in PBS buffer, dropped on a carbon-coated copper grid, and then stained with 2% uranyl acetate. The samples were observed using a Tecnai G2 F20 ST transmission electron microscope.
Co-culture system of exosomes and HUVECs
TAMs that were separated from the ascites of epithelial ovarian cancer and human EOC-SKOV3 cells were cultured in RPMI-1640 for 48 h. Human umbilical vein endothelial cells (HUVECs) were isolated in the Central Laboratory of Shanghai First Maternity and Infant Hospital. Exosomes were isolated as above. Exosomes were combined with HUVECs at 60 ng/ml of culture medium for 72 h.
HUVEC migration assay
Transwell chambers (6.5 mm) (Corning Costar, Cambridge, MA, USA) with 8.0-µm pore polycarbonate membranes were coated with Matrigel. HUVECs (20,000 cells/well) were incubated in the upper chamber at 37 °C in 5% CO2 and allowed to migrate for 8 h toward the lower chamber. Some HUVECs were co-cultured with exosomes for 72 h. The number of cells that migrated through the membrane to the lower chamber was measured after 8 h with calcein-AM (Invitrogen, C3100MP; 50 µg). Cells in the lower chamber were counted in three random microscopic fields using an inverted microscope (Nikon, Japan).
3′ UTR luciferase assay
The 3′ untranslated region (3′ UTR) reporter plasmid for the TRAF6 gene was generated by cloning the 3′ UTR downstream of the luciferase open reading frame (Hanyin Biotechnology, Shanghai, China). Then, the 3′ UTR luciferase reporter plasmid was co-transfected with the miR-146b-5p mimics or the miR-negative control using Lipofectamine 3000 (Invitrogen, CA, USA) into HUVECs in a 24-well plate (500 ng luciferase plasmid plus 50 nM miR-146b-5p mimics). A constitutively expressed Renilla luciferase was co-transfected as a normalizing control. After 24 h of incubation, Firefly and Renilla luciferase activities were sequentially measured using the Dual-Glo Luciferase Assay system (Promega, Madison, WI, USA).
RNA extraction and MicroRNA profiling by RT-PCR
RNA from exosomes was isolated and enriched with a Total Exosome RNA and Protein Isolation Kit (Invitrogen, CA, USA) according to the user's guide, and the total RNA of HUVECs stimulated with the exosomes (60 µg/ml) for 48 h was extracted using TRIzol (Invitrogen, CA, USA). The miScript Reverse Transcription Kit and miScript SYBR Green PCR Kit (Qiagen GmbH, Hilden, Germany) were used to reverse transcribe and quantitatively detect miRNAs according to the manufacturer's protocol. Human RNU6B was used to normalize miRNA expression. The data were calculated using the 2^−ΔΔCt method.
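The 2^−ΔΔCt calculation reduces to a few arithmetic steps; a minimal sketch with invented Ct values (RNU6B as the normalizer, as above):

```python
def ddct_fold_change(ct_target_treated: float, ct_ref_treated: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression by the 2^-ΔΔCt method (ref = normalizer, e.g. RNU6B)."""
    dct_treated = ct_target_treated - ct_ref_treated   # ΔCt in the treated sample
    dct_control = ct_target_control - ct_ref_control   # ΔCt in the control sample
    ddct = dct_treated - dct_control                   # ΔΔCt
    return 2.0 ** (-ddct)

# Hypothetical Ct values: a miRNA vs. RNU6B in exosome-treated vs. untreated HUVECs
print(ddct_fold_change(24.0, 20.0, 26.0, 20.0))  # → 4.0 (4-fold up-regulation)
```

A fold change above 1 indicates up-regulation in the treated sample relative to control; a value of exactly 1 means no change after normalization.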
Western blotting analysis
Total protein was isolated from HUVECs treated with exosomes (60 µg/ml), miR-146b-5p mimics, or siRNA targeting TRAF6 (n = 3) for 48 or 72 h. The membranes were blocked with 5% bovine serum albumin for 2 h and incubated with antibodies against TRAF6 (1:2000; Abcam, USA), the NF-κB Pathway Sampler Kit (1:1000; CST, USA), and GAPDH (1:5000; Abcam, USA) at 4 °C overnight. Peroxidase-linked secondary anti-rabbit (1:2000; CST) or anti-mouse antibodies (1:2000; CST) were used to detect the bound primary antibodies, and the blotted proteins were visualized using an enhanced chemiluminescence kit (Pierce Biotechnology). The intensity of protein bands was quantified using ImageJ software (National Institutes of Health, MD, USA). The relative expression of target proteins was expressed as a ratio relative to GAPDH, and statistical data from at least three experiments were graphed.
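The GAPDH normalization described above is a lane-by-lane ratio of band intensities; a short sketch with hypothetical ImageJ densitometry values:

```python
def relative_expression(target_bands: list[float], gapdh_bands: list[float]) -> list[float]:
    """Normalize target band intensities to GAPDH, lane by lane."""
    return [t / g for t, g in zip(target_bands, gapdh_bands)]

# Hypothetical densitometry values for three replicate lanes:
traf6 = [1200.0, 900.0, 1500.0]
gapdh = [2400.0, 1800.0, 3000.0]
print(relative_expression(traf6, gapdh))  # → [0.5, 0.5, 0.5]
```

Normalizing to a loading control corrects for unequal protein loading, so the ratios, not the raw intensities, are compared across treatment groups.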
Statistical analysis
Statistical analyses were performed with SPSS 19.0. The data are expressed as the mean ± SD. The Mann-Whitney test, one-way ANOVA, Fisher's exact test, and Student's t test were used to determine P values. Continuous variables in figures are shown as the mean ± SEM. P < 0.05 was considered statistically significant.
|
v3-fos-license
|
2018-02-16T00:37:50.204Z
|
2010-10-30T00:00:00.000
|
46748487
|
{
"extfieldsofstudy": [
"Political Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3097/lo.201020",
"pdf_hash": "75365455784bdf8bddac23ec35f414dcd17d6fba",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:895",
"s2fieldsofstudy": [
"Environmental Science",
"Geography",
"Economics",
"Sociology"
],
"sha1": "75365455784bdf8bddac23ec35f414dcd17d6fba",
"year": 2010
}
|
pes2o/s2orc
|
Transformation of rural-urban cultural landscapes in Europe: Integrating approaches from ecological, socio-economic and planning perspectives
This paper presents a review of the presentations and a synthesis of the discussion during a Symposium on 'Transformation of rural-urban cultural landscapes in Europe: Integrating approaches from ecological, socio-economic and planning perspectives' held at the European IALE conference 2009 in Salzburg, Austria. The symposium addressed an extended and much debated subject, the landscape dynamics of Europe. The papers presented during the symposium showcased a broad spectrum of cutting-edge research questions and challenges faced by the cultural landscapes of Europe. During six sessions, 18 presentations (besides 20 posters) were made by 36 scientists (including co-authors) from 14 countries, representing 25 institutions of Europe. A glance at the presentations revealed that the state of the art focuses on driving forces and selected aspects of transformation processes, methods of their analysis, and planning support as dimensions of research in this field. However, inter- and transdisciplinary research and integrative approaches to the development of rural-urban cultural landscapes are needed. The extended discussion session in the latter part of the symposium highlighted some critical and unaddressed research questions which remain a pending agenda for future research.
Introduction
European cultural landscapes form a continuum from rural to urban landscapes, which have often developed over long periods of time. In particular, large urban regions must be considered hybrid landscapes where different urban and rural elements are inseparably mingled. This leads to new challenges, e.g. for the protection of natural resources, but it also provides new opportunities for integrative approaches to landscape management that seek to establish beneficial relationships between urban and rural areas (CSD 1999). There are big differences between European landscapes which have to be reflected in landscape policies. For instance, urban development has a longer history in the south than in the north. The transformation processes also differ between European landscapes, which needs to be accounted for in planning and management. Moreover, the current challenges of climate change, demographic change, economic globalization, health care and natural risks will affect all European cultural landscapes, but these drivers of change play out in different ways (Nilsson et al. 2008).
Interaction of socio-economic and ecological aspects is needed to support management decisions for sustainable development. In this respect, the diversity of European landscapes, with their respective challenges and approaches to their management, may be regarded as experiments for sustainable development. Yet, critical monitoring and evaluation is needed to learn from these experiments. For this purpose, the development of a common European perspective on all landscapes as cultural landscapes, not only selected ones considered especially valuable such as National Parks, and cooperation in landscape development across national borders are necessary. This view conforms with the European Landscape Convention (ELC) (CoE, 2000, article 2), which 'applies to the entire territory of the (signatory states) and covers natural, rural, urban and periurban areas … It concerns landscapes that might be considered outstanding as well as everyday or degraded landscapes'. Moreover, the ELC stipulates in article 9 that signatory states 'shall encourage transfrontier co-operation on local and regional level and, wherever necessary, prepare and implement joint landscape programmes'.
Symposium 1 of the European IALE conference 2009 explored the transformation processes of different European landscape types on the continuum from urban to rural areas. The symposium aimed to provide a forum for discussion on the following themes:
- Analysis of driving forces and transformation processes in European cultural landscapes on the continuum from rural to urban areas
- Landscape change in the perception of the people of multicultural European societies
- Integrated assessment of the transformation processes
- Steering activities of the transformation process at local, regional, national and international/European levels
- Risk assessment and vulnerability of rural-urban landscape systems
- Monitoring of the transformation processes, related to selected aspects, techniques, and methods
- Strategies for sustainable development of rural-urban landscapes in Europe

In total, 18 papers and 20 posters were delivered in this symposium, which was the largest one in the conference. Accordingly, a diverse range of research was presented. The following is an attempt to give an overview and provide a synthesis of this symposium. In hindsight, it can be stated that the papers presented at the symposium more or less addressed all of the topics mentioned above. For concise and effective presentation of the synthesis, the papers are grouped into sub-themes within the main topic of the symposium. Each sub-theme covers, but is not limited to, the presentations that fit relatively well within its scope.
1. Driving forces and transformation processes in European cultural landscapes on the continuum from rural to urban areas, including both theoretical papers and presentations of results from empirical studies
2. Methods for analysis of landscape transformations and impact assessment
3. Landscape ecological studies to substantiate planning in urbanising landscapes
Driving forces and transformation processes in European cultural landscapes on the continuum from rural to urban
Transformation of European landscapes is driven by natural and societal processes in the context of global change. Main drivers are social and demographic changes (ageing, shrinking population, migration), economic changes (globalisation), technological change (e.g. the development of internet networks), and environmental/climate change. Two papers explored the drivers and their effects on landscapes from a theoretical perspective: while Finka et al. (2009) characterised landscapes as adaptive social-ecological systems, Zigraj (2009) pointed out the challenge for landscape ecology as a science of modelling landscape transformation as a multi-dimensional socio-natural process. These drivers are connected with changes of political and social value systems (e.g. the end of communism in Eastern Europe) which have marked effects on landscapes. Importantly, they interact with the planning system and political forces in significant ways. Therefore, global trends play out very differently across Europe (Figure 1). While some urban centres continue to grow strongly, particularly in Central Europe, a dramatic depopulation occurs in the European periphery, most of all in Eastern Europe. Strong differences can also be observed within single countries. For instance, Dodouras et al. (2009) observed a strong population decline in remote Greek areas from which many people leave for the economic centres of Athens and other large cities. Abandonment and neglect of traditional farming landscapes have been a consequence. On the other hand, agriculture has been intensified on more fertile land closer to the urban markets, while the lack of strong planning and building regulation has led to extensive sprawl around the urban centres.
Similar developments were observed in Slovakia (Finka et al. 2009) where the processes of separation of farming activities occurred. Overall, the arable land, vineyards, and orchards decreased in this country. However, in northern Slovakia, arable land and permanent grassland increased at the expense of pasture land. Loss of farmland was mainly caused by urban development and afforestation of marginal pastureland. Yet, land that was forested was also far from stable as large forest areas changed into transitional woodlands due to calamities such as windthrows (Otahel & Pazúr 2009). In total, land cover changed on not less than 4.2% of Slovakia's surface area within a ten year period between 1990 and 2000 (Otahel & Pazúr 2009). Main changes occurred in mountain areas and other regions with marginal farming conditions. The political and economic transformations after 1989 and the consequent change of farming policy were identified as key drivers for these landscape transformations.
These processes of landscape change can also be observed in other parts of Europe, for instance the Mediterranean countries. Papers presented at the symposium from these countries concentrated in particular on the transformation of agricultural landscapes into landscapes dominated by urban forces and the tertiary economy of services such as leisure and tourism. Detailed accounts of land use transitions were contributed from Spain and Portugal. In Olzinelles, a parish of 2286 ha in the municipality of Sant Celoi in the province of Barcelona, NE Spain, forest cover increased from 76% to 92% between 1851 and 2008 at the cost of agricultural land, while the cover of settlements increased to 2.6% (Otero et al. 2009). The expansion of woodlands led to a decline in biodiversity dependent on fields and meadows, such as butterflies.
A loss of many culturally and ecologically relevant elements such as ponds and stone terraces was also observed. While woodlands were protected as natural parks and for leisure, no specific policies existed for the conservation of open landscapes. The study also showed that the loss of farming landscapes has been closely related to land ownership. Traditionally, most of the land has been concentrated in the hands of very few land owners, while most of the farmers had only very small holdings. A large share of the big estates was already forested in the 19th century; these became more or less entirely forested and were converted into the natural park. Vineyards and dry lands, on the other hand, were mostly in the hands of small land holders. Being economically marginal, these areas are particularly vulnerable to landscape change today. In addition, the sprawl of urban areas into forests, caused by the almost explosive growth of secondary homes since the 1980s, increased the risk of wildfires.
Urban growth in the Mediterranean was also studied in the valley of the Sousa River in north-western Portugal (Pereira and Pedrosa 2009). In this case, the main interest was to understand how urbanisation changed the risk of damage from natural hazards, i.e. floods, mass movements and wildfires, in the study area. Urbanisation occurred particularly in areas exposed to these hazards. This was the main reason for the increased risk of damage, whereas there was no increase in the intensity of the hazards themselves.
A number of further papers and posters were concerned with the consequences of land use change processes, in particular land abandonment and urbanisation, for the ecology of traditional cultural landscapes in the Mediterranean region; approaches to landscape conservation and planning were also presented (Simon Rojo 2009, Vogiatzakis et al. 2009). From the brief discussion of the above papers, a differentiated picture of landscape transformations emerges. In particular, the papers highlighted the need to understand the complex interactions between global (e.g. socio-demographic change, climate change, etc.) and local determinants (e.g. land ownership, planning system) in order to develop specific policies for sustainable landscapes.
A second group of papers (and posters) explored the transformation processes in urban regions more specifically, including changing urban-rural relationships. Notably, these studies placed less emphasis on the analysis of landscape change as such; most of the papers were oriented towards the analysis of planning policies and strategies and the assessment of impacts of urban transformations. Therefore, these papers will be discussed separately later.
Methods for analysis of landscape transformations and impact assessment
The research presented at the symposium offered a wide range of methods and tools to assess landscape transformations. A critical evaluation of these methods is beyond the scope of this summary. Clearly, the selection of method will depend on the objectives of the research. However, when looking across the studies, it seems that a combination of different methods is required to provide detailed information on landscape transformations and gain a deeper understanding of the underlying causes. Elements of an integrated methodology presented at the conference were:
1. Interpretation of remotely sensed data (satellite imagery and/or aerial photography) for detailed analysis of landscape change (e.g. Otahel and Pazúr 2009, Kupidera et al. 2009). Implemented in a GIS, these data can be effectively combined with conventional maps such as topographic maps, soil maps, etc. and statistical data (e.g. census data) to assess the impacts of landscape change on natural resources and to analyse risks from natural hazards (Pereira and Pedrosa 2009).
2. While remotely sensed data, when available, offer the possibility of spatial analysis of landscape change over larger areas, the papers presented at the conference also made it clear that understanding the underlying causes and impacts of landscape change requires other, partly qualitative methods and studies at more detailed scales. Amongst these can be noted:
- Historical analysis, for instance the use of old cadastral maps and historical statistical data (e.g. records on cultivated crops, farm sizes and land ownership), as used by Otero et al. (2009).
- Ecological studies, e.g. on biodiversity, such as the detailed analysis of impacts of land use change on selected species groups in the Olzinelles study (Otero et al. 2009).
- Social studies, for instance on the perception of landscape change and management activities by the local population or park users, as in Sarlöv-Herlin and Deak's study on the use of cattle to manage a peri-urban landscape in Sweden. This type of study should be particularly relevant for the involvement of the public in landscape management.
- Policy analysis: A comparative study on urbanisation patterns in three municipalities in Switzerland (Gennaio & Hersperger 2009) provided interesting insights in this respect. It could be shown that different rates of urban expansion were related to the dominance of specific parties in local politics, the distribution of resources among the actors (e.g. access to press media, influence of land owners, etc.), new scientific knowledge, but also change in public opinion at the national level as an external driving force.
- Analysis of planning systems, policies and cultures: In the symposium, Kristensen (2009) provided a comparative study on peri-urbanisation in three countries, showing that planning approaches are important determinants of landscape change but, in turn, depend on natural and social determinants, such as population density, the relative abundance of land and the role of agriculture, which have shaped planning systems and policies, historically and today. In the Netherlands, for instance, which is densely populated and where land resources are therefore scarce, local plans regulate the entirety of the land, whereas in much less densely populated Sweden they are restricted to urban areas. This is also the case for Denmark, where farming holds a strong position - one of the main reasons why urban growth boundaries are a strong instrument in this country to preserve valuable farmland. Consequently, patterns of urbanisation are different in these three countries.
- In yet other studies, methods were employed for understanding decision-making processes in close collaboration with decision makers and stakeholders. These collaborative research methods (e.g. Simon Rojo 2009) may hold potential for transdisciplinary research.
It appears from the above discussion that studies of landscape change should adopt methodological approaches which combine different disciplinary methods. This calls for integrated and even transdisciplinary research. Yet, the studies presented at the symposium still seemed mostly to use only one or a few of the approaches above, for instance the analysis of remotely sensed data, while the combination of natural-science, social and political-science approaches still appeared to be the exception.
In addition, the study by Kristensen showed the importance of comparative research. Similarly, Pileri et al. (2009) undertook comparative research between cases in Italy and Germany to identify strategies for reducing land consumption. Studies on cross-border initiatives aiming at the sustainable development of landscapes may offer a particular opportunity for this type of research (Elkabidze et al. 2009), as the implementation of similar programmes (e.g. funded by the European Union) can be studied in a similar landscape context but under different planning systems.
Finally, the symposium included some studies where approaches for the assessment of landscape transformations were developed and tested. Geitner and Tusch (2009) presented a methodology for the assessment of soil functions in rural and urban areas in the Alps, resulting in a soil evaluation system and guidelines for planning. Schetke et al. (2009) developed a multicriteria approach for the assessment of the socio-economic impacts of new housing estates in shrinking cities. This was the only presentation in the symposium which approached the phenomenon of shrinking cities, whereas all other presentations considered urban expansion. However, shrinking cities are already a quite widespread phenomenon and should therefore become an important topic of urban landscape ecological research. Processes of shrinkage may even offer an opportunity for the ecological restructuring of cities. However, this requires methodologies to assess the sustainability impacts of different planning concepts such as infill development and urban extensions. To this end, Schetke et al. (2009) operationalised the concepts of 'urban ecosystem services' and 'quality of life' for integration into a decision support system. Both assessment studies, Geitner and Tusch (2009) and Schetke et al. (2009), show how assessment can help to bridge landscape ecology as a science and decision making.
Landscape ecological studies to support planning in urbanising landscapes
The theme of urbanisation was strongly represented in the symposium. In many papers, it was considered a pressure on rural landscapes (see section 3). A second group of papers was interested rather in the identification and assessment of approaches for the sustainable development of urban landscapes.
Urbanisation is a major driver of landscape transformations in Europe, where already 70-80% of the population is living in urban areas. Urbanisation is often considered a one-way process. Between 1990 and 2000 alone, the growth of urban areas and associated infrastructures consumed more than 8,000 km², equivalent to the entire territory of the State of Luxembourg (Nilsson et al., 2008). Only a negligible amount of land has been reconverted from urban back to agricultural, forest or natural uses in the same period. Moreover, European cities expanded on average by 78% between 1950 and 1990, whereas the population only grew by 33% in the same period (EEA 2006). This process is also called 'urban sprawl'. Sprawl raises much concern from an ecological perspective, as it can disrupt and fragment wildlife habitats, destroy productive soils, and negatively impact soil, air and water quality. Most of all, it increases energy consumption from car-based traffic and thence the ecological footprint of urban areas (Newman and Kenworthy 1989).
Therefore, there is an urgent need to find suitable strategies to reduce land consumption from urbanisation and promote more sustainable patterns of urban development. In this context, Pileri et al. (2009) presented first results from a study to establish comparative information on land use changes in selected German and Italian cities. It could be shown that the amount of land converted to urban use differed considerably between the selected cities and, moreover, that the trends differed. For instance, in the Stuttgart case land consumption has been much reduced recently, whereas the two Italian regions of Milano and Brescia showed ongoing high losses of agricultural land and natural areas. The authors of the study conclude that regional and urban land use planning needs to use a portfolio of policies and instruments which are specific to the local context. In the case of the Italian city regions, they argue that priority should be given to the effective protection of open spaces close to the central cities, e.g. through greenbelts. As Stuttgart is characterised by a dispersed pattern of urban growth, it is suggested to favour infill policies in the core area and to constrain growth in small settlements. Certainly, these conclusions will have to be corroborated by further evidence on the sustainability impacts of different planning policies in these cities. However, the need for contextualised policies is also emphasised by Kristensen's study on urban-rural landscape policies reported in the previous section.
Is infill development a suitable means to slow urban sprawl or does it have negative impacts on quality of life and ecological services in the city? This question was addressed by Schetke et al. (2009). The study was carried out in the city of Essen in the Ruhr area (Germany) which goes through a period of population and economic decline. This trend has major consequences for society which are also reflected in land use change.
Large areas of derelict land are the most visible sign of this process. The shrinkage of cities is now a widespread phenomenon throughout Europe which raises difficult new questions for urban ecology and urban planning. Is shrinkage a sign of decline or does it offer opportunities for ecological reconstruction of cities as a basis for their regeneration? Therefore, Schetke et al. addressed an important topic where more research would be required.
Results indicate that abundant brownfield sites from former industrial uses reduce quality of life, as most of these areas are not accessible. Concurrently, they are low on ecosystem services, as most of these areas are covered by non-vegetated and water-impervious surfaces. Cases of infill development were shown to have positive effects on quality of life and ecosystem services, as they increased the amount of accessible green space. Therefore, it is concluded that in the case of Essen, fears expressed by planners that infill development may have negative social and ecological impacts are not supported by evidence. These results may not be transferable to every city, e.g. growing cities in southern Germany which are already very compact. However, the study offers a suitable methodology to assess the impacts of different urban development models.
Three further papers on urban issues were presented in this session. Mörtberg et al. (2009) presented the results from a project to develop a joint policy document for the landscape of six municipalities north of Stockholm. Biodiversity, recreation and cultural history were priority themes in this project, and a Landscape Ecological Assessment was prepared for this purpose. Special to this assessment was the participative process whereby biodiversity targets were formulated. This approach was key to achieving an integrated landscape strategy and gaining broad support from the different local authorities.
The German Nature Conservation Agency (Bundesamt für Naturschutz) now engages in urban nature conservation. This is a significant development, as the Agency has traditionally focused almost exclusively on nature conservation in rural areas. Kube (2009) gave an overview of the Agency's programme for urban areas and the activities supported within it. Nature conservation is broadly conceived in this programme, in line with the German Act for Nature Conservation, which applies to all landscapes, including urban ones, and requires not only the conservation of species and habitats but also ecosystem services and access to nature for people. "Urban Woodlands in Leipzig" is one of the ambitious projects funded by the Agency. The project aims to reintegrate abundant wastelands into the urban fabric as spaces for nature experience, enhancement of biodiversity and improvement of the urban climate. The study showed that the creation of urban forests on these sites is able to combine qualities of traditional parklands with ecological goals. Forests are a comparatively low-cost solution but highly attractive to urban residents. Currently, the implementation of the concept of urban woodlands is being tested on selected sites in Leipzig.
Exploring new methods of landscape management in urban areas has also been the focus of a paper by Sarlöv-Herlin and Deak (2009). They studied the public perception of grazing with cattle in an urban park at the fringe of Malmo (Sweden) -with encouraging results. The majority of reactions towards the grazing animals were positive. Interestingly, respondents with an urban background were more aware of ecological and heritage issues than interviewees with a non-urban background. This may be taken as a positive signal that urban dwellers will give strong support to preservation of this valuable landscape and accept novel ways of landscape management which can be beneficial for biodiversity. Going even one step further, engaging local people in collaborative landscape management is argued to be an important means for reconnecting people to the landscape (Oliveira et al. 2009).
Conclusions
Overall, the symposium ended with a consolidated understanding, broadly supported by scientific evidence, to be used by researchers as well as practitioners. A broad range of topics was discussed beyond traditional ecological studies, from theoretical concepts to methodological aspects and to the application of scientific principles in practice, maturing the approaches addressing rural and urban landscapes. It was shown that urban development and demographic changes in European cultural landscapes are multifaceted drivers of landscape change. Hence, there is a need for a scientific consensus in research that concentrates on urban-rural linkages. The scale of further investigations could vary from local to regional and ultimately global, in order to meet the targets of sustainability in the biosphere. Furthermore, there is a need to understand the impact of planning strategies and of particular issues such as shrinking cities. It was therefore a considerable scientific challenge to come to a common understanding and agreement in a meeting actively attended by researchers from a wide variety of academic disciplines, not only ecologists but also participants from the social sciences and planning.
Nevertheless, sectorial analytic approaches are still predominant, and there is a need for inter- and transdisciplinary research with the active involvement of stakeholders. This would allow the real-time application and testing of the state of the art. Most of all, iterative testing and retesting would help to improve still-fragile scientific methods through on-the-ground prototyping in urban-rural landscapes. However, yet another challenge remains: how to involve practitioners in the complex debates and developments of the science, and how to cope with their technocratic thinking?
|
v3-fos-license
|
2021-09-25T15:20:13.609Z
|
2021-08-29T00:00:00.000
|
237935214
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-666X/12/9/1040/pdf",
"pdf_hash": "061a0e23b373b54d3575b26a2d83a189109e6e3a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:897",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"sha1": "0c4c56d3973a5f0790adce5a55be3d3b741211b6",
"year": 2021
}
|
pes2o/s2orc
|
Study on the Heat Source Insulation of a Thermal Bubble-Driven Micropump with Induction Heating
Thermal bubble-driven micropumps have the advantages of high reliability, simple structure and a simple fabrication process. However, the high temperature of the thermal bubble may damage some biological or chemical properties of the solution. In order to reduce the influence of the high temperature of the thermal bubbles on the pumped liquid, this paper proposes a kind of heat insulation micropump driven by thermal bubbles with induction heating. The thermal bubble and its chamber are designed on one side of the main pumping channel. The high temperature of the thermal bubble is insulated by the liquid in the heat insulation channel, which reduces the influence of the high temperature of the thermal bubble on the pumped liquid. Prototypes of the new micropump with heat source insulation were fabricated and experiments were performed on them. The experiments showed that the temperature of the pumped liquid was less than 35 °C in the main pumping channel.
Generally, thermal bubble-driven actuators use an electric resistive heater to generate heat and thermal bubbles [12,13]. This driving method has been widely used in inkjet printing [14,15]. Recently, many studies of thermal bubble-driven micropumps have been conducted. Tsai and Lin [16] designed a thermal bubble-driven micropump with an aluminum resistive heater. Jung and Kwak [17] developed a thermal bubble-driven micropump using an embedded polysilicon microheater.
In the early stage, our group studied a high flow rate thermal bubble-driven micropump with induction heating [18]. The micropump is mainly composed of a glass substrate, an excitation coil, a metal heating plate and a PDMS (polydimethylsiloxane) chip. The micro metal heating plate is located in the pump chamber. The external excitation coil is outside of the pump chamber and provides energy to the micro heating plate via the electromagnetic field. When an eddy current is generated in the heating plate, the temperature of the heating plate rises rapidly. A small amount of liquid in contact with the micro heating plate is vaporized to produce several large-volume thermal bubbles. Pumping is realized by the periodic expansion and contraction of the thermal bubbles. The pumping flow rate can reach about 102 µL/min. All thermal bubble-driven micropumps of the type mentioned above function by the thermal bubble directly driving the pumped liquid. The thermal bubble will damage some biological or chemical properties of the solution due to its high temperature and its direct contact with the liquid. For example, half of the enzyme activity of subtilis is lost at a temperature of 52.8 °C. Similarly, half of the enzyme activity of glutinis wheat germ is lost at a temperature of 56.5 °C. The safe temperature is about 40 °C for most enzymes according to the research of Daniel [19].
In order to reduce the influence of the high temperature of the thermal bubbles on the pumped liquid in the thermal bubble-driven micropump, this paper proposes a kind of heat insulation micropump driven by thermal bubbles with induction heating. Induction heating with high frequency alternating current (AC) was adopted and the directional driving of the micro fluid was realized with the periodic expansion and contraction of thermal bubbles in the design. In addition, the heat source of the thermal bubble was insulated from the pumped liquid to avoid the high temperature damage to biological or chemical substances in the solution. Therefore, the new micropump studied in this paper has the advantages of small heat influence of thermal bubble on the pumped liquid, heat source insulation, simple structure and easy integration.
Design
The structure diagram of the heat source insulation micropump is shown in Figure 1. The pump is mainly composed of a PDMS chip, a metal micro heating plate, an excitation coil and a glass substrate.
The micro heating plate was designed to sit on the top surface of the glass substrate and the excitation coil was designed to adhere to the bottom surface of the glass substrate. The other structures of the micropump were designed to be made in a PDMS chip. The schematic drawing of the PDMS chip of the heat source insulation micropump is shown in Figure 2. The structure includes an inlet, a temporary liquid inlet, an outlet, a thermal bubble chamber, a heat insulation channel, a pumping chamber and a diffuser/nozzle. The power of the thermally driven micropump is provided by the expansion and shrinkage of the thermal bubbles. Thermal bubbles are generated when the temperature of the micro heating plate reaches the nucleation temperature. However, the high temperature of the thermal bubbles can damage temperature-sensitive substances in the liquid if the liquid is too close to the thermal bubbles.
In this paper, the thermal bubble and its chamber were designed on one side of the main pumping channel. The main pumping channel is composed of the nozzle, the diffuser and the pumping chamber. The pumped liquid flows from the inlet to the outlet through the main pumping channel. The thermal bubble chamber is connected to the main pumping channel of the micropump through a side heat insulation channel. The high temperature of the thermal bubble is insulated from the pumped liquid by the liquid in the heat insulation channel, which reduces the influence of the high temperature of the thermal bubble on the pumped liquid.
The diameter of the thermal bubble chamber of the heat source insulation micropump is 5 mm. The angle between the heat insulation channel and the main pumping channel, θ1, is 40°. A pair of nozzle and diffuser flow controllers was designed with an 80 μm width at the narrow neck W, 1 mm at the open mouth and a diverging angle θ2 of 14° [16]. The rectangular pumping chamber is 1.5 mm in length (L4) and 1 mm in width (L3). The length of the heat insulation channel (L1) is 2.5 mm and the depth of all micro channels is 150 μm. The diameters of the inlet, outlet and temporary inlet are 2 mm. The temporary liquid inlet is used to inject liquid into the bubble chamber before pumping, after which it is plugged with a cylindrical PDMS plug.
We simulated the liquid velocity with finite element Multiphysics software (COMSOL Inc., Palo Alto, CA, USA). In the simulation, θ1 was gradually increased from 30° to 75°. The results show that the flow velocity in the main pumping channel was fastest when θ1 was about 40°. Therefore, θ1 was selected as 40°.
Working Principle
The working principle of the heat source insulation micropump with induction heating is shown in Figure 3. When a high frequency alternating current is applied to the excitation coil, an alternating magnetic field is generated around the coil. Under the action of this field, an eddy current is induced in the metal micro heating plate and, consequently, heat is generated in the plate. The liquid on the surface of the micro heating plate is then heated through heat conduction. When the liquid reaches the nucleation temperature, thermal bubbles appear on the surface of the micro heating plate. As shown in Figure 3a, with the rapid growth of the thermal bubble, the pressure in the thermal bubble chamber increases quickly. Under the effect of the bubble pressure, the liquid flows rapidly into the heat insulation channel and then into the pumping chamber. The liquid in the pumping chamber then flows into the nozzle and the diffuser simultaneously. Due to the different flow resistance in the two directions, the volume of liquid flowing in the direction of the diffuser and the liquid outlet is greater than that flowing in the nozzle direction.
If the high frequency alternating current applied to the excitation coil is switched off, the micro heating plate stops heating, the thermal bubbles shrink as they cool to the surrounding temperature, and the pressure in the thermal bubble chamber decreases. Then the liquid flows back from the main pumping channel to the thermal bubble chamber through the heat insulation channel. At the same time, liquid from both the inlet and the outlet flows into the heat insulation channel through the nozzle and the diffuser, respectively.
Due to the different flow resistance in the two directions, more liquid flows into the heat insulation channel from the inlet through the nozzle than from the outlet through the diffuser, as shown in Figure 3b.
Therefore, in a pumping cycle, there will be a certain net flow from the inlet to the outlet. The periodic expansion and contraction of thermal bubbles can realize the pumping function of the heat source insulation micropump driven by thermal bubbles.
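The net-flow rectification described above can be illustrated with a toy per-cycle calculation. The split fraction below is hypothetical, chosen only to show how a direction-dependent flow resistance turns a symmetric bubble oscillation into a net flow; it is not a value from this work:

```python
# Hypothetical per-cycle bookkeeping for diffuser/nozzle rectification.
dV = 0.52          # bubble volume change per stroke, mm^3 (from Eq. (1))
f_easy = 0.6       # hypothetical fraction passing through the lower-resistance path

# Expansion stroke: more liquid exits toward the outlet through the diffuser.
to_outlet = f_easy * dV
to_inlet = (1.0 - f_easy) * dV

# Contraction stroke: more liquid is drawn in from the inlet through the nozzle.
from_inlet = f_easy * dV
from_outlet = (1.0 - f_easy) * dV

# Net volume moved from inlet to outlet per full cycle.
net_per_cycle = to_outlet - from_outlet   # equivalently from_inlet - to_inlet
T = 2.0                                   # s, one cycle (1 s heating + 1 s cooling)
flow_rate = net_per_cycle / T * 60.0      # mm^3/min, i.e. uL/min
print(f"net flow per cycle: {net_per_cycle:.3f} mm^3")
print(f"mean flow rate: {flow_rate:.2f} uL/min")
```

With a perfectly symmetric structure (f_easy = 0.5) the net flow would vanish; the asymmetry of the diffuser/nozzle pair is what produces pumping.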
The temperature of the liquid close to the thermal bubbles decreases gradually. The liquid with high temperature moves back and forth in the heat insulation channel with the periodic expansion and contraction of the thermal bubbles. As long as the high temperature liquid cannot enter the main pumping channel in the expansion stage of the thermal bubbles, the liquid pumped in the main channel will not be affected by the high temperature of the thermal bubbles.
When the thermal bubble expands periodically, its volume increases from the minimum to the maximum, and the volume of liquid driven by it is equal to its volume variability. The volume variability of the thermal bubbles can be calculated with Equation (1)

∆V = (4/3)π(R 2 ³ − R 1 ³) (1)

Here, R 2 is the equivalent radius of the thermal bubble at its maximum and R 1 is the equivalent radius of the thermal bubble after shrinkage.
Therefore, as long as the volume of the heat insulation channel is greater than the volume variability of the bubble, the high temperature liquid will not enter the main channel. The equivalent volume of the insulation channel can be calculated with Equation (2)

V = L 3 × b × c (2)

where L 3 is the width of the heat insulation channel, b is the equivalent length of the heat insulation channel in the micropump and c is the depth of the heat insulation channel.
Assuming that the equivalent thermal bubble increases from 100 µm to 500 µm in diameter and the thermal bubble grows in the center of the thermal bubble chamber, the length of the thermal insulation channel is 2.5 mm, the radius of the thermal bubble chamber is 2.5 mm and the equivalent length of the heat insulation channel is about 4.5 mm. According to Equation (1), the volume variability of the thermal bubbles is about 0.52 mm 3 . According to Equation (2), the total volume of the equivalent insulation channel is about 0.68 mm 3 . When the diameter of the thermal bubbles increases from 100 µm to 500 µm, the high temperature liquid will not flow into the main channel.
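The check above can be reproduced numerically. Note that the stated 0.52 mm³ follows from the sphere-volume form of Equation (1) only if the quoted 100 µm and 500 µm values are taken as equivalent radii; the sketch below adopts that reading and assumes a channel width of 1 mm:

```python
from math import pi

# Volume variability of the thermal bubble, Eq. (1).
# Assumption: 0.1 mm and 0.5 mm are treated as equivalent radii,
# which reproduces the paper's value of about 0.52 mm^3.
R1, R2 = 0.1, 0.5                      # mm
dV = 4.0 / 3.0 * pi * (R2**3 - R1**3)  # mm^3

# Equivalent volume of the heat insulation channel, Eq. (2).
L3 = 1.0    # assumed channel width, mm
b = 4.5     # equivalent channel length, mm
c = 0.15    # channel depth, mm
V_channel = L3 * b * c                 # mm^3

print(f"bubble volume change: {dV:.2f} mm^3")              # ~0.52 mm^3
print(f"insulation channel volume: {V_channel:.3f} mm^3")  # 0.675 mm^3
print("hot liquid confined to side channel:", dV < V_channel)
```

Since the bubble's volume change stays below the channel volume, the heated liquid reciprocates inside the insulation channel without reaching the main pumping channel.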
The mean output flow rate per second can be calculated with Equation (3) [17,20]. Here, ∆V m is the maximum volume variability induced by the thermal bubble, W is the neck width of the diffuser/nozzle structure, L 3 is the mouth width of the diffuser/nozzle structure and T is one period of the micropump.
Assuming that the diameter of the thermal bubble after shrinkage is 100 µm, when the maximum diameter of the thermal bubble increases from 500 µm to 1000 µm, the theoretical volume flow rate is shown in Figure 4.
Fabrication
The fabrication process of the thermal bubble-driven micropump, based on traditional lithography and electroplating, is shown in Figure 5. Firstly, a glass slide 200 µm in thickness was cleaned, as shown in Figure 5a. Cr and Cu layers were deposited as the seed layer on the glass substrate, then positive photoresist (BP212) was spun on top of the Cr/Cu layers (Figure 5b) and the Cr/Cu layers were patterned to define the micro heating plate (Figure 5c). The micro heating plate was electroplated at 50 °C in a low stress nickel sulfamate bath [21] at a current density of 2 A/dm², resulting in a Ni micro heating plate with a thickness of 20 µm (Figure 5d). After that, the photoresist was removed with acetone, and the Cr/Cu seed layers were removed with a solution of hydrochloric acid and glycerol and a solution of ferric trichloride and water (weight ratio of 1:20), respectively, as shown in Figure 5e.
A prepolymer of PDMS (Sylgard 184, Dow Corning, Midland, MI, USA) and curing agent were thoroughly mixed at a ratio of 10:1 (wt/wt) and the PDMS mixture was degassed in a vacuum chamber for 30 min. The PDMS mixture was carefully poured into the SU-8 master mold and then cured at 60 °C on a heating plate for 2 h, as shown in Figure 5i. Then the PDMS chip, with a thickness of 3 mm, was peeled off from the mold and three holes with a radius of 1 mm were punched in the PDMS replicas to allow for the connection of tubes used as the inlet and the outlet, as shown in Figure 5j. The fabrication was followed by oxygen plasma treatment for irreversible bonding between the PDMS chip and the glass substrate with the micro heating plate, as shown in Figure 5k. A photograph of one prototype of the thermal bubble-driven micropump with heat source isolation is shown in Figure 6. Before the experiment, a 16-turn planar spiral coil was fabricated from copper enameled wires with a diameter of 80 µm and was glued to a PCB board under the micro heating plate.
Measurement of the Flow Rate and the Back Pressure
The control system of the thermal bubble-driven micropump with heat insulation is the same as in our previous study [22]. High frequency alternating current was supplied with a high frequency pulse generator (SP1631A, Nanjing Nanjingshengpu Technology Co. Ltd., Nanjing, China). The current was applied to the excitation coil of the micropump through a relay (HSIN DA, 943-1C-5DS, Taiwan Xinda Precision Co., Ltd. Changzhou, China). The on/off sequence of the relay was controlled by a programmable controller (MITSUBISHI, MELSEC FX2N-48MT, Mitsubishi Electric Corporation, Tokyo, Japan).
In our experiments, the frequency of the alternating current was 80 kHz, both the heating period and the condensation period were 1 s, and the applied apparent power was increased from 0 VA to 10.1 VA. The volume flow rates are the means of five measurements at each condition. Figure 7 shows the volume flow rate versus the apparent power of the thermal bubble-driven micropump with heat source insulation. When the apparent power was greater than 4.07 VA, the pumping flow rate of the heat insulation micropump increased gradually with the increase in power, and the maximum flow rate of the micropump was about 30 μL/min. When the apparent power was greater than 10.01 VA, too much heat was generated by the micro heating plate and could not be transferred out of the thermal bubble chamber during the condensation period, which led to insufficient contraction of the thermal bubbles. As a result, the volume change of the thermal bubbles was too small and the pumping flow rate was significantly reduced. When the applied apparent power was 9.04 VA, the maximum back pressure of the micropump was about 118 Pa.
Table 1 lists the main performance and related parameters of the thermal bubble-driven micropump described here and of others from the literature. It can be seen that the micropump with induction heating has a larger pumping flow rate. The back pressure of the micropump with induction heating is lower than that of the micropump with resistance heating due to its lower working frequency and the larger dimensions of the diffuser/nozzle structure. It is noted that the pumping flow rate and the back pressure of the heat insulation thermal bubble-driven micropump are lower than those of the thermally driven micropumps with induction heating but without heat insulation. The power that is used for heating is about half of the apparent power.
Due to the added heat insulation channel, the flow resistance is increased and the speed of the liquid passing through the diffuser/nozzle structure is decreased. Accordingly, the check-valve function of the diffuser/nozzle structure is weakened, and the pumping flow rate and the back pressure of the heat insulation thermal bubble-driven micropump also decrease.
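The extra series resistance introduced by the side channel can be estimated with the standard laminar-flow approximation for a shallow rectangular duct, R ≈ 12µL/(wh³). This is an order-of-magnitude sketch, not a calculation from the paper; the viscosity is that of water at room temperature and the channel width is assumed to be 1 mm:

```python
# Hydraulic resistance of the heat insulation channel (laminar, w >> h approximation).
mu = 1.0e-3      # Pa*s, dynamic viscosity of water at ~20 C (assumed)
L = 2.5e-3       # m, heat insulation channel length (L1)
w = 1.0e-3       # m, channel width (assumed)
h = 150e-6       # m, channel depth
R_hyd = 12.0 * mu * L / (w * h**3)   # Pa*s/m^3
print(f"added hydraulic resistance ~ {R_hyd:.1e} Pa*s/m^3")
```

A resistance of this order in series with the diffuser/nozzle pair damps the stroke velocity, which is consistent with the reduced flow rate and back pressure reported above.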
Fluorescence Temperature Measurement
Temperature measurement using fluorescence intensity changes is a non-contact technique [23,24] that has the advantages of low requirements for measuring instruments, fast response, high resolution, high sensitivity and a large temperature measurement range [25,26]. Therefore, the heat insulation effect of the micropump was measured using fluorescence intensity.
Experimental Setup of Fluorescence Temperature Measurement
An experimental setup for fluorescence temperature measurement with a microscope was built as shown in Figure 8. The fluorescence excitation of a fluorescence microscope (DSY5000X, Chongqing Aopu Photoelectric Technology Co., Ltd., Chongqing, China) was selected as the light source, and Rhodamine B was selected as the fluorescent dye. Rhodamine B was dissolved in deionized water to form a solution with a concentration of 0.02 mol/L. The Rhodamine B solution was irradiated with the fluorescence microscope to excite fluorescence. The temperature of the solution was measured in real time with an infrared thermometer (TM910, Taikeman Technology Co. Ltd., Shenzhen, China), and the fluorescence image of the solution in the microchannel at that temperature was saved to the computer. In order to reduce the experimental error, four fluorescence images were saved for each measurement at the same temperature. Then the fluorescence images were converted to grayscale according to the fluorescence intensity, corresponding to the temperature.
It can be seen that the fluorescence intensity decreases with the increase in temperature. The Rhodamine B solution at 21 • C was selected as the standard low temperature and the solution at 90 • C was selected as the standard high temperature. Then, we normalized the fluorescence intensity corresponding to its temperature according to Equation (4) were I n is the normalized fluorescence intensity, I is the fluorescence intensity extracted from the grayed fluorescence image, I high is the fluorescence intensity of the standard high temperature (90 • C) and I low is the fluorescence intensity of the standard low temperature (21 • C). The curve fitting between temperature and normalized fluorescence intensity was carried out with the binomial fitting method. The fitting formula is shown as Equation (5) where x is temperature and y is the fluorescence intensity after normalization. Figure 9 shows the relationship between the fluorescence intensity and the temperature after normalization. It can be seen that the fluorescence intensity decreases with the increase in temperature. The Rhodamine B solution at 21 °C was selected as the standard low temperature and the solution at 90 °C was selected as the standard high temperature. Then, we normalized the fluorescence intensity corresponding to its temperature according to Equation were is the normalized fluorescence intensity, is the fluorescence intensity extracted from the grayed fluorescence image, high is the fluorescence intensity of the standard high temperature (90 °C) and low is the fluorescence intensity of the standard low temperature (21 °C). The curve fitting between temperature and normalized fluorescence intensity was carried out with the binomial fitting method. The fitting formula is shown as Equation (5) = 0.001 -0.0247 + 1.4374 (5) where is temperature and is the fluorescence intensity after normalization. 
Figure 9 shows the relationship between the fluorescence intensity and the temperature after normalization. It can be seen that the fluorescence intensity decreases with the increase in temperature. Figure 9. Relationship between fluorescence intensity and temperature. Figure 9. Relationship between fluorescence intensity and temperature.
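As a rough illustration of this calibration workflow, the sketch below normalizes a gray value between the two reference intensities and evaluates the quadratic fit; the reference gray values are invented, the min-max form of Equation (4) is an assumption (the equation body did not survive extraction cleanly), and the coefficients are the ones printed for Equation (5).

```python
# Sketch of the calibration workflow, with invented reference gray values.
# The min-max form of Equation (4) is an assumption; the coefficients in
# fitted_intensity() are the ones printed for Equation (5).

I_LOW = 210.0   # gray value of the 21 C reference (invented)
I_HIGH = 40.0   # gray value of the 90 C reference (invented)

def normalize(i, i_low=I_LOW, i_high=I_HIGH):
    """Equation (4): scale a gray value between the two references."""
    return (i - i_high) / (i_low - i_high)

def fitted_intensity(t):
    """Equation (5): normalized intensity as a quadratic in temperature."""
    return 0.001 * t**2 - 0.0247 * t + 1.4374

# A gray value midway between the references maps to 0.5
print(normalize(125.0))  # 0.5
```

In this form the 21 °C reference maps to 1 and the 90 °C reference maps to 0, matching the decreasing trend described for Figure 9.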
Measurement of Heat Insulation
In order to test the heat insulation effect of the thermal bubble-driven micropump, the temperatures of the solution in the main pumping channel and in the heat insulation channel of the micropump were measured.
Firstly, Rhodamine B solution with a concentration of 0.02 mol/L was introduced into the micropump from the temporary liquid inlet so that the Rhodamine B solution filled the thermal bubble chamber and the heat insulation channel. Then, the temporary inlet was blocked, and Rhodamine B solution with the same concentration was injected into the main pumping channel of the micropump from the inlet. In order to keep zero back pressure during the pumping process, the horizontal tubes connected to the inlet and the outlet were kept at the same height. Alternating current with a frequency of 80 kHz was applied to the excitation coil; both the induction heating time and the interruption time were 1 s. The temperature measurement area was a region of the micropump, shown as the blue area in Figure 10. According to the fitted relationship between fluorescence intensity and temperature, the temperature distribution of the region is obtained when the fluorescence intensity of the solution in the micropump is measured. The region was then drawn as a temperature nephogram according to the fluorescence intensity. Figures 11 and 12 are partial temperature nephograms of the heat insulation channel and main pumping channel in the thermal bubble expansion stage and in the thermal bubble contraction stage, respectively, with an applied apparent power of 6.28 VA. From the temperature distribution nephograms, it can be seen that the liquid temperature in the heat insulation channel was higher than that in the main pumping channel. The temperature of the solution in the main pumping channel was lower than 35 °C even in the expansion stage at an apparent power of 6.28 VA, while the temperature of the thermal bubble is higher than 100 °C and the solution near the thermal bubble also reaches a high temperature. With the new thermal bubble-driven micropump, the temperature in the main pumping channel is therefore safe for most chemical and biological solutions.
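The intensity-to-temperature conversion behind such a nephogram can be sketched as below. This is a hypothetical illustration, not the authors' image-processing code: it uses the printed Equation (5) coefficients and a brute-force inversion over the calibrated range, and the tiny "image" of normalized intensities is invented.

```python
# Hypothetical sketch: convert a grayscale fluorescence image into a
# temperature map (nephogram) by inverting the Equation (5) calibration.
# The small "image" of normalized intensities is invented.

def calibration(t):
    # Equation (5), coefficients as printed in the text
    return 0.001 * t**2 - 0.0247 * t + 1.4374

def intensity_to_temperature(i_n, t_min=21.0, t_max=90.0, steps=2000):
    """Find the temperature in [t_min, t_max] whose fitted intensity
    is closest to the measured normalized intensity i_n."""
    best_t, best_err = t_min, abs(calibration(t_min) - i_n)
    for k in range(1, steps + 1):
        t = t_min + (t_max - t_min) * k / steps
        err = abs(calibration(t) - i_n)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

image = [[1.5, 3.0, 6.0],
         [2.0, 4.5, 7.0]]              # invented normalized intensities
nephogram = [[intensity_to_temperature(i) for i in row] for row in image]
```

Each pixel of the grayscale image is mapped through the calibration, and the resulting per-pixel temperatures can then be rendered as the color nephogram.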
The change in the solution temperature from bottom to top along the marked line was drawn after measuring the solution temperature in the heat insulation channel and the main pumping channel. The position of the marked line is shown as the red dotted line in Figure 10. Figure 13 shows the temperature curve along the marked line at the thermal bubble expansion stage when the apparent power was 4.29 VA; the temperature difference between the heat insulation channel and the main pumping channel was about 5 °C. Figure 14 shows the temperature variation along the marked line during the expansion stage of the thermal bubble when the applied apparent power was 6.28 VA; the temperature difference was about 20 °C.
Figure 14. Temperature along the marked line (the red dotted line in Figure 10) when the applied apparent power is 6.28 VA.
Conclusions
In this paper, a thermal bubble-driven micropump with induction heating and heat source insulation was designed and fabricated. Experiments on the pumping flow rate and the back pressure of the heat source insulation micropump were carried out. The back pressure and the pumping flow rate were reduced compared with micropumps using the same heating method without heat source insulation. A fluorescent temperature measurement system with a microscope was built, and fluorescent temperature calibrations of Rhodamine B solutions from 21 °C to 90 °C were carried out. Then, based on the temperature calibration of the Rhodamine B solution, the temperatures in parts of the heat insulation channel and the main pumping channel were measured. The temperature of the solution in the main pumping channel of the heat source insulation micropump was reduced compared to the temperature in the insulation channel. When the apparent power was 6.28 VA, the temperature of the pumped liquid was less than 35 °C in the main pumping channel. If the pumped liquid contains heat-sensitive biological or chemical substances, the thermal bubble-driven micropump with heat source insulation can be used. For liquids without heat-sensitive biological or chemical substances and with high requirements for the pumping flow rate, a thermal bubble-driven micropump without heat source insulation can be adopted. In order to further reduce the heat effect of thermal bubbles on the fluid being pumped, the length of the heat insulation channel could be increased.
Characterization and in silico analysis of the domain unknown function DUF568-containing gene family in rice (Oryza sativa L.)
Background Domains of unknown function (DUF) proteins comprise a number of uncharacterized and highly conserved protein families in eukaryotes. In plants, some DUFs have been predicted to play important roles in development and in the response to abiotic stress. Among them, the DUF568-containing protein family is plant-specific and has not been described previously. A basic analysis and expression profiling were performed, and co-expression and interaction networks were constructed to explore the functions of the DUF568 family in rice. Results The phylogenetic tree showed that the 8, 9 and 11 DUF568 family members from rice, Arabidopsis and maize were divided into three groups. The evolutionary relationship between DUF568 members in rice and maize was close, while the genes in Arabidopsis were more distantly related. Cis-element prediction showed that over 82% of the elements upstream of OsDUF568 genes were responsive to light and phytohormones. Gene expression profile prediction and RT-qPCR experiments revealed that OsDUF568 genes were highly expressed in the leaves, stems and roots of rice seedlings. The expression of some OsDUF568 genes varied in response to plant hormones (abscisic acid, 6-benzylaminopurine) and abiotic stress (drought and chilling). Further analysis of the co-expression and protein–protein interaction networks using gene ontology showed that OsDUF568-related genes were enriched in cellular transport, metabolism and processes. Conclusions In summary, our findings suggest that the OsDUF568 family may be a vital gene family for the development of rice roots, leaves and stems. In addition, the OsDUF568 family may participate in abscisic acid and cytokinin signaling pathways, and may be related to abiotic stress resistance in these vegetative tissues of rice. Supplementary Information The online version contains supplementary material available at 10.1186/s12864-023-09654-1.
Background
Domains of unknown function (DUFs) are a group of protein families that are highly conserved yet uncharacterized. The first DUFs, DUF1 and DUF2, were identified and renamed the GGDEF domain and EAL domain by Chris Ponting in 1998 [1][2][3]. Since then, the number of known DUF families has increased rapidly owing to the sequencing of the genomes of a large number of species. There are 19,632 families in the Pfam database version 35.0, of which 4795 (24%) are DUF families [4]. Rice (Oryza sativa L.) is an important cereal crop, and DUFs are predicted to play important roles in its development and responses to abiotic stress [5][6][7][8][9][10].
DUF568 is a conserved domain that is exclusively found in plants. As of January 2023, the Pfam database contained a total of 1,713 sequences belonging to the DUF568-containing gene family (PF04526) across 150 species. The auxin-responsive protein AIR12 (auxin-induced in root cultures), a member of the DUF568 family, has been reported to interact with other redox partners within the plasma membrane to establish a redox connection between the cytoplasm and the apoplast [11,12]. In addition, Os03g0194600, also a member of the OsDUF568 family, has previously been reported to be induced by nitrogen starvation in rice roots [13].
This study conducted a comprehensive genomic analysis of the DUF568 gene family in rice, including phylogenetic analysis, subcellular localization prediction, cis-element analysis and expression analysis. Then, the co-expressed genes and interacting proteins of the OsDUF568 family were analyzed to reveal their potential biological functions. Furthermore, the expression of the OsDUF568 family in response to phytohormones and abiotic stresses was investigated experimentally. These results provide valuable insights into the OsDUF568 family and pave the way for future research into its biological functions.
To extend our understanding of the OsDUF568 family, a neighbor-joining tree of DUF568 homologous genes from rice, maize and Arabidopsis thaliana was constructed with the bootstrap method (Fig. 1). These DUF568 members were classified into three groups (I, II, III). The DUF568 members from rice and maize were distributed evenly among the three groups, while eight of the nine DUF568 members from Arabidopsis thaliana clustered in Group I, and the remaining gene (AT3G07390) belonged to Group III. This classification showed that the genetic distance between the rice and maize DUF568 genes was small, while the Arabidopsis genes were more distant and conserved.
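The kind of distance comparison behind this grouping can be illustrated with a toy p-distance calculation. The sequences below are invented stand-ins, not real DUF568 sequences, and this replaces the paper's bootstrap neighbor-joining with the simplest possible closest-pair check.

```python
# Toy illustration of sequence-distance grouping (NOT the authors' NJ
# bootstrap pipeline). The aligned sequence strings are invented.

def p_distance(a, b):
    """Fraction of mismatched positions between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def closest_pair(seqs):
    """Return the pair of names with the smallest pairwise p-distance."""
    names = list(seqs)
    pairs = {(a, b): p_distance(seqs[a], seqs[b])
             for i, a in enumerate(names) for b in names[i + 1:]}
    return min(pairs, key=pairs.get)

seqs = {
    "OsDUF568.x": "MKTAYWVLCC",   # invented rice-like sequence
    "ZmDUF568.x": "MKTAYWVLCA",   # one mismatch vs. the rice sequence
    "AtDUF568.x": "MRSGFWILSA",   # more divergent
}
print(closest_pair(seqs))  # ('OsDUF568.x', 'ZmDUF568.x')
```

In this toy setup the rice-like and maize-like sequences pair up first, mirroring the close rice/maize relationship described above.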
To investigate the evolutionary relationships of the OsDUF568 family, the conserved domains and motifs were analyzed (Fig. 2B). All OsDUF568 proteins have the DUF568 domain; OsDUF568.1, 3, 5 and 7 also contained the Cytochrom-b561-FRRS1-like domain. Further, nine distinct conserved motifs were identified: motifs 1, 4 and 5 were observed in all OsDUF568s, and motifs 4 and 5 surrounded the region of the DUF568 domain in all OsDUF568 proteins, suggesting that motifs 4 and 5 may be essential parts of the DUF568 domain. Similarly, motifs 2, 3, 6 and 8 may be related to the Cytochrom-b561-FRRS1-like domain. Moreover, motifs 7 and 9 were found in the OsDUF568s (OsDUF568.4 excluded) belonging to Groups II and III. The structural differences among OsDUF568 proteins suggest that the family may have variant-specific functions.
Cis-acting elements of OsDUF568 genes
Analyzing cis-acting elements in greater detail will facilitate a better understanding of the precise control of genes and generate valuable clues about their functional multiplicity [14]. This report identified potential cis-acting elements in the 2 kb upstream regions of the OsDUF568 genes using the PlantCARE website. Thirty-five cis-acting elements were detected in total (Fig. 3A), which formed four main categories: light responsiveness, phytohormone responsiveness, abiotic stresses and plant growth (Fig. 3B).
The four categories contained eleven subdivisions. The largest subdivision was light responsiveness, which contained 45.3% of the predicted cis-elements, with the G-box (light-responsive element) and Box 4 (part of a module for light response) as representatives. A series of regulatory elements participating in plant hormone responsiveness ranked second; cis-acting factors responding to abscisic acid, methyl jasmonate, gibberellin, salicylic acid and auxin were involved. Among them, the ABRE (related to the abscisic acid response) covered the largest portion, followed by the TGACG-motif and CGTCA-motif (methyl jasmonate responsive elements). In the abiotic stress response category, AREs (anaerobic induction elements) were the most common, followed by elements relating to low-temperature responsiveness (LTR) and drought-inducibility (MBS). As for the plant growth regulation category, only two main cis-acting factors were identified, the CAT-box (referred to meristem expression) and the O2-site (involved in zein metabolism regulation). Intriguingly, all kinds of cis-regulatory elements were distributed widely throughout the promoter regions of OsDUF568 genes, revealing that OsDUF568 may have intricate expression patterns and be crucial in the regulation of rice development and stress resistance.
Fig. 1 Phylogenetic tree showing the evolutionary relationships between DUF568 proteins from rice, Arabidopsis thaliana and maize. The major three phylogenetic clusters were marked as I, II and III based on genetic distance. There were eight, 11 and nine DUF568 proteins from rice (filled circles), maize (unfilled circles) and Arabidopsis thaliana (unfilled squares), respectively.
Fig. 3 The rose chart on the left shows the proportion of the categories (circle) and subclasses (petals), with the length of each petal proportional to the number of elements. The orange, green, blue and red petals represent light responsiveness, phytohormone responsiveness (including abscisic acid, methyl jasmonate, gibberellin, salicylic acid and auxin), abiotic stress responses (including anaerobic, drought, low-temperature, anoxic, and defense and stress), and plant growth, respectively. The names of the cis-acting elements and their corresponding box colors are sorted from high to low according to their occurrence frequency, accompanied by the subclasses of the cis-acting elements on the right.
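A cis-element scan of this kind can be sketched as a simple motif search. The 30-bp "promoter" and the motif table below are invented for illustration (the real analysis used the 2 kb upstream regions and the PlantCARE element definitions), and the scan ignores the reverse strand and overlapping matches.

```python
# Toy cis-element scan over an invented promoter fragment. Motif strings
# approximate PlantCARE-style consensus sequences; counts are illustrative
# only (forward strand, non-overlapping matches).
import re

MOTIFS = {
    "G-box (light)": "CACGTG",
    "ABRE (abscisic acid)": "ACGTG",
    "CGTCA-motif (MeJA)": "CGTCA",
    "ARE (anaerobic)": "AAACCA",
}

promoter = "TTCACGTGATCGTCAAAACCAGGACGTGTT"  # invented sequence

counts = {name: len(re.findall(motif, promoter))
          for name, motif in MOTIFS.items()}
print(counts)
```

Tallying such counts per promoter and per category is what produces proportions like the 45.3% light-responsive share reported above.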
Expression patterns of OsDUF568 genes in different tissues and response to plant hormone & abiotic stresses
To further characterize the potential biological functions of the OsDUF568 genes (excluding OsDUF568.8), their expression patterns were analyzed in 12 different tissues using data obtained from the RiceENCODE website (Fig. 4). The results showed that the expression levels of OsDUF568 genes varied across tissues. Specifically, OsDUF568.3 and 5 were highly expressed in most tissues, while OsDUF568.1 was expressed at low levels in most tissues. OsDUF568.4 was barely expressed in nine tissues, except for young leaves, nodes I & II, and roots. Importantly, all eight OsDUF568 genes showed high expression levels in these three tissues, indicating that OsDUF568 genes might be involved in the development of leaves, nodes and roots in rice.
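The heat-map scaling described for Fig. 4 (log2-transformed, min-max normalized FPKM) can be sketched as follows; the FPKM values are invented, and the +1 pseudocount is an assumption to handle zero expression.

```python
# Sketch of log2 + min-max scaling of FPKM values for one heat-map row.
# FPKM numbers are invented; the +1 pseudocount is an assumption.
import math

def log2_minmax(fpkm):
    logged = [math.log2(v + 1) for v in fpkm]       # log2 transform
    lo, hi = min(logged), max(logged)
    return [(v - lo) / (hi - lo) for v in logged]   # scale to [0, 1]

print(log2_minmax([0, 3, 15]))  # [0.0, 0.5, 1.0]
```

Scaling each gene to [0, 1] in this way is what lets the heat map's color bar run from 0 to 1.00 regardless of a gene's absolute FPKM range.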
Notably, there were several plant hormone response elements among the upstream cis-acting elements of OsDUF568 genes. To further investigate the possible mechanisms of OsDUF568 genes, this study analyzed the relative expression of OsDUF568 genes in response to six plant hormones (ABA, abscisic acid; GA3, gibberellin A3; IAA, 3-indoleacetic acid; BL, brassinolide; tZ, trans-zeatin; JA, jasmonic acid) using data from the RiceXPro website (Fig. 5). The relative expression of OsDUF568.4 and 5 was up-regulated after ABA treatment for 3 and 6 h, while OsDUF568.2 and OsDUF568.7 were down-regulated. Most DUF568 genes were insensitive to GA3, IAA and BL treatment, with only OsDUF568.4 showing slight up-regulation after GA3, IAA and BL treatment, and OsDUF568.6 slightly up-regulated after 3 h of IAA treatment. In addition, OsDUF568.2 and 8 were down-regulated after tZ treatment, while OsDUF568.1, 3, 4, 5, 6 and 7 were up-regulated. Most OsDUF568 genes were down-regulated after JA treatment, except for OsDUF568.3, which showed significant up-regulation. These results indicated that OsDUF568 genes might regulate relevant hormone signaling pathways. Notably, the expression of OsDUF568.4 changed significantly under the treatment of all six plant hormones, suggesting that OsDUF568.4 may be widely involved in rice hormone signaling pathways.
Fig. 4 Expression patterns of DUF568 genes in different tissues of rice. Log2 transformed min-max normalized gene expression values were used to generate the heat map. The gene expression levels were quantified as fragments per kilobase per million (FPKM) and visualized as a color gradient in the heat map. The color scale bar on the left side of the heat map represents the relative expression level, ranging from 0 to 1.00, where higher values correspond to higher expression levels.
There were several cis-acting elements related to abiotic stress responses in the upstream regions of OsDUF568 genes. The changes in expression of OsDUF568 genes in response to various abiotic stresses were analyzed using publicly available transcriptomic datasets from the Expression Atlas website (Fig. 6).
The E-GEOD-115371 [15] and E-MEXP-2267 [16] datasets were analyzed (Fig. 6A&B). The E-GEOD-115371 dataset provided molecular profiling of rice seeds grown under anaerobic conditions for 1 h, 3 h, 12 h, 24 h, 2 days, 3 days, and 4 days [15]. As shown in Fig. 6, all OsDUF568 genes showed decreased expression in response to anaerobic stress. Among them, OsDUF568.3 and 4 were strongly down-regulated, reaching their lowest levels at 72 h. The E-MEXP-2267 dataset contained the transcription profiling time course of rice germination under anaerobic conditions, an aerobic-to-anaerobic switch, and an anaerobic-to-aerobic switch [16]. OsDUF568.2 and 5 showed decreased expression in response to anaerobic stress, while no OsDUF568 gene was differentially expressed in response to the switch between aerobic and anaerobic conditions. Overall, our results showed that the OsDUF568 family is likely to play a role in the response to anaerobic stress.
Drought-induced differential expression of OsDUF568.2, 3 and 5 from two datasets (E-GEOD-41647 and E-MTAB-4994) is shown (Fig. 6C). E-GEOD-41647 contained data for OsDUF568.2 and 3 from seedlings of the susceptible IR20 and drought-tolerant Dagad deshi genotypes [17]. Both OsDUF568.2 and 3 were up-regulated in Dagad deshi, but only OsDUF568.2 in IR20. To understand the role of OsDUF568 family members in chilling/cold tolerance and susceptibility, three datasets (E-MTAB-5941, E-GEOD-37940 and E-GEOD-38023) were analyzed (Fig. 6D, E, F). E-MTAB-5941 contained data on short- and long-term stress-induced changes in the transcriptome of a chilling-sensitive genotype Thaibonnet and a chilling-tolerant genotype Volano, each subjected to 2 and 10 h chilling treatment at 10 °C [18]. OsDUF568.3 was up-regulated in both Thaibonnet and Volano, while OsDUF568.1 and 5 were up-regulated only in Thaibonnet and Volano, respectively. E-GEOD-37940 comprised the transcriptome of the cold-tolerant introgression line K354 and its recurrent parent C418 under cold stress [19]. OsDUF568.4 and 7 were up-regulated in K354 and C418, while OsDUF568.3 was up-regulated in C418 only. E-GEOD-38023 contained expression data from a chilling-tolerant Li-Jiang-Xin-Tuan-He-Gu (LTH) japonica landrace variety and a chilling-sensitive IR29 indica cultivar. Plants from both genotypes were subjected to chilling treatment at 4 °C, and then moved to a normal temperature of 29 °C for 24 h to allow recovery [20]. OsDUF568.2, 3 and 5 were up-regulated in LTH and IR29, while OsDUF568.4 was down-regulated. OsDUF568.7 showed down-regulation in response to the normal-temperature recovery. In general, OsDUF568 family genes were differentially regulated in response to chilling between the tolerant and susceptible rice genotypes.
Co-expression gene networks of OsDUF568 genes
Co-expression network analysis of OsDUF568 genes has the potential to reveal the putative functions of genes involved in biological processes [21]. The co-expression networks of OsDUF568 genes were constructed using the RiceFREND website (Fig. 7A and Table S3). The weighted Pearson correlation coefficient (PCC) of genes in most networks was around 0.65, while the genes in the network of Os08g0335600 (OsDUF568.4) had a higher coefficient, suggesting a close functional relationship between the genes in this network. In addition, some genes were co-expressed with both Os03g0194300 (OsDUF568.1) and Os03g0194900 (OsDUF568.3), and most of these genes were related to enzymes such as endoglucanase, caffeic acid 3-O-methyltransferase, and transferases.
Remarkably, the co-expressed genes of OsDUF568 networks were highly expressed in roots, stems and leaves (Fig. 7C), which was consistent with the expression patterns of OsDUF568 genes (Fig. 4), suggesting that OsDUF568 and co-expressed genes may play important roles in rice development.
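The Pearson correlation coefficient underlying such co-expression calls can be computed as below; the two expression vectors are invented.

```python
# Plain Pearson correlation between two invented expression profiles,
# the statistic behind co-expression coefficients like the ~0.65 above.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

gene_a = [1.0, 2.5, 4.0, 3.0, 5.5]   # invented expression profile
gene_b = [0.8, 2.9, 4.1, 3.2, 5.0]   # invented expression profile
print(round(pearson(gene_a, gene_b), 3))
```

Pairs whose coefficient exceeds a chosen threshold (around 0.65 in the networks above) are connected as co-expressed genes.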
Protein-protein interaction networks analysis of OsDUF568 proteins
The predicted functional partners of OsDUF568s were identified from the STRING website, and protein-protein interaction (PPI) networks were constructed (Fig. 8A and Table S4). The associations of most proteins in the OsDUF568 PPI networks were text-mined. In addition, there were a number of co-expressed and experimentally determined associated proteins in the networks of OsDUF568.6 and 8. Some proteins were associated with multiple OsDUF568 proteins. Among them, the auxin-repressed protein-like protein ARP1 (OsJ_34778) was associated with OsDUF568.1, 6, and 8, and the pentatricopeptide (PPR) repeat-containing protein-like protein (OS06T0611200-00) was associated with OsDUF568.3, 5, 6 and 7, which indicated that the function of the OsDUF568 family may be closely related to these two proteins.
According to the functional enrichments (Fig. 8B), most networks of the OsDUF568 family were related to signal transduction, and proteins in the OsDUF568.6 and 8 networks were related to hormone-mediated signaling pathways, especially the cytokinin-activated signaling pathway (GO:0009736). Meanwhile, proteins in the OsDUF568.1 network were related to protein deneddylation (GO:0000338) and COP9 signalosome (CSN) assembly (GO:0010387). The CSN complex regulates the activity of the cullin-RING ligase (CRL) families of E3 ubiquitin ligase complexes, and plays critical roles in regulating gene expression, cell proliferation, and the cell cycle [22]. The functions of proteins in the OsDUF568.7 network may include Rab protein signal transduction (GO:0032482) and cellular localization (GO:0051641); Rab proteins affect cell growth, motility and other biological processes [23]. These results indicated that the OsDUF568 family may be involved in material transport, metabolism and signal transduction in rice.
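Enrichment calls like these are typically made with a hypergeometric (Fisher-style) over-representation test; the sketch below is a generic illustration with invented counts, not the STRING implementation.

```python
# Hypergeometric over-representation test with invented counts:
# N genes in the background, K annotated with a GO term, a network of
# n genes containing k annotated members.
from math import comb

def hypergeom_pvalue(N, K, n, k):
    """P(X >= k) when drawing n genes without replacement from N."""
    upper = min(K, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, upper + 1)) / comb(N, n)

# e.g. 20 of 30 network genes carry a term annotated on 500 of 20000 genes
p = hypergeom_pvalue(N=20000, K=500, n=30, k=20)
```

A term whose tail probability p falls below the multiple-testing-corrected threshold is reported as enriched for the network.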
Expression of OsDUF568 family in response to phytohormones and abiotic stresses
The published data from the public databases above showed that some OsDUF568 family members had higher expression levels in different rice tissues and were repressed under multiple phytohormone and abiotic stress treatments. To confirm this experimentally, the expression of OsDUF568.2, 3, 4, 6 and 7 in rice seedlings subjected to phytohormone (ABA and 6-BA) and abiotic stress (drought and cold) treatments was examined. The expression of OsDUF568 genes in leaves, stems and roots of rice seedlings was investigated (Fig. 9A). The OsDUF568.2, 3 and 4 transcript levels in stems were lower than in leaves and roots, while those of OsDUF568.6 and 7 were higher. Furthermore, OsDUF568.2, 3, 6 and 7 were highly expressed in the roots. These results suggested that the expression of OsDUF568 genes exhibited significant tissue specificity in rice.
The expression of OsDUF568 genes was also regulated by phytohormones. As shown in Fig. 9B, the expression level of OsDUF568.2 decreased after 6-BA treatment, while OsDUF568.3, 4, 6 and 7 reached their highest levels after 6-BA treatment for 6 h. Under ABA treatment (Fig. 9B), the expression levels of OsDUF568.2 and 7 were gradually suppressed, while OsDUF568.3 and 4 were induced, reaching their highest levels after 6 h of treatment.
OsDUF568 genes showed significant responses to abiotic stresses. The OsDUF568.2 expression level reached its highest at 3 h after drought treatment, then declined. Expression of OsDUF568.3 and 7 was gradually repressed by drought treatment (Fig. 9C). The OsDUF568.6 expression level was induced slightly at the initial time point and suppressed to its lowest level at 12 h. As for cold stress (Fig. 9C), the OsDUF568.2, 4, 6 and 7 expression levels were rapidly suppressed. In contrast, the OsDUF568.3 expression level under cold stress was first induced and then suppressed. These results indicated that OsDUF568 genes are involved in responses to multiple phytohormones and abiotic stresses.
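RT-qPCR panels like these are commonly quantified with the 2^-ΔΔCt method; the sketch below is a generic illustration with invented Ct values, and the choice of reference gene and calibrator sample is an assumption (the source does not state them here).

```python
# Generic 2^-ddCt relative-expression calculation with invented Ct values.

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene in a treated sample vs. an untreated
    calibrator, normalized to a reference gene (2^-ddCt method)."""
    d_ct_sample = ct_target - ct_ref      # normalize treated sample
    d_ct_cal = ct_target_cal - ct_ref_cal # normalize calibrator
    return 2 ** -(d_ct_sample - d_ct_cal)

# invented Cts: treated sample (24, 20) vs. untreated calibrator (26, 20)
print(relative_expression(24.0, 20.0, 26.0, 20.0))  # 4.0
```

A result of 4.0 here would mean the target transcript is four-fold more abundant after treatment, the kind of induction reported for OsDUF568.2 under drought.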
Discussion
According to the RAP-DB and NCBI (National Center for Biotechnology Information) websites, and the results from Preger et al. [11] and our analysis, the OsDUF568 family was found to contain DOMON-CIL1-like and Cytochrom-b561-FRRS1-like domains. The DOMON superfamily may be a direct participant in the electron transfer process [24]. Cytochromes b561 (CYB561s) are a family of di-heme transmembrane (TM) proteins that use ascorbate (ASC) as an electron donor and are present in various organs and cell types in plants and animals. The CYB561-core domain is associated with DOMON in ubiquitous CYB-DOM proteins, which comprise a novel electron-transfer system potentially involved in oxidative modification of cell-surface proteins. CYB561s and CYBDOMs play important roles in plants such as stress defense, cell wall modification and cell metabolism [25]. In addition, AIR12s (OsDUF568.6 and 8) were also found to be involved in the establishment of a redox connection between the cytoplasm and the apoplast [11]. Further, the OsDUF568 proteins also contained multiple conserved amino acids (Fig. 2A), with methionine and histidine supporting the binding of OsDUF568 proteins to hemes [11]. All the results above showed that the OsDUF568 family may be involved in stress defense and cell metabolism by mediating electron transport of redox domains in rice.
The upstream regions of OsDUF568 genes were found to contain cis-acting elements that respond to light, phytohormones, and abiotic stresses (Fig. 3), suggesting that these factors may interact to regulate the expression of OsDUF568 genes. The expression patterns of OsDUF568 were investigated in 12 tissues (Fig. 4), and these genes were found to be highly expressed in rice roots, stems and nodes I and II, particularly in roots, suggesting their importance in rice growth and root development. Furthermore, the relative expression of OsDUF568 genes under six hormone treatments showed that OsDUF568 genes were sensitive to ABA, tZ and JA treatments (Fig. 5). ABA treatment significantly altered the expression of four OsDUF568 genes, while tZ treatment up-regulated most of the OsDUF568 genes, and JA treatment down-regulated most. These results indicated that OsDUF568 genes may participate in these hormone pathways in rice.
Rice is adversely affected by abiotic stresses including anaerobic [26,27], drought [28] and cold [19,20] conditions. Several reports have shown that DUFs may be important for rice resistance to abiotic stresses [7-9]. All OsDUF568 genes showed decreased expression in response to anaerobic stress (Fig. 6A), indicating that the OsDUF568 family is likely to play a role in this response. In addition, OsDUF568.2 and OsDUF568.3 responded positively to drought stress (Fig. 6B), suggesting roles in rice adaptation to drought environments. Similarly, OsDUF568.1, 2, 3 and 5 were up-regulated under chilling stress (Fig. 6C), suggesting that these four OsDUF568 genes may be important for rice cold resistance. Given the common positive response of OsDUF568.2 and OsDUF568.3 under drought and chilling stresses, overexpressing these two genes in rice may be an effective way to engineer plant fitness for drought and cold conditions.
Co-expression (Fig. 7) and PPI (Fig. 8) networks of OsDUF568 genes were constructed, and the possible functions of these genes were examined by GO enrichment analysis. The results showed that OsDUF568 genes are widely involved in material transport, metabolism and signal transduction in rice. Meanwhile, the AIR12 genes (OsDUF568.6 and 8) may be related to hormone-mediated signaling pathways such as cytokinin signaling. OsDUF568.6 was up-regulated while OsDUF568.8 was down-regulated after trans-zeatin treatment, and the two genes also responded differently to other phytohormones (Fig. 5). These results indicate that the function of the AIR12 proteins is closely related to phytohormone signal transduction, especially the cytokinin-activated signaling pathway.
To understand the potential biological functions of OsDUF568 genes in rice, the transcript levels of OsDUF568.2, 3, 4, 6 and 7 were further investigated in different tissues of rice seedlings and after phytohormone and abiotic stress treatments (Fig. 9). OsDUF568 genes were generally highly expressed in rice roots, consistent with the predicted results (Fig. 4), indicating that the OsDUF568 family may be vital for root development. Expression analysis revealed that some OsDUF568 genes were induced or inhibited by different phytohormone treatments. Among them, OsDUF568.4 and 6 were significantly induced after 6-BA treatment. Previous reports showed that OsDUF568.6 is induced after cytokinin treatment. In addition, the cytokinin-inducible type-A response regulator OsRR6 acts as a negative regulator of cytokinin signaling, and OsDUF568.4 was highly expressed in transgenic rice lines overexpressing OsRR6 [29]. This suggests that OsDUF568.4 and 6 may be involved in the cytokinin signaling pathway. Furthermore, OsDUF568.3 and 4 were up-regulated after both 6-BA and ABA treatments, suggesting that they may participate in both abscisic acid and cytokinin signaling pathways. The expression levels of most OsDUF568 genes decreased under drought and cold stress treatments, whereas OsDUF568.2 and 3 were significantly induced under drought and cold treatments, respectively. OsDUF568.2 has previously been reported as a gene within a quantitative trait locus (QTL) region for high grain yield under lowland drought [30]. Further research on the biological functions of OsDUF568.2 may help develop drought-resistant versions of popular varieties.
Taken together, these results indicate that the OsDUF568 gene family is essential for the development of rice leaves, stems and roots. The OsDUF568 family may also participate in abscisic acid and cytokinin signaling pathways and contribute to abiotic stress resistance in these vegetative tissues.
Conclusions
This study conducted a comprehensive analysis of the OsDUF568 family. The phylogenetic tree showed a close evolutionary relationship between DUF568 members in rice and maize, while those in Arabidopsis were distantly related. Cis-element prediction showed that over 82% of the elements upstream of OsDUF568 genes were responsive to light and phytohormones. Expression patterns revealed that all 7 OsDUF568 genes examined were highly expressed in young leaves, nodes I and II, and roots of rice. Furthermore, the expression of some OsDUF568 genes responded to plant hormones (abscisic acid, trans-zeatin and jasmonic acid) and abiotic stresses (anaerobic, drought and chilling). GO analysis of the co-expression and PPI networks revealed that OsDUF568-related genes were enriched in material transport, metabolism and signal transduction in rice. Finally, RT-qPCR experiments indicated that the OsDUF568 family is highly expressed in rice roots and may participate in phytohormone and abiotic stress signaling pathways. These findings provide valuable insights into the OsDUF568 family and will contribute to the elucidation of its biological functions in the future.
Identification of DUF568 gene family members and phylogenetic analysis
The HMM (hidden Markov model) of the DUF568 (PF04526) domain was obtained from Pfam [31]. The HMM was searched against the whole protein sequences of rice, Arabidopsis thaliana and maize obtained from the NCBI [32] using HMMER ver. 3.0 (E-value < 10^-15) [33]. The MSU and RAP loci of DUF568 genes in rice were obtained from the China Rice Data Center [34]. Exons and chromosome locations were obtained from the NCBI [32], and descriptions were obtained from the RAP-DB (Rice Annotation Project Database) [35]. Protein physicochemical properties were analyzed using ProtParam [36], and subcellular localization was predicted using PSORT [37].
The full-length DUF568 protein sequences of rice, Arabidopsis and maize were compared in MEGA-X. Multiple sequence alignment was performed using the MUSCLE aligner with all other parameters set to their default values. The neighbor-joining tree was constructed using the bootstrap method with 1000 replicates.
Comparison of protein sequence, gene structure, conserved domains and motifs
The amino acid sequences of rice OsDUF568 proteins were aligned using the MUSCLE method in MEGA-X and visualized using Jalview. Signal peptides were identified using SignalP [38], and transmembrane regions were analyzed using TMHMM [39]. The conserved motifs of OsDUF568 proteins were predicted using MEME [40], domains were obtained from Pfam, and both were visualized using TBtools [41].
Cis-acting elements
The genome annotation for rice was obtained from the NCBI [42]. Cis-acting elements located within 2 kb upstream of the OsDUF568 genes were extracted using the PlantCARE website [43] and visualized using TBtools [41]. The rose chart was created using Microsoft Office PowerPoint 2019.
Expression patterns
The expression patterns of OsDUF568 genes in 12 different tissues (flower buds, flowers, panicles, milk grains, mature seeds, endosperm, young leaves, mature leaves, lamina joints of the flag leaf, stems, nodes I and II, and roots) were obtained from RiceENCODE [44]. Expression patterns in response to plant hormones (ABA, GA3, IAA, BL, tZ and JA) were obtained from RiceXPro [45]. Gene expression data for OsDUF568 genes under abiotic stress conditions were extracted from several datasets available at the EMBL-EBI Expression Atlas [17], including E-GEOD-115371, E-MEXP-2267, E-GEOD-41647, E-MTAB-4994, E-MTAB-5941, E-GEOD-37940 and E-GEOD-38023. Log2-fold change values were used and visualized as color gradients in the heat maps. All data were visualized using TBtools [41].
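As a minimal illustration of the log2-fold change values shown in the heat maps, the quantity can be computed as below. The pseudocount and the example numbers are assumptions for illustration only, not values from the cited datasets.

```python
import math

def log2_fold_change(expr_stress, expr_control, pseudocount=1.0):
    """Log2-fold change between a stress and a control expression value.
    A pseudocount avoids division by zero for unexpressed genes
    (an assumption for this sketch, not stated in the text)."""
    return math.log2((expr_stress + pseudocount) / (expr_control + pseudocount))

# Positive values indicate up-regulation, negative values down-regulation.
print(log2_fold_change(15.0, 3.0))  # log2(16/4) = 2.0
```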
Co-expressed genes and PPI networks
Co-expressed genes of OsDUF568 genes were identified using RiceFREND [46], and Gene Ontology (GO) terms were obtained from the GO database [47] with P < 0.05 and FDR (false discovery rate) < 0.05. The expression patterns of the co-expressed networks in different rice tissues were obtained from the RiceXPro website [48] and visualized using R. The PPI networks and functional enrichment results for OsDUF568 proteins were obtained from STRING [49]. Both networks were drawn with Cytoscape ver. 3.9.0 [50].
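The FDR threshold used above is typically applied with the Benjamini-Hochberg step-up procedure. The enrichment tools cited handle this internally; the sketch below is a generic illustration of the procedure, not the exact implementation of those websites.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up FDR procedure.
    Returns a boolean list marking which p-values are significant at level q."""
    m = len(pvals)
    # Rank the p-values from smallest to largest.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k/m) * q.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    # Reject all hypotheses up to and including rank k_max.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6]))
# [True, True, False, False, False]
```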
Phytohormone treatments and abiotic stress treatments
The japonica rice cultivar Nipponbare was used for all RT-qPCR analyses. Seeds were surface-disinfected by soaking in 75% ethanol for 1 min and 20% sodium hypochlorite for 15 min, then washed 10 times with water. The seeds were soaked in fresh water at 28 °C for 24 h and germinated for 24 h at 37 °C. Germinated seedlings were transferred to an IRRI (International Rice Research Institute) hydroponic system. Plants were grown in a growth chamber at 30 °C / 25 °C under a 16-h-light / 8-h-dark cycle with 75% humidity.
Seven-day-old seedlings were used to examine the expression patterns of OsDUF568.2, 3, 4, 6 and 7. Leaves, stems and roots were sampled from untreated seedlings. For drought stress, the roots of seedlings were immersed in 15% PEG-6000; for cold stress, seedlings were transferred to a growth chamber at 4 °C. Treated roots were sampled at 0, 1, 3, 6, 12, 24 and 48 h after the onset of abiotic stress. Phytohormone treatments were performed by adding 50 μM abscisic acid (ABA) or 1 μM 6-benzylaminopurine (6-BA) to the hydroponic system, and roots were sampled at 0, 1, 3 and 6 h after treatment. All collected samples were immediately frozen in liquid nitrogen and stored at -80 °C. Three biological replicates were performed.
Isolation of RNA, real-time quantitative PCR and expression analysis
Total RNA was isolated from the collected samples using the RNApure Plant kit (CWBIO, Nanjing, China). Residual genomic DNA was removed and first-strand cDNA was synthesized using ToloScript All-in-one RT EasyMix for qPCR (TOLOBIO, Nanjing, China). RT-qPCR was performed with 2× Q3 SYBR qPCR Master Mix (TOLOBIO, Nanjing, China) in a final reaction volume of 10 μL on a Bio-Rad CFX Connect Real-Time PCR instrument (Bio-Rad, Hercules, CA, USA). OsActin (Gene ID: 4333919) served as the internal control. Expression levels are depicted as the cycle threshold (Ct) value of the candidate gene relative to the Ct value of the housekeeping gene. Data were analyzed with the Bio-Rad CFX Manager software and visualized using R. All gene-specific primers are listed in Table S5.
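Relative expression from Ct values, as described above, is commonly computed with the Livak 2^-ΔΔCt method. The sketch below is an assumed illustration of that standard method; the function name and example Ct values are hypothetical, and the paper may normalize slightly differently.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt: fold change of a target gene relative to a reference
    (housekeeping) gene, normalized to an untreated control sample."""
    d_ct_treated = ct_target - ct_ref            # normalize to housekeeping gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl  # same normalization for the control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Example: target amplifies 2 cycles earlier after treatment -> ~4-fold up-regulation
fold = relative_expression(24.0, 18.0, 26.0, 18.0)
print(round(fold, 2))  # 4.0
```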
Fig. 2 Fig. 3
Fig. 2 The protein sequences, conserved domains and motifs of OsDUF568. A Multiple sequence alignment of OsDUF568 proteins, showing signal peptides, conserved residues, DUF568 domains, and DOMON-CIL1-like domains. Conserved amino acid residues M100 and H197 are labeled with pound signs (#). Signal peptide sequences are shaded in pink. The purple and orange bars represent the DUF568 domain and the DOMON-CIL1-like domain, respectively. B Conserved domains and motifs of OsDUF568 proteins. Each domain and motif is illustrated with a specific color, and their distribution corresponds to their positions. The lengths of genes and proteins can be estimated using the scale at the bottom
Fig. 5
Fig. 5 Relative expression of OsDUF568 genes in response to abscisic acid (ABA), gibberellin A3 (GA3), 3-indoleacetic acid (IAA), brassinolide (BL), trans-zeatin (tZ) and jasmonic acid (JA). The gene expression values were normalized and quantified as Cy5:Cy3 ratios and visualized as a color gradient in the heat map. The color scale bar on the left represents relative expression levels: red indicates up-regulation and blue indicates down-regulation
OsDUF568.2 was more up-regulated in Dagad deshi than in IR24, and its expression level increased with drought duration in Dagad deshi. The E-MTAB-4994 dataset contained data for OsDUF568.3 and 5 from the flag leaf at the panicle initiation stage of Nagina 22 (a drought-tolerant genotype) [17]. OsDUF568.3 was up-regulated while OsDUF568.5 was down-regulated in response to drought in Nagina 22. These results indicate that the OsDUF568 family plays a role in the response to drought stress.
Fig. 6
Fig. 6 Differential expression of OsDUF568 family genes in response to anaerobic, drought and chilling stresses. A-B Fold change in the expression of OsDUF568 genes in rice seeds grown under anaerobic conditions (E-GEOD-115371 and E-MEXP-2267). C Differential expression of OsDUF568 genes in susceptible and drought-tolerant rice genotypes under drought conditions (E-GEOD-41647 and E-MTAB-4994). D-F Differential expression of OsDUF568 genes in chilling-sensitive and chilling-tolerant genotypes under chilling stress (E-MTAB-5941, E-GEOD-37940 and E-GEOD-38023). Log2-fold change values were used and visualized as color gradients in the heat maps
Fig. 7
Fig. 7 Co-expressed gene network analysis of OsDUF568 genes. A Co-expressed gene networks of DUF568 genes. Red circles represent OsDUF568 genes, purple circles represent genes from Gene Ontology (GO) enrichment, and yellow triangles represent transcription factors. Weighted Pearson correlation coefficients (PCC) are represented by lines: values close to 0.52 are shown as thin black lines, values close to 0.65 as blue lines, and values close to 0.80 as thick red lines. B GO enrichment analysis (biological process) of OsDUF568 genes and their co-expressed genes. GO enrichment was found for the OsDUF568.4, 5 and 8 networks. C Expression patterns of OsDUF568 co-expressed genes. Data came from RiceXPro and were 75th-percentile normalized with log2 transformation; the relative expression value (log2) was obtained by subtracting the median expression value within the dataset for each probe
Fig. 8
Fig. 8 Protein-protein interaction analysis of OsDUF568 proteins. A Interaction network of OsDUF568 proteins. Red circles represent OsDUF568 proteins and white circles represent predicted functional partners of OsDUF568 proteins. Differently colored edges represent different protein-protein associations. B Functional enrichment analysis (biological process) of OsDUF568 proteins and their interacting proteins with FDR < 0.05
Fig. 9
Fig. 9 Expression profile analysis of OsDUF568 genes in rice seedlings. Relative expression levels of OsDUF568 genes in various tissues of rice seedlings (A), under 6-BA and ABA treatments (B), and under drought and cold treatments (C). Error bars represent ± SD. * and ** indicate significant differences according to Student's t-test
Table 1
OsDUF568 gene family and the predicted protein properties Chr Chromosome, MW Molecular weight, pI Isoelectric point, II Instability index, GRAVY Grand average of hydropathicity
|
v3-fos-license
|
2020-07-02T10:29:16.263Z
|
2020-01-01T00:00:00.000
|
226470525
|
{
"extfieldsofstudy": [
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/35/e3sconf_interagromash2020_03008.pdf",
"pdf_hash": "b0c4e0e0cf139342f2b62fd06b15dcae82171ea9",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:900",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "59214ec090340dd6040b7b59cacb9083042eb73c",
"year": 2020
}
|
pes2o/s2orc
|
Effects of dietary 20-hydroxyecdysone supplementation on whole-body protein turnover in growing pigs
One approach to creating biologically active additives for use in pig breeding is 20-hydroxyecdysone, which regulates protein metabolism in piglets. The purpose of this work was to assess the effect of 20-hydroxyecdysone on protein turnover in piglets. The experiment was carried out on barrows (♂ Danish Yorkshire × ♀ Danish Landrace) until they reached a live weight of 53-62 kg. At the age of 60 days, two groups of piglets were formed: control and experimental. Piglets of the experimental group received 20-hydroxyecdysone at a dose of 1.6 mg/kg body weight. In piglets of the experimental group, compared with the control, a decrease in urinary nitrogen excretion was noted (by 26.8%, P < 0.05). Nitrogen deposition was 19.0% higher (P < 0.001) in piglets of the experimental group than in the control. 20-hydroxyecdysone promoted protein deposition in the body of piglets by enhancing protein-synthesizing activity. Thus, the use of 20-hydroxyecdysone in pigs increases the efficiency with which amino acids are used for the synthesis and deposition of body proteins.
Introduction
Protein metabolism in the body of growing animals depends largely on nutrition, housing conditions, rearing intensity and other factors. Of particular interest are studies of protein metabolism in intensively growing animals supplied with different levels of amino acids and biologically active substances. Limited knowledge of the mechanisms regulating protein synthesis and deposition in the body hinders the development of methods, tools and technologies that allow the genetic potential of pig meat productivity to be fully realized, including the production of high-quality pork with a defined ratio of fat to protein in meat [1,2,3,4].
Optimizing nutrition to match the physiological needs of pigs contributes to a fuller realization of their productive potential with minimal feed costs per unit of production. An urgent problem in pig breeding is the development of complete feeds with optimal contents of protein, energy and essential amino acids. The use of biologically active additives, including phytobiotics, is of great importance, as they make it possible to obtain high average daily gains in live weight and to improve feed bioconversion per unit of production and meat quality [5].
One approach to creating a new generation of biologically active additives for animal husbandry is the use of herbal preparations that increase the resistance and adaptive capacity of animals. Of particular interest in this regard are plant sources of phytoecdysteroids, polyhydroxylated sterols that have no hormonal effect in mammals and low toxicity. One of the most widely studied phytoecdysteroids is 20-hydroxyecdysone, which occurs in several medicinal plants. The multiplicity of its physiological effects, combined with its low toxicity, makes it possible to use 20-hydroxyecdysone both as an individual compound and as part of combined preparations. In recent years, great progress has been made in the study of phytoecdysteroids; their physiological actions in various pathologies and their corrective effects on metabolism have been intensively studied [6,7,8,9,10,11]. According to [12], 20-hydroxyecdysone enhances protein synthesis by activating signaling through PKB/Akt (protein kinase B / RAC-alpha serine/threonine protein kinase) to the mechanistic target of rapamycin complex 1 (mTORC1).
Purpose of the study
The aim of this study was to evaluate the effect of 20-hydroxyecdysone on body protein metabolism and to measure whole-body protein turnover in growing pigs.
Materials and methods
The experiment was carried out on crossbred barrows (♂ Danish Yorkshire × ♀ Danish Landrace). At the age of 60 days, two groups of piglets were formed according to the principle of paired analogues, taking live weight into account; the piglets were fed twice a day (9.00 and 16.00) throughout the entire experiment. The animals were housed in group pens with access to water from automatic drinkers. The experiment lasted until the piglets reached a live weight of 53-62 kg. Animals of the control and experimental groups received a grower feed, 1 kg of which contained 158.7 g crude protein, 7.7 g lysine, 4.8 g threonine, 4.6 g methionine and 12.7 MJ metabolizable energy; the lysine-to-metabolizable-energy ratio was 0.61 g/MJ. The diet of piglets of the experimental group was supplemented with 20-hydroxyecdysone (dry powder) at a rate of 30 mg/kg of feed (Table 1); the dose of 20-hydroxyecdysone per unit body weight was 1.6 mg/kg. Throughout the experiment, compound feed consumption, its chemical composition and feed consumption per unit of gain were recorded. Piglets were weighed at the beginning of the experiment and at the end of the age period. To characterize nitrogen assimilation from the feed and evaluate the efficiency of its use, a balance experiment was carried out at the end of the growing period on 7 animals (n = 3 in the control group and 4 in the experimental group). After the balance experiment, all 7 animals were slaughtered, and the carcasses were deboned to determine slaughter qualities; samples of organs and tissues were taken for physiological and biochemical studies. a In the 20-hydroxyecdysone-supplemented diets, 1.6 mg/kg was added at the expense of corn.
Most studies use a nitrogen metabolism model to measure whole-body protein turnover [13], based on measuring the kinetics of metabolism of an introduced tracer. To measure the rate of protein synthesis with this model, the total precursor flux, the rate of formation of the final products of nitrogen metabolism, and the rate of precursor secretion from the total pool into the gastrointestinal tract must be known. Methodological aspects of measuring the rates of protein synthesis and breakdown in the whole body using 15N-labeled amino acids are considered in detail in [13]. The results obtained by these researchers indicate that different amino acids give comparable data, even though their specificity as precursors for protein synthesis differs.
During the balance experiment, the rates of synthesis, breakdown and deposition of total body proteins were determined in piglets at the end of the growing period according to the method of [13], using the nitrogen-labeled amino acid 15N-glycine. 15N-glycine with an enrichment of 98 atom % excess was administered per os at 3 mg 15N per kg of body weight per day for 7 days. Stool and urine samples for isotope analysis were taken at daily intervals in compliance with all requirements for working with stable isotopes. Feces collected during the N-balance period were pooled, freeze-dried and stored at 4 °C for N determination. Urine was stored at −20 °C until N analysis. Samples of diet, urine and feces were analyzed for N content by the Kjeldahl method [14]. N retention was calculated by subtracting N excretion (via feces and urine) from N intake. For isotopic studies, nitrogen fractions were preparatively isolated from the feces and urine taken on the 6th day of the balance experiment using a Kjeltec instrument and the Kjeldahl method [14], with the system washed with ethyl alcohol between samples. The 15N content (in atom percent) was measured on a DELTA V Plus isotope ratio mass spectrometer.
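The N-retention and efficiency calculations described here (and defined in the footnote to Table 2) can be sketched as follows. The numeric values in the example are illustrative only, not the study's data.

```python
def nitrogen_balance(n_intake, n_feces, n_urine):
    """Whole-body nitrogen balance; all inputs in g N/day.
    Formulas follow the standard definitions given in the paper's Table 2 footnote."""
    retention = n_intake - (n_feces + n_urine)               # N retained in the body
    digestibility = 100 * (n_intake - n_feces) / n_intake    # apparent digestibility, %
    retention_eff = 100 * retention / n_intake               # % of N intake retained
    digestible_eff = 100 * retention / (n_intake - n_feces)  # % of digested N retained
    return retention, digestibility, retention_eff, digestible_eff

# Illustrative values only
r, d, re, de = nitrogen_balance(n_intake=60.0, n_feces=10.0, n_urine=20.0)
print(r, re, de)  # 30.0 50.0 60.0
```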
The rates of synthesis, breakdown and deposition of proteins in the whole organism were calculated using the formulas of [13], with the following notation: V1, protein synthesis (g/day); WF, nitrogen intake with feed (g/day); NF, 15N intake with feed; NO, natural enrichment (background, atom % excess); NM, 15N atom % excess in urine; NK, 15N atom % excess in feces; WM, nitrogen excreted in urine (g/day); WK, nitrogen excreted in feces (g/day); V2, protein breakdown (g/day); V3, protein deposition (g/day).
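The exact formula (1) is given in [13]. As an illustration of the classical continuous-dosing 15N end-product model on which such calculations are based, the quantities above can be combined as sketched below. This is a textbook-style sketch, not necessarily the study's exact expressions, and all numeric inputs are hypothetical.

```python
def whole_body_turnover(WF, WM, d15N, apE_urine):
    """Classical single end-product model at isotopic plateau (a sketch, not
    necessarily the paper's formula (1)):
      Q = total N flux (g N/day) = tracer dose rate / urinary N enrichment
      synthesis N  = Q - urinary N excretion (WM)
      breakdown N  = Q - N intake (WF)
    Protein rates are obtained as N rates * 6.25.
    d15N: 15N dose rate (g/day); apE_urine: atom % excess of urinary N."""
    Q = d15N / (apE_urine / 100.0)
    s_n = Q - WM                 # N channeled into protein synthesis
    b_n = Q - WF                 # N released by protein breakdown
    return {
        "flux": Q,
        "synthesis_protein": s_n * 6.25,
        "breakdown_protein": b_n * 6.25,
        "deposition_protein": (s_n - b_n) * 6.25,  # net gain = synthesis - breakdown
    }

# Hypothetical inputs: 60 g N/day intake, 20 g N/day urinary loss,
# 0.06 g 15N/day dose, 0.075 atom % excess in urinary N
res = whole_body_turnover(WF=60.0, WM=20.0, d15N=0.06, apE_urine=0.075)
print(round(res["flux"], 1))  # 80.0
```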
The data for all parameters were analyzed statistically by one-way ANOVA in SPSS 11.0. A value of P < 0.05 was considered statistically significant. Data are presented as means ± standard error of the mean (SEM).
Results of study and its discussion
The results of the physiological studies of nitrogen balance showed that feeding 20-hydroxyecdysone to pigs as part of the feed promotes more efficient use of nitrogenous feed substances compared with the control. In piglets of the experimental group, compared with the control, a decrease in urinary nitrogen excretion (by 26.8%, P < 0.05) was observed against the background of the same digestibility of the feed protein (Table 2). Ultimately, nitrogen deposition was 19.0% higher (P < 0.001) in piglets of the experimental group than in the control. At the same time, the utilization of both ingested and digested nitrogen was higher in piglets of the experimental group. Apparent nitrogen digestibility = 100% × (nitrogen intake − nitrogen in feces) / nitrogen intake. Nitrogen retention efficiency = 100% × nitrogen retention / nitrogen intake. Efficiency of digestible N utilization = 100% × nitrogen retention / (nitrogen intake − nitrogen in feces). n = 3 in the control group and 4 in the experimental group. * P < 0.05; ** P < 0.01; *** P < 0.001 by the U-criterion when compared with the control.
When studying the effect of 20-hydroxyecdysone on the intensity of body protein metabolism in piglets, a shift in the ratio of synthesis to breakdown toward increased biosynthetic processes was revealed. The intensity of body protein synthesis was higher in piglets receiving 20-hydroxyecdysone in the diet than in animals of the control group. Characteristically, along with the increased intensity of protein synthesis, there was also some increase in the rate of protein breakdown, and consequently of protein renewal, with a relative predominance of biosynthetic processes. This state of protein metabolism led to an increase in body protein level (protein stock) in piglets of the experimental group (Table 3). Notation: S, protein synthesis rate; B, protein breakdown rate; NPG, net protein gain expressed in nitrogen; NF, nitrogen flux; EUN, endogenous urinary nitrogen; W0.75, metabolic body weight. n = 3 in the control group and 4 in the experimental group. * P < 0.05; ** P < 0.02 by the U-criterion when compared with the control.
It has been found that various plant steroid compounds, phytoecdysteroids, enhance protein synthesis and activate Akt signaling similarly to IGF-I in cultured myocytes [15]. It is believed that their anabolic, adaptogenic effect, in particular that of 20-hydroxyecdysone, can be used in the nutrition of athletes [7]. In experiments on pigs, increased protein synthesis and deposition and muscle growth were found when 20-hydroxyecdysone was introduced into the diet [16]. Feeding these and other phytoecdysteroids reduces obesity in mice [11,16]. However, recent feeding studies have failed to identify distinct effects of 20-hydroxyecdysone on Akt or mTORC1 signaling in skeletal muscle [12]. This suggests that phytoecdysteroids may be involved in the regulation of long-term transcriptional changes in muscle protein breakdown, in contrast to the signaling mechanisms that regulate muscle protein synthesis [11,12].
Conclusion
Dietary 20-hydroxyecdysone supplementation may improve growth performance in growing pigs. Protein deposition is increased after dietary 20-hydroxyecdysone supplementation, and this increase is driven by an increased rate of protein synthesis.
|
v3-fos-license
|
2022-09-08T15:09:48.770Z
|
2022-09-05T00:00:00.000
|
252116939
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.africanentomology.com/article/download/12055/18898",
"pdf_hash": "7d689f7e57629c3fc3093652c5d98d5453fd1887",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:901",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "06f7c9fcdd91aa77a753d6ea37a40c48572ae528",
"year": 2022
}
|
pes2o/s2orc
|
Distribution and assemblage structure of blackflies in the western Aures Mountains, Algeria (Diptera: Simuliidae)
Besides their important ecological role in flowing waters, blackflies (Diptera: Simuliidae) may pose medical and veterinary risks. For seventeen months, we surveyed the blackflies of ten localities across the Aures Mountains, in the Saharan Atlas, Algeria, and recorded eight taxa (i.e. species, species groups or species complexes). High altitude sites were dominated by the Simulium ornatum (Meigen, 1818) group, whereas sites located on the southern slope of the Aures Mountains were occupied by the eurytopic Simulium velutinum (Santos Abreu, 1922) complex and the thermophilic, pollutant-tolerant Simulium ruficorne Macquart, 1838 ‘A’ morphotype. Co-inertia analysis was used to determine the relationship between a species’ abundance and habitat types. The co-inertia analysis revealed a likely co-structure between blackfly assemblages and measured environmental descriptors (water temperature, conductivity, current velocity, bed width, etc.) in sampled habitats. This confirmed the importance of altitude as a driver of blackfly distribution. Our results also showed that there has been an increase in anthropogenic pressures on the vulnerable freshwater biota of the Aures Mountains.
INTRODUCTION
Blackflies (Diptera: Simuliidae) are an important component of lotic ecosystems (Malmqvist et al. 2004). Their larvae and pupae develop in fresh, flowing water, with high levels of dissolved oxygen, while their adults are aerial. While a few species are entirely anthophilic, most adult blackfly females feed on birds, humans and other mammals (Crosskey 1990;Currie & Adler 2008). Consequently, blackflies may be a source of nuisance (Hansford & Ladle 1979;Tabatabaei et al. 2020;Sitarz et al. 2021), or act as vectors for pathogens, such as viruses, bacteria, protozoa and nematodes. These pathogens cause human onchocerciasis, eastern equine encephalitis, and other vector-borne diseases affecting mammals and birds (Brockhouse et al. 1993;Shelley & Coscarón 2001;Reeves & Nayduch 2002;Reeves et al. 2007; Barba et al. 2019). Therefore, the medical and veterinary importance of blackflies cannot be overstated (Adler et al. 2010;Watanabe 2014).
Blackflies are also considered the most diverse fauna of stream communities with more than 2401 species reported worldwide (Adler 2021). The Palearctic is considered the most species-rich biogeographic region with 700 recorded species (Currie & Adler 2008). In North Africa, 52 nominal species have been identified, with Morocco having the highest diversity (44 species), followed by Algeria (34 species), Tunisia (18 species), Libya (5 species) and Egypt (2 species) (Belqat et al. 2018).
The present study, focusing on the Simuliidae of the western part of the Aures Mountains, aims to survey this important family in a poorly explored region. We also used multivariate analysis to analyse the response of community composition to environmental conditions by testing for possible co-structure between measured habitat characteristics and blackfly assemblages, in line with the habitat template concept (Southwood 1977;Townsend & Hildrew 1994).
Study area
The Aures Massif, located at the eastern end of Algeria, is part of the Saharan Atlas Mountains (Figure 1). The north-east to south-west orientation of this mountain range has led to the development of many valleys along its alignment. The region covers three provinces, namely Batna, Khenchela and Biskra, over an area of 2529 km2. The climate in the Aures region varies from semi-arid, with cold winters, in the northern part (Batna and Khenchela) to arid, with temperate winters, in the southern part (Biskra). The study area also included a protected park, the Belezma National Park, which harbours the Atlas cedar Cedrus atlantica (Endl.) Manetti ex Carriere, a coniferous tree endemic to Morocco and Algeria.
The collection of blackflies was part of a comprehensive study of macroinvertebrates of the region, carried out over a period of 17 months, from April 2018 to August 2019 (Dambri et al. 2020).
Blackfly larvae and pupae were sampled monthly at ten localities (Table 1, Figure 1) using two methods. At each locality, an area of 100 m2 was kick-sampled by walking across all microhabitats, and samples were collected using a dip-net (25 cm diameter, 500 μm mesh size). In addition, we sampled a random set of ten cobbles with an average size of 10 cm and collected larvae and pupae using entomological forceps. Sampling covered small to medium-sized mountainous streams with different degrees of accessibility. An additional set of four localities, not easily accessible, was sampled occasionally (Table 1, Figure 1). Larvae and pupae were fixed in ethanol or Carnoy's solution, and identification was performed by Prof. Peter H. Adler (Clemson University).
Physicochemical sampling and environmental data
For each sampling event, we recorded the physical and chemical parameters of the water in situ using a tape measure (water depth and river bed width) and multi-probes: conductivity, total dissolved solids (TDS), water temperature, and pH were measured using an Adwa AD32 tester and a HANNA HI1271 pH electrode. Water samples were transported in a cooler and the remaining parameters (i.e. NO₃, NO₂, NH₄, CO₃, HCO₃, Cl and O₂) were measured within 48 hours in the laboratory. Water velocity was estimated using a floating cork stopper timed with a stopwatch. As recurrent droughts in intermittent streams resulted in missing data, only data (temperature, conductivity, TDS, etc.) recorded from November to February were presented and analysed.
Statistical analyses
A co-inertia analysis (CIA) was performed using the ade4 package (Dolédec & Chessel 1994;Dray et al. 2003) to test for co-structure between the blackfly assemblages and the measured environmental descriptors. The blackfly matrix was made up of the total abundance of each taxon at each site. Only regularly (monthly) sampled sites were included in the analysis. The vectorial correlation coefficient 'RV' of the CIA measured the overall correlation between the recorded taxa and the environmental descriptors. The RV ranged from 0 (all the taxa are independent of environmental variables) to 1 (perfect match). The significance of the RV coefficient was tested by performing a Monte-Carlo test (random permutation of the rows of both tables) (Dray et al. 2003). All statistical analyses were performed using R software version 4.0.5 (R Development Core Team 2021).
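The RV statistic and its permutation test can be sketched outside of ade4. The following is an illustrative Python version (not the ade4 implementation; matrix names and shapes are assumptions), operating on a sites × taxa abundance matrix and a matching sites × descriptors environment matrix:

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between two matrices with matched rows (sites).
    Columns are centred before computing cross-product traces."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sxy = X.T @ Y
    Sxx = X.T @ X
    Syy = Y.T @ Y
    return np.trace(Sxy @ Sxy.T) / np.sqrt(np.trace(Sxx @ Sxx) * np.trace(Syy @ Syy))

def rv_permutation_test(X, Y, n_perm=10_000, seed=0):
    """Monte-Carlo test: permute the rows of Y and count how often the
    permuted RV reaches the observed value (10 000 replicates, as in the study)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    Y = np.asarray(Y, float)
    observed = rv_coefficient(X, Y)
    hits = sum(
        rv_coefficient(X, Y[rng.permutation(len(Y))]) >= observed
        for _ in range(n_perm)
    )
    return observed, (hits + 1) / (n_perm + 1)
```

The RV is bounded between 0 and 1 by construction, since the numerator is a squared Frobenius norm of the cross-covariance matrix.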
RESULTS
A total of 479 specimens were identified and assigned to eight taxa (i.e. species, species groups or species complexes) during this study. The Simulium ornatum (Meigen, 1818) group was the most abundant and widespread taxon in the western Aures ( Figure 2a). The S. velutinum (Santos Abreu, 1922) complex was also abundant and widespread, but to a lesser extent than S. ornatum. Three sites (i.e. Nafla, Maafa and Ghoufi) had four taxa present and were the most species-rich localities ( Figure 2b).
Simulium (Simulium) ornatum (Meigen, 1818) group
The S. ornatum group was one of the two most common taxa in this study and was found at all the sites except M'Chouneche, Bouailef and El Kantra. The water temperature was high at the last two stations. The taxon tolerated a wide range of water temperatures, but seemed to avoid localities with high water conductivity and TDS (Table 2).
Prosimulium faurei Bertrand & Grenier, 1972
First record from the Aures Mountains. Our record from the Aures region is associated with a temporary stream (Maafa) at 932 m above mean sea level (AMSL). The substrate was sandy with cobbles and the water was clean with a low flow (0.24 m s⁻¹).
Simulium (Eusimulium) aureum (Fries, 1824) group
We recorded two larvae and one pupa from one small mountainous stream (Ravin Bleu) at 1335 m AMSL. The site was visited only once for reasons related to accessibility.
Simulium (Eusimulium) velutinum (Santos Abreu, 1922) complex
Our sampling showed that the S. velutinum complex had a wide distribution (i.e. Ghoufi, Maafa, Bouailef, Kassrou, Nafla, M'Chouneche, Hamla and Ravin Bleu) in the Aures region over a remarkable altitudinal range (i.e. 350-1700 m AMSL). Simulium velutinum s.l. has been shown to be made up of a complex of sibling species, based on the banding sequences of the polytene chromosomes from the larval salivary glands (Cherairia et al. 2014).
Simulium (Nevermannia) ruficorne Macquart, 1838 'A'
First record from the Aures Mountains. This taxon was collected from two sites (Bouailef and El Kantra) with a water temperature range of 17.87-27.45 °C (Table 2). The species seemed to tolerate high values of water conductivity and TDS (Table 2). We based the identification of morphoform 'A' on gill structure (Cherairia et al. 2014).
Simulium (Nevermannia) cryophilum (Rubtsov, 1959) complex
First record from the Aures Mountains. We recorded it at three localities (i.e. Ghoufi, Maafa, and Kassrou). Characteristically, the taxon seemed to avoid localities with high water temperatures, but appeared to tolerate high water conductivity values (Table 2).
Simulium (Wilhelmia) pseudequinum Séguy, 1921
We found it at two different localities (i.e. Ghoufi and Nafla) characterized by the presence of a large amount of macrophytes. In addition, the species seemed to tolerate high values of water conductivity and TDS (Table 2).

The CIA indicated the existence of a co-structure between the blackfly assemblages and the measured environmental descriptors (Figure 3a-f). The Monte-Carlo test using 10 000 replicates gave a marginally significant p-value (0.10) for the coefficient of vectorial correlation (RV = 0.42) for the CIA. This indicated a likely relationship between the distribution of blackflies and the measured environmental descriptors. Axis 1, representing 96% of the total variance, separated high altitude sites from the low altitude Saharan sites, located on the southern flank of the Saharan Atlas. The high altitude sites were well-oxygenated localities, with relatively deep water, dominated by taxa such as S. ornatum (Figure 4a). The low altitude sites were characterized by high water conductivity and temperature, and high loads of total dissolved solids, nitrates and nitrites (Figure 4b), and were dominated by taxa such as Simulium ruficorne 'A' and the S. velutinum complex. Of lesser importance (3.5% of total inertia), Axis 2 organised the Saharan localities along an increasing gradient of bed width.
DISCUSSION
In this study eight species, species complexes or groups were recorded in the western Aures, representing 23.5% of the known blackfly fauna of Algeria (Belqat et al. 2018). Four of the recorded taxa are new to the Aures [viz. Prosimulium faurei, Simulium (Nevermannia) ruficorne 'A', Simulium (Nevermannia) cryophilum complex and the Simulium (Simulium) variegatum group].
Prosimulium faurei was previously found in the Tafna watershed in north-western Algeria (Gagneur & Clergue-Gazeau 1988). In north-eastern Algeria, it has been recorded in the Seybouse Basin (Cherairia et al. 2014) and the El Kala region (Samraoui et al. 2021).
Ecology
Two taxa, S. ornatum and S. velutinum, were the most widespread and abundant, dominating all other taxa recorded in the Aures. This result is similar to that obtained for the El Kala region (Samraoui et al. 2021) and the Tafna River Basin (Boudghane-Bendiouis et al. 2014), but differs from that obtained for the Seybouse River, where S. pseudequinum was the dominant taxon (Cherairia et al. 2014). Simulium pseudequinum, confined to only two sites in the Aures, is often a widespread lowland species, inhabiting streams with a large bed width and high water conductivity, both in the Maghreb (Boudghane-Bendiouis et al. 2014;Cherairia et al. 2014;Samraoui et al. 2021) and the Iberian Peninsula (Gallardo-Mayenco & Toja 2002). The Seybouse River Basin may offer many opportunities for S. pseudequinum, which favours low-elevation sites with a large river bed width and high water conductivity.
Likewise, larvae of S. velutinum were often dominant in the Aures, occupying lowland sites, a result consistent with records found elsewhere in Algeria (Boudghane-Bendiouis et al. 2014;Samraoui et al. 2021). This taxon is also known to tolerate waters with a high load of organic matter (Gallardo-Mayenco & Toja 2002). In fact, the large ecological amplitude of S. velutinum indicates the possible presence of cryptic species (Adler et al. 2015).
In contrast, S. ornatum presents a different case, and it is worth noting that this taxon (group) may represent multiple cryptic species (Belqat et al. 2018) widely distributed in the western Mediterranean. Simulium ornatum is the most common and most frequently recorded species in Austrian streams and rivers (Ofenböck et al. 2002). Moreover, the ecology of this taxon seems to vary geographically. In the El Kala region, it is dominant and mainly recorded at downstream sites (Samraoui et al. 2021), while the taxon is clearly crenophilic or rhithrophilic in central and western Algeria and Morocco (Gagneur & Clergue-Gazeau 1988;Giudicelli et al. 2000;Boudghane-Bendiouis et al. 2014; this study). Elsewhere, S. ornatum was found in Pyrenean streams with slow currents, dominated by aquatic vegetation (Vinçon & Clergue-Gazeau 1993). Once again, these contradictory results would be consistent with the presence of a cryptic species complex.
Despite the low statistical power of the performed CIA, due to a small sample size, the results clearly indicate the importance of altitude in driving blackfly distribution in the Aures. This result is congruent with numerous other studies in Algeria (Boudghane-Bendiouis et al. 2014;Samraoui et al. 2021) and elsewhere (Giudicelli & Dakki 1984;Vinçon & Clergue-Gazeau 1993;Ya'cob et al. 2016). Undoubtedly, various environmental factors, such as temperature, dissolved oxygen, water flow, water depth, and stream width are correlated with altitude.
A good knowledge of the status and ecology of freshwater biodiversity in the Maghreb is urgently needed, as climate change, interacting with other anthropogenic stressors, is rapidly disturbing freshwater communities. These disturbances are leading to a precipitous decline in both the diversity and abundance of freshwater biota (Benslimane et al. 2019), with conditions favouring widely distributed, thermophilic species (Morghad et al. 2019). High loads of stressors such as nitrates and nitrites, common chemical contaminants that may negatively affect human health, are now routinely found in Maghrebian streams and rivers (Abdesselam et al. 2013;Aghzar et al. 2002). These groundwater contaminants are often associated with land use and geological factors, with high nitrate levels possibly resulting from the use of fertilizers in apple orchards in the Aures. There is thus a need to monitor freshwater biodiversity, including taxa such as blackflies that may pose medical and veterinary risks.
Uniaxial Cyclic Tensile Stretching at 8% Strain Exclusively Promotes Tenogenic Differentiation of Human Bone Marrow-Derived Mesenchymal Stromal Cells
The present study was conducted to establish the amount of mechanical strain (uniaxial cyclic stretching) required to provide optimal tenogenic differentiation expression in human mesenchymal stromal cells (hMSCs) in vitro, in view of its potential application for tendon maintenance and regeneration. Methods. In the present study, hMSCs were subjected to 1 Hz uniaxial cyclic stretching for 6, 24, 48, and 72 hours; and were compared to unstretched cells. Changes in cell morphology were observed under light and atomic force microscopy. The tenogenic, osteogenic, adipogenic, and chondrogenic differentiation potential of hMSCs were evaluated using biochemical assays, extracellular matrix expressions, and selected mesenchyme gene expression markers; and were compared to primary tenocytes. Results. Cells subjected to loading displayed cytoskeletal coarsening, longer actin stress fiber, and higher cell stiffness as early as 6 hours. At 8% and 12% strains, an increase in collagen I, collagen III, fibronectin, and N-cadherin production was observed. Tenogenic gene expressions were highly expressed (p < 0.05) at 8% (highest) and 12%, both comparable to tenocytes. In contrast, the osteoblastic, chondrogenic, and adipogenic marker genes appeared to be downregulated. Conclusion. Our study suggests that mechanical loading at 8% strain and 1 Hz provides exclusive tenogenic differentiation; and produced comparable protein and gene expression to primary tenocytes.
Introduction
Bone marrow-derived mesenchymal stromal cells (MSCs) have the ability to undergo multilineage differentiation and, when introduced into damaged tendon, have been shown to result in superior repair outcomes [1,2]. Despite demonstrating good efficacy, there have been concerns that undifferentiated cells may possibly progress towards unwanted cell lineages when transplanted into tissues, resulting in patient morbidity [3,4]. An example to demonstrate such phenomenon would be in the formation of osteoblastic cells when human MSCs (hMSCs) are transplanted into the cartilage tissue [5]. It has been suggested that lineage-committed or predifferentiated hMSCs may be the answer to this problem [6]. Several methods can be employed to direct hMSCs towards a particular lineage. In the past, these have included hormonal, ionic, and environmental manipulation [7]. However, one of the mechanisms that can be readily used on these cells but not often described in literature is mechanical signalling [8].
It is suggested that the ability of cells to respond to mechanical stimuli is controlled by a series of mechanosensitive receptors or structures that sense and convert mechanical signals into biochemical signalling events [9]. This process, commonly known as mechanotransduction, translates mechanical cues that are perceived from the environment into intracellular signals. This ultimately regulates the complex processes involved in cell proliferation and differentiation [10]. It has been described that during this process, the complex interaction of signals generated from the binding of integrins to signalling molecules, the opening of stretch sensitive ion channels, and the resultant cytoskeletal deformation are simultaneously activated [11]. However, the order of sequence of these events, as well as the relationship between the activated pathways and outcome, remains to be rationalized [12,13].
Although previous works have indicated that mechanical stimulation in general guides MSC differentiation in different ways, these studies have predominantly involved cells other than those responsible for tendon or ligament homeostasis, such as osteoblasts, neuron-like cells, and chondrogenic cells [14][15][16]. In addition, there appear to be very few studies investigating the effects of cyclic uniaxial tensile loading on progenitor cells, although this stimulus is physiologically relevant to the musculoskeletal system. It is worth noting that this stimulus is probably the single most important signal regulating the proliferation and functions of both ligament and tendon cells [17,18]. However, we also need to be mindful that, because of their multipotential ability, stimulating stem cells mechanically in an inappropriate manner can result in undesirable outcomes, as previously mentioned. It is therefore paramount that the characteristics of the applied mechanical loading be established so as to eliminate these unwanted outcomes. Unfortunately, studies in this area appear lacking, as previous work has mainly focused on a narrow range of frequencies and strain rates that do not mimic the conditions observed during physiological loading [19][20][21][22].
In order to establish these characteristics, the present study was conducted to examine the effects of uniaxial cyclic stretching of different durations and strain rates on hMSCs. The focus of this study is to determine the mesenchymal lineage differentiation potential of these cells using gene expression and extracellular matrix (ECM) production related to mesenchymal lineage-specific markers. These were also compared to tenocytes in order to determine whether the tenogenic expression potential of hMSCs subjected to these loading conditions was comparable to that of native tendon cells, which would indicate that the progenitor cells had undergone tenogenic differentiation. Tenogenic differentiation is defined as cells exhibiting tenocyte-lineage marker genes at both mRNA and protein levels [23]. Amongst the genes identified in the literature are scleraxis, tenomodulin, tenascin-C, collagen type I, collagen type III, and decorin [19,[23][24][25]. Protein expressions for tenogenic differentiation, on the other hand, are less specific and not well described; however, proteins frequently cited as relevant to the tenogenic differentiation process include collagen I and collagen III [23,26].
We also hypothesized that extracellular matrix remodelling and the differentiation of hMSCs towards a particular cell lineage depend on the degree of tensile force; the morphology and stiffness of the cells were therefore also investigated. Since the focus of this study is the tenogenic differentiation potential of cyclically loaded hMSCs, the expression of the tenogenic genes and proteins mentioned above was investigated. It is hoped that by quantifying the effects of mechanical stretch on hMSCs, we may better understand the mechanical characteristics that govern tendon homeostasis, thus enabling future potential therapies for tendons and ligaments.
Materials and Methods
All experimental protocols were approved by the University Malaya Medical Centre Institutional Review Board (Reference no: 369.19) and performed in accordance with the guidelines of the Medical Ethics Committee of the University Malaya Medical Centre.
Isolation and Culture of Human Bone Marrow-Derived
MSCs. To isolate bone marrow-derived MSCs, the bone marrow was aspirated from the femoral canal of 10 patients/donors undergoing orthopaedic-related surgeries such as total joint arthroplasty in the University Malaya Medical Centre. Each bone marrow sample was kept on ice throughout the transportation to the laboratory and processed for cell isolation as described in our previous publication [27]. The cells were subcultured until passage 2 to be used in our experiments.
To determine whether the cells obtained were hMSCs, various tests including flow cytometry analysis for specific cell surface markers, cell morphological images, and the ability of the isolated cells to undergo trilineage differentiation were conducted. The methods used in this study are described in our previous publications [27,28]. The isolated cells appeared to conform to the characteristics expected of MSCs (Figure 1), i.e., (1) spindle-shaped plastic adherent features; (2) positive markers for CD29, CD44, CD73, CD90, and CD105 while being devoid of CD14, CD34, CD45, and HLA-DR [28]; and (3) able to undergo trilineage differentiation, namely, chondrogenic, osteogenic, and adipogenic differentiation.
2.2. Isolation and Culture of Human Tenocytes. Human primary tenocytes were isolated from hamstring tendons of adult donors who underwent surgery for joint arthroplasty. Tendon tissues were harvested to the required size by the operating surgeon, transferred aseptically into containers and immersed in saline solution. Once the tendons were harvested, cell isolation was immediately performed using methods modified from the study of Zhang and Wang [29]. Briefly, the tendons were minced into pieces of approximately 1 mm³ under sterile conditions, and phosphate buffered saline (PBS; Gibco, USA) was added. Subsequently, 0.4 mg/mL type I collagenase was added to the mixture, which was incubated at 37°C for 2 h to allow enzymatic digestion. After digestion, the suspension was centrifuged at 1800 rpm for 5 min to remove the collagenase solution, and the pellet was washed twice with PBS by centrifugation. The pellet was then resuspended in 1 mL of DMEM high glucose (4.5 g/L glucose) supplemented with 10% fetal bovine serum (FBS), 1% penicillin-streptomycin, and 1% GlutaMAX™-I (Gibco, USA), and transferred into a T25 flask containing 5 mL of culture medium. Cultures were incubated at 37°C in a 5% CO₂ incubator for 24 h. The digested tissues were then removed from the cell culture flask and discarded. The culture medium was changed every third day until 80-85% confluency, when cells were subcultured using trypsin digestion. These primary native human tenocyte cultures (passage 3) were used as positive controls in the subsequent experiments.
2.3. Application of Cyclic Uniaxial Tensile Strain. A commercial loading device (STREX, Japan) fitted with elastic silicone chambers was used to conduct experiments that determine the effect of cyclic uniaxial strain on hMSCs. hMSCs were seeded on the collagen type I (Sigma, USA)-coated silicone chamber at a density of 10⁴/cm² and allowed to settle at 37°C in complete growth medium for 48 h. The medium was then changed to 1% FBS for 24 h and replaced with complete growth medium before the chambers were assembled into the uniaxial strain device. Control cells were treated similarly but were not subjected to cyclic stimulation. The medium and cells were harvested after 6, 24, 48, and 72 h of cyclic loading for downstream analyses, which included (1) biochemical assay, (2) immunostaining, (3) immunophenotyping, (4) topography imaging and elasticity measurement, and (5) gene expression assay.
Cellular Morphology by Microscopy.
Phase-contrast microscopic images of unstrained and strained hMSCs were obtained (Olympus, Japan) in at least four randomly selected sites from our visual field. To observe the effect of cyclic loading on cytoskeletal actin arrangements, hMSCs at all conditions were stained with fluorescent phallotoxins (Molecular Probes, Oregon, USA) for 30 min and then the nucleus stained with Hoechst (Molecular Probes, Oregon, USA) for 10 min in the dark. Fluorescence was recorded using a laser scanning confocal attachment (Leica TCS SP5 II, Germany) and measured by LAS AF image software (Leica, Germany). Images of unstrained MSCs on silicone membrane served as control.
2.5. Quantification of ECM Components. At the end of each time point of the experiment, the total amounts of collagen, sulfated glycosaminoglycan (sGAG), and elastin in the resulting samples were determined using the Sircol™ Collagen assay kit, Blyscan™ sGAG assay kit, and Fastin™ Elastin assay kit, respectively. The technique used in each measurement followed the manufacturer's (Biocolor, UK) protocol. These kits use quantitative dye-binding methods to determine the total quantity of the respective ECM component released into the medium. An enzyme immunoassay kit (Chondrex Inc., USA) was used to measure the levels of type I collagen in strained hMSC lysate (1 Hz, 8%) following the manufacturer's instructions. The concentration of collagen type I was obtained by measuring the absorbance at 490 nm on a microplate reader.
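The final absorbance-to-concentration step common to these dye-binding and immunoassay kits can be illustrated with a simple linear standard curve. This is a hedged sketch only; the standard values and the assumption of a linear working range are hypothetical, not taken from the kit protocols:

```python
import numpy as np

def concentration_from_absorbance(std_conc, std_abs, sample_abs):
    """Fit a linear standard curve (absorbance vs. known standard
    concentrations) and invert it to estimate unknown sample concentrations."""
    slope, intercept = np.polyfit(std_conc, std_abs, 1)
    return (np.asarray(sample_abs, float) - intercept) / slope
```

For the collagen type I ELISA, `std_abs` would be the 490 nm readings of the kit standards and `sample_abs` the readings of the strained hMSC lysates.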
2.6. Immunocytochemical and Fluorescent Immunostaining for ECM Analysis. Membranes with hMSCs subjected to uniaxial straining or kept in unstrained conditions were rinsed with PBS, followed by fixation in methanol for 20 min. After rinsing with Tris-buffered saline (Dako, Denmark), peroxidase block was applied for 5 min to reduce nonspecific background signalling. Cells were then incubated with primary antibodies, which included rabbit anti-collagen type I, rabbit anti-collagen type II, or rat anti-collagen type III (Calbiochem-Daiichi Fine Chemical Co., Japan), diluted at 1 : 100 for 30 min. The cells were then incubated with streptavidin-peroxidase secondary antibody (Dako, Denmark) for 30 min. Finally, the collagens in the cells were visualized by reaction with diaminobenzidine (Dako, Denmark).
For direct visualization of the adhesion molecules fibronectin and N-cadherin, cells were fixed with 4% paraformaldehyde and permeabilized with −20°C acetone. Cells were then incubated with 1% bovine serum albumin to block nonspecific binding of antibodies, before being incubated with the primary antibody, anti-fibronectin (Abcam, UK), diluted 1 : 300 for 1 h. The primary antibody was then detected by a secondary antibody specific to rabbit IgG (Abcam, UK), diluted 1 : 600 for 1 h. Hoechst staining was performed at the end of the staining process and samples were examined under a laser scanning confocal microscope (Leica TCS SP5 II, Germany).
Stimulated Cell Surface Antigen Analysis by a
Fluorescence-Activated Cell Sorter (FACS). Antibodies against the human antigen, CD44, CD73, CD90, and CD105 (BD Biosciences, USA), were used to characterize the surface antigen expressions of stretched hMSCs. Briefly, the loaded cells were resuspended in 100 μL of PBS and incubated with fluorescein isothiocyanate-(FITC-) or phycoerythrin-(PE-) conjugated antibodies in the dark for 15 min at room temperature. The fluorescence intensity of the cells was evaluated using a flow cytometer (BD FACS Cantor II, BD Biosciences, USA). Data were analysed using CELLQUEST software (BD Sciences, USA). The presence or absence of staining in cells was determined by comparing strained cells to the matched unstrained control.
Histologic Assessment of Differentiation after Mechanical
Stimulation. The presence of bone-forming nodules was used to determine the occurrence of osteoblast differentiation. This was further assessed using Alizarin Red S dye (Sigma, USA), which stains calcium phosphate deposits. The accumulation of lipid droplets was used to denote adipocyte differentiation. It was determined by incubating paraformaldehyde-fixed cells with 60% isopropanol and followed by freshly prepared Oil Red O solution (Sigma, USA). Unstrained samples were treated as controls. All samples were then captured using a light microscope (Nikon Eclipse TE2000-S, Japan).
Atomic Force Microscopy Measurement of Young's
Modulus. Atomic force microscopy (AFM) images were obtained by scanning the cell surface under ambient conditions using an AFM (Bruker Nano, USA) set to PeakForce QNM mode. The AFM measurements were obtained using ScanAsyst-air probes. The spring constant (nominal 0.4 N/m) and deflection sensitivity were calibrated first, but not the tip radius (the nominal value of 3.5 nm was used). AFM images were collected from each sample at random spots (at least five areas per sample). The quantitative mechanical data were obtained by measuring the DMT modulus (Pa) using Bruker software (NanoScope Analysis). To obtain Young's modulus, the retract curve was fitted using the Derjaguin-Muller-Toporov (DMT) model [30].
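For illustration, the DMT retract-curve fit that yields Young's modulus can be sketched as follows. This is not the NanoScope Analysis implementation; the Poisson ratio, initial guesses, and the rigid-tip conversion from reduced to Young's modulus are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

R_TIP = 3.5e-9   # nominal tip radius (m), as used in the study
NU = 0.5         # assumed Poisson ratio for soft cells (hypothetical)

def dmt_force(delta, e_reduced, f_adh):
    """DMT contact model: F = (4/3) * E_r * sqrt(R) * delta^(3/2) + F_adh,
    where delta is the indentation depth and F_adh the adhesion force."""
    return (4.0 / 3.0) * e_reduced * np.sqrt(R_TIP) * delta ** 1.5 + f_adh

def fit_dmt(delta, force):
    """Fit reduced modulus and adhesion force from a retract curve, then
    convert to Young's modulus assuming a rigid tip: E = E_r * (1 - nu^2)."""
    (e_reduced, f_adh), _ = curve_fit(dmt_force, delta, force, p0=(1e4, 0.0))
    return e_reduced * (1.0 - NU ** 2), f_adh
```

Since the model is linear in both fitted parameters, the least-squares fit converges quickly even with rough initial guesses.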
2.10. Multiplex Gene Expression Assay. Total RNA was extracted from unstrained and strained hMSCs using the RNeasy mini kit (Qiagen, USA). The purity and concentration of the RNA were assessed by determining the absorbance ratio measured at the 260 and 280 nm wave bands. RNA integrity was assessed by visualizing 18S and 28S rRNA bands on formaldehyde-agarose gels. Only high-quality samples were selected for microsphere-based multiplex-branched DNA downstream analysis. The mRNA expression of mesenchymal lineages (Table 1) was quantified by the QuantiGene 2.0 Plex assay (Panomics/Affymetrix Inc., USA). Individual bead-based oligonucleotide probe sets specific for each gene examined were developed by the manufacturer (the 2.0 plex set 12082). In this assay, each lysate was measured in triplicate wells. Controls were also included for genomic DNA contamination, RNA quality, and general assay performance. The housekeeping gene was PGK1 (phosphoglycerate kinase 1), previously validated as the best housekeeping gene for accurate gene expression analysis in our study.
2.11. Statistical Analysis. The assays were carried out with a minimum of technical triplicates (n = 3) per experimental run, using six independent samples from different donors (N = 6) for each experimental group. Data were presented as mean ± standard deviation (SD). For the Young's modulus experiment, Student's t-test was carried out to compare differences in mean values. For the other experiments, statistical significance was analysed by one-way analysis of variance (ANOVA) with the least significant difference (LSD) post hoc test. A confidence level of 95% (p < 0.05) was chosen for determining statistical significance, using SPSS 15.0 software (SPSS Inc., USA).
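The analysis pipeline described above maps directly onto SciPy. The sketch below is illustrative, not the SPSS procedure; the LSD step is shown as unadjusted pairwise t-tests following a significant ANOVA, and the group names are placeholders:

```python
import itertools
import numpy as np
from scipy import stats

ALPHA = 0.05  # 95% confidence level, as used in the study

def two_group_test(a, b):
    """Student's t-test, e.g. Young's modulus of strained vs unstrained cells."""
    _, p = stats.ttest_ind(a, b)
    return bool(p < ALPHA)

def anova_with_lsd(groups):
    """One-way ANOVA across strain groups (dict of name -> measurements);
    if significant, follow with unadjusted pairwise t-tests (Fisher's LSD)."""
    _, p = stats.f_oneway(*groups.values())
    pairs = {}
    if p < ALPHA:
        for (na, a), (nb, b) in itertools.combinations(groups.items(), 2):
            pairs[(na, nb)] = bool(stats.ttest_ind(a, b)[1] < ALPHA)
    return bool(p < ALPHA), pairs
```

Running the pairwise LSD comparisons only after a significant omnibus ANOVA mirrors the protected-LSD convention.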
Uniaxial Mechanical Strain Induces MSC Alignment
Perpendicular to the Direction of Stretching. To determine the effects of uniaxial cyclic strain on cell morphology and organization, hMSCs were exposed to uniaxial strain under predetermined experimental conditions. The degree of cell responsiveness varied with strain magnitude and duration (Figure 2(a)). Cells exposed to the highest strain magnitude (12%) aligned themselves faster than cells at other strain rates. After 72 h, cells under cyclic strain were aligned perpendicular to the direction of strain and appeared more elongated and slender in shape, while unstrained cells remained randomly oriented.
Confocal images showed the reorganization of actin filaments perpendicular to the direction of strain, whilst actin filaments in unstrained cells remained randomly organized. Stained actin filaments were also denser in the stimulated hMSCs compared to the nonstimulated groups (Figure 2(c)). hMSCs under 8% uniaxial strain at 1 Hz (Figure 2(b)) developed a spindle shape similar to that of tenocytes in vitro. All these results indicated that cytoskeletal development was associated with strain magnitude.
Uniaxial Tensile Loading Enhances Collagen and Elastin
Production but Not GAG. Total collagen and elastin production appeared to be influenced by the strain magnitude. Our results showed that uniaxial stretching increased collagen production (Figure 3(a)), with the exception of the 4% strained group. Higher collagen production was measured as early as 6 h in the 12% strained group. For the 8% strained group, collagen production was enhanced significantly only after 48 h, and was close to the collagen content of tenocytes (ratio of human tenocytes/unstrained hMSCs = 1.43, graph not shown). Compared to collagen, elastin was increased only after 72 h in the higher strained groups (Figure 3(c)). However, no enhancement of GAG production was observed in any of the strained groups (Figure 3(b)).
Since collagen type I was reported to be abundant in tendon, ligament, and muscle cells, the 8% strained cells at 1 Hz were further tested using ELISA assay. The results showed that the collagen type I level in medium was increased in mechanically stimulated cells as compared to unstrained cells. The content of collagen type I increased with the duration of stretching (Figure 3(d)).
Mechanical Stimulation Promotes Collagen Type I, Collagen Type III, Fibronectin, and N-Cadherin Expressions.
Immunocytochemical assay showed that the uniaxial cyclic straining promoted the synthesis of collagen type I in MSCs.
In the unstrained control group, there was only a light brown collagen staining in the cytoplasm, while a more intense staining was observed in the 72 h strained group for collagen type I (Figure 4(a)). This was in line with the result of collagen type I obtained from ELISA. Collagen I and collagen III staining showed positive protein expression on both unstrained and strained hMSCs but denser in strained cells especially in the 8% and 12% groups. In contrast, collagen II was not expressed when hMSCs were stretched. These results appear comparable to the level of collagen expressed from primary tenocytes. When unstretched, fibronectin was arranged in random web-like structures, which distributed mainly at the cell periphery. The peripheral fibronectin staining appears to be upregulated when cells are stretched. Fibronectin fibril formation also appears to be enhanced with the increase in strain magnitude (Figure 4(b)). Furthermore, unstimulated or unstretched cells appeared to have thin fibronectin fibrils clustered and distributed throughout the entire basal surface of the cell, while cells exposed to 72 h at 8% and 12% uniaxial stretching appeared to form thicker fibronectin fibrils and to have an observable increase in fibronectin fluorescence intensity (Figure 4(c)). To view cell-cell contacts after stretching, we found that the expression level of N-cadherin was higher on strained cells (Figure 4(b)). However, this level of expression was lower in the 12% strained group.
Mechanical Stretching Induces the Alteration in hMSC
Surface Antigen Expression. The expression of the CD markers in hMSCs was positive in nonstimulated cells on silicone chambers, as in hMSCs cultured in plastic culture flasks. After 72 h of cyclic loading, CD marker expression in 4% strained cells was comparable to that of unstrained cells. However, when subjected to 8% and 12% strains, the expression of CD markers was reduced, suggesting that appropriate levels of mechanical stretch may induce alterations in MSC surface antigen expression (Figure 5(a)). CD44 and CD105 were significantly reduced in both the 8% and 12% strained groups, while CD73 and CD90 were significantly reduced at 8% and 12% strains, respectively (Figure 5).
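A minimal sketch of how a percent-positive call for a CD marker can be derived from single-cell fluorescence values is shown below. The thresholding rule (99th percentile of an isotype control) and all intensity values are illustrative assumptions, not the gating strategy reported in the paper.

```python
# Sketch: call a cell "positive" for a CD marker when its fluorescence
# exceeds a cutoff set from the isotype control, then report the
# percent-positive fraction. All values are synthetic.

def percentile(values, pct):
    """Nearest-rank percentile of a list of values."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, int(round(pct / 100 * (len(s) - 1)))))
    return s[k]

def percent_positive(sample, isotype, pct=99.0):
    """Percent of sample cells brighter than the isotype-control cutoff."""
    cutoff = percentile(isotype, pct)
    return 100.0 * sum(1 for v in sample if v > cutoff) / len(sample)

# Hypothetical intensities: isotype control vs a stained, strained sample
isotype = list(range(100))
stained = [99] * 50 + [0] * 50
print(percent_positive(stained, isotype))  # 50.0
```

Comparing this percentage between unstrained and strained samples would reproduce the kind of reduction described for CD44, CD73, CD90, and CD105.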
Topographical Changes Observed in Mechanically
Stimulated Cells. The changes in cell topography of unstrained and strained hMSCs were analysed by AFM. Topographical images were obtained in both height and deflection channels (Figure 6(a)). AFM analysis revealed that strained cells appeared elongated, with spindle-like morphology and microfilament bundles running parallel to their long axes, while unstrained cells appeared large and flat. Height images showed a larger height scale for strained cells than for unstrained cells. This was apparently related to the thicker actin stress fibers of the strained cells, which could be visualized in detail in the deflection channel. In unstrained cells, the deflection image revealed the fine cytoskeletal structure (presumably actin) just under the cell membrane in detail. This fine cytoskeletal structure began integrating when mechanical stimulation was applied, and the cytoskeleton of the stimulated cells became more pronounced. This effect was more evident at higher strain magnitudes, making the hMSCs comparable to tenocytes.

Figure 3: Biochemical analysis of MSCs subjected to various mechanical stimuli for different durations of stimulation. Content of (a) total collagen, (b) GAG, and (c) elastin released into the medium by strained cells was measured to determine the total quantity of each ECM component. (d) The level of collagen type I in the medium was measured by ELISA. ECM expression ratios were calculated by normalizing to the corresponding unstrained groups (indicated as 1). Significance (p < 0.05) is represented by * compared to unstrained. N = 6, n = 3, error bars ± SD.
Elasticity measurements (Young's modulus) were performed on the cytoskeletal regions surrounding the nuclei. Figure 6(b) shows the average Young's modulus of fixed unstrained and strained hMSCs from 3 independent cultures, with 5 different areas each. The Young's modulus values of the strained hMSC groups were greater than those of the unstrained groups, with a significant increase observed in the 8% and 12% strained groups. These results indicate that as strain magnitude increases, the Young's modulus, and therefore the stiffness, of the hMSC cytoskeleton increases. The unstrained hMSCs were supple compared to strained hMSCs, especially those in the 8% strained group.
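For readers unfamiliar with AFM elasticity measurements, a Young's modulus is typically extracted from each force-indentation curve via a contact model. The paper does not state which model or tip geometry was used, so the spherical Hertz model, tip radius, Poisson ratio, and force/indentation values below are all illustrative assumptions.

```python
import math

# Sketch: Young's modulus from a single AFM force-indentation point using
# the spherical-tip Hertz model,
#   F = (4/3) * (E / (1 - nu^2)) * sqrt(R) * delta^(3/2).
# Tip radius, Poisson ratio, and the force/indentation values are
# hypothetical, not taken from the study.

def hertz_modulus(force_N, indentation_m, tip_radius_m, poisson=0.5):
    """Solve the spherical Hertz model for E (in Pa)."""
    return 0.75 * force_N * (1 - poisson**2) / (
        math.sqrt(tip_radius_m) * indentation_m**1.5)

# Hypothetical: 1 nN force at 500 nm indentation with a 20 nm radius tip
E = hertz_modulus(1e-9, 500e-9, 20e-9)
print(f"{E / 1000:.2f} kPa")
```

Averaging such per-curve moduli over the 3 cultures and 5 areas per cell would yield the group means plotted in Figure 6(b).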
Mechanical Stimulation
Influences the Expression of MMP3 and PRR16. The mRNA expression of PRR16, an indicator of stem cell differentiation, in cells subjected to mechanical loading is shown in Figure 7(a). At 1 Hz stretching, downregulation of the PRR16 gene was noted in both the 8% and 12% strained groups. This effect was more obvious after the cells were stretched for a longer period. These results occur in parallel with the reduction in expressed CD markers described previously. Although PRR16 was downregulated, the mRNA expression of MMP3 was upregulated at 8% strain (Figure 7(a)). This stimulatory effect on MMP3 mRNA expression was not obvious in the 12% strained group after 48 h.
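The relative mRNA expression values reported here and in Figure 7 are the kind of output produced by the standard 2^-ΔΔCt method, in which each target gene is normalized to a reference gene and then to the unstrained control (set to 1). The Ct values below are made up for illustration; the paper does not report raw Ct data.

```python
# Sketch: the 2^-ΔΔCt method for expressing qRT-PCR results as fold change
# relative to an unstrained control. Ct values are hypothetical.

def fold_change_ddct(ct_target_s, ct_ref_s, ct_target_u, ct_ref_u):
    """2^-ΔΔCt: s = strained sample, u = unstrained control."""
    d_ct_s = ct_target_s - ct_ref_s   # normalize to reference gene
    d_ct_u = ct_target_u - ct_ref_u
    return 2.0 ** -(d_ct_s - d_ct_u)

# Hypothetical Ct values: a target gene vs a housekeeping gene
print(fold_change_ddct(24.0, 18.0, 26.0, 18.0))  # 4.0
```

A value above 1 corresponds to upregulation relative to the unstrained group, below 1 to downregulation.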
High Mechanical Strain Upregulated Genes for Macromolecular Components of ECM and Induced
Differentiation Markers for Tendon-Like Cells. Uniaxial strain regulated matrix remodelling, as observed from the increasing levels of COL1 and COL3 expression, in a linear fashion parallel to the amount of strain (Figure 7(b)). A significant increase was induced by strains of 8% and 12%, but this upregulation was not significant for the 4% strained group. The expression of COL3 showed a pattern similar to that of COL1, but the increase was slightly higher than that of COL1 at 8% strain (at 24 h). DCN expression was significantly upregulated at 8% and 12% strains (>24 h and 48 h, respectively). The differentiation of hMSCs towards tendon-like cells was further examined by measuring the expression of several genes (Figure 7(c)). The results demonstrated that tenogenic marker (TNC, SCX, and TNMD) expression was upregulated in all groups. However, this increase was significant only in the 8% and 12% strained groups, most notably in the 8% group after 24 h, i.e., the condition closest to the gene expression of tenocytes (DCN = 1.50

Figure 6: Comparison of cell surface topography between unstrained hMSCs, strained hMSCs, and tenocytes, visualized by AFM. (a) Representative AFM height and deflection scans of unstrained hMSCs; 4%, 8%, and 12% strained hMSCs; and tenocytes. In height images, brighter colour indicates greater height above the substrate. In deflection images, the detailed structure of the presumed stress fibers can be observed in the different cell groups. The direction of uniaxial strain is indicated by the red arrow. (b) Young's modulus of the cytoskeleton of cells subjected to 4%, 8%, or 12% cyclic stretching for 72 h as indicated. The ratio was calculated by normalizing to the corresponding unstrained groups (indicated as 1). Statistical significance (p < 0.05) is represented by * relative to the unstrained group. n = 3, error bars ± SD.

Gene expression levels of SCX returned to the basal level of the unstrained group, suggesting that the observed increase in gene expression was transient.
Uniaxial Mechanical Strain Did Not Induce Chondrogenic, Adipogenic, and Osteogenic Differentiation Markers.
To determine the global differentiation responses of hMSCs subjected to uniaxial mechanical strain, and to ascertain the possible expression of nontendon differentiation markers, the expression of nontendon genes was also investigated in this study. These included gene markers for bone, cartilage, and fat. We found that at 4% strain, osteogenic genes (RUNX2, ALP, and OCN) were transiently upregulated (Figure 7(d)). However, at 8% and 12% strains, these genes were downregulated, suggesting that osteoblastic differentiation is transiently enhanced only at low strain levels. Consistent with our immunostaining results, uniaxial strain did not increase COL2 (Figure 7(e)) or PPARG (Figure 7(f)), genes related to chondrogenesis and adipogenesis, in these progenitor cells. Several molecules involved in chondrogenesis (i.e., SOX9 and COMP) were influenced by changes in strain magnitude and duration of cyclic stretching (Figure 7(e)). The SOX9 gene was downregulated when uniaxial strain was applied, although at 12% a transient increase was observed at the early stages of stretching (6 h) but not thereafter. In contrast, COMP was upregulated in the 12% strained group at 72 h. One reason for this may be that COMP is not a gene specific to chondrogenesis and can also be found in tendon cells, as observed in other studies [16]. Uniaxial cyclic stimulation also transiently increased the smooth muscle contractile marker TAGLN at 12% strain (Figure 7(f)).
Although the evidence from this study suggests that a transient increase in nontendon-related genes can occur when hMSCs are subjected to cyclic loading, the functional significance of these changes is likely minor, since these genes were expressed only at low levels and for short durations throughout our experiments. We can therefore conclude that uniaxial cyclic loading generally results in tenogenic differentiation, with no significant commitment to other downstream musculoskeletal lineages.
Discussion
Our current study demonstrates that uniaxial stretching over a period of time drives exclusively tenogenic lineage differentiation in hMSCs. The genes and proteins expressed by these cells fall within the defined characteristics of tenogenic differentiation, as mentioned earlier. We can also conclude that cyclic stretching stimulates superior cell proliferation, based on our previous pilot study [28]. However, an increase in strain magnitude does not necessarily result in greater differentiation, as demonstrated in this study, where 8% strain, not 4% or 12%, resulted in the highest tenogenic expression. Yet, based on our previous pilot study [28], hMSCs subjected to 4% strain at 1 Hz showed the best cell proliferation. The stretching frequency of 1 Hz used in this study was chosen because our previous study showed the best cellular differentiation in hMSCs at this rate [28]. It is worth noting that uniaxial cyclic loading did not result in chondrogenic, adipogenic, or osteogenic differentiation and that, at the prescribed loading regime, cells tended to form a distinctive tendon-like cell phenotype. As far as the authors are aware, these observations have not been previously reported. Another novelty of this study is that specific combinations of strain magnitude and rate of tensile loading provide specific hMSC tenogenic differentiation responses, as mentioned earlier. It is important to note that, as far as the authors are aware, there is no consensus on a proper definition of tenogenic differentiation. To ensure that our study incorporated as many characteristics of tenogenic differentiation as possible using gene and protein expression, work from several laboratories was used as reference [25,26,31,32]. It is hoped that, in doing so, a more global definition of tenogenic differentiation can be reached [23].
Our study corroborates previous findings that cell orientation is altered when subjected to cyclic loading [20,33]. The cell appears to reorientate along a longitudinal axis perpendicular to its original orientation and to the direction of cyclic loading. This phenomenon appears to be necessary for the reduction of excessive strain applied to the cellular structures. In addition, it also results in an increase in specific phenotypic expression by these cells, as previously described [34,35]. It has been suggested that the mechanisms involved in promoting cellular realignment depend on various factors, which include the rearrangement of intracellular stress fibers due to energy dissipation and fluctuations in ionic exchange mechanisms such as the depolarization of voltage-gated channels [36,37]. Based on our observations, it is likely that the actin stress fibers, a major cytoskeletal constituent, may be responsible for the proliferation and differentiation of hMSCs [38,39]. The AFM and confocal fluorescence microscopy analyses demonstrate these changes in the actin stress fibers, which, based on previous findings, suggest that the change in Young's modulus is ascribable to the development of the cellular cytoskeleton during the differentiation process [40].
Another finding that corroborates previous studies is that hMSCs subjected to tensile cyclic loading show an apparent increase in the synthesis of collagen type I and type III, and potentially of other tenogenic proteins [31,32,41]. However, whilst our study did not demonstrate any chondrogenic, osteogenic, or adipogenic expression, such expression has been reported by others [42][43][44][45]. We hypothesize that these differences may be attributable to the different loading types, magnitudes, rates, and even the devices used to create the mechanically strained environments employed in each of these studies, since it has been shown that different types of mechanical signals produce different outcomes, i.e., differentiation of hMSCs towards a specific lineage [46]. For example, low-amplitude or low-frequency mechanical loading has been shown to promote osteogenic (1 Hz, 3%, 48 h) [31], myogenic (1 Hz, 4%, 24 h) [47], and neuronal (0.5 Hz, 0.5%, 8 h) [15] differentiation of hMSCs. In addition, the action of cyclic compression appears to be a major contributing factor required for MSCs to undergo chondrogenesis [48]. Apparently, loading cells in a uniaxial versus a biaxial manner also produces different outcomes: in another study using a rate and magnitudes similar to ours but employing biaxial loading, MSCs tended to differentiate towards the osteogenic lineage [49]. Thus, it is not unexpected that uniaxial cyclic stretch is believed to be of paramount importance in the development of functional musculoskeletal tissues [50], especially for the differentiation of MSCs into tendon/ligament fibroblasts. One aspect that needs to be considered is that the differences observed between our study and previous reports [51][52][53] may have been related to the Flexcell system used in their studies.
In contrast to the Strex machine used in our study, this device employs a suction mechanism at the centre of the elastomeric cell culture surface to create the stretching effect. The radial stretching of the Flexcell system may have produced compounding compressive forces on the attached cells, resulting in osteogenic lineage differentiation. This, however, remains speculative and would require further supportive findings in future studies. This again supports the view that different types of mechanical signals produce different outcomes, driving hMSC differentiation towards specific lineages [54].
The clinical implications of this study are apparent and may lead to several potential applications. Although further studies are required, the data obtained here may eventually be extrapolated to patients. Indeed, this is not new, since many studies have demonstrated that mechanical loading is beneficial to the musculoskeletal system [55]. This particularly applies to the tendon, which has been shown to undergo tissue repair when subjected to stretching exercises [56,57]. What is new in this study is that only certain combinations of strain magnitude and cyclic loading rate may be beneficial for multipotent cells such as hMSCs, while other combinations may not be, or may in fact produce detrimental outcomes. Once the optimal combination has been established, as observed in the present study, stretching will elicit anabolic responses from tendon cells. This in turn increases the production of type I collagen in the peritendinous tissues, as demonstrated previously [58].
Tendons, viscoelastic tissues that are stiffer than most other soft tissues, allow the transfer of large tensile forces without tissue or cell damage [59]. Indeed, although resistant to tensile forces, tenocytes are still subjected to high mechanical stresses within a highly mechanoactive environment [60]. However, studying the mechanical processes underpinning the cellular response in an in vivo environment would be technically unmanageable; hence, a model such as the one employed in the present study may be more practical, appropriate, and informative. We recognize the limitations of a system that does not truly mimic the in vivo environment; these have been considered in our analyses, and we have not overstated the findings of the present study. We also recognize that, although the present study was well designed, several limitations were unavoidable and need to be highlighted here. Firstly, as with any in vitro study, the present study does not take into account the complexities of surrounding tissues, and thus translating the findings into clinical applications must be done with caution. Secondly, the present study assumes that stretching occurs in a uniform manner, which in reality may not be the case, particularly when certain areas of the substrate are subjected to the phenomenon known as differential stretching, as suggested in previous studies [61][62][63][64]. Limited by the size of the cell culture flask and the maximal rate at which cells can proliferate, the present study could only be conducted up to 72 hours. A downside of this is that certain gene expression changes, such as osteogenic markers, may not have been detected; previous studies suggest that culturing MSCs for up to 14 days may be needed for these changes to be observed.
Hence, our experiments may have shown false-negative results in this respect. It should be noted, however, that these longer culture periods apply mainly to static cultures and are probably not applicable to our stretching cultures [65,66]; results from other studies seem to support this [53,67,68]. Notwithstanding these limitations, the findings of the present study remain valid and useful owing to the robust study design employed. It is hoped that future studies can be conducted using more advanced techniques that are not subject to the limitations mentioned above.
Conclusions
Cells subjected to 1 Hz cyclic uniaxial stretching demonstrated significant and maximal tenogenic expression, but not expression of other mesenchymal lineages, when stretched at 8% strain. No dose-related response to increasing strain magnitude was observed; rather, it is more likely that a specific combination of rate and strain magnitude elicits specific cell responses, as demonstrated by our present and previous studies.
Data Availability
The data used to support the findings of this study are included within the article.
Malaya, for his assistance to provide necessary samples for this study. We also thank the University of Malaya for a PhD thesis scholarship for the first author.
Epstein−Barr virus-encoded EBNA2 alters immune checkpoint PD-L1 expression by downregulating miR-34a in B-cell lymphomas
Cancer cells subvert host immune surveillance by altering immune checkpoint (IC) proteins. Some Epstein−Barr virus (EBV)-associated tumors have higher Programmed Cell Death Ligand 1 (PD-L1) expression. However, it is not known how EBV alters ICs in the context of its preferred host, the B lymphocyte, and in derived lymphomas. Here, we found that latency III-expressing Burkitt lymphoma (BL), diffuse large B-cell lymphomas (DLBCL), or their EBNA2-transfected derivatives express high PD-L1. In a DLBCL model, EBNA2 but not LMP1 is sufficient to induce PD-L1. Latency III-expressing DLBCL biopsies showed high levels of PD-L1. The PD-L1-targeting oncosuppressor microRNA miR-34a was downregulated in EBNA2-transfected lymphoma cells. We identified early B-cell factor 1 (EBF1) as a repressor of miR-34a transcription. Short hairpin RNA (shRNA)-mediated knockdown of EBF1 was sufficient to induce miR-34a transcription, which in turn reduced PD-L1. MiR-34a reconstitution in EBNA2-transfected DLBCL reduced PD-L1 expression and increased its immunogenicity in mixed lymphocyte reactions (MLR) and in three-dimensional biomimetic microfluidic chips. Given the importance of PD-L1 inhibition in immunotherapy and miR-34a dysregulation in cancers, our findings may have important implications for combinatorial immunotherapy, which includes IC-inhibiting antibodies and miR-34a, for EBV-associated cancers.
Introduction
Among non-Hodgkin lymphomas (NHL), more than 95% of endemic BLs are associated with Epstein−Barr virus (EBV). Diffuse large B-cell lymphomas (DLBCLs) constitute about 30% of all NHLs, of which about 10% are EBV-associated in immunocompetent patients [1]. Its high frequency makes DLBCL one of the most common cancers in adults [2]. It is noteworthy that the annual global number of cases of EBV-positive DLBCL exceeds the total number of BLs. Additionally, EBV is the cause of lymphomas arising in immunocompromised individuals such as AIDS and transplant patients [3]. This clearly suggests that EBV's ability to cause cancer lies in its capacity to evade host immune surveillance.

These authors contributed equally: Frank J Slack and Pankaj Trivedi.
EBV generally establishes one of the following four forms of latency, depending upon the phenotype and the transcription factor repertoire of the infected cells [4]. A complete lack of any virally encoded latent-gene expression, as seen in resting memory B cells, is called latency 0. The expression of the virally encoded EBNA1 and EBERs represents type I latency. EBV-infected normal B lymphocytes express type I latency in vivo [5]. Under pathological conditions, the viral latent-gene expression varies in different tumors. The phenotypically representative BLs and corresponding cell lines express EBNA1 and LMP2A. When these lines drift towards an immunoblastic phenotype, the viral gene expression is expanded to all growth transformation proteins, EBNA1 to -6 and LMP1, -2A, and -2B. Collectively, this is known as the type III program. The viral latent-gene expression observed in NPC and Hodgkin lymphoma is of the intermediate type II latency (LMP1+EBNA2−) [6].
The ability of EBV to transform normal B lymphocytes into permanently growing lymphoblastoid cell lines (LCLs) is attributed to its latent proteins. Among these, LMP1 and EBNA2 have been extensively studied [7,8]. In particular, EBNA2 is known to be sine qua non for the virus to transform B cells [9]. Indeed, in keeping with its importance in transformation, EBNA2 expression ensues early after EBV infects naive B cells [10]. This viral protein is a potent activator of transcription of genes such as CD23 and C-myc [11,12] but can also negatively regulate genes like BCL6 and Ig [13,14]. It is a functional homolog of intracellular (Ic) Notch, although they are not interchangeable [15,16]. It does not bind directly to DNA but activates transcription of many target genes by binding to the transcription factor RBP-Jk [17]. EBNA2 colocalizes with another B-cell-specific DNA-binding transcription factor, EBF1 [16], which is essential for the commitment and maintenance of the B-cell transcription program [18,19].
Immune checkpoints (IC) regulate T-cell responses to maintain self-tolerance. They deliver costimulatory and coinhibitory signals to T cells [20]. PD-L1, mainly expressed by antigen-presenting cells engages its receptor PD-1 on T cells, to provide a growth inhibitory signal. Different tumors express high PD-L1 to evade immune recognition and consistently, inhibition of PD-1/PD-L1 and other IC molecules have become important targets of cancer immunotherapy [21].
MicroRNAs (miRNAs) are small noncoding RNAs that post-transcriptionally regulate gene expression [22,23]. The miR-34 family members are transcriptionally induced by p53 [24]. They suppress transcription of genes important in cell cycle progression, antiapoptotic functions, and regulation of cell growth. Expression of miRNAs is altered in a broad range of cancers, with frequent downregulation of both p53 and miR-34 [25,26]. The latter is downregulated in chronic lymphocytic leukemia and acute myeloid leukemia (AML) [27,28]. Interestingly, the IC protein, PD-L1, has been shown to be a validated target of miR-34a [29].
Based on gene expression, DLBCLs are divided into two broad categories, the germinal center (GC) type and the activated B-cell type (ABC) or the non-GC type [30]. The overall survival rates in the non-GC (ABC) DLBCL patients are poor [31][32][33][34]. EBV is associated more frequently with the non-GC DLBCLs [2], which generally express high levels of PD-L1 [31]. Both EBV associated and high PD-L1 expressing non-GC DLBCLs have a very poor prognosis [31,35]. In other hematological malignancies, like Hodgkin Lymphoma (HL), high PD-L1 expression has been reported due to either selective amplification of the PD-L1 locus on chromosome 9p24.1 or EBV infection [36]. These two modes of PD-L1 upregulation are mutually exclusive [37]. It was also shown that LMP1 expression induced PD-L1 promoter activity in B cells [37]. In addition, more than 70% of post-transplant lymphoproliferative disorders, of which EBV is the cause, express PD-L1 [37]. In DLBCL, Kwon et al. [32] observed that PD-L1 expression was positively correlated with EBV's presence in ABC type DLBCL.
Although the presence of EBV is correlated with higher expression of PD-L1 both in HL and DLBCLs, it is not clear if and how the virus is responsible for an increased PD-L1 expression and if this applies to other lymphomas like BLs, as well. While LMP1 has been implicated in induction of PD-L1 in HEK293 cells [37] or in epithelial cells [38], it is not known if other EBV encoded genes like EBNA2 can regulate PD-L1 in a more frequent cellular setting and natural reservoir for EBV, such as B cells. In this study, we set out to investigate if EBNA2, which is indispensable for EBV's ability to transform B cells, has any effect on PD-L1 and if this involves regulation of cellular miRNAs.
Infection with a recombinant EBV strain
The recombinant strain of Akata EBV [45] was a kind gift from Prof. Kenzo Takada (Hokkaido University, Sapporo, Japan). The induction of lytic replication, virus production by engaging IgG with corresponding antibodies and infection procedure has been described in detail by us previously [14,41,42]. The supernatant containing recombinant EBV was used to infect EBV-negative U2932, SUDHL5, OMA4, and DG75 cells.
EBNA2 and LMP1 transfection and selection
An EBNA2 expression vector J144-C1, the expression vector for LMP1 J132-G5 and the corresponding vector control pSV-MPA GPT (a kind gift from Prof. Lars Rymo, Gothenburg University, Sweden) were individually transfected into U2932 DLBCL cells by electroporation. The transfection and selection details have been described by us previously [14,46]. BL41K3 cells transfected with estrogen-inducible EBNA2 were treated with 1 µM estradiol to induce EBNA2 expression [44].
Immunoblotting

EBNA2 and LMP1 expression was verified with the monoclonal antibodies PE2 (kindly provided by Dr. Martin Rowe, Birmingham University Medical School) [47] and S12 (a kind gift from the late Dr. David Thorley-Lawson, Tufts University, Boston, USA), respectively. β-actin antibodies were purchased from Sigma. PD-L1 (E1L3N, cat# 13684), p21 (#2947), and BCL2 (#15071) antibodies were purchased from Cell Signaling. Further details of the method are in the supplementary information.
Quantitative RT-PCR (qRT-PCR)
Total RNA from cell lines was isolated using the Direct-zol RNA MiniPrep Plus kit (Zymo Research) according to the vendor's instructions. The integrity of the RNA was routinely checked on a 1% agarose gel, and RNA was quantified with a DS-11 spectrophotometer (DeNovix) [48]. cDNA synthesis for mature miR-34a was performed according to the manufacturer's instructions (miScript II RT Kit, Qiagen). For verification of pre-miR-34a expression, reverse transcription qPCR was performed. Further details of the method can be found in the supplementary information.
Knockdown of miR-34a and miR-34a mimic transfection

The U2932 vector control or EBNA2-expressing clones were transfected with 50 nM of miR-34a inhibitors (mirVana), mimic miR-34a oligonucleotides, or mimic controls, purchased from Ambion. The compounds were delivered into the cells with DharmaFect Duo transfection reagent (GE Dharmacon). After 48 h, the cells were harvested for total RNA and protein extraction.
PD-L1 luciferase reporters and activity
PD-L1 3′UTR Luciferase reporter construct was made as follows. The full-length PD-L1 3′UTR (2674 bp) (ref| NM_014143.3| Homo sapiens CD274 molecule (CD274), transcript variant 1, mRNA) was PCR amplified from human genomic DNA (Thermo Fisher #4312660), in three separate fragments. Fragment 1 was generated with primers F: GAGACGTAATCCAGCATTGG and R1: CTGAGGT CTGCTATTTACTGG; Fragment 2 was generated with primers F1: CCAGTAAATAGCAGACCTCAG and R2: GACTAGATTGACTCAGTGCAC; Fragment 3 was generated with primers F2: GTGCACTGAGTCAATCTAGTC and R: TAACTTTCTCCACTGGGATG. The three fragments were connected by overlap PCR, with forward primer: actcgagGAGACGTAATCCAGCATTGG (containing a XhoI site, underlined) and reverse primer: agcggccgcTAACTTTCTCCACTGGGATG (containing a NotI site, underlined). The full-length PD-L1 3′UTR was cloned into the Psicheck2 vector between the XhoI and NotI sites downstream of Renilla luciferase, and fully verified by sequencing. Further details are enclosed in the supplementary information.
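The overlap-PCR strategy described above works because each forward overlap primer (F1, F2) is the reverse complement of the upstream fragment's reverse primer (R1, R2), so adjacent fragments share a complementary junction. A quick sanity check of this, using the primer sequences quoted verbatim from the text (the lowercase restriction-site tails of the outer primers are not involved in the overlaps), can be sketched as:

```python
# Sketch: verify that each overlap-PCR forward primer is the reverse
# complement of the preceding fragment's reverse primer. Sequences are
# taken verbatim from the cloning description above.

COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA sequence (uppercase ACGT)."""
    return seq.translate(COMP)[::-1]

pairs = {
    "F1/R1": ("CCAGTAAATAGCAGACCTCAG", "CTGAGGTCTGCTATTTACTGG"),
    "F2/R2": ("GTGCACTGAGTCAATCTAGTC", "GACTAGATTGACTCAGTGCAC"),
}

for name, (fwd, rev) in pairs.items():
    print(name, revcomp(rev) == fwd)  # both print True
```

Both junctions check out, consistent with the three fragments assembling into the full-length 3′UTR by overlap extension.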
EBF1 knockdown
Knockdown of EBF1 was obtained by transduction of U2932 and its EBNA2 expressors with pLK0.1 lentiviral vectors, which carry shEBF1 and the corresponding control shRNA (TRC Human EBF1 shRNA, Clone ID: TRCN0000013831 and Plko.1-emptyT control TRCN0000208001, Open Biosystems, Dharmacon). Cells were transduced as described below and were selected with 1.5 µg/ml puromycin for 10 days and used for further experiments.
The day after, the plates were washed in 1× PBS and PBMCs were added to the CD3/CD28-coated wells at a density of 1×10^6 cells/well and cultured for 72 h, in order to activate the CD4 and CD8 cell populations.
One day before seeding the stimulators, 1×10^5 U2932 MPA vector and U2932 EBNA2 cl-1 cells were transiently transfected with 50 nM mimic negative control or mimic miR-34a (Ambion) and subsequently irradiated with a sublethal dose of 5 Gy for 2 min. The cells were placed in coculture with 1×10^6 PBMCs. At 72 h post-transfection and after 48 h of coculture, all samples were treated for 5 h with GolgiStop™ (BD Biosciences) to block cytokine accumulation in the Golgi complex, for the detection of IFN-γ-producing cells by flow cytometry. Additional details are in the supplementary materials and methods. The entire population of cocultured cells was stained with FITC mouse anti-human CD8 and Pacific Blue mouse anti-human CD4 (BD Pharmingen) for detection of T cells. The same cells were then permeabilized with cytofix/cytoperm buffer (BD Pharmingen), according to the manufacturer's instructions. The cells were stained intracellularly for human IFN-γ with R-PE (Invitrogen). A matched isotype control, anti-human IgG Fc secondary antibody, PE (Invitrogen), was also included in this experiment. Sample acquisition was performed with a Gallios flow cytometer, and the data were analyzed with Kaluza for Gallios software.
3D microfluidic platform for T-cell responses to EBNA2-transfected U2932 DLBCL
The 3D microfluidic chips, polydimethylsiloxane (PDMS, Sylgard 184, Dow-Corning, Midland, Michigan) microfluidic devices, were fabricated using soft lithography as described previously [51,52]. The devices were treated with 0.01% v/v poly-L-lysine and 0.5% v/v glutaraldehyde to promote collagen/fibronectin adhesion. After washing overnight in water, steel acupuncture needles (160 μm diameter, Seirin, Kyoto, Japan) were introduced into the devices, and a solution of 2.5 mg/ml type 1 collagen, 1× M199 medium, 1 mM HEPES, 0.1 M NaOH, NaHCO3 (0.035% w/v), and 200 ng/ml fibronectin (Thermo Fisher Scientific, Waltham, MA) was infused into the devices and allowed to polymerize for 40 min at 37°C. Subsequently, the needles were removed to create 160 μm diameter channels within the collagen/fibronectin gel, and cells were introduced into the devices. In coculture experiments, each device was first seeded with 5×10^3 U2932 EBNA2 cl-1 cells transduced with the control lentivirus or the miR-34a-containing lentiviral vector and incubated for 24 h at 37°C. Subsequently, 5×10^4 PBMCs, containing previously activated T cells, were added in complete medium (RPMI 1640/10% FBS). The devices were run in triplicate and incubated for an additional 48 h before immunostaining.
For immunostaining of the cocultures in microfluidic devices, the cells were fixed with 4% PFA for 10 min, washed twice in PBS, permeabilized with 0.1% (v/v) Triton X-100 in PBS for 20 min at room temperature, and treated with a blocking solution (5% BSA in PBS, 0.1% Triton X-100). The devices were incubated with rabbit anti-caspase-3 (Cell Signaling) or mouse anti-CD4 and anti-CD8 antibodies (1:100 dilution, Biolegend) and kept on a rocking platform O/N at 4°C. Devices were then immersed in PBS and left on a rotor O/N at 4°C to remove excess antibody. The day after, PBS was removed and Alexa 568-conjugated goat anti-rabbit IgG (for the caspase-3 primary) and Alexa 647-conjugated goat anti-mouse IgG (for the CD4 and CD8 primaries) secondary antibodies, diluted 1:100 in blocking buffer, were added to each device O/N at 4°C. Finally, PBS was added to each device and the samples were processed to detect caspase-3, CD4 and CD8 staining. The devices were visualized using a confocal microscope (LSM 710, Carl Zeiss), and image analysis was performed in ImageJ by applying a maximum intensity z-projection and merging the channels.
PD-L1 immunohistochemistry and quantitative analysis in biopsies from DLBCL patients
A written informed consent was obtained from all patients involved in the study. The study design was approved by the Institute's ethics review board. Paraffin sections were immunostained for PD-L1, PD-1, EBNA2, LMP1, MUM-1, CD10, and Bcl6 using an automated immunostainer (DAKO, Glostrup, Denmark). As control for PD-L1 immunostaining, sections from paraffin-embedded human lung carcinoma were used. Further details are in the supplementary materials and methods. For quantitative IHC analysis, the Aperio Imagescope algorithm was used to evaluate both the percentage of positive cells and the staining intensity of tumor cells in three regions of three clinical samples representing each of the three non-GC DLBCL categories (EBV−, EBV+/EBNA2− and EBV+/EBNA2+).

Fig. 1 (legend fragment): … (latency I expressor) and its latency III-expressing counterpart. Furthermore, expression of PD-L1 and EBNA2 is shown in two additional BLs with resident viral genomes: Daudi carries an EBNA2-deleted EBV strain and Jijoye is an EBNA2-positive BL. b Two GC DLBCLs, namely U2932 and SUDHL5, were infected with a recombinant Akata strain of EBV. Total cell lysates were electrophoresed and EBNA2 expression was verified by immunoblotting using PE2 monoclonal antibodies. PD-L1 expression was analyzed using rat monoclonal antibodies. c Two BL cell lines, Oma4 and DG75, were infected with the recombinant Akata virus and tested for EBNA2 and PD-L1 expression. β-actin is used as loading control. d PD-L1, EBNA2, and LMP1 expression in transfected U2932 DLBCL. e BL41 is an EBV-negative BL; the BL41K3 derivatives are estrogen-inducible EBNA2 transfectants. PD-L1, EBNA2, and β-actin expression was analyzed before and after β-estradiol treatment. β-actin is used as loading control.
PD-L1 expression is induced in latency III-expressing BLs, in vitro infected BLs and DLBCLs, and EBNA2-transfected cells
The restricted latency expressor cell line Mutu I [53] did not express PD-L1, while its EBNA2-expressing counterpart showed increased PD-L1. Of two additional BL cell lines, Jijoye, which is EBNA2-positive, expressed PD-L1, while the EBNA2-deleted Daudi BL lacked PD-L1 expression (Fig. 1a). These data suggest that latency III-related viral proteins could influence PD-L1. To extend these observations, we infected two EBV-negative GC DLBCLs, U2932 and SUDHL5, and two EBV-negative BLs, OMA4 and DG75, with a recombinant Akata EBV. The resultant convertants expressed EBNA2 (Fig. 1b, c). PD-L1 expression was strongly upregulated in both DLBCLs (Fig. 1b) and both BLs (Fig. 1c) after in vitro EBV infection.
Transcription of the PD-L1-targeting miRNA miR-34a is downregulated by EBNA2

We have previously shown that EBNA2 can profoundly alter the cellular miRNA expression profile in U2932 cells [54]. Given the strong increase of PD-L1 expression in EBNA2-transfected BL and DLBCLs, and since miR-34a targets PD-L1, we set out to investigate whether miR-34a expression is affected in EBNA2-expressing B lymphoma cells. As shown in Fig. 2a, top panel, EBNA2-transfected U2932 cells showed a marked decrease in miR-34a. Similarly, BL41K3 cells carrying estrogen-inducible EBNA2 showed reduced miR-34a after estrogen treatment (Fig. 2b, top panel). Additionally, both U2932 EBNA2 and BL41K3 cells showed reduced pre-miR-34a expression (Fig. 2, middle panels). To further confirm that the miR-34a decrease is transcriptional, EBNA2-expressing U2932 and BL41 cells were transfected with luciferase reporters carrying the miR-34a promoter. As seen in Fig. 2 (lower panels), in the presence of EBNA2 the luciferase activity was significantly reduced, confirming that miR-34a is indeed transcriptionally regulated by EBNA2.
Validation of the PD-L1 3′UTR as an miR-34a target in U2932 DLBCL

To investigate the role of miR-34a in the regulation of the PD-L1 3′UTR, the complete 3′UTR of PD-L1 was cloned into a luciferase reporter construct and transfected into U2932 MPA vector and U2932 EBNA2 cells. Subsequently, miR-34a inhibitors were introduced into the vector-only cells, where miR-34a is higher, whereas miR-34a mimics were transfected into the EBNA2-expressing counterparts with low miR-34a expression. Figure 3a shows luciferase activity in controls and in the presence of the miR-34a inhibitor in U2932 MPA vector or the miR-34a mimic in the EBNA2 transfectant. In accordance with miR-34a downregulation in U2932 EBNA2, the luciferase activity was high in these cells. When mimic miR-34a was introduced into EBNA2-expressing cells, the reporter gene activity was significantly reduced (Fig. 3a). To confirm the specificity of miR-34a binding in the PD-L1 3′UTR, we mutated the miR-34a seed sequence using site-directed mutagenesis. As seen in Fig. 3b, the wild-type 3′UTR reporter activity was high, consistent with low miR-34a in the EBNA2-expressing cl-1. When the miR-34a mimic was introduced into these cells, the luciferase activity was reduced. In contrast, luciferase reporters carrying the mutated seed sequence were no longer repressed by miR-34a. This not only validated the sequence specificity of the miRNA-mRNA binding but also mapped and confirmed the miR-34a recognition sequence in the PD-L1 3′UTR. The absolute expression of miR-34a in the U2932 and BL41 parental cell lines and their EBNA2-expressing counterparts, in comparison with CD19+ B cells, and the Luc activity of the wild-type and mutated PD-L1 3′UTR in both cell lines are shown in Supplementary Figure 3. In comparison with normal CD19+ B cells, both U2932 and BL41 had higher levels of miR-34a (S Fig. 3A). As a consequence, the luciferase activity of the wild-type PD-L1 3′UTR construct was repressed, which indicates miR-34a binding to the 3′UTR of PD-L1. In contrast, luciferase activity of the mutated PD-L1 3′UTR was not affected by miR-34a. Similarly, in EBNA2-transfected cells, due to lower expression of miR-34a, the Luc activity from both the WT and mutated 3′UTR constructs was not affected (S Fig. 3B).

Fig. 3 miR-34a targets the 3′UTR of PD-L1 in EBNA2-transfected U2932 cells, and site-directed mutagenesis of its seed sequence abrogates its binding to the PD-L1 3′UTR. a Luciferase reporter construct containing the wild-type PD-L1 3′UTR was transfected in the presence of either the miR-34a inhibitor in MPA vector control transfectants or the miR-34a mimic in the U2932 EBNA2-expressing clone. Each transfection was performed in triplicate. For U2932 EBNA2 cl-1, (*) p = 0.0172. b The specificity of miR-34a binding to its seed sequence in the PD-L1 3′UTR was confirmed by mutating the seed sequence with site-directed mutagenesis. The mimic miR-34a bound to the wild-type PD-L1 3′UTR and reduced Luc activity. The inhibitory effect of mimic miR-34a was abrogated when its seed sequence in the PD-L1 3′UTR was mutated. Each transfection was performed in triplicate. (**) p = 0.0026 refers to U2932 MPA vector and (**) p = 0.0021 refers to U2932 EBNA2 cl-1. p values were calculated with an unpaired t test.
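The reporter comparisons above reduce to normalizing luciferase signal per replicate and applying an unpaired t test, as stated in the figure legend. A sketch with invented triplicate values (the actual readings are not given in the text):

```python
from math import sqrt
from statistics import mean

def normalized_luc(firefly, renilla):
    """Normalize firefly reporter signal to the Renilla transfection control."""
    return [f / r for f, r in zip(firefly, renilla)]

def unpaired_t(a, b):
    """Two-sample t statistic with pooled variance (equal sample sizes)."""
    n = len(a)
    va = sum((x - mean(a)) ** 2 for x in a) / (n - 1)
    vb = sum((x - mean(b)) ** 2 for x in b) / (n - 1)
    return (mean(a) - mean(b)) / sqrt((va + vb) / n)

# Hypothetical triplicates: wild-type 3'UTR reporter with scrambled control
# versus miR-34a mimic (Renilla set to 1.0 for simplicity)
ctrl  = normalized_luc([9.8, 10.4, 10.1], [1.0, 1.0, 1.0])
mimic = normalized_luc([5.1, 4.7, 5.0],  [1.0, 1.0, 1.0])
t_stat = unpaired_t(ctrl, mimic)
```

With real data the resulting t statistic would be converted to a p value against the t distribution with 2n−2 degrees of freedom, as Prism does.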
Overexpression of miR-34a in U2932 EBNA2 cells reduces PD-L1
Having established that miR-34a binds to the 3′UTR of PD-L1, we next investigated whether miR-34a overexpression could have a direct effect on PD-L1. For this purpose, we transfected miR-34a mimics into U2932 EBNA2 cl-1. As seen in Supplementary Figure 4, the decrease in Luc activity of the biosensor psicheck-2 construct in the presence of the miR-34a mimic clearly indicates its successful delivery and binding to target sequences. To investigate the direct effect of miR-34a on PD-L1, miR-34a-transfected U2932 EBNA2 cl-1 was analyzed for PD-L1. A significant reduction in PD-L1 was observed after overexpression of miR-34a in comparison to the scrambled control (Fig. 4a, b). We further investigated whether overexpression of miR-34a in U2932 cells influences p21 and BCL2, previously shown to be regulated by this miRNA [55]. As shown in Suppl Fig. 5A, U2932 EBNA2 cl-1 transfected with miR-34a had increased p21 but reduced BCL2. Consequently, the number of apoptotic cells was higher in miR-34a-transfected U2932 EBNA2 cl-1 in comparison with the vector-transfected cells (S Fig. 5B).

Fig. 4 (legend fragment): The effect of miR-34a reconstitution on PD-L1 expression in the vector control and EBNA2-transfected U2932 cells was also confirmed by immunoblotting. An average of the corresponding densitometric analysis of three such experiments is shown in the lower panel, (**) p = 0.0055. β-actin served as loading control.
EBF1 knockdown de-represses miR-34a and downregulates PD-L1 in U2932 EBNA2 cells
Previously reported ChIP-Seq data show that EBNA2 colocalizes with EBF1 at the promoters/enhancers of many genes [16]. To identify the molecular mechanism of miR-34a regulation by EBNA2, we analyzed EBNA2 ChIP-Seq datasets from the GEO database (accession number: GSM2039170) and found that EBNA2 peaks at the miR-34a promoter. Subsequently, through the JASPAR database [56] and visualization with the Integrative Genomics Viewer (IGV) [57], using the reference hg38 (human genome 38), we found multiple predicted binding sites for EBF1 at the miR-34a promoter; among them, one consensus EBF1 sequence overlaps with the EBNA2 peak (Fig. 5a, highlighted in a green square). Based on this, we reasoned that miR-34a might be regulated by EBNA2 through EBF1. To verify this, the parental U2932 line and its EBNA2-expressing derivative were transduced with lentiviral vectors carrying shEBF1 or the shControl. As shown in Fig. 5b, upon EBF1 knockdown in U2932 EBNA2 cl1, miR-34a and pre-miR-34a expression are derepressed, with a consequent decrease in PD-L1. We further found that miR-34a promoter activity was increased upon EBF1 knockdown (Fig. 5c). These data establish a circuit in which EBNA2 might recruit EBF1 to the miR-34a promoter to downregulate its expression and consequently upregulate PD-L1.
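Computationally, the EBF1 site search described here is a sliding-window match of a consensus against the promoter sequence. A toy sketch using a simple IUPAC-style consensus; the sequence and pattern below are illustrative stand-ins, not the JASPAR EBF1 matrix or the hg38 coordinates:

```python
# Minimal IUPAC alphabet: each consensus letter maps to its allowed bases
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "Y": "CT", "R": "AG", "N": "ACGT"}

def scan_consensus(seq, consensus):
    """Return 0-based positions where every consensus letter is satisfied."""
    hits = []
    w = len(consensus)
    for i in range(len(seq) - w + 1):
        window = seq[i:i + w]
        if all(base in IUPAC[c] for base, c in zip(window, consensus)):
            hits.append(i)
    return hits

# Toy promoter fragment and an illustrative EBF1-like palindromic core
promoter = "GGATTCCCAAGGGAATT"
hits = scan_consensus(promoter, "TCCCNNGGGA")
# one match, starting at position 4
```

A production motif scan would instead score each window against a position weight matrix and threshold on the score, which is what JASPAR-based tools do.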
Suppression of T-cell activation by EBNA2 and increased immunogenicity after miR-34a overexpression as measured in MLR and 3D biomimetic microfluidic platforms
In order to understand the immunological relevance of PD-L1 upregulation and miR-34a downregulation by EBNA2, we first employed an MLR assay. After 3 days of PBMC activation on CD3/CD28-coated wells, the irradiated stimulators, U2932 MPA vector and U2932 EBNA2 cl-1 transfected with either the mimic control or the miR-34a mimic, were added in an MLR. Successful miR-34a delivery into the stimulator cells and its binding to the specific target sequence were confirmed using the psicheck-2 biosensor reporter assay (Suppl. Figure 6A). Effector T-cell activation was confirmed by a strong increase in PD-1 expression in two donors (Suppl. Figure 6B). The activated T-cell state was corroborated by increased IFN-γ production (Fig. 6a). Importantly, U2932 EBNA2 cl-1 boosted IFN-γ production by both CD8 and CD4 T cells only when miR-34a was overexpressed (Fig. 6a). These data suggest that the increase in PD-L1 by EBNA2 may have a negative effect on T-cell activation and that reconstitution of miR-34a restores the immunogenicity of EBNA2 transfectants.
We next investigated how miR-34a might reverse the poor immunogenicity of EBNA2-transfected, high PD-L1-expressing U2932 cells. The schematic design of the 3D microfluidic chip-based coculture system is shown in Suppl. Figure 7. Effector T-cell activation was confirmed by increased IFN-γ (Suppl. Figure 8A). Stimulator U2932 EBNA2 cells transduced with either the miR-34a-carrying lentiviral vector or the vector control were introduced into the microfluidic devices. The expression of miR-34a in lentivirus-transduced U2932 EBNA2 cells was checked by real-time qPCR (Suppl. Figure 8B) and the consequent PD-L1 decrease was verified by flow cytometry (Suppl. Figure 8C). Figure 6bi shows the device with empty lentiviral vector-transduced U2932 EBNA2 expressors either in the presence or absence of T cells. No significant change in caspase-3 expression was observed. In contrast, as seen in Figure 6bii, when the miR-34a-containing lentivirus was transduced into the EBNA2 U2932 clone, there was a marginal induction of caspase-3 in the absence of T cells, most probably due to apoptosis induced by miR-34a expression. Notably, overexpression of miR-34a in EBNA2-expressing U2932, in the presence of CD4/CD8 cells, induced significant tumor cell death, as indicated by increased caspase-3 expression (Fig. 6bii, c). Overall, these data suggest that reconstitution of miR-34a in EBNA2-expressing U2932 makes the cells more immunogenic.
PD-L1 and EBV correlation in clinical DLBCL samples
In a cohort of 27 DLBCL cases, we investigated how EBV and EBNA2 expression correlate with PD-L1 expression. According to the Hans algorithm, 21 cases were classified as non-GC type and 6 cases as GC type. Figure 7a shows PD-L1 expression in three non-GC DLBCLs representing each category, namely EBV-negative, EBV+/EBNA2−, and EBV+/EBNA2+ samples. PD-L1 expression was detected at the cell membrane, in the cytoplasm, or as dots in the Golgi area of the neoplastic cells. For quantitative estimation of PD-L1 expression and staining intensity, Aperio Imagescope analysis was employed. The stained tissue sections were digitalized at ×40 magnification using the Aperio ScanScope. The percentage positivity was calculated by counting positive cells in three squared areas measuring 50,000 μm² from each clinical sample. In the same areas the number of positive cells was determined using the Aperio software IHC Membrane v1. The IHC Membrane image analysis algorithm detects membrane staining for individual tumor cells in the selected regions and quantifies the intensity and completeness of the membrane staining. Figure 7b (upper panel) shows a slight but statistically significant overall increase in PD-L1-positive cells in EBNA2-positive cases. Notably, as shown in Fig. 7b, in all EBNA2+ samples analyzed, the number of cells with high staining intensity (+2, +3), as measured by the Imagescope algorithm, was significantly higher than in EBNA2− cases (Suppl. Table 1).
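The Imagescope readout described above amounts to simple per-region bookkeeping: the percentage of positive cells overall, and the share falling in the high-intensity bins (+2, +3). A sketch with invented counts (not the study's data):

```python
def summarize_ihc(regions):
    """regions: list of dicts mapping intensity bin (0, 1, 2, 3, as in the
    Imagescope membrane algorithm) to cell counts for one tissue region."""
    total = sum(sum(r.values()) for r in regions)
    positive = sum(r[1] + r[2] + r[3] for r in regions)   # any staining
    strong = sum(r[2] + r[3] for r in regions)            # high intensity
    return {
        "pct_positive": 100.0 * positive / total,
        "pct_strong": 100.0 * strong / total,
    }

# Hypothetical counts from three 50,000 um^2 regions of one sample
regions = [
    {0: 40, 1: 30, 2: 20, 3: 10},
    {0: 50, 1: 25, 2: 15, 3: 10},
    {0: 45, 1: 30, 2: 15, 3: 10},
]
summary = summarize_ihc(regions)
```

These per-sample percentages are then the inputs to the unpaired t test comparing EBNA2+ and EBNA2− groups.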
We also analyzed PD-1 expression and found that it is generally expressed by infiltrating cells, such as tumor-infiltrating T lymphocytes (TILs) and macrophages, and not by the neoplastic cells. There was no correlation between the number of PD-1-positive infiltrating cells and PD-L1 expression by neoplastic cells (not shown). Suppl. Table 1 describes the details of the clinical samples.
Discussion
Viruses, being obligate parasites, are under constant pressure to survive in the face of strong host immune responses. To maintain a replicative advantage, they use multiple strategies to make themselves immunologically invisible, including downregulation of HLA class I and class II molecules, interference with peptide transport mechanisms, and inhibition of proteolysis [58]. In this regard, akin to many other viruses, EBV also employs several mechanisms to circumvent immune eradication and establish latency. EBV-positive DLBCLs are high PD-L1 expressors, and this is confirmed here [31]. However, which virally encoded proteins are delegated this task, and how they achieve it, has not been fully explored. To the best of our knowledge, this is the first report of how EBV, through its most critical transformation-associated protein, EBNA2, affects PD-L1 expression both in DLBCLs and BLs, by downregulating miR-34a through recruitment of EBF1 to its promoter. In the first ever use of a microfluidic chip for EBV-associated lymphoma growth in 3D, we further show that EBNA2-expressing DLBCLs are less immunogenic. Reconstitution of miR-34a in U2932 EBNA2 cells increased their immunogenicity, as seen by IFN-γ production in MLRs and by increased apoptosis, measured by caspase-3 expression, in tumor T-cell 3D cocultures.

Fig. 5 EBNA2 suppresses miR-34a transcription through EBF1. a Prediction of EBF1 binding motifs at the miR-34a promoter in human reference genome hg38, coordinates chr1:9181678-9182943, was performed with the JASPAR database and visualized with IGV. The EBNA2 peak overlaps with the second of the predicted EBF1 binding sites (highlighted in a green square) on the miR-34a promoter in dataset GSM2039170. b Knockdown of EBF1 was verified by Q-PCR in the U2932 cell line and U2932 EBNA2 cl1 transduced with pLK0.1 lentiviral vectors carrying shEBF1 and control shRNA. Upon EBF1 depletion (***p = 0.0004) in U2932 EBNA2 cl1, expression of mature miR-34a (**p = 0.0023) and pre-miR-34a (**p = 0.0024) was derepressed and consequently PD-L1 expression decreased (***p = 0.0008). Q-PCRs were performed in biological and technical triplicates for each sample. c miR-34a promoter activity was enhanced upon EBF1 knockdown in U2932 EBNA2 cl1 after 48 h. The luciferase assay was performed three times and each sample was in triplicate (*p = 0.0105). Statistical analysis was performed using an unpaired t test (Prism-7).
Most EBV-positive DLBCLs are non-GC type and high PD-L1 expressors, but it is not known whether EBV directly infects a non-GC DLBCL or whether it could actually turn a GC DLBCL into a relatively activated one. Our data showing strong upregulation of PD-L1 in two in vitro infected GC DLBCLs suggest that EBV indeed has the ability to turn a GC-derived DLBCL into at least a partially activated one. It is important to clarify here that U2932, often described in the literature as ABC type, is a cell line with high expression of BCL6, a hallmark of the GC phenotype [14]. Furthermore, a recent detailed classification study suggests that BCL6 is a critical marker of the GC DLBCL category [59]. Additionally, most ABC DLBCLs express PD-L1, whereas GC DLBCLs are often PD-L1 negative; U2932 is indeed PD-L1 negative. Based on this, we consider U2932 more of an intermediate-phenotype DLBCL. Patients with non-GC or activated DLBCLs have both poorer prognosis and lower overall survival [31][32][33][34]. Results from our clinical DLBCL samples suggest that EBV-positive non-GC DLBCLs have slightly higher PD-L1 expression than non-GC DLBCLs without the virus. A quantitative IHC image algorithm analysis on digitalized slides revealed that in EBNA2-positive ABC DLBCL samples both PD-L1 expression and staining intensity were higher. Clearly, the effect of EBNA2 alone on PD-L1 is impossible to determine in clinical samples, because an EBNA2-only latency does not occur in any EBV-associated tumor. Notwithstanding the small cohort, however, the data from clinical samples confirm the in vitro data. Overall, we suggest that the effect of EBNA2 on PD-L1 in clinical samples will have to be tested in a larger cohort. Nevertheless, the results are consistent with the suggestion that EBNA2-positive lymphomas may have a better therapeutic outcome with IC blockers.
MiR-34a belongs to the group of tumor suppressor miRNAs and, accordingly, it is frequently downregulated in a wide variety of cancers [60]. In keeping with this, its expression is often reduced in ABC-type DLBCL cell lines and tumor tissues [61]. Overall survival of patients with low miR-34a is poorer, and overexpression of miR-34a in ABC DLBCL lines makes them responsive to doxorubicin treatment [61]. Our observation that EBNA2 downregulates miR-34a is consistent with the reported lower expression of miR-34a in ABC DLBCLs and with doxorubicin resistance [61]. Indeed, in Lat III ABC DLBCLs, EBNA2 might contribute to chemoresistance and poor prognosis by downregulating miR-34a. Additionally, Craig et al. have shown that intravenous miR-34a treatment of mice with U2932 DLBCL xenografts suppresses tumor growth, underpinning its therapeutic utility [62]. Among its noted targets is the oncogene FOXP1 [63]. Interestingly, in AML, miR-34a targets PD-L1 [29,64]. We now show that EBV, through its growth transformation-associated protein EBNA2, increases PD-L1 by downregulating miR-34a. Furthermore, in the presence of EBNA2, pre-miR-34a levels and miR-34a promoter activity are reduced, suggesting that EBNA2 affects miR-34a transcription.
We found that miR-34a downregulation by EBNA2 likely involves recruitment of EBF1 to the miR-34a promoter. Glaser et al. have recently shown that EBF1 interacts with the N-terminal portion of EBNA2 in a B-cell-specific manner and that this interaction promotes EBNA2 access to chromatin without involving RBPJk, a known EBNA2-DNA anchor [19]. Our analysis of EBNA2 ChIP-Seq datasets from the GEO database (accession number: GSM2039170) revealed that EBNA2 peaks at the miR-34a promoter. Furthermore, the data showing the importance of EBF1 in miR-34a regulation by EBNA2 are consistent with the previous suggestion that EBNA2 and EBF1 colocalize at EBNA2 peaks [19]. Recently it was also shown that Ten-Eleven Translocation 2 (TET2) is highly expressed in latency III (EBNA2+) BLs and ABC DLBCLs [65]. Interestingly, EBNA2 colocalizes with both EBF1 and TET2 [16,66]. From our data, the role of EBF1 in negative regulation of miR-34a is evident, but the possibility that EBNA2 could influence PD-L1 by affecting TET2 needs further investigation. Overall, our data support the notion that EBNA2/EBF1 involvement in miR-34a regulation can be therapeutically harnessed for DLBCL, and particularly for drug-resistant cases.

Fig. 6 miR-34a relieves the suppression of immunogenicity induced by EBNA2, as measured in MLR and 3D biomimetic microfluidic coculture devices. a T cells were activated in plates coated with anti-CD3/anti-CD28 antibodies for 72 h. Irradiated targets U2932 MPA vector and U2932 EBNA2 cl-1 were cocultivated with activated T cells (effector). The effector-target ratio was 1:10. The target cells were transfected with the mimic control or the miR-34a mimic 24 h prior to cocultivation with the effector cells. The coculture was carried out for 48 h and the cells were stained for CD4/CD8 and IFN-γ and processed for flow cytometry. Data are expressed as mean ± SD. The p values for T-cell activation without stimulators are (*) p ≤ 0.05 for CD8 and CD4. In MLR with U2932 stimulators, the statistical significance is (*) p = 0.028 for CD8 and (**) p = 0.0081 for CD4. Three different experiments were performed with PBMCs isolated from three different donors. b Three-dimensional biomimetic microfluidic coculture devices: four million/ml U2932 EBNA2 cells transduced with lentiviral vector controls were introduced into the microfluidic devices. In coculture experiments (right panel in b), devices were first seeded with U2932 EBNA2 cells and incubated for 24 h at 37°C, followed by seeding of activated T cells. Immunostaining was performed after 48 h of cocultivation. bi Representative confocal images of U2932 EBNA2 cl-1 transduced with the GFP-lentiviral vector control in the absence or presence of activated T cells. bii Four million/ml U2932 EBNA2 cl-1, transduced with the miR-34a lentivirus, were introduced into the collagen/fibronectin devices either alone (left panel) or cocultivated with previously activated T cells (right panel). The cocultivation of the target, miR-34a-transduced U2932 EBNA2 cells, with activated T cells was carried out for 48 h before immunostaining. miR-34a lentivirus-transduced U2932 EBNA2 cl-1 cells were stained with anti-GFP antibody (green), activated CD8/CD4 T cells were stained with anti-CD8 and anti-CD4 (magenta), apoptotic U2932 EBNA2 cl-1 cells were visualized with anti-caspase-3 antibody (red), and nuclei were counterstained with DAPI (blue). The overlap of miR-34a-transduced U2932 EBNA2 cells (GFP positive) and caspase-3 (red) indicates tumor cell death (merged images, yellow). Scale bar = 100 μm; insets, scale bar = 20 μm. c Arbitrary caspase-3 units were calculated for each experimental condition. Statistically significant numbers of caspase-3-positive cells were observed only in miR-34a-transduced EBNA2 expressors. Data expressed as mean ± SEM; (****) p < 0.0001. N = 4 fields (3-4 devices per experimental condition). Shown is one representative experiment out of four performed. Statistical analysis was performed with Prism-7 software using an unpaired t test.
As mentioned earlier, EBNA2 is the main driver of B-cell transformation induced by EBV. To this end, it is noteworthy that c-MYC is directly upregulated by EBNA2 [12]. Additionally, EBNA2 is a functional homolog of activated Notch [15]. Both c-MYC and activated Notch are known for their oncogenic properties, and, most interestingly, both are miR-34a targets [67,68]. Based on our data, we surmise that EBNA2 may not only be the functional homolog of Notch but may indeed help keep Notch expression up through downregulation of miR-34a. Casey et al. have recently shown that c-MYC can induce PD-L1 expression [69]. Further studies will be required to understand whether EBNA2, by downregulating miR-34a, increases c-MYC, which in turn may upregulate PD-L1. At present, it is not known whether activated Notch, like c-MYC, has any effect on PD-L1 expression. Based on our data, this exciting possibility needs further investigation.
Increased tumorigenicity is often combined with poor immunogenicity in cancer. Thus, the double-edged sword-like function of EBNA2, downregulating miR-34a through EBF1 and consequently upregulating PD-L1, adds to the long list of its oncogenic attributes. To argue against its relevance because EBNA2 expression is a rarity in lymphomas would be fallacious, particularly if the wider implications of our findings are considered. EBV-induced immunoblastomas of immunocompromised patients, such as those arising in AIDS and after transplantation, are EBNA2 expressors. A significant proportion of cases within EBV-positive ABC DLBCLs are also EBNA2 positive. The viral gene expression pattern in these tumors resembles that of in vitro transformed LCLs, and cellular proliferation in both these cell types is indeed EBNA2-driven. Clearly, in patients with compromised T-cell immune responses, therapeutic approaches such as inactivation of EBNA2 by CRISPR-Cas9 gene editing and/or therapeutic introduction of miR-34a mimics will have to be considered.
The 3D biomimetic microfluidic devices, described here for the first time for testing the immunogenicity of lymphoma cells, provide a quick and economically viable alternative to more expensive and cumbersome humanized-mouse-based approaches for human-tropic viruses like EBV. In addition, these devices might prove useful for testing the efficacy of combinatorial immunotherapy agents in lieu of humanized mice.
In conclusion, the identification of EBNA2 as a lead player in tampering with immunogenicity of EBV-infected cells by altering PD-L1 and miR-34a opens up several new RNA-aided immunotherapy avenues to explore. We propose a combinatorial delivery of antibodies and miR-34a to silence PD-L1 both from within and without the cell to maximize chances of a successful and potent therapy to benefit immunocompetent patients with EBV-associated cancers, but such an approach might have wider implications for other cancers as well.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Fig. 7 PD-L1 expression in DLBCL clinical tissues. a Three non-GC DLBCL patient samples (out of a total of 21), representing the three ABC DLBCL categories, stained for PD-L1 are shown. Paraffin sections were immunostained for PD-L1 using an automated immunostainer (DAKO, Glostrup, Denmark). As control for PD-L1 immunostaining, sections from paraffin-embedded human lung carcinoma were used. b The stained tissue sections were digitalized at ×40 magnification using the Aperio ScanScope. The percentage positivity was calculated by counting positive cells in three squared areas measuring 50,000 μm² from each clinical sample in (a) above. The number of positive cells was determined using the Aperio software IHC Membrane v1. This algorithm detects membrane staining for individual tumor cells in the selected regions and quantifies the intensity and completeness of the membrane staining. An unpaired t test was applied to demonstrate that the differences in % total PD-L1-positive cells and % cells with strong staining intensity were statistically significant, (*) p = 0.0125, (**) p = 0.0040.
A fully parametric toolbox for the simulation of single point incremental sheet forming process: Numerical feasibility and experimental validation
Single point incremental forming (SPIF) is a sheet metal forming process that allows manufacturing components without the development of complex tools, in comparison with the stamping process. This work is dedicated to the development of SPIF for microparts (shapes or details) and for thin metal sheets (less than 1 mm). This paper focuses on the definition of a numerical toolbox to simulate this process at these scales. The forming of a pyramidal shape 4 mm in height from a copper sheet with an initial thickness of 0.21 mm is considered. The complete methodology is proposed and numerical results are presented in terms of global geometry (shape and section profiles), thickness evolution and forming forces. Equivalent experimental tests are carried out to validate the numerical approach. Possible directions for further development are given at the end of this work.
Introduction
Single point incremental forming (SPIF) is interesting both industrially and scientifically. In the first case, sheet metal components can be manufactured without specific tools using a CNC milling machine. This kind of process can produce complex parts in small batches or as single parts. In particular, Jeswiet and Hagan [1] used SPIF as a rapid manufacturing process to produce custom-made parts. However, this advantage is limited by the important thinning of the sheet, the occurrence of defects and a long working time, as mentioned in the complete review by Jeswiet et al. [2] on asymmetric SPIF.
For scientists, incremental sheet forming (ISF) exhibits local effects and thus requires new characterization methods. Filice et al. [3] defined a stretching test to investigate material formability for straining conditions in the range between uni-axial and bi-axial stretching. From these tests, they suggested a forming limit diagram for an aluminum alloy that differs from the conventional one. Ham and Jeswiet [4] have built forming limit diagrams using a Box-Behnken design of experiments and the response surface method, based on the two following criteria: maximum forming angle and effective strains.
In Jeswiet et al. [5] and Duflou et al. [6], the formability is determined by considering the evolution of the forming forces during the process. This approach is based on force measurements during the production of a conical part. Typical curves are reported and the influences of some process parameters are revealed. Ambrogio et al. [7] used this approach to evaluate a ''spy variable'' for defining a correction strategy to prevent failure during the process.
Numerical simulations are very useful to develop manufacturing processes (feasibility, optimization). Therefore, numerical simulations based on the finite element method have been carried out for developing the ISF process. Henrard et al. [8][9][10] have performed studies to propose the best ways for numerical modeling to predict the process correctly. Malhotra et al. [11] have investigated the use of several material models and element formulations to simulate SPIF. From these studies, it has been shown that element formulations (solid elements), integration algorithms (transient dynamic explicit algorithm), material models and contact algorithms are the most influential parameters.
A second way to improve SPIF by virtual forming is based on toolpath strategy. Malhotra et al. [12] have proposed an automatic 3D helical path generator for the simulation of SPIF. Yamashita et al. [13] have conducted numerical simulation with different strategies and parameters on thin sheet.
The process duration is often much longer than for other forming processes. The same holds for the simulation time, and Robert et al. [14] have proposed numerical techniques to reduce it by modifying the classical resolution of the material behavior. In the study of Yamashita et al. [13], the simulation time is also reduced by modifying the value of the density, which is equivalent to the use of a mass scaling algorithm.
Recent studies focused on the development of incremental sheet forming for microparts or for thin sheets [15][16][17][18]. The first study was due to Saotome and Okamato [19], and consisted in the development of a specific SPIF device for realizing 3D microparts in a scanning electron microscope. The main goal of the present paper is the development of a parametric toolbox for the simulation of micro-SPIF. First, the considered material and the experiments used to validate the numerical approach are presented. Second, the fully parametric toolbox is defined and the numerical algorithms and methods used to simulate micro-SPIF are described. Finally, comparisons between numerical results and experimental data are discussed.
This paper shows the ability of the proposed numerical approach to predict SPIF process for manufacturing defect-free microparts.
Material
For this study, the selected material is a FPG copper sheet with an initial thickness of 210 µm and an initial hardness of 124 HV (known as the half-hard condition). Its composition has been studied by Gréban et al. [20] and is given in Table 1.
The initial grain size is 10 µm.
The FPG material is used for the realization of lead frames (metal supports of electronic components). This kind of low-alloy copper guarantees the high electrical conductivity required for the best performance of lead frames.
Experimental device
To validate the numerical simulations, experiments have been conducted. For this purpose, a dedicated apparatus has been designed and realized. This tooling is represented in Fig. 1a. It is composed of a fixed die support, a modular die, a fixed blank holder clamped with the die by screws, and the forming tool (ball-end tool).
The modular die makes it possible to define different shapes depending on the geometry of the part to be produced. In particular, it limits the undesired bending at the base of the part by specifying the nearest contour of the final part.
The forming process is performed using a CNC machine tool (3-axis milling machine). The modular device is linked to a 4-axis dynamometer to measure the forming forces during the process: three forces along the CNC axes and the torque along the z-axis. This assembly is bolted to the machine table as shown in Fig. 1b. The dynamometer acquisition frequency is set at 1 kHz. This approach has already been used in the case of conventional incremental sheet forming by Ambrogio et al. [7] and Duflou et al. [6].
The shape of the part
A pyramidal shape is proposed to investigate the single point incremental sheet forming of thin sheet and microparts. The geometry definition is illustrated in Fig. 2.
This shape has been chosen because there is alternation between the x and the y directions during the forming process. Therefore it will be possible to determine the influence of the tool position on the forming forces.
The geometrical parameters used in this study are given in Table 2. The tool radius is equal to the corner radius R.
In the following, it will be shown that the draft angle α is a discriminating parameter for choosing a forming strategy.
Forming strategies
Different strategies are possible to produce the part. For the pyramidal shape, two approaches are considered: the constant Z-level and helical paths (Fig. 3).
The first strategy follows the internal contour of the part resulting from the intersection of the shape with planes perpendicular to the spindle axis (i.e., parallel to the XY plane) located at different z-positions (Fig. 3a). Each outline is offset by a gap d2 in the z-direction and by a gap d1 in the x- and y-directions. The larger profile is formed at the beginning of the process, which ends with the smallest one. The parameters d1 and d2 are linked together by geometric relations. The second approach consists in performing the shape by a helical path (Fig. 3b). Starting from one corner of the pyramid, the tool moves on the theoretical shape by coupling the 3-axis displacements.

Table 2: Geometrical parameters of the pyramidal shape.
To ensure the spindle integrity, the tool rotates at a constant speed Ω and moves at a constant feed rate f.
To compare the two strategies, the helix angle φ and the d2-increment are related through Eq. (3) by considering the length Li equal to the basis length l1.
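The helical coupling of the 3-axis displacements described above can be sketched numerically. The square-spiral parametrization and the parameter values below are illustrative assumptions, not the paper's actual equations relating the helix angle, d2 and Li:

```python
import numpy as np

def helical_pyramid_path(l1, height, alpha_deg, d2, points_per_turn=400):
    """Illustrative helical tool path for a pyramidal SPIF part.

    l1: basis side length (mm), height: pyramid depth (mm),
    alpha_deg: assumed draft angle from the horizontal plane (deg),
    d2: z-increment per turn (mm). Returns an (N, 3) array of tool positions.
    """
    alpha = np.radians(alpha_deg)
    n_points = int(np.ceil(height / d2)) * points_per_turn
    path = []
    for k in range(n_points):
        t = k / points_per_turn                     # fractional number of turns
        z = -t * d2                                 # continuous descent (helical coupling)
        half = l1 / 2.0 - abs(z) / np.tan(alpha)    # half-side of the current square contour
        if half <= 0.0:
            break
        s = (t % 1.0) * 4.0                         # position along the perimeter, in sides
        side, u = int(s), s - int(s)
        if side == 0:
            x, y = half, -half + 2.0 * half * u
        elif side == 1:
            x, y = half - 2.0 * half * u, half
        elif side == 2:
            x, y = -half, half - 2.0 * half * u
        else:
            x, y = -half + 2.0 * half * u, -half
        path.append((x, y, z))
    return np.array(path)
```

With, say, l1 = 10 mm, a 60° wall angle and d2 = 0.1 mm, the tool descends continuously while following the shrinking square contour, in contrast with the discrete drops of the constant Z-level strategy.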
The different parameters used in this study are given in Table 3.
The experimental results for these two strategies are illustrated in Fig. 4.
As shown in Fig. 4, these geometrical and forming parameters, with an initial grain size of 10 µm, discriminate between the forming strategies. In the case of the helical path (Fig. 4a), a part that is safe in terms of geometry is obtained. For the equivalent constant Z-level strategy, cracks occurred before the end of the process and the desired part was not obtained. From experimental and numerical points of view, the influence of various material, geometrical and process parameters, such as the initial grain size (size effects), the forming strategies and the tool geometry, on the forming forces, defect prediction and shape accuracy will be investigated from these two tests.
Automatic mesh for parametric studies
One of the most important characteristics of the ISF process is the use of non-specific tools (die and forming tool). From the numerical point of view, this specificity enables building a parametric representation of the process. For that purpose, a dedicated toolbox, programmed in the MATLAB® language, has been developed:
-to create the parametric model (mesh, boundary, load and initial conditions, material behavior),
-to run the simulations (forming and springback) with the LS-DYNA® software [21],
-to post-process the results (deformed mesh display, springback and thickness measurements, forming forces control).

Table 3: Forming parameters of the pyramidal shape.
The complete loop is described in Fig. 5. This parametric toolbox also defines the experimental path in the CNC machine language by way of parametric paths (mathematical definition) or CAM files (APT language). Creating a unique file for the toolpath definition ensures that experiments and simulations are performed with exactly the same trajectories.
Experiments and simulations are then compared using a specific post-processing toolbox, also developed in the MATLAB® language. This last toolbox permits the calculation of specific results as well as the conduct of sensitivity analyses, identification and optimization procedures.
Geometrical discretization
The finite element method is chosen to simulate the process. Due to large shear deformations, the blank is meshed with solid elements. Fully integrated eight-node solid elements are used to get as much information as possible through the thickness, and three elements are considered through the thickness. For the present study, 120 elements are imposed along the length and the width to discretize the blank with a total of 43,200 solid elements.
Each tool (forming tool, die and blank holder) is meshed with rigid shell elements (quad elements). The element size of each tool is determined by considering the smallest value of the die radius, and five elements are imposed in this radius to ensure a good geometrical representation of the surfaces.
In order to decrease the simulation time, an initial study was carried out to limit the blank dimensions. As the deformations are localized at the contact between the tool and the sheet, the elastic field does not propagate through the entire blank, so the dimensions can be reduced by at least half of their initial values. As a consequence, the initial length and width are set to 17 mm.
In this model, the boundaries of the blank are considered to be clamped. It results that all the degrees of freedom of each node on the blank boundary are set to zero during the simulation.
The complete mesh for this study, obtained automatically from the toolbox, is presented in Fig. 6.
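The discretization choices above amount to simple bookkeeping, which can be sketched as follows (the counts and dimensions come from the text; the helper name is ours):

```python
def blank_mesh_stats(length_mm=17.0, width_mm=17.0, thickness_mm=0.21,
                     n_len=120, n_wid=120, n_thick=3):
    """Element count and sizes for the structured solid mesh of the blank."""
    return {
        # 120 x 120 in-plane elements, 3 through the thickness
        "n_elements": n_len * n_wid * n_thick,
        "in_plane_size_mm": (length_mm / n_len, width_mm / n_wid),
        "through_thickness_size_mm": thickness_mm / n_thick,
    }
```

This reproduces the 43,200 solid elements quoted above, with an in-plane element size of about 0.14 mm and a through-thickness size of 0.07 mm.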
Numerical algorithms and clustering
To perform robust and predictive simulations, it is crucial to choose the numerical algorithms and the associated numerical parameters carefully. The explicit algorithm is chosen for time integration owing to its robustness, provided that the stability condition is respected. The classical mass scaling algorithm commonly used to increase the time step is not applied in this study.
The choice of the contact algorithm is often critical in the simulation of this type of process. For this reason, the classical penalty method proposed in LS-DYNA® by Hallquist [22] is used. To ensure the stability condition and a good repartition of the nodal forces, the rigid parts are meshed with elements of size equivalent to those of the blank.
Virtual simulation time
The stability condition of the dynamic explicit algorithm limits the time step size. In principle, this type of algorithm is dedicated to short transient simulations. Nevertheless, it can be used to simulate highly nonlinear problems, including problems with multiple deformable bodies in contact. It is this last feature that leads to the implementation of an explicit approach to simulate incremental sheet forming.
However, the physical time cannot be used as the simulation time because the actual run time of the process is significant. In the case of the helical path, the experimental execution lasts 25 s, which is too long to simulate with an explicit approach. So it is necessary to find a virtual simulation time that leads to numerical results similar to the experimental ones. This adaptation must not generate kinetic energy, which must remain negligible compared to the internal energy; this condition alone, however, does not rule out dynamic effects.
As the material behavior is not time dependent, the simulation time can be chosen as small as possible, but dynamic effects with no physical meaning may then occur. Therefore, several simulations were conducted with different durations to find the smallest one which does not introduce numerical dynamic effects. Finally, a virtual simulation time of 0.2 s, equivalent to a tool feed rate of 1300 mm/s, has been selected.
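A rough Courant-type estimate illustrates why the virtual time matters for the explicit scheme. The element size and density come from the values given in this paper; E = 120 GPa is an assumed order of magnitude for copper, not the identified value:

```python
import math

def explicit_step_estimate(elem_size_m, youngs_modulus_pa, density_kg_m3,
                           virtual_time_s):
    """Rough 1D estimate of the stable explicit time step (element size over the
    dilatational wave speed) and the resulting number of increments."""
    wave_speed = math.sqrt(youngs_modulus_pa / density_kg_m3)
    dt = elem_size_m / wave_speed
    return dt, virtual_time_s / dt

# Through-thickness element size 0.07 mm, density 8500 kg/m^3 (from the paper);
# E = 120 GPa is an assumed value for copper.
dt, n_steps = explicit_step_estimate(0.07e-3, 120e9, 8500.0, 0.2)
```

Even with the compressed virtual time of 0.2 s, the stable step is on the order of 10⁻⁸ s, i.e., around ten million increments; simulating the full 25 s experiment would multiply this by 125.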
Due to the important size of the model and the stability condition, the simulation time is long. The massively parallel processing (MPP) release of LS-DYNA® is used with 20 processors to reduce the simulation time.
Behavior law
In relation with the studies of Touache et al. [23], the mechanical behavior of the copper alloy is described by an elastic-plastic law with isotropic hardening. This copper alloy exhibits isotropic elasticity and no strain rate sensitivity. According to these considerations, a behavior law is proposed in which λ̇ and φ are respectively the plastic multiplier and the potential function. The plastic multiplier must satisfy the Kuhn-Tucker conditions

λ̇ ≥ 0 and λ̇φ = 0 (12)

The internal variables R and p are respectively the isotropic hardening variable and the equivalent plastic strain. The fourth-order tensors C and P are the classical elastic operator and the deviatoric projector. The parameters σy, K and G are respectively the yield stress, the bulk modulus and the shear modulus. The set of material parameters was obtained by tensile tests and by ultrasonic tests; the identified parameters are summarized in Table 4. The density of this material is equal to 8500 kg·m⁻³. Coulomb's friction law is chosen to simulate the tribological behavior at the interfaces between the tools and the blank. Lubricant (oil and water) is used during the experiments, and only a static friction coefficient fs = 0.2 is considered in the simulations to account for the steel/copper contact. This choice derives from studies on the influence of friction on the forming force levels and will be explained further.

Table 4: Identified material parameters for FPG alloy.
Results and discussions
For the validation of the numerical model, comparisons between experiments and simulations were performed; only the helical strategy was considered, due to the premature failures observed with the constant Z-level strategy (Fig. 4b). To perform these comparisons, the temporal evolutions of the experimental and numerical results are synchronized with reference to the percentage of the forming cycle rather than the actual or virtual time.
Numerical forming
As mentioned above, simulations were performed with the MPP version of LS-DYNA® using 20 processors. The total computational time was 4 h. The mesh evolution as a function of the forming cycle is presented in Fig. 7.
The ability of such a finite element code to simulate the incremental forming of parts of small dimensions or low thickness is thus demonstrated. The study of the energy evolutions (total, kinetic, internal and contact energies) shows that the initial assumptions are valid: the kinetic energy is indeed negligible compared to the internal energy. The values are positive, which demonstrates that no numerical energy is introduced, and the sum of the energies gives the total energy. From these considerations, the simulations can be compared with the experiments.
From the final mesh corresponding to 100% of the process cycle, the maximum value of the equivalent plastic strain reaches over 240%.
Springback simulation
The geometrical comparisons may be carried out provided that the numerical and experimental stress states are the same. Springback prediction is done in a second simulation step by the use of an implicit algorithm (LS-DYNA® in implicit mode). This step is associated with the elastic stress relaxation in the part when the tools are withdrawn. The springback result is illustrated in Fig. 8.
Its influence on the final shape of the part appears as a global displacement of the pyramidal shape in the forming direction. This displacement of 0.06 mm on average can be considered a geometry defect. However, it can also provide an advantage: a switch in micro-electronics could be designed by using this geometrical defect as the constrained displacement of an equivalent spring.
Qualification of the numerical final shape
One of the most important criteria to validate the formed part is the validation of the overall geometry. This inspection is realized by comparing the numerical and experimental shapes. The real part, presented in Fig. 4a, was digitized with a non-contact 3D laser scanning system (Steintek Mobilescan 3D) in high-resolution mode. This optical method provides a high density of measured points (more than 100,000) with a resolution of less than 0.01 mm and an accuracy of 5 µm. The comparison with the numerical shape requires the extraction of the outer surface mesh. With the developed post-processing toolbox, this operation is realized automatically by extracting the outer surface quad mesh and converting it into a triangular mesh without loss of geometrical definition. The different meshes (solid mesh, extracted mesh and subdivided mesh) are represented in Fig. 9.
This operation is necessary to create a compatible mesh (STL format) for the inspection software (Geomagic Qualify 2012).
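The quad-to-triangle conversion mentioned above preserves the geometry exactly, since each planar quad is simply split along a diagonal. A minimal sketch (the diagonal choice is an assumption; the toolbox's actual splitting rule is not documented here):

```python
def quads_to_tris(quads):
    """Split each quad (a, b, c, d), given as vertex indices in cyclic order,
    into two triangles sharing the diagonal a-c. The vertex coordinates are
    untouched, so no geometrical definition is lost."""
    tris = []
    for a, b, c, d in quads:
        tris.extend([(a, b, c), (a, c, d)])
    return tris
```

The resulting triangle list maps directly onto the facet structure expected by the STL format.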
The experimental data (measured points) are then compared with this triangular mesh. The initial operation consists in fitting the two geometries in two steps. Firstly, a manual positioning of the experimental point cloud on the triangular mesh is done, taking advantage of the pyramidal shape: theoretically this shape possesses two planes of symmetry that permit an easier positioning of the two parts (this is more difficult in the case of axisymmetric parts). Secondly, a precise positioning is ensured by way of a best-fit method. Once these operations are performed, the geometric comparisons can be conducted. The different inspection results are shown in Fig. 10.
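The best-fit positioning step can be illustrated with the classical Kabsch/SVD least-squares rigid alignment. This is a generic sketch of the technique on matched point pairs; the inspection software's actual algorithm (which also handles point-to-surface matching) is not reproduced here:

```python
import numpy as np

def best_fit_rigid(P, Q):
    """Least-squares rigid transform (rotation R, translation t) such that
    R @ p + t best aligns each row p of P onto the corresponding row of Q,
    using the Kabsch/SVD method."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so that R is a proper rotation (det = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp
```

On noiseless matched data, the known transform is recovered exactly; on scanned data, it minimizes the sum of squared residual distances.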
The 3D comparison presented in Fig. 10a shows an excellent match between the experimental and virtual parts (an average difference of -0.031 mm for a standard deviation of 0.056 mm), except at the edges of the workpiece (±0.2 mm) and at the base of the pyramid (0.137 mm). These differences can be easily explained:
-the experimental part is quite flexible due to its thickness, so it is easy to deform the part away from the forming zone during handling;
-the value of the die radius and its position strongly influence the final shape;
-the actual die radius is not strictly identical to that used for the simulations.
In addition, this area is highly dependent on the springback effect, which is again a difficult parameter to control. The second comparison (Fig. 10b) is based on the local profiles of the pyramid obtained by cutting the 3D geometries with different planes parallel to a symmetry plane of the part (plane no. 7). All the cutting planes are separated by a distance of 0.4 mm. All the resulting profiles are plotted in their respective cutting planes in Fig. 10d: the black profiles are associated with the experimental measurements, while the gray ones with circle markers correspond to the numerical results. These profiles are also shown in Fig. 10c. These results confirm the excellent correlation between the geometries obtained by experiments and simulations, except at the base of the pyramid. No pillow effect, as suggested by Ambrogio et al. [24], is observed for the small dimensions considered in the present study.
Finally, some observations reveal the influence of the helical strategy. The numerical results presented in Fig. 11 show a mesh which is twisted about the symmetry axis parallel to the z-axis, denoted zc.
A significant rotation of the mesh is observed in the forming area, as presented in Fig. 11a, where one half of the mesh after the forming simulation is represented. A twist angle parameter denoted γ is introduced to quantify this distortion. This angle can be evaluated by considering a node A of the mesh in its original and deformed positions. The isovalues of the twist angle are shown in Fig. 11b, where the maximum value is 25.7°. The locations of maximum twist angle lie in areas where the tool undergoes a change of direction, and this area increases as the forming zone decreases.
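Evaluating the twist angle of a node from its original and deformed positions amounts to a signed in-plane angle about the vertical axis. A minimal illustration (the projection onto the XY plane is an assumption about the toolbox's post-processing):

```python
import numpy as np

def twist_angle_deg(p0, p1, axis_point=(0.0, 0.0)):
    """Signed twist angle (degrees) of a node about a vertical axis: the angle
    between the XY-plane projections of its original (p0) and deformed (p1)
    positions, measured from the axis location."""
    a = np.asarray(p0[:2], dtype=float) - np.asarray(axis_point)
    b = np.asarray(p1[:2], dtype=float) - np.asarray(axis_point)
    # atan2 of the 2D cross and dot products gives the signed angle a -> b
    ang = np.arctan2(a[0] * b[1] - a[1] * b[0], a.dot(b))
    return np.degrees(ang)
```

A node that merely descends without rotating about the axis yields a zero twist angle, while a quarter-turn about the axis yields 90°.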
Thickness evolution
To complete the validation of the model, local comparisons with the experiments were carried out. The most important disadvantage of the incremental sheet forming process is the significant thinning of the sheet. In this section, the thickness is evaluated and locally compared with the experiments. The thickness is calculated by measuring the normal distance between the inner and outer surfaces of the sheet over the whole part. The thickness distribution is plotted in Fig. 12b in a projection view defined in Fig. 12a.
The thickness distribution is very similar to that of the twist angle presented in Fig. 11b. This result shows that the twisting phenomenon, and implicitly the forming strategy, influences the thickness distribution. The minimum value of the thickness is 0.07 mm, which corresponds to a thinning of 66%.
For validation, the experimental part has been cut into two symmetric parts along the section plane defined in Fig. 12a. The cut was performed by the wire electro-discharge machining (WEDM) process to avoid introducing any additional mechanical stresses into the specimen. Numerical and experimental results are given in Fig. 13.
In Fig. 13a, the numerical model section is compared with the experimental one. The global shape from the numerical simulation is close to the experiments, and the thickness evolutions plotted in Fig. 13b confirm these observations. Experimental data are obtained by optical metrology and image processing. The 2D numerical profile of the cut part is extracted by means of a pedestrian alpha-shape extraction algorithm [25]. The thickness evolution presented in Fig. 13b shows a good correlation between experiments and simulations. The numerical evolution follows the sine law reported by Reagan and Smith [26] and Jeswiet et al. [2], and also discussed by Jackson and Allwood [27].
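The sine law referred to above predicts the wall thickness from the initial thickness and the wall angle. A one-line sketch (the 60° wall angle in the usage comment is illustrative, not the paper's measured angle):

```python
import math

def sine_law_thickness(t0_mm, wall_angle_deg):
    """Sine law for SPIF wall thinning: t = t0 * sin(90 deg - alpha), with alpha
    the wall angle measured from the horizontal plane."""
    return t0_mm * math.sin(math.radians(90.0 - wall_angle_deg))

# For the 0.21 mm blank and an assumed 60 deg wall, the law predicts ~0.105 mm,
# i.e., 50% thinning; the 0.07 mm minimum reported above is a local extreme.
```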
Forming forces
The last comparison is based on the validation of the mechanical behavior and the friction laws. It is possible to investigate their effects by observing the evolution of the forming forces during the process as proposed by Szekeres et al. [28]. With these considerations, the forming forces in the three directions (x, y and z) are given in Fig. 14.
The results show an excellent prediction of the forming forces. Variations and levels are close to the experiments. In the case of the z-axial force, a slight difference is observed after 30% of the cycle time: the experimental force is lower than its numerical counterpart by about 15%. A possible hypothesis is related to a softening phenomenon. The forming process results in large deformations (with local plastic strains of more than 243%), and softening may be due, for example, to damage [29]. As a consequence, damage modeling is a way to improve the description of the material behavior and to predict risks of cracks. The experimental results with the constant Z-level strategy can be used to perform the complete identification of a softening model and of the fracture evolution.
Another assumption may be the influence of the material microstructure (grain size) on the material behavior. The influence of scale effects [30] can be described by the Hall-Petch relation [31,32], which defines the evolution of the yield stress with the grain size. This may explain the observed changes in the evolution of the forming forces. The resulting forming forces are nonetheless consistent with the experimental results to a first approximation.
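The Hall-Petch relation mentioned above can be written as a short worked formula. The coefficients below are illustrative placeholders, not identified values for the FPG copper:

```python
import math

def hall_petch(sigma0_mpa, k_mpa_sqrt_um, grain_size_um):
    """Hall-Petch relation: sigma_y = sigma_0 + k / sqrt(d), where d is the grain
    size. Smaller grains give a higher yield stress, which is why the 10 um grain
    size may shift the forming force levels."""
    return sigma0_mpa + k_mpa_sqrt_um / math.sqrt(grain_size_um)
```

For instance, halving the grain size raises the grain-boundary strengthening term by a factor of √2, whichever coefficients are used.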
Monitoring the forces during the incremental forming process could allow the identification of the material and friction behavior, as both directly influence the force levels. This can also be a method to detect the occurrence of defects. Mention may be made of the use of SPIF for the identification of Gurson's damage model [33,29] and of failure definitions in order to predict the occurrence of defects during simulations [34][35][36].
Conclusions and perspectives
This research focuses on the development of the single point incremental sheet forming process and its numerical modeling. A complete numerical toolbox has been developed to perform fully parametric simulations with a finite element software. The toolbox includes a post-processing tool to analyze the results and to conduct comparisons with experimental data. These comparisons lead to the following conclusions:
1. Comparisons made on the overall geometry of the workpiece reveal very good agreement between the part obtained by simulation and that obtained during the experiments. Only the areas at the base of the pyramid show significant differences. These deviations are sensitive to the position and value of the die radius. Despite these differences, the comparisons conducted at the section level are very convincing.
2. The prediction of the thickness distribution is close to that obtained on the real part.
3. The forming forces obtained by numerical simulation show good correlation with the measured values. However, there was a slight underestimation of the axial force during thinning, assumed to result from the influence of grain size and softening on the material behavior.
The developed approach has demonstrated its ability to efficiently predict the forming of small, thin parts by the ISF process. In terms of perspectives, the presented methodology can be extended to introduce size effects into the material and friction laws by adjusting either the grain size or the dimensions of the test (specimen and tools). The helical strategy is well suited for identifying the friction and material behavior laws. As the constant Z-level strategy leads to fracture, it seems appropriate to use it to identify failure models. As a result, the SPIF process may serve as a specific characterization test for material and friction under complex loading.
|
v3-fos-license
|
2022-05-20T15:12:25.795Z
|
2022-05-01T00:00:00.000
|
248908326
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1420-3049/27/10/3211/pdf?version=1652796872",
"pdf_hash": "b0abef22553b569e1ef4bfb08513f1ba44c0bacd",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:907",
"s2fieldsofstudy": [
"Chemistry"
],
"sha1": "449aa5eb5cd3d9ba0c9a8267c1fdd008e4577a95",
"year": 2022
}
|
pes2o/s2orc
|
Impact of the Hydrolysis and Methanolysis of Bidesmosidic Chenopodium quinoa Saponins on Their Hemolytic Activity
Saponins are specific metabolites abundantly present in plants and several marine animals. Their high cytotoxicity is associated with their membranolytic properties, i.e., their propensity to disrupt cell membranes upon incorporation. As such, saponins are highly attractive for numerous applications, provided the relation between their molecular structures and their biological activities is understood at the molecular level. In the present investigation, we focused on the bidesmosidic saponins extracted from the quinoa husk, whose saccharidic chains are appended on the aglycone via two different linkages, a glycosidic bond, and an ester function. The later position is sensitive to chemical modifications, such as hydrolysis and methanolysis. We prepared and characterized three sets of saponins using mass spectrometry: (i) bidesmosidic saponins directly extracted from the ground husk, (ii) monodesmosidic saponins with a carboxylic acid group, and (iii) monodesmosidic saponins with a methyl ester function. The impact of the structural modifications on the membranolytic activity of the saponins was assayed based on the determination of their hemolytic activity. The natural bidesmosidic saponins do not present any hemolytic activity even at the highest tested concentration (500 µg·mL−1). Hydrolyzed saponins already degrade erythrocytes at 20 µg·mL−1, whereas 100 µg·mL−1 of transesterified saponins is needed to induce detectable activity. The observation that monodesmosidic saponins, hydrolyzed or transesterified, are much more active against erythrocytes than the bidesmosidic ones confirms that bidesmosidic saponins are likely to be the dormant form of saponins in plants. Additionally, the observation that negatively charged saponins, i.e., the hydrolyzed ones, are more hemolytic than the neutral ones could be related to the red blood cell membrane structure.
Introduction
For many years, molecules of natural origin have been a research topic of interest, due to their structural diversity and complexity, but also for their biological properties, which can be of major industrial interest if correctly understood and mastered. Within these numerous classes of biomolecules, specific metabolites, such as alkaloids, flavonoids, and saponins, are a hot research topic due to their specific interactions with living organisms [1][2][3]. Among these specific metabolites, saponins have been demonstrated to fulfill defensive roles, intervene in inter- and intra-species communications, or even play a role in reproduction processes [4][5][6][7][8][9][10]. These molecules are abundantly present in plants [11], and are also present in a diversity of marine animals, like sponges and echinoderms [12,13]. Saponins present a specific structural identity consisting of the association between an apolar aglycone and one or more (linear or branched) glycans. Monodesmosidic and bidesmosidic saponins are respectively constituted by a single or two saccharidic chains anchored on a single aglycone [14]. Diverse specific chemical functions, such as sulfate groups [15], free carboxylic acid (−COOH) [16], esterified acetic, tiglic or angelic acids [17,18], and many others [19,20], are also often present on saponins and modulate the saponin biological activities [21][22][23][24][25]. The membranolytic activity of saponins, i.e., their propensity to disrupt the cell membrane upon interaction with membrane sterols, represents one of their most interesting properties for pharmaceutical applications [26][27][28][29][30].
Computational chemistry studies have recently made it possible to visualize the saponin/membrane interaction at the molecular level and represent a promising tool for identifying structural moieties responsible for the activity [31][32][33][34] on the way to the establishment of the Structure-Activity Relationship (SAR) [26]. From an experimental point of view, selective and specific modification of chemical functions using organic chemistry methods represents an elegant method for evaluating their contribution to membranolytic activity [35,36]. As a typical example, there is a general agreement that bidesmosidic saponins are less cytotoxic than monodesmosidic ones [28]. In this context, we recently successfully converted the bidesmosidic saponins extracted from the husk of the Chilean Chenopodium quinoa Willd. (1798) into their monodesmosidic ones [37][38][39] upon specific microwave-assisted hydrolysis of the ester bond at C28 (see Figure 1). The cytotoxicity of the hydrolyzed saponins was shown to be significantly enhanced with regard to the natural bidesmosidic saponins [40]. More recently, we investigated the importance of the sulfate function as a cytotoxicity vector for saponins contained in the viscera of the Malagasy sea cucumber Holothuria scabra [41]. Under microwave activation, the sulfated saponins were quantitatively converted into their desulfated counterparts, and the comparison of the hemolytic activities (HA) of both sets of saponins revealed that the sulfate group was mandatory for the membranolytic activity [41]. Several similar studies have been reported in the literature, i.e., esterification of tea saponins [42], amide group derivatization of β-hederin [43], and selective modification of the glycan or the aglycone of chlorogenin-type saponins [44]. We strongly believe that these combined efforts will contribute to the understanding of the cytotoxicity of saponins at the molecular level.
In the present study, we re-examined the cytotoxicity of the saponins found in the husk of the quinoa seeds [45]. Our motivation comes from the fact that, even if quinoa seeds are known for their very high nutritional value (rich in protein (~20%) and antioxidant compounds) [45] and for the ease of cultivation of the plant in almost any conditions [37,38], the husk, which represents approximately 10% of the weight of the seed, is currently discarded due to its large concentration of saponins. It could instead be used as a source of value-added products with applications in pharmacy, agriculture, and foods, which is in line with the Circular Economy policies promoted by the EU, provided the biological properties of the natural molecules and their easily accessible derivatives can be fully identified.
In our previous study [40], we demonstrated that monodesmosidic saponins, such as Saponin Ohydro produced from Saponin O, shown in Figure 1, are more cytotoxic than the natural bidesmosidic saponins. The cytotoxicity of saponins is often associated with their amphiphilic nature, which makes their association with cell membranes favorable [28]. We thus suspected that the neutralization of the carboxylate group present at C28 on the hydrolyzed saponins should enhance their cytotoxicity. Here, we report on the impact of the transesterification, using potassium methanolate in methanol, of the bidesmosidic saponins extracted from quinoa husk on their cytotoxicity. To achieve this objective, we compare the hemolytic activities of three different fully characterized samples: (i) natural saponins extracted from the quinoa husk, (ii) C-28 hydrolyzed saponins, and (iii) C-28 transesterified saponins. All the samples are qualitatively and quantitatively characterized using mass spectrometry methods, with the support of literature data [37,39,40].
Saponin Identification and Quantification in the Natural Extract (NE)
The characterization of the saponins contained in the NE is achieved using the mass spectrometry (MS) protocol developed in our laboratory [46], combining MALDI-MS, accurate mass measurements (HRMS) and LC-MS(MS) experiments. The saponin identification is based on the reference studies by Madl et al. [37], Kuljanabhagavad et al. [39] and Colson et al. [40]. The quinoa saponins are bidesmosidic (C3 and C28) triterpenoid saponins and have the particularity of possessing a single glucose residue at C28 [37,39], see Figure 2. Their structural differences arise from (i) the number and the nature (glucose-Glu, galactose-Gal, arabinose-Ara, xylose-Xyl, glucuronic acid-GlcA) of the saccharide units composing the C3-attached glycan, and from (ii) the structure of the triterpene aglycone (oleanic acid-OA, hederagenin-Hed, AG489, AG487, serjanic acid-SA, phytolaccagenic acid-PA, sapogenin I-SGI, sapogenin II-SGII) [37,39], see also Figure 2.
Figure 2 (caption, partial): […] Table 1. R2 and R3 functions are specific to the aglycone moiety as shown in the presented aglycones. The C28-glucose is highlighted in red since this residue will be involved in the chemical modifications targeted, i.e., hydrolysis and methanolysis.

Table 1. Chenopodium quinoa husk extract: data collected by MS-based experiments. The compositions and mass error measurements (∆) were determined by MALDI-HRMS. (a) The saponins were identified based on liquid chromatography (LC-MS) and collision-induced dissociation experiments (LC-MSMS). (b) The saponin ions detected between m/z 789 and 863 are [2 + 0] fragment ions generated during the MALDI ionization from the [2 + 1] saponins.
The %-weights in extracts and the mass fractions (mg·g−1 of Chenopodium quinoa husk powder) were determined based on the LC ion signal intensity ratios, with Hederacoside C as an internal standard, and using the gravimetric extraction yield (15.5 mg·g−1). The molar proportions (%) were determined based on LC ion signal relative integration. See the "Materials and Methods" section for the details of all the quantitative analyses.

The saponin NE is obtained by methanol extraction of the ground husks, followed by successive liquid/liquid extractions, as described in the "Materials and Methods" section [47]. The yield of extraction is 310.06 mg per 20 g of ground husk, i.e., 15.5 mg·g−1.
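As a quick sanity check, the reported gravimetric yield follows directly from the recovered mass and the husk input (a minimal sketch; both values are taken from the text):

```python
# Gravimetric extraction yield reported in the text:
# 310.06 mg of purified extract recovered from 20 g of ground husk.
extract_mg = 310.06
husk_g = 20.0

yield_mg_per_g = extract_mg / husk_g
assert round(yield_mg_per_g, 1) == 15.5  # matches the reported 15.5 mg·g−1
```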
Keeping in mind the literature data [37,39,40] revealing that quinoa husk saponins are bidesmosidic three- and four-sugar saponins, the NE is first qualitatively and quantitatively analyzed by mass spectrometry, and all the data are presented in Table 1. The MALDI-MS(+) mass spectrum presents three groups of m/z signals, see Figure 3a. These signals are ascribed to sodium-cationized saponins, [M + Na]+ [37,39,40]. The presence of monodesmosidic and bidesmosidic saponins must be considered a priori, and these saponins will be identified as [x + y], where x and y stand for the number of monosaccharide residues at C3 and C28, respectively. Please note that monodesmosidic saponins are not expected in the NE based on literature data [37,39,40], but we previously showed that monodesmosidic saponin ions may be generated during the MALDI-MS analysis [40]. The first group of saponin ions (m/z 1113-1157) corresponds to four-saccharide saponin ions, the second group (m/z 951-1025) to three-saccharide saponin ions, and the third group (m/z 789-863) to unexpected two-saccharide saponin ions. In the MALDI-MS spectrum presented in Figure 3a, we therefore assign the [2 + 1] and [3 + 0] topologies to the m/z 951-1025 saponin ions, whereas the m/z 1113-1157 ions are purely [3 + 1] ions and the m/z 789-863 ions are [2 + 0] fragment ions, as shown in the literature [40] and confirmed below using LC-MS analysis. Let us again emphasize that, when a saponin extract is exposed to mass spectrometry analysis, depending on the selected ionization method, either Electrospray or MALDI, fragment ions may be generated. This is the case here, as demonstrated in [40], for the bidesmosidic saponins extracted from the quinoa husk, which suffer an ester bond dissociation under MALDI conditions.
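The spacing between the three groups of [M + Na]+ signals can be rationalized with nominal monosaccharide residue masses (a hedged sketch; the residue masses are standard values, not given explicitly in the text):

```python
# Nominal (integer) monosaccharide residue masses commonly used in saponin MS annotation
HEXOSE = 162   # Glc, Gal
PENTOSE = 132  # Ara, Xyl

# Lowest-mass member of each [M + Na]+ group in the MALDI spectrum (Figure 3a)
four_sugar, three_sugar, two_sugar = 1113, 951, 789

# Consecutive groups differ by exactly one hexose residue
assert four_sugar - three_sugar == HEXOSE
assert three_sugar - two_sugar == HEXOSE
# Within a group, exchanging a hexose for a pentose shifts m/z by 30 u
assert HEXOSE - PENTOSE == 30
```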
All the assignments are gathered in Table 1. Please note that, in Figure 3a, only the most intense signals are assigned for readability reasons. LC-MS and LC-MSMS analyses are further mandatory to (i) confirm that the detected ions are saponin ions, (ii) discriminate between monodesmosidic and bidesmosidic saponin ions, (iii) identify potential isomers, and (iv) determine the glycan sequence and the aglycone nature using collision-induced dissociation (CID) experiments. Our LC analysis confirmed that the NE exclusively contains bidesmosidic saponins with 12 different elemental compositions and no isomers (see Figures S1 and S2), and that the two-sugar saponin ions detected between m/z 789 and 863 in the MALDI spectrum in Figure 3a are indeed [2 + 0] fragment ions, see Table 1. We will pool all the other minor saponins (26% molar proportion) together according to their compositions, e.g., 3-sugar vs. 4-sugar saponins. Saponins G, 32 and 61 will accordingly be gathered as saponins X (~6% molar ratio) and saponins N, 4, Q, H, 19 and F as saponins Y (~20%). These data are presented as a sector diagram in Figure 4a for further comparison.
Using Hederacoside C, a commercially available saponin extracted from Hedera helix, as an internal standard, the saponin %-weights in the NE were determined for the 12 elemental compositions in Table 1. The three major saponins, namely Saponin O, Saponin B and Saponin I, represent respectively ~20%, ~30% and ~22% in weight of the dried extract, while the pooled saponins X and saponins Y represent ~6% and ~18%, leading to a saponin weight percentage of 95.91% in the extract, i.e., 95.91 mg of saponins per 100 mg of dry extract. The saponin %-weights in the extract were further converted into saponin mass fractions (mg·g−1) in the ground husk, using the extraction gravimetric yield previously determined at 15.5 mg of extract per g of ground husk. The three major saponins are present at ~3 (Saponin O), ~4.5 (Saponin B), and ~3.5 (Saponin I) mg per g of husk powder, while the minor saponins were estimated to be present at around ~0.9 (Saponins X) and ~2.8 (Saponins Y) mg·g−1 of husk powder, see Table 1.
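The conversion from %-weight in the extract to mass fraction in the husk is a single multiplication by the gravimetric yield; a minimal sketch reproducing the reported orders of magnitude (percentages approximated from the text):

```python
extraction_yield = 15.5  # mg of extract per g of ground husk

# Approximate %-weights in the dried extract, as reported in the text
pct_weight = {"Saponin O": 20, "Saponin B": 30, "Saponin I": 22}

# mass fraction (mg per g of husk powder) = %-weight × gravimetric yield
mass_fraction = {s: p / 100 * extraction_yield for s, p in pct_weight.items()}

assert round(mass_fraction["Saponin O"], 1) == 3.1   # reported ~3 mg·g−1
assert round(mass_fraction["Saponin B"], 2) == 4.65  # reported ~4.5 mg·g−1
assert round(mass_fraction["Saponin I"], 2) == 3.41  # reported ~3.5 mg·g−1
```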
Selective Hydrolysis and Transesterification of the Quinoa Husk Bidesmosidic Saponins at C28
The bidesmosidic saponins of the NE, see Figure 1, are first hydrolyzed under microwave activation to produce the monodesmosidic saponins bearing a carboxylate group at C28, generating the so-called hydrolyzed extract (HE). This reaction was previously developed in our laboratory [40], but, since we are conducting a comparative study, the intrinsic variability of the saponin natural extract makes it necessary to qualitatively and quantitatively characterize the hydrolysis products. Figure 3b presents the MALDI mass spectrum recorded after microwave-assisted hydrolysis and immediately confirms the success of the hydrolysis, since the bidesmosidic saponin ions are no longer detected. Upon collision-induced dissociation (Figure 5a; see also Figure S4), the m/z 973 Saponin B ions first expel the C-28 glucose residue to generate the fragment ions detected at m/z 811 that ultimately decompose to yield the aglycone ions detected at m/z 499. These m/z 811 ions also correspond, from the hydrolyzed extract, to the [M + H]+ ions of the C-28 hydrolyzed Saponin B. The CID spectrum of these m/z 811 ions is presented in Figure 5b, and a comparison of Figure 5a,b unambiguously confirms that the hydrolysis reaction is specific at the C-28 position, since all the fragment ions detected below m/z 811 are identical.
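The neutral-loss ladder described above can be checked arithmetically; losing the C-28 glucose corresponds to a nominal loss of 162 u (a sketch using a standard residue mass, not a value stated in the text):

```python
GLUCOSE_RESIDUE = 162  # nominal anhydroglucose residue mass (u)

saponin_b = 973                          # Saponin B precursor ions (Figure 5a)
hydrolyzed_b = saponin_b - GLUCOSE_RESIDUE
assert hydrolyzed_b == 811               # fragment ions / [M + H]+ of C-28 hydrolyzed Saponin B

aglycone = 499                           # aglycone ions at the end of the ladder
remaining_glycan = hydrolyzed_b - aglycone
assert remaining_glycan == 312           # mass lost as the C3-attached glycan afterwards
```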
The data are presented in Table 2, as well as in Figure 4, for quantitative analysis. The comparison of the sector diagrams built for the NE and the HE also confirms that the hydrolysis reaction is specific to the C28 ester bond, since the relative proportions between the different saponins are conserved upon hydrolysis. In other words, saponins O (20.1%), B (30.2%), I (23.8%), X (5.5%) and Y (19.9%) are quantitatively (~100% yield) converted into saponins Oh (19.9%), Bh (31.1%), Ih (24.0%), Xh (5.6%), and Yh (19.4%), see Figure 4.

Table 2. Microwave-assisted alkaline hydrolysis (pH 10, 150 °C, 5 min) of Chenopodium quinoa husk saponin extract: the elemental compositions of saponin ions are determined by MALDI-HRMS and the molar proportions (%) of the saponin ions are estimated based on the LC-MS signal relative integration.
As shown in Figure 1, the third set of saponins targeted for our comparative study is constituted by C28-esterified saponins. Two strategies can be borrowed from organic chemistry, corresponding to the direct esterification of the hydrolyzed saponins and the transesterification of the bidesmosidic saponins. All attempts, see Figure S6, to esterify the hydrolyzed saponins at the C28 position failed, and the C28 carboxylic acid/carboxylate moiety was systematically recovered after reaction [48,49]. We further tested several protocols, see Figure S7, for the transesterification of the natural bidesmosidic saponins [50]. As shown in Figure 1, potassium methanolate (MeOK, 1 M) in anhydrous methanol (MeOHanh) for 1 h at 60 °C efficiently produces the C28-methylated saponins, yielding the so-called Transesterified Extract (TE). Indeed, as shown in Figure 3c, the signals attributed to the bidesmosidic saponin ions can no longer be detected after MeOK treatment. The [M + Na]+ ions are now detected at 148 u (mass units) lower than the bidesmosidic saponin ions. This mass difference, confirmed upon HRMS measurements (see Table 3), corresponds to the formal substitution of a glucose residue by a methoxy group. Globally, the MALDI mass spectrum features two groups of ions that correspond to the C28-methylated three- and two-sugar saponin ions. Finally, the molar proportions of the different saponins remain largely unaffected upon transesterification, as shown in the graphical comparison in Figure 4, where saponins O (20.1%), B (30.2%), I (23.8%), X (5.5%) and Y (19.9%) are quantitatively converted into saponins Otr (21.1%), Btr (31.5%), Itr (22.9%), Xtr (6.2%), and Ytr (18.3%).
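The 148 u shift reported above is consistent with the formal replacement of the glucosyl ester group (−O−C6H11O5) by a methyl ester group (−O−CH3); a quick monoisotopic check (the atomic masses are standard values, not from the text):

```python
# Monoisotopic atomic masses (u)
C, H, O = 12.0, 1.00783, 15.9949

glucosyl = 6 * C + 11 * H + 5 * O  # C6H11O5, the glucosyl part of the C28 ester
methyl = C + 3 * H                 # CH3, the methyl part after transesterification

delta = glucosyl - methyl
assert round(delta) == 148  # matches the observed 148 u mass shift
```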
The hydrolysis and transesterification reactions performed on the bidesmosidic [3 + 1] and [2 + 1] saponins extracted from the quinoa husk were demonstrated to specifically occur at the C-28 ester function. We propose, in accordance with basic organic chemistry, that both processes involve a nucleophilic addition of HO− (basic hydrolysis) or CH3O− (transesterification) at the carbon atom of the C-28 ester function, followed by an elimination of the C-28 glucose as the leaving group, according to the general mechanism presented in Figure 6.
Hemolytic Activity (HA) Modulation
The membranolytic properties of NE, HE and TE are compared by determining their hemolytic activities (HA) as a standard method [29,35,47,[51][52][53][54]. HA is evaluated by determining the evolution of the hemoglobin release in solution when a suspension of red blood cells is exposed to increasing concentrations of the tested molecules. The hemoglobin release is measured by determining the solution absorbance at 540 nm [55]. We recently proposed the use of a referent saponin solution to make it possible to compare results from different groups [40,41], and we selected the highly hemolytic saponins extracted from Aesculus hippocastanum [56]. The HA are therefore expressed as a percentage of the activity of the standard solution (see Material and Methods).
The comparison of the HA of the three extracts, see Figure 7, undoubtedly demonstrates the impact of the chemical modifications on the HA. The data first confirm that (i) the bidesmosidic saponins present in the NE do not present any membranolytic activity against the red blood cells in the concentration range used, i.e., up to 500 µg·mL−1; and (ii) the monodesmosidic saponins present in the hydrolyzed extract are already active at a concentration around 20 µg·mL−1. The bidesmosidic saponins are also strongly activated upon transesterification, since HA is detected above 50 µg·mL−1, see Figure 7. This comparison also reveals that the hydrolyzed, negatively charged saponins are more membranolytic than the transesterified ones, which is at odds with our prediction based on their presumed relative amphiphilicities. The red blood cell membrane, being rich in N-acetyl-neuraminic acids, is globally negatively charged [57,58]. This permanent global negative charge is mandatory for preventing red blood cells from aggregating and also creates a high concentration of positive ions all around the red blood cells [57,58]. The greater activity of the hydrolyzed saponins, which exhibit a net negative charge, may be linked to this accumulation of positive charges around the red blood cells. We recently showed that the desulfation of the negatively charged sulfated saponins extracted from Holothuria scabra generates neutral saponins whose HA can no longer be detected [41].
Extraction
Mature achene integuments were obtained from pooled samples (Spring 2020) from the Quinoa Breeding Program of the Instituto Nacional de Investigación Agraria (INIA), Chile.
Seeds were then subjected to physical shearing and the kernels were discarded. The obtained husk powder (particle diameter < 1 mm) was sent to Belgium and kept away from light. The husk powder was placed under stirring overnight in methanol. The solution was then centrifuged at 4500× g for ten minutes (Sigma 2-16P, Sigma, Osterode am Harz, Germany). The supernatant was then collected, and the extract was diluted with water to reach a volume ratio of 70/30 (methanol/water). This methanolic extract was partitioned (v/v) with n-hexane, chloroform, and dichloromethane to remove apolar compounds. The aqueous phase was recovered after the third partition and evaporated under vacuum using a rotary evaporator (IKA RV 10, IKA, Staufen, Germany) in a water bath (80 rpm, 50 °C), and the residue was brought to a volume of 25 mL in order to carry out a fourth liquid/liquid extraction (v/v) with HPLC-grade isobutanol to recover the saponins in the organic phase. This phase was then washed twice with Milli-Q water to purify the extract from residual salts and impurities. The organic phase, containing the saponins, was evaporated under vacuum to obtain a purified powder.
Microwave-Assisted Alkaline Hydrolysis
The hydrolysis protocol was adapted from our previous study [40]. C. quinoa NE (3 mg) is solubilized in 3 mL of a buffer solution (pH 10: 50 mL of borax 0.025 mol·L−1 added to 18.3 mL of NaOH 0.1 mol·L−1, brought to 100 mL with Milli-Q water). The solution is heated at 150 °C for 5 min using a microwave device (Biotage Initiator Classic, Biotage Sweden, Uppsala, Sweden) and cooled to room temperature. The pH is brought to 7 using HCl 0.1 mol·L−1 and a liquid/liquid extraction is performed (v/v) with isobutanol. The organic phase is washed twice with Milli-Q water and evaporated under vacuum to obtain the saponins of the HE as a powder (57% yield, 97% conversion).
Methanolysis
The transesterification protocol was adapted from Chung et al. [50]. C. quinoa NE (100 mg) is placed overnight in a vial at 50 °C to remove residual water. The vial is then placed in a graphite bath (60 °C, under N2) and 15 mL of CH3OK (1 mol·L−1) in anhydrous methanol are added (stirring, 60 min). The solution is directly evaporated under vacuum and the dry extract is brought to a volume of 15 mL with isobutanol before undergoing two liquid/liquid extractions (v/v) with Milli-Q water to desalt the phase. The butanol phase is again evaporated to dryness under vacuum to recover the TE saponins as a powder (60% yield, 95% conversion).
Mass Spectrometry (MS) Analyses
The liquid chromatography analyses are performed with a Waters Acquity UPLC H-Class (Waters, Manchester, UK), composed of a vacuum degasser, a quaternary pump and an autosampler, coupled to a Waters Synapt G2-Si mass spectrometer (Waters, Manchester, UK). A non-polar column (Acquity UPLC BEH C18; 2.1 × 50 mm; 1.7 µm; Waters) is used at 40 °C. For these analyses, 0.1 mg of saponin extract is dissolved in 1 mL of a Milli-Q water/acetonitrile solution (85/15). A volume of 5 µL is injected into the column. The gradient is optimized for the compounds in this study and uses a flow rate of 250 µL·min−1 of Milli-Q water (with 0.1% formic acid (HCOOH), eluent A) and acetonitrile (CH3CN, eluent B). The mobile phase consists of an elution gradient starting with 85% of eluent A and 15% of eluent B, reaching 60% of eluent A and 40% of eluent B at 6 min, and maintained for 3 min. The ratio is then modified to reach 5% eluent A and 95% eluent B at 11 min, maintained for 1 min and, finally, brought back to 85% eluent A and 15% eluent B at 13 min. This ratio is maintained until the end of the chromatographic run (15 min). Electrospray ionization (ESI) in positive ion mode is used for the saponin ion production with typical conditions as follows: capillary voltage 3.1 kV, cone voltage 40 V, source offset 80 V, source temperature 120 °C and desolvation temperature 300 °C (dry nitrogen flow rate 500 L·h−1), for a mass range (quadrupole in rf-only mode) between m/z 50 and 2000 (1 s integration time). For the LC-MSMS experiments, precursor ions are mass-selected by the quadrupole and collided against argon (Ar) in the Trap cell of the TriWave device, and the kinetic energy in the laboratory frame (Elab) is selected to afford intense enough product ion signals. The fragment ions are mass-measured with the ToF analyzer.
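The gradient program described above can be summarized as breakpoints and checked for internal consistency (a sketch; the times and percentages are taken from the text):

```python
# (time in min, % eluent A, % eluent B) breakpoints of the LC gradient
gradient = [
    (0, 85, 15),   # start
    (6, 60, 40),   # end of the first linear ramp
    (9, 60, 40),   # hold 3 min
    (11, 5, 95),   # ramp to high organic
    (12, 5, 95),   # hold 1 min
    (13, 85, 15),  # back to initial conditions
    (15, 85, 15),  # re-equilibration until end of run
]

assert all(a + b == 100 for _, a, b in gradient)  # binary gradient sums to 100%
assert gradient[-1][0] == 15                      # 15 min total run time
```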
The relative quantification of saponins within the natural extract is achieved by adding a known quantity (0.1 mg·mL−1) of commercially available Hederacoside C (Sigma-Aldrich, product no. 97151, M-Clarity Program MQ100), a pure saponin from Hedera helix, as internal standard in a solution of saponin extract at a given concentration, typically 0.1 mg·mL−1. The spiked solution is analyzed by LC-MS (5 µL injection volume) using the experimental conditions described above. For each saponin molecule, including Hederacoside C, the corresponding LC-MS ion signals, including all the isotopic compositions, are integrated using the integration algorithm available under the MassLynx 4.1 software (Waters, Manchester, UK). The global ion counts are then used to estimate the concentration of each saponin relative to the Hederacoside C signal integration. The %-weights in extract (see Table 1) correspond to the mass percentages of saponins with a given elemental composition within the saponin extract. Please note that the sum of the %-weights does not reach 100%, making it possible to estimate the saponin content within the extract at 95.9%. The mass fractions in husk, expressed in mg·g−1 (see Table 1), are further calculated by using the global yield of extraction determined at 15.5 mg of extract per g of ground husk.
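The relative quantification step can be sketched as follows (the function name, the example counts, and the equal-ionization-response assumption are ours, not the authors'):

```python
def percent_weight(ion_counts, istd_counts, istd_conc, extract_conc):
    """Estimate the %-weight of each saponin in the extract from integrated
    LC-MS ion counts, relative to the internal standard (Hederacoside C),
    assuming an equal ionization response as a first-order approximation."""
    return {name: 100.0 * (counts / istd_counts) * istd_conc / extract_conc
            for name, counts in ion_counts.items()}

# Hypothetical ion counts; internal standard and extract both at 0.1 mg·mL−1,
# as in the protocol described in the text.
w = percent_weight({"Saponin B": 3.0e5}, istd_counts=1.0e6,
                   istd_conc=0.1, extract_conc=0.1)
assert round(w["Saponin B"], 6) == 30.0
```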
Hemolytic Activity Experiments
To measure the hemolytic activity, reflecting the membranolytic activity, bovine blood (stored with 0.11 M sodium citrate) was collected immediately after the death of the animal at the Abattoirs de Ath (22 Chemin des Peupliers, 7800 Ath, Belgium) on 10 April 2021. The bovine blood was then washed using a phosphate buffered saline (PBS) solution. This solution was prepared by dissolving 8 g of sodium chloride (NaCl), 1.45 g of sodium hydrogen phosphate dihydrate (Na2HPO4·2H2O), 0.2 g of potassium chloride (KCl) and 0.2 g of potassium dihydrogen phosphate (KH2PO4) in 800 mL of Milli-Q water. The pH of the solution was adjusted to 7.4 and the solution was brought to a volume of 1 L using Milli-Q water. In a 50 mL Falcon tube, 10 mL of citrated bovine blood were added to 40 mL of PBS solution. The tubes were then centrifuged for fifteen minutes at 10,000× g and the pellet was preserved. The washing was repeated until a clear and colorless supernatant was obtained. The supernatant was discarded and 2 mL of the pellet were diluted with 98 mL of PBS to obtain a 2% (v/v) erythrocyte suspension. At the same time, various solutions containing the extract of saponins at different concentrations were prepared. The latter were placed in the presence of the 2% erythrocyte suspension in triplicate and incubated for one hour at 20 °C, with continuous shaking (500 rpm), before being centrifuged again at 10,000× g for ten minutes. The supernatant of each sample was then collected to measure the absorbance of the solution (540 nm) [59]. In our assay, we systematically used a 500 µg·mL−1 solution of saponins extracted from Aesculus hippocastanum seeds as a reference solution, since the corresponding escins are highly membranolytic [56].
The HA of the tested saponin solutions were then calculated using the following equation:

HA (%) = 100 × (Abs_sample − Abs_blank) / (Abs_ref − Abs_blank)

where Abs_sample, Abs_blank, and Abs_ref, respectively, correspond to the absorbance (540 nm) of the tested erythrocytes/saponins solutions, of the erythrocyte solution, and of the erythrocyte/referent saponin solution.
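The calculation above can be implemented directly (a minimal sketch; the function name is ours):

```python
def hemolytic_activity(abs_sample, abs_blank, abs_ref):
    """Hemolytic activity (%) relative to the Aesculus hippocastanum
    reference solution, from absorbances measured at 540 nm."""
    return 100.0 * (abs_sample - abs_blank) / (abs_ref - abs_blank)

# A sample releasing half as much hemoglobin as the reference scores 50%
assert round(hemolytic_activity(0.5, 0.1, 0.9), 6) == 50.0
```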
Conclusions
The elucidation of the relationship between the biological activity of a family of molecules and their molecular structures makes it possible to explore the role that specific chemical moieties present on the active molecules play in their biological activities.
In the present investigation, we focused on the bidesmosidic saponins extracted from the C. quinoa husk, whose saccharidic chains are appended to the aglycone via two different linkages: a glycosidic bond at C3 and an ester function at C28. The C28 position is therefore sensitive to chemical modifications, such as hydrolysis and transesterification. We thus prepared three sets of saponins: (i) bidesmosidic saponins directly extracted and purified from the ground husk (NE-Natural Extract), (ii) monodesmosidic saponins with a carboxylic acid group at C28 (HE-Hydrolyzed Extract), and (iii) monodesmosidic saponins with a methyl ester function at C28 (TE-Transesterified Extract). The HE and TE saponins were prepared from the NE saponins by microwave-assisted alkaline hydrolysis and by transesterification with potassium methanolate in anhydrous methanol under inert atmosphere, respectively. Mass spectrometry experiments demonstrated that the hydrolysis and the transesterification are both highly specific to the C28 ester function and quantitative (~100% conversion). The impact of these structural modifications on the membranolytic activity of the natural saponins was then assayed on the basis of hemolytic activity measurements. The natural bidesmosidic saponins were confirmed to have no activity against erythrocytes, even at the highest tested concentration (500 µg·mL−1). The hydrolyzed saponins are already active against red blood cells at 20 µg·mL−1, whereas 50 µg·mL−1 of the transesterified saponins are necessary to induce a detectable hemoglobin release from the destroyed red blood cells. Globally, the observation that monodesmosidic saponins, whether hydrolyzed or transesterified, are much more active against erythrocytes than the bidesmosidic ones supports the view that bidesmosidic saponins are the dormant form of saponins in plants [60].
On the other hand, the observation that the negatively charged saponins, i.e., the hydrolyzed ones, are more hemolytic than the neutral ones could be related to the high concentration of positively charged ions in the vicinity of the negatively charged red blood cell membranes. We detected a similar effect with sulfated saponins (SO4−), which were shown to be no longer hemolytic upon desulfation [41]. These results, pointing to the role of the charged groups of saponins in their biological activity, should be addressed in the future for targeting specific applications.

Supplementary Materials: Figure S6: Direct esterification of monodesmosidic saponins (from the hydrolyzed extract-HE): unsuccessful attempts; invariably, the starting material is recovered after reaction. Figure S7: Methanolysis of the bidesmosidic saponins (from the natural extract-NE): all attempts under neutral/acidic conditions failed, and only the transesterification using MeOK in anhydrous methanol under inert atmosphere afforded the expected C28-methylated saponins.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Inactivation of NMD increases viability of sup45 nonsense mutants in Saccharomyces cerevisiae
Background The nonsense-mediated mRNA decay (NMD) pathway promotes the rapid degradation of mRNAs containing premature termination codons (PTCs). In the yeast Saccharomyces cerevisiae, the activity of the NMD pathway depends on the recognition of the PTC by the translational machinery. Translation termination factors eRF1 (Sup45) and eRF3 (Sup35) participate not only in the last step of protein synthesis but also in mRNA degradation and translation initiation via interaction with proteins such as Pab1, Upf1, Upf2 and Upf3. Results In this work we have used previously isolated sup45 mutants of S. cerevisiae to characterize degradation of aberrant mRNA under conditions where translation termination is impaired. We have sequenced the his7-1, lys9-A21 and trp1-289 alleles, which are frequently used for analysis of nonsense suppression. We have established that sup45 nonsense and missense mutations lead to accumulation of his7-1 mRNA and CYH2 pre-mRNA. Remarkably, deletion of the UPF1 gene suppresses some sup45 phenotypes. In particular, sup45-n upf1Δ double mutants were less temperature sensitive and more resistant to paromomycin than sup45 single mutants. In addition, deletion of either UPF2 or UPF3 restored viability of sup45-n double mutants. Conclusion This is the first demonstration that sup45 mutations not only change translation fidelity but also cause a change in mRNA stability.
Background
Two translation termination factors, eRF1 and eRF3, participate in termination of protein synthesis in eukaryotes (reviewed in [1]). In S. cerevisiae they are encoded by SUP45 and SUP35, respectively (reviewed in [2]). In eukaryotes, a single factor, eRF1 (Sup45 in yeast), decodes all three stop codons, while eRF3 (Sup35 in yeast) stimulates termination through a GTP-dependent mechanism by forming a complex with eRF1.
Eukaryotic cells possess a mechanism known as nonsense-mediated mRNA decay (NMD) to recognize and degrade mRNA molecules that contain a premature termination codon (PTC) (reviewed in [3][4][5]). The NMD process is mediated by the trans-acting factors Upf1, Upf2 and Upf3 [6][7][8][9][10][11], all of which directly interact with eRF3, while only Upf1 interacts with eRF1 [12,13]. Using in vitro competition experiments, it has been demonstrated that Upf2, Upf3 and eRF1 compete with each other for binding to eRF3 [13]. Deletion of any one of the three UPF genes selectively stabilizes mRNAs that are degraded by the NMD pathway without affecting other mRNAs [6,7,[9][10][11]. Genetic studies have shown that Upf1, Upf2, and Upf3 act as obligate partners in the NMD pathway; that is, NMD only occurs when all three components are present (reviewed in [14,15]). Mutations or deletions of UPF genes lead to an increased frequency of nonsense suppression at termination codons in a variety of yeast genes (reviewed in [15]). A mutation in the GTP-binding motifs of eRF3 impairs its eRF1-binding ability and not only causes a defect in translation termination but also slows normal and nonsense-mediated mRNA decay, suggesting that GTP/eRF3-dependent termination influences the subsequent mRNA degradation [16]. Taken together, these results suggest a direct link between the termination complex and mRNA stability.
Both eRF1 and eRF3 are essential for viability of yeast cells, and deletion of the C-terminal part of either protein leads to lethality (reviewed in [2]). Nonsense sup45 mutations had previously been obtained in the presence of the SUQ5 suppressor tRNA [17]. However, we have isolated non-lethal nonsense mutations in the SUP45 gene of S. cerevisiae which lead to a decreased level of eRF1 [18]. Nonsense mutations were also obtained in the SUP35 gene [19,20]. Here, we show that sup45 nonsense and missense mutations have an inhibitory effect on NMD. Our observation that loss of Upf1 suppresses many of the pleiotropic phenotypes caused by mutations in SUP45 allows us to discuss the role of the Upf complex in translation termination.
Results
The sup45 mutations cause a general decrease in the efficiency of NMD

In previous work, we isolated non-lethal nonsense and missense mutations in the essential SUP45 gene of S. cerevisiae which lead to a high level of suppression [18]. Since a direct link between the termination complex and mRNA stability had been proposed, we examined the efficiency of NMD in these mutants by testing whether a decrease in eRF1 level would lead to accumulation of PTC-containing transcripts.
The relative abundance of precursor and mature CYH2 mRNA can be used to monitor NMD, because the inefficiently spliced CYH2 pre-mRNA, which contains a premature termination codon, has been shown to be degraded by the NMD pathway [21]. In the present paper, we show that the accumulation of his7-1 mRNA is also affected by the NMD pathway.
The strain 1B-D1606, in which the sup45 mutants were obtained, contains nonsense mutations in the HIS7, LYS9 and TRP1 genes. As shown in Table 1, sequence analysis of the his7-1, lys9-A21 and trp1-289 alleles identified the presence of single nonsense mutations. It is known that destabilization of mRNAs by NMD depends on the position of the nonsense codon and the presence of a DSE (downstream destabilizing element) downstream of the mutation (reviewed in [15]). Indeed, while mRNAs with nonsense codons in the last 20 to 30% of the coding region retain their wild-type decay rates, mRNAs harboring a PTC in the first two-thirds of the gene are subject to degradation by NMD. The his7-1 nonsense mutation is located at the beginning of the HIS7 mRNA and is followed by two putative DSEs (Table 1), suggesting that the his7-1 transcript could be a substrate for NMD.
To check whether the his7-1 transcript is affected by NMD, two isogenic strains were constructed harboring the his7-1 allele together with UPF1 or upf1∆. For this purpose, the strain 5B-D1645 (his7-1 upf1∆) was transformed with the pRS316 and pRS316/UPF1 plasmids. The mean steady-state level of the his7-1 mRNA was 3.5-fold higher in the upf1∆ cells than in UPF1 cells (Fig. 1, upper panel). However, the level of wild-type HIS7 mRNA was not affected by deletion of UPF1 (see Additional file 1, figure 1A). The steady-state level of ade1-14, another nonsense-containing transcript, did not depend on deletion of UPF1 (Fig. 1, middle panel). Therefore, the deletion of the UPF1 gene affects the his7-1 transcript in precisely the same manner as it affects the CYH2 pre-mRNA, demonstrating that his7-1 is a potential substrate for NMD. It was shown earlier that deletion of the UPF1 gene promotes suppression of some but not all nonsense mutations [7]; indeed, we did not detect suppression of the his7-1 mutation on SC-HIS medium in the upf1∆ strain.

We studied NMD efficiency in sup45 mutant strains by examination of CYH2 and his7-1 mRNA levels. First, we showed that the level of wild-type HIS7 mRNA is not changed in sup45-n mutants and upf1∆ mutants (see Additional file 1, Fig. 1B). Next, we used strains harboring the nonsense mutations sup45-102, -104, -105 and -107, which lead to a decreased level of full-length eRF1 [18], and the sup45-103 (L21S) missense mutation [22]. Total RNA was isolated from the mutants and analyzed by Northern blots using probes specific for CYH2 and his7-1 mRNA. Strains bearing sup45-n mutations had a significantly increased CYH2 pre-mRNA/mRNA ratio (1.7 ± 0.1 to 2.1 ± 0.2) compared with the wild-type SUP45 strain (ratio of 1). A similar increase was also observed for the sup45-103 missense mutant (1.8 ± 0.1) (Fig. 2A).
Similarly, the relative abundance of his7-1 mRNA in the sup45-n mutant strains, normalized using ACT1 mRNA, ranged from 1.8 ± 0.3 to 2.8 ± 0.4 compared with that of the wild-type SUP45 strain (ratio of 1) (Fig. 2B). The strain harboring the missense mutation sup45-103 was also characterized by accumulation of the his7-1 transcript (1.8 ± 0.1). This effect was weaker than that observed for UPF1 deletion, which causes 4- and 3.5-fold increases in the CYH2 pre-mRNA/mRNA ratio and in the accumulation of his7-1 mRNA, respectively (Fig. 1); however, it appears specific for NMD substrates, since the amounts of other nonsense-containing transcripts (ade1-14 and lys9-A21) were not significantly changed in sup45 mutants (see Additional file 2). We also observed a significant increase of the CYH2 pre-mRNA/mRNA ratio and of the his7-1 mRNA level in strains bearing sup35 nonsense and missense mutations (data not shown). Taken together, these results demonstrate that both nonsense and missense mutations in SUP45 decrease the efficiency of mRNA degradation by NMD, thus leading to accumulation of mRNAs containing PTCs.
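The quantification scheme used here (a target hybridization signal normalized to the ACT1 loading control, then expressed relative to the wild-type strain, reported as mean ± s.d. across replicates) can be sketched as follows; the function name and the replicate numbers are illustrative, not the authors' raw data:

```python
from statistics import mean, stdev

def relative_abundance(target, actin, wt_target, wt_actin):
    """Fold change of a transcript relative to a wild-type strain.

    target, actin -- replicate signals (e.g. his7-1 and ACT1) for the mutant
    wt_target, wt_actin -- corresponding signals for the wild-type strain,
                           whose normalized ratio is set to 1.0
    Returns (mean, sample s.d.) of the normalized fold change.
    """
    wt_ratio = wt_target / wt_actin
    folds = [t / a / wt_ratio for t, a in zip(target, actin)]
    return mean(folds), stdev(folds)

# Hypothetical triplicate his7-1/ACT1 signals for a sup45-n mutant
m, sd = relative_abundance([2.1, 1.9, 2.3], [1.0, 1.0, 1.0], wt_target=1.0, wt_actin=1.0)
print(f"{m:.1f} ± {sd:.1f}")  # → 2.1 ± 0.2
```

Normalizing to ACT1 before taking the ratio to wild type corrects for unequal RNA loading between lanes, so only the NMD-dependent change in transcript abundance remains.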
Figure 2. Nonsense or missense alleles of SUP45 affect accumulation of his7-1 mRNA and CYH2 pre-mRNA. Northern blots were prepared with total RNA from the wild-type strain 1B-D1606 (SUP45) and its sup45 mutant derivatives. Blots were hybridized with DNA probes detecting the his7-1, CYH2 and ACT1 transcripts. For each mutant, the average CYH2 pre-mRNA/mRNA ratio (A) and the abundance of his7-1 mRNA (B) relative to the wild-type strain are shown with the standard deviation (s.d.). The following sup45 mutations were tested: 102, 104, 105, 107 (nonsense) and 103 (missense).
Figure 1. his7-1 mRNA accumulates when nonsense-mediated decay is inhibited. Northern blotting was used to assess the effect of UPF1 deletion on the accumulation of his7-1 mRNA. Total RNA was isolated from strain 5B-D1645 (his7-1 upf1∆) transformed with plasmids pRS316 and pRS316/UPF1, designated as (upf1∆) and (UPF1), respectively. Northern blots were hybridized with radiolabeled HIS7, ADE1, ACT1 and CYH2 probes. A. Representative hybridization signals specific to his7-1 mRNA (upper panel), ade1-14 mRNA (middle panel) and actin mRNA (ACT1), used as an internal control (lower panel), are shown. Numbers under the upper and middle panels represent the relative abundance of his7-1 and ade1-14 mRNAs, respectively, in the upf1∆ and UPF1 strains; (s.d.) - standard deviation. B. Accumulation of CYH2 precursor mRNA was used to verify that NMD is altered in the upf1∆ strain. The CYH2 probe detects both precursor and mature CYH2 mRNA. The fold increase in CYH2 precursor/mature mRNA accumulation in the upf1∆ strain relative to the UPF1 strain is indicated with the standard deviation (s.d.).
Increased viability of sup45 nonsense mutants in the absence of UPF1
Previously, we have shown that sup45 nonsense mutants are viable in different genetic backgrounds [18]. However, the efficiency of plasmid shuffle was significantly lower with mutant sup45-n alleles than with a plasmid bearing the wild-type SUP45 gene [18], indicating that sup45-n mutations imperfectly replace SUP45. To assess the effects of double sup45 upf1 mutations on the viability of the corresponding strains, we performed plasmid shuffle analysis using strains bearing single sup45 mutations or sup45 in combination with upf1∆ (see Materials and Methods).
Two nonsense mutations resulting in different stop codons (sup45-102 (UAA) and sup45-107 (UGA)) and one missense mutation (sup45-103) were used for the plasmid shuffle experiments. Two isogenic yeast strains, 1A-D1628 (sup45∆ pRS316/SUP45) and 1-1A-D1628 (sup45∆ upf1∆ pRS316/SUP45), were transformed with pRS315 plasmids bearing the wild-type SUP45 gene or different sup45 mutations. Transformants were then subjected to plasmid shuffle analysis to verify whether strains containing the sup45 alleles could lose the plasmid carrying the wild-type gene. In the sup45∆ strain, all transformants were able to grow in the presence of 5-FOA, indicating that all tested mutations can replace wild-type SUP45. However, as previously described [18], plasmid shuffle was less efficient with sup45 mutations than with wild-type SUP45. Surprisingly, introduction of the upf1∆ mutation led to increased viability of sup45 mutants (Fig. 3, 5-FOA). We did not observe a difference in growth between wild-type and sup45 mutants on medium selective for both plasmids (Fig. 3, -L-U). To check that deletion of UPF1 does not lead to higher production of eRF1 protein in double sup45 upf1∆ mutants, which could explain their increased viability, we analyzed the eRF1 protein level by western blot. As shown in Fig. 3B, deletion of UPF1 does not affect the level of eRF1 protein in sup45 mutants.
Deletion of the UPF1 gene suppresses several sup45 phenotypes
It is known that mutations in the SUP45 gene lead to suppression of nonsense mutations and also to many phenotypic changes, including high or low temperature sensitivity, respiratory efficiency and sensitivity to aminoglycoside antibiotics such as paromomycin (reviewed in [2]). It has been previously reported that loss of the UPF1 gene results in the suppression of some but not all nonsense mutations. In addition, deletion of UPF1 does not confer sensitivity to paromomycin [7], an aminoglycoside antibiotic that induces translational misreading.

Figure 3. Deletion of the UPF1 gene leads to increased viability of sup45 nonsense mutants.

We therefore compared the phenotypes of single sup45 and double sup45 upf1∆ mutants, previously obtained by plasmid shuffle, for suppression efficiency, temperature sensitivity and sensitivity to paromomycin. As previously described [18,22], the nonsense sup45-102 and sup45-107 and missense sup45-103 mutations are temperature sensitive. We observed that deletion of UPF1 suppressed the temperature sensitivity of both the sup45-102 and sup45-107 nonsense mutants on rich medium. However, UPF1 deletion did not suppress the temperature sensitivity of the sup45-103 missense mutant (Fig. 4A). This disparity in the effect of UPF1 deletion on the temperature sensitivity of nonsense and missense mutants may reflect the different nature of temperature sensitivity caused by a decreased level of eRF1 as opposed to a mutated eRF1. Loss of Upf1 also restored growth of all sup45 mutants on paromomycin media (Fig. 4B). In addition, deletion of UPF1 had an allosuppressor effect on suppression of the ade1-14 mutation by sup45 mutations; however, deletion of UPF1 alone, in the presence of a wild-type copy of the SUP45 gene, had no suppressor effect on the ade1-14 mutation (Fig. 4C). On YPD medium, all transformants grew, indicating that they retained their growth capacity on rich medium (Fig. 4D).
Therefore, the analysis of sup45 upf1∆ double mutants shows that loss of Upf1 not only affects the viability of sup45 mutants but also suppresses several sup45 phenotypes.
Defects of NMD in double mutants sup45 upf1∆
We have shown above that sup45 mutants affect NMD and that UPF1 deletion suppresses several sup45 phenotypes. We therefore examined whether UPF1 deletion has an additional effect on NMD in sup45 mutants. For this purpose, we compared accumulation of CYH2 precursor mRNA in single sup45 mutants and sup45 upf1∆ double mutants. To this end, we transformed strain 3v-D1658 (sup45∆ upf1∆ pRS315/SUP45) and its derivatives (sup45∆ upf1∆ pRS315/sup45-n) with plasmids pRS316 or pRS316/UPF1. In the presence of upf1∆ and wild-type SUP45, accumulation of the CYH2 precursor increased 4.6-fold. We found that the presence of sup45 mutations increased the ratio of preCYH2 to mature CYH2 mRNA by 1.9- to 2.3-fold (Fig. 5A,B), in agreement with results obtained previously in a different genetic background (Fig. 2A). Combination of sup45-n mutations and upf1∆ slightly but reproducibly increased the ratio of preCYH2 to mature CYH2 mRNA from 4.6- to 6.0-fold. Therefore, these results indicate that the combination of sup45-n and upf1 mutations increases accumulation of the CYH2 precursor more than either single mutation.
To compare the effects of single sup45 mutations and sup45 upf1∆ double mutations on the efficiency of suppression, we replica plated the same transformants used for the Northern blot analysis (Fig. 5A) on adenine-deprived medium. As shown in another genetic background (Fig. 4C), deletion of UPF1 alone does not promote suppression of the ade1-14 mutation (Fig. 5C), whereas the combined effects of sup45 mutations and upf1∆ increase the suppression efficiency compared with sup45 mutations alone. As shown in Fig. 5D, allosuppression of ade1-14 by deletion of UPF1 is not the result of stabilization of ade1-14 mRNA in double sup45 upf1∆ mutants. We also showed that deletion of UPF1 affects neither sup45-n mRNA levels nor eRF1 and eRF3 protein levels (Fig. 5D).
Discussion
In the present work, we have shown that nonsense and missense mutations in the SUP45 gene lead to stabilization of PTC-containing mRNAs degraded by NMD. This is the first demonstration that sup45 mutations not only change translation fidelity but also cause a change in mRNA stability.
The CYH2 pre-mRNA, which contains a premature termination codon, was previously shown to be degraded by the NMD pathway [21]. We also identified that NMD affects accumulation of his7-1 mRNA. A single A→T mutation in this allele changes codon 229 to UAA. In addition, two imperfect putative DSEs are found downstream of this premature stop codon. Using a upf1∆ strain, we demonstrated that the his7-1 transcript is possibly under control of the NMD pathway. In order to determine whether changes in transcription of the HIS7 gene could account for accumulation of his7-1 mRNA, we examined the level of wild-type HIS7 mRNA in sup45-n mutants and upf1∆ mutants. Deletion of UPF1 as well as sup45 mutations leads to accumulation of his7-1 mRNA but does not affect the level of wild-type HIS7 mRNA. Accordingly, a genome-wide analysis performed in strains depleted for NMD showed that wild-type HIS7 mRNA (as well as ADE1 and LYS9 mRNAs) is not affected in strains deleted for upf1 [26].

Figure 5. Double mutants sup45 upf1∆ are characterized by defects of NMD. A. Representative hybridization signals specific to precursor and mature forms of CYH2. Total RNA was isolated from strain 3v-D1658 (sup45∆ upf1∆ pRS315/SUP45) and its derivatives (sup45∆ upf1∆ pRS315/sup45-n) transformed with pRS316 and pRS316/UPF1 plasmids, designated as (UPF1 -) and (UPF1 +), respectively. The Northern blots were hybridized with a radiolabeled CYH2 probe. The CYH2 precursor/mature ratio in the wild-type strain was set as 1.0. B. The fold increase in CYH2 precursor/mature mRNA accumulation, measured in the same strains as in panel A, relative to the wild-type strain. C. The same transformants as in panel A were tested by plating 10^0, 10^-1 and 10^-2 serial dilutions of overnight cultures (left to right) on synthetic complete plates lacking adenine and incubating 5 days at 25°C. The same serially diluted cultures were also spotted on plates lacking leucine and uracil (-L -U) to estimate the total number of cells analyzed. D. Northern blots prepared with total RNA from the same transformants as in panel A were hybridized with radiolabeled probes detecting ade1-14, SUP45 and scR1 mRNA (scR1 was used as a control). eRF1 and eRF3 protein levels in the same transformants were analyzed by western blot. Tubulin was used as a loading control. WT - wild type, 102 - sup45-102 (nonsense), 107 - sup45-107 (nonsense).
Here, we demonstrate that accumulation of his7-1 and CYH2 precursor mRNAs in cells bearing sup45 mutations was much higher than in the wild-type strain. However, sup45 mutations do not promote accumulation of other nonsense-containing transcripts, such as ade1-14 or lys9-A21, despite efficient suppression of these mutations, as well as of his7-1 [18,22]. This result indicates that simply increasing read-through efficiency does not cause a general increase in the abundance of PTC-containing mRNAs; rather, sup45 mutations specifically affect PTC-containing mRNAs subjected to NMD. We observed that the abundance of his7-1 and CYH2 precursor mRNAs in cells bearing sup45 mutations was lower than in the upf1∆ strain. This difference between upf1∆ and sup45 mutants could be explained by the complete absence of Upf1 protein in the upf1∆ strain, leading to complete inactivation of NMD, and by the presence of some functional eRF1 protein in sup45 mutants, which is necessary for cell viability [18,22]. Indeed, we previously reported that in sup45-n mutants the level of eRF1 is decreased compared to wild type, whereas in sup45 missense mutants the level of eRF1 is unchanged but its functionality is altered. These results demonstrate that eRF1 participates in NMD.
Recently, the importance of the second translation termination factor, eRF3, for NMD, and the interaction of both eRF1 and eRF3 with the Upf proteins, were demonstrated. Upf1 protein interacts with the polypeptide release factors eRF3 and eRF1 while they are still present in the ribosome-bound termination complex, providing a direct link between the termination complex and the NMD machinery [12,13]. Both Upf2 and Upf3 interact with eRF3, but not with eRF1; and Upf2, Upf3 and eRF1 compete with each other in vitro for binding to eRF3 [12,13]. eRF3 also interacts with poly(A)-binding protein (PABP) [27,28]; furthermore, eRF3 regulates the initiation of normal mRNA decay at the poly(A) tail-shortening step through its interaction with PABP [29]. Thus, eRF3 can mediate normal and nonsense-mediated mRNA decay through its association with Pab1 and Upf1, and was therefore proposed as a key mediator between translation termination and NMD [16]. Moreover, it was previously shown that weak translation termination due to [PSI+] (a prion form of eRF3) antagonizes the effects of NMD [30]. A first indication of a link between translation termination and NMD came from observations that decay of PTC-containing mRNAs can be antagonized by tRNAs that suppress termination [31]. Data have shown that normal termination is distinct from premature termination and that this difference depends upon the presence of Upf1 at the premature termination codon [32]. Our results, together with data on the eRF1-Upf1 interaction [12,13], demonstrate that eRF1, like eRF3, is an essential factor linking translation termination and NMD. Recognition of stop codons is a common event necessary for the two processes. Since it is established that eRF1 plays a crucial role in translation termination by directly recognizing stop codons (reviewed in [2]), eRF1 could have an identical function in NMD by recognizing PTCs.

Deletion of either the UPF2 or UPF3 gene leads to increased viability of sup45 nonsense mutants.
We observed that the combination of upf1∆ and sup45-n mutations leads to a CYH2 precursor mRNA abundance higher than in upf1∆ or sup45-n single mutants. A similar additive effect on stabilization of nonsense-containing mRNA was shown for the combination of upf1∆ and [PSI+] [30]. Therefore, a possible explanation for this additive effect of upf1∆ and sup45-n mutations could be that eRF1 is required for both normal and nonsense-mediated mRNA decay, as was shown for eRF3 [16].
It has been shown that a mutation in eRF3 which impairs eRF3 binding to eRF1 affects mRNA decay [16]. In the present paper, we show that the missense mutation sup45-103 (L21S) alters degradation of PTC-containing mRNAs by NMD. However, we have previously shown that this mutation does not affect the eRF1-eRF3 interaction [22], indicating that this allele has an inhibitory effect on NMD that is independent of eRF1-eRF3 binding. This result demonstrates that an eRF1 mutation affecting PTC-containing mRNA decay by NMD does not necessarily alter the eRF1-eRF3 interaction.
A role for the Upf1 protein, essential for NMD, in translation termination first became evident when a set of mutations were isolated in the UPF1 gene that separated the mRNA decay function from its activity in modulating premature termination [33,34]. Subsequent studies have shown that deletion of either UPF2 or UPF3 can also lead to a nonsense suppression phenotype [7,9,11,13,35]. In addition, it was shown that upf1∆ mutation causes a general decrease in the efficiency of translation termination at UAG, UAA, and UGA stop codons [30].
In this work, we have shown that deletion of UPF1 does not affect the ade1-14 mRNA level but results in allosuppression of the ade1-14 mutation in sup45 nonsense mutants, revealing that deletion of UPF1 has a synergistic effect with sup45-n mutations. A similar allosuppressor effect has also been shown for deletion of UPF1 in combination with [PSI+] [30,36]. Based on this additive effect, Keeling et al. [30] proposed that the upf1∆ mutation and [PSI+] influence the termination process in distinct ways. Our results suggest that this could also be the case for upf1∆ and sup45 mutations.
We found that deletion of the UPF1 gene affects several other sup45 phenotypes, such as temperature sensitivity, paromomycin sensitivity and viability of sup45 mutants. It is known that deletion of the UPF1 gene in yeast does not cause any detectable phenotypic effects except respiratory deficiency [37] and nonsense suppression [7,9,13,[33][34][35]. Telomere length is also affected by deletions of the UPF1-3 genes [38]. How could UPF1 deletion affect sup45 phenotypes? We cannot exclude an indirect effect of UPF1 deletion on sup45 phenotypes. It has been reported that NMD controls the mRNA levels of several hundred wild-type genes [24,26]. One can hypothesize that depletion of Upf1 could affect the expression of some translation apparatus components (e.g. tRNA genes) which themselves influence the viability of sup45 mutants. Indeed, the presence of the SUQ5 mutation, a mutant suppressor tRNA(Ser), increases the viability of sup45-n mutants [18]. Alternatively, since inactivation of the NMD pathway by the upf1∆ mutation does not increase the steady-state levels of wild-type and mutant SUP45 mRNAs and does not change the amount of eRF1 protein, we propose that the effect of NMD on sup45 phenotypes probably acts via a change in the stoichiometry of factors involved in translation termination and NMD. In contrast to mammals, the Upf proteins of S. cerevisiae are present at very low intracellular concentrations [39]. Considering that in the sup45-102 and sup45-107 nonsense mutants the amount of eRF1 was estimated as 8% and 17% of the wild-type level, respectively [18], in mutant cells the eRF1 and Upf1 proteins are probably present in stoichiometric amounts. Possibly, in wild-type cells Upf1 does not prevent normal termination because its amount is ten times lower than that of eRF1, but when the two proteins are present in stoichiometric amounts, as in sup45 nonsense mutants, binding of Upf1 to eRF1 could result in a defective complex that blocks termination.
This hypothesis is supported by the finding that the viability of sup45 nonsense mutants also depends on the Upf2 and Upf3 proteins. There is a possibility that the effect of Upf2 or Upf3 depletion is indirect and mediated by Upf1. It was shown that in mammalian cells depletion of Upf2 or Upf3 reduces the amount of the phosphorylated form of Upf1, possibly preventing Upf1 dissociation from eRF3 and eRF1 [40]. Phosphorylation of Upf1 and Upf2 has also been shown in S. cerevisiae [41,42], an indication that this mechanism might operate in yeast cells as well.
From recent studies, it appears increasingly clear that translation termination and mRNA stability are intimately linked, and our results demonstrate that eRF1 is an essential factor linking these two processes.
Conclusion
In the present work, we have shown that nonsense and missense mutations in the SUP45 gene lead to stabilization of CYH2 pre-mRNA, a PTC-containing transcript degraded by NMD, and to accumulation of his7-1 mRNA. At the same time, sup45 mutations do not promote accumulation of other nonsense-containing transcripts, such as ade1-14 or lys9-A21, despite efficient suppression of these mutations. Thus, sup45 mutations specifically affect PTC-containing mRNAs subjected to NMD. Deletion of UPF1 results in allosuppression of the ade1-14 mutation in sup45 nonsense mutants and leads to an increase in CYH2 pre-mRNA abundance, revealing that deletion of UPF1 has a synergistic effect with sup45-n mutations. This is the first demonstration that sup45 mutations not only change translation fidelity but also cause a change in mRNA stability.
Models explaining the increased viability of sup45 nonsense mutants in the absence of the Upf1, Upf2 or Upf3 proteins are proposed. First, depletion of Upf1 could affect the expression of some translation apparatus components (e.g. tRNA genes) which themselves influence the viability of sup45 mutants. Second, the effect of NMD on sup45 phenotypes may result from a change in the stoichiometry of factors involved in translation termination and NMD.
Yeast strains, plasmids and growth conditions
The S. cerevisiae strains used in this study are listed in Table 2. Previously characterized sup45 mutations [18,22] were used in this study, among them the following nonsense mutations (sup45-n): Yeast strain 1-1A-D1628 was generated using a one-step gene replacement method. The UPF1 gene was deleted by removing the entire open reading frame and inserting the kanMX gene using a PCR-based gene deletion approach [43] with plasmid pFA6a-kanMX. The following primers were used for PCR: F1 (AATATACTTTTTATATTACATCAATCATTGTCATTAT-CAACGGATCCCCGGGTTAATTAA) and R1 (AAGCCAAGTTTAACATTTTATTTTAACAGGGTTCAC-CGAAGAATTCGAGCTCGTTTAAAC). Yeast strain 1A-D1628 was transformed with the fragment generated by PCR. KanR transformants were screened by PCR. Yeast strains were grown either in standard rich or synthetic culture medium [44] at 25°C. Transformants were grown in media selective for plasmid maintenance (SC-Trp, SC-Leu, SC-Ura). Suppression of nonsense mutations was estimated by growth at 25°C on synthetic media lacking the corresponding amino acids. For plasmid shuffle, selective medium containing 1 mg/ml 5-fluoroorotic acid (5-FOA, Sigma) was used. Yeast transformation was performed as described [45]. Plasmid pRS316/UPF1 contains the UPF1 gene under its own promoter [46].
Plasmid shuffle
The haploid SUP45::HIS3 [CEN URA3 SUP45] and SUP45::HIS3 UPF::kanMX4 [CEN URA3 SUP45] strains were used in the plasmid shuffle. These strains were transformed with [CEN LEU2 sup45] plasmids. Transformants, selected on -Ura-Leu medium, were replica plated with velveteen onto 5-FOA medium, which counterselects against URA3 plasmids [47]. Growth was also assayed using serial dilutions of overnight cultures with OD600 = 1. Serially (10-fold) diluted yeast cell cultures were spotted on plates containing 5-FOA to determine the ability of the sup45 mutant alleles to support cell growth in the presence and absence of any one of the three UPF genes. The wild-type SUP45 gene carried on the URA3 plasmid is eliminated because 5-FOA is toxic to cells expressing the URA3 gene. The same serially diluted cultures were also spotted on plates lacking leucine and uracil to estimate the total number of cells analyzed.
Sequencing of the alleles his7-1, lys9-A21 and trp1-289
Yeast DNA was prepared using a genomic DNA purification kit (Promega). DNA fragments corresponding to the ORFs were amplified with the following primers: For each allele at least two independent PCR products were sequenced using the following primers:
Analysis of mRNA steady-state levels
Total RNA was prepared by the hot-phenol extraction method from yeast cultures grown in YPD medium to log phase (OD600 = 0.5-0.8) as described [44]. Five micrograms of each RNA sample were separated on a 1.2% agarose gel containing 3% formaldehyde and transferred to a nylon membrane, Zeta-Probe (Bio-Rad). SUP45, HIS7, CYH2 and ACT1 transcripts were detected using gene-specific 32P-radiolabelled DNA probes. Radioactive signals were directly detected and quantified with a STORM PhosphorImager system (Molecular Dynamics, USA).
Flexible global forecast combinations
Forecast combination -- the aggregation of individual forecasts from multiple experts or models -- is a proven approach to economic forecasting. To date, research on economic forecasting has concentrated on local combination methods, which handle separate but related forecasting tasks in isolation. Yet, it has been known for over two decades in the machine learning community that global methods, which exploit task-relatedness, can improve on local methods that ignore it. Motivated by the possibility for improvement, this paper introduces a framework for globally combining forecasts while being flexible to the level of task-relatedness. Through our framework, we develop global versions of several existing forecast combinations. To evaluate the efficacy of these new global forecast combinations, we conduct extensive comparisons using synthetic and real data. Our real data comparisons, which involve forecasts of core economic indicators in the Eurozone, provide empirical evidence that the accuracy of global combinations of economic forecasts can surpass local combinations.
Introduction
Forecast combinations, aggregations of multiple individual forecasts, are one of the most persistently reported empirical successes in forecasting. As a key economic institution, the European Central Bank elicits economic forecasts every quarter for the Eurozone from more than one hundred forecasters, an exercise known as the Survey of Professional Forecasters (SPF). Each forecaster has unique expertise, and some possess private information, so combining is a means to a more accurate and robust projection of the economy than any one forecaster could alone produce. For this reason, the Federal Reserve Bank of Philadelphia runs a similar survey by the same name for the United States. Exactly how to combine forecasts from these surveys is a long-standing problem.

gradient boosted trees. The trees were grown on thousands of time series, enabling weights to be learned across tasks. Though similar, their problem is distinct from the economic forecast combination problem that is the main focus of this paper. Whereas Montero-Manso et al. (2020) combined a small number of forecasts for a large number of tasks drawn independently from a large pool, we combine a large number of forecasts for a small number of related tasks. Elaborate approaches involving boosted trees are not feasible in our setting.
In light of the preceding discussion, this paper proposes a new framework for globally combining forecasts.
Our framework minimises a global loss function composed of individual forecasting tasks. The framework is flexible to the level of relatedness among the different tasks. Specifically, using a task-coupling penalty, we interpolate between fully local combination, where all tasks are heterogeneous, and fully global combination, where all tasks are homogeneous. The best interpolation is determined in a data-driven fashion. Via this framework, we 'globalise' the weighting schemes of Bates and Granger (1969), Conflitti et al. (2015), and Matsypura et al. (2018). We then evaluate the new global combinations in both simulation and an application to expert forecasts from the European Central Bank SPF. The results indicate neither fully local nor fully global combination uniformly performs best. Instead, combinations that lie somewhere between these extremes typically lead to the best out-of-sample performance. We also show the benefits of our framework on model-based forecasts of economic and financial time series from the M4 Competition in Appendix D.
The paper is organised into six sections. Section 2 introduces the proposed framework for globally combining forecasts. Section 3 addresses computation of the new combinations. Section 4 presents numerical experiments that gauge the benefits of globalisation. Section 5 describes empirical comparisons of the new methods in application. Section 6 closes the paper. Proofs are in Appendix A, additional synthetic data experiments in Appendix B, and additional empirical results in Appendix C.
Single-task forecast combination
To set the scene for our framework, we first describe the traditional single-task forecast combination problem. Let y ∈ R be the forecast target and f = (f_1, ..., f_p)⊤ ∈ R^p be forecasts of y. Denote by e = y1 − f the forecast errors. It is customary to assume the errors satisfy E(e) = 0 and Var(e) = Σ, where Σ is a p × p positive-definite matrix. Consider the linear combination forecast f̂ = f⊤w, where w = (w_1, ..., w_p)⊤ ∈ R^p are unit-sum weights controlling the contribution of individual forecasts to the combination forecast.

¹ The literature on the European Central Bank and Federal Reserve Bank of Philadelphia SPFs typically refers to individual forecasts as 'expert forecasts'; see footnotes 6 and 7 in Magnus and Vasnev (2023) for papers that use those surveys. Expert forecasts often include judgement that is now recognised as an important element in forecasting and can be used to adjust individual model output (Lawrence et al., 2006) or model selection/combination (Petropoulos et al., 2018). In many areas, 'judgemental forecasts' is a more common term; see Lawrence et al. (2006). Our methodology is also applicable to those areas.
Since the forecasts are unbiased and the weights sum to one, the mean square error minimising forecast combination is that which minimises the combination forecast error variance Var(e⊤w) = w⊤Σw. This minimisation is performed with respect to a constraint set W:

min_{w ∈ W} w⊤Σw.

The simplest configuration of the constraint set is W_eql = {1/p}, yielding equal weights. Using W_opt = {w ∈ R^p : 1⊤w = 1} leads to optimal weights as proposed by Bates and Granger (1969). The constraint set W_optcvx = {w ∈ R^p : 1⊤w = 1, w ≥ 0}, as studied by Conflitti et al. (2015), adds a nonnegativity condition to guarantee a convex combination. The resulting weights are referred to hereafter as optimal convex weights.
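Under W_opt the minimiser has the well-known closed form w = Σ⁻¹1/(1⊤Σ⁻¹1). A minimal NumPy sketch of this formula (the function name and example covariance matrix are ours, for illustration only):

```python
import numpy as np

def optimal_weights(sigma):
    """Bates-Granger optimal weights: minimise w' Sigma w subject to 1'w = 1.

    Closed form: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
    """
    ones = np.ones(sigma.shape[0])
    x = np.linalg.solve(sigma, ones)  # Sigma^{-1} 1
    return x / (ones @ x)

# Two forecasters with error variances 1 and 4, uncorrelated:
sigma = np.diag([1.0, 4.0])
w = optimal_weights(sigma)
# The less accurate forecaster receives the smaller weight: w = [0.8, 0.2]
```

As expected, the weights are inversely related to the error variances and sum to one.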
The constraint set W_opteql = {w ∈ R^p : w = z/(1⊤z), z ∈ {0,1}^p, z ≠ 0} leads to equal weights restricted to an optimal subset of forecasts. These weights were investigated by Matsypura et al. (2018) and are referred to hereafter as optimal equal weights. Here, z is a vector of p binary variables z_j (j = 1, ..., p), where z_j assumes the value one if forecast j is selected for inclusion in the combination and zero otherwise. The constraint w = z/(1⊤z) guarantees the selected forecasts are equally weighted. Other weighting schemes can also be cast in this setup by appropriately choosing W.
When the covariance matrix Σ is large-dimensional and estimated from data, it can be helpful to include a shrinkage penalty in the objective function (e.g., Roccazzella et al., 2022):

min_{w ∈ W} w⊤Σw + λ∥w∥_q^q,    (1)

where λ ≥ 0. Setting q = 2 yields a ridge penalty (Hoerl and Kennard, 1970), while q = 1 yields a lasso penalty (Tibshirani, 1996). When q = 2, the objective can be rearranged as w⊤(Σ + λI)w, so the ridge penalty has the effect of shrinking the covariance matrix towards the identity matrix I, thereby stabilising the objective. The lasso penalty has a similar stabilising effect. Though there exist numerous covariance estimators that explicitly perform shrinkage (Ledoit and Wolf, 2004; Schäfer and Strimmer, 2005; Touloumis, 2015), these do not accommodate missing data. Missing data is an important empirical consideration, discussed further in Section 5. On the other hand, it is straightforward to mimic the effect of shrinkage by plugging a standard missing-data covariance estimator into (1). Under all the aforementioned configurations of W, the limiting shrinkage case (λ → ∞) leads to equal weights as the optimal solution when q = 2.
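The ridge case (q = 2) amounts to replacing Σ with Σ + λI in the closed-form solution, so heavy shrinkage drives the weights towards equality. A small sketch of this behaviour (function name and example matrix are ours):

```python
import numpy as np

def shrunk_optimal_weights(sigma, lam):
    """Optimal weights with a ridge penalty (q = 2):
    minimise w'(Sigma + lam*I)w subject to 1'w = 1."""
    p = sigma.shape[0]
    a = sigma + lam * np.eye(p)
    x = np.linalg.solve(a, np.ones(p))
    return x / x.sum()

sigma = np.diag([1.0, 4.0])
w0 = shrunk_optimal_weights(sigma, 0.0)     # plain optimal weights
w_big = shrunk_optimal_weights(sigma, 1e6)  # heavy shrinkage -> near-equal weights
```

With λ = 0 the weights equal the Bates-Granger solution; as λ grows they converge to 1/p, matching the limiting case described above.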
Multi-task forecast combination
The problem described above concerns one forecasting task y. Suppose now we have multiple tasks y = (y(1), ..., y(m))⊤ ∈ R^m. The m tasks may comprise, e.g., different variables or different forecast horizons. We index all quantities relating to the kth task by superscript (k). Hence, the combination forecast for task k is f̂(k) = f(k)⊤w(k), where the forecast errors are e(k) = y(k)1 − f(k) with Var(e(k)) = Σ(k). Though the multi-task setup is typical of economics, research to date has treated the tasks in isolation, using weights fit on a per-task basis:

min_{w(k) ∈ W(k)} w(k)⊤Σ(k)w(k) + λ∥w(k)∥_q^q,  k = 1, ..., m.    (2)

This combination is local because the individual tasks are in no way linked, i.e., solving optimisation problem (1) for each task individually leads to the same weights as solving optimisation problem (2). Information from one task that might be relevant to other tasks is neglected. Instead, one can consider a single vector of weights that is a minimiser of the total loss across all tasks:

min_{w ∈ W} ∑_{k=1}^m [w⊤Σ(k)w + λ∥w∥_q^q].    (3)

This combination is global insofar as the resulting weights take into account information contained in all tasks. Since the loss term in the objective can be expressed equivalently as w⊤(∑_{k=1}^m Σ(k))w, this approach can be interpreted as averaging over the task-specific covariance matrices. When the covariance matrices are estimated by the sample covariance matrix, averaging is the same as estimating a single covariance matrix after aggregating data from different tasks. Unfortunately, this approach rests on the implicit assumption that the tasks are completely homogeneous. This assumption might be unreasonably strong in practice and could harm forecast performance.
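The averaging interpretation can be sketched directly: the fully global weights are the optimal weights of the mean covariance matrix (a NumPy illustration with a made-up two-task example; the function name is ours):

```python
import numpy as np

def hard_global_weights(sigmas):
    """Hard global combination: a single weight vector minimising the total
    loss across tasks, equivalent to computing optimal weights from the
    average of the task-specific covariance matrices."""
    sigma_bar = np.mean(sigmas, axis=0)
    x = np.linalg.solve(sigma_bar, np.ones(sigma_bar.shape[0]))
    return x / x.sum()

# Two tasks whose best forecasters differ; global weights average them out:
sigmas = [np.diag([1.0, 4.0]), np.diag([4.0, 1.0])]
w = hard_global_weights(sigmas)  # -> [0.5, 0.5]
```

The example also shows the danger flagged above: when tasks are heterogeneous, the single weight vector is optimal for neither task individually.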
Rather than committing to a fully local or fully global approach, one can consider bridging the two approaches using per-task weights that are globally regularised:

min_{w(1) ∈ W(1), ..., w(m) ∈ W(m)} ∑_{k=1}^m [w(k)⊤Σ(k)w(k) + λ∥w(k)∥_q^q] + Ω_{γ,q}(w(1), ..., w(m)),    (4)

where

Ω_{γ,q}(w(1), ..., w(m)) = min_{w̄ ∈ R^p} γ ∑_{k=1}^m ∥w̄ − w(k)∥_q^q.    (5)

Here, the penalty γ ∑_{k=1}^m ∥w̄ − w(k)∥_q^q with γ ≥ 0 is a device to incorporate global information into the per-task weights. It achieves this goal by penalising departures from an auxiliary weight vector w̄ common to all tasks, where the departures are measured as squared deviations (q = 2) or absolute deviations (q = 1).
Regardless of q, taking γ → ∞ yields global combination (3), while taking γ → 0 yields local combination (2). Hereafter, we refer to the limiting case γ → ∞ as 'hard' global combination, and the case with finite nonzero values of γ as 'soft' global combination. These different cases are depicted in Figure 1. The value of γ should reflect the level of relatedness among tasks: larger values encourage homogeneity, while smaller values promote heterogeneity. The best value in terms of out-of-sample forecast performance is usually unknown in application but is estimable from data.
When the departures are measured as squared deviations (i.e., q = 2), it is not difficult to obtain an analytical solution:

w̄⋆ = m⁻¹ ∑_{k=1}^m w(k).

That is, the optimal value of the common parameter vector w̄ is the average of the individual parameter vectors w(1), ..., w(m). One can thus interpret our approach as finding per-task weights within a certain distance of the average weight vector. Some additional algebra gives an alternative expression for Ω_{γ,2}:

Ω_{γ,2}(w(1), ..., w(m)) = (γ/(2m)) ∑_{k=1}^m ∑_{l=1}^m ∥w(k) − w(l)∥₂².

This expression highlights that our approach explicitly penalises mutual distances between local weight vectors. Our experience is that formulating soft global combination using either of the above closed-form expressions yields computational performance similar to that of (4), provided the number of tasks m is not large. When m is large, these expressions involve many more quadratic terms in the objective, which can impede computation. For instance, under the simulation design of Section 4 with m = 10 and p = 50, it takes roughly six times longer to solve for optimal weights when using the second of the above closed-form expressions.
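The paper solves these problems with Gurobi; purely as an illustration of the structure, the q = 2 case under the sum-to-one constraint can also be sketched with block coordinate descent, alternating between the closed-form average update for w̄ and an equality-constrained quadratic solve (via its KKT conditions) for each w(k). All names here are ours:

```python
import numpy as np

def soft_global_weights(sigmas, lam=0.0, gamma=1.0, iters=200):
    """Soft global combination (q = 2, sum-to-one constraint) by block
    coordinate descent:
      - for fixed w_bar, each task solves
          min_w w'(Sigma_k + (lam+gamma)I)w - 2*gamma*w_bar'w  s.t. 1'w = 1,
        whose KKT conditions give a closed-form solution;
      - for fixed per-task weights, the optimal w_bar is their average.
    """
    m = len(sigmas)
    p = sigmas[0].shape[0]
    ones = np.ones(p)
    ws = [np.full(p, 1.0 / p) for _ in range(m)]
    for _ in range(iters):
        w_bar = np.mean(ws, axis=0)
        for k in range(m):
            a = sigmas[k] + (lam + gamma) * np.eye(p)
            a_inv_b = np.linalg.solve(a, gamma * w_bar)  # unconstrained part
            a_inv_1 = np.linalg.solve(a, ones)
            # Lagrange multiplier enforcing 1'w = 1
            mu = (ones @ a_inv_b - 1.0) / (ones @ a_inv_1)
            ws[k] = a_inv_b - mu * a_inv_1
    return ws

# Two heterogeneous tasks; moderate gamma pulls the weights together:
ws = soft_global_weights([np.diag([1.0, 4.0]), np.diag([4.0, 1.0])], gamma=10.0)
```

With γ near zero each w(k) reduces to its local solution; with large γ the per-task weights collapse onto a common vector, mirroring the interpolation described above.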
Proposition 1. When q = 2, the optimisation problem (4) can also be expressed as

min_{w(1) ∈ W(1), ..., w(m) ∈ W(m)} ∑_{k=1}^m w(k)⊤(Σ(k) + (λ + γ)I)w(k) − γm∥w̄⋆∥₂²,    (6)

where w̄⋆ = m⁻¹ ∑_{k=1}^m w(k) is the optimal value of the common parameter vector.
Form (6) reveals that γ plays a dual role. It shrinks towards equal weights when it appears in front of I, similar to λ, but it also pushes towards a corner solution via the last term. While the full explicit solution cannot be derived for γ ≠ 0, it is possible to prove the following proposition.
Proposition 2. The optimal solution of problem (6) when W = W_opt satisfies

where

Proof. See Appendix A.2.
For γ = 0, we get the explicit solution w(k) = A(k)1/(1⊤A(k)1), which is the optimal weights of Bates and Granger (1969) shrunk towards equal weights by λ. When γ ≠ 0, it helps λ with shrinkage, as expected, but also enters in a highly nonlinear way via B and D, so the total effect of γ is difficult to discern.
Task grouping
Sometimes it can be useful to limit the flow of information between certain tasks, e.g., when one or more tasks are unrelated. For this purpose, denote by G := {G_1, ..., G_g} a collection of g groups of tasks, where G_1 ∪ ... ∪ G_g = {1, ..., m} and the groups are pairwise disjoint. Using this notation, one can modify Ω_{γ,q} to impose the restriction that only tasks within the same group share information:

Ω_{γ,q}(w(1), ..., w(m)) = min_{w̄(1), ..., w̄(g)} γ ∑_{l=1}^g ∑_{k ∈ G_l} ∥w̄(l) − w(k)∥_q^q,

where w̄(l) is an auxiliary weight vector for the lth group. When G consists of just one group, this grouped version of the penalty reduces to (5). Conversely, when G consists of m groups, the grouped penalty has no globalisation effect, i.e., it leads to local combination. The grouped version is helpful in our application to the SPF data in Section 5, where we study different groups of variables and forecast horizons.
Task scaling
If the tasks under consideration vary in difficulty, one or more tasks might dominate the loss component of the objective function. To prevent this behaviour, we consider a scaled version of global combination:

min_{w(1) ∈ W(1), ..., w(m) ∈ W(m)} ∑_{k=1}^m (1/τ(k)_q)[w(k)⊤Σ(k)w(k) + λ∥w(k)∥_q^q] + Ω_{γ,q}(w(1), ..., w(m)),

where τ(1)_q, ..., τ(m)_q > 0 are fixed scaling parameters. If the tasks are to be evenly balanced, a suitable value of τ(k)_q is the optimal objective value from local combination:

τ(k)_q = min_{w(k) ∈ W(k)} w(k)⊤Σ(k)w(k) + λ∥w(k)∥_q^q.

This configuration of τ(k)_q places all tasks on equal footing, and we use it in all subsequent experiments.
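For the q = 2 sum-to-one case, the scaling parameter is simply the optimal local objective value, which a harder task yields a larger value of. A sketch (function name and example matrices are ours):

```python
import numpy as np

def local_objective(sigma, lam):
    """tau_k for q = 2 under the sum-to-one constraint: the optimal objective
    value of local combination, used to put all tasks on equal footing."""
    p = sigma.shape[0]
    a = sigma + lam * np.eye(p)
    x = np.linalg.solve(a, np.ones(p))
    w = x / x.sum()
    return w @ a @ w

# Each task's loss is then divided by its own tau before summing,
# so no single hard task dominates the global objective.
taus = [local_objective(s, 0.1) for s in (np.diag([1.0, 4.0]), np.diag([9.0, 16.0]))]
```

Here the second, noisier task produces a larger τ, so its loss is downweighted accordingly in the scaled objective.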
Optimal (convex) weights
Computation of forecast combinations in our framework varies in complexity according to the weighting scheme, i.e., the specific configuration of W. We begin by describing computation of the optimal weights of Bates and Granger (1969) and the optimal convex weights of Conflitti et al. (2015), both natural candidates for our framework. The constraint sets W_opt = {w ∈ R^p : 1⊤w = 1} and W_optcvx = {w ∈ R^p : 1⊤w = 1, w ≥ 0} defining these combinations are convex. All the objective functions described in Section 2 are convex. The resulting convex optimisation problems are efficiently solvable using most mathematical programming solvers; we use Gurobi (Gurobi Optimization, LLC, 2023).
Optimal equal weights
Optimal equal weights of Matsypura et al. (2018) are another natural candidate for our framework.
The constraint set defining these weights is less tractable than that for optimal weights or optimal convex weights. Recall the set is defined by a mix of continuous and discrete variables:

W_opteql = {w ∈ R^p : w = z/(1⊤z), z ∈ {0,1}^p, z ≠ 0}.    (7)

The integrality constraint z ∈ {0,1}^p is nonconvex but is amenable to a mixed-integer programming solver such as Gurobi. The constraint w = z/(1⊤z) is also nonconvex but cannot be handled directly by Gurobi.
Matsypura et al. (2018) used the decomposition W_opteql = ∪_{s=1}^p W_opteql_s, where

W_opteql_s = {w ∈ R^p : w = z/s, 1⊤z = s, z ∈ {0,1}^p}

is the set of all vectors that equally weight s forecasts. Since s is fixed for W_opteql_s, the constraint w = z/s is linear. The authors sequentially optimise over W_opteql_1, ..., W_opteql_p and retain a solution with minimal objective value. This decomposition approach is, however, infeasible in our framework, because different tasks need not combine the same number of forecasts. To this end, we use a new one-step approach which directly optimises over W_opteql. Though this new approach is proposed for the purpose of globally combining forecasts, it may be of independent interest for local forecast combination. We have found it to be uniformly faster than the approach in Matsypura et al. (2018) in the single-task setting, sometimes by an order of magnitude.
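For very small p, the optimal equal weights problem can be checked by exhaustive enumeration of subsets, a brute-force stand-in for the mixed-integer formulations discussed here (the function name and example are ours, not the paper's implementation):

```python
import numpy as np
from itertools import combinations

def optimal_equal_weights(sigma):
    """Optimal equal weights by exhaustive search over subsets (small p only):
    each candidate equally weights the s selected forecasts, w = z / s."""
    p = sigma.shape[0]
    best_w, best_val = None, np.inf
    for s in range(1, p + 1):
        for subset in combinations(range(p), s):
            w = np.zeros(p)
            w[list(subset)] = 1.0 / s
            val = w @ sigma @ w
            if val < best_val:
                best_val, best_w = val, w
    return best_w

# Three uncorrelated forecasters; the noisy third one is dropped:
sigma = np.diag([1.0, 1.0, 25.0])
w = optimal_equal_weights(sigma)  # -> [0.5, 0.5, 0.0]
```

The enumeration grows as 2^p, which is exactly why the solver-based formulations above are needed at realistic p.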
First, we rewrite the constraint w = z/(1⊤z) as the pair of constraints ws = z and s = 1⊤z, where s ∈ {1, ..., p}. The new constraint ws = z is bilinear in w and s, meaning it is linear for fixed w or fixed s.
Though this bilinear constraint remains nonconvex, it is amenable to spatial branch-and-bound techniques (Liberti, 2008), which are similar to the classic branch-and-bound techniques used for handling integrality constraints. As of version 9, released in 2020, Gurobi can solve optimisation problems with bilinear constraints to global optimality. We now rewrite the constraint set (7) using the new bilinear constraint representation:

W_opteql = {w ∈ R^p : 1⊤w = 1, ws = z, s = 1⊤z, z ∈ {0,1}^p, s ∈ {1, ..., p}}.

The constraint s = 1⊤z is, in fact, redundant in the above characterisation of W_opteql since it is implied by the remaining constraints. Our experience is that Gurobi benefits from excluding it.
Simulation design
We evaluate the possible gains from global forecast combination in simulation. We work directly with the forecast errors, which are sampled from a p-dimensional Gaussian e(k)_t ∼ N(0, Σ(k)) for t = 1, ..., T and k = 1, ..., m. We fix p = T = 50, so the number of forecasters is of the same order as the number of samples. Different sample sizes are considered in Appendix B.2, though the main findings are robust to sample size. The number of tasks m ∈ {2, 5, 10}. The covariance matrices Σ(1), ..., Σ(m) are constructed element-wise as

Σ(k)_ij = ρ σ(k)_i σ(k)_j for i ≠ j, and Σ(k)_ii = (σ(k)_i)².

The correlation parameter ρ = 0.75 to induce high correlations between forecasters, typical of forecaster surveys. For forecaster j = 1, ..., p, the standard deviations σ(k)_j are generated by drawing random variables uniformly distributed on [a, b] and correlating them across tasks with correlation coefficient α ∈ {0, 1/3, 2/3, 1}. The parameter α dictates the level of task-relatedness. As α approaches one, a forecaster's performance on one task is strongly indicative of their performance on other tasks. The converse is true as α approaches zero: a forecaster's performance on one task is weakly indicative of their performance on other tasks. The bounds a = 1 and b = 3, so the accuracy of the worst forecaster is up to three times poorer than that of the best forecaster. A visualisation of data from this simulation design is given in Appendix B.1.
As a measure of out-of-sample accuracy, we report the mean square forecast error on an infinitely large testing set relative to that from an oracle:

ŵ(1)⊤Σ(1)ŵ(1) / w(1)⊤Σ(1)w(1),

where ŵ(1) denotes estimated weights for task one fit using an estimate Σ̂(1) of the true covariance matrix Σ(1), and w(1) denotes oracle weights fit using Σ(1). We restrict our attention to the relative forecast error of the first task only to measure the marginal effect of adding additional tasks. The covariance matrices are estimated using the sample covariances

Σ̂(k)_ij = T⁻¹ ∑_{t=1}^T e(k)_it e(k)_jt for all (i, j) ∈ {1, ..., p}².

The shrinkage parameter λ is swept over a grid of ten values evenly spaced on a logarithmic scale between 0.001 and 1000. For every value of λ, the globalisation parameter γ of soft global combination is swept over the same grid. The best values of λ and γ are chosen on a validation set constructed independently and identically to the training set, which we remark approximates the precision of leave-one-out cross-validation.
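A scaled-down sketch of this relative forecast error metric, assuming (as an illustration only, with smaller p than the paper's design) an equicorrelated covariance with heterogeneous forecaster accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)
p, T, rho = 10, 50, 0.75

# Equicorrelated error covariance with heterogeneous forecaster accuracy
# (a small-scale stand-in for the simulation design described above).
sd = rng.uniform(1.0, 3.0, size=p)
sigma = rho * np.outer(sd, sd)
np.fill_diagonal(sigma, sd ** 2)

# Sample T error vectors and estimate the covariance matrix.
errors = rng.multivariate_normal(np.zeros(p), sigma, size=T)
sigma_hat = np.cov(errors, rowvar=False)

def opt_w(s):
    x = np.linalg.solve(s, np.ones(s.shape[0]))
    return x / x.sum()

w_hat, w_star = opt_w(sigma_hat), opt_w(sigma)
# Out-of-sample variance of estimated weights relative to the oracle.
rel_err = (w_hat @ sigma @ w_hat) / (w_star @ sigma @ w_star)
```

By construction the oracle minimises the true error variance over the same constraint set, so this ratio is at least one; estimation noise determines how far above one it lands.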
The simulations are run in parallel in R (R Core Team, 2023), with Gurobi given a single core of an AMD Ryzen Threadripper 3970x and a 300 second time limit for each value of γ and λ.
Forecast performance
Figure 2 reports the relative forecast errors from 30 simulations. The first row of plots is where the estimate ŵ and oracle w are fit under the sum-to-one constraint that defines optimal weights. The second and third rows correspond to the cases where ŵ and w are fit under the constraints that define optimal convex weights and optimal equal weights, respectively. The relative forecast error reported is not comparable across these three weighting schemes since the oracle is different in each case. Our goal is not to compare weighting schemes but rather to measure the benefits of globalisation. The interested reader is referred to Appendix B.4 for forecast errors reported relative to equal weights; all key findings below remain the same.
Since local combination ignores information in additional tasks, its performance stays fixed as both the number of tasks and task-relatedness increase. In contrast, the relative forecast error of hard global combination decreases roughly linearly with task-relatedness, providing for substantial improvements when task-relatedness is high. Yet, when task-relatedness is low, hard global combination can underperform relative to local combination. This poor performance is made worse by adding additional tasks.

Soft global combination ameliorates the poor performance of hard global combination when the tasks are unrelated and performs nearly as well as hard global combination when the tasks are identical. There is, of course, a statistical cost to estimating the best level of globalisation. Between the extremes, soft global combination successfully adapts to the level of task-relatedness to improve over both local and hard global combination. The greater the number of tasks, the greater the possibility for improvement.
Among the three weighting schemes, optimal weights benefit most from globalisation. The constraint set that defines optimal weights is unbounded, and thus its relative forecast error can be arbitrarily bad. Optimal convex weights and optimal equal weights are defined by bounded constraint sets, so there exist finite upper bounds on their relative forecast errors. Thus, the opportunity to improve these weights is somewhat less than for optimal weights, yet often still substantial.
In Appendix B.2, we provide additional results and extended discussion for T ∈ {25, 100, 150}. For shorter series (T = 25), soft global combination performs well even when the tasks are unrelated and improves significantly when they are related. Even though the justification is different for longer series, the conclusion is the same: soft global combination is preferred. It gets the best of both worlds regardless of whether the tasks are related.

The soft global combination results in this section correspond to the globalisation penalty configured with squared deviations (q = 2). Further comparisons in Appendix B.3 indicate no material improvement from the absolute deviation penalty (q = 1), so we restrict our attention to squared deviations hereafter.
Recommendations
The findings from these experiments suggest several recommendations for practitioners. First, consider globalising any forecast combination when tackling multiple forecasting tasks. The potential gains from globalisation can be significant, even for moderate levels of task-relatedness. Second, unless domain knowledge indicates the tasks are unrelated or strongly related, use soft global combination with cross-validation. Soft global combination with γ cross-validated is reasonably robust to task-relatedness, while the downside of applying hard global (local) combination to unrelated (strongly related) tasks is large. Last, when using optimal weights, employ global combination whenever possible, since that weighting scheme benefits most from globalisation. The benefits persist even when globalising in tandem with shrinkage.
Data and methodology
The European Central Bank SPF is an ongoing survey eliciting predictions for rates of growth, inflation, and unemployment from forecasters for the Eurozone. The survey has been conducted quarterly since 1999 Q1. In each round, the survey participants are asked to provide predictions of the three variables at several time horizons. We focus on the two rolling horizons in this paper, which are one and two years ahead of the latest available observation of the respective variable. For instance, in the 1999 Q1 survey, one-year forecasts corresponded to 1999 Q3 for growth, December 1999 for inflation, and November 1999 for unemployment.² The total number of forecasting tasks m = 6.
The SPF data is publicly available at the European Central Bank Statistical Data Warehouse (SDW).
Actual values of inflation and unemployment are also available at the SDW. Actual values of growth are available from Eurostat. We access data at the SDW using the R package ecb (Persson, 2022), and data from Eurostat using the R package eurostat (Lahti et al., 2017). The data used in this paper were retrieved on 17 April 2022. After merging the forecasts and actual values, between T = 85 and T = 90 observations are available. The first observations are 1999 Q3 (one-year growth), 1999 Q4 (one-year inflation and unemployment), 2000 Q3 (two-year growth), and 2000 Q4 (two-year inflation and unemployment). The last observation is 2021 Q4.
A notable feature of the SPF is that forecasters enter and exit the survey at different times. This aspect of the survey, coupled with periodic nonresponse, gives rise to a sizeable portion of missing data. To deal with this issue, we follow previous works (Matsypura et al., 2018; Radchenko et al., 2023) and filter the data to only include forecasters who respond for a reasonable number of periods. Specifically, the forecasters who

To handle missing values that remain after filtering, the covariance matrices of forecast errors are estimated using all complete pairs of observations:

Σ̂(k)_ij = |T(k)_i ∩ T(k)_j|⁻¹ ∑_{t ∈ T(k)_i ∩ T(k)_j} e(k)_it e(k)_jt for all (i, j) ∈ {1, ..., p}².

Here, T(k)_i denotes the periods in the training set where forecaster i provided a forecast for task k. Covariance matrices constructed in this manner are not guaranteed positive-definite. For this reason, we take the positive-definite matrix nearest to Σ̂(k) using nearPD from the R package Matrix (Bates et al., 2022). The forecast errors are standardised by the standard deviation of the forecast targets as estimated on the training set prior to estimating the covariance matrices.

² To simplify exposition, forecasts of inflation and unemployment are referred to by the quarter they belong to, e.g., December 1999 inflation and November 1999 unemployment are called forecasts of 1999 Q4.
Globalisation path
The first set of experiments study the evolution of out-of-sample forecast performance as the globalisation parameter γ is swept over its support (the 'globalisation path'). Here, we take 30 values of γ logarithmically spaced between 0.001 and 1000. As a measure of out-of-sample accuracy, we report the mean square forecast error on a testing set relative to that from local combination:

∑_{t=T̲}^{T̄} (y(k)_{t+h} − f̂(k)(γ)_{t+h|t})² / ∑_{t=T̲}^{T̄} (y(k)_{t+h} − f̂(k)(0)_{t+h|t})²,

where, for a given weighting scheme, f̂(k)(γ)_{t+h|t} is a global combination forecast of task k at time t + h produced using a training set up to time t with γ ∈ [0, ∞), and T̲ and T̄ are the first and last periods in the testing set. The denominator is the mean square forecast error from setting γ = 0, so this metric is the percentage improvement due to globalisation. We pick T̲ and T̄ so the testing set is the last five years to 2019 Q4.
The period after 2019 Q4, covering the COVID-19 recession and 2021-2022 inflation surge, is considered in separate experiments in Section 5.3.
Figures 4, 5, and 6 report the globalisation paths of optimal weights, optimal convex weights, and optimal equal weights for fixed shrinkage parameter λ = 0.1. The globalisation paths of optimal weights are smooth because the fitted weights are a smooth function of γ, as Proposition 2 implies, while those of optimal convex weights and optimal equal weights are nonsmooth. In the case of optimal convex weights, the convexity constraint makes the fitted weights nonsmooth in γ when it is binding. The path for optimal equal weights is a step function in γ due to the weights being discrete. Three ways of grouping the tasks are considered: grouping variable tasks (group 1: one-year growth, inflation, and unemployment; group 2: two-year growth, inflation, and unemployment); grouping forecast horizon tasks (group 1: one- and two-year growth; group 2: one- and two-year inflation; group 3: one- and two-year unemployment); and grouping all tasks (group 1: one- and two-year growth, inflation, and unemployment). The reader is reminded information flows only between tasks belonging to the same group. Across all weighting schemes and tasks, there is always a globalisation path that attains its minimum at some positive amount of globalisation. The limiting case γ → ∞, hard global combination, is sometimes helpful and sometimes harmful. For instance, growth and inflation realise roughly 15% improvement from hard global combination (optimal weights, grouped variables) at the two-year horizon, while unemployment deteriorates by about 40% at the same horizon. This behaviour might be attributable to growth and inflation being difficult tasks at the two-year horizon (e.g., expert forecasts of those tasks are not responsive to the COVID effects in 2020 and 2021, as Figure 3 shows), thus providing a noisy signal to unemployment. However, even in the cases where hard global combination on its own is not useful (such as one- and two-year unemployment forecasts), the optimal choice of γ is still positive, and
soft global combination can extract benefits.The results lead us to the following practical suggestions regarding the groupings.For a one-year growth forecast, using all available information (i.e., the 'grouped all' version) is beneficial as it is the best or close to the best performer across the different weights.For the same reason, we also recommend this grouping for two-year inflation and two-year unemployment forecasts.For one-year unemployment, one should group variables as this grouping is the best or close to the best across the different weights.For one-year inflation, grouped horizons deliver stable improvement across different weights (even though grouping variables works best for optimal weights).Finally, for a two-year growth forecast, we recommend grouping horizons but avoiding optimal weights.The convexity of the weights seems to be critical to avoid instabilities of negative weights, which was recently documented by Radchenko et al. (2023).
If one is working with only a single forecasting horizon, the selection of grouping becomes redundant. Also, in other applications, grouping all tasks seems a sensible default provided γ is chosen judiciously on a task-by-task basis. This default option can be improved by using additional cross-validation to help determine which grouping performs best.
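To make the interpolation between local and global combination concrete, the following toy sketch (an illustration only, not the paper's estimator) minimises the sum of per-task combination variances, with each task's forecast-error covariance shrunk toward a scaled identity by λ, plus a fusion penalty γ pulling every task's weight vector toward the cross-task average. The function name, the exact penalty form, and the shrinkage target are assumptions for illustration; γ → 0 recovers local combination, while large γ approximates hard global combination.

```python
import numpy as np
from scipy.optimize import minimize

def soft_global_weights(covs, gamma, lam=0.1):
    """Toy soft global combination across K tasks sharing p forecasters.
    Minimises sum_k w_k' S_k w_k + gamma * sum_k ||w_k - wbar||^2
    subject to each w_k summing to one, where S_k is a shrunk
    forecast-error covariance and wbar is the cross-task average."""
    K, p = len(covs), covs[0].shape[0]
    # simple linear shrinkage of each covariance toward a scaled identity
    covs = [(1 - lam) * S + lam * (np.trace(S) / p) * np.eye(p) for S in covs]

    def objective(w_flat):
        W = w_flat.reshape(K, p)
        wbar = W.mean(axis=0)
        local = sum(W[k] @ covs[k] @ W[k] for k in range(K))
        fusion = gamma * sum(np.sum((W[k] - wbar) ** 2) for k in range(K))
        return local + fusion

    cons = [{"type": "eq", "fun": lambda w, k=k: w.reshape(K, p)[k].sum() - 1}
            for k in range(K)]
    res = minimize(objective, np.full(K * p, 1 / p), constraints=cons)
    return res.x.reshape(K, p)
```

With γ = 0 each task solves its own minimum-variance problem independently; as γ grows, the K weight vectors are squeezed toward a single common vector, mimicking hard global combination.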
Tuned globalisation
The second set of experiments are broader comparisons that acknowledge that the level of globalisation requires tuning in practice. For this purpose, we use leave-one-out cross-validation, a valid procedure provided the combination forecast errors are uncorrelated (Bergmeir et al., 2018). The value of γ is tuned over ten values logarithmically spaced between 0.001 and 1000 on a per-task basis, so different tasks need not use the same value. To allow for comparisons of forecast accuracy across weighting schemes, we report the mean square forecast error relative to that from equal weights, a common benchmark in practice:
$$\sum_{t}\big(y_{t+h}-f^{(k)}_{t+h|t}\big)^2\Big/\sum_{t}\big(y_{t+h}-\bar f^{(k)}_{t+h|t}\big)^2,$$
where $f^{(k)}_{t+h|t}$ is an arbitrary combination forecast and $\bar f^{(k)}_{t+h|t}$ is the equally-weighted combination forecast. Values of this metric less than one indicate superior performance to equal weights.
Table 1 reports the average value of the performance metric across the six tasks, with the minimal and maximal values among the tasks in brackets. The shrinkage parameter is λ = 0.1; we study tuned λ next. The last five years of the data are again studied, but we now include the period 2020 Q1 to 2021 Q4 to evaluate recent performance during the COVID-19 recession and 2021-2022 inflation surge. Figure 3 highlights how the quarters on and after 2020 Q1 contain several outliers. To prevent these outliers dominating the performance metric, the testing set is split before and after 2020 Q1. Likewise, to avoid the outliers contaminating the estimated covariance matrices and thus the estimated weights, the training set is stopped at 2019 Q4.
With few exceptions, soft global combination improves on local combination. The improvements are generally greatest pre-2020. The more minor improvements post-2020 are possibly a consequence of the recent period of deteriorated economic conditions, during which task-relatedness could be less stable. In some instances, hard global combination outperforms both soft global combination and local combination. However, as in the previous section, it also sometimes underperforms. On the other hand, the data-driven determination of the globalisation level for soft global combination produces good combinations that consistently forecast well.
Optimal weights realise the most significant gains from globalisation among the three weighting schemes: soft global combination (grouped all) places first in terms of average performance across tasks (pre-2020), compared with local combination, which places last. Moreover, globalisation leads to smaller maximal loss for optimal weights. Though not always beating optimal weights according to average performance, optimal convex weights and optimal equal weights have more consistent performance across tasks, especially pre-2020. With a suitable amount of globalisation, each weighting scheme can beat the notoriously difficult benchmark of equal weights for one or more task groupings.
Tuned shrinkage
The results of Table 1 are from tuning the globalisation parameter γ while holding the shrinkage parameter λ fixed. It is insightful to evaluate whether there are further benefits from tuning λ in addition to γ. To this end, we cross-validate both parameters here. We focus on optimal (convex) weights to keep computation time reasonably low. Table 2 reports the results. Optimal weights witness an improvement across the board relative to the results of Table 1 (for local, hard global, and soft global combinations). Though it is known (see Roccazzella et al., 2022) that optimal weights benefit from (carefully tuned) shrinkage, our result is the first documentation of similar behaviour for global combination. The results for optimal convex weights, whose nonnegativity constraint already imparts a form of shrinkage, are similar to Table 1. Our core finding remains the same in both cases: globalisation via soft global combination is typically beneficial.
Forecast combination puzzle
In more than 50 years of forecast combination literature spanning a myriad of weighting schemes, 'forecasters still have little guidance on how to solve the forecast combination puzzle' (Wang et al., 2023). Our empirical results show that before the COVID-19 recession (2017-2019), soft global combination not only improves upon local and hard global combinations but also upon equal weights. Our synthetic experiments also demonstrate improvement in stable conditions; see Appendix B.4. However, results post-2019 are mixed, with around half of the cases where soft global combination performs better than equal weights. One possible explanation is a structural break produced by the COVID-19 recession. Rossi (2021) showed that the 2007-2008 global financial crisis (GFC) affected forecasting performance significantly. Currently, no similar research is available for the COVID period. However, Figure 3 leaves no doubt that for inflation and growth, the effects of COVID far exceed those observed during the GFC. Simple tests for a structural break in Table 3 support this claim. The variance of the average forecast error is significantly different pre- and post-COVID at the 1% level (except for the unemployment forecasts). As there is little data available post-COVID, reestimated weights are likely to have large variability that negates the benefits of soft global combination. If weights from the pre-COVID period are used, they will not necessarily be optimal and may not provide the benefits observed under stable conditions. Wang et al. (2023) recommend equal weights in such cases. Until more data is available, equal weights are probably suitable for the post-COVID period.
With more post-COVID data, soft global combination should quickly catch up as a strong competitor and a potential solution to the forecast combination puzzle; see also Frazier et al. (2023).
The benefits of equal weights centre around a substantial reduction in variance at the cost of introducing a small bias; see Claeskens et al. (2016). A more recent approach by Blanc and Setzer (2020) requires an explicit solution to analyse the bias-variance trade-off. In the absence of an explicit solution in our case, a practitioner needs to empirically validate whether soft global combination beats the equally-weighted combination in their setting. Our findings suggest the likelihood of improving over equal weights is high.
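The variance-reduction argument behind equal weights can be illustrated with a small simulation (illustrative only, not from the paper): when forecasters are exchangeable, equal weights are optimal in population, and plug-in 'optimal' weights estimated from a short sample lose out purely through estimation noise.

```python
import numpy as np

rng = np.random.default_rng(1)
p, T, reps = 10, 40, 2000
msfe_eq, msfe_opt = [], []
for _ in range(reps):
    # i.i.d. forecast errors: the true covariance is the identity,
    # so equal weights are the population-optimal combination
    E_train = rng.normal(size=(T, p))
    e_test = rng.normal(size=p)
    S = np.cov(E_train, rowvar=False)
    w_opt = np.linalg.solve(S, np.ones(p))  # plug-in minimum-variance weights
    w_opt /= w_opt.sum()
    w_eq = np.full(p, 1 / p)
    msfe_opt.append((e_test @ w_opt) ** 2)
    msfe_eq.append((e_test @ w_eq) ** 2)

# estimation noise makes the plug-in weights worse out of sample
print(np.mean(msfe_eq), np.mean(msfe_opt))
```

With ten forecasters and forty training observations, the equal-weight combination attains roughly the population-optimal error variance of 1/p, while the estimated weights inflate it, which is exactly the bias-variance trade-off described above.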
Concluding remarks
To date, the problem of combining economic forecasts has been handled on a per-task basis, with the combination for each variable and forecast horizon learned independently of other variables and horizons.
When the forecasting tasks are related, as economic theory and evidence suggest, this approach of learning the combinations using only local information is potentially suboptimal. This paper investigates the value of a global approach, where task-relatedness is directly exploited to improve the quality of combinations. At the heart of our approach is a principled framework that accounts for the level of homogeneity across tasks by flexibly interpolating between fully local and fully global combinations. In addition to unifying local and global approaches under one umbrella, the new framework accommodates many existing weighting schemes.
Empirical evidence from the European Central Bank SPF suggests combinations of expert forecasts for rates of growth, inflation, and unemployment in the Eurozone benefit from some degree of globalisation, as do combinations of these same variables across one- and two-year horizons. Further empirical evidence on economic and financial data from the M4 Competition in Appendix D indicates similar benefits for combinations of model-based forecasts.
Our approach is not limited to point forecasts and can be extended to probabilistic forecasts. Consider, e.g., the optimal weights of Hall and Mitchell (2007) and Geweke and Amisano (2011) that, in the case of only one task, maximise the log-score when combining p individual predictive densities p_j(·; θ_j) using the historical observations y_1, ..., y_T:
$$\max_{w}\ \sum_{t=1}^{T}\log\Big(\sum_{j=1}^{p} w_j\, p_j(y_t;\theta_j)\Big),$$
with the weights constrained to the probability simplex. The density parameter θ_j^{(k)} can be different across tasks. This problem can be further enhanced using a shrinkage penalty or additional constraints, e.g., the high-moment constraints of Pauwels et al. (2023).
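For intuition, the single-task optimal pool of Hall and Mitchell (2007) and Geweke and Amisano (2011), which maximises the historical log-score over simplex weights, can be solved numerically. This is a generic sketch of that well-known problem, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def log_score_weights(densities):
    """Optimal density pool: maximise sum_t log(sum_j w_j p_j(y_t))
    over simplex weights, given a (T, p) array whose (t, j) entry is
    the j-th predictive density evaluated at observation y_t."""
    T, p = densities.shape

    def neg_log_score(w):
        # small constant guards against log(0) at the simplex boundary
        return -np.sum(np.log(densities @ w + 1e-12))

    res = minimize(neg_log_score, np.full(p, 1 / p),
                   bounds=[(0.0, 1.0)] * p,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
    return res.x
```

A multi-task extension in the spirit of this paper would add a penalty pulling the per-task weight vectors together, exactly as in the point-forecast case.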
Furthermore, our approach is based on the intuitive idea that a forecaster's competence in predicting one variable might contain some signal about their competence in predicting another. One can examine the connection between the accuracy of the combined forecast and individual forecaster characteristics. We leave this direction for future research.
An R implementation of the global forecast combinations in this paper is publicly available at https://github.com/ryan-thompson/global-combinations.
The proposed soft global approach is not harmed by it and is similar to hard global combination. These results suggest using soft global combination for shorter series, as it performs well even when the tasks are not related and improves significantly when they are related.
When a longer history is available (T = 100 or 150), Figures B.9 and B.10 show that hard global combination is harmful when task-relatedness is low or moderate, and even when it is strong (α = 2/3) for T = 150.
The reason for this result is the large amount of irrelevant information introduced by long time series. The irrelevant information does not harm our proposed soft global combination, as γ can be tuned to remove its effect. Soft global combination performs similarly to local combination when the task-relatedness is low, moderate, or even strong (α = 2/3). Of course, when the tasks are perfectly related (α = 1), the situation flips. Now, hard global combination is highly beneficial, as long series bring a lot of relevant information.
Again, soft global combination can extract the same benefits as hard global combination.
These results confirm the previous suggestion of using soft global combination, but now for longer series, as it can perform well whether tasks are related or not. In practice, one does not usually know the relatedness of the tasks. In our empirical study, e.g., inflation and unemployment are related by the Phillips curve. This relationship changes over time, with periods where the variables are strongly related and periods where they are weakly related. The advantage of soft global combination is that one does not need to know the strength of the relation to witness benefits.
The cross-validated value of γ is sometimes close to the full-information one and other times further away. This variation is likely due to estimation error from cross-validation, which is potentially considerable given the relatively small training sample. Structural breaks may also play a role and could be addressed via cross-validation with a rolling window. Variability in the cross-validated value of γ is typically largest for the two-year horizon tasks, likely because these are more difficult than the one-year tasks and hence noisier. Though cross-validation is a practical tool for tuning γ, there is certainly value in future work investigating alternatives (e.g., a formulaic characterisation of the optimal γ via asymptotic analysis).
For further insight into the globalisation parameters chosen by cross-validation, we present
Figure 1: Global and local forecast combination frameworks. The notation ⟨x, y⟩ = x⊤y represents the dot product of two vectors x ∈ R^p and y ∈ R^p. Local combination learns different weight vectors for each task independently of other tasks. Hard global combination learns one weight vector for all tasks. Soft global combination learns different weight vectors for each task while sharing information between tasks.
Figure 2: Mean square forecast error as a function of task-relatedness parameter α for 30 synthetic datasets with p = 50 forecasters and T = 50 samples. Vertical bars represent averages and error bars denote one standard error. All values are relative to oracle weights.
Figure 3: Data from the Survey of Professional Forecasters. Points represent forecasts and lines denote actual values of the forecast target. Point sizes reflect the number of equivalent forecasts when rounded to one decimal place.
Figure 4: Mean square forecast error of optimal weights as a function of globalisation parameter γ for the Survey of Professional Forecasters. Testing period is 2015 Q1 to 2019 Q4. The minimum of each curve is marked by a circle. All values are relative to local combination (γ → 0). The shrinkage parameter λ = 0.1. The x-axis is in log scale.
Figure 5: Mean square forecast error of optimal convex weights as a function of globalisation parameter γ for the Survey of Professional Forecasters. Testing period is 2015 Q1 to 2019 Q4. The minimum of each curve is marked by a circle. All values are relative to local combination (γ → 0). The shrinkage parameter λ = 0.1. The x-axis is in log scale.
Figure 6: Mean square forecast error of optimal equal weights as a function of globalisation parameter γ for the Survey of Professional Forecasters. Testing period is 2015 Q1 to 2019 Q4. The minimum of each curve is marked by a circle. All values are relative to local combination (γ → 0). The shrinkage parameter λ = 0.1. The x-axis is in log scale.
Figure B.8: Mean square forecast error as a function of task-relatedness parameter α for 30 synthetic datasets with p = 50 forecasters and T = 25 samples. Vertical bars represent averages and error bars denote one standard error. All values are relative to oracle weights.
Figure B.11 shows the behaviour of the cross-validated globalisation parameter γ and the cross-validated shrinkage parameter λ. As the sample size T increases, the globalisation parameter γ decreases, giving more weight to local information. By tapering down γ, soft global combination can deal with the additional noise that is harmful to hard global combination even when the task-relatedness is moderate and especially when it is low. When the tasks are perfectly related, γ remains at its maximum. The shrinkage parameter λ also decreases with sample size as the covariance matrix estimator becomes increasingly reliable.
Figure B.12: Mean square forecast error as a function of task-relatedness parameter α for 30 synthetic datasets with p = 50 forecasters and T = 100 samples. Vertical bars represent averages and error bars denote one standard error. All values are relative to oracle weights.
Figure D.15: Mean square forecast error for economic and financial data from the M4 Competition. Vertical bars represent averages and error bars denote one standard error. All values are relative to equal weights.
Table 1: Mean square forecast errors for the Survey of Professional Forecasters with cross-validated globalisation parameter. Averages over all tasks are next to minimums and maximums over all tasks in brackets. All values are relative to equal weights.
Table 2: Mean square forecast errors for the Survey of Professional Forecasters with cross-validated globalisation and shrinkage parameters. Averages over all tasks are next to minimums and maximums over all tasks in brackets. All values are relative to equal weights.
Table 3: Tests for a structural break in the average forecast error between pre- and post-COVID periods (up to 2019 Q4 and after 2019 Q4). p-values are reported from a t test for a difference in means and an F test for a difference in variances.
Table C.4: Comparisons of cross-validation tuning with full-information tuning for the Survey of Professional Forecasters over the period 2017 Q1 to 2019 Q4. The globalisation parameter γ is reported in logarithmic scale. The shrinkage parameter λ = 0.1. All tasks are grouped. Standard errors are reported in parentheses.
|
v3-fos-license
|
2023-07-12T07:48:45.101Z
|
2023-06-27T00:00:00.000
|
259733415
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/AE2F52371D601C3C1216B836F92B6572/S0003055423000576a.pdf/div-class-title-the-view-from-the-future-aurobindo-ghose-s-anticolonial-darwinism-div.pdf",
"pdf_hash": "60f829739060f12c3ee27e2d68431f3634f9f0d9",
"pdf_src": "Cambridge",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:910",
"s2fieldsofstudy": [
"Political Science",
"History"
],
"sha1": "9857ae9375ac3812b74c13261ed76aaea19a4eb0",
"year": 2023
}
|
pes2o/s2orc
|
The View from the Future: Aurobindo Ghose’s Anticolonial Darwinism
Darwinism and evolutionary theory have a bad track record in political theory, given their entanglements with fin-de-siècle militarist imperialisms, racialized hierarchies, and eugenic reformism. In colonial contexts, however, Darwinism had an entirely different afterlife as anticolonialists marshaled evolutionist frameworks to contest the parameters of colonial rule. This article exhumes just such an evolutionary anticolonialism in the political thought of Aurobindo Ghose, radical firebrand of the early Indian independence movement. I argue that Ghose drew on a nuanced reform Darwinism to criticize British imperialism and advance an alternative grounded in the Indian polity’s mutualism. Evolutionism formed a conceptual ecosystem framing his understanding of progress—national, civilizational, and spiritual—and reformulating the temporal and conceptual coordinates of the liberal empire he resisted. The article thus exposes the constructiveness of anticolonial politics, the hybridity of South Asian intellectual history, and the surprising critical potential of Darwinism in colonial settings.
In 1919, in a concentrated meditation on India's relation to external influence, Aurobindo Ghose, onetime nationalist firebrand, intellectual lightning rod of the so-called "extremist" faction of Congress, and mystic of Pondicherry, proclaims his commitments to "social and political liberty, equality and democracy." "If I accept any of these ideas," he goes on,
it is not because they are modern or European… but because they are human [and] of the greatest importance in the future development of the life of man… [T]he effective idea of democracy, present as an element in ancient Indian as in ancient European polity and society, is… a necessity of our growth… [W]e must not take it crudely in the European forms, but must go back to whatever corresponds to it, illumines its sense, justifies its highest purpose in our own spiritual conception of life and existence… [A] living organism, which grows not by accretion, but by self-development and assimilation, must recast the things it takes in to suit the law and form and characteristic action of its biological or psychological body. (Ghose 1997e, 47-8)
The tract is remarkable in several respects. First, it offers a glimpse into anticolonialism's constructiveness, predicated on an endogenous Indian democracy connecting past and future practices. The West, Ghose sees, had no particular claim to the ideal of self-determination. More broadly, its scope evinces the hybridity, complexity, and syncretism of South Asian political thought at the dawn of the twentieth century. Finally, it hints at the Darwinist underpinnings of Ghose's anticolonialism, situating India's prospects in an evolutionary adaptation through which the social body would digest those political principles fitted to its "fundamental motives" (Ghose 1997d, 86).
It also encapsulates this article's preoccupations. I aim to show how Aurobindo Ghose, one of the early Indian anticolonial movement's leading lights, consolidated a wide range of fin-de-siècle political Darwinisms into a penetrating critique of British imperialism and of the liberalism he saw at its root. In so doing, I engage a growing scholarship on anticolonial political theory examining the reconstitution of "foundational questions of modern politics" in colonial contexts (Kapila 2021, 5). While scholars in history and literary studies have for decades engaged their nuances, political theorists have only recently begun to broach disciplinary and conceptual matters raised by and through anticolonial thought (Elam 2017; 2021; Getachew 2016; 2019; Getachew and Mantena 2021; Idris 2022; Iqtidar 2022; Klausen 2020; Manjapra 2020; Pham 2020; Sultan 2022; Temin 2022; Wilder 2015). J. Daniel Elam suggests that beyond the struggle for national independence, anticolonialism comprises a "philosophical movement and critical analytic" (Elam 2017) widening and unsettling our political imaginary. At once engaging, rejecting, and transcending the terms of Western modernity, it reformulates political theory's categories by compelling us to "rethink, or unthink, the supposedly European parameters of modern thought" (Wilder 2015, 9). Reflecting on the decolonization of political thought, Humeira Iqtidar conceptualizes this rethinking "as a layered process of appropriating, reworking, and reinterpreting ideas, and bringing them in to a wider conversation beyond Europe's parochial experience to invert colonial hierarchies of ideas" (Iqtidar 2021, 1146). Ghose's evolutionary anticolonialism captures just this juncture of appropriative, re-imaginative, and anti-hierarchal thinking, modeling an intellectual dexterity stretching political theory beyond its European limits.
A second objective is to contribute to modern South Asian intellectual history exploring what Shruti Kapila characterizes as the "Indian political" (Kapila 2014; see also Baxter 2016; Bayly 2011; Bose and Manjapra 2010; Elam and Moffat 2016; Goswami 2004; Kapila 2007; 2010; Maclean and Elam 2013; Parasher 2022; Sartori 2008). By exhuming the underappreciated Darwinism in Ghose's political philosophy, I hope to complement "new histories of political thought in India" centering anticolonial thinkers, agitators, and revolutionaries (Elam and Moffat 2016, 514). Drawing on essays penned in the radical broadsheets that Ghose published in the 1910s and just prior (Bande Mataram, Karmayogin, and Arya), I show that evolutionary theory formed a conceptual ecosystem framing his understanding of progress (national, spiritual, and civilizational) and reformulating the temporal coordinates of the liberal empire he sought to resist. To be sure, Darwinism is one among many influences, Indian and Western, inflecting Ghose's political thought and the spiritualism with which it became increasingly integrated as of approximately 1908. 1 Intellectual historians have long noted the sway of German idealism, and of Hegel in particular, in his social and political philosophy (Klausen 2014; Maitra 1956; Sartori 2008; 2010; Varma 1976; Wolfers 2016; 2017). Few, however, have recognized the extent of Ghose's debts to various political Darwinisms circulating in the late nineteenth and early twentieth centuries, and still fewer, how these furnished a language for articulating a distinctive anticolonial and anti-liberal politics. 2 I argue that evolutionism formed the backbone of a notion of progress that contested Western liberalism, traced its slide into imperialism, and charted an Indian alternative. 3 It grounded both a radical critique of colonial power and the constructive, future-oriented vision of political order that Ghose saw superseding it.
Finally, more broadly, I expose the surprising critical potential of political Darwinism and social evolutionism in colonial contexts, pressed as they were into serving anticolonial ends. Darwinism is of course an unlikely candidate for advancing anti-imperialist politics, given its long association with chauvinist militarisms, racial supremacism, and civilizational hierarchies. Political theory has found little to redeem in the evolutionisms worming their way into turn-of-century social and political thought. 4 Yet Darwinism and evolutionary theory were more protean than the common view allows, fitting more or less comfortably into positions spanning the period's political spectrum, from anarchism (Adams 2016), to socialism (Stack 2000), to libertarianism and conservatism (Hofstadter [1944] 1955). Rather than a fixed doctrine, Mike Hawkins treats Darwinism as "a cultural unit such as an idea (or set of ideas)… capable of being replicated in diverse circumstances" (Hawkins 1997, 16). These Darwinist replications spilled outside the West and took on novel political tenors in colonial settings. If some Euro-American variants tended toward "laissez-faire liberalism, racism or imperialism" (Hawkins 1997, 7), Darwinism and evolutionism were in the subcontinent shaped by, and drawn into, a climate of ascendant nationalism (Killingley 1995, 174). A purpose of this article is thus to widen our view of Darwinism's political currency by uncovering the anticolonialisms it served in India and beyond.
I excavate several evolutionist threads in Ghose's anticolonialism, spanning a little over a decade. The first stem from his brief, though luminous, career of direct political activism; the second, from a series of political essays he wrote between 1918 and 1921, following a near-decade of withdrawal from active politics during which he formulated a complex yogic philosophy. Evolutionism, I show, is a guiding thread connecting these intellectual, spiritual, and political endeavors, the conceptual language through which Ghose figured humanity's advancement widely (in the arc of human civilization) and narrowly (in India's movement beyond colonial rule). It comprises an organizing meta-principle, a polyglot theoretical ecosystem framing his understandings of social, political, species-wide, and spiritual growth, recurring at various points and in various guises at various junctures of his life and ideas.
These evolutionisms carried distinctive political grammars, which Ghose engaged and, in their liberal iterations, criticized. He distinguishes two evolutionist logics, which I'll call liberal evolutionism and social evolutionism. Liberal evolutionism framed human advancement in terms of natural selection, unimpeded competition, and civilizational fitness: as the contest between social groups in an evolutionary struggle typically associated with social Darwinism. Social evolutionism, conversely, took humanity's progress as based on communality, mutual aid, and concerted ethical steering. I argue that Ghose saw liberal evolutionism as operative at an early phase in humanity's advancement, associated with a still-immature species mired in competition and struggle, which social evolutionism would ultimately surpass. While liberal principles based on the "survival of the fittest" contributed to progress in a juvenile humanity, communalistic principles of mutuality and non-competition would prove an evolutionary advantage for a better developed species.
This was not an entirely novel position: it mirrored arguments advanced by Western "reform Darwinists" at the turn of the century (Bannister 1979). Ghose's originality, however, lay in drawing these political Darwinisms into the colonial context and transmuting them into an incisive critique of liberal imperialism. Where Western evolutionists such as Herbert Spencer, William Graham Sumner, T. H. Huxley, Alfred Russel Wallace, Benjamin Kidd, and Lester Ward grappled over whether evolutionary laws might steer ethics and social policies, Darwinism took on an entirely different political life in colonial peripheries. In Ghose's hands, it animated a sharp critique of European modernity, of its presumed advancement over Indian civilization, and of its liberal ethos and institutions. More constructively, it also underpinned the "complex communal freedom and self-determination" (Ghose 1997a, 405) he recovered in the Indian polity. Ghose's evolutionism thus braced his resistance to liberal imperialism and his vision of a future politics grounded in mutualism, non-competition, and global interdependence.
The argument proceeds as follows. I start by sketching the evolutionist parameters of Ghose's early anticolonialism (1906-10), which provincializes European claims to civilizational superiority by recasting the narrative of development from the limited scale of European modernity to the larger arc of human evolution. I then move to the late 1910s and early 1920s, when he published a series of essays on Indian politics, progress, and civilization. In them, I uncover liberal and social evolutionisms, which Ghose associated, respectively, with Western imperialism and Indian communalism. I elucidate Ghose's sharp polemic against the former and the alternative to it that he found in the latter. The conclusion widens the lens by uncovering Darwinism's critical afterlives in anticolonialisms within and beyond the subcontinent.
SCALING UP: EVOLUTIONISM IN GHOSE'S EARLY NATIONALISM
Aurobindo Ghose was born in Calcutta in 1872, the son of a well-to-do family immersed in the reformist Brahmo Samaj movement. His father, Krishna Dhun Ghose, developed an abiding interest in Darwin and evolutionism while pursuing his medical studies in Edinburgh. Aurobindo and his siblings attended a Christian anglophone boarding school in Darjeeling until 1879, when the family moved to England. Despite his distaste for the Christian intonations of his education, Ghose excelled and won a scholarship to Cambridge, which he attended for two years. He then secured an appointment in the civil service at Baroda, returning to India in 1893, where he taught himself Sanskrit and Bengali.
Though he'd been openly critical of the British empire since his days at Cambridge, Ghose deepened his commitment to Indian independence on his return to the subcontinent. He organized an underground revolutionist group (ineffective though it was) and met influential members of what would ultimately become the "extremist" faction of Congress. Following Bengal's 1905 partition, Ghose moved to Calcutta and began publishing Bande Mataram, a radical nationalist broadsheet, alongside Bal Gangadhar Tilak. During this period, he became a public champion of non-cooperation and passive resistance while privately advancing more radical revolutionary efforts. In 1906-08, he ascended to the leadership of the nationalist movement and came to be among its most uncompromising, advocating India's unqualified independence. He was jailed for a year in 1908, charged with conspiracy and "waging war against the King" in the Alipore Bomb Case, and spearheaded two more anticolonial periodicals on his emergence (Karmayogin and Dharma). By 1910, he turned his attention from political to spiritual matters, moving to an ashram in Pondicherry where he remained and wrote prolifically until his death in 1950. 5
Ghose appears to have inherited his father's interest in evolutionism, which comprises a conceptual throughline linking his early activism, the yogic philosophy he developed as of the 1910s, and his postwar political essays. It also shaped his anticolonialism. Here, he was not alone. Dermot Killingley (1995) and C.
Mackenzie Brown (2012) note the pervasiveness of Darwinism and evolutionism in the thought of leading turn-of-the-century figures, including Bankim Chandra Chatterjee, Shyamji Krishnavarma, and Swami Vivekananda. As in the West, social Darwinism also emerged in India, painting lower castes, Muslims, Indigenous tribes, and other "undesirable" populations as inferior or unfit, or as causing the degeneration of Hindu civilization (Killingley 1995, 184). Darwinism nonetheless provided a vital conceptual repository for anticolonial politics, Ghose's among them (Kapila 2007; Marwah 2019).
In his early political period, Ghose drew on evolutionism to provincialize conceits of civilizational superiority by vastly extending the timescale of historicist frameworks placing Europeans at the apex of human advancement. In articles, speeches, and essays, he recast European modernity and Indian history to illustrate the relative brevity of Western ascendancy. By stretching the measure of progress from the European context to the arc of human evolution, he highlighted the limitedness of Western notions of social improvement. On this larger temporal map, tracing the motions of human civilizations dating back to Europe's infancy, one could see that "Asia is long-lived, Europe brief and ephemeral… Europe lives by centuries, Asia by millenniums." "In the place which is left vacant by the decline of the European nations," Ghose prognosticated, "Asia young, strong and vigorous…is preparing to step forward and possess the future" (Ghose 1907). This evolutionary standpoint, loosely construed though it is, inverts the civilizational story. Europe was advanced only from within its own constrained understanding of progress, consisting of industrial development, militarist expansionism, secularism, and rationalism. But the measures themselves revealed a cramped view of social evolution destined to wear itself out by its sheer vacuity. It was, then, "the office of Asia to take up the work of human evolution when Europe comes to a standstill," Ghose averred. "Such a time has now come in the world's history" (Ghose 1908a).
At this juncture, Ghose is giving an evolutionist gloss to a common trope of the period, as anticolonialists of all stripes upturned Eurocentrist historicisms by appealing to Indian civilization's longevity (Bayly 2011; Prakash 1999). Excavating what Partha Chatterjee describes as a constructed "classical" past (Chatterjee 1993, 95-115), they undercut the charge of backwardness by illuminating the depth of Indian society and knowledge, which far predated Europe's. "Hindus could use the vast scale of evolutionary time as ammunition in their resistance against Western intellectual hegemony," Killingley observes, and "claim to be on the side of enlightenment against the Christians" (1995, 190). By expanding civilization's timescale to the evolutionary level, anticolonialists reclaimed an epistemic patrimony by exhuming endogenous traditions of thought demonstrating the falsity of Indian "stagnancy" (Prakash 1999). Ghose's evolutionism was, then, at this point more patina than substance. This would change in the following decade.
A DEEPER EVOLUTIONISM: REFORM DARWINISM IN COLONIAL INDIA
Ghose left political life in 1910, moving to an ashram in Pondicherry where he would spend the rest of his days. His withdrawal from active politics, however, in no way abated his thinking or writing on India's political prospects. Dennis Dalton sees his constructive anticolonial project as emerging in this "second phase," between 1910 and 1921, as Ghose "reached the summit of his capacity as a thinker only after his withdrawal from political activity" (Dalton 1982, 86). During this period, Ghose developed an evolutionary philosophy that more deeply engaged Darwinism and other evolutionisms (Brown 2012, 156). While rejecting Darwin's materialism, he saw evolutionism as "the key-note of the thought of the nineteenth century," affecting "all its science and its thought-attitude," along with "its moral temperament, its politics and its society" (Ghose 1998, 169; Raina and Habib 1996, 15). Over the 1910s and early 1920s, he wrote voluminously on natural, spiritual, social, and political evolution, leading C. Mackenzie Brown to characterize him as "the foremost Hindu evolutionary thinker of the 20th century" (Brown 2012, 160; Dalton 1982).
Between 1914 and 1919, Ghose published a stream of essays in Arya elaborating the "integrative evolutionism" that would shape The Life Divine (Brown 2012; see also Singh 1963, 69-72). This was his principal philosophical work on spiritual evolution, which situated humanity on an evolutionary scale (Iyengar 1945, 271).6 Ghose draws on "evolution which the Darwinian theory first made plain to human knowledge" to argue that the struggle for life "is not only a struggle to survive, it is also a struggle for possession and perfection, since only by taking hold of the environment whether more or less, whether by self-adaptation to it or by adapting it to oneself… can survival be secured, and equally is it true that only a greater and greater perfection can assure a continuous permanence, a lasting survival. It is this truth that Darwinism sought to express in the formula of the survival of the fittest" (Ghose 2005, 211-2). Ghose's evolutionism here integrates several ideas indebted to Herbert Spencer.7 First, evolution is a meta-principle governing individual, group-based, and civilizational progress.8 Second, his appeal to the "survival of the fittest" - which, though Darwin came to accept it, originated with Spencer - imports an ambiguous notion of fitness (an adaptedness to particular conditions, or a more generalized capacity for survival over competitors?). Finally, Ghose integrates the directionality of Spencer's evolutionism, which Spencer took as a universal law by which simpler and less perfected forms of life developed into increasingly complex and ameliorated ones.
In these instances, Ghose follows a line of leading turn-of-the-century figures who developed spiritualist evolutionisms in response to the Darwinian revolution. As in the West, Indian thinkers grappled with the cosmological implications of Darwinism's thoroughgoing materialism, by turns integrating and rejecting principles of natural selection in relation to Hindu notions of birth, death, creation, reincarnation, evolution, and involution within and beyond the organic world. Their syntheses were also shaped by nationalist ambitions to reconcile Hinduism with advances in modern science, particularly within revivalist circles that came to prominence in this period (Bayly 2011; Nanda 2020; Prakash 1999). As early as 1875, Bankim Chandra Chatterjee declared Hinduism's alignment with the mechanisms of natural selection, claims later taken up and bolstered by Theosophists seeking to "recover" evolutionary principles in Hindu scriptures (Bevir 2020; Nanda 2011; Singleton 2007). Keshab Chunder Sen advanced an "Avataric evolutionism," tracing the four stages of matter's transformation, from gross elements to vegetative life, to animality, to humanity, to divinity (Brown 2012, 114). Trends in Western thinking helped consolidate this fusion of spiritualism, science, and evolutionism under the pall of colonial rule. The Oxford Sanskritist Monier Monier-Williams proclaimed that Hindus were "Darwinians many centuries before Darwin; and Evolutionists many centuries before the doctrine of Evolution had been accepted by the Scientists of our time" (Monier-Williams 1891, xii). The spread of Henri Bergson's vitalism lent credence to the contention that the Indian variant supplemented Darwin's incompletely materialistic theory by accounting for evolution's operation at a higher (spiritual) level (Brown 2012, 159-60).
Evolutionism thus became intimately braided with Hindu spiritualism and anticolonial nationalism, buttressing "Occidentalist" claims to India's advances over Western civilization. This confluence was particularly pronounced in Swami Vivekananda, who significantly influenced Ghose's spiritual and political thinking (Brown 2012, 131-54; Dalton 1982, 29-58; Raina and Habib 1996, 17; Singleton 2007, 130-1). Vivekananda's "Modern Advaitic Evolutionism" accepted Darwin's struggle doctrine as operative in the natural world but supplemented it with a Lamarckian theory of evolutionary transmutation across reincarnations. By tracing spiritual evolution through cycles of rebirth, Vivekananda recovered an overall cosmological purposiveness evacuated by Darwin's starkly mechanistic postulation of aimless evolutionary transformation. Vivekananda also asserted the priority of Hinduism's claims over evolutionism, stating that the "idea of evolution was to be found in the Vedas long before the Christian era" (cited in Brown 2012, 141) and that Patanjali was "the father of evolution, spiritual and physical" (cited in Singleton 2007, 131). While Ghose's spiritualist evolutionism was indebted to Vivekananda's, he superseded it by incorporating the latest in Western debates on organic evolution (Brown 2012, 154). This became integrated with his political Vedantism, which saw "the final fulfilment of the Vedantic ideal in politics" as "the true Swaraj for India" (Ghose 1908b). The Vedanta "provide[d] a metaphysical defense of the idea of the country as the Mother and as divine" (Varma 1976, 229), grounding the spiritualist nationalism that became increasingly entrenched following Ghose's 1908 jailing.9 In this context, Ghose's evolutionism is embedded in an overarching projection of humanity's spiritual advancement and of the nationalist movement's role in it, a synthesis of Vedantic cosmology, Hegelian idealism, Darwinist evolutionism, and Nietzschean notions of self-overcoming.
The scholarship addressing Ghose's evolutionism nearly invariably situates it in this light - in relation to the Vedantism inflecting his nationalism and to his vision of humanity's progress toward divinity. But the emphasis on Ghose's spiritual evolutionism obscures Darwinism's impacts on his political thought and on the anticolonialism he developed in the 1910s and 1920s. While evolutionism undoubtedly merged with Hinduism in his spiritualism and early nationalism, it took a markedly political turn in later essays addressing politics, culture, and colonialism in India and abroad. These essays - published in Arya between 1914 and 1921 and gathered together as The Human Cycle (1916-18), The Ideal of Human Unity (1915-18), and The Foundations of Indian Culture (1918-21) - are among Ghose's most sustained reflections on politics. In them, he broaches domestic and international relations, political ideologies, social evolution, colonial rule, and much more in a distinctly corporeal register and through a Darwinist lens.
In several of these essays, Ghose appears to adopt social Darwinism's conceptual parameters, treating competition, struggle, fitness, natural selection, and adaptation as human evolution's operative principles. He also draws out their affinities with liberal commitments to political and economic non-interference, taking intervention as impeding otherwise "natural" selective processes improving the stock of a given society, race, or civilization. In this liberal evolutionist view, progress is driven by struggle, antagonism, and competition, enabling the best and fittest to rise up in a kind of existential meritocracy.
These are the terms in which Ghose characterizes the contest between India and Britain in The Foundations of Indian Culture, a series of essays responding to William Archer's depiction of Indian culture, art, and religion as "a repulsive mass of unspeakable barbarism" (Ghose 1997d, 55).11 Here, Ghose figures India's resistance to colonial domination as an evolutionary clash of civilizations. He invokes liberal evolutionism's vernacular to frame the schism between India's "predominantly spiritual" and Europe's "predominantly material" principles as a "war of cultures" (Ghose 1997d, 55-6). In starkly social Darwinist terms, he proclaims that "by the law of struggle which is the first law of existence in the material universe, varying cultures are bound to come into conflict. A deep-seated urge in Nature compels them to attempt to extend themselves and to destroy, assimilate and replace all disparates [sic] or opposites… [T]he civilization which neglects an active self-defence will be swallowed up and the nation which lived by it will lose its soul and perish. Each nation is a Shakti or power of the evolving spirit in humanity and lives by the principle which it embodies… The principle of struggle has assumed the large historical aspect of an agelong clash and pressure of conflict between Asia and Europe" (57). Portending their accelerating rivalry, India's rise in global affairs was "already intensifying the attempt, natural and legitimate according to the law of competition, of European civilization to assimilate Asia" (60).
Cast in this light, the confrontation of Indian nationalism and European colonialism constitutes a battle between opposing ideals of social existence played out on the global stage. "The principle of struggle, conflict and competition," Ghose contends, "still governs and for some time will still govern international relations" (63). This Darwinian dogfight yields one of two possibilities: "[e]ither India will be rationalised and industrialised out of all recognition and she will be no longer India or else she will be the leader in a new world-phase" (65).12 In these instances, Ghose adopts a liberal evolutionist framework in which unconstrained grappling for civilizational preeminence yields fitness. This mirrors Western militarist Darwinisms that, Paul Crook notes, took "struggle [as] necessary for the genetic health of a species" (Crook 1994, 77). Without indulging their racialist excesses, Ghose shares ground with social Darwinists such as Jules de Gaultier (1912), who saw the conflict between nations as "an expression of social Darwinism" (254).
However, this is not Ghose's final word on the matter, as he goes on to show the deficits of liberal evolutionism. Retaining the evolutionist schema, he relativizes both European and Indian claims to civilizational superiority by exposing their shared parochialism. "[C]ivilization and barbarism," he avers, "are words of a quite relative significance. For from the view of the evolutionary future European and Indian civilization at their best have only been half achievements, infant dawns pointing to the mature sunlight that is to come. Neither Europe nor India nor any race, country or continent of mankind has ever been fully civilized from this point of view" (Ghose 1997d, 85-6). Here, Ghose looks beyond the liberal evolutionism figuring Asia and Europe as locked in an existential duel. From this vantage point, "this view from the future, the coming ages may look on Europe and Asia of today much as we look on savage tribes or primitive peoples" (86).
At first glance, Ghose's position appears incoherent, both adopting and criticizing liberal evolutionism and its social Darwinist presumptions.How are we to reconcile an evolutionism driven by Britain's and India's rivalry with a view of both civilizations as "half achievements"?
To make sense of his claims, we need to contextualize them within late nineteenth-century Euro-American debates on political Darwinisms, with which Ghose was intimately familiar (Brown 2012; Varma 1976). These spanned a range of social, political, and ethical questions, but a central one concerned the extent to which the evolutionary laws revealed by Darwin governed, or should govern, ethics and social policy. Did its principal tenet - that evolution proceeded through natural selection, enabling better-adapted organisms to succeed over the less well-adapted through a process of competitive struggle - apply to human societies? If so, by what mechanisms and through what modifications, given humanity's advanced capacities? Two basic positions coalesced, whose duality Mike Hawkins captures as "nature as model and threat" (1997, 18).
The first - "nature as model" - is what's commonly taken as social Darwinism.13 Broadly speaking, social Darwinists took natural selection as operative in human societies, indulging a certain ethical naturalism by treating the "struggle for existence" as the antagonistic process through which individuals, races, civilizations, and species evolved.14 While this glosses over considerable differences across social Darwinist positions, its basic thrust was to take state interventionism as contrary to evolutionary laws. By mitigating the excesses of markets, redressing systemic inequalities, and aiding disadvantaged populations, overly intrusive states impeded competitive struggle and artificially preserved "inferior" stock. If human evolution required selective pressures to eliminate its weaker elements, state interference was ultimately dysgenic. This unforgiving stance is commonly associated with classical liberals such as Spencer, William Graham Sumner, and Franklin Giddings, who to varying degrees opposed social and economic policies constraining competition within societies (Ryan 2001).15 Sumner, for instance, took unfettered capitalism as a site for the "natural" contest between individuals and groups. Humanity had "made no step whatever in civilization which has not been won by pain and distress," he intoned in an 1879 lecture, and "if we do not like the survival of the fittest, we have only one possible alternative, and that is the survival of the unfittest" (Sumner 1918, 221-5). Spencer similarly fulminated that "[i]f left to operate in all its sternness, the principle of the survival of the fittest… would quickly clear away the degraded," but for the "shortsighted beneficence… [of] unwise institutions, [which] brought into existence large numbers who are unadapted to the requirements of social life." The state was empowered to preserve justice, Spencer argued, but should indulge in no further charity (Spencer 1897, 392).

12. Liberal evolutionism recurs in Ghose's other essays of the period. In the fourth installment of "Indian Polity," he maintains that "the life of man is still predominatingly [sic] vital and moved therefore by the tendencies of expansion, possession, aggression, mutual struggle for absorption and dominant survival which are the first law of life" (426); in "Indian Culture and External Influence," he ponders the "biological necessity" and "instinct of life" operative in the historical processes by which an "inactive or weaker culture perishes" (45).

13. For the many debates on social Darwinism's parameters, see the literature in footnote 4. For a helpful overview of those debates, see Crook (2007, 29-43). On the term's fluidity and instabilities, see Hawkins (1997, chap. 1).

14. Most commentators take this as a minimal condition of social Darwinist positions, typically conjoined with other characteristic features (see, e.g., Hawkins 1997, 31). As I aim to illustrate the broader split between social and reform Darwinists, I do not address more particular definitional questions.
Against this were reform Darwinists, who distinguished the operation of evolutionary struggle in the natural and human spheres, arguing that natural laws could not serve as the basis of moral and social choice (so, "nature as threat"). Immutable as Darwin's principles were in the natural world, humanity's unique attributes set it outside of their ambit, a position held by the period's leading biologists: T.H. Huxley, Alfred Russel Wallace, and Darwin himself. Reform Darwinists attacked the "brutal laws of social Darwinism," stressing the influence of culture and intellect in human evolution and resisting the baleful effects of unchecked natural forces (Bannister 1979, 11).
As with social Darwinists, reform Darwinist arguments varied widely. Huxley, for one, refuted Spencer's cosmological evolutionism by drawing a firm line between antagonistic struggle in the natural world and the human sphere's ethical grounding. The "intense and unceasing competition of the struggle for existence" in nature was, he claimed, diametrically opposed by humanity's "characteristic feature… the elimination of that struggle" (1896, 13). Leading thinkers such as Edward Bellamy, Jacques Novicow, and John Fiske adopted this rough dualism, distinguishing the operation of natural selection and competitive struggle in the lower orders from a social-ethical sphere governed by humanity's higher faculties (Bannister 1979; Crook 1994). A related tack differentiated primitive and modern populations. Natural selection held sway over nascent societies struggling against both environment and competitors, the argument went, but such brutalities were superseded in advanced societies whose evolutionary advantage lay in combination and coordination. This commonly translated into a racialized historicism demarcating "barbarous" non-Europeans subject to violent selective pressures from modern peoples characterized by, in Benjamin Kidd's terms, "higher social efficiency" (1894, 42). "Among civilized nations at the present day, it does not seem possible for natural selection to act in any way," Wallace maintained, concluding that "it must inevitably follow that the higher - the more intellectual and moral - must displace the lower and more degraded races" (1871).
We can now better understand Ghose's view, which transposes the reform Darwinist argument into the colonial context.16 Like the reform Darwinists, Ghose regards competitive struggle as serving its evolutionary purpose only at an early point in humanity's development, as the selective force governing all biological entities in the natural world. His reformulation, however, situates this violent contest not in "primitive" societies but in the clash of civilizations propelled by Western empire. Liberal evolutionism and the imperialism it countenanced took competitive struggle as determining civilizational fitness, reflecting the propulsions of a still-immature species mired in natural rather than human selection.
The longer view - the "view from the evolutionary future" - comprised social evolutionism. This was a communalistic Darwinism treating human advancement as driven by mutual aid, social cooperation, and concerted political direction - by conscious choice rather than antagonistic rivalry. Ghose depicts humanity as progressing through three successive stages: "The first is the period of conflict and competition which has been ever dominant in the past and still overshadows the present of mankind… The second step brings the stage of concert. The third and last is marked by the spirit of sacrifice in which… each gives himself for the good of others. The second stage has hardly commenced for most; the third belongs to the indeterminate future" (Ghose 1997d, 56).17 Humanity remained at present in the first stage, in which sociopolitical development was spurred by imperialism's "conflict and competition." But the view from the future, as Ghose glimpsed it in India's ascendant nationalism, would ultimately proceed through "concert," the cooperative interchange of social evolutionism. From this standpoint, liberal evolutionism and the imperialism it sustained belonged to an early phase of our collective trajectory. The eclipse of the West's domination over Asia would mark a new stage of social evolution moved by the principles of combination, aid, and communality that Ghose read into Indian civilization.

15. The common perception of social Darwinism is largely indebted to Richard Hofstadter's Social Darwinism in American Political Thought (1944), which framed its substance and linked it to conservatism (in fact, classical liberalism). A revisionist scholarship has widened well beyond this view, taking "Darwinism as a multiplex phenomenon translatable into many social and ideological idioms" (Crook 1994, 12). For an important rejoinder to Hofstadter's view, see Bannister (1979).

16. Ghose of course departs from Western reform Darwinists' focus on domestic social policy, but retains the view that the competitive struggle governing the animal world is inapplicable to human societies and should not serve as its orienting principle. For reform Darwinism's proximity to Marxist, socialist, and communalist politics, see Hawkins (1997, ch. 7).

17. These three evolutionary stages recur in "Indian Polity": "Human society has in its growth to pass through three stages of evolution before it can arrive at the completeness of its possibilities" (Ghose 1997a, 398).
Though filtered through his spiritualized nationalism, Ghose's contention is directly aligned with the "scientific pacificism" of antiwar evolutionists such as Jacques Novicow and Norman Angell. In La guerre et ses prétendus bienfaits (1894) and La critique du darwinisme social (1910), Novicow attacked social Darwinists such as Spencer, Ernest Renan, and Gustav Ratzenhofer for applying biological laws to social questions. Criticizing their "prodigious leap" from natural struggle to the social sphere, Novicow charged such facile equivalences with neglecting the "unimaginable complexity" of human interactions. "That some of these relations have become established between different animal species," he pointed out, "it does not follow that the same relations should be found, without modification, between human societies" (42-3), which transcended the laws of natural selection as they climbed the evolutionary ladder. Ghose claimed the very same: mature social evolution moved through human capacities for symbiotic exchange against the brutalities of nature, war, and early civilization.
Ghose also integrates Peter Kropotkin's conviction that mutual aid was the lynchpin of evolution. Drawing on his observations of animal behavior in northern Asia and Siberia, Kropotkin saw intraspecies cooperation, rather than competition, as an evolutionary advantage. The struggle for existence concerned a species' resistance to rivals and natural elements, such that success hinged on joint action. "[C]ompetition is not the rule either in the animal world or in mankind," Kropotkin held, since "[b]etter conditions are created by the elimination of competition by means of mutual aid and mutual support" (1902, 79). Ghose's evolutionism, still further, takes Indian anticolonialism as the tipping point of a global movement toward concert and self-sacrifice, echoing Kropotkin's encompassing cosmopolitanism: "the ethical progress of our race, viewed in its broad lines, appears as a gradual extension of the mutual-aid principles from the tribe to always larger and larger agglomerations, so as to finally embrace one day the whole of mankind" (210-1).
Ghose's anticolonialism thus consolidates a range of socialistic Darwinisms obfuscated by the scholarship's tendency to collapse his evolutionism into his spiritualism. But re-situated within this political-conceptual landscape, his appeal to liberal evolutionism's cultural clash becomes comprehensible. At an historical juncture where the international sphere remained structured by Western powers driven by antagonistic competition, India had little choice but to engage Britain on those terms. "Conflict is not indeed the last and ideal stage," he sees, "for that comes when various cultures develop freely, without hatred, misunderstanding or aggression and even with an underlying sense of unity. But so long as the principle of struggle prevails, one must face the lesser law; it is fatal to disarm in the midmost of the battle" (Ghose 1997d, 57). One should, then, "regard this age of civilization as an evolutionary stage, an imperfect but important turn of the human advance" from competitive struggle to mutual aid (Ghose 1997d, 82). While "the real and perfect civilization" would ultimately emerge from this transition, the present "life of mankind is still nine tenths of barbarism to one tenth of culture." This was the result of the "European mind [that] gives the first place to the principle of growth by struggle," treating society as "an organization for growth by competition, aggression and farther battle" (Ghose 1997d, 92).
The global context, however, was shifting. With Asia's ascendancy, "a certain growing mutual closeness of the life of humanity is the most prominent phenomenon of the day," leading "to a free concert with some underlying oneness" (Ghose 1997d, 63-4). "Indian culture," Ghose holds, aimed at "a lasting organization that would minimize or even eliminate the principle of struggle" (Ghose 1997d, 92). In a clearly Darwinist idiom - in an essay titled "Evolution" - he recognizes that "[s]truggle exists, mutual destruction exists, but as a subordinate movement, a red minor chord"; the "real law" of human evolution "is rather mutual help" (Ghose 1998, 174). Humanity's evolutionary future lay in interdependent social forms set against liberalism's atomism, competitiveness, and political culture. By mapping its coordinates onto the colonial context, Ghose thus marshaled reform Darwinism to criticize liberalism's social, political, and ethical foundations, along with the imperialism to which it inevitably succumbed.
This critique advanced several related arguments. The first highlighted the straightforward hypocrisy of a doctrine whose professedly universalistic commitments to liberty, self-government, and autonomy so easily meshed with a racialized exceptionalism denying those entitlements to Indians. Liberal imperialists, he acidly charged, indulged a "mass of contradictions, the profession of liberalism running hand in hand with the practice of a bastard Imperialism which did the work of Satan while it mouthed liberal Scripture to justify its sins" (Ghose 1908b). Ghose had drawn these linkages since his student days at Cambridge, as he became increasingly conscious of the cultural supremacism underpinning Asia's political subjection (Heehs 2008, 30; see also Sartori 2010; Singh 1963, 35-41). He dissected liberalism's connections to empire in print as early as 1893. In "New Lamps for Old," published in Indu Prakash, he skewered the Congress moderates' gradualist liberalism, militating for India's complete political independence. "We must no longer hold out supplicating hands to the English Parliament," he declaimed, "but must recognize the hard truth that every nation must beat out its own path to salvation" (quoted in Heehs 2008, 38).
More profoundly, Ghose exposed the nexus of liberalism's individualistic atomism, its ethos of competitiveness, and its materialist foundations.19 In The Human Cycle, he lambastes liberal government in Darwinian terms, as a contest between antagonistic monads driven by "an increasing stress of competition," whose "conflict ends in the survival not of the spiritually, rationally or physically fittest, but of the most fortunate and vitally successful" (198).20 Even in the West, liberalism amounted to "a huge organised competitive system, a frantically rapid and one-sided development of industrialism and, under the garb of democracy, an increasing plutocratic tendency that shocks by its ostentatious grossness" (Ghose 1997f, 199-200).
In the subcontinent, its ravages were still more pronounced. The taproot of India's subjugation, Ghose saw, was the cultural and economic liberalism at the heart of the British empire, which fueled Indian immiseration and envisioned progress in strictly antagonistic and materialist terms. Under its "competitive system of commerce, with its bitter and murderous struggle for existence," Indians had borne "this industrial realization of Darwinism. It has been written large for us in ghastly letters of famine, chronic starvation and misery and a decreasing population" (Ghose 1909a). Liberal notions of societal improvement predicated on expansionary industrialism, alienating individualism, and economic rivalry had for Indians yielded entrenched poverty, political subservience, a wealth drain to a foreign power, and the destruction of the social fabric. The presumption that struggle constituted a natural law of progress drove liberal evolutionism, politics, and empire: a civilization anchored in materialism and propelled by antagonism naturally led to imperialist exploitation vindicated by spurious claims to "fitness." Ghose thus sought to counter liberalism's very measure of progress which, along with its disintegrative individualism, reduced the polity to "a battle of conflicting interests" (Ghose 1997f, 198). The political task, as he rather bluntly put it, was "to get rid of this great parasitical excrescence of unbridled competition, this giant obstacle to any decent ideal or practice of human living" (Ghose 1997f, 200). As Andrew Sartori notes, Ghose's anticolonialism aimed to transcend the "shallowness of colonial political categories - the liberal categories of exchange" (Sartori 2008, 142).
By contrast, mutualism and communalism were woven into the fabric of Indian society, which Ghose envisioned, like Spencer, as an organic unity.21 "The true nature of the Indian polity" must be regarded "as a part of and in its relation to the organic totality of the social existence"; "[a]ll its growth, all its formations, customs, institutions are then a natural organic development" (Ghose 1997a, 396, 398). This organicism belonged to Swadeshi efforts to reconceptualize the Indian social body outside the compass of liberal modernity, as more fundamental than a mere assemblage of individuals (Sartori 2008, 154-5; see also Bose 2010, 129; Dalton 1982, vi; Sartori 2010, 325). Against liberalism's atomism, materialism, and antagonism, Ghose conceptualized Indian social existence as rooted in "institutions and ways of communal living already developed by the communal mind and life" (Ghose 1997a, 401-2). This communalism was neither isolationist nor solipsistic. It was, rather, constructive, outward-looking, and cosmopolitan: India's resurgence would lead to a reconstitution of global relations based on "concert" rather than antagonism.22 Ghose's nationalism aimed at "building up India for the sake of humanity" (Ghose 1909b).
Taking political institutions as embedded in webs of sociality and communal practice, then, India would evolve as an organic whole whose constituent elements-social, economic, political, and spiritual-could not be isolated from one another, much less set against each other. Retaining "the system of a very complex communal freedom and self-determination" (Ghose 1997a, 405) embedded in the Indian polity, advancement would "proceed not along the Western line of evolution, but to a new creation out of its own spirit" (Ghose 1997a, 407-8). This would be based on "the principle of an organically self-determining communal life" in which "the condition of liberty it aimed at was not so much an individual as a communal freedom" (Ghose 1997a, 408). For Ghose, decolonization extended well beyond Indian independence, and well beyond the political possibilities imaginable within liberal evolutionist terms. The view from the evolutionary future was ultimately a reorientation of the political itself, in India and globally, toward mutualism, interdependence, and human unity.
THE VIEW FROM THE EVOLUTIONARY FUTURE
I suggested at the outset that evolutionism forms a conceptual ecosystem connecting Ghose's spiritualism, politics, and anticolonialism. We may now perhaps better see its reach. In Ghose's hands, evolutionism is more than a tool for condemning British imperialism,19 and it extends beyond the spiritual philosophy to which much of the commentary confines it. It is, rather, the connective tissue threading together a sweeping critique of liberalism's Eurocentrism, historicism, and imperialism and grounding an alternative political future to which India might aspire. This is neither to minimize the problematic features of Ghose's evolutionism nor to treat it as any kind of anticolonial template. Many of his more practically disposed compatriots observed that Ghose's nationalism was closer to a metaphysics, a poetry, or a spiritual philosophy than a practicable decolonial program. His organicist depiction of India's inborn mutualism fell within a culturalized politics contributing to, as Manu Goswami describes it, the "progressive Hinduization… of the imagined body politic" (2004, 258). For Sumit Sarkar (2010), the Swadeshi movement's failure lay in its lapsing into this Hindu exclusivism, feeding the Hindutva that today corrodes Indian democracy. More generally, anticolonialism's capitulation to the nation-state form, its capture by elites, and its proximity to nationalist chauvinisms have long been subjected to well-warranted criticism. To be clear, Ghose did not partake in any such nativism. "[T]he swadesh," he wrote, "which must be the base and fundament of our nationality, is India, a country where Mahomedan and Hindu live intermingled and side by side" (Ghose 1909c). But the wider repercussions of the organicist and culturalist politics he helped inaugurate and its degeneration into Hindu jingoism belong to his political and intellectual legacy.

19 Many of the period's cultural nationalists framed their anticolonialisms through a bifurcation between Indian spiritualism and Western materialism. Ghose's distinction lies in the reform Darwinism anchoring his critique and in the acuity of its analysis of liberalism's-and liberal evolutionism's-entanglements with empire.
20 The commentary on The Human Cycle's social evolutionism relegates it, as above, to Ghose's spiritual philosophy, particularly by tying it to Pierre Teilhard de Chardin's theological evolutionism. As above, this neglects the political Darwinism I highlight here. See Deutsch (1986, 200-10), Gupta (2014, 51-66), Korom (1989), Varma (1955, 238-9), Verma (1990, 61-3), and Zaehner (1971).
21 While Sartori takes Ghose's organicism as "ground[ing] politics in the life of the people" (Sartori 2008, 169), Manu Goswami points to its dangers in the Swadeshi era. In this context, she argues, organicism naturalized "Hindus as the original, organic, core nationals" and depicted Muslims as "an external element within the corporatist vision of an organic national whole" (Goswami 2004, 188). For Ghose's affinities with Spencerian social organicism, see Verma (1990, 65).
22 On the cosmopolitanism of Ghose's anti-imperialism, see Sartori (2010), Varma (1955, 240-1), and Verma (1990). For a cognate vision of Indian nationalism's cosmopolitan moorings, see Pal's Nationality and Empire.
Another dimension of that legacy, however-and one that has received little attention-lies in its revealing Darwinism's curious political trajectory in the subcontinent, and its perhaps unanticipated emancipatory capacities in colonial peripheries. While Darwinism's sociopolitical harms are well catalogued, they risk concealing its critical potential outside the West. In Euro-American contexts, Darwinism was a battleground between social reformers and laissez-faire liberals over state intervention into markets, populations, social pathologies, and the overall gene pool (Bannister 1979; Bowler 1983; Hawkins 1997). At the international level, racial realists such as Ludwig Gumplowicz and Gustav Ratzenhofer invoked Darwin and Spencer in reading global conflict through the prism of fitness and existential struggle (Crook 1994; Hobson 2012). Leftist Darwinists such as Karl Pearson fared no better, taking socialism as benefiting Western nations in the contest "of superior with inferior race." "No thoughtful socialist," he remarked, "would object to cultivating Uganda at the expense of its present occupiers if Lancashire were starving" (1894, 111). Darwinism's persistent interweaving with racial supremacism has led John Hobson to relegate it to a "Eurocentric conception of world politics" (2012, 1).
This misses the critical purposes Darwinism served in the colonial world. There, it took on an entirely different political life, furnishing a conceptual repertoire to confront Western civilizational hierarchies, racialized historicisms, and colonial rule. And if the acuity of Ghose's anticolonial evolutionism is especially noteworthy, it was not unique to him, as anticolonialists the world over tapped into an ascendant Darwinism to stake their claims.
Within India, nationalists across the ideological spectrum advanced radically original evolutionisms countering colonial logics. Shyamji Krishnavarma, for instance, drew on Herbert Spencer's evolutionary sociology to criticize empire as a relapse-in Spencer's term, a "rebarbarization"-into "militant" social order (Kapila 2007; Marwah 2017). He also refuted the canard of Indians' political immaturity by showing that Darwinism undercut the "law of progress" on which it rested (Indian Sociologist 1907, 38). Bipin Chandra Pal echoed him, declaring it "impossible for any man to lay down beforehand… the particular form of Swaraj that will be established in this country," since Darwinism proved the impossibility of predicting what "the particular form of a thing… passing through a process of evolution will be" (Pal 2020, 200). Like Ghose, Pal also saw liberalism's thoroughgoing individualism as exacerbating "conflicts of economic competition," "enfeebl[ing] the spirit of co-operation in the community, and set[ting] up the doctrine of the survival of the fittest, in its crudest and least scientific sense, as the predominating principle of the evolution of human society" (1916). Vivekananda drew on Darwinism to proclaim that the "highest evolution of man is effected through sacrifice alone" (Vivekananda 2006, 3026), advancing a mutualist evolutionism antithetical to colonial rule.
Such anticolonial evolutionisms extended well beyond India. Marwa Elshakry traces the integration of socialism and evolutionism in Middle Eastern anti-imperialisms, where "[m]utual moral development (as much as national collectivism) became the new mainstream reading of social evolution" (Elshakry 2013, 223). The "power of evolutionary socialism," she reflects, "lay precisely in its ability to bring together an emphasis on national development and a growing international critique of Western capitalist and imperial expansion outside Europe" (225). Middle Eastern intellectuals claimed that "the true moral lesson of evolution was the rise of the mutualism of scientific socialism" (226), treating "social evolution as founded on 'the exchange of aid' (tabadul al-musa'ida), not competition" (231). In a very different idiom, Vietnamese anticolonialists such as Phan Boi Chau and Phan Chu Trinh drew on social Darwinism to articulate their anxieties about the Vietnamese people's survival, situate them in relation to global struggles between stronger and weaker nations, and chart a way toward self-rule (Pham n.d.; Marr 1981). Justo Sierra's The Political Evolution of the Mexican People adopted evolutionism to work through the complications of Mexico's colonial history, national identity, and political modernity. In La Libertad, he developed a "program for national reconstruction through scientific politics buttressed by social assumptions drawn from Spencer and Darwin" (Hale 1989).
In each of these instances, Darwinism offered an opening-a theoretical grammar enabling the deconstruction and reformulation of notions of progress, social evolution, and political capacity underpinning colonial rule. It was, in effect, a wedge for anticolonialists to pry apart the colonial order's imbrication of racial supremacy and political power, and the foundation on which to redraw its undergirding temporal map. Evolutionism subtended both a critique of, and a political future built from, the terms of a modernity that colonial subjects had no choice but to adopt (Chakrabarty 2000; Kaviraj 2005). It was pressed into navigating the quandaries of this colonial modernity, caught between Eurocentrism, Europhilia, and Europhobia. Anupama Rao suggests that "the enduring legacy of insurgent thought lies in the example of its relentless experimentation in remaking words, concepts, and new worlds" (Rao 2014, 8). In India and beyond it, anticolonial evolutionism exemplifies just this novel and worldmaking mode of political thought.
As Getachew and Mantena (2021) recognize, this constructive, future-oriented, and globally-minded politics was central to anticolonial theory, though it remains overshadowed by the critiques of Eurocentrism that have tended to attract scholarly attention. The preponderance of historiographical literatures either reproducing or lamenting anticolonialism's collapse into nationalist teleology has, also, often eclipsed its internationalist and "improvisational constitution of imaginary futures" (Goswami 2012, 1462-3). And yet, beyond the immediacy of its struggle for national independence, anticolonialism was "a way of imagining a properly postcolonial world beyond one's own national borders" (Elam 2017), a politics whose political and conceptual reach is easily overlooked. V. P. Varma treats Ghose as issuing "a concrete social philosophy for the reconstruction of the social and political life of a dependent nation" (Varma 1955, 235-6). This is true, but incomplete. Its evolutionist bent also situated Indian progress in relation to a global transformation beyond the Western political order. It was, in Elam's terms, "an attempt to articulate a world that has yet to exist" (Elam 2021, 4). Though in a different context, Arendt (1990) captures the perplexities of charting a new political order out of the wreckages of another's demise, from the "hiatus between end and beginning, between a no-longer and a not-yet" (205). Ghose envisioned a way out of that hiatus through the reconstitution of an endogenous Indian sociality grounded in the "inner domain of national life" (Chatterjee 1993, 26), outside the colonial state's remit.
This India extended beyond the West without for that rejecting it outright. For all his asperity toward liberal modernity, Ghose neither indulged the fantasy of revivifying a precolonial Indian civilization nor repudiated the West out of hand. Ghose was, Sugata Bose observes, "no traditionalist" (2010, 124). He was censorious toward Indians grasping at the shell of past practices rather than modernizing Indian civilization in alignment with its ethical foundations. He readily criticized the imperfections of Indian culture and acknowledged the value of Western scientific and political advances, which he encouraged Indians to adopt. But he remained wary of the ethos under which they passed. To accept "that terrible, monstrous and compelling thing, that giant Asuric creation, European industrialism" would be to take on "its social discords and moral plagues and cruel problems" (Ghose 1997e, 46). It was a fine line to toe, but Ghose insisted that Indians "observe with an unbiased mind the successes of the West, the gifts it brought to humanity," and "consider how we can assimilate it to our own spirits and ideals" (Ghose 1997d, 88).
Evolutionism was key to this assimilative vision of India's political future. India should neither blindly incorporate Western norms and institutions nor cling to its own historically freighted social and political practices. An evolving India had to recover its "essential idea-forces" (Ghose 1997d, 86) and move beyond its historical limitations without relinquishing its ethical warp. Indian evolution, then, looked neither backward to a nostalgia-tinged past nor forward to a future plotted out by the West. It would be, rather, a "reshaping of the forms of our spirit" (Ghose 1997d, 89). To adopt Western ideals would leave Indians "clumsy followers always stumbling in the wake of European evolution and always fifty years behind it." By integrating its better elements, however, India would become "no mere Asiatic modification of Western modernism, but some great, new and original thing of the first importance to the future of human civilization" (Ghose 1997b, 19, 18). Ghose's evolutionist anticolonialism thus evades both the Promethean vision of decolonization as radically autonomous self-constitution, free of the pollutions of Western thought, and capitulation to the West's political vision. Evolutionism demonstrated that "[a]ny attempt to remain exactly what we were before the European invasion or to ignore in future the claims of a modern environment and necessity is foredoomed to an obvious failure," since "the living organism which rejects all such interchange, would speedily languish and die of lethargy and inanition" (Ghose 1997e, 51, 48). Ghose saw that abandoning the West wholesale was as futile as accepting it root and branch would be damaging. The only option, he concluded, was to evolve.

This article was supported by the Social Sciences and Humanities Research Council of Canada.
Influence of thermal zoning and electric radiator control on the energy flexibility potential of Norwegian detached houses
Energy flexibility of buildings can be used to reduce energy use and costs, peak power and CO2eq-emissions, or to increase self-consumption of on-site electricity generation. Thermal mass activation has proved to have a large potential for energy flexible operation. The indoor temperature is then allowed to fluctuate between a minimum and maximum value. Many studies investigating thermal mass activation consider electric radiators. Nevertheless, these studies most often assume that radiators modulate their emitted power, while, in reality, they are typically operated using thermostat (on-off) control. Firstly, this article aims at comparing the energy flexibility potential of thermostat and P-controls for Norwegian detached houses using detailed dynamic simulations (here IDA ICE). It is evaluated whether the thermostat converges to a P-control for a large number of identical buildings. As buildings become better insulated, the impact of internal heat gains (IHG) becomes increasingly important. Therefore, the influence of different IHG profiles has been evaluated in the context of energy flexibility. Secondly, most studies about energy flexibility consider a single indoor temperature. This is questionable in residential buildings, where people may want different temperature zones. This is critical in Norway, where many occupants want cold bedrooms (~16 °C) during winter time and open bedroom windows for this purpose. This article answers these questions for two different building insulation levels and two construction modes (heavy and lightweight).
Introduction
Energy consumption needs to be more flexible. Firstly, the use of power demanding electric appliances is increasing, which means that consumers are demanding more power from the distribution grid than before and often at the same time. The power grid is dimensioned to accommodate the highest possible load that can occur. Since the consumption of electricity varies significantly over hours, days and years, the grid will only experience this dimensioning load for short periods [1]. During an average weekday, the electricity consumption in Norwegian residential buildings peaks between 07:00 and 10:00 and between 16:00 and 21:00 [2]. Secondly, to make the transition to a sustainable energy system, more of the electricity must be produced from renewable energy sources. However, an increasing production from intermittent energy sources such as solar and wind may have serious adverse effects on the stability of the electricity grid. Therefore, it will become increasingly important to shift from a system based on generation-on-demand to a system where the energy use is flexible and controlled according to grid requirements or intermittent energy production. In recent years, there has been an increasing focus on energy flexibility on the demand side. Demand side management (DSM) adapts the consumption according to the needs of the surrounding electricity grid [3]. When the electricity use for heating and cooling is considered for DSM, a thermal storage is necessary [4]. For buildings, DSM can be achieved in several ways, for example using heat storage in hot-water tanks or in the thermal mass and by shifting the use of plug loads in time [5]. Storage in the building structure, i.e. the building thermal mass, has been identified as a promising and cost-effective way for buildings to offer flexibility [6,7].
The available storage capacity in the building structure is not only dependent on the material properties but also on the geometry of the building, the distribution of thermal mass inside the building and the interaction with the heating system. In addition, the performance of structural thermal storage will vary with time, as weather conditions and occupant behavior affect the available storage capacity [8].
To activate the thermal mass of the building, a suitable control strategy is necessary. Rule Based Control (RBC) is a common control approach for energy systems in buildings. Even though simpler than Model Predictive Control (MPC), RBC can still be used to deploy the building energy flexibility [4]. For instance, RBC has been investigated for Norwegian residential buildings using either time-scheduled or day-ahead electricity prices to control the set-point indoor temperature [9,10]. Key performance indicators (KPIs) can be used to evaluate the performance of a system with respect to a specific desired result. A KPI is a parameter (or value) that provides simplified information about a complex system, to show the general state or trend [11]. For instance, KPIs are necessary to quantify the energy flexibility generated by different control strategies. Typical KPIs of building energy flexibility can describe physical features of the building, such as the storage capacity, or quantify the magnitude of the building's response to external signals, e.g. the electricity price [4]. Simplified modelling of occupant behavior is a main reason for the gap between the predicted and actual energy performance of a building. For buildings with better insulation levels, internal heat gains (IHG) make an increasing contribution to the space-heating demand [12]. It is common practice in the building industry to dimension the power of the space-heating system without accounting for IHGs. This often leads to oversizing of the space-heating system in highly-insulated buildings. In addition, it is important to use realistic IHG profiles in energy simulations to get reliable predictions of the actual energy performance. For instance, a bottom-up stochastic model to generate realistic electricity load profiles can be used to create realistic IHG profiles [13].
Several studies have identified occupant dissatisfaction with too high bedroom temperatures in Norwegian highly-insulated buildings during winter time. Low temperature in bedrooms is difficult to achieve without window opening, which eventually leads to a significant increase of the space-heating needs. This is especially true when there is a desire for higher temperatures in the rest of the building [14][15][16][17][18]. One key characteristic of DSM is user acceptability, i.e. the occupant's willingness to accept that the building is controlled depending on the needs of the electricity grid [3]. For example, a compromise could be a cheaper electricity bill at the sacrifice of thermal comfort (within certain limits). Storing heat using the building thermal mass typically leads to relatively high indoor temperatures, which can prevent reaching cold temperatures in bedrooms. Only a limited number of studies have investigated the effect of thermal zoning on building energy flexibility. Different temperature set-points (TSP) are defined for so-called day-zones and night-zones, with the TSP for the day-zone slightly higher than the TSP of the night-zone [6,19,20]. However, most studies assume a single temperature for the entire building. The Norwegian building stock is dominated by single-family houses (SFH). The interest in high-performance buildings is rapidly increasing, but they still represent a small fraction of the building stock. In 2013, only 31 % of the inhabited building stock were built after 1980 [21]. In Norwegian residential buildings, the most common space-heating system is electric radiators; however, the number of heat pump installations is increasing [21,22]. According to electric radiator manufacturers, the most common control of these radiators is thermostatic control (meaning that the radiator is operated at full power between a start and stop temperature).
Nevertheless, the studies on energy flexibility are usually done assuming a continuous power modulation. This can for example be a proportional (P) controller or a proportional-integral (PI) controller. The objective of this study is to identify the flexibility potential of Norwegian detached houses heated by electric radiators. More specifically, this energy flexibility potential is compared between a thermostatic and a proportional control of the radiators. Furthermore, the influence of the thermal mass activation on internal thermal zoning is investigated, as well as the impact of such a zoning on the flexibility potential. These questions are investigated using detailed dynamic simulations in IDA ICE. A detached house controlled with two simple RBC strategies for heating is simulated. Since IHGs are expected to have an influence on the building thermal dynamics, different scenarios of IHG profiles are evaluated. This includes fixed IHG profiles defined from standards, but also stochastic IHG profiles. Stochastic IHG profiles make it possible to investigate whether the thermostat control converges to a P-control when energy flexibility is evaluated for a large number of identical buildings, at a so-called aggregated level.
Definition of the case building
A two-storey detached house with a heated floor area of 160 m² located in Oslo is used as a case study. An illustration of the building geometry from IDA ICE is shown in Fig. 1. Two different construction modes are investigated, one lightweight timber construction (LCM) and one heavy masonry construction (HCM). The heat storage capacity and the average U-values of the internal structures for the two construction modes are listed in Table 1. Two insulation levels are investigated, the passive house (PH) standard and a level typical for buildings built in the 1980s (TB) [24]. This results in a total of four investigated building types: passive house standard with heavy construction (PHH) and lightweight construction (PHL), and an insulation level typical for a building built in the 1980s with heavy construction (TBH) and lightweight construction (TBL). The building envelope specifications for the PH and TB insulation levels are listed in Table 2. The PH buildings are modelled with a balanced mechanical ventilation system with heat recovery and an air temperature of 20 °C for the supply ventilation air. The ventilation airflow rates are in accordance with the Norwegian building code [25]. Natural ventilation is usually applied to old buildings. For the sake of simplicity, the TB buildings are modelled with a balanced mechanical ventilation without heat recovery. The space-heating system consists of electric radiators in every room except for the two bathrooms and the laundry room, which are equipped with floor heating. The nominal space-heating power of each room is evaluated using IDA ICE simulations with ideal heaters, no IHG and a constant design outdoor temperature (DOT) of Oslo (i.e. -19.8 °C). The dimensioning of the radiators is done according to the current practice: the nominal power of the radiators and floor heating equals the nominal power of the respective room they are located in. The radiator control has a dead-band (Δ) and P-band of 1 °C. The thermostat control starts at TSP - Δ/2 and stops at TSP + Δ/2.
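The two radiator control laws described above can be sketched as follows. This is a minimal illustration, not the IDA ICE implementation; the nominal power p_nom and the assumption that the P-band spans the 1 °C below the set-point are illustrative.

```python
def thermostat_power(t_air, heating_on, tsp=21.0, deadband=1.0, p_nom=1000.0):
    """Thermostatic (on-off) control: the radiator runs at full power once the
    air temperature drops below TSP - deadband/2 and stops above TSP + deadband/2.
    Between the two thresholds it keeps its previous state (hysteresis)."""
    if t_air < tsp - deadband / 2:
        heating_on = True
    elif t_air > tsp + deadband / 2:
        heating_on = False
    return (p_nom if heating_on else 0.0), heating_on

def proportional_power(t_air, tsp=21.0, p_band=1.0, p_nom=1000.0):
    """P-control: the emitted power modulates linearly over the P-band below
    the set-point, instead of switching between zero and full power."""
    fraction = (tsp - t_air) / p_band
    return min(max(fraction, 0.0), 1.0) * p_nom
```

Note that the thermostat needs a memory of its on/off state while the P-control is memoryless; this difference is what makes the aggregated behavior of many thermostats interesting to compare with a single P-control.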
Rule-based control strategies
To evaluate the energy flexibility potential, two different RBC strategies are applied: an off-peak hour control strategy (OPCS) and a spot price control strategy (SPCS). Both adjust the TSP of the space-heating system. The OPCS aims at reducing the electricity use for space-heating during peak hours. This objective is critical for Norway, as the distribution grid is expected to face bottlenecks in the near future [26]. Peak hours are based on the average electricity consumption of Norwegian residential buildings on a weekday. Based on this daily profile, peak hours for electricity consumption are defined between 07:00 and 09:00 as well as between 17:00 and 19:00 [2]. With OPCS, the TSP is 21 °C in all rooms. Nevertheless, this temperature is reduced by 2 K in the defined peak hours, while it is increased by 2 K one hour before peak hours to store energy in the thermal mass.
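The OPCS rule above depends only on the hour of the day and can be sketched as below. This is an illustrative reconstruction of the schedule, not the IDA ICE implementation; function and variable names are assumptions.

```python
def opcs_setpoint(hour, base_tsp=21.0, delta=2.0):
    """Off-peak hour control strategy: lower the set-point during the defined
    peak hours and raise it one hour before each peak to charge the thermal mass."""
    peak_hours = [(7, 9), (17, 19)]  # peak periods 07:00-09:00 and 17:00-19:00 [2]
    if any(start <= hour < stop for start, stop in peak_hours):
        return base_tsp - delta      # e.g. 19 degC during peak hours
    if any(hour == start - 1 for start, _ in peak_hours):
        return base_tsp + delta      # e.g. 23 degC to pre-charge the thermal mass
    return base_tsp                  # 21 degC otherwise
```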
The SPCS aims at reducing energy costs for space-heating. It is based on the day-ahead hourly spot price for electricity, retrieved from NordPool [27]. The TSP is here controlled using two thresholds for the electricity price. The low and high thresholds are set to the minimum spot price plus 25 % and 75 % of the difference between the minimum and maximum day-ahead spot prices, respectively. Therefore, the thresholds are updated every day. These two thresholds define periods of low, medium and high electricity prices. The SPCS decreases the TSP by 2 K in high-price periods, keeps the TSP at 21 °C in medium-price periods and increases the TSP by 2 K in low-price periods. Since the nighttime is characterized by a low electricity demand, the spot price is relatively low. The SPCS would initially exploit this low price to increase the TSP. This is expected to lead to a higher energy consumption and cost, as concluded in a previous study [9]. Therefore, the SPCS only operates between 06:00 and 23:00. Otherwise, it is overruled to 21 °C.
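A sketch of the SPCS rule, assuming the day-ahead prices are available as a list of 24 hourly values; the boundary handling at the thresholds (strict vs. inclusive comparisons) is an assumption, as the text does not specify it.

```python
def spcs_setpoint(hour, spot_price, day_ahead_prices, base_tsp=21.0, delta=2.0):
    """Spot price control strategy: price thresholds at min + 25 % and
    min + 75 % of the daily price span, updated every day; overruled to the
    base set-point outside 06:00-23:00."""
    if not 6 <= hour < 23:
        return base_tsp
    p_min, p_max = min(day_ahead_prices), max(day_ahead_prices)
    low = p_min + 0.25 * (p_max - p_min)
    high = p_min + 0.75 * (p_max - p_min)
    if spot_price > high:
        return base_tsp - delta      # high-price period: unload the thermal mass
    if spot_price < low:
        return base_tsp + delta      # low-price period: charge the thermal mass
    return base_tsp                  # medium-price period
```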
Internal heat gain profiles
Four different IHG profiles are investigated. Two are based on Norwegian standards and are uniform in space. Nevertheless, one of these profiles is static (NS) and the other one is distributed in time with a fixed daily profile (TS). The two other IHG profiles are stochastic, with variations from day to day and between the seasons. The magnitude of these stochastic profiles is the same, but one profile contains IHGs distributed in time (SMt) while the second has IHGs varying in both time and space (SMts). The annual profile for lighting and electric appliances is generated using the bottom-up model developed by Richardson et al. [13], which has been adjusted to Norwegian households by Rangøy [28]. However, there is currently no stochastic occupancy model compatible with the available Norwegian statistical data. Therefore, a fixed occupancy profile with an hourly resolution was created artificially.
Fig. 2. Yearly-averaged internal heat gains from the four investigated profiles.
This occupancy profile does not contain as many fluctuations as the profiles for lighting and electric appliances. However, a separation is made between weekdays and weekends. In addition, variations in the occupancy have been considered for the summer, winter and spring/autumn months. The occupancy, appliance and lighting profiles are generated for a household of four persons. Finally, the stochastic profiles are scaled so that the yearly-averaged IHG in W/m² is the same as in the two Norwegian standards (NS and TS). The daily IHG profiles are illustrated in Fig. 2, where the yearly average of the stochastic profile is shown along with the maximum and minimum values.
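The final scaling step can be sketched as below, assuming the stochastic profile is a list of hourly values in W/m²; the function name is illustrative, not from the models cited above.

```python
def scale_ihg_profile(stochastic_profile, target_avg):
    """Scale a stochastic IHG profile (hourly values in W/m2) so that its
    yearly average matches the value prescribed by the standard (NS or TS),
    preserving the relative shape of the fluctuations."""
    current_avg = sum(stochastic_profile) / len(stochastic_profile)
    factor = target_avg / current_avg
    return [factor * value for value in stochastic_profile]
```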
Summary of simulation scenarios
The RBC strategies are evaluated using the indicator qph, defined as the ratio between the energy used during peak hours with the RBC and the energy used during peak hours in the reference case. For clarity, this means that a qph of 1 means that no energy is shifted in the peak hours, i.e. the energy consumption in the peak hours is the same as in the reference case without the implemented RBC. A low value of qph indicates more energy shifted, and a value of 0 means that the energy use is fully shifted away from peak hours.
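Read this way, qph is a simple ratio. A minimal sketch, with illustrative names and units, consistent with the verbal definition above:

```python
def qph(energy_peak_rbc_kwh, energy_peak_ref_kwh):
    """Share of peak-hour energy use remaining after the RBC is applied:
    1 means no energy shifted, 0 means all energy shifted away from peak hours."""
    return energy_peak_rbc_kwh / energy_peak_ref_kwh
```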
Results
This section successively shows the effect of the radiator control type, the IHG profile and the temperature zoning on the flexibility potential. Table 4 shows the relative change in energy use during the peak hours (qph) with OPCS and SPCS for both controller types using SMts IHGs. Both RBC strategies successfully shift energy and power use from the defined peak hours to off-peak hours. The share of shifted energy and power is much more significant for the PH buildings than the TB buildings. In PH buildings, the OPCS leads to zero energy and power consumption during the four defined peak hours of the day throughout the year. Fig. 3(a) shows the absolute change in yearly energy use during the peak hours compared to the reference cases, for the OPCS and SPCS and for the PHL and PHH. This is given for both radiator controls and all four IHG profiles. Stochastic profiles always result in the largest amount of energy shifted from peak hours; SMts is the profile with the highest amount of shifted energy. Furthermore, with stochastic IHG profiles the magnitude of energy shifted is relatively similar with thermostatic and proportional control. For standard static IHGs, the magnitude of energy shifted is much more significant with proportional control. The same is illustrated for the TB buildings in Fig. 3(b). The magnitude of energy reduced during peak hours for these TB buildings is almost ten times higher than for the PH buildings. Unlike the PH buildings, the difference of energy shifted between the different control types and IHG profiles is limited.
Controller type and internal heat gains
The space-heating power given in Fig. 4(a) is the average of the 20 equivalent PHL buildings, but with different SMts profiles. This is given for 22nd January. The average space-heating power is compared for the reference and the RBC, with thermostatic and proportional control. Fig. 4 shows that the aggregated space-heating power of a neighborhood is very similar with thermostatic control and with P-control. It is assumed that with more buildings with different SMts IHG profiles, the results with the thermostatic controller would be smoother. The most significant difference with a thermostatic controller compared to the P-controller is the rebound peak with OPCS. The aggregation with a P-controller results in a more distinct rebound peak after the pre-defined peak hours. This is because in some of the building zones, especially the ones with floor heating (technical room, bathrooms), the air temperature will not drop below 20.5 °C during the peak hours. If the temperature in these zones is between 20.5 °C and 21 °C, the P-controller will start to operate while the thermostatic controller will wait until the temperature is below 20.5 °C. Therefore, the selection of the start and stop temperatures (or dead-band) of the thermostatic control or the constant of the P-control will also have an impact on the rebound effect.
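The behavioural difference between the two radiator controls described above can be sketched as follows. The 21 °C set-point and the 20.5 °C lower band limit follow the values in the text; the function names and the linear P-control form are illustrative assumptions.

```python
def thermostatic(T, heating_on, T_start=20.5, T_stop=21.0):
    """On/off control with hysteresis: switch on below T_start, switch
    off above T_stop, and keep the previous state inside the dead-band."""
    if T < T_start:
        return True
    if T > T_stop:
        return False
    return heating_on

def proportional(T, T_set=21.0, band=0.5):
    """P-control: output fraction grows linearly from 0 at T_set to
    full power at (T_set - band), clamped to [0, 1]."""
    return min(1.0, max(0.0, (T_set - T) / band))

# At 20.7 degC a P-controller already delivers partial power, whereas a
# thermostat that is currently off waits until the temperature drops
# below 20.5 degC -- the behaviour behind the different rebound peaks.
print(round(proportional(20.7), 2))  # 0.6
print(thermostatic(20.7, False))     # False
print(thermostatic(20.4, False))     # True
```

This illustrates why, in zones that stay between 20.5 °C and 21 °C during the peak hours, the P-controller starts heating immediately while the thermostat stays off.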
Fig. 3. Difference in annual specific energy use during peak hours between the reference and the OPCS or SPCS for (a) the PHL and PHH and (b) for the TBL and TBH (for thermostatic control (TC) and proportional control (PC)).
The results for TBL, shown in Fig. 4(b), have the same trend as the PHL regarding the difference between radiator controls. However, unlike the PHL, the difference in rebound peak between the thermostatic and proportional controls is insignificant. As the temperature drop during peak hours is higher for the TB buildings, both the thermostatic and proportional controls will operate at full power right after the peak period and lead to the same magnitude of the rebound peak. Again, in terms of modeling, higher insulation levels require a more careful definition of the radiator control.
Internal thermal zoning
The effects of decoupling the bedrooms from the RBC strategies are investigated for the bedroom in the South-East corner of the building (bedroom SE). The operative temperature in bedroom SE is studied during night-time during the heating season, considering the SMts profile. Based on simulations, the heating season is defined between October and April. As Fig. 5(a) shows, it is difficult to achieve low bedroom temperatures in PH buildings due to the balanced mechanical ventilation with a centralized heat recovery, the relatively low heat losses through the envelope and the solar gains: the operative temperature is already above 22 °C for about 30% of the time without any activation of the thermal mass using RBCs. The SPCS leads to a higher increase of bedroom temperatures than OPCS, especially for the lightweight PH building (PHL): the operative temperature is then above 22 °C for more than 50% of the night-time during the heating season. As for the TB buildings, decoupling the bedrooms from the RBC strategies has a positive effect on reducing the bedroom temperatures. The SPCSbdc21 results in a noticeable improvement: with this strategy, the share of time over 22 °C is reduced significantly. A difference can be noticed between the construction modes. Compared to PHL, PHH has fewer hours at high temperatures when RBC strategies are applied to bedrooms. However, the decoupling of the bedrooms is not as effective for PHH as for PHL. This is reasonable as internal walls of lightweight buildings are insulated, thus having a lower U-value than in heavyweight buildings. The heat transfer through internal constructions is significant in heavyweight buildings, so that the bedroom temperature is influenced by temperature fluctuations generated by the RBCs in the neighboring rooms.
Fig. 6 is similar to Fig. 5, but shows the cases with a TSP of 16 °C in the bedrooms. Regarding the TB buildings, Fig. 6(a) shows that these buildings achieve temperatures close to this low TSP for most of the heating season. With OPCSbdc16 and SPCSbdc16, the bedroom temperature in TBL is below 17 °C almost 100% of the time. For the reference control, the temperature is higher in the TBH, but it remains below 17 °C for 70% of the time. OPCSbdc16 and SPCSbdc16 do not modify this trend. In addition, SPCS without overruling during night-time (SPCSbdc16+nor) is also studied. SPCSbdc16+nor leads to many hours with a TSP of 23 °C in the rest of the building (i.e. not the bedrooms) but it only slightly increases the bedroom temperature. For the TB buildings, the thermal mass activation in the common rooms does not increase the resulting bedroom temperatures; the temperature in bedrooms is thus not affected by the RBCs, suggesting that the risk of occupants opening windows to decrease bedroom temperatures is not increased. By decoupling the bedrooms from the RBC strategies, the amount of energy and power shifted is reduced, as the radiators in the bedrooms operate with a constant TSP of 21 °C or 16 °C. The energy flexibility potential with the bedrooms decoupled from the RBC strategies is evaluated using the energy use during peak hours (qph) for the reference case and for the RBC strategies. Again, OPCSbdc16 and SPCSbdc16 are evaluated against a reference case with a constant TSP of 16 °C in the bedrooms and 21 °C in the other zones. In general, the indicator qph is higher when the bedrooms are decoupled (bdc) from the RBC compared to the scenario with coupled bedrooms. This is illustrated in Fig. 7.
Conclusions
This work evaluated the energy flexibility that Norwegian residential buildings can provide to the electricity grid. This has been done using rule-based controls (RBC) that adjust the temperature set-point (TSP) of a direct electric space-heating system. Physical aspects which may influence the energy flexibility are investigated, including internal heat gains (IHG), the type of radiator control and the occupant preference for cold or warm bedrooms. Two RBC strategies activating the building energy flexibility are applied: one with a pre-defined schedule (OPCS) that aims at reducing electricity use during peak hours and one that aims at decreasing energy costs using time-variable spot prices (SPCS). These RBCs have been evaluated using simulations for a detached house with two different insulation levels and two construction modes. The main focus was the load shifting away from peak hours (which is a main concern in Norway). Results showed that all building types have potential to shift their energy and power use. The buildings with a higher insulation level achieve a higher relative share of energy and power shifted. Although less insulated buildings have a lower relative peak shaving potential, the magnitude of the energy and power shifted is significantly higher.
With highly-insulated buildings, the largest potential for energy and power shifting was found for stochastic IHG profiles, which are assumed to be the most realistic representations of occupant behaviour. However, the influence of the internal gains on the energy shifted is small for the less-insulated buildings. This indicates that the flexibility potential using thermal mass can be dependent on the timing of the IHGs, especially in highly-insulated buildings. Thus, modelling IHGs using standard fixed profiles may underestimate the load-shifting potential. It was found that the type of radiator control has an impact on the energy and load shifting potential of highly-insulated buildings, whereas this effect is almost negligible for low insulation levels. The two RBCs were also evaluated for 20 identical buildings but with different stochastic IHG profiles. Considering aggregated results, the performance of the thermostatic control converges to that of a proportional control. Proportional control can thus reasonably be used to evaluate the energy flexibility of several buildings. However, for highly-insulated buildings, the rebound peak right after the peak hours was found to be higher with proportional control compared to thermostatic control. With low insulation levels, cold bedrooms can be easily created by applying a low temperature set-point in these rooms (e.g. ~16 °C). If the two RBCs are not applied to bedrooms (but only to the rest of the building), bedroom temperatures do not increase significantly. With high insulation levels including a centralized heat recovery of the ventilation air, it is intrinsically difficult to create cold bedrooms. Periods with moderate to high bedroom temperatures will be found systematically during the space-heating season (as long as bedroom windows are not open). If the two RBCs are not applied to bedrooms (but only to the rest of the building), they do not amplify this phenomenon.
Consequently, these results suggest that the activation of the building thermal mass, if not applied in bedrooms, will not further increase the risk of window opening or user dissatisfaction in bedrooms. Window opening should be avoided as it would lead to a noticeable increase of the space-heating needs. In addition, the thermal mass activation without considering bedrooms leads to a moderate reduction of the load-shifting potential compared to the activation of the entire building. As a general comment, higher insulation levels require more modelling complexity for all the physical phenomena investigated (i.e. IHG definition, radiator control and temperature zoning). Results are relatively insensitive to this modelling complexity for low insulation levels. As most of the Norwegian building stock is composed of low-insulated buildings, this is an important conclusion.
Acknowledgements
The
|
v3-fos-license
|
2017-07-28T04:44:35.775Z
|
2017-07-01T00:00:00.000
|
9112043
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/7/7/e016541.full.pdf",
"pdf_hash": "b9d3c988cea4ef2665a6345097028af74699f7af",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:914",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"sha1": "4c28908b778c38960e871e35c9564ec1a836d9f0",
"year": 2017
}
|
pes2o/s2orc
|
Common attributes in retired professional cricketers that may enhance or hinder quality of life after retirement: a qualitative study
Objectives Retired professional cricketers shared unique experiences and may possess specific psychological attributes with potential to influence quality of life (QOL). Additionally, pain and osteoarthritis can be common in retired athletes which may negatively impact QOL. However, QOL in retired athletes is poorly understood. This study explores the following questions from the personal perspective of retired cricketers: How do retired cricketers perceive and experience musculoskeletal pain and function in daily life? Are there any psychological attributes that might enhance or hinder retired cricketers’ QOL? Design A qualitative study using semistructured interviews, which were subject to inductive, thematic analysis. A data-driven, iterative approach to data coding was employed. Setting All participants had lived and played professional cricket in the UK and were living in the UK or abroad at the time of interview. Participants Eighteen male participants, aged a mean 57±11 (range 34–77) years had played professional cricket for a mean 12±7 seasons and had been retired from professional cricket on average 23±9 years. Results Fifteen participants reported pain or joint difficulties and all but one was satisfied with their QOL. Most retired cricketers reflected on experiences during their cricket career that may be associated with the psychological attributes that these individuals shared, including resilience and a positive attitude. Additional attributes included a high sense of body awareness, an ability to self-manage pain and adapt lifestyle choices to accommodate physical limitations. Participants felt fortunate and proud to have played professional cricket, which may have further contributed to the high QOL in this group of retired cricketers. Conclusions Most retired cricketers in this study were living with pain or joint difficulties. Despite this, all but one was satisfied or very satisfied with their QOL. 
This may be partly explained by the positive psychological attributes that these retired cricketers shared.
Thank you for asking me to review this paper. I feel there is a lot of merit to this paper; it highlights novel findings not previously described and uses sound methodology to collect the data. I have made a few minor comments below. Introduction While I feel the introduction includes all the necessary information to set the scene for the study, I feel that a number of themes are discussed interchangeably in a single paragraph and that these could be grouped better to make the point. The 2nd paragraph described the psychological attributes of cricketers during play, the impact of retirement on psychology, QOL to OA and a cricketer's ability to deal with pain in retirement all in 1 paragraph. The flow of the introduction would be better if concepts were explained more fully and grouped together. Page 6, line 37: The primary aim is reported to explore physical activity after retirement. However, this would appear to be the primary aim of the larger study, not this one. Please correct this to present the aims of this study. The main question of this study therefore appears to be twofold: 1. The influence of musculoskeletal pain in retired cricketers on their QOL. 2. The psychological attributes of retired cricketers that may influence their retirement quality of life. Methodology Page 7, line 36: You indicate that your participants were selected from a larger sample based on their answer to 2 questions on physical activity. You need to indicate that you chose equal numbers of participants from each group (this is only clarified later in the methodology). The inductive thematic approach used in this study has been well described.
Results
Results are clearly presented and well supported by appropriate quotes. It should be highlighted that the results are a combination of qualitative and quantitative analysis. The interview findings have been used to explain the findings of high musculoskeletal injury and pain and the high quality of life experienced by cricketers. Page 11, line 3: Please could you clarify if this past history of injury was "ever" or pre/post retirement. Page 12, line 42: "Language used throughout the …" I feel the word language does not adequately explain how the investigators came to make the conclusions they did. This may be pedantic but I feel this is better explained by the process of identifying themes and whether these themes are reflected positively or negatively. Page 13, line 226: Does the 21 test matches described by Sam not make him identifiable? Page 14, line 53: Figure 1: This figure does not accurately describe what you have described in your results. As you point out in the discussion it is unclear whether a resilient person becomes a successful cricketer or the sport helps an individual develop resilience. Your figure suggests that the positive and negative experiences develop resilience which you cannot conclude from your study. No examples are given for positive experience like have been included for the negative experience. Positive experiences produce emotions (fortunate, proud etc) while the negative experiences produced a physical experience (pain and function). This doesn't make sense. I feel this figure needs to be revised. Discussion Page 15, line 57: "…who took party in this study described many attributes…..". Demonstrated indicates a more measurable assessment of the attribute. Page 16, line 3: "…greater proportion of retired, professional cricketers…". Page 16, line 10: "QOL outcome measures are driven influenced by pain and physical impairment…". "Driven by" indicates they are the sole determinants of QOL. Success, money, etc would surely influence QOL too.
Page 16, line 34: "…the present study frequently described shared resilient attributes, this which was highlighted….". This sentence is very long. Please shorten it. Conclusion Appropriate.
Reviewer comment
The study is interesting and addresses a current issue. Abstract: appropriate. Introduction: OK. Methods: Why did the authors not also use subjective QOL questionnaires?
1.1 Author response We agree that QOL questionnaires would have provided additional information that could have been a useful supplement to the qualitative interviews. However, assessing QOL was not an aim of the larger cross-sectional study from which participants were recruited and consequently validated measures of QOL were not included in the cross-sectional questionnaire design.
Author action
No action taken.
Reviewer comment
The sample is small and an author-conducted interview is a source of bias.
Author response
We believe the decision to perform 18 interviews was appropriate. The decision to cease recruitment at 18 interviews was supported by data saturation being achieved by the 14th interview. This was further confirmed by an additional four interviews that were carried out after data saturation, where no new themes emerged.
Author action
No action taken.
Reviewer comment
Some important papers in that theme were not cited by the authors: 1.3 Author action p5 Lines 6-8 After retirement, athletes are at increased risk of developing osteoarthritis, predominately due to a history of sport-related injury, and this can result in pain and activity limitations.3-7 p16. Lines 295-303: Retired professional footballers with osteoarthritis reported worse health-related QOL compared with retired footballers without osteoarthritis.22 However, most conventional QOL outcome measures (including the EQ-5D used in the study of retired footballers22) are influenced by pain and physical impairment, where the presence of pain and physical disability will result in a reduced QOL score, irrespective of the impact of pain or disability upon the individual.23 ************************* Reviewer: 2 comments ************************* Thank you for asking me to review this paper. I feel there is a lot of merit to this paper; it highlights novel findings not previously described and uses sound methodology to collect the data. I have made a few minor comments below.
Reviewer comment
Introduction While I feel the introduction includes all the necessary information to set the scene for the study I feel that a number of themes are discussed interchangeably in a single paragraph and that these could be grouped better to make the point. The 2nd paragraph described the psychological attributes of cricketers during play, the impact of retirement on psychology, QOL to OA and a cricketer's ability to deal with pain in retirement all in 1 paragraph. The flow of the introduction would be better if concepts were explained more fully and grouped together.
Author response
Thank you for your constructive feedback. We have reformatted the introduction to improve the flow, as suggested.
Reviewer comment
Page 6, line 37; The primary aim is reported to explore physical activity after retirement. However, this would appear to be the primary aim of the larger study not this one. Please correct this to present the aims of this study. The main question of this study therefore appears to be twofold: 1. The influence of musculoskeletal pain in retired cricketers on their QOL.
2. The psychological attributes of retired cricketers that may influence their retirement quality of life.
Author response
We see how presenting the study aim in this way may create confusion. We do however, believe it is important to accurately report both the a priori aim of the overall qualitative study, and the specific aim of this paper. We have revised the wording and hope this improves clarity whilst still making a distinction between the overall study aim, and specific aim of this paper.
Author action
We have changed the wording to improve clarity: p6-7, Lines: 47-49: The a priori aim of this qualitative study was to explore physical activity after retirement from professional cricket. However, the study also captured participants' broader perspectives regarding QOL and prominent themes linking QOL to professional cricket emerged during the initial inductive analysis. This paper focuses on these emergent themes, which were explored while considering two sensitising questions: How do retired cricketers perceive and experience musculoskeletal pain and function in daily life? Are there any psychological attributes that might enhance or hinder retired cricketers' QOL? A complementary manuscript will address physical activity after retirement from professional cricket.
Reviewer comment Methodology
Page 7, line 36: You indicate that your participants were selected from a larger sample based on their answer to 2 questions on physical activity. You need to indicate that you chose equal numbers of participants from each group (this is only clarified later in the methodology). The inductive thematic approach used in this study has been well described.
Author response
Thank you for pointing this out, we have now included an additional sentence highlighting this.
2.3 Author action p8 Line 73-74: We aimed to include an equal number of participants from each of these groups.
Results
Results are clearly presented and well supported by appropriate quotes. It should be highlighted that the results are a combination of qualitative and quantitative analysis. The interview findings have been used to explain the findings of high musculoskeletal injury and pain and the high quality of life experienced by cricketers.
Author response
Although pain and osteoarthritis were assessed quantitatively in the larger cross-sectional study via questionnaire, this data did not contribute to the qualitative analysis. However, the cross-sectional data was used to describe the sample characteristics in Table 1. We realise this may not be clear, and have subsequently clarified this distinction in the methods section.
Author action
The following paragraph was added to the methods section: p10-11, Lines 138-143: Following qualitative analysis, responses to relevant questions collected as part of the cross-sectional study were used to describe the study sample (including osteoarthritis status, age, previous surgery and length of professional cricket career). Questionnaire data was cross-checked for accuracy with participants' narratives and if disparities were present (e.g. a participant reported no current joint pain on questionnaire but described experiencing current joint pain during the interview), participant narratives were considered more reliable.
Reviewer comment
Page 11, line 3: Please could clarify if this past history of injury was "ever" or pre/post retirement.
Author response
This has now been clarified.
2.5 Author action p11 Lines 155-157: More than two thirds (n=14, 78%) reported having ever had an injury that resulted in more than one month of reduced participation in exercise, training or sport.
Reviewer comment
Page 12, line 42L "Language used throughout the …" I feel the word language does not adequately explain how the investigators came to make the conclusions they did. This may be pedantic but I feel this is better explained by the process of identifying themes and whether these themes are reflected positively or negatively.
Author response
Indeed it was not merely the language used that demonstrated a high level of resilience amongst participants. To clarify, we have removed the reference to participant language.
2.6 Author action p12, Attributes that appeared to enhance QOL in this sample of retired cricketers, included resilience and a positive attitude. Language used throughout the interviews highlighted that Almost all retiredcricketers in this study were resilient and had a positive attitude regarding pain and musculoskeletal impairment.
Reviewers comment
Page 13, line 226: Does the 21 test matches described by Sam not make him identifiable.
Authors response
We have decided to remove the start of the quote where reference to the number of test matches played is made. 2.7 Authors action p14 Line 242-244 "You just play the hand your dealt and I never think there's any point having any regrets or bearing any grudges, it is what it is and you know I've had a great life out of it. …I've made plenty of rubbish decisions, but I wouldn't change that fact, because all that failure and all that success has made me who I am and I'm quite happy being me now." 2.8 Reviewer comment a. Page 14, line 53: Figure 1: This figure does not accurately describe what you have described in your results. As you point out in the discussion it is unclear whether a resilient person becomes a successful cricketer or the sport helps an individual develop resilience. Your figure suggests that the positive and negative experiences develop resilience which you cannot conclude from your study. b. No examples are given for positive experience like have been included for the negative experience. c. Positive experiences produce emotions (fortunate, proud etc) while the negative experiences produced a physical experience (pain and function). This doesn't make sense. I feel this figure needs to be revised.
2.8 Author response a. Thank you for this feedback. You are quite right, we cannot insinuate a causal relationship between positive/negative experiences while playing cricket, and psychological attributes such as resilience in professional cricketers. We only intended to highlight "potential" interactions between key themes as stated in the title. However, to ensure this is not misleading we have revised the figure to include question marks and have included a description of what this denotes in the figure legend. b. We have now included examples of positive experiences to improve consistency. c. With the inclusion of the arrow and question mark between experience and psychological attributes, the negative experience may result in physical (pain and limited function), behavioural (body awareness, self-management) and emotional/psychological (eg, resilience, mental toughness) impacts. If the arrow/question mark between negative experiences and psychological attributes was removed, then you are quite right that this would not make sense since negative experiences would impact only physical factors without having psychological consequences.
2.8 Author action p25 Figure 1. a. Question marks have been added to Figure 1 and a description of what this denotes has been added to the figure legend: "Question marks denote uncertainty surrounding the nature of the relationship between cricket-related experiences and psychological attributes common in successful cricketers, since individuals may possess these attributes prior to cricket participation" b. The examples given for "positive experiences" are as follows: success, comradery, pride, accomplishment c. No changes made since this has been addressed by adding question marks (see 2.8 a.)
Discussion
Page 15, line 57: "…who took party in this study described many attributes…..". Demonstrated indicates a more measurable assessment of the attribute.
2.9 Author response Thank you for this suggestion. We agree that demonstrated may not be an ideal word to use here. However, participants did not "describe" these attributes, rather, they "exhibited" them. Thus, we have replaced the word "demonstrated" with "exhibited." 2.9 Author action p16 line 306. "Demonstrated" has been replaced with "exhibited." 2.10 Reviewer comment Page 16, line 3: "…greater proportion of retired, professional cricketers…".
2.10 Author response We have replaced "proportion" with "number" 2.10 Author action p16 line 307: "It is possible that a greater number of retired-cricketers "flourish" in later life compared with the general population" 2.11 Reviewer comment Page 16, line 10: "QOL outcome measures are driven influenced by pain and physical impairment…". "Driven by" indicates they are the sole determinants of QOL. Success, money, etc would surely influence QOL too.
Author response
The majority of health-related QOL measures, including the EQ-5D and SF-36, are largely influenced by the presence of pain or physical disability. Yet, they do not quantify the contribution of pain or physical disability to an individual's QOL. Thus, if a person is physically disabled, the overall QOL score will be impaired. Nevertheless, we agree that "driven" may be somewhat misleading, and "influenced" would be more appropriate here.
2.11 Author action p16, lines 311-314 However, most conventional QOL outcome measures (including the EQ-5D used in the study of retired footballers22) are influenced by pain and physical impairment, where the presence of pain and physical disability will result in a reduced QOL score, irrespective of the impact of pain or disability upon the individual.23
Reviewer comment
Page 16, line 34: "…the present study frequently described shared resilient attributes, this which was highlighted….". This sentence is very long. Please shorten it.
Conclusion
Appropriate.
Author response
This sentence has been shortened.
2.12 Author action p17 Line 352: Retired-cricketers in the present study shared resilient attributes, this was highlighted by their ability to maintain a positive attitude and adapt activity choices to effectively cope with fluctuations in pain and physical function. However, if other reviewers are satisfied I am happy to accept changes. I am happy that this is now publishable. 2 minor comments include: Page 12, line 192: "….in this study were resilient…". As resilience wasn't measured directly it would be more accurate to say they describe characteristics akin to resilience. Page 15, line 276: "… these individuals to possess, ….". Again, I think it would be more accurate to say these characteristics were described.
Author response
We would like to thank reviewer 2 for taking the time to critically appraise the manuscript and provide valuable comments and feedback. Your attention to detail is greatly appreciated.
In Figure 1, injury and surgery are described as "negative experiences". Injury and surgery are risk factors for developing chronic pain and osteoarthritis in later life, which a majority of participants reported and attributed to having played professional cricket. As described in the discussion, chronic pain and osteoarthritis have been associated with worse quality of life in other samples and general population groups. What is proposed in this figure, is that psychological attributes (including resilience, acknowledging limitations and self-management), as well as pride surrounding past accomplishments in cricket, may in part, negate the negative impacts of pain and osteoarthritis upon quality of life.
Author action
We have made small edits to the wording of Figure 1 legend to provide further clarity: p. 29 Green arrows and boxes represent factors with potential to positively impact quality of life; Red arrows and boxes represent factors with potential to negatively impact quality of life; A. Reflecting upon positive experiences in cricket and psychological attributes may positively impact quality of life following retirement from professional cricket; B. Reflecting upon positive experiences in cricket and psychological attributes may reduce the impact of pain, osteoarthritis and physical limitations on quality of life after retirement from professional cricket; Together, A. and B. provide one potential explanation for high quality of life despite a high prevalence of pain, osteoarthritis and physical limitations in this sample of retired professional cricketers; Question marks denote uncertainty surrounding the nature of the relationship between cricket-related experiences and psychological attributes common in successful cricketers, since individuals may possess these attributes prior to cricket participation 2.2 Reviewer comment 2 minor comments include: Page 12, line 192: "….in this study were resilient…". As resilience wasn't measured directly it would be more accurate to say they describe characteristics akin to resilience. Page 15, line 276: "… these individuals to possess, ….". Again, I think it would be more accurate to say these characteristics were described.
Author response
Thank you for drawing attention to this; we have made the following amendments:

2.2 Author action

p13. "Almost all retired-cricketers in this study demonstrated characteristics and described experiences suggesting high levels of resilience and a positive attitude regarding pain and musculoskeletal impairment." p15. "...psychological attributes that these individuals may possess, including resilience, a positive attitude, heightened body awareness.."
|
v3-fos-license
|
2017-08-15T20:24:50.547Z
|
2015-09-04T00:00:00.000
|
28063795
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=59394",
"pdf_hash": "7340cc91e4c5f8dcc4777d2ea09dbbf4632cb859",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:922",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "7340cc91e4c5f8dcc4777d2ea09dbbf4632cb859",
"year": 2015
}
|
pes2o/s2orc
|
Ileus Caused by Large Diverticulum of Postbulbar Duodenum: Case Report
Duodenal diverticula are common and are usually found in patients undergoing roentgenographic investigation of the upper gastrointestinal tract. The majority of these cases are asymptomatic and rarely require operative intervention. Occasionally they can result in obstruction of the biliary and/or pancreatic ducts, haemorrhage or perforation. Symptomatic cases may require endoscopic or surgical intervention. Herein, we present a case report of a female patient who underwent a surgical procedure due to repetitive obstructive symptoms.
Introduction
Duodenal diverticula occur very commonly in upper gastrointestinal barium studies, but despite their frequency they are rarely symptomatic [1] [2]. For that reason, the duodenum is usually overlooked as an underlying cause of acute abdomen [3]. That is why we present a case of a symptomatic large duodenal diverticulum in a patient who presented with an upper intestinal obstruction.
Case Report
A 47-year-old woman was admitted to a hospital in November 2014 with symptoms of upper abdominal pain, nausea and vomiting. On admission, a plain film of the abdomen was normal, but a barium study revealed a huge diverticulum measuring approximately 60 × 40 mm near the duodenal bulb (Figure 1). Her past medical history was significant for symptoms of obstruction since her youth; she had been admitted to hospital with ileus many times. She also had a family history of ulcer disease and colorectal carcinoma. At the age of 21 she underwent a left adnexectomy due to an ovarian cyst. The patient was discharged from hospital after termination of symptoms and was referred for gastroenterological workup.
Abdominal ultrasound showed a large diverticulum next to the hepatoduodenal ligament, with clear signs of impaired passage in the descending part of the duodenum. The rest of the abdominal ultrasound was normal. The patient was sent to endoscopy as preoperative workup (it was necessary to distinguish between the papilla of Vater and the diverticulum). Upper endoscopy revealed, in the postbulbar segment beneath the papilla of Vater, a diverticulum measuring approximately 60 × 40 mm. Below the diverticulum, next to the transition to the jejunum, the duodenum was rotated around its axis (torsion around the ligament of Treitz) (Figure 2). Lower endoscopy was normal, except for a significant dolichocolon. Based on the endoscopic findings, the patient was referred back to the abdominal surgeon for operation.
The patient was operated on in March 2015. A median laparotomy and diverticulectomy were performed (Figure 3). The postoperative course was without complications. The patient remained hospitalized for 14 days for local surgical wound care, as well as adjustment of her diet with food supplements. A control gastrografin study was performed on the 7th postoperative day and no extravasation of contrast was found (Figure 4). The patient was discharged from hospital on the 14th postoperative day. On follow-up examination the patient was asymptomatic and in very good condition, without any discomfort after the surgical intervention.
Discussion
A diverticulum is an abnormal sac or pouch protruding from the wall of a hollow organ [1] [4]. Diverticular disease of the small intestine is relatively common. The prevalence of small intestinal diverticula ranges from 0.06% to 1.3%. The etiopathogenesis is unclear, although the current hypothesis focuses on abnormalities in the smooth muscle or myenteric plexus, on intestinal dyskinesis and on high intraluminal pressures [5]. Duodenal diverticula are the most common acquired diverticula of the small bowel, and Meckel's diverticulum is the most common true congenital diverticulum of the small bowel. Duodenal diverticula represent the second most common site for diverticulum formation after the colon [1] [6]- [8]. They occur twice as often in women as in men and are rare in patients younger than age 40. Two thirds to three fourths of duodenal diverticula are found in the periampullary region. The overwhelming majority of duodenal diverticula are asymptomatic and are usually noted incidentally on an upper gastrointestinal series [5] [9]. Complications are rare, and perforation has been reported in fewer than 200 cases [6]. Diagnosis may also be obtained by upper gastrointestinal endoscopy. Less than 5% of duodenal diverticula will require surgery due to complications of the diverticulum itself. The causes of small bowel obstruction can be divided into three categories: 1) obstruction arising from extraluminal causes (adhesions, hernias, etc.); 2) obstruction intrinsic to the bowel wall (primary tumors); 3) intraluminal obturator obstruction (gallstones, foreign bodies, etc.). Adhesions secondary to previous surgery are by far the most common cause of small bowel obstruction. The cardinal symptoms of intestinal obstruction include colicky abdominal pain, nausea, vomiting, abdominal distension, and a failure to pass flatus and feces. These symptoms may vary depending on the site and duration of obstruction. Nausea and vomiting are more common with a higher obstruction and may be the only symptoms in patients with high intestinal obstruction [1] [6].
The diagnosis of intestinal obstruction is often immediately evident after a thorough history and physical examination. Plain radiographs usually confirm the clinical suspicion and define more accurately the site of obstruction. The accuracy of diagnosis of small intestinal obstruction on plain abdominal radiographs is estimated to be approximately 60%.
Barium studies have been a useful adjunct in certain patients with a presumed obstruction. Barium studies can precisely demonstrate the level of the obstruction as well as the cause of the obstruction in certain instances. Also, barium studies are recommended in patients with a history of recurring obstruction or low-grade mechanical obstruction to precisely define the obstructed segment and degree of obstruction.
Several operative procedures have been described for the treatment of the symptomatic duodenal diverticulum. The most common and most effective treatment is diverticulectomy, which is most easily accomplished by performing a wide Kocher maneuver that exposes the duodenum. The diverticulum is then excised, and the duodenum is closed in a transverse or longitudinal fashion, whichever produces the least amount of luminal obstruction. Due to the close proximity of the ampulla, careful identification of the ampulla is essential to prevent injury to the common bile duct and the pancreatic duct. The main postoperative complication of diverticulectomy is duodenal leak or fistula, which carries up to a 30% mortality rate [7].
Few case reports of symptomatic duodenal diverticula have been published. The most commonly described complication was perforation, for which segmental duodenectomy was performed. Other complications mentioned were gastrointestinal bleeding, intractable pain, biliary or pancreatic obstruction and gastrointestinal obstruction [1] [6]. In most papers, the authors concluded that operative treatment of duodenal diverticula is safe but should be reserved for those with emergent presentations [2] [6] [8] [10]. Moreover, a few papers emphasised the need for better diagnostic evaluation of upper gastrointestinal diverticula, as they are mostly unrecognized [8].
Conclusion
It is important to consider duodenal diverticula as a cause of acute abdomen, as duodenal diverticula are not so rare. Upper endoscopy and upper gastrointestinal radiographic imaging should be obtained in the diagnostic pathway. Surgical resection remains the mainstay of treatment when the diverticulum is large, symptomatic or complicated by perforation, volvulus or bleeding.
|
v3-fos-license
|
2021-08-02T00:06:15.752Z
|
2021-05-04T00:00:00.000
|
236618169
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.researchsquare.com/article/rs-1090420/latest.pdf",
"pdf_hash": "ec6ce234fd62648bb7e83866a7846a8eaa97a185",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:923",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"sha1": "a555ca0cd1c388d411f97047a25160929a1f82b0",
"year": 2021
}
|
pes2o/s2orc
|
The Effect of Quarantine During the COVID-19 Pandemic on the Oral Health Habits of the Syrian Community
Introduction: Coronavirus appeared at the end of 2019; it belongs to a large family of viruses that can cause respiratory infections. Due to its rapid and easy spread it became a pandemic in almost the whole world, which necessitated the imposition of quarantine procedures in order to reduce the speed of spread and the number of deaths, but these procedures may have many effects on people; one of the most important aspects that can be affected is oral health care. This study was conducted to investigate the effect of quarantine procedures on the oral health habits of the Syrian community. Materials and methods: A survey was made with Google Forms and then published on Facebook from 4-16-2020 until 11-5-2020. The number of people meeting the study criteria reached 1033; the effect of quarantine procedures on changing oral health habits, the number of brushing times and the time of brushing were studied. Results: Quarantine led to a change in oral health habits in 57.4% of the sample; females were significantly more affected by changing habits during quarantine (P = 0.020); the number of brushing times was not clearly affected and remained twice daily (49.4% before quarantine, 42.1% after quarantine), as there was no statistically significant difference between the two stages in terms of the number of brushing times. Conclusion: This study was one of the first studies to show the effect of home quarantine on the oral habits of members of the Syrian community. Home quarantine did not significantly affect the oral health habits of Syrians.
Introduction
Coronavirus belongs to a large family of viruses that can cause respiratory infections whose symptoms range between the common cold and severe symptoms. Middle East respiratory syndrome (MERS-CoV) and severe acute respiratory syndrome (SARS-CoV) are among the diseases caused by viruses from the corona family. The new generation of the virus appeared in 2019 in Wuhan, China, and was named SARS-CoV-2, later becoming a pandemic that threatens the lives of individuals [1].
The emerging COVID virus attacks the respiratory pathways and can be transmitted through droplets emitted through the mouth and nose when a person coughs or sneezes. It can be transmitted indirectly by touching contaminated surfaces with the hands and then touching the mouth, nose or eyes; varying numbers have been recorded for the days during which the virus can survive on surfaces [2]. A great variety of disease symptoms was recorded in people with the emerging COVID virus, with severity graded between moderate and severe. Among the most common symptoms observed were: fever, cough, difficulty breathing or shortness of breath, chills, muscle pain, cold, headache, pharyngitis, and loss of sense of smell or taste [3]. The incubation period for the virus is from 2 to 14 days before symptoms appear, and during this period the person carrying the virus is able to transmit it to others. Understanding the incubation period is very important in order to understand the importance of quarantine and social distancing. Years ago, SARS-CoV was contained globally through large-scale quarantine measures. Quarantine was also used effectively when infectious diseases spread centuries ago, such as cholera and plague, as the conditions that necessitated the imposition of quarantine at the time were similar to the current conditions that required quarantine in light of the spread of the new coronavirus. The quarantine has slowed the rate of spread and reduced the number of deaths. Quarantine aims to separate people who have been exposed to the pathogen from general society and to emphasize the adoption of health prevention methods, but adherence to quarantine can cause some individuals psychological, emotional and financial problems [4]. After the quarantine experiment conducted in 2003 in Canada when the SARS-CoV virus spread, there were significant negative effects on quarantined individuals, who felt anxious, isolated, fearful, frustrated and depressed.
Despite past quarantine experiences and the long history of quarantine, we know little about how people understood quarantine, or about their attitudes and behavior during the quarantine period [5].
It may be very easy for an individual to ignore his normal healthy oral hygiene habits, and according to previous studies, depressed people have a lack of oral health care; it is therefore recommended that people diagnosed with depression receive more dental services [6]. All diagnosed mental illnesses were associated with increased tooth decay, as well as increased tooth loss [7]. The aim of the research was to study the effect of quarantine on oral health habits, whether they increase or decrease.
Study design:
This observational cross-sectional online survey was conducted in Damascus between April 16 and May 15, 2021, in order to evaluate oral health habits during the quarantine period. The questionnaire was designed on the Google Forms website while maintaining the confidentiality of the person participating in the questionnaire and preventing the questionnaire from being answered twice by the same account.
The questionnaire consisted of several questions including gender, age, marital status, place of residence, educational qualification, economic status, and a question about oral health habits before and during the quarantine period. The aim of the questionnaire was to find out the effect of quarantine on people's oral health habits and link it to the other variables (educational qualification, economic status, gender and age).
On the first page of the questionnaire the participants were asked for their informed consent to participate in this research, after a description of the aim and objectives of the research. Participants under the age of 18 were asked to provide informed consent from a parent or legal guardian.
Study population:
Participants were eligible if they were Syrian citizens living in Damascus, which is the capital of Syria and the city with the highest population.
Statistical analysis
All data analyses were carried out using IBM SPSS Statistics for Windows (Version 26) (IBM Corp., Armonk, NY, USA). All tests were two-tailed and a p-value of less than 0.05 was considered statistically significant.
Results
The number of responses collected was 1090, and 57 (5.2%) of them were from outside Syria, so they were excluded; thus the final sample size was 1033. The average age was 23.40 years and ranged between 12 and 62, with a standard deviation of 5.43. The number of males was 265 (25.7%), while the number of females was 768 (74.3%) (Table 1). The socio-economic level of the sample was assessed through the SES indicator: 576 (55.8%) of the sample were at the low level, 24.5% at a medium level, and 19.7% at a high level. Quarantine led to a change in the oral health habits of 57.4% of the sample, while there was no change in the oral health habits of 42.6% of the sample (Table 1). The method of maintaining the cleanliness of the toothbrush was evaluated, and the sample showed mixed results: the majority (40.5%) put the brush in a cup with the rest of the family, 28.2% put the brush in their own cup, 26.1% covered the brush with a special cover, and 5.2% put the brush at the side of the sink (Table 1). The number of brushing times was evaluated before and during quarantine, and the results did not show significant differences between the two stages, as the majority of the sample brushed their teeth twice daily (49.4% before quarantine and 42.1% after quarantine); there was no fundamental difference in the number of brushing times between the two phases (P = 0.716). The time of tooth brushing was also evaluated during the two periods, and the results converged except with regard to brushing before leaving the house, which accounted for 15.8% before quarantine and was not present during the quarantine period; there was no fundamental difference between the two variables during the two time periods (P = 0.276).
(Table 2) Research variables were compared between males and females: females showed a greater change in oral health habits due to quarantine (51.3% for males, 59.5% for females), and this difference was statistically significant (P = 0.020). A fundamental difference was also found between males and females in terms of the number of brushing times before and after quarantine, and the time of brushing before and after quarantine, with significance levels of P = 0.000 and P = 0.008, respectively. (Table 3)
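The sex comparison reported above can be illustrated with a small re-analysis sketch. The authors used SPSS; the contingency table below is only a reconstruction from the reported group sizes and percentages (265 males, 51.3% changed habits; 768 females, 59.5% changed habits), so the counts and the resulting p-value are approximate, not the authors' exact computation:

```python
# Hypothetical reconstruction of the sex-by-habit-change contingency table
# from the percentages reported in the paper. Illustrative only; the
# original analysis was performed in SPSS.
from scipy.stats import chi2_contingency

males, females = 265, 768
table = [
    [round(0.513 * males), males - round(0.513 * males)],      # males: changed / unchanged
    [round(0.595 * females), females - round(0.595 * females)],  # females: changed / unchanged
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

With these reconstructed counts, the chi-square p-value lands close to the reported P = 0.020, consistent with a significant sex difference at the 0.05 level.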
Discussion
Oral health derives its importance from its impact on public health, in addition to its role in raising an individual's self-confidence and enhancing his ability to communicate with the surrounding community [8]. The association of oral health with unhealthy habits among community members has led the World Health Organization, since the beginning of the eighties, to set many goals to be achieved by 2020 under the name of dental self-care, which include reducing the percentage of intra-oral diseases and increasing individuals' awareness of the importance of dental self-care, associated with tooth brushing more than once a day and reduced consumption of sugars [9]. But after the emergence of the COVID-19 virus and the challenges that came with it, dentists faced difficulty in securing personal protective equipment, a high risk of transmission within the dental clinic, and patients' fear of keeping periodic visits [10]. It was imperative to ensure that society knew about ideal oral care procedures and the effect of home quarantine on those procedures. This study showed that the number of brushing times common among the participating Syrian individuals is in accordance with the recommendations of the World Health Organization [11] and the FDI [12], twice daily (49.4% before quarantine and 42.1% after quarantine), similar to what is common in Greece [13] and also Sweden, Denmark, and Germany [14], while the number of brushing times common among some medical students in India was once daily [15], and less than the number common among school teachers in Saudi Arabia, most of whom brush their teeth three times a day [16]. Brushing the teeth upon waking was the most frequent time in the sample during both study periods, which is contrary to the Oral Health Foundation's instructions, which recommend that brushing the teeth be the last thing an individual does before bed, in addition to once again during the day.
[17]. The method of maintaining the brush was wrong for a large number of individuals. Several studies have shown that covering the brush or placing it in contaminated media (with other brushes or at the side of the sink) increases pollution and bacterial growth on the surface of the brush [18] [19]. Home quarantine resulted in changed health habits for 57.4% of individuals, and females were more susceptible to changing habits during quarantine than males; this may be attributed to females staying at home for longer periods than males during this period. Nevertheless, females were more interested in oral health measures, whether before home quarantine [19] [20] or during quarantine.
Conclusion
This study was one of the first studies to show the effect of home quarantine on the oral habits of members of the Syrian community. Home quarantine did not significantly affect the oral health habits of Syrians. Recommendations: 1. Emphasize the importance of oral health measures for all individuals.
2. Adopting health awareness programs that include all spectrums of society.
3. Cooperation with international organizations and associations interested in public and oral health in particular to develop clear plans and strategies for disseminating correct information about self-care oral care procedures.
4. Emphasizing the role of the dentist as a trainer and observer for the application of these procedures by patients attending dental clinics and centers. Ethical approval was obtained from the ethics committee of Damascus University, Syria. Participants were informed about the study purpose on the first page of the questionnaire, and they were asked to give their full informed consent to participate before filling in the questionnaire. For participants under the age of 18, informed consent was obtained from the parents or the legal guardian of the participant.
In addition, all methods were performed in accordance with the relevant guidelines and regulations (Sex and Gender Equity in Research, SAGER, guidelines).
|
v3-fos-license
|
2018-11-15T14:04:21.127Z
|
2018-11-15T00:00:00.000
|
53307339
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3389/fimmu.2018.02638",
"pdf_hash": "6688cf4e5be01ae78aa552d8eef256702792dfa8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:927",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "6688cf4e5be01ae78aa552d8eef256702792dfa8",
"year": 2018
}
|
pes2o/s2orc
|
Role of Mechanotransduction and Tension in T Cell Function
T cell migration from blood to, and within, lymphoid organs and tissues, as well as T cell activation, relies on complex biochemical signaling events. But T cell migration and activation also take place in distinct mechanical environments and lead to drastic morphological changes and reorganization of the acto-myosin cytoskeleton. In this review we discuss how adhesion proteins and the T cell receptor act as mechanosensors to translate these mechanical contexts into signaling events. We further discuss how cell tension could contribute significantly to the regulation of T cell signaling and function.
INTRODUCTION
To mount a proper adaptive immune response and establish immune memory, T cells carry out many distinct cellular processes. In a simplified view, these processes can be grouped in three categories: (a) the adhesion cascade, during which circulating T cells exit the blood flow to roll, adhere and eventually extravasate through the endothelial cell layer; (b) migration, on the wall of blood or lymph vessels, within lymph nodes and in inflamed or cancerous tissues; and (c) activation, which primes naïve T cells and triggers cytotoxicity and cytokine secretion from effector cells. The molecular interactions and signaling pathways associated with T cell activation (1), migration through venular walls (2) and T cell migration in general (3) have been extensively characterized and are comprehensively described in these recent reviews. But the emergence of novel biophysical approaches has shed light on a previously neglected aspect of these processes: they all generate mechanical stimuli.
During the adhesion cascade, the blood flow applies an external shear stress on T cells binding and migrating on and through endothelial cells (2). T cell migration in tissues is driven by morphological changes, constantly fluctuating actin polymerization and molecular motor-driven contractions, which all generate internal mechanical tension (4). The same is true of T cell activation, which involves a tight contact between T cells and antigen-presenting cells or target cells, acto-myosin contractions and a sustained actin retrograde flow (5). Adding to the multiplicity of these mechanical contexts, T cells interact with substrates displaying various and changing stiffness (6) and with adhesion molecules that are either diffusive or firmly anchored to cortical actin (7). Hence, the idea that force plays an essential role in the T cell-mediated immune response has matured from an exciting hypothesis to a well-established field of T cell biology (8)(9)(10)(11).
In this review we first focus on demonstrated mechanotransduction events in T cells. We discuss how adhesion proteins-selectins and integrins-and the T cell receptor (TCR) act as mechanosensors during the adhesion cascade and during T cell activation, respectively. In the second part of the review, we get inspiration from other cell types and systems to picture how cell tension might contribute to the cellular signaling that regulates T cell migration and activation.
SHEAR FORCE: A KEY PLAYER DURING T CELL ROLLING AND ARREST ON THE ENDOTHELIUM
In search of their cognate antigen, T cells circulate between peripheral tissues and secondary lymphoid tissues, thereby exploiting a network of blood and lymphatic vessels (12). T cells circulating in the blood enter lymph nodes through high endothelial venules (HEVs). Before they can extravasate through HEVs, T cells first need to roll, arrest and finally adhere to the vessel walls (2,13). Forces derived from the blood flow play a decisive role in this adhesion cascade, contributing both to the initial capture by selectins and to the firm integrin-mediated arrest preceding extravasation (Figure 1).
Rolling on HEVs is mediated by interactions with fast on and off rates between selectins on T cells and their ligands displayed by the endothelium. Pioneering work using atomic force microscopy (AFM) in combination with flow chambers revealed that selectin-ligand interactions form catch bonds (molecular interactions whose dissociation rate decreases with force; see Glossary at the end of the article) when subjected to the low shear force generated by the blood flow (11,14,15). Thus, a mechanotransduction process, driven by a conformational change in the selectin headpiece, prolongs the lifetime of the bond between selectins and their ligand and thereby gives rise to enhanced cell adhesion under flow conditions (Figure 1A).
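The catch-bond behavior described above can be illustrated with the generic two-pathway catch-slip model, a common textbook formulation rather than a fit to the selectin data cited here: the off-rate is the sum of a force-suppressed ("catch") pathway and a force-enhanced ("slip") pathway, so the mean bond lifetime 1/k_off first rises with force before falling. All parameter values in this sketch are illustrative, not measured selectin parameters:

```python
import math

def off_rate(F_pN, k_c=20.0, x_c=0.5, k_s=0.05, x_s=0.3, kT=4.1):
    """Two-pathway catch-slip off-rate (per second).

    F_pN: applied force in piconewtons; x_c, x_s: transition distances in nm;
    kT: thermal energy in pN*nm (~4.1 at room temperature).
    Parameter values are illustrative, not fitted to selectin-ligand data.
    """
    catch = k_c * math.exp(-F_pN * x_c / kT)  # suppressed by force
    slip = k_s * math.exp(F_pN * x_s / kT)    # enhanced by force
    return catch + slip

# Mean bond lifetime 1/k_off at a few forces: it peaks at intermediate
# force (the catch regime) before the slip pathway takes over.
lifetimes = {F: 1.0 / off_rate(F) for F in (0, 10, 20, 40, 80)}
```

With these illustrative parameters the lifetime at intermediate force exceeds the lifetime at zero force, reproducing the qualitative signature of a catch bond; at high force the slip term dominates and the lifetime falls again.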
T cell tethering and rolling eventually lead to arrest and firm adhesion on endothelial cells, which is driven by heterodimeric integrins and their ligands and which also requires low force from the blood flow (2,13,16). Remarkably, integrin adhesiveness is increased very shortly after T cells make contact with endothelial cells, through a multistep process during which force plays an essential role (17). The first step in integrin-mediated adhesion is activation by signals coming from selectins and chemokine receptors. In a certain way, this first step prepares integrin to bear tensile forces, as it (a) increases integrin affinity for immobilized ligands on the extracellular side and (b) strengthens the integrin-actin cytoskeleton connection on the intracellular side through the recruitment of talin and kindlin to the intracellular integrin tail (17,18). Indeed, integrin activation by chemokines alone is not sufficient to trigger adhesiveness, which is achieved only by the effect of shear force from the blood flow (19). Integrins bound to immobilized ligand on one side and firmly anchored to the actin cytoskeleton on the other side are pulled into a high-affinity, open conformation by the low force of the shear flow (Figure 1B). This force-mediated reorganization of integrin conformation eventually allows stable bonds with ligands at the surface of endothelial cells to support T cell immobilization.
T CELL MIGRATION: STEERING TOWARD STIFFNESS
After adhesion and extravasation through endothelial cells, T cells adopt a motile behavior to reach antigen-presenting cells in lymph nodes or inflamed tissues. As described in an excellent recent review, the link between the actin cytoskeleton, adhesion modules and the extracellular matrix is highly dynamic and allows cells to convert the mechanical properties of their environment into signaling (20). In the context of migration, this can result in durotaxis, the ability of cells to migrate toward stiffer substrates. Durotaxis is another way in which mechanotransduction could potentially contribute to T cell functions. Typical targets of T cells have specific stiffness properties, such as (a) cancer cells, which can be softer than normal cells (21); (b) tumors, which are stiffer than normal tissue because of high collagen density and crosslinking (22,23); or (c) antigen-presenting cells (6). Changes in extracellular matrix stiffness of specific tissues are generally associated with disease progression (24). Neutrophils, whose amoeboid type of migration is similar to that of T cells, spread more and migrate more slowly but more persistently and exert stronger traction forces on stiffer substrates (25,26). Like neutrophil migration, T cell migration on ICAM-1 coated surfaces is also influenced by substrate rigidity. Indeed, it has recently been shown that T cells migrate faster on stiffer substrates (27).
T CELL ACTIVATION NEEDS FORCE
Contact of a migrating T cell with a target cell or an antigen-presenting cell displaying a cognate antigen results in activation and arrest and in the formation of an immunological synapse (1,28). In this paragraph, we will discuss in detail how mechanotransduction plays an essential role in this process. By demonstrating that T cell activation with antigen-coated beads requires the beads to be larger than 4 µm, Mescher provided the first hint that the generation of tension over a significant scale is indispensable for T cell activation (29). The first mechanosensor model for the TCR was published quite some time later, in a study demonstrating that the binding of an immobilized agonist antibody to CD3ε induces a torque in the structure of the TCR-CD3 complex. Non-activating antibodies, however, need to be conjugated to a bead and pulled tangentially to the receptor using optical tweezers to induce a similar activating response (30,31). By suggesting that the migration-related movement of T cells engaging a cognate peptide at the surface of antigen-presenting cells induces tangential forces on the TCR, this study is also an important reminder that T cells are actually migrating and under tension when they find their cognate antigen. Mechanosensing cells or proteins can sense and react to externally applied mechanical stimuli without actively contributing to the force that is at the source of the stimulus, for instance in the case of a cell subjected to shear stress. This can be termed passive mechanosensing (32), in contrast to active touch sensing (mentioned further in this review), where the mechanosensor is actively involved in the mechanical stimulus it is sensitive to, a bit like poking a mango to determine whether it is ripe. Cell motility generates cell tension and thereby might lead to passive mechanosensing as migration-related forces are transferred onto the TCR-CD3 complex (Figure 2A).
Similarly, formation of the immunological synapse leads to activation of the integrin LFA-1 and to tight adhesion to immobilized ICAM-1 on antigen-presenting cells (33), as well as to acto-myosin contractions (34) and cytoskeletal tensions [(35), Figure 2B]. Hence, the transition from migration to activation upon engagement of a cognate peptide represents a mechanical signal that is very likely to result in passive mechanosensing by the TCR. Interestingly, TCR engagement promotes local actin polymerization around the receptor itself (35), in a way reminiscent of the signal-dependent and talin-mediated anchorage of integrins to the actin cytoskeleton during the adhesion cascade. This means that the TCR is further anchored to the underlying cortical actin cytoskeleton upon activation, which could very well make it more susceptible to respond to mechanical stimuli. Along this line, it is now well-established that T cells, like many other cells, engage in the "active touch sensing" described by Kobayashi and Sokabe (32) by actively pushing and pulling on the substrate they adhere to in order to interrogate its stiffness (Figure 2B). Within the first tens of seconds of TCR triggering on a biomembrane force probe setup, T cells engage in a sequence of pushing and pulling forces even in the absence of LFA-1 engagement (36). Traction force microscopy (TFM) on polyacrylamide gels further confirmed that antibody activation of CD3 leads to acto-myosin-mediated pulling forces, which originate at the cell edge and are directed toward the cell center (37). Another TFM study on micropillars determined that these centripetal forces are generated through the binding of the TCR to activating ligands, further suggesting that integrins are not the mechanosensor at play during T cell activation (38). These forces are in the range of 100 pN, which is lower than the nanonewton forces observed during epithelial cell migration (39).
Of note, phosphorylation of the early TCR signaling kinase Lck takes place on the side of the pillars facing the cell edge, suggesting that TCR signaling is triggered where the tension is highest and strengthening the idea that TCR works better when it is under tension (38). The surface of T cells is covered with microvilli, whose tips are enriched with TCR [ (40,41), Figure 2A]. These microvilli extend and retract while T cells scan antigen-presenting cells and it is likely that the first step of antigen recognition on antigen-presenting cells is mediated by TCR located on stretched microvilli. This raises the possibility that active touch sensing might already be involved in the very early stages of T cell activation, as TCR at the tip of microvilli is subjected to specific forces resulting from the scanning of antigen-presenting cells. But forces applied on TCR at the tip of microvilli are also likely to be reduced by the elastic nature of these projections, which can act as shock absorbers, for instance in the context of the adhesion cascade (42). Further investigations are required to determine if TCR at the tip of microvilli is put under tension due to the exploratory character of these projections, or if on the contrary, the force on TCR is dissipated through a shock absorber effect. Finally, forces imposed on TCR located on collapsed microvilli will be very different once the immunological synapse is fully established.
A direct consequence of active touch sensing through TCR is that T cell activation is influenced by substrate stiffness. As a matter of fact, T cells pull more on stiffer substrates than on softer ones (37). CD4 T cells also produce more IL-2 on stiffer substrates up to 100 kPa (27, 43), but the contribution of stiffness to T cell activation is somehow lost beyond 100 kPa (43, 44). More generally, every aspect of T cell activation is potentiated by stiffer surfaces up to 100 kPa (27). The effect of substrate stiffness on T cell activation could even be larger than reported in these studies, which all used functional antibodies against CD3 to activate T cells. It is indeed likely that differences in substrate rigidity have a more pronounced effect on the binding of TCR to its natural ligand, a cognate peptide presented by major histocompatibility complex (MHC), than to an activating antibody.
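The saturating stiffness dependence described above can be caricatured with a simple Hill-type function: an activation readout (for instance IL-2 production) rises with substrate stiffness and flattens around 100 kPa. This is a purely illustrative sketch; the function name `il2_response` and its parameters `k_half` and `n` are assumptions, not values fitted to the data of refs (27, 43, 44).

```python
def il2_response(stiffness_kpa, k_half=20.0, n=1.5):
    """Toy Hill-type dose-response: an activation readout (e.g., IL-2)
    rises with substrate stiffness and saturates around ~100 kPa.
    k_half and n are illustrative assumptions, not fitted parameters."""
    return stiffness_kpa**n / (k_half**n + stiffness_kpa**n)

# The readout climbs steeply at low stiffness and flattens past ~100 kPa,
# mirroring the reported loss of the stiffness contribution beyond that value.
for s in (1, 10, 50, 100, 200):
    print(s, round(il2_response(s), 2))
```

A saturating form of this kind captures the qualitative observation that stiffer substrates potentiate activation only up to a point; it makes no claim about the underlying mechanosensing mechanism.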
The mechanism behind stiffness sensing in T cells has not been identified yet, but talin might be involved. As part of the complex protein assembly between integrins and the actin cytoskeleton (45), talin is an essential element of the substrate stiffness sensing machinery, and preventing talin from mechanically engaging with integrins disrupts extracellular rigidity sensing (46). Interestingly, T cells lacking talin fail to stop migrating in response to TCR triggering (47). As mentioned above, talin is essential to integrin-mediated adhesion (17) and in particular to LFA-1 adhesiveness for ICAM-1 following TCR triggering (48). It is likely that the affinity of LFA-1 for ICAM-1 is increased during T cell arrest upon TCR activation through a mechanism similar to the one described above for the arrest on endothelial cells in the blood flow. One can indeed consider that during activation, the LFA-1-ICAM-1 bond is put under tension by acto-myosin contractions and actin retrograde flow, in a fashion similar to how it is stretched by extracellular forces resulting from shear flow during the adhesion cascade (Figure 2B). As a matter of fact, it has been shown that ICAM-1 is immobilized at the surface of antigen-presenting cells in order to promote T cell-antigen-presenting cell conjugation and T cell activation (33). Hence, talin's mechanosensing properties could contribute to the stop signal that precedes the establishment of the immunological synapse and eventually to full T cell activation. However, a recent study somewhat challenges the idea that the talin-LFA-1 axis supports the stop signal: Feigelson et al. reported that the integrin ligands ICAM-1 and -2 on antigen-presenting cells are dispensable for these cells to trigger the arrest and activation of T cells (49). Finally, intravital microscopy studies have shown that T cells do not necessarily stop when encountering a stimulatory antigen-presenting cell.
Antigen recognition can happen during long-lasting contact, the immunological synapse, but also during shorter and more dynamic interactions, termed kinapse [(28, 50), Figure 2A]. While the functional difference between synapse and kinapse has not been fully established, the duration and nature of the antigen-presenting cell-T cell interaction contribute to shaping the outcome of T cell activation (51). Therefore, it is likely that the mechanosensitive properties of integrins and TCR contribute to this process by leading to distinct signaling in the context of a synapse or of a kinapse.
Thus, T cells pull on activating substrates, and they are more susceptible to be activated by stiffer substrates. With this in mind, it does not take a big leap to imagine that the active touch used by T cells is not only a mechanism to interrogate substrate stiffness. Indeed, a few recent studies indicate that putting TCR under tension is in fact an integral part of the activation process (Figure 2B). Presenting T cells with activating peptide-MHC complexes (pMHC) using atomic force microscopy (AFM) showed that T cell activation requires both the binding of a cognate antigen and forces through TCR (52). An in-depth analysis of the kinetics of TCR-pMHC interactions using a biomembrane force probe showed that TCR establishes catch bonds (molecular interactions whose lifetime increases with force) with cognate pMHC and slip bonds (whose dissociation rate increases with force) with non-agonistic pMHC, thereby making force applied through TCR a component of the antigen discrimination process (53). The formation of a catch bond is even what distinguishes stimulatory from non-stimulatory ligands among peptides that bind TCR with similar affinity (54). These results are further confirmed by two studies from Lang and colleagues using optical tweezers and DNA tethers. They first identified an elongated structural element of the TCRβ constant chain, the FG loop (55), as a key factor for the contribution of force to antigen discrimination (56). More recently, they demonstrated that TCR needs non-physiological levels of pMHC molecules to be triggered in the absence of forces (57). Using DNA-based nanoparticle tension sensors, Liu et al. further demonstrated that piconewton forces are transmitted through TCR-CD3 complexes a few seconds after activation and that these forces are required for antigen discrimination (58).
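The distinction between slip and catch bonds can be made concrete with standard single-bond kinetic models: the Bell model, in which the off-rate grows exponentially with force (a slip bond), and a two-pathway model, in which a force-suppressed dissociation pathway competes with a force-accelerated one, so that bond lifetime first rises and then falls with force (a catch-slip bond). The parameter values below are illustrative round numbers, not measured TCR-pMHC constants.

```python
import math

KT = 4.1  # thermal energy at room temperature, in pN*nm

def slip_lifetime(force_pn, tau0=1.0, x_beta=0.5):
    """Bell model of a slip bond: the off-rate grows exponentially with
    force, so the bond lifetime decays monotonically. tau0 (s) and
    x_beta (nm) are illustrative assumptions."""
    return tau0 * math.exp(-force_pn * x_beta / KT)

def catch_slip_lifetime(force_pn, k_c=2.0, x_c=1.0, k_s=0.1, x_s=0.5):
    """Two-pathway catch-slip bond: a force-suppressed dissociation
    pathway (k_c) competes with a force-accelerated one (k_s), so the
    lifetime first rises with force, then falls. Parameters are
    illustrative."""
    rate = (k_c * math.exp(-force_pn * x_c / KT)
            + k_s * math.exp(force_pn * x_s / KT))
    return 1.0 / rate

# A slip bond weakens monotonically with force; the catch-slip bond peaks
# at an intermediate force, the behavior reported for TCR with agonist pMHC.
for f in (0, 5, 10, 20, 30):
    print(f, round(slip_lifetime(f), 3), round(catch_slip_lifetime(f), 3))
```

With these toy parameters the catch-slip lifetime peaks at roughly 10 pN, which is why an applied force can lengthen the dwell time of an agonist while shortening that of a non-agonist, turning force itself into a discrimination signal.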
In summary, passive mechanosensing of the forces resulting from migration and activation, and active touch sensing through the TCR-CD3 complex, probably act together to connect TCR triggering both to the physical environment the T cell evolves in (speed of migration, stiffness of the presenting cell) and to ligand selectivity (8). This perhaps brings us back to a model described just 10 years ago, which proposed that the TCR-CD3 complex needs to be stretched in order to be activated (59), a postulate strengthened by the fact that TCR triggering involves a mechanical switch of its structure (60).

[Figure 2 caption (fragment): ... (Figure 1B). These forces also lead to passive mechanosensing by TCR. Additionally, TCR itself further engages in active mechanosensing, by pulling and pushing on pMHC molecules. Non-stimulatory ligands form slip bonds under tension and fail to trigger TCR signaling. By contrast, stimulatory ligands engage in a catch bond with TCR, which leads to a conformational change and in turn promotes TCR signaling. Binding to a stimulatory ligand also increases the density of F-actin around the TCR to further anchor it to the underlying cytoskeleton. All in all, tension through the TCR-pMHC bond contributes to TCR triggering and antigen discrimination.]
Forces that T cells generate upon activation relate not only to signal intensity and specificity, but also contribute to the T cell response, notably in the context of killing. Cancer target cells that express a higher number of adhesion molecules facilitate the release of lytic granules by cytotoxic T lymphocytes (61). More strikingly, tension induced on target cells by cytotoxic T lymphocytes facilitates perforin pore formation in target cells and thereby increases the transfer of granzyme proteases and cytotoxicity (62).
TENSION IN T CELLS: FURTHER FACTS AND PERSPECTIVES
Cell tension is the result of a complex interplay between tension mediated through the cytoskeleton and membrane tension. The cortical actin-plasma membrane relationship plays a central role in mechanobiology and is very well described in recent reviews (63, 64). In this regard, proteins that link the plasma membrane to the underlying cortical actin, such as Ezrin/Radixin/Moesin (65), are likely to play a determining role in T cell mechanical properties and mechanotransduction. Ezrin, which directly regulates membrane tension (66), is deactivated upon T cell activation to promote cell relaxation and, ultimately, conjugation to antigen-presenting cells (67). Similarly, constitutively active Ezrin increases membrane tension and impairs T cell migration in vivo (68). Hence, it appears that the ability of T cells to relax and deform their membrane is directly related to their ability to migrate and be activated. This is confirmed by the fact that naïve T cells are less deformable than T lymphoblasts, as assessed by a micropipette aspiration assay. The same study showed that depolymerization of the actin cytoskeleton makes both naïve T cells and T lymphoblasts more deformable (69).
Variations in membrane tension can influence T cell signaling in various ways. Mechanosensitive (MS) channels open up to mediate ion flux in response to membrane stretch (32, 70). First discovered in bacteria, where they compensate for sudden changes in environmental osmolality, MS channels have been shown to mediate intracellular Ca2+ rise in response to tension applied to focal adhesions or along actin fibers (71). T cells express a large variety of potential MS channels (72), and an electrophysiological study showed that one of them, TRPV2, opens and mediates Ca2+ entry in T cells subjected to mechanical stress (73). It has recently been shown that the most potent mechanosensitive ion channel identified to date, Piezo 1, is expressed in T cells, where it contributes to T cell activation through Ca2+ influx, although the study did not actually investigate whether this occurs through mechanical stress (74). In this regard, a study using AFM synchronized with fluorescence imaging reported that mechanical stimulation alone, without TCR stimulation, is sufficient to elicit an increase in intracellular Ca2+ (75). This is in agreement with the expression of Piezo 1 in T cells, but somewhat in contradiction with Hu and Butte, who reported that mechanical stimulation triggers Ca2+ flux only when coupled with TCR triggering (52). Further studies are still required to determine whether or not mechanical stimuli alone are sufficient to trigger Ca2+ flux through Piezo 1 in T cells.
Whether or not MS channels play a role in T cell migration also remains to be determined. It is however likely that membrane tension contributes to organizing polarity during T cell migration, in light of what has been observed in neutrophils. Ten years after the inhibitory effect of cell tension on the small GTPase Rac had been shown (76), Houk et al. used micropipette aspiration to show that cell tension acts as a long-range inhibitor to prevent Rac-mediated actin protrusions elsewhere than at the leading edge of motile neutrophils (77). These results were extended to further demonstrate that cell tension limits actin assembly through a negative feedback pathway involving phospholipase D2 and the mammalian target of rapamycin complex 2 (mTORC2) (78). Membrane tension also impacts the distribution and dynamics of membrane-bending proteins, such as BAR domain proteins (79), and reciprocally (80). In this context, it is interesting to note that tension promotes the formation of the leading edge of COS-1 cells through the recruitment of FBP17, a membrane-bending and curvature-sensing activator of WASP-dependent actin polymerization (81). Even though T cells and COS-1 cells have noticeably different mechanisms of migration, it seems likely that tension and actin polymerization could act in concert to install polarity in migrating and in activated T cells via similar mechanisms.
Carrying the speculation further, we could even imagine that the contribution of membrane tension to T cell activation or migration extends to the regulation of intracellular trafficking. As discussed in comprehensive reviews, the plasma membrane is largely inelastic and can increase in area only 2-3% before rupture occurs (63, 82, 83). Consequently, cells actively respond to membrane tension through regulation of intracellular trafficking, increased membrane tension favoring exocytosis (84-86) and reduced membrane tension leading to endocytosis (87). This means that cell tension could act as a mechanical long-range messenger to directly influence and coordinate endocytic and exocytic events (82, 83, 88) taking place during T cell migration and activation. In fact, intracellular trafficking is a key factor in establishing functional polarity by spatially restricting membrane proteins at a specific localization in the cell, thereby confining signaling and interactions with other cells or with the extracellular matrix. Selective endocytosis of a given receptor can locally reduce its surface expression. Similarly, targeted recycling can increase the local concentration of a protein within the plasma membrane. Incidentally, T cells are highly polarized, both during migration (uropod vs. leading edge) and during activation (immunological synapse). It is thus possible that membrane tension contributes to the regulation of these processes through the organization of specific endocytic and exocytic events. For instance, endocytosis and recycling are essential to integrin polarization and activity in motile cells in general (89, 90) and in T cells in particular (91, 92). Similarly, targeted delivery of vesicles to the immunological synapse is required for full T cell activation (93, 94) and secretion of cytotoxic granules (95, 96).
A good illustration of how this could happen can be found during phagocytosis by macrophages, a process that is in many ways similar to the formation of the immunological synapse and during which membrane tension coordinates the actin-driven formation of the phagocytic cup and exocytosis-fusion of vesicles (97).
Finally, cell tension does not stop at the plasma membrane or the cortical cytoskeleton. As well described in a recent review, forces are transferred from the cell surface to the nuclear envelope through the intermediate of the cytoskeleton or directly from the external environment (98). The structure and function of the nucleus are affected by these tensions, which allows it to function as a mechanosensor (99, 100). Accordingly, tensions can regulate gene expression by modifying the connection of heterochromatin to the nuclear lamina (101). Forces transferred to the nuclear envelope have also been reported to favor cell proliferation (98). Nuclear deformation has further been shown to directly lead to the import of specific transcription factors through the opening of nuclear pore complexes (102, 103). Because of its size and rigidity, the nucleus is the limiting factor during cell migration in a dense meshwork (104). Typically, dendritic cells use myosin II-driven contractions (105) and produce a dense actin network around the nucleus (106) to promote nucleus deformation and in turn facilitate squeezing through constrictions. 3D migration of T cells in confined environments is thus very likely to lead to compression of the nucleus. Similarly, the pulling exerted by T cells on antigen-presenting cells may lead to compression or even flattening of the nuclear envelope. Hence, it is conceivable that tension resulting from prolonged migration in a confined environment or from T cell binding to an antigen-presenting cell can lead to rearrangement of the chromatin structure or to the opening of nuclear pores and thereby influence the regulation of gene expression leading to T cell differentiation or proliferation.
CONCLUSION
T cells are subjected to ever-changing forces, either generated intracellularly or coming from their environment. They further interact tightly with cells displaying various levels of stiffness and with molecules whose anchorage to the underlying actin cytoskeleton varies. But more important than the multiplicity of these mechanical contexts is the fact that they very often are associated with specific processes participating in T cell function. It is therefore very likely that distinct mechanical signals team up with biochemical signals to ensure that T cells do the right thing at the right place and time. The role of mechanotransduction in the adhesion cascade preceding extravasation and in T cell activation is now well-established, although there is still room to refine the model describing it. Now may be the time to investigate the importance of cell tension for T cells (Figure 3), using what we have learned from other cell types and taking advantage of ever-improving biophysical approaches.
AUTHOR CONTRIBUTIONS
JR and DL conceived and wrote the manuscript; JL drew the illustrations.
GLOSSARY

Shear stress (or shear force)
Shear stress is the tangential force applied by a flowing fluid on the surface of an object.

Catch bond/Slip bond
A catch bond is a bond that becomes stronger (increased lifetime) when a pulling force is applied to it. By contrast, a slip bond becomes weaker (decreased lifetime) with applied force.

Mechanosensing
Mechanosensing is the process through which cells or proteins detect and respond to variations in forces and in the mechanical properties of their environment.

Mechanosensor
A mechanosensor is a molecule/protein that mediates mechanosensing.

Passive mechanosensing
During passive mechanosensing, a mechanosensor detects applied mechanical stimuli without itself applying force or tension.

Active touch sensing
Active touch sensing is the process through which cells actively probe the mechanical properties of their environment (for instance, substrate stiffness).

Mechanotransduction
Mechanotransduction is the process during which mechanosensors translate mechanical inputs into intracellular signaling events.

Stiffness (or elastic modulus)
Stiffness is a measure of the ability of an object or a substance to resist deformation upon an applied force.

Durotaxis
Migrating cells can sense the stiffness of the substrate they migrate in or on, typically via active touch sensing. Durotaxis is the ability of cells to move up rigidity gradients.
Projection Neuron Circuits Resolved Using Correlative Array Tomography
Assessment of three-dimensional morphological structure and synaptic connectivity is essential for a comprehensive understanding of neural processes controlling behavior. Different microscopy approaches have been proposed based on light microscopy (LM), electron microscopy (EM), or a combination of both. Correlative array tomography (CAT) is a technique in which arrays of ultrathin serial sections are repeatedly stained with fluorescent antibodies against synaptic molecules and neurotransmitters and imaged with LM and EM (Micheva and Smith, 2007). The utility of this correlative approach is limited by the ability to preserve fluorescence and antigenicity on the one hand, and EM tissue ultrastructure on the other. We demonstrate tissue staining and fixation protocols and a workflow that yield an excellent compromise between these multimodal imaging constraints. We adapt CAT for the study of projection neurons between different vocal brain regions in the songbird. We inject fluorescent tracers of different colors into afferent and efferent areas of HVC in zebra finches. Fluorescence of some tracers is lost during tissue preparation but recovered using anti-dye antibodies. Synapses are identified in EM imagery based on their morphology and ultrastructure and classified into projection neuron type based on fluorescence signal. Our adaptation of array tomography, involving the use of fluorescent tracers and heavy-metal-rich staining and embedding protocols for high membrane contrast in EM, will be useful for research aimed at statistically describing connectivity between different projection neuron types and for elucidating how sensory signals are routed in the brain and transformed into a meaningful motor output.
IntroductIon
Acquiring systematic information about synaptic connectivity in the brain is currently one of the greatest challenges in neuroscience. In recent years, significant effort has been invested into developing staining and microscopy techniques to speed up, automate, and make possible the acquisition of large image volumes of brain tissue. Different strategies have been pursued, some based on electron microscopy (EM), some on light microscopy (LM), and some on a combination of EM and LM.
The strength of EM-based techniques is their high spatial resolution. Among established EM techniques are serial block-face scanning electron microscopy (SBFSEM), focused ion beam scanning electron microscopy (FIBSEM), serial section transmission electron microscopy (ssTEM), and serial section scanning electron microscopy. In the SBFSEM technique (Denk and Horstmann, 2004), block-face imaging in a scanning electron microscope (SEM) is combined with serial sectioning of a resin-embedded brain block with a diamond knife inside the microscope chamber.

Frontiers in Neuroscience www.frontiersin.org April 2011 | Volume 5 | Article 50

SBFSEM allows sectioning of large areas of tissue for hundreds of consecutive sections and is therefore suited for high-resolution imaging of large tissue volumes. An alternative technique for automated cutting and block-face imaging, FIBSEM, relies on using a focused ion beam to mill thin layers of embedded tissue (Knott et al., 2008; Merchan-Perez et al., 2009). The ion beam mills away layers of tissue a few nanometers thick, giving rise to close-to-isotropic volumetric data that facilitates the tracking of small neurites through the tissue. The FIBSEM technique is currently limited by the inability to mill areas larger than a few hundred square micrometers, because the focused ion beam cannot be deflected arbitrarily. FIBSEM is therefore less suited for imaging of large volumes. Also, block-face techniques such as SBFSEM and FIBSEM share the disadvantage of section loss, because the cut ultrathin sections cannot be collected. It is therefore not possible to repeatedly image a region after the block has been cut, which would be desirable in case analysis of the imaged volume reveals that higher-resolution images of certain sections are needed. Another disadvantage of block-face EM techniques is that immunohistochemical staining can only be performed before tissue embedding but not thereafter.
A third EM approach relies on high-throughput ssTEM. Different frameworks have been proposed to automate image acquisition and image tile and serial section registration, and to take advantage of the high imaging speed of transmission electron microscopes (TEM; Anderson et al., 2009; Cardona et al., 2010). Unlike block-face techniques, ssTEM allows for repeated imaging of sections and post-embedding immunolabeling. However, ssTEM involves cutting and collecting hundreds of serial sections on small and fragile TEM grids. Preparation therefore necessitates a skilled and trained operator, because section loss can lead to incompleteness in the reconstruction. Moreover, compared to FIBSEM and SBFSEM, ssTEM has the disadvantages that, with current technology, sections cannot be cut thinner than 30-40 nm (compared to a few nanometers with FIBSEM), and that imaged sections need to be registered to each other, since their orientation is lost during preparation.
All EM-based techniques provide highly resolved brain volumes. However, they suffer from the inherent drawback of EM of necessitating long imaging and tracing times. Complete EM reconstruction of vertebrate brains requires a prohibitive amount of time (see Helmstaedter et al., 2008 for estimations of reconstruction time). Hence, if long-range projections from and to a specific area cannot be contained in their entirety in the reconstructed volume, they need to be identified using electron-dense tracers (Anderson et al., 1994; da Costa and Martin, 2009). The main disadvantage of electron-dense tracers is that only a limited number of them can be used simultaneously (Smith and Bolam, 1991; Lanciego et al., 1998; Reiner et al., 2000), and therefore only few projection neuron types can be distinguished in a single brain. Thus, using EM-only methods, it is difficult to describe the connectivity between different brain regions in terms of interactions between different projection neuron types.
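The "prohibitive amount of time" mentioned above can be illustrated with a back-of-envelope calculation: divide the number of voxels in a block by an assumed acquisition rate. The voxel size and megavoxel-per-second rate below are illustrative round figures chosen for the sketch, not the estimates of Helmstaedter et al. (2008).

```python
def imaging_time_days(volume_um3, voxel_nm=(10, 10, 30), mvoxels_per_s=1.0):
    """Days of continuous imaging for a tissue block, given a voxel size
    (x, y, z in nm) and an acquisition rate in megavoxels per second.
    All numbers are illustrative round figures."""
    vx, vy, vz = voxel_nm
    n_voxels = volume_um3 * 1e9 / (vx * vy * vz)  # 1 um^3 = 1e9 nm^3
    seconds = n_voxels / (mvoxels_per_s * 1e6)
    return seconds / 86400.0

# A (300 um)^3 block already demands months of uninterrupted imaging,
# before any proofreading or tracing time is counted.
print(round(imaging_time_days(300**3), 1))
```

With these assumed numbers, a cube 300 µm on a side takes on the order of a hundred days of raw imaging, which makes it clear why long-range projections must be labeled with tracers rather than traced exhaustively.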
In projectomics, the circuitry of inputs and outputs of brain regions is described without necessarily characterizing the local network in similar detail. This approach is different from connectomics, in which a local network is described exhaustively in terms of all synaptic connections present. Whereas connectomics involves a complete EM reconstruction of the region of interest, projectomics requires ways of distinguishing different projections neurons without needing a complete EM reconstruction of the entire brain. LM can overcome the limitation of few distinguishable electron-dense tracers and allows labeling of many more projection neuron populations. These populations can be labeled by injection of different colors into the different brain regions or through expression of fluorescent proteins by viral or transgenic methods.
Different light-based methods have been proposed for circuit reconstruction. The Brainbow is a transgenic technology in which neural circuits are visualized by genetically labeling neurons with different proportions of multiple colors (Livet et al., 2007). Based on proportion differences, neuronal processes belonging to a single cell can be identified in different parts of the brain without needing to completely trace the neuron. Another LM-based technique is array tomography, in which ordered arrays of ultrathin resin-embedded serial sections are repeatedly stained and imaged in the light microscope. In array tomography, large-field volumetric imaging of large numbers of antigens and fluorescent proteins is possible (Micheva and Smith, 2007; Micheva et al., 2010b). Due to small section thickness, the z-resolution of array tomography is comparable to that of EM and much smaller than that of two-photon or confocal imaging. However, the same is not true for the x-y resolution, which in classical LM is smaller than in EM by orders of magnitude. Recent advances such as stimulated emission depletion (STED) microscopy have increased LM resolution beyond Abbé's limit, allowing for live imaging and measurement of details including spine neck width and the curvature of the heads of spines (Nagerl et al., 2008; Nagerl and Bonhoeffer, 2010). However, STED is currently not an alternative to EM, because exhaustive detection of smaller details such as synaptic vesicles and membrane specializations (necessary for identification of functional synapses) still requires EM today. In addition, with LM it is common to observe signal discontinuities because of partially unlabeled structures. Such discontinuities hinder dense reconstructions of neural circuits. Hence, many dense and high-resolution reconstruction tasks may not be feasible without EM.

Electron microscopy
Electron microscopes use an electron beam to illuminate the specimen and reach a resolution higher than light microscopy and sufficient to resolve individual synapses.

Resin embedding
Biological tissue prepared for serial section electron microscopy needs to be cut into ultrathin sections. For this purpose, the tissue is typically infiltrated with a resin that is subsequently polymerized, so that the hardened tissue can be cut with a diamond knife.

Tracer
Tracers are chemical compounds that, after injection into the nervous system, are taken up by neurons and transported inside them. Tracers reveal neuron location and morphology by virtue of labels visible with either LM or EM.

Projectomics
Projectomics is an approach to circuit reconstruction in which mainly the afferent and efferent projections of a brain region are reconstructed, without aiming at a complete characterization of the entire local network.

Array tomography
Array tomography is a high-resolution proteomic imaging method based on repeated staining and imaging of ordered arrays of ultrathin, resin-embedded sections (Micheva and Smith, 2007).
correlatIve mIcroscopy for neural cIrcuIt reconstructIon
In the correlative LM-EM approach, fluorescence LM provides information about neurons and their synapses inside a small subvolume, which is then reconstructed in EM imagery. Multiple neurons or neuronal types are distinguished based on their fluorescence labels, and their synapses are detected with EM. As a consequence, there is no need to reconstruct neurons over long distances. Unfortunately, combining LM and EM is not straightforward in most cases. Chemical preparation for EM leads to strong fluorescence reduction when the aims are good ultrastructure preservation and high membrane contrast. In addition, during tissue preparation, morphological changes such as tissue shrinkage can occur. Because of such tissue transformations, it is not possible today to directly superpose three-dimensional LM imagery taken prior to embedding with EM micrographs. Although such a correlative microscopy approach would be conceptually very simple and desirable, it is still not feasible. In addition, the correlative pre-embedding LM and EM approach suffers from the problem of the low z-resolution of LM imagery, making it difficult to unambiguously associate synaptic LM labels with the correct synapse seen in the EM.
The idea of correlative array tomography (CAT) is to combine LM and EM imagery on the same ultrathin section, thus overcoming the aforementioned problems of low z-resolution and tissue shrinkage. In CAT, endogenous antigens such as tubulin, GABA, SNAP-25, or synapsin, can be labeled with immunofluorescence using staining and fixation protocols also suitable for EM (Micheva and Smith, 2007;Micheva et al., 2010b).
It is an open question, however, whether tissue preparation protocols compatible with immunoreactivity and EM also deliver an ultrastructure quality high enough for neural circuit reconstruction. As a step in this direction, we extended the CAT approach by first labeling multiple types of projection neurons using fluorescent tracers and then fixing, staining, and embedding the tissue aiming for high-quality EM ultrastructure, suitable for circuit reconstruction (Oberti et al., 2010). Although our staining and fixation protocols reduce tissue antigenicity and are therefore less suited for detection of endogenous molecules, we show that tracer signal lost during embedding can be recovered using fluorescent antibodies against the tracers, and at the same time synapses can be well resolved in EM. We apply this CAT approach to projection neurons in our animal model, the zebra finch.
The songbird
The zebra finch is a good animal model to study a complex sensory-motor behavior. Zebra finches are able to imitate the song that they hear sung by their tutors. During a sensory phase, the juvenile animal listens to the tutor and memorizes its song, while in a later sensorimotor phase the juvenile vocalizes and uses auditory feedback to match its own song with the memorized tutor song (Konishi, 1965). Identification of different projection neuron populations is essential to investigate how sensory information enters the brain, how it is processed there, and how a meaningful output is generated. In the zebra finch, projection neurons can be easily labeled because the avian brain is organized in segregated nuclei. Their projection neurons can therefore be labeled by targeted injection of fluorescent tracers of different colors, one for each brain area.
A specialized set of brain areas is involved in song learning and production (Figure 1). One of these is HVC (used as a proper name), a premotor area in the forebrain which drives motor output by a sparse sequence of bursts but also receives auditory information (Nottebohm and Arnold, 1976; Hahnloser et al., 2002, 2008; Long et al., 2010; Roberts et al., 2010). HVC receives input from the nucleus interface of the nidopallium (NIf) and from the thalamic nucleus uveaformis (Uva). HVC contains a population of neurons projecting to the robust nucleus of the arcopallium (RA), which in turn relays information to the motor neurons of the vocal organ and to respiratory areas. Another population of HVC neurons projects to the basal ganglia nucleus Area X, which is involved in generating song variability (Scharff and Nottebohm, 1991; Reiner et al., 2004; Olveczky et al., 2005).
In our strategy, we use fluorescent tracers to label neuronal processes according to their projection target and use EM to visualize the circuit context of the labeled structures (Oberti et al., 2010).

Workflow for correlative array tomography using neural tracers in the songbird

We inject dextran-coupled fluorophores of different colors into the living brain (Figure 2A). After the tracers have diffused, we perfuse the animal with 2% paraformaldehyde and 0.075% glutaraldehyde and slice its brain with a vibratome (Figure 2B). Light microscopy examination of the sections allows us to localize the region of interest, to evaluate the quality of labeling, and to verify the correctness of the injection sites. We then process the sections for EM. The tissue is stained and fixed with heavy metals (40 min 1.5% potassium ferrocyanide and 1% osmium tetroxide, 40 min 1% osmium tetroxide, 1 h 1% uranyl acetate), dehydrated in a graded series of ethanol dilutions, and finally embedded in an epoxy resin (Durcupan ACM resin, Fluka, Buchs, Switzerland). This preparation results in hardened sections, which, after trimming to smaller pieces, we can cut into ribbons of ultrathin serial sections using a diamond knife (Figure 2C). We put the ultrathin sections on various substrates such as pioloform film, glass coated with an electron-conductive substrate, or other electron-conductive materials (such as the silicon wafers used in the examples in this paper).

We then image the collected ultrathin sections in a conventional wide-field fluorescence microscope (Figure 2D). Exposure times on the order of several seconds (5-20 s) are necessary. Preparation of the tissue for EM causes a strong reduction of the fluorescence of the tracers, which varies depending on the fluorophore and in some cases amounts to complete quenching. We therefore introduced an additional step to regain fluorescence signal by immunolabeling against the fluorophores (Figure 2E). First the sections are treated with periodic acid and sodium metaperiodate, chemicals which facilitate accessibility and binding of the antibodies to the resin-embedded and osmicated tissue. Subsequently we incubate the sections with primary antibodies that bind to the fluorophores, followed by fluorescent secondary antibodies. We finally image the immunostained sections with LM and EM (Figures 2F,G). In the EM we locate a region of interest previously defined in LM imagery using landmarks such as section borders, blood vessels (visible in some fluorescence channels), and stained cell somata, which can be easily identified in EM images even at low magnification.

After images are acquired with the EM, we align them with the LM images using the previously mentioned landmarks or using features such as somata borders or tracer-filled vesicles in retrogradely stained cells. The final data consist of multichannel LM pictures superimposed on EM imagery (Figure 2H). Figure 3 shows examples of correlative images of zebra finch HVC. The animal was injected with two different tracers in the afferent nucleus Uva (Texas Red dextran) and the efferent nucleus RA (Lucifer Yellow dextran). After resin embedding, we cut ultrathin sections and acquired images of Texas Red direct fluorescence with a LM (Figure 3, left column, red signal). Lucifer Yellow fluorescence was completely quenched during the preparation, but we detected it anew using anti-Lucifer Yellow antibodies and fluorescent secondary antibodies for signal amplification (Figure 3, left column, yellow signal). Sections were subsequently imaged with an SEM using an energy-filtered detector for backscattered electrons (Figure 3, middle column). Finally the two image sets were superimposed (Figure 3, right column, and Figure 4). Texas Red fluorescence that was detected in ultrathin sections without antibody staining survived the preparation and was colocalized with retrogradely labeled cell somata (Figures 3C,D, asterisks) and with smaller neuronal processes (Figures 3A,B, asterisks). Lucifer Yellow-dextran tracer, relabeled with antibodies, also colocalized with small neurites (Figures 3C,D, arrows) as well as with presynaptic terminals (Figures 3A,B, arrows). Alignment of the two image sets was based on landmarks such as section borders and blood vessels and necessitated only rotation, translation, and scaling of the images. Superposition of the two image sets (Figure 3, right column) therefore allows classification of structures observed in the EM based on LM information. EM-identified synapses can be assigned to a specific projection neuron class (Figures 3A,B, arrows) based on the fluorescence signal in a few ultrathin sections, without the need to trace the neuron through a large volume (for additional examples, see Oberti et al., 2010).

Physical deformation of the sections due to the electron beam, as occurs frequently in TEM, was absent, because the sections were mounted on a silicon wafer, which is a rigid substrate. Optical deformations were measured and found to be minimal in our SEM and negligible in the LM. The reliability of the immunostaining signal can be assessed by inspecting the correspondence of immunolabeled structures in consecutive sections (Figure 3). To be conservative, it can be assumed that fluorescence signal present in only individual sections is false positive staining, because tracer-filled neurons span multiple ultrathin sections. We suspect that our method is more susceptible to false negatives, because small neurites may remain unlabeled owing to insufficient tracer concentration. Tracing small neurites in EM image stacks of several ultrathin sections, however, should allow us to find a region in which the neuron is larger and more likely contains reliably detectable tracer. This tracing should be possible thanks to the good preservation of tissue ultrastructure (Figure 4).
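The alignment of LM and EM image sets described above is restricted to rotation, translation, and scaling, i.e., a 2D similarity transform fitted to matched landmark pairs. As an illustrative sketch (not the authors' actual software), such a least-squares fit can be written in a few lines; the landmark coordinates are assumed to have been identified beforehand in both image sets:

```python
from math import atan2, cos, hypot, sin

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform (uniform scale, rotation,
    translation) mapping matched landmark points src -> dst.
    Points are (x, y) pairs; returns (scale, theta, tx, ty)."""
    n = len(src)
    mxs = sum(p[0] for p in src) / n
    mys = sum(p[1] for p in src) / n
    mxd = sum(p[0] for p in dst) / n
    myd = sum(p[1] for p in dst) / n
    # Solve for a = s*cos(theta), b = s*sin(theta) on the centered points.
    num_a = num_b = den = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs, ys = xs - mxs, ys - mys
        xd, yd = xd - mxd, yd - myd
        num_a += xs * xd + ys * yd
        num_b += xs * yd - ys * xd
        den += xs * xs + ys * ys
    a, b = num_a / den, num_b / den
    # The translation maps the source centroid onto the destination centroid.
    tx = mxd - (a * mxs - b * mys)
    ty = myd - (b * mxs + a * mys)
    return hypot(a, b), atan2(b, a), tx, ty

def apply_similarity(params, point):
    """Apply a fitted (scale, theta, tx, ty) transform to one (x, y) point."""
    scale, theta, tx, ty = params
    a, b = scale * cos(theta), scale * sin(theta)
    x, y = point
    return (a * x - b * y + tx, b * x + a * y + ty)
```

Because the fit is linear in the parameters a = s·cosθ and b = s·sinθ, it requires no iteration and tolerates moderate landmark noise, which suits landmarks such as section borders and blood vessels.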
Electron microscopy imaging can be done either in the SEM or the TEM. In the first case, sections can be mounted on a variety of electron-conductive substrates, including coated glass and silicon wafers. These substrates allow the collection of long ribbons of serial sections, which are useful in large imaging tasks such as array tomography (Micheva and Smith, 2007). SEM has the disadvantage of slower imaging speed compared to TEM. With TEM, imaging is typically faster, but sections need to be mounted on a fragile electron-transparent substrate such as pioloform film on a slot grid. Slot grids have a size limited to a few millimeters; therefore, only a few serial sections can be mounted on each of them. As a consequence, TEM imaging is more labor-intensive and error-prone than SEM imaging.
The applicability of multiple tracer types is limited only by the signal-to-noise ratio of the image data and the availability of antibodies. Immunostaining of multiple tracers on the same section can be done using antibodies raised in different species or, if the antibodies are all raised in the same species, using consecutive staining and antibody elution rounds (Micheva et al., 2010a). Although the elution protocols developed for array tomography also work when the tissue is embedded in resins other than LR White, such as epoxy resins, it remains to be tested whether the excellent ultrastructure and integrity of the ultrathin sections we achieved are affected by the antibody elution. In our work, we embedded the tissue in Durcupan resin because we found the ultrastructure to be better preserved than in LR White. In future work we will seek a substrate which, on the one hand, binds the Durcupan-embedded sections so that they do not detach due to physical stress during preparation and elution and, on the other hand, is inert to the chemicals used for elution and is also electron-conductive for EM imaging.
Alternative approaches to correlative microscopy
Other tissue preparation strategies have been proposed for combining LM and EM. Photooxidation is a technique in which fluorescent dyes are used to oxidize diaminobenzidine into an electron-dense osmiophilic polymer (Maranto, 1982; Deerinck et al., 1994). Photooxidation has the advantage of allowing direct and unambiguous correlation of the same structures in the LM and EM, because the fluorescent labels imaged in the LM are converted into labels visible in the EM. However, this technique has so far not allowed the conversion of different colors into substrates distinguishable in the EM.

Cryosectioning has also been proposed as a useful technique for correlative microscopy, in particular combined with immunolabeling methods using gold- or FluoroNanogold-labeled antibodies (Takizawa et al., 1998; Slot and Geuze, 2007; van Rijnsoever et al., 2008). Cryosectioning leads to better antigenicity preservation compared to classical chemical preparation, but its applicability for imaging of large volumes may be limited by section instability: it is almost impossible to reliably collect large numbers of serial sections.

Quantum dots have been used for correlated LM and EM imaging (Giepmans et al., 2005). These small nanocrystals are both fluorescent and electron dense, and they can be discriminated by their color in the LM and by their size and shape in the EM, allowing labeling of multiple antigens simultaneously. Unfortunately, to our knowledge quantum dots are not yet available in a form that can be injected into the brain as a neuronal tracer, for example coupled to dextrans.

Our CAT approach to projectomics does not require complete imaging and reconstruction of large volumes. Instead, we expect to gain statistical information about neuron types by reconstructing small, local volumes, in which part of the neurons are recognized via their label. Our approach can be applied to different animal models. Connectivity in subcortical regions of the mammalian brain, such as interactions between hypothalamic and brainstem nuclei, could be investigated by targeted injections of fluorescent tracers combined with EM imaging. Moreover, immunostaining against endogenous or transgenically expressed molecules in interneurons could be used to classify these neurons and their synapses.

In our animal model, the zebra finch, we expect our method to be useful for statistical quantification of connections between different projection neuron types. In the case of the forebrain area HVC, our goal is to understand how the signal is conveyed by the afferent projection neurons, how it is transformed by the interaction with other neuron populations within HVC, and how it is routed to motor areas by efferent projection neurons. For example, we want to quantify the strength of connections of neurons with members of the same class compared to the connections made with other neuron types. This information will help formulate circuit models to better understand how the song-control network can learn and generate a highly stereotyped motor behavior.

Acknowledgments

The authors would like to thank Prof. Kevan Martin, Rita Bopp, German Koestinger and Simone Rickauer for technical advice, and the Electron Microscopy Center of ETH Zurich (EMEZ), in particular Dr. Roger Wepf, for support.

Figure 4. Region of Figure 3C enclosed by the dashed square. The high-quality ultrastructure is illustrated by membranes of small neurites that have no gap artifacts and appear fully closed (white arrows) and by highly contrasted synaptic densities (black arrows). Scale bar 500 nm.
Towards a unified English technology-based writing curriculum in the Arabian Gulf countries: the case of Oman
This study investigates the efficacy of a new testing tool, a Web-based application known as the Academic Writing Wizard (AWW), in creating a unified English technology-based writing curriculum in the Arabian Gulf countries, focusing particularly on the case of Oman. The application was piloted in three Omani high schools selected by the Omani Education Ministry. All the schools have class grades 11 and 12 only. Over 2 weeks, 71 students and 6 teachers were trained in the effective use of AWW. In the pre-application phase, the selected students were asked to write a five-paragraph essay without using AWW. In the post-application phase, they were asked to write the same essay employing AWW, specifically elements of the Lexical Cohesive Trio (LCT), combining elements of textual reference: anaphora, cataphora, transitional signals, lexical repetition, and lexical phrases. A total of 71 respondents took part in the study. All were senior-grade students (class grades 11 and 12). Comparisons of the two groups with respect to the quantitative and scoring scales were performed using the nonparametric Mann–Whitney test. The dynamics of the indexes were analyzed using the nonparametric Wilcoxon test. A multifactorial analysis of variance was performed to study the influence of the class factor, and MANOVA was conducted to study the influence of two factors simultaneously: the class and the time period. Based on the results of the statistical analysis, the following was found: 1. The values of all indexes, including the index Teacher's Score, were higher in all the grades in the post-application period. 2. There was a statistically significant positive increase across all indexes between the post-application and pre-application periods in each grade. 3. The index Teacher's Score increased by about 3% from the pre-application to the post-application period in both the 11th and 12th grades; thus, the index Teacher's Score was influenced by the Period factor. 4. The dynamics of the Score index were clearly visible: the values of the index Score in all the classes were higher in the post-application period. 5. According to the multifactorial analysis, the indexes Score, Teacher's Score, and Final Grade were influenced by the Period factor only. When the index Final Grade is compared, the average score in the 11th grade was 46.3 ± 5.8% in the pre-application period and increased by 13 percentage points to 59.3 ± 5.5% (P < 0.0001) in the post-application period. In the 12th grade, the average score of the same index was 48.1 ± 8.4% in the pre-application period and increased by approximately 13 percentage points to 61.0 ± 7.5% (P < 0.0001) in the post-application period. The results further indicate that the application of AWW significantly improved Omani students' English academic writing skills. Therefore, AWW will be a useful tool in the English curriculum of both Omani and other Arabian Gulf English schools.
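For illustration, the nonparametric statistics named above can be computed directly. The sketch below implements the Mann–Whitney U statistic (independent groups) and the Wilcoxon signed-rank statistic (paired pre/post scores) in plain Python; the sample scores in the usage are invented for the example and are not the study's data:

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistics for two independent samples.
    Counts, over all pairs (x, y), 1 for x > y and 0.5 for a tie;
    U_x + U_y always equals len(xs) * len(ys)."""
    u_x = sum(1.0 if x > y else 0.5 if x == y else 0.0
              for x in xs for y in ys)
    return u_x, len(xs) * len(ys) - u_x

def wilcoxon_w(pre, post):
    """Wilcoxon signed-rank statistic for paired samples: rank the nonzero
    pre/post differences by absolute value (ties get averaged ranks) and
    return the smaller of the positive- and negative-rank sums."""
    diffs = sorted((b - a for a, b in zip(pre, post) if b != a), key=abs)
    w_pos = w_neg = 0.0
    i = 0
    while i < len(diffs):
        j = i
        while j < len(diffs) and abs(diffs[j]) == abs(diffs[i]):
            j += 1
        avg_rank = (i + 1 + j) / 2.0  # average of the tied ranks i+1 .. j
        for k in range(i, j):
            if diffs[k] > 0:
                w_pos += avg_rank
            else:
                w_neg += avg_rank
        i = j
    return min(w_pos, w_neg)
```

A small U (or W) relative to its null distribution indicates a systematic difference between the groups (or periods); p-values such as those reported above would then be read from the exact distribution or a normal approximation.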
Introduction
This study, which was funded by the Kuwait Foundation for the Advancement of Sciences (KFAS), introduces a technological writing curriculum in the high schools of Arabian Gulf countries, specifically Oman, to enable a smoother school-university transition for students. Including technology in academic writing raises awareness among students of the steps, features, and techniques required or commonly used to create a coherent academic text. This study therefore marks an important step towards improving Omani students' writing skills. While working with the students on Academic Writing Wizard (AWW), I noted that they found it useful as a new tool for improving their writing skills, as it enabled them to experience new ways of creating English texts. Some discovered the importance of employing textual elements such as referential lexical elements, anaphora and cataphora, transitional signals, lexical repetitions, and lexical phrases. Although the students might have previously encountered such textual elements in fragments, they could not combine them effectively; most of the lexical and textual elements of writing are taught to them only sporadically and marginally. This study aimed to demonstrate that the application of technology in teaching academic writing could enhance Gulf Cooperation Council (GCC) students' knowledge of academic writing and related techniques, which will provide them with substantial help at higher university levels. Most high-school students in the GCC countries face difficulties in their transition from Arabic-speaking schools to English-medium universities. This sudden transition overwhelms most students and hinders them from succeeding in universities whose medium of instruction is English. One of the main obstacles GCC students face is their inability to produce effective English academic texts: the level of English taught at high schools is far below what is expected at the tertiary level.
In a study conducted by Tryzna and Al Sharoufi on language policy in Kuwait, the authors pinpointed the main obstacles Kuwaiti and GCC students face when learning English as a foreign language: "The paper discusses the shortcomings found in creating an effective pedagogical system, capable of producing proficient English language speakers in the state-funded schools in Kuwait. Due to the indispensable role that the English language plays in most Kuwaiti institutions, finding a viable solution to solve language problems in Kuwait is becoming of an insurmountable concern to the educational authorities. This paper suggests that adopting a unified solution at the Gulf Cooperation Council, GCC, countries' level would provide Kuwaiti educational authorities with a workable solution, capable of overcoming language problems currently faced by Kuwaiti students." (Tryzna & Al Sharoufi, 2017).
In another study, conducted by Ali Al-Issa on the English writing problems faced by Omani students, he states: "When it comes to written expression, the ESL instructor should also be aware that Omani students lack opportunities to communicate freely in writing. Students dwell on a single topic from the textbook, listening to a text about it, reading about it, speaking about it, and eventually writing about it. In general, students think about and produce language in a linear and controlled manner. The topics in OWTE are seldom based on the outward-bound students' needs and interests. Therefore, students may have little interest in expressing themselves in the Omani classroom." (Al Issa, 2006).
In another study, Basmah Al Saleem endorses the use of computers in modern education and further states that traditional methods of teaching language skills are already obsolete: "In fact, computers in language learning use language skills, be it natural language, foreign language or the so-called second language. Digital technology is used as an educational resource to help language learners improve their language skills, complementing it with other methods of teaching; thereby providing an involved, linguistically rich learning environment. Using computers in learning languages means using computer technology to provide, improve and evaluate the learning material through the use of interactive computer features, its different learning modes, and the Internet. Orthodox techniques are no longer limited to this. In the age of information, technology and communications, educational methods which were effective in the old days are not necessary to be useful. Future education is likely to be based on e-learning, based on the use of modern technologies like computers, intranets and the Internet." (AlSaleem, 2020). Academic Writing Wizard, and the Lexical Cohesive Trio in particular, is introduced in this study as a possible solution that allows high-school students to work on their academic writing while they are still at school. Although the phrase "academic writing" is mainly used at the tertiary level, it is important to mention that Academic Writing Wizard mainly aims at improving high-school students' English and bringing it closer to the level used at university. AWW thus seeks to bridge the gap between high-school English and tertiary-level English. The author relies on important studies concerned with applying the LCT framework to help improve students' academic writing (Al Sharoufi, 2013).
The main questions raised in this study are as follows: 1. Can Academic Writing Wizard (AWW) improve high-school students' academic writing skills and enable them to cope with academic writing at the university level? 2. Can Academic Writing Wizard be an acceptable tool for high-school students? 3. Will high-school students use AWW effectively in their English classes?
Literature review
AWW is based on the notion of conscious writing, which requires an awareness of the textual tools that make up a coherent academic text. Technology is integrated to help quantify these tools and provide an effective academic writing environment for students. These tools play an essential role in creating and conveying coherent meaning. Halliday and Hasan (1976) argue that cohesion is a semantic and grammatical feature of texts that works internally in structuring written and spoken discourse.
Numerous studies have highlighted the importance of teaching and creating awareness of reference, repetition, and lexical blocks or bundles in academic English writing. Using the right references or pronouns combined with a proper application of transitional signals contributes to improving the lexical and logical connections in writing. Cohen and Fine (1978) argue that for non-native students, a failure to properly understand and use these elements weakens cohesive ties throughout the text. Winter (1979, p. 101) and Hoey (1995, pp. 26-48) discussed the strong relationship between lexical repetition and the creation of meaning in text. In his detailed analysis, Hoey (1995) contends that there are different types of lexical repetition, namely simple lexical repetition, complex lexical repetition, simple paraphrase, and complex paraphrase. Boers et al. (2006) assert that lexical blocks or sequences can substantially affect oral fluency, and Ranjbar et al. (2012) note that teaching and employing lexical bundles can improve non-native speakers' academic writing ability. Biber and Barbieri (2007) discussed features of lexical bundles, such as incomplete clauses and non-fixed expressions transparent in meaning, along with the frequency with which they occur. A database of the most frequently used lexical bundles in academic writing was developed by a group of linguists at the University of Manchester; this database is a component of my Lexical Cohesive Trio (LCT). There is an increasing number of automated writing evaluation systems that focus on a variety of textual features and provide scores thereof. IWrite focuses on grammatical textual features, relevance of writing, and language usage (Liang & Deng, 2020). Another application used for automatic evaluation is iTest, which checks reading, listening, writing, and translation skills but does not train students in academic writing in particular.
Another application is The Intelligent Essay Assessor, which traces specific textual patterns, learns to reproduce similar patterns, and provides approximate scores on that basis (Landauer et al., 2003). E-rater is another application that targets word usage and grammatical and discoursal features (Burstein et al., 2004). The main problem with most of these applications is their failure to specifically target lexical cohesion, a feature indispensable in determining textual coherence. AWW bridges this research gap by helping English learners understand and apply lexical cohesion effectively.
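Hoey's repetition categories discussed above can be operationalized very roughly in code. The sketch below is a toy classifier, not part of Hoey's framework or of AWW: the suffix list is our own simplification. Identical or merely pluralized forms count as simple lexical repetition, while forms sharing a stem but differing in derivation count as complex repetition:

```python
# Toy suffix inventory for stem comparison; a real system would use a
# proper stemmer or lemmatizer.
SUFFIXES = ["ations", "ation", "ingly", "ings", "ing", "ed", "es", "s", "ly", "er"]

def _stem(word):
    # Strip the first matching suffix, keeping at least three letters of stem.
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= 3:
            return word[: len(word) - len(s)]
    return word

def repetition_type(word_a, word_b):
    """Toy version of Hoey's (1995) repetition categories:
    'simple'  - identical forms or trivial inflection (e.g. plural -s),
    'complex' - shared stem but a different derivational form,
    None      - no lexical repetition detected."""
    a, b = word_a.lower(), word_b.lower()
    if a == b:
        return "simple"
    if _stem(a) == _stem(b):
        trivial = {"", "s", "es"}
        sa, sb = a[len(_stem(a)):], b[len(_stem(b)):]
        return "simple" if sa in trivial and sb in trivial else "complex"
    return None
```

Run over pairs of content words in adjacent sentences, such a classifier yields the kind of repetition counts a cohesion-oriented tool can report back to the writer.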
This literature review is organized into three main components, as follows.
Technology use in education in Arabian Gulf countries, especially in Oman
The main problem that Omani and GCC students face is that the prevailing approach to teaching English is based on simplistic grammatical competence. This approach neglects the actual role of language as a communication tool. Al Issa explains the inadequacy of teaching decontextualized grammar: "Omani and GCC students spend a considerable amount of their time studying grammar out of context and in isolated sentences in teacher-fronted instruction situations. The teachers themselves were taught through the grammar translation or audiolingual method. Students in Oman and other Arab countries are hardly, if ever, given opportunities to explore grammatical structures in context "to see how and why alternative forms exist to express different communicative meanings" (Nunan, 1998, p. 102-3; Al Issa, 2006). Using technology, then, is the answer to overcoming the shortcomings caused by grammar-based approaches in English writing. To overcome difficulties caused by older teaching methods, the Omani government paid attention to using technology in Omani schools. A study conducted by Tahani Al-Habsi, Saleh Al-Busaidi, and Ali Al-Issa shows that technology is finding its way towards changing the educational scene in Oman: "The present qualitative study is an intervention, which attempted to explore the integration of technology among 11 public school English language teachers in the Sultanate of Oman through the use of community of practice (CoP). As the first in the region, this qualitative study triangulated data using a focus group interview and reflective journals. Three themes emerged from the data analysis.
Despite certain challenges, the findings were generally positive and encouraging, and revealed that if a CoP is effectively utilized to the fullest to integrate technology in ELT, it can facilitate policy implementation and Second Language Teacher Education (SLTE) in the Sultanate of Oman, the neighbouring Gulf Cooperation Council countries, some Asian and Far Eastern countries, and beyond." (Al-Habsi et al., 2022). Although the previous study confirms the importance of using technology in English classrooms, it is far from being fully applied in real life. The present researcher faced some difficulties when piloting Academic Writing Wizard in Omani schools, the main difficulty being weak Internet connectivity. Providing the educational sector in Oman with an effective Internet infrastructure will thus boost the process of teaching English writing to Omani students.
Lexical Cohesive Trio
Academic Writing Wizard (AWW) provides easy access to numerous tools through a comprehensive digital environment that helps students visualize and create textual links and receive instantaneous assessments. In his review of Academic Writing Wizard, Adrian Wurr states: Academic Writing Wizard is a web-based instructional tool to help writers develop cohesive and coherent essays. The program was originally developed for undergraduate EFL students but has been successfully used in secondary and tertiary English L1 and L2 instructional environments. The program is based on the premise that since languages are rule-governed, learning a foreign language is primarily a process of learning how to string meaningful chunks of the target language together. Applying this simple premise to more sophisticated, corpus-based word and phrase databases, Academic Writing Wizard helps developing writers use cohesive links and formulaic academic expressions more consciously in their writing. Recent research (Al Sharoufi, 2014) examining the efficacy of the program found statistically significant (p < .0001) improvements in students' use of cohesive devices, lexical repetitions, and phrases. User-friendly tutorials and YouTube videos help instructors register for and use the free-trial program in their classes. Once an account is created, students are led through a four-step process to writing an essay or academic report. First, students must select the number of paragraphs they want their essay to include. Then for each paragraph in the essay, they select the connecting words and types of lexical and phrasal repetition they want to include in the essay in steps 2-5. Step 2 involves selecting connecting words from a list of common conjunctions (e.g., and, moreover, indeed). In step 3, students select which type of lexical repetition they want to use, simple, complex, and/or phrasal.
Each is developed further in step 4, where drop-down menus provide the student with a list of options for improving cohesion in different parts of the essay.
Figure 1: Step 4 Screenshot -- Selecting lexical phrases commonly used in conclusions to show the need for more research
For example, Fig. 1 shows a list of lexical phrases commonly used in the conclusion of a research paper for making suggestions for future work on the topic. Finally in step 5, students are shown a visual representation of the lexical devices used, such as the one in Fig. 2, wherein each cohesive device is identified in color-coded highlights in the text. This final step and screenshot is where the real value of Academic Writing Wizard lies because, however abstract the grammatical concepts of cohesion may seem to developing writers, seeing each cohesive link in their essay highlighted makes intuitive sense. Providing students with immediate feedback on which parts of the essay are stronger and weaker cohesively is akin to how streetlights provide pilots flying over more and less developed landscapes at night with a map of the cities and countryside below. With the information provided in the Cohesive Trio Density Matrix and grammar and style checkers, students are able to revise their essays further. In pre- and post-samples of student writing, Al Sharoufi (2014) […] (Wurr, 2017)
Technology can improve students' reading and writing skills: it "encourages improved comprehension of reading and more elaborate writing in the science classroom by motivating students to act on their curiosity, access resources, and embellish their work" (p. 89). Several studies have discussed the effect of including technology in second language writing.
Lin and Griffith (2014) reviewed many of these studies and argued that technology can undoubtedly help in improving the quality of second language writing, reporting that: "The literature review suggests that online collaborative learning environments can have cognitive, sociocultural, and psychological advantages, including enhancing writing skills, critical thinking skills, and knowledge construction, while increasing participation, interaction, motivation, and reducing anxiety" (p. 303). Ahmadi further stresses the indispensable role of technology in improving foreign students' linguistic skills: "Language is one of the significant elements that affects international communication activities. Students utilize different parts of English language skills such as listening, speaking, reading, and writing for their proficiency and communication (Grabe & Stoller, 2002). In addition, Ahmadi (2018) stated that one of the important elements for learning is the method that instructors use in their classes to facilitate the language learning process. According to Becker (2000), computers are regarded as an important instructional instrument in language classes in which teachers have convenient access, are sufficiently prepared, and have some freedom in the curriculum. Computer technology is regarded by a lot of teachers to be a significant part of providing a high-quality education" (Ahmadi, 2018).
Notwithstanding potential complaints regarding technological issues or limited availability in certain schools, the numerous benefits of integrating technology into the field of academic writing cannot be denied. Through this wide and unlimited environment, students can explore various data and information that are updated instantaneously on the Web. They can further develop their own ideas and create their own essays or research papers in a way that builds connections and relations, from the smallest blocks of their written works to the huge academic corpora available on the World Wide Web.
Academic writing expectations in high school versus in college
According to some Omani scholars, Omani students' writing skills leave much to be desired. Ali Al Issa stresses that English is not taught as a communication tool. He asserts that "Al-Battashy (1989) and Al-Toubi (1998) state that English is not taught as a language for communication in Oman. They note that the classroom materials, especially the prescribed text Our World Through English (OWTE, 1999-2000), and classroom activities are controlled and do not resemble real language use. Saur and Saur (2001) point out that knowledge-based tests, mastery of content, and achievement grades dominate the scene and powerfully affect student motivation, and that the kind of English taught and evaluated in secondary school is different from the kind of English the students need for entry to an English-medium college or university; Al-Alawi (1997) also notes that it has little connection with the real world" (Al Issa, 2006). It is thus important to find a viable solution to the linguistic problems faced by Omani students if they want to join universities where English is the medium of instruction. The present study tries to provide such a solution by introducing a Web-based application that can help Omani students improve their academic writing skills.
Context of the study
This study aims to demonstrate the role of technology in improving writing composition and bridging the gap between high-school education and university education in terms of academic writing. A similar study based on the efficacy of using Lexical Cohesive Trio was conducted on university students at the Gulf University for Science and Technology (GUST) in Kuwait. The results of that study showed that using Lexical Cohesive Trio can robustly enhance students' academic writing skills.
The figures all show significant improvement in and abundance of transitional signals, lexical repetition, and lexical phrases, respectively. Appendix 1 shows a numerical breakdown: Transitional signals were used 417 times before using the framework and 721 times after using the framework; lexical repetitions were used 420 times before using the framework and 872 times after using the framework; finally, lexical phrases were used 447 times before using the framework and 1079 times after using the framework. In particular, the final result, 1079, showing an increase in the number of lexical phrase occurrences in all the collected samples of student essays, constitutes an important guarantor of an improved logical and rhetorical structure. It is this abundance of lexical phrases, occurring naturally in academic articles, which should be emphasized in the conscious teaching of the Lexico-Cohesive Trio. (Al Sharoufi, 2014) These results were further analysed in light of the t-test, using SPSS, with the aim of statistically ensuring the validity of the hypothesis that the LCT is an efficient framework for teaching academic writing. Based on the table in Fig. 2, three pairwise t-tests were conducted on the data to examine whether there are significant changes between the two versions, before and after the use of the LCT. A pairwise t-test confirmed that significantly more transitional signals were produced after the framework was used: t(1,29) = -4.938 with a p value of less than 0.001. Similarly, a pairwise t-test confirmed that significantly more lexical repetitions were produced after the framework was used: t(1,29) = -5.218 with a p value of less than 0.001. Finally, a pairwise t-test confirmed that significantly more lexical phrases were produced after the framework was used: t(1,29) = -10.672 with a p value of less than 0.001.
These tests present strong evidence in favor of using the LCT and confirm the significant finding that an increase in students' use of cohesive devices, lexical repetitions, and lexical phrases enhances their ability to write coherent essays. (Al Sharoufi, 2014) It is thus shown that using the LCT considerably enhanced university students' academic writing. Such promising results encouraged the author to use the same lexical framework, the LCT, with high-school students this time. Based on the previous study, the author created a new Web-based application for teaching academic writing, Academic Writing Wizard (AWW), to bridge the gap between high-school students and university students in the GCC countries, especially in Kuwait and Oman. Furthermore, the study attempts to pilot AWW at the high-school level to develop comparison and analysis material that links the following two aspects: technology and LCT elements. In fact, the author conducted a two-phase study: one phase focused on Kuwaiti schools, and the other, the current study, focused on Omani schools. Since the two studies are closely related, I decided to conduct a satisfaction survey on both Kuwaiti and Omani students to investigate their satisfaction after using the Academic Writing Wizard and to report the results of that survey in this study.
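The pairwise (paired) t-tests reported above compare each student's pre- and post-framework counts of cohesive devices. The statistic can be sketched in a few lines of pure Python; the counts below are synthetic placeholders, not the study's data, and a library routine such as scipy.stats.ttest_rel would return the same t together with a p-value.

```python
import math

def paired_t(before, after):
    """Paired t statistic for matched samples (df = n - 1).

    t = mean(d) / (sd(d) / sqrt(n)), where d are the pairwise differences.
    """
    assert len(before) == len(after)
    n = len(before)
    d = [a - b for a, b in zip(after, before)]
    mean_d = sum(d) / n
    # Sample variance of the differences (Bessel's correction, n - 1)
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Synthetic per-essay counts of transitional signals (NOT the study's data)
before = [3, 4, 5]
after = [5, 6, 9]
t = paired_t(before, after)  # positive t: counts rose after the framework
```

The sign of t depends only on the order of subtraction; the study reports negative values because differences were taken as before minus after.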
Design of the study
The crux of this study involves drawing a statistical comparison between pre-application essays and post-application essays in terms of the LCT. It builds on a previous study of Lexical Cohesive Trio elements that showed substantial efficacy in improving students' academic writing.
30 junior and senior students from the Gulf University for Science and Technology, GUST, in Kuwait were asked to write two essays for this experiment: one essay was not based on the suggested framework, while the other was. Students were drilled on effectively using the cohesive trio: reference, lexical repetition, and lexical phrases. Each student was requested to draw boxes in which they would write all the necessary details in advance. Within the first box, the students were requested to state the type of reference they would use in the first paragraph, whether anaphoric, cataphoric, etc., specifying each type, the pronoun used, and the referent. Then, in the next box, they had to specify the type of lexical repetition they chose; and, finally, the lexical phrases they found most appropriate to link with the previous two components were placed in the third box. After having performed this process for each paragraph, they would then embark upon writing the essay, which should at this point be a quite straightforward process. (Al Sharoufi, 2014).
The present study expands the application of the LCT framework, using a new Web-based application, Academic Writing Wizard. It also takes a quantitative approach that aims at detecting and measuring actual textual improvements in students' writing after using AWW, to demonstrate the benefit of this technological tool and the LCT-based approach in academic writing. Through the obtained data, a comparison can be made between pre-app essays and post-app essays, the latter being heavily influenced by AWW's concept of conscious writing and selection of LCT elements that improve the cohesiveness of the text.
Instruments
The study relies on three main instruments. The first is AWW itself, a Web-based writing application that allows conscious production of texts with the help of ready-made drop-down lists of referential elements, transitional signals, lexical repetitions, and lexical phrases. The second is a survey measuring students' satisfaction with the application; it focuses on rating the application in terms of capability, simplicity, and convenience on a scale from 1 to 5 points. Finally, the collected data are analyzed through a multivariate analysis of variance (MANOVA) to test the statistical significance of factors such as grade level (11th and 12th) and period of writing the text (pre-application and post-application).
Participants
The researcher visited three high schools in Oman with class grades 11 and 12: Musa bin Naseer High School (grades 11-12), Al-Khoud High School for boys (grades 10-12), and Al-Hassan bin Hashim School. It is important to mention in this respect that the Ministry of Education in Oman selected the above schools; the researcher had no role in that selection process. It is also important to mention that the 2-week period spent training teachers and students in Oman was granted by the Omani Ministry of Education, taking into account the time constraints of both teachers and students in all the selected schools. The administrative bodies of these schools selected a group of students from each class grade to participate in this pilot study: 32 11th graders and 39 12th graders. As for the selected students' language proficiency, the selected teachers agreed that their students' proficiency level falls between pre-intermediate and intermediate. The selection process was done entirely by the schools' administrations; the researcher had no role in this process, nor any role in selecting the teachers of those classes. Regarding the selected teachers, I was told that all of them hold a BA in English and have over 5 years of experience teaching English at public schools in Oman.
As for the survey, which was mainly conducted to answer research questions 1 and 2, the sample comprised 73 respondents who answered 12 questions exploring the effectiveness of employing AWW. The respondents were selected and divided based on the number of written works or assignments required from them per semester: 1st group: up to five written assignments, 2nd group: more than six written assignments.
Data collection
Over the course of 2 weeks, the researcher trained the teachers of those grade classes in the systematic application of AWW and trained students on the effective utilization of AWW. The researcher adhered to the following steps to effectively pilot AWW at the selected Omani schools.
Week 1

1. Asking the selected students to write a five-paragraph essay without using AWW was done on day 1 of the study (1 day).
2. Meeting with the English language instructors in the learning resources hall, introducing AWW, and explaining the necessary steps to use it from both the teacher's and the student's perspective was the core of step 1 of this pilot study. The researcher further informed teachers on how to assess students' written assignments and provide feedback. All of this was achieved using in-house computers with Internet connectivity (2 days).
3. The next step was meeting with the students of two classes from the 11th and the 12th grades (students from the English language elective course were chosen), explaining how to use the program through a data show presentation, and receiving questions and enquiries from students (2 days).

Week 2

4. The third step was based on piloting the program, whereby students wrote five-paragraph essays on a selected topic using AWW (2 days).
5. Assigned teachers were asked to correct some of these essays using AWW (1 day).
6. Giving feedback and gathering opinions on the program was the final step (2 days).

(A detailed and documented report of my visit to the above-mentioned schools is attached to this article.) Referring to the initial phase as the pre-application phase, I asked the students to write a five-paragraph essay on topics chosen by their teachers. Then, I trained them on employing AWW and subsequently asked them to write the same essay using AWW; this second phase is referred to as the post-application phase. By the end of the AWW trial, I was able to collect 71 pre-app and 71 post-app essays for analysis.
Data analysis
I analyzed the collected essays by performing a multivariate analysis of variance (MANOVA) on the pre-app and post-app phases. In both phases, teachers were asked to grade their students' essays to maintain both objectivity and effectiveness; furthermore, they were asked to grade their students' essays using AWW. My aim was to test cohesion in accordance with my devised cohesive framework, the LCT. Even though the students were asked to simply write a five-paragraph essay without selecting LCT elements in the pre-app phase, AWW is set to automatically detect elements of the LCT and produce cohesion percentages accordingly. Such percentages were entirely based on the cohesive elements suggested by my LCT framework. The teachers then graded their students' post-app essays, to which the learners had added their own selected LCT elements. The final scores, obtained by adding the score generated by AWW to that of the teachers, were then used for the statistical analysis, whereas the pre-app and post-app essays themselves were used for a textual analysis that focuses on the students' selection of LCT elements from AWW.
Results of the statistical analysis
A total of 71 respondents took part in the investigation. All were senior-grade students (class grades 11 and 12). Comparisons of the two groups with respect to the quantitative and scoring scales were performed on the basis of the nonparametric Mann-Whitney criterion. An analysis of the dynamics of the indexes was conducted on the basis of the nonparametric Wilcoxon criterion. A multifactorial dispersive analysis was performed to study the influence of the factors in class. MANOVA was also conducted to study the influence of two factors simultaneously: the class and the time period. The level of statistical significance was fixed at 0.05. The P-values are presented in the report to within a hundred-thousandth. Table 1 presents the results of a statistical analysis of the differences between students' grades. The comparison focuses on two groups (senior grades), 11th grade and 12th grade. The test was performed using the Mann-Whitney criterion. The results did not reveal statistically significant differences for any of the indexes. The total of the Score and the Teacher's Score represents the Final Grade. The average assessment of the index Final Grade was 2% higher in grade 12 than in grade 11 in both the pre-app and the post-app periods. Table 2 and Fig. 1 present descriptive statistics (M is the average, and S is the standard deviation) and the relative increment from the pre-app period to the post-app period in each grade. The results indicate that under all indexes and classes, there were statistically significant differences between the post-app and pre-app periods, indicating a difference between the mean values of both. When the index Final Grade is compared, the average score was 46.3 ± 5.8% in the 11th grade in the pre-app period, which increased by 13% to 59.3 ± 5.5% (P < 0.0001) during the post-app period.
In the 12th grade, the average score of the same index was 48.1 ± 8.4% in the pre-app period, which increased by approximately 13% to 61.0 ± 7.5% (P < 0.0001) during the post-app period. Figure 1 presents the dynamics of the scores pertaining to students and teachers in the two grades. In both grades, the same increase of 10% was observed in the students' score and of 3% in the teachers' score.
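The Mann-Whitney criterion used for the grade comparisons reduces to computing the U statistic from the ranks of the pooled samples. Below is a minimal sketch with synthetic final grades (not the study's data); ties are handled with midranks, as in standard implementations such as scipy.stats.mannwhitneyu.

```python
def mann_whitney_u(x, y):
    """U statistic for two independent samples, with midranks for ties."""
    combined = sorted(x + y)
    # Assign each distinct value the average of its 1-based rank positions
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    r1 = sum(ranks[v] for v in x)             # rank sum of the first sample
    u1 = r1 - len(x) * (len(x) + 1) / 2
    u2 = len(x) * len(y) - u1
    return min(u1, u2)                        # conventionally reported U

# Synthetic final grades for two groups (NOT the study's data)
grade11 = [44, 46, 47, 50]
grade12 = [45, 48, 52, 55]
u = mann_whitney_u(grade11, grade12)
```

The p-value then follows from the exact U distribution (small samples) or a normal approximation; only the statistic itself is sketched here.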
MANOVA
In Table 3 and Figs. 2, 3, and 4, the results of the multifactorial dispersive analysis MANOVA are presented for each index: Score, Teacher's Score, and Final Grade. The P-value indicates the statistical significance of the relevant factors: Grades, Period, and the interaction Period*Grades. As indicated in Table 3, a statistically significant Period factor (P < 0.0001) was established for each grade. Given that the factor Period*Grades was not statistically significant for any of the scores (P > 0.05), the dynamics in each grade were the same. Moreover, statistically significant differences between grades were not observed for any period.
In Figs. 2, 3, and 4, a statistically significant positive result is observed with the same dynamics by grade for each score. In Fig. 2, the dynamics of the index Score is clearly visible: in the pre-app period, the values of the 11th and 12th grades were almost the same and equal to 24%. The values of the index Score in all classes were higher in the post-app period, with an increment from the pre-app period to the post-app period of 10% in both the 11th and 12th grades. Thus, the index Score was influenced by the Period factor. Figure 3 presents the dynamics of the index Teacher's Score. The values of this index in all the grades were higher in the post-app period, with an increment from the pre-app period to the post-app period of 3% in both grades. Thus, the index Teacher's Score was influenced by the Period factor. Figure 4 presents the dynamics of the index Final Grade. The values of this index in all the grades were higher in the post-app period, with an increment from the pre-app period to the post-app period of 13% in both grades.
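The analysis above separates a Period effect, a Grades effect, and their interaction. For a balanced single-response design, that separation rests on a sums-of-squares decomposition, which can be sketched as below; the cell values are synthetic stand-ins, and the study's own MANOVA over several indexes is richer than this single-response illustration.

```python
def two_way_anova_ss(cells):
    """Sums of squares for a balanced two-factor design.

    cells[i][j] is the list of replicate observations for factor-A level i
    and factor-B level j; every cell must hold the same number of values.
    Returns (ss_a, ss_b, ss_ab, ss_error).
    """
    a, b = len(cells), len(cells[0])
    n = len(cells[0][0])                        # replicates per cell
    cell_mean = [[sum(c) / n for c in row] for row in cells]
    grand = sum(sum(row) for row in cell_mean) / (a * b)
    mean_a = [sum(row) / b for row in cell_mean]                 # row means
    mean_b = [sum(cell_mean[i][j] for i in range(a)) / a for j in range(b)]

    ss_a = b * n * sum((m - grand) ** 2 for m in mean_a)
    ss_b = a * n * sum((m - grand) ** 2 for m in mean_b)
    ss_ab = n * sum((cell_mean[i][j] - mean_a[i] - mean_b[j] + grand) ** 2
                    for i in range(a) for j in range(b))
    ss_err = sum((y - cell_mean[i][j]) ** 2
                 for i in range(a) for j in range(b) for y in cells[i][j])
    return ss_a, ss_b, ss_ab, ss_err

# Synthetic scores: rows = grade (11, 12), columns = period (pre, post)
cells = [[[1, 3], [5, 7]],
         [[2, 4], [6, 8]]]
ss_grade, ss_period, ss_inter, ss_err = two_way_anova_ss(cells)
f_period = (ss_period / 1) / (ss_err / 4)  # df_period = 1, df_error = 4
```

Dividing each sum of squares by its degrees of freedom yields the mean squares from which the F ratios, and hence the P-values for Period, Grades, and Period*Grades, are obtained.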
Based on the results of the statistical analysis, the following conclusions can be drawn:

1. The values of all indexes, including Teacher's Score, were higher in all grades in the post-app period.
2. There was a statistically significant positive increase across all indexes between the pre-app and post-app periods in each grade.
3. The increment from the pre-app period to the post-app period in the 11th and 12th grades was 3% for the index Teacher's Score; thus, this index was influenced by the Period factor.
4. The dynamics of the index Score were clearly visible: in the pre-app period, the values of the 11th and 12th grades were almost the same and equal to 24%, and the values of the index Score in all the classes were higher in the post-app period.
5. Based on the results of the multifactorial dispersive analysis, the indexes Score, Teacher's Score, and Final Grade were influenced by the Period factor only.
In a final report, the Educational Supervisor at the Omani Ministry of Education, Mr. Ahsan Ibrahim Awjanah, stated that AWW helped students improve their writing skills over a short period of time. He even recommended using the application as an invaluable educational tool, asserting that "No doubt, the program is practical and very useful, and this was quite clear from the instructors' positive feedback and the apparent enthusiasm of the students while using it during the visits to these three schools. This program may be one of the tools used in the process of continuous educational refinement in high schools. The next step, however, would be setting the program to limit the number of writing compositions ranging from one paragraph to four paragraphs, and setting a different scale for grade distribution from the current one so that the program becomes more fitting for the high-school level. Thus, the success of the program remains connected to improving Internet connections and providing a sufficient number of computers for the students in one class, noting that the student can work through his/her personal computer in the availability of a network and a computer" (see Appendix 1).
Textual analysis
The following section compares two essays chosen from the collected data to demonstrate why AWW is an important asset for teaching academic writing. To draw an effective comparison between the pre-phase and post-phase periods, I analyzed the pre-app essay first and then the post-app essay.
Example 1
Step 1 - Pre-framework essay
Time management Sports is an area that almost every teenager is interested in. However, when it gets practiced through the time-management skill, one will guarantee the satisfaction of both sports and study needs.
Allocating time of the day to do sport either on your own or in a sport team, should energize you to study better, in my opinion. The reason behind this is that when you go to do sports in the allocated time, this allows your brain to have less stress and gives your body the chance to activate and exercise.
This controlled pattern of balancing the brain and body needs will result in better study results and healthy body.
Step 2 -LCT framework elements added to the pre-app essay
In this phase, the student rewrote the essay based on his/her choice of LCT elements (see Appendix 1).
Time management
The final year at school is significantly important to students. Along with this importance comes the necessity of time management. This paper attempts to show that this skill is really crucial since it allows students to enjoy doing all things they want while being effective and productive.
Sport is an area that almost every teenager is interested in. To date, there has been little agreement on what is best to pursue, sports or studies. However, when it gets practiced through the time-management skill, one will guarantee the satisfaction of both sports and study needs.
Allocating time of the day to do sports either alone or in a sport team should energize students to study better, in my opinion. A possible explanation for this might be that when they go to do sports in the allocated time, basically this allows their brains to have less stress and gives their bodies the chance to activate and exercise.
A large and growing body of literature has investigated the importance of balancing studies and sports. Research showed that this controlled pattern of balancing the brain and body needs will result in better study results and healthy body. Therefore, people should follow the right pattern to ensure a good health for their body and mind.
When comparing the two essays, it is clear that the second adopts a more analytical tone than the first pre-framework essay. The second essay also employs a better academic style given the clear research results and lexical phrases used to support the main argument. Furthermore, there is a cohesive internal connection between paragraphs, not only through lexical phrases but also through simple transitional signals used evenly throughout the essay for a better flow of ideas and perspectives.
Example 2
Step 1 - Pre-framework essay

Technology

This century featured more technology compared with the last century. Lots of people say that last century's life is better due to direct connection with people around them all the time. However, I think technology and computer made life easier and faster. Therefore, in this essay I am going to explain the positive impacts of WhatsApp on families.
WhatsApp is a common application which you can find in all the phones. It connects us with our families from different locations and time. We can know more about their daily routine by sending us pictures, videos and recording their voices.
Also, WhatsApp app is a way to invite people to an event that you were planning or meetings. A research results showed that most of the parents preferred that their children have WhatsApp because it makes them closer and they have knowledge about their children even if they spend most of their time out.
In addition, WhatsApp is good way for shy people to apologise. For example, a daughter had a conflict with her mother, she can send a message on WhatsApp to apologise and explain her feelings comfortably. Furthermore, family members can discuss about any subject and exchange information and knowledge.
In conclusion, I believe that technology has two sides, a positive side and negative one. However, it depends on the person who's using it and his awareness.
Step 2 -LCT framework elements added to the pre-app essay
The student chose the following LCT elements to rewrite the essay (see Appendix 2).

Also, WhatsApp is a way to invite people to an event that you were planning or meetings. Additionally, the most interesting finding was that most of the parents preferred that their children have WhatsApp because it makes them closer and they have knowledge about their children even if they spend most of their time out.
Another important finding was that WhatsApp can help solve problems and discuss different issues. For example, it is good way for shy people to apologise. For example, a daughter had a conflict with her mother, she can send a message on WhatsApp to apologise and explain her feelings comfortably. Furthermore, family members can discuss about any subject and exchange information and knowledge.
In this essay, I have argued that technology has two sides, a positive side and negative one. However, it depends on the person who's using it and his awareness.
In comparing the two essays, it is evident that the employment of the LCT enhances the student's academic writing by providing textual links or blocks that create a cohesively structured essay. For example, in the second paragraph of the post-framework essay, the lexical phrase "In recent years, there has been an increasing interest in" is a perfect introductory sentence for the paragraph and the body of the essay. The employment of lexical phrases and transitional signals helps to organize and present the ideas in a clearer and more coherent way than in the pre-framework essay. Another example in the same paragraph is the employment of the term "basically," which indicates an addition to the previous point and explains the main argument. Following a similar pattern for the rest of the post-app essay, the student's choice of LCT elements improves the lexical cohesion of the essay and ensures a better evaluation (see Appendix 2).
Kuwaiti and Omani students' satisfaction survey
In order to answer research questions 1 and 2, a satisfaction survey was conducted. Because the objective of the first phase of this study was to introduce a technology-based writing curriculum to schools in the Gulf region, I had previously conducted a study in a Kuwaiti high school for girls (unpublished paper), in which I applied the same methodology to study the efficacy of applying AWW in the English writing curriculum in Kuwait. It should be noted that the Kuwaiti group of respondents was not included in the Omani study described above. For the current research, I conducted a survey to gauge both Kuwaiti and Omani students' satisfaction with AWW as the piloted Web-based application. The sample comprised 73 respondents who answered 12 questions exploring the effectiveness of employing AWW. All binary answers were summed, which formed a new variable, "Rating," scored from 1 to 5 points.
When the respondents were compared, the following groups were identified: 1. Country of residence of the respondent: Kuwait (KW) -47 students and Oman (OM) -26 students 2. Age: the respondents were divided into two groups: 34 people up to 18 years of age and 15 people over 19 years of age 3. Number of papers written: the respondents were divided into two groups: 21 people with up to five written works per semester and 16 people with more than six papers.
Materials and methods
The comparisons of the two groups with respect to the quantitative scale were conducted based on the nonparametric Mann-Whitney criterion. After checking the collected data, I found that the data did not meet the requirements of the parametric test (homogeneity of variance and normal distribution). Instead of an independent t-test, I therefore decided to use its nonparametric counterpart, the Mann-Whitney test. The statistical significance of the different values for binary and nominal variables was determined using a chi-square test. To describe the quantitative variables, the mean value and standard deviation were calculated in the "M ± S" format. The level of statistical significance was fixed at 0.05. Statistical processing was performed using the statistical package Statistica 10. Table 4 presents the results of a statistical analysis of the differences between Kuwait and Oman. The test was conducted using the Mann-Whitney criterion. There were no statistically significant differences between the respondents in Kuwait and Oman for any of the variables. The average score for the questions "Have you rated the capability of AWW?" and "Rate the simplicity and convenience of AWW" was higher (by 0.3-0.6) for the respondents in Oman than for the respondents in Kuwait. The value of the Rating in Kuwait was slightly higher (by 0.4) than that in Oman, but this was not statistically significant (P = 0.5564). Table 5 and Fig. 5 present the results of a statistical analysis comparing the binary variables for the countries. The test of differences was conducted using the chi-square test. The results showed that statistically significant differences were observed for the question "Have you used the internet for help?" (P = 0.0219). Specifically, 53% of the respondents in Oman and 21% of the respondents in Kuwait used the Internet for help.
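The chi-square test applied to the binary answers works on a 2x2 contingency table of observed counts. A minimal sketch follows; only the percentages reported in the text come from the study, while the raw counts below are hypothetical and used purely for illustration.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (no continuity correction). Compare against 3.84, the 0.05
    critical value for 1 degree of freedom."""
    row = [sum(r) for r in table]
    col = [table[0][j] + table[1][j] for j in range(2)]
    total = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total   # count under independence
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical yes/no counts for "Have you used the internet for help?"
# rows: Oman, Kuwait; columns: yes, no (illustrative numbers only)
table = [[14, 12],
         [10, 37]]
stat = chi_square_2x2(table)
significant = stat > 3.84  # reject independence at the 0.05 level
```

With small expected counts, a continuity correction or Fisher's exact test is commonly preferred; the sketch omits both for clarity.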
Comparisons of variables by country
The frequency of use of the Internet among respondents in Oman was 32% higher than that for respondents in Kuwait (P = 0.0219). The results of the comparisons of nominal variables by country are given in Table 6. In terms of age groups, there were significant differences between the respondents (P = 0.002): all 15 respondents in Oman were under 18 years old, while in Kuwait 19 respondents were under 18 years old and 15 were above 18 years old. There were no significant differences between the two countries in terms of the number of written assignments per semester (P = 0.9747). Thus, the respondents of Kuwait and Oman differ only in terms of age groups and use of the Internet for help; the two countries can therefore be considered comparable for further statistical analysis. Table 7 presents the results of a statistical analysis of the differences between respondents who used AWW and those who did not. The test was conducted using the Mann-Whitney criterion. The average scores for the questions "Have you rated the capability of AWW?" and "Rate the simplicity and convenience of AWW" were the same for respondents who did and did not use AWW, at 3.3 and 3.0, respectively. The "Rating" value for the students who used AWW (2.4 ± 2.0) was higher than for those who did not use AWW (1.6 ± 1.9). Table 8 and Fig. 6 present the results of a statistical analysis of the differences undertaken using the chi-square test. The results indicated that there were statistically significant differences for the question "Have you used the internet for help?" (P = 0.0006); specifically, 73% of the respondents who used AWW and 18% of the respondents who did not use AWW used the Internet for help. Figures 6 and 7 compare the answers of respondents who used AWW with those who did not. As shown in Figs. 6 and 7, the frequency of positive responses among respondents who used AWW was 7.5 to 27% higher than among those who did not use AWW. However, this was not statistically significant (P > 0.05); the initial sample would need to be roughly doubled in order to draw statistical conclusions. Table 9 contains the comparison results for the nominal variables for those who used AWW and those who did not. There were no statistically significant differences in age group or the number of written assignments per semester. Table 10 presents the results of a statistical analysis of the differences between respondents under the age of 18 years and those over 19 years. The test was performed using the Mann-Whitney criterion. There were no statistically significant differences for any of the variables. The average score for the respondents under the age of 18 years was 0.2-0.4 points higher than that of the respondents aged 19 years and above for the questions "Have you rated the capability of AWW?" and "Rate the simplicity and convenience of AWW."
Comparison of variables by age group
As indicated in Table 11, there were no statistically significant differences in the respondents' answers by age group. Table 12 presents the results of a statistical analysis of the differences between respondents who completed up to five written assignments per semester and those who completed six or more, with each group further divided into those who used AWW and those who did not; the test was undertaken using the Mann-Whitney criterion. According to Table 13 and Figs. 8 and 9, the frequency of positive responses among students who used AWW was much higher (by 25% or more) in the group with up to five written assignments per semester than among students who wrote six or more. To ensure greater reliability of the statistical conclusions, the initial sample should be doubled in size.
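The remark about doubling the sample can be checked against the classical sample-size formula for comparing two proportions. The sketch below uses scipy only; the 55% vs. 35% gap is hypothetical, chosen to be of the same order as the non-significant gaps reported above, and the function name is illustrative.

```python
# Back-of-envelope sample-size calculation for a two-proportion z-test.
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Required n per group for a two-sided two-proportion z-test."""
    z_a = norm.ppf(1 - alpha / 2)   # critical value for the alpha level
    z_b = norm.ppf(power)           # quantile corresponding to the power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * var / (p1 - p2) ** 2

# A ~20-point gap in positive responses (e.g. 55% vs 35%) needs roughly
# 90+ respondents per group, far more than the 15-34 collected here:
print(round(n_per_group(0.55, 0.35)))
```

This illustrates why the observed 7.5-27% gaps could not reach significance with the current sample and why roughly doubling it (or more) is advised.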
Conclusion regarding students' satisfaction survey
Based on the results of the statistical analysis, the following primary conclusions can be drawn: 1. Kuwaiti (KW) and Omani (OM) respondents differ statistically only in terms of "Age" and "Used the internet for help." With regard to the frequency of responses, the two countries are broadly uniform. 2. Students who used AWW responded more positively to employing AWW than those who did not. 3. There were no statistical differences in answers by age group. 4. In general, the most positive feedback about the AWW service came from newcomers who had already worked with AWW.
To answer the main research questions posed in the introduction: 1. Can Academic Writing Wizard (AWW) improve high-school students' academic writing skills and prepare them to cope with academic writing at the university level? 2. Can Academic Writing Wizard be an acceptable tool for high-school students? 3. Will high-school students use AWW effectively in their English classes?

Having analyzed all the results statistically, one can answer question 1 affirmatively: the post-application results improved, which means that AWW helped high-school students improve their academic writing skills. The answer to questions 2 and 3 is also yes: the satisfaction survey shows that most respondents found AWW a very useful and effective writing tool that can help them excel in their writing classes.
Discussion
The statistical analyses and comparisons of the pre-application and post-application phases demonstrate the benefits of employing AWW as a tool to ease the transition from school to university. The results revealed that AWW helped the students improve the cohesion of their essays without hindering their flow of ideas. Al-Issa highlights the importance of integrating technology into English classrooms: "In addition to the important role of teachers in language development, Nunan et al. discuss the importance of education technology, as a means to provide 'naturalistic samples' of contextualized language, and time allocated to English on the national curriculum" (Al-Issa, 2005). Al-Issa shows here that using technology can provide more context to English learners, and this is exactly what AWW provides: an entire environment that helps students contextualize their written texts. AWW has not only improved the texts but also increased students' confidence and teachers' satisfaction with the results. The senior English teacher at the Kuwaiti high school for girls repeatedly encouraged the integration of AWW into the English curriculum, emphasizing the importance of writing. AWW offers clear steps and provides an immediate assessment that increases students' awareness of the elements of academic writing, helping them to correct their mistakes and improve areas of weakness. As such, AWW provides a simple yet effective and unlimited environment for practicing academic writing.
To return to the applications of automatic writing evaluation mentioned earlier in the study, I have shown in this paper that Academic Writing Wizard is a unique application that mainly targets lexical cohesion. IWrite, iTest, and the Intelligent Essay Assessor are all applications that target surface textual features automatically, without giving human assessors any role in providing feedback. AWW, however, relies on both automatic evaluation, based on lexical cohesion, and human evaluation, based on instructors' feedback. In so doing, AWW provides students with a rich environment that helps them learn academic writing with ease and effectiveness. By manually selecting LCT elements from drop-down lists, students can explore a variety of options, which furnishes them with an abundance of choices for building their sentences, mastering tone and style in the process, as I have shown in my analysis.
There is an increasing number of automated writing evaluation systems that focus on a variety of textual features and provide scores for them. IWrite focuses on grammatical textual features, relevance of writing, and language usage (Liang & Deng, 2020). Another application used for automatic evaluation is iTest, which checks reading, listening, writing, and translation skills but does not train students in academic writing in particular. The Intelligent Essay Assessor mainly traces specific textual patterns, learns the process of textualizing them, and produces similar patterns, on the basis of which it provides approximate scores (Landauer et al., 2003). E-rater is another application that targets word usage and grammatical and discoursal features (Burstein et al., 2004).
Implications for future research and limitations of this study
One of the main implications of this study for future research would be to enhance the LCT framework. AWW currently lacks a detailed list of academic collocations, which is an important limitation of the LCT framework; including a rich list of collocations would therefore boost its effectiveness. Another element that might be added to the framework is a detailed list of academic words. With such additions, the LCT framework could be made suitable for various genres, and the Academic Writing Wizard itself could be used as an effective tool for teaching specific generic features. AWW could thus be further calibrated for use in English for specific purposes courses too. Experimenting with AWW will open new paths for investigating academic writing and will help both instructors and students overcome the daunting challenges caused by obsolete and sterile teaching methods.

the patronage of His Highness the Amir of Kuwait Sheikh Sabah Al-Ahmad Al-Jaber Al-Sabah. This exhibition is considered one of the largest specialized invention fairs in the Middle East and the second in importance internationally. Dr. Al-Sharoufi was honored for inventing and developing a computer program that helps evaluate and improve students' academic writing skills in English (Academic Writing Wizard). In 2015, Dr. Al-Sharoufi was awarded a financial reward for submitting a research paper entitled "Towards a Unified Approach to the English Language Writing Composition Subject Built on Technology in the Arab Gulf Countries: the State of Kuwait and Sultanate of Oman as Models." Academic Writing Wizard is a new Web-based application available on the Internet that helps teachers and students alike achieve their academic goals easily and effectively. The program helps students outline, write, and edit academic essays using a three-step framework, which is a new idea in English academic writing.
It also provides an exceptional environment for teachers to assign topics, receive students' essays, assess them, and give appropriate feedback on each paragraph of the essay or written composition. The program assesses the student's work by giving a score out of 50% of the total grade, the teacher then adds a score out of 50%, and finally the program displays the total grade out of 100%.
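The 50/50 grading scheme can be sketched as a tiny function. The function name and the clamping of each half to its 50-point maximum are assumptions, since the text only states that the two halves are summed to a total out of 100%.

```python
# Sketch of the 50/50 grading split described above (names are illustrative).
def total_grade(program_score, teacher_score):
    """Combine the automatic and teacher scores, each out of 50 points."""
    program_score = min(max(program_score, 0), 50)  # clamp to [0, 50]
    teacher_score = min(max(teacher_score, 0), 50)  # clamp to [0, 50]
    return program_score + teacher_score

print(total_grade(38, 42))  # -> 80 (out of 100)
```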
First: Permission to apply the study in the schools of Muscat Governorate Topic: Facilitating the task of the researcher, Dr. Hussein Al-Sharoufi.
We would like to inform you that Dr. Hussein Al-Sharoufi, Associate Professor at GUST in Kuwait, has submitted a request to carry out a research study for the use of (a computer program he has developed to improve students' skills in English Academic Writing). Through this study, he aspires to devise a unified approach for teaching English at secondary level in the Gulf countries (GCC), especially in the State of Kuwait and the Sultanate of Oman.
In order to apply this study, he will need to train two classes from the 11th and 12th grades in two different schools in Muscat Governorate, and to train the teachers of these classes in how to use the program effectively. The researcher wishes to conduct his study in the governorate during the period from 12 February to 25 February 2016.
We therefore hope for your generosity in facilitating the researcher's application of the study tools according to the procedures. Kindly: 1- Nominate two schools from the governorate for the study application. 2- Nominate a representative (supervisor of English language) to represent the Ministry and accompany the researcher during his visit and application. We kindly request that you provide us with the name of the candidate and his contact numbers no later than 2016/02/14.

4. Gathering opinions from the teachers and students on the program.
Third: The students' opinions on the program
A) The advantages: • This is the first time such an educational program has been piloted in school.
• The program is considered to be extremely useful and excellent.
• It is easy to use the program after following the necessary steps, and they are all clearly explained. • The program helps correct common mistakes and improve writing composition skills. • The program helps in selecting the appropriate vocabulary to rephrase sentences without having to check other websites. • The program helps in learning new English terminology and developing the student's linguistic knowledge in preparation for university. • The program helps in learning and specifically improving writing skills through applying punctuation, spelling, grammar, and transitions correctly. • The program helps the student recognize his/her weaknesses and guides them to the correct method and information. • The program helps the student recognize his/her spelling mistakes so s/he can avoid them the next time. • The program helps the student acquire self-confidence, because it gives them the chance to recognize their mistakes and then rephrase sentences. • The diligent student finds in this program an easy and immediate tool to evaluate their own writing composition in order to improve and refine it without having to wait for their turn to meet the teacher.
B) The challenges: • Students need training to familiarize themselves with how to use the program.
• The program includes substeps that must be followed, which take time for the student to absorb. • The program evaluates the writing assignment very strictly, such that the score given can be low even if the composition is very good. • The Internet is not always available, and this hinders the student from working on his/her assignment at home through the program.
Fourthly: The teachers' opinions on the program
A) Advantages: • The program can be used as an effective educational tool as it provides the students with immediate corrections of their mistakes.
• The program includes helpful tools such as dictionaries, and tools that help link ideas together, producing a better cohesive text which helps the student express themselves well. • The program helps prepare students for academic writing which they will need at university level. • The program has features that help enrich the student's writing skills such as refining and linking sentences and paragraphs, and offering diverse vocabulary and adjectives necessary for enriching the composition. • The program helps students organize their writing and use different terminologies, verbs and transitions, etc. • The program facilitates the use of references on the Internet and helps students rephrase sentences through the alternative options suggested by the program. • The program helps the teacher assess students' written assignments and reduces pressure regarding the score awarded.
B) Challenges:
• The program requires the use of the Internet, and it is not possible to complete the assignment without a strong Internet connection. • Weak networks hinder the work of the program in some schools. • Students might lose the enjoyment of using a pencil and paper for writing if they are limited to using the program alone. • The unavailability of a sufficient number of computers for the students in one class in some schools is considered an obstacle to utilizing the program. • Some students do not own personal computers nor do they have an Internet connection at their homes. • The teacher's burden increases as they need to provide a good follow-up and effective feedback. • The program is extremely good in terms of the learning aspect, but may not be appropriate in the assessment and examination aspects.
Fifth: Conclusion

Without any doubt, the program is practical and extremely useful as an application; this was quite clear from the instructors' positive feedback and the apparent enthusiasm of the students using it during the visits to these three schools. This program may become one of the tools used in the process of continuous educational refinement in high schools. The next step, however, would be to configure the program for compositions ranging from one to four paragraphs and to set a different grade-distribution scale, so that the program is better suited to the high-school level. The success of the program thus remains dependent on improving Internet connections and providing a sufficient number of computers for the students in each class, noting that students can also work on their personal computers where a computer and network connection are available.
Finally, we would like to extend our thanks to everyone who helped, in whatever way possible, to make the researcher's visit a success in the service of the higher
Comparison of Maximal Aerobic Speeds of Team-sport Athletes and Investigation of Optimal Training Loads
The aims of this study were to investigate the maximal aerobic speed (MAS) of participants in team sports in terms of certain variables and to determine the relationships among team sports. A quantitative, screening-based research approach was used. Forty-four athletes voluntarily participated in the study; their mean age was 17.20 ± 1.0 years, mean height 178.6 ± 6.6 cm, and mean weight 73.1 ± 11.2 kg. The 20-meter shuttle run test (20MSRT) was used as the data collection tool. Descriptive statistics, one-way analysis of variance (ANOVA), and Pearson product-moment correlation were applied, and the sources of significant differences were examined with the least significant difference (LSD) test. There was a positive relationship between age and both VO2max and speed scores (p < 0.05). In addition, VO2max, distance, and speed differed significantly according to the type of sport (p < 0.05). When the distance and speed scores of the athletes were examined, the mean scores of football players were higher than those of basketball and handball players. The heart rate and MAS scores of the participants did not differ significantly according to the type of sport played. This study should be of value to strength and conditioning coaches, trainers, and physiotherapists in designing training programs for athletes of various sports.
INTRODUCTION
Many studies have been carried out to identify the characteristic qualities of competitors in different sports. Thanks to developing technology, sports science renews itself day by day and seeks training techniques that increase success. In this context, contemporary training methods consider the effects of high-intensity aerobic running speed training on physiological and performance values in team sports, and research is conducted on individual training applications for different types of sports (J. Baker et al., 2003). MAS is defined in the literature as the speed of movement produced by an athlete at maximal aerobic power, or 100% of VO2max (Bosquet et al., 2002; Rampinini et al., 2009). MAS is measured in km/h. Knowing the MAS, which is required to adjust the running speed that facilitates physiological development, is more important than knowing VO2max alone (D. Baker and Heaney, 2005). Being able to adjust athletes' running speeds according to the demands of their sport is thought to play an important role in their performance. These running speeds may differ between sports, between players' positions, and even between genders. MAS values are determined optimally according to the type of sport, and comparisons can be made accordingly; in this way, it becomes easier and more efficient to follow the physiological development of athletes (D. Baker and Heaney, 2005). This topic is attracting growing interest from sports scientists. Relatively few studies on MAS are available in the Turkish literature (D. Baker, 2011; D. Baker and Heaney, 2005; Bellenger et al., 2015; Berthoin et al., 1994; Mülazımoğlu, 2012), while studies conducted outside Turkey are somewhat more numerous (D. Baker, 2015; D. Baker and Heaney, 2005; Dellal et al., 2008; González-Badillo et al., 2015). This study will therefore contribute to the existing literature on the relationships between type of sport, athlete age, and maximal aerobic speed. The purpose of this research is to compare the maximal aerobic speed (MAS) of young athletes and to investigate optimal training loads.
Research Design
The aim of this study was to compare the MAS of young athletes participating in team sports and to determine optimal training loads. A quantitative research approach was used. As a methodological approach, the screening method was preferred in order to determine the most salient characteristics of the participants, such as their skills and attitudes (Büyüköztürk et al., 2017). The 20-meter shuttle run test was applied as the data collection technique by the researchers to the athletes forming the study group, after the necessary permissions had been obtained. The athletes participated voluntarily, and informed consent forms were obtained from them or their families.
Statistical Analysis
Before the analysis stage, the data were tested for normality using the Shapiro-Wilk W test and kurtosis and skewness values. Parametric techniques were used because the distributions were observed to be normal. The data were analysed with the SPSS 23 package program. Information about the research group and the research variables was evaluated using descriptive statistics. One-way analysis of variance (ANOVA) and Pearson product-moment correlation coefficients were used as data analysis techniques. The significance level was set at 0.05.
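A minimal sketch of this pipeline (normality check, then parametric tests) using scipy rather than SPSS is shown below; all group values are invented for illustration and do not reproduce the study's data.

```python
# Sketch of the analysis pipeline: normality check, ANOVA, correlation.
from scipy.stats import shapiro, f_oneway, pearsonr

# Hypothetical VO2max values (ml/kg/min) for three small groups
vo2max = {
    "football":   [52.1, 55.3, 49.8, 53.0, 56.2, 51.4],
    "basketball": [47.5, 49.0, 46.2, 48.8, 50.1, 45.9],
    "handball":   [43.0, 44.5, 42.1, 45.2, 41.8, 43.9],
}

# Step 1: Shapiro-Wilk normality check per group (p > 0.05 -> parametric OK)
for sport, values in vo2max.items():
    w_stat, p_norm = shapiro(values)
    print(f"{sport}: Shapiro-Wilk p = {p_norm:.3f}")

# Step 2: one-way ANOVA across the three sports
f_stat, p_anova = f_oneway(*vo2max.values())
print(f"ANOVA p = {p_anova:.4f}")

# Step 3: Pearson correlation between two continuous variables
ages = [16, 17, 17, 18, 18, 19]
r, p_corr = pearsonr(ages, vo2max["football"])
print(f"Pearson r = {r:.2f}")
```

A post-hoc test (the paper uses LSD, for which SPSS is typically used) would follow the ANOVA to locate which pairs of sports differ.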
20-Meter Shuttle Run Test
Within the scope of this research, the 20-meter shuttle run test was conducted on the synthetic field of the Trabzonspor Kadir Özcan Youth Development Centre. Active male athletes from football, basketball, and handball voluntarily participated in the study. Six running tracks, each consisting of a flat 20-meter area, were used for the test, with the starting and ending points marked with training cones. The command signals necessary for the test were played to the athletes from a computer via a sound system. Before starting, the athletes warmed up with low-intensity shuttle running exercises, and during the test they were encouraged to run at their maximum level and to complete the test at that level. "The running speed at the start of the test is 8.5 km/h and is gradually increased by 0.5 km/h per minute. The test ends after two consecutive faults or when athletes reach exhaustion" (Köklü et al., 2011).
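The speed progression of this protocol (8.5 km/h at the first level, increasing by 0.5 km/h per minute) can be expressed as a one-line function. The level numbering below is an assumption, chosen to be consistent with the worked example given later in the paper (13th shuttle level = 14.5 km/h).

```python
# Speed progression of the 20-m shuttle run protocol described above.
def shuttle_speed(level):
    """Running speed (km/h) at a given shuttle level (level 1 = 8.5 km/h)."""
    return 8.5 + 0.5 * (level - 1)

print(shuttle_speed(1))   # -> 8.5 (starting speed)
print(shuttle_speed(13))  # -> 14.5 (the example level used later)
```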
Heart Rate Measurement Equipment
A heart rate monitor (Polar RS 800, Finland), which can measure heart rate instantly, was used to determine the heart rate and save it to the computer.
Calculation of Maximal Aerobic Speed
There are many different formulas and test models for calculating MAS directly or indirectly (D. Baker, 2015). Baker and Heaney obtained normative aerobic fitness data for the MAS scores of athletes competing in field sports, using the following tests to determine MAS: laboratory tests, the Multistage Montreal Beep test, VAMEVAL, Yo-Yo IR1, Carminatti's test, the Multistage Shuttle Beep test, the Set Time Trial, the Set Distance Trial, and the 1200-m Shuttle. The test model applied in the current study is the Multistage Shuttle Beep (20-m shuttle run) test. The formula for calculating MAS from this test is: MAS (km/h) = latest speed (km/h) × 1.34 − 2.86. The result gives MAS in km/h; it should then be converted to m/s so that training running distances can be calculated more easily. For example, the MAS of an athlete whose test ends at the 13th shuttle level (14.5 km/h) is calculated as: MAS = 14.5 × 1.34 − 2.86 = 16.5 km/h × 1000/3600 = 4.6 m/s. There is also a general formula for calculating MAS from estimated VO2max (MAS = estimated VO2max / 3.5), which gives the result in kilometres per hour (Léger and Mercier, 1984).
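The two formulas quoted above can be implemented directly; the function names below are illustrative, not from the cited sources.

```python
# The two MAS formulas from the text, implemented directly.
def mas_from_shuttle(last_speed_kmh):
    """MAS (m/s) from the final 20-m shuttle speed: MAS = v * 1.34 - 2.86 (km/h)."""
    mas_kmh = last_speed_kmh * 1.34 - 2.86
    return mas_kmh * 1000 / 3600  # convert km/h -> m/s

def mas_from_vo2max(vo2max):
    """Approximate MAS (km/h) from estimated VO2max: MAS = VO2max / 3.5."""
    return vo2max / 3.5

# The worked example from the text: the test ends at level 13 (14.5 km/h)
print(round(mas_from_shuttle(14.5), 1))  # -> 4.6 (m/s)
```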
RESULTS
In this section, the appropriateness of the variables for normal distribution is examined through skewness and kurtosis values, and demographic information and descriptive values for the research variables are shown both overall and according to the type of sport played, followed by the test results. The arithmetic means of the test performance parameters (HR, VO2max, distance, speed, and MAS) of the football, basketball, and handball players are presented in Table 1. The relationships between these parameters and the MAS values of participants in different sports were tested with ANOVA; the results are given in Table 3. VO2max, distance, and speed were found to differ according to the type of sport. In the LSD test performed to locate the source of the difference, VO2max scores favoured football over basketball and handball, and basketball over handball. Among the distance values, the average of the football players was highest (M = 2176.25, SD = 487.83) compared with basketball (M = 1548.33, SD = 412.92) and handball (M = 1192.50, SD = 435.72), and the average distance of the basketball players was higher than that of the handball players. In terms of speed values, football players had the highest average among the three sports. There were no significant differences in the HR or MAS scores of the participants according to the type of sport. In the MAS scores, a significant difference was found between football and handball players in favour of football; while no significant difference appeared in the ANOVA test, the difference seen in the LSD test may be due to the greater sensitivity of the LSD test. According to these relationships, MAS scores increase as the age, VO2max, distance, and speed of the athletes increase. It can be said that the MAS score will increase as age increases, which can be described as a developmental process, in parallel with other studies (Baquet et al., 1999). However, no study investigating the relationship between the VO2max, distance, and speed variables and MAS scores was found. When the relationships of these variables are interpreted according to the formula, MAS is calculated as MAS = latest speed (km/h) × 1.34 − 2.86; the higher the speed variable, the higher the MAS value will indirectly be.
According to the protocol of the 20-meter shuttle run test, when the relationship of MAS with distance is interpreted, the distance covered must increase in order for participants to reach the next speed level.
According to the VO2max formulas used in the calculations in this study, a high VO2max value requires a high speed score (D. Baker, 2015). The relationship between the athletes' HR, VO2max, distance, speed, and MAS values and the type of sport was tested with ANOVA. This analysis showed that VO2max, distance, and speed differ according to the type of sport.
In the LSD test conducted to find the source of the difference, there was a significant difference in VO2max: the scores of the football players were highest in comparisons among football, basketball, and handball, while basketball scores were higher than those for handball. Studies of footballers report similar VO2max values (Crisp et al., 2013; Helgerud et al., 2001), as do studies of basketball players (Crisp et al., 2013). For handball players, lower VO2max values were calculated in a previous study (Zapartidis et al., 2011). On this basis, the training and conditioning levels of athletes can be anticipated. The average distance of the footballers was higher than that of the basketball and handball players, and the average distance of the basketball players was higher than that of the handball players. Previous studies report distances of 8619-10,335 m in football, an average of 5587 m in basketball, and 3627 m in handball (Crisp et al., 2013; Helgerud et al., 2001; McInnes et al., 1995; Oba and Okuda, 2008). The differences in distances between sports are thought to stem from structural differences between the games, that is, the dimensions of the playing areas and the playing times; comparison of the data in this study reveals the same pattern. Looking at the speed values, a difference was observed in favour of football when comparing all three sports, and in favour of basketball when comparing handball and basketball. Studies on football report performance values for the shuttle run test applied to football players (Gastin et al., 2017; Spinks et al., 2002; Vitale et al., 2018).
In one such study, the speed reached by young football players in the shuttle run test was 13.7 km/h, very close to the speed obtained in our study (13.8 km/h). Regarding basketball, previous studies report results of shuttle tests used to determine maximal oxygen consumption among basketball players.
One such study stated that basketball players ran an average of 2152 m in the 20-meter shuttle run test, which corresponds to a speed of approximately 14 km/h in the test protocol. Compared with the current research (12.45 km/h), that speed score is higher; such a difference may be due to athlete age, training, and conditioning levels. Finally, for handball, a previous study reported an average speed of 13.25 km/h in the 20-meter shuttle run test, whereas in this study the handball players averaged 11.65 km/h. This lower speed may again be due to athlete age, training, and conditioning levels (Paradisis et al., 2014; Suna et al., 2016). No significant changes were observed in the HR or MAS scores of the participants according to the type of sport. However, in the LSD test, a significant difference was observed in the HR scores between the football and handball players, in favour of handball. For basketball, a previous study reported the maximal HR of basketball players in the 20-meter shuttle run test as 198.46 beats/min (Moran et al., 2019); in the current study, it was 196.58 beats/min. Another study found the average heart rate during basketball competitions to be 169 ± 9 beats/min (McInnes et al., 1995).
When we examine studies about football players, found that maximal heart rate to be 196.92beats/min in a shuttle run test applied to young footballers (Nassis et al., 2010).Analysed HR of footballers and reported an average of 164 beats/min (Bangsbo et al., 1991).Danish players and different study reported that 171 beats/min for professional players (Ali and Farrally, 1991;Journal, 2014).As a result of the study, it was found to be 193.81beats/min.When we examine works on handball, reported that HR of 180 beats/min for handball players in 20-meter shuttle run test.
In the current study, it was found to be 199.87 beats/min. In addition, a study testing the physiological and physical capacities of elite male handball players found a maximal HR of 191 beats/min in the Yo-Yo test (Michalsik et al., 2015; Suna et al., 2016).
For the MAS scores of the athletes, a significant difference was found between football and handball players in favour of football. While the ANOVA test showed no significant difference, the significant difference in the LSD test may be due to the greater sensitivity of the LSD procedure. In the current study, the MAS of football players was 3.92 m/s, that of basketball players 3.84 m/s, and that of handball players 3.54 m/s. The literature on MAS is very limited; a value of 4.91 m/s was reported for Italian Serie A footballers with the Rampinini test (Rosenblatt, 2014).
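Since the literature reports shuttle-run speeds in km/h while MAS is given here in m/s, converting between the two units makes the comparisons above easier to check. The following Python sketch is illustrative only: the MAS values are the group means reported in this study, the function names are our own, and the conversion factor 3.6 is exact.

```python
def kmh_to_ms(speed_kmh: float) -> float:
    """Convert a running speed from km/h to m/s (1 km/h = 1000 m / 3600 s)."""
    return speed_kmh / 3.6

def ms_to_kmh(speed_ms: float) -> float:
    """Convert a running speed from m/s to km/h."""
    return speed_ms * 3.6

# MAS group means reported in the study (m/s), re-expressed in km/h
mas_ms = {"football": 3.92, "basketball": 3.84, "handball": 3.54}
for sport, v in mas_ms.items():
    print(f"{sport}: {v} m/s = {ms_to_kmh(v):.2f} km/h")
```

For example, the footballers' MAS of 3.92 m/s corresponds to roughly 14.1 km/h, directly comparable to the shuttle-run speeds cited above.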
CONCLUSIONS
1. It was found that as the age, VO2max, distance, and speed scores of the athletes increased, MAS also increased.
2. A positive correlation was found between age and the VO2max and speed scores.
3. A positive correlation was found between VO2max and distance and speed.
4. A negative relationship was found between BMI and VO2max and speed. It was found that VO2max, distance, and speed differed according to type of sport. In the LSD test conducted to find the source of this difference, VO2max was significantly higher for football than for basketball and handball, and higher for basketball than for handball. For the distance values of the athletes, the average of football players was higher than that of basketball and handball players; similarly, the average score of basketball players was higher than that of handball players.
5. When speed values were analysed by type of sport, a difference was found in favour of football compared to basketball and handball, and basketball scores were higher than handball scores.
6. No significant changes were observed in the HR and MAS scores of the participants according to type of sport. However, in the LSD test, a significant difference was found between football and handball in favour of football in the athletes' MAS scores.
MAS
10 minutes is preferred, the 1-2 set method can be used, and 2-4 minutes of rest should be given between sets. Accordingly, the grid training model for football, basketball, and handball players is presented visually in Figure 1 (D. Baker, 2011).
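Under the grid prescription described here (1:1 work/rest of 15 s : 15 s, long edge at 100% MAS, short edge at 70% MAS), the edge lengths follow directly from each group's MAS. The Python sketch below is an illustration of that arithmetic, not part of the study; the function name and default bout length of 15 s are our assumptions taken from the text.

```python
def grid_edge_lengths(mas_ms: float, work_s: float = 15.0,
                      long_pct: float = 1.00, short_pct: float = 0.70):
    """Distance (m) covered in one 15 s work bout along each edge of the grid:
    the long edge is run at 100% MAS, the short edge at 70% MAS."""
    return mas_ms * long_pct * work_s, mas_ms * short_pct * work_s

# Edge lengths implied by the MAS group means reported in this study
for sport, mas in {"football": 3.92, "basketball": 3.84, "handball": 3.54}.items():
    long_m, short_m = grid_edge_lengths(mas)
    print(f"{sport}: long edge ~{long_m:.0f} m, short edge ~{short_m:.0f} m")
```

A footballer with a MAS of 3.92 m/s would therefore cover roughly 59 m per 15 s bout on the long edge and roughly 41 m on the short edge.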
Figure 1. MAS program with the grid method for football, basketball, and handball players
Figure 2. MAS running distances with the Eurofit method for football, basketball, and handball players
Figure 3. MAS running distances with the Tabata method for football, basketball, and handball players
Table 1. Descriptive statistics for performance variables
Table 2. Relationships between height, weight, age, and BMI and HR, distance, speed, and MAS values
Table 3. HR, VO2max, distance, speed, and MAS values of athletes according to type of sport

DISCUSSION

In this research, the relationships between the athletes' height, weight, age, and BMI and their HR, VO2max, distance, speed, and MAS scores were examined with Pearson correlations. A positive correlation was found between age and the VO2max, speed, and MAS values.
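The Pearson coefficient used in these analyses can be computed directly from its definition. A minimal Python sketch follows; the VO2max and MAS values in it are made-up illustrative numbers, not the study's raw data.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # covariance numerator
    sx = sqrt(sum((a - mx) ** 2 for a in x))              # sqrt of SS_x
    sy = sqrt(sum((b - my) ** 2 for b in y))              # sqrt of SS_y
    return sxy / (sx * sy)

# Illustrative values only: VO2max (ml/kg/min) vs. MAS (m/s) in 5 athletes
vo2max = [48.2, 51.5, 53.0, 55.4, 58.1]
mas = [3.4, 3.6, 3.7, 3.9, 4.1]
print(f"r = {pearson_r(vo2max, mas):.3f}")
```

A coefficient near +1, as here, indicates the kind of strong positive linear association between VO2max and MAS that the study reports.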
Table 4. MAS program with the long interval method for football, basketball, and handball players. Long edge; KK, short edge; MAS, maximal aerobic speed
MAS program with grid method for football, basketball, and handball
1:1 Running/Rest-Active (15 s : 15 s). The long edge is run at 100% MAS and the short edge at 70% MAS. Work starts with 6 minutes; 2-4 sets if an 8-minute run time is preferred. If
|
v3-fos-license
|
2017-04-15T20:34:27.671Z
|
2012-08-15T00:00:00.000
|
33556417
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0042726&type=printable",
"pdf_hash": "e864e9939b34ac810ae958dcdd7156414fb723e6",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:932",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "e864e9939b34ac810ae958dcdd7156414fb723e6",
"year": 2012
}
|
pes2o/s2orc
|
The Integrity of the Cytokinesis Machinery under Stress Conditions Requires the Glucan Synthase Bgs1p and Its Regulator Cfh3p
In yeast, cytokinesis requires coordination between nuclear division, actomyosin ring contraction, and septum synthesis. We studied the role of the Schizosaccharomyces pombe Bgs1p and Cfh3p proteins during cytokinesis under stress conditions. Cfh3p formed a ring in the septal area that contracted during mitosis; Cfh3p colocalized and co-immunoprecipitated with Cdc15p, showing that Cfh3p interacted with the contractile actomyosin ring. In a wild-type strain, a significant number of contractile rings collapsed under stress conditions, and this number increased dramatically in the cfh3Δ, bgs1/cps1-191, and cfh3Δ bgs1/cps1-191 mutants. Our results show that after osmotic shock Cfh3p is essential for the stability of the β(1,3)glucan synthase Bgs1p in the septal area, but not at the cell poles. Finally, cells adapted to stress; they repaired their contractile rings and re-localized Bgs1p to the cell surface some time after osmotic shock. A detailed analysis of the cytokinesis machinery in the presence of KCl revealed that the actomyosin ring collapsed before Bgs1p was internalized, and that it was repaired before Bgs1p re-localized to the cell surface. In the cfh3Δ, bgs1/cps1-191, and cfh3Δ bgs1/cps1-191 mutants, which have reduced glucan synthesis, the damage produced to the ring had stronger consequences, suggesting that an intact primary septum contributes to ring stability. The results show that the contractile actomyosin ring is very sensitive to stress, and that cells have efficient mechanisms to remedy the damage produced in this structure.
Introduction
Cytokinesis is the final stage of cell division and results in a roughly equal distribution of organelles between the two daughter cells. In the fission yeast Schizosaccharomyces pombe, cytokinesis requires the positioning of the division plane, the assembly and contraction of an actomyosin ring, the synthesis and degradation of a division septum, and the coordination of all these processes with nuclear division [1][2][3][4][5][6][7][8][9][10][11].
The division plane coincides with the position of the nucleus in order to ensure that both daughter cells receive an equal number of chromosomes. Microtubules position the nucleus at the medial region of the cell, and Mid1p and the kinase Plo1p promote the recruitment and assembly of the contractile actomyosin ring (CAR) at the cell cortex around the nucleus [11][12][13][14][15][16]. First, the type-II myosin heavy chain Myo2p, its light chains Rlc1p and Cdc4p, and Rng2p arrive at the equator of the cell in a Mid1p-dependent manner. Then, the PCH protein Cdc15p and the formin Cdc12p become incorporated into the ring and promote the recruitment of certain actin-interacting proteins that initiate the polymerization and compaction of actin into a ring. Maturation of the ring is accompanied by the incorporation of additional proteins into this structure [4,[17][18][19][20][21][22][23].
Once the CAR has been assembled, its contraction is initiated by the activity of a cascade of protein kinases (the SIN pathway, from Septation Initiation Network) that assembles at the spindle pole body. Mutants in components of this pathway are able to assemble a CAR, but this ring is unstable and does not contract [5,10,22]. In yeasts, ring contraction is accompanied not only by the incorporation of new plasma membrane but also by the synthesis of a septum composed of cell wall material [8,24]. In fission yeast, the primary septum, composed of linear and branched β(1,3)glucan, is surrounded by a secondary septum whose composition is similar to that of the lateral cell wall [25,26]. Bgs1p plays a relevant role in cytokinesis because it is the β(1,3)glucan synthase responsible for the synthesis of linear β(1,3)glucan and for the integrity of the primary septum [27]. Finally, the septum needs to be degraded in order to allow the two daughter cells to separate. α(1,3)- and β(1,3)-glucanases have been implicated in cell separation [28][29][30]. Septins and the exocyst are required for the correct localization of these glucanases [31].
Cfh3p is similar to S. cerevisiae Chs4p, a scaffold protein that attaches the chitin synthase Chs3p to the septin ring. Cfh3p regulates the activity of Bgs1p by stabilizing it at the cell surface [32]. The Cfh3 and Chs4 proteins share tandem SEL1 domains, a subfamily of TPR domains present in proteins that form multiprotein complexes required for signal transduction [33]. Here we show that stress collapses the cytokinesis machinery and that Bgs1p and its regulator Cfh3p are required to ensure the stability of the cytokinesis apparatus under these conditions. The results point to the notion that Cfh3p acts as a scaffold that ensures the stability of Bgs1p at the septal area, so that linear β(1,3)glucan can be synthesized even under unfavorable conditions.
Overexpression of cfh3+ produces an abnormal distribution of proteins involved in cytokinesis
Previous results had shown that cfh3+ overexpression results in a defect in cytokinesis [34]. In order to gain further information about the role of Cfh3p in this process, we analyzed the distribution of proteins involved in the different steps of cytokinesis in cells overexpressing cfh3+. We focused our attention on the distribution of CAR components (actin, the myosin light chains Cdc4p and Rlc1p, and Cdc15p), on a protein that links the CAR to the plasma membrane (Chs2p), and on some proteins involved in cell separation (the septin Spn3p and the glucanases Agn1p and Eng1p). The results showed that an excess of Cfh3p produced alterations in the localization of all these proteins (figure S1). Since Cfh3p regulates the activity of Bgs1p [32], we wondered whether this alteration of cytokinesis was due to a hyperactivation of Bgs1p. In fact, in cells overexpressing cfh3+, Bgs1p was observed not only at the cell surface of the poles and septal area, as in the WT strain, but across the whole of the cell periphery (figure S2, A). However, the following results argue against the hypothesis that altered Bgs1p regulation caused the cytokinesis defects exhibited by cells overexpressing cfh3+: 1) β-glucan synthase activity did not increase in these cells (not shown), and 2) overexpression of bgs1+ from the 3Xnmt1+ promoter produced cells with an abnormal morphology that sometimes lysed; however, these cells were not chained, branched, or multiseptated (figure S2, B). These results suggested that the interference of Cfh3p with cytokinesis was not a consequence of a hyperactivation of Bgs1p. The specificity of the interaction of cfh3+ overexpression with the contractile ring was supported by the fact that the multiseptation phenotype was not observed in cdc15-140 and SIN mutants, which cannot assemble stable CARs, whereas it was observed in septin mutants (figure S3), in which CARs assemble and contract and septa are synthesized but not dissolved owing to
glucanase misregulation [31]. It has been suggested that the function of cfh3+ would be to regulate Chs2p [34], a protein similar to chitin synthases that lacks such catalytic activity [35] and whose overexpression leads to cytokinesis defects [36]. As shown in supplemental figure S3, the phenotype of cfh3+ overexpression was produced in cells lacking chs2+, and vice versa. Taken together, these results suggested that Cfh3p might interact physically with the CAR, such that a high concentration of this protein would disturb the structural/mechanical properties of the ring.
Cfh3p accumulates at the cell poles and septal area
According to the databases, Cfh3p is a prenylated protein. Accordingly, it was expected to localize to the cell surface. A GFP-Cfh3 fusion protein was observed at the cell poles and at the midzone of the cell in the WT strain (figure 1 A), in agreement with the localization described by Matsuo et al. using immunofluorescence analyses [34]. A strain bearing GFP-Cfh3 and Cut11-RFP showed that the Cfh3p signal accumulated at the cell poles of interphase cells. In mitotic cells, Cfh3p was observed at the cell equator before the nuclei separated; at later times, a strong Cfh3p signal accumulated at the cell equator. This signal contracted over time and remained at the septal area before the cells separated. After cell separation, Cfh3p was observed at the cell poles (figure 1 A, left panels). Confocal microscopy confirmed that Cfh3p accumulated at the cell poles and septal area (figure 1 A, right panels). A time-lapse experiment using confocal microscopy allowed us to perform a more detailed analysis of Cfh3p localization to the cell midzone; we observed that Cfh3p was initially assembled as a ring and that this ring contracted during cytokinesis, leaving behind a fluorescent signal that formed a plaque when the leading ring was disassembled at the end of contraction (figure 1 A, lower right panel). This result suggested that Cfh3p was associated with both the contractile ring and the plasma membrane.
We analyzed Cfh3p localization at the cell midzone in mutants affected in different stages of cytokinesis: CAR assembly and contraction (cdc4-8, rlc1Δ, myo2-E1 myo3Δ, cdc15-140, and chs2Δ), SIN signaling (cdc11-119 and cdc16-116), septum synthesis (cps1-191), and cell separation (spn3Δ and spn4Δ). When the SIN-defective cdc11-119 mutant was incubated for 3 hours at 36°C, the strongest Cfh3p signal was detected at the cell poles of interphase cells and at the cell midzone of mitotic cells (figure 1 B). In a cdc16-116 mutant, in which the SIN signal does not turn off, Cfh3p localized to the edge of the growing septa and remained at the septal area after the septa had been completed. Thus, Cfh3p can arrive at the cell midzone in the absence of the SIN pathway, but the SIN signal must be turned off for it to be removed from the cell equator after mitosis. Cfh3p localized to the cell equator of the cells in the myosin myo2-E1 myo3Δ, cdc4-8, and rlc1Δ mutants, although the signal was not uniform, in accordance with the altered CARs in these mutants (figure 1 B and results not shown). In a cdc15-140 mutant, a weak GFP-Cfh3 signal was observed at the inter-nuclei area of about 30% of mitotic cells (arrowhead in panel a of figure 1 B); this result suggested that GFP-Cfh3p was able to arrive at the cell midzone in these cells but was not able to remain there for long. In the cps1-191, chs2Δ, spn3Δ, and spn4Δ mutants, Cfh3p localized to the cell equator. In sum, Cfh3p localization to the cell equator was independent of the myosin components of the CAR, the glucan synthase Bgs1p, and the septins; Cdc15p was required to stabilize Cfh3p at the cell equator, and the SIN activity regulated the removal of Cfh3p from the midzone after mitosis. Time-lapse experiments using cells that expressed GFP- and RFP-tagged Cdc15p, Cfh3p, and Bgs1p indicated that Cfh3p arrived at the cell midzone after Cdc15p and before Bgs1p (figure S4). These results were in agreement with a role
of Cfh3p in CAR maturation/contraction and/or in septum synthesis.
We also analyzed the localization of proteins involved in cytokinesis in a cfh3Δ mutant; we found that Cdc15-GFP, Chs2-GFP, GFP-Bgs1p, Spn3-GFP, and Eng1-GFP localized correctly in the absence of Cfh3p (not shown). However, we observed that a number of the Cdc15-GFP and GFP-Cdc4 rings were asymmetric or broken.
Finally, we performed co-localization analyses between Cfh3p and CAR proteins. It was found that GFP-Cfh3 and actin (stained with rhodamine-phalloidin) co-localized to the contractile ring (figure 1 C, left panels). Observation of a strain bearing GFP-fused coronin, a protein that associates with actin patches [37], and RFP-Cfh3p revealed that Cfh3p did not co-localize with actin patches (figure 1 C, central panels), indicating that Cfh3p is not associated with all actin-containing structures. RFP-Cfh3p co-localized with the CAR-associated protein Cdc15p fused to GFP (figure 1 C, right panels).

cfh3Δ mutants show a genetic interaction with mutants defective in CAR assembly and contraction

In a previous work, we determined that cfh3Δ was synthetic sick with the cdc14-114 SIN-defective mutant and with the cps1-191 mutant [32], which implicated Cfh3p in septum synthesis. The cfh3Δ mutants did not show a genetic interaction with mutants affected in the myosin component of the CAR or with septin mutants. Here, we extended the analysis of genetic interactions to examine the functional relationship between cfh3Δ and mutants affected in other CAR components. Thus, we constructed double mutants between cfh3Δ and cps8-188 (a strain carrying a point mutation in the act1+ gene, coding for actin), and between cfh3Δ and the cdc15-140 and imp2Δ strains (carrying mutations in PCH-family proteins required for CAR function). The WT strain and the mutants carrying single or double mutations were streaked onto YES plates and incubated at different temperatures (22°C to 37°C). It was found that the cfh3Δ cps8-188, cfh3Δ cdc15-140, and cfh3Δ imp2Δ strains were more thermosensitive than the corresponding single mutants (figure 2 A). Thus, cfh3Δ showed a genetic interaction with some mutants affected in CAR assembly and/or contraction, pointing to a role of Cfh3p in these processes.
Cfh3p is a CAR-associated protein
The facts that cfh3+ overexpression produced an abnormal distribution of proteins involved in different steps of cytokinesis, and that Cfh3p localized to the septal area as a contractile ring, suggested that Cfh3p might associate with the CAR. This possibility was analyzed by performing a co-immunoprecipitation experiment. We incubated cell extracts from strains carrying HA-Cfh3, Cdc15-GFP, or both tagged proteins in the presence of a polyclonal anti-GFP antibody. Following this, we performed Western blotting analyses using monoclonal anti-GFP or anti-HA antibodies. In parallel, total cell extracts from the same strains were analyzed by Western blotting to detect the input of Cfh3p or Cdc15p. As shown in figure 2 B, HA-Cfh3 was detected in anti-GFP immunoprecipitates from the strain bearing both tagged proteins, but not from the control strains, pointing to a physical interaction between Cfh3p and a CAR component or a CAR-associated protein.
Contractile rings in the cfh3Δ mutant are sensitive to stress
Since the above results suggested that Cfh3p might play some role in the assembly and/or contraction of the CAR, we carried out time-lapse experiments using strains bearing both the Cdc15 ring protein and the Hht2 histone fused to GFP in order to visualize the progression of nuclear division. In this way, the photographs could be compared at the same time points. Our initial results showed that the time for CAR assembly and contraction in the control strain was about 40±3 minutes (n = 10), in agreement with previous results [36], while in the cfh3Δ strain it was about 75±9 minutes (n = 10). This result suggested that Cfh3p might play a relevant role in CAR assembly/contraction; however, since the cfh3Δ mutant did not show either a delay in generation time or an increase in the number of septated cells, we wondered whether this surprising result might be a consequence of the method used to prepare the samples, which involved centrifugation of the cells and their mixing with melted solid medium kept at 42°C. Indeed, when the samples were prepared by filtering the cells and spreading them onto solid YES medium layered on the slides, the time for ring assembly and contraction in both strains was about 40±3 minutes (n = 10).
These results suggested that the CAR was unstable and sensitive to stress in the cfh3Δ mutant. In order to confirm this hypothesis, we observed cells from the WT or cfh3Δ strains under the microscope after they had been subjected to different stress conditions: osmotic stress (incubation with 1.2 M sorbitol, 1 M KCl, or 0.2 M MgCl2 for 15 minutes at 32°C), nutritional stress (growth until late logarithmic phase), and mechanical stress (centrifugation for 2 minutes at 16,000×g). In all cases we found cells with an abnormal localization of Cdc15p, which included asymmetric rings (50% of cases; arrow in figure 3 A); rings that did not disassemble properly (20% of cases; figure 3 A, asterisk); broken rings (25% of cases; bracket in figure 3 A); or an accumulation of the protein in the lateral cell cortex (5% of cases; arrowhead in figure 3 A). A closer analysis of CARs with confocal microscopy allowed us to confirm that when the cfh3Δ strain was exposed to a stress source, a certain percentage of rings were misshapen, including rings that were asymmetric, broken, and/or distorted (two examples are shown in figure 3 A, lower panels).
When the total number of cells with an abnormal distribution of Cdc15p was quantified for each strain and condition (n ≥ 500 in all cases), it was found that in the cfh3Δ cultures this number was significantly higher than in the WT cultures, even when the cells were growing in logarithmic phase in YES medium, and that it increased dramatically in the mutant strain when the cells were stressed (figure 3 B). Similar results were obtained using Cdc4 and Rlc1 GFP-fused proteins (abnormal Cdc4 rings were detected in 0.4% of WT cells and in 10% of cfh3Δ cells grown in YES, and in 33% of WT cells and in 85% of cfh3Δ cells incubated in YES with 1 M KCl for 15 minutes; the values for cells carrying Rlc1-GFP were 0.5%, 1.8%, 52%, and 69%, respectively). These results showed that the contractile rings were less stable in the cfh3Δ mutant than in the WT strain, particularly when the cells were undergoing some stress.
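With counts of this size (n ≥ 500 cells per condition), the difference between two scored percentages can be checked with a standard two-proportion z-test. The Python sketch below is illustrative only: the counts are hypothetical numbers patterned on the reported percentages (e.g. ~33% of WT vs. ~85% of cfh3Δ cells with abnormal Cdc4 rings after KCl), not the actual scoring data.

```python
from math import erf, sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test; returns (z statistic, p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))      # pooled standard error
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF, Phi(x) = (1+erf(x/sqrt(2)))/2
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

# Hypothetical counts: 165/500 abnormal rings in WT vs. 425/500 in the mutant
z, p = two_proportion_z(165, 500, 425, 500)
print(f"z = {z:.1f}, p = {p:.3g}")
```

With samples this large, differences of the magnitude reported here yield z statistics far beyond any conventional significance threshold.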
Contractile rings in cps1-191 cells are sensitive to stress
As described above, we found that the cfh3+ gene played a role in maintaining CAR stability. Since Cfh3p is a regulator of the β(1,3)glucan synthase Bgs1p [32], we wondered whether both functions of the Cfh3 protein were related. To analyze this, we observed the Cdc15-GFP rings in the WT, cfh3Δ, cps1-191, and cfh3Δ cps1-191 strains that had been incubated in YES medium at 25°C (a permissive temperature for the cps1-191 mutation) or at 32°C (a semi-restrictive temperature for cps1-191; this temperature allowed us to observe rings at different stages of contraction and to detect differences between the cps1-191 and cfh3Δ cps1-191 strains, which was not possible at 36°C), or in YES plus 1.2 M sorbitol (osmotic stress) for 15 minutes at 32°C. We quantified the total number of cells exhibiting an abnormal distribution of Cdc15-GFP, as explained above (a minimum of 500 cells were scored in each case). Quantification of the abnormal distribution of Cdc15 in these strains revealed that under all conditions the cps1-191 mutant exhibited more cells with abnormal rings than the cfh3Δ mutant, and that the cps1-191 cfh3Δ double mutant showed the strongest defect (figure 3 C). Thus, 38% of the cps1-191 cells exhibited abnormal Cdc15 rings when they grew at the permissive temperature; this defect was observed in 63% of the cells incubated at 32°C and in up to 82% of the cells when the culture had been subjected to osmotic shock (figure 3 C). The percentages of cells with abnormal rings for the cfh3Δ cps1-191 strain were 42%, 77%, and 91% for the YES cultures incubated at 25°C or 32°C, and for the YES plus sorbitol culture incubated at 32°C, respectively. The right panels in figure 3 C show micrographs of the Cdc15 rings in the WT, cfh3Δ, cps1-191, and cfh3Δ cps1-191 strains grown in YES medium at 32°C. When the cells were incubated in the presence of 1 M KCl instead of sorbitol, similar results were obtained (not shown). Figure S5 shows Cdc4-GFP rings in the WT, cfh3Δ, cps1-191, and cfh3Δ
cps1-191 strains incubated at 25°C. These results showed that a defective Bgs1 protein led to a defect in the stability of the CAR and that this phenotype was enhanced when the cells lacked Cfh3p and when they underwent a stress shock, and suggested that the CAR defects observed in the cfh3Δ strain might be the consequence of the misregulation of Bgs1p in this mutant.
Cfh3p ensures Bgs1p stability at the septal area but not at the cell poles

The physical interaction between Cfh3p and the CAR suggested that Cfh3p could act as a scaffold required to ensure the stability of Bgs1p at the cell equator. In order to investigate this possibility, cells from the WT strain or the cfh3Δ mutant bearing Cut11-RFP (a nuclear-membrane protein used as a cell-cycle marker) and GFP-Bgs1 proteins were exposed to 1 M KCl and incubated at 32°C for different times. As shown in figure 4, in the WT strain the GFP-Bgs1 protein could be observed at the cell poles and at the septal area in the control culture (cells incubated in YES medium; marked as 0′ in figure 4 A) and at 10 minutes after KCl had been added to the medium. After 20-30 minutes of incubation in the presence of the salt, the fluorescence corresponding to GFP-Bgs1p was strong at the cell midzone but very weak or undetectable at the poles (see insets in the upper panels of figure 4 A). According to the fluorescence signal, the septal area was distorted when the cells were incubated in the presence of KCl for 10-30 minutes. After longer incubation times (40-50 minutes), Bgs1p was observed at the cell equator and also at the poles; this signal probably corresponded to new Bgs1p molecules delivered to the membrane after the initial osmotic shock. The fluorescence observed at the poles after 40-50 minutes of incubation in the presence of KCl was not as strong as that observed in cells incubated in the absence of KCl (upper panels in figure 4 A and results not shown), perhaps due to an enhanced endocytosis of Bgs1p under stress conditions [32]. Additionally, at this time (40-50 minutes) the septal area was not distorted. In the cfh3Δ strain, Bgs1p was observed at the cell equator and the poles when the cells were incubated in YES medium and at 10 minutes after the addition of KCl to the culture (figure 4 A, lower panels). After 20-30 minutes in the presence of KCl, Bgs1 could not be observed
either at the cell midzone or at the poles in most cells, in agreement with previous results [32]. After 40 minutes, the GFP signal was seen at the cell midzone and the poles. Thus, Cfh3p is critical for ensuring the presence of Bgs1p in the septal area after a stress shock.
In order to quantify these results, the percentage of cells with a GFP-Bgs1 fluorescence signal at the cell midzone with respect to the total cell number was scored at different times after the addition of 1 M KCl to the cultures. The results, shown in the right panel of figure 4 A, confirmed that in the cfh3Δ mutant the number of cells exhibiting GFP-Bgs1 in the cell midzone decreased dramatically after osmotic shock, while this treatment had a milder effect in the WT control. Additionally, the results confirmed that GFP-Bgs1p was present in the cell midzone of cfh3Δ cells after 45-60 minutes of incubation in the presence of KCl.
Time-lapse experiments were performed to observe the effect of osmotic shock over time in the same cells. Under the conditions of these experiments, no re-localization of GFP-Bgs1p to the cell poles was observed, and the whole process seemed to proceed more slowly than in liquid medium. However, the results confirmed that in the WT strain Bgs1p was present in the septal area of the cells throughout the experiment, while in the cfh3Δ mutant the fluorescence signal disappeared from the cell equator after the stress shock and was observed again at later times (figure 4 B).

Cells restore the cytokinesis machinery after the initial stress shock

As described above, GFP-Bgs1p was observed in the cell midzone of cfh3Δ cells after 40 minutes of incubation in the presence of KCl. Additionally, at this time GFP-Bgs1 localized to the cell poles in both the WT and cfh3Δ strains and the septal area was not distorted. In order to determine whether this adaptation to the stress insult was specific to the localization of Bgs1p, we analyzed CAR morphology in WT and cfh3Δ cells bearing the Cdc15-GFP fusion protein incubated in the presence of KCl for different times. The number of cells exhibiting normal Cdc15 rings with respect to the number of cells with normal and abnormal rings (such as those shown in figure 3 A) was calculated (cells in interphase were not scored). The plot in the left panel of figure 5 A shows a quantification of the results. The number of cells with normal rings decreased in the WT and cfh3Δ strains after 15 minutes of incubation in the presence of the salt; as described above (figure 3), CARs were more affected by osmotic shock in the cfh3Δ than in the WT strain. Thirty minutes after the osmotic shock, the number of cells exhibiting a normal CAR in both strains was similar to that obtained when the cells were incubated in YES medium (0′ time point). The right panel in figure 5 A shows WT and cfh3Δ cells bearing Cdc15-GFP that had been treated with KCl for different times;
similar results were obtained when the cells had been treated with 1.2 M sorbitol instead of KCl (not shown). These results showed that cells were able to restore the contractile rings after osmotic shock.
In order to follow the recovery of the CAR in a single cell, we performed time-lapse experiments in WT and cfh3Δ cells bearing both the Cdc15 ring protein fused to GFP and the Hht1p histone (used as a cell-cycle marker) fused to RFP. Figure 5 B shows the behavior of one WT cell (left set of micrographs) and two cfh3Δ cells (central and right sets of micrographs) incubated in YES with 1 M KCl; the cfh3Δ cells exhibited asymmetric/broken rings 5 minutes after the addition of the salt (indicated with arrowheads). In both cases, the CARs behaved as normal rings after 25 minutes in the presence of KCl. These results confirmed that cells were able to remedy the damage produced to the cytokinesis apparatus by the initial stress shock and to proceed through cell division.
In order to determine whether CAR integrity and the localization of Bgs1p in the septal area were restored simultaneously or consecutively, we cultured a cfh3Δ strain bearing both the Cdc15-GFP and RFP-Bgs1 fusion proteins in YES with 1 M KCl and analyzed both processes in the same culture. As shown in figure 6 A, upper panel, the number of cells with normal Cdc15 rings decreased significantly 15 minutes after the addition of KCl, in agreement with previous results (figures 3 and 5); after 30 minutes in the presence of KCl, the percentage of cells with normal rings was similar to that scored in YES medium (0′ time point). With respect to Bgs1p, this protein was in the cell midzone in 23% of the cells cultured in YES medium; 15 minutes after the addition of KCl, this number decreased to 12%, and it fell to 3% after 30 minutes in the presence of the salt. This percentage increased slightly after 45 minutes and was 18% after 60 minutes. These results showed that stress produced fast and dramatic damage to CAR integrity and that the cells could repair this damage very efficiently. Regarding Bgs1, both its delocalization in response to the stress shock and its re-localization to the cell midzone took place more gradually. The lower panel in figure 6 A shows representative fields of cfh3Δ cells bearing the Cdc15-GFP and RFP-Bgs1 proteins that had been incubated in YES with 1 M KCl for different times. The photographs show that 15 minutes after osmotic shock cells exhibited aberrant CARs; Bgs1p was still observed at the cell equator, although the RFP signal did not form a neat ring, as it did when the cells were incubated in YES medium (see the cell marked by an arrow in the lower panel of figure 6 A). Thus, although the morphology of the contractile rings is perturbed by stress, CARs seem to be competent to retain ring-associated proteins at the cell equator. After 30 minutes in the presence of the salt, CARs were normal and cells did not exhibit Bgs1p in the cell midzone. The
RFP-Bgs1 ring was observed in some cells after 45 minutes (indicated with arrowheads in figure 6 A, lower panel) and was present in all the dividing cells after 60 minutes in the presence of KCl. It was not possible to perform time-lapse experiments to analyze CAR recovery and Bgs1p re-localization in the same cell, because the RFP-Bgs1 fluorescence was very weak in the presence of KCl and faded before the end of the experiment.
The cfh3Δ strain bearing the Cdc15-GFP and RFP-Bgs1 proteins allowed us to analyze the effect of stress on cytokinesis in more detail. In YES medium (0′), most mitotic cells exhibited Cdc15p as a contractile ring located at the leading edge of the growing septum (CW in figure 6 B). The Cdc15 ring coincided with the Bgs1 signal, which was observed as a contractile ring at the leading edge of the growing septum; Bgs1p left a fluorescent signal behind as it contracted. Under stress conditions, there were cells with Cdc15 rings that did not display the Bgs1 signal, and cells in which the Cdc15 signal was not located at the leading edge of the growing septa (two examples are shown in Figure 6 B). These results confirmed that stress collapsed and discoordinated the cytokinesis machinery.
Cfh3p and cytokinesis
We have previously described that Cfh3p regulates the activity of the β(1,3)glucan synthase Bgs1p, particularly under stress conditions [32]. In this work we aimed to further characterize the function of this protein. The time at which Cfh3p localized to the division site, and the fact that it formed a contractile ring, pointed to a role of Cfh3p in cytokinesis. cfh3+ overexpression led to defects in cell division; analysis of this phenotype did not provide information about this role, since the phenotype was accompanied by an aberrant distribution of many proteins required for different steps of cytokinesis. The physical interaction between Cfh3p and Cdc15p, a ring-associated protein, suggested that the defects in the cytokinesis machinery observed in cells overexpressing cfh3+ might be indirect; an excess of Cfh3p probably disturbs the structural/mechanical properties of a structure that is dynamic and highly regulated. Even so, a specific role for Cfh3p in CAR assembly/contraction can be inferred from the facts that cfh3Δ mutants showed a genetic interaction with mutants defective in ring assembly/contraction and that in a cfh3Δ mutant a significant number of cells exhibited abnormal contractile rings. Cfh3p interacts physically with Bgs1p [32] and with the CAR (figure 2); thus, Cfh3p might act as a scaffold whose interaction with Cdc15p and/or other CAR components or CAR-associated proteins would be required for Bgs1p to become stabilized at the plasma membrane at the site of cell division.
Cfh3p, Bgs1p, and cytokinesis under stress
In a cps1-191 mutant, a significant number of cells had abnormal CARs even at the permissive temperature, which suggested that the β(1,3)glucan synthase bgs1+/cps1+ is required for CAR stability. Most interestingly, we found that in the WT strain CARs were unstable under osmotic, nutritional and mechanical stress conditions, and that the effect of stress was more dramatic in the cfh3Δ, cps1-191 and cfh3Δ cps1-191 cells. In the absence of Cfh3p, the activity of Bgs1p is reduced due to enhanced endocytosis, particularly after a stress shock [32]. This strongly suggested that the damage to the CAR observed in the cfh3Δ strains could be explained in terms of the Bgs1p defect of this mutant. Thus, in the WT strain a fully functional Bgs1p would be delivered to the membrane and would remain there for the time required to exert its activity at a normal rate. In the cfh3Δ mutant, a robust Bgs1p would be delivered to the membrane and would act properly for some time, but this protein would be endocytosed faster than in the WT strain. This would result in a lower functionality of the β(1,3)glucan synthase and in the appearance of subtle CAR defects. In the cps1-191 strain, a weak Bgs1 protein would be delivered such that, although it could remain at the membrane for a normal length of time, it would lead to some cell defects. Finally, in the cfh3Δ cps1-191 double mutant, a defective Bgs1 protein would be delivered to the membrane and endocytosed faster than in the single cps1-191 mutant, resulting in very low Bgs1p functionality, which would account for the strong defects detected in this strain ([32] and this work). The defects in these strains would be exacerbated by stress, which reduces the stability of Bgs1p at the plasma membrane. We observed that in the WT strain Bgs1p delocalized from the cell poles but not from the cell equator after a stress shock, and that in the cfh3Δ mutant Bgs1p delocalized from both the cell poles and the midzone. Thus, Cfh3p is essential to
guarantee that linear β(1,3)glucan is synthesized correctly at the primary septum (where it plays its most relevant function; [27]), even under unfavorable conditions.

Contractile ring, primary septum, and cytokinesis under stress

It seems plausible to think that the defective CARs present in the cells after a stress shock and/or in the cps1-191 mutant could be a consequence of defects in the synthesis of the primary septa. In Saccharomyces cerevisiae, coordination between the synthesis of a chitin primary septum and the contraction of the acto-myosin ring is required to overcome the internal turgor pressure during cell division and for the cell to proceed successfully through cytokinesis [38]. ScChs2p, the chitin synthase required for primary septum synthesis [39], is also required to maintain CAR stability [40]. In S. pombe, the β(1,3)glucan synthase Bgs1p is required for the correct synthesis of the primary septum [27], which is made up of β(1,3)glucan. The fact that the bgs1/cps1-191 mutant had abnormal CARs (even when grown in YES medium) could be explained if the weak synthase activity in this mutant produced defective primary septa unable to support contraction, thus reducing the stability of the CAR. This defect would be enhanced by stress due to the reduced stability of Bgs1p at the plasma membrane, in particular in the cfh3Δ mutant. However, the observations that even in a WT strain the number of defective rings increased after a short osmotic shock, that in the cfh3Δ mutant this phenomenon was observed at a time at which Bgs1p was still present in the cell midzone (15 minutes), and that the rings were restored before Bgs1p re-localized to the septal area suggest that stress might induce direct damage to the contractile ring. Consequently, in the cfh3Δ cells a combination of two effects produced by stress (direct damage to the CAR and defective septum synthesis due to the reduced Bgs1p activity) would result in a defect in CAR stability stronger than that produced in the WT strain,
which would only be affected by the direct damage produced to the ring by stress, a defect that is rapidly repaired by the cell. In the case of the cps1-191 mutants, the cells would have a weak primary septum even in YES medium, which would result in the presence of some defective rings; under these circumstances, the direct damage produced to the CAR by a stress shock would have strong consequences, and would account for the severe defects in the cytokinesis apparatus detected in the cps1-191 and cfh3Δ cps1-191 strains after osmotic shock.
The contractile ring as a sensor for stress
Our results show that stress collapses the cytokinesis machinery. Previous results had shown that stress produces alterations in other morphogenetic elements in different organisms; thus, actin becomes depolarized and microtubule dynamics are affected by osmotic shock [41][42][43][44][45]. It has been proposed that the reorganization of actin after osmotic shock would be a protective response directed at reinforcing the cell cortex after the cell shrinkage produced by the change in external osmolarity [43]. It is likely that the depolarization of actin after centrifugation [46] would also be a protective response to the mechanical stress produced during that process. When cells are under hyper-osmotic conditions they shrink, and the membrane undergoes changes in its physical state and in protein-protein and protein-lipid interactions [47,48]. It is possible that these circumstances might affect the cytokinesis machinery. After a certain time of incubation under hyper-osmotic conditions, cells become adapted to the new environment by adjusting their internal osmolarity; they reorganize the distribution of actin and restore microtubule dynamics and tip growth [41,45,49]. We found that cells were able to recover from the initial osmotic shock; after a prolonged incubation under stress, cells stabilized the contractile rings and re-localized Bgs1p to the cell division site and cell poles. It is possible that the rapid damage produced to the CAR could trigger a mechanism that would promote cell adaptation to osmotic stress and repair of the cytokinesis machinery. Once the CAR has been restored, Bgs1p would be relocated to the septal area and septum synthesis would reinitiate. These results are in agreement with the fact that neither the WT nor the cfh3Δ strains exhibit defects in cytokinesis under these conditions ([32] and this work). Thus, the contractile ring could be considered as a sensor that detects environmental conditions and promotes protective responses to ensure
the accuracy of cell division.
It has been described that Cdc15p dephosphorylation is required for its functionality at the CAR [50]. We analyzed whether osmotic shock promoted Cdc15p phosphorylation and whether CAR recovery was concomitant with Cdc15p dephosphorylation in the WT and cfh3Δ strains; we found that Cdc15p mobility was not slower in extracts obtained from cells incubated with KCl than in extracts obtained from the control culture (not shown). This suggested that there was no correlation between CAR instability after osmotic shock and Cdc15p phosphorylation, and seemed to rule out the possibility that the adaptation mechanism involved changes in Cdc15p phosphorylation. Thus, although changes in Cdc15p phosphorylation cannot be completely excluded, it is possible that other processes guarantee the stability of the contractile rings under stress conditions. Determining their nature should shed light on the mechanisms that ensure cell division in unfavorable environments.
General techniques
All techniques for S. pombe growth and manipulation have been described ([51], http://www.biotwiki.org/foswiki/bin/view/Pombe/NurseLabManual). The sources and relevant genotypes of the strains used are listed in Table S1. Unless stated otherwise, cells were incubated at 32°C. To induce osmotic stress, either powdered KCl was added to the culture at the desired final concentration or the cells were collected by filtration and transferred to YES supplemented with 1.2 M sorbitol. For overexpression experiments using the nmt1+ promoter in the pREP3X plasmid, cells were grown in EMM medium containing the appropriate supplements and 15 µM thiamine; cells were harvested, washed extensively with water, and resuspended in EMM with supplements. For phenotype analysis, expression was induced for 20-24 hours. In order to express cfh3+ from the thiamine-repressible nmt1+ promoter, site-directed mutagenesis was used to introduce an XhoI site immediately upstream of the initial ATG. The cfh3+ ORF and 1 kb of the 3′ non-coding sequence were then cloned into the overexpression pREP3X plasmid as an XhoI/SacI DNA fragment. Geneticin (G418, ForMedium) and hygromycin (ForMedium) were used at 120 and 400 µg ml⁻¹, respectively. Molecular and genetic manipulations were performed according to Sambrook et al. [52]. All tagged proteins were integrated into the chromosome under the control of their own promoters. Double mutants were obtained by tetrad analysis. Combinations of mutated alleles with HA-, GFP- or RFP-tagged proteins were obtained either by plasmid transformation or by "random spore" selection from genetic crosses [51].
Protein techniques
Western blotting and co-immunoprecipitations were performed as described [32].
Microscopy
Hoechst dyes bind preferentially to A/T-rich regions of DNA. We used Hoechst 33258 because its slow entry into the cells allows non-specific staining of the cell wall; thus, simultaneous observation of the nuclei and the cell wall can be performed in living cells. Unless stated otherwise, the observation of tagged proteins was performed on cells collected by filtration. In order to estimate the percentage of cells with damaged cytokinesis machinery, samples were collected at the desired times and photographed. The percentage of cells exhibiting normal Cdc15-GFP ring morphology and GFP-Bgs1 localization in the cell midzone was scored from the photographs. In the former case, only cells exhibiting contractile rings were scored, while in the latter all cells in the field were scored. The experiments were performed a minimum of three times and a minimum of 500 cells were scored in each experiment. For conventional fluorescence microscopy, images were captured with a Leica DM RXA microscope equipped with a Photometrics Sensys CCD camera, using the Qfish 2.3 program. Confocal microscopy was performed with a Leica TCS SL spectral confocal microscope with a 63×/1.4 oil objective, using an excitation wavelength of 488 nm. Images were processed with Adobe Photoshop or Leica Confocal Software.
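The scoring scheme above (at least three independent experiments, at least 500 cells scored per experiment, reported as mean with standard deviation) can be sketched as a short script. This is only an illustration of the quantification, not the authors' analysis software; the function names and all counts below are hypothetical.

```python
"""Sketch of the cell-scoring quantification described in Methods.

Each replicate experiment yields a pair (normal_count, total_scored)
from manual inspection of the photographs; the percentage of cells with
a normal contractile ring is averaged across replicates and reported
with its standard deviation. All numbers here are invented examples.
"""
from statistics import mean, stdev


def percent_normal(normal_count: int, total_scored: int) -> float:
    """Percentage of scored cells showing a normal ring."""
    if total_scored <= 0:
        raise ValueError("must score at least one cell")
    return 100.0 * normal_count / total_scored


def summarize(replicates: list[tuple[int, int]]) -> tuple[float, float]:
    """Mean and SD of the percentage across independent experiments.

    Enforces the Methods criteria: >=3 experiments, >=500 cells each.
    """
    if len(replicates) < 3:
        raise ValueError("at least three independent experiments required")
    if any(total < 500 for _, total in replicates):
        raise ValueError("score at least 500 cells per experiment")
    percents = [percent_normal(n, t) for n, t in replicates]
    return mean(percents), stdev(percents)


# Hypothetical counts for one strain 15 minutes after a 1 M KCl shock:
m, s = summarize([(430, 500), (440, 520), (418, 510)])
print(f"{m:.1f}% of cells with normal rings (SD {s:.1f})")
```

Under these invented counts the script reports the mean percentage and its standard deviation, mirroring the "standard deviation is given for each value" convention used in the figure legends.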
Figure 1 .
Figure 1. Cfh3p accumulates at the cell poles and septal area. A. Localization of Cfh3p in a WT strain. Left panel, micrographs of different cells from a strain bearing GFP-Cfh3 and Cut11-RFP; the pictures were taken with a conventional fluorescence microscope. Right panels, micrographs of a cell bearing GFP-Cfh3 taken with a confocal microscope; the upper panel shows a z-section, while the lower panels show three-dimensional reconstructions of stacks of z-series taken over time to show CAR contraction; the numbers indicate the time-points (in minutes) at which the cell was photographed. B. Localization of Cfh3p in different strains. In the case of the cdc15-140 and cdc11-129 mutants, the cells expressed RFP-tagged Cut11p and GFP-tagged Cfh3 and Atb2 proteins; the cells were photographed after 3 hours of incubation at 36°C; the arrowhead in the cdc15-140 panel points to a weak GFP-Cfh3 signal (a and b depict different cells from the same culture). In the case of the cdc16-116 and myo2-E1 myo3Δ strains, the left panels correspond to staining with Hoechst 33258 and the right panels to the GFP fluorescence. The arrowhead in the cdc16-116 panels points to a Cfh3p ring that corresponds to a growing septum. C. Cfh3p co-localizes with actin and with Cdc15p at the CAR. Left panels, GFP-Cfh3 and rhodamine-phalloidin images. Central panels, GFP-tagged coronin (Crn1) and RFP-tagged Cfh3 images. Right panels, GFP-tagged Cdc15p and RFP-tagged Cfh3 images. Bar, 10 µm. doi:10.1371/journal.pone.0042726.g001
Figure 2 .
Figure 2. Cfh3p is a ring-associated protein. A. The cfh3Δ mutant shows a genetic interaction with mutants affected in CAR assembly/contraction. Cells from the indicated strains were streaked onto YES plates and incubated at the indicated temperatures for 2 days. B. Cfh3p and Cdc15p co-immunoprecipitate. Cell extracts from strains carrying Cdc15-GFP and/or HA-Cfh3 fusion proteins were analyzed by Western blotting using monoclonal anti-GFP (α-GFP) or anti-HA (α-HA) antibodies before (Extracts) or after immunoprecipitation (IP) with a polyclonal anti-GFP antibody. doi:10.1371/journal.pone.0042726.g002
Figure 3 .
Figure 3. Cfh3p and Bgs1p are required for CAR integrity under stress conditions. A. Conventional and confocal fluorescence microscopy of WT or cfh3Δ cells treated with 1 M KCl for 15 minutes and collected by centrifugation. In the panels showing the bright field and fluorescence overlaid images, the arrow points to an asymmetric ring; the asterisk shows a ring that did not disassemble after the septum had been completely synthesized; the bracket marks a broken ring, and the arrowhead points to an abnormal accumulation of the Cdc15 protein at the cell cortex. B. Percentage of cells with an abnormal distribution of Cdc15p. The cells were grown in YES or YES supplemented with 1.2 M sorbitol, 1.0 M KCl or 0.2 M MgCl2 for 15 minutes and collected by filtration (YES, Sorbitol, KCl, and MgCl2, respectively), allowed to grow until they reached the end of the logarithmic phase (3.5×10⁸ cells/ml) and collected by filtration (Late logarithmic), or collected by centrifugation when they were growing actively in YES medium (Centrifugation). The standard deviation is given for each value. C. Left panel, percentage of cells from the indicated strains showing an abnormal distribution of Cdc15-GFP when cultured in YES or YES with sorbitol at the indicated temperatures. The standard deviation is given for each value. Right panel, micrographs of cells cultured in the presence of sorbitol for 15 minutes at 32°C. Bar, 10 µm. doi:10.1371/journal.pone.0042726.g003
Figure 4 .
Figure 4. Effect of osmotic shock on the localization of Bgs1p. A. Left panels, wild-type or cfh3Δ cells bearing GFP-tagged Bgs1p and RFP-tagged Cut11p were incubated in the presence of 1 M KCl for the indicated times, collected by filtration and photographed. The insets show cell poles. Right panel, cells from the same strains were treated with 1 M KCl; samples were collected by filtration at the indicated times and photographed. The percentage of cells exhibiting GFP-Bgs1 in the cell midzone (with respect to the total cell number) was scored from the photographs. The experiment was performed three times, with similar results; the result of a representative experiment is shown. B. Time-lapse experiment of cells from the same strains treated with 1 M KCl, collected by filtration, spread onto YES+1 M KCl on a slide and photographed over time; the numbers indicate the minutes, after KCl had been added, at which the cells were photographed. Cells in which the Bgs1 ring was starting and finishing assembly/contraction at the 5′ time-point are marked by an asterisk and an arrowhead, respectively. Bar, 10 µm. doi:10.1371/journal.pone.0042726.g004
Figure 5 .
Figure 5. Cells repair the damage produced to the contractile ring by osmotic shock. A. Left panel, wild-type or cfh3Δ cells bearing GFP-tagged Cdc15p were treated with 1 M KCl; samples were collected by filtration at the indicated times and photographed. The percentage of dividing cells with a normal distribution of Cdc15-GFP (with respect to the total number of cells exhibiting Cdc15 in the cell midzone) was scored from the photographs. The experiment was performed three times, with similar results; the result of a representative experiment is shown. Right panel, cells from the same strains were incubated in the presence of 1 M KCl for the indicated times, collected by filtration and photographed. Bar, 10 µm. B. Time-lapse experiments of WT (left set of photographs) and cfh3Δ (central and right sets of photographs) cells bearing Cdc15-GFP and Hht1-RFP that were treated with 1 M KCl, collected by filtration, spread onto YES+1 M KCl on a slide and photographed over time; the numbers indicate the minutes, after KCl was added, at which the cells were photographed. Arrowheads point to abnormal rings. doi:10.1371/journal.pone.0042726.g005
Figure 6 .
Figure 6. Analysis of the damage produced to the cytokinesis machinery by osmotic shock. A. Upper panel, cfh3Δ cells bearing both the Cdc15-GFP and RFP-Bgs1 fusion proteins were treated with 1 M KCl; samples were collected by filtration at the indicated times and photographed. The percentage of dividing cells with a normal distribution of Cdc15-GFP (with respect to the total number of cells exhibiting Cdc15 in the cell midzone) and the percentage of cells exhibiting RFP-Bgs1 in the cell midzone (with respect to the total cell number) were scored from the photographs. The experiment was performed three times, with similar results; the result of a representative experiment is shown. Lower panel, micrographs showing cfh3Δ cells bearing both the Cdc15-GFP and RFP-Bgs1 proteins that had been grown in YES supplemented with 1 M KCl for the indicated times. The arrows in the panel corresponding to the 15′ time-point mark the septal area of a cell with aberrant Cdc15 and Bgs1 rings. The arrowheads in the panel corresponding to the 45′ time-point mark a weak RFP-Bgs1 signal at the cell equator. Bar, 10 µm. B. Micrographs showing the septal area of cfh3Δ cells bearing both the Cdc15-GFP and RFP-Bgs1 proteins grown in YES supplemented with 1 M KCl for the indicated times and stained with Calcofluor White (CW); a and b, septal area of two different cells incubated in the presence of KCl for 15 minutes. doi:10.1371/journal.pone.0042726.g006
Figure
Figure S1. Localization of proteins involved in different stages of cytokinesis in cells overexpressing cfh3+. For comparison, the distribution of the different proteins in the WT strain is shown in the left-hand panels of each set of pictures. (A) Cell wall staining with Calcofluor (left panels) and actin staining with rhodamine-phalloidin (right panels). (B-F) For each set of micrographs the panel on the left shows nuclear and cell wall staining with Hoechst 33258, and the panel on the right shows the GFP fluorescence signal. (B) Distribution of the myosin light chain Cdc4p. The asterisk marks a cell in which the Cdc4 protein can be observed in the midzone after the septum has been synthesized; the dot marks a cell in which a new ring has been assembled close to a previous ring that has not contracted completely, and the arrow points to an abnormal distribution of Cdc4p at the cell cortex. (C) Distribution of the PCH protein Cdc15p. The arrows point to an abnormal distribution of Cdc15p at the cell cortex; the dot marks a cell in which a second ring has been assembled in the body of a cell that has not undergone cell separation, and the asterisk marks an asymmetric ring. (D) Distribution of the chitin synthase-like Chs2p. The arrows point to the position where there should be a Chs2p ring and the asterisk marks an asymmetric ring.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2008-01-23T00:00:00.000
|
14892841
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ccforum.biomedcentral.com/track/pdf/10.1186/cc6220",
"pdf_hash": "b87eec000b585d2c41024d81356ea9f0860694fb",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:934",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "b81fc490187eb00260a4f5c78ed7a4777b955060",
"year": 2008
}
|
pes2o/s2orc
|
Are platelets a 'forgotten' source of sepsis-induced myocardial depressing factor(s)?
The mechanism of sepsis-induced cardiac failure was initially thought to be related to the presence of 'myocardial depressant' substances that directly alter heart function. Exosomes released by platelets and identified in the plasma are suggested to, at least partially, explain myocardial depression in sepsis. This hypothesis needs to be evaluated by clinical studies.
Sepsis-induced cardiac dysfunction has been known for many years but the mechanism appears to be complex, including both 'intrinsic' cardiomyopathy and direct and/or indirect effects of circulating depressing factors. Among these factors, many cytokines have been suggested to play a role. In the previous issue of Critical Care, exosomes released by platelets were also suggested to play a role [1].
The first clue to sepsis-induced cardiac dysfunction in patients with septic shock came from Parker and colleagues' study in 1984 [2]. Using simultaneous radionuclide cardiac imaging and thermodilution cardiac output studies in patients with septic shock, they showed a 'paradox': all patients had a high cardiac output and a maintained stroke volume index associated with a depressed left ventricular ejection fraction < 0.45. Interestingly, survivors had a left ventricular ejection fraction that remained low for 4 days and then rose to normal values within 7-10 days [2]. These data, reflecting left ventricular but also right ventricular dysfunction, were confirmed by further studies [3,4].
It is now agreed that systolic function deteriorates in the early phase of septic shock in humans, as confirmed by echocardiographic studies. The question of left ventricular diastolic dysfunction in septic shock remains less clearly defined. Reduced compliance manifested as reduced rapidity of ventricular filling has been described in patients with septic shock. Using left ventricular pressure-volume loops, we recently confirmed a reduced rate of left ventricular relaxation and decreased compliance in lipopolysaccharide-treated rabbits. Both alterations can be restored, at least partially, by levosimendan but not by milrinone or dobutamine [5].
In the 1970s and 1980s, the mechanism of sepsis-induced cardiac failure was thought to be the presence of 'myocardial depressant' substances that directly alter heart function [6]. Parrillo and colleagues suggested the existence of 'circulating myocardial depressant factor(s)' in humans by showing that serum obtained during the initial phase of septic shock decreased both the amplitude and the velocity of shortening of cardiomyocytes from newborn rats. Although cytokines such as TNFα and IL-1β have been suggested to be those 'circulating myocardial depressant factor(s)' and might explain myocardial depressant activity in the first 2 days of sepsis, they can hardly explain the delayed depressant effect on heart contractility observed 7-10 days later, since TNFα and IL-1β plasma levels return to normal values within 48 hours of sepsis onset.
In the study published in the current issue of the journal, Azevedo and coworkers suggest that exosomes released by platelets and identified in the plasma might explain myocardial depression in sepsis [1]. Although these results should be confirmed by different groups in different settings, this paper opens our eyes to a new concept: platelets may release, over days, exosomes that induce and maintain alterations of heart function in septic patients. It is interesting that the duration of myocardial depression corresponds to the 10-day lifespan of platelets. Is this by chance, or do the platelets present at
the time of sepsis insult keep a footprint of the first injury for the remaining days of their life?
Exosomes might act via free radical release [1]. Nitric oxide, produced mainly by inducible nitric oxide synthase 2, is involved in vascular dysfunction in both animals and humans [7]. Nitric oxide also plays a crucial role in the development of 'intrinsic' septic cardiomyopathy in many ways, including changes in contraction, protein nitration and an alteration in mitochondrial respiration [8]. In septic patients, nitric oxide produced in large amounts may interact with the superoxide anion to produce peroxynitrite. As suggested by our model of muscle dysfunction in septic patients, peroxynitrite, rather than nitric oxide per se, decreases muscle contractility [9]. Of interest, we recently showed in an animal model of sepsis that other cardiovascular mediators, such as prostaglandins and endothelin, released by the cardiac endothelium, may contribute to restoring cardiac contractile performance [10]. Azevedo and coworkers suggested that platelets might also be a source of these mediators [1].
In summary, platelets might be a forgotten source of mediators that alter heart function during sepsis. Many questions are raised by Azevedo and coworkers' article [1]. Are the vessels as altered as the heart by the exosomes? Does the thrombocytopenia observed in sepsis influence the amplitude of those alterations? These questions need to be evaluated by clinical studies.
|
v3-fos-license
|
2023-01-19T21:28:43.635Z
|
2018-07-17T00:00:00.000
|
255978397
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1186/s13046-018-0785-4",
"pdf_hash": "97ab980a226a5de45f817ce10bf8491b1139bebf",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:935",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"sha1": "97ab980a226a5de45f817ce10bf8491b1139bebf",
"year": 2018
}
|
pes2o/s2orc
|
Lycorine inhibits glioblastoma multiforme growth through EGFR suppression
Lycorine has been revealed to inhibit the development of many kinds of malignant tumors, including glioblastoma multiforme (GBM). Although compelling evidence has demonstrated Lycorine's inhibition of cancers through peripheral mechanisms, in-depth mechanistic studies of Lycorine's anti-GBM effects still call for further exploration. Epidermal Growth Factor Receptor (EGFR) gene amplification and mutations are the most common oncogenic events in GBM. Targeting EGFR with small-molecule inhibitors is a rational strategy for GBM treatment. Molecular docking modeling and an in vitro EGFR kinase activity system were employed to identify the potential inhibitory effects of Lycorine on EGFR, and a Biacore assay was used to confirm the direct binding between Lycorine and the intracellular EGFR (696–1022) domain. In vitro assays were conducted to test the suppression by Lycorine of the biological behavior of GBM cells. By RNA interference, EGFR expression was reduced and cells then underwent a proliferation assay to investigate whether Lycorine's inhibition of GBM cells was EGFR-dependent or not. RT-PCR and western blotting analyses were carried out to investigate the underlying molecular mechanism by which Lycorine acted on EGFR itself and on the EGFR signaling pathway. Three different xenograft models (an U251-luc intracranially orthotopic transplantation model, an EGFR stably knockdown U251 subcutaneous xenograft model and a patient-derived xenograft model) were used to verify Lycorine's therapeutic potential against GBM in vivo. We identified the novel small natural molecule Lycorine as an inhibitor of EGFR, binding to the intracellular EGFR (696–1022) domain. Lycorine decreased GBM cell proliferation, migration and colony formation by inducing cell apoptosis in an EGFR-mediated manner. Furthermore, Lycorine inhibited xenograft tumor growth in three animal models in vivo.
In addition, Lycorine impaired the phosphorylation of EGFR and AKT, which was mechanistically associated with altered expression of a series of cell survival and death regulators and of the metastasis-related protein MMP9. Our findings show that Lycorine directly interacts with EGFR and inhibits EGFR activation. Most significantly, Lycorine displayed a satisfactory therapeutic effect in our patient-derived GBM tumor xenograft, supporting the conclusion that Lycorine may be considered a promising candidate for clinical therapy of GBM.
Background
Gliomas are the most common brain tumors in adults, accounting for about 70% of primary neoplasms of the central nervous system (CNS). Among high-grade gliomas, glioblastoma multiforme (GBM) is the most common and most aggressive of all intracranial cancers [1]. About 90% of GBMs are classified as primary and are associated with a dismal prognosis, typically appearing suddenly in patients. On one hand, such lesions affect mainly the elderly (mean age 62 years), evolve rapidly (less than 3 months) and show no clinical or histopathological evidence of precursor lesions [2]. On the other hand, secondary GBMs affect younger individuals (average age 45 years) and progress slowly from lower-grade diffuse astrocytoma. Current therapeutic strategies for GBM include surgical resection followed by radiotherapy and chemotherapy [3][4][5]. Despite such aggressive multimodal therapy, the median survival of GBM patients is still poor [6]. The high mortality rate results from the near-universal recurrence of tumors post-treatment, which occurs because infiltrating tumor cells escape initial surgery and exhibit profound resistance to irradiation and current chemotherapy treatments [7]. Given the increasing cancer-related mortality, identification of novel tractable targets for improved therapeutics and development of novel drugs that can radically cure GBM are desperately needed.
Genomic and proteomic analyses have identified a number of key oncogenic drivers of GBM tumorigenesis and therapeutic resistance, including receptor tyrosine kinases (RTKs) [8]. In particular, genomic alteration of the epidermal growth factor receptor (EGFR) is present in approximately half of all GBMs [9,10]. EGFR plays an important role in various tumors including GBM. It is the most frequently amplified gene in GBM, while its expression in normal brain tissue is either undetectable or extremely low [11]. The most common genetic aberration associated with malignant glioma is amplification of EGFR, with a frequency of about 50%. Amplifications and rearrangements of EGFR are highly indicative of high-grade gliomas, with a worse prognosis than estimated from histopathologic grading alone [12]. EGFR activation leads to autophosphorylation of several key tyrosine residues, triggering intracellular downstream signaling pathways including the Ras/Raf/MEK/ERK pathway, the PLCγ-PKC pathway and the PI3K/AKT pathway, resulting in cell proliferation, motility and survival [13]. Within this large proportion of EGFR genomic alterations, approximately 20-40% harbor the EGFR variant III (EGFRvIII) mutant, which carries a deletion of exons 2-7 in the extracellular ligand-binding domain [14,15]. EGFRvIII induces receptor tyrosine kinase activation in both a cell-autonomous and a non-autonomous manner; it is constitutively active in the absence of ligand and thereby activates tumor-promoting signaling pathways [16].
The fact that EGFR functions as one of the most vital factors promoting gliomas has attracted many investigations of EGFR inhibitors, aiming to promote apoptosis of cancer cells or to increase tumor sensitivity to possible adjuvant therapies. However, the successful application of EGFR-targeted therapy for the treatment of GBM has proven very challenging: many GBM patients do not respond to these therapies and eventually show drug resistance and disease progression [16]. Screening and developing novel inhibitors that target both wild-type EGFR and EGFRvIII to impair the malignant biology of GBM tumor cells could therefore be therapeutically beneficial, either as single agents or in combination with other chemotherapy agents in glioma therapy.
Lycorine is a pyrrolo[de]phenanthridine ring-type alkaloid extracted from Amaryllidaceae genera and possesses various biological effects, including anti-tumor [17], antiviral [18], antimalarial [19], and anti-inflammatory [20] activities. Several studies have shown that Lycorine exhibits selective cytotoxicity against leukemia, cervical cancer, multiple myeloma, prostate cancer, hepatocellular carcinoma, bladder cancer and breast cancer [21][22][23][24]. The anti-cancer effect of Lycorine on glioblastoma, even drug-resistant glioblastoma, has also been reported [22,[25][26][27][28]. For example, Lycorine triggered apoptosis of multiple myeloma KM3 cells through activation of the mitochondrial and death receptor-mediated apoptotic pathways [29]. Lycorine also induced cell-cycle arrest in myelogenous leukemia K562 cells via HDAC inhibition [30]. Additionally, Lycorine has been reported to exhibit anti-proliferative, apoptosis-inducing and anti-invasive properties in prostate cancer, associated with the JAK-STAT signaling pathway. Lycorine promoted autophagy and apoptosis via TCRP1/Akt/mTOR axis inactivation in human hepatocellular carcinoma [31], and induced apoptosis of bladder cancer T24 cells by inhibiting phospho-Akt and activating the intrinsic apoptotic cascade [32]. Lycorine could impair human glioblastoma U373 cell migration by increasing cellular actin cytoskeleton rigidity, possibly through modulating the Rho/Rho kinase/LIM kinase/cofilin signaling pathway [22]. Furthermore, Lycorine possessed favorable in vitro anti-proliferative activity, through cytostatic rather than cytotoxic effects, against apoptosis-resistant U373 cells because of the structural features of its C-ring and C/D-ring junction, which are essential for its biological activities [28].
In addition, a structure-activity relationship (SAR) analysis of Lycorine with its intracellular targets revealed both anti-proliferative and apoptosis-inducing activities in human glioblastoma apoptosis-resistant T98G cells and in human glioblastoma apoptosis-sensitive HS683 cells. Molecular docking results showed that Lycorine's C1 and C2 hydroxyls provide a superior binding pose with pocket a, the guanosine triphosphate (GTP) binding site, of its target protein eEF1A [26].
Although accumulating evidence has demonstrated Lycorine's inhibitory effects on cancers including glioblastoma, whether through peripheral mechanisms such as the currently most accepted mode of action, inhibition of DNA and protein biosynthesis in cancer cells, or through other as-yet-unrevealed pathways, in-depth mechanistic studies of Lycorine's anti-GBM effects still call for further exploration. Research to determine Lycorine's underlying mechanisms beyond those mentioned above is warranted. A wealth of X-ray structural information on Lycorine in complex with the eukaryotic ribosome has linked it to inhibition of the elongation cycle of protein translation, altering cell proliferation and protein synthesis. Lycorine adopts a special conformation within the pocket region in the A-site of the peptidyl transferase center of ribosomes, suggesting that its dioxol-pyrroline group might be a recognition motif for binding to its target complex proteins. X-ray structure-based drug design around Lycorine may highlight general principles for its targeting and facilitate the design of new therapeutics, serving as a tool to guide future drug research and development [33]. The signals mentioned above, such as JAK, STAT, AKT and mTOR, which are involved in Lycorine's inhibition of many cancer types, are all downstream of tyrosine kinases. This prompted us to hypothesize that the underlying in-depth mechanism of Lycorine's inhibition of GBM may fundamentally involve a classical tyrosine kinase pathway, for example the EGFR signaling pathway.
In line with existing research and the X-ray structure of Lycorine, we identify Lycorine as a novel inhibitor directly targeting EGFR through molecular docking and Biacore assays, and our findings propose a fundamental in-depth mechanism of Lycorine's suppression of GBM growth. To our knowledge, an interaction of Lycorine with EGFR has not been described in previous literature. We show in this study that Lycorine inhibits the proliferation and migration of various GBM cell lines, including cells harboring wild-type EGFR amplification and EGFRvIII, and induces cell apoptosis and cell death. In vivo experiments show that intraperitoneal administration of Lycorine reduces tumor growth in a U251-luc intracranial orthotopic transplantation model, that stable EGFR knockdown abates Lycorine's treatment effect in mouse subcutaneous xenografts, and that in a patient-derived xenograft model Lycorine exhibits impressive efficacy with no obvious toxicity. Lycorine inhibits the activation of EGFR signaling and multiple EGFR downstream targets, such as AKT, ERK, mTOR, cyclin D1, Bcl-2, Bcl-xL, and matrix metalloproteinase 9 (MMP9). In conclusion, our findings suggest that Lycorine is a new small molecule targeting EGFR and thus may be a potential therapeutic for combating GBM.
Molecular docking modeling assay
The X-ray crystal structure of EGFR was obtained from the Protein Data Bank (PDB ID: 5FED, EGFR kinase domain in complex with a covalent aminobenzimidazole inhibitor; http://www.rcsb.org/). The structures of the ligands were built and energy-minimized using the ChemOffice software package (Cambridge). The freely available AutoDock Toolkit, developed by the Olson lab at the Scripps Research Institute, was used for the docking experiments. All water molecules were removed beforehand, so the docking was performed under non-aqueous conditions. The conformation of the primary ligand bound in the binding pocket was chosen to define the active site. The figure was prepared using PyMOL version 1.2r2.
In vitro EGFR kinase assay
The half maximal inhibitory concentration (IC 50 ) values of Lycorine and of the positive control Gefitinib against EGFR kinase activity were determined using the Promega Kinase-Glo kit (Promega, Mannheim, Germany) according to the manufacturer's protocol in the presence of 600 nM ATP. Data are presented as means and 95% confidence intervals (CIs) from three independent experiments.
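IC 50 values such as these are typically obtained by fitting a sigmoidal dose-response curve to percent-activity readings. The sketch below is illustrative only: the paper used the Kinase-Glo kit readout and its own analysis, and the concentrations and activity values here are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical % residual kinase activity at increasing inhibitor doses (nM)
conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
activity = np.array([98, 92, 80, 58, 32, 14, 5], dtype=float)

# Fit the curve; the IC50 is the concentration at half-maximal inhibition
params, _ = curve_fit(four_pl, conc, activity, p0=[0.0, 100.0, 50.0, 1.0])
bottom, top, ic50, hill = params
print(f"Estimated IC50: {ic50:.1f} nM")
```

Repeating the fit on each of three independent experiments and reporting the mean with a 95% CI would mirror the analysis described above.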
Biacore assay for surface plasmon resonance (SPR) analysis
First, the human EGFR (696-1022) domain recombinant fusion protein was expressed in Escherichia coli (E. coli) BL21 (DE3). In detail, BL21 (DE3) was transformed with the pGEX4T-1-EGFR (696-1022) plasmid, which was constructed by molecular cloning in our laboratory. When the OD value reached about 0.6, the E. coli culture was induced to express the recombinant fusion protein by adding 0.5 mM isopropyl β-D-1-thiogalactopyranoside (IPTG). The soluble protein was obtained by sonication and centrifugation, incubated with Glutathione-Sepharose beads (GE Healthcare), and eluted with glutathione. The fusion protein was further concentrated with an ultrafiltration centrifuge tube and its concentration was determined. The SPR analysis was then conducted on a Biacore T200 instrument (GE Healthcare) with a CM5 sensor chip. To capture GST-tagged EGFR (696-1022), GST antibody was immobilized in parallel flow channels of the CM5 sensor chip. To test the interaction between Lycorine and EGFR, a series of Lycorine concentrations was injected into the flow system. Experiments were conducted in PBS buffer with a dissociation time of 60 s. Since Lycorine was dissolved in PBS with 5% DMSO, a solvent correction assay was performed to adjust the results.
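The equilibrium dissociation constant from a steady-state SPR experiment like this one can be estimated by fitting the one-site binding isotherm Req = Rmax·C/(KD + C) to the plateau responses at each analyte concentration. A minimal sketch, with invented RU values (the Biacore evaluation software performs this fit in practice; nothing below comes from the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(conc, rmax, kd):
    """Steady-state SPR response for 1:1 binding: Req = Rmax * C / (KD + C)."""
    return rmax * conc / (kd + conc)

# Hypothetical steady-state responses (RU) at analyte concentrations (uM)
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
ru = np.array([4.8, 8.7, 14.3, 23.2, 29.5, 33.8])

# Fit Rmax and KD; KD is the concentration giving half-maximal response
(rmax, kd), _ = curve_fit(one_site, conc, ru, p0=[40.0, 4.0])
print(f"Estimated KD: {kd:.2f} uM")
```

With real data, solvent-corrected and reference-subtracted sensorgram plateaus would replace the `ru` array.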
Western blotting analysis
To detect the effects of long-term Lycorine treatment on the expression of EGFR and on the phosphorylation of EGFR and its downstream signaling pathways, U251 cells were pretreated with 100 ng/mL human recombinant EGF protein (Thermo Fisher Scientific, PHG0311) for 6 h, then treated with Lycorine at the indicated concentrations for another 24 h. To detect the effects of short-term Lycorine treatment on EGFR expression and phosphorylation, U251 cells were pretreated with 100 ng/mL human recombinant EGF for the indicated time course (0, 15, 30, 45 and 60 min), then treated with 25 μM Lycorine for another 1 h. To show that Lycorine inhibits the EGF-dependent activation of EGFR kinase phosphorylation, U251 cells were first pretreated with or without 25 μM Lycorine for 1 h to allow Lycorine to enter the cells, followed by 100 ng/mL EGF treatment for 0, 15, 30, 45 and 60 min (Lycorine was maintained during the EGF treatment), and EGF-dependent EGFR phosphorylation was measured. To detect the extent of EGFR knockdown, U251 parental, shControl and shEGFR cells were cultured in 6-well plates and whole-cell lysate proteins were extracted and subjected to western blotting analysis. To detect in vivo protein levels in xenografts, the xenograft tissues were ground in liquid nitrogen. Cell and tissue samples were then lysed in RIPA buffer. Protein concentration was determined using a bicinchoninic acid assay (Thermo Scientific). Protein samples were run on 8 to 12% SDS-PAGE gels and transferred to polyvinylidene difluoride membranes (Gibco) as detailed before [34,35]. The membranes were incubated overnight with specific antibodies. The signals were visualized via the Odyssey western blotting detection system.
SRB cell viability assay
SRB cell viability assays were performed by staining with sulforhodamine B. Briefly, 5000 cells per well were seeded in 96-well plates as detailed before [37,38]. After 24 h, cells were exposed to different concentrations of Lycorine for 48 h. Cells were fixed with 10% trichloroacetic acid for 1 h at 4°C, washed five times with flowing water, and air-dried, then stained with 50 μL 0.4% (w/v) SRB for 20 min at room temperature, washed five times with 1% acetic acid, and air-dried. 100 μL of 10 mM Tris was added per well, and absorbance was measured at 515 nm. For the viability of the EGFR RNA-interference stable cells, U251 parental, shControl and shEGFR cells were cultured in 96-well plates for the indicated days (0, 1, 3, 5 and 7) without Lycorine treatment, and their cell viabilities were analyzed.
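Absorbance readings from an SRB assay are usually converted to percent viability by subtracting a blank and normalizing against untreated control wells. A small illustrative sketch; the readings are hypothetical and the exact normalization used in the paper is not stated.

```python
import numpy as np

def percent_viability(a_treated, a_control, a_blank):
    """Normalize SRB absorbance (A515) to percent viability vs. untreated control."""
    return (np.asarray(a_treated) - a_blank) / (a_control - a_blank) * 100.0

# Hypothetical A515 readings: blank well, mean of untreated wells, treated wells
a_blank = 0.05
a_control = 1.25
treated = [0.95, 0.65, 0.35]

print(percent_viability(treated, a_control, a_blank))
```

Plotting these normalized values against drug concentration gives the dose-response curves shown in the Results.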
Migration assay
Migration assays were performed in Transwell/Boyden chambers (8 μm; BD Biosciences). Serum-starved U251 cells (5 × 10^4 cells) in 100 μL medium with 0.5% FBS were pretreated with Lycorine (0 μM to 10 μM) for 30 min. Cells were then seeded in the upper chamber of the Transwell and allowed to migrate toward the lower chamber containing 600 μL medium. After 5 to 7 h of incubation, non-migrated cells were removed with cotton swabs, and migrated cells were fixed with cold 3.7% paraformaldehyde and stained with 0.1% crystal violet. Images were taken with an inverted microscope (Olympus; magnification, × 100), and migrated cells in 4 random fields were quantified by manual counting.
Colony formation assay
Cells were trypsinized, seeded at 2000 per well in 6-well plates and allowed to attach overnight, then exposed to different concentrations of Lycorine for 7 days. After being fixed with 4% paraformaldehyde for 20 min, cells were stained with 0.2% crystal violet as detailed before [39]. The morphology of cell colonies was recorded by photo imaging, and the number and diameters of colonies were calculated and analyzed as the ratio of treated samples to the untreated sample.
Construction of stable EGFR knockdown cell line
A specific EGFR shRNA lentiviral particle containing an EGFR gene interference sequence was purchased from Santa Cruz Biotechnology (sc-108,050-SH). The interference sequence was linked into the pLL3.7 lentiviral expression vector and co-transfected into 293T cells along with the packaging plasmids (pGag/Pol, pRev and pVSV-G) using Lipofectamine 2000 (Invitrogen). The titer and infection efficiency were determined by observing GFP expression under fluorescence microscopy. U251 cells were infected with lentivirus at an appropriate multiplicity of infection, and after several days of puromycin selection the stable knockdown cells were screened out and labeled shEGFR. The empty plasmid containing control shRNA was constructed in parallel and labeled shControl. These two U251 stable cell lines were employed for the subsequent in vitro cell proliferation assay and in vivo subcutaneous xenograft assay.
U251-luciferase cell orthotopic transplantation xenograft model
A U251-luc intracranial orthotopic transplantation model was used to verify Lycorine's therapeutic potential against GBM in vivo. BALB/c nude mice were anesthetized and fixed in a stereotactic apparatus; a burr hole was drilled 2 mm lateral and 1 mm anterior to the bregma to a depth of 3.25 mm, and 5 × 10^5 U251-luc cells in 10 μL PBS were implanted. Seven days later, based on photon flux indexes detected by a Xenogen IVIS 2000 Luminal Imager (PerkinElmer, Waltham, MA) with Living Image software (PerkinElmer), all tumor-bearing mice were randomly divided into three groups (n = 10 per group); luminal photos were taken and photon flux indexes, which represent the orthotopic tumor sizes, were recorded every 10 days. Lycorine (10 mg/kg/day or 20 mg/kg/day per mouse) was injected intraperitoneally every day. The control group was treated with DMSO. Forty days later, mice were sacrificed, tumors in the brain were removed and bioluminescence images were recorded. The growth rate curve of the tumor xenografts was evaluated from the photon flux indexes. GBM tumor xenografts were fixed and prepared for immunohistochemistry.
U251 shEGFR subcutaneous xenograft model
The U251 shEGFR stable cell line was successfully constructed as mentioned above. To test the difference in growth rate between U251 shControl and shEGFR cells in vivo without Lycorine treatment, 7 × 10^6 cells per mouse were inoculated into BALB/c nude mice on the right back side for the indicated time. The day of cell inoculation was defined as day 0 and tumors were allowed to grow for 32 days. Tumor-bearing nude mice and their xenografts were photographed at intervals of 8 days, and the growth curves of U251 shControl and shEGFR cells from day 0 to day 32 were analyzed from tumor volumes calculated every 4 days. To determine whether Lycorine's in vivo effect on GBM growth was dependent on EGFR expression, we repeated the subcutaneous xenograft assay in nude mice with Lycorine administration. U251 shControl cells and U251 shEGFR cells (7 × 10^6 cells per mouse) were separately inoculated subcutaneously on the right back sides of the mice in the corresponding groups. When the tumors reached about 100 mm^3, 12 days after cell inoculation, mice were intraperitoneally administered Lycorine every day at a dose of 20 mg/kg/day, or DMSO as solvent control. The first day of Lycorine administration was defined as day 0. Mice were observed continually and their tumor weights and volumes were calculated until they were sacrificed on day 35, i.e., 47 days after the beginning of cell inoculation.
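The paper does not state the formula used to convert caliper measurements into tumor volumes; a commonly used ellipsoid approximation for subcutaneous xenografts is V = (length × width²)/2, sketched here purely for illustration.

```python
def tumor_volume_mm3(length_mm, width_mm):
    """Common ellipsoid approximation for caliper-measured subcutaneous
    xenografts: V = (length * width^2) / 2. Formula assumed, not stated
    in the paper."""
    return length_mm * width_mm ** 2 / 2.0

# e.g. an 8 mm x 5 mm tumor reaches the ~100 mm^3 threshold used to start dosing
print(tumor_volume_mm3(8.0, 5.0))  # 100.0
```

Applied every 4 days over the 32-day observation window, such volumes would yield the growth curves described above.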
Patient-derived xenograft model
This assay was performed as described previously with few modifications [40]. Briefly, patient-derived cells were injected subcutaneously on the right back sides of the mice (5 × 10^6 cells per mouse). After the tumors reached about 100 mm^3, we removed them from the mice and dissected them equally into 30 small pieces. These 30 tumor pieces were then randomly transplanted subcutaneously into the right back sides of nude mice anaesthetized with Afferden. After the tumors reached about 100 mm^3, mice were divided into 3 groups and received daily intraperitoneal injections of either DMSO or Lycorine (10 mg/kg/day or 20 mg/kg/day per mouse) for 14 days. During the administration of Lycorine, the body weight and tumor size of the mice were monitored every 2 days. Mice were observed continually and their tumor weights and volumes were calculated until they were sacrificed.
Immunohistochemistry staining
Intracranial tumors dissected from the U251-luciferase orthotopic transplantation xenograft model were excised, fixed and embedded in paraffin. To investigate the effect of Lycorine on tumor cell proliferation and apoptosis in vivo, sections (4 μm) were stained for the proliferation marker Ki-67, GFAP, cleaved caspase-3, p-EGFR and MMP9. Images were obtained with a Leica microscope (Leica, DM4000b). The results were analyzed using Image-Pro Plus 6.0 software.
Statistical analysis
Results were statistically analyzed using Student's t test with GraphPad Prism version 4.02 for Windows. All experiments were repeated at least three times. A value of P < 0.05 was considered statistically significant.
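An unpaired Student's t test with a 0.05 significance threshold, as applied here via GraphPad Prism, can be reproduced in Python. The triplicate values below are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate measurements: untreated control vs. Lycorine-treated
control = np.array([100.0, 97.5, 102.1])
treated = np.array([61.2, 58.7, 64.0])

# Two-sample (unpaired) Student's t test
t_stat, p_value = stats.ttest_ind(control, treated)
significant = p_value < 0.05  # the threshold used in the paper
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {significant}")
```

Each pairwise comparison shown with *, **, or *** in the figures corresponds to such a test at P < 0.05, P < 0.01, or P < 0.001.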
Identifying Lycorine as a novel potential EGFR inhibitor
To determine whether Lycorine is a novel potential EGFR inhibitor for cancer therapy, we downloaded the X-ray crystal structure of the EGFR kinase domain from the Protein Data Bank (PDB ID: 5FED, EGFR kinase domain in complex with a covalent aminobenzimidazole inhibitor) and employed the AutoDock Toolkit (ADT) software package to perform the molecular docking assay. Fig. 1a shows the X-ray crystal structure of the EGFR kinase domain, which contains an ATP binding region. Fig. 1b shows that Lycorine (green) inserts into the EGFR pocket domain (amaranth), located at the kinase active site within the ATP binding region, and thus may disrupt the kinase activity of EGFR. Fig. 1d shows the 10 binding conformations (one green, the other nine amaranth) acquired from the flexible docking model between Lycorine and EGFR, and Fig. 1e lists each binding free energy with its root-mean-square deviation (RMSD). The first conformation (Run 1) ranked as the most accurate and reasonable binding model because it had the lowest binding energy and the minimal cluster RMSD value (Fig. 1e). Lycorine was found to bind directly to the EGFR (696-1022) kinase active site in the pocket domain via hydrogen bonds to Asn842 (N842), Lys745 (K745) and Thr854 (T854) in the docking structure (Fig. 1c). The EGFR (696-1022) domain retains Lycorine in its ATP binding pocket through three interactions: the hydroxyl of the T854 side chain hydrogen-bonds to the two hydroxyls of Lycorine's C-ring; the carbonyl of the N842 side chain connects to the hydroxyl of Lycorine's C-ring; and the NH3+ of the K745 side chain connects to the oxygen atom of Lycorine's dioxolane. All these results strongly suggest that Lycorine may function as an EGFR inhibitor, competitively inhibiting ATP binding to EGFR and thus impeding autophosphorylation of EGFR and its downstream signaling kinases.
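Selecting the best-ranked pose as described, i.e. the conformation with the lowest binding free energy and smallest cluster RMSD among the 10 runs, amounts to a simple sort. The energies and RMSDs below are hypothetical stand-ins for the values in Fig. 1e.

```python
# Hypothetical (name, binding energy in kcal/mol, cluster RMSD in Angstroms)
# for 10 docked conformations from a flexible docking run
poses = [
    ("Run 1", -7.9, 0.0), ("Run 2", -7.4, 1.8), ("Run 3", -7.1, 2.3),
    ("Run 4", -6.8, 2.9), ("Run 5", -6.6, 3.1), ("Run 6", -6.3, 3.6),
    ("Run 7", -6.1, 4.0), ("Run 8", -5.9, 4.4), ("Run 9", -5.6, 4.9),
    ("Run 10", -5.2, 5.3),
]

# Rank by binding free energy first (more negative = stronger), then by RMSD
best = min(poses, key=lambda p: (p[1], p[2]))
print(best[0])  # Run 1
```

This mirrors the criterion in the text: Run 1 wins on both the energy and the RMSD axis.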
Lycorine impairs proliferation, migration and colony formation of GBM cells
As a small natural product, Lycorine has a very simple chemical structure and low molecular weight (Fig. 2a), and thus is readily available to treat cancer cells. To investigate the anti-cancer activity of Lycorine on GBM, a typical malignant GBM cell line, U251, was subjected to the cell viability assay. Fig. 2b shows that Lycorine inhibited cell proliferation in a dose-dependent manner and dramatically reduced the number of cultured live cells, and Fig. 2c statistically demonstrates Lycorine's inhibition of U251 cell viability, with an IC 50 of about 10 μM. We also performed cell migration assays using U251 cells, which have highly malignant mobility. Lycorine significantly inhibited U251 cell migration in a dose-dependent manner (Fig. 2d); Fig. 2e shows the statistical results of Fig. 2d. As shown in Fig. 2f, Lycorine inhibited colony formation of GBM cells in a concentration-dependent manner, with a highly significant difference from the control group at 10 μM. After being seeded in 6-well plates and allowed to form colonies for 1 week, U251 cells displayed a decreasing number of colonies with increasing Lycorine concentration. Fig. 2g and h show the statistical results of each colony formation assay, as diagrams of colony numbers (Fig. 2g) and colony diameters (Fig. 2h). In short, Lycorine significantly inhibited GBM cell proliferation, migration and colony formation in a dose-dependent manner.
Lycorine exhibits cytotoxicity to GBM cells expressing wild type EGFR and EGFRvIII
The aforementioned results suggest that Lycorine inhibits the proliferation of U251 cells. We further asked whether Lycorine had selective effects among GBM cells harboring different EGFR mutations as well as on healthy normal human IMA2.1 astrocytes. Cell death induced by Lycorine was examined in 6 GBM cell lines: U87 (wild-type EGFR), LN229 (wild-type EGFR amplification), U251 (wild-type EGFR amplification), A172 (EGFRvIII mutant), Gli36vIII (EGFRvIII mutant), and GBM6 (wild-type EGFR and EGFRvIII co-existing), all of which were used in the cell viability assay. The expression level of EGFR mRNA was confirmed by RT-PCR (Fig. 3a). SRB assay results clearly showed that the half maximal inhibitory concentration of Lycorine for GBM cell proliferation was approximately 10-20 μM, whereas that for normal human IMA2.1 astrocytes was much more than 100 μM (Fig. 3b). In other words, Lycorine was more toxic to GBM cells than to normal brain cells and can thus be considered to possess useful selectivity for treating GBM. Moreover, although Lycorine inhibited the proliferation of all 6 GBM cell lines, its mode of inhibition clearly differed among them: whether for wild-type EGFR or EGFRvIII, the higher the expression level in a cell line, the greater the inhibition efficiency. For example, U251, Gli36vIII and GBM6 cells all had higher EGFR or EGFRvIII expression and were seemingly more sensitive to Lycorine; at a dose of 20 μM their cell viability was reduced to 20% of control (Fig. 3b, upper panel). The situation was different for the other 3 cell lines: U87, A172 and LN229, possibly because of their lower EGFR or EGFRvIII expression levels, were not as susceptible to Lycorine as the former 3 GBM cell lines; at the same dose of 20 μM their cell viability was reduced only to about 40%, much higher than 20% (Fig. 3b, lower panel). In conclusion, the results in Fig. 3b suggest that the inhibitory effect of Lycorine on GBM cells correlates with the expression level of EGFR, whether wild-type EGFR, EGFRvIII, or other EGFR mutants; Lycorine can thus be considered a candidate to overcome different EGFR mutation statuses in treating GBM.
Lycorine suppresses EGF-induced EGFR signaling pathway
According to the aforementioned molecular docking results (Fig. 1), the EGFR kinase assay was carried out in the presence of Lycorine or of the well-known EGFR protein kinase inhibitor Gefitinib. As shown in Fig. 4a, the kinase activity inhibition IC 50 for Gefitinib was nearly 21 nM, consistent with a previous report [41]. The inhibition IC 50 for Lycorine was about 68 nM (Fig. 4a), suggesting that Lycorine directly inhibited the kinase activity of EGFR at a concentration comparable to a classical EGFR kinase inhibitor. We then treated U251 cells with Lycorine to induce apoptosis and conducted western blot analysis. Clear cleavage of PARP and caspase-3 occurred, suggesting that Lycorine suppressed GBM cell growth through its pro-apoptotic effects (Fig. 4b). Next, we checked the effect of Lycorine on EGF-induced EGFR phosphorylation and EGFR protein levels. After EGF induction for 6 h followed by Lycorine treatment for another 24 h, Lycorine reduced EGF-induced EGFR phosphorylation in a dose-dependent manner; Lycorine at 25 μM fully blocked EGFR phosphorylation. Meanwhile, the levels of p-AKT and p-ERK decreased in the same manner as p-EGFR, and total EGFR protein also declined. Accordingly, other oncogenic proteins such as p-mTOR, Bcl-2, Cyclin D1 and MMP9 were all down-regulated by Lycorine, while tumor suppressors including p21 and p27 were up-regulated (Fig. 4c).
Generally, EGF induces EGFR phosphorylation with fast kinetics, so that EGFR phosphorylation peaks within about 1 h and then decreases because of the activity of tyrosine phosphatases and the down-regulation of EGFR [42]. To more accurately dissect the mode of inhibition of Lycorine on EGFR and EGFR phosphorylation, we exposed cells to Lycorine only for a short period (25 μM, 1 h) and then followed the kinetics of EGFR phosphorylation in the presence or absence of EGF at the indicated time points (0, 15, 30, 45 and 60 min). The results are shown in Fig. 4d and e. After EGF induction, the p-EGFR level was significantly up-regulated within 30 min and then reduced within 60 min (Fig. 4d, No Treatment panel). Treatment with 25 μM Lycorine for this short period further accelerated the decline of p-EGFR, while the total expression level of EGFR showed no obvious change (Fig. 4d, Lycorine panel). The statistical results in Fig. 4e illustrate the clear difference between EGFR and p-EGFR under Lycorine treatment. Briefly, Lycorine decreases EGFR phosphorylation after short treatment times, whereas it decreases both EGFR and p-EGFR after long treatment times (Fig. 4c, d and e). All these results confirm that Lycorine inhibits EGFR and its downstream signaling pathways.
Lycorine binds to EGFR, inhibits EGF-activated EGFR phosphorylation and suppresses GBM cell proliferation in an EGFR-dependent manner
To further elucidate the mechanism of Lycorine's inhibition of EGFR, we purified the GST-tagged EGFR (696-1022) region, in line with our molecular docking result, and subjected it to the Biacore platform. The Biacore assay was used to evaluate the binding between Lycorine and EGFR by surface plasmon resonance (SPR), and the result verified that a complex indeed forms between Lycorine and EGFR: Lycorine interacted directly with EGFR (696-1022). The RU values evaluating Lycorine's binding to immobilized EGFR showed a dose-dependent manner; Lycorine at 10 μM exhibited significant positive signals, while Lycorine at 0 μM produced almost no reaction. The determined equilibrium dissociation constant (KD) between Lycorine and EGFR (696-1022) was about 3.6 μM (KD = 3.6 × 10^-6 M) (Fig. 5a). Considering that Lycorine binds directly to EGFR (696-1022) and competitively occupies the ATP binding pocket of the intracellular EGFR region, we speculated that Lycorine may thoroughly block EGFR autophosphorylation within this tyrosine kinase domain. Hence we treated cells with Lycorine first and then stimulated them with EGF. The results showed that even when cells were stimulated by EGF, the amount of p-EGFR remained very faint in the Lycorine-pretreated groups (Fig.
5b and c, Lycorine panel) while in the No Treatment group, p-EGFR could be significantly induced to a high expression level with a normal kinetic time course (0, 15, 30, 45 and 60 min) ( Fig. 5b and c, No Treatment). This results can be explained that when Lycorine enters the cytoplasm, binds with intracellular EGFR (696-1022) domain and occupies the ATP binding pocket of intracellular EGFR and blocks the essential binding process of ATP and EGFR for EGFR's auto-activated phosphorylation. Thus the amount of p-EGFR in the No Treatment group are much higher than that in the Lycorine pretreated group. In conclusion, our findings prove that Lycorine inhibits EGF-activated EGFR kinase activity.
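As a rough numerical illustration of the dose dependence seen in the SPR data, a steady-state 1:1 Langmuir model (an assumption for illustration only; the actual Biacore fit may use a kinetic model) predicts the equilibrium fraction of immobilized EGFR (696-1022) occupied by Lycorine from the reported KD of 3.6 μM:

```python
# Hypothetical sketch, not the authors' analysis: equilibrium fractional
# occupancy of immobilized EGFR (696-1022) under a 1:1 Langmuir model,
# using the reported KD = 3.6 uM.

def fraction_bound(conc_uM: float, kd_uM: float = 3.6) -> float:
    """Fraction of receptor occupied at equilibrium for a 1:1 interaction."""
    return conc_uM / (kd_uM + conc_uM)

for c in (0.0, 1.0, 3.6, 10.0):
    print(f"{c:5.1f} uM Lycorine -> fraction bound {fraction_bound(c):.2f}")
```

Under this model, 0 μM gives no occupancy and 10 μM gives roughly 74% occupancy, consistent with the observation that 10 μM produced significant signals while 0 μM produced almost none.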
Lycorine's interaction with and inhibition of EGFR prompted us to ask whether Lycorine's antitumor activity depends on EGFR expression. We therefore measured whether abolishing EGFR expression by shRNA altered Lycorine's toxicity toward GBM U251 cells. The extent of EGFR knockdown is illustrated in Fig. 5d: compared with the parental and shControl groups, the shEGFR group reduced its EGFR expression by nearly 70% (Fig. 5e). The in vitro growth rates of U251 shControl and shEGFR cells were also measured (Fig. 5f). Long-term knockdown of EGFR indeed decreased GBM cell viability at day 7, because EGFR is an oncoprotein driving cancer cell proliferation. However, from day 0 to day 5, no obvious difference was observed between shControl and shEGFR, meaning that RNA interference against EGFR needs a long time to exhibit its inhibition of cell proliferation. To avoid the influence of EGFR knockdown itself on the Lycorine results, the subsequent cell viability assay treated GBM cells with Lycorine for a comparably short 48 h, before the day-5 time point, so as to exclude the growth reduction caused by long-term EGFR knockdown; the results are shown in Fig. 5g. Short-term EGFR knockdown ablated the ability of Lycorine treatment to hinder cell proliferation (Fig. 5g).

(Displaced figure caption:) For detecting the phosphorylation of EGFR and its downstream signaling pathways, U251 cells were pretreated with 100 ng/mL human recombinant EGF for 6 h, then treated with Lycorine at the indicated concentrations for another 24 h; cell lysates were subjected to Western blotting analysis with the indicated antibodies. d U251 cells were exposed to 100 ng/mL human recombinant EGF for the indicated time points (0, 15, 30, 45 and 60 min), then followed with 25 μM Lycorine for 1 h or with no treatment. The expression of EGFR and p-EGFR was detected by western blotting. e Statistical result of Fig. 4d.
In the parental and shControl groups, EGFR expression was normal, so Lycorine showed significant inhibition of cell proliferation (Fig. 5g, blue and orange columns). However, when EGFR was knocked down, Lycorine showed no obvious inhibition even at the high dose of 20 μM, suggesting that Lycorine's inhibition of cell proliferation is dependent on EGFR expression in vitro (Fig. 5g, gray columns). All these findings suggest that EGFR may be a critical and direct target of Lycorine in GBM cells.

(Displaced caption, Fig. 5:) Lycorine binds to EGFR, inhibits EGF-activated EGFR phosphorylation and suppresses GBM cell proliferation in an EGFR-dependent manner. a Biacore assay showing the SPR analysis of binding between Lycorine and the EGFR (696-1022) domain. The purified EGFR (696-1022) protein was immobilized on an activated CM5 sensor chip; Lycorine was then flowed across the chip. b U251 cells were pretreated with or without 25 μM Lycorine for 1 h, followed by 100 ng/mL EGF treatment for 0, 15, 30, 45 and 60 min (Lycorine was maintained during the EGF treatment time course), and EGF-dependent EGFR phosphorylation was measured by western blotting. c Statistical result of Fig. 5b. d After construction of stable U251 shEGFR cells, the knockdown efficiency of EGFR protein was detected by Western blotting in parental (normal U251), shControl and shEGFR cells, respectively. e Statistical result of Fig. 5d. f Parental, shControl and shEGFR U251 cells were seeded in 96-well plates for the indicated days and cell viability was assessed by SRB assay. g Statistical results for cell proliferation when EGFR was interfered with by shRNA: parental, shControl and shEGFR U251 cells were treated with the indicated concentrations of Lycorine (0 μM, 10 μM and 20 μM) for 48 h and cell viability was detected by SRB assay. All data are represented as mean ± S.D. from triplicate wells. *, p < 0.05; **, p < 0.01, compared to control.
Lycorine inhibits U251-luc intracranial orthotopic tumor growth in vivo
The orthotopic transplantation tumor model is widely used to mimic the clinical course of cancer progression in drug research. To mimic the human disease as closely as possible, we evaluated Lycorine's chemotherapeutic potential in a U251 orthotopic tumor growth model in vivo. Briefly, a luciferase-expressing U251 cell line (U251-luc) was established by stable transfection with luciferase-expressing plasmids. After stereotactic injection into the intracranial right frontal lobe of adult nude mice, U251-luc cells exhibited bioluminescence from day 0 to day 40, which was traced with the IVIS 2000 Luminal Imager system using photon flux indexes as a surrogate for tumor size. Mice were divided into 3 groups (n = 10 per group) and treated with Lycorine at 10 mg/kg/day, 20 mg/kg/day, or vehicle control. Tumors in the whole body of each mouse were imaged by IVIS every 10 days to monitor local tumor growth and tumor cell dissemination. As shown in Fig. 6, Lycorine evidently impaired the growth of U251-luc orthotopic xenografts in tumor-bearing mice. In the control group, bioluminescence was detected throughout the cranial cavity (Fig. 6a) and increased remarkably over time (Fig. 6b). Treatment with Lycorine significantly reduced the photon flux indexes (Fig. 6b), and administration of 20 mg/kg/day almost completely blocked tumor growth. The average normalized photon flux of the 10 mg/kg/day and 20 mg/kg/day Lycorine-treated groups was (2.14 ± 0.51) × 10^6 p/sec/cm²/sr and (13.57 ± 1.28) × 10^6 p/sec/cm²/sr, respectively, whereas that of the control group was (106.03 ± 3.43) × 10^6 p/sec/cm²/sr (Fig. 6). At day 40 the nude mice were sacrificed and the orthotopic xenografts were dissected for molecular analyses.
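For illustration, the reported group summaries (mean ± S.D., n = 10 mice per group) are enough to recompute an approximate t statistic for control versus the 20 mg/kg/day group. The sketch below uses Welch's unequal-variance formula, which may differ from the authors' exact two-sided Student's t test:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and approximate degrees of freedom
    computed from group summary statistics (mean, S.D., n)."""
    se2 = s1 ** 2 / n1 + s2 ** 2 / n2          # squared standard error of the difference
    t = (m1 - m2) / math.sqrt(se2)
    df = se2 ** 2 / ((s1 ** 2 / n1) ** 2 / (n1 - 1) + (s2 ** 2 / n2) ** 2 / (n2 - 1))
    return t, df

# Reported normalized photon flux (units of 1e6 p/sec/cm2/sr), n = 10 per group:
# control 106.03 +/- 3.43 vs. 20 mg/kg/day Lycorine 13.57 +/- 1.28
t, df = welch_t(106.03, 3.43, 10, 13.57, 1.28, 10)
print(f"t = {t:.1f}, df = {df:.1f}")
```

The resulting t statistic is very large, in line with the highly significant group differences reported in Fig. 6b.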
Exploring the signaling pathways after Lycorine administration in vivo by RT-PCR, western blotting and immunohistochemistry, we found that EGF and EGFR expression decreased at both the mRNA (Fig. 6c) and protein (Fig. 6d) levels, and that p-EGFR, Bcl-xL and Ki-67 also decreased compared with the control group. GFAP, an intermediate filament protein considered the best astroglial marker, was likewise reduced after Lycorine treatment, whereas the apoptotic marker cleaved caspase 3 was up-regulated (Fig. 6e). Together, these in vivo findings agree with our in vitro results and indicate that Lycorine therapeutically suppresses GBM tumor growth in the intracranial orthotopic xenograft model by suppressing the EGFR signaling pathway.
Lycorine's inhibition of GBM growth is dependent on EGFR in vivo
Complementing Lycorine's in vitro effects on shEGFR cells shown in Fig. 5, we performed an in vivo subcutaneous xenograft assay to assess how EGFR disturbance affects Lycorine's inhibition of GBM growth. First, we tested the growth rates of U251 shControl and shEGFR xenografts in vivo without Lycorine treatment. Photos in Fig. 7a show the tumor-bearing nude mice and their xenografts at the indicated days, and the growth curves in Fig. 7b detail the growth of U251 shControl and shEGFR tumors from day 0 to day 32 after inoculation, with final tumor volumes of 703 ± 2.19 mm³ and 512 ± 11.04 mm³, respectively. There was no significant difference between shControl and shEGFR until day 24 after inoculation, but from day 24 to day 32 knockdown of EGFR indeed reduced GBM tumor growth in vivo. All tumors reached a volume of nearly 100 mm³ at day 12, indicating that at this early stage the growth rates of shControl and shEGFR were the same; this in vivo result is consistent with the in vitro results in Fig. 5f. We therefore chose day 12 as the time point to initiate Lycorine administration and test whether abolishing EGFR expression by RNA interference alters Lycorine's toxicity on GBM xenografts in vivo. We then repeated the in vivo subcutaneous xenograft assay under Lycorine treatment. The tumor size, volume and weight of the subcutaneous xenografts are shown in Fig. 7c, d and e. When EGFR was knocked down by stable shRNA, Lycorine lost much of its inhibition of GBM growth even at the highest dose of 20 mg/kg/day, compared with the control group (Fig. 7c, d and e). It is reasonable to infer that EGFR deprivation (shEGFR group) itself reduced GBM growth relative to the control group, because EGFR acts as a growth-promoting factor in many cancer types, especially GBM.
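The paper does not state how xenograft volumes were computed. A common caliper convention, given here purely as an illustrative assumption, is the ellipsoid approximation V = (L × W²) / 2:

```python
def tumor_volume_mm3(length_mm: float, width_mm: float) -> float:
    """Ellipsoid approximation commonly used for caliper measurements.
    NOTE: this formula is an assumption -- the paper does not specify
    how its xenograft volumes were calculated."""
    return length_mm * width_mm ** 2 / 2.0

# An 11.2 x 11.2 mm lesion gives about 702 mm^3, the scale of the
# day-32 shControl volume (703 mm^3) reported above.
print(round(tumor_volume_mm3(11.2, 11.2)))
```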
However, as EGFR expression was reduced by EGFR shRNA, Lycorine's inhibition of GBM growth also declined dramatically (Fig. 7d, green curve, compared with the orange curve of the shControl group treated with 20 mg/kg/day of Lycorine; **P < 0.01). Fig. 7f verifies the EGFR expression level after Lycorine administration and shRNA interference in vivo: Lycorine down-regulated EGFR expression in vivo. Interestingly, in Fig. 7f Lycorine reduced EGFR expression even more strongly than the EGFR shRNA did. This may partially explain why the tumor volume in the shControl group treated with 20 mg/kg/day was much smaller than that in the shEGFR group (Fig. 7c, d and e). Lycorine at 20 mg/kg/day still exhibited some inhibitory effect on tumor growth even in the shEGFR group. One possible reason is that Lycorine may have pharmacological targets besides EGFR; that is, EGFR may not be its only target in vivo. Another explanation is that the EGFR shRNA used in our experiment was not efficient enough to abolish EGFR expression completely, so even the small amount of remaining EGFR might contribute to Lycorine's inhibition of tumor growth in the shEGFR xenografts. In any case, these results provide rational evidence that Lycorine's inhibition of EGFR also occurs in vivo and that this inhibition is dependent on EGFR.
Lycorine retards the growth of patient-derived GBM tumor xenografts

Finally, we examined the effect of Lycorine on a clinically derived GBM tumor. It is widely accepted that patient-derived tumor xenograft models can serve as an ideal drug-screening tool for many kinds of cancer therapy, including GBM therapy [40,43]. We employed a patient-derived GBM cell line, primarily separated from the in situ tumor of a high-grade glioma patient at Xianning Central Hospital, the first affiliated hospital of Hubei University of Science and Technology (Xianning, China), to test whether Lycorine could be clinically beneficial. First, the SRB assay was performed to identify the effects of Lycorine on this patient-derived GBM cell line. As expected, Lycorine inhibited the proliferation of this cancer cell line in a dose-dependent manner (data not shown).

(Displaced caption, Fig. 6:) Lycorine inhibits U251-luc orthotopic tumor growth in vivo. a Tumor growth in the orthotopic intracranial cavity over a 40-day period was detected by bioluminescence analysis every 10 days. b Quantitative analysis of growing cells in the brain by bioluminescence analysis every 10 days. The means and 95% confidence intervals (error bars) are presented; ***, P < 0.001; **, P < 0.01. P values were calculated using a two-sided Student's t test. p/sec/cm²/sr = photons/second/cm²/steradian. To assess the inhibitory effect of Lycorine on the EGFR signaling pathway in the U251-luc orthotopic tumor growth model, tumors were sectioned and probed with human EGF and EGFR primers (c) and anti-human p-EGFR, EGFR and PCNA antibodies (d). Human GAPDH served as the mRNA loading control for Fig. 6c; human β-actin served as the protein loading control for Fig. 6d. e U251-luc orthotopic tumor sections were processed for immunohistochemical analysis to detect human GFAP, p-EGFR, Bcl-xL, cleaved caspase 3, and Ki-67. Representative images are shown. Brown color indicates positive cells. Scale bar = 30 μm.
Then, we injected this cancer cell line into nude mice to establish a patient-derived GBM subcutaneous tumor xenograft model. Mice were divided into 3 groups (n = 10 per group) and treated with Lycorine at 10 mg/kg/day, 20 mg/kg/day, or vehicle control. At day 14, mice were sacrificed and the tumor xenograft of each mouse was dissected (Fig. 8a), and the tumor weight of each lesion was calculated. Lycorine significantly retarded the growth of tumor volume (Fig. 8b). The average tumor volume of the control group was 1621 ± 28 mm³, whereas tumor sizes in the Lycorine-treated groups were 734 ± 56 mm³ for 10 mg/kg/day and 403 ± 64 mm³ for 20 mg/kg/day, respectively. Statistical analysis showed a significant difference between the drug-treated groups and the control group (Fig. 8c); in particular, in the 20 mg/kg/day group the tumor burden of each mouse almost ceased to grow following Lycorine administration (Fig. 8c). At the same time, Lycorine at the given concentration had little toxic effect on the body weights of the Lycorine-treated mice at the curative dose (Fig. 8d), consistent with the results of our previous report [23].

(Displaced caption, Fig. 7:) Lycorine's inhibition of GBM growth is dependent on EGFR in vivo. a Photos of tumor-bearing nude mice and their xenografts at the indicated days for U251 shControl and shEGFR after inoculation. White triangles indicate tumors growing subcutaneously on the mice's right backs. b Growth curves of U251 shControl and shEGFR tumor volumes from day 0 to day 32 after inoculation. c Representative images of tumor tissue in the control, shEGFR, shControl + Lycorine 20 mg/kg/day and shEGFR + Lycorine 20 mg/kg/day groups. d The tumor volumes of the 4 groups shown as growth curves (n = 4; **, P < 0.01). e Statistical results for tumor weight in the control, shEGFR, shControl + Lycorine 20 mg/kg/day and shEGFR + Lycorine 20 mg/kg/day groups. The means and 95% confidence intervals (error bars) are presented (n = 4; **, P < 0.01). f Dissected tumor tissues were processed for protein extraction and Western blotting analysis, and EGFR expression was detected in the 4 groups.
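For illustration, the reported mean tumor volumes can be converted into tumor growth inhibition (TGI) percentages. TGI = (1 − treated/control) × 100 is a standard convention, used here as an assumption since the paper reports only raw volumes:

```python
# Tumor growth inhibition (TGI) from the reported mean volumes (mm^3).
# The TGI formula is a common convention, assumed here for illustration.
def tgi_percent(treated_mm3: float, control_mm3: float) -> float:
    return (1.0 - treated_mm3 / control_mm3) * 100.0

control = 1621.0  # mean control tumor volume reported in the text
for dose, vol in (("10 mg/kg/day", 734.0), ("20 mg/kg/day", 403.0)):
    print(f"{dose}: TGI = {tgi_percent(vol, control):.1f}%")
```

Under this convention, the reported volumes correspond to roughly 55% inhibition at 10 mg/kg/day and 75% at 20 mg/kg/day.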
Discussion
Despite advances in multimodal therapies such as surgery, radiotherapy and chemotherapy, glioblastoma remains the most aggressive primary brain malignancy, with an average post-diagnostic survival of just over 14 months [44]. Given this extremely poor outcome, improving standard chemotherapy could meaningfully extend overall survival in GBM. EGFR contributes to the differentiation, proliferation, survival, migration and invasiveness of cancer cells and increases tumor angiogenesis [2]. Aberrantly activated EGFR is implicated in a wide range of human cancers, particularly lung cancer, colorectal cancer, pancreatic cancer and glioblastoma; among these, glioblastoma has the highest rate of EGFR gene alteration. EGFR and the mutant EGFRvIII are major focal points in current concepts of targeted cancer therapy for GBM, as they are considered responsible for tumor initiation, propagation, recurrence, and chemo- and radio-resistance [45,46]. Several treatments are available [47], including monoclonal antibodies such as cetuximab, small-molecule inhibitors such as gefitinib, and even a vaccine, rindopepimut, developed for administration to EGFRvIII-positive tumors [48][49][50]. However, glioma cells treated with these agents often develop resistance mechanisms [51]. Diverse combination drug strategies are already in clinical trials, but there is still an urgent need for novel effective therapeutics.
Many challenges remain in effectively targeting EGFR-dependent GBM. Currently available EGFR inhibitors often fail to achieve adequate inhibition of EGFR in tumors due to sub-optimal brain distribution. GBMs are enriched for EGFRvIII mutations in the extracellular domain (ECD) of EGFR, which are refractory to first-generation EGFR kinase inhibitors such as gefitinib and erlotinib [51]. So far, the vast majority of combination studies have paired EGFR- or EGFRvIII-specific agents with radiation or broad alkylating reagents such as temozolomide. Although a few of these agents are already approved by the Food and Drug Administration for other cancer types, e.g., cetuximab for colorectal cancer and gefitinib and erlotinib for non-small cell lung cancer, none are yet approved for glioma treatment [52].
In the present study, we investigated the role of Lycorine in tumor growth as a possible drug candidate for GBM. Lycorine effectively down-regulated the EGFR signaling pathway at both the mRNA and protein expression levels.
Previously reported studies focused only on targeting EGFRvIII or wild-type EGFR. Here we explored whether both wtEGFR and EGFRvIII can be effectively targeted by Lycorine to treat GBM. Lycorine displayed marked cytotoxicity toward GBM cells: cell proliferation, migration, colony formation and apoptosis all responded to Lycorine. Furthermore, Lycorine impaired GBM tumor growth in three different xenograft models (a U251-luc intracranial orthotopic transplantation model, an EGFR stably knocked-down U251 subcutaneous xenograft model, and a patient-derived xenograft mouse model), in a manner dependent on overall EGFR expression. The higher the EGFR expression a cancer cell harbored, the greater the inhibition efficiency Lycorine displayed. The inhibitory effect of Lycorine on GBM cells correlated solely with the amount of EGFR expressed, regardless of whether it was wild-type EGFR, EGFRvIII, or another EGFR mutant, suggesting that Lycorine can overcome different EGFR mutation statuses in treating GBM. These findings support Lycorine administration as a promising clinical strategy to treat GBM.

(Displaced caption, Fig. 8:) Lycorine hinders the growth of patient-derived GBM tumor xenografts. a Patient-derived in situ tumor cells were injected subcutaneously into nude mice as described in Methods. After the mice were sacrificed, tumors were removed and images taken with a Nikon camera. b Quantitative analysis of the tumor weight of subcutaneous lesions after sacrifice. c Quantitative analysis of tumor volume over time, measured every 2 days. d Effect of Lycorine on mouse body weight: Lycorine did not affect the body weight of mice, recorded every 2 days. The means and 95% confidence intervals (error bars) are presented (**, P < 0.01).
Like most anticancer drugs, Lycorine is probably more efficient against rapidly cycling cells than against slowly dividing ones, and EGFR functions as an important mitogenic driver in GBM [53,54]. EGFR down-regulation by shRNA indeed reduced GBM cell growth, so the possibility that the decreased toxicity of Lycorine on U251 shEGFR cells is merely due to slower cell cycling could not initially be excluded, making it genuinely difficult to separate EGFR-dependent from EGFR-independent effects of Lycorine on GBM growth. Our current work resolves this confusion. The reduced toxicity of Lycorine on GBM cells in which EGFR expression was decreased by stable RNA interference (U251 shEGFR) is suggestive of a role for EGFR in Lycorine's action. Through in vitro and in vivo EGFR knockdown, we measured the growth rates of U251 shControl and U251 shEGFR to distinguish whether growth inhibition was mainly caused by Lycorine treatment or by EGFR down-regulation. Both our in vitro and in vivo experiments deliberately avoided this complexity by using a suitably short Lycorine treatment window, thereby excluding the effects of EGFR down-regulation itself on in vitro GBM cell proliferation and in vivo tumor growth. At least in our experimental system, we can confirm that Lycorine's effect was the leading cause of growth inhibition, even if EGFR knockdown might slow cell cycling. It is therefore safe to conclude that Lycorine acts through an EGFR-dependent pathway in its suppression of GBM.
Lycorine is an isoquinoline alkaloid extracted from perennial medicinal plants of the Amaryllidaceae genus Lycoris, which are widely distributed in China. Many studies have reported Lycorine's excellent biological activities, including anti-tumor activity. Although Lycorine previously lacked a defined protein target or mechanism of action, it has long been regarded as a candidate for clinical application. For instance, a drug containing Lycorine as the effective component has been used clinically in Russia as an expectorant to treat chronic and acute inflammatory processes in the lungs and bronchial diseases [55]. Lycorine also promotes hematopoietic stem and progenitor cell niche colonization [56]. As a natural small-molecule product, Lycorine holds many advantages, such as multi-channel, multi-target action and few side effects, and it exhibits favorable biosafety. Particularly important, Lycorine effectively penetrates the blood-brain barrier (BBB) and does not induce obvious CYP3A4 inhibition [22], which means that primary GBM tumors in the cranial cavity can be reached by Lycorine administered orally or intravenously without systemic hepatotoxicity. A key reason GBM remains difficult to treat despite recent advances in targeted therapy is that the BBB makes the central nervous system hard for drugs to reach. Our findings reveal that Lycorine may function as a drug that not only inhibits EGFR but also crosses the BBB to target intracranial tumors; it shows promising effectiveness in GBM orthotopic mouse models as well as in a patient-derived xenograft model. Like the recently reported AZD3759, a BBB-penetrating EGFR inhibitor for the treatment of EGFR-mutant NSCLC with brain metastases [57], Lycorine may be developed clinically, with the goal of achieving sufficiently high drug concentrations within the CNS.
All of these properties make Lycorine a potential pharmacological agent for GBM, the most notorious malignancy of the human brain. Nonetheless, more detailed factors, such as Lycorine's free concentrations and distribution in blood, cerebrospinal fluid and brain tissue, need further investigation. In summary, our data confirm the potential of Lycorine for the treatment of GBM and support its further clinical evaluation in larger trials.
Regarding the binding mode of Lycorine with EGFR, our molecular docking results and Biacore analysis indicate that the EGFR (696-1022) domain retains Lycorine in its ATP-binding pocket through three different interactions, two of which are mediated by Lycorine's C-rings: the first C-ring connects to the hydroxide radical of the T854 side chain of the EGFR (696-1022) domain through two hydroxyl hydrogen bonds of Lycorine; the second C-ring connects the carbonyl of the N842 side chain of the EGFR (696-1022) domain to the hydroxide radical of Lycorine. Our results are consistent with previous research. For example, X-ray structural information on Lycorine in complex with the eukaryotic ribosome revealed that Lycorine uses its dioxol-pyrroline group to contact the pocket region in the A-site of the peptidyl transferase center of ribosomes [33]. Another SAR analysis of Lycorine with its intracellular targets showed that Lycorine's C1,C2-hydroxyls provide a superior binding pose with pocket a, the GTP-binding site, of its target protein eEF1A [26]. It can be inferred that the C1,C2-hydroxyl rings of Lycorine may be a recognition motif for binding its target proteins and may play a critical role in its drug potential. Researchers should therefore pay special attention to protecting and exploiting this region when developing Lycorine as a lead compound or when synthesizing modified Lycorine derivatives.
Our finding that Lycorine inhibits the EGFR pathway extends the mechanisms described in previous literature. Lycorine's modes of action, including its inhibition of protein biosynthesis, its apoptosis-inducing, cell-cycle-arresting, anti-proliferative and anti-invasive properties, and its autophagy-promoting activity, have already been associated with the JAK, STAT, phospho-Akt, and TCRP1/Akt/mTOR axes. All of these molecules are downstream signals of EGFR, which is why we linked the mechanism of Lycorine's inhibition of GBM to EGFR. On the one hand, the existing X-ray structures of Lycorine and EGFR provide a virtual structural basis for their interaction; on the other hand, published literature revealing partial mechanisms of Lycorine's inhibition of cancer prompted us to consider an intrinsic relationship between Lycorine and EGFR, since Lycorine clearly influences EGFR's downstream signals such as JAK, STAT, AKT and mTOR. In summary, our current research provides, for the first time, direct evidence that Lycorine binds an intracellular target, EGFR. It therefore represents substantial progress in elucidating Lycorine's pharmacological activity and in understanding its mechanistic drug target.
However, some deficiencies of the present study must be acknowledged. First, the effective concentration of Lycorine needed to inhibit EGFR and treat GBM is somewhat high compared with classical clinical EGFR inhibitors such as gefitinib, which act at the nanomolar level; the fact that Lycorine suppresses GBM cells at micromolar concentrations may limit its clinical applicability as a drug. Favorably, Lycorine's chemical structure is simple and possesses a typical alkaloid tetracyclic skeleton, so structure-function analysis based on its chemical structure is straightforward. Using Lycorine as a lead compound to synthesize modified derivatives may be a promising direction for novel drug development; this direction deserves broader research interest and could yield far-reaching medical value for GBM clinical treatment. Second, although our results reveal the direct interaction between Lycorine and the EGFR (696-1022) domain, and show that this interaction underlies Lycorine's inhibition of EGF-activated EGFR kinase phosphorylation, further mechanistic details await future exploration. As a typical RTK, EGFR is a membrane-spanning protein with N-terminal extracellular ligand-binding domains that interact with EGF or other ligands and C-terminal intracellular catalytic domains. Upon ligand stimulation, binding at the extracellular domain elicits RTK oligomerization and activation; the signal is then transduced to the intracellular tyrosine kinase domain and EGFR autophosphorylation occurs. Activated, autophosphorylated EGFR can trigger a number of signaling pathways contributing to tumorigenesis and progression. In our study, we showed that Lycorine binds to EGFR and inhibits EGF-activated EGFR phosphorylation through complementary western blotting experiments, treating cells with EGF first followed by Lycorine, or with Lycorine first followed by EGF.
When cells were stimulated with EGF first, inducing EGFR kinase activity and high levels of p-EGFR, Lycorine could down-regulate EGF-induced EGFR phosphorylation and its downstream signals (Fig. 4c and d); these results also distinguish two situations, long Lycorine treatment (Fig. 4c) versus short Lycorine treatment (Fig. 4d). When cells were treated with Lycorine first and then stimulated with EGF, Lycorine could enter the cytoplasm, bind the intracellular EGFR (696-1022) domain and occupy the ATP-binding pocket of intracellular EGFR, which might hinder EGFR autophosphorylation by blocking the ATP-EGFR binding essential for auto-activated phosphorylation; thus, even after EGF stimulation, the level of p-EGFR remained barely detectable in the Lycorine-pretreated groups (Fig. 5b and c). In any case, our findings show that Lycorine inhibits EGF activation of EGFR kinase activity. One might infer that extracellular EGF has no direct relationship with intracellular Lycorine; however, our present study finds that Lycorine reduces the mRNA levels of EGF and EGFR in vivo (Fig. 6c) and down-regulates both total EGFR and p-EGFR in vitro (Fig. 4c). The intrinsic regulatory relationship between Lycorine and EGF/EGFR thus remains cryptic. Why does Lycorine affect the transcription of EGF? How does Lycorine reduce the protein expression of EGFR? Can Lycorine regulate EGFR endocytosis, degradation, recycling, or nuclear translocation? All these detailed mechanisms linking Lycorine and EGF/EGFR remain to be explored.
Conclusions
In summary, our findings confirm that Lycorine inhibits GBM growth through EGFR suppression: Lycorine treatment reduces EGFR expression and inactivates EGFR downstream signaling through direct binding to EGFR. Our research provides a proof of principle that targeting the alternatively amplified and mutated EGFR with Lycorine could substitute for existing EGFR inhibitors and hinder GBM tumor growth.
|
v3-fos-license
|
2021-05-10T00:04:21.989Z
|
2021-01-29T00:00:00.000
|
234097250
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://doi.org/10.3390/su13063514",
"pdf_hash": "589823d5fa761b8f05b001c37fcd21de289715cb",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:937",
"s2fieldsofstudy": [
"Business",
"Environmental Science",
"Sociology"
],
"sha1": "e1af35d4f2a6b7c81bb8a87f78417c9e7178f523",
"year": 2021
}
|
pes2o/s2orc
|
COVID-19 as a trigger for sustainable tourism and eco-influencers on Twitter
The social confinement resulting from the COVID-19 crisis temporarily reduced greenhouse gas emissions. Although experts consider that the decrease in pollution rates was not drastic, some surveys detect a growth in social concern about the climate. In this environment, institutions, city councils and companies have promoted sustainable tourism as a necessary option, even before world society regains freedom of movement. This work analyzes and geolocates, quantitatively and qualitatively, the sustainable tourism and ecotourism proposals on Twitter, using the Twitonomy Premium tool with data extracted at the end of December 2020. The results show intense activity in Ireland, Kenya, Sri Lanka, India, Croatia, Spain, Finland, France, Mexico and Pakistan, among others. The accounts that achieve the most impact and engagement belong both to public institutions and to influencers specialized in travel, writers and chefs, who act as eco-influencers. Ecotourism is promoted as the necessary option for the conservation of cities and landscapes, which will be visited by tourists who are supposedly more environmentally aware after the pandemic.
Introduction
In December 2019, a hitherto unknown type of coronavirus [1], named SARS-CoV-2, caused a severe respiratory illness in Mainland China. Virus transmission spread from a single area to the entire country in 30 days [2,3,4]. Two months later, after its rapid expansion, the disease began to be called COVID-19 (an acronym for Coronavirus Disease 19) by the scientific community. Throughout 2020, dozens of countries around the world experienced numerous outbreaks, as no effective drugs [5,6] or vaccines had been developed. The main factors that contributed to its expansion were the population's high international mobility and the high population density in urban areas [7,8,9,10].
Preventive strategies, in addition to hygienic ones, included measures of social distancing, community confinement, reduced mobility, and perimeter closures of hundreds of cities [11,12,13]. This social confinement temporarily reduced greenhouse gas emissions. In Spain, the BC3 (Basque Center for Climate Change) and the Observatory for Energy Transition and Climate Action (OETA) predicted that 2020 would close with a historic decrease in these emissions. They estimated a fall of 15%, the largest decrease since 1990, the year in which these calculations began [14]. According to the same study, based on monthly measurements, the reduction in emissions in the first months of 2020 was due to the decreased activity of coal-fired power plants [14]. This decline deepened in April and May, as social distancing and home confinement measures tightened.
The data are reproduced in a similar way when studying the phenomenon at the European level, although experts consider that the decrease in pollution rates was not so drastic. The Global Carbon Project (GCP) of the World Meteorological Organization (WMO), in its November 2020 newsletter, estimated that in the most intense period of forced confinement, carbon dioxide (CO2) emissions could fall by as much as 17% relative to the 2019 data [15]. However, it predicted that the total annual reduction would only be between 4.2% and 7.5%. The best data for the environment came from readings in the centers of large cities: Helsinki, Florence, Heraklion, Pesaro, London, Basel and Berlin [16]. However, the WMO recommends caution and explains that the high natural atmospheric variability of CO2 requires more numerous measurements over more time, since a lower concentration of carbon dioxide is not always linked to a lower use of fossil fuels.
Until more definitive data are published, numerous studies and surveys do detect an increase in social concern about the climate as a result of the crisis and confinement. The deadly coronavirus called the welfare state into question and encouraged the world's population to think about climate change more seriously. Keesing et al. [17] had already warned of the unbreakable nexus between the climate emergency and the transmission of infectious diseases a decade before the COVID-19 crisis. They noted that the decline in biodiversity reduced the capacity of essential ecosystem services and the defenses of humans, animals and plants, and consequently increased infectious diseases [17]. This study called for socio-climatic awareness so that areas of high natural biodiversity can serve as a reserve for pathogens that need not come into contact, for example, with humans [17]. Currently, this work accumulates 23,000 downloads on the website of the prestigious journal Nature and more than 854 citations in publications around the world.
The WMO published another report in May 2020, openly stating that climate change is deadlier than the coronavirus because it entails ocean warming, record sea levels, melting ice sheets, storms and droughts, and the proliferation of still-unknown pathogens [18]. Likewise, the Convention on Biological Diversity (CBD) of the United Nations (UN) underlined in its report Global Biodiversity Outlook (GBO-5), in August 2020, the need to meet the 20 Aichi Targets. According to the text, biodiversity is key to all factors of human life, including health [19].
Numerous international media outlets have shown the results and proposals of these studies. Climate awareness has expanded its visibility and importance on the media agenda, and polls from various organizations include questions on these issues, even focusing on areas that used to be unrelated to climate. The European Investment Bank (EIB), the financial body of the European Union, published the 2020-2021 EIB Climate Survey in January 2021. The results reveal that COVID-19 has influenced citizens' perception of the climate emergency, and climate and ecological recovery are high on the EU agenda [20]. Specifically, the survey shows that 57% of European citizens affirm that the economic recovery after the global pandemic must consider the climate emergency and that European governments must promote an urgent reduction of CO2 [20]. According to the same survey, citizens of some European countries, such as Hungary (71%), Malta (67%), Spain (64%), Germany (63%), Luxembourg (63%) and France (61%), think that the fight against climate change should be part of the economic recovery [20].
In this new climate-conscious environment, institutions, municipalities and companies have promoted sustainable tourism as a necessary option, even before world society regains freedom of movement. Although the concept of sustainable tourism is not new, the current situation has caused its regeneration. Butler [21] explained that the term was born from the Brundtland Report of the United Nations, also known as Our Common Future. To operationalise the general and secondary objectives, the following Hypotheses 1-4 were proposed: Hypothesis (H1). COVID-19 has awakened and increased a virtual civic awareness of citizens, who follow accounts that promote sustainable tourism on social media, even months before being able to recover international mobility. Hypothesis (H2). This awareness of sustainable tourism is promoted by official entities, which must fulfill their environmental commitments, and by influential users and tourism experts, who may see their follower base and influence grow. Hypothesis (H3). Sustainable tourism offers a reinvention of the way of doing tourism, revising destinations or proposing new destinations with a new awareness. Hypothesis (H4). It is impossible to corroborate whether this eco-tourism awareness will actually translate into more sustainable cities, trips and travelers when the socio-health crisis is over, although Twitter now offers an eco-conscious and perhaps escapist and cathartic call, showing tourism that we cannot yet undertake but do dream about.
Materials and Methods
This work analyzes and geolocates the sustainable tourism and ecotourism proposals on Twitter, quantitatively and qualitatively, using the Twitonomy Premium tool, with data extracted at the end of December 2020. Twitter was chosen as the social network because it allows tweets to be viewed without being registered as a user. Likewise, Twitter is considered the social network where governments, politicians and institutions are most present [46,47,48,49,50,51,52,53,54]. Considering the reports in the introduction, and underlining that the ecological commitment must come from governments, institutions and companies [55,56,57,58,59], this microblogging network was chosen as the most adequate to meet the objectives of the study.
Twitonomy is a web application for analyzing the social network Twitter exclusively. It is used to make publications and to analyze tweets, hashtags, followers, impressions, engagement rate and top domains. It is owned by Diginomy Pty Ltd, an Australian company headquartered in New South Wales. Its use policies require that users be over 16 years old and human (not systems or bots), and, if they opt for the paid Premium version, that they provide a full name and a valid email address [60]. It is not affiliated with Twitter Inc. or any of its brands, and its features and functionalities are independent of the social network.
The data offered in each search are provided by Twitter's API (Application Programming Interface) and are subject to its limits [60]. Analyzing the general policies and guidelines directly on the Twitter website, the social network explains that its APIs provide companies, developers and users with programmatic access to its data, with the exception of non-public information and direct messages [61]. This implies necessary compliance with the required ethical standards [62] and proposes a radical rereading of traditional journalism as a primary source of information [63].
The analyzed hashtag is #SustainableTourism, which includes all its uppercase and lowercase forms: #Sustainabletourism, #sustainableTourism and #sustainabletourism. With the Premium (paid) subscription, Twitonomy allows monitoring of up to a full year, and dates were entered for the interval "since: 2021-01-01" and "until: 2021-12-31". Likewise, the last 3,000 tweets of the first 9 days of 2021 were analyzed in detail, as the tool offers, to confirm that there were no inconsistencies.
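The case variants listed above can be collapsed into a single case-insensitive match. A minimal Python sketch of this filtering step (the sample tweets are invented for illustration; Twitonomy performs the equivalent filtering on its own side):

```python
import re

# One pattern covers every capitalisation variant of the hashtag.
HASHTAG = re.compile(r"#sustainabletourism\b", re.IGNORECASE)

def mentions_hashtag(tweet_text: str) -> bool:
    """True if the tweet uses any case variant of #SustainableTourism."""
    return HASHTAG.search(tweet_text) is not None

# Hypothetical tweets, purely for demonstration.
tweets = [
    "Visit our parks! #SustainableTourism",
    "Green travel tips #sustainabletourism #travel",
    "Beach holidays #Tourism",
]
matching = [t for t in tweets if mentions_hashtag(t)]
```

The word-boundary anchor keeps longer hashtags such as a hypothetical #sustainabletourism2021 out of the count, mirroring the exact-hashtag search used in the study.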
Results
Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 29 January 2021 doi:10.20944/preprints202101.0636.v1
When searching for the hashtag #SustainableTourism, the Twitonomy Premium app offers numerous results. In the left column of results it provides: flow of tweets per day, most influential users, most engaging users, most active users, top hashtags, top languages and locations on a map. In the right column of results it provides: most retweeted tweets and most favorite tweets, in reverse chronological order, from present to past. Taking into account the main objective of the study, the results that allow a solid, realistic and deductive portrait have been chosen.
Most influential users & most active users
According to Twitonomy, the most influential users are the accounts with the most followers; the most engaging users are the accounts that gained the most favorites using the selected hashtag; and the most active users are the accounts that most used the selected hashtag in original tweets, since retweets are not counted as original content. To meet the proposed objectives, two sets of accounts were selected: the 5 Twitter accounts using the hashtag #SustainableTourism with the most influence (most followers) and the 5 accounts that were most active (used the hashtag most times). They are shown in Table 1. Among the 10 selected accounts there are accounts of European organizations (@EU_MARE), national organizations (@visitportugal, @ecotourismkenya), regional tourism offices (@ComunediGenova, @OldDublinTown), companies dedicated to tourism (@Koonholidays) and influencers (@FoodDrinkDest). Their activity, in number of tweets, is very uneven, ranging between 659 and 98,859 publications. Likewise, the range of followers is very wide, between just 115 and 110,904. Another interesting and very representative point is Twitter lists. This tool allows a user to create a list of accounts that interest them, so that only tweets from the accounts included in that list appear in it. It is another way to measure engagement, and it is interesting to see that @visitportugal is the account most included in lists, predictably by travelers who want to visit Portugal. On the contrary, @Koonholidays appears on only 4 lists, despite its activity and its visibility with the hashtag #SustainableTourism.
3.2. Activity in tweets, retweets, hashtags and retweeted tweets Table 1 provides some very interesting quantitative data for the intended objective, but taking advantage of the features of Twitonomy, the data related to the specific activity of each account were also recorded. It is very interesting to relate the visibility of each account to the work carried out by its user or owner, or the effort dedicated from each account to achieving its visibility and engagement. These results are shown in Table 2. After selecting the 10 accounts with the most influence and activity, the research analyzed, one by one, the activity of each of those accounts. The Twitonomy tool offers very complete profile analytics: tweet analytics, tweet history, users most retweeted, users most replied to, users most mentioned, hashtags most used, tweets most retweeted, tweets most favorited, days of the week, hours of the day (UTC), platforms most tweeted from, tweets, followers, following, favorites, lists following and lists followed. To meet the objectives of the research, the most representative data are considered to be those that appear in Table 2. The tweets-per-day section shows a wide range, between 1.11 and 13.77 tweets per day. This figure is remarkably interesting, as it shows profuse activity, especially from official agency accounts (@visitportugal, @OldDublinTown, @EU_MARE). It must be remembered that the validity of this figure rests on computing the number of tweets per day over the selected period rather than over the entire age of the account; in the latter case, the data would not be comparable between accounts with very different life spans.
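The tweets-per-day caveat can be made concrete: the rate must be computed over the monitoring window, not over the account's lifetime, or accounts of different ages cease to be comparable. A small sketch with hypothetical figures:

```python
from datetime import date

def tweets_per_day(n_tweets: int, start: date, end: date) -> float:
    """Average tweets per day over the selected window (inclusive of both ends)."""
    days = (end - start).days + 1
    return n_tweets / days

# Hypothetical account: 1,000 tweets posted during the one-year window analysed.
rate_window = tweets_per_day(1000, date(2021, 1, 1), date(2021, 12, 31))

# The same 1,000 tweets averaged over a hypothetical 5-year account lifetime
# would understate the account's current activity.
rate_lifetime = tweets_per_day(1000, date(2017, 1, 1), date(2021, 12, 31))
```

The window-based rate (about 2.7 tweets/day here) is the comparable figure; the lifetime-based rate dilutes recent activity across years in which the hashtag was not in use.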
Retweets are another interesting point, in this research and in any other work on Twitter. Remember that the Twitonomy screening excludes posts in which the hashtag was merely retweeted; that is, it stores and analyzes only the original tweets in which the chosen hashtag was used. However, it does allow knowing how many times each original tweet was retweeted by other accounts, as will be seen later. In this case, the data refer to the non-original publications (retweets) that each account made across all its activity, not only those referring to the hashtag #SustainableTourism. These data allow viewing the interaction of the accounts with other users of the social network, and this is the part of the investigation where the results are most even, because profuse retweet activity is observed. This includes @OldDublinTown, with 84% of its publications being retweets of other users' original tweets; @visitportugal, with 72% retweets; and @EU_MARE, with 63%. This activity is quite common in the tourism sector because accounts can retweet publications of tourists who are visiting or have visited them.
The hashtags section provides the number of hashtags used by each of the accounts in its publications. The average number of hashtags per tweet is very similar in all cases, and almost all accounts use only one hashtag, thus giving it all the prominence. Some accounts have an average below one hashtag per tweet, an interesting circumstance, since the absence of hashtags can worsen the visibility of the tweet and the account; but this would not have happened in all these cases.
Five most named destinations or attractions for each account
After the quantitative analysis of the two previous subsections, a mixed analysis of the content of the accounts was necessary. Their variety, age and origin are quite different, as already mentioned, which means the destinations and attractions they promote are also very varied, as shown in Table 3. The most popular destinations and attractions in recent weeks combine cities, specific tourist attractions, places and nature reserves, and fairs and festivals. The options are very varied and offer forms of sustainable tourism for all ages, tastes and budgets. As some studies cited in the introduction indicated, Twitter allows viral tourism communication and marketing, which can bring the benefits of a destination to any part of the world. In later research it would be interesting to study the specific appearances of certain destinations, especially those most vulnerable or threatened by biosystemic change.
Discussion and conclusions
The previous results, according to the objectives of the research, must be commented on and discussed in depth, relating the authors' perspective to the state of the art and the previously stated working hypotheses. Hypothesis (H1). COVID-19 has awakened and increased a virtual civic awareness of citizens, who follow accounts that promote sustainable tourism on social networks, even months before being able to recover international mobility. Climate awareness is one of the concerns that has grown the most after the SARS-CoV-2 social and health crisis [14,15,16,17,18]. Although it was already on the political agenda of governments and parties, it is now also on the social agenda, as revealed by the international polls discussed above [19,20]. It is confirmed that eco-sustainable awareness is a top concern and, as the media share the results of reports and surveys, they get the audience interested, expanding its data and knowledge on the matter in social networks [22].
The decision to search for this information in social media can respond to the exponential growth of these during confinement. Although they were already an important part of our lives, the prohibition of physical socialization promoted the increase of virtual communication through social networks. In future research it would be interesting to see if there are more reasons to choose social media as a source of information on sustainable tourism, for example: political disaffection of citizens, distrust in official reports, suspicion of the mass and traditional media for their relationships policies or their business interests, detection of little relevant presence of ecological issues on the political and media agenda ... Likewise, it would be very suggestive to interrelate the presence of a hashtag on social networks with searches for the same term in search engines, as allowed by the Google Trends tool.
Hypothesis (H2). This awareness of sustainable tourism is promoted by official entities, which must fulfill their environmental commitments, and by influential users and tourism experts, who may see their follower base and influence grow. The analysis of the hashtag #SustainableTourism has corroborated this hypothesis, showing that the accounts with the most influence and activity are those of official entities, well above individual or personal accounts. According to the authors discussed above, Twitter is the social network most chosen by official entities, governments, political parties or politicians in office [46,47,48,49,50,51,52]. The research data confirm that the most active accounts in sustainable tourism are of this nature and maintain the validity and timeliness of previous research [23,24,25]. It is true, as raised in the research, that it is necessary to distinguish between more influential accounts and more active accounts. Influence is usually measured in the number of followers, and official entities have an easier time scoring points in this regard, while an influencer or individual person must win each follower, one by one, through the content they offer.
According to the research data, the European Union's commitment to sustainable tourism is tangible, and among the 10 accounts analyzed there are several European organizations that work on water, fishing or food. They would thereby comply with the commitments of the 2030 Agenda, and not doing so would be a serious incongruity. It is also particularly positive to see how European countries openly adhere to this commitment (Portugal, Italy and Ireland in the first places) and how countries outside the European Union also embrace these commitments (Kenya, India, Sri Lanka and Mexico in the first places). It would be interesting, in subsequent research, to filter the search for activity on Twitter to European countries only, to analyze and compare which would be the most active and responsible in social media, and to compare their proposals with those of the countries that stand out on each of the other continents. Likewise, it would be relevant to compare the activity on Twitter in those countries that are especially active and influential in social media but did not have a high rate of tourists before the pandemic. This would allow assessing, when mobility recovers, whether the strategy has benefited them and the number of tourists actually grows.
Hypothesis (H3). Sustainable tourism offers a reinvention of the way of doing tourism, revising destinations or proposing new destinations with a new awareness. The state of the art outlined high-impact academic works, with experiences on five continents and the proposal of new destinations previously considered exotic or inhospitable [30,36,38]. These investigations were considered in elaborating this hypothesis, which stood as one of the fundamental premises for a mixed analysis that included the qualitative dimension. The hypothesis has been corroborated by the most named destinations and attractions, listed in Table 3. Based on these results, future research could focus exclusively on analyzing the Twitter accounts of a country that promote all national destinations in a sustainable way. It would be important to detect whether these destinations are more underlined by official entities (governments, parties, municipalities), by tourism companies, by influencers, by anonymous tourists who share their travel experiences, or by citizens who live in those places and want to share their value with the rest of the world. Other investigations could examine whether the forgotten destinations, the protagonists of those tweets during confinement, actually come alive again in physical visits when international mobility recovers. Hypothesis (H4). It is impossible to corroborate whether this eco-tourism awareness will translate into more sustainable cities, trips and travelers when the socio-health crisis is over, although Twitter now offers an eco-conscious and perhaps escapist and cathartic call, showing tourism that we cannot yet undertake but do dream about. As mentioned, surveys on problems that concern European and international society naturally contemplate climate awareness [20].
This awareness includes, in some studies, a new eco-tourism consciousness and a promise or anticipation of being more aware and sustainable tourists when it is possible to travel again. When the desired herd immunity has been achieved, the virus overcome, and freedom of movement regained, studies will have to assess whether that awareness has been translated into reality or remained only a promise, and how long it lasts, whether temporary or long-lasting.
Unfortunately, there are still many months to go before these studies can be done. Meanwhile, this research has corroborated the hypothesis, showing Twitter as the arena where that commitment, at least for now, is visible, given the number of accounts, the number of users, their activity, the engagement they achieve and the internationalization of the proposals. The impossibility of doing tourism has not prevented people from continuing to talk about tourism, reinterpreted with a sustainable environmental awareness committed to countries, cities, heritage and, of course, nature. It is therefore proposed that future research compare the activity of the most influential accounts, contrasting their activity on Twitter, YouTube, Facebook, Instagram and TikTok, to detect successes and errors, similarities and differences, or the networks best exploited and most chosen by the audience. Likewise, it would be interesting to conduct surveys or focus groups with followers of these accounts, to find out which groups choose one social network over another and why, when raising awareness about sustainable tourism.
|
v3-fos-license
|
2022-10-27T15:23:33.532Z
|
2022-10-23T00:00:00.000
|
253154575
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2077-0383/11/21/6251/pdf?version=1666522366",
"pdf_hash": "a039c66ef0c70b5123b769c786ce30f94ef1bfd0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:938",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "5a653ac14b4242ab663692907492a93ec148c88d",
"year": 2022
}
|
pes2o/s2orc
|
Clinical Outcomes of Biodegradable versus Durable Polymer Drug Eluting Stents in Rotational Atherectomy: Results from ROCK Registry
Background: The aim of this study was to compare the clinical outcomes of biodegradable polymer (BP) versus durable polymer (DP) drug eluting stents (DES) in patients with calcified coronary lesions who underwent rotational atherectomy (RA) and percutaneous coronary intervention (PCI). Methods: This study was based on a multicenter registry which enrolled patients with calcified coronary artery disease who received PCI using RA between January 2010 and October 2019 at 9 tertiary centers in Korea. The primary outcome was 3-year all-cause mortality, and the secondary outcomes were cardiovascular death and target-lesion failure. Results: A total of 540 patients who underwent PCI using RA were enrolled, with a median follow-up of 16.1 months. From this registry, 272 patients treated with DP-DES and 238 patients treated with BP-DES were selected for analysis. PCI with BP-DES was associated with decreased all-cause mortality after propensity score matching (HR 0.414, CI 0.174–0.988) and multivariate Cox regression analysis (HR 0.458, CI 0.224–0.940). BP-DES was also associated with decreased cardiovascular mortality, but there was no difference in TLF between the two groups. Conclusions: BP-DES were associated with favorable outcomes compared to DP-DES in patients undergoing PCI using RA for calcified coronary lesions.
Introduction
Severely calcified coronary lesions are one of the most difficult challenges for the interventional cardiologist. Heavy calcium deposition increases the complexity of the procedure and the likelihood of procedural failure by interfering with lesion preparation and balloon expansion, making device delivery difficult, and limiting final stent expansion [1]. Additionally, such lesions are associated with a high frequency of restenosis and need for repeat revascularization, which is likely to adversely impact both the short- and long-term outcomes of coronary artery disease [2,3].
Rotational atherectomy (RA) is one of the preferred methods used during PCI for heavily calcified coronary lesions. It modifies calcified plaques leading to lumen enlargement and better stent expansion [4]. Early applications of RA either alone [5] or using bare metal stents (BMS) [6] were associated with high rates of restenosis and repeat revascularization. The introduction of drug-eluting stents (DES) in combination with RA has been shown to be safe and associated with better clinical outcomes compared to BMS [7][8][9]. Second generation DES (SGDES) have been a further improvement on the first-generation stents in terms of safety and efficacy when used after RA [10][11][12]. Additionally, there is accumulating evidence that newer ultrathin stents and biodegradable polymers are associated with more favorable results in patients undergoing PCI compared to thicker strut SGDES and durable polymers [13][14][15][16].
However, it is yet uncertain whether biodegradable polymer DES (BP-DES) are superior to durable polymer DES (DP-DES) in heavily calcified lesions requiring RA. In this study, our objective was to find out if there were clinical differences in BP-DES versus DP-DES in patients who underwent coronary stent implantation after RA for heavily calcified lesions.
Study Design and Population
This study was based on the Rotational atherectomy in Calcified lesions in Korea (ROCK) registry. Details of the registry have been published elsewhere [17]. In brief, 540 patients who underwent PCI using RA due to calcified CAD between January 2010 and October 2019 at 9 tertiary centers in Korea were retrospectively enrolled and analyzed. The median follow-up period was 16.1 months (interquartile range 8.8–38.4 months). For the purposes of the current analysis, 30 patients were excluded due to the use of BMS, first-generation DES, or drug-eluting balloons. The remaining 510 patients were divided into two groups according to the durability of the DES polymer used during the procedure (Figure 1). Data including demographic, clinical, angiographic and procedural characteristics were collected at each site using a standardized report form. Approval was given by the local ethics committee of each hospital. The study protocol was approved by the Institutional Review Board of each institution and is in accordance with the Declaration of Helsinki.
Clinical Outcomes and Definition
The primary clinical outcome of this study was all-cause death during 3 years of follow-up. The secondary outcomes were 3-year cardiovascular death, and target-lesion failure (TLF) defined as a composite of cardiac death (CD), target-vessel spontaneous myocardial infarction (TVMI), or ischemia-driven target-lesion revascularization (TLR).
Technical success was defined as residual stenosis of less than 30% in the presence of grade III Thrombolysis in Myocardial Infarction (TIMI) flow [18]. Procedural success was defined as technical success without in-hospital major adverse cerebral and cardiac events, including in-hospital death, in-hospital cerebrovascular accident (CVA), urgent revascularization (PCI or surgery) following the index procedure, procedure-related atrioventricular block requiring temporary pacemaker insertion, type D-F coronary perforation or dissection, intervention (including surgery) due to cardiac tamponade, and peri-procedure MI. Target-vessel spontaneous MI was defined as spontaneous MI which could be clearly attributed to the target vessel. Spontaneous MI was defined as a creatine kinase-myocardial band (CK-MB), or troponin increase above normal range with signs or symptoms of ischemia at any time during follow-up after discharge. Peri-procedural MI was defined as a CK-MB peak elevation of over 10-fold above the upper normal range occurring within 48 h after the index procedure. TLR was defined as any revascularization (PCI or surgery) of the treated lesion. Bleeding events were defined according to the TIMI bleeding criteria [18]. All clinical events were confirmed by source documentation collected at each hospital and centrally adjudicated by an independent group of clinicians unaware of the procedural details.
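The composite and threshold definitions above can be expressed as simple predicates. A hedged sketch of two of them (the function names and the flag-based encoding are illustrative, not the registry's actual data model):

```python
def peri_procedural_mi(ck_mb_peak: float, upper_normal: float,
                       hours_after_index: float) -> bool:
    """Peri-procedural MI: CK-MB peak elevation over 10-fold above the
    upper normal limit occurring within 48 h of the index procedure."""
    return ck_mb_peak > 10 * upper_normal and hours_after_index <= 48

def target_lesion_failure(cardiac_death: bool,
                          tv_spontaneous_mi: bool,
                          ischemia_driven_tlr: bool) -> bool:
    """TLF is a composite endpoint: any single component counts as failure."""
    return cardiac_death or tv_spontaneous_mi or ischemia_driven_tlr
```

Encoding the definitions this way makes the composite nature of TLF explicit: a patient contributes one TLF event whether one or all three components occurred.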
Statistical Analysis
Continuous variables are presented as the mean ± standard deviation or median and interquartile range and compared using the Student's t-test or Mann-Whitney U-test, as appropriate. Categorical variables are presented as numbers and percentages and compared using the chi-square test or Fisher's exact test. Event curves for clinical endpoints were constructed using the Kaplan-Meier method and compared using the log-rank test. To adjust for confounding factors, two analyses were performed. First, propensity-matched cohorts were constructed using 1:1 matching on propensity scores obtained from logistic regression and a nearest-neighbor method with a caliper width of 0.2. The covariates in the propensity score were age, sex, previous history of hypertension, diabetes mellitus (DM), chronic kidney disease (CKD), dialysis, previous PCI, stroke, atrial fibrillation, systolic and diastolic blood pressure at admission, left ventricular ejection fraction (LVEF), peak CK-MB, low-density lipoprotein (LDL) cholesterol, HbA1c, clinical diagnosis, number of coronary arteries with stenosis > 50%, lesion location, mean stent diameter, total stent length, total number of stents, and treatment with aspirin or P2Y12 inhibitors. Covariate balance was assessed with a standardized mean difference < 0.1 indicating appropriate balance [19]. Second, univariate Cox regression analysis was used to identify predictors of mortality on all variables listed in Table 1, and multivariate Cox regression analysis by stepwise selection was performed to identify independent predictors of death on each variable with p < 0.1 in univariate analysis. To avoid overfitting, the number of covariates was chosen such that there was 1 predictor variable per 10 events [20]. The relative effects of predictor variables on clinical outcomes were expressed using hazard ratios (HR) and 95% confidence intervals (CI). A two-sided p-value < 0.05 was considered statistically significant.
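The matching procedure can be sketched end-to-end: estimate propensity scores with logistic regression, then greedily pair each treated subject with the nearest unmatched control within the caliper. This pure-NumPy sketch assumes the caliper of 0.2 is applied as 0.2 standard deviations on the logit scale (one common convention; the registry analysis was done in R); the data below are synthetic:

```python
import numpy as np

def fit_logit_scores(X, y, iters=25):
    """Logistic regression by Newton-Raphson; returns the linear
    predictor (logit of the propensity score) for every subject."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        W = p * (1.0 - p)                              # IRLS weights
        H = Xb.T @ (Xb * W[:, None]) + 1e-6 * np.eye(Xb.shape[1])
        w += np.linalg.solve(H, Xb.T @ (y - p))        # Newton step
    return Xb @ w

def match_1to1(logit, treated, caliper_sd=0.2):
    """Greedy 1:1 nearest-neighbour matching on the logit scale,
    discarding candidate pairs farther apart than the caliper."""
    caliper = caliper_sd * logit.std()
    controls = list(np.where(treated == 0)[0])
    pairs = []
    for i in np.where(treated == 1)[0]:
        if not controls:
            break
        dist = np.abs(logit[controls] - logit[i])
        j = int(np.argmin(dist))
        if dist[j] <= caliper:
            pairs.append((i, controls.pop(j)))         # each control used once
    return pairs

# Synthetic cohort: treatment probability depends on the first covariate.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
treated = (rng.random(300) < 1.0 / (1.0 + np.exp(-X[:, 0]))).astype(float)
logit = fit_logit_scores(X, treated)
pairs = match_1to1(logit, treated)
```

Outcomes are then compared within the matched pairs only, which is what yields the 201-pair cohort reported later.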
All statistical analyses were performed using R Statistical Software version 4.2.1 (R Foundation for Statistical Computing, Vienna, Austria).

Baseline Characteristics of the Study Population

Table 1 shows the baseline demographic and laboratory characteristics of patients classified according to DES polymer durability. In total, 272 patients underwent PCI with DP-DES and 238 patients with BP-DES. Patients implanted with BP-DES were more likely to be male, had lower peak CK-MB levels, and were more likely to be treated using IVUS. Procedural success rates were 97.4% and 97.5% for DP-DES and BP-DES, respectively.

Propensity score matching identified 201 matched pairs for analysis. There was no significant difference in baseline characteristics between the two matched groups. After propensity score matching, a decrease in the standardized mean difference for all major variables was observed, and only treatment with P2Y12 inhibitors had a standardized mean difference above 0.1, indicating appropriate covariate balance.
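The balance criterion used above (standardized mean difference < 0.1) can be computed for a continuous covariate as the difference in group means over the pooled standard deviation. A minimal sketch:

```python
import numpy as np

def standardized_mean_difference(x, y):
    """SMD for a continuous covariate: mean difference / pooled SD."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    pooled_sd = np.sqrt((np.var(x, ddof=1) + np.var(y, ddof=1)) / 2.0)
    return (x.mean() - y.mean()) / pooled_sd
```

A covariate with |SMD| below 0.1 after matching would be considered adequately balanced, as in the analysis described here.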
Clinical Outcomes According to Polymer Durability
The Kaplan-Meier event curves for all-cause mortality, cardiovascular mortality, and TLF are shown in Figure 2. PCI using BP-DES showed a tendency toward decreased all-cause death compared to DP-DES, and most of the mortality difference occurred within one year of the index PCI, although the result was not significant (log-rank p = 0.081). Meanwhile, there was a tendency toward decreased cardiovascular death in the BP-DES group which did not reach statistical significance, and there was no difference in TLF between the two groups.
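The Kaplan-Meier estimator behind these curves can be sketched in a few lines. This is an illustrative implementation, not the study's software: at each distinct event time the survival estimate is multiplied by (1 − deaths / number at risk), with censored subjects leaving the risk set after their follow-up time.

```python
import numpy as np

def kaplan_meier(time, event):
    """Return (event_time, survival) pairs; event=1 for death, 0 for censoring."""
    t = np.asarray(time, dtype=float)
    e = np.asarray(event, dtype=int)
    surv, out = 1.0, []
    for ut in np.unique(t[e == 1]):          # distinct event times only
        at_risk = np.sum(t >= ut)            # subjects still under observation
        deaths = np.sum((t == ut) & (e == 1))
        surv *= 1.0 - deaths / at_risk
        out.append((float(ut), float(surv)))
    return out
```

Comparing such curves between groups is then done with the log-rank test, as in the methods above.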
A summary of clinical outcomes according to DES polymer durability is presented in Table 2. In the DP-DES group 29 (10.7%) deaths occurred, and in the BP-DES group 11 (4.7%) deaths occurred. After adjusting for baseline variables using propensity score matching, PCI with BP-DES was associated with a significantly lower risk of the primary outcome of all-cause death (HR 0.414, 95% CI 0.174-0.988). BP-DES was also associated with a lower risk of cardiovascular death (HR 0.281, 95% CI 0.094-0.843), but there was no difference in TLF (HR 1.048, 95% CI 0.590-1.861) between the two groups.
Univariate Cox regression analysis was performed for all the clinical variables listed as baseline characteristics in Table 1, and the results are shown in Table 3. Older age, lower BMI, previous history of dyslipidemia and CVA, lower LVEF, lower Hb, higher peak CK-MB, a diagnosis of NSTEMI or silent ischemia, and a smaller stent diameter were potential factors for increased all-cause mortality. Treatment with aspirin, P2Y12 inhibitors, beta-blockers, and statins was associated with decreased mortality. After multivariate Cox regression, PCI with BP-DES was identified as an independent factor predicting decreased all-cause mortality (HR 0.458, 95% CI 0.224-0.940). Older age, previous history of CVA, lower LVEF, higher peak CK-MB, and no treatment with P2Y12 inhibitors were the other factors showing the strongest association with increased all-cause mortality.
Table 2. Clinical endpoints before and after adjustment using propensity score matching.
Discussion
BP-DES, compared to DP-DES, was associated with decreased three-year all-cause mortality in patients who underwent PCI with rotational atherectomy. This finding remained significant after adjusting for confounders using multivariate Cox regression analysis and propensity score matching. PCI with BP-DES also showed better outcomes for cardiovascular death, while there was no difference observed for TLF.
In our study, 510 patients received SGDES implantation for PCI after RA and were followed up for a median of 16.1 months. All-cause death occurred at an overall incidence of 10.4%, cardiovascular death at 6.8%, and TLF at 14.5%. Our mortality data are comparable with other reports of PCI using RA and DES and indicate high standards of PCI using both DP-DES and BP-DES. Okai et al. reported incidences of cardiac death of 10.9% and TVR of 21.4% during a median follow-up period of 3.8 years [21]. Abdel-Wahab et al. reported all-cause death in 4.4%, MI in 3.4%, TVR in 9.9%, and TLR in 6.8% during a median follow-up period of 15 months [9].
Contemporary SGDES are an improvement on first-generation DES with respect to the antiproliferative agent, polymer biocompatibility, and thinner stent platforms, and have been proven superior to BMS and first-generation DES [22][23][24]. Initially there was concern that these improvements might not extend to severely calcified lesions, as calcified plaques might limit stent expansion so that the thinner struts of SGDES would be less effective than the thicker first-generation DES [25,26]. However, recent studies conclude that SGDES are indeed more effective even for calcified lesions [27].
Stent polymers have been implicated in the development of restenosis and late and very late stent thrombosis, due to local hypersensitivity and inflammatory reactions [28]. This has led to the development of DES with biodegradable polymers, where only the bare metal platform is left behind after the drug-eluting polymer is degraded. There is accumulating evidence that newer ultrathin stents and biodegradable polymers are associated with more favorable results in patients undergoing PCI compared to thicker strut SGDES and durable polymers. In post hoc analyses of the BIO-RESORT trial, Synergy EES (strut thickness 74 µm) were associated with lower rates of TLR compared to Resolute Integrity ZES (strut thickness 91 µm) in small coronary arteries [13] and calcified lesions [14]. In the BIOFLOW V [15] and BIOSTEMI [16] trials, superior outcomes were observed with Orsiro CoCr-SES-BP (strut thickness 60/80 µm) compared to Xience CoCr-EES-DP. Our study is consistent with previous studies demonstrating improved clinical outcomes with BP-DES and extends the results to the population of patients undergoing RA for severely calcified coronary artery lesions.
An alternative explanation for the superior performance of BP-DES is that the newer stents have thinner strut platforms compared to older DP-DES. Stents with ultrathin struts on the order of 60 µm have been associated with decreased TLF and TVMI [29], whereas older BP-DES with thicker struts have been at best non-inferior to contemporary DP-DES [30,31]. The heterogeneous composition of both the DP-DES and especially the BP-DES group in our study, while a limitation, is also a strength, since we incorporated stents with thicker struts as well as Orsiro SES-BP, meaning that the effects observed in our analysis are more likely to be due to polymer durability than to strut design.
The effects of DES implanted after the vessel injury caused by RA remain unclear. Theoretically, as the lumen left after RA is relatively smooth and free of calcifications, this would manifest as improved outcomes after RA and DES compared to DES alone [4]. RA has also been shown to decrease polymer damage during stent delivery through calcified lesions [32]. Yet, in the ROTAXUS trial, RA + DES was not associated with improved clinical outcomes, as the acute gains from RA were offset by increased late lumen loss, possibly as a result of vessel injury sustained during the RA procedure [33]. Vessel injury may also lead to increased inflammation, creating an environment where BP-DES may be potentially superior to DP-DES. A recent study by Mankerious et al. reported that Orsiro CoCr-SES-BP was associated with lower rates of TLF in small vessels after PCI using RA compared to EES-DP, although this effect was not significant when larger vessels were also considered [34]. Our analysis also suggests that BP-DES is associated with superior outcomes compared to DP-DES when used after RA. Further studies are needed to ascertain the effect of BP-DES in the RA population.
This study has several limitations. First, this was not a randomized trial but a retrospective study and may be subject to inherent biases, although we used propensity score matching and multivariate Cox regression analysis to mitigate confounding. A selection bias cannot be excluded due to the heterogeneous composition of both the BP-DES and DP-DES groups, and the lack of clear and definite indications for choosing BP-DES or DP-DES. Second, the number of patients in the registry was not large, and our study may have lacked sufficient power to distinguish between outcomes or to incorporate more variables into the multivariate regression analysis. However, since in contemporary practice RA is generally reserved for the most severely calcified lesions unamenable to high-pressure balloons, the number of patients undergoing PCI using RA cannot be very large, and the number of patients in our registry is similar to other studies on RA [9,21,34]. Third, the difference in the primary outcome of all-cause death reached only borderline significance after adjustment with propensity score matching, and there is a possibility that the results were only due to chance. Fourth, stent characteristics other than polymer durability may have been confounding variables that were unaccounted for in our analysis.
Conclusions
BP-DES was associated with decreased all-cause mortality and cardiovascular mortality compared to DP-DES in patients who underwent PCI using RA for calcified coronary lesions. There was no difference in TLF between the two groups. However, these results should only be considered as hypothesis generating, and further studies are needed to confirm these results.
|
v3-fos-license
|
2020-12-30T06:18:36.684Z
|
2020-12-29T00:00:00.000
|
229714312
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://aiche.onlinelibrary.wiley.com/doi/pdfdirect/10.1002/btpr.3119",
"pdf_hash": "02a1293dfe2adf8d9b54af230e7428cc86731205",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:940",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"sha1": "8f89ad9aaad043a6cff3a3c3b59fcd25e1826892",
"year": 2021
}
|
pes2o/s2orc
|
Safety risk management for low molecular weight process‐related impurities in monoclonal antibody therapeutics: Categorization, risk assessment, testing strategy, and process development with leveraging clearance potential
Abstract Process‐related impurities (PRIs) derived from the manufacturing process should be minimized in the final drug product. ICH Q3A provides a regulatory road map for PRIs but excludes biologic drugs like monoclonal antibodies (mAbs) that contain biological PRIs (e.g., host cell proteins and DNA) and low molecular weight (LMW) PRIs (e.g., fermentation media components and downstream chemical reagents). Risks from the former PRIs are typically addressed by routine tests to meet regulatory expectations, while a similar routine‐testing strategy is unrealistic and unnecessary for LMW PRIs, and thus a risk‐assessment‐guided testing strategy is often utilized. In this report, we discuss a safety risk management strategy including categorization, risk assessment, testing strategy, and its integration with other CMC development activities, as well as downstream clearance potentials. The clearance data from 28 mAbs successfully addressed safety concerns but did not fully reveal the process clearance potentials. Therefore, we carried out studies with 13 commonly seen LMW PRIs in a typical downstream process for mAbs. Generally, Protein A chromatography and cation exchange chromatography operating in bind‐and‐elute mode showed excellent clearances with greater than 1,000‐ and 100‐fold clearance, respectively. The diafiltration step had better clearance (greater than 100‐fold) for the positively and neutrally charged LMW PRIs than for the negatively charged or hydrophobic PRIs. We propose that a typical mAb downstream process provides an overall clearance of 5,000‐fold. Additionally, the determined sieving coefficients will facilitate diafiltration process development. This report helps establish effective safety risk management and downstream process design with robust clearance for LMW PRIs.
While ICH Q3A provides a regulatory road map for PRIs, it clearly states that biologic drugs are excluded. For biologic drugs, safety risk concerns from PRIs are currently addressed primarily on a case-by-case basis and carried out in different ways developed by pharmaceutical companies. 6 For biologic drugs like mAbs, PRIs generally arise from the cell substrates (e.g., host cell proteins and host cell DNA), the cell culture process (e.g., media components and antifoam), and the purification process (e.g., Protein A leachate from affinity column and detergents used for viral inactivation). 1 Safety risks of the biologically derived macromolecules or biological PRIs (such as host cell protein, DNA and protein A leachate) are managed through routine testing to assure that they remain below acceptable ranges. [6][7][8][9] The risk assessments, process clearance, and assays for biological PRIs have been reviewed in multiple recent publications. [10][11][12][13][14][15] Putatively acceptable residual levels that are based on human consumption safety history or observations from clinical trials are often used to guide process development, such as 100 parts-per-million (ppm) for residual host cell proteins (HCPs) 10 and 10 ng per dose for DNA. 14 Most of the upstream PRIs (e.g., vitamins and anti-foam) and downstream PRIs (e.g., buffers and reagents) have low molecular weight (LMW) compared to the biological PRIs such as HCPs and DNA. These PRIs are usually considered too small to constitute epitopes that can be recognized by the mammalian immune system, 16 thus the immunogenicity risk is fairly low and can be neglected. Some LMW PRIs (e.g., metal ions) potentially impact protein stability as discussed in a recent review paper 13 and the impact can be evaluated by stability studies; therefore, these risks are not discussed in this report.
We are focused on the safety risk arising from potential toxicity of the LMW PRIs. ICH Q3C (R6), 17 Q3D (R1), 18 Q6B, 1 Q9, 19 and M7 (R2) 20 guidelines provide relevant guidance and recommendations; however, safety risk assessment for PRIs in biologic drugs remains complicated. 9,21,22 Testing every LMW PRI might be the most assuring approach to guarantee no safety risk to patients, but routine tests of all LMW PRIs for every manufacturing lot are unrealistic and unnecessary. Therefore, a science-based safety risk assessment is highly encouraged to meet regulatory expectations, and pharmaceutical companies often implement a safety-risk-assessment-guided testing strategy for LMW PRIs. 6,22,23 In this report, we discuss a LMW PRI safety risk management process that consists of multiple stages that can be integrated with CMC development activities. The categorization, risk assessment approaches, testing strategy, downstream clearance, decision tree, and process development aiming for robust PRI removal are also discussed.
Risk assessment approaches, impurity safety factor, and clearance calculation
A risk assessment can be carried out using the PDE (permitted daily exposure), which is the maximum acceptable intake per day of an impurity in pharmaceutical products. 17 A PDE is preferably derived from the NOEL (no-observed-effect level) with the following Equation (1):

PDE = (NOEL × weight adjustment) / (F1 × F2 × F3 × F4 × F5)   (1)

where F1 accounts for extrapolation between species, F2 is a factor of 10 to account for variability between individuals, F3 is a variable factor to account for toxicity studies of short-term exposure, F4 is a factor that may be applied in cases of severe toxicity, and F5 is a variable factor applied if a LOEL (lowest observed effect level) is used. 17 For a LMW PRI with no available NOEL or LOEL, that is, when a PDE cannot be determined through Equation (1), a safety risk assessment can be carried out with an impurity safety factor (ISF) calculation. 6 The ISF represents the distance between a toxicity dose and the PRI dose contained in a product dose, and is calculated with the following Equation (2):

ISF = toxicity dose / (PRI dose in a product dose)   (2)

When the testing result for a PRI was "not detectable," the assay limit of detection (LOD) was used in the equation.

The membrane was stored in 0.1 M sodium hydroxide. The impurity to be tested was spiked before the start of diafiltration. Samples were taken after each diavolume (DV) and tested by the corresponding qualified assay. Clearance of the tested PRIs was analyzed with the following Equation (5) 24 :

C / C0 = e^(−N × S)   (5)

where C is the final concentration of the PRI, C0 is the initial PRI concentration, N is the number of DV, and S is the sieving coefficient. S was determined by fitting the equation to the experimental data. Unless mentioned otherwise, the fittings had R factors greater than 95%.

Chromatography instrument and operations
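As a rough numerical illustration of Equations (1), (2), and (5), the calculations can be expressed as small functions. The factor values, doses, and spike levels below are hypothetical defaults for illustration, not values from this study:

```python
import math

def pde_mg_per_day(noel_mg_per_kg_day, weight_kg=50.0,
                   f1=5.0, f2=10.0, f3=1.0, f4=1.0, f5=1.0):
    """Equation (1): PDE from a NOEL with modifying factors (illustrative values)."""
    return noel_mg_per_kg_day * weight_kg / (f1 * f2 * f3 * f4 * f5)

def isf(toxicity_dose_mg, pri_dose_per_product_dose_mg):
    """Equation (2): impurity safety factor = toxicity dose / PRI dose per product dose."""
    return toxicity_dose_mg / pri_dose_per_product_dose_mg

def diafiltration_fold_clearance(n_dv, sieving_coefficient):
    """Equation (5) rearranged: fold clearance C0/C = exp(N * S)."""
    return math.exp(n_dv * sieving_coefficient)
```

For example, 6 diavolumes with a sieving coefficient near 1 gives roughly 400-fold clearance, consistent in magnitude with the greater-than-300-fold clearance reported later for copper ion and MSX.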
Analytical assays and sample testing
The safety risk level is determined according to the safety and toxicity data available in scientific literature and public databases, as well as information from regulatory guidance. 4 In brief, PRIs carrying low safety risk are considered "known-to-be-safe" and can be eliminated from the safety risk management process. PRIs with reported medium toxicity are considered to pose medium risk, while PRIs with reported genotoxicity and carcinogenicity are considered to pose high risk. PRIs with medium and high risks are carefully managed in the following three parts. The Part (1) categorization mainly focuses on the toxicity of the PRI, and the risk associated with the usage amount is evaluated in the following Part (3). Part (2) consists of process development. As a rule of thumb, high-risk PRIs should be avoided; medium-risk PRI usage should balance risk and process benefit after process clearance knowledge is obtained; while low-risk PRI usage may have more flexibility to maximize process benefits. Additionally, acceptance criteria can be set up for raw materials to simplify risk management and reduce the testing burden. Maintaining process clearance data for LMW PRIs builds the knowledge base about the process clearance potential for different PRIs, which has the potential to reduce future process development activities. Part (3) is a safety risk assessment for the remaining high- and medium-risk PRIs to further define their safety risk levels. Generally, when a PRI has a significant safety margin or its residual level is well below the safety dose, the PRI can be considered to pose a low safety risk. For PRIs with a limited safety margin (a level close to the safety dose) or no safety margin (a level at or above the safety dose), additional actions must be taken (such as testing or process change) to minimize their safety risks. Part (4) consists of assay development and testing for LMW PRIs that are identified in Part (3) to demonstrate process clearance.
A suitable assay with sufficient sensitivity needs to be developed and confirmed to be compatible with the samples to be tested. Appropriate testing points need to be selected, and a PRI testing plan is established for GMP manufacturing.
Overall, implementation of the safety risk management process helps to systematically eliminate safety risk and meet regulatory expectations, as well as streamline CMC development.
Figure 1. Schematic of the safety risk management process for LMW PRIs.

Figure 2. Decision tree for LMW PRI risk assessment. PDE, permitted daily exposure; TTC, threshold of toxicological concern.

Categorization, safety risk assessment approaches, and decision tree for LMW PRIs

Categorization of LMW PRIs is an initial risk identification step. Generally, LMW PRIs can be categorized into three groups based on toxicological risks: A, B, and C (Figure 2).
Category A contains LMW PRIs that inherently pose no safety risk and are termed "known-to-be-safe" PRIs within the safety risk assessment. Many LMW PRIs derived from upstream processes are nutrients (such as amino acids, vitamins, salts, lipids, carbohydrates and trace elements) required for cell growth. Many of these PRIs can be found in humans as naturally existing chemicals, that is, human metabolites.
Metabolites and their concentration ranges in humans can be found in the Human Metabolome Database. 26 Schenerman et al. 23 proposed an approach termed the "impurity safety factor (ISF)" to measure the distance between the PRI level in a dose of product and the established toxicity dose. The PRI is considered to pose no safety risk only when the ISF is greater than the defined threshold value. Subsequently, the CMC Biotech Working Group, consisting of industry experts, adopted this ISF approach in a white paper entitled "A-Mab: A Case Study in Bioprocess Development" 6 ; and the PhRMA working group included the ISF approach in its advice on applying "quality by design for biotechnology products." 7 To measure the safety risk of Category B1 PRIs, ISF values can be calculated using Equation (2). The threshold ISF value can be carefully determined based on the dose-response relationship. 23 For Category C PRIs, risk assessment is performed by comparing the PRI dose in a product dose to a TTC value (typically 1.5 μg/day). 20 Step 2 safety risk assessment is divided into two sub-steps, as shown in Figure 2.
Step 2a risk assessment uses worst-case assumptions. The main assumption is that the PRIs are co-purified with the product into the final drug substance, that is, process clearance is not considered. If the resulting ISF is not acceptable, the process (e.g., improving it to achieve sufficient removal) or the analytical method (e.g., addressing poor sensitivity) needs to be improved, and the ISF should be recalculated until it is acceptable.
Additionally, for a PRI that has no available safety/toxicity data, or a PRI without chemical identity, the risk assessment can be carried out by assuming that the PRI has the highest safety risk and following the assessment workflow for a Category C PRI.

Safety risk assessment of LMW PRIs in a mAb

Step 2a assessment was carried out for the nine PRIs. The results suggested that six out of the nine PRIs posed no safety risk even without accounting for process clearance. The remaining three PRIs were considered to pose safety risks without accounting for process clearance. Accordingly, in-process testing for the three PRIs was added to the testing plan for GMP manufacturing, and testing was carried out at the viral filtration step to demonstrate process clearance. The three PRIs (an antifoam, an anti-shear protectant, and a chemical reagent for cell line selection) were not detected in the samples by the corresponding assays.
Step 2b assessment was carried out using the corresponding assay detection limits and the results demonstrated that the three PRIs posed no safety risks. Therefore, the 105 PRIs in this example posed no safety risk and their safety risks were successfully managed.
Clearance data for 6 LMW PRIs from 28 mAbs
Removal of LMW PRIs by downstream manufacturing processes is one aspect that determines their risk level, and downstream process clearance is therefore critical for PRI safety risk mitigation. Knowing the process clearance potential should help risk mitigation. Figure 5 shows the clearance data from large-scale GMP manufacture of 28 different mAbs (Table 1). These PRIs have molecular weights ranging from 70 to 9,000 g/mol and have different physical properties such as charge and hydrophobicity.
Clearance of LMW PRIs in Protein A chromatography
The results from the spiking/clearance study on Protein A chromatography are summarized in Table 2 and Figure 7a. Interestingly, as shown in Table 2, low levels of dextran sulfate and Triton X-100 were detected in the Wash fractions but not in the Eluate fraction. The results indicate that these two PRIs were weakly retained on the Protein A column during loading, likely due to weak interactions with the mAb proteins or the resins. These weak interactions were effectively disrupted by the wash condition, because the two PRIs were not detected in the Eluate. Therefore, a wash condition can further improve the PRI removal capability of Protein A chromatography. Due to potential weak interactions between LMW PRIs and the mAb, mAb properties (such as charge and hydrophobicity) may affect LMW PRI removal. As shown in Figure 7a, clearance of the same PRI was similar between the two different mAbs, suggesting that the contribution from the mAbs to PRI removal may be negligible.
As shown in Figure 7a, more than 1,000-fold clearance was achieved for all tested LMW PRIs. Clearance for EDTA, polysaccharide, MSX, PEG8000, and Triton X-100 was greater than 10,000-fold. Along with the historical data summarized in Figure 5 and the recent publication, 4 these results support the robust LMW PRI clearance capability of Protein A chromatography.
Clearance of LMW PRIs in cation exchange chromatography
The results from spiking/clearance studies from cation exchange chromatography are summarized in Table 3 and Figure 7b. Except for dextran sulfate and Triton X-100, all of the spiked PRIs in the feed were removed in the column flow-through, and the remaining level in the elution fraction was very low with most as "not detectable." The clearance for BME, polysaccharide, monothiol glycerol, Pluronic F68, and simethicone was more than 1000-fold. Unlike Protein A chromatography, removal of dextran sulfate by cation exchange chromatography was not as effective compared to the other tested PRIs.
Considering that dextran sulfate is negatively charged under the pH conditions, 31 its interactions with the positively charged mAbs may reduce the clearance. The removal of Triton X-100 on cation exchange chromatography was also less than that on Protein A chromatography. Similar clearance of dextran sulfate and Triton X-100 on cation exchange chromatography was also observed for mAb B (Figure 7b), suggesting that mAb-specific interactions are unlikely to be the major reason. The retention mechanism is likely weak interactions between Triton X-100 and the mAb; however, further investigation is needed to confirm this hypothesis.

Clearance of LMW PRIs during diafiltration

As shown in Figure 7c, copper ion and MSX were effectively removed by diafiltration. A typical 6 DV diafiltration resulted in greater than 300-fold clearance for these two PRIs. Equation (5) was fitted to the data shown in Figure 7c, and the sieving coefficients for copper(II) and MSX were estimated to be 1.09 and 1.02, respectively, suggesting nearly ideal sieving. Significant removal of EDTA, tropolone, and caprolactam was also achieved by diafiltration, although the clearance for these three PRIs was not as effective as for copper ion and MSX. Accordingly, the obtained sieving coefficients for these three PRIs were in the range of 0.58-0.83. Very limited clearance was obtained for Pluronic F68, even at a spiked concentration (450 μg/ml) significantly lower than the critical micelle concentration (1,900 μg/ml). 32 Poor clearance is expected when the Pluronic F68 concentration is higher than the critical micelle concentration, because the size of the micelles is greater than the TFF membrane MWCO. The sieving coefficient for Pluronic F68 was estimated to be 0.11.
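Under Equation (5), ln(C0/C) is linear in the number of diavolumes with slope S, so a sieving coefficient can be recovered from per-DV concentration measurements by a least-squares fit through the origin. A minimal sketch on synthetic data (S = 0.8 is an assumed value for illustration, not one from the study):

```python
import numpy as np

def fit_sieving(n_dv, conc, c0):
    """Least-squares slope of ln(C0/C) versus N through the origin (Equation (5))."""
    n = np.asarray(n_dv, dtype=float)
    y = np.log(c0 / np.asarray(conc, dtype=float))
    return float(n @ y / (n @ n))

# synthetic 6-DV diafiltration run generated with an assumed S = 0.8
n = np.arange(1, 7)
c0 = 100.0
conc = c0 * np.exp(-0.8 * n)   # C = C0 * exp(-N * S)
s_hat = fit_sieving(n, conc, c0)
```

On real data the residuals of this fit would also reveal departures from ideal sieving, such as the interaction effects discussed for EDTA and Pluronic F68.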
Copper ion had a sieving coefficient slightly greater than one, likely due to electrostatic repulsion between the positively charged copper ion and the positively charged mAb. 33,34 Similarly, EDTA removal was not as effective as that of copper ion or the neutrally charged MSX. In terms of reasonably reducing the testing burden, removing all in-process testing for PRIs and assuming good clearance certainly poses significant risks. Based on the results presented in this work, assuming no process clearance is highly conservative, whereas assuming some degree of process clearance for a typical mAb downstream process has a scientific basis and is reasonable. Our studies showed that Protein A chromatography and cation exchange chromatography (operated in bind-elute mode) had greater than 1,000- and 100-fold clearance, respectively. The typical diafiltration process is also capable of removing LMW PRIs, generally by more than 100-fold, but the clearance potential can be affected by PRI chemical properties such as charge and hydrophobicity, as demonstrated in this study and several recent reports. 24,34 The downstream clearance potential must be considered in the safety risk assessment to avoid any unnecessary testing. In the absence of clearance data, the initial risk assessment (Figures 2 and 3) based on the worst-case assumption that there is no clearance during downstream processing is likely to lead to some testing. However, the clearance data presented here suggest that it is quite reasonable to assume some conservative level of clearance, which can help reduce the testing burden. For the overall process, a minimum clearance of 5,000-fold can be assumed for mAb purification processes, with 100-fold clearance from the Protein A chromatography step, 10-fold clearance from the cation exchange chromatography (in bind-elute mode), and fivefold clearance from the diafiltration process.
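The proposed 5,000-fold minimum is simply the product of the conservative per-step clearances, and it translates directly into a worst-case residual estimate. A short check (the 1,000 µg spike level is a hypothetical example, not a study value; the 1.5 µg/day threshold is the TTC cited earlier from ICH M7):

```python
# conservative per-step fold clearances proposed in the text
protein_a = 100       # Protein A chromatography
cex = 10              # cation exchange, bind-and-elute mode
diafiltration = 5     # diafiltration step
overall = protein_a * cex * diafiltration   # multiplicative stack-up

spike_ug = 1000.0                   # hypothetical PRI amount per dose before purification
residual_ug = spike_ug / overall    # conservative residual estimate after the process
ttc_ug_per_day = 1.5                # TTC threshold (ICH M7)
```

Here even a 1,000 µg per-dose load would be reduced to 0.2 µg, below the TTC, illustrating how an assumed minimum clearance can remove a PRI from routine testing.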
With this assumption, an additional assessment taking into account a minimum process clearance can be added to the decision tree in Figure 2. After accumulating sufficient clearance data, testing for some LMW PRIs may be avoided for mAbs using the platform process. The gained knowledge of the clearance potential of each unit operation also facilitates process development for a new mAb. It is noteworthy that clearance of dextran sulfate and Triton X-100 by bind-elute cation exchange chromatography was significantly lower than the clearance by Protein A chromatography. The poor clearance of dextran sulfate may be explained by potential electrostatic interactions between dextran sulfate and the resins or bound mAbs. The mechanism for the retention of Triton X-100 by cation exchange chromatography needs further study. Furthermore, we found that the properties (such as charge and hydrophobicity) of the protein and/or PRI could impact clearance during tangential flow filtration through potential weak interactions between the PRI and proteins. These undesired interactions led to lower sieving coefficients for several commonly seen PRIs.
The sieving coefficients obtained in our study for the commonly seen LMW PRIs can be used to guide diafiltration development to achieve the desired clearance. Taken together, this report establishes an effective safety risk management approach and a rational design of robust downstream processes for LMW PRIs.
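For a small solute that passes the membrane with sieving coefficient S, the standard constant-volume diafiltration model predicts washout as C_N = C_0 · exp(−N·S) over N diavolumes. A sketch of how a measured sieving coefficient could guide the number of diavolumes needed for a target clearance (the values here are illustrative, not from the study):

```python
import math

def diavolumes_for_clearance(sieving_coefficient: float, fold_clearance: float) -> float:
    """Diavolumes N needed so that C0/CN >= fold_clearance, using the
    constant-volume diafiltration model CN = C0 * exp(-N * S)."""
    return math.log(fold_clearance) / sieving_coefficient

# Illustrative: a freely passing PRI (S ~ 1.0) vs. one partially retained
# by weak interactions with the protein (S ~ 0.5, hypothetical value).
print(round(diavolumes_for_clearance(1.0, 100), 1))  # ~4.6 diavolumes
print(round(diavolumes_for_clearance(0.5, 100), 1))  # ~9.2 diavolumes
```

The comparison shows why lower sieving coefficients, as observed for charged or hydrophobic PRIs, translate directly into more diavolumes for the same clearance target.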
The Impact of Climate Change on Agricultural Insect Pests
Simple Summary

Climate change and extreme weather events have a major impact on crop production and agricultural pests. As generally adaptable organisms, insect pests respond differently to different causes of climate change. In this review, we address the effects of rising temperatures and atmospheric CO2 levels, as well as changing precipitation patterns, on agricultural insect pests. Since temperature is the most important environmental factor affecting insect population dynamics, it is expected that global climate warming could trigger an expansion of their geographic range, increased overwintering survival, increased number of generations, increased risk of invasive insect species and insect-transmitted plant diseases, as well as changes in their interaction with host plants and natural enemies. As climate change exacerbates the pest problem, there is a great need for future pest management strategies. These include monitoring climate and pest populations, modified integrated pest management strategies, and the use of modelling prediction tools which are presented here.

Abstract

Climate change and global warming are of great concern to agriculture worldwide and are among the most discussed issues in today's society. Climate parameters such as increased temperatures, rising atmospheric CO2 levels, and changing precipitation patterns have significant impacts on agricultural production and on agricultural insect pests. Changes in climate can affect insect pests in several ways. They can result in an expansion of their geographic distribution, increased survival during overwintering, increased number of generations, altered synchrony between plants and pests, altered interspecific interaction, increased risk of invasion by migratory pests, increased incidence of insect-transmitted plant diseases, and reduced effectiveness of biological control, especially natural enemies.
As a result, there is a serious risk of crop economic losses, as well as a challenge to human food security. As a major driver of pest population dynamics, climate change will require adaptive management strategies to deal with the changing status of pests. Several priorities can be identified for future research on the effects of climatic changes on agricultural insect pests. These include modified integrated pest management tactics, monitoring climate and pest populations, and the use of modelling prediction tools.
Introduction
Throughout history, human population growth has been accompanied by many changes in everyday life, culture, technology, science, the economy, and agricultural production. Agricultural production itself has undergone several major changes (agricultural revolutions), which have been influenced by the development of civilization, technology, and general human advancement. However, the exceptional population growth of the last 100 years has had many undesirable consequences that, along with changes in environmental conditions, affect the security of the food supply. The growing world population places rising demands on crop production, and accordingly, global agricultural output will need to increase substantially by 2050.
Climate under Change
The climate is a crucial element that determines various characteristics and distributions of managed and natural systems, including hydrology and water resources, cryology, marine and freshwater ecosystems, terrestrial ecosystems, forestry and agriculture [5]. It can be described as the long-term pattern of environmental factors such as temperature, humidity and precipitation. As a result of increased temperatures, climate extremes, increased CO2 and other greenhouse gases (GHGs), as well as altered precipitation patterns, global food production is under severe threat [6]. Global warming is a serious problem facing the world today. It has reached record-breaking levels, as evidenced by unprecedented rates of increase in atmospheric temperature and sea level [7]. According to the World Meteorological Organisation (WMO), the world is now about one degree warmer than before widespread industrialization. The Intergovernmental Panel on Climate Change (IPCC) [7] also reported that each of the last three decades has been successively warmer, with the 2000s being the warmest decade. Based on a range of global climate models and development scenarios, the Earth could experience global warming of 1.4 to 5.8 °C over the next century [8]. The main cause of global warming is the increased concentration of greenhouse gases in the atmosphere. The most prevalent of these gases are carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O), which result from many anthropogenic activities, including the burning of fossil fuels and land-use change [9]. Looking at the period of industrialization over the last two centuries, the concentration of greenhouse gases has increased immensely compared to the pre-industrial era [10]. Among the greenhouse gases, CO2 is the most important and the most abundant [11].
The increase in atmospheric CO2 is one of the most thoroughly documented global changes in the atmosphere over the last half century [12]. Its concentration has risen dramatically to 416 ppm, compared with the 280 ppm of the pre-industrial period, and is likely to double by 2100 [8,13]. CO2 is considered a greenhouse gas because it strongly absorbs certain wavelengths of the thermal infrared radiation emitted from the Earth's surface. The greater the amount of atmospheric gases that absorb this radiation, the greater the proportion of radiation emitted from the atmosphere back toward the Earth's surface [14]. As a result, the long-wave balance of the Earth's surface becomes less negative, and more energy is available for sensible and latent heat flux at the surface, which leads to an increase in air temperature [15]. Changes in extreme weather and climate events have been observed since the mid-20th century. Many of these changes, which have been linked to anthropogenic influences, include a reduction in cold temperature extremes, an increased occurrence of warm temperature extremes, enhanced rates of sea level rise, and an increase in the frequency of heavy precipitation events in numerous regions. Heat waves are expected to become more frequent and last longer, and extreme precipitation events are expected to be more intense and frequent in certain areas [7]. It is very likely that precipitation patterns will change and will not be uniform. In the higher latitudes and the equatorial Pacific, mean annual precipitation appears to be increasing. In the dry mid-latitude and subtropical regions, mean precipitation is likely to decrease, while in the wet mid-latitude regions it is likely to increase. Extreme precipitation events in most mid-latitude areas and humid tropical regions are likely to become more frequent and intense [7].
The United Nations (UN) and the IPCC have made numerous decisions to reduce GHG emissions, provide financial assistance to developing countries, and improve adaptive capacity to meet the challenges posed by the harmful effects of climate change.
Impact of Climate Change on Crop Production
Agriculture, often referred to as an open-air factory, is an economic activity that depends heavily on climate and certain weather conditions to produce food and many other goods necessary to sustain human needs. Moreover, agriculture is an activity that is exceptionally vulnerable to climate change, and the impacts of climate change are characterized by various types of uncertainty [16]. Climate change is estimated to have both positive and negative impacts on agricultural systems at the global level, with the negative impacts outweighing the positive ones [17]. Temperature increases, altered precipitation patterns, and increased CO2 concentrations have a significant impact on ecosystems, from the species to the ecosystem level [18]. For this review, it is important to first explain the effects of climate change on crop production, as the effects of climate change on insect pests depend on the plant species on which these insects thrive and feed.
Impact of Temperature Increase
Temperature is considered one of the most important factors affecting the distribution and abundance patterns of plants, due to the physiological limits of each species [19]. It limits the geographical areas where different crops can be grown and affects the rate of development, growth and crop yields [20]. Agricultural crops have basic temperature requirements to complete a particular phenophase as well as the entire life cycle. In addition, extremely low and high temperatures can have detrimental effects on crop development, growth, and yield, especially during critical phenophases (such as anthesis) [21]. It is predicted that the spring-summer season will have higher air temperatures, which would benefit crop production in northern locations where the length of the growing season is currently a limiting factor [22]. The effects of temperature increase are generally associated with other environmental factors such as water availability, the occurrence of strong winds, and the intensity and duration of sunlight [20]. The direct negative influence of temperature on yield can be compounded by its indirect influence on these environmental factors. For example, a rise in temperature increases atmospheric water demand, which may lead to additional water stress through higher vapour pressure deficits; this reduces soil moisture and eventually decreases yield [23]. Other indirect effects of temperature rise include an increased frequency of heat waves and impacts on pests, weeds, and plant diseases [7].

Impact of Elevated CO2 Concentration

CO2 is the essential chemical compound for photosynthesis, a process in which water and CO2 are converted into sugars and starch, powered by solar energy. Photosynthesis occurs in the green pigments of leaves, and CO2 must enter through stomatal openings [24].
Since carbon is the key element in plant structure, increased CO2 concentration enables faster growth through more rapid carbon assimilation [25]. The main effects of elevated CO2 on plants include a reduction in transpiration and stomatal conductance, improved water- and light-use efficiency, and thus an increase in photosynthetic rate. Consequently, elevated atmospheric CO2 concentrations could have a direct impact on ecosystems by stimulating plant development and growth [26]. Although higher CO2 concentrations could increase crop yields, the magnitude of the effect remains to be determined. Finally, CO2 will affect C3 and C4 plants differently, because the C4 group of plants is less sensitive to an increase in atmospheric CO2 concentration than C3 plants [22,27]. However, Rötter and Van de Geijn [24] found that both C3 and C4 plants will benefit from an increase in atmospheric CO2 concentration. The vast majority of crop plants use the C3 carboxylation pathway (Calvin cycle), which is the oldest of the carbon fixation pathways and is found across all plant taxa [28,29]. The distinction is based on the observation that the first product of C3 photosynthesis is a 3-carbon molecule, whereas in C4 photosynthesis the first photosynthetic product is a 4-carbon molecule. C4 photosynthesis occurs in more highly developed plant taxa, and the major C4 crop species include maize, sorghum and sugarcane, all of tropical origin [24]. Only 3% of all flowering plant species are C4 plants, and yet they account for about 50% of the 10,000 grass species [30]. C4 plants have about 50% higher photosynthetic efficiency than C3 plants (e.g., rice, wheat, soybean, potato, etc.), indicating very high productivity. This is due to the different carbon fixation mechanisms of the two photosynthesis types.
The C3 type of photosynthesis uses only the Calvin cycle for CO2 fixation, catalyzed by the enzyme Rubisco inside the chloroplasts of mesophyll cells. In the C4 group of plants, photosynthetic activities are divided between mesophyll and bundle sheath (BS) cells, which are biochemically and anatomically distinct. Primary carbon fixation is catalyzed by the enzyme PEPC (phosphoenolpyruvate carboxylase), which forms OAA (oxaloacetate) from CO2 and PEP (phosphoenolpyruvate). OAA is converted to malate (the salt of malic acid), which then diffuses into the BS cell, where it is decarboxylated to provide an elevated CO2 concentration around the Rubisco enzyme. Finally, the initial substrate of the C4 cycle, PEP, is regenerated in the mesophyll cell by the enzyme PPDK (pyruvate orthophosphate dikinase) [31]. This CO2-concentrating mechanism suppresses the oxygenation reaction of Rubisco and the subsequent energy-wasting process of photorespiration, resulting in increased photosynthetic yield and improved water- and nitrogen-use efficiency compared to C3 plants [32]. C4 plants are usually found in warmer environments, such as tropical grasslands, where photorespiration rates would be very high for C3 plants [30]. Under these conditions, the efficiency of C4 photosynthesis is therefore greater than that of C3 photosynthesis [29]. Cure et al. [33] also noted that plants with nitrogen-fixing symbionts (e.g., soybean, alfalfa, lupine, etc.) tend to benefit more from an increased CO2 supply than other plant species under environmental conditions favourable for both the plant and the symbiont.
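The carbon shuttle described above can be summarized as a simplified reaction sequence (cofactors and transport steps omitted):

```latex
\begin{align*}
\text{Mesophyll:}\quad & \mathrm{PEP} + \mathrm{CO_2}
    \xrightarrow{\text{PEPC}} \mathrm{OAA} \longrightarrow \text{malate} \\
\text{Bundle sheath:}\quad & \text{malate} \longrightarrow \text{pyruvate} + \mathrm{CO_2}
    \quad (\mathrm{CO_2}\ \text{refixed by Rubisco via the Calvin cycle}) \\
\text{Mesophyll:}\quad & \text{pyruvate} \xrightarrow{\text{PPDK}} \mathrm{PEP}
\end{align*}
```

The net effect of the cycle is to pump CO2 from the mesophyll into the bundle sheath, raising its concentration around Rubisco and suppressing photorespiration.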
Impact of Changeable Precipitation Pattern
Crop production is strongly influenced by water availability. Climate change will alter rainfall patterns, soil moisture storage, evaporation and runoff. More than 80% of total global crop production is estimated to be supplied by rainfall, and therefore changes in total seasonal rainfall or its patterns are very important [34]. There is clear evidence of an amplification of the global hydrological cycle, which is strongly influenced by changes in temperature. However, its impact on crop production is still very difficult to predict, as it depends on other climate parameters such as the intensity and frequency of extreme weather events [35]. Changes in precipitation patterns may be of greater importance to agriculture than changes in temperature, especially in regions where dry seasons may be a limiting factor for crop production [3]. According to Lickley and Solomon [36], a drying trend is emerging in Southern and Northern Africa, parts of Latin America, Australia and Southern Europe. Moreover, models predict significant drying for these regions, as well as for the southern parts of North America by mid-century, with an increase in drought of more than 10% and a moisture deficit of more than 200 mm per year. In Mediterranean countries, cereal yields are limited by water scarcity, heat stress and a short grain-filling duration. Permanent crops such as olives, grapevine and citrus are therefore of greater importance in this region. These crops are greatly affected by extreme weather events such as hail and storms, which can reduce or completely destroy yield [34]. Due to high evapotranspiration and limited rainfall, attention should be given to the development of irrigation techniques that allow efficient and effective use of available water resources, as well as to good agronomic practices that emphasize moisture conservation and thus improve crop productivity [37].
Lack of water in the soil can cause plants to lose their biological functions and become even more susceptible to diseases and pests [38]. On the other hand, the world has become wetter in large areas such as northern Europe and eastern parts of the Americas, with extreme rainfall events contributing strongly to the increase in global precipitation [8]. Direct analysis of precipitation extremes (largest annual 1-day precipitation accumulation/largest annual 5-day precipitation accumulation) shows that extreme precipitation has increased in large parts of the world, with an increase in the potential of a typical 2-year event of about 7% over the period from 1951 to 1999 [39,40]. Due to the wet weather conditions on the Atlantic coast and in the European mountainous regions, there are cold and rainy summers that lead to yield and quality losses in various arable crops [34]. These wet conditions can also affect the workability of the soil and reduce the number of working days of agricultural machinery [41]. Overall, the exact nature of forthcoming climatic changes is still uncertain, but current projections indicate that they are very likely to have serious impacts on crops in the near future.
Impact of Climate Change on Insect Pests
Global climate changes have significant impacts on agriculture and also on agricultural insect pests. Agricultural crops and their corresponding pests are directly and indirectly affected by climate change. Direct impacts are on pests' reproduction, development, survival and dispersal, whereas indirectly the climate change affects the relationships between pests, their environment and other insect species such as natural enemies, competitors, vectors and mutualists [4]. Insects are poikilothermic organisms; the temperature of their body depends on the temperature of the environment. Thus, temperature is probably the most important environmental factor affecting insect behaviour, distribution, development and reproduction [42]. Therefore, it is very likely that the main drivers of climate change (increased atmospheric CO2, increased temperature and decreased soil moisture) could significantly affect the population dynamics of insect pests and thus the percentage of crop losses [43]. Climate change creates new ecological niches that provide opportunities for insect pests to establish and spread in new geographic regions and shift from one region to another [44]. The complexity of physiological effects exerted by rising temperatures and CO2 can profoundly affect interactions between agricultural crops and insect pests [45][46][47]. Therefore, farmers can expect to face new and intense pest problems in the coming years due to the changing climate. The spread of crop pests across physical and political boundaries threatens food security and is a global problem common to all countries and all regions [44].
Response of Insect Pests to Increased Temperature
Insect physiology is very sensitive to changes in temperature, and their metabolic rate tends to approximately double with an increase of 10 °C [48]. In this context, many researchers have shown that increased temperature tends to accelerate insect consumption, development, and movement, which can affect population dynamics by influencing fecundity, survival, generation time, population size, and geographic range [49]. Species that cannot adapt and evolve under increased temperatures generally have difficulty maintaining their populations, while other species can thrive and reproduce rapidly. Temperature plays an important role in metabolism, metamorphosis, mobility, and host availability, which determines the possibility of changes in pest populations and dynamics [6] (Figure 1). Given the distribution and behaviour of insect pests, it can be hypothesised that an increase in temperature should be associated with increased herbivory [50], as well as with changes in the growth rate of insect populations [51]. Thus, insect populations in tropical zones are predicted to experience a decrease in growth rate under climate warming, because current temperatures there are already close to the optimum for pest development and growth, while insects in temperate zones are expected to experience an increase in growth rate [51]. The same authors confirmed this theory by estimating changes in the growth of pest populations in the production of the world's three major grain crops (wheat, rice and maize) under different climate change scenarios. According to the study, for wheat, which is normally grown in temperate climates, warming will accelerate the growth of pest populations.
For rice grown in tropical zones, they predict a decrease in the growth of pest populations, and for maize, grown in both temperate and tropical regions, mixed responses in pest population growth could be expected [51].
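The roughly twofold metabolic increase per 10 °C mentioned above is the classic Q10 rule, R(T2) = R(T1) · Q10^((T2 − T1)/10). A small illustrative sketch (Q10 = 2 per the text; the reference temperature and rate are hypothetical):

```python
def q10_rate(rate_ref: float, t_ref: float, t: float, q10: float = 2.0) -> float:
    """Scale a metabolic rate from reference temperature t_ref to t
    using the Q10 temperature coefficient."""
    return rate_ref * q10 ** ((t - t_ref) / 10.0)

# A 10 C warming doubles the rate; a 5 C warming gives about 1.41x.
print(q10_rate(1.0, 20.0, 30.0))            # 2.0
print(round(q10_rate(1.0, 20.0, 25.0), 2))  # 1.41
```

The exponential form makes clear why even moderate warming can appreciably accelerate insect consumption and development.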
The effects of increased temperatures are greater for aboveground insects than for those that spend most of their life cycle in the soil, because soil is a thermally insulating medium that can buffer temperature changes and thus reduce their impact [49]. For example, under warmer conditions, aphids are less susceptible to the aphid alarm pheromone they normally release when threatened by insect predators and parasitoids, which can lead to increased predation [52]. Whitefly populations are primarily regulated by environmental factors such as temperature, precipitation, and humidity. High temperature along with high humidity correlates positively with whitefly population build-up [53].
Future changes in insect population dynamics depend on the level of global temperature increase in the coming years. Climate models predict that the average global temperature will increase by 1.8-4 °C by the end of the current century [54][55][56]. As ambient temperatures generally increase toward optimal temperatures for growth and development of many insect pest species, potentially reducing thermal constraints on population dynamics, the severity of pest infestations is expected to increase under global warming scenarios [57]. However, given the narrow ecological niche requirements and physiological tolerances of insects, and the variable effects of temperature on their phenology and life history, global warming may not uniformly increase pest abundance and thus economic crop losses [58]. In their analysis, Lehmann et al. [58] showed mixed responses to climate warming in different insect pest species. Their results indicate that temperature rise leads to increased pest severity in most of their case studies. However, 59% of all species analysed showed responses that could reduce their harmful impact, mostly via reduced physiological performance and range contraction.
Another study of about 1100 insect species found that climate warming will drive about 15-37% of these species to extinction by 2050 [59,60]. The general consequences of global warming on insect dynamics include: expansion of geographic range, increased survival rates of overwintering populations, increased risk of introduction of invasive insect species, increased incidence of insect-transmitted plant diseases due to range expansion and rapid reproduction of insect vectors, and reduced effectiveness of biological control agents such as natural enemies.
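The "increased number of generations" effect can be quantified with a simple growing degree-day model: development accumulates as daily mean temperature above a lower threshold, and a generation completes once a species-specific thermal sum is reached. A minimal sketch (the threshold, thermal constant and temperatures below are hypothetical, not from the cited studies):

```python
def generations_per_season(daily_mean_temps, lower_threshold=10.0,
                           degree_days_per_generation=450.0):
    """Estimate completed generations from degree-days accumulated
    above a lower developmental threshold."""
    accumulated = sum(max(0.0, t - lower_threshold) for t in daily_mean_temps)
    return int(accumulated // degree_days_per_generation)

# Hypothetical 180-day season with uniform daily means for simplicity:
current = [16.0] * 180  # 6 degree-days/day -> 1080 DD -> 2 generations
warmer  = [18.0] * 180  # +2 C -> 1440 DD -> 3 generations
print(generations_per_season(current), generations_per_season(warmer))
```

Even a 2 °C rise in mean temperature adds a full generation in this toy scenario, which is the mechanism behind the predicted increase in generations under warming.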
Response of Insect Pests to Increased CO2 Concentration
Elevated concentrations of atmospheric CO2 can affect the distribution, abundance, and performance of herbivorous insects. Such increases can affect consumption rates, growth rates, fecundity, and population densities of insect pests [61]. Currently available data suggest that the effect of elevated atmospheric CO2 on herbivory is not only highly specific to individual insect species, but also to particular insect pest-host plant systems [62]. The effects of increasing CO2 levels on insect pests are highly dependent on their host plants. Increased CO2 levels would have a greater impact on C3 crops (wheat, rice, cotton, etc.) than on C4 crops (corn, sorghum, etc.). Therefore, these differential effects of elevated atmospheric CO2 on C3 and C4 plants may result in asymmetric effects on herbivory, and the response of insects feeding on C4 plants may differ from that of C3 plants. C3 plants are likely to be positively affected by elevated CO2 and negatively affected by insect response, whereas C4 plants are less responsive to elevated CO2 and therefore less likely to be affected by changes in insect feeding behavior [63].
As mentioned in the previous section, increased CO2 levels are likely to affect plant physiology by increasing photosynthetic activity, resulting in better growth and higher plant productivity. This in turn would indirectly affect insects by changing both the quantity and quality of plants and vegetation. A common feature of plants grown under elevated CO2 is a change in the chemical composition of leaves, which could affect the nutrient quality of foliage and its palatability to leaf-feeding insects [64] (Figure 2). In addition, such crops often accumulate sugars and starches in their leaves, which reduces palatability by altering the C (carbon) to N (nitrogen) ratio [65]. Nitrogen is the key element in the insect body for its development, and therefore increased CO2 concentration leads to an increased plant consumption rate in some pest groups [66]. This can lead to increased levels of plant damage, as pests must consume more plant tissue to obtain an equivalent level of food. Increased consumption rates are a common response of foliage feeders, such as caterpillars, miners, and chewers, to the reduction in nitrogen predicted under CO2 fertilization, i.e., compensatory feeding [50,67]. Hamilton et al. [67] conducted an experiment in which soybeans were grown at elevated atmospheric CO2 concentrations. During the early season, these soybeans exhibited 57% more damage from insects such as the Japanese beetle (Popillia japonica Newman), potato leafhopper (Empoasca fabae Harris), Mexican bean beetle (Epilachna varivestis Mulsant), and western corn rootworm (Diabrotica virgifera virgifera Le Conte) than soybeans grown under ambient atmospheric conditions. The study concluded that the measured increase in simple sugar content in soybean foliage may have stimulated compensatory insect feeding [67]. Under these conditions, insect herbivores tend to consume more plant material and thereby cause more plant damage [68,69].
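The compensatory-feeding logic above is simple proportionality: if foliar nitrogen concentration falls, an insect must consume proportionally more tissue to obtain the same nitrogen intake. A hypothetical illustration (the nitrogen percentages are invented for the example):

```python
def extra_consumption_factor(n_ambient: float, n_elevated: float) -> float:
    """Factor by which consumption must rise to keep nitrogen intake
    constant when foliar N concentration drops."""
    return n_ambient / n_elevated

# Hypothetical: foliar N falls from 4.0% to 3.2% under elevated CO2.
print(round(extra_consumption_factor(4.0, 3.2), 2))  # 1.25 -> 25% more tissue eaten
```

A 20% drop in foliar nitrogen thus translates into a 25% increase in tissue consumed, which is why nitrogen dilution can raise plant damage even when insect performance does not improve.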
Increased feeding rates do not always compensate for reduced diet quality, and consumption of plants grown under elevated CO2 conditions could reduce the efficiency of the arthropods that feed on them [70]. Responses to CO2 fertilization vary depending on the pest's feeding type. Whole-cell feeders such as thrips show an increase in population size [66]. Phloem-feeding insect pests, including whiteflies and aphids, show combined responses of increased population growth rates and decreased population density [71]. There are inconsistent reports on the effects of elevated CO2 on sucking insects, although in some cases abundance and fecundity may increase [60]. Stiling and Cornelissen [72] conducted a meta-analysis of studies on the indirect effects of a CO2 increase on life history parameters of herbivores. Their results showed strong responses of insect pests to increased CO2 compared to ambient CO2: (I) an increase in consumption rates of about 17%; (II) a decrease in pest abundance of about 22%; (III) an increase in development time of about 4%; and (IV) a decrease in relative growth rate of about 9% (Figure 2). Stronger effects of the increase in atmospheric CO2 were also found for chewers than for other feeding guilds, such as sap-sucking herbivores (e.g., aphids, leafhoppers, scale insects). Thus, despite the numerous studies conducted to assess aphid responses to elevated atmospheric CO2 levels, it is still not possible to predict their future response in general or to establish general rules for how different aphid populations will respond to changes in climate [73,74].
Response of Insect Pests to Changeable Precipitation Pattern
Changes in the amount, intensity, and frequency of precipitation are very important indicators of climate change. In most regions, the frequency of precipitation has decreased while its intensity has increased. This type of rainfall pattern has favoured the occurrence of droughts and floods. Insect species that overwinter in the soil are directly affected by such rainfall extremes. Heavy rainfall can lead to flooding and prolonged stagnation of water, which threatens insect survival and, at the least, disturbs their diapause (Figure 3). In addition, insect eggs and larvae can be washed away by heavy rains and flooding [6]. Small-bodied pests such as aphids, mites, jassids and whiteflies can be washed away during heavy rainfall [75] (Figure 3). Variable rainfall can have a major impact on insect populations. For example, Staley et al. [76] studied the effects of increased summer rainfall and drought on the soil-dwelling wireworm (Agriotes lineatus L.) in grassland plots. Wireworms are very damaging pests of crops such as potatoes, corn and sugar beet, especially when grown in grassland plots, and they are predicted to become a much greater problem under climate change [77]. Staley et al. [76] found rapid growth of wireworm populations in the upper part of the soil as a result of increased summer rainfall, as opposed to ambient and drought conditions [78]. Herbivorous insects are affected by drought through several mechanisms: (I) dry climates may provide suitable environmental conditions for the development and growth of herbivorous insects; (II) drought-stressed plants attract some insect species.
For example, when plants lose moisture through the process of transpiration, water columns in the xylem break apart or cavitate, producing an ultrasonic acoustic emission that is detected by harmful bark beetles (Scolytidae); (III) plants stressed by drought are more susceptible to insect attack because of a decrease in the production of secondary metabolites that have a defense function [79] (Figure 3).
Expansion of Insects' Distribution
In general, the following factors may determine the distribution of insect pests: (I) natural biogeography; (II) crop distribution; (III) agricultural practices (monocultures, irrigation, fertilizers, pesticides); (IV) climate; (V) trade; and (VI) cultural patterns [80]. Climate change will have a major impact on the geographic distribution of insect pests, and low temperatures are often more significant than high temperatures in determining their geographic distribution [81]. Numerous pest species are shifting their ranges because of climate change, but also due to increased international trade, which disperses individuals throughout the world. For agricultural insect pests, such shifts in distribution can greatly affect agricultural production [82]. The geographic distribution and abundance of all organisms in nature are constrained by species-specific climatic requirements that are crucial for their growth, development, reproduction, and survival. The temperature and precipitation patterns modified by foreseeable climate change will therefore determine the distribution, survival and reproduction of species in the future [43]. As insect pests spread to new areas, along with the shift in the growing areas of their host plants, farmers will face new and severe pest problems. In such cases, in addition to climatic conditions suitable for the particular crop, other factors such as soil properties and environmental structure are of great importance [83]. For pest species in general, a poleward shift in distribution limits is predicted as a response to global warming [84]. The ranges of insect pests are expected to shift to higher altitudes by 2055, with an increase in the number of generations in central Europe. In Europe, for example, the European corn borer (Ostrinia nubilalis Hubner) has shifted more than 1000 km northward [85].
Nevertheless, a decrease in the number of generations was predicted for southern Europe due to global warming, which would negatively affect populations of this insect pest. This implies that climate change affects various species differently [6]. Lopez-Vaamonde et al. [86] reported that 97 non-native Lepidoptera species in 20 families have become established in Europe, and 88 European Lepidoptera species in 25 families have expanded their range in Europe, with 74% of species becoming established in the last century. Parmesan et al. [87] studied 35 species of non-migratory European butterflies and concluded that the geographic ranges of 63% had shifted 35 to 240 km northward and only 3% southward in the 20th century. Increased fluctuations of warm air masses towards higher latitudes have resulted in the establishment of the Diamondback moth (Plutella xylostella L.) in the Arctic Ocean on the Norwegian islands of Svalbard, 800 km north of its former range limit in western Russia [88]. The pink bollworm (Pectinophora gossypiella Saunders), a major cotton pest, is presumed to be expanding its current range from the frost-free zone in southern Arizona and California into the cotton growing areas of central California [89]. Gutierrez et al. [90] suggest that the range of the Olive fly (Bactrocera oleae Rossi) in both Europe and North America will retreat in the south and expand northward due to the effects of warmer summer temperatures and milder winters on adult flies. High summer temperatures currently limit the range of B. oleae in the desert regions of Arizona and southern and central California, while the cold limits its range in the far north. Climate warming is predicted to further limit its occurrence in many regions of California as high summer temperatures become increasingly unfavourable. Conversely, climatic conditions along the California coast are expected to become more favourable for the fly. In Italy, low winter temperatures limit olive and B.
oleae occurrence in the northern regions, but this is expected to change as formerly unfavourable regions become favourable due to global warming [90]. On the other hand, changes in frost patterns are one of the drivers of the spread of frost-sensitive insect pests [91]. The frequency of spring frosts decreases with increasing temperature, so longer warm periods extend the duration and intensity of insect outbreaks [92]. Crop growers can in theory benefit from earlier seeding, but these plants then become available to insect pests sooner, allowing the pests to begin feeding earlier and cause greater damage, and potentially to produce additional generations during the typical growing season [92]. In addition, rising temperatures may increase the overwintering survival of insects that were previously limited by low temperatures at higher elevations, leading to an expansion of their geographic range [93,94].
Increased Overwintering Survival
Insects are poikilothermic or cold-blooded animals and therefore have a limited capacity for homeostasis in response to changes in ambient temperature. They have evolved a variety of strategies to stay alive under thermally stressful environmental conditions [95]. The most critical season for many insect pests is winter, as low temperatures can significantly increase mortality and thus reduce populations in the following season [81]. Studies have shown that global warming is most pronounced in winter at high latitudes [8]. Therefore, insects that undergo a winter diapause are likely to experience the greatest changes in their thermal environment [96]. In terms of overwintering strategies, insects are generally classified into two groups: freeze-tolerant and freeze-avoidant. The first group of insects uses a physiological adaptation strategy in the form of diapause, while the second group uses a strategy in the form of behavioural avoidance or migration [96]. Insects may enter diapause, which is an obligate or facultative, hormonally mediated state of low metabolic activity characterized by suppressed development, suspended activity, and increased resistance to adverse environmental extremes [97]. Diapause is an adaptive trait that plays an important function in the seasonal regulation of insect life cycles and is influenced by environmental factors such as temperature, photoperiod and humidity [98]. Aestivation and hibernation are two types of diapause. Aestivation allows insects to survive in environments with higher temperatures, while hibernation keeps them alive at lower temperatures [99]. In this article, we will focus only on winter diapause-hibernation.
Diapause is a fundamental requirement for overwintering success of many species in temperate and colder climates, and it confers increased cold hardiness (an organism's ability to survive at low temperatures) in the absence of acclimation to low temperatures, which usually occurs naturally during the transition from summer to fall and winter [100]. Some insect species enter diapause during the inactive egg or pupal stages, while others do it as larvae, nymphs, or adults. When diapause occurs in the inactive stages, it is often accompanied by a sharp drop in metabolic rate that is accompanied by an increase in cold hardiness [96]. During larval diapause, which is likely more common in subterranean herbivores that are protected from low temperatures, feeding may continue and forward development may slow down rather than stop [97]. While diapause is an obligate part of the life cycle in univoltine species, it is facultative in multivoltine species and dependent on an environmental trigger such as photoperiod [97].
The adaptive significance of the photoperiodic response is to shut down further development and reproduction and to prepare metabolism for winter dormancy, even though current environmental conditions may still be favourable [101]. Moreover, considering the complex roles insects play in the ecosystem, many other processes are synchronized with their diapause programme, such as plant consumption, pollination, and interspecies interactions. Consequently, a single disruption of diapause as a result of anthropogenic climate change can have profound effects on the stability of the entire ecosystem. Therefore, when discussing the effects of climate change, it is important to consider the effects of climate warming on all phases of diapause, namely diapause initiation, diapause maintenance, diapause termination, and post-diapause quiescence [96]. For many insect species, it is likely that higher temperatures during the photoperiodic induction of diapause (usually in autumn) reduce the frequency and duration of diapause [96]. In the European bluebottle fly (Calliphora vicina Robineau-Desvoidy), for example, adult flies reared at 20 °C produce fewer diapausing offspring, and their diapause is also shorter than in flies reared at 15 °C [96,102]. In cases where diapause is obligate for successful overwintering, higher temperatures are required to allow development to the next diapausing generation before severe winter conditions begin. In addition, for many temperate insect species, delaying diapause poses the risk of encountering cold stress outside diapause or before cold tolerance mechanisms are established [101]. This is believed to determine the current northern range limit of the Green stink bug (Nezara viridula L.) in Japan. Only diapausing adults of N. viridula are capable of overwintering, and at the northern border winter conditions set in while the bugs have reached only the nymphal stages, before diapause can be induced.
The entire population is, therefore, doomed to decline [103]. Further south, however, the growing season is long enough for this generation to reach the adult stage before winter, and at these sites N. viridula has displaced the previously dominant pest species, the Oriental green stink bug (Nezara antennata Scott) [96,104]. The duration of diapause can be influenced by many factors: accumulated chilling, humidity, food, and photoperiod [97]. However, for many species, the general principle is that the duration of diapause is shorter at higher temperatures. For example, the flesh fly (Sarcophaga crassipalpis Macquart), a common laboratory insect used to study diapause processes, remains in diapause for 118 days at 17 °C, 70 days at 25 °C, and 57 days at 28 °C [96,105]. This is because warmer winter temperatures increase the metabolic rate during diapause, resulting in a shorter diapause. A comparison of metabolic rates and diapause duration under these different conditions suggests that diapause ends when energy reserves reach a critical point: when the metabolic rate is high, energy reserves are depleted quickly, and when it is low, this set point is reached much later, resulting in a longer diapause [105].
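The inverse relationship between temperature and diapause duration reported for S. crassipalpis is consistent with a simple energy-reserve model: metabolic rate rises roughly exponentially with temperature (a Q10 effect), and diapause ends when a fixed reserve is exhausted. The following is a minimal sketch, not taken from the cited studies; the Q10 value of 2 and the fixed-reserve assumption are illustrative only:

```python
# Toy energy-reserve model of diapause duration.
# Assumptions (illustrative, not from the source): metabolic rate
# follows a Q10 = 2 rule, and diapause ends when a fixed reserve is spent.

Q10 = 2.0           # assumed factor by which metabolic rate rises per 10 degrees C
REF_TEMP = 17.0     # reference temperature (C); 118 days observed here
REF_DURATION = 118  # diapause duration at the reference temperature (days)

def metabolic_rate(temp_c: float) -> float:
    """Relative metabolic rate at temp_c (rate at REF_TEMP = 1)."""
    return Q10 ** ((temp_c - REF_TEMP) / 10.0)

def diapause_days(temp_c: float) -> float:
    """Predicted diapause duration: fixed reserve divided by metabolic rate."""
    return REF_DURATION / metabolic_rate(temp_c)

for t in (17, 25, 28):
    print(f"{t} C: ~{diapause_days(t):.0f} days")
```

Under these assumptions the model predicts roughly 68 days at 25 °C and 55 days at 28 °C, close to the observed 70 and 57 days, which supports the interpretation that warmer winters deplete reserves faster and so shorten diapause.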
Good synchronization with the environment and host plant means that insect herbivores are well adapted to their habitats [106]. However, climate warming can disrupt the metabolic balance during diapause, which can significantly affect the timing of emergence, so any change in spring emergence could lead to a loss of synchrony with the environment or host plant [96,106]. For example, many insects rely on synchrony between the timing of bud burst (or flowering) and emergence of feeding stages. It is quite conceivable that under current predictions of climate change, synchrony between trophic levels could become uncoupled as a consequence of subtle environmental differences in the phenology of individual species [107]. One of the best-studied examples of this is the Winter moth (Operophtera brumata L.), in which egg hatching is markedly advanced compared to bud break on its host plant, the Pedunculate oak (Quercus robur L.). It is unlikely that a 2 °C increase in temperature will dramatically alter the timing of bud burst, but the timing of larval hatching is likely to be significantly advanced, possibly leading to larval hatching before bud burst, which is dangerous for the moth and could reduce this specific pest problem [108].
It appears that univoltine temperate species respond differently to warmer winter conditions, making it difficult to predict the precise effects of climate change on overwintering insect species [109]. Non-diapausing, frost-sensitive species and those that can overwinter in their active stages appear to have increased survival rates under warmer winter conditions. These insect pests are expected to build up their populations and expand their geographic ranges to higher altitudes as average temperatures there increase [49]. Extremely low winter temperatures increase winter mortality, which is considered a key factor in the dynamics of many temperate insects, especially those that do not enter diapause but remain active throughout the winter whenever temperature permits [108]. Warmer winters or a reduction in the frequency of extreme cold periods may, therefore, improve the survival of such species, as they are not exposed to lethally low temperature extremes [96]. However, insects exhibit a variety of strategies in relation to the threat of lethal low temperatures, and these will partly determine the impact of warmer winter conditions [110].
Increased survival during overwintering period could lead to an increase in overwintering population and therefore to a greater abundance of insects on plants during the warmer period of a year. Consequently, global warming would increase the build-up of insect populations, early infestations and resultant crop damage from insect pests [111,112]. For example, increases in temperature have resulted in range expansion and increased overwinter survival of the Corn earworm (Helicoverpa zea Boddie) and the Cotton bollworm (Helicoverpa armigera Hubner). Consequently, this appears to be a significant threat to yield loss and a major challenge for pest management in corn, a fundamental food crop in the United States [43,113].
The flight phenology of aphids can be an accurate biological indicator of climate warming [114]. Many authors have shown that an increase in temperature promotes the survival of overwintering anholocyclic aphid species in the United Kingdom and in some cases brings forward their flight onset by up to one month. Such changes caused by climate warming will increase aphid outbreaks and cause earlier spring migrations, giving populations a better chance to build up to damaging levels in the subsequent growing season with a prolonged virus infection period [114][115][116][117]. Horticultural pests of plants grown in and restricted to greenhouses will have more opportunities to survive outdoors as average temperatures increase. For instance, warmer winter conditions are likely to increase the probability of the invasive South American leaf miner (Liriomyza huidobrensis Blanchard) overwintering outside greenhouses in the United Kingdom [114,118].
Increased Number of Generations
As mentioned earlier, temperature is the most important environmental factor for insects, affecting mainly their phenology. The ambient energy hypothesis suggests that growth and reproduction are greater at high temperatures; higher temperatures or global warming thus lead to larger population sizes, which in turn can support a higher number of species in dynamic equilibrium [119,120]. Under a global warming scenario, development and reproduction can therefore accelerate within a species' preferred temperature range, leading to an increase in the number of generations of many insect species and to more crop damage [121]. One of the many species traits and climate variables that have been used to link climate change to phenological shifts is thermal development tolerance, which can be measured using growing degree days (GDD).
GDD is a measure of heat accumulation, calculated annually by summing the daily degrees accumulated between a minimum and a maximum temperature threshold (Dmin and Dmax). GDD has long been used to predict plant and insect phenology in agriculture [122]. Future temperature increases will affect univoltine and multivoltine temperate species in different ways and to different extents. For multivoltine insects, such as aphids and some lepidopteran species like the large cabbage white butterfly (Pieris brassicae L.), higher temperatures, all other parameters being equal, should allow faster development times that predictably permit additional generations within a year [49,123]. Species with annual life cycles generally develop more rapidly than those with longer life cycles [49]. Using several models, it has been extrapolated that a 2 °C increase in temperature could result in one to five additional life cycles per year [121]. The most significant examples in this regard are aphids, which can be expected to produce four to five additional generations per year due to their low developmental threshold and short generation time. Aphids may, therefore, be particularly sensitive indicators of temperature changes [120]. Higher temperatures during their development have the beneficial effect of shortening the time spent in the larval and nymphal stages (when they are highly threatened by predators) [124], allowing species to become adults earlier [120].
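The GDD calculation described above can be sketched as follows. The simple averaging method is used here; the base and upper thresholds (10 °C and 30 °C) and the heat requirement per generation (400 GDD) are illustrative assumptions, since real values are species-specific:

```python
# Growing degree days (GDD) via the simple averaging method.
# Thresholds and the per-generation requirement are illustrative.

BASE = 10.0   # lower development threshold (Dmin), degrees C
UPPER = 30.0  # upper development threshold (Dmax), degrees C

def daily_gdd(t_min: float, t_max: float) -> float:
    """Heat units accumulated in one day, capped at the upper threshold."""
    t_max = min(t_max, UPPER)        # no extra development above Dmax
    mean = (t_min + t_max) / 2.0
    return max(0.0, mean - BASE)     # no development below Dmin

def season_gdd(daily_temps):
    """Accumulate GDD over a season of (t_min, t_max) pairs."""
    return sum(daily_gdd(lo, hi) for lo, hi in daily_temps)

# A hypothetical 120-day season, and the same season warmed by 2 degrees.
season = [(12, 24)] * 120
warmer = [(14, 26)] * 120
GDD_PER_GENERATION = 400  # assumed heat requirement of a hypothetical pest
print(season_gdd(season) / GDD_PER_GENERATION)  # generations now
print(season_gdd(warmer) / GDD_PER_GENERATION)  # generations after +2 C
```

In this hypothetical example a uniform 2 °C warming raises the seasonal total from 960 to 1200 GDD, enough for roughly half an additional generation — a toy illustration of how warming translates into increased voltinism for multivoltine pests.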
Expected responses of insects to a rise in temperature include an advance in the timing of adult emergence and an increase in flight duration [120]. One explanation for the changes in voltinism is the earlier onset of the flight period, which could allow for the production of an additional generation [125]. Since the insects fly earlier in the growing season, the individuals of the first generation can reproduce earlier. In addition, higher temperatures speed up larval development and growth, so more individuals of the subsequent generation can develop while photoperiod and temperature conditions are still favourable, allowing them to develop directly in the same season rather than diapausing as larvae [125]. The timing of adult emergence can be documented with pheromone, suction, or light traps. Long-term analyses of insect phenology show that the timing of emergence of insect pests changes under climate change [75]. Analysis of suction trap data showed that the spring flight of the green peach aphid (Myzus persicae Sulzer) began two weeks earlier for every 1 °C increase in mean temperature in January and February [6]. Depending on winter temperatures and the duration of exposure, the relative abundance of populations after winter ranges from very low (cold winter) to very high (mild winter) [126]. A 50-year record of the timing of the first migrating individuals of M. persicae caught in a suction trap each year (from a study by Rothamsted Research, Harpenden, UK) showed a strong correlation with mean winter temperatures in January and February [96]. Members of the order Lepidoptera are another good example of phenological change. Such changes in butterflies have been reported in the UK, where 26 of 35 observed species have advanced their first appearance [120,127]. In Spain, the first appearance of 17 species has shifted by 1-7 weeks in only 15 years [120,128].
Early emergence increased voltinism in the European grapevine moth (Lobesia botrana Denis and Schiff.) in Spain. This pest is usually trivoltine in Mediterranean latitudes, but with a tendency to emerge early in spring, it sometimes has a fourth additional flight, possibly due to global warming [129]. Since the 1980s, the number of generations per year has increased in many central European Lepidoptera species, with some univoltine or bivoltine species transitioning to bivoltine or multivoltine life cycles [125]. Partially bivoltine or multivoltine species are expected to experience an increase in abundance of second or subsequent generations [125,130].
Given the wide diversity of insect pests, it seems impossible to describe the precise effects of climate change for each species, the environmental conditions and the ecosystems in which they interact [49].
However, accurate quantification of the relationship between climate change and insect traits, such as changes in phenology and voltinism for a key insect pest species, could provide a conceptual framework for how these specific changes might manifest in other insect species [131]. The documented changes in voltinism confirm the high adaptability of insects to environmental change, which is why they are among the organisms that respond most readily to global warming [121].
Increased Risk of Invasive Alien Insect Species
Invasive alien species (IAS) are defined as taxa that are introduced either intentionally (e.g., food, crops, ornamentals, pets, livestock) or unintentionally through human activities outside their natural habitat [132]. Invasive insects are usually agricultural, stored-product, forestry, household or structural pests and can often be vectors of various diseases or parasites [133]. The spread of species to regions outside their original range has accelerated exponentially over the last millennium due to international travel, the global trading system and agriculture [134]. The Convention on Biological Diversity [135] describes invasive alien species as the greatest threat to global biodiversity, with high costs to agriculture, forestry and aquatic ecosystems [6]. It is commonly assumed that only a small proportion of introduced IAS become established, and only a small proportion of these species spread and become economic pests. This is often referred to as the "rule of 10," according to which approximately 1 in 10 introduced species escape into the environment, 1 in 10 of these escaped species become established, and 1 in 10 of the established species become economic pests [136].
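The "rule of 10" chain can be expressed as a simple product of stage probabilities; the figures below merely restate the rule, and the number of introductions is hypothetical:

```python
# The "rule of 10": at each stage of invasion, roughly 1 in 10
# species passes on to the next stage.
P_ESCAPE = 0.1     # introduced -> escapes into the environment
P_ESTABLISH = 0.1  # escaped -> becomes established
P_PEST = 0.1       # established -> becomes an economic pest

p_pest_overall = P_ESCAPE * P_ESTABLISH * P_PEST  # about 0.001

introduced = 10_000  # hypothetical number of introduced species
print(f"Expected economic pests: {introduced * p_pest_overall:.0f}")
```

So of 10,000 introduced species, only about 10 would be expected to become economic pests under the rule, which is why propagule numbers and introduction frequency matter so much for invasion risk.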
For invasive insect pest species, many authors in recent studies predict expanded geographic range and increased population densities and voltinism under predicted climate change scenarios [49,126,137,138], which could soon lead to potentially severe consequences for sustainable agricultural production [139]. However, it is important to state that climate change is not the predominant driver of biological invasion. To become invasive, alien insects must successfully arrive in a new habitat, survive the given conditions, and thrive. Climate change could positively or negatively influence the components of this invasive pathway. Climate, in combination with landscape features, sets the limits for the dispersal of such species and determines the seasonal conditions for their development, growth and survival in a new habitat [140]. These habitats may have been previously unsuitable, and dispersal to suitable, distant habitats may have been blocked by a geographic barrier, such as mountain ranges or the sea [141]. All biological systems have thermal limits, so temperature increase will have a huge impact on ecosystems and the species that live in them.
The extent of the responses of most native and non-native insect species to global warming is still unknown, and certainly the new warmer conditions would not be beneficial to all of them [140]. The process of insect invasion involves a chain of events that include the transport, introduction, establishment, and dispersal of invasive alien insects [134]. Once a new species arrives in a new habitat, the other stages of the invasion process could be positively or negatively influenced by existing climate and climate change [142]. Climate change can directly affect the transport and introduction of invasive insects. Extreme climate events (e.g., storms, high winds, hurricanes, currents, and swells) could shift pests to new geographic areas where they may find environmental conditions favourable for establishment [143]. For example, the Cactus moth (Cactoblastis cactorum Berg), was blown from the Caribbean islands to Mexico during the 2005 hurricane season, where it posed a significant ecological and economic threat to more than 104 prickly pear species (Opuntia Mill), 38 of which are endemic [144,145]. Some insect species are more prone to introduction and dispersal to new geographical regions than others, and some pathways favour the introduction of some alien insect species [146]. The number of insect individuals arriving is referred to as propagule pressure [141], also known as "introduction effort" [147].
Propagule pressure is a function of the frequency and number of individuals invading a new habitat [133]. In general, the more individuals introduced into an area, the greater the chance that they will successfully establish [147]. One or more propagules of a species must first enter a transport pathway, then survive the transport journey, then successfully exit the transport vector, and finally establish an initial population that may or may not spread and become invasive [148]. Propagule pressure is related to the extent of plant trade, the likelihood that alien insects are transported on these plants, and the probability that they pass through border controls undetected in plant commodities [149]. One of the most recent examples of such an introduction pathway is the invasion of the highly polyphagous and harmful Spotted wing drosophila (Drosophila suzukii Matsamura) in North and South America and Europe. The pathway of introduction is thought to be trade in fresh fruit, with initial propagules arriving undetected in the egg or larval stage in large quantities of fresh fruit traded via South East Asia [150,151]. The spread of invasive pest species due to climate change is, in fact, slow: Parmesan and Yohe [19] found that insect species are shifting their ranges at an average rate of 6.1 km per decade due to climate change. This happens because rising temperatures allow insects to survive in areas where they could not previously thrive [92].
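The claim that more introduced individuals raise the chance of establishment can be made concrete with a standard independence assumption: if each propagule establishes with probability p, then the probability that at least one of N succeeds is 1 − (1 − p)^N. This sketch is illustrative only; the per-propagule probability p = 0.01 is an assumption, not a value from the cited studies:

```python
# Establishment probability as a function of propagule pressure,
# assuming independent propagules (an illustrative simplification).

def establishment_probability(p: float, n: int) -> float:
    """P(at least one of n propagules establishes), each with probability p."""
    return 1.0 - (1.0 - p) ** n

P_INDIVIDUAL = 0.01  # assumed per-propagule establishment probability
for n in (1, 10, 100, 1000):
    prob = establishment_probability(P_INDIVIDUAL, n)
    print(f"{n:>4} propagules -> P(establish) = {prob:.3f}")
```

Even with a low per-individual probability, establishment becomes nearly certain at high propagule numbers, which is why high-volume trade pathways such as fresh fruit shipments are so effective at seeding invasions.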
Invasive species usually have a wider range of tolerance or bioclimatic range than native insects, allowing alien insects to find a wider range of suitable habitats [137]. Insect species are known to be highly sensitive to climate change. Sensitivity arises from the fact that most of their physiological processes are temperature-dependent [152]. Plasticity is a driving force behind the spread of many invasive species. Because plasticity is a trait of the individual, it is often touted as a responsive mechanism that allows organisms to adapt to new environmental conditions in the rapidly changing world (also referred to as "plastic rescue") [153,154].
Adaptations can take the form of phenotypic, behavioural, developmental, or physiological traits. Physiological or behavioural plasticity may result from differences in environmental conditions (e.g., temperature, humidity, photoperiod), available diet, or pressure from predators or competitors [155,156]. Behavioural responses can be adaptive and improve fitness, such as finding host plant species when invading new environments. One of the plastic responses of foraging insects to new environments is to change or expand their food choices. For some species, such as D. suzukii, which shows extreme plasticity in its diet choice with more than 30 plant species, diet breadth is probably the most important trait responsible for its invasion success [157]. The evolution of many traits involves components of different mechanisms, such as plastic responses to photoperiod in relation to climate change [153]. Snell-Rood et al. [153] predict that general mechanisms that evolve through selective processes within an individual are very likely to lead to survival in new environments, especially when conditions exceed the typical range of the native environment in extreme ways, such as large temperature shifts. In ectotherms such as insects, thermal adaptation may occur, for example, through behavioural traits that control energy metabolism [158].
Reduced Effectiveness of Biological Control Agents-Natural Enemies
Climate change is likely to have severe impacts on the abundance, distribution, and seasonal timing of pests and their natural enemies, which will alter the degree of success of biological control programs [69]. Phytophagous insect species are naturally controlled by top-down (natural enemies) and bottom-up (host plant availability and quality) mechanisms. These natural mechanisms interact to influence insect population dynamics, performance, and behaviour [159]. In agricultural, forestry and other ecosystems, phytophagous insects can be considered the cornerstone of the tri-trophic host plant-insect pest-natural enemy relationship [160]. The effects of climate change on interactions between insect pests and natural enemies, whether the natural enemies are intentionally introduced to new regions or are native and supported by conservation biological control measures, are modulated by direct effects on the metabolism and physiology of the organisms involved, the responses of those organisms, and the subsequent tri-trophic interactions. These interactions are affected by climate change in a variety of ways. Temperature changes can affect the biology of each component species of a system differently, destabilizing their population dynamics [60] and causing temporal desynchronization. Natural enemies, the third trophic level, are expected to be significantly affected by climate change [161]. If trophically connected species respond differently to climate change, the trophic interaction between them could be perturbed, decoupling the synchronized dynamics between insect pests and their natural enemies and potentially reducing the effectiveness of biological control [162].
Aphids are among the insect pests that are controlled by many natural enemy species, such as parasitic wasps, which lay their eggs in the bodies of aphids, and predatory species, such as ladybirds. All of these species are affected by the effects of global warming and could respond differently to temperature changes [73,107]. Hance et al. [60] reported that if a natural enemy starts to develop at a slightly lower temperature than the prey (e.g., aphid) and develops faster than the prey when the temperature rises, a too early and warm spring leads to its early emergence and a high probability of death from lack of prey. If this phenomenon is repeated over several years, it may lead to the extinction of the natural enemy. Evans et al. [163] showed that a rise in temperature disrupted the biological control of the Cereal leaf beetle (Oulema melanopus L.). In this trophic system, the development of O. melanopus was more affected by warming than that of the natural enemy, resulting in a phenological shift between enemy and prey and a weakening of biological control.
Crop distribution ranges are predicted to shift due to climate change. As an outcome, herbivores may track changes in crop distribution and migrate to areas where they may or may not be tracked by their predators or parasitoids, resulting in spatial desynchronization [73]. The final outcome depends partially on the competence of corresponding natural enemy species to expand their geographic range or on the possibility of new natural enemy populations that could control the pest in its new habitat [69]. In the absence of these conditions, herbivores may be able to escape predation and build large populations in their new habitat [164]. The potential for natural enemies to pursue their hosts depends primarily on their environmental tolerance relative to their herbivorous hosts, as well as their movement rates [69]. Gilman et al. [165] suggested that natural enemies that are specialists are more likely to be affected by climate change than generalists because they are less able to adapt to spatial desynchronization with their host communities. In such a case, biological control in food webs composed of many generalists might be more resilient to climate change [164].
Elevated CO 2 concentration, altered precipitation patterns, and temperature increase modify plant phenology and productivity, which in turn affect the growth and abundance of herbivore populations (host insects) and indirectly influence the supply of prey and hosts available for predation or parasitism [64,69]. Thomson et al. [69] also found that plants grown under elevated CO 2 , temperature extremes, and reduced precipitation provide diverse nutritional resources for herbivores, indirectly affecting the fitness of parasitoids and predators that feed on these herbivorous hosts. Bezemer et al. [66] studied the effect of temperature increase and elevated CO 2 concentration on the synchronized population of a tritrophic system consisting of the host plant annual bluegrass (Poa annua L.), the pest green peach aphid (M. persicae) and the parasitoid wasp (Aphidius matricariae Haliday). They showed that aphid populations built up under both elevated temperature and elevated CO 2 . There was no information on an effect of elevated CO 2 on parasitism success, but parasitism increased in correlation with elevated temperature. Another study examining the efficiency of the parasitoid wasp (Aphidius picipes Nees) feeding on the English grain aphid (Sitobion avenae F.) showed that parasitism increased in correlation with elevated CO 2 , but the same elevated CO 2 level resulted in lower fecundity of the wasps [166].
Therefore, the overall competence of a given species under elevated CO 2 concentrations is positively or negatively affected depending on its life history traits [166]. Dyer et al. [167] found that elevated temperature and CO 2 reduced the nutritional properties of alfalfa plants (Medicago sativa L.), which are the host plants of the Beet armyworm (Spodoptera exigua Hubner). This reduced nutritional quality of the host plants resulted in a shortened development time of the larvae of S. exigua. At the same time, larvae of its natural enemy, the parasitic wasp (Cotesia marginiventris Cresson), were unable to fully develop, leading to the extinction of the local population of C. marginiventris. Few studies have addressed how increased CO 2 concentration affects predator efficiency. The ladybird family (Coccinellidae) is the largest insect group of predatory natural enemies. Chen et al. [166,168] investigated the food preference of the Asian ladybird (Harmonia axyridis Pallas) in food choice experiments. They showed that H. axyridis preferentially preyed on aphids under elevated CO 2 concentration compared to ambient CO 2 concentration. Despite this preference, predation performance was not affected by high CO 2 concentrations. Finally, the time required for larval development of H. axyridis was significantly shorter or remained unchanged under altered CO 2 conditions [166,168].
Ultimately, climate change and global warming affect higher trophic levels directly, by altering the behaviour of natural enemies, or indirectly, by altering physiological traits in host plants and behavioural traits in herbivorous insects. Given all these facts, it is important to assess the trophic system as a whole. A challenge for the future is to develop models based on knowledge of phenological processes obtained through long-term monitoring of herbivores, their associated natural enemies and host plants, and their response to current climate and climate change [69].
Increased Incidence of Plant Diseases Transmitted by Insect Vectors
Insects are important vectors that transmit many plant diseases such as viruses, phytoplasmas and bacteria [169]. Viruses are a major cause of many plant diseases in global food production. The estimated economic loss from these diseases exceeds $30 billion per year [170]. Outside their vector or host insect, viruses are immovable and therefore heavily dependent on their vectors for transmission and spread. Some viruses and vectors are host generalists and others are specialists with a specific mode of transmission. Vectors can vary in their transmission efficiency, so the persistence, spread and prevalence of viruses depend on the particular vectors, their host plant and the climatic conditions in which they thrive [171,172]. Climate change may have a major impact on the epidemiology of plant viruses [173]. Most viruses of agricultural crop species are messenger RNA viruses and single-stranded DNA viruses. Their main host-to-host transmission strategy is the use of insect vectors with mouthparts for piercing and sucking [174]. In the previous sections, we have described the effects of climate change on various insect pests, some of which act as vectors of viruses. As climate directly affects insect physiology, phenology, etc., it could indirectly affect the viruses they transmit. This influence could have positive, negative or neutral consequences for the emergence and development of viral diseases in crop production [172].
Global warming may favour the occurrence of insect-transmitted plant diseases due to geographic expansion and increases in populations of insect vectors [175,176]. The main order of insects that transmit plant viruses are the sap-feeding Hemiptera. Within this order, the families of aphids (Aphididae), leafhoppers (Cicadellidae) and whiteflies (Aleyrodidae) are the major vectors of viral diseases [177]. Among these, aphids are the largest group of vectors, transmitting more than 275 virus species, and the majority of aphid species are capable of transmitting some plant viruses. Aphids are crucial virus vectors in temperate zones of the world, while whiteflies are restricted to warmer areas and thrive in temperate regions in crops grown under greenhouse conditions [174]. The short development time and high reproductive capacity of aphids and whiteflies make them particularly sensitive to responses to climate change [60]. The migration potential and long-distance dispersal of virus vectors could also be affected by climate change. Aphids can travel long distances when they encounter favourable thermal conditions that launch them upward, where atmospheric air movements expose them to horizontal translocation [178]. This long-distance transport has been linked with severe viral epidemics caused by aphids transported by extremely persistent low-pressure winds from the Great Plains of North America in the south to corn-growing areas in Minnesota [179].
It has also been reported that an increase in temperature in Northern Europe, especially at the beginning of the growing season, increases the rate of viral diseases in potato due to earlier colonisation by aphids, the main vectors of potato viruses [43,180]. The severity of viral diseases is highly dependent on the timing of infection and the amount of inoculum. The amount of viral inoculum is influenced by the overwintering of its insect vectors and their (alternative) host plants [181]. Aphids are expected to have higher survival rates in milder winters, and higher spring/summer temperatures increase their development and reproduction rates. The final outcome is a higher incidence of viral disease transmission and spread [182].
Barley yellow dwarf virus (BYDV) causes a very damaging disease in the Poaceae family and is transmitted by various aphid vectors. In Central Europe, the temperature minimum for migration of the Bird cherry oat aphid (Rhopalosiphum padi L.), the main vector of BYDV, is 8 °C, based on long-term monitoring. In addition, population build-up in summer is determined by temperatures in autumn, and population build-up in autumn is dependent on precipitation patterns and extremely low temperatures in winter [183]. Warmer conditions in autumn and winter in central and northern Europe increase vector persistence and thus the risk of virus transmission in winter crops such as winter barley and winter wheat [184]. In summer, warm temperatures and low rainfall reduce host availability, which poses various challenges for viruses and their insect vectors. Temperatures above 36 °C in the warmest summer months result in decreased survival of aphids, reducing the spread of BYDV [185].
Among whiteflies, the Greenhouse whitefly (Trialeurodes vaporariorum Westwood) and the Silver leaf whitefly (Bemisia tabaci Gennadius) are the most important virus vectors. Moderate precipitation and high temperatures are generally favourable for B. tabaci and lead to population increases [186]. Environments with dry and hot climates with installed irrigation systems provide favourable conditions for B. tabaci. Considering their short generation time, large populations can develop in summer. The same conditions could lead to an increase in the rate of evolution of the virus, resulting in more efficient strains with broader host range, greater transmission efficiency, and larger virus reservoirs in crops. Extreme winds and increased cyclonic activity in the tropics, as predicted by climate change scenarios, could promote the spread of B. tabaci. Drought could decrease its survival rate and disrupt its development, as well as restrain population size and dispersal [71]. Based on climate models, under four different climate scenarios that include data on humidity, temperature, and atmospheric CO 2 levels, it is predicted that many more geographical regions worldwide will be suitable for outdoor tomato production. These regions may also become suitable for the establishment of B. tabaci populations and thus for the increased incidence of the highly damaging pathogen of tomato, tomato yellow leaf curl virus (TYLCV) [187].
Grapevine yellows are grapevine diseases associated with phytoplasmas. They show notable differences in epidemiology due to the different life histories of their associated insect vectors [188]. One of the most important grapevine diseases in Europe is Flavescence dorée [189], and its main vector is the American grapevine leafhopper (Scaphoideus titanus Ball) [190]. As average temperatures increase during the growing season, S. titanus is expanding its range northward [191]. While short summers are considered a barrier to the northern spread of S. titanus due to the insect's inability to complete its full life cycle [192,193], climate change with longer and hotter summers should favour the spread of S. titanus in northern vineyards such as in Germany by extending the favourable development period [191]. Currently, S. titanus is widely spread in many vine-growing areas across Europe. Mirutenko et al. [193] reported the occurrence of S. titanus in Ukraine, which is currently its northern limit of distribution in Europe. However, climate warming at the southern limit of its current range could lead to insect declines or extinctions of small populations in areas such as southern Italy [192].
With climate change, an increase in newly introduced insect-transmitted plant diseases is expected. Therefore, it is of great importance to have diagnostic tools and appropriate personnel to detect new pathogens.
Adaptation and Mitigation Strategies for Pest Management in a Changing Climate
Climate change adaptation can be viewed as an ongoing process of implementing existing risk management strategies and reducing the potential risk from climate change impacts [194]. Climate change is widely expected to make pest infestations more unpredictable and increase their geographic range. Coupled with the uncertainty of how climate change will directly affect crop yields, the interactions between insects and plants in ecosystems remain unclear [78]. The adaptive capacity of agricultural production systems will depend on several biological, economic, and sociological factors. The ability of local communities to adapt their pest management practices will depend on their physical, social and financial resources [71]. With climate change and the acceleration of global trade, uncertainties and the frequency of occurrence of existing and new pests will increase. Increasing the ability to adapt rapidly to disturbances and climatic changes will therefore become all the more important [195]. Potential adaptation strategies have been identified to reduce the risks of spreading new pests and diseases, and to mitigate the negative impacts of existing pests. The most commonly mentioned strategies are modified integrated pest management (IPM) practices, monitoring of climate and insect pest populations, and the use of modelling prediction tools [92] (Figure 4).
Modified Integrated Pest Management (IPM) Practices
By definition, IPM refers to harmful species of phytophagous animals (mainly insects and mites), pathogens and weeds. In the context of sustainable agriculture, the emphasis in plant protection is on preventive or indirect measures, which must be fully exploited before control or direct measures are applied. Decisions on the need for control measures must be based on the most modern tools, such as forecasting methods and scientifically validated thresholds. Direct pest control tools are a last resort when economically intolerable losses cannot be prevented by indirect measures [196].
FAO recommends a dual strategy based on action at global and regional levels and, above all, significant investment in improving existing early detection and control systems. This requires the development of new agricultural practices, the introduction of new crop species, and the application of the principles of integrated pest management to contain the spread [197].
Growers and researchers mainly design IPM strategies to minimize negative impacts on the environment while maximizing crop yields and economic returns [198]. Many authors have discussed the problem of pest management in a novel environment with a changing climate and the need to reconsider existing preventive agricultural practices and IPM strategies so as to build heterogeneous agroecosystems resilient enough to tolerate weather variability [195]. In recent years, it has been predicted that researchers and growers will need to change many of these carefully constructed IPM tactics to respond to the important impacts of global warming [195].
Many IPM programs have focused on decisions based on extensive knowledge of how many insect pests can be tolerated before economic yield losses occur, also known as economic or intervention thresholds. IPM has historically evolved in the pest management field where the use of established thresholds has yielded good results. Although intervention thresholds play an important role in IPM, they are not always relevant, sufficient, or possible. When decision support systems are not available or appropriate, the use of thresholds is neglected [195]. Understanding how the environment affects plant and pest development is critical, and understanding their interactions with the environment allows crop advisors to respond to climate change. Environmental factors such as drought stress affect crop protection recommendations. When a crop is under drought stress, it is less able to cope with the additional stress caused by herbivorous insects, which can easily lower the economic threshold [199]. Due to the faster development of insects at higher temperatures, populations develop faster and crop damage occurs earlier than currently expected. Therefore, treatment thresholds based on the number of insects per plant must be lowered to prevent unacceptable yield losses [200].
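Because insect development is driven largely by accumulated heat, the earlier crop damage expected under warming can be illustrated with a simple growing degree-day (GDD) calculation. The sketch below uses a hypothetical base temperature and heat-unit requirement (illustrative values, not taken from this review) to show how a uniform +2 °C warming advances the day on which a pest completes a generation, which is the kind of shift that forces treatment thresholds and spray timings to be revisited.

```python
# Growing degree-day (GDD) sketch: how uniform warming shifts the date on
# which a hypothetical pest accumulates enough heat units for one generation.
BASE_TEMP = 10.0       # hypothetical lower development threshold (deg C)
GDD_REQUIRED = 300.0   # hypothetical heat units needed for one generation

def daily_gdd(t_min, t_max, base=BASE_TEMP):
    """Simple average method: mean daily temperature above the base threshold."""
    return max(0.0, (t_min + t_max) / 2.0 - base)

def days_to_generation(daily_temps, required=GDD_REQUIRED):
    """Return the day on which accumulated GDD first meets the target."""
    total = 0.0
    for day, (t_min, t_max) in enumerate(daily_temps, start=1):
        total += daily_gdd(t_min, t_max)
        if total >= required:
            return day
    return None  # target never reached in this season

# Synthetic season: daily min/max temperatures ramping upward over 120 days.
season = [(8 + 0.1 * d, 14 + 0.1 * d) for d in range(120)]
# Same season under a +2 deg C uniform warming scenario.
warmer = [(lo + 2.0, hi + 2.0) for (lo, hi) in season]

print(days_to_generation(season), days_to_generation(warmer))
# → 69 54 (the warmer season completes the generation about two weeks earlier)
```

The average method used here is the crudest GDD estimator; operational decision-support systems typically use sine-curve interpolation between daily extremes, but the qualitative conclusion, that warming compresses the time to threshold-relevant population build-up, is the same.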
Modified cropping practices and adaptive management strategies are needed to reduce the impact of agricultural pests on crops in a changing climate. These may include: (I) planting different crop varieties; (II) planting at different times of the year to minimize exposure to pest outbreaks; and (III) increasing biodiversity at field margins to increase the number of natural enemies [69,201].
The use of pheromones and allelochemicals exploits an important means by which insects sense their environment. They play a substantial role in various IPM techniques such as biological control, mating disruption, push-pull strategies, monitoring and trapping [202]. As the climate warms and microclimates become more variable, the use of pheromones and allelochemicals in their current form is expected to become less effective and may require a synergist or other adjuvant to reduce their volatility under high-temperature conditions [201]. In addition, some biopesticides based on entomopathogenic viruses, fungi, bacteria, and nematodes are extremely susceptible to environmental changes. An increase in temperature and a decrease in relative humidity may make some of these management techniques less effective, and a similar result is expected for synthetic insecticides [203]. In this context, the focus should be on the development of new pest management strategies and possible new formulations of insecticides as well as attractants and repellents. For example, Wenda-Piesik et al. [204] investigated the behavioural response of the Confused flour beetle (Tribolium confusum Du Val) to different concentrations of environmentally friendly volatile organic compounds (VOCs) in terms of their repellent and attractive properties, confirming that the highest applied VOC concentration significantly repelled individuals of the species. This research can serve as a basis for the development of new, sustainable and environmentally friendly pest control agents.
There is an urgent need to better understand the effects of global warming on the performance of many synthetic insecticides, their persistence in nature, and also the development of resistance to certain insecticides in pest populations [205]. Therefore, it seems necessary to consider the use of efficient biological control agents or the introduction of insect pest-resistant crop varieties obtained through conventional genetic breeding or genetic engineering [197].
Monitoring Abundance and Distribution
One of the most important prerequisites for determining whether climate change is altering the population dynamics of insect pest species is access to long-term data [206]. Without these important baseline data, it is extremely difficult to fully assess changes in pest populations under changing climate regimes and also to predict future population dynamics [201]. Long-term monitoring of pest populations and behaviour, particularly in climate change-sensitive regions, may provide some of the first clues to biological responses to climate change [207]. Changes in the dynamics of vectors, diseases and host populations at the local level need to be monitored, as do changes in their geographical distribution. New invasive species are being introduced in many parts of the world, aided by climate change. Effective monitoring and management systems are needed to prevent invasive species from becoming an economic pest in new geographic regions [207,208]. Therefore, adaptive responses in both pest management and biosecurity will be required.
Currently available pest management strategies such as detection, prediction, physical control, chemical control, and biological control could be intensified to control pests in response to climate change [207]. Due to the transboundary nature of many insect pests, a global management approach is needed for monitoring and risk assessment to be effective. A global system for sharing information between regions, including important information on insects, invasive alien species, diseases, and ecological conditions, including weather data, is needed. Therefore, it is important to improve cooperation between countries and regions, including national, regional, and global organizations [209]. Entry point monitoring and rapid eradication, as exemplified by the US Department of Agriculture's (USDA) Early Warning and Rapid Response Program and the European and Mediterranean Plant Protection Organization's (EPPO) Early Warning and Information System for IAS, will continue to be important when addressing invasive species [139,210]. In addition, by monitoring climate and pests in combination with climate and pest risk prediction information, farmers can preemptively adopt certain pest prevention practices to reduce the occurrence and build-up of expected pest problems [207].
Climate Forecasting and Model Development
It is impossible to design a priori climate change adaptation strategies for specific national or global climate change scenarios because of the heterogeneity of changes in average temperature and other climate parameters around the world. Adaptation strategies to climate change must be one of the components of an integrated strategy that takes into account all aspects of agricultural production.
Pest management strategies must tolerate regional climate change and its uncertainties. Some of the available options include sensitivity analyses and combined results obtained by using projected climate change scenarios with sensitivity analyses for a given area over a wide range of variable values. This strategy could become a useful tool in informing pest management personnel when designing adaptation measures for pest management under new environmental conditions [71].
Climate models combined with the environmental requirements of a particular pest species (its envelope) can be an effective tool for projecting possible range changes on a global scale. Modelling the pest risk together with the responses of its plant hosts to climate change can therefore increase the ability to predict the outcome of an insect infestation [92]. The potential distribution of insect pest species is primarily estimated by ecological niche models (ENMs). They can be divided into two groups: correlative models and mechanistic models. Correlative models use correlated values of environmental variables and records of occurrence to make predictions about potentially adequate areas for the particular species. The most commonly used correlative models are MaxEnt, Bioclim, Random Forest, etc. [211]. As cited by Evans et al. [212], correlative species distribution modelling is the most commonly used approach for predicting the impacts of climate change on biodiversity and has become a cornerstone of climate change policy [213]. Correlative modelling is a widely used tool for projecting future changes in the geographic distribution of species, assessing extinction rates, and setting priorities for biodiversity conservation [214]. These models identify statistical relationships between the current geographic distribution of a given species and climate variables, which are then applied to climate change projections to suggest climatically suitable habitats for that species in the future [215]. The final output of correlative models is often presented in the form of maps showing future climatically adequate regions for a given species, the total area of which can then be compared with current geographic ranges to estimate the future risk of their introduction and establishment [212].
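The envelope idea behind Bioclim, one of the correlative approaches mentioned above, can be sketched in a few lines: the climate values recorded at known occurrence points define per-variable bounds, and any site whose projected climate falls inside all bounds is flagged as potentially suitable. The occurrence records and site climates below are purely illustrative, and a real Bioclim analysis would use percentile envelopes over many bioclimatic variables rather than raw min/max bounds over two.

```python
# Bioclim-style correlative sketch: an occurrence-derived climate envelope.
# Occurrence records and climate values are synthetic illustrations.

def fit_envelope(occurrences):
    """Per-variable (min, max) bounds observed at known occurrence points."""
    n_vars = len(occurrences[0])
    return [(min(pt[i] for pt in occurrences),
             max(pt[i] for pt in occurrences)) for i in range(n_vars)]

def suitable(envelope, climate):
    """A site is flagged suitable if every variable lies inside the envelope."""
    return all(lo <= v <= hi for (lo, hi), v in zip(envelope, climate))

# Synthetic occurrence records: (mean annual temp deg C, annual rainfall mm).
records = [(14.2, 620.0), (16.8, 540.0), (15.5, 700.0), (13.9, 660.0)]
env = fit_envelope(records)   # [(13.9, 16.8), (540.0, 700.0)]

# Projected future climates for two candidate sites.
site_a = (15.0, 650.0)   # inside the envelope -> candidate for establishment
site_b = (19.5, 480.0)   # hotter and drier than any known occurrence

print(suitable(env, site_a), suitable(env, site_b))  # → True False
```

Running the fitted envelope over a grid of projected future climates yields exactly the kind of suitability map described in the text, whose area can be compared with the current range.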
Mechanistic models are predictive tools that use the values of environmental variables of a given area in combination with knowledge about the environmental tolerances of a given species [211].
Mechanistic species distribution models differ from correlative models in that they examine how the environment constrains physiological performance in a given region. Future species distributions are then predicted through a process of elimination, whereby regions that constrain physiological performance to the extent that they affect the ability to survive, grow, or reproduce are excluded from the final distribution [216]. It has also been argued that mechanistic models are the preferred approach to most management questions because they are able to extrapolate beyond known conditions and isolate traits that determine biogeography [217]. CLIMEX is an example of a semi-mechanistic modelling software tool that uses the physiological and behavioural parameters of species and the values of climate variables to make predictions about suitable habitats or regions for specific species [218]. In addition, comprehensive analysis of climate and historical weather records, together with development of the models described above, will facilitate prediction of pest risks. This could be reflected in the development of proactive pest prevention and control strategies in a changing climate [219].
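The elimination logic of a mechanistic model can be illustrated with a toy, CLIMEX-like temperature index: weekly temperatures are scored against a species' physiological limits (all parameter values below are hypothetical, and real CLIMEX combines moisture and stress indices as well), and regions whose annual score falls below a viability cut-off would be excluded from the predicted distribution.

```python
import math

# Toy mechanistic sketch: score a region's weekly temperatures against a
# species' physiological limits (all parameter values are hypothetical).
T_LOWER, T_OPT_LO, T_OPT_HI, T_UPPER = 8.0, 18.0, 26.0, 34.0

def weekly_index(t):
    """0..1 growth score: zero outside the tolerated range, 1 within the
    optimal range, linear ramps in between (a simplified CLIMEX-style index)."""
    if t <= T_LOWER or t >= T_UPPER:
        return 0.0
    if T_OPT_LO <= t <= T_OPT_HI:
        return 1.0
    if t < T_OPT_LO:
        return (t - T_LOWER) / (T_OPT_LO - T_LOWER)
    return (T_UPPER - t) / (T_UPPER - T_OPT_HI)

def annual_index(weekly_temps):
    """Mean weekly score; a region below some viability cut-off is excluded."""
    return sum(weekly_index(t) for t in weekly_temps) / len(weekly_temps)

# Two synthetic 52-week mean-temperature regimes for candidate regions.
cool_region = [10 + 8 * math.sin(2 * math.pi * w / 52) for w in range(52)]
warm_region = [t + 4 for t in cool_region]  # same seasonality, +4 deg C

# For this species' limits, warming raises the annual score, i.e. the
# cool region becomes more climatically suitable under the warm scenario.
print(annual_index(cool_region) < annual_index(warm_region))  # → True
```

Because the index is built from physiological limits rather than observed occurrences, the same function can be evaluated under novel climates, which is exactly the extrapolation advantage claimed for mechanistic models above.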
Conclusions
Although there are still many unknowns related to climate change, it is widely accepted that it greatly affects the cultivation of agricultural plants as well as the insect pests associated with them. Some of the uncertainties regarding different aspects of climate change that are relevant to insect pests include small-scale climate variability such as temperature increase, increase in atmospheric CO 2 , changing precipitation patterns, relative humidity, and other factors. Given the enormous heterogeneity of insect species, their host plants and global climate variability, mixed responses of insect species to global warming are expected in different parts of the world. The effects of climate change on insects are complex, as climate change favours some insects and inhibits others, while impacting their distribution, diversity, abundance, development, growth and phenology. Additionally, it is generally expected that there will be an overall increase in the number of pest outbreaks involving a broader range of insect pests. Insects will likely expand their geographic distribution (especially northward). Due to increased overwintering survival rates and the ability to develop more generations, the abundance of some pests will increase. Invasive pest species will likely establish more readily in new areas, and there will be more insect-transmitted plant diseases. Another negative consequence that could occur as a result of climate change is the reduced effectiveness of biological control agents (natural enemies), which could be a major problem in future pest management programs. If climate change factors lead to favourable conditions for pest infestation and crop damage, then we face a high risk of significant economic losses and a challenge to human food security. A proactive and scientific approach will be required to deal with this problem.
Therefore, there is a great need for planning and formulating adaptation and mitigation strategies in the form of modified IPM tactics, climate and pest monitoring, and the use of modelling tools.

Funding: This review was funded by the European Regional Development Fund through the project Advanced and predictive agriculture for resilience to climate change (AgroSPARC) (KK.05.1.1.02.0031).
Data Availability Statement:
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2018-10-22T01:22:45.950Z
|
2016-03-08T00:00:00.000
|
62000485
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.mcser.org/journal/index.php/mjss/article/download/8920/8617",
"pdf_hash": "28fe4330f894b794b59aace5cb99b27e049935d2",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:942",
"s2fieldsofstudy": [
"Education"
],
"sha1": "28fe4330f894b794b59aace5cb99b27e049935d2",
"year": 2016
}
|
pes2o/s2orc
|
Teaching Methods Indications for Education and Training of Sport Skills
Starting from theories of movement, this work aims to provide applied methodological guidance on the understanding of the task, the importance of variability, and the revaluation of error and of extrinsic feedback, the latter certainly being one of the most important aspects of motor control and, therefore, of the learning of technical and sport skills. From these conceptual approaches derive some important teaching implications, from which applied indications can be drawn for the control and learning of technical and sport skills, such as basketball at school.
1. Introduction
Technique is a key determinant of performance, and learning and improving technique constitute a fundamental objective of training in all sports. The teacher must be able to provide students with information that allows understanding of the task, to plan adequately the lessons aimed at acquiring technical and sport skills, and to identify appropriate strategies for error correction.
Several theories of movement have been proposed as models from which to draw applicable information for structuring the teaching of motor skills appropriately, starting from the idea that a motor pattern can be viewed as a generalization of concepts and relationships derived from experience, allowing one to identify the specifications required to run a particular version of a motor program.
The motor program is considered an abstract structure in memory that precedes the action and contains the patterns of muscle contractions and relaxations that define the movement (Adams, 1987); to initiate movement, it does not need the feedback produced by the response, since it holds in memory a set of instructions (which muscles to contract, in what order, for how long, ...) able to initiate the required action. General motor programs (Schmidt, 2003) are therefore the starting point for the development of motor patterns based on the adjustment of feedback. The execution of any movement, and thus also of a sports technique, is never repeated in exactly the same way: adjustments to the motor program must be made continuously in order to adapt the execution to environmental requirements. Bernstein (1967), in his studies, formulated a very distinctive idea: "to start a movement, the motor system must select the mode of action most appropriate and relevant to the aim pursued." He called all these possibilities of action of a group of muscles and limbs degrees of freedom; they are considered, screened and selected, and then put into action. From these conceptual approaches derive some important teaching implications, providing methodological guidelines applicable to the control and learning of technical and sport skills. Among the most significant indications are the methods of providing information for understanding the task, the role of the variability of practice, the meaning of error, and the use of feedback during learning.
2.
This work aims to provide applied methodological guidelines on the understanding of the task, the importance of variability, and the revaluation of error and extrinsic feedback in learning technical and sport skills, such as basketball at school.
Understanding of the task
At the beginning of learning, the student forms in his mind an inaccurate image of the required gesture; this image then becomes progressively more precise and is enriched by various sensory data (auditory, visual, kinesthetic, tactile, etc.) as motor performance improves. A first didactic aspect to consider is the information that the teacher must provide to the student to facilitate the understanding of the motor task and the formation of a mental image progressively more precise and adherent to the expected ideal model. With young students, at the beginning of learning, priority should generally be given to visual information, which allows the gesture to be grasped as a whole; the demonstration should therefore be as precise as possible while matching the speed of actual execution. Learning, however, may also depend on the student's preferred perceptual channel: auditory, kinesthetic, etc.
Importance of variability
The greater the variation of the parameters used within the same motor program, the more accurate the sought gesture becomes. The learning of a motor pattern becomes more effective, efficient and economical the more diversified the experience; therefore, the variability of practice, understood as the variation of parameters applied to the same motor program, is an element that contributes to the formation of an increasingly precise and accurate pattern (Schmidt & Wrisberg, 2004). Teaching motor activities in a heuristic way means helping the student to find possible solutions to a given motor task in a given context, emphasizing executive variability (Pesce, 2002).
A methodological aspect to consider is that variable practice is not advantageous for the achievement of immediate objectives, but it becomes so for long-term objectives, especially in situational sports (basketball, volleyball, soccer, etc.), which require adapting the technical gesture to changing conditions (Figures 1, 2, 3, 4).
Revaluation of the error and the extrinsic feedback
In a specific motor action the student can make two types of error: in the choice of the response (an inappropriate one) or in the execution of the movement. In the first case, program selection is inappropriate due to an incorrect assessment of the environmental conditions; for example, in basketball, a wrong assessment of the trajectory of the ball leads to a defensive positioning which, although carried out correctly from a technical and tactical point of view, is not appropriate to the situation (Fig. 5). In this case, the chosen program is incorrect because, although well executed, it is ineffective in achieving the goal.
Figure 5. Wrong position
In a correctly chosen program there can instead be an error during execution, due to inadequate control of the movement. When a beginner has not mastered a motor gesture, or when an experienced player cannot control an automated movement because of emotional factors related to performance or simply fatigue, a correctly chosen motor program is not perfectly realized. The two types of error can, however, occur simultaneously. In situational sports like basketball, the purpose of feints is precisely to induce the opponent to start an inappropriate motor program, because changing it takes more time.
An important didactic aspect concerns the information that the teacher must provide to the student after the execution of a technical gesture, in order to correct any errors or repeat the correct movement in subsequent trials (Parisi & Raiola, 2014ab). This information on the result of the performance, coming from sources external to the subject, constitutes additional extrinsic feedback; it may be provided in quantitative or objective form, or in qualitative form relating to the way the movement was executed. It is always necessary to evaluate the dual function of feedback (information and reinforcement) and to balance instructions that carry a negative value with those that carry a positive meaning, bearing in mind that the messages sent to students may also be non-verbal in nature (Mantovani, 2004).
In the learning/teaching process, this post-performance information is a variable easily manipulated by the teacher, but with a different meaning depending on the age and abilities of the students. For example, the frequency of corrective action must be greater in the early stages of learning; afterwards it is appropriate to reduce the frequency of extrinsic feedback in favor of intrinsic feedback. Extrinsic feedback contributes to the elaboration of the reference of correctness against which intrinsic feedback is compared during execution, and is therefore useful for the formation and strengthening of the motor pattern.
Learning and refining new and always different technical gestures (sport skills) should be chosen in relation to a specific coordinative capability to be developed, such as the change of direction, the shooting feint, etc. It is a good rule to remember that the highest level of coordination is one in which the student, in addition to successfully performing the gesture, keeps active the possibility of modifying it and adapting it to the "situation" while maintaining its effectiveness (Raiola et al., 2016ab; Raiola, 2015ab; Raiola et al., 2015; Gaetano et al., 2015ab; Altavilla et al., 2014; Gaetano & Rago, 2014).
To be adept, moreover, involves being sure of one's own abilities, and improving the efficiency of a skill is reflected in increased security, reduced energy consumption and, sometimes, a shorter execution time for a movement. This means reducing or eliminating unintended and unnecessary movements. A skill learned only at an abstract cognitive level, however, remains distant from the real context and from direct experience (Altavilla & Raiola, 2014). Ultimately, being especially skilled in any field, and specifically in the performance of motor tasks, implies sharpening and training the set of motor skills that one possesses. A lack of continuous solicitation, even in the presence of considerable capability, will never make one skilled or capable of learning new motor tasks. This objective is accomplished through a long period of work, through numerous exercises performed with conscious control and with a great variety of motor experiences. The success of our teaching comes from our ability to pass on to young people the right technical and tactical information and the mindset of working for individual improvement and personal growth (Gaetano, 2012ab; Altavilla & Raiola, 2015).
The effect of a dynamic PCL brace on patellofemoral compartment pressures in PCL-and PCL/PLC-deficient knees
Background The natural history of posterior cruciate ligament (PCL) deficiency includes the development of arthrosis in the patellofemoral joint (PFJ). The purpose of this biomechanical study was to evaluate the hypothesis that dynamic bracing reduces PFJ pressures in PCL- and combined PCL/posterolateral corner (PLC)-deficient knees. Study Design: Controlled Laboratory Study. Methods Eight fresh frozen cadaveric knees with intact cruciate and collateral ligaments were included. PFJ pressures and force were measured using a pressure mapping system via a lateral arthrotomy at knee flexion angles of 30°, 60°, 90°, and 120° in intact, PCL-deficient, and PCL/PLC-deficient knees under a combined quadriceps/hamstrings load of 400 N/200 N. Testing was then repeated in PCL- and PCL/PLC-deficient knees after application of a dynamic PCL brace. Results Application of a dynamic PCL brace led to a reduction in peak PFJ pressures in PCL-deficient knees. In addition, the brace led to a significant reduction in peak pressures in PCL/PLC-deficient knees at 60°, 90°, and 120° of flexion. Application of the dynamic brace also led to a reduction in total PFJ force across all flexion angles for both PCL- and PCL/PLC-deficient knees. Conclusion Dynamic bracing reduces PFJ pressures in PCL- and combined PCL/PLC-deficient knees, particularly at high degrees of knee flexion.
Background
The natural history of posterior cruciate ligament (PCL) deficiency includes significant knee pain and arthrosis in the medial and patellofemoral (PFJ) compartments (Kennedy et al. 2014; LaPrade et al. 2015a; Gill et al. 2003b; Kennedy et al. 2013; Patel et al. 2007; Shelbourne et al. 2013; Strobel et al. 2003; Torg et al. 1989). The exact mechanism of articular cartilage degeneration in PCL-deficient knees remains unknown; however, several cadaveric studies have reported that PCL deficiency leads to a significant increase in contact pressure in these two knee compartments (Gill et al. 2003b; Grood et al. 1988; Markolf et al. 1993; Strobel et al. 2003). This increase in compartmental pressure is possibly the result of increased anterior-posterior laxity (MacDonald et al. 1996; Anderson et al. 2012; Fanelli & Edson 1995; Gill et al. 2003b; Goyal et al. 2012; Kumagai et al. 2002; Logan et al. 2004) and rotational instability (Jonsson & Karrholm 1999; Gill et al. 2003a; Kennedy et al. 2013) of the knee. PCL injuries rarely occur in isolation, and concomitant posterolateral corner (PLC) injuries are common, particularly in a trauma setting (Fanelli & Edson 1995). The PLC resists excessive varus and external rotation forces in the knee (Markolf et al. 1993; Torg et al. 1989). The PLC also plays a secondary role in resisting posterior translation of the tibia. Therefore, the PLC and PCL play a symbiotic role in resisting excessive external rotation and posterior translation of the proximal tibia.
While optimal treatment of isolated PCL and multiligament knee injuries is unclear, management may include bracing to restore posterior and rotational stability in the knee. Static braces provide a constant anterior force through the entire arc of knee range of motion (Pierce et al. 2013;Jansson et al. 2013a). Several authors have evaluated the effectiveness of static bracing for the treatment of PCL injuries (Ahn et al. 2011;Jung et al. 2008;Spiridonov et al. 2011). While static braces reportedly contribute to satisfactory outcomes, Jacobi et al. demonstrated that appropriate stability is not fully restored following management with a static brace (Jacobi et al. 2010).
Tension within the PCL varies through the knee's arc of motion. For instance, forces through the PCL have been shown to increase almost linearly with knee flexion angle (Markolf et al. 2006). Unlike static braces, dynamic PCL braces are designed to provide increased anterior force and improved posterior stability at higher degrees of knee flexion, thus better replicating the natural role of the PCL (Jansson et al. 2013a). In the only study comparing the effect of static versus dynamic bracing on PCL-deficient knees, LaPrade et al. demonstrated that dynamic braces do in fact provide more stability than static braces at higher degrees of knee flexion (LaPrade et al. 2015b). By improving knee kinematics, dynamic braces may help normalize medial and PFJ pressures in PCL-deficient knees and potentially reduce the incidence of knee arthrosis. We are not aware of any clinical study that has evaluated peak pressures in the knee or the incidence of arthrosis in PCL- or PCL/PLC-deficient knees treated with a dynamic brace.
The purpose of this biomechanical study was to evaluate peak PFJ pressures in PCL-deficient and PCL/PLC-deficient knees with and without application of a dynamic brace. We hypothesized that dynamic bracing of PCL- and PCL/PLC-deficient knees would significantly reduce peak pressures in the PFJ, particularly at higher degrees of knee flexion.
Specimen preparation
Ten fresh frozen cadaveric knees (proximal femur through foot) were procured from an institutionally approved tissue bank. Specimens with evidence of injury or instability on physical examination were excluded. All specimens were stored at −30 °C until testing, at which point they were thawed at room temperature for approximately 24 h.
After defrosting, the quadriceps and hamstring tendons were dissected and sutured (#2 Fiberwire, Arthrex, Naples, FL) with locking Krackow stitches just distal to the musculotendinous junction. A custom aluminum stand was designed to hold the knee at 30°, 60°, 90°, and 120° of flexion. The proximal femur of each specimen was also dissected and clamped to the testing frame, while the foot and ankle were placed in a modified ankle foot orthosis (AFO) and secured to the custom-designed aluminum stand using a strap (Fig. 1). The ankle was maintained at 0° of dorsiflexion throughout testing with two tight straps that kept each heel seated in the neutrally positioned AFO. Sutures from the hamstrings and quadriceps muscles were attached to cables to allow the application of simulated muscle forces. The skin and muscles of the specimen were preserved, and the skin was re-approximated with sutures following dissection to ensure that the brace would fit each specimen appropriately (Fig. 1).

Fig. 1 Photographs of the test apparatus with a cadaveric knee without (a) and with (b) application of the dynamic brace. Tekscan sensors are connected to the handle via a lateral arthrotomy. Weights attached to suture pulleys simulate muscle loading through the hamstrings; the quadriceps is attached to the MTS machine via suture to simulate muscle loading
Contact pressure measurements
PFJ peak contact pressures and total forces were measured with Tekscan pressure mapping sensors (K-Scan 5051, Tekscan Inc., Boston, MA). The 5051 sensor is a 0.1 mm thin, flexible film with printed conductive ink that measures forces with a resolution of 1,936 sensing elements within a 55.9 mm × 55.9 mm sensor matrix area. The sensor is capable of measuring contact pressures up to 8 MPa. Prior to testing, the sensors were reinforced with vinyl laminate and then preconditioned, equilibrated, and calibrated according to the manufacturer's recommendations.
Sensors were reinforced with vinyl laminate to prevent shear force damage and reduce drift. Once laminated, the sensor was preconditioned using a 2 MPa cyclic load for 30 cycles inside the Tekscan equilibration device, which applied a uniform pressure to the sensing matrix area through an air-filled bladder. In addition, a three-point equilibration process was performed to account for sensing element variation at 50, 100, and 150 raw digital outputs. After equilibration, the sensor was calibrated using a Mechanical Testing System (MTS Bionix 370.02, MTS Corp., Eden Prairie, MN) by applying incremental loads from 0 to 750 N to the sensor. The sensor was compressed between a metal plate and a flat high-density polyethylene block with a 1.5 mm thick silicon rubber sheet below to evenly distribute loads, covering approximately 75% of the sensor matrix area. The raw digital output was then correlated to contact pressures using a power law curve to best fit the nonlinear sensor behavior.
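As a rough illustration of this calibration step, the raw-output-to-pressure relation can be fitted as a power law by linear regression in log-log space. The calibration pairs below are invented for illustration only; they are not the authors' data, and the actual fit would use the incremental MTS loads described above.

```python
import numpy as np

# Hypothetical calibration pairs: raw digital output vs. known applied
# pressure (kPa). Values are invented for illustration only.
raw = np.array([10.0, 25.0, 50.0, 75.0, 100.0, 125.0, 150.0])
pressure = np.array([55.0, 160.0, 370.0, 590.0, 820.0, 1060.0, 1310.0])

# Fit pressure = a * raw**b, i.e. a straight line in log-log space,
# mirroring the power-law curve used to model the nonlinear sensor.
b, log_a = np.polyfit(np.log(raw), np.log(pressure), 1)
a = float(np.exp(log_a))

def raw_to_pressure(x):
    """Convert a raw sensor reading to contact pressure via the fitted curve."""
    return a * x ** b
```

The log-log fit is one common way to estimate power-law coefficients; a nonlinear least-squares fit on the original scale would weight the data points differently.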
Mechanical testing
Motion tracking cameras (Optotrak Certus, Northern Digital Inc., Waterloo, Ontario, Canada) were used to validate proper angles of the tibia relative to the fixed femur prior to loading. Both tibial and femoral anatomical axes were pre-defined using a digitizing probe. Two infrared diode sensors were placed on both the femur and tibia to track their relative 3D motion.
Once calibrated, the 5051 sensor was placed in the PFJ via a lateral arthrotomy and sutured to the distal quadriceps tendon. A quadriceps load of 400 N was applied via the MTS machine, and a separate load of 200 N was applied to the hamstrings (100 N to biceps femoris and 100 N to semitendinosus/gracilis) using free weights attached to cables. These muscle loads have been used in multiple previous studies evaluating various biomechanical effects of PCL deficiency (Li et al. 2003;Li et al. 2002).
The integrity of the PCL was confirmed by the senior author via posterior drawer test and through visualization during mechanical testing. Two cadavers with PCL insufficiency were excluded. The PCL was cut via a lateral arthrotomy, and testing was performed at 30°, 60°, 90°, and 120°, both with and without a dynamic brace (Ossur Rebound PCL Brace, Ossur, Reykjavik, Iceland), under the simulated muscle loads. Sectioning of the PCL was confirmed visually and via posterior drawer examination. Only specimens with a Grade III posterior drawer test were included. Afterwards, the PLC was cut via the lateral arthrotomy and testing was repeated. Care was taken to preserve the skin and muscle bulk of each specimen so that the brace fit each specimen appropriately.
Each Ossur Rebound brace was custom-fitted to each individual specimen. The Ossur Rebound PCL Brace has three settings; the highest tension setting applies approximately 54.5 Newtons of force to the proximal tibia with the knee in full extension. We used the highest-tension setting for each cadaver.
Data and statistical analysis
Deep patellofemoral force and the area of the applied force were recorded for each knee. Total pressure was calculated as force divided by area. Patellar pressure data were plotted in two dimensions to identify peak pressure areas. A cluster of 16 pixels at the point of maximal peak pressure was determined and averaged to calculate peak pressure values for each testing condition and at each angle (30°, 60°, 90°, and 120°).
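One plausible reading of the 16-pixel rule above can be sketched in Python. The exact cluster placement the authors used is not specified, so centring a 4 × 4 window on the maximum pixel is an assumption, and the pressure map below is synthetic.

```python
import numpy as np

def peak_pressure(pmap, k=4):
    """Average a k x k pixel cluster (16 pixels for k=4) centred on the
    maximum of a 2-D pressure map, clipped to stay inside the map."""
    r, c = np.unravel_index(int(np.argmax(pmap)), pmap.shape)
    r0 = min(max(r - k // 2, 0), pmap.shape[0] - k)
    c0 = min(max(c - k // 2, 0), pmap.shape[1] - k)
    return float(pmap[r0:r0 + k, c0:c0 + k].mean())

# Synthetic 44 x 44 map standing in for the 1,936-element K-Scan matrix.
rng = np.random.default_rng(0)
pmap = rng.uniform(0.0, 500.0, size=(44, 44))
pmap[20:24, 20:24] = 1400.0   # artificial hot spot (kPa)
peak = peak_pressure(pmap)    # averages the hot spot with its neighbours
```

Averaging a small cluster rather than taking the single maximum pixel damps sensor noise at the cost of slightly underestimating the true peak.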
Multiple repeated measures ANOVA (SAS 9.4, SAS Institute, Inc., Cary, North Carolina) was applied to force, total pressure, and peak pressure for each condition tested (−PCL, −PCL + brace, −PCL/-PLC, −PCL/-PLC + brace), at all four flexion angles tested (30°, 60°, 90°, and 120°), and for the interaction between condition and angle. Comparison between the deficient conditions with and without the brace was the focus of the statistical results. A p value of <0.05 was considered significant.
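The model above is a repeated-measures ANOVA; as a simpler, hand-checkable stand-in, a paired t statistic on a single condition/angle cell captures the within-specimen comparison that drives it. The pressure values below are invented for illustration and are not the study data.

```python
import numpy as np

# Hypothetical peak pressures (kPa) for 8 specimens at one flexion angle,
# measured unbraced and then braced on the same knees (paired design).
unbraced = np.array([1453.0, 1420.0, 1500.0, 1390.0, 1480.0, 1510.0, 1440.0, 1460.0])
braced   = np.array([1138.0, 1160.0, 1205.0, 1100.0, 1180.0, 1210.0, 1150.0, 1170.0])

diff = unbraced - braced  # within-specimen differences
# Paired t statistic: mean difference over its standard error.
t_stat = float(diff.mean() / (diff.std(ddof=1) / np.sqrt(diff.size)))
```

The repeated-measures ANOVA generalizes this idea to all four conditions and four angles at once, while still pairing measurements within each specimen.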
Results
Two of the ten specimens received showed evidence of PCL insufficiency on posterior drawer examination, which was confirmed by gross inspection, and were thus eliminated from the study. The remaining eight specimens underwent all test conditions; they had an average age of 75 years (range: 64-89) and consisted of 7 male legs and 1 female leg.
Force
Total force measured across all test conditions was lowest at 30° of knee flexion, with a significant increase to 60° (p < 0.05), leveling off from 60° to 120° (Fig. 3). When analyzed across all angles, force through the PFJ was significantly reduced in PCL-deficient knees when a dynamic brace was applied to the extremity (p < 0.001) (Fig. 2a). This reduction was most significant at 120° (280.3 ± 58.9 vs 266.9 ± 55.6 N, p < 0.01). Likewise, across all angles tested, use of a dynamic brace in PCL/PLC-deficient knees significantly reduced PFJ force when compared to unbraced PCL/PLC-deficient knees (p < 0.05) (Fig. 2b).
Total pressure
Total pressure measured within the PFJ, analyzed across all angles tested, was significantly reduced with use of a dynamic brace in both PCL-(p < 0.05) and PCL/PLC-deficient (p < 0.01) knees.
Analysis at each specific angle was also performed. PCL-deficient knees at 30° of knee flexion averaged 490.5 (±62.6) kPa, which was significantly reduced to 450.1 (±73.1) kPa with the use of the dynamic brace. At higher angles of flexion, no significant differences in total pressure between PCL-deficient knees with and without the brace were observed (Fig. 3a).
Following resection of the PLC (−PCL/−PLC), total pressure was reduced at all flexion angles tested with the addition of the dynamic brace, reaching significance at 30° and 120° (p < 0.05) (Fig. 3b).
Peak pressure
The overall interaction between peak contact pressure and flexion angle was not significant. When analysis was performed independent of knee flexion angle, PCL-deficient knees without a brace had a significantly higher peak pressure when compared to braced knees (p < 0.05) (Fig. 4a). Likewise, when analysis was performed independent of flexion angle, PCL/PLC-deficient knees without a brace had a significantly higher peak pressure when compared to braced knees. Application of a dynamic brace to PCL/PLC-deficient knees also led to a reduction in peak PFJ pressures at specific angles, reaching significance at 60° (1340 ± 276 vs. 1187 ± 298 kPa with brace, p < 0.05), 90° (1304 ± 204 vs. 1194 ± 152 kPa with brace, p < 0.05), and 120° (1453 ± 344 vs. 1138 ± 168 kPa with brace, p < 0.05) of knee flexion (Fig. 4b).
Discussion
The most important findings of this study were that application of a dynamic PCL brace led to a significant reduction in force, total pressure, and peak pressure in the PFJ in PCL- and PCL/PLC-deficient knees, most significantly at higher degrees of flexion. These results confirm our hypothesis that peak pressure inside the PFJ would change more dramatically at higher degrees of knee flexion, because the dynamic brace is designed to impart a larger anteriorly directed force on the tibia in that position. These results are clinically relevant because maximum posterior knee instability in PCL- and PCL/PLC-deficient knees occurs immediately after toe-off, with the knee in deep flexion (Iwata et al. 2007).
Previous investigators have measured contact pressures in the PFJ in PCL- and PCL/PLC-deficient knees (Skyhar et al. 1993; Gill et al. 2003a; Spiridonov et al. 2011). Both Skyhar et al. and Gill et al. reported increased PFJ contact forces in PCL- and PCL/PLC-deficient knees when compared to the intact state under simulated muscle loads at all knee flexion angles. Altered peak pressures in PCL- and combined PCL/PLC-deficient knees are most likely a result of abnormal knee kinematics. In PCL/PLC-deficient knees, the tibia translates posteriorly and externally rotates with the application of a simulated load. External rotation of the tibia leads to lateralization of the patella, which creates increased compression between the lateral facet of the patella and the lateral trochlea (Gill et al. 2003a; Kwak et al. 2000). This phenomenon correlates well with our data, as peak pressures in the PCL/PLC-deficient knees were consistently isolated to the lateral facet, particularly at higher degrees of flexion.

Fig. 2 Force in the PFJ as a function of knee flexion angle in the PCL-deficient state (a) and the PCL/PLC-deficient state (b) with and without the use of a dynamic brace (* indicates p < 0.05)
Although previous studies have demonstrated that the use of static braces following PCL reconstruction improves posterior knee laxity (Ahn et al. 2011; Jung et al. 2008; Spiridonov et al. 2011), Jacobi et al. demonstrated that posterior laxity was not restored to the intact state (Jacobi et al. 2010). Further, LaPrade et al. demonstrated that forces applied by a dynamic brace were significantly larger than those applied by a static brace at higher flexion angles in PCL-deficient knees (LaPrade et al. 2015b). Consistent with those findings, our results suggest that dynamic bracing may be a better option than static bracing for the management of chronic PCL injuries or for protecting healing ligaments following surgical reconstruction of the PCL and/or PCL and PLC. Clinical studies are needed to determine whether the effect of dynamic bracing on peak PFJ pressures will result in improved patient outcomes and/or a lower incidence of arthrosis in patients with PCL and PCL/PLC injuries.
This study, like many cadaveric studies, has several limitations. First, an axial load was not applied to the tibia. As a result, closed-chain exercises that place maximum stress on the PCL, such as lunges and squats, were not represented. Based on the design of this brace, it was hypothesized that it could provide an even greater reduction in peak pressure during these types of exercises. A second limitation was the application of a constant hamstring and quadriceps load to each specimen in all conditions at all degrees of knee flexion. While a 2:1 ratio of quadriceps to hamstring loading has previously been validated (Li et al. 2003; Li et al. 2002), these forces are significantly lower than those that occur in vivo. Moreover, quadriceps and hamstring forces vary with different exercises and at different degrees of knee flexion. Nevertheless, the observed trends in peak pressure likely reflect the effect of PCL and PCL/PLC deficiency on peak pressures and how those pressures change when the knee is stabilized with a dynamic brace. Another limitation was the Tekscan sensor's sensitivity, which, as has been previously reported (Wilharm et al. 2013), decreases with time and after multiple cycles. Shear stress, moisture, and temperature fluctuations have all been implicated as sources of sensor deterioration (Anderson et al. 2003; Jansson et al. 2013b). These effects were minimized by covering the sensor with vinyl laminate, which resists shear and water damage. In addition, the sensors were sutured in place to further minimize shear stress. A final limitation of this study is that the average age of the specimens was 75 years. The effect of the dynamic brace on older specimens may not accurately reflect its effect on the typical younger patient with a PCL- or PCL/PLC-deficient knee.
Conclusions
In conclusion, the results presented in this cadaveric study demonstrate that dynamic bracing reduces force, total pressure, and peak pressure in the PFJ in PCL- and PCL/PLC-deficient knees, most significantly at higher degrees of knee flexion. While further clinical research is necessary, dynamic bracing may provide a non-invasive means to reduce the incidence of knee arthrosis in patients with PCL and combined PCL/PLC injuries.
Methodological Proposal for Automated Detection of the Wildland–Urban Interface: Application to the Metropolitan Regions of Madrid and Barcelona
Official information on Land Use Land Cover is essential for mapping wildland–urban interface (WUI) zones. However, these resources do not always provide the geometric or thematic accuracy required to delimit, at the appropriate scale, the buildings most exposed to wildfire risk. This research shows that the integration of active remote sensing with official Land Use Land Cover (LULC) databases, such as the Spanish Land Use Land Cover information system (SIOSE), creates a synergy capable of achieving this. An automated method was developed to detect WUI zones by the massive geoprocessing of data from official and open repositories of the Spanish national plan for territory observation (PNOT) of the Spanish national geographic institute (IGN), and it was tested in the most important metropolitan zones in Spain: Barcelona and Madrid. The processing of trillions of LiDAR records and their integration with thousands of SIOSE polygons were managed in a Linux environment, with libraries for geographic processing and a PostgreSQL database server. All this allowed the buildings exposed to wildfire risk to be identified with a high level of accuracy, using a methodology that can be applied anywhere in the Spanish territory.
Introduction
Despite the scientific and technological developments of recent years, there are still natural phenomena that cause loss of human life and high socio-economic costs, in a context of climate change that may increase their severity and frequency. Forest fires, endemic in zones with a Mediterranean climate, together with certain human activities, generate landscape degradation, a barely remediable transformation of the environment, and environmental changes such as soil erosion or decreasing air quality [1]. Wildfires occur every year in Spain due to climate conditions, such as the long summer period combined with low rainfall and high temperatures. These factors affect forest vegetation and facilitate wildfire outbreak and spread. Furthermore, there are anthropogenic causes [2]; 95% of the fires that occur in the Mediterranean region are produced by human causes, whether accidents, negligence, intentional acts, etc. [3]. Climate change and changes in land use affect the fire regime [4]. In touristic zones and in metropolitan environments where settlements are in contact with forest zones, this problem is more severe, for obvious reasons, which increases real-estate and human exposure to fire risk. In addition, there is an increase in density in these zones due to the proximity of human activities to the forest zone, and an increase in the vulnerability of forest vegetation produced by the abandonment of traditional agricultural activities in the peri-urban environments of large cities and holiday dwellings. The expansion of these WUI zones in the United States has been very significant [5] and has substantially increased the cost of wildfire suppression and the treatment of forest fuels, not to mention the damage that fires cause [6]. This is also the situation in Spain [7] and has led to detailed studies on its legal regulation in Europe and in Spain, considering the changes in LULC data and the increased vulnerability that this creates [8].
The coexistence between forest areas and new settlements, with non-traditional activities in the rural environment, is one of the main characteristics of the concept of the wildland-urban interface (WUI), which is defined in the Spanish forest law (Ley de Montes) 43/2003 as those areas that include housing estates, other buildings, works, electrical installations and transport infrastructures located in or near forest land, that can entail a fire hazard or be affected by them. Royal Decree 893/2013 provides another description of this concept, defining it as zones where buildings meet forest land.
There are particularly good works that delimit the areas in which this urban and forest contact occurs, identifying the typology and the degree of hazard [9]; indeed, the recent evolution of the landscape in many Mediterranean countries is conditioned by the growth of urbanized zones. The expansion of WUI zones in Spain exceeds one million hectares (about 4% of the total forest surface), with an average of 12,500 forest fires per year over the past ten years [7], gaining prominence in the metropolitan environments of Madrid and Barcelona and on the Mediterranean coast [10], where developers seek single-family dwellings or second homes, attracted by the aesthetic appeal of a natural environment. This alters the traditional landscape and brings a social and cultural behavior that is alien to that environment and poses a wildfire danger [8]. It is therefore very important to identify automated methods that help delimit this type of WUI zone at different scales and, in this case, in the most important metropolitan zones in Spain.
This study is a continuation of a research line initiated in the SIOSE-INNOVA project (subprogram "Retos I + D + I 2016" for 2016 R + D + i challenges, Spanish Ministry of Economy and Competitiveness, CSO2016-79420-RAEI/FEDER UE) in which the exploitation of Geo Open Data repositories of the Spanish national plan for territory observation (PNOT) were tested to obtain the delimitation of these WUI zones in an automated manner for large areas and with a high level of detail. For this purpose, a methodology based on the use of free software tools and standard open data is proposed for testing in the automated detection of WUI zones in the periphery of the metropolitan areas of Madrid and Barcelona ( Figure 1).
Technological developments in remote sensing and in the geographic information process have led to the generation of highly detailed data that are of great use in the study of wildfires in WUI zones [11]. In this respect, the use of Airborne Laser Scanning (ALS), also known as LiDAR technology, is noteworthy for determining the density of forest fuel, measuring hazard and mapping wildfire risk in settlements, [12] as well as for generating models for the automated identification of firefighter safety zones [13]. LiDAR data and aerial photographs were used to determine the volume of forest masses, land slope or flame height. Some methodologies using laser information as fundamental data were developed to model fire behavior during a wildfire [14].
There are good examples of the delimitation of WUI zones using ALS echoes in Spain, such as the analysis carried out by Badia A. and Gisbert, M. [11] at an experimental level in the metropolitan area of Barcelona, although limited to a very specific area. In Galicia, Robles et al. [15] successfully obtained the exposure of settlements to wildfire risk through non-automated analysis of LiDAR data, or by using decision trees and Geographic Information Systems (GIS), as in the case of Fernández-Álvarez, M. et al. [16]. Initial research in the SIOSE-INNOVA project for the delimitation of WUI zones began with the use of PNOT LiDAR in Valle del Tiétar (Segovia) and in Camp del Turia (Valencia), through active remote sensing of zones where vegetation and buildings were in close proximity to each other [17][18][19]. All these works demonstrated the usefulness of LiDAR data in the study of forest vegetation, wildfires, delimitation of WUI zones, and a highly detailed characterization of vegetation, although in small areas. The advantage of using LiDAR in terms of geographic accuracy also entails its main disadvantage: the large volume of data to be handled and the problem of its processing. When these volumes of information refer to entire regions of Europe, the use of robust databases and the automation of processes is necessary.
The use of LULC databases to delimit WUI zones in Spain has also proved to be useful [20], although Corine Land Cover (CLC) data are not the most appropriate to achieve detailed results; however, they are unquestionably useful in the creation of evolutionary maps for Europe as a whole [21], for regional realities [22], or even provincial realities [23]. In the research undertaken within the SIOSE-INNOVA project, studies were carried out on the identification of WUI zones, testing the suitability of official land-use databases (Corine Land Cover and SIOSE) in verified scenarios, such as the provinces of Navarra and Castellón [24]. The problem is the reference scale of this type of database. The Corine Land Cover (CLC) geodatabase is particularly useful for a reference scale of 1:100,000, and SIOSE for a scale of 1:25,000, but, in both cases, there is a geometrical ambiguity that makes it impossible to accurately determine the buildings exposed to risk, as information is essential for planning and preventive work.
In line with the work of this project, the Master's Thesis by León, P. [25] and Navarro Carrión, et al. [26] demonstrated the effectiveness of combining LiDAR data and LULC databases to determine the zones exposed at different scales and with the geometric accuracy needed to precisely delimit the buildings exposed at a detailed scale in the entire province of Alicante. This research began to point out solutions to the problem of Geo Small Data in order to be able to process these sources and identify the zones that are the subject of study within large areas (provinces or regions), with an appropriate level of precision for the local approach.
Our contribution aims to automate the working method and optimize the use of Geo Small Data techniques that allow for the massive processing of data from the combination of LiDAR point clouds, SIOSE, aerial photographs, satellite images, etc. [26]. For this purpose, the methodology was applied to more complex zones, such as Madrid or Barcelona, in order to set a model that enables the delimitation of WUI zones in any geographic context within the country, or even for Spain as a whole, and that can be easily updated as the official information sources are updated. The results of this research were verified by means of statistical sampling methods, which, along with the photointerpretation of aerial photographs and the use of the web map service (WMS) services of the SIOSE field photographs, allowed us to successfully verify the accuracy obtained in the determination of the WUI zones used as the study areas.
Our work is an evolution of this methodology and has contributed to increasing the level of automation of its processes and to the creation of new automated ones. Fundamentally, all processes related to downloading data from the servers of the National Geographic Institute and loading them into the geodatabase of the research laboratory were automated and improved. We worked in different autonomous communities, not only in the Valencian Community, because the LiDAR information does not have the same characteristics in all areas; although some minimum specifications are met everywhere, there are areas, such as Madrid, where the information is denser and more detailed. This research allowed all these variations in the information to be taken into account and the data-download processes to be definitively automated. In addition, techniques for checking results against aerial photographs, systematic sampling of field areas and cross-checking with a repository of field photographs were used and improved.
However, although the obtained results were closely verified through photointerpretation and official graphic documentation, future research should continue with the formulation of qualitative verification protocols based on fieldwork and object-oriented remote sensing (OBIA). Therefore, the purpose of this article is limited to the automated method of delimitation of WUI zones based on official and open data; it does not involve research on WUI in terms of landscape evolution or a qualitative assessment of different WUI typologies in metropolitan zones.
Materials
This research coordinated the use of large volumes of data, their analysis and their exploitation, applied to a selection of case studies, seeking simplicity in the geo-processing of information and increasing the quality and efficiency of the results. In this way, the thematic richness of the SIOSE database increases with the geometrical accuracy of LiDAR point clouds and the synergy of this complementary data approach was particularly useful in identifying the areas exposed to wildfire risk within the selected zones. According to a report by the Greenpeace organization [27], 80% of Spanish municipalities located in areas at high risk of wildfire do not have local Emergency or Self-Protection Plans, and many of them do not have all the necessary resources. This aspect gives this methodology strategic relevance in acting as basic information for the development of this type of work at a local level and helping to mitigate the loss of human life and material damage caused by these events, which are becoming more and more frequent due to the evolution of the occupation of space and the consequences of climate change.
Usually, in research works that use ALS point clouds to detect vegetation or buildings, the points are processed to generate a raster topology: LiDAR echoes are filtered, a Digital Terrain Model (DTM) and the corresponding digital surface models of vegetation and buildings (DSM) are generated, and the normalized digital surface models of vegetation and buildings (nDSM) are calculated. Finally, reclassification and polygonal vectorization are carried out as a step prior to the geoprocesses used to determine the buildings exposed within a certain distance of forest fuel. However, the disadvantage of processing an entire province, such as Madrid or Barcelona, is that a desktop GIS tool is incapable of this work, since spatial resolutions of 1 or 2 m generate very large models whose polygonal vectorization is difficult, and even more so if we want to correct the resulting vector topology and perform the buffering or clipping geoprocesses needed to determine the exposure zones.
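The raster chain described above can be sketched in a few lines. This is a minimal stdlib illustration of the nDSM step (nDSM = DSM − DTM) followed by a height reclassification; the grids, values and the 3 m threshold are illustrative assumptions, while a real workflow would operate on 1-2 m rasters with a GIS library such as GDAL.

```python
# Minimal sketch of the raster chain described above: nDSM = DSM - DTM,
# then reclassification of cells by normalized height. Grids are plain
# nested lists here; a real workflow would use rasters (e.g. with GDAL).
def ndsm(dsm, dtm):
    """Normalized digital surface model: per-cell DSM minus DTM."""
    return [[s - t for s, t in zip(srow, trow)] for srow, trow in zip(dsm, dtm)]

def reclassify(grid, threshold=3.0):
    """1 where the normalized height exceeds the threshold (e.g. tree
    canopy or building), 0 elsewhere. The 3 m threshold is illustrative."""
    return [[1 if v > threshold else 0 for v in row] for row in grid]

dsm = [[105.0, 102.0], [110.5, 101.2]]  # surface elevations (m)
dtm = [[100.0, 101.5], [100.5, 101.0]]  # bare-earth elevations (m)
print(reclassify(ndsm(dsm, dtm)))       # [[1, 0], [1, 0]]
```

The binary grid that results is what would then be vectorized into polygons for the buffering and clipping geoprocesses.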
SIOSE Database
The SIOSE database is a vector database that contains Land Use Land Cover information at a national level [28] under the principles of the INSPIRE Directive 2007/2/EC. It is updated every 3 years and uses the ETRS89 geodesic reference system (European Terrestrial Reference System 1989) and the UTM cartographic representation system with zones 28, 29, 30 and 31.
The technical characteristics of SIOSE are divided into geometric and semantic specifications. Among the geometric ones are the 1:25,000 scale and the fact that the polygon is the only entity with its own geometry, with a minimum size of between 0.5 and 2 hectares and a minimum width of 15 m. Its semantic characteristics include the object-oriented data model (describing objects, attributes, etc.) [29] and the relationship between polygons and thematic classes (1 polygon : N classes), which makes it difficult to use from a desktop GIS application. It comprises 40 simple classes, 45 predefined classes and the capacity to create infinite associations built by the users themselves from the database, which worked perfectly for this subject of study.
PNOA-LiDAR Project
The PNOA LiDAR flight covers the entire Spanish territory with point clouds with X, Y, Z coordinates and other attributes, captured through ALS. To date, two coverages have been conducted: the first between 2008 and 2015, and the second, which started in 2015 and is still in progress.
The data collection process is performed with airborne LiDAR sensors. Once the point clouds are captured, they undergo quality controls in order to classify the information automatically, thanks to the infrared values, and to colorize it in RGB based on the PNOA orthophotos. The point density is 0.5 points per m² for Barcelona and 1 point per m² for Madrid. The data are distributed in digital files of 2 × 2 km (Barcelona) and 1 × 1 km (Madrid), in LAS or LAZ format (compressed LAS), in the ETRS89 geodesic reference system for the Spanish mainland and the Balearic Islands and REGCAN95 for the Canary Islands, with the UTM projection system and the corresponding zone. The classification of LiDAR points follows the specification of the American Society for Photogrammetry and Remote Sensing (ASPRS).
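The tile sizes and nominal densities above give a quick feel for the data volumes involved. A back-of-envelope calculation (assuming the nominal values hold uniformly, which real tiles only approximate):

```python
# Back-of-envelope estimate of raw point counts per PNOA-LiDAR tile,
# using the densities and tile sizes stated above as nominal values.
def points_per_tile(tile_side_m: float, density_pts_m2: float) -> int:
    """Nominal number of LiDAR returns in a square tile."""
    return int(tile_side_m ** 2 * density_pts_m2)

barcelona = points_per_tile(2000, 0.5)  # 2 x 2 km tile at 0.5 pts/m^2
madrid = points_per_tile(1000, 1.0)     # 1 x 1 km tile at 1 pt/m^2

print(barcelona)  # 2000000 points per Barcelona tile
print(madrid)     # 1000000 points per Madrid tile
```

At roughly one to two million returns per tile, a whole province quickly reaches billions of points, which is why the database-centred approach described below is needed.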
The PNOA project has a huge field of application, which, for the purpose of the present research, lies in obtaining point clouds of buildings and vegetation through the automatic detection of terrain modifications or elements, such as buildings and fuel models.
Other Sources
Additionally, the spatial reference information from the Spanish national cartographic base (Base Cartográfica Nacional), scale 1:200,000, was used to establish provincial limits, and the national topographic map (Base Topográfica Nacional), scale 1:25,000 to obtain the cartographic grid of the study zone and the LiDAR file grid (second coverage), together with the technical specifications of the data collection.
Methods
In this study, SIOSE LULC polygons were integrated with LiDAR point clouds in a PostGIS Geodatabase (PostgreSQL), and logical statements on SIOSE were used to determine the "target zones" where forest-fuel and buildings coexist or are close to each other. The resulting polygons constituted a geometrical filter to reduce the amount of LiDAR data needed for processing and, in addition, we resorted to the use of a library specializing in point cloud processing, Point Cloud Library (PCL), the creation of patches and clustering techniques. This allowed us to apply a processing model capable of being run in conventional computers in order to make it easily applicable to fire prevention in urban-wildland interfaces in any part of the Spanish territory. The reason for resorting to LiDAR data is the lack of geometric accuracy of the SIOSE LULC database, with a thematic richness that exceeds the spatial limit of its polygons and hinders the cartographic determination of the affected buildings (scale > 1:25,000).
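The patch idea mentioned above can be illustrated with a small stdlib sketch: instead of handling every LiDAR return individually, points are grouped into fixed-size square cells, which is the same principle behind patch storage in point-cloud database extensions. The 100 m patch size and the tuple layout are illustrative assumptions, not the project's actual schema.

```python
# Illustrative sketch of grouping LiDAR points into fixed-size square
# "patches": instead of one database row per point, points are stored
# and queried per patch, drastically reducing the number of records.
from collections import defaultdict

def patch_key(x: float, y: float, patch_size: float = 100.0):
    """Grid-cell index of the patch containing a point."""
    return (int(x // patch_size), int(y // patch_size))

def build_patches(points, patch_size=100.0):
    """points: iterable of (x, y, z, classification) tuples."""
    patches = defaultdict(list)
    for x, y, z, cls in points:
        patches[patch_key(x, y, patch_size)].append((x, y, z, cls))
    return patches

pts = [(10, 20, 650.0, 5), (95, 40, 652.1, 5), (150, 30, 640.2, 6)]
patches = build_patches(pts)
print(len(patches))  # 2 patches: cells (0, 0) and (1, 0)
```

Queries can then address whole patches first and only unpack individual points inside the patches that intersect the target polygons.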
For the detection of areas that are prone to being affected by fires in WUI zones, we decided to consider the terms intermix and interface, set by Stewart, S.I. et al. [30], but with the criteria adapted to the Spanish institutional data sources [24], which allow us to identify the SIOSE polygons that are in a situation of intermix and interface (Figure 2). The definition of intermix corresponds to polygons with a minimum of 50% of forest surface in which there are scattered residential buildings. Interface zones include polygons with dwellings that are closer than 100 m to polygons with at least 75% of forest or fuel surface. In this way, polygons with forest tree cover attributes (coniferous, hardwood deciduous, evergreen and scrub) and building coverage (isolated building, building between party walls, single-family isolated dwelling, single-family semi-detached dwelling) were selected. LiDAR, on its side, provides accurate information on forest-fuel elements and their proximity to buildings, determining their exposure to fire risk [15]. However, this is particularly useful in zones of limited size, since the spatial dataset to be processed for zones as large as those proposed in this research involves the problem of massive data management, as shown in Table 1: voluminous and complex information from the thematic (SIOSE) and geometric (LiDAR) perspectives. For Navarro-Carrión, J.T. et al. [26], managing this amount of information from a desktop computer with standard features is an impossible task for a desktop GIS application.
As for LiDAR data, its download causes computational difficulty, both for its storage and for the state server of Spain's National Geographic Information Center (CNIG), which only allows a limited number of files to be downloaded. In order to solve these problems, an algorithm was programmed to download geographic information in a massive, organized and systematic way. The next step was to select the values that were relevant for the research: medium and high vegetation (classes 4 and 5) and buildings (class 6), according to the classification of the American Society for Photogrammetry and Remote Sensing [31].
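The class filter described above is straightforward once the point stream is available. A minimal sketch, assuming points arrive as plain tuples (a real pipeline would read the LAS/LAZ files with PDAL or a similar reader):

```python
# Sketch of filtering a point stream down to the classes used in this
# study: ASPRS medium vegetation (4), high vegetation (5), building (6).
VEGETATION = {4, 5}   # medium and high vegetation
BUILDING = {6}        # building returns
KEEP = VEGETATION | BUILDING

def filter_points(points):
    """points: iterable of (x, y, z, classification) tuples."""
    return [p for p in points if p[3] in KEEP]

raw = [(0, 0, 2.0, 2),   # ground -> dropped
       (1, 0, 8.5, 5),   # high vegetation -> kept
       (2, 0, 6.0, 6),   # building -> kept
       (3, 0, 1.2, 3)]   # low vegetation -> dropped
print(filter_points(raw))  # [(1, 0, 8.5, 5), (2, 0, 6.0, 6)]
```

In the actual workflow this selection is pushed down into the database query, so unwanted classes never leave the point-cloud tables.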
The use of LiDAR data through decision trees and GIS for calculating the exposure of WUI zones in Spain already proved useful at a detailed scale as in the case of Galicia [16]. In this case, the metropolitan areas of Madrid and Barcelona are exceptionally large, and it is difficult to download LiDAR data due to their massive volume. To solve this problem, a specific database was created in PostgreSQL, with PostGIS and PointCloud extensions for LiDAR data and its combination with SIOSE polygonal data in the Linux operating system. PgAdmin and QGIS were used to run additional processes and check the results that need to be monitored from a visual interface. On the other hand, for the extraction, transformation and loading of data, GDAL/OGR and PDAL libraries were used for vector and LiDAR data, respectively. For the storage of LiDAR information, patches and clustering were used to simplify the consultation tasks, as these tools allow for organization of the records by storing them by groups in a table.
The information obtained from the IGN (PNOA-LiDAR, SIOSE and complementary cartography) was processed with PostgreSQL/PostGIS, PointCloud in the NAS Server of the Geomatics laboratory of the University of Alicante and a Docker Hub Server [32]. SIOSE polygons with WUI Intermix or Interface and LiDAR point clouds relevant to us were selected to finally proceed to the necessary geoprocesses that allowed us to combine both data sources, check the differences between them and obtain an accurate delimitation of the exposed buildings. These steps are shown in the flow chart in Figure 3.
The synergy created by combining these tools prevented a computer collapse due to the large number of geometric entities to be processed, and allowed for the execution of the geoprocesses needed to meet the objectives, make complex queries and apply them to the very broad analysis areas of the study cases and in great detail.
Results
The proposed methodology was applied to the selected case studies according to the seven phases: downloading basic information, the creation and structuring of the database, intermixing and interfacing the SIOSE polygon input, the selection of LiDAR points from intermixed and interfaced polygons, the creation of LiDAR point clusters for fuel and building, the determination of intermix and interface exposure, and, finally, the research development and results.
SIOSE-Based Determination of Exposure to Fire Risk
The implementation of the methodology developed by Moreno, V. [24], extended in this research and applied to the provinces of Madrid and Barcelona, led to twelve relationships stored in the WUI schema of the database. The geometries generated from these relationships provide the metrics of the records computed in the processes. Figures 4 and 5 represent the entire territory, analyzed according to its type of exposure in an intermix or wildland-urban interface, obtained through the polygons of the SIOSE 2014. The polygons marked in red are the result of intermix; they are areas where residential use coexists with 50% or more of the forest mass. The polygons marked in orange are the interface ones, containing residential use and located at a distance equal to or less than 100 m from other polygons with at least 75% fuel inside.

The areas exposed to fire risk in the province of Madrid represent a little less than 1% in an intermix situation and almost 2% of the territory in an interface situation according to the results obtained from SIOSE, whereas in Barcelona, they represent a little more than 2% in an intermix situation and almost 8% of the territory in an interface situation. The areas are represented at a larger scale in order to evaluate the degree of detail achieved. In Madrid, the study was conducted in the municipalities of El Escorial, Galapagar and San Lorenzo del Escorial, and in Barcelona, in the Garraf region, with the aim of representing urban areas with abundant forest mass. For that purpose, a sampling of the analyzed data population was carried out, which is described in the section below.
Calculation of the Sample Size
In a move to achieve methodological rigor in the different areas of the research, it was necessary to take a sample of the obtained results to identify the effectiveness of the methodology. To validate the results, we accessed the WMS of fieldwork photos taken during the development of SIOSE, and the PNOA aerial photography WMS. The following considerations were made to estimate the sample size: for intermix and interface polygons, the work was performed with a 10% error and a 90% confidence level, considering that the sample is 50% heterogeneous.
The samples selected in the municipalities of Madrid for intermix and interface consisted of 16 and 48 polygons (Figure 6a), and for Barcelona, in the Garraf region, they consisted of 42 and 60 polygons (Figure 6b). They were defined by using the random selection tools offered by QGIS, which made it possible to obtain a sample that was spatially well distributed. The selected areas show significant territorial differences in terms of their land-use distribution model, their surface and the relationship between residential zones and forest mass.
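Sample sizes of this kind can be reproduced with the standard formula for proportions. The article does not spell out the exact procedure used, so the sketch below is an illustrative reconstruction using Cochran's formula with a finite-population correction and the stated parameters (10% error, 90% confidence, p = 0.5).

```python
import math

# Illustrative reconstruction: Cochran's sample-size formula for a
# proportion, with finite-population correction. Parameters match those
# stated in the text (e = 10%, 90% confidence, p = 0.5); the exact
# procedure used in the study is an assumption here.
Z_90 = 1.645  # z-score for a 90% confidence level

def sample_size(population: int, z: float = Z_90,
                e: float = 0.10, p: float = 0.5) -> int:
    n0 = z * z * p * (1 - p) / (e * e)      # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)    # finite-population correction
    return math.ceil(n)

print(sample_size(10_000))  # 68: large populations converge to ~68 polygons
print(sample_size(100))     # 41: small polygon populations need fewer samples
```

The smaller counts for the Madrid municipalities are consistent with this behaviour, since their polygon populations are smaller than those of the Garraf region.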
The proportion of affected areas (Table 2) in the intermix is 1.1 to 1, while in the case of the interface, the proportion between polygons of buildings is 1.8 to 1, which may indicate that the exposure in the selected zones of Barcelona is higher than in the selected areas of Madrid.

Based on the samples analyzed for the evaluation of the situation of intermix and interface with SIOSE, it was decided to develop sheets to assess, verify and plot the polygons concerning the sources mentioned above. In order to further refine the results, three levels of exposure (high, moderate, low) were established, as seen in Figure 7, which will be compared against the results of the operations performed with the LiDAR data. The high exposure value indicates dwellings that, in the event of a fire, would be totally compromised. The moderate value indicates dwellings that could be affected by fire to a lesser extent, due to their distance from the fuel.
The low value indicates dwellings that coexist with the fuel but, due to the limited presence of forest mass or to their greater distance from it, would be affected by a fire to a lesser extent than in the previous situations.

It is worth noting that, in the intermix evaluation, the cases in which SIOSE fails to recognize the exposure or produces "false positives" (Table 3) are mostly due to the absence of homes within the polygons. In the case of the interface, the cause is more associated with the geometric ambiguity of the SIOSE information for the different coverages or uses within the same polygon, with more than a 100 m distance from the fuel to the home in these cases. The help of the LiDAR information was essential here to identify the exact location of the thematic information categories of the Land Use Land Cover (LULC) surfaces associated with each of the polygonal entities of SIOSE (association of several thematic attributes to a single geometric object).
It can be pointed out that, for the Garraf region, where the urbanistic model is based on tourism and expansion derived from second homes, the sampled results showed that the assessment was correct for between 85% and 95% of the cases (Table 3), while for the municipalities of Madrid, it was correct for between 95% and 100% of the cases. In addition, there is a differentiated exposure between the study zones because, in the municipalities of El Escorial, Galapagar and San Lorenzo del Escorial, there is a larger number of isolated houses in contact with forest and silvopastoral areas, whereas the exposure in the Garraf region mostly affects more consolidated housing areas that coexist with an abundance of fuel. However, the sampling indicates that the methodology applied is effective in identifying exposure and that it is useful for the territory under study.
LiDAR-Based Determination of Exposure to Fire Risk
Nonetheless, in order to further determine the level of exposure, LiDAR data were used. After obtaining the clusters of vegetation and buildings, it was decided to verify the results generated with the SIOSE database and assess how these results improve with the incorporation of LiDAR data. For this purpose, we generated sheets that show the percentages and levels of exposure determined by the sampling.
Regarding the Garraf region (Table 4), the sampling determines that the methodology is efficient for between 88.5% (interface) and 100% (intermix) of the cases, and exposure levels reclassified through the initial SIOSE identification are recognized; these results can be inferred for the rest of the region. It is worth noting that the LiDAR points corresponding to homes or forest mass that were identified in the analyzed sample made it possible to adjust the exposure results and to bring them closer to the reality of the territory, which is also true in the case of polygons that, when using only the SIOSE database, showed a specific exposure level that was represented by a higher level of accuracy when complemented by the LiDAR information.
Unlike the results of Barcelona, the sampling in the municipalities of the metropolitan area of Madrid is effective for between 91.7% and almost 94% of the cases. This difference is related to the higher accuracy of the LiDAR data in this zone (Madrid = 1 point/m² vs. Barcelona = 0.5 points/m²), which slightly influences the quality of the results obtained.
According to the approached methodology, the zones exposed to fire risk in the wildland-urban interface within the sampling performed (Figures 8 and 9) correspond to the areas represented by the filtered clusters. Once the quality of the results was determined, the intermix and interface polygons identified by SIOSE were re-classified, with the positive clusters indicating that the exposed area was larger in the intermix than in the interface. These areas corresponded to the intersection between the buildings-forest mass at a distance of less than 100 m. However, within the sample, 14% corresponded to interface polygons, which, for SIOSE, were fake positives and which, when using LiDAR, reduced this percentage to 11% since they allowed for re-classification of the exposure. It must be emphasized that it would have been impossible to reach such a degree of accuracy without the combination of both official repositories (SIOSE + LIDAR).
Discussion
This research allowed access to and the handling of information that is difficult to use due to its volume, structure, management and characteristics. A methodology was proposed to enable its use and access for users in different management levels for the development of territorial action plans, emergency plans, plans for self-protection against fire risk, and plans for a more appropriate protection of the environment in the planning of urban development projects.
The research shows that it is possible to create and manage a geodatabase containing land-use information provided by official sources (SIOSE) in combination with the voluminous LiDAR files from PNOA. Storing point clouds in patches and processing them through clusters allows for accurate identification of the WUI zones exposed to fire risk, achieving a high level of detail, applicable at different scales. The use of Geo Small Data approaches, together with the parallelization of processes, allows for high-quality results within a reasonable time.
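The "parallelization of processes" mentioned above works because patches are independent of one another, so each can be processed concurrently. A minimal sketch, assuming toy per-patch work (the function name and task are illustrative, not the project's actual code):

```python
# Sketch of parallel per-patch processing: patches are independent, so
# they can be distributed across workers. The per-patch task here (a
# toy count of ASPRS class-6 building returns) is illustrative.
from concurrent.futures import ThreadPoolExecutor

def count_building_points(patch):
    """Count class-6 (building) returns in one patch of (x, y, z, cls)."""
    return sum(1 for (_, _, _, cls) in patch if cls == 6)

patches = [
    [(0, 0, 5.0, 6), (1, 0, 7.0, 5)],
    [(2, 0, 6.1, 6), (3, 0, 6.3, 6)],
]

with ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(count_building_points, patches))

print(counts)       # [1, 2]
print(sum(counts))  # 3 building returns in total
```

In the database-centred workflow, the same effect is obtained by letting PostgreSQL run the per-patch queries in parallel workers.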
The proposed methodology enabled the automation of a replicable process of calculation and estimation of the built-up areas exposed to fire risk, at a scale of detail appropriate for any urban planning work, estimating which areas are the most exposed to risk.
Discussion
The methodology is also applicable to accelerated urban expansion dynamics, such as those of Barcelona or Madrid. The data sources are open, standardized and regularly updated, thus responding to a public interest, given the impact that exposure to fire risk has on the safety of people and real estate in the study zones. Updates allow for chronological monitoring of the phenomenon as the official geographic information repositories are renewed.
Obtaining the SIOSE intermix and interface polygons was essential for recovering LiDAR points with such a high level of accuracy. To verify the results, the WMS services of the SIOSE 2014 photos and the latest PNOA orthophotos were used, which allowed for a visual analysis of the study area and can always be complemented by the corresponding field work.
It should be noted that a correct analysis of cluster filtering is necessary, so that the calculation of the interaction between the variables of forest fuel-distance-residential zones provides a real estimate of the compromised areas. This situation was appropriately solved thanks to the improvements achieved in the proposed methodology in previous works [24,26], which were implemented in other geographic environments (the Navarra region in the Pyrenees mountain area, and the region of Valencia in the tourist Mediterranean coast) but which share the same objective and application as this study. It is worth mentioning that the major methodological challenges to be solved refer to errors in the classification of LiDAR pulses and the difficulty of handling a point cloud as large as the ones present in this research work. The total computation time was 20 h, and it was performed with a conventional computer with common features and free software.
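The per-tile parallelization mentioned above can be sketched as a worker pool mapped over independent tiles. Everything here is an illustrative stand-in for the study's cluster processing: `count_fuel_points`, the tile layout, and the 0.5 m height cut are assumptions; only the ASPRS vegetation class codes (3/4/5) are standard.

```python
# Illustrative parallelization of per-tile LiDAR filtering, in the spirit of
# the "Geo Small Data" approach: each tile is processed independently.
import multiprocessing as mp

VEGETATION_CLASSES = {3, 4, 5}  # ASPRS low/medium/high vegetation codes


def count_fuel_points(tile):
    """Stand-in for per-tile work: count points whose classification and
    assumed height-above-ground mark them as fuel. `tile` is a list of
    (classification, height) tuples."""
    return sum(1 for cls, z in tile if cls in VEGETATION_CLASSES and z > 0.5)


def process_tiles(tiles, workers=4):
    # Tiles share no state, so a plain pool.map parallelizes cleanly.
    with mp.Pool(workers) as pool:
        return pool.map(count_fuel_points, tiles)


if __name__ == "__main__":
    tiles = [[(5, 2.0), (2, 0.0)], [(4, 1.2), (4, 0.2)]]
    print(process_tiles(tiles, workers=2))  # [1, 1]
```

Any per-tile function with no shared state fits this pattern, which is what keeps the reported wall-clock time feasible on a conventional computer.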
The use of LiDAR data acquired by unmanned aerial vehicles (UAVs), combined with other passive and active remote sensing data, holds the greatest promise for fuel mapping of the wildland-urban interface (WUI) using machine-learning algorithms [33]. It is also necessary to highlight the importance of short-range LiDAR for field data collection, together with the application of qualitative and quantitative mapping methods to visualize land use in a dynamic context, since it is possible to record dynamic phenomena in space thanks to images obtained cyclically by UAVs [34]. This information would be of great value in very detailed studies for emergency management and risk prevention in those areas where exposure to fire risk was previously detected. In addition, advanced topographic methods, such as GNSS surveying and topographic information derived from low-altitude aerial imagery [35], together with field LiDAR, are decisive tools for these purposes.
However, in the case of this research, the methodological proposal was limited to the maximum exploitation of official repositories of geographic open data with national coverage for studies applied to wide regions, but with very detailed results. Perhaps it would be useful in the future to further the research in the most exposed areas detected with this methodology with the aforementioned technologies.
In the introduction, we referred to many previous studies that demonstrated the usefulness of LiDAR data in the study of forest vegetation, forest fires and delimitation of WUI zones. We also pointed out that the advantage of LiDAR accuracy also entails its main disadvantage: the large volume of data to be processed. This is an advantage or disadvantage that depends on the scale of the work and the level of precision. A single LiDAR file from the Download Center of the National Geographic Institute of Spain (CNIG) only covers an extension of 2 × 2 km in Catalonia and 1 × 1 km in Madrid; each of these files can contain a cloud of up to 9 million points, with more than fifteen thematic information fields associated with each of them.
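A rough sizing of one such tile shows why the volume is a handling problem. The per-field byte width below is an assumption (LAS point record formats vary); only the point count and field count come from the text.

```python
# Back-of-the-envelope memory footprint for one LiDAR tile as described:
# up to 9 million points, each with ~15 thematic attribute fields.
POINTS_PER_TILE = 9_000_000
FIELDS = 15
BYTES_PER_FIELD = 4  # assumed average attribute width; LAS formats vary

bytes_per_tile = POINTS_PER_TILE * FIELDS * BYTES_PER_FIELD
print(f"{bytes_per_tile / 1024**2:.0f} MiB per tile")  # ≈ 515 MiB
```

At that order of magnitude, a region of a few hundred tiles cannot be held in memory at once, which motivates the patch storage and per-cluster processing described earlier.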
Thanks to the combination of the official geographic information repositories used in this research-SIOSE and LiDAR-it was possible to process the information from two European regions in a massive way and, at the same time, obtain a great level of detail. This went beyond the reference scale of the SIOSE database (1:25,000) and ruled out the false-positive scenarios that had been classified as positive in the SIOSE polygons due to the geometric ambiguity of the land uses in the SIOSE enclosures. On the other hand, the SIOSE data and aerial photography also made it possible to improve the LiDAR point cloud classification, thereby producing a synergy between the two sources of information.
Conclusions
SIOSE has proven to be a key tool for the identification of exposed zones in the wildland-urban interface. It has an extraordinary thematic potential that is ahead of other LULC databases in terms of thematic possibilities and geometric accuracy and, if properly exploited, it allows for the identification of the phenomenon under study for the study zones in 94% of the areas exposed in the intermix situation, and between 88% and 92% for the areas in the interface situation. The decrease in the percentage between these WUI typologies relates to the geometric ambiguity of their coverage, and, therefore, other sources were used to offset this drawback through ALS active remote sensing. LiDAR data have a high level of accuracy and detail, which enabled the identification of buildings and fuel, hence complementing the polygon information of SIOSE. The combination and complementarity of the data obtained with the proposed methodology generated a synergy capable of providing a more accurate approximation of the WUI territory exposed to fire risk within the study area. In addition, the WMS service of the cadaster of Barcelona, the PNOA orthophotographs and the field photos of the study area available through the WMS service were used in order to verify that the automated process effectively achieved the objective.
The results of this research can be applied at different scales, from a regional to a local level, and can be replicated in any area of the Spanish territory under different hazard conditions. The base information for this project comes from official repositories of the IGN National Center for Geographic Information. Although the volume and complex nature of these geographic data might limit their use to large computers, the methodology used allowed for exploitation of this information with a conventional desktop computer, creating algorithms for the identification of zones exposed to forest fuel by using free software tools and open geographic data.
Design Optimization of Interfacing Attachments for the Deployable Wing of an Unmanned Re-Entry Vehicle
Re-entry winged body vehicles have several advantages w.r.t capsules, such as maneuverability and controlled landing opportunity. On the other hand, they show an increment in design level complexity, especially from an aerodynamic, aero-thermodynamic, and structural point of view, and in the difficulty of housing them in existing operational launchers. In this framework, the idea of designing unmanned vehicles equipped with deployable wings for suborbital flight was born. This work details a preliminary study for identifying the best configuration for the hinge system aimed at the in-orbit deployment of an unmanned re-entry vehicle's wings. In particular, the adopted optimization methodology is described. The adopted approach uses a genetic algorithm available in commercial software in conjunction with fully parametric models created in FEM environments and, in particular, it can optimize the hinge position considering both the deployed and folded configuration. The results identify the best hinge configuration that minimizes interface loads, thus realizing a lighter and more efficient deployment system. Indeed, for such a category of vehicle, it is mandatory to reduce the structural mass as much as possible in order to increase the payload and reduce service costs.
Introduction
The Italian Aerospace Research Centre (CIRA) is involved in several programs to develop new technologies, material, and structural concepts to speed up the process of designing and manufacturing the next European System for in-orbit-experimentation with re-entry capability, known as Space Rider, after the successful heritage of the IXV program [1,2].
For such vehicles, one of the key points is the need for a hot primary structure that can withstand the severe thermo-structural challenges that are typical of the re-entry phase aero-thermal environment. Ceramic matrix composites (CMC) are widely used and have been developed for this purpose in the last few decades [3][4][5][6]. CIRA is also developing new CMC for space applications, for the design of control surfaces and thermal protection systems [7][8][9]. Another key point is the development of technologies applicable to future unmanned spacecraft for re-entry from LEO orbits, particularly w.r.t structural optimization. In this framework, CIRA was the primary contractor on the Unmanned Space Vehicle 3 project, named USV3 [10].
The CIRA Unmanned Space Vehicle (USV3) concept must respect a set of technical guidelines, in particular, the objective is to perform an autonomous orbital and suborbital flight, with enhanced flying capability and conventional landing on a runway.
A typical atmospheric re-entry flight consists of three phases: a hypersonic phase, a transition phase from supersonic to subsonic flight, and a landing phase. During the hypersonic phase, aero-braking decelerates the vehicle from near-orbital velocity.
The deployable wing system (DWS) guarantees the connection between the fuselage cold structure and the deployable wing structure and is basically composed of the actuation system and at least two hinges, the shape of which has been conceived to ensure a stable and strong connection both in the folded and deployed configuration.
The position of the DWS rotation axis has been defined considering the need to minimize the portion of fixed wing, to minimize the gap between the fixed wing portion and the deployable wing, and to avoid any interference during wing deployment, as shown in Figure 2.
The locking subsystem concept is basically made up of a set of pins moved by a linear actuator that has the purpose of retracting the pins (release function) before the deploying phase, and re-inserting them into their seats when the deployment phase is completed (Figure 3).
A set of main preliminary requirements (summarized in Table 1) has been defined as the input for performing a trade-off study to define the best position of the DWS hinges.
During the Launch/Ascent phase, the system shall withstand the Quasi Static Loads (QSL) specified in the VEGA-C launcher user manual [14] (requirement F4, Table 1).
Problem Description
The proposed work presents a positioning study of the hinges that enable the deployment of the wing of the USV3 DWS. The results of the optimization process adopted to evaluate the best configuration for the hinges are shown below. The critical parameters are the interface forces generated both in the unfolded and folded configuration; therefore, the analyses aimed to identify the optimal position in terms of the minimization of these forces. These results are the input data and represent the general requirements for the design of the deployment system (DWS). Choosing the right hinge position can reduce the applied loads as much as possible, leading to components with reduced mass (a key aspect for space applications). Figure 4 shows the CAD model of the structural parts of the USV3 vehicle wing (therefore, without the TPS). The same figure reports both the wing's rotational axis and the starting position of the hinges derived from the structural architecture of the reference vehicles (IXV, USV); in particular, their position is related to the position of the main frames. With the development of USV3, however, with the introduced complication of a deployable wing, the possibility of preserving the total mass requirement (Table 1) could be put at risk (obviously, the deployment system results in an increment of the number of parts). For this reason, dedicated structural optimization analyses are needed. Interface load reduction, which could be derived from better hinge positioning, is one of the primary concerns for obtaining lighter components.
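The optimization loop described here (an evolutionary search driving a parametric FE model) can be sketched with an analogous off-the-shelf optimizer. This is not the paper's GA or its APDL model: SciPy's differential evolution stands in for the commercial GA, and the quadratic surrogate below is a purely illustrative placeholder for the interface-load evaluation; the four design variables and the forward/aft feasibility constraint follow the text.

```python
# Sketch of the hinge-placement search with an evolutionary optimizer.
# The objective is a hypothetical surrogate, NOT the real interface-load model.
import numpy as np
from scipy.optimize import differential_evolution

# Design variables, as in the text: X of C1 and C4, Z of C2 and C5 (metres; bounds assumed).
BOUNDS = [(0.0, 2.0), (0.5, 3.0), (-0.5, 0.5), (-0.5, 0.5)]


def max_interface_load(x):
    """Hypothetical surrogate: worst reaction load over the deployed and
    folded load cases, as a smooth function of the four hinge parameters."""
    c1x, c4x, c2z, c5z = x
    deployed = (c4x - c1x - 1.0) ** 2 + c2z ** 2
    folded = (c4x - c1x - 1.2) ** 2 + c5z ** 2
    # Penalize infeasible layouts where the forward hinge is not ahead of the aft one.
    penalty = 1e3 * max(0.0, c1x - c4x)
    return max(deployed, folded) + penalty


result = differential_evolution(max_interface_load, BOUNDS, seed=1)
print(result.x, result.fun)
```

In the real workflow each objective evaluation would regenerate the APDL model and run the two static analyses, so the cheap per-run cost reported later is what makes a population-based search affordable.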
The DW connection system consists of two hinges, which will be referred to as the forward hinge (FRW-H) and afterward hinge (AFT-H). The wing rotates around the axis shown in Figures 4 and 5 (Hinge Axis) and is kept in the deployed position by a pair of pins that engage in two holes (stopper deployed configuration) placed on the stopper deployed axis (SDC Axis). The wing is kept in the folded position by a similar system that involves another couple of holes (stopper folded configuration) placed along the stopper folded axis (SFC Axis). Figure 5 shows the three fundamental axes for evaluating the best position of the hinges.
The hinge axis, lying on the XZ plane, is input data and, therefore, cannot be modified, since its position enables the wing to rotate without any interference w.r.t the fuselage structures. The other two axes are instead determined by the architecture of the hinges. In particular, the stopper axis in the unfolded configuration lies in the XZ plane (vehicle axes) and its inclination around the Y axis is a function of the hinge axis distance at the FRW and AFT stations. The stopper axis in the folded configuration is finally determined by rotating the unfolded stopper axis by the closing angle, which is 55°, around the hinge axis. Therefore, the hinges are able to slide along these axes in both directions. The three axes are not parallel to each other and, for this reason, the projections of the hole centers of each hinge (FRW-H and AFT-H) will not lie in the same ZY plane (vehicle reference system). The parametric FE model (described in the next section) can control these misalignments accurately and, therefore, determine the station along the X axis of the center of each hole. Figure 5 shows the centers of the holes that are used to evaluate the reaction forces and to accommodate the rotation axes and locking pins.
Definition of Design Variables
The hinge location is defined by means of four parameters that represent the design variables. They are the position, along the X-axis, of points C1 and C4 and the position, along the Z-axis, of points C2 and C5, as reported in Figure 6. Points C3 and C6 are automatically determined, for the defined wing rotation angle, by a rigid rotation of the C2 and C5 points around the hinge axis (Figure 5). Within the developed APDL macro, appropriate relationships are defined between the positions of the six points in order to ensure the consistency of the models. For example, mathematical relations are imposed on the X positions in order to guarantee that the forward hinge is always ahead of the afterward one. The stopper axis, in the deployed configuration, is parallel to the X axis of the global model by setting H1-Z = H2-Z. This assumption will be held to satisfy functional requirements.
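The rigid rotation that produces C3 and C6 from C2 and C5 can be written with Rodrigues' rotation formula. The axis direction and point coordinates below are hypothetical placeholders; only the 55° closing angle comes from the text.

```python
# Rigid rotation of a point about an arbitrary axis (Rodrigues' formula),
# as used conceptually to derive C3/C6 from C2/C5 around the hinge axis.
import numpy as np


def rotate_about_axis(point, axis_point, axis_dir, angle_deg):
    """Rotate `point` by `angle_deg` about the line through `axis_point`
    with direction `axis_dir`."""
    k = np.asarray(axis_dir, float)
    k = k / np.linalg.norm(k)                      # unit axis direction
    v = np.asarray(point, float) - np.asarray(axis_point, float)
    theta = np.radians(angle_deg)
    v_rot = (v * np.cos(theta)
             + np.cross(k, v) * np.sin(theta)
             + k * np.dot(k, v) * (1.0 - np.cos(theta)))
    return v_rot + np.asarray(axis_point, float)


# Example: a hinge axis along X through the origin, closing angle 55°.
C2 = np.array([1.0, 0.4, 0.0])
C3 = rotate_about_axis(C2, axis_point=[0, 0, 0], axis_dir=[1, 0, 0], angle_deg=55.0)
print(C3)
```

The station along the axis (the X component here) and the distance from the axis are both preserved, which is exactly why the folded-configuration hole centers are fully determined by the other four holes.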
Model Description
To study the optimal configuration for the hinge system, a simplified parametric model of the wing was developed. Both the model and the whole set of analyses were carried out in the ANSYS environment. The parametric model was created by means of a dedicated routine developed using APDL language [15] (Ansys Parametric Design Language).
The numerical model was defined considering the main geometric dimensions and, in particular, the two points where the load was applied, i.e., the center of gravity (CoG) and the point where it is possible to concentrate the aerodynamic loads (PtL). Figure 7 shows, schematically, the global dimensions and the location of the CoG and PtL. The estimated overall mass of the wing is 92.15 kg (including the TPS and the installed sub-systems). The rotation angle in the folded configuration is 55°. Figure 8 shows the simplified numerical model. The hinges were not modeled directly but by means of rigid elements, as direct modeling of the hinges is not entirely relevant for the evaluation of reaction loads. Furthermore, this choice reduces the computational costs, which is a relevant issue for the optimization analysis.
Even if a full FE model is not mandatory for such evaluations (as the rigid model was used), it was adopted to have a better understanding of the analyzed configurations and to simplify the constraints definition. Indeed, the location of the holes could be determined by geometrical/mathematical relationships and the reaction forces could be determined by solving the equation system related to a hyper-static structure. On the other hand, using an analytical formulation would introduce some approximations in order to simplify the equations and it would not provide the capability to define the exact location of the holes (the position of the stopper holes in the folded configuration depend on the other four holes and by their specific station along the chord direction. Further, C1, C2, and C3 do not lie in the same plane, nor do C4, C5, and C6).
Furthermore, the choice to adopt an FE model, instead of analytical formulations, is also justified by the very small computational costs related to such an FE model. Indeed, a single run, which consists of a model generation and two static analyses with two postprocesses, requires at most 15 s on an HP Z840 Workstation-Intel Xeon CPU E5-2620 v3 @ 2.40 GHz; RAM 128 GB.
The outer surface of the wing has been discretized with shell elements (Shell181) that have been assigned a rigid behavior (Figure 9A). The load application points, for both the unfolded and folded configurations, are connected to the main structure by rigid elements (MPC184). The centers of the holes of the deploying system are connected to the root rib by rigid elements (MPC184), as shown in Figure 9B. The whole FE model consists of about 4000 elements (3600 shells and 400 MPCs) and about 3500 nodes. Since a rigid behavior has been adopted for the wing, the average element size (25 mm far from the region of the hinges and 5 mm close to them) only aims to reproduce the wing shape with reasonable accuracy and to obtain a good force distribution (both applied loads and reaction forces); mesh refinement is not a critical aspect.
Boundary Conditions
The wing structure is constrained at the centers of the holes of both components of each hinge. The centers of these holes lie on one of the three defined axes (hinge axis, stopper axis in unfolded configuration, and stopper axis in the folded configuration).
The reference points of the rigid elements are constrained in their translational degrees of freedom w.r.t. local reference systems whose X axis is parallel to the specific axis on which the point lies. The same condition was applied in both the unfolded and folded configurations. Figure 10 schematically shows the boundary conditions applied in the deployed configuration.
The applied loading conditions differ between the unfolded and folded configurations. In the deployed configuration the load, equal to 13.6 kN, acts along the Z direction and is applied at the pressure center, i.e., PtL. When the wing is in the folded configuration, it is inside the fairing of VEGA-C and, therefore, the load derives from the acceleration field generated during the launch phase; it can be schematically applied at the center of gravity of the wing (CoG). The acceleration factor is 7.5 g along the longitudinal direction (X axis) and 1.35 g along the lateral directions (Z and Y axes), as reported in Table 1.
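The folded-configuration load at the CoG follows directly from the quasi-static load factors and the estimated wing mass. A minimal numerical check of the resulting inertial forces, assuming g = 9.81 m/s² and the 92.15 kg mass quoted above:

```python
G = 9.81          # m/s^2, assumed standard gravity
MASS = 92.15      # kg, estimated wing mass from the text

def launch_loads(mass_kg, n_long=7.5, n_lat=1.35, g=G):
    """Quasi-static inertial forces [N] at the CoG for the folded wing."""
    fx = n_long * g * mass_kg          # longitudinal (X axis)
    fy = fz = n_lat * g * mass_kg      # lateral (Y and Z axes)
    return fx, fy, fz

fx, fy, fz = launch_loads(MASS)
# fx is about 6.78 kN, fy = fz about 1.22 kN
```

These values are small compared with the 13.6 kN aerodynamic load of the deployed configuration, which is consistent with the folded case being driven by a different failure mode (the stopper pins) rather than by load magnitude.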
Optimization Analysis
To evaluate the optimal hinge configuration, an optimization analysis has been performed using the commercial ModeFrontier code [16], which is an integration platform for multi-objective and multi-disciplinary optimization. It provides an easy coupling method with third-party engineering tools (like Finite Element codes), enables the automation of the design simulation process, and facilitates analytic decision making.
A multidisciplinary approach is key for a successful design process, especially when the constraints and the requirements are very challenging. A powerful workflow enables the execution of complex chains of design optimization and innovative algorithms to determine the set of best possible solutions that combine opposing objectives.
The code is widely used in engineering applications and is demonstrated to be suitable for several technical problems [17][18][19][20].
Figure 11 shows the defined workflow, which is composed of "nodes" connected to each other to define the logic data path. The input variable nodes are used to define the design variables, their allowable ranges, and their increments (in this case the variables are treated as discrete variables). The input file node is the APDL macro; it translates the design variables into a format that ANSYS can manage and is thus the link between ModeFrontier and ANSYS. The DOS node is a simple DOS shell used to call ANSYS on demand to execute the numerical simulation. The DOE (design of experiments) node defines the starting generation, and connected to it is a Scheduler node that specifies the optimization algorithm to be used. The output file nodes have the same function as the input file nodes, but in this case the link is from ANSYS to ModeFrontier. The output variable nodes represent all data from the solver that are available for the optimization process (they can also be used simply as checking data). The constraint nodes define the constraint functions and can be applied to both output and input variables. Finally, the objective function nodes define whether the process has to minimize or maximize well-defined output variables.

The design variables are the coordinates of the centers of the holes C1, C2, C4, and C5, as defined in Table 3. The coordinates of points C3 and C6, relative to the folded configuration, are determined by the previous points and by the folding angle. The variables H1-X and H2-X are defined in a dimensionless way in order to avoid unfeasible configurations; they are then converted into physical dimensions by the calculation routine. They can range from 0 to 1 with a step increment equal to 0.025.
The variables H1-Z and H2-Z can range from 10 mm up to 50 mm with a step increment equal to 2.5 mm.
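A small sketch of the discrete design space and of the dimensionless-to-physical conversion follows. The chordwise bounds passed to `to_physical` are placeholders: in the actual routine they derive from the wing geometry and from the position of the other hinge, which is what excludes unfeasible (overlapping) layouts by construction.

```python
def to_physical(h_x_norm, x_min, x_max):
    """Map a dimensionless hinge station (0..1) to a chordwise coordinate [mm].

    x_min/x_max are hypothetical bounds; in the real routine they follow
    from the wing geometry and the other hinge's station.
    """
    return x_min + h_x_norm * (x_max - x_min)

# Discrete ranges as defined in the text.
h_x_values = [i * 0.025 for i in range(41)]        # 0.0 .. 1.0, step 0.025
h_z_values = [10.0 + i * 2.5 for i in range(17)]   # 10 .. 50 mm, step 2.5

# Full grid size for the 4 variables (2 x H-X, 2 x H-Z):
n_designs = len(h_x_values) ** 2 * len(h_z_values) ** 2
```

With 41 × 41 × 17 × 17 = 485,809 possible combinations, exhaustive evaluation is impractical even at 15 s per run, which motivates the genetic search over 10 generations of 30 individuals described next.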
Figure 11. Optimization workflow.
The developed APDL macro is the core of the optimization procedure: it manages the design variables and translates them into an FE model ready to be solved by ANSYS and, at the end of each design evaluation, it uses ANSYS to extract the output data that are then used in ModeFrontier as constraint and objective functions.
The initial sampling, named generation 0, was performed with a SOBOL algorithm (based on a pseudo-random SOBOL sequence, it provides a uniform distribution of the experiments over the design space), and all generations were composed of 30 elements each, the so-called individuals. The adopted optimization algorithm is MOGA-II, a genetic multi-objective algorithm that supports geographical selection and directional cross-over, implements elitism for multi-objective search, defines constraints by objective function penalization, and allows generational or steady-state evolution. The evolution process is defined by 10 generations.

In the adopted workflow there are no constraints on the input variables. Such constraints, related to the congruence of the numerical model, are already implicitly implemented within the APDL macro that generates the parametric model.
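MOGA-II itself is proprietary to ModeFrontier, but the Sobol-based generation 0 can be sketched with SciPy's quasi-Monte Carlo module. This is an illustration of the sampling idea, not the tool's internal implementation; the scaling bounds reuse the variable ranges defined above (H-X dimensionless, H-Z in mm).

```python
from scipy.stats import qmc

# Generation 0: a Sobol sequence spreads the 30 starting individuals
# uniformly over the 4-variable design space (H1-X, H2-X, H1-Z, H2-Z).
sampler = qmc.Sobol(d=4, scramble=False)
gen0 = sampler.random_base2(m=5)[:30]   # 32 low-discrepancy points, keep 30

# Scale the unit-cube sample to the physical ranges.
lower = [0.0, 0.0, 10.0, 10.0]
upper = [1.0, 1.0, 50.0, 50.0]
gen0_scaled = qmc.scale(gen0, lower, upper)
```

In the real workflow each of these 30 designs would then be rounded to the discrete grid and passed to the APDL macro for evaluation.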
The output variables are the bearing and axial forces (in a local reference system) for each pin (hole center). In particular, since the bearing loads are the most critical values for such a problem, they have been defined as objective functions, while the axial loads have been set as constraint functions. As reported in the optimization workflow, there are eight objective functions: four for the deployed configuration and four for the folded configuration. Table 2 reports the limit values set for the constraint functions in terms of axial load on the bolts (reaction load at the reference point of the MPC element). Figure 12 reports the scheme adopted for identifying the output variables.
The optimization process evaluated 300 design sets, and only a very small fraction of them (7.3%) were determined to be unfeasible. A design is considered unfeasible if the constraints reported in Table 3 are violated. Figure 13 reports the time history of the objective functions H1UP_BR_DP, H1LW_BR_DP, H2UP_BR_DP, and H2LW_BR_DP, which are the bearing loads in the deployed configuration at the four constrained points. The results marked with a gray square are feasible, while those marked with an orange rhombus are unfeasible.
The previous figures show that a clear trend is visible only in the folded configuration; there, the algorithm is able to provide design sets that minimize the reaction forces for all hinges simultaneously. Conversely, in the deployed configuration, a sample is able to minimize only one or two objective functions, and thus the best one has to be defined as a compromise.
The probability density function, related only to feasible designs, for the two design variables (H1-X and H2-X) is shown in Figure 15. In an evolutionary optimization process, the probability of selecting a particular sample depends on the fitness function, which is better for samples that satisfy the objective and constraint functions with a larger margin. Therefore, the better an individual is, the greater the likelihood that it will be re-selected and, therefore, that it will survive into the next generations. Of course, if a sample appears many times during the evolution process, it is one of the best samples.
Therefore, by relating the number of samples in a specific range to the total number, it is possible to quickly and easily understand which regions should be chosen for the hinges.
The graphs show that the most feasible values of the H1-X variable are concentrated in the range of 100-450 mm. For the variable H2-X, there is a thickening in the region of 800-1080 mm. The results indicate that increasing the distance between the hinges is a valid option to reduce the bearing forces at the four points, both for the deployed and folded configurations. The H1-Z and H2-Z variables are quite obvious: the best value, for both of them, is the smallest feasible value, that is, 10 mm.
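The "number of samples per range" argument amounts to building a normalized histogram of the feasible designs for each variable. A minimal stdlib sketch, where the H1-X station values are hypothetical and for illustration only:

```python
from collections import Counter

def feasible_density(values, bin_width):
    """Fraction of feasible designs falling in each bin of a design variable.

    Keys are the lower edges of the bins; values sum to 1.
    """
    counts = Counter(int(v // bin_width) for v in values)
    n = len(values)
    return {b * bin_width: c / n for b, c in sorted(counts.items())}

# Hypothetical feasible H1-X stations [mm] -- illustrative only.
h1_x = [120, 180, 210, 260, 300, 340, 410, 430, 620, 900]
density = feasible_density(h1_x, bin_width=100)
```

Bins with a high fraction of surviving individuals indicate regions repeatedly re-selected by the genetic algorithm, i.e., the candidate regions for placing the hinge.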
A simultaneous representation of all eight objective functions is not possible; therefore, they have been divided into two groups, each reporting the objective functions of one wing configuration.
The 4D bubble graph of Figure 16 shows the objective functions relating to the deployed wing (only feasible designs). The horizontal and vertical axes report the bearing loads of the upper pin for the forward and aft hinges, respectively; the bubble color refers to the bearing load of the lower pin of the forward hinge; and, finally, the bubble size is related to the bearing load of the lower pin of the aft hinge. The figure also highlights the region (which is part of the Pareto front) where individuals, even if they do not minimize a specific objective function, are a good compromise in the minimization of the four bearing loads (all bearing loads are about 20 kN).
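Identifying this compromise region amounts to extracting the non-dominated (Pareto) subset of the feasible designs under simultaneous minimization of the four bearing loads. A naive O(n²) sketch, with hypothetical load values rather than results from the actual optimization:

```python
def pareto_front(points):
    """Indices of non-dominated designs when all objectives are minimized."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Hypothetical bearing loads [kN] (H1 up, H1 low, H2 up, H2 low) per design.
designs = [(20, 21, 19, 22), (18, 25, 24, 21), (30, 31, 29, 33)]
best = pareto_front(designs)   # the third design is dominated by the first
```

ModeFrontier performs this filtering internally via MOGA-II's elitism; the sketch only makes explicit what "part of the Pareto front" means for the highlighted region.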
The 4D bubble graph of Figure 17 shows the objective functions relating to the folded wing (only feasible designs). In this case, it is evident that the selection of the optimal individuals is simplified, since many individuals are able to simultaneously minimize the four objective functions.
Many of the individuals belonging to the region highlighted in Figure 16 (deployed configuration) are also highlighted in Figure 17 (folded configuration). Figure 18 shows only the elements present in both regions. About 13 designs can be considered as the best ones for both configurations.
Figure 19 shows the individuals present in both regions defined above and, in particular, relates each selected individual to the design variables. This graph highlights again that the objective functions are minimized as the relative distance between the two hinges increases. All selected designs provide bearing loads close to each other, and therefore none of them is clearly better than another. Therefore, the final choice could be completed considering other functional requirements, such as the real possibility of installing the hinges in the selected regions.

The obtained results enable a preliminary estimation of the diameters of the bolts/pins and the plate thickness. For the subsequent evaluations, only the bearing components were taken into account, as they are strictly connected to the sizing of both the locking pins (in the unfolded and folded configurations) and the rotation axes. Because of the mission of the investigated vehicle and, therefore, due to the operative temperature, the pins, the lug, and the hinge axes are made of titanium alloy Ti6Al4V. Considering an operative temperature equal to 160 °C, the tensile ultimate strength is about 720 MPa, the ultimate shear stress is about 398 MPa, and the allowable bearing stress is 1071 MPa.

To estimate the diameters and the thickness of the connection components, only the maximum loads from the best design sets, for each configuration (wing unfolded and folded), have been considered. Table 3 shows the minimum diameters of the locking pins and hinge rods, considering a safety factor equal to 1.5 and a number of cutting planes equal to 2. Even if the ECSS handbook requires a smaller safety factor, a more conservative value was adopted here since the vehicle is still in phase 0/A.
To estimate the diameters and the thickness of the connection components, only the maximum loads from the best design sets, for each configuration (wing unfolded and folded), have been considered. Table 3 shows the minimum diameters of the locking pins and hinge rods, considering a safety factor equal to 1.5 and a number of cutting planes equal to 2. Even if the ECCS handbook requires a smaller safety factor, in this context, a more conservative safety factor was adopted since the vehicle is still in phase 0/A. Considering the diameters of the previous table, it is possible to estimate the thickness of the plate in order to not incur breakages due to exceeding the allowable bearing (Table 4) values. In this case, the yield value was considered as the reference value. Finally, the net section failure for the lug used to retain the locking pins is reported ( Table 5). The general rules prescribe that the widths are not less than two times the diameter of the hole, therefore, having previously determined the diameter of the hole it is possible to estimate the thickness of the lug.
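As a sanity check of the sizing logic just described, the sketch below reproduces the three checks: double-shear pin diameter from the shear ultimate stress, plate thickness from the bearing allowable, and lug width from the hole diameter. The material allowables, the 1.5 safety factor, and the two shear planes come from the text; the 50 kN applied load is a purely hypothetical value for illustration, not a result of the paper.

```python
from math import pi, sqrt

# Ti6Al4V allowables at 160 °C (from the text) and design factors.
TAU_ULT = 398.0      # shear ultimate stress [MPa]
SIGMA_BR = 1071.0    # bearing stress allowable [MPa]
SF = 1.5             # conservative safety factor (vehicle still in phase 0/A)
N_PLANES = 2         # number of shear (cutting) planes

def min_pin_diameter(load_n: float) -> float:
    """Minimum pin diameter [mm] so that the factored shear stress,
    shared over N_PLANES shear planes, stays below TAU_ULT."""
    area_req = load_n * SF / (N_PLANES * TAU_ULT)  # required area per plane [mm^2]
    return sqrt(4.0 * area_req / pi)

def min_plate_thickness(load_n: float, d_mm: float) -> float:
    """Minimum plate thickness [mm] against bearing failure
    (bearing acts on the projected area d * t)."""
    return load_n * SF / (SIGMA_BR * d_mm)

def min_lug_width(d_mm: float) -> float:
    """Lug width rule: not less than twice the hole diameter."""
    return 2.0 * d_mm

# Hypothetical bearing load of 50 kN (illustrative only).
F = 50_000.0  # [N]
d = min_pin_diameter(F)
t = min_plate_thickness(F, d)
```

Since MPa = N/mm², forces entered in newtons yield dimensions directly in millimetres.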
Conclusions
The present work illustrates a structural optimization procedure for the definition of a deployable wing connection system of a sub-orbital unmanned re-entry vehicle (USV3), class 2500 kg, equipped with a deployable wing housed within the fairing of the European VEGA-C launcher. The obtained results provide useful data for the preliminary design of the deployment system, consisting of two hinges that allow the wing to deploy once the vehicle is placed in orbit and thus enable a descent, a hypersonic flight, and a landing phase with greater control capability.
The optimization process uses two pieces of commercial software (ModeFrontier and ANSYS) and, in particular, it exploits a fully parametric model generated by means of an APDL macro and a multi-objective genetic algorithm. The optimization process focuses on minimizing the bearing forces that are generated by the design loads, defined both by the launch phase (inertial loads) and by the hypersonic flight phase (aerodynamic loads).
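To make the selection of the "best design sets" concrete, the following sketch applies a brute-force Pareto filter, the same dominance test a multi-objective genetic algorithm relies on, to a grid of candidate hinge positions. The two objective functions are toy surrogates invented for this example; in the actual process, these values are the bearing forces returned by the parametric ANSYS model.

```python
def bearing_loads(x_fwd: float, x_aft: float):
    # Toy surrogate for the two objectives (bearing force at each hinge).
    # Larger hinge spacing lowers both reactions; forward-hinge position
    # pulls the two objectives in opposite directions to create a trade-off.
    span = x_aft - x_fwd
    f1 = (500.0 - x_fwd) + 2.0e4 / span
    f2 = 0.8 * x_fwd + 2.0e4 / span
    return f1, f2

def dominates(a, b):
    """Pareto dominance: a is no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Candidate hinge positions [mm] within the optimal ranges found for the
# root rib (forward hinge 100-400 mm, aft hinge 750-962 mm).
designs = [(xf, xa) for xf in range(100, 401, 50) for xa in range(750, 963, 50)]
objs = {d: bearing_loads(*d) for d in designs}
pareto = [d for d in designs
          if not any(dominates(objs[o], objs[d]) for o in designs if o != d)]
```

With this surrogate, every non-dominated design sits at the largest aft-hinge position, while the forward-hinge position trades one bearing load against the other; the genetic algorithm explores the same dominance structure without enumerating the whole grid.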
The results show that the loads on the hinges can be significantly reduced by shifting them to strategically convenient positions. In particular, a set of optimum designs (hinge locations) has been defined that minimizes the loads in both the deployed and the folded configuration. The hinges, obviously, have to be positioned within the root rib domain, and the optimization process has shown that the optimal position of the forward hinge (point C1) is between 100 and 400 mm w.r.t. the leading edge of the root profile in the longitudinal direction. The aft hinge (point C4) should be positioned between 750 and 962 mm (the maximum dimension of the root rib is 1180 mm). Regarding the transversal position, for both hinges (points C2 and C5) the locking pins have to be positioned as low as possible, i.e., at 10 mm. The remaining two points, C3 and C6, are automatically determined given that the maximum wing rotation is 55°.
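The folded position of a point such as C3 or C6 follows from a plane rotation of its deployed position about the hinge centre. A minimal 2-D sketch of that step; the coordinates below are invented for illustration, and only the 55° maximum rotation comes from the text:

```python
from math import cos, sin, radians

def rotate_about(p, c, angle_deg):
    """Rotate point p about centre c by angle_deg in the rib plane (2-D)."""
    a = radians(angle_deg)
    dx, dy = p[0] - c[0], p[1] - c[1]
    return (c[0] + dx * cos(a) - dy * sin(a),
            c[1] + dx * sin(a) + dy * cos(a))

# Hypothetical deployed locking-pin position and hinge rotation centre [mm].
c1 = (250.0, 10.0)            # forward hinge centre (illustrative)
c3_deployed = (250.0, 110.0)  # deployed position of the paired point
c3_folded = rotate_about(c3_deployed, c1, 55.0)  # maximum wing rotation 55°
```

The rotation preserves the distance to the hinge centre, which is what fixes C3 and C6 once C1, C4 and the 55° rotation are known.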
The optimization process provides several configurations suitable as the best design: each minimizes both the diameter of the pins/bolts and the thickness of the surrounding and locking plates, fulfilling the fundamental requirement of structural mass reduction. The final choice could be completed considering other functional requirements/constraints that will be introduced in a subsequent design phase, which will also include the detailed design of the deployment system.
|
v3-fos-license
|
2018-04-03T03:41:23.985Z
|
2016-09-01T00:00:00.000
|
7654236
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://royalsocietypublishing.org/doi/pdf/10.1098/rsbl.2016.0463",
"pdf_hash": "7a3c8c2162ea7d49917c50465705499aab1ac686",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:953",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "e870687696e86a2059bc65b37fa5db50be86385b",
"year": 2016
}
|
pes2o/s2orc
|
Decreased small mammal and on-host tick abundance in association with invasive red imported fire ants (Solenopsis invicta)
Invasive species may impact pathogen transmission by altering the distributions and interactions among native vertebrate reservoir hosts and arthropod vectors. Here, we examined the direct and indirect effects of the red imported fire ant (Solenopsis invicta) on the native tick, small mammal and pathogen community in southeast Texas. Using a replicated large-scale field manipulation study, we show that small mammals were more abundant on treatment plots where S. invicta populations were experimentally reduced. Our analysis of ticks on small mammal hosts demonstrated a threefold increase in the ticks caught per unit effort on treatment relative to control plots, and elevated tick loads (a 27-fold increase) on one common rodent species. We detected only one known human pathogen (Rickettsia parkeri), present in 1.4% of larvae and 6.7% of nymph on-host Amblyomma maculatum samples but with no significant difference between treatment and control plots. Given that host and vector population dynamics are key drivers of pathogen transmission, the reduced small mammal and tick abundance associated with S. invicta may alter pathogen transmission dynamics over broader spatial scales.
Introduction
Invasive species can directly or indirectly alter vector-borne disease systems by changing the abundance of, or interactions between, vectors and their hosts. Previous studies have most commonly implicated the invader in altering species relationships in ways that support vector-borne pathogen transmission and, therefore, increase disease risk. For example, a widespread, invasive shrub increases human risk of ehrlichiosis because it provides habitat for deer that host infected ticks [1], and densities of ticks and tick hosts were greatest in areas that had been invaded by the causative agent of sudden oak death [2]. By contrast, with few exceptions (e.g. [3]), invasive species have less frequently been implicated in the reduction of infectious disease transmission. However, invasive host species may dilute vector-borne disease risk consistent with the dilution effect hypothesis [4]; for example, the prevalence of flea-transmitted Bartonella species was reduced with increasing densities of introduced voles [5].
Here, we investigate the potential impact of the invasive red imported fire ant (Solenopsis invicta) on tick, small mammal and pathogen communities in southeast Texas. Ticks and small mammals transmit and maintain numerous zoonotic pathogens that are significant public health concerns. Solenopsis invicta are known to predate small mammals [6], and their presence is associated with changes in mammal foraging activity [7] and habitat selection [8] possibly mediated by changes in food resources [9]. Solenopsis invicta are also associated with reductions in tick populations [10,11], although effects vary between tick species [10]. Using a large-scale manipulative experiment to reduce S. invicta populations across an area of historic invasion, we expected that S. invicta predation and avoidance behaviour by mammals and ticks would lead to decreased mammal, tick and pathogen abundance in plots where S. invicta were in high density relative to treatment plots where S. invicta were experimentally suppressed.
Material and methods
The manipulative experiment occurred at two field sites separated by over 160 km in southeast Texas: Attwater Prairie Chicken National Wildlife Refuge (APCNWR) and a private ranch in Goliad County (GRR). Each field site was partitioned into two treatment plots and two control plots. Treatment plots were chemically treated with Extinguish Plus™ (Central Life Sciences, Schaumburg, IL, USA) for S. invicta suppression as part of an existing management plan for Attwater's prairie-chicken (Tympanuchus cupido attwateri) [12]; control plots were not treated. Efficacy of the treatment was monitored by setting out fatty lures in treatment and control plots ([12]; see the electronic supplementary material).
Small mammals and their attached ticks were collected using seed-baited Sherman live traps (H.B. Sherman Traps, Tallahassee, FL, USA). Three line transects (approx. 20 m apart), each with 20 traps spaced 10 m apart, were spread across each of the four plots at both field sites, resulting in a total of 60 traps per plot and 240 traps per site. Small mammal trapping was conducted for two consecutive nights each month (APCNWR: trapping occurred from June 2013 until September 2014; GRR: October 2013 until July 2014, with the exception of January 2014). All captured mammals were marked with an ear tag, identified to species and inspected for ticks, which were removed, identified and stored in 70% ethanol. Off-host tick presence was assessed via drag sampling (see the electronic supplementary material). On-host ticks were tested for infection with microbes in the genera Rickettsia and Borrelia (see the electronic supplementary material).
We used general linear mixed models assuming a negative binomial error distribution to analyse counts of mammals and on-host ticks across treatment and control plots. We used a zero-inflated (ZI) model if it fit the data better (i.e. lower Akaike information criteria) than the same model that did not account for ZI. All models were implemented in program R (v. 3.2.2) in the package glmmADMB (v. 0.8.3.2). Site (two levels, APCNWR and GRR) and season (four levels, spring = March to May; summer = June to August; autumn = September to November; winter = December to February) were added to models as random intercepts (mammal abundance was spatio-temporally heterogeneous throughout the study; see the electronic supplementary material). Sampling effort (effective trap nights) per transect was included in the model using the offset function. Significance of all treatment coefficients was assessed through a log-likelihood ratio test of nested models assuming a χ²-distribution. Association between pathogen infection of ticks (larval pools, larval individuals, and nymphs analysed separately) and S. invicta treatment was tested with a Fisher's exact test.
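For a 2×2 table of infection status by treatment, the Fisher's exact test used here can be reproduced with a short stdlib-only function; the counts in the usage line are hypothetical, not the study's data.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table [[a, b], [c, d]],
    e.g. infected/uninfected ticks on treatment vs control plots."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):
        # Hypergeometric probability of a table with the same margins
        # whose upper-left cell equals x.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    # Two-sided p-value: sum the probabilities of every table whose
    # probability is at least as extreme (no greater) than the observed one.
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

# Hypothetical counts: 4/30 ticks infected on treatment vs 3/28 on control.
p = fisher_exact_2x2(4, 26, 3, 25)
```

This enumerates all tables with the observed margins, which is exact and cheap at these sample sizes; statistical packages such as R's `fisher.test` implement the same definition.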
Results
The majority (64.1%) of small mammals captured were from S. invicta-suppressed treatment plots (table 1; figure 1; electronic supplementary material, figure 1). Our model predicted a 1.8-fold increase in the total number of small mammals captured per unit effort on treatment relative to control plots (p < 0.001). The effect was consistent among the three most commonly sampled mammal species (Sigmodon hispidus, Baiomys taylori and Reithrodontomys fulvescens). Our model predicted a 2.0-fold increase in S. hispidus captured on treatment relative to control plots (p < 0.001). Effect sizes were slightly lower for B. taylori (1.4-fold increase on treatment plots, p = 0.01) and R. fulvescens (1.4-fold increase on treatment plots, p = 0.05).
Ninety-eight mammals (8.7% of captures) were parasitized by a total of 237 ticks, including 142 larvae and 95 nymphs (electronic supplementary material, tables S2 and S3). Nearly all ticks were Amblyomma maculatum (99.6%) with the exception of one nymphal Ixodes scapularis (0.4%). The rodent species most heavily parasitized by ticks were S. hispidus (15.6% of total captures), Chaetodipus hispidus (7.7%), R. fulvescens (7.7%) and B. taylori (1.4%). Our model predicted a threefold increase in the number of on-host ticks caught per unit effort on treatment relative to control plots (p = 0.01). When the number of rodents captured during a sampling night was included in the model with the offset function, the model still predicted an increase in the number of ticks on treatment plots, but this effect was no longer significant (p = 0.45). This suggests that the effect of a greater number of on-host ticks on treatment plots was primarily driven by an increased capture rate of small mammals along treatment transects. To directly investigate tick loads across treatment and control plots, we modelled the number of ticks per host individual in S. hispidus and R. fulvescens, two well-sampled (N = 482 and 195, respectively) and highly parasitized species in our data. Tick loads did not vary significantly across plots in S. hispidus (p = 0.90, figure 2), possibly due to demographic effects that resulted after an explosive increase in the population (see the electronic supplementary material). However, our model predicted a 27-fold increase in the tick loads on R. fulvescens on treatment relative to control plots (p = 0.003; figure 2). Drag sampling of 30 200 m² of vegetation resulted in the collection of 86 ticks, with no difference between treatment and control plots (see the electronic supplementary material).
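The "per unit effort" fold changes reported above come from a log-link count model with an effort offset, in which a treatment coefficient β translates to a multiplicative effect exp(β). A toy illustration with invented numbers:

```python
from math import exp, log

# In a log-link count model with an offset,
#   log E[count] = beta0 + beta1 * treatment + log(effort),
# so exp(beta1) is the fold change in captures per unit effort.
beta1 = log(3.0)       # hypothetical coefficient reproducing a threefold effect
fold_change = exp(beta1)

def expected_count(effort, treatment, beta0=-2.0, b1=beta1):
    """Expected captures for a transect with the given effort (trap nights);
    beta0 is an arbitrary illustrative intercept."""
    return effort * exp(beta0 + b1 * treatment)
```

Because the offset enters with a fixed coefficient of 1, the treatment effect is a rate ratio: doubling the effort doubles the expected count on both plot types without changing the fold difference between them.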
A total of 126 individual tick nymphs and larval pools removed from mammals were tested for infection with Rickettsia species, of which 34 (27.0%) tested positive (electronic supplementary material, table S4). Most rickettsial sequences had high homology to species regarded as endosymbionts (n = 27; electronic supplementary material, table S4). A total of seven A. maculatum samples were infected with the human pathogen R. parkeri (1.4% prevalence in larvae and 6.7% prevalence in nymphs). The proportion of ticks infected with R. parkeri was not different between treatment and control plots (p > 0.05). A total of 83 tick samples were tested for infection with Borrelia species, of which B. lonestari was found in a single A. maculatum nymph on an APCNWR treatment plot (electronic supplementary material, table S4).
Discussion
The invasion of red imported fire ants in the southern United States has had large, negative consequences on ecological communities (reviewed in [13]). We observed decreased small mammal abundances in the presence of S. invicta (figure 1), possibly associated with direct (e.g. predation) and indirect effects (e.g. changes in habitat selection and avoidance behaviour) [7,8]. Furthermore, we observed that increased small mammal populations on S. invicta-suppressed plots were associated with an increased abundance of on-host ticks (figure 2), consistent with host population regulation of tick populations [14]. Our data suggest that S. invicta reduce small mammal populations that, in turn, regulate local tick populations. Thus, these invasive ants may influence tick abundance by affecting the behavioural or physiological mechanisms that control the number of ticks on host individuals, although tick populations may also be influenced directly by S. invicta predation. However, the collection of off-host ticks by drag sampling, which was largely restricted to the adult life stage, was not significantly different between control and treatment plots (see the electronic supplementary material). Notably, our study did not investigate other potentially important hosts that support ticks at the larval and nymph stage (i.e. small ground passerines), or adult-stage ticks (i.e. larger mammals), which may also affect tick abundance. It is possible that lower small mammal abundance could increase the frequency of ticks feeding on alternative hosts, including humans, thus increasing disease risk (e.g. [15]). The cascading effects of S. invicta on native small mammal and tick populations have important potential implications for the transmission of tick-borne pathogens, which represent significant public health concerns.
Small mammals such as S. hispidus, which was heavily parasitized in this study, are reservoirs for numerous tick-borne pathogens including those in the genera Borrelia, Rickettsia, Anaplasma and Babesia, as well as multiple viruses [16]. Increased small mammal and tick abundance in S. invicta-suppressed areas is expected to intensify contact rates between ticks and hosts, facilitating pathogen transmission. Indeed, increasing host abundance is one of the main drivers of tick-borne disease emergence [17]. Higher tick loads on R. fulvescens on treatment plots directly increase vector-host ratios, potentially resulting in increased tick-borne pathogen transmission [18].
The only known human pathogen we detected in ticks removed from mammals was R. parkeri, which was present in 1.4% of larvae and 6.7% of nymphs. Rickettsia parkeri is a spotted fever group Rickettsia long associated with A. maculatum and recently associated with human disease in the United States [19]. Although the apparent prevalence of R. parkeri infection in our study is low compared with recent research in Virginia (27-55% prevalence; [20]), these studies examined adult ticks located on the northern edge of the S. invicta invasion. It is unknown how the current pathogen community in rodent-associated ticks compares with that which occurred in the area prior to S. invicta invasion, and the spatial and temporal scale of the contemporary experimental suppression of S. invicta may not be sufficient to detect any alteration in pathogen infection associated with a reduction in ant numbers.
While S. invicta have pervasive impacts on the ecosystems they invade [16], including depressing populations of endangered taxa [12], land managers need to consider the incidental effects that S. invicta suppression may have on tick-borne disease dynamics in some systems. Our work implies that during its invasion S. invicta may have produced ecosystem cascading effects that could lead to decreased vector, host and pathogen abundance.
Ethics. All procedures were approved by the Texas A&M Animal Care and Use Committee (permit no. 2012-100).
|
v3-fos-license
|
2020-11-26T09:05:02.514Z
|
2020-11-27T00:00:00.000
|
229392038
|
{
"extfieldsofstudy": [
"Political Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bioone.org/journals/mountain-research-and-development/volume-40/issue-1/MRD-JOURNAL-D-19-00071.1/Issues-with-Applying-the-Concept-of-Community-Based-Tourism-in/10.1659/MRD-JOURNAL-D-19-00071.1.pdf",
"pdf_hash": "5ef8d3dd813d33ee614c238f4905356fc73b2a3f",
"pdf_src": "BioOne",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:954",
"s2fieldsofstudy": [
"Political Science"
],
"sha1": "210899a9e4b8c37549111c71ae604bd748c8f32e",
"year": 2020
}
|
pes2o/s2orc
|
Issues with Applying the Concept of Community-Based Tourism in the Caucasus
In Armenia and Georgia, tourism has become part of the development strategies that aim to revitalize those mountain areas experiencing a rural exodus and anemic economic structures. Association agreements between the European Union (EU) and Georgia (2014) and the EU and Armenia (2018) promote community-based tourism (CBT), emphasizing the importance of facilitating cooperation between stakeholders and inclusion of local communities. This study describes the current application of CBT in Georgia and Armenia to elucidate the understanding and perception of the concept by different stakeholders and to provide recommendations for the development of comprehensive CBT practices in the South Caucasus. We used qualitative methods within our research. Our overall analysis includes policy documents and semistructured interviews with tourism and rural development authorities, civil society organizations, and entrepreneurs. Our key findings reveal the various factors that influence the sustainable development of CBT projects, especially in mountainous areas. We recommend integrating tourism and community development practices, elaborating specific guidelines for CBT projects, and filling the knowledge gap of community development facilitators regarding tourism practices. We also suggest focusing more on diversifying community-based products to expand cooperation among service providers.
Introduction
The association agreement (AA) between the European Union (EU) and Georgia (AA 2014) and the EU-Armenia Comprehensive and Enhanced Partnership Agreement (CEPA 2017) promote the "development and promotion of, inter alia, community-based tourism" (AA 2014: 116). They emphasize the engagement of local communities in the process of planning and implementing tourism, including equality in decision-making (Khartishvili et al 2019). However, there is a knowledge gap with respect to what the community-based tourism (CBT) concept means in these countries. Tourism in both countries today differs from the structures common during Soviet times and is going through a transition period because of pressure from international tourists, who demand high-quality, competitive tourism experiences, especially in mountainous areas. At the same time, tourism has become an integral part of the strategy documents of different ministries and institutions; however, intersectoral cooperation is lacking. Several international initiatives are facilitating this transition and supporting links between local service providers and tourism operators (Bakhtadze-Englaender 2019).
This research aims to explore the current understanding and application of the concept of CBT in Georgia and Armenia to suggest recommendations for the development of comprehensive CBT practices in the South Caucasus. The research focuses primarily on the following questions: What is the current understanding of the term CBT by different stakeholders in Georgia and Armenia? Which aspects of CBT motivate its integration into development projects? What are the key factors and constraints of CBT projects implemented in Armenia and Georgia?
CBT: understanding the concept
A community-based approach to tourism has spread since the 1970s (Reid et al 2004) and has become an integral part of rural and tourism development strategies in the global South (Lane and Kastenholz 2015). Murphy's (1985) proposal for community-driven tourism planning is more in tune with rural contexts in both developed and developing countries. In this case, "community" refers to a group of people living in a defined space (Murphy 1985, 2013). Suansri (2003) describes CBT as a type of tourism that is "managed and owned by the community, for the community, with the purpose of enabling visitors to increase their awareness and learn about the community and local ways of life" (Suansri 2003: 14). Denman emphasizes the social dimension in CBT by proposing "community-based ecotourism where the local community has substantial control over, and is involved in, its development and management, and a major proportion of the benefits remain within the community" (2001: 7).
The fundamental notion of CBT is a core aspect of sustainable development, in which community participation in the implementation and decision-making processes creates conditions for developing learning capacity and empowering the community (Goodwin and Santilli 2009; Giampiccoli 2013, 2016; Kontogeorgopoulos et al 2014). For many developing countries, their natural and cultural heritage continues to be a source of significant economic benefits, attracting international and domestic visitors (The Mountain Institute 2000). CBT practices and their participatory development approach are a response to top-down planning (Novelli et al 2017), in which the local community (many of whose residents are service providers) has little decision-making power in tourism planning and management processes (Blackstock 2005).
Although many literature sources provide similar definitions of CBT, a single common definition seems to be missing (Goodwin and Santilli 2009). At the same time, most literature refers to similar beneficial aspects of CBT: multipurpose use of resources, economic development through tourism revenues, diversification of the economy, establishment of additional enterprises, protection of living culture and nature, improved community livelihood, and empowerment of communities (Boonratana 2010; Dolezal 2011; López-Guzmán et al 2011; Nair and Hamzah 2015). Empowered communities gain knowledge and management skills through participation and ownership (Arnstein 1969) that enable them to manage businesses and control their resources (Leksakundilok 2004).
Because of these beneficial aspects, CBT is being widely promoted by international aid programs in developing countries (Richards and Hall 2003; Idziak et al 2015; Nair and Hamzah 2015; Dangi and Jamal 2016; Kavita and Saarinen 2016). However, there is much to learn from past unsuccessful cases of CBT. Several community development projects failed, even though they were provided with funding, because project managers did not take into account local circumstances and did not pay proper attention to the local aspects of the contextual nature of CBT (Blackstock 2005; Stone and Stone 2011). Practitioners followed programs proposed by Western experts that may be successful in other countries without considering the local context (Goodwin and Santilli 2009; Johnson 2010; Nair and Hamzah 2015; Mtapuri and Giampiccoli 2016). There are also cases in which the central management system in the developing world hampered citizens' participation in decision-making processes, which is key to successful CBT development (Leksakundilok 2004).
Despite widespread CBT projects in the developing world, the practice has emerged only recently in post-Soviet countries. CBT development requires a better understanding of the local context, an individual approach, and appropriate planning models that are adapted to local perspectives and social structures.
However, to our knowledge, there is no literature addressing the understanding of CBT and its implementation, including its beneficial aspects and constraints, in the Caucasus region.
Research context and methods
This paper focuses on tourism development projects recently initiated by international organizations in the mountainous areas of Georgia and Armenia. Figure 1 shows one of the popular mountain travel destinations of the South Caucasus: Tabatskuri village in Samtskhe-Javakheti region, Georgia.
Initially, we collected and analyzed policy documents and identified several CBT projects through desk research; we gathered further information about additional projects and stakeholders via the snowball method. In total, 15 CBT projects implemented during 2012-2018 in Armenia and Georgia were examined. The findings are summarized here.
We conducted semistructured interviews (face to face and via videoconferencing) with experts and stakeholders in June and September 2018 and in March 2019. In total, 40 interviews (25 in Georgia and 15 in Armenia) were recorded and transcribed with consent of the interviewees. Among the interviewees were experts and researchers (12), representatives of public institutions (4), nongovernment organizations (NGOs; 14), and private businesses (10). We did not interview community members, because the research aimed to identify the perceptions of experts and project managers. We analyzed the data using qualitative content analysis.
Understanding of CBT by different actors
Respondents use the term CBT in projects in a loose and undefined way. Project managers even noted that the term CBT does not exist in project-related documents and guidelines and that they accepted CBT as a term proposed in the Western world, which had been included in the AAs per the request of the EU (albeit without a definition; AA 2014). A central leading structure of rural, eco-, and/or agritourism in both countries is missing, and the concept of alternative forms of tourism has not yet been discussed and is not reflected in official tourist documents.
The definition of community also differs from one respondent to another. For example, policymakers focus on administrative boundaries of the municipality (self-governing units in the region), whereas representatives of civil society organizations focus on common lives, interests, habits, etc. (Parliament of Georgia 2014). Table 1 provides definitions of community, community-based activities, and CBT proposed by various actors. The respondents' understanding of CBT is often associated with remote mountainous areas. They use CBT interchangeably with rural tourism, in which the main actors are community members. Generally, both Armenian and Georgian interviewees perceive rural tourism as an umbrella term for alternative forms of tourism and activities in rural areas, including remote mountainous areas.
Beneficial aspects of CBT motivating its integration into development projects
We divided favorable aspects perceived by practitioners and experts as motivation to integrate CBT into development programs into four categories: preservation of culture and nature, valorization of traditional products, diversification of rural economy, and community development. Respondents from environmental agencies develop community-based activities using tourism as praxis dedicated to enhancing residents' awareness of and involvement in natural resource management and protecting ecosystems. Better communication with locals also helps them to promote and preserve both tangible and intangible culture in mountainous areas. Farmers' associations and rural tourism development organizations spoke about the role of CBT in the valorization of traditional products, particularly organic, locally produced products. They noted that the involvement of CBT practices stimulates farmers to restore forgotten traditions, because it increases their awareness of and access to the market. Such practices resulted in the emergence of new tourism activities, such as marani (family wine cellar) wine tours in Georgia. Practitioners and state representatives concerned about rural revitalization and diversification of the local economy recognize the role of CBT practices in terms of creating additional jobs and employment opportunities for locals, particularly for the youth in mountainous regions. Community development organizations in both countries advocate CBT as a tool for community mobilization and capacity building, a participatory approach to community and sustainable development. In Table 2, we grouped all aspects mentioned by interviewees from selected NGOs that play a leading role and have extensive experience in both community development and rural tourism practices in Armenia and Georgia.
More perceived benefits of CBT are evident in the purpose/activities column in Table 3, which summarizes 15 projects implemented in Georgia and Armenia between 2015 and 2018, focusing on their objectives, keys to success, and main constraints. Some projects, initiated either by external initiatives or by local strategic players, are still active. The projects, in particular those initiated by external agencies, focus on safeguarding cultural traditions and natural resources and enhancing economic prosperity, including the development of trails, product or service quality standards, and the establishment of associations and local entities. There are cases of local initiatives that focus on concrete activities, such as managing common spaces (recreational and parking places, waste management, water supply, etc.), as well as the development of common products and facilities.
FIGURE 1 Georgia's beautiful mountain scenes offer great potential for community-based tourism: Tabatskuri village in the South Caucasus. (Photo by Lela Khartishvili)

Characteristics, constraints, and key factors of CBT projects perceived by actors

Tourism projects in Georgia and Armenia are implemented primarily by international aid programs. There are few examples of private initiatives: motivated and active locals in villages who joined forces to address common needs and interests. The cases perceived as most successful by the interviewees are characterized by good cooperation between community leaders and national authorities. Examples of such cases are presented in Table 3: the village of Kalavan, Armenia, where accommodation and catering services and other tourism facilities belong to a group of local residents, and the villages Dartlo and Omalo in Tusheti, Georgia, where the Tushi community participates in natural resource management and village restoration programs and has effective cooperation with regional and national authorities. Successful cooperation is the result of a long process of community mobilization and capacity building; in Tusheti's case, this was facilitated by the local administration of the protected areas of Georgia and various environment agencies.
Definitions of the terms community, community based, and community-based tourism, as given by respondents (respondent/organization in parentheses):

Community:
- A settlement in a municipality (self-governing unit in the region) with administrative boundaries. A community consists of 2 or more villages with a common representative. A community fund is a part of the municipal budget. (Government of Georgia)
- A group of people living in a certain geographical area (without administrative borders) sharing similar socioeconomic conditions and culture, interests, problems, and needs. (A coalition of 11 civil society organizations in Georgia)
- A group of people, unions, and alliances. It can be an informal or formal (legal) nonprofit organization with an organizational structure, such as an association or network. (Green Valley, Georgia)
- Community means my family and my neighbors, who share challenges, expectations, beliefs, and benefits. (Tkibuli District Development Fund, Georgia)

Community based:
- Community based means the way people make decisions and benefit at a local level; sustainability refers to results; and community-based activity refers to the process.

Community-based tourism:
- A form of tourism in rural areas in which the main assets are local residents and their offerings based on local resources. (Ilia State University, Georgia)
- Tourism in remote areas that is managed by a local entity (eg, travel agency or tourism information center) and benefits both individual businesses and communities. CBT is driven by active community leaders who contribute to the development of CBT with local and context-specific knowledge. (Utsera development project, GIZ Georgia)
- An activity of a group of people in certain rural areas that have a common vision and mission and share common benefits and interests to improve livelihoods through tourism activities. (Centre for Strategic Research and Development, Georgia)
- Activities of a legal organization (ie, association, network, or alliance with an organizational structure) or a nonformal cooperative-type rural entity that offers competitive agritourism products and supplementary income for rural residents. (Biological Farming Association Elkana, Georgia)
- Activity in rural and remote areas that is more than mere cooperation in the production or marketing of the product. (Tatev development projects, Armenia)
- Human-oriented tourism in remote areas managed by local residents who provide accommodation services in village houses or small hotels and offer traditional local food, wine, and handicrafts that are of interest to tourists. (Tourism development center in Gumri, Armenia)
- An integral part of ecotourism; it focuses on the benefits and partnerships of the local community and ensures the long-term stay of tourists in the villages. (Georgian Ecotourism Association)
- Tourism in less urbanized areas of the country in traditional, natural, and cultural landscapes based on local resources, such as traditional agriculture, and on tangible and intangible cultural heritage. Accommodation is provided in small and medium-size farmhouses and other rural (nonagricultural) homestays. (Rural Tourism Network, Georgia)
Examples in Table 3 show that sharing common business interests, such as the development and organization of a diverse and year-round tourist product, creates solutions to waste management and other issues, motivates local residents to cooperate effectively, and establishes a network of services. Practitioners see collaboration and partnership as important for getting technical support (training and study tours), defending their rights, and learning from one another. Local leaders, as main drivers, play a crucial role in CBT projects. In most cases, they are urban entrepreneurs who have invested in a second home to rent as a guesthouse. Projects driven by women are particularly successful; women tend to have more experience in networking and hospitality. Table 3 also depicts the perception of respondents on the constraints of CBT project development. They spoke openly about activities supported by projects mostly contributing to the development of infrastructure, such as accommodation facilities and trail marking, while not addressing the social values of CBT, such as local residents' perception or readiness to participate in implementation and management processes. In most cases, local residents find it difficult to collaborate and take ownership of projects. They are not aware of their rights, preventing them from becoming more demanding and involved in decision-making processes. D. Dolidze, the project manager at the Biological Farming Association Elkana (Georgia), noted that despite many efforts spent on project implementation, there was not enough time to deal with fundamental problems, such as mistrust among the locals, pessimism, and a lack of motivation and capacity. Such problems are not visible and require better understanding of the context and history of the problem, which could be provided only by local actors.

TABLE 2 Aspects of CBT mentioned by interviewees, grouped by category:
- Preservation of culture and nature: promotion of cultural heritage (protection and restoration of cultural landscapes in mountainous areas).
- Valorization of traditional or locally produced organic products: enhancement of awareness of organic products; restoration of forgotten traditions as cultural identity and unique sales products in the region; accessibility to international and national markets; generation of supplementary income through new activities.
- Revival of rural areas: distinguishing local production by geographical origin; identification of unique, high-quality products; development of small and medium-size enterprises; establishment of service standards (food safety and service quality); establishment of value chains; increasing tourist spending in the region (new attractions for tourists); revitalization of the local economy and opportunities for rural areas.
- Mobilization and empowerment of communities: establishment of strong entities or community groups; capacity building at the local level; enhancement of community participation, ownership, and transparency of project implementation; integration of participatory planning approaches in community development practices; enhancement of cooperation and networking.

A. Ghazanchyan, from Development Principles in Armenia, and N. Vasadze, director of the Centre for Strategic Research and Development of Georgia, spoke about old stereotypes of collective farms (kolkhoz) from the Soviet era, which impeded development processes in the countries of the South Caucasus and still influence them today. They noted that community-based activities require more patience on the project managers' side and slow development of practices, with a focus on community participation and learning capacity development. Figure 2 visualizes CBT in the form of an iceberg, in which the upper part illustrates the problems and constraints of CBT projects in Armenia and Georgia and the lower part shows the hidden elements that cause those problems.
Discussion
The concept of CBT in Armenia and Georgia

CBT is a new concept in the Caucasus, and the respondents appreciate opportunities for professional exchange. They openly discussed issues and problems related to CBT implementation. The respondents' perception and understanding of CBT coincide with internationally accepted characteristics of the term and the fundamental notion of CBT given in the literature. Although there is no single agreed definition (Goodwin and Santilli 2009), the main principles of CBT tend to be consistent, and several practical guidelines are available (Suansri 2003; Mtapuri 2012, 2014; Kontogeorgopoulos et al 2014; Dangi and Jamal 2016). Despite the experience of Armenian and Georgian practitioners in community-based approaches and involvement in environmental, cultural, economic, and political activities, the term CBT does not appear in their project documents, and CBT's guiding principles, such as community-owned businesses, community-controlled activities, and ownership, are understood only vaguely. Because the countries also do not have a clear definition of ecotourism, rural tourism, or agritourism, these types of tourism activities are often grouped together and confused with one another. Although attention has been given to the community approach in all types of alternative tourism development, CBT is still considered a separate form of tourism, rather than a practice that should be embedded in all rural tourism activities.
Identified challenges and constraints
Several practitioners claim that CBT, if planned and organized well, leads to the inclusion and empowerment of local people. Thus, CBT projects need a clear methodology, but there is a gap in knowledge about such methodology in the Caucasus. Community development and environmental agencies are committed to using participatory learning practices and have elaborated community development working schemes. However, they lack knowledge of tourism, its complex nature, and the specific characteristics of tourism products and services. Blind acceptance of the reference to CBT in the EU AA, without a clear understanding of its principles and guidelines and how they apply in the Caucasus context, makes it difficult to implement CBT in practice. Thus, there is a need for better understanding and for specific guidelines for CBT projects in the Caucasus countries. These would help integrate community development workflows with tourism practices. One of the key constraints to community cooperation in the Caucasus is the lack of diversification of tourism activities and high competition. The development of unique year-round activities and partnerships would help to overcome seasonality and miscommunication among locals. Well-organized CBT enables local control and the ability to initiate and manage projects (Leksakundilok 2004).
Today, CBT projects in Armenia and Georgia can benefit from support of external international experts to build capacities on the national and local levels. The empowerment of locals, achievable through active participation and learning capacity development, requires a lot of time for community mobilization, trust building, and planning of long-lasting tourism activities, as was the case in the Tusheti Protected Areas project in Georgia. Social aspects, such as values, opinions, local perception, and behaviors, which are fundamental elements of good cooperation, need better investigation, which could be facilitated by an additional preparatory phase in projects. This will help both practitioners and community members to analyze the context and locals' needs.
Conclusions and recommendations
Our results contribute new findings to the understanding of the concept, main aspects, and factors affecting CBT implementation in Armenia and Georgia, which will help practitioners, policymakers, and experts develop community-driven projects in the South Caucasus. We propose recommendations to fill the knowledge gaps of tourism professionals and community development facilitators in CBT development practices. In particular, we recommend elaborating specific guidelines for the implementation of CBT projects, with a focus on diversifying community-based products and community participation, rather than solely developing tourism infrastructure and facilities. Our study opens the opportunity for future research to investigate issues such as citizens' inclusion in CBT businesses and management practices in mountainous areas in Armenia and Georgia, and to examine whether CBT practices deliver outcomes that benefit sustainable mountain development.
Based on the results of our research, we propose the following definition of CBT for the South Caucasus: CBT in the South Caucasus is a community development practice for nonurban and remote mountain villages. It is a joint effort of a group of people living in a certain geographical area, in which local culture, environment, and hospitality are the main advantages. CBT focuses on the benefits for the local people, capacity building, and empowerment and should constitute a core component of tourism activities in rural mountain regions.
To conclude this study, we suggest the following recommendations for the development of comprehensive CBT practices in the South Caucasus:
- Promotion of CBT as a process generating community development through tourism practices (rather than as a separate form of tourism).
- Preparation of guidelines for the development and implementation of CBT projects in Caucasus countries, including a focus on the following:
  - Integration of community development workflows with tourism practices;
  - Stronger integration of participatory learning approaches into tourism development practices;
  - Providing time for community trust building and capacity building of local stakeholders in tourism management.
- Focus on the development of diverse products and businesses as a major motivation for locals to cooperate and obtain common benefits.
In this paper, we focused on the understanding and implementation of CBT in Armenia and Georgia, primarily addressing CBT in the specific context of the Caucasus mountain region. Our findings are insightful and relevant to other mountain areas, particularly those in other post-Soviet countries. However, we suggest that careful context-specific examination at the local and national levels is necessary to apply our results and recommendations elsewhere.
ACKNOWLEDGMENTS
This study is part of the project ''Transdisciplinarity for Sustainable Tourism Development in the Caucasus Region j CaucaSusT,'' funded by the Austrian Development Agency under the scope of the Austrian Partnership Programme in Higher Education and Research for Development. The project addresses the capacity of universities in Armenia and Georgia to teach and research transdisciplinary study within the focus of sustainable tourism development.
A Feasibility Study on Timber Moisture Monitoring Using Piezoceramic Transducer-Enabled Active Sensing
In recent years, the piezoceramic transducer-enabled active sensing technique has been extensively applied to structural damage detection and health monitoring in civil engineering. Being abundant and renewable, timber has been widely used as a building material in many countries. However, one of the more challenging aspects of using timber in construction is the potential damage caused by moisture. Increased moisture may warp timber components and encourage corrosion of integrated metal members, on top of potentially causing rot and decay. Despite numerous efforts to inspect and monitor the moisture content of timber, there is still no method that provides truly real-time, quantitative, and non-invasive measurement of timber moisture. Thus, the research presented in this paper investigated the feasibility of moisture-content monitoring using an active sensing approach, enabled by a pair of Lead Zirconate Titanate (PZT) transducers bonded on the surface of a timber specimen. In this active sensing scheme, one patch generated a designed stress wave, while the other patch received the signal. While active sensing was ongoing, the moisture content of the timber specimen was gradually increased from 0% to 60% in 10% increments. The material properties of the timber changed correspondingly with the varying moisture content, resulting in a measurable difference in stress wave attenuation among the specimens. The experimental results indicated that the received signal energy and the moisture content of the timber specimens show a parabolic relationship. Finally, the feasibility and reliability of the presented method for monitoring timber moisture content are discussed.
Introduction
Timber, a ubiquitous natural resource, is used widely across many countries as a building material [1,2]. Timber is an inhomogeneous, anisotropic organic material whose mechanical properties are affected by several factors, such as its specific cellular structure and the physical and chemical conditions of the surrounding environment. Moisture content (MC) is one of the key influencing factors. Different moisture contents can cause variations in timber properties, such as strength, stiffness, and physical volume. Moisture may even initiate decay or encourage the growth of fungi, which reduces the mechanical strength of the timber [3]. Therefore, given the vast number of timber-based structures around the globe, developing a reliable method for detecting the moisture content of timber and wooden structures is of great significance.
Traditionally, wood moisture content is estimated by weighing the wood [4]. By comparing the weight of the wet wood to that of its oven-dried condition, the moisture mass can be estimated. Recently, with the rapid development of structural health monitoring methods [5][6][7][8][9][10][11] and damage detection technologies [12,13], the identification of moisture content in timber and wooden structures has attracted much attention. For instance, Brischke et al. [14,15] and Fredriksson et al. [16,17] determined the moisture content of wood by measuring the change of electrical resistance due to the presence of absorbed moisture. Subsequently, Yamamoto et al. [18] used a modified confocal laser scanning microscope (CLSM) system to observe in-situ microcracks on wooden surfaces and measure the change in resistivity, to obtain information about moisture content. Casans et al. [19] proposed an analog circuit for high-resistance measurement of fiber materials to estimate the moisture content of wood, based on measurements of resistance in the fiber material and its relationship with moisture content. Fredriksson et al. [20] and Björngrim et al. [21] applied conductivity measurement methods to evaluate the moisture content of wooden structures. Similarly, radio frequency techniques [22][23][24][25][26], capacitance measurements [27][28][29][30][31][32], fiber optics [33][34][35], and X-ray techniques [36][37][38][39][40] have all been used to assess the moisture content of wood, providing new ideas for nondestructive evaluation of wood moisture content. Recently, Rodriguez-Abad et al. [41] and Reci et al. [42] employed ultrasonic wave signals and the Ground Penetrating Radar (GPR) method, respectively, to estimate the moisture content of wood, and they discovered noticeable changes in measurement data when the wood moisture content changed. However, most of the above-mentioned methods are qualitative in nature. In addition, because they rely on manually driven external excitation, they are not suitable for real-time monitoring.
Recently, methods for structural damage detection [43][44][45][46] and health monitoring [47][48][49][50][51][52][53] have been developed to identify the damage status and health of structures in real time. In particular, the active sensing method is based on the piezoelectric effect of piezoelectric materials, through which health monitoring and damage detection in structures are achieved. Piezoceramic transducers based on Lead Zirconate Titanate (PZT) are widely used in the active sensing method due to their several advantages, such as rapid response [54], energy harvesting capacity [55,56], low cost, ease of implementation [57][58][59][60][61], and the dual effects of both sensing and actuating [62][63][64]. Wang et al. [65] and Roh [66] proposed the active sensing monitoring technique to diagnose damage in composite plates by embedding multiple piezoelectric patches into a composite structure. Subsequently, this technique has been widely applied in damage detection and structural health monitoring in civil and mechanical engineering, such as damage detection in pipeline systems [67][68][69] and timber structures [70], monitoring of bolt looseness [71][72][73][74], damage detection in concrete structures [75][76][77], monitoring of soil water content [78] and the soil freeze-thaw process [79], bond slip detection in composite concrete structures [80,81], and debonding detection in adhesively-bonded structures [82,83]. However, the monitoring of moisture content in timber or wooden structures using a PZT transducer-enabled active sensing approach has not been reported.
In this research, we propose a PZT transducer-enabled active sensing method to monitor timber moisture content and carry out experimental feasibility studies using timber specimens. For each specimen, one PZT patch served as the actuator and another served as the sensor. The actuator generates stress waves that propagate through the structure and are received by the sensor. As the propagation characteristics of the stress wave are sensitive to timber moisture content, the stress wave energy changes correspondingly. A wavelet packet-based energy index was applied to evaluate the moisture content. The results indicated that this method can estimate the moisture content in timber structures quantitatively and accurately.
Active Sensing Method
The active sensing method as enabled by PZTs was used to estimate the moisture content in timber samples. In this method, one PZT patch ("PZT1") generates a stress wave that propagates across the sample and is received by another PZT patch ("PZT2"). Both patches are bonded to the top and bottom surfaces of the sample. Changes in the sample lead to changes in the received signal. Figure 1 depicts the application of the method to a timber sample in both dry and wet conditions. A designed, directional stress wave containing frequency components from 100 Hz to 500 kHz was generated by PZT1 and received by PZT2. Due to the change in the timber moisture content, the material properties of the timber will change and result in a corresponding change in the stress wave attenuation ratio in timber. To quantify the timber moisture content, a wavelet packet-based energy approach was used (Section 2.2).
Wavelet Packet-Based Energy Approach

The wavelet packet analysis approach has several desirable characteristics, such as high time-frequency resolution, and it can effectively decompose and analyze signals across much of the frequency spectrum, providing insight in both the time and frequency domains. The wavelet packet-based energy approach is often used in structural analysis to compute the energy of received signals [84,85]. In this investigation, a wavelet packet-based energy analysis was used to compute the received wave signal energy under different moisture contents in the timber specimens, as follows.

First, the original signal S received by the PZT sensor was decomposed by an n-level wavelet packet decomposition into 2^n signal subsets with different frequency bands. The signal subset X_j, where j was the frequency band (j = 1, 2, ..., 2^n), could be expressed as

X_j = [x_{j,1}, x_{j,2}, ..., x_{j,m}],

where m was the number of data samples in the decomposed signal subset.

Second, the energy of the signal subset, E_{i,j}, could be defined as

E_{i,j} = x_{j,1}^2 + x_{j,2}^2 + ... + x_{j,m}^2,

where i was the ith measurement. The energy vector of the signal at the ith measurement could be given as

E_i = [E_{i,1}, E_{i,2}, ..., E_{i,2^n}].

Finally, based on the definition of the energy vector E_i, the total energy E of the received original signal at the ith measurement could be computed as

E = E_{i,1} + E_{i,2} + ... + E_{i,2^n}.

In this paper, the received wave signal energy under different moisture contents in the timber specimens was computed via the wavelet packet-based energy method.
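The steps above can be sketched in pure Python. This is a minimal illustration, not the paper's implementation: it uses an orthonormal Haar wavelet (the paper does not state which mother wavelet was used) and assumes the signal length is divisible by 2**levels.

```python
import math

def haar_split(x):
    """One orthonormal Haar step: split a signal into its low-pass
    (approximation) and high-pass (detail) halves."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return approx, detail

def wavelet_packet_subsets(signal, levels):
    """n-level wavelet packet decomposition of the received signal S
    into 2**n frequency-band subsets X_1 ... X_{2**n}."""
    subsets = [list(signal)]
    for _ in range(levels):
        nxt = []
        for s in subsets:
            a, d = haar_split(s)
            nxt.extend([a, d])
        subsets = nxt
    return subsets

def packet_energies(subsets):
    """E_j = sum of squared coefficients of subset X_j."""
    return [sum(v * v for v in s) for s in subsets]

def total_energy(signal, levels):
    """Total energy E = sum over j of E_j."""
    return sum(packet_energies(wavelet_packet_subsets(signal, levels)))
```

Because the Haar transform is orthonormal, the total energy E equals the energy of the raw signal; in practice the per-band energies E_j are the useful quantities, since moisture-induced attenuation is frequency dependent.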
Timber Specimen
In this experiment, a total of three timber specimens with the same dimensions (200 mm × 100 mm × 20 mm) were fabricated using the same pine wood from North America. For each test specimen, a pair of PZT disks (10 mm diameter, 0.2 mm thick, purchased from Beijing Ultrasonic) were mounted onto predetermined positions using epoxy (Loctite Heavy Duty 5 min epoxy) (Figure 2). In this research, the type of the PZT sensors used was PZT-5H. The PZT sensor was a sandwiched structure, with two electrode layers and one layer of PZT material.
Moisture content (MC) has different definitions in the literature; for the purposes of this paper, the MC of the timber specimens in this study was defined as

MC = m_water / m_dry × 100% = (m_wet − m_dry) / m_dry × 100%,

where MC was the moisture content of the timber specimen, m_water was the mass of water within the timber specimen, m_dry was the mass of the dry timber specimen, and m_wet was the total mass of the wet timber specimen. In this study, m_dry of the timber specimen was determined by the Chinese national standard (GB/T 1931-2009) [86]. According to the standard protocol, the timber specimens were placed in an oven and baked at a temperature of (103 ± 2) °C for 8 h; then the mass of the timber specimens was weighed and recorded on an electronic scale. Subsequently, the selected specimens were weighed every 2 h. The m_dry of the timber specimens was determined when the difference between the two most recent measurements did not exceed 0.5%.
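The MC definition and the drying-termination rule can be captured in two small helper functions. The function names are my own, introduced for illustration; the 0.5% stopping rule follows the protocol described above.

```python
def moisture_content(m_wet, m_dry):
    """MC (%) = m_water / m_dry * 100 = (m_wet - m_dry) / m_dry * 100."""
    return (m_wet - m_dry) / m_dry * 100.0

def oven_dry_mass(weighings, tol_percent=0.5):
    """Apply a GB/T 1931-2009 style stopping rule to a sequence of
    periodic oven-dry weighings: drying is considered complete when two
    consecutive weighings differ by no more than tol_percent of the
    earlier one. Returns the final accepted mass, or None if the
    series never converges."""
    for prev, curr in zip(weighings, weighings[1:]):
        if abs(prev - curr) / prev * 100.0 <= tol_percent:
            return curr
    return None
```

For example, a specimen weighing 160 g wet against a 100 g dry mass has MC = 60%, the maximum value targeted in this experiment.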
Experimental Setup
Figure 2. Timber specimens (PZT2 is at the same position, on the back side). From left to right: Specimen 1, Specimen 2, Specimen 3.

The experimental setup consisted of a data acquisition system (National Instruments (NI) USB-6361), timber specimens, an electronic scale, and a monitoring terminal (a laptop), as shown in Figure 3. During the test, an electronic scale (accuracy: 0.01 g) was used to measure the mass of the wet timber specimen after it was immersed in clean water until the MC reached the designed value (60%) (Figure 4a). The moisture content of the timber specimen was gradually increased from 0% to 60% in 10% increments. When the MC of the timber specimen reached the designed value, the specimen was placed into a Ziploc sealed plastic bag for more than 12 h (Figure 4b) to ensure the uniformity of moisture in the timber. At every 10% MC increase, a swept sine excitation signal was input to the PZT actuator to transmit a stress wave toward the other end of the specimen. The PZT patches were layered in epoxy for waterproofing. Experimental details of the swept sine wave signals are shown in Table 1.
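A swept sine excitation of this kind can be generated as a linear chirp. The sketch below is illustrative only: the 100 Hz to 500 kHz band comes from the active sensing description above, while the sampling rate, duration, and amplitude are assumed values, since the Table 1 parameters are not reproduced in this excerpt.

```python
import math

def swept_sine(f0, f1, duration, fs, amplitude=1.0):
    """Linear chirp sweeping from f0 to f1 Hz over `duration` seconds,
    sampled at fs Hz. Instantaneous phase:
    phi(t) = 2*pi*(f0*t + (f1 - f0)*t**2 / (2*duration))."""
    n = int(duration * fs)
    samples = []
    for k in range(n):
        t = k / fs
        phase = 2.0 * math.pi * (f0 * t + (f1 - f0) * t * t / (2.0 * duration))
        samples.append(amplitude * math.sin(phase))
    return samples

# Assumed example parameters: 100 Hz to 500 kHz sweep over 1 ms at 2 MS/s.
excitation = swept_sine(100.0, 500000.0, 0.001, 2000000.0)
```

Note that the sampling rate must comfortably exceed twice the highest sweep frequency (here 2 MS/s against 500 kHz) to avoid aliasing in the generated excitation.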
Results and Discussions
The signals received by the PZT sensors, in the time domain, under different moisture-content levels (0%, 10%, 20%, 30%, 40%, 50%, and 60%) are shown in Figure 5. The data indicated that the amplitude of the signal received by the PZT sensor decreased as the moisture content in the timber specimen increased. As water attenuates stress waves, such an inverse correlation might be expected. On the other hand, despite sharing the same overall trend, each specimen still exhibited unique characteristics in the received signal, perhaps due to minor non-uniformities among specimens, including epoxy thickness and electrode welding. In order to analyze the changes in stress wave response, the energy of the received signal was estimated using the wavelet packet-based energy method (Figure 6). The observed trend in the wavelet packet-based energy of the three timber specimens indicated a decrease in signal energy with the increase of the moisture content. Furthermore, the correlation between signal energy and MC suggested a parabolic relationship. However, there was a slight difference in the energy values of the three timber specimens at the same moisture-content levels.
The reason for the slight discrepancies could be the natural inhomogeneity of timber, which significantly affects the stress wave propagation. The experimental results showed that piezoceramic transducers hold great promise for use in the monitoring of moisture content of wooden structures through an active sensing method. On the other hand, certain challenges should be addressed prior to practical applications. First, although the moisture content of timber and wooden structures could be estimated directly from the time domain response and processed by the wavelet packet-based energy index, these methods could not be used to identify moisture-content distribution in timber and wooden structures. Second, the research did not consider certain aspects, such as the temperature, the species of wood, the geometry of the samples, as well as any boundary conditions, defects, and properties of the epoxy. Controlling for these parameters may lead to further insight into the results.
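A wavelet packet-based energy index of this kind can be sketched as follows. This minimal illustration uses the orthonormal Haar wavelet; the mother wavelet and decomposition level actually used are not specified in this excerpt:

```python
import numpy as np

def haar_wpt_energy(signal: np.ndarray, level: int = 3) -> float:
    """Total wavelet-packet energy at a given level using the orthonormal Haar filter.

    Each node is recursively split into approximation and detail halves; the energy
    index is the sum of squared coefficients over all leaf nodes.
    """
    nodes = [np.asarray(signal, dtype=float)]
    for _ in range(level):
        nxt = []
        for x in nodes:
            if len(x) % 2:  # pad odd-length nodes with a zero
                x = np.append(x, 0.0)
            a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation coefficients
            d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
            nxt += [a, d]
        nodes = nxt
    return float(sum(np.sum(n**2) for n in nodes))
```

Because the Haar filter pair is orthonormal, the index equals the signal's total energy when no padding occurs, so a drop in the index directly reflects attenuation of the received signal.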
Conclusions
This paper demonstrated, for the first time, the use of PZT-enabled active sensing techniques to monitor the moisture content of timber specimens in real time. The amplitude of the wave signal received by the PZT sensor decreased with the increase of moisture content in the timber specimens. The energy of the received signals, computed using the wavelet packet-based energy approach, could be employed to quantitatively evaluate the change in moisture content of the timber specimens. Additionally, a parabolic relationship was found between the stress wave signal energy and the moisture content of the timber specimens. The experimental results revealed that the active sensing technique based on PZT transducers was sufficiently effective and sensitive to monitor the moisture content of the timber specimens in real time. Future work in this area could include an investigation of the sensitivity and reliability of the method, and of the feasibility of the proposed method for quantitatively monitoring moisture content at a larger scale and in in-service timber structures and structural elements. Such investigations will also need to consider additional influencing factors, such as the bonding layer, temperature, humidity, boundary conditions, and the microstructure of the wood.
|
v3-fos-license
|
2024-02-15T14:08:17.208Z
|
2024-02-15T00:00:00.000
|
267659145
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "a37a3f5f0d20a6dd6a84492e87eb964682aa86fd",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:956",
"s2fieldsofstudy": [
"Psychology",
"Sociology"
],
"sha1": "85a5ad2f7d1b44ed6159f6822ed75ae9cce6555e",
"year": 2024
}
|
pes2o/s2orc
|
The attitude of contemporary Iranian directors and screenwriters toward patients with mental disorders in comparison with general population
Background Mental disorders are accountable for 16% of global disability-adjusted life years (DALYs). Therefore, accessible, cost-effective interventions are needed to help provide preventive and therapeutic options. As directors and screenwriters can reach a great audience, they can use their platform to either promote stigma or educate the public with the correct definition and conception of mental disorders. Therefore, we aimed to measure the stigmatizing attitude of contemporary Iranian directors and screenwriters toward patients with mental disorders in comparison with a general population group. Methods In this comparative study, we included 72 directors and screenwriters between 18 and 65 years of age with a minimum involvement in at least one movie/television show, and 72 age- and education-matched controls. We collected the demographic data of the participants, used the Persian version of the Level of Contact Report (LCR) to measure their familiarity with mental disorders, and used the Persian versions of the Social Distance Scale (SDS) and Dangerousness Scale (DS) to measure their attitude toward them. Results Compared to the general population group, directors and screenwriters had significantly lower SDS (12.51 ± 3.8 vs. 13.65 ± 3.73) and DS (12.51 ± 3.8 vs. 13.65 ± 3.73) scores (P < 0.001), indicating a more positive attitude toward patients with mental disorders. Familiarity with mental disorders was not significantly different between the groups. Female sex was associated with a more negative attitude among the directors and screenwriters group. Additionally, among the SDS items, 'How would you feel about someone with severe mental disorder marrying your children?' and 'How would you feel about someone with severe mental disorder taking care of your children for a couple of hours?' received the most negative feedback in both groups.
And among the DS items, ‘there should be a law forbidding a former mental patient the right to obtain a hunting license’ received the most negative feedback in both groups. Conclusions Iranian contemporary directors and screenwriters had a more positive attitude toward patients with mental disorders, compared to general population. Due to this relatively positive attitude, this group of artists can potentially contribute to anti-stigma initiatives by offering educational materials and resources, promoting mental health care, and improving access to mental health care.
Kiandokht Kamalinejad 1, Seyed Vahid Shariat 1, Negin Eissazade 2 and Mohammadreza Shalbafan 1*
Background
Mental disorders account for 16% of the global disability-adjusted life years (DALYs). Subsequently, there is a pressing demand for accessible, cost-effective preventive, supportive, and therapeutic strategies [1]. A significant number of patients with mental disorders do not seek mental healthcare due to the discriminatory attitudes of the public towards mental disorders, commonly referred to as stigma. Stigma results in social exclusion, self-stigma, isolation, and, ultimately, reduced quality of life [2,3].
Cinema and television hold significant power over shaping the perceptions and attitudes of the public on various issues, with mental health being no exception. At the same time, the artists working in this field are influenced by societal attitudes, as the reception of their work not only mirrors current perceptions but also adds to the ongoing dialogue around mental health. Therefore, cinema and television can be adopted as tools for either reinforcing stigma or reducing it. Nevertheless, individuals with mental disorders are frequently portrayed as violent, unpredictable, and inferior, perpetuating negative stereotypes. The exaggeration and sensationalization of mental disorders in media contribute to the persistence of misconceptions. Furthermore, portrayals of psychiatrists and psychologists often lack empathy, depicting them as ineffective or, at times, even harmful in their treatment methods [4][5][6][7][8].
Stigma against mental disorders in the Middle East is significantly affected by sociocultural factors, creating barriers to open discussions and impeding access to mental healthcare [9,10]. However, owing to anti-stigma interventions, there has been increasing awareness of mental health issues and a greater willingness to seek treatment in recent years [9]. As filmmakers can reach a great audience, they can use their platform to educate the public with the correct definition and conception of mental disorders. Therefore, we aimed to evaluate the stigmatizing attitude of contemporary Iranian directors and screenwriters toward patients with mental disorders in comparison with a general population group.
Design and participants
This comparative study was conducted between February 2021 and August 2022. We reached out to directors and screenwriters via their official social media accounts, with inclusion criteria specifying an age range of 18-65 years and a minimum involvement in at least one movie or television show. Controls, matched for age and education, were volunteer social media users and university staff, excluding hospital staff.
Tools
In our survey, we collected the demographic information of the participants, along with the Persian versions of the Level of Contact Report (LCR), Social Distance Scale (SDS), and Dangerousness Scale (DS) [11][12][13][14].
The LCR is a tool for assessing familiarity with mental disorders. It consists of twelve situations with varying degrees of intimacy with patients with mental disorders, listed in increasing order of familiarity. Participants are asked to score each item from 1 ('I have never observed a person with mental illness') to 4 ('I have a severe mental illness'). Higher scores indicate more familiarity with mental disorders. If more than one category applied for a respondent, we selected the one with the highest familiarity. The Cronbach's alpha coefficient for the Persian version of this questionnaire was reported as 0.427 for the coercion structure and between 0.75 and 0.91 for the other structures [11,13].
The SDS is used to assess the attitude toward patients with mental disorders. It presents a patient with a severe mental disorder and asks the participants to rate their level of comfort with interacting with them in seven different hypothetical scenarios, on a scale of zero to three. The total score ranges from 0 to 21, with higher scores indicating greater discomfort and a stronger desire for distance. We used the Persian version of this scale, for which the Cronbach's alpha coefficient has been reported as 0.96, the retest coefficient as 0.88, and the content validity as 0.77 [12,14].
The DS also measures the attitude toward patients with mental disorders. It consists of eight questions asking for the reaction of the respondent in particular situations involving a patient with a mental disorder. It is scored on a 7-point Likert scale, from 'completely disagree' to 'completely agree'. The total score ranges from eight to 56, and higher scores indicate higher levels of perceived dangerousness. For the Persian version of this scale, the Cronbach's alpha coefficient has been reported as 0.92, the retest coefficient as 0.89, and the content validity as 0.75 [12,13].
Statistical analysis
Statistical analyses were conducted using the Statistical Package for the Social Sciences (SPSS) software for Windows (version 27, SPSS Inc., Chicago, IL, USA). Descriptive statistics are presented as mean ± standard deviation. Categorical variables were compared using the Chi-square and Fisher's exact tests. Normality was assessed using the Shapiro-Wilk test. Continuous variables were compared using the Pearson and Spearman tests, along with the independent t-test and the Mann-Whitney U test. The correlations between LCR, SDS, and DS scores and the categorical variables were assessed using one-way ANOVA. A p-value of 0.05 or less was considered statistically significant.
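As a rough illustration of the rank-based group comparison used for the questionnaire scores (not the authors' SPSS pipeline), the Mann-Whitney U statistic can be computed from tie-corrected ranks:

```python
import numpy as np

def mann_whitney_u(x, y):
    """U statistic for two independent samples (tie-corrected ranks, no p-value)."""
    data = np.concatenate([x, y])
    order = np.argsort(data, kind="mergesort")
    ranks = np.empty(len(data))
    ranks[order] = np.arange(1, len(data) + 1)
    # Assign the average rank to tied observations
    for v in np.unique(data):
        mask = data == v
        ranks[mask] = ranks[mask].mean()
    r1 = ranks[: len(x)].sum()          # rank sum of the first group
    return r1 - len(x) * (len(x) + 1) / 2
```

The p-value would then be obtained from the U distribution (or its normal approximation), as statistical packages do internally.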
Results
A total of 144 participants with a mean age of 43.04 ± 10.9 years completed our survey (72 directors and screenwriters and 72 controls), of whom 92 (63.9%) were male. The majority of the participants in the directors and screenwriters group were male (N = 59, 81.9%). Demographic data of the participants are presented in Table 1.
LCR
The mean LCR scores were 10.72 ± 8.09 in the directors and screenwriters group and 9.26 ± 6.3 in the control group. LCR scores were not significantly different between the two groups (P = 0.152) (Table 2). Among the participants' demographic data, only a higher number of movies/television shows was associated with higher LCR scores (P = 0.014) (Table 3).
Further analysis with a linear regression model revealed that the mean LCR score was significantly correlated with the number of movies/television shows (P = 0.01), and not with sex (P = 0.66), age (P = 0.39), education level (P = 0.20), movies/television shows about mental disorders (P = 0.56), or personal (P = 0.30) and family (P = 0.91) history of mental disorders.
SDS
The mean SDS scores were 12.51 ± 3.8 and 13.65 ± 3.73 in the directors and screenwriters group and the control group, respectively. Directors and screenwriters had significantly lower SDS scores (P < 0.001) (Table 2).
Higher SDS scores were significantly associated with female sex in the directors and screenwriters group (P = 0.029). Education level, positive personal and family history of mental disorders, and movies/television shows about mental disorders did not have any significant correlations with SDS scores (Table 3). Adjusting the analysis for gender did not lead to different results. Among the directors and screenwriters group, 'How would you feel about someone with severe mental disorder taking care of your children for a couple of hours?' (N = 34, 47.2%) and 'How would you feel about someone with severe mental disorder marrying your children?' (N = 32, 44.4%) received the most negative feedback, respectively.
Among the control group, 'How would you feel about someone with severe mental disorder marrying your children?' (N = 45, 62.5%) and 'How would you feel about someone with severe mental disorder taking care of your children for a couple of hours?' (N = 43, 59.7%) received the most negative feedback, respectively.
Further analysis with a linear regression model revealed that SDS scores were correlated only with sex (P = 0.013), and not with age (P = 0.18), education level (P = 0.96), number of movies/television shows (P = 0.74), movies/television shows about mental disorders (P = 0.78), or personal (P = 0.531) and family (P = 0.82) history of mental disorders.
DS
The mean DS scores were 12.51 ± 3.8 and 13.65 ± 3.73 in the directors and screenwriters group and the controls, respectively. Directors and screenwriters had significantly lower DS scores (P = 0.017) (Table 2). The correlations between the DS scores and the demographic data of the participants were non-significant (Table 3).
The item 'There should be a law forbidding a former mental patient the right to obtain a hunting license' received the most negative feedback in both the directors and screenwriters group (N = 10, 13.9%) and the controls (N = 18, 25.0%).
In addition, a significant association was found between the SDS and DS scores in both the directors and screenwriters group (P < 0.001) and the controls (P < 0.001). LCR scores were significantly associated with both SDS (P = 0.013) and DS (P = 0.024) scores in the directors and screenwriters group.
Further analysis with a linear regression model revealed that DS scores were significantly correlated with personal history of mental disorder (P = 0.003) and the number of movies/television shows (P = 0.031), and not with sex (P = 0.534), age (P = 0.10), education level (P = 0.56), movies/television shows about mental disorders (P = 0.56), or family history of mental disorder (P = 0.95).
Discussion
Compared to the general population, directors and screenwriters had significantly lower SDS and DS scores, indicating a more positive attitude toward patients with mental disorders. Female sex was significantly associated with a more negative attitude in the directors and screenwriters group. Familiarity with mental disorders was not significantly different between the groups. A positive personal history of mental disorders was significantly associated with higher DS scores. Additionally, in both groups, 'How would you feel about someone with severe mental disorder marrying your children?' and 'How would you feel about someone with severe mental disorder taking care of your children for a couple of hours?' received the most negative feedback among the SDS items, and 'There should be a law forbidding a former mental patient the right to obtain a hunting license' received the most negative feedback among the DS items.
We found that directors and screenwriters had a more positive attitude toward patients with mental disorders, which might be due to artists' creativity, flexibility, and exposure to cultural diversity. However, in contrast to previous studies, we did not find any association between familiarity with patients with mental disorders and the stigmatizing attitude toward them [15][16][17][18], which raises questions about the specific nature of familiarity and previous contact with patients with mental disorders. As suggested by previous studies, encouraging direct, regular, and interactive contact with these individuals may allow for a deeper understanding of their experiences and struggles, and challenge preconceived, inaccurate negative stereotypes [4,19,20].
Female sex was previously reported to be associated with a positive attitude toward patients with mental disorders [21]. However, based on our results, female sex was associated with a greater tendency for social distance among directors and screenwriters, although the small number of female participants in this group makes this finding inconclusive.
We did not find any significant association between age and stigmatizing attitude, although previous studies have reported that older age is associated with a more negative attitude towards patients with mental disorders [22]. We also did not find any significant correlation between education level and stigmatizing attitude, although a higher level of education has been reported to be associated with a more positive attitude toward patients with mental disorders [23,24].
A positive personal history of mental disorders was significantly associated with a lower level of perceived dangerousness. Similarly, previous studies have reported that a positive personal history of mental disorders is associated with a more positive attitude [24,25].
We did not find any significant association between a positive family history of mental disorders and stigmatizing attitude. Previous studies, however, have reported that a positive family history of mental disorders is associated with a more negative attitude, while perceived dangerousness is significantly lower in individuals with a positive family history of mental disorders [25,26].
The study of Eissazade et al. (2022), which investigated the attitude of Iranian theater artists toward patients with mental disorders, did not report any associations between demographic data and stigmatizing attitude [27]. Moreover, the mean scores of SDS (12.51 ± 3.8 vs. 10.67 ± 4.92) and DS (33.53 ± 7.03 vs. 28.87 ± 10.291) were higher in our study, although the two studies investigated different groups of artists with different sample sizes [27].
In the directors and screenwriters group, among the SDS items, 'How would you feel about someone with severe mental disorder taking care of your children for a couple of hours?' (similar to the study of Eissazade et al. (2022)) and 'How would you feel about someone with severe mental disorder marrying your children?' received the most negative feedback. Some concerns exist surrounding the ability of patients with severe mental disorders to ensure children's safety and welfare, as these disorders can sometimes involve impaired judgment or erratic behavior. However, most reported child abuse perpetrators are among family members or acquaintances [27,28].
In line with the study of Eissazade et al. (2022), 'There should be a law forbidding a former mental patient the right to obtain a hunting license' received the most negative feedback among the DS items. Mental disorders can sometimes be associated with impaired cognition or impulsivity, raising apprehensions about the ability to responsibly handle firearms. However, no clear link has been found between violent crimes and mental disorders in the absence of substance use. Also, there has been no substantial report of firearm victims killed by patients with mental disorders in Iran over the past years [27,29]. Gaining insight into the factors contributing to violence can help reduce stigma toward these patients.
Anti-stigma programs are needed to raise awareness by offering educational materials and resources, improve access to mental healthcare, and support advocacy efforts to promote mental health policy and legislation. Directors and screenwriters have a significant role in educating the public, as they can reach a large and diverse audience and advocate for policy changes. Therefore, cinema and television can be adopted as powerful tools for reducing the stigma surrounding mental disorders by presenting realistic portrayals and breaking down harmful stereotypes, and can subsequently contribute to promoting empathy and social inclusion for patients with mental disorders and creating safe spaces for these individuals to openly discuss their experiences [30][31][32][33].
Limitations
Our study had limitations, such as a small sample size, a cross-sectional design, potential participation bias, self-reporting bias, and a restricted choice of questionnaires. Given the significance and prevalence of mental disorders, future investigations should encompass larger and more diverse samples across various artistic domains to achieve more comprehensive results.
Conclusions
In conclusion, Iranian contemporary directors and screenwriters had a more positive attitude toward patients with mental disorders compared to the general population. Due to this relatively positive attitude, this group of artists can potentially contribute to anti-stigma initiatives by offering educational materials and resources, promoting mental health care, and improving access to mental health care.
Table 1
Demographic data of the participants (count, %)
Table 2
Comparison of the questionnaires' scores between directors and screenwriters and the control group (Mann-Whitney U-test) LCR: Level of Contact Report, DS: Dangerousness Scale, SDS: Social Distance Scale *Correlation is significant at the 0.05 level
Table 3
The p-values of correlations between the demographic data and LCR, DS, and SDS scores of the participants LCR: Level of Contact Report, DS: Dangerousness Scale, SDS: Social Distance Scale *Correlation is significant at the 0.05 level
|
v3-fos-license
|
2019-05-12T14:23:52.396Z
|
2018-08-01T00:00:00.000
|
149914184
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://durham-repository.worktribe.com/preview/1350008/26095.pdf",
"pdf_hash": "1f6df0b529e9d26308326aeb776abadfb8936940",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:957",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"sha1": "f11f237de94c166e3099fda996aeb2b34f881468",
"year": 2018
}
|
pes2o/s2orc
|
The neuromodulatory effects of sex hormones on functional cerebral asymmetries and cognitive control: An update
Abstract
Several reviews and meta-analyses have been conducted in an attempt to quantify the size of sex differences in FCAs across a range of lateralised cognitive processes (e.g., Hiscock et al., 1994; 1995; 1999; 2001; Vogel et al., 2003; Voyer, 1995). Taken together, these meta-analyses conclude that small but reliable sex differences in FCAs exist at the population level, with males yielding larger asymmetries than women do. Given the small effect sizes reported by these meta-analyses, it follows that only studies with large sample sizes will yield consistent and significant sex differences in FCAs. Hirnstein et al. (2013) supported this notion in a study using behavioural data compiled from 1782 participants (885 females). The data were acquired using a behavioural measure of language lateralisation (i.e., the Bergen dichotic listening task, Hugdahl et al., 2009). The results showed that sex differences in the degree of language lateralisation were dependent on age, and the effect of sex was largest during adolescence (Cohen's d = 0.31).
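For reference, an effect size such as the d = 0.31 quoted above is Cohen's d, the standardised mean difference computed with a pooled standard deviation. A minimal sketch (the sample data in the test below is invented purely for illustration):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: standardised mean difference with pooled SD (ddof=1)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    # Pooled standard deviation weights each group's variance by its degrees of freedom
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return float((a.mean() - b.mean()) / pooled_sd)
```

By conventional benchmarks, d around 0.2 is a small effect, which is why only large samples detect such sex differences reliably.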
A large body of evidence suggests that the distinct hormonal profiles of men and women are key to the generation and maintenance of such sex differences in FCAs and cognition. However, sex hormone levels are not stable in women. Instead, they fluctuate both across the lifespan (e.g., menopause) and across shorter time intervals (e.g., the menstrual cycle). As such, menstrual cycle-related hormone fluctuations might, at least to some extent, underpin the larger degree of inter- and intra-individual variation in women.
Sex hormonal effects on the brain
There are three main groups of gonadal steroid hormones: estrogens, progestins, and androgens. The principal human derivatives are estradiol, progesterone, and testosterone, respectively. Sex steroids can cross the blood-brain barrier, despite being synthesised primarily by the gonads (e.g., testes, ovaries) and adrenal glands (Rupprecht, 2003). However, there is increasing evidence that sex hormones can also be synthesised locally in specific brain regions (e.g., hippocampus, prefrontal cortex; Luine, 2014; Rupprecht, 2003).
Organising and activating effects of sex hormones
Sex hormonal effects on the brain are broadly categorised as either organising or activating effects. Organising effects result from interactions between hormones and genes, and occur primarily during early ontogenesis and puberty. These effects result in permanent sex differences in brain structure. Activating effects are the result of hormonal fluctuations that occur throughout life and, in contrast to organising effects, are transient and reflect dynamic changes in functional brain organisation (for a review, see Cohen-Bendahan et al., 2005).
Organising effects are maximal during certain sensitive periods. The precise sensitive periods for organising effects are not known; however, it is generally accepted that gestation weeks 8-24 are a key period during ontogenesis, although increasing evidence suggests additional sensitive periods exist (Cohen-Bendahan et al., 2005). Activating effects refer to the transient effects of fluctuations in sex hormone levels on brain activity, functional brain organisation, and cognitive function (Luine, 2014). To investigate the activating effects of sex hormones (i.e., estradiol and progesterone), a large number of studies have taken advantage of the endogenous fluctuations in hormone levels that occur in young women during the menstrual cycle (Wisniewski, 1998).
The menstrual cycle is a recurring reproductive cycle, characterised by hormonal fluctuations and physiological changes in the ovaries and uterus. Each cycle begins with menstruation and lasts approximately 28 days. The cycle can be divided into several phases, each characterised by a different hormonal profile. Throughout the menstrual phase (cycle days 1-5), circulating levels of estradiol and progesterone are low. At cycle day 6, estradiol begins to increase, reaching a peak just prior to ovulation; progesterone levels remain low throughout this phase (the follicular phase, cycle days 6-12). Ovulation typically occurs around cycle day 14, following the secretion of luteinising hormone. At this point, estradiol levels drop slightly. Following ovulation, the cells surrounding the egg undergo luteinisation. During this phase, estradiol and progesterone are secreted by the luteinised cells; estradiol levels reach a second, smaller peak while progesterone levels reach their maximum (luteal phase, cycle day 22). Estradiol and progesterone levels both fall rapidly (premenstrual phase, cycle days 24-28) before a new cycle begins. See Hausmann & Bayer (2010) for more details.
Organising effects of sex hormones on lateralisation
Several clinical studies suggest that estrogen can influence FCAs during early development (Hines & Shipley, 1984). In a behavioural study, Hines and Shipley (1984) examined language lateralisation in women exposed to diethylstilbestrol (DES) during gestation. Diethylstilbestrol is a synthetic estrogen that was administered to pregnant women to lower the risk of miscarriage. These authors showed that language lateralisation was significantly increased in the offspring of DES-exposed women, compared to their unexposed sisters.
Although contradictions exist (e.g. Smith & Hines, 2000), this finding suggests that high levels of prenatal estradiol may play a defeminising role in male development (Hausmann & Bayer, 2010).
Critically, a number of potential mechanisms underlying cycle effects on FCAs have been proposed. One suggestion is that sex hormones selectively influence the activity of a specific hemisphere, although there is debate regarding which one. For example, using a visual half-field paradigm, Bibawi et al. (1995) presented normally cycling women with images of chairs, presented in pairs (one to the left and one to the right visual field). Participants were then required to identify which chairs had been presented from a 12-item array. Results showed that during the high-hormone luteal phase, women correctly identified more chairs presented to the right visual field than the left, indicative of a left-hemispheric advantage. However, there was no hemispheric advantage during the low-hormone menstrual phase.
Consequently, the authors concluded that sex hormones selectively activate the left hemisphere. Using two dichotic listening paradigms (language and music, lateralised to the left and right hemispheres, respectively), Sanders and Wenmoth (1998) demonstrated greater language lateralisation during the luteal phase compared to the menstrual phase, but a greater FCA for music processing during the menstrual phase. Thus, these authors concluded that right-hemispheric activity was selectively suppressed during high-hormone cycle phases.
Critically, several studies (e.g. Wadnerkar et al., 2008; Sanders & Wenmoth, 1998; Weekes & Zaidel, 1996; Bibawi et al., 1995; McCourt et al., 1997) did not objectively verify participants' reported cycle phase by directly measuring hormone levels. Such measures are critical for menstrual cycle research, as evidence suggests that a significant proportion of menstrual cycles in young women aged 20-24 years (approx. 40%; Metcalf & Mackenzie, 1980) are non-ovulatory. Thus, these women may not experience the expected fluctuations of estradiol and/or progesterone. Indeed, many studies that did include hormone measures report a high rate of post-hoc participant exclusion on account of their hormone levels not falling into the expected ranges for each cycle phase (e.g., up to 46% of participants were excluded in Gordon et al., 1986). As such, if some participants were tested just before or after the expected peaks in estradiol and progesterone levels, there would be greater variability in the degree of FCAs across participants.
In the early study by Hausmann and Güntürkün (2000) investigating FCAs across the menstrual cycle, direct hormone measurements were included. In this study, normally cycling women completed left-hemispheric (word matching) and right-hemispheric (figure matching, face discrimination) tasks, during both the low-hormone menstrual phase and the high-progesterone luteal phase. Cycle phase was verified by salivary hormone assays. In addition, a sample of men and a sample of post-menopausal women were tested at corresponding time intervals. The authors identified an interaction between cycle phase and FCAs in all tasks, indicative of a general reduction in FCAs during the high-progesterone luteal phase. In contrast, FCAs were stable across time in post-menopausal women and men. A second study replicated these findings with the same tasks (Hausmann et al., 2002). In this study, normally cycling women were tested 15 times at three-day intervals. This allowed for a longitudinal analysis of the relationship between sex hormones and FCAs for longer than one menstrual cycle, as well as a cross-sectional analysis. For the figure-matching task, both analyses indicated a significant relationship between progesterone and reduced FCAs, resulting from an enhanced performance of the sub-dominant left hemisphere.
As these studies demonstrated that FCAs for both left- and right-hemispheric tasks were reduced when levels of progesterone were high, it was suggested that sex hormones were not selectively influencing the activity of a particular hemisphere. Instead, Hausmann and Güntürkün (2000c) proposed that sex hormones affect FCAs by modulating interhemispheric interaction, a physiological process that affects both hemispheres. This mechanism is based on the assumption that the lateralisation of a cognitive process, to either hemisphere, arises due to inhibition of the non-dominant hemisphere by the dominant hemisphere (i.e. interhemispheric inhibition) via the corpus callosum (Chiarello & Maxfield, 1996; Cook, 1984). Specifically, Hausmann and Güntürkün (2000) argued that although cortico-cortical transmission is primarily excitatory, interhemispheric inhibition occurs because the lasting effect of callosal activity is inhibition of the contralateral hemisphere (Innocenti, 1980). This inhibition occurs because the majority of callosal projections terminate on pyramidal neurons, which subsequently activate GABAergic interneurons (Toyama & Matsunami, 1976).
Moreover, it has been shown that callosal projections may also terminate directly on GABAergic interneurons (Conti & Manzoni, 1994). Both of these mechanisms would result in widespread inhibition of homotopic regions of the non-dominant hemisphere by the dominant hemisphere (Kawaguchi, 1992).
In the hypothesis of progesterone-mediated hemispheric decoupling, Hausmann and Güntürkün (2000c) proposed that high levels of progesterone during the luteal phase lead to a reduction of interhemispheric inhibition. This leads, in turn, to a functional decoupling of the two hemispheres and a reduction in lateralisation. Specifically, it was proposed that progesterone and its metabolites, such as allopregnanolone, can reduce interhemispheric inhibition by suppressing the excitatory neural response to glutamate and increasing the inhibitory neural response to GABA (Hausmann & Güntürkün, 2000; Hausmann et al., 2002; Hausmann & Bayer, 2010). This view was supported by a number of physiological studies demonstrating that progesterone suppresses the excitatory response of neurons to glutamate, while also increasing the inhibitory response of neurons to GABA (Smith et al., 1987a, 1987b). Further studies have shown that similar effects are obtained with combined estradiol and progesterone administration (Smith et al., 1987b). Thus, it was proposed that high levels of progesterone in the luteal phase might lead to a transient reduction in interhemispheric inhibition, and in turn, a reduction in lateralisation (Hausmann & Güntürkün, 2000; Hausmann et al., 2002).
[Figure caption: Although cortico-cortical transmission is mainly excitatory, the main and longer-lasting effect in the contralateral hemisphere appears to be inhibitory, probably because most excitatory (glutamatergic) callosal fibers terminate on pyramidal neurons, which then activate inhibitory (GABAergic) interneurons; these activated inhibitory interneurons could then induce a widespread inhibition in homotopic regions of the contralateral hemisphere. According to Hausmann and Güntürkün (2000; Hausmann et al., 2002), progesterone reduces cortico-cortical transmission during the midluteal phase by suppressing the excitatory responses of neurons to glutamate and by enhancing their inhibitory responses to GABA. The combined effect would result in functional hemispheric decoupling, and thus a temporary reduction in functional asymmetry (right figure). Adapted from Hausmann and Bayer (2010) and reprinted with permission from MIT Press.]
A similar mechanism aiming to explain cycle-related effects of sex hormones on FCAs was previously proposed by Bianki and Filippova (1996, 2000), who were the first to investigate the link between changes in FCAs in motor activity in the open field and stages of the estrous cycle in rats. Based on their findings, however, Bianki and Filippova (1996, 2000) suggested that higher estrogen levels in the proestrus phase, during which ovarian follicles in rats mature, increased interhemispheric inhibition from the left hemisphere to the right hemisphere. This interhemispheric inhibition was significantly reduced when estrogen levels dropped.
In line with Bianki and Filippova (1996, 2000), Weis et al. (2008) also reported evidence for the notion that estradiol levels can influence interhemispheric inhibition and FCAs. In this study, normally cycling women underwent functional magnetic resonance imaging (fMRI) while completing a word-matching task, identical to that used by Hausmann and Güntürkün (2000c). All women were tested during both the low-hormone menstrual phase and the high-estradiol follicular phase. A control group of males was tested at corresponding time intervals.
Functional connectivity was assessed using psychophysiological interaction (PPI) analysis to determine the inhibitory influence of the dominant left hemisphere on the non-dominant right hemisphere. Behaviourally, a significant left-hemispheric advantage was found in the menstrual phase, which was reduced in the follicular phase. In addition, PPI analysis revealed that the inhibitory influence of the left hemisphere over the right hemisphere fluctuated according to estradiol levels. Specifically, high levels of estradiol during the follicular phase were associated with reduced interhemispheric inhibition, and in turn, reduced FCAs. In contrast, no change in FCAs or interhemispheric inhibition was found in the male controls. Moreover, no significant difference was found when activity in the left inferior frontal gyrus (a region critically involved in the word-matching task) was directly compared between the menstrual and follicular phases.
In contrast to Hausmann and Güntürkün (2000), who found that high progesterone levels (in combination with high estradiol levels) can reduce interhemispheric inhibition, Weis et al. (2008) showed that it was estradiol alone, not progesterone, which was associated with a reduction in interhemispheric inhibition. This finding is in line with other studies investigating the sex hormonal modulation of interhemispheric processes. For example, Hausmann et al. (2006) employed transcranial magnetic stimulation (TMS) to investigate the effect of sex hormones on transcallosal transfer. In this study, TMS was applied to the primary motor cortex to elicit suppression of tonic voluntary muscle activity on both the contralateral and ipsilateral side. The ipsilateral suppression (the ipsilateral silent period) is thought to be cortically mediated via excitatory transcallosal fibers that terminate on inhibitory interneurons (Hausmann et al., 2006). Therefore, the ipsilateral silent period can be used as an indirect measure of the connectivity between homotopic regions of the left and right motor cortices. Hausmann et al. (2006) showed that the ipsilateral silent period fluctuates across the menstrual cycle, with the largest suppression/inhibition during the luteal phase (high levels of estradiol and progesterone) compared to the follicular phase (high levels of estradiol only). Hausmann et al. (2013) reported additional evidence in support of this view.
In this study, electroencephalography (EEG) was used to directly measure interhemispheric connectivity, using visual-evoked potentials to estimate interhemispheric transfer time (IHTT). The results showed that right-to-left IHTT was longer during the luteal phase as compared to the menstrual phase. Additional analyses revealed that this effect was related to high levels of estradiol, as opposed to progesterone, suggesting that different interhemispheric processes are modulated by different sex hormones (Hausmann et al., 2013).
In recent years, research into sex hormonal effects in the brain has expanded to investigate other aspects of functional connectivity, such as intrahemispheric connectivity (Weis et al., 2011). For example, using a figure-matching task, Weis et al. (2011) investigated whether reductions in the FCA for this task were similar to those seen for the verbal task (Weis et al., 2008). Behaviourally, women demonstrated reduced FCAs during the luteal phase. In addition, fMRI data revealed cycle-phase-related changes in functional connectivity within the task-dominant hemisphere. Specifically, activation of right-hemispheric networks was reduced during the luteal phase, as compared to both the menstrual and the follicular phase.
In addition, PPI analysis revealed cycle-related changes in functional connectivity, such that stronger functional connectivity between a right temporal seed region (fusiform gyrus) and heterotopic regions of the left hemisphere (e.g. precuneus, postcentral gyrus, inferior parietal lobule) was found during the luteal phase. Consequently, the authors suggested that sex hormones modulate not only interhemispheric inhibition between homotopic areas (Weis et al., 2008) but can also influence intrahemispheric integration and interhemispheric connectivity between heterotopic brain regions (Weis et al., 2011). For a recent review on the potential neuroendocrine mechanisms underlying the effects of estradiol and progesterone on FCAs, see Hausmann (2017).
Top-down versus bottom-up effects
Several previous studies (Cowell et al., 2011; Hampson, 1990a, 1990b; Sanders & Wenmoth, 1998; Wadnerkar et al., 2008) suggested that left-hemispheric language dominance was increased during cycle phases associated with high levels of estradiol. However, this was not a consistent finding, with other studies suggesting that high levels of estradiol led to a reduction in FCAs (Alexander et al., 2002; Altemus et al., 1989; Mead & Hampson, 1996; Sanders & Wenmoth, 1998). Still further research showed that estradiol influenced language lateralisation in particular when a high level of cognitive (top-down) control was required (Hjelmervik et al., 2012). Hjelmervik et al. (2012) investigated hormonal effects on the top-down processes related to language lateralisation using a forced-attention dichotic listening paradigm. This task is a robust tool used to provide a behavioural measure of language lateralisation. It involves the simultaneous presentation of two different auditory stimuli, separately to the left and right ear. Participants are required to report which one of the stimuli they heard most clearly. Typically, in healthy right-handed adults, this task reveals a bias favouring the right ear, indicative of left-hemispheric language lateralisation. However, the so-called right-ear advantage (REA) can be modulated by instructing participants to selectively attend to and report from either the left or the right ear specifically. In contrast to the non-forced condition, the forced-left condition requires a high level of cognitive control, as participants must override their bias towards the dominant right ear. In this study, normally cycling women completed the dichotic listening task three times across the cycle. A cycle-related effect was found only in the forced-left condition. In this condition, women demonstrated a greater left-ear advantage during the high-estradiol follicular phase, as compared to both the menstrual and luteal phases. The authors interpreted this finding as evidence of an active role of estradiol in cognitive control, and not in language lateralisation per se (Hjelmervik et al., 2012).
A recent study (Hodgetts et al., 2015) aimed to disentangle the effects of estradiol on top-down and bottom-up aspects of FCAs. It was predicted that if gonadal steroid hormones primarily affected the bottom-up processes related to language lateralisation (e.g. interhemispheric inhibition), estradiol and/or progesterone would reduce the dichotic listening bias across all attention conditions. However, if high levels of gonadal hormones selectively affect top-down cognitive control, estradiol-related changes were expected only in the forced-left dichotic listening condition (Hjelmervik et al., 2012).
This study showed that language lateralisation was reduced when estradiol and progesterone levels were high, across all attention conditions. This suggests that the effect of sex hormones on cognitive control is marginal, and that the neuromodulatory properties of sex hormones on FCAs are due primarily to their influence over bottom-up processes. Given that the same effect was present for both right-ear and left-ear advantages, it is unlikely that the general reduction in language lateralisation is due to sex hormones selectively affecting one hemisphere. Instead, it was argued that this effect was due to the influence of sex hormones over interhemispheric inhibition. In line with this hypothesis, it was argued that the reduced REA in the non-forced and forced-right conditions in participants with a relatively high level of estradiol might be due to reduced inhibition of the subdominant right hemisphere by the dominant left hemisphere. This would facilitate right-hemispheric processing of stimuli presented to the left ear. Moreover, during the forced-left condition, it was argued that the top-down control process, required to successfully divert attention away from the dominant right ear in favour of the left ear, results in a shift of activation from the left hemisphere to the right hemisphere. As such, if estradiol exerts a neuromodulatory effect on interhemispheric inhibition, the reduced LEA in the forced-left condition may be due to a reduction of inhibition from the right hemisphere over the left hemisphere. This would lead to an increase in left-hemispheric processing of stimuli presented to the right ear, and a reduction in the LEA.
Given that interhemispheric inhibition is a universal physiological process that underpins lateralisation, it follows that high estradiol levels should also reduce biases for tasks lateralised to the right hemisphere. Therefore, a follow-up study (Hodgetts et al., 2017) was designed to investigate this notion, using two differently lateralised dichotic listening tasks.
In this study, a linguistic and an emotional prosody dichotic listening task were used, designed to assess left- and right-hemispheric FCAs, respectively. It was hypothesised that if estradiol influenced the bottom-up processes of lateralisation (i.e., interhemispheric inhibition), reduced dichotic listening biases should be found in both tasks, regardless of the attention condition, when estradiol levels are high. In contrast, if estradiol influenced cognitive control, increased biases should be found in the forced-left and forced-right conditions of the linguistic and emotional dichotic listening tasks, respectively. However, no modulatory effect of sex hormones on language lateralisation was found. Moreover, in the emotional prosody task, high estradiol levels were marginally associated with a reduction in FCAs in the forced-right condition, suggesting that estradiol had a small effect on the top-down aspect of FCAs in this task.
Critically, in this study, the degree of lateralisation yielded by both tasks was considerably larger than in either Hodgetts et al. (2015) or Hjelmervik et al. (2012). It was argued that this was due to a strong, stimulus-driven (bottom-up) effect, which resulted in the tasks being less cognitively demanding. As such, it was suggested that the stimulus-driven effect was so strong that any sex hormonal effects on FCAs were masked. It was also noted that the only dichotic listening condition to show any estradiol-related trends was the cognitive control (forced-right) condition of the emotional task. Moreover, this condition yielded the smallest bias and the lowest target detection rate of all the forced-attention conditions. Thus, it was argued that dichotic listening tasks associated with high target detection rates and large laterality biases are less susceptible to the modulatory effects of sex hormones, possibly because particularly strong stimulus-driven effects mask any sex hormone-related effect on FCAs.
These studies, in conjunction with Hjelmervik et al. (2012), suggest that estradiol (and potentially progesterone) is, in principle, capable of modulating both top-down and bottom-up processes related to FCAs. Specifically, while these studies suggest that sex hormones possess neuromodulatory properties that can influence FCAs, these effects may be reduced when task demands are low.
Sex hormones and resting state connectivity
In light of evidence showing that sex hormones can affect task-related functional connectivity, recent research has begun to investigate sex hormonal effects on functional connectivity in the brain at rest. In the absence of a specific cognitive task, the brain exhibits a pattern of low-frequency oscillations in the BOLD signal (approx. 0.01-0.1 Hz; Damoiseaux et al., 2006). Biswal et al. (1995) were the first to demonstrate the resting state fMRI (rs-fMRI) approach, revealing temporally correlated time courses of low-frequency oscillations within the sensorimotor cortex. Subsequent research using rs-fMRI has identified a number of networks that are spatially comparable to task-related activations (Damoiseaux et al., 2006), such as executive function (Laird et al., 2011; Seeley et al., 2007), language (Laird et al., 2011; Tie et al., 2014), and memory (Laird et al., 2011; Vincent et al., 2006) resting state networks.
Given that functional connectivity is susceptible to endogenous hormone fluctuations across the menstrual cycle (Weis et al., 2008; Weis et al., 2011), sex hormones may also be capable of influencing resting state connectivity. This is a critical issue, as it would suggest that sex hormone effects on task performance and functional brain organisation may not be due to an effect on task-related brain activity, but may instead reflect their effect on task-independent intrinsic connectivity.
Five recent studies have investigated the effect of cycle-related hormone fluctuations on resting state network connectivity, but with inconsistent results (Arélin et al., 2015; De Bondt et al., 2015; Hjelmervik et al., 2014; Weis et al., 2017; Petersen et al., 2014). Using a between-subjects design, Petersen et al. (2014) investigated resting state functional connectivity under different hormonal conditions, across the menstrual cycle in normally cycling women, and in oral contraceptive pill users. Normally cycling women were tested either in the menstrual phase (termed early follicular by the authors) or the luteal phase. This study reported increased functional connectivity between the right anterior cingulate cortex (ACC) and the executive control network, and reduced functional connectivity between the left angular gyrus and the anterior default mode network (DMN), during the luteal as compared to the menstrual phase. Using a repeated-measures design, Weis et al. (2017) investigated functional connectivity of the auditory and default mode networks in normally cycling women and a control sample of men. This study demonstrated increased connectivity between regions of left prefrontal cortex (PFC) and the DMN in women during the menstrual phase, compared to the follicular and luteal phases. In contrast, DMN connectivity was stable in men. In the auditory network, no effect of cycle phase/session was found, and no interaction between cycle phase/session and sex was found. In contrast, Hjelmervik et al. (2014) investigated four fronto-parietal cognitive control networks, using a repeated-measures design. No cycle-related effect on functional connectivity was found. Similarly, De Bondt et al. (2015) did not find any effect of sex hormones in fronto-parietal networks (termed 'executive control networks' by the authors). However, analysis of the DMN revealed an increase in functional connectivity in the luteal phase, relative to the follicular phase, between the cuneus and the network. Arélin et al. (2015) conducted 32 resting state scans in a single subject across four menstrual cycles. Results showed that high progesterone levels were associated with increased connectivity of the dorsolateral PFC and the sensorimotor cortex to the resting state network. Further analysis revealed that high progesterone levels were associated with higher functional connectivity between the right dorsolateral PFC, bilateral sensorimotor cortex, and the hippocampus, as well as between the left dorsolateral PFC and bilateral hippocampi.
The finding of a menstrual cycle effect on DMN connectivity (e.g. Petersen et al., 2014; Weis et al., 2017) presents a number of implications for both task-based and rs-fMRI. Specifically, these studies suggest that DMN connectivity is not stable in women, and is confounded by sex hormonal fluctuations across the menstrual cycle. However, this cannot be generalised to other cognitive networks. This finding also has implications for behavioural studies. Critically, it suggests that the effect of gonadal steroid hormones on behaviours underpinned by regions of the DMN, such as the medial PFC, orbitofrontal cortex, and cingulate cortex, might at least to some extent depend on resting-state connectivity, as opposed to task-related activity.
Estrogen and the prefrontal cortex
As mentioned earlier, there is empirical evidence suggesting that estradiol is capable of modulating both top-down and bottom-up processes related to FCAs. In line with this assumption, a number of physiological studies, in both humans and non-human primates, have shown that estrogen receptors are present in the PFC. For example, Wang et al. (2010) demonstrated that estrogen receptor alpha (ERα) was present in excitatory synapses of the dorsolateral PFC of female rhesus monkeys. In a human post-mortem study, Bixo et al. (1995) demonstrated that the concentration of estradiol in the frontal cortex is higher than in other cortical regions, such as the temporal cortex and cingulate cortex.
Estrogen and cognitive control processes
As mentioned earlier in this review, using the dichotic listening paradigm, Hjelmervik et al. (2012) demonstrated that cognitive control improved during the high-estradiol follicular phase, and that this was directly associated with an increase in estradiol levels compared to the menstrual phase. This finding is in line with a number of menstrual cycle studies that have similarly demonstrated the enhancing influence of estrogen on cognitive control (Colzato et al., 2012; Hampson, 1990a, 1990b; Hampson & Morley, 2013; Hjelmervik et al., 2012; Rosenberg & Park, 2002). Rosenberg and Park (2002) demonstrated that verbal working memory ability fluctuated across the menstrual cycle, with participants' best performance occurring during high-estradiol phases. However, this study did not include any direct hormone measurements (see also Craig et al., 2007). Hampson and Morley (2013) investigated estradiol effects on working memory by comparing performance between groups of women with naturally differing levels of estradiol. In this study, women were classified as high or low in estradiol via a post-hoc median split based on saliva assays. It was found that women with relatively high levels of estradiol committed significantly fewer errors in a spatial working memory task. Similarly, Colzato et al. (2012) demonstrated that inhibitory control varies across the menstrual cycle, with women in the high-estradiol follicular phase exhibiting better inhibitory control relative to the menstrual and luteal phases (but see Colzato et al., 2010).
Due to the link between aging, the menopause, and cognitive decline (Henderson, 2008), the majority of research suggesting that estradiol can improve executive functioning has been conducted in post-menopausal women receiving hormone therapy. Indeed, it has been suggested that it is particularly frontally mediated functions that show decline following menopause (Fuh et al., 2006; Thilers et al., 2010). Following a systematic review, Maki and Sundermann (2009) concluded that estradiol therapy has beneficial effects on several cognitive control processes, including working memory (Duff & Hampson, 2000), problem solving (Erickson et al., 2007), and source memory (Wegesin & Stern, 2007).
In addition to these behavioural studies, evidence for an enhancing effect of estrogen therapy on PFC functioning and cognitive control processes has been found in neuroimaging studies. Joffe et al. (2006) conducted a randomised, double-blind, placebo-controlled fMRI study of estradiol effects on prefrontal cognitive functioning in 52 peri-/post-menopausal women, using a battery of executive function tasks. Behaviourally, the enhancing effect of estradiol was restricted to an improvement in response inhibition only. However, the fMRI data revealed increased activation in several frontal cortical regions associated with cognitive control, including the inferior frontal gyrus, dorsolateral PFC, and posterior parietal regions. The authors therefore concluded that estradiol therapy increases the "functional capacity" of the PFC, via the recruitment of additional frontal regions, leading to improvements in executive functioning.
Critically, the enhancing effect of estradiol on executive functions is inconsistent (Colzato & Hommel, 2014). For example, a high level of estradiol during the follicular phase has been associated with impaired response inhibition in a stop-signal reaction time task, as compared to both the luteal and the menstrual phases (Colzato et al., 2010). This is in direct contrast to the enhancing effect of estradiol on response inhibition demonstrated by Colzato et al. (2012). Additional studies have linked high levels of estradiol to detriments in working memory (Gasbarri et al., 2008) and increased susceptibility to interference in the Stroop task (Hatta & Nagaya, 2009). Moreover, a recent study reported no effect of cycle-related estradiol fluctuations on a range of tasks requiring cognitive control, including working memory and verbal learning (Mihalj et al., 2014).
In light of these inconsistencies, it has recently been suggested that the effect of estradiol on cognition is dependent on individual differences in baseline dopaminergic function (Colzato & Hommel, 2014). Dopaminergic effects on cognitive tasks follow an "inverted-U" function, such that performance improves with medium dopamine levels and deteriorates with high or low levels. Given that estradiol is associated with increased dopamine turnover rates, Colzato and Hommel (2014) speculated that participants with low baseline dopamine levels, and thus poor cognitive performance, might benefit from high levels of estradiol and concurrent increases in dopamine. In contrast, those with high baseline dopamine levels, and good cognitive performance, would experience detrimental effects with high estradiol levels, as dopamine increases beyond an optimal point. A study by Jacobs and D'Esposito (2011) supports this notion. In this study, the authors demonstrated an interactive effect of baseline dopamine levels (indexed by variation associated with the catechol-O-methyltransferase Val158Met genotype) and menstrual cycle phase on a working memory task. Specifically, women with low baseline dopamine exhibited poor working memory during the menstrual phase (low estradiol), but improved during the follicular phase (high estradiol).
In contrast, participants with high baseline dopamine demonstrated the opposite pattern, good performance when estradiol was low, and impairments when estradiol was high.Taken together, these findings suggest that while estradiol can have an enhancing effect on executive functioning and cognitive control abilities, this effect is subject to individual differences in neurophysiology.
Clinical implications
The findings in this research area also have some tentative clinical implications, particularly for the notion that estradiol acts as an antipsychotic in schizophrenia (Häfner, 2005; Riecher-Rössler et al., 1994; Kulkarni et al., 2013; for a review: Riecher-Rössler & Kulkarni, 2010). As schizophrenia and psychosis have consistently been associated with extensive deficits in executive function (Aas et al., 2014; Gold, 2004; Johnson-Selfridge & Zalewski, 2001; Roiser et al., 2013; Weisbrod et al., 2000), it is possible that estradiol could be beneficial for the cognitive symptoms of the illness. Interestingly, a recent study suggested that newly diagnosed patients with schizophrenia had higher levels of progesterone compared to healthy controls (Bicikova et al., 2013). However, it is not yet clear whether progesterone alone can modulate schizophrenic symptomatology, or whether this is due to an interactive effect with estradiol (Ko et al., 2006).
In a similar manner, the present literature on sex hormone effects on the brain and cognition also has tentative implications for the notion of stratified medicine, which features prominently in, for example, the current National Health Service (NHS) strategy in the United Kingdom and in the prevention and health care system in Germany. This notion has evolved from earlier conceptualisations of personalised or tailored treatment programmes, referred to as gender-specific medicine (Legato, 2017) or sex-sensitive psychiatry (e.g. Riecher-Rössler & Rohde, 2001). Both concepts refer to the practice of medical care in such a way that the planning, delivery, and method of treatment are tailored to take sex differences into account. For example, with reference to schizophrenia and psychosis, this might involve tailoring medication regimens to account for changes in symptom severity across the menstrual cycle. Indeed, a number of early clinical studies have demonstrated greater improvement in psychotic symptoms in patients given an adjunctive estradiol treatment (e.g. Kulkarni et al., 1996; 2001; Riecher-Rössler & Kulkarni, 2010).
However, given that a number of studies have not found an "enhancing" effect of estradiol on cognition, it could be argued that adjunctive hormonal therapies may not be suitable for all patients. As such, the addition of such therapies may require a "trial and error" approach to treatment planning. This would involve altering medication regimens until the best combination is found for a particular patient. Such an approach has the obvious benefit of potentially leading to a successful outcome for that patient. However, it could also prove costly, both economically (time spent by clinicians working with individual cases, the cost of multiple medications being administered in the short term) and personally for the patient.
Critically, hormone therapies are characterised by a range of side effects, and it is currently not clear how they might interact with standard antipsychotic treatments. As such, there is a strong need for further clinical research to determine how such treatments might be implemented. Schizophrenia has also been consistently associated with atypical FCAs (Løberg et al., 1999; Løberg et al., 2004; Oertel et al., 2010; Oertel-Knochel & Linden, 2011). Given that sex hormones are capable of influencing a number of processes related to FCAs, a possible question for future research concerns how gonadal hormones modulate FCAs in atypically lateralised patients. Schizophrenia is also characterised by aberrant functional connectivity in the DMN (Broyd et al., 2009; Buckner, Andrews-Hanna, & Schacter, 2008; Garrity et al., 2007; Whitfield-Gabrieli et al., 2008). As such, the findings of a menstrual cycle effect on the DMN (Petersen et al., 2014; Weis et al., 2017) raise questions regarding how this network is affected by sex hormones in patients, and the potential behavioural consequences of this.
Conclusions and future directions
In conclusion, despite growth in the number of studies in the area since 2000, several debates concerning the mechanisms by which gonadal steroid hormones affect brain lateralisation and cognition remain ongoing. At present, it may be argued that there are three potential mechanisms by which hormones influence FCAs: (1) sex hormones affect one hemisphere exclusively, (2) sex hormones affect the activity of either hemisphere, or (3) sex hormones affect the processes related to interhemispheric interaction. Indeed, it is unlikely that only one mechanism can fully account for the range of hormonal effects reported throughout the literature (Hausmann, 2017). For example, recent studies from our group suggest that estradiol effects on FCAs are task-dependent, and that tasks which yield a large degree of asymmetry due to low task demands might be less likely to demonstrate sex hormonal effects (Hodgetts et al., 2015; 2017). Taken together, these findings suggest that differences in the degree of asymmetry produced by an individual task might account for some of the inconsistencies in the literature regarding the effect of sex hormones on FCAs. Similarly, the results across a number of behavioural studies, from our group and others, have important implications for our understanding of estradiol-related effects on executive function and cognitive control. Perhaps most importantly, they suggest that the notion of estradiol enhancing cognitive control (Colzato et al., 2012; Hampson, 1990a, 1990b; Hampson & Morley, 2013; Hjelmervik et al., 2012; Rosenberg & Park, 2002) is too simplistic. Rather, the current literature indicates that the enhancing effect of estradiol is sensitive to individual differences in neurophysiology and to task demands.
Hausmann and Güntürkün (2000) originally proposed a model that could potentially explain how sex hormones modulate FCAs. The model assumed that high progesterone levels reduce the synaptic efficacy of cortico-cortical transmission and thereby reduce FCAs. At the time, however, no empirical data directly testing these model assumptions existed. Using various neuroscientific approaches, we have since been able to collect numerous data that partly support the original model assumptions and, moreover, identify estradiol as an important neuromodulator. With this review, we aim to provide an update on this fascinating research area and to briefly discuss its potential clinical relevance.
Figure 1.
Figure 1. Schematic illustration of the original hypothesis of progesterone-modulated interhemispheric inhibition. The left figure illustrates the process of interhemispheric inhibition. Although cortico-cortical transmission is mainly excitatory, the main and longer-lasting effect in the contralateral hemisphere appears to be inhibitory, probably because most excitatory (glutamatergic) callosal fibers terminate on pyramidal neurons, which then activate inhibitory (GABAergic) interneurons. These activated inhibitory interneurons could then induce a widespread inhibition in homotopic regions of the contralateral hemisphere. According to Hausmann and Güntürkün (2000; Hausmann et al., 2002), progesterone reduces cortico-cortical transmission during the midluteal phase by suppressing the excitatory responses of neurons to glutamate and by enhancing their inhibitory responses to GABA. The combined effect would result in functional hemispheric decoupling, and thus a temporary reduction in functional asymmetry (right figure). Adapted from Hausmann and Bayer (2010) and reprinted with permission from MIT Press.
Evaluating the Quality of Social Work Supervision in UK Children’s Services: Comparing Self-Report and Independent Observations
Understanding how different forms of supervision support good social work practice and improve outcomes for people who use services is nearly impossible without reliable and valid evaluative measures. Yet the question of how best to evaluate the quality of supervision in different contexts is a complicated and as-yet-unsolved challenge. In this study, we observed 12 social work supervisors in a simulated supervision session offering support and guidance to an actor playing the part of an inexperienced social worker facing a casework-related crisis. A team of researchers analyzed these sessions using a customized skills-based coding framework. In addition, 19 social workers completed a questionnaire about their supervision experiences as provided by the same 12 supervisors. According to the coding framework, the supervisors demonstrated relatively modest skill levels, and we found low correlations among different skills. In contrast, according to the questionnaire data, supervisors had relatively high skill levels, and we found high correlations among different skills. The findings imply that although self-report remains the simplest way to evaluate supervision quality, other approaches are possible and may provide a different perspective. However, developing a reliable independent measure of supervision quality remains a noteworthy challenge.
Supervision is widely considered an essential form of support for good social work practice. In the United Kingdom (UK), as elsewhere, social workers employed by the state in children's services are required to have regular supervision (Tsui 2005). Reasonably good evidence supports the claim that good supervision helps improve worker-related outcomes, including self-efficacy (Lee et al. 2011), confidence (Cearley 2004), stress levels (Boyas and Wind 2010), and retention (Kadushin and Harkness 2014; Mor Barak et al. 2009). However, little evidence clearly shows that supervision makes a difference for workers' practice quality or client-related outcomes. Authors of a recent systematic review in the UK concluded, "The evidence base for supervision is weak" (Carpenter et al. 2013, p. 1851). In addition, researchers have debated the definition of good supervision. Researchers have emphasized different parts or elements of the process, although most have agreed that a good supervisor-supervisee relationship is foundational (Voicu 2017; Noble and Irwin 2009). Beyond this, researchers may place more or less emphasis on the importance of different skills, for example, problem solving (Lambeth Council n.d., p. 27), collaboration (Falender and Shafranske 2013), and reflection (Clayton 2017).
The primary method used to evaluate the quality of supervision in many studies is some variety of self-report. As commonly defined in methods textbooks, self-report does not necessarily mean participants provide personal information directly; self-report includes any method involving asking participants about their feelings, views, attitudes, beliefs, and experiences (Lavrakas 2008). Wheeler and Barkham (2014) selected a "core battery" of six self-report measures to evaluate supervision components (pp. 367-385). These self-report measures included experience, focus, and ability (Orlinsky et al. 2005); the supervisory alliance (Efstation et al. 1990); and identification of supervision issues (Olk and Friedlander 1992). Davys et al. (2017) found that in daily practice, self-report is the most common way of evaluating supervision, most often through "informal discussions" between supervisors and supervisees, although some respondents reported using rating scales, questionnaires, and checklists as well (p. 114). The benefits of self-report include easy administration, low cost, face validity, and easy replication (Jupp 2006). Yet researchers have noted the well-known limitations of self-report, particularly in relation to evaluation (Fan et al. 2006; Huizinga and Elliott 1986).
First, people find it hard to assess themselves or others accurately, reliably, and consistently in relation to specific characteristics or competencies (Gurbanov 2016). As criminal defense lawyers and prosecutors have long known, "eyewitness testimony is unreliable [because] human perception is sloppy and uneven" (Buckhout 1974, p. 171). Thus, unless researchers take steps to correct biases, self-report must be interpreted with caution. Second, although it is possible to use rating scales to obtain responses more nuanced than simple yes or no answers, respondents are liable to interpret these scales differently. For example, one respondent might rate his or her satisfaction with supervision at 6 out of 10, and another respondent with a similar experience might rate his or her satisfaction at 8 (Austin et al. 1998). Third, respondents may interpret not only the scale but also the questions or statements in different ways. This may not be problematic for concrete questions (e.g., "How often do you have supervision?") but may be troublesome for abstract concepts (e.g., "To what extent does your supervisor promote reflection and analysis?"). Fourth, the use of self-report methods to evaluate quality and outcomes is further complicated when the same respondents are asked to provide more than one type of data, as often happens in supervision and worker-outcomes studies. Mor Barak et al. (2009) summarized the problem as follows: A [key] limitation stems from the potential for monomethod bias…, which is a typical risk when study respondents are the source of information for both the predictor and the outcome variables… Because most studies are potentially subject to mono-method bias, there may be some inflation in the results. (p. 26) One possible solution is to use self-report methods with different respondents to assess different variables.
For example, Harkness (1995) asked supervisees to rate the quality of their supervision, and clients were asked to rate various aspects of engagement and outcomes. Using this approach, Harkness found that the supervision skills of empathy and problem solving were associated with client ratings of contentment and goal attainment, respectively (pp. 69-70).
Another option is to develop evaluative measures that do not rely on self-report or that can be combined with self-report to increase validity and reliability. Bogo and McKnight (2006) called for the development of reliable supervision measures to facilitate comparison among different approaches in different contexts. Some researchers have sought to apply such measures to simulations of social work practice (Bogo et al. 2011; Maxwell et al. 2016). In addition, observations of real practice have been used as part of social work qualifying programs (Domakin and Forrester 2017) and in evaluative research studies (Bostock et al. 2017). Observational methods, whether simulated or real, are likely to be more valuable when researchers use a reliable and valid coding framework. Such frameworks enable more meaningful evaluations, which in turn foster robust examinations of the relationships between supervision and other variables (e.g., family satisfaction with the service).
Methods
In this paper, we report the results of a compare-and-contrast study using self-report data from social workers who rated the quality of their supervision. In addition, we used observations of how the same supervisors behaved in a simulated supervision session with a professional actor (Wilkins and Jones 2018). The methodological stance is one of theory-oriented evaluation (Weiss 1998). We began by providing in-depth descriptions of practice and then developed theories to explain how different elements linked and produced outcomes (White 2009). In this paper, we evaluate what happened in one particular form of supervision, or at least in a simulation of it, with the intention that the findings will inform further studies of how supervision shapes practice and outcomes. The overall method is one of participatory action research, with a focus not simply on describing what happens but also on helping supervisors and social workers reflect on their current supervision practices and outcomes.
Context
In the UK, government organizations known as local authorities (of which there are 152 in England) typically provide statutory social work services for children. Each authority employs a number of social workers and supervisors to provide services for children and their families. The primary aim of these services is to protect children from significant harm resulting from abuse and neglect. Services include family support and other interventions. Unlike in some countries, social workers in UK local authorities receive supervision most often, if not exclusively, from their line managers. Typically, social workers are organized into relatively small supervision groups of approximately six people, supervised by the line managers. For the purposes of this paper, we were interested in the managers' skills in their role as supervisors; thus, to avoid international confusion, apart from this paragraph, we refer to these participants as supervisors rather than managers (although in practice they fulfill both roles).
Over the past 2 years, along with many colleagues, we have been engaged in a large-scale participatory action-research project in one statutory children's service in central London. At times, the project has involved participants from other local authorities as well. The project as a whole was funded by the UK Department for Education (Luckock et al. 2017). The primary aims of the project were to improve the quality of social work practice, to improve the experiences of children and families, and to reduce the need for children to enter public care. As part of this project, social workers were routinely observed in practice and supervision and were offered follow-up mentoring and feedback sessions (Wilkins and Whittaker 2017). We coded observations of practice using an established skills framework (Whittaker et al. 2016). We are currently developing a similar framework for supervision. This framework, coproduced with both supervisors and supervisees, is evolving; later in the paper, we describe the version used here in detail.
As part of this iterative process, we became curious about the relationship between what social workers thought about their own supervision quality and what we thought about their supervision quality after observing it. This led us to develop the following research questions:
1. Using a customized coding framework, can we reliably assess the skills used by UK children's services supervisors in simulated supervision sessions?
2. Using a self-report questionnaire based on the same framework, how do social workers assess the quality of their own supervision?
3. How do results from the two methods compare?
Study Design
This study was undertaken in one outer London local authority with 12 supervisors and 19 social workers. In mid-2016, 12 supervisors took part in a simulated supervision session with an actor trained to play the part of an inexperienced social worker. In addition, we asked 54 social workers to complete a questionnaire about their experiences of being supervised by the group of supervisors who took part in the simulation. A group of five researchers with varying experiences and expertise in the field of child and family social work coded the audio recordings of the simulated sessions (Fig. 1).
Ethics
The study received approval from the second author's university ethics committee as part of the wider action-research project outlined previously. It was agreed that individual sessions would remain confidential unless serious concerns about malpractice emerged. This did not occur. Supervisors expressed their consent to take part in the simulations, and similarly, social workers consented in relation to the questionnaire.
Data Collection
We used two methods of data collection: a simulated session of supervision, audio-recorded by the lead author, and a questionnaire completed by social workers. The simulation involved a professional actor playing the part of an inexperienced, newly qualified social worker asking for help in relation to a recent and concerning incident. The scenario was as follows: The worker, whose regular supervisor was on leave, received a telephone call from Elizabeth, mother to 5-month-old Rees, with whom she had been working for approximately 3 months. Elizabeth reported to the worker that her ex-partner, Daniel, came to the family home last night and, under the influence of alcohol, attempted to take Rees away. When Elizabeth tried to stop him, he assaulted her. A neighbor called the police, who arrested Daniel but considered Rees safe enough to remain at home. The worker had arranged to visit Elizabeth but was unsure what to say and what other actions she might need to complete. The actor was advised to present as anxious and to express concern that Elizabeth could be concealing the true nature of her relationship with Daniel. Not all of these details were given to the supervisors beforehand; rather, the supervisors were briefed only that the social worker, sounding anxious, had asked to meet with them and that they had only 20 min before they needed to go to another meeting. Thus, the simulation was limited to a maximum of 20 min, although supervisors could have ended it sooner if desired.
Fig. 1 Outline of the three-stage data collection process: (1) supervisor completes a simulated session of supervision; (2) questionnaire data collected from at least one social worker in relation to each supervisor; (3) simulated session independently analyzed by a minimum of two researchers.
The lead author observed and audio-recorded each session for analysis by researchers blinded to the questionnaire data. Social workers completed the questionnaire by hand on paper, separate from the administration of the simulation. The questionnaire consisted of two parts. In the first part, we asked social workers to report the frequency and length of a typical supervision session. The second part consisted of nine statements related to their supervisors' supervision quality and the problems the supervision addressed.
Respondents were asked to rate each statement on a 5-point Likert scale from most agree to least agree. Nineteen questionnaires were completed out of a possible 54, a response rate of 35%. This low response rate is typical. Baruch and Holton (2008) found from an analysis of 1607 studies that for questionnaires conducted with people who were members of organizations, the average response rate was 35.7% with a standard deviation of 18.8 (p. 1150). We collected at least one questionnaire for each supervisor, although for three of the supervisors, we collected two questionnaires each, and for two supervisors, we collected three questionnaires each.
Data Analysis
At least two researchers coded each audio recording of simulated supervision using a customized supervision skills framework. Two researchers coded five of the simulations, three researchers coded four, and four researchers coded the remaining three. (Different numbers of researchers coded different numbers of recordings based on practical availability, rather than by intentional design; however, the lead author and at least one other researcher coded all the recordings.) The framework used in this study had three dimensions: clarity about risk and need, child focus, and support for practice. We used a 3-point Likert scale (1, 3, and 5) to give one score per dimension per recording. Inter-rater reliability was moderate; any disagreements were resolved through discussion among the relevant researchers (Table 1).
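Pairwise inter-rater agreement of the kind summarized in Table 1 is typically quantified with a chance-corrected statistic such as Cohen's kappa. The sketch below is a minimal illustration of that computation; the two raters' scores are invented for demonstration and are not the study's data.

```python
# Minimal sketch of Cohen's kappa for two coders rating 12 sessions
# on a 3-point scale (1, 3, 5). Ratings below are illustrative only.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters' marginal distributions were independent.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(rater_a) | set(rater_b)
    )
    return (observed - expected) / (1 - expected)

ratings_a = [1, 3, 3, 5, 1, 3, 5, 5, 3, 1, 3, 5]
ratings_b = [1, 3, 5, 5, 1, 3, 3, 5, 3, 3, 3, 5]
print(round(cohens_kappa(ratings_a, ratings_b), 2))
```

Values around 0.4 to 0.6 are conventionally read as moderate agreement, which is the kind of result a team would then resolve through discussion, as described above.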
We developed the coding framework used in this study as part of a larger action-research project. The framework, coproduced by researchers, supervisors, and supervisees, has been applied so far to more than 130 audio recordings of real supervision episodes from two different local authorities across a variety of different social work teams and services (including children in need/child protection, children in care, children leaving care, and fostering). The process has been iterative-we have revised and adapted the framework in relation to events in the sessions, based on feedback from supervisors and supervisees. Thus, the development of this framework is ongoing.
Researchers have proposed many definitions of good social work supervision, although we are not aware of any published measures that relate specifically to UK social work, other than Bostock et al.'s (2017) coding framework designed specifically for systemic group supervision. Many people would describe the characteristics of good supervision and good supervisors with some or all of the following phrases: communicating freely and reciprocally, encouraging the expression of authentic feeling, offering empathic understanding and acceptance, providing a problem-solving orientation based on consensus and cooperation, and promoting a positive working alliance (Kadushin 1992).
In addition, in UK children's services, ideas about good supervision may include considerations of children's welfare (Reece 1996) as well as risk and need assessments (Skills for Care & Children's Workforce Development Council 2007). Further, good supervision should support the quality of social work practice (Goulder 2013) without excluding good case management (Howe and Gray 2013, pp. 11-13).
These ideas have proven prescient for our work with the inner London authority. Through workshops and individual interviews with supervisors, we sought to develop a shared understanding of what constitutes good supervision in this particular context. The elements we agreed on through this process reflect many of those drawn from the literature ( Table 2).
The three dimensions in Table 2 formed the basis for our framework and questionnaire. We do not suggest these are the only important elements of good supervision; however, we agreed on these core dimensions through the process outlined previously. Again, in consultation with supervisors and social workers, we developed the core dimensions into a 3-point scale, with different descriptors for high-, moderate-, and low-quality examples.
The statements used in the questionnaire were designed to reflect the three dimensions of the coding framework. We used an average score from each set of three statements as an overall score for each of the three dimensions (Table 3).
Clarity about risk and need:
- My supervision helps me think more clearly about risk
- My supervision helps me think about immediate risk and longer-term risk
- My supervision helps me think about how risks relate to the service user
Child focus:
- My supervision helps me think about how problems in the family might be affecting the child
- My supervision helps me think about things from the child's perspective
- My supervision helps me focus on what is best for the child
Support for practice:
- My supervision helps me understand why I need to do things (not just what I need to do)
- My supervision helps me understand how I need to do things (not just what I need to do)
- My supervision helps ensure the quality of my practice
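The averaging-and-correlating procedure can be sketched as follows. The item ordering, the respondent data, and the helper names are assumptions for demonstration only; the block simply shows how nine Likert items collapse into three dimension scores, which can then be correlated across respondents as in the correlation analyses reported below.

```python
# Sketch: collapse nine 5-point Likert items (three per dimension)
# into dimension scores, then correlate dimensions across respondents.
# All responses below are invented for illustration.
from statistics import mean

def dimension_scores(responses):
    """responses: nine Likert ratings; items 0-2 = clarity about risk
    and need, 3-5 = child focus, 6-8 = support for practice."""
    return {
        "clarity": mean(responses[0:3]),
        "child_focus": mean(responses[3:6]),
        "support": mean(responses[6:9]),
    }

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

respondents = [
    [5, 4, 5, 4, 4, 5, 5, 5, 4],
    [3, 3, 4, 3, 4, 3, 3, 3, 4],
    [4, 5, 4, 5, 4, 4, 4, 5, 5],
]
scores = [dimension_scores(r) for r in respondents]
clarity = [s["clarity"] for s in scores]
support = [s["support"] for s in scores]
print(round(pearson(clarity, support), 2))
```

With invented data like this, respondents who rate one dimension highly tend to rate the others highly too, which is one mechanism by which self-report dimensions can end up strongly intercorrelated.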
Using a Customized Coding Framework, Can We Reliably Assess the Skills Used by UK Children's Services Supervisors in Simulated Sessions of Supervision?
As a team of five researchers, we analyzed 12 simulated sessions of supervision in terms of three dimensions: clarity about risk and need, child focus, and support for practice. Across the 12 sessions, we achieved a moderate degree of inter-rater reliability (Table 1).
Using a Self-Report Questionnaire Based on the Same Framework, How Do Social Workers Assess the Quality of Their Supervision Generally?
Considering the same dimensions collected with the social work questionnaire, average scores were relatively high (Fig. 2). Based on the questionnaire data, we found strong correlations among the three dimensions (Table 4).
How do Results from the Two Methods Compare?
The scores provided by social workers indicate a skillful group of supervisors in relation to the three dimensions measured. However, the scores given by the research team indicate a less skillful group of supervisors (Fig. 2). Correlations among the dimensions as coded by researchers were weak. In addition, we found weak correlations between the scores given by researchers and the questionnaire data provided by social workers (Table 5).
Strengths and Limitations
The primary strength of the study is that it included direct observations of supervisors rather than relying solely on self-report. This remains a relatively rare approach in the study of supervision, albeit not a unique one. The primary limitation is that this study was based on a single simulated observation with an inexperienced worker (played by an actor) whom the supervisors did not normally supervise. The impact of these features on the supervisors' behavior is difficult to quantify; however, the nature of the scenario might invite an action-oriented response rather than a reflective response. Nevertheless, social workers and supervisors are often encouraged to "reflect-in-action" as well as "reflect-on-action", although some find this difficult at times (Ferguson 2018). In addition, many of the supervisors ended their sessions before the 20-min deadline. This could indicate that we timed the length of the simulation well, giving supervisors sufficient time to discuss everything to their satisfaction. Alternatively, it might indicate a level of discomfort and a desire for the experience to end sooner rather than later. Other limitations include a lack of information about the characteristics of either the supervisors or the social workers. Further, we lacked knowledge about the questionnaire respondents, in particular, whether they differed significantly from social workers within the same supervision group who did not respond.
Finally, we acknowledge that the supervision framework we used is still in development. Although a similar version of the framework has been applied to actual supervision discussions, and the findings are reported elsewhere, researchers might consider this state of ongoing development a limitation. They might reasonably ask, why not wait until the framework is fully developed before publishing about it? We take a different view. We believe that publishing in relation to ideas still in development is a strength because publishing fosters criticism and feedback and thus potentiates future improvement. In any case, this paper is not principally about the framework; rather, it is about the difference between insider and outsider perspectives on supervision quality.
Discussion
In discussing these results, the first thing to note is that this study forms part of an ongoing series of linked-but-separate projects focused on the nature and quality of social work supervision in UK children's services. As such, we are not seeking to draw definitive conclusions from this study as a stand-alone project. Rather, we are interested in what it tells us about our approach to evaluating the quality of supervision and the implications of this approach more generally. However, before considering these general implications, we address three questions in relation to these results. First, why did our coding scores differ so much from the questionnaire results? Second, why did we find weak correlations among the coding scores across the individual dimensions? Third, why did we find strong correlations among the questionnaire scores across the individual dimensions?
Why Did Our Coding Scores Differ So Much from the Questionnaire Results?
First, we discuss why our coding scores differed so much from the questionnaire results. One strong possible explanation is that we drew conclusions from a one-off observation, while the social workers provided feedback based on a far wider and richer range of experiences. As a research team, we listened to one simulated session of supervision, with no other knowledge about each supervisor. Domakin and Forrester (2017) found that making reliable judgments about practice skill required analyzing several observations, rather than just one. When completing the questionnaires, the social workers would have known far more about their supervisors and had experience of them in a much wider range of circumstances. Thus, the questionnaire results may not have reflected what happened in the observations (which, in any event, the social workers were not party to) but instead represented many weeks, months, or even years of experience.
In addition to collecting data in the outer London authority, we provided a workshop for the supervisors in this study, which took place after the simulations. The purpose of the workshop was to provide feedback to the supervisors regarding their collective performance and the anonymized questionnaire results. As part of the consent process before they completed the questionnaire, we informed the social workers they would be receiving feedback. However, although the questionnaires were anonymous, the social workers may have been reluctant to give negative feedback. Their supervision groups were relatively small (and the response rate modest), and it might have been easy for supervisors to decipher who completed each questionnaire. Individual social workers might have been wary of potentially disrupting their supervision relationships by giving challenging feedback and hence may have felt some pressure, consciously or unconsciously, to give positive feedback. This bias would not have influenced the research team, because we gave our feedback from a position of protected anonymity, and we had no ongoing relationship with the supervisors to protect.
This possibility resonates with the finding that the reality of an ongoing relationship between student social worker and practice assessor can complicate questions of objectivity and lead to inflated ratings of performance (Domakin and Forrester 2017). Finch and Taylor (2013) made similar arguments, suggesting that evaluating students is an emotional experience for many practice assessors. They concluded a strong possibility exists that at least some supervisors pass some social work students despite serious failings. Similarly, the social workers in our study might have felt an emotional response to being asked to rate the quality of their supervisors and responded accordingly in their feedback (see also Bogo et al. 2007).
Given the nature of the simulation, it would be understandable if the supervisors simply found it difficult and behaved differently than they might have in their daily work. The fact that they were encountering a stranger while being audio-recorded and assessed by an unknown team of researchers may have negatively affected their performances. They may have found themselves unable to adopt their usual approaches or demonstrate their typical skills. Perhaps some of the supervisors did not take the simulation seriously, given the pressures of their jobs. Participants would likely have believed it was more important to perform to the best of their abilities when real children and families were involved, whereas in the simulation, it did not really matter one way or the other. This attitude might account for the number of sessions that supervisors ended sooner than required, perhaps because they felt uncomfortable in the simulation or because they simply wanted to get back to their actual work. Thus, the coding scores given by the research team might be an accurate reflection of how the supervisors performed in the simulation, and the social workers' questionnaire results might be an accurate reflection of how they performed more generally.
Why Did We Find Weak Correlations Among the Coding Scores Across the Individual Dimensions?
Next, we consider why we found weak correlations among the coding scores across the individual dimensions of supervision skill. One explanation is that some supervisors may excel in some skill areas but not in others. For example, one supervisor might be skilled at assessing risk but less skilled in terms of focusing on the child. Another supervisor might be very good at focusing on the child but less able to support the quality of social workers' practices. Such a conclusion would be analogous to believing that social workers can excel in some areas (e.g., engaging teenagers) while struggling in others (e.g., report writing).
Another possible explanation is that the simulation emphasized the demonstration of certain skills over others. For example, the social worker actor presented as anxious and unsure what to do next. This may have motivated a "support for practice" response among participants. In fact, we found that supervisors scored on average higher for this dimension than for the others.
It may also be the case that as a research team, we were more experienced at coding some dimensions, compared to others. This could have led us to give higher scores unintentionally for those skills. In a recent study, researchers found that the more experienced the assessors, the higher the scores they tended to give (O'Connor and Cheema 2018). This finding could show that rather than the supervisors behaving differently in relation to the different skills, the research team was simply more experienced at coding them.
Why Did We Find Strong Correlations Among the Questionnaire Scores Across the Individual Dimensions?
Next, we question why we found strong correlations among the questionnaire scores across the individual dimensions. One consideration is that as researchers, we were rating the supervisors' skills based on what we heard in the audio recordings, whereas the social workers would have been able to evaluate the relationship in a much more holistic way. For example, social workers with positive supervision relationships may have been consciously or unconsciously reluctant to give negative feedback about their supervisors, whereas social workers with more negative supervision relationships may have been similarly reluctant to give positive feedback. It is conceivable that the research team rated the supervisors' specific skills (as described by our coding framework), whereas the social workers rated the overall relationship. If so, this would be an example of the "halo effect," a form of cognitive bias whereby positive overall impressions influence the evaluation of more specific characteristics (Nisbett and Wilson 1977).
Finally, the coding framework we used is still in development and may not be a valid measure of supervisor skill (in addition to the limitations of using a simulation). This consideration may imply that our analysis of the audio recordings was not a meaningful indicator of supervision skill. In contrast, the social workers were likely to know their supervisors well; even if the questionnaire was not able to differentiate specific supervision skills, we find it hard to argue that the self-report feedback from the workers was not a valid reflection of how they felt about their supervisors and how they experienced their own supervision.
What are the Implications of These Results for Wider Efforts to Evaluate the Quality of Social Work Supervision in UK Children's Services?
Given these results, what are the implications for efforts to develop a reliable and valid framework for assessing the quality of supervision in the context of UK children's services? First, albeit based on a small and nonrepresentative sample, the findings from our self-report questionnaire indicate that social workers tended to rate their supervisors either very highly or very poorly; there was no apparent middle ground. This finding implies that self-report, by itself, may lack nuance and sophistication, making it difficult to identify differences in quality and experience. Second, our results indicate (if nothing else) that observing what happens during supervision may provide a different, rather than a complementary, perspective to self-report. This finding may be an unhelpful complication, or it may be a useful point of triangulation.
A third implication, and one that came up often in our discussions of the audio recordings, is that the local authority in question (and, we suspect, many others besides) did not have an accepted and shared vision of the nature and purpose of good supervision. Although researchers have done much in the UK in recent years to develop and implement frameworks for social work practice, less effort has focused on what makes for great supervision. Yet without such agreement, it is challenging to produce a coding framework that both makes sense to those being observed and that can be readily applied to different scenarios and contexts. After all, if supervisors do not consider supervision primarily a mechanism to support practice (as in clinical supervision), how helpful is it to code their supervision as if they did? Further, to what extent is it possible to develop a detailed coding framework based on examples that may or may not incorporate such attempts? Our findings show that when evaluating supervision, we need clarity about what we are trying to measure and why, as well as a shared understanding of what good supervision is or should be within a given context. Hence, in developing ways of measuring supervision, we need to remain mindful of the need to ensure the frameworks we use can be implemented reliably and that they measure elements that matter for practitioners and supervisors and ultimately for children and families.
Conclusion
It seems likely (and desirable) that supervisors seek regular feedback from their supervisees in relation to the quality and helpfulness of the supervision they provide. In practice, much of this feedback is collected in relatively ad hoc fashion through informal discussions (Davys et al. 2017). Finding useful ways to collect feedback that is more structured would be highly advantageous. In the UK, many social work service leaders organize an annual "health check" survey of employees (Wolverhampton People Directorate Adult Social Care 2017; Local Government Association 2014), seeking feedback on a range of issues, including job satisfaction, employment conditions, and the quality of supervision support. Our findings show that although asking supervisees about their experiences of supervision remains a valid approach, it is important to acknowledge that different forms of evaluation will produce different results. Thus, leaders should think about ways of triangulating these data rather than relying on one method alone. Although self-report feedback may offer useful insights into how supervisees experience supervision, it can also mask the complexity and nuance of actual supervision case discussion outcomes, supervisors' supervision skills, and where applicable, fidelity to a particular model of supervision.
Funding This study was funded by Department for Education (UK).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
David Wilkins is a Senior Lecturer in Social Work at Cardiff University.
David has previously worked as a Research Fellow at the University of Bedfordshire, Academic Tutor for the Frontline Programme and Lecturer for Anglia Ruskin University. David's research focuses on the relationship between supervision, practice skills and outcomes.
Munira Khan is a Research Assistant at Tilda Goldberg Centre for Social Work and Social Care, University of Bedfordshire. Munira has previously worked as a Guest Lecturer and Associate Research Officer in India. Munira is interested in research with children and young people, disability research, creative methodologies and participatory research.
Lorna Stabler is a Research Associate in Children's Social Care at CASCADE, Cardiff University. Lorna previously worked at the Tilda Goldberg Centre for Social Work and Social Care. Lorna's research interests include the relationship between social work intervention and family outcomes, coproduction with young people and measuring risk, harm and outcomes for children.
Fiona Newlands is a Research Assistant at the Tilda Goldberg Centre for Social Work and Social Care, University of Bedfordshire. Fiona previously worked as a researcher at the Anna Freud National Centre for Children and Families. Fiona's research interests include the communication of empathy in child and family social work and child and adolescent mental health.
John Mcdonnell is a social worker and practice coach. John has worked in child and family social work, in the fields of disability and child protection. John's interests include service development and how coaching and feedback on direct practice can help improve social work skills and outcomes for families.
Design features of the synthetic learning environment
The article considers the features of the transformation of learning in the transition from the usual material-object environment to learning in a digital synthetic environment. Attention is drawn to the fact that today's students prefer online and blended learning, in which human interaction with technical learning tools not only creates new opportunities but also requires coordination of that interaction. A brief description of the main features of learning with new technological capabilities is given, highlighting such aspects as virtual and augmented reality, as well as the use of game-oriented technologies with an emphasis on reflexive games. An analysis was made of the changes in the properties of the new learning environment from the standpoint of biotechtonics, which develops principles for accounting for the human factor, i.e. the coordination of human capabilities with technical systems in a digital learning environment in which a person is transferred to a new interactive space by means of devices that deliver signals to his/her sense organs and devices that register his/her actions. Variants of teaching technologies based on new principles are proposed, which make it possible to improve the quality of assimilation of educational material. It is noted that the basis for creating complex synthetic learning environments is biotechnical systems, which provide a variety of image content management tools for models of these environments, both for the researcher and for the student. It is proposed to expand the concept of "biotechnical system" to include so-called "biotechnical technologies", which becomes especially relevant in the digital world. The difference of this type of technology lies in the fact that, among the technological operations included in it, great importance should be given to operations associated with ensuring the safety of work and creating optimal conditions for the resilience and labor activity of a person.
At the same time, a person interacts mainly with information technologies, with information and knowledge that affect him/her, but not with material objects, both in the process of management and in the process of studying the outside world to use it effectively.
This trend changes the priorities of society, which now relate to:
• development of human capital, taking into account the specific conditions of its functioning in the information environment and the implementation of new forms and means of lifelong learning [7];
• increased demands on the cognitive capabilities of a person, as the nature of mental activity acquires more and more features of operator work [28];
• the possibility of creating adaptive ergatic systems and the transition to automation of the entire learning process, taking into account the functional capabilities of a human [23];
• creation of effective biotechnical systems [6], including in cyberspace, and, accordingly, ensuring their security [20].
At the same time, it has been theoretically and experimentally proven that optimal (human-oriented) system design creates the prerequisites for effective activity and its control even in difficult conditions [34], especially with an increase in the rate and volume of this activity [11].
The above trends change the requirements for training and retraining, and for the ability and willingness to master new professions that did not yet exist when the initial career choice was made [9]. At the same time, the use of training in a synthetic or combined (artificial and natural) environment is constantly increasing, where modeling and simulation allow the student to explore objects and phenomena that are (in the general case) inaccessible in an ordinary educational institution [17]. This approach allows the student to learn complex concepts more easily and to apply them quickly to solving practical problems.
Game technologies are becoming more and more widespread in education [4]. As a result of this process, there is a need for new modeling techniques to describe the behavior of the subjects of the educational process [25]. Such methods arise due to ICT not "in the classroom", but in a digital synthetic environment [3], where the integration of participants in the educational process takes place. At the same time, there is an increasing need to take into account various aspects of the interaction between equipment/technologies and humans, the safety of the latter, which are not limited to physical security problems, but require consideration of the human factor in a broad sense at various stages of life.
Since the trend of using synthetic artificial environment (SAE) in education is quite new, its advantages, disadvantages and consequences remain unpredictable so far. The problem of creating and using the environment in the educational process was primarily dealt with by researchers in the field of emergent technologies, space, and military spheres [5]. Much attention was paid to balancing technologies, the cost of the created environment, trust in it and measuring/evaluating its effectiveness, as well as analyzing the capabilities of SAE for learning in general and in learning modeling systems as such [18]. However, such changes in the means and structure of the learning environment change the teaching load on the pupil/student and actualize the problem of considering the psychological and psychophysiological "cost" of such learning activities [26].
According to research results [13], online learning is today the preferred form (89% of respondents indicated this), but the blended form is even more attractive (93%). The emergence of new learning tools is generating new trends in digital education (eLearning) that expand the range of ergonomics/human factors problems formulated just a few years ago [15]. Of particular note is the growing interest in the use of virtual (VR) and augmented (AR) reality in education [22]. However, these new technologies also give rise to new problems: deterioration in the interaction of students; emergence of dependence on mixed reality; hardware and software deficiencies; high costs (today); limited content [21]. In addition, it should be considered that, for psychophysiological reasons, the use of these technologies is recommended only for children aged 12 and older.
The purpose of this article is to analyze the features of, and to develop a model of, the interaction between a human and technical means in a synthetic learning environment.
Results and discussion
Synthetic learning environments (SLEs) require the attention of ergonomics and human factors specialists to a much greater extent than traditional approaches, since the human, the technical means and the learning environment are explicitly integrated in their interaction. The activity of a person (both a teacher and a student) is acquiring more and more features of operator work [2]. For operators included in a production process control system, a methodology and method for assessing the level of their training have been developed [19]. A systems methodology is also known for biomedical research [12], and methods for training and predicting the performance of an operator have been developed (at least for process operators, dispatchers, and manipulators). However, for the group of research operators, to which students can be assigned (in accordance with the classification of types of operator work adopted in ergonomics), many design and analysis issues require considering human activity in an environment that can be simultaneously characterized as technical and technological, with an emphasis on the use of information technology. It is advisable to dwell on the main aspects of this problem: the features of the SLE, the learning technologies within it, and the problems of coordinating a human with technical systems in such a learning environment.
Synthetic artificial learning environment
As Cook and Palmer [13] point out, the known data indicate a tendency to enrich learning opportunities by transferring learning and development activities to a synthetic environment, where the content of learning is shifting towards self-learning and project-oriented activities. At the same time, Cannon-Bowers and Bowers [12] note that: "a synthetic environment is a reconstructed multifunctional system with a mixture of real and computer synthesized (simulated) objects under control of a computer that provides interaction between combinations of real and synthesized objects. SLE consists of a digital and analog representation of the physical environment with a given accuracy and complexity; it scales to any size and complexity. At the same time, "the subject of the educational process actually functions as an operator-researcher, using auxiliary intermediate means (technical, informational, organizational, etc.) to achieve the ultimate goal (acquisition of knowledge, skills, competencies)" [26]. In addition, it should be considered that in the SLE "a human-operator is transferred to a new interactive environment with the help of devices that reflect signals in human sensory organs and devices that perceive various actions of the operator." Therefore, it is advisable to apply the following principles of multimedia learning environment design when creating it: consistency, signaling, spatial contiguity, and temporal contiguity.
The concept of an immersive and virtual environment is related to the concept of SLE. According to Sergeev [31], in terms of content, a learning environment always appears as "a dynamic process of forming a network of relations in the subject of education, to which a wide variety of elements of the external and/or internal environment are selectively involved in order to ensure the autopoiesis of the organism, the stability of the personality and the continuity of its history" [32].
The main properties of an immersive learning environment are as follows: redundancy; the possibility of observation; accessibility to cognitive experience; saturation; plasticity; posture; subjective spatial localization; autonomy of existence; possibility of synchronization; vector; integrity; motivation; presence and interactivity.
Virtual learning environment and augmented reality. According to a report by Digi Capital [16], approximately 3.5 billion AR devices will be in use by 2023, and AR will become a $90 billion industry. VR may grow more slowly, to 60 million devices and $15 billion in revenue over the same time period. Naturally, large tech companies are already taking steps to expand their services; for example, Snapchat and Facebook recently introduced enhanced AR and VR features for both entertainment and education. The prospect of using AR and VR in education is also considered at the level of state programs. Thus, the United States held a 2-year competition for the best development of AR for medicine; more than 170 companies and institutions in China have united in the Virtual Reality Industry Alliance to accelerate the development of AR/VR; the French Ministry of Education introduced AR into the secondary school curricula; and in the UAE, 17 schools have joined a pilot project to integrate VR into the curriculum.
Progress in this area will apparently reach most countries in the near future. However, it is important to understand the difference between AR and VR in order to optimize the implementation of such innovative technologies in the educational process. Smart, Cascio and Paffendorf [32] offer such an interpretation of the connection between the knowledge of the world and the means of AR/VR in the SLE (figure 1). It seems expedient to supplement the scheme with the resulting product on the modeling side, "New Possibilities of Cognition" (in the original, the authors of [29] did not consider the result of the SLE in this direction).
The prevalence of the term "synthetic learning environment" (SLE) in the English-language literature is associated with the emergence and rapid development of electronic learning tools. At the same time, new opportunities appear for the formation and development of new forms of human socialization, various approaches to understanding the "syntheticity" of the educational environment, the place and "presence" of the subjects of the educational environment in the educational process. The "synthetic experience" acquired by the student has a unique potential for interaction with the structures of the mind and acquires the functions of a kind of thinking exoskeleton [1].
New technical and technological solutions for the creation of the SLE require the development of pedagogical systems and their methodological foundations. According to the author, the main pedagogical elements in SLE should include: provision of sufficient reference information/resources built into the simulation, preparation of training settings, diagnostic interactions, collaboration, dynamic and context-sensitive assistance, reflective strategies, student-controlled experience.
The possibilities of active learning methods and SLE have a certain parallelism, but with their own specifics.
Game-oriented learning technologies and modeling
As it is known, in most cases (under normal conditions of development), children begin to learn the world in a playful way. The game as a private and simplified model of the world makes it possible to model situations from the future life at a level accessible to the child. The expansion of opportunities for the use of ICT, their availability for various segments of the population and age groups, the overall growth of computer literacy, as well as the development of media and intellectual means of human access to the Internet, significantly expand the gaming potential of understanding the world, as well as the possibilities for human development in age and cognitive aspects. A game (especially in digital form) is becoming an important pedagogy for education in the twenty-first century. Using game-based learning models, future workers (primarily in the knowledge industry) are preparing for a quick response to changes in technology, career changes and career growth. The success of complex video games demonstrates that games can promote the development of strategic thinking, interpretive analysis, problem solving, plan formulation and execution, and adaptation to rapid change.
A promising form of organizing objective test methods for assessing the capabilities and readiness of a student can be a computer game built on the principle of reflection, i.e., one that provides the subject with the possibility of controlling the object of activity on the basis of acquired experience and imagination, without direct informational contact with the object itself. Reflexive control contributes to the balancing of the sensory flows that affect a person and cause responses, and thus to the continuous harmonious self-development of a healthy person [14].
Systems that use various information test influences, the reaction of the subject to which gives information about the studied properties of his/her personality, belong to the class of biotechnical measuring and computing systems with test influences [27]. When building such a system, it is necessary to solve three problems: 1) selection of a test with the help of which a controlled informational impact on the subject is carried out; 2) the choice of a "guiding principle", in accordance with which the subject makes one or another decision to change the content of the test object; 3) implementation of the "test response" to the impact: the response of the subject to the presented test, allowing him/her to fulfill the chosen decision.
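As an illustrative sketch (not taken from the source), the three design problems listed above can be mapped onto three pluggable components of such a measuring system. All names and the toy guide/response functions below are hypothetical, chosen only to make the structure concrete:

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class TestInfluence:
    """Problem 1: the test used for the controlled informational impact."""
    name: str
    modality: str    # e.g. "visual", "auditory", "tactile"
    stimulus: float  # abstract stimulus intensity


@dataclass
class TestSystem:
    """Minimal model of a biotechnical measuring system with test influences."""
    test: TestInfluence
    # Problem 2: the "guiding principle" by which the subject's decision
    # changes the content of the test object.
    guide: Callable[[float], float]
    # Problem 3: the "test response" channel carrying the subject's
    # (here, motor-style) reaction back to the system.
    respond: Callable[[float], float]
    history: List[float] = field(default_factory=list)

    def run_trial(self) -> float:
        adjusted = self.guide(self.test.stimulus)  # subject adjusts the test object
        reaction = self.respond(adjusted)          # subject's recorded response
        self.history.append(reaction)
        return reaction
```

For example, a system whose guiding principle halves the stimulus (`guide=lambda s: s * 0.5`) and whose response adds a fixed motor offset would record 6.0 for a stimulus of 10.0. The point of the sketch is only that the three problems are independent design choices that plug into one trial loop.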
The choice of the modality of the test is usually carried out considering the studied personality trait and the age of the subject, his/her skills in tests' performing, working conditions and other factors. To implement a test action on the part of the subject in response to the impact, the simplest reactions of a motor nature are usually used, which are widely used in everyday life. Accurate selection of all three characteristics of the test method ensures the reproducibility, reliability, and validity of the test results.
The simplest test studies are based on human sensorimotor responses. As test stimuli, stimuli of three sensory modalities that have proven themselves in practice are usually used: visual, auditory, and tactile, i.e., those modalities that are used in presenting an operational image of a real situation. However, in the SLE, the possibilities of sensory influence are expanded by using electrophysiological signals to input a response (for example, using AR). At the same time, the fixed indicators complement the assessment of the situation, characterizing it from different angles. Additional opportunities for such testing appear if physiologically substantiated influences are used to stimulate these responses, especially at the stages of education and training for a profession. However, the introduction of such means of feedback requires considering the psychophysiological characteristics of students of different age groups, taking into account their age-specific sensitive periods of development.
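A minimal sketch of how such sensorimotor indicators might be computed from logged stimulus and response timestamps (this is our own illustration, not a method from the source; the function names are hypothetical):

```python
import statistics
from typing import Iterable, List


def reaction_times(stimulus_onsets: Iterable[float],
                   responses: Iterable[float]) -> List[float]:
    """Per-trial reaction time: response timestamp minus stimulus onset.

    Trials with non-positive latency (anticipations or logging errors)
    are discarded, a common cleaning step in sensorimotor testing.
    """
    latencies = [r - s for s, r in zip(stimulus_onsets, responses)]
    return [rt for rt in latencies if rt > 0]


def summarize(latencies: List[float]) -> dict:
    """Fixed indicators characterizing the response from different angles."""
    return {
        "n": len(latencies),
        "mean": statistics.mean(latencies),
        "sd": statistics.stdev(latencies) if len(latencies) > 1 else 0.0,
    }
```

The same summary would be computed separately per modality (visual, auditory, tactile), since baseline latencies differ across the senses.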
Features of the biotechtonics of a synthetic learning environment
It follows from the above that modern means of cognition (training, development, games, education) are increasingly moving from "human-human" interaction to interaction in the "human-technology-environment" system, which means that the concept of a biotechnical system (BTS) needs to be extended. Proposed by V. M. Akhutin, the term "BTS" characterized this class of systems as "a set of biological and technical elements combined into a functionally unified system of purposeful behavior" [28]. Its main biological element is considered to be a person, whose main function is to control a technical system that performs certain tasks with objects of human cognitive interest external to the BTS.
Considering the results of the analysis of trends towards expanding the concept of BTS, an interpretation was proposed for a new direction in scientific research and education - "biotechtonics" - which unites this research under a single fundamental scientific concept: "unification of the living with the inanimate (artificial) object" [27]. At the same time, it should be borne in mind that humans today live in the information (often called digital) era, when the surrounding world is represented not so much by technical means (which have spatial localization) as by technological ones (in the digital space, the technology and the environment of activity increasingly coincide). Therefore, the concept of a "biotechnical system" can be expanded to include so-called "biotechnical technologies", which becomes especially relevant in the digital world. The distinctive feature of this type of technology is that, among its technological operations, great importance is given to those associated with ensuring work safety and creating optimal conditions for human life and work. Here a person interacts mainly with information technologies - with information and knowledge that affect him/her - rather than with material objects, both in the process of management and in the process of studying the outside world in order to use it effectively.
The process of cognition itself can be represented as a dialogue system that includes a technical unit (TU) creating a synthetic learning environment (figure 2). In this system, a human researcher (HR) who wants to form an idea of the properties of the object of his/her interest (OI) must have certain connections with this object. The double multidirectional arrows emphasize that the direct connections from HR and OI to TU differ from the reverse links, from TU to HR and OI. All elements in the system exchange a kind of "requests" and "answers", which can be implemented in various ways using various techniques, methods, and technical means. Such interaction can involve matter, energy and, most often, information; the connections must therefore be adapted to the transfer of matter, energy, or information. The transmission itself is carried out through the environment surrounding all of these elements, where the material-objective environment is understood as the part in the immediate surroundings of HR and OI. The real environment (real reality, RR) includes the environment directly at the point of interaction and the digital environment, which is limitless (more precisely, limited by the physical network used for interaction). This environment is active and affects the biological objects in a state of dialogue (both at the moment of direct interaction and in a delayed manner), while these biological objects in turn affect the characteristics of the RR.
The object of interest in a real environment manifests its activity in various physical forms, whose parameters contain information about its characteristics and properties. At the same time, the researcher directly reacts only to signals perceived by his/her sensory analyzers. The analyzers most commonly used for such interaction are the visual (VA), auditory (AA), and tactile (TA) ones. HR responses are most often manifested in the form of locomotor (motor) movements. These human features must also be considered when creating an SLE, so that the student's activities correspond to ordinary work activities, which allows him/her to form working skills (figure 3).
In figure 3, the model of the process of interaction between a researcher (HR) and a learner (L) reflects the place and role of the technical devices included in the technical part (TP) of the teaching system. In direct contact with L, HR can engage all his/her sensory and effector capabilities to form an idea of L, but these possibilities are limited.
To expand these possibilities, HR is forced to create special technical means, both for obtaining information about the properties of L and for influencing L. These units include the information display system (IDS_HR) with the playback device (PD_HR), the control panel (CP_HR), and the command generation unit (CGU_1, labeled BPK_1 in the figure). The learning environment is formed by the block of its formation (BFLE); it is presented as an image on the learner's display, and L, in accordance with the teaching program, can change the content of this image using his/her control panel CP_L. Through the block BPK_2, L can change the content of the presented image in accordance with the task being solved. In order to observe L's actions while studying his/her scenario, a second playback device PD_L can be included in the TP, on which the entire process of L's activity is reproduced. The BFLE may contain storage devices that allow the work of L to be evaluated after the research is completed.
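The flow of commands between the blocks named above can be mirrored in a toy object model. This is only an illustration of the wiring; every class and method name here is hypothetical rather than taken from an actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class BFLE:
    """Block for forming the learning environment: holds the scene state
    and a log (the 'storage device' used for post-hoc evaluation of L)."""
    scene: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

    def apply(self, actor, command, value):
        self.scene[command] = value             # update the presented image
        self.log.append((actor, command, value))

@dataclass
class ControlPanel:
    """CP block: turns an operator's input into a command for the BFLE,
    playing the role of the command generation units (CGU/BPK)."""
    actor: str
    bfle: BFLE

    def send(self, command, value):
        self.bfle.apply(self.actor, command, value)

bfle = BFLE()
cp_hr = ControlPanel("HR", bfle)   # researcher's panel (CP_HR)
cp_l = ControlPanel("L", bfle)     # learner's panel (CP_L)

cp_hr.send("scenario", "assembly-task")   # HR sets the task
cp_l.send("viewpoint", "zoom-in")         # L adapts the presented image
```

Keeping every command in `bfle.log` is what would let the researcher replay the learner's entire activity on the second playback device after the session.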
The main block - the one that determines the type of image of the synthetic environment, the method of analyzing and changing its visual content, and the processing of the parameters of L's psychophysiological state - is the synthetic-environment formation block BFLE.
Compared to the material-objective world, in a digital learning environment the interaction of HR and L can be carried out without the direct involvement of human effectors, by means of control based on human electrophysiological signals (EEG of the brain, EMG of the muscles) and the corresponding transducers. At the same time, the informational and emotional components of human activity grow in importance, as do the role and capabilities of its cognitive part.
The means of dialogue between human and technology have expanded significantly today, but the principles of the synthesis of teaching BTS have remained the same. Virtual and augmented reality expand the possibilities of interaction between a human and technical and technological teaching aids, but HR still forms in his/her mind only a model of the object of cognition (a cognitive model of activity). The subject of cognition interacts not with the object of cognition itself but with its model - the image on the IDS_L - which he/she builds from his/her own ideas at the current level of awareness. Expanding the means of cognition and creating new tools, methods, and technologies for studying L contributes to its deeper study; such processes only improve the quality of the model, leaving what is still unknown beyond its limits.
Such ideas could be useful in the development of BTS where visual communication is of great importance (e.g., in [30]) and in systems for predicting human performance [8], as well as in designing BTS for measuring psychophysiological indices that could be built into more complex educational and work tools [24]. The design of the synthetic learning environment is of special significance in the STEM and STEAM education of future educators [33].
Conclusions and future research
• In the XXI century, the educational space acquires new features as the role of the synthetic artificial learning environment strengthens.
• A synthetic learning environment becomes an independent subject of learning due to the expansion of its content and didactic potential, its active participation (suggestions, provision of choices and polylogue, "immersion", the ability to adapt the learning process to the needs and abilities of the student, etc.) in the formation of the student's competencies, as well as the possibility of his/her socialization.
• The basis for creating complex synthetic learning environments is biotechnical systems, which provide a variety of means to control the content of images for models of these environments, both on the part of the researcher and of the student.
• It is advisable to focus further research on this problem on developing the scientific and applied direction of biotechtonics along several lines: the synthesis of environment models, methods for controlling the content of plots, and accounting for the peculiarities of the student's activity in learning environments of different content.
Dimensional flow and fuzziness in quantum gravity: emergence of stochastic spacetime
We show that the uncertainty in distance and time measurements found by the heuristic combination of quantum mechanics and general relativity is reproduced in a purely classical and flat multi-fractal spacetime whose geometry changes with the probed scale (dimensional flow) and has non-zero imaginary dimension, corresponding to a discrete scale invariance at short distances. Thus, dimensional flow can manifest itself as an intrinsic measurement uncertainty and, conversely, measurement-uncertainty estimates are generally valid because they rely on this universal property of quantum geometries. These general results affect multi-fractional theories, a recent proposal related to quantum gravity, in two ways: they can fix two parameters previously left free (in particular, the value of the spacetime dimension at short scales) and point towards a reinterpretation of the ultraviolet structure of geometry as a stochastic foam or fuzziness. This is also confirmed by a correspondence we establish between Nottale scale relativity and the stochastic geometry of multi-fractional models.
Introduction
After many years of research, we are not yet close to an acknowledged unique quantum theory of gravity, partly because of the lack of experimental guidance. The mathematical and conceptual challenges raised by the attempt of combining quantum-mechanical and general-relativistic principles produced plenty of different approaches to the problem of quantum gravity (QG) [1][2][3]. Among them, we count string theory [4], the tripod of group field theory, loop quantum gravity and spin foams [5][6][7][8], causal dynamical triangulation [9], causal sets [10], asymptotically safe gravity [11][12][13], non-commutative spacetimes [14,15] and non-local quantum gravity [16][17][18][19], just to mention some of the most popular models available in the literature. Over the last twenty years, this considerable theoretical effort has started both to figure out phenomenological predictions that could be tested with the presently achievable levels of experimental sensitivity and to gradually focus on a few results that seem independent of the specific quantum-gravity framework adopted [20]. In fact, even if there are great differences between inequivalent approaches, some common features have been noticed. One of the most recurrent findings in the field is dimensional flow (or dimensional running), i.e., a change of spacetime dimension with the scale of the observer. In almost all quantum-gravity models, the dimensionality of spacetime exhibits a dependence on the scale, changing (or "flowing") from the topological dimension D in the infrared (IR) to a different value in the ultraviolet (UV). There can be more than a single relevant scale and, thus, the dimension can change many times before reaching its far-UV value at a scale that is often identified with (or recognized as) the Planck length ℓ_Pl = (ħG/c³)^{1/(D−2)}.
Sometimes, the concept of dimension does not even survive deep into these UV scales and it dissolves into some highly non-smooth structure (for instance, multi-fractal, discrete, or combinatorial). All known quantum gravities are multi-scale by definition because they all have an anomalous scaling of the dimension [21][22][23] (see [24,25] for a scan of the literature and more and newer references). A recent strategy for easily realizing the running of the dimension has been followed by multi-fractional theories, comprehensively reviewed in [24]. In these models, the basic ingredient implementing dimensional flow is a non-trivial factorizable integration measure d^D q(x) = dq⁰(x⁰) dq¹(x¹) ⋯ dq^{D−1}(x^{D−1}). The profiles q^µ(x^µ) are determined uniquely and solely by requiring to reach the IR limit as an asymptote [26]. An approximation of the full measure, which will be of interest here, is the so-called binomial space-isotropic profile

q^µ(x^µ) = x^µ + (ℓ_*/α_µ) sgn(x^µ − x̄^µ) |(x^µ − x̄^µ)/ℓ_*|^{α_µ},   (1)

where α_µ = α_0 for µ = 0 and α_µ = α for spatial directions, and there is no summation over the index µ = 0, …, D−1. The fractional exponents 0 < α_0, α < 1 are directly related to both the spectral and the Hausdorff dimensions (d_S, d_H) at very short distances ℓ ≪ ℓ_*; if α_0 = α, then d_S ≃ Dα ≃ d_H in the UV for the theories considered here (with fractional or q-derivatives [24]). In the above measure, we are assuming spatial isotropy (the same α for all space directions) and the existence of only one characteristic length ℓ_*. These approximations can be relaxed without difficulty but, since the full exact form of the measure q^µ(x^µ) is not needed here, for the sake of our argument we will limit our attention to (1), at least at the beginning. The binomial measure (1) is obtained by a coarse-graining procedure from the most general case of measures with logarithmic oscillations, which contain at least one more, shorter length ℓ_∞ ≪ ℓ_* and a frequency ω [27]. Later on, we will consider also this case.
Both α and ℓ_* are free parameters of the theory, with the only constraints that ℓ_* is expected to be small in order to respect experimental constraints on the dimension of spacetime (typically, ℓ_* is much smaller than the electroweak scale [24]) and that α must stay in the interval α ∈ (0, 1) for arguments of theoretical consistency [24]. The second flow-equation theorem [26] or rigorous arguments of multi-fractal geometry [27] fix the measure q^µ(x^µ) uniquely, but not the physical frame where measurements are performed. In fact, while a multi-fractional geometry is designed to adapt to the scale of the observation, our devices (rods, clocks, and so on) are not. This is realized at the price of breaking Poincaré invariance, so that physical observables have to be computed in a fixed preferred frame. This poses the so-called problem of the choice of presentation, which consists in the choice of x̄^µ. Although there are infinitely many possible choices, four are special [29] and the second flow-equation theorem reduces them to two [26].
In this paper, we show that the limitations on the measurability of spacetime distances, which have been obtained by many authors combining quantum-mechanical (QM) and general-relativistic (GR) arguments [30][31][32][33][34] or relying on specific quantum-gravity models [35][36][37][38][39], can be regarded as a multi-scale effect. In fact, multi-fractional theories naturally carry an additional non-trivial contribution to the magnitude of a distance, which is not present in a classical theory with standard integration measure. For special values of α in Eq. (1), this multi-fractional contribution can be reinterpreted as an intrinsic uncertainty (or fuzziness, in QG jargon) on the measurement of spacetime distances, exactly of the same type encountered in a standard (i.e., non-multi-scale) model where both QM and GR are taken into account [33,34]. This suggests that classical multi-fractional models in Minkowski spacetime (i.e., in the absence of curvature) partially encode both QM and GR effects, and that they do so thanks to dimensional flow. This correspondence between semi-classical quantum gravity and multi-fractional theories will allow us to give a physical interpretation to the ambiguities of the multi-fractional theories with fractional and q-derivatives. In fact, the comparison of the multi-fractional uncertainty on the distance with two different lower bounds found by Ng and Van Dam [33] and Amelino-Camelia [34] will select two preferred values for the fractional exponent, α = 1/3 or α = 1/2. Remarkably, the second value was recognized as special since early papers [23,27,40] for several theoretical reasons [24], including its frequency of appearance in the quantum-gravity landscape of theories. Moreover, we will identify ℓ_* with the Planck length ℓ_Pl in the former case, while in the latter we will obtain ℓ_* = ℓ_Pl²/s < ℓ_Pl, where s is the observation scale.
Interestingly, in the second case the dependence on the scales at which the measurement is being performed becomes explicit. This is exactly what is expected to happen in multi-fractal geometry and, in particular, in multi-fractional theories, where the results of measurements depend on the observation scale. In our analysis, this effect comes directly from equating the multi-fractional uncertainty with the semi-classical one. Turning this sort of duality around, we solve the long-standing presentation problem in a surprising way. Consider a length L in a multi-fractional spacetime with binomial measure (the same argument holds for time intervals). The typical difference between L and the value ℓ that would be measured in an ordinary space is [29]

δL_α ≃ (ℓ_*/α) |L/ℓ_*|^α.   (2)

Until now, the multi-fractal correction δL_α has been regarded as a deterministic effect signaling an anomalous scaling at scales ∼ ℓ_*. Here, we reinterpret it as an intrinsic uncertainty of measurements, so that lengths cannot be measured with a precision smaller than δL_α. This reinterpretation is not arbitrary and will rely on the so-called harmonic structure of geometry, associated with a deep UV discrete scale invariance generating an infinite hierarchy of scales. This structure is at the core of a precise relation between Nottale scale relativity [41][42][43] and a multi-fractional measure that is nowhere differentiable, a property which is distinctive of stochastic geometries. After reviewing the distance and time uncertainty estimates of [33,34] in section 2, we will obtain them directly in multi-fractional stochastic spacetimes in section 3. Section 4 is devoted to a discussion of the consequences of the main results for multi-fractional theories and quantum gravity at large, also comparing with previous attempts to relate dimensional flow and fuzziness. A condensed presentation can be found in [44].
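The matching sketched above can be checked numerically. The snippet below is a sketch, assuming the binomial correction δL_α ≃ (ℓ_*/α)|L/ℓ_*|^α and the Ng-Van Dam bound (ℓ_Pl² L)^{1/3} discussed in this paper; it verifies that, for α = 1/3 and ℓ_* = ℓ_Pl, the two expressions share the same scaling up to an O(1) factor.

```python
L_PL = 1.616e-35   # Planck length in meters (4D value)

def delta_L_multifractional(L, alpha, l_star):
    """Binomial multi-fractional correction to a length L."""
    return (l_star / alpha) * (L / l_star) ** alpha

def delta_L_ng_vandam(L):
    """Semi-classical lower bound (l_Pl^2 * L)^(1/3)."""
    return (L_PL ** 2 * L) ** (1.0 / 3.0)

L = 1.0  # a 1-meter distance
ratio = delta_L_multifractional(L, 1.0 / 3.0, L_PL) / delta_L_ng_vandam(L)
# For alpha = 1/3 and l_star = l_Pl the two expressions differ only by the
# constant factor 1/alpha = 3: identical scaling in both L and l_Pl.
```

The same exercise with α = 1/2 forces ℓ_* to depend on the observation scale s, which is how the identification ℓ_* ∼ ℓ_Pl²/s quoted above arises.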
Review of the estimates
We begin by reviewing the Salecker-Wigner procedure [45] for the quantum measurement of spacetime distances and highlight how, once the quantum nature of measuring devices is taken into account, the presence of gravitational interactions forbids identifying a length with arbitrarily good accuracy (zero uncertainty). A necessary observation is that QM and GR give completely different definitions of the position of an object. In the former, it is simply identified by its four coordinates x^µ, but there is no prescription for the actual measurement of these coordinates. On the contrary, coordinates have no meaning by themselves in GR and, in order to identify a "position" (a spacetime event), one has to specify an operational procedure to measure the distance between the observer and the measured object. Thus, for the purpose of measuring a given distance, Salecker and Wigner [45] recognized three basic devices: a clock, a light signal, and a mirror. We set the initial time when the light ray leaves the clock site. Then, it is reflected by the mirror at a distance L. When the light ray comes back to the clock, the time we read is T = 2L/c, where c is the speed of light. Now, quantum mechanics affects this measurement by introducing an uncertainty δL. In the same way, if we try to measure the time of travel T, the latter will be affected by a quantum uncertainty δT. To calculate these uncertainties, we follow two possible lines of reasoning. The first, due to Ng and Van Dam [33], seeks the major element of disturbance for the measurement of both distance and time in the QM motion of the quantum clock. The second argument, by Amelino-Camelia [34], focuses on the QM uncertainty in the position of the center of mass of the whole system. In both cases, since we are considering QM properties of devices, the system is initially described by a wave packet with uncertainties on position and velocity that affect the measurement by producing an initial spread δL(0).
Then, the length L acquires an uncertainty δL(T) ≃ δL(0) + δv(0)T over the duration T of our measurement, where δv(0) is the QM uncertainty on the velocity of the system (there is a slight difference between the two cases, since in the first one δv(0) refers to the clock, while in the second to the center of mass). We discuss explicitly the uncertainty on length measurements, but an analogous argument applies also to time measurements, for which there is an equivalent result. First, let us follow the approach of Ref. [33]. As mentioned above, the uncertainty δL is induced by the fact that, as a quantum object, the clock cannot stay absolutely still. It has a QM uncertainty on its velocity δv(0) = δp(0)/M ≳ ħ/[2M δL(0)], where M = M_c is the mass of the quantum clock. In the light of this, we can rewrite the QM uncertainty on the measurement of our distance as

δL(T) ≳ δL(0) + ħL/[M c δL(T)],

where we have replaced T = 2L/c and also maximized the denominator by putting δL(T) in place of δL(0). (Due to the quantum motion of the clock, the uncertainty on the length measurement is expected to increase, i.e., δL(T) ≳ δL(0).) Therefore, using only standard QM arguments, we find

δL ≳ (ħL/(M c))^{1/2}.   (4)

Now we add GR effects. Turning gravity on, we know that the gravitational field of the clock will affect the measurement of the distance L. As soon as gravity comes into play, spacetime is no longer Minkowski and, thus, distances change due to curvature effects. How much does this modify the distance we are measuring? To answer that, one can calculate the uncertainty δL produced by the gravitational field of the clock. Suppose our quantum clock is spherically symmetric and that the metric around it is approximately Schwarzschild. Passing to "tortoise coordinates" [46], the time interval for a complete trip is given by

T ≃ (2/c) {L + (r_S/2) ln[(r_c + L)/r_c]},

where r_c is the size of the clock and r_S = 2GM_c/c² is the Schwarzschild radius.
Then, the distance reads

cT/2 ≃ L + (r_S/2) ln[(r_c + L)/r_c].

Here, the first term is the distance in Minkowski spacetime, while the second contribution is the gravitational correction due to the clock. Thus, we have δL ≃ (r_S/2) ln[(r_c + L)/r_c] in the approximation r_c ≫ r_S. This expression tells us that, having introduced GR effects, there is an additional uncertainty on the measurement of the distance given by

δL ≳ r_S/2 = GM_c/c²,

having neglected the numerical factor ln[(r_c + L)/r_c]. Combining this bound with the QM one of Eq. (4), we finally obtain [33]

δL ≳ δL_{1/3} := (ℓ_Pl² L)^{1/3},   (5)

where the subscript stresses that this lower bound has exponent 1/3. Following a similar reasoning, one can easily find an intrinsic uncertainty also on measurements of time intervals [33]:

δT ≳ δT_{1/3} := (t_Pl² T)^{1/3},   (6)

where t_Pl = ℓ_Pl/c is the Planck time. The argument by Amelino-Camelia [34] is slightly different. In that case, one identifies the source of disturbance with the center of mass of the system rather than with the clock. The QM part of the reasoning remains the same, the only difference being the replacement of M_c with the total mass M_tot in Eq. (4). On the gravity side, we simply require that the total mass is not large enough to form a black hole, i.e., M_tot ≲ sc²/G, where s is the size of the total system made up of the clock plus the light signal plus the mirror. In fact, if a black hole formed, then the light signal could not propagate to the observer, thereby making the measurement impossible. Combining this restriction with the QM uncertainty, one finds [34]

δL ≳ δL_{1/2} := ℓ_Pl (L/s)^{1/2},   (7)

where the subscript 1/2 distinguishes the exponent of this uncertainty. Analogously, the uncertainty on time measurements reads [34]

δT ≳ δT_{1/2} := t_Pl (T/t)^{1/2},   (8)

where t = s/c.
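As a rough numerical companion to the two derivations above (a sketch under the assumptions of the text; the clock mass and the distance are arbitrary example values), one can check both the minimization step behind Eq. (4) and the size of the bound (5) for a macroscopic distance:

```python
import math

HBAR = 1.055e-34   # reduced Planck constant, J*s
C = 2.998e8        # speed of light, m/s
L_PL = 1.616e-35   # Planck length, m

def qm_spread(d0, T, M):
    """Total QM uncertainty for an initial spread d0, before minimization:
    d0 + hbar*T/(2*M*d0)."""
    return d0 + HBAR * T / (2.0 * M * d0)

M = 1.0e-3                 # a 1-gram clock
L = 1.0                    # measured distance: 1 m
T = 2.0 * L / C            # light round-trip time

# Minimizing over d0 gives sqrt(2*hbar*T/M) analytically; a brute-force
# scan around that value confirms it.
analytic = math.sqrt(2.0 * HBAR * T / M)
numeric = min(qm_spread(analytic * (0.3 + 0.001 * i), T, M)
              for i in range(1001))

# Combined QM+GR bound of Eq. (5) for L = 1 m: of order 1e-24 m.
delta_L_13 = (L_PL ** 2 * L) ** (1.0 / 3.0)
```

The tiny value of `delta_L_13` compared to `analytic` for a gram-scale clock illustrates why the combined bound only becomes irreducible once the clock mass can no longer be increased without forming a black hole.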
QM+GR=QG: the physics of quantum gravity emerges
There are at least two comments to make concerning expressions (5) and (7); similar considerations apply also to Eqs. (6) and (8). First, they both depend on the time T = 2L/c of the measurement, a feature that has often been regarded as a sign of quantum-gravitational decoherence. Second and most importantly for what follows, it is worth noting that the interplay of QM and GR principles determines a feature that, hopefully, might help our intuition on the physics of QG. In fact, one ends up with an intrinsically irreducible uncertainty on the measurement of a single observable, in this case the distance or time interval. The combination of QM and GR affects geometric observables, such as distance and time, and this was often interpreted as a confirmation that QG requires a new understanding of geometry (as explicit constructions of quantum gravity eventually confirmed). This single-observable uncertainty is not just a QM effect, since QM only imposes a limitation on the simultaneous measurement of conjugate variables. It also has no counterpart in GR. In fact, one recovers the standard case δL = 0 by turning off either GR or QM. As far as we consider only QM limitations, we can of course get δL = 0 by taking the infinite-mass limit M_c, M_tot → ∞ in Eq. (4). However, this is no longer possible when we consider GR interactions since, in the presence of gravity, the apparatus would form a black hole before reaching an infinite mass. Again, from Eq. (4) one can see that the uncertainty on the distance L goes to zero if we turn off QM by sending ħ → 0. Moreover, both δL_{1/3} and δL_{1/2} depend on ℓ_Pl, which goes to zero if one takes either the limit G → 0 (i.e., we neglect gravity) or ħ → 0 (i.e., we neglect quantum properties). However, as soon as both QM and GR effects are taken into account, there is an irreducible δL.
These uncertainty expressions are telling us that quantum gravity might require either a new measurement theory or an exotic picture of spacetime, or both. In the second case, we are led to expect a sort of spacetime foam at scales close to the Planck distance. In fact, the appearance of a limitation on the measurement of distances suggests that, at Planckian scales, spacetime is no longer the smooth continuum we are used to in both QM and GR. At those very-high-energy (very-short-distance) scales, the presence of an intrinsic δL may mean that spacetime is made of events that cannot be localized with arbitrary sharpness. In QG, classical continuous spacetime is replaced by a fuzzy structure. All these considerations, born of the heuristic arguments of [33,34], later found confirmation in concrete QG theories [1,2], each of which realizes this irregular UV structure in very different ways [21,22,24]. One may ask whether and why, if real QG can be embodied by so many diverse theories, the heuristic combination of QM and GR leading to Eqs. (5)-(8) is essentially correct. In the next section, we answer this question as follows: the heuristic arguments are correct, and part of the reason is that they rely, inadvertently, on a universal feature of quantum gravities: dimensional flow.
To show this, we shall analyze the measurement of a distance in the multi-fractional theories with fractional or q-derivatives in flat embedding space, adding neither QM nor GR effects. Despite these notable absentees, we will see that a non-trivial contribution to distance measurements is present and that it has the same structure as the spatial uncertainties (5) and (7) of the semi-classical QG arguments we just reviewed. The same is true also for the time uncertainties (6) and (8). Interpreting the multi-fractional correction to spacetime distances as an uncertainty, not only are we able to fix the ambiguities of the model (i.e., its free parameters α and ℓ_* as well as the presentation), but we also recognize how multi-fractional models intrinsically encode the combination of GR and QM effects. This provides further support to the view that multi-fractional theories can be regarded both as stand-alone proposals and as effective models of QG, and that they are able to capture at least two of (what we think to be) the characteristic features of quantum geometry, namely, dimensional flow and spacetime fuzziness. The latter concept, typically ambiguous when not applied to a particular theory [47], will be given a precise meaning later.
Fuzziness, dimensional flow and stochastic geometry
Let us now show that the bounds (5)-(8) on the measurement of spacetime distances can be reinterpreted as purely classical multi-fractional effects in the absence of gravity. Classical multi-fractional theories encode both the QM and GR effects which we used above to obtain the uncertainties δL and δT in a semi-classical setting, with elementary notions of QM and GR on a standard geometry with local measure dx⁰dx¹ (the D-dimensional case is straightforward). On one side of the correspondence, we have a multi-fractional theory with two structures: a built-in dimensional flow, which is a feature usually derived (rather than assumed) in top-down approaches to QG, and a stochastic-spacetime structure we will describe in this section. On the other side, there is an uncertainty on distance measurements, a property that follows from a naive bottom-up approach combining just QM and GR principles, without adding any hypothetical QG ingredient. The correspondence states that this uncertainty is reproduced by dimensional flow plus an intrinsic randomness at the microscopic level. As surprising as it may be, both dimensional flow and spacetime fuzziness can be obtained at the same time as a result of having a deformed non-trivial integration measure. This may explain the common origin of these two QG features despite strong differences among QG approaches. Apparently, when gravity is quantized one always obtains a multi-scale geometry and some sort of "irregular" or non-smooth structure in the UV. Conversely, a multi-scale geometry and a UV fuzzy structure naturally lead, when properly defined, to the unification of quantum mechanics with gravity.

Footnote 3: In theories with discrete pre-geometric structures, such as the set GFT-LQG-spin foams, "fuzziness" means combinatorial and discreteness effects [48,49].
Footnote 4: In multi-fractional theories, one replicates the same argument for all directions separately and combines everything into the distance ℓ = √(ℓ₁² + ℓ₂² + ⋯). Alternatively, one can pick a multi-scale measure dependent only on the Lorentz distance; these models are purely phenomenological in general, but they capture the correct dimensional flow [23,50,51].
The interpretation we propose here also has the advantage of drastically reducing the ambiguities of multi-fractional models. In fact, by comparing the multi-fractional uncertainty to the bounds of Eqs. (5)-(8) we succeed in fixing the free parameters α and ℓ_*. In particular, the multi-fractional length ℓ_* turns out to be related to the Planck length ℓ_Pl, a fact that strengthens the interpretation of multi-fractional theories as QG descriptions. Moreover, the problem of having different presentation choices, discussed below, is either reinterpreted as an effect of underlying spacetime fuzziness or is irrelevant in the presence of such a structure.
Deterministic view
To this aim, we first comment briefly on the so-called presentation problem, which is part of the definition of a multi-fractional theory. We refer to [24] for a detailed discussion. The basic point is the following. In multi-fractional theories, the geometric coordinates q^µ(x^µ) change with the scale (via their ℓ_* dependence), while the fractional coordinates x^µ are scale-independent. The properties of experimental devices, used to take measurements, are independent of the observation scale. Thus, while the measure changes with the scale, clocks, rods and detectors do not. Consequently, physical observables have to be compared in the fractional frame with x^µ coordinates that do not adapt to the scale. This poses the problem of choosing a preferred fractional frame {x^µ} where Eq. (1) is defined and observables are calculated. In other words, geometric coordinates q^µ transform under the so-called q-Poincaré transformations q^µ(x′^µ) = Λ^µ_ν q^ν(x^ν) + a^µ, which are symmetries of the measure. However, physical quantities are determined in the fractional frame (which does not adapt to the scale, being spanned by the x^µ), where the dynamics is not invariant under these transformations. For this reason, in order to define physical observables it is necessary to fix a frame. It turns out that the frame ambiguity can be encoded in the vector parameter x̄^µ in (1). Different presentation choices produce different measurement outcomes, corresponding to different theories with the same dimensional flow. We will show here that the presentation problem is not a problem at all once it is recognized as the source of an intrinsic uncertainty in the measurement of fractional distances. According to this perspective, the presentation ambiguity has the physical interpretation of an intrinsic spacetime-distance fuzziness.
To this end, let us discuss the computation of a spatial distance. The difference between the spatial distance ∆q expressed in terms of geometric coordinates q i (i = 1, . . . , D − 1) and the distance ∆x in fractional coordinates x i is encoded in the quantity X := (∆q − ∆x)/∆x. Thus, the distance ∆q in the integer frame can be either larger or smaller than the distance ∆x measured in the fractional frame, depending on the sign X ≶ 0. To explicitly see the influence of the presentation on the distance, let us consider a fractional frame labeled by x̄ µ . For simplicity but without loss of generality, let us consider one spatial dimension. It is not difficult to see this explicitly. From the resulting expressions, it is evident that different values of x̄ (i.e., different presentations) give different results for the distance, but they do not change the anomalous scaling X ∼ x α , solely governed by α. Up to now, this was regarded as a freedom of the model and one had to make a choice of the presentation in order to have unambiguous predictions (deterministic view ). Four presentation choices have been identified as special among the others [29], but the second flow-equation theorem [26] selects only two of these: the initial-point presentation, where x̄ = x A (the presentation label is the beginning in time or space of the measurement, the zero of the clock or rod), and the final-point presentation, where x̄ = x B (the end in time or space of the measurement, the number marked by the clock or rod when the experiment is over). In two of the three existing multi-fractional theories (the so-called theory T v with weighted derivatives and the theory T q with q-derivatives), it is not actually possible to give such a physical interpretation to the presentation choice as an intrinsic uncertainty, since none of these settings is invariant under translations and one cannot change x̄ (a constant characteristic of the theory) at each experiment.
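To make the two presentations concrete, here is a minimal numerical sketch (our illustration, with arbitrary values α = 1/2 and ℓ * = 1; the function names are ours) of the binomial geometric coordinate q(x) = x − x̄ + (ℓ * /α)|(x − x̄)/ℓ * | α : the initial- and final-point presentations shift ∆q in opposite directions, while |X| keeps the same anomalous scaling ∝ (∆x/ℓ * ) α−1 .

```python
def q(x, xbar, alpha=0.5, ell_star=1.0):
    # Binomial geometric coordinate with presentation point xbar (sketch values).
    u = x - xbar
    return u + (ell_star / alpha) * abs(u / ell_star) ** alpha

def delta_q(xA, xB, xbar, **kw):
    # Geometric distance between xA and xB in the presentation labeled by xbar.
    return q(xB, xbar, **kw) - q(xA, xbar, **kw)

xA, xB = 0.0, 10.0
dx = xB - xA
dq_init = delta_q(xA, xB, xbar=xA)  # initial-point presentation: Delta q > Delta x
dq_fin = delta_q(xA, xB, xbar=xB)   # final-point presentation:   Delta q < Delta x

X_init = (dq_init - dx) / dx  # positive fluctuation
X_fin = (dq_fin - dx) / dx    # negative fluctuation of the same magnitude
```

Changing ∆x by a factor of 100 rescales both fluctuations by 100^(α−1), i.e., the presentation choice flips the sign of X but never alters its anomalous scaling.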
Consider, for instance, the scalar-field action (11) in T q . Its dynamics is not invariant under a shift x µ → x µ − x̄ µ . On the other hand, in the third multi-fractional model, the theory with fractional derivatives (which we call T γ following [24]), the ordinary differential d is replaced everywhere by an exterior multi-fractional differential defined through the geometric coordinates q µ (x µ ). The analogue of the scalar-field action (11) is the action (12), where we introduced the multi-scale derivatives proposed in [24]. At any plateau in dimensional flow (i.e., at those scales where the spacetime dimension is approximately constant; in the binomial case (1), there are only two plateaux), this derivative coincides, when γ = α, with the exterior derivative introduced in [40] for a no-scale fractional measure, and the Euclidean distance is ∆(x, y) ≃ (Σ µ |y µ − x µ | 2α ) 1/(2α) in the deterministic view. It is not difficult to see that a shift x µ → x µ − x̄ µ leaves the multi-scale derivatives, the action (12) and the equations of motion invariant.
Stochastic view
In [29], an analogy was noticed between the existence of different presentation choices in multi-fractional theories and the existence of different choices of the evaluation time of noise in stochastic processes. Consider a one-dimensional stochastic process X(t) given by a noise with no deterministic component. In general, the graph (t, X(t)) is nowhere differentiable with probability 1 and one cannot write a meaningful differential dX(t) without some hand-made prescription on its inverse operation, integration [52,53]. For an initial condition X(t i ), we can make the splitting ∆t = t − t i as t i = t 0 < t 1 < · · · < t n−1 = t and write, for any test function f in some suitably defined functional space, the integral as a sum over the intervals [t j , t j+1 ]. While the specific choice of the point t̄ j ∈ [t j , t j+1 ] is irrelevant in the case of the Riemann-Stieltjes integral of an ordinary differentiable function X(t) = x(t), it affects the output in the case of a process X(t) fluctuating stochastically in [t j , t j+1 ]. The so-called Itô and Stratonovich interpretations fix t̄ j in two inequivalent ways (respectively, t̄ j = t j and t̄ j = (t j+1 + t j )/2), describing systems with different random properties. At the level of the Fokker-Planck equation [54,55], the Itô-Stratonovich dilemma amounts to a choice of operator ordering in the Laplacian. In this case, the guiding principle is phenomenology: the stochastic system under examination will be better described by one choice instead of the other. In a multi-fractional particle-mechanics setting, the presentation problem precisely consists in the choice of the evaluation point t̄ j , where X(t) is replaced by q(t) [29].
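The dependence on the evaluation point t̄ j can be seen in a few lines of code. The following sketch (ours; the rule names label the left-point and average-of-endpoints choices, the latter converging to the Stratonovich integral) computes the integral of W dW along a single random path: the two prescriptions differ by half the quadratic variation, ≈ t/2, which is exactly the kind of prescription dependence discussed above.

```python
import random

random.seed(42)

n, T = 20_000, 1.0
dt = T / n
dW = [random.gauss(0.0, dt ** 0.5) for _ in range(n)]

# Brownian path W(t_j) built from the increments, with W(0) = 0
W = [0.0]
for inc in dW:
    W.append(W[-1] + inc)

# Integral of W dW with two choices of the evaluation point tbar_j:
ito = sum(W[j] * dW[j] for j in range(n))                       # left point (Ito)
strat = sum(0.5 * (W[j] + W[j + 1]) * dW[j] for j in range(n))  # endpoint average (-> Stratonovich)

# Pathwise identities: strat = W(T)**2 / 2 exactly (telescoping sum),
# while strat - ito = (quadratic variation)/2, which tends to T/2 as n grows.
```

The same path thus yields two different integrals, and which one is "correct" is a prescription, not a property of the noise itself.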
Having established that the presentation problem is basically equivalent to the Itô-versus-Stratonovich prescription, we have two options. One is the deterministic view : different choices correspond to different theories and only observations will be able to decide which prescription is correct. Perhaps this view is not particularly elegant, because it relies on an Ansatz whose ultimate validity can be decided only by future experiments (how far in the future, we cannot tell). However, it is not particularly scandalous either, since it is not new in quantum gravity. Exactly the same Itô-Stratonovich ambiguity in the Fokker-Planck equation appears in quantum cosmology, in the Fokker-Planck equation of eternal inflation [56] and in the Laplacian term of the Wheeler-DeWitt equation of canonical quantization [57,58]. In these cases, the guiding principle to fix the operator ordering is theoretical and can be more or less (but, more often than not, less) compelling.
In opposition to the deterministic view, the other option is more innovative. Generalizing to spacetime geometries, a nowhere-differentiable geometry can be realized in two ways:
• by keeping the multi-fractional measure q(x) deterministic but changing the differential calculus, or
• by considering a nowhere-differentiable measure q(x).
The first case corresponds to the theory T γ , while the second case can be applied to all multi-fractional theories.
Stochastic view with multi-fractional derivatives
Concerning the first possibility, fractional calculus efficiently embodies the nowhere differentiability typical both of multi-fractals [59][60][61][62] and of anomalous stochastic processes or diffusion pseudo-processes [63][64][65][66] (whose application to quantum gravity and multi-fractional theories can be found in [67][68][69]). The theory T γ relies on this calculus [40], generalized to multi-fractional configurations such as (12) [24], where the whole integro-differential structure is deformed in such a way as to encode the irregularity property on a continuum.
In the UV, T γ describes an irregular geometry very different from a smooth spacetime. Such irregularity is most naturally described in terms of probabilistic rather than deterministic features. For instance, we cannot exactly know what the dimension of spacetime is at a given scale, but we can find the most probable dimensions with a certain probability. This suggests interpreting the presentation ambiguity as a sign of a non-trivial stochastic-spacetime structure at microscopic scales, which cannot be classified or measured deterministically. Then, the initial-point and the final-point presentations (selected by the second flow-equation theorem) give us the two extreme values of the fluctuation interval of the fundamental uncertainty we would find in any measurement. This stochastic view departs from the physical interpretation so far adopted in the literature, but for a good reason: it matches completely the heuristic arguments of the previous section which, in turn, permit us to fix some of the free parameters of the multi-fractional measure.
In the theories T v and T q with a differentiable measure q(x), one cannot realize (13) straightforwardly, because differential calculus is ordinary and one does not integrate over all possible labels t̄ [29]. However, it is possible to show that the multi-fractional derivatives can be approximated by the q-derivatives ∂/∂q µ (x µ ) and that the propagators of T γ and T q agree in the UV [24]. Therefore, regarding T q as an approximation of T γ=α carrying all the main features of the exact theory (same anomalous scaling in the UV, same scale hierarchy and value of ℓ * , and so on), one can investigate the effects of choosing the initial- or final-point presentation in the much simpler T q , always bearing in mind that this is done only for technical simplicity. In the case of distance and time intervals such as those considered here, there is no difference between the two theories.
Taking the initial-point presentation, from Eqs. (9) and (10) we get Eq. (14), while, according to the final-point presentation, we obtain Eq. (15), where we have defined the fluctuation δL α in Eq. (16). The initial-point presentation corresponds to a positive fluctuation +δL α , while in the final-point case one gets a negative fluctuation equal to −δL α . In the usual deterministic view, we would have one theory T + γ or T + q predicting ∆q > ℓ (Eq. (14)) physically inequivalent to another theory T − γ or T − q predicting ∆q < ℓ (Eq. (15)). In contrast, in the stochastic view the coexistence of the two allowed presentations is related to a limitation on the measurability of distances, and we do not have to decide a single presentation a priori. In this way, an epistemological weakness of the model is overcome by replacing the idea of a UV geometry described by an anomalous spacetime where measurements can have arbitrary precision with one where the UV limit is a fuzzy or, better said, stochastic spacetime. The same discussion applies also to the time direction for which, in multi-fractional theories with fractional or q-derivatives, we have the analogous Eqs. (17) and (18), where t * is the time scale that characterizes the UV scaling of the time direction and α 0 is the fractional exponent in the time direction x 0 . Interpreting the presentation ambiguity as an intrinsic uncertainty in the determination of distances, Eq. (16) tells us that a classical theory with a non-trivial measure, which exhibits a multi-scale (mono-scale, in the binomial case) behavior, naturally sets an obstruction to sharp measurements of distances, as in a foam-like picture. A classical multi-scale theory in flat spacetime can reproduce the combined effect of GR and QM principles that, if held together, prohibit arbitrarily sharp measurements of spacetime intervals, as we reviewed in section 2. Besides resolving the presentation ambiguity, the interpretation of Eq.
(16) as a distance uncertainty also allows us to compare δL α with the bounds δL 1/3 (Eq. (5)) and δL 1/2 (Eq. (7)) obtained by the naive combination of simple QM and GR arguments. Importantly, by comparing Eq. (16) with Eq. (5), we can fix the multi-scale parameters α and ℓ * as in Eq. (19): we get a preferred value for α and, what is more, the length scale ℓ * of the model plays the role of the Planck length. This relation of ℓ * with ℓ Pl can be regarded as a confirmation that multi-fractional theories encode QG features in a highly non-trivial way. Similarly, identifying the multi-fractional fluctuation with the previously obtained semi-classical QG uncertainty, we also discover that the binomial measure should be isotropic in space and time, Eq. (20). In fact, comparing Eqs. (18) and (6), we fix α 0 and t * by Eq. (21), while comparing Eq. (16) with (7) and Eq. (18) with (8) we get, respectively, Eqs. (22) and (23). To summarize, the theory with fractional derivatives describes spacetimes with a microscopic stochastic structure [29]. The presentation label x̄ µ prescribes how integrals on stochastic spacetime variables X µ can be performed, and the presentation problem is similar to (not to say the same as) the Itô-Stratonovich dilemma in stochastic processes. Inspired by this, instead of defining as many physically inequivalent theories (but with the same anomalous scaling) as there are presentations and choosing one among the others, we take all presentations at the same time. The measures {q µ (x µ ) : x̄ µ ∈ ℝ D } do not correspond to a class of (in)finitely many theories T x̄ γ (labeled by x̄ µ ), all with the same anomalous scaling: they are one measure corresponding to one theory T γ with an intrinsic microscopic uncertainty, limited by the initial- and final-point presentations. The theory T q with q-derivatives is related to T γ=α by an approximation of the exterior multi-fractional derivative by the ordinary one [24] and can be used to explore the physics of T γ=α .
The multi-fractional theory T v is not an approximation of T γ , nor does it have any stochastic microstructure, and we cannot juxtapose such a structure on it arbitrarily.
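The parameter fixing described above reduces to matching powers of ℓ. Writing the fluctuation as δL α ∼ ℓ * 1−α ℓ α and the heuristic bounds as δL ∼ (ℓ Pl 2 ℓ) 1/3 and δL ∼ (ℓ Pl ℓ) 1/2 , equality at every scale forces α = 1/3 (respectively α = 1/2) with ℓ * = ℓ Pl in both cases. A numerical cross-check (our sketch; the function names and the SI value of ℓ Pl are ours):

```python
ell_Pl = 1.616e-35  # Planck length in meters

def dL_multifractional(ell, alpha, ell_star):
    # delta L_alpha ~ ell_star * (ell / ell_star)**alpha = ell_star**(1-alpha) * ell**alpha
    return ell_star ** (1.0 - alpha) * ell ** alpha

def dL_cube_root(ell):
    # heuristic QM+GR bound of the (ell_Pl**2 * ell)**(1/3) type
    return (ell_Pl ** 2 * ell) ** (1.0 / 3.0)

def dL_square_root(ell):
    # heuristic QM+GR bound of the (ell_Pl * ell)**(1/2) type
    return (ell_Pl * ell) ** 0.5

# With alpha = 1/3 (resp. 1/2) and ell_star = ell_Pl, the multi-fractional
# fluctuation reproduces the corresponding heuristic bound at every scale ell.
```

Since the match must hold for all ℓ, the exponent fixes α and the prefactor fixes ℓ * ; no further freedom is left.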
Stochastic view with random measure
If we could make q(x) nowhere differentiable, then we would be able to bypass the above limitations and extend the stochastic structure to all multi-fractional theories, not only to the one with fractional derivatives. To understand where the "stochasticity" could come from in classical multi-fractional spacetimes, it is useful to make a short digression and recall that the connection between a fractal and a stochastic structure in multi-scale spacetimes is not new. A proposal very similar to multi-fractional theories is Nottale's scale relativity [41][42][43], where lengths on a fractal spacetime are made of a deterministic differentiable part ℓ (the length on usual space) and a stochastic nowhere-differentiable part, Eq. (24). Here ℓ * = 1/ε is the inverse of the resolution at which one is probing the geometry and ζ is a wildly fluctuating stochastic variable whose average is fixed as in Eq. (25), depending on whether the distance is time- or space-like. Because both scale relativity and multi-fractional spacetimes rely on a fractal geometry, these scenarios give about the same length expression. However, the original fractal-spacetime formulation of multi-fractional theories [27] has been made much more solid thanks to a fundamental principle (slow IR dimensional flow) [26] that reproduces the measure dictated by fractal geometry and, as we will see now, fixes some of the free parameters of scale relativity. In particular, not only is the stochastic variable ζ of Nottale's "fractal" length L present in a more general multi-fractional length if we go beyond the approximation (1) of a binomial measure, but it is also fixed by the second flow-equation theorem, in contrast with the ad hoc variable ζ in scale relativity.
In fact, considering the second-order truncation of the full measure determined by the flow-equation theorem [26], we have Eq. (26a) (index µ omitted everywhere), where F ω (x) = F ω (λ ω x) is a complex modulation factor encoding a fundamentally discrete spacetime symmetry x → λ ω x in the far UV (λ ω is fixed). Requiring the measure to be real-valued, one has the form (26c) [24,26,27], where A n and B n are constant amplitudes and ℓ Pl ∼ ℓ ∞ ℓ * . Since we will need some details about the derivation of this expression, let us make a short detour (in one dimension, for simplicity). The most general fractional complex measure q(x) = x + Σ n f n (x) giving rise to a Hausdorff dimension slowly varying in the IR [26] is given by the sum over n of terms f n of the form (27), where ω n > 0 and ξ n , η n are constant. Assume, without loss of generality, that ω n = nω (as in fractal and critical systems, as well as in quantum gravity, as suggested by an analysis of the spacetime dimension [70]). Split |x/ℓ * | αn±inω = c ±n |x/ℓ * | αn |x/ℓ ∞ | ±inω , where ℓ ∞ is an arbitrary length and c ±n is a pure phase. Then, f n (x) = |x/ℓ * | αn (c n ξ n |x/ℓ ∞ | inω + c −n η n |x/ℓ ∞ | −inω ). We can reparametrize the system accordingly; if η n = ξ * n (real-valued f n ), then f n (x) = F n (x) reproduces Eq. (26c). The coordinate scaling ratio λ ω = exp(−2π/ω) of the discrete scale invariance is governed by the frequency ω, which can be interpreted as the imaginary part of the Hausdorff dimension of spacetime [70]. The log-oscillating structure is typical of iterative (also called deterministic) fractals [62,[71][72][73][74][75][76][77], complex and critical systems [78], while in the context of quantum gravity it is solely determined by the flow-equation theorem [26]. In multi-fractional theories, the modulation factor (26) is usually approximated by only two frequencies, the zero mode n = 0 (F 0 (x) = A 0 ) and the n = 1 mode.
This approximation, not followed in [70], captures the physical imprint of the log oscillations in several physical observables [24], but here we will retain the full structure (26). The logarithmic oscillations are blurred out when we coarse grain the measure (26a) to scales ≫ ℓ ∞ . This coarse graining amounts to defining y := ln |x/ℓ ∞ | and taking the average of any function f (y) over y [27,62], which yields constant averaged amplitudes. Thus, if we drop the zero mode and set A 0 = 0, the coarse-grained profile reproduces Nottale's fractal lengths (24) upon the identification (31). Correspondingly, the relations (25) agree with (31). If we insisted on having a negative average in the time direction as in (31) (a feature which we do not see as necessary, for the moment), then we would have to put direction labels µ on the amplitudes, A n → A µ n . The last piece of the puzzle is the differentiability of (26a). In general, q(x) is differentiable everywhere except at a discrete infinity of points. However, special choices of the n-dependence of A n and B n can render q(x) nowhere differentiable. To see this, we make yet another parametrization of the measure, Eq. (34), inspired by the typical n-behavior of the coefficients ξ n and η n = ξ * n of Eq. (27) found in complex and critical systems [79]; here ξ is real and n-independent, γ and u parametrize an exponential or power-law behavior, and ψ n is a real n-dependent phase. Writing also c n = exp(iβ n ), where β n := nω ln(ℓ ∞ /ℓ * ), and comparing with Eq. (29), we get A n = 2ξ e −γn n u cos(ψ n + β n ) and B n = −2ξ e −γn n u sin(ψ n + β n ) (Eq. (35)).
This expression allows us to make a prescription on the amplitudes A n and B n such that the measure (26a) is nowhere differentiable. In fact, in [79] it was found that functions of the form g(x) = Σ n ξ n x −s n are nowhere differentiable if the phases ψ n are random (more precisely, ergodic and mixing), which is the case provided ψ n varies fast enough with n. For instance, the phases ψ n = Ω, ψ n = Ωn and ψ n = Ω ln(Ωn), where Ω is a constant, are too "slow" in n and can produce at most a discrete infinity of singular points, while ψ n = Ωn ln(Ωn), ψ n = Ωn 2 , ψ n = Ω e n/Ω , or the solution of the recursive equation ψ n+1 = ψ n + an, where a is irrational, all give rise to Weierstrass-type functions, which are nowhere differentiable [79]. Choosing the measure amplitudes in this way, we obtain the desired result: a stochastic (nowhere-differentiable) spacetime geometry with an intrinsic distance-time uncertainty, for any multi-fractional theory with measure (26a).
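The flavor of such Weierstrass-type behavior can be conveyed by the classic example W(x) = Σ n a n cos(b n πx), nowhere differentiable for ab ≥ 1 (a stand-in chosen by us for illustration, not the measure (26a) itself): difference quotients do not converge to a derivative but keep growing as the step shrinks.

```python
import math

def weierstrass(x, a=0.5, b=7.0, n_terms=16):
    # Truncated classic Weierstrass series; for the full series a*b = 3.5 >= 1
    # implies nowhere differentiability (Hardy's condition).
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(n_terms))

def max_quotient(h, n_pts=200):
    # Largest difference quotient over a sample of base points in [0, 1)
    return max(abs(weierstrass(x0 + h) - weierstrass(x0)) / h
               for x0 in (i / n_pts for i in range(n_pts)))

# Instead of settling to a derivative, the quotients grow roughly like
# h**(alpha - 1) with alpha = -log(a)/log(b) < 1 as h -> 0.
q_coarse = max_quotient(1e-2)
q_fine = max_quotient(1e-6)
```

Refining the step by four orders of magnitude makes the quotients much larger rather than convergent, which is the hallmark of the irregularity discussed above.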
Summary
To summarize, a stochastic spacetime can be realized in multi-fractional theories by two inequivalent mechanisms: • Multi-fractional derivatives (section 3.2.1). In this case, realizing the stochastic integrals argument of [29], T γ enjoys the stochastic view independently of the choice of the measure q(x) (with or without zero mode, with regular or random amplitudes). Nottale's scale relativity is not directly related to T γ , but it could be a relative of T q , which is an approximation of T γ .
• Random measure (section 3.2.2). All multi-fractional theories T v , T q and T γ enjoy the stochastic view because they all share the same nowhere-differentiable measure q(x) with random amplitudes. Nottale's scale relativity corresponds to the case where the log oscillations average to zero (coarse-grained modulation function F̄ ω in the measure).
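That the log oscillations average to the zero mode is immediate to verify. In the two-frequency truncation, F(y) = A 0 + A 1 cos(ωy) + B 1 sin(ωy) with y = ln |x/ℓ ∞ |, and the coarse-graining average over y returns A 0 . A numerical sketch with arbitrary illustrative amplitudes (ours):

```python
import math

A0, A1, B1, omega = 1.0, 0.7, -0.3, 2.0 * math.pi  # arbitrary illustrative values

def F(y):
    # Log-oscillating modulation factor in the variable y = ln|x / ell_inf|
    return A0 + A1 * math.cos(omega * y) + B1 * math.sin(omega * y)

def log_average(f, y_max, n=100_000):
    # <f> ~ (1/2Y) * integral of f over [-Y, Y], midpoint rule on a grid
    h = 2.0 * y_max / n
    return h * sum(f(-y_max + (k + 0.5) * h) for k in range(n)) / (2.0 * y_max)

avg = log_average(F, y_max=50.0)  # oscillations wash out, leaving the zero mode A0
```

Hence dropping the zero mode (A 0 = 0) makes the coarse-grained modulation vanish, which is the case matched to scale relativity in the text.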
Which option is better justified remains to be decided. The first one is valid only for the theory T γ with multi-fractional derivatives and does not require any Ansatz for the measure amplitudes. The second one is valid for all multi-fractional theories, but it requires the Ansatz (35) with a fast-varying ψ n such as the examples collected in (36). On the positive side, the choice (35), stemming from (34), was empirically found to be very general in complex and critical systems [78,79]. But, to be fair, what holds in those branches of physics may not be valid in quantum gravity, where the nowhere-differentiable function under scrutiny has a totally different role with respect to its counterparts in complex and critical systems. We do not know how to obtain (34) and (36) from first principles or from observations in multi-fractional theories, although cosmological constraints on A n and B n are under study [70]. Also, the mechanism we detailed for generating stochastic fluctuations of spacetime measurements might have consequences for the cosmological constant problem [80]. Either way, if spacetime is stochastic, then the same measurement uncertainties calculated via heuristic arguments combining quantum mechanics and general relativity arise in multi-fractional theories. In this precise sense, classical multi-fractional theories encode quantum-gravity effects.
Related proposals
A relation between spacetime fuzziness and a fractal structure has long been suspected and has been investigated from different perspectives over the years. By itself, this connection is not technically difficult to establish. For instance [81], it is sufficient to consider a metric formulation and deform the metric with corrections that depend on the geodesic distance σ(x, x ′ ) between two points, but such that its zero-point length is non-vanishing, lim x ′ →x σ(x, x ′ ) ∝ ℓ Pl [82][83][84]. Hence the four-volume is deformed. In this set-up, which is similar to the one in rainbow gravity, one can choose the corrections so as to obtain simultaneously a varying Hausdorff dimension and a spacetime uncertainty [81]. The real challenge, however, is to embed such a connection in a top-down theory and to explain its physical origin.
The first datum we would like to mention does not link fuzziness and multi-scale spacetimes explicitly, but it provides indirect support for the above view that a classical stochastic spacetime efficiently reproduces quantum-gravity effects. Coordinates defined on a nowhere-differentiable geometry obey an uncertainty principle virtually identical to Heisenberg's [85]. In other words, the nowhere-differentiable structure typical of multi-fractional theories (with multi-fractional derivatives or a random measure) naturally reproduces quantum-mechanical effects such as those considered in section 2. If we recall that nowhere differentiability is typical of sets with non-integer dimension [40,[59][60][61][62], then the relation between measurement uncertainty and dimensional flow becomes apparent.
On the other hand, intrinsic spacetime fuzziness is the starting point of non-commutative spacetimes [86]. There, the idea is to get QG = QM + GR from a fuzzy spacetime, rather than the latter from the former. Getting quantum gravity as a byproduct of a non-trivial integro-differential structure is also the path followed by multi-fractional theories, which made us wonder about possible connections between non-commutative spacetimes and the multi-fractional paradigm [87,88]. Despite a number of similarities in dimensional flow, there is no quantitative connection between these two frameworks, mainly because in the former coordinates do not factorize in effective measures. However, in the present paper we have finally found the reason behind those similarities: it is because dimensional flow and distance-time uncertainties have the same origin in both theories. In the case of fuzziness coming from a random measure (section 3.2.2), this common origin is the quasi-universality of dimensional flow, established by the first flow-equation theorem for non-factorizable geometries and by the second theorem for factorizable ones [24,26].
Fuzziness understood as a spacetime foam was related to a multi-scale quantum-gravity structure already in [89,90]. As a model of spacetime "foam," Crane and Smolin took a scale-invariant distribution of Planckian black holes. If one then considers a perturbative quantization of gravity, this is sufficient to deform the dimension dependence of the graviton propagator and to improve the renormalizability of the theory. Two differences with respect to our approach are the implementation of general-relativistic features by hand (in this case, microscopic black holes) and the derivation from there of an anomalous (or multi-scale, or fractal) spacetime structure. Quantum mechanics and general relativity were joined there (in the form of a black-hole foam) ad hoc, which is essentially the same philosophy as the estimates reviewed in section 2. However, a notable upgrade of [89,90] with respect to the arguments of [33,34] is the recognition that the resulting spacetime uncertainty is responsible for introducing a scale hierarchy, making spacetime multi-scale or fractal. Here we proceeded the other way around, taking a spacetime which is multi-scale by default and deriving a stochastic structure (in turn determined by the integral or differential calculus realizing multi-scaling) from that, using a very minimal list of ingredients: a unique parametrization of the spacetime measure from the flow-equation theorem and a randomization of the measure amplitudes, as suggested by results from critical and complex systems.
In [91], a phenomenological dispersion relation was proposed to recover the running profile of the spectral dimension found numerically in causal dynamical triangulations. The same dispersion relation was then employed to write down the expression of the geodesic distance between two points, which happens to depend on the resolution of the probe. Thus, in causal dynamical triangulations one can get fuzziness from dimensional flow. In that paper, this connection is, to our understanding, not completely explicit and there is no direct reference to fuzziness, but there is a more pressing issue one should be careful about. Inferring a modified dispersion relation from a given dimensional flow is a risky procedure that likely runs into the twin problem [69] well known in transport theory [66], stating that very different diffusion equations can give rise to the same asymptotic form of the return probability. In quantum gravity, this means that one can get the same dimensional flow (up to irrelevant differences in transient regimes) from very different diffusion processes, Laplacians, and diffusion operators [67]. Thus, the result of [91] relies on an intermediate step (the guess of a deformed dispersion relation) whose physical grounds are not clear to us. Nevertheless, it may provide circumstantial evidence of the relation between dimensional flow and fuzziness in causal dynamical triangulations.
Of all these examples, multi-fractional theories, non-commutative spacetimes and (with the above reservations) causal dynamical triangulations are top-down examples; the others rely on isolated theoretical observations or on the heuristics of quantum gravity.
Avoiding observational constraints on multi-fractional theories
We feel confident that the observations we made here might represent an important step towards understanding why the running of dimensions at short scales is a universal property of QG approaches. In particular, we have argued that dimensional flow is linked to distance-time fuzziness, whose form can be inferred from arguments combining quantum mechanics and general relativity, without knowledge of the detailed features of one or another QG model. In this way, we have been able to pick out two preferred values for the fractional exponent of the measure α. If we take seriously the parameter fixing suggested by the QM+GR arguments and their correspondence with multi-scale spacetimes, then we can refine previous bounds on ℓ * , t * and the associated energy scale E * . The α = 1/2 case has already been considered in the literature, while the other is reported in table 1 for the most effective experiments or observations by which the theory with q-derivatives has been tested. While bounds from the cosmic microwave background (CMB) black-body spectrum and from the Lamb shift in quantum electrodynamics change only by two orders of magnitude from one case to the other, the observation of black-hole gravitational waves is more sensitive to the value of the fractional exponents. For α = 1/2 = α 0 , the energy E * is not much smaller than the grand-unification scale [92], while for α = 1/3 = α 0 it is E * > 10^4 TeV, 1000 times larger than the LHC run-2 center-of-mass energy. The bounds from gamma-ray bursts (GRB) have been determined much less rigorously [92]. As already known, they exclude the α = 1/2 = α 0 case because E * > 10^13 m Pl . For α = 1/3 = α 0 , this bound is less severe but still above the Planck mass, E * > 10^4 m Pl .
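For orientation, the scales t * , ℓ * and E * quoted in table 1 are related by the naive natural-unit identifications E * ∼ ħ/t * ∼ ħc/ℓ * (a back-of-the-envelope reading, ours; the rigorous bounds come from [24,92,93,94]):

```python
HBAR_GEV_S = 6.582e-25    # hbar in GeV * s
HBAR_C_GEV_M = 1.973e-16  # hbar * c in GeV * m

def energy_from_time(t_star):
    # E* ~ hbar / t*
    return HBAR_GEV_S / t_star

def energy_from_length(ell_star):
    # E* ~ hbar * c / ell*
    return HBAR_C_GEV_M / ell_star

# Example: the Planck length maps to the Planck energy ~1.2e19 GeV,
# consistent with the identification ell* = ell_Pl discussed above.
E_planck = energy_from_length(1.616e-35)
```

The same conversion shows why tightening a time-scale bound by one order of magnitude strengthens the energy-scale bound by one order of magnitude as well.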
A detailed calculation of the effect of multi-fractional geometries in T q and T γ on the propagation of high-energy photons in a cosmological background will be needed to check whether these estimates are robust, although there seems to be little hope at least for T q [92]. Since T q can also be regarded as an approximation of T γ [24], observational bounds on T γ could be conjectured to be very similar to those on T q . We know much less about T γ and we cannot rule it out with our present theoretical understanding of it.

Table 1: Bounds on the hierarchy of the multi-fractional theory T q with q-derivatives for α 0 = 1/3 = α and α 0 = 1/2 = α. Bounds from the Lamb shift and from gravitational waves refer to the most conservative estimates with generic coefficients in the correction terms (see [24], especially table 8, for details). "Pseudo" indicates bounds obtainable only in the stochastic view (which is an approximate step in T q ) and only in the case where photon-graviton propagation speeds differ by a maximal random fluctuation. There is no useful bound on the measurement and variation of the fine-structure constant α qed [94]. For α = 1/2, there are also bounds on the amplitudes A 1 and B 1 in (26c) [93]. Bounds without references have been obtained in this paper.

Experiment                               t * (s)     ℓ * (m)     E * (GeV)   Avoided in stochastic view
CMB black-body spectrum [93]             < 10^-26    < 10^-18    > 10        only for A 0 = 0
Lamb shift [24,94]                       < 10^-26    < 10^-18    > 10        only for A 0 = 0
Gravitational waves (pseudo) [24,92]     < 10^-42    < 10^-33    > 10^17     also for A 0 ≠ 0
GRBs [92]                                < 10^-57    < 10^-48    > 10^32     only for A 0 = 0
Vacuum Cherenkov radiation [24]          < 10^-79    < 10^-71    > 10^55     only for A 0 = 0

Five things may happen that could save the theory T γ : (i) that the GRB and vacuum Cherenkov radiation bounds are somehow flawed under a closer scrutiny; (ii) that, despite their similarities, T q and T γ are essentially different in some key physical consequences, as also technical reasons seem to indicate [24]; (iii) that the fractional derivatives in T γ must or can be taken with an order γ smaller than the fractional exponent α in the measure; (iv) that the heuristic arguments of [33,34] do not fix the fractional exponents as claimed in this paper; or (v) that these arguments do fix the fractional exponents and the stochastic view holds with no zero mode in the measure (A 0 = 0). Case (v) would also save T q . The most likely possibilities are, in our opinion, (ii) and (v). Case (ii) is the most attractive to us, but only explicit calculations will be able to check it. Regarding (v), in the stochastic view the averaging to zero of stochastic fluctuations in the propagation of particles can easily avoid all constraints (including those from gravitational waves, GRBs and vacuum Cherenkov radiation) coming from modified dispersion relations, which are the strongest to date. However, in this case it would be difficult to falsify T q and T γ . Letting A 0 ≠ 0 would avoid the gravitational-wave bound but not those from GRBs and Cherenkov radiation.
Case (iv) may still be possible, and one would consider it only in the deterministic view, which would amount to dissociating the heuristic quantum-gravity arguments from multi-fractional theories.
Structure of multi-scale spacetimes
We make some remarks on the structure of spacetime uncovered here for the theory T γ and its approximation T q . Equations (19)-(21) on the one hand and (22)-(23) on the other describe two different geometries. In this concluding section, we comment on both. The only feature they have in common is scaling isotropy, for which the spectral and Hausdorff dimensions follow the same UV running, as noted in the introduction. It is intriguing that we found such a property as a byproduct of our analysis.
• α = 1/3 = α 0 . A characteristic worth special mention for this case is the identification of the scales t * and ℓ * with the Planck time t Pl and length ℓ Pl . The main consequence of this finding is that the binomial measure with log oscillations is not just an approximation of a more complicated multi-scale polynomial measure: since there is no meaning to scales below ℓ * = ℓ Pl , there is no other scale than ℓ * in the hierarchy of the theory. Mild theoretical support for these features is what we might call a "scale equipartition." In deriving Eq. (26a), we introduced the arbitrary scale ratio (28), corresponding to a non-zero constant phase β n in the amplitudes (35). However, one might as well impose equipartition of the same fundamental length in power-law and oscillatory terms, ℓ ∞ = ℓ * . Taking on board results in non-commutative spacetimes indicating that ℓ ∞ = ℓ Pl [87,88], we get ℓ ∞ = ℓ * = ℓ Pl , in agreement with Eq. (19). The case of the time direction is similar. Note that this spacetime is not normed in the UV of the theory T γ , since α < 1/2 [24,40]. This is not a problem in the stochastic view, where intervals lose meaning anyway at scales ∼ ℓ * . The transition from a normed deterministic space to a fuzzy one is gradual and nothing special happens exactly at the scale ℓ * . To summarize, we end up with a binomial isotropic-scaling geometry with α = 1/3 = α 0 and one fundamental absolute scale (38), which marks a smooth transition from a normed spacetime to a non-normed fuzzy spacetime. Interestingly, having α 0 = 1 (ordinary time direction) but α = 1/3 corresponds to a spectral dimension d S ≃ 1 + 3 × 1/3 = 2 in the UV, the same asymptotic configuration as Hořava-Lifshitz gravity [95]. This is not a coincidence, since the critical scaling of coordinates in Hořava-Lifshitz gravity is easily reproduced by geometric coordinates [24,28].
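The UV value of the spectral dimension quoted above can be made explicit. Assuming the standard multi-fractional counting in which each spacetime direction contributes its fractional exponent (a heuristic restatement of the number in the text, in our schematic notation with one time and three spatial directions, not a derivation from the theory's diffusion equation):

d_S^UV ≃ α_0 + 3α = 1 + 3 × (1/3) = 2 ,

which is why the configuration with an ordinary time direction (α_0 = 1) and α = 1/3 reproduces the two-dimensional UV limit familiar from Hořava-Lifshitz gravity.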
• α = 1/2 = α 0 . This is nothing but the special value 1/2 of the multi-fractional literature, the minimum exponent at which a norm exists in T γ [23,24,40]. The main point of departure from the α = 1/3 = α 0 case is that here a norm does exist in the UV, but it is not unique. This is the so-called Manhattan or taxicab geometry, where two points can be connected by many different geodesic paths with the same minimum length [40]. Therefore, in this case spacetime is normed at all scales but geodesics lose uniqueness in the deep UV. 20 On top of that, in this second case ℓ * (and t * ) depends on the scale s (or t) at which spacetime events are probed. At macroscopic scales s ≫ ℓ Pl (t ≫ t Pl ), ℓ * → 0 (t * → 0) and there is no dimensional flow (δL α → 0). On the other hand, lowering s (t), the scale ℓ * (t * ) increases up to ℓ Pl (t Pl ) when we are measuring geometry or testing spacetime events at Planckian distances. This dependence of ℓ * (or t * ) on the observation scale reveals a crucial and often advertised feature of multi-fractional theories, namely, the fact that in a multi-scale geometry measurements depend on the scale at which the experiment is performed. Thus, Eq. (22) agrees with the perspective according to which multi-fractional models are theories in which the result of measurements is affected by the scale s (t) of the observer. In this sense, we can talk about a relative multi-scale hierarchy among observers, although this does not exclude the existence of an absolute hierarchy as in the previous case. Imposing the phenomenological limitations on measurements of section 2 to hold only at the Planck scale (s = ℓ Pl , t = t Pl ), we recover an absolute hierarchy and the identification (38).
All this combines with the microscopic discrete scale invariance of the measure (26) [24,27] to give non-smooth geometries that can describe the deepest UV recesses of quantum gravity and that will deserve further study.
Conclusions
We established a relation between dimensional flow and spacetime fuzziness by working in the framework of multi-fractional theories. The main reason why we focused on these models is that they are easy to manipulate. However, this correspondence does not hold exclusively for multi-fractional geometries, as argued in [44]. The approach of multifractional spacetimes just provides a first exploratory study useful to recognize a striking feature that may be much more general and characteristic of other QG theories. We are aware that testing our conjecture may be harder in other formalisms of quantum gravity, but we feel confident that the encouraging results reported here will energize efforts in that direction. All the main elements of our arguments are already in place in some of the major proposals of the literature. We commented on non-commutative spacetimes and causal dynamical triangulations in section 4.2, but there is more. In particular, both asymptotically-safe quantum gravity and the discrete-geometry, mutually related frameworks of loop quantum gravity, spin foams and group field theory have dimensional flow [11-13, 48, 49, 96] and implement fuzziness by the presence of minimal lengths, areas or resolutions [5,97,98]. However, although a relation between anomalous dimension and fuzzy features certainly seems to exist in these cases, so far it has been at best indirect or purely technical. Revisiting these theories in search of a physical connection similar to that found here may help to clarify some of their formal aspects and even give new tools by which to extract testable phenomenology.
|
v3-fos-license
|
2019-05-16T08:42:46.002Z
|
2016-12-01T00:00:00.000
|
155094784
|
{
"extfieldsofstudy": [
"Sociology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scielo.br/pdf/rbsmi/v16n4/1519-3829-rbsmi-16-04-0379.pdf",
"pdf_hash": "057ed6a271d5525fdb6aa9ef8ebbdc2ff23d9e03",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:965",
"s2fieldsofstudy": [],
"sha1": "057ed6a271d5525fdb6aa9ef8ebbdc2ff23d9e03",
"year": 2016
}
|
pes2o/s2orc
|
Translation and cross-cultural adaptation of "Hoja Verde de Salud Medioambiental Reproductiva" in Brazil
Objectives: to perform a cross-cultural adaptation of the "Hoja Verde de Salud Medioambiental Reproductiva", originally conceived in Spanish, into Brazilian Portuguese. Methods: the translation and cross-cultural adaptation process was carried out in five stages: translation, synthesis of the versions, back-translation, the elaboration of a consensus version after review by the committee of specialists, and application of the pre-test to obtain the final version. The interviews were carried out at two reference services in maternal and child health, both located in Recife, Pernambuco, which provide medical care for high-risk pregnancies to a clientele diversified with respect to the regions of the State. Results: during the pre-test there were difficulties in understanding some words, and in giving precise dates for medication use and radiation tests, as well as weeks of pregnancy and the duration of breastfeeding in weeks. The committee of specialists made some alterations to the questionnaire, considering the suggestions made by the interviewees. Conclusions: after the adaptation process, an instrument is available for detecting environmental risks that can be incorporated into maternal and child health routines, contributing to the detection and prevention of diseases and health problems and promoting the health of Brazilian children.
Introduction
In recent decades the relation between health and the environment has come into evidence, since environmental damage threatens life-support systems, including those of mankind; despite the scientific, material and economic advances obtained recently, the consequences of this deterioration have had several impacts on human health. 1 As a shared concern of various countries, this theme has been the subject of international agreements since the beginning of the 20th century. 2 The periods between peri-conception and birth, as well as childhood, represent stages of particular vulnerability to environmental agents. 5-7 The embryo, the fetus and children are particularly vulnerable to environmental risk factors. 8 The earlier a person is exposed to chemicals, the more likely he or she is to develop health problems. 9 The degree and type of interaction with the environment vary according to age, socio-cultural patterns and place of residence, differing between urban and rural areas. 10 Environmental risk factors act together, and economic and social conditions enhance their adverse effects, especially in situations of conflict, poverty and malnutrition. Such factors determine health, quality of life, and children's growth and development, with impacts on the health of adolescents and future adults. 8 The vulnerability of children often depends on the developmental stage they are in, because there are critical periods in structural and functional development, both in pre- and postnatal life, especially when a given structure or function is more sensitive to alterations. In addition, children have higher intestinal permeability and an immature detoxification system. 11 According to the World Health Organization (WHO), recent knowledge about the special susceptibility of children to environmental risks should be used to structure actions that ensure their growth and the development of good health.
8 Besides the fact that children's environmental health is a recent theme, we are unaware of instruments prepared for, or adapted from other cultures to, the Brazilian context with the specific objective of early detection of environmental risk factors in pregnant women, even in the peri-conceptional period, contributing to the adoption of healthy habits and environments for children.
Among the advantages of cross-cultural adaptation of instruments are the lower cost, compared with the preparation of new tests, and the ease of comparison between groups. 12 Although there are several publications on this theme, there is no consensus on how to carry it out, making the operational synthesis a mosaic of procedures from various sources. 13 With the aim of promoting learning and environmental health care, involving actors from different sectors and fields of work, the Red Latinoamericana de Salud Ambiental Infantil (SAMBI), along with professionals from Mount Sinai Hospital and the staff of a unit specialized in Environmental Pediatrics in Murcia, Spain, created a project called "Salud Ambiental para el Embarazo, Lactancia y Crianza em Iberoamérica" (SAELCI).
SAELCI is a multicentric project. Its proposal is to promote child health care from the peri-conceptional period through childhood by using the "Hoja Verde de Salud Medioambiental Reproductiva" as one more instrument in prenatal and/or puerperium consultations, characterizing parental exposures in Latin America, Portugal and Spain. In Brazil, the study is being conducted only in the city of Recife.
The Instrument
The "Hoja Verde de Salud Medioambiental Reproductiva" was developed by a team from the Pediatric Environmental Health Specialty Unit (PEHSU) in Murcia, Spain, from the "Green Page" of WHO, an instrument which enables to identify and address the exposures of environmental risk for children. 8The new instrument is being used in Spain for a few years now and was adapted to be used during the period of pre-pregnancy, pregnancy and breastfeeding by professionals who dedicated themselves in the pediatrics environment collaborating with prevention, detection and treatment for diseases and severities.
A semi-structured, easy-to-apply questionnaire was elaborated, which assists in the detection, prevention, reduction and/or elimination of environmental risk factors present from the peri-conceptional period until childhood, seeking to contribute to the creation of healthier environments for children. The survey provides input for the diagnosis of exposure to environmental risk factors; it does not rely on scores, and only one item aims to quantify alcohol consumption, by both parents in the peri-conceptional period and by the mother during pregnancy.
This questionnaire can be applied by any health professional on the team involved in the care of children and of pregnant and/or postpartum women, in favor of an integral approach to pregnancy and child healthcare. 14 The father is invited to take part in the interview as a fundamental actor in the mother's and baby's health and to detect exposure to risk factors in the peri-conceptional period. The application of the instrument represents an opportunity to guide the parents.
The questionnaire is composed of questions divided into nine blocks: a) identification data of the child's parents (age and schooling), the date of the last menstruation, the family's monthly income, among others; b) reproductive and obstetric history (including the occurrence of abortions and/or malformations, breastfeeding period, fertility treatment performed, contraceptive use); c) exposure to ionizing radiation; d) exposure to medication, including homeopathy and/or the use of vitamin supplements; e) occupational and leisure exposures; f) consumption of tobacco and other drugs; g) exposure to alcohol; h) indoor and outdoor environmental exposures; and i) the parents' perception of environmental risk in the household and/or community.
The objective of this study is to perform the cross-cultural adaptation of the "Hoja Verde de Salud Medioambiental Reproductiva" from Spanish into Brazilian Portuguese, following the required methodological steps.
The process of translation and cross-cultural adaptation
15-19 Initially, two independent translations were made from Spanish into Portuguese. The first translation was performed by two health professionals with experience in public and environmental health, who were aware of the purpose of this research and fluent in Spanish. The second was carried out by a sworn translator who had no previous contact with the instrument, was not aware of the purposes of this research and had no connection with the health area. 16 All the translators involved in this first stage had Portuguese, the target language, as their mother tongue. 15,16 In both translations, the need for semantic, and not merely literal, equivalence of the items was emphasized. The two translated versions were compared by the researchers and a single version was then elaborated.
Then, two back-translations of the questionnaire were obtained, consisting of translation back into the original language, which makes it possible to identify incorrect interpretations and failures of cultural adaptation. 20 This was performed by two teachers of Spanish with no knowledge of the purpose of this research, no prior contact with the original instrument and no involvement in the health area.
A committee of specialists was then formed, composed of two medical hygiene physicians, one of them fluent in Spanish, two gynecologists and a specialist in fetal medicine, who analyzed the translated versions and the back-translations, seeking semantic equivalence (the translated words should retain the same meaning), idiomatic equivalence (certain expressions are difficult to translate into another language, and new expressions may be elaborated when necessary), experiential equivalence (the questions should capture daily-life experiences, which may be expressed differently in different cultural contexts) and conceptual equivalence (the concept behind the words should be preserved, seeking words and phrases that carry the same representation across cultures). 16 A consensus (pre-final) version was elaborated and forwarded to two authors specialized in environmental health who had participated in the elaboration of the original version along with the professionals in Murcia, Spain. After doubts regarding some issues were clarified, agreement was reached by both. The questionnaire is accompanied by a manual.
Application of the pre-test
After a favorable opinion from the Research Ethics Committee of the Hospital Universitário Oswaldo Cruz (CEP/HUOC) (CAAE number: 50091115.9.3002.5191), with informed consent forms signed and, for participants under 18 years of age, assent together with the consent of parents or guardians, in accordance with Resolution 466/2012, the pre-test was applied to 30 participants from two services, both located in the city of Recife, Pernambuco. These services provide medical care for high-risk pregnancies and receive a clientele that is diversified with respect to the regions of the State: the Centro Integrado Professor Amaury de Medeiros (CISAM), of the Universidade de Pernambuco, and the Instituto de Medicina Integral Prof. Fernando Figueira (IMIP). In each health unit, 15 women were interviewed in October 2015, totaling 25 pregnant and 5 postpartum women. Pregnant women at any gestational age and postpartum women, in the immediate or late puerperium, were included in the study; minors participated with the authorization of parents or guardians. The exclusion criterion was lacking the mental condition to answer the questions. The interviews were conducted by the first researcher and by five medical students from the Universidade de Pernambuco and the Faculdade Pernambucana de Saúde, after training for data collection.
During the pre-test, the participants identified expressions that were difficult to understand. For these questions, suggestions were requested for replacing the expressions or including explanatory terms that would give participants a better understanding of what was being asked.
Results
In the pre-test phase, the interviews lasted between 20 and 25 minutes. The group comprised residents of 10 towns in Pernambuco, mainly Recife (33.3%), Olinda (30.0%) and Jaboatão dos Guararapes (13.3%); only one pregnant woman lived in a rural area. Ages ranged from 15 to 39 years, with a mean of 20.5 years. Schooling levels varied from no schooling to undergraduate studies, although half of the women had graduated from high school. Only 23.3% reported white skin color, while 60.0% identified as mixed color. The age of the babies' fathers ranged from 17 to 54 years, and 50% of them had graduated from high school. For 53.3%, the monthly family income was up to two minimum wages (the minimum wage at the time of the interviews was R$ 788.00, equivalent to US$ 203.15). Gestational age ranged from 17 to 39 weeks.
There were difficulties in understanding certain expressions. In block B (obstetric history), the term hormonal contraceptive use, and in block C (ionizing radiation), the term ionizing radiation were not understood by all participants. In addition, the pregnant and postpartum women could not give precise dates for medication use and radiation exams, reporting that it was quite difficult to specify start and end dates, or for weeks of gestation and the duration of breastfeeding in weeks. On these occasions, the prenatal card was requested.
After another meeting of the committee of specialists, taking the participants' suggestions into consideration, small alterations were made to the questionnaire. For example, the words "pills, injection" were added in parentheses to facilitate the understanding of hormonal contraceptive use, as was the colloquial expression "X-Ray" to clarify the term ionizing radiation. In block B, the breastfeeding period came to be recorded in months rather than weeks. The start and end dates of medication use were replaced by periods: one month before gestation, the trimesters of pregnancy and the breastfeeding period. In block E, the words "aeromodelismo" (aeromodeling) and "maquetación" (making of models, design) were withdrawn from the list of activities because they do not relate to usual Brazilian activities. The main alterations can be seen in Table 1.
During the interviews, when the baby's father was not present, the mother had some difficulty answering about the father's tobacco and alcohol consumption, particularly for specific questions relating to the period of spermatogenesis, such as: "How much did you smoke before the pregnancy (how many cigarettes a day)?", "How old were you when you started smoking?", "How much do you smoke now (how many cigarettes a day)?", and alcohol intake (in grams a day) in the two months prior to the pregnancy. The difficulties were due to some women's fear of not answering the question precisely, the short duration of their relationship, or even the father's absence during the pregnancy. In these cases, the women were encouraged to relate the questions to celebration dates and festive periods, and to anchor the peri-conceptional period on the date of the last menstruation (DUM) or on the first ultrasound, which facilitated the answer.
The following questions were also added. In block C, where the existing question referred only to the baby's mother: "Did the baby's father have exams involving exposure to ionizing radiation ("X-Ray") up to two months before the date of the last menstruation (DUM)?". In block E: "Do you or someone who lives with you work in agriculture?"; "If yes, do you store pesticides at home? Where?"; "Do you reuse pesticide packaging to store food or water?"; "Do you dye and/or straighten your own or anyone else's hair?". In block H: "Did you recently renovate or paint your house?" and "Did you receive a visit at home from an environmental health agent for mosquito control? If yes, where? At the house, the water well or on the street?". In addition, a new block of questions (block J) was elaborated with data on the conceptus, covering gestational age and birth weight, head circumference, sex, the mother's comorbidities during pregnancy, and so on. The authors of the original version were consulted and, as mentioned before, agreed with the additions, since these issues are closely related to the aim of the questionnaire, the environmental risk factors involved are quite present in the Brazilian context, and the country was facing an unusual scenario at the time: the identification of the Zika virus and its probable relation to microcephaly, with the largest number of cases registered in the city of Recife.
The main environmental exposures are described in Table 2, where a high frequency of alcohol consumption, exposure to chemical substances and one case of ionizing radiation exposure can be observed.
Discussion
Each stage of the cross-cultural adaptation of the instrument was carefully implemented, and words were included for a better understanding by the target public, along with issues related to the Brazilian reality. CISAM and IMIP are reference services for high-risk pregnancy care, attending a quite diversified demand from several towns in Pernambuco State, which contributed to the heterogeneity of the participants, for example in relation to schooling, monthly family income, gestational age, reproductive and obstetric history, and environmental exposures.
The cross-cultural adaptation of an instrument requires a thorough process. Translation is the first step in trying to obtain concepts, words and expressions that are culturally, psychologically and linguistically equivalent in a second language and culture. 12,21 The use of two translations was important because it allowed comparison and discussion in the elaboration of the synthesized version, making conceptual translation easier and helping to eliminate errors and ambiguous interpretations. 21,22 Following other authors' recommendations, one of the translations was performed by a professional with no knowledge of the purpose of this research and no link to the health sector; being a "naive" translator, s/he would be more apt than the first translators to identify meanings different from the original ones. 15,16 The use of two translations and two back-translations is regarded as the minimum necessary to detect difficulties, minimizing possible errors and promoting versions with semantic, idiomatic, experiential (or cultural) and conceptual equivalence. Following recommendations, the two back-translators were not aware of the concepts explored or the purpose of this study and had no clinical "connection", which increases the probability of highlighting discrepancies. 16 Unlike in other studies, in which characteristics of the original instrument hindered adaptation into Portuguese, the translation of work activities and of different possibilities of environmental exposure, for example, was elaborated without any difficulty for the Brazilian experience.
23 There was no need to replace words, only to remove the words "aeromodelismo" and "maquetación", as they do not represent work or leisure activities usually present in the Brazilian context; the other options were retained. At all times, there was a concern to use language that was easier to understand, which would make surveys using the instrument easier to carry out. One of the alterations concerned the period of medication use, which was modified to facilitate the identification of the baby's exposure by gestational period, since the effects on the fetus depend, among other aspects, on the timing of the exposure during pregnancy. In addition, the range of schooling levels in the interviewed population probably makes the questionnaire usable for pregnant and postpartum women in Brazil at any schooling level. 21 The application of the "Green Page of Environmental Reproductive Health" questionnaire at reference services for high-risk pregnancy can be considered a limiting factor, since the population that seeks these health units does not represent the actual situation of the exposed population in general. However, a reference service provides greater diversity and frequency of risk situations, which allows a better assessment of how well the data collection instrument is understood, for the purposes of its adaptation, which is the goal of this study.
The alterations made to the original instrument took into consideration life habits and Brazilian environmental exposures closely related to the purpose of this research. Screening instruments in environmental health for this period of life had never been adapted to the Brazilian context, representing a singular opportunity for greater insertion of the theme and its importance, in addition to the advantage, mentioned by some authors, of allowing comparison between distinct population groups. 12 A recent study on head circumference at birth and exposure to tobacco, alcohol and illegal drugs in early pregnancy illustrates well the use and potential of the "Green Page of Environmental Reproductive Health" for pregnant women. In that study, alcohol consumption, even in low doses, and exposure to ionizing radiation were related to lower head circumference in the babies. Around 13.0% of the pregnant women reported illegal drug consumption. According to the authors, these findings reinforce the need for counseling and guidance so that parents adopt more preventive measures for the fetus and newborn, which could be supported by applying the questionnaire. 24 In the present study, 17 interviewees (56.7%) consumed alcohol during the peri-conceptual period and 4 (13.3%) during pregnancy; 3 (10.0%) reported tobacco use and 3 reported exposure to illicit drugs, as well as passive exposure to tobacco and the use of pesticides, among other important exposures (Table 2).
With the stages recommended in the literature completed, the "Green Page of Environmental Reproductive Health" can now be applied on a larger scale, providing input on the situation of exposure to environmental risk factors in our cultural context, as well as serving as an instrument to evaluate possible adverse effects on the conceptus. To the extent that this instrument identifies risk factors, it will always be subject to change and improvement, keeping pace with our reality of exposure. An instrument for detecting environmental risks is thus available and may be incorporated into the actions of the maternal and child health routine, contributing to the detection and prevention of diseases and health problems and to the promotion of the health of Brazilian children. The instrument will also allow environmental exposures to be compared between different socio-cultural contexts, such as the countries participating in the SAELCI project.
Table 1
Issues of the original version of the translations, the consensus version (including alterations in the back-translations) and the final version of the Green Page of Environmental Reproductive Health.
Table 2
Exposure of pregnant and postpartum women to environmental risk factors detected through the "Green Page of Environmental Reproductive Health". Recife, 2016.
|
v3-fos-license
|
2021-10-28T15:08:18.540Z
|
2021-10-26T00:00:00.000
|
240017202
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.researchsquare.com/article/rs-963697/v1.pdf?c=1636093754000",
"pdf_hash": "42fc90d8877bbeb9fa6972fa9bfee8f446002757",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:966",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "88436c7979cbe7ffaf97c93bbb08062f6c84fc13",
"year": 2021
}
|
pes2o/s2orc
|
High Prevalence of Hypoglycemia in People Above the Age of 75 without Diabetes Mellitus
Background: Hypoglycemia, especially at old age, can lead to several major problems, such as falls and cognitive deficits. The aim of our study was to detect hypoglycemia in older persons with and without diabetes mellitus type 2 (T2DM). Methods: The frequency and duration of hypoglycemia/hyperglycemia were studied in ambulatory geriatric (>75 years), non-diabetic persons (Group 1, n=10), using real-time Continuous Glucose Monitoring (CGM, Dexcom G6), and in age- and sex-matched, cognitively healthy T2DM patients with HbA1c levels < 9.0% (Group 2, n=10). The device was used for 20 days per person, who was blinded to the values on the receiver (except in case of severe hypo- or hyperglycemia). Data were stored for further analysis on the Dexcom Clarity Portal. Results: Hypoglycemia occurred frequently in older persons without T2DM, despite the absence of hypoglycemia-inducing medication. In this group, glycemic values were below 70 mg/dL for 0.50% (median value) of the time, and most of the episodes happened during nighttime. Conclusions: Our study demonstrates that hypoglycemia occurs frequently in non-diabetic older persons. Further studies are needed to determine whether this could be part of the normal aging process, and whether hypoglycemia might contribute to cognitive deterioration.
Background
The occurrence of hypoglycemia is a well-known problem in the treatment of patients with diabetes mellitus (DM) type 1 (T1DM) and type 2 (T2DM), but little is known about hypoglycemia in elderly people without DM.
Older individuals with DM are at a notably higher risk for severe hypoglycemia due to age, duration of DM, duration of insulin therapy, and higher prevalence of hypoglycemia unawareness [1,2]. Hypoglycemia is often underdiagnosed and can lead to several complications, such as falls, fall-related fractures, epileptic seizures, cognitive deficits, and persistent frailty [3,4]. Also, the similarity between symptoms of hypoglycemia and symptoms of dementia, such as confusion, agitation and behavioural changes, may lead to missed diagnosis of hypoglycemic episodes in older people [5].
Prevention of hypoglycemia is especially important for elderly persons with long-lasting DM and associated complications, who are prone to asymptomatic hypoglycemia [6,7]. HbA1c levels give an indication of the average glycemic value, but not of the glycemic variability. Appropriate treatment is essential, using specific target values for metabolic control. Different target values for HbA1c for different age groups have been proposed [6,8]. Hypoglycemia is associated with cognitive and functional decline in older people with diabetes. Identification of individuals at risk and prevention of hypoglycemia are therefore important tasks in the management of diabetes in home-dwelling older people with diabetes [5].
It is well known that T2DM increases the risk for cognitive decline and dementia such as Alzheimer's disease (AD) and vascular dementia [9,10,11]. It has been established that increased prevalence of hypoglycemia can worsen cognitive decline in T2DM patients.
Continuous Glucose Monitoring (CGM) can be used to measure hypoglycemia rate, duration and glycemic variability. Several devices for CGM are available: Dexcom (G4, G5 and G6), Medtronic (Guardian Connect and Guardian Sensor 3), Senseonics Eversense and Abbott (FreeStyle Libre and FreeStyle Libre 2). Safe and effective therapeutic decision-making can be facilitated by establishing target percentages of time in the various glycemic ranges, hoping to meet the specific needs of special diabetes populations. The primary goal for effective and safe glucose control is to increase the Time in Range (TIR), while reducing the Time Below target glucose Range (TBR). These CGM-based targets must be personalized if applied to individual DM patients [1].
Very little is known about whether hypoglycemia occurs as part of the normal ageing process in nondiabetic elderly people, and whether the incidence of hypoglycemia is associated with cognitive decline.
There is one report by Adolfsson et al. showing a lower fasting glucose in persons with AD [12]. In addition, it has been demonstrated that hypoglycemia occurs in non-DM hospitalized patients [13]. Blood glucose levels also need to be screened in other settings, especially during common infections and also in nondiabetics, to identify persons at high risk for infection-related hypoglycemia (IRH). Arinzon et al. conducted a comparative study of diabetic and nondiabetic persons and concluded that IRH seems to indicate a poor general health status rather than being a cause of death [14].
Various physiological mechanisms act to prevent hypoglycemia: glucagon and norepinephrine play an important role in correcting hypoglycemia in normal human physiology. It is possible that impairment of the central sympathetic autonomic nervous system facilitates the occurrence of hypoglycemia [15].
If present, autonomic neuropathy in normal ageing could contribute to a higher risk of hypoglycemia and subsequently enhance cognitive decline. To our knowledge, no studies are available that have addressed this issue.
The aim of our study was to detect hypoglycemia in elderly non-DM people using CGM and to compare this to a DM control group.
Study participants
This study analyzed the frequency and duration of hypoglycemia in older non-DM persons (Group 1) compared to an age- and sex-matched DM control group on oral antidiabetic and/or insulin therapy (Group 2), using real time CGM. Persons using only metformin were included in Group 1, since this therapy is not associated with a risk for hypoglycemia.
In this study, each Group consisted of 10 subjects, aged 75 years or older. Participants of Group 1 and 2 were selected by screening the consultation lists and the list of "robust" subjects who already finished the BUTTERFLY study (BrUssels sTudy on The Early pRedictors of FrailtY), a longitudinal observational cohort study with a two-year follow-up [16]. Participants were recruited between September 2020 and May 2021.
The inclusion criteria for Group 2 were T2DM based on the ADA criteria and HbA1c less than 75 mmol/mol. Participants were excluded from either Group if they had a neurocognitive dysfunction, confirmed by an MMSE score ≤ 24/30, or if they had been diagnosed with cancer in the six months prior to the study.
Data collection
The following clinical data were collected for both Groups: age, sex, weight, medical history, clinical findings, medication, HbA1c level and MMSE score.
The data regarding glycemia were obtained using the Dexcom G6 device. CGM has repeatedly been described in previous studies [17,18]. Glucose is measured in interstitial fluid using the glucose oxidase method through fluorescence by using a subcutaneous sensor. The values are sent to a receiver with Bluetooth technology [17,18]. The participants were monitored for a period of 20 days. After 10 days, the sensor was changed. The CGM data were uploaded and stored for further analysis after the study period of 20 days on the Dexcom Clarity Portal.
During the entire study period, the participants were not able to observe the actual values of their glucose measurements. This blinded use avoided any influence on the lifestyle of the participants concerning exercise, food intake or medication. No adjustments were made to the treatment in Group 2 during the study period, unless severe hypo- or hyperglycemia was observed (glucose < 54 mg/dL or > 250 mg/dL).
Outcomes
The CGM data were analyzed by measuring the following variables: total number and duration of hypoglycemic episodes (glucose < 70 mg/dL); the number of nocturnal hypoglycemic episodes (between 22:00 and 07:00); the severity of hypoglycemia (< 70 mg/dL or < 54 mg/dL); glucose variability; TIR (70-180 mg/dL [1]); and HbA1c.
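The outcome variables listed above can be computed directly from the exported CGM time series. The following is a minimal Python sketch, not the authors' actual analysis (which used the Dexcom Clarity Portal): it assumes readings are evenly spaced (datetime, mg/dL) pairs at the Dexcom G6 sampling rate of one reading per 5 minutes, and treats an "episode" as a maximal run of consecutive readings below threshold; the function name is hypothetical.

```python
from datetime import datetime, timedelta

def cgm_metrics(readings, low=70, very_low=54, high=180):
    """Summarize CGM readings: `readings` is a list of (datetime, glucose_mg_dl)
    tuples, assumed evenly spaced (Dexcom G6 samples every 5 minutes)."""
    n = len(readings)
    below = sum(1 for _, g in readings if g < low)
    very_below = sum(1 for _, g in readings if g < very_low)
    in_range = sum(1 for _, g in readings if low <= g <= high)
    # Nocturnal = between 22:00 and 07:00, as defined in the Outcomes section.
    nocturnal_low = sum(1 for t, g in readings
                        if g < low and (t.hour >= 22 or t.hour < 7))
    # An "episode" = a maximal run of consecutive readings below the threshold.
    episodes = 0
    prev_low = False
    for _, g in readings:
        is_low = g < low
        if is_low and not prev_low:
            episodes += 1
        prev_low = is_low
    return {
        "pct_below_70": 100.0 * below / n,
        "pct_below_54": 100.0 * very_below / n,
        "time_in_range_pct": 100.0 * in_range / n,
        "nocturnal_low_readings": nocturnal_low,
        "hypo_episodes": episodes,
    }
```

With 20 days of 5-minute readings (roughly 5,760 values per participant), the percentages correspond to the "percent of time" figures reported in the Results.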
The ethical committee of the University Hospital of Brussels approved the study. Participants received all necessary information and gave a written informed consent.
All methods were performed in accordance with the relevant guidelines and regulations.
Statistical analysis
Data are given as median values and interquartile range (IQR). An unpaired t-test was used for comparison of Groups 1 and 2.
Demographic information
Table 1 shows the demographic information for all participants. The study was completed by 19 participants; one participant from Group 1 dropped out. No significant differences were noted between the Groups for median age, weight, or MMSE score. There was a statistically significant difference (p < 0.0001) in mean HbA1c between Groups 1 and 2.
Table 2 shows CGM outcomes and HbA1c levels for all participants, and Table 3 shows the median values for Groups 1 and 2. Hypoglycemia was detected in both Groups (median values: 0.50% vs 0.00% of the time for Groups 1 and 2, respectively; p = 0.551). Hypoglycemic values occurred during the daytime as well as during the night. In Group 2, participants with a high Coefficient of Variation had more hypoglycemia; this correlation was not seen in Group 1.
The median value and IQR for TIR were significantly lower in Group 2 compared to Group 1 (p < 0.001). There was a large variation in TIR values in the diabetic Group despite similar HbA1c levels. Hyperglycemia was significantly more frequent in Group 2, especially during daytime (p < 0.0001).
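The comparisons above (medians with IQR, unpaired t-tests between the two Groups) can be sketched with the Python standard library. The software actually used for the analysis is not named in the text, so this is an illustrative reconstruction, not the authors' code:

```python
import statistics
from math import sqrt

def summarize(values):
    """Median and interquartile range, as reported in Tables 1-3."""
    q = statistics.quantiles(values, n=4, method="inclusive")
    return {"median": statistics.median(values), "iqr": (q[0], q[2])}

def unpaired_t(group1, group2):
    """Student's unpaired (two-sample, pooled-variance) t statistic."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / sqrt(pooled * (1 / n1 + 1 / n2))
```

Applied to per-participant summary values (e.g. percent of time below 70 mg/dL in each Group), `unpaired_t` yields the statistic (with n1 + n2 − 2 degrees of freedom) underlying p-values such as the reported p = 0.551.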
Discussion
To our knowledge, this study is the first to demonstrate, using CGM, that hypoglycemia occurs in elderly persons without DM despite the absence of hypoglycemia-inducing medication. It was demonstrated that participants in the non-DM Group reached glycemic values below 70 mg/dL during 0.5% of the time. The DM Group 2 showed in general a greater glucose variability. Nevertheless, they had no higher percentage of time spent in hypoglycemia.
The strengths of this study are that two well-defined groups of elderly persons were evaluated and almost all participants wore their sensor during the whole study period. A weakness is the small number of participants in both study Groups and the relatively short duration of the study. Therefore, the results should be explored in larger groups and for longer periods.
Most interesting is the finding that hypoglycemia occurs in the non-DM group. This has not been demonstrated previously in studies using CGM. One could speculate whether hypoglycemia is an intrinsic characteristic of the natural aging process, although no tests were done to rule out secondary causes of hypoglycemia. If hypoglycemia is part of the natural aging process, it may also play a role in the development of Mild Cognitive Impairment (MCI) and its progression to AD [19].
It is well known that T2DM increases the risk for cognitive decline and dementia such as AD and vascular dementia [9,10,11]. Some studies have even described a form of dementia named DM-related dementia, a dementia subgroup associated with specific DM-related metabolic abnormalities. It is characterized by less well-controlled glycemia [8,18]. Some of these people showed neither significant medial temporal lobe atrophy on magnetic resonance imaging (MRI), nor parietotemporal hypoperfusion on single-photon emission computed tomography (SPECT), which are characteristic features of AD. In addition, they show no cerebrovascular disease lesions on MRI that could be responsible for cognitive impairment [9,20,21]. Ogawa et al. [9] found that MMSE scores correlated significantly and inversely with mean glucose, HbA1c, and the percentage of time in hyperglycemia in people with DM-related dementia. These people showed significantly greater glucose variability and a significantly higher percentage of time spent in hypoglycemia than people with AD and DM. Additionally, Ogawa et al. [9] observed that glycemic control can improve some domains of cognitive function, such as attention and executive functions, determined by the Trail-Making Test Parts A and B [22]. A study by Biessels et al. [19] showed that many patients with MCI progress to dementia and that the coexistence of DM may increase the risk of progression of cognitive deterioration.
Hypoglycemia is a frequent complication of DM treatment, and it is considered an independent risk factor for dementia in patients with T2DM [23]. The possible pathophysiological processes include post-hypoglycemic neuronal damage, inflammatory processes, coagulation defects, endothelial abnormalities, and synaptic dysfunction of hippocampal neurons during hypoglycemic episodes [23].
Few data are available on whether hypoglycemia occurs in elderly persons, and whether it plays a role in promoting cognitive decline. Adolfsson et al. compared fasting glycemia in AD patients, patients with distal gangrene, patients with cerebrovascular disease, and non-DM controls. They found that fasting glycemia was significantly lower in people with AD than in the other three groups [12]. On the other hand, Cukierman-Yaffe et al. concluded that hypoglycemia did not increase the risk of incident cognitive dysfunction in middle-aged individuals with dysglycemia [24]. It has recently been demonstrated that hindbrain astrocytes stimulate catecholaminergic neurons to counteract hypoglycemia [25]. One could speculate whether impairment of this system due to aging could impair the response to hypoglycemia.
Conclusions
We observed that hypoglycemia is common in non-DM persons. Very little is known about the prevalence of hypoglycemia in the normal aging process, where it could occur perhaps as part of central autonomic neuropathy and impaired contra-regulatory mechanisms to prevent hypoglycemia in the elderly. Hypoglycemia could well contribute to cognitive decline in both MCI and AD patients. In
|
v3-fos-license
|
2017-08-02T07:34:15.727Z
|
2017-02-20T00:00:00.000
|
3250760
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00018-017-2474-4.pdf",
"pdf_hash": "65c4a6026456598490f05795fe11c6ff2bc28eb2",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:967",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "5d67166d74c18a1a2ffda730abe3c991b9322a65",
"year": 2017
}
|
pes2o/s2orc
|
S-phase checkpoint regulations that preserve replication and chromosome integrity upon dNTP depletion
DNA replication stress, an important source of genomic instability, arises upon different types of DNA replication perturbations, including those that stall replication fork progression. Inhibitors of the cellular pool of deoxynucleotide triphosphates (dNTPs) slow down DNA synthesis throughout the genome. Following depletion of dNTPs, the highly conserved replication checkpoint kinase pathway, also known as the S-phase checkpoint, preserves the functionality and structure of stalled DNA replication forks and prevents chromosome fragmentation. The underlying mechanisms involve pathways extrinsic to replication forks, such as those involving regulation of the ribonucleotide reductase activity, the temporal program of origin firing, and cell cycle transitions. In addition, the S-phase checkpoint modulates the function of replisome components to promote replication integrity. This review summarizes the various functions of the replication checkpoint in promoting replication fork stability and genome integrity in the face of replication stress caused by dNTP depletion.
deprivation, focusing particularly on the knowledge derived from budding and fission yeast model systems and highlighting similarities or differences with higher eukaryotes, when studies are available. The S-phase checkpoint primarily depicted here involves the budding yeast Mec1 and Rad53 kinases corresponding to ATR and CHK1 in human cells, and Rad3 and Cds1 in fission yeast (see definition of these factors in the following section). For a summary on the DNA damage checkpoint factors, and their activation upon specific types of DNA damage, we invite the readers to other recent reviews [16,17] and to the last section of this review. We will also provide a description of the phenotypes caused by mutations in the S-phase checkpoint pathway, with emphasis on the DNA replication fork alterations and chromosome fragmentation entailed by dNTP depletion.
The practical importance of studying the cellular responses to DNA replication fork arrest lies in the fact that many DNA replication inhibitors, such as HU, are chemotherapeutic agents [8]. Therefore, the knowledge of the underlying molecular mechanisms and responses can inform therapeutic approaches. Such knowledge could explain why cancer cells containing alterations in the ATR-CHK1 signaling pathway are selectively killed by certain DNA replication inhibitors, while cells in which this signaling is functional may show high levels of resistance [18][19][20]. Moreover, stalled replication forks have been shown to be potent inducers of genomic rearrangements, which are frequently associated with cancer [21][22][23][24]. Therefore, understanding the regulatory mechanisms and DNA transitions induced at stalled forks can lend important clues in the etiology of genome instability induced by specific replication stress cues.
Replication fork-extrinsic S-phase checkpoint-dependent regulations triggered by DNA replication inhibition
S-phase checkpoint-dependent controls activated upon HU-induced replication stress do not necessarily rely on DNA replication fork components. We will refer to these regulatory mechanisms as replication fork-extrinsic controls. For instance, one of the first studied functions of the replication checkpoint relates to its role in delaying cell cycle transitions in response to certain perturbations until the initial problem is fixed [25]. MEC1 (Mitosis Entry Checkpoint 1) in Saccharomyces cerevisiae (hereafter, S. cerevisiae or budding yeast) and rad3 (RADiation sensitive mutant 3) in Schizosaccharomyces pombe (hereafter S. pombe or fission yeast) have been isolated as genes necessary to inhibit mitosis entry and chromosome segregation in the presence of blocked DNA replication [26,27] (Table 2). In line with studies in yeasts, it was afterwards established that one fundamental function of their human ortholog ATR (Ataxia Telangiectasia and Rad3 related) is to prevent the onset of mitosis in the presence of irregularities during DNA replication detected by the S-phase checkpoint [28,29]. The budding yeast Rad53 (RADiation sensitive 53), fission yeast Cds1 (Checking DNA Synthesis 1) and human CHK1 (CHeckpoint Kinase 1) kinases were then shown to have similar effects on cell cycle control following replication perturbation with HU [30-33].
The above-mentioned serine/threonine kinases function as a hierarchical kinase pathway known as the S-phase checkpoint, in which the signal is relayed from Mec1/Rad3/ ATR to Rad53/Cds1/CHK1 [31, 33-37]. The major kinases of the Mec1 Rad3/ATR -Rad53 Cds1/CHK1 pathway in yeast and mammalian cells are summarized in Table 2. The ways in which DNA damage and replication-associated lesions [primarily RPA-coated single stranded (SS) DNA] are recognized by Mec1/Rad3/ATR and the signal is relayed towards downstream kinases have been and continue to be intensively studied. For this topic, we invite readers to recent reviews [16,38] and to the last section of this review. In this section, we summarize replication fork-extrinsic checkpoint-mediated regulations that affect chromosome stability via controls of cell cycle transitions, dNTP pools, origin firing, and gene gating.
Early studies suggested the notion that the S-phase checkpoint prevents entry into mitosis upon HU-induced replication perturbations. In budding yeast, upon recruitment on ssDNA-RPA complexes generated at the stalled DNA replication forks (see "Structural determinants and protein factors required for S-phase checkpoint activation in response to DNA replication stress" of this review), Mec1 activates Rad53 and the mitosis inhibitor protein kinase Swe1 (Saccharomyces WEe1 homologue 1), and these kinases synergistically inhibit the mitosis-promoting activity of Cdk1 (Cyclin-Dependent Kinase 1) [39]. In addition, Mec1-mediated activation of budding yeast Chk1 stabilizes the securin Pds1 (Precocious Dissociation of Sisters 1), which prevents mitotic entry by inhibiting Separase/ ESP1 (Extra Spindle Pole body 1) and, subsequently, the proteolysis of cohesin, a protein complex that holds the sister chromatids together until anaphase (Fig. 1a) [40-42]. In fission yeast, Rad3-Cds1 inhibits the activity of the mitotic kinase Cdc2 (Cell Division Cycle 2) by activating mitosis-inhibitory kinases Wee1 ("wee" from small, as loss of Wee1 activity causes cells to enter mitosis before reaching the appropriate size so that cytokinesis generates abnormally small daughter cells) and Mik1 (Mitotic Inhibitor Kinase 1) that cooperate in the inhibitory phosphorylation of Cdc2 [43,44]. In addition, Rad3 acts via Cds1 and Chk1 activation to inhibit the phosphatase Cdc25 (Cell Division Cycle 25), which can activate Cdc2 by removing the inhibitory Wee1-and Mik1-dependent phosphorylation (Fig. 1a) [45, 46]. Thus, low CDK and Cdc25 phosphatase activities, together with a high level of Securin, ensure strong inhibition of chromosome segregation in the presence of DNA replication problems detected by the S-phase checkpoint (Fig. 1a). 
In human cells, multiple cyclin-dependent kinases (CDKs) are present, and the basic mechanism of inhibition of mitosis entry following HU-induced replication arrest is conserved. That is, ATR/CHK1-mediated phosphorylation events cause inhibition of the CDK activators Cdc25A, Cdc25B and Cdc25C (Fig. 1a) [47].
Besides adjusting cell cycle transitions, another critical function of the S-phase checkpoint is to increase the synthesis of dNTPs. This function of the replication checkpoint was discovered in budding yeast in unperturbed conditions in a search for mutations that could bypass the lethality associated with MEC1 deletion. Ablation of the SML1 (Suppressor of Mec1 Lethality 1) gene, encoding for the inhibitor of RNR (RiboNucleotide Reductase), suppresses mec1 lethality [48]. It is now known that Sml1 is phosphorylated and degraded in a manner dependent on the kinases Mec1, Rad53 and Dun1 (DNA damage UNinducible 1) at the beginning of each unperturbed S-phase and when DNA replication is stalled (Fig. 1b) [49]. The Mec1-Rad53-Dun1 kinases also act to phosphorylate and inhibit the transcriptional repressor Crt1 [50]. This leads to induction of the expression of several genes, including those encoding for the RNR subunits, thus providing additional means to increase the dNTP pools before the beginning of each S-phase or following DNA replication inhibition (Fig. 1b). Moreover, Mec1-Rad53-Dun1-dependent up-regulation of RNR under replication stress involves Dun1-mediated proteasome-dependent degradation of Dif1 (Damage-regulated Import Facilitator 1), responsible for nucleus-to-cytoplasm redistribution of the Rnr2 and Rnr4 subunits of RNR (Fig. 1b) [51]. A similar mechanism is at work in S. pombe, where Cds1 inhibits the small regulator of RNR, Spd1 (S-Phase Delayed 1), leading to the re-localization of the RNR subunits to the cytoplasm (Fig. 1b) [52]. Importantly, combined over-expression of the RNR2 and RNR4 genes partially suppresses the HU hyper-sensitivity of rad53 mutant cells, supporting the idea that S-phase checkpoint-dependent RNR up-regulation contributes to the survival of rad53 cells under conditions that inhibit RNR [53].
Up-regulation of the cellular pool of dNTPs through the degradation of RNR inhibitors, increased transcription of the RNR genes, and subcellular relocalization of the RNR subunits, are also potent cellular responses to DNA replication inhibition in mammalian cells where the ATR-CHK1 kinase pathway induces the accumulation of the RRM2 (Ribonucleoside-diphosphate Reductase subunit M 2) subunit of RNR following replication stress (Fig. 1b) [54]. Similar to results in yeast, high levels of RRM2 were also shown to suppress different phenotypes associated with ATR dysfunction and insufficiency [55]. The S-phase checkpoint also prevents (late) origin firing when cells are faced with limiting dNTP pools. The underlying mechanism in budding yeast involves Rad53-dependent inhibitory phosphorylation of the replisome component Sld3 (Synthetically Lethal with Dpb11 3), and of the Dbf4 (DumbBell Former 4) subunit of Cdc7 (Cell Division Cycle 7)/DDK (Dbf4-Dependent Kinase), required for induction of origin firing (Fig. 1c) [56]. Recent work in mammalian cells indicated that following replication stress, inhibition of origin firing serves to indirectly protect the stalled forks by preventing exhaustion of RPA that coats ssDNA exposed at replication forks. Thus, inhibition of origin firing protects the intrinsically fragile ssDNA from being converted to deleterious double strand breaks, DSBs [57]. But is inhibition of origin firing the sole mechanism underlying the protective role of the checkpoint at stalled forks? In budding yeast, a separation-of-function allele of MEC1 (mec1-100), which is defective in the inhibition of late and dormant origins firing, but is proficient in DNA replication forks stabilization, revealed that mec1-100 cells are less sensitive to HU than mec1 null cells suggesting that fork stabilization synergizes with origin firing regulation to preserve fork integrity and genome stability under replication stress [58].
The S-phase checkpoint was also recently shown to regulate gene gating, a process that links nascent message RNA (mRNA) to the nuclear envelope and to the nuclear pore from where it gets exported to the cytoplasm (Fig. 1d). In this process, following dNTP depletion, Rad53-dependent phosphorylation of the nucleoporin Mlp1 (Myosin Like Protein 1) blocks mRNA export and releases transcribed chromatin from the nuclear pores. This process was proposed to resolve chromosomal topological constrains that can be deleterious for the architecture of the stalled DNA replication forks [59]. Ablation of gene gating, achieved by deletion of SAC3 (Suppressor of Actin 3), and nucleoporin Mlp1 mutants mimicking constitutive checkpoint-dependent phosphorylation alleviate rad53 checkpoint defects [59]. Thus, Rad53-mediated DNA replication fork stabilization partly involves inhibition of gene gating.
In conclusion, there are four well-documented replication fork-extrinsic S-phase checkpoint-dependent regulations triggered by the presence of arrested DNA replication forks: regulations that (1) prevent the onset of mitosis, (2) inhibit de novo DNA replication origin firing, (3) increase the cellular pool of dNTPs, and (4) release the transcribed genes from the nuclear envelope (Fig. 1). Are these functions sufficient to explain the complex phenotypes of S-phase checkpoint mutants, or are other regulatory mechanisms involving control of fork-associated DNA transitions at play? In the next section, we review the main phenotypes of S-phase checkpoint mutants and some observations that suggest that replisome-associated factors and DNA metabolism enzymes, such as nucleases and helicases, are also under the control of the S-phase checkpoint, directly or indirectly.
Replication in the absence of the S-phase checkpoint induces chromosome fragility
Budding yeast cells allowed to replicate when Mec1 is conditionally inactivated show increased chromosome fragility, as observed by increased chromosome breakage [60]. This breakage was especially striking at late DNA replication regions defined as Replication Slow Zones (RSZs) [60,61]. It was proposed that this function of Mec1 is conceptually related to ATR roles in counteracting fragile sites expression in mammalian cells [62]. Fragile site expression in mammalian cells is generally observed in mitosis, at certain genomic regions that replicate late and whose fragility is induced by replication inhibition with aphidicolin [63]. Chromosome fragmentation induced by the absence of Mec1 or ATR was attributed to low RNR activity: this would decrease the dNTP pool below the threshold required to sustain DNA replication fork progression, thus leading to DNA replication fork collapse and breakage at the RSZs [60,61]. In support of this thesis, it was shown that increased RNR levels alleviate fragility both in mec1 and ATR-depleted cells [55,60,61].
Upon exposure to HU, mutations in RAD53 also cause fragility in RSZs [64], although it is not yet known whether the underlying mechanism is identical to the one observed in mec1 mutants in unperturbed conditions [60]. Interestingly, RAD53 ablation does not influence the basal cellular pool of dNTPs [65], but contributes to up-regulation of the RNR activity (see "Replication fork-extrinsic S-phase checkpoint-dependent regulations triggered by DNA replication inhibition"). Based on these findings, it was proposed that Rad53 up-regulates the local concentration of dNTPs at ongoing DNA replication forks [65]. This hypothesis of a local up-regulation of RNR at forks has also been recently proposed in higher eukaryotes based on the finding that CHK1 depletion in human cells does not cause a decrease in the whole cellular pool of dNTP levels [66]. Interestingly, chromosome fragmentation at RSZs in mec1 mutants is suppressed by high HU concentrations [61], although viability is highly impaired. High HU concentrations at the beginning of S-phase cause a significant fraction of replication forks in rad53 cells to be in an irreversible reversed or resected fork conformation close to the replication origins [64,67], thus preventing fork breakage at RSZs (see also below). Deletion of RRM3 (Ribosomal DNA Recombination Mutant 3), encoding a DNA helicase best known for its role in promoting replication through natural pausing sites [68,69], also suppresses fork breakage at the RSZs in mec1 cells [61]. This result may indicate an indirect effect of Rrm3 on dNTP levels or a completely different mechanism. rrm3Δ cells have elevated dNTP levels due to increased endogenous DNA damage and basal level of checkpoint activation [61,70]. In addition, Rrm3 also functions together with other DNA metabolism factors to affect stalled replication fork architecture ( [64] and see below).
How does chromosome fragility arise in the absence of Mec1, Rad53 and ATR? While various pathways are likely at work, it seems that unscheduled action of certain nucleases play an important part. The action of the fission yeast Mus81 (MMS and UV Sensitive 81) endonuclease in this process was one of the first to be documented [71,72]. In Cds1-depleted cells, Mus81-mediated processing of stalled forks accounts in large part for the chromosome fragmentation observed [71]. Interestingly, human Mus81 was recently shown to contribute to common fragile site expression [73]. Mus81 forms an endonuclease complex with Mms4 in budding yeast (Methyl Methane Sulfonate sensitivity 4) and EME1/EME2 (Essential Mitotic structure specific Endonuclease 1-2) in mammalian cells, and processes different DNA recombination and replication intermediates [74][75][76]. The Mus81-Mms4 activity is enhanced in G2/M via Cdk1-and Plk1 (Polo-Like Kinase 1)-dependent phosphorylation of Mms4 [75,77,78]. On the other hand, the replication checkpoint Mec1-Rad53 prevents premature activation of Mus81-Eme1 during replication in yeasts and human cells [66,71,75,79,80]. In fission yeast, activation of Cds1 by HU treatment induces Cds1-dependent phosphorylation of Mus81, and subsequent dissociation of Mus81 from chromatin [72]. Thus, the S-phase checkpoint protects the integrity of stalled DNA replication forks not only by regulating fork-extrinsic cellular processes (see Sect. 2), but also by regulating the spatiotemporal dynamics of nucleases, such as Mus81-Mms4 [77].
In line with the above-mentioned mechanism of chromosome fragility, Mus81-and Mre11 (Meiotic REcombination 11)-dependent DNA breaks have been recently shown to be induced in human and hamster cells in unperturbed conditions when CHK1 is ablated, confirming that one important function of the S-phase checkpoint is to prevent enzymatic activities that can cleave stalled replication forks [66]. Currently, it is not clear whether checkpoint-mediated restriction of Mus81 actions happen at specific genomic regions or at a certain time during replication. Moreover, the location of RSZs and fragile sites induced by dysfunctions in the Mec1 Rad3/ATR -Rad53 Cds1/CHK1 checkpoint pathway is only partly understood, although recent efforts promise to map those genomic sites on human chromosomes using quantitative genome-wide high-resolution techniques.
S-phase in the presence of low HU concentrations induces massive chromosome fragmentation in rad53 cells
Cells deleted for Rad53 but kept alive by the SML1 deletion (rad53 sml1) show massive chromosome fragmentation when replicating in the presence of low concentrations of HU [61,64]. Under these conditions, chromosome breakage is observed 3-5 h from the release of cells into S-phase, when bulk replication is nearly complete in wild-type cells [64]. Notably, high HU concentrations do not induce massive chromosome fragmentation in sml1 rad53 cells, even after long incubation in HU [61]. The exact relationship between HU concentrations, time of exposure to HU in S-phase and chromosome fragmentation in rad53 mutant cells is not completely understood, but several observations brought insights in this process. Exposure to high HU concentrations at the beginning of the S-phase strongly impedes fork progression in rad53 defective cells, causing a high percentage of forks to be arrested in a reversed or resected fork conformation close to the DNA replication origins [64,67]. Such alterations in replication fork structure are largely irreversible, as judged from the inability of rad53 cells to re-start DNA replication after HU removal [81]. Thus, it is possible that reversed forks can stabilize arrested forks against breakage (see also "S-phase checkpoint roles in fork architecture: prevention of pathological DNA transitions or resolution of transient DNA intermediates?"). Alternatively, and perhaps more likely, if fragility is preferentially induced in RSZs located in late replicating regions, inhibiting replication early on will prevent replication forks to reach late-replicating genomic regions. Importantly, chromosome breakage in mec1 cells does not require metaphase to anaphase transition, but involves condensation and Topoisomerase II-mediated activities [82]. 
Whether chromosome fragmentation in rad53 cells exposed to low HU concentrations occurs through the same mechanism as that observed at RSZs in mec1 cells under unperturbed conditions [60,61,64] remains unclear.
S-phase in the presence of high HU concentrations in rad53 cells alters replication fork architecture and inactivates replication
Structural analysis of DNA replication forks by neutral-neutral 2D gel electrophoresis and transmission electron microscopy of rad53-K227A kinase-defective mutant and rad53 sml1 cells treated with high HU concentrations revealed that around 40% of forks had extensive resection (with an average of 0.8-1 kb of ssDNA on one of the newly synthesized strands close to the fork junction), 10% of forks had breaks, and 10% had reversed forks (Fig. 2a) [64,67]. The ssDNA discontinuities at the fork in rad53 cells appear to be localized on only one of the two newly synthesized strands. Moreover, a consistent fraction of resected replication forks (5%) are in a "bubble conformation", with one side of the replication bubble, up to 2 kb in length, being completely single-stranded. These latter replication fork structures have been called hemi-resected DNA replication bubbles or hemi-replicated DNA structures (Fig. 2a) [64,67]. Such structures are not observed in control wild-type cells, which also show a very low level of reversed forks (less than 1% of total forks), and usually have ssDNA stretches of less than 0.2 kb at the fork junction [67]. These results suggest a protective action of the S-phase checkpoint on the structure of stalled DNA replication forks.
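The fork-class fractions quoted above can be tallied into a simple side-by-side comparison. The following sketch is purely illustrative: the numbers are the approximate fractions reported in the text (from refs. [64,67]), restated in code rather than new data, and the grouping choices (e.g. counting hemi-resected bubbles as a subset of resected forks) follow the text's description:

```python
# Illustrative tally of replication-fork classes observed by 2D gels and EM
# in HU-treated cells, using the approximate fractions quoted in the review.
# These are restated literature values, not primary data.

fork_classes_rad53 = {
    "extensively_resected": 0.40,   # ~0.8-1 kb ssDNA on one nascent strand
    "broken": 0.10,
    "reversed": 0.10,
    "hemi_resected_bubble": 0.05,   # counted among the resected forks
}

fork_classes_wt = {
    "reversed": 0.01,               # "less than 1%" of forks in wild type
}

def aberrant_fraction(classes, exclude=()):
    """Sum the fractions of aberrant fork classes, skipping any listed in
    `exclude` (to avoid double-counting subsets of other classes)."""
    return sum(v for k, v in classes.items() if k not in exclude)

rad53_aberrant = aberrant_fraction(fork_classes_rad53,
                                   exclude=("hemi_resected_bubble",))
wt_aberrant = aberrant_fraction(fork_classes_wt)

print(f"rad53: ~{rad53_aberrant:.0%} of forks with aberrant structure")
print(f"wild type: ~{wt_aberrant:.0%} of forks with aberrant structure")
```

Summing this way, roughly 60% of forks in rad53 cells carry some aberrant structure versus about 1% in wild type, which is the quantitative core of the "protective action" conclusion.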
The extensive resection observed on either leading or lagging strands in rad53 cells [64,67] could be explained by a high frequency of resection/unwinding events affecting one of the two newly synthesized strands, or, alternatively, by extensive uncoupling between leading and lagging strands (Fig. 2b). It is possible that, in resected and uncoupled forks, the parental strands re-anneal, causing extrusion of the newly replicated strand (with either a 5′ or a 3′ end) (Fig. 2b, c). Further annealing of the extruded nascent strands could induce the formation of a reversed fork with a protruding ssDNA end on the regressed arm (Fig. 2b, c). Stalled forks in rad53 cells may undergo complete elimination of the leading- and lagging-strand filaments, causing formation of hemi-resected DNA replication bubbles (Fig. 2a). We note that DNA replication forks with extended regions of ssDNA, or reversed forks carrying single Holliday junction (sHJ) centers, may undergo spontaneous or nuclease-mediated processing with the formation of DSBs, thus representing a potential source of chromosomal rearrangements and genome instability [80,83,84] (Fig. 2b).
Controversial roles of the S-phase checkpoint in replisome maintenance and association with stalled replication forks
S-phase checkpoint mutants exposed to high concentrations of HU were shown to undergo progressive dissociation of the replicative DNA polymerases from early ARS regions containing replication-derived forks [64,85,86]. However, this notion has become somewhat controversial. Chromatin immunoprecipitation studies reported decreased binding of Polα at early, active Autonomously Replicating Sequences (ARSs) in rad53 cells treated with high HU concentrations [64,85-87]. However, another report concluded that Polα dissociation in rad53 cells only takes place at a small subset of forks localized at very early ARS regions [88]. In this latter study, the authors purified replisomes from HU-treated rad53 and wild-type cells, revealing the presence of fully assembled replisomes in the absence of Rad53. Replisome composition was not changed, but whether the purified replisomes were still active and associated with the forks in vivo is not yet known, despite the multiple efforts made by the authors to control for confounding effects [88]. Moreover, purification of the total pool of replisomes at a given time can be influenced by the presence of functional replisomes arising from de novo origin firing, a process that is deregulated in rad53 mutants. Furthermore, control cells may undergo DNA replication termination faster, which would cause dissociation of replisomes from the chromosomes. These factors may potentially mask differences between wild-type and rad53 cells, in which replication in the presence of HU is slower than in wild type. Similar experiments on replisome composition were conducted in human cells, and the results confirmed that the replisome associates normally with the nascent strands in ATR-inhibited cells exposed to HU [89].
Various studies indicate that cells with non-functional checkpoints have a different replication fork architecture compared with wild-type cells (see Sect. "S-phase in the presence of high HU concentrations in rad53 cells alters replication fork architecture and inactivates replication") and accumulate Rad52 (RADiation sensitive 52) recombination protein foci in S phase [90,91]. Importantly, mec1 cells under replication stress strongly depend for viability on factors with roles in homologous recombination, such as Rad52 [92] and the RecQ helicase Sgs1 (Slow Growth Suppressor 1) [93]. However, Rad52 also has annealing activity, and its requirement for viability in mec1 cells may therefore reflect increased annealing events triggered by elevated levels of ssDNA during replication, similar to a situation recently reported in Polymerase α/Primase mutants [94].

Fig. 2 a Aberrant replication fork structures observed in rad53 cells treated with HU [64,67]. b Replication stress induces uncoupling events between leading and lagging strands. Subsequent re-annealing of the parental and nascent strands can promote structural transitions at the stalled replication forks. Processing of the intermediates can also cause chromosome breakage. c Cellular mechanisms for fork stabilization and re-start. Re-priming coupled to DNA damage tolerance can preserve the normal DNA replication fork architecture. DNA replication inhibition and DNA lesions can induce fork uncoupling, formation of long ssDNA stretches, long DNA flaps and fork reversal. Activities that are potentially implicated in processing of flaps and reversed forks are shown (related to "Phenotypes caused by S-phase checkpoint dysfunction in unperturbed conditions and after dNTP depletion", "S-phase checkpoint roles in fork architecture: prevention of pathological DNA transitions or resolution of transient DNA intermediates?" and "S-phase checkpoint-dependent phosphorylation events at stalled replication forks" of the review)
How these apparently contradictory results on replisome composition and DNA polymerase association in checkpoint-defective cells can be reconciled remains puzzling. Hopefully, future research using conditional inactivation of S-phase checkpoints and advanced genomic and visualization techniques will illuminate the kinetics with which different replisome and recombination factors associate with replication forks in checkpoint-proficient and -deficient cells, and will shed new light on the contribution of specific replisome-mediated processes to the complex phenotypes of checkpoint mutants.
S-phase checkpoint roles in fork architecture: prevention of pathological DNA transitions or resolution of transient DNA intermediates?
A prominent phenotype of S-phase checkpoint mutants exposed to dNTP depletion is an altered replication fork architecture compared with that of wild-type cells, characterized by increased fork reversal and resection of the newly synthesized strands [60,67,81] (see also "S-phase in the presence of high HU concentrations in rad53 cells alters replication fork architecture and inactivates replication"). The structures accumulating in rad53 cells treated with HU could represent either pathological intermediates that are actively prevented by the replication checkpoint, or normal transient structures that are not detectable in wild-type control cells because their processing or resolution might rely on the replication checkpoint [95]. Whether fork reversal is actively prevented by the S-phase checkpoint is an important question, as it has general implications for the roles of reversed fork intermediates in replication and genome stability. These roles have remained controversial and a matter of debate.
Recently, it was proposed that reversed forks are central intermediates of replication fork stabilization and restart mechanisms under replication stress, based on the observation that mammalian cell lines exposed to different sub-lethal doses of replication stress-inducing agents activate a RAD51 (the ortholog of budding yeast RADiation sensitive 51)-dependent pathway that promotes formation of reversed forks [96]. In this view, when fork progression is challenged, RAD51-dependent reactions would convert stalled forks into reversed forks [96]. The RECQL1 (RECQ Like helicase 1) helicase and the DNA2 (DNA Replication Helicase/Nuclease 2) nuclease were proposed to subsequently process and restart the reversed forks (see Fig. 2c) [97-99]. Several questions, however, remain open about the mode of action of the RAD51-RECQ1-DNA2 pathway of fork stabilization and re-start through fork reversal. For example, replisome location, and the relationship between the replisome and the replication fork during formation of the reversed fork and its re-start, are not well defined. Although human RAD51 plays a role in protecting the nascent strands of replication forks from MRE11-dependent resection under unperturbed conditions [100], the exact roles of RAD51 in DNA replication in general, and in reversed fork formation following replication stress in particular, are still under investigation. Does fork reversal lead to a replisome-dependent fork restart? Is RAD51-mediated fork reversal the best option for fork reactivation, or is it a last-resort option? Is fork reversal triggered genome-wide, or is it preferentially induced at specific genomic regions where other fork reactivation mechanisms fail?
The currently insufficient knowledge of the factors that process reversed forks, and the lack of techniques that can map single-ended DSBs on chromosomes, do not allow precise answers to the above questions. However, it is useful to consider what other mechanisms may mediate fork restart independently of fork reversal. One such mechanism involves replicative helicase-coupled re-priming downstream of the stalled replisome, to allow re-initiation of DNA synthesis after the replication obstacle (Fig. 2c) [94,101]. This would preserve a normal replication fork structure and induce formation of DNA gaps, which could be filled in post-replicatively [102,103]. While this mechanism has been primarily studied in the context of DNA damage tolerance induced by alkylating agents, in principle it can operate in response to other types of replication obstacles or replication stress cues that do not block Polα-Primase activity. Notably, additional specialized DNA polymerases directing re-priming events at stalled forks are starting to be identified in mammalian cells [104,105], suggesting that even in conditions where Polα-Primase activity is inhibited, re-priming events may be induced.
Interestingly, most processes related to replication intermediate metabolism and the function of the replication checkpoint are conserved from yeast to mammals, but fork reversal is much more frequent in mammalian cell lines than in wild-type yeast cells [67,95,96,102]. We recently proposed that this observation holds insights about the contexts in which fork reversal is triggered [4]. The high complexity of the human genome, which is enriched in repetitive sequences and heterochromatic regions, may account for numerous physical or topological fork barriers that would be more easily accommodated by fork reversal rather than other fork reactivation events, such as the re-priming mechanism discussed above. Moreover, the genomic context in which these fork-stalling events happen would not necessarily trigger checkpoint activation [106]. Indeed, stalled forks at ribosomal DNA in budding yeast, the locus most abundant in repetitive sequences in this organism, do not mount checkpoint activation [107], but trigger the formation of sHJs that most likely represent reversed forks [94,108]. We propose that the reversed forks detected in mammalian cells may often originate from repetitive sequences that represent natural obstacles for replication forks, and which may be further destabilized by treatment with replication inhibitors, such as HU [14]. In these contexts, fork reversal may promote fork stabilization until an incoming fork reaches the region. In this view, the stalled fork will not necessarily be an intermediate in the restart process, but it will represent an important strategy, present from yeast to mammals, to promote fork stability in specific genomic contexts that constitute natural replication obstacles [4].
In S-phase checkpoint mutants treated with HU, fork reversal is increased [64,67], but how does the S-phase checkpoint regulate fork reversal? Is it because other fork restart mechanisms, such as re-priming, are impaired in the absence of the S-phase checkpoint, or because the S-phase checkpoint counteracts fork remodeling or promotes resolution of reversed forks? Is fork remodeling related to the extensive resection events observed in checkpoint mutants? Some answers have begun to emerge. First, supporting the view that the fork remodeling and resection events are related to each other, deletions of the genes encoding the DNA helicases Pif1 (Petit Integration Frequency 1) and Rrm3 were shown to reduce the formation of both resected and reversed forks in rad53 cells treated with HU [64]. Regarding the etiology of fork reversal, the human DNA translocase SMARCAL1 (SWI/SNF-related Matrix-associated Actin-dependent Regulator of Chromatin subfamily A-Like protein 1) was shown to induce fork remodeling [109]. Interestingly, the Pif1 DNA helicases and SMARCAL1 associate with stalled replication forks and nascent strands also in Rad53- and ATR-proficient cells, respectively, but there they do not act to change the fork structure [64,109]. Importantly, ablation of SMARCAL1 or of the Rrm3/Pif1 DNA helicases suppresses chromosome fragmentation in ATR- and Rad53-deficient cells, respectively, suggesting that reversed forks may be toxic replication intermediates in checkpoint mutants that subsequently induce chromosome fragmentation.
As discussed in "Replication in the absence of the S-phase checkpoint induces chromosome fragility", a significant part of the chromosome breakage observed in S-phase checkpoint-deficient cells can be attributed to the unscheduled action of the Mus81 endonuclease. In addition to Mus81-Mms4/Eme1, endonuclease activity-containing factors such as SLX4 [Synthetic Lethal of unknown (X) function 4] and CtIP (Carboxy-terminal Interacting Protein) are partly responsible for chromosome fragmentation in cells depleted for ATR and exposed to replication stress [109,110]. The Exo1 nuclease resects stalled and reversed forks in rad53 cells treated with HU, and the nuclease activity of Dna2 counteracts fork reversal in fission yeast through the processing of fork-associated DNA flaps (Fig. 2c) [111-113]. Explicitly, the Dna2 nuclease may reduce the length of the DNA flaps caused by extended replication fork uncoupling events (see Fig. 2c) [111]. This action would limit subsequent re-annealing of the parental strands and the extrusion of the newly synthesized filaments as 5′ or 3′ DNA flaps (Fig. 2c), thus counteracting fork reversal [111]. In this vein, an Exo1-Dna2-Sae2-dependent nuclease pathway was recently shown to counteract the formation of unusual DNA replication intermediates in checkpoint-defective cells exposed to replication stress [114].
In checkpoint mutants such as rad53, increased uncoupling between leading and lagging strands accompanies, and could itself promote, fork reversal. Does the checkpoint prevent this uncoupling? Intriguingly, it was recently shown that HU treatment leads to the unloading of PCNA specifically from the lagging strand of the DNA replication fork [115]. As uncoupling between leading and lagging strands is not extensive in wild-type cells [67], these findings suggest that, in some way, lagging-strand activities must be inhibited following DNA replication fork stalling induced by dNTP deprivation. Notably, the unloading of PCNA is mediated by Elg1 (Enhanced Level of Genomic instability 1) [116-118], which is phosphorylated by the checkpoint [119]. Thus, the extensive uncoupling of leading and lagging strands in rad53 cells may also indicate that Rad53 inhibits lagging-strand elongation following HU-induced fork stalling. The mechanisms involved may relate to Elg1-mediated PCNA unloading [115], counteraction of Rrm3 and Pif1 [64], downregulation of DNA primase [120], and/or additional processes.
Uncoupling between the replicative DNA helicase MCM (MiniChromosome Maintenance) complex and DNA polymerases strongly activates ATR in Xenopus egg extracts [121]. Such uncoupling is expected to generate ssDNA regions at the fork junction on both replicating strands. It was proposed that one important function of the replication fork pausing complex Tof1-Csm3-Mrc1 (Topoisomerase I interacting Factor 1-Chromosome Segregation in Meiosis 3-Mediator of the Replication Checkpoint 1) in S. cerevisiae, composed of Swi1-Swi3 (Switchable 1-3) and Mrc1 in S. pombe, and TIMELESS-TIPIN and Claspin in mammalian cells, is to maintain the coupling between the DNA synthesis apparatus of the replisome and the MCM DNA helicase (see Fig. 3b) [122,123]. We note that predicted fork structures with ssDNA on both replicated arms have not been observed in rad53 checkpoint mutants [64,67] or in Xenopus egg extracts depleted for Tipin [124], potentially due to redundancy in factors ensuring the coordination between the replicative helicase and the replisome. Such factors, bridging the replisome and the replicative helicase, include Ctf4 (Chromosome Transmission Fidelity 4)/AND-1 (Acid Nucleoplasmic DNA binding protein 1) and MCM10 (MiniChromosome Maintenance 10) [125][126][127]. Intriguingly, in Tipin-depleted Xenopus extracts and ctf4 single mutants in budding yeast, there is an increase in fork reversal, suggesting that failure to coordinate replisome and helicase movements may also induce alternate fork response pathways that involve formation of reversed forks [94,124].
Taken together, these observations illustrate that unscheduled fork remodeling, resection, cleavage, and unwinding can induce cytotoxicity and genome instability in cells defective in the replication checkpoint. Recent studies suggest that many of these activities are counteracted by the replication checkpoint via phosphorylation events [64,72,109]. Relevant phosphorylation substrates of the checkpoint at stalled replication forks are discussed in the next section.
S-phase checkpoint-dependent phosphorylation events at stalled replication forks
Based on the phenotypes of S-phase checkpoint mutant cells and the DNA structures arising in mutants of the S-phase checkpoint ("S-phase checkpoint roles in fork architecture: prevention of pathological DNA transitions or resolution of transient DNA intermediates?"), we have pinpointed possible roles of the checkpoint in controlling the activity of replication fork components or regulators, such as nucleases and helicases. Here, we discuss various studies relevant to this concept and highlight the critical substrates that have emerged.

Fig. 3 DNA substrates and protein factors required for S-phase checkpoint activation. a High amounts of ssDNA-RPA complexes at stalled forks and primer-template substrates can be induced by uncoupling of leading- and lagging-strand DNA synthesis, uncoupling between DNA polymerases and the MCM DNA helicase, discontinuous synthesis of the nascent strands, hyper-priming activity of Polα, or unwinding or resection of one of the nascent strands. b Simplified representation of the replication fork and some replisome components. Protein factors shown in yellow are involved in the activation of the Mec1 Rad3/ATR -Rad53 Cds1/CHK1 checkpoint pathway following dNTP deprivation. Physical and functional interactions instrumental to checkpoint activation are indicated by arrows and dashed lines, respectively (related to "Structural determinants and protein factors required for S-phase checkpoint activation in response to DNA replication stress")
HU induces Mec1-dependent hyper-phosphorylation of subunit 2 (RPA2) of the ssDNA-binding protein RPA (Replication Protein A) in S. cerevisiae [128]. Besides its roles in stabilizing the replisome and the ssDNA generated during DNA replication, the RPA complex functions as a platform to recruit ATR-ATRIP (ATR Interacting Protein) checkpoint complexes at lesion sites and at stalled forks (see also "Structural determinants and protein factors required for S-phase checkpoint activation in response to DNA replication stress"). ATR-dependent phosphorylation of RPA2 following dNTP deprivation has been shown to occur also in human cells, where this modification is critical to sustain DNA synthesis and DNA replication fork re-start, and to recruit PALB2 (PArtner and Localizer of BRCA2) to stalled DNA replication forks [129,130]. Early studies in S. cerevisiae suggested that Rad53-mediated targeting of the Pri1 subunit of DNA primase (encoded by the PRI1 and PRI2 genes) facilitates slow-down of replication in the face of replication stress [120]. Although the molecular mechanism and the phosphorylation sites implicated in this regulation are not known, it is conceivable that such regulation of primase activity may serve to prevent uncoupling between leading- and lagging-strand synthesis in the presence of replication stress (see Fig. 3a).
Chromosome breakage arises in cells depleted for, or mutated in, the replication checkpoint ("Phenotypes caused by S-phase checkpoint dysfunction in unperturbed conditions and after dNTP depletion"). In fission yeast and human cells, the chromosome fragmentation of cells lacking Cds1 or depleted for CHK1 largely depends on Mus81 ([66,72]; see also "Phenotypes caused by S-phase checkpoint dysfunction in unperturbed conditions and after dNTP depletion"). As mentioned in "S-phase checkpoint roles in fork architecture: prevention of pathological DNA transitions or resolution of transient DNA intermediates?", processing of the nascent strands in checkpoint mutants under replication stress can be deleterious. Exo1 is a 5′-3′ exonuclease/5′ flap-endonuclease and plays a role in the resection of the stalled and reversed forks that form in rad53 cells exposed to HU [112,113]. EXO1 deletion in budding yeast does not suppress the HU hypersensitivity of rad53 cells, suggesting that Exo1-dependent resection of stalled forks occurs after the structure of the fork has already been altered in an irreversible way [112]. Exo1 is hyper-phosphorylated upon HU treatment in a Mec1-dependent manner [131], but whether this serves to inhibit Exo1 activity or to regulate its cellular localization is not yet clear. In human cells, ATR-dependent phosphorylation of EXO1 leads to its polyubiquitylation and subsequent proteasome-mediated degradation, highlighting another important mechanism through which the S-phase checkpoint limits fork-processing activities [132,133].
A number of factors have been implicated in fork remodeling. One such factor, SMARCAL1, which can promote fork regression in vitro, is phosphorylated by ATR upon replication stress [109,134]. ATR-dependent regulation of SMARCAL1 is thought to inhibit SMARCAL1-dependent fork remodeling induced by dNTP deprivation, and to prevent subsequent SLX4- and CtIP-dependent processing of the fork structures [109]. This finding supports the idea that proteins that can potentially remodel, cut or resect the stalled fork are efficiently inhibited by the S-phase checkpoint. In this vein, Rad53-dependent phosphorylation of the DNA helicases Rrm3 and Pif1 at stalled forks prevents the accumulation of both resected and reversed forks, as well as the chromosome fragmentation phenotype typical of rad53 cells [64]. Whether Rrm3 specifically localizes to leading- or lagging-strand replisomes is not known, while Pif1 was proposed to participate in an alternative pathway of Okazaki fragment processing to stimulate DNA polymerase δ-dependent strand displacement activities on the lagging strand [135]. Intriguingly, human Pif1 can unwind synthetic DNA structures resembling stalled DNA replication forks and catalyze in vitro reactions similar to those thought to be involved in the formation of reversed forks [136]. Upon HU treatment, Rad53 phosphorylates Pif1 and Rrm3 [64]. Genetic data indicate that Rad53-mediated phosphorylation of Pif1 and Rrm3 counteracts fork remodeling leading to fork reversal [64]. However, owing to the pleiotropic effects of checkpoint dysfunction in yeast and human cells, it is difficult to infer protein function in a checkpoint-proficient context from phenotypes observed in checkpoint-deficient cells.
Substantiating the notion that helicases often act at stalled forks, in addition to the SMARCAL1, Pif1 and Rrm3 helicases mentioned above, the human FBH1 DNA helicase has also been recently shown to catalyze regression of the stalled forks following replication stress [137].
In the process of annealing of the parental strands at uncoupled forks, long 5′ flaps may be generated. Such flaps would require processing by Rad27, Exo1 and Dna2 (Fig. 2c). If long 5′ flaps on Okazaki fragments fail to be cleaved, they can induce formation of reversed forks. The notion that the Dna2 nuclease deals with a toxic substrate generated by Pif1 was suggested by the observation that the lethality caused by the absence of Dna2 is suppressed by ablation of Pif1 [135]. Since Pif1 is thought to create DNA flaps on the lagging strand of replication forks during the alternative pathway of Okazaki fragment processing [138], the observed genetic interaction supports the idea that long 5′ flaps at forks are toxic and must be counteracted (Fig. 2c).
The replication checkpoint also targets replisome components. The MCM2 subunit of the MCM (MiniChromosome Maintenance) complex is phosphorylated in an ATR-dependent manner following replication stress, and this event facilitates robust activation of the intra-S checkpoint [139,140]. Thus, transmission of the S-phase checkpoint signal downstream of ATR may involve ATR-dependent phosphorylation of a series of replisome components. The Psf1 (Partner of Sld Five) subunit of the GINS (Go-Ichi-Ni-San) complex of the replisome is also phosphorylated in a Mec1-dependent manner, but the physiological role of this Psf1 modification is not yet known [88].
ATR also mediates the transient association of FANCD2 (FANConi Anemia Complementation Group D2) with the MCM helicase complex at stalled replication forks, although it is not known whether this is related to the MCM2 phosphorylation event described above [141]. FANCD2 plays roles in protecting the stalled replication fork and in restraining DNA replication after removal of HU [141]. Intriguingly, FAN1 (Fanconi Anemia associated Nuclease 1), a 5′ flap endonuclease implicated in ICL (Inter-strand CrossLink) repair and identified as an interacting factor of FANCD2 [142,143], is also recruited to stalled replication forks through its interaction with the monoubiquitylated form of FANCD2. FAN1 recruitment with FANCD2 at stalled DNA replication forks is necessary to re-start DNA replication and to prevent chromosome abnormalities even in the absence of ICLs [141,144,145]. Whether these actions of FAN1 following dNTP deprivation, or its recruitment to FANCD2, are regulated by ATR remains unknown.
The BLM helicase, the human ortholog of Sgs1 that is mutated in the cancer-prone Bloom syndrome, interacts with stalled replication forks and is phosphorylated in an ATR-dependent manner following dNTP depletion, suggesting possible functional crosstalk between ATR and BLM at stalled replication forks [146]. Moreover, ATR-dependent phosphorylation of BLM is required for DNA replication fork restart and for suppression of new origin firing [147].
Although already complex, it is likely that the picture of S-phase checkpoint replisome substrates will expand in the future, giving a better view of the DNA transitions that occur at stalled forks and the processes underlying fork stabilization, collapse and restart.
Structural determinants and protein factors required for S-phase checkpoint activation in response to DNA replication stress

DNA structures and protein signals required for S-phase checkpoint activation
The DNA damage and replication checkpoint is activated by abnormalities in DNA, both in terms of the substrate per se and the amount of substrate generated. Early studies in yeast revealed that processing of uncapped telomeres causes checkpoint activation, and that the extent of Rad53 activation during the repair of a single, site-specific and non-repairable DSB correlates with the extent of resection [148,149]. These findings suggested that non-physiologically high levels of ssDNA represent a signal for DNA damage checkpoint activation. This concept was later substantiated by findings that checkpoint activation after UV irradiation in non-replicating yeast cells depends on lesion processing and exposure of ssDNA gaps [150]. Thus, uncoupling of leading and lagging strands (or of the DNA polymerases and the MCM helicase), due to prolonged stalling or re-priming events downstream of the lesion, can provide substrates for checkpoint activation at replication forks (Fig. 3a) [13,121,151]. These events can also be induced by dNTP deprivation or other treatments that inhibit DNA replication without causing DNA lesions. Further studies revealed that checkpoint activation requires recruitment of a subset of checkpoint factors, called "sensors", to the damaged sites, mediated by ssDNA coated with RPA [152,153]. The sensors include ATR-ATRIP and the corresponding orthologs (Mec1-Ddc2 in budding yeast and Rad3-Rad26 in fission yeast) and the PCNA-like checkpoint clamp complex 9-1-1 [154]. 9-1-1 stands for Rad9-Rad1-Hus1 (RADiation sensitive 9-RADiation sensitive 1-HydroxyUrea Sensitive 1) in S. pombe and human cells, and its equivalent in S. cerevisiae is Rad17-Mec3-Ddc1 (RADiation sensitive 17-Mitosis Entry Checkpoint 3-DNA Damage Checkpoint 1) [155-158].
However, recruitment of 9-1-1 requires not only the presence of RPA-coated ssDNA, but also a primer-template junction, where the 5′-end of an annealed DNA fragment (primer) is close to a stretch of ssDNA [159]. Thus, continued primer synthesis at stalled replication forks can contribute to checkpoint activation [151]. Accordingly, early studies showed that decreased levels of dNTPs induce continuous synthesis of primers in an in vitro system with immuno-purified yeast DNA polymerase I and DNA primase [160]. Thus, discontinuous DNA synthesis on the nascent strands, uncoupling between leading- and lagging-strand DNA synthesis, and unwinding/resection events on the newly synthesized filaments induce the formation of the substrates required for the recruitment of checkpoint sensors at stalled replication forks (Fig. 3a).
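The two-signal logic described above (RPA-coated ssDNA recruits ATR-ATRIP, while 9-1-1 loading additionally requires a 5′ primer-template junction) can be caricatured in a few lines of code. This is a deliberately simplified toy model of the described recruitment rules, not a quantitative one; the ssDNA threshold value is an arbitrary illustrative number, not a measured parameter:

```python
from dataclasses import dataclass

# Toy model of sensor recruitment at a stalled fork: ATR-ATRIP (Mec1-Ddc2)
# responds to RPA-coated ssDNA, while the 9-1-1 clamp additionally requires
# a 5' primer-template junction. The threshold below is purely illustrative.
SSDNA_THRESHOLD_NT = 200

@dataclass
class StalledFork:
    rpa_ssdna_nt: int               # nucleotides of RPA-coated ssDNA exposed
    primer_template_junctions: int  # 5' primer ends adjacent to ssDNA

def sensors_recruited(fork: StalledFork) -> dict:
    atr_atrip = fork.rpa_ssdna_nt >= SSDNA_THRESHOLD_NT
    nine_one_one = atr_atrip and fork.primer_template_junctions > 0
    return {
        "ATR-ATRIP": atr_atrip,
        "9-1-1": nine_one_one,
        # Full activation in this cartoon requires both sensor complexes.
        "checkpoint_activated": atr_atrip and nine_one_one,
    }

# An uncoupled fork with abundant ssDNA and ongoing primer synthesis:
print(sensors_recruited(StalledFork(rpa_ssdna_nt=800, primer_template_junctions=3)))
# A fork with only a short ssDNA stretch, as in unperturbed wild-type cells:
print(sensors_recruited(StalledFork(rpa_ssdna_nt=50, primer_template_junctions=1)))
```

The point of the sketch is the AND-gate: short ssDNA stretches at normal forks, or ssDNA without a nearby primer end, do not satisfy both conditions, which mirrors why unperturbed replication does not constitutively fire the checkpoint.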
While Dpb11 activates Mec1 through direct physical interaction, the Polε-mediated mechanism remains elusive. Nevertheless, genetically, Dpb4, Dpb11 and Polε seem to function in the same branch of Mec1 and Rad53 activation following dNTP deprivation, suggesting that the DNA polymerase ε complex blocked on the leading strand acts as an activator of the checkpoint [176,179].
Another important step in understanding S-phase checkpoint activation was the discovery of Mrc1 Claspin as a specific mediator of Rad53 Cds1/CHK1 activation upon DNA replication inhibition (Fig. 3b) [179,180]. Following dNTP deprivation, Mrc1 Claspin becomes hyper-phosphorylated in a Mec1 Rad3/ATR -dependent manner and mediates the activation of Rad53 Cds1/CHK1 [181]. The mrc1-AQ allele, defective in Mec1-dependent phosphorylation but functional with regard to replication in unperturbed conditions, does not support Rad53 activation following HU treatment, suggesting that Mec1-dependent targeting of Mrc1 is necessary to activate the downstream kinases of the S-phase checkpoint (Fig. 3b) [181] (Table 3). Mrc1 is associated with the replisome by means of physical interactions with the N- and C-terminal parts of Polε [182]. Mec1-dependent phosphorylation of Mrc1 abolishes the interaction with the N-terminal part of Polε [182]. Taken together, these findings indicate that Mec1-dependent structural modification of the Mrc1-DNA polymerase ε complex may lead to the formation of the true Rad53 activator at the stalled fork. In the absence of Mrc1, Rad53 activation and cellular viability become dependent on Rad9 [181].
Future studies will likely continue to dissect the interplay and interactions between Mec1 ATR, Mrc1 Claspin, and Polε in activating Rad53 Cds1/CHK1. As the substrates of Mec1 and Rad53 during unperturbed and replication-stress conditions are also being unraveled [183,184], the coming years are likely to bring an increased understanding of the processes and DNA substrates that activate, or are shielded by, the replication checkpoint to preserve genome integrity.
Conclusions
Studies over the past decade brought about the notion that disruption of checkpoint response pathways often underpins tumor progression, and unveiled many aspects of checkpoint function. In particular, the principles underlying checkpoint activation at stalled forks, an important source of replication stress, have been pieced together. We have also learnt a great deal about key substrates of the S-phase checkpoint, encompassing both substrates extrinsic to replication forks and replisome components, and about how their modification may affect specific cellular processes and DNA transitions at the fork. However, complicating the picture, checkpoint mutants often have pleiotropic phenotypes, making the interpretation of specific results difficult. Moreover, given the multitude of checkpoint substrates, the checkpoint likely plays both activating and inhibitory roles in any specific process. Future studies will need to sort out the spatial and temporal regulation of checkpoint-mediated modifications, such as that related to genomic region, chromatin state and replication timing, and their effects on replication proficiency and DNA dynamics during normal replication and at stalled replication forks. The interconnectedness between checkpoint activation and fork reactivation, as opposed to mere fork stabilization, has only recently begun to be investigated, and much remains to be learnt in this domain. Considering the recent advances in genomic, proteomic and imaging approaches, and the development of efficient and reversible conditional systems in both yeast and mammalian cells, the following years will certainly witness important discoveries of hidden facets of the checkpoint pathway, and will unveil principles that govern the cellular response to stalled replication forks.
Bilateral vertebral artery injury leads to brain death following traumatic brain injury: a case report
Background Vertebral artery injury is a rare condition in trauma settings; in its advanced stages, it causes death. Case A 31-year-old Sundanese woman with cerebral edema, C2–C3 anterolisthesis, and a Le Fort III fracture after a motorcycle accident was admitted to the emergency room. On the fifth day, she underwent arch bar maxillomandibular application and debridement under general anesthesia with a hyperextended neck position. Unfortunately, her rigid neck collar had been removed in the high care unit before surgery. Her condition deteriorated 72 hours after surgery. Digital subtraction angiography revealed a grade 5 bilateral vertebral artery injury due to cervical spine displacement and a grade 4 left internal carotid artery injury with a carotid cavernous fistula (CCF). The patient was declared brain dead after CCF coiling failed to improve cerebral perfusion. Conclusions Brain death due to cerebral hypoperfusion following cerebrovascular injury in this patient could have been prevented by early endovascular intervention and cervical immobilisation.
Background
Vertebral artery injury following trauma is rare, with an incidence of 0.5–2% among all trauma cases. Traumatic vertebral artery injury (TVAI) can be related to cervical spine injury through several mechanisms, such as hyperflexion, hyperextension, distraction, facet dislocation, and fractures of the cervical spine. The most common cause of these injuries is motor vehicle accidents; other causes include direct assault, hanging, sports injuries (for example, swimming), and neck manipulation by chiropractors and physiotherapists [1][2][3].
Symptoms related to TVAI occur within the first 24 hours after the accident in 70% of cases. The remaining patients may be asymptomatic or present late, which can lead to undetected deterioration. Physical findings of posterior circulation ischaemia include dysarthria, impaired balance and coordination, ataxic gait, visual field defects, diplopia, nystagmus, Horner's syndrome, hiccups, lateral or medial medullary syndrome, lower cranial nerve palsies, pupillary abnormalities, and impaired consciousness. Because of the high proportion of asymptomatic cases, the Denver criteria, which comprise signs, symptoms, and risk factors of TVAI, can be used as a screening tool. Digital subtraction angiography (DSA), CT angiography, or MR angiography can confirm the diagnosis [1,4,5].
Treatment options for TVAI consist of observation, anticoagulation, endovascular treatment, and surgery. Heparin followed by warfarin for three months can be given as a conventional strategy. Open surgical treatment may be considered for uncontrolled hemorrhage [1].
The mortality rate of TVAI ranges from 11% to 100%, depending on the disease stage. This paper describes a rare case of bilateral vertebral artery injury leading to brain death after traumatic brain injury.
Case report
A 31-year-old Sundanese woman was admitted to the emergency room following a motorcycle accident. Her Glasgow Coma Scale (GCS) score was 12, with isochoric pupils, a normal pupillary light reflex, and no other neurologic deficit. Other vital signs were within normal limits. Immediate head computed tomography (CT) showed a Le Fort III fracture with cerebral edema (Fig. 1). Cervical X-ray showed grade 1 C2–C3 anterolisthesis with prevertebral soft-tissue swelling suggestive of hematoma (Fig. 2).
The patient was admitted to the high care unit (HCU) with a rigid neck collar and scheduled for elective arch bar maxillomandibular application and debridement. Neurology assessment found right-sided extremity weakness and a positive right pathologic reflex. Cervical magnetic resonance imaging was planned but cancelled because the patient was agitated. The rigid neck collar was removed after five days in the HCU.
She underwent arch bar maxillomandibular application and debridement on the fifth day under general anesthesia, with a hyperextended neck position and a nasotracheal tube during the procedure. She was admitted to the intensive care unit (ICU) afterwards. We used thiopental as a sedative agent to decrease intracranial pressure. After 19 hours of monitoring, her right-sided weakness had worsened. CT evaluation revealed bilateral centrum ovale infarction (Fig. 3). Her condition deteriorated on the third day in the ICU: GCS was 3 without a pupillary light reflex, and discontinuing sedation and analgesia did not improve her consciousness. The patient then underwent digital subtraction angiography (DSA). Grade 5 (transection) bilateral vertebral artery injury due to cervical spine displacement and grade 4 (occlusion) left internal carotid artery injury with CCF were recognised during angiography (Fig. 4). CCF coiling was performed, but cerebral perfusion did not improve and severe vasospasm appeared (Fig. 5). The patient was declared brain dead.
Discussion
Traumatic vertebral artery injury is rare, with an incidence of 0.5–2% among all trauma cases. In this case, cerebrovascular injury was associated with head and neck trauma. Cervical hyperflexion, hyperextension, dislocation, and fracture can cause intramural thrombus formation due to intimal injury, leading to total occlusion. In the advanced stage, blood vessel transection, as happened in our patient, can be fatal (Table 1) [1,6]. According to the Denver screening criteria, our patient's condition was consistent with the signs and symptoms of cerebrovascular injury (Table 2) [7]. We believe the left internal carotid artery occlusion happened after a head impact. Decreased blood flow due to CCF formation also promoted intravascular thrombosis. Furthermore, vertebral artery injury might already have occurred due to anterolisthesis induced by the traumatic brain injury, and worsened after the removal of cervical immobilisation and the hyperextended neck position during surgery.
DSA is the gold standard for diagnosing cerebrovascular injury. Other diagnostic modalities are Doppler ultrasonography, magnetic resonance angiography, and computed tomography angiography (CTA) [1,8]. The treatment strategy includes conservative, endovascular, and surgical options, based on the injury grade [9]. The grade 5 cerebrovascular injury in our patient was an indication for surgery; unfortunately, she was already brain dead.
DSA is also the gold standard for diagnosing CCF. The goal of CCF closure is to restore blood flow in the internal carotid artery [10]. In this patient, however, intracerebral blood flow remained inadequate after CCF coiling, aggravated by the bilateral vertebral artery injury.
Conclusion
Brain death in this patient resulted from cerebral hypoperfusion following grade 5 bilateral vertebral artery injury and grade 4 left internal carotid artery injury. We believe those injuries could have been prevented by cervical immobilisation and early endovascular intervention. Semirigid immobilisation with a cervical orthosis for 6–12 weeks is a conservative strategy for traumatic spondylolisthesis [11].
Fig. 1
Fig. 1 Head computerized tomography showed Le Fort III fracture
Fig. 4
Fig. 4 Digital subtraction angiography showed grade 5 left internal carotid artery injury (a) with post coiling carotid cavernous fistula (b) and grade 5 left (c) and right (d) vertebral arteries injury (shown by arrow)
Table 1
Cerebrovascular injury classification
Table 2
Denver criteria for cerebrovascular injury screening. CT computerized tomography, GCS Glasgow Coma Scale
p38β MAPK mediates ULK1-dependent induction of autophagy in skeletal muscle of tumor-bearing mice
Muscle wasting is the key manifestation of cancer-associated cachexia, a lethal metabolic disorder seen in over 50% of cancer patients. Autophagy is activated in cachectic muscle of cancer hosts along with the ubiquitin-proteasome pathway (UPP), contributing to accelerated protein degradation and muscle wasting. However, the established signaling mechanism that activates autophagy in response to fasting or denervation does not seem to mediate cancer-provoked autophagy in skeletal myocytes. Here, we show that p38β MAPK mediates autophagy activation in cachectic muscle of tumor-bearing mice via novel mechanisms. Complementary genetic and pharmacological manipulations reveal that activation of p38β MAPK, but not p38α MAPK, is necessary and sufficient for Lewis lung carcinoma (LLC)-induced autophagy activation in skeletal muscle cells. Particularly, muscle-specific knockout of p38β MAPK abrogates LLC tumor-induced activation of autophagy and UPP, sparing tumor-bearing mice from muscle wasting. Mechanistically, p38β MAPK-mediated activation of transcription factor C/EBPβ is required for LLC-induced autophagy activation, and upregulation of autophagy-related genes LC3b and Gabarapl1. Surprisingly, ULK1 activation (phosphorylation at S555) by cancer requires p38β MAPK, rather than AMPK. Activated ULK1 forms a complex with p38β MAPK in myocytes, which is markedly increased by a tumor burden. Overexpression of a constitutively active p38β MAPK in HEK293 cells increases phosphorylation at S555 and other amino acid residues of ULK1, but not several of the AMPK-mediated sites. Finally, ULK1 activation is abrogated in tumor-bearing mice with muscle-specific knockout of p38β MAPK. Thus, p38β MAPK appears to be a key mediator of cancer-provoked autophagy activation, and a therapeutic target of cancer-induced muscle wasting.
INTRODUCTION
Cancer has been increasingly recognized as a systemic disorder that stresses multiple organs independent of its location. At least 50% of cancer patients experience cachexia, a systemic wasting syndrome manifested as weight loss, inflammation, insulin resistance, and increased muscle protein breakdown. Progressive loss of muscle mass (muscle wasting) contributes significantly to cancer-associated morbidity and mortality [1,2]. However, the etiology of cancer cachexia is not well defined and there is no FDA-approved treatment for this lethal disorder.
There has been a general consensus that accelerated muscle protein degradation is a major cause of cachexia-associated muscle mass loss. It has been well-established that the ubiquitin proteasome pathway (UPP) plays an important role in cancer-induced muscle wasting by degrading myofibrillar proteins [3][4][5]. More recent evidence indicates that cancer also induces autophagy activation in the cachectic muscle of tumor-bearing mice [6][7][8][9] and cancer patients [10][11][12]. Autophagy targets cytoplasmic constituents including ubiquitinated protein aggregates and organelles for degradation by lysosomes [13,14]. Autophagy inhibition blocks muscle protein degradation induced by the activation of Toll-like receptor 4 (TLR4) [15], a plasma membrane receptor that is activated by danger-associated molecular patterns (DAMPs) [16] and mediates cancer-induced muscle wasting [4,9,17]. Thus, cancer-provoked autophagy activation is considered a therapeutic target of cancer-induced muscle wasting. However, the intramuscular signaling pathways that mediate cancer-induced activation of autophagy remain poorly understood. Elucidating and thereby targeting the cellular signaling pathways that mediate cancer-induced activation of autophagy could allow intervention of cancer-induced muscle wasting.
The Akt-FoxO signaling pathway inversely mediates the activity of both autophagy and UPP in muscle in response to such catabolic stimuli as fasting or denervation [18,19]. However, Akt is activated in cachectic muscle of tumor-bearing mice [20,21] and cancer patients [11,12], which inhibits FoxOs by promoting their translocation out of nuclei [22]. Thus, the Akt-FoxO signaling pathway does not appear to mediate cancer-induced muscle catabolism, which is due largely to systemic inflammation. On the other hand, we observed previously that inflammation-activated p38 MAPK mediates both the autophagy and UPP activation in skeletal muscle in response to TLR4 activation by lipopolysaccharide [15]. Other inflammatory mediators implicated in cancer cachexia also activate p38 MAPK, including oxidative stress [23], TNFα [24], IL-6 [25], IL-1 [26], TWEAK [27], and activin A/myostatin [28], as well as extracellular Hsp70 and Hsp90 [4], while promoting autophagy- and/or UPP-mediated muscle protein loss. However, it is unknown which of the three p38 MAPK isoforms expressed in skeletal muscle (α, β, and γ) mediates cancer-induced autophagy activation. More importantly, how p38 MAPK activates autophagy is unknown.
In the present study, we demonstrate that activation of the p38β MAPK isoform is necessary and sufficient for autophagy activation in skeletal muscle in a mouse model of cancer cachexia, and that deletion of p38β MAPK in skeletal muscle abrogates muscle wasting by attenuating muscle protein degradation mediated by autophagy as well as UPP. Mechanistically, p38β MAPK mediates cancer-provoked autophagy activation by upregulating the Atg8 orthologues LC3b and Gabarapl1 as well as by activating ULK1. These data support p38β MAPK as a key mediator and therapeutic target of cancer-associated muscle wasting.
LLC induces autophagy activation in skeletal muscle cells through p38β MAPK
We previously showed that LLC induces an increase in autophagy flux and autophagosome formation in cultured C2C12 myotubes as well as mouse muscle [9]. In the present study, we further investigated whether LLC induces autophagy activation in skeletal muscle cells through p38 MAPK by monitoring the autophagy marker LC3-II. By pretreating C2C12 myotubes, which respond to cancer cell-conditioned media in a similar manner as primary myotubes [4], with the p38α/β MAPK inhibitor SB202190, we observed that the induction of autophagy by LLC cell-conditioned medium (LCM) required p38 MAPK activation (Figure 1A). In addition, systemic administration of SB202190 to LLC tumor-bearing mice inhibited autophagy activation in cachectic muscle (Figure 1B). These results suggest that LLC induces autophagy activation in skeletal muscle through the activation of p38α and/or p38β MAPK.
To identify the isoform of p38 MAPK that mediated LLC induction of autophagy, we utilized siRNA-mediated gene silencing and observed that the loss of p38β MAPK, but not p38α MAPK, abolished LCM-induced autophagy activation (Figure 1C). To examine whether activated p38β MAPK actually stimulated autophagy flux, a constitutively active mutant of p38α MAPK or p38β MAPK [29] was expressed in myotubes. Only the active p38β MAPK increased the LC3-II level. In addition, treatment of myotubes with the lysosome inhibitor chloroquine (CQ) further increased LC3-II in active p38β MAPK-expressing myotubes. Further, expression of active p38β MAPK resulted in a loss of the autophagy-selective target p62, and CQ treatment abrogated this effect (Figure 1D). These results indicate that p38β MAPK activation stimulated autophagy flux. Similarly, LC3-II was specifically increased by the overexpressed active mutant of p38β MAPK in the muscle (TA) of tumor-free mice (Figure 1E). Furthermore, co-transfection of GFP-LC3 with active p38β MAPK, but not active p38α MAPK, in mouse muscle resulted in increased autophagosome formation as indicated by the formation of GFP-LC3-incorporated puncta (Figure 1F). These data provided evidence that p38β MAPK activation is necessary and sufficient for autophagy activation in skeletal muscle cells under a tumor burden. (B) Inhibition of p38 MAPK attenuates autophagy activation in skeletal muscle of LLC tumor-bearing mice. Seven days after LLC implantation, SB202190 was injected i.p. (5 mg/kg) daily, with DMSO (50%) as vehicle control, for 14 days. Lysate of TA collected 21 days after LLC implantation was analyzed by Western blotting for LC3 (samples from 3 mice per group were loaded on each gel). (C) LCM-induced autophagy activation in myotubes is dependent on p38β MAPK. C2C12 myoblasts were transfected with control, p38α MAPK- or p38β MAPK-specific siRNA. After differentiation, myotubes were treated with LCM or control medium for 8 h.
Autophagy activation was evaluated by Western blotting analysis of LC3. (D) Expression of constitutively active p38β MAPK stimulates autophagy flux in C2C12 myotubes. Plasmids encoding a constitutively active mutant of p38α MAPK or p38β MAPK with the HA tag were transfected into C2C12 myoblasts, with empty vector as control. After differentiation, myotubes were treated with chloroquine (CQ, 20 μM) for 8 h. LC3, p62 and expression of the p38 MAPK mutants were monitored by Western blotting. (E) Constitutively active p38β MAPK activates autophagy in mouse muscle. Plasmids encoding the constitutively active mutant of p38α MAPK or p38β MAPK fused with the HA tag were transfected into the TA of mice. Empty vector was transfected into the contralateral TA. On day 14, expression of the p38 MAPK mutants and autophagy activation were analyzed by Western blotting (data from two mice per group are shown). Data were analyzed by one-way ANOVA. * denotes a difference between bracketed groups or from controls (p < 0.05). (F) Constitutively active p38β MAPK stimulates autophagosome formation in mouse muscle. A plasmid encoding GFP-LC3 was transfected into mouse TA muscle, and co-transfected with the plasmid encoding the constitutively active mutant of p38α MAPK or p38β MAPK in the contralateral TA. After 7 days, TA muscle was collected. Frozen sections were prepared and stained with DAPI. Autophagosome formation was evaluated by confocal microscopy. Bars represent 50 μm.
LLC induces muscle wasting through p38β MAPK-mediated activation of autophagy and UPP
Next, we investigated whether LLC-induced activation of autophagy and muscle wasting are attenuated in the muscle of mice with muscle-specific knockout of p38β MAPK (p38β MKO) [30]. It was previously shown that the muscle phenotype is not altered by deleting the p38β gene [28,30,31]. We observed that p38β MKO mice were resistant to LLC-induced autophagy activation as measured by LC3-II and p62 levels (Figure 2A). In addition, p38β MKO mice were resistant to LLC-stimulated UPP activity as monitored by the levels of atrogin1, UBR2 and the myofibrillar protein myosin heavy chain (MHC) (Figure 2B). Consequently, p38β MKO mice were spared from LLC-induced muscle wasting without altered tumor growth, as measured by tumor weight, body and muscle weight, muscle proteolysis (tyrosine release), muscle strength (grip strength, Figure 2C) and myofiber cross-sectional area (Figure 2D). Thus, LLC induces muscle wasting through p38β MAPK-mediated activation of both autophagy and UPP.
p38β MAPK-mediated C/EBPβ activation is critical to LLC-induced autophagy activation
We previously showed that p38β MAPK mediates LLC-induced upregulation of atrogin1 [29] and UBR2 [32] through the activation of C/EBPβ binding to a cis-element in their 5'-promoters by phosphorylating the Thr-188 residue of C/EBPβ [29], and that LLC-induced muscle wasting is abrogated in C/EBPβ knockout mice [21]. We therefore investigated whether the p38β MAPK → C/EBPβ signaling pathway also mediates LLC activation of autophagy. We observed that LLC-induced activation of C/EBPβ was blocked in the muscle of p38β MKO mice (Figure 3A), confirming in vivo that p38β MAPK mediates LLC-induced activation of C/EBPβ. To investigate whether C/EBPβ is critical to LLC-induced autophagy activation, we treated C/EBPβ-deficient C2C12 myotubes with LCM, and observed a dependence of LCM-induced autophagy activation on C/EBPβ (Figure 3B). Further, we observed that LLC-induced autophagy activation was abrogated in the muscle of C/EBPβ knockout mice (Figure 3C). Thus, p38β MAPK-mediated activation of C/EBPβ is critical to LLC-induced autophagy activation.
p38β MAPK → C/EBPβ signaling mediates upregulation of specific autophagy-related genes
To understand how C/EBPβ mediates LLC-induced autophagy activation, we searched a database (http://tfbind.hgc.jp/) for potential C/EBPβ-binding sites in the 5'-promoter regions of autophagy-related genes and identified multiple sites within -1 kilobase in five important autophagy-related genes (Figure 4A). By analyzing mRNA levels of these genes in the muscle of LLC tumor-bearing wild-type and C/EBPβ knockout mice, we found that the mRNAs of LC3b and Gabarapl1, two Atg8 orthologues, were upregulated by LLC in a C/EBPβ-dependent manner (Figure 4B). Corroborative data were obtained in C2C12 myotubes, where LCM treatment upregulated the two genes in a C/EBPβ-dependent manner (Figure 4C). To determine whether LLC stimulates C/EBPβ binding to the two gene promoters, we performed the chromatin immunoprecipitation (ChIP) assay and observed that LCM treatment of C2C12 myotubes stimulated C/EBPβ binding to multiple sites in the LC3b and Gabarapl1 promoters (Figure 4D). These data suggest that C/EBPβ upregulates the two Atg8 orthologues in muscle cells in response to a tumor burden.
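The kind of promoter scan described above can be approximated in silico. A minimal sketch follows, assuming the commonly cited C/EBP consensus ATTGCGCAAT as the query motif and an invented toy promoter sequence; the actual search used the TFBIND server with position-weight matrices, so this literal/IUPAC match is only an illustration of the principle.

```python
import re

# IUPAC degenerate-base codes mapped to regex character classes.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "[AG]", "Y": "[CT]",
         "S": "[CG]", "W": "[AT]", "K": "[GT]", "M": "[AC]", "N": "[ACGT]"}

def scan_promoter(seq, consensus):
    """Return 0-based start positions where the (possibly degenerate) consensus matches."""
    pattern = "".join(IUPAC[base] for base in consensus.upper())
    return [m.start() for m in re.finditer(pattern, seq.upper())]

# Toy promoter with one embedded C/EBP-like site (ATTGCGCAAT) at position 12.
promoter = "GGGGGGGGGGGG" + "ATTGCGCAAT" + "CCCCCCCCCC"
hits = scan_promoter(promoter, "ATTGCGCAAT")
```

A degenerate query such as `"ATTGCNCAAT"` matches the same site, which is how near-consensus variants within the -1 kb window would be picked up.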
To verify that p38β MAPK is required for C/EBPβ-mediated upregulation of LC3b and Gabarapl1, we observed that the upregulation of these genes in LCM-treated myotubes was inhibited by SB202190 (Figure 5A). Further, upregulation of these genes in skeletal muscle was abrogated in LLC tumor-bearing p38β MKO mice (Figure 5B). These data confirm that LLC upregulates LC3b and Gabarapl1 through the p38β MAPK → C/EBPβ signaling pathway.
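Relative transcript levels of the kind reported in these real-time PCR experiments are conventionally quantified by the Livak 2^-ΔΔCt method. A minimal sketch, with invented Ct values for a target gene (e.g. LC3b) and a hypothetical reference gene in control versus LCM-treated myotubes:

```python
def fold_change_ddct(ct_target_ctrl, ct_ref_ctrl, ct_target_treat, ct_ref_treat):
    """Livak 2^-ΔΔCt relative quantification: fold change of target vs. control."""
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl    # ΔCt in the control sample
    d_ct_treat = ct_target_treat - ct_ref_treat  # ΔCt in the treated sample
    dd_ct = d_ct_treat - d_ct_ctrl               # ΔΔCt
    return 2 ** (-dd_ct)

# Hypothetical Ct values: a 2-cycle earlier target signal after treatment,
# with an unchanged reference gene, corresponds to a 4-fold induction.
fold = fold_change_ddct(25.0, 20.0, 23.0, 20.0)  # 4.0
```

This assumes near-100% amplification efficiency for both genes, which is the standard caveat of the method.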
LLC induces ULK1 activation in skeletal muscle through p38β MAPK

Both LC3b and Gabarapl1 are members of the ATG8 family that are essential for autophagosome formation [33]. On the other hand, ATG8 family members must be activated by the ULK1 complex post-translationally to initiate the lipidation process required for autophagosome formation [34]. Therefore, we examined whether LLC induced ULK1 activation in skeletal muscle cells. LCM treatment of C2C12 myotubes for 1 h increased ULK1 phosphorylation on Ser-555 (pS555-ULK1, Figure 6A). Phosphorylation of ULK1 on this serine residue is known to activate ULK1 by AMP-activated protein kinase (AMPK) upon nutrient deprivation [35,36]. LCM was shown previously to induce mTOR inhibition and AMPK activation in C2C12 myotubes [37]. To examine the involvement of AMPK in LCM-induced activation of ULK1, we pretreated myotubes with the AMPK inhibitor Compound C [38]. To our surprise, Compound C did not alter LCM-induced ULK1 phosphorylation on Ser-555, although it did inhibit AMPK activation as measured by the phosphorylation state of its Thr-172 residue. However, pretreatment with the p38α/β MAPK inhibitor SB202190 attenuated LCM-induced Ser-555 phosphorylation of ULK1, without affecting LCM-induced Thr-172 phosphorylation of AMPK (Figure 6A). These data suggest that p38 MAPK, rather than AMPK, is critical to LCM-induced activation of ULK1. We then investigated whether p38β MAPK interacted with ULK1 by performing immunoprecipitation of p38β MAPK from myotube lysate. We observed co-precipitation of pS555-ULK1 with p38β MAPK in control myotubes, and the level of co-precipitated pS555-ULK1 dramatically increased in LCM-treated myotubes (Figure 6B), suggesting that p38β MAPK interacts with ULK1, resulting in phosphorylation of its Ser-555 residue in myocytes, and that this activity is stimulated upon p38β MAPK activation by a tumor burden.
To identify the p38 MAPK isoform responsible for LCM-stimulated ULK1 phosphorylation on Ser-555, we performed siRNA-mediated gene silencing in myotubes and observed that p38β MAPK, but not p38α MAPK, was required for this reaction (Figure 6C). To verify that activation of p38β MAPK is sufficient to phosphorylate ULK1, FLAG-tagged ULK1 was co-expressed with constitutively active p38β MAPK in HEK293T cells. Mass spectrometry analysis of FLAG-ULK1 isolated from cell lysate revealed multiple phosphorylated amino acid residues including Ser-555 (Supplementary Table 1). However, these sites did not include a number of known AMPK-mediated phosphorylation sites in ULK1, such as Ser-317, Ser-467, Ser-637 and Ser-777 [36,39]. In combination, the above data support that p38β MAPK stimulates ULK1 activity upon activation by cancer, independent of AMPK, in the cellular environment. Finally, we confirmed that Ser-555 phosphorylation of ULK1 increased in the muscle of LLC tumor-bearing p38β f/f mice but not in that of muscle-specific p38β MAPK knockout mice. On the other hand, LLC-induced AMPK activation was not altered in p38β MAPK-deficient muscle (Figure 6E).

FIGURE 2. Mice with muscle-specific knockout of p38β MAPK and control mice (p38β MAPK f/f) were implanted with LLC cells. In 21 days, mice were euthanized for analysis of muscle wasting. Lysate of TA collected from the mice was analyzed by Western blotting for LC3 and p62 to evaluate autophagy activity (A), and for UBR2, atrogin1 and MHC to evaluate UPP activity (B). Muscle wasting in the mice was analyzed by examining tumor weight, body and muscle weight, tyrosine release and grip strength (C). Muscle mass was measured as muscle fiber cross-sectional area (D). Data in A to C were analyzed by one-way ANOVA. Data in D were analyzed by Chi-square analysis. * denotes a difference (p < 0.05).
Therefore, we conclude that p38β MAPK, rather than AMPK, mediates LLC-induced autophagosome formation by activating ULK1, in addition to upregulating LC3b and Gabarapl1.
DISCUSSION
The current study identifies p38β MAPK as a key mediator of cancer-induced autophagy activation in skeletal muscle, acting through activation of ULK1 as well as upregulation of the C/EBPβ-controlled LC3b and Gabarapl1 genes. Importantly, we were able to abrogate cancer-induced muscle wasting by deleting p38β MAPK in skeletal muscle to prevent muscle protein degradation mediated by autophagy as well as UPP. These findings suggest that p38β MAPK is a key mediator and a therapeutic target of cancer-induced muscle wasting.
We show that p38β MAPK upregulates LC3b and Gabarapl1 through activating C/EBPβ. The LC3 and Gabarapl family of proteins are mammalian orthologues of ATG8, a yeast autophagy-related protein involved in the formation of autophagosomes [33]. Increased expression of LC3/Gabarapl and activation of autophagy have been observed in cachectic muscle of cancer patients [10][11][12]. In various types of cells, increased expression of LC3/Gabarapl is often associated with autophagy activation [40,41]. As substrates of the ULK1-mediated lipidation process that is critical for autophagosome formation [34], increased expression of LC3/Gabarapl is likely to facilitate autophagosome formation. On the other hand, the possibility exists that C/EBPβ may regulate additional autophagy-related genes that are rate-limiting for autophagosome formation.
Autophagy is activated by nutritional deprivation through AMPK-mediated phosphorylation of ULK1 on multiple sites including Ser-555 in various cell types [35,36]. In skeletal muscle, autophagy is also activated by the inactivation of Akt, which inversely regulates FoxO3-mediated upregulation of autophagy-related gene expression in response to denervation or starvation [18,42], or by AMPK-mediated activation of FoxO3 in response to C26 colon adenocarcinoma [43]. We previously showed that LLC activates Akt and inactivates FoxO1/3 in muscle cells, while activating p38 MAPK concomitantly [21]. In the present study we show that, despite AMPK activation by LLC in skeletal muscle, AMPK does not mediate ULK1 activation in LLC tumor-bearing mice. On the contrary, autophagy activation in skeletal muscle by LLC requires p38β MAPK-mediated activation of ULK1. Thus, a tumor activates autophagy in skeletal muscle through a signaling mechanism distinct from that of nutritional deprivation.

FIGURE 4. C/EBPβ mediates tumor induction of autophagy-related genes in skeletal muscle cells. (A) Potential C/EBPβ-binding sites are identified in autophagy-related genes. A database search identified multiple potential binding sites of C/EBPβ in the 5' promoter regions of the listed autophagy-related genes. (B) LC3b and Gabarapl1 are upregulated in skeletal muscle of LLC tumor-bearing mice in a C/EBPβ-dependent manner. Total RNA was extracted from the TA of wild-type and C/EBPβ knockout mice implanted with LLC cells for 21 days, and analyzed for the mRNA of the above identified autophagy-related genes by real-time PCR. (C) LC3b and Gabarapl1 are upregulated in LCM-treated myotubes in a C/EBPβ-dependent manner. Total RNA was extracted from C2C12 myotubes transfected with C/EBPβ-specific or control siRNA. The mRNA of C/EBPβ, LC3b and Gabarapl1 was determined by real-time PCR. Data from panels B and C were analyzed by one-way ANOVA. * denotes a difference (p < 0.05). (D) LCM activates C/EBPβ binding to the LC3b and Gabarapl1 promoters in myotubes. C/EBPβ binding to the LC3b and Gabarapl1 promoters in C2C12 myotubes treated with LCM or the control NL20 cell-conditioned medium (NCM) was analyzed by the ChIP assay. Pre-immune IgG was used as control for the C/EBPβ-specific antibody.
It is well established that in skeletal muscle p38 MAPK is activated by various inflammatory mediators implicated in cancer cachexia, including various cytokines [24-28], reactive oxygen species (ROS) [23] and extracellular vesicle-associated Hsp70 and Hsp90 [4]. Of the three members of the p38 MAPK family expressed in skeletal muscle, p38α MAPK is responsible for most of the known roles of p38 MAPK, including mediating inflammation [44,45] and promoting myogenesis [46,47]. p38γ MAPK regulates the expansion of myogenic precursor cells [48], endurance exercise-induced mitochondrial biogenesis and angiogenesis [30], as well as glucose uptake [49]. On the other hand, p38β MAPK has few known functions. Utilizing genetic manipulations, including muscle-specific p38β MAPK knockout mice [30], in addition to pharmacological inhibition of p38α/β MAPK (due to the lack of p38β MAPK-specific inhibitors), the current study demonstrated for the first time that p38β MAPK is essential to autophagy activation during muscle wasting induced by LLC. The current study also demonstrated for the first time in vivo that p38β MAPK is essential to UPP activation and muscle wasting induced by cancer. Therefore, developing p38β MAPK-specific pharmacological inhibitors would be highly desirable for intervention in cancer cachexia.
Our findings on the role of p38β MAPK in autophagy activation may have significance beyond skeletal muscle cells. For example, p38 MAPK mediates various stress-induced autophagy activation in a variety of cells [50-54]. In particular, in response to TLR4 activation, p38 MAPK mediates the autophagy activation associated with innate immunity [55], which is similar to cancer-induced muscle wasting [4,9,17]. However, which p38 MAPK isoform is involved and how it activates autophagy in those processes are unknown. It is possible that a mechanism similar to the one we observed in skeletal muscle cells exists in other types of cells to mediate autophagy activation in response to inflammatory stresses.
(B) LCM stimulates p38β MAPK interaction with ULK1 in myotubes. C2C12 myotubes were treated with LCM for 1 h. Cell lysate was subjected to immunoprecipitation with pre-immune IgG or a p38β MAPK-specific antibody. The precipitate was analyzed by Western blotting for p38β MAPK and pSer-555 ULK1. (C) LCM stimulation of ULK1 phosphorylation on Ser-555 in myotubes is mediated by p38β MAPK specifically. C2C12 myoblasts were transfected with control, p38α MAPK- or p38β MAPK-specific siRNA. After differentiation for 96 h, myotubes were treated with LCM or control medium for 1 h. Knockdown of the p38 MAPKs and phosphorylation of ULK1 on Ser-555 were analyzed by Western blotting. (D) p38β MAPK is required for ULK1 activation in cachectic muscle of LLC tumor-bearing mice. Mice with muscle-specific knockout of p38β MAPK and control mice were implanted with LLC cells. Lysate of TA collected from these mice on day 21 was analyzed by Western blotting for ULK1 phosphorylation on Ser-555 and AMPK phosphorylation on Thr-172. Data were analyzed by one-way ANOVA. * denotes a difference (p < 0.05).
Taken together, our findings reveal that p38β MAPK mediates cancer-induced autophagy in skeletal muscle through novel transcriptional as well as post-translational mechanisms. In addition, our findings suggest that cancer-induced muscle wasting may be ameliorated by targeting a single signaling molecule in skeletal muscle, p38β MAPK.
Myogenic cell culture
Murine C2C12 myoblasts (American Type Culture Collection) were cultured in growth medium (DMEM supplemented with 10% fetal bovine serum) at 37 °C with 5% CO2. At 85-90% confluence, myoblast differentiation was induced by incubation for 96 h in differentiation medium (DMEM supplemented with 4% heat-inactivated horse serum) to form myotubes. Preconditioned medium from cultures of Lewis lung carcinoma cells (obtained from the National Cancer Institute, Frederick, MD) or the non-tumorigenic human lung epithelial cell line NL20 (obtained from the American Type Culture Collection), cultured for 48 h, was centrifuged, and the supernatant was used to treat C2C12 myotubes (25% of final volume in fresh medium) when indicated. Pretreatment with SB202190 or compound C (10 μM, dissolved in DMSO at 0.1% final concentration; Sigma-Aldrich, St. Louis, MO) for 30 min was carried out when indicated. All cell culture experiments were independently replicated three times (N = 3).
Animal use
Experimental protocols were approved in advance by the institutional Animal Welfare Committee at the University of Texas Health Science Center at Houston. For LLC cell xenografts, 100 μl of LLC cell suspension (1 × 10 6 cells; National Cancer Institute), or an equal volume of vehicle (PBS), was injected subcutaneously into the right flank of 7-week-old male C57BL/6 mice (The Jackson Laboratory, Bar Harbor, ME), C/EBPβ−/− mice on a C57BL/6 background that were bred from C/EBPβ+/− mice [56], or mice with muscle-specific knockout of p38β (p38β MKO) [28]. The latter were created by crossbreeding floxed-p38β (p38β f/f) mice on a C57BL/6 background [30] with muscle creatine kinase-Cre (MCK-Cre) mice (The Jackson Laboratory, Bar Harbor, ME). SB202190 was injected i.p. (5 mg/kg) daily from day 7 after tumor implantation. Mice were euthanized on day 21 for evaluation of muscle wasting. When indicated, plasmids encoding constitutively active mutants of p38 MAPK isoforms and the GFP-LC3 fusion protein [55] were transfected into the TA by electroporation as previously described [29].
Transfection of siRNA and plasmids in C2C12 myoblasts
Predesigned siRNAs specific for C/EBPβ, p38α and p38β were purchased from Sigma-Aldrich. The IDs of the siRNAs were SASI_Mm01_00187563, SASI_Mm01_00020743 and SASI_Mm01_00044863, respectively. Control siRNA was purchased from Invitrogen. These siRNAs were introduced into C2C12 myoblasts using the jetPRIME reagent (Polyplus-transfection Inc., Illkirch, France) according to the manufacturer's protocol. After 24 h, myoblasts were differentiated, and experiments were started after another 96 h, when myotubes had formed. Plasmids encoding constitutively active p38α and p38β isoforms [57] were transfected into C2C12 myoblasts using the jetPRIME reagent; empty vector was transfected as the control. These manipulations of p38 MAPK in myoblasts did not alter the end result of differentiation and formation of myotubes [29].
Real-time PCR
Total RNA was isolated from myotubes or muscle by using TRIzol reagent (Invitrogen, Carlsbad, CA). Real-time PCR was performed as described previously [21]. Sequences of specific primers are listed in Table 1. Data were normalized to GAPDH.
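The protocol above normalizes real-time PCR data to GAPDH. A common way to express such reference-normalized data is the Livak 2^-ΔΔCt method; the cited protocol [21] does not state its exact quantification scheme, so the sketch below is illustrative only, with hypothetical Ct values.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt relative quantification.

    ct_target/ct_ref: Ct of the gene of interest and of GAPDH in the
    treated sample; *_ctrl: the same values in the control sample.
    """
    d_ct_sample = ct_target - ct_ref            # normalize to GAPDH
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control          # relative to control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: a target amplifying 2 cycles earlier than in
# control after GAPDH normalization corresponds to a 4-fold induction.
fold = relative_expression(22.0, 18.0, 24.0, 18.0)
```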
Western blot analysis
Western blot analysis was carried out as described previously [24]. Antibodies to total and/or phosphorylated p38 MAPK (Thr180/Tyr182), p-C/EBPβ (Thr188), p-ULK1 (Ser555), total and phosphorylated AMPK (Thr172), p62, as well as p38α and p38β, were from Cell Signaling Technology (Beverly, MA). Antibodies to C/EBPβ and ULK1 were from Santa Cruz Biotechnology (Santa Cruz, CA). Antibody to atrogin1/MAFbx was from ECM Biosciences (Versailles, KY). Antibodies to UBR2 and LC3-II were obtained from Novus Biologicals (Littleton, CO). Anti-MHC antibody (MF-20) was from R&D Systems (Minneapolis, MN). Antibody to the HA tag was from Covance (Princeton, NJ, USA). Data were normalized to α-tubulin (antibody from the Developmental Studies Hybridoma Bank, University of Iowa, Iowa City, IA) or GAPDH (antibody from Millipore, Billerica, MA, USA). Levels of phosphorylated proteins were normalized to the corresponding total proteins.
Chromatin immunoprecipitation (ChIP) assay
The ChIP assay was performed as previously described [21]. Antibody against C/EBPβ was from Santa Cruz Biotechnology. Pre-immune IgG was from Sigma-Aldrich. The PCR primers used are listed in Table 1.
Histology and confocal microscopy studies
Cross sections of TA were fixed and stained with H&E by the Histology Core at the Lester and Sue Smith Breast Center, Baylor College of Medicine. Cross-sectional area of stained muscle sections was quantified using ImageJ software (NIH). Five view-fields with ~100 myofibers per field were measured in each section. Frozen sections of mouse TA (5 μm) expressing GFP-LC3 and/or a p38 MAPK mutant were stained with DAPI and examined with a Nikon A1R Confocal Laser Microscope using a 60× objective.
Adjustment of brightness, contrast, color balance, and final image size was achieved using Adobe Photoshop CS (Adobe Systems, San Jose, CA, USA).
Statistical analysis
Data are presented as the mean ± S.D. and were analyzed with one-way ANOVA or Student's t-test using SigmaStat software (Systat Software, San Jose, CA), as indicated. When applicable, control samples from independent experiments were normalized to a value of 1 without showing variation (actual variations were within a normal range). Chi-square analysis was carried out in R to compare the distributions of muscle fiber cross-sectional area among the groups. A p value < 0.05 was considered statistically significant.
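The one-way ANOVA used throughout the figures reduces to comparing between-group and within-group variance. As a minimal illustration of the statistic (the paper itself used SigmaStat; this pure-Python sketch is not their pipeline), the F value for a list of groups can be computed as:

```python
def one_way_anova_F(groups):
    """F statistic for one-way ANOVA; `groups` is a list of lists of floats."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # between-group sum of squares (each group mean vs the grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares (each value vs its own group mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within
```

The resulting F value would then be compared against the F distribution with (k − 1, n − k) degrees of freedom to obtain the p value.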
ACKNOWLEDGMENTS
This study was supported by grants from National Institute of Arthritis and Musculoskeletal and Skin Diseases to Y.-P. Li (R01 AR063786 and AR067319). We thank Andrey Tsvetkov of UTHealth for helpful discussions, and David Engelberg (Hebrew University, Jerusalem, Israel) for sharing plasmids encoding the constitutively active mutants of p38 MAPK isoforms.
SUPPLEMENTAL MATERIAL
All supplemental data for this article are available online at www.cell-stress.com.
CONFLICT OF INTEREST
The authors declare that they have no conflict of interest.
AUTHOR CONTRIBUTION
Y-PL and GZ designed experiments and wrote the manuscript. GZ, ZL, KWTS, HD, HAD, SG, YW and YW conducted the experiments.
Design of Polymeric Nanofiber Gauze Mask to Prevent Inhaling PM2.5 Particles from Haze Pollution
Recently, PM2.5 (particulate matter with a diameter of 2.5 micrometers or less) has become a major health hazard in the polluted air of many cities in China. Regular gauze masks are used to prevent inhaling PM2.5 fine particles; however, those masks are not able to filter out PM2.5 because of the large porosity of the mask materials, while masks with good protection usually have poor breathability, which brings other health risks. In this study, a polysulfone-based nanofiber mask filtration material was synthesized by electrospinning. The nanofiber mask material was characterized by SEM, an air permeability test, and a PM2.5 trapping experiment. The results indicate that the nanofiber mask material can efficiently filter out PM2.5 particles while preserving good breathability. We attribute this improvement to the nanoscale fibers, which give the same porosity as a regular gauze mask but with greatly reduced local interfiber space.
Introduction
PM2.5 (particulate matter with a diameter of 2.5 micrometers or less) in polluted air can pass directly through the lung alveoli and cause many diseases, including asthma [1]. Recently, many Chinese cities have been covered by haze. Figure 1 shows the polluted air in Beijing. The heavy metals adhering to PM2.5 particles may even lead to severe chronic health problems such as cancer after long-term exposure to a particle-laden environment [2].
To prevent inhaling PM2.5 in haze, people wear regular gauze masks. Most of these masks are made of non-woven fabric, activated carbon, or cotton, with fiber diameters of several micrometers [3]. They have the significant shortcomings of poor PM2.5 rejection and low air permeability [4].
In this paper, novel polymeric nanofiber masks were synthesized by electrospinning. Masks based on such nanofibers are expected to block PM2.5 particles well while maintaining good air permeability. These nanofibers can potentially be developed into high-efficiency, low-cost mask products.
Experimental Materials.
Polysulfone, acetone, polyethylene oxide, and dimethyl acetamide were purchased from Beijing Chemical Factory, China. Medical clinic masks, medical operating room masks, ITO PM2.5 masks, N95 respirators, and R95 masks were purchased from the Tianjin Youkang medical and health care products factory. All chemicals were of analytical grade and were used without further purification.
Procedure for Electrospinning.
Polysulfone solution was prepared at a concentration of 18 wt% by dissolving the polymer in DMAc/acetone (9:1) with vigorous stirring. The prepared solution was kept overnight without stirring at room temperature to remove air bubbles. For electrospinning, the 18 wt% polysulfone solution was loaded into a syringe with a metal needle connected to a high-voltage power supply (Tianjin Dongwen High Voltage Co., China). The voltage was 13 kV and the distance between the needle and the aluminium foil was 13 cm. The polymer solution was fed at a constant rate of 0.4 mL/h using a syringe pump. The nanofibers were collected on the surface of a non-woven polypropylene (PP) fabric on the grounded aluminium foil. The collection time was 15, 30, or 60 min. More detailed information is shown in Table 1.
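The observed thickness ranking of the mats (15 min < 30 min < 60 min, Section "Morphology Observation") follows directly from the constant feed rate: the deposited polymer mass scales linearly with collection time. A rough back-of-the-envelope estimate, assuming a solution density of about 1 g/mL and complete collection of the delivered polymer (both assumptions, not stated in the protocol):

```python
FEED_RATE_ML_PER_H = 0.4    # syringe pump setting from the protocol
POLYMER_WT_FRACTION = 0.18  # 18 wt% polysulfone
SOLUTION_DENSITY = 1.0      # g/mL, rough assumption for the DMAc/acetone solution

def polymer_delivered_mg(minutes):
    """Approximate mass of polysulfone delivered to the collector (mg)."""
    volume_ml = FEED_RATE_ML_PER_H * minutes / 60.0
    return volume_ml * SOLUTION_DENSITY * POLYMER_WT_FRACTION * 1000.0

# ~18, 36 and 72 mg of polymer for the three collection times
masses = {t: polymer_delivered_mg(t) for t in (15, 30, 60)}
```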
Morphology Observation.
The membranes were dried in vacuo and sputter-coated with gold before observation. The morphology of the nanofiber membranes was imaged by scanning electron microscopy (SEM, Hitachi T-1000, Japan).
Permeability Simulation Test.
The permeability simulation test device mainly includes a pump, a differential pressure gauge, a flow meter, and a bronchus model. The nanofiber mats or the commercial masks (control group) were clamped in a well-sealed testing chamber connected to a syringe at the top, with a pressure meter installed at the side exit. The bottom side of the chamber was open to the ambient air. Counterweights were placed on the syringe to adjust the pressure inside the testing chamber. The resulting pressure above the nanofiber mats was measured by the pressure meter.
Intercept Rate Test.
The intercept experiment was conducted with a condensation particle counter (model CPC 3772, TSI Inc.). The counter measures the concentration of particles with diameters between 10 and 2500 nm in air. The change in particle concentration across the filtration material represented its PM2.5 intercept rate.
Data Analysis.
All data are means ± SD from three independent experiments. Comparisons between multiple groups were performed with the ANOVA test in SPSS. P values less than 0.05 were considered statistically significant.
Morphology Observation.
Figure 2 shows the nanofibers deposited on the non-woven fabric. All the nanofiber mats are white and homogeneously distributed on the non-woven fabric. The thickness of the electrospun fiber mats increased in the order 15 min < 30 min < 60 min.
According to the SEM images, the nanofibers from 15 min of electrospinning were about 500-800 nm in diameter, with random orientation and high porosity (Figure 2). The interfiber distance is about 1-3 μm. The nanofibers from 30 and 60 min of electrospinning showed similar morphology to those from 15 min, except for their greater thickness (data not shown).
Permeability Test.
The permeability of the nanofiber masks was then compared with that of the commercial masks: the disposable non-woven face mask, the non-woven mask for operating rooms, Ito PM2.5, N95, and R95 (Figure 4). As listed in Table 2, the pressure drops of the three nanofiber masks ranked 15 min < 30 min < 60 min. Among the commercial masks, the disposable non-woven face mask showed the lowest pressure drop, while the R95 had the highest barrier to air permeability. A low pressure drop indicates good permeability.
Table 3 shows the PM2.5 rejection capability of each mask in the intercept rate test. The rejection ratio was calculated as rejected particles/total particles in air. All three nanofiber masks achieved high rejection of >90%, ranking 60 min > 30 min > 15 min. Among the commercially available masks, the disposable non-woven face mask showed the poorest PM2.5 rejection, 32.9%, which cannot meet the requirements of daily use. By contrast, the R95 mask, used medically to prevent virus transmission, had an extremely high rejection of 99.9%, the highest among the masks selected in our study.
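The rejection ratio defined above (rejected particles divided by total particles) is a one-line calculation from the upstream and downstream particle counts. A minimal sketch, with hypothetical counts chosen to reproduce the 32.9% figure reported for the disposable non-woven mask:

```python
def intercept_rate(upstream_count, downstream_count):
    """PM2.5 intercept (rejection) rate: rejected particles / total particles."""
    rejected = upstream_count - downstream_count
    return rejected / upstream_count

# Hypothetical counts: a mask that lets 671 of 1000 particles through
# rejects 32.9% of them.
rate = intercept_rate(1000, 671)
```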
Discussion
In this study, a novel nanofiber mask was synthesized by electrospinning to block PM2.5 from haze. The nanofiber mask showed better combined performance in air permeability and PM2.5 rejection than the commercial masks.
For a clear comparison, the advantages and disadvantages of the commercial masks are summarized in Table 4. Most of these masks are made of non-woven microfibers with large diameters of several micrometers [5]. Thin microfiber masks such as the disposable non-woven mask, though showing excellent air permeability, performed poorly in PM2.5 rejection due to the insufficient thickness of the fiber layer and the larger interfiber space. In contrast, the thick microfiber masks, Ito PM2.5 and N95, improved the PM2.5 rejection ratio to >80%, which is sufficient for daily use; but both had high resistance to air flow, which leads to uncomfortable breathing in use. The R95 mask, commonly used for medical purposes, can well protect doctors from virus infection; however, it is not suitable for daily protection against air pollution because of its extremely poor air permeability. The nanofiber mask with 15 min of electrospinning had a high PM2.5 rejection of 90% and acceptable air permeability (Tables 2 and 3). It is therefore a good raw material for masks against haze pollution.
There is an inherent trade-off between air permeability and PM2.5 rejection [6]: high air permeability usually reduces PM2.5 rejection, and vice versa. As shown in Figure 3, the nanofiber mask (15 min) displayed the best balance of air permeability and PM2.5 rejection among these masks. We attribute this unique feature to the nanoscale fiber size. At the same porosity, which determines air permeability, smaller fibers are required in greater numbers to achieve the same coverage as larger fibers. More fibers in the same area result in smaller interfiber spaces. As the SEM image in Figure 3 shows, when the fiber scale is reduced to the submicron range, the interfiber space is also reduced to the micron scale, smaller than the PM2.5 size, and consequently rejects PM2.5 efficiently.
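The argument above can be made quantitative with a deliberately idealized model (our simplification, not from the paper): for a 1-D array of parallel fibers of diameter d with pitch d + s, the open fraction is s / (d + s), so at a fixed porosity the surface-to-surface gap s scales linearly with fiber diameter.

```python
def interfiber_gap(fiber_diameter_um, porosity):
    """Surface-to-surface gap for an idealized 1-D array of parallel fibers.

    With pitch p = d + s, the open (porous) fraction is s / (d + s), so
    s = d * porosity / (1 - porosity): at fixed porosity the gap scales
    linearly with fiber diameter.
    """
    return fiber_diameter_um * porosity / (1.0 - porosity)

# At 70% open area: 5 um microfibers leave a gap far above 2.5 um,
# while 0.5 um nanofibers bring the gap below the PM2.5 cut-off.
micro_gap = interfiber_gap(5.0, 0.7)
nano_gap = interfiber_gap(0.5, 0.7)
```

Real electrospun mats are 2-D random networks, so this only illustrates the scaling, not the actual pore-size distribution.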
Conclusion
This study synthesized a polysulfone nanofiber mask filtration material by electrospinning. The nanofiber mask material was characterized by SEM, an air permeability test, and a PM2.5 trapping experiment. It can efficiently filter out PM2.5 particles while preserving good breathability. In this regard, the nanofiber-based material could be made into comfortable and effective masks to prevent inhaling harmful particles during haze pollution, and the nanofiber masks could be developed into commercially available products in the future.
Figure 4: Comprehensive comparison of permeability and intercept rate.
Table 2: Pressure drop statistics in the permeability test.
Table 4: Comparison of different masks.
Mass spectrometry dataset on apo-SOD1 modifications induced by lipid aldehydes
Metal-deficient Cu,Zn-superoxide dismutase (apo-SOD1) is associated with the formation of SOD1 aggregates that accumulate in ALS disease. The data supplied in this article support the accompanying publication showing SOD1 modification and aggregation induced by lipid aldehydes [1]. Here, we present the LC-MS/MS dataset on apo-SOD1 modification induced by seven different lipid aldehydes: 4-hydroxy-2-hexenal (HHE), 4-hydroxy-2-nonenal (HNE), 2-hexen-1-al (HEX), 2,4-nonadienal (NON), 2,4-decadienal (DEC) or secosterol aldehydes (SECO-A or SECO-B). Modified protein samples were digested with trypsin and sequenced by a LC coupled to a Q-TOF instrument. Protein sequencing and peptide modification analysis was performed by Mascot 2.6 (Matrix Science) and further validated by manual inspection. Mass spectrometry data (RAW files) obtained in this study have been deposited to MassIVE and the observed peptide-aldehyde adducts can be used in further studies exploring SOD1 modifications in vivo.
Value of the data
• The data show the characterization of apo-SOD1 lipoxidation sites induced by seven biologically relevant lipid aldehydes.
• These data can be useful for researchers studying protein lipoxidation.
• These data can be useful for studies investigating protein post-translational modifications induced by lipid peroxidation products.
Protein digestion
After incubation, samples were first reduced with sodium borohydride (NaBH4, 5 mM) for 1 h at room temperature, then subjected to disulfide reduction with dithiothreitol (DTT, 5 mM) for 30 min at 60 °C and Cys alkylation with iodoacetamide (15 mM) for 30 min at room temperature. Protein digestion was done with proteomics-grade trypsin (Promega, Madison, WI,
LC-MS/MS analysis
The peptide mixture was analyzed by an LC-MS/MS system consisting of a nanoAcquity UPLC system (Waters Corp., Milford, MA, USA) coupled to a quadrupole time-of-flight (Q-TOF) mass spectrometer (TripleTOF 6600, Sciex, USA), as described previously [4]. First, samples were desalted on the trapping column (Waters nanoAcquity Trap column, 180 μm × 20 mm, 5 μm) using 1% solvent B at a flow rate of 10 μL/min for 2 min under isocratic conditions. Peptides were then separated on a C18 analytical column (Waters nanoAcquity UPLC, 75 μm × 150 mm, 3.5 μm) using a gradient of 0.1% aqueous formic acid (mobile phase A) and 0.1% formic acid in acetonitrile (mobile phase B). Chromatographic separation was done at a flow rate of 400 nL min−1 for a total run time of 97 min according to the gradient shown below. The column temperature was kept at 35 °C and the sample injection volume was 2 μL. Peptides were infused into the TripleTOF 6600 instrument through a nano-ESI source (Sciex, Framingham, MA) equipped with a nano-ESI emitter tip (New Objective). The mass spectrometer parameters were:
Ion source parameter settings:
• Ion spray voltage floating (ISVF): 2400 V
• Curtain gas (CUR): 20
• Interface heater (IHT): 120
• Ion source gas 1 (GS1): 3
• Ion source gas 2 (GS2): 0
• Declustering potential (DP): 80 V
Tandem mass spectra were acquired in data-dependent mode. The TOF-MS survey scan was set to the m/z range of 300-2000 and the accumulation time to 100 ms. The top 25 MS/MS spectra were acquired in the mass range of m/z 100-2000 with an accumulation time of 25 ms, for an overall cycle time of 775 ms. Precursor ion selection criteria included a charge state between +2 and +5 and an ion intensity greater than 150 counts. Previously fragmented precursor ions were excluded from reanalysis for 20 s. Fragmentation was performed using rolling collision energy with a collision energy spread of 5. For LC-MS/MS quality control we used a 1 pmol/μL stock solution of beta-galactosidase, prepared according to the manufacturer's instructions (LC/MS peptide calibration kit, P/N 4465867), pre-digested BSA, or HeLa protein digest standard (Pierce, Thermo Scientific). Data acquisition was performed with Analyst TF 1.7 (Sciex). Mass spectrometry raw data have been deposited to the Mass Spectrometry Interactive Virtual Environment (MassIVE), with access via https://massive.ucsd.edu/ProteoSAFe/dataset.jsp?accession=MSV000085309.
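The data-dependent precursor selection described above (top 25 ions, charge +2 to +5, intensity > 150 counts, 20 s dynamic exclusion) can be sketched as a simple filter; the function and data layout below are our own illustration, not instrument code.

```python
def select_precursors(candidates, excluded_until, now, top_n=25,
                      min_intensity=150, charges=range(2, 6)):
    """Top-N data-dependent precursor selection as described in the methods.

    candidates: list of (mz, charge, intensity) tuples from the survey scan.
    excluded_until: dict mapping mz -> time until which the ion is
    dynamically excluded (20 s after fragmentation in the protocol).
    """
    eligible = [c for c in candidates
                if c[1] in charges                       # charge +2 to +5
                and c[2] > min_intensity                 # > 150 counts
                and excluded_until.get(c[0], 0) <= now]  # not excluded
    eligible.sort(key=lambda c: c[2], reverse=True)      # most intense first
    return eligible[:top_n]
```

For example, a singly charged ion, a dynamically excluded ion and a weak ion would all be skipped in favor of an eligible +3 precursor.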
Data analysis
Protein sequencing and modification analysis were performed with Mascot software, version 2.6.1 (Matrix Science Ltd., London, United Kingdom). Modified peptides identified by Mascot were further validated by manual inspection. To identify the y and b fragments of the modified peptides and attribute their masses in the MS/MS spectrum, we used the Bio Tool Kit microapp in the PeakView software. First, the protein sequence was digested in silico to create a list of theoretical peptides. Modified SOD1 peptide sequences found with Mascot, with their respective charges, were selected in Bio Tool Kit, and modifications corresponding to the mass of each aldehyde were added as variable modifications. The software was set to match the theoretical fragments to the ions in the MS/MS spectrum with a match tolerance of 0.050 Da.
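The b/y-ion matching that Bio Tool Kit performs rests on standard fragment-mass arithmetic: a singly charged b ion is the sum of its N-terminal residue masses plus a proton, and a y ion additionally carries the C-terminal water. A minimal sketch with a small residue subset and the 0.050 Da tolerance from the methods (the peptide and the HNE adduct placement below are illustrative, not taken from the dataset):

```python
PROTON = 1.007276
WATER = 18.010565
# Monoisotopic residue masses for the residues used below (subset only)
RESIDUE = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'K': 128.09496}

def by_ions(peptide, mod_mass=0.0, mod_index=None):
    """Singly charged b- and y-ion m/z values for a peptide; an optional
    aldehyde adduct mass can be placed on one residue (mod_index)."""
    masses = [RESIDUE[aa] for aa in peptide]
    if mod_index is not None:
        masses[mod_index] += mod_mass
    b = [sum(masses[:i]) + PROTON for i in range(1, len(masses))]
    y = [sum(masses[i:]) + WATER + PROTON for i in range(1, len(masses))]
    return b, y

def match(observed_mz, theoretical_mz, tol=0.050):
    """Match tolerance used in Bio Tool Kit (0.050 Da)."""
    return any(abs(observed_mz - t) <= tol for t in theoretical_mz)

# Illustrative: an HNE Michael adduct (+156.11503 Da) on the Lys of "GAK"
# shifts every y ion containing that residue by the adduct mass.
hne_b, hne_y = by_ions("GAK", mod_mass=156.11503, mod_index=2)
```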
Declaration of Competing Interest
The authors declare that they have no competing financial interests or personal relationships that could influence the work reported in this paper.
Multiple origins of melanism in two species of North American tree squirrel (Sciurus)
Background
While our understanding of the genetic basis of convergent evolution has improved there are still many uncertainties. Here we investigate the repeated evolution of dark colouration (melanism) in eastern fox squirrels (Sciurus niger; hereafter "fox squirrels") and eastern gray squirrels (S. carolinensis; hereafter "gray squirrels").
Results
We show that convergent evolution of melanism has arisen by independent genetic mechanisms in two populations of the fox squirrel. In a western population, melanism is associated with a 24 bp deletion in the melanocortin-1-receptor gene (MC1RΔ24 allele), whereas in a south-eastern population, melanism is associated with a point substitution in the agouti signalling protein gene causing a Gly121Cys mutation. The MC1R∆24 allele is also associated with melanism in gray squirrels, and, remarkably, all the MC1R∆24 haplotypes are identical in the two species. Evolutionary analyses show that the MC1R∆24 haplotype is more closely related to other MC1R haplotypes in the fox squirrel than in the gray squirrel. Modelling supports the possibility of gene flow between the two species.
Conclusions
The presence of the MC1R∆24 allele and melanism in gray squirrels is likely due to introgression from fox squirrels, although we cannot completely rule out alternative hypotheses including introgression from gray squirrels to fox squirrels, or an ancestral polymorphism. Convergent melanism in these two species of tree squirrels has evolved by at least two and probably three different evolutionary routes.
Electronic supplementary material
The online version of this article (10.1186/s12862-019-1471-7) contains supplementary material, which is available to authorized users.
Background
The origin of adaptive genetic variation is one of the key issues in evolutionary biology. Such variation generally depends on new mutations or standing variation. Another less well understood means of adaptation is adaptive introgression where interspecific mating occurs followed by generations of backcrossing and selection for advantageous introgressed alleles. Hybridisation between closely related species has been widely documented, but the role of such hybridisation in adaptation is not always clear. Adaptive introgression has been recognised for some time as an important source of genetic variation in plants, for example between sunflower species [1], between iris species [2], and between ragwort and groundsel [3]. Until recently, there were fewer convincing examples in animals, an early case being between species of Australian fruit fly [4]. More recent examples include an allele at the K locus leading to melanism that introgressed from domestic dogs to wolves [5], the vkorc1 allele that confers resistance to rat poison among Old World mice [6], variation at agouti (ASIP) associated with winter coat colour in snowshoe hares [7], alleles that affect beak shape in Darwin's finches [8,9], and loci controlling colour patterns in Heliconius butterflies [10].
Colouration in animals has a wide range of adaptive functions including concealment, signalling, protection and thermoregulation [11]. Melanism (darkened colouration) is found in many diverse species and two of its major functions are to provide camouflage from predators, e.g. in lizards [12] and rock pocket mice [13] and to give a thermal advantage, e.g. in butterflies, ladybirds, snails and snakes [10,14]. In amniote vertebrates, variation in dark colouration is primarily caused by variation in the amount of black/brown eumelanin present. Of the more than 300 loci which control melanin pigmentation in vertebrates [15], two key interacting loci have been found to be repeatedly involved in adaptive variation in melanin colouration: the melanocortin-1 receptor (MC1R) gene and agouti signalling protein (ASIP) gene. High activity of the MC1R protein leads to enhanced synthesis of eumelanin, a process that is inhibited by ASIP, so that gain-of-function mutations of MC1R or loss-of-function mutations in ASIP lead to melanism [16,17]. In wild populations, mutations in MC1R have been associated with melanism in lizards [12],~10 species of birds [18,19] and a variety of mammals, from rodents [13] to cats [20,21]. Mutations in ASIP have been associated with melanism in birds [22], rodents [23], hares [7] and cats [21,24].
The fox squirrel and gray squirrel are naturally sympatric over a broad region of eastern North America (Fig. 1) and have similar ecological requirements and life histories [25]. Like many species of wild mammals, individual hairs on the dorsum of these squirrels usually have alternating bands of brown/black (eumelanin) and red/yellow (phaeomelanin) pigments, a pelage condition known as "agouti." The overall appearance of coat colour for a particular animal depends on the width and placement of the pigment bands along the hair shafts as well as the intensity of the pigments. Coat colour of individuals for some wild mammals, such as the gray squirrel, is relatively uniform over the geographic distribution of the species. Other species, including the fox squirrel, exhibit dramatic patterns of geographic variation in coat colour. There are two distinct colour groups of fox squirrels: animals from most of the range (colour group 1), have an overall orange agouti colouration, whereas animals from the south-eastern coastal plain (colour group 2) are generally silver-gray or tan agouti with black heads and white noses and ears (Fig. 2). The colour group 1 (orange agouti) squirrels generally have intense reddish bands of phaeomelanin in their dorsal hairs; whereas the gray/tan agouti (colour group 2) animals generally have dilute yellowish bands.
Melanism (uniform dark brown or black colouration over the whole body) occurs at low frequency (less than 1%) across most of the range of both fox squirrels and gray squirrels [26,27]. In this study, squirrels are characterized as jet-black melanic if their entire coat has solid jet-black hairs, as partial melanic if their coat has between 75 and 90% solid jet-black hairs, and as brown-black melanic if their coat is overall darkened, with banding on the hairs (Fig. 2). Melanism in the gray squirrel is much more common in the northern part of the range (more than 75%) [26]; in contrast, melanism in fox squirrels is more common in the southern part of the range, reaching a maximum frequency of 13% [27]. We previously reported that a 24 bp deletion in the MC1R is associated with melanism in the gray squirrel, where homozygotes for the mutation are jet-black melanic, heterozygotes are brown-black melanic, and squirrels homozygous or heterozygous for other alleles have a typical grizzled wildtype phenotype (Fig. 2) [28]. The genetic basis of melanism in the fox squirrel has not yet been elucidated.

Fig. 1 Map of North America showing the native ranges of fox squirrels and gray squirrels. Ranges of fox squirrel colour group 1 (orange agouti) are shown in light blue and colour group 2 (gray/tan agouti) in dark green. Gray squirrel range is to the east and south of the heavy dashed line. Spots show the locations of fox squirrel and gray squirrel samples, gray spots = wildtype, and black spots = samples with both wildtype and melanic fox squirrels or gray squirrels. Spots with coloured outlines show locations of fox squirrels with MC1R alleles typically from the gray squirrel (red outline) and locations of gray squirrels with MC1R alleles typically from the fox squirrel (green outline). Pie charts show MC1R haplotype frequencies in the fox squirrel and gray squirrel and ASIP genotype frequencies (see Additional file 3) in the fox squirrel.
The aim of this study was to investigate the genetic basis of melanism in the two colour groups of the fox squirrel, using a candidate gene approach. Having found that the same 24 bp deletion in the MC1R, which had previously been described in the gray squirrel, was also associated with melanism in the colour group 1 (orange agouti) fox squirrel, the study was expanded to examine the causes of derived allele sharing between the two species. Here we present evidence of multiple genetic origins of adaptive melanism in two species of tree squirrels.
Results
An allele at the MC1R locus with a 24 bp deletion (MC1RΔ24) is associated with melanism in colour group 1 (orange agouti) fox squirrels from Colorado and Nebraska. One jet-black melanic squirrel was homozygous and seven brown-black melanic squirrels were heterozygous for the MC1RΔ24 allele, whereas all other colour group 1 fox squirrels (n = 42) had other alleles (Fisher's exact test, P < 10⁻¹¹) (Table 1). An MC1R allele with the same 24 bp deletion was previously found to be associated with melanism in the gray squirrel [28]. We therefore compared MC1R variation in fox squirrels to that from an expanded sample of gray squirrels (n = 51) (Table 1 and Additional file 1).
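Genotype–phenotype association tests like this can be reproduced with a standard Fisher's exact test on a 2×2 contingency table. The sketch below uses the genotype counts quoted above (8 melanic carriers of the deletion allele vs 42 non-melanic squirrels with other alleles); the paper's own contingency table (e.g. allele-level counts) may differ, which would explain the even smaller reported P value.

```python
from scipy.stats import fisher_exact

# Rows: melanic / wildtype; columns: carries MC1R-delta24 / other alleles only.
# Genotype counts quoted in the text: 8 melanic carriers (1 homozygote +
# 7 heterozygotes) and 42 wildtype squirrels, all with other alleles.
table = [[8, 0],
         [0, 42]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"P = {p_value:.3g}")
```

With a zero cell in each row the sample odds ratio is infinite and the two-sided P value is simply the hypergeometric probability of the observed (most extreme) table.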
Remarkably, all MC1R haplotypes containing the 24 bp deletion are identical in the two species (allele counts 9 in fox squirrels and 9 in gray squirrels), and this includes gray squirrel populations introduced to British Columbia, Canada in the early 1900s and gray squirrels introduced to Britain from North America in the late 1800s [29]. On a haplotype network, the MC1RΔ24 allele is nested within the alleles from fox squirrels, a minimum of 4 mutational steps away from all other common alleles in gray squirrels (Fig. 3, Additional file 1). Most other alleles form species-specific clusters, but rare alleles in both species are also shared. Bayesian modelling of gene flow with a two population model (Table 2, Additional file 2) shows consistent estimation of a low degree of bi-directional gene flow between the two species, with estimates of gene flow in both directions significantly greater than zero (at p < 0.01) in all runs (e.g. run 1: gray to fox, LLR (log likelihood ratio) = 13.81, p < 0.01; fox to gray, LLR = 13.51, p < 0.01).

Fig. 2 Photos of fox (Sciurus niger) and gray (Sciurus carolinensis) squirrels. Squirrels are described as jet-black melanic if their entire coat has solid jet-black hairs, as partial melanic if their coat has between 75 and 90% solid jet-black hairs, and as brown-black melanic if their coat is overall darkened, with banding on the hairs. a) Colour group 1 (orange agouti) fox squirrel homozygous for the MC1RΔ24 allele (jet-black melanic). b) Colour group 1 fox squirrel heterozygous for the MC1RΔ24 allele (brown-black melanic). c) Colour group 2 (gray/tan agouti) fox squirrel homozygous for the Gly121Cys mutation in ASIP (jet-black melanic). d) Colour group 2 fox squirrel heterozygous for the Gly121Cys mutation in ASIP (partial melanic). e) Colour group 1 (orange agouti) wildtype fox squirrel. Wildtype fox squirrels from colour group 1 lack white markings, have an overall orange-red agouti colouration and orange or yellow venters. f) Colour group 2 (gray/tan agouti) wildtype fox squirrel. Wildtype fox squirrels from colour group 2 have an overall silver-gray or tan agouti colouration with cream or buff venters and black on the dorsal surface of their heads and often have white markings on their noses, ears, feet, and tails. g) Gray squirrel, typical wildtype grizzled phenotype. h) Gray squirrel homozygous for the MC1RΔ24 allele (jet-black melanic). i) Gray squirrel heterozygous for the MC1RΔ24 allele (brown-black melanic).

Phylogenetic reconstructions using maximum likelihood show that the fox squirrel MC1R alleles together with the MC1RΔ24 allele form a monophyletic clade that has 73% bootstrap support (Fig. 4). There was no association between MC1R and melanism in colour group 2 (gray/tan agouti) fox squirrels (Table 1). Here there was an association between melanism and variation in ASIP: all nine jet-black melanic individuals, sampled from a single population in northern Georgia, were homozygous for a single bp substitution (G361T) leading to a Gly121Cys mutation, whereas all other colour group 2 individuals (n = 32) were heterozygous or homozygous for other alleles (Fisher's exact test: P < 10⁻¹⁰) (Table 1 and Additional file 3). There was also a strong tendency for individuals with intermediate, partial melanic colouration to be heterozygous for the Gly121Cys mutation, with a significant association across all colour group 2 squirrels (Fisher's exact test: P < 10⁻⁵). The associations between ASIP genotype and melanism are shown for the northern Georgia population in Fig. 5. The G361T substitution is unique to ASIP haplotype A3 (Additional file 3). A C253G substitution, causing an Arg85Gly mutation, which is also present in haplotype A3, is not associated with melanism.
This is shown by haplotype A2, which has the C253G but not the G361T substitution: haplotype A2 is never seen in a jet-black squirrel, almost all A1/A2 heterozygotes are wild-type, and both A2/A3 heterozygotes are partial melanic (Table 1). Two melanic colour group 1 (orange agouti) fox squirrels, from Ohio and Arkansas, had no evidence for mutations in coding regions of the MC1R or ASIP associated with phenotype, suggesting a further genetic mechanism for melanism.

Table 1 footnote: "Melanic^a" indicates two brown-black melanic fox squirrels where the underlying genetics is unknown.
Discussion
We present strong evidence for identifying the loci underlying convergent evolution of melanism in two populations of fox squirrels. Melanism in colour group 1 (orange agouti) fox squirrels from Colorado and Nebraska is associated with the MC1RΔ24 allele identical to that found in the gray squirrel. In contrast, colour group 2 (gray/tan agouti) fox squirrels do not show an association between MC1R and melanism (which was the basis of our previous report of a lack of association) [29], and the MC1RΔ24 allele is absent in this population. The 24 bp deletion falls at a mutational hotspot on the boundary of the second and third transmembrane domains in the MC1R receptor where a number of other species have mutations associated with melanism, for example, the bananaquit, which has a E92K mutation causing the receptor to be constitutively active [30]. Functional studies on the MC1RΔ24 protein in the gray squirrel confirmed that it plausibly causes melanism: it showed high basal activity as well as responding to ASIP as an agonist, in comparison to the usual inverse agonist activity of ASIP [31]. There are further examples of deletions in this part of the receptor leading to a darkened phenotype in wild populations including Eleonora's falcon, the jaguar, the jaguarundi and the golden-headed lion tamarin [20,32,33]. We found that jet-black melanism and partial melanism in colour group 2 (gray/tan agouti) fox squirrels are associated with a non-synonymous single nucleotide polymorphism in the ASIP locus, with all jet-black individuals being homozygous for the derived allele, and most partial melanic squirrels being heterozygous. Jet-black melanic, partial melanic, and wildtype squirrels were all sampled from the same location in northern Georgia, indicating frequent interbreeding in relation to colour phenotype.
Hence population structure cannot explain this association, and this is further supported by the absence of an association between colouration and MC1R genotype, and the overall low frequency of jet-black individuals in group 2 fox squirrels (maximum 13%). Interestingly, since there is a strong trend for heterozygous individuals to have an intermediate colour phenotype, our results are most consistent with a pattern of partial dominance at ASIP, which is unusual [34]. The Gly121Cys mutation falls in the highly conserved cysteine-rich domain of the ASIP protein, which is thought to form a highly ordered structure stabilised by five disulphide bridges that create an inhibitor cysteine-knot motif (Fig. 6 and Additional file 4) [35,36]. The position of 10 cysteine residues in this region is highly conserved across mammalian orders, and in the four cases where changes in the number of cysteine residues have been reported, they are associated with melanism: German Shepherd dogs and alpacas (Arg96Cys) [37,38], the pampas cat (Arg120Cys) [21] and the Asian golden cat (Cys128Trp) [24]. Taken together this strongly suggests that the Gly121Cys mutation in ASIP is causative for melanism in colour group 2 (gray/tan agouti) fox squirrels, although functional studies will be needed to confirm this. Our findings indicate that melanism has evolved at least twice in fox squirrels: jet-black melanic phenotypes in different parts of the species' range are the result of mutations at two different loci, MC1R and ASIP. These two loci are also associated with intraspecific events of parallel melanism in a Solomon Island flycatcher [22] and in rock pocket mice [39,40]. These results in tree squirrels add to the extensive evidence that MC1R and ASIP represent functionally equivalent "adaptive hotspots" for melanism in vertebrates [18].
It is intriguing that the identical MC1RΔ24 haplotype is associated with melanism in both fox squirrel and gray squirrel species, and there are three possible explanations for how this has occurred. First, the allele could have arisen in the common ancestor of both species, and been retained by balancing selection. This is unlikely since deep divergences between clusters of haplotypes with and without the deletion would be expected. Second, the mutation could have arisen independently in both species, but this is also unlikely as the haplotypes are identical. Therefore the most likely explanation is that the MC1RΔ24 allele arose in one species and subsequently introgressed to the other species. Given the close association of the MC1RΔ24 allele with common fox squirrel haplotypes, and the support for monophyly of fox squirrel haplotypes including the MC1RΔ24 allele, introgression from the fox squirrel into the gray squirrel is by far the most likely scenario, but we cannot completely rule out the possibility of introgression in the other direction. The plausibility of introgression between these two sympatric species is supported by the Bayesian modelling results, although caution is needed since the results are based on a single locus, and a couple of parameters had broad posterior curves. It is notable that these squirrels have been observed in mixed-species mating chases, with male fox squirrels pursuing female gray squirrels [41].

Fig. 4 Phylogenetic reconstruction of MC1R haplotypes in squirrels. Maximum likelihood reconstruction with bootstrap support values on branches, and branch lengths proportional to sequence evolution. MC1R haplotypes for fox squirrels and gray squirrels are included separately (see Fig. 3).
Interspecies mating is likely to be an important source of adaptive genetic variation. It has been noted that introgressive hybridisation has the largest evolutionary impact if two species have some morphological differences but are still closely related enough to recognise the other species as potential mates and be reproductively compatible, and before the point is reached when genetic incompatibilities incur severe fitness costs [8]. In such cases introgression can provide genetic variants at a higher frequency than de novo mutation, thus accelerating the evolutionary process. Unlike novel mutations, adaptive introgression has the advantage of involving alleles that have already been tested by natural selection, and so where adaptive alleles are "available" in closely related species, they are likely to make an important contribution. Furthermore, dominant alleles are more likely to become established by introgression than recessive alleles, for example in F1 individuals among the parental species where the beneficial effects of the dominant introgressed allele may counteract outbreeding depression at other loci. This is precisely the pattern found here, where dominant melanic MC1R alleles have introgressed rather than recessive ASIP melanic alleles. It would be interesting to perform population genomic analyses on these populations to investigate the possibility of adaptive introgression of MC1R further.

Fig. 6 caption (fragment): … [36]. Mutations in the fox squirrel (Gly121Cys), dog and alpaca (Arg98Cys) [37,38], pampas cat (Arg118Cys) [21] and Asian golden cat (Cys126Trp) [24] are boxed and highlighted with arrows. The highly conserved RFF sequence is underlined and shaded.
In some cases hybridization can expand the ecological niche of a population by increasing physiological tolerances beyond the range of either of the parental species [8,42]. This is particularly relevant where there is a new ecological challenge which may occur at the periphery of a population's range, such as a cold climate [4,8]. For example, we suggest that the high frequency of melanic gray squirrels (with the MC1RΔ24 allele) in the northern parts of the species' range, which was first noted in the 1740s by early European explorers of North America [43], might be explained by a thermogenic advantage in cold climates [44]. We also suggest that this high frequency of melanism may have contributed to the prehistoric expansion of the gray squirrel's range (during the past 11,000 years following the Wisconsinan glaciation) further north into eastern Canada. Melanism associated with the MC1RΔ24 allele may also confer thermal advantage to colour group 1 (orange agouti) fox squirrels that inhabit regions with extremely cold, harsh winters, such as Nebraska and Colorado [45]. On the other hand, melanism probably does not confer thermal advantage to colour group 1 (orange agouti) fox squirrels in the southern part of the range (lower Mississippi River drainage), because those animals rarely (if ever) experience temperatures as low as those consistently recorded in Nebraska and Colorado. Thus, the adaptive advantage for melanism appears to differ between gray squirrels and some fox squirrels, and factors responsible for melanism may differ among populations of the fox squirrel.
In addition to providing thermal advantage, melanism often functions to camouflage animals from predators [12,13]. To account for the higher frequency of melanism in the southern part of the fox squirrel's range, Kiltie [27] posed the hypothesis that melanism increases camouflage of colour group 2 (gray/tan agouti) fox squirrels from predators (hawks) in areas frequently burned by wildfires, such as those on the south-eastern coastal plain. He subsequently conducted a series of experiments to test concealment of all phenotypes of both fox squirrel colour groups [46,47]. He tested dynamic crypsis by presenting captive red-tailed hawks (Buteo jamaicensis) with moving models of melanic and wildtype phenotypes against different backgrounds (including burned and unburned tree bark) [46]. He also tested static crypsis by analysing digitized photographs to determine how well museum specimens of melanic and wildtype phenotypes matched different background types (including burned and unburned tree bark) [45,46]. Kiltie's experiments yielded complex results: for both colour groups, he concluded that melanism may confer concealment from hawks when fox squirrels are in motion, but wildtype colouration may be better camouflage when the animals are not moving [46,47]. Clearly much work remains to be done to elucidate the selective pressures on melanism in tree squirrels.
Conclusions
We conclude that the presence of the MC1RΔ24 allele and melanism in gray squirrels is likely due to introgression from colour group 1 (orange agouti) fox squirrels. We further conclude that melanism in colour group 2 (gray/tan agouti) fox squirrels is associated with a Gly121Cys mutation in ASIP. Finally, convergent melanism in these two species of tree squirrels has evolved by at least two and likely three different evolutionary routes: MC1R mutation, ASIP mutation, and probably introgression of the MC1R mutation.
Sampling
We used tissues and DNA samples originally collected as part of previous genetic studies of these species [28,29,48,49], and we also obtained tissues from museum specimens housed at Louisiana State University Museum of Zoology, Sam Noble Oklahoma Museum of Natural History, and Denver Museum of Natural Sciences. (Note that all samples were originally collected following methods which met the guidelines of the American Society of Mammalogists for the use of mammals in research.) In total we used tissue samples from 106 fox squirrels and 51 gray squirrels from 28 locations across their ranges (see Figs. 1 and 2, Table 1). Permission was granted to import squirrel tissue to the United Kingdom from the United States of America by the Department for Environment, Food and Rural Affairs: authorisation number IMP/GEN/2014/06.
Haplotype reconstruction
We used PHASE 2.1 [50] to reconstruct haplotypes of the MC1R, and Network [51] to generate median joining networks. All sequences have been deposited on Genbank with the following accession numbers: G0: EU604831, G1:
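Median-joining networks such as the one underlying Fig. 3 are built from pairwise mutational steps between aligned haplotypes, which for substitutions reduces to a Hamming distance. A minimal sketch (the sequences are invented placeholders, not actual MC1R haplotypes; an indel such as the 24 bp deletion would normally be scored as one additional step):

```python
def mutational_steps(hap_a: str, hap_b: str) -> int:
    """Count substitution differences between two aligned haplotype sequences."""
    if len(hap_a) != len(hap_b):
        raise ValueError("haplotypes must be aligned to equal length")
    return sum(1 for a, b in zip(hap_a, hap_b) if a != b)

# Toy aligned haplotypes (placeholders):
print(mutational_steps("ACGTACGT", "ACGAACTT"))  # → 2
```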
Gene flow analysis
Bayesian modelling was used to assess gene flow between fox squirrels and gray squirrels. We used iMa2 [54] to estimate gene flow at MC1R between fox (n = 106) and gray (n = 39) squirrels in eastern USA in a two population model, ignoring the delta24 deletion. We conducted short preliminary runs to determine upper bounds for the demographic parameters and appropriate heating parameters. Then we conducted three independent runs, each with a different random number seed, for 10⁶ MCMC steps and a burn-in period of 10⁵ steps. We used 40 chains with heating parameters -ha 0.975, -hb 0.75. Convergence was assessed by the concordance of parameter estimates, acceptable chain mixing and autocorrelations.
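The significance of the gene-flow estimates reported in the Results (LLR = 13.81 and 13.51, both p < 0.01) can be illustrated with the boundary-corrected likelihood-ratio test commonly used for migration rates: because the null value m = 0 lies on the boundary of the parameter space, the null distribution is a 50:50 mixture of a point mass at zero and a chi-squared distribution with 1 df. A hedged sketch, assuming the reported LLR values are the test statistics themselves:

```python
from scipy.stats import chi2

def boundary_lrt_pvalue(llr_stat: float) -> float:
    """p-value for H0: migration rate = 0, using the 50:50 mixture of a
    point mass at zero and chi-squared(1) appropriate at a parameter boundary."""
    return 0.5 * chi2.sf(llr_stat, df=1)

# Reported statistics for run 1 (gray-to-fox and fox-to-gray):
for llr in (13.81, 13.51):
    print(f"LLR = {llr}: p = {boundary_lrt_pvalue(llr):.1e}")
```

Both statistics exceed the 1 df critical value for p = 0.01 (6.63) by a wide margin, consistent with the reported significance.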
Magnesium ions mediate ligand binding and conformational transition of the SAM/SAH riboswitch
The SAM/SAH riboswitch binds S-adenosylmethionine (SAM) and S-adenosylhomocysteine (SAH) with similar affinities. Mg2+ is generally known to stabilize RNA structures by neutralizing phosphates, but how it contributes to ligand binding and conformational transition is understudied. Here, extensive molecular dynamics simulations (totaling 120 μs) predicted over 10 inner-shell Mg2+ ions in the SAM/SAH riboswitch. Six of them line the two sides of a groove to widen it and thereby pre-organize the riboswitch for ligand entry. They also form outer-shell coordination with the ligands and stabilize an RNA-ligand hydrogen bond, which effectively diminishes the selectivity between SAM and SAH. One Mg2+ ion unique to the apo form maintains the Shine–Dalgarno sequence in an autonomous mode and thereby facilitates its release for ribosome binding. Mg2+ thus plays vital roles in SAM/SAH riboswitch function.
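As background for the review exchange below: the inner-/outer-shell distinction refers to whether a Mg2+ ion contacts a phosphate oxygen directly (Mg–O distance near ~2.1 Å) or through a bridging water molecule (the ~4.3 Å peak in the Mg–OP radial distribution function discussed by the reviewers). A minimal classification sketch with assumed cutoffs (the paper's exact criteria may differ):

```python
import numpy as np

INNER_CUTOFF = 3.0  # angstroms; assumed bound for direct Mg-O contact (~2.1 A typical)
OUTER_CUTOFF = 5.0  # angstroms; assumed bound for water-mediated coordination (~4.3 A peak)

def classify_shell(mg_xyz, op_xyz):
    """Classify one Mg2+ ion by its distance to the nearest phosphate oxygen."""
    dists = np.linalg.norm(np.asarray(op_xyz, float) - np.asarray(mg_xyz, float), axis=1)
    d = dists.min()
    if d < INNER_CUTOFF:
        return "inner-shell"
    if d < OUTER_CUTOFF:
        return "outer-shell"
    return "diffuse"

# Toy coordinates in angstroms: one Mg2+ at the origin, two phosphate oxygens.
print(classify_shell([0.0, 0.0, 0.0], [[2.1, 0.0, 0.0], [6.0, 0.0, 0.0]]))  # → inner-shell
```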
Reviewer #1 (Remarks to the Author):

The author conducted large-scale MD simulations of SAM/SAH riboswitch using mature force fields and discussed the influence of magnesium ions on ligand binding. Some novel viewpoints were proposed, such as the repulsive effect of inner-shell magnesium ions at the entrance of ligand pockets promoting ligand entry. This manuscript has some innovation and the results are also reliable. However, I still have some issues that the author needs to address, which are listed below:

1. Why is there almost no inner-shell magnesium ions in the Leap(41) system, while there is a significant increase in the Leap(21) system? How to determine if it is caused by ion competition? Because the ion allocation method of the LEAP module has a high probability of allocating the initial position of ions very close to the RNA backbone, resulting in insufficient hydration opportunities for magnesium ions, it is necessary to rule out the impact on these two systems caused by the initial ion allocation scheme.
2. If magnesium ions undergo an outer-shell to inner-shell transition in the Leap(21) system, then this will be a very interesting phenomenon. It should be meaningful to demonstrate the details of magnesium ion dehydration in this system.
3. The manuscript states that metal ions mainly gather around the first 21 residues; is this because these regions have lower electrostatic potential? Can the authors provide some characterizations of electrostatic potential surfaces?

4. Do all calculations of binding free energy come from the Leap(41) system? Can the authors provide the binding free energy results in the MCTBI and Leap(21) systems? I would like to know if the molecules that bind in the inner-shell mode near the binding pocket have an impact on the ligand binding free energy, as described in the "Inner-shell Mg2+ ions form outer-shell coordination with ligands and stabilize U16-ligand hydrogen bonding" viewpoint later (page 10, line 259).
5. The simulation results of outer-shell magnesium ions (4.3 Å peak in RDF) are in good agreement with previous literature (Physical Review E, 99:012420, 2019). However, is it possible that the viewpoint of "Inner-shell coordination is nearly exclusively formed with OP1 and OP2, with OP2 favored about 2-fold over OP1" (Page 8, line 200) comes from statistical errors? Because this situation does not occur in the RDF of the SAM system (Figure S3).

6. In Figure 4B, it can be observed that as long as magnesium ions bind to the ligand entrance, it will to some extent cause the entrance to widen. However, the chemical structure and size of SAM and SAH are very similar. Why is the width at the entrance of SAM significantly larger than that of SAH when they are bound? Is it because of the difference in binding mode?

7. I noticed that the author conducted multiple repeated simulations for each case. Are all the results (such as RMSF, groove width d_p6-p14) in the manuscript based on the statistical average of these cases?
Additionally, there are some minor issues: 1. Note that there are some misleading writing styles in the text, such as 0A, 0B, 0C, etc. (Page 3). What do they represent?
2. "Via these direct and indirect interactions" can be changed into "Through these direct and indirect interactions". (Page 4, line 66)

Reviewer #2 (Remarks to the Author):

The authors have performed extensive MD simulations of SAM/SAH riboswitch to understand the role of Mg2+ ions in the ligand interactions and overall dynamics. The study relies on the Mg2+ binding sites predicted by a tool MCTBI and build the MD simulations based on that. The results from MD simulations have been analyzed accordingly by the authors to come to the conclusions. Although, the issues associated with long exchange rates in running MD simulations of RNA with Mg2+ ions has been assumed to be insignificant. Given that, authors have performed extensive sets of MD to get the best information possible. The reviewer has both major and minor comments for the authors.
Major comments to authors:

-The authors should refrain from stating that MD simulations "identified" inner-shell Mg2+ ions, as it is not 100% correct.
-The enhanced sampling approaches by two groups (Thirumalai and Mackerell) have been recently shown to address the issues for long exchange rates to identify the Mg2+ binding sites. Please discuss it accordingly.
-Please comment on the role of Na+ ions in and around the binding sites.
-Does the ionic environment follow counterion condensation theory to neutralize the RNA within a certain distance?

-Authors should comment on the results from the systems where Mg2+ ions were added with the Leap protocol. What were the major differences?
Minor comments:

P3 L44 -The terminology "A high resolution NMR structure" should be reconsidered.

P3 L46 -Please refer to figure 1 here.

P8 L191 -Do authors mean to state "not a single new inner-shell Mg2+ ion was found"?

P8 L200 -"OP2 favored about 2-fold over OP1" How is this significant? How are the two oxygens differentiated? What kind of environment causes this?

P10 L254-257 -Do authors have any references that support such a speculation?

P15 L397 -In one of the papers by the MacKerell lab, it was shown that Mg2+ impacts RNA folding by a push-pull mechanism. Can authors discuss that based on this?

P16 L427 -Monovalent ions support the divalent ions in various conditions. Usually 0.15 M concentration is used for NaCl/KCl. How was this concentration translated into number of ions? Based on volume of the box? Was the volume corrected for the volume occupied by the RNA? What would be the effective concentration in the box?

P18 L475 -Please explain what the baseline is.
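The concentration-to-ion-count question raised in the P16 L427 comment is conventionally handled as N = c · N_A · V_box. A sketch with an assumed 10 nm cubic box (not the paper's actual box size); as the reviewer notes, a more careful version would subtract the volume occupied by the RNA before converting:

```python
AVOGADRO = 6.02214076e23  # particles per mole

def ions_for_concentration(conc_molar: float, box_edge_nm: float) -> int:
    """Number of ions of one species needed to reach conc_molar in a cubic box."""
    volume_litres = (box_edge_nm * 1e-8) ** 3  # 1 nm = 1e-8 dm, and 1 L = 1 dm^3
    return round(conc_molar * AVOGADRO * volume_litres)

print(ions_for_concentration(0.15, 10.0))  # → 90
```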
We thank the reviewers for their constructive comments. Our point-by-point response is given below in blue.
Reviewer #1 (Remarks to the Author):
The author conducted large-scale MD simulations of SAM/SAH riboswitch using mature force fields and discussed the influence of magnesium ions on ligand binding. Some novel viewpoints were proposed, such as the repulsive effect of inner-shell magnesium ions at the entrance of ligand pockets promoting ligand entry. This manuscript has some innovation and the results are also reliable. However, I still have some issues that the author needs to address, which are listed below: 1. Why is there almost no inner-shell magnesium ions in the Leap(41) system, while there is a significant increase in the Leap(21) system? How to determine if it is caused by ion competition? Because the ion allocation method of the LEAP module has a high probability of allocating the initial position of ions very close to the RNA backbone, resulting in insufficient hydration opportunities for magnesium ions, it is necessary to rule out the impact of these two systems caused by the initial ion allocation scheme.
Indeed, the difference between Leap(21) and Leap(41) is largely due to the initial ion allocation scheme. Specifically, in Leap(21), Mg2+ ions were given the preference to be placed close to the RNA while Na+ ions were relegated to the solvent; in Leap(41) the preferences were reversed between Mg2+ and Na+. We now further clarify this difference between Leap(21) and Leap(41) (p. 6, 9, and 18).
2. If magnesium ions undergo an outer-shell to inner-shell transition in the Leap(21) system, then this will be a very interesting phenomenon. It should be meaningful to demonstrate the details of magnesium ion dehydration in this system.
All the inner-shell Mg2+ ions in Leap(21) were indeed initially in outer shells or farther away from RNA phosphates. The transitions occurred very early on, during the energy minimization and heating stage of the simulation. In the new supplementary figure S5 and new movie S1, we demonstrate the transition into inner-shell coordination of a Mg2+ ion that started at a distance of 6.9 Å from the nearest OP atom.
3. The manuscript states that metal ions mainly gather around the first 21 residues; is this because these regions have lower electrostatic potential? Can the authors provide some characterizations of electrostatic potential surfaces?
Yes, these regions have the most negative electrostatic potential (p. 10). We now present the electrostatic potential surface of the RNA in new supplementary figure S7.
4. Do all calculations of binding free energy come from the Leap(41) system? Can the authors provide the binding free energy results in the MCTBI and Leap(21) systems? I would like to know if the molecules that bind in the inner-shell mode near the binding pocket have an impact on the ligand binding free energy, as described in the "Inner-shell Mg2+ ions form outer-shell coordination with ligands and stabilize U16-ligand hydrogen bonding" viewpoint later (page 10, line 259).
The binding free energy results shown in Fig. 2A were from the Leap(41) protocol. We now present the corresponding results for the MCTBI and Leap(21) protocols in new supplementary figure S2 (p. 7), and discuss these results in reference to U16-ligand hydrogen bonding (p. 12-14).
5. The simulation results of outer-shell magnesium ions (4.3 Å peak in RDF) are in good agreement with previous literature (Physical Review E, 99:012420, 2019). However, is it possible that the viewpoint of "Inner-shell coordination is nearly exclusively formed with OP1 and OP2, with OP2 favored about 2-fold over OP1" (Page 8, line 200) comes from statistical errors? Because this situation does not occur in the RDF of the SAM system (Figure S3).
We agree that, in the SAM form, OP2 is not as strongly favored over OP1 as in the apo and SAH forms, and have revised the wording to: "with OP2 favored over OP1 by 1.3 to 3.0-fold (Figure S6)."

6. In Figure 4B, it can be observed that as long as magnesium ions bind to the ligand entrance, it will to some extent cause the entrance to widen. However, the chemical structure and size of SAM and SAH are very similar. Why is the width at the entrance of SAM significantly larger than that of SAH when they are bound? Is it because of the difference in binding mode?
Yes, the larger groove widening in the SAM form is due to its different binding characteristics from SAH. In essence, SAH has a single conformation, whereas SAM has two alternative conformations (new supplementary figure S3): one is similar to that of SAH, and in the other the aminocarboxypropyl group extends into the groove. In the latter case the groove is wider (p. 11).
7. I noticed that the author conducted multiple repeated simulations for each case. Are all the results (such as RMSF, groove width d_p6-p14) in the manuscript based on the statistical average of these cases?
The reported results for each system were based on the average of the multiple repeated simulations for that system.
Additionally, there are some minor issues: 1. Note that there are some misleading notations in the text, such as 0A, 0B, 0C, etc. (Page 3). What do they represent?
These are typos; we have corrected them to Figure 1A, 1B, and 1C.
2. "Via these direct and indirect interactions" can be changed into "Through these direct and indirect interactions". (Page 4, line 66) We have now made the suggested change.
Reviewer #2 (Remarks to the Author): The authors have performed extensive MD simulations of the SAM/SAH riboswitch to understand the role of Mg2+ ions in the ligand interactions and overall dynamics. The study relies on the Mg2+ binding sites predicted by the tool MCTBI and builds the MD simulations on that basis. The results from the MD simulations have been analyzed accordingly by the authors to reach their conclusions. Although the issues associated with long exchange rates in MD simulations of RNA with Mg2+ ions have been assumed to be insignificant, the authors have performed extensive sets of MD simulations to get the best information possible. The reviewer has both major and minor comments for the authors.
Major comments to authors: -The authors should refrain from stating that MD simulations "identified" inner-shell Mg2+ ions as it is not 100% correct.
We have changed "identified" to "predicted" or "found".
-The enhanced sampling approaches by two groups (Thirumalai and Mackerell) have been recently shown to address the issues for long exchange rates to identify the Mg2+ binding sites. Please discuss it accordingly.
-Please comment on the role of Na+ ions in and around the binding sites.

Na+ ions do not form tight, site-specific coordination with RNA phosphates and are very mobile, even when the phosphates are not coordinated with Mg2+ [as in Leap(41); p. 9]. So Na+ ions mostly act as diffuse counterions.
-Does the ionic environment follow counterion condensation theory to neutralize the RNA within certain distance?
Qualitatively, the ion environment in the MCTBI protocol agrees with what is envisioned in the counterion condensation theory, in that over 10 Mg2+ ions tightly coordinate with RNA phosphates and neutralize a large fraction of the RNA charge. The situation in the Leap(41) protocol is different. Here Na+ ions are the species nearest to RNA phosphates but are very mobile, such that there is no clear demarcation between a "condensation" zone and a diffuse zone.
-Authors should comment on the results from the systems where Mg2+ ions were added with the Leap protocol. What were the major differences?

Again, the difference between Leap(21) and Leap(41) is largely due to the initial ion allocation scheme. Specifically, in Leap(21), Mg2+ ions were given preference to be placed close to the RNA while Na+ ions were relegated to the solvent; in Leap(41) the preferences between Mg2+ and Na+ were reversed. We now further clarify this difference between Leap(21) and Leap(41) (p. 6, 9, and 18).
Minor comments: P3 L44 -The terminology "A high resolution NMR structure" should be reconsidered.
We now remove "high resolution".
We have now corrected the typo "0A" that should have been Figure 1A.
P8 L191 - Do the authors mean to state "not a single new inner-shell Mg2+ ion was found"?
Yes, due to the fact that Na+ ions "pre-occupied" the phosphate sites, as we now further clarify (p. 9).

P8 L200 - "OP2 favored about 2-fold over OP1" How is this significant? How are the two oxygens differentiated? What kind of environment causes this?
We now explain that OP2 typically points toward the nucleobase whereas OP1 points toward the solvent (p. 9). This difference explains the preference for OP2 over OP1 in inner-shell coordination. The preference is also seen in the statistics collected by Zheng et al. from crystal structures.

P10 L254-257 - Do the authors have any references that support such a speculation?
It is only our speculation, but supported by our observation of frequent carboxyl-Mg2+ coordination in the bound form (p. 12).
P15 L397 -In one of the papers by MacKerell lab, it was shown that Mg2+ impacts RNA folding by push-pull mechanism. Can authors discuss on that based on this?
We now cite the Kognole and MacKerell paper (ref. 28), in the contexts of RNA folding (p. 4) and enhanced sampling (p. 5). The context noted by the reviewer, i.e., compensatory effects between SAM and SAH, however, has no direct relation with the push-pull mechanism.
P16 L427 - Monovalent ions support the divalent ions in various conditions. Usually a 0.15 M concentration is used for NaCl/KCl. How was this concentration translated into a number of ions? Based on the volume of the box? Was the volume corrected for the volume occupied by the RNA? What would be the effective concentration in the box?
We calculated the number of ions for a given salt concentration according to the number of water molecules: N_ion = 0.0187 x conc (M) x N_water.
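For concreteness, the stated rule can be applied directly; a minimal sketch (the function name and the example numbers are ours, not from the manuscript):

```python
def n_ions(conc_molar: float, n_water: int) -> int:
    """Number of ions for a target salt concentration, using the
    authors' stated rule N_ion = 0.0187 * conc(M) * N_water.
    The 0.0187 factor reflects ~1 solute per ~55 waters at 1 M."""
    return round(0.0187 * conc_molar * n_water)

# e.g. 0.15 M NaCl in a box of 10,000 waters:
print(n_ions(0.15, 10_000))  # -> 28
```

This counts ions from the number of water molecules rather than the box volume, which sidesteps the question of correcting for the volume occupied by the RNA.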
P18 L475 -Please explain what is baseline?
We have modified this sentence to state that 5 Å is where the second peak of the RDFs falls to a minimum.
Study of spontaneous fission lifetimes using nuclear density functional theory
The spontaneous fission lifetimes have been studied microscopically by minimizing the collective action integral in a two-dimensional collective space of quadrupole moments (Q20, Q22) representing elongation and triaxiality. The microscopic collective potential and inertia tensor are obtained by solving the self-consistent Hartree-Fock-Bogoliubov (HFB) equations with the Skyrme energy density functional and a mixed pairing interaction. The mass tensor is computed within the perturbative Adiabatic Time-Dependent HFB (ATDHFB) approach in the cranking approximation. The dynamic fission trajectories have been obtained by minimizing the collective action using two different numerical techniques. The values of spontaneous fission lifetimes obtained in this way are compared with the static results.
Introduction
The spontaneous fission of a nucleus is a many-body quantum tunneling in a multi-dimensional space of nuclear collective coordinates. To explore this large-amplitude collective motion (LACM) microscopically, we employ the ATDHFB theory, which provides a consistent theoretical framework to study LACM [1]. In this approach, the collective nuclear dynamics is considered to be much slower than the single-particle motion of individual nucleons. This approximation is fulfilled for spontaneous fission, where the excitation energy of the system is small compared to the fission barrier height [1,2].
The main ingredients for a theoretical calculation of the fission lifetime are the collective potential and the collective inertia tensor. For heavy nuclei, the microscopic calculation of these two input quantities can be done suitably by using self-consistent density functional theory (DFT) [3]. Within this approach, based on a suitable energy density functional, constrained HFB equations are solved to obtain the potential energy surface. In the ATDHFB approach, the collective inertia tensor is then obtained consistently from the self-consistent densities [3]. These quantities are used in the present paper to study the spontaneous fission half-life by minimizing the collective action integral in a two-dimensional collective space of quadrupole moments Q20 (elongation) and Q22 (triaxiality). In the present study, we have considered 264Fm as the fissioning system and, therefore, we have not considered Q30 (mass asymmetry), since symmetric fission is the major fission-decay mode for Fm isotopes.
The theoretical model is outlined in Section 2. Section 3 explains the numerical technique to obtain the minimum action path. The results are presented in Section 4. Finally, the results of this study are summarized in Section 5.
Calculation of energy surface and mass parameters
The symmetry-unrestricted DFT solver HFODD [4] is used to calculate the collective potential. The Skyrme energy density functional with the SkM* parameterization [5], which is optimized for the fission barrier height of 240Pu, is employed in the particle-hole channel. In the particle-particle channel, the density-dependent mixed pairing interaction is considered. A detailed description of the input used is given in Ref. [6]. The potential energy surface is obtained in the collective space of (Q20, Q22) by subtracting the zero-point energy (ZPE) from the total HFB energy E_tot. For the present purpose, the ZPE is obtained by using the Gaussian overlap approximation [7]. The behavior of E_tot and the ZPE for 264Fm is illustrated in Figure 1.
The components of the collective inertia tensor in two dimensions are calculated within the perturbative cranking approximation of the ATDHFB formula. The expression for the mass tensor reads [3,7]

M^pC = [M^(1)]^-1 M^(3) [M^(1)]^-1,  (1)

where the energy-weighted moment tensors, written in the quasiparticle basis of HFB, are

M^(K)_ij = Sum_{ab} <0|Q_i|ab><ab|Q_j|0> / (E_a + E_b)^K.  (2)

In Eq. (2), |ab> is a two-quasiparticle wave function and the sum is taken over the whole quasiparticle basis for neutrons and protons. Q_i is the quadrupole moment operator, either Q20 or Q22, and E_a denotes the quasiparticle energy. The contours of M^pC_ij in two dimensions are plotted in Figure 2 for 264Fm.
The effective mass M_eff(s) along a particular path s in the collective space is given by [8,9]

M_eff(s) = Sum_ij M_ij (dq_i/ds)(dq_j/ds).  (3)

We have tested the numerical accuracy of the quadrupole inertia by utilizing the above expression. First, M_eff(s) is calculated along the negative Q20 axis, which coincides with the oblate deformation axis. Then, the nuclear densities obtained along the negative Q20 axis are rotated by the Euler angles so that the symmetry axis lies along γ = 60°. Since γ = 60° corresponds to oblate shapes in the two-dimensional quadrupole deformation plane, the effective mass M_eff(s) along this axis should match the values obtained at Q20 < 0. This is demonstrated in Figure 3, where the values of M_eff(s) along the two pathways (γ = 180° and γ = 60°) are shown.
Action minimization techniques and spontaneous fission half-life
Here, we describe the numerical techniques used to calculate the minimum action path on the two-dimensional collective surface. The spontaneous fission half-life T_1/2 associated with the minimum action path is given by [8,9]

T_1/2 = ln 2 / (n P),  (4)

where n is the number of assaults of the nucleus on the fission barrier per unit time, often taken to be equal to 10^20.38 s^-1 [9]. The penetration probability P can be estimated from the semiclassical WKB approximation [8,9]:

P = [1 + exp(2 S(L))]^-1,  (5)

where S(L) is the action integral calculated along the fission path L(s) in the multi-dimensional deformation space:

S(L) = Integral_{s1}^{s2} sqrt(2 M_eff(s) [V(s) - E_0]) ds.  (6)

In the above equation, V(s) and M_eff(s) are the potential energy and effective mass along the fission path L(s), respectively. The limits s1 and s2 are the classical turning points on L(s), ds is the element of length, and E_0 is the ZPE calculated at the ground-state configuration. The effective mass M_eff(s) was obtained from the quadrupole mass tensor of Figure 2.
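To make the half-life evaluation above concrete, here is a minimal numerical sketch for a one-dimensional inverted-parabola barrier with constant inertia (toy units with ħ = 1; the barrier parameters are illustrative, not the 264Fm values):

```python
import math

def wkb_half_life(V, M_eff, E0, s1, s2, n_assault=10**20.38, npts=2000):
    """T_1/2 = ln2 / (n P), with P = [1 + exp(2 S)]^-1 and
    S = integral of sqrt(2 M_eff(s) [V(s) - E0]) ds between turning points."""
    h = (s2 - s1) / npts
    s_vals = [s1 + i * h for i in range(npts + 1)]
    f = [math.sqrt(2.0 * M_eff(s) * max(V(s) - E0, 0.0)) for s in s_vals]
    S = h * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])  # trapezoid rule
    P = 1.0 / (1.0 + math.exp(2.0 * S))
    return math.log(2) / (n_assault * P), S

# Toy barrier: V(s) = V0 (1 - s^2/a^2), constant inertia M.
V0, a, M, E0 = 6.0, 3.0, 1.0, 1.0
b = a * math.sqrt((V0 - E0) / V0)              # classical turning point
T, S = wkb_half_life(lambda s: V0 * (1 - (s / a) ** 2),
                     lambda s: M, E0, -b, b)
# For this profile the action has a closed form, useful as a check:
S_exact = math.sqrt(2 * M * (V0 - E0)) * math.pi * b / 2
```

The closed-form action follows because V(s) - E0 = (V0 - E0)(1 - s²/b²), whose integral over [-b, b] is π b / 2 times the prefactor.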
We have calculated the minimum action path by following the dynamic-programming method described in Ref. [8]. Alternatively, the minimum action path can be obtained using the Ritz method [9]. In the latter method, trial paths are expressed as Fourier series of the collective coordinates, and the coefficients of the different Fourier components are extracted by minimizing the action integral given by Eq. (6). As discussed in Ref. [9], the Ritz method takes much longer computational time than the dynamic-programming method and is more sensitive to the starting parameter set.
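The dynamic-programming minimization can be sketched on a small discretized (Q20, Q22) grid; a toy version (the grid values below are illustrative, not the HFODD potential and inertia):

```python
import math

def min_action_path(V, M, E0, dq20=1.0, dq22=1.0):
    """Dynamic programming over grid columns (Q20 index) and rows (Q22 index).
    V[r][c], M[r][c]: potential and effective inertia at each grid node.
    Returns (action, path) from node (row 0, col 0) to (row 0, last col)."""
    rows, cols = len(V), len(V[0])

    def seg(r1, c1, r2, c2):
        # Action of one segment: sqrt(2 * M_avg * (V_avg - E0)) * length.
        length = math.hypot((c2 - c1) * dq20, (r2 - r1) * dq22)
        m = 0.5 * (M[r1][c1] + M[r2][c2])
        v = 0.5 * (V[r1][c1] + V[r2][c2])
        return math.sqrt(2.0 * m * max(v - E0, 0.0)) * length

    INF = float("inf")
    dp = [[INF] * cols for _ in range(rows)]
    prev = [[None] * cols for _ in range(rows)]
    dp[0][0] = 0.0
    for c in range(1, cols):
        for r in range(rows):
            for rp in range(max(0, r - 1), min(rows, r + 2)):  # |dr| <= 1
                if dp[rp][c - 1] < INF:
                    cand = dp[rp][c - 1] + seg(rp, c - 1, r, c)
                    if cand < dp[r][c]:
                        dp[r][c], prev[r][c] = cand, rp
    # Backtrack the minimum-action path ending on the axial row.
    path, r = [], 0
    for c in range(cols - 1, -1, -1):
        path.append((r, c))
        r = prev[r][c] if prev[r][c] is not None else r
    return dp[0][cols - 1], path[::-1]

# Axial row (0) carries a heavy inertia in the barrier region; the
# triaxial row (1) is light.  The dynamic path detours through row 1.
V = [[3, 3, 3, 3, 3], [3, 3, 3, 3, 3]]
M = [[1, 50, 50, 50, 1], [1, 1, 1, 1, 1]]
S, path = min_action_path(V, M, E0=1.0)
print([r for r, c in path])  # -> [0, 1, 1, 1, 0]
```

Even with equal potentials on both rows, the path leaves the axial row to avoid the large inertia, the same competition between V(s) and M_eff(s) discussed in the Results section.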
Results
The minimum action paths obtained with the dynamic-programming method and the Ritz method are shown in Figure 4 by dashed and dotted lines, respectively. The value of E_0 is taken to be 1.0 MeV for the sake of a qualitative comparison of the different approaches. As seen in Figure 4, the dynamical paths are almost indistinguishable from each other, and this constitutes a stringent test of the method. The static path corresponding to the minimum-energy valley is shown in Figure 4 by a solid line. Clearly, the static path goes through strongly triaxial shapes, whereas the dynamical paths remain close to the prolate (Q22 = 0) axis. Therefore, a strong dynamical effect, due to variations of the collective inertia with Q22, prevents the corresponding fission pathways from reaching strongly non-axial shapes. This observation is in accordance with previously published results in Ref. [10].
To understand the dynamical effect better, in Figure 5 we plot the collective potential V(s) and mass parameter M_eff(s) along the different fission pathways. Evidently, the static path traverses a longer distance in the collective space. The dynamical paths are shorter, but they go through regions of large potential energy to avoid areas of large collective mass. This result emphasizes the role of collective mass parameters in determining fission pathways in a multi-dimensional collective space. The values of the action integral and fission half-lives corresponding to the different fission pathways are summarized in Table 1, together with the axial (one-dimensional; Q22 = 0) result. One can see that the dynamical results are close to the axial approximation. Also, the outer turning points for the dynamical paths, as shown in Figure 4, remain very close to Q22 = 0. This suggests that, in the case considered, triaxiality does not contribute significantly to the spontaneous fission lifetime within the perturbative cranking ATDHFB approach.
Summary
In summary, spontaneous fission lifetimes have been studied within a dynamic approach based on the minimization of the collective action in a two-dimensional collective space of elongation and triaxiality. A strong dynamical effect has been predicted; it offsets the static reduction of the inner barrier by triaxiality. This dynamical effect obviously depends on the nucleus under consideration. In the discussed case of 264Fm, the inner barrier is not sufficiently high to counteract the increase of the collective inertia. This observation is consistent with the results of the macroscopic-microscopic work [11], which pointed out that the effects of non-axial shapes on the fission process are weakened by the mass tensor. A more detailed study of dynamical effects due to triaxial and reflection-asymmetric degrees of freedom is in progress.
Figure 1. E_tot and ZPE (in MeV) for 264Fm, calculated in this work in the collective space of (Q20, Q22).
Figure 2. Same as Figure 1 but for the perturbative cranking inertia M^pC.
Figure 3. Effective mass M_eff(s) as a function of path length s along γ = 180° and γ = 60°.
Figure 5. Effective values of the potential V(s) (top) and mass parameter M_eff(s) (bottom) along different fission paths in 264Fm.
Table 1. Action integral and spontaneous fission half-life of 264Fm calculated with the different methods.
|
Intelligent Systems for the Detection of Internal Faults in Power Transmission Transformers
This chapter presents an approach based on expert systems, which is intended to identify and to locate internal faults in power transformers, as well as to provide an accurate diagnosis (predictive, preventive and corrective), so that proper maintenance can be performed. In fact, the main difficulty in using conventional methods, based on analysis of acoustic emissions or dissolved gases, lies in how to relate the measured variables when there is an internal fault in a transformer. This kind of situation makes it difficult to design optimized systems, because it prevents the efficient location and identification of possible defects with sufficient rapidity. In addition, there are many cases where the equipment must be turned off for such tests to be carried out. Thus, this chapter proposes an architecture for an intelligent expert system for efficient fault detection in power transformers using different diagnosis tools, based on techniques of artificial neural networks and fuzzy inference systems. Based on acoustic emission signals and the concentration of gases present in insulating mineral oil and electrical measurements, intelligent expert systems are able to provide, as a final result, the identification, characterization and location of any electrical fault occurring in transformers.
Introduction
With the changes occurring in the electricity sector, there is a special interest on the part of power transmission companies in improving and defining strategies for the maintenance of power transformers. However, when a fault occurs in a transformer, it is generally removed from the system and sent to a maintenance sector to be repaired. With this in mind, some feasibility studies have been conducted, aimed at supporting the electrical system in order to maintain the supply of energy, reducing operation costs and maintenance. Among these investigations, researches have been accomplished into the identification of internal faults in power transformers. In this case, the analysis of dissolved gases [1]- [5] and/or of acoustic emissions [6]- [10] can be highlighted. Within the context of economic viability, it is worth noting the increasing difficulty of removing an operating power transformer and placing it under maintenance. Thus, the above techniques, which evaluate parameters or quantities that indicate the current state of the transformer, have emerged as a more attractive alternative.
Although some papers deal with the development of tools for monitoring sensors [3], very few papers can be found on the efficient use of both sensor types (dissolved gases and acoustic emissions) in the same study. This is probably due to the fact that the cost associated with the acquisition of these sensors is very high. Another factor that should be highlighted is the growing use of intelligent tools for identifying and locating of internal faults [1-2, 5, 7].
The increasing use of intelligent tools is due to the fact that conventional techniques are not always able to achieve high accuracy rates of fault identification. In one of the most outstanding studies in the area [1], which makes a comparison between conventional and intelligent tools, the authors propose a method based on obtaining association rules that perform the best analysis of dissolved gases and satisfactorily ensure reliable identification of failures. The authors compared the proposed technique with other conventional methods (Rogers and Dornenburg) and intelligent techniques (Neural Networks, Support Vector Machines and k-Nearest Neighbors). A total of 1193 samples from dissolved gas sensors were acquired, which were divided into two sets of data in order to evaluate each technique used, i.e., one for training (1016 samples) and the other for validation (177 samples). After all training and validation processes had been conducted, the following accuracy rates were obtained: Artificial Neural Networks (62.43%), Support Vector Machines (82.10%), k-Nearest Neighbors (65.85%), Rogers (27.19%), Dornenburg (46.89%) and Association Rules (91.53%). According to the results, it can be clearly seen that intelligent systems outperform conventional methods.
In addition to this paper, in [2], the authors make a more detailed analysis of gases. In this analysis, a total of 10 kinds of fault were considered, namely: partial discharge; thermal failures lower than 150°; thermal failures greater than 150° and lower than 200°; thermal failures greater than 200° and lower than 300°; cable overheating; current in the tank or iron core; overheating of contacts; low energy discharges; high energy discharges; continuous sparking (a luminous phenomenon that results in the breakdown of the dielectric by discharge through the insulating oil); and partial discharge in solid insulation. It is worth mentioning that the method applied in this study was based on a fuzzy inference system, which was tested under controlled fault conditions. Other tests were also performed on Hungarian substation transmission transformers, where the method performed well against uncontrolled failure scenarios. However, studies [1] and [2] leave a gap with regard to internal fault diagnosis for power transformers, because they only identify the type of failure and do not locate the partial discharges.
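As an illustration of how a fuzzy inference system can grade such temperature-dependent thermal faults, here is a minimal sketch (the membership functions and breakpoints are our own illustrative choices, not the rule base of [2]):

```python
def trimf(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative fault classes keyed by estimated hot-spot temperature (deg).
CLASSES = {
    "thermal fault < 150 deg":   (0.0, 100.0, 160.0),
    "thermal fault 150-200 deg": (140.0, 175.0, 210.0),
    "thermal fault 200-300 deg": (190.0, 250.0, 310.0),
}

def classify(temp):
    """Return the class with the highest membership degree."""
    return max(CLASSES, key=lambda k: trimf(temp, *CLASSES[k]))

print(classify(120.0))  # -> thermal fault < 150 deg
print(classify(175.0))  # -> thermal fault 150-200 deg
```

The overlapping membership functions are the point of the fuzzy approach: a temperature near a class boundary contributes partially to both neighboring fault classes instead of being forced into a single crisp bin.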
In order to provide a better fault diagnosis for power transformers, some studies have used acoustic emissions to locate faults due to partial discharges. Among these investigations, in [8], the authors propose a geometric analysis of the arrival times of acoustic emission signals in order to properly locate the sources of partial discharges. In the proposed methodology, they use both time measurements from sensors and pseudo-measurements, which provide greater precision in the tracking system of partial discharges.
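The arrival-time localization idea can be sketched as a simple time-difference-of-arrival grid search (the sensor layout, the search grid, and the ~1400 m/s sound speed in oil are illustrative assumptions, not the method of [8]):

```python
import math

V_SOUND = 1400.0  # approx. speed of sound in transformer oil, m/s (assumed)

def locate_pd(sensors, dt, step=0.1, bounds=(0.0, 1.0)):
    """Grid search for the PD source: find the point whose predicted
    arrival-time differences (relative to sensor 0) best match dt."""
    lo, hi = bounds
    n = int(round((hi - lo) / step)) + 1
    axis = [lo + i * step for i in range(n)]
    best, best_err = None, float("inf")
    for x in axis:
        for y in axis:
            for z in axis:
                d = [math.dist((x, y, z), s) for s in sensors]
                err = sum((((di - d[0]) / V_SOUND) - t) ** 2
                          for di, t in zip(d[1:], dt))
                if err < best_err:
                    best, best_err = (x, y, z), err
    return best

# Four sensors on the walls of a toy 1 m cubic tank; true source on a node.
sensors = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
true = (0.4, 0.6, 0.2)
d = [math.dist(true, s) for s in sensors]
dt = [(di - d[0]) / V_SOUND for di in d[1:]]
print(tuple(round(c, 3) for c in locate_pd(sensors, dt)))  # -> (0.4, 0.6, 0.2)
```

Real implementations replace the brute-force grid with a least-squares solver and, as in [8], add pseudo-measurements to improve the conditioning of the problem.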
In the context of these studies, this chapter aims to determine the necessary procedures for the development of a methodology based on information from sensors for both dissolved gases and acoustic emissions. The purpose of this methodology is to achieve satisfactory results in identifying internal faults and, in the case of faults due to partial discharges, to locate them accurately, helping the decision-making process related to the maintenance of transmission transformers.
The tasks of identifying and locating internal faults in power transformers are extremely important, since they have a very high aggregate cost for purchase and for maintenance. Dissolved gas analysis and the analysis of partial discharges by means of acoustic emission sensors are essential for maintaining the equipment, and can bring many benefits, such as reducing the risk of unexpected failures, extending the useful life of a transformer, decreasing maintenance costs and reducing maintenance time (due to the precise location of the failure). Furthermore, with the processing of these data by means of intelligent expert systems, it becomes possible to provide answers to help in the decision-making process about the power transformer analyzed.
Internal Faults in Transformers
The diagnosis of the status and operating conditions of transformers is of fundamental importance in the reliable and economic operation of electric power systems. The aging and wear and tear of transformers determine the end of their useful life; thus, the occurrence of faults can affect the reliability or availability of the power transformer. Understanding the mechanisms of deterioration and having technically feasible and economically viable repair strategies enables us to correlate faults with the operating evolution of the equipment in service [11].
Many techniques have been proposed to ensure the integrity, reliability and functionality of power transformers, all of which seek to combine low cost, efficiency and rapid diagnosis. Among the several techniques available for detecting internal faults in power transformers, acoustic emission analysis can be highlighted because it is not invasive, allowing analysis to be conducted on the equipment during normal operation [12].

A power transformer can be affected by a variety of internal faults, such as partial discharges, electrical arcs, sparks, corona effects, and overheating. Of these, Partial Discharge (PD) can be highlighted, since it is directly related to the insulation condition of a power transformer, whose degradation in turn triggers the occurrence of more severe faults. PD in high-voltage systems occurs when the electric field in localized areas changes significantly enough for an electric current to appear [6].
According to [13], PD can be grouped into 8 classes:

• Point-to-point discharges in insulating oil: these PDs are related to insulation defects between two adjacent turns in the winding of a transformer;
• Point-to-point discharges in insulating oil with bubbles: this kind of fault is also caused by PD between two adjacent winding turns, but the condition of insulation degradation allows the formation of gas bubbles;
• Point-to-plane discharges in insulating oil: defects in the winding insulation system can cause PD between it and the grounded parts of the transformer tank;
• Surface discharges between two electrodes: the most common kind of PD, occurring between two oil-paper-insulated electrodes at the so-called triple point, where the electrode surface is in contact with solid and liquid dielectrics;
• Surface discharges between an electrode and a multipoint electrode: the PD relating to these elements differs from the previous one with regard to the intensity distribution of the electric field. Both are insulated with oil-paper;
• Multiple discharges on the plane: multiple damaged points in the winding insulation may cause PD between it and the grounded parts of the transformer tank;
• Multiple discharges on the plane with gas bubbles: the PD in this case occurs at various damaged points in the winding insulation and the grounded parts of the transformer tank, but in the presence of gases dissolved in the insulating oil;
• Discharges caused by particles: in this case, the insulating oil is contaminated with particles of cellulose fiber formed by the degradation process of the oil-paper insulation system, due to the aging of the power transformer. Such particles are in constant motion in the oil, causing PD.
Laboratory Aspects for Internal Fault Experiments in Power Transformers
It is important to specify equipment, methods and parameters, which vary according to the type of defect that is to be analyzed. In simple terms, the monitoring system can be better understood through Figure 1. The structures highlighted (inside the black boxes) are those that present the greatest challenges for configuration and parameterization, which are entirely dependent on the type of tests to be accomplished.
The most complete and detailed tests are, given their wide coverage of internal faults, more complex and expensive, due to the various devices required for the fault detection and location process, since more sensors and more data acquisition hardware are necessary.
Electrical measurements
Electrical parameters are also necessary for a correct characterization of internal transformer faults, especially when dealing with systems that require databases of normal operating conditions and of situations when the system has to be restored following a disturbance. This is the case for artificial neural networks, which require quantitative data for the learning process. It is necessary to measure the three-phase primary and secondary voltages and currents, totaling 12 electrical parameters. The acquisition frequency in this case need not be high, because the purpose is to investigate the most predominant harmonic components in the electrical system.
Acoustic measurements
The acoustic signals are captured by acoustic emission sensors distributed evenly over the tank, which are externally connected to the power transformer. Such sensors have several characteristics that require correct specification:

• Number of sensors per transformer: the number of sensors needed to detect internal faults in transformers varies according to the size of the equipment, the number of available channels and the type of fault to be detected. For the fault location task, for example, a greater number of sensors is required, so that the entire volume of the transformer can be monitored. Thus, a total of 16 to 20 sensors is normally used [14];
• Pre-amplification: this item is extremely important because only the amplified acoustic signals are sent to the acquisition hardware, which removes extraneous noise;
• Operating frequency: this is strongly dependent on the type of fault to be monitored. Mechanical faults are associated with frequencies ranging from 20 kHz to 50 kHz, while electrical ones vary between 70 kHz and 200 kHz;
• Resonance frequency: this parameter defines the frequency where the signal gain is maximum. For maximum performance, the resonance frequency of the sensor must be tuned to the phenomenon to be monitored. The most common sensors have a resonance frequency of 150 kHz.
The experimental apparatus for supporting experiments aimed at testing computer systems developed for identifying and locating partial discharges in power transformers consists of a metal tank, in which all the devices responsible for the acquisition of acoustic and electrical signals are mounted. Figure 2 illustrates a tank specially prepared for this purpose.
Measurements of dissolved gases
Measurement of dissolved gases in insulating oil can be acquired from chromatographic analysis of the oil, which is often performed in the laboratory. However, there are now some commercial devices that sense some gases dissolved in the oil. These devices can be used to monitor a power transformer in real time. It is worth mentioning that, through the analysis of dissolved gases, it is possible to obtain a first indication of a malfunction, which is usually related to electrical discharges and overheating. Figure 5 shows the installation (in the tank) of the gas sensor, which is responsible for acquiring information on the quantities of gases dissolved in the insulating oil in order to relate them to internal defects.
Equipment for data acquisition
As seen above, the frequencies of electrical signals differ greatly from those found in acoustic signals, so the acquisition hardware can be divided in two according to technical and financial aspects:

• Hardware for electrical signals: for the power quality purposes established in the Brazilian standard PRODIST [15], the 25th harmonic is the last one of interest. Thus, according to the Nyquist criterion, a minimum acquisition rate of 3 kHz is required. For electrical parameters it is also possible to use hardware with a multiplexed A/D converter, which reduces the cost of the equipment;
• Hardware for acoustic signals: one of the factors that makes this hardware expensive is the need for an A/D converter on each channel. The sources of acoustic emissions also vary between 5 kHz and 500 kHz, so an acquisition frequency in the MHz range is necessary.
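The sampling-rate requirement in the first bullet follows directly from the Nyquist criterion; a trivial check (assuming a 60 Hz fundamental, as in the Brazilian grid):

```python
def min_sampling_rate(fundamental_hz, highest_harmonic, margin=1.0):
    """Nyquist: sample at least twice the highest frequency of interest.
    An optional margin > 1 leaves headroom for the anti-aliasing filter."""
    return 2.0 * fundamental_hz * highest_harmonic * margin

# 25th harmonic of 60 Hz -> 1.5 kHz highest component -> >= 3 kHz sampling.
print(min_sampling_rate(60.0, 25))  # -> 3000.0
```

The same arithmetic applied to the 500 kHz upper edge of the acoustic-emission band immediately yields the MHz-range acquisition rate mentioned in the second bullet.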
Computer for receiving and processing data
The computer is responsible for storing the acoustic, electrical and dissolved-gas data coming from the acquisition hardware. The hardware bus speed and the disk storage capacity must also take into account the number of planned experiments, although a high-performance disk is unnecessary, since a SCSI bus can be used.
Analysis and diagnosis
The implementation of this structure is very challenging, because it consists of a combination of techniques to efficiently identify and locate faults in power transformers. Among these techniques, those based on intelligent systems have efficiently increased the performance of processes involving the detection and location of faults [13].
Data Analysis from Acoustic Emission Signals
Altogether, we collected 72 oscillograph records of partial discharges. Each of these records depicts a time window of one second. In general, many occurrences of partial discharge are registered in these time slots.
In addition to this phenomenon, the data acquisition system also recorded mechanical waves that were used to evaluate the calibration of the acoustic emission sensors. These waves result from breaking a graphite lead (with specifications given by the manufacturer of the acoustic emission sensors) near the surface where the sensor is installed. The graphs resulting from this test are highlighted in Figure 6.
Advances in Expert Systems 12
From Figure 8 we can see that each partial discharge results in a highly correlated mechanical wave. The graphs shown in Figure 9 highlight this relationship more clearly. In order to verify the behavior of the sensors in the tests, the voltage and current signals are processed to find the frequency response of these devices. Figure 11 records the amplitude versus frequency for the first calibration test. The top of the graph highlights the sampled energy and voltage signals, and at the bottom there is the amplitude versus frequency. From the signal analysis it is then possible to observe maximum responses around 400 Hz and 100 kHz.
In Figure 12, the signals were divided into segments where the amplitude was most significant for detection purposes, which reveals the presence of peaks of different amplitudes at various frequencies.
The energy signal shows an envelope having important information, making clear the differences between the acoustic emission signal and the reflections that are also registered. In order to better evaluate these peaks, segments of interest were amplified and the frequency response was recalculated for this section, as reported in Figure 13. In the segment highlighted in Figure 12, there is clearly a large concentration of low frequencies, with maximum amplitude at 10 Hz. In contrast, Figure 13 presents a large concentration at 100 kHz and another at approximately 2.5 MHz.
It is worth noting that, in the light of the two analyses, the signal with higher energy, recorded in the first segment, has an extremely low frequency wave. Thus, the propagation velocity tends to be higher due to the proximity to the spectrum of mechanical waves. However, for higher frequencies, typically observed in electromagnetic waves, there is a decrease of the signal energy, because this wave will suffer large attenuation when propagating through the insulating oil. Thus, the signal perceived by the acoustic emission sensor has already suffered severe degradation before being detected. This attenuation phenomenon is of great importance for the location process of partial discharges when installing more sensors in the experimental tank. In fact, since the speed of wave propagation in the insulating oil is known, it is then possible to estimate the location of the source of discharge.
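The location estimate mentioned above can be illustrated with a small sketch. This is a hedged simplification: it assumes an idealized constant propagation speed (1400 m/s is used here as an assumed order of magnitude for oil), a point source, known emission-to-arrival travel times, and invented sensor coordinates; real localization must also contend with the attenuation and dispersion effects just described.

```python
# Hedged sketch of the localization idea: with an assumed propagation speed
# in insulating oil and travel times measured at several sensors, the source
# position is estimated by minimizing the mismatch between predicted and
# measured distances over a grid. Sensor positions, tank size and speed are
# illustrative assumptions, not the experimental setup of the text.
import itertools
import math

SPEED = 1400.0  # m/s, assumed propagation speed in insulating oil

def locate(sensors, travel_times, step=0.05, size=1.0):
    """Brute-force grid search over a size x size x size tank (meters)."""
    best, best_err = None, float('inf')
    n = int(size / step) + 1
    for i, j, k in itertools.product(range(n), repeat=3):
        p = (i * step, j * step, k * step)
        # Sum of squared mismatches between geometric distance and v * t.
        err = sum((math.dist(p, s) - SPEED * t) ** 2
                  for s, t in zip(sensors, travel_times))
        if err < best_err:
            best, best_err = p, err
    return best

sensors = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
true_src = (0.3, 0.5, 0.2)
times = [math.dist(true_src, s) / SPEED for s in sensors]
print(locate(sensors, times))  # close to (0.3, 0.5, 0.2)
```

In practice only time *differences* of arrival are available, and a nonlinear least-squares solver would replace the grid search, but the geometry is the same.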
The energy calculation is performed to obtain the total power of a signal. However, some sampled values are negative, and therefore the quadratic sum of the sampled points must be calculated, as shown in the following equation: E_i = sum over j from 1 to M of x_{i,j}^2, where i indexes the data window (N windows in total) and j indexes the point within the window (M = 1101 points per window).
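The windowed energy computation can be sketched directly from this definition:

```python
# Sketch of the energy computation above: for each data window of M = 1101
# sampled points, the energy is the sum of squared samples (squaring makes
# negative samples contribute positively).

def window_energy(window):
    return sum(x * x for x in window)

def energies(signal, m=1101):
    """Split a sampled signal into consecutive full windows of m points and
    compute the energy of each window."""
    return [window_energy(signal[i:i + m])
            for i in range(0, len(signal) - m + 1, m)]

sig = [1.0, -2.0, 3.0] * 367  # 1101 samples -> exactly one window
print(energies(sig))  # [5138.0] = [367 * (1 + 4 + 9)]
```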
Thus, it may be noted that each data window corresponds to an acoustic emission signal measured by a given sensor. In this case, 8 sensors are used and, therefore, for each partial discharge we have 8 data windows. In addition, 10 samples of each partial discharge are considered, obtained at different moments. The energy calculation for each of the 8 acoustic emission sensors is shown in Figures 14 to 21. Moreover, three different experiments were compared, varying the depth of the partial discharges in the oil tank used during the tests.
Experiment 1 represents a partial discharge located at 5 cm from the surface of the insulating oil, while experiments 2 and 3 are respectively located at 21.5 and 40 cm from the surface of the insulating oil.
Experiment 3 also had a small variation in the distance of the partial discharge from the front of the experimental tank, where it was moved 1 cm with respect to the original position of tests 1 and 2.
It is important to mention that this displacement was made in such a way that the partial discharge of experiment 3 could be detected by the sensors closer to the front wall of the tank: sensors 1 and 2, allocated on that wall, were expected to be more sensitive in experiment 3 than in experiments 1 and 2.
From Figures 14 and 15 it is possible to observe the energy response supplied by sensors 1 and 2 (for each of the 10 samples), which shows that experiment 1 made the greatest contribution to sensitizing them, while sensor 3 shows an energy response that makes it difficult to define which experiment caused the highest sensitization (Figure 16). Sensor 4 showed an energy response similar to that already shown for sensors 1 and 2 (Figure 17). The energy response supplied by sensor 5 (Figure 18) shows a certain emphasis on experiment 1, but its energy levels are very close to those of experiments 2 and 3. The energy response of sensor 6 (Figure 19) was, in almost all samples, similar to those obtained by sensors 1 and 2; however, in the first sample the energy levels of the three experiments are very similar, although sensor 6 was a little more sensitive in experiment 3. Finally, sensor 8 presented an energy response (Figure 21) similar to that already obtained by the other sensors, whose highest sensitization was caused by experiment 1.
Intelligent Systems
This section provides a theoretical foundation for fuzzy inference systems and artificial neural networks, as they are very prominent intelligent tools in the literature.
Fuzzy inference systems
Systems called fuzzy are built based on the theory of fuzzy sets and fuzzy logic, introduced by Zadeh in 1965, to represent knowledge from inaccurate and uncertain data. Fuzzy systems are a way to make a computational decision close to a human decision. Figure 22 shows a block diagram that expresses, in a simplified form, how a fuzzy system works. On the other hand, the "Inference Procedure" block maps a system by using the linguistic rules. Thus, when the rules are combined with the input fuzzy sets produced by the fuzzification interface, the system is able to determine the behavior of its output variables so that they can be defuzzified, generating the output corresponding to a given input value.
When using a fuzzy inference system, the fuzzy rules and sets are adjusted and tuned with expert information. However, in some cases, because of the complexity and nonlinearity of the problem, it is necessary to use hybrid systems, such as ANFIS, where the adjustments are performed in an automated manner according to the data set that represents the process. It is worth mentioning that, regardless of the setting, every fuzzy system has linguistic rules that can be represented in the form "If x1 is A1 and ... and xn is An, then y is B". Another factor that should be noted is the inference procedure, for which a variety of methods can be used; currently, the most common are those of Takagi-Sugeno and Mamdani.
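The fuzzification, rule evaluation and defuzzification pipeline can be illustrated with a toy zero-order Takagi-Sugeno sketch. The membership functions, rule base and the temperature/fan-speed variables below are invented assumptions for demonstration only.

```python
# Minimal sketch of the fuzzification -> inference -> defuzzification
# pipeline described above, using zero-order Takagi-Sugeno rules.
# Membership functions, rules and universes are toy assumptions.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(temp):
    # Fuzzification: degrees of membership in each input set.
    mu_low = tri(temp, -100.0, 0.0, 100.0)
    mu_high = tri(temp, 0.0, 100.0, 200.0)
    # Rule base (constant consequents):
    #   IF temp is low  THEN fan_speed = 20
    #   IF temp is high THEN fan_speed = 80
    num = mu_low * 20.0 + mu_high * 80.0
    den = mu_low + mu_high
    # Defuzzification: weighted average of the fired rule consequents.
    return num / den if den else None

print(infer(50.0))   # 50.0 -> both rules fire equally
print(infer(100.0))  # 80.0 -> only the 'high' rule fires
```

A Mamdani system would instead clip output fuzzy sets and take a centroid, but the structure of the pipeline is the same.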
Artificial neural networks
Artificial neural networks are computational models inspired by the human brain, which can acquire and retain knowledge. Among the various neural network architectures there is the multilayer architecture, called MLP (Multilayer Perceptron). This type of architecture is usually used for pattern recognition, functional approximation, and identification and control tasks [16]. The structure of a neural network can be developed according to Fig. 3. As seen in Fig. 3, the neural network structure is basically composed of an input layer, hidden neural layers and an output neural layer. Between the layers there is a set of weights, represented by a matrix of synaptic weights that is adjusted during the training phase. It is further worth commenting that, for each of the neurons (in the hidden neural layers and the output neural layer), it is necessary to implement activation functions in order to limit their outputs. In view of this basic configuration of the MLP neural network, other factors that should be explored are the training and validation stages.
During the training phase of MLP neural networks, several algorithms can be used. The backpropagation algorithm can be highlighted, which uses a descending-gradient calculation to reach the best adjustment of the synaptic weight matrix. In addition to backpropagation, the Levenberg-Marquardt algorithm has been widely used because of its ability to accelerate the convergence process, due to its use of an approximation of Newton's method for non-linear systems [16].
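The mechanics of one backpropagation update can be sketched with a tiny 2-2-1 sigmoid network. The weights, learning rate and training pair are toy assumptions (and biases are omitted); this illustrates the update rule, not a tuned model.

```python
# A minimal MLP sketch (2 inputs, 2 hidden sigmoid neurons, 1 sigmoid
# output) with gradient-descent updates via backpropagation.
# Toy weights, no biases: an illustration of the mechanics only.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

W1 = [[0.5, -0.3], [0.8, 0.2]]  # hidden weights W1[neuron][input]
W2 = [0.4, -0.6]                # output weights
LR = 0.5                        # learning rate

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)))
    return h, y

def train_step(x, target):
    """One backpropagation update; returns the squared error before it."""
    h, y = forward(x)
    err = target - y
    delta_out = err * y * (1 - y)                       # output-layer gradient
    delta_hid = [delta_out * W2[j] * h[j] * (1 - h[j])  # hidden-layer gradients
                 for j in range(len(h))]
    for j in range(len(W2)):
        W2[j] += LR * delta_out * h[j]
    for j in range(len(W1)):
        for i in range(len(x)):
            W1[j][i] += LR * delta_hid[j] * x[i]
    return err * err

e0 = train_step([1.0, 0.0], 1.0)
e1 = train_step([1.0, 0.0], 1.0)
print(e1 < e0)  # True: the squared error decreases after one update
```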
On the other hand, the validation stage has the purpose of verifying the integrity of previously conducted training, so that the learning ability (generalization) of neural networks can be analyzed.
Intelligent Systems Used for the Identification and Location of Internal Faults in Power Transformers
As already mentioned in Section 1, a wide range of papers may be found in the literature, which are concerned with the identification and location of internal faults in transformers. However, there are very few papers which use intelligent systems applied to the same purpose, also taking into account experiments with acoustic emission sensors, electrical measurements and dissolved gases.
Among the most prominent papers found in the literature, we can highlight a few that use fuzzy inference systems and artificial neural networks for the analysis of dissolved gases [2, 17-19] and, for decision making, data from acoustic emission sensors [13].
As may be observed in the papers [2, 17, 18], which apply fuzzy systems to the analysis of dissolved gases, the only notable difference lies in the fact that each one proposes different input variables to solve the problem and also different classes of faults. Thus, each paper has different settings of rules and of discourse universes for each input variable.
Therefore, a task of great importance in analyzing dissolved gases is the data preprocessing step, where the most relevant variables for characterizing internal faults in power transformers are obtained.
As for the papers that analyze acoustic emission data, they typically employ conventional techniques [6-10]. The authors in [13], however, perform a series of experiments with partial discharges in insulating oil. These tests are not performed in order to apply the methodology to power transformers, but rather to identify partial discharges in any environment where oil is the insulator. In order to identify the partial discharges, the authors use an MLP artificial neural network with backpropagation training, achieving accuracy rates above 97%.
Following the above context, it appears that the development of a method for identifying and locating internal faults in power transformers requires a number of steps, which are set out below: • Allocation of sensors (acoustic emission and dissolved gases); • Acquisition of data from sensors in accordance with the requirements commented upon in Section 3; • Data preprocessing stage (definition of the most relevant variables and application of other necessary tools); • Training or tuning of intelligent systems;
• Data validation (use of other data than those used in training/tuning stage); • Performance analysis of the methodology in relation to other methodologies found in the literature.
It is worth mentioning that, of the 6 steps above, most attention should be given to the allocation of sensors and the acquisition of data, because bad data acquisition can affect the whole process of identifying and locating faults. It is also important to emphasize that the calculations made during the preprocessing of the signals were devised in order to extract the characteristics that best represent the position of the partial discharge in relation to the acoustic emission sensor. However, for this first stage of testing the expert system and the hardware used in the acquisition of the signals, we used the experimental tank.
In order to better represent the embedded software, a block diagram detailing the calculations to be performed by the software is set out below (Figure 24). As can be seen in Figure 24, the embedded software, after obtaining the acoustic signal, applies some computations in order to extract the characteristics that represent the signal appropriately. Through these features, the expert system is able to distinguish these signals and to locate the source of partial discharges.
In this context, during the preprocessing step of the signals, the following calculations are performed: RMS, Energy, Length, Amplitude, Rise Time and Threshold. Finally, after obtaining the signal characteristics, they are sent to the computer through a USB (Universal Serial Bus) connection.
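These features can be sketched for one acquisition window as follows. Where the text does not fix the definitions, common acoustic-emission conventions are assumed: duration (length) is measured between the first and last threshold crossings, and rise time from the first crossing to the peak, both in sampling periods.

```python
# Hedged sketch of the preprocessing features named above, computed over one
# acquisition window. Duration and rise-time definitions are common AE
# conventions, assumed here where the text does not fix them.
import math

def ae_features(window, threshold, dt=1.0):
    """dt: sampling period; times are returned in units of dt."""
    abs_w = [abs(x) for x in window]
    energy = sum(x * x for x in window)
    rms = math.sqrt(energy / len(window))
    amplitude = max(abs_w)
    over = [i for i, v in enumerate(abs_w) if v >= threshold]
    if over:
        duration = (over[-1] - over[0]) * dt                  # first to last crossing
        rise_time = (abs_w.index(amplitude) - over[0]) * dt   # crossing to peak
    else:
        duration = rise_time = 0.0
    return {'rms': rms, 'energy': energy, 'amplitude': amplitude,
            'duration': duration, 'rise_time': rise_time, 'threshold': threshold}

f = ae_features([0.1, 0.5, 2.0, 1.0, 0.2], threshold=0.4)
print(f['amplitude'], f['duration'], f['rise_time'])  # 2.0 2.0 1.0
```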
Upon receipt of these data, the expert system is then responsible for providing information regarding the location of any partial discharge in the transformer. In order to better represent the overview of the expert system, a block diagram is shown in Figure 25. In this figure, it may be noted that, after the data concerning the previously mentioned characteristics are received, they are provided as input to the expert system. In Figure 25 we can also observe that the expert system is composed of intelligent tools, such as artificial neural networks and fuzzy inference systems, which aim to locate partial discharges. Upon locating a partial discharge in a transmission transformer, the operator may submit the equipment for maintenance (if necessary). Thus, the intelligent system has the function of assisting the decision-making of the electric utility.
Conclusion
The tasks of identifying and locating internal faults in power transformers are extremely necessary, since the transformer is one of the pieces of equipment with the highest aggregated cost, for both its purchase and its maintenance.
Therefore, dissolved gas analysis and the analysis of partial discharges by means of acoustic emission sensors are essential for maintaining the equipment, which brings many benefits such as reducing the risk of unexpected failures and unscheduled downtime, extending transformer working life, reducing maintenance costs and minimizing maintenance time (due to failure location). Furthermore, processing this data by means of intelligent systems makes it possible to provide answers to help in decision-making about the analyzed power transformers.
|
v3-fos-license
|
2019-02-17T14:05:20.474Z
|
2017-04-01T00:00:00.000
|
55615503
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "https://astesj.com/?download_id=1447&smd_process_download=1",
"pdf_hash": "649fbc85322ef5dc52052bba7d108d8227e2cf0b",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:984",
"s2fieldsofstudy": [
"Business"
],
"sha1": "3a5f37de8007d6f3cc726436941ae39033a505bc",
"year": 2017
}
|
pes2o/s2orc
|
Design of Petri Net Supervisor with 1-monitor place for a Class of Behavioral Constraints
This paper studies the design of supervisory controllers with a minimum number of monitor places for Manufacturing Systems modeled as safe Petri Nets. The proposed approach considers a class of safety specifications known as Behavioral Constraints with a restricted syntax. The set of Behavioral Constraints is represented as predicate logic formulas in conjunctive normal form. Then, each Behavioral Constraint induces a set of algebraic linear inequalities. The approach establishes an equivalence in order to minimize the number of monitor places: each Behavioral Constraint induces a single linear inequality, giving rise to a 1-monitor-place Petri Net supervisor. The approach is illustrated with the design and implementation of a 1-monitor-place modular supervisor for an automated manufacturing prototype.
Introduction
The operation of manufacturing systems is increasingly challenging because of the execution of more complex tasks. In order to reduce the duration of manufacturing procedures while complying with regulatory standards that guarantee proper operation and product quality, plenty of manufacturing features have been improved in recent years [1]. Reconfigurability allows changing the entire procedure of an Automated Manufacturing System (AMS), but it must also minimize the use of time and resources [2]. The safety of the operation, with all the automatic processes occurring in the AMS, is a critical feature, leading to the existence of entities with the purpose of guaranteeing safe operation, such as Supervisory Controllers (SCs). For AMS modeled as discrete event systems, the Supervisory Control Theory (SCT) proposed by Wonham in [3] is a well-accepted paradigm, frequently employed for designing logic controllers at the coordination and basic layers of control systems. Petri Nets provide a formal platform for the modeling, synthesis and analysis of logic controllers, widely used in AMS (e.g. [4, 5]). The synthesized Supervisory Controller (SC) is a Petri Net (PN) with a finite number of places, called monitor places. Among the advantages of PN: supervisor representations are usually more compact than their automata counterparts, and concurrency in the execution of transitions is accepted. Among several design methods considering safety specifications, the Invariant Based Control Design method [6] has been successfully employed to deal with forbidden states [7] and Behavioral Constraints [8]. However, the resulting PN may not be a minimal realization of the SC. Synthesis strategies for PN supervisors with a reduced number of monitor places have been proposed for forbidden state avoidance only [9], not for Behavioral Constraints. This paper studies the synthesis of 1-monitor-place supervisory controllers for safe PN.
The proposed design approach employs the Invariant Based Control Design (IBCD) method and a class of safety specifications [10] that can be modeled as Behavioral Constraints [8]. Section 2 introduces the fundamentals of PN and SCT and the representation of Behavioral Constraints (BCs) as a set of linear inequalities. Section 3 shows the proposed technique to transform the set of BCs into a smaller set of linear inequalities, leading to a PN supervisor with a reduced number of monitor places using the IBCD method. Section 3 also establishes the conditions for a Supervisory Controller based on Behavioral Constraints (SCBC) to be proper. Section 4 presents the case study used in this work, an AMS, with its description and modeling. Then, Section 5 presents a set of BCs to be imposed on the AMS, their representation as linear inequalities and the resulting SC designed using the IBCD method, as well as its implementation as a ladder diagram.
Fundamentals
In this Section the basic definitions of Petri Nets and Supervisory Control Theory are introduced.
Petri Nets fundamentals
For modeling techniques, as well as the structural and dynamic properties of PN, the reader is referred to [11].
Supervisory Control Theory (SCT)
The automata version of SCT is developed in [3]. In this subsection, the fundamentals of SCT for discrete event systems modeled as PN are introduced, as seen in [6]. Moreover, the basic concepts and definitions of BCs are discussed in [12] and presented in the current section.
Definition 9 (Control pattern) Let N be a PN and T be its set of transitions.
The control pattern Γ is defined as the set of transitions enabled in a marking M of (N,M).
Definition 10 (Transition sequence) Let (N, M) be a PN system and T be its set of transitions. σ = t1 t2 ··· tn is a sequence of transitions such that each ti is enabled at the marking reached after firing t1 ··· t(i-1). Finally, the concept of safety specification is explained. A safety specification leads the system to develop a safety property. Safety properties are often characterized as "nothing bad should happen". The mutual exclusion property and deadlock freedom are examples of safety properties [10].
Predicate representation of Behavioral Constraints
Let N be a safe PN with firing vector Q = [q1 q2 ··· ql] and let (N, M) be a system with marking vector M = [m1 m2 ··· ml].
Definition 17 (Predicate variable) A : Q → {True, False}, associated to a firing transition Ti, is defined with the following rule.

Definition 19 (Behavioral Constraint (BC)) A BC is defined with the following predicate logic syntax, with A being a predicate variable associated to the firing transition Ta and Φ a formula in conjunctive normal form, composed of predicate variables associated to marked places, with rj as the place index in N, j = 1, 2, ..., l, and l the number of places associated in Eq. 3.

Proposition 21 states that Eqs. 1 and 2 are equivalent. Proof. N is a safe net, thus N is a 1-bounded net. Hence the marking vector takes only the values 0 and 1. Therefore the truth table of Table 21 holds.

Using Proposition 21, the BC presented in Eq. 5 can be written in an equivalent form, as shown in Lemma 22.
with il as the number of disjunction variables in each formula φ i . Proof. It follows from applying Proposition 21 to BC 5.
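The inequality form that Lemma 22 arrives at can be illustrated numerically. In a safe (1-bounded) net, a BC clause "if ta fires, then at least one place of the clause is marked" becomes the linear inequality qa − (m1 + ··· + ml) ≤ 0 over 0/1 variables; the sketch below simply checks this equivalence on the three possible situations. Names are illustrative.

```python
# Sketch of the Lemma-22 idea for a safe (1-bounded) net: a BC clause
# "A(q_a) implies (place r1 marked OR ... OR place rl marked)" induces the
# linear inequality q_a - (m_r1 + ... + m_rl) <= 0 over 0/1 variables.

def clause_inequality_holds(q_a, clause_markings):
    """q_a in {0,1}: firing variable; clause_markings: 0/1 markings of the
    places appearing in one disjunctive clause of the BC."""
    return q_a - sum(clause_markings) <= 0

# Firing with a marked place in the clause satisfies the constraint...
print(clause_inequality_holds(1, [0, 1]))  # True
# ...firing with the whole clause unmarked violates it...
print(clause_inequality_holds(1, [0, 0]))  # False
# ...and not firing is always allowed.
print(clause_inequality_holds(0, [0, 0]))  # True
```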
Supervisory Controller design using an equivalent representation of a set of Behavioral Constraints

Using the n inequalities induced by predicate system 8 with the IBCD method [6], a PN supervisor is obtained with n monitor places, each one with a bidirectional arc to transition ta. A procedure is presented below to design a PN SC with a single monitor place, based on a BC as in Eq. 1.
Theorem 23 Let A(qa) and Θ(mk1), Θ(mk2), ..., Θ(mkl) be variables as in Definitions 17 and 18, and let a BC for restricting the system behavior be given, with mK = mk1 + mk2 + ··· + mkn, mJ = mj1 + mj2 + ··· + mjm and m > 0.
Let there be a BC for restricting the system behavior. A 1-monitor-place PN supervisor can be synthesized (i.e. its incidence matrix can be calculated) with the IBCD method using the linear inequality
[n·qa − mK] ≤ 0 (11)

with mK = mk1 + mk2 + ··· + mkn.

3.1 Properness of a Supervisory Controller based on Behavioral Constraints

The conditions for a SCBC to be non-blocking and controllable are studied in this subsection.
Definition 25 (System Under Supervision) Let N be a safe net and M its marking vector. Let C be the PN that implements a supervisor for N and M c the marking vector of C.
A System Under Supervision (SUS) is defined as
where N||C represents the synchronization of nets N and C. This definition complements Definition 13, adding the marking vector. In the rest of the document, the closed loop system will be referred to as the SUS.
A supervisor is proper iff the SUS is non-blocking and controllable [3].
Liveness analysis
A necessary condition for non-blocking is liveness. For safe PN modeling AMS, the condition of liveness is required, as shown in this subsection. An AMS is composed of subsystems, each modeled as a live and bounded PN circuit.
Definition 26 (Partial blocking) A system (N , M) is called partially blocking if there is a sub system (N 1 , M 1 ) of (N , M) which is blocking.
Lemma 27 Let N be a safe PN. System(N , M) is live if and only if is not partially blocking.
Proof. For the necessary condition: if a system is not partially blocking, then the system is live. For the sufficiency, it is enough to prove that in a partially blocking system there is a transition not enabled in any reachable marking of M. Assume a blocking system (N1, M1) with N1 a subnet of N. Let t be an output transition of a place s of N1 such that t is not enabled in marking M1, i.e. s has no tokens in M1. The system is partially blocking in M1, hence the markings reachable from M contain elements in which s has no tokens. If s has no tokens, transition t is not enabled. Therefore (N, M) is not live.
Therefore, for safe PN, non-partial blocking is required in order to ensure full functionality of the AMS; hence, by Lemma 27, liveness is required. Now, the condition for a SCBC to be live is established. Using Definition 28, Proposition 29 and Lemma 31 are proved. Proposition 29 establishes conditions to guarantee the reachability of a marking vector. Lemma 31 demonstrates that, if an associated marking vector is reachable, then the SUS is live. Finally, Theorem 32 follows from Proposition 29 and Lemma 31, establishing the condition for a SUS to be live.
Definition 28 (Marking vector associated to constraints) The marking vector associated to the above constraint is defined as RT = [m1 m2 ··· 1 1 ··· 1 m(2+kn) ··· ml], with l as the number of places.

Proposition 29 There is not more than 1 place in the BC belonging to the same minimal S-invariant S of N if and only if the associated marking vector of the above BC is reachable.
Proof. First the following implication is proved using its contrapositive: if the associated marking vector is reachable, then there is not more than 1 place in the BC belonging to the same minimal S-invariant. Consider S a minimal S-invariant containing 2 or more places included in the BC, and the vector S1 = [1 1 ··· 1] of length m, with m the number of places in S. The next equation is the invariance condition and guarantees that the number of tokens in an S-invariant is conservative.
Mos is the initial marking of the places in S and, by the conservativeness of the S-invariant, this value holds for any reachable marking. Let RS be the projection of R containing the values corresponding to the places in S. Multiplying S1 by RS gives S1 · RS ≥ 2. This expression violates conservativeness, hence the marking is not reachable.
For the converse implication, consider that there is not more than 1 place in the BC belonging to the same minimal S-invariant S. Therefore, all places of the BC belong to different and disjoint minimal S-invariants; this is concluded from the fact that the net N is 1-bounded and the system (N, M) is live. The last claim implies that every minimal S-invariant is marked in M, because N is a free-choice PN (see the Commoner Theorem in [11]). Thus, every S-invariant has a token in the initial marking, the system is live and, by Lemma 27, it is not partially blocking. Hence, there is a reachable marking of the system (N, M) such that every place in the BC has one token simultaneously (the invariants are disjoint) and the associated marking vector is reachable. Proof (of Lemma 31). If the associated marking vector is not reachable, the formula Φ of the BC is never true, thus transition ta is never enabled. The system is not live.
Proof. By contradiction, assume the SUS is live and there is no reachable marking such that formula Φ is true and ta is enabled. By Proposition 30, the associated marking vector of the BC is not reachable; hence, by Lemma 31, the SUS is not live, leading to a contradiction. Now, for the sufficiency condition, assume that marking Mr is reachable, formula Φ is true and ta is enabled in Mr. Therefore, transition ta is enabled in the SUS, hence it is enabled in the systems with and without supervision. The following claim is proved in 37 from subsection 3.1.2: only transition ta may be disabled by the supervisor. The system (N, M) is live and the SUS may only disable transition ta. However, there is a marking Mr enabling transition ta in the SUS; henceforth every transition is enabled in some reachable marking of the SUS and, by definition, the SUS is live.
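The conservation argument these proofs rely on can be checked numerically: if x satisfies x^T·D = 0, then x^T·M is constant along any firing (M' = M + D·q). The toy two-place net below is an illustration, not one of the paper's models.

```python
# Sketch of the S-invariant conservation used above: a vector x with
# x^T . D = 0 keeps the weighted token count x^T . M constant when a
# transition fires (M' = M + D . q). Toy net for illustration.

def is_s_invariant(x, D):
    cols = len(D[0])
    return all(sum(x[i] * D[i][j] for i in range(len(x))) == 0
               for j in range(cols))

def fire(M, D, t):
    """State equation for firing a single transition t."""
    return [M[i] + D[i][t] for i in range(len(M))]

D = [[-1, 1],   # place p0: t0 consumes, t1 produces
     [1, -1]]   # place p1: t0 produces, t1 consumes
x = [1, 1]      # candidate S-invariant covering p0 and p1
M0 = [1, 0]

assert is_s_invariant(x, D)
M1 = fire(M0, D, 0)
print(sum(xi * mi for xi, mi in zip(x, M0)),
      sum(xi * mi for xi, mi in zip(x, M1)))  # 1 1 -> token count conserved
```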
Non-conflict analysis
If a set of BCs is non-conflicting then the resulting SC is non-blocking [3]. As before, liveness is required for manufacturing systems. Hence, a set of BCs is called non-conflicting if the SUS is live. Proof. Necessary condition: a set of constraints is non-conflicting if the SUS is live. Assume a SUS such that there is a subnet C1 of C generating a non-live system (C1, M1). Since it is not live, there is a transition t1 disabled in all markings reachable from some marking Mi. t1 is also a transition of the SUS; therefore the SUS is not live, leading to a contradiction.
For the sufficiency, assume that a SUS is not live. Therefore, at least one transition t of N is not enabled in any reachable marking. In the first case, t is connected to C. Then, there is a place c in C, input to t, with no tokens in any reachable marking. There is a transition T1, input to c, that is not enabled and, following the same idea as for t, assuming T1 connected to C there is a place c1 in C input to T1. Proceeding recursively until place cn is place c (there is a finite number of places in C), there is a subnet of C with a disabled transition, hence the subnet is not live. If transition t is not connected to C, there is a transition ti in the same minimal S-invariant as t connected to C, and the above procedure can be followed for ti. Note that a controlled siphon is a siphon that never becomes unmarked.
Controllability analysis
This subsection shows that a SUS synthesized using the IBCD method with BC is, in fact, controllable.
AMS case study

4.1 System description and open loop model description
The AMS employed as a case study is a pneumatic punching center whose topology is illustrated in Fig. 1. The manufacturing procedure begins when a piece arrives to the storage unit (SU), then valve B (VB) opens, activating the input piston (IP). IP pushes the piece into the slot 1 (S1) of the rotatory table, while valve A (VA) retracts the IP. The motor (MR) is turned on, generating a rotation of 90 degrees clock-wise in the rotor, and the piece advances to slot 2 (S2). The piece is processed by the punching machine (PM) at slot 2, using valve E (VE) to activate the PM. Then, the motor turns 90 degrees clock-wise again, placing piece into slot 3 (S3). The piece at slot 3 is pushed by the output piston (OP), activated by valve D (VD), to a conveyor belt, and finally, valve C (VC) retracts the OP.
Each elementary component of the AMS is modeled as a two-place PN block. A place is associated to each discrete value of the component, and a transition is added to the model for each event that changes a discrete value. For the initial marking, a token is added to the place associated with the initial discrete value of each component; the rest of the places remain with no tokens. Table 3 lists the elementary components with the associated semantics of each place and transition. Fig. 2 shows the PN blocks of the AMS.
The following causal relationships complete the open loop behavior of the AMS. Bidirectional arcs are added to the model to include these relationships in the behavior, as shown in Fig. 2. • A piece can arrive at slot 1 only if the input piston is out and there is a piece in storage (bidirectional arcs from P2 and P4 to T5). This PN is live and 1-bounded, i.e. it is a safe PN. The incidence matrix d of each PN module is of the form of Eq. 14. Hence, the incidence matrix Dp of the entire system is a 28x28 block matrix, shown in Eq. 15. The initial marking vector m of each module is shown in Eq. 16. Hence, the initial marking vector Mo of the AMS is shown as a block vector in Eq. 17.
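The token-game semantics of these two-place blocks can be sketched as follows. The pre/post matrices below model one illustrative off/on component, not any specific block of the AMS.

```python
# Minimal token-game sketch of the two-place PN blocks described above: a
# transition is enabled when all its input places are marked, and firing
# moves tokens according to the pre/post matrices. Toy component for
# illustration (off <-> on).

def enabled(pre, M, t):
    return all(M[p] >= pre[p][t] for p in range(len(M)))

def fire(pre, post, M, t):
    assert enabled(pre, M, t)
    return [M[p] - pre[p][t] + post[p][t] for p in range(len(M))]

# Places: [off, on]; transitions: [turn_on, turn_off]
pre = [[1, 0],   # turn_on consumes the 'off' token
       [0, 1]]   # turn_off consumes the 'on' token
post = [[0, 1],  # turn_off returns the 'off' token
        [1, 0]]  # turn_on produces the 'on' token
M = [1, 0]

M = fire(pre, post, M, 0)
print(M)                   # [0, 1] -> component is on
print(enabled(pre, M, 0))  # False: turn_on disabled until turned off again
```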
Closed loop specification modeling
The specifications to be imposed upon the AMS are described in this subsection. Four safety specifications are defined to ensure safe operation of the AMS. Following Definition 19, each specification has a corresponding BC.
1. If turning on the motor (T27) is enabled, then both pistons (P3, P13) and the punching machine (P11) are in the withdrawn position and there is a manufacturing piece in slot 1 (P6) or in slot 2 (P8). Using Lemma 22, the induced system for the BCs of Eqs. 18-21 is presented in system 22, a linear system of 8 inequalities. Employing the method proposed in Section 3 (Theorem 23 and Corollary 24), Eqs. 18-21 are transformed into the set of 4 linear inequalities shown in system 23.
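Once the BCs are reduced to linear marking constraints, each one becomes a monitor place via the classical invariant-based control design (the IBCD method referenced in the conclusions): for constraints L·M ≤ b, the supervisor incidence is Dc = −L·Dp and the monitor's initial marking is Mc0 = b − L·M0. A minimal sketch follows, using a toy two-component plant and a hypothetical mutual-exclusion constraint rather than the paper's Eqs. 18-23.

```python
import numpy as np

def monitor_places(L, b, Dp, M0):
    """Invariant-based supervisor synthesis: for constraints L @ M <= b,
    each row of L yields one monitor place whose incidence row is
    Dc = -L @ Dp and whose initial marking is Mc0 = b - L @ M0."""
    L = np.atleast_2d(L)
    Dc = -L @ Dp
    Mc0 = b - L @ M0
    if (Mc0 < 0).any():
        raise ValueError("initial marking violates a constraint")
    return Dc, Mc0

# Toy plant: two two-place components (4 places, 4 transitions).
d = np.array([[-1, 1], [1, -1]])
Dp = np.block([[d, np.zeros((2, 2), int)],
               [np.zeros((2, 2), int), d]])
M0 = np.array([1, 0, 1, 0])

# Hypothetical mutual-exclusion constraint: the two components must not
# both be "on" at the same time, i.e. M(P2) + M(P4) <= 1.
L = np.array([[0, 1, 0, 1]])
b = np.array([1])
Dc, Mc0 = monitor_places(L, b, Dp, M0)
```

The single monitor place starts with one token and consumes it whenever either component turns on, re-depositing it when the component turns off, so both "on" places can never be marked simultaneously.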
Properness analysis
This subsection presents the analysis showing that the designed SCBC is in fact proper, i.e., the SUS is live, non-conflicting, and controllable. For each BC, no two places belong to the same PN block. Each PN block is a minimal S-invariant (see [11]); therefore, no two places belong to the same minimal S-invariant. Hence, by Proposition 29, the marking vectors associated with all the BCs are reachable, and by Proposition 30 the respective formulas Φ are true in those markings. Since every transition of a BC is enabled in its respective associated reachable markings, by Theorem 32 the SUS for every BC is live. Now, by Theorem 33, the PN supervisor must not contain any non-live subnet in order for the set of constraints to be non-conflicting. The only non-disjoint subnet of the PN supervisor involves transitions T7 and T8, and a quick analysis shows that this particular subnet is live. Hence, by 34 and Theorem 33, the SUS is live, i.e., the set of BCs is non-conflicting.
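The S-invariant argument can be checked numerically for one module: a vector x is a P-(S-)invariant when x·d = 0, and the weighted token count x·M is then constant under any firing. A sketch, again assuming the two-place on/off block form of Eq. 14:

```python
import numpy as np

# Incidence matrix of one two-place PN block (rows = places, cols = transitions).
d = np.array([[-1, 1],
              [ 1, -1]])
m0 = np.array([1, 0])          # one token: the component's initial state

# x is a P-(S-)invariant iff x @ d == 0; the weighted token count x @ M
# is then invariant under any transition firing.
x = np.array([1, 1])
assert (x @ d == 0).all()

# Since x @ m0 == 1 with all-ones weights, each place of the block holds
# at most one token in every reachable marking: the block is safe
# (1-bounded), and its two places form a minimal S-invariant.
token_count = int(x @ m0)
```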
The set of constraints must also be proven admissible. By Theorem 38, the set of constraints is proven admissible.
Ladder diagram implementation of supervisory controller
A PN can be translated into a ladder diagram for implementation on a control device (e.g., a PLC). The general procedure for translating a PN into a ladder diagram is explained in [14]. Every place has a corresponding register in the ladder diagram, and every transition has a corresponding contact whose execution changes the contact state.
The following rules are an adaptation of the translation procedure developed in [14]. Let Ta be a transition in the supervisor PN, let Pa be an output place of Ta connected by an arc with weight na, and let Pb be an input place of Ta connected by an arc with weight nb.
• Each transition T a is represented as a contact in a ladder segment.
• If Pa is 1-bounded, then it is represented by a coil with a set function. If Pa is not 1-bounded, then it is represented by an add block, adding na tokens to Pa.
• If Pb is 1-bounded, then it is represented by a coil with a reset function. Also, a normally open contact is associated with Pb in the segment.
• If Pb is not 1-bounded, then it is represented by a subtract block, subtracting nb tokens from Pb. Also, a comparison contact is associated with Pb, with the rule "greater than or equal to nb".
• If Pa = Pb (self-loop), then the number of tokens holds, so there are no output blocks associated with Pa in the segment.
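The rules above can be mimicked in software to check a supervisor before PLC deployment. The sketch below evaluates one "rung" for a transition: comparison/normally-open contacts test the input places, then reset/subtract and set/add actions update them. The T27/C1 rung shown is hypothetical, loosely modeled on the monitor-place example discussed below.

```python
def fire_rung(marking, pre, post, bound1=()):
    """Evaluate one ladder segment (rung) for a single PN transition.

    `pre` / `post` map place name -> arc weight (input places Pb and
    output places Pa of the transition).  Places in `bound1` are
    1-bounded and use set/reset coils; all others use add/subtract
    blocks guarded by a >=-weight comparison contact.
    Returns the updated marking, or None when a contact stays open."""
    # Contact conditions: every input place must carry enough tokens.
    for p, nb in pre.items():
        if marking.get(p, 0) < nb:
            return None
    new = dict(marking)
    for p, nb in pre.items():
        if p in post:
            continue                      # self-loop: token count holds
        new[p] = 0 if p in bound1 else new[p] - nb    # reset coil / subtract
    for p, na in post.items():
        if p in pre:
            continue
        new[p] = 1 if p in bound1 else new.get(p, 0) + na  # set coil / add
    return new

# Hypothetical rung for T27: the comparison contact requires at least 7
# tokens in monitor place "C1" (kept by a self-loop), and firing sets the
# 1-bounded place "P_motor_on".
m2 = fire_rung({"C1": 7, "P_motor_on": 0},
               pre={"C1": 7},
               post={"C1": 7, "P_motor_on": 1},
               bound1={"P_motor_on"})
```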
The resulting ladder diagram for the SCBC is composed of 28 segments, one for each transition of the AMS model; part of this ladder diagram is shown in Fig. 4. Each segment contains the conditions to enable the corresponding transition. For example, in Fig. 4, monitor place C1 must hold at least 7 tokens to enable transition T27; the number 7 is the coefficient of T27 in system 23.
Conclusions
The approach presented in this work reduces the number of monitor places needed to impose a set of constraints on an AMS. In the case study, the safety specifications were successfully imposed on the system behavior using 4 monitor places, with exactly the same results as the classical approach, which requires 8 monitor places. The incidence matrix of a discrete event system modeled as a PN usually has many zero entries; the proposed approach reduces the dimension of matrix L of the IBCD method, avoiding unnecessary multiplications by zero and thus yielding a computational advantage.
In the context of discrete event systems, state expansion leads to complicated and unreadable graph representations, such as finite state machines. The use of PNs gives a more compact representation of the system, but very complex graph representations can still arise when a SC is designed.
A synthesis method has been proposed for a class of BCs with a restricted syntax, giving rise to a minimal PN SC. This increases the variety of specifications (i.e., forbidden states) that can be considered in the synthesis, using a solid and mathematically established procedure.
The safety specifications ensure a behavior that prevents any unwanted situation from occurring in the system. The implementation was made using previously proposed techniques; the resulting implementation is compact and is a more usable approach for manufacturing systems.
is the same solution set for the system. Proof. The solution set of a system of inequalities is the intersection of the solution sets of its individual inequalities. Let predicate 29 be associated with system 28 and predicate 30 with ineq. 27.
Then Σ is the solution set for the inequality. Proof. (By mathematical induction.) Let the base case be Proposition 39; the induction hypothesis of the inductive step is the statement of the Lemma. It must therefore be proved that ineq. 34 and system 35 have the same solution set.
Ineq. 34 holds if and only if
x − y s+1 ≤ 0 (36) holds and (x − y 1 ) + (x − y 2 ) + · · · + (x − y s ) ≤ 0 (37) also holds. This follows from the fact that x can only take the values 0 and 1. If Σ is the solution set for ineqs. 36 and 37, then Σ is the solution set for 34. By the induction hypothesis, if ineq. 37 holds, then the system x − y i ≤ 0, i = 1, · · · , s, also holds. Therefore, Σ is the solution set for system 35, and it is proven that Σ is the solution set for both 34 and 35.
Lemma 41 Let X, y 1 , y 2 , · · · , y n , z 1 , z 2 , · · · , z m be integer variables with domain {0,1}. Let Y = y 1 + y 2 + · · · + y n and Z = z 1 + z 2 + · · · + z m . Let R = {(X, Y , Z) | X ∈ {0, 1}, Y ∈ {0, 1, · · · , n}, Z ∈ {0, 1, · · · , m}} be the constrained domain. Let Σ ⊂ R be the solution set for the inequality (mn + 1)X − mY − Z ≤ 0 (38). Then Σ is also the solution set for system 39. Proof. The proof consists of two steps. First, inequality 38 is derived from a geometrical perspective; then it is proven that if Σ is the solution set for ineq. 38, it is also the solution set for system 39. By Lemma 40, the first n inequalities are equivalent to ineq. 33, so system 39 becomes nX − Y ≤ 0, X − Z ≤ 0 (40). From a geometric perspective, each inequality in system 40 has a corresponding plane in the three-dimensional space (X, Y , Z). The solution set for each inequality consists of the points of domain R bounded above by the corresponding plane; thus the solution set for system 39 consists of the points of R bounded above by the intersection of both planes. Therefore, there is a plane that contains this intersection and bounds above all the points of R in the solution set of system 40. The intersection of the two planes is a line containing the points (0, 0, 0) and (1, n, 1). To describe the plane equation, a vector orthogonal to the plane is required; for its calculation a third point is chosen by convenience, (m/(mn + 1), 1, 0). The orthogonal vector is obtained as the cross product of two vectors in the plane, for simplicity v 1 = <1, n, 1> and v 2 = <m, mn + 1, 0>. The resulting plane equation is (mn + 1)X − mY − Z = 0. Thus, the solution set for (mn + 1)X − mY − Z ≤ 0 is the same as that of system 39. Fig. 1 shows the plane and the constrained domain R. Now it is proven that the solution set Σ for ineq. 38 is the same as for system 39. System 39 holds for X = 0. If X ≠ 0 then, by the domain constraint, X = 1.
If X = 1, system 39 holds only when every y i = 1 and Z ≥ 1; hence Y ≥ n and, by the domain constraint, Y = n. Hence the set Σ that satisfies expression 41 is the solution set for system 39.
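Since all variables range over finite domains, both lemmas can also be verified exhaustively for small sizes. The sketch below assumes system 40 is {nX − Y ≤ 0, X − Z ≤ 0}, consistent with the geometric derivation in the proof.

```python
from itertools import product

def check_lemma40(s):
    """Lemma 40: for 0/1 variables, the summed inequality
    (x - y1) + ... + (x - y_{s+1}) <= 0 has the same solution set as the
    system x - y_i <= 0 for every i."""
    for vals in product((0, 1), repeat=s + 2):
        x, ys = vals[0], vals[1:]
        summed = sum(x - y for y in ys) <= 0
        system = all(x - y <= 0 for y in ys)
        if summed != system:
            return False
    return True

def check_lemma41(n, m):
    """Lemma 41: over X in {0,1}, Y in {0..n}, Z in {0..m}, the plane
    inequality (mn+1)X - mY - Z <= 0 has the same solution set as the
    system {nX - Y <= 0, X - Z <= 0} (assumed form of system 40)."""
    for X in (0, 1):
        for Y in range(n + 1):
            for Z in range(m + 1):
                plane = (m * n + 1) * X - m * Y - Z <= 0
                system = (n * X - Y <= 0) and (X - Z <= 0)
                if plane != system:
                    return False
    return True
```

Enumerating the aggregate values Y and Z is equivalent to enumerating the underlying binary y i and z j , since only the sums enter the inequalities.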
Table 41 shows the truth table of equation 41, which shares the same solution set with system 44 and, hence, with system 42.
|
v3-fos-license
|
2020-08-06T09:03:50.780Z
|
2020-07-31T00:00:00.000
|
225520261
|
{
"extfieldsofstudy": [
"Engineering"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.ijert.org/research/implementation-of-tea-harvester-IJERTV9IS070542.pdf",
"pdf_hash": "7f5464d73fcdd38e547cfef6028a15608f95574a",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:985",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "2378b4845402922b6f22f5bc13836b55c9477393",
"year": 2020
}
|
pes2o/s2orc
|
Implementation of Tea Harvester
Tea is one of the most commonly consumed beverages across the world, and India is the second largest tea producer; over 70 percent of its tea is consumed within India. Tea leaves are harvested by different methods, and the conventional ones have certain disadvantages such as excessive weight, pollution, low yield, and high machine cost. This paper presents a design that overcomes the limitations of conventional machines. The mechanical design of the harvester involves the roller, blades, conveyor mechanism, height adjustment, wheels, and storage; the electrical connections, involving controllers, sensors, and drivers, are made using the Proteus design suite. The integration of the mechanical design with the electrical connections completes the harvester. The designed and developed model is suitable for harvesting tea leaves on gradients of up to 5 percent. The novelty of this model is the addition of a sprinkler to the harvester, which makes the work easier and smarter. The harvester is remotely controlled by the user through the application developed, making the machine more durable and reliable for harvesting tea leaves.
I. INTRODUCTION
Tea is one of the most popular and healthy beverages around the world. India is the second largest tea producer in the world [1] and also one of the largest consumers of tea. The favoured tea beverages in India are Assam tea and Darjeeling tea. The Indian tea industry has grown to include many global tea brands and has evolved into one of the most technologically equipped tea industries in the world. Tea is manufactured by processing the plucked leaves in factories, where they are graded into various grades depending on their quality. A milestone for the tea industry came during 1990-91, when India implemented the Liberalization, Privatization, and Globalization (LPG) policy, which created harsh competition between firms and affected the tea industry as well [2]. The Indian tea industry has suffered in several ways: lowering of tariff barriers, fewer limits on imports, shortage of labourers, decrease in wages, and attack of pests. Tea leaves are harvested in many ways; the conventional methods are hand plucking, diesel-operated machines, petrol-operated machines, and highly automated machines. The main drawbacks of such machines are as follows: they weigh up to 18 kilograms, so more labour is required for the work; the smoke output from the machines makes the leaves shed soon and also affects the health of the labourers [4]; and the cost of the machines is usually high. These key factors motivated the design of a new harvester. The study showed that harvesters used in recent years have drawbacks such as pollution, higher labour requirements, and bulky construction. To overcome these hindrances, a design with fewer limitations than the existing ones is proposed. The proposed semi-automatic system has a roller-with-blade arrangement that harvests the tea leaves automatically and is integrated with a pesticide sprayer; it does not compromise crop safety or the production rate.
This tea harvesting machine is more viable, feasible, lighter in weight, and more profitable than manual harvesting [5][6][7][8]. The proposed model was originally intended as a working prototype but had to be developed entirely on software platforms, using SolidWorks and the Proteus design suite for validation, due to the coronavirus crisis. The model is tested for its working in Proteus by developing the circuit and running various simulations [9][10]. The working of the various functions, such as the movement, plucking, and collection of leaves, is tested and validated using Proteus. An Android application is developed to control all the functions, and a Bluetooth module is provided in the Proteus circuit for integration. The model is designed in SolidWorks, where the movement of the tea harvester and the plucking mechanism are made visible in the animation; other functions such as the working of the pesticide sprayer and the conveyor belt are also seen in the animated model. In this paper, Section II discusses the block diagram of the harvester, followed by the development of the 3D model and the Android application to control the harvester; the simulation study of the harvester is done in the Proteus design suite.
II. BLOCK DIAGRAM OF THE PROPOSED TEA HARVESTER
The block diagram of the proposed tea harvester is shown in Figure 1. The harvester is controlled from a client application, with connectivity through Bluetooth; the app provides functionality to control the harvester in all aspects. A command from the client is first received by the HC-05 Bluetooth module, and the signals are sent to the controller. The controllers adopted here are the Arduino Uno and Nano, programmed to control the motors of all the mechanisms. In the front, the blades serve the purpose of cutting tea leaves, working together with the roller. The conveyor mechanism stacks the tea leaves plucked by the blade mechanism and moves them to the storage unit. All the electrical and electronic components are powered by the battery, which is placed under the conveyor. Behind the harvester, the tea bushes are sprinkled with nutrients as the machine passes ahead; a pump draws the nutrients from the nutrient tank. The hub (in-wheel) motors are housed inside the wheels to move the harvester. The machine is not provided with a chassis for a turning effect because of design and application constraints.
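The app-to-controller command path can be illustrated with a small dispatcher. The single-character command set below is entirely hypothetical (the paper does not specify its protocol); it only shows the shape of a Bluetooth receive-and-dispatch loop mapping bytes from the app to motor actions.

```python
# Hypothetical single-character command protocol between the Android app
# and the controller; the real command set of the paper's firmware is
# not specified, so these mappings are assumptions.
COMMANDS = {
    "F": "drive forward",
    "B": "drive reverse",
    "L": "turn wheels left",
    "R": "turn wheels right",
    "C": "start cutting roller and conveyor",
    "S": "run nutrient sprinkler pump",
    "U": "raise platform (lead screw up)",
    "D": "lower platform (lead screw down)",
    "X": "stop all motors",
}

def dispatch(stream):
    """Translate a stream of characters received over Bluetooth into a
    list of motor actions, ignoring unknown bytes (line noise, framing)."""
    return [COMMANDS[ch] for ch in stream if ch in COMMANDS]

actions = dispatch("FFCSX")
```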
III. MODELLING OF PROPOSED TEA HARVESTER
The harvester model is first sketched in two dimensions and later modeled in three dimensions using the SolidWorks software.
A) Two-Dimensional Model of Tea Harvester
The proposed sketch of the tea harvester model and its CAED design are shown in Figure 2, Figure 3, and Figure 4, respectively. The dimensions of the model developed in the CAED software are in mm; Figure 4 shows the top view of the harvester developed using CAED.
B) Three-Dimensional Model of Tea Harvester
The three-dimensional design of the tea harvester is done using SolidWorks. The mechanism of the tea harvester is developed by designing the individual parts, which are first drawn in two dimensions and then extruded to obtain the three-dimensional models. All the parts are later assembled in the assembly section of SolidWorks, and once the model is developed, motion is added to the individual mechanisms, adhering to the design and application constraints. • Leaf Plucking Unit: The leaf plucking unit consists of a roller made of six bars that moves in a circular motion. In its motion, the roller first supports the leaf, and two layers of blades moving in a to-and-fro motion cut the tea leaf stems. After the cut, the roller moves the tea leaves onto the conveyor belt, which transports them to the leaf collecting unit.
• Roller and Blade Mechanism: The roller is designed so that it acts as an alternative to the hand-plucking mechanism. It consists of six bars supporting the tea leaves, and the supported leaves are cut by the blades. The roller-blade mechanism developed in SolidWorks is shown in Figure 5. • Leaf Collection Unit: The leaf collection unit consists of the conveyor belt and a box unit on the platform for collecting the tea leaves. After being cut, the leaf is transported on the conveyor belt from the cutting unit to the collection unit to be stored. A door is also provided for easy removal of the tea leaves once the box is filled. • Conveyor Belt Mechanism: The tea leaves cut by the roller and blade mechanism enter the conveyor mechanism. The conveyor has a driver and a driven pulley; its speed is kept slow so that the tea leaves get stacked before being shifted to the tea storage unit. The conveyor belt mechanism is shown in Figure 6. The design of the tea harvester also includes storage for the batteries that power it. The optimum location of the batteries, which power electrical components such as the hub motors, is near the cutting unit and the front wheels, so the battery box is located toward the front and houses the batteries, the boost converter circuit, and the controller. • Nutrients and Tea Leaves Storage Unit: Pesticides such as Flubendiamide and Emamectin Benzoate, together with nutrients/minerals, have to be sprayed soon after harvesting to increase the production of tea leaves for the next yield. A separate nutrient box is provided on the platform to store the nutrients that are sprinkled on the tea bushes right after plucking; spraying nutrients helps the healthy growth of new tea leaves. The nutrients and tea leaves storage unit is shown in Figure 7.
The sprinkler mechanism is added to this model to sprinkle the nutrients onto the bush right after the leaves are chopped, which increases the yield; Figure 8 shows the sprinkler mechanism. • Height Adjustable Limb Mechanism: Tea plantations are not flat in hilly regions. To compensate for the terrain, a height adjustment mechanism based on a lead screw is provided in the design. The limb model developed in SolidWorks is shown in Figure 9. • Wheel Mechanism: The wheel mechanism moves the tea harvester across and along the field. Each wheel is provided with 120 degrees of freedom for changing the direction of the harvester, and is animated for forward, reverse, left, and right motion. The wheel mechanism is shown in Figure 10. The integrated view, obtained by assembling all the parts together, is shown in Figure 11a and Figure 11b; Figure 11a shows the right side view of the harvester with all the mechanisms labelled.
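The lead-screw height adjustment is easy to size: revolutions = stroke / lead, and move time = revolutions / motor speed. A sketch with assumed values (a 2 mm lead and a 60 RPM motor are assumptions; only the 0-200 mm stroke is stated in the paper):

```python
def lead_screw_travel(stroke_mm, lead_mm_per_rev, rpm):
    """Revolutions and time needed for a lead-screw height adjustment:
    stroke / lead gives the revolutions; dividing by the motor speed
    (rev/min) gives the move time, converted here to seconds."""
    revolutions = stroke_mm / lead_mm_per_rev
    seconds = revolutions / rpm * 60.0
    return revolutions, seconds

# Full 200 mm stroke with an assumed 2 mm lead at an assumed 60 RPM:
revs, seconds = lead_screw_travel(200, 2, 60)
```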
C) Design of the App for Control of Tea Harvester
The proposed tea harvester has several functions, such as sprinkling the nutrients, steering the vehicle, height adjustment, and the conveyor mechanism; all of these are controlled remotely via the developed Android application.
The developed application works over Bluetooth 4.0 connectivity and has been built on the MIT App Inventor platform. This platform uses a graphical user interface that allows users to drag and drop visual objects to create an application. There are two sections on the platform: the designer and the blocks. In the designer section, the appearance of the app is edited and visible elements such as buttons and list pickers are added; non-visible elements, such as the Wi-Fi and Bluetooth server and clients, are added as well. The function of each button and its logic are defined in the blocks section. Figure 12 shows a snapshot of the developed Android app. • Measurement of parameters: Parameters such as the battery voltage, current, conveyor voltage, and pump voltage are monitored and tabulated. This ensures that all the motors receive the full voltage across them to function properly; the conveyor motor is slowed so that the leaves are stacked once they are cut by the roller and blades.
VI. CONCLUSION
The proposed tea harvester, with its multiple functions, overcomes many hindrances of previous harvesting methods: the required labour force is reduced and the yield of the harvest increases. The results of the software validated the intended design, and all the working parts of the machine were checked for movement. The range of the Bluetooth app is about 5 meters. The developed model can store tea leaves weighing up to 10 kilograms, the sprinkler storage capacity is 1.5 litres, and the height adjustment provides a stroke length of 0-200 mm. The developed model was simulated at different speeds of 10, 20, and 30 RPM; the results validated the intended design, and all the working parts of the machine were checked for their animated movement in SolidWorks.
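The simulated roller speeds translate to surface (cutting/conveying) speeds via v = π·d·N/60. A sketch assuming a 50 mm pulley diameter, which is not specified in the paper:

```python
import math

def surface_speed_m_per_s(diameter_m, rpm):
    """Linear surface speed of a rotating roller/pulley: v = pi * d * N / 60."""
    return math.pi * diameter_m * rpm / 60.0

# Assumed 50 mm conveyor pulley at the three simulated speeds (10/20/30 RPM):
speeds = {rpm: surface_speed_m_per_s(0.05, rpm) for rpm in (10, 20, 30)}
```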
|
v3-fos-license
|
2022-09-03T15:18:50.704Z
|
2022-09-01T00:00:00.000
|
252034259
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1999-4915/14/9/1945/pdf?version=1661997733",
"pdf_hash": "fb27293e51fa5ac0dcf4eac522b19e77372d1b99",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:987",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "3aa6692cb22a931cc525556b04ff775ce25a4828",
"year": 2022
}
|
pes2o/s2orc
|
Anti-Prion Systems in Saccharomyces cerevisiae Turn an Avalanche of Prions into a Flurry
Prions are infectious proteins, mostly having a self-propagating amyloid (filamentous protein polymer) structure consisting of an abnormal form of a normally soluble protein. These prions arise spontaneously in the cell without known reason, and their effects were generally considered to be fatal based on prion diseases in humans or mammals. However, the wide array of prion studies in yeast including filamentous fungi revealed that their effects can range widely, from lethal to very mild (even cryptic) or functional, depending on the nature of the prion protein and the specific prion variant (or strain) made by the same prion protein but with a different conformation. This prion biology is affected by an array of molecular chaperone systems, such as Hsp40, Hsp70, Hsp104, and combinations of them. In parallel with the systems required for prion propagation, yeast has multiple anti-prion systems, constantly working in the normal cell without overproduction of or a deficiency in any protein, which have negative effects on prions by blocking their formation, curing many prions after they arise, preventing prion infections, and reducing the cytotoxicity produced by prions. From the protectors of nascent polypeptides (Ssb1/2p, Zuo1p, and Ssz1p) to the protein sequesterase (Btn2p), the disaggregator (Hsp104), and the mysterious Cur1p, normal levels of each can cure the prion variants arising in its absence. The controllers of mRNA quality, nonsense-mediated mRNA decay proteins (Upf1, 2, 3), can cure newly formed prion variants by association with a prion-forming protein. The regulator of the inositol pyrophosphate metabolic pathway (Siw14p) cures certain prion variants by lowering the levels of certain organic compounds. Some of these proteins have other cellular functions (e.g., Btn2), while others produce an anti-prion effect through their primary role in the normal cell (e.g., ribosomal chaperones). 
Thus, these anti-prion actions are the innate defense strategy against prions. Here, we outline the anti-prion systems in yeast that produce innate immunity to prions by a multi-layered operation targeting each step of prion development.
The first emergence of what are now known to be prion diseases cannot be determined clearly. There are several records about scrapie in sheep in the mid-18th century [15], long before the word 'prion' was coined. The oldest name for this sheep and goat disease seems to be the Chinese character 痒, meaning pruritus (which also refers to sheep pruritus), composed of disease (疒) and sheep (羊) [16]. The initial evidence of an infectious protein was the extreme UV resistance of the scrapie agent [11,17]. The transmissible spongiform encephalopathies (human Creutzfeldt-Jakob disease (CJD) and scrapie) were first connected with the normal cell surface protein PrP when PrP was found to be essentially the only protein in purified infectious material from infected animals [18,19]. We now know a great deal about these diseases. In addition, many of the human-amyloid-based neurodegenerative diseases, such as Alzheimer's disease (AD), Parkinson's disease, amyotrophic lateral sclerosis (ALS), and type II (late-onset) diabetes, share common aspects with the PrP-related prion diseases [20][21][22].
The discovery of prions in Saccharomyces cerevisiae and studies about them have also led to an acceleration of our understanding of those diseases [2,[23][24][25]. Although our knowledge of prions and prion diseases has increased since the mid-18th century, it is difficult to answer the question of the evolutionary origin of prions or prion diseases. The answer is likely that there is always a chance of prions or prion diseases appearing while proteins are being synthesized from ribosomes, even during ancient times, since apparently most proteins are capable of forming amyloid structures under some conditions [26].
What Do Prions Do in the Host?
The [URE3] [2] and [PSI+] [27] prions arise spontaneously at a low frequency/rate (~1 per 10^6 cells) in S. cerevisiae. The frequency of a prion arising increases on overproduction of the prion protein [2] and in the presence of [PIN+] (for [PSI+] inducibility, [PIN+] can cross-seed [PSI+]), the prion of Rnq1p [28,29]. These prions, spontaneously obtained and induced, generally have the same features, both biologically and biochemically, although their proportions can vary. Various cellular conditions, including the absence or overproduction of a particular cellular protein [30,31] and special features of the prion domain/protein [32][33][34][35][36][37][38] (e.g., high content of specific amino acids or the minimum length of the prion protein sequence for prion generation), affect the frequency of prion formation. However, unlike prion propagation, which is understood in principle, it remains unclear how the normal prion protein is converted to the prion form, thus generating a new prion.
After the still-mysterious alteration of the prion protein to initiate the prion, the normal protein molecules undergo the same structural alteration by a templating mechanism of prion protein conformation. The templating mechanism was suggested by the results of studies on the amyloid structure using solid-state nuclear magnetic resonance (NMR) analysis and mass-per-length determination of filaments of the prion domains (the amyloid-forming part of the prion protein) from prion proteins Sup35p, Ure2p, and Rnq1p [39][40][41][42][43][44][45].
The common architecture of three different yeast prion amyloids (a folded, in-register, parallel β-sheet) suggested a mechanism of transferring the conformational information (the same location of folds by interactions of identical side chains) from molecules in the amyloid to molecules newly joining the amyloid for the elongation of the filament [33,34,44,[46][47][48]. In this sense, the protein molecules can template their own conformation and drive the joining of new monomers to the ends of the filaments, just like DNA templates its own sequence [35,49]. The architecture also enables us to explain more about the different prion variants (strains), formed from the same given sequence of prion proteins, in terms of the intensity of their prion phenotype (e.g., strong or weak, stable or unstable). These different variants have different conformations (turns/folds at different locations in the protein sequence), but each variant can propagate its own unique folding pattern [46][47][48]. The architecture of the prion amyloid also supports the extraordinary trait of a prion as a non-chromosomal genetic element that is cytoplasmically inherited (extra-nuclear or extra-chromosomal inheritance). The yeast prions [PSI+] and [URE3] were first reported as unusual genetic elements based on classical yeast genetics experiments showing 4 prion:0 prion-free segregation in meiosis [9,11] that were later discovered to be prions based on three genetic criteria [2]. The first is that curing a virus or plasmid is irreversible as long as it does not re-infect the cured cells. A prion may be cured by some treatment, but it should arise again in the cured cells at a low frequency because the normal form of the protein is still there. The second is that overexpression of the normal form of the prion protein should increase the frequency at which the prion arises. 
The third is that the prion form will likely not function like the normal form, so prion-carrying cells should have a phenotype that is similar to recessive mutants in the gene for the prion protein [2]. Note that the normal function of Rnq1p, which forms the [PIN+] prion [28,29,50], is not yet known, so one cannot tell whether the third criterion is satisfied in this case.
Thus, this phenotype similarity between the prion and recessive mutants in a gene required for propagation of the prion (the prion protein gene) is evidence that the nonchromosomal genetic elements are prions [2].
How Does the Host Cell Deal with Prions?
Although they can infect from the outside, prions are also an inside-the-cell risk unlike other infectious agents, such as fungi, bacteria, and viruses, which must come from the outside. The yeast host has evolved active protection systems against this threat from the inside: prion generation and prion propagation [25]. Although artificial overproduction of or a deficiency in a certain cellular component may cause the loss of a prion, several systems have been discovered that, at their normal expression levels, without overproduction of or a deficiency in any component, deal with prions by blocking their generation and even by inhibiting their propagation after they arise. These have been referred to as "anti-prion systems" [25] (Table 1).
Btn2p and Cur1p Act on the [URE3] Prion
Btn2p and its paralog Cur1p were first reported to cure [URE3] in a screen for proteins whose overproduction can cure the prion [51]. During curing, Btn2p was localized to one specific place in the cell together with all of the Ure2 amyloid filaments, suggesting that progeny cells free of prion filaments ([URE3] curing) arise from sequestration of the filaments by Btn2p [51]. This curing by overproduction of Btn2p or Cur1p was found to require Hsp42, a small chaperone known to collect cellular aggregates [52]. Btn2p was also reported to cure an artificial prion and to transfer some non-prion aggregates to a specific site in cells [62][63][64].
To test whether Btn2p and Cur1p were actively working in normal cells, [URE3] prions were isolated in btn2∆cur1∆ cells. While prion generation was increased by about 5-fold, >90% of [URE3] variants isolated in the double mutant had a relatively smaller prion seed (propagon) number and could be cured by reintroduction of either BTN2 or CUR1 [52]. The [URE3] variants arising in btn2∆cur1∆ cells can be eliminated by normal expression levels of Btn2p or Cur1p, indicating that the [URE3] prion arises frequently in wild-type (WT) cells but is usually curable by normal levels of Btn2p and Cur1p [52]. These findings set the pattern that we used in searching for other "anti-prion systems" that constantly block prion generation and inhibit their propagation in normal cells.
Hsp104 at the Normal Level Acting on the [PSI+] Prion
Hsp104 is a specific disaggregating chaperone that works with Hsp70s and Hsp40s to tweeze monomers from a protein aggregate, allowing the molecule a second chance at the correct folding through the action of Hsp70s [65,66]. This tweezing activity, by breaking the amyloid filaments into pieces, is essential for the propagation of the amyloid-based prions in yeast [53,[67][68][69][70]. However, overproduction of Hsp104 cures [PSI+] efficiently [53]. Although many outstanding studies have been conducted to investigate the curing mechanism by Hsp104 overproduction, there still remains controversy. The proposed mechanisms include (1) solubilizing the filaments by the extraction of monomers from the filament ends [71], (2) an asymmetrical distribution (segregation) of amyloid filaments between daughter cells [72], and (3) inhibiting other chaperones' accessibility to the filaments by Hsp104's occupation of an amyloid cleavage site [73,74].
Deletion or mutation of the N-terminal domain (hsp104∆N or hsp104-T160M) eliminates the overproduction-mediated [PSI+]-curing ability of Hsp104 without affecting its prion-propagation-supporting activity [67]. This finding indicated that the two activities of Hsp104, prion curing and propagation, are distinct, and thus enabled investigation of whether Hsp104, at its normal level, has an "anti-prion system" effect (concept described above) on the [PSI+] prion. In hsp104-T160M cells, the frequency of spontaneous appearance of [PSI+] was elevated approximately 13-fold, and about half of the [PSI+] variants isolated in the mutants were destabilized in cells carrying the HSP104 WT allele but stably maintained in hsp104-T160M cells [54]. This finding indicated that many [PSI+] variants arising in an hsp104-T160M host can propagate in the mutant background but not in the presence of the curing activity of WT Hsp104. However, not all of the 13-fold increase in [PSI+] frequency is accounted for by the variants that are destabilized in the wild type: the mutation also increased the generation of [PSI+] variants that are not hypersensitive to Hsp104 (such as the [PSI+] variants that are usually studied) [54]. This shows that Hsp104 is involved in prion generation as well as prion propagation.
Inositol Polyphosphates Acting on [PSI+] Prion Propagation
A yeast-genetics-based screen was conducted to find anti-prion components that, at their normal expression levels, block the generation and inhibit the propagation of [PSI+] variants. Siw14p was found in this screen, and further detailed analysis revealed that about half of the [PSI+] prion variants arising in siw14∆ cells were eliminated by restoring the SIW14 gene under its own promoter on a CEN plasmid [55].
Nonsense-Mediated mRNA Decay Proteins Acting on [PSI+]
Nonsense-mediated mRNA decay (NMD) is a eukaryotic surveillance mechanism for mRNA quality control. NMD promotes the degradation of aberrant mRNAs with a premature termination codon [76], and the core components of NMD are Upf1p, Upf2p, and Upf3p, which are normally found in a complex with Sup35p on the ribosome [77,78]. In the same screen described above, Upf1p and Upf3p were frequently detected [56]. Together with Upf2p, the three Upf proteins form a trimeric complex playing a key role in NMD [79]. In the absence of any one of these three functionally related proteins, both spontaneous and induced [PSI+] frequency increased 10- to 15-fold, and most [PSI+] variants arising in each upf mutant were destabilized by simple restoration of the UPF allele [51]. This curing of [PSI+] variants did not correlate clearly with any of the Upf protein activities in NMD, such as helicase, ATPase, or RNA-binding activity, but required Sup35p binding and Upf complex formation for efficient prion curing [56]. Upf1p is associated with the Sup35p amyloid both in vitro (co-purification with the Sup35 amyloid [77]) and in vivo (co-localization with [PSI+] prion aggregates [56]). An in vitro Sup35p amyloid formation assay showed that even a one-tenth (substoichiometric) amount of purified Upf1p was sufficient to arrest [PSI+] amyloid growth, while Ure2p amyloid formation was not affected. Taken together, these findings indicate a direct and exclusive inhibitory effect of Upf1p on [PSI+] amyloid filaments, by competing with the Sup35p monomer or by binding to the ends of the growing amyloid filaments [56].
Ribosome-Associated Chaperones (Ssb1/2p and RAC) Acting on [PSI+]
Deletion of both SSB1 and SSB2, or of ZUO1 or SSZ1, was reported to elevate both spontaneous and induced [PSI+] generation [84][85][86]. Curing of [PSI+] by Hsp104 overproduction was impaired in an ssb1/2∆ strain but enhanced in zuo1∆ and ssz1∆ strains [84,85]. The release of Ssb1/2p from ribosomes in zuo1∆ or ssz1∆ cells results in the destabilization of [PSI+] propagation, while ribosome-associated Ssb1/2p lowers the frequency of [PSI+] generation [85,87]. Restoration of Ssb1p to normal levels was unable to destabilize any of the [PSI+] variants arising in an ssb1/2∆ strain, and thus this SSB-RAC system was initially thought to be only a blocker of [PSI+] prion formation [84]. However, Ssb1/2p at normal levels also impact [PSI+] maintenance during heat stress by impairing the proliferation of prion aggregates in post-stress divisions [88]. This shows that Ssb1/2p have anti-prion activity involved in both prion propagation and prion generation.
The re-examination of the roles of the SSB-RAC system in both the generation and propagation of [PSI+] confirmed the elevation of spontaneous and induced [PSI+] frequency by over 10-fold in the absence of Ssb1/2p, Zuo1p, or Ssz1p, and showed that more than half of the [PSI+] variants arising in each mutant were cured by restoration of the missing component [57]. The [PSI+] prions generated in cells lacking SSB1/2 have a different propagation ability from [PSI+] prions generated in strains lacking ZUO1 or SSZ1. This difference may result from the different cellular environments produced by the ribosome association and accessibility of each chaperone [57]. The anti-prion activity and negative effects of these ribosome-associated chaperones on the generation and propagation of [PSI+] can be explained by their cellular function in the proper folding of nascent polypeptides, but it was surprising that there was no effect on another yeast prion, [URE3], in either generation or propagation [57]. Taken together, the exclusive effect of the SSB-RAC-based anti-prion system on the [PSI+] prion and the functional relation of these chaperones to translation termination suggest that the system directly affects Sup35p, the protein whose amyloid form is [PSI+].
Anti-Prion Systems Turn an Avalanche of Prions into a Flurry
The intraspecies transmission barrier refers to the barricade, produced by the polymorphism of the PrP protein sequence, against efficient transmission of sheep scrapie to goats or mice [89]. In yeast species, the same types of barriers, produced by the polymorphism of Sup35p sequences, were also reported [58,[90][91][92][93].
Within isolates of wild S. cerevisiae, there are sequence polymorphisms of Sup35p that are each able to give rise to [PSI+], but transmission to cells expressing a different polymorph was found to be inefficient compared with transmission between cells with the same polymorph [58]. This polymorphism-based intraspecies barrier suggests that the polymorphism of the prion protein is selected during evolution because it prevents infection by the [PSI+] prion.
The yeast protein Sis1, an Hsp40/DnaJ homolog, has essential roles in cell viability, protein refolding, and the ubiquitin-proteasome system [94,95]. Sis1p was also shown to be required for the propagation of [PSI+], [URE3], and [PIN+] [96] by functioning with Hsp70 Ssa proteins and the cooperation with Hsp104 for the efficient fragmentation of prion amyloid filaments [65]. The C-terminal domain (CTD) of Sis1p was found to be dispensable for cell growth without [PSI+] but becomes essential with [PSI+] [59]. Thus, the CTD of Sis1p seems to protect the cells from the toxicity produced by [PSI+], and does so by preventing the amyloid from soaking up all the Sup35p monomer [60].
A genetic screen using Hermes transposon mutagenesis and next-generation sequencing to find the Sis1p analog system responsible for preventing the toxicity of [URE3] revealed that disruption by the transposon of LUG1 (YLR352W) led to a severe growth defect in the presence of a mild variant of the [URE3] prion [61]. Lug1p is an F-box protein that functions in substrate selection for efficient ubiquitination by a cullin-containing ligase [97,98]. In the absence of [URE3], lug1∆ strains grow normally, but they show severe growth defects in the presence of the prion [61]. Thus, Lug1p can protect cells from the detrimental effects produced by the [URE3] prion by reducing the pathogenicity of the prion.
Three systems, the intraspecies transmission barrier, Sis1p, and Lug1p, do not perfectly fit the concept of an anti-prion system, i.e., blocking the generation and propagation of prions at the same time in a normal cell. However, all three relieve the deleterious effects of prions, as prion infection blockers or lethality blockers. Together with the anti-prion systems described above, they comprise a multi-layered defense system against the threats of prions (Figure 1). Before these systems were discovered, spontaneous prion generation was thought to be a very rare event with a frequency of about 10⁻⁶. Triple mutants with anti-prion defects in Hsp104, the ribosome-associated chaperone Ssz1, and the NMD protein Upf1 generate the [PSI+] prion at ~5000 times the rate of a wild type carrying the same [PIN+] variant [99]. In the triple mutant, most of the [PSI+] isolates are cured by replacing any one of the three defective genes, showing that Hsp104, the ribosome-associated chaperones, and the Upf proteins are three independently acting anti-prion systems [99]. We now believe that prions arise far more frequently (~5 × 10⁻³) than was previously thought, but that most prions are cured right after they arise, before being detected (Figure 1).
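The frequencies quoted here fit together arithmetically; a small sketch makes the bookkeeping explicit (the "fraction cured" is derived from the two measured frequencies under the assumption that the triple mutant reveals the true generation rate — it is an inference, not a measurement):

```python
# Toy bookkeeping for prion generation vs. curing (illustrative only).
observed_wt_freq = 1e-6        # apparent spontaneous [PSI+] frequency in wild type
mutant_fold_increase = 5000    # triple anti-prion mutant vs. wild type
true_generation_rate = observed_wt_freq * mutant_fold_increase  # ~5e-3

# Fraction of nascent prions that must be cured to reconcile the two numbers,
# assuming the triple mutant exposes the underlying generation rate.
cure_fraction = 1 - observed_wt_freq / true_generation_rate

print(f"inferred generation rate ~ {true_generation_rate:.0e} per generation")
print(f"implied fraction cured   ~ {cure_fraction:.4%}")
```

On these numbers, all but roughly one in 5000 nascent prions would be eliminated by the anti-prion systems before detection.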
Conclusions
The existence of an array of yeast anti-prion systems evidently confirms that these prions are not 'good' or 'beneficial' to yeast, but this does not mean that all prions have detrimental effects on their host. The [Het-s] prion of the filamentous fungus Podospora anserina is a non-chromosomal determinant of vegetative incompatibility, a self/nonself recognition system that restricts the transmission of harmful fungal viruses by regulating heterokaryon formation [100]. This [Het-s] prion is widespread in wild strains and, together with its functional partner NWD2, triggers a cell death process [101] in the first few fused incompatible cells, thereby saving most of the cells of both colonies from a potential viral pathology. Thus, the [Het-s] prion is a 'functional prion', beneficial to the clone in which this form of programmed cell death occurs.
Most recently, the yeast RNA-binding protein Vts1p was reported to convert into a [SMAUG+] state that can regulate meiosis in response to environmental stimulation [9,102]. This [SMAUG+]/[smaug−] state affects the survival of yeast cells under transient or long-term nutrient depletion. The non-amyloid-forming [SMAUG+] behaves as a prion and delays the initiation of meiosis and sporulation during starvation [9,102]. Thus, these new findings may support our notion above that prions are not necessarily 'infectious misfolding diseases' but may be 'pathogenic' only under specific conditions.
Most human pathogenic amyloids have the same architecture (a folded, in-register, parallel β-sheet) as the structurally characterized yeast prions [103,104]. Moreover, the common human amyloidoses AD, ALS (Lou Gehrig's disease), PD, and type II diabetes seem to be prion diseases [22,105]. Studies on prions and anti-prion systems in the simple eukaryote yeast have extended our understanding of the nature of prions and should continue to do so.
Data Availability Statement:
The data presented in this study are openly available.
Synapse Geometry and Receptor Dynamics Modulate Synaptic Strength
Synaptic transmission relies on several processes, such as the location of a released vesicle, the number and type of receptors, trafficking between the postsynaptic density (PSD) and the extrasynaptic compartment, as well as the synapse organization. To study the impact of these parameters on excitatory synaptic transmission, we present a computational model for the fast AMPA-receptor-mediated synaptic current. We show that, in addition to the vesicular release probability, variations in release location and AMPAR distribution give the postsynaptic current amplitude a large variance, making a synapse an intrinsically unreliable device. We use our model to examine our experimental data recorded from mouse hippocampal CA1 slices to study the differences between mEPSC and evoked EPSC variance. The synaptic current, but not the coefficient of variation, is maximal when the active zone where vesicles are released is apposed to the PSD. Moreover, we find that for certain types of synapses, receptor trafficking can affect the magnitude of synaptic depression. Finally, we demonstrate that perisynaptic microdomains located outside the PSD impact synaptic transmission by regulating the number of desensitized receptors and their trafficking to the PSD. We conclude that geometrical modifications, reorganization of the PSD, or perisynaptic microdomains modulate synaptic strength and may be among the mechanisms underlying long-term plasticity.
Introduction
Synapses are local micro-contacts between neurons mediating direct neuronal communication via neurotransmitters. Several well-identified processes are involved in synaptic transmission, such as the release of neurotransmitters from the presynaptic terminal into the synaptic cleft. This vesicular release results in the activation of receptors located on the postsynaptic neuron. At excitatory synapses, open receptors such as AMPARs, a class of glutamate-gated channels, mediate neuronal depolarization by an ionic current. The postsynaptic response depends on several factors [1][2][3], such as the number of released synaptic vesicles, the release probability at the presynaptic terminal, the synaptic cleft geometry, the glial coverage, and the number and distribution of postsynaptic receptors, which determine the time course of neurotransmitter activity. Thus, if synaptic transmission at a single synapse over time depends on so many stochastic events, how can the synaptic signal be reliable?
Previous computational studies of synapses with stationary receptors [3][4][5][6][7][8][9] show that several geometrical features such as cleft height and localization of vesicular release contribute to shaping the postsynaptic current over time. So far, only a few quantitative results are known about the characteristics of receptor trafficking, which may affect synaptic transmission [10][11][12][13][14]. Furthermore, it is unclear whether fluctuations in PSD receptor density affect the amplitude of the synaptic current at a time scale that could interfere with fast spiking. Indeed, recent findings indicate that receptor trafficking has a fast functional implication on synaptic transmission [10][11][12][13][14]. If the number of receptors can vary at the PSD, moving with a diffusion constant in a range of 0.1 to 0.2 µm²/s [13], then this motion may affect the amplitude of the synaptic current and fast spiking of about 20 Hz. Because extrasynaptic receptors could potentially replace synaptic ones, in particular those desensitized by glutamate molecules, a refined combination of experiments led to the proposition that receptor trafficking has a fast functional implication on synaptic transmission [10][11][12][13][14][15][16]. This was illustrated in a paired-pulse protocol where, in the absence of receptor diffusion, the second pulse was diminished [17].
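Whether receptor motion is fast enough to matter for 20 Hz spiking can be checked with a standard mean-squared-displacement estimate (a sketch; the L²/(4D) timescale is a textbook order-of-magnitude rule, and the ~200 nm PSD size is taken from the model geometry used in this paper):

```python
import math

def traversal_time_s(L_um: float, D_um2_per_s: float) -> float:
    """Crude 2D-diffusion timescale for a receptor to cross a distance L."""
    return L_um ** 2 / (4.0 * D_um2_per_s)

psd_size_um = 0.2                  # ~200 nm PSD diameter (model geometry)
for D in (0.1, 0.2):               # um^2/s, the range quoted for AMPARs
    t = traversal_time_s(psd_size_um, D)
    print(f"D = {D} um^2/s -> ~{t * 1e3:.0f} ms to cross the PSD")

# Inter-spike interval at 20 Hz, for comparison:
isi_ms = 1e3 / 20.0
print(f"20 Hz inter-spike interval: {isi_ms:.0f} ms")
```

The resulting 50–100 ms traversal times are comparable to the 50 ms inter-spike interval, consistent with the claim that receptor motion can interfere with fast spiking.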
To investigate how vesicle and receptor location, cleft geometry, receptor trafficking and recycling, as well as glial coverage influence the temporal expression of the postsynaptic current, we develop here a computational model to simulate the different steps of synaptic transmission, starting from vesicle release. To account for the Brownian motion of receptors, neurotransmitter dynamics, and receptor opening and closing, we use Markov chain modeling and present results from Brownian dynamics simulations. However, we do not construct here any fitting procedure. Our approach allows simulating synaptic transmission based on the molecular properties of receptors and the geometrical organization. We built a synapse with a cleft surrounded by astroglia, which take up glutamate molecules through transporters. On the postsynaptic terminal, receptors can move by lateral diffusion and enter the PSD, where they can be trapped by scaffolding molecules. In our model, PSD receptors are maintained at equilibrium with a pool of extrasynaptic receptors inside a reservoir, isolated from the rest of the dendrite. We refer to the perisynaptic and extrasynaptic areas as the microdomains surrounding the PSD and outside the PSD, respectively.
We first quantify the role of synapse geometry on synaptic transmission and then show that although receptor desensitization contributes to paired-pulse depression, receptor diffusion can restore the second pulse by about 5% at 25 Hz, and by 20% with further stimulations (at least 10 pulses). Second, to determine the conditions for which the synaptic current is maximal, we analyze the relative position of the PSD versus the active zone (AZ) where vesicles are released. We find that an alignment of vesicle release sites and a high concentration of receptors on the PSD, which is possibly mediated by adhesion molecules [18], leads to a maximal current. Finally, we study the consequence of spike correlation on synaptic transmission. We show that a low vesicular release probability can decorrelate spikes (for a frequency larger than 10 Hz). Moreover, increasing the inter-spike interval has several consequences: we find that when a vesicle is successfully released at a single synapse, it depresses the AMPARs. Thus, by reducing the release probability (by a factor of five), many spikes will not trigger release, which prevents AMPARs from becoming desensitized. As a consequence of this filtering, we show that a successfully released vesicle on average leads to a fivefold higher current compared to a situation where the release probability is one (no filtering). However, the price to pay is to filter spikes (take one in four) at the synaptic level. We show that it is actually an advantage that synapses are unreliable in order to produce a detectable and significant synaptic current. Neurons can overcome this local inherent unreliability by making multiple synaptic boutons [19] onto the targeted neuron.
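This filtering argument can be sketched with a toy Monte Carlo in which every successful release desensitizes all receptors, which then recover exponentially before the next success. The recovery time constant (100 ms) is an assumed placeholder rather than a value from this paper, so the ratio below is only qualitative and will not reproduce the fivefold figure, which depends on the full kinetic scheme:

```python
import math
import random

def mean_response_per_release(p_release: float, isi_ms: float = 50.0,
                              tau_rec_ms: float = 100.0, n_spikes: int = 200_000,
                              seed: int = 1) -> float:
    """Toy model: after each successful release all receptors desensitize and
    recover exponentially; the response per success is the recovered fraction."""
    rng = random.Random(seed)
    t_since_release = float("inf")   # start fully recovered
    total, n_success = 0.0, 0
    for _ in range(n_spikes):
        t_since_release += isi_ms    # presynaptic spikes arrive at 20 Hz
        if rng.random() < p_release:
            available = 1.0 - math.exp(-t_since_release / tau_rec_ms)
            total += available
            n_success += 1
            t_since_release = 0.0
    return total / n_success

r_high = mean_response_per_release(1.0)   # release on every spike
r_low = mean_response_per_release(0.2)    # release probability reduced by five
print(f"p=1.0: {r_high:.2f}  p=0.2: {r_low:.2f}  ratio: {r_low / r_high:.2f}")
```

Even this crude model shows that a lower release probability leaves more recovered receptors per successful release, at the cost of transmitting fewer spikes.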
Results
We approximate the synaptic cleft as two coaxial cylinders (see Fig. 1) where AMPARs are distributed on the PSD and in a perisynaptic microdomain modeled as a reservoir surrounding the PSD. Receptors can move by free diffusion and can be exchanged between these two regions. Glutamate molecules are released after vesicle fusion, which may occur at release sites placed anywhere on the presynaptic terminal. Finally, transporters are distributed uniformly on the glial sheath surrounding the synapse.
Effects of synaptic geometry, vesicular release location and glial transporters on open AMPARs
Although the role of several geometrical parameters has already been explored for the AMPAR-mediated synaptic current [20][21][22], we present here an integrated and unified model that first confirms previous results, validating our approach, and then provides new quantifications and predictions. To study the impact of geometrical parameters on synaptic transmission, we follow the dynamics of open AMPARs. To quantify the effect of vesicular release, we release vesicles at increasing distances (in steps of 10 nm) from the center of the presynaptic terminal. 130 receptors are uniformly distributed over the postsynaptic neuron. Astrocytic processes are located at a distance of 40 nm away from the synaptic cleft edge (Fig. 1) and contain a transporter density of 5,000/µm² [23]. The cleft height is 20 nm. Classically, an AMPAR can be in one of three states, which can be further subdivided into sub-conductance states accounted for in Markov models [21,24]: a receptor can either be open (a current can flow), closed, or desensitized (the receptor is closed and does not respond to any glutamate stimulation). One intermediate state is, for example, deactivation, the closing of the receptor and subsequent unbinding of the ligand, as opposed to receptor desensitization (i.e., the ligand remains bound to the receptor in a long-lasting nonconducting state). The transitions between sub-conductance states have been described by Markov models (see Text S1). To evaluate the number of open AMPARs, we use two well-known AMPAR models: the Milstein-Nicoll (MN) and the Jonas-Sakmann (JS) schemes [24,25] (see Figure 2 in Text S1). We also tested another scheme, presented by Raghavachari-Lisman [25] (RL scheme) (Fig. 6). Here, all presented numbers are obtained with the MN scheme unless marked otherwise. These schemes differ in their number of states and rate constants. Although an AMPAR has four potential binding sites, the JS scheme accounts for only two, while the MN scheme accounts for only one. The RL scheme accounts for the four subunits, but not for the different AMPAR subunits accounted for by the MN scheme, which was obtained by fitting recent data from the GluR4 AMPAR subunit without TARP ligation [25]. One of the main striking differences between the MN and JS schemes is the average time an AMPAR spends in the desensitized states (see Text S1 for a detailed quantification). Using these schemes, we study the number of open AMPARs and show that it decreases drastically as a function of the release site distance (7 resp. 22 for the MN resp. JS scheme) (Fig. 2A). Correspondingly, the synaptic glutamate concentration rise disappears in less than 0.3 ms (Fig. 2C).

Figure 1 (caption). (A) The synapse is surrounded by astroglial processes containing glutamate transporters (GLTs). Presynaptic vesicle fusion occurs at randomly selected locations; released glutamate (blue) diffuses in the cleft and binds to AMPARs (green) or GLTs (pink). AMPARs diffuse between the PSD, where they can attach to scaffolding molecules (orange), and the extrasynaptic regions, where they can undergo endocytosis (1) and exocytosis (2), maintaining the number of AMPARs at the post-synaptic terminal. (B) Two co-axial cylinders represent the pre- and postsynaptic terminals, forming a gap which represents the synaptic cleft. AMPARs (green) are distributed inside and outside the PSD. The trajectory of a glutamate molecule as illustrated by red, blue or green arrows corresponds to binding to AMPARs, GLTs, or diffusing away from the cleft (at 500 nm), respectively. doi:10.1371/journal.pone.0025122.g001
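The kinetic-scheme machinery behind such models can be illustrated with a minimal three-state receptor simulated by the Gillespie algorithm. The scheme and all rate constants below are illustrative placeholders, not the MN, JS, or RL parameters:

```python
import random

# Minimal continuous-time Markov chain for one receptor:
# C -> O (binding/opening), O -> C (closing), O -> D (desensitization),
# D -> C (slow recovery). Rates in 1/ms; placeholders, NOT the MN/JS constants.
RATES = {
    ("C", "O"): 2.0,
    ("O", "C"): 1.0,
    ("O", "D"): 0.5,
    ("D", "C"): 0.05,
}

def gillespie(state: str, t_end_ms: float, rng: random.Random):
    """Return a (time, state) trajectory using the Gillespie (SSA) algorithm."""
    t = 0.0
    traj = [(t, state)]
    while t < t_end_ms:
        moves = [(dst, k) for (src, dst), k in RATES.items() if src == state]
        total = sum(k for _, k in moves)
        t += rng.expovariate(total)          # waiting time to next transition
        r = rng.random() * total             # pick a transition ~ its rate
        for dst, k in moves:
            r -= k
            if r <= 0:
                state = dst
                break
        traj.append((t, state))
    return traj

rng = random.Random(0)
# Fraction of time spent open, averaged over many independent receptors:
open_time = total_time = 0.0
for _ in range(200):
    traj = gillespie("C", 100.0, rng)
    for (t0, s), (t1, _) in zip(traj, traj[1:]):
        dt = min(t1, 100.0) - t0
        open_time += dt if s == "O" else 0.0
        total_time += dt
open_frac = open_time / total_time
print(f"open fraction ~ {open_frac:.2f}")
```

The same machinery extends directly to the larger MN/JS schemes by adding states and rates to the dictionary.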
For a release distance 2·d_release < R_cleft (where R_cleft is the radius of the postsynaptic terminal, measured from the center), the decrease in the number of open AMPARs is less than 30%, whereas for d_release > R_cleft/2 the change is drastic (a reduction by a factor of 2). Because the location of vesicular release matters, we systematically test two types of release site distributions: one with all sites placed in the center and another with all sites uniformly distributed.
We next study how the number of open AMPARs depends on the number of vesicles released (Fig. 2D) and on glial glutamate transporters. To explore various activity regimes, we release up to seven vesicles at uniformly distributed release sites for two different transporter concentrations (5,000 and 10,000/µm²). After seven vesicles, the number of open receptors saturates at about 40% for the MN scheme (JS scheme: 50%), as reported [22]. However, doubling the transporter density on glia does not affect the maximal number of open AMPARs, as previously found [22]. To confirm that the direct effect of transporters can be neglected when the vesicles are released at the center, we estimate the number of glutamate molecules returning into the synaptic cleft after escaping (Fig. 2H): at 40 nm, this number is around 250, which is less than 10% of the free glutamate molecules. Moreover, the relative clearance of glutamate molecules for one or seven vesicles is of the same order (Fig. 2E). We summarize in Figure 2G the number of open receptors as a function of the number of released vesicles for different cleft heights, known to change during development and pathological conditions [26,27].
Finally, we estimate how the synapse-to-glia distance affects the number of open AMPARs and ran simulations for glial distances of 20 nm, 40 nm and 100 nm, where one to seven vesicles are released uniformly distributed over the cleft, and for two transporter densities of 5,000 and 10,000/µm². In Figure 2 of Text S1, we present the results for release sites centered on the AZ. For a transporter density of 5,000/µm², changing the distance from the edge of the synapse to the glial sheath from 20 nm to 100 nm (a range measured in [28]) reduces the mean maximal number of open AMPARs by 27% (JS: 33%) for a single vesicle released in the center, while the reduction reaches 40% (JS: 46%) for release sites uniformly distributed over the presynaptic terminal. For seven released vesicles, the reduction becomes 28% (JS: 22% and 37%). For a glial distance of 20 nm, doubling the density of transporters reduces the number of open AMPARs by 13% (JS: 16%). For seven vesicles, the reduction is 3% (JS: 4%). However, no reduction effect is found for a glial distance of 100 nm. We conclude that in all cases, changing the glial distance in a range of 20 to 40 nm will maximally affect the number of open AMPARs by 24% (JS scheme: 28%). When the release sites are located in the AZ, this number is changed by 15% (JS scheme: 20%, see Text S1).
Glutamate transporters limit glutamate spread up to 500 nm from the synapse
Efficient removal of glutamate from the extrasynaptic space is crucial to limit spillover and desensitization of synaptic AMPARs. To analyze the extent of glutamate spread in the extrasynaptic space, we simulate freely diffusing glutamate molecules between two concentric cylinders for various glial transporter densities. For a synapse-to-glia distance of 40 nm and a transporter density of 5,000/µm², 90% of the released glutamate is bound in one ms within a distance of 0.42 µm away from the releasing synapse (Fig. 3A), confirming that spillover does not activate neighboring synapses [5]. To study the influence of transporter density, we estimate the time in which 90% of the released glutamate is taken up by transporters (clearance time) and the maximal distance beyond which the glutamate concentration is 10% of the amount released (spreading distance). We find (Figs. 3B,C) that the clearance time remains on the order of a few milliseconds and the spreading distance can reach the mean distance between two neighboring synapses of around 0.5 µm [29] (results for a doubled glutamate diffusion constant are shown in Figure 3 in Text S1).
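For intuition, the simulated spreading distance quoted above can be set against unhindered diffusion: in 2D, the radius containing 90% of a point-released Gaussian plume is R90 = sqrt(4·D·t·ln 10). The glutamate diffusion constant below is an assumed order-of-magnitude value, not the one used in this paper:

```python
import math

def r90_free_2d_um(D_um2_per_ms: float, t_ms: float) -> float:
    """Radius containing 90% of a 2D Gaussian plume after free diffusion:
    P(r > R) = exp(-R^2 / (4 D t))  =>  R90 = sqrt(4 D t ln 10)."""
    return math.sqrt(4.0 * D_um2_per_ms * t_ms * math.log(10.0))

D_glu = 0.3   # um^2/ms, assumed order-of-magnitude value for glutamate
r_free = r90_free_2d_um(D_glu, 1.0)
print(f"free 2D diffusion, 1 ms: r90 ~ {r_free:.2f} um")
```

Free diffusion would spread 90% of the glutamate over well beyond a micron within 1 ms, so the much tighter simulated spread reflects the uptake by glial transporters.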
Optimal synaptic transmission for alignment of PSD and active zone
The structural organization of the synapse is fundamental for synaptic transmission, and to analyze the functional consequence of the localization of the PSD relative to the active zone, we estimate the number of AMPARs activated in three cases: 1) when both vesicle release sites and AMPARs are uniformly distributed (UD) over the pre- and postsynaptic terminals, 2) for UD release sites but AMPARs concentrated on the PSD, and 3) for both release sites and AMPARs concentrated at the AZ and PSD, respectively. In the last case, AMPARs and release sites are exactly centered and apposed. As receptors and release sites become co-localized, the coefficient of variation (CV) decreases by a factor of 10, while the mean number of open AMPARs increases from 17 to 35. Interestingly, UD release sites may represent miniature EPSCs, described as spontaneous vesicular release events [2], whereas release triggered by an action potential may cause vesicle fusion in the AZ apposed to the PSD. Indeed, our whole-cell recordings of hippocampal CA1 pyramidal cells reveal a higher coefficient of variation for mEPSCs than for evoked EPSCs (Fig. 4J). Figures 4D,E,F show that UD vesicular release is responsible for a much smaller number of activated AMPARs but with a larger variance. The CV between UD and AZ-centered release differs by a factor of 10. By definition, the CV computed here does not account for any variability in vesicle release probability. Comparison of this simulated CV with experimental data requires taking only successful synaptic events into account in the data set. In this way, any changes occurring in the CV can be related to a variation in vesicle release position or in postsynaptic dynamics. We conclude that this source of fluctuation is due to the randomness of the vesicle release location relative to the PSD and very little to receptor trafficking.
Moreover, receptor clustering leads to a more reliable transmission (CV is minimal), suggesting that PSD placement plays a fundamental role for the synaptic current.
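The effect of release-site variability on the CV can be reproduced qualitatively with a toy amplitude model. All parameters below (the peak opening probability 0.3, the decay length 0.15 µm of the response with release distance, the receptor count) are invented for illustration, not fitted to this paper:

```python
import math
import random

def simulate_cv(uniform_release: bool, n_trials: int = 5000,
                n_receptors: int = 100, lam_um: float = 0.15,
                R_um: float = 0.3, seed: int = 7) -> float:
    """Toy peak-amplitude model: each receptor opens with probability
    p(d) = 0.3 * exp(-d / lam_um), where d is the release distance from
    the cleft center. Returns the coefficient of variation of the peak."""
    rng = random.Random(seed)
    amps = []
    for _ in range(n_trials):
        if uniform_release:
            d = R_um * math.sqrt(rng.random())   # uniform over a disk of radius R
        else:
            d = 0.0                              # release apposed to the PSD center
        p_open = 0.3 * math.exp(-d / lam_um)
        # Binomial channel noise on top of the positional variability:
        n_open = sum(1 for _ in range(n_receptors) if rng.random() < p_open)
        amps.append(n_open)
    mean = sum(amps) / len(amps)
    var = sum((a - mean) ** 2 for a in amps) / len(amps)
    return math.sqrt(var) / mean

cv_centered = simulate_cv(uniform_release=False)
cv_uniform = simulate_cv(uniform_release=True)
print(f"CV centered: {cv_centered:.2f}   CV uniform: {cv_uniform:.2f}")
```

Centered release leaves only binomial channel noise, while uniform release adds positional variability, inflating the CV several-fold, in line with the trend reported above.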
Synaptic efficiency by AMPAR relocation from extrasynaptic to synaptic sites
Synaptic plasticity at CA1 Schaffer collateral synapses has been attributed to the local change in the number of AMPARs, because long-term potentiation increases the AMPAR density [11]. This increase occurs at the PSD and may also concern the extrasynaptic space. To study the consequence of AMPAR spatial organization on the synaptic current, we increase the AMPAR number by 50% by inserting additional AMPARs inside or outside the PSD (Figs. 5A,B,C). The first case leads to a 27% increase in the number of open AMPARs (from 15.4 to 19.6) for an AZ covering the PSD, while increasing AMPARs directly at the PSD leads to a 50% increase (from 15.4 to 23.3), confirming the critical role of the density of AMPARs for synaptic transmission [11,22]. We further investigated the consequence of different vesicle release site locations: at the center of the AZ, uniformly distributed (UD) over the AZ, and UD over the presynaptic terminal. Figures 5D,E,F show the corresponding dispersion for the three release site distributions. The spread distribution corresponding to vesicle release over the presynaptic terminal is one of the main sources of synaptic current fluctuation.

Figure 4 (caption fragment): ...corresponding to the different release site and receptor localizations. (J) The coefficient of variation of AMPAR-mediated peak amplitudes of miniature EPSCs (n = 8) is larger compared to evoked EPSCs (n = 15, P < 0.01). Representative sample traces of AMPAR-EPSCs (scale bar, 10 pA, 5 ms) and mEPSCs (scale bar, 5 pA, 5 ms) are shown above the respective bars. doi:10.1371/journal.pone.0025122.g004
We conclude that adding AMPARs is the most efficient way to increase the synaptic current, as demonstrated experimentally in [15,16]: translocation of receptors from the extrasynaptic pool to the PSD leads to a 23% increase, while the CV remains approximately constant, showing that the mean and the standard deviation vary equally with changes in receptor number. Thus, we predict that there will be no alteration of the variation (CV) of the synaptic current during synaptic plasticity if these changes occur only postsynaptically. Therefore, we attribute changes in the CV of evoked EPSCs, which were experimentally measured before and after LTP, to modifications other than those considered here, such as changes in release probability. We conclude from this analysis that LTP may be viewed as a two-step process, in which receptors are first inserted extrasynaptically and then traffic to the PSD to attach to scaffolding molecules, with an increase of 27% in the first step and an additional 23% in the second, leading to an approximate total increase of 50%.
Receptor trafficking significantly modulates synaptic transmission only after a pulse train
Because receptors can move in and out of the PSD [12][13][14], we examine the effect of receptor trafficking on synaptic transmission by estimating the number of open AMPARs after two or more consecutive pulses. After a single vesicle release, receptors can be either closed, open or desensitized. In the latter case, the amplitude of the synaptic response elicited by a second pulse will be reduced, unless desensitized receptors are replaced by non-desensitized receptors entering the cleft by diffusion from outside.
At steady state, receptors are exchanged between PSD and reservoir, and we design a synapse with equal receptor density in PSD and reservoir such that, on average, 100 receptors are maintained on the PSD and 300 in the reservoir. In that case, the mean and variance of the receptor number on the PSD are set by the steady-state ratio τ_D/τ_R ≈ 0.3 of the PSD and reservoir residence times, such that ⟨N_PSD⟩ = 100 and σ²_PSD ≈ 8.5. In Figure 6 A, we show a realization of the receptor dynamics inside the PSD. Mean and variance are obtained by averaging over 50 realizations. Because an aggregation of impenetrable obstacles constitutes a corral area which restricts the motion of receptors and confines them, we implemented a fence (a wall with some small holes) around the PSD. A receptor is then reflected by the fence and thus can stay a longer time in the PSD (see Section 7.2.3 in Text S1 for the implementation). We first performed a simulation for unrestricted diffusion at the PSD (no fence). In this case, the residence time of a receptor in the reservoir (resp. PSD) is τ_R ≈ 48 ms (resp. τ_D ≈ 12 ms), averaged over 100 runs. Then, to study receptor exchange between PSD and reservoir, we plotted the time course of receptors arriving at and leaving the reservoir. The mean number of exchanged receptors is given by a Bessel series (see Section 1 in Text S1), where J_0 and J_1 are the Bessel functions of the first kind of order zero and one, respectively, and j_n are the ascending zeros of J_1; for a small PSD and times t larger than a few milliseconds, this expression simplifies (see Text S1). In our simulations, the boundary of the reservoir is impenetrable for dendritic receptors. For a PSD diameter of 200 nm and a cleft and reservoir diameter of 400 nm, we find that receptors from the reservoir can replace, within 50 ms, 80% of the PSD free receptors.
The increased variance of the number of receptors entering the PSD, compared to the number entering the reservoir, is due to the reservoir area being about three times the PSD area. After a sufficiently long time (100 ms), the recovered receptor number at the PSD is lower than the equilibrium number, because a fraction of the original receptors remains in the PSD. These recovery curves simulate FRAP experiments, where bleached receptors leave the PSD and are replaced by extrasynaptic ones. In Figure 6 B, we show the time course of replenishment for different fractions of PSD fence (from 0 to 90%). The fence slows down the receptor exchange, but after 50 ms, only a fence coverage approaching 90% markedly affects the speed of receptor replenishment. We conclude that only a large fence coverage of 90% or more can change the transient time course. At 90% fence coverage, the residence time in the PSD (resp. reservoir) of a receptor is τ_D ≈ 195 ms (resp. τ_R ≈ 690 ms), in agreement with the residence time formula [30]. We note that if the 90% fence coverage is made of 10 fence parts, τ_D ≈ 33 ms and τ_R ≈ 108 ms. We conclude that a receptor cannot be confined inside the PSD for times on the order of minutes by a fence alone, unless it is bound to scaffolding molecules [31].
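The exchange of receptors between PSD and reservoir can be caricatured by a two-compartment Markov model with exponential residence times. This is a strong simplification of the 2D diffusion simulated in the paper: the no-fence residence times τ_D ≈ 12 ms and τ_R ≈ 48 ms are taken from the text, while the time step and total receptor number are illustrative. Note that with these measured τ values the equilibrium PSD occupancy is N·τ_D/(τ_D + τ_R) = 80 out of 400 receptors, slightly below the designed 100/300 split, since τ_D/τ_R = 0.25 here rather than 0.3.

```python
import random

random.seed(1)

TAU_PSD, TAU_RES = 12.0, 48.0   # no-fence residence times from the text, ms
N_TOT, DT = 400, 0.1            # total receptors; time step in ms (illustrative)

def simulate(steps):
    """Each receptor hops PSD <-> reservoir with exponential waiting times."""
    in_psd = [i < 80 for i in range(N_TOT)]   # start at the expected equilibrium
    n_in = 80
    counts = []
    for _ in range(steps):
        for i in range(N_TOT):
            if in_psd[i]:
                if random.random() < DT / TAU_PSD:
                    in_psd[i] = False         # receptor escapes to the reservoir
                    n_in -= 1
            elif random.random() < DT / TAU_RES:
                in_psd[i] = True              # receptor enters the PSD
                n_in += 1
        counts.append(n_in)
    return counts

counts = simulate(6000)
mean_psd = sum(counts[1500:]) / len(counts[1500:])   # discard burn-in
analytic = N_TOT * TAU_PSD / (TAU_PSD + TAU_RES)     # steady-state occupancy = 80
```

The simulated time-averaged occupancy fluctuates around the analytic steady-state value, mirroring the realization shown in Figure 6 A.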
To test the functional role of diffusion in synaptic transmission, we use a paired-pulse protocol (Fig. 6 C,E,G) in which two vesicles are released successively in the synaptic cleft, at the center of the presynaptic terminal, with a time delay of Δt = 50 ms. This protocol does not account for any facilitation mechanism. When no corral is present, we either allow receptors to diffuse (D = 0.1 μm²/s) or not, using the JS and MN schemes for AMPAR dynamics. In all cases, receptor diffusion increases the amplitude of the second pulse by about 10%. In Figure 6 C,E,G, the paired-pulse ratio is shown as a function of the time interval Δt.
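The paired-pulse effect of trafficking can be sketched with a minimal stochastic receptor model. All rate parameters below (opening probability, desensitization probability, 50-ms recovery fraction, exchanged fraction) are illustrative assumptions, not the JS or MN kinetic schemes; the point is only that replacing part of the desensitized pool with naive extrasynaptic receptors between pulses raises the second response, as diffusion does in Figure 6.

```python
import random

random.seed(2)

N = 100          # PSD receptors
P_OPEN = 0.35    # opening probability per pulse (illustrative)
P_DESENS = 0.6   # probability an opened receptor desensitizes after the pulse
RECOVERY = 0.3   # fraction of desensitized receptors recovering within 50 ms

def paired_pulse(exchange_fraction):
    """Return (open_1st, open_2nd); exchange_fraction is the fraction of
    desensitized receptors replaced by naive extrasynaptic ones (trafficking)."""
    open1 = sum(1 for _ in range(N) if random.random() < P_OPEN)
    desens = sum(1 for _ in range(open1) if random.random() < P_DESENS)
    desens = int(desens * (1 - RECOVERY))          # intrinsic recovery in 50 ms
    desens = int(desens * (1 - exchange_fraction)) # replacement by diffusion
    avail2 = N - desens
    open2 = sum(1 for _ in range(avail2) if random.random() < P_OPEN)
    return open1, open2

def mean_ppr(exchange_fraction, trials=3000):
    r = [paired_pulse(exchange_fraction) for _ in range(trials)]
    return sum(o2 for _, o2 in r) / sum(o1 for o1, _ in r)

ppr_static = mean_ppr(0.0)   # immobile receptors: depression from desensitization
ppr_diff = mean_ppr(0.5)     # half the desensitized pool replaced by diffusion
```

With these toy numbers the paired-pulse ratio sits below 1 without trafficking and moves closer to 1 when part of the desensitized pool is exchanged.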
Finally, to test whether receptor trafficking can have a larger impact on the number of open AMPARs during high synaptic activity, we simulate up to 10 pulses at 20 Hz. For the JS scheme, after ten pulses, the difference between diffusing and stationary AMPARs is about 3.8%; however, the difference increases to 12.7% for the MN scheme (receptors not bound to transmembrane AMPAR regulatory proteins) and 12.5% for the RL scheme. Figures 6 D,F,H display the increase in the number of desensitized receptors as a function of time. These results show that perisynaptic receptors also become desensitized and are subsequently exchanged with AMPARs at the PSD, although they do not contribute directly to synaptic transmission. During the 20 Hz stimulation, some perisynaptic receptors do contribute to replenishment of the PSD receptor pool, which facilitates synaptic transmission. We further varied the reservoir size, first considering a reservoir with 50% of its area located outside the cleft and subsequently one with an extra-cleft part three times larger (Figs. 6 D,F,H). Because the radius of the synapse is about 200 nm, receptors have time to diffuse to the PSD. Interestingly, increasing the reservoir size by adding an extra-cleft region contributes to the synaptic recovery by 23% and 29%, respectively, after ten pulses (Fig. 6 F). We conclude that AMPAR trafficking can replace freely diffusing desensitized receptors in small synapses, and this effect is controlled by the size of the reservoir, which models the perisynaptic space.
Synaptic transmission is depressed by fast spiking but can be rescued by reduction of vesicle release probability

When a train of action potentials is fired at high frequency, a fraction of AMPARs will not contribute to the synaptic current due to desensitization. To investigate this effect, we estimate the number of open AMPARs following a single spike embedded in a spike train. Due to the long duration of the spike trains, receptor trafficking can be expected to play a role, thus we consider two different reservoir sizes. In the first case, the reservoir is located inside the cleft only: for 100 pulses at 20 Hz (Fig. 7 A), receptor trafficking provides almost no compensation (not shown), as the fraction of non-desensitized receptors inside the reservoir is very small. In the second case, we increased the reservoir fourfold, corresponding to an additional 120 AMPARs in the extra-cleft reservoir. For spike train frequencies (number of pulses) of 20 Hz (100), 10 Hz (50) and 5 Hz (25), the average maximal numbers of open AMPARs are 9, 14 and 20, respectively. The effect of doubling the reservoir size is presented in Section 5 in Text S1. We conclude that desensitization can drastically affect the synaptic response during a spike train. If the spike frequency is not too high (less than 10 Hz), this depression can be partially compensated by a large AMPAR reservoir, the size of which is however not arbitrary. For a vesicle release probability close to one, a spike train of 20 Hz or higher reduces the synaptic current to roughly one fifth.
A low release probability such as p ≈ 0.25, together with a large extra-cleft reservoir, would restore up to two thirds of the maximal postsynaptic current response (shown in Figs. 4 E,H). Interestingly, although a low release probability (around 0.25) decreases the release frequency at a single synapse, this effect is compensated by a significantly larger postsynaptic current (multiplied by 3).
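The trade-off between release probability and desensitization during a train can be reproduced with a deterministic-recovery caricature: each successful release opens a fraction of the non-desensitized receptors, and the desensitized pool decays exponentially between spikes. The recovery time constant and the per-pulse opening/desensitization fractions below are illustrative assumptions, not fitted to the kinetic schemes.

```python
import math
import random

random.seed(3)

FREQ = 20.0            # spike train frequency, Hz
TAU_REC = 100.0        # recovery time constant from desensitization, ms (assumed)
N, P_OPEN, P_DES = 100, 0.35, 0.6

def train_response(p_release, n_spikes=100):
    """Average number of open receptors per successful release in a train."""
    dt = 1000.0 / FREQ
    desens = 0.0
    opens = []
    for _ in range(n_spikes):
        desens *= math.exp(-dt / TAU_REC)        # recovery between spikes
        if random.random() < p_release:          # stochastic vesicle release
            o = (N - desens) * P_OPEN
            opens.append(o)
            desens += o * P_DES
    return sum(opens) / len(opens) if opens else 0.0

high_p = sum(train_response(1.0) for _ in range(200)) / 200    # reliable synapse
low_p = sum(train_response(0.25) for _ in range(200)) / 200    # unreliable synapse
```

As in the text, the unreliable synapse produces fewer events but each successful release activates substantially more receptors, because failures give the desensitized pool time to recover.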
Discussion
We have presented here a computational model to estimate the postsynaptic current mediated by AMPARs. The present approach features glutamate diffusion in the synaptic cleft, AMPAR trafficking in and out of the PSD, AMPAR activation modeled by kinetic schemes, and transporters located on an astroglial sheath which can take up glutamate molecules. We have shown that changing the glial distance in a range of 20 to 40 nm affects the number of open AMPARs by at most 15% when vesicles are released in a small centered AZ. Moreover, the synaptic current is maximal when receptors are clustered at the PSD, suggesting that PSD receptor localization plays a fundamental role for the synaptic current. Adding 50% of receptors extrasynaptically, followed by a translocation to the PSD via scaffolding molecules, leads to an increase of 27% in the first step and an additional 23% in the second step, resulting in an approximate total increase of 50% of the current, suggesting that LTP can be viewed as a two-step process. Finally, AMPAR trafficking can replace freely diffusing, desensitized receptors in small synapses, and this effect is controlled by the size of the perisynaptic space, which maintains a specific density of receptors. Thus, for certain synapses with a large perisynaptic region, where most of the surface extrudes from the synaptic cleft, synaptic desensitization can be partially compensated by AMPAR trafficking for spiking frequencies below 10 Hz, while a low release probability (around 0.2) extends this property to 50 Hz.
The perisynaptic microdomain shapes the postsynaptic response
How can the perisynaptic microdomain control the amount of AMPARs at synapses? The postsynaptic bouton is organized in multiple compartments: the PSD, which concentrates scaffolding molecules; the peri- and extrasynaptic space; and the dendritic spine, which isolates the head from the dendritic shaft. The amount of receptors in the dendrite is about 10 times higher than at synapses [32]. Were all receptors free to move at equilibrium between the dendrite and dendritic spines, synaptic specificity would be lost, and the synaptic weight would be controlled only by scaffolding molecules, which are found in large excess (compared to bound AMPARs) at the PSD [33].
Postsynaptic AMPAR density depends on surface trafficking [10,13], but receptors can also be regulated by endo- and exocytic pathways [34,35]. This recycling mechanism is a source of AMPAR fluctuation. Indeed, blocking endocytosis locally or preventing recycling endosomal transport abolishes LTP induction in spines [35]; thus AMPARs are transported from recycling endosomes back into the spine, which prevents them from escaping the spine. AMPARs in fact undergo continuous recycling by endo- and exocytosis [36][37][38]. Moreover, preventing endocytosis by uncoupling the PSD from the endocytic zone [39] leads to a decrease in the number of AMPARs on a time scale of minutes. This result shows that local endocytosis can balance fast lateral diffusion [40]. In our work, we use the recycling concept to define a reservoir compartment where receptors can only be exchanged with the PSD. We fix the number of receptors in this reservoir and assume that this number is maintained at equilibrium by endo-/exocytosis or by exchange due to surface membrane diffusion. The reservoir is a source of AMPARs, isolated from the dendrite. If receptors were to traffic continuously, then in order to maintain a local increase in concentration, a barrier would have to exist to prevent synaptic receptors from equilibrating with the rest of the dendrite. This barrier could be either physical, due to the spine shape, or dynamic, made up by the exo- and endocytosis machinery [40,41].
As shown in Figure 5, increasing the number of AMPARs in the perisynaptic microdomain itself leads to an increase in the number of open AMPARs. In that case, because the number of receptors at the PSD is unchanged, we conclude that regulating the perisynaptic size can be viewed as a form of plasticity induced by geometrical remodeling of the spine, independent of additional scaffolding molecules. In this respect, the reservoir plays a fundamental role. Furthermore, when receptors finally cluster at the PSD, a further increase in the current amplitude is achieved (Fig. 5). This suggests that synaptic plasticity may occur in two distinct stages: in a first step, receptors are simply inserted and free to move in the reservoir, while in the second, they enter the PSD where they remain clustered. We conclude that increasing the number of scaffolding molecules will change the equilibrium between the PSD and the reservoir, leading to a stronger clustering of receptors and hence an increase of synaptic current (Fig. 8). Finally, it would be interesting to know what exactly determines the perisynaptic size and how the number of AMPARs is maintained there: is the dendritic spine head the location of the perisynaptic microdomain, where diffusion is regulated by the thin neck? It was indeed shown that the spine neck can regulate intracellular calcium [42,43] and receptor trafficking [31].
In the past decade, it was shown that the number of synaptic AMPARs [1,13,14] is not fixed but changes due to lateral diffusion and endocytotic recycling [35,40]. It is conceivable that recycling can change the number of receptors at the PSD and thus affect the amplitude of the synaptic current. To quantify such an effect, we simulated spikes on a time scale of hundreds of milliseconds (Fig. 6) and found that in a paired pulse protocol, the fluctuation of current amplitude due to receptor trafficking was less than 5% (Fig. 6 A). However, it has recently been suggested [17] that receptor trafficking can participate functionally in synaptic transmission by significantly increasing the number of potentially available receptors and thus replacing desensitized ones. We find here that such effect can only be significant after several efficient vesicular release events, triggered by a number of spikes (at least 6 to 7), leading to a 30% recovery for large perisynaptic microdomains. During 300 ms (Fig. 6 C) of unhindered diffusion across the PSD, 70% of the moving receptors can be replaced by undesensitized extrasynaptic AMPARs. However, this result is an overestimation because in vivo, presynaptic depression will prevent vesicle release and thus provides time for the receptors to recover. Finally, recent findings suggest that the PSD undergoes constant remodeling [30], and we suggest here that these changes may affect the number of scaffolding molecules, the size and shape of the PSD, and the perisynaptic size.
Synaptic strength depends on release site-receptor alignment and synaptic microdomains
A drastic impact of release site localization on the number of open AMPARs has already been shown in [5,6,44]. We confirm (Fig. 2B) that release site positioning to the periphery (ectopic release [45]) can decrease the amount of open AMPARs by 50%. Vesicles are released at the active zone [46][47][48][49] and, as shown in Figure 4, apposition of the postsynaptic receptors to the release site is a fundamental requirement for optimal synaptic transmission, in which the mean number of open AMPARs is high but the variance is low. It is still unclear how this apposition is achieved, but adhesion molecules such as neuroligin/neurexin may play a major role [18]. Indeed, N-cadherin molecules, present in both the pre- and postsynaptic terminals, can provide the apposition information, since they interact directly with the extracellular domain of AMPARs and can influence the clustering of AMPA receptors [18]. Moreover, scaffolding molecules can transmit the location of the PSD and of AMPAR accumulation to the presynaptic terminal via these adhesion molecules [18]. In addition, N-cadherin was found to associate with AMPARs and regulate their trafficking in neurons [50]. Other molecules such as beta-catenin may also be involved, because ablation of beta-catenin in the postsynaptic neuron reduces the amplitude of spontaneous excitatory synaptic responses mediated by AMPARs [51]. Furthermore, at the presynaptic terminal, N-cadherin molecules may define the spot where vesicles should be released and regulate their clustering [52,53]. Interestingly, impairing the adhesive activity of cadherins by deletion of beta-catenin or N-cadherin was found to reduce the number of reserve pool synaptic vesicles in the presynaptic terminal, resulting in an enhanced synaptic depression during repetitive stimulation [54]. In addition, the neurexin/neuroligin complex has been shown to modulate presynaptic release probability.
The apposition of active zone and PSD seems to be fundamental for synaptic transmission: it allows vesicles to be released at a favorable location relative to the localization of AMPAR clusters, such that the probability of activation by glutamate is maximal. In addition, recent evidence [55] indicates that a released vesicle can induce docking of new vesicles to the same spot via a direct actin wire and favor an active zone with a finite number of hot spots for vesicle fusion. Another possibility is that docked vesicles move into the active zone by diffusion and only fuse at a finite number of distinguished locations apposed to AMPAR clusters. However, after 4 to 5 pulses, the probability that a vesicle is released at the same spot should decrease rapidly, and thus an efficient release should occur at a different location. This scenario suggests that, to sustain high-frequency activity at a single synapse, the active zone contains several hot spots for vesicle docking. Interestingly, various AMPAR clusters have already been reported [56]. We conclude that the apposition of active zone to PSD is fundamental for optimal synaptic transmission and should be very well controlled at the molecular level. A reduction of the AMPAR current can result from receptor de-clustering, from enlarging the active zone, or from both.
Receptor clustering modulates evoked synaptic transmission and miniature events
We have shown in Figure 4 that apposition of AMPARs and release sites reduces the variance and increases the mean of the synaptic current in comparison to the extreme case where release sites are uniformly distributed. Because adding extrasynaptic receptors (Fig. 5) increases the synaptic current, we propose that this represents a first step in the LTP process. In a second step, receptors can move by diffusion inside the PSD, where scaffolding molecules in excess [33] can bind them. An increased number of scaffolding molecules will prolong the residence time of receptors at the PSD [31,57].
Interestingly, the possibility of obtaining LTP in PSD95 knockout mice [58] can be interpreted within this model as the aforementioned first step leading to more AMPARs in the reservoir, which may even result in an increase of PSD receptors. We predict that the postsynaptic response should then be quite unreliable. However, as scaffolding molecules are expressed and localized at the PSD, the CV of the current should decay. Actually, increasing the number of scaffolding molecules may be part of the developmental process that increases synaptic efficacy. Conversely, a protocol that results in detaching AMPARs would lead to a decrease in the synaptic current amplitude (Fig. 4), thus reducing the detection threshold of the postsynaptic neuron. Interestingly, PSD95 KO mice can sustain LTP, and the frequency of minis is diminished while the amplitude of the synaptic current is not affected [1,58]. From our analysis, we can now postulate that a synapse should contain multiple structures where vesicular fusion spots are apposed to one or several clusters of AMPARs (Fig. 8). Disrupting scaffolding molecules should affect some of these subsynaptic structures, while others remain functional. In that case, the postsynaptic detection threshold will decrease, implying a reduction in the postsynaptic frequency, while the remaining sub-synaptic structures would still generate an EPSC of an amplitude comparable to the control case. Overexpressing PSD95 could lead to the formation of new AMPAR clusters and the formation of additional sub-synaptic structures [1,58].
Efficient transmission for spiking neurons requires several depressing synaptic boutons
Vesicular release is not a reliable process [2,59]: only sometimes does a spike trigger vesicle release. Although this process has been well studied [60], many of the molecular details are still lacking; for various types of neurons, such as CA1 hippocampal neurons, the release probability p is estimated to be around 0.2. Our analysis of Figure 7 suggests that a low release probability allows successive releases to be decorrelated for spike rates of at least 10 Hz. For example, in the absence of depression, a release probability of 1 during a spike train at 20 Hz would result in a postsynaptic current mediated by 6 open AMPARs, while for an unreliable synapse, i.e., with a release probability of around 0.2, the current would increase threefold. Interestingly, temporal correlation leads to receptor desensitization which cannot be compensated by receptor trafficking alone (Figs. 7 A,D). We conclude that preventing vesicular release allows desensitized AMPARs to recover and provides time during which fresh receptors can enter the synapse by trafficking. Hence a release event activates many more AMPARs and thus can generate a significant EPSC. Even though this synaptic unreliability restricts the possible spiking frequencies, fast signaling can apparently be restored at the cellular level. Indeed, it has been shown [19] that a presynaptic neuron can have multiple connections with a postsynaptic one, from one to several (5 on average).
Although a single synapse is an unreliable device, there are several ways by which neuron-to-neuron connections can still be made: 1) reliable, in the sense that synaptic signals are actually elicited, and 2) robust, in the sense that the resulting postsynaptic current is significant and has a low variance. These ways are illustrated in Figure 8B: one way is to integrate (in space), and hence average, a given signal over several unreliable synapses that produce highly variable postsynaptic currents. A second possibility is to replace a single spike by a spike burst, which can increase release probability (hence reliability), such that signal integration (in time) takes place over the postsynaptic currents of every elicited event in the burst. This scenario is equivalent to releasing a large number of vesicles at the same synapse. A third possibility is to distribute the signal over several robust synaptic connections and to reduce the release probability (e.g., p ≈ 0.2 and 5 synaptic connections). As discussed above, synaptic robustness can be achieved by apposition of receptors and release sites. For this mechanism to work, i.e., to bring vesicles to the designated sites, a certain minimal time scale may actually be required. While the first two scenarios rely on increasing synaptic activity and therefore require more cellular energy, the third one relies on local and selective activity. It is possible that different populations of neurons use these different possibilities. However, the third scenario of neuronal connection raises several questions: can the release probability depend on the number of synaptic connections? Are sister synapses between two neurons really independent? It is quite surprising that synaptic unreliability [61] can have such an effect on neuronal transmission.
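The third scenario (several unreliable sister contacts) can be quantified with elementary binomial statistics. The sketch below is illustrative only: it counts successful release events per presynaptic spike, ignoring the postsynaptic current dynamics, and shows that five contacts with p = 0.2 give a larger mean and a much smaller CV than a single contact, while the failure probability drops from 0.8 to 0.8⁵ ≈ 0.33.

```python
import math
import random

random.seed(6)

def connection_response(p_release, n_synapses, trials=20000):
    """Mean and CV of the number of successful release events per presynaptic
    spike, for n_synapses independent contacts with release probability p."""
    samples = []
    for _ in range(trials):
        samples.append(sum(1 for _ in range(n_synapses)
                           if random.random() < p_release))
    m = sum(samples) / trials
    var = sum((x - m) ** 2 for x in samples) / trials
    return m, (math.sqrt(var) / m if m > 0 else float("inf"))

m1, cv1 = connection_response(0.2, 1)   # a single unreliable synapse
m5, cv5 = connection_response(0.2, 5)   # five sister contacts (scenario 3)
p_fail = (1 - 0.2) ** 5                 # probability that no contact releases
```

Averaging over sister contacts thus trades per-synapse reliability for connection-level robustness without raising the release probability at any single bouton.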
To conclude, we summarize the main sources of synaptic fluctuation which contribute to synaptic unreliability: 1) synaptic geometry, 2) location of vesicle fusion, 3) apposition of release sites and AMPAR clusters, 4) low release probability. In the present analysis, we show that presynaptic depression leads to decoupling of spikes and hence to a higher synaptic current. Interestingly, multi-synaptic connections are likely fundamental to achieving robust cellular transmission. In that context, we suggest that unreliable synapses actually allow reliable synaptic transmission at high frequency.
Electrophysiology
Experiments were carried out according to the guidelines of the European Community Council Directives of November 24th 1986 (86/609/EEC) and approved by the ethical committee of Paris 1, agreement number 2009-0014. C57Bl6 mice (wildtype (wt)) were supplied by Charles River, L'Arbresle, France. For all analyses, mice of both genders were used (P16-P25). Acute transverse hippocampal slices (300-400 μm) were prepared as previously described [32]. Slices were maintained at room temperature in a storage chamber that was perfused with an artificial cerebrospinal fluid (ACSF) containing (in mM): 119 NaCl, 2.5 KCl, 2.5 CaCl2, 1.3 MgSO4, 1 NaH2PO4, 26.2 NaHCO3 and 11 glucose, saturated with 95% O2 and 5% CO2, for at least one hour prior to recording. Slices were transferred to a submerged recording chamber mounted on an Olympus BX51WI microscope equipped for infrared differential interference contrast (IR-DIC) microscopy and were perfused with ACSF at a rate of 1.5 ml/min at room temperature. All experiments were performed in the presence of picrotoxin (100 μM), and a cut was made between CA1 and CA3 to prevent the propagation of epileptiform activity. Somatic whole-cell recordings were obtained from visually identified CA1 pyramidal cells and stratum radiatum astrocytes, using 5-10 MΩ glass pipettes filled with (in mM): 115 CsMeSO3, 20 CsCl, 10 HEPES, 2.5 MgCl2, 4 Na2ATP, 0.4 NaGTP, 10 Na-phosphocreatine, 0.6 EGTA, 0.1 spermine, 5 QX314 (pH 7.2, 280 mOsm). Miniature excitatory postsynaptic currents (mEPSCs) were recorded at -70 mV in the presence of 0.5 μM TTX. Evoked postsynaptic responses were induced by stimulating Schaffer collaterals (0.1 Hz) in CA1 stratum radiatum with ACSF-filled glass pipettes. Stimulus artifacts were blanked in sample traces.
Recordings were acquired with Axopatch-1D amplifiers (Molecular Devices, USA), digitized at 10 kHz, filtered at 2 kHz, and stored and analyzed on a computer using pClamp9 and Clampfit9 software (Molecular Devices, USA). All data are expressed as mean ± SEM. Picrotoxin was obtained from Sigma, all other chemicals from Tocris.
Simulation
We describe a simulation and modeling approach for the synaptic cleft. All programs were written in MATLAB and C. Multiple Monte Carlo simulations were performed with a discretization time step of 0.5 μs. The default values for all parameters are listed in Table 1 unless stated otherwise.
Synapse geometry and functionality. The presynaptic and postsynaptic elements were modeled as two coaxial cylinders, each of length 0.5 μm and 400 nm diameter. The distance between these cylinders represents the synaptic cleft height (20 nm). The glial sheet was designed as a coaxial cylindrical surface surrounding the pre- and postsynaptic cylinders at a distance of 40 nm. The postsynaptic density was defined as a circular area of 200 nm in diameter, centered on the surface of the postsynaptic cylinder (see Fig. 1).
Vesicle release. Vesicle release sites were generally placed on the surface of the presynaptic cylinder. A single vesicle contains 3000 glutamate molecules, which, upon vesicle fusion, were all released at a single point and in a single time step.
Glutamate diffusion. Upon release, glutamate could diffuse freely with a diffusion constant of 0.2 μm²/ms [7,62]. As shown in http://arxiv.org/abs/1104.1090, variation in the glutamate diffusion constant does not affect the probability of glutamate binding before it exits the synaptic cleft; it only affects the kinetics. This binding kinetics is already extremely fast (on the order of 100 μs), much faster than any other binding processes (on the order of ms). Thus changes in D (even a doubling) do not much affect the synaptic current. Glutamate trajectories were simulated according to Brownian dynamics. Upon hitting a membrane surface, molecules were specularly reflected (or bound to transporters, see below). Upon reaching a distance of 0.5 μm from the cleft center, a trajectory was terminated.
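The claim that the diffusion constant rescales only the kinetics can be checked with a minimal 2D Brownian walk: doubling D should roughly halve the mean time for a molecule released at the cleft center to reach the 0.5-μm termination radius. The time step and trial count below are illustrative choices, not the simulation's actual discretization.

```python
import math
import random

random.seed(4)

R_EXIT = 500.0   # nm: trajectory terminated at this distance (as in the text)
DT = 2e-4        # ms, illustrative discretization step

def mean_exit_time(d_nm2_per_ms, n=300):
    """Mean first-passage time from the cleft center to the exit radius."""
    s = math.sqrt(2 * d_nm2_per_ms * DT)   # per-axis step standard deviation
    total = 0.0
    for _ in range(n):
        x = y = t = 0.0
        while x * x + y * y < R_EXIT * R_EXIT:
            x += random.gauss(0.0, s)
            y += random.gauss(0.0, s)
            t += DT
        total += t
    return total / n

D = 2.0e5                # 0.2 um^2/ms expressed in nm^2/ms
t1 = mean_exit_time(D)
t2 = mean_exit_time(2 * D)
ratio = t1 / t2          # diffusion only rescales time: expect a ratio near 2
```

The exit-time ratio near 2 illustrates that varying D changes when glutamate leaves the cleft, not where it goes, consistent with the binding-probability argument above.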
AMPA-Receptors. AMPA receptors were placed in two areas: on the PSD and in the reservoir. The reservoir contains an intra-cleft part, i.e., the cleft-facing disk of the postsynaptic cylinder without the PSD, and an extra-cleft part, i.e., an area on the lateral postsynaptic cylinder surface (see Fig. 1). Unless stated otherwise, at simulation start, AMPARs were uniformly distributed in the intra- and extra-cleft areas such that the ratio of densities of PSD-AMPARs to reservoir-AMPARs was 10:1, with 100 AMPARs placed on the PSD. AMPARs trafficked in these areas with a diffusion constant of 0.1 μm²/s [17]. AMPAR trajectories were simulated by Brownian dynamics. Due to AMPARs binding to PSD scaffolding molecules and confinement in micro-domains on the PSD, AMPARs accumulate at a higher concentration on the PSD compared to the reservoir. The mean AMPAR densities on PSD and reservoir were maintained constant by free trafficking of AMPARs from the reservoir into the PSD. To simulate the PSD corral, passage from the PSD into the reservoir succeeds on average only once every ten attempts; otherwise the AMPAR is reflected at the PSD boundary (see Section 7.2.3 in Text S1 for details). At the outer boundary of the reservoir, AMPARs are reflected back into the reservoir. Internal states of AMPARs were modeled using the Markov schemes by Jonas-Sakmann [24] (called the JS scheme in this paper), by Milstein-Nicoll [25] (the MN scheme), and by Raghavachari-Lisman [21] (the RL scheme). We refer to Section 6 in Text S1 for the JS, MN, and RL schemes and a comparison of them. The random fluctuations of the internal states of AMPARs were modeled as fluctuations of the number of glutamate molecules near the receptor, excluding fluctuations of the Markov chain. A small circular area was associated with every AMPAR, and the internal state dynamics was inferred from the number of glutamate molecules hitting this area per time step.
Glutamate molecules hitting this area were then reflected, and glutamate binding was neglected; see Section 7 in Text S1. The internal states of AMPARs located outside the cleft were not affected by hitting glutamate.

Glial transporters. The glial sheath was uniformly covered with glutamate transporters located on an equally-spaced square grid at densities ranging from 2,500 to 10,000 per μm². Glial glutamate transporters can bind glutamate molecules and internalize them into the glia. To model these kinetics, we used a Markov scheme [7] (see Text S1). A small circular area was associated with every transporter, and every glutamate molecule hitting this area was either specularly reflected or bound, with probabilities matching the binding rate of the Markov scheme. Depending on the state transitions of the scheme, the glutamate molecule was either unbound, i.e., reinserted into the extrasynaptic space, or internalized, i.e., taken out of the simulation. See Section 7 in Text S1 for a complete description of the simulation procedure.
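The corral rule (on average one successful boundary passage out of ten attempts) lengthens the PSD residence time, which can be seen in a stripped-down 2D walk. The geometry, step size and trial count below are illustrative choices, not the full simulation; the receptor diffusion constant matches the value quoted in the text.

```python
import math
import random

random.seed(5)

R_PSD = 100.0    # nm, PSD radius (200-nm diameter, as in the text)
D_REC = 100.0    # nm^2/ms, i.e. 0.1 um^2/s receptor diffusion constant
DT = 1.0         # ms, illustrative time step
P_PASS = 0.1     # corral rule: one crossing in ten succeeds

def residence_time(p_pass):
    """Time for a receptor starting at the PSD centre to escape the corral."""
    s = math.sqrt(2 * D_REC * DT)
    x = y = t = 0.0
    while True:
        t += DT
        nx = x + random.gauss(0.0, s)
        ny = y + random.gauss(0.0, s)
        if nx * nx + ny * ny >= R_PSD * R_PSD:
            if random.random() < p_pass:
                return t             # crossing succeeds: receptor leaves the PSD
            # otherwise reflected by the fence: position unchanged this step
        else:
            x, y = nx, ny

free = sum(residence_time(1.0) for _ in range(300)) / 300     # no corral
fenced = sum(residence_time(P_PASS) for _ in range(300)) / 300  # corral rule
```

The partially reflecting boundary multiplies the mean residence time severalfold, in line with the role of the corral in maintaining the 10:1 density ratio between PSD and reservoir.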
Supporting Information
Text S1 Presents the following information: In Section 1, the derivation of the formula for the PSD-reservoir receptor exchange rates. In Section 2, additional data for Figure 2 regarding synaptic geometry with the JS and MN AMPAR models, and for uniformly distributed release sites. In Section 3, additional data for Figure 3 regarding glutamate spread for doubled glutamate diffusion constant. In Section 4, further comments on AMPAR trafficking and synaptic transmission. In Section 5, comments on the effect of reservoir size on pulse trains. In Section 6, JS, MN, and RL AMPAR kinetic models are compared. Section 7 provides a detailed simulation analysis and description of algorithms. (PDF)
Renal impairment and use of nephrotoxic agents in patients with multiple myeloma in the clinical practice setting in the United States
Abstract Renal impairment is a common complication of multiple myeloma and deterioration in renal function or renal failure may complicate clinical management. This retrospective study in patients with multiple myeloma using an electronic medical records database was designed to estimate the prevalence of renal impairment (single occurrence of estimated glomerular filtration rate [eGFR] <60 mL/min per 1.73 m2 on or after multiple myeloma diagnosis) and chronic kidney disease (at least two eGFR values <60 mL/min per 1.73 m2 after multiple myeloma diagnosis that had been measured at least 90 days apart), and to describe the use of nephrotoxic agents. Eligible patients had a first diagnosis of multiple myeloma (ICD‐9CM: 203.0x) between January 1, 2012 and March 31, 2015 with no prior diagnoses in the previous 6 months. Of 12,370 eligible patients, the prevalence of both renal impairment and chronic kidney disease during the follow‐up period was high (61% and 50%, respectively), and developed rapidly following the diagnosis of multiple myeloma (6‐month prevalence of 47% and 27%, respectively). Eighty percent of patients with renal impairment developed chronic kidney disease over the follow‐up period, demonstrating a continuing course of declining kidney function after multiple myeloma diagnosis. Approximately 40% of patients with renal impairment or chronic kidney disease received nephrotoxic agents, the majority of which were bisphosphonates. As renal dysfunction may impact the clinical management of multiple myeloma and is associated with poor prognosis, the preservation of renal function is critical, warranting non‐nephrotoxic alternatives where possible in managing this population.
Introduction
Renal impairment is a common complication of multiple myeloma, present at diagnosis or emerging during the course of the disease [7][8][9][10]. Insufficient kidney function in multiple myeloma is associated with poor prognosis and increased mortality [7][11][12][13]. Adding to the kidney damage caused by their disease, patients with multiple myeloma may receive treatments that are nephrotoxic, such as chemotherapy, targeted anti-cancer agents, and supportive care such as analgesics, antibiotics, and intravenous (IV) bisphosphonates.
This retrospective study using the Oncology Services Comprehensive Electronic Records (OSCER) electronic medical records database was designed to estimate the prevalence of renal impairment and chronic kidney disease in a current population of patients with multiple myeloma, and to describe the use of nephrotoxic agents (core antimyeloma therapies: doxorubicin, cisplatin, and epirubicin; and supportive care with metoclopramide or intravenous bisphosphonates including zoledronic acid or pamidronate) in these patients.
Study design and patients
This was a retrospective observational cohort study using the OSCER database [11] which includes electronic medical records for over 750,000 patients from over 200 outpatient practice groups across the US from 2004 to present day. Eligible patients in the OSCER database had a first diagnosis of multiple myeloma (ICD-9CM: 203.0x), defined as the index date, between January 1, 2012 and March 31, 2015. Patients were ≥18 years of age at the index date. Exclusion criteria included a history of multiple myeloma during the 6 months prior to the index date (baseline period), solid tumor diagnosis (neoplasm) (ICD9-CM: 140.xx-165.xx, 170.xx-176.xx, 179.xx-208.xx, 210.xx, 239.xx), end-stage renal disease and/or dialysis at baseline. Patients' demographic and clinical characteristics were collected for the 6-month baseline period prior to the index date. The follow-up period for each patient started at the first diagnosis of multiple myeloma (the index date) and continued until death (if documented), the last record of any kind captured for the patient, or June 30, 2015.
Endpoints
For the primary analysis of the prevalence of renal impairment, patients were required to have ≥1 serum creatinine value recorded on or after the index date. Serum creatinine values were used to calculate estimated glomerular filtration rate (eGFR) using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation. Renal impairment was defined as a single occurrence of eGFR <60 mL/min per 1.73 m2 on or after the diagnosis of multiple myeloma. The time to renal impairment was determined by the number of months from the diagnosis of multiple myeloma to the qualifying eGFR value.
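For reference, the 2009 CKD-EPI creatinine equation reduces to a few lines. This sketch applies the standard published coefficients; the study's handling of unknown race is described in the text and is not encoded here:

```python
def ckd_epi_2009(scr_mg_dl, age, female, black):
    """eGFR (mL/min per 1.73 m2) from serum creatinine, CKD-EPI 2009.

    Coefficients: kappa and alpha depend on sex; multipliers of 1.018
    (female) and 1.159 (black) follow the published equation.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr
```

For example, a 60-year-old non-black man with a serum creatinine of 1.0 mg/dL comes out at roughly 81 mL/min per 1.73 m2, above the <60 threshold used to define renal impairment here.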
Chronic kidney disease was defined as the presence of at least two eGFR values <60 mL/min per 1.73 m 2 after the diagnosis of multiple myeloma that had been measured at least 90 days apart [12]. The prevalence of chronic kidney disease was calculated as the proportion of patients with the event at any time among all patients who had fulfilled the testing criteria as defined in the previous sentence. Chronic kidney disease prevalence was also described in patients with renal impairment. The time to chronic kidney disease was determined by the number of months from the diagnosis of multiple myeloma to the first as well as to the confirming (second) test.
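The two-value/90-day rule above can be applied mechanically to a patient's post-diagnosis lab history. A minimal sketch follows; the function name and data layout are assumptions, not the study's actual code:

```python
from datetime import date

def chronic_kidney_disease_date(egfr_by_date, threshold=60.0, gap_days=90):
    """Apply the study's CKD definition to post-diagnosis eGFR values.

    egfr_by_date: iterable of (measurement_date, eGFR) pairs recorded
    after the multiple myeloma diagnosis. CKD requires at least two
    values < threshold measured >= gap_days apart. Returns the date of
    the confirming (second) qualifying value, or None if never met.
    """
    low = sorted(d for d, v in egfr_by_date if v < threshold)
    for i, first in enumerate(low):
        for later in low[i + 1:]:
            if (later - first).days >= gap_days:
                return later
    return None
```

The returned confirming date is what the time-to-event analysis in the next paragraph would use as the second (confirmatory) test.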
The prevalence of renal impairment and chronic kidney disease was also described in the first 6 months and first 12 months after the diagnosis of multiple myeloma. These analyses were based on patients with the requisite serum creatinine values available to qualify them for the event, both within the given time period and over the full study follow-up period. Sensitivity analyses included the prevalence and time to event of renal impairment or chronic kidney disease in all patients, regardless of the availability of serum creatinine values. Time to renal impairment was also calculated for the subset of patients who did not have renal impairment at baseline.
The use of nephrotoxic agents including core antimyeloma therapies (doxorubicin, cisplatin, and epirubicin) and supportive care (metoclopramide; intravenous bisphosphonates: zoledronic acid or pamidronate) was described before and after either the lowest post-diagnosis eGFR value (in the case of renal impairment) or the confirming eGFR value (in the case of chronic kidney disease). Usage of these nephrotoxic agents was calculated before and after the lowest eGFR during the study period by eGFR category (<15, 15-29, and 30-59 mL/min per 1.73 m2) [13]. For sensitivity analyses among patients with renal impairment, use of nephrotoxic agents before and after the lowest eGFR was assessed in the subgroup of patients that did not have improvement in renal function after the lowest eGFR during the follow-up period of up to 12 months.
Statistical methods
All analyses were descriptive. Kaplan-Meier methodology was used to assess the time to renal impairment and chronic kidney disease from the diagnosis date of multiple myeloma. As a sensitivity analysis, the Kaplan-Meier analysis was conducted excluding the patients with baseline renal impairment.
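The Kaplan-Meier estimate used for these time-to-event medians multiplies, at each observed event time, the fraction of at-risk patients who remain event-free, while censored patients simply leave the risk set. A bare-bones sketch (not the software the authors used):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve from right-censored data.

    times:  follow-up time for each patient (e.g. months).
    events: 1 if the event (e.g. renal impairment) occurred at that
            time, 0 if the patient was censored then.
    Returns [(time, S(time))] at each time with at least one event.
    """
    # tally events (d) and censorings (c) at each distinct time
    by_time = {}
    for t, e in zip(times, events):
        d, c = by_time.get(t, (0, 0))
        by_time[t] = (d + e, c + (1 - e))
    n_at_risk = len(times)
    surv, curve = 1.0, []
    for t in sorted(by_time):
        d, c = by_time[t]
        if d:
            surv *= 1.0 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= d + c
    return curve
```

The median time to event reported in the Results is the first time at which this survival estimate drops to 0.5 or below.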
Renal Impairment and Nephrotoxic Agents in Myeloma Y. Qian et al.
Patients
The OSCER database contained 12,472 patients with a diagnosis of multiple myeloma of which 12,370 met the eligibility criteria (Fig. 1). Of the eligible patients, 8767 had a serum creatinine value recorded after the diagnosis of multiple myeloma and 6813 had two serum creatinine values recorded after diagnosis that were at least 90 days apart. An additional 745 patients had an eGFR value recorded in the database, but were lacking a serum creatinine value; these patients were not included in the analyses because the method used to calculate eGFR was unclear. During the baseline period (the 6-month period before the diagnosis of multiple myeloma), slightly more than half (54%) of the population with at least one post-diagnosis serum creatinine value were men, 62% were Caucasian and 13% were African American, and the mean (SD) age was 69 (11) years (Table 1). Patients with unknown race/ethnicity (12%) were assumed to be non-African American for their individual eGFR calculation. This assumption was based on the observed overall racial distribution of the study population (13% African American and 75% non-African American). All geographic regions of the United States were represented in the sample, although patients were concentrated in the West North Central region (Iowa, Kansas, Minnesota, Missouri, North Dakota, Nebraska, South Dakota; 39% of patients), reflecting the distribution of OSCER sites across the nation. In the subset of patients with an available baseline serum creatinine value (N = 2835), the mean (SD) baseline eGFR was 65 (24) mL/min per 1.73 m2. The median (range) number of eGFR values calculated was 8 (1, 43) during the first 12 months of follow-up and 10 (1, 79) at any time during the follow-up period. The median (range) total follow-up time for the study was 14.3 (0.0, 43.0) months after the diagnosis of multiple myeloma.
Prevalence of renal impairment
A total of 8767 patients had at least one serum creatinine value recorded after the diagnosis of multiple myeloma and were included in the primary analysis of the prevalence of renal impairment. Of these, 5334 patients or 61% (95% CI: 60, 62) experienced renal impairment during their follow-up period ( Table 2). The median (95% CI) time from diagnosis of multiple myeloma to renal impairment was 6.4 (5.8, 7.0) months ( Fig. 2A). In patients with an available serum creatinine value and no renal impairment at baseline (N = 7592), the median (95% CI) time to renal impairment was 10.7 (9.8, 11.6) months (Fig. 2B).
Forty-seven percent (N = 4159 of 8767) of those with at least one serum creatinine measurement at any time during the study experienced renal impairment within 6 months of the multiple myeloma diagnosis and 54% (N = 4725) experienced renal impairment within 12 months of the diagnosis. For patients with at least one serum creatinine value recorded during the respective time period (i.e., those in whom renal impairment could have been detected), the prevalence of renal impairment was 53% (N = 4159 of 7915) within 6 months of diagnosis and 57% (N = 4725 of 8365) within 12 months of diagnosis.
Among all 12,370 patients who met the eligibility criteria (diagnosis of multiple myeloma, at least 18 years of age, and did not have end-stage renal disease), regardless of the availability of serum creatinine values, the prevalence of renal impairment was 43% (95% CI: 42, 44) detected a median 13.1 months (95% CI: 12.2, 14.2) after the multiple myeloma diagnosis ( Fig. 2A). Considering only those who did not have renal impairment at baseline (N = 11,121), the median (95% CI) time to renal impairment was 19.7 (18.3, 21.4) months (Fig. 2B).
Prevalence of chronic kidney disease
A total of 6813 patients had at least two serum creatinine values recorded at least 90 days apart after the diagnosis of multiple myeloma and could be evaluated for the primary analysis of the prevalence of chronic kidney disease. Fifty percent (N = 3399; 95% CI: 49, 51) of these patients experienced chronic kidney disease during their follow up ( Table 2). Twenty-seven percent (N = 1830) had detectable chronic kidney disease within 6 months and 39% (N = 2676) had detectable chronic kidney disease within 12 months after the diagnosis of multiple myeloma. For patients with at least two available serum creatinine values at least 90 days apart during the requisite period, the prevalence of chronic kidney disease was 37% within 6 months and 43% within 12 months after diagnosis of multiple myeloma. The prevalence of chronic kidney disease over the entire follow-up period (unique for each patient) in all eligible patients (N = 12,370) regardless of the availability of serum creatinine values was 28% (sensitivity analysis). Among the 5334 patients with renal impairment, 3399 (80%) had evidence of chronic kidney disease over the course of follow up.
The median (95% CI) estimated time to chronic kidney disease after the diagnosis of multiple myeloma was 15.8 (13.6, 18.5) months to the first of the two required eGFR values (Fig. 2C) and 18.0 (16.9, 19.2) months to the second (confirmatory) eGFR value (Fig. 2D). Note that the initial flat line of the first curve reflects the 90-day period in which no events can occur by definition.
Usage of nephrotoxic medications (doxorubicin, metoclopramide, cisplatin, epirubicin, intravenous bisphosphonates)
In the 5334 patients with renal impairment, the use of nephrotoxic agents, or specifically bisphosphonates, did not notably change from before to after the lowest eGFR value (median time from diagnosis of multiple myeloma, 8.3 months) (Table 3). However, in patients with severe renal impairment (eGFR <15 mL/min per 1.73 m2), reduced use of these agents was observed after the lowest eGFR (nephrotoxic agents from 27% before to 23% after; bisphosphonates from 23% before to 15% after). Decreases after the lowest eGFR were also observed in the group with eGFR 15-29 mL/min per 1.73 m2 (nephrotoxic agents from 38% to 34%; bisphosphonates from 35% to 30%) (Table 3). In the subgroup of patients with renal impairment who did not experience chronic kidney disease stage improvement after the lowest eGFR value (N = 1463), the use of any nephrotoxic agent prior to the lowest eGFR was reduced within the 12 months after the lowest eGFR (38% vs. 29%, respectively, for all renal function categories); however, in the lowest renal function categories, use of nephrotoxic agents was halved (41% vs. 20%, respectively, in 264 patients with eGFR 15-29 mL/min per 1.73 m2, and 18% vs. 9%, respectively, in 170 patients with eGFR <15 mL/min per 1.73 m2). Similarly, in this same subgroup, use of IV bisphosphonates before and after the lowest eGFR value was 38% and 29%, respectively, overall, while use was halved in the lower renal function categories (41% vs. 20% in those with eGFR 15-29 mL/min per 1.73 m2, and 17% vs. 9%, respectively, in those with eGFR <15 mL/min per 1.73 m2).
Among the 3399 patients with chronic kidney disease, 1537 (45%) received any nephrotoxic agent before the confirming eGFR, while 1567 (46%) received it within 12 months after the confirming eGFR (median follow-up time, 10.0 months). Similarly, 1466 (43%) patients received an IV bisphosphonate prior to their confirming eGFR value, while 1441 (42%) received them within the 12 months after the confirming eGFR. In contrast to the patients with renal impairment, the use of either any nephrotoxic agent or bisphosphonate remained similar before and after the confirming eGFR value in all eGFR categories (30-59, 15-29, <15 mL/min per 1.73 m 2 ) in those with chronic kidney disease.
Discussion
This retrospective study of 12,370 patients with multiple myeloma showed a high prevalence of both renal impairment and chronic kidney disease (61% and 50%, respectively) in patients treated in oncology clinics in the US between 2012 and 2015 (OSCER cancer database). Furthermore, the onset of both conditions was rapid, with a 6-month prevalence of 47% for renal impairment and 27% for chronic kidney disease after the multiple myeloma diagnosis. These results, which were observed in a recent population with newly diagnosed multiple myeloma using the current standard definitions for renal impairment and chronic kidney disease, are comparable to or higher than the prevalence reported in older datasets dating from the 1970s, 1980s, and 1990s [7][8][9][10].
(Table footnotes: 1 In the total patient sample (N = 8767). 2 Patients not required to have 12 complete months of follow-up. 3 Percentage among all patients with at least two eGFR values at least 90 days apart (N = 6813).)
We observed that onset of renal impairment occurred within a median 6 months following the diagnosis of multiple myeloma while the onset of chronic kidney disease occurred within a median 18 months, with 80% of those with renal impairment subsequently showing evidence of chronic kidney disease. In patients who did not have renal impairment at baseline, the onset of renal impairment was slightly slower (median 10.7 months). We therefore conclude that renal impairment is prevalent with a rapid onset in those with newly diagnosed multiple myeloma, and the majority of those with renal impairment progress to chronic kidney disease.
Despite the presence of renal impairment, a substantial proportion of patients nevertheless received nephrotoxic agents (predominantly bisphosphonates). Anticipation or realization of some measure of renal function improvement after myeloma treatment may explain the continued use of nephrotoxic agents after the lowest eGFR (i.e., the worst renal function experienced by the patients during their observation period); however, despite treatment, almost 20% of patients still do not show improvement in renal function [14]. This lack of renal recovery is believed to be multifactorial, including lack of response to treatment. We observed that only patients in the two most severe renal impairment categories (eGFR <15 mL/min and 15-29 mL/min per 1.73 m 2 ) had reduced use of nephrotoxic agents and bisphosphonates after the lowest eGFR. These results highlight the unmet need in patients with multiple myeloma, who, despite the presence of renal impairment, continued to receive bisphosphonate treatment to maintain bone health in the absence of a non-nephrotoxic treatment choice. In the subgroup of patients who did not have improvement in renal function after the lowest eGFR, use of these agents was mostly halved, indicating that the unmet need in this population is even greater, as they could not continue use of nephrotoxic anti-myeloma agents or intravenous bisphosphonates. Similarly in the subset of patients without chronic kidney disease stage improvement after the lowest eGFR, both nephrotoxic agent use and bisphosphonate use were halved after renal impairment in patients in the two lowest eGFR categories.
Kidney function is expected to improve with the novel multiple myeloma therapies; however, a recent study demonstrated that even when patients with newly diagnosed multiple myeloma experienced resolution of their renal impairment upon myeloma treatment, survival outcomes remained worse compared to the population without renal impairment [14]. Therefore, our results suggest that more attention should be placed on the avoidance of nephrotoxic drug use in this setting, including a consideration of alternatives to bisphosphonates.
The limitations of this study include those common in observational research and retrospective databases, such as potential misclassification in diagnosis codes and laboratory results. In our dataset, socio-demographics such as gender, race, and region were well populated, but comorbidities were not fully captured through ICD9 codes. The OSCER database records the experience of the patients treated primarily at oncology/hematology clinics, which may affect the generalizability of the results to patients treated in other settings. A minimum of one serum creatinine record after multiple myeloma diagnosis was required for inclusion in the primary analysis cohort, resulting in the exclusion of 3603 (29%) of eligible patients in the OSCER database. We are unable to determine whether these patients were not receiving renal assessment, or whether the renal assessment was occurring outside the oncology practice. Since patients with poor renal function may have received more frequent renal assessment, selection bias may have been introduced by the requirement for a serum creatinine value. To address this potential bias, we calculated renal impairment and chronic kidney disease prevalence including all 12,370 patients, and both remained high. The assumption that those with unknown race were non-African American for the eGFR calculation was based on the racial distribution of the overall sample; however, if this assumption was incorrect, results could have been biased toward higher prevalence of renal impairment, since the eGFR of African Americans is lower than non-African Americans. We excluded patients with evidence of end-stage renal disease because these patients represent a unique population that is likely to receive different treatment at baseline. 
Although patients with end-stage renal disease were previously reported to represent 9% of the multiple myeloma population [8], in the newly diagnosed myeloma population in our study, only 96 patients (0.8%) were excluded due to end-stage renal disease; therefore, the impact on the calculation of the prevalence of renal impairment in our study would have been negligible. The time-to-event analyses are subject to detection bias due to the lack of patient history prior to the diagnosis of multiple myeloma; thus, patients may have fulfilled the definitions of renal impairment and/or chronic kidney disease earlier than indicated by their post-diagnosis records. A final limitation of the study was the difficulty in assessing use of non-steroidal anti-inflammatory drugs, an additional type of nephrotoxic agent used in the myeloma population. Due to frequent over-the-counter use, these agents are under-reported in the oncology practice electronic medical records and cannot be accounted for reliably.
Future studies, such as determination of the economic burden of renal impairment in multiple myeloma and the impact of renal impairment on anti-myeloma treatment patterns, could contribute to a better understanding of the burden of renal impairment in the multiple myeloma population.
Conclusion
The prevalence of both renal impairment and chronic kidney disease was high in patients with multiple myeloma, affecting 61% and 50% of patients, respectively, in the OSCER cancer database in the US for the period 2012 to 2015. The onset of renal impairment was rapid after the multiple myeloma diagnosis; nevertheless, approximately 40% of these patients received concomitant nephrotoxic agents, most of which were intravenous bisphosphonates. As renal impairment is associated with reduced survival and may affect clinical management, preservation of renal function is critical, and non-nephrotoxic alternatives are warranted where possible in managing the multiple myeloma population.
Small intestinal neuroendocrine tumours
Small intestinal neuroendocrine tumours (SINETs) are malignant neoplasms which at the time of diagnosis often present with distant metastasis. The field of SINET research faces several challenges. There is a lack of preclinical models for studying SINETs, and it is unclear how well currently available models actually recapitulate the tumour disease. The genetic changes that underlie SINET tumour development are largely unknown and, lastly, curative therapy is rarely achieved. Novel therapies, such as the recently FDA-approved 177Lu-octreotate therapy and up-and-coming immunotherapies, need to be further investigated to deliver better response rates for SINET patients. In our first two papers (papers I and II), we sought to evaluate frequently used and readily available gastroenteropancreatic neuroendocrine tumour (GEPNET) cell lines as models of neuroendocrine tumour disease. We investigated the characteristics of these cell lines in terms of their neuroendocrine phenotype, genomic background, and therapeutic sensitivity. While several cell lines exhibited an expected neuroendocrine differentiation and harboured genetic alterations characteristic of the GEPNET disease, three cell lines did not. In fact, it turned out that one of the most frequently used cell lines in the field, KRJ-I, together with the cell lines L-STS and H-STS, was incorrectly identified, and that these were instead lymphoblastoid cell lines (EBV-immortalised B-lymphocytes). This might have led to the incorrect use and potentially faulty conclusions in a number of GEPNET studies. Among authentic cell lines, we performed a large-scale inhibitor sensitivity screening and predicted that SINETs would be more sensitive to HDACi compared to pancreatic neuroendocrine tumours (PanNET) and PanNET more sensitive to MEKi compared to SINET. The prediction was supported by subsequent experiments with primary tumour cells.
In our third paper (paper III), we evaluated a mechanism by which hemizygous loss of SMAD4 could lead to SINET initiation and/or progression by acting as a haploinsufficient tumour suppressor. We found that loss of SMAD4 was associated with a decrease in corresponding mRNA and protein, and that this correlated to patient survival. We also found that the amount of SMAD4 protein in the primary tumour could predict whether the patient presented with distant metastasis. In our last papers (papers IV and V), we investigated the potential for two novel treatment strategies for SINETs. In paper IV we identified an inhibitor, the heat shock protein 90 inhibitor ganetespib, that could synergistically enhance the 177Lu-octreotate therapy for SINETs. Ganetespib was initially found to sensitise SINETs to radiation in a large-scale inhibitor synergy screening, and its radiosensitising effect for radionuclide treatment of SINETs was validated both in mouse xenografts and in primary patient tumours. Lastly, in paper V we characterised the SINET immune microenvironment. Using immunohistochemistry and flow cytometry we detailed the immune cell composition of the SINET immune microenvironment and could demonstrate the successful isolation and expansion of tumour-infiltrating lymphocytes. We saw that after infiltrating lymphocytes were expanded they could degranulate when challenged with autologous tumour cells. In conclusion, these studies have provided a thorough characterisation of authentic, and provided important information regarding misidentified, frequently used gastroenteropancreatic cell lines. They have also investigated the role of hemizygous SMAD4 loss in the development of SINETs and demonstrated the potential of two novel therapies for SINETs: 177Lu-octreotate combined with the Hsp90 inhibitor ganetespib, and immunotherapy.
INTRODUCTION
We are all a matter of cells. From the simplest nematode to the human being, cells make up the living material, tied together in the utmost complex networks. Key is communication. In early embryonic development and in the fully developed human alike, the exchange of precise and accurate information is a necessity to ensure that all the processes of the body act in concert. And every bit as important as the interplay between cells is the communication taking place within the cells. Cancer develops only when these fine-tuned and tightly regulated intra- and inter-cellular signalling pathways are disrupted, and once this happens, tragedy often follows. Close to 10 million people were estimated to die globally from the disease in 2018 (1), but there is hope.
Over the past decades, advancements in the field of cancer research have led to significant improvements in patient survival after a cancer diagnosis. New therapies are continuously emerging, and more and more patients are cured. Successful therapies have in common that they kill tumour cells while sparing untransformed cells from harm. One way to discover such therapies is through the use of preclinical experimental models of cancer. These models are crucial for the continued development of cancer therapies, and it is thus vital that these models mirror the biological aspects being investigated as accurately as possible. This is not always the case, and unless we have a clear understanding of how the models recapitulate different biological aspects of the disease, they can be a hindrance to the field and to the development of novel therapies.
Another attractive approach to discover novel therapies is through an increased understanding of the underlying mechanisms of tumour development. There are several examples of therapies that have been developed specifically against genetic changes with fundamental functions in tumour development, such as fusion proteins (e.g. imatinib for BCR-ABL), gene amplification (e.g. trastuzumab for HER2+ breast cancer) and activated proteins/pathways (e.g. vemurafenib/trametinib for BRAF-mutated melanoma).
Alternatively, already available therapies can be improved. Research on 177Lu-octreotate therapy for SINETs has led to the therapy now being approved in the U.S. and E.U. for the treatment of somatostatin receptor type 2-positive gastroenteropancreatic tumours, but still with low curative rates. One attractive way of improving such a therapy is through combination with another therapy, preferably one with synergistic interaction.
Lastly, we can also look beyond the tumour and shift our focus to its surroundings. In the tumour microenvironment we find a wide diversity of cells, including immune cells. These immune cells would normally attack anything foreign to the body, including malignant tumour cells. In fact, it is believed that all cancers in one way or another need to develop mechanisms to actively avoid detection by immune cells. The recent success of immune therapies has put emphasis on the very promising task of reactivating the immune system to target cancer.
In this thesis we have addressed all of these aspects within the scope of small intestinal neuroendocrine tumours (SINETs). We have looked at which models are available and how well they recapitulate various aspects of the tumour disease, at the molecular mechanisms underlying SINET development, at how to improve 177Lu-octreotate therapy, and finally, at the potential of immune therapy for these tumours.
Small intestinal neuroendocrine tumours

Tumours arising from the neuroendocrine cells of the body are collectively termed neuroendocrine tumours (NETs). Small intestinal NETs (SINETs) are believed to arise from the serotonin-secreting enterochromaffin cells of the small intestinal mucosa.
The neuroendocrine system
The neuroendocrine system consists of cells that share characteristics of both the nervous and endocrine systems. Neuroendocrine cells typically receive signalling input in the form of neurotransmitters from nerve cells or neurosecretory cells, which is termed neuroendocrine integration. This serves to regulate synthesis, storage and ultimately secretion of hormones and peptides. These neuroendocrine cells are often located in glands and exist throughout the body, including the brain (hypothalamus, pituitary gland, pineal gland), kidneys (adrenal glands), ovaries, pancreas, testes, thyroid (thyroid, parathyroid), and the gastrointestinal tract. Effects of hormones and peptides span a wide range of physiological mechanisms, such as the stimulation or inhibition of cell growth, activation or inhibition of immune response, and regulation of the metabolism.
In the gastrointestinal tract, endocrine cells, termed enteroendocrine cells, are not gathered in a gland but are rather scattered throughout the mucosa, and are as such an example of a diffuse endocrine system with anatomical connections to neurons (2). In fact, it has been argued that the gut is the largest endocrine organ in the body in terms of the amount of hormone-producing cells (3,4). The whole intestinal mucosa can even be regarded as a large sensory organ with complex interactions between neurons, endocrine cells, and the immune system, leading to stimulus-adequate responses such as the modulation of motility, perfusion, and tissue defence (5).
Hormones in the gastrointestinal tract are secreted by many different types of enteroendocrine cells (6). Traditionally, they are classified according to what hormone they secrete (7) and while some hormones are produced in the entire intestine -such as serotonin -others are produced at a particular location.
Although constituting less than 1% of the total intestinal epithelium, the most abundant enteroendocrine cell is the enterochromaffin (EC) cell, a cell type that was first proposed to have endocrine capability by Feyrter in 1938 (8).
The EC cell can detect irritants, metabolites, and catecholamines (9). Just like other primary sensory cells, EC cells are electrically excitable and express functional voltage-gated sodium and calcium channels (9). Their activation leads to serotonin release, which is the source of >90% of all serotonin produced in the human body (9).
Epidemiology
One of the larger studies, from the United States Surveillance, Epidemiology, and End Results (SEER) database, reports an age-adjusted incidence for SINETs of 0.86/100,000 for patients during the years 2000-2004 (10). Reported data from other countries contain similar numbers with slight variations, e.g. Sweden (1.33/100,000), Norway (1.01/100,000), the Netherlands (0.47/100,000), Japan (0.33/100,000) and England (0.78/100,000) (11)(12)(13)(14)(15). Common to many studies is that they report an increasing incidence over time (10,11,14,16,17). This reported increase is slightly higher in the United States compared to other countries, but whether this is a true difference is unknown. It has been suggested that the overall observed increase is mainly due to improved detection methods (18), better knowledge about the molecular and cell biological aspects, and clearer histopathological characterisation (19). Far from all tumours are ever diagnosed, as suggested by a post-mortem study which observed SINETs in as many as 0.93/100 patients (20). Some studies show a slight male preponderance in reported numbers (15,21,22).
Clinical presentation
As patients commonly suffer from nonspecific abdominal pain, most SINETs are discovered during surgery for such conditions. Alternatively, in cases with distant disease where the tumour produces hormones that can escape hepatic inactivation (23), SINETs can be suspected on the basis of symptoms of the carcinoid syndrome (24). This syndrome is caused by hormones such as serotonin and tachykinins and can lead to, among other things, diarrhoea (73%), flushing (65%), carcinoid heart disease (21%), and asthma-like episodes (8%) (25). Incidental discoveries, such as during a CT scan performed in another clinical context, are rare (19).
Nonspecific abdominal pain can be due to various causes, including dysmotility, obstruction, intermittent mesenteric ischemia, and secretory diarrhoea (19). Other less specific symptoms include nausea, vomiting, jaundice and even gastrointestinal bleeding (19). The gold standard for confirming an SINET diagnosis is histopathological analysis (26). Tissues are fixed in formalin and embedded in paraffin, and analyses typically include conventional morphological analysis, immunohistochemistry to confirm the neuroendocrine phenotype, and evaluation of the Ki67 index. The morphology is examined on haematoxylin & eosin-stained sections and the neuroendocrine phenotype is confirmed by staining for a number of markers, including cytokeratins, synaptophysin (a marker of small synaptic-like vesicles (27)), chromogranin A (large dense-core vesicles (28)), and serotonin.
At the time of diagnosis, SINETs have often metastasised, frequently displaying regional disease or distant metastasis; in the latest SEER data set, the numbers are 41% and 30% respectively (10). The most frequent site for distant metastasis is the liver (89%), followed by the mesentery (19%) and bone (11%) (29). Interestingly, about a quarter of all patients present with multiple synchronous primary tumours (30) (Figure 1). It has been speculated that this is connected to familial cases of SINET (31).
Classification, staging and grading
In 1980, the first WHO classification of GEPNETs used the term 'carcinoid' to describe most gastrointestinal NETs, with the exception of pancreatic islet cell tumours and small cell carcinoma. The classification has since been revised, and in the latest revision tumours are classified as either well-differentiated NETs (grade 1 and 2) or poorly differentiated neuroendocrine carcinomas (NECs; grade 3) (32). Neuroendocrine carcinomas and neuroendocrine tumours differ in several aspects. In terms of genomic background, grade 3 carcinomas frequently harbour TP53 and RB mutations, which are very rarely found in grade 1 and 2 tumours (33). TP53 mutations have been shown to alter tumour cell biology and lead to a worse prognosis for patients with neuroendocrine tumours (34). Although the WHO classification guidelines were updated in 2017 for pancreatic neuroendocrine tumours (PanNETs) to distinguish grade 3 PanNETs from grade 3 pancreatic NECs, this separation is not yet applied to SINETs and small intestinal NECs.
Tumour grading is based on Ki67 index and mitotic count. Grade 1 tumours are defined as having <2 mitoses per 10 high-power fields (HPF) and/or a Ki67 index of ≤2%. Grade 2 tumours are defined as having a mitotic count of 2-20 per 10 HPF and/or a Ki67 index of 3-20%. Finally, grade 3 tumours have a mitotic count of >20 per 10 HPF and/or a Ki67 index of >20%. The TNM (tumour-node-metastasis) system is used to specify disease stage (35). Disease stages I, IIA, IIB, and IIIA correspond to localised disease with variations in tumour invasion (T1-T4). Stage IIIB describes any tumour with regional lymph node metastasis (N1; regional disease) and stage IV is used to describe tumours with any distant metastasis (M1; metastatic disease).
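The grading rules above can be expressed as a small function. The 'and/or' wording is here interpreted in the standard way: when the Ki67 index and the mitotic count indicate different grades, the higher grade is assigned. This is an illustrative sketch of the cut-offs, not clinical software.

```python
def who_grade(ki67_percent: float, mitoses_per_10_hpf: float) -> int:
    """Illustrative WHO grade from Ki67 index (%) and mitotic count (per 10 HPF).

    When the two markers disagree, the higher grade applies. Not for clinical use.
    """
    if ki67_percent > 20:
        ki67_grade = 3
    elif ki67_percent >= 3:
        ki67_grade = 2
    else:
        ki67_grade = 1

    if mitoses_per_10_hpf > 20:
        mitotic_grade = 3
    elif mitoses_per_10_hpf >= 2:
        mitotic_grade = 2
    else:
        mitotic_grade = 1

    return max(ki67_grade, mitotic_grade)


# Ki67 1% with 1 mitosis -> grade 1; Ki67 15% -> grade 2; 25 mitoses -> grade 3
print(who_grade(1, 1), who_grade(15, 1), who_grade(1, 25))
```

Note that a tumour with a low mitotic count but a high Ki67 index is still graded high, which reflects why Ki67 dominates in practice.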
Survival and prognosis
Compared to other cancers that commonly arise in the small intestine, e.g. lymphomas, adenocarcinomas, and sarcomas, SINETs have a better survival (22). The 5-year overall survival in the United States SEER database is 68.1% (36). The disease-specific survival, which is naturally higher, has also been investigated in smaller cohorts. Two European (German and Swedish) studies have found the 5-year and 10-year disease-specific survival to be 88.9%/69.2% and 75.0%/63.4% respectively (37,38).
SINET prognostication is usually based on grading and staging, as described in the WHO classification in the previous section. Ki67 is more accurate than mitotic count (39) and correlates with patient survival and progression-free survival (29,40). Studies using the current Ki67 cut-offs have observed a statistical difference in 5-year survival between grade 1/2 and grade 3 tumours, and between disease stages I-III (localised and regional disease) and disease stage IV (metastatic disease) (37,41). A correlation between ethnicity and prognosis has not been shown (10).
The commonly used clinical diagnostic biomarkers 5-HIAA and chromogranin A have not convincingly shown reliable prognostic potential. There are, however, other emerging biomarkers that have shown such potential, but these need to be validated in prospective trials. Emerging biomarkers with prognostic potential include serum NSE, pancreastatin, DcR3, TFF3, neurokinin A, neuroendocrine-associated transcripts in serum, and circulating tumour cells (42)(43)(44).
Experimental models of SINET disease
Preclinical cancer research utilises a wide range of experimental models to study cancer disease. Models differ in properties that govern how well they reflect various aspects of the tumour disease, and thus in their applicability to different research questions. These models have helped researchers make ground-breaking discoveries leading to new innovative medicines, but they are also problematic, as evidenced by how many pharmaceuticals discovered in preclinical models ultimately fail in clinical trials due to factors such as lack of treatment response or adverse effects (45). It is therefore of great importance to understand and validate the models being used (46). Below we examine some of these models, which based on experimental setting can be divided into three broad categories: in vitro models, ex vivo models, and in vivo models (Figure 2).
In vitro models
In vitro (Latin, approx.: 'in glass') models in cancer research usually refer to the use of cell lines. Patient tumour-derived cell lines have been widely used as models of tumour disease in cancer research, for studying the molecular mechanisms of tumours and their response to therapy. However, cell lines do not perfectly recapitulate the tumour disease, and in terms of genomic alterations, protein expression, and therapeutic sensitivity they can differ substantially (47-51).
It has turned out that GEPNET cell lines are very hard to establish. This has been attributed to their low proliferative rate and to the limited amount of donor tissue available (52). Throughout the years, only a few cell lines have been established from human SINETs (Table 1). Unfortunately, the authenticity of several of these cell lines has since been questioned.
Although results are still occasionally published using the CNDT2 cell line, its authenticity has been challenged by several researchers (53,54). In response to the criticism, short tandem repeat (STR) analysis was performed to match the cell line with the NET that was thought to be its source, but the STR profiles did not match (53). In papers I and II we also show that the cell lines KRJ-I, L-STS, and H-STS do not consist of SINET cells, but rather of Epstein-Barr virus (EBV)-immortalised B-lymphocytes, and are thus so-called lymphoblastoid cell lines (55). We based this on the lack of a neuroendocrine phenotype, high expression of B cell markers, and the presence of EBV. In paper II we also show that the KRJ-I cell line, based on RNA-sequencing data, most closely resembles diffuse large B-cell lymphoma. KRJ-I, established from a hepatic SINET metastasis (56), is one of the most frequently published SINET cell lines. L-STS and H-STS were established together with P-STS from the same SINET patient. P-STS was established from the primary tumour, L-STS from a lymph node metastasis, and H-STS from a hepatic metastasis (57).
Only two authentic non-transfected SINET cell lines remain: GOT1 and P-STS. GOT1, first published in 2001 (58), has because of its high expression of somatostatin receptor subtype 2 (SSTR2) mainly been used as a model for peptide receptor radionuclide therapy (59)(60)(61)(62)(63). P-STS, contrary to L-STS and H-STS, displays both epithelial and neuroendocrine differentiation and is therefore presumed to be authentic. It is however worth noting that it was established from the terminal ileum of a grade 3 tumour, making it essentially not a model of SINET disease but rather of small intestinal neuroendocrine carcinoma (64). A molecular characterisation of the P-STS cell line has been published, and the cell line has been used to study hormone secretion (65,66). The two most frequently published pancreatic NET (PanNET) cell lines are QGP-1 and BON1. QGP-1 was established from a human pancreatic somatostatin-producing islet cell carcinoma (69,70) and BON1 from the lymph node metastasis of a PanNET patient (71). The QGP-1 and BON1 cell lines have previously been characterised in terms of exome sequencing and copy-number alterations (72,73). In addition to these cell lines, there are two other human tumour-derived PanNET cell lines, the CM cell line (74) and the more recently established NT-3 cell line (75), both from insulin-secreting tumours. The CM cell line has however been criticised for seemingly lacking insulin secretion (76).
There also exist multiple PanNET cell lines established from mouse and rat, most of which came about before the publication of human tumour-derived cell lines. They not only derive from another species, but were also established in ways that do not necessarily represent naturally occurring tumorigenesis. The following cell lines were derived from transgenic SV40 T antigen-expressing mice: MIN6, βTC, NIT-1 (insulinomas; insulin promoter-driven) (77-79), TGP61 (PanNET; elastase promoter-driven) (80), and Alpha TC (glucagonoma; preproglucagon promoter-driven) (81). The RIN and INS-1 insulinoma cell lines were derived from x-ray-irradiated NEDH rats (82,83). Mu Islet E6/E7 (mouse) and HIT (Syrian hamster) were established from transduced pancreatic islet cells (84).
Ex vivo models
Ex vivo (Latin, approx.: 'outside the organism') models are, due to their limited availability, not as frequently used in cancer research as immortalised cell lines, but have the large benefit of not having been in culture for a long time. This means they have not, to nearly the same extent, gone through the selection and adaptation to cell culture conditions, which in many aspects do not reflect growth conditions in the human body. Two commonly studied ex vivo model types are primary cell cultures and organoids.
Primary cell culture is the initial cultivation of cells derived from a tissue. Typically, a primary culture is established by obtaining a tissue biopsy and producing single-cell suspensions by various dissociation techniques. In cancer research, primary cultures have been used to study many aspects of tumour biology, such as therapeutic sensitivity and imaging (85). SINET primary cell cultures have been used to evaluate the therapeutic sensitivity of patient tumour cells to various pharmaceuticals and to study the SINET hypoxic response (86,87).
Recently, the practice of 3D culturing has led to the development of a new ex vivo model. Taking tissue cells, embryonic stem cells, or induced pluripotent stem cells and growing them in a 3D matrix under the right stimulatory conditions can lead to self-organising organotypic structures called organoids. In this manner, for example, LGR5+ intestinal stem cells can grow into highly polarised epithelial structures with both proliferative crypts and differentiated villus compartments (88). Organoids have rarely, if ever, been used in SINET research. However, Bellono et al. recently studied the biology of untransformed EC cells in cultured intestinal organoids, showing the potential of this research model for studying SINET development (9).
In vivo models
In vivo (Latin, approx.: 'inside the organism') models have contributed greatly to science. Organisms such as the Drosophila fly or the house mouse, Mus musculus, have allowed researchers to conduct research not otherwise feasible. The model used should be carefully evaluated with respect to the research question at hand and to avoid any unnecessary suffering. For SINETs, the model of choice (with some exceptions mentioned below) has been Mus musculus. This animal model has several benefits, including the relative ease of housing, the possibility of standardisation by inbreeding, and a genome that closely resembles that of the human; in fact, more than 99% of mouse genes have human homologues (89).
While the mouse, as mentioned, has been the most commonly used study model for NETs, certain rodents which more or less spontaneously develop NETs, like Praomys (Mastomys) natalensis, have also been used. These do not, however, mirror SINET or PanNET disease well, but rather gastric NET disease (90). Additionally, serotonin release has been studied in a model where SINETs were transplanted into the anterior eye chamber of cyclosporine-treated rats (91,92). Genetically engineered mouse models (GEMs) are another alternative, used widely in cancer research (99). These could provide important information about aspects of tumour development. However, no SINET GEMs have been reported, likely at least partly due to the lack of identified driver mutations of SINET disease.
Cancer genetics
The human genome consists of roughly three billion nucleotide pairs, together making up the nuclear DNA. The nucleotides contain the bases guanine, cytosine, thymine, and adenine, commonly represented by the letters 'G', 'C', 'T', and 'A'. To give a hint of how extensive the code for DNA is: this thesis, from front to back page, is roughly 300,000 letters long. If one were to print the code for DNA it would require about 10,000 of these books, producing a 100 metre tall pile. This vast genetic material is most commonly distributed onto twenty-two pairs of homologous chromosomes and two sex chromosomes, in total dividing the human genome onto forty-six chromosomal units. DNA both governs the sequence of transcribed RNA through templates called genes and provides the platform for regulating when and how much RNA should be transcribed from each gene. The majority of the produced RNA is then translated into functioning proteins, which execute most biological processes in the cell.
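The book analogy above can be checked with a quick back-of-the-envelope calculation; the book thickness of roughly 1 cm is an assumption introduced here to make the pile height come out, it is not stated in the text.

```python
# Back-of-the-envelope check of the printed-genome analogy.
# Figures: ~3e9 letters in the genome, ~300,000 letters per thesis,
# and an assumed ~1 cm thickness per printed book.
genome_letters = 3_000_000_000
letters_per_book = 300_000
book_thickness_m = 0.01  # assumed, not from the text

books_needed = genome_letters // letters_per_book
pile_height_m = books_needed * book_thickness_m

print(books_needed)    # 10000 books
print(pile_height_m)   # ~100 metres
```

So the 10,000-book, 100 metre figure is internally consistent given a 1 cm book.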
In the untransformed cell, the set of proteins that should be present under given conditions (homeostasis) is tightly controlled. It is when alterations occur in the DNA that this fine-tuned regulation, and/or the function of the proteins themselves, is disturbed. Damage to the DNA is commonly caused by chemical agents or radiation. These genotoxic agents can derive from external exposures or internal biological processes. However, not all damage or errors in the DNA lead to harm. In fact, when alterations to the DNA occur, be it through a genotoxic agent or by a naturally occurring mistake, they are commonly repaired by the cell's native DNA repair mechanisms. Furthermore, even if the repair for any reason fails, most mutations have no effect on the cell's phenotype, so-called passenger mutations. It is only when the alteration leads to a change in the coding sequence resulting in an amino-acid change, a so-called nonsynonymous mutation, that a phenotypic effect occurs.
Genetic aberrations in small intestinal neuroendocrine tumours
Genetic aberrations can be divided into the following types, based on the nature of the genetic consequence: point mutations and indels, copy-number alterations, and gene fusions. For SINETs, characterisation of substitutions and indels, and to some degree gene fusions, has mainly been addressed in two publications (100,101), and copy-number alterations in a larger number of studies.
Commonly, genomic sequencing studies aim at identifying cancer drivers, alterations that lead to the initiation or progression of cancer. These can be identified simply by frequent recurrence, which indicates disease-specific influence, but should also subsequently be validated in cancer models. Copy-number alterations in SINETs have mainly been characterised using array-based techniques, but analyses using microsatellite markers and whole-exome sequencing also occur (100,101,103-112). The most common somatic copy-number variation (SCNV) is loss of one copy of chromosome 18, which occurs in more than 60% of all tumours. In some tumours it is also the only SCNV reported. Other commonly reported losses, albeit at substantially lower frequencies, include 3p, 9p, 11q, and 16q. Gains are usually of whole chromosomes, including chromosomes 4, 5, 7, 10, 14, and 20 (Figure 3).
Haploinsufficiency
Most humans have twenty-two pairs of homologous chromosomes and two sex chromosomes, altogether making up forty-six chromosomes. Since we have homologous chromosomal pairs, the vast majority of all genes are represented by two homologous copies, one on each chromosome. In 1971, Alfred G. Knudson Jr presented data showing that the gene mutation causing retinoblastoma (a gene defined in 1986 and now known as RB (113)) needed two mutations, one in each allele of the gene, to give rise to the disease. This has been termed the 'Knudson hypothesis', or the 'two-hit hypothesis' (114), and it is today believed that most tumour suppressors are indeed inherited in a recessive manner and in essence follow the two-hit hypothesis. However, many examples of genes that deviate from this hypothesis have been discovered, with prominent examples being e.g. PTEN (115) and TP53 (116). A loss-of-function in just one of the alleles of these genes is sufficient to cause a change in the tumour cell's phenotype and can lead to disease initiation or progression. There are two main mechanisms behind this: either the mutated protein interacts with the wild-type protein and inhibits its function, a so-called dominant-negative mutation, or the gene product produced from the one remaining functional allele is insufficient to maintain cell homeostasis, which is termed haploinsufficiency. The concept that the number of gene copies can affect the cell phenotype is called gene dosage. In fact, the opposite is also true: an addition of genes, such as in the amplification of the oncogenes MYCN (117) and EGFR (118), or in the gain of whole chromosomes, as in germline trisomy 21 causing Down syndrome, can cause robust phenotypic changes. In the case of Down syndrome, the phenotype is complicated by the vast number of genes affected by an increased gene dosage.
There are however other congenital disorders at the other end of the spectrum, caused by smaller chromosomal losses or by loss or loss-of-function of a single gene, that are slightly less complex to decipher. Dozens of human developmental syndromes are caused by hemizygous chromosomal loss (119). Although their effect is arguably less studied than that of other alterations, the concept of gene dosage can be very important in cancers, which often harbour multiple gains and losses of large chunks of DNA.
The heat-shock response

A normal cell is often subjected to stress. Be it from reactive agents, pH, temperature, or radiation, stress poses a threat to cell homeostasis, and all of the above-mentioned factors can either directly or indirectly cause considerable harm. It was when, according to Ferruccio Ritossa, a colleague of his had turned up the heat of the incubator containing his Drosophila melanogaster flies that he noticed chromosomal puffs indicative of localised and extensive gene transcription (120,121). This was the first reported observation of what came to be termed the heat-shock response. It is now known that key to this response is the upregulation of heat-shock proteins, notably Hsp90, and that it protects against many types of stress in addition to heat.
While bacteria only have one Hsp90 gene, which encodes a cytosolic protein, budding yeast and humans have two: HSP90α and HSP90β (122). Throughout this book, unless otherwise stated, we use 'Hsp90' to refer to the proteins from both these paralogues. They differ in that Hsp90β is constitutively expressed in the cell while Hsp90α is induced by stress (123,124). In fact, in non-stressed cells Hsp90 comprises as much as 1-2% of the total cellular protein content, and when subjected to stress, Hsp90 can increase more than two-fold. In addition to the two mentioned genes, humans have genes encoding Hsp90 homologues expressed in the mitochondria (125) and the endoplasmic reticulum (126).
Being a chaperone protein, Hsp90 functions by assisting newly translated proteins to fold correctly during polypeptide-chain synthesis, translocating proteins across membranes, exerting protein quality control in the endoplasmic reticulum, and assisting proteasome-mediated degradation (127). Failure of these functions can lead to protein misfolding and aggregation. Unlike many other chaperones, Hsp90 is however not required for the biogenesis of most proteins, but is instead important for governing the conformation of key signalling transducers. Chaperones generally do not covalently modify their substrates; they rather interact with them in an ATP-dependent cyclical fashion (128). This is also true for the heat-shock response (Figure 4).

The cancer cell is under significant stress, and this in turn makes coping with aberrant protein interactions and misfolding yet more challenging (129). Thus, it is perhaps not surprising to find the expression of heat-shock proteins upregulated in several types of human cancers, both solid and haematological (130)(131)(132)(133). Hsp90 clients are involved in many types of cell signalling associated with the promotion of cancer, including proliferation (134)(135)(136)(137), immortalisation (138), impaired apoptosis (139), angiogenesis (140), and invasion/metastasis (141). Hsp90 can as such function both as a potentiator, by assisting oncoproteins, and as a capacitator, by allowing tumours to tolerate external and internal stress (142). The second-generation Hsp90 inhibitor ganetespib has been evaluated in clinical trials (144)(145)(146)(147)(148). These trials have also demonstrated that ganetespib, in contrast to first-generation Hsp90 inhibitors, has improved solubility and reduced risk of cardiac, ocular, and liver toxicities.
TGFβ signalling

Transforming growth factor β (TGFβ) is a regulatory cytokine involved in a multitude of biological processes (149). TGFβ signalling is also well known to play dual roles in cancer progression (149). While its tumour-suppressing effect is a hurdle transforming cells must bypass, it also promotes cell invasion, immune regulation, and microenvironment modulation that cancer cells can benefit from. Cancer has been shown to circumvent the inhibiting effects of TGFβ signalling in several ways. Biallelic inactivation of TGFBRII is recurrently found in colon, gastric, biliary, pulmonary, ovarian, oesophageal, and head and neck carcinomas (150). TGFBRI mutations are less prevalent but exist in a minority of patients in several cancer types. R-Smads are also found inactivated in cancer, but to a much lesser degree; for example, recurrent SMAD2 mutations have been found in colorectal cancers (151). The gene for SMAD4, on which canonical TGFβ signalling converges (Figure 5), is the most frequently mutated in cancer, with particularly high frequencies in pancreatic carcinoma and colorectal cancers with microsatellite instability.
Interestingly, SMAD4 seems to play an important part in the GI tract in relation to cancer. Among the five tumour types in The Cancer Genome Atlas (TCGA) with the highest frequency of SMAD4 mutations, all but one are adenocarcinomas of the gastrointestinal (GI) tract: pancreas (23%), rectum (20%), colon (14%), and stomach (9%). In addition, SMAD4 has been suggested to have a critical role in the tumourigenesis of small intestinal adenocarcinomas (152). A published analysis of TCGA shows that hotspot mutations in TGFβ pathway members are highly overrepresented in GI cancers (153). Heterozygous inactivation of the SMAD4 gene in humans frequently leads to the familial juvenile polyposis syndrome (JPS) (154).
Treatment of small intestinal neuroendocrine tumours
There is a general lack of efficient and curative therapies for SINETs. The palliative and somewhat tumour growth-inhibiting somatostatin analogues are standard care for most patients. For localised disease, surgery is a viable option, but for disseminated disease there is currently no curative treatment available. Below follows a brief review of common treatment options for SINETs, including the newly recommended 177Lu-octreotate therapy (158).
Current treatment options
Traditionally, radical surgical resection has been the only hope for curing SINETs. Primary SINETs are usually relatively small and easily removed, but also very frequently present together with lymph node metastasis (86% in the SEER database (159)). In about 5% of patients, miliary seeding in the intra-abdominal cavity is also observed (160). Distant metastases also occur commonly, posing a much larger challenge for surgery. Localised and regional tumours are often removed by surgical resection. There is an absence of internationally standardised surgical procedures, but when performing surgery of lymph node metastasis it is recommended to remove at least 8 nodes (158). In cases with growth of the primary tumour and involvement of mesenteric disease, often together with fibrosis, complete resection can be more challenging, but can still be achieved in up to 80% of cases (161)(162)(163). As previously mentioned, distant metastases are frequent and by far most commonly found in the liver. The distribution of neuroendocrine liver metastasis can be classified into three types: type 1 (single metastasis of any size), type 2 (isolated bulk with smaller deposits), and type 3 (disseminated metastatic spread) (164). While radical surgery for type 1 liver metastasis seems to be associated with improved outcome, radical surgery for type 2 and type 3 is more controversial. In addition, surgery to remove hepatic metastasis is in general not performed on poorly differentiated (G3) tumours, which are associated with a much greater risk of metastasis (165).
Somatostatin analogues, such as octreotide and lanreotide, are used to treat symptoms related to hormone hypersecretion. Somatostatin analogues not only inhibit hormone release, however, but can also lead to increased time to tumour progression (166,167). For somatostatin receptor-negative or refractory tumours in particular, IFN-α2b, which has shown improved progression-free survival for SINETs (168), can be administered (169).
Everolimus and sunitinib are two targeted therapies approved for the treatment of advanced neuroendocrine tumours. Everolimus, an inhibitor of the mTOR pathway, which controls functions such as cellular proliferation, metabolism, protein synthesis, and autophagy, has shown significantly improved progression-free survival for advanced progressive gastrointestinal neuroendocrine tumours (170), despite an overall lack of activating mutations in the mTOR pathway in SINETs (100,101). Sunitinib malate is instead an inhibitor of tyrosine kinases, including vascular endothelial growth factor receptors (VEGFR), platelet-derived growth factor receptors (PDGFR), CD117 (KIT), and RET; although it improves progression-free survival for patients with pancreatic neuroendocrine tumours (171), its efficacy has yet to be demonstrated for SINETs.
Systemic chemotherapy is recommended by the European Neuroendocrine Tumor Society (ENETS) treatment guidelines only for grade 3 NETs (or advanced PNETs) (172). For high-grade NETs, platinum-based chemotherapy is recommended, such as the combination of cisplatin and etoposide.
177 Lu-octreotate therapy
Peptide receptor radionuclide therapy (PRRT) is a treatment modality that uses a therapeutic radionuclide conjugated to a targeting vector. PRRT can be used both as a potentially curative therapy and for palliation. It can thus be viewed as a way to combine radiation therapy with systemic administration and tumour selectivity. Both the properties of the radionuclide, which can emit different types of particles and electrons (173), and those of the targeting vector determine the success of the radionuclide therapy.
A recently FDA-approved PRRT is 177 Lu-octreotate therapy, which has been granted approval for the treatment of somatostatin receptor subtype 2 (SSTR2)-positive GEPNETs (174). 177 Lu-octreotate therapy consists of the radionuclide 177 Lu conjugated to the somatostatin analogue octreotate, which can bind to somatostatin receptors and provide tumour-selective irradiation (Figure 6). 177 Lu, the radionuclide, mainly emits β-particles, but also gamma radiation, and its emission can cause double-strand breaks in the cell (175). It has a half-life of 6.7 days and a tissue penetration of about 2 mm. Together with the conjugated somatostatin analogue octreotate, 177 Lu-octreotate mainly adheres to human somatostatin receptor subtype 2, but also shows measurable affinity for subtypes 4 and 5 (176,177). Several trials using 177 Lu-octreotate therapy for GEPNETs have been reported (178)(179)(180)(181)(182)(183)(184)(185)(186)(187), but comparisons have been complicated by varying selection criteria, treatment regimens, and outcome measures. In addition, these studies rarely include a control group, further complicating conclusions regarding efficacy. Retrospective and phase II studies with 177 Lu-octreotate have shown a median progression-free survival of over 30 months in patients with advanced SINETs with documented tumour progression or uncontrolled carcinoid symptoms (183,187). This was enough to initiate the first randomised controlled trial, the cross-institutional phase III trial NETTER-1 (185). In this trial patients were treated with 4 cycles of 7.4 GBq 177 Lu-octreotate every 8 weeks plus long-acting repeatable (LAR) octreotide, and compared to patients treated only with high-dose LAR octreotide. In total, 229 patients with octreoscan-positive tumours were enrolled. At month 20 the progression-free survival was 65.2% vs. 10.8% and the response rate was 18% vs. 3%.
An overall improvement in quality of life has also been shown in NET patients treated with 177 Lu-octreotate (188,189). On the basis of the NETTER-1 trial, 177 Lu-octreotate therapy was FDA-approved for treating SSTR2-positive GEPNETs.
While 177 Lu-octreotate has shown efficacy similar to 90 Y-DOTATOC in previous studies, it has also shown a better toxicity profile, especially with regard to haematological adverse effects. Haematological adverse effects are, however, still a prevalent side effect of 177 Lu-octreotate therapy. Overall, the most common adverse effects are nausea and abdominal discomfort. More serious adverse effects include renal toxicity and the already mentioned haematological toxicity (190,191). Renal toxicity is believed to be caused by the renal excretion of 177 Lu-octreotate and can be somewhat mitigated by kidney-protective amino acid infusions.
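The physical parameters quoted above (a 6.7-day half-life and 8-week treatment cycles) imply that essentially no activity from one 177 Lu cycle remains when the next is given. A minimal sketch of the decay arithmetic; the half-life and the 56-day cycle interval are taken from the text above, and the function name is our own:

```python
import math

def remaining_fraction(t_days, half_life_days=6.7):
    """Fraction of initial 177Lu activity remaining after t_days,
    from the standard exponential decay law A(t) = A0 * 2^(-t/T1/2)."""
    return math.exp(-math.log(2) * t_days / half_life_days)

# One half-life: by definition, 50% of the activity remains.
print(remaining_fraction(6.7))   # 0.5

# Between two 8-week (56-day) cycles, about 8.4 half-lives elapse,
# so well under 1% of the previous cycle's activity remains.
print(remaining_fraction(56))
```

This is purely an illustration of the decay law; dosimetry in practice also depends on biological clearance of the radiopeptide.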
Cancer and the immune system
In order for cancer to thrive, the immune system is a hurdle that needs to be overcome. Immune cells are primed to detect and eliminate any cells that do not appear to be self. Indeed, most tumour cells express antigens that can mediate recognition by host CD8+ T cells, and applying immune-evasive mechanisms is therefore a prerequisite. Tumour cells have been shown to evade the immune system in several ways, by both tumour-intrinsic and tumour-extrinsic mechanisms. Tumour cell-intrinsic mechanisms can include loss of major histocompatibility complex (MHC) class I proteins, inhibition of the antigen-processing machinery, loss of tumour-associated antigens, or expression of inhibitory proteins. Tumour cell-extrinsic factors include modulation of the microenvironment to recruit immune-suppressive cells (such as regulatory T cells), inactivation of immune receptors, and secretion of immune-suppressive cytokines. Novel therapeutic strategies have focused on overturning these evasive mechanisms. The recently successful checkpoint inhibitors focus on abrogating the inhibitory immune receptor proteins expressed by the tumour cells, but there are more avenues to explore.
The therapy that first attracted broad attention to checkpoint inhibition was ipilimumab, a monoclonal antibody directed against cytotoxic T lymphocyte antigen 4 (CTLA4), which was approved in 2011 and was the first therapy to show an overall survival advantage in metastatic melanoma (192). CTLA4 inhibition has now largely been superseded by inhibitors against PD-1 and PD-L1, which show a better toxicity profile. A large number of clinical trials have paved the way for the FDA approval of PD-1/PD-L1 inhibitors for a wide variety of cancers (193). To date, five PD-1/PD-L1 inhibitors have been FDA-approved for the treatment of cancer (194). However, there are also other interesting immunotherapies designed to enhance the immune system against cancer. These include tumour-directed monoclonal antibodies, oncolytic viruses, cancer vaccines, and T-cell-focused therapies. Tumour-directed monoclonal antibodies are designed to target tumour-specific antigens, remain on the cell surface, and activate antibody- or complement-dependent cytotoxicity; oncolytic viruses can selectively infect and kill cells that express specific proteins; and cancer vaccines can work by immunising the patient against tumour-associated antigens.
Other immune therapies have focused on T cells, including the manufacture of chimeric antigen receptor (CAR) T cells, which recognise a specified tumour antigen and are activated in an MHC-independent manner, and T cell receptor (TCR) gene-modified T cell therapy, which works by modifying the TCR to detect specific tumour antigens presented by HLA proteins. Adoptive cell transfer (ACT) is another T-cell-focused immunotherapy; it refers to the in vitro stimulation and expansion of endogenous or allogeneic immune effector cells for patient administration (Figure 7). For ACT to work, IL-2, a signalling cytokine that stimulates immune cells, is often co-administered to ensure the viability and function of the infused cells. ACT has achieved a 20% complete response lasting longer than 3 years in stage IV melanoma (195).
Figure 7. Schematic of adoptive cell transfer therapy. (1) The tumour is excised from the patient, (2) plated as single cells, and (3) tumour-infiltrating T cells are selectively expanded by IL-2 stimulation. (4) An assay for tumour recognition can be performed and (5) functional clones selected and expanded. (6) Expanded T cells are reinfused into the patient.
With limited clinical experience, investigations into the role of immunotherapy in SINETs have, in light of the success of checkpoint inhibitors, recently focused mainly on characterising the expression of programmed death-ligand 1 (PD-L1) and programmed cell death protein 1 (PD-1) by immunohistochemistry. The positivity of these proteins in SINET biopsies has varied, with reported PD-L1 positivity ranging between 0-39% and PD-L2 positivity between 0-82% (196)(197)(198). The most notable difference has been between well-differentiated (grade 1 and 2) and poorly-differentiated (grade 3) tumours, as PD-L1 expression has been observed to be significantly higher in grade 3 GEPNETs (199).
AIMS
All papers within the scope of this thesis aimed to expand the knowledge of small intestinal neuroendocrine tumours and to provide instruments for the discovery and implementation of clinical therapies that benefit patients affected by this tumour disease.
Specifically, the aims of the papers were:
Papers I and II: To characterise and evaluate frequently used gastroenteropancreatic cell lines in aspects relevant for studying neuroendocrine tumour disease.
Paper III: To shed light on the genetic mechanisms underlying the initiation and/or progression of small intestinal neuroendocrine tumours.
Paper IV: To identify and validate a novel combination therapy to potentiate the efficacy of the 177 Lu-octreotate therapy for small intestinal neuroendocrine tumours.
Paper V: To evaluate the potential for immunotherapy in small intestinal neuroendocrine tumours.
METHODOLOGY
In the following sections some selected key materials and methods are detailed.
Material
Material used in the papers included in vitro models, ex vivo models, in vivo models, and patient samples.
Cell culture (Papers I, IV, and V)
All cell lines and primary cells were grown in specified media compositions and were kept at 37°C in a humidified incubator with an atmosphere of 5% CO2 (Table 2).
Tissue microarray (Papers I, III, IV, and V)
A tissue microarray (TMA) was constructed using biopsies from patients who underwent surgery for SINETs at Sahlgrenska University Hospital from 1986 to 2013. Formalin-fixed and paraffin-embedded tumour tissue from this cohort was originally retrieved from the archives of the Department of Clinical Pathology and Genetics, Sahlgrenska University Hospital, Gothenburg. The diagnosis was confirmed by reviewing haematoxylin and eosin-stained sections and immunohistochemical stainings. Sufficient tumour material for construction of the tissue microarray was available from 846 tumours from 412 patients. 1.0 mm core biopsies were obtained from each tumour. Eight recipient blocks were created, each containing a total of 121 core biopsies. Each block also included normal tissue from gut, small intestine, and large intestine. When available, core biopsies were taken from the primary tumour, lymph node metastases, liver metastases, and other distant metastases. The quality of the constructed tissue microarray was evaluated on haematoxylin and eosin-stained sections and on immunohistochemical stainings for chromogranin A, synaptophysin, serotonin, and Ki67. We obtained approval from the Regional Ethical Review Board in Gothenburg, Sweden, for the use of clinical materials for research purposes.
Tumour xenografts (Papers IV and V)
Tumour xenografts were studied in two different papers (papers IV and V). In paper IV, in vivo experiments were based on cell line-derived xenografts, specifically from the GOT1 cell line. GOT1 tissue was transplanted subcutaneously into BALB/c nude mice (Janvier Labs) and growing tumours were measured twice weekly with slide calipers. In paper V, we instead opted to establish patient-derived xenografts in NOG mice. For this purpose we tried both different ways of pre-processing patient tumour tissue and different transplantation approaches. Tumour tissue was either collected directly from surgery or thawed from cryofrozen material before transplantation. Transplantation was done either subcutaneously or through orthotopic liver injections.
For all experiments, water and autoclaved food were available ad libitum and the well-being of the mice was continuously monitored. Mice were sacrificed at the end of the experiment by intraperitoneal injection of 60 mg/mL pentobarbital (Pentobarbitalnatrium vet., Apotek Produktion & Laboratorier), followed by cardiac puncture. We obtained approval from the Regional Ethical Review Board in Gothenburg, Sweden, for all animal procedures.
Table 2. In vitro models used within the scope of this thesis, the cell media they were kept in, and from where they were acquired.
Selected methods
The results of the papers presented in this thesis were generated by more than twenty distinct methods (Table 3). For details of each methodology, please refer to the specified papers. Below, a few selected key methods are detailed.
Immunohistochemistry (Papers I, III, IV, and V)
Immunohistochemistry was performed on different types of material, including cell lines, primary cell cultures, CDXs, PDXs, patient tumour tissue, and TMAs. All material was fixed by 4% buffered formaldehyde or methanol and then embedded in paraffin. Sections (3-4 μm) from paraffin blocks were placed on glass slides and treated in Dako PT-Link using EnVision™ FLEX Target Retrieval Solution (high pH). A wide selection of antibodies was used and information about antigen, clone, and manufacturer is specified in the material and methods section of individual papers. Immunohistochemical staining was performed in a Dako Autostainer Link using EnVision™ FLEX according to the manufacturer's instructions (DakoCytomation). For most stainings, EnVision™ FLEX+ (LINKER) rabbit or mouse was used. Positive and negative controls were included in each run.
Fluorescence in situ hybridisation (Paper III)
Fluorescence in situ hybridisation (FISH) was performed on 4 µm paraffin sections from the TMA. Pre-processing of paraffin sections, hybridisation to the probe, post-hybridisation washing, and fluorescence detection were performed according to the manufacturer's instructions (Abnova). Tumours were examined using an Axioplan 2i epifluorescence microscope (Zeiss, Oberkochen, Germany) equipped with a 6-megapixel CCD camera (CV-M4 + CL, JAI) controlled by Isis 5.5.9 imaging software (MetaSystems Group Inc, Waltham, MA, USA). Within each section, normal regions/stromal elements served as the internal control to assess the quality of hybridisation. Cases were scored at 100× magnification, counting at least three distinct areas and at least 30 discrete nuclei.
Inhibitor screening (Papers I and IV)
The screening library consisted of 1224 compounds (Inhibitor library, no. L1100; Selleckchem). Inhibitors were subjected to a maximum of five freeze-thaw cycles. From frozen stocks, cells were expanded 2 to 5 passages before being used in experiments. Seeding density was adjusted for each cell line so that control cells were approximately 70-80% confluent at the treatment endpoint, in 100 µL cell medium/well in black solid-bottom 96-well plates. The plates were incubated at 37°C to allow for cell attachment. Each treatment plate included 8 internal control wells with DMSO, and each experiment included an additional plate with 96 DMSO control wells. Additionally, each experiment contained one cell-free control plate for background subtraction. For the screenings in both papers I and IV, the final concentration in the wells was 1 µM. Cell viability was estimated using a fluorescence-based assay that measures the reducing capacity of metabolically active cells (alamarBlue, DAL1100; Life Technologies). The plates were read using a 96-well fluorescence plate reader (Victor 3 multilabel reader, ex. 560 nm/em. 640 nm).
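The control wells described above, DMSO vehicle wells on each plate and a cell-free plate for background subtraction, support a straightforward normalisation of the fluorescence readings. A minimal sketch under stated assumptions: the function name and all numeric values are illustrative, not the actual screening data or pipeline:

```python
from statistics import mean

def normalise_viability(raw, dmso_wells, cell_free_wells):
    """Convert raw fluorescence readings to fractional viability:
    subtract the mean cell-free background, then divide by the mean
    background-corrected DMSO (vehicle) control signal, so that
    1.0 corresponds to untreated-control viability."""
    bg = mean(cell_free_wells)
    ctrl = mean(dmso_wells) - bg
    return [(r - bg) / ctrl for r in raw]

# Hypothetical readings in arbitrary fluorescence units:
viab = normalise_viability(
    raw=[5200, 1360, 9760],
    dmso_wells=[10000, 9600, 10400],
    cell_free_wells=[400, 380, 420],
)
print(viab)
```

In practice per-plate DMSO wells would be used for each treatment plate, as described above; the sketch only shows the arithmetic.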
Generation of tumour infiltrating lymphocytes (Paper V)
Patient tumour tissue samples were obtained from patients undergoing surgery for SINET disease at Sahlgrenska University Hospital, Gothenburg, Sweden. Tumour tissue obtained directly from surgery was cut into 1-2 mm² pieces and placed into separate wells of a 24-well plate (Sarstedt) with 2 ml of culture medium (90% RPMI 1640 (Invitrogen), 10% heat-inactivated human AB serum (HS, Sigma-Aldrich), 6000 IU/ml recombinant human IL-2 (Peprotech), and gentamicin (Invitrogen)). TILs were isolated from each fragment as previously described (201)(202)(203), before cryopreservation. TILs were expanded according to previously described procedures (203). In brief, it was performed as follows: irradiated (40 Gy) allogeneic feeder cells (5×10⁶), 30 ng/ml anti-CD3 antibody (Miltenyi; OKT3), 5 ml culture medium, 5 ml REP medium (AIM-V (Invitrogen) supplemented with 10% HS and 6000 IU/ml IL-2), and isolated TILs (5×10⁴) were mixed in a 25-cm² tissue culture flask. Flasks were incubated upright at 37°C in 5% CO2. On day 5, half of the medium was replaced. On day 7 and every day thereafter, cells were split into further flasks with additional medium as needed to maintain cell densities around 1-2×10⁶ cells/ml. On day 10-14, cells were harvested and cryopreserved. We obtained approval from the Regional Ethical Review Board in Gothenburg, Sweden, for the use of clinical materials for research purposes.
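The density-maintenance step above (splitting into further flasks to stay around 1-2×10⁶ cells/ml) amounts to a simple volume calculation. A sketch with illustrative numbers: the 1-2×10⁶ cells/ml range comes from the protocol above, while the 1.5×10⁶ midpoint, the 25 ml flask working volume, and the harvest count are hypothetical:

```python
def medium_volume_for_density(total_cells, target_density=1.5e6):
    """Total medium volume (ml) that returns an expanding TIL culture
    to the target density (cells/ml). The target midpoint is our choice."""
    return total_cells / target_density

def flasks_needed(total_volume_ml, flask_volume_ml=25):
    """Number of flasks required at an assumed working volume per flask
    (illustrative capacity, not taken from the protocol)."""
    return -(-int(total_volume_ml) // flask_volume_ml)  # ceiling division

cells = 6e7  # hypothetical harvest from one flask
vol = medium_volume_for_density(cells)
print(vol, flasks_needed(vol))  # 40.0 ml split across 2 flasks
```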
RESULTS AND DISCUSSION
The characteristics of GEPNET cell lines (paper I)
Experimental models of neuroendocrine tumour disease are scarce, and no comprehensive characterisation of existing gastroenteropancreatic neuroendocrine tumour (GEPNET) cell lines has previously been reported. In this study, we aimed to define the molecular characteristics and therapeutic sensitivity of these cell lines. We therefore performed immunophenotyping, copy-number profiling, whole-exome sequencing, and a large-scale inhibitor screening of seven GEPNET cell lines. The gold standard for diagnosing a cancer disease is histopathological examination, including immunohistochemical staining of biomarkers. To validate the diagnosis of frequently used GEPNET cell lines, we performed immunophenotyping of commonly used markers for GEPNET diagnostics (Figure 8). These normally include the neuroendocrine markers synaptophysin (small synaptic-like vesicles (27)) and chromogranin A (large dense-core vesicles (28)) (26). To ensure an epithelial phenotype, cytokeratin is also often investigated. Intriguingly, the diagnosis could not be confirmed for the cell lines KRJ-I, L-STS, and H-STS, which we address further in the next results section. The remaining cell lines all expressed synaptophysin and pancytokeratins strongly but varied in their expression of other neuroendocrine markers, potentially indicative of partly lost neuroendocrine phenotypes.
Genomic background influences both the prognosis and therapeutic sensitivity of tumour cells. There are, for example, mutations confirmed to lead to a worse patient prognosis as well as mutations that are directly targeted by pharmaceuticals. If we are to study such aspects of cancer biology, we need to know which genetic characteristics our models harbour and, importantly, whether they recapitulate the disease afflicting patients. For these reasons we studied both somatic copy-number alterations and genetic mutations, using array CGH and whole-exome sequencing.
The copy-number profiling revealed both common alterations and changes that are rarely detected in patient tumours. SINETs most frequently harbour loss of chromosome 18. Because of this, chromosome 18 has been the subject of extensive investigation to identify inactivated tumour suppressors localised on the chromosome. Interestingly, the GOT1 cell line harboured a 1.6 Mb segmental loss on 18q involving 7 genes, including SMAD4. While the SINET cell lines had a predominance of chromosomal losses, the PanNET cell lines had a higher frequency of chromosomal gains. Notably, BON1 harboured homozygous loss of the well-known tumour suppressors CDKN2A and CDKN2B, and QGP-1 was the only cell line that harboured chromosomal amplifications, including of HMGA2 and MDM2, the former often found upregulated in cancer and the latter an established oncogene.
We finished the study by looking at the therapeutic sensitivity of the cell lines. This had several purposes: a) as a way of characterising the cell lines, b) to study whether the therapeutic sensitivity of the cell lines could predict the sensitivity of primary tumour cells, and c) to provide leads for potentially interesting inhibitors for GEPNET therapy. To minimise the risk of identifying efficient inhibitors based on cell culture conditions rather than tumour cell characteristics, all results were reported by comparing SINET and PanNET cell lines to each other. We found that SINET cell lines were more sensitive to HDAC inhibitors (HDACi) than PanNET cell lines, and that PanNET cell lines were more sensitive to MEK inhibitors (MEKi) than SINET cell lines. These findings also held true when comparing primary cells generated from SINETs and PanNETs.
In conclusion, we provided a thorough and much-needed characterisation of frequently used GEPNET cell lines. This characterisation included comprehensive immunophenotyping, copy-number alterations, gene mutations, and the therapeutic sensitivity to 1224 inhibitors.
H-STS, L-STS, and KRJ-I are not authentic GEPNET cell lines (papers I and II)
When characterising the KRJ-I, L-STS, and H-STS SINET cell lines in paper I, we were surprised to find that the cell lines expressed extremely low or undetectable levels of the neuroendocrine markers chromogranin A and synaptophysin. This was also the case for all other neuroendocrine, enterochromaffin, and, importantly, epithelial markers. Given the lack of even an epithelial phenotype, and the peculiar fact that, contrary to other GEPNET cell lines, they grew as sphere-forming suspension cultures, we postulated that these cell lines might be lymphoblastoid. Lymphoblastoid cell lines are immortalised B lymphocytes that do not undergo senescence because they are infected and driven by the Epstein-Barr virus (EBV). Indeed, we could confirm strong expression of the lymphoid marker CD45 and the B-cell marker CD20 in all three cell lines. These markers were at the same time undetectable in the other GEPNET cell lines. Furthermore, EBV DNA was found in all three cell lines, which again was not the case for the other GEPNET cell lines.
This provided strong proof that the cell lines we had obtained did not in fact consist of epithelial tumour cells, but rather of immortalised B cells. Since many publications have been produced using these cell lines, in particular the KRJ-I cell line, we wanted to see whether this was a problem only in our lab. We therefore confirmed with the lab where the cell lines were established that the cell lines also lacked neuroendocrine markers, expressed B-cell markers, and contained EBV in early passages. This implies that any SINET cells present in the culture from the start were overgrown early or were never present to begin with. It follows that most or all published articles using these cell lines could present inaccurate research findings. In conclusion, we have revealed that the presumed and frequently used SINET cell lines KRJ-I, L-STS, and H-STS are not authentic. They instead consist of immortalised EBV-infected B cells, and are thus better described as lymphoblastoid cell lines. This has now been shown in our lab, in the lab that established the cell lines, and more recently using the RNA-seq data from the Alvarez et al. study. We therefore urge that data from studies using these cell lines be interpreted with great caution.
SMAD4 haploinsufficiency in SINETs (paper III)
The genomic alterations that lead to tumour initiation and progression are termed driver mutations. Identifying driver mutations is important to shed light on the tumour biology of a cancer disease and could lead to an increased understanding of how the tumour cells could be pharmacologically targeted. Currently, not much is known about the molecular background of SINETs. Driver mutations can commonly be detected by their frequent occurrence. In SINETs, however, despite whole-exome sequencing of more than one hundred patient tumours, only one recurrently mutated gene has been identified, CDKN1B, and in less than a tenth of all tumours.
Here we instead turned our attention to copy-number alterations. Several copy-number alterations are recurrent in SINETs, and although these are rarely reported as homozygous, we speculated that they have an important impact on SINETs. The most frequent genomic alteration in SINETs is loss of chromosome 18. SMAD4, located on chromosome 18, has been reported to be haploinsufficient in genetically engineered mouse models (206,207), and heterozygous germline mutations of SMAD4 can lead to familial juvenile polyposis syndrome, a syndrome that among other things predisposes the carrier to gastrointestinal cancers (154).
We therefore decided to investigate the role of hemizygous loss of chromosome 18 and its relation to SMAD4 mRNA and SMAD4 protein.
Investigating a cohort that is very large for the field, including 846 tumours from 412 patients, we found that hemizygous loss of SMAD4 correlated with both an approximately two-fold decrease in the corresponding mRNA and lower SMAD4 protein levels. Of note, we observed that a decrease in SMAD4 protein in the primary tumours was associated with a worse patient prognosis and with the occurrence of distant metastasis. In colorectal cancer, SMAD4 mutations have been shown to be cancer-promoting in the presence of TGFβ stimulation (208). One possible mechanism for this is promotion of epithelial-to-mesenchymal transition (EMT) resulting from accumulation of nuclear β-catenin following SMAD4 downregulation (209). Interestingly, it has been speculated that SINETs are insensitive to the growth-inhibitory effects of TGFβ (210). We also studied whether monoallelic inactivation of Smad4 was alone sufficient to induce endocrine cell hyperplasia in a mouse model, but could not find support for this hypothesis.
In summary, the findings in this study suggest that copy-number alterations in SINETs can affect the protein expression of tumour-associated genes and could thereby represent a novel mechanism underlying SINET tumour pathogenesis. Further research regarding the causal link between copy-number alterations and functional consequences is warranted.
177 Lu-octreotate therapy for SINETs can be potentiated by Hsp90 inhibition (paper IV)
Following promising results in a phase 3 trial (211), 177 Lu-octreotate therapy became FDA-approved in 2018 for patients with gastroenteropancreatic neuroendocrine tumours expressing somatostatin receptors (174). 177 Lu-octreotate therapy indeed shows better results in clinical trials than other therapies for SINETs and leads to longer progression-free survival, but complete responses are still rare. A common strategy to enhance the efficacy of a therapy without a corresponding increase in severe side effects is combination therapy (212). Our goal in paper IV was thus to identify a therapy that would potentiate the efficacy of 177 Lu-octreotate therapy.
To identify interesting combinations, we screened the two cell lines GOT1 and P-STS for inhibitors that caused synergistic radiosensitisation. In total, 1224 inhibitors were investigated. Of these, 2-3% showed synergistic interaction with external radiation at the evaluated dose. This is a level similar to other large-scale screenings aiming to identify synergistic pairs (4-10%) (213)(214)(215). By performing an analysis of inhibitor-class overrepresentation, we saw that inhibitors of Hsp90 were highly overrepresented for the GOT1 cell line (false discovery rate, FDR: 3.2×10⁻¹¹). Hsp90 inhibitors were, however, not overrepresented in the P-STS cell line, which we attribute to significant differences between the cell lines. Notably, while GOT1 was established from a grade 1 well-differentiated neuroendocrine tumour, P-STS was established from a grade 3 poorly-differentiated carcinoma. P-STS also contains mutations that could affect its response to the combination therapy, including uncommon mutations in TP53, BRCA1, and BRCA2 (55). In fact, previous reports suggest that Hsp90 radiosensitisation occurs through impairment of DNA double-strand break repair mechanisms (216), specifically through inhibition of BRCA1 and/or BRCA2 (217,218).
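A class-overrepresentation analysis with FDR control, as described above, can be sketched with a one-sided hypergeometric enrichment test per inhibitor class followed by Benjamini-Hochberg correction. This is our choice of test for illustration; the paper's exact implementation is not specified here, and the example counts (8 of 12 class members among 30 hits) are hypothetical:

```python
from math import comb

def hypergeom_enrichment_p(hits_in_class, class_size, total_hits, library_size):
    """One-sided p-value for observing at least hits_in_class synergistic
    inhibitors within a class of class_size compounds, given total_hits
    synergistic inhibitors among library_size screened compounds."""
    p = 0.0
    for k in range(hits_in_class, min(class_size, total_hits) + 1):
        p += comb(total_hits, k) * comb(library_size - total_hits, class_size - k)
    return p / comb(library_size, class_size)

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR-adjusted q-values, returned in input order."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    q = [0.0] * n
    running_min = 1.0
    for rank in range(n, 0, -1):           # walk from largest p to smallest
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        q[i] = running_min
    return q

# Hypothetical: 8 of 12 inhibitors in one class are synergistic,
# out of 30 synergistic hits in a 1224-compound library.
p = hypergeom_enrichment_p(8, 12, 30, 1224)
print(p, benjamini_hochberg([p, 0.2, 0.8]))
```

With realistic hit counts this kind of test yields the very small FDR values of the magnitude reported above.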
Although inhibitors of Hsp90 caused synergistic radiosensitisation to external radiation in the GOT1 cell line, we did not know whether the same effect would be seen with 177 Lu-octreotate, which instead emits beta radiation. We thus decided to investigate whether ganetespib, an inhibitor of Hsp90, could induce a similar synergistic radiosensitisation with 177 Lu-octreotate therapy in GOT1 xenograft tumours in mice. This model system was suitable since the GOT1 cell line, as opposed to other cell lines (55,219), has not lost its SSTR2 expression. The effect of 177 Lu-octreotate, ganetespib, and their combination on tumour volume was followed over 14 days, during which we observed a potent and significant synergistic effect of the combination.
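One standard way to quantify whether a combination effect exceeds what the single agents predict is the Bliss independence model; this is a common definition in the field, not necessarily the exact synergy metric used in the paper, and the effect values below are hypothetical:

```python
def bliss_excess(effect_a, effect_b, effect_combo):
    """Bliss excess: observed combination effect minus the effect expected
    if the two agents act independently. Effects are fractional inhibition
    in [0, 1]; a positive excess indicates synergy."""
    expected = effect_a + effect_b - effect_a * effect_b
    return effect_combo - expected

# Hypothetical fractional tumour-growth inhibition values:
excess = bliss_excess(0.30, 0.40, 0.75)
print(excess)  # positive, i.e. more inhibition than independence predicts
```

Under independence, the expected combined inhibition of 0.30 and 0.40 is 0.58, so an observed 0.75 corresponds to a Bliss excess of 0.17, consistent with synergy.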
To shed some light on how many SINET patients might benefit from this combination, and to further validate the results, we studied the combination in first-passage primary cells prepared from patient tumours collected at surgery. All eight patient tumours investigated were well-differentiated grade 1 or 2 metastatic SINETs. Tumour cells from every patient trended towards synergy, and looking at the overall effect, we could again observe significant synergistic radiosensitisation.
In addition, we investigated a larger cohort containing 761 SINETs from 379 patients for the expression of Hsp90 by immunohistochemistry. We could conclude that Hsp90 is upregulated compared to surrounding tumour stromal cells in more than 90% of all tumours. No association between high/low Hsp90 expression and patient survival could be found in either the large cohort or a smaller cohort of 43 SINET patients treated with 177 Lu-octreotate. In conclusion, we identified ganetespib, an inhibitor of Hsp90, as able to potentiate 177 Lu-octreotate therapy by radiosensitising SINET cells, and suggest that this combination be evaluated in a clinical setting.
The SINET immune microenvironment contains lymphocytes capable of recognition and activation after expansion (paper V)
The recent success of checkpoint inhibitors has shown the large potential of curing cancer with immunotherapy. The development of such immunotherapies came from the realisation that all tumour cells must evade the immune system, and that inhibiting their evasive manoeuvres could enable the body's own defence system to clear the tumour cells. Indeed, this realisation has since largely been proven right, but immunotherapy is still far from successful in all patients and cancer types, and for some cancers, including NETs, both preclinical and clinical experience is still very limited.
In this paper we looked more closely at the immune cells present in the SINET microenvironment, to investigate its composition and functionality. We also set out to isolate, expand, and activate these immune cells to recognise and retaliate against the SINET cells. We first presented a thorough characterisation of SINET patient samples using immunohistochemistry and flow-cytometric immunophenotyping. Interestingly, we could see that the numbers of, in particular, CD4+ and CD8+ T lymphocytes varied dramatically between tumour biopsies. We could also see that these immune cells were mainly (>90%) localised in the tumour stroma and at the interface between tumour stroma and tumour nests. PD-L1 positivity was found in 2/7 tumours, and NKp46+ NK cells were very rare in all tumour samples (<10 cells/full tumour section). Overall, CD4+ T lymphocytes were most abundant, followed by CD8+ T lymphocytes and B cells.
We also isolated tumour-infiltrating lymphocytes (TILs) and expanded them using the same methodology as used for adoptive T cell transfer in the clinic, involving anti-CD3 and IL-2 stimulation (220). This successfully led to the expansion of SINET TILs, mainly T lymphocytes. As clinical responses to ACT can be modelled using transplanted patient-derived xenograft (PDX) tumours and autologous T cells in non-obese diabetic/severe combined immune-deficient/common gamma chain knockout (NOG) mice with the continuous presence of IL-2 (221), we attempted to establish such a model. No successfully established SINET PDX model had previously been reported. In total, by both subcutaneous and orthotopic liver transplantation, we grafted 38 SINETs from 36 patients into 55 NOG mice. Only one tumour, from a grade 1 liver metastasis, was successfully propagated and grown through two passages. The poor take rate was consistent with previous reports on establishing NET PDXs (98). Instead, we attempted to grow tumour spheres in vitro from two patient tumours (T3 and T4), transfect them with luciferase, and inject them into mice. After three months we observed an increase in bioluminescence signal, which is still ongoing, indicating tumour cell proliferation. One speculation as to the potentially improved take rate of tumour spheres is that sphere culturing excludes the potentially growth-inhibiting immune microenvironment.
We also investigated whether the TILs that we isolated and expanded through stimulation could recognise and degranulate when challenged with autologous tumour cells. Indeed, although to varying degrees, all expanded TILs degranulated, several even more than M33, TILs from a malignant melanoma patient that have previously been demonstrated to be reactive against autologous tumour cells in vivo (221). Based on this, we hypothesised that SINET TILs have the potential to recognise tumour cells and that their immunologic inhibition can be overcome by the presence of exogenous interleukin-2 (IL-2), something that has been demonstrated for other tumour types (222,223).
In conclusion, we here present the broadest characterisation of the SINET immune microenvironment so far, and show that SINET TILs are capable of activation when challenged with autologous tumour cells after TIL expansion.
CONCLUDING REMARKS
Small intestinal neuroendocrine tumours afflict many patients globally every year. The fact that the tumour disease often presents with distant metastases, and that curative therapeutic options for spread disease do not exist, is deeply troubling. It must therefore be of utmost priority to develop such therapies.
However, in order to do so in a preclinical setting, we need a clear understanding of our tumour models and their weaknesses, and they absolutely need to be authentic. In this thesis we conclude that this is not always the case. Paper I demonstrates features of currently used cell lines that recapitulate the tumour disease, but also those that do not, and, importantly, reveals several completely non-authentic cell lines. The latter finding was subsequently reinforced by the analysis of published RNAseq data in paper II. If we are, based on preclinical research, to find a cure, this must be a priority. Furthermore, while cell lines are a very important tool in cancer research, we must be aware of their restrictions, especially in terms of adaptations made in cell culture. The use of alternatives, such as primary cells, has been limited to only a very few studies. Here we demonstrated the utility of such primary cells in both paper I and paper IV. In addition, the limited availability of in vivo models that do not rely on cell lines has also been a concern. We were therefore happy to present both the first established SINET PDX in paper V and, although it is still an early finding, a possible strategy for improving future PDX take rates.
An attractive approach to identifying new therapies is revealing the underlying drivers of the tumour disease. As everything has its starting point in alterations in the DNA, identification of these could lead to viable therapies. This was the case for pancreatic NETs (sirolimus for mTOR-activated tumours), and has previously happened for many other tumour types. Unfortunately, driver mutations are still largely unknown for SINETs. Based on recurrence in exome-sequencing studies, only one potential driver has been identified. In paper III we instead propose a role for recurrent copy-number alterations in SINET tumourigenesis and suggest that hemizygous loss of SMAD4 can have tumour-promoting effects.
In this thesis we also took a look at both established and 'up-and-coming' therapies. 177 Lu-octreotate was approved in 2018 for the treatment of SINETs, but its curative rates are still low. In paper IV we could conclude that the Hsp90 inhibitor ganetespib could provide an efficient strategy to potentiate 177 Lu-octreotate therapy for SINETs. In paper V we instead demonstrated the potential for immunotherapy, in that we managed to expand and reactivate SINET TILs. Overall, we believe that our findings have increased our understanding of the SINET tumour disease and taken us further along the road towards finding a cure.

ACKNOWLEDGEMENT

First, I'd like to say that this experience has been truly amazing. Despite all the hard work and late hours, I can say nothing other than that it has all been completely worth it, and, truly, nothing would have been possible without the help of all the beautiful people that have surrounded me during this time. For scientific input, for moral support, and for friendship. Thank you all.
My supervisors,
Ola: Thank you for giving me the freedom to design and pursue studies on whatever research question we came across; whether it arose from an idea, a research paper, or a collaborator, you were always supportive.
Yvonne: I will tremendously miss our teamwork. Knowing I could always (and extremely often) come to you to try out thoughts and ideas has been absolutely invaluable. You have meant everything to me, and have played a big part in who I am today, both as a researcher and as a person.
Jonas: An outstanding researcher who truly believes in the power of science done right. Thank you for everything you have taught me, but mostly how you have inspired me.
The lab, I am incredibly grateful for having worked with you throughout this. We are like a big family that looks out for each other. Heading to work is easy with such colleagues. Gülay, with your enormous heart and never-ending empathy. Thank you so much for everything you have taught me. Linda, for making every day a little bit better with your quirky jokes and contagious laugh. Bilal, you are a remarkable person with a remarkable strength, sprinkled with a lot of kindness. Taking part in your journey has put my world in perspective.
Research on Carbon Emission Management System Deployed in Civil Airport
Along with the increasingly prosperous airport industry, energy consumption and carbon emissions at airports have rocketed, making the building of eco-friendly airports the common trend and objective of global airport development. Studying airport carbon emission management is relevant for reducing airport carbon emissions and improving the quality of airports and their surrounding environment. This article analyzes the needs of airport carbon emission management given the current situation, proposes to achieve airport carbon emission management and monitoring by using information technology, develops carbon management system software for civil aviation airports, and provides smart strategies for airport carbon emission management.
Overview
The rapid progress made in civil aviation infrastructure benefits the whole world, and airports, as the major infrastructure, inevitably leave a larger "carbon footprint". By calculation, the global air transport industry contributes about 2% of total global carbon emissions, of which airport activities account for about 5%. As people grow more concerned about carbon emission reduction, eco-friendly airports have become the common goal of global airport development. Some countries, regions and international organizations have introduced carbon emission reduction regulations and programs: the European Union announced a plan to subject airlines to the management of the Emissions Trading Scheme (ETS) in September 2005; in 2008, the Civil Aviation Administration of China promulgated the Planning on Energy Conservation and Emission Reduction of Civil Aviation Industry, requiring the whole industry to vigorously promote aviation energy conservation and emission reduction; in 2011, the Guiding Opinions on Accelerating the Energy Conservation and Emission Reduction of Civil Aviation Industry was issued, proposing that the growth rate of energy consumption and CO2 emissions of the whole industry must be lower than the development rate of the industry, and that by 2020, the energy consumption and emissions of China's civil aviation per unit of output should decrease by 22% compared with 2005. In 2017, China launched its carbon trading market, with civil aviation and six other industries chosen as the first batch for the pilot.
In the future, with the tightening of China's environmental policies, the continuous expansion of civil airports in both number and scale, and the growing complexity of large-scale airports, energy conservation and emission reduction will expose the airport industry to challenges, letting it bear the burden of "paying carbon tax" internationally and the pressure of "developing green civil aviation" domestically.
Demand Analysis of Airport Carbon Emission Management
Aiming to promote the construction of eco-friendly airports, the Civil Aviation Administration has issued a series of policy documents, such as the Implementation Opinions on Deepening the Green Development of Civil Aviation, further clarifying the objectives and tasks of the green development of civil aviation airports. At present, the construction of airports that are safe, green, intelligent and people-centered is in full swing. However, domestic airports suffer from subpar carbon emission reduction standards, outdated systems for calculating and compiling energy consumption and carbon emission statistics, and a lack of effective management methods. The looming pressure of carbon emission reduction requires giving top priority to the study of operation-generated carbon emission volumes and emission characteristics, which is the premise for future implementation and evaluation of airport carbon emission reduction.
Under this circumstance, the establishment of an airport carbon emission management system is an effective way to promote energy conservation and emission reduction in the airport industry, and is also a necessary means for domestic control and for developing the airport low-carbon economy. The airport carbon emission management system, running on accurate carbon emission data, can effectively support airports in carbon emission reduction, facilitate a detailed understanding of the current carbon emission situation, evaluate the outcome of carbon emission reduction measures already implemented, and provide a basis and support for formulating energy conservation and emission reduction measures.
Based on the above considerations of the carbon emission system demands, the contents of each module and the system flow logic are sorted out at the system function module level, as shown in the following figure.
Airport Carbon Emission Management System Architecture
The airport carbon emission management system applies SaaS software under the B/S framework, and the system architecture is divided into levels: the operating system, the database, module processing, the software application and browsing. Services are provided to airport carbon emission management system users via an internet browser. The system supports the isolation of data and configuration between different users to ensure the security and privacy of each user's data, and reserves space for tailored demands of future users, such as interfaces, business logic, and data structures.
Three types of users with different authorities are targeted by the system: system administrators, data entry personnel, and people viewing/analyzing system data (such as the airport party and management leaders). The administrator account has all authorities and can add/edit accounts of the data entry party and the airport party, ensuring the normal operation of the system and allowing multiple parties to manage and analyze the carbon data reasonably. The airport party's account authorities are limited to viewing data and exporting partial data and reports. The data entry party obtains accounts with data entry authorities to edit and import platform data (data source: energy system related data or data submitted by organizations), ensuring data security and accuracy. Administrators can set up accounts with different authorities and hand them to relevant personnel, who can verify and enter (or import from similar energy data systems, according to the data and units required in national standards) the basic data of the carbon management system (such as basic information of airports, emission boundaries, emission sources, emission factors and emission data). The airport party or the construction party can then check the authenticity and validity of the data. After verification, the system can collect different types of data by category, calculate the final carbon emission data, display the basic data chart for each dataset, and simultaneously generate data statements for the different fields highlighted in carbon management, as shown in Figure 2.
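The three-tier authority model described above can be sketched as a simple role-to-permission mapping. This is a minimal illustration only; the role and permission names below are assumptions, not identifiers from the actual system:

```python
# Hypothetical sketch of the three-role authority model.
# Role names and permission strings are illustrative assumptions.

ROLE_PERMISSIONS = {
    "administrator": {"manage_accounts", "enter_data", "view_data", "export_data"},
    "data_entry":    {"enter_data"},                    # edit and import platform data
    "airport_party": {"view_data", "export_data"},      # view and export data/reports
}

def is_allowed(role, action):
    """Return True if the given role holds the given authority."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

In a real deployment such checks would sit behind the multi-level login module and be enforced server-side on every request.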
Main functions realized by the system
The system functions mainly include airport information and carbon emission boundary management, carbon emission data collection and aggregation, carbon emission data analysis, carbon emission report generation, carbon management method selection, carbon quota allocation forecasting and management, carbon emission comparison and evaluation of project construction plan and multi-level login/management modules.
To achieve carbon emission monitoring and management during the airport operation process
The module can collect data on various carbon emission activities of the airport according to international/national authoritative standards. Through network transmission, scattered front-end carbon emission data from each part of the airport can be imported and automatically entered into the system by category, providing a basic data guarantee for calculating future greenhouse gas emissions, obtaining carbon dioxide equivalent (CO2e) data, improving the scientific rigour and comprehensiveness of carbon management data, and bolstering future carbon management efficiency. Real-time online browsing (by category selection) can be enabled simultaneously to comprehensively view and grasp the carbon emissions of airports, so as to achieve online monitoring and management of airport carbon emissions.
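The core aggregation step, converting collected activity data into CO2e figures, can be illustrated as follows. This is a hedged sketch: the emission factors, GWP values, and activity names are placeholders for illustration, not values from any standard cited in the paper:

```python
# Illustrative CO2e calculation: activity data x emission factor x GWP.
# All numeric values are placeholders, not official standard values.

# 100-year global warming potentials (illustrative)
GWP = {"CO2": 1.0, "CH4": 28.0, "N2O": 265.0}

# activity -> (emission factor in kg of gas per unit, emitted gas); illustrative
EMISSION_FACTORS = {
    "grid_electricity_kwh": (0.581, "CO2"),   # kg CO2 per kWh (placeholder)
    "diesel_litre":         (2.68,  "CO2"),   # kg CO2 per litre (placeholder)
    "natural_gas_m3":       (1.89,  "CO2"),   # kg CO2 per m3 (placeholder)
}

def co2e_kg(activity_data):
    """Sum CO2-equivalent emissions (kg) over {activity: quantity} records."""
    total = 0.0
    for activity, quantity in activity_data.items():
        factor, gas = EMISSION_FACTORS[activity]
        total += quantity * factor * GWP[gas]
    return total
```

In the described system, the factors would instead be loaded from the stored emission-factor data verified by the airport or construction party.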
To realize the statistics and analysis of airport carbon emissions data
Automatic analysis of airport carbon emission data and automatic generation of carbon emission reports will be provided to lay decision-making foundation for the evaluation of emission reduction measures and the formulation of energy conservation and emission reduction measures.
The system can sort and analyze the collected airport carbon emission data, calculate the final emission volume of all greenhouse gases through the standardized platform module, and summarize the year's emissions along various dimensions for in-depth analysis and comparison, carbon emissions data mining, and the generation of analysis charts for different classifications and dimensions. Users can then more intuitively and thoroughly understand the airport's carbon emissions, and the decision-making basis for evaluating emission reduction measures and formulating energy-saving and emission reduction measures is also secured, as shown in Figure 3.
In addition, the system enables automatic generation of carbon emission reports. The collected raw data and the results of the statistical analysis are organized into a document that is easy for administrators to review, according to a given format (for example, the existing inventory report standard), and the generated list and carbon emission report can be downloaded. Users may edit the data list template through the system module, organize the structure of the carbon emission report, and retain that structure by generating a template that provides the basic outline for future carbon management. In the future, users will also be empowered to freely define and edit new list and report templates according to policies, laws and regulations or other influencing factors. Users may discuss the composition of the report with administrators and determine the final plan. Lists and reports will be output in PDF or Excel format, taking into account the stability of the browsing format.
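In a minimal form, the list-generation step could aggregate verified emission records by category and render them as a tabular export. The record field names below are assumptions for illustration, not the system's actual schema:

```python
# Sketch: aggregate verified emission records into a per-category summary
# and render it as CSV text suitable for a downloadable list.
import csv
import io
from collections import defaultdict

def summarise(records):
    """records: iterable of {"category": str, "co2e_kg": float} dicts."""
    totals = defaultdict(float)
    for r in records:
        totals[r["category"]] += r["co2e_kg"]
    return dict(totals)

def to_csv(totals):
    """Render the per-category summary as CSV text, one category per row."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["category", "co2e_kg"])
    for category in sorted(totals):
        writer.writerow([category, f"{totals[category]:.1f}"])
    return buf.getvalue()
```

The same aggregation would feed both the analysis charts and the report templates described above.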
To accurately predict carbon trading quotas
The system can automatically calculate and predict the basic carbon quota according to the quota allocation plan set by the state, by analyzing and calculating the airport's historical carbon emission data and the emission data reported by major emitting organizations, and comparing the results with those of other similar airports. Through analysis of the estimated quota and historical data, a suitable method for managing the carbon quota can be found, laying a solid foundation for fully tapping into the carbon trading market in the future. A system interface will also be reserved for improving this module in the future according to the actual situation, to accommodate the real, more complex trading market.
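As a purely illustrative sketch, a grandfathering-style estimate bases the next-period quota on mean historical emissions scaled by a state-set allocation coefficient. The function name and the coefficient value are assumptions, not the state's actual allocation plan:

```python
# Hypothetical grandfathering-style quota estimate from historical totals.
# The allocation coefficient is a placeholder, not an official value.

def forecast_quota(historical_emissions_t, allocation_coefficient=0.95):
    """Estimate the next-period quota (tonnes CO2e) from historical annual totals."""
    if not historical_emissions_t:
        raise ValueError("need at least one year of historical data")
    baseline = sum(historical_emissions_t) / len(historical_emissions_t)
    return baseline * allocation_coefficient
```

A real allocation plan would likely also benchmark against similar airports and adjust for output, as the paper describes.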
Conclusion
The carbon emission management software is designed mainly from the perspective of airport energy consumption, to serve green airports. With it, the airport management party can fully grasp the airport's carbon emission situation, holistically analyze and compare carbon emission data from airports of different scales and regional climatic conditions on the industry platform, determine the green level of the airport, and learn the space for airport emission reduction and the direction of improvement, so as to better promote energy conservation and emission reduction. The construction of a green airport is a gradual process and cannot be completed overnight. Therefore, airports should continue to advance based on past energy conservation and emission reduction achievements, align short-term interests with long-term development, combine innovative technologies with enhanced management, and integrate green ideas and wisdom.
Lack of infrastructure, social and cultural factors limit physical activity among patients with type 2 diabetes in rural Sri Lanka, a qualitative study
Introduction South Asians have a high prevalence of diabetes, increased cardiovascular risk and low levels of physical activity (PA). Reasons for low levels of PA have not previously been explored among Asians living within their endogenous environment. This qualitative study was performed to explore the contextual reasons that limit PA among type 2 diabetic patients living in a rural community. Methods Purposeful sampling recruited 40 participants with long-standing type 2 diabetes for this qualitative study. Semi-structured questions utilising in-depth interviews were used to collect data on PA patterns, barriers to PA and factors that would facilitate PA. The interviews were digitally recorded and transcribed. Data were analyzed using a framework approach. Results The sample consisted of 11 males and 29 females. Mean age was 55.4 (SD 8.9) years. The mean duration of diabetes in the study population was 8.5 (SD 6.8) years. Inability to differentiate household and daily activities from PA emerged as a recurring theme. Most did not have a clear understanding of the type or duration of PA that they should perform. Health-related issues, lifestyle and time management, and environmental and social factors such as social embarrassment and prioritizing household activities over PA were important factors that limited PA. Most stated that the concept of exercising was alien to their culture and lifestyle. Conclusion Culturally appropriate programmes that strengthen health education and empower communities to overcome socio-economic barriers that limit PA should be implemented to better manage diabetes among rural Sri Lankan diabetic patients.
Background
Sri Lanka is a developing country with a population of 21 million inhabitants and a rapidly increasing burden of non-communicable diseases (NCD) [1].
In 2005, the prevalence of hypertension, diabetes and dysglycaemia was 20%, 11% and 20% respectively [2]. Sri Lanka is a country with very high mortality from cardiovascular disease [3]. Prevalence of diabetes in Sri Lanka, which was around 2.0% in the early nineties has increased by about five-fold during the last two decades [4,5].
Diet and physical activity (PA) are two important modifiable risk factors that play an important role in the incidence, management and outcomes of diabetes [6]. The American Diabetes Association (ADA) recommends two types of PA for individuals with diabetes: aerobic exercises and strengthening exercises. The ADA recommends 30 min of moderate- to vigorous-intensity aerobic exercise on at least 5 days a week, or up to a total of 150 min per week, and resistance training of some type at least two times per week in addition to aerobic activity [7].
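The ADA thresholds above amount to a simple decision rule. As an illustration only, weekly activity data could be checked against them as follows; the function and parameter names are assumptions, not part of any validated instrument:

```python
# Illustrative check against the ADA guideline cited above:
# >=150 min/week of moderate-to-vigorous aerobic exercise (or >=30 min on
# >=5 days) plus resistance training on >=2 days per week.

def meets_ada_guideline(aerobic_minutes_per_day, resistance_days):
    """aerobic_minutes_per_day: list of 7 daily minute counts."""
    total = sum(aerobic_minutes_per_day)
    active_days = sum(1 for m in aerobic_minutes_per_day if m >= 30)
    aerobic_ok = total >= 150 or active_days >= 5
    return aerobic_ok and resistance_days >= 2
```

Formal PA assessment would of course use a validated instrument such as the IPAQ mentioned later in the paper rather than this toy rule.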
Physical inactivity is identified as the fourth leading risk factor for overall mortality globally [8]. Sedentary living is responsible for one third of deaths due to Coronary Heart Disease (CHD) and diabetes, diseases for which physical inactivity is a risk factor [9]. PA is recognized to increase insulin sensitivity, reduce cardio-vascular risk factors, reduce mortality and improve quality of life [10].
Eighty percent of Sri Lankans still dwell rurally. Although health and education indices are comparable with those of the western world, infrastructure development has not kept pace with global trends [11].
The typical Sri Lankan village consists of a self-contained microenvironment with its own agricultural land, housing, religious, schooling and sometimes basic medical amenities. However, with improving socio-economic conditions the villages tend to be less isolated with a mixture of more urban customs and practices.
Although Sri Lanka harbours a large population of patients with diabetes, very little is known about the PA patterns of its inhabitants. A national study revealed that over 60% of all Sri Lankan adults report being highly physically active [8]. Very little data exist on the PA patterns of Sri Lankan adult diabetic patients. A subset analysis of a national study revealed 13.9% of diabetic individuals to be inactive, using the short version of the IPAQ [12]. However, the reasons leading to inactivity have not been examined.
The aim of the current study was to determine barriers to physical activity among a group of patients with type 2 diabetes attending a large multi-ethnic tertiary care diabetes facility in Sri Lanka. The study also explored the associations between physical inactivity and sociodemographic characteristics.
Methods
Institutional ethical clearance was obtained from the Institutional Ethics Review Committee (IERC) of the Faculty of Medicine, University of Peradeniya, Sri Lanka. All participants gave informed written consent. The study was performed at the diabetes facility at Teaching Hospital Peradeniya, Sri Lanka from 2nd February 2015 to 26 th August 2015. Teaching Hospital Peradeniya, located on the outskirts of a major city, serves a catchment population of semi-urban and rural dwellers.
A purposeful sample was drawn from a larger study evaluating the PA of adult patients with type 2 diabetes. Four hundred patients were recruited into the larger study, from which 45 patients were selected. The selection was done ensuring equal male and female representation, and proportionate numbers were selected from each age category (less than 40 years, 40-60 years and more than 60 years). The patients attend this clinic routinely for their monthly supply of medicines and routine consultation with a physician. None of the recruited patients had any acute illnesses necessitating an out-of-routine consultation or hospital admission during the past 2 months. The 45 patients thus selected were approached by a trained Research Assistant (RA) on their regular clinic day and were given an information sheet and verbal clarification regarding the study. Male and female patients between 18-70 years were recruited; pregnant females were excluded from the study. Patients consenting to be included in the study were then given a date for an interview, and they had the freedom to withdraw from the study. The interview typically took place within 2 months of the initial recruitment date. In-depth interviews were selected over focus group discussions in this community due to social and cultural reasons.
The clinical team looking after the patient was intentionally kept out of the data collection, as this may have influenced the answers provided by the subjects.
Each scheduled interview lasted an average of 30 minutes and was undertaken at the Diabetes Clinic at the Teaching Hospital, Peradeniya. All interviews were conducted by one RA using a topic guide, which asked about the patients' type and duration of PA, reasons for not engaging in PA, and barriers to initiating and maintaining an exercise schedule. The interviews were conducted using open-ended, semi-structured questions to guide the participants and to maintain uniformity between all the interviews. The language of the interviews was Sinhalese, as all the participants were fluent in this language. All patients were informed that their interviews would be recorded, transcribed and analyzed while maintaining confidentiality.
The data collected included socio-demographic details of age, gender, marital status, occupation, income, area of residence, anthropometric and diabetes related data.
Data handling and analysis
The data were initially transcribed in Sinhalese, the native language of all the participants. Subsequently, two independent individuals fluent in both Sinhalese and English translated the transcribed data into English. The independent translations were compared by the research team, with the help of the independent translators, for accuracy of content. A final document was then prepared after consensus had been reached on the translated transcript contents. Data were analyzed using a framework approach [13]. This involved the researchers reading through the transcripts and developing a matrix of overarching and supporting themes. A framework approach provides an effective route map for the research process and facilitates both a case-based and a theme-based approach to data analysis [13].
Patients
Over a four-month period we recruited 40 patients with type 2 diabetes, comprising 11 (27.5%) males and 29 (72.5%) females. Five of the selected 45 patients declined to participate or did not attend the interviews. The mean age of the population was 55.4 (SD 8.9) years. The males (mean 56.4 years) were slightly older than the females (55.03 years). The patients were generally from a poor socio-economic background with low levels of education, high unemployment and low incomes. Seventy-one percent of the study population had not completed secondary education and 55% had never been employed. The mean monthly income was less than 100 USD per person. The mean BMI was 25.8 kg/m2. The females had a higher BMI than the males (males: 24.2 (SD 3.4), females: 26.67 (SD 4.2), p<0.001).
The mean duration of diabetes in the study population was 8.5 (SD 6.8) years. Over half the population (51.5%) had at least one diabetes related macro-vascular or micro-vascular complication.
Theme 1: Engaging in physical activity
All participants admitted that they had received advice on the benefits of PA and regular exercise. Participants were invited to describe their PA. A common misconception noted among all the females and most of the males was the inability to distinguish a busy daily schedule from activities that promoted PA. All the females said they are very busy and engage in "work" from morning to night.
When encouraged to describe "physical activity", almost all the females described household chores such as cooking, doing the laundry, washing, caring for children/grandchildren and gardening. Few (n = 12, 33%) females described walking as PA, but only 4 (10%) walked for 30 minutes or more. Walking was mostly undertaken by females to visit the neighbouring houses of friends or relatives. The male participants engaged in more PA than the females, mostly for daily transportation needs such as getting to and from work. None of our participants had access to personalized transport such as cars or motorbikes, but 3 males used a non-motorized bicycle.
None of the participants engaged in a regular regimented exercise programme, and any significant PA performed was part of their lifestyle.
Theme 2: Barriers to physical activity
Most participants (n = 38, 95%) mentioned that they encountered some form of barrier to engaging in regular PA. Three superordinate and 9 subordinate themes were developed following analysis, with selected quotations from the transcripts. The barriers described by the participants were grouped into 3 themes: health related, time and lifestyle management, and social. More than half the participants had more than one barrier to engaging in exercise.
Health related. A large number (n = 18, 45%) of participants said joint-related problems prevented them from engaging in PA. The loose term "arthritis" was used to describe weight-bearing large-joint problems. "Breathing problems" were described as another reason for not engaging in PA by 5 (12.5%) participants. A few (n = 3, 7.5%) described "chest pain when exercising" as a barrier.
Time and lifestyle management. "Lack of time", "inability to effectively manage time" and "lack of motivation" were highlighted by both males and females as barriers to engaging in an exercise programme. Some (n = 15, 37.5%) described their daily chores, household activities and employment as barriers to engaging in PA.
Environmental, social and cultural. All the females stated they were embarrassed and "uncomfortable" to exercise in public areas. Most participants stated that they did not have access to a suitable facility for exercising. A majority stated that PA outside their daily activities was alien to their lifestyle. A few males (n = 5, 12.5%) and all females were concerned about how exercising within the home or their closed community would be accepted by others. The majority recounted many instances of walking into puddles, stepping onto irregular roads and pavements, and being chased by unrestrained animals such as dogs and cattle as significant events that limited regimented exercise.
Theme 3: Overcoming barriers to exercise
Availability of privacy to engage in exercise was felt to be essential among most females. Both males and females felt that the availability of equipment and dedicated areas for exercise would improve PA. Most females felt that more support from family members to relieve them of their busy household schedules would encourage PA through regular exercise.
Discussion
To our knowledge, this study is the first qualitative study to exclusively explore the barriers to PA and exercise among Sri Lankan type 2 diabetic patients. The study population had a high BMI and high waist circumference. The population was middle-aged to elderly, predominantly retired or unemployed, with a low income and a lower level of education. The sample was drawn from a centre with a catchment of semi-urban and rural dwellers.
PA is known both to prevent and to control diabetes, independently as well as through weight control [14]. In a study performed in the UK, South Asians were found to be less active than other ethnic minorities [15]. However, this and previous studies used measurements of PA that are aligned with a Western lifestyle. In Sri Lanka, the amount of PA performed during daily chores remains largely unaccounted for when Euro-centric measures are used to quantify PA. While this may cause underreporting of PA, there is no other validated tool to measure PA in communities whose daily living demands significant physical exertion.
Furthermore, at present there are no studies from the South Asian region that explore the barriers to PA and exercise among rural communities living in their endogenous environment. Therefore, the present study remains unique.
In the current study, the participants were aware of the importance and benefits of PA. However, their knowledge was vague about the type and duration of PA. Interestingly, most of the female participants had difficulty in distinguishing a busy schedule from a physically active lifestyle. Walking was the commonest PA undertaken among our participants. However, it was both irregular and variable, and was mostly performed for social and transportation purposes rather than as a health-promoting activity. A recent qualitative study performed in Sri Lanka had similar findings regarding PA among diabetic patients [16]. The inability to distinguish PA from daily activities has been reported previously among women of Asian origin [17].
In many Asian communities, including Sri Lanka, the female gender has always been at a disadvantage due to many ethno-cultural factors. Important family decisions are most often made by the males of the family, often leading to a large proportion of indoor household chores being shouldered by the female, thus leaving her with inadequate time to attend to her wellbeing [18]. Almost all the participants said that they had barriers preventing them from engaging in PA. "Joint-related issues" and "breathing problems" emerged as limiting health issues. "Inability to prioritize time", "lack of motivation" and "inability to find time for PA due to household chores or employment" were the commonest time- and lifestyle-related causes. Socially, embarrassment, prioritizing of domestic activities and uncertainty about the social acceptance of regimented exercise were common reasons for not engaging in PA.
Fear of engaging in PA in the presence of diabetes and other comorbidities has previously been reported in Sri Lanka and elsewhere [16,19,20]. Similarly, lack of motivation and inability to prioritize time have been reported as significant barriers to exercise in previous studies [21,22]. The female participants in particular noted that they would be too embarrassed to undertake exercise in public areas, as they felt this was culturally alien to them. This finding is in alignment with previously reported research performed among Asian ethnic minorities living in the UK and the USA [15,17,23].
Similarly, family and social pressures to prioritize domestic responsibilities over PA among women of Asian origin have previously been reported as a barrier to PA [17,20,24].
The current study is unique in that it was performed on a rural, low-income, predominantly middle-aged to elderly population from a low-resource setting. Previous studies on PA among Asians have originated from ethnic minority communities living in affluent societies [15,17,24,25]. A recent meta-analysis has also confirmed the lack of data on PA among patients living in their indigenous settings [25].
We recommend that patient education regarding PA and exercise be more specific, leaving little or no ambiguity regarding the type and duration of PA. We also recommend that, where PA can be part of the daily lifestyle, patients and health care givers be educated to acknowledge and encourage it. More robust and culturally accepted methods of knowledge transfer, such as the existing primary health care system, should be explored. At the same time, PA prescription should be more individualized, taking into account physical disabilities and perceived negative outcomes and fears. Cultural and social beliefs and traditions should be taken into account in the formulation of PA guidelines at community and national levels. Development of infrastructure for exercise should be aligned with cultural beliefs and social norms, in an attempt to incorporate PA such as walking into daily lifestyles. Empowering rural communities and community leaders with the knowledge and benefits of PA may help it to become more widely accepted among the elderly, the less educated and women.
Strengths
This study is valuable in highlighting barriers encountered by a rural Asian population living in a resource-poor environment. The qualitative design, supported by individual in-depth interviews, was helpful in exploring contextual data not previously elicited by other approaches. The individual interviews were held in native Sinhalese, thus facilitating greater depth of exploration in a population of elderly participants with limited education.
Limitations
Limitations of this study include possible selection bias and contamination bias arising from the purposeful sampling we performed. The participants were recruited and interviewed in a clinic setting rather than in their own community, which may have limited the discourse of some participants.
Conclusion
Our study revealed several recurring themes on PA patterns and barriers to PA in a rural community. Lack of uniform health education and health, lifestyle and social barriers were highlighted in this study. Culturally appropriate programmes that strengthen health education and empower communities to overcome socio-economic barriers limiting PA should be implemented to better manage diabetes among rural Sri Lankan diabetic patients. We further highlight that new tools that take into account the PA of daily living need to be developed when rural endogenous populations are studied.
Insights Into the Complexity of Yeast Extract Peptides and Their Utilization by Streptococcus thermophilus
Streptococcus thermophilus, an extensively used lactic starter, is generally produced in yeast extract-based media containing a complex mixture of peptides whose exact composition remains elusive. In this work, we aimed at investigating the peptide content of a commercial yeast extract (YE) and identifying dynamics of peptide utilization during the growth of the industrial S. thermophilus N4L strain, cultivated in 1 l bioreactors under pH-regulation. To reach that goal, we set up a complete analytical workflow based on mass spectrometry (peptidomics). About 4,600 different oligopeptides ranging from 6 to more than 30 amino acids in length were identified during the time-course of the experiment. Due to the low spectral abundance of individual peptides, we performed a clustering approach to decipher the rules of peptide utilization during fermentation. The physicochemical characteristics of consumed peptides perfectly matched the known affinities of the oligopeptide transport system of S. thermophilus. Moreover, by analyzing such a large number of peptides, we were able to establish that peptide net charge is the major factor for oligopeptide transport in S. thermophilus N4L.
INTRODUCTION
Lactic acid bacteria (LAB) are a group of microorganisms displaying a wide range of properties making them suitable for various applications in fields such as health (Hill et al., 2017), chemistry (Othman et al., 2017; Sauer et al., 2017), or cosmetics (Izawa and Sone, 2014). However, they have historically been used for food production (Salque et al., 2013), and this still remains their main application, in particular for dairy starters. Therefore, detailed information is available about the growth of LAB in milk, especially regarding the proteolytic system responsible for their amino nitrogen nutrition (Kunji et al., 1996; Christensen et al., 1999). When considering the whole lifetime of dairy starters, the culture medium in which they are industrially produced also represents an important substrate for these bacteria. In contrast to milk culture, less information has been published on this particular step, although it is of great importance as it impacts both production yields and technological functionalities of the starter (Ummadi and Curic-Bawden, 2008). Production media usually contain complex substrates such as cell or protein hydrolysates, notably including yeast extracts (YEs), which are widely used for LAB industrial production. YEs correspond to the soluble fraction of molecules released after either yeast autolysis or controlled enzymatic lysis (Pasupuleti and Braun, 2008). They contain several classes of nutrients, among which peptides can represent more than 50% of the total YE mass, depending on the manufacturing process. They are thus mainly employed as amino nitrogen sources. While the importance of peptides in milk cultures has been extensively covered, especially with the LAB model Lactococcus lactis (Juillard et al., 1995, 1998; Kunji et al., 1995; Helinck et al., 2003), less is known about their utilization in YE-based media.
On the one hand, peptide utilization in YE naturally depends on the composition of the peptide fraction, which precisely constitutes the main hurdle to this study due to the abundance and diversity of peptides composing these substrates. To the best of our knowledge, the YE peptide fraction has only been characterized up to now via filtration or chromatographic fractionation strategies (Mosser et al., 2011, 2015; Spearman et al., 2014, 2016), or by analyzing the composition in peptide-bound amino acids (Kevvai et al., 2014, 2016). Nonetheless, the exact nature of the peptides composing YEs still remains elusive; these compounds are therefore commonly considered as black boxes containing a complex mixture of undefined peptides. On the other hand, peptide utilization is also dependent on the transport machinery of the cultivated LAB. Indeed, the only known LAB peptidases able to convert peptides into free amino acids are intracellularly located (Christensen et al., 1999; Savijoki et al., 2006). Several peptide transport systems are available in LAB, among which two are particularly well documented and present in various species. The first one is the di- and tripeptide transporter DtpT, a secondary transporter belonging to the PTR (peptide transport) family. It is a protein encoded by a single gene and constituted of 12 transmembrane domains that couples peptide transport with the proton gradient. It has been identified in several LAB, among which L. lactis (Hagting et al., 1994), Lactobacillus helveticus (Nakajima et al., 1997), and Streptococcus thermophilus (Hols et al., 2005; Goh et al., 2011). The second well-known system for peptide uptake is an ABC (ATP-binding cassette) transporter dedicated to oligopeptides (3 residues and more). It is known as Opp in many LAB species, in particular in L. lactis where it was first characterized (Tynkkynen et al., 1993). It has also been found in S. thermophilus, where it has been named Ami due to its high sequence homology with other streptococcal transporters (Garault et al., 2002). Both Opp and Ami share a similar overall genetic organization. They consist of five conserved proteins organized in a single operon: OppDFBCA and AmiACDEF, respectively. OppA and AmiA are lipoproteins anchored to the cell membrane and devoted to peptide binding and delivery to the translocon formed by OppBC/AmiCD. Peptide internalization is enabled by the cytoplasmic membrane-bound ATPases OppDF/AmiEF that energize the whole system upon ATP binding and hydrolysis. However, the Ami system possesses supplementary AmiA proteins, in opposition to OppA, present in only one copy, which is a characteristic of streptococci. Extra AmiA proteins are located in other parts of the genome, and their number is strain-dependent.
These two combined systems, DtpT and Opp/Ami, allow for a large supply of various peptides to the bacterial cells. However, peptide length is not the sole factor upon which these carriers operate, as peptides are not indiscriminately transported even when belonging to the adequate size range. In the case of DtpT, it has been evidenced in several species that it had a higher affinity for dipeptides over tripeptides, preferred hydrophobic substrates and worked less efficiently with cationic peptides (Nakajima et al., 1997; Fang et al., 2000; Solcan et al., 2012; Martinez Molledo et al., 2018). Concerning the oligopeptide carrier specificity, extensive information is available about L. lactis Opp. Specifically, in vivo, in vitro, and structural characterization data are available (Tynkkynen et al., 1993; Juillard et al., 1995, 1998; Detmers et al., 1998, 2000; Kunji et al., 1998; Lanfermeijer et al., 2000; Charbonnel et al., 2003; Helinck et al., 2003; Doeven et al., 2004, 2005; Berntsson et al., 2009, 2011). In comparison, apart from its initial genetic identification and characterization (Garault et al., 2002), only one in vivo study has been performed on the S. thermophilus Ami transporter to determine its substrate preferences (Juille et al., 2005). Complementary approaches based on the use of mixtures of milk peptides showed that transport seemed to be in favor of hydrophobic and positively charged oligopeptides, whereas long anionic ones were never taken up. However, this study was based on a limited number of peptides, notably resulting from the tryptic digestion of αs2-casein and therefore presenting biochemical similarities such as a positively charged C-terminal residue (Lys or Arg).
One can therefore question whether the trends shown by the consumption of such limited and specific peptide mixtures are representative of the Ami oligopeptide transporter specificities, and furthermore whether they can be extended to a YE-based medium where oligopeptides are abundant and available in the form of a large complex mix.
In this study, we specifically aimed at qualitatively characterizing the oligopeptide fraction of a YE-based growth medium and monitoring the dynamics of oligopeptide utilization occurring during the growth of an industrial S. thermophilus strain. For that purpose, we developed a specific peptidomics-based analytical pipeline combined with a dedicated data analysis workflow. This whole approach revealed the complexity of the YE peptide fraction as well as peptide utilization dynamics that notably reflected the activity and the specificity of the oligopeptide transporter Ami.
Strain and Preculture Conditions
This work used S. thermophilus N4L (Proust et al., 2018), a PrtS-positive, AmiA2-negative and AmiA3-positive industrial starter. This strain also contains the gene encoding the DtpT transporter, and does not possess the Ots peptide transport system present in some other S. thermophilus strains (Goh et al., 2011; Jameh et al., 2016). Therefore, DtpT and Ami are the only known peptide transport systems identified in this strain. It was stored at −80 °C in M17 broth (Terzaghi and Sandine, 1975) containing 1% (w/v) lactose and supplemented with 20% (v/v) glycerol. The strain was routinely pre-cultured in M17 broth containing 5% lactose at 42 °C.
Bioreactor Culture Conditions
Bioreactor cultures were performed in a 1 l bioreactor system BIOSTAT® Qplus (Sartorius Stedim Biotech, Germany). Two successive precultures were performed prior to inoculating the fermenters at 2% (v/v). The culture medium contained (w/v) 6% lactose, 0.01% calcium chloride and 2% of a YE provided by Procelys (Maisons-Alfort, France) belonging to the Nucell® range notably dedicated to dairy starters. A pool of vitamins, as used in the S. thermophilus chemically defined medium (Letort and Juillard, 2001), was also added to the culture medium in order to ensure repeatable and optimal growth performances similar to those obtained with equivalent Tween 80-containing media (data not shown). Tween 80 is a growth factor widely used for LAB cultures (Williams et al., 1947). However, it is a highly ionizable compound known to disrupt mass spectrometry analyses (Jäpelt et al., 2016) and thus was not usable in this study. The YE fraction of the medium was sterilized by heat treatment along with the bioreactors for 20 min at 120 °C. The other components were sterilized through a 0.22 µm pore-sized polyethersulfone membrane (Millipore, Guyancourt, France). The initial pH was adjusted beforehand to 6.6 with 2 M sodium hydroxide. The batch fermentations were carried out for 6 h at 40 °C, the stirring was fixed at 50 rpm and the pH was regulated at 6.0 with 2 M sodium hydroxide. Pseudo-anaerobic conditions were set up by sparging nitrogen at 0.2 l/min in the growth medium for 1 h before inoculation, and in the headspace at the same flow rate thereafter. Growth was followed by optical density (600 nm) measurement and by online monitoring of the volume of base added.
Three independent cultures were carried out during which samples were collected each hour for the first repetition. In order to maximize reproducibility, samples from the two subsequent repetitions were taken when the added volumes of base reached the corresponding levels of the first repetition.
Yeast Peptide Identification
YE peptides in the culture medium were identified using an approach adapted from a previous work (Guillot et al., 2016). Bacterial cells were first discarded from the fermentation samples by centrifugation (3000 g, 10 min, 4 °C). The peptide-containing supernatants were filtered through a 0.22 µm pore-sized PVDF membrane with low protein binding properties (Millipore). Trifluoroacetic acid (TFA) and acetonitrile (ACN) were then added at final concentrations of 0.1 and 5% (v/v), respectively. The mixes were centrifuged (3000 g, 10 min, 4 °C) after a one-night storage at 4 °C, then ultrafiltered successively through 10 and 3 kDa cut-off Amicon® Ultra-15 devices (Ultracel®-10K and Ultracel®-3K membranes, respectively; Millipore). The final permeates went through a solid phase extraction step using a 200 mg StrataX® cartridge according to the manufacturer's procedure (Phenomenex, Le Pecq, France). Briefly, the activated cartridge was loaded with 4 ml of sample, washed with 5% ACN and 0.1% TFA, and peptides were finally eluted with 1.5 ml of 50% ACN and 0.1% TFA in MilliQ water (Waters, St-Quentin-en-Yvelines, France). The eluted fractions were then dried overnight in a Speed-Vac system (Savant, Thermo Fisher Scientific France, Illkirch, France) and resolubilized in 400 µl of 0.1% TFA. Finally, the concentrates were ultrafiltered once again through a 3 kDa cut-off Amicon® Ultra-4 device to remove potential insoluble materials which might clog the column and spectrometer prior to a double chromatographic separation (Figure 1A).
The first separation was performed on a Nucleoshell RP 18plus reversed-phase column (150 × 4.6 mm, 2.7 µm, 87.5 Å, Macherey-Nagel, Germany) at 40 °C with an injection volume of 25 µl corresponding to 250 µl of the initial supernatant. A linear gradient of acetonitrile (1.6% per min) in ammonium formate (20 mM, pH 6.2) was applied at a 0.7 ml per min flow rate and fractions were collected every minute. Preliminary analyses showed that the initial and last fractions collected, respectively before 5 min and after 25 min of the chromatographic run, contained less than 5% of the identified peptides (data not shown) and thus were discarded. The remaining 21 fractions were subsequently dried overnight in a SpeedVac system and resuspended in 30 µl of 0.1% TFA and 2% ACN in MilliQ water. Aliquots of 5 µl were analyzed by a data-dependent tandem mass spectrometry approach on the PAPPSO platform (INRA, Jouy-en-Josas). All the modus operandi of the second chromatographic separation and peptide m/z detection were the same as those previously described (Guillot et al., 2016). Peptides were separated on a Pepmap C18 column (150 mm × 0.75 mm) at 300 nl/min with a gradient of ACN in formic acid. Eluted peptides were analyzed online on an LTQ-Orbitrap Discovery mass spectrometer (Thermo Fisher Scientific). Peptide ionization was performed with a spray voltage of 1.3 kV. Peptide ions were analyzed by the data-dependent method as follows: a full MS scan (m/z 350-1,600) was performed on the Orbitrap mass analyzer and the six most abundant doubly and triply charged peptides were submitted to MS/MS analysis with a collision energy of 35%. An exclusion window of 40 s was applied. Peptide identification was performed with X!Tandem version 2015.12.15.2 (Vengeance) and X!TandemPipeline (C++) version 0.2.16 (Langella et al., 2016) on the protein sequences of Saccharomyces cerevisiae S288C (version 2015-01-13).
The main peptide identification parameters were the following: no cleavage specificity, variable methionine oxidation state and mass tolerance for parent and fragment ions of ± 10 ppm and ± 0.4 Da, respectively. Peptides were conserved when showing an E-value ≤ 0.05, and only one peptide per parental protein was considered as sufficient to enable identification. Contaminant peptides were discarded using a standard proteomic contaminant database, and the False Discovery Rate was estimated using the reversed protein database.
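As a rough illustration, the decoy-based FDR estimate can be sketched in a few lines. The counting rule below (decoy hits divided by target hits) is one common convention for reversed-database searches, not necessarily the exact formula implemented in X!TandemPipeline:

```python
def estimate_fdr(psms):
    """Estimate the false discovery rate of a list of peptide-spectrum
    matches. Each PSM is flagged True if it matched the reversed (decoy)
    database and False if it matched a real (target) protein.

    Each decoy hit is assumed to signal roughly one false positive among
    the target hits, so FDR ~= decoys / targets."""
    decoys = sum(1 for is_decoy in psms if is_decoy)
    targets = len(psms) - decoys
    return decoys / targets if targets else 0.0

# Example: 990 target matches and 8 decoy matches give an FDR below 1%
psms = [False] * 990 + [True] * 8
print(estimate_fdr(psms))
```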
Peptide Physicochemical Characterization and Class Assignment
Peptides were characterized by nine different physicochemical properties listed in Table 1. With the exception of the proportions of aromatic residues and proline, which were manually calculated, all other properties were computed using the aminoAcidProperties function of the R package "alakazam" version 0.2.8 (Gupta et al., 2015). Default settings were kept for scaling and normalization procedures. Prior to differential analysis, peptides were grouped together in classes based on their physicochemical closeness (Figure 1B). For that purpose, a specific procedure was performed in order to assign every identified peptide to a unique physicochemical class. The range of each descriptor was divided into three intervals (Table 1 and Figure 2). For each physicochemical criterion, a given peptide can belong to only one interval. The physicochemical class or "barcode" of the peptide is then defined as the combination of the respective intervals of the nine descriptors. As an example, the peptide KGSIDEQHPRYGG belongs to the class "321223322": it is 13 residues long, so it belongs to interval 3 of length; its GRAVY index value is −1.73, interval 2; and so on.
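A minimal sketch of this class assignment might look as follows. The interval limits and the descriptor order are illustrative placeholders rather than the actual Table 1 values, and only three of the nine descriptors are shown, so the labels produced here are three digits long instead of nine:

```python
# Sketch of the "barcode" class assignment: each descriptor is split into
# three intervals and a peptide's class is the concatenation of its
# interval indices. Limits below are assumed, not the Table 1 values.
INTERVALS = {
    "length":     (8, 12),       # residues: <=8 -> 1, 9-12 -> 2, >12 -> 3
    "gravy":      (-2.0, -0.5),  # hydrophobicity (GRAVY index), assumed limits
    "net_charge": (-0.5, 0.5),   # assumed limits
}

def interval_index(value, limits):
    """Return 1, 2 or 3 depending on which interval the value falls in."""
    low, high = limits
    if value <= low:
        return 1
    return 2 if value <= high else 3

def barcode(descriptors):
    """Concatenate the interval indices of all descriptors into a class label."""
    return "".join(
        str(interval_index(descriptors[name], limits))
        for name, limits in INTERVALS.items()
    )

# A 13-residue, hydrophilic (GRAVY -1.73), positively charged peptide:
# length -> interval 3, GRAVY -> interval 2, net charge -> interval 3
print(barcode({"length": 13, "gravy": -1.73, "net_charge": 1.0}))  # "323"
```

With the full set of nine descriptors and the real Table 1 limits, the same logic would yield nine-digit classes such as the "321223322" quoted in the text.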
Statistical Analyses
Wilcoxon-Mann-Whitney (Wilcoxon, 1945; Mann and Whitney, 1947) and Kruskal-Wallis (Kruskal and Wallis, 1952) non-parametric tests were used in order to detect statistical differences in peptide property distributions between 2 or more than 2 groups, respectively. For that purpose, the ad hoc functions of the R package "stats" version 3.4.3 were employed. Differential analysis over time was performed on the abundance of peptide classes with the R script MassChroqR (version 0.3.8) of the MassChroQ pipeline (Valot et al., 2011). A contingency table was generated beforehand that contained the total spectral counts of each physicochemical class, i.e., the sum of the spectral counts of its constitutive peptides, in each analyzed sample. Classes showing low abundance (<5 spectra in all samples) and little variation (less than 50% of variation between the minimal and maximal average abundance observed in the different samples) were discarded. Finally, spectral count data of the remaining classes were modeled via a generalized linear model with a Poisson distribution. An analysis of variance (ANOVA) was then performed using a chi-squared test to detect significant variations in peptide class abundances, with time considered as the factor of analysis. The generated p-values were adjusted by an FDR procedure (Benjamini and Hochberg, 1995).
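The Benjamini-Hochberg adjustment used here (what R's p.adjust does with method "BH") can be sketched in plain Python; the p-values in the example are invented for illustration:

```python
def bh_adjust(pvalues):
    """Benjamini-Hochberg adjustment: rank the p-values, scale each one
    by n/rank, then enforce monotonicity from the largest rank down."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    adjusted = [0.0] * n
    running_min = 1.0
    for rank in range(n, 0, -1):  # walk from the largest p-value down
        i = order[rank - 1]
        running_min = min(running_min, pvalues[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

# Toy p-values: only the two smallest survive the p <= 0.01 threshold
pvals = [0.001, 0.008, 0.039, 0.041, 0.60]
adj = bh_adjust(pvals)
significant = [p <= 0.01 for p in adj]
print(adj, significant)
```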
Classes showing an adjusted p ≤ 0.01 were considered as varying significantly over time (Figure 1C). A kinetic profiling was then manually performed in order to separate classes showing either a decrease, an increase, or fluctuating variations over time. The physicochemical properties of peptides belonging to the different kinetic profiles were then discriminated by a principal component analysis (PCA) (R package "FactoMineR" version 1.41, Lê et al., 2008).

Figure 2 legend (fragment): As peptide length, hydrophobicity, bulkiness, polarity, and net charge show a bell-shaped distribution, the limits were chosen so that they framed the profile peak. The remaining properties correspond to frequencies of specific types of residues in the peptide amino acid composition. The first limit defines the "zero sub-class" (absence of the considered residues in the peptide composition), and the second limit was arbitrarily fixed at 12%.
RESULTS
S. thermophilus N4L was cultivated in a YE-based medium in 1 l bioreactors. The peptide content of the culture supernatants was monitored during growth using mass spectrometry. Peptide identification was performed on the initial medium before inoculation (t = 0 h) and then each hour from 3 to 6 h of growth, corresponding to the exponential and the early stationary growth phases (Supplementary Figure S1).
YE Contains a Large Number of Peptides With Different Levels of Abundance
Between 1,300 and 1,700 distinct peptides were identified per analyzed time point from approximately 1,900 to 2,600 fragmentation spectra, depending on the point considered (Table 2; means of 3 independent repetitions ± standard deviation). These values were consistent within biological repetitions as indicated by the low coefficients of variation (average variation around the mean of 7% both at peptide and spectra levels), showing a good reproducibility in terms of number of identified peptides. Nevertheless, the qualitative identification of peptides was not as effective, as, for a given time, only an average of 55% of the peptides was identified in all three repetitions. This confirms that non-tryptic peptide identification in complex mixtures is still technically challenging, as already discussed (Guillot et al., 2016). Combining all the identifications from all time points and repetitions resulted in a total of 4,598 distinct peptides identified (FDR < 1%) from 32,920 fragmentation spectra during the course of the growth. To estimate peptide relative abundance, spectral counting is considered the simplest method in a label-free approach (Liu et al., 2004; Colinge et al., 2005). Table 3 shows that the majority of peptides (close to 80%) were actually scarcely identified, with only one spectrum per peptide. Even though label-free mass spectrometry only allows relative quantification, these peptides are likely either to be the less abundant ones or to have poor yields of detection. Nevertheless, some peptides generated larger numbers of spectra. In particular, the top ones (more than 10 spectra per peptide) represented less than 1% of the identified peptides in each sample but 5-10% of the total number of spectra. Thus, they are likely to be the most abundant in the medium. All these data provide evidence that this YE contains a high peptide diversity, with a few of the peptides being over-abundant.
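As an illustration of spectral counting, the logic can be reproduced on a toy dataset (the peptide names and counts below are invented, not taken from the experiment):

```python
from collections import Counter

# Toy identifications: one list entry per fragmentation spectrum,
# holding the peptide that spectrum was assigned to (illustrative data).
spectra = (["PEPTIDEA"] * 12                       # one over-abundant peptide
           + ["PEPTIDEB"] * 3
           + ["PEPTIDEC", "PEPTIDED", "PEPTIDEE"])  # single-spectrum hits

counts = Counter(spectra)  # spectral count per distinct peptide

# Share of peptides seen with a single spectrum (likely low-abundance)
singletons = sum(1 for c in counts.values() if c == 1)
print(f"{singletons}/{len(counts)} peptides identified with one spectrum")

# Share of total spectra captured by "top" peptides (>10 spectra each)
top_spectra = sum(c for c in counts.values() if c > 10)
print(f"top peptides account for {top_spectra / len(spectra):.0%} of spectra")
```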
Peptide Physicochemical Properties
Each identified peptide was characterized by a combination of nine physicochemical properties. These properties have been chosen to describe comprehensively YE peptide diversity. They are summarized in Table 1. The internal distributions of each property calculated from the initial medium peptidome (before inoculation, t = 0 h) are represented in Figure 2. No significant difference could be detected (p ≤ 0.01) between the three repetitions, and the other time points of analysis showed good reproducibility as well (Supplementary Figure S2).
Most of the nine physicochemical properties of the peptides initially identified were distributed within a relatively broad range, reflecting a large physicochemical diversity. The detected peptides were mostly hydrophilic, had a slightly positive median net charge, and their average bulkiness was close to 14 Ų, which is moderately inferior to the mean of the 20 standard amino acids, ca. 15.4 Ų (Zimmerman et al., 1968). This last finding suggests a slight over-representation of relatively small residues in the yeast-derived peptide sequences. Finally, these identified peptides showed an average length of 10 residues, although this observation has to be tempered by the specificities of the analytical pipeline. Indeed, the upper length limit was driven by the purification process employed, and more specifically by the 3 kDa ultrafiltration steps, while the lower limit (no detection of peptides shorter than 6 residues) was a direct consequence of the chosen mass spectrometry detection range (350-1,600 m/z).
Identification of Peptide Kinetic Dynamics
The YE is composed of a large number of peptides displaying various physicochemical properties. This inherent diversity made it suitable for studying peptide utilization dynamics during the strain's growth. However, as previously described, most of the identified peptides showed intermediate to low levels of spectral abundance (Table 3), which limits the relevance of a kinetic study directly at the single-peptide scale. Therefore, in order to identify which peptides were utilized and on which physicochemical basis, a specific analytical workflow was developed. The underlying idea was to pool peptides showing close physicochemical properties into groups in order to combine their spectral counts and therefore perform the study not on individual peptides but on a larger scale (see the Materials and Methods section for explanations about the grouping procedure). The limits of each interval chosen during the grouping were fixed according to the internal distribution of each physicochemical property established from the initial peptidome (t = 0 h) before inoculation (see Figure 2 for the representation of these intervals and the general rules regarding their constitution). Theoretically, there are 3^9 = 19,683 different possible classes or "barcodes." In practice, not all of them are physically possible or exist biologically, and only 1,308 were identified experimentally here. Of this total, 612 classes (47%) contained only one peptide, whereas the top three most abundant classes pooled 41, 45 (two ex aequo classes) and 53 peptides, respectively.
After determining their relative abundance by summing the spectral counts of their constituent peptides, these classes were submitted to statistical analyses (Figure 1C). A first filter was applied to remove all classes showing low abundance (fewer than 5 spectra per class), as their quantification over time would not be reliable. At this step, 1,040 classes were discarded, i.e., 80% of the total. A second filter was then applied to the 268 remaining abundant classes to remove those considered largely constant, i.e., showing less than 50% variation between their minimal and maximal abundance values across the different samples. A total of 45 classes (3%) matched this description and were not included in the subsequent differential analysis. An analysis of variance was finally performed on the 223 remaining abundant classes showing sufficient variation amplitude.
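A minimal sketch of these two filters, assuming each class is stored as a list of per-sample spectral counts; the exact definition of "50% variation" is our reading of the criterion.

```python
# Two-step filtering before the differential analysis:
# (i) discard classes with fewer than 5 spectra in total,
# (ii) set aside classes varying by less than 50% between min and max.
def split_classes(classes: dict):
    abundant = {k: v for k, v in classes.items() if sum(v) >= 5}
    constant, variable = {}, {}
    for k, v in abundant.items():
        lo, hi = min(v), max(v)
        # "less than 50% variation" read here as (max - min) < 0.5 * max
        if hi == 0 or (hi - lo) < 0.5 * hi:
            constant[k] = v
        else:
            variable[k] = v          # goes on to the analysis of variance
    return constant, variable
```

The `variable` classes would then be tested (ANOVA with multiple-testing correction, as in the study).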
A total of 49 classes were found to vary significantly over time at a threshold of adjusted p ≤ 0.01. Their kinetic profiles are depicted in Supplementary Figure S3. These profiles were classified into three groups: classes showing an unambiguous decrease (36 classes) or increase (2 classes) over time, and classes whose time-course evolution was fluctuating (11 classes). However, from a bacterial physiology point of view, it is of interest to characterize not only the peptides utilized by the strain but also those left aside in the external medium. On this basis, the 45 constant classes discarded during the second filtering step were reintegrated with the 49 others for the last part of the analysis. In total, 94 classes were selected, falling into four different profiles whose main characteristics are given in Table 4. They comprised 1,060 different peptides, about 23% of the total identified (4,598), but accounted for about one third (10,412) of all spectra (32,920) assigned during the whole experiment. The remaining classes corresponded to low-abundance classes and abundant ones whose variations were not detected as significant.
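The assignment of a class to one of the four profiles could be sketched as below; the strict monotonicity rule is our simplification of the "unambiguous decrease/increase" criterion.

```python
# Illustrative classification of a class's time course into the four
# profiles (decrease / increase / fluctuating / constant). The strict
# monotonicity rule is our simplification, not the study's exact test.
def profile(counts, min_rel_var=0.5):
    lo, hi = min(counts), max(counts)
    if hi == 0 or (hi - lo) < min_rel_var * hi:
        return "constant"
    diffs = [b - a for a, b in zip(counts, counts[1:])]
    if all(d <= 0 for d in diffs):
        return "decrease"
    if all(d >= 0 for d in diffs):
        return "increase"
    return "fluctuating"
```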
Linking Kinetic Profiles to Peptide Composition
In an attempt to correlate these different kinetic dynamics with the physicochemical properties used to design the selected classes, a PCA was performed (Figure 3A). The first two components explained more than 60% of the total inertia. With the exception of peptide bulkiness and proline content, all other peptide properties were well represented on these axes, with absolute correlation coefficients above 0.6. In order of importance, the first axis separated the peptides according to their polarity, acidic content, charge, hydrophobicity and, to a lesser extent, bulkiness. The second axis mostly carried the information on basic and aromatic content as well as peptide length. Proline frequency was equally supported by both axes. This representation allowed the segregation of the four kinetic profiles previously identified (increasing, decreasing, constant, fluctuating). Peptides belonging to the constant and fluctuating profiles were spread over wide and partially overlapping areas, reflecting a common high physicochemical diversity. In contrast, the decreasing and increasing profiles were located in two specific and distinct zones. Strikingly, the decreasing profile corresponded exclusively to positively charged peptides (Figure 3B) that were also significantly shorter than average (p ≤ 0.01, median length of 8 amino acids). The positive charge resulted from a higher proportion of basic residues and/or a lower proportion of acidic residues than in other peptides. Moreover, these peptides were less polar and contained a higher proportion of hydrophobic residues. Their aromatic and proline content was more variable and did not seem to constitute a relevant discriminative factor. In contrast, the increasing profile was made up exclusively of negatively charged peptides that were also significantly longer (median length of 11 amino acids) and contained a higher amount of proline (median content = 18%).
Finally, peptides enclosed in the constant and fluctuating profiles displayed intermediate distributions with regard to their length and net charge. Their overall hydrophobicity was not significantly different from that of the increasing profile, and both displayed low and similar proline content. As an illustration, the most abundant peptides belonging to each of these kinetic profiles, together with the evolution of their spectral abundance, are given in Table 5.
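For readers wishing to reproduce this kind of analysis, a minimal PCA on a standardized class-by-property matrix can be written with numpy alone; the toy data below are random and purely illustrative.

```python
# Minimal PCA via SVD on a (classes x nine properties) matrix, mirroring
# the analysis above: standardize, project onto PC1/PC2, and report the
# fraction of total inertia carried by the first two axes.
import numpy as np

def pca_2d(X):
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:2].T                  # coordinates on PC1 and PC2
    explained = (S**2 / (S**2).sum())[:2]   # fraction of inertia per axis
    return scores, explained

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 9))                # 50 toy classes x 9 properties
scores, explained = pca_2d(X)
```

With real data, plotting `scores` colored by kinetic profile would reproduce the segregation shown in Figure 3A.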
DISCUSSION
By using a mass spectrometry-based approach coupled with appropriate statistical tools, we were able not only to shed light on the peptide content of a yeast extract-based fermentation medium, but also to identify, on a large scale, distinct patterns of peptide abundance variation during the growth of Streptococcus thermophilus. The YE displayed a high peptide diversity, with more than 4,000 distinct peptides identified. It possibly contains even more peptides, as the identification of some of them remains technically challenging (Guillot et al., 2016; Bingeman et al., 2017). This seems to be a general feature of YEs, as similar results have been obtained with another Nucell® YE provided by Procelys (data not shown). The number of identified peptides was large enough, with an appropriate physicochemical diversity, to enable a robust analysis of peptide utilization by S. thermophilus N4L. By pooling peptides into physicochemical classes, we were able (i) to identify consistent kinetic profiles and (ii) to compensate in part for the overall low relative abundance of individual peptides. As this grouping procedure was based on peptide physicochemical properties, which are known to be leading factors in their use by bacteria, the temporal evolutions observed in the selected classes can reasonably be considered to mainly reflect the peptide utilization dynamics of strain N4L.
Four relevant kinetic profiles of peptide utilization were observed: stagnation, decrease, increase and fluctuation of spectral counts over time. These patterns may have two plausible origins: transport into the cells and/or peptide cleavage mediated by an extracellular hydrolytic activity. The latter is especially suggested by the presence of increasing profiles. Indeed, as the fermentation was performed in batch mode, the most likely explanation is that some peptides were gradually hydrolyzed by the strain into smaller fragments. These fragments can share the same barcodes as other peptides of the initial medium. Some of them are likely to be used by the strain and thus do not accumulate in large amounts in the medium; others are not, and progressively accumulate in the external medium during growth. This hypothesis is supported by the fact that the increasing classes are constituted of numerous scarce peptides, many of which were only detected in the later stages of fermentation. The cell-envelope protease PrtS is the most plausible effector of this increase, but the membrane-anchored protease HtrA could also play a role (Guillot et al., 2016). The hypothesis that some spectral count variations are due to cell lysis cannot be completely ruled out. However, considering the high number of intracellular peptidases in S. thermophilus and their overall large panel of specificities (Christensen et al., 1999; Hols et al., 2005; Savijoki et al., 2006), significant peptide cleavage by intracellular peptidases released during cell lysis is very unlikely: otherwise, all classes of peptides would have been affected, regardless of their biochemical properties.
Therefore, the observed peptide dynamics can be explained as follows: (i) decrease: transport and/or cleavage of initially present peptides; (ii) increase: accumulation of cleavage products at a higher rate than their transport (if any transport); (iii) stagnation: neither transport/cleavage nor accumulation, or both at similar rates; (iv) fluctuating profile: combination of transport/cleavage and accumulation within the same physicochemical class at various changing rates over time, or artifactual noise (peptide identification variability).
In that respect, the presence of a large group of decreasing basic peptides is noteworthy, and it is sensible to assume that this decrease is predominantly the consequence of transport. First, the decrease depended on the global physicochemical properties of the peptides, and not on their amino acid sequence. This observation does not argue in favor of hydrolysis by serine proteases such as PrtS and HtrA, whose activity is known to be strongly dependent on the amino acid sequence flanking the cleavage site (Perona and Craik, 1995; Siezen and Leunissen, 1997). Second, the main conclusion of our study is that this decreasing profile is primarily linked to a systematic global positive net charge combined with a significantly shorter length and a higher proportion of hydrophobic residues. This description perfectly matches that of the previously mentioned study performed with a protease-negative strain on a small number of milk-derived peptides (Juille et al., 2005). Our work, by relying on a vastly larger number of peptides, not only consolidates these former results but also suggests a dominant role of a positive net charge in peptide transport. It is thus reasonable to assume that most, if not all, of these decreasing peptides were preferentially consumed by the strain and transported inside the cells. This transport was mediated by the Ami system, which is the only oligopeptide carrier identified in the strain. It has been demonstrated in L. lactis that the oligopeptide-binding protein (OppA) primarily determines the overall peptide specificity of its cognate transporter (Doeven et al., 2004). Similarly, the consumption of positively charged peptides by S. thermophilus N4L is likely to be essentially dictated by its own oligopeptide-binding proteins, namely AmiA1 and AmiA3.
Moreover, it has been formerly established in vivo that peptide transport in both species displays very similar specificities (Juillard et al., 1998; Juille et al., 2005). Indeed, it was shown that L. lactis also preferentially uses hydrophobic basic peptides ranging between 600 and 1,100 Da, although this bacterium can accommodate surprisingly long peptides of up to 35 residues (Doeven et al., 2005), even longer than the maximal size (24 residues) observed with S. thermophilus (Garault et al., 2002). The ability of L. lactis to carry peptides of various sizes and its preference for hydrophobic peptides containing branched-chain amino acids, in particular isoleucine, were explained later thanks to the crystallization of OppA (Berntsson et al., 2009, 2011). However, an apparent discrepancy remains in the literature between in vivo studies and structural characterization concerning the role of peptide net charge in L. lactis OppA-based selection. While this factor was identified in vivo as a major feature for transport (Juillard et al., 1998), it has not been found to come into play in the binding mechanisms inferred from structural data. In that perspective, it is noteworthy that the crystal structure of the unliganded E. coli OppA binding site revealed a negatively charged surface responsible for the preferential binding of basic peptides (Klepsch et al., 2011). This finding was subsequently shown to apply to S. typhimurium OppA as well. Therefore, further work is needed to elucidate the role of peptide charge in both L. lactis and S. thermophilus, as corroborating evidence seems to indicate that this factor may be a widespread requisite feature for peptide transport.
Milk is considered the natural ecological niche of S. thermophilus, and caseins are the main source of amino acids during its growth in milk. Analysis of the amino acid composition of β- and κ-caseins (the caseins mainly cleaved by PrtS) reveals a high prevalence of branched-chain amino acids (22.5% of the total amino acids of the two proteins), suggesting a good match between this composition and the preferences of the Ami transport system underlined in the present study. However, the frequency of positively charged amino acids in the casein sequences is in the same range as that of negatively charged amino acids (9.5 and 9.3%, respectively). As the positive net charge of peptides plays a key role in peptide transport, the charged amino acid composition of the caseins alone cannot account for the preferences of the Ami system. This indicates that the specificity of casein cleavage by PrtS determines the ability of released peptides to be used by S. thermophilus during growth in milk.
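The composition argument above amounts to counting residue-class frequencies in a protein sequence, which can be sketched as follows; the short sequence below is a made-up stand-in, not the actual casein sequence.

```python
# Residue-class frequencies in a protein sequence, as used for the casein
# composition argument. The test sequence is illustrative only.
BRANCHED = set("VLI")   # branched-chain amino acids
POSITIVE = set("KRH")   # positively charged residues
NEGATIVE = set("DE")    # negatively charged residues

def class_frequencies(seq: str) -> dict:
    n = len(seq)
    return {
        "branched_chain": sum(c in BRANCHED for c in seq) / n,
        "positive": sum(c in POSITIVE for c in seq) / n,
        "negative": sum(c in NEGATIVE for c in seq) / n,
    }

freqs = class_frequencies("MKVLILACLVALALARE")
```

Applied to the real β- and κ-casein sequences, this computation yields the 22.5%, 9.5% and 9.3% figures quoted above.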
CONCLUSION
In conclusion, the identification of complex mixtures of peptides by mass spectrometry, although still technically challenging, is progressively gaining attention (Bingeman et al., 2017) and is proving to be an excellent exploratory approach, not only to unravel the peptide content of complex media but also to study the diverse oligopeptide utilization patterns of a bacterial species during its growth. Combined with complementary approaches, it opens avenues for further characterization and optimization of protein hydrolysate-based culture media and could also be used to deepen our knowledge of bacterial physiology.
AUTHOR CONTRIBUTIONS
LP performed the experimental study. LP and EH performed the MS analyses. LP and VJ wrote the manuscript. All authors contributed to conception and design of the study, to manuscript revision, and approved the submitted version.
FUNDING
This work was funded by the Association Nationale de la Recherche et de la Technologie (ANRT, Contract Nr 2015/0599).
ACKNOWLEDGMENTS
We thank the INRA PAPPSO proteomics platform (http://pappso.inra.fr, supported by the Ile-de-France regional council and IBISA) for providing mass spectrometry facilities. We thank Mylène Boulay and Sophie Liuu for their support and technical expertise. Finally, we also thank Simon Poirier, Rozenn Gardan and Françoise Rul for their critical reading and helpful discussions.
Regio‐ and Stereoselective Homologation of 1,2‐Bis(Boronic Esters): Stereocontrolled Synthesis of 1,3‐Diols and Sch 725674
Abstract 1,2‐Bis(boronic esters), derived from the enantioselective diboration of terminal alkenes, can be selectively homologated at the primary boronic ester by using enantioenriched primary/secondary lithiated carbamates or benzoates to give 1,3‐bis(boronic esters), which can be subsequently oxidized to the corresponding secondary‐secondary and secondary‐tertiary 1,3‐diols with full stereocontrol. The transformation was applied to a concise total synthesis of the 14‐membered macrolactone, Sch 725674. The nine‐step synthetic route also features a novel desymmetrizing enantioselective diboration of a divinyl carbinol derivative and high‐yielding late‐stage cross‐metathesis and Yamaguchi macrolactonization reactions.
The Fawcett Flask: custom glassware for the addition of solutions at cryogenic temperatures to a second solution at the same temperature.
Operation: After flame-drying, the flask is attached to a Schlenk manifold, and the two ground-glass joints are stoppered with suba-seals before applying a high vacuum until the glassware has cooled to ambient temperature. The necessary reagents and solvents are then added into the appropriate sides of the flask (we typically add the reagents as a solution to the receiving flask a few minutes prior to transfer, as it can be difficult to control the stirring in both halves when the flask is clamped into position). The flask can then be comfortably lowered into a cooling bath for the required reaction time. To perform the inverse-addition procedure, the flask needs to be unclamped (we have found that the flask can sit happily in a cooling bath without clamping for short periods of time) and held at an angle to allow proper stirring in both halves. Simply tipping the flask, without removing either side from the cooling bath, allows the solution in the delivering half to pour across into the receiving half. We have found that the rate of addition is easy to control, so that it is comparable to dropwise or small portion-wise addition. To pour the final few drops across, it is necessary to close the Schlenk tap on the side of the receiving flask and insert a needle into the suba-seal of the receiving flask. A finger over the end of the needle is sufficient to control the rate of addition of these final few drops.

IR (neat) νmax: 2956, 2928, 2856, 1334, 1252, 1146, 1084, 834 and 773 cm⁻¹
The collected data was identical to that described above.
Entry 2 -Diamine-free homologation:
n-BuLi (1.6 M, 0.34 mL, 0.54 mmol, 1.0 eq.) was added dropwise to a solution of (R)-3-(4methoxyphenyl)-1-(tributylstannyl)propyl diisopropylcarbamate (317 mg, 0.54 mmol, 1.0 eq.) in Et2O (2.72 mL) at -78 °C (dry ice/acetone). After 1 h a solution of 2 (199 mg, 0.54 mmol, 1.0 eq.) in Et2O (0.54 mL) was added rapidly and the resulting solution was allowed to stir at the same temperature for 1 h. The solution was warmed to ambient temperature and then heated at 35 °C (oil bath) for 16 h. After cooling to ambient temperature, the solution was cooled to 0 °C (ice/water) before adding a 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL), which was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. The mixture was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL). The combined organic fractions were dried over Na2SO4, filtered and concentrated under reduced pressure. The broad regions containing products A, and B and C, were collected by purification of the crude residue by flash column chromatography (SiO2; 40:60 EtOAc:pentane) to yield A as a 1:2.27 mixture with pinacol (97 mg, 43% A) as a colorless viscous oil, and a 1:2.13 mixture of B and C (117 mg, 17% B, 37% C) as a colorless viscous oil.
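The quantities quoted in these procedures follow n = C × V. As a quick sanity check of the Entry 2 figures (a convenience sketch, not part of the original procedure):

```python
# Stoichiometry helper: millimoles from molarity and volume (M * mL = mmol),
# checked against the Entry 2 quantities quoted above.
def mmol(conc_M: float, vol_mL: float) -> float:
    return conc_M * vol_mL

n_buli = mmol(1.6, 0.34)        # n-BuLi: ~0.544 mmol, quoted as 0.54 mmol
stannane = 0.54                 # mmol of the stannyl carbamate (1.00 eq.)
equivalents = n_buli / stannane # ~1.0 eq., as stated in the procedure
```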
Entry 3 -TMEDA-ligated carbenoid homologation:
sec-BuLi (0.42 mL, 1.30 M, 0.55 mmol, 1.00 eq.) was added dropwise to a solution of 1a (168 mg, 0.57 mmol, 1.05 eq.) and TMEDA (0.09 mL, 0.57 mmol, 1.05 eq.) in anhydrous Et2O (2.87 mL) at -78 °C (dry ice/acetone). After 2 h, a solution of rac-2 (200 mg, 0.55 mmol, 1.00 eq.) in anhydrous Et2O (0.55 mL) was added dropwise and the resulting solution was allowed to react for a further 1 h. The solution was warmed to ambient temperature and then heated at 35 °C (oil bath) for 16 h. After cooling to ambient temperature, the solution was cooled to 0 °C (ice/water) before adding a 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL), which was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. The mixture was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL). The combined organic fractions were dried over Na2SO4, filtered and concentrated under reduced pressure. The broad regions containing products A, and B, C and D, were collected by purification of the crude residue by flash column chromatography (SiO2; 40:60 EtOAc:CH2Cl2) to yield A (70 mg, 47%) as a colorless viscous oil, and a mixture of rearranged carbamate (65 mg), B and D (27 mg, 5% B; 46 mg, 19% D) as a colorless viscous oil. No presence of C was detected.
Entry 4 -TMS-diazomethane homologation:
According to a modified literature procedure, 2 (100 mg, 0.27 mmol, 1.00 eq.) was dissolved in anhydrous toluene (1.00 mL) and trimethylsilyldiazomethane (0.55 mL, 2.0 M solution in hexanes, 1.09 mmol, 4.00 eq.) was added. The resulting solution was heated for 8 h at 80 °C (oil bath) before cooling to ambient temperature, adding another portion of trimethylsilyldiazomethane (0.55 mL, 2.0 M solution in hexanes, 1.09 mmol, 4.00 eq.) and heating at 80 °C (oil bath) for 16 h. After cooling to ambient temperature, a few drops of acetic acid were added to quench any unreacted trimethylsilyldiazomethane. Analysis of the crude reaction mixture by GC-MS showed only the starting 1,2-bis(boronic ester) (2) and no other compounds, even in trace amounts.

Regioselective Homologation of 1,2-bis(boronic ester) development

and was then cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL). The combined organic fractions were dried over Na2SO4, filtered and concentrated under reduced pressure. The crude residue was purified by flash column chromatography (SiO2; 75:25 petroleum ether 40/60:EtOAc) to yield (S,S)-3 (97 mg, 60%) as a gummy white solid and 3b (6 mg, 3%) as a colorless oil.
Entry 3:
3-(4-Methoxyphenyl)propyl diisopropylcarbamate (337 mg, 1.15 mmol, 1.05 eq.), (+)-sparteine (0.26 mL, 1.15 mmol, 1.05 eq.) and anhydrous Et2O (5.75 mL) were added to the left-hand side of a flame-dried Fawcett Flask (see Figure 1) purged with N2. The solution was cooled to -78 °C (dry ice/acetone) before adding sec-BuLi (0.85 mL, 1.28 M, 1.09 mmol, 1.00 eq.) dropwise over 5 min and leaving to react for 2 hr at this temperature. (R)-2 (400 mg, 1.09 mmol, 1.00 eq.) was dissolved in anhydrous Et2O (1.10 mL) in the right-hand flask and allowed to cool to -78 °C over 5 min. The solution of lithiated carbamate was added dropwise to the boronic ester solution over 5 min before leaving to react for a further 1 hr at the same temperature. After warming to ambient temperature, the flask was sealed and heated at 35 °C (oil bath) for 16 hr. The flask was allowed to cool to ambient temperature before adding THF (5.75 mL) and one crystal of BHT, and was then cooled to 0 °C (ice/water).

to the left-hand side of a flame-dried Fawcett Flask (see Figure 1) purged with N2. The solution was cooled to -78 °C (dry ice/acetone) before adding sec-BuLi (0.85 mL, 1.28 M, 1.09 mmol, 1.00 eq.) dropwise over 5 min and leaving to react for 2 hr at this temperature. (R)-2 (400 mg, 1.09 mmol, 1.00 eq.) was dissolved in anhydrous Et2O (1.10 mL) in the right-hand flask and allowed to cool to -78 °C over 5 min. The solution of lithiated benzoate was added dropwise to the boronic ester solution over 5 min before leaving to react for a further 1 hr at the same temperature. The flask was allowed to warm to ambient temperature before adding THF (5.75 mL) and one crystal of BHT, and was then cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (8 mL) and 30% aqueous H2O2 (4 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution.
This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (20 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×40 mL). The combined organic fractions were dried over Na2SO4, filtered and concentrated under reduced pressure. The crude residue was purified by flash column chromatography (SiO2; 75:25 petroleum ether 40/60:EtOAc) to yield (S,S)-3 (183 mg, 57%) as a gummy white solid and 3b (11 mg, 2%) as a colorless oil.

eq.), (+)-sparteine (0.13 mL, 0.57 mmol, 1.05 eq.) and anhydrous Et2O (2.85 mL) were added to a flame-dried Schlenk-tube purged with N2. The solution was cooled to -78 °C (dry ice/acetone) before adding sec-BuLi (0.42 mL, 1.30 M, 0.55 mmol, 1.00 eq.) dropwise over 5 min and leaving to react for 2 hr at this temperature. (R)-2 (242 mg, 0.66 mmol, 1.20 eq.) was dissolved in anhydrous Et2O (0.66 mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature. The flask was allowed to warm to ambient temperature before adding THF (2.85 mL) and one crystal of BHT, and was then cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL). The combined organic fractions were dried over Na2SO4, filtered and concentrated under reduced pressure. The crude residue was purified by flash column chromatography (SiO2; 75:25 petroleum ether 40/60:EtOAc) to yield (S,S)-3 (111 mg, 69%) as a gummy white solid.
Entry 9: 3-(4-Methoxyphenyl)propyl 2,4,6-triisopropylbenzoate (226 mg, 0.57 mmol, 1.05 eq.), (+)-sparteine (0.13 mL, 0.57 mmol, 1.05 eq.) and anhydrous Et2O (2.85 mL) were added to a flame-dried Schlenk-tube purged with N2. The solution was cooled to -78 °C (dry ice/acetone) before adding sec-BuLi (0.42 mL, 1.30 M, 0.55 mmol, 1.00 eq.) dropwise over 5 min and leaving to react for 2 hr at this temperature. (R)-2 (242 mg, 0.66 mmol, 1.20 eq.) was dissolved in anhydrous Et2O (0.66 mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature. A 1 M solution of MgBr2 in MeOH (0.83 mL, 0.83 mmol, 1.50 eq.) was added dropwise over 2 min and the mixture was allowed to react for a further 2 min before warming to ambient temperature. THF (2.85 mL) and one crystal of BHT were added, and the mixture was subsequently cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL).
Entry 10: 3-(4-Methoxyphenyl)propyl diisopropylcarbamate (167 mg, 0.57 mmol, 1.05 eq.), (+)-sparteine (0.13 mL, 0.57 mmol, 1.05 eq.) and anhydrous Et2O (2.85 mL) were added to a flame-dried Schlenk-tube purged with N2. The solution was cooled to -78 °C (dry ice/acetone) before adding sec-BuLi (0.42 mL, 1.30 M, 0.55 mmol, 1.00 eq.) dropwise over 5 min and leaving to react for 2 hr at this temperature. (R)-2 (242 mg, 0.66 mmol, 1.20 eq.) was dissolved in anhydrous Et2O (0.66 mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature. After warming to ambient temperature, the Et2O was carefully removed under reduced pressure and replaced with anhydrous CHCl3 (4.0 mL) before sealing the Schlenk-tube and heating at 65 °C (oil bath) for 2 hr. The flask was allowed to cool to ambient temperature before adding THF (2.85 mL) and one crystal of BHT, and was then cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL). The combined organic fractions were dried over Na2SO4, filtered and concentrated under reduced pressure. The crude residue was purified by flash column chromatography (SiO2; 75:25 petroleum ether 40/60:EtOAc) to yield (S,S)-3 (101 mg, 62%) as a gummy white solid.
Entry 11: 3-(4-Methoxyphenyl)propyl diisopropylcarbamate (167 mg, 0.57 mmol, 1.05 eq.), (+)-sparteine (0.13 mL, 0.57 mmol, 1.05 eq.) and anhydrous Et2O (2.85 mL) were added to a flame-dried Schlenk-tube purged with N2. The solution was cooled to -78 °C (dry ice/acetone) before adding sec-BuLi (0.42 mL, 1.30 M, 0.55 mmol, 1.00 eq.) dropwise over 5 min and leaving to react for 2 hr at this temperature. (R)-2 (242 mg, 0.66 mmol, 1.20 eq.) was dissolved in anhydrous Et2O (0.66 mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature. A freshly prepared 1 M solution of MgBr2 in Et2O (0.83 mL, 0.83 mmol, 1.50 eq.) was added dropwise over 2 min and the mixture was allowed to react for a further 2 min before warming to ambient temperature. At this stage ¹¹B NMR showed no ate-complex and TLC showed no formation of singly or doubly homologated products.

min and leaving to react for 2 hr at this temperature. (R)-2 (200 mg, 0.55 mmol, 1.00 eq.) was dissolved in anhydrous Et2O (0.55 mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature. MeOH (0.1 mL) was added dropwise before allowing the flask to warm to ambient temperature. THF (3.40 mL) and one crystal of BHT were added before cooling to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL). The combined organic fractions were dried over Na2SO4, filtered and concentrated under reduced pressure.
The crude residue was purified by flash column chromatography (SiO2; 75:25 petroleum ether 40/60:EtOAc) to yield (S,S)-3 (107 mg, 66%) as a gummy white solid.

leaving to react for 2 hr at this temperature. (R)-2 (200 mg, 0.55 mmol, 1.00 eq.) was dissolved in anhydrous Et2O (0.55 mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature. Anhydrous methanol (0.1 mL) was added dropwise and the reaction was left for a further 2 min. After warming to ambient temperature, the Schlenk-tube was sealed and heated at 35 °C (oil bath) for 16 hr. The flask was allowed to cool to ambient temperature before adding THF (3.40 mL) and one crystal of BHT, and was then cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL). The combined organic fractions were dried over Na2SO4, filtered and concentrated under reduced pressure. The crude residue was purified by flash column chromatography (SiO2; 75:25 petroleum ether 40/60:EtOAc) to yield (S,S)-3 (102 mg, 63%) as a gummy white solid and 3b (5 mg, 2%) as a colorless oil.
3-(4-Methoxyphenyl)propyl diisopropylcarbamate (167 mg, 0.57 mmol, 1.05 eq.), (−)-sparteine (0.13 mL, 0.57 mmol, 1.05 eq.) and anhydrous Et2O (2.85 mL) were added to a flame-dried Schlenk-tube purged with N2. The solution was cooled to −78 °C (dry ice/acetone) before adding sec-BuLi (0.42 mL, 1.30 M, 0.55 mmol, 1.00 eq.) dropwise over 5 min and leaving to react for 2 hr at this temperature. (S)-2 (242 mg, 0.66 mmol, 1.20 eq.) was dissolved in anhydrous Et2O (0.66 mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature. After warming to ambient temperature the Schlenk-tube was sealed and heated at 35 °C (oil bath) for 16 hr. The flask was allowed to cool to ambient temperature before adding THF (2.85 mL) and one crystal of BHT, and was then cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL). The combined organic fractions were dried over Na2SO4, filtered and concentrated under reduced pressure. The crude residue was purified by flash column chromatography (SiO2; 75:25 petroleum ether 40/60:EtOAc) to yield (R,R)-3 (105 mg, 65%) as a gummy white solid.
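Procedures like the ones above quote both masses/volumes and equivalents, and the two can be cross-checked arithmetically. A minimal sketch of that check, assuming the molecular formula C17H27NO3 for 3-(4-methoxyphenyl)propyl diisopropylcarbamate (our own assignment, not stated in the text):

```python
# Sanity-check the stated stoichiometry of the carbamate lithiation step.
# MW_CARBAMATE is computed from the assumed formula C17H27NO3.

def mmol(mass_mg: float, molar_mass_g_per_mol: float) -> float:
    """Convert a mass in mg to mmol for a given molar mass in g/mol."""
    return mass_mg / molar_mass_g_per_mol

MW_CARBAMATE = 17 * 12.011 + 27 * 1.008 + 14.007 + 3 * 15.999  # ~293.4 g/mol

carbamate_mmol = mmol(167, MW_CARBAMATE)   # 167 mg charged -> ~0.57 mmol
sbuli_mmol = 0.42 * 1.30                   # 0.42 mL of 1.30 M sec-BuLi -> ~0.55 mmol

print(f"carbamate: {carbamate_mmol:.2f} mmol (stated 0.57 mmol, 1.05 eq.)")
print(f"sec-BuLi:  {sbuli_mmol:.2f} mmol (stated 0.55 mmol, 1.00 eq.)")
print(f"ratio:     {carbamate_mmol / sbuli_mmol:.2f}")
```

The computed values reproduce the quoted mmol figures, and their ratio matches the ~1.05:1.00 equivalents stated in the procedure.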
This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×10 mL). The combined organic fractions were dried over Na2SO4, filtered and concentrated under reduced pressure. The crude residue was analysed by 13C NMR and found to contain an 81:19 ratio of A:B.
Experiment B:
sec-BuLi (0.27 mL, 1.30 M, 0.35 mmol, 1.00 eq.) was added dropwise to a solution of 1a (108 mg, 0.37 mmol, 1.05 eq.) and (+)-sparteine (0.08 mL, 0.37 mmol, 1.05 eq.) in anhydrous Et2O (1.84 mL) at −78 °C (dry ice/acetone). After 2 h, a 1:1 mixture of 39 (99 mg, 0.35 mmol, 1.00 eq.) and rac-2 (129 mg, 0.35 mmol, 1.00 eq.) in anhydrous Et2O (0.35 mL) was quickly added and the resulting solution was left to react for a further 1 h. After warming to ambient temperature the solution was heated at 35 °C (oil bath) for 16 h. The flask was allowed to cool to ambient temperature before adding THF (2 mL), and was then cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (2 mL) and 30% aqueous H2O2 (1 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (5 mL) was carefully added and the reaction mixture was extracted with EtOAc (3 × 10 mL).
The combined organic fractions were dried over Na2SO4, filtered and concentrated under reduced pressure. The crude residue was analysed by 13C NMR and found to contain a 54:46 ratio of A:C.
The collected data was identical to that described above.
Aliquots of the reaction mixtures were oxidised and protected as acetonides 6 for analysis of enantiomeric purity by chiral-GC. The collected data was identical to that described above. rac-56 was synthesised using the following procedure:
The collected data was identical to that described above.
Aliquots of the reaction mixtures were oxidised and protected as acetonides 6 for analysis of enantiomeric purity by chiral-GC.

(1.85 g, 7.30 mmol, 1.05 eq.) were added to a flame-dried Schlenk-tube purged with N2. THF (7.00 mL) was added before sealing the flask and heating at 80 °C (oil bath) for 30 mins. After cooling to ambient temperature 4,4-dimethyl-1-pentene (1.00 mL, 6.96 mmol, 1.00 eq.) was added before re-sealing and heating for 3 hr at 60 °C (oil bath). The solution was then cooled to ambient temperature and concentrated under reduced pressure. The crude residue was directly purified by flash column chromatography (SiO2; 95:5 pentane:Et2O) to yield 57 (

The collected data was identical to that described above.

(1.64 g, 6.47 mmol, 1.05 eq.) were added to a flame-dried Schlenk-tube purged with N2. THF (6.16 mL) was added before sealing the flask and heating at 80 °C (oil bath) for 30 mins. After cooling to ambient temperature 43 (2.00 g, 6.16 mmol, 1.00 eq.) was added before re-sealing and heating for 3 hr at 60 °C (oil bath). The solution was then cooled to ambient temperature and concentrated under reduced pressure. The crude residue was directly purified by flash column chromatography (SiO2; 95:5 pentane:Et2O) to yield 58 (
Carbenoid Scope

(3R,5R)-2-Methylundecane-3,5-diol (4):
Isobutyl 2,4,6-triisopropylbenzoate (32) (175 mg, 0.57 mmol, 1.05 eq.), (−)-sparteine (0.13 mL, 0.57 mmol, 1.05 eq.) and anhydrous Et2O (2.85 mL) were added to a flame-dried Schlenk-tube purged with N2. The solution was cooled to −78 °C (dry ice/acetone) before adding sec-BuLi (0.42 mL, 1.30 M, 0.55 mmol, 1.00 eq.) dropwise over 5 min and leaving to react for 3 hr at this temperature. (S)-2 (240 mg, 0.66 mmol, 1.20 eq.) was dissolved in anhydrous Et2O (0.66 mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature. After warming the solution to ambient temperature, THF (2.85 mL) and one crystal of BHT were added, and it was then cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL). The combined organic fractions were dried over Na2SO4, filtered and concentrated under reduced pressure. The crude residue was purified by flash column chromatography (SiO2; 90:10 pentane:EtOAc) to yield 4 (86 mg, 78%) as a viscous colorless oil.

mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature. After warming to ambient temperature the Schlenk-tube was sealed and heated at 35 °C (oil bath) for 16 hr. The flask was allowed to cool to ambient temperature before adding THF (2.85 mL) and one crystal of BHT, and was then cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution.
This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL).

before leaving to react for a further 1 hr at the same temperature. After warming to ambient temperature the Schlenk-tube was sealed and heated at 35 °C (oil bath) for 16 hr. The flask was allowed to cool to ambient temperature before adding THF (2.85 mL) and one crystal of BHT, and was then cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and
30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL). The combined organic fractions were dried over Na2SO4, filtered and

were added to a flame-dried Schlenk-tube purged with N2. The solution was cooled to −78 °C (dry ice/acetone) before adding sec-BuLi (0.50 mL, 1.30 M, 0.66 mmol, 1.20 eq.) dropwise over 5 min and leaving to react for 5 hr at this temperature. (S)-2 (242 mg, 0.55 mmol, 1.00 eq.) was dissolved in anhydrous Et2O (0.55 mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature. After warming to ambient temperature the Schlenk-tube was sealed and heated at 35 °C (oil bath) for 16 hr. The flask was allowed to cool to ambient temperature before adding THF (2.85 mL) and one crystal of BHT, and was then cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. Water (10 mL) was added and the reaction mixture was extracted with EtOAc (3×20 mL

(S,E)-Pent-3-en-2-yl diisopropylcarbamate (47)

(S)-4-Phenyl-2-(trimethylstannyl)butan-2-yl 2,4,6-triisopropylbenzoate (48) (300 mg, 0.55 mmol, 1.00 eq.), TMEDA (0.09 mL, 0.61 mmol, 1.10 eq.) and anhydrous Et2O (2.76 mL) were added to a flame-dried Schlenk-tube purged with N2. The solution was cooled to −78 °C (dry ice/acetone) before adding n-BuLi (0.38 mL, 1.60 M, 0.61 mmol, 1.10 eq.)
dropwise over 10 min and leaving to react for 2 hr at this temperature. (S)-2 (243 mg, 0.66 mmol, 1.20 eq.) was dissolved in anhydrous Et2O (0.66 mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature. After warming the solution to ambient temperature, THF (2.85 mL) and one crystal of BHT were added, and it was then cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL). The combined organic fractions were dried over Na2SO4, filtered and concentrated under reduced pressure. The crude residue was purified by flash column chromatography

(0.61 mL, 0.61 mmol, 1.10 eq.) was added dropwise over 2 min and the mixture was allowed to react for a further 2 min before warming to ambient temperature. Water (10 mL) was added and the organic phase was collected, followed by extraction of the aqueous phase (3×15 mL Et2O). The combined organic phases were dried over MgSO4, filtered and concentrated under reduced pressure. The crude residue was dissolved in THF (6 mL) and one crystal of BHT was added, then the mixture was subsequently cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL).
The combined organic fractions were dried over Na2SO4, filtered and concentrated under reduced pressure. The crude residue was purified by flash column chromatography (SiO2; 80:20 pentane:EtOAc) to yield 10 (150 mg, 94%) as a viscous colorless oil.

(S)-1-Phenylethyl diisopropylcarbamate (50) (138 mg, 0.55 mmol, 1.00 eq.) and anhydrous Et2O (2.77 mL) were added to a flame-dried Schlenk-tube purged with N2. The solution was cooled to −78 °C (dry ice/acetone) before adding sec-BuLi (0.48 mL, 1.30 M, 0.62 mmol, 1.12 eq.) dropwise over 5 min and leaving to react for 15 min at this temperature. (S)-2 (243 mg, 0.66 mmol, 1.20 eq.) was dissolved in anhydrous Et2O (0.66 mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature.
A 1 M solution of MgBr2 in MeOH (0.83 mL, 0.83 mmol, 1.50 eq.) was added dropwise over 2 min and the mixture was allowed to react for a further 2 min before warming to ambient temperature. Water (10 mL) was added and the organic phase was collected, followed by extraction of the aqueous phase (3×15 mL Et2O). The combined organic phases were dried over MgSO4, filtered and concentrated under reduced pressure. The crude residue was dissolved in THF (6 mL) and one crystal of BHT was added, then the mixture was subsequently cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL). The combined organic fractions were dried over Na2SO4, filtered and concentrated under reduced pressure. The crude residue was purified by flash column chromatography (SiO2; 80:20 pentane:EtOAc) to yield 11 (101 mg, 73%) as a viscous colorless oil.

(S)-1-(4-Fluorophenyl)ethyl diisopropylcarbamate (51) (146 mg, 0.55 mmol, 1.00 eq.) and
anhydrous Et2O (2.73 mL) were added to a flame-dried Schlenk-tube purged with N2. The solution was cooled to −78 °C (dry ice/acetone) before adding sec-BuLi (0.47 mL, 1.30 M, 0.62 mmol, 1.12 eq.) dropwise over 5 min and leaving to react for 15 min at this temperature. (S)-2 (240 mg, 0.66 mmol, 1.20 eq.) was dissolved in anhydrous Et2O (0.66 mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature. A 1 M solution of MgBr2 in MeOH (0.82 mL, 0.82 mmol, 1.50 eq.) was added dropwise over 2 min and the mixture was allowed to react for a further 2 min before warming to ambient temperature. Water (10 mL) was added and the organic phase was collected, followed by extraction of the aqueous phase (3×15 mL Et2O). The combined organic phases were dried over MgSO4, filtered and concentrated under reduced pressure. The crude residue was dissolved in THF (6 mL) and one crystal of BHT was added, then the mixture was subsequently cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. 2 M aqueous HCl (10 mL) was carefully added and the reaction mixture was extracted with EtOAc (3×20 mL). The combined organic fractions were dried over Na2SO4, filtered and

IR (neat) νmax: 3344, 2928, 2857, 1602, 1509, 1416, 1375, 1225, 1159, 1088 and 834 cm−1

[α]D: +16.0 (c = 1.0, CHCl3)
tert-Butyl (6S,8R)-6,8-dihydroxy-8-phenylnonanoate (13):
(S)-1-Phenylethyl diisopropylcarbamate (50) (138 mg, 0.55 mmol, 1.00 eq.) and anhydrous Et2O (2.77 mL) were added to a flame-dried Schlenk-tube purged with N2. The solution was cooled to −78 °C (dry ice/acetone) before adding sec-BuLi (0.48 mL, 1.30 M, 0.62 mmol, 1.12 eq.) dropwise over 5 min and leaving to react for 15 min at this temperature. 54 (291 mg, 0.66 mmol, 1.20 eq.) was dissolved in anhydrous Et2O (0.66 mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature. A 1 M solution of MgBr2 in MeOH (0.83 mL, 0.83 mmol, 1.50 eq.) was added dropwise over 2 min and the mixture was allowed to react for a further 2 min before warming to ambient temperature. Water (10 mL) was added and the organic phase was collected, followed by extraction of the aqueous phase (3×15 mL Et2O). The combined organic phases were dried over MgSO4, filtered and concentrated under reduced pressure. The crude residue was dissolved in THF (6 mL) and one crystal of BHT was added, then the mixture was subsequently cooled to 0 °C (ice/water). A 2:1 v:v mixture of 3 M aqueous NaOH (4 mL) and 30% aqueous H2O2 (2 mL) was prepared at 0 °C (ice/water) and degassed by gently bubbling N2 through the solution. This aqueous solution was added dropwise to the vigorously stirred reaction mixture, which was subsequently warmed to ambient temperature and allowed to react for 1 hr. Water (10 mL) was added and the reaction mixture was extracted with EtOAc (3×20 mL). The

eq.) dropwise over 5 min and leaving to react for 15 min at this temperature. 55 (258 mg, 0.66 mmol, 1.20 eq.) was dissolved in anhydrous Et2O (0.66 mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature. A 1 M solution of MgBr2 in MeOH (0.83 mL, 0.83 mmol, 1.50 eq.) was added dropwise over 2 min and the mixture was allowed to react for a further 2 min before warming to ambient temperature.
Water (10 mL) was added and the organic phase was collected, followed by extraction of the aqueous phase (3×15 mL Et2O). The combined organic phases were dried over MgSO4, filtered and concentrated under reduced pressure. The crude residue was dissolved in THF (6 mL) and one crystal of BHT was added, then the mixture was subsequently cooled to 0 °C (ice/water).

mmol, 1.20 eq.) was dissolved in anhydrous Et2O (0.66 mL) and added dropwise to the reaction mixture over 1 min before leaving to react for a further 1 hr at the same temperature. A 1 M solution of MgBr2 in MeOH (0.83 mL, 0.83 mmol, 1.50 eq.) was added dropwise over 2 min and the mixture was allowed to react for a further 2 min before warming to ambient temperature. Water (10 mL) was added and the organic phase was collected, followed by extraction of the aqueous phase (3×15 mL Et2O). The combined organic phases were dried over MgSO4, filtered and concentrated under reduced pressure. The crude residue was dissolved in THF (6 mL) and one crystal of BHT was added, then the mixture was subsequently cooled to 0 °C (ice/water).

[α]D: (c = 1.0, CHCl3)
A Study of Nitrate Uptake from Aqueous Solutions Using Isotactic Polypropylene-based Anion Exchangers
Abstract: Two series of efficient and cost-effective anion exchangers possessing biocidal properties are reported for the removal of nitrate ions from aqueous solutions. Isotactic polypropylene (IPP) was modified by graft copolymerization with poly(4-vinyl pyridine) using γ-rays as initiator. The graft copolymers were further functionalized by reaction with sodium 2-bromoethanesulphonate or 2-chloroethanol to generate, respectively, the zwitterionic or choline-analogous structure on the IPP backbone. The functionalized graft copolymers have exchangeable Cl− or Br− ions and possess antimicrobial properties due to their polycationic character. They exhibited a structure-property relationship when evaluated as anion exchangers for NO3− ions: the maximum nitrate was removed from the feed solution by the graft copolymer with the lowest percent grafting, whereas the nature of the counter anion made little difference to the nitrate uptake behaviour. A parametric study, evaluating the effect of different conditions on nitrate uptake, was carried out as a function of contact time, temperature, pH of the medium and NO3− concentration. High maximum exchange capacities of 14.77 mg/g and 13.62 mg/g were observed after ten cycles, respectively, for the graft copolymers having Br− and Cl− as the counter anions, at pH 5.0, 35 °C and 20 ppm of nitrate ions. The materials also exhibited good reusability for up to ten cycles. The kinetics and mechanism of nitrate removal were studied, and the data were found to fit pseudo-second-order kinetics and the Langmuir isotherm.
INTRODUCTION
Nitrate is one of the major pollutants of water; hence its removal from ground water is a huge challenge in making wastewater fit for human consumption. The major sources of nitrate in ground or drinking water are fertilizers, sewage, or its occurrence in the natural state. Excess intake of nitrate through food and water causes ill-health effects, especially in babies and the elderly. The nitrate concentration in surface water is normally low (0-18 mg/l) but can reach high levels as a result of agricultural runoff, refuse dump runoff or contamination with human or animal wastes [1]. Various techniques for the removal of nitrate from wastewater, such as ion exchange, reverse osmosis or electro-dialysis, have been reviewed [2][3]. The membrane processes ultrafiltration [4] and reverse osmosis [5] have been reported to be effective in the removal of nitrate. A two-stage treatment based on the combination of chemical and biological processes has also been reported to be effective [6]. Other reported processes include biosorption [7], the electrokinetic method [8], and reductive removal using zero-valent copper or iron [9][10]. However, anion exchange processes are more suitable for making water fit for drinking purposes through the design of low-cost processes and products [11]. Anion exchange processes have also been integrated with catalytic or biological processes [12]. Song et al. [13] reported selective removal of nitrate using a novel macroporous acrylic anion exchange resin. Zhou et al. [14] reported the use of a magnetic anion exchanger for the selective removal of nitrate. Functional hydrogels based on natural polymers, which offer the advantage of cost-effectiveness owing to their renewable nature, have been used as supports for efficient removal of nitrate [15][16][17][18][19][20]. These also offer ease of modification by polymer-analogous reactions.
In view of the above discussion, two series of new isotactic polypropylene (IPP)-based anion exchangers are reported in the present article. These have been synthesized to combine cost-effectiveness with efficiency as anion exchangers. There are no reports in the literature of a similar material for nitrate removal, though derivatized poly(acrylic acid)-grafted-PP has been used as a cation exchanger [21] and the antibacterial properties of nano-silver-particle-loaded PP have been explored [22]. In the present case, IPP was modified by grafting with 4-vinyl pyridine (4-VP), a functional monomer, under variation of different grafting conditions, and the candidate graft copolymers were further functionalized by quaternization with sodium 2-bromoethanesulphonate or 2-chloroethanol to generate two series of anion exchangers bearing quaternary nitrogen with exchangeable Br− or Cl−. The graft copolymers thus synthesized are bifunctional, possessing anion exchange and antimicrobial properties, two of the most desirable attributes of a material for use in wastewater treatment. These properties are attributed to the polycationic nature of the poly(4-VP) grafted chains generated by the derivatization reaction with the above-stated two reagents.
Synthesis and Quaternization Reaction of Graft Copolymers
IPP was irradiated along with a known amount of 4-VP and water in a Gamma Chamber. Different grafting conditions, such as irradiation dose, monomer concentration and amount of water, were varied one after the other to evaluate the optimum grafting conditions ( Table 1). The graft copolymers were extracted with methanol or an equal mixture of acetone and water by stirring for 2 h to remove any attached homopolymer. The graft copolymer was separated, dried and weighed. It was again subjected to homopolymer removal by the above-stated extraction process. The drying, weighing and extraction process was repeated till constant weight was obtained. The graft copolymers thus obtained were designated PP-g-poly(4-VP). The amount of poly(4-VP) grafted on the backbone polymer, IPP, is defined as the percent grafting (Pg) and is expressed as:

Pg = (Amount of polymer grafted / Weight of backbone polymer) × 100

The graft copolymers with the highest Pg from each of the grafting conditions varied, that is irradiation dose, monomer concentration, volume of water and maximum volume of water, as per the details given in the supplementary data, having Pg of 58.0, 128, 150 and 32.0, were taken separately and immersed in an excess (1:5 weight ratio) of sodium 2-bromoethanesulphonate or 2-chloroethanol and reacted, to quaternize the tertiary nitrogen of the grafted poly(4-VP) chains, in a temperature-controlled water bath.
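The percent-grafting definition above reduces to a one-line calculation; the masses in the example below are illustrative, not values from the paper:

```python
def percent_grafting(grafted_g: float, backbone_g: float) -> float:
    """Pg = (weight of polymer grafted / weight of backbone polymer) x 100."""
    return grafted_g / backbone_g * 100

# Hypothetical masses: 1.50 g of poly(4-VP) grafted onto 1.00 g of IPP
# corresponds to the highest graft level reported here (Pg = 150).
print(percent_grafting(1.50, 1.00))   # 150.0
print(percent_grafting(0.32, 1.00))   # 32.0 (the lowest Pg used later)
```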
Anion Exchange Studies
Four candidate functionalized graft copolymers of different Pg, weighing 0.1 g each, from both series were separately immersed in 25 mL of a known concentration of KNO3 solution for different time intervals (30, 60, 90, 120 or 150 min) at a fixed temperature and solution pH. After the specified time interval, the graft copolymers were removed from the respective solutions. The concentration of the nitrate ions left in the solution was measured by adding the reagent. The procedure is based on the DMP method, which is akin to the ISO 7890-1:1986 method. The principle of the method is the reaction of nitrate with 2,6-dimethylphenol to generate 4-nitro-2,6-dimethylphenol in situ, which instantaneously develops colour; the resultant nitrate concentration was then measured colorimetrically at 324 nm in a Photolab 6600 UV-visible spectrophotometer. Percent uptake (Pu) and other parameters were calculated as shown below:

Pu = (Amount of NO3− removed / Total amount of NO3− in feed solution) × 100

Partition coefficient (Kd) = (Amount of NO3− removed / Total amount of NO3− in feed solution) × (Volume of solution in mL / Weight of dry polymer in g)

Amount sorbed, Q = (C0 − Ct) × V / m

where C0 and Ct are the concentrations of NO3− in the feed solution initially and after treatment for time t, V is the volume of solution (L) and m is the weight of dry graft copolymer used (g).
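The uptake and partition quantities defined above can be sketched numerically. The concentrations below are illustrative, chosen to match the paper's batch setup (25 mL of solution, 0.1 g of polymer, 20 ppm feed); the 90% removal figure is an assumption for the example, not a reported data point:

```python
def percent_uptake(c0_ppm: float, ct_ppm: float) -> float:
    """Pu = (NO3- removed / NO3- in feed) x 100, from feed/final concentrations."""
    return (c0_ppm - ct_ppm) / c0_ppm * 100

def partition_coefficient(c0_ppm: float, ct_ppm: float,
                          volume_ml: float, mass_g: float) -> float:
    """Kd = (removed fraction) x (solution volume in mL / dry polymer mass in g)."""
    return (c0_ppm - ct_ppm) / c0_ppm * volume_ml / mass_g

print(percent_uptake(20.0, 2.0))                    # 90.0 (%)
print(partition_coefficient(20.0, 2.0, 25.0, 0.1))  # 225.0 (mL/g)
```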
The graft copolymer from each series exhibiting the best results was used for the further studies to optimize the conditions for maximum nitrate uptake ( Table 2). The effect of temperature, pH and concentration was also varied, one at a time, over a range as per the details presented in Table 3. At the best conditions so obtained, the maximum exchange capacity (MEC) of the selected materials was studied by using the same sample repeatedly for ten cycles. A single cycle was carried out for 90 min at 35 °C, pH 5.0 and 20 ppm of NO3− ions, using 0.1 g of the sample. MEC was calculated by the following expression [17].
MEC = Cm × V / m
where C0 and Ct are the concentrations of nitrate in the feed initially and after time t, Cm is the anion concentration sorbed by the polymer (Cm = C0 − Ct), V is the total volume of the solution (L), and m is the weight of dry polymer (g).
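The MEC expression can be sketched for a single cycle under the paper's conditions (20 ppm feed, 25 mL, 0.1 g); the residual concentration is an assumed value for illustration, and the reported ten-cycle MEC values accumulate repeated cycles of this per-cycle quantity:

```python
def mec_single_cycle(c0_ppm: float, ct_ppm: float,
                     volume_l: float, mass_g: float) -> float:
    """MEC contribution of one cycle: Cm * V / m, with Cm = C0 - Ct in mg/L."""
    cm = c0_ppm - ct_ppm          # sorbed concentration, mg/L (ppm ~ mg/L)
    return cm * volume_l / mass_g # mg of NO3- per g of dry polymer

# Assumed: 20 ppm feed drops to 2 ppm over one cycle in 25 mL with 0.1 g polymer.
print(mec_single_cycle(20.0, 2.0, 0.025, 0.1))  # 4.5 mg/g for this cycle
```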
The reusability studies were carried out for ten cycles using the same sample. After nitrate uptake in cycle 1, the sample was separated from the solution and stripped of NO3− ions by immersion in saturated NaCl solution for 1 h with sonication. The regenerated sample was again used for nitrate uptake. This regeneration and uptake process was repeated for ten cycles, as thereafter no nitrate was exchanged.
RESULTS AND DISCUSSION
Radiation grafting by γ-ray initiation is a clean and convenient method to modify natural or synthetic polymers to make them functional and useful for target-specific applications. PP fibre is otherwise chemically inert and does not have functional groups. Modification or functionalization of PP by incorporation of targeted functional groups is a desirable strategy to alter its application spectrum. In the present study, grafting of poly(4-VP) resulted in a maximum Pg of 150% upon variation of the different grafting parameters ( Table 1). The effect of the different grafting parameters follows the trends reported earlier: the grafting percent decreases beyond an optimum value as each parameter is changed, which may be attributed to an increase in homopolymer formation [23]. The graft copolymers, PP-g-poly(4-VP), have a comb-like structure with pendant poly(4-VP) groups attached to the IPP fibre backbone. To functionalize these graft copolymers by reaction with 2-chloroethanol or sodium 2-bromoethanesulphonate, the four graft copolymers having the highest Pg obtained after variation of a particular grafting parameter were chosen and reacted separately with the above-mentioned two reagents. The quaternization reaction of poly(4-VP) with 2-chloroethanol or sodium 2-bromoethanesulphonate takes place at the tertiary nitrogen of the pendant poly(4-VP) groups. The resulting polymers are henceforth designated PP-Br or PP-Cl to distinguish the two series on the basis of the exchangeable anions. The graft copolymers PP-Br and PP-Cl are bifunctional, with quaternary nitrogen and Br− or Cl− as counter anions. These materials are non-toxic and have inherent anion exchange, antimicrobial and water-softening properties, which are the most desirable attributes of a material usable in drinking water treatment [24].
The course of the reaction and the structures of the resultant quaternized graft copolymers, with the zwitterionic (PP-Br) or choline-analogous (PP-Cl) pendant moieties, are presented in Scheme 1.
Characterization of Functional Graft Copolymers
IPP and its different copolymeric forms were characterized by elemental analysis, FTIR and SEM to obtain evidence of grafting and of the reaction with sodium 2-bromoethanesulphonate or 2-chloroethanol. Elemental analysis provides evidence of the different graft levels attained by variation of the grafting conditions. For example, graft copolymers with Pg of 10, 58 and 150 have %N of 1.529, 5.05 and 7.392, which correlates with an increase in the grafted poly(4-VP). After the respective quaternization reactions, the %N decreased in both PP-Br and PP-Cl, with a larger decrease observed for the former. The FTIR spectra of PP and its graft copolymers were compared. The PP backbone has only bands due to -CH or C-C stretching or bending vibrations, while its graft copolymers also show the bands of the pyridine ring. The latter has bands at 1597.1, 1556.1 and 1492.8 cm−1 due to the substituted aromatic ring, which shifted to 842.3/840.5 cm−1 in the spectra of PP-Br/PP-Cl [24]. In the spectra of the functionalized polymers, an absorption band attributed to quaternary ammonium salts is observed at 2360 cm−1. SEMs of PP-g-poly(4-VP) and its quaternized forms are presented (Figure 1). The change in the morphology of the PP fibres is evident from the SEMs. The fibre diameter is greater in the graft copolymer with the highest Pg (150) than in those with lower graft levels (Figure 1a and b). The fibre surface, post-grafting and on quaternization with 2-chloroethanol or sodium 2-bromoethanesulphonate, shows changes as a result of charge generation. SEM-EDS of the graft copolymer quaternized by reaction with 2-chloroethanol is presented as Figure 1c. The presence of Cl− is evidenced from the EDS of the sample.
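As a rough cross-check on the elemental analysis, the expected %N can be estimated from Pg by assuming all nitrogen comes from the grafted poly(4-VP) (repeat unit C7H7N, ≈105.14 g/mol). This back-of-envelope model is our own, not the paper's:

```python
# Mass fraction of nitrogen in the poly(4-vinyl pyridine) repeat unit C7H7N.
N_FRACTION_P4VP = 14.007 / (7 * 12.011 + 7 * 1.008 + 14.007)  # ~0.133

def expected_percent_n(pg: float) -> float:
    """%N estimate: (grafted mass fraction) x (N fraction of poly(4-VP)) x 100.

    For Pg grams of poly(4-VP) per 100 g of IPP backbone, the grafted mass
    fraction of the copolymer is Pg / (100 + Pg).
    """
    return pg / (100 + pg) * N_FRACTION_P4VP * 100

# Compare against the measured values quoted in the text:
for pg, measured in [(10, 1.529), (58, 5.05), (150, 7.392)]:
    print(f"Pg={pg:>3}: expected {expected_percent_n(pg):.2f} %N, "
          f"measured {measured} %N")
```

The estimates (roughly 1.2, 4.9 and 8.0 %N) track the measured 1.529, 5.05 and 7.392 %N, supporting the correlation between Pg and nitrogen content.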
Nitrate Uptake Studies and Selection of Candidate Materials
The graft copolymers behave as anion exchangers, a property that emanates from the Br− or Cl− counter anions present on the quaternary nitrogen.
It was observed that graft copolymers exhibited structure-property relationship in the nitrate uptake. The graft copolymer with the low P g was observed to be most effective anion exchanger than the three other candidate graft copolymers studied with higher P g level.
The reason for such behaviour, as aforementioned, is the comb-like shape of the graft copolymers. The graft copolymers of higher P g are expected to have both a higher grafting density and longer pendant poly(4-VP) chains than the graft copolymers of low P g . The former initially have a large number of active sites for the quaternization reaction, and later for the anion exchange, but these sites are not accessible to the nitrate ions for electrostatic as well as steric reasons [25]. Hence, the quaternized graft copolymers from both the PP-Br and PP-Cl series having the lowest P g (32) exhibited the best P u ( Table 2). This observation has technological potential, as cost-effective functional graft copolymers can be designed with the lower P g . These two materials were selected for further studies. On the contrary, the extent of the anion exchange was not found to be markedly dependent on the nature of the counter anion present on the fibre.
Scheme 1: Functionalization of IPP by quaternization reactions.
Effect of Different Parameters on Anion Exchange Capacity
NO 3 - uptake as a function of contact time is presented in Figure 2. The exchange of the anion was quite rapid: in the first 30 min more than 50% of the anions were removed from the feed solution. P u increased with further increase of time and reached equilibrium within 90 min. Similar trends have been reported for NO 3 - uptake on hydrogels. The higher uptake of nitrate by PP-Cl than by PP-Br over the time variation may be attributed to the zwitterionic structure of the latter. An increase in the feed concentration of NO 3 - affected the nitrate uptake positively up to 20 ppm. Thereafter, although P u showed an apparent decrease, Q remained the same when the concentration was increased beyond 20 ppm (Figure 3). The effect of temperature on the NO 3 - uptake was studied from 20 ºC to 45 ºC. P u increased with temperature up to 35 ºC and decreased with further increase in temperature; nevertheless, it remained significantly high, more than 50%, at the higher temperature of 40 ºC (Figure 4). An increase in temperature beyond a certain level probably decreases ion uptake because the increased kinetic energy of the feed solution results in lower ion adsorption. Exothermic behaviour of nitrate uptake at moderately high temperature has been reported previously [20]. The effect of pH variation on P u over the range 2 to 11 was studied and the results are presented in Figure 5. P u initially increased with increasing pH, and the highest P u , around 90, was observed in the acidic range at pH 5. Thereafter it decreased with increasing pH up to 11. It is apparent that at the higher pH the other anions of the medium also compete with NO 3 - for the pyridinium ions on the graft chains.
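The parametric results above are expressed in terms of the percent uptake P u and the capacity Q. Since their working definitions are not restated in this section, the sketch below assumes the conventional batch-uptake formulas (P u as the percentage of nitrate removed from the feed solution, Q in mg of nitrate per gram of exchanger); the feed values are illustrative only, not measurements from this study.

```python
# Hedged sketch: conventional batch-uptake formulas are assumed, since the
# definitions of Pu and Q are given elsewhere in the paper.

def percent_uptake(c0_ppm, ce_ppm):
    """Pu: percentage of nitrate removed from the feed solution."""
    return 100.0 * (c0_ppm - ce_ppm) / c0_ppm

def uptake_capacity(c0_ppm, ce_ppm, volume_l, mass_g):
    """Q: nitrate taken up per gram of exchanger (mg/g); ppm taken as mg/L."""
    return (c0_ppm - ce_ppm) * volume_l / mass_g

# Illustrative case: a 20 ppm feed reduced to 2 ppm by 0.1 g of fibre in 50 mL.
pu = percent_uptake(20.0, 2.0)             # -> 90.0 %
q = uptake_capacity(20.0, 2.0, 0.05, 0.1)  # -> 9.0 mg/g
print(pu, q)
```

This also shows why P u can fall while Q stays constant at higher feed concentrations: once the exchanger is saturated, the same absolute amount is removed from a larger total, so the percentage drops.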
Evaluation of MEC and Reusability Studies
From the above discussion it is apparent that P u is affected only marginally by the nature of the anion, as PP-Cl exhibited slightly higher NO 3 - uptake than PP-Br when studied as a function of the various parameters. The best performance of both materials was observed at 90 min, 35 ºC, pH 5.0, and 20 ppm of NO 3 - . Hence, these parameters were used to evaluate the MEC of the two anion exchangers. MEC was studied by using the same sample repeatedly over ten cycles. In both cases a reasonably high MEC, of 14.77 mg/g for PP-Br and 13.62 mg/g for PP-Cl, was obtained (Figure 6). These values are lower than those observed in a single uptake at 20 ppm or higher concentration, but at a different temperature, as revealed from Figure 2. Both materials are reusable: desorption of NO 3 - was studied over ten cycles by treating the NO 3 - -loaded materials with NaCl solution under sonication after each uptake cycle. The materials are reusable for up to ten cycles in both cases, though PP-Cl exhibited slightly better behaviour than PP-Br (Figure 7).
Mechanism of Anion Exchange and Evaluation of Applicability of Adsorption Isotherms and Kinetic Models
IPP is a cost-effective hydrophobic material that resists water absorption. It has limited applications unless used as a support for the active chains of grafted polymers and their ionic forms. The solution properties, such as pH, K d and conductivity of the feed solution, varied with the extent of anion exchange ( Table 3). To evaluate the mechanism of nitrate removal by the graft copolymers, the Langmuir and Freundlich isotherm equations were tested to calculate Q values at different feed concentrations, and the values so obtained were plotted along with the experimental values [26]. The former assumes a weak physical binding of the anions on a surface with homogeneous monolayer formation. The latter, however, describes a non-ideal phenomenon in which the exchange of anions involves heterogeneous uptake over the active sites. The linear plots of C eq /Q versus C eq , from the Langmuir equation C eq /Q = 1/(Q max b) + C eq /Q max , in both the anion exchangers match the experimental data far better than the plots of log Q e versus log C e from the Freundlich equation, log Q e = log K F + (1/n) log C e , where C e and Q e are, respectively, the equilibrium concentration and the corresponding adsorption capacity. The correlation coefficient values (R 2 ) obtained from the two relationships are widely different: 0.98476 and 0.5399 in PP-Br, and 0.99435 and 0.60352 in PP-Cl, respectively, for the Langmuir and Freundlich equations. The comparison of the two isotherms with experiment is shown in Figures 8a and 8b, with the linear-fit behaviour of the Langmuir isotherm shown in the inset.
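The isotherm comparison above can be illustrated numerically. The Python sketch below fits the two linearized forms just quoted to synthetic equilibrium data generated from a known Langmuir isotherm; the Q max and b values are invented for illustration and are not the fitted parameters of this study.

```python
import numpy as np

# Langmuir linear form: Ceq/Q = 1/(Qmax*b) + Ceq/Qmax
# Freundlich linear form: log Qe = log KF + (1/n) log Ce

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Synthetic equilibrium data from a Langmuir isotherm (illustrative values).
qmax_true, b_true = 15.0, 0.3
ceq = np.array([2.0, 5.0, 10.0, 20.0, 40.0])          # mg/L
q = qmax_true * b_true * ceq / (1.0 + b_true * ceq)   # mg/g

# Langmuir fit: regress Ceq/Q on Ceq; slope = 1/Qmax, intercept = 1/(Qmax*b).
slope, intercept = np.polyfit(ceq, ceq / q, 1)
qmax_fit = 1.0 / slope
b_fit = slope / intercept
r2_langmuir = r_squared(ceq / q, slope * ceq + intercept)

# Freundlich fit on the same data: regress log Q on log Ceq.
fs, fi = np.polyfit(np.log10(ceq), np.log10(q), 1)
r2_freundlich = r_squared(np.log10(q), fs * np.log10(ceq) + fi)

print(qmax_fit, b_fit, r2_langmuir, r2_freundlich)
```

Because the synthetic data obey the Langmuir form exactly, the Langmuir regression recovers Q max and b with R 2 of essentially 1, while the Freundlich fit gives a visibly lower R 2 ; the same R 2 comparison is the basis for the model selection reported in the text.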
The kinetics modelling was carried out using the plot of log(q e,1 - q t ) versus t for the pseudo-first-order equation, log(q e,1 - q t ) = log q e,1 - (k 1 /2.303)t, and the plot of t/q t versus t for the linear pseudo-second-order equation, t/q t = 1/(k 2 q e,2 2 ) + (1/q e,2 )t. The terms q e and q t are the exchange capacities at equilibrium and at time t, and k is the respective rate constant. The nitrate uptake followed pseudo-second-order kinetics, which was found to be applicable in both cases over almost the whole range of contact time (Figures 9a and 9b). Similar conclusions have been made elsewhere [27]. The mechanism of the exchange reaction is depicted below. As aforementioned, this mechanism is also supported by the observation that after the completion of the experiments the pH and other parameters of the resultant solution were found to be different from the initial values ( Table 2).
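As a worked illustration of the linearized pseudo-second-order fit, the following sketch recovers q e,2 and k 2 from uptake-versus-time data. The data are generated from the integrated pseudo-second-order form with invented parameters; they are not the measurements reported here.

```python
import numpy as np

# Integrated pseudo-second-order form: qt = k2*qe^2*t / (1 + k2*qe*t),
# whose linearization is t/qt = 1/(k2*qe^2) + t/qe.

qe_true, k2_true = 14.0, 0.02           # illustrative mg/g and g/(mg*min)
t = np.array([10.0, 30.0, 60.0, 90.0, 120.0])   # min
qt = k2_true * qe_true**2 * t / (1.0 + k2_true * qe_true * t)

# Regress t/qt on t: slope = 1/qe, intercept = 1/(k2*qe^2).
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope
k2_fit = 1.0 / (intercept * qe_fit**2)
print(qe_fit, k2_fit)
```

Since the synthetic data follow the model exactly, the regression returns the generating parameters; with real data, the quality of the straight line (as in Figures 9a and 9b) is what justifies the pseudo-second-order description.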
CONCLUSIONS
Isotactic polypropylene (IPP) fibres were graft copolymerized with poly(4-vinyl pyridine) by the γ-ray initiation method. The grafted IPP was further functionalized to polyzwitterionic and choline-analogous materials having Br - or Cl - as the exchangeable ions.
The resultant materials were evaluated as anion exchangers for the removal of nitrate ions from water. The nitrate uptake was only marginally dependent on the nature of the anions, but the low graft level was far more efficient than the high graft level. A parametric study revealed the dependence of the nitrate uptake on time, pH, temperature and concentration of nitrate ions. The nitrate uptake by these materials was rapid, as more than half of the ions were taken up within thirty minutes. It followed an anion exchange mechanism. The data generated fit the Langmuir isotherm and pseudo-second-order kinetics. Thus, the materials reported are low-cost, efficient anion exchangers with strong technological potential for use in water treatment technologies. Further, the study reported can be used to design low-cost, efficient anion exchangers.
|
v3-fos-license
|
2021-10-20T15:07:33.500Z
|
2021-10-15T00:00:00.000
|
239323644
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2226-471X/6/4/168/pdf",
"pdf_hash": "6945b4b37b5f07808fe695c37b4e18da197c6c37",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1011",
"s2fieldsofstudy": [
"Linguistics"
],
"sha1": "b509334d3d6e057a5d2c26e3fc7da37927a286c9",
"year": 2021
}
|
pes2o/s2orc
|
The Quest for Signals in Noise: Leveraging Experiential Variation to Identify Bilingual Phenotypes
Increasing evidence suggests that bilingualism does not, in itself, result in a particular pattern of response, revealing instead a complex and multidimensional construct that is shaped by evolutionary and ecological sources of variability. Despite growing recognition of the need for a richer characterization of bilingual speakers and of the different contexts of language use, we understand relatively little about the boundary conditions of putative “bilingualism” effects. Here, we review recent findings that demonstrate how variability in the language experiences of bilingual speakers, and also in the ability of bilingual speakers to adapt to the distinct demands of different interactional contexts, impact interactions between language use, language processing, and cognitive control processes generally. Given these findings, our position is that systematic variation in bilingual language experience gives rise to a variety of phenotypes that have different patterns of associations across language processing and cognitive outcomes. The goal of this paper is thus to illustrate how focusing on systematic variation through the identification of bilingual phenotypes can provide crucial insights into a variety of performance patterns, in a manner that has implications for previous and future research.
Introduction
Over the past decade, there has been a marked change in our understanding of bilingual language experience. Whereas past approaches conceptualized variation across samples and/or conditions as deviant or noisy phenomena, recent discoveries point to fundamental interactivity and plasticity in bilingual language learning and processing (Green and Kroll 2019). The emergence of this work has sparked a paradigm shift in the field, resulting in an upsurge of research on individual differences and of comparative studies that seek to exploit variability within and across languages and interactional contexts of language use (for reviews, see de Bruin 2019; Dussias et al. 2019; Fricke et al. 2016; Kroll et al. 2018; Titone and Tiv forthcoming). The changing landscape reflects increased recognition of the complexity of bilingualism as a life experience: bilingualism does not, in itself, result in a particular pattern of response; rather, it is a multidimensional construct that is shaped by individual and contextual factors (Baum and Titone 2014; DeLuca et al. 2020; Luk and Bialystok 2013; Zirnstein et al. 2019). Thus, a key issue is that the differences in trajectories and outcomes of bilingualism are best understood by recognizing the extent of human diversity from an evolutionary perspective (e.g., Henrich et al. 2010; Mason et al. 2015) and by situating sources of individual variance in the sociocultural and linguistic niche within which bilinguals act (Bak 2016; Green 2011; Raviv et al. 2019; Titone and Tiv forthcoming; Wigdorowitz et al. 2020).
Our position is that systematic variation in bilingual language experience gives rise to a variety of phenotypes 1 with different patterns of associations across language processing outcomes. An implication that follows is that a particular association between two variables might be robust for one phenotype, yet absent for others, and so, characterizing speakers in terms of their profile and trajectory through different contexts is essential if we are to understand the limits and boundary conditions of putative bilingualism effects (Green and Abutalebi 2013;Green et al. 2007;Navarro-Torres et al. 2021).
In this paper, we provide an overview of this approach and show how it can be applied to develop an international network for research on diverse bilingual populations using an array of complementary multidisciplinary methods. In a similar spirit to Green and Abutalebi's (2013) adaptive control hypothesis, we consider three interrelated paths of influence to adaptive change: competition, cooperation, and regulation. We review recent findings that demonstrate how variability in the language experiences of bilingual speakers, and also in the ability of bilingual speakers to adapt to distinct demands of different interactional contexts, impact interactions between language representation, access, and control. Notably, we show that the bilingual language system is dynamic, flexible, and adept at adapting itself to the context of language use in which the speaker is immersed.
In today's globalized world, individuals are increasingly shifting between distinct environments. In some circumstances, a shift from one interactional context to another results in changes in the relative support for each language (e.g., a Basque-Spanish bilingual living in Andalusia would find little support for the first language (L1)). One question that arises is whether language regulation and language control processes are differentially coordinated for individuals whose interactional circumstances dynamically change. Moreover, bilingualism is pervasive throughout the world, but its manifestation can vary widely among different places, communicative contexts, and individuals (Grosjean 1982, 2013). Bilingual speakers differentially distribute their languages with different people and topics and across everyday settings, such as the classroom/workplace or the home environment (Shiron et al. 2021; Tiv et al. 2020). Some bilinguals typically keep their languages separate; others codeswitch and make use of more than one language opportunistically. We note, too, that interactional contexts differ in the relative intensity or diversity of language use (Gullifer and Titone 2020; Pot et al. 2018; Wigdorowitz et al. 2020). Bilinguals in more variable contexts are presumed to closely monitor the situational context to ensure the appropriateness of their language choices so as to avoid or reduce the interactional cost 2 that may arise in conversation (Beatty-Martínez et al. 2020b; Green and Abutalebi 2013). Thus, an outstanding question is whether bilinguals who live in more linguistically varied contexts are differentially affected compared to those immersed in relatively more homogeneous environments, where there is a higher degree of certainty with respect to language use and the types of conversational exchanges that take place.
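Language entropy, referred to throughout this paper, is commonly quantified as the Shannon entropy of a speaker's language-use proportions, following Gullifer and Titone (2020). A minimal sketch, with hypothetical usage proportions:

```python
import math

def language_entropy(proportions):
    """Shannon entropy (bits) over the proportions of use of each language."""
    return -sum(p * math.log2(p) for p in proportions if p > 0)

# A bilingual using both languages equally (maximal diversity for two languages).
balanced = language_entropy([0.5, 0.5])   # 1.0 bit
# A compartmentalized speaker who uses one language 95% of the time.
skewed = language_entropy([0.95, 0.05])   # ~0.29 bits
print(balanced, skewed)
```

Higher values thus index more variable, less predictable language use within a context, which is the sense in which "high-entropy" and "low-entropy" contexts are contrasted in the sections that follow.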
Language Competition
Accumulating evidence shows that interactional effects on the trajectories and outcomes of bilingualism are influenced by the ways in which the two languages are engaged. We first consider an interactional context in which languages are compartmentalized across distinct communicative contexts. Generally, the native language is the predominant language and the second language (L2) is restricted to more exclusive communicative contexts (e.g., at work or with a specific group of people). As the languages are typically highly specialized and differentiated, there is a high interactional cost for mixing languages in conversation. Therefore, individuals who tend to keep their languages separate also tend to have little-to-no codeswitching experience. An important implication for these individuals is that they can use their proven experience at maximizing language competition to reliably distinguish one language from another and predict which language will be used in a given situation.
Indeed, this inference is supported by electrophysiological research demonstrating a modulation of an early frontal positivity (P2), an index of selective attention, in response to an unexpected language switch (e.g., Kuipers and Thierry 2010). In a series of experiments designed to test sensitivity to codeswitches as a function of interactional experience, Beatty-Martínez and Dussias (2017) found that non-codeswitching bilinguals exhibited a larger early frontal positivity when processing a codeswitch relative to a unilingual control. Importantly, these individuals were highly proficient Spanish-English bilinguals living in Granada, Spain, whose linguistic profile and behavioral ecology closely fitted the characterization described above. The switch effect was notably absent when bilinguals processed unilingual translation-equivalent sentences in the L1 and L2 separately, further suggesting that the component's modulation cannot be attributed to differences in L1 versus L2 processing but is due to the selective gating of information flow from one language to the other. 3 As we will see below, bilinguals' electrophysiological response to codeswitches depends on the precise form of the gating, namely, whether language control is coordinated competitively (i.e., requiring a narrow attentional focus to exploit one language to the exclusion of another) or cooperatively (i.e., requiring a broad attentional focus to explore both languages opportunistically; see Green and Wei 2014 for theoretical discussion).
Further evidence for the differential engagement of attentional control in bilinguals who use each of their languages in separate communicative contexts has been shown in verbal (Beatty-Martínez 2021; Kuipers and Thierry 2010) and nonverbal (Ooi et al. 2018) auditory domains. Using the elevator counting tasks from the Test of Everyday Attention (Robertson et al. 1994), Ooi et al. (2018) observed that Edinburgh bilinguals who reported using their two languages independently exhibited greater auditory attentional switching abilities when reorienting from one auditory source to another. The interpretation is that bilinguals who avoid codeswitching and compartmentalize their language use across communicative contexts must become adept at reliably adjusting to relevant variations in the input, such as a change in language, by validating incoming sources of information against their experience-based expectations. As alluded to previously, interactional contexts in which bilinguals' languages are used in distinct communicative contexts are characterized by a low degree of language entropy 4 because the appropriate language is highly predictable. However, language expectations are not always met (e.g., running into your L2-speaking boss at the grocery store). Arguably, such circumstances may trigger a need to reduce between-language interference by reactively suppressing the non-target language to guarantee retrieval in the target language. A plausible conjecture is that such interactional experiences are associated with increased reliance on reactive control processes (i.e., engagement of goal-relevant information on an as-needed basis as a function of changing task demands; Braver 2012) and, conversely, reduced reliance on context monitoring. Data from behavioral and neuroimaging studies support this assumption.
Gullifer et al. (2018) investigated individual differences in resting-state functional connectivity in French-English bilinguals living in Montréal, Canada. The bilinguals examined were all highly proficient in the two languages but varied widely with respect to their measured degree of language entropy within communicative contexts. Relative to bilinguals with more variable interactional experiences, bilinguals with low language entropy exhibited greater reliance on reactive control processes, as measured by the AX continuous performance task (AX-CPT; Ophir et al. 2009), and less connectivity between the anterior cingulate cortex and the putamen, regions previously implicated in monitoring, language switching, and L2 articulatory processing (Klein et al. 1994). Similarly, Beatty-Martínez et al. (2020b) found that for Spanish-English bilinguals in Granada, greater reliance on reactive control processes in the AX-CPT was associated with better picture naming accuracy in both the L1 and L2. Taken together, the converging evidence reviewed thus far indicates that bilinguals whose interactional experiences center on compartmentalized language use are adept at attentively discriminating one language from another and appear to rely on reactive components of control to manage between-language interference. In the next section, we consider the interactional implications for bilinguals who habitually codeswitch, using their languages freely and interchangeably within different communicative contexts.
Language Cooperation
In codeswitching contexts, where most individuals actively use more than one language, and switching between them is prevalent, bilinguals have the potential to make use of either language on an opportunistic basis to achieve their communicative goals. Decades of sociolinguistic research have documented the codeswitching patterns of bilingual speakers, resulting in comprehensive corpora of interviews, surveys, and ethnographic research. Quantitative analysis of these data has revealed the systematicity underlying codeswitching tendencies by exemplifying bilinguals' adherence to community norms over idiosyncratic behaviors (Poplack 1980, 1987; Poplack and Meechan 1998; Torres Cacoullos and Travis 2015, 2018). We have come to see that codeswitching is not random between-language interference but rather serves as an opportunistic strategy 5 that provides communicative precision (Beatty-Martínez et al. 2020a; Feldman et al. 2021; Xu et al. 2021a, 2021b). If the implication is that the interactional cost of switching between languages is lower compared to that in contexts in which the two languages are used separately, one could ask whether experience with codeswitching modulates the engagement of language control networks. Particularly strong evidence for this comes from a magnetoencephalographic study on Arabic-English bilinguals investigating language switching in ecologically valid experimental paradigms. Blanco-Elorrieta and Pylkkänen (2017) found that the anterior cingulate and prefrontal cortex, regions implicated when language switching is externally cued (e.g., Abutalebi and Green 2016), showed less involvement during the comprehension of naturalistic codeswitched conversations. Moreover, several other studies have thus far revealed no consistent pattern of association between codeswitching behavior and domain-general cognitive control (Beatty-Martínez et al. 2020b; Hartanto and Yang 2016; Ooi et al. 2018; Pot et al. 2018). Why might this be?
From a theoretical standpoint, because both languages are widely known and routinely used interchangeably, it is not as necessary for bilinguals in codeswitching contexts to continuously monitor the appropriate language and adjust accordingly for each communicative interaction (see Costa et al. 2009 for related evidence on the relation between cognitive control engagement and high monitoring demands). Recent proposals posit that codeswitching contexts involve a cooperative rather than a competitive relation 5 between the two languages and thus offer opportunities for language integration (Calabria et al. 2018; Green and Wei 2014). Therefore, one possibility is that codeswitching creates a context in which bilinguals may adopt an open control mode, in which language membership is minimized and resources from both languages are explored. To illustrate this point, we return to the findings of Beatty-Martínez and Dussias (2017) introduced in Section 2.1. In addition to the non-codeswitchers from Granada, Beatty-Martínez and Dussias examined a group of Spanish-English bilinguals who were raised in established codeswitching communities in the United States.
5 Recent findings have shown that for codeswitching bilinguals, the likelihood of switching between languages increases when the word in the other language is more accessible than the equivalent word in the current language (Xu et al. 2021a, 2021b), under conditions of greater lexical diversity (Feldman et al. 2021), and when words or structures in the other language provide greater discriminatory efficiency (Beatty-Martínez et al. 2020a). What this suggests is that codeswitching offers a unique and flexible feature of bilingualism through which resources from both languages are recruited to provide an alternative means to convey a communicative intention, with implications for language control and speech planning.
Contrary to the non-codeswitchers, the codeswitching bilinguals exhibited an N400 modulation (indexing difficulties of semantic integration) in response to infelicitous codeswitches, indicating they were sensitive to codeswitching conventions and community norms (see Adamou and Shen 2017;Beatty-Martínez 2019;Guzzardo Tamargo et al. 2016; Halberstadt 2017 for similar findings). More pertinent to our discussion is that codeswitching bilinguals did not show a modulation of the early frontal positivity when processing a codeswitch relative to a unilingual control. In line with our discussion above, the lack of differentiation between codeswitched and unilingual stimuli at early stages of processing is particularly noteworthy because it suggests that bilinguals' breadth of selective attention can be broadened to include both language networks. More recently, this finding was followed up in a subsequent study by Kaan et al. (2020), who examined whether bilinguals could dynamically shift between attentional control states depending on the nature of a given conversational exchange. They reported that the early frontal positivity effect was largest when bilinguals were in the presence of a monolingual interlocutor (i.e., where codeswitching was inappropriate), further exemplifying the role of the interactional demands of different contexts in mediating the ways in which bilinguals' languages are engaged.
At this point, it is useful to distinguish between two interactional experiences that are often conflated in research. While it may be tempting to associate codeswitching with a greater diversity of language experience, we note that codeswitching contexts rely on conventionalized distributional regularities and thus exert relatively uniform interactional demands on language use (Guzzardo Tamargo et al. 2016; Poplack 1987; Poplack et al. 1988; Torres Cacoullos and Travis 2018). Critically, diversity of language use can vary both within and across forms of conversational exchanges. We therefore necessarily distinguish bilinguals' propensity to engage in codeswitching (i.e., within-speaker language diversity) from their propensity to engage in variable types of conversational exchanges (i.e., between-speaker language diversity), which we elaborate on in the next section.
Language Regulation
Thus far, we have characterized interactional contexts of language use with relatively straightforward features: competitive environments where languages are used independently across distinct communicative contexts, and cooperative environments where codeswitching among bilinguals is the norm. Notwithstanding, there are many interactional contexts that involve variable kinds of conversational exchanges, requiring bilinguals to closely monitor and regulate the activation of both languages to suit demands in everyday life. Following the same logic as before, the conjecture is that high-entropy contexts are expected to place greater reliance on proactive control processes (i.e., active engagement and the maintenance of goal-relevant information to execute task demands; Braver 2012) to manage potential between-language interference by keeping the appropriate language active while seeking new contextual cues that may signal a language change (e.g., Pivneva et al. 2014). One strand of evidence relates to research on individual differences in brain-behavior associations in contexts with substantial variability in language diversity, such as Singapore or Montréal. In a series of studies aimed at examining diversity in social language use among Montréal bilinguals, Gullifer and colleagues reported that bilinguals with high language entropy showed increased reliance on proactive control in the AX-CPT, as well as greater functional connectivity between the anterior cingulate cortex and the putamen, regions implicated in monitoring and goal maintenance (see also Li et al. 2021 for corroborative evidence with bilinguals from Singapore).
A second source of evidence comes from research on language use in an L2-immersion context (for reviews see DeLuca et al. 2020;Fricke et al. 2016;Kroll et al. 2018;Kroll et al. 2021;Zirnstein et al. 2019). L2-immersion contexts provide a unique opportunity for examining the dynamic interplay between languages when bilinguals have restricted access to the L1. A considerable body of research has revealed a decline in L1 accessibility with increasing L2 exposure (e.g., Baus et al. 2013;Linck et al. 2009;Titone 2012, 2015), suggesting that bilinguals must exert great effort to adjust and regulate co-activation (notably, of the L1 or dominant language) to accommodate to changes in the relative support for each language. There is also evidence that bilingual regulation ability supports proficient language processing by mediating cognitive control recruitment strategies in real time. For example, Zirnstein et al. (2018) examined a group of L2-immersed Mandarin-English bilinguals and found that the bilinguals' ability to recover from prediction errors during L2 reading was jointly influenced by their L1 regulatory ability and their cognitive control skills. Specifically, increased cognitive control ability related to reduced prediction error costs but only for bilinguals with better L1 regulation. Moreover, a visual world study by Navarro-Torres et al. (2019) found that L2-immersed bilinguals living in Edinburgh proactively disengaged from incorrect interpretations of syntactically ambiguous sentences by relying on early linguistic cues to preempt potential ambiguity.
Similar associations have been observed in language production. Beatty-Martínez et al. (2020b) found that for L2-immersed Spanish-English bilinguals in the United States, greater reliance on proactive control processes in the AX-CPT was associated with better picture-naming accuracy in the L1. Importantly, this pattern of association was absent for Spanish-English bilinguals living in an L1 context (i.e., bilingual groups from Granada and Puerto Rico, whose findings were alluded to in previous sections). The interpretation is that in L2-immersion, bilinguals' ability to regulate the L1 by proactively monitoring when and when not to use each language can help maintain lexical accessibility in the less-supported L1 (see Zhang et al. forthcoming for corroborating electrophysiological evidence). Taken together, the emerging picture suggests that high-entropy contexts and L2-immersion environments can exert notable consequences for language performance and cognitive control engagement, even in highly proficient bilinguals, by introducing a stronger pressure for regulating co-activation and monitoring the appropriateness of using each language.
Conclusions
In this paper, we reviewed exciting new findings on how bilingual speakers adapt to the distinct demands of different interactional experiences. This approach leverages the varying experiences across different interactional contexts of language use to identify bilingual phenotypes under different boundary conditions. What is promising about this approach is that it has revealed that bilinguals presumed to have been drawn from the same underlying population can differ in significant ways, and even those who might appear to behave similarly can arrive at the same outcome through different routes (see Navarro-Torres et al. 2021 for a theoretical discussion on the role of evolutionary and ecological factors in shaping variation in language and cognitive processing).
We emphasize that the identification of bilingual phenotypes is fundamentally a transdisciplinary endeavor. In the last several years, exciting new synergies have emerged between research on bilingualism and other disciplines, such as information theory (Feldman et al. 2021; Gullifer and Titone 2020), network science (Tiv et al. 2020; Titone and Tiv forthcoming; Xu et al. 2021a, 2021b), and usage-based approaches (Beatty-Martínez et al. 2018; Navarro-Torres et al. 2021). In this respect, a focus on multi-lab collaborations aimed at leveraging different interactional experiences offers promising prospects for embedding such work in a more comprehensive view of adaptive change (Leivada et al. 2020). As a field, we are in the early stages of understanding the precise aspects of bilingual experience that give rise to different trajectories and outcomes. Notwithstanding, we have attempted to show that by providing a rich characterization of bilingual speakers in terms of their habits of language use and in relation to their interactional context, we can more effectively extract signals from noise.
|
v3-fos-license
|
2018-11-15T05:58:39.000Z
|
2018-11-15T00:00:00.000
|
119173520
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00208-020-02053-x.pdf",
"pdf_hash": "401b88319b2a2b4ad28f71954738aa0fff0e5d1e",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1012",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "a071506e99baf9b45b4e5f8d8334f9e1d69cb682",
"year": 2020
}
|
pes2o/s2orc
|
The fibration method over real function fields
Let R(C) be the function field of a smooth, irreducible projective curve over R. Let X be a smooth, projective, geometrically irreducible variety equipped with a dominant morphism f onto a smooth projective rational variety with a smooth generic fibre over R(C). Assume that the cohomological obstruction introduced by Colliot-Thélène is the only one to the local-global principle for rational points for the smooth fibres of f over R(C)-valued points. Then we show that the same holds for X, too, by adopting the fibration method similarly to Harpaz–Wittenberg.
Introduction
Let C be a smooth, geometrically irreducible projective curve over R. Let R(C) denote the function field of C, and for every x ∈ C(R) let R(C)_x be the completion of R(C) with respect to the valuation furnished by x. Now let V be a class of geometrically irreducible projective varieties over R(C). We say that V satisfies the local-global principle for rational points if for every X in V the following holds: if X(R(C)_x) is non-empty for every x ∈ C(R), then X(R(C)) is non-empty. For a codimension one point x of a smooth variety X we will write ∂_x for the residue map associated to the discrete valuation ring O_{X,x}.
Next we need some basic facts about the Galois cohomology of function fields of real algebraic curves, and some form of a residue theorem for them.
Definition 2.2
Let C be a smooth, geometrically irreducible projective curve over R.
Let R(C) denote the function field of C, as above, and for every x ∈ C(R) let R(C)_x be the completion of R(C) with respect to the valuation furnished by x. Then we have a residue map

∂_x : H^i_ét(R(C)_x, Z/2) → H^{i−1}_ét(R, Z/2),

as the residue field R(x) of R(C)_x is R. Note that as a graded algebra

H^*_ét(R, Z/2) ≅ Z/2[t],

where t is the generator of the group H^1_ét(R, Z/2) of order two. In particular we have a canonical isomorphism H^i_ét(R, Z/2) ≅ Z/2 for every i ≥ 1. So the residue map is a homomorphism

∂_x : H^i_ét(R(C)_x, Z/2) → Z/2.

By slight abuse of notation let ∂_x denote also the composition of the pull-back H^i_ét(R(C), Z/2) → H^i_ét(R(C)_x, Z/2) and this residue map.
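The residue map above is an instance of a standard splitting; a sketch, assuming only the well-known computation of the cohomology of a complete discretely valued field whose residue characteristic is not 2 (here K = R(C)_x and k = R):

```latex
% For a complete discretely valued field K with residue field k,
% char(k) != 2, a choice of uniformizer splits the residue sequence:
H^{n}_{\mathrm{\acute{e}t}}(K, \mathbb{Z}/2)
  \;\cong\; H^{n}_{\mathrm{\acute{e}t}}(k, \mathbb{Z}/2)
  \,\oplus\, H^{n-1}_{\mathrm{\acute{e}t}}(k, \mathbb{Z}/2),
\qquad
\partial_x \;=\; \text{projection onto the second summand}.
% With k = R and H^*(R, Z/2) = Z/2[t] both summands are Z/2 for every
% n >= 1: a class over R(C)_x is recorded by a "value" and a residue.
```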
The residue theorem, also called the reciprocity law, is the following
Proposition 2.3 Let V ⊆ C(R) be a connected component, let i be at least 2, and let h ∈ H^i_ét(R(C), Z/2). Then

∑_{x∈V} ∂_x(h) = 0.
Remark 2.4
It is easy to see that all but finitely many terms of the sum above are zero, so the left hand side is well-defined. For a proof of this fact, and the proposition, see for example Proposition 3.7 of [3] on pages 157-158.
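In the simplest degree, classes in H^1(R(C), Z/2) are square classes of rational functions, and the residue at a real point is the order of vanishing mod 2. Proposition 2.3 is stated for i ≥ 2, but on C = P^1, whose real locus is a single connected component, the degree-one analogue can be checked by hand: the real zeros and poles of a rational function, counted with signed multiplicity and including the point at infinity, sum to an even number, because the non-real roots come in conjugate pairs and the total degree of a principal divisor is zero. A toy numerical check with sympy (the test function g is an arbitrary choice, not an example from the paper):

```python
from sympy import symbols, roots, Poly, degree

t = symbols('t')

def real_residue_parity(num, den):
    """Sum of ord_x(g) over the real points x of P^1 (zeros minus poles,
    with multiplicity, plus the order at infinity), for g = num/den."""
    total = 0
    for poly, sign in ((num, +1), (den, -1)):
        # roots() returns exact roots with multiplicities
        for r, mult in roots(Poly(poly, t)).items():
            if r.is_real:          # only points of P^1(R)
                total += sign * mult
    # ord_infinity(g) = deg(den) - deg(num)
    total += degree(Poly(den, t)) - degree(Poly(num, t))
    return total

# g = (t^2 - 1)(t^2 + 1) / (t - 2)^3: real zeros at +-1, a triple pole
# at 2, and a simple pole at infinity.
g_sum = real_residue_parity((t**2 - 1)*(t**2 + 1), (t - 2)**3)
print(g_sum)   # an even integer, as the reciprocity law predicts
```

The pairing of non-real roots is what makes the count even; the same parity argument is packaged cohomologically in the proposition.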
For the sake of simple notation let A_C denote the direct product ∏_{x∈C(R)} R(C)_x. It is an algebra over R(C). Now let X be a smooth, irreducible projective variety defined over R(C). Clearly X(A_C) = ∏_{x∈C(R)} X(R(C)_x). Now we are ready to define Colliot-Thélène's obstruction.

Definition 2.5 Let X(A_C)^CT be the set of those M = (M_x)_{x∈C(R)} ∈ X(A_C) such that

∑_{x∈V} ∂_x(M_x^*(h)) = 0

for every connected component V ⊆ C(R), every i ≥ 2 and every h ∈ H^i_nr(R(C)(X)/R(C), Z/2), where M_x^* is the pull-back with respect to the map M_x. Note that all but finitely many terms of the sum above are zero, so the left hand side is well-defined, and the image of X(R(C)) in X(A_C) under the diagonal embedding is in X(A_C)^CT by Proposition 2.3 above.
One may justify the usage of the more mysterious group H^i_nr(R(C)(X)/R(C), Z/2) instead of H^i_ét(X, Z/2) as follows: for proper varieties having a smooth rational point is a birationally invariant property, so the obstruction should also have birational invariance. This holds for the former group, but not the latter. Now that we have made explicit what we mean by the CT obstruction, we can make the following bold conjecture, which is motivated by theorems of Witt, Scheiderer and Ducros mentioned in the introduction, and in analogy with Colliot-Thélène's celebrated conjecture (see [4]) saying that the Brauer–Manin obstruction is the only one to the local-global principle for rational points for smooth projective rationally connected varieties over number fields:

Conjecture 2.6 The CT obstruction is the only one to the local-global principle for rational points for smooth projective rationally connected varieties over R(C).
All known classes of varieties for which the CT obstruction is the only one to the local-global principle for rational points are rationally connected. Our main result, Theorem 1.5, is a contribution to this conjecture.
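For orientation, the group H^i_nr appearing above is usually defined as the subgroup of classes with trivial residue along every divisorial valuation; a sketch of this standard definition:

```latex
% Unramified cohomology of the function field R(C)(X) over R(C):
H^{i}_{\mathrm{nr}}\bigl(\mathbb{R}(C)(X)/\mathbb{R}(C),\,\mathbb{Z}/2\bigr)
  \;=\;
  \bigcap_{v}\ker\Bigl(
    \partial_v \colon H^{i}_{\mathrm{\acute{e}t}}\bigl(\mathbb{R}(C)(X),\,\mathbb{Z}/2\bigr)
    \longrightarrow
    H^{i-1}_{\mathrm{\acute{e}t}}\bigl(\kappa(v),\,\mathbb{Z}/2\bigr)
  \Bigr),
% where v runs over the discrete valuations of R(C)(X) trivial on R(C)
% and kappa(v) denotes the residue field of v.  Since only the function
% field enters, the group is a birational invariant of smooth proper
% models of X, which is exactly the invariance invoked in the text.
```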
We finish this section with a lemma which will be used in the proof of Theorem 5.3. Let f : X → Y be a morphism between smooth, projective, irreducible varieties over R(C). The morphism f induces a map X (R(C) x ) → Y (R(C) x ) for every x ∈ C(R), which in turn induces a map X (A C ) → Y (A C ), which we will denote also by f by slight abuse of notation.
for every x ∈ C(R) by naturality, so the claim is now clear.
The topological reinterpretation of the obstruction due to Ducros
Notation 3.1 By resolution of singularities there is an integral, smooth, projective variety X equipped with a projective dominant morphism p : X → C over R whose generic fibre is X → Spec(R(C)). As usual we will call X a model of X over C. For every closed point x of C let O_x be the valuation ring of R(C)_x, let X_x denote the fibre of p over x, let X_sm ⊆ X be the smooth locus of p, and let X_{x,sm} = X_sm ∩ X_x be the smooth locus of X_x.
Let m_x be the special fibre of the section Spec(O_x) → X ×_C Spec(O_x) associated to M_x for every x ∈ C(R). Since X is regular, the point m_x lies in X_{x,sm}(R). Whenever convenient, we will denote the map x ↦ m_x on C(R) by σ(M).
The topological reformulation of the CT obstruction due to Ducros is the following

Theorem 3.6 (Ducros) The following are equivalent:

Proof This is Théorème 3.5 of [7] on page 83.
Proposition 3.7 Let σ be a weakly continuous section of p(R). Then there is a continuous semi-algebraic section σ′ of p(R) such that for every x ∈ C(R) the points σ(x) and σ′(x) are in the same connected component of X_{x,sm}(R).
Remark 3.8
Note that Theorem 1.3 is an immediate corollary of this proposition.
Proof of Proposition 3.7
This is essentially Proposition 4.1 of [7] on page 85, but there it is stated in a weaker form. However the proof actually shows the stronger form above. We will give an even stronger version incorporating interpolation, see Proposition 3.19 below, using essentially the same methods. For an effective zero cycle ∑_x n_x · x on a variety B supported on B(R), consider the closed subscheme cut out by the ideal ∏_x I(x)^{n_x} ⊂ O_B, where I(x) is the ideal sheaf of the point x. By slight abuse of notation we will let the symbol S denote this closed subscheme, too. When B is a curve this construction furnishes a bijective correspondence between the set of zero-dimensional closed subschemes of B whose closed points are all real and the set of effective zero cycles on B which are supported on B(R). In this case we will identify these two sets in all that follow.
We say that two maps f, g ∈ C^k_p(M, N) are k-equivalent at p if f(p) = g(p) and their derivatives up to order k agree at p, as tested against every pair of maps γ ∈ C^k_0(R, M) and δ ∈ C^k(N, R). Similarly an interpolation condition φ : S → A ×_B S is the same data as a morphism S → D of schemes over R, while C^k-sections of f(R) can be identified with C^k-maps A(R) → D(R). Therefore we will freely apply the concepts of Definitions 3.10 and 3.12 to such functions in all that follow.

Proposition 3.14 Let σ be a weakly continuous section of p(R) and let φ : S → X ×_C S be an interpolation condition of order ≤ k weakly compatible with σ. Then there is a C^k-section σ′ of p(R) such that for every x ∈ C(R) the points σ(x) and σ′(x) are in the same connected component of X_{x,sm}(R) and σ′ is compatible with φ.
Proof Note that the fibre of p(R) over x is a Nash manifold for all but finitely many x ∈ C(R). So by the Nash version of the stratification theorem (see Theorem A of [6] on page 349) there is a finite subset P of C(R), and for every semi-algebraic connected component U of C(R) − P a Nash manifold F_U such that p(R)^{-1}(U) is Nash-isomorphic to F_U × U and the restriction of p(R) onto p(R)^{-1}(U) is, modulo the given isomorphism, the projection onto the second coordinate. By the nature of our construction for every point P in C(R) − P the fibre X_P is smooth. By adding finitely many points to the set P, if it is necessary, we may assume that P has at least two points in each semi-algebraic connected component of C(R). Similarly we may assume that P contains every closed point of S without loss of generality.
Set S′ = (k + 1) ∑_{P∈P} P and let the same symbol denote the unique closed subscheme defined by this zero cycle. Since there is an interpolation condition φ′ : S′ → X ×_C S′ which subsumes φ, we may assume without loss of generality that the zero cycle defined by S is indeed (k + 1) ∑_{P∈P} P. Write S as a coproduct

S = ∐_{P∈P} S_P,

where S_P is a closed subscheme of S supported on P for each P ∈ P (possibly empty). For every such P let φ_P : S_P → X ×_C S_P be the interpolation condition which is the pull-back of φ with respect to the closed imbedding S_P → S. Let P, Q be a pair of consecutive points of P. Since σ(P) and σ(Q) lie in the smooth locus of p, by the implicit function theorem there are two points P′, Q′ in the open interval ]P Q[ such that P′ lies before Q′, and p(R) has a C^∞-section σ_P (resp. σ_Q) defined over some open neighbourhood of [P P′] (resp. [Q′ Q]) compatible with φ_P (resp. φ_Q). On the other hand, because of the way the set P was constructed, the restriction of p(R) onto p(R)^{-1}(]P Q[) is Nash trivial, so there is a C^k-section of p(R) over [P′ Q′] which is k-equivalent to σ_P at P′, and similarly k-equivalent to σ_Q at Q′. Therefore the concatenation of σ_P, this section and σ_Q (restricted to [P P′], [P′ Q′] and [Q′ Q], respectively) is a C^k-section over [P Q], which we will denote by σ_{P,Q}. Now let P, Q, R be three consecutive points of P (where P = R is allowed). Since both σ_{P,Q} and σ_{Q,R} have extensions to an open neighbourhood of their domains of definition which are compatible with φ_Q, we get that their concatenation is C^k at Q. We get that the concatenation of the different sections σ_{P,Q} for all couples P, Q of consecutive points of P is a C^k-section σ′ of p(R) defined over all of C(R) such that for each point x ∈ C(R) the point σ′(x) lies in the same connected component of X_{x,sm}(R) as σ(x), and σ′ is compatible with φ.
We will need a variant of the claim above with Nash sections, since for technical reasons it will be more convenient to work with the latter in the next section. In order to do so we will show two interpolation lemmas first.
Proof Let π_j : A^m_R → A^1_R be the projection onto the j-th coordinate, where j = 1, 2, …, m. It will be sufficient to show the claim for φ_j = π_j ∘ φ and g_j = π_j ∘ g for each j. In other words we may assume that m = 1 without loss of generality. Since V is affine there is a regular map ψ : V → A^1_R compatible with φ. By replacing g with g − ψ we may assume without loss of generality that φ is the zero map. In this form the claim is a mild variant of Lemma 12.5.5 of [2] on page 321. We include the proof for the reader's convenience. Let h_1, h_2, …, h_n be the generators of the defining ideal of Y. Since V is non-singular, we can represent the germ of g at a point x ∈ V in the form g_x = λ_{1,x} h_1 + ⋯ + λ_{n,x} h_n, where the λ_{i,x} are the germs of C^∞-functions at x. Using a partition of unity and the compactness of V(R), this allows us to represent g globally as g = λ_1 h_1 + ⋯ + λ_n h_n, where λ_i ∈ C^∞(V(R)). Then it suffices to apply Nachbin's version of the Stone-Weierstrass theorem to the functions λ_i (see [13]).
Definition 3.16
Let V be a nonsingular variety over R, and let W ⊂ A^m(R) = R^m be a Nash manifold. An interpolation condition φ : S → W for some subscheme S ⊂ V of the type considered above is an interpolation condition φ : S → A^m × S whose image lies in W. Since W is compact, each derivative of ρ is bounded in some fixed neighbourhood of W, and hence the sequence ρ ∘ g_n approximates g in the C^∞-topology.
Remark 3.18
Note that for every pair of conjugate points P, P′ ∈ C(C) − C(R) the complement C′ = C − P − P′ is an affine curve, and C′(R) = C(R), so this set is compact. Therefore we may apply Lemmas 3.15 and 3.17 to C′. In particular the conclusion of Lemma 3.17 holds for C, too.
Proposition 3.19
Let σ be a weakly continuous section of p(R) and let φ : S → X ×_C S be an interpolation condition weakly compatible with σ. Then there is a Nash section σ′ of p(R) such that for every x ∈ C(R) the points σ(x) and σ′(x) are in the same connected component of X_{x,sm}(R) and σ′ is compatible with φ.
Proof We may assume that φ is an interpolation condition of order ≤ k, where k is a positive integer. By Lemma 3.14 there is a C^k-section σ₁ of p(R) such that for every x ∈ C(R) the points σ(x) and σ₁(x) are in the same connected component of X_{x,sm}(R) and σ₁ is compatible with φ. By the usual approximation theorems in the theory of smooth manifolds the section σ₁ can be arbitrarily well approximated by C^∞-maps s : C(R) → X(R). Note that if such an s is sufficiently close to σ₁ in the C^1-topology then p(R) ∘ s is a diffeomorphism and its inverse is very close to the identity map of C(R). Therefore for any smooth section σ′ : C(R) → X_sm(R) sufficiently close to σ₁ in the C^0-topology and for every x ∈ C(R) the points σ₁(x) and σ′(x) are in the same connected component of X_{x,sm}(R), and hence the same holds for σ(x) and σ′(x). The claim now follows at once from Lemma 3.17, as we explained in Remark 3.18.
We finish this section with a convenient condition for weak continuity.
Definition 3.20
We say that a set-theoretical section σ of the map p(R) is mildly in the latter case the image of σ is closed, and its intersection with Therefore the terminology is justified.
Proposition 3.21 Let σ be a semi-algebraic section of p(R) which is mildly continuous at every x ∈ C(R). Then σ is weakly continuous.
Proof Let x and y be two arbitrary different R-valued points of C lying in the same connected component of C(R). We will need the following Lemma 3.22.

Proof We may assume without loss of generality that V lies in p(R)^{-1}(]x y[); otherwise we only need to reverse the roles of x and y. It is also enough to prove the claim for the fibre above x; the proof for the fibre above y is similar. Since V is closed, the intersection V ∩ X_{x,sm}(R) is closed in X_{x,sm}(R). Therefore it will be enough to show that it is also open in X_{x,sm}(R). Let n be the relative dimension of X over C and let z ∈ V ∩ X_{x,sm}(R) be arbitrary. By the implicit function theorem there is a small connected open neighbourhood of z. Let J ⊂ I be the set of points in I lying to the right of x. By shrinking I, if it is necessary, we may assume that J = I ∩ ]x y[. Since σ is semi-algebraic, it is continuous at all but finitely many points of C(R). Therefore there is a finite sequence of points z_1, z_2, …, z_n ∈ [x y] such that σ is continuous at every z ∈ ]x y[ not on this list. We may even assume that z_1 = x, z_n = y, and z_i lies to the left of z_j for every pair of indices i < j. For every i = 1, 2, …, n let E_i denote the connected component of X_{z_i,sm}(R) containing σ(z_i) and for every i = 1, 2, …, n − 1 let J_i denote the closure of the image of ]z_i z_{i+1}[ with respect to σ.
Since the restriction of σ onto ]z_i z_{i+1}[ is continuous for every index i < n, the image of ]z_i z_{i+1}[ with respect to σ is connected, and hence its closure J_i is connected, too. Since J_i is also compact, its image under p(R) is the closure [z_i z_{i+1}], and the intersection J_i ∩ X_{z_i,sm}(R) is non-empty. By our assumptions the latter intersection lies in E_i, hence J_i ∩ E_i is non-empty, too. A similar argument shows that J_i ∩ E_{i+1} is also non-empty. Therefore the union of the sets J_i is a closed, connected semi-algebraic set, so by Lemma 3.22 the intersection V ∩ X_{x,sm}(R) contains E_1, and hence σ(x). We may argue similarly to deduce that V ∩ X_{y,sm}(R) contains σ(y). In other words V touches σ(x) on the right and σ(y) on the left. Since x and y are arbitrary, we get that σ is weakly continuous.
The Stone-Weierstrass approximation theorem with interpolation
Definition 4.1 Let π : (C × P^1)(R) = C(R) × P^1(R) → C(R) denote the projection onto the first factor. Let R ⊆ C(R) × P^1(R) be a semi-algebraic subset. We say that R is admissible if it is the union of an open semi-algebraic set and finitely many points. The kissing points of an admissible semi-algebraic set R as above are all points of R which are not in the interior of R. We say that R does not have topological obstruction if there is a Nash section s : C(R) → (C × P^1)(R) whose image lies in R.
The key result we need is an analogue of Conjecture 9.1 in [11] which we will formulate next. It is essentially a refined version of the classical Stone-Weierstrass approximation theorem with interpolation conditions.

Theorem 4.2 Let R ⊆ C(R) × P^1(R) be an admissible semi-algebraic subset, and for some closed subscheme S ⊂ C let φ : S → (P^1 × C) ×_C S = P^1 × S be an interpolation condition such that there is a Nash section s : C(R) → (C × P^1)(R) compatible with φ and whose image lies in R. Then there is a regular section f : C → C × P^1_R of the first projection compatible with φ such that f(C(R)) lies in R.
In particular we get that if R ⊆ C(R) × P^1(R) is an admissible semi-algebraic subset which does not have topological obstruction then there is a morphism f : C → P^1 of schemes over R such that f(C(R)) lies in R. We are going to prove the theorem above via a sequence of lemmas.
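The analytic core of Theorem 4.2 has an elementary shadow over an interval: Weierstrass approximation can be combined with finitely many interpolation conditions by writing the approximant as H + w·q, where H interpolates the target at the nodes and w vanishes exactly there, so the interpolation conditions are preserved no matter how q is chosen. A minimal numerical sketch with NumPy (the target function, the nodes, and the degree are arbitrary choices, not data from the paper):

```python
import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial import polynomial as P

def approx_with_interpolation(f, nodes, deg=40):
    """Polynomial p with p(t) = f(t) exactly at `nodes` and p ~ f
    uniformly on [-1, 1]: p = H + w*q, where H is the Lagrange
    interpolant of f at the nodes, w vanishes exactly at the nodes,
    and q is a Chebyshev fit of the smooth remainder (f - H)/w."""
    nodes = np.asarray(nodes, dtype=float)
    H = P.polyfit(nodes, f(nodes), len(nodes) - 1)   # exact interpolant
    w = P.polyfromroots(nodes)                       # simple zeros at nodes
    x = np.linspace(-1.0, 1.0, 2000)
    # sample the remainder away from the nodes to avoid 0/0
    far = np.min(np.abs(x[:, None] - nodes[None, :]), axis=1) > 1e-3
    rem = (f(x[far]) - P.polyval(x[far], H)) / P.polyval(x[far], w)
    q = C.chebfit(x[far], rem, deg)
    return lambda t: P.polyval(t, H) + P.polyval(t, w) * C.chebval(t, q)

f = np.exp                       # arbitrary smooth target
nodes = [-0.5, 0.0, 0.7]         # arbitrary interpolation conditions
p = approx_with_interpolation(f, nodes)
node_err = max(abs(p(t) - f(t)) for t in nodes)   # exact up to rounding
xs = np.linspace(-1.0, 1.0, 400)
sup_err = float(np.max(np.abs(p(xs) - f(xs))))    # uniformly small
print(node_err, sup_err)
```

The design point mirrors the proof strategy below: the error p − f is divisible by w, so shrinking the approximation error of q shrinks the uniform error without disturbing the prescribed values.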
Lemma 4.3
For some closed subscheme Z ⊂ C let h : Z → P^1_R be an interpolation condition. Then there is a morphism f : C → P^1_R of schemes over R such that f is compatible with h and f has no poles on C(R) outside of Z.
Proof First assume that h is actually a map h : Z → A^1_R. By our usual abuse of notation let Z also denote the effective divisor defining this closed subscheme and let d denote its degree. Choose an effective real divisor D on C which is supported outside of C(R) and whose degree is bigger than 2g + d, where g is the genus of C. Then by the Riemann-Roch theorem we have dim H^1(C, O_C(D − Z)) = 0, so the pull-back induces a surjection

H^0(C, O_C(D)) → H^0(Z, O_Z).

Therefore there is a real rational function f on C compatible with h whose polar divisor is a sub-divisor of D.
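The surjectivity step can be spelled out via the ideal-sheaf sequence of the finite subscheme Z; a sketch under the stated degree assumption:

```latex
% From 0 -> O_C(D - Z) -> O_C(D) -> O_Z -> 0 we obtain the exact sequence
H^0(C, \mathcal{O}_C(D)) \longrightarrow H^0(Z, \mathcal{O}_Z)
  \longrightarrow H^1(C, \mathcal{O}_C(D - Z)).
% Since deg(D - Z) > (2g + d) - d = 2g > 2g - 2, Serre duality gives
H^1(C, \mathcal{O}_C(D - Z)) \cong H^0(C, \omega_C(Z - D))^{\vee} = 0,
% so restriction H^0(C, O_C(D)) -> H^0(Z, O_Z) is onto: every
% interpolation condition on Z is realised by a global section of O_C(D).
```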
Now consider the general case. We may assume without loss of generality that Z is non-empty. Since Z is a finite scheme over R, we may write h in homogeneous coordinates as a pair of maps h_1, h_2 into A^1_R. Let h̃_1 be the unique map such that h̃_1|_Z is h_1, and h̃_1|_{2O_2} is f_2|_{2O_2}. By the above there is a rational function f_1 on C compatible with h̃_1 whose polar divisor is supported outside of C(R). Since Z is non-empty, the functions f_1, f_2 are not both identically zero, and hence there is a non-constant rational map (f_1 : f_2) on C, which extends to a morphism f since C is a smooth curve. The composition of f and the projection is the morphism we were looking for.
Proof Let D ⊂ A(R) be a semi-algebraic connected component, and let D_1 ⊂ D be the collection of all points x ∈ D such that x has a semi-algebraic open neighbourhood U ⊂ D such that the semi-algebraic set U ∩ f^{-1}(Z) has dimension strictly less than that of D. Clearly D_1 is open (in the usual semi-algebraic topology). Set D_2 = f^{-1}(Z) − D_1; since f^{-1}(Z) is closed, the set D_2 is closed, too. It will be sufficient to show that D_2 is also open. Clearly we only need to verify the latter Zariski-locally, that is, we may assume without loss of generality that A is affine.
Now let x ∈ D_2 be arbitrary; then there are a Zariski-open affine neighbourhood V ⊂ B of f(x) and a finite set of regular maps g_1, g_2, …, g_n from V to A^1_R such that Z ∩ V is the common zero locus of g_1, g_2, …, g_n. Let U ⊂ f^{-1}(V(R)) be a connected semi-algebraic open neighbourhood of x in D. Then g_i(R) ∘ f : U → R is a Nash function for each index i, so by Proposition 8.1.10 of [2] on page 166 its zero set Z_i ⊂ U either has dimension strictly less than that of U, or Z_i is equal to U. Since Z_i contains f^{-1}(Z)(R) ∩ U the former is not possible, so Z_i = U for every index i. This implies that f^{-1}(Z)(R) ∩ U is also equal to U.
Lemma 4.5 In the proof of Theorem 4.2 we may assume that R ⊆ C(R) × A^1(R) without loss of generality.
Proof By Lemma 4.4 for every connected component D ⊂ C(R) either s is constant on D, or s takes every value only finitely many times on D. Therefore for all but finitely many x ∈ P^1(R) the function s takes the value x only finitely many times on C(R), and hence after applying an automorphism of P^1 over R, if this is necessary, we may assume that the set T of points t ∈ C(R) where s(t) = ∞ is finite without loss of generality. We may even assume that φ does not take ∞ as a value. Since Nash maps are analytic, there are an effective zero divisor Z with support on C(R) and an interpolation condition h : Z → P^1_R compatible with s such that for every Nash map r : C(R) → P^1(R) compatible with h the limit

lim_{x→t} (s(x) − r(x))   (4.5.1)

exists and is finite for every t ∈ T. We may even assume that h subsumes φ without loss of generality by choosing Z such that Z − S is effective. By Lemma 4.3 there is a morphism r : C → P^1_R of schemes over R such that r is compatible with h and r has no poles on C(R) outside of Z.
By our assumptions both s and r take values in R on C(R) − T, so their difference s′ = s − r is a Nash function C(R) − T → R. Since the limit (4.5.1) exists and is finite for every t ∈ T, the map s′ extends uniquely to a continuous map C(R) → R which we will also denote by s′ by slight abuse of notation. The latter is also Nash and it is compatible with a unique interpolation condition φ′ : Z → A^1_R. Let R′ ⊆ C(R) × A^1(R) be the union of the image of s′ and the set of all points (x, a − r(x)) with (x, a) ∈ R and a ≠ ∞. The set R′ is admissible, and it contains the image of s′. So if we assume that the claim of the theorem holds for R′ then there is a map f′ : C → P^1_R of schemes over R compatible with φ′ such that f′(C(R)) lies in R′. The rational function f′ + r : C ⇢ P^1_R extends to a map f : C → P^1_R which is compatible with φ and f(C(R)) lies in R.
Proposition 4.6 Let R ⊆ C(R) × A^1(R) be an admissible semi-algebraic subset, and let s : C(R) → (C × A^1)(R) be a Nash section whose image lies in R. Then there are an open neighbourhood U of s in the C^∞-topology and an interpolation condition ψ : Y → A^1_R compatible with s such that the image of every C^∞-section which lies in U and is compatible with ψ lies in R.
Consider the real analytic function f. The zero set of f is s(V), hence the zero set of the restriction f|_A is just Z. By the Łojasiewicz inequality (in the form of the Corollaire to Théorème 1 of section 18 in [12]) applied to our f, G, A, E, and Z, there are d, N > 0 such that the inequality (4.2) holds. Choose an integer M ≥ N/2, and set d′ = min(√d, 1). Then (4.3) holds as well. Indeed, the projection π is a contraction, and dist(t, T) ≤ 1 since U was imbedded into (0, 1). So (4.2) implies (4.3) for (t, a) ∈ E. On the other hand if (t, a) ∉ E then |a| > S + 1, hence |a − s_2(t)| > 1, and (4.3) follows again.
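For orientation, the inequality being invoked has the following standard shape (stated here for a real analytic function on a compact set; the constant and the exponent are not explicit):

```latex
% Lojasiewicz inequality: if f is real analytic on an open set containing
% the compact set K, and Z = {x in K : f(x) = 0}, then there are
% constants c > 0 and N > 0 with
|f(x)| \;\ge\; c \cdot \operatorname{dist}(x, Z)^{N}
\qquad \text{for all } x \in K.
% In the proof above it is applied to bound |f| from below in terms of
% the distance to the zero set s(V), which is what makes the chosen
% neighbourhood of s in the C^infinity-topology effective.
```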
With the divisor Y = M·T let ψ : Y → C × A^1_R be the (unique) interpolation condition (of order M) compatible with s. Suppose now that a section r is compatible with ψ, and its C^{M+1}-distance from s is less than, say, 1. If D denotes the maximum of the relevant derivatives of s, then combining (4.3) and (4.4) we obtain that, after possibly shrinking K, the point r(t) = (t, r_2(t)) does not lie in A for t ∈ K \ T, hence r(K) ⊂ R.
On the other hand Q = s(C(R) \ K°) is a compact set in R°. If the C^0-distance of r from s is smaller than dist(Q, A), then r(C(R) \ K) ⊂ R as well.
Proof of Theorem 4.2 By Lemma 4.5 we may assume that R ⊆ C(R) × A^1(R) without loss of generality. By Proposition 4.6 there is an open neighbourhood U of s in the C^∞-topology and an interpolation condition ψ : Y → A^1_R compatible with s with the following property: for every C^∞-section r : C(R) → (C × A^1)(R) which lies in U and is compatible with ψ, the image of r lies in R. Let T be a zero-dimensional closed subscheme of C whose closed points are all real and which contains both S and Y as closed subschemes. Let φ′ : T → A^1_R be the unique interpolation condition compatible with s. Since s is compatible both with φ and ψ, the interpolation condition φ′ subsumes both φ and ψ. By Lemma 3.15 there is a regular section r : C → C × A^1_R compatible with φ′ such that r(R) lies in U. Since φ′ subsumes φ, the section r is compatible with φ. Since φ′ subsumes ψ, the section r is compatible with ψ, too. Therefore the image of r(R) lies in R.
The main theorem and some easy reductions
Definition 5.1 Note that for every x ∈ C(R) the discrete valuation of R(C)_x induces a topology on the projective space P^n(R(C)_x), and hence on the R(C)_x-valued points of any quasi-projective variety defined over R(C)_x. Moreover this topology is canonical in the sense that it does not depend on the choice of the embedding into a projective space. We will call this the x-adic topology. Now let X be again a smooth, irreducible projective variety defined over R(C). We will equip the direct product X(A_C) = ∏_{x∈C(R)} X(R(C)_x) with the direct product of the x-adic topologies.
Remark 5.2
Let X be as above, and let X be a model of X over C. It is possible to give a simple description of a basis for the topology on X(A_C) defined above in terms of X as follows. By slight abuse of notation let X(O_x) denote the set of sections Spec(O_x) → X ×_C Spec(O_x) for every x ∈ C(R). As we already noted we have a bijection X(R(C)_x) ≅ X(O_x) for every x ∈ C(R) by the valuative criterion of properness, so we have a bijection

X(A_C) ≅ ∏_{x∈C(R)} X(O_x),   (5.2.1)

too. For every interpolation condition φ : S → X ×_C S consider the subset of all those sections whose pull-back under the closed immersion S → X ×_C S is φ. These sets form a basis for the topology of X(A_C) under the map in (5.2.1).
For every morphism f : X → Y of varieties over R(C) and for every c ∈ Y (R(C)) let X c denote the fibre of f above c. Our main result Theorem 1.5 follows from the following
Remark 5.4
Note that Theorem 1.5 is trivially true when C(R) is empty. Indeed in this case there is no CT obstruction both for the smooth fibres of f over rational points and for X itself. By assumption all such fibres will have rational points, so X has rational points, too.
Proof We start the proof with two easy reduction steps. For the sake of simple notation set

Y_n = P^1_{R(C)} × ⋯ × P^1_{R(C)}  (n times).

Proposition 5.5 We may assume that Y = Y_n for some n without loss of generality.
Proof Using resolutions of singularities it follows from the assumption that there is a diagram of birational morphisms φ : Y′ → Y and ψ : Y′ → Y_n between smooth projective varieties over R(C) (for some n). Let X′ → X ×_Y Y′ be a desingularisation of the pull-back X ×_Y Y′ of X via φ which is isomorphic over the nonsingular part, and let f′ : X′ → Y′ be the composition of this desingularisation and the base change of f with respect to φ. Let V ⊆ Y′ be a non-empty Zariski-open subset such that the restrictions of both φ and ψ onto V are isomorphisms onto their images, and f is smooth over φ(V). Then we have a commutative diagram relating f, f′, φ, ψ and the natural map ρ : X′ → X. In particular ρ is birational. Let Z ⊂ X be the complement of f^{-1}(φ(V)), and let Z′ ⊂ X′ be the complement of f′^{-1}(V). Now let M be an element of X(A_C)^CT, as in the claim above. We may assume without loss of generality that its given open neighbourhood U is of the form

U = ∏_{x∈I} U_x × ∏_{x∈C(R)−I} X(R(C)_x),

where U_x ⊆ X(R(C)_x) is open for every x in the finite set I. Now let X be a model of X over C and for every x ∈ C(R) let T_x ⊆ X(R(C)_x) be the x-adic open neighbourhood of M_x which under the bijection X(R(C)_x) ≅ X(O_x) corresponds to those sections Spec(O_x) → X ×_C Spec(O_x) whose special fibre is the same as the special fibre of the section corresponding to M_x. Since Z is a proper Zariski-closed subscheme of X, the set Z(R(C)_x) is nowhere dense in X(R(C)_x) with respect to the x-adic topology, so there is a non-empty W_x ⊆ U_x ∩ T_x, open with respect to the x-adic topology, such that W_x and Z(R(C)_x) have empty intersection, for every x ∈ C(R). Therefore we may assume, without loss of generality, that M_x ∉ Z(R(C)_x) for every x ∈ C(R), the set I is non-empty, and U_x ∩ Z(R(C)_x) = ∅ for every x ∈ I. Since ρ is an isomorphism away from Z, the pre-image U′ of U under ρ is an open neighbourhood of the point M′ ∈ X′(A_C) corresponding to M. Since the map ψ ∘ f′ : X′ → Y_n satisfies the conditions of the theorem, we get that there is a point c̃ ∈ Y_n(R(C)) such that X′_{c̃} is smooth, and an N′ ∈ X′_{c̃}(A_C)^CT such that N′ ∈ U′. Let x now be an element of I.
Then c̃ lies in the image of U_x with respect to ψ ∘ f′, so it must lie in ψ(V(R(C)_x)). Therefore there is a unique c′ ∈ V(R(C)) such that c̃ = ψ(c′). Set c = φ(c′) and N = ρ(N′). Clearly c ∈ φ(V(R(C))) and hence X_c is smooth. Moreover ρ maps X′_{c̃} isomorphically onto X_c, so N ∈ X_c(A_C)^CT. Finally N ∈ U since ρ maps U′ into U.
Lemma 5.6 We may assume that Y = P^1_{R(C)} without loss of generality.

Proof We may immediately reduce the case when Y = Y_n to the case when Y = P^1_{R(C)} via an easy induction on n, so the claim follows from the proposition above.
The fibration method
Let us begin the main part of the proof of Theorem 5.3. By the above we may assume without loss of generality that Y = P^1_{R(C)}. By resolution of singularities there is an integral, smooth, projective variety X equipped with a projective dominant morphism f : X → C × P^1_R over R whose generic fibre is f : X → P^1_{R(C)}. In particular X is a model of X over C with respect to the composition p of f with the projection π : C × P^1_R → C onto the first factor. Since the generic fibre of f is smooth, the same holds for f, too. Therefore there is a closed subscheme Z ⊂ C × P^1_R of positive codimension such that f is smooth over the complement of Z.

Proof By our assumption U is of the form

U = ∏_{x∈I} U_x × ∏_{x∈C(R)−I} X(R(C)_x),

where the set I is finite. For every connected component D ⊂ C(R), for all but finitely many x ∈ D the intersection Z ∩ π^{-1}(x) is a zero-dimensional scheme, since Z has positive codimension in C × P^1_R. Since p is generically smooth and has geometrically irreducible fibres, for all but finitely many x ∈ D the fibre X_x is smooth and geometrically irreducible. Therefore for all D as above we may choose a point x(D) ∈ D which does not lie in I, such that the intersection Z(R) ∩ π^{-1}(x(D))(R) is finite, and X_{x(D)} is smooth and geometrically irreducible.
For every D as above let E_D ⊂ X_{x(D)}(R) denote the connected component containing the point m_{x(D)} = σ(M)(x(D)). We claim that the image of E_D under f(R) does not lie in Z(R). Assume that this is not the case for some D ∈ π_0(C(R)). Then the image of E_D under f(R) lies in the finite set Z(R) ∩ π^{-1}(x(D))(R). But E_D is connected, so is its image under the continuous map f(R), which therefore must be a point p_D ∈ Z(R) ∩ π^{-1}(x(D))(R). Note that E_D is Zariski-dense in X_{x(D)}. Indeed suppose that this is not the case; as the latter is geometrically irreducible, the Zariski-closure of E_D has dimension strictly less than the dimension d of X_{x(D)}. Therefore the dimension of the semi-algebraic set E_D is also less than d. But X_{x(D)} is smooth, so the dimension of E_D is d by the inverse function theorem, which is a contradiction. We get that X_{x(D)} lies in f^{-1}(p_D), and hence the fibre of f over any other point in π^{-1}(x(D)) is empty. But this is a contradiction, so our original assumption on the image of E_D with respect to f(R) is false.
For every D as above choose an N_{x(D)} ∈ X(R(C)_{x(D)}) such that the special fibre of the corresponding section lies in E_D. Replacing M by M′ we may assume without loss of generality that σ(M) is Nash, compatible with φ, and f(R) • σ(M) only intersects Z(R) in finitely many points. Recall that a Nash map a : A → B of Nash manifolds over R is Nash trivial if there is a Nash manifold L and a Nash diffeomorphism b : L × B → A (over R) such that a • b : L × B → B is the projection onto the second coordinate.
Proposition 6.2 There is an admissible semi
by open semi-algebraic subsets which are Nash diffeomorphic to an affine space over R, necessarily of dimension 2, by Lemma 3.2 of [9] on page 1217. For every j ∈ J the image of V_j under π(R) is a finite union of open intervals. Let B be the union of the end-points of these open intervals for all j ∈ J, where by end-points we mean accumulation points not in the interval. Since the set of these intervals is finite, the set B is also finite. The complement of B in C(R) is the union of finitely many pair-wise disjoint open intervals; let K denote the set of these open intervals. Since the sets {V_j}_{j∈J} cover all but finitely many points of I, for every H ∈ K the pre-image π(R)^{-1}(H) ∩ I lies in V_j for some j ∈ J. For every such H fix such a V_j and let W_H ⊂ V_j be a semi-algebraic tubular neighbourhood of π(R)^{-1}(H) ∩ I in V_j.
Definition 6.3
For every x ∈ C × P^1(R) and every 1-dimensional subspace L of the tangent space of C × P^1 at x let X_{L,sm} ⊂ X_x be the largest open subscheme such that for every closed point P of X_{L,sm} the image of the differential of f at P contains L.
Let σ : C(R) → X(R) be a Nash section of p(R) and let R_+ ⊆ C(R) × P^1(R) be an admissible semi-algebraic subset which contains the image Im(f(R) • σ) of f(R) • σ. For every point x on Im(f(R) • σ) let L(x) denote the tangent line of f(R) • σ at x; it is a 1-dimensional subspace of the tangent space of C × P^1 at x. A butterfly extension of σ on R_+ is a semi-algebraic section β : R_+ → X(R) of f(R).

Proof We will need the following easy semi-algebraic separation lemma.
Lemma 6.5 Let F and G be two disjoint closed semi-algebraic subsets of X (R). Then there are two disjoint open semi-algebraic subsets A, B ⊂ X (R) such that F ⊂ A and G ⊂ B.
Proof Since X is projective, by Theorem 3.4.4 of [2] on page 72 there is a continuous semi-algebraic embedding ι : X(R) → R^m for some positive integer m. Because X is projective, the semi-algebraic set X(R) is compact, so the same holds for its closed subsets F and G. Therefore ι(F) and ι(G) are closed in R^m, and by elimination of quantifiers these sets are also semi-algebraic. Then d = dist(ι(F), ι(G)) > 0, where dist stands for the Euclidean distance. The sets A′ = {v ∈ R^m : dist(v, ι(F)) < d/2} and B′ = {v ∈ R^m : dist(v, ι(G)) < d/2} are open, semi-algebraic and disjoint, so A = ι^{-1}(A′) and B = ι^{-1}(B′) have the required properties.

It has a unique extension s. For every point x on I let L(x) denote the tangent line of f(R) • σ(M) at x. Note that for every x ∈ I we have s(x) = σ(M)(π(R)(x)), and σ(M) is a Nash section, so s(x) lies in X_{L(x),sm}(R). Therefore it will be enough to show that there is an admissible semi-algebraic subset R_+ ⊆ R containing I such that for every kissing point x of R_+ on I the intersection of the closure of the image of s with X_x(R) lies in the connected component of s(x) in X_{L(x),sm}. Let K denote the set of kissing points of R.
For every x on I let X_{L(x),bad} be the complement of X_{L(x),sm} in X_x. For every x ∈ K the subscheme X_{L(x),bad} of X is Zariski-closed, so the semi-algebraic set X_{L(x),bad}(R) is closed in X(R), and does not contain s(x); hence by Lemma 6.5 we may pick two disjoint open semi-algebraic subsets A_x, B_x ⊂ X(R) such that X_{L(x),bad}(R) ⊂ A_x and s(x) ∈ B_x. Now let E be any semi-algebraic connected component of X_{L(x),sm}(R) which does not contain s(x), and let Ē denote its closure in X_x(R). Since E is closed in X_{L(x),sm}(R), and s(x) lies in X_{L(x),sm}(R), the set Ē does not contain s(x), so by Lemma 6.5 we may pick two disjoint open semi-algebraic subsets A_E, B_E ⊂ X(R) such that Ē ⊂ A_E and s(x) ∈ B_E.
For every x ∈ K let W_x be the intersection of B_x and the B_E for all E as above. The set of connected components of X_{L(x),sm}(R) is finite, therefore the set W_x is an open semi-algebraic neighbourhood of s(x) such that the intersection of its closure with X_x(R) lies in the connected component of s(x) in X_{L(x),sm}. Since σ(M) is continuous, for every x as above π(R)(x) has an open connected neighbourhood V_x in C(R) such that σ(M) maps V_x into W_x. We may assume that the sets V_x are pair-wise disjoint by shrinking them, if necessary. For every x ∈ K let R_x be the intersection of π(R)^{-1}(V_x) with the image of W_x ∩ f(R)^{-1}(R_o) with respect to f.

Let T be the interior of the complement of the union of the V_x for all x ∈ K in C(R). Then the set R_{++}, the union of the sets R_x and the part of R_o lying over T, is open, semi-algebraic and contains all but finitely many points of I. Therefore R_+ = R_{++} ∪ I is an admissible subset of R. For every x ∈ K the intersection of the closure of the image of s|_{R_+} with X_x(R) lies in the connected component of s(x) in X_{L(x),sm} by construction. If x is a kissing point of R_+ not in K, then x ∈ R_o, so s is continuous at x, and hence the intersection of the closure of the image of s|_{R_+} with X_x(R) is just s(x).
Let X_sm ⊆ X be the smooth locus of f. For every x ∈ I let P_x denote the formal completion of the Nash section σ(M) around σ(M)(x); it is a section Spec(O_x) → X ×_C Spec(O_x). By slight abuse of notation let the same symbol P_x denote its generic fibre, too. Since σ(M) only intersects Z(R) in finitely many points, we have P_x ∈ X_sm(R(C)_x). By property (ii) of Proposition 6.1 we have P_x ∈ U_x for every x ∈ I, so by Lemma 6.6 for every x ∈ I there is an open neighbourhood V_x of f(P_x) in the x-adic topology such that for every z ∈ V_x the set f^{-1}(z)(R(C)_x) ∩ U_x is non-empty. We may assume that V_x = P^1(R(C)_x) whenever U_x = X(R(C)_x). There is an interpolation condition κ : T → P^1, compatible with f(R) • σ(M), such that its interpolating values lie in the neighbourhoods V_x above.

Now let R_+ ⊆ R be an admissible semi-algebraic subset containing the image I of f(R) • σ(M) and let s : R_+ → X(R) be a butterfly extension of σ(M). By removing every kissing point of R_+ which does not lie on I we may even assume that every kissing point of R_+ lies on I. Let κ′ : T′ → P^1 be an interpolation condition subsuming κ : T → P^1, compatible with f(R) • σ(M), such that for every kissing point z of R_+ the point π(R)(z) has coefficient at least 2 in T′.
By Theorem 4.2 there is a regular map c : C → P^1 compatible with κ such that the graph Γ_c of c in C × P^1(R) lies in R_+. Let ĉ ∈ P^1(R(C)) be the generic point of Γ_c. Let X_c denote the closed subscheme f^{-1}(Γ_c) ⊂ X and let π_c : X_c → C be the composition of f and π. Since all but finitely many points of Γ_c lie in R_+^o ⊂ U(R), the generic fibre X_ĉ of X_c (with respect to π_c) is smooth. Let X_{c,sm} ⊆ X_c be the smooth locus of π_c. By resolution of singularities there is a sequence of blow-ups r : X′_c → X_c such that X′_c is a model of X_ĉ over C and contains X_{c,sm} as an open sub-C-scheme, i.e. the restriction r|_{r^{-1}(X_{c,sm})} : r^{-1}(X_{c,sm}) → X_{c,sm} is an isomorphism.
Site-specific seismic hazard levels at the economic zone of Duqm, Oman
A site-specific probabilistic seismic hazard assessment (PSHA) was carried out for the area of the Special Economic Zone Authority of Duqm, involving hazard evaluation at bedrock conditions and assessment of the potential site influence on the seismic ground motion at the bedrock. Appropriate source and ground-motion prediction models were selected, and seismic hazards were expressed by means of 5% damped Uniform Hazard Spectra (UHS) for three return periods of 475, 975 and 2475 years. A logic-tree algorithm was used to study the influence of the epistemic uncertainties in the source models, earthquake recurrence and maximum magnitude, along with the ground-motion prediction equations (GMPEs). The local geology effects were characterized by the fundamental resonance frequency (Fo), obtained using the horizontal-to-vertical spectral ratio technique, and by soil amplification factors. The effects of soil were assessed using SHAKE91 with soil parameters defined by 55 geotechnical boreholes in conjunction with 2D multichannel analysis of surface waves (MASW) surveys at 90 sites. Selected strong-motion records were scaled using a spectral matching technique for use at the bottom of the soil columns. The selection of these records is based on scenarios characterized by deaggregation of the PSHA results at the top of the bedrock. The Duqm area mostly features low amplifications, below 1.3 over the considered spectrum. Surface ground-motion maps show low hazard values, with Peak Ground Accelerations (PGA) varying between about 2 and 5% g for a 475-year return period. Although several sites are assessed to be susceptible to liquefaction, the liquefaction analyses indicate that surface ground motions for a 475-year return period are insufficient to produce it.
Introduction
The Duqm area is currently experiencing rapid industrial and social development, with ambitious growth plans that use existing areas for social, industrial, tourism and marine transport mega-projects. The historical and instrumentally recorded earthquakes around the Duqm area show very low seismicity. However, there is evidence of a moderate earthquake (Ms 5.3) in 1939 within 200 km of the Duqm area (Aldama 2009). The important facilities to be built in this area should resist the expected seismic forces and continue to function efficiently after an earthquake has occurred. This underlines the necessity of assessing seismic hazards, which is crucial for the earthquake-resistant design of critical assets, risk management, emergency response and insurance guidelines. The hazard evaluation of the Duqm area facilitates planning and developing advanced and proper infrastructure, attracting investors and expanding the old Duqm City, with the intention of increasing its population to 100 000 by 2025.
The Duqm area lies in the eastern-central part of Oman, within 200 to 500 km of seismically active margins produced by the interaction between the Arabian, Eurasian, Indian and African plates (figure 1). Duqm lies within the hosting Arabian Plate, which is regarded as a stable craton according to Fenton et al. (2006). Directly offshore of the Duqm area lies the Arabian Sea, which is characterized by the Owen Fracture Zone and the Murray Ridge to the east. These two tectonic structures mark the Arabia-India plate boundary. Two convergent margins are evident along the Zagros and Makran to the north and northeast, respectively, where the Arabian Plate moves toward the Eurasian Plate. The rifting between the Arabian and African Plates gives rise to the Gulf of Aden, a seismically active divergent tectonic boundary to the south. Moreover, some small to medium events have occurred in the Oman Mountains, which show evidence of recent tectonic movements (Kusky et al. 2005). Details of these seismotectonic elements were discussed in El-Hussain et al. (2018).
The main goal of this analysis was to conduct a site-specific PSHA for the Duqm economic zone using the EZ-FRISK 8.0b software from Fugro USA Land, Inc. The hazard analysis at bedrock conditions was conducted with the well-established and widely used Cornell-McGuire methodology (Cornell 1968; McGuire 1976). Then the site effect under earthquake excitation was estimated at carefully selected sites to classify the surface ground-motion levels.
Surface geologic investigations throughout the area, along with information from 55 geotechnical boreholes, were used to detail the surface geological setting (figures 2 and 3). Additionally, geophysical surveys were conducted at 90 selected, well-distributed sites with a spatial spacing of about 1-2 km. The field surveys included microtremor measurements, P-wave shallow seismic refraction and MASW. Borehole observations and geophysical survey results were used to determine the Fo at each studied site and to define its site response using the identified soil characteristics (e.g. P-wave and shear-wave velocities (Vs), density, thickness, etc.).
The Fo of the soft sediments offers an informative indication of the frequency at which the ground motion is amplified most. Nakamura's (1989) method was applied to determine Fo within the Duqm area. This technique characterizes each studied site by the horizontal-to-vertical ratio of the Fourier spectra (HVSR) of microtremors recorded by a single three-component seismograph. Thus, given sufficient microtremor measurements over the Duqm area, the resulting Fo could be mapped.
Vs is the best descriptive parameter for the stiffness of the medium (Aki & Richards 1980), so it is usually considered the most essential element in characterizing soil amplification. Shear-wave velocity sections were obtained using the 2D active MASW technique, which has proved to estimate Vs reliably (El-Hussain et al. 2014; Mohamed et al. 2019). Conventional P-wave shallow seismic refraction measurements were carried out to build preliminary depth models for inverting the Rayleigh-wave dispersion curves of the MASW analyses. The obtained P-wave velocities were transformed into initial Vs using a Poisson's ratio of 0.4. Afterward, the final Vs profiles were introduced into the SHAKE91 (Idriss & Sun 1992) algorithm in combination with suitable earthquake time histories to obtain the soil site effect at the chosen 90 sites. Spectral amplification, together with surface PGA and pseudo-spectral ground accelerations (PSA), was mapped for the three return periods of 475, 975 and 2475 years.
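The Vp-to-Vs conversion via Poisson's ratio mentioned above follows the standard isotropic-elasticity relation; a minimal sketch (the input velocity is illustrative, not a measured value from the study):

```python
import math

def vp_to_vs(vp, poisson=0.4):
    """Convert P-wave velocity to shear-wave velocity for an isotropic
    elastic medium: Vs = Vp * sqrt((1 - 2v) / (2 * (1 - v)))."""
    return vp * math.sqrt((1.0 - 2.0 * poisson) / (2.0 * (1.0 - poisson)))

# with the Poisson's ratio of 0.4 used in the study, Vs is about 0.41 * Vp
initial_vs = vp_to_vs(1500.0)   # illustrative Vp of 1500 m/s
```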
Seismic hazard assessment at the bedrock
Inputs necessary to conduct a PSHA include delineation of the contributing seismic zones, characterization of the earthquake recurrence for each outlined seismic source, choice of applicable GMPEs and integration of these parameters to obtain hazard values. The classic Cornell-McGuire methodology was implemented to integrate these parameters, comprising proper treatment of the aleatory and epistemic uncertainties. Aleatory variability was introduced into the formula depicting the hazard through the mean of each parameter together with the standard deviation of the predicted ground motions. Epistemic uncertainties were handled using the logic-tree algorithm, offering various alternatives for the models representing seismic sources, recurrence parameters, maximum possible earthquake and GMPEs.
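The Cornell-McGuire integration can be sketched as a discrete double sum over magnitude and distance bins, with the aleatory variability entering through the standard deviation of a lognormal GMPE. The coefficients of the toy GMPE below are invented for illustration and are not from any model used in the study:

```python
import math

def norm_sf(x):
    """Survival function of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def annual_exceedance_rate(a, nu, mags, mag_probs, dists, dist_probs,
                           gmpe_mean_ln, sigma_ln):
    """Discrete Cornell-McGuire sum for one areal source:
    rate(A > a) = nu * sum_m sum_r P[A > a | m, r] * p(m) * p(r),
    where nu is the annual rate of events above Mmin and the GMPE is
    lognormal with log-mean gmpe_mean_ln(m, r) and log-std sigma_ln."""
    rate = 0.0
    for m, pm in zip(mags, mag_probs):
        for r, pr in zip(dists, dist_probs):
            eps = (math.log(a) - gmpe_mean_ln(m, r)) / sigma_ln
            rate += nu * pm * pr * norm_sf(eps)
    return rate

# illustrative (made-up) GMPE: ln PGA[g] = -3.0 + 0.8*M - 1.2*ln(R)
toy_gmpe = lambda m, r: -3.0 + 0.8 * m - 1.2 * math.log(r)
```

Summing such rates over all contributing zones, and repeating the calculation over a grid of target accelerations, produces the hazard curves discussed below.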
Two seismic source models were adopted from Deif et al. (2020) in their update of the PSHA for Oman (figure 4). Both models treat the delineated seismic zones as seismically homogeneous areas, relying on a modernized seismic database (Deif et al. 2017), existing geophysical data, active faults and other major geologic elements, and earlier related studies. The first model is superior to the second, as it is of higher resolution and correlates better with the identified active geologic structures. Additionally, the first model splits the Makran Subduction Zone into two stand-alone seismic zones (eastern and western segments) and divides the Owen Fracture Zone into the Owen and Murray Ridge seismic zones. The second model considers Makran and Owen as one seismic zone each. Therefore, the first model is given a higher weight, 0.8, while a smaller weight of 0.2 is assigned to the second one.
Similarly, the Cornell & Vanmarcke (1969) parameters describing the earthquake recurrence of each zone were taken from Deif et al. (2020). For most seismic zones, maximum magnitude (Mmax) values were obtained with the Kijko (2004) algorithm. For zones lacking sufficient earthquake data for a satisfactory statistical analysis, Mmax values were estimated by adding 0.5 units of moment magnitude to the largest observed event. The epistemic uncertainties of the earthquake recurrence parameters were considered using their mean values along with ±1 standard deviation. A weight of 0.6 was given to the mean results, whereas weights of 0.2 were given to the recurrence parameters at ±1 standard deviation.
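A common way to turn such recurrence parameters into the magnitude probabilities needed by the hazard sum is a doubly-truncated Gutenberg-Richter distribution. In the sketch below, Mmin = 4.0 and Mw 6.24 appear in the text, but the b-value of 1.0 and the "largest observed + 0.5" example are illustrative:

```python
import math

def truncated_gr_bins(mmin, mmax, b_value, dm=0.1):
    """Magnitude-bin midpoints and probabilities for a doubly-truncated
    Gutenberg-Richter distribution with beta = b * ln(10)."""
    beta = b_value * math.log(10.0)
    norm = 1.0 - math.exp(-beta * (mmax - mmin))
    cdf = lambda m: (1.0 - math.exp(-beta * (m - mmin))) / norm
    mids, probs = [], []
    lo = mmin
    while lo < mmax - 1e-9:
        hi = min(lo + dm, mmax)
        mids.append(0.5 * (lo + hi))
        probs.append(cdf(hi) - cdf(lo))
        lo = hi
    return mids, probs

# Mmax rule for data-poor zones: largest observed magnitude plus 0.5
# (magnitude value below is illustrative only)
mmax_example = 5.3 + 0.5
```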
One more uncertainty, related to the maximum observed earthquake in the western Makran segment, was introduced into the calculations as a consequence of the earthquake of 1483. If this event truly occurred with an M 7.7 in western Makran, as reported by Ambraseys & Melville (1982), Mmax is estimated to be Mw 8.2. Instead, Musson (2009) located this event at Hormuz Island with an Ms 6.0, leading to a maximum credible earthquake of Mw 6.24 in the western Makran segment. A relatively low weight of 0.3 was allocated to the former option owing to its reliance on relatively old studies with high ambiguity in the earthquake location. A weight of 0.7 was allocated to the other alternative, considering the frequent foreshocks that preceded the main shock at Hormuz Island.

Figure 4. Seismic source models developed for the PSHA studies of the Duqm area. The right panel presents the preferred model with more seismic source zones, while the left panel illustrates the more regional seismic source model.
The scarcity of acceleration time histories in Oman motivated the use of GMPEs developed in other localities, within tectonic settings analogous to those that might affect the country. The two applied source models contain three different seismotectonic regimes, namely subduction zones, shallow active areas and stable continental areas. Three GMPEs were selected for each seismotectonic regime, based chiefly on the findings of Delavaud et al. (2012) and Douglas et al. (2014) in their work on a seismic hazard model for the Middle East. All preferred models are capable of determining the horizontal PGA and the 5% damped PSA.
The GMPEs of Youngs et al. (1997), Atkinson & Boore (2003) and Zhao et al. (2006) were used to describe how the seismic waves attenuate in the Makran Subduction Zone, while the models by Zhao et al. (2006), Chiou & Youngs (2008) and Akkar & Bommer (2010) were applied to represent the ground-motion attenuation behavior in shallow active seismic zones. The two GMPEs of Campbell (2003) and Atkinson & Boore (2006) were used for the stable continental areas (the Arabian craton) in combination with the three shallow active GMPEs, because the Arabian Peninsula cannot be purely characterized as a stable craton.
Among the GMPEs implemented in the shallow active zones, the model by Akkar & Bommer (2010) was allocated a 0.5 weight, because these equations are the most recent, were chosen from the better-performing GMPEs for the Middle East, and used many time histories from strong-motion instruments installed there. The equations by Zhao et al. (2006) and Chiou & Youngs (2008) were allocated equal weights of 0.25. The GMPE of Zhao et al. (2006) was prioritized with a 0.5 weight in the Makran Subduction Zone, whereas the remaining weight was shared equally by the Youngs et al. (1997) and Atkinson & Boore (2003) GMPEs. For the seemingly stable Arabian craton, the models by Campbell (2003) and Atkinson & Boore (2006) were preferred over the shallow active GMPEs, with weights of 0.3 and 0.25, as they were developed for stable regions. The residual weight was apportioned among the three shallow active equations at 0.15 each.
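Combining the branches of such a logic tree amounts to a weighted average of the branch hazard curves; a sketch using the 0.5/0.25/0.25 weights quoted above (the branch curves themselves are invented):

```python
def combine_logic_tree(branch_curves, weights):
    """Weighted mean hazard curve over logic-tree branches; each branch
    curve lists annual exceedance rates on a common ground-motion grid."""
    assert abs(sum(weights) - 1.0) < 1e-9, "branch weights must sum to 1"
    n = len(branch_curves[0])
    return [sum(w * curve[i] for w, curve in zip(weights, branch_curves))
            for i in range(n)]

# illustrative branch rates (per year) for the three shallow active GMPEs
mean_curve = combine_logic_tree(
    [[1e-2, 1e-3], [2e-2, 2e-3], [4e-2, 4e-3]],
    [0.5, 0.25, 0.25],
)
```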
To put all GMPEs into one logic tree, all of them had to be compatible concerning the magnitude scale, the source-site distance and the definition of the horizontal component. All chosen models define the earthquake size using moment magnitude (Mw). Conveniently, the chosen models could be introduced into EZ-FRISK 8.0b as tables comprising the ground-motion value of the chosen parameter for a specific scenario (magnitude-distance pair); therefore, the various distance definitions could be treated independently without additional modifications. The hazard levels herein are characterized by the geometric mean of the horizontal PSA components. Ground motions calculated using GMPEs that do not define the horizontal ground motion in this way were transformed into this class using the method of Beyer & Bommer (2006).

Figure 5. Hazard curves of PGA and 5% damped PSA at 0.2 and 1.0 s.
Hazard calculations at bedrock
The seismic hazard analysis was conducted for a bedrock layer characterized by a VS30 of 765 m s−1, where VS30 is the time-averaged Vs of the topmost 30 m. Seismic zones were described as seismically homogeneous areas of fixed depth, equal to the average depth of the earthquakes in the catalog at each zone. Fixed depths assigned because of location difficulties (e.g. 10 and 33 km), which were frequently encountered in the seismic dataset, were discarded when calculating the average depths. The minimum magnitude (Mmin) was set to 4.0 in all zones, as smaller events are considered incapable of producing serious damage to engineered structures. A single point at the center of the Duqm area was selected to represent the entire area, because no significant change in the bedrock hazard results was expected over this relatively small area. The hazard products at the bedrock were provided as seismic hazard curves, deaggregation results and UHS for the three studied return periods.
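The VS30 used to characterize the bedrock is conventionally computed as the travel-time (harmonic, thickness-weighted) average of Vs over the top 30 m; a sketch (the layered profile in the example is illustrative):

```python
def vs30(thicknesses, velocities):
    """Time-averaged shear-wave velocity of the top 30 m:
    VS30 = 30 / sum(h_i / Vs_i), truncating the profile at 30 m depth.
    If the profile is shallower than 30 m, the last layer is extended
    (a common simplifying assumption)."""
    depth, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses, velocities):
        if depth >= 30.0:
            break
        use = min(h, 30.0 - depth)
        travel_time += use / v
        depth += use
    if depth < 30.0:
        travel_time += (30.0 - depth) / velocities[-1]
    return 30.0 / travel_time
```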
Hazard curves
Final hazard curves were evaluated as the weighted average of all hazard curves yielded by the applied logic tree, illustrating the annual frequency with which a particular ground-motion intensity is exceeded. Figure 5 shows the hazard curves of PGA and 5% damped PSA at 0.2 and 1.0 s. Moreover, the PGA hazard curves of the most influential seismic zones are shown in figure 6. The figure reveals that the hazard levels for bedrock conditions at the Duqm site are low and mainly influenced by the host Arabian background zone for the studied return periods. The calculated low hazard is attributed to the location of the Duqm site in central Oman, far from the sources of high seismicity and accordingly affected only by less active seismic sources.
Uniform hazard spectra
The UHS were established by calculating hazard curves at the essential spectral periods, covering the expected heights of engineering structures in Duqm. The UHS at the bedrock in the Duqm area are depicted in figure 7 for the various return periods studied. The weighted-average PGA for a 2475-year return period was 51 cm s−2, and the largest weighted-average horizontal acceleration was 114 cm s−2, observed at a spectral period of 0.1 s.
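Each UHS ordinate is obtained by reading the hazard curve of one spectral period at the target annual exceedance rate: 1/475 per year for a 475-year return period, which corresponds to roughly a 10% chance of exceedance in 50 years since 1 − exp(−50/475) ≈ 0.1. A log-log interpolation sketch with an invented hazard curve:

```python
import math

def uhs_ordinate(accels, rates, return_period):
    """Log-log interpolate a hazard curve (acceleration vs annual
    exceedance rate) at rate 1/return_period to get one UHS ordinate.
    Rates must decrease monotonically with acceleration."""
    target = 1.0 / return_period
    for i in range(len(accels) - 1):
        a1, r1 = accels[i], rates[i]
        a2, r2 = accels[i + 1], rates[i + 1]
        if r2 <= target <= r1:
            t = (math.log(target) - math.log(r1)) / (math.log(r2) - math.log(r1))
            return math.exp(math.log(a1) + t * (math.log(a2) - math.log(a1)))
    raise ValueError("target rate outside the range of the hazard curve")
```

Repeating this lookup over the hazard curves of all spectral periods, for a fixed return period, yields one uniform hazard spectrum.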
Deaggregation of ground motion
The deaggregation process aims to define the contributions of different future earthquake scenarios to the whole hazard at specific locations for specified return periods. This process is essential in many design circumstances; especially for high-importance facilities, it is more appropriate to use the ground-motion time history of a particular earthquake scenario to define the probable earthquake action. The deaggregation findings are represented here on an equally spaced magnitude-distance grid. The deaggregated PGA and PSA at 0.2, 1.0 and 2.0 s, representative of short and long spectral periods, are presented in figure 8 for return periods of 475 and 2475 years.

Figure 6. Seismic hazard curves reflecting the contribution of the source zones to the PGA.
The deaggregation findings reveal the domination of the background zone over the short-spectral-period hazard. The mean magnitude and distance of the effective scenarios for a 475-year return period were 5.2 and 21.5 km. Larger earthquakes at greater distances affect the hazard most at longer spectral periods, indicating that large seismic hazards from the low-seismicity Arabian background zone are improbable.
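Such mean magnitudes and distances come from weighting each deaggregation bin by its hazard contribution; a sketch with invented bin contributions:

```python
def deagg_means(mags, dists, contrib):
    """Mean magnitude and distance from a deaggregation matrix, where
    contrib[i][j] is the hazard contribution of bin (mags[i], dists[j])."""
    total = sum(sum(row) for row in contrib)
    mean_mag = sum(m * sum(row) for m, row in zip(mags, contrib)) / total
    mean_dist = sum(r * c for row in contrib
                    for r, c in zip(dists, row)) / total
    return mean_mag, mean_dist
```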
Local geology
A total of 146 carefully collected soil samples from site walkovers, plus the products of 55 geotechnical boreholes, were used for soil description in the Duqm area. A detailed soil survey and sampling of traverse soil profiles were performed along predefined lines crossing all the surface geologic elements detected on the prevailing geologic map. The soil samples were gathered at prearranged sites or wherever any variation in soil nature was encountered. A pit 60 to 80 cm deep was excavated at every chosen site, and the 146 soil samples were collected from the native soil after removing the topmost transported soil cover. The 55 geotechnical boreholes were drilled to depths ranging from 6 to 20 m below the surface, aiming to reach the geotechnical bedrock. Standard Penetration Tests (SPT) were performed at 2 m intervals or wherever the soil conditions were appropriate. Moreover, coring of the rock was conducted using a T6-101/NQ wireline core barrel, generating a nominal core diameter of 47.5 mm.

Figure 9. Locations of the collected soil samples.
The Duqm area is flat in the southern and northeastern parts and becomes undulating, rough ground toward the north and northwest. Various materials are encountered throughout the investigated area, ranging from outcropping rocks in the western and northern parts to very soft sabkha deposits, sands and gravels in the remaining areas. In the south, Duqm is generally covered with evaporites, silt and clay, constituting the sabkha area. Slightly northward, it is covered by gravelly sand to sandy gravel. In the northeastern parts, the silty fine-sand sabkha is bordered westward by highlands covered with gravelly sand and conglomerate. Toward the east, Duqm is covered by a low-lying sandy ridge and a silty coastal strip. Sieve analyses reveal that coarser-grained soils are mostly found toward the north, while finer-grained soils are localized in the northeastern and southern parts, apart from the coarser-grained soils at the southernmost end of the Duqm area.
The drilling program results demonstrate that the area is mostly covered with a very thin (0.5 to 6 m) layer of loose soils. Minor exceptions appear in the eastern parts, where the soil cover exceeds 20 m at very few individual points. The topsoil is formed by generally soft to medium-stiff fine-grained soils (silts) and loose to medium-dense granular soils (sands). The bedrock was described as conglomerates, weak to very weak calcareous sandy mudstone, siltstone of various thicknesses and weak limestone (marl).
Fundamental resonance frequency (Fo)
The Fo throughout the Duqm area was determined by applying the Nakamura (1989) method, which uses the horizontal-to-vertical ratio (HVSR) of the Fourier spectra of seismic records to provide a reliable estimate of the site effect on vertically incident shear waves. HVSR has proved to provide a trustworthy Fo evaluation, but it is unsuccessful in estimating the amplification curves accurately (e.g. Theodulidis & Bard 1995; Mohamed et al. 2008).

3.2.1. Field measurements. The recommendations suggested by Koller et al. (2004) were implemented here to assure sound experimental conditions. Ambient noise data were collected using two Taurus seismographs from Nanometrics with tri-axial Trillium 20 s and 40 s velocity sensors. Continuous noise recordings of no less than 3 hours, at a sampling rate of 100 Hz, were carried out at the chosen 90 locations, representing each delineated geological entity.
3.2.2. Data processing. The ambient noise measurements were evaluated using the GEOPSY software created within the SESAME (2004) project. Non-overlapping windows with a minimum duration of 25 s were selected from the quietest portions of the noise records. This was done using the STA/LTA anti-trigger procedure, with the STA/LTA ratio kept below a small threshold (1.5-2.5).
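The STA/LTA anti-trigger selection can be sketched as follows (window lengths and thresholds are illustrative; GEOPSY's actual implementation differs in detail):

```python
def sta_lta(signal, n_sta, n_lta):
    """Classic short-term-average over long-term-average ratio computed
    on absolute amplitudes; values near 1 indicate stationary noise."""
    out = []
    for i in range(n_lta, len(signal)):
        sta = sum(abs(x) for x in signal[i - n_sta:i]) / n_sta
        lta = sum(abs(x) for x in signal[i - n_lta:i]) / n_lta
        out.append(sta / lta if lta > 0 else 0.0)
    return out

def quiet_samples(ratios, low=0.2, high=2.0):
    """Anti-trigger criterion: keep samples whose STA/LTA stays within
    a band (the upper bound plays the role of the 1.5-2.5 threshold)."""
    return [i for i, r in enumerate(ratios) if low < r < high]
```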
For every site, at least 10 simultaneous time windows of the three components were selected. Each windowed signal was baseline-corrected, tapered, fast-Fourier-transformed and smoothed using the Konno & Ohmachi (1998) procedure. The geometric mean of the two horizontal spectra was calculated for each window, and the resulting HVSR curves were averaged to define the final Fo at the site. All peaks on the HVSR curves were examined for their reliability and clarity. Moreover, they were checked to determine whether they were of natural or artificial origin. HVSR curves of artificial origin, or curves that could not meet the criteria for reliability and clarity, were discarded because they were not formed by the site characteristics.
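The per-window HVSR computation reduces to the geometric mean of the two horizontal amplitude spectra divided by the vertical spectrum, averaged over windows; a sketch operating on precomputed amplitude spectra (Konno-Ohmachi smoothing is omitted for brevity):

```python
import math

def hvsr_curve(windows):
    """Per window, HV(f) = sqrt(N(f) * E(f)) / V(f) from amplitude
    spectra; the final curve is the average over windows.
    windows: list of (north_spec, east_spec, vert_spec) tuples of lists."""
    n_freq = len(windows[0][0])
    per_window = [[math.sqrt(n * e) / v
                   for n, e, v in zip(north, east, vert)]
                  for north, east, vert in windows]
    return [sum(c[i] for c in per_window) / len(per_window)
            for i in range(n_freq)]

def peak_frequency(freqs, hv):
    """Frequency of the largest HVSR peak, a candidate Fo."""
    return freqs[hv.index(max(hv))]
```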
3.2.3. HVSR results.
The HVSR curves were eventually used to map the Fo of the topmost soft deposit layers in the Duqm study area. This map demonstrates that Fo can change even over short distances, mostly owing to changes in the thickness of the soft soil. Moreover, the type of soil varies throughout the area, as marl and limestone outcrop in the western and northern parts, while recent deposits of sand, clay and silt occupy the eastern and middle parts.
The resonance frequency distribution map shows 30 HVSR curves without a significant peak (flat curves) over the entire frequency range applied in the current analysis (0.25-20 Hz), suggesting rocky locations (figure 10). These locations are mainly clustered in the northeastern and western areas, where the formations of shale, marl, siltstone and limestone outcrop. The locations of these flat HVSR curves are represented on the Fo map by the letter 'F' (figure 11).
High Fo values (≥10 Hz) dominate most of the Duqm area, consistent with its general geology. They cluster mainly in the western, middle and northern parts (figure 11). This confirms the existence of thin weathered rock covering the bedrock, implying that although these areas are classified geologically as rock, the site conditions may be reclassified depending on the presence or absence of weathered rock or soil and its thickness.
Smaller Fo values are seen along the eastern coastal strip of the Duqm area, indicating considerable soil thickness. The lowest Fo is 0.85 Hz, at the northeastern corner, where the soft layer thickness exceeds 25 m, as the boreholes show.
The Fo map can be interpreted by considering both the number of stories of a structure and the fundamental frequencies. Resonance cannot affect most of the Duqm area, where high Fo (≥10 Hz) dominates. Structures that might be influenced by the soil effect are located in small areas in the eastern and middle parts, where Fo varies between 1.8 and 8 Hz. These would be buildings of about 1-5 stories.
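The 1-5 story range quoted above is consistent with the common rule of thumb that a building's fundamental frequency is roughly 10/N Hz for N stories (about 0.1 s of natural period per story); a sketch (the rule itself is a rough approximation, not a result of the study):

```python
def stories_at_risk(fo_low, fo_high):
    """Story range whose fundamental frequency falls inside the soil
    band [fo_low, fo_high] Hz, using f ~ 10 / N; truncates downward."""
    n_min = max(1, int(10.0 / fo_high))
    n_max = int(10.0 / fo_low)
    return n_min, n_max
```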
Soil effect analysis
Amplification of ground motion caused by soft soils is largely influenced by their shear-wave velocity, which is usually considered the best elastic parameter for defining stiffness (Aki & Richards 1980). A 1D ground response analysis using SHAKE91 was implemented to assess the soil effect on the anticipated ground motion in the Duqm area.
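SHAKE91 performs equivalent-linear analysis of multi-layer soil columns; the mechanics can be illustrated with the undamped closed-form transfer function of a single uniform layer over an elastic half-space, which peaks at Fo = Vs/(4H) with an amplitude equal to the inverse of the impedance ratio (all parameter values in the test are illustrative, not site data):

```python
import math

def layer_amplification(f, h, vs_soil, rho_soil, vs_rock, rho_rock):
    """Undamped linear-elastic amplification of a uniform soil layer of
    thickness h over an elastic half-space (vertical SH incidence):
    |A(f)| = 1 / sqrt(cos^2(k) + alpha^2 * sin^2(k)),
    with k = 2*pi*f*h / vs_soil and alpha the soil/rock impedance ratio."""
    alpha = (rho_soil * vs_soil) / (rho_rock * vs_rock)
    k = 2.0 * math.pi * f * h / vs_soil
    return 1.0 / math.sqrt(math.cos(k) ** 2 + (alpha * math.sin(k)) ** 2)
```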
Inputs for the ground response analysis.
The vital input data necessary for an effective soil response evaluation form the subsurface model, which comprises the shear-wave velocity, density, thickness and lithology. These data were obtained from the conducted geophysical surveys and the available borehole data.
The nonlinear behavior of soil was introduced into SHAKE91 using shear modulus reduction (G/Gmax) and damping curves, which depend strongly on changes in shear strain. Hence, ground-motion records at the soil base are also crucial for the soil response analysis. The modulus reduction (G/Gmax) and damping curves developed by Seed & Idriss (1970), Seed et al. (1986) and Schnabel et al. (1972) were used for the sand, gravel and rock in the Duqm region, respectively.
Shear-wave velocity determination
The MASW technique (Park et al. 1999) was used to calculate Vs at the chosen 90 sites using the SURFSEIS 5 software of the Kansas Geological Survey. MASW has proved a robust procedure that can deliver consistent shear-wave velocities for the topmost 30 m (e.g. Park et al. 1999; Mahajan et al. 2007). It uses a spread of geophones to record surface waves generated by active controlled seismic sources or by passive ones (man-made and natural noise). Each geophone gather yields a 1D shear-wave velocity profile of the subsurface by inverting the Rayleigh-wave dispersion curves of a multichannel record. The resulting 1D velocity profiles were combined to construct 2D profiles.

Figure 10. HVSR calculations at a rocky site, showing an almost flat curve at site GS-c08. Panel 1 demonstrates the amplitude spectra of the three components, Panel 2 illustrates the HVSR curve, Panel 3 shows the rotation of the horizontal spectrum with azimuth, and Panel 4 shows the rotation of the HVSR with azimuth.
3.3.1.1.1. MASW data acquisition. Active MASW surveys were conducted using a 24-channel 'SmartSeis ST' seismograph by Geometrics Inc., USA. Surface-wave data were gathered along 90 carefully chosen profiles (figure 2). Vs was to be estimated from the surface down to the maximum possible depth; 4.5-Hz vertical geophones were therefore arrayed at 1-m spacing along a multichannel linear spread to detect the lower-frequency components efficiently. A sampling rate of 1.0 ms and a recording length of 1.0 s were used.
Seismic waves were generated using an 8-kg sledgehammer with a 5-m offset. The target surface-wave signals were recorded by the SmartSeis ST in SEG-2 format. A shooting interval of 4 m was used and a typical roll-along procedure was performed along each 52-m profile. For records with a low signal-to-noise ratio, the signals were enhanced by stacking several hammer strikes.
3.3.1.1.2. MASW data processing. Raw data in SEG-2 format were converted into Kansas Geological Survey format, and the shot gathers of each line were merged into one multi-record. Body waves were then recognized and discarded by filtering and muting. Dispersion curves for each shot gather in the multi-record file were extracted from the fundamental mode of the dispersion image (e.g. figure 12). Each dispersion curve was inverted to generate a 1D shear-wave velocity profile representing the mid-point of the geophone gather (figure 13). These 1D shear-velocity results were interpolated to obtain 2D profiles (figure 14).
Figure 11. Fundamental frequency map of the Duqm area.
The MASW results were consistent with the subsurface data acquired from the geotechnical boreholes. In most cases a satisfactory agreement was observed among soil thickness, blow count (N value) and hence Vs in both the geophysical results and the borehole data, confirming the efficacy of MASW for characterizing the near surface (figure 15). The obtained shear-wave velocities were used to define the soil thickness, V S30 and soil response. An exception was a site associated with a relatively thick soft cover (12.37 m), identified from boreholes as very soft, dark gray, distinctly weathered marl with low N values. The remaining sites belong to category C of the NEHRP standard, with V S30 ranging from 379 to 736 m s−1.
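The V S30 values quoted here follow the standard time-averaged definition, V S30 = 30 / Σ(h_i/v_i), and the NEHRP class boundaries below are the commonly published ones; the layered profile in the example is hypothetical.

```python
def vs30(thicknesses, velocities):
    """Time-averaged shear-wave velocity of the top 30 m:
    Vs30 = 30 / sum(h_i / v_i), truncating or extending the
    profile to exactly 30 m."""
    total, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses, velocities):
        h = min(h, 30.0 - total)
        travel_time += h / v
        total += h
        if total >= 30.0:
            break
    if total < 30.0:                       # extend the last layer to 30 m
        travel_time += (30.0 - total) / velocities[-1]
    return 30.0 / travel_time

def nehrp_class(v):
    """NEHRP site class from Vs30 (m/s), using the usual boundaries."""
    if v > 1500: return "A"
    if v > 760:  return "B"
    if v > 360:  return "C"
    if v > 180:  return "D"
    return "E"

# Hypothetical profile: 5 m of loose sand (200 m/s) over stiffer material (600 m/s)
v = vs30([5.0, 25.0], [200.0, 600.0])   # -> 450 m/s, NEHRP class C
```

A thin soft layer over stiff material pulls Vs30 down strongly because the travel-time average weights the slow layer heavily, which is why soil thickness correlates so closely with site class in the maps.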
A soil thickness map was produced from the integrated database of the MASW results (Vs ≤ 765 m s−1), HVSR and the available borehole data (figure 17). The bedrock is encountered from 0 m down to as deep as 29 m. The map shows a thin soil layer (≤2.5 m) toward the north, the west and most of the southern areas. The soil thickness increases toward the east, reaching about 14 m in the middle area, and attains its maximum of 29 m at the extreme northeastern corner.
Bedrock ground motion
A basic input for 1D site response analysis is a set of seismic ground-motion records. The preferred procedure is to use locally recorded strong motions covering a wide range of magnitude-distance scenarios. Because such strong-motion records are scarce in Oman, real records from comparable tectonic environments were obtained by searching the PEER NGA7.3 database. Appropriate records satisfying the search criteria provided by the deaggregation at bedrock (magnitude, distance) were downloaded. These initial time histories were then adjusted with the spectral matching approach of Al Atik & Abrahamson (2010) to reduce the influence of the unavoidable mismatch between their characteristics and the target hazard.
3.3.2. 1D soil response using SHAKE91. Surface PGA and 5%-damped PSA for the three studied return periods were estimated by 1D soil response analysis using SHAKE91. The modified time histories, Vs profiles, thicknesses, densities and the corresponding shear and damping curves were supplied to SHAKE91 within EZ-FRISK 8.0b. The amplification curves and the average ground-motion intensities were then evaluated to obtain the site-specific seismic hazard at the 90 sites.
Amplification values for various spectral periods were mapped for the three studied return periods. Figure 18 shows the amplification map of the 0.2-s PSA for a 475-year return period. The Duqm area shows amplification factors below 1.3 over the whole spectrum for all analyzed return periods. Although these low values dominate, the amplification factor reaches 2.5 and 3.0 for PGA and the 0.2-s spectral period, respectively, at small spots in the northeastern parts, attributable to the deposition of relatively thick soils.
Surface hazard maps.
The PGA and the 5%-damped PSA for the three studied return periods were mapped; the PSA at the periods important for common engineered structures is thus fairly well covered. Figures 19 and 20 show the variation of the PGA and of the 5%-damped PSA at 0.2 s at the ground surface for a 475-year return period.
The maximum hazard levels were observed in the northeastern areas, owing to the relatively thick soft layers of recent beach sands, with PGA of about 46 and 93 cm s−2 for return periods of 475 and 2475 years, respectively.
The ground-motion values decline toward the west. The maximum 5%-damped horizontal PSA occurs at 0.2 s, ranging from 57 to 174 cm s−2 for a 475-year return period and from 87 to 302 cm s−2 for a 2475-year return period. The 5%-damped motions at a 1.0-s period are naturally smaller than those at high frequencies (0.1- and 0.2-s periods), with maximum surface values of about 18 and 28 cm s−2 at most studied localities for return periods of 475 and 2475 years. These surface spectral accelerations are almost equivalent to the bedrock values, i.e. with almost no amplification.
Seismic liquefaction in the Duqm area
Seismic liquefaction is a damaging engineering phenomenon that mostly affects saturated loose granular soils during violent ground shaking. Liquefaction occurs when the shear strength of the soil drops substantially as a consequence of a rapid build-up of pore-water pressure; the soil then behaves like a liquid, causing buildings to subside or tilt over.
Figure 17. Soil thickness map of the Duqm area.
Seismic liquefaction arises from the combination of two factors: susceptibility, which defines the capability of a site to liquefy, and opportunity, which describes the ability of an earthquake to cause liquefaction. The opportunity is represented by the earthquake loading relative to the soil strength (liquefaction resistance): the seismic load is quantified by the cyclic stress ratio (CSR), whereas the capacity of the soil to resist that load is expressed by the cyclic resistance ratio (CRR). Susceptibility evaluation is the first step of the analysis, excluding sites with no realistic liquefaction potential. Where the soils are susceptible, the analysis moves on to liquefaction initiation: the liquefaction potential at a given depth is expressed as a factor of safety (FS) comparing the seismic loading (CSR) with the soil resistance (CRR). When the resistance falls below the loading, liquefaction is initiated.
Fifty-five boreholes were examined for susceptibility to liquefaction. The water table was encountered in only 20 boreholes, at depths ranging from 0.94 to 6.5 m below the ground surface. Seven of these 20 boreholes were judged susceptible to liquefaction owing to their local soil characteristics and were analyzed to quantify their liquefaction potential. In these seven boreholes the soil columns consist mainly of (from top to bottom) very sandy silt, fine to medium sand, very sandy gravel and silty clayey gravelly sand. The CSR is defined from the surface PGA and a depth-reduction parameter (rd) using the simplified procedure of Seed & Idriss (1971), revised by NCEER (1997) as
CSR = 0.65 (amax/g)(σv0/σ′v0) rd,
where σv0 and σ′v0 are the total and effective overburden stresses, amax is the horizontal site-specific PGA and rd is the rate of reduction of peak shear stress with depth, estimated here using the NCEER (1997) equations. Site-specific probabilistic PGA values for a 475-year return period, together with the corresponding earthquake magnitude, were used; this return period matches that intended for the Omani seismic building code.
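As a sketch, the simplified-procedure CSR can be coded directly from the formula above; the rd expressions used are the common NCEER (1997) linear approximations, and the stress values in the example are hypothetical.

```python
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """Simplified-procedure CSR (Seed & Idriss 1971):
        CSR = 0.65 * (a_max/g) * (sigma_v / sigma_v') * rd
    with rd from the NCEER (1997) linear approximations.
    sigma_v, sigma_v_eff: total and effective overburden stress (kPa)."""
    if depth_m <= 9.15:
        rd = 1.0 - 0.00765 * depth_m
    else:
        rd = 1.174 - 0.0267 * depth_m      # valid to roughly 23 m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

# Surface PGA of 46 cm/s^2 (the 475-year value quoted for Duqm) is about 0.047 g;
# the overburden stresses here are hypothetical.
csr = cyclic_stress_ratio(a_max_g=0.047, sigma_v=90.0, sigma_v_eff=60.0, depth_m=5.0)
```

With such a small PGA the computed CSR stays very low, which is consistent with the non-liquefiable outcome reported for all sites.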
The CRR at the seven susceptible borehole sites was evaluated from the SPT blow counts (N values). The raw N values were subjected to several corrections to obtain the standardized (N1)60 (the equivalent SPT blow count for clean sand, i.e. sand with a fines content below 5%). These corrections covered the effective overburden pressure (depth), hammer energy efficiency, borehole diameter, rod length and sampling procedure. For sands with fines contents above 5%, additional corrections were applied to obtain a clean-sand equivalent, since such sands are expected to have a higher resistance to liquefaction.
Factor of safety against liquefaction
FS is mathematically expressed as
FS = (CRR7.5 / CSR) MSF.
Liquefaction resistance curves (CRR) are constructed for an earthquake of Mw 7.5. The purpose of the magnitude scaling factor (MSF) is to adjust the CRR7.5 values for seismic events of other magnitudes, accounting for the effect of shaking duration on the calculated ground motion. The MSF proposed by the revised Idriss formulation in NCEER (1997) is
MSF = 10^2.24 / Mw^2.56.
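The FS and MSF expressions above translate directly into code; the CRR7.5 and CSR values in the example are hypothetical, and the 1.2 threshold is the limit adopted in this study.

```python
def magnitude_scaling_factor(mw):
    """Revised Idriss MSF (NCEER 1997): MSF = 10**2.24 / Mw**2.56.
    Equals 1.0 at Mw 7.5 by construction."""
    return 10**2.24 / mw**2.56

def factor_of_safety(crr75, csr, mw):
    """FS = (CRR7.5 / CSR) * MSF; FS >= 1.2 is treated here as
    non-liquefiable, following the study's conservative limit."""
    return (crr75 / csr) * magnitude_scaling_factor(mw)

# Hypothetical values: moderate resistance, low loading, Mw 6 event
fs = factor_of_safety(crr75=0.25, csr=0.044, mw=6.0)
liquefiable = fs < 1.2
```

For magnitudes below 7.5 the MSF exceeds 1, raising FS, which reflects the shorter shaking duration of smaller events.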
Generally, soil layers are deemed liquefiable when FS is less than 1.0; because soils can occasionally liquefy at FS slightly above 1.0, an FS of 1.2 was adopted here as the limit above which the layers are considered non-liquefiable. Figure 21 shows the principal results at borehole 34. The surface soil was a beige to yellowish-brown, gravelly, very silty, slightly gypsiferous and calcareous sand down to 1.0 m depth, with a blow count of 8. It passed into a medium-dense, beige to yellowish-brown, silty/clayey, very sandy gravel with much higher blow counts (N > 50) down to 2.0 m. Below that, a yellowish-brown, very silty/clayey, gravelly sand with an N value of 31 extended to the bottom of the hole at 20 m. The water table was encountered at 3.49 m. The FS at borehole 34 was well above 1.2 for the 475-year return period, indicating non-liquefiable soils. Likewise, all the sites within the area proved non-liquefiable, owing to the low ground motion and small soil thickness.
Discussion
Accurate seismic zone geometries, fault rupture scenarios and GMPEs for strong-motion parameters are lacking for Oman.
Thus, several widely different and competing alternatives were introduced into a logic-tree algorithm to represent the seismic hazard at the top of bedrock as faithfully as possible. The results rely heavily on the assumption that future seismic events will continue to occur within the defined seismic zones.
The Fo map delineates the boundary between the rock outcrops toward the west and north and the soil zones elsewhere. It helps to identify sites that could suffer greater damage where the soil Fo coincides with the natural period of the buildings. Accordingly, short buildings (1-5 stories) in the middle and eastern parts may suffer more if a destructive earthquake occurs.
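The match between the soil Fo and building period can be sketched with two standard rules of thumb, neither stated explicitly in the paper: the quarter-wavelength estimate f0 = Vs/(4H) for a uniform soil column, and a fundamental building period of roughly 0.1 s per story. The Vs and thickness in the example are hypothetical.

```python
def fundamental_frequency(vs, h):
    """Quarter-wavelength estimate of a uniform soil column's
    fundamental frequency: f0 = Vs / (4 H)."""
    return vs / (4.0 * h)

def building_period(stories):
    """Common rule of thumb: natural period T ~ 0.1 s per story."""
    return 0.1 * stories

# Hypothetical site: 10 m of soft soil with Vs = 200 m/s
f0 = fundamental_frequency(200.0, 10.0)          # 5 Hz, i.e. T0 = 0.2 s
resonant_stories = [n for n in range(1, 21)
                    if abs(building_period(n) - 1.0 / f0) < 0.05]
```

A thin soft layer therefore resonates at a short period matching low-rise buildings, which is why the 1-5 story class is singled out in the middle and eastern parts.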
V S30 and NEHRP (2001) site classifications were determined from the calculated velocity data. The estimated soil thickness was corroborated by the outcomes of the shallow seismic refraction, MASW and HVSR surveys; combining the three geophysical methods improved the output and constrained the soil physical parameters. Comparison of the soil thickness map with the Fo and V S30 maps shows a good match: high and medium V S30 and Fo values correspond to areas of little or no soil toward the west, south and north, while lower values correspond to areas of relatively greater soil thickness in the eastern coastal and middle areas. Similarly, significant amplification factors are concentrated where relatively thick soft soil exists, underlining the importance of soil thickness in defining site characteristics.
Figure 20. Site-specific PSA at 0.2 s for a 475-year return period.
For earthquake-resistant design, the response spectrum is the most widely used description of the ground motion. The 5%-damped PSA was determined across the Duqm area for a range of spectral periods representing building heights of one to 20 stories, which is sufficient to cover the presumed building heights in the area.
The present hazard maps carry no information on damage potential; they show where the strongest shaking is likely to occur, and areas with high hazard values are not necessarily high-risk. The fragility of structures, population density, important engineering facilities, etc., should therefore be compiled to develop risk maps for the Duqm area.
Conclusion
The variation of the surface ground-motion values makes the seismic hazard a crucial factor in anticipating earthquake actions for design engineers, evaluating liquefaction potential, developing risk-mitigation strategies, and guiding land-use management and urban planning. This paper has presented the site-specific seismic hazard of the Duqm area for the three studied return periods. Site characterization was evaluated from the outcomes of 55 boreholes with N values and three geophysical surveys, providing knowledge of the soil elastic properties and delineating the acoustic impedance boundary between the surface layers and bedrock. The study produced a soil thickness map, an Fo map, a V S30 map, amplification maps and surface ground-motion maps. The economic zone of Duqm is characterized by very low hazard values, with the relatively largest PGA values in the northeastern area: 46 and 93 cm s−2 for return periods of 475 and 2475 years.
The reliability of the site-specific hazard maps depends largely on the interpolation between the available boreholes and geophysical surveys. Site-specific ground motions were calculated at 90 particular sites with a spacing of about 1-2 km. Since site characteristics can change over small distances, site-specific investigations between the 90 investigated sites should be performed when designing critical structures against earthquake forces.
Figure 21. Liquefaction potential at borehole No. 34 in the Duqm area.
The liquefaction analysis showed that the economic zone of the Duqm area is safe against liquefaction for the site-specific ground motion of a 475-year return period.
Delayed Diagnosis of Lyme Neuroborreliosis Presenting with Abducens Neuropathy without Intrathecal Synthesis of Borrelia Antibodies
Lyme borreliosis is the most common tick-borne infection in Europe. Global climate change expanding the range of tick vectors and an increase in the incidence suggest that this disease will remain an important health issue in the forthcoming decades. Lyme borreliosis is a multisystem disorder affecting the nervous system in 10% to 15% of cases. Lyme neuroborreliosis can present with any disorder of the central and peripheral nervous systems; neuro-ophthalmological manifestations are a rare feature of the disease. The intrathecal synthesis of Borrelia burgdorferi antibodies is of diagnostic importance, but in rare cases immunoglobulins against the Borrelia burgdorferi antigen may not be detected. We report a case of possible Lyme neuroborreliosis presenting with sixth cranial nerve neuropathy at the onset of the disease, later developing into typical meningoradiculitis and multiple mononeuropathy. Surprisingly, Borrelia burgdorferi antibodies were not detected in the cerebrospinal fluid.
Introduction
Lyme borreliosis (LB) is the most common tick-borne infection in Europe. Global climate change expanding the range of tick vectors and an increase in the incidence suggest that LB will remain an important health issue in the forthcoming decades (1). It is possible to make only approximate estimates of the LB incidence in Europe because few countries report LB as a compulsorily notifiable disease (1, 2). Epidemiological studies indicate that the mean annual number of notified LB cases in Europe is higher than 65 400 (1). LB shows a gradient of increasing incidence from the west to the east, with the highest incidence in Central and Northern Europe (more than 100 cases per 100 000 population) and the lowest in Southern Europe (fewer than 1 case per 100 000 population) (1, 2). In Lithuania, this disease is mandatorily notifiable. The incidence of LB during 2008-2011 ranged from 34 to 107 per 100 000, with the highest incidence in 2009 (3). The disease is caused by spirochetes of the Borrelia burgdorferi (Bb) group. Lyme disease is a multisystem disorder affecting the nervous system in 10% to 15% of cases (4). Nervous system involvement presents as aseptic meningitis, recurrent meningoencephalitis or meningoencephalomyelitis, meningoradiculitis (described by Bannwarth), which is the most common presentation of Lyme neuroborreliosis (LNB), and cranial and spinal neuropathies, among which the seventh cranial nerve is most often involved (5, 6).
Although the clinical course of LNB has been well described, the presentation of the disease, together with the lack of agreement on a precise clinical definition and the lack of standardization of serological assays with a high rate of false-positive results, sometimes leads physicians to confusion, and a late diagnosis is common. Confusion exists in interpreting positive or negative results of serological tests for antibodies to Lyme disease. The intrathecal synthesis of Bb antibodies is of diagnostic importance, but in rare cases immunoglobulins against the Bb antigen may not be detected (7). According to previous studies, LNB can present with any disorder of the central and peripheral nervous systems (5, 8). Although neuro-ophthalmological manifestations have been reported, they remain a rare feature of Lyme disease (9, 10), and the diagnosis can be delayed in some cases. An early diagnosis and prompt initiation of adequate antibiotic treatment are needed to achieve a rapid resolution of symptoms and, theoretically, to avoid spreading and persistence of the infection (6, 8). In this report, we describe a case of possible Lyme neuroborreliosis presenting with abducens mononeuropathy at the very onset of the disease, later developing into typical meningoradiculitis and multiple mononeuropathy. Surprisingly, Bb antibodies were not found in the cerebrospinal fluid (CSF).
Case Report
A 43-year-old previously healthy man developed diplopia on January 18, 2008. He was consulted by ophthalmologists, and left abducens neuropathy was diagnosed. He was treated with group B vitamins, but diplopia persisted despite the treatment. Because the cause of diplopia remained unclear, the patient was admitted to Alytus County S. Kudirka Hospital on February 15. The results of blood, liver and renal function tests and the blood glucose value were within the reference ranges; serological tests for human immunodeficiency virus and syphilis were negative. Cerebral magnetic resonance tomography (MRT) showed no pathological findings, and all possible metabolic disorders were excluded. Treatment with oral prednisone at a dosage of 60 mg a day for 7 days, followed by tapering of the dose by 5 mg every second day, was administered. On the fourth day of hospitalization, the patient developed severe back pain radiating to the abdomen, more intense during the night and unrelieved by any available analgesics. Abdominal ultrasound revealed no pathological changes, and an MRT scan of the chest and the lumbar spine excluded a compressive lesion. Serological Bb antibody testing showed specific IgG positivity, but no IgM antibodies were detected. The patient was discharged from the hospital on February 25 without any improvement, and a consultation with an infectious disease specialist was recommended. The back pain disappeared spontaneously within 7 days, but diplopia persisted. On March 3, the patient developed severe back pain again, and 3 days later right peripheral facial palsy was documented. The patient was consulted by an infectious disease physician in Alytus. Serological testing for Bb antibodies by immunoblot analysis (Western blotting) showed the presence of specific IgG antibodies against the p31 and VlsE proteins (a weak antigen-antibody reaction), but specific IgM antibodies were not detected. The EUROLINE-WB test kit (Germany) was used for the analysis. The kit contained test strips with electrophoretically separated antigen extracts of Borrelia afzelii (p83, p39, p31, p30, OspC [p25], p21, p19, p17), and each strip contained a membrane chip coated with a recombinant VlsE antigen. The reaction is considered positive if serum antibodies bind at least one of these specific antigens; the reaction can be strong, weak, or negative. The sensitivity of this test in LNB is 94%. Lyme neuroborreliosis was diagnosed, and treatment with oral cefuroxime was initiated. Despite the treatment, the patient's condition worsened within 7 days, and on March 13 he was admitted to the Republican University Hospital of Infectious Diseases and Tuberculosis in Vilnius. He had diplopia, more intense when moving the eyes to the left, severe back pain worsening at night, pain in the right side of the occiput and the neck, and numbness in the region of the right ulnar nerve. The patient's medical history included numerous tick bites in summer and autumn; no erythema migrans had been noticed. His mother and wife had had erythema migrans after tick bites in autumn, and both had been treated with doxycycline. On examination, the patient had malaise and fatigue but no confusion. Signs of neuropathy of the left abducens and the right facial nerve were observed (Fig. 1). The examination also revealed hypoesthesia in the regions of the Th10-Th12 dermatomes, the right ulnar nerve and the right greater occipital nerve. On examination by an otolaryngologist, hypokinesis of the left vocal fold was documented. A lumbar puncture (LP) was performed on March 13. Examination of the CSF showed lymphocytic pleocytosis (160 cells/mm3), an elevated protein level of up to 1.7 g/L, and a glucose level within the reference range. The CSF culture was negative for bacteria. A complete blood count revealed no changes except leukopenia (3.2×10^9/L), which disappeared within 8 days. Treatment with ceftriaxone at a dosage of 2.0 g per day intravenously was started despite the absence of specific IgM and IgG antibodies against Bb in the CSF. Later, the serum and the CSF were repeatedly tested in two other laboratories of Lithuania. No Bb antibodies were found in the CSF, and only specific IgG antibodies were detected in the serum. The diagnosis of possible Lyme neuroborreliosis was established on the basis of the positive serological results, the typical clinical presentation and the pleocytosis in the CSF. The patient showed improvement on the fourth day of treatment with ceftriaxone: he was able to close his right eyelid, and the occipital pain was less severe. The neuropathies of the greater occipital, ulnar and facial nerves disappeared on the seventh day of therapy (Fig. 2). The patient had been complaining of severe back pain with numbness and burning, which worsened at night and disturbed his sleep. He became nervous, expressed thoughts of suicide, lost confidence in doctors, and had no appetite. The LP and the otolaryngological consultation were repeated on the 17th day of treatment with ceftriaxone. No hypokinesis of the vocal fold was found. The CSF again showed lymphocytic pleocytosis (144 leukocytes/mm3), with an elevated protein level (2.3 g/L) and a glucose level within the reference range (Table). After 17 days of ceftriaxone therapy, the treatment was switched to oral doxycycline at a dosage of 200 mg daily and continued for 11 days. The back pain disappeared on the 25th day of antibiotic therapy. The patient was discharged from the hospital on the 27th day of therapy with complaints of diplopia and hypoesthesia in the Th10-Th12 dermatomes. At the follow-up examination on the 40th day after discharge, the patient had no complaints. Physical examination revealed no strabismus, no movement disorders of the eyes and no hypoesthesia. Testing of the CSF revealed only 4 lymphocytes per mm3, with an elevated protein level of 1.0 g/L but a normal glucose level. All the results of blood tests were within the reference ranges. Bb antibodies were not detected in the CSF. All the laboratories used a qualitative ELISA test for the detection of Bb antibodies in the CSF. Bb-specific IgG antibodies were detected in the blood by ELISA; the antibody levels persisted without any decline (10.81 U/mL) (Table). In the subsequent 4 years, the patient felt quite well; no recurrence of any LNB symptoms was reported, and no new diseases were diagnosed.
In April 2012, Bb-specific IgG antibodies were detected in the blood by ELISA (18.0 U/mL).
Discussion
According to the European Federation of Neurological Societies (EFNS) guidelines on the diagnosis and management of European LNB, definite neuroborreliosis can be diagnosed if all 3 of the following criteria are fulfilled: neurological symptoms suggestive of LNB, CSF pleocytosis, and intrathecal Bb-specific antibody production. Possible neuroborreliosis is diagnosed if 2 of the 3 criteria are fulfilled (2, 6). The American diagnostic criteria do not require a positive Bb antibody index in the CSF (6). In this report, we described a case of possible Lyme neuroborreliosis. We strongly believe the diagnosis of LNB is correct: the patient presented with typical symptoms, such as painful meningoradiculitis and facial palsy; he responded well to antibiotic treatment; and no new disease was diagnosed over the 4 years after recovery.
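The EFNS category logic reduces to a simple two-out-of-three rule, which can be expressed as a small sketch (the criterion names are paraphrased from the guideline wording above):

```python
def efns_lnb_category(suggestive_symptoms, csf_pleocytosis, intrathecal_bb_antibodies):
    """EFNS diagnostic categories for European Lyme neuroborreliosis:
    'definite' if all three criteria are met, 'possible' if two of three,
    otherwise the criteria are not fulfilled."""
    met = sum([suggestive_symptoms, csf_pleocytosis, intrathecal_bb_antibodies])
    if met == 3:
        return "definite"
    if met == 2:
        return "possible"
    return "criteria not fulfilled"

# The reported patient: typical symptoms and pleocytosis, but no
# intrathecal Bb antibody synthesis.
category = efns_lnb_category(True, True, False)  # -> "possible"
```

This makes explicit why the missing antibody index downgraded the case from definite to possible rather than excluding LNB altogether.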
The best indicator of an early B. burgdorferi infection is erythema migrans, but it develops at the site of the tick bite in only 40% to 60% of patients with confirmed LB (8). Another common leading syndrome is the Bannwarth syndrome, which involves radicular neuritic pain, particularly during the night, and lymphocytic pleocytosis (2, 5, 6, 8). In about 60% of patients with this syndrome, cranial neuropathies, most commonly involving the facial nerve, are documented (2, 5, 6). In rare cases, LNB starts with paresis of other cranial nerves; ophthalmoplegia and retrobulbar neuritis have been reported as rare features of neuroborreliosis (10). In our patient, the presentation of cranial neuropathy at the very onset of the illness was not typical: the disease manifested with neuropathy of the sixth cranial nerve, and no erythema migrans was found. The patient had a single complaint of diplopia for 1 month. An isolated cranial mononeuropathy (other than of the facial nerve) makes the clinical diagnosis difficult. Sixth cranial nerve mononeuropathy is not a specific symptom of LNB and may occur in many other diseases: abducens neuropathy can develop in autoimmune and metabolic diseases, viral or bacterial infections, and tumors. Autoimmune and metabolic diseases were excluded in this case, since cerebral and spinal MRT showed no pathological abnormalities. However, the CSF examination was not performed at that time, which was the main reason for the late diagnosis of LNB. Lymphocytic pleocytosis is very typical of LNB, although the cell count may be within the reference range in the very early stages of peripheral neurogenic disorders (4). We suggest performing a lumbar puncture to test the CSF for Lyme disease in the presence of any cranial mononeuropathy or mononeuropathy multiplex when other diseases have been excluded.
Our patient developed Bannwarth syndrome within 1 month. Analgesic medications were ineffective, and the pain disappeared only after treatment with antibiotics. This is a classical syndrome of LNB. Despite this, the clinical picture is often misinterpreted, and patients go through numerous investigations (5), as in this case. Family physicians and other physicians in endemic regions should be aware of the typical clinical symptoms of LNB.
During LNB, the spirochete Bb invades the CSF. The host immune system reacts to the spirochetes with local inflammation, leading to an intrathecal accumulation of leukocytes. The percentage of B cells in the CSF of LNB patients reaches up to 80%, which is higher than in other CNS infections (11). B cells show substantial migration only toward very few chemokines: CCL19, CCL21, CXCL12 and CXCL13. Studies demonstrate that monocytic cells produce CXCL13 in response to incubation with Bb through the interaction of the TLR2 receptor of the innate immune system with the outer surface proteins of the spirochetes (11-13). CXCL13 is a major regulator of B cell recruitment in acute LNB. Studies show that CD27+ B cells appear to be the main migrating B cell population in neuroinflammation (11); these cells can produce 5- to 100-fold greater levels of immunoglobulins than CD27- cells. Some studies indicate that the successful resolution of LNB is associated with a strong T helper (Th) type 1 immune response in the CSF early in the infection, followed by a Th type 2 response capable of suppressing the Th1 inflammation (14). The activation of B cells is driven by cytokines from Th2 cells, and chemokines have a crucial role in the Th1/Th2 balance. One study has shown the absence of both IL-17 and Bb antibodies in the CSF of children with possible LNB with pleocytosis (14). In our case, no Bb antibodies were found in the CSF, and only a very low level of IgG antibodies was found in the blood serum; the Western blot was positive for only 2 Borrelia antigens, and the antibody-antigen reaction was weak. These findings may suggest a poor immune response to the infection. It is difficult to say which part of the intrathecal production of Bb antibodies was altered in our patient. Some reports have suggested that in immunosuppressed patients the results of tests for Borrelia antibodies may be negative (15); however, our patient was immunocompetent. The previous prednisone therapy could have been one possible reason for the lack of intrathecal Bb antibodies in our case, although some patients treated with prednisone have been reported to have Bb antibodies in the CSF. Further studies on the role of prednisone and other immunosuppressive medications in the Th1/Th2 balance, chemokine/cytokine levels, and the percentage of CD27+ B cells in the CSF of LNB patients are needed. The intrathecal synthesis of Bb antibodies is of great importance for diagnosing LNB. The antibody index has a very high specificity (97%) but only a moderate sensitivity, ranging from 40% to 89% (8, 16, 17). Some investigators suggest that Bb antibodies in the CSF may be absent in some patients initially, but specific intrathecal IgG production should be detectable 6-8 weeks after the onset of symptoms (2, 6). On the other hand, there are reports of LNB without intrathecal synthesis of Bb antibodies after a period of 6 weeks (7, 11, 15, 17). The problem lies in confirming the diagnosis without intrathecal synthesis of Bb antibodies; other laboratory tests are needed in such rare cases. Although PCR performed on CSF samples has a low sensitivity, it may be useful in very early LNB with a negative antibody index or in patients with immunodeficiency (6). Despite limitations such as low sensitivity and slow growth, the detection of Bb in CSF cultures may be useful for the confirmation of uncertain cases (2). Recent studies have suggested a CXCL13 chemokine test (6, 11-13) and the detection of antibodies to the C6 peptide (12) in the CSF for the diagnosis of LNB in seronegative patients and for the control of therapy. These methods are still not used in Lithuania. We hope that they will become available, because Lithuania is endemic for Lyme disease, the morbidity is increasing, and, as our report demonstrates, there can be occasional cases of LNB without specific antibodies in the CSF. Some researchers suggest that a clinical response to treatment may be the best option to confirm the diagnosis in such cases (7).
We suggest a specific antibiotic therapy if LNB is suspected despite the absence of Bb antibodies in the CSF. According to the EFNS, adult patients with definite or possible early LNB (symptoms lasting <6 months) with symptoms confined to the meninges, cranial nerves, nerve roots, or peripheral nerves should be offered a single 14-day course of antibiotic treatment. Oral doxycycline (200 mg daily) and intravenous ceftriaxone (2 g daily) are equally effective (6). Although this was a case of early LNB, we decided to treat the patient for 28 days because the diagnosis was delayed; the patient had been wrongly treated with prednisone and had a relapse of back pain. Longer courses are recommended for relapses or for more serious and/or chronic forms (1). The choice of the best antibiotic, the mode of administration, and the duration of treatment are still debated issues (4). Overtreatment is an urgent problem: the duration of treatment should not exceed 21-30 days (4,6), and antibiotic therapy has to be discontinued even if certain symptoms persist.
The limitation of this case report is the absence of testing for Epstein-Barr virus (EBV), cytomegalovirus (CMV), and Mycoplasma pneumoniae DNA by PCR in the CSF. These are rare causes of aseptic meningitis in adults (18,19). Primary EBV or CMV infection causes infectious mononucleosis, a disease characterized by fever, tonsil and lymph node swelling, atypical lymphocytosis, and liver function abnormalities. Neurological complications can be caused by a direct viral invasion or by indirect immune mechanisms. If a direct pathogen invasion occurs, a patient usually presents with meningeal signs and symptoms of CNS dysfunction in combination with the symptoms of infectious mononucleosis (19) or of infection with Mycoplasma pneumoniae. If indirect immune mechanisms occur, patients present with postinfectious polyneuropathy with elevated protein levels in the CSF but a normal or very slightly elevated level of leukocytes (20). Moreover, postinfectious demyelinating encephalitis can occur in very rare cases after these acute infections in adults. Patients have a good response to antiviral treatment (19), or to antibiotic treatment for Mycoplasma pneumoniae, in cases of a direct invasion of the pathogen into the CSF. Treatment with prednisone, immunoglobulin, and plasmapheresis should be effective in cases of postinfectious neurological complications of EBV or CMV infections, Mycoplasma pneumoniae, and other infections (20). Our patient had no symptoms of infectious mononucleosis or of infection with Mycoplasma pneumoniae, no meningeal signs, and no signs of CNS involvement. Lymphocytosis and atypical mononuclear cells were not observed in the blood. MRI did not show any demyelinating abnormalities. The clinical presentation and the findings in the CSF were not characteristic of postinfectious polyneuropathy. This patient had asymptomatic meningitis, which is very typical of LNB. The disease had been progressing until the treatment with ceftriaxone was started. Moreover, a good response to the specific treatment of LNB was the main reason why the CSF was not tested for infections with CMV, EBV, and Mycoplasma pneumoniae.
In summary, infection with Borrelia should be considered in the differential diagnosis of any isolated or multiple mononeuropathy, and CSF examination should be done. The absence of specific immunoglobulins in the CSF does not exclude the diagnosis of LNB. A specific antibiotic therapy should be started immediately when the clinical picture shows the characteristic symptoms and signs of LNB.
Statement of Conflict of Interest
The authors state no conflict of interest.
Fig. 1 .
Fig. 1.The patient with the right facial palsy
Fig. 2 .
Fig. 2. Duration of symptoms and signs in the patient with Lyme neuroborreliosis

Table. Changes in Laboratory Test Results of the Described Patient. Bb, Borrelia burgdorferi; CSF, cerebrospinal fluid. *Analysis was done in the laboratory of UAB "Endemik diagnostika" (reference range for Bb IgG, 0-9 U/mL). †Analysis was done in the Virology Laboratory of Vilnius University. ‡Analysis was done in the Lithuanian National Reference Laboratory of the Center for Communicable Diseases and AIDS.
|
v3-fos-license
|
2018-12-18T18:34:51.157Z
|
2012-07-19T00:00:00.000
|
56267273
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=21190",
"pdf_hash": "3865a653c1caed0d107f239d7b0e93b5a6d71631",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1018",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "c49539202350d5cef4f7fe78fa61dfb625fdac2a",
"year": 2012
}
|
pes2o/s2orc
|
Physical functioning: The mediating effect on ADLs and vitality in elderly living in residential care facilities. “Act on ageing”: A pilot study
The present study aims at verifying whether participation in a physical activity programme has positive effects on the daily life autonomy and vitality of elderly people living in residential care facilities by the mediation of their physical wellbeing. Fifty-one institutionalised individuals took part in the study. The control group included 11 people (84.26 ± 7.4 years), whereas the experimental group was made up of 40 people (85 ± 6.6 years). The experimental group was involved in a physical activity programme twice a week. The 36-Item Short Form Health Survey Questionnaire, the Activities of Daily Living Scale, and the Tinetti Test were administered to the participants. The linear regression method as well as Sobel’s formula were used for the analysis. The results show that participation in a physical activity programme has positive effects on autonomy in bathing and on the participants’ sense of vitality due to the mediation of physical functioning. These results confirm the importance of physical activity for the elderly populations living in residential care facilities.
INTRODUCTION
The ageing of the population, and particularly the increasing proportion of very old and sometimes also very frail people, is one of the main social issues that current Western society needs to face. European countries are facing this issue differently [1]. In particular, they differ in the kinds of services offered to older people and in the traditional forms of care addressed to them. Generally speaking, northern European and especially Scandinavian countries invest social resources in keeping older people independent as long as possible, sometimes also by moving them to special houses specifically built to safeguard them physically and to provide health and social services easily. Thus, older people are offered institutionalisation in dedicated residential care facilities only when they are very ill. Traditionally, southern European countries invest fewer resources in promoting the independent life of frail people, including older ones. However, these countries are usually characterised by strong family bonds, and the family, especially its women, cares for older people at home as long as possible, from both a health and a social perspective. That is, older people may be institutionalised because their social network cannot assure them daily care. Despite these differences, the final results are similar: people who are institutionalised are generally more frail than others, and the lack of social bonds and/or the potential comorbidity of different pathologies makes it difficult or impossible for them to maintain an independent life. Moreover, losing one's autonomy and being moved to a residential care facility is likely to be followed by physical inactivity, a lack of interest in daily life, and further functional decline [2]. In sum, the preservation of minimum levels of mobility and of the activities of daily living (ADL) among the oldest elderly is of critical importance.
We already know that specific programmes of physical activity may be an adequate antidote.
With respect to physical aspects, we already know that a sedentary and unhealthy lifestyle can lead to highly debilitating diseases and the loss of self-sufficiency and health, while physical activity can be a protective factor for good physical and psychological conditions even in a frail physical or psychosocial situation [3]. Previous studies [4,5] already demonstrated the relationship between the worsening of balance and gait motor skills, flexibility and strength, and a decrease in ADLs, which are fundamental for a good quality of life even in old age [6]. Conversely, Rydwik and collaborators [7] pointed out that targeted physical activity protocols can positively influence strength, resistance, flexibility and balance, thus protecting the individual from certain forms of disability [8] that are strictly connected to ageing [9].
With respect to psychological aspects, the meta-analysis by McAuley [10] found a positive relationship between physical activity and psychological wellbeing. Netz, Wu, Becker and Tenenbaum [11] found a relationship with an increase in self-efficacy, which also seems to decrease the fear of falling [12]. Conversely, the fear of falling and the feeling of inadequacy when performing a motor task can accelerate decline, leading the individual into depression [13] or social isolation [14,15]. It seems likely that physical activity presents the elderly with a chance to better master small daily tasks [11], engendering a heightened sense of wellbeing. However, what we still lack is an investigation of the underlying processes, and particularly of the interconnection between the physical and psychological effects that participation in physical activity may stimulate in older people.
Previous studies have mainly concentrated on self-efficacy: McAuley and collaborators [16] showed that an increase in physical activity is related to an enhanced sense of self-efficacy, which in turn is associated with an increase in physical performance. Keysor [17] and Heikkinen [18] found that self-efficacy is a mediator in the relationship between physical activity and functional limitations. From what we know, psychological characteristics other than self-efficacy were hardly ever considered. Besides this, it has not yet been explored whether physical activity affects mobility functioning and whether mobility functioning in turn affects psychological variables. The present study is aimed at addressing some of these gaps. We will concentrate particularly on: autonomy in personal hygiene activities (showering, bathing), because among all the daily living activities these are the most intimate and thus the most likely to be under the personal control of institutionalised seniors [19]; a sense of energy and vitality, because these have already been shown to be related to lower distress and greater wellbeing and personal autonomy in institutionalised elderly [20]; and mobility function in terms of balance and gait, because the loss of mobility represents a critical stage in the disablement process [21].
AIMS OF THE RESEARCH
The present study represents the continuation of a series of pilot research projects, which have already shown the positive effects of physical activity in an Italian sample of older people in residential care facilities with respect to their cognitive and psychological adjustment [22][23][24], and physical functioning. In the present study, we ask the following research questions: 1) What is the effect of participation in a physical activity programme on personal hygiene activities (showering, bathing), the sense of energy and vitality, and mobility function in terms of the balance and gait of institutionalised elderly individuals? We hypothesised that participation can increase or at least maintain stable autonomy in their hygiene activities, sense of energy and vitality, and mobility function.
2) Does the mobility function fulfil a mediating role in the relationship between participation in a physical activity programme and both the personal hygiene activities and the sense of vitality in institutionalised elderly individuals? In view of the above-mentioned literature, we hypothesised that the positive effect of physical activity is due to the mediation of the mobility function on both personal hygiene activities and a sense of vitality.
Study Design
The intervention was introduced in two residential care facilities of the Piedmont region in northern Italy, and another residential care facility in the same region was used as the control group. Currently, more than 5000 older people of the Piedmont region live in residential care facilities [25]. First, from the list offered by the Health Office of the Piedmont Region, we selected 30 facilities with similar features in terms of their accordance with the National Health Service, the number and typology of guests (ranging from 80 to 120), the intermediate social and economic conditions of the guests (all the guests in these facilities are requested to contribute a little for the care they receive), and the services offered to the older people (presence of specialised nurses, onsite emergency services, health care operators, a physiotherapist and a psychologist). Second, we randomly selected six of these facilities from the list, and all of them agreed to participate in the study. Third, we excluded three facilities because we did not find enough self-sufficient seniors to create a physical activity group. Thus, we assigned two of the remaining facilities to the experimental condition and one to the control condition.
The facilities that were selected accommodate both self-sufficient older people (i.e., individuals who can walk, eat, and use the bathroom independently) and dependent older people (requiring assistance in basic activities of daily living). All of the facilities are private institutions, but linked to the Public Health Service through a funding agreement.
Physical Training
The intervention included two sessions per week (lasting 60 minutes each) for 16 weeks, over a period of roughly five months. It was presented to small groups of self-sufficient older people living in residential care facilities. The sessions were conducted by instructors, all of whom had university degrees in physical education and sports-related fields and were specialised in physical fitness training for older people [26]. That is, we selected only those who achieved a final grade higher than the 95th percentile of the grades distribution for each subject.
The set of activities was specifically designed for the research. The intervention protocol, as advised by the American College of Sports Medicine [27], focused on three specific objectives: mobility, balance, and resistance strength.
The intervention has been organised so as to reproduce the movements and gestures of daily life, considering the three aims above. The intervention was designed with a gradual increase of the parameters of work intensity and complexity of exercise.
Participants
In each residential care facility the older participants, both of the intervention group and the control group, were selected from among all the older people living in the facility by the director of the residential care facility, who is a trained physician. The three criteria for inclusion were: 1) self-sufficiency (see above); 2) absence of serious chronic and/or acute diseases; and 3) intact cognitive functions, which were verified directly by the researchers. The Mini Mental Test [28] was used to evaluate cognitive functions, and all the older people reached or exceeded the minimum score of 23.
Previously, the entire study received the approval of the university ethics committee. Afterwards, the participants were informed that participation in the study was voluntary and confidential. All the selected individuals agreed to participate and gave their informed consent, in accordance with Italian law and the ethical code of the Italian Association of Psychologists [29].
The sample comprised 51 people (33 women and 18 men): the control group included 11 people (9 women and 2 men), whereas the experimental group was made up of 40 people (24 women and 16 men). Hence, the experimental and control groups were not completely balanced. However, this resulted from the fact that the proportion of men who met the criteria for inclusion was higher in the two residential care facilities assigned to the experimental condition than in the facility assigned to the control condition. Those included in the experimental group participated in the physical activity programme, while the control group comprised individuals who did not participate in the programme and continued their normal activity as planned in the facility (with respect to physical activity, the older people in the control group simply continued their free activity of walking in the facility's garden, since they are self-sufficient individuals).
The main characteristics of the participants are described in Table 1. The mean age of the entire group was 84.4 (SD = 7.2; range 73 -96). The majority were widows/widowers (N = 30), some participants were married (N = 12), while others had never married (N = 6), or were divorced (N = 3). Former occupations included both manual labour (N = 43) and non-manual labour (N = 18). The majority of the elderly received a primary school education (N = 37), while a smaller portion achieved a higher level of education (N = 14). Most participants had been born in northern Italy (N = 46), with just 4 persons from central Italy and 1 from southern Italy; 30 people had never participated in any sporting activity during their lives, while 21 had. We did not find any statistically significant difference between the experimental and control group for age, marital status, former occupation, level of education, place of birth, and previous participation in sport.
Procedure
We tested both the experimental and the control group with a battery of psychological and physical instruments before and after the physical activity programme, with an interval of about 18 weeks between the two waves. We did not lose any subject, so there was no attrition, probably because the restrictive inclusion criteria prevented losses due to mortality or a worsening of physical condition (the attrition rate would certainly have been higher among non-self-sufficient individuals).
We administered the following instruments to all the participants. The "Vitality" scale (drawn from the Italian version of the 36-Item Short Form Health Survey Questionnaire, SF-36 [30]): this consists of 4 questions investigating how frequently an individual feels a sensation of energy and cheerfulness, such as "How many times in the last 4 weeks have you felt full of energy/tired/worn out/full of life?"; the range of possible answers was from 1 "never" to 5 "always", and Cronbach's α was 0.72 at the pre-test and 0.80 at the post-test. The item on autonomy in bathing, which was drawn from the Activities of Daily Living (ADLs) scale [31]. The Tinetti Test [32], which evaluates mobility function in the elderly with two sub-scales, one for gait and one for balance. It attributes a score (6 items with a score from 0 to 1, and 11 items with a score from 0 to 2) to the performance of elderly individuals in 16 types of simple movements (9 movements for gait, and 7 for balance). Summing the item scores provides three scores regarding gait (maximum score = 12), balance (maximum score = 16), and overall mobility function (maximum score = 28). Participation in the physical activity programme is the independent variable, autonomy in bathing and vitality are the dependent variables, and the mobility function is the mediator.
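The Tinetti scoring described above (gait maximum 12, balance maximum 16, overall mobility maximum 28) can be sketched as a small helper. This is a minimal illustration based only on the sub-scale totals stated in the text; the item-level split between 0-1 and 0-2 items is not enforced here.

```python
def tinetti_scores(gait_items, balance_items):
    """Combine Tinetti sub-scale item scores into the three totals used
    in the paper: gait (max 12), balance (max 16), mobility (max 28)."""
    gait = sum(gait_items)
    balance = sum(balance_items)
    if not (0 <= gait <= 12 and 0 <= balance <= 16):
        raise ValueError("sub-scale total outside the documented range")
    return {"gait": gait, "balance": balance, "mobility": gait + balance}

# A hypothetical participant at the ceiling of both sub-scales:
best = tinetti_scores([2, 2, 2, 2, 2, 2], [2] * 8)  # mobility = 28
```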
Strategy of Analysis
In order to prove the validity of our model, we adopted a mediation analysis approach using the criteria described by Baron and Kenny [33] and Holmbeck [34].
The mediation analysis was carried out by regression analysis as follows: the direct effect (participation in the physical activity programme → autonomy in bathing and vitality) was evaluated in order to verify the effect of the predictor on the outcome. If this relationship was significant, we then included the mediator in the model.
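The regression-then-Sobel procedure can be sketched as follows. The Sobel z divides the indirect effect a·b by its approximate standard error. The numbers below are illustrative placeholders only, since the paper does not report the standard error of the predictor-to-mediator path.

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel test statistic for an indirect effect a*b.

    a, se_a: coefficient and SE of the predictor -> mediator path
    b, se_b: coefficient and SE of the mediator -> outcome path
             (controlling for the predictor)
    """
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Illustrative values, not taken from the paper's tables:
z = sobel_z(a=2.0, se_a=0.5, b=1.0, se_b=0.25)  # ≈ 2.83; |z| > 1.96 suggests significance at p < 0.05
```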
We verified that the main effect and the path between the independent variable and the mediator were significant. After that, we checked our mediation models, first with regard to autonomy in bathing and then to vitality. The mediation models were also verified with the Sobel test.

Descriptive Analysis

Table 2 presents a correlations matrix in which the associations between the variables used in this study are shown; this table also contains the means and standard deviations for each variable considered in the study. The correlation coefficients were computed to assess the hypothesised relationships between the study variables: predictors, mediator, and outcomes. All the variables considered were significantly correlated.
Participating in the physical activity programme was positively related to autonomy in bathing, vitality, and mobility function; mobility function was positively related to autonomy in bathing and vitality; vitality and autonomy in bathing were positively related to each other.
First mediation model: Participation in a physical activity program > Mobility function > Autonomy in bathing.
First, we verified the mediation model in relation to the autonomy in bathing outcome. Table 3 shows the main effect hypothesis, participation in the physical activity programme, and the mobility function on autonomy in bathing. The results prove that participating in a physical activity programme has an effect on autonomy in bathing (B = 0.45, p = 0.013); also, the mediator has a significant effect on the outcome (B = 0.63, p = 0.001).
Then, we verified the mediation model. The regression analysis shows a significant relationship between the independent variable and the mediator (B = 0.63, p = 0.001). Table 4 illustrates the mediation analysis. First of all, we verified the relationship between the predictor (Participation in physical activity) and the outcome (Autonomy in bathing), and the relationship was significant (B = 0.47, p = 0.013). However, after introducing the mediator (mobility function), we noticed a decrease in the coefficient between the predictor and the outcome (B = 0.12, p = 0.56).
As we expected, the coefficient between the mobility function mediator and the outcome was clearly significant (B = 0.03, p = 0.007).
A good fit of the model was also confirmed by R2, which had a 20% variance increase after the introduction of the mediator in the regression analysis.
The Sobel test for mediation, which determines whether the decrease in the coefficient is significant, indicated that the relationship was fully mediated (z = 5.49, p = 0.001).
Second mediation model: Participation in a physical activity programme > Mobility function > Vitality.
In the second mediation model, we considered the same predictor and the same mediator but we focused on a different outcome. Table 5 shows the effects of participating in the physical activity program (B = 0.38, p = 0.02) and of mobility function (B = 0.48, p = 0.002) on vitality. Both variables had a significant effect on the outcome. Then, we verified the mediation model. As already seen in the first mediation model, the relationship between the predictor and the mediator was significant. So, participating in the physical activity program had a positive impact on vitality (B = 4.1, p = 0.016) ( Table 6).
After the introduction of the mediator there was a significant decrease in the effect of the predictor (B = 1.7, p = 0.39), while the mediator was significant (B = 0.19, p = 0.04).
A good fit of the model was also confirmed by R2, which had a 10% variance increase after the introduction of the mediator in the regression analysis.
Finally, for this mediation model Sobel's Z-value was 1.99 (p = 0.04). Moreover, in this model we also found the full mediation of the mobility function.

Table 4. Mediation analysis for autonomy in bathing. Step 1: Participation in physical activity programme (B = 0.47, SE = 0.17, p = 0.013). Step 2: Participation in physical activity programme (B = 0.12, SE = 0.2, p = 0.56); Mobility function (B = 0.03, SE = 0.01, p = 0.007).

Table 6. Mediation analysis for vitality. Step 1: Participation in physical activity programme (B = 4.1, SE = 1.6, p = 0.016). Step 2: Participation in physical activity programme (B = 1.7, SE = 1.9, p = 0.39); Mobility function (B = 0.19, SE = 0.09, p = 0.04).
DISCUSSION AND CONCLUSIONS
The present study continues a series of pilot research projects that have already shown the positive effects of physical activity. The objective of this study was to investigate the relationship between participation in a physical activity programme and autonomy in bathing and vitality, as well as the mediating role of the mobility function, in a group of elderly living in residential care facilities. Generally speaking, our results are in line with the guidelines of the American College of Sports Medicine [35], which state that physical activity programmes targeting older people living in residential care facilities may preserve the skills that are functional to independence and promote their ability to be at least partially independent as long as possible.
Participating in the programme has positive effects on all the aspects considered in the present study: the mobility function, in terms of balance and gait; autonomy in personal hygiene activities; and a sense of vitality and energy.
We think that these are very important findings, especially considering the very old age and the frail condition of our participants. First, our findings showed that even at very old ages it is possible to improve, or at least keep stable, flexibility, balance, and autonomy in one's personal hygiene. The findings confirm what has already been underlined previously: even a slight amelioration in mobility functions, such as in the strength of the lower limbs, may significantly contribute to delaying the occurrence of dependence [36], for instance by enabling individuals to walk independently [37]. Second, our findings showed that participation in a short physical activity programme may enhance a sense of vitality. Feeling more independent and able to do things, and mastering the abilities of daily living, may certainly help increase one's sensation of energy, while even limited movement experiences can initiate a process of change over the short or medium term [38].
We also found that an increase in mobility function mediates the relationship between participation in physical activity and personal hygiene activities. A previous study by Rydwik and collaborators [7] showed that balance and mobility deficits can lead to a situation of dependence, increasing the risk of falls. Other studies proved that physical functioning can be improved and the risk of falls decreased thanks to targeted interventions, since motor skills can be strengthened little by little by raising the amount of physical activity [39,40]. Our study showed that common daily life activities, such as autonomy in bathing, might be influenced by improving balance and gait and thus decreasing the risk of falls.
Finally we found that even small improvements in the mobility function may mediate the relationship between participation in physical activity and a sense of vitality and energy. Preserving one's motor skills, which facilitates the management of simple daily activities (such as personal hygiene, eating autonomously, and small transfers), is likely to have positive effects on feeling energetic and able to face the motor challenges of everyday life.
In sum, the motor skills regained by participating in physical activity seem to play an important role in helping the elderly preserve their independence as long as possible, as underlined in previous studies [41].
The limitations of this study mainly concern the small number of participants, justified by difficulties in recruiting self-sufficient older people staying in residential care facilities. Future investigations should aim at involving larger samples so that the results are more reliable. Moreover, the composition of the group of participants, which is unbalanced for gender and in this mirrors the composition of the population of institutionalised elderly people in Italy [25], prevented us from analysing gender differences in terms of physical functioning and the perception of one's wellbeing.
Despite these limitations, our research provides information about the effects of physical activity protocols on individuals staying in residential care facilities.
Understanding the processes underlying the effects of physical activity on the sense of psychophysical wellbeing in the oldest elderly can help promote active ageing by also meeting the needs of institutionalised older people, who see independence in daily life as a crucial factor [42] but who also wish to participate in activities that are rich in meaning [43,18].
Further studies are also needed to identify the factors that protect the wellbeing of the elderly and support their active ageing in residential care facilities. Finally, changes should be introduced in residential care facilities in order to allow these institutions to preserve residual skills, promote active living, and generally look at their guests as unique and active individuals despite their frail condition.
|
v3-fos-license
|
2021-09-25T16:14:24.107Z
|
2021-08-23T00:00:00.000
|
238682087
|
{
"extfieldsofstudy": [
"Art"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2413-4155/3/4/39/pdf",
"pdf_hash": "91902871971112b59567078c260a95ad787f599d",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1019",
"s2fieldsofstudy": [
"Art"
],
"sha1": "3da7366e775d8fea4097b85193d77516aedce437",
"year": 2021
}
|
pes2o/s2orc
|
Graph Coverings for Investigating Non Local Structures in Proteins, Music and Poems
We explore the structural similarities in three different languages: first in the protein language, whose primary letters are the amino acids; second in the musical language, whose primary letters are the notes; and third in the poetry language, whose primary letters are the alphabet. For proteins, the non-local (secondary) letters are the types of foldings in space (α-helices, β-sheets, etc.); for music, one is dealing with clear-cut repetition units called musical forms; and for poems, the structure consists of grammatical forms (names, verbs, etc.). We show in this paper that the mathematics of such secondary structures relies on finitely presented groups fp on r letters, where r counts the number of types of such secondary non-local segments. The number of conjugacy classes of a given index (also the number of graph coverings over a base graph) of a group fp is found to be close to the number of conjugacy classes of the same index in the free group F_{r-1} on r - 1 generators. In a concrete way, we explore the group structure of a variant of the SARS-Cov-2 spike protein and the group structure of apolipoprotein-H, passing from the primary code with amino acids to the secondary structure organizing the foldings. Then, we look at the musical forms employed in the classical and contemporary periods. Finally, we investigate in much detail the group structure of a small poem in prose by Charles Baudelaire and that of the Bateau Ivre by Arthur Rimbaud.
Introduction
In this paper, we point out for the first time a remarkable analogy between the pattern structure of bonds between amino acids in a protein (the protein secondary structure [1]) and the non local structures observed in tonal music and in poems. We explain the origin of these analogies with finitely generated groups and graph covering theory.
A protein is a long polymeric linear chain encoded with 20 letters (the 20 amino acids). The surjective mapping of the 4^3 = 64 codons to the 20 amino acids is the DNA genetic code. It can be given a mathematical theory with appropriate finite groups [2,3]. In addition, a protein folds in the three-dimensional space with structural elements such as coils, α-helices and β-sheets, or other arrangements that determine its biological function. The number of proteins encoded in genomes depends on the biological organism (typically from 1 to 10^2 proteins in viruses, from 10^2 to 10^3 proteins in bacteria and from 10^3 to 10^4 proteins in eukaryotes). The protein database (or PDB) contains about 1.8 × 10^5 entries [4]. Proteins constitute the language of life: amino acids are the alphabet, proteins are the words and the set of proteins in an organism are the phrases.
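The codon count behind the surjective genetic-code mapping can be checked directly; a minimal sketch:

```python
from itertools import product

# The four RNA bases; a codon is an ordered triple of bases,
# so there are 4**3 = 64 codons mapping onto the 20 amino acids.
BASES = "ACGU"
codons = ["".join(t) for t in product(BASES, repeat=3)]
print(len(codons))        # 64
print("AUG" in codons)    # True (the start codon)
```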
Analogously in music, a note is a letter encoding a musical sound. In the 12-tone chromatic scale [5], each of the 12 notes (or letters) has the frequency of the previous note multiplied by 2^{1/12} ≈ 1.0595. The form refers to the secondary structure of a musical composition in terms of clear-cut units of equal length, for example, A-B-A in the sonata form or A-B-C-B-A in an arch form [6]. Now, we come to human language and the Latin alphabet. There are 26 letters organized into words of various types such as names, adjectives, verbs, and so on. In the following, we will show that a verse in a poem or a phrase in prose have distinctive features, the former being closer to our theory.
Our mathematical theory of the secondary structures in proteins, music and poems relies on the concept of a finitely generated group and the corresponding graph coverings, as explained in Section 2.
We will investigate three applications of the graph covering approach. In Section 3, we look at the secondary structures of two proteins. We take as examples the spike protein of the SARS-Cov-2 virus and a glycoprotein playing a role in the immune system (see [3] for our earlier work). In Section 4, the secondary structures are the musical forms of western music in the classical age and twentieth century music. Then, in Section 5, the secondary structures in the verses of selected poems are obtained from an encoding of the types of words (names, verbs, prepositions, etc).
A Brief Review of the Literature
After we received an invitation to contribute to the present special issue of Sci "Mathematics and poetry, with a view towards machine learning" we thought that our current group theoretical approach of protein language [3] could be converted into an understanding of the poetic language, as well as an understanding of some musical structures.
Our goal in this subsection is to point out earlier work in the same direction as ours. There are many papers attempting to relate group theory to the genetic code, as reviewed in [2] but we found none of them featuring the secondary structure of proteins along the chain of amino acids, as we did in [3] and as we do below with the graph coverings.
Poetry inspired mathematics has been the common thread of most papers exploring the connection between poems and maths [7][8][9][10]. However, it is more challenging to explain what type of structure and beauty occurs in a poem in the language of mathematics [11]. Perhaps mathematical linguistics is the proper frame for making progress [12] and artificial intelligence (AI) may help in the classification of languages [13].
Although both subjects have been connected for centuries, comparing musical structures to mathematics is a fairly new research domain [14]. For a different perspective, the readers may consult Reference [15].
Graph Coverings and Conjugacy Classes of a Finitely Generated Group
Let rel(x_1, x_2, . . . , x_r) be the relation defining the finitely presented group f_p = ⟨x_1, x_2, . . . , x_r | rel(x_1, x_2, . . . , x_r)⟩ on r letters (or generators). We are interested in the conjugacy classes (cc) of subgroups of f_p with respect to the nature of the relation rel. In a nutshell, one observes that the cardinality structure η_d(f_p) of conjugacy classes of subgroups of index d of f_p is all the closer to that of the free group F_{r−1} on r − 1 generators as the choice of rel contains more non-local structure. To arrive at this statement, we experiment on protein foldings, musical forms and poems. The former case was first explored in [3].
Let X and X̃ be two graphs. A graph epimorphism (an onto or surjective homomorphism) π : X̃ → X is called a covering projection if, for every vertex ṽ of X̃, π maps the neighborhood of ṽ bijectively onto the neighborhood of π(ṽ). The graph X is referred to as a base graph (or a quotient graph) and X̃ is called the covering graph. The conjugacy classes of subgroups of index d in the fundamental group of a base graph X are in one-to-one correspondence with the connected d-fold coverings of X, as has been known for some time [16,17].
Graph coverings and group actions are closely related. Let us start from an enumeration of the integer partitions of d subject to a constraint, a famous problem in analytic number theory [18,19].
Another interpretation of Iso(X; d) is found in ([20], Equation (12)). Taking a set of mixed quantum states comprising r + 1 subsystems, Iso(X; d) corresponds to the stable dimension of degree d local unitary invariants; for two subsystems, r = 1. One can then establish that the number Isoc(X; d) of connected d-fold coverings of a graph X (alias the number of conjugacy classes of subgroups in the fundamental group of X) can be computed in closed form ([17], Theorem 3.2, p. 84), where µ denotes the number-theoretic Möbius function. Table 1 provides the values of Isoc(X; d) for small values of r and d ([17], Table 3.2). The finitely presented groups G = f_p may be characterized in terms of a first Betti number r. For a group G, r is the rank (the number of generators) of the abelian quotient G/[G, G]. To some extent, a group f_p whose first Betti number is r may be said to be close to the free group F_r, since both have the same minimum number of generators.
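The free-group benchmarks can be illustrated numerically. A classical recursion due to M. Hall gives the number of index-d subgroups of the free group F_r; the conjugacy-class counts Isoc(X; d) quoted in Table 1 are a Möbius-type refinement of such subgroup counts (see [17]). A minimal sketch of the subgroup-counting step:

```python
from math import factorial

def free_group_subgroup_counts(r, dmax):
    """Number of subgroups of index d in the free group F_r,
    via M. Hall's recursion:
      N_1 = 1,
      N_d = d*(d!)**(r-1) - sum_{i=1}^{d-1} ((d-i)!)**(r-1) * N_i.
    Returns [N_1, ..., N_dmax]."""
    N = [0, 1]  # N[0] is unused; N[1] = 1
    for d in range(2, dmax + 1):
        s = sum(factorial(d - i) ** (r - 1) * N[i] for i in range(1, d))
        N.append(d * factorial(d) ** (r - 1) - s)
    return N[1:]

print(free_group_subgroup_counts(2, 5))  # [1, 3, 13, 71, 461]
print(free_group_subgroup_counts(1, 5))  # [1, 1, 1, 1, 1]  (F_1 = Z)
```

For F_1 = Z every index has exactly one subgroup, consistent with the all-ones sequence Isoc(X; 1).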
Graph Coverings for Proteins
As a follow up of our previous paper [3] we first apply the above theory to two proteins of current interest, the spike protein in a variant of SARS-Cov-2 and a protein that plays an important role in the immune system.
The D614G Variant (Minus RBD) of the SARS-CoV-2 Spike Protein
As a first example of the application of our approach, let us consider the D614G variant (minus RBD: the receptor binding domain) of the SARS-CoV-2 spike protein. In the Protein Data Bank in Europe, the name of the sequence is 6XS6 [22]. D614G is a missense mutation (a nonsynonymous substitution where a single nucleotide results in a codon that codes for a different amino acid). The mutation occurs at position 614 where glycine has replaced aspartic acid worldwide. Glycine increases the transmission rate and correlates with the prevalence of loss of smell as a symptom of COVID-19, possibly related to a higher binding of the RBD to the ACE2 receptor: an enzyme attached to the membrane of heart cells. A picture of the secondary structures can be found in Figure 1.
FVTQRNFYEPQIITTDNTFVSGNCDVVIGIVNNTV
Such a protein sequence, comprising 20 amino acids as letters of the primary code, can be encoded in terms of secondary structures. Most of the time, for proteins, one makes use of three types of encoding: segments of α helices (encoded with the symbol H), segments of β pleated sheets (encoded with the symbol E) and segments of random coils (encoded with the symbol C) [1,3,23].
A finer structure may be obtained by using methods such as the SST Bayesian method. A summary of the approach can be found in Reference [23].
We used software prepared in [24] to obtain the following secondary structure:
CCCTTTTTCCCCCTTTTTCCCC44444EEEEEECC,
where G means a 3_10 helix, 4 means an α-like turn, I means a right-handed π helix and T corresponds to unspecified turns.
For the group analysis, we slightly simplify the problem by setting 4 = H (treating the α-like turn as just one form of α helix), so that the sequence is encoded with 6 letters only. Then, we further simplify by setting T = C to obtain a 5-letter encoding. Setting I = H and then G = H yields 4-letter and 3-letter encodings, respectively. The results are in Table 2.

Table 2. Group analysis of the D614G variant (minus RBD) of the SARS-CoV-2 spike protein. The bold numbers mean that the cardinality structure of cc of subgroups of G fits that of the free group F_{r−1} when the encoding makes use of r letters. In the last column, r is the first Betti number of the generating group f_p.

We observe that the cardinality structure of the cc of subgroups of the finitely presented groups f_p = ⟨H, E, C, G, I, T | rel⟩, . . . , f_p = ⟨H, E, C | rel⟩ fits that of the free group F_{r−1} when the encoding makes use of r = 6, 5, 4, 3 letters. This is in line with our results found in [3] on several kinds of proteins.
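The successive letter identifications amount to simple string substitutions; a minimal sketch on the 35-symbol fragment above (which happens to contain only four of the six letters):

```python
def reduce_alphabet(seq, identifications):
    """Apply letter identifications (e.g. '4' -> 'H') one after another,
    returning each intermediate encoding and its alphabet size."""
    out = [(seq, len(set(seq)))]
    for old, new in identifications:
        seq = seq.replace(old, new)
        out.append((seq, len(set(seq))))
    return out

sec = "CCCTTTTTCCCCCTTTTTCCCC44444EEEEEECC"
steps = reduce_alphabet(sec, [("4", "H"), ("T", "C")])
for s, r in steps:
    print(r, s)
# The final encoding uses only the letters H, E and C.
```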
The β-2-Glycoprotein 1 or Apolipoprotein-H
Our second example deals with a protein playing an important role in the immune system [25]. In the Protein Data Bank, the name of the sequence is 6V06 [26] and it contains 326 aa. All models predict secondary structures mainly comprising β-pleated sheets and random coils and sometimes short segments of α-helices.
We observe in Table 3 that the cardinality structure of the cc of subgroups of the finitely presented groups f_p = ⟨H, E, C | rel⟩ approximately fits that of the free group F_2 on two letters for the first three models, but not for the RAPTORX model. In one case (with the PORTER model [27]), all first six digits fit those of F_2, and higher-order digits could not be reached. The reader may refer to our paper [3], where such a good fit could be obtained for the sequences in the arms of the protein complex Hfq (with 74 aa). This complex with 6-fold symmetry is known to play a role in DNA replication.
A picture of the secondary structure of the apolipoprotein-H obtained with the software of Ref. [24] is displayed in Figure 2.

Table 3. Group analysis of apolipoprotein-H (PDB 6V06). The bold numbers mean that the cardinality structure of cc of subgroups of f_p fits that of the free group F_2 when the encoding makes use of 3 letters. The first model is the one used in the previous Section [24], where we took 4 = H and T = C. The other models of secondary structures with segments E, H and C are from the softwares PORTER, PHYRE2 and RAPTORX. The references to these softwares may be found in our recent paper [3]. The notation r in column 3 means the first Betti number of f_p.
Graph Coverings for Musical Forms
We take the view that this non-local structure determines the beauty in art. We provide two examples of this relationship, first by studying musical forms, then by looking at the structure of verses in poems. Our approach goes beyond the orthodox view of periodicity or quasiperiodicity inherent to such structures: the non-local character of the structure is investigated thanks to a group whose generators are the allowed symbols x_1, x_2, · · · , x_r, with a relation rel determining the position of such successive generators, as we did for the secondary structures of proteins. In Table 1, the sequence Isoc(X; 1) only contains 1 in its entries, and it is tempting to associate this sequence with the most irrational number, the golden ratio φ = (√5 − 1)/2, through the continued fraction expansion φ = 1/(1 + 1/(1 + 1/(1 + 1/(1 + · · · )))) = [0; 1, 1, 1, 1, · · · ].
Then, one can check that the finitely presented group f_p(n) = ⟨S, L | w_n⟩, whose relation is a Fibonacci word w_n, possesses the cardinality sequence of subgroups [1, 1, 1, 1, 1, 1, 1, 1, · · · ], equal to Isoc(X; 1) up to all computable orders, despite the fact that the groups f_p(n) are not the same. It is straightforward to check that the first Betti number r of f_p(n) is 1, as expected.
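The convergents of the all-ones continued fraction are ratios of consecutive Fibonacci numbers and approach φ; a quick numerical check:

```python
def phi_convergents(n):
    """First n convergents of [0; 1, 1, 1, ...]: ratios of
    consecutive Fibonacci numbers F_k / F_{k+1}."""
    a, b = 1, 1  # consecutive Fibonacci numbers
    out = []
    for _ in range(n):
        out.append(a / b)
        a, b = b, a + b
    return out

phi = (5 ** 0.5 - 1) / 2
print(abs(phi_convergents(12)[-1] - phi) < 1e-4)  # True
```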
The Period Doubling Cascade
Other rules lead to a Betti number r = 1 and the corresponding sequence Isoc(X; 1). Let us consider the period-doubling cascade in the logistic map x_{l+1} = 1 − λx_l^2. Period doubling can be generated by repeated use of the substitutions R → RL and L → RR, so that the sequence of period doubling is [28] R, L, RL, RLR^2, RLR^3LRL, RLR^3LRLRLR^3LR^3, RLR^3LRLRLR^3LR^3LR^3LRLRLR^3LRLRL, · · · and the corresponding finitely presented groups also have first Betti numbers equal to 1.
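The cascade words can be generated directly; iterating the substitutions from R reproduces RL, RLR^2, RLR^3LRL, and so on:

```python
def period_doubling(n):
    """Iterate the substitutions R -> RL, L -> RR, starting from 'R',
    for n steps; returns the list of successive words."""
    subst = {"R": "RL", "L": "RR"}
    w = "R"
    words = [w]
    for _ in range(n):
        w = "".join(subst[c] for c in w)
        words.append(w)
    return words

print(period_doubling(3))  # ['R', 'RL', 'RLRR', 'RLRRRLRL']
```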
Musical Forms of the Classical Age
Going into musical forms, the ternary structure L-S-L (most commonly denoted A − B − A) corresponding to the Fibonacci word w_4 is a Western instrumental genre notably used in sonatas, symphonies and string quartets. The basic elements of the sonata form are the exposition A, the development B and the recapitulation A. While the musical form A − B − A is symmetric, the Fibonacci word A − B − A − A − B corresponding to w_5 is asymmetric and used in some songs or ballads from the Renaissance.
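With the indexing convention chosen so that w_4 = ABA and w_5 = ABAAB (an assumption matching the forms quoted above), the Fibonacci words are generated by concatenation:

```python
def fibonacci_word(n):
    """Fibonacci words over {A, B}, with w_1 = 'B', w_2 = 'A' and
    w_n = w_{n-1} + w_{n-2}, so that w_4 = 'ABA' and w_5 = 'ABAAB'."""
    if n == 1:
        return "B"
    if n == 2:
        return "A"
    return fibonacci_word(n - 1) + fibonacci_word(n - 2)

print(fibonacci_word(4), fibonacci_word(5))  # ABA ABAAB
```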
In a closely related direction, it was shown that the lengths a and b of sections A and B in all Mozart's sonata movements are such that the ratio b/(a + b) ≈ φ [29].
The Sequence Isoc(X; 2) in Twentieth Century Music and Jazz
In the 20th century, musical forms escaped the channels created in the classical age. With the Hungarian composer Béla Bartók, a musical structure known as the arch form was created. The arch form is a sectional structure for a piece of music based on repetition, in reverse order, so that the overall form is symmetric, most often around a central movement. Formally, it looks like A − B − C − B − A. A well known composition of Bartók with this structure is Music for strings, percussion and celesta [30]. In Table 4, it is shown that the cardinality sequence of cc of subgroups of the group generated with the relation rel = ABCBA corresponds to Isoc(X; 2) up to the highest index 9 that we could check with our computer. A similar result is obtained with the symmetrical word ABACABA.
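An arch form is simply a palindromic sequence of sections, which is easy to test; a minimal sketch:

```python
def is_arch_form(form):
    """A sectional structure is an arch form when the sequence of
    sections reads the same forwards and backwards (a palindrome)."""
    sections = form.split("-")
    return sections == sections[::-1]

print(is_arch_form("A-B-C-B-A"))  # True  (Bartok's arch form)
print(is_arch_form("A-B-A-A-B"))  # False (the Fibonacci word w_5)
```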
Our second example is a musical form known as twelve-bar blues [31], one of the most prominent chord progressions in popular music and jazz. In this context, the notation A is for the tonic, B is for the subdominant and C is for the dominant, each letter representing one chord. In twelve-bar blues, there are twelve chords arranged as in the first column of Table 4. We observe that the standard twelve-bar blues are different in structure from the sequence of Isoc(X; 2). However, variations 1 and 2 have a structure close to Isoc(X; 2). In the former case, the first 9 orders lead to the same digit in the sequence.
Our third example is the musical form A-A-B-C-C. Notably, it is found in the slow movement of Haydn's 'Emperor' quartet, Opus 76, No. 3 [32] (Figure 3), much earlier than the contemporary period. (See also Ref. [33] for the frequent occurrence of the same musical form in djanba songs at Wadeye.) As in the aforementioned examples, the cardinality sequence of the cc of subgroups of the group built with rel = AABCC corresponds to Isoc(X; 2) up to the highest index 9 that we could reach in our calculations.

Table 4. Group analysis of a few musical forms whose structure of subgroups, apart from exceptions, is close to Isoc(X; d) with d = 2 (in the upper part of the table) or d = 3 (in the lower part of the table). Of course, the forms A-B-C and A-B-C-D have the cardinality sequence of cc of subgroups exactly equal to Isoc(X; 2) and Isoc(X; 3), respectively.
Further musical forms with 4 letters A, B, C, and D and their relationship to Isoc(X; 3) are provided in the lower part of Table 4.
Not surprisingly, the rank r of the abelian quotient of f_p = ⟨A, B, C | rel(A, B, C)⟩ is found to be 2 when the cardinality structure fits that of Isoc(X; 2) in Table 4; otherwise, the rank is 3. Similarly, the rank r of the abelian quotient of f_p = ⟨A, B, C, D | rel(A, B, C, D)⟩ is found to be 3 when the cardinality structure fits that of Isoc(X; 3) in Table 4; otherwise, the rank is 4.
In Table 5, the group analysis is performed with 3, 4 or 5 letters (in the upper part) and is compared to random sequences with the same number of letters (in the lower part).
When the text of the sentence is first encoded with three letters (H for names and adjectives, E for verbs and C otherwise), we observe that the subgroup structure has a cardinality close to that of a free group F_2 on two letters up to index 3. If one adds the letter A for the prepositions in the sentence (in addition to H, E and C), then the subgroup structure has a cardinality close to that of a free group F_3 on three letters. If adverbs B are also selected, then the subgroup structure is close to that of the free group F_4. In all three cases, the similarity holds up to index 3, where the cc of subgroups are the same as in the corresponding free groups. The first Betti numbers of the generating groups are 2, 3 and 4, as expected.
In Table 5, we also computed the cardinality structure of the cc of subgroups of small indexes obtained from a random sequence of 250 letters (the number of letters in the previously studied sentence of the small poem in prose). We took 10 runs with random sequences having 3, 4 or 5 letters. We see that the cardinality structure of the cc of subgroups for the cases with 4 or 5 letters tends to align with that of the free group F_{r−2} (not F_{r−1}). The 3-letter case is the most random one and does not correspond to F_1 (or F_2) in most runs.
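The encoding procedure can be sketched with a hypothetical toy part-of-speech lookup (the dictionary below is illustrative, not the one used for the Baudelaire text), together with a random baseline of the kind compared against in Table 5:

```python
import random

# Hypothetical toy lookup; a real analysis would use a full tagger.
# H = name/adjective, E = verb, C = everything else.
POS = {"boat": "H", "old": "H", "river": "H",
       "drift": "E", "sing": "E",
       "the": "C", "and": "C", "down": "C"}

def encode(words):
    """Encode a word list into the 3-letter alphabet H/E/C."""
    return "".join(POS.get(w, "C") for w in words)

print(encode("the old boat drift down the river".split()))  # CHHECCH

# Random baseline of the same alphabet, for comparison.
random.seed(0)
baseline = "".join(random.choice("HEC") for _ in range(35))
print(len(baseline))  # 35
```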
Our conclusion is that the considered prose sequence contains a structure close to that of F_r when we select r + 1 letters for the encoding of the sentence, a result similar to what we found in the group analysis of proteins in Section 3 and of musical forms in Section 4.

Table 5. Group analysis of an excerpt of a small poem in prose, Le vieux saltimbanque, by Charles Baudelaire. The text is split into segments encoded by the symbol H (for names and adjectives), E (for verbs), A (for prepositions), B (for adverbs) or C (for the other types: conjunctions, punctuation marks and so on). The cardinality structure of the cc of subgroups of small index is compared to the one obtained with 10 runs of a sequence of words of a similar length (i.e., length 250) with the corresponding number of letters.
Graph Coverings for Poems
In poems, the verses are generally shorter than a sentence in prose. We selected the first strophe of the poem Le Bateau Ivre by Arthur Rimbaud. The poem may be found on a wall in Paris; see Figure 4. The verses in the strophe have about 35 letters. We compare the group structure of the four verses in the first strophe to that of random sequences of length 35 in Table 6 (when the encoding is with the 3 letters H, E and C) and in Table 7 (when the encoding is with the 4 letters H, E, C and A). Adverbs are too rare in verses of such a small length, so we did not consider the 5-letter case.
Let us first look at the 3-letter case in Table 6. Apart from the first verse in the strophe, the structure of the poem is very close to that of F_2, up to index 6 (for the second verse) and up to index 7 (for verses 3 and 4). Higher-order indices could not be reached in our calculations. For the English translation, the closeness to F_2 holds as well, but is not as perfect. This is not so surprising, since the poem was originally composed in French; for a French translation of a poem in English, one would obtain a similar (small) discrepancy in the group structure with respect to F_2. We looked at the cardinality structure of the cc of subgroups obtained by taking random sequences of length 35 in 10 runs, and we observe that the closeness to F_2 is much weaker than in the case of the poem.

Table 6. Group structure of the poem Le Bateau Ivre (The Drunken Boat) by Arthur Rimbaud. Only the first strophe (which has four lines) is analyzed, first in its original form, then in an English translation. Each line is split into segments encoded by the symbol H (for names and adjectives), E (for verbs) or C (for the other types: conjunctions, adverbs, prepositions, punctuation marks and so on). (The group relation is displayed for the first line only.) The cardinality structure of cc of subgroups of small index is compared to the one obtained with 10 runs of a sequence of random 3-letter words of similar length (i.e., length 35).
Comme je descendais des fleuves impassibles,

Table 7. The same as in Table 6, but each line is split into segments encoded by the symbol H (for names and adjectives), E (for verbs), A (for prepositions) or C (for the other types: conjunctions, adverbs, punctuation marks and so on). The cardinality structure of cc of subgroups of small index is compared to the one obtained with 10 runs of a sequence of random 4-letter words of similar length (i.e., length 35).

The group structure obtained with 3 letters can also be obtained with 4 letters in Table 7, but the closeness is to F_3 (not F_2), as expected.
Conclusions
The graph covering approach has been shown to be useful for understanding how complex structures are encoded in nature and in art. For proteins, there exists a primary encoding with 20 amino acids as letters and the secondary encoding determines the folding of proteins in the 3-dimensional space. This is useful for recognizing the relationship between the structure and function of the protein.
We took examples based on a current hot topic: a variant of the SARS-CoV-2 spike protein, and the apolipoprotein-H. For music, the secondary structures are called musical forms, and the choice of them determines the type of music. For poems, we took the French (or English) alphabet with 26 letters, but many other alphabets may be used in the application of our approach. The secondary structures are defined from the encoding of the types of words (names, verbs and so on).
It is also interesting to speculate about the possible existence of a primary code and a secondary code in other fields, for example, in physics at the elementary level, as in particle physics and quantum gravity [35]. According to the experience of the authors of this paper, such structure has much to do with complete quantum information. The reader may consult paper [36] about particle mixings, or [3,37] about the genetic code, in which finite groups are the players. Here, we are dealing with infinite groups, so that the representation theory of finite groups (with characters) has to be extended to finitely presented groups (most of the time of infinite cardinality). This will be explored further in our next paper [38].
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2020-12-10T09:06:05.310Z
|
2020-12-06T00:00:00.000
|
230550367
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2410-390X/4/4/37/pdf",
"pdf_hash": "b3fb9a60bfb701b8a7f4c2112815c8a28bbfd9d2",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1021",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"sha1": "c78a2248d22c8782ae95912d246ddc16a8741893",
"year": 2020
}
|
pes2o/s2orc
|
Study of the FTM Detector Performance with Garfield++
The Fast Timing Micro-Pattern Gaseous Detector (FTM) has been recently introduced as a promising alternative for applications that require improved time resolution, such as high-luminosity accelerators and medical imaging. The FTM consists of a stack of several coupled gas layers alternating drift and multiplication stages. The time resolution is determined by the time of the fastest signal among all amplification stages, read out by external electrodes through capacitive couplings. In the present work, we use the Garfield++ simulation toolkit in order to investigate and optimize the FTM performance. Gain, timing, and efficiency of the FTM are studied as a function of different parameters, such as detector geometry, gas mixture, and applied electric fields. The simulations presented in this paper show that a time resolution as low as 160 ps can be reached with a 32-layers FTM.
Introduction
Micro-Pattern Gaseous Detectors (MPGD) witnessed a significant growth over the past twenty years. With their excellent spatial resolution, radiation hardness, flexible geometry, and relatively lower production and operation constraints, different MPGD detectors, such as GEM (Gas Electron Multiplier) and Micromegas, have been playing essential roles in many high energy physics experiments [1]. However, MPGDs are generally vulnerable to electric discharges, particularly in high rate environments, eventually causing potential damages to the readout electronics, as well as increasing the noise, which can result in data loss [2].
Although resistive materials have been recently introduced for building compact spark-protected MPGDs [3][4][5], also opening the possibility to make electrically transparent structures with external signal pick-up, current MPGDs still suffer from a relatively poor time resolution of few nanoseconds, which makes them less performing in environments such as high-luminosity accelerators and medical imaging, which require sub-ns time resolution in order to reduce the background.
Time resolution, which is dominated by the fluctuations on the position of the ionization cluster that is closest to the amplification region, is improved in the novel Fast Timing Micro-Pattern Gaseous Detector (FTM) by dividing the drift gap into several smaller gaps, each with its own amplification structure. This leads to a reduction of the fluctuations in the distance between the closest ion-electron pair and amplification structure. Thanks to its fully resistive structure, signals from any FTM layer can be picked-up by the external readout strips. The time resolution, which is inversely proportional to the number of layers, is then given by the best timing among all layers [6].
In a previous work [7], simulations have shown that a time resolution below 400 ps can be obtained with a 16-layers FTM operated in an Ar/CO2 (70/30) gas mixture with a 3 kV/cm drift field, a 120 kV/cm amplification field (which corresponds to a 600 V voltage difference over the 50 µm well height), and with a cut-off of up to 4000 electrons on the integrated electron signal, used to mimic the electronic noise threshold. The time benefit of the FTM was experimentally proven with the first two-layers FTM built and tested at CERN (European Organization for Nuclear Research) in 2014 [8]. The simulations have also revealed that, for a large number of thin FTM layers, the reduction of the time resolution moves away from linearity, which is expected, as some detector effects start to play an important role [7]. In the present work, we extend the simulations to 32 layers and investigate the reasons for the time resolution deviating from the expected linear behavior. Besides, we study the performance of the FTM detector in terms of timing, gain, and efficiency as a function of different parameters, such as the gas mixture and the hole configuration. While a wider hole diameter leads to gain deterioration in GEMs [9], we show that an improved time resolution is obtained in a wide-hole FTM due to an increased collection efficiency.
Setup and Simulations
The FTM is realized by creating successive drift and amplification layers, with a constant total drift thickness of 4 mm for all configurations (1 layer of 4 mm, 2 layers of 2 mm, . . . , 32 layers of 0.125 mm), as shown in Figure 1. An amplification structure is realized by perforating 140 µm-pitch holes in a 50 µm thick Kapton foil coated with resistive Diamond-Like Carbon (DLC) on both the top and bottom sides: while the upper DLC induces the high electric field inside the hole, the bottom DLC coating serves as the drift electrode of the next FTM layer. A top plane made of a Kapton foil covered with ∼100 nm of DLC constitutes the drift electrode, while the bottom layer of the FTM is kept at ground. The overall structure is assumed to be transparent to the signals created in any layer. Indeed, the transparency depends on the capacitance between each layer and the signal electrodes, as well as on the surface resistivity of the electrodes. We believe that a careful choice of materials and thicknesses will allow finding a configuration with nearly 100% transparency. While the electric field map in the FTM was calculated using the ANSYS Mechanical APDL suite [10], the detector and charge-transport simulations were performed using Garfield++ [11] (the codes are available from the authors upon request). Solving the charge-transport equation of motion can be done in Garfield++ with Runge-Kutta-Fehlberg (RKF) integration or with Microscopic Tracking. RKF is suitable for tracking electrons over large distances and in cases where detailed calculations of ionization and excitation processes are not required. On the other hand, Microscopic Tracking, which is used in this work, is the method of choice for accurate and detailed simulations of electron trajectories in small-scale structures [12].
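As an illustration of the integration step (a toy stand-in, not the Garfield++ RKF implementation), a classic fixed-step RK4 drift-line integrator reduces, in a uniform field, to the analytic drift time; the mobility and field values below are illustrative only:

```python
def rk4_drift(z0, z1, mobility, field, dt):
    """Classic fixed-step RK4 for dz/dt = mu * E.  With a uniform field
    this is trivially exact, but the stepping mirrors how a drift line
    is integrated; Garfield++ itself uses adaptive Runge-Kutta-Fehlberg."""
    t, z = 0.0, z0
    velocity = lambda _z: mobility * field  # would follow the field map in reality
    while z < z1:
        k1 = velocity(z)
        k2 = velocity(z + 0.5 * dt * k1)
        k3 = velocity(z + 0.5 * dt * k2)
        k4 = velocity(z + dt * k3)
        z += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += dt
    return t

# Drift over a 4 mm gap at v = mu*E = 0.07 mm/ns (illustrative value);
# the analytic drift time is 4 / 0.07, about 57 ns.
print(rk4_drift(0.0, 4.0, mobility=1.0, field=0.07, dt=0.1))
```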
Three different hole configurations were studied in the present work, as shown in Table 1: hole-1 with a 90 µm diameter at the top and 40 µm at the base, hole-2 with 70 µm at the top and 50 µm at the base, and hole-3 with 100 µm at the top and 70 µm at the base. These hole configurations correspond to the actual experimental geometries obtained after the etching of the DLC-clad polyimide foils of the various prototypes built and tested at CERN since 2014 [8].
Results and Discussion
Time resolution, gain, and efficiency are investigated in order to study the performance of the FTM detector. Time performance is studied as a function of the drift field, the number of FTM layers, and for different gas mixtures. The detector gain is plotted as a function of electric fields and for different gas mixtures. Finally, we study the detector efficiency in order to understand the timing and gain behaviors. For all studies, a minimum of 2000 events are simulated each time, enough to minimize statistical uncertainties.
Time Performance
The time performance is the most important parameter, as the detector was mainly proposed in light of improving the general MPGD time resolution. For a FTM, the time resolution is expected to decrease linearly with an increasing number of layers, following the equation [6]

σ_t = 1/(λ N_D v_d), (1)

where λ is the average number of primary clusters generated per unit length by an ionising particle inside the gas (whose occurrence is a Poisson process), N_D is the number of the FTM drift layers, and v_d is the drift velocity. The cluster density and drift velocity, which both depend on the gas mixture, were estimated with Garfield++, as shown in Figure 2. In this work, time studies were performed with 50 GeV/c muons coming vertically downward. Garfield++ calculations were done using the Microscopic Tracking method, which offers two tracing functions: DriftElectron, which only traces the primary electrons, but not the secondaries produced along the drift path, and AvalancheElectron, which traces all of the electrons produced in the avalanche, as mentioned in Section 1. DriftElectron was preferred over AvalancheElectron, as this results in a considerable reduction of the computation times, by approximately one order of magnitude, without altering the time resolution (which is given by the RMS of the distribution of the arrival time of the fastest electrons among all layers), provided that the amplification field (E_A) is above 110 kV/cm, where the time resolution becomes independent of the value of the noise threshold [7]. Figure 3 shows the time resolution as a function of the drift field at an amplification field of 120 kV/cm for a single-layer FTM with gap thickness G_D = 4 mm in an Ar/CO2 (70/30) gas mixture. Hole-3 shows a better time resolution when compared to both other configurations: the time resolution as a function of the drift field seems to improve with wider holes due to an increasing collection efficiency, which will be discussed in Section 3.3.
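The 1/(λ N_D v_d) scaling can be illustrated with a toy Monte Carlo (illustrative parameter values, not Garfield++ output): in each layer the distance to the closest ionisation cluster is exponentially distributed, and the detector time is the fastest layer signal.

```python
import random

def fastest_time_rms(n_layers, total_gap_mm=4.0, lam_per_mm=2.5,
                     v_drift_mm_per_ns=0.07, n_events=20000, seed=1):
    """Toy model of the FTM timing: in every layer the distance to the
    closest ionisation cluster is exponential with density lam_per_mm;
    the detector time is the fastest signal among all layers.  Events
    in which no layer contains a cluster are skipped.  Returns the RMS
    of the fastest time in ns."""
    rng = random.Random(seed)
    gap = total_gap_mm / n_layers
    times = []
    for _ in range(n_events):
        best = None
        for _ in range(n_layers):
            d = rng.expovariate(lam_per_mm)
            if d < gap and (best is None or d < best):
                best = d
        if best is not None:
            times.append(best / v_drift_mm_per_ns)
    mean = sum(times) / len(times)
    return (sum((t - mean) ** 2 for t in times) / len(times)) ** 0.5

r1, r8 = fastest_time_rms(1), fastest_time_rms(8)
print(r1 / r8)  # close to 8: the RMS scales like 1/n_layers here
```

In this ideal toy the scaling stays linear; the deviations seen in the full simulation come from collection-efficiency effects that the toy does not model.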
Time Resolution as a Function of FTM Number of Layers
Time resolution (under the assumption of total signal transparency) was then plotted as a function of the number of FTM layers for a drift field of 3 kV/cm and an amplification field of 120 kV/cm. Figure 4 shows that the time resolution is inversely proportional to the number of layers, as expected from Equation (1). The resolution is also in good agreement with the experimental results published in [8]. However, a certain deviation from the linear behavior is observed with an increasing number of layers. It will be demonstrated in Section 3.3 that this deviation is due to a reduced electron collection efficiency in thin layers. On the other hand, hole-3 shows a better timing performance with a less pronounced deviation than both other configurations: hole-3 has the widest hole diameter at the top and is therefore expected to have the best collection efficiency.
For a 32-layer FTM, a time resolution of 173 ps is obtained with the hole-3 configuration in Ar/CO2 (70/30), ∼30% better than both other configurations. No timing simulations were performed beyond 32 layers, as the technical realisation would become increasingly difficult and the reduced drift gap would be too thin for efficient electron collection in the holes.
Time Resolution with Different Gas Mixtures
Time resolution was simulated with two additional gas mixtures: Ar/CO2/CF4 (45/15/40) and Ar/CO2/ISO-C4H10 (65/28/7). Both gases have shown improved time resolution in GEM-based detectors [13,14]. The time resolution is compared in Figure 5 for the three gas mixtures at an equivalent total charge of ∼3.9 × 10^3 electrons (which corresponds to an amplification field of 120 kV/cm for Ar/CO2, 124.66 kV/cm for Ar/CO2/CF4, and 91.67 kV/cm for Ar/CO2/ISO-C4H10). The Ar/CO2/CF4 mixture achieves the best timing performance, with a resolution of 160 ps in a 32-layer FTM. However, the deviation from the linear behavior is more pronounced with Ar/CO2/ISO-C4H10 and Ar/CO2/CF4 than with Ar/CO2, which will be investigated in a future work.
Gain as a Function of the Electric Fields
A scan of the gain as a function of the drift and amplification fields was performed in a 4 mm thick single FTM layer using the AvalancheElectron function in Garfield++. For the three hole configurations, both the total charge produced in the avalanches and the effective gain (defined as the number of electrons reaching the readout plane) were simulated in Ar/CO2 (70/30) for drift fields between 0.5 and 5 kV/cm (Figure 6) and for amplification fields between 90 and 130 kV/cm (Figure 7). The hole-2 configuration exhibits a significantly higher gain than the two other configurations, which show similar total and effective gains. Indeed, the gain is expected to be higher for hole diameters closer to the foil thickness [2] and to increase with narrower holes due to the higher fields inside them [9].
Gain with Different Gas Mixtures
The total charge was compared for the three gas mixtures in hole-3 configuration at E D = 3 kV/cm. The use of Ar/CO 2 /ISO-C 4 H 10 (65/28/7) results in a gain more than one order of magnitude higher when compared to both other gas mixtures at even lower fields, as shown in Figure 8.
FTM Efficiency
A detailed analysis of the efficiency is required in order to analyze the performance of the FTM and to understand the deviation of the time resolution from the expected linear behavior observed for a high number of layers, which is less pronounced with the hole-3 configuration. We first propose the following two definitions of efficiency used in this section: (1) the collection efficiency, defined as the number of electrons entering a hole divided by the total number of initial electrons, and (2) the detection efficiency, defined as the number of electrons reaching the readout plane divided by the total number of initial electrons; the detection efficiency can be considered the fraction of electrons that contribute to the signal. In both cases, we simulate a minimum of 2000 electrons in Ar/CO2 (70/30) and trace them using the DriftElectron function in Garfield++. Figure 9 shows the collection and detection efficiencies as a function of the drift field, obtained by simulating electrons drifting in a 4 mm thick single-layer FTM at a 120 kV/cm amplification field. The behavior of the FTM efficiency as a function of the drift field is generally similar to the GEM efficiency found in [9]. While the collection efficiency is similar for all hole configurations below 4 kV/cm, it tends to decrease with decreasing top diameter for E_D > 4 kV/cm. Moreover, the hole-3 configuration shows an improved detection efficiency, 5-10% higher than hole-2 and up to 20% higher than hole-1, which indicates that the bottom diameter might also affect the fields inside the hole. Indeed, while both efficiencies show the same functional behavior, the detection efficiency declines with decreasing bottom diameter: electrons are lost on the internal kapton surface inside the hole.
Efficiency as a Function of the Amplification Field
Similarly, collection and detection efficiencies were computed as a function of the amplification field at a fixed drift field of 3 kV/cm, as shown in Figure 10. While the collection efficiency is similar for all configurations, hole-3 configuration exhibits an improved detection efficiency, being almost 15% higher than hole-1 at 120 kV/cm. This further indicates that, in narrower holes, more electrons get attached to the walls, which results in a loss in the signal.
Muon Efficiency as a Function of the Drift Gap Thickness
The deviation from the linear behavior observed in Figure 4 can be explained by the deterioration of the efficiency in thin layers. Figure 11 shows the collection efficiency of primary ionization electrons produced by the passage of a 50 GeV/c muon in Ar/CO2 (70/30), as a function of the FTM layer thickness. While the efficiency is maximal in the 4 mm thick layer, it decreases to below 40% in the 0.125 mm thick layer (which corresponds to a 32-layer FTM). Moreover, the hole-3 configuration shows a 10% higher efficiency compared to hole-1 and a 5% higher efficiency compared to hole-2, further confirming the time resolution variations observed in Figure 4 for a large number of layers. Figure 11 also shows the theoretical maximum expected from the Poisson distribution. Multiplying the primary ionization cluster density (∼3.75 cls/mm in Ar/CO2 (70/30), from Figure 2) by the gap thickness G_D gives the mean number of primary electrons in the gap, and the maximum theoretical efficiency is the probability of having at least one:

ε_max = P(x ≥ 1) = 1 − e^(−λ·G_D),

where x is the event of having at least one primary electron in the gap and P(x) is the probability of this event.
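Using the quoted cluster density of ∼3.75 cls/mm, this Poisson limit reproduces the efficiencies discussed above; a minimal numerical check:

```python
import math

def max_efficiency(cluster_density_per_mm, gap_mm):
    """Poisson limit on muon efficiency: probability of at least one
    primary ionization electron in a drift gap of the given thickness."""
    return 1.0 - math.exp(-cluster_density_per_mm * gap_mm)

lam = 3.75  # clusters/mm in Ar/CO2 (70/30), from the text
print(f"{max_efficiency(lam, 4.0):.4f}")    # 4 mm layer: essentially 100%
print(f"{max_efficiency(lam, 0.125):.4f}")  # 0.125 mm layer: ~0.37
```

For the 4 mm gap the limit is effectively 100%, while for a 0.125 mm layer it drops to ∼37%, consistent with the "below 40%" figure quoted for the 32-layer case.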
Efficiency as a Function of the Initial Electron Position
The electron collection efficiency was computed as a function of the initial x-position of the simulated electron, with fixed y and z positions, in 0.5, 0.25, and 0.125 mm thick single-layer FTMs. Figure 12 shows that the efficiency is maximal when the electron is created around the center of the hole and declines when the electron is created away from the center. There is more than a 10% loss in collection efficiency in a 0.25 mm layer and more than a 15% loss in a 0.125 mm layer when compared to a 0.5 mm thick layer. This result might contribute to understanding the deviation of the time resolution from the model expectations in very thin layers, as observed in Figure 4 and in [7].
Summary and Conclusions
In this paper, we discussed the performance of the FTM detector, which was recently introduced with the aim of improving the timing performance of MPGD detectors, a crucial parameter for future collider experiments and medical applications.
Time resolution, gain, and efficiency have been investigated using the ANSYS and Garfield++ codes. Three hole configurations with different diameters and three gas mixtures were tested. The simulations show that a time resolution of 173 ps can be reached in a 32-layer FTM with the hole-3 configuration operated in an Ar/CO2 (70/30) gas mixture at a drift field of 3 kV/cm and an amplification field of 120 kV/cm. This value is ∼30% higher than the theoretical value of 134 ps expected from Equation (1). This deviation of the time resolution from the expected linear behavior at a high number of layers seems to be mainly due to an increasing loss of efficiency with decreasing layer thickness. The time resolution is worse with the hole-1 and hole-2 configurations (233 ps and 225 ps, respectively), also due to efficiency deterioration. The efficiency appears to be affected by both the top and bottom diameters: while the top diameter influences how electrons drift into the hole, the bottom diameter may affect the field inside the hole and thus the amplification. This assumption can be verified by studying the field intensity inside the hole, which will be investigated in a future work.
Finally, two other gas mixtures were tested: Ar/CO2/CF4 (45/15/40) and Ar/CO2/ISO-C4H10 (65/28/7). While Ar/CO2/ISO-C4H10 resulted in a much higher gain with no benefit in terms of time resolution, Ar/CO2/CF4 showed a slightly better timing performance (∼160 ps with Ar/CO2/CF4 versus 206 ps with Ar/CO2/ISO-C4H10 for a 32-layer FTM) along with a lower gain. Therefore, the use of Ar/CO2/CF4 seems to bring only a limited benefit to the FTM, especially in view of the increasing restrictions concerning the use of fluorine-based gases [15].
In conclusion, the results presented in this work suggest that the FTM detector performs best in terms of timing and efficiency with the hole-3 configuration (100 microns at the top and 70 microns at the bottom), operated in an Ar/CO2 (70/30) gas mixture at 3 kV/cm drift and 120 kV/cm amplification fields, which provides a gain of ∼10^4. These results need to be further confirmed and developed with additional simulations and experimental tests.
Predictive Factors for the Formation of Viable Embryos in Subfertile Patients with Diminished Ovarian Reserve: A Clinical Prediction Study
This study aims to construct and validate a nomogram for predicting blastocyst formation in patients with diminished ovarian reserve (DOR) during in vitro fertilization (IVF) procedures. A retrospective analysis was conducted on 445 DOR patients who underwent in vitro fertilization (IVF)/intracytoplasmic sperm injection (ICSI) at the Reproductive Center of Yulin Maternal and Child Health Hospital from January 2019 to January 2023. A total of 1016 embryos were cultured for blastocyst formation, of which 487 were usable blastocysts and 529 did not form usable blastocysts. The embryos were randomly divided into a training set (711 embryos) and a validation set (305 embryos). Relevant factors were initially identified through univariate logistic regression analysis based on the training set, followed by multivariate logistic regression analysis to establish a nomogram model. The prediction model was then calibrated and validated. Multivariate stepwise forward logistic regression analysis showed that female age, normal fertilization status, embryo grade on D2, and embryo grade on D3 were independent predictors of blastocyst formation in DOR patients. The Hosmer–Lemeshow test indicated no statistical difference between the predicted probabilities of blastocyst formation and actual blastocyst formation (P > 0.05). These results suggest that female age, normal fertilization status, embryo grade on D2, and embryo grade on D3 are independent predictors of blastocyst formation in DOR patients. The clinical prediction nomogram constructed from these factors has good predictive value and clinical utility and can provide a basis for clinical prognosis, intervention, and the formulation of individualized medical plans.
Introduction
Infertility affects approximately 10-15% of couples worldwide, with female factors accounting for approximately 40% of all cases [1]. One of the common causes of female infertility is diminished ovarian reserve (DOR), also known as premature ovarian insufficiency or ovarian function decline, which refers to a reduction in the quantity or quality of oocytes in the ovaries [2]. DOR refers to the decline in the ability of the ovaries to produce oocytes and the reduction in oocyte quality in women under the age of 40 due to various reasons, leading to a decrease in fertility and a deficiency in sex hormones. According to the Bologna criteria, DOR is defined as having low levels of anti-Müllerian hormone (AMH < 0.5-1.1 ng/ml), a low antral follicle count (AFC < 5-7), and/or elevated baseline follicle-stimulating hormone (FSH) levels in women of reproductive age [3]. Potential causes of DOR mainly include autoimmune diseases, hereditary chromosomal and genetic disorders, environmental hazards, and iatrogenic factors. Meanwhile, Park SU et al.
have elaborated on the molecular mechanisms associated with the onset of DOR, considering gene mutations and errors in meiotic recombination, as well as related factors such as DNA damage, telomere changes, reactive oxygen species, and mitochondrial dysfunction [4]. How to achieve clinical pregnancy in DOR patients is a challenge for assisted reproductive technology, with an incidence rate of 10-30% [5]. Among the infertile population, there has been a trend of increasing incidence and younger age in recent years. With improvements in sequential embryo culture techniques, blastocyst culture technology has become more mature. Blastocyst culture undergoes a developmental process involving cell fusion and the formation and expansion of the blastocoel, eliminating some embryos with poor developmental potential. Additionally, studies by Papanikolaou EG reported that blastocyst transfer is more conducive to increasing the synchrony between endometrial and embryo development, improving clinical pregnancy rates and birth rates while also reducing the risk of multiple pregnancies [6]. However, there is a risk of culture failure in blastocyst culture, and various studies report that the rate of blastocyst formation is 40-60% across different age groups and culture methods [7,8]. Therefore, in clinical practice, some patients may have no transferable blastocysts, and the risk of having no usable blastocysts after sequential culture of embryos is higher in DOR patients, causing severe economic and psychological burdens.
Previous studies have explored the predictive factors for embryo formation in infertile populations, but limited research has specifically addressed this issue in DOR patients. Mi Z et al. suggested that D2 cleavage-stage embryos with four cells have the highest rate of blastocyst formation [9]; Bassil R et al. considered the diameter of the oocyte an important factor affecting blastocyst formation, with embryos formed from oocytes measuring between 105.96 and 118.69 μm in diameter having the highest probability of forming high-quality D5 blastocysts [10]; and Yang SH et al., who used time-lapse imaging systems to observe embryo morphokinetics and morphokinetic parameters, found that the time of pronuclear fading after fertilization and the timing of blastomere division and abnormal division patterns are key factors affecting blastocyst formation [11]. However, there is a lack of research on predicting blastocyst formation by combining clinical data, such as patient age and endocrine status, with embryo morphokinetic parameters. In this study, we conducted a clinical prediction study to investigate the factors associated with the formation of usable embryos in DOR patients undergoing ART procedures.
Patient Data
A retrospective analysis was conducted on 445 patients with diminished ovarian reserve (DOR) who underwent in vitro fertilization/intracytoplasmic sperm injection (IVF/ICSI) at the Reproductive Center of Yulin Maternal and Child Health Hospital from January 2019 to January 2023, involving a total of 1016 embryos for blastocyst culture. The inclusion criteria are as follows: (1) antral follicle count (AFC) < 6 in both ovaries, serum anti-Müllerian hormone (AMH) levels < 0.5-1.1 ng/ml, or baseline follicle-stimulating hormone (FSH) levels ≥ 10 IU/L for two consecutive menstrual cycles; (2) patients undergoing IVF/ICSI and blastocyst culture; and (3) patients fully informed about the IVF embryo transfer process and who have signed an informed consent form. The exclusion criteria are as follows: (1) patients receiving reproductive assistance through ICSI with testicular sperm aspiration (TESA) or round spermatid injection (ROSI); (2) those without oocyte retrieval on the day of oocyte pick-up, with complete fertilization failure, or with no cleavage embryos available for blastocyst culture on day 3; (3) either partner suffering from severe psychiatric disorders, acute urinary or genital infections or sexually transmitted diseases, or hereditary diseases deemed inappropriate for procreation under the "Maternal and Infant Health Care Law" of the People's Republic of China, and for which prenatal diagnosis or preimplantation genetic diagnosis is currently unfeasible. The study was conducted following the approval of the Medical Ethics Committee of Yulin Maternal and Child Health Hospital, Guangxi.
Data Collection
Key observation indicators were collected from the hospital's reproductive medical record management system, encompassing complete clinical and laboratory data for both male and female partners from January 2019 to January 2023. The following demographic and clinical data were obtained: maternal age, paternal age, infertility duration, infertility type, number of blastocysts cultured, normal fertilization, the blastomere number of D2, embryo fragmentation of D2, day 3 embryo grade, fusion embryos, the blastomere number of D3, embryo fragmentation of D3, day 2 embryo grade, ovarian stimulation protocol, type of fertilization, maternal BMI, basal FSH, basal LH, basal PRL, basal E2, basal T, basal P, AMH, AFC, total Gn dosage, duration of stimulation, initial FSH dosage, serum E2 level on the HCG trigger day, serum LH level on the HCG trigger day, serum P level on the HCG trigger day, volume of semen after treatment, semen density after treatment, and semen recovery rate. The outcome variable, usable blastocyst formation, was also collected.
Statistical Analysis
The dataset collected was randomly divided into training and validation cohorts at a ratio of 7:3, and the variables were compared. Non-normal data were presented as median (interquartile range). In the univariate analysis, the chi-square test or Fisher's exact test was used to analyze the categorical variables, while the Student's t-test or rank-sum test was used to examine the continuous variables. In the training cohort, least absolute shrinkage and selection operator (LASSO) logistic regression analysis was used for multivariate analysis to screen the independent risk factors and build a prediction nomogram for usable blastocyst formation. The performance of the nomogram was assessed using the receiver operating characteristic (ROC) curve and calibration curve, with the area under the ROC curve (AUC) ranging from 0.5 (no discrimination) to 1 (complete discrimination). A decision curve analysis (DCA) was also performed to determine the net benefit threshold of prediction. Results with a p-value of < 0.05 were considered significant. All statistical analyses were performed using R software (version 4.2.2).
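The pipeline above (7:3 split, LASSO-style variable selection, refit, ROC validation) can be sketched as follows. The study itself used R 4.2.2; this Python analogue runs on synthetic stand-in data, and every variable, coefficient, and parameter choice below is an illustrative assumption, not a value from the study.

```python
# Schematic analogue of the described analysis pipeline on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1016                                  # embryos, as in the study
X = rng.normal(size=(n, 5))               # 5 stand-in predictors
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1]     # only two truly informative
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# 7:3 split into training and validation cohorts
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

# L1-penalized (LASSO-style) logistic regression for variable selection
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
lasso.fit(X_tr, y_tr)
selected = np.flatnonzero(lasso.coef_[0])  # predictors with nonzero weight

# Refit a plain logistic model on the selected predictors, validate by AUC
model = LogisticRegression().fit(X_tr[:, selected], y_tr)
auc = roc_auc_score(y_va, model.predict_proba(X_va[:, selected])[:, 1])
print(selected.tolist(), round(auc, 2))
```

The L1 penalty shrinks uninformative coefficients toward zero, which is the mechanism LASSO uses to reduce a long candidate list to a handful of predictors before the multivariate refit.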
Patient Characteristics
General Characteristics
This retrospective study included records of 1016 blastocyst cultures, which were randomly divided into a training set and a validation set at a ratio of 7:3. The baseline demographic and clinical characteristics of the study population are summarized in Table 1. The characteristics include maternal age, paternal age, infertility duration, infertility type, number of blastocysts cultured, normal fertilization, the blastomere number of D2, embryo fragmentation of D2, day 3 embryo grade, fusion embryos, the blastomere number of D3, embryo fragmentation of D3, day 2 embryo grade, ovarian stimulation protocol, type of fertilization, maternal BMI, basal FSH, basal LH, basal PRL, basal E2, basal T, basal P, AMH, AFC, total Gn dosage, duration of stimulation, initial FSH dosage, serum E2 level on the HCG trigger day, serum LH level on the HCG trigger day, serum P level on the HCG trigger day, volume of semen after treatment, semen density after treatment, and semen recovery rate. Overall, the baseline characteristics were generally well balanced between the training cohort and the internal test cohort, with nonsignificant p-values for most comparisons, suggesting that the two cohorts were suitable for predictive research.
LASSO Regression Model
The candidate predictors, maternal age, paternal age, infertility duration, infertility type, number of blastocysts cultured, normal fertilization, the blastomere number of D2, embryo fragmentation of D2, day 3 embryo grade, fusion embryos, the blastomere number of D3, embryo fragmentation of D3, day 2 embryo grade, ovarian stimulation protocol, type of fertilization, maternal BMI, basal FSH, basal LH, basal PRL, basal E2, basal T, basal P, AMH, AFC, total Gn dosage, duration of stimulation, initial FSH dosage, serum E2 level on the HCG trigger day, serum LH level on the HCG trigger day, serum P level on the HCG trigger day, volume of semen after treatment, semen density after treatment, and semen recovery rate, were included in the original model and then reduced to 9 potential predictors using LASSO regression analysis performed in the training cohort. The coefficients are shown in the following table, and a coefficient profile is plotted in Fig. 1. A cross-validated error plot of the LASSO regression model is shown in Fig. 2. The most regularized and parsimonious model, with a cross-validated error within one standard error of the minimum, included 9 variables. As shown in Fig. 3, the ROC analysis of the abovementioned variables yielded AUC values greater than 0.5.
Multivariate Logistic Analyses
Using multivariate logistic regression analysis, further analysis was conducted on the optimal matching factors identified by LASSO regression. The results revealed that five variables (female age, normal fertilization, day 2 cleavage-stage embryo grading, day 3 cleavage-stage embryo grading, and method of fertilization) were independent predictors of blastocyst formation in patients with diminished ovarian reserve (DOR), as shown in Table 2.
Construction of a Nomogram Predicting Usable Blastocyst Formation in DOR Patients
Based on the results of the multivariate logistic regression analysis, a nomogram was constructed to predict the formation of usable blastocysts in DOR patients, incorporating female age, normal fertilization, day 2 cleavage-stage embryo grading, day 3 cleavage-stage embryo grading, and method of fertilization. The model is shown in Fig. 4. A nomogram is a graphical calculating device: a two-dimensional diagram designed to allow the approximate graphical computation of a function. It is based on the principles of multivariable regression analysis and integrates multiple prognostic indicators by employing scaled lines drawn proportionally on the same plane to express the interrelationships among the variables in a predictive model. Each variable is represented by a line segment with marked scales indicating its range of possible values, while the length of the line segment reflects the impact of that factor on the outcome event. As illustrated in Fig. 4, the age of patients with diminished ovarian reserve (DOR) is the most significant factor affecting the formation of usable blastocysts, followed in importance by day 3 embryo grade, normal fertilization, type of fertilization, and day 2 embryo grade.
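As a rough illustration of the scaling principle behind a nomogram, each predictor's maximal contribution (coefficient times value range) can be rescaled so that the most influential predictor spans 0-100 points. All coefficients, names, and value ranges below are invented for illustration; they are not the fitted values of this study.

```python
# Hypothetical sketch of how a nomogram assigns point scales.

def nomogram_points(betas, ranges):
    """Map logistic-regression coefficients to 0-100 nomogram scales.

    betas:  {name: coefficient}
    ranges: {name: (min_value, max_value)} for each predictor
    Returns the maximum points each predictor can contribute, with the
    most influential predictor normalized to 100.
    """
    spans = {k: abs(betas[k]) * (ranges[k][1] - ranges[k][0]) for k in betas}
    widest = max(spans.values())
    return {k: 100.0 * v / widest for k, v in spans.items()}

# Invented example values (NOT the study's fitted coefficients):
betas = {"female_age": -0.12, "d3_grade": -0.9, "normal_fert": 0.8}
ranges = {"female_age": (25, 45), "d3_grade": (1, 3), "normal_fert": (0, 1)}
print(nomogram_points(betas, ranges))
```

The predictor with the largest coefficient-times-range product gets the longest axis, which is why, in the study's Fig. 4, female age dominates the point scale.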
Analysis of Calibration of the Nomogram for Predicting Usable Blastocyst Formation in DOR Patients
The area under the receiver operating characteristic (ROC) curve (AUC) for the model constructed from the training set was 0.832. Internal validation of the nomogram model was performed, and the AUC for the validation set was 0.793, as seen in Fig. 5. In the nomogram prediction model, the individual AUC for each of the five included factors was ≤ 0.63. The calibration plots of the nomogram in the different cohorts, plotted in the following figures, demonstrate a good correlation between the observed and predicted usable blastocyst formation. The results showed that the original nomogram was still valid for use in the validation sets, and the calibration curve of this model was relatively close to the ideal curve, indicating that the predicted results were consistent with the actual findings.
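The idea behind the calibration comparison (and the decile grouping that underlies the Hosmer-Lemeshow statistic mentioned in the abstract) can be illustrated with a small synthetic example that bins predictions and compares the mean predicted probability with the observed event rate:

```python
import numpy as np

def calibration_table(y_true, y_prob, n_bins=10):
    """Group predictions into equal-frequency bins and compare the mean
    predicted probability with the observed event rate in each bin."""
    order = np.argsort(y_prob)
    bins = np.array_split(order, n_bins)
    return [(float(np.mean(y_prob[idx])), float(np.mean(y_true[idx])))
            for idx in bins]

# Synthetic, well-calibrated example: outcomes drawn from the stated probs
rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, size=5000)
y = (rng.random(5000) < p).astype(int)

for predicted, observed in calibration_table(y, p):
    print(f"predicted {predicted:.2f}  observed {observed:.2f}")
```

A well-calibrated model yields near-equal values in each row, which is what a calibration curve hugging the ideal diagonal expresses graphically.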
Decision Curve Analysis
The following figure displays the DCA curves related to the nomogram. A high-risk threshold probability indicates the chance of significant discrepancies in the model's prediction when clinicians encounter major flaws while utilizing the nomogram for diagnostic and decision-making purposes. This research shows that the nomogram offers substantial net benefits for clinical application through its DCA curve (Figs. 6, 7, 8 and 9).
Discussion
As reported by Awonuga AO et al., endometrial receptivity and embryo quality are key factors influencing embryo implantation. Despite numerous studies suggesting that enhancing endometrial blood flow or improving the intrauterine environment may increase live birth rates, the authors argue that this perspective lacks clear, high-quality evidence from systematic reviews and believe that improving embryo quality might be more effective than endometrial treatment [12]. Arab S et al. reported that in frozen-thawed embryo transfer, transferring two low-quality blastocysts does not increase clinical pregnancy rates but may raise the risk of multiple pregnancies; therefore, single blastocyst transfer is still recommended [13]. This underscores the importance of blastocyst culture in guiding assisted reproductive technology (ART) transfer strategies. However, patients with diminished ovarian reserve (DOR) have fewer available oocytes, hence a reduced number of viable embryos and a higher risk of not forming usable blastocysts after further culture. The absence of transferable embryos not only imposes an economic burden on DOR patients but also causes severe psychological stress. This study indicates that age is a risk factor for the formation of viable blastocysts in patients with diminished ovarian reserve (DOR). La Marca A [14] suggests that age is a significant determinant affecting the outcomes of assisted reproductive technologies (ART), primarily because increasing age directly leads to a decline in ovarian reserve function and oocyte quality. Additionally, structural and functional abnormalities in the oocyte's spindle apparatus and mitochondria may result in atypical cell division, consequently increasing the rate of chromosomal abnormalities in embryos. Research by Soler A et al.
[15] reveals that patients over 35 years of age have approximately a 25% higher rate. This may be related to the inherent impact of the ICSI technique on the developmental potential of embryos [18]. Firstly, during the IVF process, the zona pellucida of the oocyte exerts a relative selection on sperm during natural sperm-oocyte binding. ICSI bypasses this selective step, potentially allowing the fertilization of oocytes by morphologically normal but compromised sperm, which could further affect embryo development [19]. Additionally, ICSI necessitates the injection of a certain amount of exogenous substances due to the requirement of sperm immobilization, potentially influencing the developmental potential and the safety of the offspring [20]. Secondly, the ICSI technique increases the duration of extra-embryonic manipulation compared to conventional IVF, due to the need for denudation and microinjection. The consequent changes in temperature and osmotic pressure may directly affect the embryo's developmental potential. Thirdly, ICSI demands high technical proficiency from the embryologist. There is a risk of damage to the aster, spindle, microtubules, and microfilaments due to operator factors, which could lead to abnormal embryo division or even degeneration, severely compromising the developmental potential of the embryo [21].
This study reveals that the day 2 cleavage-stage embryo grading and day 3 cleavage-stage embryo grading are influential factors for the formation of viable blastocysts in patients with diminished ovarian reserve (DOR). The grading of D2 and D3 cleavage-stage embryos in this study is based on the Istanbul consensus criteria published by ESHRE in 2011 [22], while the blastocyst grading refers to the Gardner blastocyst scoring system [23]. Research by Wong et al. suggests that combining time-lapse imaging analysis with gene expression profiling to evaluate day 2 embryos can predict embryo development and blastocyst formation [24]. Significant correlations exist between the quality of day 3 cleavage-stage embryos and blastocyst formation. Studies have shown that when the number of blastomeres on day 3 is fewer than five, the developmental potential of the embryo is reduced due to delayed development, subsequently affecting blastocyst formation [25]. High-quality day 3 embryos with 7-9 blastomeres, low fragmentation, normal cell size, good uniformity, and an appropriate developmental stage, and without abnormalities such as multinucleation, smooth endoplasmic reticulum, and vacuolization, have a higher potential to develop into blastocysts. This aligns with our finding that day 3 embryo grading is a determinant of blastocyst formation. Awadalla M's research suggests that day 3 cleavage-stage embryos with eight blastomeres and less than 10% fragmentation are more likely to form blastocysts and have higher live birth rates [26]. However, embryologists' assessments of embryonic structures are subject to subjective factors, resulting in varying degrees of inter-observer difference. Eastick J.
proposes that the introduction of time-lapse systems into laboratories allows for continuous monitoring of developing embryos, enabling the observation and discovery of dynamic structures, such as cytoplasmic strings, which could be crucial information for predicting the developmental potential of embryos [27].
Achieving successful pregnancy in patients with diminished ovarian reserve (DOR) is a particularly challenging problem in the field of assisted reproductive technologies (ART). Currently, a wealth of research is exploring various techniques to improve the quality and increase the number of oocytes in DOR patients, including mitochondrial transfer, activation of primordial follicles, in vitro culture of follicles, and the regeneration of oocytes from various stem cells [28]. Given the precious nature of embryos in DOR patients, guiding them on how to make optimal use of the obtained embryos is an urgent clinical issue. This study identified five factors impacting the formation of viable blastocysts in DOR patients: female age, normal fertilization status, day 2 cleavage-stage embryo grading, day 3 cleavage-stage embryo grading, and fertilization method. A predictive model based on these factors, represented by a nomogram, demonstrates good clinical predictive value and efficacy for clinical interventions and personalized medicine in DOR patients. However, this study has certain limitations. First, it is a single-center, retrospective study with a relatively limited sample size, lacking external validation from multicenter trials and prospective studies with larger cohorts. Second, due to the numerous uncertain factors affecting blastocyst formation and the scarcity of clinical models for viable blastocyst formation in DOR patients, there is a lack of horizontal comparison with other models.
LASSO regression was utilized to identify factors affecting the formation of viable blastocysts in DOR patients. The results revealed nine factors: female age, cell count of day 2 cleavage-stage embryos, grade of day 2 cleavage-stage embryos, cell count of day 3 cleavage-stage embryos, grade of day 3 cleavage-stage embryos, normal fertilization, fertilization method, post-treatment semen volume, and post-treatment semen density. Subsequent multivariate logistic regression analysis showed that female age, normal fertilization, day 2 cleavage-stage embryo grade, day 3 cleavage-stage embryo grade, and fertilization method were significant factors affecting the formation of viable blastocysts in DOR patients. To enhance the applicability of this model in clinical practice, the study included a nomogram for the identified factors.
Fig. 6 Calibration curve of the nomogram prediction mode for the training cohort
Fig. 8 Decision curve analysis of the nomogram of the training cohort
Table 1
Patient demographics and baseline characteristics
|
v3-fos-license
|
2019-03-08T14:17:14.481Z
|
2019-02-01T00:00:00.000
|
67857093
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3390/s19030737",
"pdf_hash": "00246d5f9151e9bf541b858fd35c281c5802356b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1023",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"sha1": "00246d5f9151e9bf541b858fd35c281c5802356b",
"year": 2019
}
|
pes2o/s2orc
|
A Non-Invasive Medical Device for Parkinson’s Patients with Episodes of Freezing of Gait
A critical symptom of Parkinson’s disease (PD) is the occurrence of Freezing of Gait (FOG), an episodic disorder that causes frequent falls and consequential injuries in PD patients. There are various auditory, visual, tactile, and other types of stimulation interventions that can be used to induce PD patients to escape FOG episodes. In this article, we describe a low cost wearable system for non-invasive gait monitoring and external delivery of superficial vibratory stimulation to the lower extremities triggered by FOG episodes. The intended purpose is to reduce the duration of the FOG episode, thus allowing prompt resumption of gait to prevent major injuries. The system, based on an Android mobile application, uses a tri-axial accelerometer device for gait data acquisition. Gathered data is processed via a discrete wavelet transform-based algorithm that precisely detects FOG episodes in real time. Detection activates external vibratory stimulation of the legs to reduce FOG time. The integration of detection and stimulation in one low cost device is the chief novel contribution of this work. We present analyses of sensitivity, specificity and effectiveness of the proposed system to validate its usefulness.
Introduction
Parkinson's disease (PD) is a neurodegenerative chronic illness that affects movement, making it difficult for patients to comfortably perform tasks of everyday life such as: walking, stair climbing, writing, eating, etc. The World Health Organization rates PD as the second most common neurodegenerative disorder. According to the Parkinson's Disease foundation, approximately 10 million people suffer from this disease worldwide [1,2]. People with PD experience both motor and non-motor symptoms. Typical motor symptoms during the early stages of PD are resting tremors, rigidity, and bradykinesia.
PD imposes a chronic burden not only on the patients but also on their personal and social environments. The difficulty to control movement produced by PD has a negative impact on the social and psychological behavior of the patient, who feels isolated and useless to perform common and simple tasks. Other important motor symptoms present during the disease's middle stage include cramping (dystonia), dyskinesia, loss of postural reflexes, and Freezing of Gait (FOG). FOG is a major motor symptom of PD that shows up during the advanced stages of the disease. It is characterized by a brief episode of involuntary absence of locomotion, i.e., a sensation of being stuck in place, which is experienced by the patient especially when trying to initiate a step or when navigating through or turning around obstacles. FOG episodes cause serious difficulties in mobility and balance that significantly increase the risk of falling, thereby potentially producing serious injury, including bone fracture. Most PD patients affected by FOG usually are aged between 60 and 80 years. Therefore, if injured, these patients require permanent assistance and care by relatives as well as by specialized healthcare personnel.
Being a disease presently without a cure, treatments for PD are aimed at suppressing or ameliorating specific PD symptoms. Currently used symptom treatment types include pharmacological (medication), invasive (e.g., surgical, deep brain stimulation), and non-invasive and minimally invasive (e.g., transcranial and transcutaneous stimulation) interventions. Other more general treatment types consist of lifestyle modifications such as diet and exercise.
The use of non-invasive sensors is an effective approach for monitoring [3] gait and detecting motor symptoms such as FOG. Non-invasive sensors used for this purpose can be classified as either portable or stationary. Portable sensors may be attached to clothing [4,5] or to adequate supports [6][7][8][9][10][11][12]. They have the advantage of not limiting the PD patient's travel space. Stationary (a.k.a. environmental) sensors are distributed at fixed positions throughout the personal environmental space of the PD patient [13] allowing the acquisition of a wider range of characteristics.
Both types can be used for the detection and prediction of FOG episodes, depending on the type of processing their output data is subjected to. Processing tools and techniques used for this purpose include: Power Spectral Density (PSD) [5], root mean square error (RMSE) [6], Fast Fourier Transform (FFT) [7,9], Artificial Intelligence (AI) [10,12,14] and Discrete Wavelet Transform (DWT) [14]. The use of some of these processing tools within common and easily accessible technologies, such as mobile smartphones, has already achieved sensitivities and specificities greater than 70% [7,9,10,13].
Symptom detection sensors and data processing algorithms are used, in conjunction with external (non-invasive) sensory stimulation, for the treatment of FOG. Among the most significant sensory types of external stimulation interventions are the auditory [7,11], visual [15] and tactile [4,6,10] modalities. Although early auditory or vibratory stimulation per se is not known to prevent FOG [16,17], it is nonetheless able to reduce the length of FOG episodes [6,11,16,18]. With this in mind, we present and describe here a novel low cost, compact, comfortable and integrated (detection + stimulation) real-time system intended to induce prompt resumption of gait during FOG episodes. The proposed integrated system consists of two light-weight devices, which are attached to the lower limbs of the PD patient, strategically placed to avoid discomfort. The devices sense gait and send the resulting data to be processed through a DWT-based Java-encoded algorithm, designed to detect FOG episodes, in a mobile Android application. As soon as a FOG episode is detected, the system generates and applies a vibratory stimulation to help the PD patient quickly regain gait, thus reducing the probability of serious injury.
Parkinson's Disease
PD is a multi-systemic neurodegenerative disorder that affects the human nervous system, specifically the dopamine-producing ("dopaminergic") neurons in the substantia nigra region of the brain. Dopamine is essential for sending messages to control and coordinate movement [19]. It acts as a messenger between the substantia nigra and the striatum, an area of the brain responsible for controlled smooth movement [19,20], as shown in Figure 1.
As was already mentioned, the most characteristic motor symptoms of the disease are resting tremor, limb rigidity, bradykinesia, and postural instability [1,2]. The diagnosis of PD depends on the presence of one or more of these four motor symptoms, as well as on the presence of other motor and non-motor secondary symptoms [5,21], such as changes in writing (micrographia), reduction of facial expression, loss of arm swing during gait [22], constipation, olfactory dysfunction, psychiatric symptoms (such as apathy, anxiety, depression, dementia and psychosis), sleep disturbances, hypophonia, drooling (due to reduced swallowing) and pain [20,23,24]. Early studies about PD were mostly aimed at describing movement and motor disorders and at differentiating the stages of the disease. The degree of PD can be estimated using a widely accepted metric, the Unified Parkinson's Disease Rating Scale (UPDRS) [26]. The UPDRS value lies in the range from 0 to 176, where 0 represents the healthy condition and 176 represents total disability. This scale is based on the following three factors: 1. Mood, mental and behavioral, 2. Activities of daily living, 3. Motricity factor, which ranges from 0 (symptom-free condition) to 108 (severe motor condition) [27].
Symptoms and signs of PD vary from person to person, often beginning on one side of the body and usually continuing to worsen on that side, as symptoms begin to affect both sides. Evolution may be slow in some patients while it can be quicker in others.
Drugs used to reduce motor symptoms in PD can cause neuropsychiatric disorders; among them, dopaminergic receptor agonists produce such disorders most frequently. However, there are no well-designed comparative studies on the frequency of such disorders in relation to the type of treatment used [28,29].
PD symptoms, including FOG episodes, progressively worsen over time [30,31]. There are monitoring devices, based on accelerometers and gyroscopes, which can be placed on different parts of the body to detect FOG episodes. Although these devices cannot by themselves prevent the occurrence of FOG, they can be used to trigger stimulation mechanisms to induce the resumption of gait [30].
Symptoms of Parkinson's Disease
PD symptoms may be separated into three categories: primary motor symptoms, secondary motor symptoms, and pre-motor symptoms. They all progressively worsen as the disease advances. Table 1 lists the three categories. Gait disorders caused by PD, such as FOG, have important effects on the health of the patient, most notably the risk of falling, which can cause injuries with serious consequences. Falls cause stress, pain, and are the leading cause of death from injuries in the elderly. In fact, more than a third of PD patients older than 65 suffer at least one fall per year, representing 65% of all their injuries. As a consequence, PD patients develop increasing fear of falling, which causes stress and produces a significant psychological impact on their lives [32].
Freezing of Gait (FOG)
Movement freezing during the march is an episodic motor function disturbance, known as Freezing of Gait (FOG). FOG episodes last only a few seconds, and rarely exceed 30 s in duration [33]. FOG is commonly observed in PD patients during the advanced stage of the disease [34,35]. Patients usually describe the episode as a feeling of having their feet "glued to the ground". FOG episodes may be triggered by different factors: attempting to start or continue the march, changing gait speed or direction, presence of obstacles, walking in narrow spaces, monotone color environments, etc. The actual causes of FOG are still not well known, although there are some hypotheses, such as freezing being caused by the inability to generate a normal-amplitude step length [34], or by asymmetry of gait [36]. There does not seem to be a direct correlation between the frequency of FOG and other PD motor symptoms, such as stiffness and bradykinesia; however, FOG occurrence is inversely proportional to the presence of tremors [37,38]. There is also evidence that L-Dopa and dopamine agonists contribute to the development of FOG [38], and the use of agonists such as ropinirole [39] and pramipexole [40] can increase the frequency of freezing. Neurodegeneration associated with normal aging also seems to be a contributing factor [34,36]. FOG that occurs during the off phase of PD responds to L-Dopa, while freezing that occurs during the on phase does not, suggesting possible involvement of non-dopaminergic pathways [41]. Likewise, alterations in visual perception may also be involved in the genesis of FOG [39].
Appraisal of FOG is usually performed by a team of neuropsychiatric experts using certain tools and methods, such as: the Unified Scale for Parkinson's disease (UPDRS) to determine the stage of the disease, the freezing of gait questionnaire (FOGQ) to determine presence of FOG [42]. They are complemented by an evaluation of the emotional and cognitive status, as well as the quality of life of the patient.
Wavelet Theory
Wavelets operate analogously to Fourier analysis in some applications. The main difference is that wavelets perform local analysis, which makes them appropriate for analyzing signals in the time-frequency domain, while Fourier transforms are global [43,44]. Wavelet techniques divide a complex function into simpler ones that can be studied separately. They are appropriate for the analysis of images and biomedical signals, since they decompose a signal into subbands, allowing the energy of each decomposition subband to be calculated.
The term "wavelet" defines the functions used to sample the signal:

W(a, b) = (1/√a) ∫ S(t) Ψ*((t − b)/a) dt,   (1)

where Ψ* is the conjugate of the mother wavelet, which is scaled (by a) and translated (by b) point by point to determine the levels of comparison with the signal S(t).
A wavelet function is a small wave whose energy is concentrated in time, and it serves as a tool for the analysis of transient, non-stationary, time-varying phenomena [45]. Given a mother wavelet, a signal S(t) can be decomposed into:

S(t) = A_j(t) + Σ_{i=1..j} D_i(t),   (2)

where

A_j(t) = Σ_k c_jk φ_jk(t),   (3)
D_j(t) = Σ_k d_jk ψ_jk(t),   (4)

where A_j and D_j are the approximation and detail components, respectively, of the signal S(t) at level j (see Figure 2); φ_j and ψ_j are the scaling function and the wavelet function at level j used for reconstruction; and c_jk and d_jk, given by the wavelet transform, are the scaling-function coefficients and the wavelet coefficients at level j and time shift k, respectively. Wavelet analysis allows the use of large time intervals in those segments where greater accuracy is required at low frequency, and smaller regions where high-frequency information is required.
The Discrete Wavelet Transform
The Discrete Wavelet Transform (DWT) is very similar to the discrete Fourier transform (DFT) but, instead of using sine and cosine functions, it uses scaling functions and wavelets. These functions combine orthogonality (so that the inverse transform exactly reconstructs the signal) with compact support in space [46].
The DWT of a function f(x) is given by the following expression:

W(j, k) = Σ_x f(x) ψ_{j,k}(x),  with  ψ_{j,k}(x) = 2^(−j/2) ψ(2^(−j) x − k).   (5)
Energy of the Wavelet Coefficients
The energy in these components and their wavelet coefficients is related to the energy of the original signal. According to Parseval's theorem, the energy contained in the signal equals the sum of the energy contained in the detail and approximation coefficients at the different resolution levels of the wavelet transform [47,48]. That is, the signal energy can be decomposed in terms of the transform coefficients. For the different time shifts (k) and scales (j = 1, ..., l):

E_Dj = Σ_{k=1..N} |DC_j(k)|²,   (6)
E_Al = Σ_{k=1..N} |AC_l(k)|²,   (7)

where N is the number of detail coefficients (DC_j) and approximation coefficients (AC_l) at each decomposition level. Equation (8) gives the total energy of the wavelet coefficients:

E_total = E_Al + Σ_{j=1..l} E_Dj.   (8)
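The decomposition and energy bookkeeping of Equations (6)-(8) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' Java implementation: a multi-level Haar DWT (the wavelet the paper uses) over a synthetic 256-sample window, with per-subband energies computed from the coefficients and the Parseval identity checked numerically.

```python
# Minimal sketch of a 5-level Haar DWT with per-subband energies
# (Eqs. (6)-(8)). Pure Python; the input signal is synthetic.
from math import sqrt

def haar_step(signal):
    """One Haar DWT level: returns (approximation, detail) coefficients."""
    approx = [(signal[2*i] + signal[2*i + 1]) / sqrt(2) for i in range(len(signal) // 2)]
    detail = [(signal[2*i] - signal[2*i + 1]) / sqrt(2) for i in range(len(signal) // 2)]
    return approx, detail

def haar_dwt(signal, levels=5):
    """Multi-level Haar DWT: returns (final approximation, [DC1..DCl])."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details

def subband_energies(approx, details):
    """Energy per subband (Eqs. (6)-(7)) and total energy (Eq. (8))."""
    e_details = [sum(c * c for c in d) for d in details]
    e_approx = sum(c * c for c in approx)
    return e_approx, e_details, e_approx + sum(e_details)

# 256-sample synthetic window, matching the paper's buffer size
signal = [(i % 8) - 3.5 for i in range(256)]
e_ac5, e_dcs, e_total = subband_energies(*haar_dwt(signal, levels=5))

# Parseval's theorem: coefficient energy equals signal energy
assert abs(e_total - sum(s * s for s in signal)) < 1e-9
```

With the orthonormal (divide-by-√2) normalization, the total coefficient energy reproduces the signal energy exactly, which is what makes the subband-energy comparison of the following sections meaningful.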
Diagnostic Tests
In order to validate the system, the results were used to calculate sensitivity (9), specificity (10) and effectiveness (11) from the parameters True Positive (TP), False Negative (FN), True Negative (TN) and False Positive (FP) [49]:

Sensitivity = TP / (TP + FN),   (9)
Specificity = TN / (TN + FP),   (10)
Effectiveness = (TP + TN) / (TP + TN + FP + FN).   (11)

The total duration of the signals that were processed and analyzed is 480 s per patient (15 signals of 32 s per patient), for a total of 3840 s over the eight patients.
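The three validation metrics can be computed directly from the confusion-matrix counts. The sketch below uses the standard definitions (effectiveness is assumed here to be the usual accuracy form); the TP/FN/TN/FP counts are illustrative values, not the paper's actual tables, chosen only so the sensitivity and specificity land near the percentages reported later.

```python
# Standard diagnostic metrics from confusion-matrix counts (Eqs. (9)-(11)).
def sensitivity(tp, fn):
    """Proportion of actual FOG episodes that were detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of non-FOG segments correctly left unflagged."""
    return tn / (tn + fp)

def effectiveness(tp, tn, fp, fn):
    """Overall agreement (accuracy form, assumed here)."""
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative counts only (not the paper's recorded values)
tp, fn, tn, fp = 20, 13, 26, 4
print(f"sensitivity   = {sensitivity(tp, fn):.2%}")
print(f"specificity   = {specificity(tn, fp):.2%}")
print(f"effectiveness = {effectiveness(tp, tn, fp, fn):.2%}")
```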
Data Collection and Processing
The tests were performed on eight subjects between 60 and 84 years of age, of whom seven suffer from Parkinson's disease (PD) and one is a healthy subject considered as the control. The characteristics of the patients are presented in Table 2, along with the degree of the disease and the episodes of Freezing of Gait (FOG) that occurred during the system test (described by a neurologist). Figure 3 compares the similarity between both lower extremities (right and left); through the application of cross-correlation, it was determined that both extremities present the same behavior, indicating that there is no need to acquire the signals of both legs. In all patients, the acceleration of motor activity over the posterior sural nerve of the right leg was recorded, and superficial stimulation was applied at the intersection of the posterior tibial nerve and the lateral plantar nerve in both legs. A test circuit was established with typical scenarios for the occurrence of a freezing episode. For 32 s, gait data was registered as each patient crossed the circuit, yielding 256 values of the module of the tri-axial acceleration. During testing, patients performed some activities to stimulate FOG occurrence: • Walk in a straight line, • 180 degree turns during the walk, • Climb steps.
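The limb-similarity check described above can be sketched as a normalized cross-correlation at zero lag: a value close to 1 indicates the two legs move alike, justifying single-leg acquisition. This is an illustrative Python sketch on synthetic gait-like signals, not the authors' actual analysis code.

```python
# Normalized (Pearson-style) cross-correlation at zero lag between
# two limb-acceleration signals; synthetic data for illustration.
from math import sqrt, sin, pi

def norm_xcorr_zero_lag(x, y):
    """Zero-mean normalized correlation; 1.0 means identical behavior."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    xc = [a - mx for a in x]
    yc = [b - my for b in y]
    num = sum(a * b for a, b in zip(xc, yc))
    den = sqrt(sum(a * a for a in xc) * sum(b * b for b in yc))
    return num / den

# Two synthetic gait-like signals: same pattern, small DC offset
right = [9.81 + sin(2 * pi * i / 32) for i in range(256)]
left = [9.70 + sin(2 * pi * i / 32) for i in range(256)]
r = norm_xcorr_zero_lag(right, left)
assert r > 0.99  # near-identical behavior between the two limbs
```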
The processing was done in the Arduino Pro Mini module (3.3 V/40 mA), based on the ATmega328 microcontroller (Atmel Corporation, San Jose, California, United States of America), which works on an open source platform and to which the rest of the modules and elements are connected. Acceleration data of the lower extremity is acquired by means of the MPU 6050; the Arduino Pro Mini applies pre-processing and sends the measurements through Bluetooth to the smartphone. After processing, a bit (1 or 0) is forwarded back for the control of the motor, and the right device replicates this bit via radio frequency to the left device in order to control the second motor.
Hardware
We used two devices. On the right leg, one device acquires the data, sends it via Bluetooth to the smartphone and executes the vibratory stimulation when necessary; on the left leg, another device only executes the vibratory stimulation. Both are built on a double-layer Printed Circuit Board (PCB), each with its own power supply from a 3.7 V/500 mA lithium battery, a charging system, and a Poly-lactic Acid (PLA) enclosure. The right device contains the following elements: tri-axial accelerometer (MPU 6050), Bluetooth Module v 2.0 (HC-05), Radio Frequency (RF) emitter (433 MHz), On/Off switch, LED indicator and vibratory motor; the left device consists of: RF receiver (433 MHz), voltage amplifier module, On/Off switch, LED indicator and vibratory motor (see Figure 4a).
The encapsulated devices are located on two ergonomic supports, adjustable to the lower extremities and made of polyamide and elastane, as shown in Figure 4b. The devices and vibratory motors are placed on these supports so that they coincide with the proposed locations for acquisition and stimulation. The total weight is 220 g for the right device and 210 g for the left. In Figure 4c, the devices and the motor, held fixed by the adjustment of the supports, can be seen; this allows complete mobility and the use of the patient's usual footwear. In addition, the LED indicators, charging port and switches are displayed.
Software
Inside the Arduino, the sensor's acceleration is acquired at 8 Hz (sampling frequency) and transmitted by radio frequency and Bluetooth with the help of virtual libraries, and the data input and output pins are configured. The data frame received on the smartphone is decomposed to separate the acceleration values on each axis, and Equation (12) is applied to obtain the module of the tri-axial acceleration:

a = √(a_x² + a_y² + a_z²).   (12)

The result of this calculation is stored in a dynamic FIFO (First In First Out) vector of 256 elements. When this vector is full with its first 256 elements, the DWT is executed. The DWT parameters used are: wavelet = Haar, scaling = 2, decomposition levels = 5 and filter order = 1. The algorithm is encoded in Java to run inside an application developed for the Android platform.
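The magnitude computation of Equation (12) and the 256-element FIFO window can be sketched as follows. This is an illustrative Python sketch of the described buffering scheme (the actual implementation is in Java on Android); the sample values are synthetic.

```python
# Sketch of Eq. (12) plus the 256-sample FIFO window: each (ax, ay, az)
# sample is reduced to its magnitude and pushed into a fixed-size buffer;
# the DWT would run once the window is full.
from collections import deque
from math import sqrt

WINDOW = 256  # samples (32 s at 8 Hz)

def accel_magnitude(ax, ay, az):
    """Equation (12): module of the tri-axial acceleration."""
    return sqrt(ax * ax + ay * ay + az * az)

buffer = deque(maxlen=WINDOW)  # FIFO: oldest sample drops out automatically

def push_sample(ax, ay, az):
    """Store the magnitude; return True when the window is ready for the DWT."""
    buffer.append(accel_magnitude(ax, ay, az))
    return len(buffer) == WINDOW

# Feed synthetic samples (gravity only) until the window fills
ready = False
for _ in range(WINDOW):
    ready = push_sample(0.0, 0.0, 9.81)
assert ready and len(buffer) == WINDOW
```

Using `deque(maxlen=...)` keeps the window dynamic: once full, each new sample evicts the oldest, so the DWT always sees the most recent 32 s of gait.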
The saved signal is multiplied with a vector and the result is refined by orthogonal low-pass and high-pass filters. This is sent to a second filtering stage that depends on the scale and length of the signal, and allows all data to be divided and stacked in frequency ranges covering the span of the sampling frequency. The data groups resulting from the low-frequency filter (according to their level of decomposition) continue to be filtered, separated and grouped according to the frequency spectrum; the grouped vector compendium constitutes the wavelet coefficients. The wavelet coefficients across all decomposition levels contain the same number of elements as the vector of the acquired signal, grouped in a new vector and separated into detail and approximation coefficients.
From the coefficients of the DWT, it is possible to estimate the total energy level of the signal as well as the amount of energy stored in each frequency subband that was established in the wavelet decomposition. Equations (6)-(8) establish the amount of energy in Joules by levels of the coefficients of detail, approximation and total, respectively.
The DWT and wavelet energy are computed for every eight new acceleration samples, which, given the sampling frequency, is every 1 s; the detection of FOG, and therefore the decision to activate the stimulus, is thus made every 1 s. The stimulation lasts until the energy levels exceed the proposed threshold.
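The per-second decision rule just described can be sketched as a simple threshold check: while the DC1 energy share stays below the threshold, the stimulus remains on. This is an illustrative Python sketch; the 2% threshold is the value the paper derives later, and the per-second energy percentages below are invented for demonstration.

```python
# Sketch of the 1 s decision loop: stimulate while the DC1 energy
# percentage is below the FOG threshold, stop once gait resumes.
FOG_THRESHOLD_PCT = 2.0  # DC1 energy share below which FOG is assumed

def update_stimulus(dc1_energy_pct):
    """True = vibratory stimulus on (FOG detected or ongoing)."""
    return dc1_energy_pct < FOG_THRESHOLD_PCT

# Illustrative per-second DC1 energy percentages around a FOG episode
history = [4.5, 4.1, 1.2, 0.9, 1.5, 2.8, 4.0]
states = [update_stimulus(pct) for pct in history]
assert states == [False, False, True, True, True, False, False]
```

The episode spans the three seconds where the energy share drops under 2%; once the patient resumes walking the share rises and the stimulus switches off, matching the behavior described in the text.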
Graphical User Interface
An Android mobile application "FOG Detection" was developed for graphic interaction with the patient, and Figure 5a shows the first screen that appears when the application is opened. The following elements are found on this screen:
•
List of previously linked Bluetooth devices, • MAC address of each physical Bluetooth device. Once a device to link is selected from the list of available Bluetooth devices, the screen changes to Figure 5b. This screen interface shows the following: • Acceleration data on its three axes (x, y, z), • Amount of energy in the signal, • Buttons for manual activation and deactivation of the vibratory stimulus, • Real-time graph of tri-axial acceleration, • Indicator of motor activation.
In Figure 5c, active state of the automatic stimulation is observed, by means of a green indicator, representing the appearance of a FOG and the activation of vibratory stimulus.
Results and Discussion
Acquisition and processing yielded acceleration and energy results, respectively. The acceleration data provides characteristics of the walk and allows differentiating and extracting features of FOG episodes, while the energy levels establish the beginning, duration and end of the episodes, permitting activation of the vibratory stimulation until the resumption of gait. The patient's response to stimulation during the walk is evaluated through calculations of sensitivity, specificity and effectiveness, to quantify the ability of the system to differentiate patients who have FOG from those who do not.
Acceleration
The tri-axial acceleration data are received and correlatively stored in a file with ".csv" extension. Figure 6 shows some graphs of recorded signals, where patients 1 to 6 exhibit episodes of FOG, along with the beginning of each episode and the resumption of the walk, while patients 7 and 8 did not show episodes of FOG during testing.
The segment limited by parallel lines of red establishes the episodes of FOG that were diagnosed by the specialist. The green lines establish the beginning of the resumption of the gait in patients affected by the vibratory stimulus.
It is notable that signals with FOG (Figure 6) contain lower-extremity accelerations that change less rapidly and have less amplitude compared to the signals of Figure 7, which correspond to patients without FOG episodes. In turn, in Figure 6 there is a greater number of peaks and uniformity in the signals, while the signal segments that were diagnosed as FOG present small peak-to-peak amplitude variations around 9.81 m/s², which indicates an almost resting state of the lower extremities.
In Figure 6, FOG episodes occur at different instants and with different durations, with amplitudes ranging between 9 and 11 m/s², i.e., a peak-to-peak magnitude of approximately 2 m/s², regardless of age and sex. Two episodes of FOG are present in patient 3 (Figure 6) within the 32 s of accumulated signal, suggesting that the patient may be at grade 4 of PD; patients with this degree of disease require long-duration continuous stimuli to continue the gait. This establishes that, depending on the degree of the disease, the duration of FOG episodes can increase. Patient 7 (Figure 7) is diagnosed with grade 1 Parkinson's disease without FOG; the symptoms are mild and the disease is controlled by medication. There is a decrease in energy between 20 and 24 s of the signal due to an incomplete turn made by the patient; this is not a FOG episode. Patient 8 (Figure 7) is a healthy subject within the same age range as the previous patients; the acceleration is uniform and periodic, that is, it does not present flaws in gait behavior. Figure 8 shows the amount of energy contained in the wavelet coefficients, a product of the DWT processing. It shows the difference in energy levels between patients with FOG episodes and those without them, and the distribution of the energy across the detail and approximation coefficients at each decomposition level. In [50], it is established that FOG occurs in the 3 to 8 Hz bandwidth. This range was placed within the detail coefficients of the first level (DC1); the coefficients in the 3 to 4 Hz range are excluded from the analysis because they do not provide relevant information for FOG detection, but all coefficients must be considered for the calculation of the total energy and subband energy levels, as in Equations (6)-(8).
Based on the above considerations, the calculations, processing and analysis use a frequency spectrum for the presence of FOG from 4 to 8 Hz, whose frequency ranges are contained in the detail coefficients from level 1 (DC1) to level 5 (DC5).
Energy Levels
According to the test results in Figure 8, each patient, regardless of the degree of their illness, presents different amounts of energy because each one maintains a different walking pattern, so the absolute values show no significant characteristics that differentiate the presence of FOG. It is therefore necessary to perform a comparative percentage analysis; AC5 is excluded from the comparison but included in the total-energy calculation, as shown in Equation (13):

%E_DCj = (E_DCj / E_total) × 100.   (13)

This is contrasted in Figure 9, which presents the energy percentages of the detail coefficients from level 1 to level 5 (five levels of wavelet decomposition) derived from the signals of the eight patients described in Table 2: six with PD and FOG (patients 1 to 6), one with PD but without FOG (patient 7) and one without PD or FOG (patient 8). Based on the results in Figure 9, the total energy of signals with FOG is lower than that of signals without FOG (see Figure 8), due to the scarce or null movement activity of the lower extremities during a FOG episode. Because FOG occurs in the DC1 frequency band, the DC1 energy percentage is compared across all patients in Figure 9: the percentage of energy in patients 7 and 8 is higher than in patients 1 to 6, at approximately over 4%, while the patients who had FOG episodes have energy levels below 2%. The value of 2% is taken as the threshold level for the activation of the proposed vibratory stimulation, and it should rise again with the resumption of activity in the lower extremities, thus avoiding possible falls.
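The percentage comparison of Equation (13) can be sketched directly: each detail subband's energy is taken as a share of the total coefficient energy, with AC5 contributing to the total but not to the comparison. This is an illustrative Python sketch; the subband energy values below are invented, not the paper's measurements.

```python
# Sketch of Eq. (13): per-level detail-coefficient energy as a percentage
# of total coefficient energy (AC5 included in the total only).
def energy_percentages(dc_energies, ac5_energy):
    """Percent energy per detail level DC1..DCl relative to total energy."""
    total = sum(dc_energies) + ac5_energy
    return [100.0 * e / total for e in dc_energies]

# Illustrative subband energies (DC1..DC5) and final approximation AC5
dc = [1.5, 6.0, 12.0, 20.0, 10.5]
ac5 = 50.0
pct = energy_percentages(dc, ac5)

# A DC1 share under 2% is the paper's condition for triggering the stimulus
assert pct[0] < 2.0
```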
System Validation
Tables 3 and 4 summarize the results of the tests performed with the developed system in conjunction with the neurosurgeon's assessment. The numbers of TP and FN, needed to calculate sensitivity, are recorded in Table 3, while the numbers of TN and FP, needed to calculate specificity, are recorded in Table 4. Using the values recorded in Tables 3 and 4 in Equations (9)-(11), we obtain a specificity of 86.66% and a sensitivity of 60.61% in FOG detection, while the system's effectiveness for the resumption of walking after freezing is detected is 80%. Table 5 highlights the reduction in FOG episode duration for each patient using vibratory stimulation versus measurements without any stimulation: approximately a 27% reduction in the duration of FOG episodes. To verify system usability, all patients were asked about system comfort and the stimulus effect, with two questions:
•
Have you ever experienced any discomfort while wearing the system? • Do you feel uncomfortable in any aspect with the system feedback (vibratory stimulus)?
All of them stated that wearing the system does not cause any discomfort, and that the system feedback is soft enough not to be annoying, yet detectable enough to help.
To detect FOG, many techniques have been tested by different researchers; Table 6 shows the most recent papers. Most of them combine video recording and acceleration [51-56]; acceleration alone was used by [57-59]; acceleration in combination with angular velocity by [60] and in combination with an Inertial Measurement Unit sensor by [61,62]; video recording alone by [63]; microelectromechanical systems by [64]; and electroencephalography by [14,65]. The main objective of our research is an efficient, affordable system for detection and stimulation based on motor-frequency analysis, which can be improved through the implementation of neural networks and hip acceleration measurements, in addition to exploring vibratory stimulation as a means of breaking FOG. It has similar performance in specificity and a 22.96% lower average performance in sensitivity with respect to the other investigations, but our system works in real time (some studies use external hardware to process data offline, with, of course, better results), and it is low cost, compact, comfortable and integrated (detection + stimulation).
Conclusions
The motor problems of Parkinson's disease originate in the brain and branch out through the nervous system, leading to changes in gait such as FOG. The use of an exogenous stimulus allows the brain to break from the freezing state of the lower extremities and resume the gait. This device integrates a wireless, low cost, compact, comfortable, easy to use and portable system designed, developed and implemented for patients with Parkinson's disease who have episodes of FOG.
This device integrates an algorithm that accurately detects episodes of FOG in real time and then stimulates the patient to resume gait.
The duration of the FOG episodes varies for each patient, as does the patient's reaction time to the vibratory stimulus; their appearance is strongly linked to situations of stress and low self-esteem, such as the difficulty of crossing narrow places or climbing steps. During system tests, it was observed that the resumption of gait presented sudden, accelerated movements compared to normal walking.
The use of this device helps patients to move and perform their daily activities without restrictions and, at the same time, allows their real-time monitoring. In addition, the system stores gait data that can help to understand, process, and clarify FOG episodes.
The mathematical tool of the DWT is useful for finding differences in the acquired signals and for establishing the threshold variables that define an episode of FOG, with the possibility of characterizing other types of motor anomalies.
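As an illustration of the idea (not the paper's exact filter bank or threshold values), a one-level Haar DWT splits each signal window into a low-frequency approximation band and a high-frequency detail band, and the detail-band energy can then be tested against a per-patient threshold:

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

def detail_energy(window):
    """Energy in the high-frequency (detail) band of one window."""
    _, detail = haar_dwt(window)
    return sum(d * d for d in detail)

def is_fog_window(window, threshold):
    """Flag a window whose high-frequency energy exceeds a per-patient threshold."""
    return detail_energy(window) > threshold
```

In practice several decomposition levels and patient-specific threshold calibration would be used, given the inter-patient variability noted above.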
Future work with more patients and with different processing techniques, such as neural networks, will be done to improve specificity and sensitivity while maintaining the constraint of a low-cost system. This would provide the possibility of differentiating an episode of FOG from a voluntary pause in gait, correcting the system's sensitivity levels, and calibrating the effectiveness of the stimulus.
|
v3-fos-license
|
2022-11-17T16:17:36.826Z
|
2022-11-01T00:00:00.000
|
253563468
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-6643/14/22/4818/pdf?version=1668429162",
"pdf_hash": "adfd5f431a94126cb1a2c90b70cc49e7562432ac",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1026",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "c87360c651646b69ea0cf28d9c4dab089a074ea2",
"year": 2022
}
|
pes2o/s2orc
|
Gut Microbiota Associated with Gestational Health Conditions in a Sample of Mexican Women
Gestational diabetes (GD), pre-gestational diabetes (PD), and pre-eclampsia (PE) are morbidities affecting gestational health which have been associated with dysbiosis of the mother's gut microbiota. This study aimed to assess the extent of change in the gut microbiota diversity, short-chain fatty acid (SCFA) production, and fecal metabolite profile in a sample of Mexican women affected by these disorders. Fecal samples were collected from women with GD, PD, or PE in the third trimester of pregnancy, along with clinical and biochemical data. Gut microbiota was characterized by high-throughput DNA sequencing of V3-16S rRNA gene libraries; SCFA and metabolites were measured by High-Pressure Liquid Chromatography (HPLC) and Fourier Transform Ion Cyclotron Resonance Mass Spectrometry (FT-ICR MS), respectively, in extracts prepared from feces. Although the results for fecal microbiota did not show statistically significant differences in alpha diversity for GD, PD, and PE with respect to controls, there was a difference in beta diversity for GD versus CO, and a high abundance of Proteobacteria, followed by Firmicutes and Bacteroidota, across the gestational health conditions. DESeq2 analysis revealed bacterial genera associated with each health condition; Spearman's correlation analyses showed selected anthropometric, biochemical, dietary, and SCFA metadata associated with specific bacterial abundances; and although the HPLC did not show relevant differences in SCFA content among the studied groups, FT-ICR MS disclosed the presence of interesting metabolites of complex phenolic, valeric, arachidic, and caprylic acid nature. The major conclusion of our work is that GD, PD, and PE are associated with characteristic fecal bacterial microbiota profiles, with distinct predictive metagenomes.
The American Diabetes Association defines GD as an abnormal blood glucose rise during the second or third trimester of pregnancy without any previous diabetes record [11]. The development of GD is associated with insulin secretion failure in a chronic insulin resistance context and with deficient glucose uptake in β-cells [12]. GD is diagnosed by the first recognition of hyperglycemia or impaired glucose tolerance during pregnancy through an oral glucose tolerance test (OGTT) [11].
The prevalence of GD has increased considerably, and it has become a global public health concern affecting 9.3-25.5% of pregnancies worldwide [13]. In Mexico, there is insufficient epidemiological information to assess GD prevalence, since there is no consensus to establish an accurate clinical diagnosis. It has been estimated that 8.7-17.7% of Mexican pregnant women develop GD [14], whereas the International Association of Diabetes and Pregnancy Study Groups estimated up to 30% [15]. Moreover, in clinical practice, only half of GD women are correctly diagnosed [16,17].
Risk factors associated with GD, such as overweight, obesity, and hypertension, are highly prevalent in the Mexican population [18]. Other risk factors include maternal age greater than or equal to 35 years, multiparity, excessive weight gain or obesity during pregnancy, a GD history, first-degree relatives with diabetes, previous fetal macrosomia pregnancies [19], polycystic ovary syndrome, hypothyroidism [20], and diet [21].
Women with GD have an increased risk of comorbidities, thus leading to short and long-term poor life quality [22]. Associated complications comprise pre-eclampsia, postpartum infection, antenatal depression [23], metabolic syndrome [20], and cardiovascular diseases [24]. After delivery, women who suffer from GD are up to seven times at greater risk of T2DM development [25]. In addition to the maternal illness, neonates from mothers with GD have an increased risk of macrosomia, diabetic fetopathy, and neonatal hyperinsulinemia [26], and have a substantial risk for obesity and even T2DM later in life, contributing to the already growing diabetes epidemic [13,27].
It is important to mention that the gut microbiota changes through pregnancy. This change mainly consists of an overall increase in Proteobacteria and Actinobacteria and reduced microbial richness in the third trimester [28]. In addition to these natural changes in the microbiota, an aberrant gut microbiota has been documented in GD individuals compared to healthy counterparts [29]. For instance, Danish women suffering from GD in the third trimester of pregnancy were reported to have persistent alterations in gut microbiota up to eight months after delivery, including a high Collinsella abundance, a reduction of SCFA-producing bacteria such as Faecalibacterium and Bacteroides, and a reduction of Isobaculum [30]. Moreover, Blautia species, which are abundant in GD individuals, were correlated with an unhealthy metabolic profile. On the other hand, there was a reduction in Ruminococcus abundance in postpartum GD women. A lower abundance of Akkermansia was also reported in women with gestational diabetes [30].
As a result of studies of gut microbiota-host interactions in the third trimester of pregnancy, multiple mechanisms have been proposed for gut microbiota involvement in GD pathophysiology [29]. One of the most significant proposals concerns the role of SCFA. It has been reported that a reduction in the relative abundance of SCFA-producing bacteria is associated with an increase in blood glucose levels [26]. SCFA are involved in the activation of several important receptors, such as the peroxisome proliferator-activated receptor (PPAR), which helps to reduce the expression of inflammatory markers and oxidative stress [26]. In addition, SCFA interact with G protein-coupled receptors (GPR) to promote anorexic hormones such as glucagon-like peptide 1 (GLP-1) and peptide tyrosine tyrosine (PYY), thereby stimulating insulin secretion, promoting glucose metabolism, and inducing satiety [31].
Due to the high prevalence of GD and other gestational health conditions, such as pre-eclampsia and pre-gestational diabetes, in Mexican women, it is of particular interest to explore the gut microbiota and the metabolites it produces to determine their relationship with clinical, anthropometric, and dietary parameters in Mexican pregnant women diagnosed with GD and other morbidities during gestation.
Study Type and Selection of Subjects
An observational, retrospective, case-control study was conducted with the participation of Mexican pregnant women attending the "Hospital Regional de Alta Especialidad de Ixtapaluca", a governmental third-level hospital located in the State of Mexico (19°19′07″ N, 98°52′56″ W). Fifty-four pregnant women in the third trimester of gestation were recruited and divided into four experimental groups: 30 healthy pregnant women (controls, CO), 11 pregnant women diagnosed with gestational diabetes (GD), 8 pregnant women diagnosed with pre-eclampsia (PE), and 5 women with a pre-pregnancy diagnosis of type 1 or 2 diabetes mellitus (PD). GD was diagnosed with the following criteria: fasting blood glucose ≥ 92 mg/dL, plasma glucose at 1 h post-stimulation with 75 g of anhydrous glucose ≥ 180 mg/dL, and plasma glucose at 2 h post-stimulation with 75 g of anhydrous glucose ≥ 153 mg/dL. For pre-eclampsia, the diagnosis was made after week 20, considering a systolic blood pressure ≥ 140 mmHg and a diastolic pressure ≥ 90 mmHg on more than two occasions at least 4 h apart in a day, in addition to the presence of proteinuria. Pre-gestational diabetes patients had been diagnosed before pregnancy with type 1 or 2 diabetes mellitus. Inclusion criteria were age over 18 years, no associated pathologies in the case of the control group, no consumption of probiotics or antibiotics in the 3 months before sample collection, and no gastrointestinal disease. The study was approved by the hospital's Ethics Committee in Research (Comité de Ética en Investigación, CEI), register number NR-CEI-HRAEI-07-2021, and Research Committee (Comité de Investigación, CI), register number NR-07-2021. All participants consented to the collection of data and signed informed consent following the Declaration of Helsinki.
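The OGTT cut-offs listed above can be expressed as a simple screening rule. This sketch assumes the usual one-abnormal-value criterion (any single threshold exceeded flags GD); the function name and this interpretation are ours, not the paper's:

```python
def gd_positive(fasting: float, glucose_1h: float, glucose_2h: float) -> bool:
    """75 g OGTT screening rule with the thresholds stated above (mg/dL).

    Assumption: one abnormal value suffices for a positive screen."""
    return fasting >= 92 or glucose_1h >= 180 or glucose_2h >= 153
```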
It is important to mention that all sample collection occurred from July to October 2021, during the severe COVID-19 pandemic in Mexico, which restricted all the research operations in hospitals.
Data and Specimen Collection
Stool samples were provided by the participants who signed the informed consent and who met the inclusion criteria. Samples were stored at −70 °C until further use. Clinical (age, parity, first-degree relatives diagnosed with diabetes mellitus, history of abortion, previous pregnancy with GD, fetal macrosomia in a previous pregnancy, etc.), anthropometric (weight, height, body mass index (BMI)), and metabolic (fasting glucose, glucose at 2 h after stimulation with 75 g of anhydrous glucose, glycosylated hemoglobin (HbA1c), cholesterol, and triglycerides) parameters were obtained for each patient. All data related to the diagnosis, gynecological-obstetric history, and risk factors were obtained from the clinical record. For each patient, a food frequency questionnaire, designed to obtain information about eating habits, was applied.
DNA Extraction
For DNA extraction, 200 mg of each fecal sample was processed using the FavorPrep™ Stool DNA Isolation Mini Kit (Cat. FASTI 001-1, FAVORGEN© Biotech Corporation, Zhunan, Taiwan) following the manufacturer's instructions. Subsequently, the integrity of the DNA fragments was confirmed on a 0.5% agarose electrophoresis gel (90 V for 50 min), and the purity was assessed with the 260/280 and 260/230 absorbance ratios measured on a NanoDrop Lite Spectrophotometer (Thermo Scientific, Waltham, MA, USA).
Amplification of the V3 Region of the Bacterial 16S rRNA Gene
The fecal microbiota composition of the experimental groups was established by sequencing the polymorphic V3 region of the bacterial 16S rRNA gene in each sample. Forward (V3-341F) and reverse (V3-518R) primers complementary to the upstream and downstream regions of the locus of interest were used [6]. The forward primer contains a known barcode sequence allowing identification of individual samples in the pool. This procedure was performed by endpoint PCR. An amplicon of 281 bp was obtained under the following amplification program: 3 min at 98 °C; 25 cycles (12 s at 98 °C, 15 s at 62 °C, and 10 s at 72 °C); and 5 min at 72 °C. The PCR product was visualized on 2% agarose gels. The amount of each amplicon was estimated by densitometry using the Image Lab v.4.1 program, and a final library was made by mixing equal amounts of amplicons.
High-Throughput DNA Sequencing
The final library was purified using a highly sensitive 2% agarose gel stained with SYBR GOLD DNA (E-Gel™ EX, 2%, Invitrogen™, Cat. G401002, Waltham, MA, USA). The DNA library concentration and final fragment size were measured with a 2100 Bioanalyzer Instrument (Agilent Technologies, Santa Clara, CA, USA) fragment analyzer; the resulting average size of the library was 263 bp. Emulsion PCR was carried out using the Ion OneTouch™ 200 Template Kit v2 DL (Life Technologies, Carlsbad, CA, USA), according to the manufacturer's instructions. Enrichment of the amplicon with ionic spheres was carried out using the Ion OneTouch ES (Life Technologies, Carlsbad, CA, USA). Sequencing was performed using the Ion 318 Kit V2 Chip (Cat. 4488146, Life Technologies, Carlsbad, CA, USA) and the Ion Torrent PGM system v4.0.2. After sequencing, the reads were filtered by the PGM software to remove polyclonal reads, reads with homopolymers > 6, and low-quality sequences (quality score ≤ 20).
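The homopolymer and quality filters described above are applied internally by the PGM software; for illustration, a stand-alone sketch of the same two criteria, assuming the "quality score" is the mean per-base quality of the read:

```python
def longest_homopolymer(seq: str) -> int:
    """Length of the longest run of a single base in the read."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def keep_read(seq: str, quals: list, max_homopolymer: int = 6, min_q: float = 20.0) -> bool:
    """Drop reads containing a homopolymer longer than 6 bases or with a
    mean quality score <= Q20 (assumption: mean quality stands in for the
    platform's quality score)."""
    if longest_homopolymer(seq) > max_homopolymer:
        return False
    return sum(quals) / len(quals) > min_q
```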
Taxonomic Assignment and Bacterial Diversity
Amplicon Sequence Variants (ASV) were determined from reads that met the quality criteria using the QIIME2-2022.2 pipeline [32]. Representative sequences were taxonomically annotated against the Silva 138 database with the weighted pre-trained classifier (Weighted Silva 138, 99% OTUs, full-length sequences) [33]. Further analyses were performed with R 4.2.1 [34] in the RStudio 2022.07.01 + 554 IDE [35]. Data were imported into R with the qiime2R 0.99.6 package [36], and the phyloseq 1.40.0 package [37] was used for the analysis of microbial communities with relative abundances. For intra-sample diversity, the Chao1, Shannon, Simpson, InvSimpson, ACE, and Fisher indexes were calculated. Analysis of the inter-sample diversity was carried out with the UniFrac distance and Non-Metric Multidimensional Scaling (NMDS) ordination with the vegan 2.6.2 package [38]. The core microbiota heat map (50% prevalence, 1% detection) and Spearman's rank correlation of bacteria with variables (anthropometric, clinical, dietary, and SCFA content) were elaborated with the microbiome 1.18.0 [39] and ComplexHeatmap 2.12.1 [40] packages. Differential abundance analysis was performed with DESeq2 1.36.0 [41]. Data were managed with tidyverse 1.3.2 [42]. Correlograms were made with the psych 2.2.5 package [43]. Figures were elaborated with ggplotify 0.1.0 [44], ggpubr 0.4.0, RColorBrewer 1.1.3 [45], and pals 1.7 [46]. To predict metabolic profiles of the bacterial metagenome from the sequencing data, the PICRUSt v2 program was used with the MetaCyc metabolic pathway database option. The Statistical Analysis of Taxonomic and Functional Profiles software (STAMP v2.1.3) was used to determine significant differences in the functional metabolic pathways of the bacterial metagenome [47]. The pipeline script for the analysis is included in the Supplementary Material.
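The study computes these diversity indexes with phyloseq in R; for reference, the Shannon, Simpson, and InvSimpson indexes reduce to short formulas over per-sample taxon counts (Chao1 and ACE additionally require singleton/doubleton counts and are omitted from this Python sketch):

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson(counts):
    """Simpson diversity 1 - sum(p_i^2)."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

def inv_simpson(counts):
    """Inverse Simpson diversity 1 / sum(p_i^2)."""
    total = sum(counts)
    return 1 / sum((c / total) ** 2 for c in counts)
```

For a perfectly even community of four taxa, shannon gives ln 4 ≈ 1.386, simpson gives 0.75, and inv_simpson gives 4.0.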
Analysis of Short-Chain Fatty Acids by HPLC
The SCFA were measured from freeze-dried fecal samples using the Perkin Elmer-Flexar HPLC system (Waltham, MA, USA). Samples were pre-treated before injection in the equipment as follows: 0.5 mL deionized water was added to 50 mg of dehydrated sample, then 100 µL of concentrated HCl, and mixed by vortex for 15 s. Subsequently, 1 mL of ethyl ether was added and mixed on an orbital shaker (speed 80 rpm for 20 min). A centrifugation step was applied for 5 min at 3500 rpm, recovering the supernatant and repeating the ether extraction step. Finally, 500 µL of 1M NaOH was added to the final supernatant, taking the aqueous phase and filtering with a 0.45 µm PTFE filter. After filtering, 100 µL of concentrated HCl was added and mixed by vortex for 6 s.
Analysis of Metabolites by ESI FT-ICR MS
A Solarix XR (Bruker, Bremen, Germany) Fourier Transform Ion Cyclotron Resonance Mass Spectrometer (FT-ICR MS) was calibrated in positive and negative Electrospray (ESI) mode with sodium trifluoroacetate solution. Samples were processed as described for the HPLC method and injected into the instrument with a Hamilton 250 µL syringe at a 120 µL/h flow rate by positive and negative ESI (450 V, 1 nA Capillary; −500 V, 9.451 nA End Plate Offset) to ensure optimal ionization efficiency and a larger number of identified metabolites. The acquisition conditions were as follows: 42.99 Low m/z, 3000 High m/z, 24 Average scans, 0.1 Accum (s), and 8M resolution. The source gas was N2, at 1 bar nebulization, 2 L/min dry gas, and 176.5 °C dry temperature. The DataAnalysis v.6.0 program was used to process the generated data. The names and structures of candidate metabolites were assigned using Bruker Compass MetaboScape 2022 b v.9.0.1. For the statistical analysis, OriginPro 2021 v.9.8.0.200 was used.
Sequence Accession Numbers
The sequence FASTQ files and corresponding mapping files for all samples used in this study were deposited in the NCBI repository BioProject PRJNA884382 https://www.ncbi.nlm.nih.gov/sra/PRJNA884382 (accessed on 10 October 2022).

Table 1 shows that the women in the sample had an average age of 28 years, with a gestational age of approximately 32 weeks. The anthropometry indicated a height of 1.57 m, which is normal among Mexican women, and a tendency toward higher weight for women in the GD, PD, and PE groups compared to CO women. BMI data showed that more than 68% of the women in the groups were overweight or obese. The blood tests revealed that women of the GD, PD, and PE groups had higher levels of fasting glucose and triglycerides. The average parity was <3 births among the 54 women studied. Most of the women had a secondary education level and were in free unions, and their main activity was being a housewife (Table 1). The measurement of SCFA in feces showed no statistically significant difference in formic, acetic, propionic, butyric, and valeric acid concentrations among the studied groups, with formic acid being the most abundant (Table 1). The analysis of the nutritional information collected from the participants revealed significant differences for nine macronutrients among the CO, GD, PD, and PE groups (Table 2); however, after applying a Benjamini-Hochberg post-hoc test, statistically significant differences remained only for energy, carbohydrate, protein, total fiber, cereal, and sodium intake for CO vs. GD (Supplementary Materials Table S1).
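The Benjamini-Hochberg post-hoc correction mentioned above is a step-up procedure over the sorted p-values. An illustrative stdlib sketch (the study presumably used a statistics package; this version returns the indices of rejected hypotheses rather than adjusted p-values):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return the (sorted) indices of hypotheses rejected at FDR level alpha.

    BH step-up rule: find the largest rank k with p_(k) <= (k/m) * alpha,
    then reject the k smallest p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    return sorted(order[:k])
```

Note the step-up behavior: a p-value that fails its own threshold can still be rejected if a larger p-value downstream passes.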
Alpha and Beta Diversity of the Gut Microbiota in Gestational Health Conditions
The gut microbiota diversity was inferred by characterizing the fecal microbiota using V3-16S rRNA gene libraries and high-throughput DNA sequencing. Five million total reads were obtained, with an average of 87,000 reads/sample and a median quality score of 32 (Table 3). The sequencing was satisfactory, as shown in the rarefaction plots (Supplementary Materials Figure S1). Analyses characterizing the alpha diversity of the samples in the CO, GD, PD, and PE groups (Figure 1A) did not show a statistically significant difference for the Chao1, ACE, Shannon, Simpson, InvSimpson, and Fisher indexes (Supplementary Materials Table S2). Additionally, the evaluation of the beta diversity showed that only the microbiota diversity in CO and GD differed with statistical significance (p = 0.01) (Figure 1B) (see Supplementary Materials: CO versus GD, Table S3; CO versus PD, Table S4; and CO versus PE, Table S5). PE (Pre-Eclampsia), PD (Pre-gestational Diabetes), GD (Gestational Diabetes), and CO (Control).
Diversity of the Fecal Microbiota Shows a Predominance of Proteobacteria Phylum
When the microbiota diversity was evaluated at the phylum level in the CO, GD, PD, and PE groups, a higher abundance of Proteobacteria was observed, followed by Firmicutes and Bacteroidota, in all groups (Figure 1C). There was, however, no statistically significant difference for these phyla among the groups (Supplementary Materials Tables S3-S5). At the genus level, Sphingomonas (Proteobacteria) was the most abundant taxon for CO (12.32%) and PD (21.35%); the genus Blautia (Firmicutes) for GD (17.27%); and the genus Enterococcus (Firmicutes) for PE (14.99%) (Figure 2A). However, there were only statistically significant differences between the CO and GD groups for Achromobacter, Allorhizobium-Neorhizobium-Pararhizobium-Rhizobium, Mesorhizobium (Proteobacteria), Bifidobacterium, and Cutibacterium (Actinobacteriota) (Supplementary Materials Table S6). The bacterial taxa whose abundance contrasted when comparing CO versus PD were Allorhizobium-Neorhizobium-Pararhizobium-Rhizobium, Methylobacterium-Methylorubrum, Pseudomonas (Proteobacteria), and Bifidobacterium (Actinobacteriota) (Supplementary Materials Table S7). On the other hand, there was only a statistically significant difference for Corynebacterium (Actinobacteriota), Mesorhizobium (Proteobacteria), and Streptococcus (Firmicutes) when comparing the abundances between CO and PE (Supplementary Materials Table S8).
The bacterial diversity was also explored in a core microbiota model assessment, where the abundance of taxa with >1% of reads present in at least 50% of the samples was comparatively analyzed and the results were plotted in a heat map of relative abundances normalized for the core taxa of each group (Figure 2B). As observed in the heat map, the CO group had a comparatively higher abundance of Mesorhizobium (Proteobacteria), Alistipes, Bacteroides (Bacteroidota), and NK4A136 (Firmicutes) and a lower abundance of Methylobacterium (Proteobacteria), Enterococcus, Gemella, Finegoldia, Staphylococcus, and Streptococcus (Firmicutes) than the other experimental groups (Figure 2B). The GD group had a higher abundance of Paraclostridium, Lactobacillus (Firmicutes), UCG-001 (family Prevotellaceae, Bacteroidota), Bosea, and Escherichia (Proteobacteria), and a lower abundance of Cutibacterium, Micrococcus (Actinobacteriota), Variovorax, Achromobacter (Proteobacteria), and Alistipes (Bacteroidota) than CO, PD, and PE (Figure 2B). In the PE group, the abundance of Corynebacterium (Actinobacteriota), Methylobacterium, Paracoccus, Acinetobacter (Proteobacteria), Muribaculaceae (Bacteroidota), Helicobacter (Campilobacterota), Enterococcus, Gemella, Finegoldia, and Staphylococcus (Firmicutes) was increased, and only the abundance of Reyranella (Proteobacteria) was comparatively decreased with respect to the abundance in the other groups (Figure 2B). Finally, in the PD group, the abundance of Mycobacterium (Actinobacteriota), Sphingopyxis, Pseudomonas, Stenotrophomonas, Variovorax, Allorhizobium, and Sphingomonas (Proteobacteria) was increased and the abundance of Bradyrhizobium (Proteobacteria), Muribaculaceae (Bacteroidota), Lactobacillus (Firmicutes), Bifidobacterium (Actinobacteriota), and Enterococcus (Firmicutes) was decreased.
Figure 2. (A) Relative abundance of bacterial genera in each group; abundances are shown as percentages on the Y-axis.
The graphic shows the twenty-six topmost abundant genera, while "Other" groups the genera with <1% relative abundance (see Supplementary Materials for numerical abundance data and statistical tests: CO versus GD, Table S6; CO versus PD, Table S7; and CO versus PE, Table S8). (B) Core microbiota heatmap among samples. Columns show the abundance of core microbiota members with a prevalence of at least 50% in the samples and an abundance ≥1%. The color scale from blue (−2) to red (2) indicates the relative abundance normalized from the core taxa of groups.
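The 50%-prevalence/1%-detection core filter used for the heat map can be sketched as follows. The study performed this with the R microbiome package; the dict-based table format here is an assumption for illustration:

```python
def core_taxa(abundance_table, prevalence=0.5, detection=0.01):
    """Taxa detected at >= `detection` relative abundance in at least
    `prevalence` of the samples (the 50% / 1% rule described above).

    abundance_table maps taxon -> list of per-sample relative abundances."""
    n_samples = len(next(iter(abundance_table.values())))
    min_samples = prevalence * n_samples
    return [taxon for taxon, values in abundance_table.items()
            if sum(1 for v in values if v >= detection) >= min_samples]
```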
Spearman's Correlation Analyses of Selected Metadata with Bacterial Abundance
The Spearman correlation analyses using the metadata and ASV files detected positive and negative correlations for the explored variables. Relevant results for CO and GD were obtained when the bacterial abundance was correlated with anthropometric and biochemical data. For the CO group, there was a positive correlation of members of the phylum Firmicutes with gestational age (Enterococcus), total cholesterol (Clostridium), and triglycerides (Enterococcus, Gemella, Streptococcus, vadinBB60, Class Clostridia, Clostridium sensu stricto 1); of the phylum Proteobacteria with weight (Achromobacter), BMI (Reyranella), body surface (Achromobacter), and total cholesterol (Reyranella, Mesorhizobium, Sphingopyxis); and of the phyla Actinobacteriota (Mycobacterium, Microbacterium, Lawsonella, Rothia) and Cyanobacteria (Obscuribacteraceae) with heart rate. In contrast, there was a negative correlation of members of the phylum Campilobacterota (Helicobacter) with gestational age and total cholesterol, of Actinobacteriota (Mycobacterium) with size, and of Bacteroidota (Muribaculaceae) with gestational age (Figure 4A). On the other hand, the GD group had only positive correlations: members of the phyla Actinobacteriota (Microbacterium, Cutibacterium) and Firmicutes (Bacillus) with age; Bacteroidota (UCG-001, family Prevotellaceae) and Campilobacterota (Helicobacter) with fasting glucose; and Proteobacteria (Enterobacter) with triglycerides (Figure 4B).
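Spearman's rho, as used for these correlations, is simply the Pearson correlation of the rank vectors (with ties given average ranks). A stdlib sketch of the statistic itself, without the p-values that a package such as R's cor.test or scipy.stats.spearmanr also reports:

```python
import math

def average_ranks(values):
    """1-based ranks with ties assigned the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)
```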
Prediction of Bacterial Metagenome and Metabolite Profile in Fecal Samples
The PICRUSt analysis of the ASV table determined a predicted metagenome and identified interesting functional metabolic pathways in the bacterial microbiota, where the mean proportion (%) of each metabolic pathway contrasted among the studied groups after a strict statistical analysis (Welch's test, with Bonferroni correction). There were thirteen metabolic pathways when comparing CO and GD, most of them catabolic and primarily detected in the CO bacterial microbiota (Figure 5A) (Supplementary Materials Table S12). For CO versus PD, twenty-seven pathways were reported by the analysis, of which fourteen were more abundant in CO (five anabolic and nine catabolic) and thirteen in PD (eight anabolic and five catabolic) (Figure 5B) (Supplementary Materials Table S12). Finally, the comparative analysis between CO and PE revealed only one catabolic pathway, for vitamin B6 degradation, in CO (Figure 5C) (Supplementary Materials Table S12). Figure 5. Prediction of functional microbial metabolic pathways using PICRUSt 2 analysis with the MetaCyc database. The graphics show the abundance of (A) thirteen statistically significant metabolic pathways between CO (blue color) and GD (red color) bacterial communities; (B) twenty-seven statistically significant metabolic pathways between CO (blue color) and PD (green color) bacterial communities; and (C) one statistically significant metabolic pathway between CO (blue color) and PE (orange color) bacterial communities. Confidence intervals are indicated on top, while the mean proportions and differences in mean proportions with percentage scale are shown underneath each graphic. Groups are identified by a tab placed below the graphics. A Welch test was applied with a Bonferroni post-hoc. Corrected p-values are shown on the right side of each graphic (see Supplementary Materials Table S12 for all included statistically significant pathways, q < 0.05).
The metabolite profile in fecal samples collected from the CO, GD, PD, and PE groups was explored by FT-ICR MS. The profile analysis of identified metabolites using positive ionization for CO versus GD (Figure 6A), CO versus PD (Figure 6B), and CO versus PE (Figure 6C), and negative ionization mode for CO versus GD (Figure 6D), CO versus PD (Figure 6E), and CO versus PE (Figure 6F), did not show a clear clustering of the samples under comparison. However, interesting metabolites such as trioxopyrrolopyridine, 9,9'-spirobi[carbazol-9-ium], and complex phenolic, valeric, arachidic, and capric acids, among others, were identified under positive (Supplementary Materials Table S13) as well as negative (Supplementary Materials Table S14) ionization.
Discussion
Characterization of the gut microbiota diversity associated with gestational health conditions is of great importance to understand the effect of changes in the microbiota and host interactions during the development of a new human being. Unlike other reports, in our study, Proteobacteria had the highest relative abundance, followed by Firmicutes and Actinobacteria [28]. The gut of healthy humans is dominated by four major bacterial phyla: Firmicutes, Bacteroidetes, and to a lesser degree, Proteobacteria and Actinobacteria [4,49].
It has been reported that gut microbiota changes remarkably from the first to the third trimester during a healthy pregnancy, increasing diversity and reducing richness, with an increased abundance of Proteobacteria and Actinobacteria [28].
A previous review reported changes in the gut microbiota composition in gestational diabetes pregnancies in comparison with normoglycemic pregnancies, where alpha diversity was decreased and beta diversity increased. The variations in the gut microbial composition during pregnancy showed an increased Proteobacteria/Actinobacteria ratio, and an increase in Firmicutes and Bacteroidota abundance was observed as well [50]. A study reported similar diversity and community structure in women with gestational diabetes compared to control women concerning Observed OTUs, Shannon's diversity index, and Pielou's evenness index [27]. These results are similar to those obtained in our study.
In our GD group, the abundance of Achromobacter, Rhizobium, Bifidobacterium, and Mesorhizobium was decreased. We also found a higher abundance of UGC-014, Clostridium_sensu_stricto_1 (class Clostridia), Staphylococcus, Bosea, Rothia, and Enterobacter. Bifidobacterium is a known primary colonizer of the intestinal epithelium and a producer of SCFA [51]. A lower abundance of this genus induces the downregulation of GLP-2 synthesis, a protein involved in the regulation of gut barrier function [52]. Bifidobacterium has been reported to be highly abundant in Crohn's disease with respect to the control group in a study of a Canadian population [53]. Women in Denmark in the third trimester with gestational diabetes, diagnosed by an oral glucose tolerance test, showed an increased abundance of the phylum Actinobacteria and the genera Collinsella, Rothia, and Desulfovibrio compared with the normoglycemic group [27].
Regarding Clostridium sensu stricto 1, a high density of this genus may cause intestinal epithelial inflammation. Certain Clostridium spp. are harmful to host health; for instance, the epithelial inflammation observed in weaned piglets may be correlated with Clostridium sensu stricto 1 enrichment in their intestinal mucosa [54]. Additionally, a correlation between the expression of pro-inflammatory cytokines, such as IL-1β and TNF-α, and colon inflammation caused by Clostridium sensu stricto 1 has been observed [55]. In our work, the presence of this genus correlated inversely with cholesterol consumption, protein intake, and sour milk products in the CO and GD groups, and was associated with energy intake. Clostridium has been implicated in the maintenance of mucosal homeostasis and the prevention of inflammatory bowel disease, and in an increase in the anti-inflammatory activity of Treg lymphocytes in mice; therefore, Clostridium might modulate various aspects of the immune system [56]. In the GD group, we found a correlation between the presence of the genus UCG-001 (family Prevotellaceae) and fasting glucose; another member of the same family, Prevotella, produces SCFA, increasing incretin secretion and reducing inflammation and insulin resistance [56].
Through several mechanisms, gut microbial dysbiosis can contribute to the development of proteinuria, a strong risk factor for the development and progression of chronic kidney disease, hypertension, and diabetes, in addition to pre-eclampsia [57]. In the PE group of our study, the Firmicutes to Bacteroidota ratio was increased, as reported for hypertensive subjects in another study [58]. Our results for the PE group are similar to a report on pregnant Chinese women, where there were no significant differences in diversity between the pre-eclampsia and control groups [59]. This work in Chinese women also reported that the relative abundance of Proteobacteria decreased significantly in the control group and that the relative abundance of Firmicutes was significantly lower in the pre-eclampsia group than in the control group; in contrast, in our work we found a tendency toward an increase in the PE group.
In the PE group, some genera increased (Bosea, Escherichia, Staphylococcus, Enterococcus), while others decreased (Sphingomonas, Microbacterium, Pseudomonas, Bifidobacterium, and Lactobacillus) in comparison with the CO group. In patients with proteinuria-associated diseases, a reduction in the abundance of Lactobacillus and Bifidobacterium species has been reported. These two genera are among the best-known probiotics, with important functions such as protection of the gut barrier structure and production of SCFAs, nitric oxide, and vitamin complexes [57,60]. Other genera, such as Corynebacterium, Methylobacterium-Methylorubrum, and Streptococcus, were observed to increase; in contrast, Mesorhizobium was significantly diminished in the PE group of our work. The genus Mesorhizobium belongs to the phylum Proteobacteria and consists of 51 species, isolated mostly from root nodules of various leguminous plants [61]. Some strains of Mesorhizobium can oxidize acids (i.e., acetic acid) and assimilate sugars, in addition to being important nitrogen fixers in legume roots [62]. Methylobacterium species are opportunistic pathogens in immunocompromised patients, described as a cause of cross-contamination; they frequently colonize hospital settings, are major inhabitants of aqueous environments, including potable water supplies and hospital tap water, and some Methylobacterium infections have been associated with raw vegetable consumption [63]. Finally, a decreased incidence of pre-eclampsia with antibiotic use was demonstrated in Chinese patients with hypertension, in whom decreased microbial richness and diversity and overgrowth of bacteria such as Prevotella and Klebsiella were observed [58].
In the PE group of our work, Gemella and Staphylococcus (Firmicutes) were two taxa with differential abundance (according to DESeq2). Gemella is a common resident of mucosal membranes, with a high abundance in cases of Crohn's disease and ulcerative colitis [53]. We found that Gemella was positively correlated with triglycerides; an increased abundance of this genus was considered a risk factor in pregnancies with overweight and metabolic disease, according to a report on obese Italian adults [64,65]. Streptococcus was increased in the PE group; the abundance of this genus has been reported to be higher in numerous inflammatory diseases [53], and alterations in its prevalence may alter vascular tone and contribute to the development of hypertension and pre-eclampsia [66].
In the PD group of our work, Proteobacteria increased and the phylum Firmicutes decreased, along with other taxa. Some genera belonging to the family Rhizobiaceae detected in the PD group are known as potential nitrogen-fixing symbionts of legumes, isolated from root nodules [67]. Other studies report that genera like Rhizobium are found as contaminants of DNA extraction and PCR kits, as is also the case for Methylobacterium-Methylorubrum [68]. Results for relative abundance at the genus level show that, although the abundance of some taxa did not differ significantly among groups, these taxa are still important, since they are associated with changes occurring in pregnancy. For example, Prevotella (Bacteroidota) and Clostridium (Firmicutes) display different changes in diseases such as hypertension and diabetes (type 1 or 2) [58]. Pseudomonas was differentially abundant in our work in patients with PD; this genus has been reported as an opportunistic pathogen associated with diabetes mellitus in mice [69].
Concerning SCFA production, in the GD group a direct correlation between the genus Lactococcus and the presence of propionic acid was observed. In a study conducted in mice, the administration of a food supplement was associated with an increase in Lactococcus and other microbial genera and with greater production of SCFAs, including propionic acid, as reported in our work [70]. In another study, a correlation of this genus with propionic acid originating from the biotransformation of L-threonine and L-methionine was observed [71]. A positive correlation of the genus Streptococcus with butyric acid was also observed in patients with GD in our group. Other studies have shown that species of this genus, such as Streptococcus mitis, are capable of oxidizing butyric acid mainly to acetic acid [19]; this could be a mechanism of regulation and compensation of acetic acid, since many genera in GD, mostly belonging to the phylum Proteobacteria, showed an inverse correlation with acetic acid.
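Genus-SCFA associations like the ones above are typically screened as pairwise correlations between relative abundances and metabolite concentrations. A minimal sketch using Pearson's r on hypothetical paired measurements (the study's own correlation method and data are not reproduced here):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical paired data: Lactococcus relative abundance vs. fecal
# propionic acid concentration (arbitrary units) across six samples
lactococcus = [0.01, 0.03, 0.02, 0.05, 0.04, 0.06]
propionate = [1.2, 2.1, 1.6, 3.0, 2.6, 3.3]

r = pearson_r(lactococcus, propionate)
print(round(r, 2))  # strong positive correlation
```

In practice, such screens are run over many genus-metabolite pairs, so p-values would also need multiple-testing correction.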
A previous study found that the metabolic pathways related to the intestinal microbiota in patients with gestational diabetes differ from those in healthy female controls [72]; that study further showed that the amino acid content of fecal samples was decreased in patients with gestational diabetes. In our work, we observed an overall decrease in the metabolic pathways involved in the metabolism of amino acids such as tryptophan, alanine, aspartate, glutamate, cysteine, and methionine. It has been observed that, in patients with gestational diabetes, there is an increase in serum levels of branched-chain amino acids, such as isoleucine, tyrosine, and alanine [73]. Lactobacillus and Bacteroides are bacteria related to amino acid metabolism (especially of tryptophan) that were found with decreased abundance in the GD group of our work. In germ-free mice colonized with Lactobacillus and Bacteroides, a particular species, Lactobacillus reuteri, was able to promote the production of double-positive intraepithelial lymphocytes (DP IELs) [74]. DP IELs are present in the small intestine, where they normally help the body tolerate food components and other foreign molecules by attenuating immune responses. The importance of this study in germ-free mice is that these bacteria needed tryptophan to promote the production of this type of host cell, which plays a role in reducing the low-grade inflammation that is common in patients with gestational diabetes. We also observed metabolic pathways related to the use of carbohydrates that have previously been reported for gestational diabetes. In general, affected women show a decrease in the correct assimilation of carbohydrates obtained through the diet [23], which is related to dysbiosis of the gut microbiota, since there is a decrease in microorganisms related to the use of carbohydrates and an increase in bacterial species related to insulin resistance (Akkermansia) and glucose intolerance (Blautia) [27].
Concerning the differences in metabolic pathways found between the CO and PD groups, an increase in pathways related to LPS synthesis was apparent in the PD group. In general, type II diabetes mellitus is associated with obesity and the consumption of high-fat diets, which in turn increase intestinal permeability through high serum levels of LPS, favoring the low-grade inflammation characteristic of these patients [50].
The comparison of metabolic pathways between the CO and PE groups revealed an increase in vitamin B6 degradation in the CO group. The metabolism of the intestinal microbiota provides the host with many nutrients, including amino acids and B-complex vitamins such as vitamin B6, important cofactors for carbohydrate metabolism and DNA synthesis. Large amounts of B vitamins are thus obtained from the diet or the intestinal microbiota. Vitamin B6 metabolism has been associated with bacteria such as Bacteroides, Faecalibacterium, E. coli, Klebsiella, and Salmonella, among others [75], and a decreased abundance of Bacteroides was found in the PE group of our work.
The metabolome analysis did not show differential metabolites among our studied groups. There are few studies in which candidate metabolite biomarkers for gestational diabetes have been evaluated. A review on this topic reports changes in free fatty acids (FFAs), branched-chain amino acids (BCAAs), lipids, and organooxygen compounds that differentiated gestational diabetes from controls [76]. However, most of these studies analyzed the metabolome of serum samples; few metabolome studies have been performed on fecal samples from patients with gestational diabetes. In one study, the metabolome of fecal samples from pregnant women with gestational diabetes and controls at 24-28 gestational weeks was evaluated with 1H-NMR; a clear clustering of metabolites between the two groups was observed and five biomarker metabolites for gestational diabetes were proposed [77]. Further metabolomic studies with more samples are needed to identify the specific microbial metabolites and pathways involved in diabetic onset and pathology. The results obtained in our work suggest that disturbances of the gut microbiota contribute to the occurrence of GD, PD, and PE.
Conclusions
In this work, we found fecal microbial profiles, with predicted metagenomes, associated with different gestational health conditions (GD, PD, and PE) in Mexican women. Although a major limitation of this work is the low number of samples, the results and conclusions are valid for the studied participants.

Supplementary Materials: Table S11. DESeq2 comparative analysis for Control versus Pre-Eclampsia groups; Table S12. Comparative differences between means, p, and corrected p-values for the predicted metagenomes; Table S13. Metabolites identified with positive ionization with ESI FT-ICR mass spectral analysis; Table S14. Metabolites identified with negative ionization with ESI FT-ICR mass spectral analysis; Figure S1. Rarefaction curves; Figure S2. Correlogram showing anthropometric, biochemical and diversity data for CO group; Figure S3. Correlogram showing anthropometric, biochemical and diversity data for GD group; Figure S4. Correlogram showing dietary and diversity data for CO group; Figure S5.

Data Availability Statement: The sequence FASTQ files and corresponding mapping files for all samples used in this study were deposited in the NCBI repository BioProject PRJNA884382, https://www.ncbi.nlm.nih.gov/sra/PRJNA884382 (accessed on 10 October 2022).
β-carotene and vitamin E in the dairy industry: blood levels and influencing factors – a case study in Flanders
ABSTRACT

In this case study performed in Flemish dairy herds, it is shown that lactation stage, farm type (grazing (fresh grass) or zero-grazing) and season are interrelated factors associated with circulating β-carotene (bC) and vitamin E (VitE) concentrations. The iCheck bC is an easily applicable cow-side test to evaluate a cow's bC status. One third of the dairy cows in the study had deficiencies in circulating bC and VitE, especially cows in early lactation and cows from zero-grazing farms. Fresh grass in the diet could not resolve the early post-partum decline in plasma bC and VitE. However, the bC and VitE statuses of dry cows were significantly better on grazing farms. These findings can help update antioxidant recommendations, since it is clear that there is a need for optimization of antioxidant nutritional management in the Flemish dairy industry in order to feed for optimal dairy cow health.
INTRODUCTION
With the onset of lactation, cows enter a period of negative energy balance (NEB) with increased lipolysis, resulting in elevated serum non-esterified fatty acid (NEFA) and β-hydroxybutyrate (BHB) concentrations (Adewuyi et al., 2005). This rapid mobilization of body reserves may in turn reduce appetite and thus dry matter intake (Vernon, 2005). Furthermore, this period of increased metabolic demands implies an increase in the production of reactive oxygen species by mitochondria, which are produced as by-products of aerobic metabolism. These changes in oxidative metabolism result in oxidative stress (OS) during the transition period, and several studies have shown that OS is an incentive for the occurrence of diseases and increases dairy cow susceptibility to suboptimal management, e.g. housing conditions (grazing or stalled) and composition of the ration (fresh grass, antioxidant intake) (Bernabucci et al., 2005; Castillo et al., 2005b; Roche, 2006; Sordillo and Aitken, 2009). The total antioxidative capacity of NEB cows is often insufficient (Castillo et al., 2005; De Bie et al., 2014), and may be further reduced by heat stress (Bernabucci et al., 2010) and suboptimal antioxidant uptake through the diet. With fresh grass being the major source of dietary vitamins and antioxidants (AO) such as β-carotene (bC) and vitamin E (VitE) (Ballet et al., 2000), it contributes significantly to the health and antioxidative status of dairy cows. As expanding herd sizes outgrow the 'grazing platform' of a dairy farm, the dairy industry has evolved into zero-grazing systems with increased use of ensiled forages and hay, which are low in vitamins and antioxidants (Reijs et al., 2013; Wilkinson and Rinne, 2018). This, together with the current faster-growing dairy industry and higher-producing animals kept in more intensified dairying, jeopardizes the cow's metabolic health (James, 2012) and might increase the incidence of vitamin and antioxidant deficiencies in the dairy industry.
Supplementation guidelines originating from 2001 (NRC) need to be re-evaluated according to the current AO needs of the modern dairy industry (Abuelo et al., 2015). Interventional studies on bC and VitE supplementation are rather univocal, confirming that optimized AO supplementation may positively influence dairy cow health and fertility (Miller et al., 1993; de Ondarza and Engstrom, 2009). Designing ready-to-use AO supplementation protocols is a real challenge due to the lack of a complete understanding of the interrelated factors influencing the AO status of modern high-yielding dairy cows. Moreover, information on the actual bC and VitE statuses of dairy cows (with emphasis on Flanders, the north of Belgium) is lacking; such information could contribute to the optimization of the AO status of dairy herds. As such, the authors aimed to: 1) investigate the associations between lactation stage, type of farm (grazing (fresh grass in the diet) or zero-grazing (no fresh grass in the diet)) and season on the one hand and plasma bC and VitE concentrations in dairy cows on the other hand, and 2) investigate the current bC and VitE statuses as a measure of the antioxidant status in the dairy industry, using Flanders as a base.
Selection of dairy farms
Dairy farms in Flanders were invited to participate in a survey to estimate the AO status of high-yielding dairy cows through a call on the website of Dierengezondheidszorg Flanders (DGZ, Drongen, Belgium) in September 2014. Out of 48 interested farms, a total of fourteen were selected, diffusely located in Flanders: seven grazing farms (fresh grass in the ration) and seven zero-grazing farms (no access to fresh grass). The average Flemish dairy farm in 2015 counted 71 cows and had an annual milk yield per cow of 8,515 kg (Coöperatie Rundvee Verbetering, CRV, 2015). In order to have a representative cohort, only dairy farms with a minimum of fifty lactating animals and a minimum annual milk yield per cow of 8,500 kg were included in the study.
Animals, blood collection and study design
In Figure 1, an overview of the study design is given. All fourteen dairy farms were visited three times: 1) at the beginning of autumn (AUT, Oct-Nov), immediately after the grazing season; 2) at the end of winter (WIN, Feb-Mar), when all cows had been stalled inside for winter; and 3) during summer (SUM, Jul-Aug), when cows on grazing farms had access to fresh grass and day temperatures and temperature-humidity indexes (THI) increased (the official air temperature and relative humidity on the day of sampling, collected from www.meteo.be, were used to calculate the THI with http://www.abstechservices.com/?pages=calc4). At each visit, five dry cows (DRY, 2-4 weeks before calving), five cows in early lactation (EARLY LACT, 0-3 weeks after calving) and five cows in mid lactation (MID LACT, at the time of artificial insemination, ± 12 weeks after calving) were randomly chosen on each farm. Only multiparous cows were sampled in this study. After disinfecting the skin with 70% ethanol, blood for plasma was sampled from the udder vein into EDTA tubes (BD Vacutainer® K2EDTA, BD, Plymouth, UK) and gently mixed. Serum was sampled in clot-activating tubes (BD Vacutainer® SST TM II Advance tubes). After collection, all blood tubes were transported at room temperature and protected from light until further processing.
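The THI referenced above combines air temperature and relative humidity into one heat-load index. A minimal sketch using a widely cited NRC-style formula (an assumption: the exact formula behind the online calculator cited in the text is not given there):

```python
def thi(temp_c, rel_humidity_pct):
    """Temperature-humidity index from air temperature (deg C) and relative
    humidity (%), using a common NRC-style formula:
    THI = (1.8*T + 32) - (0.55 - 0.0055*RH) * (1.8*T - 26)
    This specific formula is an assumption; several THI variants exist."""
    t_f = 1.8 * temp_c + 32  # air temperature converted to Fahrenheit
    return t_f - (0.55 - 0.0055 * rel_humidity_pct) * (t_f - 58)

# Example: a warm, moderately humid summer sampling day
print(round(thi(25.0, 60.0), 2))  # 72.82
```

Values in the low 70s are commonly taken to indicate mild heat stress in dairy cows, which is consistent with the summer visits described above.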
On the day of blood sampling, the exact number of days after calving, the body condition score (BCS) and the parity of each individual cow, and the THI in summer were recorded. In addition, the milk production (mean annual milk yield per cow on each farm) and the average calving interval (CI) of each farm were recorded.
An overview of the composition of the lactation and dry-cow rations (on DM basis) is presented in Table 1. The rations consisted of corn and grass silage, hay, beet pulp and concentrates (and pasture on grazing farms). On grazing farms, lactating cows were typically allowed on pasture for on average 9 hours daily, and dry cows for 24 hours per day. The estimated maximum fresh grass intake of the grazing cows was 6 kg DM/day. During winter, all cows (from grazing and zero-grazing farms) were stalled inside and did not consume any fresh grass.
The actual intake of fresh grass and other components of the ration can vary significantly under field conditions, as they rely on the appetite of the cow, the availability and reachability of feed at the feed bunk, competition between animals, etc. Furthermore, the AO content of roughages may vary over time as well, influenced by conservation time, UV exposure (season), pH and other factors (Ballet et al., 2000). In accordance with LeBlanc et al. (2004), the authors did not study the effects of detailed dietary components and/or exact feed intake in a large multi-herd case study. The specific aim was to broadly screen the average circulating bC and VitE concentrations as a measure of the antioxidative status of grazing (fresh grass in diet) and zero-grazing (no fresh grass in diet) dairy farms. However, next to the presence of fresh grass on grazing farms (pasture or freshly cut grass, regardless of the proportion of this fresh grass in the total ration), the following objectively measurable dietary factors under field conditions were taken into account: 1) whether the cows housed on grazing farms were grazing or had access to fresh grass at the specific moment of blood sampling, 2) whether the farmer added vitamin supplements of any kind to the diet (regardless of the amount) (Yes/No) and 3) whether the farmer added bC supplements to the diet (regardless of the amount) (Yes/No).
Analysis of blood parameters
Inter-assay coefficients of variation (CV) are indicated between brackets. Plasma bC was photometrically analyzed with the iCheck TM (BioAnalyt GmbH, Germany) (2.3 %) according to . Serum VitE (9.5 %) was analyzed by means of liquid-liquid extraction and HPLC with UV detection at 292 nm (1260 Infinity, Agilent Technologies, Santa Clara, USA). To estimate the metabolic impact of the negative energy balance in the cows, NEFA (5 %) and BHB (3.5 %) were determined colorimetrically and enzymatically (Randox Laboratories, Crumlin, United Kingdom) in serum with a Gallery TM Plus Automated Photometric Analyzer, with detection at 550 nm and 340 nm (Thermo Fisher Scientific, Waltham, USA), respectively. In addition to bC and VitE, plasma concentrations of glutathione peroxidase (GPx; 1.8 % intra-assay, 6.8 % inter-assay CV) were routinely analyzed as a measure of the AO status according to Paglia and Valentine (1967), by means of a commercially available GPx kit (Randox Laboratories, Germany), and were spectrophotometrically detected (Cobas 8000, Rotkreuz, Switzerland).
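The coefficients of variation quoted above are the standard deviation of repeated control measurements expressed as a percentage of their mean. A minimal sketch (the control values below are illustrative, not the laboratory's quality-control data):

```python
import statistics

def coefficient_of_variation(values):
    """Percent CV: sample standard deviation divided by the mean, times 100."""
    mean = statistics.fmean(values)
    return statistics.stdev(values) / mean * 100

# Hypothetical repeated measurements of one control sample across assay runs
control_runs = [3.50, 3.42, 3.58, 3.46, 3.54]
print(round(coefficient_of_variation(control_runs), 2))  # 1.81
```

A CV of about 2 % would be in the same range as the iCheck bC figure reported in the text, while the VitE assay's 9.5 % indicates noticeably more run-to-run spread.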
iCheck β-carotene analysis and evaluation of the antioxidant status of dairy cows
A portable spectrophotometer (iCheck TM, BioAnalyt, Germany) was used to assess the blood bC concentration of each cow. This method was evaluated under field conditions by analyzing identical samples for blood bC with: 1) the portable 'on farm' iCheck TM method (612 cows), 2) a laboratory chromatographic method (liquid-liquid extraction and HPLC with UV-VIS detection at 450 nm, 6.1 % CV; Surveyor LC Pump Plus, Autosampler Plus and PDA Detector, Thermo Fisher Scientific, Belgium) (54 cows) and 3) a laboratory spectrophotometric method (UV detection at 450 nm, 4.5 % CV; DR3900, Hach Lange, Berlin, Germany) (612 cows). Additionally, blood was also sampled from the tail vein of the same cows (40 cows) in order to evaluate whether the source of blood (udder versus tail vein) influences the iCheck TM bC analysis.
Statistical methods
The influence of lactation stage, season and farm type on bC, VitE, GPx, logNEFA, and logBHB (hereafter referred to as 'the outcomes') was modeled using linear mixed models. For each of the five outcomes, the linear mixed model was built using a stepwise backward approach, starting from a full model including lactation stage, season, farm type and their pairwise interactions. In addition, the main effects of parity, bC supplementation, vitamin supplementation and CI were included in the initial model. To account for the dependence between observations on the same cow (random sampling of identical cows occurred occasionally) and for observations within the same farm, a random intercept for cow and farm was included, plus random slopes for season and lactation stage. In case of a significant pairwise interaction, the data were split according to one of the interacting variables and the effect of the other interacting variable was reported in the separate groups. If the pairwise interaction was non-significant, the interaction term was removed from the model and the significance of the main effects was tested using the F-test with Kenward-Roger correction for the number of degrees of freedom.
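The logNEFA and logBHB outcomes above are log-transformed concentrations: metabolite data of this kind are strongly right-skewed, and a log transform pulls them toward symmetry before model fitting. A minimal sketch with simulated values (illustrative, not the study's measurements):

```python
import math
import random

random.seed(42)

# Simulate right-skewed NEFA-like values (mM) from a lognormal distribution
nefa = [random.lognormvariate(-1.2, 0.6) for _ in range(1000)]
log_nefa = [math.log(x) for x in nefa]

def skew(xs):
    """Sample skewness (third standardized moment, biased estimator)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

print(round(skew(nefa), 2))      # clearly positive (right-skewed)
print(round(skew(log_nefa), 2))  # near zero after the log transform
```

The log of a lognormal variable is exactly normal, which is why the transformed sample's skewness collapses toward zero; real NEFA/BHB data are only approximately lognormal, but the transform has the same stabilizing effect.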
Body condition scores were compared between lactation stages, and the annual milk yield per cow of each farm was compared between grazing and zero-grazing farms, using a mixed model and one-way ANOVA, respectively. Pairwise correlations between iCheck TM bC and laboratory-analyzed bC values or VitE concentrations were expressed using the Pearson correlation coefficient. Plasma bC sampled from the udder vein was compared with plasma bC withdrawn from the tail vein using one-way ANOVA. A log transformation was applied to correct for abnormality and inhomogeneity of variances when necessary. All data were presented as means ± SEM. Analyses were carried out in IBM SPSS Statistics 23 for Windows (Chicago, IL, USA) or in R 3.2.1 (R Core Team, 2014). The threshold for statistical significance was set at P < 0.05.

Concentrations (µg/mL, means ± SEM) per lactation stage in grazing and zero-grazing farms:

LactStage                  Grazing         Zero-grazing
DRY (3.90 ± 0.10)          4.50 ± 0.20 a   3.40 ± 0.10 b
EARLY LACT (3.00 ± 0.10)   2.90 ± 0.20 a   3.00 ± 0.10 a
MID LACT (5.20 ± 0.10)     5.00 ± 0.20 a   5.30 ± 0.20 a

DRY = dry period; EARLY LACT = early lactation; MID LACT = mid lactation; SEM = standard error of the mean. ab Data marked with different letters in the same row differ significantly.

RESULTS
To model the effect of lactation stage, season and farm type on bC, VitE, GPx, logNEFA, and logBHB, linear mixed models were fitted. The significant terms from these models for each outcome parameter are shown in Table 2. Means ± SEM are shown in Tables 3 to 7 including the main effects of interacting variables, taking into account the other interacting variables. Effect sizes accounting for significant effects of parity, bC supplementation, vitamin supplementation and CI are described below.
Animals
A total of 630 cows from fourteen farms situated in Flanders were sampled, of which 612 samples were successfully analyzed. The total number of sampled cows in each lactation stage (DRY, EARLY LACT and MID LACT) as well as mean days post-partum, parity and BCS are presented in Table 8. The body condition scores were significantly different with the lowest scores in MID LACT and the highest scores in DRY cows (P < 0.01). The dairy farms included in this study counted a mean of 106 ± 12 cows (84 ± 7 in grazing farms, 128 ± 19 in zero-grazing farms) and had an average annual milk yield per cow of 9,280 ± 188 kg. The mean annual milk yield per cow on each farm did not significantly differ between grazing (9,166 ± 256 kg) and zero-grazing farms (9,394 ± 290 kg) and was therefore not further taken into account in the final statistical model.
β-carotene
The final model for bC included significant interactions between lactation stage (LactStage), type of farm (FarmType) and season, as well as significant main effects for vitamin supplementation (VitSupp) and parity (cf. Table 2 for exact P-values). Since there are no main effects of LactStage, FarmType and Season on plasma bC, the effect of LactStage is reported separately by Season and FarmType in Table 3. In each FarmType and season, EARLY LACT was associated with a significantly lower plasma bC compared with MID LACT. In grazing farms in AUT and SUM, the DRY period was associated with significantly higher bC concentrations than in EARLY LACT cows, which was not the case in cows from zero-grazing farms. Similarly, when focusing on the effect of FarmType in each Season and LactStage (Table 3), only in DRY cows in AUT and SUM were bC concentrations significantly higher in cows from grazing farms than in cows from zero-grazing farms. Additionally, when focusing on the effect of Season in each LactStage and FarmType (Table 3), bC concentrations in cows from grazing farms were significantly lower in SUM than in AUT. Interestingly, this reduction in circulating bC during SUM was not present in cows housed under zero-grazing conditions. VitSupp (but not bCSupp) was associated with increased plasma bC in DRY cows from grazing farms in all seasons (+1.25 ± 0.28 µg/mL bC). In contrast, VitSupp in MID LACT was significantly linked to reduced plasma bC concentrations (-1.27 ± 0.49 µg/mL bC in grazing farms and -1.04 ± 0.49 µg/mL bC in zero-grazing farms). Parity was also associated with plasma bC, with slightly reduced plasma bC concentrations with increasing parity (-0.11 ± 0.05 to -0.31 ± 0.10 µg/mL bC, depending on FarmType, LactStage and season).
Vitamin E
Whereas season did not alter VitE concentrations, LactStage and FarmType significantly interacted and influenced plasma VitE concentrations (cf Table 2 for exact P-values). Similarly to bC, VitSupp and Parity significantly altered plasma VitE concentrations and were included as a dependent variable in the final model with VitE as outcome. The main effects of LactStage and FarmType on plasma VitE could not be calculated separately and thus the effect of LactStage in each FarmType was investigated and is shown in Table 4. The effects of LactStage and FarmType on VitE concentrations were similar to the effects observed on circulating bC. VitE concentrations were significantly lower in EARLY LACT than in MID LACT cows in both types of farms. In grazing farms, DRY cows had significant higher plasma VitE than EARLY LACT cows. Similarly, when focusing on the effect of FarmType in each LactStage (Table 4), VitE concentrations were significantly higher in DRY cows in grazing farms than in zero-grazing farms.
VitSupp (but not bCSupp) was associated with increased plasma VitE concentrations in both farm types, but only in DRY cows (+1.00 ± 0.28 µg/mL VitE). Increasing parity was linked to reduced plasma VitE concentrations (-0.18 ± 0.06 µg/mL VitE), but only in cows on grazing farms.
Glutathione peroxidase
LactStage and FarmType significantly interacted and affected plasma GPx concentrations (cf Table 2 for exact P-values). Season did not influence plasma GPx concentrations. VitSupp and bCSupp but not Parity significantly altered plasma GPx concentrations as well, and were included as a dependent variable in the final model with GPx as outcome. The main effects of LactStage and FarmType on plasma GPx could not be calculated separately and thus the effect of LactStage in each FarmType was investigated (Table 5). Only in cows from zero-grazing farms, GPx concentrations were significantly lower in EARLY LACT than in MID LACT. DRY cows from grazing farms had significantly higher plasma GPx concentrations than MID LACT cows and EARLY LACT cows. Regardless of these findings, no significant impact of type of farm was found (Table 5).
VitSupp and bCSupp were significantly associated with reduced plasma GPx concentrations, but only in cows from grazing farms (-80.76 ± 27.29 U/gHb GPx when supplemented with vitamins and -159.02 ± 46.36 U/gHb GPx when supplemented with bC).
Non-esterified fatty acids
Plasma NEFA concentrations were heavily skewed and subsequently log-transformed in the final model. None of the three main factors (LactStage, FarmType and Season) significantly interacted for logNEFA concentrations, but they all had a significant main effect on plasma logNEFA concentrations (cf. Table 2 for exact P-values). VitSupp, parity and CI were significantly linked to plasma logNEFA and were included as dependent variables in the final model with logNEFA as outcome. Mean NEFA concentrations ± SEM in each group of interest are shown in Table 6. Circulating NEFAs were significantly higher in EARLY LACT cows than in DRY and MID LACT cows (Table 6). Plasma NEFA concentrations were significantly, but only 0.01 mM, higher in cows from zero-grazing farms than in cows from grazing farms (Table 6). Plasma NEFAs were significantly higher during SUM than during AUT and WIN (Table 6). VitSupp was linked to reduced plasma NEFA concentrations (-0.04 ± 0.06 mM NEFA), whereas increasing parity and CI were associated with increased circulating NEFAs (+0.03 ± 0.01 and +0.01 ± 0.00 mM NEFA, respectively).
β-hydroxybutyrate
Plasma BHB concentrations were heavily skewed and subsequently log-transformed in the final model. Season and LactStage significantly interacted and affected plasma logBHB concentrations (cf. Table 2 for exact P-values). FarmType did not influence plasma logBHB concentrations. VitSupp and parity were significantly associated with plasma logBHB concentrations and were included as dependent variables in the final model with logBHB as outcome. Mean BHB concentrations ± SEM by LactStage and Season are shown in Table 7. Only in WIN and SUM did EARLY LACT cows have significantly higher plasma BHB concentrations than MID LACT cows. In EARLY LACT cows, BHB concentrations were significantly higher in SUM than in AUT. Despite a significant main effect of VitSupp and parity on BHB concentrations (Table 2), no differences were found in plasma BHB in cows due to VitSupp or parity.
iCheck β-carotene analysis and the antioxidant status of dairy cows
The iCheck™ bC concentrations measured under field conditions (n = 612) correlated significantly with the spectrophotometric bC concentrations (n = 612) (R = 0.798) and the chromatographic bC concentrations (n = 54) (R = 0.769) measured under laboratory conditions. β-carotene in blood sampled from the udder vein (3.55 ± 0.24 µg/mL, n = 40) was similar to bC concentrations in blood withdrawn from the tail vein (3.63 ± 0.25 µg/mL, n = 40). iCheck bC also correlated significantly with VitE concentrations (R = 0.668). The results of the bC and VitE statuses of Flemish dairy cows are presented in Figures 2a and 2b. When bC reference values were applied, 20 % of the sampled cows had an optimal bC level of > 3.5 µg/mL, whereas 23 % of the cows had deficient bC concentrations of < 1.5 µg/mL. The majority of cows had suboptimal levels of bC (between 1.5 and 3.5 µg/mL). In the case of VitE, 64 % of the cows had sufficient levels of VitE (≥ 3 µg/mL), but 36 % of the sampled cows showed deficient circulating VitE concentrations (< 3 µg/mL). Notably, the majority of cows with deficient bC and VitE levels were EARLY LACT cows (54 % for bC and 59 % for VitE), while DRY cows accounted for 34 % (bC) and 30 % (VitE) and MID LACT cows for 12 % (bC) and 11 % (VitE). In total, 77 % of the cows deficient in bC and 59 % of the cows with deficient VitE levels came from zero-grazing farms.
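The method-agreement figures above are Pearson correlations on paired measurements. A minimal sketch with synthetic paired data (all values illustrative, not the study's data), including the banding into deficient/suboptimal/optimal classes used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired measurements: laboratory (spectrophotometric) bC in ug/mL,
# and a field reading that tracks it with some measurement noise.
bc_lab = rng.uniform(0.5, 6.0, size=200)
bc_field = bc_lab + rng.normal(0.0, 0.4, size=200)

# Pearson correlation between the two methods.
r = np.corrcoef(bc_field, bc_lab)[0, 1]
print(f"Pearson R = {r:.3f}")

# Classification into the reference bands used in the study.
deficient = np.mean(bc_lab < 1.5)        # < 1.5 ug/mL
optimal = np.mean(bc_lab > 3.5)          # > 3.5 ug/mL
suboptimal = 1.0 - deficient - optimal   # 1.5-3.5 ug/mL
```

Note that a high correlation alone does not prove agreement in absolute values; a Bland-Altman-style comparison of the paired differences would complement it.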
DISCUSSION
β-carotene and VitE have been proposed as important antioxidants in metabolically stressed dairy cows, as their serum concentrations are indicative of their antioxidative status and linked to health and fertility outcomes (Jukola et al., 1996; Ayasan and Karakozak, 2010; Nayyar and Jindal, 2010). However, knowledge of the actual bC and VitE statuses of dairy cows, as well as of the interrelationship of factors influencing that antioxidant status, is currently lacking. In this study, the authors aimed to investigate the antioxidant status (focusing on bC and VitE) on dairy farms in Flanders, taking into account factors such as lactation stage of the cow, type of farm (grazing or zero-grazing) and season. The study showed that these factors were associated with the levels of plasma bC and VitE and red blood cell GPx, and that they were interrelated. One third of the dairy cows sampled in this study had deficient circulating bC and VitE concentrations. It could be confirmed that the early postpartum period in particular is the most critical one in terms of circulating bC, VitE and GPx, coinciding with the NEB status.
Plasma bC and VitE levels reached their nadir early post-partum, whereas serum NEFA and BHB were highest during this period, indicative of a metabolic status of NEB (Drackley et al., 2001). This reduction in bC and VitE concentrations peri-partum has been reported previously (Calderon et al., 2007; Sharma et al., 2011) and can be attributed to: 1) the reduced dry matter intake post-partum (Grummer et al., 2004; Calderon et al., 2007), resulting in less antioxidant uptake from the ration; 2) the increased use of antioxidants during this NEB state, since cows suffer from elevated OS early post-partum (Bernabucci et al., 2002); or 3) the massively increased milk production and the loss of fat-soluble antioxidants via colostrum and milk (Calderon et al., 2007; Kankofer and Albera, 2008).
The type of farm was associated with altered plasma bC and VitE concentrations in dry cows only. This implies that fresh grass-based diets high in bC and VitE can increase plasma bC and VitE levels in dry cows, as shown by Calderon et al. (2007). However, dry cows on grazing farms are often kept on fields with low-quality pasture to avoid high calcium and potassium intakes pre-partum. As such, the observed higher bC and VitE levels in dry cows from grazing farms may be explained by the release of bC and VitE reserves from fat depots when needed, which are not excreted via milk in dry cows. This accumulation in lipid stores most probably takes place during the last phase of lactation, when cows are in positive energy balance and still grazing on pasture (Baldi, 2005; Noziere et al., 2006). Similar to the observed increase in AO in dry cows under grazing conditions, the authors previously showed that bC supplementation exceeding daily recommendations (>300-1,000 mg bC per head per day; Calsamiglia and Rodríguez, 2012), but matching the amount of bC intake at grazing (2,000 mg bC; Kawashima et al., 2010), could increase circulating bC levels in non-lactating (dry, but not pregnant) cows (De Bie et al., 2016). Moreover, this interventional study also showed that daily bC supplementation was associated with increased bC levels in NEB cows as well (De Bie et al., 2016). In the present field study, grazing could not be linked with increased bC levels in NEB cows in early lactation. In accordance, Johansson et al. (2014) showed that cows on organic dairy farms, receiving fresh grass and legume silages, could fulfill their VitE and VitA (a metabolite of bC) requirements without supplementation, except around peak lactation.
These observations further emphasize the impact of the three factors described above (reduced DMI and thus AO uptake, increased OS, increased loss of AO via milk) on the cow's AO status early post-partum, explaining the absence of any effect of fresh grass in the diet on bC levels in lactating cows.
Season was significantly associated with altered plasma bC concentrations, but only in cows from grazing farms. Unexpectedly, bC levels in cows from grazing farms were not highest during SUM, when all cows were grazing, but in AUT, when 52 % of the cows had already been stalled and temperatures had cooled down. In four out of five grazing farms in SUM, blood was collected at a moment when the THI reached a mean of 79 ± 4, indicating moderate heat stress (Armstrong, 1994). Heat stress may be responsible for the reduced circulating bC concentrations observed in SUM (Quintela et al., 2008). In addition, fresh grass contains high concentrations of bC, which is sensitive to breakdown by UV, lowering the bC uptake from the ration and thus resulting in reduced circulating bC in SUM (Ballet et al., 2000). In zero-grazing farms, no effect of season on circulating bC concentrations could be observed. This may be attributed to the presence of shade in the stables, which is known to reduce heat stress (Armstrong, 1994), or to the fact that the ration of cows housed on zero-grazing farms contained no fresh grass, which is vulnerable to breakdown by UV.
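The temperature-humidity index (THI) referenced above combines ambient temperature and relative humidity. The study does not state which formulation was used, so the sketch below uses one widely cited formulation (an assumption); values around 72 and above are often taken to indicate the onset of heat stress.

```python
def thi(temp_c: float, rel_humidity: float) -> float:
    """Temperature-humidity index; one common formulation (assumed here),
    with temp_c in degrees Celsius and rel_humidity in percent."""
    t_f = 1.8 * temp_c + 32.0  # convert to Fahrenheit
    return t_f - (0.55 - 0.0055 * rel_humidity) * (t_f - 26.0)

# Example: a warm, moderately humid summer day falls near the
# heat-stress threshold, while a mild day stays well below it.
print(thi(30.0, 60.0))  # ~72.8
print(thi(25.0, 50.0))  # ~63.0
```

At higher humidity the evaporative-cooling term shrinks, so the same air temperature yields a higher THI.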
Next to dietary antioxidants, GPx (present in e.g. erythrocytes) is an important intracellular enzyme, which represents a major antioxidant defence mechanism in the body (Cohen and Hochstein, 1963). In the present study, a decline in red blood cell GPx was observed during early lactation, which is consistent with other reports (Festila et al., 2013; Konvičná et al., 2015). The highest GPx concentrations were observed in dry cows from grazing farms. It can be assumed that the availability of fresh grass during or prior to the dry period can increase the capacity of the defence mechanisms against OS, which is supported by the high bC and VitE levels described earlier in dry cows from grazing farms. As such, the type of ration prior to or during the dry period seems to be of high importance for the circulating bC, VitE and GPx levels of dry cows in particular. In this regard, the most recent supplementation guidelines recommend higher supplementation of bC and VitE in cows during the dry and early lactation period than in late post-partum cows (Calsamiglia and Rodríguez, 2012).
As described above, bC and VitE levels in dairy cows vary according to lactation stage, farm type and season, but the type of forage or nutritional vitamin uptake is the predominant factor affecting circulating bC and VitE (Noziere et al., 2006). As such, the influence of vitamin and bC supplements was taken into account in this study. Increased plasma bC and VitE levels were detected when the farmer indicated that extra vitamins were supplemented in the ration, but only in dry cows. Surprisingly and in contrast, vitamin and/or bC supplementation in mid-lactating cows was associated with reduced plasma bC and reduced red blood cell GPx concentrations. Caution is needed when interpreting these results, especially because farmers with herd problems may supplement their cows more readily in order to improve dairy cow health, transition, production and/or reproduction. As such, the factor 'vitamin or bC supplementation' may be associated with reduced circulating AO as a result of health problems during the transition period, which are themselves associated with increased oxidative stress and thus reduced circulating AO.
In search of the bC status of cows, an easily applicable cow-side test (iCheck™) was used. This portable spectrophotometer allows direct on-farm analysis of whole blood samples from the tail vein within five minutes. This cow-side bC test has previously been validated under controlled experimental conditions, showing a high correlation between iCheck bC and bC analyzed by means of HPLC (R = 0.990). For the first time, this bC test has now been used under variable field conditions and showed good correlation with bC analyzed by HPLC (R = 0.798). In this way, the dairy farmer and the veterinarian are provided with a highly efficient analyzing strategy to rapidly screen the bC status of the herd. Moreover, thanks to the correlation between iCheck bC and VitE, the VitE status can also be estimated from the iCheck bC levels. These blood bC and VitE concentrations contribute to the AO defence system (Sies, 1993), which provides an indication of the AO status of cows within a farm. Results of this screening in the present study showed that about one fourth of all cows were deficient in circulating bC, while one third of the cows had deficient VitE levels. Not surprisingly, the majority of the cows deficient in bC and VitE were cows in early lactation and on zero-grazing farms. Nevertheless, a significant number of cows on grazing farms (± one third) still had deficient bC and VitE concentrations. As previously discussed, it is clear that the early lactation stage, in which cows go through a NEB period, is responsible for these deficiencies in bC and VitE on grazing as well as zero-grazing farms. Focusing on reproduction, this NEB-induced reduction in AO levels is reflected in the oocyte's micro-environment, the follicular fluid. Strategically supplied bC can improve antioxidant concentrations in the follicular fluid regardless of the energy state of the cow, which provides opportunities for the improvement of fertility (De Bie et al., 2016).
However, that study was performed in a non-lactating, dietary-induced NEB cow model. Integrating this with the data of the present study, it can be concluded that optimizing the AO status of early lactating dairy cows may be a challenge. Nowadays, commercially established vitamin requirements have substantially increased (Calsamiglia and Rodríguez, 2012), with an obvious need for increases in daily bC and VitE supplementation per head. Despite the existence of this commercial advice and the awareness among dairy farmers of the importance of optimal vitamin nutrition (reflected by the fact that 71 % of the Flemish dairy farms in this study supplemented their cows with vitamins of some kind), still 25 to 35 % of Flemish dairy cows have deficient bC and VitE levels. Possibly, the lack of knowledge on the factors influencing these circulating antioxidants, and thus uncertainty about when and which cows to supplement, is responsible for the deficiencies observed in the dairy industry. Moreover, in 2001, the NRC concluded that there was insufficient knowledge on the action of bC in dairy cattle and the factors influencing its presence to formulate any official requirements for dairy cows (NRC, 2001). The knowledge generated in the present study may help to formulate new bC and VitE recommendations, since it is clear that there is a need for the optimization of antioxidant nutritional management in the dairy industry, especially for early lactating cows and cows from zero-grazing farms.
The farms included in this case study are representative of the average dairy farm in Flanders based on average milk yield and mean number of cows per farm. Models predict that the number of grazing farms in North-West Europe will continue to decline, from the current two thirds of cows grazing to only one third by 2025 (Reijs et al., 2013). These results possibly also apply to the southern part of Belgium and other countries throughout Europe and beyond that have a similar climate and follow the same trend of fewer grazing farms and an increased use of silages. This further emphasizes the growing importance of the optimization of AO nutrition in the dairy industry in general. The authors believe that the conclusions drawn in this study are relevant for, and may be applicable to, dairy farms in Western Europe managed under climate and feeding conditions similar to those of the cohort in the current study.
CONCLUSION
The dataset revealed that the factors influencing circulating bC and VitE are all interrelated and mainly depend on the metabolic state of the cow and the management practices of the farmer: lactation stage of the cow, farm type (grazing versus zero-grazing), season, extra vitamin supplementation and parity. The presence of fresh grass in the ration could not prevent the NEB-induced decline in circulating bC, VitE and GPx, but dry cows in particular seem to benefit most from being housed on a grazing farm. Finally, at least one third of the dairy cows were shown to have deficient circulating bC and VitE concentrations, especially cows in early lactation and cows housed under zero-grazing conditions. The easily applicable cow-side bC test (iCheck™) was used under field conditions and is well suited to assessing a cow's bC status quickly. This study emphasizes that more attention to antioxidant nutrition is required in the dairy industry, which would provide an important opportunity for improving the health and fertility of the modern high-yielding dairy cow kept under rapidly evolving management conditions.
An efficient scheme for numerical solution of Burgers’ equation using quintic Hermite interpolating polynomials
A numerical scheme combining the features of quintic Hermite interpolating polynomials and the orthogonal collocation method has been presented to solve the well-known non-linear Burgers' equation. The quintic Hermite collocation method (QHCM) solves the non-linear Burgers' equation directly, without converting it into linear form using the Hopf–Cole transformation. The stability of the QHCM has been checked using the Euclidean and supremum norms. Numerical values obtained from QHCM are compared with the values obtained from other techniques such as the orthogonal collocation method, orthogonal collocation on finite elements and the pdepe solver. Numerical values have been plotted using plane and surface plots to demonstrate the results graphically.
Introduction
The majority of problems arising in the fields of physics, engineering, chemistry, biology, etc. are modelled using linear or non-linear partial differential equations. One such equation, with numerous applications in physics and engineering, is Burgers' equation. It is a well-known non-linear problem that gives insight into the relation between convection and diffusion.
A variety of numerical methods have been developed to solve the Burgers' equation, such as finite difference schemes [12,14], the finite element method [2], quadratic B-splines [13,16], cubic B-splines [5,9], automatic differentiation [6], and the modified Adomian method [1]. In the present study, a numerical solution of Burgers' equation is obtained by applying the quintic Hermite collocation method directly, without transforming the non-linear form into linear form using the Hopf-Cole transformation. The paper is divided into six sections. Section 1 gives the introduction to Burgers' equation, whereas Sect. 2 discusses QHCM and the collocation points. In Sect. 3, the application and implementation of QHCM are discussed. In Sect. 4, the stability analysis is presented, Sect. 5 discusses all the results obtained, and finally Sect. 6 summarizes the main conclusions of the present study.
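For comparison with the finite difference schemes cited above, a minimal explicit scheme for u_t + u·u_x = ε·u_xx on [0, 1] with u(x, 0) = sin πx and zero boundary values (a standard test configuration, assumed here; ε = 1 keeps the explicit step stable) can be sketched as:

```python
import numpy as np

def burgers_fd(eps=1.0, nx=41, dt=1e-4, t_end=0.1):
    """Explicit finite-difference solver for u_t + u*u_x = eps*u_xx
    with u(x, 0) = sin(pi*x) and u(0, t) = u(1, t) = 0."""
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    u = np.sin(np.pi * x)
    for _ in range(int(round(t_end / dt))):
        ux = (u[2:] - u[:-2]) / (2 * dx)                # central first derivative
        uxx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2    # central second derivative
        u[1:-1] += dt * (eps * uxx - u[1:-1] * ux)      # forward Euler step
        u[0] = u[-1] = 0.0                              # Dirichlet boundaries
    return x, u

x, u = burgers_fd()
```

The diffusion number ε·dt/dx² = 0.16 is below the 0.5 limit of the explicit scheme, so the run is stable; with large viscosity the solution simply decays smoothly from its initial amplitude of 1.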
Quintic Hermite collocation method (QHCM)
Quintic Hermite collocation is one of the Hermite collocation methods [10,17], in which Hermite interpolating polynomials are used as basis functions. The trial function is approximated by Hermite interpolating polynomials of order 2k + 1 (k > 0). It is a generalization of Lagrange interpolation, with polynomials that interpolate not only the function at each node but also its consecutive derivatives. In general, for real numbers x 1 < x 2 < · · · < x k and positive integers m 1 , m 2 , . . . , m k , there exists a unique polynomial of degree m 1 + m 2 + · · · + m k − 1 that matches the function value and its first m i − 1 derivatives at each node x i . In the present work, quintic Hermite interpolating polynomials, which are of degree 5, are used to approximate the trial function.
Quintic Hermite interpolating polynomials can be expressed as a linear combination of three families of basis functions, which interpolate, respectively, the function value, the first derivative and the second derivative at the node points [8]. Orthogonal collocation is applied within each subdomain [x i−1 , x i ] by introducing a new variable ξ = (x − x i−1 )/(x i − x i−1 ), which maps the subdomain onto the reference interval [0, 1]; after rearranging the terms, the basis functions can also be written in a simplified form in terms of ξ.
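One way to obtain the six quintic Hermite basis polynomials on the reference interval [0, 1] is to solve for the polynomials whose value, first and second derivative at ξ = 0 and ξ = 1 realize the interpolation conditions. This is an illustrative construction, not the paper's code:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def quintic_hermite_basis():
    """Return a 6x6 matrix whose columns are the monomial coefficients of
    the quintic Hermite basis polynomials on [0, 1]. Basis i has the i-th
    interpolation condition equal to 1 and the other five equal to 0,
    with conditions ordered (value, 1st, 2nd deriv) at xi=0, then xi=1."""
    A = np.zeros((6, 6))
    row = 0
    for xi in (0.0, 1.0):
        for k in range(3):              # derivative order 0, 1, 2
            mono = np.eye(6)            # monomial coefficient vectors
            for j in range(6):
                d = P.polyder(mono[j], m=k) if k else mono[j]
                A[row, j] = P.polyval(xi, d)
            row += 1
    # A @ C = I, so each column of C satisfies one unit condition vector.
    return np.linalg.solve(A, np.eye(6))

C = quintic_hermite_basis()
```

The first column reproduces the classical shape function 1 − 10ξ³ + 15ξ⁴ − 6ξ⁵, which interpolates the function value at ξ = 0 and annihilates all other conditions.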
Collocation points
The choice of collocation points is an important characteristic to be considered in the technique of orthogonal collocation. Collocation points are taken to be the zeros of orthogonal polynomials, and the discretization end points are taken to be 0 and 1. In QHCM, four interior collocation points are taken within each element to discretize the problem. These interior collocation points are the zeros of orthogonal polynomials such as the Jacobi polynomials [3]. In the present work, the zeros of the Legendre polynomials, the special case of the Jacobi polynomials with α = β = 0, have been taken as the collocation points.
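The four interior collocation points can be generated directly from NumPy's Gauss-Legendre routine, whose nodes are the zeros of the degree-4 Legendre polynomial on [−1, 1], mapped here onto the reference element [0, 1]:

```python
import numpy as np

# Zeros of the degree-4 Legendre polynomial on [-1, 1]; leggauss also
# returns quadrature weights, which are not needed here.
nodes, _ = np.polynomial.legendre.leggauss(4)

# Map the nodes onto the reference element [0, 1] used in each subdomain.
collocation_points = (nodes + 1.0) / 2.0
print(collocation_points)
```

The points come out symmetric about ξ = 0.5, as expected for Legendre zeros.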
Application of QHCM
To apply QHCM to the system of equations of the non-linear Burgers' equation defined by (1)-(3), the approximating function is defined as a linear combination of the quintic Hermite basis functions with coefficients c i that are continuous functions of t. As defined earlier, to apply collocation, a new variable ξ is introduced within each of the subdomains so that the trial function is expressed locally on [0, 1]. The boundary conditions are defined at x = 0 and x = 1, and the system of equations (1)-(3) is enforced at the jth collocation point. After application of QHCM, the system of equations defined by (1)-(3) transforms into a set of ordinary differential equations (ODEs), with four ODEs within each subinterval [x i−1 , x i ]. Because the approximating function consists of quintic Hermite interpolating polynomials, which interpolate the first- and second-order derivatives at the node points, the additional continuity conditions are satisfied automatically. This reduces the system of partial differential equations to a system of ODEs rather than a system of differential-algebraic equations, as arises in orthogonal collocation on finite elements (OCFE) [4,7]. After implementation of QHCM, the system of equations defined in Eqs. (7)-(9) can be written in matrix form, where D is the differential operator, u is the vector of collocation solutions u i and M is the coefficient matrix at the jth collocation point. The matrix structure for QHCM is shown in Fig. 1. M is a square matrix of order 4ne × 4ne, and Du and u are vectors of length 4ne. The bandwidth of each subinterval block of M is 4 × 6, except for the first and last subintervals, whose blocks are 4 × 5 because of the boundary conditions.

Problem 3.1 First, solve the Burgers' equation (1) with the initial condition u(x, 0) = sin πx and the boundary conditions prescribed at x = 0 and x = 1.

Problem 3.2 In the second problem, a different initial condition is taken, with the corresponding boundary conditions. After the application of the quintic Hermite collocation method to the above system of equations, 4ne equations appear, with ne the number of elements. The resulting system of equations has been solved numerically in MATLAB with the ode15s solver.
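The reduction to a stiff ODE system handed to ode15s has a close SciPy analogue. The sketch below substitutes a simple central-difference spatial discretization for the Hermite collocation matrices (an illustrative stand-in, not the paper's scheme) and integrates with SciPy's BDF method, the analogue of MATLAB's ode15s:

```python
import numpy as np
from scipy.integrate import solve_ivp

def semidiscrete_burgers(eps=0.1, nx=51):
    """Method of lines for u_t + u*u_x = eps*u_xx on [0, 1] with
    u(x, 0) = sin(pi*x) and zero Dirichlet boundaries. The spatial
    operator uses central differences as a stand-in for QHCM matrices."""
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]

    def rhs(t, u):
        du = np.zeros_like(u)
        ux = (u[2:] - u[:-2]) / (2 * dx)
        uxx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        du[1:-1] = eps * uxx - u[1:-1] * ux
        return du  # boundary entries stay fixed at zero

    u0 = np.sin(np.pi * x)
    u0[0] = u0[-1] = 0.0
    # BDF is SciPy's stiff multistep solver, analogous to MATLAB's ode15s.
    sol = solve_ivp(rhs, (0.0, 0.5), u0, method="BDF", rtol=1e-6, atol=1e-8)
    return x, sol.y[:, -1]

x, u_final = semidiscrete_burgers()
```

Handing the semi-discrete system to an implicit stiff solver avoids the severe step-size restriction of an explicit scheme when ε or the spatial resolution makes the ODE system stiff.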
Stability analysis
In the present work, the stability of the numerical method has been checked using the Euclidean norm and the supremum norm. Let E = u − u γ denote the pointwise error, where u is the exact solution and u γ the collocation solution on the γth subdomain; the norms are computed from the pointwise errors E i , the corresponding weight functions w i and the length h γ of the γth subdomain.
The order of convergence can be determined from the lemma given in [11], which bounds the interpolation error by ‖u − u γ ‖ ≤ C h 6 , where C is a generic constant. Therefore, the order of convergence of quintic Hermite interpolation is h 6 . The stability analysis has been performed on the basis of the maximum and Euclidean norms and is shown in Tables 1 and 2 for different values of ε. It has been observed that both norms lie between 0 and 1.
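With uniform weights, the two norms used in the stability check reduce to straightforward vector computations; the weights and error values below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative pointwise errors E_i on one subdomain of length h.
E = np.array([0.10, -0.20, 0.05, 0.00])
h = 0.25
w = np.full_like(E, h / len(E))  # simple uniform weights (an assumption)

# Weighted Euclidean norm and supremum (maximum) norm of the error.
euclidean_norm = np.sqrt(np.sum(w * E**2))
supremum_norm = np.max(np.abs(E))
print(euclidean_norm, supremum_norm)
```

The supremum norm picks out the single worst pointwise error, while the weighted Euclidean norm averages the squared errors over the subdomain, so it is always at most the supremum norm for weights summing to no more than one.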
Results and discussion
All the numerical findings obtained in this study are described in this section. Problem 3.1 has been solved numerically using different techniques, namely QHCM, OCFE [4,7], the pdepe solver and OCM [15,18], for different values of ε and τ. In Tables 3, 4, 5, 6 and 7, the comparison between the exact and the numerical solutions obtained with the different techniques is shown. It is observed from these tables that the numerical values obtained from QHCM agree well with the exact ones, which confirms that the quintic Hermite collocation method performs better than OCFE, the pdepe solver and OCM. In Fig. 2, the numerical solution is shown for different values of ε and τ in the form of 2D plots. In Fig. 3, surface plots give a graphical view of the solution for different values of ε and all τ. In all the figures the values are symmetric and lie between 0 and 1, and the smoothness of the solution can also be observed easily from the surface plots. The corresponding comparisons for Problem 3.2 are given in Tables 8, 9, 10, 11 and 12, where it can again be observed that the numerical values from QHCM agree better with the exact ones than those from OCFE, the pdepe solver and OCM. In Fig. 4, the numerical solution of Problem 3.2 is shown in the form of 2D plots for different values of ε and τ, and in Fig. 5, surface plots are given for different values of ε and all τ. For both problems, the values obtained from QHCM agree well with the exact solution compared with OCFE, OCM and the pdepe solver: OCFE needs a large number of elements, the pdepe solver produces oscillations for certain parameter values, and OCM does not give good results because of the stiff system of equations. Error analysis has been performed by calculating the relative error (RE), defined as the absolute difference between the exact and numerical values relative to the exact value; the results are given in Tables 13 and 14 for Problems 3.1 and 3.2, respectively. It is observed from these tables that the maximum RE for QHCM is of order 10^−2.
This simply shows that the values obtained from QHCM agree quite well with the exact ones up to a certain degree of accuracy.
Conclusions
The Hermite collocation method developed here solves the non-linear Burgers' equation directly, without transforming it into linear form using the Hopf-Cole transformation. The numerical study of two problems demonstrates the accuracy of QHCM by comparing its values with those of other techniques, namely OCFE, the pdepe solver and OCM. The results of both problems agree fairly well with the exact ones for QHCM compared with the other techniques, which confirms the applicability of QHCM to solving non-linear stiff systems of partial differential equations. The method is efficient owing to its simplicity and easily programmable nature. For the stability analysis, the Euclidean and supremum norms have been calculated.
Sudden unexpected death after acute symptomatic seizures in a patient on mechanical ventilation
The mechanism of sudden unexpected death in epilepsy remains poorly understood. Seizure-induced cardiac arrhythmia and central and obstructive apneas have been proposed as possible causes of death. Here we report a unique case of seizure-related sudden unexpected death in a patient whose airway was fully protected by intubation and mechanical ventilation, in the absence of fatal cardiac arrhythmia. A 70-year-old woman was undergoing mechanical ventilation and video-electroencephalography (EEG) monitoring following two convulsive seizures with ictal hypoventilation and hypoxemia. Several hours after intubation, she suffered another generalized tonic-clonic seizure lasting 3 min and developed postictal generalized EEG suppression in the presence of stable vital signs with SpO2 > 90%. EEG suppression persisted throughout the postictal phase. There was a significant fluctuation of systolic blood pressure between 50 and 180 mmHg, with several bouts of hypotension < 60 mmHg. She remained unresponsive after the convulsive seizure and died of diffuse cerebral edema 12 h later. Autopsy revealed no clear cause of death, except for possible hypoxic and ischemic injury leading to the diffuse cerebral edema. Given the reliable periictal airway protection, neither seizure-induced central apnea nor obstructive apnea appeared to be the direct cause of death in this unique case. In the absence of fatal cardiac arrhythmia, diffuse cerebral edema secondary to seizure-induced autonomic dysfunction, hypotension and hypoxemia might be the cause of death, highlighting the etiological heterogeneity of sudden unexpected death in epilepsy.
Background
Sudden unexpected death in epilepsy (SUDEP) is the leading cause of premature death in patients with chronic refractory epilepsy [1]. Over the last several decades, several risk factors for SUDEP have been proposed, including chronic uncontrolled epilepsy, the duration of epilepsy, young age, male sex and intellectual disability [2][3][4]. Pooled data indicate that the frequency of generalized tonic-clonic seizures (GTCS) is the most important risk factor for SUDEP [5]. Lack of night-time supervision and absence of a nocturnal listening device are also important risk factors [6]. Nevertheless, the mechanisms of SUDEP remain poorly understood. SUDEP commonly occurs during sleep and in bed, with most cases being unwitnessed [7]. In a minority of witnessed SUDEP cases, cardiorespiratory functions were not adequately monitored, particularly oxygen saturation and respiration [8]. Here, we report a patient with new-onset acute symptomatic seizures who died after a GTCS during video-electroencephalography (EEG) monitoring while being mechanically ventilated, which might provide new insights into the mechanisms of SUDEP.
Case presentation
The patient is a 70-year-old female with a medical history of type 2 diabetes mellitus, hypertension, hyperlipidemia, and herpes zoster of the right face, who presented with subacute left chest pain, shortness of breath and a diffuse painful rash for 7 days, with progressive worsening despite treatment with oral prednisone and antibiotics. Vital signs on presentation were normal except for mild tachypnea. Diagnostic tests for pulmonary embolism and acute coronary syndrome were unrevealing. On day 2 of admission, a progressive mental status change was noted. The initial head CT was normal. An extended EEG was remarkable for diffuse slowing of the background to 5-7 Hz, which was thought to be related to cefepime neurotoxicity. Cefepime was subsequently transitioned to ceftriaxone. In the following days, she continued to have a fluctuating level of awareness, but was in stable condition, with an EEG background of 7-8 Hz on a routine EEG study. Brain MRI (3 T) with and without contrast was unremarkable except for two punctate subcortical infarcts.
On day 9 of hospitalization, she had an acute episode of altered mental status, desaturation (SpO 2 80%) with atrial fibrillation (HR 160/min) and bowel incontinence. After vital signs were stabilized by a rapid response team, an EEG was ordered due to suspicion of a seizure. During the hook-up, the patient was noted to have right gaze deviation with left arm and leg clonic jerking. Since the electrodes had not yet been fully applied, EEG was not interpretable during this period. She underwent immediate endotracheal intubation and mechanical ventilation and she was treated with fosphenytoin. Overnight continuous video-EEG captured another generalized tonic clonic seizure that lasted for 3 min while receiving nursing care, which was followed by postictal generalized EEG suppression (PGES) that lasted for approximately 2 min (Fig. 1).
After this point, EEG suppression persisted, with rare (1-2 s) bursts of diffuse delta activity. Four subclinical nonconvulsive seizures (NCS) were observed over the right hemisphere lasting 30-75 s, and the last NCS occurred approximately 8 h after the GTCS. Portable chest X-ray showed possible aspiration pneumonia or mild neurogenic pulmonary edema (Fig. 2a to d). A repeat head CT scan 9 h after the GTCS did not show acute changes (Fig. 2e and f). After the GTCS, heart rate ranged between 70 and 100 bpm and O 2 saturation was > 96%. There was a significant fluctuation of systolic blood pressure between 50 and 180 mmHg, with several bouts of hypotension < 60 mmHg (Fig. 3). Twelve hours later, the EEG became electrically silent and the patient developed dilated, unreactive pupils and absent brainstem reflexes, consistent with brain death. A repeat head CT showed diffuse cerebral edema and loss of gray-white differentiation (Fig. 2g). The patient subsequently died on the same day.

Fig. 1 caption: The top panel shows the 7-8 Hz EEG background and an unclear ictal onset obscured by diffuse muscle artifacts. The bottom panel shows the ictal offset and postictal generalized EEG suppression, which occurred in the presence of stable vital signs (heart rate, blood pressure and oxygen saturation, as shown in Fig. 3). EEG recording settings: high-pass filter 1 Hz, low-pass filter 50 Hz; sensitivity 10 μV/mm.
There was no significant hypoxemia, electrolyte imbalance or sepsis prior to the onset of the last GTCS. The basic metabolic panel showed sodium 139, potassium 5.0, chloride 107, anion gap 14, BUN 33, creatinine 1.5 and GFR 34. The complete blood count was within the normal range except for mild thrombocytopenia (WBC 10.9, RBC 4.6, Hb 12.2 and platelets 125). Blood and urine cultures were negative for bacteria. Lactic acid was 2.1. Arterial blood gas showed pH 7.43, PCO 2 32 and SO 2 99.4%. After the onset of the last GTCS, the patient developed significant metabolic acidosis with pH 7.1 and lactic acid 7.9, along with worsening renal insufficiency.
Gross examination of the brain at autopsy demonstrated a somewhat dusky cerebral surface and moderate symmetrical edema. The cerebellar tonsils showed some notching, raising the possibility of tonsillar herniation through the foramen magnum. Microscopic examination revealed shrunken neurons and mottling of the cerebral cortex, suggestive of hypoxic-ischemic injury. The meninges contained small foci of acute inflammation, which could be secondary to hypoxic-ischemic injury or could represent incipient meningitis. Overall, though, the inflammation was interpreted as too subtle, and potentially too early, to explain the clinically observed cerebral edema. There were no other changes that would explain cerebral edema, and no other inflammatory changes or neoplastic infiltrates were seen. Accordingly, the possibility of hypoxic-ischemic brain injury leading to cerebral edema should be considered.
Discussion
SUDEP is defined as sudden unexpected death (non-traumatic and non-drowning) in an individual with epilepsy, with or without evidence of a terminal seizure, excluding documented status epilepticus (seizure duration > 30 min or seizures without recovery in between), and in which investigation and postmortem examination, including toxicology, do not reveal a cause of death other than epilepsy [9]. SUDEP has been observed without preceding epileptic seizures during video-EEG monitoring, reflecting the heterogeneity of SUDEP symptomatology [10]. Conceptually, this case is not considered SUDEP, as the patient died hours after the possible terminal GTCS, with death likely delayed by mechanical ventilation. Additionally, she did not have a history of epilepsy prior to this hospital admission, and her death was likely caused by acute symptomatic seizures. Nevertheless, acute symptomatic seizures share similar clinical and pathophysiological characteristics with epileptic seizures. Given that SUDEP has not been recorded under respiratory and oxygen monitoring, this unique case may provide important insights into the mechanisms of SUDEP. Additionally, SUDEP has also been reported in patients with new-onset seizures [11].
There were likely two seizures prior to the presumed terminal GTCS. One was unwitnessed but suspected based on oxygen desaturation and urinary incontinence; the other was witnessed by an EEG technologist before the EEG was fully hooked up. Death occurred after the likely terminal GTCS while the patient was undergoing mechanical ventilation, presumably the best-case scenario for SUDEP prevention after a seizure. However, mechanical ventilation only delayed death by 12 h. Meanwhile, PGES was observed after the GTCS in the presence of stable vital signs, including heart rate, blood pressure, body temperature and oxygen saturation at the time of seizure termination, suggesting that the underlying pathogenesis of PGES might be related to seizure-induced diffuse cerebral suppression and is independent of cerebral oxygen saturation [12]. The causes of the seizures were not clear. CNS inflammation, infection and electrolyte disturbances might have been contributory, but evidence supporting these causes was lacking. The cause of death in this case was likely diffuse cerebral edema and herniation, as supported by the head CT findings and autopsy results. Although the factors leading to diffuse cerebral edema were unclear, cerebral hypoxic injury related to seizure-induced autonomic dysfunction and hypotension (< 60 mmHg) might have been contributing factors (Fig. 3) [13]. Seizure-induced central and peripheral apneas, commonly suspected mechanisms of SUDEP, were unlikely to be the direct causes of death in this case, given that the patient was intubated and mechanically ventilated. There was no fatal cardiac arrhythmia on the EKG recording. Consecutive chest X-rays did not show significant neurogenic pulmonary edema (Fig. 2).
Conclusions
To our knowledge, this is one of the first reported cases of seizure-induced sudden unexpected death in which the victim's airway was protected by intubation and mechanical ventilation. Although the mechanism was not entirely clear, cerebral edema secondary to autonomic dysfunction, hypotension and hypoxemia was the likely cause of death in this case. Fatal cardiac arrhythmia and central and obstructive apneas were not the direct causes of death. A majority of SUDEP cases occur during sleep, in bed and in the prone position, with most being unwitnessed [7]. These patients often live alone or are unsupervised during the sleep period. These common circumstances of SUDEP have highlighted the importance of periictal supervision for the prevention of SUDEP. In the case presented here, periictal airway protection was not effective in preventing the seizure-related death, which underscores the challenges of SUDEP prevention even in witnessed cases. This case also raises the question of whether witnessed SUDEP might have a different pathogenesis from unwitnessed SUDEP.
New Knowledge on the Performance of Supercritical Brayton Cycle with CO2-Based Mixtures
As one of the promising technologies to meet the increasing demand for electricity, the supercritical CO2 (S-CO2) Brayton cycle offers high efficiency, an economical structure, and compact turbomachinery. These characteristics are closely related to the thermodynamic properties of the working fluid. When CO2 is mixed with another gas, cycle parameters are determined by the constituent and the mass fraction of CO2. Therefore, in this contribution, a thermodynamic model is developed and validated for the recompression cycle. Seven types of CO2-based mixtures, namely CO2-Xe, CO2-Kr, CO2-O2, CO2-Ar, CO2-N2, CO2-Ne, and CO2-He, are employed. At different CO2 mass fractions, cycle parameters are determined under a fixed compressor inlet temperature, based on the maximization of cycle efficiency. Cycle performance and recuperator parameters are comprehensively compared for the different CO2-based mixtures. Furthermore, in order to investigate the effect of compressor inlet temperature, cycle parameters of CO2-N2 are obtained under four different temperatures. From the obtained results, it can be concluded that, as the mass fraction of CO2 increases, different mixtures show different variations of cycle performance and recuperator parameters. In general, the performance order of the mixtures coincides with the descending or ascending order of the corresponding critical temperatures. Performance curves of the considered mixtures lie between the curves of CO2-Xe and CO2-He. Meanwhile, the curves of CO2-O2 and CO2-N2 are always close to each other at high CO2 mass fractions. In addition, with the increase of compressor inlet temperature, cycle performance decreases and more heat transfer occurs in the recuperators.

Mass flow rates of Kr, Xe, and Ar reach 1225.14 kg/s, 982.78 kg/s, and 744.22 kg/s, respectively. With the increase of CO2 mass fraction, the flow rates decrease to the value of CO2 (250.69 kg/s). Although the mass flow rates of CO2-Ne, CO2-O2, and CO2-N2 satisfy the order CO2-Ne > CO2-O2 > CO2-N2, the curves of these mixtures are close to each other. The lowest mass flow rate curve is observed for CO2-He. As the CO2 mass fraction increases, the mass flow rate of He (88.35 kg/s) increases toward that of CO2 (250.69 kg/s).
Background
According to the BP (British Petroleum) Statistical Review of World Energy 2019, global energy demand grew by 2.9% and carbon emissions grew by 2.0% in 2018. With energy demand and carbon emissions growing at their fastest rates for years, there is a growing mismatch between societal demands for action on climate change and the actual pace of progress [1]. Therefore, the need to develop renewable energy and improve energy conversion efficiency is urgent. As one of the most promising candidates that can potentially replace the steam Rankine cycle, the supercritical CO2 (S-CO2) Brayton cycle has attracted extensive attention.
Cycle Layouts and Performance Comparison
In order to improve the cycle efficiency and alleviate the temperature mismatch in the heat exchanger, many advanced S-CO2 cycle layouts, such as the simple recuperation cycle, the recompression cycle and the partial-cooling cycle, have been proposed based on the original configuration of the Brayton cycle [10]. In the recuperation cycle, a recuperator is introduced to recover the exhaust heat of the S-CO2 at the low-pressure side. Although the recuperation cycle can improve the efficiency greatly, it still suffers from a temperature pinch-point problem in the recuperator, caused by the large difference in heat capacity between the hot and cold sides of the recuperator. The recompression cycle therefore divides the recuperator into a high-temperature recuperator (HTR) and a low-temperature recuperator (LTR) and reduces the difference in heat capacity in the LTR by splitting the S-CO2 flow stream at the inlet of the gas cooler [11]. The partial-cooling cycle is generally derived from the recompression cycle; here the flow stream is usually split after the pre-cooler [12]. In addition, thermodynamic processes such as reheating and intermediate cooling can be used to further improve the efficiency of existing cycles [10]. A detailed description of these cycle layouts is given in the literature [13].
The performance of different cycle layouts has been comprehensively compared by researchers. For instance, Turchi et al. [14] explored the thermodynamic performance of different S-CO2 cycle layouts from the perspective of a concentrating solar power application. The results indicated that, under dry cooling, a cycle efficiency of 50% could be obtained by the partial-cooling S-CO2 Brayton cycle with reheating and by the intercooling S-CO2 Brayton cycle with reheating. Zhu et al. [15] developed a mathematical model to conduct a thermodynamic analysis and comparison of different direct-heated S-CO2 Brayton cycles integrated into a solar power tower system. It was found that the intercooling S-CO2 cycle achieved the highest overall efficiency, followed by the recompression, partial-cooling, pre-compression, and simple cycles at different turbine inlet temperatures. Thereafter, Wang et al. [9] simultaneously compared the efficiency and specific work of different S-CO2 cycle layouts by obtaining the Pareto optimal fronts of multi-objective optimizations. The results suggested that the intercooling and partial-cooling cycle layouts generally yield the best performance, followed by the recompression and precompression cycle layouts. As for applications in nuclear energy, Moisseytsev and Sienicki [16] studied the performance of alternative S-CO2 cycle layouts for a sodium-cooled fast reactor (SFR). It was confirmed that slight gains in efficiency (0.3%) of the recompression cycle could be achieved by increasing the pressure to 22 MPa. Kulhanek and Dostal [17] compared the thermal efficiency of the precompression cycle, simple Brayton cycle and partial-cooling cycle in nuclear reactors. It was found that the precompression cycle could achieve efficiency equivalent to the recompression cycle when the turbine inlet temperature was above 700 °C.
Current Status of CO 2 -Based Mixtures
Besides the structure of the S-CO2 power cycle, another way to improve the cycle performance is to use a CO2-based mixture as the working fluid. This is because mixing with other gases can adjust the critical point of CO2, and thus change the lowest operating condition of the Brayton cycle. The direction and range of the critical point variation of CO2 depend on the mixed component and its amount. So far, a few studies have discussed the feasibility and performance of the supercritical CO2-based mixture power cycle. For example, Sandia National Laboratories performed experimental tests on the compatibility of CO2 mixtures with the turbomachinery and on compressor operation in the supercritical region of the mixtures [18]. Jeong et al. [19,20] developed a supercritical cycle model based on the mixture properties. They investigated the performance of a supercritical CO2-based mixture cycle applied to the power conversion of a sodium-cooled fast reactor. It was found that the mixtures CO2-He, CO2-Xe, and CO2-Kr increased the total cycle efficiency when the inlet temperature of the main compressor was 1 K above the critical temperature of the mixtures. Thereafter, Hu et al. [21] analyzed the performance of a nuclear reactor integrated with the CO2-based mixture cycle. The obtained results indicated that the adoption of CO2-He and CO2-Kr could increase the cycle efficiency and decrease the amounts of heat transferred in the HTR and LTR. In order to reduce the difficulty of air-cooled waste heat removal, Baik and Lee [22] compared the simple-cycle performance of CO2-SF6, CO2-R123, CO2-R134a, CO2-R32, and CO2-toluene under minimum temperatures of 304.15-313.15 K and a maximum temperature of 573.15 K. It was concluded that CO2-R32 and CO2-toluene could potentially reduce the efficiency degradation of pure S-CO2 power cycles at higher heat-sink temperatures. Recently, Guo et al. [23] also analyzed the thermodynamic performance of four different Brayton cycles using CO2 mixtures in molten-salt solar power tower systems. The mixtures used were CO2-Xe (0.7/0.3) and CO2-butane (0.95/0.05). The results indicated that adding xenon to the S-CO2 cycle could obviously improve the overall thermal efficiency and exergy efficiency, while the effect of butane as an additive was the converse. On the other hand, in engineering applications of the Brayton cycle, S-CO2 will inevitably be mixed with gas impurities. Therefore, Vesely et al. [24] investigated the effect of gaseous admixtures on the cycle efficiency at a fixed inlet temperature of the main compressor. They found that all the investigated mixtures except CO2-H2S had negative effects on the cycle efficiency and net power output. Thereafter, they examined the effect of different impurity compositions on the performance of various cycle components at different inlet temperatures of the main compressor [25]. The above studies on the Brayton cycle with CO2-based mixtures are summarized in Table 1. Several limitations can be noted. (1) For the studies of Jeong et al. [19,20] and Hu et al. [21], the inlet temperature of the main compressor is always assumed to be 1 K above the critical temperature of the corresponding mixture, namely Tc + 1. However, because Tc differs among mixtures, this assumption causes the mixtures to be compared under different compressor inlet temperatures. (2) For the work of Vesely et al. [24,25] and Guo et al. [23], the range of employed gas fractions is too narrow. Although Baik and Lee [22] considered the whole fraction range 0-1, most of the employed additives are organic fluids, which may decompose at high temperature; furthermore, Baik and Lee [22] only analyzed the performance of the simple Brayton cycle. (3) Cycle efficiency is mainly employed to evaluate the cycle performance of the different CO2-based mixtures; thus, more detailed performance comparisons are required.
Contribution of the Study
Considering the limits of the existing studies on CO2-based mixtures, this work comprehensively investigates the recompression cycle performance and the recuperators under the same inlet temperature of the main compressor. The studied range of CO2 mass fraction is 0-1. Parameters such as cycle efficiency, specific work, mass flow rate, heat input and heat conductance are compared for the different CO2-based mixtures. Furthermore, considering that the inlet temperature of the main compressor is closely related to the ambient temperature, this work also reveals the effect of compressor inlet temperature on the performance of CO2-based mixtures and recuperators in the recompression cycle.
Cycle Layout
For different configurations of the S-CO 2 power cycle, the recompression cycle is the most representative, because of its relative simplicity and higher efficiency [20,23]. Furthermore, introducing a reheating process into the cycle can further improve the performance. Thus, in this work, the recompression cycle with reheater is employed to analyze and compare the performances of different CO 2 -based mixtures. As shown in Figure 1, the considered cycle consists of a primary heater, reheater, HPT (high pressure turbine), LPT (low pressure turbine), HTR, LTR, compressor, recompressor and gas cooler. The working fluid firstly receives the thermal energy in the primary heater. After generating work in the HPT, the working fluid is reheated at the medium pressure and then produces work in the LPT. Thereafter, the low-pressure fluid flows through the HTR and LTR in turn. Before entering the gas cooler, the flow is split into two streams. One stream flows through the cooler and then is compressed to a high pressure in the main compressor, while the other stream flows through the recompressor. The high-pressure stream from the main compressor flows into the LTR and absorbs heat from the low pressure working fluid. Then, the preheated stream merges with the working fluid from the recompressor. After that, the combined flow is further preheated by the low-pressure working fluid in the HTR, and finally returns to the primary heater.
For the above recompression cycle with a reheater, when employing CO2 as the working fluid, the corresponding T-s (temperature-entropy) diagram is presented in Figure 2. As the lowest cycle operation parameters, the inlet temperature and pressure of the main compressor should be located just above the critical point of CO2 to reduce the required compressor work. Furthermore, when the two divided streams merge at the inlet of the HTR, the corresponding temperature difference should be small enough to avoid thermal fatigue cracking of the channel wall.
Thus, a suitable flow split ratio must be determined for the recompression cycle.
Table 1 lists the CO2-based mixtures considered in the existing studies. Thermodynamic properties of these mixtures are usually obtained from REFPROP [19-26]. According to the shift direction of the critical temperature, these mixtures can be classified into two groups: an ascending critical temperature group and a descending critical temperature group. In general, the ascending group contains organic fluids such as butane, R134a, and R123; these constituents may decompose at high temperatures. For the descending group, the additives usually include O2, N2, and Ar, which offer safety, thermal stability, and compatibility. Under high temperature and pressure conditions, these gases do not react with CO2. Furthermore, in engineering applications of the S-CO2 cycle, such gases usually appear as impurities. Therefore, in this work, seven additive gases are employed, as presented in Table 2.
Table 2 lists the pure gases involved in the considered CO2-based mixtures, in order of decreasing critical temperature: CO2 > Xe > Kr > O2 > Ar > N2 > Ne > He. For each gas, Table 2 provides the molecular weight, boiling temperature and critical properties. Furthermore, at the temperature Tc0 + 150 K (454.13 K), heat capacities at pressures of 7.5 MPa and 25 MPa are given, based on REFPROP calculations. From the table, it can be seen that CO2 has the highest critical temperature (304.13 K), while He has the lowest (5.1953 K). Among the seven additive gases, Xe has the closest critical temperature (289.73 K) to that of CO2. Furthermore, the boiling temperature and the critical pressure decrease with decreasing critical temperature. For the critical density, Xe has the highest value (1102.9 kg/m3), while He has the lowest (69.58 kg/m3). As for the heat capacity, He has a much higher value than the other fluids under the same conditions. In addition, the heat capacity at high pressure is always larger than that at low pressure.
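Because property databases such as REFPROP take compositions as mole fractions while the analysis in this work is parameterized by CO2 mass fraction, a conversion between the two is needed. The following minimal sketch shows that conversion; the molar masses and the 70/30 CO2-N2 split are illustrative inputs, not design points from the paper:

```python
# Convert a mass-fraction specification of a CO2-based mixture into mole
# fractions, as required by property databases (e.g. REFPROP).

MOLAR_MASS = {"CO2": 44.0098, "N2": 28.0134, "O2": 31.9988,
              "Ar": 39.948, "He": 4.0026}  # g/mol

def mole_fractions(mass_fracs: dict) -> dict:
    """mass_fracs: component -> mass fraction (fractions must sum to 1)."""
    moles = {c: w / MOLAR_MASS[c] for c, w in mass_fracs.items()}
    total = sum(moles.values())
    return {c: n / total for c, n in moles.items()}

# Example: 70% CO2 / 30% N2 by mass.
x = mole_fractions({"CO2": 0.7, "N2": 0.3})
print(x["CO2"])  # ≈ 0.598 — the lighter additive is enriched on a molar basis
```

Note that a 0.7 mass fraction of CO2 corresponds to only about 0.6 mole fraction when mixed with the lighter N2, which matters when comparing "fraction" axes between studies.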
However, across these fluids, the difference in heat capacity between the two pressures shrinks as the critical temperature decreases. Although this heat capacity difference is small, once it is multiplied by the mass flow rate, the resulting difference in total heat capacity is large enough to affect the heat exchange in the recuperator. That is why the split-flow process is widely employed in configurations of the S-CO2 power cycle: the mismatch of heat capacity in the LTR can be greatly alleviated by adjusting the mass flow rate on the high-pressure side of the heat exchanger.
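The capacity-rate argument above can be sketched numerically. In the LTR, the hot (low-pressure) side carries the full cycle flow at the lower cp, while the cold (high-pressure) side carries only the main-compressor share at the higher cp; balancing the two capacity rates then fixes the split ratio as the cp ratio. The cp values below are illustrative placeholders, not property-table data:

```python
# Capacity-rate balance in the LTR of a recompression cycle (illustrative).
# Hot side: full cycle flow m_dot with heat capacity cp_low (low pressure).
# Cold side: only SR * m_dot, with the larger cp_high (high pressure).
# Equal capacity rates require: SR * m_dot * cp_high = m_dot * cp_low.

def balancing_split_ratio(cp_low: float, cp_high: float) -> float:
    """Split ratio that matches hot- and cold-side capacity rates in the LTR."""
    return cp_low / cp_high

# Hypothetical heat capacities (kJ/kg-K) on the low- and high-pressure sides:
cp_low, cp_high = 1.1, 1.8
sr = balancing_split_ratio(cp_low, cp_high)
print(round(sr, 3))  # 0.611 -> ~61% of the flow passes the main compressor
```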
CO 2 -Based Mixtures
For the considered CO2-based mixtures, the thermodynamic properties are evaluated with the newest version, REFPROP 10 [27]. The variations of critical temperature and critical pressure with the mass fraction of CO2 are presented in Figures 3 and 4, respectively. Figure 3 indicates that the critical temperature of a mixture increases monotonically with the CO2 mass fraction. The curves of these mixtures are distributed according to the order of the critical temperatures of the pure gases. However, it should be noted that although the pure-gas critical temperatures satisfy O2 > Ar, CO2-Ar has an obviously higher critical temperature than CO2-O2 at mass fractions larger than 0.2. The critical temperature of the mixtures therefore usually satisfies the following order: CO2-Xe > CO2-Kr > CO2-Ar > CO2-O2 > CO2-N2 > CO2-Ne > CO2-He. Furthermore, when the CO2 mass fraction is larger than 0.5, the curves of CO2-O2 and CO2-N2 are close to each other. As for the critical pressure in Figure 4, very different trends exist among the mixtures. With increasing CO2 mass fraction, the critical pressures of CO2-Ar, CO2-N2, and CO2-O2 first increase and then decrease, so there exists a peak value of critical pressure, which is especially pronounced for CO2-Ar. For the mixtures CO2-Xe, CO2-Kr, and CO2-Ne, the critical pressure increases slowly and monotonically as the mass fraction of CO2 increases. As for CO2-He, it should be noted that when the CO2 mass fraction approaches 1.0, the critical pressure of CO2-He is slightly higher than that of CO2 over a small range of mass fractions.
Model Establishment
In order to analyze the cycle performance of CO2-based mixtures, a simulation model is developed for the recompression cycle with reference to Figures 1 and 2. For the simplicity of modeling, the following assumptions are applied in the establishment of simulation model.
• The recompression cycle operates in a steady-state condition.
• Pressure drops and heat losses in the pipes and heat exchangers are ignored.
• Recuperators are considered as counter-flow heat exchangers.
The power generated by the HPT and LPT can be determined as follows:

W_HPT = m (h_HPT,in − h_HPT,out) = m η_t (h_HPT,in − h_HPT,out,s)  (1)

W_LPT = m (h_LPT,in − h_LPT,out) = m η_t (h_LPT,in − h_LPT,out,s)  (2)

where m is the cycle mass flow rate, η_t is the isentropic efficiency of the turbines, and the subscript s denotes the isentropic outlet state. The work consumed in the main compressor and recompressor can be expressed from Equations (3) and (4):

W_MC = SR · m (h_MC,out − h_MC,in) = SR · m (h_MC,out,s − h_MC,in) / η_c  (3)

W_RC = (1 − SR) · m (h_RC,out − h_RC,in) = (1 − SR) · m (h_RC,out,s − h_RC,in) / η_c  (4)

where η_c is the isentropic efficiency of the compressors, and the split ratio (SR) denotes the ratio of the main compressor mass flow to the cycle mass flow:

SR = m_MC / m

Thus, the net power output is

W_net = W_HPT + W_LPT − W_MC − W_RC

The input heat through the primary heater and reheater is expressed as

Q_in = m (h_heater,out − h_heater,in) + m (h_reheater,out − h_reheater,in)

For the primary heater, the temperature increase of the working fluid is defined as

ΔT_heater = T_heater,out − T_heater,in
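Taken together, these relations can be sketched numerically. The enthalpy values, flow rate and split ratio below are hypothetical placeholders (no property-library calls), meant only to show how turbine work, compressor work, split ratio, heat input and thermal efficiency combine:

```python
# Recompression-cycle energy bookkeeping (illustrative numbers; kJ/kg, kg/s).
m_dot = 100.0               # cycle mass flow rate
eta_t, eta_c = 0.93, 0.89   # isentropic efficiencies: turbines / compressors
SR = 0.7                    # main-compressor share of the cycle flow

# Turbines: actual work = isentropic enthalpy drop scaled by eta_t.
W_HPT = m_dot * eta_t * (1000.0 - 850.0)   # high-pressure turbine
W_LPT = m_dot * eta_t * (980.0 - 840.0)    # low-pressure turbine, after reheat

# Compressors: actual work = isentropic enthalpy rise divided by eta_c.
W_MC = SR * m_dot * (330.0 - 300.0) / eta_c          # main compressor
W_RC = (1.0 - SR) * m_dot * (400.0 - 350.0) / eta_c  # recompressor

W_net = W_HPT + W_LPT - W_MC - W_RC

# Heat input through primary heater and reheater (hypothetical enthalpy rises).
Q_in = m_dot * (500.0 + 140.0)

eta_th = W_net / Q_in
print(round(eta_th, 4))  # 0.3582
```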
Model Establishment
In order to analyze the cycle performance of CO2-based mixtures, a simulation model is developed for the recompression cycle with reference to Figures 1 and 2. For simplicity of modeling, the following assumptions are applied:

• The recompression cycle operates in a steady state.
• Pressure drops and heat losses in the pipes and heat exchangers are neglected.
• The recuperators are modeled as counter-flow heat exchangers.
The power generated by the HPT and LPT is determined from the enthalpy drop across each turbine:

W_HPT = m (h_HPT,in − h_HPT,out), W_LPT = m (h_LPT,in − h_LPT,out)

The work consumed by the main compressor and recompressor follows from the enthalpy rise across each machine (Equations (3) and (4)):

W_MC = SR · m (h_MC,out − h_MC,in), W_RC = (1 − SR) · m (h_RC,out − h_RC,in)

where the split ratio (SR) denotes the ratio of the main compressor mass flow to the total cycle mass flow, SR = m_MC / m. Thus, the net power output is

W_net = W_HPT + W_LPT − W_MC − W_RC

The heat input through the primary heater and reheater is Q_in = Q_PH + Q_RH. For the primary heater, the temperature increase of the working fluid is defined as ΔT_PH = T_PH,out − T_PH,in. Furthermore, for the reheater, according to reference [28], the intermediate pressure of the working fluid is set as the average of the high and low pressures.
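The energy balances above can be sketched as follows. The state-point names and the sample enthalpies used in testing are hypothetical; in the actual model the enthalpies come from REFPROP at each state point:

```python
# Minimal energy-balance sketch of the recompression cycle. With m_dot in
# kg/s and enthalpies in kJ/kg, powers come out in kW.
def cycle_performance(m_dot, sr, h):
    """m_dot: total mass flow; sr: split ratio (main-compressor share);
    h: dict of specific enthalpies at the component inlets/outlets."""
    w_hpt = m_dot * (h["hpt_in"] - h["hpt_out"])           # HP turbine
    w_lpt = m_dot * (h["lpt_in"] - h["lpt_out"])           # LP turbine
    w_mc = sr * m_dot * (h["mc_out"] - h["mc_in"])         # main compressor
    w_rc = (1 - sr) * m_dot * (h["rc_out"] - h["rc_in"])   # recompressor
    w_net = w_hpt + w_lpt - w_mc - w_rc
    # heat input: primary heater plus reheater
    q_in = m_dot * ((h["ph_out"] - h["ph_in"]) + (h["rh_out"] - h["rh_in"]))
    return w_net, q_in, w_net / q_in                       # kW, kW, efficiency
```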
Based on the above parameters, the thermal efficiency can be described using Equation (10), η_th = W_net / Q_in. As for the heat exchange in the LTR and HTR, the energy balance between the hot and cold streams is satisfied.
In the modeling of the recuperators, the effectiveness approach is employed: for the LTR and HTR, the effectiveness is defined as the ratio of the actually transferred heat to the maximum possible heat transfer [5]. Furthermore, the mixing of the two divided flows is governed by an energy balance between the merging streams. For the recuperators, since the heat transfer coefficients of the mixtures differ from each other, the heat conductance (UA), the product of heat transfer coefficient and area, is employed to represent the required size and transfer performance of the heat exchangers. In general, a larger UA results in a higher heat exchanger cost. UA can be calculated from the inlet and outlet temperatures of the recuperator. Considering that the thermodynamic properties of a CO2-based mixture vary greatly with temperature, the recuperator is discretized into N sub-heat exchangers; owing to the slight property variation within each sub-heat exchanger, the properties can be assumed constant there. Accordingly, the total heat transfer rate of the recuperator is divided into N sections, so that each sub-heat exchanger transfers Q_i = Q/N with a logarithmic mean temperature difference ΔT_lm,i computed from the temperature differences at its two ends. The heat conductance of each sub-heat exchanger is then obtained as UA_i = Q_i / ΔT_lm,i, and the overall conductance is the sum over all sections. In order to compare the cycle performance of different mixtures, the main parameters of the recompression cycle are specified in Table 3. These parameters were designed by the National Renewable Energy Laboratory for the application of the S-CO2 Brayton cycle in concentrating solar power, and the corresponding reference values have been employed in performance comparisons among the simple cycle, the recompression cycle, and the partial-cooling cycle [28]. The table provides the turbomachinery efficiencies and the heat exchanger effectiveness. In order to guarantee the heat transfer in HTR and LTR, the temperature difference in the heat exchangers is required to stay above 5 °C.
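The discretized UA calculation described above can be sketched as follows. The node temperature profiles are plain inputs here; in the actual model they would be evaluated from the mixture properties:

```python
import math

# Discretized UA estimate for a counter-flow recuperator: the duty is split
# into N equal sub-heat exchangers, each evaluated with its own logarithmic
# mean temperature difference (LMTD).
def overall_conductance(q_total, t_hot, t_cold):
    """t_hot, t_cold: N+1 node temperatures from the hot inlet to the hot
    outlet, aligned so t_hot[i] faces t_cold[i]. Returns sum(Q_i / LMTD_i)."""
    n = len(t_hot) - 1
    q_i = q_total / n                       # equal duty per sub-exchanger
    ua = 0.0
    for i in range(n):
        dt1 = t_hot[i] - t_cold[i]          # terminal difference, end 1
        dt2 = t_hot[i + 1] - t_cold[i + 1]  # terminal difference, end 2
        # LMTD degenerates to dt when both ends are (nearly) equal
        lmtd = dt1 if abs(dt1 - dt2) < 1e-9 else (dt1 - dt2) / math.log(dt1 / dt2)
        ua += q_i / lmtd
    return ua
```

With a uniform 10 K approach and a 100 kW duty, the estimate reduces to 100/10 = 10 kW/K regardless of N.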
In addition, the highest cycle temperature and the maximum pressure are set to 650 °C and 25 MPa, respectively. As for the lowest cycle temperature, the reference value of the compressor inlet temperature is set just 1 K above the critical temperature of CO2 (Tc0 + 1); however, in order to investigate the effect of the compressor inlet temperature on the cycle performance, the inlet temperature is also varied from Tc0 + 1 to Tc0 + 30. Furthermore, it is assumed that the recompression system has a net power output of 35 MW. Under the design parameters in Table 3, the pressure ratio and split ratio are optimized for each fluid based on the criterion of maximum cycle efficiency; the optimization procedure is shown in Figure 5. For the calculation of LTR and HTR, the temperature distributions are obtained, and the minimum temperature differences are checked at the pinch point. If the pinch point temperature difference (PPTD) is lower than 5 °C, the parameters of LTR and HTR are recalculated by setting PPTD = 5 °C. Computer programs for this calculation procedure are developed in MATLAB 2015, with REFPROP 10 embedded to determine the thermodynamic properties of the CO2-based mixtures at the different state points.
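The outer optimization loop of Figure 5 can be sketched as an exhaustive search that keeps the best design satisfying the pinch-point constraint. The `evaluate` callable is a placeholder standing in for the full cycle model described above, so this is only an illustration of the control flow, not the paper's implementation:

```python
# Grid search over pressure ratio (PR) and split ratio (SR), keeping the
# best feasible design. evaluate(pr, sr) must return the cycle efficiency
# and the minimum recuperator temperature difference (PPTD).
def optimize(evaluate, pr_grid, sr_grid, pptd_min=5.0):
    best = (None, None, -1.0)
    for pr in pr_grid:
        for sr in sr_grid:
            eta, pptd = evaluate(pr, sr)
            if pptd >= pptd_min and eta > best[2]:  # feasible and better
                best = (pr, sr, eta)
    return best  # (optimal PR, optimal SR, maximum efficiency)
```

A finer grid or a gradient-free optimizer could replace the nested loops; the feasibility check mirrors the PPTD = 5 °C rule stated above.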
Model Validation
The established model is validated by comparing its results with the data of Neises and Turchi's study [28]. In the validation, the working fluid is supercritical CO2 and the adopted parameters are consistent with the reference values in Table 3, except that the compressor inlet temperature is set to 50 °C. Based on these parameters, Neises and Turchi obtained their results using Engineering Equation Solver (University of Wisconsin-Madison, Madison, WI, USA), while the results of the present model are obtained with MATLAB. Table 4 compares the two models; a good agreement between the present model and the reference can be seen.
Results and Discussions
Under the design conditions in Table 3, optimized results of the recompression cycle are obtained for the CO2-based mixtures. The employed CO2 mass fraction varies from 0 to 1 at an interval of 0.1, and the corresponding calculated data are listed in the Supplementary Materials. On this basis, a thermodynamic analysis of the cycle performance and the recuperators is first conducted for the mixtures at a fixed compressor inlet temperature (Tc0 + 1). Thereafter, the cycle parameters are optimized at different compressor inlet temperatures so that the effect of this temperature on the mixtures can be revealed.

Cycle Performance Analysis

Figure 6 presents the calculated cycle efficiency of the mixtures at different mass fractions of CO2. It can be seen that every mixture has a unique variation curve of cycle efficiency with increasing CO2 mass fraction. For CO2-Xe, as the CO2 mass fraction increases from 0 to 1, the cycle efficiency first increases from the value of Xe (0.538) and then decreases slowly to the value of CO2 (0.539); there is thus a peak of cycle efficiency around a CO2 mass fraction of 0.2. Owing to the small difference between the critical temperatures of CO2 and Xe, the range of efficiency variation is small for CO2-Xe. Unlike CO2-Xe, the efficiency of the remaining mixtures always increases with the CO2 mass fraction. Since the critical temperature of CO2 is much higher than those of the other pure gases, the cycle efficiencies of these gases are far lower than that of CO2 at the fixed compressor inlet temperature. Furthermore, for the mixtures CO2-O2, CO2-Ar, CO2-N2, CO2-Ne, and CO2-He, the cycle efficiency increases slowly in the mass fraction range 0-0.9; however, when the CO2 mass fraction approaches 1.0, a sharp increase of efficiency is observed. In the figure, the legend is ordered according to the descending critical temperatures of the pure gases.
Figure 6 indicates that as the critical temperature of the pure gas decreases, the corresponding mixture usually has a lower cycle efficiency. However, for the gases O2 and Ar, the efficiency of CO2-Ar is always higher than that of CO2-O2: although the critical temperature of O2 is higher than that of Ar, the critical temperature of CO2-O2 is lower than that of CO2-Ar. Thus, the considered mixtures satisfy the following order of cycle efficiency: CO2-Xe > CO2-Kr > CO2-Ar > CO2-O2 > CO2-N2 > CO2-Ne > CO2-He. In addition, when the critical temperature of the mixture is far from the compressor inlet temperature, the efficiency differences among mixtures such as CO2-N2, CO2-Ne, and CO2-He become smaller.
For the mixtures, Figure 7 presents the variation of specific work (output work per unit mass flow rate) with the mass fraction of CO2. It can be seen that the specific work of He (396.14 kJ/kg) is much higher than that of CO2 (139.61 kJ/kg). Thus, the mixture CO2-He shows a continuous decrease of specific work in the CO2 mass fraction range 0-0.9; when the mass fraction approaches 1.0, the output work increases toward that of pure CO2. For the other mixtures, as the CO2 mass fraction increases, the corresponding output work slowly increases from the value of the pure gas to that of CO2, although the variation range is smaller than for CO2-He. It is interesting to note that CO2-He has the lowest efficiency, while its output work is the highest in the CO2 mass fraction range 0-0.8. Furthermore, since the net power output is fixed in the simulation, the mass flow rates of the mixtures can be determined, as shown in Figure 8. The mass flow rates of Kr, Xe, and Ar reach 1225.14 kg/s, 982.78 kg/s, and 744.22 kg/s, respectively; with the increase of CO2 mass fraction, the flow rates gradually decrease to the value of CO2 (250.69 kg/s). Although the mass flow rates of CO2-Ne, CO2-O2, and CO2-N2 satisfy the order CO2-Ne > CO2-O2 > CO2-N2, the curves of these mixtures are close to each other. The lowest curve of mass flow rate is observed for CO2-He: as the CO2 mass fraction increases, the mass flow rate of He (88.35 kg/s) increases toward that of CO2 (250.69 kg/s).
Besides cycle efficiency, output work, and mass flow rate, the amounts of heat input are also compared for the different mixtures, as illustrated in Figure 9. Since the total net power output is fixed, the absorbed heat is inversely proportional to the cycle efficiency. Thus, the absorbed heat of CO2-Xe is the lowest; it first decreases from the value of Xe (64.95 MW) and then increases to that of CO2 (64.90 MW). For the other mixtures, the absorbed heat gradually decreases to that of CO2. When the mass fraction of CO2 is beyond 0.5, the heat input has the following order: CO2-He > CO2-Ne > CO2-N2 > CO2-O2 > CO2-Ar > CO2-Kr > CO2-Xe. At high mass fractions of CO2, the curve of CO2-N2 almost coincides with that of CO2-O2, because the critical temperatures of CO2-N2 and CO2-O2 are close to each other. In the simulation, the high cycle pressure is fixed at 25 MPa and the pressure ratio is optimized for every mixture; based on the determined pressure ratio, the low cycle pressure can be calculated.
The corresponding variation curves are presented in Figure 10 for the considered mixtures. It can be observed that the low cycle pressures of the pure gases are higher than that of CO2. As the CO2 mass fraction increases, the pressure of CO2-Xe first decreases and then increases slowly, while the other mixtures show a continuous decrease of the low pressure. Among the considered mixtures, the highest pressure curve is that of CO2-He, while the lowest is that of CO2-Xe. When the mass fraction is higher than 0.5, the low cycle pressure satisfies the following order: CO2-He > CO2-Ne > CO2-Ar > CO2-N2 > CO2-O2 > CO2-Kr > CO2-Xe. It should be noted that little difference exists among the pressures of CO2-Ar, CO2-N2, and CO2-O2 at high mass fractions of CO2.
When the S-CO2 power cycle is applied to solar energy, the temperature increase of the working fluid in the primary heater is an important index for evaluating the performance of the solar power system.
The larger the temperature increase, the greater the heat storage of the molten salt [13]. Thus, Figure 11 presents the temperature increase of the mixtures in the primary heater. It indicates that CO2-Xe has the highest temperature difference, followed by CO2-Kr. For CO2-Xe, the temperature difference continuously decreases from the value of Xe (235.24 K) to that of CO2 (147.58 K), while CO2-Kr first decreases from the value of Kr (147.58 K) and then increases slowly to that of CO2 (147.58 K). For the other mixtures, the temperature difference continuously increases with the CO2 mass fraction. At mass fractions larger than 0.5, the temperature difference has the order: CO2-Xe > CO2-Kr > CO2-Ar > CO2-N2 > CO2-O2 > CO2-Ne > CO2-He. It should be noted that the curves of CO2-Ar, CO2-N2, and CO2-O2 are close to each other.
Recuperator Analysis
For the recompression cycle, the recuperator is a key component for improving the cycle performance. Thus, the heat transfer and the required conductance of HTR and LTR are calculated for the mixtures according to the conditions in Table 3. The HTR heat and conductance are presented in Figures 12 and 13, respectively. For the heat transfer in HTR, the heat of Xe (35.47 MW) is the lowest, followed by CO2 (99.82 MW); the highest heat is that of He (167.16 MW). With the increase of CO2 mass fraction, the HTR heat of CO2-Xe increases, while the other mixtures show an overall downtrend of HTR heat; the small irregular variations of HTR heat result from the nonlinear properties of the mixtures. At the different mass fractions of CO2, CO2-Xe has the lowest heat transfer, while that of CO2-He is the highest. Compared with the heat input in Figure 9, the HTR heat of the mixtures is much higher, except for CO2-Xe at low mass fractions of CO2. As for the heat conductance of HTR, it is proportional to the transferred heat, as shown in Figure 13: the conductance variation of the mixtures with CO2 mass fraction is similar to that of the HTR heat. The lowest conductance is 1.8 MW/K for Xe, while the highest value is 15.0 MW/K for He; CO2 has a conductance of 3.26 MW/K. To accommodate such a large heat duty, the printed circuit heat exchanger (PCHE) is considered a promising candidate because of its great compactness and its capability to withstand high temperature and pressure.
In order to analyze the heat transfer difference between pure fluids and mixtures, temperature distributions in HTR are calculated for the seven mixtures; the corresponding figures are presented in the Supplementary Materials. Taking CO2-N2 as an example, the temperature distributions for CO2, CO2-N2 (0.5/0.5), and N2 are illustrated in Figure 14. It can be seen that the transferred heats of CO2-N2 (0.5/0.5) and N2 are much higher than that of CO2. In general, the pinch point of heat transfer is located at the cold end of HTR. The corresponding minimum temperature differences of CO2, CO2-N2 (0.5/0.5), and N2 are 10.88 °C, 10.79 °C, and 10.76 °C, respectively. For the other considered mixtures, the calculated minimum temperature differences are provided in the Supplementary Materials. Figure 15 illustrates the LTR heat of the mixtures at different mass fractions of CO2.
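The pinch-point check used in these distributions can be sketched as a simple scan over the node-aligned temperature profiles (a minimal illustration, assuming the profiles come from the discretized recuperator model):

```python
# Walk the hot/cold temperature profiles of a counter-flow recuperator and
# return the minimum hot-cold temperature difference and where it occurs.
def pinch_point(t_hot, t_cold):
    diffs = [th - tc for th, tc in zip(t_hot, t_cold)]
    i = min(range(len(diffs)), key=diffs.__getitem__)
    return diffs[i], i  # (minimum dT, node index; index 0 = hot inlet end)
```

If the returned minimum falls below the 5 °C limit, the recuperator parameters would be recalculated as described in the optimization procedure.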
It can be seen that the LTR heat is far lower than the heats of the heater and HTR, and the LTR heat differences among the mixtures are also smaller than those in Figures 9 and 12. Among the considered mixtures, CO2-Xe has the lowest LTR heat: as the CO2 mass fraction increases, the LTR heat gradually decreases from the value of Xe (40.02 MW) to that of CO2 (38.06 MW). Similar to CO2-Xe, CO2-Kr also shows a slow decrease of LTR heat. For the other mixtures, however, when the CO2 mass fraction is higher than 0.9, there is a sharp decrease of LTR heat, as shown in Figure 15. The curves of the LTR conductance are given in Figure 16. They indicate that although the LTR conductance of the mixtures decreases with increasing CO2 mass fraction, the variation range is smaller than that of the HTR conductance. Special care should be given to CO2-Xe: when the CO2 mass fraction is less than 0.2, there is a sharp decrease of LTR conductance. Even though the LTR heat transfer is the lowest for CO2-Xe, its conductance is higher than that of the other mixtures at low CO2 mass fractions. This phenomenon can be explained by the matched heat capacities of the cold and hot sides: when the temperature match is improved, the logarithmic mean temperature difference is reduced. On the other hand, at high mass fractions of CO2, the mixtures have the following order of LTR heat and conductance: CO2-He > CO2-Ne > CO2-O2 > CO2-N2 > CO2-Ar > CO2-Kr > CO2-Xe. It should be noted that the curves of CO2-O2 and CO2-N2 are close to each other.
As for the temperature distributions in LTR, all figures are provided in the Supplementary Materials for the considered mixtures. Figure 17 illustrates the temperature distributions of CO2, CO2-N2 (0.5/0.5), and N2. It can be observed that although there are large differences of specific heat capacity between the hot and cold streams at low temperatures, the differences of total heat capacity are greatly reduced by introducing the splitting process, so that the temperature mismatch in LTR is alleviated. For CO2, CO2-N2 (0.5/0.5), and N2, the minimum temperature differences are 10.89 °C, 10.80 °C, and 10.76 °C, respectively. Similarly, the minimum temperature differences of all mixtures are given in the Supplementary Materials. In order to reduce the difference of specific heat capacity between the cold and hot sides in LTR, the split ratio is optimized; the corresponding values are presented in Figure 18.
It can be observed that the split ratio of each pure gas is higher than that of CO2 (0.63). The reason is that the heat capacity difference decreases with the decrease of critical temperature, as illustrated in Table 2: the larger the heat capacity difference, the lower the split ratio. Therefore, as the CO2 mass fraction increases, the split ratio decreases. Except for CO2-Xe and CO2-Kr, all mixtures show a slow decrease first and then a rapid decrease, especially CO2-He. Among these mixtures, the lowest curve is that of CO2-Xe, while CO2-He has the highest curve, with values close to 1.0. Similarly, at high mass fractions of CO2, the split ratio has the order: CO2-He > CO2-Ne > CO2-N2 > CO2-O2 > CO2-Ar > CO2-Kr > CO2-Xe.
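The heat-capacity argument above can be made concrete with a crude estimate (an illustrative heuristic, not the paper's optimization): in the LTR the full flow is cooled on the hot side, but only the main-compressor share SR is heated on the cold side, so matching the stream heat capacities suggests SR ≈ cp_hot / cp_cold:

```python
# Heuristic split-ratio estimate from mean specific heats of the two LTR
# streams. A larger cold-side cp (stronger heat capacity difference) gives
# a lower SR, matching the trend discussed above. cp values are examples.
def split_ratio_estimate(cp_hot, cp_cold):
    """Crude SR estimate; clipped at 1.0 since SR is a flow fraction."""
    return min(cp_hot / cp_cold, 1.0)
```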
Effect of Compressor Inlet Temperature
In order to investigate the effect of the compressor inlet temperature, the temperatures Tc0 + 1, Tc0 + 10, Tc0 + 20, and Tc0 + 30 are considered. The cycle performance and recuperator parameters of the considered mixtures are analyzed under the conditions in Table 3. Since the effect of the compressor inlet temperature on the cycle parameters is qualitatively the same for the different mixtures, the results of a typical mixture are presented in the following. Considering that N2 is the most abundant gas in the air, it is highly likely to mix with CO2 and thereby affect the performance of the supercritical power cycle; therefore, CO2-N2 is selected. Figure 19 gives the cycle efficiency of CO2-N2 at the four temperatures. As the inlet temperature increases, the cycle efficiency naturally decreases; for instance, at Tc0 + 1 and Tc0 + 30, CO2 has cycle efficiencies of 0.54 and 0.48, respectively. Furthermore, with the increase of inlet temperature, the efficiency difference between N2 and CO2 decreases, which makes the curve rise more smoothly, especially at CO2 mass fractions beyond 0.9. As for the specific work, the increased temperature results in greater power consumption by the main compressor, which reduces the specific work, as shown in Figure 20. On the other hand, as the compressor inlet temperature increases, the inlet temperature of the primary heater increases accordingly. With the highest temperature fixed at 650 °C, the temperature difference in the primary heater therefore decreases, as presented in Figure 21. Meanwhile, with the increase of the compressor inlet temperature, the corresponding curve shows a slower increase from N2 to CO2, especially when the CO2 mass fraction is higher than 0.9.
In the simulation, the total net power output is fixed. When the cycle efficiency decreases with increasing compressor inlet temperature, the heat absorbed from the heat source increases. Meanwhile, since the temperature difference of the mixture in the primary heater decreases, the cycle mass flow rate must increase to supply the required heat. This results in more heat transfer in the recuperators. As presented in Figures 22 and 23, the heat conductances of the HTR and LTR increase with the compressor inlet temperature. Furthermore, the HTR conductance is more strongly affected by the increase of inlet temperature than the LTR conductance. Taking CO2 as an example, at inlet temperatures of Tc0 + 1 and Tc0 + 30 the HTR conductance is 3.26 MW/K and 6.30 MW/K, while the LTR conductance is 2.84 MW/K and 3.54 MW/K, respectively.
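The reported conductances follow from the standard UA = Q / ΔT_lm relation for a counter-flow recuperator: if the transferred heat grows while the terminal temperature differences stay comparable, the required conductance grows in proportion. A rough sketch (the duties and terminal temperature differences below are assumed, illustrative numbers, not values from the paper):

```python
import math

def lmtd(dt_hot_end, dt_cold_end):
    """Log-mean temperature difference of a counter-flow heat exchanger, K."""
    if abs(dt_hot_end - dt_cold_end) < 1e-12:
        return dt_hot_end
    return (dt_hot_end - dt_cold_end) / math.log(dt_hot_end / dt_cold_end)

def conductance(q, dt_hot_end, dt_cold_end):
    """Required heat-exchanger conductance UA = Q / LMTD, in W/K."""
    return q / lmtd(dt_hot_end, dt_cold_end)

# Illustrative recuperator duties at a low and a high compressor inlet
# temperature (assumed values), with fixed 30 K / 10 K terminal differences.
ua_low = conductance(60e6, 30.0, 10.0)
ua_high = conductance(90e6, 30.0, 10.0)
print(f"UA: {ua_low / 1e6:.2f} -> {ua_high / 1e6:.2f} MW/K")
```

With the terminal temperature differences held constant, a 50% larger duty demands a 50% larger conductance, consistent with the trends shown in Figures 22 and 23.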
Conclusions
In the present study, a thermodynamic model is established for the recompression cycle. Seven types of CO2-based mixtures are employed to investigate the effect of mixtures on the supercritical power cycle from the perspective of thermodynamic analysis. Based on the optimized results, cycle performance and recuperator heat transfer for the different mixtures are comprehensively compared and analyzed at different mass fractions of CO2. Thereafter, in order to reveal the effect of compressor inlet temperature, cycle parameters of CO2-N2 are obtained and compared at four different temperatures. The main conclusions can be drawn as follows:
(1) Under the fixed compressor inlet temperature, except for CO2-Xe and CO2-He, the cycle efficiency, specific work, and temperature increase in the primary heater of the other mixtures are generally lower than those of pure CO2, while their mass flow rate, heat input, and low-side cycle pressure are generally higher. For the analyzed parameters, the order of the mixtures almost coincides with the descending or ascending order of the corresponding critical temperatures. Performance curves of the considered mixtures lie between those of CO2-Xe and CO2-He. Furthermore, when the mass fraction approaches 1.0, there is usually a sharp change in cycle performance for CO2-He, CO2-Ne, CO2-N2, CO2-Ar, and CO2-O2.
(2) For the recuperators of the recompression cycle, the HTR transfers more heat than the LTR for all mixtures. Except for CO2-Xe, the transferred heat and the required conductance of the HTR and LTR decrease with increasing CO2 mass fraction. At high CO2 mass fractions, the considered mixtures follow the order CO2-He > CO2-Ne > CO2-O2 > CO2-N2 > CO2-Ar > CO2-Kr > CO2-Xe. As for the split ratio, the larger the heat capacity difference between the hot and cold sides of the LTR, the lower the split ratio. All mixtures have a higher split ratio than pure CO2, and the order of the split ratio for the mixtures almost coincides with the above sequence. In general, the curves of CO2-O2 and CO2-N2 are close to each other.
(3) Increasing the compressor inlet temperature decreases the cycle efficiency, the specific work, and the temperature difference in the primary heater, while the heat conductance of the recuperators increases. However, the obvious benefit of a higher inlet temperature is a larger temperature difference between the working fluid and the cooling source, especially in hot or arid environments, which greatly reduces the mass flow of the cooling source and the required heat exchange area of the gas cooler. How to determine the optimal compressor inlet temperature is therefore worth investigating, considering the thermodynamic and economic performance of the working fluid simultaneously.
In engineering, CO2 is inevitably mixed with impurity gases such as N2 or O2, and this work helps reveal the effect of these impurities on the performance of the S-CO2 Brayton cycle. Furthermore, for the use of CO2-based mixtures to improve cycle performance, this study provides preliminary comparison results for seven types of mixtures under certain conditions, based on property calculations with REFPROP. In future research, the properties of CO2-based mixtures therefore need to be studied in depth to support their application. More investigations of the environmental impacts, costs, and experimental limitations of the supercritical Brayton cycle with CO2-based mixtures should also be performed, considering different driving heat sources.
Supplementary Materials: The following are available online at http://www.mdpi.com/1996-1073/13/7/1741/s1. According to the given conditions, cycle parameters are fully calculated for the seven mixtures; the corresponding data are listed in an Excel file. Furthermore, temperature distributions in the HTR and LTR of these mixtures are also presented as figures.
Funding: This work is supported by the program "Researches on the fundamental theory for the optimization and operation of supercritical CO2 power cycle" from China Three Gorges Corporation, grant number 202003024.
|
v3-fos-license
|
2018-06-21T12:41:03.551Z
|
2018-09-01T00:00:00.000
|
46918665
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1016/j.ypmed.2018.05.024",
"pdf_hash": "153edb312cd0185f96a12c6d1fa1fb57d6550b60",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1033",
"s2fieldsofstudy": [
"Sociology"
],
"sha1": "58c7a42748f3525fcf34824c11da66197c5b6be5",
"year": 2018
}
|
pes2o/s2orc
|
Preventing intimate partner violence through paid parental leave policies.
Paid parental leave policies have the potential to strengthen economic supports, reduce family discord, and provide opportunities to empower women (Basile et al., 2016; Niolon et al., 2017). In this article, we present a theory of change and evidence to suggest how paid parental leave may impact intimate partner violence (IPV). In doing so, we present three mechanisms of change (i.e., reduction in financial stress, increase in egalitarian parenting practices, and promotion of child/parent bonding) through which paid parental leave could reduce rates of IPV. We also describe limitations of the current state of knowledge in this area, as well as opportunities for future research. Ultimately, our goal is to facilitate the identification and implementation of approaches that have the potential to reduce violence at the population level. Paid parental leave embodies the potential of policies to change societal-level factors and serve as an important prevention strategy for IPV.
Introduction
Intimate partner violence (IPV) is a significant public health issue, with 37.3% of women and 30.9% of men in the United States experiencing contact sexual violence, physical violence, or stalking by an intimate partner in their lifetime. Global estimates suggest that the lifetime prevalence of physical and/or sexual IPV against women is approximately 30% (World Health Organization, 2013). However, we currently have few effective strategies to prevent the onset of violence (i.e., primary prevention) or reduce violence that is already ongoing (i.e., secondary/tertiary prevention) in intimate relationships (Whitaker, Murphy, Eckhardt, Hodges, & Cowart, 2013). Moreover, those few that are effective focus on individual- or relationship-level factors and have limited population impact due to the inability to scale up these strategies (Frieden, 2010; Spivak et al., 2014; Whitaker, Hall, & Coker, 2009; Whitaker et al., 2013).
In this vein, policy-based prevention approaches have the potential to change the outer layers of the social ecology (i.e., community and societal factors; Bronfenbrenner, 1979), altering social inequalities and ultimately changing norms that support the use of violence (Dahlberg & Krug, 2002). Given that the impacts of such policies can be broad, reaching communities and/or society at large, policy approaches may be ideally suited to modify those societal factors that contribute to rates of violence in communities. In support of this effort to identify potential policy approaches to prevent violence, it may be useful to evaluate whether the effects of current policies, designed or enacted for other purposes, extend beyond their original purpose to affect rates of violence. Policies from various sectors (e.g., education, economic, criminal justice) designed to affect health inequities may serve as effective violence prevention strategies. For example, Kearns, Reidy, and Valle (2015) summarized the literature examining alcohol-related policies and their association with IPV. The authors reported an association between alcohol outlet density and rates of IPV, which suggests that policies regulating the number of alcohol outlets in a given community may be an effective method to curb IPV in those communities. In similar fashion, D'Inverno, Kearns, and Reidy (2016) argued that policies designed to increase girls' and women's enrollment in science, technology, engineering, and math (STEM) fields may be an effective primary prevention strategy for teen dating violence (TDV) and IPV, in great part due to effects on strengthening household financial security and reducing financial stress and its impact on relationship discord (Matjasko, Niolon, & Valle, 2013; Niolon et al., 2017).
In addition, supporting girls' and women's enrollment in STEM fields may also lead to more distal effects of promoting attitudes and beliefs about women as equals thereby increasing gender equity (Glick & Fiske, 2001). Indeed, given the links among economic deprivation, gender, health disparities, and IPV, policies that reduce familial financial stresses and increase gender parity may likely be effective tools to prevent IPV (D'Inverno et al., 2016;Niolon et al., 2017).
Paid parental leave represents one policy-based approach with the potential to strengthen economic supports, reduce family discord, and provide opportunities to empower women, all of which have the potential to affect rates of IPV (Basile et al., 2016; Niolon et al., 2017). Paid parental leave supports new parents by providing job-protected, paid time off to care for and bond with a new child without interruptions to household income or conflict between work and family responsibilities. This bonding period may be invaluable in fostering positive parenting skills and promoting healthy family relationships and lifestyles (Chatterji & Markowitz, 2012; Goodman, 2012; Huang & Yang, 2015; Johansson, Wennberg, & Hammarström, 2014; Månsdotter, Lindholm, Lundberg, Öhman, & Winkvist, 2006; Månsdotter & Lundin, 2010; Saade, Barbour, & Salameh, 2010; Whitehouse, Romaniuk, Lucas, & Nicholson, 2013). In this sense, paid parental leave simultaneously supports the family as a whole while also strengthening support for mothers individually. Beyond the multitude of social, mental, and physical health benefits conferred by paid leave practices, paid parental leave policies may be an effective strategy to prevent future instances of violence in intimate relationships.
In this article, we outline the potential for paid parental leave to influence IPV indirectly through its purported influence on risk and protective factors associated with IPV. We present a rationale behind paid parental leave as a promising prevention approach for IPV, including a theoretical model based on empirical evidence of the various pathways by which paid parental leave may influence rates of IPV. We also describe limitations of the current state of knowledge in this area, as well as opportunities for future research. Our goal is to facilitate the identification of evidence-based, societal-level approaches for preventing violence (such as policy-based approaches) in order to achieve greater population impact. Ultimately, this article is a call for researchers, practitioners, and stakeholders across disciplines to collaborate in the implementation and evaluation of innovative strategies to prevent IPV.
Theoretical Model Describing Paid Parental Leave and its Impact on Intimate Partner Violence
There is a dearth of research that directly examines the relation between paid parental leave and IPV. For many states, implementation is still in the early stages; thus, there has been limited opportunity to examine the relation between paid parental leave and IPV. However, there are several theoretical reasons to expect that paid parental leave may affect rates of IPV. We propose three processes or, mechanisms of change, through which paid parental leave may potentially prevent or decrease IPV (illustrated in Figure 1).
• Path 1 - paid leave maintains household income, preventing financial stressors and the associated relationship discord that can incite instances of relationship violence;
• Path 2 - paid leave increases egalitarian parenting practices and decreases the impact of work interruptions on women's advancement in the workplace, thereby increasing gender equity, which is associated with lower rates of IPV against women; and
• Path 3 - paid leave provides new parents a period of time to bond with a child free of conflict between work and family demands, which facilitates IPV/TDV protective factors and reduces risk factors in youth (e.g., healthy parenting practices, healthy relationships, good parental mental health, etc.).
Collectively, the proposed paths may work together in additive and multiplicative fashion to attenuate risk factors and increase protective factors, with the shared objective of preventing or reducing IPV. Below we present empirical evidence supporting the argument for each of these mechanisms to prevent IPV.
Path 1
The economic benefits of paid parental leave may be likely to impact the frequency of IPV in a relationship by reducing financial stress and worry about insufficient household income that can serve as precipitant stressors for violence. Poverty and stress related to financial strain have been linked to negative outcomes, including relationship dissatisfaction and conflict, which are risk factors for IPV (Byun, 2012;Capaldi, Knoble, Shortt, & Kim, 2012;Davis & Mantler, 2004;Dew, 2008;Fox & Chauncey, 1998;Neff, Holamon, & Schluter, 1995;Slep, Foran, Heyman, & Snarr, 2010). For the most economically disadvantaged, paid leave may proffer reduction in the number of violent events given that financial stressors such as food insecurity, eviction, disconnected phone service, and being unable to pay utilities are significant predictors of physical IPV perpetration among men and women (Schwab-Reese, Peek-Asa, & Parker, 2016). For example, a qualitative study of women who had experienced IPV during or shortly after giving birth found IPV often existed in conjunction with other stressful life events, including financial and housing difficulties (Bacchus, Mezey, & Bewley, 2003). Similarly, Breiding, Basile, Klevens, and Smith (2017) found robust associations between food and housing insecurity in the preceding 12 months and rates of IPV and sexual violence victimization. Notably, when the state of California implemented a paid leave policy, the most economically disadvantaged families showed the greatest increase in leave-taking (Bartel, Baum, Rossin-Slater, Ruhm, & Waldfogel, 2014). Thus, it seems providing paid parental leave could mitigate relationship stress about finances among the most at risk families during this critical and already stressful period.
Only one study has directly assessed the association between paid leave and IPV. Gartland and colleagues (2011) surveyed 1,507 Australian women during pregnancy and three, six, and twelve months postpartum about their experiences with physical and emotional IPV.
Women were also asked about employment status and eligibility for paid maternity leave. The authors identified three groups, women that: (1) worked during pregnancy and qualified for paid maternity leave; (2) worked during pregnancy but did not qualify for paid maternity leave; and (3) did not work during pregnancy, thus did not qualify for paid maternity leave. After controlling for maternal age at birth, relationship status, income, and education level, women who worked during pregnancy and qualified for paid maternity leave reported 58% lower odds of IPV in the first twelve months postpartum compared to women who did not have access to paid maternity leave (i.e., the combination of women that worked during pregnancy but did not qualify for paid maternity leave and women that did not work during pregnancy and therefore did not qualify for paid maternity leave, see Aitken et al., 2015).
Unfortunately, the authors did not test differences in rates of IPV between working mothers with access compared to working mothers without access to paid leave. Additionally, interpretation of the results is limited because the authors were unable to determine whether the women with access to paid maternity leave actually used their leave. It is possible that other factors, such as the perception of support in the workplace, may have played a role in decreasing violence against women. Nevertheless, there is evidence to suggest a trend between access to financial resources and reduced violence in intimate relationships (Ellsberg et al., 2015;Kim et al., 2007;Matjasko et al., 2013). Hence, it is possible that even partial wage replacement during parental leave may mitigate the stress associated with household finances, thereby reducing relationship problems, and consequently reducing the frequency of violent events in the relationship.
Path 2
Paid parental leave also has potential to influence rates of IPV by promoting more egalitarian parenting practices, which in turn, generalize to promote less traditional gender norms and ultimately reduce gender inequality. This is pertinent because traditional (i.e., patriarchal) gender norms and gender inequality are risk factors for violence against girls and women (Gressard, Swahn, & Teten, 2015; World Health Organization [WHO]/London School of Hygiene and Tropical Medicine, 2010). For example, at the individual level, endorsement of patriarchal gender role attitudes has been linked to physical and sexual violence against an intimate partner (Parrott & Zeichner, 2003;Reidy Berke, Gentile, & Zeichner, 2014;Smith-Hunter, Parrot, Swartout, & Teten-Tharp, 2015). Likewise, at the societal level, indices of gender inequality are strongly associated with the rates of girls' (but not boys') physical dating violence victimization (Gressard et al., 2015). Accordingly, it seems altering patriarchal gender norms and consequent gender inequality may be fruitful in the prevention of IPV.
In the U.S., working mothers spend approximately twice as much time as working fathers engaged in domestic work and this difference is due largely to primary childcare duties such as feeding, changing diapers, bathing, taking care of children when they are sick, and managing children's schedules and activities (Allard & Jane, 2008;Bureau of Labor Statistics, 2015;Pew Research Center, 2015). Evidence suggests policies that provide and encourage fathers to take paid leave increase their participation in these childcare duties (Haas & Hwang, 1999;OECD, 2016;Tanaka & Waldfogel, 2007). Moreover, fathers who participate early in childcare duties tend to stay involved throughout a child's life (OECD, 2016). This participation is notable because fathers who are more involved in direct physical and emotional care of children hold more gender-equitable attitudes (Bonney, Kelley, & Levant, 1999;Bulanda, 2004;Craig, 2006). In fact, involved fathers who attend prenatal visits, take paternity leave, and help their children with homework, etc., are less likely to perpetrate IPV (Chan, Emery, Fulu, Tolman, & Ip, 2017). Thus, as fathers' use of parental leave becomes more commonplace, the stigma of assisting with childcare (and other domestic work) may abate potentially altering traditional, hegemonic, masculine ideologies that are associated with gender inequality and ultimately IPV against women (Farmer & Tiefenthaler, 2003;Gressard et al., 2015;McCauley et al., 2013;Murshid & Critelli, 2017;Reidy, Shirk, Sloan, & Zeichner, 2009;Smith-Hunter et al., 2015).
In addition to supporting increased involvement from fathers, paid parental leave would also have potential benefits for mothers. Research has shown that women in the United States who have access to job-protected maternity leave are more likely to return to their previous employers after childbirth and experience positive wage benefits, even when controlling for employer characteristics (Waldfogel, 1998). Both in the United States and internationally, one consistently documented source of gender inequality relates to the wage gap between male and female workers. Median weekly earnings for women in the United States represented 82% of median weekly earnings for men in 2016 (Bureau of Labor Statistics, 2017), and research suggests that the gender wage gap grows with age, becoming even more pronounced for women with children (Budig & England, 2001; Goldin, 2014; Slaughter, 2015). Gangl and Ziefle (2009) found that, when controlling for work experience, working mothers in the United States experience a 4-7% wage penalty per child. This penalty was largely accounted for by work interruptions for childcare, changes in employer at reentry into the labor market, and other economic responses to motherhood. The gender wage gap may also play a role in the slow uptake of paid leave among fathers: in the United States, the unavailability of paid leave incentivizes the parent earning the most to keep working, and in countries where paid leave is offered at only a percentage of the parent's earned salary, the same incentive applies. In fact, some countries have implemented successful strategies such as "bonus periods" and non-transferrable parental leave to increase parental leave uptake among men (Haas & Rostgaard, 2011).
A couple may receive extra weeks of paid leave if the father uses a certain amount of paid parental leave, providing a "bonus period." Non-transferrable parental leave provides each parent with their own paid leave period, which cannot be used by the other parent. Non-transferrable parental leave has doubled the number of parental leave days taken by men in Iceland and Sweden (Organisation for Economic Co-operation and Development, 2016). Encouraging fathers to take parental leave is critical to combat traditional patriarchal gender roles being reinforced when mothers exclusively stay home to care for a new child. In this way, paid parental leave represents an opportunity to not only encourage fathers to participate more frequently in childcare duties, but also to support mothers in returning to the workforce following the birth of a child. This has the potential to decrease the impact of work interruptions on future earning potential and ultimately advance gender equality in the long-term.
Path 3
Paid parental leave policies may also support the prevention of IPV through the prevention of TDV, a risk factor for IPV (Exner-Cortens, Eckenrode, Bunge, & Rothman, 2017). Specifically, paid parental leave has demonstrated positive impacts on parental involvement and positive parenting practices. For example, this dedicated time encourages new parents to learn about and become interested in child development, increases involvement in child caretaking responsibilities, offers them the opportunity to become more attentive to the infant's needs, and increases the probability and duration of exclusive breastfeeding (Feldman, Sussman, & Zigler, 2004; Nepomnyaschy & Waldfogel, 2007; Galtry & Callister, 2005; Roe, Whittington, Fein, & Teisl, 1999). These parenting behaviors, in turn, contribute to improved child and family physical, behavioral, and mental health, including decreasing the risk of externalizing disorders, depression, substance use, and risky sexual behavior (Oddy et al., 2010; Cookston & Finlay, 2006; Deptula, Henry, & Schoeny, 2010). Pertinently, a recent review of the literature on risk and protective factors for TDV identified parenting-related factors (e.g., low parental monitoring, harsh parenting practices, and negative parent-child interactions) as increasing the risk of TDV perpetration (Vagi et al., 2013). Likewise, the aforementioned physical, behavioral, and mental health outcomes are risk factors that exacerbate TDV (Vagi et al., 2013). Clearly, longitudinal research is necessary to establish long-term impacts of paid parental leave on child and adolescent outcomes, including perpetration of TDV (and ultimately IPV). However, it seems possible that utilizing parental leave may improve parenting practices and family bonding, thereby reducing adverse mental and behavioral health disorders in adolescence, which in turn may reduce risk for TDV perpetration.
In a related vein, some parenting practices influenced by paid leave, such as duration of breastfeeding, have also been linked to lower risk for child abuse and neglect (Klevens, Luo, Xu, Peterson, & Latzman, 2016;Strathearn, Mamun, Najmun, & O'Callaghan, 2009). To the extent that paid parental leave prevents a child from being a victim of child maltreatment or a witness to IPV, the intergenerational transmission of IPV may also be interrupted, as these forms of family violence are also predictors of IPV (Ireland & Smith, 2009;Linder & Collins, 2005). Thus, paid parental leave may buffer against known risk factors for perpetration against intimate partners by promoting parent-child bonding and healthy parenting practices, which in turn may decrease risk for child maltreatment and promote the healthy development of youth.
Caveats & Conclusions
Despite widespread recognition of the significant public health implications of IPV, a body of accumulated research, a vast field of dedicated practitioners, and the resources that have been devoted to IPV/TDV over the past decades, we still have significant progress to make in preventing such violence (Whitaker et al., 2013). This is surely, in part, due to the multifaceted nature of IPV/TDV (Reidy & Niolon, 2012), wherein there is no singular cause for any one person, and no unified set of causes across persons. Consequently, truly effective interventions will likely necessitate comprehensive strategies that incorporate multiple causal mechanisms at multiple levels of the social ecology. Herein, we have laid out three potential mechanisms of change whereby paid parental leave may influence the perpetration of IPV/TDV. Of course, these paths are at this point primarily theoretical, albeit based on empirical links. Accordingly, we believe these to be three fruitful areas of exploration for prevention researchers investigating the relationship between paid leave and IPV outcomes.
However, there are a number of critical questions that future research on this topic should likely consider. For example, the ideal length of paid leave has not yet been determined. Some studies suggest that too much leave could be harmful to a woman's career in the form of lower wages, lower labor market attachment, and workplace discrimination (Hegewisch & Gornick, 2011; Morgan & Zippel, 2003; Ray, Gornick, & Schmitt, 2009), with some evidence suggesting that even short interruptions from work (i.e., less than four months) can increase a woman's risk for downward mobility and decrease chances for upward mobility (Aisenbrey, Evertsson, & Grunow, 2009). Conversely, it will be important to establish the minimum time necessary to achieve the positive outcomes related to paid leave. In addition, policies of this nature may produce different results in the United States relative to other countries. For instance, the Nordic countries, which are known to have high levels of gender equality, paradoxically report high levels of IPV (Gracia & Merlo, 2016; World Economic Forum, 2014). One potential explanation for this unexpected association is backlash from males who respond to an increase in female independence with violence because it challenges the norm of male dominance and female dependence (Macmillan & Gartner, 1999). Many policies attempt to curtail a behavior and change power roles, so that behavior may increase in response to the policy before deeply rooted ideologies change and the behavior declines. It is also possible that reports of a behavior will increase once a policy raises awareness of the problem behavior. In the "Nordic paradox" above, while not a specific policy per se, IPV may be elevated because of higher levels of disclosure, as incidents of violence against women are more likely to be openly addressed and challenged in societies with greater equality.
It is also worth noting that research examining gender equality and IPV can present a complicated picture. Using data from 30 states in the U.S., Yllo and Straus (1990) found a curvilinear trend such that states with the lowest and highest levels of gender equality had the highest rate of wife assaults, but once the study was updated to include a larger sample size and more recent data from all 50 states, Straus (1994) found that states with higher levels of gender equality also reported lower rates of wife assaults. Future research should continue to carefully examine policies, their impact on gender equality, and the effect this has on IPV.
The way in which policies are analyzed can also have implications for the findings. Research at the individual level and the policy level may produce different results. A systematic review that examined both individual-level and policy-level comparisons of paid leave found that the individual-level studies showed positive maternal health benefits (e.g., lower psychological distress and reduced odds of poor physical health), whereas the studies at the policy level showed negative or null effects of paid leave (e.g., no differences in depression; less life satisfaction and poorer general health; Aitken et al., 2015). The authors concluded that the studies conducted at a policy level aggregated the effects for women who do and do not take leave, thus accounting for the null findings (Aitken et al., 2015). Studies at the policy level may also limit understanding of the impacts of paid leave on people of differing marital status, sexual orientation, race, socioeconomic status, and job roles (Aitken et al., 2015). This is particularly problematic for learning more about subpopulations in which gender role norms may differ from typical patriarchal gender norms (e.g., cisgender individuals and those in same-sex relationships).
Despite these unknowns, there are clear benefits of paid parental leave (Chatterji & Markowitz, 2012; Goodman, 2012; Huang & Yang, 2015; Johansson et al., 2014; Månsdotter et al., 2006; Månsdotter & Lundin, 2010; Saade et al., 2010; Whitehouse et al., 2013). As such, exploring additional outcomes (i.e., IPV prevention) seems only logical. In addition, we should point out that the three mechanisms of change we present here are not necessarily exhaustive. It is entirely possible that paid parental leave policies may have preventive effects on IPV through additional paths. For example, the period of time following the birth of a new child represents a period of heightened risk for IPV victimization, especially for younger, lower income mothers (Agrawal, Ickovics, Lewis, Magriples, & Kershaw, 2014; Harrykissoon, Rickert, & Wiemann, 2002). For some women, it is possible paid leave could reduce financial dependence on their abuser enough to allow them to escape a potentially escalating abusive situation, even if only temporarily. This guaranteed income, combined with time off, might empower a woman to leave her abuser without having to immediately return to work or fear losing her job. The potential of paid parental leave to prevent IPV in the first place or reduce IPV in relationships where it is already occurring merits further empirical investigation.
Paid parental leave is only one of many policies that may prove to be an important violence prevention tool. This policy can offer families a multitude of benefits, but other related policies, in conjunction with paid leave, could be considered as well. For example, equal pay for equal work, subsidized childcare, or policies which put value on unpaid childcare at home through basic income or participation income where parents can combine paid and unpaid work may impact IPV through similar pathways. These policies have previously been recognized as potential approaches to empower and support women by increasing economic stability and decreasing gender inequality (Basile et al., 2016). Paid parental leave is a concrete approach that highlights the potential of policies to change societal-level factors and serve as an important prevention strategy that prevents multiple, connected forms of violence.
When Red Turns Black: Influence of the 79 AD Volcanic Eruption and Burial Environment on the Blackening/Darkening of Pompeian Cinnabar
It is widely known that the vivid hue of red cinnabar can darken or turn black. Many authors have studied this transformation, but only a few in the context of the archeological site of Pompeii. In this work, the co-occurrence of different degradation patterns associated with Pompeian cinnabar-containing fresco paintings (alone or in combination with red/yellow ocher pigments) exposed to different types of environments (pre- and post-79 AD atmosphere) is reported. Results obtained from the in situ and laboratory multianalytical methodology revealed the existence of diverse transformation products in the Pompeian cinnabar, consistent with the impact of the environment. The effect of hydrogen sulfide and sulfur dioxide emitted during the 79 AD eruption on the cinnabar transformation was also evaluated by comparing the experimental evidence found on paintings exposed and not exposed to the post-79 AD atmosphere. Our results highlight that not all the darkened areas on the Pompeian cinnabar paintings are related to the transformation of the pigment itself, as clear evidence of darkening associated with manganese and iron oxide formation (rock varnish) has also been found on fragments buried before the 79 AD eruption.
The Roman city of Pompeii was destroyed after the eruption of Mount Vesuvius in 79 AD. Although this was an unfortunate natural and societal event, it resulted in a remarkably good conservation of its remains, thanks to the burial of the city under the pyroclastic flow. However, some of the pigments applied on the walls of Pompeii experienced transformations due to the eruption, such as the blackening process of hematite (α-Fe 2 O 3 ) 1 and the dehydration of yellow ocher (goethite, α-FeOOH) into hematite. 2,3 A recent study has shown that another reason for the degradation of the mural paintings of Pompeii is the crystallization of salts coming from the pyroclastic materials ejected in the 79 AD eruption. 4 In addition, since the first archeological excavations in the 18th century, the archeological park has suffered a continuous decay, due to its exposure to the modern atmosphere and the (former) application of restoration products that are no longer used. 5 The study of ancient sources 6,7 and archeological records demonstrates that red cinnabar (α-HgS) has been used as a pigment since antiquity. This precious pigment, employed in the mural paintings of the archeological site of Pompeii, suffers from blackening. Hence, Vitruvius did not encourage its application in open spaces (e.g., peristylia), since its exposure to sunlight and moonlight 6 was already thought at that time to be responsible for its deterioration.
Prominent examples of cinnabar blackening are found at the Casa della Farnesina (Rome) or the Villa dei Misteri (Pompeii). 8,9 This process occurs to a lesser degree in several locations and can remain unnoticed by nonexperts.
After visual inspection, the color of the altered cinnabar from Pompeii looks blacker 10 than the one on other discolored cinnabar easel paintings. 11 In the latter, the altered cinnabar/vermilion shows brownish to grayish hues. 11,12 The blackening of cinnabar has traditionally been attributed to light exposure and transformation of red α-HgS (trigonal crystal system) into black β-HgS metacinnabar (cubic crystal structure), which is reported to take place at 344 ± 2 °C. 13 However, confirmed detections of black metacinnabar in darkened cinnabar remain scarce. 14,15 Since Raman spectroscopy cannot distinguish cinnabar from metacinnabar, other techniques such as X-ray diffraction (XRD) 14 or pump-probe microscopy 15 could be applied for that purpose. Nevertheless, although different cinnabar-based mural paintings are exposed to light, not all of them show the same degree of transformation, and some areas do not present any sign of darkening or blackening. 8,11 Hence, other variables, which could contribute to this transformation, should be considered for its complete explanation.
Further examples of the darkening or blackening of cinnabar in the presence of Cl in easel and mural paintings, featuring mercury chlorides or Hg-S-Cl compounds (calomel: Hg 2 Cl 2 , mercury (II) chloride: HgCl 2 , corderoite: α-Hg 3 S 2 Cl 2 , terlinguaite: Hg 2 OCl, kenhsuite: γ-Hg 3 S 2 Cl 2 ), have been published in recent years. 10,11,16−18 Another plausible degradation pathway has been proposed for the Vesuvian mural paintings of Torre del Greco (Campania), in which calcite acts as a binder: the formation of gypsum crusts as a result of calcite sulfation, possibly favored by the photodecomposition of cinnabar. 10 The subsequent accumulation of airborne particulate matter and organic pollutants inside the porous structure of gypsum gives the crust its black color. Cotte et al. 10 mentioned that calcite sulfation could take place due to the influence of the SO 2 present in the polluted atmosphere or to the oxidized S produced by the decomposition of HgS. This last hypothesis might explain the failure to identify black crusts (gypsum crusts with airborne particulate matter/organic pollutants) in murals from the Vesuvian area decorated with pigments other than cinnabar.
In this work, blackened/darkened cinnabar paintings (alone or in combination with red/yellow ocher pigments) have been analyzed in situ and in the laboratory through a multianalytical methodology. The main goals of this study were (i) to determine the role of the 79 AD volcanic eruption in the blackening of Pompeian murals decorated with cinnabar and (ii) to evaluate whether different transformation phenomena can be identified on samples exposed to the pre- and post-79 AD atmospheres.
To achieve these goals, three different kinds of cinnabar paintings were compared: (i) painted areas impacted by the 79 AD eruption, excavated more than 150 years ago and exposed to the modern atmosphere since then; (ii) painted panels impacted by the 79 AD eruption, removed during the excavations of the 19th century, stored at the Naples National Archaeological Museum (MANN) and thus, protected from the modern atmosphere; and (iii) painting fragments exposed to the ancient atmosphere of Pompeii, presumably detached after the 62 AD earthquake and deposited in a house pit since then.
■ EXPERIMENTAL SECTION
Samples and Studied Mural Paintings. Three Pompeian houses were selected for this study: House of Marcus Lucretius (Regio IX, 5, 3/24), House of Ariadne (Regio VII, 4, 31/51), and House of the Golden Cupids (Regio VI, 16, 7) (see Table S1). All the houses have suffered the influence of the volcanic eruption and the preserved mural paintings have been exposed to the modern atmosphere since their excavation (19th century to beginning of 20th century).
Two samples (ATT2007/14 and 16/56) from the triclinium of the House of Marcus Lucretius (see Table S1) were considered. In the wall paintings of this room, the blackening of the hematite pigment was previously studied, and it was possible to identify the presence of coquimbite/paracoquimbite (Fe 2 (SO 4 ) 3 ·9H 2 O) as a degradation product of the pigment. 1 Interestingly, this house also presented a deposit where earlier detached mural decorations were abandoned and buried. This deposit was used to cast aside detached fragments, possibly as a consequence of the 62 AD earthquake that damaged the murals of the house. 19 This waste pit was excavated during the EPUH (Expeditio Pompeiana Universitatis Helsingiensis) campaign in 2005. Since then, the recovered fragments have been stored in the dark. In this work, two fragments from this deposit (samples 3T and Red A) showing dark stains on the cinnabar painting layer were considered. Additionally, panel paintings extracted from the triclinia (panel references 9206, 9285, 8992, and 9103, the latter from the summer triclinium) in the excavations of the 19th century and stored since then at the MANN were also analyzed in situ. The three panels belonging to the triclinium are surrounded by a blackened red frame, which could have been painted with red cinnabar.
From the House of Ariadne, three samples were considered (samples 6, 17, and 18; see Table S1).
Finally, in the House of the Golden Cupids, the blackened cinnabar decorations from the exedra (Room G; see Table S1) were studied. Due to sampling restrictions in this house, the analyses were performed in situ, without taking any sample.
Portable and Benchtop Instrumentation. The in situ molecular analysis was performed using a portable innoRam Raman spectrometer (B&W Tek, Newark, USA) equipped with a CleanLaze technology 785 nm excitation laser (<300 mW laser output power) and mounting the probe on a motorized tripod (MICROBEAM S.A., Barcelona, Spain). For the in situ elemental analysis, the XMET5100 (Oxford Instruments, UK) Handheld Energy Dispersive X-ray Fluorescence spectrometer (HH-EDXRF), equipped with an Rh X-ray tube, was used. Details about the normalization procedure to compare the S and Cl counts extracted from the walls and panels under study can be reviewed in the Supporting Information.
In the laboratory, the molecular study of the samples was achieved using the inVia confocal Raman microscope (Renishaw, Gloucestershire, UK), mainly with a 50× objective lens. Excitation lasers of 785 nm (nominal laser power 350 mW) and 532 nm (nominal laser power 50 mW) were employed for the acquisition of the spectra. The spectra were acquired in the 60−1200 cm −1 or 60−3000 cm −1 spectral range and accumulated 3−10 times for 5−10 s.
To confirm the molecular results, an elemental imaging study was conducted on sample Red A using the M4 TORNADO (Bruker Nano GmbH, Berlin, Germany) EDXRF spectrometer. Elemental distribution maps were acquired at a lateral resolution down to 25 μm using polycapillary optics, with the Rh X-ray tube operating at 50 kV and 600 μA. The spectral acquisition and data treatment were performed using the ESPRIT software from Bruker.
To evaluate the composition of the black stains of the sample 3T, X-ray Photoelectron Spectroscopy (XPS) and Time-of-Flight Secondary Ion Mass Spectrometry (TOF-SIMS) were employed.
XPS analysis was conducted using a Thermo Scientific K-Alpha ESCA instrument equipped with monochromatic aluminum Kα 1,2 radiation at 1486.6 eV. Neutralization of the surface charge was achieved by using both a low energy flood gun (electrons in the range 0−14 eV) and a low energy Ar-ions gun. Photoelectrons were collected from a take-off angle of 90° relative to the sample surface. The measurement was done in a Constant Analyzer Energy mode (CAE) with a 100-eV pass energy for survey spectra and a 20-eV pass energy for high resolution spectra. Charge referencing was done by setting the lower binding energy C 1s hydrocarbon photopeak at 285.0 eV. Surface elemental composition was determined using the standard Scofield photoemission cross sections.
A TOF-SIMS IV instrument from Ion-Tof GmbH (Germany) was employed to collect the mass spectra and to conduct mapping. A pulsed Bi 3 ion beam at 25 keV impacted the sample; the generated secondary ions were extracted with a 10 kV voltage, and their TOF from the sample to the detector was measured in a reflectron mass spectrometer. The pulsed Bi 3 beam, at 25 keV and an incidence of 45°, was used to scan 500 × 500 μm 2 areas.
Additional details of the experimental aspects and data treatment conducted using specific benchtop and portable instruments are available in the Supporting Information.
■ RESULTS AND DISCUSSION
Characterization of Blackened Cinnabar on Mural Paintings Impacted by the 79 AD Eruption and Nowadays Exposed to the Atmosphere. The eastern and southern walls of the triclinium of the House of Marcus Lucretius revealed the occurrence of Fe, Hg, and S, confirming the presence of cinnabar, together with red and yellow ochers. 20 In situ Raman measurements allowed the systematic identification of calomel (Hg 2 Cl 2 ) and gypsum (CaSO 4 ·2H 2 O) on the blackened cinnabar areas. Cl was also identified by HH-EDXRF. Additional analytical details can be found in Table S2.
To evaluate the presence of additional compounds in the triclinium of the House of Marcus Lucretius, samples from the northern wall (ATT2007/14) and southern wall (16/56) were analyzed in the laboratory by Raman microscopy (Table S2). Calomel was present in gray-whitish particles of sample 16/56 (see Figure 1a). In sample ATT2007/14, extracted from the upper red frame of the central panel, wax (band at 1062 cm −1 ), cinnabar (weak band at 254 cm −1 ), calomel (Hg 2 Cl 2 , bands at 168 and 275 cm −1 ), and gypsum (CaSO 4 ·2H 2 O, bands at 1008 and 1130 cm −1 ) were detected (see Figure 1b).
The 1062 cm −1 Raman band was attributed to a wax applied in the 19th century restorations 5 of the mural paintings and not to the presence of nitratine (NaNO 3 ), based on the detection of a series of signals ascribable to an organic compound 21 (see Figure S1, Supporting Information). The presence of the 1734 cm −1 band, assigned to the ν(CO) vibrational mode, suggests the occurrence of a saturated wax. The proposed assignment of the rest of the Raman bands is shown in Table S3.
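The band-to-phase assignments above follow the usual logic of comparing measured band positions against reference positions for candidate phases. As a minimal sketch, the lookup can be expressed as a nearest-reference match; the reference positions below are those quoted in the text, while the ±6 cm⁻¹ matching tolerance is an assumed, illustrative value, not one stated here:

```python
# Minimal sketch of Raman band assignment by nearest-reference matching.
# Reference band positions (cm^-1) are taken from the text; the matching
# tolerance is an assumed, illustrative value.
REFERENCE_BANDS = {
    "cinnabar": [254],
    "calomel": [168, 275],
    "gypsum": [1008, 1130],
    "wax": [1062],
}

def assign_bands(measured, tolerance=6.0):
    """Map each measured band (cm^-1) to the closest reference phase."""
    assignments = {}
    for band in measured:
        best = None  # (distance, phase)
        for phase, refs in REFERENCE_BANDS.items():
            for ref in refs:
                delta = abs(band - ref)
                if delta <= tolerance and (best is None or delta < best[0]):
                    best = (delta, phase)
        assignments[band] = best[1] if best else "unassigned"
    return assignments

# Bands reported for sample ATT2007/14:
print(assign_bands([168, 254, 275, 1008, 1062, 1130]))
```

In practice, such a lookup only shortlists candidates; phases with overlapping or identical bands (e.g., cinnabar versus metacinnabar) still require complementary techniques, as noted earlier.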
Both samples extracted from the triclinium of the House of Marcus Lucretius show a dark crust on the top of the painting layer (see for example the microscopic observation of sample 16/56, Figure 1a). To obtain further insights into the gypsum and calomel distribution on these samples, cross sections were studied by SEM-EDS. Figure 1a shows part of the stratigraphy of sample 16/56, composed of a "black crust" layer, a pictorial layer, and a plaster. An EDS map of the whole stratigraphy ( Figure 1c) reveals the accumulation of S, attributed to the presence of gypsum in the "black crust" over the pictorial layer. In the latter, it was possible to detect bright particles (marked with a circle in Figure 1c,d) distributed throughout the layer. The EDS analyses (Figure 1e) confirmed the detection of both Hg and Cl in those particles (see Figure 1d), related to the presence of calomel. The cross section of the sample ATT2007/14 also revealed a black crust formed on the top of a pictorial layer with Hg-rich particles and chlorine.
In a preliminary in situ Raman screening of the area where sample 6 was obtained in the House of Ariadne, gypsum and calomel had been detected. The latter was later confirmed in the laboratory by additional Raman analyses conducted on sample 6. Furthermore, microscopic observations allowed the identification of cinnabar as random pigment particles in a yellow pictorial layer, as in the case of the samples from the House of Marcus Lucretius (see Figure 2a). Raman analyses performed on these particles (see Figure 2b) showed the presence of goethite (FeOOH, bands at 301, 387, 483, 551, and 686 cm −1 ), related to the yellow color (yellow ocher), together with tridymite (a high-temperature polymorph of SiO 2 ).

Calomel was also detected in sample 17, but not in sample 18. Gypsum had already been identified in situ by Raman spectroscopy. Sample 18 corresponds to a white stripe painted on a red background, which consisted of a mixture of cinnabar and red ocher (see Figure 3a,b). In this underlying pictorial layer, black-grayish metallic particles (around 15−20 × 5−10 μm 2 ) were identified microscopically (Figure 3a,c). Raman and EDS spectra acquired on those particles did not offer additional information other than the signals related to cinnabar (Hg and S detection). Interestingly, a previous electrochemical study has demonstrated the formation of metallic mercury as a degradation product of HgS under the influence of light and Cl −, 22 whereas a recent publication concerning egg tempera painting has already proposed the occurrence of metallic mercury on vermilion mock-ups. 23 The attributions of the in situ and laboratory-based Raman spectra of the House of Ariadne are summarized in Table S2.
The cinnabar used in the southern wall of the exedra of the House of the Golden Cupids shows a totally black appearance, suggesting that the blackening process is even more dramatic than the one occurring in the well-known Villa dei Misteri 9 (see Figure 4a-c). In the painting fragments of the predella, the intonaco described by Meyer-Graft, 24 based on a yellowish granular lime with fine orange inclusions, is visible in some areas (see Figure 4b). The Raman analysis of this mortar led to the identification of calcite (155, 711, 1086 cm −1 ) and goethite (302, 308 cm −1 ). In addition, the 1062 cm −1 Raman band could correspond to a wax, as in the case of the House of Marcus Lucretius (see Figure S1 and Table S3), probably applied to the painting during the 20th century restorations of the house. 24 The in situ measurements performed on the totally blackened cinnabar from the predella showed the ubiquitous presence of calomel and gypsum, together with cinnabar (see Figure 4d). Both calomel and gypsum were also detected in the whitish drips (see Figure 4a). In a previous study, thanks to portable laser-induced breakdown spectroscopy (LIBS) mapping of the mural paintings of the House of the Golden Cupids, 25 it was possible to assess that this predella was the most Cl-impacted painted surface among those considered in the study.
Characterization of Blackened Cinnabar on Panel Paintings Stored at Naples National Archaeological Museum (MANN). Three panel paintings (9206, 9208, and 9255) extracted from the triclinium of the House of Marcus Lucretius include a red frame that nowadays looks quite blackened (see Figure S2). HH-EDXRF measurements conducted on all the frames allowed the detection of Hg and S together with a high Fe contribution. These results pointed to the combined use of red ocher and cinnabar. Bands associated with calomel or other Hg-Cl or Hg-S-Cl compounds were not identified in any of the in situ Raman measurements performed on the blackened frames. However, Cl was detected by HH-EDXRF in the blackened frames (see the example of panel 9206 in Figure S2).
Gypsum was also detected in the blackened cinnabar areas (see the example of panel 9103 in Figure S3) of all the considered panel paintings.
Interestingly, the identification of gypsum was not restricted to the blackened cinnabar areas. This sulfate has been previously identified by infrared spectroscopy on the same panel paintings. 26 To rule out the intentional addition of gypsum to the plaster, several measurements were conducted on the surface of panel paintings 9285, 9206, and 8992 by HH-EDXRF. Moreover, additional measurements were performed on the south and east walls of the triclinium, specifically on the surrounding areas (left and right side) of the voids that the panels left when they were removed during the first excavations of the house (see Table S4 and Figure S4). The normalized net counts of S and Cl obtained from each spectrum of each panel painting stored at the MANN were compared with the ones obtained from the measurements on the walls in Pompeii. Since the red frames are rich in HgS, these points were not taken into account for the evaluation of S originating from gypsum. According to the obtained values, the normalized counts of S are higher in the panel paintings preserved at the MANN than in their respective adjacent walls currently exposed to the modern atmosphere (see Table S4 for comparison).
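The panel-versus-wall comparison described above reduces to computing a mean ± standard deviation of normalized net counts over repeated point measurements. A minimal sketch follows; the normalization scheme (dividing each net count by a reference signal) is an assumption made here for illustration, as the actual procedure is detailed in the Supporting Information, and the counts below are invented:

```python
import statistics

def normalized_counts(net_counts, reference_counts):
    """Normalize each net count by its reference signal (assumed scheme;
    the real normalization procedure is in the Supporting Information)."""
    return [n / r for n, r in zip(net_counts, reference_counts)]

def summarize(values):
    """Mean and sample standard deviation, as reported in Table S4."""
    return statistics.mean(values), statistics.stdev(values)

# Invented repeated point measurements (illustrative only)
panel_cl = normalized_counts([30.0, 22.0, 38.0], [100.0, 110.0, 95.0])
wall_cl = normalized_counts([85.0, 105.0, 92.0], [100.0, 98.0, 103.0])

for label, data in (("panel", panel_cl), ("wall", wall_cl)):
    mean, sd = summarize(data)
    print(f"{label}: {mean:.2f} ± {sd:.2f}")
```

Reporting the sample standard deviation alongside the mean makes the spot-to-spot heterogeneity of the walls visible, which is why some entries in Table S4 carry large uncertainties.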
The S decrease in the exposed walls may be associated with the dissolution-mobilization-recrystallization of the formed sulfates during the exposure to the open atmosphere. Moreover, the restoration campaigns conducted in this room could have also contributed to the reduction of the content of soluble salts, such as sulfates, in the walls.
As regards the normalized Cl counts (see Table S4), they are only slightly lower in the panel paintings (0.3 ± 0.1 in panel 9206) than in the exposed walls (0.9 ± 0.3 on the left side of panel 9206, 0.5 ± 0.2 on the right side of panel 9206). The only exception is the right side of the void left by panel 9191 on the southern wall (3.7 ± 0.3). The strikingly intense Cl peak in this area points to a prolonged direct exposure to the marine aerosol (see Figure S5, Supporting Information). On the other hand, the low Cl intensities, present both in the stored panels and in the exposed walls, may be attributed to the Cl emission of the volcanic eruption 27 and/or to a diffuse exposure to marine aerosol.
Note that in some cases the standard deviation related to the Cl and S normalized counts is high (see Table S4). This is associated with their heterogeneous distribution in the walls. 25

Characterization of Dark Stains on Cinnabar in Painting Fragments Buried and Protected from the 79 AD Eruption. Samples 3T and Red A, recovered from the deposit of the House of Marcus Lucretius, were also studied to observe possible differences in the state of conservation of cinnabar neither exposed to the 79 AD eruption nor to the atmosphere since its recovery in the excavations.
In this case, the samples did not show a widespread black appearance of the cinnabar layer, but only certain dark stains or patches (see the species identified by Raman spectroscopy in this work in Table S2). In previous studies conducted using Raman spectroscopy, calomel had been detected. 20 Interestingly, the stratigraphic analysis of sample 3T shows that cinnabar was applied over a pictorial layer composed of Egyptian Blue (Raman bands at 112, 137, 164, 192, 378, 400, 431, 473, 568, 763, 788, 985, 1010, and 1083 cm −1 ) 28 (see Figure S6) and goethite (300 and 386 cm −1 , spectrum not shown). This suggests either a previous redecoration of the area from which these fragments were detached or the application of cinnabar as an overlying color on a greenish blue background. 19 Raman measurements performed on the dark spots (see Figure S7a, Supporting Information) unveiled the presence of a broad band at around 683 cm −1 (see an example of it in the measurements performed on sample Red A, Figure S7b). Bearing in mind that only cinnabar was clearly detected in this sample and no abundant evidence of hematite was identified, this band cannot be associated only with magnetite (Fe 3 O 4 ), 29 as could be the case in some measurements acquired in the darkened hematite areas of the triclinium in the House of Marcus Lucretius (Fe 3 O 4 , Raman band at 661 cm −1 , see Figure S7c,d). Moreover, considering the width of the 683 cm −1 band, it is difficult to attribute it to a specific mineral phase; a mixture of several phases is most probable.
To further investigate this issue, XPS and TOF-SIMS measurements were performed on sample 3T for an in-depth study of the dark patches (around 50−250 μm, see Figures S8 and 5). XPS was preferred for line analysis on altered (dark) and intact cinnabar areas, while TOF-SIMS was better suited for mapping, due to the sample roughness and the better depth and lateral resolution of the technique.
The XPS analyses (see Figure S8) and TOF-SIMS maps (see Figure 5) on the dark stains revealed an increment in manganese and iron oxides or oxide hydroxides. Therefore, the broad band identified in the Raman measurements of the dark stains (see Figure S7, Supporting Information) could be related to the presence of a mixture of manganese and iron oxides or oxide hydroxides. 29−31 In the literature, many references can be found regarding the formation of dark colored coatings composed mostly of manganese and iron oxides. This dark discoloration process is usually called "rock varnish". The Mn and Fe present in "rock varnish" could come from various sources, including dust and soil. 32,33 Considering that these painting fragments have been buried for more than 2000 years, the occurrence of these metals is readily explained. To account for the dissolution of Mn and Fe from the soil and their subsequent precipitation as oxides, pH and Eh changes in the burial environment should take place. 34 Moreover, water should be present to favor the process. This is also guaranteed due to the previously assessed influence of groundwater in this archeological site. 4,25 Whereas certain authors concluded that this phenomenon takes place under abiotic conditions, others held that microorganisms control Mn precipitation 35 (biomineralization of Mn).
Although most of the "rock varnish" examples are located in desert environments, 32 in recent years different examples have been published regarding archeological contexts 36 and 19th century buildings. 37 Together with the manganese oxides, whitish stains were also visible, related to the formation of a calcareous (calcite) patina caused by the dissolution and recrystallization of the binder (see Figure S9). This result reinforces the influence of a water source in the dissolution−recrystallization process.
TOF-SIMS maps (area of 342 × 342 μm 2 ) also showed the occurrence of F − and Cl − on the surface, while Hg-SCl − (related to a Hg-S-Cl compound, such as corderoite, α-Hg 3 S 2 Cl 2 , or kenhsuite, γ-Hg 3 S 2 Cl 2 ) was more abundant in the red areas not affected by the dark stains (see Figure 5). This result suggests that the presence of Cl − is not always strictly related to the darkening of the cinnabar pigment, as already proposed by certain authors. 18 The TOF-SIMS detection of F −, a halide of volcanic origin, 4 reinforces the hypothesis of a leaching process (favored by groundwater) of the volcanic soil that covered the fragments, contributing to the increase in fluorine. Moreover, the contribution of groundwater rich in F − and Cl − 4,25 could also favor the relatively prominent presence of these halides in the cinnabar pictorial layer.
To confirm the occurrence of manganese oxides on the dark patches of sample Red A detected by Raman spectroscopy, EDXRF imaging was conducted. As in sample 3T, the Mn distribution coincided with the dark areas present on the red cinnabar pictorial layer (Figure 6), verifying the presence of manganese oxides. In addition, Fe accumulations in these areas were detected, as expected from the XPS analyses of sample 3T (Figure 6). In this case, Fe is sparsely distributed compared with Mn, suggesting that iron oxide formed to a lesser extent over the cinnabar layer.
■ CONCLUSIONS
This work shows how the state of conservation of the Pompeian cinnabar pigment varies depending on its protection against the pre-and post-79 AD atmospheres.
Gypsum has been systematically identified in the blackened areas of the pigment exposed to the pre-and post-79 AD atmospheres (e.g., House of the Golden Cupids). This result could suggest that the blackening might be related to the formation of a gypsum layer, either due to the polluted atmosphere or due to the sulfur emissions of the volcanic eruption. This layer would be subsequently enriched with airborne particulate matter, responsible for the final black color of the painting surface ("black crust" formation).
The experimental evidence agrees with what Cotte et al. 10 previously suggested. In that case, the sulfation of calcite in the fresco painting was explained by the oxidation of S coming from the decomposition of the HgS pigment. Nevertheless, in the mural paintings of the House of Marcus Lucretius presented here, the cinnabar proportion is much lower than that of yellow ocher, and thus the extended formation of gypsum cannot be explained by this hypothesis. In the future, additional painting stratigraphies, other than pure cinnabar or cinnabar mixed with ocher, should be investigated in order to track the formation of "black crusts" on other decorated/nondecorated areas of Pompeii.
The lower S intensities detected in the mural paintings of the triclinium of the House of Marcus Lucretius (exposed to the atmosphere since the 19th century excavations), compared with the panels of the same room preserved at the MANN, suggest that the H2S and SO2 emitted in the 79 AD eruption were crucial in the sulfation process. 26 Further evidence of the eruption's impact is the clear transformation of yellow ocher into red hematite in specific areas of the cubiculum annexed to the triclinium of this house. 2 This room of the house is covered by a roof, protecting the mural paintings from the direct influence of the polluted atmosphere and reducing the effects of this environmental agent in the sulfation process.
Regarding the protection of the cinnabar pigment when mixed with other pigments, 11,38 this work demonstrates that cinnabar can be altered (calomel identification) even when blended with an ocher pigment. In addition, visually altered cinnabar particles were also identified in a red hematite pigment layer covered by a superficial layer (calcite and dolomite). The absence of cinnabar transformation products in the metallic-like cinnabar particles identified could suggest that they correspond to metallic mercury, metacinnabar, or even an amorphous cinnabar phase. This hypothesis should be confirmed in the future with the use of adequate instrumentation offering sub-micrometric resolution, such as synchrotron-assisted μXANES at the Hg L3-edge.
In the buried Pompeian cinnabar-based fresco fragments, which were not exposed to the 79 AD eruption, well preserved areas and dark stains/patches were identified. In the nondarkened areas, Hg-Cl and Hg-Cl-S compounds were detected. These results confirm that such compounds can be formed independently of the pigment darkening or blackening process, as already stated by various authors. 18 Moreover, in both the darkened and nondarkened areas of the samples, it was not possible to identify the presence of gypsum, since they were exposed neither to the H2S and SO2 gases of the eruption nor to the postexcavation atmosphere. On the contrary, the dark patches/stains are rich in manganese and iron oxide hydroxides and are not related to the conventional blackening process of the cinnabar. Therefore, for conservation purposes, when a cinnabar mural painting/fragment is recovered from an archeological context, an in-depth characterization of the dark/black formations on the cinnabar is necessary to conclude whether the cinnabar pigment is transformed or just affected by "rock varnish" or by the precipitation of other colored crusts.
Furthermore, this work also demonstrates that the color of the transformed Pompeian cinnabar may suggest different pigment degradation prompted by the impact of a number of environmental agents. The main transformation occurred after its exposure to the pre-and post-79 AD atmosphere is the blackening process connected to the formation of calomel and gypsum. On the other hand, buried Pompeian cinnabar could experience darkening due to the formation of black/brownish Mn/Fe stains and not to the raw pigment transformation itself.
In the future, accelerated weathering experiments using cinnabar fresco mock-ups reproducing the pre/post-79 AD atmospheric impact and the burial environment will help to elucidate the chemical reactivity leading to these transformation products. This will in turn allow the development of conservation protocols to protect and preserve the original red color of this pigment.
|
v3-fos-license
|
2019-01-23T21:23:07.793Z
|
2019-01-14T00:00:00.000
|
58949786
|
{
"extfieldsofstudy": [
"Medicine",
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.7717/peerj.6234",
"pdf_hash": "101b7d6005848eaea184867c9b5859fbaf056628",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1035",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "34b0374e8f1351a7a2b344c71b363a1e1c186d27",
"year": 2019
}
|
pes2o/s2orc
|
Method for the quantitative evaluation of ecosystem services in coastal regions
Wetlands, tidal flats, seaweed beds, and coral reefs are valuable not only as habitats for many species, but also as places where people interact with the sea. Unfortunately, these areas have declined in recent years, so environmental improvement projects to conserve and restore them are being carried out across the world. In this study, we propose a method for quantifying ecosystem services that is useful for the proper maintenance and management of artificial tidal flats, a type of environmental improvement project. With this method, a conceptual model of the relationship between each service and related environmental factors in natural and social systems was created, and the relationships between services and environmental factors were clarified. The state of the environmental factors affecting each service was quantified, and the state of those factors was reflected in the evaluation value of the service. As a result, the method can identify which environmental factors need to be improved if the goal is to increase the value of the targeted tidal flat. The method demonstrates an effective approach in environmental conservation for the restoration and preservation of coastal areas.
targeted in this study, it would generally be desirable to use more target tidal flats to reduce the deviation of the service scores.
SN is an artificial biological symbiotic port structure in Tokyo Bay. The scope of the tidal flat evaluation comprised the area from the water-land interface to the intertidal zone (i.e., the area shallower than the low water level). The water-land interface was delineated by embankments or structures abutting the landward side of tidal flats.
The present status x_i is normalized by the reference point with Equation (2):

x_i = X_i / X_i,R    (2)

where X_i is the present status value for service i and X_i,R is the reference point. Any X_i value beyond 2 from the mean was determined to be an outlying observation and was not used in the calculations. Halpern [...] where T_i is the trend for service i (see Section 2.4), β is the relative importance of the trend versus PR (pressure and resilience) scores, and PR_i is the PR score for service i (see Section 2.6).
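The normalization just described (present status divided by a reference point, after discarding outlying observations) can be sketched as follows. The reading of the outlier rule as 2 standard deviations from the mean, and the example values, are assumptions for illustration only, not data from the study:

```python
from statistics import mean, stdev

def normalized_status(present_values, reference_point):
    """Normalize present-status values X_i by a reference point X_i,R.

    Observations more than 2 standard deviations from the mean are
    treated as outliers and dropped before normalization (assumed
    reading of the paper's outlier rule).
    """
    mu = mean(present_values)
    sigma = stdev(present_values)
    kept = [x for x in present_values if abs(x - mu) <= 2 * sigma]
    return [x / reference_point for x in kept]

# Hypothetical status values for one service across four tidal flats.
scores = normalized_status([4.0, 5.6, 5.3, 8.2], reference_point=10.0)
```

A score of 1 then means the present status equals the reference point, so the resulting service scores are directly comparable across tidal flats.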
A positive sustainability score means that the service will improve under present conditions, and a negative one means that the service will decline under present conditions. We can look for [...]

As an index of environmental education for water front use, we used the number of visitors for the purpose of environmental education and related activities (SI 4). As an index of research for water front use, we used the number of published papers and reports (SI 5).
As an index of historical designation as special sites for sense of place, we used the numbers of festivals and of faith-related buildings (SI 6). As an index of places for everyday rest and relaxation for sense of place, we developed a rest and relaxation index relative to the total hours of everyday use that was adjusted for the user's stated level of conscious awareness of the value of the sites for walks, rest and relaxation, and other similar uses (SI 7).
As an index of suspended material removal for water quality regulation, we used the bivalve water filtration volume (SI 8). As an index of organic matter decomposition for water quality regulation, we used the COD purification amount (calculated from the production/biomass ratio) by benthic organisms (SI 9). As an index of carbon storage for water quality regulation, we used the carbon fixation in benthic organisms and sediment (0-10 cm in depth) (SI 10).
As an index of degree of diversity for biological diversity, we used the Shannon-Wiener diversity index (H') for the entire study area (SI 11). Finally, as an index of rare species for biological diversity, we used the number of threatened species adjusted by category of threatened status (SI 12). [...]

Although SN had a high service score, its sustainability score was negative (-41%), indicating that this service will decay under the present condition. To suppress this decay, countermeasures need to be taken in the categories of anoxic water, blue tide, ground stability, predatory or competitive species, and protection of species, all of which had negative PR scores (Fig. S3).
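As a concrete illustration of the diversity index named above, the Shannon-Wiener index H' can be computed from species counts as in this sketch; the counts are made up for illustration, not survey data:

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity index H' = -sum(p_i * ln(p_i)),
    where p_i is the proportion of individuals in species i.
    Zero counts are skipped, since p*ln(p) -> 0 as p -> 0."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Four equally abundant species give the maximum possible H' = ln(4).
h = shannon_wiener([25, 25, 25, 25])
```

H' increases both with the number of species and with the evenness of their abundances, which is why it is a natural single-number index for "degree of diversity" over a whole study area.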
Although the service score of UK was low (5.3), its sustainability score was positive (+17%), so the present status can be maintained in the present environmental condition. UK is located in an area with good water quality and has been established for more than 30 years, so there is no need for countermeasures against the water environment and instability of the ground just after construction. The service scores of TR and OR (4.0 and 5.6) were not high, but the sustainability [...]

[...] our results allow consideration of countermeasures to improve individual services, but that is not sufficient to improve the comprehensive evaluation of services of tidal flats. Incorporating trade-off relationships and a weighting of services is necessary to be able to consider which services would be most effective for taking countermeasures.
It is conceivable that the weight of the effect of the environmental factors also differs. At present, the PR scores were all weighted the same, but we need to consider weighting these scores as well. In addition, we assumed a qualitative PR score to be half that of a quantitative PR [...]

[...] where T_i is the trend score, T_i-U is the upper limit of the 95% CI, T_i-L is the lower limit of the 95% CI, t_i is the slope of the regression line, and se_i is the standard error of the slope.
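The trend terms listed above (slope t_i, its standard error se_i, and the 95% CI limits) can be sketched with a plain ordinary-least-squares fit. The normal-approximation limits t_i ± 1.96·se_i are an assumption here, since the paper's own equation was lost in extraction, and the yearly values are hypothetical:

```python
import math

def slope_with_ci(x, y):
    """OLS slope of y on x, its standard error, and the 95% CI
    limits (slope - 1.96*se, slope + 1.96*se)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(ss_res / (n - 2) / sxx)  # standard error of the slope
    return slope, se, slope - 1.96 * se, slope + 1.96 * se

# Hypothetical yearly index values for one service; not data from the study.
t, se, t_lower, t_upper = slope_with_ci([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1])
```

A wide interval (large se_i relative to t_i) then translates directly into a large error band on the likely near-term future status, as discussed for SN below.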
We estimated CIs for likely near-term future status and service scores for food provision, coastal protection, environmental education, research, suspended material removal, organic matter decomposition, carbon storage, degree of diversity, and rare species, for which likely near-term future status were calculated based on past data (Fig. 5). The other services (recreation, historical designation as special sites, and places for everyday rest and relaxation) were not included because there was no past data and trends were not estimated.
For food provision, suspended material removal, organic matter decomposition, and degree of diversity, whose service scores depend on biomass, the error was large in SN. It has been less than 10 years since SN was constructed, and biomass in the area may still be in transition. In addition, SN is located in a port with poor water quality, where environmental impacts such as anoxic water and blue tide often occur and the habitat environment is unstable. In contrast, the error for rare species was greater for the natural tidal flats. This occurred because, in the natural tidal flats, the annual differences in the number of rare species observed were large, whereas in [...]

(Table caption: Services, indices of services provided, definitions of spatial and temporal range, and key index unit of tidal flats and tidal flat ecosystems.)
The services and indices are described in more detail in the SI. Note: A "-" indicates the tidal flat was omitted from the analysis, usually because the service did not apply in that tidal flat.
|
v3-fos-license
|
2023-01-21T15:29:14.708Z
|
2013-10-11T00:00:00.000
|
256048018
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1186/1868-7083-5-19",
"pdf_hash": "00fe0ddf631c38889a4d9083e35a69b526bb096a",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1037",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "00fe0ddf631c38889a4d9083e35a69b526bb096a",
"year": 2013
}
|
pes2o/s2orc
|
Changes in DNA methylation at the aryl hydrocarbon receptor repressor may be a new biomarker for smoking
Smoking is the largest preventable cause of morbidity and mortality in the United States. In previous work, we demonstrated that altered DNA methylation at the aryl hydrocarbon receptor repressor (AHRR) is correlated with self-reported smoking in 19-year-old African Americans with relatively low levels of smoking. However, one limitation of the prior work is that it was based on self-reported data only. Therefore, the relationship of AHRR methylation to smoking in older subjects and to indicators such as serum cotinine levels remains unknown. To address this question, we examined the relationship between genome-wide DNA methylation and smoking status as indicated by serum cotinine levels in a cohort of 22-year-old African American men. Consistent with prior findings, smoking was associated with significant DNA demethylation at two distinct loci within AHRR (cg05575921 and cg21161138) with the degree of demethylation being greater than that observed in the prior cohort of 19-year-old smoking subjects. Additionally, methylation status at the AHRR residue interrogated by cg05575921 was highly correlated with serum cotinine levels (adjusted R² = 0.42, P < 0.0001). We conclude that AHRR DNA methylation status is a sensitive marker of smoking history and could serve as a biomarker of smoking that could supplement self-report or existing biomarker measures in clinical or epidemiological analyses of the effects of smoking. In addition, if properly configured as a clinical assay, the determination of AHRR methylation could also be used as a screening tool in efforts to target antismoking interventions to nascent smokers in the early phases of smoking.
Background
Cigarette smoking is a leading preventable cause of mortality in the United States and leads to the premature death of over 100,000 Americans each year [1]. Despite substantial public and private sector efforts to decrease the rate of smoking, the rate of smoking in US adults remains at approximately 19% [2]. To date, efforts to decrease smoking have taken two forms [3]. The first strategy focuses on changes in public policy designed to decrease the availability of cigarettes or to educate the public on the adverse consequences of smoking. The second seeks to increase the effectiveness of smoking cessation treatment. Both of these approaches have had their share of success in decreasing the rate of smoking from 43% in 1965 to current levels [4]. However, despite ongoing efforts, the rate of smoking in young adults has largely stabilized and additional advances are needed to further decrease the rate of smoking.
Conceivably, a better biomarker for smoking could increase the effectiveness of preventive interventions.
Smoking prevention programming depends on sensitive and valid epidemiological surveillance of the processes surrounding smoking initiation. Currently, many of these analyses are solely dependent on self-report data, which can be inaccurate. Therefore, it is important that the field develop new tools to supplement existing self-report and existing biomarkers of this critical period.
A better biomarker for smoking could also improve efforts to treat patients in the early phases of smoking. Like most addictive behaviors, smoking is most effectively treated in the first two stages of use, smoking initiation and periodic smoking [5]. In these early stages, smoking cessation efforts may be less hindered by well-established patterns, cues, and symptoms of withdrawal. Unfortunately, identifying individuals in these two earliest stages of smoking, initial experimentation and experimental smoking, is somewhat difficult. Currently, the principal mode of identifying these early stage smokers is through self-reporting. Despite its general utility in a research context, there are concerns about the reliability of self-reported data, particularly if nascent smokers do not wish to be identified or are embarrassed about their smoking [6,7]. Objective measures, namely serum cotinine and carbon monoxide assessment, are effective in identifying individuals who are in the more advanced regular and dependent phases of smoking [8]. However, owing to the restricted detection windows for cotinine and carbon monoxide measurements, these same biomarkers are often insensitive in earlier stage smokers or in the so-called 'chippers', smokers who only smoke at weekends [8]. Hence, a more sensitive marker of early onset smoking could conceivably aid efforts to treat early onset smoking by increasing our ability to detect the more malleable, earlier phases of cigarette use.
It is possible that by detecting smoking associated changes in DNA methylation, we may devise a better method to detect the early phases of smoking. Recently, we and others have demonstrated that established smoking is associated with altered DNA methylation at a number of loci, including AHRR, MYO1G, and GFI1 [9][10][11][12]. However, these studies differed greatly from one another in the chronicity of smoking and the type of DNA being assessed. Based on our prior study of 19-year-old African American males and self-reported data, we believe that demethylation at the CpG residue in the aryl hydrocarbon receptor repressor (AHRR) recognized by cg05575921 may be the first change evident in the methylome [13]. If so, change at this locus may be an excellent indicator of nascent smoking, and further smoking could be expected to both increase the amount of demethylation at this locus and be accompanied by additional changes in the genome. In this communication, we expand on our previous study of 19-year-old male smokers by using a slightly older population (22 years of age) of male subjects and objective measures of smoking detection to re-examine the relationship of smoking to genome-wide methylation.
Results
The clinical and demographic characteristics of the 107 'Adults in the Making' (AIM) program subjects who participated in the study are given in Table 1. The subjects averaged 22 years of age. Nearly 54% of the subjects reported having smoked at least one cigarette during our clinical interviews. The amount of self-reported smoking tended to be rather light, with the 35 subjects who reported smoking at the last wave of data collection reporting an average daily consumption of 8 ± 7 cigarettes.
Because our DNA samples were collected approximately 6 months after the collection of wave-4 data, and self-reported data may often underreport actual smoking consumption [6,7], we next examined the serum cotinine levels of each of the subjects. Figure 1 illustrates the cumulative frequency distribution of the serum cotinine levels. As the figure illustrates, there was a sharp dogleg break in the distribution of values, with 44 (41%) of the subjects having levels of <1 ng/ml, no subjects having values between 1 and 2 ng/ml, and 64 (59%) of the subjects having serum cotinine levels of >2 ng/ml (hereafter designated as positive cotinine values). Of considerable interest, 23 of the 64 subjects who denied smoking at all four waves, including the last interview conducted 6 months prior to the blood draw, had serum cotinine levels of >2.0 ng/ml. As the first step of our main epigenetic analyses, we conducted a genome-wide analysis of the relationship of smoking to DNA methylation. Because the serum cotinine data of Figure 1 suggest that self-reported smoking status may not be reliable, we chose to use serum cotinine levels as our indicator of current smoking status, and contrasted the DNA methylation status of the 64 subjects with serum cotinine levels >2 ng/ml only with that of the 37 subjects who consistently denied smoking through all four waves of data collection and who had negligible levels of serum cotinine (<1.0 ng/ml). Because our previous work with monoamine oxidase A (MAOA) has shown that smoking cessation is associated with a highly variable remodeling of the MAOA DNA methylation signature, the data from the six subjects with serum cotinine levels <1.0 ng/ml but with a positive self-reported history of smoking were not included in the genome-wide contrasts [14]. Table 2 lists the 30 most significant findings with respect to the data from those 98 subjects.
Consistent with prior studies, cg05575921 was the probe most highly associated with smoking status, with a false discovery rate (FDR) corrected P value <0.002 (nonsmokers (NS) greater than smokers (S); NS mean 0.85, S mean 0.74, 95% confidence intervals 0.82 to 0.87 and 0.72 to 0.76, respectively). (Table 2 note: all average methylation values are non-log-transformed beta values. Island status refers to the position of the probe relative to the island. Classes include: 1) Island, 2) N (north) shore, 3) S (south) shore, 4) N (north) shelf, 5) S (south) shelf, and 6) blank, denoting that the probe does not map to an island.)

A second probe from AHRR, cg21161138, also attained genome-wide significance with an FDR corrected P value <0.03 (NS greater than S; NS mean 0.73, S mean 0.69, 95% confidence intervals 0.72 to 0.75 and 0.68 to 0.70, respectively). Finally, there was a trend for association at a third AHRR probe locus, cg26703534 (NS greater than S; NS mean 0.69, S mean 0.64, 95% confidence intervals 0.68 to 0.70 and 0.63 to 0.65, respectively). Methylation at MYO1G probe cg22132788, which Joubert and colleagues [10] had reported to be differentially methylated in DNA prepared from newborns of smoking mothers, was the fourth-ranked probe, with a genome-wide corrected P value of <0.144. Because AHRR is a complexly regulated gene (for example, it has at least five CpG islands) with 146 probes mapping to it, we then scrutinized the relationship of smoking status to methylation at each of these 146 probes. Figure 2 illustrates the degree of methylation at each of those residues in the smokers and nonsmokers, while Additional file 1: Table S1 gives the ID, position, sequence, exact averages, and P values obtained for each probe. As the figure and table together demonstrate, 10 probes clustering in four discrete areas have nominal significance values of <1×10^-3. Notably, at all ten of these AHRR probes with a nominal significance value of <1×10^-3, smoking was associated with demethylation.
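The FDR-corrected P values cited above follow the usual Benjamini-Hochberg procedure; a minimal sketch (with made-up P values, not the study's) is:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment: sort P values, scale each by
    m/rank, then enforce monotonicity from the largest P value down."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):       # walk from the largest P value down
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Four hypothetical raw P values from a genome-wide scan.
q = benjamini_hochberg([0.0001, 0.03, 0.04, 0.2])
```

Controlling the FDR rather than the family-wise error rate is what makes genome-wide scans over hundreds of thousands of probes feasible without discarding nearly all true signals.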
Because methylation at cg05575921 was once again the most highly associated residue in terms of DNA methylation, we analyzed the relationship between methylation status at that residue and serum cotinine levels. Using the data from all 107 subjects, we found that methylation status at cg05575921 was highly correlated with serum cotinine levels (Figure 3, adjusted R² = 0.42, P < 0.0001). Methylation status at the other two highly associated AHRR residues, cg26703534 (adjusted R² = 0.28, P < 0.0001) and cg21161138 (adjusted R² = 0.19, P < 0.0001), was also highly correlated, although the proportion of the variance explained was considerably less.
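The adjusted R² statistic reported above penalizes R² for the number of predictors in the regression; a generic sketch (the observed and fitted values are illustrative, not the study's data) is:

```python
def adjusted_r2(y, y_pred, n_predictors=1):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1)
    for n observations and k predictors."""
    n = len(y)
    mean_y = sum(y) / n
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)   # total sum of squares
    ss_res = sum((yi - yp) ** 2 for yi, yp in zip(y, y_pred))  # residual SS
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)

# Hypothetical observed values and fitted values from a one-predictor model.
r2_adj = adjusted_r2([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

Unlike raw R², the adjusted form does not automatically increase when predictors are added, which makes it the safer statistic to report for regression fits like those in Figure 3.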
Discussion
Using data from a group of African Americans who are slightly older than our previous group of subjects, we confirm and extend our prior findings, showing that AHRR appears to be the locus whose methylation is significantly affected by nascent smoking, with the degree of demethylation strongly associated with the level of exposure. In addition, we show a strong correlation between demethylation at cg05575921 and serum cotinine levels. Significant limitations of the current study include the reliance on self-reported data for certain aspects of the study and the lack of self-reported data with respect to smoking at the time of the actual blood sampling.

(Figure 2 caption: Comparison of the methylation levels in DNA from male smokers (n = 64) and lifetime male nonsmokers (n = 37) at the 146 probes covering the AHRR locus. The average of the nonsmokers is indicated by the red line, whereas the values for smokers, where they diverge from those of the nonsmokers, are illustrated by the blue line. The locations of the three AHRR probes with at least a trend for genome-wide significance are marked by the double asterisk. The exact ID, methylation values, and P values for the comparisons at each probe are given in Additional file 1: Table S1.)
The findings with respect to AHRR extend the prior findings in 19-year-old African American subjects and indicate that smoking induces a steady yet predictable series of changes in the methylation signature of lymphocytes. In our first group of 19-year-old men, only cg05575921 was significantly changed with an average change of 6%. In this group of slightly older subjects, with a presumably longer smoking history, the average demethylation at cg05575921 was 11%, with two other probes from AHRR achieving at least a trend for genome-wide significance. Taken together with other evidence, this suggests that continued smoking increases the degree of change at AHRR and other genes, even though degree of smoking, on average, remained quite low in this slightly older sample. Some other changes may be notable at genes suggested by others, including MYO1G (herein the fourth-ranked probe), F2RL3 and GFI1 [9,10,12]. Indeed, in our analyses of the effects of smoking on DNA methylation in 50-year-old African American smokers, the methylation signatures of a large number of genes are significantly remodeled (Dogan et al., unpublished data). Hence, it may be that as individuals continue to smoke, the degree of differential methylation at these other loci continues to develop to the point that it is detectable at genome-wide levels using similarly powered analyses. This also suggests the possibility of dose-response relationships at other CpG sites in addition to those on AHRR.
The semiquantitative nature of the relationship between serum cotinine levels and AHRR methylation status raises the possibility that DNA methylation could be used as a biomarker for smoking in place of exhaled carbon monoxide or serum cotinine levels when such measures are unavailable. Indeed, for large-scale epidemiological work, DNA demethylation at AHRR might prove useful as an index of smoking if there is stored blood or if other potential assessments are unavailable. For those existing data sets without separate serum samples or quantitative smoking data, this is certainly an attractive possibility. In addition, given the relatively short half-life of exhaled carbon monoxide (3 to 5 hours) [15] and serum cotinine levels (15 hours) [8,16], the current data suggest that altered DNA methylation could be used to detect otherwise undetectable smoking by individuals such as 'chippers', who smoke only periodically [8,16]. Further research to develop the response profile for AHRR and related loci could result in the development of a versatile assessment tool that could find considerable use in both research and clinical applications.
It is natural to ask why AHRR is the most significant locus. Although not immediately intuitive, changes in the epigenetic status of AHRR could be expected to be one of the first cellular responses to tobacco smoke exposure, owing to the interaction of AHRR with the aryl hydrocarbon receptor (AHR), which is the induction point for the xenobiotic pathway [17]. This catabolic pathway, which is active both in the liver and in lymphocytes, includes several well-known P450 enzymes, including CYP1A1, and is responsible for the degradation of environmental toxins, such as the polyaromatic hydrocarbons and dioxins commonly found in cigarettes [18,19]. Activation of the pathway is initiated by the binding of ligands such as dioxin (which also serve as targets for degradation) to the PAH domain of AHR. Following ligand binding, the AHR protein dimerizes with the aryl nuclear receptor translocator (ARNT), which facilitates its translocation to the nucleus and its binding to the promoters of key catabolic genes. AHRR serves as a negative feedback regulator of AHR induction and does so by competing with AHR for binding with ARNT and by sterically competing with AHR at critical gene promoters [20]. Critically, changes in AHRR methylation are known to alter AHRR gene expression [11]. Unfortunately, because AHRR has at least 21 known splice variants and 10 known protein isoforms, the relationship between these toxin exposures, AHRR methylation changes, and AHR pathway activity is likely to be complex. However, given the extant data, it is reasonable to hypothesize that the demethylation seen in smokers is associated with increased AHR activation of the xenobiotic pathway, with the current findings highlighting the need for further understanding of these processes.
A pertinent negative in the current study is the failure to observe significant changes in the DNA methylation signature at nicotinic cholinergic receptors (NChRs). However, it is important to note that in contrast to the situation with respect to AHRR, NChRs are not expressed heavily nor are they functionally coupled in lymphocytes. Furthermore, the genome-wide approaches used in this paper are relatively insensitive to smaller scale, yet more behaviorally relevant smoking associated changes in genes, such as monoamine oxidase A (MAOA), which is only lightly expressed in the lymphocytes [14]. Therefore, examinations of the role of smoking associated changes of NChR methylation in addictive processes should perhaps focus on those cell types in which the genes are heavily expressed and functionally coupled.
A potential problem for any epigenetic study is the presence of confounding genetic vulnerability. However, this is not likely to be a problem for our findings with respect to cg05575921, for several reasons. The nearest polymorphisms, rs6869832 and rs6894195, are relatively uninformative in the African American population (minor allele frequency 0.02); in a previous study of 399 subjects, we genotyped these loci and found no effect on cg05575921 methylation [13]. Still, genetic variation may have an effect on the methylation status at other interesting loci and we encourage the reader to inspect Additional file 1: Table S1 carefully for further details on polymorphisms flanking potentially interesting CpG residues.
An unanticipated finding was the degree of disparity between self-reported smoking status at wave 4 and the serum cotinine levels determined using samples collected 6 months after wave-4 self-reported data collection. Some discrepancy is, of course, understandable. Because the reliability of recall dims with increasing time, and because our yearly examinations only interrogated smoking behavior over the past month, some inaccuracy of self-reporting is to be expected. At the same time, such problems are common in both investigations of adolescent, nascent smoking [6,7] and in studies of smoking in minority populations [21], highlighting the need for biochemical confirmation of smoking status in studies of tobacco use. In addition, some of the disparity between negative self-report and positive cotinine levels may reflect recent onset in smoking.
Our choice of a 2 ng/ml cutoff level was based on analyses of the shape of the cumulative distribution curve. This level is quite consistent with the optimum cutoff levels developed by Benowitz and colleagues using data from 16,156 subjects from the National Health and Nutrition Examination Study (NHANES) [22]. It is possible that a few of our lower 'positive' cotinine levels reflected secondhand smoke exposure in the home or from friends who smoked. However, in our opinion, secondhand smoke exposure is unlikely to explain more than one or two false positives. The lowest cotinine level in the self-reported nonsmokers who had serum cotinine levels of >1.0 ng/dl was 9.3 ng/dl, which is considerably above that expected for secondhand smoke exposure [23]. Accordingly, the finding that one-third of the subjects with positive cotinine levels denied smoking at wave 4 suggests either a surge of smoking initiation at this age, or the possibility that substantive, intermittent, fast-moving changes in smoking behaviors and resulting unreliable self-reporting account for the discrepancies. Given the later onset of smoking in African Americans [24] and the higher rates of discrepant reports in underserved minorities [6,21], these findings reemphasize the need for repeated measures with shorter lags between assessments and the need for use of biomarkers in both phenomenological and biological examinations of the effects of smoking. In this context, AHRR emerges as a potentially useful adjunct to self-reporting of smoking and may have particular utility in studies of the early phases of smoking.
Conclusions
In summary, we confirm and extend prior findings indicating the primacy of the AHRR locus in the epigenetic response to cigarette smoking. We also demonstrate a strong correlation between demethylation of discrete AHRR CpG residues and serum cotinine levels. We suggest that studies to firmly delineate the dose dependency and temporal characteristics of AHRR methylation changes with respect to smoking are indicated.
Availability of supporting data
The complete data for the AHRR locus are attached as Additional file 1: Table S1.
Methods
The 107 subjects featured in these analyses are drawn from the AIM project which is a longitudinal study of young African Americans as they transition from adolescence into early adulthood [25]. Youths were enrolled in the study when they were 16 years of age. At wave 1, among youths' families, median household gross monthly income was below $2,100 and mean monthly per capita gross income was below $900. Accordingly, on average, they could be described as working poor.
Procedures
Families were contacted and enrolled by community liaisons residing in the counties where the participants lived. The community liaisons were African American community members who worked with the researchers on participant recruitment and retention. At all data collection points, parents gave written consent to minor youths' participation, and youth gave written assent or consent to their own participation. To enhance rapport and cultural understanding, African American university students and community members served as field researchers to collect data. At the home visit, self-report questionnaires were administered privately via audio computer-assisted self-interviewing technology on a laptop computer. Youths were compensated for their participation with $50 after each assessment. All protocols and procedures used in the AIM project were approved by the University of Georgia Institutional Review Board. As a part of the self-report assessment, at each wave of data collection, the subjects were asked, 'In the past month, how often did you smoke cigarettes?' The number of cigarettes given in reply was used as that year's estimated average monthly consumption with that number being divided by 20 to give the number of packs smoked. A positive response at any time point from a subject resulted in the categorization of that subject as a smoker for the given wave.
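The per-wave smoking variables described above reduce to a simple transformation; the following is a minimal sketch, with hypothetical field and function names (the AIM codebook is not given here):

```python
# Hypothetical sketch of the wave-level smoking variables described above;
# names are illustrative, not taken from the AIM study instruments.

def packs_per_month(cigarettes_past_month):
    """Convert self-reported cigarettes smoked in the past month to packs (20 per pack)."""
    return cigarettes_past_month / 20.0

def is_smoker_at_wave(responses):
    """A subject is categorized a smoker for a wave if any response is positive."""
    return any(r > 0 for r in responses)
```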
Approximately 6 months after the collection of the wave-4 data, the subjects were phlebotomized to provide sera and DNA for the proposed studies. Their average age was 22. The DNA for the current studies was prepared from lymphocyte (mononuclear) cell pellets, as previously described [13]. Sera were prepared using serum separator tubes and were frozen at −80°C after preparation until use.
Genome-wide DNA methylation was assessed using the Illumina (San Diego, CA) HumanMethylation450 Beadchip by the University of Minnesota Genome Center (Minneapolis, MN) using the protocol specified by the manufacturer, as previously described [26]. This chip contains 485,577 probes recognizing at least 20,216 transcripts, potential transcripts or CpG islands. Subjects were randomly assigned to 12-sample 'slides', with each group of eight slides (the samples from a single 96-well plate) being bisulfite converted in a single batch. Four replicates of the same DNA sample were also included to monitor slide-to-slide and batch bisulfite conversion variability, with the average correlation coefficient between the replicate samples being 0.997. The resulting data were inspected for complete bisulfite conversion, and average beta values for each targeted CpG residue were determined using the Illumina Genome Studio Methylation Module, Version 3.2. The resulting data were then cleaned using a Perl-based algorithm to remove those beta values whose detection P values, an index of the likelihood that the observed sequence represents random noise, were greater than 0.05.
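The cleaning step can be sketched as follows; this is a Python restatement of the Perl-based filter described above (not the authors' code), with illustrative variable names:

```python
# Minimal sketch of the detection p-value filter described above: beta values
# whose detection p-value exceeds 0.05 are treated as unreliable and masked.

def clean_betas(betas, detection_p, alpha=0.05):
    """Return beta values with unreliable calls (detection p > alpha) set to None."""
    return [b if p <= alpha else None
            for b, p in zip(betas, detection_p)]

# the second probe fails the detection threshold and is masked
cleaned = clean_betas([0.80, 0.45, 0.10], [0.001, 0.20, 0.01])
```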
Genome-wide linear regression analyses of the log-transformed data were conducted using MethLAB, version 1.5, following our previously described procedures [13,27]. All the analyses were controlled for both batch and slide. Correction for multiple comparisons was accomplished using the false discovery rate method with an alpha of 0.05 and a subroutine within MethLAB [28]. As noted in the results, the regression analyses that were controlled for batch and slide contrasted the log-transformed beta values of those who denied ever having smoked and had serum cotinine levels <1.0 ng/dl (n = 37) with those with serum cotinine levels >2.0 ng/dl (n = 64).
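False discovery rate control of this kind is commonly implemented as the Benjamini-Hochberg step-up procedure; the sketch below shows that standard algorithm, under the assumption (not confirmed by the text) that MethLAB's subroutine uses this variant:

```python
# Benjamini-Hochberg step-up procedure: find the largest rank k such that
# p_(k) <= (k/m) * alpha, then call all p-values of rank <= k significant.

def bh_significant(pvalues, alpha=0.05):
    """Return a boolean list marking p-values significant under BH FDR control."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * alpha:
            k = rank
    sig = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            sig[i] = True
    return sig
```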
Clinical, serological and single-point methylation data were analyzed using the suite of general linear model algorithms contained in JMP, version 10 (SAS Institute, Cary, USA), as indicated in the text.
Additional file
Additional file 1: Table S1. This file contains the beta values for all 107 subjects for every locus in AHRR, as well as the annotation file, which contains extensive information with respect to probe sequence, relative gene location, local genetic variation, etc.
Reproductive strategies in black scabbardfish (Aphanopus carbo Lowe, 1839) from the NE Atlantic
Gonads of the NE Atlantic black scabbardfish were examined to give an insight into the reproductive biology of this species. It was concluded that black scabbardfish had determinate fecundity because: (i) a distinct hiatus in oocyte size was observed between pre-vitellogenic and vitellogenic oocytes; (ii) vitellogenic oocytes increased in size during the spawning season; (iii) the number of vitellogenic oocytes did not increase during the spawning season; and (iv) the intensity of atresia was low in pre-spawning and spawning ovaries. Fecundity estimates ranged from 73 to 373 oocytes g⁻¹ female. Comparison of developing ovaries from mainland Portugal and Madeira revealed that those from Madeira were more advanced in development, with more cortical alveoli stage oocytes and a higher gonadosomatic index. Starting in July, the reproductive development of all females from mainland Portugal was interrupted by a generalised atresia of developing oocytes. Completion of gametogenesis and spawning only occurred for fish from Madeira, but some fish from this area also failed to complete oocyte development due to mass follicular atresia of vitellogenic oocytes. The percentage of Madeiran fish that failed to spawn due to follicular atresia ranged from 21.2% in 2006 to 37.4% in 2005.
Fish species follow reproductive strategies in order to maximise reproductively active offspring in relation to available energy and parental life expectancy (Pianka, 2000).
The study of fish reproduction is fundamental for understanding and predicting population dynamics, for improving the management of the assessed stock (using the reproductive information needed for assessment models) and for stock identification studies. Regarding the latter, a multitude of reproductive life-history traits has been used to provide the basis for stock differentiation, including timing, duration and location of spawning (Begg, 1998); egg size and fecundity relationships (Marteinsdottir et al., 2000); and reproductive potential (Trippel, 1999).
The black scabbardfish (Aphanopus carbo Lowe, 1839) is a widely distributed deep-water species that occurs in the NE Atlantic from Iceland to Madeira Island (Tucker and Palmer, 1949; Gordon, 1986; Merrett et al., 1991). Despite the increasing commercial interest in this species, little is known about its life cycle. Egg and larval stages of the black scabbardfish are unknown and juvenile fish are seldom caught (Swan et al., 2003). Immature individuals predominate to the west of the British Isles, while larger but non-reproductive individuals are present off mainland Portugal (Figueiredo et al., 2003). Ripe individuals are only caught in Madeiran waters during the fourth quarter (Carvalho, 1988), while spent individuals are caught up until March (Anon., 2000).
Due to the scarcity of knowledge on the reproductive biology of this species, the main objectives of the present work are: i) to study the sexual cycle of both males and females from mainland Portugal and Madeira.
ii) to investigate the fecundity type of the black scabbardfish based on four lines of evidence, suggested by Hunter et al. (1992), Greer Walker et al. (1994) and Murua and Saborido-Rey (2003): (1) the hiatus between the advanced stock of vitellogenic oocytes and the smaller immature oocytes; (2) the mean diameter of the advanced vitellogenic oocytes in the standing stock over the spawning season; (3) the number of vitellogenic oocytes in the ovary during the spawning cycle; and (4) the level of atresia during spawning.
iii) to estimate total fecundity by weight of female using two methodologies. iv) to investigate why specimens off mainland Portugal do not spawn. v) to examine the possibility of the occurrence of skipped spawning among the individuals from Madeiran waters.
Material and methods

In the early 1990s the presence of another species of the genus Aphanopus (A. intermedius) was detected in the southern northeast Atlantic (Nakamura and Parin, 1993) and more recently in the Azores (Stefanni and Knutsen, 2007) and Madeiran waters, but never in mainland waters (Sara Reis, pers. comm.). These two species are morphologically similar, A. intermedius having 102-108 vertebrae and 40-44 dorsal fin spines, compared with 97-100 vertebrae and 38-40 dorsal fin spines in A. carbo (Nakamura and Parin, 1993). To ensure that only A. carbo was sampled, an effort was made to determine whether specimens of A. intermedius were present in the samples; every individual morphologically analysed belonged to A. carbo.
Samples were collected on a fortnightly basis at Funchal, Madeira (84 samples) and on a monthly basis at Sesimbra, mainland Portugal (32 samples) from May 2005 to December 2007 (Fig. 1). Over the three years of the project, 1479 males and 1732 females were sampled off Madeira, whereas off mainland Portugal 461 males and 488 females were analysed.
For the purpose of studying the sexual cycle, total length (cm) and total and gutted weights (g) were recorded for each individual. Maturity stages were assigned according to the maturity scale defined by Gordo et al. (2000) (Table 1).
For the remaining objectives, only females were considered. The liver and the ovaries were weighed (cg) and the latter were preserved in a 10% buffered formaldehyde solution. Slices were taken from the anterior, middle and posterior regions of the ovary, dehydrated with ethanol and embedded in methacrylate. Two sections (3-5 mm) were cut from each slice, stained with toluidine blue and digitised using a visual image analysis system (Leica DFC 290). To determine whether the development in the middle region of the ovary was representative of the whole ovary, sections from the three regions were analysed and compared with each other. Since no differences in oocyte size frequency distributions between regions were found, the analysis continued using only middle ovary sections.
To investigate the type of fecundity and estimate total fecundity (second and third goals), only pre-spawning ovaries without post-ovulatory follicles from females caught off Madeira between September and December (the known spawning season; Figueiredo et al., 2003) in 2006 and 2007 were used. The total number obtained per sampling year was 83 and 84, respectively. To investigate the type of fecundity present in the black scabbardfish, the size frequency distribution of oocytes with a visible nucleus was determined in histological sections of pre-spawning ovaries (without migratory nucleus stage oocytes or post-ovulatory follicles), using the Image-Pro Plus software. To estimate the changes in size and number of vitellogenic oocytes during the spawning season, pre-spawning and spawning ovaries (without post-ovulatory follicles) were analysed between September and November.
The relative intensity of atresia in yolked oocytes was calculated as the percentage of alpha atresia stage oocytes in the total number of oocytes present in an individual ovary, using the histological sections. In the present work only the alpha atresia stage was counted, because the later stages of atresia can be easily confused with other structures, such as degenerating post-ovulatory follicles. The prevalence of alpha atresia, defined as the number of female fish with alpha atresia oocytes as a proportion of the population (Greer Walker et al., 1994), was also investigated. Total fecundity (the number of yolked oocytes in the ovary, i.e. potential fecundity, minus the number of atretic oocytes) was estimated using gravimetric and stereological methods. The gravimetric method is based on the relationship between ovary weight and the oocyte density in the ovary (reviewed in greater detail in Hunter et al., 1989, and Murua et al., 2003). Histological ovary sections were examined at 40× magnification to select the ovaries to be used in the gravimetric method, to ensure that post-ovulatory follicles were not present. In fact, the presence of these follicles indicates that spawning has already begun and the potential fecundity would be underestimated. From each ovary, three subsamples with a weight of approximately 0.04 g were cut and dried (in order to obtain a minimum of 250 oocytes). The oocytes in the subsamples were counted and measured under a stereomicroscope with a grid, a micrometric eyepiece and 50× magnification.
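The quantities defined above (relative atresia intensity, the gravimetric scaling of subsample counts to the whole ovary, and total fecundity) can be sketched as follows; variable names are illustrative, not the authors':

```python
# Hedged sketch of the fecundity-related calculations described in the text.

def relative_atresia_intensity(n_alpha_atretic, n_total_oocytes):
    """Percentage of alpha-atretic oocytes among all oocytes in one ovary."""
    return 100.0 * n_alpha_atretic / n_total_oocytes

def gravimetric_fecundity(ovary_weight_g, subsample_weights_g, subsample_counts):
    """Scale the mean oocyte density (oocytes/g) of weighed subsamples to the ovary."""
    densities = [n / w for n, w in zip(subsample_counts, subsample_weights_g)]
    return (sum(densities) / len(densities)) * ovary_weight_g

def total_fecundity(potential_fecundity, n_atretic):
    """Total fecundity = potential fecundity minus the number of atretic oocytes."""
    return potential_fecundity - n_atretic

# e.g. three ~0.04 g subsamples with ~260 oocytes each in a 30 g ovary
estimate = gravimetric_fecundity(30.0, [0.04, 0.04, 0.04], [260, 255, 265])
```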
The stereological method uses the examination of digitised images of histological sections. From histological sections it is possible to distinguish the cells and determine the number of cells in different categories, i.e. oocytes that are pre-vitellogenic, vitellogenic or atretic. Oocytes were counted over a known area of images from each histological section. Only oocytes with a visible nucleus were counted from pre-spawning ovaries, using for this purpose the Image-Pro Plus software. The number of yolked oocytes in the ovary was then estimated by applying the stereological method (Emerson et al., 1990).
Analysis of variance (ANOVA, α = 0.05) was applied to compare: (i) the fecundity estimates by gravimetric and stereological methods; (ii) the total fecundity estimated per year; (iii) the number of yolked oocytes estimated by spawning season; and (iv) the relative fecundity per maturity stage per year.
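The single-factor ANOVA used for these comparisons reduces to the standard F statistic; the following is a self-contained sketch of that statistic (the authors' actual statistical software is not specified for this paper):

```python
# One-way ANOVA F statistic in pure Python: between-group mean square
# divided by within-group mean square, for k independent groups.

def one_way_anova_f(*groups):
    """Return the F statistic for k groups of observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The F value would then be compared with the F distribution at α = 0.05 with (k − 1, n − k) degrees of freedom.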
To determine why specimens off mainland Portugal do not spawn (fourth goal), only developing females above the length at first maturity (103 cm; Figueiredo et al., 2003) caught off mainland Portugal and in Madeiran waters were considered, for the period from April to November. The developing stage is the latest maturity stage reached by the very large majority of mainland specimens (Figueiredo et al., 2003). The above time period was chosen to guarantee that most females were in the developing, pre-spawning or spawning condition in Madeiran waters. In their ovaries, the nucleated oocytes were measured using the Image-Pro Plus software and the mean oocyte diameter was compared between Madeiran and mainland specimens using Student's t test. The gonadosomatic index (GSI, calculated as the percentage of the ovary weight in relation to the gutted weight), the hepatosomatic index (HSI, the percentage of the liver weight in relation to the gutted weight) and Fulton's condition factor K (K = gutted weight/length³) were also estimated for mainland and Madeiran females grouped in 5-cm size classes. These indices were then compared for each size class using single-factor analysis of variance.
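The three condition indices follow directly from the definitions in the text; a minimal sketch (weights in g, length in cm; the sketch keeps the text's definition of K, which omits the conventional ×100 scaling):

```python
# Condition indices as defined in the text.

def gsi(ovary_weight, gutted_weight):
    """Gonadosomatic index: ovary weight as a percentage of gutted weight."""
    return 100.0 * ovary_weight / gutted_weight

def hsi(liver_weight, gutted_weight):
    """Hepatosomatic index: liver weight as a percentage of gutted weight."""
    return 100.0 * liver_weight / gutted_weight

def fulton_k(gutted_weight, total_length):
    """Fulton's condition factor K = gutted weight / length^3."""
    return gutted_weight / total_length ** 3
```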
To analyse the presence of reproductive versus non-reproductive specimens (skipped spawners) in Madeiran waters (fifth goal), only the period from September to November was considered. Females with no opaque oocytes in their ovaries were considered non-reproductive or skipped spawners (Rideout et al., 2005). The frequency of skipped spawning by year was calculated as the number of non-reproductive females divided by the total number of mature females sampled (Rideout et al., 2006). Fish were grouped into 5-cm size classes in order to compare the frequency of skipped spawning among fish of different size classes. GSI, HSI and K were also estimated for the reproductive and non-reproductive females grouped in 5-cm size classes. These indices were also compared for each size class using single-factor analysis of variance.
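The skipped-spawning frequency and the 5-cm size-class grouping described above can be sketched as follows; the helper names are illustrative:

```python
# Skipped-spawning frequency (Rideout et al., 2006) and 5-cm size-class binning.

def skipped_spawning_frequency(n_non_reproductive, n_mature_total):
    """Percentage of mature females sampled that skipped spawning in a year."""
    return 100.0 * n_non_reproductive / n_mature_total

def size_class_5cm(total_length_cm):
    """Return the (lower, upper) bounds of the 5-cm class containing the length."""
    lower = 5 * int(total_length_cm // 5)
    return (lower, lower + 5)
```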
Sexual cycle
In females, only the first two maturity stages were observed off mainland Portugal (Fig. 2a). During this study, only one stage III female was sampled, in August 2006. In Madeiran waters, females in all five stages were observed. The distribution of the maturity stages of Madeiran females throughout the year is shown in Figure 3a. Immature females were rare in the samples, whereas developing females were dominant from April to August. The reproduction period lasted from September to December, with pre-spawning and spawning females prevailing. Post-spawning females appeared from December to March. Only the first three stages of males were recorded in mainland waters (Fig. 2b), whereas all five stages were recorded in Madeiran waters. Immature males were rare in Madeiran waters but developing males occurred throughout the year, mainly from March to August (Fig. 3b). Pre-spawning and spawning individuals were more abundant from July to November, whereas post-spawning males occurred mainly from December to April.
Fecundity
The oocyte size frequency distribution of pre-spawning ovaries is shown in Figure 4. The pre-vitellogenic oocytes constituted 62.6% of the total number of oocytes and ranged in diameter from <0.05 mm to 0.35 mm. Vitellogenic oocytes constituted 37.4% of the total number of oocytes and ranged from 0.60 mm to 1.50 mm. The hiatus between pre-vitellogenic and vitellogenic oocytes was present and evident in all ovaries analysed, indicating that the standing stock of advanced yolked oocytes is well defined throughout the spawning season. The presence of this hiatus corroborates the attribution of determinate fecundity to the black scabbardfish.
Figure 5 shows the size frequency distribution of the standing stock of advanced vitellogenic oocytes over the spawning season (between September and November) for females in maturity stages III (pre-spawning) and IV (spawning). The shift of the size distribution to the right is evident, as a result of the increase in oocyte size as the spawning season progresses. These results are also consistent with determinate fecundity.
The number of advanced vitellogenic oocytes in the standing stock over the spawning season was also estimated, and no significant differences were found in the potential fecundity estimated for pre-spawning females between September and November (spawning season) (p-value = 0.594). Atresia was present in the two analysed maturity stages but with different levels of incidence (Fig. 6). In 2006, the relative intensity of atresia reached values between 3.13% and 8.00% in pre-spawning ovaries and between 3.85% and 5.00% in spawning ovaries. In 2007 the relative intensity of atresia in pre-spawning ovaries was between 2.22% and 8.33%. For spawning ovaries, only one ovary (4% of the total) with atresia was observed.
The prevalence of atresia was also investigated. Table 2 shows the results obtained for each sampling year and maturity stage analysed. It ranged from 33.33% in 2006 to 16.67% in 2007. No significant differences were found between the potential fecundity estimate and the total fecundity estimate by year (p-value = 0.819 and p-value = 0.779 for 2006 and 2007, respectively), which indicates that atresia intensity was low.
Figure 7 shows the total fecundity estimated by gravimetric and stereological methods by total length of female. It is evident that, from class [122; 124[ cm, there is a tendency for larger females to show higher mean values of total fecundity, with no apparent trend for smaller length classes. Estimates of total fecundity by gravimetric methods tend to be slightly higher than those based on stereological methods; however, no significant differences were obtained between the gravimetric and the stereological methods (p-value = 0.108). Relative fecundity was also estimated as the total number of oocytes by weight of female. Figure 8 shows the range of the relative fecundity estimate by sampling year and maturity stage. The lowest value (73 oocytes g⁻¹) was recorded in 2006 and the highest (373 oocytes g⁻¹) in 2007, both for pre-spawning females (Table 3). Spawning females (maturity stage IV) showed slightly higher mean values of relative fecundity than pre-spawning females, but no significant differences were found.
Reproductive status of females from mainland Portugal
Developing females occurred mainly in spring in Madeiran waters (Fig. 3a) and all year round (although in lower percentages during the second semester) in waters of mainland Portugal (Fig. 2a). To investigate the presence of non-reproductive individuals in mainland waters, the ovaries from females captured between April and June were analysed. The comparison of the ovaries from females from the two regions showed a significant difference in the mean oocyte size (p-value < 0.001), with Madeiran females showing a higher occurrence of oocytes in the cortical alveoli stage. This difference in oocyte size was reflected in the GSI, which was significantly (p-value < 0.05) higher in individuals from Madeiran waters in all but one of the size classes (Fig. 9a). However, significant differences were found in both the HSI and Fulton's condition factor in only one size class (110-115 cm for the HSI), though the HSI was generally higher in Madeiran than in mainland individuals (Fig. 9b and c). From July on, the oocytes of all mainland females began to suffer a generalised atresia (Fig. 10). In Madeiran waters, on the other hand, the reproductive cycle continued and, after the period of yolk accumulation, maturation and ovulation occurred.
Reproductive status of females from Madeira
In Madeiran waters, the pre-spawning stage lasted mainly from July to October and the ripe condition mainly from October to December, although a few individuals could persist in this stage until February (Fig. 3a). However, not all the individuals sampled during the spawning season had ovaries in stages III or IV. In fact, during this 3-year period, in some individuals gametogenesis was halted and all vitellogenic oocytes were reabsorbed via follicular atresia (Fig. 11). The proportion of non-reproductive fish over the 3-year period was almost 28.0% and varied from 37.4% in 2005 to 21.2% in 2006 (Table 4). These individuals were easily recognised by their GSI, which was significantly (p-value < 0.05) lower than that of fish preparing for spawning (Fig. 12a). Furthermore, the HSI was also significantly (p-value < 0.05) lower in non-reproductive individuals in all but one of the size classes (Fig. 12b). A comparison of Fulton's condition factor between reproductive and non-reproductive females showed that, although it was generally smaller in non-reproductive individuals (Fig. 12c), no significant differences were found in the majority of the size classes. Non-spawning individuals were present in all size classes, although their proportion was higher among the smaller size classes, 105 to 115 cm (Table 4).

Discussion

Aphanopus carbo can be found in the NE Atlantic from Iceland to the south of Madeira Island (Gordon, 1986; Merrett et al., 1991). However, mature individuals were only caught in Madeiran waters (Figueiredo et al., 2003) and, more recently, off the Canary Islands (Pajuelo et al., 2008) and the northwest coast of Africa (Perera, 2008). Moreover, only two individuals with ripe gonads were cited by Ehrich (1983) on Porcupine Bank, and Magnússon and Magnússon (1995) found specimens in spent condition in Icelandic waters.
Four main criteria were used to describe the type of fecundity (Hunter et al., 1992; Greer Walker et al., 1994; Murua and Saborido-Rey, 2003). The presence of a distinct hiatus in the oocyte size frequency distribution between pre-vitellogenic and vitellogenic oocytes is generally associated with determinate fecundity, and the absence of such a hiatus is usually related to indeterminate fecundity. In the present study, the analysis of oocyte size frequency distribution in ovaries without post-ovulatory follicles showed that there was a clear hiatus between pre-vitellogenic and vitellogenic oocytes. This fact strongly suggests that the black scabbardfish has a determinate fecundity strategy.
The advanced vitellogenic oocytes increased in size over the spawning season and no significant differences were observed in the number of vitellogenic oocytes in pre-spawning females. These facts also suggest that the black scabbardfish has determinate fecundity, because no new yolked oocytes are recruited to replace those that have been shed during the spawning season. In fishes with indeterminate fecundity, it is expected that the size of the advanced vitellogenic oocytes does not increase (it may remain constant or decrease) over the spawning season, due to the recruitment of new oocytes (de novo vitellogenesis) (Murua and Saborido-Rey, 2003).
Atresia is a potential source of error for fecundity estimates (Hunter et al., 1992; Cooper et al., 2005), although atretic oocytes can be easily identified and quantified with the stereological method. According to Hunter et al. (1992), in fish with determinate fecundity the intensity of atresia is rarely generalised and, if present, is distributed sparsely over the spawning season. Moreover, low levels of atresia intensity usually characterise determinate spawners and do not seem to have a great effect on potential fecundity (Hunter et al., 1992; Murua and Saborido-Rey, 2003). However, studying the Norwegian spring-spawning herring, a species with determinate fecundity, Kurita et al. (2003) found that the level of fecundity reduction was particularly high (56%) in comparison with that of other species. In the present work the prevalence of atresia attained 33% in pre-spawning females, but the intensity of atresia was rather low in both pre-spawning and spawning females. This value of the prevalence of atresia is low when compared with those attained for other species with determinate fecundity, such as the herring (Kurita et al., 2003).
Fecundity estimates will be biased if the sampling is done either too early or too late in the spawning season. If the sampling is done too early, the advanced stock of oocytes may not have matured enough to be completely separate from the smaller immature oocytes, and consequently estimates may be imprecise or biased. If it is done too late, spawning will have begun, the stock of advanced oocytes will have been reduced, and the potential fecundity will be underestimated (Hunter et al., 1989). In this study, this sampling problem did not occur because we always confirmed the presence of post-ovulatory follicles and excluded from the fecundity estimates the ovaries in which such follicles were present. In the present study, no significant differences were found between fecundity estimates by gravimetric and stereological methods. However, the stereological method seemed to be better suited for fecundity estimates than the gravimetric method, due to: (a) the ability to distinguish vitellogenic oocytes from non-vitellogenic oocytes at an early stage of ovary development (impossible with the gravimetric method); (b) the fact that atretic oocytes can be counted and subtracted from potential fecundity estimates; and (c) the fact that post-ovulatory follicles can be identified and quantified (Emerson et al., 1990). The need to prepare histological sections may seem to be a disadvantage of the stereological method, but histological sections are always necessary to determine the extent of atresia and the presence of post-ovulatory follicles.
Fecundity estimates are essential to calculate spawning stock biomass (Gordo et al., 2008). In this context, it is important to have an estimate of the annual realised fecundity, i.e. the number of ovulated oocytes spawned in a year by each female. In species with determinate fecundity, this estimate corresponds to the number of vitellogenic oocytes minus the number of oocytes reabsorbed because of atresia. The black scabbardfish shows a relative fecundity ranging from 73 to 373 oocytes g⁻¹ female. Based on the findings following the application of the four main criteria mentioned above, it is most likely a determinate spawner.
Regarding the presence of non-reproductive individuals, it is usually assumed that iteroparous teleost fishes spawn annually once they reach sexual maturity, but the occurrence of skipped spawning is being reported more frequently for several fish species (e.g. Walsh and Bowering, 1981; Burton, 1991; Ramsay and Witthames, 1996; Burton et al., 1997). In fact, as Rideout et al. (2005) point out, the scarcity of reports of non-annual spawning does not necessarily reflect the rarity of this condition but may rather be a consequence of the difficulty in identifying post-mature non-reproductive individuals.
According to Rideout et al. (2005), the omission of spawning can be identified by the external appearance of the gonads, the size of the gonads relative to the rest of the body, and the histological appearance of the gonads. These authors report that the GSI and the minimum size at first maturity can provide a quick means of identifying a spawning omission: all fish larger than that minimum size and having a GSI below a certain value can be considered non-reproductive. A low GSI does not, however, always imply immaturity (since it can also apply to partially spent or spent fish) and, in fact, the GSI fluctuates in somewhat different ways in different species, according to the type/mode of spawning (Kainge et al., 2007). However, among the three methods that can be used to test for spawning omission, the histological examination of ovaries is the most accurate means of identifying non-reproductive individuals.
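The condition indices discussed here and below (GSI, HSI and Fulton's condition factor) are not defined explicitly in this excerpt; the following sketch uses their conventional fisheries definitions (organ mass as a percentage of body mass; Fulton's K = 100W/L³ with W in g and L in cm), with helper names of my own:

```python
def gsi(gonad_weight_g, body_weight_g):
    """Gonadosomatic index: gonad mass as a percentage of body mass."""
    return 100.0 * gonad_weight_g / body_weight_g

def hsi(liver_weight_g, body_weight_g):
    """Hepatosomatic index: liver mass as a percentage of body mass."""
    return 100.0 * liver_weight_g / body_weight_g

def fulton_k(body_weight_g, total_length_cm):
    """Fulton's condition factor K = 100 * W / L^3 (W in g, L in cm)."""
    return 100.0 * body_weight_g / total_length_cm ** 3
```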
In the present work we used both the GSI and histology to investigate why the individuals caught off mainland Portugal do not spawn, as well as the presence of non-reproductive individuals in Madeiran waters. There is emerging evidence of a critical window during early vitellogenesis that strongly influences the decision to spawn or not to spawn (Skjaeraasen et al., 2007). In the black scabbardfish the developing stage (characterised by the appearance of cortical alveoli) seems to be that critical window, since significant differences in the GSI occurred between the individuals from Madeira and the mainland at that stage. However, these differences could not be seen in either the HSI or Fulton's condition factor. It appears that, at that stage, the stored energy does not differ greatly between future non-reproductive and reproductive individuals, or that the stored energy has not yet been utilised for reproductive purposes.
Therefore, the absence of reproductive specimens off mainland Portugal may be due to: (i) the cessation of gametogenesis with the reabsorption of all vitellogenic oocytes via follicular atresia, which may be related to a continuously poor nutritional condition or a different diet; or (ii) a migration of individuals in better condition to other areas to spawn. Regarding the first possibility, data from Atlantic cod showed that high levels of non-reproductive individuals might be related to low abundance of their main prey: juvenile redfish in the case of the Flemish Cap cod (Walsh et al., 1986) and capelin in the case of the Barents Sea cod (Kjesbu et al., 1998). Unfortunately, the information available on the diet of the black scabbardfish is limited (Mauchline and Gordon, 1984), since the percentage of empty stomachs can be as high as 93% (Freitas, 1998). Concerning the second possibility, new spawning areas have been found very recently in the Canary Islands (Pajuelo et al., 2008) and near the northwest coast of Africa (Perera, 2008), in addition to Madeiran waters, the previously known spawning area of the black scabbardfish (Figueiredo et al., 2003). Therefore, it is possible that the individuals in better condition undertake a spawning migration to one of these known areas in the southern northeast Atlantic. The fish in poorer condition would remain off mainland Portugal and might interrupt reproductive development in successive years, thus increasing in length without ever having spawned (i.e. they are still technically immature). Pawson et al. (2000) reported that full maturity and spawning in female sea bass may not occur until they grow to a length greater than 42 cm and remain in water above 10ºC during the main period of gonad development, conditions that may be achieved through migrations between relatively warm waters. Furthermore, fish that do not meet these conditions maintain their virgin state, the largest oocyte cohort entering atresia.
The presence of non-reproductive individuals in Madeiran waters may indicate the occurrence of skipped spawning. This hypothesis can only be confirmed by the existence of old POFs in the ovaries of non-reproductive females (confirming that these fish had already spawned in the past). On the other hand, the absence of POFs in our samples could be due to a quicker reabsorption of the follicles at a higher and more stable water temperature, which can reach 11ºC, when compared with species living in colder waters, which maintain the POFs for several months (e.g. Atlantic cod, as reported by Saborido-Rey and Junquera (1998) and Rideout et al. (2000)).
It is known that non-reproductive individuals tend to be in poorer condition and have a lower hepatosomatic index than fish that are successfully ripening (Burton and Idler, 1984; Rideout et al., 2000). It is possible that the liver of the black scabbardfish plays a very important role in the success of maturation, based on the difference found in the HSI between non-reproductive and ripening individuals. The liver could be the primary source of energy, with muscle being the secondary source. This could explain the absence of differences in the condition factor found at the beginning of the spawning season between non-reproductive and ripening individuals. However, according to Jørgensen et al. (2006), poor individual condition at the beginning of a spawning season can be either a cause or an effect of skipped spawning; if spawning is skipped, then the best option should be to give priority to somatic growth, keeping energy reserves at a moderate level and resulting in a low condition factor. This poor condition would then be an effect of skipped spawning and thus hard to separate from poor condition stemming from low food availability, which could itself lead to skipped spawning. Skipped spawning may be related to fish size, since the highest percentage of non-reproductive individuals was observed in the smallest length classes. The same observation was made by Jørgensen et al.
(2006) in cod and by Engelhard and Heino (2005) in Atlantic herring. The former reported that smaller cod needed full energy stores to spawn, whereas larger cod also spawned when less energy was stored. The latter also noticed this behaviour in second-time spawners, which frequently skipped spawning. Rideout and Rose (2006) also reported the greatest proportion of non-reproductive fish in the smallest size class of potential spawners in Atlantic cod. This situation may be related to the individual's growth trajectory (Jørgensen et al., 2006): first, only somatic growth takes place up to the age at sexual maturation; second, growth is balanced with reproduction for several years following maturation (growth taking precedence when spawning is skipped more frequently); and third, after that, reproduction receives the full allocation of energy and the frequency of skipped spawning stabilises. In the case of skipped spawning in a fish that has already achieved sexual maturity, the probability of spawning in the upcoming season is less dependent on size and more dependent on the amount of energy stored in the liver and available to be allocated to gonad and gamete development (Rideout et al., 2006).
In black scabbardfish, the percentage of non-reproductive females varied between 21.23% and 37.40%, which means that, in terms of future management, it will be very important to analyse the extent of skipped spawning in order to prevent an overestimate of the population's reproductive potential, which could lead to erroneous assessment estimates with severe consequences for management options.

pelago and to Dr John Gordon for revising the English. The manuscript was greatly improved by the comments of three anonymous referees. This study was partially supported by Fundação para a Ciência e Tecnologia (project POCTI/CVT/46851/2002).
Fig. 5. – Frequency distribution of oocyte size (in mm) in pre-spawning and spawning ovaries without post-ovulatory follicles over the spawning season. (MS III September = maturity stage III in September; MS III October = maturity stage III in October; MS III November = maturity stage III in November; MS IV November = maturity stage IV in November.)
Fig. 7. – Comparison of total fecundity estimates (number of oocytes) (mean, minimum and maximum) obtained by the gravimetric and stereological methods, by total length.
Fig. 12. – Comparison of the (a) gonadosomatic index (GSI), (b) hepatosomatic index (HSI) and (c) Fulton's condition factor (CF) per 5-cm length class between non-reproductive and reproductive females from Madeira. Sample sizes are given in parentheses.
(a) size frequency distribution; (b) mean diameter of the advanced vitellogenic oocytes; (c) number of vitellogenic oocytes; and (d) atresia.
Table 1. – Description of female and male maturity stages for black scabbardfish.
Table 2. – Prevalence of atresia by maturity stage in black scabbardfish from Madeiran waters in 2006 and 2007.
Table 3. – Minimum, mean and maximum values of relative fecundity by female weight, estimated by year and maturity stage.
Table 4. – Proportion of non-reproductive female black scabbardfish per size class (cm) and year in Madeiran waters. Sample sizes are given in parentheses.
|
v3-fos-license
|
2021-07-27T19:06:52.181Z
|
2021-07-08T00:00:00.000
|
236484611
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1155/2021/5477848",
"pdf_hash": "8c9cd358161aed1a2dbda649300e8c5b3673f044",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:1041",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "8c9cd358161aed1a2dbda649300e8c5b3673f044",
"year": 2021
}
|
pes2o/s2orc
|
Joint Angle and Frequency Estimation in Linear Arrays Based on Covariance Reconstruction and ESPRIT
Key Laboratory of Radar Imaging and Microwave Photonics, College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China Department of Precision Instrument, State Key Laboratory of Precision Measurement Technology and Instruments, Tsinghua University, Beijing 100084, China College of Energy and Power Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
Introduction
The joint angle and frequency estimation of received signals submerged in Gaussian white noise has important applications in wireless communication [1], audio and speech signal processing [2], and other fields [3,4]. For example, in a wireless communication system, accurate and robust joint angle and frequency estimation can help provide better channel information, thereby improving the link quality and anti-interference ability of the system [1]. Especially in electronic reconnaissance [5][6][7][8], the operating frequencies and directions of arrival (DOAs) [9][10][11][12][13] of noncooperative radar radiation source signals are often used to describe the main parameters of radar signal characteristics [14][15][16]. Therefore, to effectively obtain the parameters of noncooperative radar source signals, it is necessary to study a joint DOA and frequency estimation method for such signals submerged in Gaussian white noise. Regarding the joint DOA and frequency estimation of noisy signals, researchers worldwide have proposed various methods [16][17][18][19][20][21][22]. In 1986, Schmidt [17] proposed the multiple signal classification (MUSIC) algorithm for parameter estimation. Although the algorithm has good estimation performance, it has high computational complexity, since it needs to search for spectral peaks to obtain the estimated values. Lemma et al. [18] presented a joint angle and frequency estimation method based on the multidimensional estimation of signal parameters via rotational invariance techniques (ESPRIT). Nevertheless, this algorithm has low parameter estimation accuracy under low signal-to-noise ratios (SNRs). To effectively improve the accuracy of estimated DOA and frequency results, in 2010, Wang proposed a joint angle and frequency estimation technique using multiple-delay outputs (MDJAFE) [16] based on the ESPRIT algorithm. However, this method cannot realize automatic parameter pairing when performing the joint estimation of signal parameters.
Since the propagator method (PM) shows good performance in parameter estimation, it has attracted the attention of scholars. Sun et al. [19] proposed a joint DOA and frequency estimation method based on an improved PM. Although the complexity of the algorithm is low and it can realize the automatic pairing of DOA and frequency estimates, its parameter estimation accuracy is not high. Wang et al. [20] proposed an improved ESPRIT algorithm using the multidelay output of a uniform linear array (ULA). Although the algorithm's complexity is greatly reduced, the method is strongly affected by noise, and its estimation accuracy is still very limited when the SNR is low. Based on the extended orthogonal matching pursuit (EOMP) algorithm, Gao et al. [21] proposed an approach to jointly estimate DOAs and frequencies, but this method has high computational complexity. Xu et al. [22] proposed a joint DOA and narrowband source carrier frequency estimation method based on parallel factor (PARAFAC) analysis. The computational complexity of this method is relatively high, and the hardware cost is also high.
Due to the wide range of possible SNRs, existing frequency and DOA estimation algorithms have unstable anti-noise performance and limited estimation accuracy. We propose a method for the joint DOA and frequency estimation of signals submerged in Gaussian white noise. The algorithm involves a three-step estimation procedure. First, we preprocess the received signal. Second, we use the least squares-ESPRIT (LS-ESPRIT) algorithm to estimate the frequency parameters of the signal. Finally, according to the unique relationship between the signal angle and its frequency, we estimate the DOAs. Computer simulations and comparisons with other methods demonstrate the excellent performance of the proposed method. The main contributions of our work can be summarized as follows: (1) We improve upon the estimation process in [20].
Under the condition of a uniform or a nonuniform array, the method proposed in this paper can estimate the required parameters with automatic pairing, without an additional parameter pairing process. Moreover, this method has good estimation accuracy, stable anti-noise performance, and robustness. Therefore, the method proposed in this paper is more suitable than other approaches for the parameter estimation of noncooperative radar radiation sources in an external field, which usually involves a complex electromagnetic environment. (2) This paper proposes a joint angle and frequency estimation method based on covariance reconstruction and ESPRIT (CR-ESPRIT). Within the SNR range from -15 dB to 15 dB (step: 2 dB), its performance is better than that of the PM, the covariance reconstruction and propagator method (CR-PM), the ESPRIT method [16], and the improved ESPRIT method [20]. The remainder of this paper is structured as follows. The materials and methods are presented in Section 2; Section 3 contains the results and a discussion; and Section 4 summarizes the paper. Notation: (·)^H, (·)^*, (·)^(-1), and (·)^+ denote the conjugate transpose, complex conjugation, inverse, and Moore-Penrose inverse (pseudoinverse) operations, respectively. Matrices and vectors are represented by boldface capital and lowercase letters, respectively.
Signal Model.
Consider an antenna array consisting of M elements arranged in a straight line at equal distances, where the distance between adjacent elements is d [23]. We suppose that K (K < M) far-field narrowband source signals (with center frequencies f_k) are incident on the antenna array. Therefore, we can regard the signals as plane waves when they reach the array. Then, we can express the received signal of the mth antenna as follows [24]: where s_k(t) is the kth incident far-field source signal, c is the speed of light (m/s), θ_k and f_k are the DOA and frequency of the kth signal, respectively, and n_m(t) is the zero-mean additive white Gaussian noise at the mth antenna. We can express the output signal of the linear array as Y_0 = [y_1(n), y_2(n), . . . , y_M(n)]^T, n = 1, 2, . . . , N. (2) We assume that the signal is uniformly sampled at a rate satisfying the Nyquist criterion and that the number of snapshots is N. Therefore, we can transform the signal model studied in this paper into a joint DOA and frequency estimation model for multiple source signals, where N sampling points are obtained for each source signal.
We assume that the number of signal sources K is known; thus, we can rewrite the output vector (2) in matrix form. In equation (4), α_k = 2πd f_k sin(θ_k)/c, k = 1, . . . , K. To realize the joint DOA and frequency estimation model, we take (P−1) delays [25,26] of the signal received from the antenna array, as shown in Figure 1. In addition, we set 0 < (P − 1)τ < 1/max(f_k).
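Since the model equations are not reproduced in this extraction, the following sketch simulates the stacked delayed-output data matrix under the assumptions stated in the text (narrowband far-field sources on a ULA, P−1 extra delayed copies); the function name, the sampling rate, and the noise model are my own choices, not the paper's:

```python
import numpy as np

def simulate_delayed_array(thetas_deg, freqs_hz, M=12, P=3, d=50.0, N=400,
                           tau=1e-8, snr_db=20.0, seed=0):
    """Generate the stacked (P*M) x N delayed-output data matrix Y."""
    c = 3e8
    rng = np.random.default_rng(seed)
    freqs = np.asarray(freqs_hz, dtype=float)
    t = np.arange(N) / (8.0 * freqs.max())                 # sample times (above Nyquist)
    S = np.exp(1j * 2 * np.pi * np.outer(freqs, t))        # K x N source matrix
    alphas = 2 * np.pi * d * freqs * np.sin(np.deg2rad(thetas_deg)) / c
    A = np.exp(1j * np.outer(np.arange(M), alphas))        # M x K spatial steering
    # delay block p multiplies source k by exp(-j * 2*pi * f_k * p * tau)
    Y = np.vstack([A @ (np.exp(-1j * 2 * np.pi * freqs * p * tau)[:, None] * S)
                   for p in range(P)])
    # complex white Gaussian noise scaled by the nominal SNR
    noise = rng.standard_normal(Y.shape) + 1j * rng.standard_normal(Y.shape)
    return Y + noise * 10 ** (-snr_db / 20.0) / np.sqrt(2.0)
```

With the defaults above, Y has shape (P·M) × N = 36 × 400, and the delay value respects the constraint 0 < (P − 1)τ < 1/max(f_k).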
Therefore, we can obtain the delay signal with delay value τ, as in equation (5), which we can transform into an equivalent form. When the delay value is pτ, we can express the delay signal as in equation (7), which can be rewritten accordingly. After reorganizing the equations, we obtain the final expression of the model. 2.2. The Proposed Method. In this paper, inspired by the improved ESPRIT method [20], we propose a joint angle and frequency estimation method based on CR-ESPRIT. In a real space, the improved ESPRIT method is not suitable for complex electromagnetic environments. Due to the noncooperative characteristics of the radiation sources, we generally assume that no prior information is available regarding the parameters. Moreover, in a complex and harsh electromagnetic environment, the detected radiation source signals are very weak. Therefore, in a situation with a low SNR, the developed method not only needs to distinguish useful signals from noise effectively but also needs to have good estimation performance, noise immunity, and robustness. Additionally, it needs the ability to automatically pair the relevant parameters, without an additional pairing process, under the condition of a uniform or a nonuniform array. We first preprocess the received signal in Section 2.2.1. Second, we use the LS-ESPRIT algorithm to estimate the frequency parameters of the received signal in Section 2.2.2.
Third, according to the relationship between the DOA and frequency in the signal model, we reconstruct the received signal and then estimate the DOAs in Section 2.2.3. In Section 2.2.4, we provide the detailed steps of the proposed method. Finally, we provide the detailed steps of the proposed method under the condition of a nonuniform array in Section 2.2.5.
2.2.1.
The Preprocessing Procedure. First, we obtain the covariance matrix R_Y = YY^H of the received signal. To make full use of the conjugate information contained in the received signal, we define the permutation (exchange) matrix J [27] and use it to construct the matrix R_J of equation (11). We add the covariance matrices R_Y and R_J from equation (11) and average them, obtaining the total covariance matrix of equation (12). Through analysis, we find that the new total covariance matrix R is a Hermitian matrix (PM × PM) [28]. Therefore, we can apply eigenvalue decomposition and reconstruct the signal subspace E_ss. In a no-noise situation, E_ss can be approximately expressed as in equation (13), where F is a full-rank K × K matrix.
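A minimal numerical sketch of this preprocessing step, under my assumption (consistent with the remark after equation (33) below) that R_J = J R_Y^* J with J the exchange matrix; the averaged matrix is Hermitian, as the text states:

```python
import numpy as np

def fb_covariance(Y):
    """Average R_Y with its 'flipped conjugate' J R_Y^* J (cf. eq. (12))."""
    PM, N = Y.shape
    R_Y = Y @ Y.conj().T / N
    J = np.eye(PM)[::-1]                 # exchange (permutation) matrix J
    return 0.5 * (R_Y + J @ R_Y.conj() @ J)
```

The result is both Hermitian and persymmetric (it equals its own flipped conjugate), which is what makes the reduced-complexity remark below possible.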
Remark 1.
As mentioned earlier, the new total covariance matrix R is a Hermitian matrix. From the properties of Hermitian matrices, let G be the diagonal matrix of the eigenvalues of R; then there exists a unitary matrix U such that RU = UG. Therefore, we can exploit this relation between R and the unitary matrix U to further reduce the complexity of the proposed method and derive a much lower-complexity variant.
Frequency Estimation.
We define the parameters in equations (14) and (15); equations (14) and (15) are then related as follows. According to the LS-ESPRIT algorithm, we can estimate Φ by the eigenvalue decomposition of Ψ, and we can also estimate the matrix F^(−1) from the eigenvectors of Φ. In a no-noise situation, we define the decomposition below, where Θ is a fuzzy column matrix. Since Ψ and Φ have the same eigenvalues, we can obtain the eigenvalues λ_k (k = 1, 2, . . . , K) from the matrix Ψ. As shown in equation (6), it follows that we can estimate the frequency parameters f_k, k = 1, 2, . . . , K, via equation (18).

Figure 1: Received signals with multilevel delays.
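A sketch of this frequency-estimation step. The partitioning of E_ss into E_1 (all but the last delay block) and E_2 (all but the first) is my reading of the standard ESPRIT shift invariance, and may differ in detail from the paper's equations (14)–(15); the eigenvalues of Ψ = E_1^+ E_2 are then exp(−j2πf_kτ):

```python
import numpy as np

def esprit_frequencies(R, M, P, K, tau):
    """LS-ESPRIT across the delay dimension of the (P*M) x (P*M) covariance R."""
    w, V = np.linalg.eigh(R)
    Ess = V[:, np.argsort(w)[::-1][:K]]        # K dominant eigenvectors
    E1, E2 = Ess[:(P - 1) * M, :], Ess[M:, :]  # one-delay-block shift
    lam, F_inv = np.linalg.eig(np.linalg.pinv(E1) @ E2)
    freqs = -np.angle(lam) / (2 * np.pi * tau)  # lam_k = exp(-j*2*pi*f_k*tau)
    return freqs, Ess, F_inv
```

Returning E_ss and the eigenvector matrix (an estimate of F^(−1) up to column scaling) lets the DOA step below reuse them, which is what enables automatic pairing.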
DOA Estimation. AΦ^(P−1) has the following expression:
Based on the estimate of F^(−1) obtained above, we can define a new expression by reconstructing equation (13), as in equation (20), with the corresponding definitions. According to the reconstructed equation (20), we can use the method described below to estimate the DOAs.
We define the matrices E_Q1 and E_Q2 and then a matrix D = E_Q1^+ E_Q2. According to the definitions of E_Q1 and E_Q2, we can express D in a no-noise situation. Therefore, we can take the diagonal elements of D and obtain ϖ_k (k = 1, 2, . . . , K), where α_k = 2πd f_k sin(θ_k)/c (k = 1, 2, . . . , K), to obtain the DOA estimates via equation (24).
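A sketch of the paired DOA step in the same spirit as the diagonal-of-D construction: multiplying E_ss by the eigenvector matrix from the frequency step recovers the steering matrix up to per-column scaling, so each α_k comes out already paired with its f_k (the block partitioning and the averaging of adjacent-element ratios are my own choices, not the paper's exact equations):

```python
import numpy as np

def paired_doas(Ess, F_inv, freqs_hz, M, d=50.0, c=3e8):
    """Ess @ F_inv ~ steering matrix up to per-column scaling; the phase
    ramp down the first spatial block gives alpha_k, paired with f_k."""
    B_hat = (Ess @ F_inv)[:M, :]               # first delay block, M x K
    ratios = B_hat[1:, :] / B_hat[:-1, :]      # adjacent-element ratios
    alphas = np.angle(ratios.mean(axis=0))     # alpha_k = 2*pi*d*f_k*sin(theta_k)/c
    return np.rad2deg(np.arcsin(alphas * c / (2 * np.pi * d * np.asarray(freqs_hz))))
```

Because column k of F_inv is the eigenvector associated with the kth estimated frequency, the kth returned angle is automatically paired with that frequency.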
The Steps of the Proposed Method.
Thus far, we have given the complete process for automatically pairing the DOA and frequency estimates in a linear array. The main steps required to implement the method proposed in this paper are as follows:
(i) Step 1: according to the permutation matrix J and equation (12), we reconstruct the covariance matrix R.
(ii) Step 2: we apply eigenvalue decomposition to R and then reconstruct the signal subspace E_ss. According to equations (14) and (15), we construct matrices E_1 and E_2, respectively.
(iii) Step 3: we apply eigenvalue decomposition to Ψ = E_1^+ E_2 to obtain F^(−1) and Φ. Finally, we estimate the frequency parameters f_k according to equation (18).
(iv) Step 4: we obtain matrix E_Q according to the reconstruction of E_ss in equation (13). Then, we construct matrices E_Q1 and E_Q2.
(v) Step 5: we calculate D = E_Q1^+ E_Q2 to obtain matrix D. Finally, we estimate the DOA parameters θ_k according to equation (24).
The Condition of a Nonuniform Array.
In this section, we first present the method proposed in this paper for the case in which the distances between the array elements are not equal. Then, we present the main steps for implementing the method in the case of a nonuniform array.
We assume that the first element is located at d_1 = 0 and that the distance between the mth element and the first element is d_m.
Then, we can transform equation (4) into the form of equation (25), where η_k = 2πf_k sin(θ_k)/c (k = 1, 2, . . . , K). At the same time, equation (19) undergoes a corresponding transformation. Similarly, we can reconstruct equation (13), define the expression of equation (27) with its associated definitions, and construct the matrices E_QQ1 and E_QQ2 and the matrix Q_m. Therefore, we can take the diagonal elements of Q_m, sort them, and define the matrix of equation (30). According to equation (30), we can obtain the DOA estimates of equation (31). The main steps for implementing the method in this paper under the condition of a nonuniform array are as follows:
(i) Step 1: according to equations (10), (12), and (25), we reconstruct the new covariance matrix.
(ii) Step 2: we apply eigenvalue decomposition to the new covariance matrix and reconstruct the signal subspace E_ss. According to equations (14) and (15), we construct matrices E_1 and E_2, respectively.
(iii) Step 3: we apply eigenvalue decomposition to Ψ = E_1^+ E_2 to obtain F^(−1) and Φ. Finally, we estimate the frequency parameters f_k according to equation (18).
(iv) Step 4: we obtain the matrix E_QQ according to equation (27). Then, we construct matrices E_QQ1 and E_QQ2.
(v) Step 5: we obtain matrix V according to equation (30). Finally, we estimate the DOA parameters θ_k according to equation (31).
Method Complexity.
In this section, we focus on the performance analysis with respect to complexity.
Complexity is mainly measured by the number of complex multiplications and the running time required by a given method. For the ESPRIT method in [16], the complexity is of the same order as that of the proposed method (see the comparison below). For the improved ESPRIT method in [20], the complexity required to calculate the covariance matrix R_Y is O(M^2 P^2 N), and the complexity required for its eigenvalue decomposition is O(M^3 P^3). When estimating the DOA, the complexity is O(2K^3 + 2K^2 (M − 1)P). The complexity of the improved ESPRIT method is then the sum of these terms. For the proposed method, the preprocessing complexity is O(M^2 P^2 N + M^3 P^3). The complexity of frequency estimation is O(2K^2 M(P − 1) + 3K^3). In addition, the complexity of DOA estimation is O(2K^3 + 2K^2 (M − 1)P). Therefore, the complexity of the proposed method is O(M^2 P^2 N + M^3 P^3 + 2K^2 M(P − 1) + 5K^3 + 2K^2 (M − 1)P + 1). It should be noted that in the preprocessing of this paper, we only need to calculate the covariance matrix R_Y and its eigenvalue decomposition, which means that we do not require additional calculations to construct the matrix R_J. The reason is that, according to equation (11), R_J has the following expression:
Mathematical Problems in Engineering
By observing equations (32) and (33), we find that a simple matrix transformation converts R_Y into R_J. Therefore, during preprocessing, we do not need additional complex multiplications to reconstruct the matrix R_J.
For the PM, the complexity of frequency estimation is O(M^2 P^2 N + 4K^3 + M^2 P^2 K + PMK^2 + 2K^2 (M − K)). In addition, the complexity of DOA estimation is O(K^2 (M − K) + 2K^3 + 2K^2 (M − 1)). The complexity of the PM is then the sum of these terms. The CR technique is also applicable to the PM; the complexity of the CR-PM is O(M^2 P^2 N + 6K^3 + M^2 P^2 K + PMK^2 + 3K^2 (M − K) + 2K^2 (M − 1)). Figures 2 and 3 present the complexity comparison of these algorithms versus the number of signal sources K and the number of snapshots N with M = 12 and P = 3, respectively. Table 1 compares the running times of these algorithms on an i7-8550U CPU with K = 3, N = 200, and 2000 Monte Carlo simulations. Figures 2 and 3 show that the complexity of the method proposed in this paper is almost the same as that of the ESPRIT method in [16] and that of the improved ESPRIT method in [20], and is much higher than that of the PM and the CR-PM. In addition, the running time of the proposed method does not increase much. Moreover, as the subsequent analysis shows, within the SNR range from −15 dB to −1 dB, the advantages of the proposed method are more obvious. In particular, when SNR = −15 dB, compared with the improved ESPRIT method, the frequency estimation accuracy of the proposed method improves by approximately 25.50%, and the DOA estimation accuracy improves by approximately 31.95%. Therefore, we can confirm that, by increasing the utilization of the originally received data, we improve the parameter estimation accuracy and the noise robustness of the proposed method.
The Advantages of the Proposed Method.
In this section, we summarize the advantages of the proposed method in this paper as follows: (1) Under the condition of a uniform or a nonuniform array, the method can effectively estimate the DOAs and frequencies of source signals. It can also realize automatic pairing without an additional parameter pairing process since the method has the same fuzzy column matrix for both parameters.
(2) For incoherent signal sources whose angles are close together, this method can perform effective identification and parameter estimation.
(3) Compared with those of the PM, the CR-PM, the ESPRIT method [16], and the improved ESPRIT method [20], the frequency and DOA estimation accuracies of the proposed method are greatly improved, and this method has superior estimation performance. Moreover, the proposed method has better anti-noise performance and stronger robustness.
Numerical Simulation.
In the simulation, we assume that the array receives signals emitted by K incoherent far-field sources. We use the root mean square error (RMSE) metric to evaluate the DOA and frequency estimation performance of the proposed method; the RMSEs are defined as in equation (34), where θ_{k,l} and f_{k,l} are the estimates of θ_k and f_k, respectively, in the lth Monte Carlo simulation and L is the number of Monte Carlo simulations. In this paper, we set L = 2000.
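The RMSE of equation (34) can be sketched as follows (the array-shape convention is mine):

```python
import numpy as np

def rmse(estimates, truth):
    """Root mean square error per source over L Monte Carlo runs.
    `estimates` has shape (L, K); `truth` has shape (K,)."""
    err = np.asarray(estimates, dtype=float) - np.asarray(truth, dtype=float)
    return np.sqrt(np.mean(err ** 2, axis=0))
```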
Performance Analysis of the Proposed Method in a Uniform Array. In this section, we assume that the array receives signals emitted by three incoherent far-field sources. The DOAs and operating frequencies of the signals are (θ_1, f_1) = (15°, 1 MHz), (θ_2, f_2) = (40°, 2.1 MHz), and (θ_3, f_3) = (50°, 3.1 MHz). SNR = 0 dB, M = 12 is the number of array elements, P = 3 is the number of delay values, d = 50 m is the distance between array elements, and N = 400 and K = 3 are the numbers of snapshots and signal sources, respectively. The scatter diagram of the joint frequency and DOA estimates of the proposed method is shown in Figure 4, which demonstrates that the proposed method is efficient in estimating the frequencies and DOAs for a uniform array.
Performance Analysis under Different Numbers of Array Elements M.
We set d = 50 m, K = 3, P = 3, and N = 400. We also set different numbers of array elements (M = 8, 12, and 16). The SNR range is from −15 dB to 15 dB (step: 2 dB), and the RMSEs of the frequency and DOA estimates of the proposed method are shown in Figures 5 and 6, respectively.
We can see from Figures 5 and 6 that the proposed method achieves high estimation performance within the SNR range of −15 dB to 15 dB (step: 2 dB) under different numbers of array elements. The estimation performance is stable even at low SNR. Moreover, the SNR has a great impact on the estimation accuracy of the frequency and DOA: the higher the SNR, the higher the estimation accuracy for these two parameters. With an increase in the number of array elements, the DOA and frequency estimation accuracies of the proposed method improve, and the RMSEs are greatly reduced. This is because, as the number of array elements increases, the space diversity gain increases [29]. Within the SNR range from −15 dB to 15 dB (step: 2 dB), the RMSEs of the proposed method also demonstrate that, with different numbers of snapshots, the algorithm can still maintain high estimation performance. Even at low SNR, the estimation performance is still stable. As the number of snapshots increases, the estimation accuracy of the proposed method is enhanced, the performance is more precise, and the RMSEs of the frequency and DOA estimates decrease.
Performance Analysis under Different Delay Values P.
We set d = 50 m, K = 3, N = 400, and M = 12. We also set different delay values (P = 2, 3, and 4). The SNR range is from −15 dB to 15 dB (step: 2 dB), and the RMSEs of the frequency and DOA estimates of the proposed method are shown in Figures 9 and 10, respectively. As Figures 9 and 10 show, under different delay values, the proposed method maintains high DOA and frequency estimation performance when the SNR ranges from −15 dB to 15 dB. The estimation performance is stable even at low SNR. As the delay value increases, the estimation accuracy of the proposed method is enhanced, the performance is more precise, and the RMSEs of the DOA and frequency estimates decrease.
Performance Analysis under Different Numbers of Signal Sources K. We set d = 50 m, P = 3, N = 400, and M = 12. We also set different numbers of signal sources (K = 2, 3, and 4). The SNR range is from −15 dB to 15 dB (step: 2 dB), and the RMSEs of the frequency and DOA estimates of the proposed method are shown in Figures 11 and 12, respectively. As Figures 11 and 12 show, under different numbers of signal sources, the proposed method maintains high DOA and frequency estimation performance when the SNR ranges from −15 dB to 15 dB. The estimation performance is stable even at low SNR. As the number of signal sources increases, the estimation accuracy of the proposed method deteriorates and the RMSEs of the DOA and frequency estimates increase: with more sources, the interference between sources grows, and the frequency and DOA estimation performance worsens [30]. We also set SNR = 5 dB. As shown in Figure 13, for signal sources with close angles, the proposed method can also perform effective identification and parameter estimation.
Performance Analysis of the Proposed Method in a Nonuniform Array.
In an actual field receiving system, the assumed reception model is different from the true model even after a calibration procedure [31]. Therefore, in this section, we mainly discuss the performance analysis under the condition of a nonuniform array, such as array element position deviation [32] and uneven distance between array elements.
In this section, we assume that the array receives signals emitted by three incoherent far-field sources. The DOAs and operating frequencies of the signals are (θ 1 , f 1 ) = (15°, …). Figures 14 and 15 show that the proposed method is efficient in estimating the frequency and DOA for both nonuniform array conditions.
Analysis of the Performances of Different Methods.
In this section, we focus on analyzing the performances of different methods. We assume that the array receives signals emitted by two incoherent far-field sources. The DOAs and operating frequencies of the signals are (θ 1 , f 1 ) = (15°, 1 MHz) and (θ 2 , f 2 ) = (40°, 2.1 MHz). We set d = 50 m, K = 2, P = 2, N = 400, and M = 12. The range of the SNR is from −15 dB to 15 dB (step: 2 dB), and the RMSEs of the Cramer-Rao lower bound (CRLB), the PM, the CR-PM, the ESPRIT method [16], the improved ESPRIT method [20], and the method proposed in this paper with respect to the frequency and DOA estimations are shown in Figures 16 and 17, respectively.
To quantify the improvement of the proposed method over the improved ESPRIT method [20] under the condition of a low SNR, we define relative improvement ratios for the frequency and DOA estimates in equations (35) and (36). According to these definitions, the relative improvement ratios are shown in Figures 18 and 19.
As shown in Figures 16 and 17, when the SNR is within the range of −15 dB to 15 dB (in steps of 2 dB), the estimation accuracy of the proposed method is better than that of the PM, the CR-PM, the ESPRIT method [16], and the improved ESPRIT method [20] in terms of both the DOA and frequency. Among them, the ESPRIT method has extremely poor angle estimation accuracy since it cannot automatically pair parameters.
As shown in Figures 18 and 19, when the SNR is between −15 dB and −1 dB, the estimation accuracy of the proposed method is greatly improved over that of the improved ESPRIT method. In particular, at SNR = −15 dB, the proposed method improves the frequency estimation accuracy by approximately 25.50% and the DOA estimation accuracy by approximately 31.95% relative to the improved ESPRIT method. However, when the SNR is between −1 dB and 15 dB, the relative improvement ratio fluctuates around zero, which indicates that the estimation accuracy of the proposed method is almost the same as that of the improved ESPRIT method. We surmise that this fluctuation results from the limited number of simulation runs in this paper.
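Since equations (35) and (36) are not reproduced in this excerpt, the exact definition below is an assumption: a common form expresses the improvement as the relative reduction in RMSE with respect to the baseline method.

```python
def relative_improvement(rmse_baseline, rmse_proposed):
    """Assumed form of the relative improvement ratio, in percent:
    the reduction in RMSE of the proposed method relative to the
    baseline (improved ESPRIT) RMSE. Negative means the baseline
    was more accurate."""
    return 100.0 * (rmse_baseline - rmse_proposed) / rmse_baseline

# Example: a baseline frequency RMSE of 1.0 against a proposed RMSE of
# 0.745 corresponds to a 25.5% improvement.
ratio = relative_improvement(1.0, 0.745)
```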
In summary, a comprehensive analysis of Figures 16-19 shows that the estimation accuracy of the proposed method is improved over that of the PM, the CR-PM, the ESPRIT method, and the improved ESPRIT method.
The results further verify that the method proposed in this paper has good anti-noise performance and stability under different SNRs.
Therefore, compared to the PM, the CR-PM, the ESPRIT method, and the improved ESPRIT method, the method proposed in this paper is more suitable for use in a complex electromagnetic environment.
Conclusions
For linear arrays, this paper proposes a joint angle and frequency estimation method based on CR-ESPRIT. We first preprocess the received signal by taking full advantage of the conjugate information contained in the originally received data, and we reconstruct a new total covariance matrix. Then, we use the LS-ESPRIT algorithm to estimate the frequency parameter. According to the unique relationship between angles and frequencies, we estimate the DOAs based on the reconstructed received signal. The complexity of the method proposed in this paper is almost the same as that of the ESPRIT and the improved ESPRIT methods. Numerical simulations and comparisons with the PM, the CR-PM, the ESPRIT method, and the improved ESPRIT method prove the superiority of the proposed method. In a real space environment, under the condition of a uniform or a nonuniform array, this method can realize the automatic pairing of the estimated DOAs and frequencies of radiation source signals without an additional parameter pairing process. Moreover, this method has high accuracy and strong anti-noise performance when conducting parameter estimation.
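The frequency-estimation step summarized above (signal-subspace extraction from a covariance matrix followed by a least-squares solution of the rotation operator) can be sketched with a generic LS-ESPRIT routine. This is an illustrative reconstruction in Python/NumPy, not the paper's exact CR-ESPRIT implementation; in particular, the conjugate-augmented total covariance matrix is omitted.

```python
import numpy as np

def ls_esprit_freqs(X, K, fs):
    """Minimal LS-ESPRIT frequency estimator (illustrative sketch).

    X  : data matrix (delay taps x samples) whose signal subspace has
         the shift-invariance structure exploited by ESPRIT.
    K  : number of sources.
    fs : sampling rate in Hz.
    """
    R = X @ X.conj().T / X.shape[1]               # sample covariance
    _, vecs = np.linalg.eigh(R)                   # ascending eigenvalues
    Es = vecs[:, -K:]                             # K-dim signal subspace
    E1, E2 = Es[:-1], Es[1:]                      # shift-invariant halves
    Psi, *_ = np.linalg.lstsq(E1, E2, rcond=None) # LS rotation operator
    phases = np.angle(np.linalg.eigvals(Psi))     # e^{j 2 pi f / fs}
    return np.sort(phases * fs / (2 * np.pi))

# Two complex exponentials at 1.0 MHz and 2.1 MHz, fs = 10 MHz,
# observed through 12 delayed copies (noiseless, for illustration).
fs, N, taps = 10e6, 400, 12
n = np.arange(N + taps)
s = (np.exp(2j * np.pi * 1.0e6 * n / fs)
     + np.exp(2j * np.pi * 2.1e6 * n / fs))
X = np.array([s[m:m + N] for m in range(taps)])
f_hat = ls_esprit_freqs(X, K=2, fs=fs)
# f_hat is close to [1.0e6, 2.1e6] in this noiseless case.
```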
Data Availability
The data used to support the findings of the study are included within this paper.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Generating Natural Language Adversarial Examples
Deep neural networks (DNNs) are vulnerable to adversarial examples, perturbations to correctly classified examples which can cause the model to misclassify. In the image domain, these perturbations can often be made virtually indistinguishable to human perception, causing humans and state-of-the-art models to disagree. However, in the natural language domain, small perturbations are clearly perceptible, and the replacement of a single word can drastically alter the semantics of the document. Given these challenges, we use a black-box population-based optimization algorithm to generate semantically and syntactically similar adversarial examples that fool well-trained sentiment analysis and textual entailment models with success rates of 97% and 70%, respectively. We additionally demonstrate that 92.3% of the successful sentiment analysis adversarial examples are classified to their original label by 20 human annotators, and that the examples are perceptibly quite similar. Finally, we discuss an attempt to use adversarial training as a defense, but fail to yield improvement, demonstrating the strength and diversity of our adversarial examples. We hope our findings encourage researchers to pursue improving the robustness of DNNs in the natural language domain.
Introduction
Recent research has found that deep neural networks (DNNs) are vulnerable to adversarial examples (Goodfellow et al., 2015; Szegedy et al., 2014). The existence of adversarial examples has been shown in image classification (Szegedy et al., 2014) and speech recognition (Carlini and Wagner, 2018). In this work, we demonstrate that adversarial examples can be constructed in the context of natural language. Using a black-box population-based optimization algorithm, we successfully generate both semantically and syntactically similar adversarial examples against models trained on both the IMDB (Maas et al., 2011) sentiment analysis task and the Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) textual entailment task. In addition, we validate that the examples are both correctly classified by human evaluators and similar to the original via a human study. Finally, we attempt to defend against said adversarial attack using adversarial training, but fail to yield any robustness, demonstrating the strength and diversity of the generated adversarial examples. (* Moustafa Alzantot and Yash Sharma contributed equally to this work.)
Our results show that by minimizing the semantic and syntactic dissimilarity, an attacker can perturb examples such that humans correctly classify, but high-performing models misclassify. We are open-sourcing our attack 1 to encourage research in training DNNs robust to adversarial attacks in the natural language domain.
Natural Language Adversarial Examples
Adversarial examples have been explored primarily in the image recognition domain. Examples have been generated through solving an optimization problem, attempting to induce misclassification while minimizing the perceptual distortion (Szegedy et al., 2014; Carlini and Wagner, 2017). Due to the computational cost of such approaches, fast methods were introduced which, either in one step or iteratively, shift all pixels simultaneously until a distortion constraint is reached (Goodfellow et al., 2015; Kurakin et al., 2017; Madry et al., 2018). Nearly all popular methods are gradient-based.
Such methods, however, rely on the fact that adding small perturbations to many pixels in the image will not have a noticeable effect on a human viewer. This approach obviously does not transfer to the natural language domain, as all changes are perceptible. Furthermore, unlike continuous image pixel values, words in a sentence are discrete tokens. Therefore, it is not possible to compute the gradient of the network loss function with respect to the input words. A straightforward workaround is to project input sentences into a continuous space (e.g. word embeddings) and consider this as the model input. However, this approach also fails because it still assumes that replacing every word with words nearby in the embedding space will not be noticeable. Replacing words without accounting for syntactic coherence will certainly lead to improperly constructed sentences which will look odd to the reader.
Relative to the image domain, little work has been pursued for generating natural language adversarial examples. Given the difficulty in generating semantics-preserving perturbations, distracting sentences have been added to the input document in order to induce misclassification (Jia and Liang, 2017). In our work, we attempt to generate semantically and syntactically similar adversarial examples, via word replacements, resolving the aforementioned issues. Minimizing the number of word replacements necessary to induce misclassification has been studied in previous work (Papernot et al., 2016), however without consideration given to semantics or syntactics, yielding incoherent generated examples. In recent work, there have been a few attempts at generating adversarial examples for language tasks by using back-translation (Iyyer et al., 2018), exploiting machine-generated rules (Ribeiro et al., 2018), and searching in underlying semantic space (Zhao et al., 2018). In addition, while preparing our submission, we became aware of recent work which target a similar contribution (Kuleshov et al., 2018;Ebrahimi et al., 2018). We treat these contributions as parallel work.
Threat model
We assume the attacker has black-box access to the target model; the attacker is not aware of the model architecture, parameters, or training data, and is only capable of querying the target model with supplied inputs and obtaining the output predictions and their confidence scores. This setting has been extensively studied in the image domain (Papernot et al., 2017; Chen et al., 2017a; Alzantot et al., 2018), but has yet to be explored in the context of natural language.
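This threat model can be captured by a thin wrapper that exposes only prediction queries (plus, for bookkeeping, a query counter); `predict_proba` below is a hypothetical stand-in for the victim model, not an interface from the paper.

```python
class BlackBoxModel:
    """Query-only access to a victim classifier, per the threat model:
    no architecture, parameters, or training data are exposed, only
    output predictions and their confidence scores."""

    def __init__(self, predict_proba):
        self._predict_proba = predict_proba  # hidden victim model
        self.n_queries = 0                   # attack cost bookkeeping

    def query(self, x):
        self.n_queries += 1
        return self._predict_proba(x)

# Toy victim: always returns fixed class probabilities.
bb = BlackBoxModel(lambda x: [0.2, 0.8])
probs = bb.query(["some", "sentence"])
```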
Algorithm
To avoid the limitations of gradient-based attack methods, we design an algorithm for constructing adversarial examples with the following goals in mind. We aim to minimize the number of modified words between the original and adversarial examples, but only perform modifications which retain semantic similarity with the original and syntactic coherence. To achieve these goals, instead of relying on gradient-based optimization, we developed an attack algorithm that exploits population-based gradient-free optimization via genetic algorithms.
An added benefit of using gradient-free optimization is enabling use in the black-box case; gradient-reliant algorithms are inapplicable in this case, as they are dependent on the model being differentiable and the internals being accessible (Papernot et al., 2016;Ebrahimi et al., 2018).
Genetic algorithms are inspired by the process of natural selection, iteratively evolving a population of candidate solutions towards better solutions. The population of each iteration is called a generation. In each generation, the quality of population members is evaluated using a fitness function. "Fitter" solutions are more likely to be selected for breeding the next generation. The next generation is generated through a combination of crossover and mutation. Crossover is the process of taking more than one parent solution and producing a child solution from them; it is analogous to reproduction and biological crossover. Mutation is done in order to increase the diversity of population members and provide better exploration of the search space. Genetic algorithms are known to perform well in solving combinatorial optimization problems (Anderson and Ferris, 1994; Mühlenbein, 1989), and due to employing a population of candidate solutions, these algorithms can find successful adversarial examples with fewer modifications.
Perturb Subroutine: In order to explain our algorithm, we first introduce the subroutine Perturb. This subroutine accepts an input sentence x_cur which can be either a modified sentence or the same as x_orig. It randomly selects a word w in the sentence x_cur and then selects a suitable replacement word that has similar semantic meaning, fits within the surrounding context, and increases the target label prediction score. In order to select the best replacement word, Perturb applies the following steps:
• First, it computes the N nearest neighbors of the selected word according to euclidean distance in the GloVe embedding space (Pennington et al., 2014); we did not see noticeable improvement using cosine distance. We filter out candidates with distance to the selected word greater than δ. We use the counter-fitting method presented in (Mrkšić et al., 2016) to post-process the adversary's GloVe vectors to ensure that the nearest neighbors are synonyms. The resulting embedding is independent of the embeddings used by victim models.
• Second, we use the Google 1 billion words language model (Chelba et al., 2013) to filter out words that do not fit within the context surrounding the word w in x_cur. We do so by ranking the candidate words based on their language model scores when fit within the replacement context, and keeping only the top K words with the highest scores.
• Third, from the remaining set of words, we pick the one that will maximize the target label prediction probability when it replaces the word w in x_cur.
• Finally, the selected word is inserted in place of w, and Perturb returns the resulting sentence.
The selection of which word to replace in the input sentence is done by random sampling with probabilities proportional to the number of neighbors each word has within euclidean distance δ in the counter-fitted embedding space, encouraging the solution set to be large enough for the algorithm to make appropriate modifications. We exclude common articles and prepositions (e.g. a, to) from being selected for replacement.
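The steps above can be sketched as follows. The vocabulary, embedding matrix, language-model scorer, and victim-model scorer here are toy stand-ins (the paper uses counter-fitted GloVe vectors, the Google 1-billion-words LM, and the target model's confidence), and word selection is uniform rather than weighted by neighbor count.

```python
import numpy as np

def perturb(tokens, vocab, emb, lm_score, target_prob,
            N=8, K=4, delta=0.5, rng=None):
    """Sketch of the Perturb subroutine (illustrative, not the
    authors' exact implementation)."""
    rng = np.random.default_rng(rng)
    words = list(vocab)                      # row i of emb <-> words[i]
    i = int(rng.integers(len(tokens)))       # pick a word to replace
    # 1. N nearest neighbors within euclidean distance delta.
    d = np.linalg.norm(emb - emb[vocab[tokens[i]]], axis=1)
    cand = [words[j] for j in np.argsort(d)[1:N + 1] if d[j] <= delta]
    if not cand:
        return tokens
    # 2. Keep the K candidates that best fit the surrounding context.
    cand = sorted(cand, key=lambda c: lm_score(tokens, i, c),
                  reverse=True)[:K]
    # 3. Pick the candidate maximizing the target-label probability.
    best = max(cand,
               key=lambda c: target_prob(tokens[:i] + [c] + tokens[i + 1:]))
    return tokens[:i] + [best] + tokens[i + 1:]

# Toy example: "terrible" has one close neighbor, "horrible", which the
# (made-up) victim model scores as more positive.
vocab = {"terrible": 0, "horrible": 1, "great": 2}
emb = np.array([[0.0, 0.0], [0.3, 0.0], [5.0, 5.0]])
out = perturb(["terrible"], vocab, emb,
              lm_score=lambda t, i, c: 1.0,
              target_prob=lambda t: 1.0 if t[0] == "horrible" else 0.0,
              rng=0)
# out == ["horrible"]
```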
Optimization Procedure: The optimization algorithm can be seen in Algorithm 1. The algorithm starts by creating the initial generation P_0 of size S by calling the Perturb subroutine S times to create a set of distinct modifications to the original sentence. Then, the fitness of each population member in the current generation is computed as the target label prediction probability, found by querying the victim model function f. If a population member's predicted label is equal to the target label, the optimization is complete. Otherwise, pairs of population members from the current generation are randomly sampled with probability proportional to their fitness values. A new child sentence is then synthesized from a pair of parent sentences by independently sampling from the two using a uniform distribution. Finally, the Perturb subroutine is applied to the resulting children. The per-generation loop recovered from Algorithm 1 is:

for i = 1, ..., S in population do
    Sample parent_1 from P_{g-1} with probs p
    Sample parent_2 from P_{g-1} with probs p
    child = Crossover(parent_1, parent_2)
    child_mut = Perturb(child, target)
    P_g^i = {child_mut}
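The per-generation loop of Algorithm 1 (initial population via Perturb, fitness-proportional parent sampling, uniform crossover, then mutation via Perturb) can be sketched as below; `toy_fit`, `toy_perturb`, and `toy_cross` are illustrative stand-ins, not the paper's actual subroutines.

```python
import random

def genetic_attack(x_orig, target, fitness, perturb, crossover,
                   S=60, G=20, seed=0):
    """Sketch of the population-based optimization in Algorithm 1."""
    rng = random.Random(seed)
    pop = [perturb(x_orig, target, rng) for _ in range(S)]
    for _ in range(G):
        scores = [fitness(x, target) for x in pop]
        i_best = max(range(S), key=lambda i: scores[i])
        if scores[i_best] > 0.5:        # predicted label == target
            return pop[i_best]
        nxt = []
        for _ in range(S):
            # Parents sampled with probability proportional to fitness.
            p1, p2 = rng.choices(pop, weights=scores, k=2)
            child = crossover(p1, p2, rng)
            nxt.append(perturb(child, target, rng))
        pop = nxt
    return None                          # failed within G generations

# Toy instance: "sentences" are bit lists, the target-label probability
# is the fraction of ones, perturb sets one random position to 1, and
# crossover samples each position uniformly from the two parents.
def toy_perturb(x, t, rng):
    j = rng.randrange(len(x))
    return [1 if i == j else b for i, b in enumerate(x)]

def toy_cross(a, b, rng):
    return [rng.choice(pair) for pair in zip(a, b)]

toy_fit = lambda x, t: sum(x) / len(x)

adv = genetic_attack([0] * 8, 1, toy_fit, toy_perturb, toy_cross)
# With these toy components, the attack usually succeeds within G
# generations; it returns None if it does not.
```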
Experiments
To evaluate our attack method, we trained models for the sentiment analysis and textual entailment classification tasks. For both models, each word in the input sentence is first projected into a fixed 300-dimensional vector space using GloVe (Pennington et al., 2014). Each of the models used is based on popular open-source benchmarks and can be found in the corresponding repositories. Model descriptions are given below.

Sentiment Analysis: We trained a sentiment analysis model using the IMDB dataset of movie reviews (Maas et al., 2011), in which […] averaged and fed to the output layer. The test accuracy of the model is 90%, which is relatively close to the state-of-the-art results on this dataset.

Sample from Table 1 — Original text: "This movie had terrible acting, terrible plot, and terrible choice of actors. (Leslie Nielsen ...come on!!!) the one part I considered slightly funny was the battling FBI/CIA agents, but because the audience was mainly kids they didn't understand that theme." Adversarial text, Prediction = Positive (Confidence = 59.8%): "This movie had horrific acting, horrific plot, and horrifying choice of actors. (Leslie Nielsen ...come on!!!) the one part I regarded slightly funny was the battling FBI/CIA agents, but because the audience was mainly youngsters they didn't understand that theme."

Textual Entailment: We trained a textual entailment model using the Stanford Natural Language Inference (SNLI) corpus (Bowman et al., 2015). The model passes the input through a ReLU "translation" layer (Bowman et al., 2015), which encodes the premise and hypothesis sentences by performing a summation over the word embeddings, concatenates the two sentence embeddings, and finally passes the output through 3 600-dimensional ReLU layers before feeding it to a 3-way softmax. The model predicts whether the premise sentence entails, contradicts or is neutral to the hypothesis sentence. The test accuracy of the model is 83%, which is also relatively close to the state-of-the-art (Chen et al., 2017b).
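The entailment architecture described above can be sketched as a forward pass. The weights here are random stand-ins and the translation-layer width is an assumption (the text does not state it), so this is only an illustration of the layer structure, not the trained model.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def snli_forward(premise_emb, hypothesis_emb, params):
    """Forward pass matching the described structure: sum word
    embeddings per sentence, ReLU "translation" layer, concatenate,
    three 600-dim ReLU layers, 3-way softmax."""
    p = relu(premise_emb.sum(axis=0) @ params["W_t"])
    h = relu(hypothesis_emb.sum(axis=0) @ params["W_t"])
    x = np.concatenate([p, h])
    for W in params["hidden"]:           # three 600-dim ReLU layers
        x = relu(x @ W)
    logits = x @ params["W_out"]         # entail / contradict / neutral
    z = np.exp(logits - logits.max())
    return z / z.sum()

rng = np.random.default_rng(0)
d = 300                                  # GloVe dimension (from the text)
params = {
    "W_t": rng.normal(size=(d, d)) * 0.05,       # width d is assumed
    "hidden": [rng.normal(size=(2 * d, 600)) * 0.05,
               rng.normal(size=(600, 600)) * 0.05,
               rng.normal(size=(600, 600)) * 0.05],
    "W_out": rng.normal(size=(600, 3)) * 0.05,
}
probs = snli_forward(rng.normal(size=(9, d)),    # 9-word premise
                     rng.normal(size=(7, d)),    # 7-word hypothesis
                     params)
```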
Attack Evaluation Results
We randomly sampled 1000 and 500 correctly classified examples from the test sets of the two tasks, respectively, to evaluate our algorithm. Correctly classified examples were chosen to prevent the accuracy levels of the victim models from confounding our results. For the sentiment analysis task, the attacker aims to divert the prediction result from positive to negative, and vice versa. For the textual entailment task, the attacker is only allowed to modify the hypothesis, and aims to divert the prediction result from 'entailment' to 'contradiction', and vice versa. We limit the attacker to a maximum of G = 20 iterations and fix the hyperparameter values to S = 60, N = 8, K = 4, and δ = 0.5. We also fix the maximum percentage of allowed changes to the document at 20% and 25% for the two tasks, respectively. If these limits were increased, the success rate would increase but the mean quality would decrease. If the attack does not succeed within the iteration limit or exceeds the specified change threshold, it is counted as a failure.
Sample outputs produced by our attack are shown in Tables 1 and 2. Additional outputs can be found in the supplementary material. Table 3 shows the attack success rate and mean percentage of modified words on each task. We compare to the Perturb baseline, which greedily applies the Perturb subroutine, to validate the use of population-based optimization. As can be seen from our results, we are able to achieve high success rate with a limited number of modifications on both tasks. In addition, the genetic algorithm significantly outperformed the Perturb baseline in both success rate and percentage of words modified, demonstrating the additional benefit yielded by using population-based optimization. Testing using a single TitanX GPU, for sentiment analysis and textual entailment, we measured average runtimes on success to be 43.5 and 5 seconds per example, respectively. The high success rate and reasonable runtimes demonstrate the practicality of our approach, even when scaling to long sentences, such as those found in the IMDB dataset.
Our success rate on textual entailment is lower due to the large disparity in sentence length. On average, hypothesis sentences in the SNLI corpus are 9 words long, which is very short compared to IMDB (229 words on average, limited to 100 for our experiments). With sentences that short, applying successful perturbations becomes much harder; however, we were still able to achieve a success rate of 70%. For the same reason, we did not apply the Perturb baseline to the textual entailment task, as it fails to achieve any success under the maximum allowed changes constraint.
User study
We performed a user study on the sentiment analysis task with 20 volunteers to evaluate how perceptible our adversarial perturbations are. Note that the number of participating volunteers is significantly larger than used in previous studies (Jia and Liang, 2017;Ebrahimi et al., 2018). The user study was composed of two parts. First, we presented 100 adversarial examples to the participants and asked them to label the sentiment of the text (i.e., positive or negative.) 92.3% of the responses matched the original text sentiment, indicating that our modification did not significantly affect human judgment on the text sentiment. Second, we prepared 100 questions, each question includes the original example and the corresponding adversarial example in a pair. Participants were asked to judge the similarity of each pair on a scale from 1 (very similar) to 4 (very different). The average rating is 2.23 ± 0.25, which shows the perceived difference is also small.
Adversarial Training
The results demonstrated in section 4.1 raise the following question: How can we defend against these attacks? We performed a preliminary experiment to see if adversarial training (Madry et al., 2018), the only effective defense in the image domain, can be used to lower the attack success rate. We generated 1000 adversarial examples on the cleanly trained sentiment analysis model using the IMDB training set, appended them to the existing training set, and used the updated dataset to adversarially train a model from scratch. We found that adversarial training provided no additional robustness benefit in our experiments using the test set, despite the fact that the model achieves near 100% accuracy classifying adversarial examples included in the training set. These results demonstrate the diversity in the perturbations generated by our attack algorithm, and illustrates the difficulty in defending against adversarial attacks. We hope these results inspire further work in increasing the robustness of natural language models.
Conclusion
We demonstrate that despite the difficulties in generating imperceptible adversarial examples in the natural language domain, semantically and syntactically similar adversarial examples can be crafted using a black-box population-based optimization algorithm, yielding success on both the sentiment analysis and textual entailment tasks. Our human study validated that the generated examples were indeed adversarial and perceptibly quite similar. We hope our work encourages researchers to pursue improving the robustness of DNNs in the natural language domain.
A Built-In Strategy to Mitigate Transgene Spreading from Genetically Modified Corn
Transgene spreading is a major concern in cultivating genetically modified (GM) corn. Cross-pollination may cause the spread of transgenes from GM cornfields to conventional fields. Occasionally, seed lot contamination, volunteers, mixing during sowing, harvest, and trade can also lead to transgene escape. Obviously, new biological confinement technologies are highly desired to mitigate transgene spreading in addition to physical separation and isolation methods. In this study, we report the development of a built-in containment method to mitigate transgene spreading in corn. In this method, an RNAi cassette for suppressing the expression of the nicosulfuron detoxifying enzyme CYP81A9 and an expression cassette for the glyphosate tolerant 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS) gene G10 were constructed and transformed into corn via Agrobacterium-mediated transformation. The GM corn plants that were generated were found to be sensitive to nicosulfuron but resistant to glyphosate, which is exactly the opposite of conventional corn. Field tests demonstrated that GM corn plants with silenced CYP81A9 could be killed by applying nicosulfuron at 40 g/ha, which is the recommended dose for weed control in cornfields. This study suggests that this built-in containment method for controlling the spread of corn transgenes is effective and easy to implement.
Introduction
Corn (Zea mays L.) is one of the three most widely grown crops worldwide, along with wheat and rice [1]. Genetically modified (GM) corn has been increasing in planting acreage since it was commercialized in 1996. GM corn was the second largest biotech crop, after GM soybean, and accounted for 35% of the total corn-planting acreage in the world in 2012 [2]. Herbicide tolerance and insect resistance are two primary input traits for GM plants. GM corn has also been used to produce pharmaceutical proteins by a technique known as molecular pharming [3].
Despite the great benefit of using transgene technology in corn, there have been considerable concerns over transgene spreading, particularly the potential for transgene contamination, which is caused by pollen-mediated gene flow [4,5]. Because corn is a wind-pollinated crop, gene flow via pollen commonly occurs. In spite of the strict management policy in transgenic cornfield tests and in commercial planting, GM corn contamination and unintended transgene spreading still occasionally occurred [6][7][8].
The contamination by GM corn in conventional cornfields often occurred due to unintended seed lot contamination, volunteers, mixing at sowing, cross-pollination, harvest, and trade [9]. In some countries, there is a demand for labeling GM corn food and feed if a threshold value is achieved, 0.9% in EU, for example. In some instances, such as seeds mixing during the sowing period and the presence of volunteers, the adventitious rates of GMO in corn could exceed the threshold value [10]. Volunteer corn could be common in temperate areas. During harvesting, some cobs, cob fragments or isolated kernels may remain in the fields. Depending on the climatic conditions and crop management, the seeds on these kernels from the previous year might germinate, and these GM plants might flower together with that season's plants [11]. Usually, transgenic plants are difficult to detect, let alone to selectively eliminate the transgenic plants from the non-transgenic ones in a large area of field crops. Furthermore, the gene flow between the transgenic crops and their wild relatives is also an important concern when GM corn is grown in its center of domestication, such as Mexico. Because insect resistant and herbicide tolerant genes confer some advantages under the pressure of insect pests and herbicides, these transgenes might accumulate in the wild relatives over time. To minimize transgene spreading and contamination, new biological confinement technologies are highly desirable in addition to physical separation and isolation measures.
To date, several biological confinement strategies have been developed for containing transgene spreading, such as plastid transformation, male sterility, and gene use restriction technologies (GURTs). Some reviews have evaluated these strategies in detail [12][13][14][15]. Plastid transformation has been successfully developed in many plants, such as tobacco, tomato and oilseed rape [16]. However, cereals are particularly recalcitrant to plastid transformation [17]. In recent years, some progress has been reported in rice [18]; however, plastid transformation in corn remains unfeasible. Cytoplasmic male sterility (CMS) was suggested as a biocontainment strategy of GM corn pollen [19]. A blend of cytoplasmic male-sterile hybrids and unrelated male-fertile hybrids, which were called Plus-Hybrids, could minimize the release of GM pollen and simultaneously increase the yield [20]. Gene use restriction technologies (GURTs) were developed to produce sterilized seeds, which germinate only when they are exposed to a specific activator molecule [21]. However, these GURTs have not been used in the field for major grain crops, such as corn and rice [12]. Most of the biological confinement mechanisms that were described above were far from being used for commercial production [22]. In addition, some of these confinement mechanisms (such as plastid transformation or male sterility) could not prevent gene spreading that resulted from seed dispersal during cultivation, harvest or transportation.
In previous reports, we described a built-in strategy for the containment of transgenic rice [23][24][25]. Such transgenic rice was created to be sensitive to bentazon by suppressing the expression of a bentazon degradation enzyme, which was encoded by a cytochrome P450 gene CYP81A6 [23]. This decontamination method could be incorporated into the rice weed control process, and thus, this method is a simple, reliable and inexpensive way to selectively kill transgenic plants in non-transgenic fields.
Nicosulfuron, a sulfonylurea herbicide developed by DuPont, has been successfully used for weed control in corn [26]. Sulfonylurea herbicides control a wide range of annual and perennial grasses and broadleaf weeds. This type of herbicide has low application rates and displays low levels of acute and chronic animal toxicity [27]. Nicosulfuron has the widest corn safety margin and the fewest sensitive varieties [28]. Tolerant corn plants detoxify nicosulfuron by cytochrome P450-mediated hydroxylation [29,30]. Previously, the cytochrome P450 enzyme that is responsible for the detoxification of nicosulfuron in corn was cloned as nsf1 (CYP81A9, GI: 195612396) by Dam et al. [31] and by our group [32].
In this study, we report a method to create glyphosate tolerant GM corn plants that are sensitive to nicosulfuron. This method was achieved by constructing a T-DNA, which consisted of two functional cassettes: one cassette contained the glyphosate tolerant 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS) gene G10, and the other was an RNA interference (RNAi) cassette, which suppressed the expression of the nicosulfuron detoxifying enzyme gene CYP81A9. We demonstrated that such transgenic corn could be selectively eliminated by spraying nicosulfuron. Based on laboratory and field test results, we concluded that any hybrid progenies that carry these transgenes could be selectively eliminated by nicosulfuron during the regular weed control process in cornfields.
Construction of a T-DNA Plasmid for Creating Terminable Transgenic Corn
A binary T-DNA transformation plasmid was built based on pCAMBIA1300. This plasmid contained an RNAi cassette targeting the corn cytochrome P450 gene CYP81A9 and an expression cassette for the glyphosate-tolerant 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS) gene G10 (GI: 8469109) (Fig. 1). The RNAi cassette included the cauliflower mosaic virus 35S promoter and a 712 bp inverted repeat sequence of the corn cytochrome P450 gene CYP81A9. The EPSPS gene G10 was originally cloned from Deinococcus radiodurans R1 and was further codon-optimized and synthesized for corn expression. The G10 expression cassette contained the following sequences: the Z. mays polyubiquitin-1 promoter (pZmUbi-1), a chloroplast transit signal peptide from the corn acetohydroxyacid synthase gene (GB: X63553.1), the codon-optimized synthetic G10 gene, and a 3' end terminator fragment from corn phosphoenolpyruvate carboxylase. The chloroplast transit signal peptide was used to direct the G10 protein to chloroplasts. The RNAi cassette was constructed in tandem with the G10 gene expression cassette inside the same T-DNA (Fig. 1).
Corn Transformation and Identification of Transgenic Plants
The corn line ''hybrid Hi-II'' was used as the recipient of the transgene, and the transformation was conducted using an Agrobacterium-mediated approach. A typical protocol for the high-efficiency Agrobacterium tumefaciens-mediated transformation of maize (Zea mays L.) was used, with only minor modifications [33]. For the transformation, embryonic tissue from seeds at 8-10 days of development was co-cultivated with the Agrobacterium tumefaciens strain LBA4404. To select transgenic events, 2 mM glyphosate was used. In total, 109 independent transgenic events comprising 375 plants were obtained. All these transgenic plants survived on rooting media containing 0.1 mM glyphosate and were all positive in the PCR detection of the G10 gene, as expected. All T0 transgenic plants were cultivated in a greenhouse and crossed with the elite corn line Zheng-58 to generate T0×Zheng-58 plants, which were used for further analysis.
Analysis of the T0×Zheng-58 Transgenic Corn Plants
The hybrid seeds harvested from T0 plants were germinated, and seedlings were grown in the greenhouse to test their sensitivity to nicosulfuron and glyphosate. Hybrids between Hi-II and Zheng-58 were used as the control plants. The T0×Zheng-58 plants were divided into two groups for the herbicide spray test. One group was sprayed with nicosulfuron at 60 mg/L, and the other was sprayed with 4 g/L glyphosate at the V4 to V5 stage. Among the 109 transgenic events generated, 35 were obviously sensitive to nicosulfuron and exhibited injuries ranging from stunting and malformation to death 10 days after the nicosulfuron spray. The other events showed no obvious damage from the nicosulfuron application.
To search for a transgenic event that was sensitive to nicosulfuron and that only had a single copy of the transgene, 5 of the 35 sensitive events were selected for Southern blot analysis. Among the 5 events that were analyzed, events R450-42, R450-58, and R450-93 had a single copy of T-DNA insertions (Fig. 2).
The plants that were killed by nicosulfuron were expected to contain the transgene, whereas the plants that were killed by glyphosate were segregates without the transgene. To verify this prediction, Western blot analysis was performed on the tested plants of the event R450-58 to detect the G10 protein. The results confirmed that the dying plants affected by nicosulfuron were the transgenic plants, whereas the surviving plants were non-transgenic segregates (Fig. 4, upper panel). The Western analysis further indicated that the plants killed by glyphosate were non-transgenic segregates, whereas the surviving plants were transgenic (Fig. 4, lower panel). These results clearly demonstrate that the transgenic corn plants generated in this study can be selectively eliminated by nicosulfuron.
RNAi Suppression of the Nicosulfuron Detoxification Gene CYP81A9
To analyze the efficiency of the suppression of CYP81A9 transcripts in transgenic corn plants, qRT-PCR was performed using the GAPDH (glyceraldehyde-3-phosphate dehydrogenase) gene as an internal control. The qRT-PCR analysis revealed that the transcript level of CYP81A9 in the event R450-58, which showed high sensitivity to nicosulfuron, decreased by 97.2% compared with its expression level in non-transgenic control plants (Fig. 5). This result indicates that the expression of the CYP81A9 gene was drastically suppressed by the introduced RNAi cassette. Similar results were also observed among other independent events that were sensitive to nicosulfuron (data not shown).
Field Test of the Terminable Transgenic Corn
A field test was performed for further analysis of the sensitivity of the T0×Zheng-58 transgenic plants to nicosulfuron and glyphosate. The plants of the event R450-58 were planted in experimental plots in the summer of 2012 in Hangzhou, Zhejiang Province, China. Non-transgenic corn plants produced by crossing Hi-II with Zheng-58 were planted along with the transgenic corn as controls. In total, 98 individual corn seedlings of R450-58 were analyzed by PCR before spraying to identify the non-transgenic segregates, which were removed from the field. The segregation ratio was approximately 1:1 (data not shown), which is consistent with a single T-DNA copy. The planting rows were divided into two parts: one part was sprayed with 40 g/ha nicosulfuron, whereas the other part was sprayed with 4 kg/ha glyphosate at the V5 stage. These application rates are typical for weed control under field conditions. Sensitivity was scored 10 days and 20 days after the herbicide treatments. All of the transgenic plants were sensitive to nicosulfuron but resistant to glyphosate. In contrast, the non-transformed control plants were resistant to nicosulfuron but sensitive to glyphosate (Fig. 6).
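The reported ~1:1 segregation can be checked against the single-copy expectation with a chi-square goodness-of-fit test. The sketch below uses hypothetical per-class counts for the 98 seedlings (the actual counts are not reported in the text) and the df = 1 critical value of 3.841 at the 0.05 level:

```python
def chi_square_1to1(n_transgenic, n_nontransgenic):
    """Chi-square goodness-of-fit statistic for an expected 1:1 ratio."""
    total = n_transgenic + n_nontransgenic
    expected = total / 2
    return sum((obs - expected) ** 2 / expected
               for obs in (n_transgenic, n_nontransgenic))

# Hypothetical counts for the 98 PCR-genotyped seedlings:
chi2 = chi_square_1to1(52, 46)
# A statistic below the df=1, alpha=0.05 critical value of 3.841 means
# the observed counts are consistent with 1:1 Mendelian segregation.
print(f"chi2 = {chi2:.3f}, consistent with 1:1: {chi2 < 3.841}")
```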
Furthermore, there were no significant morphological differences between the transgenic and non-transgenic plants in the field test. The agronomic traits of plant height, ear length, number of kernels per ear and weight per 1000 kernels were measured for the transgenic event R450-58 and the control plants. There were no significant differences (P>0.05) between the transgenic and non-transgenic plants in any of these parameters, as analyzed by Student's t-test using the DPS statistical software (Table 2). This result suggests that the suppression of CYP81A9 expression did not have statistically significant side effects on the agronomic performance of the transgenic corn in the field.
Discussion
Previously, we reported a built-in strategy for the containment of transgenes in rice [23]. This method was found to be effective in mitigating the spread of transgenes when this method was used in the development of insect-resistant and herbicide-tolerant GM rice [25].
In this study, we demonstrated that transgenic corn plants in which the nicosulfuron-detoxifying gene CYP81A9 was silenced were highly sensitive to nicosulfuron and could be selectively killed by spraying nicosulfuron at a regular application dosage. Because nicosulfuron is widely used for weed control in conventional cornfields, any transgenic corn plants developed by this method can be selectively decontaminated without extra effort or cost. Therefore, this transgene spreading control method is simple, preventive and easily incorporated into the regular weed control process.
In addition to its application in regular transgenic corn, such as transgenic insect-resistant corn, this built-in containment strategy is especially useful for the development of transgenic plants as bioreactors [23,24]. Corn has many advantages for the large-scale production of recombinant pharmaceutical proteins, making it the most widely used cereal for molecular pharming [3]. However, the risk of transgene spreading is relatively high because corn is a wind-pollinated plant. Given the nature of pharmaceutical or industrial proteins, it is particularly important to develop a reliable method for the containment of such transgenes. The method described in this report could be an ideal technology for minimizing the risk of contamination of food and feed supplies by such transgenic corn.
Construction of Binary Vector for Corn Transformation
Genomic DNA was extracted from young corn leaves according to a standard CTAB method [34]. The 712 bp fragment of CYP81A9 (GI: 195612396) DNA was obtained by PCR using the forward primer 450F (5'-CTCGAGTTCTCCATGCGCCTGGGGACC-3', with the XhoI recognition site underlined) and the reverse primer 450R1 (5'-AGATCTCAGTGATCACAGTGTCAGTGTAGAC-3', with the BglII recognition site underlined). This fragment spans positions 223 to 934 of the genomic DNA relative to the initiation codon. The other, 923 bp fragment of CYP81A9 DNA was amplified by PCR using the primer 450F and the reverse primer 450R2 (5'-AGATCTACAGACTATGTCAACATAAAGCAC-3', with the BglII recognition site underlined). This fragment spans positions 223 to 1145 of the genomic DNA relative to the initiation codon. Both PCR products were separately cloned into the pMD-T vector (Shanghai Sangon, China) and sequenced. The plasmids were digested with XhoI and BglII to release the inserts. A three-way ligation was performed to clone these two fragments into pCAMBIA1300, which was predigested with XhoI and dephosphorylated with CIAP. The resulting plasmid, which contained a 712 bp inverted repeat sequence of CYP81A9 for RNA interference, was designated p1300-450i. Essentially, the hptII gene (hygromycin resistance) in the pCAMBIA1300 vector was replaced with the 450i cassette, driven by the CaMV 35S promoter, for nicosulfuron sensitivity. The Z. mays polyubiquitin-1 promoter (pZmUbi-1) DNA fragment, including the chloroplast transit signal peptide from the acetohydroxyacid synthase of Z. mays, was generated by digesting a pMD-T plasmid containing the promoter with KpnI and BamHI. The corn codon-optimized synthetic glyphosate-resistant 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS) gene G10 (GI: 8469109), including a 3' end terminator fragment from the corn phosphoenolpyruvate carboxylase (PEPC) gene, was synthesized by Shanghai Sangon Limited, China.
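As a quick sanity check on the cloning design, the quoted primer sequences can be verified to begin with their stated restriction recognition sites. The sketch below uses the primer sequences exactly as given in the text:

```python
# Recognition sequences of the two cloning enzymes used above.
SITES = {"XhoI": "CTCGAG", "BglII": "AGATCT"}

# Primer sequences as quoted in the text (site engineered at the 5' end).
PRIMERS = {
    "450F":  ("XhoI",  "CTCGAGTTCTCCATGCGCCTGGGGACC"),
    "450R1": ("BglII", "AGATCTCAGTGATCACAGTGTCAGTGTAGAC"),
    "450R2": ("BglII", "AGATCTACAGACTATGTCAACATAAAGCAC"),
}

def check_primer(name):
    """Return True if the primer starts with its stated restriction site."""
    enzyme, seq = PRIMERS[name]
    return seq.startswith(SITES[enzyme])

for name, (enzyme, seq) in PRIMERS.items():
    status = "ok" if check_primer(name) else "MISSING SITE"
    print(f"{name}: {enzyme} {status}, length {len(seq)} nt")
```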
To facilitate the cloning, BamHI and KpnI restriction sites were introduced at the 5' and 3' ends of the G10 gene, respectively. By a three-way ligation, the restriction fragments of the ZmUbi-1 promoter fused with the signal peptide and the G10 gene were inserted into the plasmid p1300-450i, which was predigested with KpnI and dephosphorylated with CIAP. Finally, the orientation of the inserts was verified by XhoI digestion. Thus, the RNA interference cassette was constructed in tandem with the glyphosate resistance cassette inside the T-DNA. The final plasmid, p1300-450i-G10, with the two expression cassettes in the same orientation, was used for corn transformation.
Agrobacterium-mediated Corn Transformation
The plasmid p1300-450i-G10 was transformed into the Agrobacterium tumefaciens strain LBA4404 by electroporation using an Electroporator 2510 (Eppendorf) following the manufacturer's instructions. The hybrid Hi-II corn (Z. mays L.) was transformed using a standard Agrobacterium-mediated transformation method described previously [33], except that a final concentration of 2 mM glyphosate (Sigma) was used as the selection agent during callus culture and 0.1 mM glyphosate was used during the rooting stage. The independently transformed events were propagated in sterile culture and then planted in soil in the greenhouse for artificial pollinations. T0 plants were crossed with the elite line Zheng-58 to obtain the T0×Zheng-58 seeds.
Greenhouse Test
Seeds of the T0×Zheng-58 hybrids, obtained by crossing T0 plants with Zheng-58, were planted in the greenhouse. Hybrid plants from Hi-II crossed with Zheng-58 were used as controls. The number of plants evaluated for herbicide resistance varied among events according to the total number of seeds obtained from each event. Briefly, 3-4 plants were planted per pot (some seeds did not germinate), and the plants from an event occupied 2, 4, 8 or 10 pots for the herbicide spray test. Each of the transgenic events was divided into two groups for the herbicide spray test. One group of the T0×Zheng-58 plants was sprayed with nicosulfuron at 60 mg/L, whereas the other group was sprayed with 4 g/L glyphosate at the V4 to V5 leaf stage. A handheld sprayer was used to apply the herbicide formulations. Individual plants were assessed 10 days after being sprayed and assigned a visual response score from 1 to 4 (1 = dead plant, 2 = severely damaged plant, 3 = slightly damaged plant and 4 = no effect observed). Nicosulfuron (4% suspension, Ishihara Sangyo Kaisha Ltd., Japan) and Roundup (41% isopropylamine salt of glyphosate, Monsanto) were used for the herbicide spraying test.
Western Blot Analysis
A standard Western blot analysis was performed to detect the expression of G10 in both the surviving and the dying transgenic corn plants after being sprayed with 60 mg/L nicosulfuron or 4 g/L glyphosate. Samples from all tested plants were collected 7 days after the treatment. Leaf samples collected from transgenic plants as well as from non-transgenic control plants were ground to a powder in liquid nitrogen and then suspended in SDS sample buffer. After lysis and centrifugation, the protein samples were separated by 10% SDS-PAGE and then blotted onto nitrocellulose membranes. Rabbit polyclonal antibodies raised against G10 were used as the primary antibodies, and a horseradish peroxidase-conjugated goat anti-rabbit IgG (Promega) was used as the secondary antibody.
Southern Blot Analysis
For Southern blot analysis, approximately 100 μg of genomic DNA was digested with the BamHI restriction enzyme. The digested genomic DNA was size-fractionated on a 0.7% (w/v) agarose gel by electrophoresis, transferred onto a positively charged nylon membrane and cross-linked to the membrane at 121°C for 30 min. The hybridization probes specific to the G10 gene were prepared as described in the DIG System Manual (Roche). The DNA template used for producing the G10 probe was amplified by PCR using the primers G10-F (5'-CACCTTCGACGTGATCGTGCATCCA-3') and G10-R (5'-CGAGGTGAGCGAAGAACTGAGGGTAGGA-3').
Total RNA Isolation and cDNA Synthesis
To minimize dehydration- and wounding-induced gene expression, leaf samples were quickly excised, wrapped in aluminum foil, and immediately frozen in liquid nitrogen. Total RNA was extracted from 100 mg of leaves using the TRIzol reagent (Invitrogen). The resulting RNA was treated with RNase-free DNase I (Promega) to remove all genomic DNA residues. The RNA concentration and purity were determined using a NanoDrop ND-2000 spectrophotometer (Thermo Scientific). The quality of the RNA samples was also confirmed by electrophoresis in 1% agarose gels, and only intact RNA was used for the subsequent reactions. Finally, 2.5 μg of each DNase-treated RNA sample was reverse transcribed into cDNA with an oligo(dT)18 primer using a RevertAid First Strand cDNA Synthesis Kit (Fermentas). The synthesized cDNA products were diluted 1:10 in nuclease-free water before being used as templates in the qRT-PCR analysis.
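The spectrophotometric readings translate into concentration and purity via the standard conversion of 40 ng/µl per A260 unit for RNA. The absorbance values in the sketch below are hypothetical, for illustration only:

```python
def rna_quantify(a260, a280, dilution_factor=1.0):
    """Estimate RNA concentration (ng/ul) and purity from absorbance.

    Uses the standard conversion for RNA: 1 A260 unit ~ 40 ug/ml
    (equivalently 40 ng/ul)."""
    conc_ng_per_ul = a260 * 40.0 * dilution_factor
    purity = a260 / a280  # pure RNA typically gives a ratio near 2.0
    return conc_ng_per_ul, purity

# Hypothetical NanoDrop readings for one leaf RNA sample:
conc, ratio = rna_quantify(a260=0.75, a280=0.37)
print(f"{conc:.0f} ng/ul, A260/A280 = {ratio:.2f}")
```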
qRT-PCR Analysis
Two-step qRT-PCR was performed using a 2× SYBR Green Master Mix (Applied Biosystems) on an ABI 7500 Real-time PCR System according to the manufacturer's instructions. The level of the CYP81A9 transcript was normalized to the internal control, the GAPDH (glyceraldehyde-3-phosphate dehydrogenase) gene. The primers used for qRT-PCR were 5'-GCTGGCGACGAGAGCGAAAGTA-3' and 5'-ATGGCCCATTCCGTCGTGGT-3' for CYP81A9 and 5'-AGCAGGTCGAGCATCTTCG-3' and 5'-CTGTAGCCCCACTCGTTGTC-3' for GAPDH. The PCR reaction volume was 20 μl, containing 5.0 μl of 10-fold diluted cDNA template, 100 μM of each primer and 10.0 μl of 2× SYBR Mix. The cycling conditions comprised 10 minutes of polymerase activation at 95°C, followed by 40 cycles of 95°C for 15 seconds and 60°C for 1 minute. To verify that only a single product was amplified, a dissociation curve analysis was performed using the ABI Prism Dissociation Curve Analysis software. All samples were run on the same plate to avoid between-run variations. For each qRT-PCR experiment, four technical replicates were performed, and the mean values were calculated. The relative expression levels were calculated using the comparative 2^(-ΔΔCt) method [35] with GAPDH as the internal control.
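The comparative 2^(-ΔΔCt) calculation referenced above can be sketched as follows. The Ct values are hypothetical (the paper does not report raw Ct values), chosen to illustrate a knockdown of roughly the magnitude reported for R450-58:

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Comparative 2^(-ddCt) method: expression of the target gene in a
    sample relative to a calibrator, normalized to a reference gene."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical mean Ct values (CYP81A9 vs GAPDH), not taken from the paper:
rel = relative_expression(ct_target_sample=28.2, ct_ref_sample=18.0,
                          ct_target_control=23.0, ct_ref_control=18.0)
print(f"Relative CYP81A9 expression: {rel:.3f}")
# A value around 0.03 corresponds to roughly 97% transcript knockdown.
```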
Field Trials
The R450-58 plants were planted in experimental plots in the summer of 2012 in Hangzhou, Zhejiang Province, China. Non-transgenic plants, prepared by crossing Hi-II with Zheng-58, were planted along with the transgenic corn as controls. The application rates for nicosulfuron and glyphosate were 40 g/ha and 4 kg/ha, respectively. Testing plots were inspected visually 10 days and 20 days after herbicide application. The number of dead plants was recorded to determine the segregation ratios. Major agronomic traits, including plant height, ear length, number of kernels per ear and weight per 1000 kernels, were recorded from mature plants. Plant heights were measured from the soil surface to the tip of the tassel of each tested plant. Ear length and kernel number were averaged from 6 randomly selected ears each from transgenic and non-transgenic control plants.
Statistical Analysis
All data are presented as the mean ± SD (standard deviation). Student's t-test was performed to compare the differences between the transgenic and the non-transgenic corn plants; P<0.001 was considered extremely significant and P>0.05 was considered non-significant. All statistical analyses were performed using the DPS statistical software (Refine Information Tech. Inc., Hangzhou, China).
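The equal-variance (pooled) Student's t-test used for the trait comparisons can be sketched in a few lines. The plant-height values below are hypothetical illustrations, not the measured field data:

```python
import math

def students_t(sample_a, sample_b):
    """Two-sample Student's t statistic with pooled variance
    (equal-variance assumption, as in a standard t-test)."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2  # statistic and degrees of freedom

# Hypothetical plant heights (cm), 6 transgenic vs 6 control plants:
t, df = students_t([231, 228, 235, 226, 233, 229],
                   [229, 232, 227, 234, 228, 231])
# |t| below the two-sided 5% critical value (2.228 for df=10)
# indicates no significant difference between the two groups.
print(f"t = {t:.3f}, df = {df}, significant at 0.05: {abs(t) > 2.228}")
```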